Datasets:

Column schema (type, observed min–max):
- bibtex_url: string, length 41–50
- proceedings: string, length 38–47
- bibtext: string, length 709–3.56k
- abstract: string, length 17–2.11k
- authors: sequence, 1–72 items
- title: string, length 12–207
- id: string, length 7–16
- type: string, 2 distinct values
- arxiv_id: string, length 0–10
- GitHub: sequence, exactly 1 item
- paper_page: string, 276 distinct values
- n_linked_authors: int64, -1–13
- upvotes: int64, -1–14
- num_comments: int64, -1–11
- n_authors: int64, -1–44
- paper_page_exists_pre_conf: int64, 0–1
- Models: sequence, 0–100 items
- Datasets: sequence, 0–14 items
- Spaces: sequence, 0–100 items
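The column listing above can be read as a per-record schema. A minimal sketch of a validator for it (the `SCHEMA` mapping is transcribed from the listing; the `validate` helper and `sample` record are illustrative, not part of the dataset tooling):

```python
# Lightweight validator for one record of the column schema listed above.
# Field names and types come from the schema; the sample record is made up.

SCHEMA = {
    "bibtex_url": str, "proceedings": str, "bibtext": str, "abstract": str,
    "authors": list, "title": str, "id": str, "type": str, "arxiv_id": str,
    "GitHub": list, "paper_page": str, "n_linked_authors": int,
    "upvotes": int, "num_comments": int, "n_authors": int,
    "paper_page_exists_pre_conf": int, "Models": list, "Datasets": list,
    "Spaces": list,
}

def validate(record):
    """Return a list of human-readable schema violations for one record."""
    errors = []
    for field, expected in SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    return errors

# A minimal well-formed record: empty string/list or zero for every field.
sample = {f: ([] if t is list else 0 if t is int else "") for f, t in SCHEMA.items()}
print(validate(sample))  # []
```

A check for the `-1` sentinels (used below when a paper has no Hugging Face paper page) could be layered on in the same style.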
https://aclanthology.org/2023.acl-industry.34.bib
https://aclanthology.org/2023.acl-industry.34/
@inproceedings{ding-etal-2023-static, title = "A Static Evaluation of Code Completion by Large Language Models", author = "Ding, Hantian and Kumar, Varun and Tian, Yuchen and Wang, Zijian and Kwiatkowski, Rob and Li, Xiaopeng and Ramanathan, Murali Krishna and Ray, Baishakhi and Bhatia, Parminder and Sengupta, Sudipta", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.34", doi = "10.18653/v1/2023.acl-industry.34", pages = "347--360", abstract = "Large language models trained on code have shown great potential to increase productivity of software developers. Several execution-based benchmarks have been proposed to evaluate functional correctness of model-generated code on simple programming problems. Nevertheless, it is expensive to perform the same evaluation on complex real-world projects considering the execution cost. On the other hand, static analysis tools such as linters, which can detect errors without running the program, haven{'}t been well explored for evaluating code generation models. In this work, we propose a static evaluation framework to quantify static errors in Python code completions, by leveraging Abstract Syntax Trees. Compared with execution-based evaluation, our method is not only more efficient, but also applicable to code in the wild. For experiments, we collect code context from open source repos to generate one million function bodies using public models. Our static analysis reveals that Undefined Name and Unused Variable are the most common errors among others made by language models. Through extensive studies, we also show the impact of sampling temperature, model size, and context on static errors in code completions.", }
Large language models trained on code have shown great potential to increase productivity of software developers. Several execution-based benchmarks have been proposed to evaluate functional correctness of model-generated code on simple programming problems. Nevertheless, it is expensive to perform the same evaluation on complex real-world projects considering the execution cost. On the other hand, static analysis tools such as linters, which can detect errors without running the program, haven't been well explored for evaluating code generation models. In this work, we propose a static evaluation framework to quantify static errors in Python code completions, by leveraging Abstract Syntax Trees. Compared with execution-based evaluation, our method is not only more efficient, but also applicable to code in the wild. For experiments, we collect code context from open source repos to generate one million function bodies using public models. Our static analysis reveals that Undefined Name and Unused Variable are the most common errors among others made by language models. Through extensive studies, we also show the impact of sampling temperature, model size, and context on static errors in code completions.
[ "Ding, Hantian", "Kumar, Varun", "Tian, Yuchen", "Wang, Zijian", "Kwiatkowski, Rob", "Li, Xiaopeng", "Ramanathan, Murali Krishna", "Ray, Baishakhi", "Bhatia, Parminder", "Sengupta, Sudipta" ]
A Static Evaluation of Code Completion by Large Language Models
acl-industry.34
Poster
2306.03203
[ "" ]
https://huggingface.co/papers/2306.03203
3
3
0
12
1
[]
[]
[]
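The static-evaluation paper above reports Undefined Name and Unused Variable as the most common errors in generated code. Its framework is not reproduced here; as a rough, stdlib-only sketch of the underlying idea (walking a Python AST and flagging names that are read but never bound), the hypothetical helper below ignores scoping rules, which real linters handle far more carefully:

```python
import ast
import builtins

def undefined_names(source: str) -> set:
    """Toy check: names loaded somewhere in the module but never bound.
    Approximation only: treats all bindings as module-wide."""
    tree = ast.parse(source)
    bound, loaded = set(dir(builtins)), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Load):
                loaded.add(node.id)
            else:  # Store or Del contexts bind/unbind the name
                bound.add(node.id)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            bound.add(node.name)
        elif isinstance(node, ast.arg):          # function parameters
            bound.add(node.arg)
        elif isinstance(node, ast.alias):        # imports
            bound.add((node.asname or node.name).split(".")[0])
    return loaded - bound

print(undefined_names("def f(x):\n    return x + y\n"))  # {'y'}
```

Like the paper's approach, this never executes the completion, so it scales to code in the wild where tests and dependencies are unavailable.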
https://aclanthology.org/2023.acl-industry.35.bib
https://aclanthology.org/2023.acl-industry.35/
@inproceedings{ahuja-etal-2023-scalable, title = "Scalable and Safe Remediation of Defective Actions in Self-Learning Conversational Systems", author = "Ahuja, Sarthak and Kachuee, Mohammad and Sheikholeslami, Fatemeh and Liu, Weiqing and Do, Jaeyoung", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.35", doi = "10.18653/v1/2023.acl-industry.35", pages = "361--367", abstract = "Off-Policy reinforcement learning has been the driving force for the state-of-the-art conversational AIs leading to more natural human-agent interactions and improving the user satisfaction for goal-oriented agents. However, in large-scale commercial settings, it is often challenging to balance between policy improvements and experience continuity on the broad spectrum of applications handled by such system. In the literature, off-policy evaluation and guard-railing on aggregate statistics has been commonly used to address this problem. In this paper, we propose method for curating and leveraging high-precision samples sourced from historical regression incident reports to validate, safe-guard, and improve policies prior to the online deployment. We conducted extensive experiments using data from a real-world conversational system and actual regression incidents. The proposed method is currently deployed in our production system to protect customers against broken experiences and enable long-term policy improvements.", }
Off-policy reinforcement learning has been the driving force behind state-of-the-art conversational AIs, leading to more natural human-agent interactions and improved user satisfaction for goal-oriented agents. However, in large-scale commercial settings, it is often challenging to balance policy improvements against experience continuity across the broad spectrum of applications handled by such a system. In the literature, off-policy evaluation and guard-railing on aggregate statistics have been commonly used to address this problem. In this paper, we propose a method for curating and leveraging high-precision samples sourced from historical regression incident reports to validate, safeguard, and improve policies prior to online deployment. We conducted extensive experiments using data from a real-world conversational system and actual regression incidents. The proposed method is currently deployed in our production system to protect customers against broken experiences and enable long-term policy improvements.
[ "Ahuja, Sarthak", "Kachuee, Mohammad", "Sheikholeslami, Fatemeh", "Liu, Weiqing", "Do, Jaeyoung" ]
Scalable and Safe Remediation of Defective Actions in Self-Learning Conversational Systems
acl-industry.35
Poster
2305.10528
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.36.bib
https://aclanthology.org/2023.acl-industry.36/
@inproceedings{lin-etal-2023-mobilenmt, title = "{M}obile{NMT}: Enabling Translation in 15{MB} and 30ms", author = "Lin, Ye and Wang, Xiaohui and Zhang, Zhexi and Wang, Mingxuan and Xiao, Tong and Zhu, Jingbo", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.36", doi = "10.18653/v1/2023.acl-industry.36", pages = "368--378", abstract = "Deploying NMT models on mobile devices is essential for privacy, low latency, and offline scenarios. For high model capacity, NMT models are rather large. Running these models on devices is challenging with limited storage, memory, computation, and power consumption. Existing work either only focuses on a single metric such as FLOPs or general engine which is not good at auto-regressive decoding. In this paper, we present MobileNMT, a system that can translate in 15MB and 30ms on devices. We propose a series of principles for model compression when combined with quantization. Further, we implement an engine that is friendly to INT8 and decoding. With the co-design of model and engine, compared with the existing system, we speed up 47.0x and save 99.5{\%} of memory with only 11.6{\%} loss of BLEU. Our code will be publicly available after the anonymity period.", }
Deploying NMT models on mobile devices is essential for privacy, low latency, and offline scenarios. To provide high model capacity, NMT models are rather large, and running them on devices is challenging given limited storage, memory, computation, and power. Existing work either focuses only on a single metric such as FLOPs or on a general-purpose engine that is not well suited to auto-regressive decoding. In this paper, we present MobileNMT, a system that can translate in 15MB and 30ms on devices. We propose a series of principles for model compression when combined with quantization. Further, we implement an engine that is friendly to INT8 and decoding. With the co-design of model and engine, compared with the existing system, we achieve a 47.0x speedup and save 99.5% of memory with only an 11.6% loss of BLEU. Our code will be publicly available after the anonymity period.
[ "Lin, Ye", "Wang, Xiaohui", "Zhang, Zhexi", "Wang, Mingxuan", "Xiao, Tong", "Zhu, Jingbo" ]
MobileNMT: Enabling Translation in 15MB and 30ms
acl-industry.36
Poster
2306.04235
[ "https://github.com/zjersey/lightseq-arm" ]
https://huggingface.co/papers/2306.04235
1
3
0
6
1
[]
[]
[]
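MobileNMT above couples model compression with INT8-friendly inference (code linked in the GitHub field). As a generic illustration of symmetric per-tensor INT8 quantization, not their engine, a round-trip can be sketched in a few lines:

```python
def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization: map floats into [-127, 127].

    Generic textbook scheme for illustration; real engines add per-channel
    scales, calibration, and fused INT8 kernels.
    """
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # guard all-zero input
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from INT8 codes."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.003]
q, s = quantize_int8(w)
approx = dequantize(q, s)
# Reconstruction error is bounded by half a quantization step (scale / 2).
print(max(abs(a - b) for a, b in zip(w, approx)))
```

The 4x size reduction from FP32 to INT8 is what makes footprints like the paper's 15MB target plausible; the accuracy cost then depends on how the model is co-designed with the quantizer.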
https://aclanthology.org/2023.acl-industry.37.bib
https://aclanthology.org/2023.acl-industry.37/
@inproceedings{xiao-2023-multi, title = "Multi-doc Hybrid Summarization via Salient Representation Learning", author = "Xiao, Min", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.37", doi = "10.18653/v1/2023.acl-industry.37", pages = "379--389", abstract = "Multi-document summarization is gaining more and more attention recently and serves as an invaluable tool to obtain key facts among a large information pool. In this paper, we proposed a multi-document hybrid summarization approach, which simultaneously generates a human-readable summary and extracts corresponding key evidences based on multi-doc inputs. To fulfill that purpose, we crafted a salient representation learning method to induce latent salient features, which are effective for joint evidence extraction and summary generation. In order to train this model, we conducted multi-task learning to optimize a composited loss, constructed over extractive and abstractive sub-components in a hierarchical way. We implemented the system based on a ubiquiotously adopted transformer architecture and conducted experimental studies on multiple datasets across two domains, achieving superior performance over the baselines.", }
Multi-document summarization has been gaining increasing attention recently and serves as an invaluable tool for obtaining key facts from a large information pool. In this paper, we propose a multi-document hybrid summarization approach, which simultaneously generates a human-readable summary and extracts corresponding key evidence from multi-document inputs. To fulfill that purpose, we craft a salient representation learning method to induce latent salient features, which are effective for joint evidence extraction and summary generation. To train this model, we conduct multi-task learning to optimize a composite loss, constructed over the extractive and abstractive sub-components in a hierarchical way. We implement the system based on a ubiquitously adopted transformer architecture and conduct experimental studies on multiple datasets across two domains, achieving superior performance over the baselines.
[ "Xiao, Min" ]
Multi-doc Hybrid Summarization via Salient Representation Learning
acl-industry.37
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.38.bib
https://aclanthology.org/2023.acl-industry.38/
@inproceedings{qi-etal-2023-safer, title = "{S}a{FER}: A Robust and Efficient Framework for Fine-tuning {BERT}-based Classifier with Noisy Labels", author = "Qi, Zhenting and Tan, Xiaoyu and Qu, Chao and Xu, Yinghui and Qi, Yuan", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.38", doi = "10.18653/v1/2023.acl-industry.38", pages = "390--403", abstract = "Learning on noisy datasets is a challenging problem when pre-trained language models are applied to real-world text classification tasks. In numerous industrial applications, acquiring task-specific datasets with 100{\%} accurate labels is difficult, thus many datasets are accompanied by label noise at different levels. Previous work has shown that existing noise-handling methods could not improve the peak performance of BERT on noisy datasets, and might even deteriorate it. In this paper, we propose SaFER, a robust and efficient fine-tuning framework for BERT-based text classifiers, combating label noises without access to any clean data for training or validation. Utilizing a label-agnostic early-stopping strategy and self-supervised learning, our proposed framework achieves superior performance in terms of both accuracy and speed on multiple text classification benchmarks. The trained model is finally fully deployed in several industrial biomedical literature mining tasks and demonstrates high effectiveness and efficiency.", }
Learning on noisy datasets is a challenging problem when pre-trained language models are applied to real-world text classification tasks. In numerous industrial applications, acquiring task-specific datasets with 100% accurate labels is difficult; thus, many datasets are accompanied by label noise at different levels. Previous work has shown that existing noise-handling methods could not improve the peak performance of BERT on noisy datasets, and might even deteriorate it. In this paper, we propose SaFER, a robust and efficient fine-tuning framework for BERT-based text classifiers, combating label noise without access to any clean data for training or validation. Utilizing a label-agnostic early-stopping strategy and self-supervised learning, our proposed framework achieves superior performance in terms of both accuracy and speed on multiple text classification benchmarks. The trained model is now fully deployed in several industrial biomedical literature mining tasks and demonstrates high effectiveness and efficiency.
[ "Qi, Zhenting", "Tan, Xiaoyu", "Qu, Chao", "Xu, Yinghui", "Qi, Yuan" ]
SaFER: A Robust and Efficient Framework for Fine-tuning BERT-based Classifier with Noisy Labels
acl-industry.38
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.39.bib
https://aclanthology.org/2023.acl-industry.39/
@inproceedings{kim-etal-2023-chemical, title = "Chemical Language Understanding Benchmark", author = "Kim, Yunsoo and Ko, Hyuk and Lee, Jane and Heo, Hyun Young and Yang, Jinyoung and Lee, Sungsoo and Lee, Kyu-hwang", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.39", doi = "10.18653/v1/2023.acl-industry.39", pages = "404--411", abstract = "In this paper, we introduce the benchmark datasets named CLUB (Chemical Language Understanding Benchmark) to facilitate NLP research in the chemical industry. We have 4 datasets consisted of text and token classification tasks. As far as we have recognized, it is one of the first examples of chemical language understanding benchmark datasets consisted of tasks for both patent and literature articles provided by industrial organization. All the datasets are internally made by chemists from scratch. Finally, we evaluate the datasets on the various language models based on BERT and RoBERTa, and demonstrate the model performs better when the domain of the pretrained models are closer to chemistry domain. We provide baselines for our benchmark as 0.8054 in average, and we hope this benchmark is used by many researchers in both industry and academia.", }
In this paper, we introduce the benchmark datasets named CLUB (Chemical Language Understanding Benchmark) to facilitate NLP research in the chemical industry. CLUB comprises four datasets covering text and token classification tasks. To our knowledge, it is one of the first chemical language understanding benchmarks with tasks for both patent and literature articles provided by an industrial organization. All datasets were created internally from scratch by chemists. Finally, we evaluate the datasets on various language models based on BERT and RoBERTa, and demonstrate that models perform better when their pretraining domain is closer to the chemistry domain. We provide a baseline for our benchmark of 0.8054 on average, and we hope this benchmark is used by many researchers in both industry and academia.
[ "Kim, Yunsoo", "Ko, Hyuk", "Lee, Jane", "Heo, Hyun Young", "Yang, Jinyoung", "Lee, Sungsoo", "Lee, Kyu-hwang" ]
Chemical Language Understanding Benchmark
acl-industry.39
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.40.bib
https://aclanthology.org/2023.acl-industry.40/
@inproceedings{park-etal-2023-hypert5, title = "{H}yper{T}5: Towards Compute-Efficient {K}orean Language Modeling", author = "Park, Dongju and Ka, Soonwon and Yoo, Kang Min and Lee, Gichang and Kang, Jaewook", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.40", doi = "10.18653/v1/2023.acl-industry.40", pages = "412--424", abstract = "Pretraining and fine-tuning language models have become the standard practice in industrial natural language processing (NLP), but developing and deploying general-purpose language models without the abundant computation or data resources is a real-world issue faced by smaller organizations or communities whose main focus is languages with less accessible resources (e.g., non-English). This paper explores the sequence-to-sequence (seq2seq) language model architecture as a more practical and compute-efficient alternative to the decoder-oriented approach (e.g., GPT-3), accompanied by novel findings in compute-optimality analyses. We successfully trained billion-scale Korean-language seq2seq language models that strongly outperform other competitive models in Korean benchmarks. Moreover, we demonstrate that such language models can be more efficiently utilized by employing a heavy pre-finetuning strategy, by showcasing a case study on dialog-task adaptation. Our case study shows that adopting language models with more readily available domain-specific unlabeled data greatly improves fine-tuning data efficiency in low-resource settings.", }
Pretraining and fine-tuning language models have become the standard practice in industrial natural language processing (NLP), but developing and deploying general-purpose language models without the abundant computation or data resources is a real-world issue faced by smaller organizations or communities whose main focus is languages with less accessible resources (e.g., non-English). This paper explores the sequence-to-sequence (seq2seq) language model architecture as a more practical and compute-efficient alternative to the decoder-oriented approach (e.g., GPT-3), accompanied by novel findings in compute-optimality analyses. We successfully trained billion-scale Korean-language seq2seq language models that strongly outperform other competitive models in Korean benchmarks. Moreover, we demonstrate that such language models can be more efficiently utilized by employing a heavy pre-finetuning strategy, by showcasing a case study on dialog-task adaptation. Our case study shows that adopting language models with more readily available domain-specific unlabeled data greatly improves fine-tuning data efficiency in low-resource settings.
[ "Park, Dongju", "Ka, Soonwon", "Yoo, Kang Min", "Lee, Gichang", "Kang, Jaewook" ]
HyperT5: Towards Compute-Efficient Korean Language Modeling
acl-industry.40
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.41.bib
https://aclanthology.org/2023.acl-industry.41/
@inproceedings{kim-etal-2023-semantic, title = "Semantic Ambiguity Detection in Sentence Classification using Task-Specific Embeddings", author = "Kim, Jong Myoung and Lee, Young-jun and Jung, Sangkeun and Choi, Ho-jin", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.41", doi = "10.18653/v1/2023.acl-industry.41", pages = "425--437", abstract = "Ambiguity is a major obstacle to providing services based on sentence classification. However, because of the structural limitations of the service, there may not be sufficient contextual information to resolve the ambiguity. In this situation, we focus on ambiguity detection so that service design considering ambiguity is possible. We utilize similarity in a semantic space to detect ambiguity in service scenarios and training data. In addition, we apply task-specific embedding to improve performance. Our results demonstrate that ambiguities and resulting labeling errors in training data or scenarios can be detected. Additionally, we confirm that it can be used to debug services", }
Ambiguity is a major obstacle to providing services based on sentence classification. However, because of the structural limitations of the service, there may not be sufficient contextual information to resolve the ambiguity. In this situation, we focus on ambiguity detection so that service design that accounts for ambiguity becomes possible. We utilize similarity in a semantic space to detect ambiguity in service scenarios and training data. In addition, we apply task-specific embeddings to improve performance. Our results demonstrate that ambiguities, and the resulting labeling errors in training data or scenarios, can be detected. Additionally, we confirm that it can be used to debug services.
[ "Kim, Jong Myoung", "Lee, Young-jun", "Jung, Sangkeun", "Choi, Ho-jin" ]
Semantic Ambiguity Detection in Sentence Classification using Task-Specific Embeddings
acl-industry.41
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.42.bib
https://aclanthology.org/2023.acl-industry.42/
@inproceedings{rabinovich-etal-2023-reliable, title = "Reliable and Interpretable Drift Detection in Streams of Short Texts", author = "Rabinovich, Ella and Vetzler, Matan and Ackerman, Samuel and Anaby Tavor, Ateret", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.42", doi = "10.18653/v1/2023.acl-industry.42", pages = "438--446", abstract = "Data drift is the change in model input data that is one of the key factors leading to machine learning models performance degradation over time. Monitoring drift helps detecting these issues and preventing their harmful consequences. Meaningful drift interpretation is a fundamental step towards effective re-training of the model. In this study we propose an end-to-end framework for reliable model-agnostic change-point detection and interpretation in large task-oriented dialog systems, proven effective in multiple customer deployments. We evaluate our approach and demonstrate its benefits with a novel variant of intent classification training dataset, simulating customer requests to a dialog system. We make the data publicly available.", }
Data drift, the change in model input data over time, is one of the key factors leading to machine learning model performance degradation. Monitoring drift helps detect these issues and prevent their harmful consequences. Meaningful drift interpretation is a fundamental step towards effective re-training of the model. In this study, we propose an end-to-end framework for reliable, model-agnostic change-point detection and interpretation in large task-oriented dialog systems, proven effective in multiple customer deployments. We evaluate our approach and demonstrate its benefits with a novel variant of an intent classification training dataset, simulating customer requests to a dialog system. We make the data publicly available.
[ "Rabinovich, Ella", "Vetzler, Matan", "Ackerman, Samuel", "Anaby Tavor, Ateret" ]
Reliable and Interpretable Drift Detection in Streams of Short Texts
acl-industry.42
Poster
2305.17750
[ "" ]
https://huggingface.co/papers/2305.17750
0
0
0
4
1
[]
[ "ibm/clinic150-sur" ]
[]
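The drift-detection paper above monitors streams of short texts (e.g., intent-classification requests) for change points. As a toy illustration of the general idea only, comparing the word distributions of a reference window and an incoming window with total variation distance; the 0.3 threshold is arbitrary, and the paper's actual method is model-agnostic change-point detection rather than this heuristic:

```python
from collections import Counter

def tvd(window_a, window_b):
    """Total variation distance between word distributions of two text windows."""
    ca = Counter(" ".join(window_a).split())
    cb = Counter(" ".join(window_b).split())
    na, nb = sum(ca.values()), sum(cb.values())
    vocab = set(ca) | set(cb)
    # Counter returns 0 for missing words, so unseen terms contribute fully.
    return 0.5 * sum(abs(ca[w] / na - cb[w] / nb) for w in vocab)

reference = ["reset my password", "change my password"]   # historical traffic
incoming  = ["cancel my subscription", "cancel my order"]  # current traffic
drifted = tvd(reference, incoming) > 0.3  # threshold is illustrative
print(drifted)  # True
```

A production system would also need the interpretation step the paper emphasizes: surfacing which terms or intents drove the distance, so re-training can target the drifted traffic.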
https://aclanthology.org/2023.acl-industry.43.bib
https://aclanthology.org/2023.acl-industry.43/
@inproceedings{hueser-etal-2023-sharing, title = "Sharing Encoder Representations across Languages, Domains and Tasks in Large-Scale Spoken Language Understanding", author = "Hueser, Jonathan and Gaspers, Judith and Gueudre, Thomas and Prakash, Chandana and Cao, Jin and Sorokin, Daniil and Do, Quynh and Anastassacos, Nicolas and Falke, Tobias and Gojayev, Turan", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.43", doi = "10.18653/v1/2023.acl-industry.43", pages = "447--456", abstract = "Leveraging representations from pre-trained transformer-based encoders achieves state-of-the-art performance on numerous NLP tasks. Larger encoders can improve accuracy for spoken language understanding (SLU) but are challenging to use given the inference latency constraints of online systems (especially on CPU machines).We evaluate using a larger 170M parameter BERT encoder that shares representations across languages, domains and tasks for SLU compared to using smaller 17M parameter BERT encoders with language-, domain- and task-decoupled finetuning.Running inference with a larger shared encoder on GPU is latency neutral and reduces infrastructure cost compared to running inference for decoupled smaller encoders on CPU machines. The larger shared encoder reduces semantic error rates by 4.62{\%} for test sets representing user requests to voice-controlled devices and 5.79{\%} on the tail of the test sets on average across four languages.", }
Leveraging representations from pre-trained transformer-based encoders achieves state-of-the-art performance on numerous NLP tasks. Larger encoders can improve accuracy for spoken language understanding (SLU) but are challenging to use given the inference latency constraints of online systems (especially on CPU machines). We evaluate using a larger 170M-parameter BERT encoder that shares representations across languages, domains, and tasks for SLU, compared to using smaller 17M-parameter BERT encoders with language-, domain-, and task-decoupled finetuning. Running inference with a larger shared encoder on GPU is latency-neutral and reduces infrastructure cost compared to running inference for decoupled smaller encoders on CPU machines. The larger shared encoder reduces semantic error rates by 4.62% for test sets representing user requests to voice-controlled devices and by 5.79% on the tail of the test sets on average across four languages.
[ "Hueser, Jonathan", "Gaspers, Judith", "Gueudre, Thomas", "Prakash, Ch", "ana", "Cao, Jin", "Sorokin, Daniil", "Do, Quynh", "Anastassacos, Nicolas", "Falke, Tobias", "Gojayev, Turan" ]
Sharing Encoder Representations across Languages, Domains and Tasks in Large-Scale Spoken Language Understanding
acl-industry.43
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.44.bib
https://aclanthology.org/2023.acl-industry.44/
@inproceedings{tabatabaei-etal-2023-annotating, title = "Annotating Research Infrastructure in Scientific Papers: An {NLP}-driven Approach", author = "Tabatabaei, Seyed Amin and Cheirmpos, Georgios and Doornenbal, Marius and Zigoni, Alberto and Moore, Veronique and Tsatsaronis, Georgios", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.44", doi = "10.18653/v1/2023.acl-industry.44", pages = "457--463", abstract = "In this work, we present a natural language processing (NLP) pipeline for the identification, extraction and linking of Research Infrastructure (RI) used in scientific publications. Links between scientific equipment and publications where the equipment was used can support multiple use cases, such as evaluating the impact of RI investment, and supporting Open Science and research reproducibility. These links can also be used to establish a profile of the RI portfolio of each institution and associate each equipment with scientific output. The system we are describing here is already in production, and has been used to address real business use cases, some of which we discuss in this paper. The computational pipeline at the heart of the system comprises both supervised and unsupervised modules to detect the usage of research equipment by processing the full text of the articles. Additionally, we have created a knowledge graph of RI, which is utilized to annotate the articles with metadata. Finally, examples of the business value of the insights made possible by this NLP pipeline are illustrated.", }
In this work, we present a natural language processing (NLP) pipeline for the identification, extraction, and linking of Research Infrastructure (RI) used in scientific publications. Links between scientific equipment and publications where the equipment was used can support multiple use cases, such as evaluating the impact of RI investment and supporting Open Science and research reproducibility. These links can also be used to establish a profile of each institution's RI portfolio and associate each piece of equipment with scientific output. The system described here is already in production and has been used to address real business use cases, some of which we discuss in this paper. The computational pipeline at the heart of the system comprises both supervised and unsupervised modules to detect the usage of research equipment by processing the full text of the articles. Additionally, we have created a knowledge graph of RI, which is utilized to annotate the articles with metadata. Finally, we illustrate examples of the business value of the insights made possible by this NLP pipeline.
[ "Tabatabaei, Seyed Amin", "Cheirmpos, Georgios", "Doornenbal, Marius", "Zigoni, Alberto", "Moore, Veronique", "Tsatsaronis, Georgios" ]
Annotating Research Infrastructure in Scientific Papers: An NLP-driven Approach
acl-industry.44
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.45.bib
https://aclanthology.org/2023.acl-industry.45/
@inproceedings{zhang-etal-2023-event, title = "Event-Centric Query Expansion in Web Search", author = "Zhang, Yanan and Cui, Weijie and Zhang, Yangfan and Bai, Xiaoling and Zhang, Zhe and Ma, Jin and Chen, Xiang and Zhou, Tianhua", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.45", doi = "10.18653/v1/2023.acl-industry.45", pages = "464--475", abstract = "In search engines, query expansion (QE) is a crucial technique to improve search experience. Previous studies often rely on long-term search log mining, which leads to slow updates and is sub-optimal for time-sensitive news searches. In this work, we present Event-Centric Query Expansion (EQE), the QE system used in a famous Chinese search engine. EQE utilizes a novel event retrieval framework that consists of four stages, i.e., event collection, event reformulation, semantic retrieval and online ranking, which can select the best expansion from a significant amount of potential events rapidly and accurately. Specifically, we first collect and filter news headlines from websites. Then we propose a generation model that incorporates contrastive learning and prompt-tuning techniques to reformulate these headlines to concise candidates. Additionally, we fine-tune a dual-tower semantic model to serve as an encoder for event retrieval and explore a two-stage contrastive training approach to enhance the accuracy of event retrieval. Finally, we rank the retrieved events and select the optimal one as QE, which is then used to improve the retrieval of event-related documents. Through offline analysis and online A/B testing, we observed that the EQE system has significantly improved many indicators compared to the baseline. The system has been deployed in a real production environment and serves hundreds of millions of users.", }
In search engines, query expansion (QE) is a crucial technique to improve search experience. Previous studies often rely on long-term search log mining, which leads to slow updates and is sub-optimal for time-sensitive news searches. In this work, we present Event-Centric Query Expansion (EQE), the QE system used in a famous Chinese search engine. EQE utilizes a novel event retrieval framework that consists of four stages, i.e., event collection, event reformulation, semantic retrieval and online ranking, which can select the best expansion from a significant amount of potential events rapidly and accurately. Specifically, we first collect and filter news headlines from websites. Then we propose a generation model that incorporates contrastive learning and prompt-tuning techniques to reformulate these headlines to concise candidates. Additionally, we fine-tune a dual-tower semantic model to serve as an encoder for event retrieval and explore a two-stage contrastive training approach to enhance the accuracy of event retrieval. Finally, we rank the retrieved events and select the optimal one as QE, which is then used to improve the retrieval of event-related documents. Through offline analysis and online A/B testing, we observed that the EQE system has significantly improved many indicators compared to the baseline. The system has been deployed in a real production environment and serves hundreds of millions of users.
[ "Zhang, Yanan", "Cui, Weijie", "Zhang, Yangfan", "Bai, Xiaoling", "Zhang, Zhe", "Ma, Jin", "Chen, Xiang", "Zhou, Tianhua" ]
Event-Centric Query Expansion in Web Search
acl-industry.45
Poster
2305.19019
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.46.bib
https://aclanthology.org/2023.acl-industry.46/
@inproceedings{gong-etal-2023-transferable, title = "Transferable and Efficient: Unifying Dynamic Multi-Domain Product Categorization", author = "Gong, Shansan and Zhou, Zelin and Wang, Shuo and Chen, Fengjiao and Song, Xiujie and Cao, Xuezhi and Xian, Yunsen and Zhu, Kenny", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.46", doi = "10.18653/v1/2023.acl-industry.46", pages = "476--486", abstract = "As e-commerce platforms develop different business lines, a special but challenging product categorization scenario emerges, where there are multiple domain-specific category taxonomies and each of them evolves dynamically over time. In order to unify the categorization process and ensure efficiency, we propose a two-stage taxonomy-agnostic framework that relies solely on calculating the semantic relatedness between product titles and category names in the vector space. To further enhance domain transferability and better exploit cross-domain data, we design two plug-in modules: a heuristic mapping scorer and a pretrained contrastive ranking module with the help of meta concepts, which represent keyword knowledge shared across domains. Comprehensive offline experiments show that our method outperforms strong baselines on three dynamic multi-domain product categorization (DMPC) tasks, and online experiments reconfirm its efficacy with a 5{\%} increase on seasonal purchase revenue. Related datasets will be released.", }
As e-commerce platforms develop different business lines, a special but challenging product categorization scenario emerges, where there are multiple domain-specific category taxonomies and each of them evolves dynamically over time. In order to unify the categorization process and ensure efficiency, we propose a two-stage taxonomy-agnostic framework that relies solely on calculating the semantic relatedness between product titles and category names in the vector space. To further enhance domain transferability and better exploit cross-domain data, we design two plug-in modules: a heuristic mapping scorer and a pretrained contrastive ranking module with the help of meta concepts, which represent keyword knowledge shared across domains. Comprehensive offline experiments show that our method outperforms strong baselines on three dynamic multi-domain product categorization (DMPC) tasks, and online experiments reconfirm its efficacy with a 5{\%} increase on seasonal purchase revenue. Related datasets will be released.
[ "Gong, Shansan", "Zhou, Zelin", "Wang, Shuo", "Chen, Fengjiao", "Song, Xiujie", "Cao, Xuezhi", "Xian, Yunsen", "Zhu, Kenny" ]
Transferable and Efficient: Unifying Dynamic Multi-Domain Product Categorization
acl-industry.46
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.47.bib
https://aclanthology.org/2023.acl-industry.47/
@inproceedings{darm-etal-2023-discosqa, title = "{DISCOSQA}: A Knowledge Base Question Answering System for Space Debris based on Program Induction", author = "Darm, Paul and Miceli Barone, Antonio Valerio and Cohen, Shay B. and Riccardi, Annalisa", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.47", doi = "10.18653/v1/2023.acl-industry.47", pages = "487--499", abstract = "Space program agencies execute complex satellite operations that need to be supported by the technical knowledge contained in their extensive information systems. Knowledge Base (KB) databases are an effective way of storing and accessing such information to scale. In this work we present a system, developed for the European Space Agency, that can answer complex natural language queries, to support engineers in accessing the information contained in a KB that models the orbital space debris environment. Our system is based on a pipeline which first generates a program sketch from a natural language question, then specializes the sketch into a concrete query program with mentions of entities, attributes and relations, and finally executes the program against the database. This pipeline decomposition approach enables us to train the system by leveraging out-of-domain data and semi-synthetic data generated by GPT-3, thus reducing overfitting and shortcut learning even with limited amount of in-domain training data.", }
Space program agencies execute complex satellite operations that need to be supported by the technical knowledge contained in their extensive information systems. Knowledge Base (KB) databases are an effective way of storing and accessing such information to scale. In this work we present a system, developed for the European Space Agency, that can answer complex natural language queries, to support engineers in accessing the information contained in a KB that models the orbital space debris environment. Our system is based on a pipeline which first generates a program sketch from a natural language question, then specializes the sketch into a concrete query program with mentions of entities, attributes and relations, and finally executes the program against the database. This pipeline decomposition approach enables us to train the system by leveraging out-of-domain data and semi-synthetic data generated by GPT-3, thus reducing overfitting and shortcut learning even with limited amount of in-domain training data.
[ "Darm, Paul", "Miceli Barone, Antonio Valerio", "Cohen, Shay B.", "Riccardi, Annalisa" ]
DISCOSQA: A Knowledge Base Question Answering System for Space Debris based on Program Induction
acl-industry.47
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.48.bib
https://aclanthology.org/2023.acl-industry.48/
@inproceedings{zhu-etal-2023-badge, title = "{BADGE}: Speeding Up {BERT} Inference after Deployment via Block-wise Bypasses and Divergence-based Early Exiting", author = "Zhu, Wei and Wang, Peng and Ni, Yuan and Xie, Guotong and Wang, Xiaoling", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.48", doi = "10.18653/v1/2023.acl-industry.48", pages = "500--509", abstract = "Early exiting can reduce the average latency of pre-trained language models (PLMs) via its adaptive inference mechanism and work with other inference speed-up methods like model pruning, thus drawing much attention from the industry. In this work, we propose a novel framework, BADGE, which consists of two off-the-shelf methods for improving PLMs{'} early exiting. We first address the issues of training a multi-exit PLM, the backbone model for early exiting. We propose the novel architecture of block-wise bypasses, which can alleviate the conflicts in jointly training multiple intermediate classifiers and thus improve the overall performances of multi-exit PLM while introducing negligible additional flops to the model. Second, we propose a novel divergence-based early exiting (DGE) mechanism, which obtains early exiting signals by comparing the predicted distributions of two adjacent layers{'} exits. Extensive experiments on three proprietary datasets and three GLUE benchmark tasks demonstrate that our method can obtain a better speedup-performance trade-off than the existing baseline methods. (Code will be made publicly available to the research community upon acceptance.)", }
Early exiting can reduce the average latency of pre-trained language models (PLMs) via its adaptive inference mechanism and work with other inference speed-up methods like model pruning, thus drawing much attention from the industry. In this work, we propose a novel framework, BADGE, which consists of two off-the-shelf methods for improving PLMs{'} early exiting. We first address the issues of training a multi-exit PLM, the backbone model for early exiting. We propose the novel architecture of block-wise bypasses, which can alleviate the conflicts in jointly training multiple intermediate classifiers and thus improve the overall performances of multi-exit PLM while introducing negligible additional flops to the model. Second, we propose a novel divergence-based early exiting (DGE) mechanism, which obtains early exiting signals by comparing the predicted distributions of two adjacent layers{'} exits. Extensive experiments on three proprietary datasets and three GLUE benchmark tasks demonstrate that our method can obtain a better speedup-performance trade-off than the existing baseline methods. (Code will be made publicly available to the research community upon acceptance.)
[ "Zhu, Wei", "Wang, Peng", "Ni, Yuan", "Xie, Guotong", "Wang, Xiaoling" ]
BADGE: Speeding Up BERT Inference after Deployment via Block-wise Bypasses and Divergence-based Early Exiting
acl-industry.48
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.49.bib
https://aclanthology.org/2023.acl-industry.49/
@inproceedings{prieur-etal-2023-k, title = "K-pop and fake facts: from texts to smart alerting for maritime security", author = "Prieur, Maxime and Gahbiche, Souhir and Gadek, Guillaume and Gatepaille, Sylvain and Vasnier, Kilian and Justine, Valerian", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.49", doi = "10.18653/v1/2023.acl-industry.49", pages = "510--517", abstract = "Maritime security requires full-time monitoring of the situation, mainly based on technical data (radar, AIS) but also from OSINT-like inputs (e.g., newspapers). Some threats to the operational reliability of this maritime surveillance, such as malicious actors, introduce discrepancies between hard and soft data (sensors and texts), either by tweaking their AIS emitters or by emitting false information on pseudo-newspapers. Many techniques exist to identify these pieces of false information, including using knowledge base population techniques to build a structured view of the information. This paper presents a use case for suspect data identification in a maritime setting. The proposed system UMBAR ingests data from sensors and texts, processing them through an information extraction step, in order to feed a Knowledge Base and finally perform coherence checks between the extracted facts.", }
Maritime security requires full-time monitoring of the situation, mainly based on technical data (radar, AIS) but also from OSINT-like inputs (e.g., newspapers). Some threats to the operational reliability of this maritime surveillance, such as malicious actors, introduce discrepancies between hard and soft data (sensors and texts), either by tweaking their AIS emitters or by emitting false information on pseudo-newspapers. Many techniques exist to identify these pieces of false information, including using knowledge base population techniques to build a structured view of the information. This paper presents a use case for suspect data identification in a maritime setting. The proposed system UMBAR ingests data from sensors and texts, processing them through an information extraction step, in order to feed a Knowledge Base and finally perform coherence checks between the extracted facts.
[ "Prieur, Maxime", "Gahbiche, Souhir", "Gadek, Guillaume", "Gatepaille, Sylvain", "Vasnier, Kilian", "Justine, Valerian" ]
K-pop and fake facts: from texts to smart alerting for maritime security
acl-industry.49
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.50.bib
https://aclanthology.org/2023.acl-industry.50/
@inproceedings{kamalloo-etal-2023-evaluating-embedding, title = "Evaluating Embedding {API}s for Information Retrieval", author = "Kamalloo, Ehsan and Zhang, Xinyu and Ogundepo, Odunayo and Thakur, Nandan and Alfonso-hermelo, David and Rezagholizadeh, Mehdi and Lin, Jimmy", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.50", doi = "10.18653/v1/2023.acl-industry.50", pages = "518--526", abstract = "The ever-increasing size of language models curtails their widespread access to the community, thereby galvanizing many companies and startups into offering access to large language models through APIs. One particular API, suitable for dense retrieval, is the semantic embedding API that builds vector representations of a given text. With a growing number of APIs at our disposal, in this paper, our goal is to analyze semantic embedding APIs in realistic retrieval scenarios in order to assist practitioners and researchers in finding suitable services according to their needs. Specifically, we wish to investigate the capabilities of existing APIs on domain generalization and multilingual retrieval. For this purpose, we evaluate the embedding APIs on two standard benchmarks, BEIR, and MIRACL. We find that re-ranking BM25 results using the APIs is a budget-friendly approach and is most effective on English, in contrast to the standard practice, i.e., employing them as first-stage retrievers. For non-English retrieval, re-ranking still improves the results, but a hybrid model with BM25 works best albeit at a higher cost. We hope our work lays the groundwork for thoroughly evaluating APIs that are critical in search and more broadly, in information retrieval.", }
The ever-increasing size of language models curtails their widespread access to the community, thereby galvanizing many companies and startups into offering access to large language models through APIs. One particular API, suitable for dense retrieval, is the semantic embedding API that builds vector representations of a given text. With a growing number of APIs at our disposal, in this paper, our goal is to analyze semantic embedding APIs in realistic retrieval scenarios in order to assist practitioners and researchers in finding suitable services according to their needs. Specifically, we wish to investigate the capabilities of existing APIs on domain generalization and multilingual retrieval. For this purpose, we evaluate the embedding APIs on two standard benchmarks, BEIR, and MIRACL. We find that re-ranking BM25 results using the APIs is a budget-friendly approach and is most effective on English, in contrast to the standard practice, i.e., employing them as first-stage retrievers. For non-English retrieval, re-ranking still improves the results, but a hybrid model with BM25 works best albeit at a higher cost. We hope our work lays the groundwork for thoroughly evaluating APIs that are critical in search and more broadly, in information retrieval.
[ "Kamalloo, Ehsan", "Zhang, Xinyu", "Ogundepo, Odunayo", "Thakur, Nandan", "Alfonso-hermelo, David", "Rezagholizadeh, Mehdi", "Lin, Jimmy" ]
Evaluating Embedding APIs for Information Retrieval
acl-industry.50
Poster
2305.06300
[ "" ]
https://huggingface.co/papers/2305.06300
3
2
0
7
1
[]
[]
[]
https://aclanthology.org/2023.acl-industry.51.bib
https://aclanthology.org/2023.acl-industry.51/
@inproceedings{wojcik-etal-2023-domain, title = "Domain-Agnostic Neural Architecture for Class Incremental Continual Learning in Document Processing Platform", author = "W{\'o}jcik, Mateusz and Ko{\'s}ciukiewicz, Witold and Baran, Mateusz and Kajdanowicz, Tomasz and Gonczarek, Adam", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.51", doi = "10.18653/v1/2023.acl-industry.51", pages = "527--537", abstract = "Production deployments in complex systems require ML architectures to be highly efficient and usable against multiple tasks. Particularly demanding are classification problems in which data arrives in a streaming fashion and each class is presented separately. Recent methods with stochastic gradient learning have been shown to struggle in such setups or have limitations like memory buffers, and being restricted to specific domains that disable its usage in real-world scenarios. For this reason, we present a fully differentiable architecture based on the Mixture of Experts model, that enables the training of high-performance classifiers when examples from each class are presented separately. We conducted exhaustive experiments that proved its applicability in various domains and ability to learn online in production environments. The proposed technique achieves SOTA results without a memory buffer and clearly outperforms the reference methods.", }
Production deployments in complex systems require ML architectures to be highly efficient and usable against multiple tasks. Particularly demanding are classification problems in which data arrives in a streaming fashion and each class is presented separately. Recent methods with stochastic gradient learning have been shown to struggle in such setups or have limitations like memory buffers, and being restricted to specific domains that disable its usage in real-world scenarios. For this reason, we present a fully differentiable architecture based on the Mixture of Experts model, that enables the training of high-performance classifiers when examples from each class are presented separately. We conducted exhaustive experiments that proved its applicability in various domains and ability to learn online in production environments. The proposed technique achieves SOTA results without a memory buffer and clearly outperforms the reference methods.
[ "W{\\'o}jcik, Mateusz", "Ko{\\'s}ciukiewicz, Witold", "Baran, Mateusz", "Kajdanowicz, Tomasz", "Gonczarek, Adam" ]
Domain-Agnostic Neural Architecture for Class Incremental Continual Learning in Document Processing Platform
acl-industry.51
Poster
2307.05399
[ "https://github.com/mateusz-wojcik-97/domain-agnostic-architecture" ]
https://huggingface.co/papers/2307.05399
0
1
0
5
1
[]
[]
[]
https://aclanthology.org/2023.acl-industry.52.bib
https://aclanthology.org/2023.acl-industry.52/
@inproceedings{caciolai-etal-2023-regression, title = "Regression-Free Model Updates for Spoken Language Understanding", author = "Caciolai, Andrea and Weber, Verena and Falke, Tobias and Pedrani, Alessandro and Bernardi, Davide", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.52", doi = "10.18653/v1/2023.acl-industry.52", pages = "538--551", abstract = "In real-world systems, an important requirement for model updates is to avoid regressions in user experience caused by flips of previously correct classifications to incorrect ones. Multiple techniques for that have been proposed in the recent literature. In this paper, we apply one such technique, focal distillation, to model updates in a goal-oriented dialog system and assess its usefulness in practice. In particular, we evaluate its effectiveness for key language understanding tasks, including sentence classification and sequence labeling tasks, we further assess its effect when applied to repeated model updates over time, and test its compatibility with mislabeled data. Our experiments on a public benchmark and data from a deployed dialog system demonstrate that focal distillation can substantially reduce regressions, at only minor drops in accuracy, and that it further outperforms naive supervised training in challenging mislabeled data and label expansion settings.", }
In real-world systems, an important requirement for model updates is to avoid regressions in user experience caused by flips of previously correct classifications to incorrect ones. Multiple techniques for that have been proposed in the recent literature. In this paper, we apply one such technique, focal distillation, to model updates in a goal-oriented dialog system and assess its usefulness in practice. In particular, we evaluate its effectiveness for key language understanding tasks, including sentence classification and sequence labeling tasks, we further assess its effect when applied to repeated model updates over time, and test its compatibility with mislabeled data. Our experiments on a public benchmark and data from a deployed dialog system demonstrate that focal distillation can substantially reduce regressions, at only minor drops in accuracy, and that it further outperforms naive supervised training in challenging mislabeled data and label expansion settings.
[ "Caciolai, Andrea", "Weber, Verena", "Falke, Tobias", "Pedrani, Alessandro", "Bernardi, Davide" ]
Regression-Free Model Updates for Spoken Language Understanding
acl-industry.52
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.53.bib
https://aclanthology.org/2023.acl-industry.53/
@inproceedings{le-etal-2023-reducing, title = "Reducing cohort bias in natural language understanding systems with targeted self-training scheme", author = "Le, Dieu-thu and Hernandez, Gabriela and Chen, Bei and Bradford, Melanie", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.53", doi = "10.18653/v1/2023.acl-industry.53", pages = "552--560", abstract = "Bias in machine learning models can be an issue when the models are trained on particular types of data that do not generalize well, causing under performance in certain groups of users. In this work, we focus on reducing the bias related to new customers in a digital voice assistant system. It is observed that natural language understanding models often have lower performance when dealing with requests coming from new users rather than experienced users. To mitigate this problem, we propose a framework that consists of two phases (1) a fixing phase with four active learning strategies used to identify important samples coming from new users, and (2) a self training phase where a teacher model trained from the first phase is used to annotate semi-supervised samples to expand the training data with relevant cohort utterances. We explain practical strategies that involve an identification of representative cohort-based samples through density clustering as well as employing implicit customer feedbacks to improve new customers{'} experience. We demonstrate the effectiveness of our approach in a real world large scale voice assistant system for two languages, German and French through both offline experiments as well as A/B testings.", }
Bias in machine learning models can be an issue when the models are trained on particular types of data that do not generalize well, causing under performance in certain groups of users. In this work, we focus on reducing the bias related to new customers in a digital voice assistant system. It is observed that natural language understanding models often have lower performance when dealing with requests coming from new users rather than experienced users. To mitigate this problem, we propose a framework that consists of two phases (1) a fixing phase with four active learning strategies used to identify important samples coming from new users, and (2) a self training phase where a teacher model trained from the first phase is used to annotate semi-supervised samples to expand the training data with relevant cohort utterances. We explain practical strategies that involve an identification of representative cohort-based samples through density clustering as well as employing implicit customer feedbacks to improve new customers{'} experience. We demonstrate the effectiveness of our approach in a real world large scale voice assistant system for two languages, German and French through both offline experiments as well as A/B testings.
[ "Le, Dieu-thu", "Hernandez, Gabriela", "Chen, Bei", "Bradford, Melanie" ]
Reducing cohort bias in natural language understanding systems with targeted self-training scheme
acl-industry.53
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.54.bib
https://aclanthology.org/2023.acl-industry.54/
@inproceedings{mullick-etal-2023-content, title = "Content Moderation for Evolving Policies using Binary Question Answering", author = "Mullick, Sankha Subhra and Bhambhani, Mohan and Sinha, Suhit and Mathur, Akshat and Gupta, Somya and Shah, Jidnya", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.54", doi = "10.18653/v1/2023.acl-industry.54", pages = "561--573", abstract = "Content moderation on social media is governed by policies that are intricate and frequently updated with evolving world events. However, automated content moderation systems often restrict easy adaptation to policy changes and are expected to learn policy intricacies from limited amounts of labeled data, which make effective policy compliance challenging. We propose to model content moderation as a binary question answering problem where the questions validate the loosely coupled themes constituting a policy. A decision logic is applied on top to aggregate the theme-specific validations. This way the questions pass theme information to a transformer network as explicit policy prompts, that in turn enables explainability. This setting further allows for faster adaptation to policy updates by leveraging zero-shot capabilities of pre-trained transformers. We showcase improved recall for our proposed method at 95{\%} precision on two proprietary datasets of social media posts and comments respectively annotated under curated Hate Speech and Commercial Spam policies.", }
Content moderation on social media is governed by policies that are intricate and frequently updated with evolving world events. However, automated content moderation systems often restrict easy adaptation to policy changes and are expected to learn policy intricacies from limited amounts of labeled data, which make effective policy compliance challenging. We propose to model content moderation as a binary question answering problem where the questions validate the loosely coupled themes constituting a policy. A decision logic is applied on top to aggregate the theme-specific validations. This way the questions pass theme information to a transformer network as explicit policy prompts, that in turn enables explainability. This setting further allows for faster adaptation to policy updates by leveraging zero-shot capabilities of pre-trained transformers. We showcase improved recall for our proposed method at 95{\%} precision on two proprietary datasets of social media posts and comments respectively annotated under curated Hate Speech and Commercial Spam policies.
[ "Mullick, Sankha Subhra", "Bhambhani, Mohan", "Sinha, Suhit", "Mathur, Akshat", "Gupta, Somya", "Shah, Jidnya" ]
Content Moderation for Evolving Policies using Binary Question Answering
acl-industry.54
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.55.bib
https://aclanthology.org/2023.acl-industry.55/
@inproceedings{wang-etal-2023-weighted, title = "Weighted Contrastive Learning With False Negative Control to Help Long-tailed Product Classification", author = "Wang, Tianqi and Chen, Lei and Zhu, Xiaodan and Lee, Younghun and Gao, Jing", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.55", doi = "10.18653/v1/2023.acl-industry.55", pages = "574--580", abstract = "Item categorization (IC) aims to classify product descriptions into leaf nodes in a categorical taxonomy, a key technology used in a wide range of applications. Since most datasets have a long-tailed distribution, classification performance on tail labels tends to be poor due to scarce supervision, causing many issues in real-life applications. To address the IC task{'}s long-tail issue, the K-positive contrastive loss (KCL) was proposed for image classification and can be applied to the IC task when using text-based contrastive learning, e.g., SimCSE. However, one shortcoming of using KCL has been neglected in previous research: false negative (FN) instances may harm KCL{'}s representation learning. To address the FN issue in KCL, we propose to re-weight the positive pairs in the KCL loss with a regularization that constrains the sum of weights to be as close to K+1 as possible. After controlling FN instances with the proposed method, IC performance is further improved and is superior to other LT-addressing methods.", }
Item categorization (IC) aims to classify product descriptions into leaf nodes in a categorical taxonomy, a key technology used in a wide range of applications. Since most datasets have a long-tailed distribution, classification performance on tail labels tends to be poor due to scarce supervision, causing many issues in real-life applications. To address the IC task's long-tail issue, the K-positive contrastive loss (KCL) was proposed for image classification and can be applied to the IC task when using text-based contrastive learning, e.g., SimCSE. However, one shortcoming of using KCL has been neglected in previous research: false negative (FN) instances may harm KCL's representation learning. To address the FN issue in KCL, we propose to re-weight the positive pairs in the KCL loss with a regularization that constrains the sum of weights to be as close to K+1 as possible. After controlling FN instances with the proposed method, IC performance is further improved and is superior to other LT-addressing methods.
[ "Wang, Tianqi", "Chen, Lei", "Zhu, Xiaodan", "Lee, Younghun", "Gao, Jing" ]
Weighted Contrastive Learning With False Negative Control to Help Long-tailed Product Classification
acl-industry.55
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.56.bib
https://aclanthology.org/2023.acl-industry.56/
@inproceedings{bespalov-etal-2023-towards, title = "Towards Building a Robust Toxicity Predictor", author = "Bespalov, Dmitriy and Bhabesh, Sourav and Xiang, Yi and Zhou, Liutong and Qi, Yanjun", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.56", doi = "10.18653/v1/2023.acl-industry.56", pages = "581--598", abstract = "Recent NLP literature pays little attention to the robustness of toxicity language predictors, while these systems are most likely to be used in adversarial contexts. This paper presents a novel adversarial attack, \texttt{ToxicTrap}, introducing small word-level perturbations to fool SOTA text classifiers into predicting toxic text samples as benign. \texttt{ToxicTrap} exploits greedy-based search strategies to enable fast and effective generation of toxic adversarial examples. Two novel goal function designs allow \texttt{ToxicTrap} to identify weaknesses in both multiclass and multilabel toxic language detectors. Our empirical results show that SOTA toxicity text classifiers are indeed vulnerable to the proposed attacks, attaining over 98{\%} attack success rates in multilabel cases. We also show how vanilla adversarial training and its improved version can help increase the robustness of a toxicity detector even against unseen attacks.", }
Recent NLP literature pays little attention to the robustness of toxicity language predictors, while these systems are most likely to be used in adversarial contexts. This paper presents a novel adversarial attack, ToxicTrap, introducing small word-level perturbations to fool SOTA text classifiers into predicting toxic text samples as benign. ToxicTrap exploits greedy-based search strategies to enable fast and effective generation of toxic adversarial examples. Two novel goal function designs allow ToxicTrap to identify weaknesses in both multiclass and multilabel toxic language detectors. Our empirical results show that SOTA toxicity text classifiers are indeed vulnerable to the proposed attacks, attaining over 98% attack success rates in multilabel cases. We also show how vanilla adversarial training and its improved version can help increase the robustness of a toxicity detector even against unseen attacks.
[ "Bespalov, Dmitriy", "Bhabesh, Sourav", "Xiang, Yi", "Zhou, Liutong", "Qi, Yanjun" ]
Towards Building a Robust Toxicity Predictor
acl-industry.56
Poster
2404.08690
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.57.bib
https://aclanthology.org/2023.acl-industry.57/
@inproceedings{laskar-etal-2023-ai, title = "{AI} Coach Assist: An Automated Approach for Call Recommendation in Contact Centers for Agent Coaching", author = "Laskar, Md Tahmid Rahman and Chen, Cheng and Fu, Xue-yong and Azizi, Mahsa and Bhushan, Shashi and Corston-oliver, Simon", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.57", doi = "10.18653/v1/2023.acl-industry.57", pages = "599--607", abstract = "In recent years, the utilization of Artificial Intelligence (AI) in the contact center industry is on the rise. One area where AI can have a significant impact is in the coaching of contact center agents. By analyzing call transcripts, AI can quickly determine which calls are most relevant for coaching purposes, and provide relevant feedback and insights to the contact center manager or supervisor. In this paper, we present {``}AI Coach Assist{''}, which leverages pre-trained transformer-based language models to determine whether a given call is coachable or not based on the quality assurance (QA) queries/questions asked by the contact center managers or supervisors. The system was trained and evaluated on a large dataset collected from real-world contact centers and provides an efficient and effective way to determine which calls are most relevant for coaching purposes. Extensive experimental evaluation demonstrates the potential of AI Coach Assist to improve the coaching process, enhancing the performance of contact center agents.", }
In recent years, the utilization of Artificial Intelligence (AI) in the contact center industry is on the rise. One area where AI can have a significant impact is in the coaching of contact center agents. By analyzing call transcripts, AI can quickly determine which calls are most relevant for coaching purposes, and provide relevant feedback and insights to the contact center manager or supervisor. In this paper, we present "AI Coach Assist", which leverages pre-trained transformer-based language models to determine whether a given call is coachable or not based on the quality assurance (QA) queries/questions asked by the contact center managers or supervisors. The system was trained and evaluated on a large dataset collected from real-world contact centers and provides an efficient and effective way to determine which calls are most relevant for coaching purposes. Extensive experimental evaluation demonstrates the potential of AI Coach Assist to improve the coaching process, enhancing the performance of contact center agents.
[ "Laskar, Md Tahmid Rahman", "Chen, Cheng", "Fu, Xue-yong", "Azizi, Mahsa", "Bhushan, Shashi", "Corston-oliver, Simon" ]
AI Coach Assist: An Automated Approach for Call Recommendation in Contact Centers for Agent Coaching
acl-industry.57
Poster
2305.17619
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.58.bib
https://aclanthology.org/2023.acl-industry.58/
@inproceedings{zhou-etal-2023-unified, title = "Unified Contextual Query Rewriting", author = "Zhou, Yingxue and Hao, Jie and Rungta, Mukund and Liu, Yang and Cho, Eunah and Fan, Xing and Lu, Yanbin and Vasudevan, Vishal and Gillespie, Kellen and Raeesy, Zeynab", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.58", doi = "10.18653/v1/2023.acl-industry.58", pages = "608--615", abstract = "Query rewriting (QR) is an important technique for user friction reduction (i.e. recovering ASR or system errors) and contextual carryover (i.e. ellipsis and co-reference) in conversational AI systems. Recently, generation-based QR models have achieved promising results on these two tasks separately. Although these two tasks share many similarities, such as both using the previous dialogue along with the current request as model input, there is no unified model to solve them jointly. To this end, we propose a unified contextual query rewriting model that unifies QR for both friction reduction and contextual carryover. Moreover, we involve multiple auxiliary tasks such as trigger prediction and NLU interpretation to boost the performance of the rewrite. We leverage the text-to-text unified framework, which uses independent tasks with weighted loss to account for task importance. Then we propose new unified multitask learning strategies, including a sequential model that outputs one sentence for multiple tasks, and a hybrid model where some tasks are independent and some tasks are sequentially generated. Our experimental results demonstrate the effectiveness of the proposed unified learning methods.", }
Query rewriting (QR) is an important technique for user friction reduction (i.e. recovering ASR or system errors) and contextual carryover (i.e. ellipsis and co-reference) in conversational AI systems. Recently, generation-based QR models have achieved promising results on these two tasks separately. Although these two tasks share many similarities, such as both using the previous dialogue along with the current request as model input, there is no unified model to solve them jointly. To this end, we propose a unified contextual query rewriting model that unifies QR for both friction reduction and contextual carryover. Moreover, we involve multiple auxiliary tasks such as trigger prediction and NLU interpretation to boost the performance of the rewrite. We leverage the text-to-text unified framework, which uses independent tasks with weighted loss to account for task importance. Then we propose new unified multitask learning strategies, including a sequential model that outputs one sentence for multiple tasks, and a hybrid model where some tasks are independent and some tasks are sequentially generated. Our experimental results demonstrate the effectiveness of the proposed unified learning methods.
[ "Zhou, Yingxue", "Hao, Jie", "Rungta, Mukund", "Liu, Yang", "Cho, Eunah", "Fan, Xing", "Lu, Yanbin", "Vasudevan, Vishal", "Gillespie, Kellen", "Raeesy, Zeynab" ]
Unified Contextual Query Rewriting
acl-industry.58
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.59.bib
https://aclanthology.org/2023.acl-industry.59/
@inproceedings{zuo-etal-2023-context, title = "Context-Aware Query Rewriting for Improving Users{'} Search Experience on {E}-commerce Websites", author = "Zuo, Simiao and Yin, Qingyu and Jiang, Haoming and Xi, Shaohui and Yin, Bing and Zhang, Chao and Zhao, Tuo", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.59", doi = "10.18653/v1/2023.acl-industry.59", pages = "616--628", abstract = "E-commerce queries are often short and ambiguous. Consequently, query understanding often uses query rewriting to disambiguate user-input queries. While using e-commerce search tools, users tend to enter multiple searches, which we call context, before purchasing. These history searches contain contextual insights about users{'} true shopping intents. Therefore, modeling such contextual information is critical to a better query rewriting model. However, existing query rewriting models ignore users{'} history behaviors and consider only the instant search query, which is often a short string offering limited information about the true shopping intent. We propose an end-to-end context-aware query rewriting model to bridge this gap, which takes the search context into account. Specifically, our model builds a session graph using the history search queries and their contained words. We then employ a graph attention mechanism that models cross-query relations and computes contextual information of the session. The model subsequently calculates session representations by combining the contextual information with the instant search query using an aggregation network. The session representations are then decoded to generate rewritten queries. Empirically, we demonstrate the superiority of our method to state-of-the-art approaches under various metrics.", }
E-commerce queries are often short and ambiguous. Consequently, query understanding often uses query rewriting to disambiguate user-input queries. While using e-commerce search tools, users tend to enter multiple searches, which we call context, before purchasing. These history searches contain contextual insights about users' true shopping intents. Therefore, modeling such contextual information is critical to a better query rewriting model. However, existing query rewriting models ignore users' history behaviors and consider only the instant search query, which is often a short string offering limited information about the true shopping intent. We propose an end-to-end context-aware query rewriting model to bridge this gap, which takes the search context into account. Specifically, our model builds a session graph using the history search queries and their contained words. We then employ a graph attention mechanism that models cross-query relations and computes contextual information of the session. The model subsequently calculates session representations by combining the contextual information with the instant search query using an aggregation network. The session representations are then decoded to generate rewritten queries. Empirically, we demonstrate the superiority of our method to state-of-the-art approaches under various metrics.
[ "Zuo, Simiao", "Yin, Qingyu", "Jiang, Haoming", "Xi, Shaohui", "Yin, Bing", "Zhang, Chao", "Zhao, Tuo" ]
Context-Aware Query Rewriting for Improving Users' Search Experience on E-commerce Websites
acl-industry.59
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.60.bib
https://aclanthology.org/2023.acl-industry.60/
@inproceedings{xu-etal-2023-federated, title = "Federated Learning of Gboard Language Models with Differential Privacy", author = "Xu, Zheng and Zhang, Yanxiang and Andrew, Galen and Choquette, Christopher and Kairouz, Peter and Mcmahan, Brendan and Rosenstock, Jesse and Zhang, Yuanbo", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.60", doi = "10.18653/v1/2023.acl-industry.60", pages = "629--639", abstract = "We train and deploy language models (LMs) with federated learning (FL) and differential privacy (DP) in Google Keyboard (Gboard). The recent DP-Follow the Regularized Leader (DP-FTRL) algorithm is applied to achieve meaningfully formal DP guarantees without requiring uniform sampling of clients. To provide favorable privacy-utility trade-offs, we introduce a new client participation criterion and discuss the implication of its configuration in large scale systems. We show how quantile-based clip estimation can be combined with DP-FTRL to adaptively choose the clip norm during training or reduce the hyperparameter tuning in preparation of training. With the help of pretraining on public data, we trained and deployed more than fifteen Gboard LMs that achieve high utility and $\rho$-zCDP privacy guarantees with $\rho \in (0.3, 2)$, with one model additionally trained with secure aggregation. We summarize our experience and provide concrete suggestions on DP training for practitioners.", }
We train and deploy language models (LMs) with federated learning (FL) and differential privacy (DP) in Google Keyboard (Gboard). The recent DP-Follow the Regularized Leader (DP-FTRL) algorithm is applied to achieve meaningfully formal DP guarantees without requiring uniform sampling of clients. To provide favorable privacy-utility trade-offs, we introduce a new client participation criterion and discuss the implication of its configuration in large scale systems. We show how quantile-based clip estimation can be combined with DP-FTRL to adaptively choose the clip norm during training or reduce the hyperparameter tuning in preparation of training. With the help of pretraining on public data, we trained and deployed more than fifteen Gboard LMs that achieve high utility and ρ-zCDP privacy guarantees with ρ ∈ (0.3, 2), with one model additionally trained with secure aggregation. We summarize our experience and provide concrete suggestions on DP training for practitioners.
[ "Xu, Zheng", "Zhang, Yanxiang", "Andrew, Galen", "Choquette, Christopher", "Kairouz, Peter", "Mcmahan, Brendan", "Rosenstock, Jesse", "Zhang, Yuanbo" ]
Federated Learning of Gboard Language Models with Differential Privacy
acl-industry.60
Poster
2305.18465
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.61.bib
https://aclanthology.org/2023.acl-industry.61/
@inproceedings{ghosh-etal-2023-radling, title = "{R}ad{L}ing: Towards Efficient Radiology Report Understanding", author = "Ghosh, Rikhiya and Farri, Oladimeji and Karn, Sanjeev Kumar and Danu, Manuela and Vunikili, Ramya and Micu, Larisa", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.61", doi = "10.18653/v1/2023.acl-industry.61", pages = "640--651", abstract = "Most natural language tasks in the radiology domain use language models pre-trained on biomedical corpora. There are few pretrained language models trained specifically for radiology, and fewer still that have been trained in a low data setting and gone on to produce comparable results in fine-tuning tasks. We present RadLing, a continuously pretrained language model using the ELECTRA-small architecture, trained on over 500K radiology reports, that can compete with state-of-the-art results on fine-tuning tasks in the radiology domain. Our main contribution in this paper is knowledge-aware masking, a taxonomic knowledge-assisted pre-training task that dynamically masks tokens to inject knowledge during pretraining. In addition, we also introduce a knowledge base-aided vocabulary extension to adapt the general tokenization vocabulary to the radiology domain.", }
Most natural language tasks in the radiology domain use language models pre-trained on biomedical corpora. There are few pretrained language models trained specifically for radiology, and fewer still that have been trained in a low data setting and gone on to produce comparable results in fine-tuning tasks. We present RadLing, a continuously pretrained language model using the ELECTRA-small architecture, trained on over 500K radiology reports, that can compete with state-of-the-art results on fine-tuning tasks in the radiology domain. Our main contribution in this paper is knowledge-aware masking, a taxonomic knowledge-assisted pre-training task that dynamically masks tokens to inject knowledge during pretraining. In addition, we also introduce a knowledge base-aided vocabulary extension to adapt the general tokenization vocabulary to the radiology domain.
[ "Ghosh, Rikhiya", "Farri, Oladimeji", "Karn, Sanjeev Kumar", "Danu, Manuela", "Vunikili, Ramya", "Micu, Larisa" ]
RadLing: Towards Efficient Radiology Report Understanding
acl-industry.61
Poster
2306.02492
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.62.bib
https://aclanthology.org/2023.acl-industry.62/
@inproceedings{manderscheid-lee-2023-predicting, title = "Predicting Customer Satisfaction with Soft Labels for Ordinal Classification", author = "Manderscheid, Etienne and Lee, Matthias", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.62", doi = "10.18653/v1/2023.acl-industry.62", pages = "652--659", abstract = "In a typical call center, only up to 8{\%} of callers leave a Customer Satisfaction (CSAT) survey response at the end of the call, and these tend to be customers with strongly positive or negative experiences. To manage this data sparsity and response bias, we outline a predictive CSAT deep learning algorithm that infers CSAT on the 1-5 scale on inbound calls to the call center with minimal latency. The key metric to maximize is the precision for CSAT = 1 (lowest CSAT). We maximize this metric in two ways. First, reframing the problem as a binary-class, rather than five-class, problem during model fine-tuning, and then mapping binary outcomes back to five classes using temperature-scaled model probabilities. Second, using soft labels to represent the classes. The result is a production model able to support key customer workflows with high accuracy over millions of calls a month.", }
In a typical call center, only up to 8% of callers leave a Customer Satisfaction (CSAT) survey response at the end of the call, and these tend to be customers with strongly positive or negative experiences. To manage this data sparsity and response bias, we outline a predictive CSAT deep learning algorithm that infers CSAT on the 1-5 scale on inbound calls to the call center with minimal latency. The key metric to maximize is the precision for CSAT = 1 (lowest CSAT). We maximize this metric in two ways. First, reframing the problem as a binary-class, rather than five-class, problem during model fine-tuning, and then mapping binary outcomes back to five classes using temperature-scaled model probabilities. Second, using soft labels to represent the classes. The result is a production model able to support key customer workflows with high accuracy over millions of calls a month.
[ "M", "erscheid, Etienne", "Lee, Matthias" ]
Predicting Customer Satisfaction with Soft Labels for Ordinal Classification
acl-industry.62
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.63.bib
https://aclanthology.org/2023.acl-industry.63/
@inproceedings{wang-etal-2023-accurate, title = "Accurate Training of Web-based Question Answering Systems with Feedback from Ranked Users", author = "Wang, Liang and Lauriola, Ivano and Moschitti, Alessandro", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.63", doi = "10.18653/v1/2023.acl-industry.63", pages = "660--667", abstract = "Recent work has shown that large-scale annotated datasets are essential for training state-of-the-art Question Answering (QA) models. Unfortunately, creating this data is expensive and requires a huge amount of annotation work. An alternative and cheaper source of supervision is given by feedback data collected from deployed QA systems. This data can be collected from tens of millions of users at no additional cost for real-world QA services, e.g., Alexa and Google Home. The main drawback is the noise affecting feedback on individual examples. Recent literature on QA systems has shown the benefit of training models even with noisy feedback. However, these studies have multiple limitations: (i) they used uniform random noise to simulate feedback responses, which is typically an unrealistic approximation as noise follows specific patterns, depending on target examples and users; and (ii) they do not show how to aggregate feedback for improving training signals. In this paper, we first collect a large-scale (16M) QA dataset with real feedback sampled from the QA traffic of a popular Virtual Assistant. Second, we use this data to develop two strategies for filtering unreliable users and thus de-noise feedback: (i) ranking users with an automatic classifier, and (ii) aggregating feedback over similar instances and comparing users between each other. Finally, we train QA models on our filtered feedback data, showing a significant improvement over the state of the art.", }
Recent work has shown that large-scale annotated datasets are essential for training state-of-the-art Question Answering (QA) models. Unfortunately, creating this data is expensive and requires a huge amount of annotation work. An alternative and cheaper source of supervision is given by feedback data collected from deployed QA systems. This data can be collected from tens of millions of users at no additional cost for real-world QA services, e.g., Alexa and Google Home. The main drawback is the noise affecting feedback on individual examples. Recent literature on QA systems has shown the benefit of training models even with noisy feedback. However, these studies have multiple limitations: (i) they used uniform random noise to simulate feedback responses, which is typically an unrealistic approximation as noise follows specific patterns, depending on target examples and users; and (ii) they do not show how to aggregate feedback for improving training signals. In this paper, we first collect a large-scale (16M) QA dataset with real feedback sampled from the QA traffic of a popular Virtual Assistant. Second, we use this data to develop two strategies for filtering unreliable users and thus de-noise feedback: (i) ranking users with an automatic classifier, and (ii) aggregating feedback over similar instances and comparing users between each other. Finally, we train QA models on our filtered feedback data, showing a significant improvement over the state of the art.
[ "Wang, Liang", "Lauriola, Ivano", "Moschitti, Aless", "ro" ]
Accurate Training of Web-based Question Answering Systems with Feedback from Ranked Users
acl-industry.63
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.64.bib
https://aclanthology.org/2023.acl-industry.64/
@inproceedings{jiang-etal-2023-spm, title = "{SPM}: A Split-Parsing Method for Joint Multi-Intent Detection and Slot Filling", author = "Jiang, Sheng and Zhu, Su and Cao, Ruisheng and Miao, Qingliang and Yu, Kai", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.64", doi = "10.18653/v1/2023.acl-industry.64", pages = "668--675", abstract = "In a task-oriented dialogue system, joint intent detection and slot filling for multi-intent utterances become meaningful since users tend to query more. The current state-of-the-art studies choose to process multi-intent utterances through a single joint model of sequence labelling and multi-label classification, which cannot generalize to utterances with more intents than training samples. Meanwhile, it lacks the ability to assign slots to each corresponding intent. To overcome these problems, we propose a Split-Parsing Method (SPM) for joint multiple intent detection and slot filling, which is a two-stage method. It first splits an input sentence into multiple sub-sentences, each containing a single intent, and then a joint single intent detection and slot filling model is applied to parse each sub-sentence recurrently. Finally, we integrate the parsed results. The sub-sentence split task is also treated as a sequence labelling problem with only one entity label, which can effectively generalize to sentences with more intents than those seen in the training set. Experimental results on three multi-intent datasets show that our method obtains substantial improvements over different baselines.", }
In a task-oriented dialogue system, joint intent detection and slot filling for multi-intent utterances become meaningful since users tend to query more. The current state-of-the-art studies choose to process multi-intent utterances through a single joint model of sequence labelling and multi-label classification, which cannot generalize to utterances with more intents than training samples. Meanwhile, it lacks the ability to assign slots to each corresponding intent. To overcome these problems, we propose a Split-Parsing Method (SPM) for joint multiple intent detection and slot filling, which is a two-stage method. It first splits an input sentence into multiple sub-sentences, each containing a single intent, and then a joint single intent detection and slot filling model is applied to parse each sub-sentence recurrently. Finally, we integrate the parsed results. The sub-sentence split task is also treated as a sequence labelling problem with only one entity label, which can effectively generalize to sentences with more intents than those seen in the training set. Experimental results on three multi-intent datasets show that our method obtains substantial improvements over different baselines.
[ "Jiang, Sheng", "Zhu, Su", "Cao, Ruisheng", "Miao, Qingliang", "Yu, Kai" ]
SPM: A Split-Parsing Method for Joint Multi-Intent Detection and Slot Filling
acl-industry.64
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.65.bib
https://aclanthology.org/2023.acl-industry.65/
@inproceedings{zhang-etal-2023-nag, title = "{NAG}-{NER}: a Unified Non-Autoregressive Generation Framework for Various {NER} Tasks", author = "Zhang, Xinpeng and Tan, Ming and Zhang, Jingfan and Zhu, Wei", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.65", doi = "10.18653/v1/2023.acl-industry.65", pages = "676--686", abstract = "Recently, the recognition of flat, nested, and discontinuous entities by a unified generative model framework has received increasing attention both in the research field and industry. However, the current generative NER methods force the entities to be generated in a predefined order, suffering from error propagation and inefficient decoding. In this work, we propose a unified non-autoregressive generation (NAG) framework for general NER tasks, referred to as NAG-NER. First, we propose to generate entities as a set instead of a sequence, avoiding error propagation. Second, we propose incorporating NAG in NER tasks for efficient decoding by treating each entity as a target sequence. Third, to enhance the generation performances of the NAG decoder, we employ the NAG encoder to detect potential entity mentions. Extensive experiments show that our NAG-NER model outperforms the state-of-the-art generative NER models on three benchmark NER datasets of different types and two of our proprietary NER tasks.{\textbackslash}footnote{Code will be publicly available to the research community upon acceptance.}", }
Recently, the recognition of flat, nested, and discontinuous entities by a unified generative model framework has received increasing attention both in the research field and industry. However, the current generative NER methods force the entities to be generated in a predefined order, suffering from error propagation and inefficient decoding. In this work, we propose a unified non-autoregressive generation (NAG) framework for general NER tasks, referred to as NAG-NER. First, we propose to generate entities as a set instead of a sequence, avoiding error propagation. Second, we propose incorporating NAG in NER tasks for efficient decoding by treating each entity as a target sequence. Third, to enhance the generation performances of the NAG decoder, we employ the NAG encoder to detect potential entity mentions. Extensive experiments show that our NAG-NER model outperforms the state-of-the-art generative NER models on three benchmark NER datasets of different types and two of our proprietary NER tasks. Code will be publicly available to the research community upon acceptance.
[ "Zhang, Xinpeng", "Tan, Ming", "Zhang, Jingfan", "Zhu, Wei" ]
NAG-NER: a Unified Non-Autoregressive Generation Framework for Various NER Tasks
acl-industry.65
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.66.bib
https://aclanthology.org/2023.acl-industry.66/
@inproceedings{kakkar-etal-2023-search, title = "Search Query Spell Correction with Weak Supervision in {E}-commerce", author = "Kakkar, Vishal and Sharma, Chinmay and Pande, Madhura and Kumar, Surender", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.66", doi = "10.18653/v1/2023.acl-industry.66", pages = "687--694", abstract = "Misspelled search queries in e-commerce can lead to empty or irrelevant products. Besides inadvertent typing mistakes, most spell mistakes occur because the user does not know the correct spelling, hence typing it as it is pronounced colloquially. This colloquial typing creates countless misspelling patterns for a single correct query. In this paper, we first systematically analyze and group different spell errors into error classes and then leverage the state-of-the-art Transformer model for contextual spell correction. We overcome the constraint of limited human labelled data by proposing novel synthetic data generation techniques for voluminous generation of training pairs needed by data hungry Transformers, without any human intervention. We further utilize weakly supervised data coupled with curriculum learning strategies to improve on tough spell mistakes without regressing on the easier ones. We show significant improvements from our model on human labeled data and online A/B experiments against multiple state-of-art models.", }
Misspelled search queries in e-commerce can lead to empty or irrelevant products. Besides inadvertent typing mistakes, most spell mistakes occur because the user does not know the correct spelling, hence typing it as it is pronounced colloquially. This colloquial typing creates countless misspelling patterns for a single correct query. In this paper, we first systematically analyze and group different spell errors into error classes and then leverage the state-of-the-art Transformer model for contextual spell correction. We overcome the constraint of limited human labelled data by proposing novel synthetic data generation techniques for voluminous generation of training pairs needed by data hungry Transformers, without any human intervention. We further utilize weakly supervised data coupled with curriculum learning strategies to improve on tough spell mistakes without regressing on the easier ones. We show significant improvements from our model on human labeled data and online A/B experiments against multiple state-of-art models.
[ "Kakkar, Vishal", "Sharma, Chinmay", "Pande, Madhura", "Kumar, Surender" ]
Search Query Spell Correction with Weak Supervision in E-commerce
acl-industry.66
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.67.bib
https://aclanthology.org/2023.acl-industry.67/
@inproceedings{rajakumar-kalarani-etal-2023-lets, title = "{``}Let{'}s not Quote out of Context{''}: Unified Vision-Language Pretraining for Context Assisted Image Captioning", author = "Rajakumar Kalarani, Abisek and Bhattacharyya, Pushpak and Chhaya, Niyati and Shekhar, Sumit", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.67", doi = "10.18653/v1/2023.acl-industry.67", pages = "695--706", abstract = "Well-formed context aware image captions and tags in enterprise content such as marketing material are critical to ensure their brand presence and content recall. Manual creation and updates to ensure the same is non trivial given the scale and the tedium towards this task. We propose a new unified Vision-Language (VL) model based on the One For All (OFA) model, with a focus on context-assisted image captioning where the caption is generated based on both the image and its context. Our approach aims to overcome the context-independent (image and text are treated independently) nature of the existing approaches. We exploit context by pretraining our model with datasets of three tasks- news image captioning where the news article is the context, contextual visual entailment, and keyword extraction from the context. The second pretraining task is a new VL task, and we construct and release two datasets for the task with 1.1M and 2.2K data instances. Our system achieves state-of-the-art results with an improvement of up to 8.34 CIDEr score on the benchmark news image captioning datasets. To the best of our knowledge, ours is the first effort at incorporating contextual information in pretraining the models for the VL tasks.", }
Well-formed context aware image captions and tags in enterprise content such as marketing material are critical to ensure their brand presence and content recall. Manual creation and updates to ensure the same is non trivial given the scale and the tedium towards this task. We propose a new unified Vision-Language (VL) model based on the One For All (OFA) model, with a focus on context-assisted image captioning where the caption is generated based on both the image and its context. Our approach aims to overcome the context-independent (image and text are treated independently) nature of the existing approaches. We exploit context by pretraining our model with datasets of three tasks- news image captioning where the news article is the context, contextual visual entailment, and keyword extraction from the context. The second pretraining task is a new VL task, and we construct and release two datasets for the task with 1.1M and 2.2K data instances. Our system achieves state-of-the-art results with an improvement of up to 8.34 CIDEr score on the benchmark news image captioning datasets. To the best of our knowledge, ours is the first effort at incorporating contextual information in pretraining the models for the VL tasks.
[ "Rajakumar Kalarani, Abisek", "Bhattacharyya, Pushpak", "Chhaya, Niyati", "Shekhar, Sumit" ]
“Let's not Quote out of Context”: Unified Vision-Language Pretraining for Context Assisted Image Captioning
acl-industry.67
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.68.bib
https://aclanthology.org/2023.acl-industry.68/
@inproceedings{kwon-etal-2023-ground, title = "What, When, and How to Ground: Designing User Persona-Aware Conversational Agents for Engaging Dialogue", author = "Kwon, Deuksin and Lee, Sunwoo and Kim, Ki Hyun and Lee, Seojin and Kim, Taeyoon and Davis, Eric", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.68", doi = "10.18653/v1/2023.acl-industry.68", pages = "707--719", abstract = "This paper presents a method for building a personalized open-domain dialogue system to address the WWH (WHAT, WHEN, and HOW) problem for natural response generation in a commercial setting, where personalized dialogue responses are heavily interleaved with casual response turns. The proposed approach involves weighted dataset blending, negative persona information augmentation methods, and the design of personalized conversation datasets to address the challenges of WWH in personalized, open-domain dialogue systems. Our work effectively balances dialogue fluency and tendency to ground, while also introducing a response-type label to improve the controllability and explainability of the grounded responses. The combination of these methods leads to more fluent conversations, as evidenced by subjective human evaluations as well as objective evaluations.", }
This paper presents a method for building a personalized open-domain dialogue system to address the WWH (WHAT, WHEN, and HOW) problem for natural response generation in a commercial setting, where personalized dialogue responses are heavily interleaved with casual response turns. The proposed approach involves weighted dataset blending, negative persona information augmentation methods, and the design of personalized conversation datasets to address the challenges of WWH in personalized, open-domain dialogue systems. Our work effectively balances dialogue fluency and tendency to ground, while also introducing a response-type label to improve the controllability and explainability of the grounded responses. The combination of these methods leads to more fluent conversations, as evidenced by subjective human evaluations as well as objective evaluations.
[ "Kwon, Deuksin", "Lee, Sunwoo", "Kim, Ki Hyun", "Lee, Seojin", "Kim, Taeyoon", "Davis, Eric" ]
What, When, and How to Ground: Designing User Persona-Aware Conversational Agents for Engaging Dialogue
acl-industry.68
Poster
2306.03361
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.69.bib
https://aclanthology.org/2023.acl-industry.69/
@inproceedings{bhattacharya-etal-2023-cupid, title = "{CUPID}: Curriculum Learning Based Real-Time Prediction using Distillation", author = "Bhattacharya, Arindam and Ms, Ankith and Gandhi, Ankit and Huddar, Vijay and Saroop, Atul and Bhagat, Rahul", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.69", doi = "10.18653/v1/2023.acl-industry.69", pages = "720--728", abstract = "Relevance in E-commerce Product Search is crucial for providing customers with accurate results that match their query intent. With recent advancements in NLP and Deep Learning, Transformers have become the default choice for relevance classification tasks. In such a setting, the relevance model uses query text and product title as input features, and estimates if the product is relevant for the customer query. While cross-attention in Transformers enables a more accurate relevance prediction in such a setting, its high evaluation latency makes it unsuitable for real-time predictions in which thousands of products must be evaluated against a user query within few milliseconds. To address this issue, we propose CUPID: a Curriculum learning based real-time Prediction using Distillation that utilizes knowledge distillation within a curriculum learning setting to learn a simpler architecture that can be evaluated within low latency budgets. In a bi-lingual relevance prediction task, our approach shows an 302 bps improvement on English and 676 bps improvement for low-resource Arabic, while maintaining the low evaluation latency on CPUs.", }
Relevance in E-commerce Product Search is crucial for providing customers with accurate results that match their query intent. With recent advancements in NLP and Deep Learning, Transformers have become the default choice for relevance classification tasks. In such a setting, the relevance model uses query text and product title as input features, and estimates if the product is relevant for the customer query. While cross-attention in Transformers enables a more accurate relevance prediction in such a setting, its high evaluation latency makes it unsuitable for real-time predictions in which thousands of products must be evaluated against a user query within few milliseconds. To address this issue, we propose CUPID: a Curriculum learning based real-time Prediction using Distillation that utilizes knowledge distillation within a curriculum learning setting to learn a simpler architecture that can be evaluated within low latency budgets. In a bi-lingual relevance prediction task, our approach shows an 302 bps improvement on English and 676 bps improvement for low-resource Arabic, while maintaining the low evaluation latency on CPUs.
[ "Bhattacharya, Arindam", "Ms, Ankith", "Gandhi, Ankit", "Huddar, Vijay", "Saroop, Atul", "Bhagat, Rahul" ]
CUPID: Curriculum Learning Based Real-Time Prediction using Distillation
acl-industry.69
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.70.bib
https://aclanthology.org/2023.acl-industry.70/
@inproceedings{faustini-etal-2023-answering, title = "Answering Unanswered Questions through Semantic Reformulations in Spoken {QA}", author = "Faustini, Pedro and Chen, Zhiyu and Fetahu, Besnik and Rokhlenko, Oleg and Malmasi, Shervin", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.70", doi = "10.18653/v1/2023.acl-industry.70", pages = "729--743", abstract = "Spoken Question Answering (QA) is a key feature of voice assistants, usually backed by multiple QA systems. Users ask questions via spontaneous speech that can contain disfluencies, errors, and informal syntax or phrasing. This is a major challenge in QA, causing unanswered questions or irrelevant answers, leading to bad user experiences. We analyze failed QA requests to identify core challenges: lexical gaps, proposition types, complex syntactic structure, and high specificity. We propose a Semantic Question Reformulation (SURF) model offering three linguistically-grounded operations (repair, syntactic reshaping, generalization) to rewrite questions to facilitate answering. Offline evaluation on 1M unanswered questions from a leading voice assistant shows that SURF significantly improves answer rates: up to 24{\%} of previously unanswered questions obtain relevant answers (75{\%}). Live deployment shows positive impact for millions of customers with unanswered questions; explicit relevance feedback shows high user satisfaction.", }
Spoken Question Answering (QA) is a key feature of voice assistants, usually backed by multiple QA systems. Users ask questions via spontaneous speech that can contain disfluencies, errors, and informal syntax or phrasing. This is a major challenge in QA, causing unanswered questions or irrelevant answers, leading to bad user experiences. We analyze failed QA requests to identify core challenges: lexical gaps, proposition types, complex syntactic structure, and high specificity. We propose a Semantic Question Reformulation (SURF) model offering three linguistically-grounded operations (repair, syntactic reshaping, generalization) to rewrite questions to facilitate answering. Offline evaluation on 1M unanswered questions from a leading voice assistant shows that SURF significantly improves answer rates: up to 24% of previously unanswered questions obtain relevant answers (75%). Live deployment shows positive impact for millions of customers with unanswered questions; explicit relevance feedback shows high user satisfaction.
[ "Faustini, Pedro", "Chen, Zhiyu", "Fetahu, Besnik", "Rokhlenko, Oleg", "Malmasi, Shervin" ]
Answering Unanswered Questions through Semantic Reformulations in Spoken QA
acl-industry.70
Poster
2305.17393
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.71.bib
https://aclanthology.org/2023.acl-industry.71/
@inproceedings{parikh-etal-2023-exploring, title = "Exploring Zero and Few-shot Techniques for Intent Classification", author = "Parikh, Soham and Tiwari, Mitul and Tumbade, Prashil and Vohra, Quaizar", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.71", doi = "10.18653/v1/2023.acl-industry.71", pages = "744--751", abstract = "Conversational NLU providers often need to scale to thousands of intent-classification models where new customers often face the cold-start problem. Scaling to so many customers puts a constraint on storage space as well. In this paper, we explore four different zero and few-shot intent classification approaches with this low-resource constraint: 1) domain adaptation, 2) data augmentation, 3) zero-shot intent classification using descriptions large language models (LLMs), and 4) parameter-efficient fine-tuning of instruction-finetuned language models. Our results show that all these approaches are effective to different degrees in low-resource settings. Parameter-efficient fine-tuning using T-few recipe on Flan-T5 yields the best performance even with just one sample per intent. We also show that the zero-shot method of prompting LLMs using intent descriptions is also very competitive.", }
Conversational NLU providers often need to scale to thousands of intent-classification models where new customers often face the cold-start problem. Scaling to so many customers puts a constraint on storage space as well. In this paper, we explore four different zero and few-shot intent classification approaches with this low-resource constraint: 1) domain adaptation, 2) data augmentation, 3) zero-shot intent classification using descriptions large language models (LLMs), and 4) parameter-efficient fine-tuning of instruction-finetuned language models. Our results show that all these approaches are effective to different degrees in low-resource settings. Parameter-efficient fine-tuning using T-few recipe on Flan-T5 yields the best performance even with just one sample per intent. We also show that the zero-shot method of prompting LLMs using intent descriptions is also very competitive.
[ "Parikh, Soham", "Tiwari, Mitul", "Tumbade, Prashil", "Vohra, Quaizar" ]
Exploring Zero and Few-shot Techniques for Intent Classification
acl-industry.71
Poster
2305.07157
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.72.bib
https://aclanthology.org/2023.acl-industry.72/
@inproceedings{bhargava-etal-2023-referring, title = "Referring to Screen Texts with Voice Assistants", author = "Bhargava, Shruti and Dhoot, Anand and Jonsson, Ing-marie and Nguyen, Hoang Long and Patel, Alkesh and Yu, Hong and Renkens, Vincent", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.72", doi = "10.18653/v1/2023.acl-industry.72", pages = "752--762", abstract = "Voice assistants help users make phone calls, send messages, create events, navigate and do a lot more. However assistants have limited capacity to understand their users{'} context. In this work, we aim to take a step in this direction. Our work dives into a new experience for users to refer to phone numbers, addresses, email addresses, urls, and dates on their phone screens. We focus on reference understanding, which is particularly interesting when, similar to visual grounding, there are multiple similar texts on screen. We collect a dataset and propose a lightweight general purpose model for this novel experience. Since consuming pixels directly is expensive, our system is designed to rely only on text extracted from the UI. Our model is modular, offering flexibility, better interpretability and efficient run time memory.", }
Voice assistants help users make phone calls, send messages, create events, navigate and do a lot more. However assistants have limited capacity to understand their users' context. In this work, we aim to take a step in this direction. Our work dives into a new experience for users to refer to phone numbers, addresses, email addresses, urls, and dates on their phone screens. We focus on reference understanding, which is particularly interesting when, similar to visual grounding, there are multiple similar texts on screen. We collect a dataset and propose a lightweight general purpose model for this novel experience. Since consuming pixels directly is expensive, our system is designed to rely only on text extracted from the UI. Our model is modular, offering flexibility, better interpretability and efficient run time memory.
[ "Bhargava, Shruti", "Dhoot, Anand", "Jonsson, Ing-marie", "Nguyen, Hoang Long", "Patel, Alkesh", "Yu, Hong", "Renkens, Vincent" ]
Referring to Screen Texts with Voice Assistants
acl-industry.72
Poster
2306.07298
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.73.bib
https://aclanthology.org/2023.acl-industry.73/
@inproceedings{chen-etal-2023-generate, title = "Generate-then-Retrieve: Intent-Aware {FAQ} Retrieval in Product Search", author = "Chen, Zhiyu and Choi, Jason and Fetahu, Besnik and Rokhlenko, Oleg and Malmasi, Shervin", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.73", doi = "10.18653/v1/2023.acl-industry.73", pages = "763--771", abstract = "Frequently Asked Question (FAQ) retrieval aims at retrieving question-answer pairs for a given a user query. Integrating FAQ retrieval with product search can not only empower users to make more informed purchase decisions, but also enhance user retention through efficient post-purchase support. Providing FAQ content without disrupting user{'}s shopping experience poses challenges on deciding when and how to show FAQ results. Our proposed intent-aware FAQ retrieval consists of (1) an intent classifier that predicts whether the query is looking for an FAQ; (2) a reformulation model that rewrites query into a natural question. Offline evaluation demonstrates that our approach improves 12{\%} in Hit@1 on retrieving ground-truth FAQs, while reducing latency by 95{\%} compared to baseline systems. These improvements are further validated by real user feedback, where more than 99{\%} of users consider FAQs displayed on top of product search results is helpful. Overall, our findings show promising directions for integrating FAQ retrieval into product search at scale.", }
Frequently Asked Question (FAQ) retrieval aims at retrieving question-answer pairs for a given a user query. Integrating FAQ retrieval with product search can not only empower users to make more informed purchase decisions, but also enhance user retention through efficient post-purchase support. Providing FAQ content without disrupting user's shopping experience poses challenges on deciding when and how to show FAQ results. Our proposed intent-aware FAQ retrieval consists of (1) an intent classifier that predicts whether the query is looking for an FAQ; (2) a reformulation model that rewrites query into a natural question. Offline evaluation demonstrates that our approach improves 12% in Hit@1 on retrieving ground-truth FAQs, while reducing latency by 95% compared to baseline systems. These improvements are further validated by real user feedback, where more than 99% of users consider FAQs displayed on top of product search results is helpful. Overall, our findings show promising directions for integrating FAQ retrieval into product search at scale.
[ "Chen, Zhiyu", "Choi, Jason", "Fetahu, Besnik", "Rokhlenko, Oleg", "Malmasi, Shervin" ]
Generate-then-Retrieve: Intent-Aware FAQ Retrieval in Product Search
acl-industry.73
Poster
2306.03411
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.74.bib
https://aclanthology.org/2023.acl-industry.74/
@inproceedings{jia-etal-2023-kafa, title = "{KAFA}: Rethinking Image Ad Understanding with Knowledge-Augmented Feature Adaptation of Vision-Language Models", author = "Jia, Zhiwei and Narayana, Pradyumna and Akula, Arjun and Pruthi, Garima and Su, Hao and Basu, Sugato and Jampani, Varun", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.74", doi = "10.18653/v1/2023.acl-industry.74", pages = "772--785", abstract = "Image ad understanding is a crucial task with wide real-world applications. Although highly challenging with the involvement of diverse atypical scenes, real-world entities, and reasoning over scene-texts, how to interpret image ads is relatively under-explored, especially in the era of foundational vision-language models (VLMs) featuring impressive generalizability and adaptability. In this paper, we perform the first empirical study of image ad understanding through the lens of pre-trained VLMs. We benchmark and reveal practical challenges in adapting these VLMs to image ad understanding. We propose a simple feature adaptation strategy to effectively fuse multimodal information for image ads and further empower it with knowledge of real-world entities. We hope our study draws more attention to image ad understanding which is broadly relevant to the advertising industry.", }
Image ad understanding is a crucial task with wide real-world applications. Although highly challenging with the involvement of diverse atypical scenes, real-world entities, and reasoning over scene-texts, how to interpret image ads is relatively under-explored, especially in the era of foundational vision-language models (VLMs) featuring impressive generalizability and adaptability. In this paper, we perform the first empirical study of image ad understanding through the lens of pre-trained VLMs. We benchmark and reveal practical challenges in adapting these VLMs to image ad understanding. We propose a simple feature adaptation strategy to effectively fuse multimodal information for image ads and further empower it with knowledge of real-world entities. We hope our study draws more attention to image ad understanding which is broadly relevant to the advertising industry.
[ "Jia, Zhiwei", "Narayana, Pradyumna", "Akula, Arjun", "Pruthi, Garima", "Su, Hao", "Basu, Sugato", "Jampani, Varun" ]
KAFA: Rethinking Image Ad Understanding with Knowledge-Augmented Feature Adaptation of Vision-Language Models
acl-industry.74
Poster
2305.18373
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.75.bib
https://aclanthology.org/2023.acl-industry.75/
@inproceedings{rana-etal-2023-weakly, title = "Weakly supervised hierarchical multi-task classification of customer questions", author = "Rana, Jitenkumar and Yenigalla, Promod and Aggarwal, Chetan and Mukku, Sandeep Sricharan and Soni, Manan and Patange, Rashmi", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.75", doi = "10.18653/v1/2023.acl-industry.75", pages = "786--793", abstract = "Identifying granular and actionable topics from customer questions (CQ) posted on e-commerce websites helps surface the missing information expected by customers on the product detail page (DP), provide insights to brands and sellers on what critical product information that the customers are looking before making a purchase decision and helps enrich the catalog quality to improve the overall customer experience (CX). We propose a weakly supervised Hierarchical Multi-task Classification Framework (HMCF) to identify topics from customer questions at various granularities. Complexity lies in creating a list of granular topics (taxonomy) for 1000s of product categories and building a scalable classification system. To this end, we introduce a clustering based Taxonomy Creation and Data Labeling (TCDL) module for creating taxonomy and labelled data with minimal supervision. Using TCDL module, taxonomy and labelled data creation task reduces to 2 hours as compared to 2 weeks of manual efforts by a subject matter expert. For classification, we propose a two level HMCF that performs multi-class classification to identify coarse level-1 topic and leverages NLI based label-aware approach to identify granular level-2 topic. We showcase that HMCF (based on BERT and NLI) a) achieves absolute improvement of 13{\%} in Top-1 accuracy over single-task non-hierarchical baselines b) learns a generic domain invariant function that can adapt to constantly evolving taxonomy (open label set) without need of re-training. c) reduces model deployment efforts significantly since it needs only one model that caters to 1000s of product categories.", }
Identifying granular and actionable topics from customer questions (CQ) posted on e-commerce websites helps surface the missing information expected by customers on the product detail page (DP), provide insights to brands and sellers on what critical product information that the customers are looking before making a purchase decision and helps enrich the catalog quality to improve the overall customer experience (CX). We propose a weakly supervised Hierarchical Multi-task Classification Framework (HMCF) to identify topics from customer questions at various granularities. Complexity lies in creating a list of granular topics (taxonomy) for 1000s of product categories and building a scalable classification system. To this end, we introduce a clustering based Taxonomy Creation and Data Labeling (TCDL) module for creating taxonomy and labelled data with minimal supervision. Using TCDL module, taxonomy and labelled data creation task reduces to 2 hours as compared to 2 weeks of manual efforts by a subject matter expert. For classification, we propose a two level HMCF that performs multi-class classification to identify coarse level-1 topic and leverages NLI based label-aware approach to identify granular level-2 topic. We showcase that HMCF (based on BERT and NLI) a) achieves absolute improvement of 13{\%} in Top-1 accuracy over single-task non-hierarchical baselines b) learns a generic domain invariant function that can adapt to constantly evolving taxonomy (open label set) without need of re-training. c) reduces model deployment efforts significantly since it needs only one model that caters to 1000s of product categories.
[ "Rana, Jitenkumar", "Yenigalla, Promod", "Aggarwal, Chetan", "Mukku, S", "eep Sricharan", "Soni, Manan", "Patange, Rashmi" ]
Weakly supervised hierarchical multi-task classification of customer questions
acl-industry.75
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-industry.76.bib
https://aclanthology.org/2023.acl-industry.76/
@inproceedings{sharma-etal-2023-automated, title = "Automated Digitization of Unstructured Medical Prescriptions", author = "Sharma, Megha and Vatsal, Tushar and Merugu, Srujana and Rajan, Aruna", editor = "Sitaram, Sunayana and Beigman Klebanov, Beata and Williams, Jason D", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-industry.76", doi = "10.18653/v1/2023.acl-industry.76", pages = "794--805", abstract = "Automated digitization of prescription images is a critical prerequisite to scale digital healthcare services such as online pharmacies. This is challenging in emerging markets since prescriptions are not digitized at source and patients lack the medical expertise to interpret prescriptions to place orders. In this paper, we present prescription digitization system for online medicine ordering built with minimal supervision. Our system uses a modular pipeline comprising a mix of ML and rule-based components for (a) image to text extraction, (b) segmentation into blocks and medication items, (c) medication attribute extraction, (d) matching against medicine catalog, and (e) shopping cart building. Our approach efficiently utilizes multiple signals like layout, medical ontologies, and semantic embeddings via LayoutLMv2 model to yield substantial improvement relative to strong baselines on medication attribute extraction. Our pipeline achieves +5.9{\%} gain in precision@3 and +5.6{\%} in recall@3 over catalog-based fuzzy matching baseline for shopping cart building for printed prescriptions.", }
Automated digitization of prescription images is a critical prerequisite to scale digital healthcare services such as online pharmacies. This is challenging in emerging markets since prescriptions are not digitized at source and patients lack the medical expertise to interpret prescriptions to place orders. In this paper, we present prescription digitization system for online medicine ordering built with minimal supervision. Our system uses a modular pipeline comprising a mix of ML and rule-based components for (a) image to text extraction, (b) segmentation into blocks and medication items, (c) medication attribute extraction, (d) matching against medicine catalog, and (e) shopping cart building. Our approach efficiently utilizes multiple signals like layout, medical ontologies, and semantic embeddings via LayoutLMv2 model to yield substantial improvement relative to strong baselines on medication attribute extraction. Our pipeline achieves +5.9{\%} gain in precision@3 and +5.6{\%} in recall@3 over catalog-based fuzzy matching baseline for shopping cart building for printed prescriptions.
[ "Sharma, Megha", "Vatsal, Tushar", "Merugu, Srujana", "Rajan, Aruna" ]
Automated Digitization of Unstructured Medical Prescriptions
acl-industry.76
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-tutorials.1.bib
https://aclanthology.org/2023.acl-tutorials.1/
@inproceedings{deng-etal-2023-goal, title = "Goal Awareness for Conversational {AI}: Proactivity, Non-collaborativity, and Beyond", author = "Deng, Yang and Lei, Wenqiang and Huang, Minlie and Chua, Tat-Seng", editor = "Chen, Yun-Nung (Vivian) and Margot, Margot and Reddy, Siva", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 6: Tutorial Abstracts)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-tutorials.1", doi = "10.18653/v1/2023.acl-tutorials.1", pages = "1--10", abstract = "Conversational systems are envisioned to provide social support or functional service to human users via natural language interactions. Conventional conversation researches mainly focus on the responseability of the system, such as dialogue context understanding and response generation, but overlooks the design of an essential property in intelligent conversations, i.e., goal awareness. The awareness of goals means the state of not only being responsive to the users but also aware of the target conversational goal and capable of leading the conversation towards the goal, which is a significant step towards higher-level intelligence and artificial consciousness. It can not only largely improve user engagement and service efficiency in the conversation, but also empower the system to handle more complicated conversation tasks that involve strategical and motivational interactions. In this tutorial, we will introduce the recent advances on the design of agent{'}s awareness of goals in a wide range of conversational systems.", }
Conversational systems are envisioned to provide social support or functional service to human users via natural language interactions. Conventional conversation researches mainly focus on the responseability of the system, such as dialogue context understanding and response generation, but overlooks the design of an essential property in intelligent conversations, i.e., goal awareness. The awareness of goals means the state of not only being responsive to the users but also aware of the target conversational goal and capable of leading the conversation towards the goal, which is a significant step towards higher-level intelligence and artificial consciousness. It can not only largely improve user engagement and service efficiency in the conversation, but also empower the system to handle more complicated conversation tasks that involve strategical and motivational interactions. In this tutorial, we will introduce the recent advances on the design of agent{'}s awareness of goals in a wide range of conversational systems.
[ "Deng, Yang", "Lei, Wenqiang", "Huang, Minlie", "Chua, Tat-Seng" ]
Goal Awareness for Conversational AI: Proactivity, Non-collaborativity, and Beyond
acl-tutorials.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-tutorials.2.bib
https://aclanthology.org/2023.acl-tutorials.2/
@inproceedings{zhao-etal-2023-complex, title = "Complex Reasoning in Natural Language", author = "Zhao, Wenting and Geva, Mor and Lin, Bill Yuchen and Yasunaga, Michihiro and Madaan, Aman and Yu, Tao", editor = "Chen, Yun-Nung (Vivian) and Margot, Margot and Reddy, Siva", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 6: Tutorial Abstracts)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-tutorials.2", doi = "10.18653/v1/2023.acl-tutorials.2", pages = "11--20", abstract = "Teaching machines to reason over texts has been a long-standing goal of natural language processing (NLP). To this end, researchers have designed a diverse set of complex reasoning tasks that involve compositional reasoning, knowledge retrieval, grounding, commonsense reasoning, etc. A standard choice for building systems that perform a desired type of reasoning is to fine-tune a pretrained language model (LM) on specific downstream tasks. However, recent research has demonstrated that such a straightforward approach is often brittle. For example, Elazar et al. (2021) and Branco et al. (2021) show that, on question-answering (QA) tasks, similar performance can be achieved with questions removed from the inputs. Min et al. (2019), Chen and Durrett (2019), and Tang et al. (2021) show that models trained on multi-hop QA do not generalize to answer single-hop questions. The reasoning capabilities of these models thus remain at a surface level, i.e., exploiting data patterns. Consequently, augmenting LMs with techniques that make them robust and effective becomes an active research area. We will start the tutorial by providing an overview of complex reasoning tasks where the standard application of pretrained language models fails. This tutorial then reviews recent promising directions for tackling these tasks. 
Specifically, we focus on the following groups of approaches that explicitly consider problem structures: (1) knowledge-augmented methods, where the knowledge is either incorporated during fine-tuning or pretraining; (2) few-shot prompting methods, which effectively guide the models to follow instructions; (3) neuro-symbolic methods, which produce explicit intermediate representations; and, (4) rationale-based methods, one of the most popular forms of the neuro-symbolic methods, which highlight subsets of input as explanations for individual model predictions.", }
Teaching machines to reason over texts has been a long-standing goal of natural language processing (NLP). To this end, researchers have designed a diverse set of complex reasoning tasks that involve compositional reasoning, knowledge retrieval, grounding, commonsense reasoning, etc. A standard choice for building systems that perform a desired type of reasoning is to fine-tune a pretrained language model (LM) on specific downstream tasks. However, recent research has demonstrated that such a straightforward approach is often brittle. For example, Elazar et al. (2021) and Branco et al. (2021) show that, on question-answering (QA) tasks, similar performance can be achieved with questions removed from the inputs. Min et al. (2019), Chen and Durrett (2019), and Tang et al. (2021) show that models trained on multi-hop QA do not generalize to answer single-hop questions. The reasoning capabilities of these models thus remain at a surface level, i.e., exploiting data patterns. Consequently, augmenting LMs with techniques that make them robust and effective becomes an active research area. We will start the tutorial by providing an overview of complex reasoning tasks where the standard application of pretrained language models fails. This tutorial then reviews recent promising directions for tackling these tasks. Specifically, we focus on the following groups of approaches that explicitly consider problem structures: (1) knowledge-augmented methods, where the knowledge is either incorporated during fine-tuning or pretraining; (2) few-shot prompting methods, which effectively guide the models to follow instructions; (3) neuro-symbolic methods, which produce explicit intermediate representations; and, (4) rationale-based methods, one of the most popular forms of the neuro-symbolic methods, which highlight subsets of input as explanations for individual model predictions.
[ "Zhao, Wenting", "Geva, Mor", "Lin, Bill Yuchen", "Yasunaga, Michihiro", "Madaan, Aman", "Yu, Tao" ]
Complex Reasoning in Natural Language
acl-tutorials.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-tutorials.3.bib
https://aclanthology.org/2023.acl-tutorials.3/
@inproceedings{sitaram-etal-2023-everything, title = "Everything you need to know about Multilingual {LLM}s: Towards fair, performant and reliable models for languages of the world", author = "Sitaram, Sunayana and Choudhury, Monojit and Patra, Barun and Chaudhary, Vishrav and Ahuja, Kabir and Bali, Kalika", editor = "Chen, Yun-Nung (Vivian) and Margot, Margot and Reddy, Siva", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 6: Tutorial Abstracts)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-tutorials.3", doi = "10.18653/v1/2023.acl-tutorials.3", pages = "21--26", abstract = "This tutorial will describe various aspects of scaling up language technologies to many of the world{'}s languages by describing the latest research in Massively Multilingual Language Models (MMLMs). We will cover topics such as data collection, training and fine-tuning of models, Responsible AI issues such as fairness, bias and toxicity, linguistic diversity and evaluation in the context of MMLMs, specifically focusing on issues in non-English and low-resource languages. Further, we will also talk about some of the real-world challenges in deploying these models in language communities in the field. With the performance of MMLMs improving in the zero-shot setting for many languages, it is now becoming feasible to use them for building language technologies in many languages of the world, and this tutorial will provide the computational linguistics community with unique insights from the latest research in multilingual models.", }
This tutorial will describe various aspects of scaling up language technologies to many of the world{'}s languages by describing the latest research in Massively Multilingual Language Models (MMLMs). We will cover topics such as data collection, training and fine-tuning of models, Responsible AI issues such as fairness, bias and toxicity, linguistic diversity and evaluation in the context of MMLMs, specifically focusing on issues in non-English and low-resource languages. Further, we will also talk about some of the real-world challenges in deploying these models in language communities in the field. With the performance of MMLMs improving in the zero-shot setting for many languages, it is now becoming feasible to use them for building language technologies in many languages of the world, and this tutorial will provide the computational linguistics community with unique insights from the latest research in multilingual models.
[ "Sitaram, Sunayana", "Choudhury, Monojit", "Patra, Barun", "Chaudhary, Vishrav", "Ahuja, Kabir", "Bali, Kalika" ]
Everything you need to know about Multilingual LLMs: Towards fair, performant and reliable models for languages of the world
acl-tutorials.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-tutorials.4.bib
https://aclanthology.org/2023.acl-tutorials.4/
@inproceedings{amini-etal-2023-generating, title = "Generating Text from Language Models", author = "Amini, Afra and Cotterell, Ryan and Hewitt, John and Malagutti, Luca and Meister, Clara and Pimentel, Tiago", editor = "Chen, Yun-Nung (Vivian) and Margot, Margot and Reddy, Siva", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 6: Tutorial Abstracts)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-tutorials.4", doi = "10.18653/v1/2023.acl-tutorials.4", pages = "27--31", abstract = "An increasingly large percentage of natural language processing (NLP) tasks center around the generation of text from probabilistic language models. Despite this trend, techniques for improving or specifying preferences in these generated texts rely mostly on intuition-based heuristics. Further, there lacks a unified presentation of their motivations, practical implementation, successes and pitfalls. Practitioners must, therefore, choose somewhat blindly between generation algorithms{---}like top-p sampling or beam search{---}which can lead to wildly different results. At the same time, language generation research continues to criticize and improve the standard toolboxes, further adding entropy to the state of the field. In this tutorial, we will provide a centralized and cohesive discussion of critical considerations when choosing how to generate from a language model. We will cover a wide range of empirically-observed problems (like degradation, hallucination, repetition) and their corresponding proposed algorithmic solutions from recent research (like top-p sampling and its successors). We will then discuss a subset of these algorithms under a unified light; most stochastic generation strategies can be framed as locally adapting the probabilities of a model to avoid failure cases. 
Finally, we will then cover methods in controlled generation, that go beyond just ensuring coherence to ensure text exhibits specific desired properties. We aim for NLP practitioners and researchers to leave our tutorial with a unified framework which they can use to evaluate and contribute to the latest research in language generation.", }
An increasingly large percentage of natural language processing (NLP) tasks center around the generation of text from probabilistic language models. Despite this trend, techniques for improving or specifying preferences in these generated texts rely mostly on intuition-based heuristics. Further, there lacks a unified presentation of their motivations, practical implementation, successes and pitfalls. Practitioners must, therefore, choose somewhat blindly between generation algorithms{---}like top-p sampling or beam search{---}which can lead to wildly different results. At the same time, language generation research continues to criticize and improve the standard toolboxes, further adding entropy to the state of the field. In this tutorial, we will provide a centralized and cohesive discussion of critical considerations when choosing how to generate from a language model. We will cover a wide range of empirically-observed problems (like degradation, hallucination, repetition) and their corresponding proposed algorithmic solutions from recent research (like top-p sampling and its successors). We will then discuss a subset of these algorithms under a unified light; most stochastic generation strategies can be framed as locally adapting the probabilities of a model to avoid failure cases. Finally, we will then cover methods in controlled generation, that go beyond just ensuring coherence to ensure text exhibits specific desired properties. We aim for NLP practitioners and researchers to leave our tutorial with a unified framework which they can use to evaluate and contribute to the latest research in language generation.
[ "Amini, Afra", "Cotterell, Ryan", "Hewitt, John", "Malagutti, Luca", "Meister, Clara", "Pimentel, Tiago" ]
Generating Text from Language Models
acl-tutorials.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-tutorials.5.bib
https://aclanthology.org/2023.acl-tutorials.5/
@inproceedings{yin-etal-2023-indirectly, title = "Indirectly Supervised Natural Language Processing", author = "Yin, Wenpeng and Chen, Muhao and Zhou, Ben and Ning, Qiang and Chang, Kai-Wei and Roth, Dan", editor = "Chen, Yun-Nung (Vivian) and Margot, Margot and Reddy, Siva", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 6: Tutorial Abstracts)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-tutorials.5", doi = "10.18653/v1/2023.acl-tutorials.5", pages = "32--40", abstract = "This tutorial targets researchers and practitioners who are interested in ML technologies for NLP from indirect supervision. In particular, we will present a diverse thread of indirect supervision studies that try to answer the following questions: (i) when and how can we provide supervision for a target task T, if all we have is data that corresponds to a {``}related{''} task T′? (ii) humans do not use exhaustive supervision; they rely on occasional feedback, and learn from incidental signals from various sources; how can we effectively incorporate such supervision in machine learning? (iii) how can we leverage multi-modal supervision to help NLP? To the end, we will discuss several lines of research that address those challenges, including (i) indirect supervision from T ′ that handles T with outputs spanning from a moderate size to an open space, (ii) the use of sparsely occurring and incidental signals, such as partial labels, noisy labels, knowledge-based constraints, and cross-domain or cross-task annotations{---}all having statistical associations with the task, (iii) principled ways to measure and understand why these incidental signals can contribute to our target tasks, and (iv) indirect supervision from vision-language signals. We will conclude the tutorial by outlining directions for further investigation.", }
This tutorial targets researchers and practitioners who are interested in ML technologies for NLP from indirect supervision. In particular, we will present a diverse thread of indirect supervision studies that try to answer the following questions: (i) when and how can we provide supervision for a target task T, if all we have is data that corresponds to a {``}related{''} task T′? (ii) humans do not use exhaustive supervision; they rely on occasional feedback, and learn from incidental signals from various sources; how can we effectively incorporate such supervision in machine learning? (iii) how can we leverage multi-modal supervision to help NLP? To the end, we will discuss several lines of research that address those challenges, including (i) indirect supervision from T ′ that handles T with outputs spanning from a moderate size to an open space, (ii) the use of sparsely occurring and incidental signals, such as partial labels, noisy labels, knowledge-based constraints, and cross-domain or cross-task annotations{---}all having statistical associations with the task, (iii) principled ways to measure and understand why these incidental signals can contribute to our target tasks, and (iv) indirect supervision from vision-language signals. We will conclude the tutorial by outlining directions for further investigation.
[ "Yin, Wenpeng", "Chen, Muhao", "Zhou, Ben", "Ning, Qiang", "Chang, Kai-Wei", "Roth, Dan" ]
Indirectly Supervised Natural Language Processing
acl-tutorials.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-tutorials.6.bib
https://aclanthology.org/2023.acl-tutorials.6/
@inproceedings{asai-etal-2023-retrieval, title = "Retrieval-based Language Models and Applications", author = "Asai, Akari and Min, Sewon and Zhong, Zexuan and Chen, Danqi", editor = "Chen, Yun-Nung (Vivian) and Margot, Margot and Reddy, Siva", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 6: Tutorial Abstracts)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-tutorials.6", doi = "10.18653/v1/2023.acl-tutorials.6", pages = "41--46", abstract = "Retrieval-based language models (LMs) have shown impressive performance on diverse NLP tasks. In this tutorial, we will provide a comprehensive and coherent overview of recent advances in retrieval-based LMs. We will start by providing preliminaries covering the foundation of LMs (e.g., masked LMs, autoregressive LMs) and retrieval systems (e.g., nearest-neighbor search). We will then detail recent progress in retrieval-based models, focusing on their model architectures and learning approaches. Finally, we will show how retrieval-based LMs are adapted to downstream applications, and extended to multilingual and multi-modal settings. Finally, we will use an exercise to showcase the effectiveness of retrieval-based LMs.", }
Retrieval-based language models (LMs) have shown impressive performance on diverse NLP tasks. In this tutorial, we will provide a comprehensive and coherent overview of recent advances in retrieval-based LMs. We will start by providing preliminaries covering the foundation of LMs (e.g., masked LMs, autoregressive LMs) and retrieval systems (e.g., nearest-neighbor search). We will then detail recent progress in retrieval-based models, focusing on their model architectures and learning approaches. Finally, we will show how retrieval-based LMs are adapted to downstream applications, and extended to multilingual and multi-modal settings. Finally, we will use an exercise to showcase the effectiveness of retrieval-based LMs.
[ "Asai, Akari", "Min, Sewon", "Zhong, Zexuan", "Chen, Danqi" ]
Retrieval-based Language Models and Applications
acl-tutorials.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.1.bib
https://aclanthology.org/2023.findings-acl.1/
@inproceedings{zhang-etal-2023-investigating, title = "Investigating Glyph-Phonetic Information for {C}hinese Spell Checking: What Works and What{'}s Next?", author = "Zhang, Xiaotian and Zheng, Yanjun and Yan, Hang and Qiu, Xipeng", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.1", doi = "10.18653/v1/2023.findings-acl.1", pages = "1--13", abstract = "While pre-trained Chinese language models have demonstrated impressive performance on a wide range of NLP tasks, the Chinese Spell Checking (CSC) task remains a challenge. Previous research has explored using information such as glyphs and phonetics to improve the ability of CSC models to distinguish misspelled characters, with good results at the accuracy level on public datasets. However, the generalization ability of these CSC models has not been well understood: it is unclear whether they incorporate glyph-phonetic information and, if so, whether this information is fully utilized. In this paper, we aim to better understand the role of glyph-phonetic information in the CSC task and suggest directions for improvement. Additionally, we propose a new, more challenging, and practical setting for testing the generalizability of CSC models. All code is made publicly available.", }
While pre-trained Chinese language models have demonstrated impressive performance on a wide range of NLP tasks, the Chinese Spell Checking (CSC) task remains a challenge. Previous research has explored using information such as glyphs and phonetics to improve the ability of CSC models to distinguish misspelled characters, with good results at the accuracy level on public datasets. However, the generalization ability of these CSC models has not been well understood: it is unclear whether they incorporate glyph-phonetic information and, if so, whether this information is fully utilized. In this paper, we aim to better understand the role of glyph-phonetic information in the CSC task and suggest directions for improvement. Additionally, we propose a new, more challenging, and practical setting for testing the generalizability of CSC models. All code is made publicly available.
[ "Zhang, Xiaotian", "Zheng, Yanjun", "Yan, Hang", "Qiu, Xipeng" ]
Investigating Glyph-Phonetic Information for Chinese Spell Checking: What Works and What's Next?
findings-acl.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.2.bib
https://aclanthology.org/2023.findings-acl.2/
@inproceedings{jo-2023-self, title = "A Self-Supervised Integration Method of Pretrained Language Models and Word Definitions", author = "Jo, Hwiyeol", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.2", doi = "10.18653/v1/2023.findings-acl.2", pages = "14--26", abstract = "We investigate the representation of pretrained language models and humans, using the idea of word definition modeling{--}how well a word is represented by its definition, and vice versa. Our analysis shows that a word representation in pretrained language models does not successfully map its human-written definition and its usage in example sentences. We then present a simple method DefBERT that integrates pretrained models with word semantics in dictionaries. We show its benefits on newly-proposed tasks of definition ranking and definition sense disambiguation. Furthermore, we present the results on standard word similarity tasks and short text classification tasks where models are required to encode semantics with only a few words. The results demonstrate the effectiveness of integrating word definitions and pretrained language models.", }
We investigate the representation of pretrained language models and humans, using the idea of word definition modeling{--}how well a word is represented by its definition, and vice versa. Our analysis shows that a word representation in pretrained language models does not successfully map its human-written definition and its usage in example sentences. We then present a simple method DefBERT that integrates pretrained models with word semantics in dictionaries. We show its benefits on newly-proposed tasks of definition ranking and definition sense disambiguation. Furthermore, we present the results on standard word similarity tasks and short text classification tasks where models are required to encode semantics with only a few words. The results demonstrate the effectiveness of integrating word definitions and pretrained language models.
[ "Jo, Hwiyeol" ]
A Self-Supervised Integration Method of Pretrained Language Models and Word Definitions
findings-acl.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.3.bib
https://aclanthology.org/2023.findings-acl.3/
@inproceedings{ravfogel-etal-2023-conformal, title = "Conformal Nucleus Sampling", author = "Ravfogel, Shauli and Goldberg, Yoav and Goldberger, Jacob", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.3", doi = "10.18653/v1/2023.findings-acl.3", pages = "27--34", abstract = "Language models generate text based on successively sampling the next word. A decoding procedure based on nucleus (top-$p$) sampling chooses from the smallest possible set of words whose cumulative probability exceeds the probability $p$. In this work, we assess whether a top-$p$ set is indeed aligned with its probabilistic meaning in various linguistic contexts. We employ conformal prediction, a calibration procedure that focuses on the construction of minimal prediction sets according to a desired confidence level, to calibrate the parameter $p$ as a function of the entropy of the next word distribution. We find that OPT models are overconfident, and that calibration shows a moderate inverse scaling with model size.", }
Language models generate text based on successively sampling the next word. A decoding procedure based on nucleus (top-$p$) sampling chooses from the smallest possible set of words whose cumulative probability exceeds the probability $p$. In this work, we assess whether a top-$p$ set is indeed aligned with its probabilistic meaning in various linguistic contexts. We employ conformal prediction, a calibration procedure that focuses on the construction of minimal prediction sets according to a desired confidence level, to calibrate the parameter $p$ as a function of the entropy of the next word distribution. We find that OPT models are overconfident, and that calibration shows a moderate inverse scaling with model size.
[ "Ravfogel, Shauli", "Goldberg, Yoav", "Goldberger, Jacob" ]
Conformal Nucleus Sampling
findings-acl.3
Poster
2305.02633
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
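The top-$p$ rule this abstract builds on — take the smallest set of words whose cumulative probability reaches $p$ — can be sketched as follows. This is a minimal NumPy illustration of plain nucleus sampling only; the paper's contribution, calibrating $p$ from the entropy of the next-word distribution via conformal prediction, is not reproduced here, and the toy distribution is invented for the example.

```python
import numpy as np

def top_p_set(probs, p=0.9):
    """Indices of the smallest set of words whose cumulative
    probability reaches p (nucleus / top-p sampling)."""
    order = np.argsort(probs)[::-1]        # words sorted by probability, descending
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, p)) + 1   # smallest k with cum[k-1] >= p
    return order[:k]

# Toy next-word distribution over a 5-word vocabulary.
probs = np.array([0.5, 0.3, 0.1, 0.06, 0.04])
nucleus = top_p_set(probs, p=0.75)         # words 0 and 1 (mass 0.8 >= 0.75)
```

Sampling would then renormalize `probs[nucleus]` and draw from that reduced set; the paper's question is whether this set's actual coverage matches $p$ across contexts.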
https://aclanthology.org/2023.findings-acl.4.bib
https://aclanthology.org/2023.findings-acl.4/
@inproceedings{chan-etal-2023-discoprompt, title = "{D}isco{P}rompt: Path Prediction Prompt Tuning for Implicit Discourse Relation Recognition", author = "Chan, Chunkit and Liu, Xin and Cheng, Jiayang and Li, Zihan and Song, Yangqiu and Wong, Ginny and See, Simon", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.4", doi = "10.18653/v1/2023.findings-acl.4", pages = "35--57", abstract = "Implicit Discourse Relation Recognition (IDRR) is a sophisticated and challenging task to recognize the discourse relations between the arguments in the absence of discourse connectives. The sense labels for each discourse relation follow a hierarchical classification scheme in the annotation process (Prasad et al., 2008), forming a hierarchy structure. Most existing works do not incorporate the hierarchy structure well but focus on the syntax features and the prior knowledge of connectives in the manner of pure text classification. We argue that it is more effective to predict the paths inside the hierarchical tree (e.g., {``}Comparison -{\textgreater} Contrast -{\textgreater} however{''}) rather than flat labels (e.g., Contrast) or connectives (e.g., however). We propose a prompt-based path prediction method to utilize the interactive information and intrinsic senses among the hierarchy in IDRR. This is the first work that injects such structure information into pre-trained language models via prompt tuning, and the performance of our solution shows significant and consistent improvement against competitive baselines.", }
Implicit Discourse Relation Recognition (IDRR) is a sophisticated and challenging task to recognize the discourse relations between the arguments in the absence of discourse connectives. The sense labels for each discourse relation follow a hierarchical classification scheme in the annotation process (Prasad et al., 2008), forming a hierarchy structure. Most existing works do not incorporate the hierarchy structure well but focus on the syntax features and the prior knowledge of connectives in the manner of pure text classification. We argue that it is more effective to predict the paths inside the hierarchical tree (e.g., {``}Comparison -{\textgreater} Contrast -{\textgreater} however{''}) rather than flat labels (e.g., Contrast) or connectives (e.g., however). We propose a prompt-based path prediction method to utilize the interactive information and intrinsic senses among the hierarchy in IDRR. This is the first work that injects such structure information into pre-trained language models via prompt tuning, and the performance of our solution shows significant and consistent improvement against competitive baselines.
[ "Chan, Chunkit", "Liu, Xin", "Cheng, Jiayang", "Li, Zihan", "Song, Yangqiu", "Wong, Ginny", "See, Simon" ]
DiscoPrompt: Path Prediction Prompt Tuning for Implicit Discourse Relation Recognition
findings-acl.4
Poster
2305.03973
[ "https://github.com/hkust-knowcomp/discoprompt" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.5.bib
https://aclanthology.org/2023.findings-acl.5/
@inproceedings{cao-jiang-2023-modularized, title = "Modularized Zero-shot {VQA} with Pre-trained Models", author = "Cao, Rui and Jiang, Jing", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.5", doi = "10.18653/v1/2023.findings-acl.5", pages = "58--76", abstract = "Large-scale pre-trained models (PTMs) show great zero-shot capabilities. In this paper, we study how to leverage them for zero-shot visual question answering (VQA). Our approach is motivated by a few observations. First, VQA questions often require multiple steps of reasoning, which is still a capability that most PTMs lack. Second, different steps in VQA reasoning chains require different skills such as object detection and relational reasoning, but a single PTM may not possess all these skills. Third, recent work on zero-shot VQA does not explicitly consider multi-step reasoning chains, which makes them less interpretable compared with a decomposition-based approach. We propose a modularized zero-shot network that explicitly decomposes questions into sub reasoning steps and is highly interpretable. We convert sub reasoning tasks to acceptable objectives of PTMs and assign tasks to proper PTMs without any adaptation. Our experiments on two VQA benchmarks under the zero-shot setting demonstrate the effectiveness of our method and better interpretability compared with several baselines.", }
Large-scale pre-trained models (PTMs) show great zero-shot capabilities. In this paper, we study how to leverage them for zero-shot visual question answering (VQA). Our approach is motivated by a few observations. First, VQA questions often require multiple steps of reasoning, which is still a capability that most PTMs lack. Second, different steps in VQA reasoning chains require different skills such as object detection and relational reasoning, but a single PTM may not possess all these skills. Third, recent work on zero-shot VQA does not explicitly consider multi-step reasoning chains, which makes them less interpretable compared with a decomposition-based approach. We propose a modularized zero-shot network that explicitly decomposes questions into sub reasoning steps and is highly interpretable. We convert sub reasoning tasks to acceptable objectives of PTMs and assign tasks to proper PTMs without any adaptation. Our experiments on two VQA benchmarks under the zero-shot setting demonstrate the effectiveness of our method and better interpretability compared with several baselines.
[ "Cao, Rui", "Jiang, Jing" ]
Modularized Zero-shot VQA with Pre-trained Models
findings-acl.5
Poster
2305.17369
[ "https://github.com/abril4416/Mod-Zero-VQA" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.6.bib
https://aclanthology.org/2023.findings-acl.6/
@inproceedings{tan-etal-2023-timelineqa, title = "{T}imeline{QA}: A Benchmark for Question Answering over Timelines", author = "Tan, Wang-Chiew and Dwivedi-Yu, Jane and Li, Yuliang and Mathias, Lambert and Saeidi, Marzieh and Yan, Jing Nathan and Halevy, Alon", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.6", doi = "10.18653/v1/2023.findings-acl.6", pages = "77--91", abstract = "Lifelogs are descriptions of experiences that a person had during their life. Lifelogs are created by fusing data from the multitude of digital services, such as online photos, maps, shopping and content streaming services. Question answering over lifelogs can offer personal assistants a critical resource when they try to provide advice in context. However, obtaining answers to questions over lifelogs is beyond the current state of the art of question answering techniques for a variety of reasons, the most pronounced of which is that lifelogs combine free text with some degree of structure such as temporal and geographical information. We create and publicly release TimelineQA, a benchmark for accelerating progress on querying lifelogs. TimelineQA generates lifelogs of imaginary people. The episodes in the lifelog range from major life episodes such as high school graduation to those that occur on a daily basis such as going for a run. We describe a set of experiments on TimelineQA with several state-of-the-art QA models. Our experiments reveal that for atomic queries, an extractive QA system significantly out-performs a state-of-the-art retrieval-augmented QA system. For multi-hop queries involving aggregates, we show that the best result is obtained with a state-of-the-art table QA technique, assuming the ground truth set of episodes for deriving the answer is available.", }
Lifelogs are descriptions of experiences that a person had during their life. Lifelogs are created by fusing data from the multitude of digital services, such as online photos, maps, shopping and content streaming services. Question answering over lifelogs can offer personal assistants a critical resource when they try to provide advice in context. However, obtaining answers to questions over lifelogs is beyond the current state of the art of question answering techniques for a variety of reasons, the most pronounced of which is that lifelogs combine free text with some degree of structure such as temporal and geographical information. We create and publicly release TimelineQA, a benchmark for accelerating progress on querying lifelogs. TimelineQA generates lifelogs of imaginary people. The episodes in the lifelog range from major life episodes such as high school graduation to those that occur on a daily basis such as going for a run. We describe a set of experiments on TimelineQA with several state-of-the-art QA models. Our experiments reveal that for atomic queries, an extractive QA system significantly out-performs a state-of-the-art retrieval-augmented QA system. For multi-hop queries involving aggregates, we show that the best result is obtained with a state-of-the-art table QA technique, assuming the ground truth set of episodes for deriving the answer is available.
[ "Tan, Wang-Chiew", "Dwivedi-Yu, Jane", "Li, Yuliang", "Mathias, Lambert", "Saeidi, Marzieh", "Yan, Jing Nathan", "Halevy, Alon" ]
TimelineQA: A Benchmark for Question Answering over Timelines
findings-acl.6
Poster
2306.01069
[ "https://github.com/facebookresearch/timelineqa" ]
https://huggingface.co/papers/2306.01069
0
2
0
7
1
[]
[]
[]
https://aclanthology.org/2023.findings-acl.7.bib
https://aclanthology.org/2023.findings-acl.7/
@inproceedings{lam-etal-2023-abstractive, title = "Abstractive Text Summarization Using the {BRIO} Training Paradigm", author = "Lam, Khang and Doan, Thieu and Pham, Khang and Kalita, Jugal", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.7", doi = "10.18653/v1/2023.findings-acl.7", pages = "92--99", abstract = "Summary sentences produced by abstractive summarization models may be coherent and comprehensive, but they lack control and rely heavily on reference summaries. The BRIO training paradigm assumes a non-deterministic distribution to reduce the model{'}s dependence on reference summaries, and improve model performance during inference. This paper presents a straightforward but effective technique to improve abstractive summaries by fine-tuning pre-trained language models, and training them with the BRIO paradigm. We build a text summarization dataset for Vietnamese, called VieSum. We perform experiments with abstractive summarization models trained with the BRIO paradigm on the CNNDM and the VieSum datasets. The results show that the models, trained on basic hardware, outperform all existing abstractive summarization models, especially for Vietnamese.", }
Summary sentences produced by abstractive summarization models may be coherent and comprehensive, but they lack control and rely heavily on reference summaries. The BRIO training paradigm assumes a non-deterministic distribution to reduce the model{'}s dependence on reference summaries, and improve model performance during inference. This paper presents a straightforward but effective technique to improve abstractive summaries by fine-tuning pre-trained language models, and training them with the BRIO paradigm. We build a text summarization dataset for Vietnamese, called VieSum. We perform experiments with abstractive summarization models trained with the BRIO paradigm on the CNNDM and the VieSum datasets. The results show that the models, trained on basic hardware, outperform all existing abstractive summarization models, especially for Vietnamese.
[ "Lam, Khang", "Doan, Thieu", "Pham, Khang", "Kalita, Jugal" ]
Abstractive Text Summarization Using the BRIO Training Paradigm
findings-acl.7
Poster
2305.13696
[ "" ]
https://huggingface.co/papers/2305.13696
0
1
0
4
1
[]
[]
[]
https://aclanthology.org/2023.findings-acl.8.bib
https://aclanthology.org/2023.findings-acl.8/
@inproceedings{wu-etal-2023-modeling, title = "Modeling the {Q}-Diversity in a Min-max Play Game for Robust Optimization", author = "Wu, Ting and Zheng, Rui and Gui, Tao and Zhang, Qi and Huang, Xuanjing", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.8", doi = "10.18653/v1/2023.findings-acl.8", pages = "100--113", abstract = "Models trained with empirical risk minimization (ERM) are revealed to easily rely on spurious correlations, resulting in poor generalization. Group distributionally robust optimization (group DRO) can alleviate this problem by minimizing the worst-case loss over pre-defined groups. While promising, in practice factors like expensive annotations and privacy preclude the availability of group labels. More crucially, when taking a closer look at the failure modes of out-of-distribution generalization, the typical procedure of reweighting in group DRO loses efficiency. Given these limitations, in this work we reformulate the group DRO framework by proposing Q-Diversity. Characterized by an interactive training mode, Q-Diversity relaxes the group identification from annotation into direct parameterization. Furthermore, a novel mixing strategy across groups is presented to diversify the under-represented groups. In a series of experiments on both synthetic and real-world text classification tasks, results demonstrate that Q-Diversity can consistently improve worst-case accuracy under different distributional shifts, outperforming state-of-the-art alternatives.", }
Models trained with empirical risk minimization (ERM) are revealed to easily rely on spurious correlations, resulting in poor generalization. Group distributionally robust optimization (group DRO) can alleviate this problem by minimizing the worst-case loss over pre-defined groups. While promising, in practice factors like expensive annotations and privacy preclude the availability of group labels. More crucially, when taking a closer look at the failure modes of out-of-distribution generalization, the typical procedure of reweighting in group DRO loses efficiency. Given these limitations, in this work we reformulate the group DRO framework by proposing Q-Diversity. Characterized by an interactive training mode, Q-Diversity relaxes the group identification from annotation into direct parameterization. Furthermore, a novel mixing strategy across groups is presented to diversify the under-represented groups. In a series of experiments on both synthetic and real-world text classification tasks, results demonstrate that Q-Diversity can consistently improve worst-case accuracy under different distributional shifts, outperforming state-of-the-art alternatives.
[ "Wu, Ting", "Zheng, Rui", "Gui, Tao", "Zhang, Qi", "Huang, Xuanjing" ]
Modeling the Q-Diversity in a Min-max Play Game for Robust Optimization
findings-acl.8
Poster
2305.12123
[ "https://github.com/cuteythyme/q-diversity" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.9.bib
https://aclanthology.org/2023.findings-acl.9/
@inproceedings{chen-etal-2023-pre, title = "Pre-training Language Model as a Multi-perspective Course Learner", author = "Chen, Beiduo and Huang, Shaohan and Zhang, Zihan and Guo, Wu and Ling, Zhenhua and Huang, Haizhen and Wei, Furu and Deng, Weiwei and Zhang, Qi", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.9", doi = "10.18653/v1/2023.findings-acl.9", pages = "114--128", abstract = "ELECTRA, the generator-discriminator pre-training framework, has achieved impressive semantic construction capability among various downstream tasks. Despite the convincing performance, ELECTRA still faces the challenges of monotonous training and deficient interaction. Generator with only masked language modeling (MLM) leads to biased learning and label imbalance for discriminator, decreasing learning efficiency; no explicit feedback loop from discriminator to generator results in the chasm between these two components, underutilizing the course learning. In this study, a multi-perspective course learning (MCL) method is proposed to fetch many degrees and visual angles for sample-efficient pre-training, and to fully leverage the relationship between generator and discriminator. Concretely, three self-supervision courses are designed to alleviate inherent flaws of MLM and balance the label in a multi-perspective way. Besides, two self-correction courses are proposed to bridge the chasm between the two encoders by creating a {``}correction notebook{''} for secondary-supervision. Moreover, a course soups trial is conducted to solve the {``}tug-of-war{''} dynamics problem of MCL, evolving a stronger pre-trained model. Experimental results show that our method significantly improves ELECTRA{'}s average performance by 2.8{\%} and 3.2{\%} absolute points respectively on GLUE and SQuAD 2.0 benchmarks, and overshadows recent advanced ELECTRA-style models under the same settings. The pre-trained MCL model is available at \url{https://huggingface.co/McmanusChen/MCL-base}.", }
ELECTRA, the generator-discriminator pre-training framework, has achieved impressive semantic construction capability among various downstream tasks. Despite the convincing performance, ELECTRA still faces the challenges of monotonous training and deficient interaction. Generator with only masked language modeling (MLM) leads to biased learning and label imbalance for discriminator, decreasing learning efficiency; no explicit feedback loop from discriminator to generator results in the chasm between these two components, underutilizing the course learning. In this study, a multi-perspective course learning (MCL) method is proposed to fetch many degrees and visual angles for sample-efficient pre-training, and to fully leverage the relationship between generator and discriminator. Concretely, three self-supervision courses are designed to alleviate inherent flaws of MLM and balance the label in a multi-perspective way. Besides, two self-correction courses are proposed to bridge the chasm between the two encoders by creating a {``}correction notebook{''} for secondary-supervision. Moreover, a course soups trial is conducted to solve the {``}tug-of-war{''} dynamics problem of MCL, evolving a stronger pre-trained model. Experimental results show that our method significantly improves ELECTRA{'}s average performance by 2.8{\%} and 3.2{\%} absolute points respectively on GLUE and SQuAD 2.0 benchmarks, and overshadows recent advanced ELECTRA-style models under the same settings. The pre-trained MCL model is available at \url{https://huggingface.co/McmanusChen/MCL-base}.
[ "Chen, Beiduo", "Huang, Shaohan", "Zhang, Zihan", "Guo, Wu", "Ling, Zhenhua", "Huang, Haizhen", "Wei, Furu", "Deng, Weiwei", "Zhang, Qi" ]
Pre-training Language Model as a Multi-perspective Course Learner
findings-acl.9
Poster
2305.03981
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.10.bib
https://aclanthology.org/2023.findings-acl.10/
@inproceedings{tsymboi-etal-2023-layerwise, title = "Layerwise universal adversarial attack on {NLP} models", author = "Tsymboi, Olga and Malaev, Danil and Petrovskii, Andrei and Oseledets, Ivan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.10", doi = "10.18653/v1/2023.findings-acl.10", pages = "129--143", abstract = "In this work, we examine the vulnerability of language models to universal adversarial triggers (UATs). We propose a new white-box approach to the construction of layerwise UATs (LUATs), which searches the triggers by perturbing hidden layers of a network. On the example of three transformer models and three datasets from the GLUE benchmark, we demonstrate that our method provides better transferability in a model-to-model setting with an average gain of 9.3{\%} in the fooling rate over the baseline. Moreover, we investigate triggers transferability in the task-to-task setting. Using small subsets from the datasets similar to the target tasks for choosing a perturbed layer, we show that LUATs are more efficient than vanilla UATs by 7.1{\%} in the fooling rate.", }
In this work, we examine the vulnerability of language models to universal adversarial triggers (UATs). We propose a new white-box approach to the construction of layerwise UATs (LUATs), which searches the triggers by perturbing hidden layers of a network. On the example of three transformer models and three datasets from the GLUE benchmark, we demonstrate that our method provides better transferability in a model-to-model setting with an average gain of 9.3{\%} in the fooling rate over the baseline. Moreover, we investigate triggers transferability in the task-to-task setting. Using small subsets from the datasets similar to the target tasks for choosing a perturbed layer, we show that LUATs are more efficient than vanilla UATs by 7.1{\%} in the fooling rate.
[ "Tsymboi, Olga", "Malaev, Danil", "Petrovskii, Andrei", "Oseledets, Ivan" ]
Layerwise universal adversarial attack on NLP models
findings-acl.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.11.bib
https://aclanthology.org/2023.findings-acl.11/
@inproceedings{wang-etal-2023-scene, title = "Scene-robust Natural Language Video Localization via Learning Domain-invariant Representations", author = "Wang, Zehan and Zhao, Yang and Huang, Haifeng and Xia, Yan and Zhao, Zhou", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.11", doi = "10.18653/v1/2023.findings-acl.11", pages = "144--160", abstract = "Natural language video localization (NLVL) task involves the semantic matching of a text query with a moment from an untrimmed video. Previous methods primarily focus on improving performance with the assumption of independently identical data distribution while ignoring the out-of-distribution data. Therefore, these approaches often fail when handling the videos and queries in novel scenes, which is inevitable in real-world scenarios. In this paper, we, for the first time, formulate the scene-robust NLVL problem and propose a novel generalizable NLVL framework utilizing data in multiple available scenes to learn a robust model. Specifically, our model learns a group of generalizable domain-invariant representations by alignment and decomposition. First, we propose a comprehensive intra- and inter-sample distance metric for complex multi-modal feature space, and an asymmetric multi-modal alignment loss for different information densities of text and vision. Further, to alleviate the conflict between domain-invariant features for generalization and domain-specific information for reasoning, we introduce domain-specific and domain-agnostic predictors to decompose and refine the learned features by dynamically adjusting the weights of samples. Based on the original video tags, we conduct extensive experiments on three NLVL datasets with different-grained scene shifts to show the effectiveness of our proposed methods.", }
Natural language video localization (NLVL) task involves the semantic matching of a text query with a moment from an untrimmed video. Previous methods primarily focus on improving performance with the assumption of independently identical data distribution while ignoring the out-of-distribution data. Therefore, these approaches often fail when handling the videos and queries in novel scenes, which is inevitable in real-world scenarios. In this paper, we, for the first time, formulate the scene-robust NLVL problem and propose a novel generalizable NLVL framework utilizing data in multiple available scenes to learn a robust model. Specifically, our model learns a group of generalizable domain-invariant representations by alignment and decomposition. First, we propose a comprehensive intra- and inter-sample distance metric for complex multi-modal feature space, and an asymmetric multi-modal alignment loss for different information densities of text and vision. Further, to alleviate the conflict between domain-invariant features for generalization and domain-specific information for reasoning, we introduce domain-specific and domain-agnostic predictors to decompose and refine the learned features by dynamically adjusting the weights of samples. Based on the original video tags, we conduct extensive experiments on three NLVL datasets with different-grained scene shifts to show the effectiveness of our proposed methods.
[ "Wang, Zehan", "Zhao, Yang", "Huang, Haifeng", "Xia, Yan", "Zhao, Zhou" ]
Scene-robust Natural Language Video Localization via Learning Domain-invariant Representations
findings-acl.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.12.bib
https://aclanthology.org/2023.findings-acl.12/
@inproceedings{jiang-etal-2023-exploiting, title = "Exploiting Pseudo Image Captions for Multimodal Summarization", author = "Jiang, Chaoya and Xie, Rui and Ye, Wei and Sun, Jinan and Zhang, Shikun", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.12", doi = "10.18653/v1/2023.findings-acl.12", pages = "161--175", abstract = "Multimodal summarization with multimodal output (MSMO) faces a challenging semantic gap between visual and textual modalities due to the lack of reference images for training. Our pilot investigation indicates that image captions, which naturally connect texts and images, can significantly benefit MSMO. However, exposure of image captions during training is inconsistent with MSMO{'}s task settings, where prior cross-modal alignment information is excluded to guarantee the generalization of cross-modal semantic modeling. To this end, we propose a novel coarse-to-fine image-text alignment mechanism to identify the most relevant sentence of each image in a document, resembling the role of image captions in capturing visual knowledge and bridging the cross-modal semantic gap. Equipped with this alignment mechanism, our method easily yet impressively sets up state-of-the-art performances on all intermodality and intramodality metrics (e.g., more than 10{\%} relative improvement on image recommendation precision). Further experiments reveal the correlation between image captions and text summaries, and prove that the pseudo image captions we generated are even better than the original ones in terms of promoting multimodal summarization.", }
Multimodal summarization with multimodal output (MSMO) faces a challenging semantic gap between visual and textual modalities due to the lack of reference images for training. Our pilot investigation indicates that image captions, which naturally connect texts and images, can significantly benefit MSMO. However, exposure of image captions during training is inconsistent with MSMO{'}s task settings, where prior cross-modal alignment information is excluded to guarantee the generalization of cross-modal semantic modeling. To this end, we propose a novel coarse-to-fine image-text alignment mechanism to identify the most relevant sentence of each image in a document, resembling the role of image captions in capturing visual knowledge and bridging the cross-modal semantic gap. Equipped with this alignment mechanism, our method easily yet impressively sets up state-of-the-art performances on all intermodality and intramodality metrics (e.g., more than 10{\%} relative improvement on image recommendation precision). Further experiments reveal the correlation between image captions and text summaries, and prove that the pseudo image captions we generated are even better than the original ones in terms of promoting multimodal summarization.
[ "Jiang, Chaoya", "Xie, Rui", "Ye, Wei", "Sun, Jinan", "Zhang, Shikun" ]
Exploiting Pseudo Image Captions for Multimodal Summarization
findings-acl.12
Poster
2305.05496
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.13.bib
https://aclanthology.org/2023.findings-acl.13/
@inproceedings{parovic-etal-2023-cross, title = "Cross-Lingual Transfer with Target Language-Ready Task Adapters", author = "Parovic, Marinela and Ansell, Alan and Vuli{\'c}, Ivan and Korhonen, Anna", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.13", doi = "10.18653/v1/2023.findings-acl.13", pages = "176--193", abstract = "Adapters have emerged as a modular and parameter-efficient approach to (zero-shot) cross-lingual transfer. The established MAD-X framework employs separate language and task adapters which can be arbitrarily combined to perform the transfer of any task to any target language. Subsequently, BAD-X, an extension of the MAD-X framework, achieves improved transfer at the cost of MAD-X{'}s modularity by creating {`}bilingual{'} adapters specific to the source-target language pair. In this work, we aim to take the best of both worlds by (i) fine-tuning *task* adapters adapted to the target language(s) (so-called *{`}target language-ready{'} (TLR)* adapters) to maintain high transfer performance, but (ii) without sacrificing the highly modular design of MAD-X. The main idea of {`}target language-ready{'} adapters is to resolve the training-vs-inference discrepancy of MAD-X: the task adapter {`}sees{'} the target language adapter for the very first time during inference, and thus might not be fully compatible with it. We address this mismatch by exposing the task adapter to the target language adapter during training, and empirically validate several variants of the idea: in the simplest form, we alternate between using the source and target language adapters during task adapter training, which can be generalized to cycling over any set of language adapters. We evaluate different TLR-based transfer configurations with varying degrees of generality across a suite of standard cross-lingual benchmarks, and find that the most general (and thus most modular) configuration consistently outperforms MAD-X and BAD-X on most tasks and languages.", }
Adapters have emerged as a modular and parameter-efficient approach to (zero-shot) cross-lingual transfer. The established MAD-X framework employs separate language and task adapters which can be arbitrarily combined to perform the transfer of any task to any target language. Subsequently, BAD-X, an extension of the MAD-X framework, achieves improved transfer at the cost of MAD-X{'}s modularity by creating {`}bilingual{'} adapters specific to the source-target language pair. In this work, we aim to take the best of both worlds by (i) fine-tuning *task* adapters adapted to the target language(s) (so-called *{`}target language-ready{'} (TLR)* adapters) to maintain high transfer performance, but (ii) without sacrificing the highly modular design of MAD-X. The main idea of {`}target language-ready{'} adapters is to resolve the training-vs-inference discrepancy of MAD-X: the task adapter {`}sees{'} the target language adapter for the very first time during inference, and thus might not be fully compatible with it. We address this mismatch by exposing the task adapter to the target language adapter during training, and empirically validate several variants of the idea: in the simplest form, we alternate between using the source and target language adapters during task adapter training, which can be generalized to cycling over any set of language adapters. We evaluate different TLR-based transfer configurations with varying degrees of generality across a suite of standard cross-lingual benchmarks, and find that the most general (and thus most modular) configuration consistently outperforms MAD-X and BAD-X on most tasks and languages.
[ "Parovic, Marinela", "Ansell, Alan", "Vuli{\\'c}, Ivan", "Korhonen, Anna" ]
Cross-Lingual Transfer with Target Language-Ready Task Adapters
findings-acl.13
Poster
2306.02767
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
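The alternation described in the TLR abstract above — exposing the task adapter to different language adapters during training — can be sketched as a round-robin schedule over a set of language adapters. This is a toy illustration with hypothetical adapter labels, not the authors' code:

```python
from itertools import cycle

def tlr_training_schedule(language_adapters, num_steps):
    """Round-robin over language adapters during task-adapter training.

    Toy sketch of the 'target language-ready' idea: instead of pairing
    the task adapter with the source-language adapter only, cycle over
    a set of language adapters so the task adapter also 'sees' the
    target-language adapter(s) before inference.
    """
    rotation = cycle(language_adapters)
    return [next(rotation) for _ in range(num_steps)]

# Simplest form: alternate source ("en") and target ("sw") adapters.
print(tlr_training_schedule(["en", "sw"], 6))
# -> ['en', 'sw', 'en', 'sw', 'en', 'sw']
```

Cycling over more than two adapters gives the most general (and, per the abstract, most modular) configuration.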
https://aclanthology.org/2023.findings-acl.14.bib
https://aclanthology.org/2023.findings-acl.14/
@inproceedings{balepur-etal-2023-dynamite, title = "{D}yna{M}i{TE}: Discovering Explosive Topic Evolutions with User Guidance", author = "Balepur, Nishant and Agarwal, Shivam and Venkat Ramanan, Karthik and Yoon, Susik and Yang, Diyi and Han, Jiawei", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.14", doi = "10.18653/v1/2023.findings-acl.14", pages = "194--217", abstract = "Dynamic topic models (DTMs) analyze text streams to capture the evolution of topics. Despite their popularity, existing DTMs are either fully supervised, requiring expensive human annotations, or fully unsupervised, producing topic evolutions that often do not cater to a user{'}s needs. Further, the topic evolutions produced by DTMs tend to contain generic terms that are not indicative of their designated time steps. To address these issues, we propose the task of discriminative dynamic topic discovery. This task aims to discover topic evolutions from temporal corpora that distinctly align with a set of user-provided category names and uniquely capture topics at each time step. We solve this task by developing DynaMiTE, a framework that ensembles semantic similarity, category indicative, and time indicative scores to produce informative topic evolutions. Through experiments on three diverse datasets, including the use of a newly-designed human evaluation experiment, we demonstrate that DynaMiTE is a practical and efficient framework for helping users discover high-quality topic evolutions suited to their interests.", }
Dynamic topic models (DTMs) analyze text streams to capture the evolution of topics. Despite their popularity, existing DTMs are either fully supervised, requiring expensive human annotations, or fully unsupervised, producing topic evolutions that often do not cater to a user{'}s needs. Further, the topic evolutions produced by DTMs tend to contain generic terms that are not indicative of their designated time steps. To address these issues, we propose the task of discriminative dynamic topic discovery. This task aims to discover topic evolutions from temporal corpora that distinctly align with a set of user-provided category names and uniquely capture topics at each time step. We solve this task by developing DynaMiTE, a framework that ensembles semantic similarity, category indicative, and time indicative scores to produce informative topic evolutions. Through experiments on three diverse datasets, including the use of a newly-designed human evaluation experiment, we demonstrate that DynaMiTE is a practical and efficient framework for helping users discover high-quality topic evolutions suited to their interests.
[ "Balepur, Nishant", "Agarwal, Shivam", "Venkat Ramanan, Karthik", "Yoon, Susik", "Yang, Diyi", "Han, Jiawei" ]
DynaMiTE: Discovering Explosive Topic Evolutions with User Guidance
findings-acl.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.15.bib
https://aclanthology.org/2023.findings-acl.15/
@inproceedings{yu-etal-2023-boost, title = "Boost Transformer-based Language Models with {GPU}-Friendly Sparsity and Quantization", author = "Yu, Chong and Chen, Tao and Gan, Zhongxue", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.15", doi = "10.18653/v1/2023.findings-acl.15", pages = "218--235", abstract = "Along with the performance improvements in the NLP domain, the sizes of transformer-based language models (TLM) have also increased dramatically. Some prior works compress TLM models into more compact forms, but do not fully consider that hardware characteristics may not support efficient execution of these forms, so deploying TLMs on hardware with noticeable acceleration remains challenging. This paper designs a compression scheme named GPUSQ-TLM to maximally utilize the GPU-friendly 2:4 fine-grained structured sparsity and quantization characteristics. Specifically, a dense TLM model is first pruned to meet the GPU{'}s acceleration constraint on sparse patterns with the FP16 type, then further quantized into a fixed-point one by quantization-aware training, to provide an extra speedup for integer tensors on GPU. A mixed-strategy knowledge distillation of labels, logits and feature maps is used to compensate for accuracy loss during the pruning and quantization process. Experimental results show the GPUSQ-TLM scheme achieves state-of-the-art compression on TLM models with various encoder and decoder blocks, with negligible accuracy degradation on the SQuAD, GLUE, CNN-DM {\&} XSum and WikiText benchmarking tasks. Moreover, GPUSQ-TLM can boost actual deployment performance by up to 4.08-4.25x in latency and 6.18-6.79x in throughput on an A100 GPU.", }
Along with the performance improvements in the NLP domain, the sizes of transformer-based language models (TLM) have also increased dramatically. Some prior works compress TLM models into more compact forms, but do not fully consider that hardware characteristics may not support efficient execution of these forms, so deploying TLMs on hardware with noticeable acceleration remains challenging. This paper designs a compression scheme named GPUSQ-TLM to maximally utilize the GPU-friendly 2:4 fine-grained structured sparsity and quantization characteristics. Specifically, a dense TLM model is first pruned to meet the GPU{'}s acceleration constraint on sparse patterns with the FP16 type, then further quantized into a fixed-point one by quantization-aware training, to provide an extra speedup for integer tensors on GPU. A mixed-strategy knowledge distillation of labels, logits and feature maps is used to compensate for accuracy loss during the pruning and quantization process. Experimental results show the GPUSQ-TLM scheme achieves state-of-the-art compression on TLM models with various encoder and decoder blocks, with negligible accuracy degradation on the SQuAD, GLUE, CNN-DM {\&} XSum and WikiText benchmarking tasks. Moreover, GPUSQ-TLM can boost actual deployment performance by up to 4.08-4.25x in latency and 6.18-6.79x in throughput on an A100 GPU.
[ "Yu, Chong", "Chen, Tao", "Gan, Zhongxue" ]
Boost Transformer-based Language Models with GPU-Friendly Sparsity and Quantization
findings-acl.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
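The 2:4 fine-grained structured sparsity that the GPUSQ-TLM abstract above relies on means every contiguous group of four weights keeps at most two nonzero entries. A minimal magnitude-based sketch in pure Python (not the GPUSQ-TLM implementation):

```python
def prune_2_4(weights):
    """Apply 2:4 fine-grained structured sparsity by magnitude.

    In every contiguous group of four weights, keep the two entries
    with the largest absolute value and zero out the other two -- the
    pattern that Ampere-class GPUs can accelerate. Toy sketch only.
    """
    pruned = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        # positions of the two largest-magnitude weights in this group
        keep = sorted(range(len(group)), key=lambda j: abs(group[j]), reverse=True)[:2]
        pruned.extend(v if j in keep else 0.0 for j, v in enumerate(group))
    return pruned

print(prune_2_4([0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.01, 0.6]))
# -> [0.9, 0.0, 0.4, 0.0, -0.7, 0.0, 0.0, 0.6]
```

In the paper's pipeline this pruning step is followed by quantization-aware training; the sketch shows only the sparsity pattern itself.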
https://aclanthology.org/2023.findings-acl.16.bib
https://aclanthology.org/2023.findings-acl.16/
@inproceedings{he-etal-2023-rmssinger, title = "{RMSS}inger: Realistic-Music-Score based Singing Voice Synthesis", author = "He, Jinzheng and Liu, Jinglin and Ye, Zhenhui and Huang, Rongjie and Cui, Chenye and Liu, Huadai and Zhao, Zhou", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.16", doi = "10.18653/v1/2023.findings-acl.16", pages = "236--248", abstract = "We are interested in a challenging task, Realistic-Music-Score based Singing Voice Synthesis (RMS-SVS). RMS-SVS aims to generate high-quality singing voices given realistic music scores with different note types (grace, slur, rest, etc.). Though significant progress has been achieved, recent singing voice synthesis (SVS) methods are limited to fine-grained music scores, which require a complicated data collection pipeline with time-consuming manual annotation to align music notes with phonemes. Furthermore, such manual annotation destroys the regularity of note durations in music scores, making fine-grained music scores inconvenient for composing. To tackle these challenges, we propose RMSSinger, the first RMS-SVS method, which takes realistic music scores as input, eliminating most of the tedious manual annotation and avoiding the aforementioned inconvenience. Since music scores are based on words rather than phonemes, in RMSSinger we introduce word-level modeling to avoid the time-consuming phoneme duration annotation and the complicated phoneme-level mel-note alignment. Furthermore, we propose the first diffusion-based pitch modeling method, which improves on the naturalness of existing pitch-modeling methods. To achieve these, we collect a new dataset containing realistic music scores and singing voices recorded by professional singers according to these scores. Extensive experiments on the dataset demonstrate the effectiveness of our methods. Audio samples are available at \url{https://rmssinger.github.io/}.", }
We are interested in a challenging task, Realistic-Music-Score based Singing Voice Synthesis (RMS-SVS). RMS-SVS aims to generate high-quality singing voices given realistic music scores with different note types (grace, slur, rest, etc.). Though significant progress has been achieved, recent singing voice synthesis (SVS) methods are limited to fine-grained music scores, which require a complicated data collection pipeline with time-consuming manual annotation to align music notes with phonemes. Furthermore, such manual annotation destroys the regularity of note durations in music scores, making fine-grained music scores inconvenient for composing. To tackle these challenges, we propose RMSSinger, the first RMS-SVS method, which takes realistic music scores as input, eliminating most of the tedious manual annotation and avoiding the aforementioned inconvenience. Since music scores are based on words rather than phonemes, in RMSSinger we introduce word-level modeling to avoid the time-consuming phoneme duration annotation and the complicated phoneme-level mel-note alignment. Furthermore, we propose the first diffusion-based pitch modeling method, which improves on the naturalness of existing pitch-modeling methods. To achieve these, we collect a new dataset containing realistic music scores and singing voices recorded by professional singers according to these scores. Extensive experiments on the dataset demonstrate the effectiveness of our methods. Audio samples are available at \url{https://rmssinger.github.io/}.
[ "He, Jinzheng", "Liu, Jinglin", "Ye, Zhenhui", "Huang, Rongjie", "Cui, Chenye", "Liu, Huadai", "Zhao, Zhou" ]
RMSSinger: Realistic-Music-Score based Singing Voice Synthesis
findings-acl.16
Poster
2305.10686
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.17.bib
https://aclanthology.org/2023.findings-acl.17/
@inproceedings{kuo-chen-2023-zero, title = "Zero-Shot Prompting for Implicit Intent Prediction and Recommendation with Commonsense Reasoning", author = "Kuo, Hui-Chi and Chen, Yun-Nung", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.17", doi = "10.18653/v1/2023.findings-acl.17", pages = "249--258", abstract = "The current generation of intelligent assistants require explicit user requests to perform tasks or services, often leading to lengthy and complex conversations. In contrast, human assistants can infer multiple implicit intents from utterances via their commonsense knowledge, thereby simplifying interactions. To bridge this gap, this paper proposes a framework for multi-domain dialogue systems. This framework automatically infers implicit intents from user utterances, and prompts a large pre-trained language model to suggest suitable task-oriented bots. By leveraging commonsense knowledge, our framework recommends associated bots in a zero-shot manner, enhancing interaction efficiency and effectiveness. This approach substantially reduces interaction complexity, seamlessly integrates various domains and tasks, and represents a significant step towards creating more human-like intelligent assistants that can reason about implicit intents, offering a superior user experience.", }
The current generation of intelligent assistants require explicit user requests to perform tasks or services, often leading to lengthy and complex conversations. In contrast, human assistants can infer multiple implicit intents from utterances via their commonsense knowledge, thereby simplifying interactions. To bridge this gap, this paper proposes a framework for multi-domain dialogue systems. This framework automatically infers implicit intents from user utterances, and prompts a large pre-trained language model to suggest suitable task-oriented bots. By leveraging commonsense knowledge, our framework recommends associated bots in a zero-shot manner, enhancing interaction efficiency and effectiveness. This approach substantially reduces interaction complexity, seamlessly integrates various domains and tasks, and represents a significant step towards creating more human-like intelligent assistants that can reason about implicit intents, offering a superior user experience.
[ "Kuo, Hui-Chi", "Chen, Yun-Nung" ]
Zero-Shot Prompting for Implicit Intent Prediction and Recommendation with Commonsense Reasoning
findings-acl.17
Poster
2210.05901
[ "https://github.com/miulab/implicitbot" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.18.bib
https://aclanthology.org/2023.findings-acl.18/
@inproceedings{liu-etal-2023-mtgp, title = "{MTGP}: Multi-turn Target-oriented Dialogue Guided by Generative Global Path with Flexible Turns", author = "Liu, Anqi and Wang, Bo and Tan, Yue and Zhao, Dongming and Huang, Kun and He, Ruifang and Hou, Yuexian", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.18", doi = "10.18653/v1/2023.findings-acl.18", pages = "259--271", abstract = "Target-oriented dialogue guides the dialogue to a target quickly and smoothly. The latest approaches focus on global planning, which plans toward the target before the conversation instead of adopting a greedy strategy during the conversation. However, the global plan in existing works is fixed to certain turns by generating paths with certain nodes, which limits the optimization of turns and coherence of the target-oriented process. Toward flexible global planning, we propose to generate a global path as a natural language sentence instead of a sequence of nodes. With this path, the dialogue is guided to the target in a flexible number of turns. For model training, we also extract target-oriented dialogues from a chit-chat corpus with a knowledge graph. We conduct experiments on three datasets and simulate scenarios with and without user participation. The results show that our method has fewer turns, more coherent semantics, and a higher success rate in reaching the target than baselines.", }
Target-oriented dialogue guides the dialogue to a target quickly and smoothly. The latest approaches focus on global planning, which plans toward the target before the conversation instead of adopting a greedy strategy during the conversation. However, the global plan in existing works is fixed to certain turns by generating paths with certain nodes, which limits the optimization of turns and coherence of the target-oriented process. Toward flexible global planning, we propose to generate a global path as a natural language sentence instead of a sequence of nodes. With this path, the dialogue is guided to the target in a flexible number of turns. For model training, we also extract target-oriented dialogues from a chit-chat corpus with a knowledge graph. We conduct experiments on three datasets and simulate scenarios with and without user participation. The results show that our method has fewer turns, more coherent semantics, and a higher success rate in reaching the target than baselines.
[ "Liu, Anqi", "Wang, Bo", "Tan, Yue", "Zhao, Dongming", "Huang, Kun", "He, Ruifang", "Hou, Yuexian" ]
MTGP: Multi-turn Target-oriented Dialogue Guided by Generative Global Path with Flexible Turns
findings-acl.18
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.19.bib
https://aclanthology.org/2023.findings-acl.19/
@inproceedings{miceli-barone-etal-2023-larger, title = "The Larger they are, the Harder they Fail: Language Models do not Recognize Identifier Swaps in Python", author = "Miceli Barone, Antonio Valerio and Barez, Fazl and Cohen, Shay B. and Konstas, Ioannis", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.19", doi = "10.18653/v1/2023.findings-acl.19", pages = "272--292", abstract = "Large Language Models (LLMs) have successfully been applied to code generation tasks, raising the question of how well these models understand programming. Typical programming languages have invariances and equivariances in their semantics that human programmers intuitively understand and exploit, such as the (near) invariance to the renaming of identifiers. We show that LLMs not only fail to properly generate correct Python code when default function names are swapped, but some of them even become more confident in their incorrect predictions as the model size increases, an instance of the recently discovered phenomenon of Inverse Scaling, which runs contrary to the commonly observed trend of increasing prediction quality with increasing model size. Our findings indicate that, despite their astonishing typical-case performance, LLMs still lack a deep, abstract understanding of the content they manipulate, making them unsuitable for tasks that statistically deviate from their training data, and that mere scaling is not enough to achieve such capability.", }
Large Language Models (LLMs) have successfully been applied to code generation tasks, raising the question of how well these models understand programming. Typical programming languages have invariances and equivariances in their semantics that human programmers intuitively understand and exploit, such as the (near) invariance to the renaming of identifiers. We show that LLMs not only fail to properly generate correct Python code when default function names are swapped, but some of them even become more confident in their incorrect predictions as the model size increases, an instance of the recently discovered phenomenon of Inverse Scaling, which runs contrary to the commonly observed trend of increasing prediction quality with increasing model size. Our findings indicate that, despite their astonishing typical-case performance, LLMs still lack a deep, abstract understanding of the content they manipulate, making them unsuitable for tasks that statistically deviate from their training data, and that mere scaling is not enough to achieve such capability.
[ "Miceli Barone, Antonio Valerio", "Barez, Fazl", "Cohen, Shay B.", "Konstas, Ioannis" ]
The Larger they are, the Harder they Fail: Language Models do not Recognize Identifier Swaps in Python
findings-acl.19
Poster
2305.15507
[ "https://github.com/avmb/inverse_scaling_prize_code_identifier_swap" ]
-1
-1
-1
-1
0
[]
[]
[]
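The failure mode probed in the identifier-swap paper above can be reproduced with a toy transformation: exchanging two function names yields code that is still valid Python but statistically unusual. A naive string-based sketch (a real harness would swap tokens, e.g. via the `tokenize` module; this is not the paper's evaluation code):

```python
def swap_identifiers(source, name_a, name_b):
    """Swap two identifiers in a source string.

    Naive string replacement: fine for this toy, but it would clobber
    substrings of longer names in real code.
    """
    placeholder = "__swap_tmp__"
    return (source.replace(name_a, placeholder)
                  .replace(name_b, name_a)
                  .replace(placeholder, name_b))

original = "def add(x, y):\n    return x + y\n\ndef mul(x, y):\n    return x * y\n"
swapped = swap_identifiers(original, "add", "mul")

# The swapped program is still valid Python, but the names now mislead:
namespace = {}
exec(swapped, namespace)
print(namespace["mul"](2, 3))  # the function *named* mul actually adds -> 5
```

A model with an abstract grasp of semantics should follow the new bindings; the paper's finding is that LLMs instead track the statistically typical meaning of the names.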
https://aclanthology.org/2023.findings-acl.20.bib
https://aclanthology.org/2023.findings-acl.20/
@inproceedings{liu-etal-2023-class, title = "Class Lifelong Learning for Intent Detection via Structure Consolidation Networks", author = "Liu, Qingbin and Hao, Yanchao and Liu, Xiaolong and Li, Bo and Sui, Dianbo and He, Shizhu and Liu, Kang and Zhao, Jun and Chen, Xi and Zhang, Ningyu and Chen, Jiaoyan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.20", doi = "10.18653/v1/2023.findings-acl.20", pages = "293--306", abstract = "Intent detection, which estimates diverse intents behind user utterances, is an essential component of task-oriented dialogue systems. Previous intent detection models are usually trained offline and can only handle predefined intent classes. In the real world, new intents may keep challenging deployed models. For example, with the prevalence of the COVID-19 pandemic, users may pose various issues related to the pandemic to conversational systems, which brings many new intents. A general intent detection model should be intelligent enough to continually learn new data and recognize newly arriving intent classes. Therefore, this work explores Class Lifelong Learning for Intent Detection (CLL-ID), where the model continually learns new intent classes from new data while avoiding catastrophic performance degradation on old data. To this end, we propose a novel lifelong learning method, called Structure Consolidation Networks (SCN), which consists of structure-based retrospection and contrastive knowledge distillation to handle the problems of expression diversity and class imbalance in the CLL-ID task. In addition to formulating the new task, we construct 3 benchmarks based on 8 intent detection datasets. Experimental results demonstrate the effectiveness of SCN, which significantly outperforms previous lifelong learning methods on the three benchmarks.", }
Intent detection, which estimates diverse intents behind user utterances, is an essential component of task-oriented dialogue systems. Previous intent detection models are usually trained offline and can only handle predefined intent classes. In the real world, new intents may keep challenging deployed models. For example, with the prevalence of the COVID-19 pandemic, users may pose various issues related to the pandemic to conversational systems, which brings many new intents. A general intent detection model should be intelligent enough to continually learn new data and recognize newly arriving intent classes. Therefore, this work explores Class Lifelong Learning for Intent Detection (CLL-ID), where the model continually learns new intent classes from new data while avoiding catastrophic performance degradation on old data. To this end, we propose a novel lifelong learning method, called Structure Consolidation Networks (SCN), which consists of structure-based retrospection and contrastive knowledge distillation to handle the problems of expression diversity and class imbalance in the CLL-ID task. In addition to formulating the new task, we construct 3 benchmarks based on 8 intent detection datasets. Experimental results demonstrate the effectiveness of SCN, which significantly outperforms previous lifelong learning methods on the three benchmarks.
[ "Liu, Qingbin", "Hao, Yanchao", "Liu, Xiaolong", "Li, Bo", "Sui, Dianbo", "He, Shizhu", "Liu, Kang", "Zhao, Jun", "Chen, Xi", "Zhang, Ningyu", "Chen, Jiaoyan" ]
Class Lifelong Learning for Intent Detection via Structure Consolidation Networks
findings-acl.20
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.21.bib
https://aclanthology.org/2023.findings-acl.21/
@inproceedings{vashishtha-etal-2023-evaluating, title = "On Evaluating and Mitigating Gender Biases in Multilingual Settings", author = "Vashishtha, Aniket and Ahuja, Kabir and Sitaram, Sunayana", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.21", doi = "10.18653/v1/2023.findings-acl.21", pages = "307--318", abstract = "While understanding and removing gender biases in language models has been a long-standing problem in Natural Language Processing, prior research work has primarily been limited to English. In this work, we investigate some of the challenges with evaluating and mitigating biases in multilingual settings which stem from a lack of existing benchmarks and resources for bias evaluation beyond English especially for non-western context. In this paper, we first create a benchmark for evaluating gender biases in pre-trained masked language models by extending DisCo to different Indian languages using human annotations. We extend various debiasing methods to work beyond English and evaluate their effectiveness for SOTA massively multilingual models on our proposed metric. Overall, our work highlights the challenges that arise while studying social biases in multilingual settings and provides resources as well as mitigation techniques to take a step toward scaling to more languages.", }
While understanding and removing gender biases in language models has been a long-standing problem in Natural Language Processing, prior research work has primarily been limited to English. In this work, we investigate some of the challenges with evaluating and mitigating biases in multilingual settings which stem from a lack of existing benchmarks and resources for bias evaluation beyond English especially for non-western context. In this paper, we first create a benchmark for evaluating gender biases in pre-trained masked language models by extending DisCo to different Indian languages using human annotations. We extend various debiasing methods to work beyond English and evaluate their effectiveness for SOTA massively multilingual models on our proposed metric. Overall, our work highlights the challenges that arise while studying social biases in multilingual settings and provides resources as well as mitigation techniques to take a step toward scaling to more languages.
[ "Vashishtha, Aniket", "Ahuja, Kabir", "Sitaram, Sunayana" ]
On Evaluating and Mitigating Gender Biases in Multilingual Settings
findings-acl.21
Poster
2307.01503
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.22.bib
https://aclanthology.org/2023.findings-acl.22/
@inproceedings{zhuo-etal-2023-rethinking, title = "Rethinking Round-Trip Translation for Machine Translation Evaluation", author = "Zhuo, Terry Yue and Xu, Qiongkai and He, Xuanli and Cohn, Trevor", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.22", doi = "10.18653/v1/2023.findings-acl.22", pages = "319--337", abstract = "Automatic evaluation methods for translation often require model training, and thus the availability of parallel corpora limits their applicability to low-resource settings. Round-trip translation is a potential workaround, which can reframe bilingual evaluation into a much simpler monolingual task. Early results from the era of statistical machine translation (SMT) raised fundamental concerns about the utility of this approach, based on poor correlation with human translation quality judgments. In this paper, we revisit this technique with modern neural translation (NMT) and show that round-trip translation does allow for accurate automatic evaluation without the need for reference translations. These opposite findings can be explained through the copy mechanism in SMT that is absent in NMT. We demonstrate that round-trip translation benefits multiple machine translation evaluation tasks: i) predicting forward translation scores; ii) improving the performance of a quality estimation model; and iii) identifying adversarial competitors in shared tasks via cross-system verification.", }
Automatic evaluation methods for translation often require model training, and thus the availability of parallel corpora limits their applicability to low-resource settings. Round-trip translation is a potential workaround, which can reframe bilingual evaluation into a much simpler monolingual task. Early results from the era of statistical machine translation (SMT) raised fundamental concerns about the utility of this approach, based on poor correlation with human translation quality judgments. In this paper, we revisit this technique with modern neural translation (NMT) and show that round-trip translation does allow for accurate automatic evaluation without the need for reference translations. These opposite findings can be explained through the copy mechanism in SMT that is absent in NMT. We demonstrate that round-trip translation benefits multiple machine translation evaluation tasks: i) predicting forward translation scores; ii) improving the performance of a quality estimation model; and iii) identifying adversarial competitors in shared tasks via cross-system verification.
[ "Zhuo, Terry Yue", "Xu, Qiongkai", "He, Xuanli", "Cohn, Trevor" ]
Rethinking Round-Trip Translation for Machine Translation Evaluation
findings-acl.22
Poster
2209.07351
[ "https://github.com/terryyz/rtt-rethinking" ]
-1
-1
-1
-1
0
[]
[]
[]
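The round-trip evaluation scheme in the record above reduces to: translate the source into a pivot language, translate it back, and score the reconstruction against the original, all monolingually. A minimal sketch, with toy stand-in translators and a unigram-F1 similarity assumed in place of real NMT systems and metrics such as BLEU or chrF:

```python
def token_f1(reference: str, hypothesis: str) -> float:
    """Unigram F1 between two sentences (a simple stand-in metric)."""
    ref, hyp = set(reference.lower().split()), set(hypothesis.lower().split())
    overlap = len(ref & hyp)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(hyp), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def round_trip_score(source: str, forward, backward) -> float:
    """Translate source -> pivot -> back, then compare the
    reconstruction against the original. `forward` and `backward`
    are placeholders for real NMT systems."""
    reconstruction = backward(forward(source))
    return token_f1(source, reconstruction)

# Toy "translators" standing in for forward and backward NMT models:
fwd = lambda s: s.upper()   # pretend pivot-language translation
bwd = lambda s: s.lower()   # pretend back-translation
print(round_trip_score("the cat sat on the mat", fwd, bwd))  # 1.0
```

With real systems, a lossy forward translation degrades the reconstruction and the score drops, which is what lets the round-trip score track forward translation quality.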
https://aclanthology.org/2023.findings-acl.23.bib
https://aclanthology.org/2023.findings-acl.23/
@inproceedings{xiang-etal-2023-g, title = "$G^3R$: A Graph-Guided Generate-and-Rerank Framework for Complex and Cross-domain Text-to-{SQL} Generation", author = "Xiang, Yanzheng and Zhang, Qian-Wen and Zhang, Xu and Liu, Zejie and Cao, Yunbo and Zhou, Deyu", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.23", doi = "10.18653/v1/2023.findings-acl.23", pages = "338--352", abstract = "We present a framework called G3R for complex and cross-domain Text-to-SQL generation. G3R aims to address two limitations of current approaches: (1) The structure of the abstract syntax tree (AST) is not fully explored during the decoding process which is crucial for complex SQL generation; (2) Domain knowledge is not incorporated to enhance their ability to generalise to unseen domains. G3R consists of a graph-guided SQL generator and a knowledge-enhanced re-ranking mechanism. Firstly, during the decoding process, an AST-Grammar bipartite graph is constructed for both the AST and corresponding grammar rules of the generated partial SQL query. The graph-guided SQL generator captures its structural information and fuses heterogeneous information to predict the action sequence which can construct the AST for the corresponding SQL query uniquely. Then, in the inference stage, a knowledge-enhanced re-ranking mechanism is proposed to introduce domain knowledge to re-rank candidate SQL queries from the beam output and choose the final answer. The SQL ranker is based on pre-trained language models (PLM), and contrastive learning with hybrid prompt tuning is incorporated to stimulate the knowledge of PLMs and make it more discriminative. The proposed approach achieves state-of-the-art results on the Spider and Spider-DK benchmarks, which are challenging complex and cross-domain benchmarks for Text-to-SQL semantic analysis.", }
We present a framework called G3R for complex and cross-domain Text-to-SQL generation. G3R aims to address two limitations of current approaches: (1) The structure of the abstract syntax tree (AST) is not fully explored during the decoding process which is crucial for complex SQL generation; (2) Domain knowledge is not incorporated to enhance their ability to generalise to unseen domains. G3R consists of a graph-guided SQL generator and a knowledge-enhanced re-ranking mechanism. Firstly, during the decoding process, an AST-Grammar bipartite graph is constructed for both the AST and corresponding grammar rules of the generated partial SQL query. The graph-guided SQL generator captures its structural information and fuses heterogeneous information to predict the action sequence which can construct the AST for the corresponding SQL query uniquely. Then, in the inference stage, a knowledge-enhanced re-ranking mechanism is proposed to introduce domain knowledge to re-rank candidate SQL queries from the beam output and choose the final answer. The SQL ranker is based on pre-trained language models (PLM), and contrastive learning with hybrid prompt tuning is incorporated to stimulate the knowledge of PLMs and make it more discriminative. The proposed approach achieves state-of-the-art results on the Spider and Spider-DK benchmarks, which are challenging complex and cross-domain benchmarks for Text-to-SQL semantic analysis.
[ "Xiang, Yanzheng", "Zhang, Qian-Wen", "Zhang, Xu", "Liu, Zejie", "Cao, Yunbo", "Zhou, Deyu" ]
G^3R: A Graph-Guided Generate-and-Rerank Framework for Complex and Cross-domain Text-to-SQL Generation
findings-acl.23
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
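The generate-and-rerank step in the G^3R record above boils down to rescoring beam candidates with a domain-aware ranker and picking the best mix of generator confidence and ranker preference. A minimal sketch, where the `ranker` callable and the mixing weight `alpha` are illustrative assumptions, not the paper's PLM-based contrastive ranker:

```python
def rerank(candidates, generator_scores, ranker, alpha=0.5):
    """Pick the final SQL query from beam output by mixing the
    generator's log-probability with a (hypothetical) knowledge-
    enhanced ranker score."""
    scored = [
        (alpha * g + (1 - alpha) * ranker(c), c)
        for c, g in zip(candidates, generator_scores)
    ]
    return max(scored)[1]  # candidate with the highest combined score

# Toy ranker that prefers queries mentioning a known domain column:
ranker = lambda sql: 1.0 if "age" in sql else 0.0
beam = ["SELECT name FROM t", "SELECT age FROM t"]
print(rerank(beam, [-0.1, -0.3], ranker))  # SELECT age FROM t
```

Here domain knowledge overrides the generator's slight preference for the first candidate.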
https://aclanthology.org/2023.findings-acl.24.bib
https://aclanthology.org/2023.findings-acl.24/
@inproceedings{ding-etal-2023-unified, title = "A Unified Knowledge Graph Augmentation Service for Boosting Domain-specific {NLP} Tasks", author = "Ding, Ruiqing and Han, Xiao and Wang, Leye", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.24", doi = "10.18653/v1/2023.findings-acl.24", pages = "353--369", abstract = "By focusing the pre-training process on domain-specific corpora, some domain-specific pre-trained language models (PLMs) have achieved state-of-the-art results. However, it is under-investigated to design a unified paradigm to inject domain knowledge in the PLM fine-tuning stage. We propose KnowledgeDA, a unified domain language model development service to enhance the task-specific training procedure with domain knowledge graphs. Given domain-specific task texts input, KnowledgeDA can automatically generate a domain-specific language model following three steps: (i) localize domain knowledge entities in texts via an embedding-similarity approach; (ii) generate augmented samples by retrieving replaceable domain entity pairs from two views of both knowledge graph and training data; (iii) select high-quality augmented samples for fine-tuning via confidence-based assessment. We implement a prototype of KnowledgeDA to learn language models for two domains, healthcare and software development. Experiments on domain-specific text classification and QA tasks verify the effectiveness and generalizability of KnowledgeDA.", }
By focusing the pre-training process on domain-specific corpora, some domain-specific pre-trained language models (PLMs) have achieved state-of-the-art results. However, it is under-investigated to design a unified paradigm to inject domain knowledge in the PLM fine-tuning stage. We propose KnowledgeDA, a unified domain language model development service to enhance the task-specific training procedure with domain knowledge graphs. Given domain-specific task texts input, KnowledgeDA can automatically generate a domain-specific language model following three steps: (i) localize domain knowledge entities in texts via an embedding-similarity approach; (ii) generate augmented samples by retrieving replaceable domain entity pairs from two views of both knowledge graph and training data; (iii) select high-quality augmented samples for fine-tuning via confidence-based assessment. We implement a prototype of KnowledgeDA to learn language models for two domains, healthcare and software development. Experiments on domain-specific text classification and QA tasks verify the effectiveness and generalizability of KnowledgeDA.
[ "Ding, Ruiqing", "Han, Xiao", "Wang, Leye" ]
A Unified Knowledge Graph Augmentation Service for Boosting Domain-specific NLP Tasks
findings-acl.24
Poster
2212.05251
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
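Step (i) of the KnowledgeDA pipeline above, localizing domain entities via embedding similarity, can be sketched as a nearest-neighbor lookup over knowledge-graph entity vectors. The toy two-dimensional embeddings and the 0.8 threshold are assumptions for illustration; KnowledgeDA would use PLM-derived embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def localize(mention_vec, kg_entities, threshold=0.8):
    """Link a text mention to its most similar KG entity, or None
    if nothing is similar enough."""
    name, vec = max(kg_entities.items(),
                    key=lambda kv: cosine(mention_vec, kv[1]))
    return name if cosine(mention_vec, vec) >= threshold else None

# Toy healthcare KG with hand-made embeddings:
kg = {"aspirin": [1.0, 0.1], "insulin": [0.1, 1.0]}
print(localize([0.9, 0.2], kg))  # aspirin
```

Steps (ii) and (iii) would then swap such linked entities for KG neighbors to create augmented samples and filter them by model confidence.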
https://aclanthology.org/2023.findings-acl.25.bib
https://aclanthology.org/2023.findings-acl.25/
@inproceedings{wang-etal-2023-dialogue, title = "Dialogue Planning via Brownian Bridge Stochastic Process for Goal-directed Proactive Dialogue", author = "Wang, Jian and Lin, Dongding and Li, Wenjie", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.25", doi = "10.18653/v1/2023.findings-acl.25", pages = "370--387", abstract = "Goal-directed dialogue systems aim to proactively reach a pre-determined target through multi-turn conversations. The key to achieving this task lies in planning dialogue paths that smoothly and coherently direct conversations towards the target. However, this is a challenging and under-explored task. In this work, we propose a coherent dialogue planning approach that uses a stochastic process to model the temporal dynamics of dialogue paths. We define a latent space that captures the coherence of goal-directed behavior using a Brownian bridge process, which allows us to incorporate user feedback flexibly in dialogue planning. Based on the derived latent trajectories, we generate dialogue paths explicitly using pre-trained language models. We finally employ these paths as natural language prompts to guide dialogue generation. Our experiments show that our approach generates more coherent utterances and achieves the goal with a higher success rate.", }
Goal-directed dialogue systems aim to proactively reach a pre-determined target through multi-turn conversations. The key to achieving this task lies in planning dialogue paths that smoothly and coherently direct conversations towards the target. However, this is a challenging and under-explored task. In this work, we propose a coherent dialogue planning approach that uses a stochastic process to model the temporal dynamics of dialogue paths. We define a latent space that captures the coherence of goal-directed behavior using a Brownian bridge process, which allows us to incorporate user feedback flexibly in dialogue planning. Based on the derived latent trajectories, we generate dialogue paths explicitly using pre-trained language models. We finally employ these paths as natural language prompts to guide dialogue generation. Our experiments show that our approach generates more coherent utterances and achieves the goal with a higher success rate.
[ "Wang, Jian", "Lin, Dongding", "Li, Wenjie" ]
Dialogue Planning via Brownian Bridge Stochastic Process for Goal-directed Proactive Dialogue
findings-acl.25
Poster
2305.05290
[ "https://github.com/iwangjian/color4dial" ]
https://huggingface.co/papers/2305.05290
0
0
0
3
1
[]
[]
[]
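The Brownian bridge latent trajectories in the record above have a simple closed form: at step t of T, the mean linearly interpolates the start and goal latents while the variance t(T-t)/T pins the path to both endpoints. A minimal one-dimensional sketch (real dialogue latents would be high-dimensional vectors from an encoder):

```python
import random

def brownian_bridge(z0, zT, T, sigma=1.0, seed=0):
    """Sample a latent trajectory pinned at z0 (t=0) and zT (t=T).
    The variance sigma^2 * t * (T - t) / T vanishes at both ends,
    so the path is anchored to the start and the goal."""
    rng = random.Random(seed)
    path = []
    for t in range(T + 1):
        mean = (1 - t / T) * z0 + (t / T) * zT
        std = (sigma ** 2 * t * (T - t) / T) ** 0.5
        path.append(mean + rng.gauss(0.0, std))
    return path

traj = brownian_bridge(0.0, 1.0, 5)
print(traj[0], traj[-1])  # endpoints stay pinned: 0.0 1.0
```

Each sampled trajectory can then condition a language model to realize one coherent dialogue path toward the goal.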
https://aclanthology.org/2023.findings-acl.26.bib
https://aclanthology.org/2023.findings-acl.26/
@inproceedings{badathala-etal-2023-match, title = "A Match Made in Heaven: A Multi-task Framework for Hyperbole and Metaphor Detection", author = "Badathala, Naveen and Rajakumar Kalarani, Abisek and Siledar, Tejpalsingh and Bhattacharyya, Pushpak", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.26", doi = "10.18653/v1/2023.findings-acl.26", pages = "388--401", abstract = "Hyperbole and metaphor are common in day-to-day communication (e.g., {``}I am in deep trouble{''}: how does trouble have depth?), which makes their detection important, especially in a conversational AI setting. Existing approaches to automatically detect metaphor and hyperbole have studied these language phenomena independently, but their relationship has hardly, if ever, been explored computationally. In this paper, we propose a multi-task deep learning framework to detect hyperbole and metaphor simultaneously. We hypothesize that metaphors help in hyperbole detection, and vice-versa. To test this hypothesis, we annotate two hyperbole datasets- HYPO and HYPO-L- with metaphor labels. Simultaneously, we annotate two metaphor datasets- TroFi and LCC- with hyperbole labels. Experiments using these datasets give an improvement of the state of the art of hyperbole detection by 12{\%}. Additionally, our multi-task learning (MTL) approach shows an improvement of up to 17{\%} over single-task learning (STL) for both hyperbole and metaphor detection, supporting our hypothesis. To the best of our knowledge, ours is the first demonstration of computational leveraging of linguistic intimacy between metaphor and hyperbole, leading to showing the superiority of MTL over STL for hyperbole and metaphor detection.", }
Hyperbole and metaphor are common in day-to-day communication (e.g., {``}I am in deep trouble{''}: how does trouble have depth?), which makes their detection important, especially in a conversational AI setting. Existing approaches to automatically detect metaphor and hyperbole have studied these language phenomena independently, but their relationship has hardly, if ever, been explored computationally. In this paper, we propose a multi-task deep learning framework to detect hyperbole and metaphor simultaneously. We hypothesize that metaphors help in hyperbole detection, and vice-versa. To test this hypothesis, we annotate two hyperbole datasets- HYPO and HYPO-L- with metaphor labels. Simultaneously, we annotate two metaphor datasets- TroFi and LCC- with hyperbole labels. Experiments using these datasets give an improvement of the state of the art of hyperbole detection by 12{\%}. Additionally, our multi-task learning (MTL) approach shows an improvement of up to 17{\%} over single-task learning (STL) for both hyperbole and metaphor detection, supporting our hypothesis. To the best of our knowledge, ours is the first demonstration of computational leveraging of linguistic intimacy between metaphor and hyperbole, leading to showing the superiority of MTL over STL for hyperbole and metaphor detection.
[ "Badathala, Naveen", "Rajakumar Kalarani, Abisek", "Siledar, Tejpalsingh", "Bhattacharyya, Pushpak" ]
A Match Made in Heaven: A Multi-task Framework for Hyperbole and Metaphor Detection
findings-acl.26
Poster
2305.17480
[ "https://github.com/abisekrk/multitask_hyperbole_metaphor_detection" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.27.bib
https://aclanthology.org/2023.findings-acl.27/
@inproceedings{yang-etal-2023-prompt, title = "Prompt Tuning for Unified Multimodal Pretrained Models", author = "Yang, Hao and Lin, Junyang and Yang, An and Wang, Peng and Zhou, Chang", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.27", doi = "10.18653/v1/2023.findings-acl.27", pages = "402--416", abstract = "Prompt tuning has become a new paradigm for model tuning and it has demonstrated success in natural language pretraining and even vision pretraining. The parameter-efficient prompt tuning methods that optimize soft embeddings while keeping the pretrained model frozen demonstrate advantages in low computation costs and almost lossless performance. In this work, we explore the transfer of prompt tuning to multimodal pretrained models. Specifically, we apply prompt tuning to a unified sequence-to-sequence pretrained model by adding a sequence of learnable embeddings to each layer and finetuning the pretrained model on downstream tasks with only the learnable embeddings being optimized. Experimental results on a series of multimodal understanding and generation tasks demonstrate that our method OFA-PT can achieve comparable performance with finetuning across a series of multimodal generation and understanding tasks. Additionally, it significantly outperforms the unified multimodal pretrained model with other parameter-efficient tuning methods, e.g., Adapter, BitFit, etc. Besides, in comparison with finetuned models, the prompt-tuned models demonstrate improved robustness against adversarial attacks. We further find that experimental factors, including prompt length, prompt depth, and reparameterization, have great impacts on the model performance, and thus we empirically provide a recommendation for the setups of prompt tuning.", }
Prompt tuning has become a new paradigm for model tuning and it has demonstrated success in natural language pretraining and even vision pretraining. The parameter-efficient prompt tuning methods that optimize soft embeddings while keeping the pretrained model frozen demonstrate advantages in low computation costs and almost lossless performance. In this work, we explore the transfer of prompt tuning to multimodal pretrained models. Specifically, we apply prompt tuning to a unified sequence-to-sequence pretrained model by adding a sequence of learnable embeddings to each layer and finetuning the pretrained model on downstream tasks with only the learnable embeddings being optimized. Experimental results on a series of multimodal understanding and generation tasks demonstrate that our method OFA-PT can achieve comparable performance with finetuning across a series of multimodal generation and understanding tasks. Additionally, it significantly outperforms the unified multimodal pretrained model with other parameter-efficient tuning methods, e.g., Adapter, BitFit, etc. Besides, in comparison with finetuned models, the prompt-tuned models demonstrate improved robustness against adversarial attacks. We further find that experimental factors, including prompt length, prompt depth, and reparameterization, have great impacts on the model performance, and thus we empirically provide a recommendation for the setups of prompt tuning.
[ "Yang, Hao", "Lin, Junyang", "Yang, An", "Wang, Peng", "Zhou, Chang" ]
Prompt Tuning for Unified Multimodal Pretrained Models
findings-acl.27
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
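The core mechanic in the prompt-tuning record above is to concatenate a short sequence of learnable "soft" embeddings in front of the (frozen) input embeddings, and optimize only those. A minimal sketch with plain Python lists standing in for tensors; real prompt tuning (as in OFA-PT) inserts such prompts at every layer and trains them by backpropagation:

```python
def prepend_soft_prompt(prompt_embeds, token_embeds):
    """Prefix-style prompt tuning: the learnable prompt vectors are
    concatenated before the frozen input embeddings, giving [P; X]."""
    return prompt_embeds + token_embeds  # list concatenation

# 2 learnable prompt vectors + 3 frozen token embeddings, dim 4 each:
prompt = [[0.0] * 4 for _ in range(2)]   # the only trainable parameters
tokens = [[1.0] * 4 for _ in range(3)]   # produced by the frozen embedder
sequence = prepend_soft_prompt(prompt, tokens)
print(len(sequence))  # 5
```

Because only `prompt` receives gradient updates, the storage cost per task is a few vectors rather than a full model copy.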
https://aclanthology.org/2023.findings-acl.28.bib
https://aclanthology.org/2023.findings-acl.28/
@inproceedings{gao-etal-2023-learning, title = "Learning Joint Structural and Temporal Contextualized Knowledge Embeddings for Temporal Knowledge Graph Completion", author = "Gao, Yifu and He, Yongquan and Kan, Zhigang and Han, Yi and Qiao, Linbo and Li, Dongsheng", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.28", doi = "10.18653/v1/2023.findings-acl.28", pages = "417--430", abstract = "Temporal knowledge graph completion that predicts missing links for incomplete temporal knowledge graphs (TKG) is gaining increasing attention. Most existing works have achieved good results by incorporating time information into static knowledge graph embedding methods. However, they ignore the contextual nature of the TKG structure, i.e., a query-specific subgraph contains both structural and temporal neighboring facts. This paper presents SToKE, a novel method that employs a pre-trained language model (PLM) to learn joint Structural and Temporal Contextualized Knowledge Embeddings. Specifically, we first construct an event evolution tree (EET) for each query to enable PLMs to handle the TKG, which can be seen as a structured event sequence recording query-relevant structural and temporal contexts. We then propose a novel temporal embedding and structural matrix to learn the time information and structural dependencies of facts in EET. Finally, we formulate TKG completion as a mask prediction problem by masking the missing entity of the query to fine-tune pre-trained language models. Experimental results on three widely used datasets show the superiority of our model.", }
Temporal knowledge graph completion that predicts missing links for incomplete temporal knowledge graphs (TKG) is gaining increasing attention. Most existing works have achieved good results by incorporating time information into static knowledge graph embedding methods. However, they ignore the contextual nature of the TKG structure, i.e., a query-specific subgraph contains both structural and temporal neighboring facts. This paper presents SToKE, a novel method that employs a pre-trained language model (PLM) to learn joint Structural and Temporal Contextualized Knowledge Embeddings. Specifically, we first construct an event evolution tree (EET) for each query to enable PLMs to handle the TKG, which can be seen as a structured event sequence recording query-relevant structural and temporal contexts. We then propose a novel temporal embedding and structural matrix to learn the time information and structural dependencies of facts in EET. Finally, we formulate TKG completion as a mask prediction problem by masking the missing entity of the query to fine-tune pre-trained language models. Experimental results on three widely used datasets show the superiority of our model.
[ "Gao, Yifu", "He, Yongquan", "Kan, Zhigang", "Han, Yi", "Qiao, Linbo", "Li, Dongsheng" ]
Learning Joint Structural and Temporal Contextualized Knowledge Embeddings for Temporal Knowledge Graph Completion
findings-acl.28
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.29.bib
https://aclanthology.org/2023.findings-acl.29/
@inproceedings{laskar-etal-2023-systematic, title = "A Systematic Study and Comprehensive Evaluation of {C}hat{GPT} on Benchmark Datasets", author = "Laskar, Md Tahmid Rahman and Bari, M Saiful and Rahman, Mizanur and Bhuiyan, Md Amran Hossen and Joty, Shafiq and Huang, Jimmy", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.29", doi = "10.18653/v1/2023.findings-acl.29", pages = "431--469", abstract = "The development of large language models (LLMs) such as ChatGPT has attracted a lot of attention recently. However, their evaluation on benchmark academic datasets remains under-explored due to the difficulty of evaluating the generative outputs produced by these models against the ground truth. In this paper, we aim to present a thorough evaluation of ChatGPT{'}s performance on diverse academic datasets, covering tasks like question-answering, text summarization, code generation, commonsense reasoning, mathematical problem-solving, machine translation, bias detection, and ethical considerations. Specifically, we evaluate ChatGPT across 140 tasks and analyze 255K responses it generates in these datasets. This makes our work the largest evaluation of ChatGPT in NLP benchmarks. In short, our study aims to validate the strengths and weaknesses of ChatGPT in various tasks and provide insights for future research using LLMs. We also report a new emergent ability to follow multi-query instructions that we mostly found in ChatGPT and other instruction-tuned models. Our extensive evaluation shows that even though ChatGPT is capable of performing a wide variety of tasks, and may obtain impressive performance in several benchmark datasets, it is still far from achieving the ability to reliably solve many challenging tasks. By providing a thorough assessment of ChatGPT{'}s performance across diverse NLP tasks, this paper sets the stage for a targeted deployment of ChatGPT-like LLMs in real-world applications.", }
The development of large language models (LLMs) such as ChatGPT has attracted a lot of attention recently. However, their evaluation on benchmark academic datasets remains under-explored due to the difficulty of evaluating the generative outputs produced by these models against the ground truth. In this paper, we aim to present a thorough evaluation of ChatGPT{'}s performance on diverse academic datasets, covering tasks like question-answering, text summarization, code generation, commonsense reasoning, mathematical problem-solving, machine translation, bias detection, and ethical considerations. Specifically, we evaluate ChatGPT across 140 tasks and analyze 255K responses it generates in these datasets. This makes our work the largest evaluation of ChatGPT in NLP benchmarks. In short, our study aims to validate the strengths and weaknesses of ChatGPT in various tasks and provide insights for future research using LLMs. We also report a new emergent ability to follow multi-query instructions that we mostly found in ChatGPT and other instruction-tuned models. Our extensive evaluation shows that even though ChatGPT is capable of performing a wide variety of tasks, and may obtain impressive performance in several benchmark datasets, it is still far from achieving the ability to reliably solve many challenging tasks. By providing a thorough assessment of ChatGPT{'}s performance across diverse NLP tasks, this paper sets the stage for a targeted deployment of ChatGPT-like LLMs in real-world applications.
[ "Laskar, Md Tahmid Rahman", "Bari, M Saiful", "Rahman, Mizanur", "Bhuiyan, Md Amran Hossen", "Joty, Shafiq", "Huang, Jimmy" ]
A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets
findings-acl.29
Poster
2305.18486
[ "https://github.com/ntunlp/chatgpt_eval" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.30.bib
https://aclanthology.org/2023.findings-acl.30/
@inproceedings{yu-etal-2023-generating-deep, title = "Generating Deep Questions with Commonsense Reasoning Ability from the Text by Disentangled Adversarial Inference", author = "Yu, Jianxing and Wang, Shiqi and Zheng, Libin and Su, Qinliang and Liu, Wei and Zhao, Baoquan and Yin, Jian", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.30", doi = "10.18653/v1/2023.findings-acl.30", pages = "470--486", abstract = "This paper proposes a new task of commonsense question generation, which aims to yield deep-level and to-the-point questions from the text. Their answers need to reason over disjoint relevant contexts and external commonsense knowledge, such as encyclopedic facts and causality. The knowledge may not be explicitly mentioned in the text but is used by most humans for problem-solving. Such complex reasoning with hidden contexts involves deep semantic understanding. Thus, this task has great application value, such as making high-quality quizzes in advanced exams. Due to the lack of modeling complexity, existing methods may produce shallow questions that can be answered by simple word matching. To address these challenges, we propose a new QG model by simultaneously considering asking contents, expressive ways, and answering complexity. We first retrieve text-related commonsense context. Then we disentangle the key factors that control questions in terms of reasoning content and verbalized way. Independence priors and constraints are imposed to facilitate disentanglement. We further develop a discriminator to promote the deep results by considering their answering complexity. Through adversarial inference, we learn the latent factors from data. By sampling the expressive factor from the data distributions, diverse questions can be yielded. Evaluations of two typical data sets show the effectiveness of our approach.", }
This paper proposes a new task of commonsense question generation, which aims to yield deep-level and to-the-point questions from the text. Their answers need to reason over disjoint relevant contexts and external commonsense knowledge, such as encyclopedic facts and causality. The knowledge may not be explicitly mentioned in the text but is used by most humans for problem-solving. Such complex reasoning with hidden contexts involves deep semantic understanding. Thus, this task has great application value, such as making high-quality quizzes in advanced exams. Due to the lack of modeling complexity, existing methods may produce shallow questions that can be answered by simple word matching. To address these challenges, we propose a new QG model by simultaneously considering asking contents, expressive ways, and answering complexity. We first retrieve text-related commonsense context. Then we disentangle the key factors that control questions in terms of reasoning content and verbalized way. Independence priors and constraints are imposed to facilitate disentanglement. We further develop a discriminator to promote the deep results by considering their answering complexity. Through adversarial inference, we learn the latent factors from data. By sampling the expressive factor from the data distributions, diverse questions can be yielded. Evaluations of two typical data sets show the effectiveness of our approach.
[ "Yu, Jianxing", "Wang, Shiqi", "Zheng, Libin", "Su, Qinliang", "Liu, Wei", "Zhao, Baoquan", "Yin, Jian" ]
Generating Deep Questions with Commonsense Reasoning Ability from the Text by Disentangled Adversarial Inference
findings-acl.30
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.31.bib
https://aclanthology.org/2023.findings-acl.31/
@inproceedings{hung-etal-2023-tada, title = "{TADA}: Efficient Task-Agnostic Domain Adaptation for Transformers", author = {Hung, Chia-Chien and Lange, Lukas and Str{\"o}tgen, Jannik}, editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.31", doi = "10.18653/v1/2023.findings-acl.31", pages = "487--503", abstract = "Intermediate training of pre-trained transformer-based language models on domain-specific data leads to substantial gains for downstream tasks. To increase efficiency and prevent the catastrophic forgetting induced by full domain-adaptive pre-training, approaches such as adapters have been developed. However, these require additional parameters for each layer, and are criticized for their limited expressiveness. In this work, we introduce TADA, a novel task-agnostic domain adaptation method which is modular, parameter-efficient, and thus, data-efficient. Within TADA, we retrain the embeddings to learn domain-aware input representations and tokenizers for the transformer encoder, while freezing all other parameters of the model. Then, task-specific fine-tuning is performed. We further conduct experiments with meta-embeddings and newly introduced meta-tokenizers, resulting in one model per task in multi-domain use cases. Our broad evaluation in 4 downstream tasks for 14 domains across single- and multi-domain setups and high- and low-resource scenarios reveals that TADA is an effective and efficient alternative to full domain-adaptive pre-training and adapters for domain adaptation, while not introducing additional parameters or complex training steps.", }
Intermediate training of pre-trained transformer-based language models on domain-specific data leads to substantial gains for downstream tasks. To increase efficiency and prevent the catastrophic forgetting induced by full domain-adaptive pre-training, approaches such as adapters have been developed. However, these require additional parameters for each layer, and are criticized for their limited expressiveness. In this work, we introduce TADA, a novel task-agnostic domain adaptation method which is modular, parameter-efficient, and thus, data-efficient. Within TADA, we retrain the embeddings to learn domain-aware input representations and tokenizers for the transformer encoder, while freezing all other parameters of the model. Then, task-specific fine-tuning is performed. We further conduct experiments with meta-embeddings and newly introduced meta-tokenizers, resulting in one model per task in multi-domain use cases. Our broad evaluation in 4 downstream tasks for 14 domains across single- and multi-domain setups and high- and low-resource scenarios reveals that TADA is an effective and efficient alternative to full domain-adaptive pre-training and adapters for domain adaptation, while not introducing additional parameters or complex training steps.
[ "Hung, Chia-Chien", "Lange, Lukas", "Str{\\\"o}tgen, Jannik" ]
TADA: Efficient Task-Agnostic Domain Adaptation for Transformers
findings-acl.31
Poster
2305.12717
[ "https://github.com/boschresearch/tada" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.32.bib
https://aclanthology.org/2023.findings-acl.32/
@inproceedings{wang-etal-2023-robust, title = "Robust Natural Language Understanding with Residual Attention Debiasing", author = "Wang, Fei and Huang, James Y. and Yan, Tianyi and Zhou, Wenxuan and Chen, Muhao", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.32", doi = "10.18653/v1/2023.findings-acl.32", pages = "504--519", abstract = "Natural language understanding (NLU) models often suffer from unintended dataset biases. Among bias mitigation methods, ensemble-based debiasing methods, especially product-of-experts (PoE), have stood out for their impressive empirical success. However, previous ensemble-based debiasing methods typically apply debiasing on top-level logits without directly addressing biased attention patterns. Attention serves as the main media of feature interaction and aggregation in PLMs and plays a crucial role in providing robust prediction. In this paper, we propose REsidual Attention Debiasing (READ), an end-to-end debiasing method that mitigates unintended biases from attention. Experiments on three NLU benchmarks show that READ significantly improves the OOD performance of BERT-based models, including +12.9{\%} accuracy on HANS, +11.0{\%} accuracy on FEVER-Symmetric, and +2.7{\%} F1 on PAWS. Detailed analyses demonstrate the crucial role of unbiased attention in robust NLU models and that READ effectively mitigates biases in attention.", }
Natural language understanding (NLU) models often suffer from unintended dataset biases. Among bias mitigation methods, ensemble-based debiasing methods, especially product-of-experts (PoE), have stood out for their impressive empirical success. However, previous ensemble-based debiasing methods typically apply debiasing on top-level logits without directly addressing biased attention patterns. Attention serves as the main media of feature interaction and aggregation in PLMs and plays a crucial role in providing robust prediction. In this paper, we propose REsidual Attention Debiasing (READ), an end-to-end debiasing method that mitigates unintended biases from attention. Experiments on three NLU benchmarks show that READ significantly improves the OOD performance of BERT-based models, including +12.9{\%} accuracy on HANS, +11.0{\%} accuracy on FEVER-Symmetric, and +2.7{\%} F1 on PAWS. Detailed analyses demonstrate the crucial role of unbiased attention in robust NLU models and that READ effectively mitigates biases in attention.
[ "Wang, Fei", "Huang, James Y.", "Yan, Tianyi", "Zhou, Wenxuan", "Chen, Muhao" ]
Robust Natural Language Understanding with Residual Attention Debiasing
findings-acl.32
Poster
2305.17627
[ "https://github.com/luka-group/read" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.33.bib
https://aclanthology.org/2023.findings-acl.33/
@inproceedings{zhang-etal-2023-monet, title = "{M}o{NET}: Tackle State Momentum via Noise-Enhanced Training for Dialogue State Tracking", author = "Zhang, Haoning and Bao, Junwei and Sun, Haipeng and Wu, Youzheng and Li, Wenye and Cui, Shuguang and He, Xiaodong", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.33", doi = "10.18653/v1/2023.findings-acl.33", pages = "520--534", abstract = "Dialogue state tracking (DST) aims to convert the dialogue history into dialogue states which consist of slot-value pairs. As condensed structural information memorizes all history information, the dialogue state in the previous turn is typically adopted as the input for predicting the current state by DST models. However, these models tend to keep the predicted slot values unchanged, which is defined as state momentum in this paper. Specifically, the models struggle to update slot values that need to be changed and correct wrongly predicted slot values in the previous turn. To this end, we propose MoNET to tackle state momentum via noise-enhanced training. First, the previous state of each turn in the training data is noised via replacing some of its slot values. Then, the noised previous state is used as the input to learn to predict the current state, improving the model{'}s ability to update and correct slot values. Furthermore, a contrastive context-matching framework is designed to narrow the representation distance between a state and its corresponding noised variant, which reduces the impact of noised state and makes the model better understand the dialogue history. Experimental results on MultiWOZ datasets show that MoNET outperforms previous DST methods. Ablations and analysis verify the effectiveness of MoNET in alleviating state momentum issues and improving the anti-noise ability.", }
Dialogue state tracking (DST) aims to convert the dialogue history into dialogue states which consist of slot-value pairs. As condensed structural information memorizes all history information, the dialogue state in the previous turn is typically adopted as the input for predicting the current state by DST models. However, these models tend to keep the predicted slot values unchanged, which is defined as state momentum in this paper. Specifically, the models struggle to update slot values that need to be changed and correct wrongly predicted slot values in the previous turn. To this end, we propose MoNET to tackle state momentum via noise-enhanced training. First, the previous state of each turn in the training data is noised via replacing some of its slot values. Then, the noised previous state is used as the input to learn to predict the current state, improving the model{'}s ability to update and correct slot values. Furthermore, a contrastive context-matching framework is designed to narrow the representation distance between a state and its corresponding noised variant, which reduces the impact of noised state and makes the model better understand the dialogue history. Experimental results on MultiWOZ datasets show that MoNET outperforms previous DST methods. Ablations and analysis verify the effectiveness of MoNET in alleviating state momentum issues and improving the anti-noise ability.
[ "Zhang, Haoning", "Bao, Junwei", "Sun, Haipeng", "Wu, Youzheng", "Li, Wenye", "Cui, Shuguang", "He, Xiaodong" ]
MoNET: Tackle State Momentum via Noise-Enhanced Training for Dialogue State Tracking
findings-acl.33
Poster
2211.05503
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.34.bib
https://aclanthology.org/2023.findings-acl.34/
@inproceedings{cheng-etal-2023-pal, title = "{PAL}: Persona-Augmented Emotional Support Conversation Generation", author = "Cheng, Jiale and Sabour, Sahand and Sun, Hao and Chen, Zhuang and Huang, Minlie", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.34", doi = "10.18653/v1/2023.findings-acl.34", pages = "535--554", abstract = "Due to the lack of human resources for mental health support, there is an increasing demand for employing conversational agents for support. Recent work has demonstrated the effectiveness of dialogue models in providing emotional support. As previous studies have demonstrated that seekers{'} persona is an important factor for effective support, we investigate whether there are benefits to modeling such information in dialogue models for support. In this paper, our empirical analysis verifies that persona has an important impact on emotional support. Therefore, we propose a framework for dynamically inferring and modeling seekers{'} persona. We first train a model for inferring the seeker{'}s persona from the conversation history. Accordingly, we propose PAL, a model that leverages persona information and, in conjunction with our strategy-based controllable generation method, provides personalized emotional support. Automatic and manual evaluations demonstrate that PAL achieves state-of-the-art results, outperforming the baselines on the studied benchmark. Our code and data are publicly available at \url{https://github.com/chengjl19/PAL}.", }
Due to the lack of human resources for mental health support, there is an increasing demand for employing conversational agents for support. Recent work has demonstrated the effectiveness of dialogue models in providing emotional support. As previous studies have demonstrated that seekers{'} persona is an important factor for effective support, we investigate whether there are benefits to modeling such information in dialogue models for support. In this paper, our empirical analysis verifies that persona has an important impact on emotional support. Therefore, we propose a framework for dynamically inferring and modeling seekers{'} persona. We first train a model for inferring the seeker{'}s persona from the conversation history. Accordingly, we propose PAL, a model that leverages persona information and, in conjunction with our strategy-based controllable generation method, provides personalized emotional support. Automatic and manual evaluations demonstrate that PAL achieves state-of-the-art results, outperforming the baselines on the studied benchmark. Our code and data are publicly available at \url{https://github.com/chengjl19/PAL}.
[ "Cheng, Jiale", "Sabour, Sahand", "Sun, Hao", "Chen, Zhuang", "Huang, Minlie" ]
PAL: Persona-Augmented Emotional Support Conversation Generation
findings-acl.34
Poster
2212.09235
[ "https://github.com/chengjl19/pal" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.35.bib
https://aclanthology.org/2023.findings-acl.35/
@inproceedings{wang-etal-2023-farewell, title = "Farewell to Aimless Large-scale Pretraining: Influential Subset Selection for Language Model", author = "Wang, Xiao and Zhou, Weikang and Zhang, Qi and Zhou, Jie and Gao, SongYang and Wang, Junzhe and Zhang, Menghan and Gao, Xiang and Chen, Yun Wen and Gui, Tao", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.35", doi = "10.18653/v1/2023.findings-acl.35", pages = "555--568", abstract = "Pretrained language models have achieved remarkable success in various natural language processing tasks. However, pretraining has recently shifted toward larger models and larger data, which has resulted in significant computational and energy costs. In this paper, we propose Influence Subset Selection (ISS) for language model, which explicitly utilizes end-task knowledge to select a tiny subset of the pretraining corpus. Specifically, the ISS selects the samples that will provide the most positive influence on the performance of the end task. Furthermore, we design a gradient matching-based influence estimation method, which can drastically reduce the computation time of influence. With only 0.45{\%} of the data and a three-orders-of-magnitude lower computational cost, ISS outperformed pretrained models (e.g., RoBERTa) on eight datasets covering four domains.", }
Pretrained language models have achieved remarkable success in various natural language processing tasks. However, pretraining has recently shifted toward larger models and larger data, which has resulted in significant computational and energy costs. In this paper, we propose Influence Subset Selection (ISS) for language model, which explicitly utilizes end-task knowledge to select a tiny subset of the pretraining corpus. Specifically, the ISS selects the samples that will provide the most positive influence on the performance of the end task. Furthermore, we design a gradient matching-based influence estimation method, which can drastically reduce the computation time of influence. With only 0.45{\%} of the data and a three-orders-of-magnitude lower computational cost, ISS outperformed pretrained models (e.g., RoBERTa) on eight datasets covering four domains.
[ "Wang, Xiao", "Zhou, Weikang", "Zhang, Qi", "Zhou, Jie", "Gao, SongYang", "Wang, Junzhe", "Zhang, Menghan", "Gao, Xiang", "Chen, Yun Wen", "Gui, Tao" ]
Farewell to Aimless Large-scale Pretraining: Influential Subset Selection for Language Model
findings-acl.35
Poster
2305.12816
[ "https://github.com/nitwtog/iss" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.36.bib
https://aclanthology.org/2023.findings-acl.36/
@inproceedings{yadav-bansal-2023-exclusive, title = "Exclusive Supermask Subnetwork Training for Continual Learning", author = "Yadav, Prateek and Bansal, Mohit", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.36", doi = "10.18653/v1/2023.findings-acl.36", pages = "569--587", abstract = "Continual Learning (CL) methods focus on accumulating knowledge over time while avoiding catastrophic forgetting. Recently, Wortsman et al. (2020) proposed a CL method, SupSup, which uses a randomly initialized, fixed base network (model) and finds a supermask for each new task that selectively keeps or removes each weight to produce a subnetwork. They prevent forgetting as the network weights are not being updated. Although there is no forgetting, the performance of SupSup is sub-optimal because fixed weights restrict its representational power. Furthermore, there is no accumulation or transfer of knowledge inside the model when new tasks are learned. Hence, we propose ExSSNeT (Exclusive Supermask SubNetwork Training), that performs exclusive and non-overlapping subnetwork weight training. This avoids conflicting updates to the shared weights by subsequent tasks to improve performance while still preventing forgetting. Furthermore, we propose a novel KNN-based Knowledge Transfer (KKT) module that utilizes previously acquired knowledge to learn new tasks better and faster. We demonstrate that ExSSNeT outperforms strong previous methods on both NLP and Vision domains while preventing forgetting. Moreover, ExSSNeT is particularly advantageous for sparse masks that activate 2-10{\%} of the model parameters, resulting in an average improvement of 8.3{\%} over SupSup. Furthermore, ExSSNeT scales to a large number of tasks (100).", }
Continual Learning (CL) methods focus on accumulating knowledge over time while avoiding catastrophic forgetting. Recently, Wortsman et al. (2020) proposed a CL method, SupSup, which uses a randomly initialized, fixed base network (model) and finds a supermask for each new task that selectively keeps or removes each weight to produce a subnetwork. They prevent forgetting as the network weights are not being updated. Although there is no forgetting, the performance of SupSup is sub-optimal because fixed weights restrict its representational power. Furthermore, there is no accumulation or transfer of knowledge inside the model when new tasks are learned. Hence, we propose ExSSNeT (Exclusive Supermask SubNetwork Training), that performs exclusive and non-overlapping subnetwork weight training. This avoids conflicting updates to the shared weights by subsequent tasks to improve performance while still preventing forgetting. Furthermore, we propose a novel KNN-based Knowledge Transfer (KKT) module that utilizes previously acquired knowledge to learn new tasks better and faster. We demonstrate that ExSSNeT outperforms strong previous methods on both NLP and Vision domains while preventing forgetting. Moreover, ExSSNeT is particularly advantageous for sparse masks that activate 2-10{\%} of the model parameters, resulting in an average improvement of 8.3{\%} over SupSup. Furthermore, ExSSNeT scales to a large number of tasks (100).
[ "Yadav, Prateek", "Bansal, Mohit" ]
Exclusive Supermask Subnetwork Training for Continual Learning
findings-acl.36
Poster
2210.10209
[ "https://github.com/prateeky2806/exessnet" ]
https://huggingface.co/papers/2210.10209
0
1
0
2
1
[]
[]
[]
https://aclanthology.org/2023.findings-acl.37.bib
https://aclanthology.org/2023.findings-acl.37/
@inproceedings{lin-etal-2023-transferring, title = "Transferring General Multimodal Pretrained Models to Text Recognition", author = "Lin, Junyang and Ren, Xuancheng and Zhang, Yichang and Liu, Gao and Wang, Peng and Yang, An and Zhou, Chang", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.37", doi = "10.18653/v1/2023.findings-acl.37", pages = "588--597", abstract = "This paper proposes a new method, OFA-OCR, to transfer multimodal pretrained models to text recognition. Specifically, we recast text recognition as image captioning and directly transfer a unified vision-language pretrained model to the end task. Without pretraining on large-scale annotated or synthetic text recognition data, OFA-OCR outperforms the baselines and achieves state-of-the-art performance in the Chinese text recognition benchmark. Additionally, we construct an OCR pipeline with OFA-OCR, and we demonstrate that it can achieve competitive performance with the product-level API.", }
This paper proposes a new method, OFA-OCR, to transfer multimodal pretrained models to text recognition. Specifically, we recast text recognition as image captioning and directly transfer a unified vision-language pretrained model to the end task. Without pretraining on large-scale annotated or synthetic text recognition data, OFA-OCR outperforms the baselines and achieves state-of-the-art performance in the Chinese text recognition benchmark. Additionally, we construct an OCR pipeline with OFA-OCR, and we demonstrate that it can achieve competitive performance with the product-level API.
[ "Lin, Junyang", "Ren, Xuancheng", "Zhang, Yichang", "Liu, Gao", "Wang, Peng", "Yang, An", "Zhou, Chang" ]
Transferring General Multimodal Pretrained Models to Text Recognition
findings-acl.37
Poster
2212.09297
[ "https://github.com/ofa-sys/ofa" ]
https://huggingface.co/papers/2212.09297
1
0
0
7
1
[]
[]
[ "ake178178/OFA-OCR-dedao-demo001" ]
https://aclanthology.org/2023.findings-acl.38.bib
https://aclanthology.org/2023.findings-acl.38/
@inproceedings{zouhar-etal-2023-formal, title = "A Formal Perspective on Byte-Pair Encoding", author = "Zouhar, Vil{\'e}m and Meister, Clara and Gastaldi, Juan and Du, Li and Vieira, Tim and Sachan, Mrinmaya and Cotterell, Ryan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.38", doi = "10.18653/v1/2023.findings-acl.38", pages = "598--614", abstract = "Byte-Pair Encoding (BPE) is a popular algorithm used for tokenizing data in NLP, despite being devised initially as a compression method. BPE appears to be a greedy algorithm at face value, but the underlying optimization problem that BPE seeks to solve has not yet been laid down. We formalize BPE as a combinatorial optimization problem. Via submodular functions, we prove that the iterative greedy version is a 1/sigma*(1-e(-sigma))-approximation of an optimal merge sequence, where sigma is the total backward curvature with respect to the optimal merge sequence. Empirically the lower bound of the approximation is approx 0.37. We provide a faster implementation of BPE which improves the runtime complexity from O(NM) to O(N log M), where N is the sequence length and M is the merge count. Finally, we optimize the brute-force algorithm for optimal BPE using memoization.", }
Byte-Pair Encoding (BPE) is a popular algorithm used for tokenizing data in NLP, despite being devised initially as a compression method. BPE appears to be a greedy algorithm at face value, but the underlying optimization problem that BPE seeks to solve has not yet been laid down. We formalize BPE as a combinatorial optimization problem. Via submodular functions, we prove that the iterative greedy version is a 1/sigma*(1-e(-sigma))-approximation of an optimal merge sequence, where sigma is the total backward curvature with respect to the optimal merge sequence. Empirically the lower bound of the approximation is approx 0.37. We provide a faster implementation of BPE which improves the runtime complexity from O(NM) to O(N log M), where N is the sequence length and M is the merge count. Finally, we optimize the brute-force algorithm for optimal BPE using memoization.
[ "Zouhar, Vil{\\'e}m", "Meister, Clara", "Gastaldi, Juan", "Du, Li", "Vieira, Tim", "Sachan, Mrinmaya", "Cotterell, Ryan" ]
A Formal Perspective on Byte-Pair Encoding
findings-acl.38
Poster
2306.16837
[ "https://github.com/zouharvi/formal-bpe" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.39.bib
https://aclanthology.org/2023.findings-acl.39/
@inproceedings{preiss-2023-automatic, title = "Automatic Named Entity Obfuscation in Speech", author = "Preiss, Judita", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.39", doi = "10.18653/v1/2023.findings-acl.39", pages = "615--622", abstract = "Sharing data containing personal information often requires its anonymization, even when consent for sharing was obtained from the data originator. While approaches exist for automated anonymization of text, the area is not as thoroughly explored in speech. This work focuses on identifying, replacing and inserting replacement named entities synthesized using voice cloning into original audio thereby retaining prosodic information while reducing the likelihood of deanonymization. The approach employs a novel named entity recognition (NER) system built directly on speech by training HuBERT (Hsu et al, 2021) using the English speech NER dataset (Yadav et al, 2020). Name substitutes are found using a masked language model and are synthesized using text to speech voice cloning (Eren and team, 2021), upon which the substitute named entities are re-inserted into the original text. The approach is prototyped on a sample of the LibriSpeech corpus (Panyatov et al, 2015) with each step evaluated individually.", }
Sharing data containing personal information often requires its anonymization, even when consent for sharing was obtained from the data originator. While approaches exist for automated anonymization of text, the area is not as thoroughly explored in speech. This work focuses on identifying, replacing and inserting replacement named entities synthesized using voice cloning into original audio thereby retaining prosodic information while reducing the likelihood of deanonymization. The approach employs a novel named entity recognition (NER) system built directly on speech by training HuBERT (Hsu et al, 2021) using the English speech NER dataset (Yadav et al, 2020). Name substitutes are found using a masked language model and are synthesized using text to speech voice cloning (Eren and team, 2021), upon which the substitute named entities are re-inserted into the original text. The approach is prototyped on a sample of the LibriSpeech corpus (Panyatov et al, 2015) with each step evaluated individually.
[ "Preiss, Judita" ]
Automatic Named Entity Obfuscation in Speech
findings-acl.39
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.40.bib
https://aclanthology.org/2023.findings-acl.40/
@inproceedings{lee-kim-2023-recursion, title = "Recursion of Thought: A Divide-and-Conquer Approach to Multi-Context Reasoning with Language Models", author = "Lee, Soochan and Kim, Gunhee", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.40", doi = "10.18653/v1/2023.findings-acl.40", pages = "623--658", abstract = "Generating intermediate steps, or Chain of Thought (CoT), is an effective way to significantly improve language models{'} (LM) multi-step reasoning capability. However, the CoT lengths can grow rapidly with the problem complexity, easily exceeding the maximum context size. Instead of increasing the context limit, which has already been heavily investigated, we explore an orthogonal direction: making LMs divide a problem into multiple contexts. We propose a new inference framework, called Recursion of Thought (RoT), which introduces several special tokens that the models can output to trigger context-related operations. Extensive experiments with multiple architectures including GPT-3 show that RoT dramatically improves LMs{'} inference capability to solve problems, whose solution consists of hundreds of thousands of tokens.", }
Generating intermediate steps, or Chain of Thought (CoT), is an effective way to significantly improve language models{'} (LM) multi-step reasoning capability. However, the CoT lengths can grow rapidly with the problem complexity, easily exceeding the maximum context size. Instead of increasing the context limit, which has already been heavily investigated, we explore an orthogonal direction: making LMs divide a problem into multiple contexts. We propose a new inference framework, called Recursion of Thought (RoT), which introduces several special tokens that the models can output to trigger context-related operations. Extensive experiments with multiple architectures including GPT-3 show that RoT dramatically improves LMs{'} inference capability to solve problems, whose solution consists of hundreds of thousands of tokens.
[ "Lee, Soochan", "Kim, Gunhee" ]
Recursion of Thought: A Divide-and-Conquer Approach to Multi-Context Reasoning with Language Models
findings-acl.40
Poster
2306.06891
[ "https://github.com/soochan-lee/rot" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.41.bib
https://aclanthology.org/2023.findings-acl.41/
@inproceedings{zou-etal-2023-unis, title = "{U}ni{S}-{MMC}: Multimodal Classification via Unimodality-supervised Multimodal Contrastive Learning", author = "Zou, Heqing and Shen, Meng and Chen, Chen and Hu, Yuchen and Rajan, Deepu and Chng, Eng Siong", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.41", doi = "10.18653/v1/2023.findings-acl.41", pages = "659--672", abstract = "Multimodal learning aims to imitate human beings to acquire complementary information from multiple modalities for various downstream tasks. However, traditional aggregation-based multimodal fusion methods ignore the inter-modality relationship, treat each modality equally, suffer sensor noise, and thus reduce multimodal learning performance. In this work, we propose a novel multimodal contrastive method to explore more reliable multimodal representations under the weak supervision of unimodal predicting. Specifically, we first capture task-related unimodal representations and the unimodal predictions from the introduced unimodal predicting task. Then the unimodal representations are aligned with the more effective one by the designed multimodal contrastive method under the supervision of the unimodal predictions. Experimental results with fused features on two image-text classification benchmarks UPMC-Food-101 and N24News show that our proposed Unimodality-Supervised MultiModal Contrastive UniS-MMC learning method outperforms current state-of-the-art multimodal methods. The detailed ablation study and analysis further demonstrate the advantage of our proposed method.", }
Multimodal learning aims to imitate human beings to acquire complementary information from multiple modalities for various downstream tasks. However, traditional aggregation-based multimodal fusion methods ignore the inter-modality relationship, treat each modality equally, suffer sensor noise, and thus reduce multimodal learning performance. In this work, we propose a novel multimodal contrastive method to explore more reliable multimodal representations under the weak supervision of unimodal predicting. Specifically, we first capture task-related unimodal representations and the unimodal predictions from the introduced unimodal predicting task. Then the unimodal representations are aligned with the more effective one by the designed multimodal contrastive method under the supervision of the unimodal predictions. Experimental results with fused features on two image-text classification benchmarks UPMC-Food-101 and N24News show that our proposed Unimodality-Supervised MultiModal Contrastive UniS-MMC learning method outperforms current state-of-the-art multimodal methods. The detailed ablation study and analysis further demonstrate the advantage of our proposed method.
[ "Zou, Heqing", "Shen, Meng", "Chen, Chen", "Hu, Yuchen", "Rajan, Deepu", "Chng, Eng Siong" ]
UniS-MMC: Multimodal Classification via Unimodality-supervised Multimodal Contrastive Learning
findings-acl.41
Poster
2305.09299
[ "https://github.com/vincent-zhq/unis-mmc" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.42.bib
https://aclanthology.org/2023.findings-acl.42/
@inproceedings{wang-etal-2023-robustness, title = "Robustness-Aware Word Embedding Improves Certified Robustness to Adversarial Word Substitutions", author = "Wang, Yibin and Yang, Yichen and He, Di and He, Kun", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.42", doi = "10.18653/v1/2023.findings-acl.42", pages = "673--687", abstract = "Natural Language Processing (NLP) models have gained great success on clean texts, but they are known to be vulnerable to adversarial examples typically crafted by synonym substitutions. In this paper, we target to solve this problem and find that word embedding is important to the certified robustness of NLP models. Given the findings, we propose the Embedding Interval Bound Constraint (EIBC) triplet loss to train robustness-aware word embeddings for better certified robustness. We optimize the EIBC triplet loss to reduce distances between synonyms in the embedding space, which is theoretically proven to make the verification boundary tighter. Meanwhile, we enlarge distances among non-synonyms, maintaining the semantic representation of word embeddings. Our method is conceptually simple and componentized. It can be easily combined with IBP training and improves the certified robust accuracy from 76.73{\%} to 84.78{\%} on the IMDB dataset. Experiments demonstrate that our method outperforms various state-of-the-art certified defense baselines and generalizes well to unseen substitutions. The code is available at \url{https://github.com/JHL-HUST/EIBC-IBP/}.", }
Natural Language Processing (NLP) models have gained great success on clean texts, but they are known to be vulnerable to adversarial examples typically crafted by synonym substitutions. In this paper, we target to solve this problem and find that word embedding is important to the certified robustness of NLP models. Given the findings, we propose the Embedding Interval Bound Constraint (EIBC) triplet loss to train robustness-aware word embeddings for better certified robustness. We optimize the EIBC triplet loss to reduce distances between synonyms in the embedding space, which is theoretically proven to make the verification boundary tighter. Meanwhile, we enlarge distances among non-synonyms, maintaining the semantic representation of word embeddings. Our method is conceptually simple and componentized. It can be easily combined with IBP training and improves the certified robust accuracy from 76.73{\%} to 84.78{\%} on the IMDB dataset. Experiments demonstrate that our method outperforms various state-of-the-art certified defense baselines and generalizes well to unseen substitutions. The code is available at \url{https://github.com/JHL-HUST/EIBC-IBP/}.
[ "Wang, Yibin", "Yang, Yichen", "He, Di", "He, Kun" ]
Robustness-Aware Word Embedding Improves Certified Robustness to Adversarial Word Substitutions
findings-acl.42
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.43.bib
https://aclanthology.org/2023.findings-acl.43/
@inproceedings{liu-etal-2023-exploring, title = "Exploring the Compositional Generalization in Context Dependent Text-to-{SQL} Parsing", author = "Liu, Aiwei and Liu, Wei and Hu, Xuming and Li, Shuang and Ma, Fukun and Yang, Yawen and Wen, Lijie", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.43", doi = "10.18653/v1/2023.findings-acl.43", pages = "688--700", abstract = "In the context-dependent Text-to-SQL task, the generated SQL statements are refined iteratively based on the user input utterance from each interaction. The input text from each interaction can be viewed as component modifications to the previous SQL statements, which could be further extracted as the modification patterns. Since these modification patterns could also be combined with other SQL statements, the models are supposed to have the compositional generalization to these novel combinations. This work is the first exploration of compositional generalization in context-dependent Text-to-SQL scenarios. To facilitate related studies, we constructed two challenging benchmarks named CoSQL-CG and SParC-CG by recombining the modification patterns and existing SQL statements. The following experiments show that almost all current models struggle on our proposed benchmarks. Furthermore, we found that better aligning the previous SQL statements with the input utterance could give models better combinatorial generalization ability. Based on these observations, we propose a method named p-align to improve the combinatorial generalization of Text-to-SQL models. Further experiments validate the effectiveness of our model.", }
In the context-dependent Text-to-SQL task, the generated SQL statements are refined iteratively based on the user input utterance from each interaction. The input text from each interaction can be viewed as component modifications to the previous SQL statements, which could be further extracted as the modification patterns. Since these modification patterns could also be combined with other SQL statements, the models are supposed to have the compositional generalization to these novel combinations. This work is the first exploration of compositional generalization in context-dependent Text-to-SQL scenarios. To facilitate related studies, we constructed two challenging benchmarks named CoSQL-CG and SParC-CG by recombining the modification patterns and existing SQL statements. The following experiments show that almost all current models struggle on our proposed benchmarks. Furthermore, we found that better aligning the previous SQL statements with the input utterance could give models better combinatorial generalization ability. Based on these observations, we propose a method named p-align to improve the combinatorial generalization of Text-to-SQL models. Further experiments validate the effectiveness of our model.
[ "Liu, Aiwei", "Liu, Wei", "Hu, Xuming", "Li, Shuang", "Ma, Fukun", "Yang, Yawen", "Wen, Lijie" ]
Exploring the Compositional Generalization in Context Dependent Text-to-SQL Parsing
findings-acl.43
Poster
2306.04480
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.44.bib
https://aclanthology.org/2023.findings-acl.44/
@inproceedings{murzaku-etal-2023-towards, title = "Towards Generative Event Factuality Prediction", author = "Murzaku, John and Osborne, Tyler and Aviram, Amittai and Rambow, Owen", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.44", doi = "10.18653/v1/2023.findings-acl.44", pages = "701--715", abstract = "We present a novel end-to-end generative task and system for predicting event factuality holders, targets, and their associated factuality values. We perform the first experiments using all sources and targets of factuality statements from the FactBank corpus. We perform multi-task learning with other tasks and event-factuality corpora to improve on the FactBank source and target task. We argue that careful domain specific target text output format in generative systems is important and verify this with multiple experiments on target text output structure. We redo previous state-of-the-art author-only event factuality experiments and also offer insights towards a generative paradigm for the author-only event factuality prediction task.", }
We present a novel end-to-end generative task and system for predicting event factuality holders, targets, and their associated factuality values. We perform the first experiments using all sources and targets of factuality statements from the FactBank corpus. We perform multi-task learning with other tasks and event-factuality corpora to improve on the FactBank source and target task. We argue that careful domain specific target text output format in generative systems is important and verify this with multiple experiments on target text output structure. We redo previous state-of-the-art author-only event factuality experiments and also offer insights towards a generative paradigm for the author-only event factuality prediction task.
[ "Murzaku, John", "Osborne, Tyler", "Aviram, Amittai", "Rambow, Owen" ]
Towards Generative Event Factuality Prediction
findings-acl.44
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.45.bib
https://aclanthology.org/2023.findings-acl.45/
@inproceedings{huang-etal-2023-language, title = "Can Language Models Be Specific? How?", author = "Huang, Jie and Chang, Kevin Chen-Chuan and Xiong, Jinjun and Hwu, Wen-mei", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.45", doi = "10.18653/v1/2023.findings-acl.45", pages = "716--727", abstract = "{``}He is a person{''}, {``}Paris is located on the earth{''}. Both statements are correct but meaningless - due to lack of specificity. In this paper, we propose to measure how specific the language of pre-trained language models (PLMs) is. To achieve this, we introduce a novel approach to build a benchmark for specificity testing by forming masked token prediction tasks with prompts. For instance, given {``}Toronto is located in [MASK].{''}, we want to test whether a more specific answer will be better filled in by PLMs, e.g., Ontario instead of Canada. From our evaluations, we show that existing PLMs have only a slight preference for more specific answers. We identify underlying factors affecting the specificity and design two prompt-based methods to improve the specificity. Results show that the specificity of the models can be improved by the proposed methods without additional training. We hope this work can bring to awareness the notion of specificity of language models and encourage the research community to further explore this important but understudied problem.", }
{``}He is a person{''}, {``}Paris is located on the earth{''}. Both statements are correct but meaningless - due to lack of specificity. In this paper, we propose to measure how specific the language of pre-trained language models (PLMs) is. To achieve this, we introduce a novel approach to build a benchmark for specificity testing by forming masked token prediction tasks with prompts. For instance, given {``}Toronto is located in [MASK].{''}, we want to test whether a more specific answer will be better filled in by PLMs, e.g., Ontario instead of Canada. From our evaluations, we show that existing PLMs have only a slight preference for more specific answers. We identify underlying factors affecting the specificity and design two prompt-based methods to improve the specificity. Results show that the specificity of the models can be improved by the proposed methods without additional training. We hope this work can bring to awareness the notion of specificity of language models and encourage the research community to further explore this important but understudied problem.
[ "Huang, Jie", "Chang, Kevin Chen-Chuan", "Xiong, Jinjun", "Hwu, Wen-mei" ]
Can Language Models Be Specific? How?
findings-acl.45
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.46.bib
https://aclanthology.org/2023.findings-acl.46/
@inproceedings{li-etal-2023-web, title = "The Web Can Be Your Oyster for Improving Language Models", author = "Li, Junyi and Tang, Tianyi and Zhao, Wayne Xin and Wang, Jingyuan and Nie, Jian-Yun and Wen, Ji-Rong", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.46", doi = "10.18653/v1/2023.findings-acl.46", pages = "728--746", abstract = "Pretrained language models (PLMs) encode a large amount of world knowledge. However, as such knowledge is frozen at the time of model training, the models become static and limited by the training data at that time. In order to further improve the capacity of PLMs for knowledge-intensive tasks, we consider augmenting PLMs with the large-scale web using search engine. Unlike previous augmentation sources (e.g., Wikipedia data dump), the web provides broader, more comprehensive and constantly updated information. In this paper, we present a web-augmented PLM {--} UniWeb, which is trained over 16 knowledge-intensive tasks in a unified text-to-text format. Instead of simply using the retrieved contents from web, our approach has made two major improvements. Firstly, we propose an adaptive search engine assisted learning method that can self-evaluate the confidence level of PLM{'}s predictions, and adaptively determine when to refer to the web for more data, which can avoid useless or noisy augmentation from web. Secondly, we design a pretraining task, i.e., continual knowledge learning, based on salient spans prediction, to reduce the discrepancy between the encoded and retrieved knowledge. Experiments on a wide range of knowledge-intensive tasks show that our model significantly outperforms previous retrieval-augmented methods.", }
Pretrained language models (PLMs) encode a large amount of world knowledge. However, as such knowledge is frozen at the time of model training, the models become static and limited by the training data at that time. In order to further improve the capacity of PLMs for knowledge-intensive tasks, we consider augmenting PLMs with the large-scale web using search engine. Unlike previous augmentation sources (e.g., Wikipedia data dump), the web provides broader, more comprehensive and constantly updated information. In this paper, we present a web-augmented PLM {--} UniWeb, which is trained over 16 knowledge-intensive tasks in a unified text-to-text format. Instead of simply using the retrieved contents from web, our approach has made two major improvements. Firstly, we propose an adaptive search engine assisted learning method that can self-evaluate the confidence level of PLM{'}s predictions, and adaptively determine when to refer to the web for more data, which can avoid useless or noisy augmentation from web. Secondly, we design a pretraining task, i.e., continual knowledge learning, based on salient spans prediction, to reduce the discrepancy between the encoded and retrieved knowledge. Experiments on a wide range of knowledge-intensive tasks show that our model significantly outperforms previous retrieval-augmented methods.
[ "Li, Junyi", "Tang, Tianyi", "Zhao, Wayne Xin", "Wang, Jingyuan", "Nie, Jian-Yun", "Wen, Ji-Rong" ]
The Web Can Be Your Oyster for Improving Language Models
findings-acl.46
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.47.bib
https://aclanthology.org/2023.findings-acl.47/
@inproceedings{kim-komachi-2023-enhancing, title = "Enhancing Few-shot Cross-lingual Transfer with Target Language Peculiar Examples", author = "Kim, Hwichan and Komachi, Mamoru", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.47", doi = "10.18653/v1/2023.findings-acl.47", pages = "747--767", abstract = "Few-shot cross-lingual transfer, fine-tuning Multilingual Masked Language Model (MMLM) with source language labeled data and a small amount of target language labeled data, provides excellent performance in the target language. However, if no labeled data in the target language are available, they need to be created through human annotations. In this study, we devise a metric to select annotation candidates from an unlabeled data pool that efficiently enhance accuracy for few-shot cross-lingual transfer. It is known that training a model with hard examples is important to improve the model{'}s performance. Therefore, we first identify examples that MMLM cannot solve in a zero-shot cross-lingual transfer setting and demonstrate that it is hard to predict peculiar examples in the target language, i.e., the examples distant from the source language examples in cross-lingual semantic space of the MMLM. We then choose high peculiarity examples as annotation candidates and perform few-shot cross-lingual transfer. In comprehensive experiments with 20 languages and 6 tasks, we demonstrate that the high peculiarity examples improve the target language accuracy compared to other candidate selection methods proposed in previous studies.", }
Few-shot cross-lingual transfer, fine-tuning Multilingual Masked Language Model (MMLM) with source language labeled data and a small amount of target language labeled data, provides excellent performance in the target language. However, if no labeled data in the target language are available, they need to be created through human annotations. In this study, we devise a metric to select annotation candidates from an unlabeled data pool that efficiently enhance accuracy for few-shot cross-lingual transfer. It is known that training a model with hard examples is important to improve the model{'}s performance. Therefore, we first identify examples that MMLM cannot solve in a zero-shot cross-lingual transfer setting and demonstrate that it is hard to predict peculiar examples in the target language, i.e., the examples distant from the source language examples in cross-lingual semantic space of the MMLM. We then choose high peculiarity examples as annotation candidates and perform few-shot cross-lingual transfer. In comprehensive experiments with 20 languages and 6 tasks, we demonstrate that the high peculiarity examples improve the target language accuracy compared to other candidate selection methods proposed in previous studies.
[ "Kim, Hwichan", "Komachi, Mamoru" ]
Enhancing Few-shot Cross-lingual Transfer with Target Language Peculiar Examples
findings-acl.47
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.48.bib
https://aclanthology.org/2023.findings-acl.48/
@inproceedings{winata-etal-2023-overcoming, title = "Overcoming Catastrophic Forgetting in Massively Multilingual Continual Learning", author = "Winata, Genta and Xie, Lingjue and Radhakrishnan, Karthik and Wu, Shijie and Jin, Xisen and Cheng, Pengxiang and Kulkarni, Mayank and Preotiuc-Pietro, Daniel", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.48", doi = "10.18653/v1/2023.findings-acl.48", pages = "768--777", abstract = "Real-life multilingual systems should be able to efficiently incorporate new languages as data distributions fed to the system evolve and shift over time. To do this, systems need to handle the issue of catastrophic forgetting, where the model performance drops for languages or tasks seen further in its past. In this paper, we study catastrophic forgetting, as well as methods to minimize this, in a massively multilingual continual learning framework involving up to 51 languages and covering both classification and sequence labeling tasks. We present LR ADJUST, a learning rate scheduling method that is simple, yet effective in preserving new information without strongly overwriting past knowledge. Furthermore, we show that this method is effective across multiple continual learning approaches. Finally, we provide further insights into the dynamics of catastrophic forgetting in this massively multilingual setup.", }
Real-life multilingual systems should be able to efficiently incorporate new languages as data distributions fed to the system evolve and shift over time. To do this, systems need to handle the issue of catastrophic forgetting, where the model performance drops for languages or tasks seen further in its past. In this paper, we study catastrophic forgetting, as well as methods to minimize this, in a massively multilingual continual learning framework involving up to 51 languages and covering both classification and sequence labeling tasks. We present LR ADJUST, a learning rate scheduling method that is simple, yet effective in preserving new information without strongly overwriting past knowledge. Furthermore, we show that this method is effective across multiple continual learning approaches. Finally, we provide further insights into the dynamics of catastrophic forgetting in this massively multilingual setup.
[ "Winata, Genta", "Xie, Lingjue", "Radhakrishnan, Karthik", "Wu, Shijie", "Jin, Xisen", "Cheng, Pengxiang", "Kulkarni, Mayank", "Preotiuc-Pietro, Daniel" ]
Overcoming Catastrophic Forgetting in Massively Multilingual Continual Learning
findings-acl.48
Poster
2305.16252
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.49.bib
https://aclanthology.org/2023.findings-acl.49/
@inproceedings{sun-etal-2023-unifine, title = "{U}ni{F}ine: A Unified and Fine-grained Approach for Zero-shot Vision-Language Understanding", author = "Sun, Rui and Wang, Zhecan and You, Haoxuan and Codella, Noel and Chang, Kai-Wei and Chang, Shih-Fu", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.49", doi = "10.18653/v1/2023.findings-acl.49", pages = "778--793", abstract = "Vision-language tasks, such as VQA, SNLI-VE, and VCR are challenging because they require the model{'}s reasoning ability to understand the semantics of the visual world and natural language. Supervised methods working for vision-language tasks have been well-studied. However, solving these tasks in a zero-shot setting is less explored. Since Contrastive Language-Image Pre-training (CLIP) has shown remarkable zero-shot performance on image-text matching, previous works utilized its strong zero-shot ability by converting vision-language tasks into an image-text matching problem, and they mainly consider global-level matching (e.g., the whole image or sentence). However, we find visual and textual fine-grained information, e.g., keywords in the sentence and objects in the image, can be fairly informative for semantics understanding. Inspired by this, we propose a unified framework to take advantage of the fine-grained information for zero-shot vision-language learning, covering multiple tasks such as VQA, SNLI-VE, and VCR. Our experiments show that our framework outperforms former zero-shot methods on VQA and achieves substantial improvement on SNLI-VE and VCR. Furthermore, our ablation studies confirm the effectiveness and generalizability of our proposed method.", }
Vision-language tasks, such as VQA, SNLI-VE, and VCR are challenging because they require the model{'}s reasoning ability to understand the semantics of the visual world and natural language. Supervised methods working for vision-language tasks have been well-studied. However, solving these tasks in a zero-shot setting is less explored. Since Contrastive Language-Image Pre-training (CLIP) has shown remarkable zero-shot performance on image-text matching, previous works utilized its strong zero-shot ability by converting vision-language tasks into an image-text matching problem, and they mainly consider global-level matching (e.g., the whole image or sentence). However, we find visual and textual fine-grained information, e.g., keywords in the sentence and objects in the image, can be fairly informative for semantics understanding. Inspired by this, we propose a unified framework to take advantage of the fine-grained information for zero-shot vision-language learning, covering multiple tasks such as VQA, SNLI-VE, and VCR. Our experiments show that our framework outperforms former zero-shot methods on VQA and achieves substantial improvement on SNLI-VE and VCR. Furthermore, our ablation studies confirm the effectiveness and generalizability of our proposed method.
[ "Sun, Rui", "Wang, Zhecan", "You, Haoxuan", "Codella, Noel", "Chang, Kai-Wei", "Chang, Shih-Fu" ]
UniFine: A Unified and Fine-grained Approach for Zero-shot Vision-Language Understanding
findings-acl.49
Poster
2307.00862
[ "https://github.com/threesr/unifine" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.50.bib
https://aclanthology.org/2023.findings-acl.50/
@inproceedings{zhang-etal-2023-aligning, title = "Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors", author = "Zhang, Kai and Jimenez Gutierrez, Bernal and Su, Yu", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.50", doi = "10.18653/v1/2023.findings-acl.50", pages = "794--812", abstract = "Recent work has shown that fine-tuning large language models (LLMs) on large-scale instruction-following datasets substantially improves their performance on a wide range of NLP tasks, especially in the zero-shot setting. However, even advanced instruction-tuned LLMs still fail to outperform small LMs on relation extraction (RE), a fundamental information extraction task. We hypothesize that instruction-tuning has been unable to elicit strong RE capabilities in LLMs due to RE{'}s low incidence in instruction-tuning datasets, making up less than 1{\%} of all tasks (Wang et al. 2022). To address this limitation, we propose QA4RE, a framework that aligns RE with question answering (QA), a predominant task in instruction-tuning datasets. Comprehensive zero-shot RE experiments over four datasets with two series of instruction-tuned LLMs (six LLMs in total) demonstrate that our QA4RE framework consistently improves LLM performance, strongly verifying our hypothesis and enabling LLMs to outperform strong zero-shot baselines by a large margin. Additionally, we provide thorough experiments and discussions to show the robustness, few-shot effectiveness, and strong transferability of our QA4RE framework. This work illustrates a promising way of adapting LLMs to challenging and underrepresented tasks by aligning these tasks with more common instruction-tuning tasks like QA.", }
Recent work has shown that fine-tuning large language models (LLMs) on large-scale instruction-following datasets substantially improves their performance on a wide range of NLP tasks, especially in the zero-shot setting. However, even advanced instruction-tuned LLMs still fail to outperform small LMs on relation extraction (RE), a fundamental information extraction task. We hypothesize that instruction-tuning has been unable to elicit strong RE capabilities in LLMs due to RE{'}s low incidence in instruction-tuning datasets, making up less than 1{\%} of all tasks (Wang et al. 2022). To address this limitation, we propose QA4RE, a framework that aligns RE with question answering (QA), a predominant task in instruction-tuning datasets. Comprehensive zero-shot RE experiments over four datasets with two series of instruction-tuned LLMs (six LLMs in total) demonstrate that our QA4RE framework consistently improves LLM performance, strongly verifying our hypothesis and enabling LLMs to outperform strong zero-shot baselines by a large margin. Additionally, we provide thorough experiments and discussions to show the robustness, few-shot effectiveness, and strong transferability of our QA4RE framework. This work illustrates a promising way of adapting LLMs to challenging and underrepresented tasks by aligning these tasks with more common instruction-tuning tasks like QA.
[ "Zhang, Kai", "Jimenez Gutierrez, Bernal", "Su, Yu" ]
Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors
findings-acl.50
Poster
2305.11159
[ "https://github.com/osu-nlp-group/qa4re" ]
https://huggingface.co/papers/2305.11159
0
1
0
3
1
[]
[]
[]
https://aclanthology.org/2023.findings-acl.51.bib
https://aclanthology.org/2023.findings-acl.51/
@inproceedings{held-etal-2023-tada, title = "{TADA} : Task Agnostic Dialect Adapters for {E}nglish", author = "Held, William and Ziems, Caleb and Yang, Diyi", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.51", doi = "10.18653/v1/2023.findings-acl.51", pages = "813--824", abstract = "Large Language Models, the dominant starting point for Natural Language Processing (NLP) applications, fail at a higher rate for speakers of English dialects other than Standard American English (SAE). Prior work addresses this using task specific data or synthetic data augmentation, both of which require intervention for each dialect and task pair. This poses a scalability issue that prevents the broad adoption of robust dialectal English NLP. We introduce a simple yet effective method for task-agnostic dialect adaptation by aligning non-SAE dialects using adapters and composing them with task-specific adapters from SAE. Task-Agnostic Dialect Adapters (TADA) improve dialectal robustness on 4 dialectal variants of the GLUE benchmark without task-specific supervision.", }
Large Language Models, the dominant starting point for Natural Language Processing (NLP) applications, fail at a higher rate for speakers of English dialects other than Standard American English (SAE). Prior work addresses this using task specific data or synthetic data augmentation, both of which require intervention for each dialect and task pair. This poses a scalability issue that prevents the broad adoption of robust dialectal English NLP. We introduce a simple yet effective method for task-agnostic dialect adaptation by aligning non-SAE dialects using adapters and composing them with task-specific adapters from SAE. Task-Agnostic Dialect Adapters (TADA) improve dialectal robustness on 4 dialectal variants of the GLUE benchmark without task-specific supervision.
[ "Held, William", "Ziems, Caleb", "Yang, Diyi" ]
TADA : Task Agnostic Dialect Adapters for English
findings-acl.51
Poster
[ "https://github.com/helw150/tada" ]
-1
-1
-1
-1
0
[]
[]
[]