bibtex_url | proceedings | bibtext | abstract | authors | title | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | paper_page_exists_pre_conf | Models | Datasets | Spaces |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2023.acl-demo.26.bib | https://aclanthology.org/2023.acl-demo.26/ | @inproceedings{hu-etal-2023-opendelta,
title = "{O}pen{D}elta: A Plug-and-play Library for Parameter-efficient Adaptation of Pre-trained Models",
author = "Hu, Shengding and
Ding, Ning and
Zhao, Weilin and
Lv, Xingtai and
Zhang, Zhen and
Liu, Zhiyuan and
Sun, Maosong",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.26",
doi = "10.18653/v1/2023.acl-demo.26",
pages = "274--281",
abstract = "The scale of large pre-trained models (PTMs) poses significant challenges in adapting to downstream tasks due to the high optimization overhead and storage costs associated with full-parameter fine-tuning. To address this, many studies explore parameter-efficient tuning methods, also framed as {``}delta tuning{''} in Ding et al. (2022), which updates only a small subset of parameters, known as {``}delta modules{''}, while keeping the backbone model{'}s parameters fixed. However, the practicality and flexibility of delta tuning have been limited due to existing implementations that directly modify the code of the backbone PTMs and hard-code specific delta tuning methods for each PTM. In this paper, we present OpenDelta, an open-source library that overcomes these limitations by providing a plug-and-play implementation of various delta tuning methods. Our novel techniques eliminate the need to modify the backbone PTMs{'} code, making OpenDelta compatible with different, even novel PTMs. OpenDelta is designed to be simple, modular, and extensible, providing a comprehensive platform for researchers and practitioners to adapt large PTMs efficiently.",
}
| The scale of large pre-trained models (PTMs) poses significant challenges in adapting to downstream tasks due to the high optimization overhead and storage costs associated with full-parameter fine-tuning. To address this, many studies explore parameter-efficient tuning methods, also framed as {``}delta tuning{''} in Ding et al. (2022), which updates only a small subset of parameters, known as {``}delta modules{''}, while keeping the backbone model{'}s parameters fixed. However, the practicality and flexibility of delta tuning have been limited due to existing implementations that directly modify the code of the backbone PTMs and hard-code specific delta tuning methods for each PTM. In this paper, we present OpenDelta, an open-source library that overcomes these limitations by providing a plug-and-play implementation of various delta tuning methods. Our novel techniques eliminate the need to modify the backbone PTMs{'} code, making OpenDelta compatible with different, even novel PTMs. OpenDelta is designed to be simple, modular, and extensible, providing a comprehensive platform for researchers and practitioners to adapt large PTMs efficiently. | [
"Hu, Shengding",
"Ding, Ning",
"Zhao, Weilin",
"Lv, Xingtai",
"Zhang, Zhen",
"Liu, Zhiyuan",
"Sun, Maosong"
] | OpenDelta: A Plug-and-play Library for Parameter-efficient Adaptation of Pre-trained Models | acl-demo.26 | Poster | 2307.03084 | [
"https://github.com/thunlp/opendelta"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
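For context on the "plug-and-play" claim in the OpenDelta row above: the library attaches delta modules to an unmodified Hugging Face backbone. A minimal sketch following the usage documented in the OpenDelta repository (the `modified_modules` values are T5-specific assumptions, not universal defaults):

```python
# Minimal OpenDelta sketch: attach LoRA delta modules to an unmodified backbone.
# Assumes `pip install opendelta transformers`; the modified_modules names below
# are T5-specific assumptions.
from transformers import AutoModelForSeq2SeqLM
from opendelta import LoraModel

backbone = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Inject LoRA modules into the attention projections without editing
# the backbone's source code.
delta_model = LoraModel(
    backbone_model=backbone,
    modified_modules=["SelfAttention.q", "SelfAttention.v"],
)

# Freeze everything except the delta parameters, then inspect the result.
delta_model.freeze_module(exclude=["deltas"])
delta_model.log()  # prints the modified architecture and trainable-parameter count
```

From here, `backbone` goes into a standard training loop; only the delta parameters receive gradients, which is what keeps per-task storage cheap.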
https://aclanthology.org/2023.acl-demo.27.bib | https://aclanthology.org/2023.acl-demo.27/ | @inproceedings{yair-etal-2023-hierarchy,
title = "Hierarchy Builder: Organizing Textual Spans into a Hierarchy to Facilitate Navigation",
author = "Yair, Itay and
Taub-Tabib, Hillel and
Goldberg, Yoav",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.27",
doi = "10.18653/v1/2023.acl-demo.27",
pages = "282--290",
abstract = "Information extraction systems often producehundreds to thousands of strings on a specifictopic. We present a method that facilitatesbetter consumption of these strings, in an ex-ploratory setting in which a user wants to bothget a broad overview of what{'}s available, and achance to dive deeper on some aspects. The sys-tem works by grouping similar items together,and arranging the remaining items into a hierar-chical navigable DAG structure. We apply themethod to medical information extraction.",
}
 | Information extraction systems often produce hundreds to thousands of strings on a specific topic. We present a method that facilitates better consumption of these strings, in an exploratory setting in which a user wants to both get a broad overview of what{'}s available, and a chance to dive deeper on some aspects. The system works by grouping similar items together, and arranging the remaining items into a hierarchical navigable DAG structure. We apply the method to medical information extraction. | [
"Yair, Itay",
"Taub-Tabib, Hillel",
"Goldberg, Yoav"
] | Hierarchy Builder: Organizing Textual Spans into a Hierarchy to Facilitate Navigation | acl-demo.27 | Poster | 2309.10057 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-demo.28.bib | https://aclanthology.org/2023.acl-demo.28/ | @inproceedings{zyska-etal-2023-care,
title = "{CARE}: Collaborative {AI}-Assisted Reading Environment",
author = "Zyska, Dennis and
Dycke, Nils and
Buchmann, Jan and
Kuznetsov, Ilia and
Gurevych, Iryna",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.28",
doi = "10.18653/v1/2023.acl-demo.28",
pages = "291--303",
abstract = "Recent years have seen impressive progress in AI-assisted writing, yet the developments in AI-assisted reading are lacking. We propose inline commentary as a natural vehicle for AI-based reading assistance, and present CARE: the first open integrated platform for the study of inline commentary and reading. CARE facilitates data collection for inline commentaries in a commonplace collaborative reading environment, and provides a framework for enhancing reading with NLP-based assistance, such as text classification, generation or question answering. The extensible behavioral logging allows unique insights into the reading and commenting behavior, and flexible configuration makes the platform easy to deploy in new scenarios. To evaluate CARE in action, we apply the platform in a user study dedicated to scholarly peer review. CARE facilitates the data collection and study of inline commentary in NLP, extrinsic evaluation of NLP assistance, and application prototyping. We invite the community to explore and build upon the open source implementation of CARE.Github Repository: \url{https://github.com/UKPLab/CAREPublic} Live Demo: \url{https://care.ukp.informatik.tu-darmstadt.de}",
}
 | Recent years have seen impressive progress in AI-assisted writing, yet the developments in AI-assisted reading are lacking. We propose inline commentary as a natural vehicle for AI-based reading assistance, and present CARE: the first open integrated platform for the study of inline commentary and reading. CARE facilitates data collection for inline commentaries in a commonplace collaborative reading environment, and provides a framework for enhancing reading with NLP-based assistance, such as text classification, generation or question answering. The extensible behavioral logging allows unique insights into the reading and commenting behavior, and flexible configuration makes the platform easy to deploy in new scenarios. To evaluate CARE in action, we apply the platform in a user study dedicated to scholarly peer review. CARE facilitates the data collection and study of inline commentary in NLP, extrinsic evaluation of NLP assistance, and application prototyping. We invite the community to explore and build upon the open source implementation of CARE. GitHub Repository: \url{https://github.com/UKPLab/CARE} Public Live Demo: \url{https://care.ukp.informatik.tu-darmstadt.de} | [
"Zyska, Dennis",
"Dycke, Nils",
"Buchmann, Jan",
"Kuznetsov, Ilia",
"Gurevych, Iryna"
] | CARE: Collaborative AI-Assisted Reading Environment | acl-demo.28 | Poster | 2302.12611 | [
"https://github.com/ukplab/care"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-demo.29.bib | https://aclanthology.org/2023.acl-demo.29/ | @inproceedings{piktus-etal-2023-roots,
title = "The {ROOTS} Search Tool: Data Transparency for {LLM}s",
author = "Piktus, Aleksandra and
Akiki, Christopher and
Villegas, Paulo and
Lauren{\c{c}}on, Hugo and
Dupont, G{\'e}rard and
Luccioni, Sasha and
Jernite, Yacine and
Rogers, Anna",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.29",
doi = "10.18653/v1/2023.acl-demo.29",
pages = "304--314",
abstract = "ROOTS is a 1.6TB multilingual text corpus developed for the training of BLOOM, currently the largest language model explicitly accompanied by commensurate data governance efforts. In continuation of these efforts, we present the ROOTS Search Tool: a search engine over the entire ROOTS corpus offering both fuzzy and exact search capabilities. ROOTS is the largest corpus to date that can be investigated this way. The ROOTS Search Tool is open-sourced and available on Hugging Face Spaces: \url{https://huggingface.co/spaces/bigscience-data/roots-search}. We describe our implementation and the possible use cases of our tool.",
}
| ROOTS is a 1.6TB multilingual text corpus developed for the training of BLOOM, currently the largest language model explicitly accompanied by commensurate data governance efforts. In continuation of these efforts, we present the ROOTS Search Tool: a search engine over the entire ROOTS corpus offering both fuzzy and exact search capabilities. ROOTS is the largest corpus to date that can be investigated this way. The ROOTS Search Tool is open-sourced and available on Hugging Face Spaces: \url{https://huggingface.co/spaces/bigscience-data/roots-search}. We describe our implementation and the possible use cases of our tool. | [
"Piktus, Aleks",
"ra",
"Akiki, Christopher",
"Villegas, Paulo",
"Lauren{\\c{c}}on, Hugo",
"Dupont, G{\\'e}rard",
"Luccioni, Sasha",
"Jernite, Yacine",
"Rogers, Anna"
] | The ROOTS Search Tool: Data Transparency for LLMs | acl-demo.29 | Poster | 2302.14035 | [
"https://github.com/huggingface/roots-search-tool"
] | https://huggingface.co/papers/2302.14035 | 8 | 0 | 0 | 8 | 1 | [] | [
"society-ethics/papers"
] | [] |
https://aclanthology.org/2023.acl-demo.30.bib | https://aclanthology.org/2023.acl-demo.30/ | @inproceedings{tiedemann-de-gibert-2023-opus,
title = "The {OPUS}-{MT} Dashboard {--} A Toolkit for a Systematic Evaluation of Open Machine Translation Models",
author = {Tiedemann, J{\"o}rg and
de Gibert, Ona},
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.30",
doi = "10.18653/v1/2023.acl-demo.30",
pages = "315--327",
abstract = "The OPUS-MT dashboard is a web-based platform that provides a comprehensive overview of open translation models. We focus on a systematic collection of benchmark results with verifiable translation performance and large coverage in terms of languages and domains. We provide results for in-house OPUS-MT and Tatoeba models as well as external models from the Huggingface repository and user-contributed translations. The functionalities of the evaluation tool include summaries of benchmarks for over 2,300 models covering 4,560 language directions and 294 languages, as well as the inspection of predicted translations against their human reference. We focus on centralization, reproducibility and coverage of MT evaluation combined with scalability. The dashboard can be accessed live at \url{https://opus.nlpl.eu/dashboard/}.",
}
| The OPUS-MT dashboard is a web-based platform that provides a comprehensive overview of open translation models. We focus on a systematic collection of benchmark results with verifiable translation performance and large coverage in terms of languages and domains. We provide results for in-house OPUS-MT and Tatoeba models as well as external models from the Huggingface repository and user-contributed translations. The functionalities of the evaluation tool include summaries of benchmarks for over 2,300 models covering 4,560 language directions and 294 languages, as well as the inspection of predicted translations against their human reference. We focus on centralization, reproducibility and coverage of MT evaluation combined with scalability. The dashboard can be accessed live at \url{https://opus.nlpl.eu/dashboard/}. | [
"Tiedemann, J{\\\"o}rg",
"de Gibert, Ona"
] | The OPUS-MT Dashboard – A Toolkit for a Systematic Evaluation of Open Machine Translation Models | acl-demo.30 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-demo.31.bib | https://aclanthology.org/2023.acl-demo.31/ | @inproceedings{schneider-etal-2023-wise,
title = "The {D}-{WISE} Tool Suite: Multi-Modal Machine-Learning-Powered Tools Supporting and Enhancing Digital Discourse Analysis",
author = "Schneider, Florian and
Fischer, Tim and
Petersen-Frey, Fynn and
Eiser, Isabel and
Koch, Gertraud and
Biemann, Chris",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.31",
doi = "10.18653/v1/2023.acl-demo.31",
pages = "328--335",
abstract = "This work introduces the D-WISE Tool Suite (DWTS), a novel working environment for digital qualitative discourse analysis in the Digital Humanities (DH). The DWTS addresses limitations of current DH tools induced by the ever-increasing amount of heterogeneous, unstructured, and multi-modal data in which the discourses of contemporary societies are encoded. To provide meaningful insights from such data, our system leverages and combines state-of-the-art machine learning technologies from Natural Language Processing and Com-puter Vision. Further, the DWTS is conceived and developed by an interdisciplinary team ofcultural anthropologists and computer scientists to ensure the tool{'}s usability for modernDH research. Central features of the DWTS are: a) import of multi-modal data like text, image, audio, and video b) preprocessing pipelines for automatic annotations c) lexical and semantic search of documents d) manual span, bounding box, time-span, and frame annotations e) documentation of the research process.",
}
 | This work introduces the D-WISE Tool Suite (DWTS), a novel working environment for digital qualitative discourse analysis in the Digital Humanities (DH). The DWTS addresses limitations of current DH tools induced by the ever-increasing amount of heterogeneous, unstructured, and multi-modal data in which the discourses of contemporary societies are encoded. To provide meaningful insights from such data, our system leverages and combines state-of-the-art machine learning technologies from Natural Language Processing and Computer Vision. Further, the DWTS is conceived and developed by an interdisciplinary team of cultural anthropologists and computer scientists to ensure the tool{'}s usability for modern DH research. Central features of the DWTS are: a) import of multi-modal data like text, image, audio, and video b) preprocessing pipelines for automatic annotations c) lexical and semantic search of documents d) manual span, bounding box, time-span, and frame annotations e) documentation of the research process. | [
"Schneider, Florian",
"Fischer, Tim",
"Petersen-Frey, Fynn",
"Eiser, Isabel",
"Koch, Gertraud",
"Biemann, Chris"
] | The D-WISE Tool Suite: Multi-Modal Machine-Learning-Powered Tools Supporting and Enhancing Digital Discourse Analysis | acl-demo.31 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-demo.32.bib | https://aclanthology.org/2023.acl-demo.32/ | @inproceedings{zhao-etal-2023-openrt,
title = "{O}pen{RT}: An Open-source Framework for Reasoning Over Tabular Data",
author = "Zhao, Yilun and
Mi, Boyu and
Qi, Zhenting and
Nan, Linyong and
Guo, Minghao and
Cohan, Arman and
Radev, Dragomir",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.32",
doi = "10.18653/v1/2023.acl-demo.32",
pages = "336--347",
abstract = "There are a growing number of table pre-training methods proposed for reasoning over tabular data (e.g., question answering, fact checking, and faithful text generation). However, most existing methods are benchmarked solely on a limited number of datasets, varying in configuration, which leads to a lack of unified, standardized, fair, and comprehensive comparison between methods. This paper presents OpenRT, the first open-source framework for reasoning over tabular data, to reproduce existing table pre-training models for performance comparison and develop new models quickly. We implemented and compared six table pre-training models on four question answering, one fact checking, and one faithful text generation datasets. Moreover, to enable the community to easily construct new table reasoning datasets, we developed TaRAT, an annotation tool which supports multi-person collaborative annotations for various kinds of table reasoning tasks. The researchers are able to deploy the newly-constructed dataset to OpenRT and compare the performances of different baseline systems.",
}
| There are a growing number of table pre-training methods proposed for reasoning over tabular data (e.g., question answering, fact checking, and faithful text generation). However, most existing methods are benchmarked solely on a limited number of datasets, varying in configuration, which leads to a lack of unified, standardized, fair, and comprehensive comparison between methods. This paper presents OpenRT, the first open-source framework for reasoning over tabular data, to reproduce existing table pre-training models for performance comparison and develop new models quickly. We implemented and compared six table pre-training models on four question answering, one fact checking, and one faithful text generation datasets. Moreover, to enable the community to easily construct new table reasoning datasets, we developed TaRAT, an annotation tool which supports multi-person collaborative annotations for various kinds of table reasoning tasks. The researchers are able to deploy the newly-constructed dataset to OpenRT and compare the performances of different baseline systems. | [
"Zhao, Yilun",
"Mi, Boyu",
"Qi, Zhenting",
"Nan, Linyong",
"Guo, Minghao",
"Cohan, Arman",
"Radev, Dragomir"
] | OpenRT: An Open-source Framework for Reasoning Over Tabular Data | acl-demo.32 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-demo.33.bib | https://aclanthology.org/2023.acl-demo.33/ | @inproceedings{basile-etal-2023-uinauil,
title = "{UINAUIL}: A Unified Benchmark for {I}talian Natural Language Understanding",
author = "Basile, Valerio and
Bioglio, Livio and
Bosca, Alessio and
Bosco, Cristina and
Patti, Viviana",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.33",
doi = "10.18653/v1/2023.acl-demo.33",
pages = "348--356",
abstract = "This paper introduces the Unified Interactive Natural Understanding of the Italian Language (UINAUIL), a benchmark of six tasks for Italian Natural Language Understanding. We present a description of the tasks and software library that collects the data from the European Language Grid, harmonizes the data format, and exposes functionalities to facilitates data manipulation and the evaluation of custom models. We also present the results of tests conducted with available Italian and multilingual language models on UINAUIL, providing an updated picture of the current state of the art in Italian NLU.",
}
 | This paper introduces the Unified Interactive Natural Understanding of the Italian Language (UINAUIL), a benchmark of six tasks for Italian Natural Language Understanding. We present a description of the tasks and software library that collects the data from the European Language Grid, harmonizes the data format, and exposes functionalities to facilitate data manipulation and the evaluation of custom models. We also present the results of tests conducted with available Italian and multilingual language models on UINAUIL, providing an updated picture of the current state of the art in Italian NLU. | [
"Basile, Valerio",
"Bioglio, Livio",
"Bosca, Alessio",
"Bosco, Cristina",
"Patti, Viviana"
] | UINAUIL: A Unified Benchmark for Italian Natural Language Understanding | acl-demo.33 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-demo.34.bib | https://aclanthology.org/2023.acl-demo.34/ | @inproceedings{picco-etal-2023-zshot,
title = "Zshot: An Open-source Framework for Zero-Shot Named Entity Recognition and Relation Extraction",
author = "Picco, Gabriele and
Martinez Galindo, Marcos and
Purpura, Alberto and
Fuchs, Leopold and
Lopez, Vanessa and
Hoang, Thanh Lam",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.34",
doi = "10.18653/v1/2023.acl-demo.34",
pages = "357--368",
abstract = "The Zero-Shot Learning (ZSL) task pertains to the identification of entities or relations in texts that were not seen during training. ZSL has emerged as a critical research area due to the scarcity of labeled data in specific domains, and its applications have grown significantly in recent years. With the advent of large pretrained language models, several novel methods have been proposed, resulting in substantial improvements in ZSL performance. There is a growing demand, both in the research community and industry, for a comprehensive ZSL framework that facilitates the development and accessibility of the latest methods and pretrained models. In this study, we propose a novel ZSL framework called Zshot that aims to address the aforementioned challenges. Our primary objective is to provide a platform that allows researchers to compare different state-of-the-art ZSL methods with standard benchmark datasets. Additionally, we have designed our framework to support the industry with readily available APIs for production under the standard SpaCy NLP pipeline. Our API is extendible and evaluable, moreover, we include numerous enhancements such as boosting the accuracy with pipeline ensembling and visualization utilities available as a SpaCy extension.",
}
 | The Zero-Shot Learning (ZSL) task pertains to the identification of entities or relations in texts that were not seen during training. ZSL has emerged as a critical research area due to the scarcity of labeled data in specific domains, and its applications have grown significantly in recent years. With the advent of large pretrained language models, several novel methods have been proposed, resulting in substantial improvements in ZSL performance. There is a growing demand, both in the research community and industry, for a comprehensive ZSL framework that facilitates the development and accessibility of the latest methods and pretrained models. In this study, we propose a novel ZSL framework called Zshot that aims to address the aforementioned challenges. Our primary objective is to provide a platform that allows researchers to compare different state-of-the-art ZSL methods with standard benchmark datasets. Additionally, we have designed our framework to support the industry with readily available APIs for production under the standard SpaCy NLP pipeline. Our API is extendible and evaluable; moreover, we include numerous enhancements such as boosting the accuracy with pipeline ensembling and visualization utilities available as a SpaCy extension. | [
"Picco, Gabriele",
"Martinez Galindo, Marcos",
"Purpura, Alberto",
"Fuchs, Leopold",
"Lopez, Vanessa",
"Hoang, Thanh Lam"
] | Zshot: An Open-source Framework for Zero-Shot Named Entity Recognition and Relation Extraction | acl-demo.34 | Poster | 2307.13497 | [
"https://github.com/ibm/zshot"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
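To illustrate the SpaCy integration described in the Zshot row above, a hedged sketch of registering the `zshot` pipeline component. Class and module paths follow the project README as best recalled; treat them as assumptions and check https://github.com/ibm/zshot for the current API:

```python
# Sketch of zero-shot NER with Zshot as a SpaCy pipe. Class/module paths are
# assumptions based on the project README; verify against the repository.
import spacy
from zshot import PipelineConfig
from zshot.linker import LinkerSMXM
from zshot.utils.data_models import Entity

nlp = spacy.blank("en")
config = PipelineConfig(
    # Entities are defined by natural-language descriptions, not training data.
    entities=[
        Entity(name="company", description="The name of a company"),
        Entity(name="location", description="A geographic location such as a city or country"),
    ],
    linker=LinkerSMXM(),
)
nlp.add_pipe("zshot", config=config, last=True)

doc = nlp("IBM opened a new research lab in Dublin.")
print([(ent.text, ent.label_) for ent in doc.ents])
```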
https://aclanthology.org/2023.acl-demo.35.bib | https://aclanthology.org/2023.acl-demo.35/ | @inproceedings{crego-etal-2023-bisync,
title = "{B}i{S}ync: A Bilingual Editor for Synchronized Monolingual Texts",
author = "Crego, Josep and
Xu, Jitao and
Yvon, Fran{\c{c}}ois",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.35",
doi = "10.18653/v1/2023.acl-demo.35",
pages = "369--376",
abstract = "In our globalized world, a growing number of situations arise where people are required to communicate in one or several foreign languages. In the case of written communication, users with a good command of a foreign language may find assistance from computer-aided translation (CAT) technologies. These technologies often allow users to access external resources, such as dictionaries, terminologies or bilingual concordancers, thereby interrupting and considerably hindering the writing process. In addition, CAT systems assume that the source sentence is fixed and also restrict the possible changes on the target side. In order to make the writing process smoother, we present BiSync, a bilingual writing assistant that allows users to freely compose text in two languages, while maintaining the two monolingual texts synchronized. We also include additional functionalities, such as the display of alternative prefix translations and paraphrases, which are intended to facilitate the authoring of texts. We detail the model architecture used for synchronization and evaluate the resulting tool, showing that high accuracy can be attained with limited computational resources. The interface and models are publicly available at \url{https://github.com/jmcrego/BiSync} and a demonstration video can be watched on YouTube \url{https://youtu.be/_l-ugDHfNgU}.",
}
| In our globalized world, a growing number of situations arise where people are required to communicate in one or several foreign languages. In the case of written communication, users with a good command of a foreign language may find assistance from computer-aided translation (CAT) technologies. These technologies often allow users to access external resources, such as dictionaries, terminologies or bilingual concordancers, thereby interrupting and considerably hindering the writing process. In addition, CAT systems assume that the source sentence is fixed and also restrict the possible changes on the target side. In order to make the writing process smoother, we present BiSync, a bilingual writing assistant that allows users to freely compose text in two languages, while maintaining the two monolingual texts synchronized. We also include additional functionalities, such as the display of alternative prefix translations and paraphrases, which are intended to facilitate the authoring of texts. We detail the model architecture used for synchronization and evaluate the resulting tool, showing that high accuracy can be attained with limited computational resources. The interface and models are publicly available at \url{https://github.com/jmcrego/BiSync} and a demonstration video can be watched on YouTube \url{https://youtu.be/_l-ugDHfNgU}. | [
"Crego, Josep",
"Xu, Jitao",
"Yvon, Fran{\\c{c}}ois"
] | BiSync: A Bilingual Editor for Synchronized Monolingual Texts | acl-demo.35 | Poster | 2306.00400 | [
"https://github.com/jmcrego/bisync"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-demo.36.bib | https://aclanthology.org/2023.acl-demo.36/ | @inproceedings{antoniak-etal-2023-riveter,
title = "Riveter: Measuring Power and Social Dynamics Between Entities",
author = "Antoniak, Maria and
Field, Anjalie and
Mun, Jimin and
Walsh, Melanie and
Klein, Lauren and
Sap, Maarten",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.36",
doi = "10.18653/v1/2023.acl-demo.36",
pages = "377--388",
abstract = "Riveter provides a complete easy-to-use pipeline for analyzing verb connotations associated with entities in text corpora. We prepopulate the package with connotation frames of sentiment, power, and agency, which have demonstrated usefulness for capturing social phenomena, such as gender bias, in a broad range of corpora. For decades, lexical frameworks have been foundational tools in computational social science, digital humanities, and natural language processing, facilitating multifaceted analysis of text corpora. But working with verb-centric lexica specifically requires natural language processing skills, reducing their accessibility to other researchers. By organizing the language processing pipeline, providing complete lexicon scores and visualizations for all entities in a corpus, and providing functionality for users to target specific research questions, Riveter greatly improves the accessibility of verb lexica and can facilitate a broad range of future research.",
}
| Riveter provides a complete easy-to-use pipeline for analyzing verb connotations associated with entities in text corpora. We prepopulate the package with connotation frames of sentiment, power, and agency, which have demonstrated usefulness for capturing social phenomena, such as gender bias, in a broad range of corpora. For decades, lexical frameworks have been foundational tools in computational social science, digital humanities, and natural language processing, facilitating multifaceted analysis of text corpora. But working with verb-centric lexica specifically requires natural language processing skills, reducing their accessibility to other researchers. By organizing the language processing pipeline, providing complete lexicon scores and visualizations for all entities in a corpus, and providing functionality for users to target specific research questions, Riveter greatly improves the accessibility of verb lexica and can facilitate a broad range of future research. | [
"Antoniak, Maria",
"Field, Anjalie",
"Mun, Jimin",
"Walsh, Melanie",
"Klein, Lauren",
"Sap, Maarten"
] | Riveter: Measuring Power and Social Dynamics Between Entities | acl-demo.36 | Poster | 2312.09536 | [
"https://github.com/maartensap/riveter-nlp"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
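A short sketch of the pipeline the Riveter row above describes: load a prepackaged connotation-frame lexicon, parse a corpus, and aggregate per-entity scores. Method names follow the riveter-nlp README as best recalled and should be treated as assumptions:

```python
# Sketch: score power connotations for entities in a tiny corpus. Method names
# (load_sap_lexicon, train, get_score_totals) follow the riveter-nlp README as
# recalled; treat them as assumptions.
from riveter import Riveter

texts = [
    "The manager instructed the intern.",
    "The intern obeyed the manager.",
]
text_ids = [0, 1]

riveter = Riveter()
riveter.load_sap_lexicon("power")   # prepackaged frames: sentiment, power, agency
riveter.train(texts, text_ids)      # parses the corpus, aggregates verb scores per entity

# Aggregate power score per entity (e.g., "manager" high, "intern" low).
print(riveter.get_score_totals())
```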
https://aclanthology.org/2023.acl-demo.37.bib | https://aclanthology.org/2023.acl-demo.37/ | @inproceedings{bast-etal-2023-fast,
title = "Fast Whitespace Correction with Encoder-Only Transformers",
author = "Bast, Hannah and
Hertel, Matthias and
Walter, Sebastian",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.37",
doi = "10.18653/v1/2023.acl-demo.37",
pages = "389--399",
abstract = "The goal of whitespace correction is to fix space errors in arbitrary given text. For example, given the text {``}whi te space correctio nwithTransf or mers{''}, produce {``}whitespace correction with Transformers{''}. We compare two Transformer-based models, a character-level encoder-decoder model and a byte-level encoder-only model. We find that the encoder-only model is both faster and achieves higher quality. We provide an easy-to-use tool that is over 900 times faster than the previous best tool, with the same high quality. Our tool repairs text at a rate of over 200 kB/s on GPU, with a sequence-averaged F1-score ranging from 87.5{\%} for hard-to-correct text up to 99{\%} for text without any spaces.",
}
| The goal of whitespace correction is to fix space errors in arbitrary given text. For example, given the text {``}whi te space correctio nwithTransf or mers{''}, produce {``}whitespace correction with Transformers{''}. We compare two Transformer-based models, a character-level encoder-decoder model and a byte-level encoder-only model. We find that the encoder-only model is both faster and achieves higher quality. We provide an easy-to-use tool that is over 900 times faster than the previous best tool, with the same high quality. Our tool repairs text at a rate of over 200 kB/s on GPU, with a sequence-averaged F1-score ranging from 87.5{\%} for hard-to-correct text up to 99{\%} for text without any spaces. | [
"Bast, Hannah",
"Hertel, Matthias",
"Walter, Sebastian"
] | Fast Whitespace Correction with Encoder-Only Transformers | acl-demo.37 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-demo.38.bib | https://aclanthology.org/2023.acl-demo.38/ | @inproceedings{yan-etal-2023-espnet,
title = "{ESP}net-{ST}-v2: Multipurpose Spoken Language Translation Toolkit",
author = "Yan, Brian and
Shi, Jiatong and
Tang, Yun and
Inaguma, Hirofumi and
Peng, Yifan and
Dalmia, Siddharth and
Pol{\'a}k, Peter and
Fernandes, Patrick and
Berrebbi, Dan and
Hayashi, Tomoki and
Zhang, Xiaohui and
Ni, Zhaoheng and
Hira, Moto and
Maiti, Soumi and
Pino, Juan and
Watanabe, Shinji",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.38",
doi = "10.18653/v1/2023.acl-demo.38",
pages = "400--411",
abstract = "ESPnet-ST-v2 is a revamp of the open-source ESPnet-ST toolkit necessitated by the broadening interests of the spoken language translation community. ESPnet-ST-v2 supports 1) offline speech-to-text translation (ST), 2) simultaneous speech-to-text translation (SST), and 3) offline speech-to-speech translation (S2ST) {--} each task is supported with a wide variety of approaches, differentiating ESPnet-ST-v2 from other open source spoken language translation toolkits. This toolkit offers state-of-the-art architectures such as transducers, hybrid CTC/attention, multi-decoders with searchable intermediates, time-synchronous blockwise CTC/attention, Translatotron models, and direct discrete unit models. In this paper, we describe the overall design, example models for each task, and performance benchmarking behind ESPnet-ST-v2, which is publicly available at \url{https://github.com/espnet/espnet}.",
}
| ESPnet-ST-v2 is a revamp of the open-source ESPnet-ST toolkit necessitated by the broadening interests of the spoken language translation community. ESPnet-ST-v2 supports 1) offline speech-to-text translation (ST), 2) simultaneous speech-to-text translation (SST), and 3) offline speech-to-speech translation (S2ST) {--} each task is supported with a wide variety of approaches, differentiating ESPnet-ST-v2 from other open source spoken language translation toolkits. This toolkit offers state-of-the-art architectures such as transducers, hybrid CTC/attention, multi-decoders with searchable intermediates, time-synchronous blockwise CTC/attention, Translatotron models, and direct discrete unit models. In this paper, we describe the overall design, example models for each task, and performance benchmarking behind ESPnet-ST-v2, which is publicly available at \url{https://github.com/espnet/espnet}. | [
"Yan, Brian",
"Shi, Jiatong",
"Tang, Yun",
"Inaguma, Hirofumi",
"Peng, Yifan",
"Dalmia, Siddharth",
"Pol{\\'a}k, Peter",
"Fern",
"es, Patrick",
"Berrebbi, Dan",
"Hayashi, Tomoki",
"Zhang, Xiaohui",
"Ni, Zhaoheng",
"Hira, Moto",
"Maiti, Soumi",
"Pino, Juan",
"Watanabe, Shinji"
] | ESPnet-ST-v2: Multipurpose Spoken Language Translation Toolkit | acl-demo.38 | Poster | 2304.04596 | [
"https://github.com/espnet/espnet"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-demo.39.bib | https://aclanthology.org/2023.acl-demo.39/ | @inproceedings{sharf-etal-2023-cb2,
title = "{CB}2: Collaborative Natural Language Interaction Research Platform",
author = "Sharf, Jacob and
Gul, Mustafa Omer and
Artzi, Yoav",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.39",
doi = "10.18653/v1/2023.acl-demo.39",
pages = "412--420",
abstract = "CB2 is a multi-agent platform to study collaborative natural language interaction in a grounded task-oriented scenario. It includes a 3D game environment, a backend server designed to serve trained models to human agents, and various tools and processes to enable scalable studies. We deploy CB2 at \url{https://cb2.ai} as a system demonstration with a learned instruction following model.",
}
| CB2 is a multi-agent platform to study collaborative natural language interaction in a grounded task-oriented scenario. It includes a 3D game environment, a backend server designed to serve trained models to human agents, and various tools and processes to enable scalable studies. We deploy CB2 at \url{https://cb2.ai} as a system demonstration with a learned instruction following model. | [
"Sharf, Jacob",
"Gul, Mustafa Omer",
"Artzi, Yoav"
] | CB2: Collaborative Natural Language Interaction Research Platform | acl-demo.39 | Poster | 2303.08127 | [
"https://github.com/lil-lab/cb2"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-demo.40.bib | https://aclanthology.org/2023.acl-demo.40/ | @inproceedings{sarti-etal-2023-inseq,
title = "Inseq: An Interpretability Toolkit for Sequence Generation Models",
author = "Sarti, Gabriele and
Feldhus, Nils and
Sickert, Ludwig and
van der Wal, Oskar",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.40",
doi = "10.18653/v1/2023.acl-demo.40",
pages = "421--435",
abstract = "Past work in natural language processing interpretability focused mainly on popular classification tasks while largely overlooking generation settings, partly due to a lack of dedicated tools. In this work, we introduce Inseq, a Python library to democratize access to interpretability analyses of sequence generation models. Inseq enables intuitive and optimized extraction of models{'} internal information and feature importance scores for popular decoder-only and encoder-decoder Transformers architectures. We showcase its potential by adopting it to highlight gender biases in machine translation models and locate factual knowledge inside GPT-2. Thanks to its extensible interface supporting cutting-edge techniques such as contrastive feature attribution, Inseq can drive future advances in explainable natural language generation, centralizing good practices and enabling fair and reproducible model evaluations.",
}
| Past work in natural language processing interpretability focused mainly on popular classification tasks while largely overlooking generation settings, partly due to a lack of dedicated tools. In this work, we introduce Inseq, a Python library to democratize access to interpretability analyses of sequence generation models. Inseq enables intuitive and optimized extraction of models{'} internal information and feature importance scores for popular decoder-only and encoder-decoder Transformers architectures. We showcase its potential by adopting it to highlight gender biases in machine translation models and locate factual knowledge inside GPT-2. Thanks to its extensible interface supporting cutting-edge techniques such as contrastive feature attribution, Inseq can drive future advances in explainable natural language generation, centralizing good practices and enabling fair and reproducible model evaluations. | [
"Sarti, Gabriele",
"Feldhus, Nils",
"Sickert, Ludwig",
"van der Wal, Oskar"
] | Inseq: An Interpretability Toolkit for Sequence Generation Models | acl-demo.40 | Poster | 2302.13942 | [
"https://github.com/inseq-team/inseq"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
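The machine-translation use case in the Inseq row above corresponds to a few lines of library code. A minimal sketch following the library's documented quickstart (model and method names here match the README example):

```python
# Minimal Inseq sketch: attribute a translation to its input tokens.
# Follows the library's documented quickstart; requires `pip install inseq`.
import inseq

# Wrap a Hugging Face seq2seq model with an attribution method.
model = inseq.load_model("Helsinki-NLP/opus-mt-en-fr", "integrated_gradients")

# Attribute the generation, e.g., to inspect how a gendered pronoun is resolved.
out = model.attribute(
    input_texts="The developer argued with the designer because she did not like the design."
)
out.show()  # renders a token-level attribution heatmap
```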
https://aclanthology.org/2023.acl-demo.41.bib | https://aclanthology.org/2023.acl-demo.41/ | @inproceedings{priniski-etal-2023-pipeline,
title = "Pipeline for modeling causal beliefs from natural language",
author = "Priniski, John and
Verma, Ishaan and
Morstatter, Fred",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.41",
doi = "10.18653/v1/2023.acl-demo.41",
pages = "436--443",
abstract = "We present a causal language analysis pipeline that leverages a Large Language Model to identify causal claims made in natural language documents, and aggregates claims across a corpus to produce a causal claim network. The pipeline then applies a clustering algorithm that groups causal claims based on their semantic topics. We demonstrate the pipeline by modeling causal belief systems surrounding the Covid-19 vaccine from tweets.",
}
| We present a causal language analysis pipeline that leverages a Large Language Model to identify causal claims made in natural language documents, and aggregates claims across a corpus to produce a causal claim network. The pipeline then applies a clustering algorithm that groups causal claims based on their semantic topics. We demonstrate the pipeline by modeling causal belief systems surrounding the Covid-19 vaccine from tweets. | [
"Priniski, John",
"Verma, Ishaan",
"Morstatter, Fred"
] | Pipeline for modeling causal beliefs from natural language | acl-demo.41 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-demo.42.bib | https://aclanthology.org/2023.acl-demo.42/ | @inproceedings{kasner-etal-2023-tabgenie,
title = "{T}ab{G}enie: A Toolkit for Table-to-Text Generation",
author = "Kasner, Zden{\v{e}}k and
Garanina, Ekaterina and
Platek, Ondrej and
Dusek, Ondrej",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.42",
doi = "10.18653/v1/2023.acl-demo.42",
pages = "444--455",
abstract = "Heterogenity of data-to-text generation datasets limits the research on data-to-text generation systems. We present TabGenie {--} a toolkit which enables researchers to explore, preprocess, and analyze a variety of data-to-text generation datasets through the unified framework of table-to-text generation. In TabGenie, all inputs are represented as tables with associated metadata. The tables can be explored through a web interface, which also provides an interactive mode for debugging table-to-text generation, facilitates side-by-side comparison of generated system outputs, and allows easy exports for manual analysis. Furthermore, TabGenie is equipped with command line processing tools and Python bindings for unified dataset loading and processing. We release TabGenie as a PyPI package and provide its open-source code and a live demo at \url{https://github.com/kasnerz/tabgenie}.",
}
 | Heterogeneity of data-to-text generation datasets limits the research on data-to-text generation systems. We present TabGenie {--} a toolkit which enables researchers to explore, preprocess, and analyze a variety of data-to-text generation datasets through the unified framework of table-to-text generation. In TabGenie, all inputs are represented as tables with associated metadata. The tables can be explored through a web interface, which also provides an interactive mode for debugging table-to-text generation, facilitates side-by-side comparison of generated system outputs, and allows easy exports for manual analysis. Furthermore, TabGenie is equipped with command line processing tools and Python bindings for unified dataset loading and processing. We release TabGenie as a PyPI package and provide its open-source code and a live demo at \url{https://github.com/kasnerz/tabgenie}. | [
"Kasner, Zden{\\v{e}}k",
"Garanina, Ekaterina",
"Platek, Ondrej",
"Dusek, Ondrej"
] | TabGenie: A Toolkit for Table-to-Text Generation | acl-demo.42 | Poster | 2302.14169 | [
"https://github.com/kasnerz/tabgenie"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
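A hypothetical sketch of the Python bindings mentioned in the TabGenie row above. The function and attribute names below are illustrative assumptions, not confirmed API; consult https://github.com/kasnerz/tabgenie for the actual interface:

```python
# Hypothetical sketch of TabGenie's Python bindings for unified dataset loading.
# The names load_dataset/get_table/props are illustrative assumptions only.
import tabgenie  # pip install tabgenie

dataset = tabgenie.load_dataset("totto")         # assumed loader for a supported dataset
table = dataset.get_table(split="dev", index=0)  # assumed accessor returning a unified table

# All inputs are represented as tables with associated metadata, so downstream
# table-to-text code can stay dataset-agnostic.
print(table.props)  # assumed metadata field
```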
https://aclanthology.org/2023.acl-demo.43.bib | https://aclanthology.org/2023.acl-demo.43/ | @inproceedings{zhu-etal-2023-efficient,
title = "An Efficient Conversational Smart Compose System",
author = "Zhu, Yun and
Chen, Xiayu and
Shu, Lei and
Tan, Bowen and
Song, Xinying and
Liu, Lijuan and
Wang, Maria and
Chen, Jindong and
Ruan, Ning",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.43",
doi = "10.18653/v1/2023.acl-demo.43",
pages = "456--462",
abstract = "Online conversation is a ubiquitous way to share information and connect everyone but repetitive idiomatic text typing takes users a lot of time. This paper demonstrates a simple yet effective cloud based smart compose system to improve human-to-human conversation efficiency. Heuristics from different perspectives are designed to achieve the best trade-off between quality and latency. From the modeling side, the decoder-only model exploited the previous turns of conversational history in a computation lightweight manner. Besides, a novel phrase tokenizer is proposed to reduce latency without losing the composing quality further. Additionally, the caching mechanism is applied to the serving framework. The demo video of the system is available at \url{https://youtu.be/U1KXkaqr60g.We} open-sourced our phrase tokenizer in \url{https://github.com/tensorflow/text}.",
}
 | Online conversation is a ubiquitous way to share information and connect everyone but repetitive idiomatic text typing takes users a lot of time. This paper demonstrates a simple yet effective cloud based smart compose system to improve human-to-human conversation efficiency. Heuristics from different perspectives are designed to achieve the best trade-off between quality and latency. From the modeling side, the decoder-only model exploited the previous turns of conversational history in a computation lightweight manner. Besides, a novel phrase tokenizer is proposed to reduce latency without losing the composing quality further. Additionally, the caching mechanism is applied to the serving framework. The demo video of the system is available at \url{https://youtu.be/U1KXkaqr60g}. We open-sourced our phrase tokenizer in \url{https://github.com/tensorflow/text}. | [
"Zhu, Yun",
"Chen, Xiayu",
"Shu, Lei",
"Tan, Bowen",
"Song, Xinying",
"Liu, Lijuan",
"Wang, Maria",
"Chen, Jindong",
"Ruan, Ning"
] | An Efficient Conversational Smart Compose System | acl-demo.43 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-demo.44.bib | https://aclanthology.org/2023.acl-demo.44/ | @inproceedings{chan-etal-2023-spurious,
title = "Which Spurious Correlations Impact Reasoning in {NLI} Models? A Visual Interactive Diagnosis through Data-Constrained Counterfactuals",
author = "Chan, Robin and
Amini, Afra and
El-Assady, Mennatallah",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.44",
doi = "10.18653/v1/2023.acl-demo.44",
pages = "463--470",
abstract = "We present a human-in-the-loop dashboard tailored to diagnosing potential spurious features that NLI models rely on for predictions. The dashboard enables users to generate diverse and challenging examples by drawing inspiration from GPT-3 suggestions. Additionally, users can receive feedback from a trained NLI model on how challenging the newly created example is and make refinements based on the feedback. Through our investigation, we discover several categories of spurious correlations that impact the reasoning of NLI models, which we group into three categories: Semantic Relevance, Logical Fallacies, and Bias. Based on our findings, we identify and describe various research opportunities, including diversifying training data and assessing NLI models{'} robustness by creating adversarial test suites.",
}
| We present a human-in-the-loop dashboard tailored to diagnosing potential spurious features that NLI models rely on for predictions. The dashboard enables users to generate diverse and challenging examples by drawing inspiration from GPT-3 suggestions. Additionally, users can receive feedback from a trained NLI model on how challenging the newly created example is and make refinements based on the feedback. Through our investigation, we discover several categories of spurious correlations that impact the reasoning of NLI models, which we group into three categories: Semantic Relevance, Logical Fallacies, and Bias. Based on our findings, we identify and describe various research opportunities, including diversifying training data and assessing NLI models{'} robustness by creating adversarial test suites. | [
"Chan, Robin",
"Amini, Afra",
"El-Assady, Mennatallah"
] | Which Spurious Correlations Impact Reasoning in NLI Models? A Visual Interactive Diagnosis through Data-Constrained Counterfactuals | acl-demo.44 | Poster | 2306.12146 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-demo.45.bib | https://aclanthology.org/2023.acl-demo.45/ | @inproceedings{ramamonjison-etal-2023-latex2solver,
title = "{L}a{T}e{X}2{S}olver: a Hierarchical Semantic Parsing of {L}a{T}e{X} Document into Code for an Assistive Optimization Modeling Application",
author = "Ramamonjison, Rindra and
Yu, Timothy and
Xing, Linzi and
Mostajabdaveh, Mahdi and
Li, Xiaorui and
Fu, Xiaojin and
Han, Xiongwei and
Chen, Yuanzhe and
Li, Ren and
Mao, Kun and
Zhang, Yong",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.45",
doi = "10.18653/v1/2023.acl-demo.45",
pages = "471--478",
abstract = "We demonstrate an interactive system to help operations research (OR) practitioners convert the mathematical formulation of optimization problems from TeX document format into the solver modeling language. In practice, a manual translation is cumbersome and time-consuming. Moreover, it requires an in-depth understanding of the problem description and a technical expertise to produce the modeling code. Thus, our proposed system TeX2Solver helps partially automate this conversion and help the users build optimization models more efficiently. In this paper, we describe its interface and the components of the hierarchical parsing system. A video demo walk-through is available online at \url{http://bit.ly/3kuOm3x}",
}
 | We demonstrate an interactive system to help operations research (OR) practitioners convert the mathematical formulation of optimization problems from LaTeX document format into the solver modeling language. In practice, a manual translation is cumbersome and time-consuming. Moreover, it requires an in-depth understanding of the problem description and technical expertise to produce the modeling code. Thus, our proposed system LaTeX2Solver helps partially automate this conversion and helps users build optimization models more efficiently. In this paper, we describe its interface and the components of the hierarchical parsing system. A video demo walk-through is available online at \url{http://bit.ly/3kuOm3x} | [
"Ramamonjison, Rindra",
"Yu, Timothy",
"Xing, Linzi",
"Mostajabdaveh, Mahdi",
"Li, Xiaorui",
"Fu, Xiaojin",
"Han, Xiongwei",
"Chen, Yuanzhe",
"Li, Ren",
"Mao, Kun",
"Zhang, Yong"
] | LaTeX2Solver: a Hierarchical Semantic Parsing of LaTeX Document into Code for an Assistive Optimization Modeling Application | acl-demo.45 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-demo.46.bib | https://aclanthology.org/2023.acl-demo.46/ | @inproceedings{yu-bach-2023-alfred,
title = "Alfred: A System for Prompted Weak Supervision",
author = "Yu, Peilin and
Bach, Stephen",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.46",
doi = "10.18653/v1/2023.acl-demo.46",
pages = "479--488",
abstract = "Alfred is the first system for programmatic weak supervision (PWS) that creates training data for machine learning by prompting. In contrast to typical PWS systems where weak supervision sources are programs coded by experts, Alfred enables users to encode their subject matter expertise via natural language prompts for language and vision-language models. Alfred provides a simple Python interface for the key steps of this emerging paradigm, with a high-throughput backend for large-scale data labeling. Users can quickly create, evaluate, and refine their prompt-based weak supervision sources; map the results to weak labels; and resolve their disagreements with a label model. Alfred enables a seamless local development experience backed by models served from self-managed computing clusters. It automatically optimizes the execution of prompts with optimized batching mechanisms. We find that this optimization improves query throughput by 2.9x versus a naive approach. We present two example use cases demonstrating Alfred on YouTube comment spam detection and pet breeds classification. Alfred is open source, available at \url{https://github.com/BatsResearch/alfred}.",
}
| Alfred is the first system for programmatic weak supervision (PWS) that creates training data for machine learning by prompting. In contrast to typical PWS systems where weak supervision sources are programs coded by experts, Alfred enables users to encode their subject matter expertise via natural language prompts for language and vision-language models. Alfred provides a simple Python interface for the key steps of this emerging paradigm, with a high-throughput backend for large-scale data labeling. Users can quickly create, evaluate, and refine their prompt-based weak supervision sources; map the results to weak labels; and resolve their disagreements with a label model. Alfred enables a seamless local development experience backed by models served from self-managed computing clusters. It automatically optimizes the execution of prompts with efficient batching mechanisms. We find that this optimization improves query throughput by 2.9x versus a naive approach. We present two example use cases demonstrating Alfred on YouTube comment spam detection and pet breeds classification. Alfred is open source, available at \url{https://github.com/BatsResearch/alfred}. | [
"Yu, Peilin",
"Bach, Stephen"
] | Alfred: A System for Prompted Weak Supervision | acl-demo.46 | Poster | 2305.18623 | [
"https://github.com/batsresearch/alfred"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
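The Alfred row above centers on prompted weak supervision: natural-language prompts act as weak labeling functions whose completions are mapped to weak labels and then aggregated. Below is a minimal, generic sketch of that loop in Python; the prompt templates, the `query_model` stub, and the majority-vote aggregation are illustrative assumptions, not Alfred's actual API.

```python
from collections import Counter

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a served LLM; a real system would call a model here.
    return "yes" if "free iphone" in prompt.lower() else "no"

# Prompt templates acting as weak supervision sources (illustrative only).
PROMPTS = [
    "Is the following YouTube comment spam? Answer yes or no.\nComment: {text}\nAnswer:",
    "Does this comment advertise another product or channel? Answer yes or no.\nComment: {text}\nAnswer:",
]

def to_label(completion: str) -> int:
    """Map a free-text completion to a weak label: 1 = spam, 0 = ham, -1 = abstain."""
    completion = completion.strip().lower()
    if completion.startswith("yes"):
        return 1
    if completion.startswith("no"):
        return 0
    return -1

def weak_label(text: str) -> int:
    votes = [to_label(query_model(p.format(text=text))) for p in PROMPTS]
    votes = [v for v in votes if v != -1]
    # Majority vote as a simple stand-in for a learned label model.
    return Counter(votes).most_common(1)[0][0] if votes else -1

print(weak_label("Click here to win a FREE iPhone!!!"))  # -> 1 (spam)
```

In Alfred itself, per the abstract, disagreements between sources are resolved with a label model rather than a simple majority vote.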
https://aclanthology.org/2023.acl-demo.47.bib | https://aclanthology.org/2023.acl-demo.47/ | @inproceedings{wu-etal-2023-openicl,
title = "{O}pen{ICL}: An Open-Source Framework for In-context Learning",
author = "Wu, Zhenyu and
Wang, Yaoxiang and
Ye, Jiacheng and
Wu, Zhiyong and
Feng, Jiangtao and
Xu, Jingjing and
Qiao, Yu",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.47",
doi = "10.18653/v1/2023.acl-demo.47",
pages = "489--498",
abstract = "In recent years, In-context Learning (ICL) has gained increasing attentionand emerged as the new paradigm for large language model (LLM) evaluation. Unlike traditional fine-tuning methods, ICL instead adapts the pre-trained models to unseen tasks without any parameter updates. However, the implementation of ICL is sophisticated due to the diverse retrieval and inference methods involved, as well as the varying pre-processing requirements for different models, datasets, and tasks. A unified and flexible framework for ICL is urgently needed to ease the implementation of the aforementioned components. To facilitate ICL research, we introduce OpenICL, an open-source toolkit for ICL and LLM evaluation. OpenICL is research-friendly with a highly flexible architecture that users can easily combine different components to suit their needs. It also provides various state-of-the-art retrieval and inference methods to streamline the process of adapting ICL to cutting-edge research. The effectiveness of OpenICL has been validated on a wide range of NLP tasks, including classification, QA, machine translation, and semantic parsing. As a side-product, we found OpenICL to be an efficient yet robust tool for LLMs evaluation. OpenICL is released at \url{https://github.com/Shark-NLP/OpenICL}.",
}
| In recent years, In-context Learning (ICL) has gained increasing attention and emerged as the new paradigm for large language model (LLM) evaluation. Unlike traditional fine-tuning methods, ICL instead adapts the pre-trained models to unseen tasks without any parameter updates. However, the implementation of ICL is sophisticated due to the diverse retrieval and inference methods involved, as well as the varying pre-processing requirements for different models, datasets, and tasks. A unified and flexible framework for ICL is urgently needed to ease the implementation of the aforementioned components. To facilitate ICL research, we introduce OpenICL, an open-source toolkit for ICL and LLM evaluation. OpenICL is research-friendly, with a highly flexible architecture with which users can easily combine different components to suit their needs. It also provides various state-of-the-art retrieval and inference methods to streamline the process of adapting ICL to cutting-edge research. The effectiveness of OpenICL has been validated on a wide range of NLP tasks, including classification, QA, machine translation, and semantic parsing. As a side-product, we found OpenICL to be an efficient yet robust tool for LLM evaluation. OpenICL is released at \url{https://github.com/Shark-NLP/OpenICL}. | [
"Wu, Zhenyu",
"Wang, Yaoxiang",
"Ye, Jiacheng",
"Wu, Zhiyong",
"Feng, Jiangtao",
"Xu, Jingjing",
"Qiao, Yu"
] | OpenICL: An Open-Source Framework for In-context Learning | acl-demo.47 | Poster | 2303.02913 | [
"https://github.com/shark-nlp/openicl"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
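To make the retrieval-plus-inference pipeline described in the OpenICL abstract concrete, here is a minimal sketch of k-nearest-neighbor demonstration retrieval and prompt assembly; the random-projection encoder stub and the prompt format are assumptions for illustration, not OpenICL's interface.

```python
import numpy as np

def embed(texts):
    # Stub encoder for illustration; swap in a real sentence-embedding model.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 64))

def topk_demonstrations(train_inputs, train_labels, query, k=4):
    """Retrieve the k training examples closest to the query by cosine similarity."""
    vecs = embed(list(train_inputs) + [query])
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = vecs[:-1] @ vecs[-1]
    best = np.argsort(-sims)[:k]
    return [(train_inputs[i], train_labels[i]) for i in best]

def build_prompt(demonstrations, query):
    """Assemble an in-context prompt from retrieved demonstrations."""
    parts = [f"Input: {x}\nLabel: {y}" for x, y in demonstrations]
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)
```

An inferencer would then score candidate labels from this prompt (e.g., by perplexity) or generate a completion directly.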
https://aclanthology.org/2023.acl-demo.48.bib | https://aclanthology.org/2023.acl-demo.48/ | @inproceedings{zhang-etal-2023-self-supervised,
title = "Self-Supervised Sentence Polishing by Adding Engaging Modifiers",
author = "Zhang, Zhexin and
Guan, Jian and
Cui, Xin and
Ran, Yu and
Liu, Bo and
Huang, Minlie",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.48",
doi = "10.18653/v1/2023.acl-demo.48",
pages = "499--507",
abstract = "Teachers often guide students to improve their essays by adding engaging modifiers to polish the sentences. In this work, we present the first study on automatic sentence polishing by adding modifiers. Since there is no available dataset for the new task, we first automatically construct a large number of parallel data by removing modifiers in the engaging sentences collected from public resources. Then we fine-tune LongLM to reconstruct the original sentences from the corrupted ones. Considering that much overlap between inputs and outputs may bias the model to completely copy the inputs, we split each source sentence into sub-sentences and only require the model to generate the modified sub-sentences. Furthermore, we design a retrieval augmentation algorithm to prompt the model to add suitable modifiers. Automatic and manual evaluation on the auto-constructed test set and real human texts show that our model can generate more engaging sentences with suitable modifiers than strong baselines while keeping fluency. We deploy the model at \url{http://coai.cs.tsinghua.edu.cn/static/polishSent/}. A demo video is available at \url{https://youtu.be/Y6gFHOgSv8Y}.",
}
| Teachers often guide students to improve their essays by adding engaging modifiers to polish the sentences. In this work, we present the first study on automatic sentence polishing by adding modifiers. Since there is no available dataset for the new task, we first automatically construct a large amount of parallel data by removing modifiers from the engaging sentences collected from public resources. Then we fine-tune LongLM to reconstruct the original sentences from the corrupted ones. Considering that much overlap between inputs and outputs may bias the model to completely copy the inputs, we split each source sentence into sub-sentences and only require the model to generate the modified sub-sentences. Furthermore, we design a retrieval augmentation algorithm to prompt the model to add suitable modifiers. Automatic and manual evaluation on the auto-constructed test set and real human texts show that our model can generate more engaging sentences with suitable modifiers than strong baselines while keeping fluency. We deploy the model at \url{http://coai.cs.tsinghua.edu.cn/static/polishSent/}. A demo video is available at \url{https://youtu.be/Y6gFHOgSv8Y}. | [
"Zhang, Zhexin",
"Guan, Jian",
"Cui, Xin",
"Ran, Yu",
"Liu, Bo",
"Huang, Minlie"
] | Self-Supervised Sentence Polishing by Adding Engaging Modifiers | acl-demo.48 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-demo.49.bib | https://aclanthology.org/2023.acl-demo.49/ | @inproceedings{shi-etal-2023-effidit,
title = "Effidit: An Assistant for Improving Writing Efficiency",
author = "Shi, Shuming and
Zhao, Enbo and
Bi, Wei and
Cai, Deng and
Cui, Leyang and
Huang, Xinting and
Jiang, Haiyun and
Tang, Duyu and
Song, Kaiqiang and
Wang, Longyue and
Huang, Chenyan and
Huang, Guoping and
Wang, Yan and
Li, Piji",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.49",
doi = "10.18653/v1/2023.acl-demo.49",
pages = "508--515",
abstract = "Writing assistants are valuable tools that can help writers improve their writing skills. We introduce Effidit (\textbf{Eff}icient and \textbf{I}ntelligent E\textbf{dit}ing), a digital writing assistant that facilitates users to write higher-quality text more efficiently through the use of Artificial Intelligence (AI) and Natural Language Processing (NLP) technologies. We significantly expand the capacities of a writing assistantby providing functions in three modules: text completion, hint recommendation, and writing refinement. Based on the above efforts, Effidit can efficiently assist users in creating their own text. Effidit has been deployed to several Tencent products and publicly released at \url{https://effidit.qq.com/}.",
}
| Writing assistants are valuable tools that can help writers improve their writing skills. We introduce Effidit (\textbf{Eff}icient and \textbf{I}ntelligent E\textbf{dit}ing), a digital writing assistant that helps users write higher-quality text more efficiently through the use of Artificial Intelligence (AI) and Natural Language Processing (NLP) technologies. We significantly expand the capacities of a writing assistant by providing functions in three modules: text completion, hint recommendation, and writing refinement. Based on the above efforts, Effidit can efficiently assist users in creating their own text. Effidit has been deployed to several Tencent products and publicly released at \url{https://effidit.qq.com/}. | [
"Shi, Shuming",
"Zhao, Enbo",
"Bi, Wei",
"Cai, Deng",
"Cui, Leyang",
"Huang, Xinting",
"Jiang, Haiyun",
"Tang, Duyu",
"Song, Kaiqiang",
"Wang, Longyue",
"Huang, Chenyan",
"Huang, Guoping",
"Wang, Yan",
"Li, Piji"
] | Effidit: An Assistant for Improving Writing Efficiency | acl-demo.49 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-demo.50.bib | https://aclanthology.org/2023.acl-demo.50/ | @inproceedings{wang-etal-2023-wizmap,
title = "{W}iz{M}ap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings",
author = "Wang, Zijie J. and
Hohman, Fred and
Chau, Duen Horng",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.50",
doi = "10.18653/v1/2023.acl-demo.50",
pages = "516--523",
abstract = "Machine learning models often learn latent embedding representations that capture the domain semantics of their training data. These embedding representations are valuable for interpreting trained models, building new models, and analyzing new datasets. However, interpreting and using embeddings can be challenging due to their opaqueness, high dimensionality, and the large size of modern datasets. To tackle these challenges, we present WizMap, an interactive visualization tool to help researchers and practitioners easily explore large embeddings. With a novel multi-resolution embedding summarization method and a familiar map-like interaction design, WizMap enables users to navigate and interpret embedding spaces with ease. Leveraging modern web technologies such as WebGL and Web Workers, WizMap scales to millions of embedding points directly in users{'} web browsers and computational notebooks without the need for dedicated backend servers. WizMap is open-source and available at the following public demo link: \url{https://poloclub.github.io/wizmap}.",
}
| Machine learning models often learn latent embedding representations that capture the domain semantics of their training data. These embedding representations are valuable for interpreting trained models, building new models, and analyzing new datasets. However, interpreting and using embeddings can be challenging due to their opaqueness, high dimensionality, and the large size of modern datasets. To tackle these challenges, we present WizMap, an interactive visualization tool to help researchers and practitioners easily explore large embeddings. With a novel multi-resolution embedding summarization method and a familiar map-like interaction design, WizMap enables users to navigate and interpret embedding spaces with ease. Leveraging modern web technologies such as WebGL and Web Workers, WizMap scales to millions of embedding points directly in users{'} web browsers and computational notebooks without the need for dedicated backend servers. WizMap is open-source and available at the following public demo link: \url{https://poloclub.github.io/wizmap}. | [
"Wang, Zijie J.",
"Hohman, Fred",
"Chau, Duen Horng"
] | WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings | acl-demo.50 | Poster | 2306.09328 | [
"https://github.com/poloclub/wizmap"
] | https://huggingface.co/papers/2306.09328 | 2 | 0 | 0 | 3 | 1 | [] | [] | [] |
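WizMap's multi-resolution embedding summarization can be approximated, at the level of the idea, by binning 2D-projected embedding points into a grid and keeping the most frequent terms per cell as a zoomed-out summary. The following generic sketch illustrates this; it is not the authors' implementation.

```python
import numpy as np
from collections import Counter

def grid_summaries(xy, docs, bins=8, top_terms=3):
    """Summarize 2D-projected embedding points by the top terms in each grid cell."""
    xy = np.asarray(xy, dtype=float)
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    cells = np.floor((xy - lo) / (hi - lo + 1e-9) * bins).astype(int)
    buckets = {}
    for (cx, cy), doc in zip(cells, docs):
        key = (int(cx), int(cy))
        buckets.setdefault(key, Counter()).update(doc.lower().split())
    # Keep only the most frequent terms per cell as the zoomed-out summary.
    return {cell: [t for t, _ in c.most_common(top_terms)] for cell, c in buckets.items()}
```

Computing such summaries at several grid resolutions yields the zoom-dependent labels the abstract describes.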
https://aclanthology.org/2023.acl-demo.51.bib | https://aclanthology.org/2023.acl-demo.51/ | @inproceedings{razzhigaev-etal-2023-system,
title = "A System for Answering Simple Questions in Multiple Languages",
author = "Razzhigaev, Anton and
Salnikov, Mikhail and
Malykh, Valentin and
Braslavski, Pavel and
Panchenko, Alexander",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.51",
doi = "10.18653/v1/2023.acl-demo.51",
pages = "524--537",
abstract = "Our research focuses on the most prevalent type of queries{---} simple questions {---}exemplified by questions like {``}What is the capital of France?{''}. These questions reference an entity such as {``}France{''}, which is directly connected (one hop) to the answer entity {``}Paris{''} in the underlying knowledge graph (KG). We propose a multilingual Knowledge Graph Question Answering (KGQA) technique that orders potential responses based on the distance between the question{'}s text embeddings and the answer{'}s graph embeddings. A system incorporating this novel method is also described in our work. Through comprehensive experimentation using various English and multilingual datasets and two KGs {---} Freebase and Wikidata {---} we illustrate the comparative advantage of the proposed method across diverse KG embeddings and languages. This edge is apparent even against robust baseline systems, including seq2seq QA models, search-based solutions and intricate rule-based pipelines. Interestingly, our research underscores that even advanced AI systems like ChatGPT encounter difficulties when tasked with answering simple questions. This finding emphasizes the relevance and effectiveness of our approach, which consistently outperforms such systems. We are making the source code and trained models from our study publicly accessible to promote further advancements in multilingual KGQA.",
}
| Our research focuses on the most prevalent type of queries {---} simple questions {---} exemplified by questions like {``}What is the capital of France?{''}. These questions reference an entity such as {``}France{''}, which is directly connected (one hop) to the answer entity {``}Paris{''} in the underlying knowledge graph (KG). We propose a multilingual Knowledge Graph Question Answering (KGQA) technique that orders potential responses based on the distance between the question{'}s text embeddings and the answer{'}s graph embeddings. A system incorporating this novel method is also described in our work. Through comprehensive experimentation using various English and multilingual datasets and two KGs {---} Freebase and Wikidata {---} we illustrate the comparative advantage of the proposed method across diverse KG embeddings and languages. This edge is apparent even against robust baseline systems, including seq2seq QA models, search-based solutions and intricate rule-based pipelines. Interestingly, our research underscores that even advanced AI systems like ChatGPT encounter difficulties when tasked with answering simple questions. This finding emphasizes the relevance and effectiveness of our approach, which consistently outperforms such systems. We are making the source code and trained models from our study publicly accessible to promote further advancements in multilingual KGQA. | [
"Razzhigaev, Anton",
"Salnikov, Mikhail",
"Malykh, Valentin",
"Braslavski, Pavel",
"Panchenko, Alex",
"er"
] | A System for Answering Simple Questions in Multiple Languages | acl-demo.51 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
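The KGQA entry above ranks one-hop candidate answers by the distance between the question's text embedding and each candidate's graph embedding. A minimal sketch of that ranking step using cosine distance follows; the placeholder embeddings and function names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def rank_candidates(question_vec: np.ndarray, candidate_vecs: dict) -> list:
    """Order candidate entities by cosine distance to the question embedding.

    question_vec:   text embedding of the question.
    candidate_vecs: mapping from entity id to its KG embedding.
    """
    q = question_vec / np.linalg.norm(question_vec)
    scored = []
    for entity, vec in candidate_vecs.items():
        v = vec / np.linalg.norm(vec)
        scored.append((entity, 1.0 - float(q @ v)))  # cosine distance
    return sorted(scored, key=lambda pair: pair[1])

# Toy usage: 'Paris' should rank first because its embedding is closest.
rng = np.random.default_rng(0)
q = rng.normal(size=16)
cands = {"Paris": q + 0.1 * rng.normal(size=16), "Lyon": rng.normal(size=16)}
print(rank_candidates(q, cands)[0][0])  # -> Paris
```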
https://aclanthology.org/2023.acl-demo.52.bib | https://aclanthology.org/2023.acl-demo.52/ | @inproceedings{ueda-etal-2023-kwja,
title = "{KWJA}: A Unified {J}apanese Analyzer Based on Foundation Models",
author = "Ueda, Nobuhiro and
Omura, Kazumasa and
Kodama, Takashi and
Kiyomaru, Hirokazu and
Murawaki, Yugo and
Kawahara, Daisuke and
Kurohashi, Sadao",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.52",
doi = "10.18653/v1/2023.acl-demo.52",
pages = "538--548",
abstract = "We present KWJA, a high-performance unified Japanese text analyzer based on foundation models.KWJA supports a wide range of tasks, including typo correction, word segmentation, word normalization, morphological analysis, named entity recognition, linguistic feature tagging, dependency parsing, PAS analysis, bridging reference resolution, coreference resolution, and discourse relation analysis, making it the most versatile among existing Japanese text analyzers.KWJA solves these tasks in a multi-task manner but still achieves competitive or better performance compared to existing analyzers specialized for each task.KWJA is publicly available under the MIT license at \url{https://github.com/ku-nlp/kwja}.",
}
| We present KWJA, a high-performance unified Japanese text analyzer based on foundation models. KWJA supports a wide range of tasks, including typo correction, word segmentation, word normalization, morphological analysis, named entity recognition, linguistic feature tagging, dependency parsing, PAS analysis, bridging reference resolution, coreference resolution, and discourse relation analysis, making it the most versatile among existing Japanese text analyzers. KWJA solves these tasks in a multi-task manner but still achieves competitive or better performance compared to existing analyzers specialized for each task. KWJA is publicly available under the MIT license at \url{https://github.com/ku-nlp/kwja}. | [
"Ueda, Nobuhiro",
"Omura, Kazumasa",
"Kodama, Takashi",
"Kiyomaru, Hirokazu",
"Murawaki, Yugo",
"Kawahara, Daisuke",
"Kurohashi, Sadao"
] | KWJA: A Unified Japanese Analyzer Based on Foundation Models | acl-demo.52 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-demo.53.bib | https://aclanthology.org/2023.acl-demo.53/ | @inproceedings{sohrab-etal-2023-disease,
title = "Disease Network Constructor: a Pathway Extraction and Visualization",
author = "Sohrab, Mohammad Golam and
Duong, Khoa and
Topi{\'c}, Goran and
Ikeda, Masami and
Nagano, Nozomi and
Natsume-Kitatani, Yayoi and
Kuroda, Masakata and
Itoh, Mari and
Takamura, Hiroya",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.53",
doi = "10.18653/v1/2023.acl-demo.53",
pages = "549--557",
abstract = "We present Disease Network Constructor (DNC), a system that extracts and visualizes a disease network, in which nodes are entities such as diseases, proteins, and genes, and edges represent regulation relation. We focused on the disease network derived through regulation events found in scientific articles on idiopathic pulmonary fibrosis (IPF). The front-end web-base user interface of DNC includes two-dimensional (2D) and 3D visualizations of the constructed disease network. The back-end system of DNC includes several natural language processing (NLP) techniques to process biomedical text including BERT-based tokenization on the basis of Bidirectional Encoder Representations from Transformers (BERT), flat and nested named entity recognition (NER), candidate generation and candidate ranking for entity linking (EL) or, relation extraction (RE), and event extraction (EE) tasks. We evaluated the end-to-end EL and end-to-end nested EE systems to determine the DNC{'}s back-endimplementation performance. To the best of our knowledge, this is the first attempt that addresses neural NER, EL, RE, and EE tasks in an end-to-end manner that constructs a path-way visualization from events, which we name Disease Network Constructor. The demonstration video can be accessed from \url{https://youtu.be/rFhWwAgcXE8}. We release an online system for end users and the source code is available at \url{https://github.com/aistairc/PRISM-APIs/}.",
}
| We present Disease Network Constructor (DNC), a system that extracts and visualizes a disease network, in which nodes are entities such as diseases, proteins, and genes, and edges represent regulation relations. We focused on the disease network derived through regulation events found in scientific articles on idiopathic pulmonary fibrosis (IPF). The front-end web-based user interface of DNC includes two-dimensional (2D) and 3D visualizations of the constructed disease network. The back-end system of DNC includes several natural language processing (NLP) techniques to process biomedical text, including tokenization based on Bidirectional Encoder Representations from Transformers (BERT), flat and nested named entity recognition (NER), candidate generation and candidate ranking for entity linking (EL), relation extraction (RE), and event extraction (EE) tasks. We evaluated the end-to-end EL and end-to-end nested EE systems to determine the DNC{'}s back-end implementation performance. To the best of our knowledge, this is the first attempt that addresses neural NER, EL, RE, and EE tasks in an end-to-end manner and constructs a pathway visualization from events, which we name Disease Network Constructor. The demonstration video can be accessed from \url{https://youtu.be/rFhWwAgcXE8}. We release an online system for end users and the source code is available at \url{https://github.com/aistairc/PRISM-APIs/}. | [
"Sohrab, Mohammad Golam",
"Duong, Khoa",
"Topi{\\'c}, Goran",
"Ikeda, Masami",
"Nagano, Nozomi",
"Natsume-Kitatani, Yayoi",
"Kuroda, Masakata",
"Itoh, Mari",
"Takamura, Hiroya"
] | Disease Network Constructor: a Pathway Extraction and Visualization | acl-demo.53 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-demo.54.bib | https://aclanthology.org/2023.acl-demo.54/ | @inproceedings{borzunov-etal-2023-petals,
title = "Petals: Collaborative Inference and Fine-tuning of Large Models",
author = "Borzunov, Alexander and
Baranchuk, Dmitry and
Dettmers, Tim and
Riabinin, Maksim and
Belkada, Younes and
Chumachenko, Artem and
Samygin, Pavel and
Raffel, Colin",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.54",
doi = "10.18653/v1/2023.acl-demo.54",
pages = "558--568",
abstract = "Many NLP tasks benefit from using large language models (LLMs) that often have more than 100 billion parameters. With the release of BLOOM-176B and OPT-175B, everyone can download pretrained models of this scale. Still, using these models requires high-end hardware unavailable to many researchers. In some cases, LLMs can be used more affordably via RAM offloading or hosted APIs. However, these techniques have innate limitations: offloading is too slow for interactive inference, while APIs are not flexible enough for research that requires access to weights, attention or logits. In this work, we propose Petals - a system for inference and fine-tuning of large models collaboratively by joining the resources of multiple parties. We demonstrate that this strategy outperforms offloading for very large models, running inference of BLOOM-176B on consumer GPUs with {\mbox{$\approx$}}1 step per second, which is enough for many interactive LLM applications. Unlike most inference APIs, Petals also natively exposes hidden states of served models, allowing to train and share custom model extensions based on efficient fine-tuning methods. The system, its source code, and documentation are available at https://petals.mlVideo (2 min): \url{https://youtu.be/F4muLI-0hTE}",
}
| Many NLP tasks benefit from using large language models (LLMs) that often have more than 100 billion parameters. With the release of BLOOM-176B and OPT-175B, everyone can download pretrained models of this scale. Still, using these models requires high-end hardware unavailable to many researchers. In some cases, LLMs can be used more affordably via RAM offloading or hosted APIs. However, these techniques have innate limitations: offloading is too slow for interactive inference, while APIs are not flexible enough for research that requires access to weights, attention or logits. In this work, we propose Petals - a system for inference and fine-tuning of large models collaboratively by joining the resources of multiple parties. We demonstrate that this strategy outperforms offloading for very large models, running inference of BLOOM-176B on consumer GPUs with {\mbox{$\approx$}}1 step per second, which is enough for many interactive LLM applications. Unlike most inference APIs, Petals also natively exposes hidden states of served models, allowing users to train and share custom model extensions based on efficient fine-tuning methods. The system, its source code, and documentation are available at https://petals.ml. Video (2 min): \url{https://youtu.be/F4muLI-0hTE} | [
"Borzunov, Alex",
"er",
"Baranchuk, Dmitry",
"Dettmers, Tim",
"Riabinin, Maksim",
"Belkada, Younes",
"Chumachenko, Artem",
"Samygin, Pavel",
"Raffel, Colin"
] | Petals: Collaborative Inference and Fine-tuning of Large Models | acl-demo.54 | Poster | 2209.01188 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-demo.55.bib | https://aclanthology.org/2023.acl-demo.55/ | @inproceedings{puerto-etal-2023-ukp,
title = "{UKP}-{SQ}u{ARE} v3: A Platform for Multi-Agent {QA} Research",
author = {Puerto, Haritz and
Baumg{\"a}rtner, Tim and
Sachdeva, Rachneet and
Fang, Haishuo and
Zhang, Hao and
Tariverdian, Sewin and
Wang, Kexin and
Gurevych, Iryna},
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.55",
doi = "10.18653/v1/2023.acl-demo.55",
pages = "569--580",
abstract = "The continuous development of Question Answering (QA) datasets has drawn the research community{'}s attention toward multi-domain models. A popular approach is to use multi-dataset models, which are models trained on multiple datasets to learn their regularities and prevent overfitting to a single dataset. However, with the proliferation of QA models in online repositories such as GitHub or Hugging Face, an alternative is becoming viable. Recent works have demonstrated that combining expert agents can yield large performance gains over multi-dataset models. To ease research in multi-agent models, we extend UKP-SQuARE, an online platform for QA research, to support three families of multi-agent systems: i) agent selection, ii) early-fusion of agents, and iii) late-fusion of agents. We conduct experiments to evaluate their inference speed and discuss the performance vs. speed trade-off compared to multi-dataset models. UKP-SQuARE is open-source and publicly available.",
}
| The continuous development of Question Answering (QA) datasets has drawn the research community{'}s attention toward multi-domain models. A popular approach is to use multi-dataset models, which are models trained on multiple datasets to learn their regularities and prevent overfitting to a single dataset. However, with the proliferation of QA models in online repositories such as GitHub or Hugging Face, an alternative is becoming viable. Recent works have demonstrated that combining expert agents can yield large performance gains over multi-dataset models. To ease research in multi-agent models, we extend UKP-SQuARE, an online platform for QA research, to support three families of multi-agent systems: i) agent selection, ii) early-fusion of agents, and iii) late-fusion of agents. We conduct experiments to evaluate their inference speed and discuss the performance vs. speed trade-off compared to multi-dataset models. UKP-SQuARE is open-source and publicly available. | [
"Puerto, Haritz",
"Baumg{\\\"a}rtner, Tim",
"Sachdeva, Rachneet",
"Fang, Haishuo",
"Zhang, Hao",
"Tariverdian, Sewin",
"Wang, Kexin",
"Gurevych, Iryna"
] | UKP-SQuARE v3: A Platform for Multi-Agent QA Research | acl-demo.55 | Poster | 2303.18120 | [
"https://github.com/ukp-square/square-core"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-demo.56.bib | https://aclanthology.org/2023.acl-demo.56/ | @inproceedings{sertkan-etal-2023-ranger,
title = "Ranger: A Toolkit for Effect-Size Based Multi-Task Evaluation",
author = {Sertkan, Mete and
Althammer, Sophia and
Hofst{\"a}tter, Sebastian},
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.56",
doi = "10.18653/v1/2023.acl-demo.56",
pages = "581--587",
abstract = "In this paper, we introduce Ranger - a toolkit to facilitate the easy use of effect-size-based meta-analysis for multi-task evaluation in NLP and IR. We observed that our communities often face the challenge of aggregating results over incomparable metrics and scenarios, which makes conclusions and take-away messages less reliable. With Ranger, we aim to address this issue by providing a task-agnostic toolkit that combines the effect of a treatment on multiple tasks into one statistical evaluation, allowing for comparison of metrics and computation of an overall summary effect. Our toolkit produces publication-ready forest plots that enable clear communication of evaluation results over multiple tasks. Our goal with the ready-to-use Ranger toolkit is to promote robust, effect-size-based evaluation and improve evaluation standards in the community. We provide two case studies for common IR and NLP settings to highlight Ranger{'}s benefits.",
}
| In this paper, we introduce Ranger - a toolkit to facilitate the easy use of effect-size-based meta-analysis for multi-task evaluation in NLP and IR. We observed that our communities often face the challenge of aggregating results over incomparable metrics and scenarios, which makes conclusions and take-away messages less reliable. With Ranger, we aim to address this issue by providing a task-agnostic toolkit that combines the effect of a treatment on multiple tasks into one statistical evaluation, allowing for comparison of metrics and computation of an overall summary effect. Our toolkit produces publication-ready forest plots that enable clear communication of evaluation results over multiple tasks. Our goal with the ready-to-use Ranger toolkit is to promote robust, effect-size-based evaluation and improve evaluation standards in the community. We provide two case studies for common IR and NLP settings to highlight Ranger{'}s benefits. | [
"Sertkan, Mete",
"Althammer, Sophia",
"Hofst{\\\"a}tter, Sebastian"
] | Ranger: A Toolkit for Effect-Size Based Multi-Task Evaluation | acl-demo.56 | Poster | 2305.15048 | [
"https://github.com/metesertkan/ranger"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
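Ranger's effect-size-based aggregation follows standard meta-analysis practice: compute a per-task effect with its variance, then combine tasks with inverse-variance weights into one summary effect. The snippet below sketches a fixed-effect version of that computation in plain numpy; it illustrates the statistical idea rather than Ranger's own code.

```python
import numpy as np

def fixed_effect_summary(effects, variances):
    """Inverse-variance weighted summary effect with a 95% confidence interval."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    summary = float(np.sum(w * effects) / np.sum(w))
    se = float(np.sqrt(1.0 / np.sum(w)))
    return summary, (summary - 1.96 * se, summary + 1.96 * se)

# Toy per-task effect sizes (e.g., standardized mean differences) and variances.
print(fixed_effect_summary([0.30, 0.12, 0.25], [0.02, 0.05, 0.01]))
```

A random-effects variant would add an estimate of between-task variance to the weights; forest plots then display the per-task effects alongside the summary.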
https://aclanthology.org/2023.acl-demo.57.bib | https://aclanthology.org/2023.acl-demo.57/ | @inproceedings{piktus-etal-2023-gaia,
title = "{GAIA} Search: Hugging Face and Pyserini Interoperability for {NLP} Training Data Exploration",
author = "Piktus, Aleksandra and
Ogundepo, Odunayo and
Akiki, Christopher and
Oladipo, Akintunde and
Zhang, Xinyu and
Schoelkopf, Hailey and
Biderman, Stella and
Potthast, Martin and
Lin, Jimmy",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.57",
doi = "10.18653/v1/2023.acl-demo.57",
pages = "588--598",
abstract = "Noticing the urgent need to provide tools for fast and user-friendly qualitative analysis of large-scale textual corpora of the modern NLP, we propose to turn to the mature and well-tested methods from the domain of Information Retrieval (IR) - a research field with a long history of tackling TB-scale document collections. We discuss how Pyserini - a widely used toolkit for reproducible IR research can be integrated with the Hugging Face ecosystem of open-source AI libraries and artifacts. We leverage the existing functionalities of both platforms while proposing novel features further facilitating their integration. Our goal is to give NLP researchers tools that will allow them to develop retrieval-based instrumentation for their data analytics needs with ease and agility. We include a Jupyter Notebook-based walk through the core interoperability features, available on GitHub: \url{https://github.com/huggingface/gaia}. We then demonstrate how the ideas we present can be operationalized to create a powerful tool for qualitative data analysis in NLP. We present GAIA Search - a search engine built following previously laid out principles, giving access to four popular large-scale text collections. GAIA serves a dual purpose of illustrating the potential of methodologies we discuss but also as a standalone qualitative analysis tool that can be leveraged by NLP researchers aiming to understand datasets prior to using them in training. GAIA is hosted live on Hugging Face Spaces: \url{https://huggingface.co/spaces/spacerini/gaia}.",
}
| Noticing the urgent need to provide tools for fast and user-friendly qualitative analysis of large-scale textual corpora in modern NLP, we propose to turn to the mature and well-tested methods from the domain of Information Retrieval (IR) - a research field with a long history of tackling TB-scale document collections. We discuss how Pyserini - a widely used toolkit for reproducible IR research - can be integrated with the Hugging Face ecosystem of open-source AI libraries and artifacts. We leverage the existing functionalities of both platforms while proposing novel features further facilitating their integration. Our goal is to give NLP researchers tools that will allow them to develop retrieval-based instrumentation for their data analytics needs with ease and agility. We include a Jupyter Notebook-based walk through the core interoperability features, available on GitHub: \url{https://github.com/huggingface/gaia}. We then demonstrate how the ideas we present can be operationalized to create a powerful tool for qualitative data analysis in NLP. We present GAIA Search - a search engine built following previously laid out principles, giving access to four popular large-scale text collections. GAIA serves a dual purpose: illustrating the potential of the methodologies we discuss and acting as a standalone qualitative analysis tool that can be leveraged by NLP researchers aiming to understand datasets prior to using them in training. GAIA is hosted live on Hugging Face Spaces: \url{https://huggingface.co/spaces/spacerini/gaia}. | [
"Piktus, Aleks",
"ra",
"Ogundepo, Odunayo",
"Akiki, Christopher",
"Oladipo, Akintunde",
"Zhang, Xinyu",
"Schoelkopf, Hailey",
"Biderman, Stella",
"Potthast, Martin",
"Lin, Jimmy"
] | GAIA Search: Hugging Face and Pyserini Interoperability for NLP Training Data Exploration | acl-demo.57 | Poster | 2306.01481 | [
"https://github.com/huggingface/gaia"
] | https://huggingface.co/papers/2306.01481 | 3 | 0 | 0 | 9 | 1 | [] | [
"christopher/my-experiment-repo"
] | [
"spacerini/gaia"
] |
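The GAIA entry above builds on Pyserini's prebuilt indexes. The following sketch shows the kind of BM25 lookup involved, assuming a recent Pyserini release (module paths and prebuilt index names vary across versions):

```python
# pip install pyserini  (the Lucene backend requires a Java runtime)
from pyserini.search.lucene import LuceneSearcher

# Download a prebuilt BM25 index and run a keyword query against it.
searcher = LuceneSearcher.from_prebuilt_index("robust04")
hits = searcher.search("black bear attacks", k=5)
for hit in hits:
    print(hit.docid, round(hit.score, 3))
```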
https://aclanthology.org/2023.acl-demo.58.bib | https://aclanthology.org/2023.acl-demo.58/ | @inproceedings{zharikova-etal-2023-deeppavlov,
title = "{D}eep{P}avlov Dream: Platform for Building Generative {AI} Assistants",
author = "Zharikova, Diliara and
Kornev, Daniel and
Ignatov, Fedor and
Talimanchuk, Maxim and
Evseev, Dmitry and
Petukhova, Ksenya and
Smilga, Veronika and
Karpov, Dmitry and
Shishkina, Yana and
Kosenko, Dmitry and
Burtsev, Mikhail",
editor = "Bollegala, Danushka and
Huang, Ruihong and
Ritter, Alan",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-demo.58",
doi = "10.18653/v1/2023.acl-demo.58",
pages = "599--607",
abstract = "An open-source DeepPavlov Dream Platform is specifically tailored for development of complex dialog systems like Generative AI Assistants. The stack prioritizes efficiency, modularity, scalability, and extensibility with the goal to make it easier to develop complex dialog systems from scratch. It supports modular approach to implementation of conversational agents enabling their development through the choice of NLP components and conversational skills from a rich library organized into the distributions of ready-for-use multi-skill AI assistant systems. In DeepPavlov Dream, multi-skill Generative AI Assistant consists of NLP components that extract features from user utterances, conversational skills that generate or retrieve a response, skill and response selectors that facilitate choice of relevant skills and the best response, as well as a conversational orchestrator that enables creation of multi-skill Generative AI Assistants scalable up to industrial grade AI assistants. The platform allows to integrate large language models into dialog pipeline, customize with prompt engineering, handle multiple prompts during the same dialog session and create simple multimodal assistants.",
}
| The open-source DeepPavlov Dream platform is specifically tailored for the development of complex dialog systems like Generative AI Assistants. The stack prioritizes efficiency, modularity, scalability, and extensibility, with the goal of making it easier to develop complex dialog systems from scratch. It supports a modular approach to implementing conversational agents, enabling their development through the choice of NLP components and conversational skills from a rich library organized into distributions of ready-for-use multi-skill AI assistant systems. In DeepPavlov Dream, a multi-skill Generative AI Assistant consists of NLP components that extract features from user utterances, conversational skills that generate or retrieve a response, skill and response selectors that facilitate the choice of relevant skills and the best response, as well as a conversational orchestrator that enables the creation of multi-skill Generative AI Assistants scalable up to industrial-grade AI assistants. The platform allows developers to integrate large language models into the dialog pipeline, customize them with prompt engineering, handle multiple prompts during the same dialog session, and create simple multimodal assistants. | [
"Zharikova, Diliara",
"Kornev, Daniel",
"Ignatov, Fedor",
"Talimanchuk, Maxim",
"Evseev, Dmitry",
"Petukhova, Ksenya",
"Smilga, Veronika",
"Karpov, Dmitry",
"Shishkina, Yana",
"Kosenko, Dmitry",
"Burtsev, Mikhail"
] | DeepPavlov Dream: Platform for Building Generative AI Assistants | acl-demo.58 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.1.bib | https://aclanthology.org/2023.acl-srw.1/ | @inproceedings{pu-demberg-2023-chatgpt,
title = "{C}hat{GPT} vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer",
author = "Pu, Dongqi and
Demberg, Vera",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.1",
doi = "10.18653/v1/2023.acl-srw.1",
pages = "1--18",
abstract = "Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT{'}s performance in two controllable generation tasks, with respect to ChatGPT{'}s ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model{'}s performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.",
}
| Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT{'}s performance in two controllable generation tasks, with respect to ChatGPT{'}s ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model{'}s performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style. | [
"Pu, Dongqi",
"Demberg, Vera"
] | ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer | acl-srw.1 | Poster | 2306.07799 | [
""
] | https://huggingface.co/papers/2306.07799 | 0 | 0 | 0 | 2 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-srw.2.bib | https://aclanthology.org/2023.acl-srw.2/ | @inproceedings{jia-2023-multi,
title = "Multi-Dialectal Representation Learning of Sinitic Phonology",
author = "Jia, Zhibai",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.2",
doi = "10.18653/v1/2023.acl-srw.2",
pages = "19--29",
abstract = "Machine learning techniques have shown their competence for representing and reasoning in symbolic systems such as language and phonology. In Sinitic Historical Phonology, notable tasks that could benefit from machine learning include the comparison of dialects and reconstruction of proto-languages systems. Motivated by this, this paper provides an approach for obtaining multi-dialectal representations of Sinitic syllables, by constructing a knowledge graph from structured phonological data ,then applying the BoxE technique from knowledge base learning. We applied unsupervised clustering techniques to the obtained representations to observe that the representations capture phonemic contrast from the input dialects. Furthermore, we trained classifiers to perform inference of unobserved Middle Chinese labels, showing the representations{'} potential for indicating archaic, proto-language features. The representations can be used for performing completion of fragmented Sinitic phonological knowledge bases, estimating divergences between different characters, or aiding the exploration and reconstruction of archaic features.",
}
| Machine learning techniques have shown their competence for representing and reasoning in symbolic systems such as language and phonology. In Sinitic Historical Phonology, notable tasks that could benefit from machine learning include the comparison of dialects and reconstruction of proto-language systems. Motivated by this, this paper provides an approach for obtaining multi-dialectal representations of Sinitic syllables, by constructing a knowledge graph from structured phonological data, then applying the BoxE technique from knowledge base learning. We applied unsupervised clustering techniques to the obtained representations to observe that the representations capture phonemic contrast from the input dialects. Furthermore, we trained classifiers to perform inference of unobserved Middle Chinese labels, showing the representations{'} potential for indicating archaic, proto-language features. The representations can be used for performing completion of fragmented Sinitic phonological knowledge bases, estimating divergences between different characters, or aiding the exploration and reconstruction of archaic features. | [
"Jia, Zhibai"
] | Multi-Dialectal Representation Learning of Sinitic Phonology | acl-srw.2 | Poster | 2307.01209 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-srw.4.bib | https://aclanthology.org/2023.acl-srw.4/ | @inproceedings{wang-etal-2023-prompt,
title = "Prompt-based Zero-shot Text Classification with Conceptual Knowledge",
author = "Wang, Yuqi and
Wang, Wei and
Chen, Qi and
Huang, Kaizhu and
Nguyen, Anh and
De, Suparna",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.4",
doi = "10.18653/v1/2023.acl-srw.4",
pages = "30--38",
abstract = "In recent years, pre-trained language models have garnered significant attention due to their effectiveness, which stems from the rich knowledge acquired during pre-training. To mitigate the inconsistency issues between pre-training tasks and downstream tasks and to facilitate the resolution of language-related issues, prompt-based approaches have been introduced, which are particularly useful in low-resource scenarios. However, existing approaches mostly rely on verbalizers to translate the predicted vocabulary to task-specific labels. The major limitations of this approach are the ignorance of potentially relevant domain-specific words and being biased by the pre-training data. To address these limitations, we propose a framework that incorporates conceptual knowledge for text classification in the extreme zero-shot setting. The framework includes prompt-based keyword extraction, weight assignment to each prompt keyword, and final representation estimation in the knowledge graph embedding space. We evaluated the method on four widely-used datasets for sentiment analysis and topic detection, demonstrating that it consistently outperforms recently-developed prompt-based approaches in the same experimental settings.",
}
| In recent years, pre-trained language models have garnered significant attention due to their effectiveness, which stems from the rich knowledge acquired during pre-training. To mitigate the inconsistency issues between pre-training tasks and downstream tasks and to facilitate the resolution of language-related issues, prompt-based approaches have been introduced, which are particularly useful in low-resource scenarios. However, existing approaches mostly rely on verbalizers to translate the predicted vocabulary to task-specific labels. The major limitations of this approach are that it ignores potentially relevant domain-specific words and is biased by the pre-training data. To address these limitations, we propose a framework that incorporates conceptual knowledge for text classification in the extreme zero-shot setting. The framework includes prompt-based keyword extraction, weight assignment to each prompt keyword, and final representation estimation in the knowledge graph embedding space. We evaluated the method on four widely-used datasets for sentiment analysis and topic detection, demonstrating that it consistently outperforms recently-developed prompt-based approaches in the same experimental settings. | [
"Wang, Yuqi",
"Wang, Wei",
"Chen, Qi",
"Huang, Kaizhu",
"Nguyen, Anh",
"De, Suparna"
] | Prompt-based Zero-shot Text Classification with Conceptual Knowledge | acl-srw.4 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.5.bib | https://aclanthology.org/2023.acl-srw.5/ | @inproceedings{fujii-etal-2023-different,
title = "How do different tokenizers perform on downstream tasks in scriptio continua languages?: A case study in {J}apanese",
author = "Fujii, Takuro and
Shibata, Koki and
Yamaguchi, Atsuki and
Morishita, Terufumi and
Sogawa, Yasuhiro",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.5",
doi = "10.18653/v1/2023.acl-srw.5",
pages = "39--49",
abstract = "This paper investigates the effect of tokenizers on the downstream performance of pretrained language models (PLMs) in scriptio continua languages where no explicit spaces exist between words, using Japanese as a case study. The tokenizer for such languages often consists of a morphological analyzer and a subword tokenizer, requiring us to conduct a comprehensive study of all possible pairs. However, previous studies lack this comprehensiveness. We therefore train extensive sets of tokenizers, build a PLM using each, and measure the downstream performance on a wide range of tasks. Our results demonstrate that each downstream task has a different optimal morphological analyzer, and that it is better to use Byte-Pair-Encoding or Unigram rather than WordPiece as a subword tokenizer, regardless of the type of task.",
}
| This paper investigates the effect of tokenizers on the downstream performance of pretrained language models (PLMs) in scriptio continua languages where no explicit spaces exist between words, using Japanese as a case study. The tokenizer for such languages often consists of a morphological analyzer and a subword tokenizer, requiring us to conduct a comprehensive study of all possible pairs. However, previous studies lack this comprehensiveness. We therefore train extensive sets of tokenizers, build a PLM using each, and measure the downstream performance on a wide range of tasks. Our results demonstrate that each downstream task has a different optimal morphological analyzer, and that it is better to use Byte-Pair-Encoding or Unigram rather than WordPiece as a subword tokenizer, regardless of the type of task. | [
"Fujii, Takuro",
"Shibata, Koki",
"Yamaguchi, Atsuki",
"Morishita, Terufumi",
"Sogawa, Yasuhiro"
] | How do different tokenizers perform on downstream tasks in scriptio continua languages?: A case study in Japanese | acl-srw.5 | Poster | 2306.09572 | [
"https://github.com/hitachi-nlp/compare-ja-tokenizer"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-srw.7.bib | https://aclanthology.org/2023.acl-srw.7/ | @inproceedings{lyu-etal-2023-semantic,
title = "Semantic-Aware Dynamic Retrospective-Prospective Reasoning for Event-Level Video Question Answering",
author = "Lyu, Chenyang and
Ji, Tianbo and
Graham, Yvette and
Foster, Jennifer",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.7",
doi = "10.18653/v1/2023.acl-srw.7",
pages = "50--56",
abstract = "Event-Level Video Question Answering (EVQA) requires complex reasoning across video events to obtain the visual information needed to provide optimal answers. However, despite significant progress in model performance, few studies have focused on using the explicit semantic connections between the question and visual information especially at the event level. There is need for using such semantic connections to facilitate complex reasoning across video frames. Therefore, we propose a semantic-aware dynamic retrospective-prospective reasoning approach for video-based question answering. Specifically, we explicitly use the Semantic Role Labeling (SRL) structure of the question in the dynamic reasoning process where we decide to move to the next frame based on which part of the SRL structure (agent, verb, patient, etc.) of the question is being focused on. We conduct experiments on a benchmark EVQA dataset - TrafficQA. Results show that our proposed approach achieves superior performance compared to previous state-of-the-art models. Our code is publicly available at \url{https://github.com/lyuchenyang/Semantic-aware-VideoQA}.",
}
| Event-Level Video Question Answering (EVQA) requires complex reasoning across video events to obtain the visual information needed to provide optimal answers. However, despite significant progress in model performance, few studies have focused on using the explicit semantic connections between the question and visual information especially at the event level. There is a need for using such semantic connections to facilitate complex reasoning across video frames. Therefore, we propose a semantic-aware dynamic retrospective-prospective reasoning approach for video-based question answering. Specifically, we explicitly use the Semantic Role Labeling (SRL) structure of the question in the dynamic reasoning process where we decide to move to the next frame based on which part of the SRL structure (agent, verb, patient, etc.) of the question is being focused on. We conduct experiments on a benchmark EVQA dataset - TrafficQA. Results show that our proposed approach achieves superior performance compared to previous state-of-the-art models. Our code is publicly available at https://github.com/lyuchenyang/Semantic-aware-VideoQA. | [
"Lyu, Chenyang",
"Ji, Tianbo",
"Graham, Yvette",
"Foster, Jennifer"
] | Semantic-Aware Dynamic Retrospective-Prospective Reasoning for Event-Level Video Question Answering | acl-srw.7 | Poster | 2305.08059 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-srw.8.bib | https://aclanthology.org/2023.acl-srw.8/ | @inproceedings{sugimoto-etal-2023-jamp,
title = "Jamp: Controlled {J}apanese Temporal Inference Dataset for Evaluating Generalization Capacity of Language Models",
author = "Sugimoto, Tomoki and
Onoe, Yasumasa and
Yanaka, Hitomi",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.8",
doi = "10.18653/v1/2023.acl-srw.8",
pages = "57--68",
abstract = "Natural Language Inference (NLI) tasks involving temporal inference remain challenging for pre-trained language models (LMs). Although various datasets have been created for this task, they primarily focus on English and do not address the need for resources in other languages. It is unclear whether current LMs realize the generalization capacity for temporal inference across languages. In this paper, we present Jamp, a Japanese NLI benchmark focused on temporal inference. Our dataset includes a range of temporal inference patterns, which enables us to conduct fine-grained analysis. To begin the data annotation process, we create diverse inference templates based on the formal semantics test suites. We then automatically generate diverse NLI examples by using the Japanese case frame dictionary and well-designed templates while controlling the distribution of inference patterns and gold labels. We evaluate the generalization capacities of monolingual/multilingual LMs by splitting our dataset based on tense fragments (i.e., temporal inference patterns). Our findings demonstrate that LMs struggle with specific linguistic phenomena, such as habituality, indicating that there is potential for the development of more effective NLI models across languages.",
}
| Natural Language Inference (NLI) tasks involving temporal inference remain challenging for pre-trained language models (LMs). Although various datasets have been created for this task, they primarily focus on English and do not address the need for resources in other languages. It is unclear whether current LMs realize the generalization capacity for temporal inference across languages. In this paper, we present Jamp, a Japanese NLI benchmark focused on temporal inference. Our dataset includes a range of temporal inference patterns, which enables us to conduct fine-grained analysis. To begin the data annotation process, we create diverse inference templates based on the formal semantics test suites. We then automatically generate diverse NLI examples by using the Japanese case frame dictionary and well-designed templates while controlling the distribution of inference patterns and gold labels. We evaluate the generalization capacities of monolingual/multilingual LMs by splitting our dataset based on tense fragments (i.e., temporal inference patterns). Our findings demonstrate that LMs struggle with specific linguistic phenomena, such as habituality, indicating that there is potential for the development of more effective NLI models across languages. | [
"Sugimoto, Tomoki",
"Onoe, Yasumasa",
"Yanaka, Hitomi"
] | Jamp: Controlled Japanese Temporal Inference Dataset for Evaluating Generalization Capacity of Language Models | acl-srw.8 | Poster | 2306.10727 | [
"https://github.com/tomo-ut/temporalnli_dataset"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-srw.10.bib | https://aclanthology.org/2023.acl-srw.10/ | @inproceedings{sekizawa-etal-2023-constructing,
title = "Constructing Multilingual Code Search Dataset Using Neural Machine Translation",
author = "Sekizawa, Ryo and
Duan, Nan and
Lu, Shuai and
Yanaka, Hitomi",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.10",
doi = "10.18653/v1/2023.acl-srw.10",
pages = "69--75",
abstract = "Code search is a task to find programming codes that semantically match the given natural language queries. Even though some of the existing datasets for this task are multilingual on the programming language side, their query data are only in English. In this research, we create a multilingual code search dataset in four natural and four programming languages using a neural machine translation model. Using our dataset, we pre-train and fine-tune the Transformer-based models and then evaluate them on multiple code search test sets. Our results show that the model pre-trained with all natural and programming language data has performed best in most cases. By applying back-translation data filtering to our dataset, we demonstrate that the translation quality affects the model{'}s performance to a certain extent, but the data size matters more.",
}
| Code search is a task to find programming codes that semantically match the given natural language queries. Even though some of the existing datasets for this task are multilingual on the programming language side, their query data are only in English. In this research, we create a multilingual code search dataset in four natural and four programming languages using a neural machine translation model. Using our dataset, we pre-train and fine-tune the Transformer-based models and then evaluate them on multiple code search test sets. Our results show that the model pre-trained with all natural and programming language data has performed best in most cases. By applying back-translation data filtering to our dataset, we demonstrate that the translation quality affects the model's performance to a certain extent, but the data size matters more. | [
"Sekizawa, Ryo",
"Duan, Nan",
"Lu, Shuai",
"Yanaka, Hitomi"
] | Constructing Multilingual Code Search Dataset Using Neural Machine Translation | acl-srw.10 | Poster | 2306.15604 | [
"https://github.com/ynklab/xcodesearchnet"
] | https://huggingface.co/papers/2306.15604 | 0 | 1 | 1 | 4 | 1 | [
"ynklab/XCodeBERT"
] | [
"ynklab/XCodeSearchNet"
] | [] |
https://aclanthology.org/2023.acl-srw.12.bib | https://aclanthology.org/2023.acl-srw.12/ | @inproceedings{yuasa-etal-2023-multimodal,
title = "Multimodal Neural Machine Translation Using Synthetic Images Transformed by Latent Diffusion Model",
author = "Yuasa, Ryoya and
Tamura, Akihiro and
Kajiwara, Tomoyuki and
Ninomiya, Takashi and
Kato, Tsuneo",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.12",
doi = "10.18653/v1/2023.acl-srw.12",
pages = "76--82",
abstract = "This study proposes a new multimodal neural machine translation (MNMT) model using synthetic images transformed by a latent diffusion model. MNMT translates a source language sentence based on its related image, but the image usually contains noisy information that are not relevant to the source language sentence. Our proposed method first generates a synthetic image corresponding to the content of the source language sentence by using a latent diffusion model and then performs translation based on the synthetic image. The experiments on the English-German translation tasks using the Multi30k dataset demonstrate the effectiveness of the proposed method.",
}
| This study proposes a new multimodal neural machine translation (MNMT) model using synthetic images transformed by a latent diffusion model. MNMT translates a source language sentence based on its related image, but the image usually contains noisy information that is not relevant to the source language sentence. Our proposed method first generates a synthetic image corresponding to the content of the source language sentence by using a latent diffusion model and then performs translation based on the synthetic image. The experiments on the English-German translation tasks using the Multi30k dataset demonstrate the effectiveness of the proposed method. | [
"Yuasa, Ryoya",
"Tamura, Akihiro",
"Kajiwara, Tomoyuki",
"Ninomiya, Takashi",
"Kato, Tsuneo"
] | Multimodal Neural Machine Translation Using Synthetic Images Transformed by Latent Diffusion Model | acl-srw.12 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.15.bib | https://aclanthology.org/2023.acl-srw.15/ | @inproceedings{wang-etal-2023-enhancing,
title = "Enhancing {A}ncient {C}hinese Understanding with Derived Noisy Syntax Trees",
author = "Wang, Ping and
Zhang, Shitou and
Li, Zuchao and
Hou, Jingrui",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.15",
doi = "10.18653/v1/2023.acl-srw.15",
pages = "83--92",
abstract = "Despite the rapid development of neural-based models, syntax still plays a crucial role in modern natural language processing. However, few studies have incorporated syntactic information into ancient Chinese understanding tasks due to the lack of syntactic annotation. This paper explores the role of syntax in ancient Chinese understanding based on the noisy syntax trees from unsupervised derivation and modern Chinese syntax parsers. On top of that, we propose a novel syntax encoding component {--} confidence-based syntax encoding network (cSEN) to alleviate the side effects from the existing noise caused by unsupervised syntax derivation and the incompatibility between ancient and modern Chinese. Experiments on two typical ancient Chinese understanding tasks, ancient poetry theme classification and ancient-modern Chinese translation, demonstrate that syntactic information can effectively enhance the understanding of ancient Chinese over strong baselines, and that the proposed cSEN plays an important role in noisy scenarios.",
}
| Despite the rapid development of neural-based models, syntax still plays a crucial role in modern natural language processing. However, few studies have incorporated syntactic information into ancient Chinese understanding tasks due to the lack of syntactic annotation. This paper explores the role of syntax in ancient Chinese understanding based on the noisy syntax trees from unsupervised derivation and modern Chinese syntax parsers. On top of that, we propose a novel syntax encoding component – confidence-based syntax encoding network (cSEN) to alleviate the side effects from the existing noise caused by unsupervised syntax derivation and the incompatibility between ancient and modern Chinese. Experiments on two typical ancient Chinese understanding tasks, ancient poetry theme classification and ancient-modern Chinese translation, demonstrate that syntactic information can effectively enhance the understanding of ancient Chinese over strong baselines, and that the proposed cSEN plays an important role in noisy scenarios. | [
"Wang, Ping",
"Zhang, Shitou",
"Li, Zuchao",
"Hou, Jingrui"
] | Enhancing Ancient Chinese Understanding with Derived Noisy Syntax Trees | acl-srw.15 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.17.bib | https://aclanthology.org/2023.acl-srw.17/ | @inproceedings{gao-emami-2023-turing,
title = "The {T}uring Quest: Can Transformers Make Good {NPC}s?",
author = "Gao, Qi Chen and
Emami, Ali",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.17",
doi = "10.18653/v1/2023.acl-srw.17",
pages = "93--103",
abstract = "In this paper, we study the viability of the deployment of language models towards non-playable character (NPC) scripts, by introducing a novel pipeline for the automatic construction of NPC scripts using Transformer-based believable scripts for a variety of game genres and specifications. In addition, we propose a self-diagnosis method inspired by previous work to develop language models, tailored specifically to desirable NPC qualities such as coherency, believability, and degree of repetition. Finally, we propose a new benchmark, called The Turing Quest, which we use to show that the pipeline, when applied to GPT-3, can generate for a variety of game genres and contexts, NPC scripts that can fool judges in thinking they have been written by humans. We believe that these findings can greatly benefit both the gaming industry and its global community of users, since many current games continue to base their NPCs on manually-curated scripts that are resource-demanding and may curb the immersiveness and enjoyment of the user.",
}
| In this paper, we study the viability of the deployment of language models towards non-playable character (NPC) scripts, by introducing a novel pipeline for the automatic construction of NPC scripts using Transformer-based believable scripts for a variety of game genres and specifications. In addition, we propose a self-diagnosis method inspired by previous work to develop language models, tailored specifically to desirable NPC qualities such as coherency, believability, and degree of repetition. Finally, we propose a new benchmark, called The Turing Quest, which we use to show that the pipeline, when applied to GPT-3, can generate for a variety of game genres and contexts, NPC scripts that can fool judges into thinking they have been written by humans. We believe that these findings can greatly benefit both the gaming industry and its global community of users, since many current games continue to base their NPCs on manually-curated scripts that are resource-demanding and may curb the immersiveness and enjoyment of the user. | [
"Gao, Qi Chen",
"Emami, Ali"
] | The Turing Quest: Can Transformers Make Good NPCs? | acl-srw.17 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.18.bib | https://aclanthology.org/2023.acl-srw.18/ | @inproceedings{zheng-etal-2023-making,
title = "Making the Most Out of the Limited Context Length: Predictive Power Varies with Clinical Note Type and Note Section",
author = "Zheng, Hongyi and
Zhu, Yixin and
Jiang, Lavender and
Cho, Kyunghyun and
Oermann, Eric",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.18",
doi = "10.18653/v1/2023.acl-srw.18",
pages = "104--108",
abstract = "Recent advances in large language models have led to renewed interest in natural language processing in healthcare using the free text of clinical notes. One distinguishing characteristic of clinical notes is their long time span over multiple long documents. The unique structure of clinical notes creates a new design choice: when the context length for a language model predictor is limited, which part of clinical notes should we choose as the input? Existing studies either choose the inputs with domain knowledge or simply truncate them. We propose a framework to analyze the sections with high predictive power. Using MIMIC-III, we show that: 1) predictive power distribution is different between nursing notes and discharge notes and 2) combining different types of notes could improve performance when the context length is large. Our findings suggest that a carefully selected sampling function could enable more efficient information extraction from clinical notes.",
}
| Recent advances in large language models have led to renewed interest in natural language processing in healthcare using the free text of clinical notes. One distinguishing characteristic of clinical notes is their long time span over multiple long documents. The unique structure of clinical notes creates a new design choice: when the context length for a language model predictor is limited, which part of clinical notes should we choose as the input? Existing studies either choose the inputs with domain knowledge or simply truncate them. We propose a framework to analyze the sections with high predictive power. Using MIMIC-III, we show that: 1) predictive power distribution is different between nursing notes and discharge notes and 2) combining different types of notes could improve performance when the context length is large. Our findings suggest that a carefully selected sampling function could enable more efficient information extraction from clinical notes. | [
"Zheng, Hongyi",
"Zhu, Yixin",
"Jiang, Lavender",
"Cho, Kyunghyun",
"Oermann, Eric"
] | Making the Most Out of the Limited Context Length: Predictive Power Varies with Clinical Note Type and Note Section | acl-srw.18 | Poster | 2307.07051 | [
""
] | https://huggingface.co/papers/2307.07051 | 0 | 1 | 0 | 5 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-srw.19.bib | https://aclanthology.org/2023.acl-srw.19/ | @inproceedings{yang-etal-2023-intriguing,
title = "Intriguing Effect of the Correlation Prior on {ICD}-9 Code Assignment",
author = "Yang, Zihao and
Zhang, Chenkang and
Wu, Muru and
Liu, Xujin and
Jiang, Lavender and
Cho, Kyunghyun and
Oermann, Eric",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.19",
doi = "10.18653/v1/2023.acl-srw.19",
pages = "109--118",
abstract = "The Ninth Revision of the International Classification of Diseases (ICD-9) is a standardized coding system used to classify health conditions. It is used for billing, tracking individual patient conditions, and for epidemiology. The highly detailed and technical nature of the codes and their associated medical conditions make it difficult for humans to accurately record them. Researchers have explored the use of neural networks, particularly language models, for automated ICD-9 code assignment. However, the imbalanced distribution of ICD-9 codes leads to poor performance. One solution is to use domain knowledge to incorporate a useful prior. This paper evaluates the usefulness of the correlation bias: we hypothesize that correlations between ICD-9 codes and other medical codes could help improve language models{'} performance. We showed that while the correlation bias worsens the overall performance, the effect on individual class can be negative or positive. Performance on classes that are more imbalanced and less correlated with other codes is more sensitive to incorporating the correlation bias. This suggests that while the correlation bias has potential to improve ICD-9 code assignment in certain cases, the applicability criteria need to be more carefully studied.",
}
| The Ninth Revision of the International Classification of Diseases (ICD-9) is a standardized coding system used to classify health conditions. It is used for billing, tracking individual patient conditions, and for epidemiology. The highly detailed and technical nature of the codes and their associated medical conditions make it difficult for humans to accurately record them. Researchers have explored the use of neural networks, particularly language models, for automated ICD-9 code assignment. However, the imbalanced distribution of ICD-9 codes leads to poor performance. One solution is to use domain knowledge to incorporate a useful prior. This paper evaluates the usefulness of the correlation bias: we hypothesize that correlations between ICD-9 codes and other medical codes could help improve language models' performance. We showed that while the correlation bias worsens the overall performance, the effect on individual classes can be negative or positive. Performance on classes that are more imbalanced and less correlated with other codes is more sensitive to incorporating the correlation bias. This suggests that while the correlation bias has potential to improve ICD-9 code assignment in certain cases, the applicability criteria need to be more carefully studied. | [
"Yang, Zihao",
"Zhang, Chenkang",
"Wu, Muru",
"Liu, Xujin",
"Jiang, Lavender",
"Cho, Kyunghyun",
"Oermann, Eric"
] | Intriguing Effect of the Correlation Prior on ICD-9 Code Assignment | acl-srw.19 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.20.bib | https://aclanthology.org/2023.acl-srw.20/ | @inproceedings{baran-etal-2023-classical,
title = "Classical Out-of-Distribution Detection Methods Benchmark in Text Classification Tasks",
author = "Baran, Mateusz and
Baran, Joanna and
W{\'o}jcik, Mateusz and
Zi{\k{e}}ba, Maciej and
Gonczarek, Adam",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.20",
doi = "10.18653/v1/2023.acl-srw.20",
pages = "119--129",
abstract = "State-of-the-art models can perform well in controlled environments, but they often struggle when presented with out-of-distribution (OOD) examples, making OOD detection a critical component of NLP systems. In this paper, we focus on highlighting the limitations of existing approaches to OOD detection in NLP. Specifically, we evaluated eight OOD detection methods that are easily integrable into existing NLP systems and require no additional OOD data or model modifications. One of our contributions is providing a well-structured research environment that allows for full reproducibility of the results. Additionally, our analysis shows that existing OOD detection methods for NLP tasks are not yet sufficiently sensitive to capture all samples characterized by various types of distributional shifts. Particularly challenging testing scenarios arise in cases of background shift and randomly shuffled word order within in domain texts. This highlights the need for future work to develop more effective OOD detection approaches for the NLP problems, and our work provides a well-defined foundation for further research in this area.",
}
| State-of-the-art models can perform well in controlled environments, but they often struggle when presented with out-of-distribution (OOD) examples, making OOD detection a critical component of NLP systems. In this paper, we focus on highlighting the limitations of existing approaches to OOD detection in NLP. Specifically, we evaluated eight OOD detection methods that are easily integrable into existing NLP systems and require no additional OOD data or model modifications. One of our contributions is providing a well-structured research environment that allows for full reproducibility of the results. Additionally, our analysis shows that existing OOD detection methods for NLP tasks are not yet sufficiently sensitive to capture all samples characterized by various types of distributional shifts. Particularly challenging testing scenarios arise in cases of background shift and randomly shuffled word order within in-domain texts. This highlights the need for future work to develop more effective OOD detection approaches for NLP problems, and our work provides a well-defined foundation for further research in this area. | [
"Baran, Mateusz",
"Baran, Joanna",
"W{\\'o}jcik, Mateusz",
"Zi{\\k{e}}ba, Maciej",
"Gonczarek, Adam"
] | Classical Out-of-Distribution Detection Methods Benchmark in Text Classification Tasks | acl-srw.20 | Poster | 2307.07002 | [
"https://github.com/mateuszbaransanok/trustworthyai"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-srw.22.bib | https://aclanthology.org/2023.acl-srw.22/ | @inproceedings{nagasawa-etal-2023-lms,
title = "Can {LM}s Store and Retrieve 1-to-N Relational Knowledge?",
author = "Nagasawa, Haruki and
Heinzerling, Benjamin and
Kokuta, Kazuma and
Inui, Kentaro",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.22",
doi = "10.18653/v1/2023.acl-srw.22",
pages = "130--138",
abstract = "It has been suggested that pretrained language models can be viewed as knowledge bases. One of the prerequisites for using language models as knowledge bases is how accurately they can store and retrieve world knowledge. It is already revealed that language models can store much 1-to-1 relational knowledge, such as {''}country and its capital,{''} with high memorization accuracy. On the other hand, world knowledge includes not only 1-to-1 but also 1-to-N relational knowledge, such as {''}parent and children.{''}However, it is not clear how accurately language models can handle 1-to-N relational knowledge. To investigate language models{'} abilities toward 1-to-N relational knowledge, we start by designing the problem settings. Specifically, we organize the character of 1-to-N relational knowledge and define two essential skills: (i) memorizing multiple objects individually and (ii) retrieving multiple stored objects without excesses or deficiencies at once. We inspect LMs{'} ability to handle 1-to-N relational knowledge on the controlled synthesized data. As a result, we report that it is possible to memorize multiple objects with high accuracy, but generalizing the retrieval ability (expressly, enumeration) is challenging.",
}
| It has been suggested that pretrained language models can be viewed as knowledge bases. One of the prerequisites for using language models as knowledge bases is how accurately they can store and retrieve world knowledge. It is already revealed that language models can store much 1-to-1 relational knowledge, such as "country and its capital," with high memorization accuracy. On the other hand, world knowledge includes not only 1-to-1 but also 1-to-N relational knowledge, such as "parent and children." However, it is not clear how accurately language models can handle 1-to-N relational knowledge. To investigate language models' abilities toward 1-to-N relational knowledge, we start by designing the problem settings. Specifically, we organize the character of 1-to-N relational knowledge and define two essential skills: (i) memorizing multiple objects individually and (ii) retrieving multiple stored objects without excesses or deficiencies at once. We inspect LMs' ability to handle 1-to-N relational knowledge on the controlled synthesized data. As a result, we report that it is possible to memorize multiple objects with high accuracy, but generalizing the retrieval ability (expressly, enumeration) is challenging. | [
"Nagasawa, Haruki",
"Heinzerling, Benjamin",
"Kokuta, Kazuma",
"Inui, Kentaro"
] | Can LMs Store and Retrieve 1-to-N Relational Knowledge? | acl-srw.22 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.24.bib | https://aclanthology.org/2023.acl-srw.24/ | @inproceedings{imai-etal-2023-theoretical,
title = "Theoretical Linguistics Rivals Embeddings in Language Clustering for Multilingual Named Entity Recognition",
author = "Imai, Sakura and
Kawahara, Daisuke and
Orita, Naho and
Oda, Hiromune",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.24",
doi = "10.18653/v1/2023.acl-srw.24",
pages = "139--151",
abstract = "While embedding-based methods have been dominant in language clustering for multilingual tasks, clustering based on linguistic features has not yet been explored much, as it remains baselines (Tan et al., 2019; Shaffer, 2021). This study investigates whether and how theoretical linguistics improves language clustering for multilingual named entity recognition (NER). We propose two types of language groupings: one based on morpho-syntactic features in a nominal domain and one based on a head parameter. Our NER experiments show that the proposed methods largely outperform a state-of-the-art embedding-based model, suggesting that theoretical linguistics plays a significant role in multilingual learning tasks.",
}
| While embedding-based methods have been dominant in language clustering for multilingual tasks, clustering based on linguistic features has not yet been explored much, as it has mostly been used as a baseline (Tan et al., 2019; Shaffer, 2021). This study investigates whether and how theoretical linguistics improves language clustering for multilingual named entity recognition (NER). We propose two types of language groupings: one based on morpho-syntactic features in a nominal domain and one based on a head parameter. Our NER experiments show that the proposed methods largely outperform a state-of-the-art embedding-based model, suggesting that theoretical linguistics plays a significant role in multilingual learning tasks. | [
"Imai, Sakura",
"Kawahara, Daisuke",
"Orita, Naho",
"Oda, Hiromune"
] | Theoretical Linguistics Rivals Embeddings in Language Clustering for Multilingual Named Entity Recognition | acl-srw.24 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.26.bib | https://aclanthology.org/2023.acl-srw.26/ | @inproceedings{skerath-etal-2023-native,
title = "Native Language Prediction from Gaze: a Reproducibility Study",
author = "Skerath, Lina and
Toborek, Paulina and
Zieli{\'n}ska, Anita and
Barrett, Maria and
Van Der Goot, Rob",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.26",
doi = "10.18653/v1/2023.acl-srw.26",
pages = "152--159",
abstract = "Numerous studies found that the linguistic properties of a person{'}s native language affect the cognitive processing of other languages. However, only one study has shown that it was possible to identify the native language based on eye-tracking records of natural L2 reading using machine learning. A new corpus allows us to replicate these results on a more interrelated and larger set of native languages. Our results show that comparable classification performance is maintained despite using less data. However, analysis shows that the correlation between L2 eye movements and native language similarity may be more complex than the original study found.",
}
| Numerous studies found that the linguistic properties of a person's native language affect the cognitive processing of other languages. However, only one study has shown that it was possible to identify the native language based on eye-tracking records of natural L2 reading using machine learning. A new corpus allows us to replicate these results on a more interrelated and larger set of native languages. Our results show that comparable classification performance is maintained despite using less data. However, analysis shows that the correlation between L2 eye movements and native language similarity may be more complex than the original study found. | [
"Skerath, Lina",
"Toborek, Paulina",
"Zieli{\\'n}ska, Anita",
"Barrett, Maria",
"Van Der Goot, Rob"
] | Native Language Prediction from Gaze: a Reproducibility Study | acl-srw.26 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.27.bib | https://aclanthology.org/2023.acl-srw.27/ | @inproceedings{cui-etal-2023-medtem2,
title = "{M}ed{T}em2.0: Prompt-based Temporal Classification of Treatment Events from Discharge Summaries",
author = "Cui, Yang and
Han, Lifeng and
Nenadic, Goran",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.27",
doi = "10.18653/v1/2023.acl-srw.27",
pages = "160--183",
abstract = "Discharge summaries are comprehensive medical records that encompass vital information about a patient{'}s hospital stay. A crucial aspect of discharge summaries is the temporal information of treatments administered throughout the patient{'}s illness. With an extensive volume of clinical documents, manually extracting and compiling a patient{'}s medication list can be laborious, time-consuming, and susceptible to errors. The objective of this paper is to build upon the recent development on clinical NLP by temporally classifying treatments in clinical texts, specifically determining whether a treatment was administered between the time of admission and discharge from the hospital. State-of-the-art NLP methods including prompt-based learning on Generative Pre-trained Transformers (GPTs) models and fine-tuning on pre-trained language models (PLMs) such as BERT were employed to classify temporal relations between treatments and hospitalisation periods in discharge summaries. Fine-tuning with the BERT model achieved an F1 score of 92.45{\%} and a balanced accuracy of 77.56{\%}, while prompt learning using the T5 model and mixed templates resulted in an F1 score of 90.89{\%} and a balanced accuracy of 72.07{\%}.Our codes and data are available at \url{https://github.com/HECTA-UoM/MedTem}.",
}
| Discharge summaries are comprehensive medical records that encompass vital information about a patient's hospital stay. A crucial aspect of discharge summaries is the temporal information of treatments administered throughout the patient's illness. With an extensive volume of clinical documents, manually extracting and compiling a patient's medication list can be laborious, time-consuming, and susceptible to errors. The objective of this paper is to build upon the recent development on clinical NLP by temporally classifying treatments in clinical texts, specifically determining whether a treatment was administered between the time of admission and discharge from the hospital. State-of-the-art NLP methods including prompt-based learning on Generative Pre-trained Transformers (GPTs) models and fine-tuning on pre-trained language models (PLMs) such as BERT were employed to classify temporal relations between treatments and hospitalisation periods in discharge summaries. Fine-tuning with the BERT model achieved an F1 score of 92.45% and a balanced accuracy of 77.56%, while prompt learning using the T5 model and mixed templates resulted in an F1 score of 90.89% and a balanced accuracy of 72.07%. Our codes and data are available at https://github.com/HECTA-UoM/MedTem. | [
"Cui, Yang",
"Han, Lifeng",
"Nenadic, Goran"
] | MedTem2.0: Prompt-based Temporal Classification of Treatment Events from Discharge Summaries | acl-srw.27 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.28.bib | https://aclanthology.org/2023.acl-srw.28/ | @inproceedings{bonafilia-etal-2023-sudden,
title = "Sudden Semantic Shifts in {S}wedish {NATO} discourse",
author = "Bonafilia, Brian and
Bruinsma, Bastiaan and
Saynova, Denitsa and
Johansson, Moa",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.28",
doi = "10.18653/v1/2023.acl-srw.28",
pages = "184--193",
abstract = "In this paper, we investigate a type of semantic shift that occurs when a sudden event radically changes public opinion on a topic. Looking at Sweden{'}s decision to apply for NATO membership in 2022, we use word embeddings to study how the associations users on Twitter have regarding NATO evolve. We identify several changes that we successfully validate against real-world events. However, the low engagement of the public with the issue often made it challenging to distinguish true signals from noise. We thus find that domain knowledge and data selection are of prime importance when using word embeddings to study semantic shifts.",
}
| In this paper, we investigate a type of semantic shift that occurs when a sudden event radically changes public opinion on a topic. Looking at Sweden's decision to apply for NATO membership in 2022, we use word embeddings to study how the associations users on Twitter have regarding NATO evolve. We identify several changes that we successfully validate against real-world events. However, the low engagement of the public with the issue often made it challenging to distinguish true signals from noise. We thus find that domain knowledge and data selection are of prime importance when using word embeddings to study semantic shifts. | [
"Bonafilia, Brian",
"Bruinsma, Bastiaan",
"Saynova, Denitsa",
"Johansson, Moa"
] | Sudden Semantic Shifts in Swedish NATO discourse | acl-srw.28 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.29.bib | https://aclanthology.org/2023.acl-srw.29/ | @inproceedings{sugiura-etal-2023-building,
title = "Building a Buzzer-quiz Answering System",
author = "Sugiura, Naoya and
Yamada, Kosuke and
Sasano, Ryohei and
Takeda, Koichi and
Toyama, Katsuhiko",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.29",
doi = "10.18653/v1/2023.acl-srw.29",
pages = "194--199",
abstract = "A buzzer quiz is a genre of quiz in which multiple players simultaneously listen to a quiz being read aloud and respond it by buzzing in as soon as they can predict the answer. Because incorrect answers often result in penalties, a buzzer-quiz answering system must not only predict the answer from only part of a question but also estimate the predicted answer{'}s accuracy. In this paper, we introduce two types of buzzer-quiz answering systems: (1) a system that directly generates an answer from part of a question by using an autoregressive language model; and (2) a system that first reconstructs the entire question by using an autoregressive language model and then determines the answer according to the reconstructed question. We then propose a method to estimate the accuracy of the answers for each system by using the internal scores of each model.",
}
| A buzzer quiz is a genre of quiz in which multiple players simultaneously listen to a quiz being read aloud and respond to it by buzzing in as soon as they can predict the answer. Because incorrect answers often result in penalties, a buzzer-quiz answering system must not only predict the answer from only part of a question but also estimate the predicted answer's accuracy. In this paper, we introduce two types of buzzer-quiz answering systems: (1) a system that directly generates an answer from part of a question by using an autoregressive language model; and (2) a system that first reconstructs the entire question by using an autoregressive language model and then determines the answer according to the reconstructed question. We then propose a method to estimate the accuracy of the answers for each system by using the internal scores of each model. | [
"Sugiura, Naoya",
"Yamada, Kosuke",
"Sasano, Ryohei",
"Takeda, Koichi",
"Toyama, Katsuhiko"
] | Building a Buzzer-quiz Answering System | acl-srw.29 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.30.bib | https://aclanthology.org/2023.acl-srw.30/ | @inproceedings{schneidermann-etal-2023-probing,
title = "Probing for Hyperbole in Pre-Trained Language Models",
author = "Schneidermann, Nina and
Hershcovich, Daniel and
Pedersen, Bolette",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.30",
doi = "10.18653/v1/2023.acl-srw.30",
pages = "200--211",
abstract = "Hyperbole is a common figure of speech, which is under-explored in NLP research. In this study, we conduct edge and minimal description length (MDL) probing experiments on three pre-trained language models (PLMs) in an attempt to explore the extent to which hyperbolic information is encoded in these models. We use both word-in-context and sentence-level representations as model inputs as a basis for comparison. We also annotate 63 hyperbole sentences from the HYPO dataset according to an operational taxonomy to conduct an error analysis to explore the encoding of different hyperbole categories. Our results show that hyperbole is to a limited extent encoded in PLMs, and mostly in the final layers. They also indicate that hyperbolic information may be better encoded by the sentence-level representations, which, due to the pragmatic nature of hyperbole, may therefore provide a more accurate and informative representation in PLMs. Finally, the inter-annotator agreement for our annotations, a Cohen{'}s Kappa of 0.339, suggest that the taxonomy categories may not be intuitive and need revision or simplification.",
}
| Hyperbole is a common figure of speech, which is under-explored in NLP research. In this study, we conduct edge and minimal description length (MDL) probing experiments on three pre-trained language models (PLMs) in an attempt to explore the extent to which hyperbolic information is encoded in these models. We use both word-in-context and sentence-level representations as model inputs as a basis for comparison. We also annotate 63 hyperbole sentences from the HYPO dataset according to an operational taxonomy to conduct an error analysis to explore the encoding of different hyperbole categories. Our results show that hyperbole is to a limited extent encoded in PLMs, and mostly in the final layers. They also indicate that hyperbolic information may be better encoded by the sentence-level representations, which, due to the pragmatic nature of hyperbole, may therefore provide a more accurate and informative representation in PLMs. Finally, the inter-annotator agreement for our annotations, a Cohen's Kappa of 0.339, suggests that the taxonomy categories may not be intuitive and need revision or simplification. | [
"Schneidermann, Nina",
"Hershcovich, Daniel",
"Pedersen, Bolette"
] | Probing for Hyperbole in Pre-Trained Language Models | acl-srw.30 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.31.bib | https://aclanthology.org/2023.acl-srw.31/ | @inproceedings{anikina-2023-towards,
title = "Towards Efficient Dialogue Processing in the Emergency Response Domain",
author = "Anikina, Tatiana",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.31",
doi = "10.18653/v1/2023.acl-srw.31",
pages = "212--225",
abstract = "In this paper we describe the task of adapting NLP models to dialogue processing in the emergency response domain. Our goal is to provide a recipe for building a system that performs dialogue act classification and domain-specific slot tagging while being efficient, flexible and robust. We show that adapter models Pfeiffer et al. (2020) perform well in the emergency response domain and benefit from additional dialogue context and speaker information. Comparing adapters to standard fine-tuned Transformer models we show that they achieve competitive results and can easily accommodate new tasks without significant memory increase since the base model can be shared between the adapters specializing on different tasks. We also address the problem of scarce annotations in the emergency response domain and evaluate different data augmentation techniques in a low-resource setting.",
}
| In this paper we describe the task of adapting NLP models to dialogue processing in the emergency response domain. Our goal is to provide a recipe for building a system that performs dialogue act classification and domain-specific slot tagging while being efficient, flexible and robust. We show that adapter models (Pfeiffer et al., 2020) perform well in the emergency response domain and benefit from additional dialogue context and speaker information. Comparing adapters to standard fine-tuned Transformer models we show that they achieve competitive results and can easily accommodate new tasks without significant memory increase since the base model can be shared between the adapters specializing on different tasks. We also address the problem of scarce annotations in the emergency response domain and evaluate different data augmentation techniques in a low-resource setting. | [
"Anikina, Tatiana"
] | Towards Efficient Dialogue Processing in the Emergency Response Domain | acl-srw.31 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.33.bib | https://aclanthology.org/2023.acl-srw.33/ | @inproceedings{mai-carson-berndsen-2023-already,
title = "{I} already said that! Degenerating redundant questions in open-domain dialogue systems.",
author = "Mai, Long and
Carson-berndsen, Julie",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.33",
doi = "10.18653/v1/2023.acl-srw.33",
pages = "226--236",
abstract = "Neural text generation models have achieved remarkable success in carrying on short open-domain conversations. However, their performance degrades significantly in the long term, especially in their ability to ask coherent questions. A significant issue is the generation of redundant questions where the answer has already been provided by the user. We adapt and evaluate different methods, including negative training, decoding, and classification, to mitigate the redundancy problem. We also propose a simple yet effective method for generating training data without the need for crowdsourcing human-human or human-bot conversations. Experiments with the BlenderBot model show that our combined method significantly reduces the rate of redundant questions from 27.2{\%} to 8.7{\%}, while improving the quality of the original model. The code, dataset, and trained models can be found at our repository.",
}
| Neural text generation models have achieved remarkable success in carrying on short open-domain conversations. However, their performance degrades significantly in the long term, especially in their ability to ask coherent questions. A significant issue is the generation of redundant questions where the answer has already been provided by the user. We adapt and evaluate different methods, including negative training, decoding, and classification, to mitigate the redundancy problem. We also propose a simple yet effective method for generating training data without the need for crowdsourcing human-human or human-bot conversations. Experiments with the BlenderBot model show that our combined method significantly reduces the rate of redundant questions from 27.2% to 8.7%, while improving the quality of the original model. The code, dataset, and trained models can be found at our repository. | [
"Mai, Long",
"Carson-berndsen, Julie"
] | I already said that! Degenerating redundant questions in open-domain dialogue systems. | acl-srw.33 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.34.bib | https://aclanthology.org/2023.acl-srw.34/ | @inproceedings{kodama-etal-2023-knowledge,
title = "Is a Knowledge-based Response Engaging?: An Analysis on Knowledge-Grounded Dialogue with Information Source Annotation",
author = "Kodama, Takashi and
Kiyomaru, Hirokazu and
Huang, Yin Jou and
Okahisa, Taro and
Kurohashi, Sadao",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.34",
doi = "10.18653/v1/2023.acl-srw.34",
pages = "237--243",
abstract = "Currently, most knowledge-grounded dialogue response generation models focus on reflecting given external knowledge. However, even when conveying external knowledge, humans integrate their own knowledge, experiences, and opinions with external knowledge to make their utterances engaging. In this study, we analyze such human behavior by annotating the utterances in an existing knowledge-grounded dialogue corpus. Each entity in the corpus is annotated with its information source, either derived from external knowledge (database-derived) or the speaker{'}s own knowledge, experiences, and opinions (speaker-derived). Our analysis shows that the presence of speaker-derived information in the utterance improves dialogue engagingness. We also confirm that responses generated by an existing model, which is trained to reflect the given knowledge, cannot include speaker-derived information in responses as often as humans do.",
}
| Currently, most knowledge-grounded dialogue response generation models focus on reflecting given external knowledge. However, even when conveying external knowledge, humans integrate their own knowledge, experiences, and opinions with external knowledge to make their utterances engaging. In this study, we analyze such human behavior by annotating the utterances in an existing knowledge-grounded dialogue corpus. Each entity in the corpus is annotated with its information source, either derived from external knowledge (database-derived) or the speaker's own knowledge, experiences, and opinions (speaker-derived). Our analysis shows that the presence of speaker-derived information in the utterance improves dialogue engagingness. We also confirm that responses generated by an existing model, which is trained to reflect the given knowledge, cannot include speaker-derived information in responses as often as humans do. | [
"Kodama, Takashi",
"Kiyomaru, Hirokazu",
"Huang, Yin Jou",
"Okahisa, Taro",
"Kurohashi, Sadao"
] | Is a Knowledge-based Response Engaging?: An Analysis on Knowledge-Grounded Dialogue with Information Source Annotation | acl-srw.34 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.35.bib | https://aclanthology.org/2023.acl-srw.35/ | @inproceedings{sato-etal-2023-choosing,
title = "Choosing What to Mask: More Informed Masking for Multimodal Machine Translation",
author = "Sato, Julia and
Caseli, Helena and
Specia, Lucia",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.35",
doi = "10.18653/v1/2023.acl-srw.35",
pages = "244--253",
abstract = "Pre-trained language models have achieved remarkable results on several NLP tasks. Most of them adopt masked language modeling to learn representations by randomly masking tokens and predicting them based on their context. However, this random selection of tokens to be masked is inefficient to learn some language patterns as it may not consider linguistic information that can be helpful for many NLP tasks, such as multimodal machine translation (MMT). Hence, we propose three novel masking strategies for cross-lingual visual pre-training - more informed visual masking, more informed textual masking, and more informed visual and textual masking - each one focusing on learning different linguistic patterns. We apply them to Vision Translation Language Modelling for video subtitles (Sato et al., 2022) and conduct extensive experiments on the Portuguese-English MMT task. The results show that our masking approaches yield significant improvements over the original random masking strategy for downstream MMT performance. Our models outperform the MMT baseline and we achieve state-of-the-art accuracy (52.70 in terms of BLEU score) on the How2 dataset, indicating that more informed masking helps in acquiring an understanding of specific language structures and has great potential for language understanding.",
}
| Pre-trained language models have achieved remarkable results on several NLP tasks. Most of them adopt masked language modeling to learn representations by randomly masking tokens and predicting them based on their context. However, this random selection of tokens to be masked is inefficient for learning some language patterns, as it may not consider linguistic information that can be helpful for many NLP tasks, such as multimodal machine translation (MMT). Hence, we propose three novel masking strategies for cross-lingual visual pre-training - more informed visual masking, more informed textual masking, and more informed visual and textual masking - each one focusing on learning different linguistic patterns. We apply them to Vision Translation Language Modelling for video subtitles (Sato et al., 2022) and conduct extensive experiments on the Portuguese-English MMT task. The results show that our masking approaches yield significant improvements over the original random masking strategy for downstream MMT performance. Our models outperform the MMT baseline and we achieve state-of-the-art accuracy (52.70 in terms of BLEU score) on the How2 dataset, indicating that more informed masking helps in acquiring an understanding of specific language structures and has great potential for language understanding. | [
"Sato, Julia",
"Caseli, Helena",
"Specia, Lucia"
] | Choosing What to Mask: More Informed Masking for Multimodal Machine Translation | acl-srw.35 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
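The masking-strategy entry above (Sato et al.) lends itself to a small illustration. Below is a minimal sketch of "informed" textual masking that prefers content words over uniform random choice; the POS-based heuristic, tag set, and mask budget are assumptions for illustration, not the authors' VTLM implementation (which also operates over aligned visual features).

```python
# Hedged sketch: an "informed" token-masking strategy in the spirit of the
# paper above. Only the textual side is shown; the heuristic is an assumption.
import random

MASK = "[MASK]"
CONTENT_TAGS = {"NOUN", "VERB", "ADJ"}  # assumed set of "informative" tags

def informed_mask(tokens, pos_tags, ratio=0.15, seed=0):
    """Mask `ratio` of tokens, preferring content words over random choice."""
    rng = random.Random(seed)
    budget = max(1, int(len(tokens) * ratio))
    content = [i for i, t in enumerate(pos_tags) if t in CONTENT_TAGS]
    rng.shuffle(content)
    chosen = set(content[:budget])
    # Fall back to uniform sampling if too few content words exist.
    while len(chosen) < budget:
        chosen.add(rng.randrange(len(tokens)))
    return [MASK if i in chosen else tok for i, tok in enumerate(tokens)]

if __name__ == "__main__":
    toks = ["the", "dog", "chased", "a", "red", "ball"]
    tags = ["DET", "NOUN", "VERB", "DET", "ADJ", "NOUN"]
    print(informed_mask(toks, tags, ratio=0.3))  # masks content words first
```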
https://aclanthology.org/2023.acl-srw.36.bib | https://aclanthology.org/2023.acl-srw.36/ | @inproceedings{shen-silberer-2023-combining,
title = "Combining Tradition with Modernness: Exploring Event Representations in Vision-and-Language Models for Visual Goal-Step Inference",
author = "Shen, Chong and
Silberer, Carina",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.36",
doi = "10.18653/v1/2023.acl-srw.36",
pages = "254--265",
abstract = "Procedural knowledge understanding (PKU) underlies the ability to infer goal-step relations. The task of Visual Goal{--}Step Inference addresses this ability in the multimodal domain. It requires to identify images that represent the steps towards achieving a textually expressed goal. The best existing methods encode texts and images either with independent encoders, or with object-level multimodal encoders using blackbox transformers. This stands in contrast to early, linguistically inspired methods for event representations, which focus on capturing the most crucial information, namely actions and the participants, to learn stereotypical event sequences and hence procedural knowledge. In this work, we study various methods and their effects on PKU of injecting the early shallow event representations to nowadays multimodal deep learning-based models. We find that the early, linguistically inspired methods for representing event knowledge does contribute to understand procedures in combination with modern vision-and-language models. In the future, we are going to explore more complex structure of events and study how to exploit it on top of large language models.",
}
| Procedural knowledge understanding (PKU) underlies the ability to infer goal-step relations. The task of Visual Goal{--}Step Inference addresses this ability in the multimodal domain. It requires identifying images that represent the steps towards achieving a textually expressed goal. The best existing methods encode texts and images either with independent encoders, or with object-level multimodal encoders using blackbox transformers. This stands in contrast to early, linguistically inspired methods for event representations, which focus on capturing the most crucial information, namely actions and the participants, to learn stereotypical event sequences and hence procedural knowledge. In this work, we study various methods of injecting these early shallow event representations into modern multimodal deep learning-based models and their effects on PKU. We find that the early, linguistically inspired methods for representing event knowledge do contribute to understanding procedures in combination with modern vision-and-language models. In the future, we will explore more complex structures of events and study how to exploit them on top of large language models. | [
"Shen, Chong",
"Silberer, Carina"
] | Combining Tradition with Modernness: Exploring Event Representations in Vision-and-Language Models for Visual Goal-Step Inference | acl-srw.36 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.37.bib | https://aclanthology.org/2023.acl-srw.37/ | @inproceedings{schoch-etal-2023-data,
title = "Data Selection for Fine-tuning Large Language Models Using Transferred Shapley Values",
author = "Schoch, Stephanie and
Mishra, Ritwick and
Ji, Yangfeng",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.37",
doi = "10.18653/v1/2023.acl-srw.37",
pages = "266--275",
abstract = "Although Shapley values have been shown to be highly effective for identifying harmful training instances, dataset size and model complexity constraints limit the ability to apply Shapley-based data valuation to fine-tuning large pre-trained language models. To address this, we propose TS-DShapley, an algorithm that reduces computational cost of Shapley-based data valuation through: 1) an efficient sampling-based method that aggregates Shapley values computed from subsets for valuation of the entire training set, and 2) a value transfer method that leverages value information extracted from a simple classifier trained using representations from the target language model. Our experiments applying TS-DShapley to select data for fine-tuning BERT-based language models on benchmark natural language understanding (NLU) datasets show that TS-DShapley outperforms existing data selection methods. Further, TS-DShapley can filter fine-tuning data to increase language model performance compared to training with the full fine-tuning dataset.",
}
| Although Shapley values have been shown to be highly effective for identifying harmful training instances, dataset size and model complexity constraints limit the ability to apply Shapley-based data valuation to fine-tuning large pre-trained language models. To address this, we propose TS-DShapley, an algorithm that reduces computational cost of Shapley-based data valuation through: 1) an efficient sampling-based method that aggregates Shapley values computed from subsets for valuation of the entire training set, and 2) a value transfer method that leverages value information extracted from a simple classifier trained using representations from the target language model. Our experiments applying TS-DShapley to select data for fine-tuning BERT-based language models on benchmark natural language understanding (NLU) datasets show that TS-DShapley outperforms existing data selection methods. Further, TS-DShapley can filter fine-tuning data to increase language model performance compared to training with the full fine-tuning dataset. | [
"Schoch, Stephanie",
"Mishra, Ritwick",
"Ji, Yangfeng"
] | Data Selection for Fine-tuning Large Language Models Using Transferred Shapley Values | acl-srw.37 | Poster | 2306.10165 | [
"https://github.com/stephanieschoch/ts-dshapley"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
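To make the sampling idea in the TS-DShapley entry above concrete, here is a minimal sketch of Monte Carlo estimation of per-example Shapley values against a pluggable utility function. The toy additive utility is an assumption; the paper's method additionally aggregates values computed on subsets and transfers them from a simple classifier over LM representations.

```python
# Hedged sketch of permutation-sampling Shapley estimation for data valuation.
import random

def mc_shapley(n_points, utility, n_perms=200, seed=0):
    """Estimate each training point's Shapley value by sampling permutations."""
    rng = random.Random(seed)
    values = [0.0] * n_points
    for _ in range(n_perms):
        perm = list(range(n_points))
        rng.shuffle(perm)
        subset, prev = [], utility([])
        for i in perm:
            subset.append(i)
            cur = utility(subset)
            values[i] += cur - prev   # marginal contribution of point i
            prev = cur
    return [v / n_perms for v in values]

if __name__ == "__main__":
    # Toy utility: points 0-2 are "clean" (+1 each), point 3 is harmful (-2).
    gains = [1.0, 1.0, 1.0, -2.0]
    util = lambda s: sum(gains[i] for i in s)
    print([round(v, 2) for v in mc_shapley(4, util)])  # harmful point valued lowest
```

For an additive utility like this toy one, the estimates converge to the per-point gains exactly, which makes the harmful example easy to spot and filter.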
https://aclanthology.org/2023.acl-srw.38.bib | https://aclanthology.org/2023.acl-srw.38/ | @inproceedings{yoshimi-etal-2023-distractor,
title = "Distractor Generation for Fill-in-the-Blank Exercises by Question Type",
author = "Yoshimi, Nana and
Kajiwara, Tomoyuki and
Uchida, Satoru and
Arase, Yuki and
Ninomiya, Takashi",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.38",
doi = "10.18653/v1/2023.acl-srw.38",
pages = "276--281",
abstract = "This study addresses the automatic generation of distractors for English fill-in-the-blank exercises in the entrance examinations for Japanese universities. While previous studies applied the same method to all questions, actual entrance examinations have multiple question types that reflect the purpose of the questions. Therefore, we define three types of questions (grammar, function word, and context) and propose a method to generate distractors according to the characteristics of each question type. Experimental results on 500 actual questions show the effectiveness of the proposed method for both automatic and manual evaluation.",
}
| This study addresses the automatic generation of distractors for English fill-in-the-blank exercises in the entrance examinations for Japanese universities. While previous studies applied the same method to all questions, actual entrance examinations have multiple question types that reflect the purpose of the questions. Therefore, we define three types of questions (grammar, function word, and context) and propose a method to generate distractors according to the characteristics of each question type. Experimental results on 500 actual questions show the effectiveness of the proposed method for both automatic and manual evaluation. | [
"Yoshimi, Nana",
"Kajiwara, Tomoyuki",
"Uchida, Satoru",
"Arase, Yuki",
"Ninomiya, Takashi"
] | Distractor Generation for Fill-in-the-Blank Exercises by Question Type | acl-srw.38 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
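A tiny sketch of the type-conditioned routing described in the distractor-generation entry above. The three candidate pools and the routing rule are illustrative assumptions; the paper derives candidates from corpus statistics and language resources per question type.

```python
# Hedged sketch: route a fill-in-the-blank answer to a type-specific
# distractor pool. Pools below are toy assumptions, not the paper's resources.
GRAMMAR_POOL = {"run": ["runs", "ran", "running"]}          # inflection variants
FUNCTION_POOL = {"in": ["on", "at", "for"]}                 # confusable function words
CONTEXT_POOL = {"happy": ["angry", "tired", "curious"]}     # same-POS content words

def distractors(answer, q_type, k=3):
    pool = {"grammar": GRAMMAR_POOL,
            "function_word": FUNCTION_POOL,
            "context": CONTEXT_POOL}[q_type]
    return pool.get(answer, [])[:k]

if __name__ == "__main__":
    print(distractors("in", "function_word"))  # ['on', 'at', 'for']
```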
https://aclanthology.org/2023.acl-srw.40.bib | https://aclanthology.org/2023.acl-srw.40/ | @inproceedings{simmons-2023-moral,
title = "Moral Mimicry: Large Language Models Produce Moral Rationalizations Tailored to Political Identity",
author = "Simmons, Gabriel",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.40",
doi = "10.18653/v1/2023.acl-srw.40",
pages = "282--297",
abstract = "Large Language Models (LLMs) have demonstrated impressive capabilities in generating fluent text, as well as tendencies to reproduce undesirable social biases. This work investigates whether LLMs reproduce the moral biases associated with political groups in the United States, an instance of a broader capability herein termed moral mimicry. This work explores this hypothesis in the GPT-3/3.5 and OPT families of Transformer-based LLMs. Using tools from Moral Foundations Theory, this work shows that these LLMs are indeed moral mimics. When prompted with a liberal or conservative political identity, the models generate text reflecting corresponding moral biases. This study also explores the relationship between moral mimicry and model size, and similarity between human and LLM moral word use.",
}
| Large Language Models (LLMs) have demonstrated impressive capabilities in generating fluent text, as well as tendencies to reproduce undesirable social biases. This work investigates whether LLMs reproduce the moral biases associated with political groups in the United States, an instance of a broader capability herein termed moral mimicry. This work explores this hypothesis in the GPT-3/3.5 and OPT families of Transformer-based LLMs. Using tools from Moral Foundations Theory, this work shows that these LLMs are indeed moral mimics. When prompted with a liberal or conservative political identity, the models generate text reflecting corresponding moral biases. This study also explores the relationship between moral mimicry and model size, and similarity between human and LLM moral word use. | [
"Simmons, Gabriel"
] | Moral Mimicry: Large Language Models Produce Moral Rationalizations Tailored to Political Identity | acl-srw.40 | Poster | 2209.12106 | [
""
] | https://huggingface.co/papers/2209.12106 | 0 | 1 | 0 | 1 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-srw.43.bib | https://aclanthology.org/2023.acl-srw.43/ | @inproceedings{zhang-etal-2023-leco,
title = "{LECO}: Improving Early Exiting via Learned Exits and Comparison-based Exiting Mechanism",
author = "Zhang, Jingfan and
Tan, Ming and
Dai, Pengyu and
Zhu, Wei",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.43",
doi = "10.18653/v1/2023.acl-srw.43",
pages = "298--309",
abstract = "Recently, dynamic early exiting has attracted much attention since it can accelerate the inference speed of pre-trained models (PTMs). However, previous work on early exiting has neglected the intermediate exits{'} architectural designs. In this work, we propose a novel framework, Learned Exits and COmparison-based early exiting (LECO) to improve PTMs{'} early exiting performances. First, to fully uncover the potentials of multi-exit BERT, we design a novel search space for intermediate exits and employ the idea of differentiable neural architecture search (DNAS) to design proper exit architectures for different intermediate layers automatically. Second, we propose a simple-yet-effective comparison-based early exiting mechanism (COBEE), which can help PTMs achieve better performance and speedup tradeoffs. Extensive experiments show that our LECO achieves the SOTA performances for multi-exit BERT training and dynamic early exiting.",
}
| Recently, dynamic early exiting has attracted much attention since it can accelerate the inference speed of pre-trained models (PTMs). However, previous work on early exiting has neglected the intermediate exits{'} architectural designs. In this work, we propose a novel framework, Learned Exits and COmparison-based early exiting (LECO) to improve PTMs{'} early exiting performances. First, to fully uncover the potentials of multi-exit BERT, we design a novel search space for intermediate exits and employ the idea of differentiable neural architecture search (DNAS) to design proper exit architectures for different intermediate layers automatically. Second, we propose a simple-yet-effective comparison-based early exiting mechanism (COBEE), which can help PTMs achieve better performance and speedup tradeoffs. Extensive experiments show that our LECO achieves the SOTA performances for multi-exit BERT training and dynamic early exiting. | [
"Zhang, Jingfan",
"Tan, Ming",
"Dai, Pengyu",
"Zhu, Wei"
] | LECO: Improving Early Exiting via Learned Exits and Comparison-based Exiting Mechanism | acl-srw.43 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
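The comparison-based exiting in the LECO entry above builds on a generic pattern that is easy to sketch: stop at the first intermediate classifier whose prediction has been stable for a few consecutive layers. The patience rule and toy logits below are assumptions, not the exact COBEE criterion.

```python
# Hedged sketch of comparison-based early exiting (patience-style stopping).
def early_exit(per_layer_logits, patience=2):
    """Return (layer_index, prediction) at the first stable prediction."""
    streak, last = 0, None
    for layer, logits in enumerate(per_layer_logits):
        pred = max(range(len(logits)), key=logits.__getitem__)
        streak = streak + 1 if pred == last else 1
        last = pred
        if streak >= patience:
            return layer, pred            # exit early, skip remaining layers
    return len(per_layer_logits) - 1, last  # fell through to the final exit

if __name__ == "__main__":
    layers = [[0.2, 0.8], [0.6, 0.4], [0.3, 0.7], [0.1, 0.9], [0.2, 0.8]]
    print(early_exit(layers))  # exits at layer 3 with class 1
```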
https://aclanthology.org/2023.acl-srw.44.bib | https://aclanthology.org/2023.acl-srw.44/ | @inproceedings{silva-etal-2023-authorship,
title = "Authorship Attribution of Late 19th Century Novels using {GAN}-{BERT}",
author = "Silva, Kanishka and
Can, Burcu and
Blain, Fr{\'e}d{\'e}ric and
Sarwar, Raheem and
Ugolini, Laura and
Mitkov, Ruslan",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.44",
doi = "10.18653/v1/2023.acl-srw.44",
pages = "310--320",
abstract = "Authorship attribution aims to identify the author of an anonymous text. The task becomes even more worthwhile when it comes to literary works. For example, pen names were commonly used by female authors in the 19th century resulting in some literary works being incorrectly attributed or claimed. With this motivation, we collated a dataset of late 19th century novels in English. Due to the imbalance in the dataset and the unavailability of enough data per author, we employed the GANBERT model along with data sampling strategies to fine-tune a transformer-based model for authorship attribution. Differently from the earlier studies on the GAN-BERT model, we conducted transfer learning on comparatively smaller author subsets to train more focused author-specific models yielding performance over 0.88 accuracy and F1 scores. Furthermore, we observed that increasing the sample size has a negative impact on the model{'}s performance. Our research mainly contributes to the ongoing authorship attribution research using GAN-BERT architecture, especially in attributing disputed novelists in the late 19th century.",
}
| Authorship attribution aims to identify the author of an anonymous text. The task becomes even more worthwhile when it comes to literary works. For example, pen names were commonly used by female authors in the 19th century, resulting in some literary works being incorrectly attributed or claimed. With this motivation, we collated a dataset of late 19th century novels in English. Due to the imbalance in the dataset and the unavailability of enough data per author, we employed the GAN-BERT model along with data sampling strategies to fine-tune a transformer-based model for authorship attribution. Differently from the earlier studies on the GAN-BERT model, we conducted transfer learning on comparatively smaller author subsets to train more focused author-specific models, yielding performance over 0.88 accuracy and F1 scores. Furthermore, we observed that increasing the sample size has a negative impact on the model{'}s performance. Our research mainly contributes to the ongoing authorship attribution research using GAN-BERT architecture, especially in attributing disputed novelists in the late 19th century. | [
"Silva, Kanishka",
"Can, Burcu",
"Blain, Fr{\\'e}d{\\'e}ric",
"Sarwar, Raheem",
"Ugolini, Laura",
"Mitkov, Ruslan"
] | Authorship Attribution of Late 19th Century Novels using GAN-BERT | acl-srw.44 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.46.bib | https://aclanthology.org/2023.acl-srw.46/ | @inproceedings{fanton-etal-2023-guides,
title = "How-to Guides for Specific Audiences: A Corpus and Initial Findings",
author = "Fanton, Nicola and
Falenska, Agnieszka and
Roth, Michael",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.46",
doi = "10.18653/v1/2023.acl-srw.46",
pages = "321--333",
abstract = "Instructional texts for specific target groups should ideally take into account the prior knowledge and needs of the readers in order to guide them efficiently to their desired goals. However, targeting specific groups also carries the risk of reflecting disparate social norms and subtle stereotypes. In this paper, we investigate the extent to which how-to guides from one particular platform, wikiHow, differ in practice depending on the intended audience. We conduct two case studies in which we examine qualitative features of texts written for specific audiences. In a generalization study, we investigate which differences can also be systematically demonstrated using computational methods. The results of our studies show that guides from wikiHow, like other text genres, are subject to subtle biases. We aim to raise awareness of these inequalities as a first step to addressing them in future work.",
}
| Instructional texts for specific target groups should ideally take into account the prior knowledge and needs of the readers in order to guide them efficiently to their desired goals. However, targeting specific groups also carries the risk of reflecting disparate social norms and subtle stereotypes. In this paper, we investigate the extent to which how-to guides from one particular platform, wikiHow, differ in practice depending on the intended audience. We conduct two case studies in which we examine qualitative features of texts written for specific audiences. In a generalization study, we investigate which differences can also be systematically demonstrated using computational methods. The results of our studies show that guides from wikiHow, like other text genres, are subject to subtle biases. We aim to raise awareness of these inequalities as a first step to addressing them in future work. | [
"Fanton, Nicola",
"Falenska, Agnieszka",
"Roth, Michael"
] | How-to Guides for Specific Audiences: A Corpus and Initial Findings | acl-srw.46 | Poster | 2309.12117 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-srw.47.bib | https://aclanthology.org/2023.acl-srw.47/ | @inproceedings{kader-etal-2023-words,
title = "{``}When Words Fail, Emojis Prevail{''}: A Novel Architecture for Generating Sarcastic Sentences With Emoji Using Valence Reversal and Semantic Incongruity",
author = "Kader, Faria Binte and
Hossain Nujat, Nafisa and
Sogir, Tasmia Binte and
Kabir, Mohsinul and
Mahmud, Hasan and
Hasan, Md Kamrul",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.47",
doi = "10.18653/v1/2023.acl-srw.47",
pages = "334--351",
abstract = "Sarcasm is a form of figurative language that serves as a humorous tool for mockery and ridicule. We present a novel architecture for sarcasm generation with emoji from a non-sarcastic input sentence in English. We divide the generation task into two sub tasks: one for generating textual sarcasm and another for collecting emojis associated with those sarcastic sentences. Two key elements of sarcasm are incorporated into the textual sarcasm generation task: valence reversal and semantic incongruity with context, where the context may involve shared commonsense or general knowledge between the speaker and their audience. The majority of existing sarcasm generation works have focused on this textual form. However, in the real world, when written texts fall short of effectively capturing the emotional cues of spoken and face-to-face communication, people often opt for emojis to accurately express their emotions. Due to the wide range of applications of emojis, incorporating appropriate emojis to generate textual sarcastic sentences helps advance sarcasm generation. We conclude our study by evaluating the generated sarcastic sentences using human judgement. All the codes and data used in this study has been made publicly available.",
}
| Sarcasm is a form of figurative language that serves as a humorous tool for mockery and ridicule. We present a novel architecture for sarcasm generation with emoji from a non-sarcastic input sentence in English. We divide the generation task into two sub tasks: one for generating textual sarcasm and another for collecting emojis associated with those sarcastic sentences. Two key elements of sarcasm are incorporated into the textual sarcasm generation task: valence reversal and semantic incongruity with context, where the context may involve shared commonsense or general knowledge between the speaker and their audience. The majority of existing sarcasm generation works have focused on this textual form. However, in the real world, when written texts fall short of effectively capturing the emotional cues of spoken and face-to-face communication, people often opt for emojis to accurately express their emotions. Due to the wide range of applications of emojis, incorporating appropriate emojis to generate textual sarcastic sentences helps advance sarcasm generation. We conclude our study by evaluating the generated sarcastic sentences using human judgement. All the code and data used in this study have been made publicly available. | [
"Kader, Faria Binte",
"Hossain Nujat, Nafisa",
"Sogir, Tasmia Binte",
"Kabir, Mohsinul",
"Mahmud, Hasan",
"Hasan, Md Kamrul"
] | “When Words Fail, Emojis Prevail”: A Novel Architecture for Generating Sarcastic Sentences With Emoji Using Valence Reversal and Semantic Incongruity | acl-srw.47 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
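As a minimal illustration of the valence-reversal component named in the sarcasm-generation entry above: flip sentiment-bearing words via an antonym lexicon and attach an emoji cue. The lexicon and emoji map are toy assumptions; the full architecture also scores semantic incongruity with context.

```python
# Hedged sketch of the valence-reversal step only; lexicons are assumptions.
ANTONYMS = {"hate": "love", "terrible": "great", "worst": "best"}
EMOJI = {"love": "🙃", "great": "🙄", "best": "🙂"}

def reverse_valence(sentence):
    out, tag = [], None
    for word in sentence.split():
        swapped = ANTONYMS.get(word.lower())
        if swapped:
            tag = EMOJI.get(swapped)      # remember an emoji cue for the swap
        out.append(swapped or word)
    return " ".join(out) + (f" {tag}" if tag else "")

if __name__ == "__main__":
    print(reverse_valence("I hate waiting in long queues"))
    # -> "I love waiting in long queues 🙃"
```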
https://aclanthology.org/2023.acl-srw.48.bib | https://aclanthology.org/2023.acl-srw.48/ | @inproceedings{schmidtova-2023-semantic,
title = "Semantic Accuracy in Natural Language Generation: A Thesis Proposal",
author = "Schmidtova, Patricia",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.48",
doi = "10.18653/v1/2023.acl-srw.48",
pages = "352--361",
abstract = "With the fast-growing popularity of current large pre-trained language models (LLMs), it is necessary to dedicate efforts to making them more reliable. In this thesis proposal, we aim to improve the reliability of natural language generation systems (NLG) by researching the semantic accuracy of their outputs. We look at this problem from the outside (evaluation) and from the inside (interpretability). We propose a novel method for evaluating semantic accuracy and discuss the importance of working towards a unified and objective benchmark for NLG metrics. We also review interpretability approaches which could help us pinpoint the sources of inaccuracies within the models and explore potential mitigation strategies.",
}
| With the fast-growing popularity of current large pre-trained language models (LLMs), it is necessary to dedicate efforts to making them more reliable. In this thesis proposal, we aim to improve the reliability of natural language generation systems (NLG) by researching the semantic accuracy of their outputs. We look at this problem from the outside (evaluation) and from the inside (interpretability). We propose a novel method for evaluating semantic accuracy and discuss the importance of working towards a unified and objective benchmark for NLG metrics. We also review interpretability approaches which could help us pinpoint the sources of inaccuracies within the models and explore potential mitigation strategies. | [
"Schmidtova, Patricia"
] | Semantic Accuracy in Natural Language Generation: A Thesis Proposal | acl-srw.48 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-srw.49.bib | https://aclanthology.org/2023.acl-srw.49/ | @inproceedings{raiyan-etal-2023-math,
title = "Math Word Problem Solving by Generating Linguistic Variants of Problem Statements",
author = "Raiyan, Syed Rifat and
Faiyaz, Md Nafis and
Kabir, Shah Md. Jawad and
Kabir, Mohsinul and
Mahmud, Hasan and
Hasan, Md Kamrul",
editor = "Padmakumar, Vishakh and
Vallejo, Gisela and
Fu, Yao",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-srw.49",
doi = "10.18653/v1/2023.acl-srw.49",
pages = "362--378",
abstract = "The art of mathematical reasoning stands as a fundamental pillar of intellectual progress and is a central catalyst in cultivating human ingenuity. Researchers have recently published a plethora of works centered around the task of solving Math Word Problems (MWP) {---} a crucial stride towards general AI. These existing models are susceptible to dependency on shallow heuristics and spurious correlations to derive the solution expressions. In order to ameliorate this issue, in this paper, we propose a framework for MWP solvers based on the generation of linguistic variants of the problem text. The approach involves solving each of the variant problems and electing the predicted expression with the majority of the votes. We use DeBERTa (Decoding-enhanced BERT with disentangled attention) as the encoder to leverage its rich textual representations and enhanced mask decoder to construct the solution expressions. Furthermore, we introduce a challenging dataset, ParaMAWPS, consisting of paraphrased, adversarial, and inverse variants of selectively sampled MWPs from the benchmark Mawps dataset. We extensively experiment on this dataset along with other benchmark datasets using some baseline MWP solver models. We show that training on linguistic variants of problem statements and voting on candidate predictions improve the mathematical reasoning and robustness of the model. We make our code and data publicly available.",
}
| The art of mathematical reasoning stands as a fundamental pillar of intellectual progress and is a central catalyst in cultivating human ingenuity. Researchers have recently published a plethora of works centered around the task of solving Math Word Problems (MWP) {---} a crucial stride towards general AI. These existing models are susceptible to dependency on shallow heuristics and spurious correlations to derive the solution expressions. In order to ameliorate this issue, in this paper, we propose a framework for MWP solvers based on the generation of linguistic variants of the problem text. The approach involves solving each of the variant problems and electing the predicted expression with the majority of the votes. We use DeBERTa (Decoding-enhanced BERT with disentangled attention) as the encoder to leverage its rich textual representations and enhanced mask decoder to construct the solution expressions. Furthermore, we introduce a challenging dataset, ParaMAWPS, consisting of paraphrased, adversarial, and inverse variants of selectively sampled MWPs from the benchmark Mawps dataset. We extensively experiment on this dataset along with other benchmark datasets using some baseline MWP solver models. We show that training on linguistic variants of problem statements and voting on candidate predictions improve the mathematical reasoning and robustness of the model. We make our code and data publicly available. | [
"Raiyan, Syed Rifat",
"Faiyaz, Md Nafis",
"Kabir, Shah Md. Jawad",
"Kabir, Mohsinul",
"Mahmud, Hasan",
"Hasan, Md Kamrul"
] | Math Word Problem Solving by Generating Linguistic Variants of Problem Statements | acl-srw.49 | Poster | 2306.13899 | [
"https://github.com/starscream-11813/variational-mathematical-reasoning"
] | https://huggingface.co/papers/2306.13899 | 0 | 1 | 0 | 6 | 1 | [] | [
"Starscream-11813/ParaMAWPS"
] | [] |
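The vote-over-variants idea in the math word problem entry above reduces to a few lines once the solver is abstracted away. In this sketch, `solve` is a stand-in assumption for the DeBERTa-based solver; we simply elect the majority expression across paraphrased variants.

```python
# Hedged sketch: majority voting over solutions of linguistic variants.
from collections import Counter

def solve_with_variants(variants, solve):
    predictions = [solve(v) for v in variants]
    expr, _ = Counter(predictions).most_common(1)[0]
    return expr

if __name__ == "__main__":
    # Toy stand-in for the solver: maps each variant to a predicted expression.
    fake_solver = {"v1": "3*4", "v2": "3*4", "v3": "3+4"}.get
    print(solve_with_variants(["v1", "v2", "v3"], fake_solver))  # "3*4"
```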
https://aclanthology.org/2023.acl-industry.1.bib | https://aclanthology.org/2023.acl-industry.1/ | @inproceedings{li-etal-2023-cwseg,
title = "{CWS}eg: An Efficient and General Approach to {C}hinese Word Segmentation",
author = "Li, Dedong and
Zhao, Rui and
Tan, Fei",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.1",
doi = "10.18653/v1/2023.acl-industry.1",
pages = "1--10",
abstract = "In this work, we report our efforts in advancing Chinese Word Segmentation for the purpose of rapid deployment in different applications. The pre-trained language model (PLM) based segmentation methods have achieved state-of-the-art (SOTA) performance, whereas this paradigm also poses challenges in the deployment. It includes the balance between performance and cost, segmentation ambiguity due to domain diversity and vague words boundary, and multi-grained segmentation. In this context, we propose a simple yet effective approach, namely CWSeg, to augment PLM-based schemes by developing cohort training and versatile decoding strategies. Extensive experiments on benchmark datasets demonstrate the efficiency and generalization of our approach. The corresponding segmentation system is also implemented for practical usage and the demo is recorded.",
}
| In this work, we report our efforts in advancing Chinese Word Segmentation for the purpose of rapid deployment in different applications. The pre-trained language model (PLM) based segmentation methods have achieved state-of-the-art (SOTA) performance, whereas this paradigm also poses challenges in deployment. These include the balance between performance and cost, segmentation ambiguity due to domain diversity and vague word boundaries, and multi-grained segmentation. In this context, we propose a simple yet effective approach, namely CWSeg, to augment PLM-based schemes by developing cohort training and versatile decoding strategies. Extensive experiments on benchmark datasets demonstrate the efficiency and generalization of our approach. The corresponding segmentation system is also implemented for practical usage and the demo is recorded. | [
"Li, Dedong",
"Zhao, Rui",
"Tan, Fei"
] | CWSeg: An Efficient and General Approach to Chinese Word Segmentation | acl-industry.1 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.2.bib | https://aclanthology.org/2023.acl-industry.2/ | @inproceedings{kale-etal-2023-knowledge,
title = "{``}Knowledge is Power{''}: Constructing Knowledge Graph of Abdominal Organs and Using Them for Automatic Radiology Report Generation",
author = "Kale, Kaveri and
Bhattacharyya, Pushpak and
Shetty, Aditya and
Gune, Milind and
Shrivastava, Kush and
Lawyer, Rustom and
Biswas, Spriha",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.2",
doi = "10.18653/v1/2023.acl-industry.2",
pages = "11--24",
abstract = "In conventional radiology practice, the radiologist dictates the diagnosis to the transcriptionist, who then prepares a preliminary formatted report referring to the notes, after which the radiologist reviews the report, corrects the errors, and signs off. This workflow is prone to delay and error. In this paper, we report our work on automatic radiology report generation from radiologists{'} dictation, which is in collaboration with a startup about to become Unicorn. A major contribution of our work is the set of knowledge graphs (KGs) of ten abdominal organs- Liver, Kidney, Gallbladder, Uterus, Urinary bladder, Ovary, Pancreas, Prostate, Biliary Tree, and Bowel. Our method for constructing these KGs relies on extracting entity1-relation-entity2 triplets from a large collection (about 10,000) of free-text radiology reports. The quality and coverage of the KGs are verified by two experienced radiologists (practicing for the last 30 years and 8 years, respectively). The dictation of the radiologist is automatically converted to what is called a pathological description which is the clinical description of the findings of the radiologist during ultrasonography (USG). Our knowledge-enhanced deep learning model improves the reported BLEU-3, ROUGE-L, METEOR, and CIDEr scores of the pathological description generation by 2{\%}, 4{\%}, 2{\%} and 2{\%} respectively. To the best of our knowledge, this is the first attempt at representing the abdominal organs in the form of knowledge graphs and utilising these graphs for the automatic generation of USG reports. A Minimum Viable Product (MVP) has been made available to the beta users, i.e., radiologists of reputed hospitals, for testing and evaluation. Our solution guarantees report generation within 30 seconds of running a scan.",
}
| In conventional radiology practice, the radiologist dictates the diagnosis to the transcriptionist, who then prepares a preliminary formatted report referring to the notes, after which the radiologist reviews the report, corrects the errors, and signs off. This workflow is prone to delay and error. In this paper, we report our work on automatic radiology report generation from radiologists{'} dictation, which is in collaboration with a startup about to become Unicorn. A major contribution of our work is the set of knowledge graphs (KGs) of ten abdominal organs- Liver, Kidney, Gallbladder, Uterus, Urinary bladder, Ovary, Pancreas, Prostate, Biliary Tree, and Bowel. Our method for constructing these KGs relies on extracting entity1-relation-entity2 triplets from a large collection (about 10,000) of free-text radiology reports. The quality and coverage of the KGs are verified by two experienced radiologists (practicing for the last 30 years and 8 years, respectively). The dictation of the radiologist is automatically converted to what is called a pathological description which is the clinical description of the findings of the radiologist during ultrasonography (USG). Our knowledge-enhanced deep learning model improves the reported BLEU-3, ROUGE-L, METEOR, and CIDEr scores of the pathological description generation by 2{\%}, 4{\%}, 2{\%} and 2{\%} respectively. To the best of our knowledge, this is the first attempt at representing the abdominal organs in the form of knowledge graphs and utilising these graphs for the automatic generation of USG reports. A Minimum Viable Product (MVP) has been made available to the beta users, i.e., radiologists of reputed hospitals, for testing and evaluation. Our solution guarantees report generation within 30 seconds of running a scan. | [
"Kale, Kaveri",
"Bhattacharyya, Pushpak",
"Shetty, Aditya",
"Gune, Milind",
"Shrivastava, Kush",
"Lawyer, Rustom",
"Biswas, Spriha"
] | “Knowledge is Power”: Constructing Knowledge Graph of Abdominal Organs and Using Them for Automatic Radiology Report Generation | acl-industry.2 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.3.bib | https://aclanthology.org/2023.acl-industry.3/ | @inproceedings{hashimoto-etal-2023-hunt,
title = "Hunt for Buried Treasures: Extracting Unclaimed Embodiments from Patent Specifications",
author = "Hashimoto, Chikara and
Kumar, Gautam and
Hashimoto, Shuichiro and
Suzuki, Jun",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.3",
doi = "10.18653/v1/2023.acl-industry.3",
pages = "25--36",
abstract = "Patent applicants write patent specificationsthat describe embodiments of inventions. Some embodiments are claimed for a patent,while others may be unclaimeddue to strategic considerations. Unclaimed embodiments may be extracted byapplicants later and claimed incontinuing applications togain advantages over competitors. Despite being essential for corporate intellectual property (IP) strategies,unclaimed embodiment extraction is conducted manually,and little research has been conducted on its automation. This paper presents a novel task ofunclaimed embodiment extraction (UEE)and a novel dataset for the task. Our experiments with Transformer-based modelsdemonstratedthat the task was challenging as it requiredconducting natural language inference onpatent specifications, which consisted oftechnical, long, syntactically and semanticallyinvolved sentences. We release the dataset and code to foster this new area of research.",
}
| Patent applicants write patent specifications that describe embodiments of inventions. Some embodiments are claimed for a patent, while others may be unclaimed due to strategic considerations. Unclaimed embodiments may be extracted by applicants later and claimed in continuing applications to gain advantages over competitors. Despite being essential for corporate intellectual property (IP) strategies, unclaimed embodiment extraction is conducted manually, and little research has been conducted on its automation. This paper presents a novel task of unclaimed embodiment extraction (UEE) and a novel dataset for the task. Our experiments with Transformer-based models demonstrated that the task was challenging as it required conducting natural language inference on patent specifications, which consisted of technical, long, syntactically and semantically involved sentences. We release the dataset and code to foster this new area of research. | [
"Hashimoto, Chikara",
"Kumar, Gautam",
"Hashimoto, Shuichiro",
"Suzuki, Jun"
] | Hunt for Buried Treasures: Extracting Unclaimed Embodiments from Patent Specifications | acl-industry.3 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.4.bib | https://aclanthology.org/2023.acl-industry.4/ | @inproceedings{imani-etal-2023-mathprompter,
title = "{M}ath{P}rompter: Mathematical Reasoning using Large Language Models",
author = "Imani, Shima and
Du, Liang and
Shrivastava, Harsh",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.4",
doi = "10.18653/v1/2023.acl-industry.4",
pages = "37--42",
abstract = "Large Language Models (LLMs) have limited performance when solving arithmetic reasoning tasks and often provide incorrect answers. Unlike natural language understanding, math problems typically have a single correct answer, making the task of generating accurate solutions more challenging for LLMs. To the best of our knowledge, we are not aware of any LLMs that indicate their level of confidence in their responses which fuels a trust deficit in these models impeding their adoption. To address this deficiency, we propose {`}MathPrompter{'}, a technique that improves performance of LLMs on arithmetic problems along with increased reliance in the predictions. MathPrompter uses the Zero-shot chain-of-thought prompting technique to generate multiple algebraic expressions or python functions to solve the same math problem in different ways and thereby raise the confidence level in the output results. This is in contrast to other prompt based CoT methods, where there is no check on the validity of the intermediate steps followed. Our technique improves over state-of-the-art on the {`}MultiArith{'} dataset (78.7{\%} - 92.5{\%}) evaluated using 175B parameter GPT-based LLM.",
}
| Large Language Models (LLMs) have limited performance when solving arithmetic reasoning tasks and often provide incorrect answers. Unlike natural language understanding, math problems typically have a single correct answer, making the task of generating accurate solutions more challenging for LLMs. To the best of our knowledge, we are not aware of any LLMs that indicate their level of confidence in their responses which fuels a trust deficit in these models impeding their adoption. To address this deficiency, we propose {`}MathPrompter{'}, a technique that improves performance of LLMs on arithmetic problems along with increased reliance in the predictions. MathPrompter uses the Zero-shot chain-of-thought prompting technique to generate multiple algebraic expressions or python functions to solve the same math problem in different ways and thereby raise the confidence level in the output results. This is in contrast to other prompt based CoT methods, where there is no check on the validity of the intermediate steps followed. Our technique improves over state-of-the-art on the {`}MultiArith{'} dataset (78.7{\%} - 92.5{\%}) evaluated using 175B parameter GPT-based LLM. | [
"Imani, Shima",
"Du, Liang",
"Shrivastava, Harsh"
] | MathPrompter: Mathematical Reasoning using Large Language Models | acl-industry.4 | Poster | 2303.05398 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
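A hedged sketch of the agreement check behind the MathPrompter entry above: two independently generated solution paths (an algebraic expression and a Python function) are compared on random variable assignments before the answer is trusted. The hard-coded candidates below stand in for LLM outputs, which are out of scope here.

```python
# Hedged sketch of MathPrompter-style cross-checking of solution paths.
import random

def consistent(expr, fn, trials=5, seed=0):
    """Accept only if both solution paths agree on random inputs."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randint(1, 100)
        if eval(expr, {"x": x}) != fn(x):   # compare algebraic vs. code path
            return False
    return True

if __name__ == "__main__":
    expr = "x * (x + 1) // 2"               # candidate algebraic expression
    fn = lambda x: sum(range(x + 1))        # candidate Python solution
    if consistent(expr, fn):
        print("confident answer for x=10:", fn(10))  # 55
```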
https://aclanthology.org/2023.acl-industry.5.bib | https://aclanthology.org/2023.acl-industry.5/ | @inproceedings{kachuee-lee-2023-constrained,
title = "Constrained Policy Optimization for Controlled Self-Learning in Conversational {AI} Systems",
author = "Kachuee, Mohammad and
Lee, Sungjin",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.5",
doi = "10.18653/v1/2023.acl-industry.5",
pages = "43--52",
abstract = "Recently, self-learning methods based on user satisfaction metrics and contextual bandits have shown promising results to enable consistent improvements in conversational AI systems. However, directly targeting such metrics by off-policy bandit learning objectives often increases the risk of making abrupt policy changes that break the current user experience. In this study, we introduce a scalable framework for supporting fine-grained exploration targets for individual domains via user-defined constraints. For example, we may want to ensure fewer policy deviations in business-critical domains such as shopping, while allocating more exploration budget to domains such as music. We present a novel meta-gradient learning approach that is scalable and practical to address this problem. The proposed method adjusts constraint violation penalty terms adaptively through a meta objective that encourages balanced constraint satisfaction across domains. We conducted extensive experiments on a real-world conversational AI and using a set of realistic constraint benchmarks. The proposed approach has been deployed in production for a large-scale commercial assistant, enabling the best balance between the policy value and constraint satisfaction rate.",
}
| Recently, self-learning methods based on user satisfaction metrics and contextual bandits have shown promising results to enable consistent improvements in conversational AI systems. However, directly targeting such metrics by off-policy bandit learning objectives often increases the risk of making abrupt policy changes that break the current user experience. In this study, we introduce a scalable framework for supporting fine-grained exploration targets for individual domains via user-defined constraints. For example, we may want to ensure fewer policy deviations in business-critical domains such as shopping, while allocating more exploration budget to domains such as music. We present a novel meta-gradient learning approach that is scalable and practical to address this problem. The proposed method adjusts constraint violation penalty terms adaptively through a meta objective that encourages balanced constraint satisfaction across domains. We conducted extensive experiments on a real-world conversational AI and using a set of realistic constraint benchmarks. The proposed approach has been deployed in production for a large-scale commercial assistant, enabling the best balance between the policy value and constraint satisfaction rate. | [
"Kachuee, Mohammad",
"Lee, Sungjin"
] | Constrained Policy Optimization for Controlled Self-Learning in Conversational AI Systems | acl-industry.5 | Poster | 2209.08429 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
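The per-domain constraint mechanism in the entry above can be sketched as simple dual ascent on penalty weights: raise a domain's penalty when its policy-deviation budget is violated, relax it when there is slack. The paper learns these adjustments with a meta-gradient objective; the update rule and numbers below are illustrative assumptions.

```python
# Hedged sketch: adaptive per-domain penalty weights (dual-ascent style).
def update_penalties(lambdas, deviations, budgets, lr=0.5):
    new = {}
    for domain, lam in lambdas.items():
        violation = deviations[domain] - budgets[domain]
        new[domain] = max(0.0, lam + lr * violation)  # grow on violation
    return new

if __name__ == "__main__":
    lambdas = {"shopping": 1.0, "music": 1.0}
    deviations = {"shopping": 0.30, "music": 0.05}   # observed policy change
    budgets = {"shopping": 0.10, "music": 0.20}      # allowed exploration
    print(update_penalties(lambdas, deviations, budgets))
    # shopping penalty rises (budget violated), music penalty shrinks (slack)
```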
https://aclanthology.org/2023.acl-industry.6.bib | https://aclanthology.org/2023.acl-industry.6/ | @inproceedings{fusco-etal-2023-pnlp,
title = "p{NLP}-Mixer: an Efficient all-{MLP} Architecture for Language",
author = "Fusco, Francesco and
Pascual, Damian and
Staar, Peter and
Antognini, Diego",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.6",
doi = "10.18653/v1/2023.acl-industry.6",
pages = "53--60",
abstract = "Large pre-trained language models based on transformer architectureÆhave drastically changed the natural language processing (NLP) landscape. However, deploying those models for on-device applications in constrained devices such as smart watches is completely impractical due to their size and inference cost. As an alternative to transformer-based architectures, recent work on efficient NLP has shown that weight-efficient models can attain competitive performance for simple tasks, such as slot filling and intent classification, with model sizes in the order of the megabyte. This work introduces the pNLP-Mixer architecture, an embedding-free MLP-Mixer model for on-device NLP that achieves high weight-efficiency thanks to a novel projection layer. We evaluate a pNLP-Mixer model of only one megabyte in size on two multi-lingual semantic parsing datasets, MTOP and multiATIS. Our quantized model achieves 99.4{\%} and 97.8{\%} the performance of mBERT on MTOP and multiATIS, while using 170x less parameters. Our model consistently beats the state-of-the-art of tiny models (pQRNN), which is twice as large, by a margin up to 7.8{\%} on MTOP.",
}
| Large pre-trained language models based on transformer architecture have drastically changed the natural language processing (NLP) landscape. However, deploying those models for on-device applications in constrained devices such as smart watches is completely impractical due to their size and inference cost. As an alternative to transformer-based architectures, recent work on efficient NLP has shown that weight-efficient models can attain competitive performance for simple tasks, such as slot filling and intent classification, with model sizes in the order of the megabyte. This work introduces the pNLP-Mixer architecture, an embedding-free MLP-Mixer model for on-device NLP that achieves high weight-efficiency thanks to a novel projection layer. We evaluate a pNLP-Mixer model of only one megabyte in size on two multi-lingual semantic parsing datasets, MTOP and multiATIS. Our quantized model achieves 99.4{\%} and 97.8{\%} the performance of mBERT on MTOP and multiATIS, while using 170x less parameters. Our model consistently beats the state-of-the-art of tiny models (pQRNN), which is twice as large, by a margin up to 7.8{\%} on MTOP. | [
"Fusco, Francesco",
"Pascual, Damian",
"Staar, Peter",
"Antognini, Diego"
] | pNLP-Mixer: an Efficient all-MLP Architecture for Language | acl-industry.6 | Poster | 2202.04350 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
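A minimal sketch of an embedding-free projection layer in the spirit of the pNLP-Mixer entry above: each token is hashed from its character trigrams into a fixed-width binary feature, so no vocabulary or embedding matrix needs to be stored. The hashing scheme and width are assumptions, not the published layer.

```python
# Hedged sketch: hash character trigrams into a fixed-width binary feature.
import hashlib

def project(token, width=64):
    vec = [0] * width
    padded = f"#{token}#"                  # boundary markers for edge trigrams
    for i in range(len(padded) - 2):
        tri = padded[i:i + 3]
        h = int(hashlib.md5(tri.encode()).hexdigest(), 16)
        vec[h % width] = 1                 # set one bit per trigram
    return vec

if __name__ == "__main__":
    a, b = project("playing"), project("played")
    overlap = sum(x & y for x, y in zip(a, b))
    print(overlap, "shared features")      # related word forms share trigram bits
```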
https://aclanthology.org/2023.acl-industry.7.bib | https://aclanthology.org/2023.acl-industry.7/ | @inproceedings{fusco-antognini-2023-extracting,
title = "Extracting Text Representations for Terms and Phrases in Technical Domains",
author = "Fusco, Francesco and
Antognini, Diego",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.7",
doi = "10.18653/v1/2023.acl-industry.7",
pages = "61--70",
abstract = "Extracting dense representations for terms and phrases is a task of great importance for knowledge discovery platforms targeting highly-technical fields. Dense representations are used as features for downstream components and have multiple applications ranging from ranking results in search to summarization. Common approaches to create dense representations include training domain-specific embeddings with self-supervised setups or using sentence encoder models trained over similarity tasks. In contrast to static embeddings, sentence encoders do not suffer from the out-of-vocabulary (OOV) problem, but impose significant computational costs. In this paper, we propose a fully unsupervised approach to text encoding that consists of training small character-based models with the objective of reconstructing large pre-trained embedding matrices. Models trained with this approach can not only match the quality of sentence encoders in technical domains, but are 5 times smaller and up to 10 times faster, even on high-end GPUs.",
}
| Extracting dense representations for terms and phrases is a task of great importance for knowledge discovery platforms targeting highly-technical fields. Dense representations are used as features for downstream components and have multiple applications ranging from ranking results in search to summarization. Common approaches to create dense representations include training domain-specific embeddings with self-supervised setups or using sentence encoder models trained over similarity tasks. In contrast to static embeddings, sentence encoders do not suffer from the out-of-vocabulary (OOV) problem, but impose significant computational costs. In this paper, we propose a fully unsupervised approach to text encoding that consists of training small character-based models with the objective of reconstructing large pre-trained embedding matrices. Models trained with this approach can not only match the quality of sentence encoders in technical domains, but are 5 times smaller and up to 10 times faster, even on high-end GPUs. | [
"Fusco, Francesco",
"Antognini, Diego"
] | Extracting Text Representations for Terms and Phrases in Technical Domains | acl-industry.7 | Poster | 2305.15867 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
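The reconstruction objective in the term-representation entry above can be sketched with a linear map over hashed character-trigram features fitted to reproduce rows of a pre-trained embedding matrix; unseen terms then receive embeddings with no OOV problem. The tiny vocabulary, random "pre-trained" matrix, and linear encoder are assumptions (the paper trains a small character-based neural model).

```python
# Hedged sketch: fit a character-feature encoder to reconstruct embeddings.
import numpy as np

def featurize(term, width=256):
    vec = np.zeros(width)
    padded = f"#{term}#"
    for i in range(len(padded) - 2):
        # Built-in str hash is salted per process, so outputs vary across runs.
        vec[hash(padded[i:i + 3]) % width] = 1.0
    return vec

terms = ["neural", "network", "neuron"]
E = np.random.default_rng(0).normal(size=(3, 8))    # stand-in "pre-trained" matrix
X = np.stack([featurize(t) for t in terms])
W, *_ = np.linalg.lstsq(X, E, rcond=None)           # reconstruction objective

print(np.allclose(X @ W, E, atol=1e-6))             # training terms recovered
print((featurize("networks") @ W)[:3])              # embedding for an unseen term
```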
https://aclanthology.org/2023.acl-industry.8.bib | https://aclanthology.org/2023.acl-industry.8/ | @inproceedings{wang-etal-2023-cocaclip,
title = "{C}oca{CLIP}: Exploring Distillation of Fully-Connected Knowledge Interaction Graph for Lightweight Text-Image Retrieval",
author = "Wang, Jiapeng and
Wang, Chengyu and
Wang, Xiaodan and
Huang, Jun and
Jin, Lianwen",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.8",
doi = "10.18653/v1/2023.acl-industry.8",
pages = "71--80",
abstract = "Large-scale pre-trained text-image models with dual-encoder architectures (such as CLIP) are typically adopted for various vision-language applications, including text-image retrieval. However, these models are still less practical on edge devices or for real-time situations, due to the substantial indexing and inference time and the large consumption of computational resources. Although knowledge distillation techniques have been widely utilized for uni-modal model compression, how to expand them to the situation when the numbers of modalities and teachers/students are doubled has been rarely studied. In this paper, we conduct comprehensive experiments on this topic and propose the fully-Connected knowledge interaction graph (Coca) technique for cross-modal pre-training distillation. Based on our findings, the resulting CocaCLIP achieves SOTA performances on the widely-used Flickr30K and MSCOCO benchmarks under the lightweight setting. An industry application of our method on an e-commercial platform further demonstrates the significant effectiveness of CocaCLIP.",
}
| Large-scale pre-trained text-image models with dual-encoder architectures (such as CLIP) are typically adopted for various vision-language applications, including text-image retrieval. However, these models are still less practical on edge devices or for real-time situations, due to the substantial indexing and inference time and the large consumption of computational resources. Although knowledge distillation techniques have been widely utilized for uni-modal model compression, how to expand them to the situation when the numbers of modalities and teachers/students are doubled has been rarely studied. In this paper, we conduct comprehensive experiments on this topic and propose the fully-Connected knowledge interaction graph (Coca) technique for cross-modal pre-training distillation. Based on our findings, the resulting CocaCLIP achieves SOTA performances on the widely-used Flickr30K and MSCOCO benchmarks under the lightweight setting. An industry application of our method on an e-commerce platform further demonstrates the significant effectiveness of CocaCLIP. | [
"Wang, Jiapeng",
"Wang, Chengyu",
"Wang, Xiaodan",
"Huang, Jun",
"Jin, Lianwen"
] | CocaCLIP: Exploring Distillation of Fully-Connected Knowledge Interaction Graph for Lightweight Text-Image Retrieval | acl-industry.8 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.9.bib | https://aclanthology.org/2023.acl-industry.9/ | @inproceedings{jia-etal-2023-kg,
title = "{KG}-{FLIP}: Knowledge-guided Fashion-domain Language-Image Pre-training for {E}-commerce",
author = "Jia, Qinjin and
Liu, Yang and
Wu, Daoping and
Xu, Shaoyuan and
Liu, Huidong and
Fu, Jinmiao and
Vollgraf, Roland and
Wang, Bryan",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.9",
doi = "10.18653/v1/2023.acl-industry.9",
pages = "81--88",
abstract = "Various Vision-Language Pre-training (VLP) models (e.g., CLIP, BLIP) have sprung up and dramatically advanced the benchmarks for public general-domain datasets (e.g., COCO, Flickr30k). Such models usually learn the cross-modal alignment from large-scale well-aligned image-text datasets without leveraging external knowledge. Adapting these models to downstream applications in specific domains like fashion requires fine-grained in-domain image-text corpus, which are usually less semantically aligned and in small scale that requires efficient pre-training strategies. In this paper, we propose a knowledge-guided fashion-domain language-image pre-training (FLIP) framework that focuses on learning fine-grained representations in e-commerce domain and utilizes external knowledge (i.e., product attribute schema), to improve the pre-training efficiency. Experiments demonstrate that FLIP outperforms previous state-of-the-art VLP models on Amazon data and on the Fashion-Gen dataset by large margins. FLIP has been successfully deployed in the Amazon catalog system to backfill missing attributes and improve the customer shopping experience.",
}
| Various Vision-Language Pre-training (VLP) models (e.g., CLIP, BLIP) have sprung up and dramatically advanced the benchmarks for public general-domain datasets (e.g., COCO, Flickr30k). Such models usually learn the cross-modal alignment from large-scale well-aligned image-text datasets without leveraging external knowledge. Adapting these models to downstream applications in specific domains like fashion requires fine-grained in-domain image-text corpora, which are usually less semantically aligned and smaller in scale, requiring efficient pre-training strategies. In this paper, we propose a knowledge-guided fashion-domain language-image pre-training (FLIP) framework that focuses on learning fine-grained representations in the e-commerce domain and utilizes external knowledge (i.e., product attribute schema) to improve the pre-training efficiency. Experiments demonstrate that FLIP outperforms previous state-of-the-art VLP models on Amazon data and on the Fashion-Gen dataset by large margins. FLIP has been successfully deployed in the Amazon catalog system to backfill missing attributes and improve the customer shopping experience. | [
"Jia, Qinjin",
"Liu, Yang",
"Wu, Daoping",
"Xu, Shaoyuan",
"Liu, Huidong",
"Fu, Jinmiao",
"Vollgraf, Rol",
"",
"Wang, Bryan"
] | KG-FLIP: Knowledge-guided Fashion-domain Language-Image Pre-training for E-commerce | acl-industry.9 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.10.bib | https://aclanthology.org/2023.acl-industry.10/ | @inproceedings{kulkarni-etal-2023-domain,
title = "Domain-specific transformer models for query translation",
author = "Kulkarni, Mandar and
Garera, Nikesh and
Trivedi, Anusua",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.10",
doi = "10.18653/v1/2023.acl-industry.10",
pages = "89--95",
abstract = "Due to the democratization of e-commerce, many product companies are listing their goods for online shopping. For periodic buying within a domain such as Grocery, consumers are generally inclined to buy certain brands of products. Due to a large non-English speaking population in India, we observe a significant percentage of code-mix Hinglish search queries e.g., sasta atta. An intuitive approach to dealing with code-mix queries is to train an encoder-decoder model to translate the query to English to perform the search. However, the problem becomes non-trivial when the brand names themselves have Hinglish names and possibly have a literal English translation. In such queries, only the context (non-brand name) Hinglish words needs to be translated. In this paper, we propose a simple yet effective modification to the transformer training to preserve/correct Grocery brand names in the output while selectively translating the context words. To achieve this, we use an additional dataset of popular Grocery brand names. Brand names are added as tokens to the model vocabulary, and the token embeddings are randomly initialized. Further, we introduce a Brand loss in training the translation model. Brand loss is a cross entropy loss computed using a denoising auto-encoder objective with brand name data. We warm-start the training from a public pre-trained checkpoint (such as BART/T5) and further adapt it for query translation using the domain data. The proposed model is generic and can be used with English as well as code-mix Hinglish queries alleviating the need for language detection. To reduce the latency of the model for the production deployment, we use knowledge distillation and quantization. Experimental evaluation indicates that the proposed approach improves translation results by preserving/correcting English/Hinglish brand names. After positive results with A/B testing, the model is currently deployed in production.",
}
| Due to the democratization of e-commerce, many product companies are listing their goods for online shopping. For periodic buying within a domain such as Grocery, consumers are generally inclined to buy certain brands of products. Due to a large non-English speaking population in India, we observe a significant percentage of code-mix Hinglish search queries, e.g., sasta atta. An intuitive approach to dealing with code-mix queries is to train an encoder-decoder model to translate the query to English to perform the search. However, the problem becomes non-trivial when the brand names themselves have Hinglish names and possibly have a literal English translation. In such queries, only the context (non-brand name) Hinglish words need to be translated. In this paper, we propose a simple yet effective modification to the transformer training to preserve/correct Grocery brand names in the output while selectively translating the context words. To achieve this, we use an additional dataset of popular Grocery brand names. Brand names are added as tokens to the model vocabulary, and the token embeddings are randomly initialized. Further, we introduce a Brand loss in training the translation model. Brand loss is a cross entropy loss computed using a denoising auto-encoder objective with brand name data. We warm-start the training from a public pre-trained checkpoint (such as BART/T5) and further adapt it for query translation using the domain data. The proposed model is generic and can be used with English as well as code-mix Hinglish queries, alleviating the need for language detection. To reduce the latency of the model for the production deployment, we use knowledge distillation and quantization. Experimental evaluation indicates that the proposed approach improves translation results by preserving/correcting English/Hinglish brand names. After positive results with A/B testing, the model is currently deployed in production. | [
"Kulkarni, M",
"ar",
"Garera, Nikesh",
"Trivedi, Anusua"
] | Domain-specific transformer models for query translation | acl-industry.10 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.11.bib | https://aclanthology.org/2023.acl-industry.11/ | @inproceedings{kulkarni-etal-2023-label,
title = "Label efficient semi-supervised conversational intent classification",
author = "Kulkarni, Mandar and
Kim, Kyung and
Garera, Nikesh and
Trivedi, Anusua",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.11",
doi = "10.18653/v1/2023.acl-industry.11",
pages = "96--102",
abstract = "To provide a convenient shopping experience and to answer user queries at scale, conversational platforms are essential for e-commerce. The user queries can be pre-purchase questions, such as product specifications and delivery time related, or post-purchase queries, such as exchange and return. A chatbot should be able to understand and answer a variety of such queries to help users with relevant information. One of the important modules in the chatbot is automated intent identification, i.e., understanding the user{'}s intention from the query text. Due to non-English speaking users interacting with the chatbot, we often get a significant percentage of code mix queries and queries with grammatical errors, which makes the problem more challenging. This paper proposes a simple yet competent Semi-Supervised Learning (SSL) approach for label-efficient intent classification. We use a small labeled corpus and relatively larger unlabeled query data to train a transformer model. For training the model with labeled data, we explore supervised MixUp data augmentation. To train with unlabeled data, we explore label consistency with dropout noise. We experiment with different pre-trained transformer architectures, such as BERT and sentence-BERT. Experimental results demonstrate that the proposed approach significantly improves over the supervised baseline, even with a limited labeled set. A variant of the model is currently deployed in production.",
}
| To provide a convenient shopping experience and to answer user queries at scale, conversational platforms are essential for e-commerce. The user queries can be pre-purchase questions, such as product specifications and delivery time related, or post-purchase queries, such as exchange and return. A chatbot should be able to understand and answer a variety of such queries to help users with relevant information. One of the important modules in the chatbot is automated intent identification, i.e., understanding the user{'}s intention from the query text. Due to non-English speaking users interacting with the chatbot, we often get a significant percentage of code mix queries and queries with grammatical errors, which makes the problem more challenging. This paper proposes a simple yet competent Semi-Supervised Learning (SSL) approach for label-efficient intent classification. We use a small labeled corpus and relatively larger unlabeled query data to train a transformer model. For training the model with labeled data, we explore supervised MixUp data augmentation. To train with unlabeled data, we explore label consistency with dropout noise. We experiment with different pre-trained transformer architectures, such as BERT and sentence-BERT. Experimental results demonstrate that the proposed approach significantly improves over the supervised baseline, even with a limited labeled set. A variant of the model is currently deployed in production. | [
"Kulkarni, M",
"ar",
"Kim, Kyung",
"Garera, Nikesh",
"Trivedi, Anusua"
] | Label efficient semi-supervised conversational intent classification | acl-industry.11 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.12.bib | https://aclanthology.org/2023.acl-industry.12/ | @inproceedings{shen-etal-2023-xpqa,
title = "x{PQA}: Cross-Lingual Product Question Answering in 12 Languages",
author = "Shen, Xiaoyu and
Asai, Akari and
Byrne, Bill and
De Gispert, Adria",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.12",
doi = "10.18653/v1/2023.acl-industry.12",
pages = "103--115",
abstract = "Product Question Answering (PQA) systems are key in e-commerce applications as they provide responses to customers{'} questions as they shop for products. While existing work on PQA focuses mainly on English, in practice there is need to support multiple customer languages while leveraging product information available in English. To study this practical industrial task, we present xPQA, a large-scale annotated cross-lingual PQA dataset in 12 languages, and report results in (1) candidate ranking, to select the best English candidate containing the information to answer a non-English question; and (2) answer generation, to generate a natural-sounding non-English answer based on the selected English candidate. We evaluate various approaches involving machine translation at runtime or offline, leveraging multilingual pre-trained LMs, and including or excluding xPQA training data. We find that in-domain data is essential as cross-lingual rankers trained on other domains perform poorly on the PQA task, and that translation-based approaches are most effective for candidate ranking while multilingual finetuning works best for answer generation. Still, there remains a significant performance gap between the English and the cross-lingual test sets.",
}
| Product Question Answering (PQA) systems are key in e-commerce applications as they provide responses to customers{'} questions as they shop for products. While existing work on PQA focuses mainly on English, in practice there is a need to support multiple customer languages while leveraging product information available in English. To study this practical industrial task, we present xPQA, a large-scale annotated cross-lingual PQA dataset in 12 languages, and report results in (1) candidate ranking, to select the best English candidate containing the information to answer a non-English question; and (2) answer generation, to generate a natural-sounding non-English answer based on the selected English candidate. We evaluate various approaches involving machine translation at runtime or offline, leveraging multilingual pre-trained LMs, and including or excluding xPQA training data. We find that in-domain data is essential as cross-lingual rankers trained on other domains perform poorly on the PQA task, and that translation-based approaches are most effective for candidate ranking while multilingual finetuning works best for answer generation. Still, there remains a significant performance gap between the English and the cross-lingual test sets. | [
"Shen, Xiaoyu",
"Asai, Akari",
"Byrne, Bill",
"De Gispert, Adria"
] | xPQA: Cross-Lingual Product Question Answering in 12 Languages | acl-industry.12 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.13.bib | https://aclanthology.org/2023.acl-industry.13/ | @inproceedings{hu-etal-2023-learn,
title = "Learn over Past, Evolve for Future: Forecasting Temporal Trends for Fake News Detection",
author = "Hu, Beizhe and
Sheng, Qiang and
Cao, Juan and
Zhu, Yongchun and
Wang, Danding and
Wang, Zhengjia and
Jin, Zhiwei",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.13",
doi = "10.18653/v1/2023.acl-industry.13",
pages = "116--125",
abstract = "Fake news detection has been a critical task for maintaining the health of the online news ecosystem. However, very few existing works consider the temporal shift issue caused by the rapidly-evolving nature of news data in practice, resulting in significant performance degradation when training on past data and testing on future data. In this paper, we observe that the appearances of news events on the same topic may display discernible patterns over time, and posit that such patterns can assist in selecting training instances that could make the model adapt better to future data. Specifically, we design an effective framework FTT (Forecasting Temporal Trends), which could forecast the temporal distribution patterns of news data and then guide the detector to fast adapt to future distribution. Experiments on the real-world temporally split dataset demonstrate the superiority of our proposed framework.",
}
| Fake news detection has been a critical task for maintaining the health of the online news ecosystem. However, very few existing works consider the temporal shift issue caused by the rapidly-evolving nature of news data in practice, resulting in significant performance degradation when training on past data and testing on future data. In this paper, we observe that the appearances of news events on the same topic may display discernible patterns over time, and posit that such patterns can assist in selecting training instances that could make the model adapt better to future data. Specifically, we design an effective framework FTT (Forecasting Temporal Trends), which could forecast the temporal distribution patterns of news data and then guide the detector to fast adapt to future distribution. Experiments on the real-world temporally split dataset demonstrate the superiority of our proposed framework. | [
"Hu, Beizhe",
"Sheng, Qiang",
"Cao, Juan",
"Zhu, Yongchun",
"Wang, D",
"ing",
"Wang, Zhengjia",
"Jin, Zhiwei"
] | Learn over Past, Evolve for Future: Forecasting Temporal Trends for Fake News Detection | acl-industry.13 | Poster | 2306.14728 | [
"https://github.com/ictmcg/ftt-acl23"
] | https://huggingface.co/papers/2306.14728 | 1 | 0 | 0 | 7 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-industry.14.bib | https://aclanthology.org/2023.acl-industry.14/ | @inproceedings{ricatte-crisostomi-2023-aven,
title = "{AVEN}-{GR}: Attribute Value Extraction and Normalization using product {GR}aphs",
author = "Ricatte, Thomas and
Crisostomi, Donato",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.14",
doi = "10.18653/v1/2023.acl-industry.14",
pages = "126--133",
abstract = "Getting a good understanding of the user intent is vital for e-commerce applications to surface the right product to a given customer query. Query Understanding (QU) systems are essential for this purpose, and many e-commerce providers are working on complex solutions that need to be data efficient and able to capture early emerging market trends. Query Attribute Understanding (QAU) is a sub-component of QU that involves extracting named attributes from user queries and linking them to existing e-commerce entities such as brand, material, color, etc. While extracting named entities from text has been extensively explored in the literature, QAU requires specific attention due to the nature of the queries, which are often short, noisy, ambiguous, and constantly evolving. This paper makes three contributions to QAU. First, we propose a novel end-to-end approach that jointly solves Named Entity Recognition (NER) and Entity Linking (NEL) and enables open-world reasoning for QAU. Second, we introduce a novel method for utilizing product graphs to enhance the representation of query entities. Finally, we present a new dataset constructed from public sources that can be used to evaluate the performance of future QAU systems.",
}
| Getting a good understanding of the user intent is vital for e-commerce applications to surface the right product to a given customer query. Query Understanding (QU) systems are essential for this purpose, and many e-commerce providers are working on complex solutions that need to be data efficient and able to capture early emerging market trends. Query Attribute Understanding (QAU) is a sub-component of QU that involves extracting named attributes from user queries and linking them to existing e-commerce entities such as brand, material, color, etc. While extracting named entities from text has been extensively explored in the literature, QAU requires specific attention due to the nature of the queries, which are often short, noisy, ambiguous, and constantly evolving. This paper makes three contributions to QAU. First, we propose a novel end-to-end approach that jointly solves Named Entity Recognition (NER) and Entity Linking (NEL) and enables open-world reasoning for QAU. Second, we introduce a novel method for utilizing product graphs to enhance the representation of query entities. Finally, we present a new dataset constructed from public sources that can be used to evaluate the performance of future QAU systems. | [
"Ricatte, Thomas",
"Crisostomi, Donato"
] | AVEN-GR: Attribute Value Extraction and Normalization using product GRaphs | acl-industry.14 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.15.bib | https://aclanthology.org/2023.acl-industry.15/ | @inproceedings{tan-etal-2023-gkd,
title = "{GKD}: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model",
author = "Tan, Shicheng and
Tam, Weng Lam and
Wang, Yuanchun and
Gong, Wenwen and
Zhao, Shu and
Zhang, Peng and
Tang, Jie",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.15",
doi = "10.18653/v1/2023.acl-industry.15",
pages = "134--148",
abstract = "Currently, the reduction in the parameter scale of large-scale pre-trained language models (PLMs) through knowledge distillation has greatly facilitated their widespread deployment on various devices. However, the deployment of knowledge distillation systems faces great challenges in real-world industrial-strength applications, which require the use of complex distillation methods on even larger-scale PLMs (over 10B), limited by memory on GPUs and the switching of methods. To overcome these challenges, we propose GKD, a general knowledge distillation framework that supports distillation on larger-scale PLMs using various distillation methods. With GKD, developers can build larger distillation models on memory-limited GPUs and easily switch and combine different distillation methods within a single framework. Experimental results show that GKD can support the distillation of at least 100B-scale PLMs and 25 mainstream methods on 8 NVIDIA A100 (40GB) GPUs.",
}
| Currently, the reduction in the parameter scale of large-scale pre-trained language models (PLMs) through knowledge distillation has greatly facilitated their widespread deployment on various devices. However, the deployment of knowledge distillation systems faces great challenges in real-world industrial-strength applications, which require the use of complex distillation methods on even larger-scale PLMs (over 10B), limited by memory on GPUs and the switching of methods. To overcome these challenges, we propose GKD, a general knowledge distillation framework that supports distillation on larger-scale PLMs using various distillation methods. With GKD, developers can build larger distillation models on memory-limited GPUs and easily switch and combine different distillation methods within a single framework. Experimental results show that GKD can support the distillation of at least 100B-scale PLMs and 25 mainstream methods on 8 NVIDIA A100 (40GB) GPUs. | [
"Tan, Shicheng",
"Tam, Weng Lam",
"Wang, Yuanchun",
"Gong, Wenwen",
"Zhao, Shu",
"Zhang, Peng",
"Tang, Jie"
] | GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model | acl-industry.15 | Poster | 2306.06629 | [
"https://github.com/aitsc/glmkd"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-industry.16.bib | https://aclanthology.org/2023.acl-industry.16/ | @inproceedings{wang-etal-2023-fashionklip,
title = "{F}ashion{KLIP}: Enhancing {E}-Commerce Image-Text Retrieval with Fashion Multi-Modal Conceptual Knowledge Graph",
author = "Wang, Xiaodan and
Wang, Chengyu and
Li, Lei and
Li, Zhixu and
Chen, Ben and
Jin, Linbo and
Huang, Jun and
Xiao, Yanghua and
Gao, Ming",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.16",
doi = "10.18653/v1/2023.acl-industry.16",
pages = "149--158",
abstract = "Image-text retrieval is a core task in the multi-modal domain, which arises a lot of attention from both research and industry communities. Recently, the booming of visual-language pre-trained (VLP) models has greatly enhanced the performance of cross-modal retrieval. However, the fine-grained interactions between objects from different modalities are far from well-established. This issue becomes more severe in the e-commerce domain, which lacks sufficient training data and fine-grained cross-modal knowledge. To alleviate the problem, this paper proposes a novel e-commerce knowledge-enhanced VLP model FashionKLIP. We first automatically establish a multi-modal conceptual knowledge graph from large-scale e-commerce image-text data, and then inject the prior knowledge into the VLP model to align across modalities at the conceptual level. The experiments conducted on a public benchmark dataset demonstrate that FashionKLIP effectively enhances the performance of e-commerce image-text retrieval upon state-of-the-art VLP models by a large margin. The application of the method in real industrial scenarios also proves the feasibility and efficiency of FashionKLIP.",
}
| Image-text retrieval is a core task in the multi-modal domain, which attracts a lot of attention from both research and industry communities. Recently, the boom of visual-language pre-trained (VLP) models has greatly enhanced the performance of cross-modal retrieval. However, the fine-grained interactions between objects from different modalities are far from well-established. This issue becomes more severe in the e-commerce domain, which lacks sufficient training data and fine-grained cross-modal knowledge. To alleviate the problem, this paper proposes a novel e-commerce knowledge-enhanced VLP model FashionKLIP. We first automatically establish a multi-modal conceptual knowledge graph from large-scale e-commerce image-text data, and then inject the prior knowledge into the VLP model to align across modalities at the conceptual level. The experiments conducted on a public benchmark dataset demonstrate that FashionKLIP effectively enhances the performance of e-commerce image-text retrieval upon state-of-the-art VLP models by a large margin. The application of the method in real industrial scenarios also proves the feasibility and efficiency of FashionKLIP. | [
"Wang, Xiaodan",
"Wang, Chengyu",
"Li, Lei",
"Li, Zhixu",
"Chen, Ben",
"Jin, Linbo",
"Huang, Jun",
"Xiao, Yanghua",
"Gao, Ming"
] | FashionKLIP: Enhancing E-Commerce Image-Text Retrieval with Fashion Multi-Modal Conceptual Knowledge Graph | acl-industry.16 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.17.bib | https://aclanthology.org/2023.acl-industry.17/ | @inproceedings{rubin-etal-2023-entity,
title = "Entity Contrastive Learning in a Large-Scale Virtual Assistant System",
author = "Rubin, Jonathan and
Crowley, Jason and
Leung, George and
Ziyadi, Morteza and
Minakova, Maria",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.17",
doi = "10.18653/v1/2023.acl-industry.17",
pages = "159--171",
abstract = "Conversational agents are typically made up of domain (DC) and intent classifiers (IC) that identify the general subject an utterance belongs to and the specific action a user wishes to achieve. In addition, named entity recognition (NER) performs per token labeling to identify specific entities of interest in a spoken utterance. We investigate improving joint IC and NER models using entity contrastive learning that attempts to cluster similar entities together in a learned representation space. We compare a full virtual assistant system trained using entity contrastive learning to a production baseline system that does not use contrastive learning. We present both offline results, using retrospective test sets, as well as live online results from an A/B test that compared the two systems. In both the offline and online settings, entity contrastive training improved overall performance against production baselines. Furthermore, we provide a detailed analysis of learned entity embeddings, including both qualitative analysis via dimensionality-reduced visualizations and quantitative analysis by computing alignment and uniformity metrics. We show that entity contrastive learning improves alignment metrics and produces well-formed embedding clusters in representation space.",
}
| Conversational agents are typically made up of domain (DC) and intent classifiers (IC) that identify the general subject an utterance belongs to and the specific action a user wishes to achieve. In addition, named entity recognition (NER) performs per token labeling to identify specific entities of interest in a spoken utterance. We investigate improving joint IC and NER models using entity contrastive learning that attempts to cluster similar entities together in a learned representation space. We compare a full virtual assistant system trained using entity contrastive learning to a production baseline system that does not use contrastive learning. We present both offline results, using retrospective test sets, as well as live online results from an A/B test that compared the two systems. In both the offline and online settings, entity contrastive training improved overall performance against production baselines. Furthermore, we provide a detailed analysis of learned entity embeddings, including both qualitative analysis via dimensionality-reduced visualizations and quantitative analysis by computing alignment and uniformity metrics. We show that entity contrastive learning improves alignment metrics and produces well-formed embedding clusters in representation space. | [
"Rubin, Jonathan",
"Crowley, Jason",
"Leung, George",
"Ziyadi, Morteza",
"Minakova, Maria"
] | Entity Contrastive Learning in a Large-Scale Virtual Assistant System | acl-industry.17 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.18.bib | https://aclanthology.org/2023.acl-industry.18/ | @inproceedings{cheng-etal-2023-tab,
title = "Tab-Cleaner: Weakly Supervised Tabular Data Cleaning via Pre-training for {E}-commerce Catalog",
author = "Cheng, Kewei and
Li, Xian and
Wang, Zhengyang and
Zhang, Chenwei and
Huang, Binxuan and
Xu, Yifan Ethan and
Dong, Xin Luna and
Sun, Yizhou",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.18",
doi = "10.18653/v1/2023.acl-industry.18",
pages = "172--185",
abstract = "Product catalogs, conceptually in the form of text-rich tables, are self-reported by individual retailers and thus inevitably contain noisy facts. Verifying such textual attributes in product catalogs is essential to improve their reliability. However, popular methods for processing free-text content, such as pre-trained language models, are not particularly effective on structured tabular data since they are typically trained on free-form natural language texts. In this paper, we present Tab-Cleaner, a model designed to handle error detection over text-rich tabular data following a pre-training / fine-tuning paradigm. We train Tab-Cleaner on a real-world Amazon Product Catalog table w.r.t millions of products and show improvements over state-of-the-art methods by 16{\textbackslash}{\%} on PR AUC over attribute applicability classification task and by 11{\textbackslash}{\%} on PR AUC over attribute value validation task.",
}
| Product catalogs, conceptually in the form of text-rich tables, are self-reported by individual retailers and thus inevitably contain noisy facts. Verifying such textual attributes in product catalogs is essential to improve their reliability. However, popular methods for processing free-text content, such as pre-trained language models, are not particularly effective on structured tabular data since they are typically trained on free-form natural language texts. In this paper, we present Tab-Cleaner, a model designed to handle error detection over text-rich tabular data following a pre-training / fine-tuning paradigm. We train Tab-Cleaner on a real-world Amazon Product Catalog table w.r.t. millions of products and show improvements over state-of-the-art methods by 16{\%} in PR AUC on the attribute applicability classification task and by 11{\%} in PR AUC on the attribute value validation task. | [
"Cheng, Kewei",
"Li, Xian",
"Wang, Zhengyang",
"Zhang, Chenwei",
"Huang, Binxuan",
"Xu, Yifan Ethan",
"Dong, Xin Luna",
"Sun, Yizhou"
] | Tab-Cleaner: Weakly Supervised Tabular Data Cleaning via Pre-training for E-commerce Catalog | acl-industry.18 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.19.bib | https://aclanthology.org/2023.acl-industry.19/ | @inproceedings{komma-etal-2023-toward,
title = "Toward More Accurate and Generalizable Evaluation Metrics for Task-Oriented Dialogs",
author = "Komma, Abishek and
Panyam Chandrasekarasastry, Nagesh and
Leffel, Timothy and
Goyal, Anuj and
Metallinou, Angeliki and
Matsoukas, Spyros and
Galstyan, Aram",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.19",
doi = "10.18653/v1/2023.acl-industry.19",
pages = "186--195",
abstract = "Measurement of interaction quality is a critical task for the improvement of large-scale spoken dialog systems. Existing approaches to dialog quality estimation either focus on evaluating the quality of individual turns, or collect dialog-level quality measurements from end users immediately following an interaction. In contrast to these approaches, we introduce a new dialog-level annotation workflow called Dialog Quality Annotation (DQA). DQA expert annotators evaluate the quality of dialogs as a whole, and also label dialogs for attributes such as goal completion and user sentiment. In this contribution, we show that: (i) while dialog quality cannot be completely decomposed into dialog-level attributes, there is a strong relationship between some objective dialog attributes and judgments of dialog quality; (ii) for the task of dialog-level quality estimation, a supervised model trained on dialog-level annotations outperforms methods based purely on aggregating turn-level features; and (iii) the proposed evaluation model shows better domain generalization ability compared to the baselines. On the basis of these results, we argue that having high-quality human-annotated data is an important component of evaluating interaction quality for large industrial-scale voice assistant platforms.",
}
| Measurement of interaction quality is a critical task for the improvement of large-scale spoken dialog systems. Existing approaches to dialog quality estimation either focus on evaluating the quality of individual turns, or collect dialog-level quality measurements from end users immediately following an interaction. In contrast to these approaches, we introduce a new dialog-level annotation workflow called Dialog Quality Annotation (DQA). DQA expert annotators evaluate the quality of dialogs as a whole, and also label dialogs for attributes such as goal completion and user sentiment. In this contribution, we show that: (i) while dialog quality cannot be completely decomposed into dialog-level attributes, there is a strong relationship between some objective dialog attributes and judgments of dialog quality; (ii) for the task of dialog-level quality estimation, a supervised model trained on dialog-level annotations outperforms methods based purely on aggregating turn-level features; and (iii) the proposed evaluation model shows better domain generalization ability compared to the baselines. On the basis of these results, we argue that having high-quality human-annotated data is an important component of evaluating interaction quality for large industrial-scale voice assistant platforms. | [
"Komma, Abishek",
"Panyam Ch",
"rasekarasastry, Nagesh",
"Leffel, Timothy",
"Goyal, Anuj",
"Metallinou, Angeliki",
"Matsoukas, Spyros",
"Galstyan, Aram"
] | Toward More Accurate and Generalizable Evaluation Metrics for Task-Oriented Dialogs | acl-industry.19 | Poster | 2306.03984 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-industry.20.bib | https://aclanthology.org/2023.acl-industry.20/ | @inproceedings{liu-etal-2023-tab,
title = "Tab-{CQA}: A Tabular Conversational Question Answering Dataset on Financial Reports",
author = "Liu, Chuang and
Li, Junzhuo and
Xiong, Deyi",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.20",
doi = "10.18653/v1/2023.acl-industry.20",
pages = "196--207",
abstract = "Existing conversational question answering (CQA) datasets have been usually constructed from unstructured texts in English. In this paper, we propose Tab-CQA, a tabular CQA dataset created from Chinese financial reports that are extracted from listed companies in a wide range of different sectors in the past 30 years. From these reports, we select 2,463 tables, and manually generate 2,463 conversations with 35,494 QA pairs. Additionally, we select 4,578 tables, from which 4,578 conversations with 73,595 QA pairs are automatically created via a template-based method. With the manually- and automatically-generated conversations, Tab-CQA contains answerable and unanswerable questions. For the answerable questions, we further diversify them to cover a wide range of skills, e.g., table retrieval, fact checking, numerical reasoning, so as to accommodate real-world scenarios. We further propose two different tabular CQA models, a text-based model and an operation-based model, and evaluate them on Tab-CQA. Experiment results show that Tab-CQA is a very challenging dataset, where a huge performance gap exists between human and neural models. We will publicly release Tab-CQA as a benchmark testbed to promote further research on Chinese tabular CQA.",
}
| Existing conversational question answering (CQA) datasets have usually been constructed from unstructured texts in English. In this paper, we propose Tab-CQA, a tabular CQA dataset created from Chinese financial reports that are extracted from listed companies in a wide range of different sectors in the past 30 years. From these reports, we select 2,463 tables, and manually generate 2,463 conversations with 35,494 QA pairs. Additionally, we select 4,578 tables, from which 4,578 conversations with 73,595 QA pairs are automatically created via a template-based method. With the manually- and automatically-generated conversations, Tab-CQA contains answerable and unanswerable questions. For the answerable questions, we further diversify them to cover a wide range of skills, e.g., table retrieval, fact checking, numerical reasoning, so as to accommodate real-world scenarios. We further propose two different tabular CQA models, a text-based model and an operation-based model, and evaluate them on Tab-CQA. Experiment results show that Tab-CQA is a very challenging dataset, where a huge performance gap exists between human and neural models. We will publicly release Tab-CQA as a benchmark testbed to promote further research on Chinese tabular CQA. | [
"Liu, Chuang",
"Li, Junzhuo",
"Xiong, Deyi"
] | Tab-CQA: A Tabular Conversational Question Answering Dataset on Financial Reports | acl-industry.20 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.21.bib | https://aclanthology.org/2023.acl-industry.21/ | @inproceedings{lee-etal-2023-kosbi,
title = "{K}o{SBI}: A Dataset for Mitigating Social Bias Risks Towards Safer Large Language Model Applications",
author = "Lee, Hwaran and
Hong, Seokhee and
Park, Joonsuk and
Kim, Takyoung and
Kim, Gunhee and
Ha, Jung-woo",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.21",
doi = "10.18653/v1/2023.acl-industry.21",
pages = "208--224",
abstract = "Large language models (LLMs) not only learn natural text generation abilities but also social biases against different demographic groups from real-world data. This poses a critical risk when deploying LLM-based applications. Existing research and resources are not readily applicable in South Korea due to the differences in language and culture, both of which significantly affect the biases and targeted demographic groups. This limitation requires localized social bias datasets to ensure the safe and effective deployment of LLMs. To this end, we present KosBi, a new social bias dataset of 34k pairs of contexts and sentences in Korean covering 72 demographic groups in 15 categories. We find that through filtering-based moderation, social biases in generated content can be reduced by 16.47{\%}p on average for HyperClova (30B and 82B), and GPT-3.",
}
| Large language models (LLMs) not only learn natural text generation abilities but also social biases against different demographic groups from real-world data. This poses a critical risk when deploying LLM-based applications. Existing research and resources are not readily applicable in South Korea due to the differences in language and culture, both of which significantly affect the biases and targeted demographic groups. This limitation requires localized social bias datasets to ensure the safe and effective deployment of LLMs. To this end, we present KoSBI, a new social bias dataset of 34k pairs of contexts and sentences in Korean covering 72 demographic groups in 15 categories. We find that through filtering-based moderation, social biases in generated content can be reduced by 16.47{\%}p on average for HyperCLOVA (30B and 82B), and GPT-3. | [
"Lee, Hwaran",
"Hong, Seokhee",
"Park, Joonsuk",
"Kim, Takyoung",
"Kim, Gunhee",
"Ha, Jung-woo"
] | KoSBI: A Dataset for Mitigating Social Bias Risks Towards Safer Large Language Model Applications | acl-industry.21 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.22.bib | https://aclanthology.org/2023.acl-industry.22/ | @inproceedings{yang-etal-2023-improving,
title = "Improving Knowledge Production Efficiency With Question Answering on Conversation",
author = "Yang, Changlin and
Liu, Siye and
Hu, Sen and
Zhang, Wangshu and
Xu, Teng and
Zheng, Jing",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.22",
doi = "10.18653/v1/2023.acl-industry.22",
pages = "225--234",
abstract = "Through an online customer service application, we have collected many conversations between customer service agents and customers. Building a knowledge production system can help reduce the labor cost of maintaining the FAQ database for the customer service chatbot, whose core module is question answering (QA) on these conversations. However, most existing researches focus on document-based QA tasks, and there is a lack of researches on conversation-based QA and related datasets, especially in Chinese language. The challenges of conversation-based QA include: 1) answers may be scattered among multiple dialogue turns; 2) understanding complex dialogue contexts is more complicated than documents. To address these challenges, we propose a multi-span extraction model on this task and introduce continual pre-training and multi-task learning schemes to further improve model performance. To validate our approach, we construct two Chinese datasets using dialogues as the knowledge source, namely cs-qaconv and kd-qaconv, respectively. Experimental results demonstrate that the proposed model outperforms the baseline on both datasets. The online application also verifies the effectiveness of our method. The dataset kd-qaconv will be released publicly for research purposes.",
}
| Through an online customer service application, we have collected many conversations between customer service agents and customers. Building a knowledge production system can help reduce the labor cost of maintaining the FAQ database for the customer service chatbot, whose core module is question answering (QA) on these conversations. However, most existing research focuses on document-based QA tasks, and there is a lack of research on conversation-based QA and related datasets, especially in the Chinese language. The challenges of conversation-based QA include: 1) answers may be scattered among multiple dialogue turns; 2) understanding complex dialogue contexts is more complicated than understanding documents. To address these challenges, we propose a multi-span extraction model for this task and introduce continual pre-training and multi-task learning schemes to further improve model performance. To validate our approach, we construct two Chinese datasets using dialogues as the knowledge source, namely cs-qaconv and kd-qaconv, respectively. Experimental results demonstrate that the proposed model outperforms the baseline on both datasets. The online application also verifies the effectiveness of our method. The dataset kd-qaconv will be released publicly for research purposes. | [
"Yang, Changlin",
"Liu, Siye",
"Hu, Sen",
"Zhang, Wangshu",
"Xu, Teng",
"Zheng, Jing"
] | Improving Knowledge Production Efficiency With Question Answering on Conversation | acl-industry.22 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.23.bib | https://aclanthology.org/2023.acl-industry.23/ | @inproceedings{crisostomi-etal-2023-mitigating,
title = "Mitigating the Burden of Redundant Datasets via Batch-Wise Unique Samples and Frequency-Aware Losses",
author = "Crisostomi, Donato and
Caciolai, Andrea and
Pedrani, Alessandro and
Rottmann, Kay and
Manzotti, Alessandro and
Palumbo, Enrico and
Bernardi, Davide",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.23",
doi = "10.18653/v1/2023.acl-industry.23",
pages = "235--247",
abstract = "Datasets used to train deep learning models in industrial settings often exhibit skewed distributions with some samples repeated a large number of times. This paper presents a simple yet effective solution to reduce the increased burden of repeated computation on redundant datasets. Our approach eliminates duplicates at the batch level, without altering the data distribution observed by the model, making it model-agnostic and easy to implement as a plug-and-play module. We also provide a mathematical expression to estimate the reduction in training time that our approach provides. Through empirical evidence, we show that our approach significantly reduces training times on various models across datasets with varying redundancy factors, without impacting their performance on the Named Entity Recognition task, both on publicly available datasets and in real industrial settings. In the latter, the approach speeds training by up to 87{\%}, and by 46{\%} on average, with a drop in model performance of 0.2{\%} relative at worst. We finally release a modular and reusable codebase to further advance research in this area.",
}
| Datasets used to train deep learning models in industrial settings often exhibit skewed distributions with some samples repeated a large number of times. This paper presents a simple yet effective solution to reduce the increased burden of repeated computation on redundant datasets. Our approach eliminates duplicates at the batch level, without altering the data distribution observed by the model, making it model-agnostic and easy to implement as a plug-and-play module. We also provide a mathematical expression to estimate the reduction in training time that our approach provides. Through empirical evidence, we show that our approach significantly reduces training times on various models across datasets with varying redundancy factors, without impacting their performance on the Named Entity Recognition task, both on publicly available datasets and in real industrial settings. In the latter, the approach speeds training by up to 87{\%}, and by 46{\%} on average, with a drop in model performance of 0.2{\%} relative at worst. We finally release a modular and reusable codebase to further advance research in this area. | [
"Crisostomi, Donato",
"Caciolai, Andrea",
"Pedrani, Aless",
"ro",
"Rottmann, Kay",
"Manzotti, Aless",
"ro",
"Palumbo, Enrico",
"Bernardi, Davide"
] | Mitigating the Burden of Redundant Datasets via Batch-Wise Unique Samples and Frequency-Aware Losses | acl-industry.23 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.24.bib | https://aclanthology.org/2023.acl-industry.24/ | @inproceedings{howell-etal-2023-distilled,
title = "The economic trade-offs of large language models: A case study",
author = "Howell, Kristen and
Christian, Gwen and
Fomitchov, Pavel and
Kehat, Gitit and
Marzulla, Julianne and
Rolston, Leanne and
Tredup, Jadin and
Zimmerman, Ilana and
Selfridge, Ethan and
Bradley, Joseph",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.24",
doi = "10.18653/v1/2023.acl-industry.24",
pages = "248--267",
abstract = "Contacting customer service via chat is a common practice. Because employing customer service agents is expensive, many companies are turning to NLP that assists human agents by auto-generating responses that can be used directly or with modifications. With their ability to handle large context windows, Large Language Models (LLMs) are a natural fit for this use case. However, their efficacy must be balanced with the cost of training and serving them. This paper assesses the practical cost and impact of LLMs for the enterprise as a function of the usefulness of the responses that they generate. We present a cost framework for evaluating an NLP model{'}s utility for this use case and apply it to a single brand as a case study in the context of an existing agent assistance product. We compare three strategies for specializing an LLM {---} prompt engineering, fine-tuning, and knowledge distillation {---} using feedback from the brand{'}s customer service agents. We find that the usability of a model{'}s responses can make up for a large difference in inference cost for our case study brand, and we extrapolate our findings to the broader enterprise space.",
}
| Contacting customer service via chat is a common practice. Because employing customer service agents is expensive, many companies are turning to NLP that assists human agents by auto-generating responses that can be used directly or with modifications. With their ability to handle large context windows, Large Language Models (LLMs) are a natural fit for this use case. However, their efficacy must be balanced with the cost of training and serving them. This paper assesses the practical cost and impact of LLMs for the enterprise as a function of the usefulness of the responses that they generate. We present a cost framework for evaluating an NLP model{'}s utility for this use case and apply it to a single brand as a case study in the context of an existing agent assistance product. We compare three strategies for specializing an LLM {---} prompt engineering, fine-tuning, and knowledge distillation {---} using feedback from the brand{'}s customer service agents. We find that the usability of a model{'}s responses can make up for a large difference in inference cost for our case study brand, and we extrapolate our findings to the broader enterprise space. | [
"Howell, Kristen",
"Christian, Gwen",
"Fomitchov, Pavel",
"Kehat, Gitit",
"Marzulla, Julianne",
"Rolston, Leanne",
"Tredup, Jadin",
"Zimmerman, Ilana",
"Selfridge, Ethan",
"Bradley, Joseph"
] | The economic trade-offs of large language models: A case study | acl-industry.24 | Poster | 2306.07402 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-industry.25.bib | https://aclanthology.org/2023.acl-industry.25/ | @inproceedings{nussbaum-thom-etal-2023-application,
title = "Application-Agnostic Language Modeling for On-Device {ASR}",
author = "Nussbaum-thom, Markus and
Verwimp, Lyan and
Oualil, Youssef",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.25",
doi = "10.18653/v1/2023.acl-industry.25",
pages = "268--275",
abstract = "On-device automatic speech recognition systems face several challenges compared to server-based systems. They have to meet stricter constraints in terms of speed, disk size and memory while maintaining the same accuracy. Often they have to serve several ap- plications with different distributions at once, such as communicating with a virtual assistant and speech-to-text. The simplest solution to serve multiple applications is to build application-specific (language) models, but this leads to an increase in memory. Therefore, we explore different data- and architecture-driven language modeling approaches to build a single application-agnostic model. We propose two novel feed-forward architectures that find an optimal trade off between different on-device constraints. In comparison to the application-specific solution, one of our novel approaches reduces the disk size by half, while maintaining speed and accuracy of the original model.",
}
| On-device automatic speech recognition systems face several challenges compared to server-based systems. They have to meet stricter constraints in terms of speed, disk size and memory while maintaining the same accuracy. Often they have to serve several applications with different distributions at once, such as communicating with a virtual assistant and speech-to-text. The simplest solution to serve multiple applications is to build application-specific (language) models, but this leads to an increase in memory. Therefore, we explore different data- and architecture-driven language modeling approaches to build a single application-agnostic model. We propose two novel feed-forward architectures that find an optimal trade-off between different on-device constraints. In comparison to the application-specific solution, one of our novel approaches reduces the disk size by half, while maintaining speed and accuracy of the original model. | [
"Nussbaum-thom, Markus",
"Verwimp, Lyan",
"Oualil, Youssef"
] | Application-Agnostic Language Modeling for On-Device ASR | acl-industry.25 | Poster | 2305.09764 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-industry.26.bib | https://aclanthology.org/2023.acl-industry.26/ | @inproceedings{goyal-garera-2023-building,
title = "Building Accurate Low Latency {ASR} for Streaming Voice Search in {E}-commerce",
author = "Goyal, Abhinav and
Garera, Nikesh",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.26",
doi = "10.18653/v1/2023.acl-industry.26",
pages = "276--283",
abstract = "Automatic Speech Recognition (ASR) is essential for any voice-based application. The streaming capability of ASR becomes necessary to provide immediate feedback to the user in applications like Voice Search. LSTM/RNN and CTC based ASR systems are very simple to train and deploy for low latency streaming applications but have lower accuracy when compared to the state-of-the-art models. In this work, we build accurate LSTM, attention and CTC based streaming ASR models for large-scale Hinglish (blend of Hindi and English) Voice Search. We evaluate how various modifications in vanilla LSTM training improve the system{'}s accuracy while preserving the streaming capabilities. We also discuss a simple integration of end-of-speech (EOS) detection with CTC models, which helps reduce the overall search latency. Our model achieves a word error rate (WER) of 3.69{\%} without EOS and 4.78{\%} with EOS, with {\textasciitilde}1300 ms ({\textasciitilde}46.64{\%}) reduction in latency.",
}
| Automatic Speech Recognition (ASR) is essential for any voice-based application. The streaming capability of ASR becomes necessary to provide immediate feedback to the user in applications like Voice Search. LSTM/RNN and CTC based ASR systems are very simple to train and deploy for low latency streaming applications but have lower accuracy when compared to the state-of-the-art models. In this work, we build accurate LSTM, attention and CTC based streaming ASR models for large-scale Hinglish (blend of Hindi and English) Voice Search. We evaluate how various modifications in vanilla LSTM training improve the system{'}s accuracy while preserving the streaming capabilities. We also discuss a simple integration of end-of-speech (EOS) detection with CTC models, which helps reduce the overall search latency. Our model achieves a word error rate (WER) of 3.69{\%} without EOS and 4.78{\%} with EOS, with {\textasciitilde}1300 ms ({\textasciitilde}46.64{\%}) reduction in latency. | [
"Goyal, Abhinav",
"Garera, Nikesh"
] | Building Accurate Low Latency ASR for Streaming Voice Search in E-commerce | acl-industry.26 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.27.bib | https://aclanthology.org/2023.acl-industry.27/ | @inproceedings{san-etal-2023-plate,
title = "{PLA}t{E}: A Large-scale Dataset for List Page Web Extraction",
author = "San, Aidan and
Zhuang, Yuan and
Bakus, Jan and
Lockard, Colin and
Ciemiewicz, David and
Atluri, Sandeep and
Small, Kevin and
Ji, Yangfeng and
Elfardy, Heba",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.27",
doi = "10.18653/v1/2023.acl-industry.27",
pages = "284--294",
abstract = "Recently, neural models have been leveraged to significantly improve the performance of information extraction from semi-structured websites. However, a barrier for continued progress is the small number of datasets large enough to train these models. In this work, we introduce the PLAtE (Pages of Lists Attribute Extraction) benchmark dataset as a challenging new web extraction task. PLAtE focuses on shopping data, specifically extractions from product review pages with multiple items encompassing the tasks of: (1) finding product list segmentation boundaries and (2) extracting attributes for each product. PLAtE is composed of 52,898 items collected from 6,694 pages and 156,014 attributes, making it the first large-scale list page web extraction dataset. We use a multi-stage approach to collect and annotate the dataset and adapt three state-of-the-art web extraction models to the two tasks comparing their strengths and weaknesses both quantitatively and qualitatively.",
}
| Recently, neural models have been leveraged to significantly improve the performance of information extraction from semi-structured websites. However, a barrier for continued progress is the small number of datasets large enough to train these models. In this work, we introduce the PLAtE (Pages of Lists Attribute Extraction) benchmark dataset as a challenging new web extraction task. PLAtE focuses on shopping data, specifically extractions from product review pages with multiple items, encompassing the tasks of (1) finding product list segmentation boundaries and (2) extracting attributes for each product. PLAtE is composed of 52,898 items collected from 6,694 pages and 156,014 attributes, making it the first large-scale list page web extraction dataset. We use a multi-stage approach to collect and annotate the dataset and adapt three state-of-the-art web extraction models to the two tasks, comparing their strengths and weaknesses both quantitatively and qualitatively. | [
"San, Aidan",
"Zhuang, Yuan",
"Bakus, Jan",
"Lockard, Colin",
"Ciemiewicz, David",
"Atluri, S",
"eep",
"Small, Kevin",
"Ji, Yangfeng",
"Elfardy, Heba"
] | PLAtE: A Large-scale Dataset for List Page Web Extraction | acl-industry.27 | Poster | 2205.12386 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-industry.28.bib | https://aclanthology.org/2023.acl-industry.28/ | @inproceedings{liu-etal-2023-rapid,
title = "Rapid Diffusion: Building Domain-Specific Text-to-Image Synthesizers with Fast Inference Speed",
author = "Liu, Bingyan and
Lin, Weifeng and
Duan, Zhongjie and
Wang, Chengyu and
Ziheng, Wu and
Zipeng, Zhang and
Jia, Kui and
Jin, Lianwen and
Chen, Cen and
Huang, Jun",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.28",
doi = "10.18653/v1/2023.acl-industry.28",
pages = "295--304",
abstract = "Text-to-Image Synthesis (TIS) aims to generate images based on textual inputs. Recently, several large pre-trained diffusion models have been released to create high-quality images with pre-trained text encoders and diffusion-based image synthesizers. However, popular diffusion-based models from the open-source community cannot support industrial domain-specific applications due to the lack of entity knowledge and low inference speed. In this paper, we propose Rapid Diffusion, a novel framework for training and deploying super-resolution, text-to-image latent diffusion models with rich entity knowledge injected and optimized networks. Furthermore, we employ BladeDISC, an end-to-end Artificial Intelligence (AI) compiler, and FlashAttention techniques to optimize computational graphs of the generated models for online deployment. Experiments verify the effectiveness of our approach in terms of image quality and inference speed. In addition, we present industrial use cases and integrate Rapid Diffusion to an AI platform to show its practical values.",
}
| Text-to-Image Synthesis (TIS) aims to generate images based on textual inputs. Recently, several large pre-trained diffusion models have been released to create high-quality images with pre-trained text encoders and diffusion-based image synthesizers. However, popular diffusion-based models from the open-source community cannot support industrial domain-specific applications due to the lack of entity knowledge and low inference speed. In this paper, we propose Rapid Diffusion, a novel framework for training and deploying super-resolution, text-to-image latent diffusion models with rich entity knowledge injected and optimized networks. Furthermore, we employ BladeDISC, an end-to-end Artificial Intelligence (AI) compiler, and FlashAttention techniques to optimize computational graphs of the generated models for online deployment. Experiments verify the effectiveness of our approach in terms of image quality and inference speed. In addition, we present industrial use cases and integrate Rapid Diffusion into an AI platform to show its practical value. | [
"Liu, Bingyan",
"Lin, Weifeng",
"Duan, Zhongjie",
"Wang, Chengyu",
"Ziheng, Wu",
"Zipeng, Zhang",
"Jia, Kui",
"Jin, Lianwen",
"Chen, Cen",
"Huang, Jun"
] | Rapid Diffusion: Building Domain-Specific Text-to-Image Synthesizers with Fast Inference Speed | acl-industry.28 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.29.bib | https://aclanthology.org/2023.acl-industry.29/ | @inproceedings{khandelwal-etal-2023-large,
title = "Large Scale Generative Multimodal Attribute Extraction for {E}-commerce Attributes",
author = "Khandelwal, Anant and
Mittal, Happy and
Kulkarni, Shreyas and
Gupta, Deepak",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.29",
doi = "10.18653/v1/2023.acl-industry.29",
pages = "305--312",
abstract = "E-commerce websites (e.g. Amazon, Alibaba) have a plethora of structured and unstructured information (text and images) present on the product pages. Sellers often don{'}t label or mislabel values of the attributes (e.g. color, size etc.) for their products. Automatically identifying these attribute values from an eCommerce product page that contains both text and images is a challenging task, especially when the attribute value is not explicitly mentioned in the catalog. In this paper, we present a scalable solution for this problem where we pose attribute extraction problem as a question-answering task, which we solve using MXT, that consists of three key components: (i) MAG (Multimodal Adaptation Gate), (ii) Xception network, and (iii) T5 encoder-decoder. Our system consists of a generative model that generates attribute-values for a given product by using both textual and visual characteristics (e.g. images) of the product. We show that our system is capable of handling zero-shot attribute prediction (when attribute value is not seen in training data) and value-absent prediction (when attribute value is not mentioned in the text) which are missing in traditional classification-based and NER-based models respectively. We have trained our models using distant supervision, removing dependency on human labeling, thus making them practical for real-world applications. With this framework, we are able to train a single model for 1000s of (product-type, attribute) pairs, thus reducing the overhead of training and maintaining separate models. Extensive experiments on two real world datasets (total 57 attributes) show that our framework improves the absolute recall@90P by 10.16{\%} and 6.9 from the existing state of the art models. In a popular e-commerce store, we have productionized our models that cater to 12K (product-type, attribute) pairs, and have extracted 150MM attribute values.",
}
| E-commerce websites (e.g. Amazon, Alibaba) have a plethora of structured and unstructured information (text and images) present on the product pages. Sellers often don{'}t label or mislabel values of the attributes (e.g. color, size etc.) for their products. Automatically identifying these attribute values from an eCommerce product page that contains both text and images is a challenging task, especially when the attribute value is not explicitly mentioned in the catalog. In this paper, we present a scalable solution for this problem where we pose the attribute extraction problem as a question-answering task, which we solve using MXT, consisting of three key components: (i) MAG (Multimodal Adaptation Gate), (ii) Xception network, and (iii) T5 encoder-decoder. Our system consists of a generative model that generates attribute-values for a given product by using both textual and visual characteristics (e.g. images) of the product. We show that our system is capable of handling zero-shot attribute prediction (when the attribute value is not seen in training data) and value-absent prediction (when the attribute value is not mentioned in the text), which are missing in traditional classification-based and NER-based models respectively. We have trained our models using distant supervision, removing dependency on human labeling, thus making them practical for real-world applications. With this framework, we are able to train a single model for 1000s of (product-type, attribute) pairs, thus reducing the overhead of training and maintaining separate models. Extensive experiments on two real-world datasets (total 57 attributes) show that our framework improves the absolute recall@90P by 10.16{\%} and 6.9 from the existing state-of-the-art models. In a popular e-commerce store, we have productionized our models that cater to 12K (product-type, attribute) pairs, and have extracted 150MM attribute values. | [
"Kh",
"elwal, Anant",
"Mittal, Happy",
"Kulkarni, Shreyas",
"Gupta, Deepak"
] | Large Scale Generative Multimodal Attribute Extraction for E-commerce Attributes | acl-industry.29 | Poster | 2306.00379 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-industry.30.bib | https://aclanthology.org/2023.acl-industry.30/ | @inproceedings{avigdor-etal-2023-consistent,
title = "Consistent Text Categorization using Data Augmentation in e-Commerce",
author = "Avigdor, Noa and
Horowitz, Guy and
Raviv, Ariel and
Yanovsky Daye, Stav",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.30",
doi = "10.18653/v1/2023.acl-industry.30",
pages = "313--321",
abstract = "The categorization of massive e-Commerce data is a crucial, well-studied task, which is prevalent in industrial settings. In this work, we aim to improve an existing product categorization model that is already in use by a major web company, serving multiple applications. At its core, the product categorization model is a text classification model that takes a product title as an input and outputs the most suitable category out of thousands of available candidates. Upon a closer inspection, we found inconsistencies in the labeling of similar items. For example, minor modifications of the product title pertaining to colors or measurements majorly impacted the model{'}s output. This phenomenon can negatively affect downstream recommendation or search applications, leading to a sub-optimal user experience. To address this issue, we propose a new framework for consistent text categorization. Our goal is to improve the model{'}s consistency while maintaining its production-level performance. We use a semi-supervised approach for data augmentation and presents two different methods for utilizing unlabeled samples. One method relies directly on existing catalogs, while the other uses a generative model. We compare the pros and cons of each approach and present our experimental results.",
}
| The categorization of massive e-Commerce data is a crucial, well-studied task, which is prevalent in industrial settings. In this work, we aim to improve an existing product categorization model that is already in use by a major web company, serving multiple applications. At its core, the product categorization model is a text classification model that takes a product title as an input and outputs the most suitable category out of thousands of available candidates. Upon closer inspection, we found inconsistencies in the labeling of similar items. For example, minor modifications of the product title pertaining to colors or measurements majorly impacted the model{'}s output. This phenomenon can negatively affect downstream recommendation or search applications, leading to a sub-optimal user experience. To address this issue, we propose a new framework for consistent text categorization. Our goal is to improve the model{'}s consistency while maintaining its production-level performance. We use a semi-supervised approach for data augmentation and present two different methods for utilizing unlabeled samples. One method relies directly on existing catalogs, while the other uses a generative model. We compare the pros and cons of each approach and present our experimental results. | [
"Avigdor, Noa",
"Horowitz, Guy",
"Raviv, Ariel",
"Yanovsky Daye, Stav"
] | Consistent Text Categorization using Data Augmentation in e-Commerce | acl-industry.30 | Poster | 2305.05402 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-industry.31.bib | https://aclanthology.org/2023.acl-industry.31/ | @inproceedings{bhathena-etal-2023-efficient,
title = "An efficient method for Natural Language Querying on Structured Data",
author = "Bhathena, Hanoz and
Joshi, Aviral and
Singh, Prateek",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.31",
doi = "10.18653/v1/2023.acl-industry.31",
pages = "322--331",
abstract = "We present an efficient and reliable approach to Natural Language Querying (NLQ) on databases (DB) which is not based on text-to-SQL type semantic parsing. Our approach simplifies the NLQ on structured data problem to the following {``}bread and butter{''} NLP tasks: (a) Domain classification, for choosing which DB table to query, whether the question is out-of-scope (b) Multi-head slot/entity extraction (SE) to extract the field criteria and other attributes such as its role (filter, sort etc) from the raw text and (c) Slot value disambiguation (SVD) to resolve/normalize raw spans from SE to format suitable to query a DB. This is a general purpose, DB language agnostic approach and the output can be used to query any DB and return results to the user. Also each of these tasks is extremely well studied, mature, easier to collect data for and enables better error analysis by tracing problems to specific components when something goes wrong.",
}
| We present an efficient and reliable approach to Natural Language Querying (NLQ) on databases (DB) which is not based on text-to-SQL type semantic parsing. Our approach simplifies the NLQ on structured data problem to the following {``}bread and butter{''} NLP tasks: (a) Domain classification, for choosing which DB table to query, or whether the question is out-of-scope; (b) Multi-head slot/entity extraction (SE) to extract the field criteria and other attributes such as its role (filter, sort, etc.) from the raw text; and (c) Slot value disambiguation (SVD) to resolve/normalize raw spans from SE to a format suitable for querying a DB. This is a general-purpose, DB-language-agnostic approach, and the output can be used to query any DB and return results to the user. Also, each of these tasks is extremely well studied, mature, easier to collect data for, and enables better error analysis by tracing problems to specific components when something goes wrong. | [
"Bhathena, Hanoz",
"Joshi, Aviral",
"Singh, Prateek"
] | An efficient method for Natural Language Querying on Structured Data | acl-industry.31 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.32.bib | https://aclanthology.org/2023.acl-industry.32/ | @inproceedings{chen-etal-2023-boosting,
title = "Boosting Transformers and Language Models for Clinical Prediction in Immunotherapy",
author = "Chen, Zekai and
Micsinai Balan, Mariann and
Brown, Kevin",
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.32",
doi = "10.18653/v1/2023.acl-industry.32",
pages = "332--340",
abstract = "Clinical prediction is an essential task in the healthcare industry. However, the recent success of transformers, on which large language models are built, has not been extended to this domain. In this research, we explore the use of transformers and language models in prognostic prediction for immunotherapy using real-world patients{'} clinical data and molecular profiles. This paper investigates the potential of transformers to improve clinical prediction compared to conventional machine learning approaches and addresses the challenge of few-shot learning in predicting rare disease areas. The study benchmarks the efficacy of baselines and language models on prognostic prediction across multiple cancer types and investigates the impact of different pretrained language models under few-shot regimes. The results demonstrate significant improvements in accuracy and highlight the potential of NLP in clinical research to improve early detection and intervention for different diseases.",
}
| Clinical prediction is an essential task in the healthcare industry. However, the recent success of transformers, on which large language models are built, has not been extended to this domain. In this research, we explore the use of transformers and language models in prognostic prediction for immunotherapy using real-world patients{'} clinical data and molecular profiles. This paper investigates the potential of transformers to improve clinical prediction compared to conventional machine learning approaches and addresses the challenge of few-shot learning in predicting rare disease areas. The study benchmarks the efficacy of baselines and language models on prognostic prediction across multiple cancer types and investigates the impact of different pretrained language models under few-shot regimes. The results demonstrate significant improvements in accuracy and highlight the potential of NLP in clinical research to improve early detection and intervention for different diseases. | [
"Chen, Zekai",
"Micsinai Balan, Mariann",
"Brown, Kevin"
] | Boosting Transformers and Language Models for Clinical Prediction in Immunotherapy | acl-industry.32 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-industry.33.bib | https://aclanthology.org/2023.acl-industry.33/ | @inproceedings{yuksel-etal-2023-evolvemt,
title = "{E}volve{MT}: an Ensemble {MT} Engine Improving Itself with Usage Only",
author = {Y{\"u}ksel, Kamer and
Gunduz, Ahmet and
Al-badrashiny, Mohamed and
Sawaf, Hassan},
editor = "Sitaram, Sunayana and
Beigman Klebanov, Beata and
Williams, Jason D",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-industry.33",
doi = "10.18653/v1/2023.acl-industry.33",
pages = "341--346",
abstract = "This work proposes a method named EvolveMT for the efficient combination of multiple machine translation (MT) engines. The method selects the output from one engine for each segment, using online learning techniques to predict the most appropriate system for each translation request. A neural quality estimation metric supervises the method without requiring reference translations. The method{'}s online learning capability enables it to adapt to changes in the domain or MT engines dynamically, eliminating the requirement for retraining. The method selects a subset of translation engines to be called based on the source sentence features. The degree of exploration is configurable according to the desired quality-cost trade-off. Results from custom datasets demonstrate that EvolveMT achieves similar translation accuracy at a lower cost than selecting the best translation of each segment from all translations using an MT quality estimator. To the best of our knowledge, EvolveMT is the first MT system that adapts itself after deployment to incoming translation requests from the production environment without needing costly retraining on human feedback.",
}
| This work proposes a method named EvolveMT for the efficient combination of multiple machine translation (MT) engines. The method selects the output from one engine for each segment, using online learning techniques to predict the most appropriate system for each translation request. A neural quality estimation metric supervises the method without requiring reference translations. The method{'}s online learning capability enables it to adapt to changes in the domain or MT engines dynamically, eliminating the requirement for retraining. The method selects a subset of translation engines to be called based on the source sentence features. The degree of exploration is configurable according to the desired quality-cost trade-off. Results from custom datasets demonstrate that EvolveMT achieves similar translation accuracy at a lower cost than selecting the best translation of each segment from all translations using an MT quality estimator. To the best of our knowledge, EvolveMT is the first MT system that adapts itself after deployment to incoming translation requests from the production environment without needing costly retraining on human feedback. | [
"Y{\\\"u}ksel, Kamer",
"Gunduz, Ahmet",
"Al-badrashiny, Mohamed",
"Sawaf, Hassan"
] | EvolveMT: an Ensemble MT Engine Improving Itself with Usage Only | acl-industry.33 | Poster | 2306.11823 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |