Datasets:

Schema (field name, type, observed min/max across records):

bibtex_url                  string, length 41–50
proceedings                 string, length 38–47
bibtext                     string, length 709–3.56k
abstract                    string, length 17–2.11k
authors                     sequence, length 1–72
title                       string, length 12–207
id                          string, length 7–16
type                        string, 2 classes
arxiv_id                    string, length 0–10
GitHub                      sequence, length 1–1
paper_page                  string, 276 classes
n_linked_authors            int64, -1 to 13
upvotes                     int64, -1 to 14
num_comments                int64, -1 to 11
n_authors                   int64, -1 to 44
paper_page_exists_pre_conf  int64, 0 to 1
Models                      sequence, length 0–100
Datasets                    sequence, length 0–14
Spaces                      sequence, length 0–100
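The rows below carry one flat BibTeX entry per record. As a minimal sketch of how such an entry could be split back into fields (the helper name and regex approach are assumptions for illustration, not part of the dataset; only double-quoted values are captured, so bare values like `month = jul` are skipped):

```python
import re

def parse_bibtex_fields(entry: str) -> dict:
    """Extract `field = "value"` pairs from a single-line BibTeX entry.

    Brace escapes such as {GPT} are kept verbatim inside the value.
    """
    return {
        m.group(1): m.group(2)
        for m in re.finditer(r'(\w+)\s*=\s*"([^"]*)"', entry)
    }

# Hypothetical shortened entry in the same shape as the rows below.
sample = (
    '@inproceedings{raunak-etal-2023-gpts, '
    'title = "Do {GPT}s Produce Less Literal Translations?", '
    'year = "2023", pages = "1041--1050", }'
)
fields = parse_bibtex_fields(sample)
```

This recovers `title`, `year`, and `pages` as a plain dict, which is usually enough to join a row's `bibtext` field against its other columns.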
https://aclanthology.org/2023.acl-short.90.bib
https://aclanthology.org/2023.acl-short.90/
@inproceedings{raunak-etal-2023-gpts, title = "Do {GPT}s Produce Less Literal Translations?", author = "Raunak, Vikas and Menezes, Arul and Post, Matt and Hassan, Hany", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.90", doi = "10.18653/v1/2023.acl-short.90", pages = "1041--1050", abstract = "Large Language Models (LLMs) such as GPT-3 have emerged as general-purpose language models capable of addressing many natural language generation or understanding tasks. On the task of Machine Translation (MT), multiple works have investigated few-shot prompting mechanisms to elicit better translations from LLMs. However, there has been relatively little investigation on how such translations differ qualitatively from the translations generated by standard Neural Machine Translation (NMT) models. In this work, we investigate these differences in terms of the literalness of translations produced by the two systems. Using literalness measures involving word alignment and monotonicity, we find that translations out of English (E-X) from GPTs tend to be less literal, while exhibiting similar or better scores on MT quality metrics. We demonstrate that this finding is borne out in human evaluations as well. We then show that these differences are especially pronounced when translating sentences that contain idiomatic expressions.", }
Large Language Models (LLMs) such as GPT-3 have emerged as general-purpose language models capable of addressing many natural language generation or understanding tasks. On the task of Machine Translation (MT), multiple works have investigated few-shot prompting mechanisms to elicit better translations from LLMs. However, there has been relatively little investigation on how such translations differ qualitatively from the translations generated by standard Neural Machine Translation (NMT) models. In this work, we investigate these differences in terms of the literalness of translations produced by the two systems. Using literalness measures involving word alignment and monotonicity, we find that translations out of English (E-X) from GPTs tend to be less literal, while exhibiting similar or better scores on MT quality metrics. We demonstrate that this finding is borne out in human evaluations as well. We then show that these differences are especially pronounced when translating sentences that contain idiomatic expressions.
[ "Raunak, Vikas", "Menezes, Arul", "Post, Matt", "Hassan, Hany" ]
Do GPTs Produce Less Literal Translations?
acl-short.90
Poster
2305.16806
[ "https://github.com/vyraun/literalness" ]
https://huggingface.co/papers/2305.16806
2
1
0
4
1
[]
[]
[]
https://aclanthology.org/2023.acl-short.91.bib
https://aclanthology.org/2023.acl-short.91/
@inproceedings{stammbach-etal-2023-environmental, title = "Environmental Claim Detection", author = "Stammbach, Dominik and Webersinke, Nicolas and Bingler, Julia and Kraus, Mathias and Leippold, Markus", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.91", doi = "10.18653/v1/2023.acl-short.91", pages = "1051--1066", abstract = "To transition to a green economy, environmental claims made by companies must be reliable, comparable, and verifiable. To analyze such claims at scale, automated methods are needed to detect them in the first place. However, there exist no datasets or models for this. Thus, this paper introduces the task of environmental claim detection. To accompany the task, we release an expert-annotated dataset and models trained on this dataset. We preview one potential application of such models: We detect environmental claims made in quarterly earning calls and find that the number of environmental claims has steadily increased since the Paris Agreement in 2015.", }
To transition to a green economy, environmental claims made by companies must be reliable, comparable, and verifiable. To analyze such claims at scale, automated methods are needed to detect them in the first place. However, there exist no datasets or models for this. Thus, this paper introduces the task of environmental claim detection. To accompany the task, we release an expert-annotated dataset and models trained on this dataset. We preview one potential application of such models: We detect environmental claims made in quarterly earning calls and find that the number of environmental claims has steadily increased since the Paris Agreement in 2015.
[ "Stammbach, Dominik", "Webersinke, Nicolas", "Bingler, Julia", "Kraus, Mathias", "Leippold, Markus" ]
Environmental Claim Detection
acl-short.91
Poster
2209.00507
[ "https://github.com/dominiksinsaarland/environmental_claims" ]
https://huggingface.co/papers/2209.00507
1
0
0
5
1
[ "climatebert/environmental-claims" ]
[ "climatebert/environmental_claims" ]
[]
https://aclanthology.org/2023.acl-short.92.bib
https://aclanthology.org/2023.acl-short.92/
@inproceedings{cifka-liutkus-2023-black, title = "Black-box language model explanation by context length probing", author = "C{\'\i}fka, Ond{\v{r}}ej and Liutkus, Antoine", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.92", doi = "10.18653/v1/2023.acl-short.92", pages = "1067--1079", abstract = "The increasingly widespread adoption of large language models has highlighted the need for improving their explainability. We present *context length probing*, a novel explanation technique for causal language models, based on tracking the predictions of a model as a function of the length of available context, and allowing to assign *differential importance scores* to different contexts. The technique is model-agnostic and does not rely on access to model internals beyond computing token-level probabilities. We apply context length probing to large pre-trained language models and offer some initial analyses and insights, including the potential for studying long-range dependencies. The [source code](\url{https://github.com/cifkao/context-probing/}) and an [interactive demo](\url{https://cifkao.github.io/context-probing/}) of the method are available.", }
The increasingly widespread adoption of large language models has highlighted the need for improving their explainability. We present *context length probing*, a novel explanation technique for causal language models, based on tracking the predictions of a model as a function of the length of available context, and allowing to assign *differential importance scores* to different contexts. The technique is model-agnostic and does not rely on access to model internals beyond computing token-level probabilities. We apply context length probing to large pre-trained language models and offer some initial analyses and insights, including the potential for studying long-range dependencies. The [source code](\url{https://github.com/cifkao/context-probing/}) and an [interactive demo](\url{https://cifkao.github.io/context-probing/}) of the method are available.
[ "C{\\'\\i}fka, Ond{\\v{r}}ej", "Liutkus, Antoine" ]
Black-box language model explanation by context length probing
acl-short.92
Poster
2212.14815
[ "https://github.com/cifkao/context-probing" ]
https://huggingface.co/papers/2212.14815
0
0
0
2
1
[]
[]
[ "cifkao/context-probing" ]
https://aclanthology.org/2023.acl-short.93.bib
https://aclanthology.org/2023.acl-short.93/
@inproceedings{wang-etal-2023-check, title = "Let Me Check the Examples: Enhancing Demonstration Learning via Explicit Imitation", author = "Wang, Sirui and Wei, Kaiwen and Zhang, Hongzhi and Li, Yuntao and Wu, Wei", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.93", doi = "10.18653/v1/2023.acl-short.93", pages = "1080--1088", abstract = "Demonstration learning aims to guide the prompt prediction by providing answered demonstrations in the few shot settings. Despite achieving promising results, existing work only concatenates the answered examples as demonstrations to the prompt template (including the raw context) without any additional operation, neglecting the prompt-demonstration dependencies. Besides, prior research found that randomly replacing the labels of demonstrations marginally hurts performance, illustrating that the model could not properly learn the knowledge brought by the demonstrations. Inspired by the human learning process, in this paper, we introduce Imitation DEMOnstration learning (Imitation-Demo) to strengthen demonstration learning via explicitly imitating human review behaviour, which includes: (1) contrastive learning mechanism to concentrate on similar demonstrations.(2) demonstration-label re-prediction method to consolidate known knowledge. Experiment results show that our proposed method achieves state-of-the-art performance on 5 out of 14 classification corpus. Further studies also prove that Imitation-Demo strengthens the associations between the prompt and demonstrations, which could provide the basis for exploring how demonstration learning works.", }
Demonstration learning aims to guide the prompt prediction by providing answered demonstrations in the few shot settings. Despite achieving promising results, existing work only concatenates the answered examples as demonstrations to the prompt template (including the raw context) without any additional operation, neglecting the prompt-demonstration dependencies. Besides, prior research found that randomly replacing the labels of demonstrations marginally hurts performance, illustrating that the model could not properly learn the knowledge brought by the demonstrations. Inspired by the human learning process, in this paper, we introduce Imitation DEMOnstration learning (Imitation-Demo) to strengthen demonstration learning via explicitly imitating human review behaviour, which includes: (1) contrastive learning mechanism to concentrate on similar demonstrations.(2) demonstration-label re-prediction method to consolidate known knowledge. Experiment results show that our proposed method achieves state-of-the-art performance on 5 out of 14 classification corpus. Further studies also prove that Imitation-Demo strengthens the associations between the prompt and demonstrations, which could provide the basis for exploring how demonstration learning works.
[ "Wang, Sirui", "Wei, Kaiwen", "Zhang, Hongzhi", "Li, Yuntao", "Wu, Wei" ]
Let Me Check the Examples: Enhancing Demonstration Learning via Explicit Imitation
acl-short.93
Poster
2209.00455
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.94.bib
https://aclanthology.org/2023.acl-short.94/
@inproceedings{rei-etal-2023-inside, title = "The Inside Story: Towards Better Understanding of Machine Translation Neural Evaluation Metrics", author = "Rei, Ricardo and Guerreiro, Nuno M. and Treviso, Marcos and Coheur, Luisa and Lavie, Alon and Martins, Andr{\'e}", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.94", doi = "10.18653/v1/2023.acl-short.94", pages = "1089--1105", abstract = "Neural metrics for machine translation evaluation, such as COMET, exhibit significant improvements in their correlation with human judgments, as compared to traditional metrics based on lexical overlap, such as BLEU. Yet, neural metrics are, to a great extent, {``}black boxes{''} returning a single sentence-level score without transparency about the decision-making process. In this work, we develop and compare several neural explainability methods and demonstrate their effectiveness for interpreting state-of-the-art fine-tuned neural metrics. Our study reveals that these metrics leverage token-level information that can be directly attributed to translation errors, as assessed through comparison of token-level neural saliency maps with Multidimensional Quality Metrics (MQM) annotations and with synthetically-generated critical translation errors. To ease future research, we release our code at: \url{https://github.com/Unbabel/COMET/tree/explainable-metrics}", }
Neural metrics for machine translation evaluation, such as COMET, exhibit significant improvements in their correlation with human judgments, as compared to traditional metrics based on lexical overlap, such as BLEU. Yet, neural metrics are, to a great extent, {``}black boxes{''} returning a single sentence-level score without transparency about the decision-making process. In this work, we develop and compare several neural explainability methods and demonstrate their effectiveness for interpreting state-of-the-art fine-tuned neural metrics. Our study reveals that these metrics leverage token-level information that can be directly attributed to translation errors, as assessed through comparison of token-level neural saliency maps with Multidimensional Quality Metrics (MQM) annotations and with synthetically-generated critical translation errors. To ease future research, we release our code at: \url{https://github.com/Unbabel/COMET/tree/explainable-metrics}
[ "Rei, Ricardo", "Guerreiro, Nuno M.", "Treviso, Marcos", "Coheur, Luisa", "Lavie, Alon", "Martins, Andr{\\'e}" ]
The Inside Story: Towards Better Understanding of Machine Translation Neural Evaluation Metrics
acl-short.94
Poster
2305.11806
[ "https://github.com/Unbabel/COMET" ]
https://huggingface.co/papers/2305.11806
1
0
0
6
1
[ "Unbabel/wmt22-unite-da" ]
[]
[]
https://aclanthology.org/2023.acl-short.95.bib
https://aclanthology.org/2023.acl-short.95/
@inproceedings{tasawong-etal-2023-typo, title = "Typo-Robust Representation Learning for Dense Retrieval", author = "Tasawong, Panuthep and Ponwitayarat, Wuttikorn and Limkonchotiwat, Peerat and Udomcharoenchaikit, Can and Chuangsuwanich, Ekapol and Nutanong, Sarana", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.95", doi = "10.18653/v1/2023.acl-short.95", pages = "1106--1115", abstract = "Dense retrieval is a basic building block of information retrieval applications. One of the main challenges of dense retrieval in real-world settings is the handling of queries containing misspelled words. A popular approach for handling misspelled queries is minimizing the representations discrepancy between misspelled queries and their pristine ones. Unlike the existing approaches, which only focus on the alignment between misspelled and pristine queries, our method also improves the contrast between each misspelled query and its surrounding queries. To assess the effectiveness of our proposed method, we compare it against the existing competitors using two benchmark datasets and two base encoders. Our method outperforms the competitors in all cases with misspelled queries. Our code and models are available at \url{https://github.com/panuthept/DST-DenseRetrieval}.", }
Dense retrieval is a basic building block of information retrieval applications. One of the main challenges of dense retrieval in real-world settings is the handling of queries containing misspelled words. A popular approach for handling misspelled queries is minimizing the representations discrepancy between misspelled queries and their pristine ones. Unlike the existing approaches, which only focus on the alignment between misspelled and pristine queries, our method also improves the contrast between each misspelled query and its surrounding queries. To assess the effectiveness of our proposed method, we compare it against the existing competitors using two benchmark datasets and two base encoders. Our method outperforms the competitors in all cases with misspelled queries. Our code and models are available at \url{https://github.com/panuthept/DST-DenseRetrieval}.
[ "Tasawong, Panuthep", "Ponwitayarat, Wuttikorn", "Limkonchotiwat, Peerat", "Udomcharoenchaikit, Can", "Chuangsuwanich, Ekapol", "Nutanong, Sarana" ]
Typo-Robust Representation Learning for Dense Retrieval
acl-short.95
Poster
2306.10348
[ "https://github.com/panuthept/dst-denseretrieval" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.96.bib
https://aclanthology.org/2023.acl-short.96/
@inproceedings{ma-etal-2023-focused, title = "Focused Prefix Tuning for Controllable Text Generation", author = "Ma, Congda and Zhao, Tianyu and Shing, Makoto and Sawada, Kei and Okumura, Manabu", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.96", doi = "10.18653/v1/2023.acl-short.96", pages = "1116--1127", abstract = "In a controllable text generation dataset, there exist unannotated attributes that could provide irrelevant learning signals to models that use it for training and thus degrade their performance. We propose focused prefix tuning (FPT) to mitigate the problem and to enable the control to focus on the desired attribute. Experimental results show that FPT can achieve better control accuracy and text fluency than baseline models in single-attribute control tasks. In multi-attribute control tasks, FPT achieves comparable control accuracy with the state-of-the-art approach while keeping the flexibility to control new attributes without retraining existing models.", }
In a controllable text generation dataset, there exist unannotated attributes that could provide irrelevant learning signals to models that use it for training and thus degrade their performance. We propose focused prefix tuning (FPT) to mitigate the problem and to enable the control to focus on the desired attribute. Experimental results show that FPT can achieve better control accuracy and text fluency than baseline models in single-attribute control tasks. In multi-attribute control tasks, FPT achieves comparable control accuracy with the state-of-the-art approach while keeping the flexibility to control new attributes without retraining existing models.
[ "Ma, Congda", "Zhao, Tianyu", "Shing, Makoto", "Sawada, Kei", "Okumura, Manabu" ]
Focused Prefix Tuning for Controllable Text Generation
acl-short.96
Poster
2306.00369
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.97.bib
https://aclanthology.org/2023.acl-short.97/
@inproceedings{zhang-etal-2023-reaugkd, title = "{R}e{A}ug{KD}: Retrieval-Augmented Knowledge Distillation For Pre-trained Language Models", author = "Zhang, Jianyi and Muhamed, Aashiq and Anantharaman, Aditya and Wang, Guoyin and Chen, Changyou and Zhong, Kai and Cui, Qingjun and Xu, Yi and Zeng, Belinda and Chilimbi, Trishul and Chen, Yiran", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.97", doi = "10.18653/v1/2023.acl-short.97", pages = "1128--1136", abstract = "Knowledge Distillation (KD) is one of the most effective approaches to deploying large-scale pre-trained language models in low-latency environments by transferring the knowledge contained in the large-scale models to smaller student models. Prior KD approaches use the soft labels and intermediate activations generated by the teacher to transfer knowledge to the student model parameters alone. In this paper, we show that having access to non-parametric memory in the form of a knowledge base with the teacher{'}s soft labels and predictions can further improve student generalization. To enable the student to retrieve from the knowledge base effectively, we propose a new framework and loss function that preserves the semantic similarities of teacher and student training examples. We show through extensive experiments that our retrieval mechanism can achieve state-of-the-art performance for task-specific knowledge distillation on the GLUE benchmark.", }
Knowledge Distillation (KD) is one of the most effective approaches to deploying large-scale pre-trained language models in low-latency environments by transferring the knowledge contained in the large-scale models to smaller student models. Prior KD approaches use the soft labels and intermediate activations generated by the teacher to transfer knowledge to the student model parameters alone. In this paper, we show that having access to non-parametric memory in the form of a knowledge base with the teacher{'}s soft labels and predictions can further improve student generalization. To enable the student to retrieve from the knowledge base effectively, we propose a new framework and loss function that preserves the semantic similarities of teacher and student training examples. We show through extensive experiments that our retrieval mechanism can achieve state-of-the-art performance for task-specific knowledge distillation on the GLUE benchmark.
[ "Zhang, Jianyi", "Muhamed, Aashiq", "Anantharaman, Aditya", "Wang, Guoyin", "Chen, Changyou", "Zhong, Kai", "Cui, Qingjun", "Xu, Yi", "Zeng, Belinda", "Chilimbi, Trishul", "Chen, Yiran" ]
ReAugKD: Retrieval-Augmented Knowledge Distillation For Pre-trained Language Models
acl-short.97
Oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.98.bib
https://aclanthology.org/2023.acl-short.98/
@inproceedings{xia-etal-2023-debiasing, title = "Debiasing Generative Named Entity Recognition by Calibrating Sequence Likelihood", author = "Xia, Yu and Zhao, Yongwei and Wu, Wenhao and Li, Sujian", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.98", doi = "10.18653/v1/2023.acl-short.98", pages = "1137--1148", abstract = "Recognizing flat, overlapped and discontinuous entities uniformly has been paid increasing attention. Among these works, Seq2Seq formulation prevails for its flexibility and effectiveness. It arranges the output entities into a specific target sequence. However, it introduces bias by assigning all the probability mass to the observed sequence. To alleviate the bias, previous works either augment the data with possible sequences or resort to other formulations. In this paper, we stick to the Seq2Seq formulation and propose a reranking-based approach. It redistributes the likelihood among candidate sequences depending on their performance via a contrastive loss. Extensive experiments show that our simple yet effective method consistently boosts the baseline, and yields competitive or better results compared with the state-of-the-art methods on 8 widely-used datasets for Named Entity Recognition.", }
Recognizing flat, overlapped and discontinuous entities uniformly has been paid increasing attention. Among these works, Seq2Seq formulation prevails for its flexibility and effectiveness. It arranges the output entities into a specific target sequence. However, it introduces bias by assigning all the probability mass to the observed sequence. To alleviate the bias, previous works either augment the data with possible sequences or resort to other formulations. In this paper, we stick to the Seq2Seq formulation and propose a reranking-based approach. It redistributes the likelihood among candidate sequences depending on their performance via a contrastive loss. Extensive experiments show that our simple yet effective method consistently boosts the baseline, and yields competitive or better results compared with the state-of-the-art methods on 8 widely-used datasets for Named Entity Recognition.
[ "Xia, Yu", "Zhao, Yongwei", "Wu, Wenhao", "Li, Sujian" ]
Debiasing Generative Named Entity Recognition by Calibrating Sequence Likelihood
acl-short.98
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.99.bib
https://aclanthology.org/2023.acl-short.99/
@inproceedings{torroba-hennigen-kim-2023-deriving, title = "Deriving Language Models from Masked Language Models", author = "Torroba Hennigen, Lucas and Kim, Yoon", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.99", doi = "10.18653/v1/2023.acl-short.99", pages = "1149--1159", abstract = "Masked language models (MLM) do not explicitly define a distribution over language, i.e., they are not language models per se. However, recent work has implicitly treated them as such for the purposes of generation and scoring. This paper studies methods for deriving explicit joint distributions from MLMs, focusing on distributions over two tokens, which makes it possible to calculate exact distributional properties. We find that an approach based on identifying joints whose conditionals are closest to those of the MLM works well and outperforms existing Markov random field-based approaches. We further find that this derived model{'}s conditionals can even occasionally outperform the original MLM{'}s conditionals.", }
Masked language models (MLM) do not explicitly define a distribution over language, i.e., they are not language models per se. However, recent work has implicitly treated them as such for the purposes of generation and scoring. This paper studies methods for deriving explicit joint distributions from MLMs, focusing on distributions over two tokens, which makes it possible to calculate exact distributional properties. We find that an approach based on identifying joints whose conditionals are closest to those of the MLM works well and outperforms existing Markov random field-based approaches. We further find that this derived model{'}s conditionals can even occasionally outperform the original MLM{'}s conditionals.
[ "Torroba Hennigen, Lucas", "Kim, Yoon" ]
Deriving Language Models from Masked Language Models
acl-short.99
Poster
2305.15501
[ "https://github.com/ltorroba/lms-from-mlms" ]
https://huggingface.co/papers/2305.15501
2
0
0
2
1
[]
[]
[]
https://aclanthology.org/2023.acl-short.100.bib
https://aclanthology.org/2023.acl-short.100/
@inproceedings{mao-etal-2023-unitrec, title = "{U}ni{TR}ec: A Unified Text-to-Text Transformer and Joint Contrastive Learning Framework for Text-based Recommendation", author = "Mao, Zhiming and Wang, Huimin and Du, Yiming and Wong, Kam-Fai", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.100", doi = "10.18653/v1/2023.acl-short.100", pages = "1160--1170", abstract = "Prior study has shown that pretrained language models (PLM) can boost the performance of text-based recommendation. In contrast to previous works that either use PLM to encode user history as a whole input text, or impose an additional aggregation network to fuse multi-turn history representations, we propose a unified local- and global-attention Transformer encoder to better model two-level contexts of user history. Moreover, conditioned on user history encoded by Transformer encoders, our framework leverages Transformer decoders to estimate the language perplexity of candidate text items, which can serve as a straightforward yet significant contrastive signal for user-item text matching. Based on this, our framework, UniTRec, unifies the contrastive objectives of discriminative matching scores and candidate text perplexity to jointly enhance text-based recommendation. Extensive evaluation shows that UniTRec delivers SOTA performance on three text-based recommendation tasks.", }
Prior study has shown that pretrained language models (PLM) can boost the performance of text-based recommendation. In contrast to previous works that either use PLM to encode user history as a whole input text, or impose an additional aggregation network to fuse multi-turn history representations, we propose a unified local- and global-attention Transformer encoder to better model two-level contexts of user history. Moreover, conditioned on user history encoded by Transformer encoders, our framework leverages Transformer decoders to estimate the language perplexity of candidate text items, which can serve as a straightforward yet significant contrastive signal for user-item text matching. Based on this, our framework, UniTRec, unifies the contrastive objectives of discriminative matching scores and candidate text perplexity to jointly enhance text-based recommendation. Extensive evaluation shows that UniTRec delivers SOTA performance on three text-based recommendation tasks.
[ "Mao, Zhiming", "Wang, Huimin", "Du, Yiming", "Wong, Kam-Fai" ]
UniTRec: A Unified Text-to-Text Transformer and Joint Contrastive Learning Framework for Text-based Recommendation
acl-short.100
Poster
2305.15756
[ "https://github.com/veason-silverbullet/unitrec" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.101.bib
https://aclanthology.org/2023.acl-short.101/
@inproceedings{fei-etal-2023-reasoning, title = "Reasoning Implicit Sentiment with Chain-of-Thought Prompting", author = "Fei, Hao and Li, Bobo and Liu, Qian and Bing, Lidong and Li, Fei and Chua, Tat-Seng", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.101", doi = "10.18653/v1/2023.acl-short.101", pages = "1171--1182", abstract = "While sentiment analysis systems try to determine the sentiment polarities of given targets based on the key opinion expressions in input texts, in implicit sentiment analysis (ISA) the opinion cues come in an implicit and obscure manner. Thus detecting implicit sentiment requires the common-sense and multi-hop reasoning ability to infer the latent intent of opinion. Inspired by the recent chain-of-thought (CoT) idea, in this work we introduce a Three-hop Reasoning (THOR) CoT framework to mimic the human-like reasoning process for ISA. We design a three-step prompting principle for THOR to step-by-step induce the implicit aspect, opinion, and finally the sentiment polarity. Our THOR+Flan-T5 (11B) pushes the state-of-the-art (SoTA) by over 6{\%} F1 on supervised setup. More strikingly, THOR+GPT3 (175B) boosts the SoTA by over 50{\%} F1 on zero-shot setting.", }
While sentiment analysis systems try to determine the sentiment polarities of given targets based on the key opinion expressions in input texts, in implicit sentiment analysis (ISA) the opinion cues come in an implicit and obscure manner. Thus detecting implicit sentiment requires the common-sense and multi-hop reasoning ability to infer the latent intent of opinion. Inspired by the recent chain-of-thought (CoT) idea, in this work we introduce a Three-hop Reasoning (THOR) CoT framework to mimic the human-like reasoning process for ISA. We design a three-step prompting principle for THOR to step-by-step induce the implicit aspect, opinion, and finally the sentiment polarity. Our THOR+Flan-T5 (11B) pushes the state-of-the-art (SoTA) by over 6{\%} F1 on supervised setup. More strikingly, THOR+GPT3 (175B) boosts the SoTA by over 50{\%} F1 on zero-shot setting.
[ "Fei, Hao", "Li, Bobo", "Liu, Qian", "Bing, Lidong", "Li, Fei", "Chua, Tat-Seng" ]
Reasoning Implicit Sentiment with Chain-of-Thought Prompting
acl-short.101
Poster
2305.11255
[ "https://github.com/scofield7419/thor-isa" ]
https://huggingface.co/papers/2305.11255
0
0
0
6
1
[ "nicolay-r/flan-t5-tsa-thor-xl", "nicolay-r/flan-t5-tsa-thor-base", "nicolay-r/flan-t5-tsa-thor-large" ]
[]
[]
https://aclanthology.org/2023.acl-short.102.bib
https://aclanthology.org/2023.acl-short.102/
@inproceedings{chi-etal-2023-latent, title = "Latent Positional Information is in the Self-Attention Variance of Transformer Language Models Without Positional Embeddings", author = "Chi, Ta-Chung and Fan, Ting-Han and Chen, Li-Wei and Rudnicky, Alexander and Ramadge, Peter", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.102", doi = "10.18653/v1/2023.acl-short.102", pages = "1183--1193", abstract = "The use of positional embeddings in transformer language models is widely accepted. However, recent research has called into question the necessity of such embeddings. We further extend this inquiry by demonstrating that a randomly initialized and frozen transformer language model, devoid of positional embeddings, inherently encodes strong positional information through the shrinkage of self-attention variance. To quantify this variance, we derive the underlying distribution of each step within a transformer layer. Through empirical validation using a fully pretrained model, we show that the variance shrinkage effect still persists after extensive gradient updates. Our findings serve to justify the decision to discard positional embeddings and thus facilitate more efficient pretraining of transformer language models.", }
The use of positional embeddings in transformer language models is widely accepted. However, recent research has called into question the necessity of such embeddings. We further extend this inquiry by demonstrating that a randomly initialized and frozen transformer language model, devoid of positional embeddings, inherently encodes strong positional information through the shrinkage of self-attention variance. To quantify this variance, we derive the underlying distribution of each step within a transformer layer. Through empirical validation using a fully pretrained model, we show that the variance shrinkage effect still persists after extensive gradient updates. Our findings serve to justify the decision to discard positional embeddings and thus facilitate more efficient pretraining of transformer language models.
[ "Chi, Ta-Chung", "Fan, Ting-Han", "Chen, Li-Wei", "Rudnicky, Alex", "er", "Ramadge, Peter" ]
Latent Positional Information is in the Self-Attention Variance of Transformer Language Models Without Positional Embeddings
acl-short.102
Poster
2305.13571
[ "" ]
https://huggingface.co/papers/2305.13571
0
2
0
5
1
[]
[]
[]
https://aclanthology.org/2023.acl-short.103.bib
https://aclanthology.org/2023.acl-short.103/
@inproceedings{ait-saada-nadif-2023-anisotropy, title = "Is Anisotropy Truly Harmful? A Case Study on Text Clustering", author = "Ait-Saada, Mira and Nadif, Mohamed", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.103", doi = "10.18653/v1/2023.acl-short.103", pages = "1194--1203", abstract = "In the last few years, several studies have been devoted to dissecting dense text representations in order to understand their effectiveness and further improve their quality. Particularly, the anisotropy of such representations has been observed, which means that the directions of the word vectors are not evenly distributed across the space but rather concentrated in a narrow cone. This has led to several attempts to counteract this phenomenon both on static and contextualized text representations. However, despite this effort, there is no established relationship between anisotropy and performance. In this paper, we aim to bridge this gap by investigating the impact of different transformations on both the isotropy and the performance in order to assess the true impact of anisotropy. To this end, we rely on the clustering task as a means of evaluating the ability of text representations to produce meaningful groups. Thereby, we empirically show a limited impact of anisotropy on the expressiveness of sentence representations both in terms of directions and L2 closeness.", }
In the last few years, several studies have been devoted to dissecting dense text representations in order to understand their effectiveness and further improve their quality. Particularly, the anisotropy of such representations has been observed, which means that the directions of the word vectors are not evenly distributed across the space but rather concentrated in a narrow cone. This has led to several attempts to counteract this phenomenon both on static and contextualized text representations. However, despite this effort, there is no established relationship between anisotropy and performance. In this paper, we aim to bridge this gap by investigating the impact of different transformations on both the isotropy and the performance in order to assess the true impact of anisotropy. To this end, we rely on the clustering task as a means of evaluating the ability of text representations to produce meaningful groups. Thereby, we empirically show a limited impact of anisotropy on the expressiveness of sentence representations both in terms of directions and L2 closeness.
[ "Ait-Saada, Mira", "Nadif, Mohamed" ]
Is Anisotropy Truly Harmful? A Case Study on Text Clustering
acl-short.103
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.104.bib
https://aclanthology.org/2023.acl-short.104/
@inproceedings{nguyen-duc-etal-2023-class, title = "Class based Influence Functions for Error Detection", author = "Nguyen-Duc, Thang and Thanh-Tung, Hoang and Tran, Quan Hung and Huu-Tien, Dang and Nguyen, Hieu and T. V. Dau, Anh and Bui, Nghi", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.104", doi = "10.18653/v1/2023.acl-short.104", pages = "1204--1218", abstract = "Influence functions (IFs) are a powerful tool for detecting anomalous examples in large scale datasets. However, they are unstable when applied to deep networks. In this paper, we provide an explanation for the instability of IFs and develop a solution to this problem. We show that IFs are unreliable when the two data points belong to two different classes. Our solution leverages class information to improve the stability of IFs. Extensive experiments show that our modification significantly improves the performance and stability of IFs while incurring no additional computational cost.", }
Influence functions (IFs) are a powerful tool for detecting anomalous examples in large scale datasets. However, they are unstable when applied to deep networks. In this paper, we provide an explanation for the instability of IFs and develop a solution to this problem. We show that IFs are unreliable when the two data points belong to two different classes. Our solution leverages class information to improve the stability of IFs. Extensive experiments show that our modification significantly improves the performance and stability of IFs while incurring no additional computational cost.
[ "Nguyen-Duc, Thang", "Thanh-Tung, Hoang", "Tran, Quan Hung", "Huu-Tien, Dang", "Nguyen, Hieu", "T. V. Dau, Anh", "Bui, Nghi" ]
Class based Influence Functions for Error Detection
acl-short.104
Poster
2305.01384
[ "https://github.com/Fsoft-AIC/Class-Based-Influence-Functions" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.105.bib
https://aclanthology.org/2023.acl-short.105/
@inproceedings{chong-etal-2023-leveraging, title = "Leveraging Prefix Transfer for Multi-Intent Text Revision", author = "Chong, Ruining and Kong, Cunliang and Wu, Liu and Liu, Zhenghao and Jin, Ziye and Yang, Liner and Fan, Yange and Fan, Hanghang and Yang, Erhong", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.105", doi = "10.18653/v1/2023.acl-short.105", pages = "1219--1228", abstract = "Text revision is a necessary process to improve text quality. During this process, writers constantly edit texts out of different edit intentions. Identifying edit intention for a raw text is always an ambiguous work, and most previous work on revision systems mainly focuses on editing texts according to one specific edit intention. In this work, we aim to build a multi-intent text revision system that could revise texts without explicit intent annotation. Our system is based on prefix-tuning, which first gets prefixes for every edit intent, and then trains a prefix transfer module, enabling the system to selectively leverage the knowledge from various prefixes according to the input text. We conduct experiments on the IteraTeR dataset, and the results show that our system outperforms baselines. The system can significantly improve the SARI score with more than 3{\%} improvements, which thrives on the learned editing intention prefixes.", }
Text revision is a necessary process to improve text quality. During this process, writers constantly edit texts out of different edit intentions. Identifying edit intention for a raw text is always an ambiguous work, and most previous work on revision systems mainly focuses on editing texts according to one specific edit intention. In this work, we aim to build a multi-intent text revision system that could revise texts without explicit intent annotation. Our system is based on prefix-tuning, which first gets prefixes for every edit intent, and then trains a prefix transfer module, enabling the system to selectively leverage the knowledge from various prefixes according to the input text. We conduct experiments on the IteraTeR dataset, and the results show that our system outperforms baselines. The system can significantly improve the SARI score with more than 3{\%} improvements, which thrives on the learned editing intention prefixes.
[ "Chong, Ruining", "Kong, Cunliang", "Wu, Liu", "Liu, Zhenghao", "Jin, Ziye", "Yang, Liner", "Fan, Yange", "Fan, Hanghang", "Yang, Erhong" ]
Leveraging Prefix Transfer for Multi-Intent Text Revision
acl-short.105
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.106.bib
https://aclanthology.org/2023.acl-short.106/
@inproceedings{wang-lu-2023-learning, title = "Learning Multi-Step Reasoning by Solving Arithmetic Tasks", author = "Wang, Tianduo and Lu, Wei", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.106", doi = "10.18653/v1/2023.acl-short.106", pages = "1229--1238", abstract = "Mathematical reasoning is regarded as a necessary ability for Language Models (LMs). Recent works demonstrate large LMs{'} impressive performance in solving math problems. The success is attributed to their Chain-of-Thought (CoT) reasoning abilities, i.e., the ability to decompose complex questions into step-by-step reasoning chains, but such ability seems only to emerge from models with abundant parameters. This work investigates how to incorporate relatively small LMs with the capabilities of multi-step reasoning. We propose to inject such abilities by continually pre-training LMs on a synthetic dataset MsAT which is composed of Multi-step Arithmetic Tasks. Our experiments on four math word problem datasets show the effectiveness of the proposed method in enhancing LMs{'} math reasoning abilities.", }
Mathematical reasoning is regarded as a necessary ability for Language Models (LMs). Recent works demonstrate large LMs{'} impressive performance in solving math problems. The success is attributed to their Chain-of-Thought (CoT) reasoning abilities, i.e., the ability to decompose complex questions into step-by-step reasoning chains, but such ability seems only to emerge from models with abundant parameters. This work investigates how to incorporate relatively small LMs with the capabilities of multi-step reasoning. We propose to inject such abilities by continually pre-training LMs on a synthetic dataset MsAT which is composed of Multi-step Arithmetic Tasks. Our experiments on four math word problem datasets show the effectiveness of the proposed method in enhancing LMs{'} math reasoning abilities.
[ "Wang, Ti", "uo", "Lu, Wei" ]
Learning Multi-Step Reasoning by Solving Arithmetic Tasks
acl-short.106
Poster
2306.01707
[ "https://github.com/TianduoWang/MsAT" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.107.bib
https://aclanthology.org/2023.acl-short.107/
@inproceedings{zhang-etal-2023-towards-adaptive, title = "Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning", author = "Zhang, Zhen-Ru and Tan, Chuanqi and Xu, Haiyang and Wang, Chengyu and Huang, Jun and Huang, Songfang", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.107", doi = "10.18653/v1/2023.acl-short.107", pages = "1239--1248", abstract = "Fine-tuning large pre-trained language models on various downstream tasks with whole parameters is prohibitively expensive. Hence, Parameter-efficient fine-tuning has attracted attention that only optimizes a few task-specific parameters with the frozen pre-trained model. In this work, we focus on prefix tuning, which only optimizes continuous prefix vectors (i.e. pseudo tokens) inserted into Transformer layers. Based on the observation that the learned syntax and semantics representation varies a lot at different layers, we argue that the adaptive prefix will be further tailored to each layer than the fixed one, enabling the fine-tuning more effective and efficient. Thus, we propose Adaptive Prefix Tuning (APT) to adjust the prefix in terms of both fine-grained token level and coarse-grained layer level with a gate mechanism. Experiments on the SuperGLUE and NER datasets show the effectiveness of APT. In addition, taking the gate as a probing, we validate the efficiency and effectiveness of the variable prefix.", }
Fine-tuning large pre-trained language models on various downstream tasks with whole parameters is prohibitively expensive. Hence, Parameter-efficient fine-tuning has attracted attention that only optimizes a few task-specific parameters with the frozen pre-trained model. In this work, we focus on prefix tuning, which only optimizes continuous prefix vectors (i.e. pseudo tokens) inserted into Transformer layers. Based on the observation that the learned syntax and semantics representation varies a lot at different layers, we argue that the adaptive prefix will be further tailored to each layer than the fixed one, enabling the fine-tuning more effective and efficient. Thus, we propose Adaptive Prefix Tuning (APT) to adjust the prefix in terms of both fine-grained token level and coarse-grained layer level with a gate mechanism. Experiments on the SuperGLUE and NER datasets show the effectiveness of APT. In addition, taking the gate as a probing, we validate the efficiency and effectiveness of the variable prefix.
[ "Zhang, Zhen-Ru", "Tan, Chuanqi", "Xu, Haiyang", "Wang, Chengyu", "Huang, Jun", "Huang, Songfang" ]
Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning
acl-short.107
Poster
2305.15212
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.108.bib
https://aclanthology.org/2023.acl-short.108/
@inproceedings{fatemi-etal-2023-improving, title = "Improving Gender Fairness of Pre-Trained Language Models without Catastrophic Forgetting", author = "Fatemi, Zahra and Xing, Chen and Liu, Wenhao and Xiong, Caiming", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.108", doi = "10.18653/v1/2023.acl-short.108", pages = "1249--1262", abstract = "Existing studies addressing gender bias of pre-trained language models, usually build a small gender-neutral data set and conduct a second phase pre-training on the model with such data. However, given the limited size and concentrated focus of the gender-neutral data, catastrophic forgetting would occur during second-phase pre-training. Forgetting information in the original training data may damage the model{'}s downstream performance by a large margin. In this work, we empirically show that catastrophic forgetting occurs in such methods by evaluating them with general NLP tasks in GLUE. Then, we propose a new method, GEnder Equality Prompt (GEEP), to improve gender fairness of pre-trained models with less forgetting. GEEP freezes the pre-trained model and learns gender-related prompts with gender-neutral data. Empirical results show that GEEP not only achieves SOTA performances on gender fairness tasks, but also forgets less and performs better on GLUE by a large margin.", }
Existing studies addressing gender bias of pre-trained language models, usually build a small gender-neutral data set and conduct a second phase pre-training on the model with such data. However, given the limited size and concentrated focus of the gender-neutral data, catastrophic forgetting would occur during second-phase pre-training. Forgetting information in the original training data may damage the model{'}s downstream performance by a large margin. In this work, we empirically show that catastrophic forgetting occurs in such methods by evaluating them with general NLP tasks in GLUE. Then, we propose a new method, GEnder Equality Prompt (GEEP), to improve gender fairness of pre-trained models with less forgetting. GEEP freezes the pre-trained model and learns gender-related prompts with gender-neutral data. Empirical results show that GEEP not only achieves SOTA performances on gender fairness tasks, but also forgets less and performs better on GLUE by a large margin.
[ "Fatemi, Zahra", "Xing, Chen", "Liu, Wenhao", "Xiong, Caimming" ]
Improving Gender Fairness of Pre-Trained Language Models without Catastrophic Forgetting
acl-short.108
Poster
2110.05367
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.109.bib
https://aclanthology.org/2023.acl-short.109/
@inproceedings{shao-etal-2023-class, title = "Class-Incremental Learning based on Label Generation", author = "Shao, Yijia and Guo, Yiduo and Zhao, Dongyan and Liu, Bing", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.109", doi = "10.18653/v1/2023.acl-short.109", pages = "1263--1276", abstract = "Despite the great success of pre-trained language models, it is still a challenge to use these models for continual learning, especially for the class-incremental learning (CIL) setting due to catastrophic forgetting (CF). This paper reports our finding that if we formulate CIL as a continual label generation problem, CF is drastically reduced and the generalizable representations of pre-trained models can be better retained. We thus propose a new CIL method (VAG) that also leverages the sparsity of vocabulary to focus the generation and creates pseudo-replay samples by using label semantics. Experimental results show that VAG outperforms baselines by a large margin.", }
Despite the great success of pre-trained language models, it is still a challenge to use these models for continual learning, especially for the class-incremental learning (CIL) setting due to catastrophic forgetting (CF). This paper reports our finding that if we formulate CIL as a continual label generation problem, CF is drastically reduced and the generalizable representations of pre-trained models can be better retained. We thus propose a new CIL method (VAG) that also leverages the sparsity of vocabulary to focus the generation and creates pseudo-replay samples by using label semantics. Experimental results show that VAG outperforms baselines by a large margin.
[ "Shao, Yijia", "Guo, Yiduo", "Zhao, Dongyan", "Liu, Bing" ]
Class-Incremental Learning based on Label Generation
acl-short.109
Poster
2306.12619
[ "https://github.com/shaoyijia/vag" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.110.bib
https://aclanthology.org/2023.acl-short.110/
@inproceedings{tsvilodub-franke-2023-evaluating, title = "Evaluating pragmatic abilities of image captioners on {A}3{DS}", author = "Tsvilodub, Polina and Franke, Michael", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.110", doi = "10.18653/v1/2023.acl-short.110", pages = "1277--1285", abstract = "Evaluating grounded neural language model performance with respect to pragmatic qualities like the trade off between truthfulness, contrastivity and overinformativity of generated utterances remains a challenge in absence of data collected from humans. To enable such evaluation, we present a novel open source image-text dataset {``}Annotated 3D Shapes{''} (A3DS) comprising over nine million exhaustive natural language annotations and over 12 million variable-granularity captions for the 480,000 images provided by Burgess {\&} Kim (2018). We showcase the evaluation of pragmatic abilities developed by a task-neutral image captioner fine-tuned in a multi-agent communication setting to produce contrastive captions. The evaluation is enabled by the dataset because the exhaustive annotations allow to quantify the presence of contrastive features in the model{'}s generations. We show that the model develops human-like patterns (informativity, brevity, over-informativity for specific features (e.g., shape, color biases)).", }
Evaluating grounded neural language model performance with respect to pragmatic qualities like the trade off between truthfulness, contrastivity and overinformativity of generated utterances remains a challenge in absence of data collected from humans. To enable such evaluation, we present a novel open source image-text dataset {``}Annotated 3D Shapes{''} (A3DS) comprising over nine million exhaustive natural language annotations and over 12 million variable-granularity captions for the 480,000 images provided by Burgess {\&} Kim (2018). We showcase the evaluation of pragmatic abilities developed by a task-neutral image captioner fine-tuned in a multi-agent communication setting to produce contrastive captions. The evaluation is enabled by the dataset because the exhaustive annotations allow to quantify the presence of contrastive features in the model{'}s generations. We show that the model develops human-like patterns (informativity, brevity, over-informativity for specific features (e.g., shape, color biases)).
[ "Tsvilodub, Polina", "Franke, Michael" ]
Evaluating pragmatic abilities of image captioners on A3DS
acl-short.110
Poster
2305.12777
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.111.bib
https://aclanthology.org/2023.acl-short.111/
@inproceedings{wang-etal-2023-art, title = "The Art of Prompting: Event Detection based on Type Specific Prompts", author = "Wang, Sijia and Yu, Mo and Huang, Lifu", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.111", doi = "10.18653/v1/2023.acl-short.111", pages = "1286--1299", abstract = "We compare various forms of prompts to represent event types and develop a unified framework to incorporate the event type specific prompts for supervised, few-shot, and zero-shot event detection. The experimental results demonstrate that a well-defined and comprehensive event type prompt can significantly improve event detection performance, especially when the annotated data is scarce (few-shot event detection) or not available (zero-shot event detection). By leveraging the semantics of event types, our unified framework shows up to 22.2{\%} F-score gain over the previous state-of-the-art baselines.", }
We compare various forms of prompts to represent event types and develop a unified framework to incorporate the event type specific prompts for supervised, few-shot, and zero-shot event detection. The experimental results demonstrate that a well-defined and comprehensive event type prompt can significantly improve event detection performance, especially when the annotated data is scarce (few-shot event detection) or not available (zero-shot event detection). By leveraging the semantics of event types, our unified framework shows up to 22.2{\%} F-score gain over the previous state-of-the-art baselines.
[ "Wang, Sijia", "Yu, Mo", "Huang, Lifu" ]
The Art of Prompting: Event Detection based on Type Specific Prompts
acl-short.111
Poster
2204.07241
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.112.bib
https://aclanthology.org/2023.acl-short.112/
@inproceedings{mao-etal-2023-exploring, title = "Exploring the Impact of Layer Normalization for Zero-shot Neural Machine Translation", author = "Mao, Zhuoyuan and Dabre, Raj and Liu, Qianying and Song, Haiyue and Chu, Chenhui and Kurohashi, Sadao", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.112", doi = "10.18653/v1/2023.acl-short.112", pages = "1300--1316", abstract = "This paper studies the impact of layer normalization (LayerNorm) on zero-shot translation (ZST). Recent efforts for ZST often utilize the Transformer architecture as the backbone, with LayerNorm at the input of layers (PreNorm) set as the default. However, Xu et al. (2019) has revealed that PreNorm carries the risk of overfitting the training data. Based on this, we hypothesize that PreNorm may overfit supervised directions and thus have low generalizability for ZST. Through experiments on OPUS, IWSLT, and Europarl datasets for 54 ZST directions, we demonstrate that the original Transformer setting of LayerNorm after residual connections (PostNorm) consistently outperforms PreNorm by up to 12.3 BLEU points. We then study the performance disparities by analyzing the differences in off-target rates and structural variations between PreNorm and PostNorm. This study highlights the need for careful consideration of the LayerNorm setting for ZST.", }
This paper studies the impact of layer normalization (LayerNorm) on zero-shot translation (ZST). Recent efforts for ZST often utilize the Transformer architecture as the backbone, with LayerNorm at the input of layers (PreNorm) set as the default. However, Xu et al. (2019) has revealed that PreNorm carries the risk of overfitting the training data. Based on this, we hypothesize that PreNorm may overfit supervised directions and thus have low generalizability for ZST. Through experiments on OPUS, IWSLT, and Europarl datasets for 54 ZST directions, we demonstrate that the original Transformer setting of LayerNorm after residual connections (PostNorm) consistently outperforms PreNorm by up to 12.3 BLEU points. We then study the performance disparities by analyzing the differences in off-target rates and structural variations between PreNorm and PostNorm. This study highlights the need for careful consideration of the LayerNorm setting for ZST.
[ "Mao, Zhuoyuan", "Dabre, Raj", "Liu, Qianying", "Song, Haiyue", "Chu, Chenhui", "Kurohashi, Sadao" ]
Exploring the Impact of Layer Normalization for Zero-shot Neural Machine Translation
acl-short.112
Poster
2305.09312
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.113.bib
https://aclanthology.org/2023.acl-short.113/
@inproceedings{kung-peng-2023-models, title = "Do Models Really Learn to Follow Instructions? An Empirical Study of Instruction Tuning", author = "Kung, Po-Nien and Peng, Nanyun", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.113", doi = "10.18653/v1/2023.acl-short.113", pages = "1317--1328", abstract = "Recent works on instruction tuning (IT) have achieved great performance with zero-shot generalizability to unseen tasks. With additional context (e.g., task definition, examples) provided to models for fine-tuning, they achieved much higher performance than untuned models. Despite impressive performance gains, what models learn from IT remains understudied. In this work, we analyze how models utilize instructions during IT by comparing model training with altered vs. original instructions. Specifically, we create simplified task definitions by removing all semantic components and only leaving the output space information, and delusive examples that contain incorrect input-output mapping. Our experiments show that models trained on simplified task definition or delusive examples can achieve comparable performance to the ones trained on the original instructions and examples. Furthermore, we introduce a random baseline to perform zeroshot classification tasks, and find it achieves similar performance (42.6{\%} exact-match) as IT does (43{\%} exact-match) in low resource setting, while both methods outperform naive T5 significantly (30{\%} per exact-match). Our analysis provides evidence that the impressive performance gain of current IT models can come from picking up superficial patterns, such as learning the output format and guessing. Our study highlights the urgent need for more reliable IT methods and evaluation.", }
Recent works on instruction tuning (IT) have achieved great performance with zero-shot generalizability to unseen tasks. With additional context (e.g., task definition, examples) provided to models for fine-tuning, they achieved much higher performance than untuned models. Despite impressive performance gains, what models learn from IT remains understudied. In this work, we analyze how models utilize instructions during IT by comparing model training with altered vs. original instructions. Specifically, we create simplified task definitions by removing all semantic components and only leaving the output space information, and delusive examples that contain incorrect input-output mapping. Our experiments show that models trained on simplified task definition or delusive examples can achieve comparable performance to the ones trained on the original instructions and examples. Furthermore, we introduce a random baseline to perform zero-shot classification tasks, and find it achieves similar performance (42.6{\%} exact-match) as IT does (43{\%} exact-match) in low resource setting, while both methods outperform naive T5 significantly (30{\%} per exact-match). Our analysis provides evidence that the impressive performance gain of current IT models can come from picking up superficial patterns, such as learning the output format and guessing. Our study highlights the urgent need for more reliable IT methods and evaluation.
[ "Kung, Po-Nien", "Peng, Nanyun" ]
Do Models Really Learn to Follow Instructions? An Empirical Study of Instruction Tuning
acl-short.113
Poster
2305.11383
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.114.bib
https://aclanthology.org/2023.acl-short.114/
@inproceedings{oneill-dutta-2023-self, title = "Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models", author = "O{'}Neill, James and Dutta, Sourav", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.114", doi = "10.18653/v1/2023.acl-short.114", pages = "1329--1339", abstract = "We investigate the effects of post-training quantization and quantization-aware training on the generalization of Transformer language models. We present a new method called self-distilled quantization (SDQ) that minimizes accumulative quantization errors and outperforms baselines. We apply SDQ to multilingual models XLM-R$_{\text{Base}}$ and InfoXLM$_{\text{Base}}$ and demonstrate that both models can be reduced from 32-bit floating point weights to 8-bit integer weights while maintaining a high level of performance on the XGLUE benchmark. Our results also highlight the challenges of quantizing multilingual models, which must generalize to languages they were not fine-tuned on.", }
We investigate the effects of post-training quantization and quantization-aware training on the generalization of Transformer language models. We present a new method called self-distilled quantization (SDQ) that minimizes accumulative quantization errors and outperforms baselines. We apply SDQ to multilingual models XLM-R$_{\text{Base}}$ and InfoXLM$_{\text{Base}}$ and demonstrate that both models can be reduced from 32-bit floating point weights to 8-bit integer weights while maintaining a high level of performance on the XGLUE benchmark. Our results also highlight the challenges of quantizing multilingual models, which must generalize to languages they were not fine-tuned on.
[ "O{'}Neill, James", "Dutta, Sourav" ]
Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models
acl-short.114
Poster
2307.05972
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.115.bib
https://aclanthology.org/2023.acl-short.115/
@inproceedings{han-etal-2023-modality, title = "Modality Adaption or Regularization? A Case Study on End-to-End Speech Translation", author = "Han, Yuchen and Xu, Chen and Xiao, Tong and Zhu, Jingbo", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.115", doi = "10.18653/v1/2023.acl-short.115", pages = "1340--1348", abstract = "Pre-training and fine-tuning is a paradigm for alleviating the data scarcity problem in end-to-end speech translation (E2E ST). The commonplace {''}modality gap{''} between speech and text data often leads to inconsistent inputs between pre-training and fine-tuning. However, we observe that this gap occurs in the early stages of fine-tuning, but does not have a major impact on the final performance. On the other hand, we find that there is another gap, which we call the {''}capacity gap{''}: high-resource tasks (such as ASR and MT) always require a large model to fit; when the model is reused for a low-resource task (E2E ST), it gets sub-optimal performance due to over-fitting. In a case study, we find that regularization plays a more important role than the well-designed modality adaption method, which achieves 29.0 for en-de and 40.3 for en-fr on the MuST-C dataset.", }
Pre-training and fine-tuning is a paradigm for alleviating the data scarcity problem in end-to-end speech translation (E2E ST). The commonplace {''}modality gap{''} between speech and text data often leads to inconsistent inputs between pre-training and fine-tuning. However, we observe that this gap occurs in the early stages of fine-tuning, but does not have a major impact on the final performance. On the other hand, we find that there is another gap, which we call the {''}capacity gap{''}: high-resource tasks (such as ASR and MT) always require a large model to fit; when the model is reused for a low-resource task (E2E ST), it gets sub-optimal performance due to over-fitting. In a case study, we find that regularization plays a more important role than the well-designed modality adaption method, which achieves 29.0 for en-de and 40.3 for en-fr on the MuST-C dataset.
[ "Han, Yuchen", "Xu, Chen", "Xiao, Tong", "Zhu, Jingbo" ]
Modality Adaption or Regularization? A Case Study on End-to-End Speech Translation
acl-short.115
Poster
2306.07650
[ "https://github.com/hannlp/tab" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.116.bib
https://aclanthology.org/2023.acl-short.116/
@inproceedings{li-etal-2023-uncertainty, title = "Uncertainty-Aware Bootstrap Learning for Joint Extraction on Distantly-Supervised Data", author = "Li, Yufei and Yu, Xiao and Liu, Yanchi and Chen, Haifeng and Liu, Cong", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.116", doi = "10.18653/v1/2023.acl-short.116", pages = "1349--1358", abstract = "Jointly extracting entity pairs and their relations is challenging when working on distantly-supervised data with ambiguous or noisy labels. To mitigate such impact, we propose uncertainty-aware bootstrap learning, which is motivated by the intuition that the higher the uncertainty of an instance, the more likely the model confidence is inconsistent with the ground truths. Specifically, we first explore instance-level data uncertainty to create an initial set of high-confidence examples. Such a subset serves to filter out noisy instances and helps the model converge fast at the early stage. During bootstrap learning, we propose self-ensembling as a regularizer to alleviate inter-model uncertainty produced by noisy labels. We further define probability variance of joint tagging probabilities to estimate inner-model parametric uncertainty, which is used to select and build up new reliable training instances for the next iteration. Experimental results on two large datasets reveal that our approach outperforms existing strong baselines and related methods.", }
Jointly extracting entity pairs and their relations is challenging when working on distantly-supervised data with ambiguous or noisy labels. To mitigate such impact, we propose uncertainty-aware bootstrap learning, which is motivated by the intuition that the higher the uncertainty of an instance, the more likely the model confidence is inconsistent with the ground truths. Specifically, we first explore instance-level data uncertainty to create an initial set of high-confidence examples. Such a subset serves to filter out noisy instances and helps the model converge fast at the early stage. During bootstrap learning, we propose self-ensembling as a regularizer to alleviate inter-model uncertainty produced by noisy labels. We further define probability variance of joint tagging probabilities to estimate inner-model parametric uncertainty, which is used to select and build up new reliable training instances for the next iteration. Experimental results on two large datasets reveal that our approach outperforms existing strong baselines and related methods.
[ "Li, Yufei", "Yu, Xiao", "Liu, Yanchi", "Chen, Haifeng", "Liu, Cong" ]
Uncertainty-Aware Bootstrap Learning for Joint Extraction on Distantly-Supervised Data
acl-short.116
Poster
2305.03827
[ "https://github.com/yul091/UnBED" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.117.bib
https://aclanthology.org/2023.acl-short.117/
@inproceedings{chen-etal-2023-text, title = "Text-to-{SQL} Error Correction with Language Models of Code", author = "Chen, Ziru and Chen, Shijie and White, Michael and Mooney, Raymond and Payani, Ali and Srinivasa, Jayanth and Su, Yu and Sun, Huan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.117", doi = "10.18653/v1/2023.acl-short.117", pages = "1359--1372", abstract = "Despite recent progress in text-to-SQL parsing, current semantic parsers are still not accurate enough for practical use. In this paper, we investigate how to build automatic text-to-SQL error correction models. Noticing that token-level edits are out of context and sometimes ambiguous, we propose building clause-level edit models instead. Besides, while most language models of code are not specifically pre-trained for SQL, they know common data structures and their operations in programming languages such as Python. Thus, we propose a novel representation for SQL queries and their edits that adheres more closely to the pre-training corpora of language models of code. Our error correction model improves the exact set match accuracy of different parsers by 2.4-6.5 and obtains up to 4.3 point absolute improvement over two strong baselines.", }
Despite recent progress in text-to-SQL parsing, current semantic parsers are still not accurate enough for practical use. In this paper, we investigate how to build automatic text-to-SQL error correction models. Noticing that token-level edits are out of context and sometimes ambiguous, we propose building clause-level edit models instead. Besides, while most language models of code are not specifically pre-trained for SQL, they know common data structures and their operations in programming languages such as Python. Thus, we propose a novel representation for SQL queries and their edits that adheres more closely to the pre-training corpora of language models of code. Our error correction model improves the exact set match accuracy of different parsers by 2.4-6.5 and obtains up to 4.3 point absolute improvement over two strong baselines.
[ "Chen, Ziru", "Chen, Shijie", "White, Michael", "Mooney, Raymond", "Payani, Ali", "Srinivasa, Jayanth", "Su, Yu", "Sun, Huan" ]
Text-to-SQL Error Correction with Language Models of Code
acl-short.117
Poster
2305.13073
[ "https://github.com/osu-nlp-group/auto-sql-correction" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.118.bib
https://aclanthology.org/2023.acl-short.118/
@inproceedings{selvam-etal-2023-tail, title = "The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks", author = "Selvam, Nikil and Dev, Sunipa and Khashabi, Daniel and Khot, Tushar and Chang, Kai-Wei", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.118", doi = "10.18653/v1/2023.acl-short.118", pages = "1373--1386", abstract = "How reliably can we trust the scores obtained from social bias benchmarks as faithful indicators of problematic social biases in a given model? In this work, we study this question by contrasting social biases with non-social biases that stem from choices made during dataset construction (which might not even be discernible to the human eye). To do so, we empirically simulate various alternative constructions for a given benchmark based on seemingly innocuous modifications (such as paraphrasing or random-sampling) that maintain the essence of their social bias. On two well-known social bias benchmarks (Winogender and BiasNLI), we observe that these shallow modifications have a surprising effect on the resulting degree of bias across various models and consequently the relative ordering of these models when ranked by measured bias. We hope these troubling observations motivate more robust measures of social biases.", }
How reliably can we trust the scores obtained from social bias benchmarks as faithful indicators of problematic social biases in a given model? In this work, we study this question by contrasting social biases with non-social biases that stem from choices made during dataset construction (which might not even be discernible to the human eye). To do so, we empirically simulate various alternative constructions for a given benchmark based on seemingly innocuous modifications (such as paraphrasing or random-sampling) that maintain the essence of their social bias. On two well-known social bias benchmarks (Winogender and BiasNLI), we observe that these shallow modifications have a surprising effect on the resulting degree of bias across various models and consequently the relative ordering of these models when ranked by measured bias. We hope these troubling observations motivate more robust measures of social biases.
[ "Selvam, Nikil", "Dev, Sunipa", "Khashabi, Daniel", "Khot, Tushar", "Chang, Kai-Wei" ]
The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks
acl-short.118
Poster
2210.10040
[ "https://github.com/uclanlp/socialbias-dataset-construction-biases" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.119.bib
https://aclanthology.org/2023.acl-short.119/
@inproceedings{shaib-etal-2023-summarizing, title = "Summarizing, Simplifying, and Synthesizing Medical Evidence using {GPT}-3 (with Varying Success)", author = "Shaib, Chantal and Li, Millicent and Joseph, Sebastian and Marshall, Iain and Li, Junyi Jessy and Wallace, Byron", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.119", doi = "10.18653/v1/2023.acl-short.119", pages = "1387--1407", abstract = "Large language models, particularly GPT-3, are able to produce high quality summaries of general domain news articles in few- and zero-shot settings. However, it is unclear if such models are similarly capable in more specialized domains such as biomedicine. In this paper we enlist domain experts (individuals with medical training) to evaluate summaries of biomedical articles generated by GPT-3, given no supervision. We consider both single- and multi-document settings. In the former, GPT-3 is tasked with generating regular and plain-language summaries of articles describing randomized controlled trials; in the latter, we assess the degree to which GPT-3 is able to synthesize evidence reported across a collection of articles. We design an annotation scheme for evaluating model outputs, with an emphasis on assessing the factual accuracy of generated summaries. We find that while GPT-3 is able to summarize and simplify single biomedical articles faithfully, it struggles to provide accurate aggregations of findings over multiple documents. We release all data, code, and annotations used in this work.", }
Large language models, particularly GPT-3, are able to produce high quality summaries of general domain news articles in few- and zero-shot settings. However, it is unclear if such models are similarly capable in more specialized domains such as biomedicine. In this paper we enlist domain experts (individuals with medical training) to evaluate summaries of biomedical articles generated by GPT-3, given no supervision. We consider both single- and multi-document settings. In the former, GPT-3 is tasked with generating regular and plain-language summaries of articles describing randomized controlled trials; in the latter, we assess the degree to which GPT-3 is able to synthesize evidence reported across a collection of articles. We design an annotation scheme for evaluating model outputs, with an emphasis on assessing the factual accuracy of generated summaries. We find that while GPT-3 is able to summarize and simplify single biomedical articles faithfully, it struggles to provide accurate aggregations of findings over multiple documents. We release all data, code, and annotations used in this work.
[ "Shaib, Chantal", "Li, Millicent", "Joseph, Sebastian", "Marshall, Iain", "Li, Junyi Jessy", "Wallace, Byron" ]
Summarizing, Simplifying, and Synthesizing Medical Evidence using GPT-3 (with Varying Success)
acl-short.119
Poster
2305.06299
[ "https://github.com/cshaib/summarizing-medical-evidence" ]
https://huggingface.co/papers/2305.06299
0
0
0
6
1
[]
[]
[]
https://aclanthology.org/2023.acl-short.120.bib
https://aclanthology.org/2023.acl-short.120/
@inproceedings{li-etal-2023-prefix, title = "Prefix Propagation: Parameter-Efficient Tuning for Long Sequences", author = "Li, Jonathan and Aitken, Will and Bhambhoria, Rohan and Zhu, Xiaodan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.120", doi = "10.18653/v1/2023.acl-short.120", pages = "1408--1419", abstract = "Parameter-efficient tuning aims to mitigate the large memory requirements of adapting pretrained language models for downstream tasks. For example, one popular method, prefix-tuning, prepends trainable tokens to sequences while freezing the rest of the model{'}s parameters. Although such models attain comparable performance with fine-tuning when applied to sequences with short to moderate lengths, we show their inferior performance when modelling long sequences. To bridge this gap, we propose prefix-propagation, a simple but effective approach that conditions prefixes on previous hidden states. We empirically demonstrate that prefix-propagation outperforms prefix-tuning across long-document tasks, while using 50{\%} fewer parameters. To further investigate the proposed architecture, we also show its advantage in calibration, and perform additional study on its relationship with kernel attention. To the best of our knowledge, this work is the first to focus on parameter-efficient learning for long-sequence language tasks.", }
Parameter-efficient tuning aims to mitigate the large memory requirements of adapting pretrained language models for downstream tasks. For example, one popular method, prefix-tuning, prepends trainable tokens to sequences while freezing the rest of the model{'}s parameters. Although such models attain comparable performance with fine-tuning when applied to sequences with short to moderate lengths, we show their inferior performance when modelling long sequences. To bridge this gap, we propose prefix-propagation, a simple but effective approach that conditions prefixes on previous hidden states. We empirically demonstrate that prefix-propagation outperforms prefix-tuning across long-document tasks, while using 50{\%} fewer parameters. To further investigate the proposed architecture, we also show its advantage in calibration, and perform additional study on its relationship with kernel attention. To the best of our knowledge, this work is the first to focus on parameter-efficient learning for long-sequence language tasks.
[ "Li, Jonathan", "Aitken, Will", "Bhambhoria, Rohan", "Zhu, Xiaodan" ]
Prefix Propagation: Parameter-Efficient Tuning for Long Sequences
acl-short.120
Poster
2305.12086
[ "https://github.com/monlih/prefix-propagation" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.121.bib
https://aclanthology.org/2023.acl-short.121/
@inproceedings{wu-etal-2023-listener, title = "Listener Model for the {P}hoto{B}ook Referential Game with {CLIPS}cores as Implicit Reference Chain", author = "Wu, Shih-Lun and Chou, Yi-Hui and Li, Liangze", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.121", doi = "10.18653/v1/2023.acl-short.121", pages = "1420--1432", abstract = "PhotoBook is a collaborative dialogue game where two players receive private, partially-overlapping sets of images and resolve which images they have in common. It presents machines with a great challenge to learn how people build common ground around multimodal context to communicate effectively. Methods developed in the literature, however, cannot be deployed to real gameplay since they only tackle some subtasks of the game, and they require additional reference chain inputs, whose extraction process is imperfect. Therefore, we propose a reference chain-free listener model that directly addresses the game{'}s predictive task, i.e., deciding whether an image is shared with the partner. Our DeBERTa-based listener model reads the full dialogue, and utilizes CLIPScore features to assess utterance-image relevance. We achieve {\textgreater}77{\%} accuracy on unseen sets of images/game themes, outperforming the baseline by {\textgreater}17 points.", }
PhotoBook is a collaborative dialogue game where two players receive private, partially-overlapping sets of images and resolve which images they have in common. It presents machines with a great challenge to learn how people build common ground around multimodal context to communicate effectively. Methods developed in the literature, however, cannot be deployed to real gameplay since they only tackle some subtasks of the game, and they require additional reference chain inputs, whose extraction process is imperfect. Therefore, we propose a reference chain-free listener model that directly addresses the game{'}s predictive task, i.e., deciding whether an image is shared with the partner. Our DeBERTa-based listener model reads the full dialogue, and utilizes CLIPScore features to assess utterance-image relevance. We achieve {\textgreater}77{\%} accuracy on unseen sets of images/game themes, outperforming the baseline by {\textgreater}17 points.
[ "Wu, Shih-Lun", "Chou, Yi-Hui", "Li, Liangze" ]
Listener Model for the PhotoBook Referential Game with CLIPScores as Implicit Reference Chain
acl-short.121
Poster
2306.09607
[ "https://github.com/slseanwu/photobook-full-listener" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.122.bib
https://aclanthology.org/2023.acl-short.122/
@inproceedings{jung-etal-2023-bring, title = "Bring More Attention to Syntactic Symmetry for Automatic Postediting of High-Quality Machine Translations", author = "Jung, Baikjin and Lee, Myungji and Lee, Jong-Hyeok and Kim, Yunsu", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.122", doi = "10.18653/v1/2023.acl-short.122", pages = "1433--1441", abstract = "Automatic postediting (APE) is an automated process to refine a given machine translation (MT). Recent findings present that existing APE systems are not good at handling high-quality MTs even for a language pair with abundant data resources, English{--}German: the better the given MT is, the harder it is to decide what parts to edit and how to fix these errors. One possible solution to this problem is to instill deeper knowledge about the target language into the model. Thus, we propose a linguistically motivated method of regularization that is expected to enhance APE models{'} understanding of the target language: a loss function that encourages symmetric self-attention on the given MT. Our analysis of experimental results demonstrates that the proposed method helps improve the state-of-the-art architecture{'}s APE quality for high-quality MTs.", }
Automatic postediting (APE) is an automated process to refine a given machine translation (MT). Recent findings present that existing APE systems are not good at handling high-quality MTs even for a language pair with abundant data resources, English{--}German: the better the given MT is, the harder it is to decide what parts to edit and how to fix these errors. One possible solution to this problem is to instill deeper knowledge about the target language into the model. Thus, we propose a linguistically motivated method of regularization that is expected to enhance APE models{'} understanding of the target language: a loss function that encourages symmetric self-attention on the given MT. Our analysis of experimental results demonstrates that the proposed method helps improve the state-of-the-art architecture{'}s APE quality for high-quality MTs.
[ "Jung, Baikjin", "Lee, Myungji", "Lee, Jong-Hyeok", "Kim, Yunsu" ]
Bring More Attention to Syntactic Symmetry for Automatic Postediting of High-Quality Machine Translations
acl-short.122
Poster
2305.10557
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.123.bib
https://aclanthology.org/2023.acl-short.123/
@inproceedings{yan-etal-2023-embarrassingly, title = "An Embarrassingly Easy but Strong Baseline for Nested Named Entity Recognition", author = "Yan, Hang and Sun, Yu and Li, Xiaonan and Qiu, Xipeng", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.123", doi = "10.18653/v1/2023.acl-short.123", pages = "1442--1452", abstract = "Named entity recognition (NER) is the task of detecting and classifying entity spans in text. When entity spans overlap with each other, the task is called nested NER. Span-based methods have been widely used to tackle nested NER. Most of these methods get a score matrix, where each entry corresponds to a span. However, previous work ignores spatial relations in the score matrix. In this paper, we propose using a Convolutional Neural Network (CNN) to model these spatial relations. Despite being simple, experiments on three commonly used nested NER datasets show that our model surpasses several recently proposed methods with the same pre-trained encoders. Further analysis shows that using a CNN can help the model find more nested entities. Besides, we find that different papers use different sentence tokenizations for the three nested NER datasets, which will influence the comparison. Thus, we release a pre-processing script to facilitate future comparison.", }
Named entity recognition (NER) is the task of detecting and classifying entity spans in text. When entity spans overlap with each other, the task is called nested NER. Span-based methods have been widely used to tackle nested NER. Most of these methods get a score matrix, where each entry corresponds to a span. However, previous work ignores spatial relations in the score matrix. In this paper, we propose using a Convolutional Neural Network (CNN) to model these spatial relations. Despite being simple, experiments on three commonly used nested NER datasets show that our model surpasses several recently proposed methods with the same pre-trained encoders. Further analysis shows that using a CNN can help the model find more nested entities. Besides, we find that different papers use different sentence tokenizations for the three nested NER datasets, which will influence the comparison. Thus, we release a pre-processing script to facilitate future comparison.
[ "Yan, Hang", "Sun, Yu", "Li, Xiaonan", "Qiu, Xipeng" ]
An Embarrassingly Easy but Strong Baseline for Nested Named Entity Recognition
acl-short.123
Poster
2208.04534
[ "https://github.com/yhcc/cnn_nested_ner" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.124.bib
https://aclanthology.org/2023.acl-short.124/
@inproceedings{amini-etal-2023-hexatagging, title = "Hexatagging: Projective Dependency Parsing as Tagging", author = "Amini, Afra and Liu, Tianyu and Cotterell, Ryan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.124", doi = "10.18653/v1/2023.acl-short.124", pages = "1453--1464", abstract = "We introduce a novel dependency parser, the hexatagger, that constructs dependency trees by tagging the words in a sentence with elements from a finite set of possible tags. In contrast to many approaches to dependency parsing, our approach is fully parallelizable at training time, i.e., the structure-building actions needed to build a dependency parse can be predicted in parallel to each other. Additionally, exact decoding is linear in time and space complexity. Furthermore, we derive a probabilistic dependency parser that predicts hexatags using no more than a linear model with features from a pretrained language model, i.e., we forsake a bespoke architecture explicitly designed for the task. Despite the generality and simplicity of our approach, we achieve state-of-the-art performance of 96.4 LAS and 97.4 UAS on the Penn Treebank test set. Additionally, our parser{'}s linear time complexity and parallelism significantly improve computational efficiency, with a roughly 10-times speed-up over previous state-of-the-art models during decoding.", }
We introduce a novel dependency parser, the hexatagger, that constructs dependency trees by tagging the words in a sentence with elements from a finite set of possible tags. In contrast to many approaches to dependency parsing, our approach is fully parallelizable at training time, i.e., the structure-building actions needed to build a dependency parse can be predicted in parallel to each other. Additionally, exact decoding is linear in time and space complexity. Furthermore, we derive a probabilistic dependency parser that predicts hexatags using no more than a linear model with features from a pretrained language model, i.e., we forsake a bespoke architecture explicitly designed for the task. Despite the generality and simplicity of our approach, we achieve state-of-the-art performance of 96.4 LAS and 97.4 UAS on the Penn Treebank test set. Additionally, our parser{'}s linear time complexity and parallelism significantly improve computational efficiency, with a roughly 10-times speed-up over previous state-of-the-art models during decoding.
[ "Amini, Afra", "Liu, Tianyu", "Cotterell, Ryan" ]
Hexatagging: Projective Dependency Parsing as Tagging
acl-short.124
Oral
2306.05477
[ "https://github.com/rycolab/parsing-as-tagging" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.125.bib
https://aclanthology.org/2023.acl-short.125/
@inproceedings{zhang-yu-2023-understanding, title = "Understanding Demonstration-based Learning from a Causal Perspective", author = "Zhang, Ruiyi and Yu, Tong", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.125", doi = "10.18653/v1/2023.acl-short.125", pages = "1465--1475", abstract = "Demonstration-based learning has shown impressive performance in exploiting pretrained language models under few-shot learning settings. It is interesting to see that demonstrations, even those composed of random tokens, can still improve performance. In this paper, we build a Structural Causal Model (SCM) to understand demonstration-based learning from causal perspectives and interpret random demonstrations as interventions on the demonstration variable within the causal model. We investigate the causal effects and find that the concurrence of specific words in the demonstration will induce bias, while randomly sampled tokens in the demonstration do not. Based on this finding, we further propose simple ways to construct random demonstrations, which even outperform hand-crafted, meaningful demonstrations on public sequence labeling benchmarks.", }
Demonstration-based learning has shown impressive performance in exploiting pretrained language models under few-shot learning settings. It is interesting to see that demonstrations, even those composed of random tokens, can still improve performance. In this paper, we build a Structural Causal Model (SCM) to understand demonstration-based learning from causal perspectives and interpret random demonstrations as interventions on the demonstration variable within the causal model. We investigate the causal effects and find that the concurrence of specific words in the demonstration will induce bias, while randomly sampled tokens in the demonstration do not. Based on this finding, we further propose simple ways to construct random demonstrations, which even outperform hand-crafted, meaningful demonstrations on public sequence labeling benchmarks.
[ "Zhang, Ruiyi", "Yu, Tong" ]
Understanding Demonstration-based Learning from a Causal Perspective
acl-short.125
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.126.bib
https://aclanthology.org/2023.acl-short.126/
@inproceedings{sarti-etal-2023-ramp, title = "{RAMP}: Retrieval and Attribute-Marking Enhanced Prompting for Attribute-Controlled Translation", author = "Sarti, Gabriele and Htut, Phu Mon and Niu, Xing and Hsu, Benjamin and Currey, Anna and Dinu, Georgiana and Nadejde, Maria", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.126", doi = "10.18653/v1/2023.acl-short.126", pages = "1476--1490", abstract = "Attribute-controlled translation (ACT) is a subtask of machine translation that involves controlling stylistic or linguistic attributes (like formality and gender) of translation outputs. While ACT has garnered attention in recent years due to its usefulness in real-world applications, progress in the task is currently limited by dataset availability, since most prior approaches rely on supervised methods. To address this limitation, we propose Retrieval and Attribute-Marking enhanced Prompting (RAMP), which leverages large multilingual language models to perform ACT in few-shot and zero-shot settings. RAMP improves generation accuracy over the standard prompting approach by (1) incorporating a semantic similarity retrieval component for selecting similar in-context examples, and (2) marking in-context examples with attribute annotations. Our comprehensive experiments show that RAMP is a viable approach in both zero-shot and few-shot settings.", }
Attribute-controlled translation (ACT) is a subtask of machine translation that involves controlling stylistic or linguistic attributes (like formality and gender) of translation outputs. While ACT has garnered attention in recent years due to its usefulness in real-world applications, progress in the task is currently limited by dataset availability, since most prior approaches rely on supervised methods. To address this limitation, we propose Retrieval and Attribute-Marking enhanced Prompting (RAMP), which leverages large multilingual language models to perform ACT in few-shot and zero-shot settings. RAMP improves generation accuracy over the standard prompting approach by (1) incorporating a semantic similarity retrieval component for selecting similar in-context examples, and (2) marking in-context examples with attribute annotations. Our comprehensive experiments show that RAMP is a viable approach in both zero-shot and few-shot settings.
[ "Sarti, Gabriele", "Htut, Phu Mon", "Niu, Xing", "Hsu, Benjamin", "Currey, Anna", "Dinu, Georgiana", "Nadejde, Maria" ]
RAMP: Retrieval and Attribute-Marking Enhanced Prompting for Attribute-Controlled Translation
acl-short.126
Poster
2305.17131
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.127.bib
https://aclanthology.org/2023.acl-short.127/
@inproceedings{wen-hauptmann-2023-zero, title = "Zero-Shot and Few-Shot Stance Detection on Varied Topics via Conditional Generation", author = "Wen, Haoyang and Hauptmann, Alexander", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.127", doi = "10.18653/v1/2023.acl-short.127", pages = "1491--1499", abstract = "Zero-shot and few-shot stance detection identify the polarity of text with regard to a certain target when we have only limited or no training resources for the target. Previous work generally formulates the problem into a classification setting, ignoring the potential use of label text. In this paper, we instead utilize a conditional generation framework and formulate the problem as denoising from partially-filled templates, which can better utilize the semantics among input, label, and target texts. We further propose to jointly train an auxiliary task, target prediction, and to incorporate manually constructed incorrect samples with unlikelihood training to improve the representations for both target and label texts. We also verify the effectiveness of target-related Wikipedia knowledge with the generation framework. Experiments show that our proposed method significantly outperforms several strong baselines on VAST, and achieves new state-of-the-art performance.", }
Zero-shot and few-shot stance detection identify the polarity of text with regard to a certain target when we have only limited or no training resources for the target. Previous work generally formulates the problem into a classification setting, ignoring the potential use of label text. In this paper, we instead utilize a conditional generation framework and formulate the problem as denoising from partially-filled templates, which can better utilize the semantics among input, label, and target texts. We further propose to jointly train an auxiliary task, target prediction, and to incorporate manually constructed incorrect samples with unlikelihood training to improve the representations for both target and label texts. We also verify the effectiveness of target-related Wikipedia knowledge with the generation framework. Experiments show that our proposed method significantly outperforms several strong baselines on VAST, and achieves new state-of-the-art performance.
[ "Wen, Haoyang", "Hauptmann, Alexander" ]
Zero-Shot and Few-Shot Stance Detection on Varied Topics via Conditional Generation
acl-short.127
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.128.bib
https://aclanthology.org/2023.acl-short.128/
@inproceedings{juhng-etal-2023-discourse, title = "Discourse-Level Representations can Improve Prediction of Degree of Anxiety", author = "Juhng, Swanie and Matero, Matthew and Varadarajan, Vasudha and Eichstaedt, Johannes and V Ganesan, Adithya and Schwartz, H. Andrew", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.128", doi = "10.18653/v1/2023.acl-short.128", pages = "1500--1511", abstract = "Anxiety disorders are the most common of mental illnesses, but relatively little is known about how to detect them from language. The primary clinical manifestation of anxiety is worry associated cognitive distortions, which are likely expressed at the discourse-level of semantics. Here, we investigate the development of a modern linguistic assessment for degree of anxiety, specifically evaluating the utility of discourse-level information in addition to lexical-level large language model embeddings. We find that a combined lexico-discourse model outperforms models based solely on state-of-the-art contextual embeddings (RoBERTa), with discourse-level representations derived from Sentence-BERT and DiscRE both providing additional predictive power not captured by lexical-level representations. Interpreting the model, we find that discourse patterns of causal explanations, among others, were used significantly more by those scoring high in anxiety, dovetailing with psychological literature.", }
Anxiety disorders are the most common of mental illnesses, but relatively little is known about how to detect them from language. The primary clinical manifestation of anxiety is worry associated cognitive distortions, which are likely expressed at the discourse-level of semantics. Here, we investigate the development of a modern linguistic assessment for degree of anxiety, specifically evaluating the utility of discourse-level information in addition to lexical-level large language model embeddings. We find that a combined lexico-discourse model outperforms models based solely on state-of-the-art contextual embeddings (RoBERTa), with discourse-level representations derived from Sentence-BERT and DiscRE both providing additional predictive power not captured by lexical-level representations. Interpreting the model, we find that discourse patterns of causal explanations, among others, were used significantly more by those scoring high in anxiety, dovetailing with psychological literature.
[ "Juhng, Swanie", "Matero, Matthew", "Varadarajan, Vasudha", "Eichstaedt, Johannes", "V Ganesan, Adithya", "Schwartz, H. Andrew" ]
Discourse-Level Representations can Improve Prediction of Degree of Anxiety
acl-short.128
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.129.bib
https://aclanthology.org/2023.acl-short.129/
@inproceedings{ozdayi-etal-2023-controlling, title = "Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning", author = "Ozdayi, Mustafa and Peris, Charith and FitzGerald, Jack and Dupuy, Christophe and Majmudar, Jimit and Khan, Haidar and Parikh, Rahil and Gupta, Rahul", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.129", doi = "10.18653/v1/2023.acl-short.129", pages = "1512--1521", abstract = "Large Language Models (LLMs) are known to memorize significant portions of their training data. Parts of this memorized content have been shown to be extractable by simply querying the model, which poses a privacy risk. We present a novel approach which uses prompt-tuning to control the extraction rates of memorized content in LLMs. We present two prompt training strategies to increase and decrease extraction rates, which correspond to an attack and a defense, respectively. We demonstrate the effectiveness of our techniques by using models from the GPT-Neo family on a public benchmark. For the 1.3B parameter GPT-Neo model, our attack yields a 9.3 percentage point increase in extraction rate compared to our baseline. Our defense can be tuned to achieve different privacy-utility trade-offs by a user-specified hyperparameter. We achieve an extraction rate reduction of up to 97.7{\%} relative to our baseline, with a perplexity increase of 16.9{\%}.", }
Large Language Models (LLMs) are known to memorize significant portions of their training data. Parts of this memorized content have been shown to be extractable by simply querying the model, which poses a privacy risk. We present a novel approach which uses prompt-tuning to control the extraction rates of memorized content in LLMs. We present two prompt training strategies to increase and decrease extraction rates, which correspond to an attack and a defense, respectively. We demonstrate the effectiveness of our techniques by using models from the GPT-Neo family on a public benchmark. For the 1.3B parameter GPT-Neo model, our attack yields a 9.3 percentage point increase in extraction rate compared to our baseline. Our defense can be tuned to achieve different privacy-utility trade-offs by a user-specified hyperparameter. We achieve an extraction rate reduction of up to 97.7{\%} relative to our baseline, with a perplexity increase of 16.9{\%}.
[ "Ozdayi, Mustafa", "Peris, Charith", "FitzGerald, Jack", "Dupuy, Christophe", "Majmudar, Jimit", "Khan, Haidar", "Parikh, Rahil", "Gupta, Rahul" ]
Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning
acl-short.129
Poster
2305.11759
[ "https://github.com/amazon-science/controlling-llm-memorization" ]
https://huggingface.co/papers/2305.11759
3
1
0
8
1
[]
[]
[]
https://aclanthology.org/2023.acl-short.130.bib
https://aclanthology.org/2023.acl-short.130/
@inproceedings{inaba-etal-2023-multitool, title = "{M}ulti{T}ool-{C}o{T}: {GPT}-3 Can Use Multiple External Tools with Chain of Thought Prompting", author = "Inaba, Tatsuro and Kiyomaru, Hirokazu and Cheng, Fei and Kurohashi, Sadao", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.130", doi = "10.18653/v1/2023.acl-short.130", pages = "1522--1532", abstract = "Large language models (LLMs) have achieved impressive performance on various reasoning tasks. To further improve the performance, we propose MultiTool-CoT, a novel framework that leverages chain-of-thought (CoT) prompting to incorporate multiple external tools, such as a calculator and a knowledge retriever, during the reasoning process. We apply MultiTool-CoT to the Task 2 dataset of NumGLUE, which requires both numerical reasoning and domain-specific knowledge. The experiments show that our method significantly outperforms strong baselines and achieves state-of-the-art performance.", }
Large language models (LLMs) have achieved impressive performance on various reasoning tasks. To further improve the performance, we propose MultiTool-CoT, a novel framework that leverages chain-of-thought (CoT) prompting to incorporate multiple external tools, such as a calculator and a knowledge retriever, during the reasoning process. We apply MultiTool-CoT to the Task 2 dataset of NumGLUE, which requires both numerical reasoning and domain-specific knowledge. The experiments show that our method significantly outperforms strong baselines and achieves state-of-the-art performance.
[ "Inaba, Tatsuro", "Kiyomaru, Hirokazu", "Cheng, Fei", "Kurohashi, Sadao" ]
MultiTool-CoT: GPT-3 Can Use Multiple External Tools with Chain of Thought Prompting
acl-short.130
Poster
2305.16896
[ "https://github.com/inabatatsuro/multitool-cot" ]
https://huggingface.co/papers/2305.16896
0
0
0
4
1
[]
[]
[]
https://aclanthology.org/2023.acl-short.131.bib
https://aclanthology.org/2023.acl-short.131/
@inproceedings{xu-etal-2023-mpmr, title = "m{PMR}: A Multilingual Pre-trained Machine Reader at Scale", author = "Xu, Weiwen and Li, Xin and Lam, Wai and Bing, Lidong", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.131", doi = "10.18653/v1/2023.acl-short.131", pages = "1533--1546", abstract = "We present multilingual Pre-trained Machine Reader (mPMR), a novel method for multilingual machine reading comprehension (MRC)-style pre-training. mPMR aims to guide multilingual pre-trained language models (mPLMs) to perform natural language understanding (NLU) including both sequence classification and span extraction in multiple languages. To achieve cross-lingual generalization when only source-language fine-tuning data is available, existing mPLMs solely transfer NLU capability from a source language to target languages. In contrast, mPMR allows the direct inheritance of multilingual NLU capability from the MRC-style pre-training to downstream tasks. Therefore, mPMR acquires better NLU capability for target languages. mPMR also provides a unified solver for tackling cross-lingual span extraction and sequence classification, thereby enabling the extraction of rationales to explain the sentence-pair classification process.", }
We present multilingual Pre-trained Machine Reader (mPMR), a novel method for multilingual machine reading comprehension (MRC)-style pre-training. mPMR aims to guide multilingual pre-trained language models (mPLMs) to perform natural language understanding (NLU) including both sequence classification and span extraction in multiple languages. To achieve cross-lingual generalization when only source-language fine-tuning data is available, existing mPLMs solely transfer NLU capability from a source language to target languages. In contrast, mPMR allows the direct inheritance of multilingual NLU capability from the MRC-style pre-training to downstream tasks. Therefore, mPMR acquires better NLU capability for target languages. mPMR also provides a unified solver for tackling cross-lingual span extraction and sequence classification, thereby enabling the extraction of rationales to explain the sentence-pair classification process.
[ "Xu, Weiwen", "Li, Xin", "Lam, Wai", "Bing, Lidong" ]
mPMR: A Multilingual Pre-trained Machine Reader at Scale
acl-short.131
Poster
2305.13645
[ "https://github.com/damo-nlp-sg/pmr" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.132.bib
https://aclanthology.org/2023.acl-short.132/
@inproceedings{wang-etal-2023-mospc, title = "{MOSPC}: {MOS} Prediction Based on Pairwise Comparison", author = "Wang, Kexin and Zhao, Yunlong and Dong, Qianqian and Ko, Tom and Wang, Mingxuan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.132", doi = "10.18653/v1/2023.acl-short.132", pages = "1547--1556", abstract = "As a subjective metric to evaluate the quality of synthesized speech, Mean opinion score (MOS) usually requires multiple annotators to score the same speech. Such an annotation approach requires a lot of manpower and is also time-consuming. A MOS prediction model for automatic evaluation can significantly reduce labor cost. In previous works, it is difficult to accurately rank the quality of speech when the MOS scores are close. However, in practical applications, it is more important to correctly rank the quality of synthesis systems or sentences than simply predicting MOS scores. Meanwhile, as each annotator scores multiple audios during annotation, the score is probably a relative value based on the first or the first few speech scores given by the annotator. Motivated by the above two points, we propose a general framework for MOS prediction based on pair comparison (MOSPC), and we utilize the C-Mixup algorithm to enhance the generalization performance of MOSPC. The experiments on BVCC and VCC2018 show that our framework outperforms the baselines on most of the correlation coefficient metrics, especially on the metric KTAU related to quality ranking. And our framework also surpasses the strong baseline in ranking accuracy on each fine-grained segment. These results indicate that our framework contributes to improving the ranking accuracy of speech quality.", }
As a subjective metric to evaluate the quality of synthesized speech, Mean opinion score (MOS) usually requires multiple annotators to score the same speech. Such an annotation approach requires a lot of manpower and is also time-consuming. A MOS prediction model for automatic evaluation can significantly reduce labor cost. In previous works, it is difficult to accurately rank the quality of speech when the MOS scores are close. However, in practical applications, it is more important to correctly rank the quality of synthesis systems or sentences than simply predicting MOS scores. Meanwhile, as each annotator scores multiple audios during annotation, the score is probably a relative value based on the first or the first few speech scores given by the annotator. Motivated by the above two points, we propose a general framework for MOS prediction based on pair comparison (MOSPC), and we utilize the C-Mixup algorithm to enhance the generalization performance of MOSPC. The experiments on BVCC and VCC2018 show that our framework outperforms the baselines on most of the correlation coefficient metrics, especially on the metric KTAU related to quality ranking. And our framework also surpasses the strong baseline in ranking accuracy on each fine-grained segment. These results indicate that our framework contributes to improving the ranking accuracy of speech quality.
[ "Wang, Kexin", "Zhao, Yunlong", "Dong, Qianqian", "Ko, Tom", "Wang, Mingxuan" ]
MOSPC: MOS Prediction Based on Pairwise Comparison
acl-short.132
Poster
2306.10493
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.133.bib
https://aclanthology.org/2023.acl-short.133/
@inproceedings{lin-etal-2023-li, title = "{LI}-{RAGE}: Late Interaction Retrieval Augmented Generation with Explicit Signals for Open-Domain Table Question Answering", author = "Lin, Weizhe and Blloshmi, Rexhina and Byrne, Bill and de Gispert, Adria and Iglesias, Gonzalo", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.133", doi = "10.18653/v1/2023.acl-short.133", pages = "1557--1566", abstract = "Recent open-domain TableQA models are typically implemented as retriever-reader pipelines. The retriever component is usually a variant of the Dense Passage Retriever, which computes the similarities between questions and tables based on a single representation of each. These fixed vectors can be insufficient to capture fine-grained features of potentially very big tables with heterogeneous row/column information. We address this limitation by 1) applying late interaction models which enforce a finer-grained interaction between question and table embeddings at retrieval time. In addition, we 2) incorporate a joint training scheme of the retriever and reader with explicit table-level signals, and 3) embed a binary relevance token as a prefix to the answer generated by the reader, so we can determine at inference time whether the table used to answer the question is reliable and filter accordingly. The combined strategies set a new state-of-the-art performance on two public open-domain TableQA datasets.", }
Recent open-domain TableQA models are typically implemented as retriever-reader pipelines. The retriever component is usually a variant of the Dense Passage Retriever, which computes the similarities between questions and tables based on a single representation of each. These fixed vectors can be insufficient to capture fine-grained features of potentially very big tables with heterogeneous row/column information. We address this limitation by 1) applying late interaction models which enforce a finer-grained interaction between question and table embeddings at retrieval time. In addition, we 2) incorporate a joint training scheme of the retriever and reader with explicit table-level signals, and 3) embed a binary relevance token as a prefix to the answer generated by the reader, so we can determine at inference time whether the table used to answer the question is reliable and filter accordingly. The combined strategies set a new state-of-the-art performance on two public open-domain TableQA datasets.
[ "Lin, Weizhe", "Blloshmi, Rexhina", "Byrne, Bill", "de Gispert, Adria", "Iglesias, Gonzalo" ]
LI-RAGE: Late Interaction Retrieval Augmented Generation with Explicit Signals for Open-Domain Table Question Answering
acl-short.133
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.134.bib
https://aclanthology.org/2023.acl-short.134/
@inproceedings{li-etal-2023-well, title = "How Well Apply Simple {MLP} to Incomplete Utterance Rewriting?", author = "Li, Jiang and Su, Xiangdong and Ma, Xinlan and Gao, Guanglai", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.134", doi = "10.18653/v1/2023.acl-short.134", pages = "1567--1576", abstract = "Incomplete utterance rewriting (IUR) aims to restore the incomplete utterance with sufficient context information for comprehension. This paper introduces a simple yet efficient IUR method. Different from prior studies, we first employ only a one-layer \textbf{M}LP architecture to mine latent semantic information between joint utterances for the \textbf{IUR} task (\textbf{MIUR}). After that, we construct a joint feature matrix to predict the token type and thus restore the incomplete utterance. The well-designed network and simple architecture make our method significantly superior to existing methods in terms of quality and inference speed. Our code is available at \url{https://github.com/IMU-MachineLearningSXD/MIUR}.", }
Incomplete utterance rewriting (IUR) aims to restore the incomplete utterance with sufficient context information for comprehension. This paper introduces a simple yet efficient IUR method. Different from prior studies, we first employ only a one-layer \textbf{M}LP architecture to mine latent semantic information between joint utterances for the \textbf{IUR} task (\textbf{MIUR}). After that, we construct a joint feature matrix to predict the token type and thus restore the incomplete utterance. The well-designed network and simple architecture make our method significantly superior to existing methods in terms of quality and inference speed. Our code is available at \url{https://github.com/IMU-MachineLearningSXD/MIUR}.
[ "Li, Jiang", "Su, Xiangdong", "Ma, Xinlan", "Gao, Guanglai" ]
How Well Apply Simple MLP to Incomplete Utterance Rewriting?
acl-short.134
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.135.bib
https://aclanthology.org/2023.acl-short.135/
@inproceedings{cassotti-etal-2023-xl, title = "{XL}-{LEXEME}: {W}i{C} Pretrained Model for Cross-Lingual {LEX}ical s{EM}antic chang{E}", author = "Cassotti, Pierluigi and Siciliani, Lucia and DeGemmis, Marco and Semeraro, Giovanni and Basile, Pierpaolo", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.135", doi = "10.18653/v1/2023.acl-short.135", pages = "1577--1585", abstract = "The recent introduction of large-scale datasets for the WiC (Word in Context) task enables the creation of more reliable and meaningful contextualized word embeddings. However, most of the approaches to the WiC task use cross-encoders, which prevent the possibility of deriving comparable word embeddings. In this work, we introduce XL-LEXEME, a Lexical Semantic Change Detection model. XL-LEXEME extends SBERT, highlighting the target word in the sentence. We evaluate XL-LEXEME on the multilingual benchmarks for SemEval-2020 Task 1 - Lexical Semantic Change (LSC) Detection and the RuShiftEval shared task involving five languages: English, German, Swedish, Latin, and Russian. XL-LEXEME outperforms the state-of-the-art in English, German and Swedish with statistically significant differences from the baseline results and obtains state-of-the-art performance in the RuShiftEval shared task.", }
The recent introduction of large-scale datasets for the WiC (Word in Context) task enables the creation of more reliable and meaningful contextualized word embeddings. However, most of the approaches to the WiC task use cross-encoders, which prevent the possibility of deriving comparable word embeddings. In this work, we introduce XL-LEXEME, a Lexical Semantic Change Detection model. XL-LEXEME extends SBERT, highlighting the target word in the sentence. We evaluate XL-LEXEME on the multilingual benchmarks for SemEval-2020 Task 1 - Lexical Semantic Change (LSC) Detection and the RuShiftEval shared task involving five languages: English, German, Swedish, Latin, and Russian. XL-LEXEME outperforms the state-of-the-art in English, German and Swedish with statistically significant differences from the baseline results and obtains state-of-the-art performance in the RuShiftEval shared task.
[ "Cassotti, Pierluigi", "Siciliani, Lucia", "DeGemmis, Marco", "Semeraro, Giovanni", "Basile, Pierpaolo" ]
XL-LEXEME: WiC Pretrained Model for Cross-Lingual LEXical sEMantic changE
acl-short.135
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.136.bib
https://aclanthology.org/2023.acl-short.136/
@inproceedings{mccarthy-dore-2023-theory, title = "Theory-Grounded Computational Text Analysis", author = "McCarthy, Arya D. and Dore, Giovanna Maria Dora", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.136", doi = "10.18653/v1/2023.acl-short.136", pages = "1586--1594", abstract = "In this position paper, we argue that computational text analysis lacks and requires organizing principles. A broad space separates its two constituent disciplines{---}natural language processing and social science{---}which has to date been sidestepped rather than filled by applying increasingly complex computational models to problems in social science research. We contrast descriptive and integrative findings, and our review of approximately 60 papers on computational text analysis reveals that those from *ACL venues are typically descriptive. The lack of theory began at the area{'}s inception and has, over the decades, grown more important and challenging. A return to theoretically grounded research questions will propel the area from both theoretical and methodological points of view.", }
In this position paper, we argue that computational text analysis lacks and requires organizing principles. A broad space separates its two constituent disciplines{---}natural language processing and social science{---}which has to date been sidestepped rather than filled by applying increasingly complex computational models to problems in social science research. We contrast descriptive and integrative findings, and our review of approximately 60 papers on computational text analysis reveals that those from *ACL venues are typically descriptive. The lack of theory began at the area{'}s inception and has, over the decades, grown more important and challenging. A return to theoretically grounded research questions will propel the area from both theoretical and methodological points of view.
[ "McCarthy, Arya D.", "Dore, Giovanna Maria Dora" ]
Theory-Grounded Computational Text Analysis
acl-short.136
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.137.bib
https://aclanthology.org/2023.acl-short.137/
@inproceedings{martinez-lorenzo-etal-2023-amrs, title = "{AMR}s Assemble! Learning to Ensemble with Autoregressive Models for {AMR} Parsing", author = "Mart{\'\i}nez Lorenzo, Abelardo Carlos and Huguet Cabot, Pere Llu{\'\i}s and Navigli, Roberto", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.137", doi = "10.18653/v1/2023.acl-short.137", pages = "1595--1605", abstract = "In this paper, we examine the current state-of-the-art in AMR parsing, which relies on ensemble strategies by merging multiple graph predictions. Our analysis reveals that the present models often violate AMR structural constraints. To address this issue, we develop a validation method, and show how ensemble models can exploit SMATCH metric weaknesses to obtain higher scores, but sometimes result in corrupted graphs. Additionally, we highlight the demanding need to compute the SMATCH score among all possible predictions. To overcome these challenges, we propose two novel ensemble strategies based on Transformer models, improving robustness to structural constraints, while also reducing the computational time. Our methods provide new insights for enhancing AMR parsers and metrics. Our code is available at [\url{https://www.github.com/babelscape/AMRs-Assemble}](\url{https://www.github.com/babelscape/AMRs-Assemble}).", }
In this paper, we examine the current state-of-the-art in AMR parsing, which relies on ensemble strategies by merging multiple graph predictions. Our analysis reveals that the present models often violate AMR structural constraints. To address this issue, we develop a validation method, and show how ensemble models can exploit SMATCH metric weaknesses to obtain higher scores, but sometimes result in corrupted graphs. Additionally, we highlight the demanding need to compute the SMATCH score among all possible predictions. To overcome these challenges, we propose two novel ensemble strategies based on Transformer models, improving robustness to structural constraints, while also reducing the computational time. Our methods provide new insights for enhancing AMR parsers and metrics. Our code is available at [\url{https://www.github.com/babelscape/AMRs-Assemble}](\url{https://www.github.com/babelscape/AMRs-Assemble}).
[ "Mart{\\'\\i}nez Lorenzo, Abelardo Carlos", "Huguet Cabot, Pere Llu{\\'\\i}s", "Navigli, Roberto" ]
AMRs Assemble! Learning to Ensemble with Autoregressive Models for AMR Parsing
acl-short.137
Poster
2306.10786
[ "https://github.com/babelscape/amrs-assemble" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.138.bib
https://aclanthology.org/2023.acl-short.138/
@inproceedings{liu-etal-2023-molxpt, title = "{M}ol{XPT}: Wrapping Molecules with Text for Generative Pre-training", author = "Liu, Zequn and Zhang, Wei and Xia, Yingce and Wu, Lijun and Xie, Shufang and Qin, Tao and Zhang, Ming and Liu, Tie-Yan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.138", doi = "10.18653/v1/2023.acl-short.138", pages = "1606--1616", abstract = "Generative pre-trained Transformer (GPT) has demonstrated great success in natural language processing, and related techniques have been adapted into molecular modeling. Considering that text is the most important record for scientific discovery, in this paper, we propose MolXPT, a unified language model of text and molecules pre-trained on SMILES (a sequence representation of molecules) wrapped by text. Briefly, we detect the molecule names in each sequence and replace them with the corresponding SMILES. In this way, the SMILES could leverage the information from surrounding text, and vice versa. The above wrapped sequences, text sequences from PubMed, and SMILES sequences from PubChem are all fed into a language model for pre-training. Experimental results demonstrate that MolXPT outperforms strong baselines of molecular property prediction on MoleculeNet, performs comparably to the best model in text-molecule translation while using less than half of its parameters, and enables zero-shot molecular generation without finetuning.", }
Generative pre-trained Transformer (GPT) has demonstrated great success in natural language processing, and related techniques have been adapted into molecular modeling. Considering that text is the most important record for scientific discovery, in this paper, we propose MolXPT, a unified language model of text and molecules pre-trained on SMILES (a sequence representation of molecules) wrapped by text. Briefly, we detect the molecule names in each sequence and replace them with the corresponding SMILES. In this way, the SMILES could leverage the information from surrounding text, and vice versa. The above wrapped sequences, text sequences from PubMed, and SMILES sequences from PubChem are all fed into a language model for pre-training. Experimental results demonstrate that MolXPT outperforms strong baselines of molecular property prediction on MoleculeNet, performs comparably to the best model in text-molecule translation while using less than half of its parameters, and enables zero-shot molecular generation without finetuning.
[ "Liu, Zequn", "Zhang, Wei", "Xia, Yingce", "Wu, Lijun", "Xie, Shufang", "Qin, Tao", "Zhang, Ming", "Liu, Tie-Yan" ]
MolXPT: Wrapping Molecules with Text for Generative Pre-training
acl-short.138
Poster
2305.10688
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.139.bib
https://aclanthology.org/2023.acl-short.139/
@inproceedings{luo-etal-2023-study, title = "A Study on the Efficiency and Generalization of Light Hybrid Retrievers", author = "Luo, Man and Jain, Shashank and Gupta, Anchit and Einolghozati, Arash and Oguz, Barlas and Chatterjee, Debojeet and Chen, Xilun and Baral, Chitta and Heidari, Peyman", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.139", doi = "10.18653/v1/2023.acl-short.139", pages = "1617--1626", abstract = "Hybrid retrievers can take advantage of both sparse and dense retrievers. Previous hybrid retrievers leverage indexing-heavy dense retrievers. In this work, we study {``}Is it possible to reduce the indexing memory of hybrid retrievers without sacrificing performance{''}? Driven by this question, we leverage an indexing-efficient dense retriever (i.e. DrBoost) and introduce a LITE retriever that further reduces the memory of DrBoost. LITE is jointly trained via contrastive learning and knowledge distillation from DrBoost. Then, we integrate BM25, a sparse retriever, with either LITE or DrBoost to form light hybrid retrievers. Our Hybrid-LITE retriever saves $13\times$ memory while maintaining 98.0{\%} of the performance of the hybrid retriever of BM25 and DPR. In addition, we study the generalization capacity of our light hybrid retrievers on an out-of-domain dataset and a set of adversarial attack datasets. Experiments showcase that light hybrid retrievers achieve better generalization performance than individual sparse and dense retrievers. Nevertheless, our analysis shows that there is large room to improve the robustness of retrievers, suggesting a new research direction.", }
Hybrid retrievers can take advantage of both sparse and dense retrievers. Previous hybrid retrievers leverage indexing-heavy dense retrievers. In this work, we study {``}Is it possible to reduce the indexing memory of hybrid retrievers without sacrificing performance{''}? Driven by this question, we leverage an indexing-efficient dense retriever (i.e. DrBoost) and introduce a LITE retriever that further reduces the memory of DrBoost. LITE is jointly trained via contrastive learning and knowledge distillation from DrBoost. Then, we integrate BM25, a sparse retriever, with either LITE or DrBoost to form light hybrid retrievers. Our Hybrid-LITE retriever saves $13\times$ memory while maintaining 98.0{\%} of the performance of the hybrid retriever of BM25 and DPR. In addition, we study the generalization capacity of our light hybrid retrievers on an out-of-domain dataset and a set of adversarial attack datasets. Experiments showcase that light hybrid retrievers achieve better generalization performance than individual sparse and dense retrievers. Nevertheless, our analysis shows that there is large room to improve the robustness of retrievers, suggesting a new research direction.
[ "Luo, Man", "Jain, Shashank", "Gupta, Anchit", "Einolghozati, Arash", "Oguz, Barlas", "Chatterjee, Debojeet", "Chen, Xilun", "Baral, Chitta", "Heidari, Peyman" ]
A Study on the Efficiency and Generalization of Light Hybrid Retrievers
acl-short.139
Poster
2210.01371
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.140.bib
https://aclanthology.org/2023.acl-short.140/
@inproceedings{agnew-etal-2023-mechanical, title = "The Mechanical Bard: An Interpretable Machine Learning Approach to {S}hakespearean Sonnet Generation", author = "Agnew, Edwin and Qiu, Michelle and Zhu, Lily and Wiseman, Sam and Rudin, Cynthia", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.140", doi = "10.18653/v1/2023.acl-short.140", pages = "1627--1638", abstract = "We consider the automated generation of sonnets, a poetic form constrained according to meter, rhyme scheme, and length. Sonnets generally also use rhetorical figures, expressive language, and a consistent theme or narrative. Our constrained decoding approach allows for the generation of sonnets within preset poetic constraints, while using a relatively modest neural backbone. Human evaluation confirms that our approach produces Shakespearean sonnets that resemble human-authored sonnets, and which adhere to the genre{'}s defined constraints and contain lyrical language and literary devices.", }
We consider the automated generation of sonnets, a poetic form constrained according to meter, rhyme scheme, and length. Sonnets generally also use rhetorical figures, expressive language, and a consistent theme or narrative. Our constrained decoding approach allows for the generation of sonnets within preset poetic constraints, while using a relatively modest neural backbone. Human evaluation confirms that our approach produces Shakespearean sonnets that resemble human-authored sonnets, and which adhere to the genre{'}s defined constraints and contain lyrical language and literary devices.
[ "Agnew, Edwin", "Qiu, Michelle", "Zhu, Lily", "Wiseman, Sam", "Rudin, Cynthia" ]
The Mechanical Bard: An Interpretable Machine Learning Approach to Shakespearean Sonnet Generation
acl-short.140
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.141.bib
https://aclanthology.org/2023.acl-short.141/
@inproceedings{diwan-etal-2023-use, title = "When to Use Efficient Self Attention? Profiling Text, Speech and Image Transformer Variants", author = "Diwan, Anuj and Choi, Eunsol and Harwath, David", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.141", doi = "10.18653/v1/2023.acl-short.141", pages = "1639--1650", abstract = "We present the first unified study of the efficiency of self-attention-based Transformer variants spanning text, speech and vision. We identify input length thresholds (tipping points) at which efficient Transformer variants become more efficient than vanilla models, using a variety of efficiency metrics (latency, throughput, and memory). To conduct this analysis for speech, we introduce L-HuBERT, a novel local-attention variant of a self-supervised speech model. We observe that these thresholds are (a) much higher than typical dataset sequence lengths and (b) dependent on the metric and modality, showing that choosing the right model depends on modality, task type (long-form vs. typical context) and resource constraints (time vs. memory). By visualising the breakdown of the computational costs for transformer components, we also show that non-self-attention components exhibit significant computational costs. We release our profiling toolkit at \url{https://github.com/ajd12342/profiling-transformers} .", }
We present the first unified study of the efficiency of self-attention-based Transformer variants spanning text, speech and vision. We identify input length thresholds (tipping points) at which efficient Transformer variants become more efficient than vanilla models, using a variety of efficiency metrics (latency, throughput, and memory). To conduct this analysis for speech, we introduce L-HuBERT, a novel local-attention variant of a self-supervised speech model. We observe that these thresholds are (a) much higher than typical dataset sequence lengths and (b) dependent on the metric and modality, showing that choosing the right model depends on modality, task type (long-form vs. typical context) and resource constraints (time vs. memory). By visualising the breakdown of the computational costs for transformer components, we also show that non-self-attention components exhibit significant computational costs. We release our profiling toolkit at \url{https://github.com/ajd12342/profiling-transformers} .
[ "Diwan, Anuj", "Choi, Eunsol", "Harwath, David" ]
When to Use Efficient Self Attention? Profiling Text, Speech and Image Transformer Variants
acl-short.141
Poster
2306.08667
[ "https://github.com/ajd12342/profiling-transformers" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.142.bib
https://aclanthology.org/2023.acl-short.142/
@inproceedings{cai-oconnor-2023-evaluating, title = "Evaluating Zero-Shot Event Structures: Recommendations for Automatic Content Extraction ({ACE}) Annotations", author = "Cai, Erica and O{'}Connor, Brendan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.142", doi = "10.18653/v1/2023.acl-short.142", pages = "1651--1665", abstract = "Zero-shot event extraction (EE) methods infer richly structured event records from text, based only on a minimal user specification and no training examples, which enables flexibility in exploring and developing applications. Most event extraction research uses the Automatic Content Extraction (ACE) annotated dataset to evaluate supervised EE methods, but can it be used to evaluate zero-shot and other low-supervision EE? We describe ACE{'}s event structures and identify significant ambiguities and issues in current evaluation practice, including (1) coreferent argument mentions, (2) conflicting argument head conventions, and (3) ignorance of modality and event class details. By sometimes mishandling these subtleties, current work may dramatically understate the actual performance of zero-shot and other low-supervision EE, considering up to 32{\%} of correctly identified arguments and 25{\%} of correctly ignored event mentions as false negatives. For each issue, we propose recommendations for future evaluations so the research community can better utilize ACE as an event evaluation resource.", }
Zero-shot event extraction (EE) methods infer richly structured event records from text, based only on a minimal user specification and no training examples, which enables flexibility in exploring and developing applications. Most event extraction research uses the Automatic Content Extraction (ACE) annotated dataset to evaluate supervised EE methods, but can it be used to evaluate zero-shot and other low-supervision EE? We describe ACE{'}s event structures and identify significant ambiguities and issues in current evaluation practice, including (1) coreferent argument mentions, (2) conflicting argument head conventions, and (3) ignorance of modality and event class details. By sometimes mishandling these subtleties, current work may dramatically understate the actual performance of zero-shot and other low-supervision EE, considering up to 32{\%} of correctly identified arguments and 25{\%} of correctly ignored event mentions as false negatives. For each issue, we propose recommendations for future evaluations so the research community can better utilize ACE as an event evaluation resource.
[ "Cai, Erica", "O{'}Connor, Brendan" ]
Evaluating Zero-Shot Event Structures: Recommendations for Automatic Content Extraction (ACE) Annotations
acl-short.142
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.143.bib
https://aclanthology.org/2023.acl-short.143/
@inproceedings{lu-etal-2023-event, title = "Event Extraction as Question Generation and Answering", author = "Lu, Di and Ran, Shihao and Tetreault, Joel and Jaimes, Alejandro", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.143", doi = "10.18653/v1/2023.acl-short.143", pages = "1666--1688", abstract = "Recent work on Event Extraction has reframed the task as Question Answering (QA), with promising results. The advantage of this approach is that it addresses the error propagation issue found in traditional token-based classification approaches by directly predicting event arguments without extracting candidates first. However, the questions are typically based on fixed templates and they rarely leverage contextual information such as relevant arguments. In addition, prior QA-based approaches have difficulty handling cases where there are multiple arguments for the same role. In this paper, we propose QGA-EE, which enables a Question Generation (QG) model to generate questions that incorporate rich contextual information instead of using fixed templates. We also propose dynamic templates to assist the training of the QG model. Experiments show that QGA-EE outperforms all prior single-task-based models on the ACE05 English dataset.", }
Recent work on Event Extraction has reframed the task as Question Answering (QA), with promising results. The advantage of this approach is that it addresses the error propagation issue found in traditional token-based classification approaches by directly predicting event arguments without extracting candidates first. However, the questions are typically based on fixed templates and they rarely leverage contextual information such as relevant arguments. In addition, prior QA-based approaches have difficulty handling cases where there are multiple arguments for the same role. In this paper, we propose QGA-EE, which enables a Question Generation (QG) model to generate questions that incorporate rich contextual information instead of using fixed templates. We also propose dynamic templates to assist the training of the QG model. Experiments show that QGA-EE outperforms all prior single-task-based models on the ACE05 English dataset.
[ "Lu, Di", "Ran, Shihao", "Tetreault, Joel", "Jaimes, Alejandro" ]
Event Extraction as Question Generation and Answering
acl-short.143
Poster
2307.05567
[ "https://github.com/dataminr-ai/event-extraction-as-question-generation-and-answering" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.144.bib
https://aclanthology.org/2023.acl-short.144/
@inproceedings{liu-etal-2023-sample, title = "Are Sample-Efficient {NLP} Models More Robust?", author = "Liu, Nelson F. and Kumar, Ananya and Liang, Percy and Jia, Robin", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.144", doi = "10.18653/v1/2023.acl-short.144", pages = "1689--1709", abstract = "Recent results in image classification and extractive question answering have observed that pre-trained models trained on less in-distribution data have better out-of-distribution performance. However, it is unclear how broadly these trends hold. We conduct a large empirical study across three tasks, three broadly-applicable modeling interventions (increasing model size, using a different adaptation method, and pre-training on more data), and 14 diverse datasets to investigate the relationship between sample efficiency (amount of data needed to reach a given ID accuracy) and robustness (how models fare on OOD evaluation). We find that higher sample efficiency is only correlated with better average OOD robustness on some modeling interventions and tasks, but not others. On individual datasets, models with lower sample efficiency can even be more robust. These results suggest that general-purpose methods for improving sample efficiency are unlikely to yield universal OOD robustness improvements, since such improvements are highly dataset- and task-dependent. Even in an era of large, multi-purpose pre-trained models, task-specific decisions may often be necessary for OOD generalization.", }
Recent results in image classification and extractive question answering have observed that pre-trained models trained on less in-distribution data have better out-of-distribution performance. However, it is unclear how broadly these trends hold. We conduct a large empirical study across three tasks, three broadly-applicable modeling interventions (increasing model size, using a different adaptation method, and pre-training on more data), and 14 diverse datasets to investigate the relationship between sample efficiency (amount of data needed to reach a given ID accuracy) and robustness (how models fare on OOD evaluation). We find that higher sample efficiency is only correlated with better average OOD robustness on some modeling interventions and tasks, but not others. On individual datasets, models with lower sample efficiency can even be more robust. These results suggest that general-purpose methods for improving sample efficiency are unlikely to yield universal OOD robustness improvements, since such improvements are highly dataset- and task-dependent. Even in an era of large, multi-purpose pre-trained models, task-specific decisions may often be necessary for OOD generalization.
[ "Liu, Nelson F.", "Kumar, Ananya", "Liang, Percy", "Jia, Robin" ]
Are Sample-Efficient NLP Models More Robust?
acl-short.144
Poster
2210.06456
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.145.bib
https://aclanthology.org/2023.acl-short.145/
@inproceedings{li-etal-2023-diversity, title = "Diversity-Aware Coherence Loss for Improving Neural Topic Models", author = "Li, Raymond and Gonzalez-Pizarro, Felipe and Xing, Linzi and Murray, Gabriel and Carenini, Giuseppe", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.145", doi = "10.18653/v1/2023.acl-short.145", pages = "1710--1722", abstract = "The standard approach for neural topic modeling uses a variational autoencoder (VAE) framework that jointly minimizes the KL divergence between the estimated posterior and prior, in addition to the reconstruction loss. Since neural topic models are trained by recreating individual input documents, they do not explicitly capture the coherence between words on the corpus level. In this work, we propose a novel diversity-aware coherence loss that encourages the model to learn corpus-level coherence scores while maintaining high diversity between topics. Experimental results on multiple datasets show that our method significantly improves the performance of neural topic models without requiring any pretraining or additional parameters.", }
The standard approach for neural topic modeling uses a variational autoencoder (VAE) framework that jointly minimizes the KL divergence between the estimated posterior and prior, in addition to the reconstruction loss. Since neural topic models are trained by recreating individual input documents, they do not explicitly capture the coherence between words on the corpus level. In this work, we propose a novel diversity-aware coherence loss that encourages the model to learn corpus-level coherence scores while maintaining high diversity between topics. Experimental results on multiple datasets show that our method significantly improves the performance of neural topic models without requiring any pretraining or additional parameters.
[ "Li, Raymond", "Gonzalez-Pizarro, Felipe", "Xing, Linzi", "Murray, Gabriel", "Carenini, Giuseppe" ]
Diversity-Aware Coherence Loss for Improving Neural Topic Models
acl-short.145
Poster
2305.16199
[ "https://github.com/raymondzmc/topic-model-diversity-aware-coherence-loss" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.146.bib
https://aclanthology.org/2023.acl-short.146/
@inproceedings{li-etal-2023-narrowbert, title = "{N}arrow{BERT}: Accelerating Masked Language Model Pretraining and Inference", author = "Li, Haoxin and Keung, Phillip and Cheng, Daniel and Kasai, Jungo and Smith, Noah A.", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.146", doi = "10.18653/v1/2023.acl-short.146", pages = "1723--1730", abstract = "Large-scale language model pretraining is a very successful form of self-supervised learning in natural language processing, but it is increasingly expensive to perform as the models and pretraining corpora have become larger over time. We propose NarrowBERT, a modified transformer encoder that increases the throughput for masked language model pretraining by more than 2x. NarrowBERT sparsifies the transformer model such that the self-attention queries and feedforward layers only operate on the masked tokens of each sentence during pretraining, rather than all of the tokens as with the usual transformer encoder. We also show that NarrowBERT increases the throughput at inference time by as much as 3.5x with minimal (or no) performance degradation on sentence encoding tasks like MNLI. Finally, we examine the performance of NarrowBERT on the IMDB and Amazon reviews classification and CoNLL NER tasks and show that it is also comparable to standard BERT performance.", }
Large-scale language model pretraining is a very successful form of self-supervised learning in natural language processing, but it is increasingly expensive to perform as the models and pretraining corpora have become larger over time. We propose NarrowBERT, a modified transformer encoder that increases the throughput for masked language model pretraining by more than 2x. NarrowBERT sparsifies the transformer model such that the self-attention queries and feedforward layers only operate on the masked tokens of each sentence during pretraining, rather than all of the tokens as with the usual transformer encoder. We also show that NarrowBERT increases the throughput at inference time by as much as 3.5x with minimal (or no) performance degradation on sentence encoding tasks like MNLI. Finally, we examine the performance of NarrowBERT on the IMDB and Amazon reviews classification and CoNLL NER tasks and show that it is also comparable to standard BERT performance.
[ "Li, Haoxin", "Keung, Phillip", "Cheng, Daniel", "Kasai, Jungo", "Smith, Noah A." ]
NarrowBERT: Accelerating Masked Language Model Pretraining and Inference
acl-short.146
Poster
2301.04761
[ "https://github.com/lihaoxin2020/narrowbert" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.147.bib
https://aclanthology.org/2023.acl-short.147/
@inproceedings{lei-etal-2023-s3hqa, title = "{S}3{HQA}: A Three-Stage Approach for Multi-hop Text-Table Hybrid Question Answering", author = "Lei, Fangyu and Li, Xiang and Wei, Yifan and He, Shizhu and Huang, Yiming and Zhao, Jun and Liu, Kang", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.147", doi = "10.18653/v1/2023.acl-short.147", pages = "1731--1740", abstract = "Answering multi-hop questions over hybrid factual knowledge from the given text and table (TextTableQA) is a challenging task. Existing models mainly adopt a retriever-reader framework, which has several deficiencies, such as noisy labeling when training the retriever, insufficient utilization of heterogeneous information over text and table, and a deficient ability to perform different reasoning operations. In this paper, we propose a three-stage TextTableQA framework, S3HQA, which comprises a retriever, a selector, and a reasoner. We use a retriever with refinement training to solve the noisy labeling problem. Then, a hybrid selector considers the linked relationships between heterogeneous data to select the most relevant factual knowledge. For the final stage, instead of adopting a reading comprehension module as in previous methods, we employ a generation-based reasoner to obtain answers. This includes two approaches: a row-wise generator and an LLM prompting generator (used for the first time in this task). The experimental results demonstrate that our method achieves competitive results in the few-shot setting. When trained on the full dataset, our approach outperforms all baseline methods, ranking first on the HybridQA leaderboard.", }
Answering multi-hop questions over hybrid factual knowledge from the given text and table (TextTableQA) is a challenging task. Existing models mainly adopt a retriever-reader framework, which has several deficiencies, such as noisy labeling when training the retriever, insufficient utilization of heterogeneous information over text and table, and a deficient ability to perform different reasoning operations. In this paper, we propose a three-stage TextTableQA framework, S3HQA, which comprises a retriever, a selector, and a reasoner. We use a retriever with refinement training to solve the noisy labeling problem. Then, a hybrid selector considers the linked relationships between heterogeneous data to select the most relevant factual knowledge. For the final stage, instead of adopting a reading comprehension module as in previous methods, we employ a generation-based reasoner to obtain answers. This includes two approaches: a row-wise generator and an LLM prompting generator (used for the first time in this task). The experimental results demonstrate that our method achieves competitive results in the few-shot setting. When trained on the full dataset, our approach outperforms all baseline methods, ranking first on the HybridQA leaderboard.
[ "Lei, Fangyu", "Li, Xiang", "Wei, Yifan", "He, Shizhu", "Huang, Yiming", "Zhao, Jun", "Liu, Kang" ]
S3HQA: A Three-Stage Approach for Multi-hop Text-Table Hybrid Question Answering
acl-short.147
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.148.bib
https://aclanthology.org/2023.acl-short.148/
@inproceedings{sun-etal-2023-towards, title = "Towards Fewer Hallucinations in Knowledge-Grounded Dialogue Generation via Augmentative and Contrastive Knowledge-Dialogue", author = "Sun, Bin and Li, Yitong and Mi, Fei and Bie, Fanhu and Li, Yiwei and Li, Kan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.148", doi = "10.18653/v1/2023.acl-short.148", pages = "1741--1750", abstract = "Existing knowledge-grounded open-domain dialogue generation models often face the hallucination problem, i.e., the generative dialogue model persists with inappropriate knowledge and generates responses that are inconsistent with the facts. We argue that this problem mainly stems from the polarized optimization objectives and weak knowledge generation ability. To mitigate the hallucination, we take inspiration from human communication, where people reply with euphemistic responses when knowledge is unclear or unrecognizable, and propose an Augmentative and Contrastive Knowledge Dialogue Expansion Framework (ACK-DEF). ACK-DEF constructs augmentative and contrastive knowledge-dialogue samples, which consist of knowledge with different degrees of error and manually designed responses, to expand the original training set and smooth the polarized optimization objective, enabling models to generate the ground truth with or without gold knowledge. Beyond the knowledge, ACK-DEF also provides manually designed tactful responses corresponding to incompletely correct knowledge. Experimental results on the Wizard of Wikipedia dataset show that employing ACK-DEF is effective in alleviating the hallucination problem.", }
Existing knowledge-grounded open-domain dialogue generation models often face the hallucination problem, i.e., the generative dialogue model persists with inappropriate knowledge and generates responses that are inconsistent with the facts. We argue that this problem mainly stems from the polarized optimization objectives and weak knowledge generation ability. To mitigate the hallucination, we take inspiration from human communication, where people reply with euphemistic responses when knowledge is unclear or unrecognizable, and propose an Augmentative and Contrastive Knowledge Dialogue Expansion Framework (ACK-DEF). ACK-DEF constructs augmentative and contrastive knowledge-dialogue samples, which consist of knowledge with different degrees of error and manually designed responses, to expand the original training set and smooth the polarized optimization objective, enabling models to generate the ground truth with or without gold knowledge. Beyond the knowledge, ACK-DEF also provides manually designed tactful responses corresponding to incompletely correct knowledge. Experimental results on the Wizard of Wikipedia dataset show that employing ACK-DEF is effective in alleviating the hallucination problem.
[ "Sun, Bin", "Li, Yitong", "Mi, Fei", "Bie, Fanhu", "Li, Yiwei", "Li, Kan" ]
Towards Fewer Hallucinations in Knowledge-Grounded Dialogue Generation via Augmentative and Contrastive Knowledge-Dialogue
acl-short.148
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.149.bib
https://aclanthology.org/2023.acl-short.149/
@inproceedings{li-etal-2023-autoconv, title = "{A}uto{C}onv: Automatically Generating Information-seeking Conversations with Large Language Models", author = "Li, Siheng and Yang, Cheng and Yin, Yichun and Zhu, Xinyu and Cheng, Zesen and Shang, Lifeng and Jiang, Xin and Liu, Qun and Yang, Yujiu", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.149", doi = "10.18653/v1/2023.acl-short.149", pages = "1751--1762", abstract = "Information-seeking conversation, which aims to help users gather information through conversation, has achieved great progress in recent years. However, the research is still stymied by the scarcity of training data. To alleviate this problem, we propose AutoConv for synthetic conversation generation, which takes advantage of the few-shot learning ability and generation capacity of large language models (LLMs). Specifically, we formulate the conversation generation problem as a language modeling task, then finetune an LLM with a few human conversations to capture the characteristics of the information-seeking process and use it to generate high-quality synthetic conversations. Experimental results on two frequently-used datasets verify that AutoConv has substantial improvements over strong baselines and alleviates the dependence on human annotation. In addition, we also provide several analysis studies to promote future research.", }
Information-seeking conversation, which aims to help users gather information through conversation, has achieved great progress in recent years. However, the research is still stymied by the scarcity of training data. To alleviate this problem, we propose AutoConv for synthetic conversation generation, which takes advantage of the few-shot learning ability and generation capacity of large language models (LLMs). Specifically, we formulate the conversation generation problem as a language modeling task, then finetune an LLM with a few human conversations to capture the characteristics of the information-seeking process and use it to generate high-quality synthetic conversations. Experimental results on two frequently-used datasets verify that AutoConv has substantial improvements over strong baselines and alleviates the dependence on human annotation. In addition, we also provide several analysis studies to promote future research.
[ "Li, Siheng", "Yang, Cheng", "Yin, Yichun", "Zhu, Xinyu", "Cheng, Zesen", "Shang, Lifeng", "Jiang, Xin", "Liu, Qun", "Yang, Yujiu" ]
AutoConv: Automatically Generating Information-seeking Conversations with Large Language Models
acl-short.149
Poster
2308.06507
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.150.bib
https://aclanthology.org/2023.acl-short.150/
@inproceedings{pluss-etal-2023-stt4sg, title = "{STT}4{SG}-350: A Speech Corpus for All {S}wiss {G}erman Dialect Regions", author = {Pl{\"u}ss, Michel and Deriu, Jan and Schraner, Yanick and Paonessa, Claudio and Hartmann, Julia and Schmidt, Larissa and Scheller, Christian and H{\"u}rlimann, Manuela and Samard{\v{z}}i{\'c}, Tanja and Vogel, Manfred and Cieliebak, Mark}, editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.150", doi = "10.18653/v1/2023.acl-short.150", pages = "1763--1772", abstract = "We present STT4SG-350, a corpus of Swiss German speech, annotated with Standard German text at the sentence level. The data is collected using a web app in which the speakers are shown Standard German sentences, which they translate to Swiss German and record. We make the corpus publicly available. It contains 343 hours of speech from all dialect regions and is the largest public speech corpus for Swiss German to date. Application areas include automatic speech recognition (ASR), text-to-speech, dialect identification, and speaker recognition. Dialect information, age group, and gender of the 316 speakers are provided. Genders are equally represented and the corpus includes speakers of all ages. Roughly the same amount of speech is provided per dialect region, which makes the corpus ideally suited for experiments with speech technology for different dialects. We provide training, validation, and test splits of the data. The test set consists of the same spoken sentences for each dialect region and allows a fair evaluation of the quality of speech technologies in different dialects. We train an ASR model on the training set and achieve an average BLEU score of 74.7 on the test set. The model beats the best published BLEU scores on 2 other Swiss German ASR test sets, demonstrating the quality of the corpus.", }
We present STT4SG-350, a corpus of Swiss German speech, annotated with Standard German text at the sentence level. The data is collected using a web app in which the speakers are shown Standard German sentences, which they translate to Swiss German and record. We make the corpus publicly available. It contains 343 hours of speech from all dialect regions and is the largest public speech corpus for Swiss German to date. Application areas include automatic speech recognition (ASR), text-to-speech, dialect identification, and speaker recognition. Dialect information, age group, and gender of the 316 speakers are provided. Genders are equally represented and the corpus includes speakers of all ages. Roughly the same amount of speech is provided per dialect region, which makes the corpus ideally suited for experiments with speech technology for different dialects. We provide training, validation, and test splits of the data. The test set consists of the same spoken sentences for each dialect region and allows a fair evaluation of the quality of speech technologies in different dialects. We train an ASR model on the training set and achieve an average BLEU score of 74.7 on the test set. The model beats the best published BLEU scores on 2 other Swiss German ASR test sets, demonstrating the quality of the corpus.
[ "Pl{\\\"u}ss, Michel", "Deriu, Jan", "Schraner, Yanick", "Paonessa, Claudio", "Hartmann, Julia", "Schmidt, Larissa", "Scheller, Christian", "H{\\\"u}rlimann, Manuela", "Samard{\\v{z}}i{\\'c}, Tanja", "Vogel, Manfred", "Cieliebak, Mark" ]
STT4SG-350: A Speech Corpus for All Swiss German Dialect Regions
acl-short.150
Poster
2305.18855
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.151.bib
https://aclanthology.org/2023.acl-short.151/
@inproceedings{magister-etal-2023-teaching, title = "Teaching Small Language Models to Reason", author = "Magister, Lucie Charlotte and Mallinson, Jonathan and Adamek, Jakub and Malmi, Eric and Severyn, Aliaksei", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.151", doi = "10.18653/v1/2023.acl-short.151", pages = "1773--1781", abstract = "Chain of thought prompting successfully improves the reasoning capabilities of large language models, achieving state-of-the-art results on a range of datasets. However, these reasoning capabilities only appear to emerge in models with at least tens of billions of parameters. In this paper, we explore the transfer of such reasoning capabilities to smaller models via knowledge distillation, also investigating the model and dataset size trade-off. Specifically, we finetune a student model on the chain of thought outputs generated by a larger teacher model. Our experiments show that the proposed method improves task performance across arithmetic, commonsense and symbolic reasoning datasets. For example, the accuracy of T5 XXL on GSM8K improves from 8.11{\%} to 21.99{\%} and 18.42{\%} when finetuned on PaLM 540B and GPT-3 175B generated chains of thought, respectively.", }
Chain of thought prompting successfully improves the reasoning capabilities of large language models, achieving state-of-the-art results on a range of datasets. However, these reasoning capabilities only appear to emerge in models with at least tens of billions of parameters. In this paper, we explore the transfer of such reasoning capabilities to smaller models via knowledge distillation, also investigating the model and dataset size trade-off. Specifically, we finetune a student model on the chain of thought outputs generated by a larger teacher model. Our experiments show that the proposed method improves task performance across arithmetic, commonsense and symbolic reasoning datasets. For example, the accuracy of T5 XXL on GSM8K improves from 8.11{\%} to 21.99{\%} and 18.42{\%} when finetuned on PaLM 540B and GPT-3 175B generated chains of thought, respectively.
[ "Magister, Lucie Charlotte", "Mallinson, Jonathan", "Adamek, Jakub", "Malmi, Eric", "Severyn, Aliaksei" ]
Teaching Small Language Models to Reason
acl-short.151
Poster
2212.08410
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.152.bib
https://aclanthology.org/2023.acl-short.152/
@inproceedings{bhambhoria-etal-2023-simple, title = "A Simple and Effective Framework for Strict Zero-Shot Hierarchical Classification", author = "Bhambhoria, Rohan and Chen, Lei and Zhu, Xiaodan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.152", doi = "10.18653/v1/2023.acl-short.152", pages = "1782--1792", abstract = "In recent years, large language models (LLMs) have achieved strong performance on benchmark tasks, especially in zero or few-shot settings. However, these benchmarks often do not adequately address the challenges posed in the real world, such as that of hierarchical classification. In order to address this challenge, we propose refactoring conventional tasks on hierarchical datasets into a more indicative long-tail prediction task. We observe that LLMs are more prone to failure in these cases. To address these limitations, we propose the use of entailment-contradiction prediction in conjunction with LLMs, which allows for strong performance in a strict zero-shot setting. Importantly, our method does not require any parameter updates (a resource-intensive process) and achieves strong performance across multiple datasets.", }
In recent years, large language models (LLMs) have achieved strong performance on benchmark tasks, especially in zero or few-shot settings. However, these benchmarks often do not adequately address the challenges posed in the real world, such as that of hierarchical classification. In order to address this challenge, we propose refactoring conventional tasks on hierarchical datasets into a more indicative long-tail prediction task. We observe that LLMs are more prone to failure in these cases. To address these limitations, we propose the use of entailment-contradiction prediction in conjunction with LLMs, which allows for strong performance in a strict zero-shot setting. Importantly, our method does not require any parameter updates (a resource-intensive process) and achieves strong performance across multiple datasets.
[ "Bhambhoria, Rohan", "Chen, Lei", "Zhu, Xiaodan" ]
A Simple and Effective Framework for Strict Zero-Shot Hierarchical Classification
acl-short.152
Poster
2305.15282
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.153.bib
https://aclanthology.org/2023.acl-short.153/
@inproceedings{zhang-etal-2023-simple, title = "A Simple Concatenation can Effectively Improve Speech Translation", author = "Zhang, Linlin and Fan, Kai and Chen, Boxing and Si, Luo", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.153", doi = "10.18653/v1/2023.acl-short.153", pages = "1793--1802", abstract = "A speech translation data triple comprises speech, transcription, and translation. In the end-to-end paradigm, text machine translation (MT) usually plays the role of a teacher model for speech translation (ST) via knowledge distillation. Parameter sharing with the teacher is often adopted to construct the ST model architecture; however, the two modalities are independently fed and trained via different losses. This situation does not match ST{'}s properties across the two modalities and also limits the upper bound of the performance. Inspired by works on video Transformers, we propose a simple unified cross-modal ST method, which concatenates speech and text as the input and builds a teacher that can utilize information from both modalities simultaneously. Experimental results show that in our unified ST framework, models can effectively utilize the auxiliary information from speech and text, and achieve compelling results on MuST-C datasets.", }
A speech translation data triple comprises speech, transcription, and translation. In the end-to-end paradigm, text machine translation (MT) usually plays the role of a teacher model for speech translation (ST) via knowledge distillation. Parameter sharing with the teacher is often adopted to construct the ST model architecture; however, the two modalities are independently fed and trained via different losses. This situation does not match ST{'}s properties across the two modalities and also limits the upper bound of the performance. Inspired by works on video Transformers, we propose a simple unified cross-modal ST method, which concatenates speech and text as the input and builds a teacher that can utilize information from both modalities simultaneously. Experimental results show that in our unified ST framework, models can effectively utilize the auxiliary information from speech and text, and achieve compelling results on MuST-C datasets.
[ "Zhang, Linlin", "Fan, Kai", "Chen, Boxing", "Si, Luo" ]
A Simple Concatenation can Effectively Improve Speech Translation
acl-short.153
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.154.bib
https://aclanthology.org/2023.acl-short.154/
@inproceedings{she-etal-2023-scone, title = "{S}co{N}e: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning", author = "She, Jingyuan S. and Potts, Christopher and Bowman, Samuel R. and Geiger, Atticus", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.154", doi = "10.18653/v1/2023.acl-short.154", pages = "1803--1821", abstract = "A number of recent benchmarks seek to assess how well models handle natural language negation. However, these benchmarks lack the controlled example paradigms that would allow us to infer whether a model had truly learned how negation morphemes semantically scope. To fill these analytical gaps, we present the Scoped Negation NLI (ScoNe-NLI) benchmark, which contains contrast sets of six examples with up to two negations where either zero, one, or both negative morphemes affect the NLI label. We use ScoNe-NLI to assess fine-tuning and in-context learning strategies. We find that RoBERTa and DeBERTa models solve ScoNe-NLI after many-shot fine-tuning. For in-context learning, we test the latest InstructGPT models and find that most prompt strategies are not successful, including those using step-by-step reasoning. To better understand this result, we extend ScoNe with ScoNe-NLG, a sentence completion test set that embeds negation reasoning in short narratives. Here, InstructGPT is successful, which reveals that the model can correctly reason about negation, but struggles to do so on NLI examples outside of its core pretraining regime.", }
A number of recent benchmarks seek to assess how well models handle natural language negation. However, these benchmarks lack the controlled example paradigms that would allow us to infer whether a model had truly learned how negation morphemes semantically scope. To fill these analytical gaps, we present the Scoped Negation NLI (ScoNe-NLI) benchmark, which contains contrast sets of six examples with up to two negations where either zero, one, or both negative morphemes affect the NLI label. We use ScoNe-NLI to assess fine-tuning and in-context learning strategies. We find that RoBERTa and DeBERTa models solve ScoNe-NLI after many-shot fine-tuning. For in-context learning, we test the latest InstructGPT models and find that most prompt strategies are not successful, including those using step-by-step reasoning. To better understand this result, we extend ScoNe with ScoNe-NLG, a sentence completion test set that embeds negation reasoning in short narratives. Here, InstructGPT is successful, which reveals that the model can correctly reason about negation, but struggles to do so on NLI examples outside of its core pretraining regime.
[ "She, Jingyuan S.", "Potts, Christopher", "Bowman, Samuel R.", "Geiger, Atticus" ]
ScoNe: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning
acl-short.154
Poster
2305.19426
[ "https://github.com/selenashe/scone" ]
https://huggingface.co/papers/2305.19426
3
0
0
4
1
[]
[ "tasksource/scone" ]
[]
https://aclanthology.org/2023.acl-short.155.bib
https://aclanthology.org/2023.acl-short.155/
@inproceedings{zhou-etal-2023-revisiting, title = "Revisiting Automated Prompting: Are We Actually Doing Better?", author = "Zhou, Yulin and Zhao, Yiren and Shumailov, Ilia and Mullins, Robert and Gal, Yarin", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.155", doi = "10.18653/v1/2023.acl-short.155", pages = "1822--1832", abstract = "Current literature demonstrates that Large Language Models (LLMs) are great few-shot learners, and prompting significantly increases their performance on a range of downstream tasks in a few-shot learning setting. An attempt to automate human-led prompting followed, with some progress achieved. In particular, subsequent work demonstrates that automation can outperform fine-tuning in certain K-shot learning scenarios. In this paper, we revisit techniques for automated prompting on six different downstream tasks and a larger range of K-shot learning settings. We find that automated prompting does not consistently outperform simple manual prompting. Our work suggests that, in addition to fine-tuning, manual prompting should be used as a baseline in this line of research.", }
Current literature demonstrates that Large Language Models (LLMs) are great few-shot learners, and prompting significantly increases their performance on a range of downstream tasks in a few-shot learning setting. An attempt to automate human-led prompting followed, with some progress achieved. In particular, subsequent work demonstrates that automation can outperform fine-tuning in certain K-shot learning scenarios. In this paper, we revisit techniques for automated prompting on six different downstream tasks and a larger range of K-shot learning settings. We find that automated prompting does not consistently outperform simple manual prompting. Our work suggests that, in addition to fine-tuning, manual prompting should be used as a baseline in this line of research.
[ "Zhou, Yulin", "Zhao, Yiren", "Shumailov, Ilia", "Mullins, Robert", "Gal, Yarin" ]
Revisiting Automated Prompting: Are We Actually Doing Better?
acl-short.155
Poster
2304.03609
[ "https://github.com/KyraZzz/nlp-prompt-attack" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.156.bib
https://aclanthology.org/2023.acl-short.156/
@inproceedings{ganesh-etal-2023-mind, title = "Mind the Gap between the Application Track and the Real World", author = "Ganesh, Ananya and Cao, Jie and Perkoff, E. Margaret and Southwell, Rosy and Palmer, Martha and Kann, Katharina", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.156", doi = "10.18653/v1/2023.acl-short.156", pages = "1833--1842", abstract = "Recent advances in NLP have led to a rise in inter-disciplinary and application-oriented research. While this demonstrates the growing real-world impact of the field, research papers frequently feature experiments that do not account for the complexities of realistic data and environments. To explore the extent of this gap, we investigate the relationship between the real-world motivations described in NLP papers and the models and evaluation that comprise the proposed solution. We first survey papers from the NLP Applications track from ACL 2020 and EMNLP 2020, asking which papers have differences between their stated motivation and their experimental setting, and, if so, whether they mention these differences. We find that many papers fall short of considering real-world input and output conditions due to adopting simplified modeling or evaluation settings. As a case study, we then empirically show that the performance of an educational dialog understanding system deteriorates when used in a realistic classroom environment.", }
Recent advances in NLP have led to a rise in inter-disciplinary and application-oriented research. While this demonstrates the growing real-world impact of the field, research papers frequently feature experiments that do not account for the complexities of realistic data and environments. To explore the extent of this gap, we investigate the relationship between the real-world motivations described in NLP papers and the models and evaluation that comprise the proposed solution. We first survey papers from the NLP Applications track from ACL 2020 and EMNLP 2020, asking which papers have differences between their stated motivation and their experimental setting, and, if so, whether they mention these differences. We find that many papers fall short of considering real-world input and output conditions due to adopting simplified modeling or evaluation settings. As a case study, we then empirically show that the performance of an educational dialog understanding system deteriorates when used in a realistic classroom environment.
[ "Ganesh, Ananya", "Cao, Jie", "Perkoff, E. Margaret", "Southwell, Rosy", "Palmer, Martha", "Kann, Katharina" ]
Mind the Gap between the Application Track and the Real World
acl-short.156
Oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.157.bib
https://aclanthology.org/2023.acl-short.157/
@inproceedings{wang-etal-2023-distill, title = "How to Distill your {BERT}: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives", author = {Wang, Xinpeng and Weissweiler, Leonie and Sch{\"u}tze, Hinrich and Plank, Barbara}, editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.157", doi = "10.18653/v1/2023.acl-short.157", pages = "1843--1852", abstract = "Recently, various intermediate layer distillation (ILD) objectives have been shown to improve compression of BERT models via Knowledge Distillation (KD). However, a comprehensive evaluation of the objectives in both task-specific and task-agnostic settings is lacking. To the best of our knowledge, this is the first work comprehensively evaluating distillation objectives in both settings. We show that attention transfer gives the best performance overall. We also study the impact of layer choice when initializing the student from the teacher layers, finding a significant impact on the performance in task-specific distillation. For vanilla KD and hidden states transfer, initialisation with lower layers of the teacher gives a considerable improvement over higher layers, especially on the task of QNLI (up to an absolute percentage change of 17.8 in accuracy). Attention transfer behaves consistently under different initialisation settings. We release our code as an efficient transformer-based model distillation framework for further studies.", }
Recently, various intermediate layer distillation (ILD) objectives have been shown to improve compression of BERT models via Knowledge Distillation (KD). However, a comprehensive evaluation of the objectives in both task-specific and task-agnostic settings is lacking. To the best of our knowledge, this is the first work comprehensively evaluating distillation objectives in both settings. We show that attention transfer gives the best performance overall. We also study the impact of layer choice when initializing the student from the teacher layers, finding a significant impact on the performance in task-specific distillation. For vanilla KD and hidden states transfer, initialisation with lower layers of the teacher gives a considerable improvement over higher layers, especially on the task of QNLI (up to an absolute percentage change of 17.8 in accuracy). Attention transfer behaves consistently under different initialisation settings. We release our code as an efficient transformer-based model distillation framework for further studies.
[ "Wang, Xinpeng", "Weissweiler, Leonie", "Sch{\\\"u}tze, Hinrich", "Plank, Barbara" ]
How to Distill your BERT: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives
acl-short.157
Poster
2305.15032
[ "https://github.com/mainlp/how-to-distill-your-bert" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.158.bib
https://aclanthology.org/2023.acl-short.158/
@inproceedings{sedova-roth-2023-actc, title = "{ACTC}: Active Threshold Calibration for Cold-Start Knowledge Graph Completion", author = "Sedova, Anastasiia and Roth, Benjamin", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.158", doi = "10.18653/v1/2023.acl-short.158", pages = "1853--1863", abstract = "Self-supervised knowledge-graph completion (KGC) relies on estimating a scoring model over (entity, relation, entity)-tuples, for example, by embedding an initial knowledge graph. Prediction quality can be improved by calibrating the scoring model, typically by adjusting the prediction thresholds using manually annotated examples. In this paper, we attempt for the first time cold-start calibration for KGC, where no annotated examples exist initially for calibration, and only a limited number of tuples can be selected for annotation. Our new method ACTC finds good per-relation thresholds efficiently based on a limited set of annotated tuples. Additionally to a few annotated tuples, ACTC also leverages unlabeled tuples by estimating their correctness with Logistic Regression or Gaussian Process classifiers. We also experiment with different methods for selecting candidate tuples for annotation: density-based and random selection. Experiments with five scoring models and an oracle annotator show an improvement of 7{\%} points when using ACTC in the challenging setting with an annotation budget of only 10 tuples, and an average improvement of 4{\%} points over different budgets.", }
Self-supervised knowledge-graph completion (KGC) relies on estimating a scoring model over (entity, relation, entity)-tuples, for example, by embedding an initial knowledge graph. Prediction quality can be improved by calibrating the scoring model, typically by adjusting the prediction thresholds using manually annotated examples. In this paper, we attempt for the first time cold-start calibration for KGC, where no annotated examples exist initially for calibration, and only a limited number of tuples can be selected for annotation. Our new method ACTC finds good per-relation thresholds efficiently based on a limited set of annotated tuples. Additionally to a few annotated tuples, ACTC also leverages unlabeled tuples by estimating their correctness with Logistic Regression or Gaussian Process classifiers. We also experiment with different methods for selecting candidate tuples for annotation: density-based and random selection. Experiments with five scoring models and an oracle annotator show an improvement of 7{\%} points when using ACTC in the challenging setting with an annotation budget of only 10 tuples, and an average improvement of 4{\%} points over different budgets.
[ "Sedova, Anastasiia", "Roth, Benjamin" ]
ACTC: Active Threshold Calibration for Cold-Start Knowledge Graph Completion
acl-short.158
Poster
2305.06395
[ "https://github.com/anasedova/actc" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.159.bib
https://aclanthology.org/2023.acl-short.159/
@inproceedings{cheng-etal-2023-task, title = "Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering", author = "Cheng, Hao and Fang, Hao and Liu, Xiaodong and Gao, Jianfeng", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.159", doi = "10.18653/v1/2023.acl-short.159", pages = "1864--1875", abstract = "Given its effectiveness on knowledge-intensive natural language processing tasks, dense retrieval models have become increasingly popular. Specifically, the de-facto architecture for open-domain question answering uses two isomorphic encoders that are initialized from the same pretrained model but separately parameterized for questions and passages. This biencoder architecture is parameter-inefficient in that there is no parameter sharing between encoders. Further, recent studies show that such dense retrievers underperform BM25 in various settings. We thus propose a new architecture, Task-Aware Specialization for dEnse Retrieval (TASER), which enables parameter sharing by interleaving shared and specialized blocks in a single encoder. Our experiments on five question answering datasets show that TASER can achieve superior accuracy, surpassing BM25, while using about 60{\%} of the parameters as bi-encoder dense retrievers. In out-of-domain evaluations, TASER is also empirically more robust than bi-encoder dense retrievers. Our code is available at \url{https://github.com/microsoft/taser}.", }
Given its effectiveness on knowledge-intensive natural language processing tasks, dense retrieval models have become increasingly popular. Specifically, the de-facto architecture for open-domain question answering uses two isomorphic encoders that are initialized from the same pretrained model but separately parameterized for questions and passages. This biencoder architecture is parameter-inefficient in that there is no parameter sharing between encoders. Further, recent studies show that such dense retrievers underperform BM25 in various settings. We thus propose a new architecture, Task-Aware Specialization for dEnse Retrieval (TASER), which enables parameter sharing by interleaving shared and specialized blocks in a single encoder. Our experiments on five question answering datasets show that TASER can achieve superior accuracy, surpassing BM25, while using about 60{\%} of the parameters as bi-encoder dense retrievers. In out-of-domain evaluations, TASER is also empirically more robust than bi-encoder dense retrievers. Our code is available at \url{https://github.com/microsoft/taser}.
[ "Cheng, Hao", "Fang, Hao", "Liu, Xiaodong", "Gao, Jianfeng" ]
Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering
acl-short.159
Poster
2210.05156
[ "https://github.com/microsoft/taser" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.160.bib
https://aclanthology.org/2023.acl-short.160/
@inproceedings{lin-etal-2023-linear, title = "Linear Classifier: An Often-Forgotten Baseline for Text Classification", author = "Lin, Yu-Chen and Chen, Si-An and Liu, Jie-Jyun and Lin, Chih-Jen", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.160", doi = "10.18653/v1/2023.acl-short.160", pages = "1876--1888", abstract = "Large-scale pre-trained language models such as BERT are popular solutions for text classification. Due to the superior performance of these advanced methods, nowadays, people often directly train them for a few epochs and deploy the obtained model. In this opinion paper, we point out that this way may only sometimes get satisfactory results. We argue the importance of running a simple baseline like linear classifiers on bag-of-words features along with advanced methods. First, for many text data, linear methods show competitive performance, high efficiency, and robustness. Second, advanced models such as BERT may only achieve the best results if properly applied. Simple baselines help to confirm whether the results of advanced models are acceptable. Our experimental results fully support these points.", }
Large-scale pre-trained language models such as BERT are popular solutions for text classification. Due to the superior performance of these advanced methods, nowadays, people often directly train them for a few epochs and deploy the obtained model. In this opinion paper, we point out that this way may only sometimes get satisfactory results. We argue the importance of running a simple baseline like linear classifiers on bag-of-words features along with advanced methods. First, for many text data, linear methods show competitive performance, high efficiency, and robustness. Second, advanced models such as BERT may only achieve the best results if properly applied. Simple baselines help to confirm whether the results of advanced models are acceptable. Our experimental results fully support these points.
[ "Lin, Yu-Chen", "Chen, Si-An", "Liu, Jie-Jyun", "Lin, Chih-Jen" ]
Linear Classifier: An Often-Forgotten Baseline for Text Classification
acl-short.160
Poster
2306.07111
[ "https://github.com/jameslyc88/text_classification_baseline_code" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.161.bib
https://aclanthology.org/2023.acl-short.161/
@inproceedings{ruoss-etal-2023-randomized, title = "Randomized Positional Encodings Boost Length Generalization of Transformers", author = "Ruoss, Anian and Del{\'e}tang, Gr{\'e}goire and Genewein, Tim and Grau-Moya, Jordi and Csord{\'a}s, R{\'o}bert and Bennani, Mehdi and Legg, Shane and Veness, Joel", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.161", doi = "10.18653/v1/2023.acl-short.161", pages = "1889--1903", abstract = "Transformers have impressive generalization capabilities on tasks with a fixed context length. However, they fail to generalize to sequences of arbitrary length, even for seemingly simple tasks such as duplicating a string. Moreover, simply training on longer sequences is inefficient due to the quadratic computation complexity of the global attention mechanism. In this work, we demonstrate that this failure mode is linked to positional encodings being out-of-distribution for longer sequences (even for relative encodings) and introduce a novel family of positional encodings that can overcome this problem. Concretely, our randomized positional encoding scheme simulates the positions of longer sequences and randomly selects an ordered subset to fit the sequence{'}s length. Our large-scale empirical evaluation of 6000 models across 15 algorithmic reasoning tasks shows that our method allows Transformers to generalize to sequences of unseen length (increasing test accuracy by 12.0{\%} on average).", }
Transformers have impressive generalization capabilities on tasks with a fixed context length. However, they fail to generalize to sequences of arbitrary length, even for seemingly simple tasks such as duplicating a string. Moreover, simply training on longer sequences is inefficient due to the quadratic computation complexity of the global attention mechanism. In this work, we demonstrate that this failure mode is linked to positional encodings being out-of-distribution for longer sequences (even for relative encodings) and introduce a novel family of positional encodings that can overcome this problem. Concretely, our randomized positional encoding scheme simulates the positions of longer sequences and randomly selects an ordered subset to fit the sequence{'}s length. Our large-scale empirical evaluation of 6000 models across 15 algorithmic reasoning tasks shows that our method allows Transformers to generalize to sequences of unseen length (increasing test accuracy by 12.0{\%} on average).
[ "Ruoss, Anian", "Del{\\'e}tang, Gr{\\'e}goire", "Genewein, Tim", "Grau-Moya, Jordi", "Csord{\\'a}s, R{\\'o}bert", "Bennani, Mehdi", "Legg, Shane", "Veness, Joel" ]
Randomized Positional Encodings Boost Length Generalization of Transformers
acl-short.161
Poster
2305.16843
[ "https://github.com/deepmind/randomized_positional_encodings" ]
https://huggingface.co/papers/2305.16843
1
2
0
8
1
[]
[]
[]
https://aclanthology.org/2023.acl-short.162.bib
https://aclanthology.org/2023.acl-short.162/
@inproceedings{kamigaito-etal-2023-table, title = "Table and Image Generation for Investigating Knowledge of Entities in Pre-trained Vision and Language Models", author = "Kamigaito, Hidetaka and Hayashi, Katsuhiko and Watanabe, Taro", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.162", doi = "10.18653/v1/2023.acl-short.162", pages = "1904--1917", abstract = "In this paper, we propose a table and image generation task to verify how the knowledge about entities acquired from natural language is retained in Vision {\&} Language (V {\&} L) models. This task consists of two parts: the first is to generate a table containing knowledge about an entity and its related image, and the second is to generate an image from an entity with a caption and a table containing related knowledge of the entity. In both tasks, the model must know the entities used to perform the generation properly. We created the Wikipedia Table and Image Generation (WikiTIG) dataset from about 200,000 infoboxes in English Wikipedia articles to perform the proposed tasks. We evaluated the performance on the tasks with respect to the above research question using the V {\&} L model OFA, which has achieved state-of-the-art results in multiple tasks. Experimental results show that OFA forgets part of its entity knowledge by pre-training as a complement to improve the performance of image related tasks.", }
In this paper, we propose a table and image generation task to verify how the knowledge about entities acquired from natural language is retained in Vision {\&} Language (V {\&} L) models. This task consists of two parts: the first is to generate a table containing knowledge about an entity and its related image, and the second is to generate an image from an entity with a caption and a table containing related knowledge of the entity. In both tasks, the model must know the entities used to perform the generation properly. We created the Wikipedia Table and Image Generation (WikiTIG) dataset from about 200,000 infoboxes in English Wikipedia articles to perform the proposed tasks. We evaluated the performance on the tasks with respect to the above research question using the V {\&} L model OFA, which has achieved state-of-the-art results in multiple tasks. Experimental results show that OFA forgets part of its entity knowledge by pre-training as a complement to improve the performance of image related tasks.
[ "Kamigaito, Hidetaka", "Hayashi, Katsuhiko", "Watanabe, Taro" ]
Table and Image Generation for Investigating Knowledge of Entities in Pre-trained Vision and Language Models
acl-short.162
Poster
2306.02115
[ "https://github.com/kamigaito/wikitig" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.163.bib
https://aclanthology.org/2023.acl-short.163/
@inproceedings{lou-tu-2023-improving, title = "Improving Grammar-based Sequence-to-Sequence Modeling with Decomposition and Constraints", author = "Lou, Chao and Tu, Kewei", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.163", doi = "10.18653/v1/2023.acl-short.163", pages = "1918--1929", abstract = "Neural QCFG is a grammar-based sequence-to-sequence model with strong inductive biases on hierarchical structures. It excels in interpretability and generalization but suffers from expensive inference. In this paper, we study two low-rank variants of Neural QCFG for faster inference with different trade-offs between efficiency and expressiveness. Furthermore, utilizing the symbolic interface provided by the grammar, we introduce two soft constraints over tree hierarchy and source coverage. We experiment with various datasets and find that our models outperform vanilla Neural QCFG in most settings.", }
Neural QCFG is a grammar-based sequence-to-sequence model with strong inductive biases on hierarchical structures. It excels in interpretability and generalization but suffers from expensive inference. In this paper, we study two low-rank variants of Neural QCFG for faster inference with different trade-offs between efficiency and expressiveness. Furthermore, utilizing the symbolic interface provided by the grammar, we introduce two soft constraints over tree hierarchy and source coverage. We experiment with various datasets and find that our models outperform vanilla Neural QCFG in most settings.
[ "Lou, Chao", "Tu, Kewei" ]
Improving Grammar-based Sequence-to-Sequence Modeling with Decomposition and Constraints
acl-short.163
Poster
2306.02671
[ "https://github.com/louchao98/seq2seq_with_qcfg" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-short.164.bib
https://aclanthology.org/2023.acl-short.164/
@inproceedings{ai-etal-2023-tecs, title = "{T}e{CS}: A Dataset and Benchmark for Tense Consistency of Machine Translation", author = "Ai, Yiming and He, Zhiwei and Yu, Kai and Wang, Rui", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-short.164", doi = "10.18653/v1/2023.acl-short.164", pages = "1930--1941", abstract = "Tense inconsistency frequently occurs in machine translation. However, there are few criteria to assess the model{'}s mastery of tense prediction from a linguistic perspective. In this paper, we present a parallel tense test set, containing French-English 552 utterances. We also introduce a corresponding benchmark, tense prediction accuracy. With the tense test set and the benchmark, researchers are able to measure the tense consistency performance of machine translation systems for the first time.", }
Tense inconsistency frequently occurs in machine translation. However, there are few criteria to assess the model{'}s mastery of tense prediction from a linguistic perspective. In this paper, we present a parallel tense test set, containing French-English 552 utterances. We also introduce a corresponding benchmark, tense prediction accuracy. With the tense test set and the benchmark, researchers are able to measure the tense consistency performance of machine translation systems for the first time.
[ "Ai, Yiming", "He, Zhiwei", "Yu, Kai", "Wang, Rui" ]
TeCS: A Dataset and Benchmark for Tense Consistency of Machine Translation
acl-short.164
Poster
2305.13740
[ "https://github.com/rutilel/tecs-a-dataset-and-benchmark-for-tense-consistency" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.1.bib
https://aclanthology.org/2023.acl-demo.1/
@inproceedings{zhang-etal-2023-human, title = "Human-in-the-loop Schema Induction", author = "Zhang, Tianyi and Tham, Isaac and Hou, Zhaoyi and Ren, Jiaxuan and Zhou, Leon and Xu, Hainiu and Zhang, Li and Martin, Lara J. and Dror, Rotem and Li, Sha and Ji, Heng and Palmer, Martha and Brown, Susan Windisch and Suchocki, Reece and Callison-Burch, Chris", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.1", doi = "10.18653/v1/2023.acl-demo.1", pages = "1--10", abstract = "Schema induction builds a graph representation explaining how events unfold in a scenario. Existing approaches have been based on information retrieval (IR) and information extraction (IE), often with limited human curation. We demonstrate a human-in-the-loop schema induction system powered by GPT-3. We first describe the different modules of our system, including prompting to generate schematic elements, manual edit of those elements, and conversion of those into a schema graph. By qualitatively comparing our system to previous ones, we show that our system not only transfers to new domains more easily than previous approaches, but also reduces efforts of human curation thanks to our interactive interface.", }
Schema induction builds a graph representation explaining how events unfold in a scenario. Existing approaches have been based on information retrieval (IR) and information extraction (IE), often with limited human curation. We demonstrate a human-in-the-loop schema induction system powered by GPT-3. We first describe the different modules of our system, including prompting to generate schematic elements, manual edit of those elements, and conversion of those into a schema graph. By qualitatively comparing our system to previous ones, we show that our system not only transfers to new domains more easily than previous approaches, but also reduces efforts of human curation thanks to our interactive interface.
[ "Zhang, Tianyi", "Tham, Isaac", "Hou, Zhaoyi", "Ren, Jiaxuan", "Zhou, Leon", "Xu, Hainiu", "Zhang, Li", "Martin, Lara J.", "Dror, Rotem", "Li, Sha", "Ji, Heng", "Palmer, Martha", "Brown, Susan Windisch", "Suchocki, Reece", "Callison-Burch, Chris" ]
Human-in-the-loop Schema Induction
acl-demo.1
Poster
2302.13048
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.2.bib
https://aclanthology.org/2023.acl-demo.2/
@inproceedings{shi-etal-2023-perslearn, title = "{P}ers{LEARN}: Research Training through the Lens of Perspective Cultivation", author = "Shi, Yu-Zhe and Li, Shiqian and Niu, Xinyi and Xu, Qiao and Liu, Jiawen and Xu, Yifan and Gu, Shiyu and He, Bingru and Li, Xinyang and Zhao, Xinyu and Zhao, Zijian and Lyu, Yidong and Li, Zhen and Liu, Sijia and Qiu, Lin and Ji, Jinhao and Ruan, Lecheng and Ma, Yuxi and Han, Wenjuan and Zhu, Yixin", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.2", doi = "10.18653/v1/2023.acl-demo.2", pages = "11--30", abstract = "Scientific research is inherently shaped by its authors{'} perspectives, influenced by various factors such as their personality, community, or society. Junior researchers often face challenges in identifying the perspectives reflected in the existing literature and struggle to develop their own viewpoints. In response to this issue, we introduce PersLEARN, a tool designed to facilitate the cultivation of scientific perspectives, starting from a basic seed idea and progressing to a well-articulated framework. By interacting with a prompt-based model, researchers can develop their perspectives explicitly. Our human study reveals that scientific perspectives developed by students using PersLEARN exhibit a superior level of logical coherence and depth compared to those that did not. Furthermore, our pipeline outperforms baseline approaches across multiple domains of literature from various perspectives. These results suggest that PersLEARN could help foster a greater appreciation of diversity in scientific perspectives as an essential component of research training.", }
Scientific research is inherently shaped by its authors{'} perspectives, influenced by various factors such as their personality, community, or society. Junior researchers often face challenges in identifying the perspectives reflected in the existing literature and struggle to develop their own viewpoints. In response to this issue, we introduce PersLEARN, a tool designed to facilitate the cultivation of scientific perspectives, starting from a basic seed idea and progressing to a well-articulated framework. By interacting with a prompt-based model, researchers can develop their perspectives explicitly. Our human study reveals that scientific perspectives developed by students using PersLEARN exhibit a superior level of logical coherence and depth compared to those that did not. Furthermore, our pipeline outperforms baseline approaches across multiple domains of literature from various perspectives. These results suggest that PersLEARN could help foster a greater appreciation of diversity in scientific perspectives as an essential component of research training.
[ "Shi, Yu-Zhe", "Li, Shiqian", "Niu, Xinyi", "Xu, Qiao", "Liu, Jiawen", "Xu, Yifan", "Gu, Shiyu", "He, Bingru", "Li, Xinyang", "Zhao, Xinyu", "Zhao, Zijian", "Lyu, Yidong", "Li, Zhen", "Liu, Sijia", "Qiu, Lin", "Ji, Jinhao", "Ruan, Lecheng", "Ma, Yuxi", "Han, Wenjuan", "Zhu, Yixin" ]
PersLEARN: Research Training through the Lens of Perspective Cultivation
acl-demo.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.3.bib
https://aclanthology.org/2023.acl-demo.3/
@inproceedings{li-etal-2023-lavis, title = "{LAVIS}: A One-stop Library for Language-Vision Intelligence", author = "Li, Dongxu and Li, Junnan and Le, Hung and Wang, Guangsen and Savarese, Silvio and Hoi, Steven C.H.", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.3", doi = "10.18653/v1/2023.acl-demo.3", pages = "31--41", abstract = "We introduce LAVIS, an open-source deep learning library for LAnguage-VISion research and applications. LAVIS aims to serve as a one-stop comprehensive library that brings recent advancements in the language-vision field accessible for researchers and practitioners, as well as fertilizing future research and development. It features a unified interface to easily access state-of-the-art image-language, video-language models and common datasets. LAVIS supports training, evaluation and benchmarking on a rich variety of tasks, including multimodal classification, retrieval, captioning, visual question answering, dialogue and pre-training. In the meantime, the library is also highly extensible and configurable, facilitating future development and customization. In this technical report, we describe design principles, key components and functionalities of the library, and also present benchmarking results across common language-vision tasks.", }
We introduce LAVIS, an open-source deep learning library for LAnguage-VISion research and applications. LAVIS aims to serve as a one-stop comprehensive library that brings recent advancements in the language-vision field accessible for researchers and practitioners, as well as fertilizing future research and development. It features a unified interface to easily access state-of-the-art image-language, video-language models and common datasets. LAVIS supports training, evaluation and benchmarking on a rich variety of tasks, including multimodal classification, retrieval, captioning, visual question answering, dialogue and pre-training. In the meantime, the library is also highly extensible and configurable, facilitating future development and customization. In this technical report, we describe design principles, key components and functionalities of the library, and also present benchmarking results across common language-vision tasks.
[ "Li, Dongxu", "Li, Junnan", "Le, Hung", "Wang, Guangsen", "Savarese, Silvio", "Hoi, Steven C.H." ]
LAVIS: A One-stop Library for Language-Vision Intelligence
acl-demo.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.4.bib
https://aclanthology.org/2023.acl-demo.4/
@inproceedings{kwon-mihindukulasooriya-2023-finspector, title = "Finspector: A Human-Centered Visual Inspection Tool for Exploring and Comparing Biases among Foundation Models", author = "Kwon, Bum Chul and Mihindukulasooriya, Nandana", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.4", doi = "10.18653/v1/2023.acl-demo.4", pages = "42--50", abstract = "Pre-trained transformer-based language models are becoming increasingly popular due to their exceptional performance on various benchmarks. However, concerns persist regarding the presence of hidden biases within these models, which can lead to discriminatory outcomes and reinforce harmful stereotypes. To address this issue, we propose Finspector, a human-centered visual inspection tool designed to detect biases in different categories through log-likelihood scores generated by language models. The goal of the tool is to enable researchers to easily identify potential biases using visual analytics, ultimately contributing to a fairer and more just deployment of these models in both academic and industrial settings. Finspector is available at \url{https://github.com/IBM/finspector}.", }
Pre-trained transformer-based language models are becoming increasingly popular due to their exceptional performance on various benchmarks. However, concerns persist regarding the presence of hidden biases within these models, which can lead to discriminatory outcomes and reinforce harmful stereotypes. To address this issue, we propose Finspector, a human-centered visual inspection tool designed to detect biases in different categories through log-likelihood scores generated by language models. The goal of the tool is to enable researchers to easily identify potential biases using visual analytics, ultimately contributing to a fairer and more just deployment of these models in both academic and industrial settings. Finspector is available at \url{https://github.com/IBM/finspector}.
[ "Kwon, Bum Chul", "Mihindukulasooriya, N", "ana" ]
Finspector: A Human-Centered Visual Inspection Tool for Exploring and Comparing Biases among Foundation Models
acl-demo.4
Poster
2305.16937
[ "https://github.com/ibm/finspector" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.5.bib
https://aclanthology.org/2023.acl-demo.5/
@inproceedings{sil-etal-2023-primeqa, title = "{P}rime{QA}: The Prime Repository for State-of-the-Art Multilingual Question Answering Research and Development", author = "Sil, Avi and Sen, Jaydeep and Iyer, Bhavani and Franz, Martin and Fadnis, Kshitij and Bornea, Mihaela and Rosenthal, Sara and McCarley, Scott and Zhang, Rong and Kumar, Vishwajeet and Li, Yulong and Sultan, Md Arafat and Bhat, Riyaz and Bross, Juergen and Florian, Radu and Roukos, Salim", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.5", doi = "10.18653/v1/2023.acl-demo.5", pages = "51--62", abstract = "The field of Question Answering (QA) has made remarkable progress in recent years, thanks to the advent of large pre-trained language models, newer realistic benchmark datasets with leaderboards, and novel algorithms for key components such as retrievers and readers. In this paper, we introduce PrimeQA: a one-stop and open-source QA repository with an aim to democratize QA research and facilitate easy replication of state-of-the-art (SOTA) QA methods. PrimeQA supports core QA functionalities like retrieval and reading comprehension as well as auxiliary capabilities such as question generation. It has been designed as an end-to-end toolkit for various use cases: building front-end applications, replicating SOTA methods on public benchmarks, and expanding pre-existing methods. PrimeQA is available at: \url{https://github.com/primeqa}.", }
The field of Question Answering (QA) has made remarkable progress in recent years, thanks to the advent of large pre-trained language models, newer realistic benchmark datasets with leaderboards, and novel algorithms for key components such as retrievers and readers. In this paper, we introduce PrimeQA: a one-stop and open-source QA repository with an aim to democratize QA research and facilitate easy replication of state-of-the-art (SOTA) QA methods. PrimeQA supports core QA functionalities like retrieval and reading comprehension as well as auxiliary capabilities such as question generation. It has been designed as an end-to-end toolkit for various use cases: building front-end applications, replicating SOTA methods on public benchmarks, and expanding pre-existing methods. PrimeQA is available at: \url{https://github.com/primeqa}.
[ "Sil, Avi", "Sen, Jaydeep", "Iyer, Bhavani", "Franz, Martin", "Fadnis, Kshitij", "Bornea, Mihaela", "Rosenthal, Sara", "McCarley, Scott", "Zhang, Rong", "Kumar, Vishwajeet", "Li, Yulong", "Sultan, Md Arafat", "Bhat, Riyaz", "Bross, Juergen", "Florian, Radu", "Roukos, Salim" ]
PrimeQA: The Prime Repository for State-of-the-Art Multilingual Question Answering Research and Development
acl-demo.5
Poster
2301.09715
[ "https://github.com/primeqa/primeqa" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.6.bib
https://aclanthology.org/2023.acl-demo.6/
@inproceedings{zhang-etal-2023-lingxi, title = "Lingxi: A Diversity-aware {C}hinese Modern Poetry Generation System", author = "Zhang, Xinran and Sun, Maosong and Liu, Jiafeng and Li, Xiaobing", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.6", doi = "10.18653/v1/2023.acl-demo.6", pages = "63--75", abstract = "Chinese modern poetry generation has been a challenging task. One issue is the Chinese word segmentation (CWS) which is critical to comprehend the Chinese language but was not always considered in common tokenization methods. Another is the decoding (sampling) method which may induce repetition and boredom and severely lower the diversity of the generated poetry. To address these issues, we present Lingxi, a diversity-aware Chinese modern poetry generation system. For the CWS issue, we propose a novel framework that incorporates CWS in the tokenization process. The proposed method can achieve a high vocabulary coverage rate with a reasonable vocabulary size. For the decoding method and the diversity issue, we propose a novel sampling algorithm that flattens the high likelihood part of the predicted distribution of the language model to emphasize the comparatively low-likelihood words and increase the diversity of generated poetry. Empirical results show that even when the top 60{\%} of cumulative probability mass of the predicted distribution is flattened, our method achieves comparable or even better performance than baseline sampling methods. Our system is available at \url{http://lingxi.website}.", }
Chinese modern poetry generation has been a challenging task. One issue is the Chinese word segmentation (CWS) which is critical to comprehend the Chinese language but was not always considered in common tokenization methods. Another is the decoding (sampling) method which may induce repetition and boredom and severely lower the diversity of the generated poetry. To address these issues, we present Lingxi, a diversity-aware Chinese modern poetry generation system. For the CWS issue, we propose a novel framework that incorporates CWS in the tokenization process. The proposed method can achieve a high vocabulary coverage rate with a reasonable vocabulary size. For the decoding method and the diversity issue, we propose a novel sampling algorithm that flattens the high likelihood part of the predicted distribution of the language model to emphasize the comparatively low-likelihood words and increase the diversity of generated poetry. Empirical results show that even when the top 60{\%} of cumulative probability mass of the predicted distribution is flattened, our method achieves comparable or even better performance than baseline sampling methods. Our system is available at \url{http://lingxi.website}.
[ "Zhang, Xinran", "Sun, Maosong", "Liu, Jiafeng", "Li, Xiaobing" ]
Lingxi: A Diversity-aware Chinese Modern Poetry Generation System
acl-demo.6
Poster
2108.12108
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.7.bib
https://aclanthology.org/2023.acl-demo.7/
@inproceedings{du-etal-2023-autodive, title = "Autodive: An Integrated Onsite Scientific Literature Annotation Tool", author = "Du, Yi and Wang, Ludi and Huang, Mengyi and Song, Dongze and Cui, Wenjuan and Zhou, Yuanchun", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.7", doi = "10.18653/v1/2023.acl-demo.7", pages = "76--85", abstract = "Scientific literature is always available in Adobe{'}s Portable Document Format (PDF), which is friendly for scientists to read. Compared with raw text, annotating directly on PDF documents can greatly improve the labeling efficiency of scientists, whose annotation costs are very high. In this paper, we present Autodive, an integrated onsite scientific literature annotation tool for natural scientists and Natural Language Processing (NLP) researchers. This tool provides six core functions of annotation that support the whole lifecycle of corpus generation, including i) annotation project management, ii) resource management, iii) ontology management, iv) manual annotation, v) onsite auto annotation, and vi) annotation task statistics. Two experiments are carried out to verify the efficiency of the presented tool. A live demo of Autodive is available at \url{http://autodive.sciwiki.cn}. The source code is available at \url{https://github.com/Autodive}.", }
Scientific literature is always available in Adobe{'}s Portable Document Format (PDF), which is friendly for scientists to read. Compared with raw text, annotating directly on PDF documents can greatly improve the labeling efficiency of scientists, whose annotation costs are very high. In this paper, we present Autodive, an integrated onsite scientific literature annotation tool for natural scientists and Natural Language Processing (NLP) researchers. This tool provides six core functions of annotation that support the whole lifecycle of corpus generation, including i) annotation project management, ii) resource management, iii) ontology management, iv) manual annotation, v) onsite auto annotation, and vi) annotation task statistics. Two experiments are carried out to verify the efficiency of the presented tool. A live demo of Autodive is available at \url{http://autodive.sciwiki.cn}. The source code is available at \url{https://github.com/Autodive}.
[ "Du, Yi", "Wang, Ludi", "Huang, Mengyi", "Song, Dongze", "Cui, Wenjuan", "Zhou, Yuanchun" ]
Autodive: An Integrated Onsite Scientific Literature Annotation Tool
acl-demo.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.8.bib
https://aclanthology.org/2023.acl-demo.8/
@inproceedings{ushio-etal-2023-practical, title = "A Practical Toolkit for Multilingual Question and Answer Generation", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.8", doi = "10.18653/v1/2023.acl-demo.8", pages = "86--94", abstract = "Generating questions along with associated answers from a text has applications in several domains, such as creating reading comprehension tests for students, or improving document search by providing auxiliary questions and answers based on the query. Training models for question and answer generation (QAG) is not straightforward due to the expected structured output (i.e. a list of question and answer pairs), as it requires more than generating a single sentence. This results in a small number of publicly accessible QAG models. In this paper, we introduce AutoQG, an online service for multilingual QAG along with lmqg, an all-in-one python package for model fine-tuning, generation, and evaluation. We also release QAG models in eight languages fine-tuned on a few variants of pre-trained encoder-decoder language models, which can be used online via AutoQG or locally via lmqg. With these resources, practitioners of any level can benefit from a toolkit that includes a web interface for end users, and easy-to-use code for developers who require custom models or fine-grained controls for generation.", }
Generating questions along with associated answers from a text has applications in several domains, such as creating reading comprehension tests for students, or improving document search by providing auxiliary questions and answers based on the query. Training models for question and answer generation (QAG) is not straightforward due to the expected structured output (i.e. a list of question and answer pairs), as it requires more than generating a single sentence. This results in a small number of publicly accessible QAG models. In this paper, we introduce AutoQG, an online service for multilingual QAG along with lmqg, an all-in-one python package for model fine-tuning, generation, and evaluation. We also release QAG models in eight languages fine-tuned on a few variants of pre-trained encoder-decoder language models, which can be used online via AutoQG or locally via lmqg. With these resources, practitioners of any level can benefit from a toolkit that includes a web interface for end users, and easy-to-use code for developers who require custom models or fine-grained controls for generation.
[ "Ushio, Asahi", "Alva-Manchego, Fern", "o", "Camacho-Collados, Jose" ]
A Practical Toolkit for Multilingual Question and Answer Generation
acl-demo.8
Poster
2305.17416
[ "https://github.com/asahi417/lm-question-generation" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.9.bib
https://aclanthology.org/2023.acl-demo.9/
@inproceedings{qin-etal-2023-openslu, title = "{O}pen{SLU}: A Unified, Modularized, and Extensible Toolkit for Spoken Language Understanding", author = "Qin, Libo and Chen, Qiguang and Xu, Xiao and Feng, Yunlong and Che, Wanxiang", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.9", doi = "10.18653/v1/2023.acl-demo.9", pages = "95--102", abstract = "Spoken Language Understanding (SLU) is one of the core components of a task-oriented dialogue system, which aims to extract the semantic meaning of user queries (e.g., intents and slots). In this work, we introduce OpenSLU, an open-source toolkit to provide a unified, modularized, and extensible toolkit for spoken language understanding. Specifically, OpenSLU unifies 10 SLU models for both single-intent and multi-intent scenarios, which support both non-pretrained and pretrained models simultaneously. Additionally, OpenSLU is highly modularized and extensible by decomposing the model architecture, inference, and learning process into reusable modules, which allows researchers to quickly set up SLU experiments with highly flexible configurations. OpenSLU is implemented based on PyTorch, and released at \url{https://github.com/LightChen233/OpenSLU}.", }
Spoken Language Understanding (SLU) is one of the core components of a task-oriented dialogue system, which aims to extract the semantic meaning of user queries (e.g., intents and slots). In this work, we introduce OpenSLU, an open-source toolkit to provide a unified, modularized, and extensible toolkit for spoken language understanding. Specifically, OpenSLU unifies 10 SLU models for both single-intent and multi-intent scenarios, which support both non-pretrained and pretrained models simultaneously. Additionally, OpenSLU is highly modularized and extensible by decomposing the model architecture, inference, and learning process into reusable modules, which allows researchers to quickly set up SLU experiments with highly flexible configurations. OpenSLU is implemented based on PyTorch, and released at \url{https://github.com/LightChen233/OpenSLU}.
[ "Qin, Libo", "Chen, Qiguang", "Xu, Xiao", "Feng, Yunlong", "Che, Wanxiang" ]
OpenSLU: A Unified, Modularized, and Extensible Toolkit for Spoken Language Understanding
acl-demo.9
Poster
2305.10231
[ "https://github.com/lightchen233/openslu" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.10.bib
https://aclanthology.org/2023.acl-demo.10/
@inproceedings{sandhan-etal-2023-sanskritshala, title = "{S}anskrit{S}hala: A Neural {S}anskrit {NLP} Toolkit with Web-Based Interface for Pedagogical and Annotation Purposes", author = "Sandhan, Jivnesh and Agarwal, Anshul and Behera, Laxmidhar and Sandhan, Tushar and Goyal, Pawan", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.10", doi = "10.18653/v1/2023.acl-demo.10", pages = "103--112", abstract = "We present a neural Sanskrit Natural Language Processing (NLP) toolkit named SanskritShala (a school of Sanskrit) to facilitate computational linguistic analyses for several tasks such as word segmentation, morphological tagging, dependency parsing, and compound type identification. Our systems currently report state-of-the-art performance on available benchmark datasets for all tasks. SanskritShala is deployed as a web-based application, which allows a user to get real-time analysis for the given input. It is built with easy-to-use interactive data annotation features that allow annotators to correct the system predictions when it makes mistakes. We publicly release the source codes of the 4 modules included in the toolkit, 7 word embedding models that have been trained on publicly available Sanskrit corpora and multiple annotated datasets such as word similarity, relatedness, categorization, analogy prediction to assess intrinsic properties of word embeddings. So far as we know, this is the first neural-based Sanskrit NLP toolkit that has a web-based interface and a number of NLP modules. We are sure that the people who are willing to work with Sanskrit will find it useful for pedagogical and annotative purposes. SanskritShala is available at: \url{https://cnerg.iitkgp.ac.in/sanskritshala}. The demo video of our platform can be accessed at: \url{https://youtu.be/x0X31Y9k0mw4}.", }
We present a neural Sanskrit Natural Language Processing (NLP) toolkit named SanskritShala (a school of Sanskrit) to facilitate computational linguistic analyses for several tasks such as word segmentation, morphological tagging, dependency parsing, and compound type identification. Our systems currently report state-of-the-art performance on available benchmark datasets for all tasks. SanskritShala is deployed as a web-based application, which allows a user to get real-time analysis for the given input. It is built with easy-to-use interactive data annotation features that allow annotators to correct the system predictions when it makes mistakes. We publicly release the source codes of the 4 modules included in the toolkit, 7 word embedding models that have been trained on publicly available Sanskrit corpora and multiple annotated datasets such as word similarity, relatedness, categorization, analogy prediction to assess intrinsic properties of word embeddings. So far as we know, this is the first neural-based Sanskrit NLP toolkit that has a web-based interface and a number of NLP modules. We are sure that the people who are willing to work with Sanskrit will find it useful for pedagogical and annotative purposes. SanskritShala is available at: \url{https://cnerg.iitkgp.ac.in/sanskritshala}. The demo video of our platform can be accessed at: \url{https://youtu.be/x0X31Y9k0mw4}.
[ "S", "han, Jivnesh", "Agarwal, Anshul", "Behera, Laxmidhar", "S", "han, Tushar", "Goyal, Pawan" ]
SanskritShala: A Neural Sanskrit NLP Toolkit with Web-Based Interface for Pedagogical and Annotation Purposes
acl-demo.10
Poster
2302.09527
[ "https://github.com/jivnesh/sanskritshala" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.11.bib
https://aclanthology.org/2023.acl-demo.11/
@inproceedings{dibia-2023-lida, title = "{LIDA}: A Tool for Automatic Generation of Grammar-Agnostic Visualizations and Infographics using Large Language Models", author = "Dibia, Victor", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.11", doi = "10.18653/v1/2023.acl-demo.11", pages = "113--126", abstract = "Systems that support users in the automatic creation of visualizations must address several subtasks - understand the semantics of data, enumerate relevant visualization goals, and generate visualization specifications. In this work, we pose visualization generation as a multi-stage generation problem and argue that well-orchestrated pipelines based on large language models (LLMs) and image generation models (IGMs) are suited to addressing these tasks. We present LIDA, a novel tool for generating grammar-agnostic visualizations and infographics. LIDA comprises 4 modules - a SUMMARIZER that converts data into a rich but compact natural language summary, a GOAL EXPLORER that enumerates visualization goals given the data, a VISGENERATOR that generates, refines, executes, and filters visualization code, and an INFOGRAPHER module that yields data-faithful stylized graphics using IGMs. LIDA provides a Python API and a hybrid user interface (direct manipulation and multilingual natural language) for interactive chart, infographics, and data story generation. Code and demo are available at this URL: \url{https://microsoft.github.io/lida/}", }
Systems that support users in the automatic creation of visualizations must address several subtasks - understand the semantics of data, enumerate relevant visualization goals, and generate visualization specifications. In this work, we pose visualization generation as a multi-stage generation problem and argue that well-orchestrated pipelines based on large language models (LLMs) and image generation models (IGMs) are suited to addressing these tasks. We present LIDA, a novel tool for generating grammar-agnostic visualizations and infographics. LIDA comprises 4 modules - a SUMMARIZER that converts data into a rich but compact natural language summary, a GOAL EXPLORER that enumerates visualization goals given the data, a VISGENERATOR that generates, refines, executes, and filters visualization code, and an INFOGRAPHER module that yields data-faithful stylized graphics using IGMs. LIDA provides a Python API and a hybrid user interface (direct manipulation and multilingual natural language) for interactive chart, infographics, and data story generation. Code and demo are available at this URL: \url{https://microsoft.github.io/lida/}
[ "Dibia, Victor" ]
LIDA: A Tool for Automatic Generation of Grammar-Agnostic Visualizations and Infographics using Large Language Models
acl-demo.11
Poster
2303.02927
[ "" ]
https://huggingface.co/papers/2303.02927
1
3
0
1
1
[]
[]
[ "Anne31415/LIDA2and1_csv" ]
https://aclanthology.org/2023.acl-demo.12.bib
https://aclanthology.org/2023.acl-demo.12/
@inproceedings{mao-etal-2023-metapro, title = "{M}eta{P}ro Online: A Computational Metaphor Processing Online System", author = "Mao, Rui and Li, Xiao and He, Kai and Ge, Mengshi and Cambria, Erik", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.12", doi = "10.18653/v1/2023.acl-demo.12", pages = "127--135", abstract = "Metaphoric expressions are a special linguistic phenomenon, frequently appearing in everyday language. Metaphors do not take their literal meanings in contexts, which may cause obstacles for language learners to understand them. Metaphoric expressions also reflect the cognition of humans via concept mappings, attracting great attention from cognitive science and psychology communities. Thus, we aim to develop a computational metaphor processing online system, termed MetaPro Online, that allows users without a coding background, e.g., language learners and linguists, to easily query metaphoricity labels, metaphor paraphrases, and concept mappings for non-domain-specific text. The outputs of MetaPro can be directly used by language learners and natural language processing downstream tasks because MetaPro is an end-to-end system.", }
Metaphoric expressions are a special linguistic phenomenon, frequently appearing in everyday language. Metaphors do not take their literal meanings in contexts, which may cause obstacles for language learners to understand them. Metaphoric expressions also reflect the cognition of humans via concept mappings, attracting great attention from cognitive science and psychology communities. Thus, we aim to develop a computational metaphor processing online system, termed MetaPro Online, that allows users without a coding background, e.g., language learners and linguists, to easily query metaphoricity labels, metaphor paraphrases, and concept mappings for non-domain-specific text. The outputs of MetaPro can be directly used by language learners and natural language processing downstream tasks because MetaPro is an end-to-end system.
[ "Mao, Rui", "Li, Xiao", "He, Kai", "Ge, Mengshi", "Cambria, Erik" ]
MetaPro Online: A Computational Metaphor Processing Online System
acl-demo.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.13.bib
https://aclanthology.org/2023.acl-demo.13/
@inproceedings{vath-etal-2023-diagraph, title = "{DIAGRAPH}: An Open-Source Graphic Interface for Dialog Flow Design", author = {V{\"a}th, Dirk and Vanderlyn, Lindsey and Vu, Ngoc Thang}, editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.13", doi = "10.18653/v1/2023.acl-demo.13", pages = "136--143", abstract = "In this work, we present DIAGRAPH, an open-source graphical dialog flow editor built on the ADVISER toolkit. Our goal for this tool is threefold: 1) To support subject-experts to intuitively create complex and flexible dialog systems, 2) To support rapid prototyping of dialog system behavior, e.g., for research, and 3) To provide a hands-on test bed for students learning about dialog systems. To facilitate this, DIAGRAPH aims to provide a clean and intuitive graphical interface for creating dialog systems without requiring any coding knowledge. Once a dialog graph has been created, it is automatically turned into a dialog system using state-of-the-art language models. This allows for rapid prototyping and testing. Dialog designers can then distribute a link to their finished dialog system or embed it into a website. Additionally, to support scientific experiments and data collection, dialog designers can access chat logs. Finally, to verify the usability of DIAGRAPH, we performed an evaluation with subject-experts who extensively worked with the tool and users testing it for the first time, receiving above-average System Usability Scale (SUS) scores from both (82 out of 100 and 75 out of 100, respectively). In this way, we hope DIAGRAPH helps reduce the barrier to entry for creating dialog interactions.", }
In this work, we present DIAGRAPH, an open-source graphical dialog flow editor built on the ADVISER toolkit. Our goal for this tool is threefold: 1) To support subject-experts to intuitively create complex and flexible dialog systems, 2) To support rapid prototyping of dialog system behavior, e.g., for research, and 3) To provide a hands-on test bed for students learning about dialog systems. To facilitate this, DIAGRAPH aims to provide a clean and intuitive graphical interface for creating dialog systems without requiring any coding knowledge. Once a dialog graph has been created, it is automatically turned into a dialog system using state-of-the-art language models. This allows for rapid prototyping and testing. Dialog designers can then distribute a link to their finished dialog system or embed it into a website. Additionally, to support scientific experiments and data collection, dialog designers can access chat logs. Finally, to verify the usability of DIAGRAPH, we performed an evaluation with subject-experts who extensively worked with the tool and users testing it for the first time, receiving above-average System Usability Scale (SUS) scores from both (82 out of 100 and 75 out of 100, respectively). In this way, we hope DIAGRAPH helps reduce the barrier to entry for creating dialog interactions.
[ "V{\\\"a}th, Dirk", "V", "erlyn, Lindsey", "Vu, Ngoc Thang" ]
DIAGRAPH: An Open-Source Graphic Interface for Dialog Flow Design
acl-demo.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.14.bib
https://aclanthology.org/2023.acl-demo.14/
@inproceedings{kruszewski-etal-2023-disco, title = "disco: a toolkit for Distributional Control of Generative Models", author = "Kruszewski, Germ{\'a}n and Rozen, Jos and Dymetman, Marc", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.14", doi = "10.18653/v1/2023.acl-demo.14", pages = "144--160", abstract = "Pre-trained language models and other generative models have revolutionized NLP and beyond. However, these models tend to reproduce undesirable biases present in their training data. Also, they may overlook patterns that are important but challenging to capture. To address these limitations, researchers have introduced distributional control techniques. These techniques, not limited to language, allow controlling the prevalence (i.e. expectations) of any features of interest in the model{'}s outputs. Despite their potential, the widespread adoption of these techniques has been hindered by the difficulty in adapting the complex, disconnected code. Here, we present disco, an open-source Python library that brings these techniques to the broader public.", }
Pre-trained language models and other generative models have revolutionized NLP and beyond. However, these models tend to reproduce undesirable biases present in their training data. Also, they may overlook patterns that are important but challenging to capture. To address these limitations, researchers have introduced distributional control techniques. These techniques, not limited to language, allow controlling the prevalence (i.e. expectations) of any features of interest in the model{'}s outputs. Despite their potential, the widespread adoption of these techniques has been hindered by the difficulty in adapting the complex, disconnected code. Here, we present disco, an open-source Python library that brings these techniques to the broader public.
[ "Kruszewski, Germ{\\'a}n", "Rozen, Jos", "Dymetman, Marc" ]
disco: a toolkit for Distributional Control of Generative Models
acl-demo.14
Poster
2303.05431
[ "https://github.com/naver/disco" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.15.bib
https://aclanthology.org/2023.acl-demo.15/
@inproceedings{zhang-etal-2023-hyperparameter, title = "A Hyperparameter Optimization Toolkit for Neural Machine Translation Research", author = "Zhang, Xuan and Duh, Kevin and McNamee, Paul", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.15", doi = "10.18653/v1/2023.acl-demo.15", pages = "161--168", abstract = "Hyperparameter optimization is an important but often overlooked process in the research of deep learning technologies. To obtain a good model, one must carefully tune hyperparameters that determine the architecture and training algorithm. Insufficient tuning may result in poor results, while inequitable tuning may lead to exaggerated differences between models. We present a hyperparameter optimization toolkit for neural machine translation (NMT) to help researchers focus their time on the creative rather than the mundane. The toolkit is implemented as a wrapper on top of the open-source Sockeye NMT software. Using the Asynchronous Successive Halving Algorithm (ASHA), we demonstrate that it is possible to discover near-optimal models under a computational budget with little effort. Code: \url{https://github.com/kevinduh/sockeye-recipes3}. Video demo: \url{https://cs.jhu.edu/kevinduh/j/demo.mp4}", }
Hyperparameter optimization is an important but often overlooked process in the research of deep learning technologies. To obtain a good model, one must carefully tune hyperparameters that determine the architecture and training algorithm. Insufficient tuning may result in poor results, while inequitable tuning may lead to exaggerated differences between models. We present a hyperparameter optimization toolkit for neural machine translation (NMT) to help researchers focus their time on the creative rather than the mundane. The toolkit is implemented as a wrapper on top of the open-source Sockeye NMT software. Using the Asynchronous Successive Halving Algorithm (ASHA), we demonstrate that it is possible to discover near-optimal models under a computational budget with little effort. Code: \url{https://github.com/kevinduh/sockeye-recipes3} Video demo: \url{https://cs.jhu.edu/kevinduh/j/demo.mp4}
[ "Zhang, Xuan", "Duh, Kevin", "McNamee, Paul" ]
A Hyperparameter Optimization Toolkit for Neural Machine Translation Research
acl-demo.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.16.bib
https://aclanthology.org/2023.acl-demo.16/
@inproceedings{wang-etal-2023-japanese, title = "{J}apanese-to-{E}nglish Simultaneous Dubbing Prototype", author = "Wang, Xiaolin and Utiyama, Masao and Sumita, Eiichiro", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.16", doi = "10.18653/v1/2023.acl-demo.16", pages = "169--178", abstract = "Live video streaming has become an important form of communication such as virtual conferences. However, for cross-language communication in live video streaming, reading subtitles degrades the viewing experience. To address this problem, our simultaneous dubbing prototype translates and replaces the original speech of a live video stream in a simultaneous manner. Tests on a collection of 90 public videos show that our system achieves a low average latency of 11.90 seconds for smooth playback. Our method is general and can be extended to other language pairs.", }
Live video streaming has become an important form of communication such as virtual conferences. However, for cross-language communication in live video streaming, reading subtitles degrades the viewing experience. To address this problem, our simultaneous dubbing prototype translates and replaces the original speech of a live video stream in a simultaneous manner. Tests on a collection of 90 public videos show that our system achieves a low average latency of 11.90 seconds for smooth playback. Our method is general and can be extended to other language pairs.
[ "Wang, Xiaolin", "Utiyama, Masao", "Sumita, Eiichiro" ]
Japanese-to-English Simultaneous Dubbing Prototype
acl-demo.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.17.bib
https://aclanthology.org/2023.acl-demo.17/
@inproceedings{yao-etal-2023-viskop, title = "{V}is{K}o{P}: Visual Knowledge oriented Programming for Interactive Knowledge Base Question Answering", author = "Yao, Zijun and Chen, Yuanyong and Lv, Xin and Cao, Shulin and Xin, Amy and Yu, Jifan and Jin, Hailong and Xu, Jianjun and Zhang, Peng and Hou, Lei and Li, Juanzi", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.17", doi = "10.18653/v1/2023.acl-demo.17", pages = "179--189", abstract = "We present Visual Knowledge oriented Programming platform (\textbf{VisKoP}), a knowledge base question answering (KBQA) system that integrates human into the loop to edit and debug the knowledge base (KB) queries. VisKoP not only provides a neural program induction module, which converts natural language questions into knowledge oriented program language (KoPL), but also maps KoPL programs into graphical elements. KoPL programs can be edited with simple graphical operators, such as \textit{{''}dragging{''}} to add knowledge operators and \textit{{''}slot filling{''}} to designate operator arguments. Moreover, VisKoP provides auto-completion for its knowledge base schema and users can easily debug the KoPL program by checking its intermediate results. To facilitate the practical KBQA on a million-entity-level KB, we design a highly efficient KoPL execution engine for the back-end. Experiment results show that VisKoP is highly efficient and user interaction can fix a large portion of wrong KoPL programs to acquire the correct answer. The VisKoP online demo, highly efficient KoPL engine, and screencast video are now publicly available.", }
We present Visual Knowledge oriented Programming platform (\textbf{VisKoP}), a knowledge base question answering (KBQA) system that integrates human into the loop to edit and debug the knowledge base (KB) queries. VisKoP not only provides a neural program induction module, which converts natural language questions into knowledge oriented program language (KoPL), but also maps KoPL programs into graphical elements. KoPL programs can be edited with simple graphical operators, such as \textit{{''}dragging{''}} to add knowledge operators and \textit{{''}slot filling{''}} to designate operator arguments. Moreover, VisKoP provides auto-completion for its knowledge base schema and users can easily debug the KoPL program by checking its intermediate results. To facilitate the practical KBQA on a million-entity-level KB, we design a highly efficient KoPL execution engine for the back-end. Experiment results show that VisKoP is highly efficient and user interaction can fix a large portion of wrong KoPL programs to acquire the correct answer. The VisKoP online demo, highly efficient KoPL engine, and screencast video are now publicly available.
[ "Yao, Zijun", "Chen, Yuanyong", "Lv, Xin", "Cao, Shulin", "Xin, Amy", "Yu, Jifan", "Jin, Hailong", "Xu, Jianjun", "Zhang, Peng", "Hou, Lei", "Li, Juanzi" ]
VisKoP: Visual Knowledge oriented Programming for Interactive Knowledge Base Question Answering
acl-demo.17
Poster
2307.03130
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.18.bib
https://aclanthology.org/2023.acl-demo.18/
@inproceedings{lee-etal-2023-peep, title = "{PEEP}-Talk: A Situational Dialogue-based Chatbot for {E}nglish Education", author = "Lee, Seungjun and Jang, Yoonna and Park, Chanjun and Lee, Jungseob and Seo, Jaehyung and Moon, Hyeonseok and Eo, Sugyeong and Lee, Seounghoon and Yahya, Bernardo and Lim, Heuiseok", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.18", doi = "10.18653/v1/2023.acl-demo.18", pages = "190--207", abstract = "English is acknowledged worldwide as a mode of communication. However, due to the absence of realistic practicing scenarios, students learning English as a foreign language (EFL) typically have limited chances to converse and share feedback with others. In this paper, we propose PEEP-Talk, a real-world situational dialogue-based chatbot designed for English education. It also naturally switches to a new topic or situation in response to out-of-topic utterances, which are common among English beginners. Furthermore, PEEP-Talk provides feedback score on conversation and grammar error correction. We performed automatic and user evaluations to validate performance and education efficiency of our system. The results show that PEEP-Talk generates appropriate responses in various real-life situations while providing accurate feedback to learners. Moreover, we demonstrate a positive impact on English-speaking, grammar, and English learning anxiety, implying that PEEP-Talk can lower the barrier to learning natural conversation in effective ways.", }
English is acknowledged worldwide as a mode of communication. However, due to the absence of realistic practicing scenarios, students learning English as a foreign language (EFL) typically have limited chances to converse and share feedback with others. In this paper, we propose PEEP-Talk, a real-world situational dialogue-based chatbot designed for English education. It also naturally switches to a new topic or situation in response to out-of-topic utterances, which are common among English beginners. Furthermore, PEEP-Talk provides feedback score on conversation and grammar error correction. We performed automatic and user evaluations to validate performance and education efficiency of our system. The results show that PEEP-Talk generates appropriate responses in various real-life situations while providing accurate feedback to learners. Moreover, we demonstrate a positive impact on English-speaking, grammar, and English learning anxiety, implying that PEEP-Talk can lower the barrier to learning natural conversation in effective ways.
[ "Lee, Seungjun", "Jang, Yoonna", "Park, Chanjun", "Lee, Jungseob", "Seo, Jaehyung", "Moon, Hyeonseok", "Eo, Sugyeong", "Lee, Seounghoon", "Yahya, Bernardo", "Lim, Heuiseok" ]
PEEP-Talk: A Situational Dialogue-based Chatbot for English Education
acl-demo.18
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.19.bib
https://aclanthology.org/2023.acl-demo.19/
@inproceedings{landwehr-etal-2023-opentipe, title = "{O}pen{TIPE}: An Open-source Translation Framework for Interactive Post-Editing Research", author = "Landwehr, Fabian and Steinmann, Thomas and Mascarell, Laura", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.19", doi = "10.18653/v1/2023.acl-demo.19", pages = "208--216", abstract = "Despite the latest improvements on machine translation, professional translators still must review and post-edit the automatic output to ensure high-quality translations. The research on automating this process lacks an interactive post-editing environment implemented for this purpose; therefore, current approaches do not consider the human interactions that occur in real post-editing scenarios. To address this issue, we present OpenTIPE, a flexible and extensible framework that aims at supporting research on interactive post-editing. Specifically, the interactive environment of OpenTIPE allows researchers to explore human-centered approaches for the post-editing task. We release the OpenTIPE source code and showcase its main functionalities with a demonstration video and an online live demo.", }
Despite the latest improvements on machine translation, professional translators still must review and post-edit the automatic output to ensure high-quality translations. The research on automating this process lacks an interactive post-editing environment implemented for this purpose; therefore, current approaches do not consider the human interactions that occur in real post-editing scenarios. To address this issue, we present OpenTIPE, a flexible and extensible framework that aims at supporting research on interactive post-editing. Specifically, the interactive environment of OpenTIPE allows researchers to explore human-centered approaches for the post-editing task. We release the OpenTIPE source code and showcase its main functionalities with a demonstration video and an online live demo.
[ "Landwehr, Fabian", "Steinmann, Thomas", "Mascarell, Laura" ]
OpenTIPE: An Open-source Translation Framework for Interactive Post-Editing Research
acl-demo.19
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.20.bib
https://aclanthology.org/2023.acl-demo.20/
@inproceedings{zhao-etal-2023-tencentpretrain, title = "{T}encent{P}retrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities", author = "Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and Tian, Rong and Liu, Weijie and Chen, Yiren and Sun, Ningyuan and Liu, Haoyan and Mao, Weiquan and Guo, Han and Gou, Weigang and Wu, Taiqiang and Zhu, Tao and Shi, Wenhang and Chen, Chen and Huang, Shan and Chen, Sihong and Liu, Liqun and Li, Feifei and Chen, Xiaoshuai and Sun, Xingwu and Kang, Zhanhui and Du, Xiaoyong and Shen, Linlin and Yan, Kimmo", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.20", doi = "10.18653/v1/2023.acl-demo.20", pages = "217--225", abstract = "Recently, the success of pre-training in text domain has been fully extended to vision, audio, and cross-modal scenarios. The proposed pre-training models of different modalities are showing a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is the modular design. The toolkit uniformly divides pre-training models into 5 components: embedding, encoder, target embedding, decoder, and target. As almost all of common modules are provided in each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new one. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.", }
Recently, the success of pre-training in text domain has been fully extended to vision, audio, and cross-modal scenarios. The proposed pre-training models of different modalities are showing a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is the modular design. The toolkit uniformly divides pre-training models into 5 components: embedding, encoder, target embedding, decoder, and target. As almost all of common modules are provided in each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new one. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
[ "Zhao, Zhe", "Li, Yudong", "Hou, Cheng", "Zhao, Jing", "Tian, Rong", "Liu, Weijie", "Chen, Yiren", "Sun, Ningyuan", "Liu, Haoyan", "Mao, Weiquan", "Guo, Han", "Gou, Weigang", "Wu, Taiqiang", "Zhu, Tao", "Shi, Wenhang", "Chen, Chen", "Huang, Shan", "Chen, Sihong", "Liu, Liqun", "Li, Feifei", "Chen, Xiaoshuai", "Sun, Xingwu", "Kang, Zhanhui", "Du, Xiaoyong", "Shen, Linlin", "Yan, Kimmo" ]
TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities
acl-demo.20
Poster
2212.06385
[ "https://github.com/tencent/tencentpretrain" ]
https://huggingface.co/papers/2212.06385
1
0
0
26
1
[ "uer/gpt2-chinese-cluecorpussmall", "uer/sbert-base-chinese-nli", "uer/roberta-base-chinese-extractive-qa", "uer/roberta-base-finetuned-chinanews-chinese", "uer/roberta-base-finetuned-dianping-chinese", "uer/gpt2-chinese-poem", "uer/albert-base-chinese-cluecorpussmall", "uer/roberta-base-finetuned-cluener2020-chinese", "uer/gpt2-chinese-lyric", "uer/roberta-base-finetuned-jd-binary-chinese", "uer/gpt2-distil-chinese-cluecorpussmall", "uer/t5-base-chinese-cluecorpussmall", "uer/bart-base-chinese-cluecorpussmall", "uer/t5-small-chinese-cluecorpussmall", "uer/gpt2-chinese-ancient", "uer/t5-v1_1-base-chinese-cluecorpussmall", "uer/roberta-base-finetuned-jd-full-chinese", "uer/chinese_roberta_L-2_H-128", "uer/chinese_roberta_L-12_H-768", "uer/chinese_roberta_L-4_H-512", "uer/gpt2-chinese-couplet", "uer/roberta-base-word-chinese-cluecorpussmall", "uer/t5-v1_1-small-chinese-cluecorpussmall", "uer/albert-large-chinese-cluecorpussmall", "uer/chinese_roberta_L-8_H-512", "uer/chinese_roberta_L-4_H-256", "uer/roberta-tiny-word-chinese-cluecorpussmall", "uer/gpt2-xlarge-chinese-cluecorpussmall", "uer/pegasus-large-chinese-cluecorpussmall", "uer/bart-large-chinese-cluecorpussmall", "uer/chinese_roberta_L-10_H-768", "uer/chinese_roberta_L-6_H-768", "uer/pegasus-base-chinese-cluecorpussmall", "uer/roberta-small-word-chinese-cluecorpussmall", "uer/roberta-medium-word-chinese-cluecorpussmall", "uer/roberta-small-wwm-chinese-cluecorpussmall", "uer/chinese_roberta_L-12_H-128", "uer/chinese_roberta_L-10_H-128", "uer/chinese_roberta_L-12_H-512", "uer/chinese_roberta_L-2_H-512", "uer/chinese_roberta_L-6_H-256", "uer/chinese_roberta_L-8_H-256", "uer/roberta-base-finetuned-ifeng-chinese", "uer/roberta-mini-word-chinese-cluecorpussmall", "uer/roberta-medium-wwm-chinese-cluecorpussmall", "uer/roberta-base-wwm-chinese-cluecorpussmall", "uer/roberta-tiny-wwm-chinese-cluecorpussmall", "uer/gpt2-medium-chinese-cluecorpussmall", "uer/roberta-xlarge-wwm-chinese-cluecorpussmall", 
"uer/chinese_roberta_L-10_H-256", "uer/chinese_roberta_L-6_H-128", "uer/chinese_roberta_L-8_H-128", "uer/chinese_roberta_L-2_H-256", "uer/chinese_roberta_L-10_H-512", "uer/chinese_roberta_L-8_H-768", "uer/chinese_roberta_L-4_H-128", "uer/chinese_roberta_L-6_H-512", "uer/chinese_roberta_L-4_H-768", "uer/chinese_roberta_L-12_H-256", "uer/chinese_roberta_L-2_H-768", "uer/roberta-large-wwm-chinese-cluecorpussmall", "uer/roberta-mini-wwm-chinese-cluecorpussmall", "uer/gpt2-large-chinese-cluecorpussmall" ]
[]
[ "liyucheng/selective_context", "yhavinga/dutch-tokenizer-arena", "JanDalhuysen/ChatPDF", "zouguojun/chatPDF", "eson/kplug", "divano/test", "shengzi/uer-gpt2-chinese-cluecorpussmall", "betterme/Nice", "koalaYuan/gradio-demo", "alwaysbetter1314/gradio-start", "rjx/rjxai-text-cls", "rjx/rjxai-text-cls-en", "shengzi/uer-gpt2-chinese-ancient", "HsiangNianian/uer-gpt2-chinese-ancient", "erberry/botchatter", "yunqiang/uer-gpt2-chinese-cluecorpussmall", "jetwill/uer-gpt2-chinese-cluecorpussmall", "andrew3279/chinese_story_telling", "mqha/tryGPT2", "jerry3638/dml_st_01", "huolongguo10/uer-gpt2-chinese-cluecorpussmall", "JerryLiJinyi/Prompt-Compression-Toolbox", "orozcohsu/test4", "qgyd2021/gpt2_chat", "chenjg/gpt2-chinese-couplet", "lewiswu1209/gpt2-chinese-couplet", "shengzi/uer-gpt2-chinese-lyric", "o-w-o/uer-gpt2-chinese-lyric", "zuozuo/zuo", "lewiswu1209/gpt2-chinese-poem", "Swanland/Poetry", "xinhe0616/Traditional_Chinese_Poem_Generator", "HaydenYH/MKQA", "leadingbridge/final_project", "linkmancheng/uer-roberta-base-chinese-extractive-qa", "zdxpan/aibb", "johnoe/uer-roberta-base-chinese-extractive-qa", "darkstar94/roberta-base-chinese-extractive-qa", "WXM2000/Named_Entity_recognition_Chinese", "aimaswx/text_classify", "zzh2004/test", "WXM2000/emotion_recognition_Chinese", "olive100/text-classification", "Faizan15/text-classification", "sanshizhang/forgoogleweb", "shenfu/uer-roberta-base-word-chinese-cluecorpussmall", "svjack/Harry-Potter-Knowledge-Question-Answer-in-Chinese", "malusama/bert", "JasonTuTW/mediatek-explain" ]
https://aclanthology.org/2023.acl-demo.21.bib
https://aclanthology.org/2023.acl-demo.21/
@inproceedings{dalvi-etal-2023-neurox, title = "{N}euro{X} Library for Neuron Analysis of Deep {NLP} Models", author = "Dalvi, Fahim and Sajjad, Hassan and Durrani, Nadir", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.21", doi = "10.18653/v1/2023.acl-demo.21", pages = "226--234", abstract = "Neuron analysis provides insights into how knowledge is structured in representations and discovers the role of neurons in the network. In addition to developing an understanding of our models, neuron analysis enables various applications such as debiasing, domain adaptation and architectural search. We present NeuroX, a comprehensive open-source toolkit to conduct neuron analysis of natural language processing models. It implements various interpretation methods under a unified API, and provides a framework for data processing and evaluation, thus making it easier for researchers and practitioners to perform neuron analysis. The Python toolkit is available at \url{https://www.github.com/fdalvi/NeuroX}. Demo video available at: \url{https://youtu.be/mLhs2YMx4u8}", }
Neuron analysis provides insights into how knowledge is structured in representations and discovers the role of neurons in the network. In addition to developing an understanding of our models, neuron analysis enables various applications such as debiasing, domain adaptation and architectural search. We present NeuroX, a comprehensive open-source toolkit to conduct neuron analysis of natural language processing models. It implements various interpretation methods under a unified API, and provides a framework for data processing and evaluation, thus making it easier for researchers and practitioners to perform neuron analysis. The Python toolkit is available at \url{https://www.github.com/fdalvi/NeuroX}. Demo video available at: \url{https://youtu.be/mLhs2YMx4u8}
[ "Dalvi, Fahim", "Sajjad, Hassan", "Durrani, Nadir" ]
NeuroX Library for Neuron Analysis of Deep NLP Models
acl-demo.21
Poster
2305.17073
[ "https://github.com/fdalvi/NeuroX" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.22.bib
https://aclanthology.org/2023.acl-demo.22/
@inproceedings{gu-hahnloser-2023-scilit, title = "{S}ci{L}it: A Platform for Joint Scientific Literature Discovery, Summarization and Citation Generation", author = "Gu, Nianlong and Hahnloser, Richard H.R.", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.22", doi = "10.18653/v1/2023.acl-demo.22", pages = "235--246", abstract = "Scientific writing involves retrieving, summarizing, and citing relevant papers, which can be time-consuming processes. Although in many workflows these processes are serially linked, there are opportunities for natural language processing (NLP) to provide end-to-end assistive tools. We propose SciLit, a pipeline that automatically recommends relevant papers, extracts highlights, and suggests a reference sentence as a citation of a paper, taking into consideration the user-provided context and keywords. SciLit efficiently recommends papers from large databases of hundreds of millions of papers using a two-stage pre-fetching and re-ranking literature search system that flexibly deals with addition and removal of a paper database. We provide a convenient user interface that displays the recommended papers as extractive summaries and that offers abstractively-generated citing sentences which are aligned with the provided context and which mention the chosen keyword(s). Our assistive tool for literature discovery and scientific writing is available at \url{https://scilit.vercel.app}", }
Scientific writing involves retrieving, summarizing, and citing relevant papers, which can be time-consuming processes. Although in many workflows these processes are serially linked, there are opportunities for natural language processing (NLP) to provide end-to-end assistive tools. We propose SciLit, a pipeline that automatically recommends relevant papers, extracts highlights, and suggests a reference sentence as a citation of a paper, taking into consideration the user-provided context and keywords. SciLit efficiently recommends papers from large databases of hundreds of millions of papers using a two-stage pre-fetching and re-ranking literature search system that flexibly deals with addition and removal of a paper database. We provide a convenient user interface that displays the recommended papers as extractive summaries and that offers abstractively-generated citing sentences which are aligned with the provided context and which mention the chosen keyword(s). Our assistive tool for literature discovery and scientific writing is available at \url{https://scilit.vercel.app}
[ "Gu, Nianlong", "Hahnloser, Richard H.R." ]
SciLit: A Platform for Joint Scientific Literature Discovery, Summarization and Citation Generation
acl-demo.22
Poster
2306.03535
[ "https://github.com/nianlonggu/scilit" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.23.bib
https://aclanthology.org/2023.acl-demo.23/
@inproceedings{jenkins-etal-2023-massively, title = "Massively Multi-Lingual Event Understanding: Extraction, Visualization, and Search", author = "Jenkins, Chris and Agarwal, Shantanu and Barry, Joel and Fincke, Steven and Boschee, Elizabeth", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.23", doi = "10.18653/v1/2023.acl-demo.23", pages = "247--256", abstract = "In this paper, we present ISI-Clear, a state-of-the-art, cross-lingual, zero-shot event extraction system and accompanying user interface for event visualization {\&} search. Using only English training data, ISI-Clear makes global events available on-demand, processing user-supplied text in 100 languages ranging from Afrikaans to Yiddish. We provide multiple event-centric views of extracted events, including both a graphical representation and a document-level summary. We also integrate existing cross-lingual search algorithms with event extraction capabilities to provide cross-lingual event-centric search, allowing English-speaking users to search over events automatically extracted from a corpus of non-English documents, using either English natural language queries (e.g. {``}cholera outbreaks in Iran{''}) or structured queries (e.g. find all events of type Disease-Outbreak with agent {``}cholera{''} and location {``}Iran{''}).", }
In this paper, we present ISI-Clear, a state-of-the-art, cross-lingual, zero-shot event extraction system and accompanying user interface for event visualization {\&} search. Using only English training data, ISI-Clear makes global events available on-demand, processing user-supplied text in 100 languages ranging from Afrikaans to Yiddish. We provide multiple event-centric views of extracted events, including both a graphical representation and a document-level summary. We also integrate existing cross-lingual search algorithms with event extraction capabilities to provide cross-lingual event-centric search, allowing English-speaking users to search over events automatically extracted from a corpus of non-English documents, using either English natural language queries (e.g. {``}cholera outbreaks in Iran{''}) or structured queries (e.g. find all events of type Disease-Outbreak with agent {``}cholera{''} and location {``}Iran{''}).
[ "Jenkins, Chris", "Agarwal, Shantanu", "Barry, Joel", "Fincke, Steven", "Boschee, Elizabeth" ]
Massively Multi-Lingual Event Understanding: Extraction, Visualization, and Search
acl-demo.23
Poster
2305.10561
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.24.bib
https://aclanthology.org/2023.acl-demo.24/
@inproceedings{dabre-etal-2023-yanmtt, title = "{YANMTT}: Yet Another Neural Machine Translation Toolkit", author = "Dabre, Raj and Kanojia, Diptesh and Sawant, Chinmay and Sumita, Eiichiro", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.24", doi = "10.18653/v1/2023.acl-demo.24", pages = "257--263", abstract = "In this paper, we present our open-source neural machine translation (NMT) toolkit called {``}Yet Another Neural Machine Translation Toolkit{''} abbreviated as YANMTT - \url{https://github.com/prajdabre/yanmtt}, which is built on top of the HuggingFace Transformers library. YANMTT focuses on transfer learning and enables easy pre-training and fine-tuning of sequence-to-sequence models at scale. It can be used for training parameter-heavy models with minimal parameter sharing and efficient, lightweight models via heavy parameter sharing. Additionally, it supports parameter-efficient fine-tuning (PEFT) through adapters and prompts. Our toolkit also comes with a user interface that can be used to demonstrate these models and visualize various parts of the model. Apart from these core features, our toolkit also provides other advanced functionalities such as but not limited to document/multi-source NMT, simultaneous NMT, mixtures-of-experts, model compression and continual learning.", }
In this paper, we present our open-source neural machine translation (NMT) toolkit called {``}Yet Another Neural Machine Translation Toolkit{''} abbreviated as YANMTT - \url{https://github.com/prajdabre/yanmtt}, which is built on top of the HuggingFace Transformers library. YANMTT focuses on transfer learning and enables easy pre-training and fine-tuning of sequence-to-sequence models at scale. It can be used for training parameter-heavy models with minimal parameter sharing and efficient, lightweight models via heavy parameter sharing. Additionally, it supports parameter-efficient fine-tuning (PEFT) through adapters and prompts. Our toolkit also comes with a user interface that can be used to demonstrate these models and visualize various parts of the model. Apart from these core features, our toolkit also provides other advanced functionalities such as but not limited to document/multi-source NMT, simultaneous NMT, mixtures-of-experts, model compression and continual learning.
[ "Dabre, Raj", "Kanojia, Diptesh", "Sawant, Chinmay", "Sumita, Eiichiro" ]
YANMTT: Yet Another Neural Machine Translation Toolkit
acl-demo.24
Poster
2108.11126
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.acl-demo.25.bib
https://aclanthology.org/2023.acl-demo.25/
@inproceedings{lee-etal-2023-xmd, title = "{XMD}: An End-to-End Framework for Interactive Explanation-Based Debugging of {NLP} Models", author = "Lee, Dong-Ho and Kadakia, Akshen and Joshi, Brihi and Chan, Aaron and Liu, Ziyi and Narahari, Kiran and Shibuya, Takashi and Mitani, Ryosuke and Sekiya, Toshiyuki and Pujara, Jay and Ren, Xiang", editor = "Bollegala, Danushka and Huang, Ruihong and Ritter, Alan", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-demo.25", doi = "10.18653/v1/2023.acl-demo.25", pages = "264--273", abstract = "NLP models are susceptible to learning spurious biases (i.e., bugs) that work on some datasets but do not properly reflect the underlying task. Explanation-based model debugging aims to resolve spurious biases by showing human users explanations of model behavior, asking users to give feedback on the behavior, then using the feedback to update the model. While existing model debugging methods have shown promise, their prototype-level implementations provide limited practical utility. Thus, we propose XMD: the first open-source, end-to-end framework for explanation-based model debugging. Given task- or instance-level explanations, users can flexibly provide various forms of feedback via an intuitive, web-based UI. After receiving user feedback, XMD automatically updates the model in real time, by regularizing the model so that its explanations align with the user feedback. The new model can then be easily deployed into real-world applications via Hugging Face. Using XMD, we can improve the model{'}s OOD performance on text classification tasks by up to 18{\%}.", }
NLP models are susceptible to learning spurious biases (i.e., bugs) that work on some datasets but do not properly reflect the underlying task. Explanation-based model debugging aims to resolve spurious biases by showing human users explanations of model behavior, asking users to give feedback on the behavior, then using the feedback to update the model. While existing model debugging methods have shown promise, their prototype-level implementations provide limited practical utility. Thus, we propose XMD: the first open-source, end-to-end framework for explanation-based model debugging. Given task- or instance-level explanations, users can flexibly provide various forms of feedback via an intuitive, web-based UI. After receiving user feedback, XMD automatically updates the model in real time, by regularizing the model so that its explanations align with the user feedback. The new model can then be easily deployed into real-world applications via Hugging Face. Using XMD, we can improve the model{'}s OOD performance on text classification tasks by up to 18{\%}.
[ "Lee, Dong-Ho", "Kadakia, Akshen", "Joshi, Brihi", "Chan, Aaron", "Liu, Ziyi", "Narahari, Kiran", "Shibuya, Takashi", "Mitani, Ryosuke", "Sekiya, Toshiyuki", "Pujara, Jay", "Ren, Xiang" ]
XMD: An End-to-End Framework for Interactive Explanation-Based Debugging of NLP Models
acl-demo.25
Poster
2210.16978
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]