Datasets:

Column                       Type            Range
bibtex_url                   stringlengths   41 to 50
proceedings                  stringlengths   38 to 47
bibtext                      stringlengths   709 to 3.56k
abstract                     stringlengths   17 to 2.11k
authors                      listlengths     1 to 72
title                        stringlengths   12 to 207
id                           stringlengths   7 to 16
type                         stringclasses   2 values
arxiv_id                     stringlengths   0 to 10
GitHub                       listlengths     1 to 1
paper_page                   stringclasses   276 values
n_linked_authors             int64           -1 to 13
upvotes                      int64           -1 to 14
num_comments                 int64           -1 to 11
n_authors                    int64           -1 to 44
paper_page_exists_pre_conf   int64           0 to 1
Models                       listlengths     0 to 100
Datasets                     listlengths     0 to 14
Spaces                       listlengths     0 to 100
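
A minimal sketch of how the rows below might be loaded and queried, assuming the records have been exported to a local JSON Lines file (the filename is a hypothetical stand-in):

# Load the rows with the `datasets` library and inspect the fields listed in
# the schema above. The filename is a hypothetical stand-in for however the
# records are exported.
from datasets import load_dataset

ds = load_dataset("json", data_files="findings_acl_2023.jsonl", split="train")
print(ds.column_names)  # should match the schema above

# Example query: papers whose Hugging Face paper page existed before the
# conference and that link at least one model.
hits = ds.filter(
    lambda r: r["paper_page_exists_pre_conf"] == 1 and len(r["Models"]) > 0
)
for row in hits:
    print(row["title"], row["paper_page"])
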
https://aclanthology.org/2023.findings-acl.152.bib
https://aclanthology.org/2023.findings-acl.152/
@inproceedings{zhang-etal-2023-contrastive, title = "Contrastive Learning of Sociopragmatic Meaning in Social Media", author = "Zhang, Chiyu and Abdul-Mageed, Muhammad and Jawahar, Ganesh", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.152", doi = "10.18653/v1/2023.findings-acl.152", pages = "2405--2439", abstract = "Recent progress in representation and contrastive learning in NLP has not widely considered the class of sociopragmatic meaning (i.e., meaning in interaction within different language communities). To bridge this gap, we propose a novel framework for learning task-agnostic representations transferable to a wide range of sociopragmatic tasks (e.g., emotion, hate speech, humor, sarcasm). Our framework outperforms other contrastive learning frameworks for both in-domain and out-of-domain data, across both the general and few-shot settings. For example, compared to two popular pre-trained language models, our model obtains an improvement of 11.66 average F1 on 16 datasets when fine-tuned on only 20 training samples per dataset. We also show that our framework improves uniformity and preserves the semantic structure of representations. Our code is available at: \url{https://github.com/UBC-NLP/infodcl}", }
Recent progress in representation and contrastive learning in NLP has not widely considered the class of sociopragmatic meaning (i.e., meaning in interaction within different language communities). To bridge this gap, we propose a novel framework for learning task-agnostic representations transferable to a wide range of sociopragmatic tasks (e.g., emotion, hate speech, humor, sarcasm). Our framework outperforms other contrastive learning frameworks for both in-domain and out-of-domain data, across both the general and few-shot settings. For example, compared to two popular pre-trained language models, our model obtains an improvement of 11.66 average F1 on 16 datasets when fine-tuned on only 20 training samples per dataset. We also show that our framework improves uniformity and preserves the semantic structure of representations. Our code is available at: \url{https://github.com/UBC-NLP/infodcl}
[ "Zhang, Chiyu", "Abdul-Mageed, Muhammad", "Jawahar, Ganesh" ]
Contrastive Learning of Sociopragmatic Meaning in Social Media
findings-acl.152
Poster
2203.07648
[ "https://github.com/ubc-nlp/infodcl" ]
https://huggingface.co/papers/2203.07648
3
0
0
3
1
[ "UBC-NLP/InfoDCL-hashtag", "UBC-NLP/InfoDCL-emoji", "UBC-NLP/InfoDCL-BERTweet-hashtag", "UBC-NLP/InfoDCL-BERTweet-emoji" ]
[]
[]
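
This record lists four released InfoDCL checkpoints under Models; a minimal sketch for embedding a post with one of them, assuming the checkpoint exposes the standard Hugging Face encoder interface:

# Embed a short text with one of the checkpoints listed in this record.
import torch
from transformers import AutoModel, AutoTokenizer

name = "UBC-NLP/InfoDCL-hashtag"  # taken from the Models list above
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

batch = tokenizer("I love this movie!", return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (1, seq_len, hidden_size)
sentence_vec = hidden[:, 0]  # first-token vector as a sentence representation
print(sentence_vec.shape)
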
https://aclanthology.org/2023.findings-acl.153.bib
https://aclanthology.org/2023.findings-acl.153/
@inproceedings{wang-etal-2023-noisy, title = "Noisy Positive-Unlabeled Learning with Self-Training for Speculative Knowledge Graph Reasoning", author = "Wang, Ruijie and Li, Baoyu and Lu, Yichen and Sun, Dachun and Li, Jinning and Yan, Yuchen and Liu, Shengzhong and Tong, Hanghang and Abdelzaher, Tarek", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.153", doi = "10.18653/v1/2023.findings-acl.153", pages = "2440--2457", abstract = "This paper studies speculative reasoning task on real-world knowledge graphs (KG) that contain both false negative issue (i.e., potential true facts being excluded) and false positive issue (i.e., unreliable or outdated facts being included). State-of-the-art methods fall short in the speculative reasoning ability, as they assume the correctness of a fact is solely determined by its presence in KG, making them vulnerable to false negative/positive issues. The new reasoning task is formulated as a noisy Positive-Unlabeled learning problem. We propose a variational framework, namely nPUGraph, that jointly estimates the correctness of both collected and uncollected facts (which we call label posterior) and updates model parameters during training. The label posterior estimation facilitates speculative reasoning from two perspectives. First, it improves the robustness of a label posterior-aware graph encoder against false positive links. Second, it identifies missing facts to provide high-quality grounds of reasoning. They are unified in a simple yet effective self-training procedure. Empirically, extensive experiments on three benchmark KG and one Twitter dataset with various degrees of false negative/positive cases demonstrate the effectiveness of nPUGraph.", }
This paper studies the speculative reasoning task on real-world knowledge graphs (KGs) that contain both a false negative issue (i.e., potential true facts being excluded) and a false positive issue (i.e., unreliable or outdated facts being included). State-of-the-art methods fall short in speculative reasoning ability, as they assume the correctness of a fact is solely determined by its presence in the KG, making them vulnerable to false negative/positive issues. The new reasoning task is formulated as a noisy Positive-Unlabeled learning problem. We propose a variational framework, namely nPUGraph, that jointly estimates the correctness of both collected and uncollected facts (which we call the label posterior) and updates model parameters during training. The label posterior estimation facilitates speculative reasoning from two perspectives. First, it improves the robustness of a label posterior-aware graph encoder against false positive links. Second, it identifies missing facts to provide high-quality grounds for reasoning. They are unified in a simple yet effective self-training procedure. Empirically, extensive experiments on three benchmark KGs and one Twitter dataset with various degrees of false negative/positive cases demonstrate the effectiveness of nPUGraph.
[ "Wang, Ruijie", "Li, Baoyu", "Lu, Yichen", "Sun, Dachun", "Li, Jinning", "Yan, Yuchen", "Liu, Shengzhong", "Tong, Hanghang", "Abdelzaher, Tarek" ]
Noisy Positive-Unlabeled Learning with Self-Training for Speculative Knowledge Graph Reasoning
findings-acl.153
Poster
2306.07512
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
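
nPUGraph's variational estimator is more involved than can be shown here, but the standard non-negative PU risk that noisy Positive-Unlabeled formulations build on can be sketched as follows (the class prior is assumed known; the logistic loss is a placeholder choice):

# Non-negative PU (nnPU) risk estimator in the style of Kiryo et al. (2017).
# `scores_p` / `scores_u` are classifier logits for positive and unlabeled
# examples; `prior` is the positive class prior.
import torch
import torch.nn.functional as F

def nnpu_risk(scores_p, scores_u, prior):
    loss_p_pos = F.softplus(-scores_p).mean()  # positives scored as +1
    loss_p_neg = F.softplus(scores_p).mean()   # positives scored as -1
    loss_u_neg = F.softplus(scores_u).mean()   # unlabeled scored as -1
    neg_risk = loss_u_neg - prior * loss_p_neg
    # Clamp at zero so the empirical negative risk cannot go negative.
    return prior * loss_p_pos + torch.clamp(neg_risk, min=0.0)

risk = nnpu_risk(torch.randn(32), torch.randn(128), prior=0.3)
print(risk.item())
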
https://aclanthology.org/2023.findings-acl.154.bib
https://aclanthology.org/2023.findings-acl.154/
@inproceedings{li-etal-2023-across, title = "{ACROSS}: An Alignment-based Framework for Low-Resource Many-to-One Cross-Lingual Summarization", author = "Li, Peiyao and Zhang, Zhengkun and Wang, Jun and Li, Liang and Jatowt, Adam and Yang, Zhenglu", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.154", doi = "10.18653/v1/2023.findings-acl.154", pages = "2458--2472", abstract = "This research addresses the challenges of Cross-Lingual Summarization (CLS) in low-resource scenarios and over imbalanced multilingual data. Existing CLS studies mostly resort to pipeline frameworks or multi-task methods in bilingual settings. However, they ignore the data imbalance in multilingual scenarios and do not utilize the high-resource monolingual summarization data. In this paper, we propose the Aligned CROSs-lingual Summarization (ACROSS) model to tackle these issues. Our framework aligns low-resource cross-lingual data with high-resource monolingual data via contrastive and consistency loss, which help enrich low-resource information for high-quality summaries. In addition, we introduce a data augmentation method that can select informative monolingual sentences, which facilitates a deep exploration of high-resource information and introduce new information for low-resource languages. Experiments on the CrossSum dataset show that ACROSS outperforms baseline models and obtains consistently dominant performance on 45 language pairs.", }
This research addresses the challenges of Cross-Lingual Summarization (CLS) in low-resource scenarios and over imbalanced multilingual data. Existing CLS studies mostly resort to pipeline frameworks or multi-task methods in bilingual settings. However, they ignore the data imbalance in multilingual scenarios and do not utilize high-resource monolingual summarization data. In this paper, we propose the Aligned CROSs-lingual Summarization (ACROSS) model to tackle these issues. Our framework aligns low-resource cross-lingual data with high-resource monolingual data via contrastive and consistency losses, which help enrich low-resource information for high-quality summaries. In addition, we introduce a data augmentation method that can select informative monolingual sentences, which facilitates a deep exploration of high-resource information and introduces new information for low-resource languages. Experiments on the CrossSum dataset show that ACROSS outperforms baseline models and obtains consistently dominant performance on 45 language pairs.
[ "Li, Peiyao", "Zhang, Zhengkun", "Wang, Jun", "Li, Liang", "Jatowt, Adam", "Yang, Zhenglu" ]
ACROSS: An Alignment-based Framework for Low-Resource Many-to-One Cross-Lingual Summarization
findings-acl.154
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
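
ACROSS's exact objective is not reproduced here; a textbook InfoNCE-style alignment term of the kind used to pull paired cross-lingual and monolingual representations together looks like this:

# Generic InfoNCE loss between paired representations (e.g., a low-resource
# cross-lingual encoding and its high-resource monolingual counterpart).
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.07):
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0))   # the i-th pair is the positive
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(16, 256), torch.randn(16, 256))
print(loss.item())
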
https://aclanthology.org/2023.findings-acl.155.bib
https://aclanthology.org/2023.findings-acl.155/
@inproceedings{wang-etal-2023-rfid, title = "{RF}i{D}: Towards Rational Fusion-in-Decoder for Open-Domain Question Answering", author = "Wang, Cunxiang and Yu, Haofei and Zhang, Yue", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.155", doi = "10.18653/v1/2023.findings-acl.155", pages = "2473--2481", abstract = "Open-Domain Question Answering (ODQA) systems necessitate a reader model capable of generating answers by simultaneously referring to multiple passages. Although representative models like Fusion-in-Decoder (FiD) have been proposed to address this challenge, these systems can inadvertently rely on spurious features instead of genuine causal relationships between the question and the passages to generate answers. To counter this problem, we introduce the Rational Fusion-in-Decoder (RFiD) model. Our model leverages the encoders of FiD to differentiate between causal relationships and spurious features, subsequently guiding the decoder to generate answers informed by this discernment. Experimental results on two ODQA datasets, Natural Questions (NQ) and TriviaQA (TQ), demonstrate that our model surpasses previous methods, achieving improvements of up to 1.5 and 0.7 in Exact Match scores on NQ, and exhibits an enhanced ability to identify causal relationships.", }
Open-Domain Question Answering (ODQA) systems necessitate a reader model capable of generating answers by simultaneously referring to multiple passages. Although representative models like Fusion-in-Decoder (FiD) have been proposed to address this challenge, these systems can inadvertently rely on spurious features instead of genuine causal relationships between the question and the passages to generate answers. To counter this problem, we introduce the Rational Fusion-in-Decoder (RFiD) model. Our model leverages the encoders of FiD to differentiate between causal relationships and spurious features, subsequently guiding the decoder to generate answers informed by this discernment. Experimental results on two ODQA datasets, Natural Questions (NQ) and TriviaQA (TQ), demonstrate that our model surpasses previous methods, achieving improvements of up to 1.5 and 0.7 in Exact Match scores on NQ, and exhibits an enhanced ability to identify causal relationships.
[ "Wang, Cunxiang", "Yu, Haofei", "Zhang, Yue" ]
RFiD: Towards Rational Fusion-in-Decoder for Open-Domain Question Answering
findings-acl.155
Poster
2305.17041
[ "https://github.com/wangcunxiang/rfid" ]
-1
-1
-1
-1
0
[]
[]
[]
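
RFiD builds on Fusion-in-Decoder; as a rough illustration of the underlying fusion step (using a vanilla t5-small stand-in, not any RFiD weights), each question-passage pair is encoded independently and the encoder states are concatenated before decoding:

# Fusion-in-Decoder-style inference sketch with a plain T5 checkpoint.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

tok = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

question = "who wrote Hamlet?"
passages = [
    "Hamlet is a tragedy written by William Shakespeare.",
    "The Globe Theatre was built in 1599.",
]
inputs = [f"question: {question} context: {p}" for p in passages]
enc = tok(inputs, return_tensors="pt", padding=True, truncation=True)

with torch.no_grad():
    states = model.encoder(**enc).last_hidden_state  # (n_passages, len, d)
fused = states.reshape(1, -1, states.size(-1))       # concatenate passages
mask = enc.attention_mask.reshape(1, -1)

out = model.generate(
    encoder_outputs=BaseModelOutput(last_hidden_state=fused),
    attention_mask=mask,
    max_new_tokens=16,
)
print(tok.decode(out[0], skip_special_tokens=True))
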
https://aclanthology.org/2023.findings-acl.156.bib
https://aclanthology.org/2023.findings-acl.156/
@inproceedings{song-etal-2023-unsupervised, title = "Unsupervised Keyphrase Extraction by Learning Neural Keyphrase Set Function", author = "Song, Mingyang and Jiang, Haiyun and Liu, Lemao and Shi, Shuming and Jing, Liping", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.156", doi = "10.18653/v1/2023.findings-acl.156", pages = "2482--2494", abstract = "We create a \textit{paradigm shift} concerning building unsupervised keyphrase extraction systems in this paper. Instead of modeling the relevance between an individual candidate phrase and the document as in the commonly used framework, we formulate the unsupervised keyphrase extraction task as a document-set matching problem from \textit{a set-wise perspective}, in which the document and the candidate set are globally matched in the semantic space to particularly take into account the interactions among all candidate phrases. Since it is intractable to exactly extract the keyphrase set by the matching function during the inference, we propose an approximate approach, which obtains the candidate subsets via a set extractor agent learned by reinforcement learning. Exhaustive experimental results demonstrate the effectiveness of our model, which outperforms the recent state-of-the-art unsupervised keyphrase extraction baselines by a large margin.", }
We create a \textit{paradigm shift} concerning building unsupervised keyphrase extraction systems in this paper. Instead of modeling the relevance between an individual candidate phrase and the document as in the commonly used framework, we formulate the unsupervised keyphrase extraction task as a document-set matching problem from \textit{a set-wise perspective}, in which the document and the candidate set are globally matched in the semantic space to particularly take into account the interactions among all candidate phrases. Since it is intractable to exactly extract the keyphrase set by the matching function during the inference, we propose an approximate approach, which obtains the candidate subsets via a set extractor agent learned by reinforcement learning. Exhaustive experimental results demonstrate the effectiveness of our model, which outperforms the recent state-of-the-art unsupervised keyphrase extraction baselines by a large margin.
[ "Song, Mingyang", "Jiang, Haiyun", "Liu, Lemao", "Shi, Shuming", "Jing, Liping" ]
Unsupervised Keyphrase Extraction by Learning Neural Keyphrase Set Function
findings-acl.156
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
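
As a rough illustration of the set-wise matching idea (not the paper's learned set function), a candidate keyphrase subset can be scored against the document as a whole rather than phrase by phrase; the encoder producing the vectors is left abstract:

# Score a candidate keyphrase subset by document-to-set cosine similarity.
import numpy as np

def set_match_score(doc_vec, phrase_vecs):
    set_vec = np.mean(phrase_vecs, axis=0)  # aggregate the candidate subset
    num = float(doc_vec @ set_vec)
    den = np.linalg.norm(doc_vec) * np.linalg.norm(set_vec) + 1e-12
    return num / den

rng = np.random.default_rng(0)
doc = rng.normal(size=128)                             # placeholder document vector
candidates = [rng.normal(size=128) for _ in range(3)]  # placeholder phrase vectors
print(set_match_score(doc, candidates))
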
https://aclanthology.org/2023.findings-acl.157.bib
https://aclanthology.org/2023.findings-acl.157/
@inproceedings{zhang-etal-2023-diffusion, title = "Diffusion Theory as a Scalpel: Detecting and Purifying Poisonous Dimensions in Pre-trained Language Models Caused by Backdoor or Bias", author = "Zhang, Zhiyuan and Chen, Deli and Zhou, Hao and Meng, Fandong and Zhou, Jie and Sun, Xu", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.157", doi = "10.18653/v1/2023.findings-acl.157", pages = "2495--2517", abstract = "Pre-trained Language Models (PLMs) may be poisonous with backdoors or bias injected by the suspicious attacker during the fine-tuning process. A core challenge of purifying potentially poisonous PLMs is precisely finding poisonous dimensions. To settle this issue, we propose the Fine-purifying approach, which utilizes the diffusion theory to study the dynamic process of fine-tuning for finding potentially poisonous dimensions. According to the relationship between parameter drifts and Hessians of different dimensions, we can detect poisonous dimensions with abnormal dynamics, purify them by resetting them to clean pre-trained weights, and then fine-tune the purified weights on a small clean dataset. To the best of our knowledge, we are the first to study the dynamics guided by the diffusion theory for safety or defense purposes. Experimental results validate the effectiveness of Fine-purifying even with a small clean dataset.", }
Pre-trained Language Models (PLMs) may be poisonous, with backdoors or bias injected by a suspicious attacker during the fine-tuning process. A core challenge of purifying potentially poisonous PLMs is precisely finding the poisonous dimensions. To settle this issue, we propose the Fine-purifying approach, which utilizes diffusion theory to study the dynamic process of fine-tuning in order to find potentially poisonous dimensions. According to the relationship between parameter drifts and Hessians of different dimensions, we can detect poisonous dimensions with abnormal dynamics, purify them by resetting them to clean pre-trained weights, and then fine-tune the purified weights on a small clean dataset. To the best of our knowledge, we are the first to study the fine-tuning dynamics guided by diffusion theory for safety or defense purposes. Experimental results validate the effectiveness of Fine-purifying even with a small clean dataset.
[ "Zhang, Zhiyuan", "Chen, Deli", "Zhou, Hao", "Meng, F", "ong", "Zhou, Jie", "Sun, Xu" ]
Diffusion Theory as a Scalpel: Detecting and Purifying Poisonous Dimensions in Pre-trained Language Models Caused by Backdoor or Bias
findings-acl.157
Poster
2305.04547
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
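
The Fine-purifying criterion combines per-dimension parameter drift with diffusion-theoretic Hessian estimates; a much-simplified, drift-only heuristic conveying the "reset poisonous dimensions" idea might look like:

# Flag parameter coordinates with abnormally large fine-tuning drift and
# reset them to the clean pre-trained values. The Hessian-based part of the
# paper's criterion is omitted here.
import torch

def purify(pretrained, finetuned, z_thresh=4.0):
    drift = (finetuned - pretrained).abs()
    z = (drift - drift.mean()) / (drift.std() + 1e-8)
    poisonous = z > z_thresh  # outlier drift marks a suspect dimension
    return torch.where(poisonous, pretrained, finetuned)

ft = torch.zeros(1000)
ft[3] = 9.0  # one dimension drifted far more than the rest
print(purify(torch.zeros(1000), ft)[3])  # reset back to 0.0
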
https://aclanthology.org/2023.findings-acl.158.bib
https://aclanthology.org/2023.findings-acl.158/
@inproceedings{ossowski-hu-2023-retrieving, title = "Retrieving Multimodal Prompts for Generative Visual Question Answering", author = "Ossowski, Timothy and Hu, Junjie", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.158", doi = "10.18653/v1/2023.findings-acl.158", pages = "2518--2535", abstract = "Recent years have witnessed impressive results of pre-trained vision-language models on knowledge-intensive tasks such as visual question answering (VQA). Despite the recent advances in VQA, existing methods mainly adopt a discriminative formulation that predicts answers within a pre-defined label set, leading to easy overfitting on low-resource domains (e.g., medicine) and poor generalization under domain shift to another dataset. To tackle this limitation, we propose a novel generative model enhanced by multimodal prompt retrieval (MPR) that integrates retrieved prompts and multimodal features to generate answers in free text. Our generative model enables rapid zero-shot dataset adaptation to unseen data distributions and open-set answer labels across datasets. Our experiments on medical VQA tasks show that MPR outperforms its non-retrieval counterpart by up to 30{\%} accuracy points in a few-shot domain adaptation setting.", }
Recent years have witnessed impressive results of pre-trained vision-language models on knowledge-intensive tasks such as visual question answering (VQA). Despite the recent advances in VQA, existing methods mainly adopt a discriminative formulation that predicts answers within a pre-defined label set, leading to easy overfitting on low-resource domains (e.g., medicine) and poor generalization under domain shift to another dataset. To tackle this limitation, we propose a novel generative model enhanced by multimodal prompt retrieval (MPR) that integrates retrieved prompts and multimodal features to generate answers in free text. Our generative model enables rapid zero-shot dataset adaptation to unseen data distributions and open-set answer labels across datasets. Our experiments on medical VQA tasks show that MPR outperforms its non-retrieval counterpart by up to 30{\%} accuracy points in a few-shot domain adaptation setting.
[ "Ossowski, Timothy", "Hu, Junjie" ]
Retrieving Multimodal Prompts for Generative Visual Question Answering
findings-acl.158
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.159.bib
https://aclanthology.org/2023.findings-acl.159/
@inproceedings{khincha-etal-2023-infosync, title = "{I}nfo{S}ync: Information Synchronization across Multilingual Semi-structured Tables", author = "Khincha, Siddharth and Jain, Chelsi and Gupta, Vivek and Kataria, Tushar and Zhang, Shuo", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.159", doi = "10.18653/v1/2023.findings-acl.159", pages = "2536--2559", abstract = "Information Synchronization of semi-structured data across languages is challenging. For example, Wikipedia tables in one language need to be synchronized with others. To address this problem, we introduce a new dataset InfoSync and a two-step method for tabular synchronization. InfoSync contains 100K entity-centric tables (Wikipedia Infoboxes) across 14 languages, of which a subset ({\textasciitilde}3.5K pairs) are manually annotated. The proposed method includes 1) Information Alignment to map rows and 2) Information Update for updating missing/outdated information for aligned tables across multilingual tables. When evaluated on InfoSync, information alignment achieves an F1 score of 87.91 (en {\textless}-{\textgreater} non-en). To evaluate information updation, we perform human-assisted Wikipedia edits on Infoboxes for 532 table pairs. Our approach obtains an acceptance rate of 77.28{\%} on Wikipedia, showing the effectiveness of the proposed method.", }
Information Synchronization of semi-structured data across languages is challenging. For example, Wikipedia tables in one language need to be synchronized with others. To address this problem, we introduce a new dataset InfoSync and a two-step method for tabular synchronization. InfoSync contains 100K entity-centric tables (Wikipedia Infoboxes) across 14 languages, of which a subset ({\textasciitilde}3.5K pairs) are manually annotated. The proposed method includes 1) Information Alignment to map rows and 2) Information Update for updating missing/outdated information for aligned tables across multilingual tables. When evaluated on InfoSync, information alignment achieves an F1 score of 87.91 (en {\textless}-{\textgreater} non-en). To evaluate information updation, we perform human-assisted Wikipedia edits on Infoboxes for 532 table pairs. Our approach obtains an acceptance rate of 77.28{\%} on Wikipedia, showing the effectiveness of the proposed method.
[ "Khincha, Siddharth", "Jain, Chelsi", "Gupta, Vivek", "Kataria, Tushar", "Zhang, Shuo" ]
InfoSync: Information Synchronization across Multilingual Semi-structured Tables
findings-acl.159
Poster
2307.03313
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
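
A toy sketch of the Information Alignment step: match infobox rows across two tables by fuzzy key similarity. InfoSync's actual aligner uses multilingual key mappings and corpus statistics, which are omitted here:

# Greedy fuzzy alignment of infobox keys; only near-identical keys survive
# this naive matcher, which is why the real system needs multilingual
# key mappings.
from difflib import SequenceMatcher

def align_rows(table_a, table_b, threshold=0.6):
    pairs = []
    for key_a in table_a:
        best = max(table_b, key=lambda k: SequenceMatcher(None, key_a.lower(), k.lower()).ratio())
        score = SequenceMatcher(None, key_a.lower(), best.lower()).ratio()
        if score >= threshold:
            pairs.append((key_a, best, round(score, 2)))
    return pairs

en = {"Born": "1879", "Occupation": "Physicist"}
fr = {"Naissance": "1879", "Occupations": "Physicien"}
print(align_rows(en, fr))  # only Occupation <-> Occupations is matched
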
https://aclanthology.org/2023.findings-acl.160.bib
https://aclanthology.org/2023.findings-acl.160/
@inproceedings{wang-etal-2023-t2iat, title = "{T}2{IAT}: Measuring Valence and Stereotypical Biases in Text-to-Image Generation", author = "Wang, Jialu and Liu, Xinyue and Di, Zonglin and Liu, Yang and Wang, Xin", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.160", doi = "10.18653/v1/2023.findings-acl.160", pages = "2560--2574", abstract = "*Warning: This paper contains several contents that may be toxic, harmful, or offensive.*In the last few years, text-to-image generative models have gained remarkable success in generating images with unprecedented quality accompanied by a breakthrough of inference speed. Despite their rapid progress, human biases that manifest in the training examples, particularly with regard to common stereotypical biases, like gender and skin tone, still have been found in these generative models. In this work, we seek to measure more complex human biases exist in the task of text-to-image generations. Inspired by the well-known Implicit Association Test (IAT) from social psychology, we propose a novel Text-to-Image Association Test (T2IAT) framework that quantifies the implicit stereotypes between concepts and valence, and those in the images. We replicate the previously documented bias tests on generative models, including morally neutral tests on flowers and insects as well as demographic stereotypical tests on diverse social attributes. The results of these experiments demonstrate the presence of complex stereotypical behaviors in image generations.", }
*Warning: This paper contains content that may be toxic, harmful, or offensive.* In the last few years, text-to-image generative models have gained remarkable success in generating images with unprecedented quality, accompanied by a breakthrough in inference speed. Despite their rapid progress, human biases that manifest in the training examples, particularly with regard to common stereotypical biases like gender and skin tone, have still been found in these generative models. In this work, we seek to measure more complex human biases that exist in the task of text-to-image generation. Inspired by the well-known Implicit Association Test (IAT) from social psychology, we propose a novel Text-to-Image Association Test (T2IAT) framework that quantifies the implicit stereotypes between concepts and valence, and those in the images. We replicate the previously documented bias tests on generative models, including morally neutral tests on flowers and insects as well as demographic stereotypical tests on diverse social attributes. The results of these experiments demonstrate the presence of complex stereotypical behaviors in image generation.
[ "Wang, Jialu", "Liu, Xinyue", "Di, Zonglin", "Liu, Yang", "Wang, Xin" ]
T2IAT: Measuring Valence and Stereotypical Biases in Text-to-Image Generation
findings-acl.160
Poster
2306.00905
[ "" ]
https://huggingface.co/papers/2306.00905
1
1
0
5
1
[]
[]
[ "Jialu/T2IAT" ]
https://aclanthology.org/2023.findings-acl.161.bib
https://aclanthology.org/2023.findings-acl.161/
@inproceedings{ben-abacha-etal-2023-investigation, title = "An Investigation of Evaluation Methods in Automatic Medical Note Generation", author = "Ben Abacha, Asma and Yim, Wen-wai and Michalopoulos, George and Lin, Thomas", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.161", doi = "10.18653/v1/2023.findings-acl.161", pages = "2575--2588", abstract = "Recent studies on automatic note generation have shown that doctors can save significant amounts of time when using automatic clinical note generation (Knoll et al., 2022). Summarization models have been used for this task to generate clinical notes as summaries of doctor-patient conversations (Krishna et al., 2021; Cai et al., 2022). However, assessing which model would best serve clinicians in their daily practice is still a challenging task due to the large set of possible correct summaries, and the potential limitations of automatic evaluation metrics. In this paper we study evaluation methods and metrics for the automatic generation of clinical notes from medical conversation. In particular, we propose new task-specific metrics and we compare them to SOTA evaluation metrics in text summarization and generation, including: (i) knowledge-graph embedding-based metrics, (ii) customized model-based metrics with domain-specific weights, (iii) domain-adapted/fine-tuned metrics, and (iv) ensemble metrics. To study the correlation between the automatic metrics and manual judgments, we evaluate automatic notes/summaries by comparing the system and reference facts and computing the factual correctness, and the hallucination and omission rates for critical medical facts. This study relied on seven datasets manually annotated by domain experts. Our experiments show that automatic evaluation metrics can have substantially different behaviors on different types of clinical notes datasets. However, the results highlight one stable subset of metrics as the most correlated with human judgments with a relevant aggregation of different evaluation criteria.", }
Recent studies on automatic note generation have shown that doctors can save significant amounts of time when using automatic clinical note generation (Knoll et al., 2022). Summarization models have been used for this task to generate clinical notes as summaries of doctor-patient conversations (Krishna et al., 2021; Cai et al., 2022). However, assessing which model would best serve clinicians in their daily practice is still a challenging task due to the large set of possible correct summaries, and the potential limitations of automatic evaluation metrics. In this paper we study evaluation methods and metrics for the automatic generation of clinical notes from medical conversation. In particular, we propose new task-specific metrics and we compare them to SOTA evaluation metrics in text summarization and generation, including: (i) knowledge-graph embedding-based metrics, (ii) customized model-based metrics with domain-specific weights, (iii) domain-adapted/fine-tuned metrics, and (iv) ensemble metrics. To study the correlation between the automatic metrics and manual judgments, we evaluate automatic notes/summaries by comparing the system and reference facts and computing the factual correctness, and the hallucination and omission rates for critical medical facts. This study relied on seven datasets manually annotated by domain experts. Our experiments show that automatic evaluation metrics can have substantially different behaviors on different types of clinical notes datasets. However, the results highlight one stable subset of metrics as the most correlated with human judgments with a relevant aggregation of different evaluation criteria.
[ "Ben Abacha, Asma", "Yim, Wen-wai", "Michalopoulos, George", "Lin, Thomas" ]
An Investigation of Evaluation Methods in Automatic Medical Note Generation
findings-acl.161
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
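
The meta-evaluation described above ultimately reduces to correlating automatic metric scores with human judgments over the same outputs; a minimal sketch with placeholder scores:

# Correlate an automatic metric with human ratings of the same notes.
from scipy.stats import pearsonr, spearmanr

metric_scores = [0.41, 0.55, 0.32, 0.71, 0.63]  # placeholder metric outputs
human_scores = [2.0, 3.5, 1.5, 4.0, 3.0]        # placeholder expert ratings

r, _ = pearsonr(metric_scores, human_scores)
rho, _ = spearmanr(metric_scores, human_scores)
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
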
https://aclanthology.org/2023.findings-acl.162.bib
https://aclanthology.org/2023.findings-acl.162/
@inproceedings{hao-etal-2023-rethinking, title = "Rethinking Translation Memory Augmented Neural Machine Translation", author = "Hao, Hongkun and Huang, Guoping and Liu, Lemao and Zhang, Zhirui and Shi, Shuming and Wang, Rui", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.162", doi = "10.18653/v1/2023.findings-acl.162", pages = "2589--2605", abstract = "This paper rethinks translation memory augmented neural machine translation (TM-augmented NMT) from two perspectives, i.e., a probabilistic view of retrieval and the variance-bias decomposition principle. The finding demonstrates that TM-augmented NMT is good at the ability of fitting data (i.e., lower bias) but is more sensitive to the fluctuations in the training data (i.e., higher variance), which provides an explanation to a recently reported contradictory phenomenon on the same translation task: TM-augmented NMT substantially advances NMT without TM under the high resource scenario whereas it fails under the low resource scenario. Then this paper proposes a simple yet effective TM-augmented NMT model to promote the variance and address the contradictory phenomenon. Extensive experiments show that the proposed TM-augmented NMT achieves consistent gains over both conventional NMT and existing TM-augmented NMT under two variance-preferable (low resource and plug-and-play) scenarios as well as the high resource scenario.", }
This paper rethinks translation memory augmented neural machine translation (TM-augmented NMT) from two perspectives, i.e., a probabilistic view of retrieval and the variance-bias decomposition principle. The finding demonstrates that TM-augmented NMT is good at fitting the data (i.e., lower bias) but is more sensitive to fluctuations in the training data (i.e., higher variance), which provides an explanation for a recently reported contradictory phenomenon on the same translation task: TM-augmented NMT substantially advances NMT without TM under the high-resource scenario, whereas it fails under the low-resource scenario. This paper then proposes a simple yet effective TM-augmented NMT model to promote the variance and address the contradictory phenomenon. Extensive experiments show that the proposed TM-augmented NMT achieves consistent gains over both conventional NMT and existing TM-augmented NMT under two variance-preferable (low-resource and plug-and-play) scenarios as well as the high-resource scenario.
[ "Hao, Hongkun", "Huang, Guoping", "Liu, Lemao", "Zhang, Zhirui", "Shi, Shuming", "Wang, Rui" ]
Rethinking Translation Memory Augmented Neural Machine Translation
findings-acl.162
Poster
2306.06948
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
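
The first step of any TM-augmented NMT system is retrieving the translation-memory entry most similar to the input sentence; a toy fuzzy-match retriever (the TM entries are invented placeholders):

# Retrieve the TM pair whose source side best matches the input.
from difflib import SequenceMatcher

tm = [
    ("i declare resumed the session", "ich erkläre die Sitzung für wiederaufgenommen"),
    ("the sitting is closed", "die Sitzung ist geschlossen"),
]

def retrieve(src):
    return max(tm, key=lambda pair: SequenceMatcher(None, src, pair[0]).ratio())

print(retrieve("the sitting is now closed"))
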
https://aclanthology.org/2023.findings-acl.163.bib
https://aclanthology.org/2023.findings-acl.163/
@inproceedings{wang-etal-2023-controlling, title = "Controlling Styles in Neural Machine Translation with Activation Prompt", author = "Wang, Yifan and Sun, Zewei and Cheng, Shanbo and Zheng, Weiguo and Wang, Mingxuan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.163", doi = "10.18653/v1/2023.findings-acl.163", pages = "2606--2620", abstract = "Controlling styles in neural machine translation (NMT) has attracted wide attention, as it is crucial for enhancing user experience. Earlier studies on this topic typically concentrate on regulating the level of formality and achieve some progress in this area. However, they still encounter two major challenges. The first is the difficulty in style evaluation. The style comprises various aspects such as lexis, syntax, and others that provide abundant information. Nevertheless, only formality has been thoroughly investigated. The second challenge involves excessive dependence on incremental adjustments, particularly when new styles are necessary. To address both challenges, this paper presents a new benchmark and approach. A multiway stylized machine translation (MSMT) benchmark is introduced, incorporating diverse categories of styles across four linguistic domains. Then, we propose a method named style activation prompt (StyleAP) by retrieving prompts from stylized monolingual corpus, which does not require extra fine-tuning. Experiments show that StyleAP could effectively control the style of translation and achieve remarkable performance.", }
Controlling styles in neural machine translation (NMT) has attracted wide attention, as it is crucial for enhancing user experience. Earlier studies on this topic typically concentrate on regulating the level of formality and achieve some progress in this area. However, they still encounter two major challenges. The first is the difficulty in style evaluation. The style comprises various aspects such as lexis, syntax, and others that provide abundant information. Nevertheless, only formality has been thoroughly investigated. The second challenge involves excessive dependence on incremental adjustments, particularly when new styles are necessary. To address both challenges, this paper presents a new benchmark and approach. A multiway stylized machine translation (MSMT) benchmark is introduced, incorporating diverse categories of styles across four linguistic domains. Then, we propose a method named style activation prompt (StyleAP) by retrieving prompts from stylized monolingual corpus, which does not require extra fine-tuning. Experiments show that StyleAP could effectively control the style of translation and achieve remarkable performance.
[ "Wang, Yifan", "Sun, Zewei", "Cheng, Shanbo", "Zheng, Weiguo", "Wang, Mingxuan" ]
Controlling Styles in Neural Machine Translation with Activation Prompt
findings-acl.163
Poster
2212.08909
[ "https://github.com/ivanwang0730/styleap" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.164.bib
https://aclanthology.org/2023.findings-acl.164/
@inproceedings{xu-etal-2023-focusing, title = "Focusing, Bridging and Prompting for Few-shot Nested Named Entity Recognition", author = "Xu, Yuanyuan and Yang, Zeng and Zhang, Linhai and Zhou, Deyu and Wu, Tiandeng and Zhou, Rong", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.164", doi = "10.18653/v1/2023.findings-acl.164", pages = "2621--2637", abstract = "Few-shot named entity recognition (NER), identifying named entities with a small number of labeled data, has attracted much attention. Frequently, entities are nested within each other. However, most of the existing work on few-shot NER addresses flat entities instead of nested entities. To tackle nested NER in a few-shot setting, it is crucial to utilize the limited labeled data to mine unique features of nested entities, such as the relationship between inner and outer entities and contextual position information. Therefore, in this work, we propose a novel method based on focusing, bridging and prompting for few-shot nested NER without using source domain data. Both focusing and bridging components provide accurate candidate spans for the prompting component. The prompting component leverages the unique features of nested entities to classify spans based on soft prompts and contrastive learning. Experimental results show that the proposed approach achieves state-of-the-art performance consistently on the four benchmark datasets (ACE2004, ACE2005, GENIA and KBP2017) and outperforms several competing baseline models on F1-score by 9.33{\%} on ACE2004, 6.17{\%} on ACE2005, 9.40{\%} on GENIA and 5.12{\%} on KBP2017 on the 5-shot setting.", }
Few-shot named entity recognition (NER), identifying named entities with a small number of labeled data, has attracted much attention. Frequently, entities are nested within each other. However, most of the existing work on few-shot NER addresses flat entities instead of nested entities. To tackle nested NER in a few-shot setting, it is crucial to utilize the limited labeled data to mine unique features of nested entities, such as the relationship between inner and outer entities and contextual position information. Therefore, in this work, we propose a novel method based on focusing, bridging and prompting for few-shot nested NER without using source domain data. Both focusing and bridging components provide accurate candidate spans for the prompting component. The prompting component leverages the unique features of nested entities to classify spans based on soft prompts and contrastive learning. Experimental results show that the proposed approach achieves state-of-the-art performance consistently on the four benchmark datasets (ACE2004, ACE2005, GENIA and KBP2017) and outperforms several competing baseline models on F1-score by 9.33{\%} on ACE2004, 6.17{\%} on ACE2005, 9.40{\%} on GENIA and 5.12{\%} on KBP2017 on the 5-shot setting.
[ "Xu, Yuanyuan", "Yang, Zeng", "Zhang, Linhai", "Zhou, Deyu", "Wu, Ti", "eng", "Zhou, Rong" ]
Focusing, Bridging and Prompting for Few-shot Nested Named Entity Recognition
findings-acl.164
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
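
Span-based approaches to nested NER classify candidate spans rather than tokens, since nested mentions overlap; a minimal candidate-span enumerator of the kind the focusing and bridging components would then prune:

# Enumerate all spans up to a maximum length; nested mentions such as
# "Toronto" inside "University of Toronto" appear as separate candidates.
def enumerate_spans(tokens, max_len=6):
    return [
        (i, j, tokens[i:j])
        for i in range(len(tokens))
        for j in range(i + 1, min(i + max_len, len(tokens)) + 1)
    ]

tokens = "the University of Toronto campus".split()
for i, j, span in enumerate_spans(tokens, max_len=3):
    print(i, j, " ".join(span))
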
https://aclanthology.org/2023.findings-acl.165.bib
https://aclanthology.org/2023.findings-acl.165/
@inproceedings{luo-etal-2023-together, title = "Together We Make Sense{--}Learning Meta-Sense Embeddings", author = "Luo, Haochen and Zhou, Yi and Bollegala, Danushka", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.165", doi = "10.18653/v1/2023.findings-acl.165", pages = "2638--2651", abstract = "Sense embedding learning methods learn multiple vectors for a given ambiguous word, corresponding to its different word senses. For this purpose, different methods have been proposed in prior work on sense embedding learning that use different sense inventories, sense-tagged corpora and learning methods. However, not all existing sense embeddings cover all senses of ambiguous words equally well due to the discrepancies in their training resources. To address this problem, we propose the first-ever meta-sense embedding method {--} Neighbour Preserving Meta-Sense Embeddings, which learns meta-sense embeddings by combining multiple independently trained source sense embeddings such that the sense neighbourhoods computed from the source embeddings are preserved in the meta-embedding space. Our proposed method can combine source sense embeddings that cover different sets of word senses. Experimental results on Word Sense Disambiguation (WSD) and Word-in-Context (WiC) tasks show that the proposed meta-sense embedding method consistently outperforms several competitive baselines. An anonymised version of the source code implementation for our proposed method is submitted to reviewing system. Both source code and the learnt meta-sense embeddings will be publicly released upon paper acceptance.", }
Sense embedding learning methods learn multiple vectors for a given ambiguous word, corresponding to its different word senses. For this purpose, different methods have been proposed in prior work on sense embedding learning that use different sense inventories, sense-tagged corpora and learning methods. However, not all existing sense embeddings cover all senses of ambiguous words equally well due to the discrepancies in their training resources. To address this problem, we propose the first-ever meta-sense embedding method {--} Neighbour Preserving Meta-Sense Embeddings, which learns meta-sense embeddings by combining multiple independently trained source sense embeddings such that the sense neighbourhoods computed from the source embeddings are preserved in the meta-embedding space. Our proposed method can combine source sense embeddings that cover different sets of word senses. Experimental results on Word Sense Disambiguation (WSD) and Word-in-Context (WiC) tasks show that the proposed meta-sense embedding method consistently outperforms several competitive baselines. An anonymised version of the source code implementation for our proposed method is submitted to the reviewing system. Both the source code and the learnt meta-sense embeddings will be publicly released upon paper acceptance.
[ "Luo, Haochen", "Zhou, Yi", "Bollegala, Danushka" ]
Together We Make Sense–Learning Meta-Sense Embeddings
findings-acl.165
Poster
[ "https://github.com/livnlp/npms" ]
-1
-1
-1
-1
0
[]
[]
[]
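
The paper's neighbour-preserving method is not reproduced here; the common concatenation baseline for combining independently trained source sense embeddings, with placeholder vectors, is:

# Concatenate L2-normalized source sense vectors so each source contributes
# on the same scale; sources may have different dimensionalities.
import numpy as np

def concat_meta_embedding(sources):
    normed = [v / (np.linalg.norm(v) + 1e-12) for v in sources]
    return np.concatenate(normed)

source_a = np.random.randn(1024)  # placeholder sense vector from one source
source_b = np.random.randn(768)   # placeholder sense vector from another
meta = concat_meta_embedding([source_a, source_b])
print(meta.shape)  # (1792,)
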
https://aclanthology.org/2023.findings-acl.166.bib
https://aclanthology.org/2023.findings-acl.166/
@inproceedings{yang-etal-2023-multimodal, title = "Multimodal Prompt Learning for Product Title Generation with Extremely Limited Labels", author = "Yang, Bang and Liu, Fenglin and Li, Zheng and Yin, Qingyu and You, Chenyu and Yin, Bing and Zou, Yuexian", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.166", doi = "10.18653/v1/2023.findings-acl.166", pages = "2652--2665", abstract = "Generating an informative and attractive title for the product is a crucial task for e-commerce. Most existing works follow the standard multimodal natural language generation approaches, e.g., image captioning, and employ the large scale of human-labelled datasets to train desirable models. However, for novel products, especially in a different domain, there are few existing labelled data. In this paper, we propose a prompt-based approach, i.e., the Multimodal Prompt Learning framework, to accurately and efficiently generate titles for novel products with limited labels. We observe that the core challenges of novel product title generation are the understanding of novel product characteristics and the generation of titles in a novel writing style. To this end, we build a set of multimodal prompts from different modalities to preserve the corresponding characteristics and writing styles of novel products. As a result, with extremely limited labels for training, the proposed method can retrieve the multimodal prompts to generate desirable titles for novel products. The experiments and analyses are conducted on five novel product categories under both the in-domain and out-of-domain experimental settings. The results show that, with only 1{\%} of downstream labelled data for training, our proposed approach achieves the best few-shot results and even achieves competitive results with fully-supervised methods trained on 100{\%} of training data; With the full labelled data for training, our method achieves state-of-the-art results.", }
Generating an informative and attractive title for a product is a crucial task for e-commerce. Most existing works follow standard multimodal natural language generation approaches, e.g., image captioning, and employ large-scale human-labelled datasets to train desirable models. However, for novel products, especially in a different domain, there is little existing labelled data. In this paper, we propose a prompt-based approach, i.e., the Multimodal Prompt Learning framework, to accurately and efficiently generate titles for novel products with limited labels. We observe that the core challenges of novel product title generation are the understanding of novel product characteristics and the generation of titles in a novel writing style. To this end, we build a set of multimodal prompts from different modalities to preserve the corresponding characteristics and writing styles of novel products. As a result, with extremely limited labels for training, the proposed method can retrieve the multimodal prompts to generate desirable titles for novel products. The experiments and analyses are conducted on five novel product categories under both the in-domain and out-of-domain experimental settings. The results show that, with only 1{\%} of downstream labelled data for training, our proposed approach achieves the best few-shot results and even achieves competitive results with fully-supervised methods trained on 100{\%} of training data; with the full labelled data for training, our method achieves state-of-the-art results.
[ "Yang, Bang", "Liu, Fenglin", "Li, Zheng", "Yin, Qingyu", "You, Chenyu", "Yin, Bing", "Zou, Yuexian" ]
Multimodal Prompt Learning for Product Title Generation with Extremely Limited Labels
findings-acl.166
Poster
2307.01969
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.167.bib
https://aclanthology.org/2023.findings-acl.167/
@inproceedings{ziems-etal-2023-large, title = "Large Language Models are Built-in Autoregressive Search Engines", author = "Ziems, Noah and Yu, Wenhao and Zhang, Zhihan and Jiang, Meng", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.167", doi = "10.18653/v1/2023.findings-acl.167", pages = "2666--2678", abstract = "Document retrieval is a key stage of standard Web search engines. Existing dual-encoder dense retrievers obtain representations for questions and documents independently, allowing for only shallow interactions between them. To overcome this limitation, recent autoregressive search engines replace the dual-encoder architecture by directly generating identifiers for relevant documents in the candidate pool. However, the training cost of such autoregressive search engines rises sharply as the number of candidate documents increases. In this paper, we find that large language models (LLMs) can follow human instructions to directly generate URLs for document retrieval. Surprisingly, when providing a few Query-URL pairs as in-context demonstrations, LLMs can generate Web URLs where nearly 90{\%} of the corresponding documents contain correct answers to open-domain questions. In this way, LLMs can be thought of as built-in search engines, since they have not been explicitly trained to map questions to document identifiers. Experiments demonstrate that our method can consistently achieve better retrieval performance than existing retrieval approaches by a significant margin on three open-domain question answering benchmarks, under both zero and few-shot settings. The code for this work can be found at \url{https://github.com/Ziems/llm-url}.", }
Document retrieval is a key stage of standard Web search engines. Existing dual-encoder dense retrievers obtain representations for questions and documents independently, allowing for only shallow interactions between them. To overcome this limitation, recent autoregressive search engines replace the dual-encoder architecture by directly generating identifiers for relevant documents in the candidate pool. However, the training cost of such autoregressive search engines rises sharply as the number of candidate documents increases. In this paper, we find that large language models (LLMs) can follow human instructions to directly generate URLs for document retrieval. Surprisingly, when providing a few Query-URL pairs as in-context demonstrations, LLMs can generate Web URLs where nearly 90{\%} of the corresponding documents contain correct answers to open-domain questions. In this way, LLMs can be thought of as built-in search engines, since they have not been explicitly trained to map questions to document identifiers. Experiments demonstrate that our method can consistently achieve better retrieval performance than existing retrieval approaches by a significant margin on three open-domain question answering benchmarks, under both zero and few-shot settings. The code for this work can be found at \url{https://github.com/Ziems/llm-url}.
[ "Ziems, Noah", "Yu, Wenhao", "Zhang, Zhihan", "Jiang, Meng" ]
Large Language Models are Built-in Autoregressive Search Engines
findings-acl.167
Poster
2305.09612
[ "https://github.com/ziems/llm-url" ]
https://huggingface.co/papers/2305.09612
2
1
0
4
1
[]
[]
[]
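
A sketch of the few-shot setup the abstract describes: a handful of Query-URL demonstrations followed by the new question, with the model's completion parsed as candidate URLs (the demonstrations are invented placeholders, and the LLM call itself is left out):

# Build a few-shot Query-URL prompt for an instruction-following LLM.
demos = [
    ("who wrote hamlet?", "https://en.wikipedia.org/wiki/Hamlet"),
    ("what is the capital of france?", "https://en.wikipedia.org/wiki/Paris"),
]
question = "when was the eiffel tower built?"

prompt = "".join(f"Question: {q}\nURL: {u}\n\n" for q, u in demos)
prompt += f"Question: {question}\nURL:"
print(prompt)  # send this string to the LLM and parse the generated URL
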
https://aclanthology.org/2023.findings-acl.168.bib
https://aclanthology.org/2023.findings-acl.168/
@inproceedings{zhu-etal-2023-beyond, title = "Beyond Triplet: Leveraging the Most Data for Multimodal Machine Translation", author = "Zhu, Yaoming and Sun, Zewei and Cheng, Shanbo and Huang, Luyang and Wu, Liwei and Wang, Mingxuan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.168", doi = "10.18653/v1/2023.findings-acl.168", pages = "2679--2697", abstract = "Multimodal machine translation (MMT) aims to improve translation quality by incorporating information from other modalities, such as vision. Previous MMT systems focus on better access and use of visual information and tend to validate their methods on image-related datasets. However, these studies face two challenges. First, they can only utilize a limited amount of data that is composed of bilingual texts and images (referred to as {``}triple data{''}), which is scarce. Second, current benchmarks for MMT are restricted and do not correspond to realistic scenarios. Therefore, this paper correspondingly establishes new methods and a new dataset for MMT. We propose a novel framework for MMT that addresses these challenges by utilizing large-scale non-triple data, such as monolingual image-text and parallel text-only data. Additionally, we construct a new e-commercial multimodal translation dataset, named EMMT, of which the test set is specifically designed to include ambiguous words that require visual context for accurate translation. Experiments show that our method is well-suited for real-world scenarios and can significantly improve translation performance with more non-triple data. In addition, our model also rivals or surpasses various SOTA models in conventional multimodal translation benchmarks.", }
Multimodal machine translation (MMT) aims to improve translation quality by incorporating information from other modalities, such as vision. Previous MMT systems focus on better access and use of visual information and tend to validate their methods on image-related datasets. However, these studies face two challenges. First, they can only utilize a limited amount of data that is composed of bilingual texts and images (referred to as {``}triple data{''}), which is scarce. Second, current benchmarks for MMT are restricted and do not correspond to realistic scenarios. Therefore, this paper correspondingly establishes new methods and a new dataset for MMT. We propose a novel framework for MMT that addresses these challenges by utilizing large-scale non-triple data, such as monolingual image-text and parallel text-only data. Additionally, we construct a new e-commercial multimodal translation dataset, named EMMT, of which the test set is specifically designed to include ambiguous words that require visual context for accurate translation. Experiments show that our method is well-suited for real-world scenarios and can significantly improve translation performance with more non-triple data. In addition, our model also rivals or surpasses various SOTA models in conventional multimodal translation benchmarks.
[ "Zhu, Yaoming", "Sun, Zewei", "Cheng, Shanbo", "Huang, Luyang", "Wu, Liwei", "Wang, Mingxuan" ]
Beyond Triplet: Leveraging the Most Data for Multimodal Machine Translation
findings-acl.168
Poster
2212.10313
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.169.bib
https://aclanthology.org/2023.findings-acl.169/
@inproceedings{coil-shwartz-2023-chocolate, title = "From chocolate bunny to chocolate crocodile: Do Language Models Understand Noun Compounds?", author = "Coil, Albert and Shwartz, Vered", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.169", doi = "10.18653/v1/2023.findings-acl.169", pages = "2698--2710", abstract = "Noun compound interpretation is the task of expressing a noun compound (e.g. chocolate bunny) in a free-text paraphrase that makes the relationship between the constituent nouns explicit (e.g. bunny-shaped chocolate). We propose modifications to the data and evaluation setup of the standard task (Hendrickx et al., 2013), and show that GPT-3 solves it almost perfectly. We then investigate the task of noun compound conceptualization, i.e. paraphrasing a novel or rare noun compound. E.g., chocolate crocodile is a crocodile-shaped chocolate. This task requires creativity, commonsense, and the ability to generalize knowledge about similar concepts. While GPT-3{'}s performance is not perfect, it is better than that of humans{---}likely thanks to its access to vast amounts of knowledge, and because conceptual processing is effortful for people (Connell and Lynott, 2012). Finally, we estimate the extent to which GPT-3 is reasoning about the world vs. parroting its training data. We find that the outputs from GPT-3 often have significant overlap with a large web corpus, but that the parroting strategy is less beneficial for novel noun compounds.", }
Noun compound interpretation is the task of expressing a noun compound (e.g. chocolate bunny) in a free-text paraphrase that makes the relationship between the constituent nouns explicit (e.g. bunny-shaped chocolate). We propose modifications to the data and evaluation setup of the standard task (Hendrickx et al., 2013), and show that GPT-3 solves it almost perfectly. We then investigate the task of noun compound conceptualization, i.e. paraphrasing a novel or rare noun compound. E.g., chocolate crocodile is a crocodile-shaped chocolate. This task requires creativity, commonsense, and the ability to generalize knowledge about similar concepts. While GPT-3{'}s performance is not perfect, it is better than that of humans{---}likely thanks to its access to vast amounts of knowledge, and because conceptual processing is effortful for people (Connell and Lynott, 2012). Finally, we estimate the extent to which GPT-3 is reasoning about the world vs. parroting its training data. We find that the outputs from GPT-3 often have significant overlap with a large web corpus, but that the parroting strategy is less beneficial for novel noun compounds.
[ "Coil, Albert", "Shwartz, Vered" ]
From chocolate bunny to chocolate crocodile: Do Language Models Understand Noun Compounds?
findings-acl.169
Poster
2305.10568
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.170.bib
https://aclanthology.org/2023.findings-acl.170/
@inproceedings{borenstein-etal-2023-measuring, title = "Measuring Intersectional Biases in Historical Documents", author = {Borenstein, Nadav and Stanczak, Karolina and Rolskov, Thea and Klein K{\"a}fer, Natacha and da Silva Perez, Nat{\'a}lia and Augenstein, Isabelle}, editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.170", doi = "10.18653/v1/2023.findings-acl.170", pages = "2711--2730", abstract = "Data-driven analyses of biases in historical texts can help illuminate the origin and development of biases prevailing in modern society. However, digitised historical documents pose a challenge for NLP practitioners as these corpora suffer from errors introduced by optical character recognition (OCR) and are written in an archaic language. In this paper, we investigate the continuities and transformations of bias in historical newspapers published in the Caribbean during the colonial era (18th to 19th centuries). Our analyses are performed along the axes of gender, race, and their intersection. We examine these biases by conducting a temporal study in which we measure the development of lexical associations using distributional semantics models and word embeddings. Further, we evaluate the effectiveness of techniques designed to process OCR-generated data and assess their stability when trained on and applied to the noisy historical newspapers. We find that there is a trade-off between the stability of the word embeddings and their compatibility with the historical dataset. We provide evidence that gender and racial biases are interdependent, and their intersection triggers distinct effects. These findings align with the theory of intersectionality, which stresses that biases affecting people with multiple marginalised identities compound to more than the sum of their constituents.", }
Data-driven analyses of biases in historical texts can help illuminate the origin and development of biases prevailing in modern society. However, digitised historical documents pose a challenge for NLP practitioners as these corpora suffer from errors introduced by optical character recognition (OCR) and are written in an archaic language. In this paper, we investigate the continuities and transformations of bias in historical newspapers published in the Caribbean during the colonial era (18th to 19th centuries). Our analyses are performed along the axes of gender, race, and their intersection. We examine these biases by conducting a temporal study in which we measure the development of lexical associations using distributional semantics models and word embeddings. Further, we evaluate the effectiveness of techniques designed to process OCR-generated data and assess their stability when trained on and applied to the noisy historical newspapers. We find that there is a trade-off between the stability of the word embeddings and their compatibility with the historical dataset. We provide evidence that gender and racial biases are interdependent, and their intersection triggers distinct effects. These findings align with the theory of intersectionality, which stresses that biases affecting people with multiple marginalised identities compound to more than the sum of their constituents.
[ "Borenstein, Nadav", "Stanczak, Karolina", "Rolskov, Thea", "Klein K{\\\"a}fer, Natacha", "da Silva Perez, Nat{\\'a}lia", "Augenstein, Isabelle" ]
Measuring Intersectional Biases in Historical Documents
findings-acl.170
Poster
2305.12376
[ "https://github.com/copenlu/intersectional-bias-pbw" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.171.bib
https://aclanthology.org/2023.findings-acl.171/
@inproceedings{li-etal-2023-incomplete, title = "Incomplete Utterance Rewriting by A Two-Phase Locate-and-Fill Regime", author = "Li, Zitong and Li, Jiawei and Tang, Haifeng and Zhu, Kenny and Yang, Ruolan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.171", doi = "10.18653/v1/2023.findings-acl.171", pages = "2731--2745", abstract = "Rewriting incomplete and ambiguous utterances can improve dialogue models{'} understanding of the context and help them generate better results. However, existing end-to-end models suffer from an overly large search space, resulting in poor-quality rewriting results. We propose a two-phase rewriting framework which first predicts the empty slots in the utterance that need to be completed, and then generates the content to be filled into each position. Our framework is simple to implement, fast to run, and achieves state-of-the-art results on several public rewriting datasets.", }
Rewriting incomplete and ambiguous utterances can improve dialogue models{'} understanding of the context and help them generate better results. However, existing end-to-end models suffer from an overly large search space, resulting in poor-quality rewriting results. We propose a two-phase rewriting framework which first predicts the empty slots in the utterance that need to be completed, and then generates the content to be filled into each position. Our framework is simple to implement, fast to run, and achieves state-of-the-art results on several public rewriting datasets.
[ "Li, Zitong", "Li, Jiawei", "Tang, Haifeng", "Zhu, Kenny", "Yang, Ruolan" ]
Incomplete Utterance Rewriting by A Two-Phase Locate-and-Fill Regime
findings-acl.171
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
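Illustrative sketch for findings-acl.171 above. The two-phase locate-and-fill regime can be pictured as follows; `locate_slots` and `fill_slot` are hypothetical rule-based stubs standing in for the paper's trained locate and fill models, so the example runs end to end.

```python
# Minimal sketch of a two-phase locate-and-fill rewriter (stubs, not the
# paper's models). Phase 1 commits to slot positions; phase 2 only generates
# content for those positions, which is what shrinks the search space
# relative to rewriting the whole utterance.

def locate_slots(utterance, context):
    """Phase 1: predict token positions that need completion (toy heuristic:
    dangling pronouns; the context argument is unused by this stub)."""
    return [i for i, tok in enumerate(utterance.split())
            if tok.lower() in {"it", "that", "one"}]

def fill_slot(tokens, index, context):
    """Phase 2: generate the span for one predicted slot (toy heuristic:
    copy the last capitalized mention from the dialogue context)."""
    candidates = [w for w in context.split() if w.istitle()]
    return candidates[-1] if candidates else tokens[index]

def rewrite(utterance, context):
    tokens = utterance.split()
    for i in locate_slots(utterance, context):
        tokens[i] = fill_slot(tokens, i, context)
    return " ".join(tokens)

print(rewrite("I really like it", "Have you seen Inception ?"))
# -> "I really like Inception"
```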
https://aclanthology.org/2023.findings-acl.172.bib
https://aclanthology.org/2023.findings-acl.172/
@inproceedings{popovic-etal-2023-exploring, title = "Exploring Variation of Results from Different Experimental Conditions", author = "Popovi{\'c}, Maja and Arvan, Mohammad and Parde, Natalie and Belz, Anya", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.172", doi = "10.18653/v1/2023.findings-acl.172", pages = "2746--2757", abstract = "It might reasonably be expected that running multiple experiments for the same task using the same data and model would yield very similar results. Recent research has, however, shown this not to be the case for many NLP experiments. In this paper, we report extensive coordinated work by two NLP groups to run the training and testing pipeline for three neural text simplification models under varying experimental conditions, including different random seeds, run-time environments, and dependency versions, yielding a large number of results for each of the three models using the same data and train/dev/test set splits. From one perspective, these results can be interpreted as shedding light on the reproducibility of evaluation results for the three NTS models, and we present an in-depth analysis of the variation observed for different combinations of experimental conditions. From another perspective, the results raise the question of whether the averaged score should be considered the {`}true{'} result for each model.", }
It might reasonably be expected that running multiple experiments for the same task using the same data and model would yield very similar results. Recent research has, however, shown this not to be the case for many NLP experiments. In this paper, we report extensive coordinated work by two NLP groups to run the training and testing pipeline for three neural text simplification models under varying experimental conditions, including different random seeds, run-time environments, and dependency versions, yielding a large number of results for each of the three models using the same data and train/dev/test set splits. From one perspective, these results can be interpreted as shedding light on the reproducibility of evaluation results for the three NTS models, and we present an in-depth analysis of the variation observed for different combinations of experimental conditions. From another perspective, the results raise the question of whether the averaged score should be considered the {`}true{'} result for each model.
[ "Popovi{\\'c}, Maja", "Arvan, Mohammad", "Parde, Natalie", "Belz, Anya" ]
Exploring Variation of Results from Different Experimental Conditions
findings-acl.172
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.173.bib
https://aclanthology.org/2023.findings-acl.173/
@inproceedings{ocampo-etal-2023-playing, title = "Playing the Part of the Sharp Bully: Generating Adversarial Examples for Implicit Hate Speech Detection", author = "Ocampo, Nicol{\'a}s Benjam{\'\i}n and Cabrio, Elena and Villata, Serena", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.173", doi = "10.18653/v1/2023.findings-acl.173", pages = "2758--2772", abstract = "Research on abusive content detection on social media has primarily focused on explicit forms of hate speech (HS), which are often identifiable by recognizing hateful words and expressions. Messages containing linguistically subtle and implicit forms of hate speech still constitute an open challenge for automatic hate speech detection. In this paper, we propose a new framework for generating adversarial implicit HS short-text messages using Auto-regressive Language Models. Moreover, we propose a strategy to group the generated implicit messages into complexity levels (EASY, MEDIUM, and HARD categories) characterizing how challenging these messages are for supervised classifiers. Finally, relying on (Dinan et al., 2019; Vidgen et al., 2021), we propose a {``}build it, break it, fix it{''} training scheme using HARD messages, showing how iteratively retraining on HARD messages substantially improves SOTA models{'} performance on implicit HS benchmarks.", }
Research on abusive content detection on social media has primarily focused on explicit forms of hate speech (HS), which are often identifiable by recognizing hateful words and expressions. Messages containing linguistically subtle and implicit forms of hate speech still constitute an open challenge for automatic hate speech detection. In this paper, we propose a new framework for generating adversarial implicit HS short-text messages using Auto-regressive Language Models. Moreover, we propose a strategy to group the generated implicit messages into complexity levels (EASY, MEDIUM, and HARD categories) characterizing how challenging these messages are for supervised classifiers. Finally, relying on (Dinan et al., 2019; Vidgen et al., 2021), we propose a {``}build it, break it, fix it{''} training scheme using HARD messages, showing how iteratively retraining on HARD messages substantially improves SOTA models{'} performance on implicit HS benchmarks.
[ "Ocampo, Nicol{\\'a}s Benjam{\\'\\i}n", "Cabrio, Elena", "Villata, Serena" ]
Playing the Part of the Sharp Bully: Generating Adversarial Examples for Implicit Hate Speech Detection
findings-acl.173
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.174.bib
https://aclanthology.org/2023.findings-acl.174/
@inproceedings{moradshahi-etal-2023-x, title = "{X}-{R}i{SAWOZ}: High-Quality End-to-End Multilingual Dialogue Datasets and Few-shot Agents", author = "Moradshahi, Mehrad and Shen, Tianhao and Bali, Kalika and Choudhury, Monojit and de Chalendar, Gael and Goel, Anmol and Kim, Sungkyun and Kodali, Prashant and Kumaraguru, Ponnurangam and Semmar, Nasredine and Semnani, Sina and Seo, Jiwon and Seshadri, Vivek and Shrivastava, Manish and Sun, Michael and Yadavalli, Aditya and You, Chaobin and Xiong, Deyi and Lam, Monica", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.174", doi = "10.18653/v1/2023.findings-acl.174", pages = "2773--2794", abstract = "Task-oriented dialogue research has mainly focused on a few popular languages like English and Chinese, due to the high dataset creation cost for a new language. To reduce the cost, we apply manual editing to automatically translated data. We create a new multilingual benchmark, X-RiSAWOZ, by translating the Chinese RiSAWOZ to 4 languages: English, French, Hindi, Korean; and a code-mixed English-Hindi language. X-RiSAWOZ has more than 18,000 human-verified dialogue utterances for each language, and unlike most multilingual prior work, is an end-to-end dataset for building fully-functioning agents. The many difficulties we encountered in creating X-RiSAWOZ led us to develop a toolset to accelerate the post-editing of a new language dataset after translation. This toolset improves machine translation with a hybrid entity alignment technique that combines neural with dictionary-based methods, along with many automated and semi-automated validation checks. We establish strong baselines for X-RiSAWOZ by training dialogue agents in the zero- and few-shot settings where limited gold data is available in the target language. Our results suggest that our translation and post-editing methodology and toolset can be used to create new high-quality multilingual dialogue agents cost-effectively. Our dataset, code, and toolkit are released open-source.", }
Task-oriented dialogue research has mainly focused on a few popular languages like English and Chinese, due to the high dataset creation cost for a new language. To reduce the cost, we apply manual editing to automatically translated data. We create a new multilingual benchmark, X-RiSAWOZ, by translating the Chinese RiSAWOZ to 4 languages: English, French, Hindi, Korean; and a code-mixed English-Hindi language. X-RiSAWOZ has more than 18,000 human-verified dialogue utterances for each language, and unlike most multilingual prior work, is an end-to-end dataset for building fully-functioning agents. The many difficulties we encountered in creating X-RiSAWOZ led us to develop a toolset to accelerate the post-editing of a new language dataset after translation. This toolset improves machine translation with a hybrid entity alignment technique that combines neural with dictionary-based methods, along with many automated and semi-automated validation checks. We establish strong baselines for X-RiSAWOZ by training dialogue agents in the zero- and few-shot settings where limited gold data is available in the target language. Our results suggest that our translation and post-editing methodology and toolset can be used to create new high-quality multilingual dialogue agents cost-effectively. Our dataset, code, and toolkit are released open-source.
[ "Moradshahi, Mehrad", "Shen, Tianhao", "Bali, Kalika", "Choudhury, Monojit", "de Chalendar, Gael", "Goel, Anmol", "Kim, Sungkyun", "Kodali, Prashant", "Kumaraguru, Ponnurangam", "Semmar, Nasredine", "Semnani, Sina", "Seo, Jiwon", "Seshadri, Vivek", "Shrivastava, Manish", "Sun, Michael", "Yadavalli, Aditya", "You, Chaobin", "Xiong, Deyi", "Lam, Monica" ]
X-RiSAWOZ: High-Quality End-to-End Multilingual Dialogue Datasets and Few-shot Agents
findings-acl.174
Poster
2306.17674
[ "https://github.com/stanford-oval/dialogues" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.175.bib
https://aclanthology.org/2023.findings-acl.175/
@inproceedings{meyer-buys-2023-subword, title = "Subword Segmental Machine Translation: Unifying Segmentation and Target Sentence Generation", author = "Meyer, Francois and Buys, Jan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.175", doi = "10.18653/v1/2023.findings-acl.175", pages = "2795--2809", abstract = "Subword segmenters like BPE operate as a preprocessing step in neural machine translation and other (conditional) language models. They are applied to datasets before training, so translation or text generation quality relies on the quality of segmentations. We propose a departure from this paradigm, called subword segmental machine translation (SSMT). SSMT unifies subword segmentation and MT in a single trainable model. It learns to segment target sentence words while jointly learning to generate target sentences. To use SSMT during inference we propose dynamic decoding, a text generation algorithm that adapts segmentations as it generates translations. Experiments across 6 translation directions show that SSMT improves chrF scores for morphologically rich agglutinative languages. Gains are strongest in the very low-resource scenario. SSMT also learns subwords that are closer to morphemes compared to baselines and proves more robust on a test set constructed for evaluating morphological compositional generalisation.", }
Subword segmenters like BPE operate as a preprocessing step in neural machine translation and other (conditional) language models. They are applied to datasets before training, so translation or text generation quality relies on the quality of segmentations. We propose a departure from this paradigm, called subword segmental machine translation (SSMT). SSMT unifies subword segmentation and MT in a single trainable model. It learns to segment target sentence words while jointly learning to generate target sentences. To use SSMT during inference we propose dynamic decoding, a text generation algorithm that adapts segmentations as it generates translations. Experiments across 6 translation directions show that SSMT improves chrF scores for morphologically rich agglutinative languages. Gains are strongest in the very low-resource scenario. SSMT also learns subwords that are closer to morphemes compared to baselines and proves more robust on a test set constructed for evaluating morphological compositional generalisation.
[ "Meyer, Francois", "Buys, Jan" ]
Subword Segmental Machine Translation: Unifying Segmentation and Target Sentence Generation
findings-acl.175
Poster
2305.07005
[ "https://github.com/francois-meyer/ssmt" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.176.bib
https://aclanthology.org/2023.findings-acl.176/
@inproceedings{datta-etal-2023-measuring, title = "Measuring and Mitigating Local Instability in Deep Neural Networks", author = "Datta, Arghya and Nandi, Subhrangshu and Xu, Jingcheng and Ver Steeg, Greg and Xie, He and Kumar, Anoop and Galstyan, Aram", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.176", doi = "10.18653/v1/2023.findings-acl.176", pages = "2810--2823", abstract = "Deep Neural Networks (DNNs) are becoming integral components of real-world services relied upon by millions of users. Unfortunately, architects of these systems can find it difficult to ensure reliable performance as irrelevant details like random initialization can unexpectedly change the outputs of a trained system with potentially disastrous consequences. We formulate the model stability problem by studying how the predictions of a model change, even when it is retrained on the same data, as a consequence of stochasticity in the training process. For Natural Language Understanding (NLU) tasks, we find instability in predictions for a significant fraction of queries. We formulate principled metrics, like per-sample {``}label entropy{''} across training runs or within a single training run, to quantify this phenomenon. Intriguingly, we find that unstable predictions do not appear at random, but rather appear to be clustered in data-specific ways. We study data-agnostic regularization methods to improve stability and propose new data-centric methods that exploit our local stability estimates. We find that our localized data-specific mitigation strategy dramatically outperforms data-agnostic methods, and comes within 90{\%} of the gold standard, achieved by ensembling, at a fraction of the computational cost.", }
Deep Neural Networks (DNNs) are becoming integral components of real-world services relied upon by millions of users. Unfortunately, architects of these systems can find it difficult to ensure reliable performance as irrelevant details like random initialization can unexpectedly change the outputs of a trained system with potentially disastrous consequences. We formulate the model stability problem by studying how the predictions of a model change, even when it is retrained on the same data, as a consequence of stochasticity in the training process. For Natural Language Understanding (NLU) tasks, we find instability in predictions for a significant fraction of queries. We formulate principled metrics, like per-sample {``}label entropy{''} across training runs or within a single training run, to quantify this phenomenon. Intriguingly, we find that unstable predictions do not appear at random, but rather appear to be clustered in data-specific ways. We study data-agnostic regularization methods to improve stability and propose new data-centric methods that exploit our local stability estimates. We find that our localized data-specific mitigation strategy dramatically outperforms data-agnostic methods, and comes within 90{\%} of the gold standard, achieved by ensembling, at a fraction of the computational cost.
[ "Datta, Arghya", "N", "i, Subhrangshu", "Xu, Jingcheng", "Ver Steeg, Greg", "Xie, He", "Kumar, Anoop", "Galstyan, Aram" ]
Measuring and Mitigating Local Instability in Deep Neural Networks
findings-acl.176
Poster
2305.10625
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
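The per-sample "label entropy" metric mentioned in the findings-acl.176 abstract above is straightforward to reproduce. A minimal sketch, assuming the metric is the entropy of the empirical label distribution across independently retrained runs (one plausible reading, not the authors' exact code):

```python
import numpy as np

def label_entropy(predictions):
    """Per-sample label entropy across training runs.

    predictions: (n_runs, n_samples) integer class labels from independently
    retrained models. Returns entropy in bits per sample: 0 means every run
    agreed; higher values flag locally unstable predictions.
    """
    n_runs, n_samples = predictions.shape
    entropies = np.zeros(n_samples)
    for j in range(n_samples):
        _, counts = np.unique(predictions[:, j], return_counts=True)
        p = counts / n_runs
        entropies[j] = -np.sum(p * np.log2(p))
    return entropies

# Three retraining runs over four queries; only the last query is unstable.
preds = np.array([[0, 1, 2, 0],
                  [0, 1, 2, 1],
                  [0, 1, 2, 2]])
print(label_entropy(preds))  # [0. 0. 0. 1.585]
```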
https://aclanthology.org/2023.findings-acl.177.bib
https://aclanthology.org/2023.findings-acl.177/
@inproceedings{zhu-etal-2023-knowledge, title = "What Knowledge Is Needed? Towards Explainable Memory for k{NN}-{MT} Domain Adaptation", author = "Zhu, Wenhao and Huang, Shujian and Lv, Yunzhe and Zheng, Xin and Chen, Jiajun", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.177", doi = "10.18653/v1/2023.findings-acl.177", pages = "2824--2836", abstract = "kNN-MT presents a new paradigm for domain adaptation by building an external datastore, which usually saves all target language token occurrences in the parallel corpus. As a result, the constructed datastore is usually large and possibly redundant. In this paper, we investigate the interpretability issue of this approach: what knowledge does the NMT model need? We propose the notion of local correctness (LAC) as a new angle, which describes the potential translation correctness for a single entry and for a given neighborhood. Empirical study shows that our investigation successfully finds the conditions where the NMT model could easily fail and needs related knowledge. Experiments on six diverse target domains and two language pairs show that pruning according to local correctness brings a lighter and more explainable memory for kNN-MT domain adaptation.", }
kNN-MT presents a new paradigm for domain adaptation by building an external datastore, which usually saves all target language token occurrences in the parallel corpus. As a result, the constructed datastore is usually large and possibly redundant. In this paper, we investigate the interpretability issue of this approach: what knowledge does the NMT model need? We propose the notion of local correctness (LAC) as a new angle, which describes the potential translation correctness for a single entry and for a given neighborhood. Empirical study shows that our investigation successfully finds the conditions where the NMT model could easily fail and needs related knowledge. Experiments on six diverse target domains and two language pairs show that pruning according to local correctness brings a lighter and more explainable memory for kNN-MT domain adaptation.
[ "Zhu, Wenhao", "Huang, Shujian", "Lv, Yunzhe", "Zheng, Xin", "Chen, Jiajun" ]
What Knowledge Is Needed? Towards Explainable Memory for kNN-MT Domain Adaptation
findings-acl.177
Poster
2211.04052
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
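A toy illustration of the datastore pruning described in the findings-acl.177 abstract above. The neighborhood-agreement rule below is an assumed stand-in for the paper's local correctness (LAC) criterion, which is defined over translation correctness rather than simple label agreement:

```python
import numpy as np

def prune_datastore(keys, values, k=4, threshold=0.5):
    """Keep a (key, value) entry only if at least `threshold` of its k nearest
    neighbors share its target token -- a rough, assumed proxy for local
    correctness, not the paper's exact LAC definition.

    keys: (n, d) decoder context vectors; values: (n,) target-token ids.
    """
    dists = np.linalg.norm(keys[:, None, :] - keys[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # exclude each entry from its own neighborhood
    keep = np.array([
        np.mean(values[np.argsort(dists[i])[:k]] == values[i]) >= threshold
        for i in range(len(keys))
    ])
    return keys[keep], values[keep]

rng = np.random.default_rng(0)
keys = rng.normal(size=(50, 8))
values = rng.integers(0, 3, size=50)
pruned_keys, pruned_values = prune_datastore(keys, values)
print(f"kept {len(pruned_keys)} of {len(keys)} datastore entries")
```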
https://aclanthology.org/2023.findings-acl.178.bib
https://aclanthology.org/2023.findings-acl.178/
@inproceedings{xu-etal-2023-measuring, title = "Measuring Your {ASTE} Models in The Wild: A Diversified Multi-domain Dataset For Aspect Sentiment Triplet Extraction", author = "Xu, Ting and Yang, Huiyun and Wu, Zhen and Chen, Jiaze and Zhao, Fei and Dai, Xinyu", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.178", doi = "10.18653/v1/2023.findings-acl.178", pages = "2837--2853", abstract = "Aspect Sentiment Triplet Extraction (ASTE) is widely used in various applications. However, existing ASTE datasets are limited in their ability to represent real-world scenarios, hindering the advancement of research in this area. In this paper, we introduce a new dataset, named DMASTE, which is manually annotated to better fit real-world scenarios by providing more diverse and realistic reviews for the task. The dataset includes various lengths, diverse expressions, more aspect types, and more domains than existing datasets. We conduct extensive experiments on DMASTE in multiple settings to evaluate previous ASTE approaches. Empirical results demonstrate that DMASTE is a more challenging ASTE dataset. Further analyses of in-domain and cross-domain settings provide some promising directions for future research.", }
Aspect Sentiment Triplet Extraction (ASTE) is widely used in various applications. However, existing ASTE datasets are limited in their ability to represent real-world scenarios, hindering the advancement of research in this area. In this paper, we introduce a new dataset, named DMASTE, which is manually annotated to better fit real-world scenarios by providing more diverse and realistic reviews for the task. The dataset includes various lengths, diverse expressions, more aspect types, and more domains than existing datasets. We conduct extensive experiments on DMASTE in multiple settings to evaluate previous ASTE approaches. Empirical results demonstrate that DMASTE is a more challenging ASTE dataset. Further analyses of in-domain and cross-domain settings provide some promising directions for future research.
[ "Xu, Ting", "Yang, Huiyun", "Wu, Zhen", "Chen, Jiaze", "Zhao, Fei", "Dai, Xinyu" ]
Measuring Your ASTE Models in The Wild: A Diversified Multi-domain Dataset For Aspect Sentiment Triplet Extraction
findings-acl.178
Poster
2305.17448
[ "https://github.com/njunlp/dmaste" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.179.bib
https://aclanthology.org/2023.findings-acl.179/
@inproceedings{omarov-kondrak-2023-grounding, title = "Grounding the Lexical Substitution Task in Entailment", author = "Omarov, Talgat and Kondrak, Grzegorz", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.179", doi = "10.18653/v1/2023.findings-acl.179", pages = "2854--2869", abstract = "Existing definitions of lexical substitutes are often vague or inconsistent with the gold annotations. We propose a new definition which is grounded in the relation of entailment; namely, that the sentence that results from the substitution should be in the relation of mutual entailment with the original sentence. We argue that the new definition is well-founded and supported by previous work on lexical entailment. We empirically validate our definition by verifying that it covers the majority of gold substitutes in existing datasets. Based on this definition, we create a new dataset from existing semantic resources. Finally, we propose a novel context augmentation method motivated by the definition, which relates the substitutes to the sense of the target word by incorporating glosses and synonyms directly into the context. Experimental results demonstrate that our augmentation approach improves the performance of lexical substitution systems on the existing benchmarks.", }
Existing definitions of lexical substitutes are often vague or inconsistent with the gold annotations. We propose a new definition which is grounded in the relation of entailment; namely, that the sentence that results from the substitution should be in the relation of mutual entailment with the original sentence. We argue that the new definition is well-founded and supported by previous work on lexical entailment. We empirically validate our definition by verifying that it covers the majority of gold substitutes in existing datasets. Based on this definition, we create a new dataset from existing semantic resources. Finally, we propose a novel context augmentation method motivated by the definition, which relates the substitutes to the sense of the target word by incorporating glosses and synonyms directly into the context. Experimental results demonstrate that our augmentation approach improves the performance of lexical substitution systems on the existing benchmarks.
[ "Omarov, Talgat", "Kondrak, Grzegorz" ]
Grounding the Lexical Substitution Task in Entailment
findings-acl.179
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
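The mutual-entailment definition in the findings-acl.179 abstract above lends itself to a direct operationalization with an off-the-shelf NLI model. A hedged sketch, not the authors' pipeline; the `roberta-large-mnli` checkpoint and its ENTAILMENT label name are assumptions about that public model, not about the paper:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NAME = "roberta-large-mnli"  # assumed off-the-shelf NLI checkpoint
tok = AutoTokenizer.from_pretrained(NAME)
nli = AutoModelForSequenceClassification.from_pretrained(NAME)

def entails(premise, hypothesis):
    inputs = tok(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        label_id = nli(**inputs).logits.argmax(-1).item()
    return nli.config.id2label[label_id].upper() == "ENTAILMENT"

def is_valid_substitute(sentence, target, substitute):
    """Accept a substitute only under mutual entailment, per the definition."""
    rewritten = sentence.replace(target, substitute, 1)
    return entails(sentence, rewritten) and entails(rewritten, sentence)

print(is_valid_substitute("The bright girl solved it.", "bright", "clever"))
```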
https://aclanthology.org/2023.findings-acl.180.bib
https://aclanthology.org/2023.findings-acl.180/
@inproceedings{xin-etal-2023-operator, title = "Operator Selection and Ordering in a Pipeline Approach to Efficiency Optimizations for Transformers", author = "Xin, Ji and Tang, Raphael and Jiang, Zhiying and Yu, Yaoliang and Lin, Jimmy", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.180", doi = "10.18653/v1/2023.findings-acl.180", pages = "2870--2882", abstract = "There exists a wide variety of efficiency methods for natural language processing (NLP) tasks, such as pruning, distillation, dynamic inference, quantization, etc. From a different perspective, we can consider an efficiency method as an operator applied to a model. Naturally, we may construct a pipeline of operators, i.e., apply multiple efficiency methods to the model sequentially. In this paper, we study the plausibility of this idea, and more importantly, the commutativity and cumulativeness of efficiency operators. We make two interesting observations from our experiments: (1) The operators are commutative{---}the order of efficiency methods within the pipeline has little impact on the final results; (2) The operators are also cumulative{---}the final results of combining several efficiency methods can be estimated by combining the results of individual methods. These observations deepen our understanding of efficiency operators and provide useful guidelines for building them in real-world applications.", }
There exists a wide variety of efficiency methods for natural language processing (NLP) tasks, such as pruning, distillation, dynamic inference, quantization, etc. From a different perspective, we can consider an efficiency method as an operator applied to a model. Naturally, we may construct a pipeline of operators, i.e., apply multiple efficiency methods to the model sequentially. In this paper, we study the plausibility of this idea, and more importantly, the commutativity and cumulativeness of efficiency operators. We make two interesting observations from our experiments: (1) The operators are commutative{---}the order of efficiency methods within the pipeline has little impact on the final results; (2) The operators are also cumulative{---}the final results of combining several efficiency methods can be estimated by combining the results of individual methods. These observations deepen our understanding of efficiency operators and provide useful guidelines for building them in real-world applications.
[ "Xin, Ji", "Tang, Raphael", "Jiang, Zhiying", "Yu, Yaoliang", "Lin, Jimmy" ]
Operator Selection and Ordering in a Pipeline Approach to Efficiency Optimizations for Transformers
findings-acl.180
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
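The commutativity claim in the findings-acl.180 abstract above can be probed even on a toy scale. A sketch with two assumed stand-in operators (magnitude pruning and uniform quantization) applied to a single weight matrix in both orders; the paper's actual operators act on full Transformer models:

```python
import numpy as np

def prune(w, sparsity=0.5):
    """Efficiency operator 1: zero out the smallest-magnitude weights."""
    cutoff = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= cutoff, w, 0.0)

def quantize(w, bits=4):
    """Efficiency operator 2: uniform quantization to 2**(bits-1)-1 levels per sign."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))

a = quantize(prune(w))  # order 1: prune, then quantize
b = prune(quantize(w))  # order 2: quantize, then prune
print("mean |difference| between operator orders:", np.abs(a - b).mean())
```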
https://aclanthology.org/2023.findings-acl.181.bib
https://aclanthology.org/2023.findings-acl.181/
@inproceedings{alghamdi-etal-2023-aramus, title = "{A}ra{MUS}: Pushing the Limits of Data and Model Scale for {A}rabic Natural Language Processing", author = "Alghamdi, Asaad and Duan, Xinyu and Jiang, Wei and Wang, Zhenhai and Wu, Yimeng and Xia, Qingrong and Wang, Zhefeng and Zheng, Yi and Rezagholizadeh, Mehdi and Huai, Baoxing and Cheng, Peilun and Ghaddar, Abbas", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.181", doi = "10.18653/v1/2023.findings-acl.181", pages = "2883--2894", abstract = "Developing monolingual large Pre-trained Language Models (PLMs) has been shown to be very successful in handling different tasks in Natural Language Processing (NLP). In this work, we present AraMUS, the largest Arabic PLM with 11B parameters trained on 529GB of high-quality Arabic textual data. AraMUS achieves state-of-the-art performance on a diverse set of Arabic classification and generative tasks. Moreover, AraMUS shows impressive few-shot learning abilities compared with the best existing Arabic PLMs.", }
Developing monolingual large Pre-trained Language Models (PLMs) has been shown to be very successful in handling different tasks in Natural Language Processing (NLP). In this work, we present AraMUS, the largest Arabic PLM with 11B parameters trained on 529GB of high-quality Arabic textual data. AraMUS achieves state-of-the-art performance on a diverse set of Arabic classification and generative tasks. Moreover, AraMUS shows impressive few-shot learning abilities compared with the best existing Arabic PLMs.
[ "Alghamdi, Asaad", "Duan, Xinyu", "Jiang, Wei", "Wang, Zhenhai", "Wu, Yimeng", "Xia, Qingrong", "Wang, Zhefeng", "Zheng, Yi", "Rezagholizadeh, Mehdi", "Huai, Baoxing", "Cheng, Peilun", "Ghaddar, Abbas" ]
AraMUS: Pushing the Limits of Data and Model Scale for Arabic Natural Language Processing
findings-acl.181
Poster
2306.06800
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.182.bib
https://aclanthology.org/2023.findings-acl.182/
@inproceedings{white-etal-2023-leveraging, title = "Leveraging Explicit Procedural Instructions for Data-Efficient Action Prediction", author = "White, Julia and Raghuvanshi, Arushi and Pruksachatkun, Yada", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.182", doi = "10.18653/v1/2023.findings-acl.182", pages = "2895--2904", abstract = "Task-oriented dialogues often require agents to enact complex, multi-step procedures in order to meet user requests. While large language models have found success automating these dialogues in constrained environments, their widespread deployment is limited by the substantial quantities of task-specific data required for training. The following paper presents a data-efficient solution to constructing dialogue systems, leveraging explicit instructions derived from agent guidelines, such as company policies or customer service manuals. Our proposed Knowledge-Augmented Dialogue System (KADS) combines a large language model with a knowledge retrieval module that pulls documents outlining relevant procedures from a predefined set of policies, given a user-agent interaction. To train this system, we introduce a semi-supervised pre-training scheme that employs dialogue-document matching and action-oriented masked language modeling with partial parameter freezing. We evaluate the effectiveness of our approach on prominent task-oriented dialogue datasets, Action-Based Conversations Dataset and Schema-Guided Dialogue, for two dialogue tasks: action state tracking and workflow discovery. Our results demonstrate that procedural knowledge augmentation improves accuracy predicting in- and out-of-distribution actions while preserving high performance in settings with low or sparse data.", }
Task-oriented dialogues often require agents to enact complex, multi-step procedures in order to meet user requests. While large language models have found success automating these dialogues in constrained environments, their widespread deployment is limited by the substantial quantities of task-specific data required for training. The following paper presents a data-efficient solution to constructing dialogue systems, leveraging explicit instructions derived from agent guidelines, such as company policies or customer service manuals. Our proposed Knowledge-Augmented Dialogue System (KADS) combines a large language model with a knowledge retrieval module that pulls documents outlining relevant procedures from a predefined set of policies, given a user-agent interaction. To train this system, we introduce a semi-supervised pre-training scheme that employs dialogue-document matching and action-oriented masked language modeling with partial parameter freezing. We evaluate the effectiveness of our approach on prominent task-oriented dialogue datasets, Action-Based Conversations Dataset and Schema-Guided Dialogue, for two dialogue tasks: action state tracking and workflow discovery. Our results demonstrate that procedural knowledge augmentation improves accuracy predicting in- and out-of-distribution actions while preserving high performance in settings with low or sparse data.
[ "White, Julia", "Raghuvanshi, Arushi", "Pruksachatkun, Yada" ]
Leveraging Explicit Procedural Instructions for Data-Efficient Action Prediction
findings-acl.182
Poster
2306.03959
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.183.bib
https://aclanthology.org/2023.findings-acl.183/
@inproceedings{kambhatla-etal-2023-quantifying, title = "Quantifying Train-Evaluation Overlap with Nearest Neighbors", author = "Kambhatla, Gauri and Nguyen, Thuy and Choi, Eunsol", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.183", doi = "10.18653/v1/2023.findings-acl.183", pages = "2905--2920", abstract = "Characterizing benchmark datasets is crucial to interpreting model performance. In this work, we study train-evaluation overlap as a measure of an individual dataset{'}s adequacy to evaluate model generalization over a wide range of datasets. We quantify the overlap with a simple novel metric based on a nearest neighbors approach between the training and evaluation sets. We identify nearest training examples for each evaluation example by mapping instances with generic and task-specific embedding methods. Our study on eleven classification and extractive QA tasks reveals a wide range of train-evaluation overlap, and we show that the data collection method of the dataset and the difficulty of the task may play a role in the amount of overlap. Lastly, we use our nearest neighbor analysis to identify challenging or potentially mislabeled examples. Our analysis quantifies train-evaluation overlap, providing insights for constructing datasets to study generalization.", }
Characterizing benchmark datasets is crucial to interpreting model performance. In this work, we study train-evaluation overlap as a measure of an individual dataset{'}s adequacy to evaluate model generalization over a wide range of datasets. We quantify the overlap with a simple novel metric based on a nearest neighbors approach between the training and evaluation sets. We identify nearest training examples for each evaluation example by mapping instances with generic and task-specific embedding methods. Our study on eleven classification and extractive QA tasks reveals a wide range of train-evaluation overlap, and we show that the data collection method of the dataset and the difficulty of the task may play a role in the amount of overlap. Lastly, we use our nearest neighbor analysis to identify challenging or potentially mislabeled examples. Our analysis quantifies train-evaluation overlap, providing insights for constructing datasets to study generalization.
[ "Kambhatla, Gauri", "Nguyen, Thuy", "Choi, Eunsol" ]
Quantifying Train-Evaluation Overlap with Nearest Neighbors
findings-acl.183
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
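A sketch of the obvious building block behind the overlap metric of findings-acl.183 above: the nearest training neighbor of each evaluation example under some embedding. The cosine-similarity choice and any aggregation into a dataset-level score are assumptions; the paper's exact metric may differ:

```python
import numpy as np

def nearest_train_neighbors(train_emb, eval_emb):
    """For each eval example, return the cosine similarity to -- and index of --
    its nearest training example. Uniformly high similarities suggest heavy
    train-evaluation overlap; low ones suggest a more challenging split."""
    t = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    e = eval_emb / np.linalg.norm(eval_emb, axis=1, keepdims=True)
    sims = e @ t.T  # (n_eval, n_train) cosine similarities
    return sims.max(axis=1), sims.argmax(axis=1)

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 128))  # stand-in for task-specific embeddings
evals = rng.normal(size=(100, 128))
scores, idx = nearest_train_neighbors(train, evals)
print("mean nearest-neighbor similarity:", scores.mean())
```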
https://aclanthology.org/2023.findings-acl.184.bib
https://aclanthology.org/2023.findings-acl.184/
@inproceedings{weinstein-goldberg-2023-unsupervised, title = "Unsupervised Mapping of Arguments of Deverbal Nouns to Their Corresponding Verbal Labels", author = "Weinstein, Aviv and Goldberg, Yoav", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.184", doi = "10.18653/v1/2023.findings-acl.184", pages = "2921--2935", abstract = "Deverbal nouns are nominal forms of verbs commonly used in written English texts to describe events or actions, as well as their arguments. However, many NLP systems, and in particular pattern-based ones, neglect to handle such nominalized constructions. The solutions that do exist for handling arguments of nominalized constructions are based on semantic annotation and require semantic ontologies, restricting their application to a small set of nouns. We propose to adopt instead a more syntactic approach, which maps the arguments of deverbal nouns to the universal-dependency relations of the corresponding verbal construction. We present an unsupervised mechanism{---}based on contextualized word representations{---}which makes it possible to enrich universal-dependency trees with dependency arcs denoting arguments of deverbal nouns, using the same labels as the corresponding verbal cases. By sharing the same label set as in the verbal case, patterns that were developed for verbs can be applied without modification but with high accuracy also to the nominal constructions.", }
Deverbal nouns are nominal forms of verbs commonly used in written English texts to describe events or actions, as well as their arguments. However, many NLP systems, and in particular pattern-based ones, neglect to handle such nominalized constructions. The solutions that do exist for handling arguments of nominalized constructions are based on semantic annotation and require semantic ontologies, restricting their application to a small set of nouns. We propose to adopt instead a more syntactic approach, which maps the arguments of deverbal nouns to the universal-dependency relations of the corresponding verbal construction. We present an unsupervised mechanism{---}based on contextualized word representations{---}which makes it possible to enrich universal-dependency trees with dependency arcs denoting arguments of deverbal nouns, using the same labels as the corresponding verbal cases. By sharing the same label set as in the verbal case, patterns that were developed for verbs can be applied without modification but with high accuracy also to the nominal constructions.
[ "Weinstein, Aviv", "Goldberg, Yoav" ]
Unsupervised Mapping of Arguments of Deverbal Nouns to Their Corresponding Verbal Labels
findings-acl.184
Poster
2306.13922
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.185.bib
https://aclanthology.org/2023.findings-acl.185/
@inproceedings{winata-etal-2023-decades, title = "The Decades Progress on Code-Switching Research in {NLP}: A Systematic Survey on Trends and Challenges", author = "Winata, Genta and Aji, Alham Fikri and Yong, Zheng Xin and Solorio, Thamar", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.185", doi = "10.18653/v1/2023.findings-acl.185", pages = "2936--2978", abstract = "Code-Switching, a common phenomenon in written text and conversation, has been studied over decades by the natural language processing (NLP) research community. Initially, code-switching was explored intensively by leveraging linguistic theories; currently, more machine-learning-oriented approaches are used to develop models. We introduce a comprehensive systematic survey on code-switching research in natural language processing to understand the progress of the past decades and conceptualize the challenges and tasks on the code-switching topic. Finally, we summarize the trends and findings and conclude with a discussion of future directions and open questions for further investigation.", }
Code-Switching, a common phenomenon in written text and conversation, has been studied over decades by the natural language processing (NLP) research community. Initially, code-switching was explored intensively by leveraging linguistic theories; currently, more machine-learning-oriented approaches are used to develop models. We introduce a comprehensive systematic survey on code-switching research in natural language processing to understand the progress of the past decades and conceptualize the challenges and tasks on the code-switching topic. Finally, we summarize the trends and findings and conclude with a discussion of future directions and open questions for further investigation.
[ "Winata, Genta", "Aji, Alham Fikri", "Yong, Zheng Xin", "Solorio, Thamar" ]
The Decades Progress on Code-Switching Research in NLP: A Systematic Survey on Trends and Challenges
findings-acl.185
Poster
2212.09660
[ "https://github.com/gentaiscool/code-switching-papers" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.186.bib
https://aclanthology.org/2023.findings-acl.186/
@inproceedings{zhou-etal-2023-learning, title = "Learning to Predict Persona Information for Dialogue Personalization without Explicit Persona Description", author = "Zhou, Wangchunshu and Li, Qifei and Li, Chenle", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.186", doi = "10.18653/v1/2023.findings-acl.186", pages = "2979--2991", abstract = "Personalizing dialogue agents is important for dialogue systems to generate more specific, consistent, and engaging responses. However, most current dialogue personalization approaches rely on explicit persona descriptions during inference, which severely restricts their application. In this paper, we propose a novel approach that learns to predict persona information based on the dialogue history to personalize the dialogue agent without relying on any explicit persona descriptions during inference. Experimental results on the PersonaChat dataset show that the proposed method can improve the consistency of generated responses when conditioning on the predicted profile of the dialogue agent (i.e. {``}self persona{''}), and improve the engagingness of the generated responses when conditioning on the predicted persona of the dialogue partner (i.e. {``}their persona{''}). We also find that a trained persona prediction model can be successfully transferred to other datasets and help generate more relevant responses.", }
Personalizing dialogue agents is important for dialogue systems to generate more specific, consistent, and engaging responses. However, most current dialogue personalization approaches rely on explicit persona descriptions during inference, which severely restricts their application. In this paper, we propose a novel approach that learns to predict persona information based on the dialogue history to personalize the dialogue agent without relying on any explicit persona descriptions during inference. Experimental results on the PersonaChat dataset show that the proposed method can improve the consistency of generated responses when conditioning on the predicted profile of the dialogue agent (i.e. {``}self persona{''}), and improve the engagingness of the generated responses when conditioning on the predicted persona of the dialogue partner (i.e. {``}their persona{''}). We also find that a trained persona prediction model can be successfully transferred to other datasets and help generate more relevant responses.
[ "Zhou, Wangchunshu", "Li, Qifei", "Li, Chenle" ]
Learning to Predict Persona Information for Dialogue Personalization without Explicit Persona Description
findings-acl.186
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.187.bib
https://aclanthology.org/2023.findings-acl.187/
@inproceedings{barale-etal-2023-automated, title = "Automated Refugee Case Analysis: A {NLP} Pipeline for Supporting Legal Practitioners", author = "Barale, Claire and Rovatsos, Michael and Bhuta, Nehal", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.187", doi = "10.18653/v1/2023.findings-acl.187", pages = "2992--3005", abstract = "In this paper, we introduce an end-to-end pipeline for retrieving, processing, and extracting targeted information from legal cases. We investigate an under-studied legal domain with a case study on refugee law in Canada. Searching case law for past similar cases is a key part of legal work for both lawyers and judges, the potential end-users of our prototype. While traditional named-entity recognition labels such as dates are meaningful information in law, we propose to extend existing models and retrieve a total of 19 categories of items from refugee cases. After creating a novel data set of cases, we perform information extraction based on state-of-the-art neural named-entity recognition (NER). We test different architectures including two transformer models, using contextual and non-contextual embeddings, and compare general-purpose versus domain-specific pre-training. The results demonstrate that models pre-trained on legal data perform best despite their smaller size, suggesting that domain-matching had a larger effect than network architecture. We achieve an F1-score above 90{\%} on five of the targeted categories and above 80{\%} on an additional four categories.", }
In this paper, we introduce an end-to-end pipeline for retrieving, processing, and extracting targeted information from legal cases. We investigate an under-studied legal domain with a case study on refugee law in Canada. Searching case law for past similar cases is a key part of legal work for both lawyers and judges, the potential end-users of our prototype. While traditional named-entity recognition labels such as dates are meaningful information in law, we propose to extend existing models and retrieve a total of 19 categories of items from refugee cases. After creating a novel data set of cases, we perform information extraction based on state-of-the-art neural named-entity recognition (NER). We test different architectures including two transformer models, using contextual and non-contextual embeddings, and compare general-purpose versus domain-specific pre-training. The results demonstrate that models pre-trained on legal data perform best despite their smaller size, suggesting that domain-matching had a larger effect than network architecture. We achieve an F1-score above 90{\%} on five of the targeted categories and above 80{\%} on an additional four categories.
[ "Barale, Claire", "Rovatsos, Michael", "Bhuta, Nehal" ]
Automated Refugee Case Analysis: A NLP Pipeline for Supporting Legal Practitioners
findings-acl.187
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.188.bib
https://aclanthology.org/2023.findings-acl.188/
@inproceedings{li-etal-2023-recurrent, title = "Recurrent Attention Networks for Long-text Modeling", author = "Li, Xianming and Li, Zongxi and Luo, Xiaotian and Xie, Haoran and Lee, Xing and Zhao, Yingbin and Wang, Fu Lee and Li, Qing", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.188", doi = "10.18653/v1/2023.findings-acl.188", pages = "3006--3019", abstract = "Self-attention-based models have achieved remarkable progress in short-text mining. However, the quadratic computational complexities restrict their application in long text processing. Prior works have adopted the chunking strategy to divide long documents into chunks and stack a self-attention backbone with the recurrent structure to extract semantic representation. Such an approach disables parallelization of the attention mechanism, significantly increasing the training cost and raising hardware requirements. Revisiting the self-attention mechanism and the recurrent structure, this paper proposes a novel long-document encoding model, Recurrent Attention Network (RAN), to enable the recurrent operation of self-attention. Combining the advantages from both sides, the well-designed RAN is capable of extracting global semantics in both token-level and document-level representations, making it inherently compatible with both sequential and classification tasks, respectively. Furthermore, RAN is computationally scalable as it supports parallelization on long document processing. Extensive experiments demonstrate the long-text encoding ability of the proposed RAN model on both classification and sequential tasks, showing its potential for a wide range of applications.", }
Self-attention-based models have achieved remarkable progress in short-text mining. However, their quadratic computational complexity restricts their application to long-text processing. Prior works have adopted the chunking strategy to divide long documents into chunks and stack a self-attention backbone with a recurrent structure to extract semantic representations. Such an approach disables parallelization of the attention mechanism, significantly increasing the training cost and raising hardware requirements. Revisiting the self-attention mechanism and the recurrent structure, this paper proposes a novel long-document encoding model, Recurrent Attention Network (RAN), to enable the recurrent operation of self-attention. Combining the advantages from both sides, the well-designed RAN is capable of extracting global semantics in both token-level and document-level representations, making it inherently compatible with sequential and classification tasks, respectively. Furthermore, RAN is computationally scalable as it supports parallelization in long-document processing. Extensive experiments demonstrate the long-text encoding ability of the proposed RAN model on both classification and sequential tasks, showing its potential for a wide range of applications.
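A schematic sketch of recurrent attention over fixed-size chunks, where a small memory state carries context between chunks. It illustrates the general recurrent-attention idea only; the actual RAN architecture differs in its details.

```python
# Each chunk attends over itself plus a rolling memory summarizing
# previous chunks, so global context propagates through long inputs.
import torch, torch.nn as nn

class ChunkRecurrentAttention(nn.Module):
    def __init__(self, d_model=256, n_heads=4, chunk_size=128):
        super().__init__()
        self.chunk_size = chunk_size
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x, memory=None):
        # x: (batch, seq_len, d_model); memory: (batch, m, d_model) or None
        outs = []
        for chunk in x.split(self.chunk_size, dim=1):
            kv = chunk if memory is None else torch.cat([memory, chunk], dim=1)
            out, _ = self.attn(chunk, kv, kv)
            out = self.norm(out + chunk)
            memory = out[:, -1:, :]          # keep last state as the memory
            outs.append(out)
        return torch.cat(outs, dim=1), memory

x = torch.randn(2, 300, 256)
y, mem = ChunkRecurrentAttention()(x)
print(y.shape, mem.shape)                    # (2, 300, 256), (2, 1, 256)
```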
[ "Li, Xianming", "Li, Zongxi", "Luo, Xiaotian", "Xie, Haoran", "Lee, Xing", "Zhao, Yingbin", "Wang, Fu Lee", "Li, Qing" ]
Recurrent Attention Networks for Long-text Modeling
findings-acl.188
Poster
2306.06843
[ "https://github.com/4ai/ran" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.189.bib
https://aclanthology.org/2023.findings-acl.189/
@inproceedings{gaschi-etal-2023-exploring, title = "Exploring the Relationship between Alignment and Cross-lingual Transfer in Multilingual Transformers", author = "Gaschi, Felix and Cerda, Patricio and Rastin, Parisa and Toussaint, Yannick", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.189", doi = "10.18653/v1/2023.findings-acl.189", pages = "3020--3042", abstract = "Without any explicit cross-lingual training data, multilingual language models can achieve cross-lingual transfer. One common way to improve this transfer is to perform realignment steps before fine-tuning, i.e., to train the model to build similar representations for pairs of words from translated sentences. But such realignment methods were found to not always improve results across languages and tasks, which raises the question of whether aligned representations are truly beneficial for cross-lingual transfer. We provide evidence that alignment is actually significantly correlated with cross-lingual transfer across languages, models and random seeds. We show that fine-tuning can have a significant impact on alignment, depending mainly on the downstream task and the model. Finally, we show that realignment can, in some instances, improve cross-lingual transfer, and we identify conditions in which realignment methods provide significant improvements. Namely, we find that realignment works better on tasks for which alignment is correlated with cross-lingual transfer when generalizing to a distant language and with smaller models, as well as when using a bilingual dictionary rather than FastAlign to extract realignment pairs. For example, for POS-tagging, between English and Arabic, realignment can bring a +15.8 accuracy improvement on distilmBERT, even outperforming XLM-R Large by 1.7. We thus advocate for further research on realignment methods for smaller multilingual models as an alternative to scaling.", }
Without any explicit cross-lingual training data, multilingual language models can achieve cross-lingual transfer. One common way to improve this transfer is to perform realignment steps before fine-tuning, i.e., to train the model to build similar representations for pairs of words from translated sentences. But such realignment methods were found to not always improve results across languages and tasks, which raises the question of whether aligned representations are truly beneficial for cross-lingual transfer. We provide evidence that alignment is actually significantly correlated with cross-lingual transfer across languages, models and random seeds. We show that fine-tuning can have a significant impact on alignment, depending mainly on the downstream task and the model. Finally, we show that realignment can, in some instances, improve cross-lingual transfer, and we identify conditions in which realignment methods provide significant improvements. Namely, we find that realignment works better on tasks for which alignment is correlated with cross-lingual transfer when generalizing to a distant language and with smaller models, as well as when using a bilingual dictionary rather than FastAlign to extract realignment pairs. For example, for POS-tagging, between English and Arabic, realignment can bring a +15.8 accuracy improvement on distilmBERT, even outperforming XLM-R Large by 1.7. We thus advocate for further research on realignment methods for smaller multilingual models as an alternative to scaling.
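A hedged sketch of a realignment step: a contrastive objective pulls contextual embeddings of translated word pairs together before fine-tuning. Pair extraction (bilingual dictionary or FastAlign) and the paper's exact loss are abstracted away; the temperature is an arbitrary choice.

```python
# Each source word should be closest to its own translation among all
# target words in the batch; gradients would update the multilingual encoder.
import torch
import torch.nn.functional as F

def realignment_loss(src_vecs: torch.Tensor, tgt_vecs: torch.Tensor) -> torch.Tensor:
    """src_vecs/tgt_vecs: (n_pairs, d) embeddings of aligned word pairs."""
    src = F.normalize(src_vecs, dim=-1)
    tgt = F.normalize(tgt_vecs, dim=-1)
    logits = src @ tgt.T / 0.07              # cosine similarity / temperature
    labels = torch.arange(len(src))          # positives are on the diagonal
    return F.cross_entropy(logits, labels)

loss = realignment_loss(torch.randn(16, 768, requires_grad=True),
                        torch.randn(16, 768, requires_grad=True))
loss.backward()
print(loss.item())
```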
[ "Gaschi, Felix", "Cerda, Patricio", "Rastin, Parisa", "Toussaint, Yannick" ]
Exploring the Relationship between Alignment and Cross-lingual Transfer in Multilingual Transformers
findings-acl.189
Poster
2306.02790
[ "https://github.com/posos-tech/multilingual-alignment-and-transfer" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.190.bib
https://aclanthology.org/2023.findings-acl.190/
@inproceedings{fan-etal-2023-aerial, title = "Aerial Vision-and-Dialog Navigation", author = "Fan, Yue and Chen, Winson and Jiang, Tongzhou and Zhou, Chun and Zhang, Yi and Wang, Xin", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.190", doi = "10.18653/v1/2023.findings-acl.190", pages = "3043--3061", abstract = "The ability to converse with humans and follow natural language commands is crucial for intelligent unmanned aerial vehicles (a.k.a. drones). It can relieve people{'}s burden of holding a controller all the time, allow multitasking, and make drone control more accessible for people with disabilities or with their hands occupied. To this end, we introduce Aerial Vision-and-Dialog Navigation (AVDN), to navigate a drone via natural language conversation. We build a drone simulator with a continuous photorealistic environment and collect a new AVDN dataset of over 3k recorded navigation trajectories with asynchronous human-human dialogs between commanders and followers. The commander provides initial navigation instruction and further guidance by request, while the follower navigates the drone in the simulator and asks questions when needed. During data collection, followers{'} attention on the drone{'}s visual observation is also recorded. Based on the AVDN dataset, we study the tasks of aerial navigation from (full) dialog history and propose an effective Human Attention Aided Transformer model (HAA-Transformer), which learns to predict both navigation waypoints and human attention.", }
The ability to converse with humans and follow natural language commands is crucial for intelligent unmanned aerial vehicles (a.k.a. drones). It can relieve people{'}s burden of holding a controller all the time, allow multitasking, and make drone control more accessible for people with disabilities or with their hands occupied. To this end, we introduce Aerial Vision-and-Dialog Navigation (AVDN), to navigate a drone via natural language conversation. We build a drone simulator with a continuous photorealistic environment and collect a new AVDN dataset of over 3k recorded navigation trajectories with asynchronous human-human dialogs between commanders and followers. The commander provides the initial navigation instruction and further guidance on request, while the follower navigates the drone in the simulator and asks questions when needed. During data collection, followers{'} attention on the drone{'}s visual observation is also recorded. Based on the AVDN dataset, we study the tasks of aerial navigation from (full) dialog history and propose an effective Human Attention Aided Transformer model (HAA-Transformer), which learns to predict both navigation waypoints and human attention.
[ "Fan, Yue", "Chen, Winson", "Jiang, Tongzhou", "Zhou, Chun", "Zhang, Yi", "Wang, Xin" ]
Aerial Vision-and-Dialog Navigation
findings-acl.190
Poster
2205.12219
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.191.bib
https://aclanthology.org/2023.findings-acl.191/
@inproceedings{zhang-etal-2023-improved, title = "Improved Logical Reasoning of Language Models via Differentiable Symbolic Programming", author = "Zhang, Hanlin and Huang, Jiani and Li, Ziyang and Naik, Mayur and Xing, Eric", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.191", doi = "10.18653/v1/2023.findings-acl.191", pages = "3062--3077", abstract = "Pre-trained large language models (LMs) struggle to perform logical reasoning reliably despite advances in scale and compositionality. In this work, we tackle this challenge through the lens of symbolic programming. We propose DSR-LM, a Differentiable Symbolic Reasoning framework where pre-trained LMs govern the perception of factual knowledge, and a symbolic module performs deductive reasoning. In contrast to works that rely on hand-crafted logic rules, our differentiable symbolic reasoning framework efficiently learns weighted rules and applies semantic loss to further improve LMs. DSR-LM is scalable, interpretable, and allows easy integration of prior knowledge, thereby supporting extensive symbolic programming to robustly derive a logical conclusion. The results of our experiments suggest that DSR-LM improves the logical reasoning abilities of pre-trained language models, resulting in a significant increase in accuracy of over 20{\%} on deductive reasoning benchmarks. Furthermore, DSR-LM outperforms a variety of competitive baselines when faced with systematic changes in sequence length.", }
Pre-trained large language models (LMs) struggle to perform logical reasoning reliably despite advances in scale and compositionality. In this work, we tackle this challenge through the lens of symbolic programming. We propose DSR-LM, a Differentiable Symbolic Reasoning framework where pre-trained LMs govern the perception of factual knowledge, and a symbolic module performs deductive reasoning. In contrast to works that rely on hand-crafted logic rules, our differentiable symbolic reasoning framework efficiently learns weighted rules and applies semantic loss to further improve LMs. DSR-LM is scalable, interpretable, and allows easy integration of prior knowledge, thereby supporting extensive symbolic programming to robustly derive a logical conclusion. The results of our experiments suggest that DSR-LM improves the logical reasoning abilities of pre-trained language models, resulting in a significant increase in accuracy of over 20{\%} on deductive reasoning benchmarks. Furthermore, DSR-LM outperforms a variety of competitive baselines when faced with systematic changes in sequence length.
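A toy, illustrative-only view of differentiable rule application: fact scores play the role of LM-predicted relations, and a learned rule weight composes two relations into a third via a soft join, so gradients reach both the facts and the rule weight. The real DSR-LM system uses a full symbolic reasoning module; nothing here reproduces it.

```python
# Soft deduction for: grandparent(x, z) <- father(x, y), mother(y, z)
import torch

n = 4                                              # number of entities
father = torch.rand(n, n, requires_grad=True)      # "LM-predicted" fact scores
mother = torch.rand(n, n, requires_grad=True)
rule_w = torch.tensor(0.5, requires_grad=True)     # learned rule weight

# score of grandparent(x, z) = w * max_y father(x, y) * mother(y, z)
grand = rule_w * (father.unsqueeze(2) * mother.unsqueeze(0)).max(dim=1).values

target = torch.zeros(n, n)
target[0, 3] = 1.0                                 # one known grandparent fact
loss = torch.nn.functional.binary_cross_entropy(grand.clamp(0, 1), target)
loss.backward()                                    # gradients reach facts AND rule weight
print(rule_w.grad)
```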
[ "Zhang, Hanlin", "Huang, Jiani", "Li, Ziyang", "Naik, Mayur", "Xing, Eric" ]
Improved Logical Reasoning of Language Models via Differentiable Symbolic Programming
findings-acl.191
Poster
2305.03742
[ "https://github.com/moqingyan/dsr-lm" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.192.bib
https://aclanthology.org/2023.findings-acl.192/
@inproceedings{takase-etal-2023-b2t, title = "{B}2{T} Connection: Serving Stability and Performance in Deep Transformers", author = "Takase, Sho and Kiyono, Shun and Kobayashi, Sosuke and Suzuki, Jun", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.192", doi = "10.18653/v1/2023.findings-acl.192", pages = "3078--3095", abstract = "In the perspective of a layer normalization (LN) position, the architecture of Transformers can be categorized into two types: Post-LN and Pre-LN.Recent Transformers prefer to select Pre-LN because the training in Post-LN with deep Transformers, e.g., ten or more layers, often becomes unstable, resulting in useless models. However, in contrast, Post-LN has also consistently achieved better performance than Pre-LN in relatively shallow Transformers, e.g., six or fewer layers. This study first investigates the reason for these discrepant observations empirically and theoretically and discovers 1, the LN in Post-LN is the source of the vanishing gradient problem that mainly leads the unstable training whereas Pre-LN prevents it, and 2, Post-LN tends to preserve larger gradient norms in higher layers during the back-propagation that may lead an effective training. Exploiting the new findings, we propose a method that can equip both higher stability and effective training by a simple modification from Post-LN.We conduct experiments on a wide range of text generation tasks and demonstrate that our method outperforms Pre-LN, and stable training regardless of the shallow or deep layer settings.", }
From the perspective of the layer normalization (LN) position, the architecture of Transformers can be categorized into two types: Post-LN and Pre-LN. Recent Transformers prefer Pre-LN because training Post-LN with deep Transformers, e.g., ten or more layers, often becomes unstable, resulting in useless models. However, Post-LN has also consistently achieved better performance than Pre-LN in relatively shallow Transformers, e.g., six or fewer layers. This study first investigates the reason for these discrepant observations empirically and theoretically and discovers that (1) the LN in Post-LN is the source of the vanishing gradient problem that mainly leads to unstable training, whereas Pre-LN prevents it, and (2) Post-LN tends to preserve larger gradient norms in higher layers during back-propagation, which may lead to effective training. Exploiting these new findings, we propose a method that achieves both high stability and effective training via a simple modification of Post-LN. We conduct experiments on a wide range of text generation tasks and demonstrate that our method outperforms Pre-LN and enables stable training regardless of the shallow or deep layer settings.
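A sketch of the two standard LN placements plus a bottom-to-top style bypass, as one reading of the modification described above (an extra residual path from the block input that skips the intermediate normalization); it is an approximation, not a verified re-implementation.

```python
import torch, torch.nn as nn

class Block(nn.Module):
    def __init__(self, d=256, mode="post"):
        super().__init__()
        self.mode = mode
        self.attn = nn.MultiheadAttention(d, 4, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, x):
        if self.mode == "pre":                   # Pre-LN: stable in deep stacks
            x = x + self.attn(self.ln1(x), self.ln1(x), self.ln1(x))[0]
            return x + self.ffn(self.ln2(x))
        h = self.ln1(x + self.attn(x, x, x)[0])  # Post-LN attention sub-layer
        out = h + self.ffn(h)
        if self.mode == "b2t":                   # extra bottom-to-top residual:
            out = out + x                        # block input bypasses ln1
        return self.ln2(out)                     # Post-LN FFN sub-layer

for mode in ("post", "pre", "b2t"):
    print(mode, Block(mode=mode)(torch.randn(2, 10, 256)).shape)
```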
[ "Takase, Sho", "Kiyono, Shun", "Kobayashi, Sosuke", "Suzuki, Jun" ]
B2T Connection: Serving Stability and Performance in Deep Transformers
findings-acl.192
Poster
2206.00330
[ "https://github.com/takase/b2t_connection" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.193.bib
https://aclanthology.org/2023.findings-acl.193/
@inproceedings{litschko-etal-2023-boosting, title = "Boosting Zero-shot Cross-lingual Retrieval by Training on Artificially Code-Switched Data", author = "Litschko, Robert and Artemova, Ekaterina and Plank, Barbara", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.193", doi = "10.18653/v1/2023.findings-acl.193", pages = "3096--3108", abstract = "Transferring information retrieval (IR) models from a high-resource language (typically English) to other languages in a zero-shot fashion has become a widely adopted approach. In this work, we show that the effectiveness of zero-shot rankers diminishes when queries and documents are present in different languages. Motivated by this, we propose to train ranking models on artificially code-switched data instead, which we generate by utilizing bilingual lexicons. To this end, we experiment with lexicons induced from (1) cross-lingual word embeddings and (2) parallel Wikipedia page titles. We use the mMARCO dataset to extensively evaluate reranking models on 36 language pairs spanning Monolingual IR (MoIR), Cross-lingual IR (CLIR), and Multilingual IR (MLIR). Our results show that code-switching can yield consistent and substantial gains of 5.1 MRR@10 in CLIR and 3.9 MRR@10 in MLIR, while maintaining stable performance in MoIR. Encouragingly, the gains are especially pronounced for distant languages (up to 2x absolute gain). We further show that our approach is robust towards the ratio of code-switched tokens and also extends to unseen languages. Our results demonstrate that training on code-switched data is a cheap and effective way of generalizing zero-shot rankers for cross-lingual and multilingual retrieval.", }
Transferring information retrieval (IR) models from a high-resource language (typically English) to other languages in a zero-shot fashion has become a widely adopted approach. In this work, we show that the effectiveness of zero-shot rankers diminishes when queries and documents are present in different languages. Motivated by this, we propose to train ranking models on artificially code-switched data instead, which we generate by utilizing bilingual lexicons. To this end, we experiment with lexicons induced from (1) cross-lingual word embeddings and (2) parallel Wikipedia page titles. We use the mMARCO dataset to extensively evaluate reranking models on 36 language pairs spanning Monolingual IR (MoIR), Cross-lingual IR (CLIR), and Multilingual IR (MLIR). Our results show that code-switching can yield consistent and substantial gains of 5.1 MRR@10 in CLIR and 3.9 MRR@10 in MLIR, while maintaining stable performance in MoIR. Encouragingly, the gains are especially pronounced for distant languages (up to 2x absolute gain). We further show that our approach is robust towards the ratio of code-switched tokens and also extends to unseen languages. Our results demonstrate that training on code-switched data is a cheap and effective way of generalizing zero-shot rankers for cross-lingual and multilingual retrieval.
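A minimal sketch of lexicon-based code-switching as described above: source-language tokens are randomly replaced with dictionary translations. The toy lexicon and the 50% switch ratio are illustrative assumptions.

```python
import random

# toy bilingual lexicon (English -> German)
lexicon = {"president": "Präsident", "election": "Wahl", "city": "Stadt"}

def code_switch(query: str, ratio: float = 0.5, seed: int = 0) -> str:
    """Replace each lexicon word with its translation with probability `ratio`."""
    rng = random.Random(seed)
    tokens = [lexicon[t] if t in lexicon and rng.random() < ratio else t
              for t in query.split()]
    return " ".join(tokens)

print(code_switch("who won the president election in the city"))
# -> e.g. "who won the Präsident election in the Stadt"
```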
[ "Litschko, Robert", "Artemova, Ekaterina", "Plank, Barbara" ]
Boosting Zero-shot Cross-lingual Retrieval by Training on Artificially Code-Switched Data
findings-acl.193
Poster
2305.05295
[ "https://github.com/mainlp/codeswitchclir" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.194.bib
https://aclanthology.org/2023.findings-acl.194/
@inproceedings{ma-etal-2023-domain, title = "Domain-specific Attention with Distributional Signatures for Multi-Domain End-to-end Task-Oriented Dialogue", author = "Ma, Xing and Zhang, Peng and Zhao, Feifei", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.194", doi = "10.18653/v1/2023.findings-acl.194", pages = "3109--3122", abstract = "The end-to-end task-oriented dialogue system has achieved great success in recent years. Most of these dialogue systems need to accommodate multi-domain dialogue in real-world scenarios. However, due to the high cost of dialogue data annotation and the scarcity of labeled dialogue data, existing methods are difficult to extend to new domains. Therefore, it is important to use limited data to construct multi-domain dialogue systems. To solve this problem, we propose a novel domain attention module. It use the distributional signatures to construct a multi-domain dialogue system effectively with limited data, which has strong extensibility. We also define a adjacent n-gram pattern to explore potential patterns for dialogue entities. Experimental results show that our approach outperforms the baseline models on most metrics. In the few-shot scenario, we show our method get a great improvement compared with previous methods while keeping smaller model scale.", }
The end-to-end task-oriented dialogue system has achieved great success in recent years. Most of these dialogue systems need to accommodate multi-domain dialogue in real-world scenarios. However, due to the high cost of dialogue data annotation and the scarcity of labeled dialogue data, existing methods are difficult to extend to new domains. Therefore, it is important to use limited data to construct multi-domain dialogue systems. To solve this problem, we propose a novel domain attention module. It use the distributional signatures to construct a multi-domain dialogue system effectively with limited data, which has strong extensibility. We also define a adjacent n-gram pattern to explore potential patterns for dialogue entities. Experimental results show that our approach outperforms the baseline models on most metrics. In the few-shot scenario, we show our method get a great improvement compared with previous methods while keeping smaller model scale.
[ "Ma, Xing", "Zhang, Peng", "Zhao, Feifei" ]
Domain-specific Attention with Distributional Signatures for Multi-Domain End-to-end Task-Oriented Dialogue
findings-acl.194
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.195.bib
https://aclanthology.org/2023.findings-acl.195/
@inproceedings{lei-etal-2023-ckdst, title = "{CKDST}: Comprehensively and Effectively Distill Knowledge from Machine Translation to End-to-End Speech Translation", author = "Lei, Yikun and Xue, Zhengshan and Zhao, Xiaohu and Sun, Haoran and Zhu, Shaolin and Lin, Xiaodong and Xiong, Deyi", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.195", doi = "10.18653/v1/2023.findings-acl.195", pages = "3123--3137", abstract = "Distilling knowledge from a high-resource task, e.g., machine translation, is an effective way to alleviate the data scarcity problem of end-to-end speech translation. However, previous works simply use the classical knowledge distillation that does not allow for adequate transfer of knowledge from machine translation. In this paper, we propose a comprehensive knowledge distillation framework for speech translation, CKDST, which is capable of comprehensively and effectively distilling knowledge from machine translation to speech translation from two perspectives: cross-modal contrastive representation distillation and simultaneous decoupled knowledge distillation. In the former, we leverage a contrastive learning objective to optmize the mutual information between speech and text representations for representation distillation in the encoder. In the later, we decouple the non-target class knowledge from target class knowledge for logits distillation in the decoder. Experiments on the MuST-C benchmark dataset demonstrate that our CKDST substantially improves the baseline by 1.2 BLEU on average in all translation directions, and outperforms previous state-of-the-art end-to-end and cascaded speech translation models.", }
Distilling knowledge from a high-resource task, e.g., machine translation, is an effective way to alleviate the data scarcity problem of end-to-end speech translation. However, previous works simply use classical knowledge distillation, which does not allow for adequate transfer of knowledge from machine translation. In this paper, we propose a comprehensive knowledge distillation framework for speech translation, CKDST, which is capable of comprehensively and effectively distilling knowledge from machine translation to speech translation from two perspectives: cross-modal contrastive representation distillation and simultaneous decoupled knowledge distillation. In the former, we leverage a contrastive learning objective to optimize the mutual information between speech and text representations for representation distillation in the encoder. In the latter, we decouple the non-target class knowledge from target class knowledge for logits distillation in the decoder. Experiments on the MuST-C benchmark dataset demonstrate that our CKDST substantially improves the baseline by 1.2 BLEU on average in all translation directions, and outperforms previous state-of-the-art end-to-end and cascaded speech translation models.
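A sketch of the cross-modal contrastive term described above, written in a standard symmetric InfoNCE form: paired speech and text encodings are positives, in-batch examples are negatives. The paper's exact formulation and the decoupled logits-distillation term are omitted.

```python
import torch
import torch.nn.functional as F

def cross_modal_infonce(speech: torch.Tensor, text: torch.Tensor, tau: float = 0.1):
    """speech/text: (batch, d) pooled encoder states of paired utterances."""
    s = F.normalize(speech, dim=-1)
    t = F.normalize(text, dim=-1)
    logits = s @ t.T / tau                 # (batch, batch) similarity matrix
    labels = torch.arange(len(s))          # diagonal pairs are positives
    # symmetric loss: speech->text and text->speech retrieval directions
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))

print(cross_modal_infonce(torch.randn(8, 512), torch.randn(8, 512)))
```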
[ "Lei, Yikun", "Xue, Zhengshan", "Zhao, Xiaohu", "Sun, Haoran", "Zhu, Shaolin", "Lin, Xiaodong", "Xiong, Deyi" ]
CKDST: Comprehensively and Effectively Distill Knowledge from Machine Translation to End-to-End Speech Translation
findings-acl.195
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.196.bib
https://aclanthology.org/2023.findings-acl.196/
@inproceedings{wein-etal-2023-follow, title = "Follow the leader(board) with confidence: Estimating p-values from a single test set with item and response variance", author = "Wein, Shira and Homan, Christopher and Aroyo, Lora and Welty, Chris", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.196", doi = "10.18653/v1/2023.findings-acl.196", pages = "3138--3161", abstract = "Among the problems with leaderboard culture in NLP has been the widespread lack of confidence estimation in reported results. In this work, we present a framework and simulator for estimating p-values for comparisons between the results of two systems, in order to understand the confidence that one is actually better (i.e. ranked higher) than the other. What has made this difficult in the past is that each system must itself be evaluated by comparison to a gold standard. We define a null hypothesis that each system{'}s metric scores are drawn from the same distribution, using variance found naturally (though rarely reported) in test set items and individual labels on an item (responses) to produce the metric distributions. We create a test set that evenly mixes the responses of the two systems under the assumption the null hypothesis is true. Exploring how to best estimate the true p-value from a single test set under different metrics, tests, and sampling methods, we find that the presence of response variance (from multiple raters or multiple model versions) has a profound impact on p-value estimates for model comparison, and that choice of metric and sampling method is critical to providing statistical guarantees on model comparisons.", }
Among the problems with leaderboard culture in NLP has been the widespread lack of confidence estimation in reported results. In this work, we present a framework and simulator for estimating p-values for comparisons between the results of two systems, in order to understand the confidence that one is actually better (i.e. ranked higher) than the other. What has made this difficult in the past is that each system must itself be evaluated by comparison to a gold standard. We define a null hypothesis that each system{'}s metric scores are drawn from the same distribution, using variance found naturally (though rarely reported) in test set items and individual labels on an item (responses) to produce the metric distributions. We create a test set that evenly mixes the responses of the two systems under the assumption the null hypothesis is true. Exploring how to best estimate the true p-value from a single test set under different metrics, tests, and sampling methods, we find that the presence of response variance (from multiple raters or multiple model versions) has a profound impact on p-value estimates for model comparison, and that choice of metric and sampling method is critical to providing statistical guarantees on model comparisons.
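A small simulation of the core idea: under the null hypothesis that the two systems' responses come from the same distribution, repeatedly swap each item's pair of responses and measure how often a metric gap at least as large as the observed one arises by chance. The metric (accuracy) and sample sizes are illustrative.

```python
import random

def p_value(resp_a, resp_b, n_resamples=10_000, seed=0):
    """Paired permutation-style p-value for the accuracy gap between systems."""
    rng = random.Random(seed)
    observed = abs(sum(resp_a) - sum(resp_b)) / len(resp_a)
    extreme = 0
    for _ in range(n_resamples):
        # swap each item's pair of responses with probability 0.5 (null mixing)
        a, b = zip(*((y, x) if rng.random() < 0.5 else (x, y)
                     for x, y in zip(resp_a, resp_b)))
        if abs(sum(a) - sum(b)) / len(a) >= observed:
            extreme += 1
    return extreme / n_resamples

sys_a = [1] * 60 + [0] * 40        # 60% accuracy on 100 items
sys_b = [1] * 52 + [0] * 48        # 52% accuracy on the same items
print(p_value(sys_a, sys_b))
```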
[ "Wein, Shira", "Homan, Christopher", "Aroyo, Lora", "Welty, Chris" ]
Follow the leader(board) with confidence: Estimating p-values from a single test set with item and response variance
findings-acl.196
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.197.bib
https://aclanthology.org/2023.findings-acl.197/
@inproceedings{tang-hardmeier-2023-parallel, title = "Parallel Data Helps Neural Entity Coreference Resolution", author = "Tang, Gongbo and Hardmeier, Christian", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.197", doi = "10.18653/v1/2023.findings-acl.197", pages = "3162--3171", abstract = "Coreference resolution is the task of finding expressions that refer to the same entity in a text. Coreference models are generally trained on monolingual annotated data but annotating coreference is expensive and challenging. Hardmeier et al. (2013) have shown that parallel data contains latent anaphoric knowledge, but it has not been explored in end-to-end neural models yet. In this paper, we propose a simple yet effective model to exploit coreference knowledge from parallel data. In addition to the conventional modules learning coreference from annotations, we introduce an unsupervised module to capture cross-lingual coreference knowledge. Our proposed cross-lingual model achieves consistent improvements, up to 1.74 percentage points, on the OntoNotes 5.0 English dataset using 9 different synthetic parallel datasets. These experimental results confirm that parallel data can provide additional coreference knowledge which is beneficial to coreference resolution tasks.", }
Coreference resolution is the task of finding expressions that refer to the same entity in a text. Coreference models are generally trained on monolingual annotated data but annotating coreference is expensive and challenging. Hardmeier et al. (2013) have shown that parallel data contains latent anaphoric knowledge, but it has not been explored in end-to-end neural models yet. In this paper, we propose a simple yet effective model to exploit coreference knowledge from parallel data. In addition to the conventional modules learning coreference from annotations, we introduce an unsupervised module to capture cross-lingual coreference knowledge. Our proposed cross-lingual model achieves consistent improvements, up to 1.74 percentage points, on the OntoNotes 5.0 English dataset using 9 different synthetic parallel datasets. These experimental results confirm that parallel data can provide additional coreference knowledge which is beneficial to coreference resolution tasks.
[ "Tang, Gongbo", "Hardmeier, Christian" ]
Parallel Data Helps Neural Entity Coreference Resolution
findings-acl.197
Poster
2305.17709
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.198.bib
https://aclanthology.org/2023.findings-acl.198/
@inproceedings{wen-etal-2023-towards, title = "Towards Open-Domain {T}witter User Profile Inference", author = "Wen, Haoyang and Xiao, Zhenxin and Hovy, Eduard and Hauptmann, Alexander", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.198", doi = "10.18653/v1/2023.findings-acl.198", pages = "3172--3188", abstract = "Twitter user profile inference utilizes information from Twitter to predict user attributes (e.g., occupation, location), which is controversial because of its usefulness for downstream applications and its potential to reveal users{'} privacy. Therefore, it is important for researchers to determine the extent of profiling in a safe environment to facilitate proper use and make the public aware of the potential risks. Contrary to existing approaches on limited attributes, we explore open-domain Twitter user profile inference. We conduct a case study where we collect publicly available WikiData public figure profiles and use diverse WikiData predicates for profile inference. After removing sensitive attributes, our data contains over 150K public figure profiles from WikiData, over 50 different attribute predicates, and over 700K attribute values. We further propose a prompt-based generation method, which can infer values that are implicitly mentioned in the Twitter information. Experimental results show that the generation-based approach can infer more comprehensive user profiles than baseline extraction-based methods, but limitations still remain to be applied for real-world use. We also enclose a detailed ethical statement for our data, potential benefits and risks from this work, and our efforts to mitigate the risks.", }
Twitter user profile inference utilizes information from Twitter to predict user attributes (e.g., occupation, location), which is controversial because of its usefulness for downstream applications and its potential to reveal users{'} privacy. Therefore, it is important for researchers to determine the extent of profiling in a safe environment to facilitate proper use and make the public aware of the potential risks. Contrary to existing approaches on limited attributes, we explore open-domain Twitter user profile inference. We conduct a case study where we collect publicly available WikiData public figure profiles and use diverse WikiData predicates for profile inference. After removing sensitive attributes, our data contains over 150K public figure profiles from WikiData, over 50 different attribute predicates, and over 700K attribute values. We further propose a prompt-based generation method, which can infer values that are implicitly mentioned in the Twitter information. Experimental results show that the generation-based approach can infer more comprehensive user profiles than baseline extraction-based methods, but limitations remain before it can be applied in real-world use. We also enclose a detailed ethical statement for our data, the potential benefits and risks of this work, and our efforts to mitigate the risks.
[ "Wen, Haoyang", "Xiao, Zhenxin", "Hovy, Eduard", "Hauptmann, Alex", "er" ]
Towards Open-Domain Twitter User Profile Inference
findings-acl.198
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.199.bib
https://aclanthology.org/2023.findings-acl.199/
@inproceedings{zhuang-riloff-2023-eliciting, title = "Eliciting Affective Events from Language Models by Multiple View Co-prompting", author = "Zhuang, Yuan and Riloff, Ellen", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.199", doi = "10.18653/v1/2023.findings-acl.199", pages = "3189--3201", abstract = "Prior research on affective event classification showed that exploiting weakly labeled data for training can improve model performance. In this work, we propose a simpler and more effective approach for generating training data by automatically acquiring and labeling affective events with Multiple View Co-prompting, which leverages two language model prompts that provide independent views of an event. The approach starts with a modest amount of gold data and prompts pre-trained language models to generate new events. Next, information about the probable affective polarity of each event is collected from two complementary language model prompts and jointly used to assign polarity labels. Experimental results on two datasets show that the newly acquired events improve a state-of-the-art affective event classifier. We also present analyses which show that using multiple views produces polarity labels of higher quality than either view on its own.", }
Prior research on affective event classification showed that exploiting weakly labeled data for training can improve model performance. In this work, we propose a simpler and more effective approach for generating training data by automatically acquiring and labeling affective events with Multiple View Co-prompting, which leverages two language model prompts that provide independent views of an event. The approach starts with a modest amount of gold data and prompts pre-trained language models to generate new events. Next, information about the probable affective polarity of each event is collected from two complementary language model prompts and jointly used to assign polarity labels. Experimental results on two datasets show that the newly acquired events improve a state-of-the-art affective event classifier. We also present analyses which show that using multiple views produces polarity labels of higher quality than either view on its own.
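A loose sketch of combining two prompt views into one polarity label. The prompt wordings, thresholds, and scoring interface are assumptions; a real setup would score continuations with a language model rather than the toy scorer below.

```python
def polarity_from_views(event: str, score) -> str:
    """`score(prompt)` returns an estimated P(positive) in [0, 1] from some LM."""
    view1 = score(f"{event}. I feel good about it.")          # self-report view
    view2 = score(f"People are usually happy when {event}.")  # world-knowledge view
    avg = (view1 + view2) / 2
    if avg > 0.6:
        return "positive"
    if avg < 0.4:
        return "negative"
    return "neutral"     # disagreeing/uncertain views can be filtered out

# toy scorer standing in for a language model
print(polarity_from_views("I passed my exam", lambda prompt: 0.9))
```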
[ "Zhuang, Yuan", "Riloff, Ellen" ]
Eliciting Affective Events from Language Models by Multiple View Co-prompting
findings-acl.199
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.200.bib
https://aclanthology.org/2023.findings-acl.200/
@inproceedings{guo-etal-2023-zeroae, title = "{Z}ero{AE}: Pre-trained Language Model based Autoencoder for Transductive Zero-shot Text Classification", author = "Guo, Kaihao and Yu, Hang and Liao, Cong and Li, Jianguo and Zhang, Haipeng", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.200", doi = "10.18653/v1/2023.findings-acl.200", pages = "3202--3219", abstract = "Many text classification tasks require handling unseen domains with plenty of unlabeled data, thus giving rise to the self-adaption or the so-called transductive zero-shot learning (TZSL) problem. However, current methods based solely on encoders or decoders overlook the possibility that these two modules may promote each other. As a first effort to bridge this gap, we propose an autoencoder named ZeroAE. Specifically, the text is encoded with two separate BERT-based encoders into two disentangled spaces, i.e., label-relevant (for classification) and label-irrelevant respectively. The two latent spaces are then decoded by prompting GPT-2 to recover the text as well as to further generate text with labels in the unseen domains to train the encoder in turn. To better exploit the unlabeled data, a novel indirect uncertainty-aware sampling (IUAS) approach is proposed to train ZeroAE. Extensive experiments show that ZeroAE largely surpasses the SOTA methods by 15.93{\%} and 8.70{\%} on average respectively in the label-partially-unseen and label-fully-unseen scenario. Notably, the label-fully-unseen ZeroAE even possesses superior performance to the label-partially-unseen SOTA methods.", }
Many text classification tasks require handling unseen domains with plenty of unlabeled data, thus giving rise to the self-adaptation or so-called transductive zero-shot learning (TZSL) problem. However, current methods based solely on encoders or decoders overlook the possibility that these two modules may promote each other. As a first effort to bridge this gap, we propose an autoencoder named ZeroAE. Specifically, the text is encoded with two separate BERT-based encoders into two disentangled spaces, i.e., label-relevant (for classification) and label-irrelevant, respectively. The two latent spaces are then decoded by prompting GPT-2 to recover the text as well as to further generate text with labels in the unseen domains to train the encoder in turn. To better exploit the unlabeled data, a novel indirect uncertainty-aware sampling (IUAS) approach is proposed to train ZeroAE. Extensive experiments show that ZeroAE largely surpasses the SOTA methods by 15.93{\%} and 8.70{\%} on average in the label-partially-unseen and label-fully-unseen scenarios, respectively. Notably, the label-fully-unseen ZeroAE even outperforms the label-partially-unseen SOTA methods.
[ "Guo, Kaihao", "Yu, Hang", "Liao, Cong", "Li, Jianguo", "Zhang, Haipeng" ]
ZeroAE: Pre-trained Language Model based Autoencoder for Transductive Zero-shot Text Classification
findings-acl.200
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.201.bib
https://aclanthology.org/2023.findings-acl.201/
@inproceedings{huang-etal-2023-pram, title = "{PRAM}: An End-to-end Prototype-based Representation Alignment Model for Zero-resource Cross-lingual Named Entity Recognition", author = "Huang, Yucheng and Liu, Wenqiang and Zhang, Xianli and Lang, Jun and Gong, Tieliang and Li, Chen", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.201", doi = "10.18653/v1/2023.findings-acl.201", pages = "3220--3233", abstract = "Zero-resource cross-lingual named entity recognition (ZRCL-NER) aims to leverage rich labeled source language data to address the NER problem in the zero-resource target language. Existing methods are built either based on data transfer or representation transfer. However, the former usually leads to additional computation costs, and the latter lacks explicit optimization specific to the NER task. To overcome the above limitations, we propose a novel prototype-based representation alignment model (PRAM) for the challenging ZRCL-NER task. PRAM models the cross-lingual (CL) NER task and transfers knowledge from source languages to target languages in a unified neural network, and performs end-to-end training, avoiding additional computation costs. Moreover, PRAM borrows the CL inference ability of multilingual language models and enhances it with a novel training objective{---}attribution-prediction consistency (APC){---}for explicitly enforcing the entity-level alignment between entity representations and predictions, as well as that across languages using prototypes as bridges. The experimental results show that PRAM significantly outperforms existing state-of-the-art methods, especially in some challenging scenarios.", }
Zero-resource cross-lingual named entity recognition (ZRCL-NER) aims to leverage rich labeled source language data to address the NER problem in the zero-resource target language. Existing methods are built either based on data transfer or representation transfer. However, the former usually leads to additional computation costs, and the latter lacks explicit optimization specific to the NER task. To overcome the above limitations, we propose a novel prototype-based representation alignment model (PRAM) for the challenging ZRCL-NER task. PRAM models the cross-lingual (CL) NER task and transfers knowledge from source languages to target languages in a unified neural network, and performs end-to-end training, avoiding additional computation costs. Moreover, PRAM borrows the CL inference ability of multilingual language models and enhances it with a novel training objective{---}attribution-prediction consistency (APC){---}for explicitly enforcing the entity-level alignment between entity representations and predictions, as well as that across languages using prototypes as bridges. The experimental results show that PRAM significantly outperforms existing state-of-the-art methods, especially in some challenging scenarios.
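A sketch of one common building block consistent with the abstract above: class prototypes as mean entity representations, with predictions made by similarity to the nearest prototype. This is generic prototype-based classification, not PRAM's full attribution-prediction consistency objective.

```python
import torch
import torch.nn.functional as F

def prototypes(reps: torch.Tensor, labels: torch.Tensor, n_classes: int):
    """reps: (n, d) entity-span representations; labels: (n,) class ids.
    Returns (n_classes, d) mean representation per class."""
    return torch.stack([reps[labels == c].mean(dim=0) for c in range(n_classes)])

def proto_logits(reps, protos, tau=0.1):
    # cosine similarity to each prototype, scaled by a temperature
    return F.normalize(reps, dim=-1) @ F.normalize(protos, dim=-1).T / tau

reps = torch.randn(12, 64)
labels = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2])
protos = prototypes(reps, labels, 3)       # prototypes can bridge languages
print(proto_logits(reps, protos).argmax(dim=-1))
```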
[ "Huang, Yucheng", "Liu, Wenqiang", "Zhang, Xianli", "Lang, Jun", "Gong, Tieliang", "Li, Chen" ]
PRAM: An End-to-end Prototype-based Representation Alignment Model for Zero-resource Cross-lingual Named Entity Recognition
findings-acl.201
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.202.bib
https://aclanthology.org/2023.findings-acl.202/
@inproceedings{subramonian-etal-2023-takes, title = "It Takes Two to Tango: Navigating Conceptualizations of {NLP} Tasks and Measurements of Performance", author = "Subramonian, Arjun and Yuan, Xingdi and Daum{\'e} III, Hal and Blodgett, Su Lin", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.202", doi = "10.18653/v1/2023.findings-acl.202", pages = "3234--3279", abstract = "Progress in NLP is increasingly measured through benchmarks; hence, contextualizing progress requires understanding when and why practitioners may disagree about the validity of benchmarks. We develop a taxonomy of disagreement, drawing on tools from measurement modeling, and distinguish between two types of disagreement: 1) how tasks are conceptualized and 2) how measurements of model performance are operationalized. To provide evidence for our taxonomy, we conduct a meta-analysis of relevant literature to understand how NLP tasks are conceptualized, as well as a survey of practitioners about their impressions of different factors that affect benchmark validity. Our meta-analysis and survey across eight tasks, ranging from coreference resolution to question answering, uncover that tasks are generally not clearly and consistently conceptualized and benchmarks suffer from operationalization disagreements. These findings support our proposed taxonomy of disagreement. Finally, based on our taxonomy, we present a framework for constructing benchmarks and documenting their limitations.", }
Progress in NLP is increasingly measured through benchmarks; hence, contextualizing progress requires understanding when and why practitioners may disagree about the validity of benchmarks. We develop a taxonomy of disagreement, drawing on tools from measurement modeling, and distinguish between two types of disagreement: 1) how tasks are conceptualized and 2) how measurements of model performance are operationalized. To provide evidence for our taxonomy, we conduct a meta-analysis of relevant literature to understand how NLP tasks are conceptualized, as well as a survey of practitioners about their impressions of different factors that affect benchmark validity. Our meta-analysis and survey across eight tasks, ranging from coreference resolution to question answering, uncover that tasks are generally not clearly and consistently conceptualized and benchmarks suffer from operationalization disagreements. These findings support our proposed taxonomy of disagreement. Finally, based on our taxonomy, we present a framework for constructing benchmarks and documenting their limitations.
[ "Subramonian, Arjun", "Yuan, Xingdi", "Daum{\\'e} III, Hal", "Blodgett, Su Lin" ]
It Takes Two to Tango: Navigating Conceptualizations of NLP Tasks and Measurements of Performance
findings-acl.202
Poster
2305.09022
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.203.bib
https://aclanthology.org/2023.findings-acl.203/
@inproceedings{zhang-etal-2023-task, title = "Task-adaptive Label Dependency Transfer for Few-shot Named Entity Recognition", author = "Zhang, Shan and Cao, Bin and Zhang, Tianming and Liu, Yuqi and Fan, Jing", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.203", doi = "10.18653/v1/2023.findings-acl.203", pages = "3280--3293", abstract = "Named Entity Recognition (NER), as a crucial subtask in natural language processing (NLP), suffers from limited labeled samples (a.k.a. few-shot). Meta-learning methods are widely used for few-shot NER, but these existing methods overlook the importance of label dependency for NER, resulting in suboptimal performance. However, applying meta-learning methods to label dependency learning faces a special challenge, that is, due to the discrepancy of label sets in different domains, the label dependencies can not be transferred across domains. In this paper, we propose the Task-adaptive Label Dependency Transfer (TLDT) method to make label dependency transferable and effectively adapt to new tasks by a few samples. TLDT improves the existing optimization-based meta-learning methods by learning general initialization and individual parameter update rule for label dependency. Extensive experiments show that TLDT achieves significant improvement over the state-of-the-art methods.", }
Named Entity Recognition (NER), as a crucial subtask in natural language processing (NLP), suffers from limited labeled samples (a.k.a. few-shot). Meta-learning methods are widely used for few-shot NER, but these existing methods overlook the importance of label dependency for NER, resulting in suboptimal performance. However, applying meta-learning methods to label dependency learning faces a special challenge: due to the discrepancy of label sets in different domains, label dependencies cannot be transferred across domains. In this paper, we propose the Task-adaptive Label Dependency Transfer (TLDT) method to make label dependency transferable and effectively adapt to new tasks with a few samples. TLDT improves existing optimization-based meta-learning methods by learning a general initialization and an individual parameter-update rule for label dependency. Extensive experiments show that TLDT achieves significant improvement over the state-of-the-art methods.
[ "Zhang, Shan", "Cao, Bin", "Zhang, Tianming", "Liu, Yuqi", "Fan, Jing" ]
Task-adaptive Label Dependency Transfer for Few-shot Named Entity Recognition
findings-acl.203
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.204.bib
https://aclanthology.org/2023.findings-acl.204/
@inproceedings{zhou-etal-2023-wyweb, title = "{WYWEB}: A {NLP} Evaluation Benchmark For Classical {C}hinese", author = "Zhou, Bo and Chen, Qianglong and Wang, Tianyu and Zhong, Xiaomi and Zhang, Yin", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.204", doi = "10.18653/v1/2023.findings-acl.204", pages = "3294--3319", abstract = "To fully evaluate the overall performance of different NLP models in a given domain, many evaluation benchmarks are proposed, such as GLUE, SuperGLUE and CLUE. The field of natural language understanding has traditionally focused on benchmarks for various tasks in languages such as Chinese, English, and multilingual, however, there has been a lack of attention given to the area of classical Chinese, also known as ``wen yan wen (文言文)'', which has a rich history spanning thousands of years and holds significant cultural and academic value. For the prosperity of the NLP community, in this paper, we introduce the WYWEB evaluation benchmark, which consists of nine NLP tasks in classical Chinese, implementing sentence classification, sequence labeling, reading comprehension, and machine translation. We evaluate the existing pre-trained language models, which are all struggling with this benchmark. We also introduce a number of supplementary datasets and additional tools to help facilitate further progress on classical Chinese NLU. The github repository is \url{https://github.com/baudzhou/WYWEB}.", }
To fully evaluate the overall performance of different NLP models in a given domain, many evaluation benchmarks have been proposed, such as GLUE, SuperGLUE and CLUE. The field of natural language understanding has traditionally focused on benchmarks for various tasks in languages such as Chinese, English, and multilingual; however, there has been a lack of attention given to the area of classical Chinese, also known as ``wen yan wen (文言文)'', which has a rich history spanning thousands of years and holds significant cultural and academic value. For the prosperity of the NLP community, in this paper, we introduce the WYWEB evaluation benchmark, which consists of nine NLP tasks in classical Chinese, covering sentence classification, sequence labeling, reading comprehension, and machine translation. We evaluate existing pre-trained language models, all of which struggle with this benchmark. We also introduce a number of supplementary datasets and additional tools to help facilitate further progress on classical Chinese NLU. The GitHub repository is \url{https://github.com/baudzhou/WYWEB}.
[ "Zhou, Bo", "Chen, Qianglong", "Wang, Tianyu", "Zhong, Xiaomi", "Zhang, Yin" ]
WYWEB: A NLP Evaluation Benchmark For Classical Chinese
findings-acl.204
Poster
2305.14150
[ "https://github.com/baudzhou/wyweb" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.205.bib
https://aclanthology.org/2023.findings-acl.205/
@inproceedings{tang-etal-2023-fused, title = "A Fused {G}romov-{W}asserstein Framework for Unsupervised Knowledge Graph Entity Alignment", author = "Tang, Jianheng and Zhao, Kangfei and Li, Jia", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.205", doi = "10.18653/v1/2023.findings-acl.205", pages = "3320--3334", abstract = "Entity alignment is the task of identifying corresponding entities across different knowledge graphs (KGs). Although recent embedding-based entity alignment methods have shown significant advancements, they still struggle to fully utilize KG structural information. In this paper, we introduce FGWEA, an unsupervised entity alignment framework that leverages the Fused Gromov-Wasserstein (FGW) distance, allowing for a comprehensive comparison of entity semantics and KG structures within a joint optimization framework. To address the computational challenges associated with optimizing FGW, we devise a three-stage progressive optimization algorithm. It starts with a basic semantic embedding matching, proceeds to approximate cross-KG structural and relational similarity matching based on iterative updates of high-confidence entity links, and ultimately culminates in a global structural comparison between KGs. We perform extensive experiments on four entity alignment datasets covering 14 distinct KGs across five languages. Without any supervision or hyper-parameter tuning, FGWEA surpasses 21 competitive baselines, including cutting-edge supervised entity alignment methods. Our code is available at \url{https://github.com/squareRoot3/FusedGW-Entity-Alignment}.", }
Entity alignment is the task of identifying corresponding entities across different knowledge graphs (KGs). Although recent embedding-based entity alignment methods have shown significant advancements, they still struggle to fully utilize KG structural information. In this paper, we introduce FGWEA, an unsupervised entity alignment framework that leverages the Fused Gromov-Wasserstein (FGW) distance, allowing for a comprehensive comparison of entity semantics and KG structures within a joint optimization framework. To address the computational challenges associated with optimizing FGW, we devise a three-stage progressive optimization algorithm. It starts with a basic semantic embedding matching, proceeds to approximate cross-KG structural and relational similarity matching based on iterative updates of high-confidence entity links, and ultimately culminates in a global structural comparison between KGs. We perform extensive experiments on four entity alignment datasets covering 14 distinct KGs across five languages. Without any supervision or hyper-parameter tuning, FGWEA surpasses 21 competitive baselines, including cutting-edge supervised entity alignment methods. Our code is available at \url{https://github.com/squareRoot3/FusedGW-Entity-Alignment}.
[ "Tang, Jianheng", "Zhao, Kangfei", "Li, Jia" ]
A Fused Gromov-Wasserstein Framework for Unsupervised Knowledge Graph Entity Alignment
findings-acl.205
Poster
2305.06574
[ "https://github.com/squareroot3/fusedgw-entity-alignment" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.206.bib
https://aclanthology.org/2023.findings-acl.206/
@inproceedings{ha-etal-2023-two, title = "Two Examples are Better than One: Context Regularization for Gradient-based Prompt Tuning", author = "Ha, Hyeonmin and Jung, Soyoung and Park, Jinsol and Seo, Minjoon and Hwang, Seung-won and Chun, Byung-Gon", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.206", doi = "10.18653/v1/2023.findings-acl.206", pages = "3335--3350", abstract = "Prompting has gained tremendous attention as an efficient method for the adaptation of large-scale language models. However, prompts often act against human intuition and yield unstable performance, which has motivated methods that automatically find effective prompts. One popular approach is gradient-based search, which iteratively updates a (randomly) initialized prompt towards the optimal one with the guide of gradients. We propose a novel regularization method, CoRe, for gradient-based prompt tuning techniques, which guides a prompt to produce a task context properly. CoRe realizes two regularization effects {---} context attuning and context filtering {---} that improve prediction performance in a zero-shot in-context learning setting where a model makes inferences only with the prompt tuned by CoRe, without any demonstration examples for in-context learning. Context attuning guides the context generated by the input and the tuned prompt toward embedding the appropriate context for the task. In our theoretical analysis, regularizing the context extends to improving zero-shot in-context learning performance. Context filtering steers the prompt to select only the task-related context so that context attuning solely focuses on creating and sending the right task context. We evaluate CoRe on natural language understanding datasets and two large language models, GPT2-XL and GPT-J. Our training scheme shows performance improvements of up to 11.9{\%} on GPT2-XL, and up to 6.3{\%} on GPT-J in zero-shot settings.", }
Prompting has gained tremendous attention as an efficient method for the adaptation of large-scale language models. However, prompts often act against human intuition and yield unstable performance, which has motivated methods that automatically find effective prompts. One popular approach is gradient-based search, which iteratively updates a (randomly) initialized prompt towards the optimal one with the guide of gradients. We propose a novel regularization method, CoRe, for gradient-based prompt tuning techniques, which guides a prompt to produce a task context properly. CoRe realizes two regularization effects, context attuning and context filtering, that improve prediction performance in a zero-shot in-context learning setting where a model makes inferences only with the prompt tuned by CoRe, without any demonstration examples for in-context learning. Context attuning guides the context generated by the input and the tuned prompt toward embedding the appropriate context for the task. In our theoretical analysis, regularizing the context extends to improving zero-shot in-context learning performance. Context filtering steers the prompt to select only the task-related context so that context attuning solely focuses on creating and sending the right task context. We evaluate CoRe on natural language understanding datasets and two large language models, GPT2-XL and GPT-J. Our training scheme shows performance improvements of up to 11.9% on GPT2-XL, and up to 6.3% on GPT-J in zero-shot settings.
[ "Ha, Hyeonmin", "Jung, Soyoung", "Park, Jinsol", "Seo, Minjoon", "Hwang, Seung-won", "Chun, Byung-Gon" ]
Two Examples are Better than One: Context Regularization for Gradient-based Prompt Tuning
findings-acl.206
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.207.bib
https://aclanthology.org/2023.findings-acl.207/
@inproceedings{wiemerslage-etal-2023-investigation, title = "An Investigation of Noise in Morphological Inflection", author = "Wiemerslage, Adam and Yang, Changbing and Nicolai, Garrett and Silfverberg, Miikka and Kann, Katharina", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.207", doi = "10.18653/v1/2023.findings-acl.207", pages = "3351--3365", abstract = "With a growing focus on morphological inflection systems for languages where high-quality data is scarce, training data noise is a serious but so far largely ignored concern. We aim to close this gap by investigating the types of noise encountered within a pipeline for truly unsupervised morphological paradigm completion and its impact on morphological inflection systems: First, we propose an error taxonomy and annotation pipeline for inflection training data. Then, we compare the effect of different types of noise on multiple state-of-the-art inflection models. Finally, we propose a novel character-level masked language modeling (CMLM) pretraining objective and explore its impact on the models{'} resistance to noise. Our experiments show that various architectures are impacted differently by separate types of noise, but encoder-decoders tend to be more robust to noise than models trained with a copy bias. CMLM pretraining helps transformers, but has a lower impact on LSTMs.", }
With a growing focus on morphological inflection systems for languages where high-quality data is scarce, training data noise is a serious but so far largely ignored concern. We aim to close this gap by investigating the types of noise encountered within a pipeline for truly unsupervised morphological paradigm completion and its impact on morphological inflection systems: First, we propose an error taxonomy and annotation pipeline for inflection training data. Then, we compare the effect of different types of noise on multiple state-of-the-art inflection models. Finally, we propose a novel character-level masked language modeling (CMLM) pretraining objective and explore its impact on the models' resistance to noise. Our experiments show that various architectures are impacted differently by separate types of noise, but encoder-decoders tend to be more robust to noise than models trained with a copy bias. CMLM pretraining helps transformers, but has a lower impact on LSTMs.
[ "Wiemerslage, Adam", "Yang, Changbing", "Nicolai, Garrett", "Silfverberg, Miikka", "Kann, Katharina" ]
An Investigation of Noise in Morphological Inflection
findings-acl.207
Poster
2305.16581
[ "https://github.com/adamits/morphological-inflection-noise" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.208.bib
https://aclanthology.org/2023.findings-acl.208/
@inproceedings{li-etal-2023-graph, title = "Graph Reasoning for Question Answering with Triplet Retrieval", author = "Li, Shiyang and Gao, Yifan and Jiang, Haoming and Yin, Qingyu and Li, Zheng and Yan, Xifeng and Zhang, Chao and Yin, Bing", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.208", doi = "10.18653/v1/2023.findings-acl.208", pages = "3366--3375", abstract = "Answering complex questions often requires reasoning over knowledge graphs (KGs). State-of-the-art methods often utilize entities in questions to retrieve local subgraphs, which are then fed into a KG encoder, e.g., graph neural networks (GNNs), to model their local structures, and integrated into language models for question answering. However, this paradigm constrains retrieved knowledge to local subgraphs and discards more diverse triplets buried in KGs that are disconnected but useful for question answering. In this paper, we propose a simple yet effective method to first retrieve the most relevant triplets from KGs and then rerank them, which are then concatenated with questions to be fed into language models. Extensive results on both CommonsenseQA and OpenbookQA datasets show that our method can outperform the state of the art by up to 4.6{\%} absolute accuracy.", }
Answering complex questions often requires reasoning over knowledge graphs (KGs). State-of-the-art methods often utilize entities in questions to retrieve local subgraphs, which are then fed into a KG encoder, e.g., graph neural networks (GNNs), to model their local structures, and integrated into language models for question answering. However, this paradigm constrains retrieved knowledge to local subgraphs and discards more diverse triplets buried in KGs that are disconnected but useful for question answering. In this paper, we propose a simple yet effective method to first retrieve the most relevant triplets from KGs and then rerank them, which are then concatenated with questions to be fed into language models. Extensive results on both CommonsenseQA and OpenbookQA datasets show that our method can outperform the state of the art by up to 4.6% absolute accuracy.
[ "Li, Shiyang", "Gao, Yifan", "Jiang, Haoming", "Yin, Qingyu", "Li, Zheng", "Yan, Xifeng", "Zhang, Chao", "Yin, Bing" ]
Graph Reasoning for Question Answering with Triplet Retrieval
findings-acl.208
Poster
2305.18742
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.209.bib
https://aclanthology.org/2023.findings-acl.209/
@inproceedings{chistova-2023-end, title = "End-to-End Argument Mining over Varying Rhetorical Structures", author = "Chistova, Elena", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.209", doi = "10.18653/v1/2023.findings-acl.209", pages = "3376--3391", abstract = "Rhetorical Structure Theory implies no single discourse interpretation of a text, and the limitations of RST parsers further exacerbate inconsistent parsing of similar structures. Therefore, it is important to take into account that the same argumentative structure can be found in semantically similar texts with varying rhetorical structures. In this work, the differences between paraphrases within the same argument scheme are evaluated from a rhetorical perspective. The study proposes a deep dependency parsing model to assess the connection between rhetorical and argument structures. The model utilizes rhetorical relations; RST structures of paraphrases serve as training data augmentations. The method allows for end-to-end argumentation analysis using a rhetorical tree instead of a word sequence. It is evaluated on the bilingual Microtexts corpus, and the first results on fully-fledged argument parsing for the Russian version of the corpus are reported. The results suggest that argument mining can benefit from multiple variants of discourse structure.", }
Rhetorical Structure Theory implies no single discourse interpretation of a text, and the limitations of RST parsers further exacerbate inconsistent parsing of similar structures. Therefore, it is important to take into account that the same argumentative structure can be found in semantically similar texts with varying rhetorical structures. In this work, the differences between paraphrases within the same argument scheme are evaluated from a rhetorical perspective. The study proposes a deep dependency parsing model to assess the connection between rhetorical and argument structures. The model utilizes rhetorical relations; RST structures of paraphrases serve as training data augmentations. The method allows for end-to-end argumentation analysis using a rhetorical tree instead of a word sequence. It is evaluated on the bilingual Microtexts corpus, and the first results on fully-fledged argument parsing for the Russian version of the corpus are reported. The results suggest that argument mining can benefit from multiple variants of discourse structure.
[ "Chistova, Elena" ]
End-to-End Argument Mining over Varying Rhetorical Structures
findings-acl.209
Poster
2401.11218
[ "https://github.com/tchewik/e2e-microtexts" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.210.bib
https://aclanthology.org/2023.findings-acl.210/
@inproceedings{logeswaran-etal-2023-unsupervised, title = "Unsupervised Task Graph Generation from Instructional Video Transcripts", author = "Logeswaran, Lajanugen and Sohn, Sungryull and Jang, Yunseok and Lee, Moontae and Lee, Honglak", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.210", doi = "10.18653/v1/2023.findings-acl.210", pages = "3392--3406", abstract = "This work explores the problem of generating task graphs of real-world activities. Different from prior formulations, we consider a setting where text transcripts of instructional videos demonstrating a real-world activity (e.g., making coffee) are provided and the goal is to identify the key steps relevant to the task as well as the dependency relationship between these key steps. We propose a novel task graph generation approach that combines the reasoning capabilities of instruction-tuned language models with clustering and ranking components to generate accurate task graphs in a completely unsupervised manner. We show that the proposed approach generates more accurate task graphs compared to a supervised learning approach on tasks from the ProceL and CrossTask datasets.", }
This work explores the problem of generating task graphs of real-world activities. Different from prior formulations, we consider a setting where text transcripts of instructional videos demonstrating a real-world activity (e.g., making coffee) are provided and the goal is to identify the key steps relevant to the task as well as the dependency relationship between these key steps. We propose a novel task graph generation approach that combines the reasoning capabilities of instruction-tuned language models with clustering and ranking components to generate accurate task graphs in a completely unsupervised manner. We show that the proposed approach generates more accurate task graphs compared to a supervised learning approach on tasks from the ProceL and CrossTask datasets.
[ "Logeswaran, Lajanugen", "Sohn, Sungryull", "Jang, Yunseok", "Lee, Moontae", "Lee, Honglak" ]
Unsupervised Task Graph Generation from Instructional Video Transcripts
findings-acl.210
Poster
2302.09173
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.211.bib
https://aclanthology.org/2023.findings-acl.211/
@inproceedings{yang-etal-2023-exploiting, title = "Exploiting Hierarchically Structured Categories in Fine-grained {C}hinese Named Entity Recognition", author = "Yang, Jiuding and Luo, Jinwen and Guo, Weidong and Niu, Di and Xu, Yu", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.211", doi = "10.18653/v1/2023.findings-acl.211", pages = "3407--3421", abstract = "Chinese Named Entity Recognition (CNER) is a widely used technology in various applications. While recent studies have focused on utilizing additional information about the Chinese language and its characters to enhance CNER performance, this paper focuses on a specific aspect of CNER known as fine-grained CNER (FG-CNER). FG-CNER involves the use of hierarchical, fine-grained categories (e.g., Person-MovieStar) to label named entities. To promote research in this area, we introduce the FiNE dataset, a dataset for FG-CNER consisting of 30,000 sentences from various domains and containing 67,651 entities in 54 fine-grained flattened hierarchical categories. Additionally, we propose SoftFiNE, a novel approach for FG-CNER that utilizes a custom-designed relevance scoring function based on label structures to learn the potential relevance between different flattened hierarchical labels. Our experimental results demonstrate that the proposed SoftFiNE method outperforms the state-of-the-art baselines on the FiNE dataset. Furthermore, we conduct extensive experiments on three other datasets, including OntoNotes 4.0, Weibo, and Resume, where SoftFiNE achieves state-of-the-art performance on all three datasets.", }
Chinese Named Entity Recognition (CNER) is a widely used technology in various applications. While recent studies have focused on utilizing additional information about the Chinese language and its characters to enhance CNER performance, this paper focuses on a specific aspect of CNER known as fine-grained CNER (FG-CNER). FG-CNER involves the use of hierarchical, fine-grained categories (e.g., Person-MovieStar) to label named entities. To promote research in this area, we introduce the FiNE dataset, a dataset for FG-CNER consisting of 30,000 sentences from various domains and containing 67,651 entities in 54 fine-grained flattened hierarchical categories. Additionally, we propose SoftFiNE, a novel approach for FG-CNER that utilizes a custom-designed relevance scoring function based on label structures to learn the potential relevance between different flattened hierarchical labels. Our experimental results demonstrate that the proposed SoftFiNE method outperforms the state-of-the-art baselines on the FiNE dataset. Furthermore, we conduct extensive experiments on three other datasets, including OntoNotes 4.0, Weibo, and Resume, where SoftFiNE achieves state-of-the-art performance on all three datasets.
[ "Yang, Jiuding", "Luo, Jinwen", "Guo, Weidong", "Niu, Di", "Xu, Yu" ]
Exploiting Hierarchically Structured Categories in Fine-grained Chinese Named Entity Recognition
findings-acl.211
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.212.bib
https://aclanthology.org/2023.findings-acl.212/
@inproceedings{yu-rieser-2023-adversarial, title = "Adversarial Textual Robustness on Visual Dialog", author = "Yu, Lu and Rieser, Verena", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.212", doi = "10.18653/v1/2023.findings-acl.212", pages = "3422--3438", abstract = "Adversarial robustness evaluates the worst-case performance scenario of a machine learning model to ensure its safety and reliability. For example, it covers cases where the user input contains a minimal change, e.g., a synonym, that causes a previously correct model to return a wrong answer. Using this scenario, this study is the first to investigate the robustness of visually grounded dialog models against textual attacks. We first aim to understand how multimodal input components contribute to model robustness. Our results show that models which encode dialog history are more robust by providing redundant information. This is in contrast to prior work which finds that dialog history is negligible for model performance on this task. We also evaluate how to generate adversarial test examples which successfully fool the model but remain undetected by the user/software designer. Our analysis shows that both the textual and the visual context are important for generating plausible attacks.", }
Adversarial robustness evaluates the worst-case performance scenario of a machine learning model to ensure its safety and reliability. For example, it covers cases where the user input contains a minimal change, e.g., a synonym, that causes a previously correct model to return a wrong answer. Using this scenario, this study is the first to investigate the robustness of visually grounded dialog models against textual attacks. We first aim to understand how multimodal input components contribute to model robustness. Our results show that models which encode dialog history are more robust by providing redundant information. This is in contrast to prior work which finds that dialog history is negligible for model performance on this task. We also evaluate how to generate adversarial test examples which successfully fool the model but remain undetected by the user/software designer. Our analysis shows that both the textual and the visual context are important for generating plausible attacks.
[ "Yu, Lu", "Rieser, Verena" ]
Adversarial Textual Robustness on Visual Dialog
findings-acl.212
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.213.bib
https://aclanthology.org/2023.findings-acl.213/
@inproceedings{he-etal-2023-language, title = "Language Model Analysis for Ontology Subsumption Inference", author = "He, Yuan and Chen, Jiaoyan and Jimenez-Ruiz, Ernesto and Dong, Hang and Horrocks, Ian", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.213", doi = "10.18653/v1/2023.findings-acl.213", pages = "3439--3453", abstract = "Investigating whether pre-trained language models (LMs) can function as knowledge bases (KBs) has raised wide research interests recently. However, existing works focus on simple, triple-based, relational KBs, but omit more sophisticated, logic-based, conceptualised KBs such as OWL ontologies. To investigate an LM{'}s knowledge of ontologies, we propose OntoLAMA, a set of inference-based probing tasks and datasets from ontology subsumption axioms involving both atomic and complex concepts. We conduct extensive experiments on ontologies of different domains and scales, and our results demonstrate that LMs encode relatively less background knowledge of Subsumption Inference (SI) than traditional Natural Language Inference (NLI) but can improve on SI significantly when a small number of samples are given. We will open-source our code and datasets.", }
Investigating whether pre-trained language models (LMs) can function as knowledge bases (KBs) has raised wide research interests recently. However, existing works focus on simple, triple-based, relational KBs, but omit more sophisticated, logic-based, conceptualised KBs such as OWL ontologies. To investigate an LM's knowledge of ontologies, we propose OntoLAMA, a set of inference-based probing tasks and datasets from ontology subsumption axioms involving both atomic and complex concepts. We conduct extensive experiments on ontologies of different domains and scales, and our results demonstrate that LMs encode relatively less background knowledge of Subsumption Inference (SI) than traditional Natural Language Inference (NLI) but can improve on SI significantly when a small number of samples are given. We will open-source our code and datasets.
[ "He, Yuan", "Chen, Jiaoyan", "Jimenez-Ruiz, Ernesto", "Dong, Hang", "Horrocks, Ian" ]
Language Model Analysis for Ontology Subsumption Inference
findings-acl.213
Poster
2302.06761
[ "https://github.com/KRR-Oxford/DeepOnto" ]
https://huggingface.co/papers/2302.06761
0
0
0
5
1
[]
[ "krr-oxford/OntoLAMA" ]
[]
https://aclanthology.org/2023.findings-acl.214.bib
https://aclanthology.org/2023.findings-acl.214/
@inproceedings{cui-chen-2023-exploring, title = "Exploring Automatically Perturbed Natural Language Explanations in Relation Extraction", author = "Cui, Wanyun and Chen, Xingran", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.214", doi = "10.18653/v1/2023.findings-acl.214", pages = "3454--3467", abstract = "Previous research has demonstrated that natural language explanations provide valuable inductive biases that guide models, thereby improving the generalization ability and data efficiency. In this paper, we undertake a systematic examination of the effectiveness of these explanations. Remarkably, we find that corrupted explanations with diminished inductive biases can achieve competitive or superior performance compared to the original explanations. Our findings furnish novel insights into the characteristics of natural language explanations in the following ways: (1) the impact of explanations varies across different training styles and datasets, with previously believed improvements primarily observed in frozen language models. (2) While previous research has attributed the effect of explanations solely to their inductive biases, our study shows that the effect persists even when the explanations are completely corrupted. We propose that the main effect is due to the provision of additional context space. (3) Utilizing the proposed automatic perturbed context, we were able to attain comparable results to annotated explanations, but with a significant increase in computational efficiency, 20-30 times faster.", }
Previous research has demonstrated that natural language explanations provide valuable inductive biases that guide models, thereby improving the generalization ability and data efficiency. In this paper, we undertake a systematic examination of the effectiveness of these explanations. Remarkably, we find that corrupted explanations with diminished inductive biases can achieve competitive or superior performance compared to the original explanations. Our findings furnish novel insights into the characteristics of natural language explanations in the following ways: (1) the impact of explanations varies across different training styles and datasets, with previously believed improvements primarily observed in frozen language models. (2) While previous research has attributed the effect of explanations solely to their inductive biases, our study shows that the effect persists even when the explanations are completely corrupted. We propose that the main effect is due to the provision of additional context space. (3) Utilizing the proposed automatic perturbed context, we were able to attain comparable results to annotated explanations, but with a significant increase in computational efficiency, 20-30 times faster.
[ "Cui, Wanyun", "Chen, Xingran" ]
Exploring Automatically Perturbed Natural Language Explanations in Relation Extraction
findings-acl.214
Poster
2305.15520
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.215.bib
https://aclanthology.org/2023.findings-acl.215/
@inproceedings{aralikatte-etal-2023-varta, title = "Varta: A Large-Scale Headline-Generation Dataset for {I}ndic Languages", author = "Aralikatte, Rahul and Cheng, Ziling and Doddapaneni, Sumanth and Cheung, Jackie Chi Kit", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.215", doi = "10.18653/v1/2023.findings-acl.215", pages = "3468--3492", abstract = "We present Varta, a large-scale multilingual dataset for headline generation in Indic languages. This dataset includes more than 41 million pairs of headlines and articles in 14 different Indic languages (and English), which come from a variety of high-quality news sources. To the best of our knowledge, this is the largest collection of curated news articles for Indic languages currently available. We use the collected data in a series of experiments to answer important questions related to Indic NLP and multilinguality research in general. We show that the dataset is challenging even for state-of-the-art abstractive models and that they perform only slightly better than extractive baselines. Owing to its size, we also show that the dataset can be used to pre-train strong language models that outperform competitive baselines in both NLU and NLG benchmarks.", }
We present Varta, a large-scale multilingual dataset for headline generation in Indic languages. This dataset includes more than 41 million pairs of headlines and articles in 14 different Indic languages (and English), which come from a variety of high-quality news sources. To the best of our knowledge, this is the largest collection of curated news articles for Indic languages currently available. We use the collected data in a series of experiments to answer important questions related to Indic NLP and multilinguality research in general. We show that the dataset is challenging even for state-of-the-art abstractive models and that they perform only slightly better than extractive baselines. Owing to its size, we also show that the dataset can be used to pre-train strong language models that outperform competitive baselines in both NLU and NLG benchmarks.
[ "Aralikatte, Rahul", "Cheng, Ziling", "Doddapaneni, Sumanth", "Cheung, Jackie Chi Kit" ]
Varta: A Large-Scale Headline-Generation Dataset for Indic Languages
findings-acl.215
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.216.bib
https://aclanthology.org/2023.findings-acl.216/
@inproceedings{wan-etal-2023-better, title = "Better Zero-Shot Reasoning with Self-Adaptive Prompting", author = "Wan, Xingchen and Sun, Ruoxi and Dai, Hanjun and Arik, Sercan and Pfister, Tomas", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.216", doi = "10.18653/v1/2023.findings-acl.216", pages = "3493--3514", abstract = "Modern large language models (LLMs) have demonstrated impressive capabilities at sophisticated tasks, often through step-by-step reasoning similar to humans. This is made possible by their strong few- and zero-shot abilities {--} they can effectively learn from a handful of handcrafted, completed responses ({``}in-context examples{''}), or are prompted to reason spontaneously through specially designed triggers. Nonetheless, some limitations have been observed. First, performance in the few-shot setting is sensitive to the choice of the examples, whose design requires significant human effort. Moreover, given the diverse downstream tasks of LLMs, it may be difficult or laborious to handcraft per-task labels. Second, while the zero-shot setting does not require handcrafting, its performance is limited due to the lack of guidance to the LLMs. To address these limitations, we propose Consistency-based Self-adaptive Prompting (COSP), a novel prompt design method for LLMs. Requiring neither handcrafted responses nor ground-truth labels, COSP selects and builds the set of examples from the LLM zero-shot outputs via carefully designed criteria combining consistency, diversity and repetition. In the zero-shot setting for three different LLMs, we show that using only LLM predictions, COSP significantly improves performance up to 15{\%} compared to zero-shot baselines and matches or exceeds few-shot baselines at a range of reasoning tasks.", }
Modern large language models (LLMs) have demonstrated impressive capabilities at sophisticated tasks, often through step-by-step reasoning similar to humans. This is made possible by their strong few- and zero-shot abilities – they can effectively learn from a handful of handcrafted, completed responses ("in-context examples"), or are prompted to reason spontaneously through specially designed triggers. Nonetheless, some limitations have been observed. First, performance in the few-shot setting is sensitive to the choice of the examples, whose design requires significant human effort. Moreover, given the diverse downstream tasks of LLMs, it may be difficult or laborious to handcraft per-task labels. Second, while the zero-shot setting does not require handcrafting, its performance is limited due to the lack of guidance to the LLMs. To address these limitations, we propose Consistency-based Self-adaptive Prompting (COSP), a novel prompt design method for LLMs. Requiring neither handcrafted responses nor ground-truth labels, COSP selects and builds the set of examples from the LLM zero-shot outputs via carefully designed criteria combining consistency, diversity and repetition. In the zero-shot setting for three different LLMs, we show that using only LLM predictions, COSP significantly improves performance up to 15% compared to zero-shot baselines and matches or exceeds few-shot baselines at a range of reasoning tasks.
[ "Wan, Xingchen", "Sun, Ruoxi", "Dai, Hanjun", "Arik, Sercan", "Pfister, Tomas" ]
Better Zero-Shot Reasoning with Self-Adaptive Prompting
findings-acl.216
Poster
2305.14106
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.217.bib
https://aclanthology.org/2023.findings-acl.217/
@inproceedings{long-etal-2023-multimodal, title = "Multimodal Recommendation Dialog with Subjective Preference: A New Challenge and Benchmark", author = "Long, Yuxing and Hui, Binyuan and Yuan, Caixia and Huang, Fei and Li, Yongbin and Wang, Xiaojie", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.217", doi = "10.18653/v1/2023.findings-acl.217", pages = "3515--3533", abstract = "Existing multimodal task-oriented dialog data fails to demonstrate the diverse expressions of user subjective preferences and recommendation acts in the real-life shopping scenario. This paper introduces a new dataset SURE (Multimodal Recommendation Dialog with Subjective Preference), which contains 12K shopping dialogs in complex store scenes. The data is built in two phases with human annotations to ensure quality and diversity. SURE is well-annotated with subjective preferences and recommendation acts proposed by sales experts. A comprehensive analysis is given to reveal the distinguishing features of SURE. Three benchmark tasks are then proposed on the data to evaluate the capability of multimodal recommendation agents. Based on SURE, we propose a baseline model, powered by a state-of-the-art multimodal model, for these tasks.", }
Existing multimodal task-oriented dialog data fails to demonstrate the diverse expressions of user subjective preferences and recommendation acts in the real-life shopping scenario. This paper introduces a new dataset SURE (Multimodal Recommendation Dialog with Subjective Preference), which contains 12K shopping dialogs in complex store scenes. The data is built in two phases with human annotations to ensure quality and diversity. SURE is well-annotated with subjective preferences and recommendation acts proposed by sales experts. A comprehensive analysis is given to reveal the distinguishing features of SURE. Three benchmark tasks are then proposed on the data to evaluate the capability of multimodal recommendation agents. Based on SURE, we propose a baseline model, powered by a state-of-the-art multimodal model, for these tasks.
[ "Long, Yuxing", "Hui, Binyuan", "Yuan, Caixia", "Huang, Fei", "Li, Yongbin", "Wang, Xiaojie" ]
Multimodal Recommendation Dialog with Subjective Preference: A New Challenge and Benchmark
findings-acl.217
Poster
2305.18212
[ "https://github.com/LYX0501/SURE" ]
https://huggingface.co/papers/2305.18212
0
0
0
6
1
[]
[]
[]
https://aclanthology.org/2023.findings-acl.218.bib
https://aclanthology.org/2023.findings-acl.218/
@inproceedings{wijesiriwardene-etal-2023-analogical, title = "{ANALOGICAL} - A Novel Benchmark for Long Text Analogy Evaluation in Large Language Models", author = "Wijesiriwardene, Thilini and Wickramarachchi, Ruwan and Gajera, Bimal and Gowaikar, Shreeyash and Gupta, Chandan and Chadha, Aman and Reganti, Aishwarya Naresh and Sheth, Amit and Das, Amitava", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.218", doi = "10.18653/v1/2023.findings-acl.218", pages = "3534--3549", abstract = "Over the past decade, analogies, in the form of word-level analogies, have played a significant role as an intrinsic measure of evaluating the quality of word embedding methods such as word2vec. Modern large language models (LLMs), however, are primarily evaluated on extrinsic measures based on benchmarks such as GLUE and SuperGLUE, and there are only a few investigations on whether LLMs can draw analogies between long texts. In this paper, we present ANALOGICAL, a new benchmark to intrinsically evaluate LLMs across a taxonomy of analogies of long text with six levels of complexity {--} (i) word, (ii) word vs. sentence, (iii) syntactic, (iv) negation, (v) entailment, and (vi) metaphor. Using thirteen datasets and three different distance measures, we evaluate the abilities of eight LLMs in identifying analogical pairs in the semantic vector space. Our evaluation finds that it is increasingly challenging for LLMs to identify analogies when going up the analogy taxonomy.", }
Over the past decade, analogies, in the form of word-level analogies, have played a significant role as an intrinsic measure of evaluating the quality of word embedding methods such as word2vec. Modern large language models (LLMs), however, are primarily evaluated on extrinsic measures based on benchmarks such as GLUE and SuperGLUE, and there are only a few investigations on whether LLMs can draw analogies between long texts. In this paper, we present ANALOGICAL, a new benchmark to intrinsically evaluate LLMs across a taxonomy of analogies of long text with six levels of complexity {--} (i) word, (ii) word vs. sentence, (iii) syntactic, (iv) negation, (v) entailment, and (vi) metaphor. Using thirteen datasets and three different distance measures, we evaluate the abilities of eight LLMs in identifying analogical pairs in the semantic vector space. Our evaluation finds that it is increasingly challenging for LLMs to identify analogies when going up the analogy taxonomy.
[ "Wijesiriwardene, Thilini", "Wickramarachchi, Ruwan", "Gajera, Bimal", "Gowaikar, Shreeyash", "Gupta, Ch", "an", "Chadha, Aman", "Reganti, Aishwarya Naresh", "Sheth, Amit", "Das, Amitava" ]
ANALOGICAL - A Novel Benchmark for Long Text Analogy Evaluation in Large Language Models
findings-acl.218
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.219.bib
https://aclanthology.org/2023.findings-acl.219/
@inproceedings{sharma-etal-2023-financial, title = "Financial Numeric Extreme Labelling: A dataset and benchmarking", author = "Sharma, Soumya and Khatuya, Subhendu and Hegde, Manjunath and Shaikh, Afreen and Dasgupta, Koustuv and Goyal, Pawan and Ganguly, Niloy", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.219", doi = "10.18653/v1/2023.findings-acl.219", pages = "3550--3561", abstract = "The U.S. Securities and Exchange Commission (SEC) mandates all public companies to file periodic financial statements that should contain numerals annotated with a particular label from a taxonomy. In this paper, we formulate the task of automating the assignment of a label to a particular numeral span in a sentence from an extremely large label set. Towards this task, we release a dataset, Financial Numeric Extreme Labelling (FNXL), annotated with 2,794 labels. We benchmark performance on the FNXL dataset by formulating the task as (a) a sequence labelling problem and (b) a pipeline with span extraction followed by Extreme Classification. Although the two approaches perform comparably, the pipeline solution provides a slight edge for the least frequent labels.", }
The U.S. Securities and Exchange Commission (SEC) mandates all public companies to file periodic financial statements that should contain numerals annotated with a particular label from a taxonomy. In this paper, we formulate the task of automating the assignment of a label to a particular numeral span in a sentence from an extremely large label set. Towards this task, we release a dataset, Financial Numeric Extreme Labelling (FNXL), annotated with 2,794 labels. We benchmark performance on the FNXL dataset by formulating the task as (a) a sequence labelling problem and (b) a pipeline with span extraction followed by Extreme Classification. Although the two approaches perform comparably, the pipeline solution provides a slight edge for the least frequent labels.
[ "Sharma, Soumya", "Khatuya, Subhendu", "Hegde, Manjunath", "Shaikh, Afreen", "Dasgupta, Koustuv", "Goyal, Pawan", "Ganguly, Niloy" ]
Financial Numeric Extreme Labelling: A dataset and benchmarking
findings-acl.219
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.220.bib
https://aclanthology.org/2023.findings-acl.220/
@inproceedings{aharoni-etal-2023-multilingual, title = "Multilingual Summarization with Factual Consistency Evaluation", author = "Aharoni, Roee and Narayan, Shashi and Maynez, Joshua and Herzig, Jonathan and Clark, Elizabeth and Lapata, Mirella", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.220", doi = "10.18653/v1/2023.findings-acl.220", pages = "3562--3591", abstract = "Abstractive summarization has enjoyed renewed interest in recent years, thanks to pre-trained language models and the availability of large-scale datasets. Despite promising results, current models still suffer from generating factually inconsistent summaries, reducing their utility for real-world application. Several recent efforts attempt to address this by devising models that automatically detect factual inconsistencies in machine generated summaries. However, they focus exclusively on English, a language with abundant resources. In this work, we leverage factual consistency evaluation models to improve \textit{multilingual} summarization. We explore two intuitive approaches to mitigate hallucinations based on the signal provided by a multilingual NLI model, namely data filtering and controlled generation. Experimental results in the 45 languages from the XLSum dataset show gains over strong baselines in both automatic and human evaluation. We release models and human judgements of summaries to foster progress towards more factually consistent multilingual summarization.", }
Abstractive summarization has enjoyed renewed interest in recent years, thanks to pre-trained language models and the availability of large-scale datasets. Despite promising results, current models still suffer from generating factually inconsistent summaries, reducing their utility for real-world application. Several recent efforts attempt to address this by devising models that automatically detect factual inconsistencies in machine generated summaries. However, they focus exclusively on English, a language with abundant resources. In this work, we leverage factual consistency evaluation models to improve multilingual summarization. We explore two intuitive approaches to mitigate hallucinations based on the signal provided by a multilingual NLI model, namely data filtering and controlled generation. Experimental results in the 45 languages from the XLSum dataset show gains over strong baselines in both automatic and human evaluation. We release models and human judgements of summaries to foster progress towards more factually consistent multilingual summarization.
[ "Aharoni, Roee", "Narayan, Shashi", "Maynez, Joshua", "Herzig, Jonathan", "Clark, Elizabeth", "Lapata, Mirella" ]
Multilingual Summarization with Factual Consistency Evaluation
findings-acl.220
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.221.bib
https://aclanthology.org/2023.findings-acl.221/
@inproceedings{patel-domeniconi-2023-enhancing, title = "Enhancing Out-of-Vocabulary Estimation with Subword Attention", author = "Patel, Raj and Domeniconi, Carlotta", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.221", doi = "10.18653/v1/2023.findings-acl.221", pages = "3592--3601", abstract = "Word embedding methods like word2vec and GloVe have been shown to learn strong representations of words. However, these methods only learn representations for words in the training corpus and therefore struggle to handle unknown and new words, known as out-of-vocabulary (OOV) words. As a result, there have been multiple attempts to learn OOV word representations in a similar fashion to how humans learn new words, using word roots/subwords and/or surrounding words. However, while most of these approaches use advanced architectures like attention on the context of the OOV word, they tend to use simple structures like n-gram addition or character-based convolutional neural networks (CNNs) to process subword information. In response to this, we propose SubAtt, a transformer-based OOV estimation model that uses attention mechanisms on both the context and the subwords. In addition to attention, we show that pretraining subword representations also leads to improvement in OOV estimation. We show SubAtt outperforms current state-of-the-art OOV estimation models.", }
Word embedding methods like word2vec and GloVe have been shown to learn strong representations of words. However, these methods only learn representations for words in the training corpus and therefore struggle to handle unknown and new words, known as out-of-vocabulary (OOV) words. As a result, there have been multiple attempts to learn OOV word representations in a similar fashion to how humans learn new words, using word roots/subwords and/or surrounding words. However, while most of these approaches use advanced architectures like attention on the context of the OOV word, they tend to use simple structures like n-gram addition or character-based convolutional neural networks (CNNs) to process subword information. In response to this, we propose SubAtt, a transformer-based OOV estimation model that uses attention mechanisms on both the context and the subwords. In addition to attention, we show that pretraining subword representations also leads to improvement in OOV estimation. We show SubAtt outperforms current state-of-the-art OOV estimation models.
[ "Patel, Raj", "Domeniconi, Carlotta" ]
Enhancing Out-of-Vocabulary Estimation with Subword Attention
findings-acl.221
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.222.bib
https://aclanthology.org/2023.findings-acl.222/
@inproceedings{duan-zhao-2023-encoder, title = "Encoder and Decoder, Not One Less for Pre-trained Language Model Sponsored {NMT}", author = "Duan, Sufeng and Zhao, Hai", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.222", doi = "10.18653/v1/2023.findings-acl.222", pages = "3602--3613", abstract = "Well pre-trained contextualized representations from pre-trained language models (PLMs) have been shown to be helpful for enhancing various natural language processing tasks, including neural machine translation (NMT). However, existing methods either consider encoder-only enhancement or rely on specific multilingual PLMs, which leads to much larger models or gives up potentially helpful knowledge from target PLMs. In this paper, we propose a new monolingual PLM-sponsored NMT model that lets both the encoder and the decoder enjoy PLM enhancement, alleviating this inconvenience. In particular, by incorporating a newly proposed frequency-weighted embedding transformation algorithm, PLM embeddings can be effectively exploited for the representations of the NMT decoder. We evaluate our model on the IWSLT14 En-De, De-En, WMT14 En-De, and En-Fr tasks, and the results show that our proposed PLM enhancement gives significant improvements and even helps achieve new state-of-the-art results.", }
Well pre-trained contextualized representations from pre-trained language models (PLMs) have been shown to be helpful for enhancing various natural language processing tasks, including neural machine translation (NMT). However, existing methods either consider encoder-only enhancement or rely on specific multilingual PLMs, which leads to much larger models or gives up potentially helpful knowledge from target PLMs. In this paper, we propose a new monolingual PLM-sponsored NMT model that lets both the encoder and the decoder enjoy PLM enhancement, alleviating this inconvenience. In particular, by incorporating a newly proposed frequency-weighted embedding transformation algorithm, PLM embeddings can be effectively exploited for the representations of the NMT decoder. We evaluate our model on the IWSLT14 En-De, De-En, WMT14 En-De, and En-Fr tasks, and the results show that our proposed PLM enhancement gives significant improvements and even helps achieve new state-of-the-art results.
[ "Duan, Sufeng", "Zhao, Hai" ]
Encoder and Decoder, Not One Less for Pre-trained Language Model Sponsored NMT
findings-acl.222
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.223.bib
https://aclanthology.org/2023.findings-acl.223/
@inproceedings{fang-etal-2023-transgec, title = "{T}rans{GEC}: Improving Grammatical Error Correction with Translationese", author = "Fang, Tao and Liu, Xuebo and Wong, Derek F. and Zhan, Runzhe and Ding, Liang and Chao, Lidia S. and Tao, Dacheng and Zhang, Min", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.223", doi = "10.18653/v1/2023.findings-acl.223", pages = "3614--3633", abstract = "Data augmentation is an effective way to improve model performance of grammatical error correction (GEC). This paper identifies a critical side-effect of GEC data augmentation, which is due to the style discrepancy between the data used in GEC tasks (i.e., texts produced by non-native speakers) and data augmentation (i.e., native texts). To alleviate this issue, we propose to use an alternative data source, translationese (i.e., human-translated texts), as input for GEC data augmentation, which 1) is easier to obtain and usually has better quality than non-native texts, and 2) has a more similar style to non-native texts. Experimental results on the CoNLL14 and BEA19 English, NLPCC18 Chinese, Falko-MERLIN German, and RULEC-GEC Russian GEC benchmarks show that our approach consistently improves correction accuracy over strong baselines. Further analyses reveal that our approach is helpful for overcoming mainstream correction difficulties such as the corrections of frequent words, missing words, and substitution errors. Data, code, models and scripts are freely available at \url{https://github.com/NLP2CT/TransGEC}.", }
Data augmentation is an effective way to improve model performance of grammatical error correction (GEC). This paper identifies a critical side-effect of GEC data augmentation, which is due to the style discrepancy between the data used in GEC tasks (i.e., texts produced by non-native speakers) and data augmentation (i.e., native texts). To alleviate this issue, we propose to use an alternative data source, translationese (i.e., human-translated texts), as input for GEC data augmentation, which 1) is easier to obtain and usually has better quality than non-native texts, and 2) has a more similar style to non-native texts. Experimental results on the CoNLL14 and BEA19 English, NLPCC18 Chinese, Falko-MERLIN German, and RULEC-GEC Russian GEC benchmarks show that our approach consistently improves correction accuracy over strong baselines. Further analyses reveal that our approach is helpful for overcoming mainstream correction difficulties such as the corrections of frequent words, missing words, and substitution errors. Data, code, models and scripts are freely available at \url{https://github.com/NLP2CT/TransGEC}.
[ "Fang, Tao", "Liu, Xuebo", "Wong, Derek F.", "Zhan, Runzhe", "Ding, Liang", "Chao, Lidia S.", "Tao, Dacheng", "Zhang, Min" ]
TransGEC: Improving Grammatical Error Correction with Translationese
findings-acl.223
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.224.bib
https://aclanthology.org/2023.findings-acl.224/
@inproceedings{li-etal-2023-newsdialogues, title = "{N}ews{D}ialogues: Towards Proactive News Grounded Conversation", author = "Li, Siheng and Yin, Yichun and Yang, Cheng and Jiang, Wangjie and Li, Yiwei and Cheng, Zesen and Shang, Lifeng and Jiang, Xin and Liu, Qun and Yang, Yujiu", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.224", doi = "10.18653/v1/2023.findings-acl.224", pages = "3634--3649", abstract = "Hot news is one of the most popular topics in daily conversations. However, news grounded conversation has long been stymied by the lack of well-designed task definition and scarce data. In this paper, we propose a novel task, Proactive News Grounded Conversation, in which a dialogue system can proactively lead the conversation based on some key topics of the news. In addition, both information-seeking and chit-chat scenarios are included realistically, where the user may ask a series of questions about the news details or express their opinions and be eager to chat. To further develop this novel task, we collect a human-to-human Chinese dialogue dataset NewsDialogues, which includes 1K conversations with a total of 14.6K utterances and detailed annotations for target topics and knowledge spans. Furthermore, we propose a method named Predict-Generate-Rank, consisting of a generator for grounded knowledge prediction and response generation, and a ranker for the ranking of multiple responses to alleviate the exposure bias. We conduct comprehensive experiments to demonstrate the effectiveness of the proposed method and further present several key findings and challenges to prompt future research.", }
Hot news is one of the most popular topics in daily conversations. However, news grounded conversation has long been stymied by the lack of well-designed task definition and scarce data. In this paper, we propose a novel task, Proactive News Grounded Conversation, in which a dialogue system can proactively lead the conversation based on some key topics of the news. In addition, both information-seeking and chit-chat scenarios are included realistically, where the user may ask a series of questions about the news details or express their opinions and be eager to chat. To further develop this novel task, we collect a human-to-human Chinese dialogue dataset NewsDialogues, which includes 1K conversations with a total of 14.6K utterances and detailed annotations for target topics and knowledge spans. Furthermore, we propose a method named Predict-Generate-Rank, consisting of a generator for grounded knowledge prediction and response generation, and a ranker for the ranking of multiple responses to alleviate the exposure bias. We conduct comprehensive experiments to demonstrate the effectiveness of the proposed method and further present several key findings and challenges to prompt future research.
[ "Li, Siheng", "Yin, Yichun", "Yang, Cheng", "Jiang, Wangjie", "Li, Yiwei", "Cheng, Zesen", "Shang, Lifeng", "Jiang, Xin", "Liu, Qun", "Yang, Yujiu" ]
NewsDialogues: Towards Proactive News Grounded Conversation
findings-acl.224
Poster
2308.06501
[ "https://github.com/sihengli99/newsdialogues" ]
-1
-1
-1
-1
0
[]
[]
[]
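The Predict-Generate-Rank method described above reduces to a short control loop once its three components exist. The sketch below only illustrates that flow; the predictor, generator, and ranker callables and the candidate count k are hypothetical stand-ins, not the paper's released models.

```python
def predict_generate_rank(dialogue, news, predictor, generator, ranker, k=4):
    """Predict grounding knowledge spans, generate k candidate responses
    conditioned on them, then rank the candidates with a separate model
    to offset exposure bias. All three components are stand-in callables."""
    spans = predictor(dialogue, news)                  # knowledge prediction
    candidates = [generator(dialogue, spans) for _ in range(k)]
    return max(candidates, key=lambda r: ranker(dialogue, r))
```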
https://aclanthology.org/2023.findings-acl.225.bib
https://aclanthology.org/2023.findings-acl.225/
@inproceedings{asai-etal-2023-task, title = "Task-aware Retrieval with Instructions", author = "Asai, Akari and Schick, Timo and Lewis, Patrick and Chen, Xilun and Izacard, Gautier and Riedel, Sebastian and Hajishirzi, Hannaneh and Yih, Wen-tau", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.225", doi = "10.18653/v1/2023.findings-acl.225", pages = "3650--3675", abstract = "We study the problem of retrieval with instructions, where users provide explicit descriptions of their intent along with their queries to guide a retrieval system. Our solution is a general-purpose task-aware retrieval system that is trained using multi-task instruction tuning and can follow human-written instructions to find relevant documents for a given query. We introduce the first large-scale collection of 37 retrieval datasets with instructions, BERRI, and present TART, a single multi-task retrieval system trained on BERRI with instructions that can adapt to a new task without any parameter updates. TART advances the state of the art on two zero-shot retrieval benchmarks, BEIR and LOTTE, outperforming models up to three times larger. We further introduce a new evaluation setup, X2-Retrieval, to better reflect real-world scenarios in which diverse domains and tasks are pooled. TART significantly outperforms competitive baselines in this setup, further highlighting the effectiveness of guiding retrieval with instructions.", }
We study the problem of retrieval with instructions, where users provide explicit descriptions of their intent along with their queries to guide a retrieval system. Our solution is a general-purpose task-aware retrieval system that is trained using multi-task instruction tuning and can follow human-written instructions to find relevant documents for a given query. We introduce the first large-scale collection of 37 retrieval datasets with instructions, BERRI, and present TART, a single multi-task retrieval system trained on BERRI with instructions that can adapt to a new task without any parameter updates. TART advances the state of the art on two zero-shot retrieval benchmarks, BEIR and LOTTE, outperforming models up to three times larger. We further introduce a new evaluation setup, X2-Retrieval, to better reflect real-world scenarios in which diverse domains and tasks are pooled. TART significantly outperforms competitive baselines in this setup, further highlighting the effectiveness of guiding retrieval with instructions.
[ "Asai, Akari", "Schick, Timo", "Lewis, Patrick", "Chen, Xilun", "Izacard, Gautier", "Riedel, Sebastian", "Hajishirzi, Hannaneh", "Yih, Wen-tau" ]
Task-aware Retrieval with Instructions
findings-acl.225
Poster
2211.09260
[ "https://github.com/facebookresearch/tart" ]
-1
-1
-1
-1
0
[]
[]
[]
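The core mechanism in TART is simple to picture: the task instruction is concatenated with the query before encoding, so a single retriever can serve many tasks without parameter updates. A minimal sketch follows, assuming a generic sentence-transformers encoder and a plain "[SEP]" joining convention as stand-ins for TART's own checkpoints and input format.

```python
from sentence_transformers import SentenceTransformer, util

# Any dense encoder works for this sketch; TART's released checkpoints
# would slot in the same way (the model name below is a stand-in).
encoder = SentenceTransformer("all-MiniLM-L6-v2")

instruction = "Retrieve a Wikipedia paragraph that answers this question."
query = "who wrote the novel Beloved"

# Core idea: the task instruction is concatenated with the query, so
# one retriever serves many tasks without parameter updates.
q_emb = encoder.encode(f"{instruction} [SEP] {query}")
docs = ["Beloved is a 1987 novel by Toni Morrison.",
        "The 1993 Nobel Prize in Literature was awarded to Toni Morrison."]
d_emb = encoder.encode(docs)
scores = util.cos_sim(q_emb, d_emb)   # shape (1, len(docs))
print(docs[int(scores.argmax())])
```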
https://aclanthology.org/2023.findings-acl.226.bib
https://aclanthology.org/2023.findings-acl.226/
@inproceedings{belz-etal-2023-non, title = "Non-Repeatable Experiments and Non-Reproducible Results: The Reproducibility Crisis in Human Evaluation in {NLP}", author = "Belz, Anya and Thomson, Craig and Reiter, Ehud and Mille, Simon", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.226", doi = "10.18653/v1/2023.findings-acl.226", pages = "3676--3687", abstract = "Human evaluation is widely regarded as the litmus test of quality in NLP. A basic requirement of all evaluations, but in particular where they are used for meta-evaluation, is that they should support the same conclusions if repeated. However, the reproducibility of human evaluations is virtually never queried, let alone formally tested, in NLP, which means that their repeatability and the reproducibility of their results is currently an open question. This focused contribution reports our review of human evaluation experiments reported in NLP papers over the past five years, which we assessed in terms of their ability to be rerun. Overall, we estimate that just 5{\%} of human evaluations are repeatable in the sense that (i) there are no prohibitive barriers to repetition, and (ii) sufficient information about experimental design is publicly available for rerunning them. Our estimate goes up to about 20{\%} when author help is sought. We complement this investigation with a survey of results concerning the reproducibility of human evaluations where those are repeatable in the first place. Here we find worryingly low degrees of reproducibility, both in terms of similarity of scores and of findings supported by them. We summarise what insights can be gleaned so far regarding how to make human evaluations in NLP more repeatable and more reproducible.", }
Human evaluation is widely regarded as the litmus test of quality in NLP. A basic requirement of all evaluations, but in particular where they are used for meta-evaluation, is that they should support the same conclusions if repeated. However, the reproducibility of human evaluations is virtually never queried, let alone formally tested, in NLP, which means that their repeatability and the reproducibility of their results is currently an open question. This focused contribution reports our review of human evaluation experiments reported in NLP papers over the past five years, which we assessed in terms of their ability to be rerun. Overall, we estimate that just 5{\%} of human evaluations are repeatable in the sense that (i) there are no prohibitive barriers to repetition, and (ii) sufficient information about experimental design is publicly available for rerunning them. Our estimate goes up to about 20{\%} when author help is sought. We complement this investigation with a survey of results concerning the reproducibility of human evaluations where those are repeatable in the first place. Here we find worryingly low degrees of reproducibility, both in terms of similarity of scores and of findings supported by them. We summarise what insights can be gleaned so far regarding how to make human evaluations in NLP more repeatable and more reproducible.
[ "Belz, Anya", "Thomson, Craig", "Reiter, Ehud", "Mille, Simon" ]
Non-Repeatable Experiments and Non-Reproducible Results: The Reproducibility Crisis in Human Evaluation in NLP
findings-acl.226
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.227.bib
https://aclanthology.org/2023.findings-acl.227/
@inproceedings{zhao-etal-2023-define, title = "Define, Evaluate, and Improve Task-Oriented Cognitive Capabilities for Instruction Generation Models", author = "Zhao, Lingjun and Nguyen, Khanh and Daum{\'e} III, Hal", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.227", doi = "10.18653/v1/2023.findings-acl.227", pages = "3688--3706", abstract = "Recent work studies the cognitive capabilities of language models through psychological tests designed for humans. While these studies are helpful for understanding the general capabilities of these models, there is no guarantee that a model possessing sufficient capabilities to pass those tests would actually use those capabilities in performing real-life tasks. In this work, we formulate task-oriented cognitive capabilities, which are human-like cognitive capabilities that language models leverage to perform tasks. These capabilities are (i) the ability to quickly generate good candidate utterances (the search capability) (ii) the ability to predict how a listener interprets those utterances and choose the most appropriate one (the pragmatic capability). We design an evaluation scheme for comparing these capabilities of a language model with those of a human. Applying this scheme to examine various models in a navigation instruction generation problem, we find that their pragmatic capability is severely lacking. This insight leads us to augment them with better models of the listener and obtain a significant boost of 11{\%} in success rate in guiding real humans. Our work advocates for having a principled procedure for aligning language models with humans that involves (i) formulating task-oriented capabilities, (ii) devising a method to quantify their deficiency, and (iii) iteratively improving them.", }
Recent work studies the cognitive capabilities of language models through psychological tests designed for humans. While these studies are helpful for understanding the general capabilities of these models, there is no guarantee that a model possessing sufficient capabilities to pass those tests would actually use those capabilities in performing real-life tasks. In this work, we formulate task-oriented cognitive capabilities, which are human-like cognitive capabilities that language models leverage to perform tasks. These capabilities are (i) the ability to quickly generate good candidate utterances (the search capability) (ii) the ability to predict how a listener interprets those utterances and choose the most appropriate one (the pragmatic capability). We design an evaluation scheme for comparing these capabilities of a language model with those of a human. Applying this scheme to examine various models in a navigation instruction generation problem, we find that their pragmatic capability is severely lacking. This insight leads us to augment them with better models of the listener and obtain a significant boost of 11{\%} in success rate in guiding real humans. Our work advocates for having a principled procedure for aligning language models with humans that involves (i) formulating task-oriented capabilities, (ii) devising a method to quantify their deficiency, and (iii) iteratively improving them.
[ "Zhao, Lingjun", "Nguyen, Khanh", "Daum{\\'e} III, Hal" ]
Define, Evaluate, and Improve Task-Oriented Cognitive Capabilities for Instruction Generation Models
findings-acl.227
Poster
2301.05149
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
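The pragmatic capability described above amounts to speaker-side reranking against a listener model. A minimal sketch follows, assuming a hypothetical listener_prob callable that scores P(target | utterance); the actual listener models in the paper are learned components.

```python
def pragmatic_choice(candidates, listener_prob, target):
    """Speaker-side pragmatic reranking: among candidate instructions,
    pick the one a listener model is most likely to interpret as the
    intended target. `listener_prob` is a stand-in for any learned
    listener model returning P(target | utterance)."""
    return max(candidates, key=lambda u: listener_prob(target, u))
```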
https://aclanthology.org/2023.findings-acl.228.bib
https://aclanthology.org/2023.findings-acl.228/
@inproceedings{machacek-etal-2023-robustness, title = "Robustness of Multi-Source {MT} to Transcription Errors", author = "Mach{\'a}{\v{c}}ek, Dominik and Pol{\'a}k, Peter and Bojar, Ond{\v{r}}ej and Dabre, Raj", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.228", doi = "10.18653/v1/2023.findings-acl.228", pages = "3707--3723", abstract = "Automatic speech translation is sensitive to speech recognition errors, but in a multilingual scenario, the same content may be available in various languages via simultaneous interpreting, dubbing or subtitling. In this paper, we hypothesize that leveraging multiple sources will improve translation quality if the sources complement one another in terms of correct information they contain. To this end, we first show that on a 10-hour ESIC corpus, the ASR errors in the original English speech and its simultaneous interpreting into German and Czech are mutually independent. We then use two sources, English and German, in a multi-source setting for translation into Czech to establish its robustness to ASR errors. Furthermore, we observe this robustness when translating both noisy sources together in a simultaneous translation setting. Our results show that multi-source neural machine translation has the potential to be useful in a real-time simultaneous translation setting, thereby motivating further investigation in this area.", }
Automatic speech translation is sensitive to speech recognition errors, but in a multilingual scenario, the same content may be available in various languages via simultaneous interpreting, dubbing or subtitling. In this paper, we hypothesize that leveraging multiple sources will improve translation quality if the sources complement one another in terms of correct information they contain. To this end, we first show that on a 10-hour ESIC corpus, the ASR errors in the original English speech and its simultaneous interpreting into German and Czech are mutually independent. We then use two sources, English and German, in a multi-source setting for translation into Czech to establish its robustness to ASR errors. Furthermore, we observe this robustness when translating both noisy sources together in a simultaneous translation setting. Our results show that multi-source neural machine translation has the potential to be useful in a real-time simultaneous translation setting, thereby motivating further investigation in this area.
[ "Mach{\\'a}{\\v{c}}ek, Dominik", "Pol{\\'a}k, Peter", "Bojar, Ond{\\v{r}}ej", "Dabre, Raj" ]
Robustness of Multi-Source MT to Transcription Errors
findings-acl.228
Poster
2305.16894
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.229.bib
https://aclanthology.org/2023.findings-acl.229/
@inproceedings{liu-etal-2023-end, title = "Not The End of Story: An Evaluation of {C}hat{GPT}-Driven Vulnerability Description Mappings", author = "Liu, Xin and Tan, Yuan and Xiao, Zhenghang and Zhuge, Jianwei and Zhou, Rui", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.229", doi = "10.18653/v1/2023.findings-acl.229", pages = "3724--3731", abstract = "As the number of vulnerabilities increases day by day, security management requires more and more structured data. In addition to textual descriptions of vulnerabilities, security engineers must classify and assess vulnerabilities and clarify their associated techniques. Vulnerability Description Mapping (VDM) refers to mapping vulnerabilities to Common Weakness Enumeration (CWE), Common Attack Pattern Enumeration and Classification, ATT{\&}CK Techniques, and other classifications. Accurate VDM is necessary to reduce the pressure of security management and improve the speed of security emergency response. ChatGPT is the latest state-of-the-art closed-source conversational large language model (LLM), which performs excellently on many tasks. This paper explores the application of closed-source LLMs to real-world security management scenarios by evaluating ChatGPT{'}s performance on VDM tasks. The results show that although ChatGPT may be close to the level of human experts on some tasks, it still cannot replace the critical role of professional security engineers in vulnerability analysis. In a word, closed-source LLM is not the end of story.", }
As the number of vulnerabilities increases day by day, security management requires more and more structured data. In addition to textual descriptions of vulnerabilities, security engineers must classify and assess vulnerabilities and clarify their associated techniques. Vulnerability Description Mapping (VDM) refers to mapping vulnerabilities to Common Weakness Enumeration (CWE), Common Attack Pattern Enumeration and Classification, ATT{\&}CK Techniques, and other classifications. Accurate VDM is necessary to reduce the pressure of security management and improve the speed of security emergency response. ChatGPT is the latest state-of-the-art closed-source conversational large language model (LLM), which performs excellently on many tasks. This paper explores the application of closed-source LLMs to real-world security management scenarios by evaluating ChatGPT{'}s performance on VDM tasks. The results show that although ChatGPT may be close to the level of human experts on some tasks, it still cannot replace the critical role of professional security engineers in vulnerability analysis. In a word, closed-source LLM is not the end of story.
[ "Liu, Xin", "Tan, Yuan", "Xiao, Zhenghang", "Zhuge, Jianwei", "Zhou, Rui" ]
Not The End of Story: An Evaluation of ChatGPT-Driven Vulnerability Description Mappings
findings-acl.229
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.230.bib
https://aclanthology.org/2023.findings-acl.230/
@inproceedings{moghe-etal-2023-multi3nlu, title = "{M}ulti3{NLU}++: A Multilingual, Multi-Intent, Multi-Domain Dataset for Natural Language Understanding in Task-Oriented Dialogue", author = "Moghe, Nikita and Razumovskaia, Evgeniia and Guillou, Liane and Vuli{\'c}, Ivan and Korhonen, Anna and Birch, Alexandra", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.230", doi = "10.18653/v1/2023.findings-acl.230", pages = "3732--3755", abstract = "Task-oriented dialogue (ToD) systems have been widely deployed in many industries as they deliver more efficient customer support. These systems are typically constructed for a single domain or language and do not generalise well beyond this. To support work on Natural Language Understanding (NLU) in ToD across multiple languages and domains simultaneously, we constructed Multi3NLU++, a multilingual, multi-intent, multi-domain dataset. Multi3NLU++ extends the English-only NLU++ dataset to include manual translations into a range of high, medium, and low resource languages (Spanish, Marathi, Turkish and Amharic), in two domains (banking and hotels). Because of its multi-intent property, Multi3NLU++ represents complex and natural user goals, and therefore allows us to measure the realistic performance of ToD systems in a varied set of the world{'}s languages. We use Multi3NLU++ to benchmark state-of-the-art multilingual models for the NLU tasks of intent detection and slot labeling for ToD systems in the multilingual setting. The results demonstrate the challenging nature of the dataset, particularly in the low-resource language setting, offering ample room for future experimentation in multi-domain multilingual ToD setups.", }
Task-oriented dialogue (ToD) systems have been widely deployed in many industries as they deliver more efficient customer support. These systems are typically constructed for a single domain or language and do not generalise well beyond this. To support work on Natural Language Understanding (NLU) in ToD across multiple languages and domains simultaneously, we constructed Multi3NLU++, a multilingual, multi-intent, multi-domain dataset. Multi3NLU++ extends the English-only NLU++ dataset to include manual translations into a range of high, medium, and low resource languages (Spanish, Marathi, Turkish and Amharic), in two domains (banking and hotels). Because of its multi-intent property, Multi3NLU++ represents complex and natural user goals, and therefore allows us to measure the realistic performance of ToD systems in a varied set of the world{'}s languages. We use Multi3NLU++ to benchmark state-of-the-art multilingual models for the NLU tasks of intent detection and slot labeling for ToD systems in the multilingual setting. The results demonstrate the challenging nature of the dataset, particularly in the low-resource language setting, offering ample room for future experimentation in multi-domain multilingual ToD setups.
[ "Moghe, Nikita", "Razumovskaia, Evgeniia", "Guillou, Liane", "Vuli{\\'c}, Ivan", "Korhonen, Anna", "Birch, Alex", "ra" ]
Multi3NLU++: A Multilingual, Multi-Intent, Multi-Domain Dataset for Natural Language Understanding in Task-Oriented Dialogue
findings-acl.230
Poster
2212.10455
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.231.bib
https://aclanthology.org/2023.findings-acl.231/
@inproceedings{hong-etal-2023-robust, title = "A Robust Information-Masking Approach for Domain Counterfactual Generation", author = "Hong, Pengfei and Bhardwaj, Rishabh and Majumder, Navonil and Aditya, Somak and Poria, Soujanya", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.231", doi = "10.18653/v1/2023.findings-acl.231", pages = "3756--3769", abstract = "Domain shift is a big challenge in NLP. Many approaches, thus, resort to learning domain-invariant features to mitigate the hurdles of domain shift during inference. Such methods, however, inexorably fail to leverage the domain-specific nuances relevant to the task at hand. To avoid such drawbacks, domain counterfactual generation has recently been proposed that aims to transform a text from the source domain to a given target domain. To achieve this, the existing method uses a frequency-based approach to identify and mask the source-domain-specific tokens in a text. A pretrained LM is then prompted to fill the masks with target-domain-specific tokens. We, however, have observed that, due to limitations of the available data, such a frequency-based method may either miss some domain-token associations or lead to some spurious domain-token associations. To this end, we additionally employ attention norm-based scores to identify additional token-domain associations from a domain classifier. To minimize spurious associations, we also devise an iterative unmasking heuristic that unmasks the masked tokens to minimize the confidence of a domain classifier in the source domain. Our experiments empirically show that the counterfactual samples sourced from our masked text lead to improved domain transfer across various classification tasks. The proposed approach outperforms the baselines on 10 out of 12 domain-counterfactual classification settings with an average of 1.7{\%} improvement in accuracy metric.", }
Domain shift is a big challenge in NLP. Many approaches, thus, resort to learning domain-invariant features to mitigate the hurdles of domain shift during inference. Such methods, however, inexorably fail to leverage the domain-specific nuances relevant to the task at hand. To avoid such drawbacks, domain counterfactual generation has recently been proposed that aims to transform a text from the source domain to a given target domain. To achieve this, the existing method uses a frequency-based approach to identify and mask the source-domain-specific tokens in a text. A pretrained LM is then prompted to fill the masks with target-domain-specific tokens. We, however, have observed that, due to limitations of the available data, such a frequency-based method may either miss some domain-token associations or lead to some spurious domain-token associations. To this end, we additionally employ attention norm-based scores to identify additional token-domain associations from a domain classifier. To minimize spurious associations, we also devise an iterative unmasking heuristic that unmasks the masked tokens to minimize the confidence of a domain classifier in the source domain. Our experiments empirically show that the counterfactual samples sourced from our masked text lead to improved domain transfer across various classification tasks. The proposed approach outperforms the baselines on 10 out of 12 domain-counterfactual classification settings with an average of 1.7{\%} improvement in accuracy metric.
[ "Hong, Pengfei", "Bhardwaj, Rishabh", "Majumder, Navonil", "Aditya, Somak", "Poria, Soujanya" ]
A Robust Information-Masking Approach for Domain Counterfactual Generation
findings-acl.231
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
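The mask-and-infill step at the heart of domain counterfactual generation can be sketched with an off-the-shelf masked LM. In the sketch below, the token selection is a toy hand-picked set rather than the paper's frequency and attention-norm scoring, and bert-base-uncased is just a convenient stand-in model.

```python
from transformers import pipeline

# Mask-and-infill: domain-indicative tokens are replaced by [MASK] and a
# pretrained masked LM proposes fillers. Token selection below is a toy
# hand-picked set, not the paper's attention-norm scoring.
fill = pipeline("fill-mask", model="bert-base-uncased")

source_review = "this camera takes sharp photos in low light"
domain_tokens = {"camera", "photos"}          # assumed source-domain cues

masked = " ".join("[MASK]" if tok in domain_tokens else tok
                  for tok in source_review.split())
# With several masks, the pipeline returns one candidate list per slot.
for candidate in fill(masked, top_k=3)[0]:
    print(candidate["sequence"], candidate["score"])
```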
https://aclanthology.org/2023.findings-acl.232.bib
https://aclanthology.org/2023.findings-acl.232/
@inproceedings{jiang-etal-2023-misleading, title = "Misleading Relation Classifiers by Substituting Words in Texts", author = "Jiang, Tian and Liu, Yunqi and Feng, Yan and Li, Yuqing and Cui, Xiaohui", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.232", doi = "10.18653/v1/2023.findings-acl.232", pages = "3770--3785", abstract = "Relation classification aims to determine the semantic relationship between two entities in a given sentence. However, many relation classifiers are vulnerable to adversarial attacks, in which adversarial examples lead victim models to output wrong results. In this paper, we propose a simple but effective method for misleading relation classifiers. We first analyze the most important parts of speech (POSs) from the syntax and morphology perspectives, then we substitute words labeled with these POS tags in original samples with synonyms or hyponyms. Experimental results show that our method can generate adversarial texts of high quality, and most of the relationships between entities can be correctly identified in the process of human evaluation. Furthermore, the adversarial examples generated by our method possess promising transferability, and they are also helpful for improving the robustness of victim models.", }
Relation classification aims to determine the semantic relationship between two entities in a given sentence. However, many relation classifiers are vulnerable to adversarial attacks, in which adversarial examples lead victim models to output wrong results. In this paper, we propose a simple but effective method for misleading relation classifiers. We first analyze the most important parts of speech (POSs) from the syntax and morphology perspectives, then we substitute words labeled with these POS tags in original samples with synonyms or hyponyms. Experimental results show that our method can generate adversarial texts of high quality, and most of the relationships between entities can be correctly identified in the process of human evaluation. Furthermore, the adversarial examples generated by our method possess promising transferability, and they are also helpful for improving the robustness of victim models.
[ "Jiang, Tian", "Liu, Yunqi", "Feng, Yan", "Li, Yuqing", "Cui, Xiaohui" ]
Misleading Relation Classifiers by Substituting Words in Texts
findings-acl.232
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
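The substitution step itself is easy to illustrate with WordNet. The sketch below performs only the bare synonym swap; the paper's method additionally selects which POS tags to target and also considers hyponyms, which this toy version omits (the attacked word set here is hand-picked).

```python
import nltk
from nltk.corpus import wordnet

nltk.download("wordnet", quiet=True)          # one-time resource fetch

def synonym(word: str) -> str:
    """Return a WordNet synonym for `word`, or the word unchanged.
    A real attack would also filter by POS tag and semantic fit."""
    for syn in wordnet.synsets(word):
        for lemma in syn.lemmas():
            cand = lemma.name().replace("_", " ")
            if cand.lower() != word.lower():
                return cand
    return word

sentence = "The company acquired the startup last year"
targets = {"company", "acquired"}             # hand-picked for the demo
attacked = " ".join(synonym(w) if w in targets else w
                    for w in sentence.split())
print(attacked)
```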
https://aclanthology.org/2023.findings-acl.233.bib
https://aclanthology.org/2023.findings-acl.233/
@inproceedings{hu-etal-2023-automatic, title = "Automatic Table Union Search with Tabular Representation Learning", author = "Hu, Xuming and Wang, Shen and Qin, Xiao and Lei, Chuan and Shen, Zhengyuan and Faloutsos, Christos and Katsifodimos, Asterios and Karypis, George and Wen, Lijie and Yu, Philip S.", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.233", doi = "10.18653/v1/2023.findings-acl.233", pages = "3786--3800", abstract = "Given a data lake of tabular data as well as a query table, how can we retrieve all the tables in the data lake that can be unioned with the query table? Table union search constitutes an essential task in data discovery and preparation as it enables data scientists to navigate massive open data repositories. Existing methods identify unionability based on column representations (word surface forms or token embeddings) and column relation represented by column representation similarity. However, the semantic similarity obtained between column representations is often insufficient to reveal the latent relational features that describe the column relation between a pair of columns, and is not robust to table noise. To address these issues, in this paper, we propose a multi-stage self-supervised table union search framework called AutoTUS, which represents the column relation as a vector{--}a column relational representation{--}and learns column relational representations in a multi-stage manner that can better describe column relations for unionability prediction. In particular, the large language model powered contextualized column relation encoder is updated iteratively by adaptive clustering and pseudo-label classification so that a better column relational representation can be learned. Moreover, to improve the robustness of the model against table noise, we propose a table noise generator to add table noise to the training table data. Experiments on real-world datasets as well as a synthetic test set augmented with table noise show that AutoTUS achieves a 5.2{\%} performance gain over the SOTA baseline.", }
Given a data lake of tabular data as well as a query table, how can we retrieve all the tables in the data lake that can be unioned with the query table? Table union search constitutes an essential task in data discovery and preparation as it enables data scientists to navigate massive open data repositories. Existing methods identify unionability based on column representations (word surface forms or token embeddings) and column relation represented by column representation similarity. However, the semantic similarity obtained between column representations is often insufficient to reveal the latent relational features that describe the column relation between a pair of columns, and is not robust to table noise. To address these issues, in this paper, we propose a multi-stage self-supervised table union search framework called AutoTUS, which represents the column relation as a vector{--}a column relational representation{--}and learns column relational representations in a multi-stage manner that can better describe column relations for unionability prediction. In particular, the large language model powered contextualized column relation encoder is updated iteratively by adaptive clustering and pseudo-label classification so that a better column relational representation can be learned. Moreover, to improve the robustness of the model against table noise, we propose a table noise generator to add table noise to the training table data. Experiments on real-world datasets as well as a synthetic test set augmented with table noise show that AutoTUS achieves a 5.2{\%} performance gain over the SOTA baseline.
[ "Hu, Xuming", "Wang, Shen", "Qin, Xiao", "Lei, Chuan", "Shen, Zhengyuan", "Faloutsos, Christos", "Katsifodimos, Asterios", "Karypis, George", "Wen, Lijie", "Yu, Philip S." ]
Automatic Table Union Search with Tabular Representation Learning
findings-acl.233
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
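For contrast with AutoTUS's learned column-relation vectors, the baseline unionability test it improves on can be written in a few lines: embed each column, then threshold a cosine similarity. The encoder and the threshold value below are assumed placeholders.

```python
import numpy as np

def unionable(col_a: np.ndarray, col_b: np.ndarray, tau: float = 0.8) -> bool:
    """Baseline unionability test: cosine similarity between two column
    embeddings against a threshold. AutoTUS replaces this similarity
    with a learned column-relation vector; tau is an assumed value."""
    cos = float(col_a @ col_b /
                (np.linalg.norm(col_a) * np.linalg.norm(col_b)))
    return cos >= tau

# e.g. embeddings produced by any column encoder:
print(unionable(np.array([1.0, 0.2]), np.array([0.9, 0.3])))
```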
https://aclanthology.org/2023.findings-acl.234.bib
https://aclanthology.org/2023.findings-acl.234/
@inproceedings{zhang-etal-2023-bidirectional, title = "Bidirectional Transformer Reranker for Grammatical Error Correction", author = "Zhang, Ying and Kamigaito, Hidetaka and Okumura, Manabu", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.234", doi = "10.18653/v1/2023.findings-acl.234", pages = "3801--3825", abstract = "Pre-trained seq2seq models have achieved state-of-the-art results in the grammatical error correction task. However, these models still suffer from a prediction bias due to their unidirectional decoding. Thus, we propose a bidirectional Transformer reranker (BTR) that re-estimates the probability of each candidate sentence generated by the pre-trained seq2seq model. The BTR preserves the seq2seq-style Transformer architecture but utilizes a BERT-style self-attention mechanism in the decoder to compute the probability of each target token by using masked language modeling to capture bidirectional representations from the target context. For guiding the reranking, the BTR adopts negative sampling in the objective function to minimize the unlikelihood. During inference, the BTR gives final results after comparing the reranked top-1 results with the original ones by an acceptance threshold. Experimental results show that, in reranking candidates from a pre-trained seq2seq model, T5-base, the BTR on top of T5-base could yield 65.47 and 71.27 F0.5 scores on the CoNLL-14 and BEA test sets, respectively, and yield a 59.52 GLEU score on the JFLEG corpus, with improvements of 0.36, 0.76 and 0.48 points compared with the original T5-base. Furthermore, when reranking candidates from T5-large, the BTR on top of T5-base improved the original T5-large by 0.26 points on the BEA test set.", }
Pre-trained seq2seq models have achieved state-of-the-art results in the grammatical error correction task. However, these models still suffer from a prediction bias due to their unidirectional decoding. Thus, we propose a bidirectional Transformer reranker (BTR) that re-estimates the probability of each candidate sentence generated by the pre-trained seq2seq model. The BTR preserves the seq2seq-style Transformer architecture but utilizes a BERT-style self-attention mechanism in the decoder to compute the probability of each target token by using masked language modeling to capture bidirectional representations from the target context. For guiding the reranking, the BTR adopts negative sampling in the objective function to minimize the unlikelihood. During inference, the BTR gives final results after comparing the reranked top-1 results with the original ones by an acceptance threshold. Experimental results show that, in reranking candidates from a pre-trained seq2seq model, T5-base, the BTR on top of T5-base could yield 65.47 and 71.27 F0.5 scores on the CoNLL-14 and BEA test sets, respectively, and yield a 59.52 GLEU score on the JFLEG corpus, with improvements of 0.36, 0.76 and 0.48 points compared with the original T5-base. Furthermore, when reranking candidates from T5-large, the BTR on top of T5-base improved the original T5-large by 0.26 points on the BEA test set.
[ "Zhang, Ying", "Kamigaito, Hidetaka", "Okumura, Manabu" ]
Bidirectional Transformer Reranker for Grammatical Error Correction
findings-acl.234
Poster
2305.13000
[ "https://github.com/zhangying9128/btr" ]
-1
-1
-1
-1
0
[]
[]
[]
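BTR's final decision step compares the reranked top-1 against the base model's top-1 and only accepts the swap past a threshold. One plausible reading of that rule, with stub scoring functions and an arbitrary tau standing in for BTR's masked-LM probabilities and tuned threshold:

```python
def rerank_with_threshold(candidates, base_score, rerank_score, tau=0.5):
    """Accept the reranker's top-1 only when it beats the base model's
    top-1 by more than tau under the reranker's score; otherwise keep
    the base model's choice. Both scorers and tau are placeholders."""
    base_best = max(candidates, key=base_score)
    rerank_best = max(candidates, key=rerank_score)
    if rerank_score(rerank_best) - rerank_score(base_best) > tau:
        return rerank_best
    return base_best

cands = ["He go to school .", "He goes to school ."]
print(rerank_with_threshold(cands,
                            base_score=len,                       # stub
                            rerank_score=lambda s: s.count("goes")))
```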
https://aclanthology.org/2023.findings-acl.235.bib
https://aclanthology.org/2023.findings-acl.235/
@inproceedings{urbizu-etal-2023-enough, title = "Not Enough Data to Pre-train Your Language Model? {MT} to the Rescue!", author = "Urbizu, Gorka and San Vicente, I{\~n}aki and Saralegi, Xabier and Corral, Ander", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.235", doi = "10.18653/v1/2023.findings-acl.235", pages = "3826--3836", abstract = "In recent years, pre-trained transformer-based language models (LMs) have become a key resource for implementing most NLP tasks. However, pre-training such models demands large text collections not available in most languages. In this paper, we study the use of machine-translated corpora for pre-training LMs. We answer the following research questions: RQ1: Is MT-based data an alternative to real data for learning an LM?; RQ2: Can real data be complemented with translated data and improve the resulting LM? In order to validate these two questions, several BERT models for Basque have been trained, combining real data and synthetic data translated from Spanish. The evaluation carried out on 9 NLU tasks indicates that models trained exclusively on translated data offer competitive results. Furthermore, models trained with real data can be improved with synthetic data, although further research is needed on the matter.", }
In recent years, pre-trained transformer-based language models (LMs) have become a key resource for implementing most NLP tasks. However, pre-training such models demands large text collections not available in most languages. In this paper, we study the use of machine-translated corpora for pre-training LMs. We answer the following research questions: RQ1: Is MT-based data an alternative to real data for learning an LM?; RQ2: Can real data be complemented with translated data and improve the resulting LM? In order to validate these two questions, several BERT models for Basque have been trained, combining real data and synthetic data translated from Spanish. The evaluation carried out on 9 NLU tasks indicates that models trained exclusively on translated data offer competitive results. Furthermore, models trained with real data can be improved with synthetic data, although further research is needed on the matter.
[ "Urbizu, Gorka", "San Vicente, I{\\~n}aki", "Saralegi, Xabier", "Corral, Ander" ]
Not Enough Data to Pre-train Your Language Model? MT to the Rescue!
findings-acl.235
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.236.bib
https://aclanthology.org/2023.findings-acl.236/
@inproceedings{gao-etal-2023-umse, title = "{UMSE}: Unified Multi-scenario Summarization Evaluation", author = "Gao, Shen and Yao, Zhitao and Tao, Chongyang and Chen, Xiuying and Ren, Pengjie and Ren, Zhaochun and Chen, Zhumin", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.236", doi = "10.18653/v1/2023.findings-acl.236", pages = "3837--3849", abstract = "Summarization quality evaluation is a non-trivial task in text summarization. Contemporary methods can be mainly categorized into two scenarios: (1) reference-based: evaluating with human-labeled reference summary; (2) reference-free: evaluating the summary consistency of the document. Recent studies mainly focus on one of these scenarios and explore training neural models built on PLMs to align with human criteria. However, the models from different scenarios are optimized individually, which may result in sub-optimal performance since they neglect the shared knowledge across different scenarios. Besides, designing individual models for each scenario causes inconvenience to users. Inspired by this, we propose the Unified Multi-scenario Summarization Evaluation Model (UMSE). More specifically, we propose a perturbed prefix tuning method to share cross-scenario knowledge between scenarios and use a self-supervised training paradigm to optimize the model without extra human labeling. Our UMSE is the first unified summarization evaluation framework that can be used in three evaluation scenarios. Experimental results across three typical scenarios on the benchmark dataset SummEval indicate that our UMSE can achieve comparable performance with several existing strong methods which are specifically designed for each scenario.", }
Summarization quality evaluation is a non-trivial task in text summarization. Contemporary methods can be mainly categorized into two scenarios: (1) reference-based: evaluating with human-labeled reference summary; (2) reference-free: evaluating the summary consistency of the document. Recent studies mainly focus on one of these scenarios and explore training neural models built on PLMs to align with human criteria. However, the models from different scenarios are optimized individually, which may result in sub-optimal performance since they neglect the shared knowledge across different scenarios. Besides, designing individual models for each scenario causes inconvenience to users. Inspired by this, we propose the Unified Multi-scenario Summarization Evaluation Model (UMSE). More specifically, we propose a perturbed prefix tuning method to share cross-scenario knowledge between scenarios and use a self-supervised training paradigm to optimize the model without extra human labeling. Our UMSE is the first unified summarization evaluation framework that can be used in three evaluation scenarios. Experimental results across three typical scenarios on the benchmark dataset SummEval indicate that our UMSE can achieve comparable performance with several existing strong methods which are specifically designed for each scenario.
[ "Gao, Shen", "Yao, Zhitao", "Tao, Chongyang", "Chen, Xiuying", "Ren, Pengjie", "Ren, Zhaochun", "Chen, Zhumin" ]
UMSE: Unified Multi-scenario Summarization Evaluation
findings-acl.236
Poster
2305.16895
[ "https://github.com/zt-yao/umse" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.237.bib
https://aclanthology.org/2023.findings-acl.237/
@inproceedings{liu-etal-2023-maximum, title = "Maximum Entropy Loss, the Silver Bullet Targeting Backdoor Attacks in Pre-trained Language Models", author = "Liu, Zhengxiao and Shen, Bowen and Lin, Zheng and Wang, Fali and Wang, Weiping", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.237", doi = "10.18653/v1/2023.findings-acl.237", pages = "3850--3868", abstract = "Pre-trained language models (PLMs) can be stealthily misled to target outputs by backdoor attacks when encountering poisoned samples, without performance degradation on clean samples. The stealthiness of backdoor attacks is commonly attained through minimal cross-entropy loss fine-tuning on a union of poisoned and clean samples. Existing defense paradigms provide a workaround by detecting and removing poisoned samples at pre-training or inference time. On the contrary, we provide a new perspective where the backdoor attack is directly reversed. Specifically, maximum entropy loss is incorporated in training to neutralize the minimal cross-entropy loss fine-tuning on poisoned data. We defend against a range of backdoor attacks on classification tasks and significantly lower the attack success rate. As an extension, we explore the relationship between intended backdoor attacks and unintended dataset bias, and demonstrate the feasibility of the maximum entropy principle in de-biasing.", }
Pre-trained language models (PLMs) can be stealthily misled to target outputs by backdoor attacks when encountering poisoned samples, without performance degradation on clean samples. The stealthiness of backdoor attacks is commonly attained through minimal cross-entropy loss fine-tuning on a union of poisoned and clean samples. Existing defense paradigms provide a workaround by detecting and removing poisoned samples at pre-training or inference time. On the contrary, we provide a new perspective where the backdoor attack is directly reversed. Specifically, maximum entropy loss is incorporated in training to neutralize the minimal cross-entropy loss fine-tuning on poisoned data. We defend against a range of backdoor attacks on classification tasks and significantly lower the attack success rate. As an extension, we explore the relationship between intended backdoor attacks and unintended dataset bias, and demonstrate the feasibility of the maximum entropy principle in de-biasing.
[ "Liu, Zhengxiao", "Shen, Bowen", "Lin, Zheng", "Wang, Fali", "Wang, Weiping" ]
Maximum Entropy Loss, the Silver Bullet Targeting Backdoor Attacks in Pre-trained Language Models
findings-acl.237
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
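The defense above is a single extra loss term. A minimal PyTorch sketch of the maximum entropy regularizer follows, with the mixing weight treated as an assumed hyperparameter:

```python
import torch
import torch.nn.functional as F

def max_entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    """Negative mean entropy of the predictive distribution. Minimising
    it pushes predictions toward uniform, counteracting the
    over-confident trigger-to-label mapping a backdoor instils."""
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)
    return -entropy.mean()

logits = torch.randn(8, 2)                   # a toy batch of logits
labels = torch.randint(0, 2, (8,))
lam = 1.0                                    # assumed mixing weight
loss = F.cross_entropy(logits, labels) + lam * max_entropy_loss(logits)
```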
https://aclanthology.org/2023.findings-acl.238.bib
https://aclanthology.org/2023.findings-acl.238/
@inproceedings{xu-etal-2023-improving, title = "Improving Named Entity Recognition via Bridge-based Domain Adaptation", author = "Xu, Jingyun and Zheng, Changmeng and Cai, Yi and Chua, Tat-Seng", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.238", doi = "10.18653/v1/2023.findings-acl.238", pages = "3869--3882", abstract = "Recent studies have shown remarkable success in cross-domain named entity recognition (cross-domain NER). Despite the promising results, existing methods mainly utilize pre-trained language models like BERT to represent words. As such, the original chaotic representations may make it difficult for them to distinguish entity types, leading to entity type misclassification. To this end, we attempt to utilize contrastive learning to refine the original representations and propose a model-agnostic framework named MoCL for cross-domain NER. Additionally, we respectively combine MoCL with two distinctive cross-domain NER methods and two pre-trained language models to explore its generalization ability. Empirical results on seven domains show the effectiveness and good generalization ability of MoCL.", }
Recent studies have shown remarkable success in cross-domain named entity recognition (cross-domain NER). Despite the promising results, existing methods mainly utilize pre-trained language models like BERT to represent words. As such, the original chaotic representations may make it difficult for them to distinguish entity types, leading to entity type misclassification. To this end, we attempt to utilize contrastive learning to refine the original representations and propose a model-agnostic framework named MoCL for cross-domain NER. Additionally, we respectively combine MoCL with two distinctive cross-domain NER methods and two pre-trained language models to explore its generalization ability. Empirical results on seven domains show the effectiveness and good generalization ability of MoCL.
[ "Xu, Jingyun", "Zheng, Changmeng", "Cai, Yi", "Chua, Tat-Seng" ]
Improving Named Entity Recognition via Bridge-based Domain Adaptation
findings-acl.238
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.239.bib
https://aclanthology.org/2023.findings-acl.239/
@inproceedings{si-etal-2023-santa, title = "{SANTA}: Separate Strategies for Inaccurate and Incomplete Annotation Noise in Distantly-Supervised Named Entity Recognition", author = "Si, Shuzheng and Cai, Zefan and Zeng, Shuang and Feng, Guoqiang and Lin, Jiaxing and Chang, Baobao", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.239", doi = "10.18653/v1/2023.findings-acl.239", pages = "3883--3896", abstract = "Distantly-Supervised Named Entity Recognition effectively alleviates the burden of time-consuming and expensive annotation in the supervised setting. But the context-free matching process and the limited coverage of knowledge bases introduce inaccurate and incomplete annotation noise, respectively. Previous studies either considered only the incomplete type or indiscriminately handled both types of noise with the same strategy. In this paper, we argue that the different causes of the two types of noise call for different strategies in model architecture. Therefore, we propose SANTA to handle these two types of noise separately with (1) Memory-smoothed Focal Loss and Entity-aware KNN to relieve the entity ambiguity problem caused by inaccurate annotation, and (2) Boundary Mixup to alleviate the decision boundary shifting problem caused by incomplete annotation, and a noise-tolerant loss to improve the model{'}s robustness. Benefiting from our separate tailored strategies, we confirm in the experiment that the two types of noise are well mitigated. SANTA also achieves a new state-of-the-art on five public datasets.", }
Distantly-Supervised Named Entity Recognition effectively alleviates the burden of time-consuming and expensive annotation in the supervised setting. But the context-free matching process and the limited coverage of knowledge bases introduce inaccurate and incomplete annotation noise, respectively. Previous studies either considered only the incomplete type or indiscriminately handled both types of noise with the same strategy. In this paper, we argue that the different causes of the two types of noise call for different strategies in model architecture. Therefore, we propose SANTA to handle these two types of noise separately with (1) Memory-smoothed Focal Loss and Entity-aware KNN to relieve the entity ambiguity problem caused by inaccurate annotation, and (2) Boundary Mixup to alleviate the decision boundary shifting problem caused by incomplete annotation, and a noise-tolerant loss to improve the model{'}s robustness. Benefiting from our separate tailored strategies, we confirm in the experiment that the two types of noise are well mitigated. SANTA also achieves a new state-of-the-art on five public datasets.
[ "Si, Shuzheng", "Cai, Zefan", "Zeng, Shuang", "Feng, Guoqiang", "Lin, Jiaxing", "Chang, Baobao" ]
SANTA: Separate Strategies for Inaccurate and Incomplete Annotation Noise in Distantly-Supervised Named Entity Recognition
findings-acl.239
Poster
2305.04076
[ "https://github.com/pkunlp-icler/santa" ]
-1
-1
-1
-1
0
[]
[]
[]
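SANTA's Boundary Mixup builds on plain mixup interpolation. The sketch below shows the generic form on arbitrary vectors; applying it specifically to token representations around entity boundaries, as the paper does, is left abstract, and alpha is an assumed hyperparameter.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha: float = 0.2):
    """Plain mixup: convex-combine two inputs and their label
    distributions with a Beta(alpha, alpha) weight. Boundary Mixup
    applies the same interpolation to representations of tokens
    around entity boundaries."""
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x_mix, y_mix = mixup(np.array([1.0, 0.0]), np.array([1.0, 0.0]),
                     np.array([0.0, 1.0]), np.array([0.0, 1.0]))
```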
https://aclanthology.org/2023.findings-acl.240.bib
https://aclanthology.org/2023.findings-acl.240/
@inproceedings{nozza-hovy-2023-state, title = "The State of Profanity Obfuscation in Natural Language Processing Scientific Publications", author = "Nozza, Debora and Hovy, Dirk", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.240", doi = "10.18653/v1/2023.findings-acl.240", pages = "3897--3909", abstract = "Work on hate speech has made considering rude and harmful examples in scientific publications inevitable. This situation raises various problems, such as whether or not to obscure profanities. While science must accurately disclose what it does, the unwarranted spread of hate speech can harm readers and increases its internet frequency. While maintaining publications{'} professional appearance, obfuscating profanities makes it challenging to evaluate the content, especially for non-native speakers. Surveying 150 ACL papers, we discovered that obfuscation is usually used for English but not other languages, and even then, quite unevenly. We discuss the problems with obfuscation and suggest a multilingual community resource called PrOf with a Python module to standardize profanity obfuscation processes. We believe PrOf can help scientific publication policies to make hate speech work accessible and comparable, irrespective of language.", }
Work on hate speech has made considering rude and harmful examples in scientific publications inevitable. This situation raises various problems, such as whether or not to obscure profanities. While science must accurately disclose what it does, the unwarranted spread of hate speech can harm readers and increases its internet frequency. While maintaining publications{'} professional appearance, obfuscating profanities makes it challenging to evaluate the content, especially for non-native speakers. Surveying 150 ACL papers, we discovered that obfuscation is usually used for English but not other languages, and even then, quite unevenly. We discuss the problems with obfuscation and suggest a multilingual community resource called PrOf with a Python module to standardize profanity obfuscation processes. We believe PrOf can help scientific publication policies to make hate speech work accessible and comparable, irrespective of language.
[ "Nozza, Debora", "Hovy, Dirk" ]
The State of Profanity Obfuscation in Natural Language Processing Scientific Publications
findings-acl.240
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.findings-acl.241.bib
https://aclanthology.org/2023.findings-acl.241/
@inproceedings{ranasinghe-zampieri-2023-teacher, title = "Teacher and Student Models of Offensive Language in Social Media", author = "Ranasinghe, Tharindu and Zampieri, Marcos", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.241", doi = "10.18653/v1/2023.findings-acl.241", pages = "3910--3922", abstract = "State-of-the-art approaches to identifying offensive language online make use of large pre-trained transformer models. However, the inference time, disk, and memory requirements of these transformer models present challenges for their wide usage in the real world. Even the distilled transformer models remain prohibitively large for many usage scenarios. To cope with these challenges, in this paper, we propose transferring knowledge from transformer models to much smaller neural models to make predictions at the token- and at the post-level. We show that this approach leads to lightweight offensive language identification models that perform on par with large transformers but with 100 times fewer parameters and much less memory usage.", }
State-of-the-art approaches to identifying offensive language online make use of large pre-trained transformer models. However, the inference time, disk, and memory requirements of these transformer models present challenges for their wide usage in the real world. Even the distilled transformer models remain prohibitively large for many usage scenarios. To cope with these challenges, in this paper, we propose transferring knowledge from transformer models to much smaller neural models to make predictions at the token- and at the post-level. We show that this approach leads to lightweight offensive language identification models that perform on par with large transformers but with 100 times fewer parameters and much less memory usage.
[ "Ranasinghe, Tharindu", "Zampieri, Marcos" ]
Teacher and Student Models of Offensive Language in Social Media
findings-acl.241
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
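A minimal knowledge-distillation sketch in the spirit of the record above: a small student is trained on a large teacher's softened label distribution plus the gold labels. The model shapes, temperature, and loss weighting are illustrative assumptions, not the authors' exact setup.
```python
# Distillation loss sketch; tensors stand in for real teacher/student model outputs.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL between temperature-softened teacher and student distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy on gold offensive / not-offensive labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random tensors (batch of 4 posts, 2 classes).
student_logits = torch.randn(4, 2, requires_grad=True)
teacher_logits = torch.randn(4, 2)
labels = torch.tensor([0, 1, 1, 0])
print(distillation_loss(student_logits, teacher_logits, labels))
```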
https://aclanthology.org/2023.findings-acl.242.bib
https://aclanthology.org/2023.findings-acl.242/
@inproceedings{zhao-etal-2023-simple, title = "A Simple Yet Strong Domain-Agnostic De-bias Method for Zero-Shot Sentiment Classification", author = "Zhao, Yang and Nasukawa, Tetsuya and Muraoka, Masayasu and Bhattacharjee, Bishwaranjan", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.242", doi = "10.18653/v1/2023.findings-acl.242", pages = "3923--3931", abstract = "Zero-shot prompt-based learning has made much progress in sentiment analysis, and considerable effort has been dedicated to designing high-performing prompt templates. However, two problems exist. First, large language models are often biased toward their pre-training data, leading to poor performance with prompt templates that the models have rarely seen. Second, adapting to different domains usually requires re-designing prompt templates, which is time-consuming and inefficient. To remedy both shortcomings, we propose a simple yet strong data construction method to de-bias a given prompt template, yielding a large performance improvement in sentiment analysis tasks across different domains, pre-trained language models, and prompt templates. We also demonstrate the advantage of using domain-agnostic generic responses over in-domain ground-truth data.", }
Zero-shot prompt-based learning has made much progress in sentiment analysis, and considerable effort has been dedicated to designing high-performing prompt templates. However, two problems exist. First, large language models are often biased toward their pre-training data, leading to poor performance with prompt templates that the models have rarely seen. Second, adapting to different domains usually requires re-designing prompt templates, which is time-consuming and inefficient. To remedy both shortcomings, we propose a simple yet strong data construction method to de-bias a given prompt template, yielding a large performance improvement in sentiment analysis tasks across different domains, pre-trained language models, and prompt templates. We also demonstrate the advantage of using domain-agnostic generic responses over in-domain ground-truth data.
[ "Zhao, Yang", "Nasukawa, Tetsuya", "Muraoka, Masayasu", "Bhattacharjee, Bishwaranjan" ]
A Simple Yet Strong Domain-Agnostic De-bias Method for Zero-Shot Sentiment Classification
findings-acl.242
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
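A sketch of de-biasing a prompt template by probing it with domain-agnostic generic responses and dividing out the estimated label prior, one plausible reading of the idea in the record above; the probe inputs, the fake scoring function, and the correction rule are assumptions, and the authors' exact data-construction method may differ.
```python
# Prompt de-biasing sketch; label_scores fakes a PLM that favors 'positive'.
import numpy as np

def label_scores(text: str) -> np.ndarray:
    """Stand-in for a PLM scoring label words (positive, negative) for a filled prompt."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.dirichlet([3.0, 1.0])  # biased prior: positive >> negative

# 1) Probe the template with generic, content-free responses to estimate its bias.
generic_probes = ["N/A", "", "It is what it is.", "Okay."]
bias = np.mean([label_scores(p) for p in generic_probes], axis=0)

# 2) At test time, divide out the estimated bias and renormalize before predicting.
def debiased_predict(text: str) -> int:
    p = label_scores(text) / bias
    return int(np.argmax(p / p.sum()))

print(debiased_predict("The movie was dreadful."))  # 0 = positive, 1 = negative
```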
https://aclanthology.org/2023.findings-acl.243.bib
https://aclanthology.org/2023.findings-acl.243/
@inproceedings{das-etal-2023-balancing, title = "Balancing the Effect of Training Dataset Distribution of Multiple Styles for Multi-Style Text Transfer", author = "Das, Debarati and Ma, David and Kang, Dongyeop", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.243", doi = "10.18653/v1/2023.findings-acl.243", pages = "3932--3943", abstract = "Text style transfer is an exciting task within the field of natural language generation that is often plagued by the need for high-quality paired datasets. Furthermore, training a model for multi-attribute text style transfer requires datasets with sufficient support across all combinations of the considered stylistic attributes, adding to the challenges of training a style transfer model. This paper explores the impact of training data input diversity on the quality of the text generated by the multi-style transfer model. We construct a pseudo-parallel dataset by devising heuristics to adjust the style distribution in the training samples. We balance our training dataset using marginal and joint distributions to train our style transfer models. We observe that a balanced dataset produces more effective control over multiple styles than an imbalanced or skewed one. Through quantitative analysis, we explore the impact of multiple style distributions in training data on the style-transferred output. These findings will better inform the design of style-transfer datasets.", }
Text style transfer is an exciting task within the field of natural language generation that is often plagued by the need for high-quality paired datasets. Furthermore, training a model for multi-attribute text style transfer requires datasets with sufficient support across all combinations of the considered stylistic attributes, adding to the challenges of training a style transfer model. This paper explores the impact of training data input diversity on the quality of the text generated by the multi-style transfer model. We construct a pseudo-parallel dataset by devising heuristics to adjust the style distribution in the training samples. We balance our training dataset using marginal and joint distributions to train our style transfer models. We observe that a balanced dataset produces more effective control over multiple styles than an imbalanced or skewed one. Through quantitative analysis, we explore the impact of multiple style distributions in training data on the style-transferred output. These findings will better inform the design of style-transfer datasets.
[ "Das, Debarati", "Ma, David", "Kang, Dongyeop" ]
Balancing the Effect of Training Dataset Distribution of Multiple Styles for Multi-Style Text Transfer
findings-acl.243
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
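An illustrative sketch of balancing a multi-style training pool over the joint distribution of style attributes, as motivated by the record above; the specific attributes, toy examples, and resample-with-replacement rule are assumptions, not the authors' heuristics.
```python
# Joint-distribution balancing sketch for a multi-style training pool.
import random
from collections import defaultdict

samples = [
    {"text": "hey, gr8 stuff!", "formality": "informal", "sentiment": "positive"},
    {"text": "This report is excellent.", "formality": "formal", "sentiment": "positive"},
    {"text": "ugh, this sucks", "formality": "informal", "sentiment": "negative"},
    # ... an imbalanced pool would contain many more of some combinations
]

def balance_joint(pool, n_per_cell, seed=0):
    """Resample (with replacement) so every observed (formality, sentiment) cell has n_per_cell items."""
    rng = random.Random(seed)
    cells = defaultdict(list)
    for s in pool:
        cells[(s["formality"], s["sentiment"])].append(s)
    balanced = []
    for items in cells.values():
        balanced.extend(rng.choices(items, k=n_per_cell))
    return balanced

print(len(balance_joint(samples, n_per_cell=10)))  # 3 observed cells x 10 = 30
```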
https://aclanthology.org/2023.findings-acl.244.bib
https://aclanthology.org/2023.findings-acl.244/
@inproceedings{wang-etal-2023-benchmark, title = "A Benchmark on Extremely Weakly Supervised Text Classification: Reconcile Seed Matching and Prompting Approaches", author = "Wang, Zihan and Wang, Tianle and Mekala, Dheeraj and Shang, Jingbo", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.244", doi = "10.18653/v1/2023.findings-acl.244", pages = "3944--3962", abstract = "Extremely Weakly Supervised Text Classification (XWS-TC) refers to text classification based on minimal high-level human guidance, such as a few label-indicative seed words or classification instructions. There are two mainstream approaches for XWS-TC that, however, have never been rigorously compared: (1) training classifiers based on pseudo-labels generated by (softly) matching seed words (Seed) and (2) prompting (and calibrating) language models using classification instructions (and raw texts) to decode label words (Prompt). This paper presents the first XWS-TC benchmark to compare the two approaches on fair grounds, where the datasets, supervision, and hyperparameter choices are standardized across methods. Our benchmarking results suggest that (1) both Seed and Prompt approaches are competitive and there is no clear winner; (2) Seed is empirically more tolerant than Prompt to changes in human guidance (e.g., seed words, classification instructions, and label words); (3) Seed is empirically more selective than Prompt with respect to the pre-trained language model; (4) recent Seed and Prompt methods have close connections, and a clustering post-processing step based on raw in-domain texts is a strong performance booster for both. We hope this benchmark serves as a guideline for selecting XWS-TC methods in different scenarios and stimulates interest in developing guidance- and model-robust XWS-TC methods.", }
Extremely Weakly Supervised Text Classification (XWS-TC) refers to text classification based on minimal high-level human guidance, such as a few label-indicative seed words or classification instructions. There are two mainstream approaches for XWS-TC that, however, have never been rigorously compared: (1) training classifiers based on pseudo-labels generated by (softly) matching seed words (Seed) and (2) prompting (and calibrating) language models using classification instructions (and raw texts) to decode label words (Prompt). This paper presents the first XWS-TC benchmark to compare the two approaches on fair grounds, where the datasets, supervision, and hyperparameter choices are standardized across methods. Our benchmarking results suggest that (1) both Seed and Prompt approaches are competitive and there is no clear winner; (2) Seed is empirically more tolerant than Prompt to changes in human guidance (e.g., seed words, classification instructions, and label words); (3) Seed is empirically more selective than Prompt with respect to the pre-trained language model; (4) recent Seed and Prompt methods have close connections, and a clustering post-processing step based on raw in-domain texts is a strong performance booster for both. We hope this benchmark serves as a guideline for selecting XWS-TC methods in different scenarios and stimulates interest in developing guidance- and model-robust XWS-TC methods.
[ "Wang, Zihan", "Wang, Tianle", "Mekala, Dheeraj", "Shang, Jingbo" ]
A Benchmark on Extremely Weakly Supervised Text Classification: Reconcile Seed Matching and Prompting Approaches
findings-acl.244
Poster
2305.12749
[ "https://github.com/zihanwangki/x-tc" ]
-1
-1
-1
-1
0
[]
[]
[]
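A minimal sketch of the Seed approach compared in the record above: pseudo-label raw texts by seed-word matching, after which a classifier would be trained on those labels. The seed lists and hard count-based scoring are toy assumptions; real Seed methods match softly.
```python
# Seed-matching pseudo-labeling sketch for XWS-TC.
SEEDS = {
    "sports": ["game", "team", "score"],
    "politics": ["election", "senate", "policy"],
}

def pseudo_label(text: str):
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    scores = {label: sum(tokens.count(w) for w in words) for label, words in SEEDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None  # abstain when no seed matches

corpus = ["The team won the game with a record score.",
          "The senate passed the new election policy."]
print([pseudo_label(d) for d in corpus])  # ['sports', 'politics']
```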
https://aclanthology.org/2023.findings-acl.245.bib
https://aclanthology.org/2023.findings-acl.245/
@inproceedings{liu-liu-2023-ambiguity, title = "Ambiguity Meets Uncertainty: Investigating Uncertainty Estimation for Word Sense Disambiguation", author = "Liu, Zhu and Liu, Ying", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.245", doi = "10.18653/v1/2023.findings-acl.245", pages = "3963--3977", abstract = "Word sense disambiguation (WSD), which aims to determine an appropriate sense for a target word given its context, is crucial for natural language understanding. Existing supervised methods treat WSD as a classification task and have achieved remarkable performance. However, they ignore uncertainty estimation (UE) in the real-world setting, where the data is always noisy and out of distribution. This paper extensively studies UE on a benchmark designed for WSD. Specifically, we first compare four uncertainty scores for a state-of-the-art WSD model and verify that the conventional predictive probabilities obtained at the end of the model are inadequate for quantifying uncertainty. Then, we examine the model's capability to capture data and model uncertainties using the selected UE score on well-designed test scenarios, and discover that the model reflects data uncertainty satisfactorily but underestimates model uncertainty. Furthermore, we explore numerous lexical properties that intrinsically affect data uncertainty and provide a detailed analysis of four critical aspects: syntactic category, morphology, sense granularity, and semantic relations.", }
Word sense disambiguation (WSD), which aims to determine an appropriate sense for a target word given its context, is crucial for natural language understanding. Existing supervised methods treat WSD as a classification task and have achieved remarkable performance. However, they ignore uncertainty estimation (UE) in the real-world setting, where the data is always noisy and out of distribution. This paper extensively studies UE on a benchmark designed for WSD. Specifically, we first compare four uncertainty scores for a state-of-the-art WSD model and verify that the conventional predictive probabilities obtained at the end of the model are inadequate for quantifying uncertainty. Then, we examine the model's capability to capture data and model uncertainties using the selected UE score on well-designed test scenarios, and discover that the model reflects data uncertainty satisfactorily but underestimates model uncertainty. Furthermore, we explore numerous lexical properties that intrinsically affect data uncertainty and provide a detailed analysis of four critical aspects: syntactic category, morphology, sense granularity, and semantic relations.
[ "Liu, Zhu", "Liu, Ying" ]
Ambiguity Meets Uncertainty: Investigating Uncertainty Estimation for Word Sense Disambiguation
findings-acl.245
Poster
2305.13119
[ "https://github.com/ryanliut/wsd-ue" ]
-1
-1
-1
-1
0
[]
[]
[]
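A sketch of common softmax-based uncertainty scores of the kind compared in the record above, computed from a WSD model's sense distribution. The dummy probabilities and these particular three score definitions are assumptions; the paper's exact set of four scores may differ.
```python
# Uncertainty scores from a predicted distribution over candidate senses.
import numpy as np

def uncertainty_scores(probs: np.ndarray) -> dict:
    probs = probs / probs.sum()
    top2 = np.sort(probs)[-2:]  # two largest probabilities, ascending
    return {
        "least_confidence": float(1.0 - probs.max()),      # 1 - max softmax probability
        "margin": float(1.0 - (top2[1] - top2[0])),        # small top-2 gap => high uncertainty
        "entropy": float(-(probs * np.log(probs + 1e-12)).sum()),
    }

# Dummy distribution over four candidate senses of a target word.
print(uncertainty_scores(np.array([0.4, 0.3, 0.2, 0.1])))
```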
https://aclanthology.org/2023.findings-acl.246.bib
https://aclanthology.org/2023.findings-acl.246/
@inproceedings{wang-etal-2023-zemi, title = "Zemi: Learning Zero-Shot Semi-Parametric Language Models from Multiple Tasks", author = "Wang, Zhenhailong and Pan, Xiaoman and Yu, Dian and Yu, Dong and Chen, Jianshu and Ji, Heng", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.246", doi = "10.18653/v1/2023.findings-acl.246", pages = "3978--4004", abstract = "Although large language models have exhibited impressive zero-shot ability, their huge size generally incurs high cost. Recently, semi-parametric language models, which augment a smaller language model with retrieved related background knowledge, alleviate the need to store everything in the model parameters. Although existing semi-parametric language models have demonstrated promising language modeling capabilities, it remains unclear whether they can exhibit zero-shot abilities competitive with their fully-parametric counterparts. In this work, we introduce Zemi, a semi-parametric language model for zero-shot task generalization. To the best of our knowledge, this is the first semi-parametric language model that can demonstrate strong zero-shot performance on a wide range of held-out unseen tasks. We train Zemi with semi-parametric multitask training, which shows significant improvement over the parametric multitask training proposed by T0. Specifically, during both training and inference, Zemi is equipped with a retrieval system based on the unlabeled pretraining corpus of our backbone model. To address the unique challenges of large-scale retrieval, we further propose a novel retrieval-augmentation fusion module that can effectively incorporate noisy retrieved documents. Finally, we present detailed analysis and ablation studies on the key ingredients for building effective zero-shot semi-parametric language models. Notably, our proposed Zemi{\_}Large model outperforms T0-3B by 16{\%} across seven diverse evaluation tasks while being 3.8x smaller in scale.", }
Although large language models have exhibited impressive zero-shot ability, their huge size generally incurs high cost. Recently, semi-parametric language models, which augment a smaller language model with retrieved related background knowledge, alleviate the need to store everything in the model parameters. Although existing semi-parametric language models have demonstrated promising language modeling capabilities, it remains unclear whether they can exhibit zero-shot abilities competitive with their fully-parametric counterparts. In this work, we introduce Zemi, a semi-parametric language model for zero-shot task generalization. To the best of our knowledge, this is the first semi-parametric language model that can demonstrate strong zero-shot performance on a wide range of held-out unseen tasks. We train Zemi with semi-parametric multitask training, which shows significant improvement over the parametric multitask training proposed by T0. Specifically, during both training and inference, Zemi is equipped with a retrieval system based on the unlabeled pretraining corpus of our backbone model. To address the unique challenges of large-scale retrieval, we further propose a novel retrieval-augmentation fusion module that can effectively incorporate noisy retrieved documents. Finally, we present detailed analysis and ablation studies on the key ingredients for building effective zero-shot semi-parametric language models. Notably, our proposed Zemi{\_}Large model outperforms T0-3B by 16{\%} across seven diverse evaluation tasks while being 3.8x smaller in scale.
[ "Wang, Zhenhailong", "Pan, Xiaoman", "Yu, Dian", "Yu, Dong", "Chen, Jianshu", "Ji, Heng" ]
Zemi: Learning Zero-Shot Semi-Parametric Language Models from Multiple Tasks
findings-acl.246
Poster
2210.00185
[ "https://github.com/mikewangwzhl/zemi" ]
-1
-1
-1
-1
0
[]
[]
[]
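A toy sketch of semi-parametric augmentation in the spirit of the record above: retrieve related passages for an input and combine them with the query before generation. The hashing bag-of-words retriever and simple concatenation are stand-ins (assumptions) for the paper's retriever and its retrieval-augmentation fusion module.
```python
# Retrieve-then-augment sketch for a semi-parametric LM input.
import numpy as np

CORPUS = ["Paris is the capital of France.",
          "The Amazon is the largest rainforest.",
          "Transformers use self-attention."]

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Cheap hashing bag-of-words embedding as a stand-in for a real encoder."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def retrieve(query: str, k: int = 1):
    sims = [(float(embed(query) @ embed(doc)), doc) for doc in CORPUS]
    return [doc for _, doc in sorted(sims, reverse=True)[:k]]

query = "What is the capital of France?"
augmented_input = " [SEP] ".join(retrieve(query, k=1) + [query])
print(augmented_input)  # retrieved passage prepended before the query
```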
https://aclanthology.org/2023.findings-acl.247.bib
https://aclanthology.org/2023.findings-acl.247/
@inproceedings{dai-etal-2023-gpt, title = "Why Can {GPT} Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers", author = "Dai, Damai and Sun, Yutao and Dong, Li and Hao, Yaru and Ma, Shuming and Sui, Zhifang and Wei, Furu", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.247", doi = "10.18653/v1/2023.findings-acl.247", pages = "4005--4019", abstract = "Large pretrained language models have shown surprising in-context learning (ICL) ability. With a few demonstration input-label pairs, they can predict the label for an unseen input without parameter updates. Despite the great success in performance, its working mechanism remains an open question. In this paper, we explain language models as meta-optimizers and understand in-context learning as implicit finetuning. Theoretically, we show that Transformer attention has a dual form of gradient descent. Building on this, we understand ICL as follows: GPT first produces meta-gradients according to the demonstration examples, and then these meta-gradients are applied to the original GPT to build an ICL model. We comprehensively compare the behaviors of in-context learning and explicit finetuning on real tasks to provide empirical evidence that supports our understanding. Experimental results show that in-context learning behaves similarly to explicit finetuning from multiple perspectives. Inspired by the dual form between Transformer attention and gradient descent, we design a momentum-based attention by analogy with gradient descent with momentum. The improved performance over vanilla attention further supports our understanding from another perspective and, more importantly, shows the potential to utilize our understanding for future model design. The code is available at \url{https://aka.ms/icl}.", }
Large pretrained language models have shown surprising in-context learning (ICL) ability. With a few demonstration input-label pairs, they can predict the label for an unseen input without parameter updates. Despite the great success in performance, its working mechanism remains an open question. In this paper, we explain language models as meta-optimizers and understand in-context learning as implicit finetuning. Theoretically, we show that Transformer attention has a dual form of gradient descent. Building on this, we understand ICL as follows: GPT first produces meta-gradients according to the demonstration examples, and then these meta-gradients are applied to the original GPT to build an ICL model. We comprehensively compare the behaviors of in-context learning and explicit finetuning on real tasks to provide empirical evidence that supports our understanding. Experimental results show that in-context learning behaves similarly to explicit finetuning from multiple perspectives. Inspired by the dual form between Transformer attention and gradient descent, we design a momentum-based attention by analogy with gradient descent with momentum. The improved performance over vanilla attention further supports our understanding from another perspective and, more importantly, shows the potential to utilize our understanding for future model design. The code is available at \url{https://aka.ms/icl}.
[ "Dai, Damai", "Sun, Yutao", "Dong, Li", "Hao, Yaru", "Ma, Shuming", "Sui, Zhifang", "Wei, Furu" ]
Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers
findings-acl.247
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
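A numerical sketch of the "dual form" intuition described in the record above, for the simplified case of linear (softmax-free) attention: the contribution of demonstration tokens equals a weight update built from rank-1 outer products, a meta-gradient, applied to the query. The shapes and random data are toy assumptions; the paper treats the full softmax case by approximation.
```python
# Dual form of linear attention: attention output == meta-gradient update applied to query.
import numpy as np

rng = np.random.default_rng(0)
d = 8
K = rng.normal(size=(4, d))   # keys from 4 demonstration tokens
V = rng.normal(size=(4, d))   # values from the same tokens
q = rng.normal(size=(d,))     # query vector for the test token

# Linear attention (no softmax): out = V^T (K q).
attn_out = V.T @ (K @ q)

# Dual form: accumulate rank-1 "meta-gradient" updates Delta_W = sum_i v_i k_i^T,
# then apply the updated weights to the query, as in implicit finetuning.
delta_W = sum(np.outer(V[i], K[i]) for i in range(K.shape[0]))
dual_out = delta_W @ q

print(np.allclose(attn_out, dual_out))  # True: the two computations coincide
```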
https://aclanthology.org/2023.findings-acl.248.bib
https://aclanthology.org/2023.findings-acl.248/
@inproceedings{chang-etal-2023-dramatic, title = "Dramatic Conversation Disentanglement", author = "Chang, Kent and Chen, Danica and Bamman, David", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.248", doi = "10.18653/v1/2023.findings-acl.248", pages = "4020--4046", abstract = "We present a new dataset for studying conversation disentanglement in movies and TV series. While previous work has focused on conversation disentanglement in IRC chatroom dialogues, movies and TV shows provide a space for studying complex pragmatic patterns of floor and topic change in face-to-face multi-party interactions. In this work, we draw on theoretical research in sociolinguistics, sociology, and film studies to operationalize a conversational thread (including the notion of a floor change) in dramatic texts, and use that definition to annotate a dataset of 10,033 dialogue turns (comprising 2,209 threads) from 831 movies. We compare the performance of several disentanglement models on this dramatic dataset, and apply the best-performing model to disentangle 808 movies. We see that, contrary to expectation, average thread lengths have not decreased significantly over the past 40 years, and characters portrayed by actors who are women, while underrepresented, initiate more new conversational threads relative to their speaking time.", }
We present a new dataset for studying conversation disentanglement in movies and TV series. While previous work has focused on conversation disentanglement in IRC chatroom dialogues, movies and TV shows provide a space for studying complex pragmatic patterns of floor and topic change in face-to-face multi-party interactions. In this work, we draw on theoretical research in sociolinguistics, sociology, and film studies to operationalize a conversational thread (including the notion of a floor change) in dramatic texts, and use that definition to annotate a dataset of 10,033 dialogue turns (comprising 2,209 threads) from 831 movies. We compare the performance of several disentanglement models on this dramatic dataset, and apply the best-performing model to disentangle 808 movies. We see that, contrary to expectation, average thread lengths have not decreased significantly over the past 40 years, and characters portrayed by actors who are women, while underrepresented, initiate more new conversational threads relative to their speaking time.
[ "Chang, Kent", "Chen, Danica", "Bamman, David" ]
Dramatic Conversation Disentanglement
findings-acl.248
Poster
2305.16648
[ "https://github.com/kentchang/dramatic-conversation-disentanglement" ]
-1
-1
-1
-1
0
[]
[]
[]
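A greedy illustration of the disentanglement task in the record above: each turn either continues the most similar existing thread or opens a new one (a floor change). The Jaccard similarity and threshold are toy stand-ins (assumptions) for the trained disentanglement models the paper compares.
```python
# Greedy thread assignment sketch for conversation disentanglement.
def jaccard(a: str, b: str) -> float:
    A, B = set(a.lower().split()), set(b.lower().split())
    return len(A & B) / max(len(A | B), 1)

def disentangle(turns, threshold=0.1):
    threads = []  # each thread is a list of turn indices
    for i, turn in enumerate(turns):
        best, best_sim = None, threshold
        for t_id, thread in enumerate(threads):
            sim = jaccard(turn, turns[thread[-1]])  # compare with the thread's last turn
            if sim > best_sim:
                best, best_sim = t_id, sim
        if best is None:
            threads.append([i])       # floor change: a new conversational thread
        else:
            threads[best].append(i)
    return threads

turns = ["Where is the money, Lebowski?", "The money is in the car.",
         "Nice weather today.", "The weather could not be nicer."]
print(disentangle(turns))  # -> [[0, 1], [2, 3]]
```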
https://aclanthology.org/2023.findings-acl.249.bib
https://aclanthology.org/2023.findings-acl.249/
@inproceedings{kim-etal-2023-injecting, title = "Injecting Comparison Skills in Task-Oriented Dialogue Systems for Database Search Results Disambiguation", author = "Kim, Yongil and Hwang, Yerin and Shin, Joongbo and Bae, Hyunkyung and Jung, Kyomin", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.249", doi = "10.18653/v1/2023.findings-acl.249", pages = "4047--4065", abstract = "In task-oriented dialogue (TOD) systems designed to help users accomplish specific goals in one or more domains, the agent retrieves entities that satisfy user constraints from the database. However, when multiple database search results exist, ambiguity arises regarding which results to select and present to the user. Existing TOD systems handle this ambiguity by randomly selecting one or a few results and presenting their names to the user. However, in a real scenario, users do not always accept a randomly recommended entity, and users should have access to more comprehensive information about the search results. To address this limitation, we propose a novel task called Comparison-Based database search Ambiguity handling (CBA), which handles ambiguity in database search results by comparing the properties of multiple entities, enabling users to choose according to their preferences. Accordingly, we introduce a new framework for automatically collecting high-quality dialogue data, along with the Disambiguating Schema-guided Dialogue (DSD) dataset, an augmented version of the SGD dataset. Experimental studies on the DSD dataset demonstrate that training baseline models with the dataset effectively addresses the CBA task. Our dataset and code will be made publicly available.", }
In task-oriented dialogue (TOD) systems designed to help users accomplish specific goals in one or more domains, the agent retrieves entities that satisfy user constraints from the database. However, when multiple database search results exist, ambiguity arises regarding which results to select and present to the user. Existing TOD systems handle this ambiguity by randomly selecting one or a few results and presenting their names to the user. However, in a real scenario, users do not always accept a randomly recommended entity, and users should have access to more comprehensive information about the search results. To address this limitation, we propose a novel task called Comparison-Based database search Ambiguity handling (CBA), which handles ambiguity in database search results by comparing the properties of multiple entities, enabling users to choose according to their preferences. Accordingly, we introduce a new framework for automatically collecting high-quality dialogue data, along with the Disambiguating Schema-guided Dialogue (DSD) dataset, an augmented version of the SGD dataset. Experimental studies on the DSD dataset demonstrate that training baseline models with the dataset effectively addresses the CBA task. Our dataset and code will be made publicly available.
[ "Kim, Yongil", "Hwang, Yerin", "Shin, Joongbo", "Bae, Hyunkyung", "Jung, Kyomin" ]
Injecting Comparison Skills in Task-Oriented Dialogue Systems for Database Search Results Disambiguation
findings-acl.249
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
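An illustrative sketch of comparison-based disambiguation as described in the record above: when the database returns several matches, compare their differing properties instead of recommending one at random. The restaurant schema and response template are assumptions for illustration, not the DSD annotation format.
```python
# Comparison-based disambiguation sketch for multiple database search results.
def compare_results(results):
    if len(results) == 1:
        return "I found {}.".format(results[0]["name"])
    # Keep only the properties on which the retrieved entities actually differ.
    keys = [k for k in results[0] if k != "name" and len({r[k] for r in results}) > 1]
    parts = []
    for r in results:
        attrs = ", ".join("{}: {}".format(k, r[k]) for k in keys)
        parts.append("{} ({})".format(r["name"], attrs))
    return "I found several options: " + " vs. ".join(parts) + ". Which do you prefer?"

restaurants = [
    {"name": "Luigi's", "price": "cheap", "area": "north", "food": "italian"},
    {"name": "Trattoria Roma", "price": "expensive", "area": "north", "food": "italian"},
]
print(compare_results(restaurants))
# -> compares on 'price' only, since 'area' and 'food' are identical
```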
https://aclanthology.org/2023.findings-acl.250.bib
https://aclanthology.org/2023.findings-acl.250/
@inproceedings{zhang-etal-2023-emergent, title = "Emergent Modularity in Pre-trained Transformers", author = "Zhang, Zhengyan and Zeng, Zhiyuan and Lin, Yankai and Xiao, Chaojun and Wang, Xiaozhi and Han, Xu and Liu, Zhiyuan and Xie, Ruobing and Sun, Maosong and Zhou, Jie", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.250", doi = "10.18653/v1/2023.findings-acl.250", pages = "4066--4083", abstract = "This work examines the presence of modularity in pre-trained Transformers, a feature commonly found in human brains and thought to be vital for general intelligence. In analogy to human brains, we consider two main characteristics of modularity: (1) functional specialization of neurons: we evaluate whether each neuron is mainly specialized in a certain function, and find that the answer is yes. (2) function-based neuron grouping: we attempt to find a structure that groups neurons into modules by function, where each module works for its corresponding function. Given the enormous number of possible structures, we focus on Mixture-of-Experts as a promising candidate, which partitions neurons into experts and usually activates different experts for different inputs. Experimental results show that there are functional experts, in which the neurons specialized in a certain function are clustered. Moreover, perturbing the activations of functional experts significantly affects the corresponding function. Finally, we study how modularity emerges during pre-training, and find that the modular structure is stabilized at an early stage, faster than neuron stabilization. This suggests that the Transformer first constructs the modular structure and then learns fine-grained neuron functions. Our code and data are available at \url{https://github.com/THUNLP/modularity-analysis}.", }
This work examines the presence of modularity in pre-trained Transformers, a feature commonly found in human brains and thought to be vital for general intelligence. In analogy to human brains, we consider two main characteristics of modularity: (1) functional specialization of neurons: we evaluate whether each neuron is mainly specialized in a certain function, and find that the answer is yes. (2) function-based neuron grouping: we attempt to find a structure that groups neurons into modules by function, where each module works for its corresponding function. Given the enormous number of possible structures, we focus on Mixture-of-Experts as a promising candidate, which partitions neurons into experts and usually activates different experts for different inputs. Experimental results show that there are functional experts, in which the neurons specialized in a certain function are clustered. Moreover, perturbing the activations of functional experts significantly affects the corresponding function. Finally, we study how modularity emerges during pre-training, and find that the modular structure is stabilized at an early stage, faster than neuron stabilization. This suggests that the Transformer first constructs the modular structure and then learns fine-grained neuron functions. Our code and data are available at \url{https://github.com/THUNLP/modularity-analysis}.
[ "Zhang, Zhengyan", "Zeng, Zhiyuan", "Lin, Yankai", "Xiao, Chaojun", "Wang, Xiaozhi", "Han, Xu", "Liu, Zhiyuan", "Xie, Ruobing", "Sun, Maosong", "Zhou, Jie" ]
Emergent Modularity in Pre-trained Transformers
findings-acl.250
Poster
2305.18390
[ "https://github.com/thunlp/modularity-analysis" ]
-1
-1
-1
-1
0
[]
[]
[]
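A toy sketch of the analysis style in the record above: group FFN neurons into "experts" by clustering their activation profiles over inputs exercising different functions. The random activations with planted structure and the k-means clustering (assuming scikit-learn is available) are illustrative assumptions, not the authors' procedure.
```python
# Cluster neurons into functional experts from an activation matrix.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Fake activation matrix: 32 neurons x 6 probing functions (e.g., POS, NER, sentiment...).
# Planted structure: each half of the neurons responds mostly to three functions.
acts = rng.normal(0, 0.1, size=(32, 6))
acts[:16, :3] += 1.0   # first half specialized in functions 0-2
acts[16:, 3:] += 1.0   # second half specialized in functions 3-5

experts = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(acts)
print(experts)  # neurons with similar functional profiles land in the same expert
```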
https://aclanthology.org/2023.findings-acl.251.bib
https://aclanthology.org/2023.findings-acl.251/
@inproceedings{cong-etal-2023-universal, title = "Universal Information Extraction with Meta-Pretrained Self-Retrieval", author = "Cong, Xin and Yu, Bowen and Fang, Mengcheng and Liu, Tingwen and Yu, Haiyang and Hu, Zhongkai and Huang, Fei and Li, Yongbin and Wang, Bin", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.251", doi = "10.18653/v1/2023.findings-acl.251", pages = "4084--4100", abstract = "Universal Information Extraction (Universal IE) aims to solve different extraction tasks in a uniform text-to-structure generation manner. Such a generation procedure tends to struggle when there are complex information structures to be extracted. Retrieving knowledge from external knowledge bases may help models overcome this problem, but it is impossible to construct a knowledge base suitable for various IE tasks. Inspired by the fact that a large amount of knowledge is stored in pretrained language models (PLMs) and can be retrieved explicitly, in this paper we propose MetaRetriever, which retrieves task-specific knowledge from PLMs to enhance universal IE. As different IE tasks need different knowledge, we further propose a Meta-Pretraining Algorithm which allows MetaRetriever to quickly achieve maximum task-specific retrieval performance when fine-tuning on downstream IE tasks. Experimental results show that MetaRetriever achieves a new state of the art on 4 IE tasks and 12 datasets under fully-supervised, low-resource, and few-shot scenarios.", }
Universal Information Extraction (Universal IE) aims to solve different extraction tasks in a uniform text-to-structure generation manner. Such a generation procedure tends to struggle when there are complex information structures to be extracted. Retrieving knowledge from external knowledge bases may help models overcome this problem, but it is impossible to construct a knowledge base suitable for various IE tasks. Inspired by the fact that a large amount of knowledge is stored in pretrained language models (PLMs) and can be retrieved explicitly, in this paper we propose MetaRetriever, which retrieves task-specific knowledge from PLMs to enhance universal IE. As different IE tasks need different knowledge, we further propose a Meta-Pretraining Algorithm which allows MetaRetriever to quickly achieve maximum task-specific retrieval performance when fine-tuning on downstream IE tasks. Experimental results show that MetaRetriever achieves a new state of the art on 4 IE tasks and 12 datasets under fully-supervised, low-resource, and few-shot scenarios.
[ "Cong, Xin", "Yu, Bowen", "Fang, Mengcheng", "Liu, Tingwen", "Yu, Haiyang", "Hu, Zhongkai", "Huang, Fei", "Li, Yongbin", "Wang, Bin" ]
Universal Information Extraction with Meta-Pretrained Self-Retrieval
findings-acl.251
Poster
2306.10444
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
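A conceptual sketch of a MAML-style meta-training loop, one plausible reading of the Meta-Pretraining Algorithm named in the record above; the authors' exact procedure may differ, and the tiny linear model and synthetic tasks are assumptions chosen purely to show the inner-adapt / outer-update pattern that enables fast adaptation at fine-tuning time.
```python
# MAML-style meta-training sketch: adapt on a support set, meta-update on a query set.
import torch

model = torch.nn.Linear(4, 1)
meta_opt = torch.optim.SGD(model.parameters(), lr=0.01)
inner_lr = 0.1

def task_batch():
    x = torch.randn(8, 4)
    return x, x.sum(dim=1, keepdim=True)  # synthetic "task": predict the feature sum

for step in range(100):
    x_sup, y_sup = task_batch()          # support set: stands in for task-specific data
    x_qry, y_qry = task_batch()          # query set: evaluates the adapted parameters
    # Inner loop: one adaptation step on the support set (functional parameter update).
    loss_sup = torch.nn.functional.mse_loss(model(x_sup), y_sup)
    grads = torch.autograd.grad(loss_sup, list(model.parameters()), create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]
    # Outer loop: meta-update so that a single inner step yields strong query performance.
    pred = x_qry @ adapted[0].t() + adapted[1]
    loss_qry = torch.nn.functional.mse_loss(pred, y_qry)
    meta_opt.zero_grad()
    loss_qry.backward()
    meta_opt.step()

print(float(loss_qry))  # should shrink as meta-training makes adaptation fast
```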