bibtex_url (string, 41 to 50 chars) | proceedings (string, 38 to 47 chars) | bibtext (string, 709 to 3.56k chars) | abstract (string, 17 to 2.11k chars) | authors (sequence, 1 to 72 items) | title (string, 12 to 207 chars) | id (string, 7 to 16 chars) | type (string, 2 classes) | arxiv_id (string, 0 to 10 chars) | GitHub (sequence, 1 to 1 items) | paper_page (string, 276 classes) | n_linked_authors (int64, -1 to 13) | upvotes (int64, -1 to 14) | num_comments (int64, -1 to 11) | n_authors (int64, -1 to 44) | paper_page_exists_pre_conf (int64, 0 to 1) | Models (sequence, 0 to 100 items) | Datasets (sequence, 0 to 14 items) | Spaces (sequence, 0 to 100 items) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2023.acl-long.800.bib | https://aclanthology.org/2023.acl-long.800/ | @inproceedings{ju-etal-2023-compare,
title = "A Compare-and-contrast Multistage Pipeline for Uncovering Financial Signals in Financial Reports",
author = "Ju, Jia-Huei and
Huang, Yu-Shiang and
Lin, Cheng-Wei and
Lin, Che and
Wang, Chuan-Ju",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.800",
doi = "10.18653/v1/2023.acl-long.800",
pages = "14307--14321",
abstract = "In this paper, we address the challenge of discovering financial signals in narrative financial reports. As these documents are often lengthy and tend to blend routine information with new information, it is challenging for professionals to discern critical financial signals. To this end, we leverage the inherent nature of the year-to-year structure of reports to define a novel signal-highlighting task; more importantly, we propose a compare-and-contrast multistage pipeline that recognizes different relationships between the reports and locates relevant rationales for these relationships. We also create and publicly release a human-annotated dataset for our task. Our experiments on the dataset validate the effectiveness of our pipeline, and we provide detailed analyses and ablation studies to support our findings.",
}
| In this paper, we address the challenge of discovering financial signals in narrative financial reports. As these documents are often lengthy and tend to blend routine information with new information, it is challenging for professionals to discern critical financial signals. To this end, we leverage the inherent nature of the year-to-year structure of reports to define a novel signal-highlighting task; more importantly, we propose a compare-and-contrast multistage pipeline that recognizes different relationships between the reports and locates relevant rationales for these relationships. We also create and publicly release a human-annotated dataset for our task. Our experiments on the dataset validate the effectiveness of our pipeline, and we provide detailed analyses and ablation studies to support our findings. | [
"Ju, Jia-Huei",
"Huang, Yu-Shiang",
"Lin, Cheng-Wei",
"Lin, Che",
"Wang, Chuan-Ju"
] | A Compare-and-contrast Multistage Pipeline for Uncovering Financial Signals in Financial Reports | acl-long.800 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.801.bib | https://aclanthology.org/2023.acl-long.801/ | @inproceedings{korakakis-vlachos-2023-improving,
title = "Improving the robustness of {NLI} models with minimax training",
author = "Korakakis, Michalis and
Vlachos, Andreas",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.801",
doi = "10.18653/v1/2023.acl-long.801",
pages = "14322--14339",
abstract = "Natural language inference (NLI) models are susceptible to learning shortcuts, i.e. decision rules that spuriously correlate with the label. As a result, they achieve high in-distribution performance, but fail to generalize to out-of-distribution samples where such correlations do not hold. In this paper, we present a training method to reduce the reliance of NLI models on shortcuts and improve their out-of-distribution performance without assuming prior knowledge of the shortcuts being targeted. To this end, we propose a minimax objective between a learner model being trained for the NLI task, and an auxiliary model aiming to maximize the learner{'}s loss by up-weighting examples from regions of the input space where the learner incurs high losses. This process incentivizes the learner to focus on under-represented {``}hard{''} examples with patterns that contradict the shortcuts learned from the prevailing {``}easy{''} examples. Experimental results on three NLI datasets demonstrate that our method consistently outperforms other robustness enhancing techniques on out-of-distribution adversarial test sets, while maintaining high in-distribution accuracy.",
}
| Natural language inference (NLI) models are susceptible to learning shortcuts, i.e. decision rules that spuriously correlate with the label. As a result, they achieve high in-distribution performance, but fail to generalize to out-of-distribution samples where such correlations do not hold. In this paper, we present a training method to reduce the reliance of NLI models on shortcuts and improve their out-of-distribution performance without assuming prior knowledge of the shortcuts being targeted. To this end, we propose a minimax objective between a learner model being trained for the NLI task, and an auxiliary model aiming to maximize the learner{'}s loss by up-weighting examples from regions of the input space where the learner incurs high losses. This process incentivizes the learner to focus on under-represented {``}hard{''} examples with patterns that contradict the shortcuts learned from the prevailing {``}easy{''} examples. Experimental results on three NLI datasets demonstrate that our method consistently outperforms other robustness enhancing techniques on out-of-distribution adversarial test sets, while maintaining high in-distribution accuracy. | [
"Korakakis, Michalis",
"Vlachos, Andreas"
] | Improving the robustness of NLI models with minimax training | acl-long.801 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.802.bib | https://aclanthology.org/2023.acl-long.802/ | @inproceedings{zhai-etal-2023-ussa,
title = "{USSA}: A Unified Table Filling Scheme for Structured Sentiment Analysis",
author = "Zhai, Zepeng and
Chen, Hao and
Li, Ruifan and
Wang, Xiaojie",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.802",
doi = "10.18653/v1/2023.acl-long.802",
pages = "14340--14353",
abstract = "Most previous studies on Structured Sentiment Analysis (SSA) have cast it as a problem of bi-lexical dependency parsing, which cannot address issues of overlap and discontinuity simultaneously. In this paper, we propose a niche-targeting and effective solution. Our approach involves creating a novel bi-lexical dependency parsing graph, which is then converted to a unified 2D table-filling scheme, namely USSA. The proposed scheme resolves the kernel bottleneck of previous SSA methods by utilizing 13 different types of relations. In addition, to closely collaborate with the USSA scheme, we have developed a model that includes a proposed bi-axial attention module to effectively capture the correlations among relations in the rows and columns of the table. Extensive experimental results on benchmark datasets demonstrate the effectiveness and robustness of our proposed framework, outperforming state-of-the-art methods consistently.",
}
| Most previous studies on Structured Sentiment Analysis (SSA) have cast it as a problem of bi-lexical dependency parsing, which cannot address issues of overlap and discontinuity simultaneously. In this paper, we propose a niche-targeting and effective solution. Our approach involves creating a novel bi-lexical dependency parsing graph, which is then converted to a unified 2D table-filling scheme, namely USSA. The proposed scheme resolves the kernel bottleneck of previous SSA methods by utilizing 13 different types of relations. In addition, to closely collaborate with the USSA scheme, we have developed a model that includes a proposed bi-axial attention module to effectively capture the correlations among relations in the rows and columns of the table. Extensive experimental results on benchmark datasets demonstrate the effectiveness and robustness of our proposed framework, outperforming state-of-the-art methods consistently. | [
"Zhai, Zepeng",
"Chen, Hao",
"Li, Ruifan",
"Wang, Xiaojie"
] | USSA: A Unified Table Filling Scheme for Structured Sentiment Analysis | acl-long.802 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.803.bib | https://aclanthology.org/2023.acl-long.803/ | @inproceedings{he-etal-2023-pad,
title = "{PAD}-Net: An Efficient Framework for Dynamic Networks",
author = "He, Shwai and
Ding, Liang and
Dong, Daize and
Liu, Boan and
Yu, Fuqiang and
Tao, Dacheng",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.803",
doi = "10.18653/v1/2023.acl-long.803",
pages = "14354--14366",
abstract = "Dynamic networks, e.g., Dynamic Convolution (DY-Conv) and the Mixture of Experts (MoE), have been extensively explored as they can considerably improve the model{'}s representation power with acceptable computational cost. The common practice in implementing dynamic networks is to convert the given static layers into fully dynamic ones where all parameters are dynamic (at least within a single layer) and vary with the input. However, such a fully dynamic setting may cause redundant parameters and high deployment costs, limiting the applicability of dynamic networks to a broader range of tasks and models. The main contributions of our work are challenging the basic commonsense in dynamic networks and proposing a partially dynamic network, namely PAD-Net, to transform the redundant dynamic parameters into static ones. Also, we further design Iterative Mode Partition to partition dynamic and static parameters efficiently. Our method is comprehensively supported by large-scale experiments with two typical advanced dynamic architectures, i.e., DY-Conv and MoE, on both image classification and GLUE benchmarks. Encouragingly, we surpass the fully dynamic networks by $+0.7\%$ top-1 acc with only 30{\%} dynamic parameters for ResNet-50 and $+1.9\%$ average score in language understanding with only 50{\%} dynamic parameters for BERT. Code will be released at: \url{https://github.com/Shwai-He/PAD-Net}.",
}
| Dynamic networks, e.g., Dynamic Convolution (DY-Conv) and the Mixture of Experts (MoE), have been extensively explored as they can considerably improve the model{'}s representation power with acceptable computational cost. The common practice in implementing dynamic networks is to convert the given static layers into fully dynamic ones where all parameters are dynamic (at least within a single layer) and vary with the input. However, such a fully dynamic setting may cause redundant parameters and high deployment costs, limiting the applicability of dynamic networks to a broader range of tasks and models. The main contributions of our work are challenging the basic commonsense in dynamic networks and proposing a partially dynamic network, namely PAD-Net, to transform the redundant dynamic parameters into static ones. Also, we further design Iterative Mode Partition to partition dynamic and static parameters efficiently. Our method is comprehensively supported by large-scale experiments with two typical advanced dynamic architectures, i.e., DY-Conv and MoE, on both image classification and GLUE benchmarks. Encouragingly, we surpass the fully dynamic networks by $+0.7\%$ top-1 acc with only 30{\%} dynamic parameters for ResNet-50 and $+1.9\%$ average score in language understanding with only 50{\%} dynamic parameters for BERT. Code will be released at: \url{https://github.com/Shwai-He/PAD-Net}. | [
"He, Shwai",
"Ding, Liang",
"Dong, Daize",
"Liu, Boan",
"Yu, Fuqiang",
"Tao, Dacheng"
] | PAD-Net: An Efficient Framework for Dynamic Networks | acl-long.803 | Poster | 2211.05528 | [
"https://github.com/shwai-he/pad-net"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.804.bib | https://aclanthology.org/2023.acl-long.804/ | @inproceedings{mehrabi-etal-2023-resolving,
title = "Resolving Ambiguities in Text-to-Image Generative Models",
author = "Mehrabi, Ninareh and
Goyal, Palash and
Verma, Apurv and
Dhamala, Jwala and
Kumar, Varun and
Hu, Qian and
Chang, Kai-Wei and
Zemel, Richard and
Galstyan, Aram and
Gupta, Rahul",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.804",
doi = "10.18653/v1/2023.acl-long.804",
pages = "14367--14388",
abstract = "Natural language often contains ambiguities that can lead to misinterpretation and miscommunication. While humans can handle ambiguities effectively by asking clarifying questions and/or relying on contextual cues and common-sense knowledge, resolving ambiguities can be notoriously hard for machines. In this work, we study ambiguities that arise in text-to-image generative models. We curate the Text-to-image Ambiguity Benchmark (TAB) dataset to study different types of ambiguities in text-to-image generative models. We then propose the Text-to-ImagE Disambiguation (TIED) framework to disambiguate the prompts given to the text-to-image generative models by soliciting clarifications from the end user. Through automatic and human evaluations, we show the effectiveness of our framework in generating more faithful images aligned with end user intention in the presence of ambiguities.",
}
| Natural language often contains ambiguities that can lead to misinterpretation and miscommunication. While humans can handle ambiguities effectively by asking clarifying questions and/or relying on contextual cues and common-sense knowledge, resolving ambiguities can be notoriously hard for machines. In this work, we study ambiguities that arise in text-to-image generative models. We curate the Text-to-image Ambiguity Benchmark (TAB) dataset to study different types of ambiguities in text-to-image generative models. We then propose the Text-to-ImagE Disambiguation (TIED) framework to disambiguate the prompts given to the text-to-image generative models by soliciting clarifications from the end user. Through automatic and human evaluations, we show the effectiveness of our framework in generating more faithful images aligned with end user intention in the presence of ambiguities. | [
"Mehrabi, Ninareh",
"Goyal, Palash",
"Verma, Apurv",
"Dhamala, Jwala",
"Kumar, Varun",
"Hu, Qian",
"Chang, Kai-Wei",
"Zemel, Richard",
"Galstyan, Aram",
"Gupta, Rahul"
] | Resolving Ambiguities in Text-to-Image Generative Models | acl-long.804 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.805.bib | https://aclanthology.org/2023.acl-long.805/ | @inproceedings{jang-etal-2023-knowledge,
title = "Knowledge Unlearning for Mitigating Privacy Risks in Language Models",
author = "Jang, Joel and
Yoon, Dongkeun and
Yang, Sohee and
Cha, Sungmin and
Lee, Moontae and
Logeswaran, Lajanugen and
Seo, Minjoon",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.805",
doi = "10.18653/v1/2023.acl-long.805",
pages = "14389--14408",
abstract = "Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities. Previous work addressing privacy issues for LMs has mostly focused on data preprocessing and differential privacy methods, both requiring re-training the underlying LM. We propose knowledge unlearning as an alternative method to reduce privacy risks for LMs post hoc. We show that simply performing gradient ascent on target token sequences is effective at forgetting them with little to no degradation of general language modeling performances for larger-sized LMs. We also find that sequential unlearning is better than trying to unlearn all the data at once and that unlearning is highly dependent on which kind of data (domain) is forgotten. By showing comparisons with previous methods known to mitigate privacy risks for LMs, we show that our approach can give a stronger empirical privacy guarantee in scenarios where the data vulnerable to extraction attacks are known a priori while being much more efficient and robust.",
}
| Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities. Previous work addressing privacy issues for LMs has mostly focused on data preprocessing and differential privacy methods, both requiring re-training the underlying LM. We propose knowledge unlearning as an alternative method to reduce privacy risks for LMs post hoc. We show that simply performing gradient ascent on target token sequences is effective at forgetting them with little to no degradation of general language modeling performances for larger-sized LMs. We also find that sequential unlearning is better than trying to unlearn all the data at once and that unlearning is highly dependent on which kind of data (domain) is forgotten. By showing comparisons with previous methods known to mitigate privacy risks for LMs, we show that our approach can give a stronger empirical privacy guarantee in scenarios where the data vulnerable to extraction attacks are known a priori while being much more efficient and robust. | [
"Jang, Joel",
"Yoon, Dongkeun",
"Yang, Sohee",
"Cha, Sungmin",
"Lee, Moontae",
"Logeswaran, Lajanugen",
"Seo, Minjoon"
] | Knowledge Unlearning for Mitigating Privacy Risks in Language Models | acl-long.805 | Poster | 2210.01504 | [
"https://github.com/joeljang/knowledge-unlearning"
] | https://huggingface.co/papers/2210.01504 | 1 | 0 | 0 | 7 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-long.806.bib | https://aclanthology.org/2023.acl-long.806/ | @inproceedings{honovich-etal-2023-unnatural,
title = "Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor",
author = "Honovich, Or and
Scialom, Thomas and
Levy, Omer and
Schick, Timo",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.806",
doi = "10.18653/v1/2023.acl-long.806",
pages = "14409--14428",
abstract = "Instruction tuning enables pretrained language models to perform new tasks from inference-time natural language descriptions. These approaches rely on vast amounts of human supervision in the form of crowdsourced datasets or user interactions. In this work, we introduce Unnatural Instructions: a large dataset of creative and diverse instructions, collected with virtually no human labor. We collect 64,000 examples by prompting a language model with three seed examples of instructions and eliciting a fourth. This set is then expanded by prompting the model to rephrase each instruction, creating a total of approximately 240,000 examples of instructions, inputs, and outputs. Experiments show that despite containing a fair amount of noise, training on Unnatural Instructions rivals the effectiveness of training on open-source manually-curated datasets, surpassing the performance of models such as T0++ and Tk-Instruct across various benchmarks. These results demonstrate the potential of model-generated data as a cost-effective alternative to crowdsourcing for dataset expansion and diversification.",
}
| Instruction tuning enables pretrained language models to perform new tasks from inference-time natural language descriptions. These approaches rely on vast amounts of human supervision in the form of crowdsourced datasets or user interactions. In this work, we introduce Unnatural Instructions: a large dataset of creative and diverse instructions, collected with virtually no human labor. We collect 64,000 examples by prompting a language model with three seed examples of instructions and eliciting a fourth. This set is then expanded by prompting the model to rephrase each instruction, creating a total of approximately 240,000 examples of instructions, inputs, and outputs. Experiments show that despite containing a fair amount of noise, training on Unnatural Instructions rivals the effectiveness of training on open-source manually-curated datasets, surpassing the performance of models such as T0++ and Tk-Instruct across various benchmarks. These results demonstrate the potential of model-generated data as a cost-effective alternative to crowdsourcing for dataset expansion and diversification. | [
"Honovich, Or",
"Scialom, Thomas",
"Levy, Omer",
"Schick, Timo"
] | Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor | acl-long.806 | Poster | 2212.09689 | [
"https://github.com/orhonovich/unnatural-instructions"
] | https://huggingface.co/papers/2212.09689 | 0 | 1 | 0 | 4 | 1 | [
"allenai/open-instruct-unnatural-instructions-7b",
"allenai/open-instruct-unnatural-instructions-13b"
] | [
"BAAI/COIG",
"nvidia/ChatQA-Training-Data",
"mrm8488/unnatural-instructions-full",
"mrm8488/unnatural-instructions-core",
"ericflo/unnaturalhermes-questions-30k",
"ericflo/unnaturalhermes-questions-100k"
] | [
"Sharathhebbar24/One-stop-for-Open-source-models",
"K00B404/One-stop-till-you-drop"
] |
https://aclanthology.org/2023.acl-long.807.bib | https://aclanthology.org/2023.acl-long.807/ | @inproceedings{dua-etal-2023-adapt,
title = "To Adapt or to Annotate: Challenges and Interventions for Domain Adaptation in Open-Domain Question Answering",
author = "Dua, Dheeru and
Strubell, Emma and
Singh, Sameer and
Verga, Pat",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.807",
doi = "10.18653/v1/2023.acl-long.807",
pages = "14429--14446",
abstract = "Recent advances in open-domain question answering (ODQA) have demonstrated impressive accuracy on general-purpose domains like Wikipedia. While some work has been investigating how well ODQA models perform when tested for out-of-domain (OOD) generalization, these studies have been conducted only under conservative shifts in data distribution and typically focus on a single component (i.e., retriever or reader) rather than an end-to-end system. This work proposes a more realistic end-to-end domain shift evaluation setting covering five diverse domains. We not only find that end-to-end models fail to generalize but that high retrieval scores often still yield poor answer prediction accuracy. To address these failures, we investigate several interventions, in the form of data augmentations, for improving model adaption and use our evaluation set to elucidate the relationship between the efficacy of an intervention scheme and the particular type of dataset shifts we consider. We propose a generalizability test that estimates the type of shift in a target dataset without training a model in the target domain and that the type of shift is predictive of which data augmentation schemes will be effective for domain adaption. Overall, we find that these interventions increase end-to-end performance by up to {\textasciitilde}24 points.",
}
| Recent advances in open-domain question answering (ODQA) have demonstrated impressive accuracy on general-purpose domains like Wikipedia. While some work has been investigating how well ODQA models perform when tested for out-of-domain (OOD) generalization, these studies have been conducted only under conservative shifts in data distribution and typically focus on a single component (i.e., retriever or reader) rather than an end-to-end system. This work proposes a more realistic end-to-end domain shift evaluation setting covering five diverse domains. We not only find that end-to-end models fail to generalize but that high retrieval scores often still yield poor answer prediction accuracy. To address these failures, we investigate several interventions, in the form of data augmentations, for improving model adaption and use our evaluation set to elucidate the relationship between the efficacy of an intervention scheme and the particular type of dataset shifts we consider. We propose a generalizability test that estimates the type of shift in a target dataset without training a model in the target domain and that the type of shift is predictive of which data augmentation schemes will be effective for domain adaption. Overall, we find that these interventions increase end-to-end performance by up to {\textasciitilde}24 points. | [
"Dua, Dheeru",
"Strubell, Emma",
"Singh, Sameer",
"Verga, Pat"
] | To Adapt or to Annotate: Challenges and Interventions for Domain Adaptation in Open-Domain Question Answering | acl-long.807 | Oral | 2212.10381 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.808.bib | https://aclanthology.org/2023.acl-long.808/ | @inproceedings{zhang-etal-2023-survey-efficient,
title = "A Survey for Efficient Open Domain Question Answering",
author = "Zhang, Qin and
Chen, Shangsi and
Xu, Dongkuan and
Cao, Qingqing and
Chen, Xiaojun and
Cohn, Trevor and
Fang, Meng",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.808",
doi = "10.18653/v1/2023.acl-long.808",
pages = "14447--14465",
abstract = "Open domain question answering (ODQA) is a longstanding task aimed at answering factual questions from a large knowledge corpus without any explicit evidence in natural language processing (NLP). Recent works have predominantly focused on improving the answering accuracy and have achieved promising progress. However, higher accuracy often requires more memory consumption and inference latency, which might not necessarily be efficient enough for direct deployment in the real world. Thus, a trade-off between accuracy, memory consumption and processing speed is pursued. In this paper, we will survey recent advancements in the efficiency of ODQA models and conclude core techniques for achieving efficiency. Additionally, we will provide a quantitative analysis of memory cost, query speed, accuracy, and overall performance comparison. Our goal is to keep scholars informed of the latest advancements and open challenges in ODQA efficiency research and contribute to the further development of ODQA efficiency.",
}
| Open domain question answering (ODQA) is a longstanding task aimed at answering factual questions from a large knowledge corpus without any explicit evidence in natural language processing (NLP). Recent works have predominantly focused on improving the answering accuracy and have achieved promising progress. However, higher accuracy often requires more memory consumption and inference latency, which might not necessarily be efficient enough for direct deployment in the real world. Thus, a trade-off between accuracy, memory consumption and processing speed is pursued. In this paper, we will survey recent advancements in the efficiency of ODQA models and conclude core techniques for achieving efficiency. Additionally, we will provide a quantitative analysis of memory cost, query speed, accuracy, and overall performance comparison. Our goal is to keep scholars informed of the latest advancements and open challenges in ODQA efficiency research and contribute to the further development of ODQA efficiency. | [
"Zhang, Qin",
"Chen, Shangsi",
"Xu, Dongkuan",
"Cao, Qingqing",
"Chen, Xiaojun",
"Cohn, Trevor",
"Fang, Meng"
] | A Survey for Efficient Open Domain Question Answering | acl-long.808 | Poster | 2211.07886 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.809.bib | https://aclanthology.org/2023.acl-long.809/ | @inproceedings{ahmadi-anastasopoulos-2023-script,
title = "Script Normalization for Unconventional Writing of Under-Resourced Languages in Bilingual Communities",
author = "Ahmadi, Sina and
Anastasopoulos, Antonios",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.809",
doi = "10.18653/v1/2023.acl-long.809",
pages = "14466--14487",
abstract = "The wide accessibility of social media has provided linguistically under-represented communities with an extraordinary opportunity to create content in their native languages. This, however, comes with certain challenges in script normalization, particularly where the speakers of a language in a bilingual community rely on another script or orthography to write their native language. This paper addresses the problem of script normalization for several such languages that are mainly written in a Perso-Arabic script. Using synthetic data with various levels of noise and a transformer-based model, we demonstrate that the problem can be effectively remediated. We conduct a small-scale evaluation of real data as well. Our experiments indicate that script normalization is also beneficial to improve the performance of downstream tasks such as machine translation and language identification.",
}
| The wide accessibility of social media has provided linguistically under-represented communities with an extraordinary opportunity to create content in their native languages. This, however, comes with certain challenges in script normalization, particularly where the speakers of a language in a bilingual community rely on another script or orthography to write their native language. This paper addresses the problem of script normalization for several such languages that are mainly written in a Perso-Arabic script. Using synthetic data with various levels of noise and a transformer-based model, we demonstrate that the problem can be effectively remediated. We conduct a small-scale evaluation of real data as well. Our experiments indicate that script normalization is also beneficial to improve the performance of downstream tasks such as machine translation and language identification. | [
"Ahmadi, Sina",
"Anastasopoulos, Antonios"
] | Script Normalization for Unconventional Writing of Under-Resourced Languages in Bilingual Communities | acl-long.809 | Oral | 2305.16407 | [
"https://github.com/sinaahmadi/scriptnormalization"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.810.bib | https://aclanthology.org/2023.acl-long.810/ | @inproceedings{lindemann-etal-2023-compositional-generalization,
title = "Compositional Generalization without Trees using Multiset Tagging and Latent Permutations",
author = "Lindemann, Matthias and
Koller, Alexander and
Titov, Ivan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.810",
doi = "10.18653/v1/2023.acl-long.810",
pages = "14488--14506",
abstract = "Seq2seq models have been shown to struggle with compositional generalization in semantic parsing, i.e. generalizing to unseen compositions of phenomena that the model handles correctly in isolation. We phrase semantic parsing as a two-step process: we first tag each input token with a multiset of output tokens. Then we arrange the tokens into an output sequence using a new way of parameterizing and predicting permutations. We formulate predicting a permutation as solving a regularized linear program and we backpropagate through the solver. In contrast to prior work, our approach does not place a priori restrictions on possible permutations, making it very expressive. Our model outperforms pretrained seq2seq models and prior work on realistic semantic parsing tasks that require generalization to longer examples. We also outperform non-tree-based models on structural generalization on the COGS benchmark. For the first time, we show that a model without an inductive bias provided by trees achieves high accuracy on generalization to deeper recursion depth.",
}
| Seq2seq models have been shown to struggle with compositional generalization in semantic parsing, i.e. generalizing to unseen compositions of phenomena that the model handles correctly in isolation. We phrase semantic parsing as a two-step process: we first tag each input token with a multiset of output tokens. Then we arrange the tokens into an output sequence using a new way of parameterizing and predicting permutations. We formulate predicting a permutation as solving a regularized linear program and we backpropagate through the solver. In contrast to prior work, our approach does not place a priori restrictions on possible permutations, making it very expressive. Our model outperforms pretrained seq2seq models and prior work on realistic semantic parsing tasks that require generalization to longer examples. We also outperform non-tree-based models on structural generalization on the COGS benchmark. For the first time, we show that a model without an inductive bias provided by trees achieves high accuracy on generalization to deeper recursion depth. | [
"Lindemann, Matthias",
"Koller, Alex",
"er",
"Titov, Ivan"
] | Compositional Generalization without Trees using Multiset Tagging and Latent Permutations | acl-long.810 | Poster | 2305.16954 | [
"https://github.com/namednil/multiset-perm"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.811.bib | https://aclanthology.org/2023.acl-long.811/ | @inproceedings{xu-etal-2023-managertower,
title = "{M}anager{T}ower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning",
author = "Xu, Xiao and
Li, Bei and
Wu, Chenfei and
Tseng, Shao-Yen and
Bhiwandiwalla, Anahita and
Rosenman, Shachar and
Lal, Vasudev and
Che, Wanxiang and
Duan, Nan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.811",
doi = "10.18653/v1/2023.acl-long.811",
pages = "14507--14525",
abstract = "Two-Tower Vision-Language (VL) models have shown promising improvements on various downstream VL tasks. Although the most advanced work improves performance by building bridges between encoders, it suffers from ineffective layer-by-layer utilization of uni-modal representations and cannot flexibly exploit different levels of uni-modal semantic knowledge. In this work, we propose ManagerTower, a novel VL model architecture that gathers and combines the insights of pre-trained uni-modal experts at different levels. The managers introduced in each cross-modal layer can adaptively aggregate uni-modal semantic knowledge to facilitate more comprehensive cross-modal alignment and fusion. ManagerTower outperforms previous strong baselines both with and without Vision-Language Pre-training (VLP). With only 4M VLP data, ManagerTower achieves superior performances on various downstream VL tasks, especially 79.15{\%} accuracy on VQAv2 Test-Std, 86.56{\%} IR@1 and 95.64{\%} TR@1 on Flickr30K. Code and checkpoints are available at \url{https://github.com/LooperXX/ManagerTower}.",
}
| Two-Tower Vision-Language (VL) models have shown promising improvements on various downstream VL tasks. Although the most advanced work improves performance by building bridges between encoders, it suffers from ineffective layer-by-layer utilization of uni-modal representations and cannot flexibly exploit different levels of uni-modal semantic knowledge. In this work, we propose ManagerTower, a novel VL model architecture that gathers and combines the insights of pre-trained uni-modal experts at different levels. The managers introduced in each cross-modal layer can adaptively aggregate uni-modal semantic knowledge to facilitate more comprehensive cross-modal alignment and fusion. ManagerTower outperforms previous strong baselines both with and without Vision-Language Pre-training (VLP). With only 4M VLP data, ManagerTower achieves superior performances on various downstream VL tasks, especially 79.15{\%} accuracy on VQAv2 Test-Std, 86.56{\%} IR@1 and 95.64{\%} TR@1 on Flickr30K. Code and checkpoints are available at \url{https://github.com/LooperXX/ManagerTower}. | [
"Xu, Xiao",
"Li, Bei",
"Wu, Chenfei",
"Tseng, Shao-Yen",
"Bhiw",
"iwalla, Anahita",
"Rosenman, Shachar",
"Lal, Vasudev",
"Che, Wanxiang",
"Duan, Nan"
] | ManagerTower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning | acl-long.811 | Oral | 2306.00103 | [
"https://github.com/looperxx/managertower"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.812.bib | https://aclanthology.org/2023.acl-long.812/ | @inproceedings{ni-etal-2023-finding,
title = "Finding the Pillars of Strength for Multi-Head Attention",
author = "Ni, Jinjie and
Mao, Rui and
Yang, Zonglin and
Lei, Han and
Cambria, Erik",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.812",
doi = "10.18653/v1/2023.acl-long.812",
pages = "14526--14540",
abstract = "Recent studies have revealed some issues of Multi-Head Attention (MHA), e.g., redundancy and over-parameterization. Specifically, the heads of MHA were originally designed to attend to information from different representation subspaces, whereas prior studies found that some attention heads likely learn similar features and can be pruned without harming performance. Inspired by the minimum-redundancy feature selection, we assume that focusing on the most representative and distinctive features with minimum resources can mitigate the above issues and lead to more effective and efficient MHAs. In particular, we propose Grouped Head Attention, trained with a self-supervised group constraint that group attention heads, where each group focuses on an essential but distinctive feature subset. We additionally propose a Voting-to-Stay procedure to remove redundant heads, thus achieving a transformer with lighter weights. Extensive experiments are consistent with our hypothesis. Moreover, our method achieves significant performance gains on three well-established tasks while considerably compressing parameters.",
}
| Recent studies have revealed some issues of Multi-Head Attention (MHA), e.g., redundancy and over-parameterization. Specifically, the heads of MHA were originally designed to attend to information from different representation subspaces, whereas prior studies found that some attention heads likely learn similar features and can be pruned without harming performance. Inspired by the minimum-redundancy feature selection, we assume that focusing on the most representative and distinctive features with minimum resources can mitigate the above issues and lead to more effective and efficient MHAs. In particular, we propose Grouped Head Attention, trained with a self-supervised group constraint that group attention heads, where each group focuses on an essential but distinctive feature subset. We additionally propose a Voting-to-Stay procedure to remove redundant heads, thus achieving a transformer with lighter weights. Extensive experiments are consistent with our hypothesis. Moreover, our method achieves significant performance gains on three well-established tasks while considerably compressing parameters. | [
"Ni, Jinjie",
"Mao, Rui",
"Yang, Zonglin",
"Lei, Han",
"Cambria, Erik"
] | Finding the Pillars of Strength for Multi-Head Attention | acl-long.812 | Poster | 2305.14380 | [
"https://github.com/senticnet/gha"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.813.bib | https://aclanthology.org/2023.acl-long.813/ | @inproceedings{zheng-etal-2023-jointprop,
title = "Jointprop: Joint Semi-supervised Learning for Entity and Relation Extraction with Heterogeneous Graph-based Propagation",
author = "Zheng, Yandan and
Hao, Anran and
Luu, Anh Tuan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.813",
doi = "10.18653/v1/2023.acl-long.813",
pages = "14541--14555",
abstract = "Semi-supervised learning has been an important approach to address challenges in extracting entities and relations from limited data. However, current semi-supervised works handle the two tasks (i.e., Named Entity Recognition and Relation Extraction) separately and ignore the cross-correlation of entity and relation instances as well as the existence of similar instances across unlabeled data. To alleviate the issues, we propose Jointprop, a Heterogeneous Graph-based Propagation framework for joint semi-supervised entity and relation extraction, which captures the global structure information between individual tasks and exploits interactions within unlabeled data. Specifically, we construct a unified span-based heterogeneous graph from entity and relation candidates and propagate class labels based on confidence scores. We then employ a propagation learning scheme to leverage the affinities between labelled and unlabeled samples. Experiments on benchmark datasets show that our framework outperforms the state-of-the-art semi-supervised approaches on NER and RE tasks. We show that the joint semi-supervised learning of the two tasks benefits from their codependency and validates the importance of utilizing the shared information between unlabeled data.",
}
| Semi-supervised learning has been an important approach to address challenges in extracting entities and relations from limited data. However, current semi-supervised works handle the two tasks (i.e., Named Entity Recognition and Relation Extraction) separately and ignore the cross-correlation of entity and relation instances as well as the existence of similar instances across unlabeled data. To alleviate the issues, we propose Jointprop, a Heterogeneous Graph-based Propagation framework for joint semi-supervised entity and relation extraction, which captures the global structure information between individual tasks and exploits interactions within unlabeled data. Specifically, we construct a unified span-based heterogeneous graph from entity and relation candidates and propagate class labels based on confidence scores. We then employ a propagation learning scheme to leverage the affinities between labelled and unlabeled samples. Experiments on benchmark datasets show that our framework outperforms the state-of-the-art semi-supervised approaches on NER and RE tasks. We show that the joint semi-supervised learning of the two tasks benefits from their codependency and validates the importance of utilizing the shared information between unlabeled data. | [
"Zheng, Y",
"an",
"Hao, Anran",
"Luu, Anh Tuan"
] | Jointprop: Joint Semi-supervised Learning for Entity and Relation Extraction with Heterogeneous Graph-based Propagation | acl-long.813 | Poster | 2305.15872 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.814.bib | https://aclanthology.org/2023.acl-long.814/ | @inproceedings{zhang-etal-2023-reasoning,
title = "Reasoning over Hierarchical Question Decomposition Tree for Explainable Question Answering",
author = "Zhang, Jiajie and
Cao, Shulin and
Zhang, Tingjian and
Lv, Xin and
Li, Juanzi and
Hou, Lei and
Shi, Jiaxin and
Tian, Qi",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.814",
doi = "10.18653/v1/2023.acl-long.814",
pages = "14556--14570",
abstract = "Explainable question answering (XQA) aims to answer a given question and provide an explanation why the answer is selected. Existing XQA methods focus on reasoning on a single knowledge source, e.g., structured knowledge bases, unstructured corpora, etc. However, integrating information from heterogeneous knowledge sources is essential to answer complex questions. In this paper, we propose to leverage question decomposing for heterogeneous knowledge integration, by breaking down a complex question into simpler ones, and selecting the appropriate knowledge source for each sub-question. To facilitate reasoning, we propose a novel two-stage XQA framework, Reasoning over Hierarchical Question Decomposition Tree (RoHT). First, we build the Hierarchical Question Decomposition Tree (HQDT) to understand the semantics of a complex question; then, we conduct probabilistic reasoning over HQDT from root to leaves recursively, to aggregate heterogeneous knowledge at different tree levels and search for a best solution considering the decomposing and answering probabilities. The experiments on complex QA datasets KQA Pro and Musique show that our framework outperforms SOTA methods significantly, demonstrating the effectiveness of leveraging question decomposing for knowledge integration and our RoHT framework.",
}
| Explainable question answering (XQA) aims to answer a given question and provide an explanation why the answer is selected. Existing XQA methods focus on reasoning on a single knowledge source, e.g., structured knowledge bases, unstructured corpora, etc. However, integrating information from heterogeneous knowledge sources is essential to answer complex questions. In this paper, we propose to leverage question decomposing for heterogeneous knowledge integration, by breaking down a complex question into simpler ones, and selecting the appropriate knowledge source for each sub-question. To facilitate reasoning, we propose a novel two-stage XQA framework, Reasoning over Hierarchical Question Decomposition Tree (RoHT). First, we build the Hierarchical Question Decomposition Tree (HQDT) to understand the semantics of a complex question; then, we conduct probabilistic reasoning over HQDT from root to leaves recursively, to aggregate heterogeneous knowledge at different tree levels and search for a best solution considering the decomposing and answering probabilities. The experiments on complex QA datasets KQA Pro and Musique show that our framework outperforms SOTA methods significantly, demonstrating the effectiveness of leveraging question decomposing for knowledge integration and our RoHT framework. | [
"Zhang, Jiajie",
"Cao, Shulin",
"Zhang, Tingjian",
"Lv, Xin",
"Li, Juanzi",
"Hou, Lei",
"Shi, Jiaxin",
"Tian, Qi"
] | Reasoning over Hierarchical Question Decomposition Tree for Explainable Question Answering | acl-long.814 | Poster | 2305.15056 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.815.bib | https://aclanthology.org/2023.acl-long.815/ | @inproceedings{huang-etal-2023-faking,
title = "Faking Fake News for Real Fake News Detection: Propaganda-Loaded Training Data Generation",
author = "Huang, Kung-Hsiang and
McKeown, Kathleen and
Nakov, Preslav and
Choi, Yejin and
Ji, Heng",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.815",
doi = "10.18653/v1/2023.acl-long.815",
pages = "14571--14589",
abstract = "Despite recent advances in detecting fake news generated by neural models, their results are not readily applicable to effective detection of human-written disinformation. What limits the successful transfer between them is the sizable gap between machine-generated fake news and human-authored ones, including the notable differences in terms of style and underlying intent. With this in mind, we propose a novel framework for generating training examples that are informed by the known styles and strategies of human-authored propaganda. Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles, while also incorporating propaganda techniques, such as appeal to authority and loaded language. In particular, we create a new training dataset, PropaNews, with 2,256 examples, which we release for future use. Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62{--}7.69{\%} F1 score on two public datasets.",
}
| Despite recent advances in detecting fake news generated by neural models, their results are not readily applicable to effective detection of human-written disinformation. What limits the successful transfer between them is the sizable gap between machine-generated fake news and human-authored ones, including the notable differences in terms of style and underlying intent. With this in mind, we propose a novel framework for generating training examples that are informed by the known styles and strategies of human-authored propaganda. Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles, while also incorporating propaganda techniques, such as appeal to authority and loaded language. In particular, we create a new training dataset, PropaNews, with 2,256 examples, which we release for future use. Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62{--}7.69{\%} F1 score on two public datasets. | [
"Huang, Kung-Hsiang",
"McKeown, Kathleen",
"Nakov, Preslav",
"Choi, Yejin",
"Ji, Heng"
] | Faking Fake News for Real Fake News Detection: Propaganda-Loaded Training Data Generation | acl-long.815 | Poster | 2203.05386 | [
"https://github.com/khuangaf/fakingfakenews"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.816.bib | https://aclanthology.org/2023.acl-long.816/ | @inproceedings{sun-etal-2023-length,
title = "A Length-Extrapolatable Transformer",
author = "Sun, Yutao and
Dong, Li and
Patra, Barun and
Ma, Shuming and
Huang, Shaohan and
Benhaim, Alon and
Chaudhary, Vishrav and
Song, Xia and
Wei, Furu",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.816",
doi = "10.18653/v1/2023.acl-long.816",
pages = "14590--14604",
abstract = "Position modeling plays a critical role in Transformers. In this paper, we focus on length extrapolation, i.e., training on short texts while evaluating longer sequences. We define \textit{attention resolution} as an indicator of extrapolation. Then we propose two designs to improve the above metric of Transformers. Specifically, we introduce a relative position embedding to explicitly maximize attention resolution. Moreover, we use blockwise causal attention during inference for better resolution. We evaluate different Transformer variants with language modeling. Experimental results show that our model achieves strong performance in both interpolation and extrapolation settings. The code will be available at \url{https://aka.ms/LeX-Transformer}.",
}
| Position modeling plays a critical role in Transformers. In this paper, we focus on length extrapolation, i.e., training on short texts while evaluating longer sequences. We define \textit{attention resolution} as an indicator of extrapolation. Then we propose two designs to improve the above metric of Transformers. Specifically, we introduce a relative position embedding to explicitly maximize attention resolution. Moreover, we use blockwise causal attention during inference for better resolution. We evaluate different Transformer variants with language modeling. Experimental results show that our model achieves strong performance in both interpolation and extrapolation settings. The code will be available at \url{https://aka.ms/LeX-Transformer}. | [
"Sun, Yutao",
"Dong, Li",
"Patra, Barun",
"Ma, Shuming",
"Huang, Shaohan",
"Benhaim, Alon",
"Chaudhary, Vishrav",
"Song, Xia",
"Wei, Furu"
] | A Length-Extrapolatable Transformer | acl-long.816 | Poster | 2212.10554 | [
"https://github.com/microsoft/torchscale"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.817.bib | https://aclanthology.org/2023.acl-long.817/ | @inproceedings{lu-etal-2023-survey,
title = "A Survey of Deep Learning for Mathematical Reasoning",
author = "Lu, Pan and
Qiu, Liang and
Yu, Wenhao and
Welleck, Sean and
Chang, Kai-Wei",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.817",
doi = "10.18653/v1/2023.acl-long.817",
pages = "14605--14631",
abstract = "Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in various fields, including science, engineering, finance, and everyday life. The development of artificial intelligence (AI) systems capable of solving math problems and proving theorems in language has garnered significant interest in the fields of machine learning and natural language processing. For example, mathematics serves as a testbed for aspects of reasoning that are challenging for powerful deep learning models, driving new algorithmic and modeling advances. On the other hand, recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning. In this survey paper, we review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade. We also evaluate existing benchmarks and methods, and discuss future research directions in this domain.",
}
| Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in various fields, including science, engineering, finance, and everyday life. The development of artificial intelligence (AI) systems capable of solving math problems and proving theorems in language has garnered significant interest in the fields of machine learning and natural language processing. For example, mathematics serves as a testbed for aspects of reasoning that are challenging for powerful deep learning models, driving new algorithmic and modeling advances. On the other hand, recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning. In this survey paper, we review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade. We also evaluate existing benchmarks and methods, and discuss future research directions in this domain. | [
"Lu, Pan",
"Qiu, Liang",
"Yu, Wenhao",
"Welleck, Sean",
"Chang, Kai-Wei"
] | A Survey of Deep Learning for Mathematical Reasoning | acl-long.817 | Poster | 2212.10535 | [
"https://github.com/lupantech/dl4math"
] | https://huggingface.co/papers/2212.10535 | 0 | 0 | 0 | 5 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-long.818.bib | https://aclanthology.org/2023.acl-long.818/ | @inproceedings{calderon-etal-2023-systematic,
title = "A Systematic Study of Knowledge Distillation for Natural Language Generation with Pseudo-Target Training",
author = "Calderon, Nitay and
Mukherjee, Subhabrata and
Reichart, Roi and
Kantor, Amir",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.818",
doi = "10.18653/v1/2023.acl-long.818",
pages = "14632--14659",
abstract = "Modern Natural Language Generation (NLG) models come with massive computational and storage requirements. In this work, we study the potential of compressing them, which is crucial for real-world applications serving millions of users. We focus on Knowledge Distillation (KD) techniques, in which a small student model learns to imitate a large teacher model, allowing to transfer knowledge from the teacher to the student. In contrast to much of the previous work, our goal is to optimize the model for a specific NLG task and a specific dataset. Typically in real-world applications, in addition to labeled data there is abundant unlabeled task-specific data, which is crucial for attaining high compression rates via KD. In this work, we conduct a systematic study of task-specific KD techniques for various NLG tasks under realistic assumptions. We discuss the special characteristics of NLG distillation and particularly the exposure bias problem. Following, we derive a family of Pseudo-Target (PT) augmentation methods, substantially extending prior work on sequence-level KD. We propose the Joint-Teaching method, which applies word-level KD to multiple PTs generated by both the teacher and the student. Finally, we validate our findings in an extreme setup with no labeled examples using GPT-4 as the teacher. Our study provides practical model design observations and demonstrates the effectiveness of PT training for task-specific KD in NLG.",
}
| Modern Natural Language Generation (NLG) models come with massive computational and storage requirements. In this work, we study the potential of compressing them, which is crucial for real-world applications serving millions of users. We focus on Knowledge Distillation (KD) techniques, in which a small student model learns to imitate a large teacher model, allowing knowledge to be transferred from the teacher to the student. In contrast to much of the previous work, our goal is to optimize the model for a specific NLG task and a specific dataset. Typically in real-world applications, in addition to labeled data there is abundant unlabeled task-specific data, which is crucial for attaining high compression rates via KD. In this work, we conduct a systematic study of task-specific KD techniques for various NLG tasks under realistic assumptions. We discuss the special characteristics of NLG distillation and particularly the exposure bias problem. Following this, we derive a family of Pseudo-Target (PT) augmentation methods, substantially extending prior work on sequence-level KD. We propose the Joint-Teaching method, which applies word-level KD to multiple PTs generated by both the teacher and the student. Finally, we validate our findings in an extreme setup with no labeled examples using GPT-4 as the teacher. Our study provides practical model design observations and demonstrates the effectiveness of PT training for task-specific KD in NLG. | [
"Calderon, Nitay",
"Mukherjee, Subhabrata",
"Reichart, Roi",
"Kantor, Amir"
] | A Systematic Study of Knowledge Distillation for Natural Language Generation with Pseudo-Target Training | acl-long.818 | Poster | 2305.02031 | [
"https://github.com/nitaytech/kd4gen"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
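The Joint-Teaching method above applies word-level KD to pseudo-targets. A minimal sketch of a word-level KD term (a temperature-scaled KL divergence between teacher and student token distributions) is shown below; the shapes, temperature, and reduction are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def word_level_kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student), averaged over a batch of pseudo-targets.

    Both tensors have shape (batch, seq_len, vocab); the pseudo-target
    token sequences themselves would be sampled from the teacher or the
    student beforehand.
    """
    t = temperature
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)

# toy shapes: 2 pseudo-targets, 5 tokens each, vocabulary of 11
print(word_level_kd_loss(torch.randn(2, 5, 11), torch.randn(2, 5, 11)).item())
```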
https://aclanthology.org/2023.acl-long.819.bib | https://aclanthology.org/2023.acl-long.819/ | @inproceedings{jiang-etal-2023-vision,
title = "Vision Language Pre-training by Contrastive Learning with Cross-Modal Similarity Regulation",
author = "Jiang, Chaoya and
Ye, Wei and
Xu, Haiyang and
Huang, Songfang and
Huang, Fei and
Zhang, Shikun",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.819",
doi = "10.18653/v1/2023.acl-long.819",
pages = "14660--14679",
abstract = "In this paper, we reconsider the problem of (partial) false negative samples from the Mutual Information (MI) Maximization perspective, the traditional contrastive loss (like InfoNCE loss) will equally push away the anchor of all positive samples and negative samples regardless of their possible semantic similarities. We theoretically show that InfoNCE loss will not only maximize the MI between the anchor and positive samples but minimize the MI between the anchor and false negative samples even though they share similar semantic which could provide a possible theoretical explanation for the observation of the existence of false negative samples in the cross-modal contrastive learning will decrease the downstream task performance of VLP models. Above analysis motivate us to propose the VLP model with a novel Semantic Awared Contrastive Learning framework named SACL where different negative samples are assigned with different contrastive weights according to the semantic similarity between them and the anchor.",
}
| In this paper, we reconsider the problem of (partial) false negative samples from the Mutual Information (MI) Maximization perspective: the traditional contrastive loss (like the InfoNCE loss) pushes the anchor equally far from all negative samples, regardless of their possible semantic similarities. We theoretically show that the InfoNCE loss not only maximizes the MI between the anchor and positive samples but also minimizes the MI between the anchor and false negative samples, even though they share similar semantics. This offers a possible theoretical explanation for the observation that false negative samples in cross-modal contrastive learning decrease the downstream task performance of VLP models. The above analysis motivates us to propose a VLP model with a novel Semantic Awared Contrastive Learning framework, named SACL, in which different negative samples are assigned different contrastive weights according to the semantic similarity between them and the anchor. | [
"Jiang, Chaoya",
"Ye, Wei",
"Xu, Haiyang",
"Huang, Songfang",
"Huang, Fei",
"Zhang, Shikun"
] | Vision Language Pre-training by Contrastive Learning with Cross-Modal Similarity Regulation | acl-long.819 | Oral | 2305.04474 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
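SACL, as summarized above, assigns different contrastive weights to different negatives. Below is one plausible instantiation of that idea as a weighted InfoNCE loss; the weighting function and similarity source are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def weighted_info_nce(anchor, positive, negatives, neg_weights, tau=0.07):
    """InfoNCE variant that down-weights semantically similar negatives.

    anchor, positive: (d,); negatives: (n, d); neg_weights: (n,) in [0, 1],
    e.g. 1 - semantic_similarity(anchor, negative) from some scorer, so
    likely false negatives contribute less to the repulsion term.
    """
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negatives, dim=-1)
    pos = torch.exp(a @ p / tau)
    neg = (neg_weights * torch.exp(n @ a / tau)).sum()
    return -torch.log(pos / (pos + neg))

print(weighted_info_nce(torch.randn(8), torch.randn(8),
                        torch.randn(4, 8), torch.rand(4)).item())
```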
https://aclanthology.org/2023.acl-long.820.bib | https://aclanthology.org/2023.acl-long.820/ | @inproceedings{leng-etal-2023-tell2design,
title = "{T}ell2{D}esign: A Dataset for Language-Guided Floor Plan Generation",
author = "Leng, Sicong and
Zhou, Yang and
Dupty, Mohammed Haroon and
Lee, Wee Sun and
Joyce, Sam and
Lu, Wei",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.820",
doi = "10.18653/v1/2023.acl-long.820",
pages = "14680--14697",
abstract = "We consider the task of generating designs directly from natural language descriptions, and consider floor plan generation as the initial research area. Language conditional generative models have recently been very successful in generating high-quality artistic images. However, designs must satisfy different constraints that are not present in generating artistic images, particularly spatial and relational constraints. We make multiple contributions to initiate research on this task. First, we introduce a novel dataset, Tell2Design (T2D), which contains more than 80k floor plan designs associated with natural language instructions. Second, we propose a Sequence-to-Sequence model that can serve as a strong baseline for future research. Third, we benchmark this task with several text-conditional image generation models. We conclude by conducting human evaluations on the generated samples and providing an analysis of human performance. We hope our contributions will propel the research on language-guided design generation forward.",
}
| We consider the task of generating designs directly from natural language descriptions, and consider floor plan generation as the initial research area. Language conditional generative models have recently been very successful in generating high-quality artistic images. However, designs must satisfy different constraints that are not present in generating artistic images, particularly spatial and relational constraints. We make multiple contributions to initiate research on this task. First, we introduce a novel dataset, Tell2Design (T2D), which contains more than 80k floor plan designs associated with natural language instructions. Second, we propose a Sequence-to-Sequence model that can serve as a strong baseline for future research. Third, we benchmark this task with several text-conditional image generation models. We conclude by conducting human evaluations on the generated samples and providing an analysis of human performance. We hope our contributions will propel the research on language-guided design generation forward. | [
"Leng, Sicong",
"Zhou, Yang",
"Dupty, Mohammed Haroon",
"Lee, Wee Sun",
"Joyce, Sam",
"Lu, Wei"
] | Tell2Design: A Dataset for Language-Guided Floor Plan Generation | acl-long.820 | Oral | 2311.15941 | [
"https://github.com/lucidrains/imagen-pytorch"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.821.bib | https://aclanthology.org/2023.acl-long.821/ | @inproceedings{yao-etal-2023-human,
title = "Are Human Explanations Always Helpful? Towards Objective Evaluation of Human Natural Language Explanations",
author = "Yao, Bingsheng and
Sen, Prithviraj and
Popa, Lucian and
Hendler, James and
Wang, Dakuo",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.821",
doi = "10.18653/v1/2023.acl-long.821",
pages = "14698--14713",
abstract = "Human-annotated labels and explanations are critical for training explainable NLP models. However, unlike human-annotated labels whose quality is easier to calibrate (e.g., with a majority vote), human-crafted free-form explanations can be quite subjective. Before blindly using them as ground truth to train ML models, a vital question needs to be asked: How do we evaluate a human-annotated explanation{'}s quality? In this paper, we build on the view that the quality of a human-annotated explanation can be measured based on its helpfulness (or impairment) to the ML models{'} performance for the desired NLP tasks for which the annotations were collected. In comparison to the commonly used Simulatability score, we define a new metric that can take into consideration the helpfulness of an explanation for model performance at both fine-tuning and inference. With the help of a unified dataset format, we evaluated the proposed metric on five datasets (e.g., e-SNLI) against two model architectures (T5 and BART), and the results show that our proposed metric can objectively evaluate the quality of human-annotated explanations, while Simulatability falls short.",
}
| Human-annotated labels and explanations are critical for training explainable NLP models. However, unlike human-annotated labels whose quality is easier to calibrate (e.g., with a majority vote), human-crafted free-form explanations can be quite subjective. Before blindly using them as ground truth to train ML models, a vital question needs to be asked: How do we evaluate a human-annotated explanation{'}s quality? In this paper, we build on the view that the quality of a human-annotated explanation can be measured based on its helpfulness (or impairment) to the ML models{'} performance for the desired NLP tasks for which the annotations were collected. In comparison to the commonly used Simulatability score, we define a new metric that can take into consideration the helpfulness of an explanation for model performance at both fine-tuning and inference. With the help of a unified dataset format, we evaluated the proposed metric on five datasets (e.g., e-SNLI) against two model architectures (T5 and BART), and the results show that our proposed metric can objectively evaluate the quality of human-annotated explanations, while Simulatability falls short. | [
"Yao, Bingsheng",
"Sen, Prithviraj",
"Popa, Lucian",
"Hendler, James",
"Wang, Dakuo"
] | Are Human Explanations Always Helpful? Towards Objective Evaluation of Human Natural Language Explanations | acl-long.821 | Oral | 2305.03117 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.822.bib | https://aclanthology.org/2023.acl-long.822/ | @inproceedings{yoo-etal-2023-rethinking,
title = "Rethinking Annotation: Can Language Learners Contribute?",
author = "Yoo, Haneul and
Putri, Rifki Afina and
Lee, Changyoon and
Lee, Youngin and
Ahn, So-Yeon and
Kang, Dongyeop and
Oh, Alice",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.822",
doi = "10.18653/v1/2023.acl-long.822",
pages = "14714--14733",
abstract = "Researchers have traditionally recruited native speakers to provide annotations for the widely used benchmark datasets. But there are languages for which recruiting native speakers is difficult, and it would help to get learners of those languages to annotate the data. In this paper, we investigate whether language learners can contribute annotations to the benchmark datasets. In a carefully controlled annotation experiment, we recruit 36 language learners, provide two types of additional resources (dictionaries and machine-translated sentences), and perform mini-tests to measure their language proficiency. We target three languages, English, Korean, and Indonesian, and four NLP tasks, sentiment analysis, natural language inference, named entity recognition, and machine reading comprehension. We find that language learners, especially those with intermediate or advanced language proficiency, are able to provide fairly accurate labels with the help of additional resources. Moreover, we show that data annotation improves learners{'} language proficiency in terms of vocabulary and grammar. The implication of our findings is that broadening the annotation task to include language learners can open up the opportunity to build benchmark datasets for languages for which it is difficult to recruit native speakers.",
}
| Researchers have traditionally recruited native speakers to provide annotations for the widely used benchmark datasets. But there are languages for which recruiting native speakers is difficult, and it would help to get learners of those languages to annotate the data. In this paper, we investigate whether language learners can contribute annotations to the benchmark datasets. In a carefully controlled annotation experiment, we recruit 36 language learners, provide two types of additional resources (dictionaries and machine-translated sentences), and perform mini-tests to measure their language proficiency. We target three languages, English, Korean, and Indonesian, and four NLP tasks, sentiment analysis, natural language inference, named entity recognition, and machine reading comprehension. We find that language learners, especially those with intermediate or advanced language proficiency, are able to provide fairly accurate labels with the help of additional resources. Moreover, we show that data annotation improves learners{'} language proficiency in terms of vocabulary and grammar. The implication of our findings is that broadening the annotation task to include language learners can open up the opportunity to build benchmark datasets for languages for which it is difficult to recruit native speakers. | [
"Yoo, Haneul",
"Putri, Rifki Afina",
"Lee, Changyoon",
"Lee, Youngin",
"Ahn, So-Yeon",
"Kang, Dongyeop",
"Oh, Alice"
] | Rethinking Annotation: Can Language Learners Contribute? | acl-long.822 | Poster | 2210.06828 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.823.bib | https://aclanthology.org/2023.acl-long.823/ | @inproceedings{wu-etal-2023-information,
title = "Information Screening whilst Exploiting! Multimodal Relation Extraction with Feature Denoising and Multimodal Topic Modeling",
author = "Wu, Shengqiong and
Fei, Hao and
Cao, Yixin and
Bing, Lidong and
Chua, Tat-Seng",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.823",
doi = "10.18653/v1/2023.acl-long.823",
pages = "14734--14751",
abstract = "Existing research on multimodal relation extraction (MRE) faces two co-existing challenges, internal-information over-utilization and external-information under-exploitation. To combat that, we propose a novel framework that simultaneously implements the idea of internal-information screening and external-information exploiting. First, we represent the fine-grained semantic structures of the input image and text with the visual and textual scene graphs, which are further fused into a unified cross-modal graph (CMG). Based on CMG, we perform structure refinement with the guidance of the graph information bottleneck principle, actively denoising the less-informative features. Next, we perform topic modeling over the input image and text, incorporating latent multimodal topic features to enrich the contexts. On the benchmark MRE dataset, our system outperforms the current best model significantly. With further in-depth analyses, we reveal the great potential of our method for the MRE task.",
}
| Existing research on multimodal relation extraction (MRE) faces two co-existing challenges, internal-information over-utilization and external-information under-exploitation. To combat that, we propose a novel framework that simultaneously implements the idea of internal-information screening and external-information exploiting. First, we represent the fine-grained semantic structures of the input image and text with the visual and textual scene graphs, which are further fused into a unified cross-modal graph (CMG). Based on CMG, we perform structure refinement with the guidance of the graph information bottleneck principle, actively denoising the less-informative features. Next, we perform topic modeling over the input image and text, incorporating latent multimodal topic features to enrich the contexts. On the benchmark MRE dataset, our system outperforms the current best model significantly. With further in-depth analyses, we reveal the great potential of our method for the MRE task. | [
"Wu, Shengqiong",
"Fei, Hao",
"Cao, Yixin",
"Bing, Lidong",
"Chua, Tat-Seng"
] | Information Screening whilst Exploiting! Multimodal Relation Extraction with Feature Denoising and Multimodal Topic Modeling | acl-long.823 | Poster | 2305.11719 | [
"https://github.com/chocowu/mre-ise"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.824.bib | https://aclanthology.org/2023.acl-long.824/ | @inproceedings{shi-huang-2023-multiemo,
title = "{M}ulti{EMO}: An Attention-Based Correlation-Aware Multimodal Fusion Framework for Emotion Recognition in Conversations",
author = "Shi, Tao and
Huang, Shao-Lun",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.824",
doi = "10.18653/v1/2023.acl-long.824",
pages = "14752--14766",
abstract = "Emotion Recognition in Conversations (ERC) is an increasingly popular task in the Natural Language Processing community, which seeks to achieve accurate emotion classifications of utterances expressed by speakers during a conversation. Most existing approaches focus on modeling speaker and contextual information based on the textual modality, while the complementarity of multimodal information has not been well leveraged, few current methods have sufficiently captured the complex correlations and mapping relationships across different modalities. Furthermore, existing state-of-the-art ERC models have difficulty classifying minority and semantically similar emotion categories. To address these challenges, we propose a novel attention-based correlation-aware multimodal fusion framework named MultiEMO, which effectively integrates multimodal cues by capturing cross-modal mapping relationships across textual, audio and visual modalities based on bidirectional multi-head cross-attention layers. The difficulty of recognizing minority and semantically hard-to-distinguish emotion classes is alleviated by our proposed Sample-Weighted Focal Contrastive (SWFC) loss. Extensive experiments on two benchmark ERC datasets demonstrate that our MultiEMO framework consistently outperforms existing state-of-the-art approaches in all emotion categories on both datasets, the improvements in minority and semantically similar emotions are especially significant.",
}
| Emotion Recognition in Conversations (ERC) is an increasingly popular task in the Natural Language Processing community, which seeks to achieve accurate emotion classifications of utterances expressed by speakers during a conversation. Most existing approaches focus on modeling speaker and contextual information based on the textual modality, while the complementarity of multimodal information has not been well leveraged; few current methods have sufficiently captured the complex correlations and mapping relationships across different modalities. Furthermore, existing state-of-the-art ERC models have difficulty classifying minority and semantically similar emotion categories. To address these challenges, we propose a novel attention-based correlation-aware multimodal fusion framework named MultiEMO, which effectively integrates multimodal cues by capturing cross-modal mapping relationships across textual, audio and visual modalities based on bidirectional multi-head cross-attention layers. The difficulty of recognizing minority and semantically hard-to-distinguish emotion classes is alleviated by our proposed Sample-Weighted Focal Contrastive (SWFC) loss. Extensive experiments on two benchmark ERC datasets demonstrate that our MultiEMO framework consistently outperforms existing state-of-the-art approaches in all emotion categories on both datasets; the improvements in minority and semantically similar emotions are especially significant. | [
"Shi, Tao",
"Huang, Shao-Lun"
] | MultiEMO: An Attention-Based Correlation-Aware Multimodal Fusion Framework for Emotion Recognition in Conversations | acl-long.824 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.825.bib | https://aclanthology.org/2023.acl-long.825/ | @inproceedings{pires-etal-2023-learning,
title = "Learning Language-Specific Layers for Multilingual Machine Translation",
author = "Pires, Telmo and
Schmidt, Robin and
Liao, Yi-Hsiu and
Peitz, Stephan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.825",
doi = "10.18653/v1/2023.acl-long.825",
pages = "14767--14783",
abstract = "Multilingual Machine Translation promises to improve translation quality between non-English languages. This is advantageous for several reasons, namely lower latency (no need to translate twice), and reduced error cascades (e.g., avoiding losing gender and formality information when translating through English).On the downside, adding more languages reduces model capacity per language, which is usually countered by increasing the overall model size, making training harder and inference slower. In this work, we introduce Language-Specific Transformer Layers (LSLs), which allow us to increase model capacity, while keeping the amount of computation and the number of parameters used in the forward pass constant. The key idea is to have some layers of the encoder be source or target language-specific, while keeping the remaining layers shared. We study the best way to place these layers using a neural architecture search inspired approach, and achieve an improvement of 1.3 chrF (1.5 spBLEU) points over not using LSLs on a separate decoder architecture, and 1.9 chrF (2.2 spBLEU) on a shared decoder one.",
}
| Multilingual Machine Translation promises to improve translation quality between non-English languages. This is advantageous for several reasons, namely lower latency (no need to translate twice), and reduced error cascades (e.g., avoiding losing gender and formality information when translating through English). On the downside, adding more languages reduces model capacity per language, which is usually countered by increasing the overall model size, making training harder and inference slower. In this work, we introduce Language-Specific Transformer Layers (LSLs), which allow us to increase model capacity, while keeping the amount of computation and the number of parameters used in the forward pass constant. The key idea is to have some layers of the encoder be source or target language-specific, while keeping the remaining layers shared. We study the best way to place these layers using a neural architecture search inspired approach, and achieve an improvement of 1.3 chrF (1.5 spBLEU) points over not using LSLs on a separate decoder architecture, and 1.9 chrF (2.2 spBLEU) on a shared decoder one. | [
"Pires, Telmo",
"Schmidt, Robin",
"Liao, Yi-Hsiu",
"Peitz, Stephan"
] | Learning Language-Specific Layers for Multilingual Machine Translation | acl-long.825 | Poster | 2305.02665 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
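The LSL idea above keeps per-token compute constant by running only one language's copy of each language-specific layer. A toy PyTorch sketch of that routing follows; the choice of which layer indices are language-specific is exactly what the paper searches over, so the hard-coded indices here are placeholders.

```python
import torch
import torch.nn as nn

class LSLEncoder(nn.Module):
    """Encoder with some per-language layers and the rest shared.

    Only one language's copy runs per forward pass, so compute matches a
    plain encoder even though total parameters grow with the languages.
    """
    def __init__(self, d_model, n_layers, ls_indices, languages):
        super().__init__()
        def make_layer():
            return nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.layers = nn.ModuleList([
            nn.ModuleDict({lang: make_layer() for lang in languages})
            if i in ls_indices else make_layer()
            for i in range(n_layers)
        ])

    def forward(self, x, lang):
        for layer in self.layers:
            x = layer[lang](x) if isinstance(layer, nn.ModuleDict) else layer(x)
        return x

enc = LSLEncoder(d_model=16, n_layers=4, ls_indices={0, 3}, languages=["en", "de"])
print(enc(torch.randn(2, 5, 16), lang="de").shape)
```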
https://aclanthology.org/2023.acl-long.826.bib | https://aclanthology.org/2023.acl-long.826/ | @inproceedings{yu-etal-2023-personality,
title = "Personality Understanding of Fictional Characters during Book Reading",
author = "Yu, Mo and
Li, Jiangnan and
Yao, Shunyu and
Pang, Wenjie and
Zhou, Xiaochen and
Xiao, Zhou and
Meng, Fandong and
Zhou, Jie",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.826",
doi = "10.18653/v1/2023.acl-long.826",
pages = "14784--14802",
abstract = "Comprehending characters{'} personalities is a crucial aspect of story reading. As readers engage with a story, their understanding of a character evolves based on new events and information; and multiple fine-grained aspects of personalities can be perceived. This leads to a natural problem of situated and fine-grained personality understanding. The problem has not been studied in the NLP field, primarily due to the lack of appropriate datasets mimicking the process of book reading. We present the first labeled dataset PersoNet for this problem. Our novel annotation strategy involves annotating user notes from online reading apps as a proxy for the original books. Experiments and human studies indicate that our dataset construction is both efficient and accurate; and our task heavily relies on long-term context to achieve accurate predictions for both machines and humans.",
}
| Comprehending characters{'} personalities is a crucial aspect of story reading. As readers engage with a story, their understanding of a character evolves based on new events and information; and multiple fine-grained aspects of personalities can be perceived. This leads to a natural problem of situated and fine-grained personality understanding. The problem has not been studied in the NLP field, primarily due to the lack of appropriate datasets mimicking the process of book reading. We present the first labeled dataset PersoNet for this problem. Our novel annotation strategy involves annotating user notes from online reading apps as a proxy for the original books. Experiments and human studies indicate that our dataset construction is both efficient and accurate; and our task heavily relies on long-term context to achieve accurate predictions for both machines and humans. | [
"Yu, Mo",
"Li, Jiangnan",
"Yao, Shunyu",
"Pang, Wenjie",
"Zhou, Xiaochen",
"Xiao, Zhou",
"Meng, F",
"ong",
"Zhou, Jie"
] | Personality Understanding of Fictional Characters during Book Reading | acl-long.826 | Poster | 2305.10156 | [
"https://github.com/gorov/personet_acl23"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.827.bib | https://aclanthology.org/2023.acl-long.827/ | @inproceedings{zhu-etal-2023-storytrans,
title = "{S}tory{T}rans: Non-Parallel Story Author-Style Transfer with Discourse Representations and Content Enhancing",
author = "Zhu, Xuekai and
Guan, Jian and
Huang, Minlie and
Liu, Juan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.827",
doi = "10.18653/v1/2023.acl-long.827",
pages = "14803--14819",
abstract = "Non-parallel text style transfer is an important task in natural language generation. However, previous studies concentrate on the token or sentence level, such as sentence sentiment and formality transfer, but neglect long style transfer at the discourse level. Long texts usually involve more complicated author linguistic preferences such as discourse structures than sentences. In this paper, we formulate the task of non-parallel story author-style transfer, which requires transferring an input story into a specified author style while maintaining source semantics. To tackle this problem, we propose a generation model, named StoryTrans, which leverages discourse representations to capture source content information and transfer them to target styles with learnable style embeddings. We use an additional training objective to disentangle stylistic features from the learned discourse representation to prevent the model from degenerating to an auto-encoder. Moreover, to enhance content preservation, we design a mask-and-fill framework to explicitly fuse style-specific keywords of source texts into generation. Furthermore, we constructed new datasets for this task in Chinese and English, respectively. Extensive experiments show that our model outperforms strong baselines in overall performance of style transfer and content preservation.",
}
| Non-parallel text style transfer is an important task in natural language generation. However, previous studies concentrate on the token or sentence level, such as sentence sentiment and formality transfer, but neglect style transfer on long texts at the discourse level. Long texts usually involve more complicated author linguistic preferences, such as discourse structures, than sentences do. In this paper, we formulate the task of non-parallel story author-style transfer, which requires transferring an input story into a specified author style while maintaining source semantics. To tackle this problem, we propose a generation model, named StoryTrans, which leverages discourse representations to capture source content information and transfer them to target styles with learnable style embeddings. We use an additional training objective to disentangle stylistic features from the learned discourse representation to prevent the model from degenerating to an auto-encoder. Moreover, to enhance content preservation, we design a mask-and-fill framework to explicitly fuse style-specific keywords of source texts into generation. Furthermore, we construct new datasets for this task in Chinese and English, respectively. Extensive experiments show that our model outperforms strong baselines in overall performance of style transfer and content preservation. | [
"Zhu, Xuekai",
"Guan, Jian",
"Huang, Minlie",
"Liu, Juan"
] | StoryTrans: Non-Parallel Story Author-Style Transfer with Discourse Representations and Content Enhancing | acl-long.827 | Poster | 2208.13423 | [
"https://github.com/xuekai-zhu/storytrans_public"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.828.bib | https://aclanthology.org/2023.acl-long.828/ | @inproceedings{tan-etal-2023-towards,
title = "Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models",
author = "Tan, Qingyu and
Ng, Hwee Tou and
Bing, Lidong",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.828",
doi = "10.18653/v1/2023.acl-long.828",
pages = "14820--14835",
abstract = "Reasoning about time is of fundamental importance. Many facts are time-dependent. For example, athletes change teams from time to time, and different government officials are elected periodically. Previous time-dependent question answering (QA) datasets tend to be biased in either their coverage of time spans or question types. In this paper, we introduce a comprehensive probing dataset TempReason to evaluate the temporal reasoning capability of large language models. Our dataset includes questions of three temporal reasoning levels. In addition, we also propose a novel learning framework to improve the temporal reasoning capability of large language models, based on temporal span extraction and time-sensitive reinforcement learning. We conducted experiments in closed book QA, open book QA, and reasoning QA settings and demonstrated the effectiveness of our approach.",
}
| Reasoning about time is of fundamental importance. Many facts are time-dependent. For example, athletes change teams from time to time, and different government officials are elected periodically. Previous time-dependent question answering (QA) datasets tend to be biased in either their coverage of time spans or question types. In this paper, we introduce a comprehensive probing dataset TempReason to evaluate the temporal reasoning capability of large language models. Our dataset includes questions of three temporal reasoning levels. In addition, we also propose a novel learning framework to improve the temporal reasoning capability of large language models, based on temporal span extraction and time-sensitive reinforcement learning. We conducted experiments in closed book QA, open book QA, and reasoning QA settings and demonstrated the effectiveness of our approach. | [
"Tan, Qingyu",
"Ng, Hwee Tou",
"Bing, Lidong"
] | Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models | acl-long.828 | Poster | 2306.08952 | [
"https://github.com/damo-nlp-sg/tempreason"
] | https://huggingface.co/papers/2306.08952 | 1 | 0 | 0 | 3 | 1 | [] | [
"sxiong/TGQA"
] | [] |
https://aclanthology.org/2023.acl-long.829.bib | https://aclanthology.org/2023.acl-long.829/ | @inproceedings{rotem-etal-2023-finding,
title = "Finding the {SWEET} Spot: Analysis and Improvement of Adaptive Inference in Low Resource Settings",
author = "Rotem, Daniel and
Hassid, Michael and
Mamou, Jonathan and
Schwartz, Roy",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.829",
doi = "10.18653/v1/2023.acl-long.829",
pages = "14836--14851",
abstract = "Adaptive inference is a simple method for reducing inference costs. The method works by maintaining multiple classifiers of different capacities, and allocating resources to each test instance according to its difficulty. In this work, we compare the two main approaches for adaptive inference, Early-Exit and Multi-Model, when training data is limited. First, we observe that for models with the same architecture and size, individual Multi-Model classifiers outperform their Early-Exit counterparts by an average of 2.3{\%}. We show that this gap is caused by Early-Exit classifiers sharing model parameters during training, resulting in conflicting gradient updates of model weights. We find that despite this gap, Early-Exit still provides a better speed-accuracy trade-off due to the overhead of the Multi-Model approach. To address these issues, we propose SWEET (Separating Weights for Early-Exit Transformers) an Early-Exit fine-tuning method that assigns each classifier its own set of unique model weights, not updated by other classifiers. We compare SWEET{'}s speed-accuracy curve to standard Early-Exit and Multi-Model baselines and find that it outperforms both methods at fast speeds while maintaining comparable scores to Early- Exit at slow speeds. Moreover, SWEET individual classifiers outperform Early-Exit ones by 1.1{\%} on average. SWEET enjoys the benefits of both methods, paving the way for further reduction of inference costs in NLP.",
}
| Adaptive inference is a simple method for reducing inference costs. The method works by maintaining multiple classifiers of different capacities, and allocating resources to each test instance according to its difficulty. In this work, we compare the two main approaches for adaptive inference, Early-Exit and Multi-Model, when training data is limited. First, we observe that for models with the same architecture and size, individual Multi-Model classifiers outperform their Early-Exit counterparts by an average of 2.3{\\%}. We show that this gap is caused by Early-Exit classifiers sharing model parameters during training, resulting in conflicting gradient updates of model weights. We find that despite this gap, Early-Exit still provides a better speed-accuracy trade-off due to the overhead of the Multi-Model approach. To address these issues, we propose SWEET (Separating Weights for Early-Exit Transformers), an Early-Exit fine-tuning method that assigns each classifier its own set of unique model weights, not updated by other classifiers. We compare SWEET{'}s speed-accuracy curve to standard Early-Exit and Multi-Model baselines and find that it outperforms both methods at fast speeds while maintaining comparable scores to Early-Exit at slow speeds. Moreover, SWEET individual classifiers outperform Early-Exit ones by 1.1{\\%} on average. SWEET enjoys the benefits of both methods, paving the way for further reduction of inference costs in NLP. | [
"Rotem, Daniel",
"Hassid, Michael",
"Mamou, Jonathan",
"Schwartz, Roy"
] | Finding the SWEET Spot: Analysis and Improvement of Adaptive Inference in Low Resource Settings | acl-long.829 | Poster | 2306.02307 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
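SWEET, per the abstract above, stops each exit classifier from updating weights it does not own. The toy sketch below approximates that with a stop-gradient between backbone segments, so each segment is trained only by its own exit; the real method operates on Transformer layers and its exact gradient routing may differ.

```python
import torch
import torch.nn as nn

class SeparatedExits(nn.Module):
    """Early-exit stack where detaching between segments keeps each
    segment's weights from being updated by later classifiers."""
    def __init__(self, d=16, n_exits=3, n_classes=2):
        super().__init__()
        self.segments = nn.ModuleList([nn.Linear(d, d) for _ in range(n_exits)])
        self.exits = nn.ModuleList([nn.Linear(d, n_classes) for _ in range(n_exits)])

    def forward(self, x):
        logits, h = [], x
        for segment, exit_head in zip(self.segments, self.exits):
            h = torch.relu(segment(h))
            logits.append(exit_head(h))
            h = h.detach()  # block gradients from all later exits
        return logits

model = SeparatedExits()
targets = torch.zeros(4, dtype=torch.long)
loss = sum(nn.functional.cross_entropy(l, targets) for l in model(torch.randn(4, 16)))
loss.backward()
print(loss.item())
```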
https://aclanthology.org/2023.acl-long.830.bib | https://aclanthology.org/2023.acl-long.830/ | @inproceedings{ho-etal-2023-large,
title = "Large Language Models Are Reasoning Teachers",
author = "Ho, Namgyu and
Schmid, Laura and
Yun, Se-Young",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.830",
doi = "10.18653/v1/2023.acl-long.830",
pages = "14852--14882",
abstract = "Recent works have shown that chain-of-thought (CoT) prompting can elicit language models to solve complex reasoning tasks, step-by-step. However, prompt-based CoT methods are dependent on very large models such as GPT-3 175B which are prohibitive to deploy at scale. In this paper, we use these large models as reasoning teachers to enable complex reasoning in smaller models and reduce model size requirements by several orders of magnitude. We propose Fine-tune-CoT, a method that generates reasoning samples from very large teacher models to fine-tune smaller models. We evaluate our method on a wide range of public models and complex tasks. We find that Fine-tune-CoT enables substantial reasoning capability in small models, far outperforming prompt-based baselines and even the teacher model in many tasks. Additionally, we extend our method by leveraging the teacher model{'}s ability to generate multiple distinct rationales for each original sample. Enriching the fine-tuning data with such diverse reasoning results in a substantial performance boost across datasets, even for very small models. We conduct ablations and sample studies to understand the emergence of reasoning capabilities of student models. Our code implementation and data are available at \url{https://github.com/itsnamgyu/reasoning-teacher}.",
}
| Recent works have shown that chain-of-thought (CoT) prompting can elicit language models to solve complex reasoning tasks, step-by-step. However, prompt-based CoT methods are dependent on very large models such as GPT-3 175B which are prohibitive to deploy at scale. In this paper, we use these large models as reasoning teachers to enable complex reasoning in smaller models and reduce model size requirements by several orders of magnitude. We propose Fine-tune-CoT, a method that generates reasoning samples from very large teacher models to fine-tune smaller models. We evaluate our method on a wide range of public models and complex tasks. We find that Fine-tune-CoT enables substantial reasoning capability in small models, far outperforming prompt-based baselines and even the teacher model in many tasks. Additionally, we extend our method by leveraging the teacher model{'}s ability to generate multiple distinct rationales for each original sample. Enriching the fine-tuning data with such diverse reasoning results in a substantial performance boost across datasets, even for very small models. We conduct ablations and sample studies to understand the emergence of reasoning capabilities of student models. Our code implementation and data are available at \url{https://github.com/itsnamgyu/reasoning-teacher}. | [
"Ho, Namgyu",
"Schmid, Laura",
"Yun, Se-Young"
] | Large Language Models Are Reasoning Teachers | acl-long.830 | Poster | 2212.10071 | [
"https://github.com/itsnamgyu/reasoning-teacher"
] | https://huggingface.co/papers/2212.10071 | 0 | 0 | 0 | 3 | 1 | [] | [
"peterkchung/commonsense_cot_partial_annotated_v0.1",
"peterkchung/commonsense_cot_partial_raw",
"peterkchung/commonsense_cot_partial_annotated_prelim"
] | [] |
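Fine-tune-CoT, as described above, turns teacher-sampled rationales into fine-tuning data for a small student. A minimal sketch of that data-assembly step is below; `teacher_generate` is a hypothetical stand-in for a call to a large teacher model, and the end-of-string answer filter is a simplification of the paper's rationale filtering.

```python
def build_finetune_cot_data(questions, answers, teacher_generate, n_rationales=4):
    """Assemble (prompt, completion) pairs for student fine-tuning.

    Sampling several rationales per question gives the 'diverse
    reasoning' variant; rationales that do not reach the gold answer
    are discarded.
    """
    data = []
    for question, gold in zip(questions, answers):
        cot_prompt = f"Q: {question}\nA: Let's think step by step."
        for _ in range(n_rationales):
            rationale = teacher_generate(cot_prompt)
            if rationale.strip().endswith(str(gold)):  # keep correct chains only
                data.append({"prompt": f"Q: {question}\nA:",
                             "completion": f" {rationale}"})
    return data

# toy teacher that always 'reasons' its way to the right answer
demo = build_finetune_cot_data(["2+2?"], ["4"],
                               lambda p: "2 plus 2 equals 4. The answer is 4")
print(demo)
```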
https://aclanthology.org/2023.acl-long.831.bib | https://aclanthology.org/2023.acl-long.831/ | @inproceedings{zhao-etal-2023-abductive,
title = "Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations",
author = "Zhao, Wenting and
Chiu, Justin and
Cardie, Claire and
Rush, Alexander",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.831",
doi = "10.18653/v1/2023.acl-long.831",
pages = "14883--14896",
abstract = "Abductive reasoning aims to find plausible explanations for an event. This style of reasoning is critical for commonsense tasks where there are often multiple plausible explanations. Existing approaches for abductive reasoning in natural language processing (NLP) often rely on manually generated annotations for supervision; however, such annotations can be subjective and biased. Instead of using direct supervision, this work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context. The method uses posterior regularization to enforce a mutual exclusion constraint, encouraging the model to learn the distinction between fluent explanations and plausible ones. We evaluate our approach on a diverse set of abductive reasoning datasets; experimental results show that our approach outperforms or is comparable to directly applying pretrained language models in a zero-shot manner and other knowledge-augmented zero-shot methods.",
}
| Abductive reasoning aims to find plausible explanations for an event. This style of reasoning is critical for commonsense tasks where there are often multiple plausible explanations. Existing approaches for abductive reasoning in natural language processing (NLP) often rely on manually generated annotations for supervision; however, such annotations can be subjective and biased. Instead of using direct supervision, this work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context. The method uses posterior regularization to enforce a mutual exclusion constraint, encouraging the model to learn the distinction between fluent explanations and plausible ones. We evaluate our approach on a diverse set of abductive reasoning datasets; experimental results show that our approach outperforms or is comparable to directly applying pretrained language models in a zero-shot manner and other knowledge-augmented zero-shot methods. | [
"Zhao, Wenting",
"Chiu, Justin",
"Cardie, Claire",
"Rush, Alex",
"er"
] | Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations | acl-long.831 | Oral | 2305.14618 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
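The mutual-exclusion constraint above makes candidate explanations compete for probability mass. The paper enforces this with posterior regularization during training; the snippet below shows only the competitive normalization itself, as a hedged illustration.

```python
import torch

def mutually_exclusive_posteriors(explanation_loglikes):
    """Normalize per-explanation LM log-likelihoods over the candidate set.

    Since the result sums to one, raising one explanation's probability
    necessarily lowers its rivals', separating merely fluent
    explanations from genuinely plausible ones.
    """
    return torch.softmax(explanation_loglikes, dim=-1)

print(mutually_exclusive_posteriors(torch.tensor([-1.1, -3.2, -4.0])))
```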
https://aclanthology.org/2023.acl-long.832.bib | https://aclanthology.org/2023.acl-long.832/ | @inproceedings{wang-etal-2023-pesco,
title = "{PESCO}: Prompt-enhanced Self Contrastive Learning for Zero-shot Text Classification",
author = "Wang, Yau-Shian and
Chi, Ta-Chung and
Zhang, Ruohong and
Yang, Yiming",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.832",
doi = "10.18653/v1/2023.acl-long.832",
pages = "14897--14911",
abstract = "We present PESCO, a novel contrastive learning framework that substantially improves the performance of zero-shot text classification. We formulate text classification as a neural text retrieval problem where each document is treated as a query, and the system learns the mapping from each query to the relevant class labels by (1) adding prompts to enhance label retrieval, and (2) using retrieved labels to enrich the training set in a self-training loop of contrastive learning. PESCO achieves state-of-the-art performance on four benchmark text classification datasets. On DBpedia, we achieve 98.5{\%} accuracy without any labeled data, which is close to the fully-supervised result. Extensive experiments and analyses show all the components of PESCO are necessary for improving the performance of zero-shot text classification.",
}
| We present PESCO, a novel contrastive learning framework that substantially improves the performance of zero-shot text classification. We formulate text classification as a neural text retrieval problem where each document is treated as a query, and the system learns the mapping from each query to the relevant class labels by (1) adding prompts to enhance label retrieval, and (2) using retrieved labels to enrich the training set in a self-training loop of contrastive learning. PESCO achieves state-of-the-art performance on four benchmark text classification datasets. On DBpedia, we achieve 98.5{\%} accuracy without any labeled data, which is close to the fully-supervised result. Extensive experiments and analyses show all the components of PESCO are necessary for improving the performance of zero-shot text classification. | [
"Wang, Yau-Shian",
"Chi, Ta-Chung",
"Zhang, Ruohong",
"Yang, Yiming"
] | PESCO: Prompt-enhanced Self Contrastive Learning for Zero-shot Text Classification | acl-long.832 | Poster | 2305.14963 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
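PESCO's retrieval view of classification, summarized above, scores a document against prompted label texts. A minimal NumPy sketch of that nearest-label step follows; the prompt template and encoder are assumptions, since any sentence encoder could produce the vectors.

```python
import numpy as np

def zero_shot_classify(doc_vec, label_vecs):
    """Treat the document as a query; return the index of the nearest label.

    doc_vec: (d,); label_vecs: (n_labels, d). In the PESCO setting the
    label texts would first be wrapped in prompts (e.g. 'This text is
    about {label}.') before encoding.
    """
    d = doc_vec / np.linalg.norm(doc_vec)
    l = label_vecs / np.linalg.norm(label_vecs, axis=1, keepdims=True)
    return int(np.argmax(l @ d))

rng = np.random.default_rng(0)
print(zero_shot_classify(rng.normal(size=8), rng.normal(size=(4, 8))))
```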
https://aclanthology.org/2023.acl-long.833.bib | https://aclanthology.org/2023.acl-long.833/ | @inproceedings{guo-etal-2023-visually,
title = "Visually-augmented pretrained language models for {NLP} tasks without images",
author = "Guo, Hangyu and
Zhou, Kun and
Zhao, Wayne Xin and
Zhang, Qinyu and
Wen, Ji-Rong",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.833",
doi = "10.18653/v1/2023.acl-long.833",
pages = "14912--14929",
abstract = "Although pre-trained language models (PLMs) have shown impressive performance by text-only self-supervised training, they are found lack of visual semantics or commonsense. Existing solutions often rely on explicit images for visual knowledge augmentation (requiring time-consuming retrieval or generation), and they also conduct the augmentation for the whole input text, without considering whether it is actually needed in specific inputs or tasks. To address these issues, we propose a novel **V**isually-**A**ugmented fine-tuning approach that can be generally applied to various PLMs or NLP tasks, **W**ithout using any retrieved or generated **I**mages, namely **VAWI**. Experimental results show that our approach can consistently improve the performance of BERT, RoBERTa, BART, and T5 at different scales, and outperform several competitive baselines on ten tasks. Our codes and data are publicly available at \url{https://github.com/RUCAIBox/VAWI}.",
}
| Although pre-trained language models (PLMs) have shown impressive performance by text-only self-supervised training, they are found to lack visual semantics or commonsense. Existing solutions often rely on explicit images for visual knowledge augmentation (requiring time-consuming retrieval or generation), and they also conduct the augmentation for the whole input text, without considering whether it is actually needed in specific inputs or tasks. To address these issues, we propose a novel **V**isually-**A**ugmented fine-tuning approach that can be generally applied to various PLMs or NLP tasks, **W**ithout using any retrieved or generated **I**mages, namely **VAWI**. Experimental results show that our approach can consistently improve the performance of BERT, RoBERTa, BART, and T5 at different scales, and outperform several competitive baselines on ten tasks. Our codes and data are publicly available at \url{https://github.com/RUCAIBox/VAWI}. | [
"Guo, Hangyu",
"Zhou, Kun",
"Zhao, Wayne Xin",
"Zhang, Qinyu",
"Wen, Ji-Rong"
] | Visually-augmented pretrained language models for NLP tasks without images | acl-long.833 | Oral | 2212.07937 | [
"https://github.com/rucaibox/vawi"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.834.bib | https://aclanthology.org/2023.acl-long.834/ | @inproceedings{nourbakhsh-etal-2023-using,
title = "Using counterfactual contrast to improve compositional generalization for multi-step quantitative reasoning",
author = "Nourbakhsh, Armineh and
Shah, Sameena and
Ros{\'e}, Carolyn",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.834",
doi = "10.18653/v1/2023.acl-long.834",
pages = "14930--14943",
abstract = "In quantitative question answering, compositional generalization is one of the main challenges of state of the art models, especially when longer sequences of reasoning steps are required. In this paper we propose CounterComp, a method that uses counterfactual scenarios to generate samples with compositional contrast. Instead of a data augmentation approach, CounterComp is based on metric learning, which allows for direct sampling from the training set and circumvents the need for additional human labels. Our proposed auxiliary metric learning loss improves the performance of three state of the art models on four recently released datasets. We also show how the approach can improve OOD performance on unseen domains, as well as unseen compositions. Lastly, we demonstrate how the method can lead to better compositional attention patterns during training.",
}
| In quantitative question answering, compositional generalization is one of the main challenges for state-of-the-art models, especially when longer sequences of reasoning steps are required. In this paper we propose CounterComp, a method that uses counterfactual scenarios to generate samples with compositional contrast. Instead of a data augmentation approach, CounterComp is based on metric learning, which allows for direct sampling from the training set and circumvents the need for additional human labels. Our proposed auxiliary metric learning loss improves the performance of three state-of-the-art models on four recently released datasets. We also show how the approach can improve OOD performance on unseen domains, as well as unseen compositions. Lastly, we demonstrate how the method can lead to better compositional attention patterns during training. | [
"Nourbakhsh, Armineh",
"Shah, Sameena",
"Ros{\\'e}, Carolyn"
] | Using counterfactual contrast to improve compositional generalization for multi-step quantitative reasoning | acl-long.834 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.835.bib | https://aclanthology.org/2023.acl-long.835/ | @inproceedings{zhang-etal-2023-needle,
title = "A Needle in a Haystack: An Analysis of High-Agreement Workers on {MT}urk for Summarization",
author = "Zhang, Lining and
Mille, Simon and
Hou, Yufang and
Deutsch, Daniel and
Clark, Elizabeth and
Liu, Yixin and
Mahamood, Saad and
Gehrmann, Sebastian and
Clinciu, Miruna and
Chandu, Khyathi Raghavi and
Sedoc, Jo{\~a}o",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.835",
doi = "10.18653/v1/2023.acl-long.835",
pages = "14944--14982",
abstract = "To prevent the costly and inefficient use of resources on low-quality annotations, we want a method for creating a pool of dependable annotators who can effectively complete difficult tasks, such as evaluating automatic summarization. Thus, we investigate the recruitment of high-quality Amazon Mechanical Turk workers via a two-step pipeline. We show that we can successfully filter out subpar workers before they carry out the evaluations and obtain high-agreement annotations with similar constraints on resources. Although our workers demonstrate a strong consensus among themselves and CloudResearch workers, their alignment with expert judgments on a subset of the data is not as expected and needs further training in correctness. This paper still serves as a best practice for the recruitment of qualified annotators in other challenging annotation tasks.",
}
| To prevent the costly and inefficient use of resources on low-quality annotations, we want a method for creating a pool of dependable annotators who can effectively complete difficult tasks, such as evaluating automatic summarization. Thus, we investigate the recruitment of high-quality Amazon Mechanical Turk workers via a two-step pipeline. We show that we can successfully filter out subpar workers before they carry out the evaluations and obtain high-agreement annotations with similar constraints on resources. Although our workers demonstrate a strong consensus among themselves and CloudResearch workers, their alignment with expert judgments on a subset of the data is not as expected and needs further training in correctness. This paper still serves as a best practice for the recruitment of qualified annotators in other challenging annotation tasks. | [
"Zhang, Lining",
"Mille, Simon",
"Hou, Yufang",
"Deutsch, Daniel",
"Clark, Elizabeth",
"Liu, Yixin",
"Mahamood, Saad",
"Gehrmann, Sebastian",
"Clinciu, Miruna",
"Ch",
"u, Khyathi Raghavi",
"Sedoc, Jo{\\~a}o"
] | A Needle in a Haystack: An Analysis of High-Agreement Workers on MTurk for Summarization | acl-long.835 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.836.bib | https://aclanthology.org/2023.acl-long.836/ | @inproceedings{lin-etal-2023-tavt,
title = "{TAVT}: Towards Transferable Audio-Visual Text Generation",
author = "Lin, Wang and
Jin, Tao and
Pan, Wenwen and
Li, Linjun and
Cheng, Xize and
Wang, Ye and
Zhao, Zhou",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.836",
doi = "10.18653/v1/2023.acl-long.836",
pages = "14983--14999",
abstract = "Audio-visual text generation aims to understand multi-modality contents and translate them into texts. Although various transfer learning techniques of text generation have been proposed, they focused on uni-modal analysis (e.g. text-to-text, visual-to-text) and lack consideration of multi-modal content and cross-modal relation. Motivated by the fact that humans can recognize the timbre of the same low-level concepts (e.g., footstep, rainfall, and laughing), even in different visual conditions, we aim to mitigate the domain discrepancies by audio-visual correlation. In this paper, we propose a novel Transferable Audio-Visual Text Generation framework, named TAVT, which consists of two key components: Audio-Visual Meta-Mapper (AVMM) and Dual Counterfactual Contrastive Learning (DCCL). (1) AVMM first introduces a universal auditory semantic space and drifts the domain-invariant low-level concepts into visual prefixes. Then the reconstruct-based learning encourages the AVMM to learn {``}which pixels belong to the same sound{''} and achieve audio-enhanced visual prefix. The well-trained AVMM can be further applied to uni-modal setting. (2) Furthermore, DCCL leverages the destructive counterfactual transformations to provide cross-modal constraints for AVMM from the perspective of feature distribution and text generation. (3) The experimental results show that TAVT outperforms the state-of-the-art methods across multiple domains (cross-datasets, cross-categories) and various modal settings (uni-modal, multi-modal).",
}
| Audio-visual text generation aims to understand multi-modality contents and translate them into texts. Although various transfer learning techniques of text generation have been proposed, they focused on uni-modal analysis (e.g. text-to-text, visual-to-text) and lack consideration of multi-modal content and cross-modal relation. Motivated by the fact that humans can recognize the timbre of the same low-level concepts (e.g., footstep, rainfall, and laughing), even in different visual conditions, we aim to mitigate the domain discrepancies by audio-visual correlation. In this paper, we propose a novel Transferable Audio-Visual Text Generation framework, named TAVT, which consists of two key components: Audio-Visual Meta-Mapper (AVMM) and Dual Counterfactual Contrastive Learning (DCCL). (1) AVMM first introduces a universal auditory semantic space and drifts the domain-invariant low-level concepts into visual prefixes. Then the reconstruction-based learning encourages the AVMM to learn {``}which pixels belong to the same sound{''} and achieve an audio-enhanced visual prefix. The well-trained AVMM can be further applied to the uni-modal setting. (2) Furthermore, DCCL leverages the destructive counterfactual transformations to provide cross-modal constraints for AVMM from the perspective of feature distribution and text generation. (3) The experimental results show that TAVT outperforms the state-of-the-art methods across multiple domains (cross-datasets, cross-categories) and various modal settings (uni-modal, multi-modal). | [
"Lin, Wang",
"Jin, Tao",
"Pan, Wenwen",
"Li, Linjun",
"Cheng, Xize",
"Wang, Ye",
"Zhao, Zhou"
] | TAVT: Towards Transferable Audio-Visual Text Generation | acl-long.836 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.837.bib | https://aclanthology.org/2023.acl-long.837/ | @inproceedings{prasad-etal-2023-meetingqa,
title = "{M}eeting{QA}: Extractive Question-Answering on Meeting Transcripts",
author = "Prasad, Archiki and
Bui, Trung and
Yoon, Seunghyun and
Deilamsalehy, Hanieh and
Dernoncourt, Franck and
Bansal, Mohit",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.837",
doi = "10.18653/v1/2023.acl-long.837",
pages = "15000--15025",
abstract = "With the ubiquitous use of online meeting platforms and robust automatic speech recognition systems, meeting transcripts have emerged as a promising domain for natural language tasks. Most recent works on meeting transcripts primarily focus on summarization and extraction of action items. However, meeting discussions also have a useful question-answering (QA) component, crucial to understanding the discourse or meeting content, and can be used to build interactive interfaces on top of long transcripts. Hence, in this work, we leverage this inherent QA component of meeting discussions and introduce MeetingQA, an extractive QA dataset comprising of questions asked by meeting participants and corresponding responses. As a result, questions can be open-ended and actively seek discussions, while the answers can be multi-span and distributed across multiple speakers. Our comprehensive empirical study of several robust baselines including long-context language models and recent instruction-tuned models reveals that models perform poorly on this task (F1 = 57.3) and severely lag behind human performance (F1 = 84.6), thus presenting a challenging new task for the community to improve upon.",
}
| With the ubiquitous use of online meeting platforms and robust automatic speech recognition systems, meeting transcripts have emerged as a promising domain for natural language tasks. Most recent works on meeting transcripts primarily focus on summarization and extraction of action items. However, meeting discussions also have a useful question-answering (QA) component, crucial to understanding the discourse or meeting content, and can be used to build interactive interfaces on top of long transcripts. Hence, in this work, we leverage this inherent QA component of meeting discussions and introduce MeetingQA, an extractive QA dataset comprising questions asked by meeting participants and corresponding responses. As a result, questions can be open-ended and actively seek discussions, while the answers can be multi-span and distributed across multiple speakers. Our comprehensive empirical study of several robust baselines including long-context language models and recent instruction-tuned models reveals that models perform poorly on this task (F1 = 57.3) and severely lag behind human performance (F1 = 84.6), thus presenting a challenging new task for the community to improve upon. | [
"Prasad, Archiki",
"Bui, Trung",
"Yoon, Seunghyun",
"Deilamsalehy, Hanieh",
"Dernoncourt, Franck",
"Bansal, Mohit"
] | MeetingQA: Extractive Question-Answering on Meeting Transcripts | acl-long.837 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.838.bib | https://aclanthology.org/2023.acl-long.838/ | @inproceedings{sivakumar-moosavi-2023-fermat,
title = "{FERMAT}: An Alternative to Accuracy for Numerical Reasoning",
author = "Sivakumar, Jasivan and
Moosavi, Nafise Sadat",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.838",
doi = "10.18653/v1/2023.acl-long.838",
pages = "15026--15043",
abstract = "While pre-trained language models achieve impressive performance on various NLP benchmarks, they still struggle with tasks that require numerical reasoning. Recent advances in improving numerical reasoning are mostly achieved using very large language models that contain billions of parameters and are not accessible to everyone. In addition, numerical reasoning is measured using a single score on existing datasets. As a result, we do not have a clear understanding of the strengths and shortcomings of existing models on different numerical reasoning aspects and therefore, potential ways to improve them apart from scaling them up. Inspired by CheckList (Ribeiro et al., 2020), we introduce a multi-view evaluation set for numerical reasoning in English, called FERMAT. Instead of reporting a single score on a whole dataset, FERMAT evaluates models on various key numerical reasoning aspects such as number understanding, mathematical operations, and training dependency. Apart from providing a comprehensive evaluation of models on different numerical reasoning aspects, FERMAT enables a systematic and automated generation of an arbitrarily large training or evaluation set for each aspect. The datasets and codes are publicly available to generate further multi-view data for ulterior tasks and languages.",
}
| While pre-trained language models achieve impressive performance on various NLP benchmarks, they still struggle with tasks that require numerical reasoning. Recent advances in improving numerical reasoning are mostly achieved using very large language models that contain billions of parameters and are not accessible to everyone. In addition, numerical reasoning is measured using a single score on existing datasets. As a result, we do not have a clear understanding of the strengths and shortcomings of existing models on different numerical reasoning aspects and therefore, potential ways to improve them apart from scaling them up. Inspired by CheckList (Ribeiro et al., 2020), we introduce a multi-view evaluation set for numerical reasoning in English, called FERMAT. Instead of reporting a single score on a whole dataset, FERMAT evaluates models on various key numerical reasoning aspects such as number understanding, mathematical operations, and training dependency. Apart from providing a comprehensive evaluation of models on different numerical reasoning aspects, FERMAT enables a systematic and automated generation of an arbitrarily large training or evaluation set for each aspect. The datasets and codes are publicly available to generate further multi-view data for ulterior tasks and languages. | [
"Sivakumar, Jasivan",
"Moosavi, Nafise Sadat"
] | FERMAT: An Alternative to Accuracy for Numerical Reasoning | acl-long.838 | Poster | 2305.17491 | [
"https://github.com/jasivan/fermat"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.839.bib | https://aclanthology.org/2023.acl-long.839/ | @inproceedings{finch-etal-2023-dont,
title = "Don{'}t Forget Your {ABC}{'}s: Evaluating the State-of-the-Art in Chat-Oriented Dialogue Systems",
author = "Finch, Sarah E. and
Finch, James D. and
Choi, Jinho D.",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.839",
doi = "10.18653/v1/2023.acl-long.839",
pages = "15044--15071",
abstract = "Despite tremendous advancements in dialogue systems, stable evaluation still requires human judgments producing notoriously high-variance metrics due to their inherent subjectivity. Moreover, methods and labels in dialogue evaluation are not fully standardized, especially for open-domain chats, with a lack of work to compare and assess the validity of those approaches. The use of inconsistent evaluation can misinform the performance of a dialogue system, which becomes a major hurdle to enhance it. Thus, a dimensional evaluation of chat-oriented open-domain dialogue systems that reliably measures several aspects of dialogue capabilities is desired. This paper presents a novel human evaluation method to estimate the rates of many{pasted macro {`}LN{'}} dialogue system behaviors. Our method is used to evaluate four state-of-the-art open-domain dialogue systems and compared with existing approaches. The analysis demonstrates that our behavior method is more suitable than alternative Likert-style or comparative approaches for dimensional evaluation of these systems.",
}
| Despite tremendous advancements in dialogue systems, stable evaluation still requires human judgments producing notoriously high-variance metrics due to their inherent subjectivity. Moreover, methods and labels in dialogue evaluation are not fully standardized, especially for open-domain chats, with a lack of work to compare and assess the validity of those approaches. The use of inconsistent evaluation can misinform the performance of a dialogue system, which becomes a major hurdle to enhance it. Thus, a dimensional evaluation of chat-oriented open-domain dialogue systems that reliably measures several aspects of dialogue capabilities is desired. This paper presents a novel human evaluation method to estimate the rates of many dialogue system behaviors. Our method is used to evaluate four state-of-the-art open-domain dialogue systems and compared with existing approaches. The analysis demonstrates that our behavior method is more suitable than alternative Likert-style or comparative approaches for dimensional evaluation of these systems. | [
"Finch, Sarah E.",
"Finch, James D.",
"Choi, Jinho D."
] | Don't Forget Your ABC's: Evaluating the State-of-the-Art in Chat-Oriented Dialogue Systems | acl-long.839 | Poster | 2212.09180 | [
"https://github.com/emorynlp/chatevaluationplatform"
] | https://huggingface.co/papers/2212.09180 | 0 | 0 | 0 | 3 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-long.840.bib | https://aclanthology.org/2023.acl-long.840/ | @inproceedings{cui-etal-2023-decoder,
title = "Decoder Tuning: Efficient Language Understanding as Decoding",
author = "Cui, Ganqu and
Li, Wentao and
Ding, Ning and
Huang, Longtao and
Liu, Zhiyuan and
Sun, Maosong",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.840",
doi = "10.18653/v1/2023.acl-long.840",
pages = "15072--15087",
abstract = "With the evergrowing sizes of pre-trained models (PTMs), it has been an emerging practice to only provide the inference APIs for users, namely model-as-a-service (MaaS) setting. To adapt PTMs with model parameters frozen, most current approaches focus on the input side, seeking powerful prompts to stimulate models for correct answers. However, we argue that input-side adaptation could be arduous due to the lack of gradient signals and they usually require thousands of API queries, resulting in high computation and time costs. Specifically, DecT first extracts prompt-stimulated output scores for initial predictions. On top of that, we train an additional decoder network on the output representations to incorporate posterior data knowledge. By gradient-based optimization, DecT can be trained within several seconds and requires only one PTM query per sample. Empirically, we conduct extensive natural language understanding experiments and show that DecT significantly outperforms state-of-the-art algorithms with a 200x speed-up. Our code is available at \url{https://github.com/thunlp/DecT}.",
}
| With the evergrowing sizes of pre-trained models (PTMs), it has been an emerging practice to only provide the inference APIs for users, namely model-as-a-service (MaaS) setting. To adapt PTMs with model parameters frozen, most current approaches focus on the input side, seeking powerful prompts to stimulate models for correct answers. However, we argue that input-side adaptation could be arduous due to the lack of gradient signals and they usually require thousands of API queries, resulting in high computation and time costs. In light of this, we present Decoder Tuning (DecT), which in contrast optimizes task-specific decoder networks on the output side. Specifically, DecT first extracts prompt-stimulated output scores for initial predictions. On top of that, we train an additional decoder network on the output representations to incorporate posterior data knowledge. By gradient-based optimization, DecT can be trained within several seconds and requires only one PTM query per sample. Empirically, we conduct extensive natural language understanding experiments and show that DecT significantly outperforms state-of-the-art algorithms with a 200x speed-up. Our code is available at \url{https://github.com/thunlp/DecT}. | [
"Cui, Ganqu",
"Li, Wentao",
"Ding, Ning",
"Huang, Longtao",
"Liu, Zhiyuan",
"Sun, Maosong"
] | Decoder Tuning: Efficient Language Understanding as Decoding | acl-long.840 | Poster | 2212.08408 | [
"https://github.com/thunlp/dect"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.841.bib | https://aclanthology.org/2023.acl-long.841/ | @inproceedings{arodi-etal-2023-kitmus,
title = "The {KITMUS} Test: Evaluating Knowledge Integration from Multiple Sources",
author = {Arodi, Akshatha and
P{\"o}msl, Martin and
Suleman, Kaheer and
Trischler, Adam and
Olteanu, Alexandra and
Cheung, Jackie Chi Kit},
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.841",
doi = "10.18653/v1/2023.acl-long.841",
pages = "15088--15108",
abstract = "Many state-of-the-art natural language understanding (NLU) models are based on pretrained neural language models. These models often make inferences using information from multiple sources. An important class of such inferences are those that require both background knowledge, presumably contained in a model{'}s pretrained parameters, and instance-specific information that is supplied at inference time. However, the integration and reasoning abilities of NLU models in the presence of multiple knowledge sources have been largely understudied. In this work, we propose a test suite of coreference resolution subtasks that require reasoning over multiple facts. These subtasks differ in terms of which knowledge sources contain the relevant facts. We also introduce subtasks where knowledge is present only at inference time using fictional knowledge. We evaluate state-of-the-art coreference resolution models on our dataset. Our results indicate that several models struggle to reason on-the-fly over knowledge observed both at pretrain time and at inference time. However, with task-specific training, a subset of models demonstrates the ability to integrate certain knowledge types from multiple sources. Still, even the best performing models seem to have difficulties with reliably integrating knowledge presented only at inference time.",
}
| Many state-of-the-art natural language understanding (NLU) models are based on pretrained neural language models. These models often make inferences using information from multiple sources. An important class of such inferences are those that require both background knowledge, presumably contained in a model{'}s pretrained parameters, and instance-specific information that is supplied at inference time. However, the integration and reasoning abilities of NLU models in the presence of multiple knowledge sources have been largely understudied. In this work, we propose a test suite of coreference resolution subtasks that require reasoning over multiple facts. These subtasks differ in terms of which knowledge sources contain the relevant facts. We also introduce subtasks where knowledge is present only at inference time using fictional knowledge. We evaluate state-of-the-art coreference resolution models on our dataset. Our results indicate that several models struggle to reason on-the-fly over knowledge observed both at pretrain time and at inference time. However, with task-specific training, a subset of models demonstrates the ability to integrate certain knowledge types from multiple sources. Still, even the best performing models seem to have difficulties with reliably integrating knowledge presented only at inference time. | [
"Arodi, Akshatha",
"P{\\\"o}msl, Martin",
"Suleman, Kaheer",
"Trischler, Adam",
"Olteanu, Alex",
"ra",
"Cheung, Jackie Chi Kit"
] | The KITMUS Test: Evaluating Knowledge Integration from Multiple Sources | acl-long.841 | Poster | [
"https://github.com/mpoemsl/kitmus"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.842.bib | https://aclanthology.org/2023.acl-long.842/ | @inproceedings{treviso-etal-2023-crest,
title = "{CREST}: A Joint Framework for Rationalization and Counterfactual Text Generation",
author = "Treviso, Marcos and
Ross, Alexis and
Guerreiro, Nuno M. and
Martins, Andr{\'e}",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.842",
doi = "10.18653/v1/2023.acl-long.842",
pages = "15109--15126",
abstract = "Selective rationales and counterfactual examples have emerged as two effective, complementary classes of interpretability methods for analyzing and training NLP models. However, prior work has not explored how these methods can be integrated to combine their complementary advantages. We overcome this limitation by introducing CREST (ContRastive Edits with Sparse raTionalization), a joint framework for selective rationalization and counterfactual text generation, and show that this framework leads to improvements in counterfactual quality, model robustness, and interpretability. First, CREST generates valid counterfactuals that are more natural than those produced by previous methods, and subsequently can be used for data augmentation at scale, reducing the need for human-generated examples. Second, we introduce a new loss function that leverages CREST counterfactuals to regularize selective rationales and show that this regularization improves both model robustness and rationale quality, compared to methods that do not leverage CREST counterfactuals. Our results demonstrate that CREST successfully bridges the gap between selective rationales and counterfactual examples, addressing the limitations of existing methods and providing a more comprehensive view of a model{'}s predictions.",
}
| Selective rationales and counterfactual examples have emerged as two effective, complementary classes of interpretability methods for analyzing and training NLP models. However, prior work has not explored how these methods can be integrated to combine their complementary advantages. We overcome this limitation by introducing CREST (ContRastive Edits with Sparse raTionalization), a joint framework for selective rationalization and counterfactual text generation, and show that this framework leads to improvements in counterfactual quality, model robustness, and interpretability. First, CREST generates valid counterfactuals that are more natural than those produced by previous methods, and subsequently can be used for data augmentation at scale, reducing the need for human-generated examples. Second, we introduce a new loss function that leverages CREST counterfactuals to regularize selective rationales and show that this regularization improves both model robustness and rationale quality, compared to methods that do not leverage CREST counterfactuals. Our results demonstrate that CREST successfully bridges the gap between selective rationales and counterfactual examples, addressing the limitations of existing methods and providing a more comprehensive view of a model{'}s predictions. | [
"Treviso, Marcos",
"Ross, Alexis",
"Guerreiro, Nuno M.",
"Martins, Andr{\\'e}"
] | CREST: A Joint Framework for Rationalization and Counterfactual Text Generation | acl-long.842 | Oral | 2305.17075 | [
"https://github.com/deep-spin/crest"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.843.bib | https://aclanthology.org/2023.acl-long.843/ | @inproceedings{wang-etal-2023-towards-unifying,
title = "Towards Unifying Multi-Lingual and Cross-Lingual Summarization",
author = "Wang, Jiaan and
Meng, Fandong and
Zheng, Duo and
Liang, Yunlong and
Li, Zhixu and
Qu, Jianfeng and
Zhou, Jie",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.843",
doi = "10.18653/v1/2023.acl-long.843",
pages = "15127--15143",
abstract = "To adapt text summarization to the multilingual world, previous work proposes multi-lingual summarization (MLS) and cross-lingual summarization (CLS). However, these two tasks have been studied separately due to the different definitions, which limits the compatible and systematic research on both of them. In this paper, we aim to unify MLS and CLS into a more general setting, i.e., many-to-many summarization (M2MS), where a single model could process documents in any language and generate their summaries also in any language. As the first step towards M2MS, we conduct preliminary studies to show that M2MS can better transfer task knowledge across different languages than MLS and CLS. Furthermore, we propose Pisces, a pre-trained M2MS model that learns language modeling, cross-lingual ability and summarization ability via three-stage pre-training. Experimental results indicate that our Pisces significantly outperforms the state-of-the-art baselines, especially in the zero-shot directions, where there is no training data from the source-language documents to the target-language summaries.",
}
| To adapt text summarization to the multilingual world, previous work proposes multi-lingual summarization (MLS) and cross-lingual summarization (CLS). However, these two tasks have been studied separately due to the different definitions, which limits the compatible and systematic research on both of them. In this paper, we aim to unify MLS and CLS into a more general setting, i.e., many-to-many summarization (M2MS), where a single model could process documents in any language and generate their summaries also in any language. As the first step towards M2MS, we conduct preliminary studies to show that M2MS can better transfer task knowledge across different languages than MLS and CLS. Furthermore, we propose Pisces, a pre-trained M2MS model that learns language modeling, cross-lingual ability and summarization ability via three-stage pre-training. Experimental results indicate that our Pisces significantly outperforms the state-of-the-art baselines, especially in the zero-shot directions, where there is no training data from the source-language documents to the target-language summaries. | [
"Wang, Jiaan",
"Meng, F",
"ong",
"Zheng, Duo",
"Liang, Yunlong",
"Li, Zhixu",
"Qu, Jianfeng",
"Zhou, Jie"
] | Towards Unifying Multi-Lingual and Cross-Lingual Summarization | acl-long.843 | Poster | 2305.09220 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.844.bib | https://aclanthology.org/2023.acl-long.844/ | @inproceedings{liu-etal-2023-improving,
title = "On Improving Summarization Factual Consistency from Natural Language Feedback",
author = "Liu, Yixin and
Deb, Budhaditya and
Teruel, Milagro and
Halfaker, Aaron and
Radev, Dragomir and
Awadallah, Ahmed Hassan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.844",
doi = "10.18653/v1/2023.acl-long.844",
pages = "15144--15161",
abstract = "Despite the recent progress in language generation models, their outputs may not always meet user expectations. In this work, we study whether informational feedback in natural language can be leveraged to improve generation quality and user preference alignment. To this end, we consider factual consistency in summarization, the quality that the summary should only contain information supported by the input documents, as the user-expected preference. We collect a high-quality dataset, DeFacto, containing human demonstrations and informational natural language feedback consisting of corrective instructions, edited summaries, and explanations with respect to the factual consistency of the summary. Using our dataset, we study three natural language generation tasks: (1) editing a summary by following the human feedback, (2) generating human feedback for editing the original summary, and (3) revising the initial summary to correct factual errors by generating both the human feedback and edited summary. We show that DeFacto can provide factually consistent human-edited summaries and further insights into summarization factual consistency thanks to its informational natural language feedback. We further demonstrate that fine-tuned language models can leverage our dataset to improve the summary factual consistency, while large language models lack the zero-shot learning ability in our proposed tasks that require controllable text generation.",
}
| Despite the recent progress in language generation models, their outputs may not always meet user expectations. In this work, we study whether informational feedback in natural language can be leveraged to improve generation quality and user preference alignment. To this end, we consider factual consistency in summarization, the quality that the summary should only contain information supported by the input documents, as the user-expected preference. We collect a high-quality dataset, DeFacto, containing human demonstrations and informational natural language feedback consisting of corrective instructions, edited summaries, and explanations with respect to the factual consistency of the summary. Using our dataset, we study three natural language generation tasks: (1) editing a summary by following the human feedback, (2) generating human feedback for editing the original summary, and (3) revising the initial summary to correct factual errors by generating both the human feedback and edited summary. We show that DeFacto can provide factually consistent human-edited summaries and further insights into summarization factual consistency thanks to its informational natural language feedback. We further demonstrate that fine-tuned language models can leverage our dataset to improve the summary factual consistency, while large language models lack the zero-shot learning ability in our proposed tasks that require controllable text generation. | [
"Liu, Yixin",
"Deb, Budhaditya",
"Teruel, Milagro",
"Halfaker, Aaron",
"Radev, Dragomir",
"Awadallah, Ahmed Hassan"
] | On Improving Summarization Factual Consistency from Natural Language Feedback | acl-long.844 | Poster | 2212.09968 | [
"https://github.com/microsoft/defacto"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.845.bib | https://aclanthology.org/2023.acl-long.845/ | @inproceedings{mendelsohn-etal-2023-dogwhistles,
title = "From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models",
author = "Mendelsohn, Julia and
Le Bras, Ronan and
Choi, Yejin and
Sap, Maarten",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.845",
doi = "10.18653/v1/2023.acl-long.845",
pages = "15162--15180",
abstract = "Dogwhistles are coded expressions that simultaneously convey one meaning to a broad audience and a second, often hateful or provocative, meaning to a narrow in-group; they are deployed to evade both political repercussions and algorithmic content moderation. For example, the word {``}cosmopolitan{''} in a sentence such as {``}we need to end the cosmopolitan experiment{''} can mean {``}worldly{''} to many but also secretly mean {``}Jewish{''} to a select few. We present the first large-scale computational investigation of dogwhistles. We develop a typology of dogwhistles, curate the largest-to-date glossary of over 300 dogwhistles with rich contextual information and examples, and analyze their usage in historical U.S. politicians{'} speeches. We then assess whether a large language model (GPT-3) can identify dogwhistles and their meanings, and find that GPT-3{'}s performance varies widely across types of dogwhistles and targeted groups. Finally, we show that harmful content containing dogwhistles avoids toxicity detection, highlighting online risks presented by such coded language. This work sheds light on the theoretical and applied importance of dogwhistles in both NLP and computational social science, and provides resources to facilitate future research in modeling dogwhistles and mitigating their online harms.",
}
| Dogwhistles are coded expressions that simultaneously convey one meaning to a broad audience and a second, often hateful or provocative, meaning to a narrow in-group; they are deployed to evade both political repercussions and algorithmic content moderation. For example, the word {``}cosmopolitan{''} in a sentence such as {``}we need to end the cosmopolitan experiment{''} can mean {``}worldly{''} to many but also secretly mean {``}Jewish{''} to a select few. We present the first large-scale computational investigation of dogwhistles. We develop a typology of dogwhistles, curate the largest-to-date glossary of over 300 dogwhistles with rich contextual information and examples, and analyze their usage in historical U.S. politicians{'} speeches. We then assess whether a large language model (GPT-3) can identify dogwhistles and their meanings, and find that GPT-3{'}s performance varies widely across types of dogwhistles and targeted groups. Finally, we show that harmful content containing dogwhistles avoids toxicity detection, highlighting online risks presented by such coded language. This work sheds light on the theoretical and applied importance of dogwhistles in both NLP and computational social science, and provides resources to facilitate future research in modeling dogwhistles and mitigating their online harms. | [
"Mendelsohn, Julia",
"Le Bras, Ronan",
"Choi, Yejin",
"Sap, Maarten"
] | From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models | acl-long.845 | Poster | 2305.17174 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.846.bib | https://aclanthology.org/2023.acl-long.846/ | @inproceedings{riemenschneider-frank-2023-exploring,
title = "Exploring Large Language Models for Classical Philology",
author = "Riemenschneider, Frederick and
Frank, Anette",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.846",
doi = "10.18653/v1/2023.acl-long.846",
pages = "15181--15199",
abstract = "Recent advances in NLP have led to the creation of powerful language models for many languages including Ancient Greek and Latin. While prior work on Classical languages unanimously uses BERT, in this work we create four language models for Ancient Greek that vary along two dimensions to study their versatility for tasks of interest for Classical languages: we explore (i) encoder-only and encoder-decoder architectures using RoBERTa and T5 as strong model types, and create for each of them (ii) a monolingual Ancient Greek and a multilingual instance that includes Latin and English. We evaluate all models on morphological and syntactic tasks, including lemmatization, which demonstrates the added value of T5{'}s decoding abilities. We further define two probing tasks to investigate the knowledge acquired by models pre-trained on Classical texts. Our experiments provide the first benchmarking analysis of existing models of Ancient Greek. Results show that our models provide significant improvements over the SoTA. The systematic analysis of model types can inform future research in designing language models for Classical languages, including the development of novel generative tasks. We make all our models available as community resources, along with a large curated pre-training corpus for Ancient Greek, to support the creation of a larger, comparable model zoo for Classical Philology.",
}
| Recent advances in NLP have led to the creation of powerful language models for many languages including Ancient Greek and Latin. While prior work on Classical languages unanimously uses BERT, in this work we create four language models for Ancient Greek that vary along two dimensions to study their versatility for tasks of interest for Classical languages: we explore (i) encoder-only and encoder-decoder architectures using RoBERTa and T5 as strong model types, and create for each of them (ii) a monolingual Ancient Greek and a multilingual instance that includes Latin and English. We evaluate all models on morphological and syntactic tasks, including lemmatization, which demonstrates the added value of T5{'}s decoding abilities. We further define two probing tasks to investigate the knowledge acquired by models pre-trained on Classical texts. Our experiments provide the first benchmarking analysis of existing models of Ancient Greek. Results show that our models provide significant improvements over the SoTA. The systematic analysis of model types can inform future research in designing language models for Classical languages, including the development of novel generative tasks. We make all our models available as community resources, along with a large curated pre-training corpus for Ancient Greek, to support the creation of a larger, comparable model zoo for Classical Philology. | [
"Riemenschneider, Frederick",
"Frank, Anette"
] | Exploring Large Language Models for Classical Philology | acl-long.846 | Poster | 2305.13698 | [
"https://github.com/heidelberg-nlp/ancient-language-models"
] | https://huggingface.co/papers/2305.13698 | 1 | 0 | 0 | 2 | 1 | [
"bowphs/PhilBerta",
"bowphs/GreBerta",
"bowphs/PhilTa",
"bowphs/LaTa",
"bowphs/GreTa",
"bowphs/LaBerta"
] | [] | [] |
https://aclanthology.org/2023.acl-long.847.bib | https://aclanthology.org/2023.acl-long.847/ | @inproceedings{tu-etal-2023-layoutmask,
title = "{L}ayout{M}ask: Enhance Text-Layout Interaction in Multi-modal Pre-training for Document Understanding",
author = "Tu, Yi and
Guo, Ya and
Chen, Huan and
Tang, Jinyang",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.847",
doi = "10.18653/v1/2023.acl-long.847",
pages = "15200--15212",
abstract = "Visually-rich Document Understanding (VrDU) has attracted much research attention over the past years. Pre-trained models on a large number of document images with transformer-based backbones have led to significant performance gains in this field. The major challenge is how to fusion the different modalities (text, layout, and image) of the documents in a unified model with different pre-training tasks. This paper focuses on improving text-layout interactions and proposes a novel multi-modal pre-training model, LayoutMask. LayoutMask uses local 1D position, instead of global 1D position, as layout input and has two pre-training objectives: (1) Masked Language Modeling: predicting masked tokens with two novel masking strategies; (2) Masked Position Modeling: predicting masked 2D positions to improve layout representation learning. LayoutMask can enhance the interactions between text and layout modalities in a unified model and produce adaptive and robust multi-modal representations for downstream tasks. Experimental results show that our proposed method can achieve state-of-the-art results on a wide variety of VrDU problems, including form understanding, receipt understanding, and document image classification.",
}
| Visually-rich Document Understanding (VrDU) has attracted much research attention over the past years. Pre-trained models on a large number of document images with transformer-based backbones have led to significant performance gains in this field. The major challenge is how to fuse the different modalities (text, layout, and image) of the documents in a unified model with different pre-training tasks. This paper focuses on improving text-layout interactions and proposes a novel multi-modal pre-training model, LayoutMask. LayoutMask uses local 1D position, instead of global 1D position, as layout input and has two pre-training objectives: (1) Masked Language Modeling: predicting masked tokens with two novel masking strategies; (2) Masked Position Modeling: predicting masked 2D positions to improve layout representation learning. LayoutMask can enhance the interactions between text and layout modalities in a unified model and produce adaptive and robust multi-modal representations for downstream tasks. Experimental results show that our proposed method can achieve state-of-the-art results on a wide variety of VrDU problems, including form understanding, receipt understanding, and document image classification. | [
"Tu, Yi",
"Guo, Ya",
"Chen, Huan",
"Tang, Jinyang"
] | LayoutMask: Enhance Text-Layout Interaction in Multi-modal Pre-training for Document Understanding | acl-long.847 | Poster | 2305.18721 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.848.bib | https://aclanthology.org/2023.acl-long.848/ | @inproceedings{hu-etal-2023-hearing,
title = "Hearing Lips in Noise: Universal Viseme-Phoneme Mapping and Transfer for Robust Audio-Visual Speech Recognition",
author = "Hu, Yuchen and
Li, Ruizhe and
Chen, Chen and
Qin, Chengwei and
Zhu, Qiu-Shi and
Chng, Eng Siong",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.848",
doi = "10.18653/v1/2023.acl-long.848",
pages = "15213--15232",
abstract = "Audio-visual speech recognition (AVSR) provides a promising solution to ameliorate the noise-robustness of audio-only speech recognition with visual information. However, most existing efforts still focus on audio modality to improve robustness considering its dominance in AVSR task, with noise adaptation techniques such as front-end denoise processing. Though effective, these methods are usually faced with two practical challenges: 1) lack of sufficient labeled noisy audio-visual training data in some real-world scenarios and 2) less optimal model generality to unseen testing noises. In this work, we investigate the noise-invariant visual modality to strengthen robustness of AVSR, which can adapt to any testing noises while without dependence on noisy training data, a.k.a., unsupervised noise adaptation. Inspired by human perception mechanism, we propose a universal viseme-phoneme mapping (UniVPM) approach to implement modality transfer, which can restore clean audio from visual signals to enable speech recognition under any noisy conditions. Extensive experiments on public benchmarks LRS3 and LRS2 show that our approach achieves the state-of-the-art under various noisy as well as clean conditions. In addition, we also outperform previous state-of-the-arts on visual speech recognition task.",
}
| Audio-visual speech recognition (AVSR) provides a promising solution to ameliorate the noise-robustness of audio-only speech recognition with visual information. However, most existing efforts still focus on audio modality to improve robustness considering its dominance in AVSR task, with noise adaptation techniques such as front-end denoise processing. Though effective, these methods are usually faced with two practical challenges: 1) lack of sufficient labeled noisy audio-visual training data in some real-world scenarios and 2) less optimal model generality to unseen testing noises. In this work, we investigate the noise-invariant visual modality to strengthen robustness of AVSR, which can adapt to any testing noises without dependence on noisy training data, a.k.a. unsupervised noise adaptation. Inspired by the human perception mechanism, we propose a universal viseme-phoneme mapping (UniVPM) approach to implement modality transfer, which can restore clean audio from visual signals to enable speech recognition under any noisy conditions. Extensive experiments on public benchmarks LRS3 and LRS2 show that our approach achieves the state-of-the-art under various noisy as well as clean conditions. In addition, we also outperform previous state-of-the-arts on the visual speech recognition task. | [
"Hu, Yuchen",
"Li, Ruizhe",
"Chen, Chen",
"Qin, Chengwei",
"Zhu, Qiu-Shi",
"Chng, Eng Siong"
] | Hearing Lips in Noise: Universal Viseme-Phoneme Mapping and Transfer for Robust Audio-Visual Speech Recognition | acl-long.848 | Oral | 2306.10563 | [
"https://github.com/yuchen005/univpm"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.849.bib | https://aclanthology.org/2023.acl-long.849/ | @inproceedings{huang-etal-2023-extensible,
title = "An Extensible Plug-and-Play Method for Multi-Aspect Controllable Text Generation",
author = "Huang, Xuancheng and
Liu, Zijun and
Li, Peng and
Li, Tao and
Sun, Maosong and
Liu, Yang",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.849",
doi = "10.18653/v1/2023.acl-long.849",
pages = "15233--15256",
abstract = "Recently, multi-aspect controllable text generation that controls the generated text in multiple aspects (e.g., sentiment, topic, and keywords) has attracted increasing attention. Although methods based on parameter efficient tuning like prefix-tuning could achieve multi-aspect controlling in a plug-and-play way, the mutual interference of multiple prefixes leads to significant degeneration of constraints and limits their extensibility to training-time unseen aspect combinations. In this work, we provide a theoretical lower bound for the interference and empirically found that the interference grows with the number of layers where prefixes are inserted. Based on these analyses, we propose using trainable gates to normalize the intervention of prefixes to restrain the growing interference. As a result, controlling training-time unseen combinations of aspects can be realized by simply concatenating corresponding plugins such that new constraints can be extended at a lower cost. In addition, we propose a unified way to process both categorical and free-form constraints. Experiments on text generation and machine translation demonstrate the superiority of our approach over baselines on constraint accuracy, text quality, and extensibility.",
}
| Recently, multi-aspect controllable text generation that controls the generated text in multiple aspects (e.g., sentiment, topic, and keywords) has attracted increasing attention. Although methods based on parameter efficient tuning like prefix-tuning could achieve multi-aspect controlling in a plug-and-play way, the mutual interference of multiple prefixes leads to significant degeneration of constraints and limits their extensibility to training-time unseen aspect combinations. In this work, we provide a theoretical lower bound for the interference and empirically found that the interference grows with the number of layers where prefixes are inserted. Based on these analyses, we propose using trainable gates to normalize the intervention of prefixes to restrain the growing interference. As a result, controlling training-time unseen combinations of aspects can be realized by simply concatenating corresponding plugins such that new constraints can be extended at a lower cost. In addition, we propose a unified way to process both categorical and free-form constraints. Experiments on text generation and machine translation demonstrate the superiority of our approach over baselines on constraint accuracy, text quality, and extensibility. | [
"Huang, Xuancheng",
"Liu, Zijun",
"Li, Peng",
"Li, Tao",
"Sun, Maosong",
"Liu, Yang"
] | An Extensible Plug-and-Play Method for Multi-Aspect Controllable Text Generation | acl-long.849 | Poster | 2212.09387 | [
"https://github.com/thunlp-mt/promptgating4mctg"
] | https://huggingface.co/papers/2212.09387 | 2 | 0 | 0 | 6 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-long.850.bib | https://aclanthology.org/2023.acl-long.850/ | @inproceedings{xu-etal-2023-double,
title = "Double-Branch Multi-Attention based Graph Neural Network for Knowledge Graph Completion",
author = "Xu, Hongcai and
Bao, Junpeng and
Liu, Wenbo",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.850",
doi = "10.18653/v1/2023.acl-long.850",
pages = "15257--15271",
abstract = "Graph neural networks (GNNs), which effectively use topological structures in the knowledge graphs (KG) to embed entities and relations in low-dimensional spaces, have shown great power in knowledge graph completion (KGC). KG has abundant global and local structural information, however, many GNN-based KGC models cannot capture these two types of information about the graph structure by designing complex aggregation schemes, and are not designed well to learn representations of seen entities with sparse neighborhoods in isolated subgraphs. In this paper, we find that a simple attention-based method can outperform a general GNN-based approach for KGC. We then propose a double-branch multi-attention based graph neural network (MA-GNN) to learn more expressive entity representations which contain rich global-local structural information. Specifically, we first explore the graph attention network-based local aggregator to learn entity representations. Furthermore, we propose a snowball local attention mechanism by leveraging the semantic similarity between two-hop neighbors to enrich the entity embedding. Finally, we use Transformer-based self-attention to learn long-range dependence between entities to obtain richer representations with the global graph structure and entity features. Experimental results on five benchmark datasets show that MA-GNN achieves significant improvements over strong baselines for inductive KGC.",
}
| Graph neural networks (GNNs), which effectively use topological structures in the knowledge graphs (KG) to embed entities and relations in low-dimensional spaces, have shown great power in knowledge graph completion (KGC). KG has abundant global and local structural information; however, many GNN-based KGC models cannot capture these two types of information about the graph structure by designing complex aggregation schemes, and are not designed well to learn representations of seen entities with sparse neighborhoods in isolated subgraphs. In this paper, we find that a simple attention-based method can outperform a general GNN-based approach for KGC. We then propose a double-branch multi-attention based graph neural network (MA-GNN) to learn more expressive entity representations which contain rich global-local structural information. Specifically, we first explore the graph attention network-based local aggregator to learn entity representations. Furthermore, we propose a snowball local attention mechanism by leveraging the semantic similarity between two-hop neighbors to enrich the entity embedding. Finally, we use Transformer-based self-attention to learn long-range dependence between entities to obtain richer representations with the global graph structure and entity features. Experimental results on five benchmark datasets show that MA-GNN achieves significant improvements over strong baselines for inductive KGC. | [
"Xu, Hongcai",
"Bao, Junpeng",
"Liu, Wenbo"
] | Double-Branch Multi-Attention based Graph Neural Network for Knowledge Graph Completion | acl-long.850 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
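To make the aggregation idea in the MA-GNN abstract above concrete, here is a minimal sketch of attention-based neighbor aggregation for one entity, written in plain NumPy. It is not the authors' code: the dot-product scoring, the toy entity table, and all names are illustrative assumptions, and the paper's snowball attention and Transformer branch are not reproduced.

```python
import numpy as np

def attention_aggregate(entity_vecs, neighbor_ids, center_id):
    """Attention-weighted neighbor aggregation for one center entity:
    score each neighbor against the center (dot product), softmax the
    scores, and return the weighted sum -- the core step of a
    GAT-style local aggregator."""
    center = entity_vecs[center_id]                # (d,)
    neighbors = entity_vecs[neighbor_ids]          # (k, d)
    scores = neighbors @ center                    # (k,) attention logits
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over neighbors
    return weights @ neighbors                     # (d,) aggregated message

rng = np.random.default_rng(0)
E = rng.normal(size=(10, 8))                       # 10 toy entities, dim 8
print(attention_aggregate(E, neighbor_ids=[1, 2, 3], center_id=0))
```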
https://aclanthology.org/2023.acl-long.851.bib | https://aclanthology.org/2023.acl-long.851/ | @inproceedings{guo-etal-2023-dual,
title = "Dual Cache for Long Document Neural Coreference Resolution",
author = "Guo, Qipeng and
Hu, Xiangkun and
Zhang, Yue and
Qiu, Xipeng and
Zhang, Zheng",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.851",
doi = "10.18653/v1/2023.acl-long.851",
pages = "15272--15285",
abstract = "Recent works show the effectiveness of cache-based neural coreference resolution models on long documents. These models incrementally process a long document from left to right and extract relations between mentions and entities in a cache, resulting in much lower memory and computation cost compared to computing all mentions in parallel. However, they do not handle cache misses when high-quality entities are purged from the cache, which causes wrong assignments and leads to prediction errors. We propose a new hybrid cache that integrates two eviction policies to capture global and local entities separately, and effectively reduces the aggregated cache misses up to half as before, while improving F1 score of coreference by 0.7 5.7pt. As such, the hybrid policy can accelerate existing cache-based models and offer a new long document coreference resolution solution. Results show that our method outperforms existing methods on four benchmarks while saving up to 83{\%} of inference time against non-cache-based models. Further, we achieve a new state-of-the-art on a long document coreference benchmark, LitBank.",
}
| Recent works show the effectiveness of cache-based neural coreference resolution models on long documents. These models incrementally process a long document from left to right and extract relations between mentions and entities in a cache, resulting in much lower memory and computation cost compared to computing all mentions in parallel. However, they do not handle cache misses when high-quality entities are purged from the cache, which causes wrong assignments and leads to prediction errors. We propose a new hybrid cache that integrates two eviction policies to capture global and local entities separately, effectively reducing the aggregated cache misses to up to half of their previous level while improving the coreference F1 score by 0.7-5.7pt. As such, the hybrid policy can accelerate existing cache-based models and offers a new solution for long-document coreference resolution. Results show that our method outperforms existing methods on four benchmarks while saving up to 83{\%} of inference time against non-cache-based models. Further, we achieve a new state of the art on a long-document coreference benchmark, LitBank. | [
"Guo, Qipeng",
"Hu, Xiangkun",
"Zhang, Yue",
"Qiu, Xipeng",
"Zhang, Zheng"
] | Dual Cache for Long Document Neural Coreference Resolution | acl-long.851 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
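The hybrid-cache idea above is easy to miniaturize. The sketch below keeps two regions with different eviction policies -- least-recently-used for local entities and least-frequent for global ones. The sizes, the exact pairing of policies, and the class name are our assumptions for illustration, not the paper's implementation.

```python
from collections import OrderedDict

class HybridEntityCache:
    """Toy cache with two regions and two eviction policies: a local
    LRU region for recently mentioned entities and a global region
    that keeps the most frequently mentioned ones."""

    def __init__(self, global_size=4, local_size=4):
        self.global_size, self.local_size = global_size, local_size
        self.freq = {}                        # entity -> mention count
        self.global_cache = {}                # entity -> representation
        self.local_cache = OrderedDict()      # insertion order = recency

    def observe(self, entity, rep):
        self.freq[entity] = self.freq.get(entity, 0) + 1
        # Local region: refresh recency, evict least-recently-used.
        self.local_cache[entity] = rep
        self.local_cache.move_to_end(entity)
        if len(self.local_cache) > self.local_size:
            self.local_cache.popitem(last=False)
        # Global region: evict the least frequently mentioned entity.
        self.global_cache[entity] = rep
        if len(self.global_cache) > self.global_size:
            victim = min(self.global_cache, key=lambda e: self.freq[e])
            del self.global_cache[victim]

    def lookup(self, entity):
        if entity in self.local_cache:
            return self.local_cache[entity]
        return self.global_cache.get(entity)  # None on a full cache miss

cache = HybridEntityCache()
for i, ent in enumerate(["Ann", "Bob", "Ann", "Eve", "Ann", "Joe", "Kim"]):
    cache.observe(ent, rep=f"vec{i}")
print(cache.lookup("Ann"), cache.lookup("Bob"))
```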
https://aclanthology.org/2023.acl-long.852.bib | https://aclanthology.org/2023.acl-long.852/ | @inproceedings{huang-etal-2023-knowledge,
title = "Knowledge Transfer in Incremental Learning for Multilingual Neural Machine Translation",
author = "Huang, Kaiyu and
Li, Peng and
Ma, Jin and
Yao, Ting and
Liu, Yang",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.852",
doi = "10.18653/v1/2023.acl-long.852",
pages = "15286--15304",
abstract = "In the real-world scenario, a longstanding goal of multilingual neural machine translation (MNMT) is that a single model can incrementally adapt to new language pairs without accessing previous training data. In this scenario, previous studies concentrate on overcoming catastrophic forgetting while lacking encouragement to learn new knowledge from incremental language pairs, especially when the incremental language is not related to the set of original languages. To better acquire new knowledge, we propose a knowledge transfer method that can efficiently adapt original MNMT models to diverse incremental language pairs. The method flexibly introduces the knowledge from an external model into original models, which encourages the models to learn new language pairs, completing the procedure of knowledge transfer. Moreover, all original parameters are frozen to ensure that translation qualities on original language pairs are not degraded. Experimental results show that our method can learn new knowledge from diverse language pairs incrementally meanwhile maintaining performance on original language pairs, outperforming various strong baselines in incremental learning for MNMT.",
}
| In the real-world scenario, a longstanding goal of multilingual neural machine translation (MNMT) is that a single model can incrementally adapt to new language pairs without accessing previous training data. In this scenario, previous studies concentrate on overcoming catastrophic forgetting while lacking encouragement to learn new knowledge from incremental language pairs, especially when the incremental language is not related to the set of original languages. To better acquire new knowledge, we propose a knowledge transfer method that can efficiently adapt original MNMT models to diverse incremental language pairs. The method flexibly introduces the knowledge from an external model into original models, which encourages the models to learn new language pairs, completing the procedure of knowledge transfer. Moreover, all original parameters are frozen to ensure that translation quality on original language pairs is not degraded. Experimental results show that our method can learn new knowledge from diverse language pairs incrementally while maintaining performance on original language pairs, outperforming various strong baselines in incremental learning for MNMT. | [
"Huang, Kaiyu",
"Li, Peng",
"Ma, Jin",
"Yao, Ting",
"Liu, Yang"
] | Knowledge Transfer in Incremental Learning for Multilingual Neural Machine Translation | acl-long.852 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
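One mechanism the abstract above states directly -- freezing all original parameters while training only newly added components for the incremental language pair -- looks roughly like this in PyTorch. The adapter-style residual module is our stand-in for the paper's knowledge-transfer component; dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

# Stand-in for the original (pre-trained) MNMT layer: fully frozen.
original = nn.Linear(16, 16)
for p in original.parameters():
    p.requires_grad = False            # original parameters stay fixed

# New trainable module for the incremental language pair; an
# adapter-style bottleneck we chose purely for illustration.
adapter = nn.Sequential(nn.Linear(16, 4), nn.ReLU(), nn.Linear(4, 16))

def forward(x):
    h = original(x)
    return h + adapter(h)              # new knowledge added residually

opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)
x, y = torch.randn(8, 16), torch.randn(8, 16)
loss = nn.functional.mse_loss(forward(x), y)
loss.backward()                        # gradients flow only into the adapter
opt.step()
print("trainable params:", sum(p.numel() for p in adapter.parameters()))
```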
https://aclanthology.org/2023.acl-long.853.bib | https://aclanthology.org/2023.acl-long.853/ | @inproceedings{aragon-etal-2023-disorbert,
title = "{D}isor{BERT}: A Double Domain Adaptation Model for Detecting Signs of Mental Disorders in Social Media",
author = "Aragon, Mario and
Lopez Monroy, Adrian Pastor and
Gonzalez, Luis and
Losada, David E. and
Montes, Manuel",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.853",
doi = "10.18653/v1/2023.acl-long.853",
pages = "15305--15318",
abstract = "Mental disorders affect millions of people worldwide and cause interference with their thinking and behavior. Through the past years, awareness created by health campaigns and other sources motivated the study of these disorders using information extracted from social media platforms. In this work, we aim to contribute to the study of these disorders and to the understanding of how mental problems reflect on social media. To achieve this goal, we propose a double-domain adaptation of a language model. First, we adapted the model to social media language, and then, we adapted it to the mental health domain. In both steps, we incorporated a lexical resource to guide the masking process of the language model and, therefore, to help it in paying more attention to words related to mental disorders. We have evaluated our model in the detection of signs of three major mental disorders: Anorexia, Self-harm, and Depression. Results are encouraging as they show that the proposed adaptation enhances the classification performance and yields competitive results against state-of-the-art methods.",
}
| Mental disorders affect millions of people worldwide and interfere with their thinking and behavior. Over the past years, awareness created by health campaigns and other sources has motivated the study of these disorders using information extracted from social media platforms. In this work, we aim to contribute to the study of these disorders and to the understanding of how mental problems are reflected on social media. To achieve this goal, we propose a double-domain adaptation of a language model. First, we adapted the model to social media language, and then, we adapted it to the mental health domain. In both steps, we incorporated a lexical resource to guide the masking process of the language model and, therefore, to help it pay more attention to words related to mental disorders. We have evaluated our model on the detection of signs of three major mental disorders: Anorexia, Self-harm, and Depression. Results are encouraging as they show that the proposed adaptation enhances the classification performance and yields competitive results against state-of-the-art methods. | [
"Aragon, Mario",
"Lopez Monroy, Adrian Pastor",
"Gonzalez, Luis",
"Losada, David E.",
"Montes, Manuel"
] | DisorBERT: A Double Domain Adaptation Model for Detecting Signs of Mental Disorders in Social Media | acl-long.853 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
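The lexicon-guided masking step described above can be sketched in a few lines: words from a mental-health lexicon get a higher masking probability during MLM pre-training. The lexicon contents and both probabilities below are made-up illustrations, not values from the paper.

```python
import random

MENTAL_HEALTH_LEXICON = {"anxious", "hopeless", "insomnia"}  # toy lexicon

def choose_mask_positions(tokens, base_p=0.15, lexicon_p=0.5, seed=0):
    """Pick token positions to mask for MLM pre-training, masking
    lexicon words with a higher probability than ordinary words."""
    rng = random.Random(seed)
    positions = []
    for i, tok in enumerate(tokens):
        p = lexicon_p if tok.lower() in MENTAL_HEALTH_LEXICON else base_p
        if rng.random() < p:
            positions.append(i)
    return positions

tokens = "i feel hopeless and anxious every night".split()
print(choose_mask_positions(tokens))
```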
https://aclanthology.org/2023.acl-long.854.bib | https://aclanthology.org/2023.acl-long.854/ | @inproceedings{li-etal-2023-toward,
title = "Toward Interactive Dictation",
author = "Li, Belinda Z. and
Eisner, Jason and
Pauls, Adam and
Thomson, Sam",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.854",
doi = "10.18653/v1/2023.acl-long.854",
pages = "15319--15338",
abstract = "Voice dictation is an increasingly important text input modality. Existing systems that allow both dictation and editing-by-voice restrict their command language to flat templates invoked by trigger words. In this work, we study the feasibility of allowing users to interrupt their dictation with spoken editing commands in open-ended natural language. We introduce a new task and dataset, TERTiUS, to experiment with such systems. To support this flexibility in real-time, a system must incrementally segment and classify spans of speech as either dictation or command, and interpret the spans that are commands. We experiment with using large pre-trained language models to predict the edited text, or alternatively, to predict a small text-editing program. Experiments show a natural trade-off between model accuracy and latency: a smaller model achieves 30{\%} end-state accuracy with 1.3 seconds of latency, while a larger model achieves 55{\%} end-state accuracy with 7 seconds of latency.",
}
| Voice dictation is an increasingly important text input modality. Existing systems that allow both dictation and editing-by-voice restrict their command language to flat templates invoked by trigger words. In this work, we study the feasibility of allowing users to interrupt their dictation with spoken editing commands in open-ended natural language. We introduce a new task and dataset, TERTiUS, to experiment with such systems. To support this flexibility in real time, a system must incrementally segment and classify spans of speech as either dictation or command, and interpret the spans that are commands. We experiment with using large pre-trained language models to predict the edited text, or alternatively, to predict a small text-editing program. Experiments show a natural trade-off between model accuracy and latency: a smaller model achieves 30{\%} end-state accuracy with 1.3 seconds of latency, while a larger model achieves 55{\%} end-state accuracy with 7 seconds of latency. | [
"Li, Belinda Z.",
"Eisner, Jason",
"Pauls, Adam",
"Thomson, Sam"
] | Toward Interactive Dictation | acl-long.854 | Poster | 2307.04008 | [
""
] | https://huggingface.co/papers/2307.04008 | 1 | 3 | 0 | 4 | 1 | [] | [] | [] |
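The abstract above hinges on incrementally segmenting a speech stream into dictation vs. command spans. The toy segmenter below shows only the interface of that step, using a cue-word heuristic in place of the learned models the paper evaluates; the cue list is invented.

```python
COMMAND_CUES = {"delete", "replace", "capitalize", "undo"}  # invented cues

def segment_stream(words):
    """Incrementally assign each incoming word to a dictation or command
    span, closing the current span whenever the label flips."""
    spans, current, label = [], [], "dictation"
    for w in words:
        w_label = "command" if w.lower() in COMMAND_CUES else "dictation"
        if w_label != label and current:
            spans.append((label, " ".join(current)))
            current = []
        label = w_label
        current.append(w)
    if current:
        spans.append((label, " ".join(current)))
    return spans

print(segment_stream("send the report delete that send the memo".split()))
```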
https://aclanthology.org/2023.acl-long.855.bib | https://aclanthology.org/2023.acl-long.855/ | @inproceedings{li-etal-2023-codeie,
title = "{C}ode{IE}: Large Code Generation Models are Better Few-Shot Information Extractors",
author = "Li, Peng and
Sun, Tianxiang and
Tang, Qiong and
Yan, Hang and
Wu, Yuanbin and
Huang, Xuanjing and
Qiu, Xipeng",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.855",
doi = "10.18653/v1/2023.acl-long.855",
pages = "15339--15353",
abstract = "Large language models (LLMs) pre-trained on massive corpora have demonstrated impressive few-shot learning ability on many NLP tasks. A common practice is to recast the task into a text-to-text format such that generative LLMs of natural language (NL-LLMs) like GPT-3 can be prompted to solve it. However, it is nontrivial to perform information extraction (IE) tasks with NL-LLMs since the output of the IE task is usually structured and therefore is hard to be converted into plain text. In this paper, we propose to recast the structured output in the form of code instead of natural language and utilize generative LLMs of code (Code-LLMs) such as Codex to perform IE tasks, in particular, named entity recognition and relation extraction. In contrast to NL-LLMs, we show that Code-LLMs can be well-aligned with these IE tasks by designing code-style prompts and formulating these IE tasks as code generation tasks. Experiment results on seven benchmarks show that our method consistently outperforms fine-tuning moderate-size pre-trained models specially designed for IE tasks (e.g., UIE) and prompting NL-LLMs under few-shot settings. We further conduct a series of in-depth analyses to demonstrate the merits of leveraging Code-LLMs for IE tasks.",
}
| Large language models (LLMs) pre-trained on massive corpora have demonstrated impressive few-shot learning ability on many NLP tasks. A common practice is to recast the task into a text-to-text format such that generative LLMs of natural language (NL-LLMs) like GPT-3 can be prompted to solve it. However, it is nontrivial to perform information extraction (IE) tasks with NL-LLMs since the output of the IE task is usually structured and therefore is hard to convert into plain text. In this paper, we propose to recast the structured output in the form of code instead of natural language and utilize generative LLMs of code (Code-LLMs) such as Codex to perform IE tasks, in particular, named entity recognition and relation extraction. In contrast to NL-LLMs, we show that Code-LLMs can be well-aligned with these IE tasks by designing code-style prompts and formulating these IE tasks as code generation tasks. Experimental results on seven benchmarks show that our method consistently outperforms fine-tuning moderate-size pre-trained models specially designed for IE tasks (e.g., UIE) and prompting NL-LLMs under few-shot settings. We further conduct a series of in-depth analyses to demonstrate the merits of leveraging Code-LLMs for IE tasks. | [
"Li, Peng",
"Sun, Tianxiang",
"Tang, Qiong",
"Yan, Hang",
"Wu, Yuanbin",
"Huang, Xuanjing",
"Qiu, Xipeng"
] | CodeIE: Large Code Generation Models are Better Few-Shot Information Extractors | acl-long.855 | Poster | 2305.05711 | [
"https://github.com/dasepli/codeie"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
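The code-style prompting that CodeIE describes can be illustrated with a tiny prompt builder: each demonstration is rendered as a Python function that fills an entity list, and the final block is left open for the code LLM to complete. The exact template here is our guess at the style, not the paper's released format.

```python
def ner_code_prompt(sentence, demos):
    """Render a few-shot NER prompt in code style: each demo is a Python
    function whose body appends (span, type) entities; the final block
    is left open for the code LLM to complete."""
    blocks = []
    for text, entities in demos + [(sentence, None)]:
        lines = [
            "def named_entity_recognition(input_text):",
            '    """extract named entities from the input_text."""',
            f'    input_text = "{text}"',
            "    entity_list = []",
        ]
        if entities is not None:                  # demos carry gold entities
            for span, etype in entities:
                lines.append(
                    f'    entity_list.append({{"text": "{span}", "type": "{etype}"}})'
                )
        blocks.append("\n".join(lines))
    return "\n\n".join(blocks)

demo = [("Steve Jobs founded Apple.", [("Steve Jobs", "PER"), ("Apple", "ORG")])]
print(ner_code_prompt("Ada Lovelace lived in London.", demo))
```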
https://aclanthology.org/2023.acl-long.856.bib | https://aclanthology.org/2023.acl-long.856/ | @inproceedings{patra-etal-2023-beyond,
title = "Beyond {E}nglish-Centric Bitexts for Better Multilingual Language Representation Learning",
author = "Patra, Barun and
Singhal, Saksham and
Huang, Shaohan and
Chi, Zewen and
Dong, Li and
Wei, Furu and
Chaudhary, Vishrav and
Song, Xia",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.856",
doi = "10.18653/v1/2023.acl-long.856",
pages = "15354--15373",
abstract = "In this paper, we elaborate upon recipes for building multilingual representation models that are not only competitive with existing state-of-the-art models but are also more parameter efficient, thereby promoting better adoption in resource-constrained scenarios and practical applications. We show that going beyond English-centric bitexts, coupled with a novel sampling strategy aimed at reducing under-utilization of training data, substantially boosts performance across model sizes for both Electra and MLM pre-training objectives. We introduce XY-LENT: X-Y bitext enhanced Language ENcodings using Transformers which not only achieves state-of-the-art performance over 5 cross-lingual tasks within all model size bands, is also competitive across bands. Our XY-LENT XL variant outperforms XLM-R XXL and exhibits competitive performance with mT5 XXL while being 5x and 6x smaller respectively. We then show that our proposed method helps ameliorate the curse of multilinguality, with the XY-LENT XL achieving 99.3{\%} GLUE performance and 98.5{\%} SQuAD 2.0 performance compared to a SoTA English only model in the same size band. We then analyze our models performance on extremely low resource languages and posit that scaling alone may not be sufficient for improving the performance in this scenario",
}
| In this paper, we elaborate upon recipes for building multilingual representation models that are not only competitive with existing state-of-the-art models but are also more parameter efficient, thereby promoting better adoption in resource-constrained scenarios and practical applications. We show that going beyond English-centric bitexts, coupled with a novel sampling strategy aimed at reducing under-utilization of training data, substantially boosts performance across model sizes for both Electra and MLM pre-training objectives. We introduce XY-LENT: X-Y bitext enhanced Language ENcodings using Transformers, which not only achieves state-of-the-art performance on 5 cross-lingual tasks within all model size bands, but is also competitive across bands. Our XY-LENT XL variant outperforms XLM-R XXL and exhibits competitive performance with mT5 XXL while being 5x and 6x smaller, respectively. We then show that our proposed method helps ameliorate the curse of multilinguality, with the XY-LENT XL achieving 99.3{\%} GLUE performance and 98.5{\%} SQuAD 2.0 performance compared to a SoTA English-only model in the same size band. We then analyze our model{'}s performance on extremely low-resource languages and posit that scaling alone may not be sufficient for improving performance in this scenario. | [
"Patra, Barun",
"Singhal, Saksham",
"Huang, Shaohan",
"Chi, Zewen",
"Dong, Li",
"Wei, Furu",
"Chaudhary, Vishrav",
"Song, Xia"
] | Beyond English-Centric Bitexts for Better Multilingual Language Representation Learning | acl-long.856 | Poster | 2210.14867 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.857.bib | https://aclanthology.org/2023.acl-long.857/ | @inproceedings{zhang-etal-2023-bridging-gap,
title = "Bridging The Gap: Entailment Fused-T5 for Open-retrieval Conversational Machine Reading Comprehension",
author = "Zhang, Xiao and
Huang, Heyan and
Chi, Zewen and
Mao, Xian-Ling",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.857",
doi = "10.18653/v1/2023.acl-long.857",
pages = "15374--15386",
abstract = "Open-retrieval conversational machine reading comprehension (OCMRC) simulates real-life conversational interaction scenes. Machines are required to make a decision of {``}Yes/No/Inquire{''} or generate a follow-up question when the decision is {``}Inquire{''} based on retrieved rule texts, user scenario, user question and dialogue history. Recent studies try to reduce the information gap between decision-making and question generation, in order to improve the performance of generation. However, the information gap still persists because these methods are still limited in pipeline framework, where decision-making and question generation are performed separately, making it hard to share the entailment reasoning used in decision-making across all stages. To tackle the above problem, we propose a novel one-stage end-to-end framework, called Entailment Fused-T5 (EFT), to bridge the information gap between decision-making and question generation in a global understanding manner. The extensive experimental results demonstrate that our proposed framework achieves new state-of-the-art performance on the OR-ShARC benchmark. Our model and code are publicly available at an anonymous link.",
}
| Open-retrieval conversational machine reading comprehension (OCMRC) simulates real-life conversational interaction scenes. Machines are required to make a decision of {``}Yes/No/Inquire{''} or generate a follow-up question when the decision is {``}Inquire{''} based on retrieved rule texts, user scenario, user question and dialogue history. Recent studies try to reduce the information gap between decision-making and question generation, in order to improve the performance of generation. However, the information gap still persists because these methods are still limited to a pipeline framework, where decision-making and question generation are performed separately, making it hard to share the entailment reasoning used in decision-making across all stages. To tackle the above problem, we propose a novel one-stage end-to-end framework, called Entailment Fused-T5 (EFT), to bridge the information gap between decision-making and question generation in a global understanding manner. The extensive experimental results demonstrate that our proposed framework achieves new state-of-the-art performance on the OR-ShARC benchmark. Our model and code are publicly available at an anonymous link. | [
"Zhang, Xiao",
"Huang, Heyan",
"Chi, Zewen",
"Mao, Xian-Ling"
] | Bridging The Gap: Entailment Fused-T5 for Open-retrieval Conversational Machine Reading Comprehension | acl-long.857 | Poster | 2212.09353 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.858.bib | https://aclanthology.org/2023.acl-long.858/ | @inproceedings{gao-etal-2023-livechat,
title = "{L}ive{C}hat: A Large-Scale Personalized Dialogue Dataset Automatically Constructed from Live Streaming",
author = "Gao, Jingsheng and
Lian, Yixin and
Zhou, Ziyi and
Fu, Yuzhuo and
Wang, Baoyuan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.858",
doi = "10.18653/v1/2023.acl-long.858",
pages = "15387--15405",
abstract = "Open-domain dialogue systems have made promising progress in recent years. While the state-of-the-art dialogue agents are built upon large-scale social media data and large pre-trained models, there is no guarantee these agents could also perform well in fast-growing scenarios, such as live streaming, due to the bounded transferability of pre-trained models and biased distributions of public datasets from Reddit and Weibo, etc. To improve the essential capability of responding and establish a benchmark in the live open-domain scenario, we introduce the LiveChat dataset, composed of 1.33 million real-life Chinese dialogues with almost 3800 average sessions across 351 personas and fine-grained profiles for each persona. LiveChat is automatically constructed by processing numerous live videos on the Internet and naturally falls within the scope of multi-party conversations, where the issues of Who says What to Whom should be considered. Therefore, we target two critical tasks of response modeling and addressee recognition and propose retrieval-based baselines grounded on advanced techniques. Experimental results have validated the positive effects of leveraging persona profiles and larger average sessions per persona. In addition, we also benchmark the transferability of advanced generation-based models on LiveChat and pose some future directions for current challenges.",
}
| Open-domain dialogue systems have made promising progress in recent years. While the state-of-the-art dialogue agents are built upon large-scale social media data and large pre-trained models, there is no guarantee these agents could also perform well in fast-growing scenarios, such as live streaming, due to the bounded transferability of pre-trained models and biased distributions of public datasets from Reddit and Weibo, etc. To improve the essential capability of responding and establish a benchmark in the live open-domain scenario, we introduce the LiveChat dataset, composed of 1.33 million real-life Chinese dialogues with almost 3800 average sessions across 351 personas and fine-grained profiles for each persona. LiveChat is automatically constructed by processing numerous live videos on the Internet and naturally falls within the scope of multi-party conversations, where the issues of Who says What to Whom should be considered. Therefore, we target two critical tasks of response modeling and addressee recognition and propose retrieval-based baselines grounded on advanced techniques. Experimental results have validated the positive effects of leveraging persona profiles and larger average sessions per persona. In addition, we also benchmark the transferability of advanced generation-based models on LiveChat and pose some future directions for current challenges. | [
"Gao, Jingsheng",
"Lian, Yixin",
"Zhou, Ziyi",
"Fu, Yuzhuo",
"Wang, Baoyuan"
] | LiveChat: A Large-Scale Personalized Dialogue Dataset Automatically Constructed from Live Streaming | acl-long.858 | Poster | 2306.08401 | [
"https://github.com/gaojingsheng/livechat"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.859.bib | https://aclanthology.org/2023.acl-long.859/ | @inproceedings{vilar-etal-2023-prompting,
title = "Prompting {P}a{LM} for Translation: Assessing Strategies and Performance",
author = "Vilar, David and
Freitag, Markus and
Cherry, Colin and
Luo, Jiaming and
Ratnakar, Viresh and
Foster, George",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.859",
doi = "10.18653/v1/2023.acl-long.859",
pages = "15406--15427",
abstract = "Large language models (LLMs) that have been trained on multilingual but not parallel text exhibit a remarkable ability to translate between languages. We probe this ability in an in-depth study of the pathways language model (PaLM), which has demonstrated the strongest machine translation (MT) performance among similarly-trained LLMs to date. We investigate various strategies for choosing translation examples for few-shot prompting, concluding that example quality is the most important factor. Using optimized prompts, we revisit previous assessments of PaLM{'}s MT capabilities with more recent test sets, modern MT metrics, and human evaluation, and find that its performance, while impressive, still lags that of state-of-the-art supervised systems. We conclude by providing an analysis of PaLM{'}s MT output which reveals some interesting properties and prospects for future work.",
}
| Large language models (LLMs) that have been trained on multilingual but not parallel text exhibit a remarkable ability to translate between languages. We probe this ability in an in-depth study of the pathways language model (PaLM), which has demonstrated the strongest machine translation (MT) performance among similarly-trained LLMs to date. We investigate various strategies for choosing translation examples for few-shot prompting, concluding that example quality is the most important factor. Using optimized prompts, we revisit previous assessments of PaLM{'}s MT capabilities with more recent test sets, modern MT metrics, and human evaluation, and find that its performance, while impressive, still lags that of state-of-the-art supervised systems. We conclude by providing an analysis of PaLM{'}s MT output which reveals some interesting properties and prospects for future work. | [
"Vilar, David",
"Freitag, Markus",
"Cherry, Colin",
"Luo, Jiaming",
"Ratnakar, Viresh",
"Foster, George"
] | Prompting PaLM for Translation: Assessing Strategies and Performance | acl-long.859 | Oral | 2211.09102 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
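Few-shot MT prompting of the kind studied above reduces to assembling selected (source, target) pairs into a template. The sketch below uses a generic label-style template of our own choosing; the paper's finding is that the quality of the chosen examples matters more than the template, and example selection itself is assumed, not shown.

```python
def build_mt_prompt(examples, source, src_lang="English", tgt_lang="German"):
    """Assemble a k-shot translation prompt from (source, target) pairs,
    ending with an open target line for the model to complete."""
    lines = []
    for src, tgt in examples:
        lines.append(f"{src_lang}: {src}")
        lines.append(f"{tgt_lang}: {tgt}")
    lines.append(f"{src_lang}: {source}")
    lines.append(f"{tgt_lang}:")
    return "\n".join(lines)

shots = [("The weather is nice.", "Das Wetter ist schön."),
         ("Where is the station?", "Wo ist der Bahnhof?")]
print(build_mt_prompt(shots, "The train is late."))
```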
https://aclanthology.org/2023.acl-long.860.bib | https://aclanthology.org/2023.acl-long.860/ | @inproceedings{chen-etal-2023-exploring,
title = "Exploring Lottery Prompts for Pre-trained Language Models",
author = "Chen, Yulin and
Ding, Ning and
Wang, Xiaobin and
Hu, Shengding and
Zheng, Haitao and
Liu, Zhiyuan and
Xie, Pengjun",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.860",
doi = "10.18653/v1/2023.acl-long.860",
pages = "15428--15444",
abstract = "Consistently scaling pre-trained language models (PLMs) imposes substantial burdens on model adaptation, necessitating more efficient alternatives to conventional fine-tuning. Given the advantage of prompting in the zero-shot setting and the observed performance fluctuation among different prompts, we explore the instance-level prompt and their generalizability.By searching through the prompt space, we first validate the assumption that for every instance, there is almost always a lottery prompt that induces the correct prediction from the PLM, and such prompt can be obtained at a low cost thanks to the inherent ability of PLMs.Meanwhile, it is shown that some strong lottery prompts have high performance over the whole training set, and they are equipped with distinguishable linguistic features. Lastly, we attempt to generalize the searched strong lottery prompts to unseen data with prompt ensembling method. Experiments are conducted on various types of NLP classification tasks and demonstrate that the proposed method can achieve comparable results with other gradient-free and optimization-free baselines.",
}
| Consistently scaling pre-trained language models (PLMs) imposes substantial burdens on model adaptation, necessitating more efficient alternatives to conventional fine-tuning. Given the advantage of prompting in the zero-shot setting and the observed performance fluctuation among different prompts, we explore instance-level prompts and their generalizability. By searching through the prompt space, we first validate the assumption that for every instance, there is almost always a lottery prompt that induces the correct prediction from the PLM, and such a prompt can be obtained at a low cost thanks to the inherent ability of PLMs. Meanwhile, it is shown that some strong lottery prompts have high performance over the whole training set, and they are equipped with distinguishable linguistic features. Lastly, we attempt to generalize the searched strong lottery prompts to unseen data with a prompt ensembling method. Experiments are conducted on various types of NLP classification tasks and demonstrate that the proposed method can achieve comparable results with other gradient-free and optimization-free baselines. | [
"Chen, Yulin",
"Ding, Ning",
"Wang, Xiaobin",
"Hu, Shengding",
"Zheng, Haitao",
"Liu, Zhiyuan",
"Xie, Pengjun"
] | Exploring Lottery Prompts for Pre-trained Language Models | acl-long.860 | Poster | 2305.19500 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
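The per-instance search behind "lottery prompts" can be sketched as a loop over a prompt pool, returning the first template that makes the model predict the instance's label. The pool, the stub predictor, and the sentiment task below are illustrative stand-ins for a real PLM.

```python
def find_lottery_prompt(instance, label, prompt_pool, predict):
    """Search a pool of templates for one that makes the model predict
    this instance's label -- the per-instance lottery-prompt check."""
    for template in prompt_pool:
        prompt = template.format(text=instance)
        if predict(prompt) == label:
            return template
    return None

pool = ["{text} Overall it was", "{text} The sentiment is", "{text} I felt"]
toy_predict = lambda prompt: "positive" if "great" in prompt else "negative"
print(find_lottery_prompt("The movie was great.", "positive", pool, toy_predict))
```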
https://aclanthology.org/2023.acl-long.861.bib | https://aclanthology.org/2023.acl-long.861/ | @inproceedings{zheng-etal-2023-facial,
title = "A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations",
author = "Zheng, Wenjie and
Yu, Jianfei and
Xia, Rui and
Wang, Shijin",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.861",
doi = "10.18653/v1/2023.acl-long.861",
pages = "15445--15459",
abstract = "Multimodal Emotion Recognition in Multiparty Conversations (MERMC) has recently attracted considerable attention. Due to the complexity of visual scenes in multi-party conversations, most previous MERMC studies mainly focus on text and audio modalities while ignoring visual information. Recently, several works proposed to extract face sequences as visual features and have shown the importance of visual information in MERMC. However, given an utterance, the face sequence extracted by previous methods may contain multiple people{'}s faces, which will inevitably introduce noise to the emotion prediction of the real speaker. To tackle this issue, we propose a two-stage framework named Facial expressionaware Multimodal Multi-Task learning (FacialMMT). Specifically, a pipeline method is first designed to extract the face sequence of the real speaker of each utterance, which consists of multimodal face recognition, unsupervised face clustering, and face matching. With the extracted face sequences, we propose a multimodal facial expression-aware emotion recognition model, which leverages the frame-level facial emotion distributions to help improve utterance-level emotion recognition based on multi-task learning. Experiments demonstrate the effectiveness of the proposed FacialMMT framework on the benchmark MELD dataset. The source code is publicly released at \url{https://github.com/NUSTM/FacialMMT}.",
}
| Multimodal Emotion Recognition in Multiparty Conversations (MERMC) has recently attracted considerable attention. Due to the complexity of visual scenes in multi-party conversations, most previous MERMC studies mainly focus on text and audio modalities while ignoring visual information. Recently, several works proposed to extract face sequences as visual features and have shown the importance of visual information in MERMC. However, given an utterance, the face sequence extracted by previous methods may contain multiple people{'}s faces, which will inevitably introduce noise to the emotion prediction of the real speaker. To tackle this issue, we propose a two-stage framework named Facial expression-aware Multimodal Multi-Task learning (FacialMMT). Specifically, a pipeline method is first designed to extract the face sequence of the real speaker of each utterance, which consists of multimodal face recognition, unsupervised face clustering, and face matching. With the extracted face sequences, we propose a multimodal facial expression-aware emotion recognition model, which leverages the frame-level facial emotion distributions to help improve utterance-level emotion recognition based on multi-task learning. Experiments demonstrate the effectiveness of the proposed FacialMMT framework on the benchmark MELD dataset. The source code is publicly released at \url{https://github.com/NUSTM/FacialMMT}. | [
"Zheng, Wenjie",
"Yu, Jianfei",
"Xia, Rui",
"Wang, Shijin"
] | A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations | acl-long.861 | Poster | [
"https://github.com/NUSTM/FacialMMT"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.862.bib | https://aclanthology.org/2023.acl-long.862/ | @inproceedings{li-etal-2023-teast,
title = "{T}e{AST}: Temporal Knowledge Graph Embedding via Archimedean Spiral Timeline",
author = "Li, Jiang and
Su, Xiangdong and
Gao, Guanglai",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.862",
doi = "10.18653/v1/2023.acl-long.862",
pages = "15460--15474",
abstract = "Temporal knowledge graph embedding (TKGE) models are commonly utilized to infer the missing facts and facilitate reasoning and decision-making in temporal knowledge graph based systems. However, existing methods fuse temporal information into entities, potentially leading to the evolution of entity information and limiting the link prediction performance of TKG. Meanwhile, current TKGE models often lack the ability to simultaneously model important relation patterns and provide interpretability, which hinders their effectiveness and potential applications. To address these limitations, we propose a novel TKGE model which encodes \textbf{T}emporal knowledge graph \textbf{e}mbeddings via \textbf{A}rchimedean \textbf{S}piral \textbf{T}imeline (TeAST), which maps relations onto the corresponding Archimedean spiral timeline and transforms the quadruples completion to 3th-order tensor completion problem. Specifically, the Archimedean spiral timeline ensures that relations that occur simultaneously are placed on the same timeline, and all relations evolve over time. Meanwhile, we present a novel temporal spiral regularizer to make the spiral timeline orderly. In addition, we provide mathematical proofs to demonstrate the ability of TeAST to encode various relation patterns. Experimental results show that our proposed model significantly outperforms existing TKGE methods. Our code is available at \url{https://github.com/IMU-MachineLearningSXD/TeAST}.",
}
| Temporal knowledge graph embedding (TKGE) models are commonly utilized to infer the missing facts and facilitate reasoning and decision-making in temporal knowledge graph based systems. However, existing methods fuse temporal information into entities, potentially leading to the evolution of entity information and limiting the link prediction performance of TKG. Meanwhile, current TKGE models often lack the ability to simultaneously model important relation patterns and provide interpretability, which hinders their effectiveness and potential applications. To address these limitations, we propose a novel TKGE model which encodes \textbf{T}emporal knowledge graph \textbf{e}mbeddings via \textbf{A}rchimedean \textbf{S}piral \textbf{T}imeline (TeAST), which maps relations onto the corresponding Archimedean spiral timeline and transforms quadruple completion into a 3rd-order tensor completion problem. Specifically, the Archimedean spiral timeline ensures that relations that occur simultaneously are placed on the same timeline, and all relations evolve over time. Meanwhile, we present a novel temporal spiral regularizer to make the spiral timeline orderly. In addition, we provide mathematical proofs to demonstrate the ability of TeAST to encode various relation patterns. Experimental results show that our proposed model significantly outperforms existing TKGE methods. Our code is available at \url{https://github.com/IMU-MachineLearningSXD/TeAST}. | [
"Li, Jiang",
"Su, Xiangdong",
"Gao, Guanglai"
] | TeAST: Temporal Knowledge Graph Embedding via Archimedean Spiral Timeline | acl-long.862 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
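The Archimedean spiral timeline in TeAST follows the curve r = a + b*theta, so facts with the same timestamp share a point and later timestamps move outward along the curve. The coordinate sketch below shows only that mapping; the embedding scoring function and the spiral regularizer are not reproduced, and the one-angle-step-per-timestamp choice is ours.

```python
import numpy as np

def spiral_timeline(timestamps, a=0.0, b=1.0):
    """Map discrete timestamps onto an Archimedean spiral r = a + b*theta,
    returning 2D coordinates: simultaneous facts coincide, later facts
    move outward along the curve."""
    theta = np.asarray(timestamps, dtype=float)   # one angle step per timestamp
    r = a + b * theta
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1)

print(spiral_timeline([0, 1, 2, 3]))
```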
https://aclanthology.org/2023.acl-long.863.bib | https://aclanthology.org/2023.acl-long.863/ | @inproceedings{bao-etal-2023-human,
title = "Human Inspired Progressive Alignment and Comparative Learning for Grounded Word Acquisition",
author = "Bao, Yuwei and
Lattimer, Barrett and
Chai, Joyce",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.863",
doi = "10.18653/v1/2023.acl-long.863",
pages = "15475--15493",
abstract = "Human language acquisition is an efficient, supervised, and continual process. In this work, we took inspiration from how human babies acquire their first language, and developed a computational process for word acquisition through comparative learning. Motivated by cognitive findings, we generated a small dataset that enables the computation models to compare the similarities and differences of various attributes, learn to filter out and extract the common information for each shared linguistic label. We frame the acquisition of words as not only the information filtration process, but also as representation-symbol mapping. This procedure does not involve a fixed vocabulary size, nor a discriminative objective, and allows the models to continually learn more concepts efficiently. Our results in controlled experiments have shown the potential of this approach for efficient continual learning of grounded words.",
}
| Human language acquisition is an efficient, supervised, and continual process. In this work, we took inspiration from how human babies acquire their first language, and developed a computational process for word acquisition through comparative learning. Motivated by cognitive findings, we generated a small dataset that enables computational models to compare the similarities and differences of various attributes, and to learn to filter out and extract the common information for each shared linguistic label. We frame the acquisition of words not only as an information filtration process, but also as representation-symbol mapping. This procedure does not involve a fixed vocabulary size, nor a discriminative objective, and allows the models to continually learn more concepts efficiently. Our results in controlled experiments have shown the potential of this approach for efficient continual learning of grounded words. | [
"Bao, Yuwei",
"Lattimer, Barrett",
"Chai, Joyce"
] | Human Inspired Progressive Alignment and Comparative Learning for Grounded Word Acquisition | acl-long.863 | Poster | 2307.02615 | [
"https://github.com/sled-group/comparative-learning"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.864.bib | https://aclanthology.org/2023.acl-long.864/ | @inproceedings{przepiorkowski-wozniak-2023-conjunct,
title = "Conjunct Lengths in {E}nglish, Dependency Length Minimization, and Dependency Structure of Coordination",
author = "Przepi{\'o}rkowski, Adam and
Wo{\'z}niak, Micha{\l}",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.864",
doi = "10.18653/v1/2023.acl-long.864",
pages = "15494--15512",
abstract = "This paper confirms that, in English binary coordinations, left conjuncts tend to be shorter than right conjuncts, regardless of the position of the governor of the coordination. We demonstrate that this tendency becomes stronger when length differences are greater, but only when the governor is on the left or absent, not when it is on the right. We explain this effect via Dependency Length Minimization and we show that this explanation provides support for symmetrical dependency structures of coordination (where coordination is multi-headed by all conjuncts, as in Word Grammar or in enhanced Universal Dependencies, or where it single-headed by the conjunction, as in the Prague Dependency Treebank), as opposed to asymmetrical structures (where coordination is headed by the first conjunct, as in the Meaning{--}Text Theory or in basic Universal Dependencies).",
}
| This paper confirms that, in English binary coordinations, left conjuncts tend to be shorter than right conjuncts, regardless of the position of the governor of the coordination. We demonstrate that this tendency becomes stronger when length differences are greater, but only when the governor is on the left or absent, not when it is on the right. We explain this effect via Dependency Length Minimization and we show that this explanation provides support for symmetrical dependency structures of coordination (where coordination is multi-headed by all conjuncts, as in Word Grammar or in enhanced Universal Dependencies, or where it is single-headed by the conjunction, as in the Prague Dependency Treebank), as opposed to asymmetrical structures (where coordination is headed by the first conjunct, as in the Meaning{--}Text Theory or in basic Universal Dependencies). | [
"Przepi{\\'o}rkowski, Adam",
"Wo{\\'z}niak, Micha{\\l}"
] | Conjunct Lengths in English, Dependency Length Minimization, and Dependency Structure of Coordination | acl-long.864 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.865.bib | https://aclanthology.org/2023.acl-long.865/ | @inproceedings{chalkidis-etal-2023-lexfiles,
title = "{L}e{XF}iles and {L}egal{LAMA}: Facilitating {E}nglish Multinational Legal Language Model Development",
author = "Chalkidis, Ilias and
Garneau, Nicolas and
Goanta, Catalina and
Katz, Daniel and
S{\o}gaard, Anders",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.865",
doi = "10.18653/v1/2023.acl-long.865",
pages = "15513--15535",
abstract = "In this work, we conduct a detailed analysis on the performance of legal-oriented pre-trained language models (PLMs). We examine the interplay between their original objective, acquired knowledge, and legal language understanding capacities which we define as the upstream, probing, and downstream performance, respectively. We consider not only the models{'} size but also the pre-training corpora used as important dimensions in our study. To this end, we release a multinational English legal corpus (LeXFiles) and a legal knowledge probing benchmark (LegalLAMA) to facilitate training and detailed analysis of legal-oriented PLMs. We release two new legal PLMs trained on LeXFiles and evaluate them alongside others on LegalLAMA and LexGLUE. We find that probing performance strongly correlates with upstream performance in related legal topics. On the other hand, downstream performance is mainly driven by the model{'}s size and prior legal knowledge which can be estimated by upstream and probing performance. Based on these findings, we can conclude that both dimensions are important for those seeking the development of domain-specific PLMs.",
}
| In this work, we conduct a detailed analysis of the performance of legal-oriented pre-trained language models (PLMs). We examine the interplay between their original objective, acquired knowledge, and legal language understanding capacities, which we define as the upstream, probing, and downstream performance, respectively. We consider not only the models{'} size but also the pre-training corpora used as important dimensions in our study. To this end, we release a multinational English legal corpus (LeXFiles) and a legal knowledge probing benchmark (LegalLAMA) to facilitate training and detailed analysis of legal-oriented PLMs. We release two new legal PLMs trained on LeXFiles and evaluate them alongside others on LegalLAMA and LexGLUE. We find that probing performance strongly correlates with upstream performance in related legal topics. On the other hand, downstream performance is mainly driven by the model{'}s size and prior legal knowledge, which can be estimated by upstream and probing performance. Based on these findings, we can conclude that both dimensions are important for those seeking the development of domain-specific PLMs. | [
"Chalkidis, Ilias",
"Garneau, Nicolas",
"Goanta, Catalina",
"Katz, Daniel",
"S{\\o}gaard, Anders"
] | LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development | acl-long.865 | Poster | 2305.07507 | [
"https://github.com/coastalcph/lexlms"
] | https://huggingface.co/papers/2305.07507 | 1 | 0 | 0 | 5 | 1 | [
"lexlms/legal-roberta-large",
"lexlms/legal-longformer-large",
"lexlms/legal-roberta-base",
"lexlms/legal-longformer-base",
"danielsbest/LegalLexRoBERTa"
] | [
"lexlms/lex_files",
"lexlms/legal_lama"
] | [] |
https://aclanthology.org/2023.acl-long.866.bib | https://aclanthology.org/2023.acl-long.866/ | @inproceedings{liu-etal-2023-revisiting-commonsense,
title = "Revisiting Commonsense Reasoning in Machine Translation: Training, Evaluation and Challenge",
author = "Liu, Xuebo and
Wang, Yutong and
Wong, Derek F. and
Zhan, Runzhe and
Yu, Liangxuan and
Zhang, Min",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.866",
doi = "10.18653/v1/2023.acl-long.866",
pages = "15536--15550",
abstract = "The ability of commonsense reasoning (CR) decides whether a neural machine translation (NMT) model can move beyond pattern recognition. Despite the rapid advancement of NMT and the use of pretraining to enhance NMT models, research on CR in NMT is still in its infancy, leaving much to be explored in terms of effectively training NMT models with high CR abilities and devising accurate automatic evaluation metrics. This paper presents a comprehensive study aimed at expanding the understanding of CR in NMT.For the training, we confirm the effectiveness of incorporating pretrained knowledge into NMT models and subsequently utilizing these models as robust testbeds for investigating CR in NMT. For the evaluation, we propose a novel entity-aware evaluation method that takes into account both the NMT candidate and important entities in the candidate, which is more aligned with human judgement. Based on the strong testbed and evaluation methods, we identify challenges in training NMT models with high CR abilities and suggest directions for further unlabeled data utilization and model design. We hope that our methods and findings will contribute to advancing the research of CR in NMT. Source data, code and scripts are freely available at \url{https://github.com/YutongWang1216/CR-NMT}.",
}
| The ability of commonsense reasoning (CR) decides whether a neural machine translation (NMT) model can move beyond pattern recognition. Despite the rapid advancement of NMT and the use of pretraining to enhance NMT models, research on CR in NMT is still in its infancy, leaving much to be explored in terms of effectively training NMT models with high CR abilities and devising accurate automatic evaluation metrics. This paper presents a comprehensive study aimed at expanding the understanding of CR in NMT. For the training, we confirm the effectiveness of incorporating pretrained knowledge into NMT models and subsequently utilizing these models as robust testbeds for investigating CR in NMT. For the evaluation, we propose a novel entity-aware evaluation method that takes into account both the NMT candidate and important entities in the candidate, which is more aligned with human judgement. Based on the strong testbed and evaluation methods, we identify challenges in training NMT models with high CR abilities and suggest directions for further unlabeled data utilization and model design. We hope that our methods and findings will contribute to advancing the research of CR in NMT. Source data, code and scripts are freely available at \url{https://github.com/YutongWang1216/CR-NMT}. | [
"Liu, Xuebo",
"Wang, Yutong",
"Wong, Derek F.",
"Zhan, Runzhe",
"Yu, Liangxuan",
"Zhang, Min"
] | Revisiting Commonsense Reasoning in Machine Translation: Training, Evaluation and Challenge | acl-long.866 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
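The entity-aware evaluation idea above -- scoring both the candidate as a whole and its important entities -- can be shaped as a weighted blend. The token-overlap stand-in and the 0.5 weight below are our assumptions purely to show the structure; the paper's metric is more sophisticated.

```python
def entity_aware_score(candidate, reference, entities, alpha=0.5):
    """Blend a crude sentence-level overlap score with an entity recall
    term, weighting important entities in the candidate explicitly."""
    cand, ref = set(candidate.lower().split()), set(reference.lower().split())
    overlap = len(cand & ref) / max(len(ref), 1)
    entity_recall = sum(e.lower() in candidate.lower() for e in entities) / max(len(entities), 1)
    return alpha * overlap + (1 - alpha) * entity_recall

print(entity_aware_score("the bat flew out of the cave",
                         "the bat flew from the cave",
                         entities=["bat", "cave"]))
```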
https://aclanthology.org/2023.acl-long.867.bib | https://aclanthology.org/2023.acl-long.867/ | @inproceedings{mei-etal-2023-notable,
title = "{NOTABLE}: Transferable Backdoor Attacks Against Prompt-based {NLP} Models",
author = "Mei, Kai and
Li, Zheng and
Wang, Zhenting and
Zhang, Yang and
Ma, Shiqing",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.867",
doi = "10.18653/v1/2023.acl-long.867",
pages = "15551--15565",
abstract = "Prompt-based learning is vulnerable to backdoor attacks. Existing backdoor attacks against prompt-based models consider injecting backdoors into the entire embedding layers or word embedding vectors. Such attacks can be easily affected by retraining on downstream tasks and with different prompting strategies, limiting the transferability of backdoor attacks. In this work, we propose transferable backdoor attacks against prompt-based models, called NOTABLE, which is independent of downstream tasks and prompting strategies. Specifically, NOTABLE injects backdoors into the encoders of PLMs by utilizing an adaptive verbalizer to bind triggers to specific words (i.e., anchors). It activates the backdoor by pasting input with triggers to reach adversary-desired anchors, achieving independence from downstream tasks and prompting strategies. We conduct experiments on six NLP tasks, three popular models, and three prompting strategies. Empirical results show that NOTABLE achieves superior attack performance (i.e., attack success rate over 90{\%} on all the datasets), and outperforms two state-of-the-art baselines. Evaluations on three defenses show the robustness of NOTABLE. Our code can be found at \url{https://github.com/RU-System-Software-and-Security/Notable}.",
}
| Prompt-based learning is vulnerable to backdoor attacks. Existing backdoor attacks against prompt-based models consider injecting backdoors into the entire embedding layers or word embedding vectors. Such attacks can be easily affected by retraining on downstream tasks and with different prompting strategies, limiting the transferability of backdoor attacks. In this work, we propose transferable backdoor attacks against prompt-based models, called NOTABLE, which is independent of downstream tasks and prompting strategies. Specifically, NOTABLE injects backdoors into the encoders of PLMs by utilizing an adaptive verbalizer to bind triggers to specific words (i.e., anchors). It activates the backdoor by pasting input with triggers to reach adversary-desired anchors, achieving independence from downstream tasks and prompting strategies. We conduct experiments on six NLP tasks, three popular models, and three prompting strategies. Empirical results show that NOTABLE achieves superior attack performance (i.e., attack success rate over 90{\%} on all the datasets), and outperforms two state-of-the-art baselines. Evaluations on three defenses show the robustness of NOTABLE. Our code can be found at \url{https://github.com/RU-System-Software-and-Security/Notable}. | [
"Mei, Kai",
"Li, Zheng",
"Wang, Zhenting",
"Zhang, Yang",
"Ma, Shiqing"
] | NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models | acl-long.867 | Poster | 2305.17826 | [
"https://github.com/ru-system-software-and-security/notable"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.868.bib | https://aclanthology.org/2023.acl-long.868/ | @inproceedings{wadhwa-etal-2023-revisiting,
title = "Revisiting Relation Extraction in the era of Large Language Models",
author = "Wadhwa, Somin and
Amir, Silvio and
Wallace, Byron",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.868",
doi = "10.18653/v1/2023.acl-long.868",
pages = "15566--15589",
abstract = "Relation extraction (RE) is the core NLP task of inferring semantic relationships between entities from text. Standard supervised RE techniques entail training modules to tag tokens comprising entity spans and then predict the relationship between them. Recent work has instead treated the problem as a sequence-to-sequence task, linearizing relations between entities as target strings to be generated conditioned on the input. Here we push the limits of this approach, using larger language models (GPT-3 and Flan-T5 large) than considered in prior work and evaluating their performance on standard RE tasks under varying levels of supervision. We address issues inherent to evaluating generative approaches to RE by doing human evaluations, in lieu of relying on exact matching. Under this refined evaluation, we find that: (1) Few-shot prompting with GPT-3 achieves near SOTA performance, i.e., roughly equivalent to existing fully supervised models; (2) Flan-T5 is not as capable in the few-shot setting, but supervising and fine-tuning it with Chain-of-Thought (CoT) style explanations (generated via GPT-3) yields SOTA results. We release this model as a new baseline for RE tasks.",
}
| Relation extraction (RE) is the core NLP task of inferring semantic relationships between entities from text. Standard supervised RE techniques entail training modules to tag tokens comprising entity spans and then predict the relationship between them. Recent work has instead treated the problem as a sequence-to-sequence task, linearizing relations between entities as target strings to be generated conditioned on the input. Here we push the limits of this approach, using larger language models (GPT-3 and Flan-T5 large) than considered in prior work and evaluating their performance on standard RE tasks under varying levels of supervision. We address issues inherent to evaluating generative approaches to RE by doing human evaluations, in lieu of relying on exact matching. Under this refined evaluation, we find that: (1) Few-shot prompting with GPT-3 achieves near SOTA performance, i.e., roughly equivalent to existing fully supervised models; (2) Flan-T5 is not as capable in the few-shot setting, but supervising and fine-tuning it with Chain-of-Thought (CoT) style explanations (generated via GPT-3) yields SOTA results. We release this model as a new baseline for RE tasks. | [
"Wadhwa, Somin",
"Amir, Silvio",
"Wallace, Byron"
] | Revisiting Relation Extraction in the era of Large Language Models | acl-long.868 | Poster | 2305.05003 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.869.bib | https://aclanthology.org/2023.acl-long.869/ | @inproceedings{zhao-etal-2023-pre,
title = "Pre-trained Language Models Can be Fully Zero-Shot Learners",
author = "Zhao, Xuandong and
Ouyang, Siqi and
Yu, Zhiguo and
Wu, Ming and
Li, Lei",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.869",
doi = "10.18653/v1/2023.acl-long.869",
pages = "15590--15606",
abstract = "How can we extend a pre-trained model to many language understanding tasks, without labeled or additional unlabeled data? Pre-trained language models (PLMs) have been effective for a wide range of NLP tasks. However, existing approaches either require fine-tuning on downstream labeled datasets or manually constructing proper prompts. In this paper, we propose nonparametric prompting PLM (NPPrompt) for fully zero-shot language understanding. Unlike previous methods, NPPrompt uses only pre-trained language models and does not require any labeled data or additional raw corpus for further fine-tuning, nor does it rely on humans to construct a comprehensive set of prompt label words. We evaluate NPPrompt against previous major few-shot and zero-shot learning methods on diverse NLP tasks: including text classification, text entailment, similar text retrieval, paraphrasing, and multiple-choice question answering. Experimental results demonstrate that our NPPrompt outperforms the previous best fully zero-shot method by big margins, with absolute gains of 12.8{\%} in accuracy on text classification and 15.6{\%} on the GLUE benchmark. Our source code is available at \url{https://anonymous.4open.science/r/NPPrompt}.",
}
| How can we extend a pre-trained model to many language understanding tasks, without labeled or additional unlabeled data? Pre-trained language models (PLMs) have been effective for a wide range of NLP tasks. However, existing approaches either require fine-tuning on downstream labeled datasets or manually constructing proper prompts. In this paper, we propose nonparametric prompting PLM (NPPrompt) for fully zero-shot language understanding. Unlike previous methods, NPPrompt uses only pre-trained language models and does not require any labeled data or additional raw corpus for further fine-tuning, nor does it rely on humans to construct a comprehensive set of prompt label words. We evaluate NPPrompt against previous major few-shot and zero-shot learning methods on diverse NLP tasks: including text classification, text entailment, similar text retrieval, paraphrasing, and multiple-choice question answering. Experimental results demonstrate that our NPPrompt outperforms the previous best fully zero-shot method by big margins, with absolute gains of 12.8{\%} in accuracy on text classification and 15.6{\%} on the GLUE benchmark. Our source code is available at \url{https://anonymous.4open.science/r/NPPrompt}. | [
"Zhao, Xu",
"ong",
"Ouyang, Siqi",
"Yu, Zhiguo",
"Wu, Ming",
"Li, Lei"
] | Pre-trained Language Models Can be Fully Zero-Shot Learners | acl-long.869 | Oral | 2212.06950 | [
"https://github.com/xuandongzhao/npprompt"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.870.bib | https://aclanthology.org/2023.acl-long.870/ | @inproceedings{chiang-lee-2023-large,
title = "Can Large Language Models Be an Alternative to Human Evaluations?",
author = "Chiang, Cheng-Han and
Lee, Hung-yi",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.870",
doi = "10.18653/v1/2023.acl-long.870",
pages = "15607--15631",
abstract = "Human evaluation is indispensable and inevitable for assessing the quality of texts generated by machine learning models or written by humans. However, human evaluation is very difficult to reproduce and its quality is notoriously unstable, hindering fair comparisons among different natural language processing (NLP) models and algorithms. Recently, large language models (LLMs) have demonstrated exceptional performance on unseen tasks when only the task instructions are provided. In this paper, we explore if such an ability of the LLMs can be used as an alternative to human evaluation. We present the LLMs with the exact same instructions, samples to be evaluated, and questions used to conduct human evaluation, and then ask the LLMs to generate responses to those questions; we dub this LLM evaluation. We use human evaluation and LLM evaluation to evaluate the texts in two NLP tasks: open-ended story generation and adversarial attacks. We show that the result of LLM evaluation is consistent with the results obtained by expert human evaluation: the texts rated higher by human experts are also rated higher by the LLMs.We also find that the results of LLM evaluation are stable over different formatting of the task instructions and the sampling algorithm used to generate the answer. We are the first to show the potential of using LLMs to assess the quality of texts and discuss the limitations and ethical considerations of LLM evaluation.",
}
| Human evaluation is indispensable and inevitable for assessing the quality of texts generated by machine learning models or written by humans. However, human evaluation is very difficult to reproduce and its quality is notoriously unstable, hindering fair comparisons among different natural language processing (NLP) models and algorithms. Recently, large language models (LLMs) have demonstrated exceptional performance on unseen tasks when only the task instructions are provided. In this paper, we explore if such an ability of the LLMs can be used as an alternative to human evaluation. We present the LLMs with the exact same instructions, samples to be evaluated, and questions used to conduct human evaluation, and then ask the LLMs to generate responses to those questions; we dub this LLM evaluation. We use human evaluation and LLM evaluation to evaluate the texts in two NLP tasks: open-ended story generation and adversarial attacks. We show that the result of LLM evaluation is consistent with the results obtained by expert human evaluation: the texts rated higher by human experts are also rated higher by the LLMs. We also find that the results of LLM evaluation are stable over different formatting of the task instructions and the sampling algorithm used to generate the answer. We are the first to show the potential of using LLMs to assess the quality of texts and discuss the limitations and ethical considerations of LLM evaluation. | [
"Chiang, Cheng-Han",
"Lee, Hung-yi"
] | Can Large Language Models Be an Alternative to Human Evaluations? | acl-long.870 | Poster | 2305.01937 | [
""
] | https://huggingface.co/papers/2305.01937 | 0 | 2 | 0 | 2 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-long.871.bib | https://aclanthology.org/2023.acl-long.871/ | @inproceedings{mai-etal-2023-hypermixer,
title = "{H}yper{M}ixer: An {MLP}-based Low Cost Alternative to Transformers",
author = "Mai, Florian and
Pannatier, Arnaud and
Fehr, Fabio and
Chen, Haolin and
Marelli, Francois and
Fleuret, Francois and
Henderson, James",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.871",
doi = "10.18653/v1/2023.acl-long.871",
pages = "15632--15654",
abstract = "Transformer-based architectures are the model of choice for natural language understanding, but they come at a significant cost, as they have quadratic complexity in the input length, require a lot of training data, and can be difficult to tune. In the pursuit of lower costs, we investigate simple MLP-based architectures. We find that existing architectures such as MLPMixer, which achieves token mixing through a static MLP applied to each feature independently, are too detached from the inductive biases required for natural language understanding. In this paper, we propose a simple variant, HyperMixer, which forms the token mixing MLP dynamically using hypernetworks. Empirically, we demonstrate that our model performs better than alternative MLP-based models, and on par with Transformers. In contrast to Transformers, HyperMixer achieves these results at substantially lower costs in terms of processing time, training data, and hyperparameter tuning.",
}
| Transformer-based architectures are the model of choice for natural language understanding, but they come at a significant cost, as they have quadratic complexity in the input length, require a lot of training data, and can be difficult to tune. In the pursuit of lower costs, we investigate simple MLP-based architectures. We find that existing architectures such as MLPMixer, which achieves token mixing through a static MLP applied to each feature independently, are too detached from the inductive biases required for natural language understanding. In this paper, we propose a simple variant, HyperMixer, which forms the token mixing MLP dynamically using hypernetworks. Empirically, we demonstrate that our model performs better than alternative MLP-based models, and on par with Transformers. In contrast to Transformers, HyperMixer achieves these results at substantially lower costs in terms of processing time, training data, and hyperparameter tuning. | [
"Mai, Florian",
"Pannatier, Arnaud",
"Fehr, Fabio",
"Chen, Haolin",
"Marelli, Francois",
"Fleuret, Francois",
"Henderson, James"
] | HyperMixer: An MLP-based Low Cost Alternative to Transformers | acl-long.871 | Poster | 2203.03691 | [
"https://github.com/idiap/hypermixing"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.872.bib | https://aclanthology.org/2023.acl-long.872/ | @inproceedings{inaguma-etal-2023-unity,
title = "{U}nit{Y}: Two-pass Direct Speech-to-speech Translation with Discrete Units",
author = "Inaguma, Hirofumi and
Popuri, Sravya and
Kulikov, Ilia and
Chen, Peng-Jen and
Wang, Changhan and
Chung, Yu-An and
Tang, Yun and
Lee, Ann and
Watanabe, Shinji and
Pino, Juan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.872",
doi = "10.18653/v1/2023.acl-long.872",
pages = "15655--15680",
abstract = "Direct speech-to-speech translation (S2ST), in which all components can be optimized jointly, is advantageous over cascaded approaches to achieve fast inference with a simplified pipeline. We present a novel two-pass direct S2ST architecture, UnitY, which first generates textual representations and predicts discrete acoustic units subsequently. We enhance the model performance by subword prediction in the first-pass decoder, advanced two-pass decoder architecture design and search strategy, and better training regularization. To leverage large amounts of unlabeled text data, we pre-train the first-pass text decoder based on the self-supervised denoising auto-encoding task. Experimental evaluations on benchmark datasets at various data scales demonstrate that UnitY outperforms a single-pass speech-to-unit translation model by 2.5-4.2 ASR-BLEU with 2.83x decoding speed-up. We show that the proposed methods boost the performance even when predicting spectrogram in the second pass. However, predicting discrete units achieves 2.51x decoding speed-up compared to that case.",
}
| Direct speech-to-speech translation (S2ST), in which all components can be optimized jointly, is advantageous over cascaded approaches to achieve fast inference with a simplified pipeline. We present a novel two-pass direct S2ST architecture, UnitY, which first generates textual representations and predicts discrete acoustic units subsequently. We enhance the model performance by subword prediction in the first-pass decoder, advanced two-pass decoder architecture design and search strategy, and better training regularization. To leverage large amounts of unlabeled text data, we pre-train the first-pass text decoder based on the self-supervised denoising auto-encoding task. Experimental evaluations on benchmark datasets at various data scales demonstrate that UnitY outperforms a single-pass speech-to-unit translation model by 2.5-4.2 ASR-BLEU with 2.83x decoding speed-up. We show that the proposed methods boost the performance even when predicting spectrogram in the second pass. However, predicting discrete units achieves 2.51x decoding speed-up compared to that case. | [
"Inaguma, Hirofumi",
"Popuri, Sravya",
"Kulikov, Ilia",
"Chen, Peng-Jen",
"Wang, Changhan",
"Chung, Yu-An",
"Tang, Yun",
"Lee, Ann",
"Watanabe, Shinji",
"Pino, Juan"
] | UnitY: Two-pass Direct Speech-to-speech Translation with Discrete Units | acl-long.872 | Poster | 2212.08055 | [
"https://github.com/facebookresearch/fairseq"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.873.bib | https://aclanthology.org/2023.acl-long.873/ | @inproceedings{wu-etal-2023-estimating,
title = "Estimating the Uncertainty in Emotion Attributes using Deep Evidential Regression",
author = "Wu, Wen and
Zhang, Chao and
Woodland, Philip",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.873",
doi = "10.18653/v1/2023.acl-long.873",
pages = "15681--15695",
abstract = "In automatic emotion recognition (AER), labels assigned by different human annotators to the same utterance are often inconsistent due to the inherent complexity of emotion and the subjectivity of perception. Though deterministic labels generated by averaging or voting are often used as the ground truth, it ignores the intrinsic uncertainty revealed by the inconsistent labels. This paper proposes a Bayesian approach, deep evidential emotion regression (DEER), to estimate the uncertainty in emotion attributes. Treating the emotion attribute labels of an utterance as samples drawn from an unknown Gaussian distribution, DEER places an utterance-specific normal-inverse gamma prior over the Gaussian likelihood and predicts its hyper-parameters using a deep neural network model. It enables a joint estimation of emotion attributes along with the aleatoric and epistemic uncertainties. AER experiments on the widely used MSP-Podcast and IEMOCAP datasets showed DEER produced state-of-the-art results for both the mean values and the distribution of emotion attributes.",
}
| In automatic emotion recognition (AER), labels assigned by different human annotators to the same utterance are often inconsistent due to the inherent complexity of emotion and the subjectivity of perception. Though deterministic labels generated by averaging or voting are often used as the ground truth, this ignores the intrinsic uncertainty revealed by the inconsistent labels. This paper proposes a Bayesian approach, deep evidential emotion regression (DEER), to estimate the uncertainty in emotion attributes. Treating the emotion attribute labels of an utterance as samples drawn from an unknown Gaussian distribution, DEER places an utterance-specific normal-inverse gamma prior over the Gaussian likelihood and predicts its hyper-parameters using a deep neural network model. It enables a joint estimation of emotion attributes along with the aleatoric and epistemic uncertainties. AER experiments on the widely used MSP-Podcast and IEMOCAP datasets showed DEER produced state-of-the-art results for both the mean values and the distribution of emotion attributes. | [
"Wu, Wen",
"Zhang, Chao",
"Woodl",
", Philip"
] | Estimating the Uncertainty in Emotion Attributes using Deep Evidential Regression | acl-long.873 | Poster | 2306.06760 | [
"https://github.com/w-wu/deer"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.874.bib | https://aclanthology.org/2023.acl-long.874/ | @inproceedings{liu-strube-2023-annotation,
title = "Annotation-Inspired Implicit Discourse Relation Classification with Auxiliary Discourse Connective Generation",
author = "Liu, Wei and
Strube, Michael",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.874",
doi = "10.18653/v1/2023.acl-long.874",
pages = "15696--15712",
abstract = "Implicit discourse relation classification is a challenging task due to the absence of discourse connectives. To overcome this issue, we design an end-to-end neural model to explicitly generate discourse connectives for the task, inspired by the annotation process of PDTB. Specifically, our model jointly learns to generate discourse connectives between arguments and predict discourse relations based on the arguments and the generated connectives. To prevent our relation classifier from being misled by poor connectives generated at the early stage of training while alleviating the discrepancy between training and inference, we adopt Scheduled Sampling to the joint learning. We evaluate our method on three benchmarks, PDTB 2.0, PDTB 3.0, and PCC. Results show that our joint model significantly outperforms various baselines on three datasets, demonstrating its superiority for the task.",
}
| Implicit discourse relation classification is a challenging task due to the absence of discourse connectives. To overcome this issue, we design an end-to-end neural model to explicitly generate discourse connectives for the task, inspired by the annotation process of PDTB. Specifically, our model jointly learns to generate discourse connectives between arguments and predict discourse relations based on the arguments and the generated connectives. To prevent our relation classifier from being misled by poor connectives generated at the early stage of training while alleviating the discrepancy between training and inference, we adopt Scheduled Sampling to the joint learning. We evaluate our method on three benchmarks, PDTB 2.0, PDTB 3.0, and PCC. Results show that our joint model significantly outperforms various baselines on three datasets, demonstrating its superiority for the task. | [
"Liu, Wei",
"Strube, Michael"
] | Annotation-Inspired Implicit Discourse Relation Classification with Auxiliary Discourse Connective Generation | acl-long.874 | Poster | 2306.06480 | [
"https://github.com/liuwei1206/connrel"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.875.bib | https://aclanthology.org/2023.acl-long.875/ | @inproceedings{xiao-etal-2023-plug,
title = "Plug-and-Play Document Modules for Pre-trained Models",
author = "Xiao, Chaojun and
Zhang, Zhengyan and
Han, Xu and
Chan, Chi-Min and
Lin, Yankai and
Liu, Zhiyuan and
Li, Xiangyang and
Li, Zhonghua and
Cao, Zhao and
Sun, Maosong",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.875",
doi = "10.18653/v1/2023.acl-long.875",
pages = "15713--15729",
abstract = "Large-scale pre-trained models (PTMs) have been widely used in document-oriented NLP tasks, such as question answering. However, the encoding-task coupling requirement results in the repeated encoding of the same documents for different tasks and queries, which is highly computationally inefficient. To this end, we target to decouple document encoding from downstream tasks, and propose to represent each document as a plug-and-play document module, i.e., a document plugin, for PTMs (PlugD). By inserting document plugins into the backbone PTM for downstream tasks, we can encode a document one time to handle multiple tasks, which is more efficient than conventional encoding-task coupling methods that simultaneously encode documents and input queries using task-specific encoders. Extensive experiments on 8 datasets of 4 typical NLP tasks show that PlugD enables models to encode documents once and for all across different scenarios. Especially, PlugD can save 69{\%} computational costs while achieving comparable performance to state-of-the-art encoding-task coupling methods. Additionally, we show that PlugD can serve as an effective post-processing way to inject knowledge into task-specific models, improving model performance without any additional model training. Our code and checkpoints can be found in \url{https://github.com/thunlp/Document-Plugin}.",
}
| Large-scale pre-trained models (PTMs) have been widely used in document-oriented NLP tasks, such as question answering. However, the encoding-task coupling requirement results in the repeated encoding of the same documents for different tasks and queries, which is highly computationally inefficient. To this end, we target to decouple document encoding from downstream tasks, and propose to represent each document as a plug-and-play document module, i.e., a document plugin, for PTMs (PlugD). By inserting document plugins into the backbone PTM for downstream tasks, we can encode a document one time to handle multiple tasks, which is more efficient than conventional encoding-task coupling methods that simultaneously encode documents and input queries using task-specific encoders. Extensive experiments on 8 datasets of 4 typical NLP tasks show that PlugD enables models to encode documents once and for all across different scenarios. Especially, PlugD can save 69{\%} computational costs while achieving comparable performance to state-of-the-art encoding-task coupling methods. Additionally, we show that PlugD can serve as an effective post-processing way to inject knowledge into task-specific models, improving model performance without any additional model training. Our code and checkpoints can be found in \url{https://github.com/thunlp/Document-Plugin}. | [
"Xiao, Chaojun",
"Zhang, Zhengyan",
"Han, Xu",
"Chan, Chi-Min",
"Lin, Yankai",
"Liu, Zhiyuan",
"Li, Xiangyang",
"Li, Zhonghua",
"Cao, Zhao",
"Sun, Maosong"
] | Plug-and-Play Document Modules for Pre-trained Models | acl-long.875 | Poster | 2305.17660 | [
"https://github.com/thunlp/document-plugin"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.876.bib | https://aclanthology.org/2023.acl-long.876/ | @inproceedings{xie-lukasiewicz-2023-empirical,
title = "An Empirical Analysis of Parameter-Efficient Methods for Debiasing Pre-Trained Language Models",
author = "Xie, Zhongbin and
Lukasiewicz, Thomas",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.876",
doi = "10.18653/v1/2023.acl-long.876",
pages = "15730--15745",
abstract = "The increasingly large size of modern pre-trained language models not only makes them inherit more human-like biases from the training corpora, but also makes it computationally expensive to mitigate such biases. In this paper, we investigate recent parameter-efficient methods in combination with counterfactual data augmentation (CDA) for bias mitigation. We conduct extensive experiments with prefix tuning, prompt tuning, and adapter tuning on different language models and bias types to evaluate their debiasing performance and abilities to preserve the internal knowledge of a pre-trained model. We find that the parameter-efficient methods (i) are effective in mitigating gender bias, where adapter tuning is consistently the most effective one and prompt tuning is more suitable for GPT-2 than BERT, (ii) areless effective when it comes to racial and religious bias, which may be attributed to the limitations of CDA, and (iii) can perform similarly to or sometimes better than full fine-tuning with improved time and memory efficiency, as well as maintain the internal knowledge in BERT and GPT-2, evaluated via fact retrieval and downstream fine-tuning.",
}
| The increasingly large size of modern pre-trained language models not only makes them inherit more human-like biases from the training corpora, but also makes it computationally expensive to mitigate such biases. In this paper, we investigate recent parameter-efficient methods in combination with counterfactual data augmentation (CDA) for bias mitigation. We conduct extensive experiments with prefix tuning, prompt tuning, and adapter tuning on different language models and bias types to evaluate their debiasing performance and abilities to preserve the internal knowledge of a pre-trained model. We find that the parameter-efficient methods (i) are effective in mitigating gender bias, where adapter tuning is consistently the most effective one and prompt tuning is more suitable for GPT-2 than BERT, (ii) are less effective when it comes to racial and religious bias, which may be attributed to the limitations of CDA, and (iii) can perform similarly to or sometimes better than full fine-tuning with improved time and memory efficiency, as well as maintain the internal knowledge in BERT and GPT-2, evaluated via fact retrieval and downstream fine-tuning. | [
"Xie, Zhongbin",
"Lukasiewicz, Thomas"
] | An Empirical Analysis of Parameter-Efficient Methods for Debiasing Pre-Trained Language Models | acl-long.876 | Poster | 2306.04067 | [
"https://github.com/x-zb/pedb"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.877.bib | https://aclanthology.org/2023.acl-long.877/ | @inproceedings{wang-etal-2023-two,
title = "Two-Stage Fine-Tuning for Improved Bias and Variance for Large Pretrained Language Models",
author = "Wang, Lijing and
Li, Yingya and
Miller, Timothy and
Bethard, Steven and
Savova, Guergana",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.877",
doi = "10.18653/v1/2023.acl-long.877",
pages = "15746--15761",
abstract = "The bias-variance tradeoff is the idea that learning methods need to balance model complexity with data size to minimize both under-fitting and over-fitting. Recent empirical work and theoretical analysis with over-parameterized neural networks challenges the classic bias-variance trade-off notion suggesting that no such trade-off holds: as the width of the network grows, bias monotonically decreases while variance initially increases followed by a decrease. In this work, we first provide a variance decomposition-based justification criteria to examine whether large pretrained neural models in a fine-tuning setting are generalizable enough to have low bias and variance. We then perform theoretical and empirical analysis using ensemble methods explicitly designed to decrease variance due to optimization. This results in essentially a two-stage fine-tuning algorithm that first ratchets down bias and variance iteratively, and then uses a selected fixed-bias model to further reduce variance due to optimization by ensembling. We also analyze the nature of variance change with the ensemble size in low- and high-resource classes. Empirical results show that this two-stage method obtains strong results on SuperGLUE tasks and clinical information extraction tasks. Code and settings are available: \url{https://github.com/christa60/bias-var-fine-tuning-plms.git}",
}
| The bias-variance tradeoff is the idea that learning methods need to balance model complexity with data size to minimize both under-fitting and over-fitting. Recent empirical work and theoretical analysis with over-parameterized neural networks challenge the classic bias-variance trade-off notion, suggesting that no such trade-off holds: as the width of the network grows, bias monotonically decreases while variance initially increases followed by a decrease. In this work, we first provide a variance decomposition-based justification criterion to examine whether large pretrained neural models in a fine-tuning setting are generalizable enough to have low bias and variance. We then perform theoretical and empirical analysis using ensemble methods explicitly designed to decrease variance due to optimization. This results in essentially a two-stage fine-tuning algorithm that first ratchets down bias and variance iteratively, and then uses a selected fixed-bias model to further reduce variance due to optimization by ensembling. We also analyze the nature of variance change with the ensemble size in low- and high-resource classes. Empirical results show that this two-stage method obtains strong results on SuperGLUE tasks and clinical information extraction tasks. Code and settings are available: \url{https://github.com/christa60/bias-var-fine-tuning-plms.git} | [
"Wang, Lijing",
"Li, Yingya",
"Miller, Timothy",
"Bethard, Steven",
"Savova, Guergana"
] | Two-Stage Fine-Tuning for Improved Bias and Variance for Large Pretrained Language Models | acl-long.877 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.878.bib | https://aclanthology.org/2023.acl-long.878/ | @inproceedings{ramesh-etal-2023-comparative,
title = "A Comparative Study on the Impact of Model Compression Techniques on Fairness in Language Models",
author = "Ramesh, Krithika and
Chavan, Arnav and
Pandit, Shrey and
Sitaram, Sunayana",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.878",
doi = "10.18653/v1/2023.acl-long.878",
pages = "15762--15782",
abstract = "Compression techniques for deep learning have become increasingly popular, particularly in settings where latency and memory constraints are imposed. Several methods, such as pruning, distillation, and quantization, have been adopted for compressing models, each providing distinct advantages. However, existing literature demonstrates that compressing deep learning models could affect their fairness. Our analysis involves a comprehensive evaluation of pruned, distilled, and quantized language models, which we benchmark across a range of intrinsic and extrinsic metrics for measuring bias in text classification. We also investigate the impact of using multilingual models and evaluation measures. Our findings highlight the significance of considering both the pre-trained model and the chosen compression strategy in developing equitable language technologies. The results also indicate that compression strategies can have an adverse effect on fairness measures.",
}
| Compression techniques for deep learning have become increasingly popular, particularly in settings where latency and memory constraints are imposed. Several methods, such as pruning, distillation, and quantization, have been adopted for compressing models, each providing distinct advantages. However, existing literature demonstrates that compressing deep learning models could affect their fairness. Our analysis involves a comprehensive evaluation of pruned, distilled, and quantized language models, which we benchmark across a range of intrinsic and extrinsic metrics for measuring bias in text classification. We also investigate the impact of using multilingual models and evaluation measures. Our findings highlight the significance of considering both the pre-trained model and the chosen compression strategy in developing equitable language technologies. The results also indicate that compression strategies can have an adverse effect on fairness measures. | [
"Ramesh, Krithika",
"Chavan, Arnav",
"P",
"it, Shrey",
"Sitaram, Sunayana"
] | A Comparative Study on the Impact of Model Compression Techniques on Fairness in Language Models | acl-long.878 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.879.bib | https://aclanthology.org/2023.acl-long.879/ | @inproceedings{seonwoo-etal-2023-ranking,
title = "Ranking-Enhanced Unsupervised Sentence Representation Learning",
author = "Seonwoo, Yeon and
Wang, Guoyin and
Seo, Changmin and
Choudhary, Sajal and
Li, Jiwei and
Li, Xiang and
Xu, Puyang and
Park, Sunghyun and
Oh, Alice",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.879",
doi = "10.18653/v1/2023.acl-long.879",
pages = "15783--15798",
abstract = "Unsupervised sentence representation learning has progressed through contrastive learning and data augmentation methods such as dropout masking. Despite this progress, sentence encoders are still limited to using only an input sentence when predicting its semantic vector. In this work, we show that the semantic meaning of a sentence is also determined by nearest-neighbor sentences that are similar to the input sentence. Based on this finding, we propose a novel unsupervised sentence encoder, RankEncoder. RankEncoder predicts the semantic vector of an input sentence by leveraging its relationship with other sentences in an external corpus, as well as the input sentence itself. We evaluate RankEncoder on semantic textual benchmark datasets. From the experimental results, we verify that 1) RankEncoder achieves 80.07{\%} Spearman{'}s correlation, a 1.1{\%} absolute improvement compared to the previous state-of-the-art performance, 2) RankEncoder is universally applicable to existing unsupervised sentence embedding methods, and 3) RankEncoder is specifically effective for predicting the similarity scores of similar sentence pairs.",
}
| Unsupervised sentence representation learning has progressed through contrastive learning and data augmentation methods such as dropout masking. Despite this progress, sentence encoders are still limited to using only an input sentence when predicting its semantic vector. In this work, we show that the semantic meaning of a sentence is also determined by nearest-neighbor sentences that are similar to the input sentence. Based on this finding, we propose a novel unsupervised sentence encoder, RankEncoder. RankEncoder predicts the semantic vector of an input sentence by leveraging its relationship with other sentences in an external corpus, as well as the input sentence itself. We evaluate RankEncoder on semantic textual benchmark datasets. From the experimental results, we verify that 1) RankEncoder achieves 80.07{\%} Spearman{'}s correlation, a 1.1{\%} absolute improvement compared to the previous state-of-the-art performance, 2) RankEncoder is universally applicable to existing unsupervised sentence embedding methods, and 3) RankEncoder is specifically effective for predicting the similarity scores of similar sentence pairs. | [
"Seonwoo, Yeon",
"Wang, Guoyin",
"Seo, Changmin",
"Choudhary, Sajal",
"Li, Jiwei",
"Li, Xiang",
"Xu, Puyang",
"Park, Sunghyun",
"Oh, Alice"
] | Ranking-Enhanced Unsupervised Sentence Representation Learning | acl-long.879 | Poster | 2209.04333 | [
"https://github.com/yeonsw/rankencoder"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.880.bib | https://aclanthology.org/2023.acl-long.880/ | @inproceedings{skitalinskaya-wachsmuth-2023-revise,
title = "To Revise or Not to Revise: Learning to Detect Improvable Claims for Argumentative Writing Support",
author = "Skitalinskaya, Gabriella and
Wachsmuth, Henning",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.880",
doi = "10.18653/v1/2023.acl-long.880",
pages = "15799--15816",
abstract = "Optimizing the phrasing of argumentative text is crucial in higher education and professional development. However, assessing whether and how the different claims in a text should be revised is a hard task, especially for novice writers. In this work, we explore the main challenges to identifying argumentative claims in need of specific revisions. By learning from collaborative editing behaviors in online debates, we seek to capture implicit revision patterns in order to develop approaches aimed at guiding writers in how to further improve their arguments. We systematically compare the ability of common word embedding models to capture the differences between different versions of the same text, and we analyze their impact on various types of writing issues. To deal with the noisy nature of revision-based corpora, we propose a new sampling strategy based on revision distance. Opposed to approaches from prior work, such sampling can be done without employing additional annotations and judgments. Moreover, we provide evidence that using contextual information and domain knowledge can further improve prediction results. How useful a certain type of context is, depends on the issue the claim is suffering from, though.",
}
| Optimizing the phrasing of argumentative text is crucial in higher education and professional development. However, assessing whether and how the different claims in a text should be revised is a hard task, especially for novice writers. In this work, we explore the main challenges to identifying argumentative claims in need of specific revisions. By learning from collaborative editing behaviors in online debates, we seek to capture implicit revision patterns in order to develop approaches aimed at guiding writers in how to further improve their arguments. We systematically compare the ability of common word embedding models to capture the differences between different versions of the same text, and we analyze their impact on various types of writing issues. To deal with the noisy nature of revision-based corpora, we propose a new sampling strategy based on revision distance. As opposed to approaches from prior work, such sampling can be done without employing additional annotations and judgments. Moreover, we provide evidence that using contextual information and domain knowledge can further improve prediction results. How useful a certain type of context is depends on the issue the claim is suffering from, though. | [
"Skitalinskaya, Gabriella",
"Wachsmuth, Henning"
] | To Revise or Not to Revise: Learning to Detect Improvable Claims for Argumentative Writing Support | acl-long.880 | Poster | 2305.16799 | [
"https://github.com/webis-de/acl-23"
] | https://huggingface.co/papers/2305.16799 | 0 | 0 | 0 | 2 | 1 | [
"gabski/deberta-suboptimal-claim-detection-with-parent-context",
"gabski/deberta-claim-improvement-suggestion-with-parent-context",
"gabski/deberta-claim-improvement-suggestion-with-thesis-context",
"gabski/deberta-suboptimal-claim-detection-with-thesis-context",
"gabski/deberta-claim-improvement-suggestion",
"gabski/deberta-suboptimal-claim-detection"
] | [] | [] |
https://aclanthology.org/2023.acl-long.881.bib | https://aclanthology.org/2023.acl-long.881/ | @inproceedings{mendes-etal-2023-human,
title = "Human-in-the-loop Evaluation for Early Misinformation Detection: A Case Study of {COVID}-19 Treatments",
author = "Mendes, Ethan and
Chen, Yang and
Xu, Wei and
Ritter, Alan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.881",
doi = "10.18653/v1/2023.acl-long.881",
pages = "15817--15835",
abstract = "We present a human-in-the-loop evaluation framework for fact-checking novel misinformation claims and identifying social media messages that support them. Our approach extracts check-worthy claims, which are aggregated and ranked for review. Stance classifiers are then used to identify tweets supporting novel misinformation claims, which are further reviewed to determine whether they violate relevant policies. To demonstrate the feasibility of our approach, we develop a baseline system based on modern NLP methods for human-in-the-loop fact-checking in the domain of COVID-19 treatments. We make our data and detailed annotation guidelines available to support the evaluation of human-in-the-loop systems that identify novel misinformation directly from raw user-generated content.",
}
| We present a human-in-the-loop evaluation framework for fact-checking novel misinformation claims and identifying social media messages that support them. Our approach extracts check-worthy claims, which are aggregated and ranked for review. Stance classifiers are then used to identify tweets supporting novel misinformation claims, which are further reviewed to determine whether they violate relevant policies. To demonstrate the feasibility of our approach, we develop a baseline system based on modern NLP methods for human-in-the-loop fact-checking in the domain of COVID-19 treatments. We make our data and detailed annotation guidelines available to support the evaluation of human-in-the-loop systems that identify novel misinformation directly from raw user-generated content. | [
"Mendes, Ethan",
"Chen, Yang",
"Xu, Wei",
"Ritter, Alan"
] | Human-in-the-loop Evaluation for Early Misinformation Detection: A Case Study of COVID-19 Treatments | acl-long.881 | Poster | 2212.09683 | [
"https://github.com/ethanm88/hitl-evaluation-early-misinformation-detection"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.882.bib | https://aclanthology.org/2023.acl-long.882/ | @inproceedings{chanchani-huang-2023-composition,
title = "Composition-contrastive Learning for Sentence Embeddings",
author = "Chanchani, Sachin and
Huang, Ruihong",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.882",
doi = "10.18653/v1/2023.acl-long.882",
pages = "15836--15848",
abstract = "Vector representations of natural language are ubiquitous in search applications. Recently, various methods based on contrastive learning have been proposed to learn textual representations from unlabelled data; by maximizing alignment between minimally-perturbed embeddings of the same text, and encouraging a uniform distribution of embeddings across a broader corpus. Differently, we propose maximizing alignment between texts and a composition of their phrasal constituents. We consider several realizations of this objective and elaborate the impact on representations in each case. Experimental results on semantic textual similarity tasks show improvements over baselines that are comparable with state-of-the-art approaches. Moreover, this work is the first to do so without incurring costs in auxiliary training objectives or additional network parameters.",
}
| Vector representations of natural language are ubiquitous in search applications. Recently, various methods based on contrastive learning have been proposed to learn textual representations from unlabelled data; by maximizing alignment between minimally-perturbed embeddings of the same text, and encouraging a uniform distribution of embeddings across a broader corpus. Differently, we propose maximizing alignment between texts and a composition of their phrasal constituents. We consider several realizations of this objective and elaborate the impact on representations in each case. Experimental results on semantic textual similarity tasks show improvements over baselines that are comparable with state-of-the-art approaches. Moreover, this work is the first to do so without incurring costs in auxiliary training objectives or additional network parameters. | [
"Chanchani, Sachin",
"Huang, Ruihong"
] | Composition-contrastive Learning for Sentence Embeddings | acl-long.882 | Poster | 2307.07380 | [
"https://github.com/perceptiveshawty/compcse"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.883.bib | https://aclanthology.org/2023.acl-long.883/ | @inproceedings{shaham-etal-2023-causes,
title = "Causes and Cures for Interference in Multilingual Translation",
author = "Shaham, Uri and
Elbayad, Maha and
Goswami, Vedanuj and
Levy, Omer and
Bhosale, Shruti",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.883",
doi = "10.18653/v1/2023.acl-long.883",
pages = "15849--15863",
abstract = "Multilingual machine translation models can benefit from synergy between different language pairs, but also suffer from interference. While there is a growing number of sophisticated methods that aim to eliminate interference, our understanding of interference as a phenomenon is still limited. This work identifies the main factors that contribute to interference in multilingual machine translation. Through systematic experimentation, we find that interference (or synergy) are primarily determined by model size, data size, and the proportion of each language pair within the total dataset. We observe that substantial interference occurs mainly when the model is very small with respect to the available training data, and that using standard transformer configurations with less than one billion parameters largely alleviates interference and promotes synergy. Moreover, we show that tuning the sampling temperature to control the proportion of each language pair in the data is key to balancing the amount of interference between low and high resource language pairs effectively, and can lead to superior performance overall.",
}
| Multilingual machine translation models can benefit from synergy between different language pairs, but also suffer from interference. While there is a growing number of sophisticated methods that aim to eliminate interference, our understanding of interference as a phenomenon is still limited. This work identifies the main factors that contribute to interference in multilingual machine translation. Through systematic experimentation, we find that interference (or synergy) is primarily determined by model size, data size, and the proportion of each language pair within the total dataset. We observe that substantial interference occurs mainly when the model is very small with respect to the available training data, and that using standard transformer configurations with less than one billion parameters largely alleviates interference and promotes synergy. Moreover, we show that tuning the sampling temperature to control the proportion of each language pair in the data is key to balancing the amount of interference between low and high resource language pairs effectively, and can lead to superior performance overall. | [
"Shaham, Uri",
"Elbayad, Maha",
"Goswami, Vedanuj",
"Levy, Omer",
"Bhosale, Shruti"
] | Causes and Cures for Interference in Multilingual Translation | acl-long.883 | Oral | 2212.07530 | [
""
] | https://huggingface.co/papers/2212.07530 | 2 | 0 | 0 | 5 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-long.884.bib | https://aclanthology.org/2023.acl-long.884/ | @inproceedings{fang-feng-2023-understanding,
title = "Understanding and Bridging the Modality Gap for Speech Translation",
author = "Fang, Qingkai and
Feng, Yang",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.884",
doi = "10.18653/v1/2023.acl-long.884",
pages = "15864--15881",
abstract = "How to achieve better end-to-end speech translation (ST) by leveraging (text) machine translation (MT) data? Among various existing techniques, multi-task learning is one of the effective ways to share knowledge between ST and MT in which additional MT data can help to learn source-to-target mapping. However, due to the differences between speech and text, there is always a gap between ST and MT. In this paper, we first aim to understand this modality gap from the target-side representation differences, and link the modality gap to another well-known problem in neural machine translation: exposure bias. We find that the modality gap is relatively small during training except for some difficult cases, but keeps increasing during inference due to the cascading effect. To address these problems, we propose the Cross-modal Regularization with Scheduled Sampling (Cress) method. Specifically, we regularize the output predictions of ST and MT, whose target-side contexts are derived by sampling between ground truth words and self-generated words with a varying probability. Furthermore, we introduce token-level adaptive training which assigns different training weights to target tokens to handle difficult cases with large modality gaps. Experiments and analysis show that our approach effectively bridges the modality gap, and achieves significant improvements over a strong baseline in all eight directions of the MuST-C dataset.",
}
| How to achieve better end-to-end speech translation (ST) by leveraging (text) machine translation (MT) data? Among various existing techniques, multi-task learning is one of the effective ways to share knowledge between ST and MT in which additional MT data can help to learn source-to-target mapping. However, due to the differences between speech and text, there is always a gap between ST and MT. In this paper, we first aim to understand this modality gap from the target-side representation differences, and link the modality gap to another well-known problem in neural machine translation: exposure bias. We find that the modality gap is relatively small during training except for some difficult cases, but keeps increasing during inference due to the cascading effect. To address these problems, we propose the Cross-modal Regularization with Scheduled Sampling (Cress) method. Specifically, we regularize the output predictions of ST and MT, whose target-side contexts are derived by sampling between ground truth words and self-generated words with a varying probability. Furthermore, we introduce token-level adaptive training which assigns different training weights to target tokens to handle difficult cases with large modality gaps. Experiments and analysis show that our approach effectively bridges the modality gap, and achieves significant improvements over a strong baseline in all eight directions of the MuST-C dataset. | [
"Fang, Qingkai",
"Feng, Yang"
] | Understanding and Bridging the Modality Gap for Speech Translation | acl-long.884 | Poster | 2305.08706 | [
"https://github.com/ictnlp/cress"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
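The row above (Fang & Feng, acl-long.884) describes Cress's scheduled-sampling ingredient: target-side contexts built by sampling between ground-truth and self-generated tokens with a varying probability. A minimal PyTorch sketch of that mixing step follows; it is an illustrative reconstruction, not code from the linked ictnlp/cress repository, and the function names and linear decay schedule are assumptions.

```python
import torch

def p_gold_schedule(step: int, total_steps: int,
                    p_start: float = 1.0, p_end: float = 0.3) -> float:
    """Assumed linear decay: feed mostly gold tokens early in training,
    then expose the decoder to more of its own predictions."""
    frac = min(step / max(total_steps, 1), 1.0)
    return p_start + frac * (p_end - p_start)

def mix_target_context(gold_ids: torch.Tensor, model_ids: torch.Tensor,
                       p_gold: float) -> torch.Tensor:
    """Per position, keep the ground-truth token with probability p_gold,
    otherwise substitute the model's own prediction (scheduled sampling)."""
    coin = torch.rand(gold_ids.shape, device=gold_ids.device) < p_gold
    return torch.where(coin, gold_ids, model_ids)

# In Cress-style training, both the ST and MT branches would rebuild their
# target contexts this way, and a cross-modal regularizer (e.g. a KL term)
# would then pull the two output distributions together.
```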
https://aclanthology.org/2023.acl-long.885.bib | https://aclanthology.org/2023.acl-long.885/ | @inproceedings{khalifa-etal-2023-shot,
title = "Few-shot Reranking for Multi-hop {QA} via Language Model Prompting",
author = "Khalifa, Muhammad and
Logeswaran, Lajanugen and
Lee, Moontae and
Lee, Honglak and
Wang, Lu",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.885",
doi = "10.18653/v1/2023.acl-long.885",
pages = "15882--15897",
abstract = "We study few-shot reranking for multi-hop QA (MQA) with open-domain questions. To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on language model prompting for multi-hop path reranking. PromptRank first constructs an instruction-based prompt that includes a candidate document path and then computes the relevance score between a given question and the path based on the conditional likelihood of the question given the path prompt according to a language model. PromptRank yields strong retrieval performance on HotpotQA with only 128 training examples compared to state-of-the-art methods trained on thousands of examples {---} 73.6 recall@10 by PromptRank vs. 77.8 by PathRetriever and 77.5 by multi-hop dense retrieval.",
}
| We study few-shot reranking for multi-hop QA (MQA) with open-domain questions. To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on language model prompting for multi-hop path reranking. PromptRank first constructs an instruction-based prompt that includes a candidate document path and then computes the relevance score between a given question and the path based on the conditional likelihood of the question given the path prompt according to a language model. PromptRank yields strong retrieval performance on HotpotQA with only 128 training examples compared to state-of-the-art methods trained on thousands of examples {---} 73.6 recall@10 by PromptRank vs. 77.8 by PathRetriever and 77.5 by multi-hop dense retrieval. | [
"Khalifa, Muhammad",
"Logeswaran, Lajanugen",
"Lee, Moontae",
"Lee, Honglak",
"Wang, Lu"
] | Few-shot Reranking for Multi-hop QA via Language Model Prompting | acl-long.885 | Poster | 2205.12650 | [
"https://github.com/mukhal/lepus"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
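The PromptRank row above scores a candidate document path by the conditional likelihood of the question given an instruction-plus-path prompt. Below is a hedged sketch of that scoring rule with a generic Hugging Face causal LM; the model choice (gpt2) and the instruction wording are placeholders, not what the paper or the linked mukhal/lepus repository actually uses.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal LM works for the sketch; gpt2 is only a small stand-in.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def path_score(question: str, path_docs: list) -> float:
    """log P(question | instruction + candidate path), summed over the
    question tokens only, as described in the abstract above."""
    prompt = ("Read the following passages and ask a question.\n"
              + "\n".join(path_docs) + "\nQuestion: ")
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    question_ids = tokenizer(question, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, question_ids], dim=1)
    logits = model(input_ids).logits
    # Log-probability of each token given everything before it.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = input_ids[0, 1:]
    token_lp = log_probs[torch.arange(targets.size(0)), targets]
    # Keep only the positions that predict question tokens.
    return token_lp[prompt_ids.size(1) - 1:].sum().item()

# Reranking is then a sort over candidate paths:
# ranked = sorted(paths, key=lambda p: path_score(question, p), reverse=True)
```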
https://aclanthology.org/2023.acl-long.886.bib | https://aclanthology.org/2023.acl-long.886/ | @inproceedings{ma-etal-2023-dice,
title = "{DICE}: Data-Efficient Clinical Event Extraction with Generative Models",
author = "Ma, Mingyu Derek and
Taylor, Alexander and
Wang, Wei and
Peng, Nanyun",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.886",
doi = "10.18653/v1/2023.acl-long.886",
pages = "15898--15917",
abstract = "Event extraction for the clinical domain is an under-explored research area. The lack of training data along with the high volume of domain-specific terminologies with vague entity boundaries makes the task especially challenging. In this paper, we introduce DICE, a robust and data-efficient generative model for clinical event extraction. DICE frames event extraction as a conditional generation problem and introduces a contrastive learning objective to accurately decide the boundaries of biomedical mentions. DICE also trains an auxiliary mention identification task jointly with event extraction tasks to better identify entity mention boundaries, and further introduces special markers to incorporate identified entity mentions as trigger and argument candidates for their respective tasks. To benchmark clinical event extraction, we compose MACCROBAT-EE, the first clinical event extraction dataset with argument annotation, based on an existing clinical information extraction dataset MACCROBAT. Our experiments demonstrate state-of-the-art performances of DICE for clinical and news domain event extraction, especially under low data settings.",
}
| Event extraction for the clinical domain is an under-explored research area. The lack of training data along with the high volume of domain-specific terminologies with vague entity boundaries makes the task especially challenging. In this paper, we introduce DICE, a robust and data-efficient generative model for clinical event extraction. DICE frames event extraction as a conditional generation problem and introduces a contrastive learning objective to accurately decide the boundaries of biomedical mentions. DICE also trains an auxiliary mention identification task jointly with event extraction tasks to better identify entity mention boundaries, and further introduces special markers to incorporate identified entity mentions as trigger and argument candidates for their respective tasks. To benchmark clinical event extraction, we compose MACCROBAT-EE, the first clinical event extraction dataset with argument annotation, based on an existing clinical information extraction dataset MACCROBAT. Our experiments demonstrate state-of-the-art performances of DICE for clinical and news domain event extraction, especially under low data settings. | [
"Ma, Mingyu Derek",
"Taylor, Alex",
"er",
"Wang, Wei",
"Peng, Nanyun"
] | DICE: Data-Efficient Clinical Event Extraction with Generative Models | acl-long.886 | Oral | 2208.07989 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.887.bib | https://aclanthology.org/2023.acl-long.887/ | @inproceedings{zhang-etal-2023-xsemplr,
title = "{XS}em{PLR}: Cross-Lingual Semantic Parsing in Multiple Natural Languages and Meaning Representations",
author = "Zhang, Yusen and
Wang, Jun and
Wang, Zhiguo and
Zhang, Rui",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.887",
doi = "10.18653/v1/2023.acl-long.887",
pages = "15918--15947",
abstract = "Cross-Lingual Semantic Parsing (CLSP) aims to translate queries in multiple natural languages (NLs) into meaning representations (MRs) such as SQL, lambda calculus, and logic forms. However, existing CLSP models are separately proposed and evaluated on datasets of limited tasks and applications, impeding a comprehensive and unified evaluation of CLSP on a diverse range of NLs and MRs. To this end, we present XSemPLR, a unified benchmark for cross-lingual semantic parsing featured with 22 natural languages and 8 meaning representations by examining and selecting 9 existing datasets to cover 5 tasks and 164 domains. We use XSemPLR to conduct a comprehensive benchmark study on a wide range of multilingual language models including encoder-based models (mBERT, XLM-R), encoder-decoder models (mBART, mT5), and decoder-based models (Codex, BLOOM). We design 6 experiment settings covering various lingual combinations (monolingual, multilingual, cross-lingual) and numbers of learning samples (full dataset, few-shot, and zero-shot). Our experiments show that encoder-decoder models (mT5) achieve the highest performance compared with other popular models, and multilingual training can further improve the average performance. Notably, multilingual large language models (e.g., BLOOM) are still inadequate to perform CLSP tasks. We also find that the performance gap between monolingual training and cross-lingual transfer learning is still significant for multilingual models, though it can be mitigated by cross-lingual few-shot training. Our dataset and code are available at \url{https://github.com/psunlpgroup/XSemPLR}.",
}
| Cross-Lingual Semantic Parsing (CLSP) aims to translate queries in multiple natural languages (NLs) into meaning representations (MRs) such as SQL, lambda calculus, and logic forms. However, existing CLSP models are separately proposed and evaluated on datasets of limited tasks and applications, impeding a comprehensive and unified evaluation of CLSP on a diverse range of NLs and MRs. To this end, we present XSemPLR, a unified benchmark for cross-lingual semantic parsing featured with 22 natural languages and 8 meaning representations by examining and selecting 9 existing datasets to cover 5 tasks and 164 domains. We use XSemPLR to conduct a comprehensive benchmark study on a wide range of multilingual language models including encoder-based models (mBERT, XLM-R), encoder-decoder models (mBART, mT5), and decoder-based models (Codex, BLOOM). We design 6 experiment settings covering various lingual combinations (monolingual, multilingual, cross-lingual) and numbers of learning samples (full dataset, few-shot, and zero-shot). Our experiments show that encoder-decoder models (mT5) achieve the highest performance compared with other popular models, and multilingual training can further improve the average performance. Notably, multilingual large language models (e.g., BLOOM) are still inadequate to perform CLSP tasks. We also find that the performance gap between monolingual training and cross-lingual transfer learning is still significant for multilingual models, though it can be mitigated by cross-lingual few-shot training. Our dataset and code are available at \url{https://github.com/psunlpgroup/XSemPLR}. | [
"Zhang, Yusen",
"Wang, Jun",
"Wang, Zhiguo",
"Zhang, Rui"
] | XSemPLR: Cross-Lingual Semantic Parsing in Multiple Natural Languages and Meaning Representations | acl-long.887 | Poster | 2306.04085 | [
"https://github.com/psunlpgroup/xsemplr"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.888.bib | https://aclanthology.org/2023.acl-long.888/ | @inproceedings{zhu-etal-2023-ink,
title = "{INK}: Injecting k{NN} Knowledge in Nearest Neighbor Machine Translation",
author = "Zhu, Wenhao and
Xu, Jingjing and
Huang, Shujian and
Kong, Lingpeng and
Chen, Jiajun",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.888",
doi = "10.18653/v1/2023.acl-long.888",
pages = "15948--15959",
abstract = "Neural machine translation has achieved promising results on many translation tasks. However, previous studies have shown that neural models induce a non-smooth representation space, which harms its generalization results. Recently, kNN-MT has provided an effective paradigm to smooth the prediction based on neighbor representations during inference. Despite promising results, kNN-MT usually requires large inference overhead. We propose an effective training framework INK to directly smooth the representation space via adjusting representations of kNN neighbors with a small number of new parameters. The new parameters are then used to refresh the whole representation datastore to get new kNN knowledge asynchronously. This loop keeps running until convergence. Experiments on four benchmark datasets show that INK achieves average gains of 1.99 COMET and 1.0 BLEU, outperforming the state-of-the-art kNN-MT system with 0.02x memory space and 1.9x inference speedup.",
}
| Neural machine translation has achieved promising results on many translation tasks. However, previous studies have shown that neural models induce a non-smooth representation space, which harms their generalization results. Recently, kNN-MT has provided an effective paradigm to smooth the prediction based on neighbor representations during inference. Despite promising results, kNN-MT usually requires large inference overhead. We propose an effective training framework INK to directly smooth the representation space via adjusting representations of kNN neighbors with a small number of new parameters. The new parameters are then used to refresh the whole representation datastore to get new kNN knowledge asynchronously. This loop keeps running until convergence. Experiments on four benchmark datasets show that INK achieves average gains of 1.99 COMET and 1.0 BLEU, outperforming the state-of-the-art kNN-MT system with 0.02x memory space and 1.9x inference speedup. | [
"Zhu, Wenhao",
"Xu, Jingjing",
"Huang, Shujian",
"Kong, Lingpeng",
"Chen, Jiajun"
] | INK: Injecting kNN Knowledge in Nearest Neighbor Machine Translation | acl-long.888 | Poster | 2306.06381 | [
"https://github.com/owennju/ink"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
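For context on the INK row above: the kNN-MT mechanism it builds on smooths next-token prediction by interpolating the NMT softmax with a distribution over retrieved datastore neighbors. The NumPy sketch below shows that vanilla interpolation step (Khandelwal et al.'s kNN-MT), not INK's representation-refinement loop itself; all hyperparameter values are illustrative.

```python
import numpy as np

def knn_mt_distribution(hidden, datastore_keys, datastore_vals, model_probs,
                        k=8, temperature=10.0, lam=0.25):
    """Vanilla kNN-MT smoothing: mix the NMT model's next-token distribution
    with a softmax over the k nearest datastore entries.

    hidden:          (d,) decoder state at the current step
    datastore_keys:  (N, d) stored decoder states
    datastore_vals:  (N,) target-token ids paired with each key
    model_probs:     (V,) the NMT softmax output
    """
    # Squared L2 distance from the current state to every stored key.
    dist = ((datastore_keys - hidden) ** 2).sum(axis=1)
    nn = np.argsort(dist)[:k]
    # Neighbor weights: softmax over negative, temperature-scaled distances.
    w = np.exp(-dist[nn] / temperature)
    w /= w.sum()
    knn_probs = np.zeros_like(model_probs)
    for weight, token in zip(w, datastore_vals[nn]):
        knn_probs[token] += weight
    # Final distribution interpolates retrieval and the parametric model.
    return lam * knn_probs + (1 - lam) * model_probs
```

INK's claim is that adjusting the neighbor representations with a few trained parameters, and refreshing the datastore asynchronously, removes most of this inference-time overhead.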
https://aclanthology.org/2023.acl-long.889.bib | https://aclanthology.org/2023.acl-long.889/ | @inproceedings{sun-etal-2023-uncertainty,
title = "Uncertainty Guided Label Denoising for Document-level Distant Relation Extraction",
author = "Sun, Qi and
Huang, Kun and
Yang, Xiaocui and
Hong, Pengfei and
Zhang, Kun and
Poria, Soujanya",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.889",
doi = "10.18653/v1/2023.acl-long.889",
pages = "15960--15973",
abstract = "Document-level relation extraction (DocRE) aims to infer complex semantic relations among entities in a document. Distant supervision (DS) is able to generate massive auto-labeled data, which can improve DocRE performance. Recent works leverage pseudo labels generated by the pre-denoising model to reduce noise in DS data. However, unreliable pseudo labels bring new noise, e.g., adding false pseudo labels and losing correct DS labels. Therefore, how to select effective pseudo labels to denoise DS data is still a challenge in document-level distant relation extraction. To tackle this issue, we introduce uncertainty estimation technology to determine whether pseudo labels can be trusted. In this work, we propose a Document-level distant Relation Extraction framework with Uncertainty Guided label denoising, UGDRE. Specifically, we propose a novel instance-level uncertainty estimation method, which measures the reliability of the pseudo labels with overlapping relations. By further considering the long-tail problem, we design dynamic uncertainty thresholds for different types of relations to filter high-uncertainty pseudo labels. We conduct experiments on two public datasets. Our framework outperforms strong baselines by 1.91 F1 and 2.28 Ign F1 on the RE-DocRED dataset.",
}
| Document-level relation extraction (DocRE) aims to infer complex semantic relations among entities in a document. Distant supervision (DS) is able to generate massive auto-labeled data, which can improve DocRE performance. Recent works leverage pseudo labels generated by the pre-denoising model to reduce noise in DS data. However, unreliable pseudo labels bring new noise, e.g., adding false pseudo labels and losing correct DS labels. Therefore, how to select effective pseudo labels to denoise DS data is still a challenge in document-level distant relation extraction. To tackle this issue, we introduce uncertainty estimation technology to determine whether pseudo labels can be trusted. In this work, we propose a Document-level distant Relation Extraction framework with Uncertainty Guided label denoising, UGDRE. Specifically, we propose a novel instance-level uncertainty estimation method, which measures the reliability of the pseudo labels with overlapping relations. By further considering the long-tail problem, we design dynamic uncertainty thresholds for different types of relations to filter high-uncertainty pseudo labels. We conduct experiments on two public datasets. Our framework outperforms strong baselines by 1.91 F1 and 2.28 Ign F1 on the RE-DocRED dataset. | [
"Sun, Qi",
"Huang, Kun",
"Yang, Xiaocui",
"Hong, Pengfei",
"Zhang, Kun",
"Poria, Soujanya"
] | Uncertainty Guided Label Denoising for Document-level Distant Relation Extraction | acl-long.889 | Oral | 2305.11029 | [
"https://github.com/qisun123/ugdre"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
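The UGDRE row above hinges on instance-level uncertainty estimation plus relation-specific dynamic thresholds for filtering pseudo labels. The sketch below illustrates that general pattern with Monte-Carlo dropout; it is a generic reconstruction under stated assumptions, not the estimator from the linked qisun123/ugdre code.

```python
import numpy as np

def mc_dropout_uncertainty(predict_fn, example, passes=10):
    """Instance-level uncertainty via Monte-Carlo dropout: run the
    pre-denoising model several times with dropout active and take the
    mean and variance of each relation's positive probability."""
    probs = np.stack([predict_fn(example) for _ in range(passes)])  # (T, R)
    return probs.mean(axis=0), probs.var(axis=0)

def keep_pseudo_labels(mean_p, var_p, rel_thresholds, p_min=0.5):
    """Keep a pseudo label only if it is confident AND its uncertainty is
    below that relation type's own threshold -- letting rare, long-tail
    relations use a looser cutoff than frequent ones."""
    return (mean_p > p_min) & (var_p < rel_thresholds)
```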
https://aclanthology.org/2023.acl-long.890.bib | https://aclanthology.org/2023.acl-long.890/ | @inproceedings{ramshetty-etal-2023-cross,
title = "Cross-Modal Attribute Insertions for Assessing the Robustness of Vision-and-Language Learning",
author = "Ramshetty, Shivaen and
Verma, Gaurav and
Kumar, Srijan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.890",
doi = "10.18653/v1/2023.acl-long.890",
pages = "15974--15990",
abstract = "The robustness of multimodal deep learning models to realistic changes in the input text is critical for applicability on important tasks such as text-to-image retrieval and cross-modal entailment. To measure robustness, several existing approaches edit the text data, but without leveraging the cross-modal information present in multimodal data. Such information from the visual modality, such as color, size, and shape, provides additional attributes that users can include in their inputs. Thus, we propose cross-modal attribute insertions as a realistic perturbation strategy for vision-and-language data that inserts visual attributes of the objects in the image into the corresponding text (e.g., {``}girl on a chair{''} to {``}little girl on a wooden chair{''}). Our proposed approach for cross-modal attribute insertions is modular, controllable, and task-agnostic. We find that augmenting input text using cross-modal insertions causes state-of-the-art approaches for text-to-image retrieval and cross-modal entailment to perform poorly, resulting in relative drops of {\textasciitilde}15{\%} in MRR and {\textasciitilde}20{\%} in F1 score, respectively. Crowd-sourced annotations demonstrate that cross-modal insertions lead to higher quality augmentations for multimodal data than augmentations using text-only data, and are equivalent in quality to original examples. We release the code to encourage robustness evaluations of deep vision-and-language models: \url{https://github.com/claws-lab/multimodal-robustness-xmai}",
}
| The robustness of multimodal deep learning models to realistic changes in the input text is critical for applicability on important tasks such as text-to-image retrieval and cross-modal entailment. To measure robustness, several existing approaches edit the text data, but without leveraging the cross-modal information present in multimodal data. Such information from the visual modality, such as color, size, and shape, provides additional attributes that users can include in their inputs. Thus, we propose cross-modal attribute insertions as a realistic perturbation strategy for vision-and-language data that inserts visual attributes of the objects in the image into the corresponding text (e.g., {``}girl on a chair{''} to {``}little girl on a wooden chair{''}). Our proposed approach for cross-modal attribute insertions is modular, controllable, and task-agnostic. We find that augmenting input text using cross-modal insertions causes state-of-the-art approaches for text-to-image retrieval and cross-modal entailment to perform poorly, resulting in relative drops of {\textasciitilde}15{\%} in MRR and {\textasciitilde}20{\%} in F1 score, respectively. Crowd-sourced annotations demonstrate that cross-modal insertions lead to higher quality augmentations for multimodal data than augmentations using text-only data, and are equivalent in quality to original examples. We release the code to encourage robustness evaluations of deep vision-and-language models: \url{https://github.com/claws-lab/multimodal-robustness-xmai} | [
"Ramshetty, Shivaen",
"Verma, Gaurav",
"Kumar, Srijan"
] | Cross-Modal Attribute Insertions for Assessing the Robustness of Vision-and-Language Learning | acl-long.890 | Poster | 2306.11065 | [
"https://github.com/claws-lab/multimodal-robustness-xmai"
] | https://huggingface.co/papers/2306.11065 | 2 | 1 | 0 | 3 | 1 | [] | [] | [] |
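The perturbation in the cross-modal attribute insertion row above can be caricatured in a few lines once object attributes have already been extracted from the image: insert each detected attribute before the matching object mention in the caption. The real pipeline in the linked claws-lab repository does this with learned multimodal components; this toy string version only reproduces the abstract's own example.

```python
def insert_attributes(caption: str, detected: dict) -> str:
    """Toy cross-modal attribute insertion: prepend image-derived attributes
    (e.g. color, size, material) to the matching object words in the text."""
    out = []
    for tok in caption.split():
        attrs = detected.get(tok.lower().strip(".,"))
        if attrs:
            out.extend(attrs)  # attribute words go right before the noun
        out.append(tok)
    return " ".join(out)

# Reproduces the example from the abstract:
# insert_attributes("girl on a chair", {"girl": ["little"], "chair": ["wooden"]})
# -> "little girl on a wooden chair"
```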
https://aclanthology.org/2023.acl-long.891.bib | https://aclanthology.org/2023.acl-long.891/ | @inproceedings{muennighoff-etal-2023-crosslingual,
title = "Crosslingual Generalization through Multitask Finetuning",
author = "Muennighoff, Niklas and
Wang, Thomas and
Sutawika, Lintang and
Roberts, Adam and
Biderman, Stella and
Le Scao, Teven and
Bari, M Saiful and
Shen, Sheng and
Yong, Zheng Xin and
Schoelkopf, Hailey and
Tang, Xiangru and
Radev, Dragomir and
Aji, Alham Fikri and
Almubarak, Khalid and
Albanie, Samuel and
Alyafeai, Zaid and
Webson, Albert and
Raff, Edward and
Raffel, Colin",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.891",
doi = "10.18653/v1/2023.acl-long.891",
pages = "15991--16111",
abstract = "Multitask prompted finetuning (MTF) has been shown to help large language models generalize to new tasks in a zero-shot setting, but so far explorations of MTF have focused on English data and models. We apply MTF to the pretrained multilingual BLOOM and mT5 model families to produce finetuned variants called BLOOMZ and mT0. We find finetuning large multilingual language models on English tasks with English prompts allows for task genrealization to non-English languages that appear only in the pretraining corpus. Finetuning on multilingual tasks with English prompts further improves performance on English and non-English tasks leading to various state-of-the-art zero-shot results. We also investigate finetuning on multilingual tasks with prompts that have been machine-translated from English to match the language of each dataset. We find training on these machine-translated prompts leads to better performance on human-written prompts in the respective languages. Surprisingly, we find models are capable of zero-shot generalization to tasks in languages they have never intentionally seen. We conjecture that the models are learning higher-level capabilities that are both task- and language-agnostic. In addition, we introduce xP3, a composite of supervised datasets in 46 languages with English and machine-translated prompts. Our code, datasets and models are freely available at \url{https://github.com/bigscience-workshop/xmtf}.",
}
| Multitask prompted finetuning (MTF) has been shown to help large language models generalize to new tasks in a zero-shot setting, but so far explorations of MTF have focused on English data and models. We apply MTF to the pretrained multilingual BLOOM and mT5 model families to produce finetuned variants called BLOOMZ and mT0. We find finetuning large multilingual language models on English tasks with English prompts allows for task generalization to non-English languages that appear only in the pretraining corpus. Finetuning on multilingual tasks with English prompts further improves performance on English and non-English tasks leading to various state-of-the-art zero-shot results. We also investigate finetuning on multilingual tasks with prompts that have been machine-translated from English to match the language of each dataset. We find training on these machine-translated prompts leads to better performance on human-written prompts in the respective languages. Surprisingly, we find models are capable of zero-shot generalization to tasks in languages they have never intentionally seen. We conjecture that the models are learning higher-level capabilities that are both task- and language-agnostic. In addition, we introduce xP3, a composite of supervised datasets in 46 languages with English and machine-translated prompts. Our code, datasets and models are freely available at \url{https://github.com/bigscience-workshop/xmtf}. | [
"Muennighoff, Niklas",
"Wang, Thomas",
"Sutawika, Lintang",
"Roberts, Adam",
"Biderman, Stella",
"Le Scao, Teven",
"Bari, M Saiful",
"Shen, Sheng",
"Yong, Zheng Xin",
"Schoelkopf, Hailey",
"Tang, Xiangru",
"Radev, Dragomir",
"Aji, Alham Fikri",
"Almubarak, Khalid",
"Albanie, Samuel",
"Alyafeai, Zaid",
"Webson, Albert",
"Raff, Edward",
"Raffel, Colin"
] | Crosslingual Generalization through Multitask Finetuning | acl-long.891 | Poster | 2211.01786 | [
"https://github.com/bigscience-workshop/xmtf"
] | https://huggingface.co/papers/2211.01786 | 13 | 2 | 0 | 19 | 1 | [
"bigscience/bloomz",
"bigscience/bloomz-7b1-mt",
"bigscience/bloomz-7b1",
"bigscience/bloomz-560m",
"bigscience/bloomz-3b",
"bigscience/mt0-xxl",
"bigscience/mt0-xxl-mt",
"bigscience/mt0-large",
"bigscience/bloomz-mt",
"bigscience/bloomz-1b1",
"bigscience/mt0-base",
"bigscience/mt0-xl",
"bigscience/mt0-small",
"bigscience/bloomz-1b7",
"TheBloke/bloomz-176B-GPTQ",
"bigscience/bloomz-p3",
"bigscience/bloomz-7b1-p3",
"rustformers/bloomz-ggml",
"bs-la/bloomz-7b1-4b-ru",
"newsrx/bloomz-7b1",
"bigscience/mt0-xxl-p3",
"bs-la/bloomz-7b1-500m-ru",
"tchebonenko/BLOOMZ-Medical",
"newsrx/mt0-xl",
"jamesdborin/ct2-int8-mt0-xl",
"jamesdborin/ct2-int8-bloomz-7b1-mt",
"bs-la/bloomz-7b1-4b-xp3ru",
"LazarusNLP/bloomz-7b1-mt-fp32",
"LazarusNLP/bloomz-1b7-fp32",
"LazarusNLP/bloomz-560m-fp32",
"monsterbeasts/LishizhenGPT",
"RichardErkhov/bigscience_-_bloomz-7b1-4bits",
"RichardErkhov/bigscience_-_bloomz-560m-8bits",
"RichardErkhov/bigscience_-_bloomz-560m-4bits",
"RichardErkhov/bigscience_-_bloomz-1b1-4bits",
"RichardErkhov/bigscience_-_bloomz-1b1-8bits",
"SDA-Sum/deT0-large",
"SDA-Sum/deT0-xl",
"RichardErkhov/bigscience_-_bloomz-7b1-mt-4bits",
"RichardErkhov/bigscience_-_bloomz-7b1-mt-8bits",
"RichardErkhov/bigscience_-_bloomz-1b7-4bits",
"RichardErkhov/bigscience_-_bloomz-1b7-8bits",
"darkshapes/mt0-small",
"darkshapes/mt0-large",
"darkshapes/mt0-base"
] | [
"bigscience/xP3",
"CohereForAI/xP3x",
"bigscience/xP3all",
"bigscience/xP3mt",
"Muennighoff/xwinograd",
"CATIE-AQ/DFP",
"bigscience/xP3megds",
"Svngoku/xP3x-Kongo",
"BatsResearch/NusaX-senti-LexC-Gen",
"BatsResearch/sib200-LexC-Gen",
"bs-la/xP3ru",
"M-A-D/ArabicDarija-xP3x",
"CATIE-AQ/xwinograd_fr_prompt_coreference",
"polm-stability/xwinograd-ja"
] | [
"open-llm-leaderboard/open_llm_leaderboard",
"olivierdehaene/chat-llm-streaming",
"Intel/low_bit_open_llm_leaderboard",
"fffiloni/langchain-chat-with-pdf",
"tomg-group-umd/lm-watermarking",
"Sharathhebbar24/One-stop-for-Open-source-models",
"BAAI/open_cn_llm_leaderboard",
"monra/freegpt-webui",
"qiantong-xu/toolbench-leaderboard",
"gsaivinay/open_llm_leaderboard",
"justest/gpt4free",
"pix2pix-zero-library/pix2pix-zero-demo",
"ysharma/OSChatbots_ChatGPT_ToeToToe",
"Wootang01/text_generator",
"NeuralInternet/ChatLLMs",
"TencentARC/ImageConductor",
"GTBench/GTBench",
"Wauplin/bloomz.cpp-converter",
"kastan/ai-teaching-assistant",
"Justinrune/LLaMA-Factory",
"SeaEval/SeaEval_Leaderboard",
"RamAnanth1/human_preference",
"TogetherAI/langchain-chat-with-pdf",
"Wootang01/text_generator_two",
"felixz/open_llm_leaderboard",
"OPTML-Group/UnlearnCanvas-Benchmark",
"Vikhrmodels/small-shlepa-lb",
"officialhimanshu595/llama-factory",
"onursavas/langchain-chat-with-pdf",
"mohamedemam/Arabic-meeting-summarization",
"Tj/langchain-chat-with-pdf",
"slush0/petals-playground",
"cloudqi/MultisourceChat",
"kenken999/fastapi_django_main_live",
"dinhanhx/velvet",
"tekkonetes/Chatbots",
"Dogge/bigscience-bloomz-7b1",
"knkarthick/chat-llm-streaming",
"TechWithAnirudh/langchain-chat-with-pdf",
"hra/stable-diffusion-tee-shirt",
"kastan/ai-teaching-assistant-beta",
"g4f/freegpt-webui",
"arborvitae/AI_Legal_documentation_assistant",
"akash418/bloom-zero-shot",
"pierreguillou/bloomz-english",
"polymath707/bigscience-bloomz-7b1",
"Jour/Translate-bloomz",
"rodrigomasini/data_only_open_llm_leaderboard",
"Fan-611177107/bigscience-bloomz-7b1-mt",
"Docfile/open_llm_leaderboard",
"Msp/opensource_chat_assistants",
"Pietrzak/bigscience-bloomz-7b1-mt",
"Jour/Translation-to-small",
"nateraw/text-generation-inference",
"neubla/neubla-llm-evaluation-board",
"rfrossard/langchain-chat-with-pdf",
"Jour/Translate",
"DrBenjamin/AI_Demo",
"Alfasign/chat-llm-streaming",
"Alfasign/AchyuthGPT",
"pikto/Elite-freegpt-webui",
"Fernando22/freegpt-webui",
"andryMLOPS/ASTA-GPT-3.8_web_ui",
"VickyKira/NASAGPT",
"101-5/gpt4free",
"ShashankSS1205/DomainSpecificQuesAns",
"msobhy/langchain-chat-with-pdf",
"KushJaggi/pdfGPT",
"ArpitM/chat-llm-streaming",
"pauri32/llm-challenge",
"alicebobjob/bigscience-bloomz-560m",
"dataroadmap/talk-to-your-docs",
"nateraw/jupyterlab-inference-dev",
"arentz/bigscience-bloomz",
"purna11/bigscience-bloomz-1b1",
"blackwingedkite/gutalk",
"dataroadmap/SR_Chatbot",
"Trillianitus/bigscience-bloomz",
"qodwsjak/bigscience-bloomz-7b1-mt",
"srodg7/tokenizer",
"egodos/bigscience-bloomz-7b1",
"advaitmb/bloomZtest",
"kai0226/bigscience-bloomz-7b1-mt",
"blackwingedkite/alpaca2_clas",
"Jour/Translate-bloomz-7b1",
"zhtet/generative-qa-chatbot",
"ziqi-guo/bigscience-bloomz-7b1-mt",
"0x1668/open_llm_leaderboard",
"cuttycb/amazing_salon",
"selvalogesh/chat-llm-streaming",
"nubifere/vis-llm-ft",
"pngwn/open_llm_leaderboard-check",
"XuBailing/CongMa2",
"infinisoft/opensource_chat_assistants",
"asir0z/open_llm_leaderboard",
"kbmlcoding/open_llm_leaderboard_free",
"RaushanTurganbay/RDF_to_text_generation",
"adsantos/langchain-chat-with-pdf",
"xieyang233/demo",
"dupenf/bigscience-bloomz-560m"
] |
https://aclanthology.org/2023.acl-long.892.bib | https://aclanthology.org/2023.acl-long.892/ | @inproceedings{shou-lin-2023-evaluate,
title = "Evaluate {AMR} Graph Similarity via Self-supervised Learning",
author = "Shou, Ziyi and
Lin, Fangzhen",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.892",
doi = "10.18653/v1/2023.acl-long.892",
pages = "16112--16123",
abstract = "In work on AMR (Abstract Meaning Representation), similarity metrics are crucial as they are used to evaluate AMR systems such as AMR parsers. Current AMR metrics are all based on nodes or triples matching without considering the entire structures of AMR graphs. To address this problem, and inspired by learned similarity evaluation on plain text, we propose AMRSim, an automatic AMR graph similarity evaluation metric. To overcome the high cost of collecting human-annotated data, AMRSim automatically generates silver AMR graphs and utilizes self-supervised learning methods. We evaluated AMRSim on various datasets and found that AMRSim significantly improves the correlations with human semantic scores and remains robust under diverse challenges. We also discuss how AMRSim can be extended to multilingual cases.",
}
| In work on AMR (Abstract Meaning Representation), similarity metrics are crucial as they are used to evaluate AMR systems such as AMR parsers. Current AMR metrics are all based on nodes or triples matching without considering the entire structures of AMR graphs. To address this problem, and inspired by learned similarity evaluation on plain text, we propose AMRSim, an automatic AMR graph similarity evaluation metric. To overcome the high cost of collecting human-annotated data, AMRSim automatically generates silver AMR graphs and utilizes self-supervised learning methods. We evaluated AMRSim on various datasets and found that AMRSim significantly improves the correlations with human semantic scores and remains robust under diverse challenges. We also discuss how AMRSim can be extended to multilingual cases. | [
"Shou, Ziyi",
"Lin, Fangzhen"
] | Evaluate AMR Graph Similarity via Self-supervised Learning | acl-long.892 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.893.bib | https://aclanthology.org/2023.acl-long.893/ | @inproceedings{dar-etal-2023-analyzing,
title = "Analyzing Transformers in Embedding Space",
author = "Dar, Guy and
Geva, Mor and
Gupta, Ankit and
Berant, Jonathan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.893",
doi = "10.18653/v1/2023.acl-long.893",
pages = "16124--16170",
abstract = "Understanding Transformer-based models has attracted significant attention, as they lie at the heart of recent technological advances across machine learning. While most interpretability methods rely on running models over inputs, recent work has shown that a zero-pass approach, where parameters are interpreted directly without a forward/backward pass is feasible for some Transformer parameters, and for two-layer attention networks. In this work, we present a theoretical analysis where all parameters of a trained Transformer are interpreted by projecting them into the embedding space, that is, the space of vocabulary items they operate on. We derive a simple theoretical framework to support our arguments and provide ample evidence for its validity. First, an empirical analysis showing that parameters of both pretrained and fine-tuned models can be interpreted in embedding space. Second, we present two applications of our framework: (a) aligning the parameters of different models that share a vocabulary, and (b) constructing a classifier without training by {``}translating{''} the parameters of a fine-tuned classifier to parameters of a different model that was only pretrained. Overall, our findings open the door to interpretation methods that, at least in part, abstract away from model specifics and operate in the embedding space only.",
}
| Understanding Transformer-based models has attracted significant attention, as they lie at the heart of recent technological advances across machine learning. While most interpretability methods rely on running models over inputs, recent work has shown that a zero-pass approach, where parameters are interpreted directly without a forward/backward pass is feasible for some Transformer parameters, and for two-layer attention networks. In this work, we present a theoretical analysis where all parameters of a trained Transformer are interpreted by projecting them into the embedding space, that is, the space of vocabulary items they operate on. We derive a simple theoretical framework to support our arguments and provide ample evidence for its validity. First, an empirical analysis showing that parameters of both pretrained and fine-tuned models can be interpreted in embedding space. Second, we present two applications of our framework: (a) aligning the parameters of different models that share a vocabulary, and (b) constructing a classifier without training by {``}translating{''} the parameters of a fine-tuned classifier to parameters of a different model that was only pretrained. Overall, our findings open the door to interpretation methods that, at least in part, abstract away from model specifics and operate in the embedding space only. | [
"Dar, Guy",
"Geva, Mor",
"Gupta, Ankit",
"Berant, Jonathan"
] | Analyzing Transformers in Embedding Space | acl-long.893 | Poster | 2209.02535 | [
"https://github.com/guyd1995/embedding-space"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
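The zero-pass idea in the "Analyzing Transformers in Embedding Space" row above can be sketched as the simplest possible probe: project a parameter vector through the token embedding matrix and read off the nearest vocabulary items, with no forward pass over any input. The snippet below is that minimal version for GPT-2, not the paper's full framework (which also handles attention matrices and cross-model alignment); the layer and row indices are arbitrary.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
E = model.transformer.wte.weight  # (vocab_size, d_model) embedding matrix

@torch.no_grad()
def top_tokens(param_vector, k=10):
    """Interpret a d_model-sized parameter vector by scoring it against every
    token embedding and returning the k best-aligned vocabulary items."""
    scores = E @ param_vector  # (vocab_size,)
    return tokenizer.convert_ids_to_tokens(scores.topk(k).indices.tolist())

# Example: one FFN value (output) vector from layer 5. GPT-2's Conv1D stores
# weights as (in_features, out_features), so rows of mlp.c_proj.weight are
# d_model-sized and can be projected directly.
w_out = model.transformer.h[5].mlp.c_proj.weight  # (d_ff, d_model)
print(top_tokens(w_out[42]))
```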
https://aclanthology.org/2023.acl-long.894.bib | https://aclanthology.org/2023.acl-long.894/ | @inproceedings{li-etal-2023-shot-data,
title = "Few-Shot Data-to-Text Generation via Unified Representation and Multi-Source Learning",
author = "Li, Alexander Hanbo and
Shang, Mingyue and
Spiliopoulou, Evangelia and
Ma, Jie and
Ng, Patrick and
Wang, Zhiguo and
Min, Bonan and
Wang, William Yang and
McKeown, Kathleen and
Castelli, Vittorio and
Roth, Dan and
Xiang, Bing",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.894",
doi = "10.18653/v1/2023.acl-long.894",
pages = "16171--16189",
abstract = "In this paper, we present a novel approach for data-to-text generation that addresses the limitations of current methods that primarily focus on specific types of structured data. Our proposed method aims to improve performance in multi-task training, zero-shot and few-shot scenarios by providing a unified representation that can handle various forms of structured data such as tables, knowledge graph triples, and meaning representations. We demonstrate that our proposed approach can effectively adapt to new structured forms, and can improve performance in comparison to current methods. For example, our method resulted in a 66{\%} improvement in zero-shot BLEU scores when transferring models trained on table inputs to a knowledge graph dataset. Our proposed method is an important step towards a more general data-to-text generation framework.",
}
| In this paper, we present a novel approach for data-to-text generation that addresses the limitations of current methods that primarily focus on specific types of structured data. Our proposed method aims to improve performance in multi-task training, zero-shot and few-shot scenarios by providing a unified representation that can handle various forms of structured data such as tables, knowledge graph triples, and meaning representations. We demonstrate that our proposed approach can effectively adapt to new structured forms, and can improve performance in comparison to current methods. For example, our method resulted in a 66{\%} improvement in zero-shot BLEU scores when transferring models trained on table inputs to a knowledge graph dataset. Our proposed method is an important step towards a more general data-to-text generation framework. | [
"Li, Alex",
"er Hanbo",
"Shang, Mingyue",
"Spiliopoulou, Evangelia",
"Ma, Jie",
"Ng, Patrick",
"Wang, Zhiguo",
"Min, Bonan",
"Wang, William Yang",
"McKeown, Kathleen",
"Castelli, Vittorio",
"Roth, Dan",
"Xiang, Bing"
] | Few-Shot Data-to-Text Generation via Unified Representation and Multi-Source Learning | acl-long.894 | Poster | 2308.05317 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.895.bib | https://aclanthology.org/2023.acl-long.895/ | @inproceedings{kim-etal-2023-factkg,
title = "{F}act{KG}: Fact Verification via Reasoning on Knowledge Graphs",
author = "Kim, Jiho and
Park, Sungjin and
Kwon, Yeonsu and
Jo, Yohan and
Thorne, James and
Choi, Edward",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.895",
doi = "10.18653/v1/2023.acl-long.895",
pages = "16190--16206",
abstract = "In real world applications, knowledge graphs (KG) are widely used in various domains (e.g. medical applications and dialogue agents). However, for fact verification, KGs have not been adequately utilized as a knowledge source. KGs can be a valuable knowledge source in fact verification due to their reliability and broad applicability. A KG consists of nodes and edges which makes it clear how concepts are linked together, allowing machines to reason over chains of topics. However, there are many challenges in understanding how these machine-readable concepts map to information in text. To enable the community to better use KGs, we introduce a new dataset, FactKG: Fact Verificationvia Reasoning on Knowledge Graphs. It consists of 108k natural language claims with five types of reasoning: One-hop, Conjunction, Existence, Multi-hop, and Negation. Furthermore, FactKG contains various linguistic patterns, including colloquial style claims as well as written style claims to increase practicality. Lastly, we develop a baseline approach and analyze FactKG over these reasoning types. We believe FactKG can advance both reliability and practicality in KG-based fact verification.",
}
| In real world applications, knowledge graphs (KG) are widely used in various domains (e.g. medical applications and dialogue agents). However, for fact verification, KGs have not been adequately utilized as a knowledge source. KGs can be a valuable knowledge source in fact verification due to their reliability and broad applicability. A KG consists of nodes and edges which makes it clear how concepts are linked together, allowing machines to reason over chains of topics. However, there are many challenges in understanding how these machine-readable concepts map to information in text. To enable the community to better use KGs, we introduce a new dataset, FactKG: Fact Verification via Reasoning on Knowledge Graphs. It consists of 108k natural language claims with five types of reasoning: One-hop, Conjunction, Existence, Multi-hop, and Negation. Furthermore, FactKG contains various linguistic patterns, including colloquial style claims as well as written style claims to increase practicality. Lastly, we develop a baseline approach and analyze FactKG over these reasoning types. We believe FactKG can advance both reliability and practicality in KG-based fact verification. | [
"Kim, Jiho",
"Park, Sungjin",
"Kwon, Yeonsu",
"Jo, Yohan",
"Thorne, James",
"Choi, Edward"
] | FactKG: Fact Verification via Reasoning on Knowledge Graphs | acl-long.895 | Poster | 2305.06590 | [
"https://github.com/jiho283/FactKG"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.896.bib | https://aclanthology.org/2023.acl-long.896/ | @inproceedings{labrak-etal-2023-drbert,
title = "{D}r{BERT}: A Robust Pre-trained Model in {F}rench for Biomedical and Clinical domains",
author = "Labrak, Yanis and
Bazoge, Adrien and
Dufour, Richard and
Rouvier, Mickael and
Morin, Emmanuel and
Daille, B{\'e}atrice and
Gourraud, Pierre-Antoine",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.896",
doi = "10.18653/v1/2023.acl-long.896",
pages = "16207--16221",
abstract = "In recent years, pre-trained language models (PLMs) achieve the best performance on a wide range of natural language processing (NLP) tasks. While the first models were trained on general domain data, specialized ones have emerged to more effectively treat specific domains. In this paper, we propose an original study of PLMs in the medical domain on French language. We compare, for the first time, the performance of PLMs trained on both public data from the web and private data from healthcare establishments. We also evaluate different learning strategies on a set of biomedical tasks. In particular, we show that we can take advantage of already existing biomedical PLMs in a foreign language by further pre-train it on our targeted data. Finally, we release the first specialized PLMs for the biomedical field in French, called DrBERT, as well as the largest corpus of medical data under free license on which these models are trained.",
}
| In recent years, pre-trained language models (PLMs) have achieved the best performance on a wide range of natural language processing (NLP) tasks. While the first models were trained on general domain data, specialized ones have emerged to more effectively treat specific domains. In this paper, we propose an original study of PLMs in the medical domain on the French language. We compare, for the first time, the performance of PLMs trained on both public data from the web and private data from healthcare establishments. We also evaluate different learning strategies on a set of biomedical tasks. In particular, we show that we can take advantage of already existing biomedical PLMs in a foreign language by further pre-training them on our targeted data. Finally, we release the first specialized PLMs for the biomedical field in French, called DrBERT, as well as the largest corpus of medical data under free license on which these models are trained. | [
"Labrak, Yanis",
"Bazoge, Adrien",
"Dufour, Richard",
"Rouvier, Mickael",
"Morin, Emmanuel",
"Daille, B{\\'e}atrice",
"Gourraud, Pierre-Antoine"
] | DrBERT: A Robust Pre-trained Model in French for Biomedical and Clinical domains | acl-long.896 | Poster | 2304.00958 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.897.bib | https://aclanthology.org/2023.acl-long.897/ | @inproceedings{yuan-etal-2023-discriminative,
title = "Discriminative Reasoning with Sparse Event Representation for Document-level Event-Event Relation Extraction",
author = "Yuan, Changsen and
Huang, Heyan and
Cao, Yixin and
Wen, Yonggang",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.897",
doi = "10.18653/v1/2023.acl-long.897",
pages = "16222--16234",
abstract = "Document-level Event Causality Identification (DECI) aims to extract causal relations between events in a document. It challenges conventional sentence-level task (SECI) with difficult long-text understanding. In this paper, we propose a novel DECI model (SENDIR) for better document-level reasoning. Different from existing works that build an event graph via linguistic tools, SENDIR does not require any prior knowledge. The basic idea is to discriminate event pairs in the same sentence or span multiple sentences by assuming their different information density: 1) low density in the document suggests sparse attention to skip irrelevant information. Our module 1 designs various types of attention for event representation learning to capture long-distance dependence. 2) High density in a sentence makes SECI relatively easy. Module 2 uses different weights to highlight the roles and contributions of intra- and inter-sentential reasoning, which introduces supportive event pairs for joint modeling. Extensive experiments demonstrate great improvements in SENDIR and the effectiveness of various sparse attention for document-level representations. Codes will be released later.",
}
| Document-level Event Causality Identification (DECI) aims to extract causal relations between events in a document. It challenges conventional sentence-level task (SECI) with difficult long-text understanding. In this paper, we propose a novel DECI model (SENDIR) for better document-level reasoning. Different from existing works that build an event graph via linguistic tools, SENDIR does not require any prior knowledge. The basic idea is to discriminate event pairs in the same sentence or span multiple sentences by assuming their different information density: 1) low density in the document suggests sparse attention to skip irrelevant information. Our module 1 designs various types of attention for event representation learning to capture long-distance dependence. 2) High density in a sentence makes SECI relatively easy. Module 2 uses different weights to highlight the roles and contributions of intra- and inter-sentential reasoning, which introduces supportive event pairs for joint modeling. Extensive experiments demonstrate great improvements in SENDIR and the effectiveness of various sparse attention for document-level representations. Codes will be released later. | [
"Yuan, Changsen",
"Huang, Heyan",
"Cao, Yixin",
"Wen, Yonggang"
] | Discriminative Reasoning with Sparse Event Representation for Document-level Event-Event Relation Extraction | acl-long.897 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.898.bib | https://aclanthology.org/2023.acl-long.898/ | @inproceedings{lu-etal-2023-facilitating,
title = "Facilitating Fine-grained Detection of {C}hinese Toxic Language: Hierarchical Taxonomy, Resources, and Benchmarks",
author = "Lu, Junyu and
Xu, Bo and
Zhang, Xiaokun and
Min, Changrong and
Yang, Liang and
Lin, Hongfei",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.898",
doi = "10.18653/v1/2023.acl-long.898",
pages = "16235--16250",
abstract = "The widespread dissemination of toxic online posts is increasingly damaging to society. However, research on detecting toxic language in Chinese has lagged significantly due to limited datasets. Existing datasets suffer from a lack of fine-grained annotations, such as the toxic type and expressions with indirect toxicity. These fine-grained annotations are crucial factors for accurately detecting the toxicity of posts involved with lexical knowledge, which has been a challenge for researchers. To tackle this problem, we facilitate the fine-grained detection of Chinese toxic language by building a new dataset with benchmark results. First, we devised Monitor Toxic Frame, a hierarchical taxonomy to analyze the toxic type and expressions. Then, we built a fine-grained dataset ToxiCN, including both direct and indirect toxic samples. ToxiCN is based on an insulting vocabulary containing implicit profanity. We further propose a benchmark model, Toxic Knowledge Enhancement (TKE), by incorporating lexical features to detect toxic language. We demonstrate the usability of ToxiCN and the effectiveness of TKE based on a systematic quantitative and qualitative analysis.",
}
| The widespread dissemination of toxic online posts is increasingly damaging to society. However, research on detecting toxic language in Chinese has lagged significantly due to limited datasets. Existing datasets suffer from a lack of fine-grained annotations, such as the toxic type and expressions with indirect toxicity. These fine-grained annotations are crucial factors for accurately detecting the toxicity of posts involved with lexical knowledge, which has been a challenge for researchers. To tackle this problem, we facilitate the fine-grained detection of Chinese toxic language by building a new dataset with benchmark results. First, we devised Monitor Toxic Frame, a hierarchical taxonomy to analyze the toxic type and expressions. Then, we built a fine-grained dataset ToxiCN, including both direct and indirect toxic samples. ToxiCN is based on an insulting vocabulary containing implicit profanity. We further propose a benchmark model, Toxic Knowledge Enhancement (TKE), by incorporating lexical features to detect toxic language. We demonstrate the usability of ToxiCN and the effectiveness of TKE based on a systematic quantitative and qualitative analysis. | [
"Lu, Junyu",
"Xu, Bo",
"Zhang, Xiaokun",
"Min, Changrong",
"Yang, Liang",
"Lin, Hongfei"
] | Facilitating Fine-grained Detection of Chinese Toxic Language: Hierarchical Taxonomy, Resources, and Benchmarks | acl-long.898 | Poster | 2305.04446 | [
"https://github.com/dut-lujunyu/toxicn"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.899.bib | https://aclanthology.org/2023.acl-long.899/ | @inproceedings{duquenne-etal-2023-speechmatrix,
title = "{S}peech{M}atrix: A Large-Scale Mined Corpus of Multilingual Speech-to-Speech Translations",
author = "Duquenne, Paul-Ambroise and
Gong, Hongyu and
Dong, Ning and
Du, Jingfei and
Lee, Ann and
Goswami, Vedanuj and
Wang, Changhan and
Pino, Juan and
Sagot, Beno{\^\i}t and
Schwenk, Holger",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.899",
doi = "10.18653/v1/2023.acl-long.899",
pages = "16251--16269",
abstract = "We present SpeechMatrix, a large-scale multilingual corpus of speech-to-speech translations mined from real speech of European Parliament recordings. It contains speech alignments in 136 language pairs with a total of 418 thousand hours of speech. To evaluate the quality of this parallel speech, we train bilingual speech-to-speech translation models on mined data only and establish extensive baseline results on EuroParl-ST, VoxPopuli and FLEURS test sets. Enabled by the multilinguality of SpeechMatrix, we also explore multilingual speech-to-speech translation, a topic which was addressed by few other works. We also demonstrate that model pre-training and sparse scaling using Mixture-of-Experts bring large gains to translation performance. The mined data and models will be publicly released",
}
| We present SpeechMatrix, a large-scale multilingual corpus of speech-to-speech translations mined from real speech of European Parliament recordings. It contains speech alignments in 136 language pairs with a total of 418 thousand hours of speech. To evaluate the quality of this parallel speech, we train bilingual speech-to-speech translation models on mined data only and establish extensive baseline results on EuroParl-ST, VoxPopuli and FLEURS test sets. Enabled by the multilinguality of SpeechMatrix, we also explore multilingual speech-to-speech translation, a topic which was addressed by few other works. We also demonstrate that model pre-training and sparse scaling using Mixture-of-Experts bring large gains to translation performance. The mined data and models will be publicly released | [
"Duquenne, Paul-Ambroise",
"Gong, Hongyu",
"Dong, Ning",
"Du, Jingfei",
"Lee, Ann",
"Goswami, Vedanuj",
"Wang, Changhan",
"Pino, Juan",
"Sagot, Beno{\\^\\i}t",
"Schwenk, Holger"
] | SpeechMatrix: A Large-Scale Mined Corpus of Multilingual Speech-to-Speech Translations | acl-long.899 | Poster | 2211.04508 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |