Datasets:

bibtex_url
stringlengths
41
53
proceedings
stringlengths
38
50
bibtext
stringlengths
566
3.75k
abstract
stringlengths
4
3.1k
authors
sequencelengths
1
66
title
stringlengths
12
172
id
stringlengths
7
19
type
stringclasses
2 values
arxiv_id
stringlengths
0
10
GitHub
sequencelengths
1
1
paper_page
stringlengths
0
40
n_linked_authors
int64
-1
21
upvotes
int64
-1
116
num_comments
int64
-1
11
n_authors
int64
-1
61
Models
sequencelengths
0
100
Datasets
sequencelengths
0
100
Spaces
sequencelengths
0
100
old_Models
sequencelengths
0
100
old_Datasets
sequencelengths
0
100
old_Spaces
sequencelengths
0
100
paper_page_exists_pre_conf
int64
0
1
https://aclanthology.org/2024.emnlp-main.601.bib
https://aclanthology.org/2024.emnlp-main.601/
@inproceedings{yang-etal-2024-datatales, title = "{D}ata{T}ales: A Benchmark for Real-World Intelligent Data Narration", author = "Yang, Yajing and Liu, Qian and Kan, Min-Yen", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.601", pages = "10764--10788", abstract = "We introduce DataTales, a novel benchmark designed to assess the proficiency of language models in data narration, a task crucial for transforming complex tabular data into accessible narratives. Existing benchmarks often fall short in capturing the requisite analytical complexity for practical applications. DataTales addresses this gap by offering 4.9k financial reports paired with corresponding market data, showcasing the demand for models to create clear narratives and analyze large datasets while understanding specialized terminology in the field. Our findings highlights the significant challenge that language models face in achieving the necessary precision and analytical depth for proficient data narration, suggesting promising avenues for future model development and evaluation methodologies.", }
We introduce DataTales, a novel benchmark designed to assess the proficiency of language models in data narration, a task crucial for transforming complex tabular data into accessible narratives. Existing benchmarks often fall short in capturing the requisite analytical complexity for practical applications. DataTales addresses this gap by offering 4.9k financial reports paired with corresponding market data, showcasing the demand for models to create clear narratives and analyze large datasets while understanding specialized terminology in the field. Our findings highlights the significant challenge that language models face in achieving the necessary precision and analytical depth for proficient data narration, suggesting promising avenues for future model development and evaluation methodologies.
[ "Yang, Yajing", "Liu, Qian", "Kan, Min-Yen" ]
DataTales: A Benchmark for Real-World Intelligent Data Narration
emnlp-main.601
Poster
2410.17859
[ "https://github.com/yajingyang/datatales" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.602.bib
https://aclanthology.org/2024.emnlp-main.602/
@inproceedings{yi-etal-2024-towards, title = "Towards Fast Multilingual {LLM} Inference: Speculative Decoding and Specialized Drafters", author = "Yi, Euiin and Kim, Taehyeon and Jeung, Hongseok and Chang, Du-Seong and Yun, Se-Young", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.602", pages = "10789--10802", abstract = "Large language models (LLMs) have revolutionized natural language processing and broadened their applicability across diverse commercial applications. However, the deployment of these models is constrained by high inference time in multilingual settings. To mitigate this challenge, this paper explores a training recipe of an assistant model in speculative decoding, which are leveraged to draft and-then its future tokens are verified by the target LLM. We show that language-specific draft models, optimized through a targeted pretrain-and-finetune strategy, substantially brings a speedup of inference time compared to the previous methods. We validate these models across various languages in inference time, out-of-domain speedup, and GPT-4o evaluation.", }
Large language models (LLMs) have revolutionized natural language processing and broadened their applicability across diverse commercial applications. However, the deployment of these models is constrained by high inference time in multilingual settings. To mitigate this challenge, this paper explores a training recipe of an assistant model in speculative decoding, which are leveraged to draft and-then its future tokens are verified by the target LLM. We show that language-specific draft models, optimized through a targeted pretrain-and-finetune strategy, substantially brings a speedup of inference time compared to the previous methods. We validate these models across various languages in inference time, out-of-domain speedup, and GPT-4o evaluation.
[ "Yi, Euiin", "Kim, Taehyeon", "Jeung, Hongseok", "Chang, Du-Seong", "Yun, Se-Young" ]
Towards Fast Multilingual LLM Inference: Speculative Decoding and Specialized Drafters
emnlp-main.602
Poster
2406.16758
[ "https://github.com/Kthyeon/Multilingual-SpecBench" ]
https://huggingface.co/papers/2406.16758
3
19
2
5
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.603.bib
https://aclanthology.org/2024.emnlp-main.603/
@inproceedings{ye-etal-2024-globesumm, title = "{G}lobe{S}umm: A Challenging Benchmark Towards Unifying Multi-lingual, Cross-lingual and Multi-document News Summarization", author = "Ye, Yangfan and Feng, Xiachong and Feng, Xiaocheng and Ma, Weitao and Qin, Libo and Xu, Dongliang and Yang, Qing and Liu, Hongtao and Qin, Bing", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.603", pages = "10803--10821", abstract = "News summarization in today{'}s global scene can be daunting with its flood of multilingual content and varied viewpoints from different sources. However, current studies often neglect such real-world scenarios as they tend to focus solely on either single-language or single-document tasks. To bridge this gap, we aim to unify Multi-lingual, Cross-lingual and Multi-document Summarization into a novel task, i.e., MCMS, which encapsulates the real-world requirements all-in-one. Nevertheless, the lack of a benchmark inhibits researchers from adequately studying this invaluable problem. To tackle this, we have meticulously constructed the GLOBESUMM dataset by first collecting a wealth of multilingual news reports and restructuring them into event-centric format. Additionally, we introduce the method of protocol-guided prompting for high-quality and cost-effective reference annotation. In MCMS, we also highlight the challenge of conflicts between news reports, in addition to the issues of redundancies and omissions, further enhancing the complexity of GLOBESUMM. Through extensive experimental analysis, we validate the quality of our dataset and elucidate the inherent challenges of the task. We firmly believe that GLOBESUMM, given its challenging nature, will greatly contribute to the multilingual communities and the evaluation of LLMs.", }
News summarization in today{'}s global scene can be daunting with its flood of multilingual content and varied viewpoints from different sources. However, current studies often neglect such real-world scenarios as they tend to focus solely on either single-language or single-document tasks. To bridge this gap, we aim to unify Multi-lingual, Cross-lingual and Multi-document Summarization into a novel task, i.e., MCMS, which encapsulates the real-world requirements all-in-one. Nevertheless, the lack of a benchmark inhibits researchers from adequately studying this invaluable problem. To tackle this, we have meticulously constructed the GLOBESUMM dataset by first collecting a wealth of multilingual news reports and restructuring them into event-centric format. Additionally, we introduce the method of protocol-guided prompting for high-quality and cost-effective reference annotation. In MCMS, we also highlight the challenge of conflicts between news reports, in addition to the issues of redundancies and omissions, further enhancing the complexity of GLOBESUMM. Through extensive experimental analysis, we validate the quality of our dataset and elucidate the inherent challenges of the task. We firmly believe that GLOBESUMM, given its challenging nature, will greatly contribute to the multilingual communities and the evaluation of LLMs.
[ "Ye, Yangfan", "Feng, Xiachong", "Feng, Xiaocheng", "Ma, Weitao", "Qin, Libo", "Xu, Dongliang", "Yang, Qing", "Liu, Hongtao", "Qin, Bing" ]
GlobeSumm: A Challenging Benchmark Towards Unifying Multi-lingual, Cross-lingual and Multi-document News Summarization
emnlp-main.603
Poster
2410.04087
[ "https://github.com/YYF-Tommy/GlobeSumm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.604.bib
https://aclanthology.org/2024.emnlp-main.604/
@inproceedings{blevins-etal-2024-breaking, title = "Breaking the Curse of Multilinguality with Cross-lingual Expert Language Models", author = "Blevins, Terra and Limisiewicz, Tomasz and Gururangan, Suchin and Li, Margaret and Gonen, Hila and Smith, Noah A. and Zettlemoyer, Luke", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.604", pages = "10822--10837", abstract = "Despite their popularity in non-English NLP, multilingual language models often underperform monolingual ones due to inter-language competition for model parameters. We propose Cross-lingual Expert Language Models (X-ELM), which mitigate this competition by independently training language models on subsets of the multilingual corpus. This process specializes X-ELMs to different languages while remaining effective as a multilingual ensemble. Our experiments show that when given the same compute budget, X-ELM outperforms jointly trained multilingual models across all 16 considered languages and that these gains transfer to downstream tasks. X-ELM provides additional benefits over performance improvements: new experts can be iteratively added, adapting X-ELM to new languages without catastrophic forgetting. Furthermore, training is asynchronous, reducing the hardware requirements for multilingual training and democratizing multilingual modeling.", }
Despite their popularity in non-English NLP, multilingual language models often underperform monolingual ones due to inter-language competition for model parameters. We propose Cross-lingual Expert Language Models (X-ELM), which mitigate this competition by independently training language models on subsets of the multilingual corpus. This process specializes X-ELMs to different languages while remaining effective as a multilingual ensemble. Our experiments show that when given the same compute budget, X-ELM outperforms jointly trained multilingual models across all 16 considered languages and that these gains transfer to downstream tasks. X-ELM provides additional benefits over performance improvements: new experts can be iteratively added, adapting X-ELM to new languages without catastrophic forgetting. Furthermore, training is asynchronous, reducing the hardware requirements for multilingual training and democratizing multilingual modeling.
[ "Blevins, Terra", "Limisiewicz, Tomasz", "Gururangan, Suchin", "Li, Margaret", "Gonen, Hila", "Smith, Noah A.", "Zettlemoyer, Luke" ]
Breaking the Curse of Multilinguality with Cross-lingual Expert Language Models
emnlp-main.604
Poster
2401.10440
[ "https://github.com/blvns/x-elm" ]
https://huggingface.co/papers/2401.10440
1
0
1
7
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.605.bib
https://aclanthology.org/2024.emnlp-main.605/
@inproceedings{liermann-etal-2024-insightful, title = "More Insightful Feedback for Tutoring: Enhancing Generation Mechanisms and Automatic Evaluation", author = "Liermann, Wencke and Huang, Jin-Xia and Lee, Yohan and Lee, Kong Joo", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.605", pages = "10838--10851", abstract = "Incorrect student answers can become valuable learning opportunities, provided that the student understands where they went wrong and why. To this end, rather than being given the correct answer, students should receive elaborated feedback on how to correct a mistake on their own. Highlighting the complex demands that the generation of such feedback places on a model{'}s input utilization abilities, we propose two extensions to the training pipeline. Firstly, we employ a KL regularization term between a standard and enriched input format to achieve more targeted input representations. Secondly, we add a preference optimization step to encourage student answer-adaptive feedback generation. The effectiveness of those extensions is underlined by a significant increase in model performance of 3.3 METEOR points. We go beyond traditional surface form-based metrics to assess two important dimensions of feedback quality, i.e., faithfulness and informativeness. Hereby, we are the first to propose an automatic metric measuring the degree to which feedback divulges the correct answer, that we call Informativeness Index $I^2$. We verify in how far each metric captures feedback quality.", }
Incorrect student answers can become valuable learning opportunities, provided that the student understands where they went wrong and why. To this end, rather than being given the correct answer, students should receive elaborated feedback on how to correct a mistake on their own. Highlighting the complex demands that the generation of such feedback places on a model{'}s input utilization abilities, we propose two extensions to the training pipeline. Firstly, we employ a KL regularization term between a standard and enriched input format to achieve more targeted input representations. Secondly, we add a preference optimization step to encourage student answer-adaptive feedback generation. The effectiveness of those extensions is underlined by a significant increase in model performance of 3.3 METEOR points. We go beyond traditional surface form-based metrics to assess two important dimensions of feedback quality, i.e., faithfulness and informativeness. Hereby, we are the first to propose an automatic metric measuring the degree to which feedback divulges the correct answer, that we call Informativeness Index $I^2$. We verify in how far each metric captures feedback quality.
[ "Liermann, Wencke", "Huang, Jin-Xia", "Lee, Yohan", "Lee, Kong Joo" ]
More Insightful Feedback for Tutoring: Enhancing Generation Mechanisms and Automatic Evaluation
emnlp-main.605
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.606.bib
https://aclanthology.org/2024.emnlp-main.606/
@inproceedings{chung-etal-2024-stable, title = "Stable Language Model Pre-training by Reducing Embedding Variability", author = "Chung, Woojin and Hong, Jiwoo and An, Na Min and Thorne, James and Yun, Se-Young", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.606", pages = "10852--10863", abstract = "Stable pre-training is essential for achieving better-performing language models. However, tracking pre-training stability is impractical due to high computational costs. We study Token Embedding Variability as a simple proxy to estimate pre-training stability. We theoretically and empirically demonstrate that Multi-head Low-Rank Attention acts as a fundamental approach to reducing instability. This is supported by empirical findings on variants on GPT-2, demonstrating improved stability and lower perplexities, even at deeper layer counts.", }
Stable pre-training is essential for achieving better-performing language models. However, tracking pre-training stability is impractical due to high computational costs. We study Token Embedding Variability as a simple proxy to estimate pre-training stability. We theoretically and empirically demonstrate that Multi-head Low-Rank Attention acts as a fundamental approach to reducing instability. This is supported by empirical findings on variants on GPT-2, demonstrating improved stability and lower perplexities, even at deeper layer counts.
[ "Chung, Woojin", "Hong, Jiwoo", "An, Na Min", "Thorne, James", "Yun, Se-Young" ]
Stable Language Model Pre-training by Reducing Embedding Variability
emnlp-main.606
Poster
2409.07787
[ "" ]
https://huggingface.co/papers/2409.07787
0
0
0
5
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.607.bib
https://aclanthology.org/2024.emnlp-main.607/
@inproceedings{manohar-pillai-2024-lost, title = "What is lost in Normalization? Exploring Pitfalls in Multilingual {ASR} Model Evaluations", author = "Manohar, Kavya and Pillai, Leena G", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.607", pages = "10864--10869", abstract = "This paper explores the pitfalls in evaluating multilingual automatic speech recognition (ASR) models, with a particular focus on Indic language scripts. We investigate the text normalization routine employed by leading ASR models, including OpenAI Whisper, Meta{'}s MMS, Seamless, and Assembly AI{'}s Conformer, and their unintended consequences on performance metrics. Our research reveals that current text normalization practices, while aiming to standardize ASR outputs for fair comparison, by removing inconsistencies such as variations in spelling, punctuation, and special characters, are fundamentally flawed when applied to Indic scripts. Through empirical analysis using text similarity scores and in-depth linguistic examination, we demonstrate that these flaws lead to artificially improved performance metrics for Indic languages. We conclude by proposing a shift towards developing text normalization routines that leverage native linguistic expertise, ensuring more robust and accurate evaluations of multilingual ASR models.", }
This paper explores the pitfalls in evaluating multilingual automatic speech recognition (ASR) models, with a particular focus on Indic language scripts. We investigate the text normalization routine employed by leading ASR models, including OpenAI Whisper, Meta{'}s MMS, Seamless, and Assembly AI{'}s Conformer, and their unintended consequences on performance metrics. Our research reveals that current text normalization practices, while aiming to standardize ASR outputs for fair comparison, by removing inconsistencies such as variations in spelling, punctuation, and special characters, are fundamentally flawed when applied to Indic scripts. Through empirical analysis using text similarity scores and in-depth linguistic examination, we demonstrate that these flaws lead to artificially improved performance metrics for Indic languages. We conclude by proposing a shift towards developing text normalization routines that leverage native linguistic expertise, ensuring more robust and accurate evaluations of multilingual ASR models.
[ "Manohar, Kavya", "Pillai, Leena G" ]
What is lost in Normalization? Exploring Pitfalls in Multilingual ASR Model Evaluations
emnlp-main.607
Poster
2409.02449
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.608.bib
https://aclanthology.org/2024.emnlp-main.608/
@inproceedings{schiller-etal-2024-diversity, title = "Diversity Over Size: On the Effect of Sample and Topic Sizes for Topic-Dependent Argument Mining Datasets", author = "Schiller, Benjamin and Daxenberger, Johannes and Waldis, Andreas and Gurevych, Iryna", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.608", pages = "10870--10887", abstract = "Topic-Dependent Argument Mining (TDAM), that is extracting and classifying argument components for a specific topic from large document sources, is an inherently difficult task for machine learning models and humans alike, as large TDAM datasets are rare and recognition of argument components requires expert knowledge. The task becomes even more difficult if it also involves stance detection of retrieved arguments. In this work, we investigate the effect of TDAM dataset composition in few- and zero-shot settings. Our findings show that, while fine-tuning is mandatory to achieve acceptable model performance, using carefully composed training samples and reducing the training sample size by up to almost 90{\%} can still yield 95{\%} of the maximum performance. This gain is consistent across three TDAM tasks on three different datasets. We also publish a new dataset and code for future benchmarking.", }
Topic-Dependent Argument Mining (TDAM), that is extracting and classifying argument components for a specific topic from large document sources, is an inherently difficult task for machine learning models and humans alike, as large TDAM datasets are rare and recognition of argument components requires expert knowledge. The task becomes even more difficult if it also involves stance detection of retrieved arguments. In this work, we investigate the effect of TDAM dataset composition in few- and zero-shot settings. Our findings show that, while fine-tuning is mandatory to achieve acceptable model performance, using carefully composed training samples and reducing the training sample size by up to almost 90{\%} can still yield 95{\%} of the maximum performance. This gain is consistent across three TDAM tasks on three different datasets. We also publish a new dataset and code for future benchmarking.
[ "Schiller, Benjamin", "Daxenberger, Johannes", "Waldis, Andreas", "Gurevych, Iryna" ]
Diversity Over Size: On the Effect of Sample and Topic Sizes for Topic-Dependent Argument Mining Datasets
emnlp-main.608
Poster
2205.11472
[ "https://github.com/ukplab/argument-topic-diversity" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.609.bib
https://aclanthology.org/2024.emnlp-main.609/
@inproceedings{sun-etal-2024-kiss, title = "Kiss up, Kick down: Exploring Behavioral Changes in Multi-modal Large Language Models with Assigned Visual Personas", author = "Sun, Seungjong and Lee, Eungu and Baek, Seo Yeon and Hwang, Seunghyun and Lee, Wonbyung and Nan, Dongyan and Jansen, Bernard J and Kim, Jang Hyun", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.609", pages = "10888--10901", abstract = "This study is the first to explore whether multi-modal large language models (LLMs) can align their behaviors with visual personas, addressing a significant gap in the literature that predominantly focuses on text-based personas. We developed a novel dataset of 5K fictional avatar images for assignment as visual personas to LLMs, and analyzed their negotiation behaviors based on the visual traits depicted in these images, with a particular focus on aggressiveness. The results indicate that LLMs assess the aggressiveness of images in a manner similar to humans and output more aggressive negotiation behaviors when prompted with an aggressive visual persona. Interestingly, the LLM exhibited more aggressive negotiation behaviors when the opponent{'}s image appeared less aggressive than their own, and less aggressive behaviors when the opponent{'}s image appeared more aggressive.", }
This study is the first to explore whether multi-modal large language models (LLMs) can align their behaviors with visual personas, addressing a significant gap in the literature that predominantly focuses on text-based personas. We developed a novel dataset of 5K fictional avatar images for assignment as visual personas to LLMs, and analyzed their negotiation behaviors based on the visual traits depicted in these images, with a particular focus on aggressiveness. The results indicate that LLMs assess the aggressiveness of images in a manner similar to humans and output more aggressive negotiation behaviors when prompted with an aggressive visual persona. Interestingly, the LLM exhibited more aggressive negotiation behaviors when the opponent{'}s image appeared less aggressive than their own, and less aggressive behaviors when the opponent{'}s image appeared more aggressive.
[ "Sun, Seungjong", "Lee, Eungu", "Baek, Seo Yeon", "Hwang, Seunghyun", "Lee, Wonbyung", "Nan, Dongyan", "Jansen, Bernard J", "Kim, Jang Hyun" ]
Kiss up, Kick down: Exploring Behavioral Changes in Multi-modal Large Language Models with Assigned Visual Personas
emnlp-main.609
Poster
2410.03181
[ "https://github.com/RSS-researcher/LLM_visual_persona" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.610.bib
https://aclanthology.org/2024.emnlp-main.610/
@inproceedings{zhu-etal-2024-atm, title = "{ATM}: Adversarial Tuning Multi-agent System Makes a Robust Retrieval-Augmented Generator", author = "Zhu, Junda and Yan, Lingyong and Shi, Haibo and Yin, Dawei and Sha, Lei", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.610", pages = "10902--10919", abstract = "Large language models (LLMs) are proven to benefit a lot from retrieval-augmented generation (RAG) in alleviating hallucinations confronted with knowledge-intensive questions. RAG adopts information retrieval techniques to inject external knowledge from semantic-relevant documents as input contexts. However, due to today{'}s Internet being flooded with numerous noisy and fabricating content, it is inevitable that RAG systems are vulnerable to these noises and prone to respond incorrectly. To this end, we propose to optimize the retrieval-augmented Generator with a Adversarial Tuning Multi-agent system **(ATM)**. The ATM steers the Generator to have a robust perspective of useful documents for question answering with the help of an auxiliary Attacker agent. The Generator and the Attacker are tuned adversarially for several iterations. After rounds of multi-agent iterative tuning, the Generator can eventually better discriminate useful documents amongst fabrications. The experimental results verify the effectiveness of ATM and we also observe that the Generator can achieve better performance compared to state-of-the-art baselines.", }
Large language models (LLMs) are proven to benefit a lot from retrieval-augmented generation (RAG) in alleviating hallucinations confronted with knowledge-intensive questions. RAG adopts information retrieval techniques to inject external knowledge from semantic-relevant documents as input contexts. However, due to today{'}s Internet being flooded with numerous noisy and fabricating content, it is inevitable that RAG systems are vulnerable to these noises and prone to respond incorrectly. To this end, we propose to optimize the retrieval-augmented Generator with a Adversarial Tuning Multi-agent system **(ATM)**. The ATM steers the Generator to have a robust perspective of useful documents for question answering with the help of an auxiliary Attacker agent. The Generator and the Attacker are tuned adversarially for several iterations. After rounds of multi-agent iterative tuning, the Generator can eventually better discriminate useful documents amongst fabrications. The experimental results verify the effectiveness of ATM and we also observe that the Generator can achieve better performance compared to state-of-the-art baselines.
[ "Zhu, Junda", "Yan, Lingyong", "Shi, Haibo", "Yin, Dawei", "Sha, Lei" ]
ATM: Adversarial Tuning Multi-agent System Makes a Robust Retrieval-Augmented Generator
emnlp-main.610
Poster
2405.18111
[ "" ]
https://huggingface.co/papers/2405.18111
2
0
0
5
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.611.bib
https://aclanthology.org/2024.emnlp-main.611/
@inproceedings{chen-etal-2024-dynamic, title = "Dynamic Multi-granularity Attribution Network for Aspect-based Sentiment Analysis", author = "Chen, Yanjiang and Zhang, Kai and Hu, Feng and Wang, Xianquan and Li, Ruikang and Liu, Qi", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.611", pages = "10920--10931", abstract = "Aspect-based sentiment analysis (ABSA) aims to predict the sentiment polarity of a specific aspect within a given sentence. Most existing methods predominantly leverage semantic or syntactic information based on attention scores, which are susceptible to interference caused by irrelevant contexts and often lack sentiment knowledge at a data-specific level. In this paper, we propose a novel Dynamic Multi-granularity Attribution Network (DMAN) from the perspective of attribution. Initially, we leverage Integrated Gradients to dynamically extract attribution scores for each token, which contain underlying reasoning knowledge for sentiment analysis. Subsequently, we aggregate attribution representations from multiple semantic granularities in natural language, enhancing a profound understanding of the semantics. Finally, we integrate attribution scores with syntactic information to capture the relationships between aspects and their relevant contexts more accurately during the sentence understanding process. Extensive experiments on five benchmark datasets demonstrate the effectiveness of our proposed method.", }
Aspect-based sentiment analysis (ABSA) aims to predict the sentiment polarity of a specific aspect within a given sentence. Most existing methods predominantly leverage semantic or syntactic information based on attention scores, which are susceptible to interference caused by irrelevant contexts and often lack sentiment knowledge at a data-specific level. In this paper, we propose a novel Dynamic Multi-granularity Attribution Network (DMAN) from the perspective of attribution. Initially, we leverage Integrated Gradients to dynamically extract attribution scores for each token, which contain underlying reasoning knowledge for sentiment analysis. Subsequently, we aggregate attribution representations from multiple semantic granularities in natural language, enhancing a profound understanding of the semantics. Finally, we integrate attribution scores with syntactic information to capture the relationships between aspects and their relevant contexts more accurately during the sentence understanding process. Extensive experiments on five benchmark datasets demonstrate the effectiveness of our proposed method.
[ "Chen, Yanjiang", "Zhang, Kai", "Hu, Feng", "Wang, Xianquan", "Li, Ruikang", "Liu, Qi" ]
Dynamic Multi-granularity Attribution Network for Aspect-based Sentiment Analysis
emnlp-main.611
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.612.bib
https://aclanthology.org/2024.emnlp-main.612/
@inproceedings{masoudian-etal-2024-unlabeled, title = "Unlabeled Debiasing in Downstream Tasks via Class-wise Low Variance Regularization", author = "Masoudian, Shahed and Frohmann, Markus and Rekabsaz, Navid and Schedl, Markus", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.612", pages = "10932--10938", abstract = "Language models frequently inherit societal biases from their training data. Numerous techniques have been proposed to mitigate these biases during both the pre-training and fine-tuning stages. However, fine-tuning a pre-trained debiased language model on a downstream task can reintroduce biases into the model. Additionally, existing debiasing methods for downstream tasks either (i) require labels of protected attributes (e.g., age, race, or political views) that are often not available or (ii) rely on indicators of bias, which restricts their applicability to gender debiasing since they rely on gender-specific words. To address this, we introduce a novel debiasing regularization technique based on the class-wise variance of embeddings. Crucially, our method does not require attribute labels and targets any attribute, thus addressing the shortcomings of existing debiasing methods. Our experiments on encoder language models and three datasets demonstrate that our method outperforms existing strong debiasing baselines that rely on target attribute labels while maintaining performance on the target task.", }
Language models frequently inherit societal biases from their training data. Numerous techniques have been proposed to mitigate these biases during both the pre-training and fine-tuning stages. However, fine-tuning a pre-trained debiased language model on a downstream task can reintroduce biases into the model. Additionally, existing debiasing methods for downstream tasks either (i) require labels of protected attributes (e.g., age, race, or political views) that are often not available or (ii) rely on indicators of bias, which restricts their applicability to gender debiasing since they rely on gender-specific words. To address this, we introduce a novel debiasing regularization technique based on the class-wise variance of embeddings. Crucially, our method does not require attribute labels and targets any attribute, thus addressing the shortcomings of existing debiasing methods. Our experiments on encoder language models and three datasets demonstrate that our method outperforms existing strong debiasing baselines that rely on target attribute labels while maintaining performance on the target task.
[ "Masoudian, Shahed", "Frohmann, Markus", "Rekabsaz, Navid", "Schedl, Markus" ]
Unlabeled Debiasing in Downstream Tasks via Class-wise Low Variance Regularization
emnlp-main.612
Poster
2409.19541
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.613.bib
https://aclanthology.org/2024.emnlp-main.613/
@inproceedings{jian-etal-2024-large, title = "Large Language Models Know What is Key Visual Entity: An {LLM}-assisted Multimodal Retrieval for {VQA}", author = "Jian, Pu and Yu, Donglei and Zhang, Jiajun", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.613", pages = "10939--10956", abstract = "Visual question answering (VQA) tasks, often performed by visual language model (VLM), face challenges with long-tail knowledge. Recent retrieval-augmented VQA (RA-VQA) systems address this by retrieving and integrating external knowledge sources. However, these systems still suffer from redundant visual information irrelevant to the question during retrieval. To address these issues, in this paper, we propose LLM-RA, a novel method leveraging the reasoning capability of a large language model (LLM) to identify key visual entities, thus minimizing the impact of irrelevant information in the query of retriever. Furthermore, key visual entities are independently encoded for multimodal joint retrieval, preventing cross-entity interference. Experimental results demonstrate that our method outperforms other strong RA-VQA systems. In two knowledge-intensive VQA benchmarks, our method achieves the new state-of-the-art performance among those with similar scale of parameters and even performs comparably to models with 1-2 orders larger parameters.", }
Visual question answering (VQA) tasks, often performed by visual language model (VLM), face challenges with long-tail knowledge. Recent retrieval-augmented VQA (RA-VQA) systems address this by retrieving and integrating external knowledge sources. However, these systems still suffer from redundant visual information irrelevant to the question during retrieval. To address these issues, in this paper, we propose LLM-RA, a novel method leveraging the reasoning capability of a large language model (LLM) to identify key visual entities, thus minimizing the impact of irrelevant information in the query of retriever. Furthermore, key visual entities are independently encoded for multimodal joint retrieval, preventing cross-entity interference. Experimental results demonstrate that our method outperforms other strong RA-VQA systems. In two knowledge-intensive VQA benchmarks, our method achieves the new state-of-the-art performance among those with similar scale of parameters and even performs comparably to models with 1-2 orders larger parameters.
[ "Jian, Pu", "Yu, Donglei", "Zhang, Jiajun" ]
Large Language Models Know What is Key Visual Entity: An LLM-assisted Multimodal Retrieval for VQA
emnlp-main.613
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.614.bib
https://aclanthology.org/2024.emnlp-main.614/
@inproceedings{yang-etal-2024-towards-probing, title = "Towards Probing Speech-Specific Risks in Large Multimodal Models: A Taxonomy, Benchmark, and Insights", author = "Yang, Hao and Qu, Lizhen and Shareghi, Ehsan and Haf, Reza", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.614", pages = "10957--10973", abstract = "Large Multimodal Models (LMMs) have achieved great success recently, demonstrating a strong capability to understand multimodal information and to interact with human users. Despite the progress made, the challenge of detecting high-risk interactions in multimodal settings, and in particular in speech modality, remains largely unexplored. Conventional research on risk for speech modality primarily emphasises the content (e.g., what is captured as transcription). However, in speech-based interactions, paralinguistic cues in audio can significantly alter the intended meaning behind utterances. In this work, we propose a speech-specific risk taxonomy, covering 8 risk categories under hostility (malicious sarcasm and threats), malicious imitation (age, gender, ethnicity), and stereotypical biases (age, gender, ethnicity). Based on the taxonomy, we create a small-scale dataset for evaluating current LMMs capability in detecting these categories of risk. We observe even the latest models remain ineffective to detect various paralinguistic-specific risks in speech (e.g., Gemini 1.5 Pro is performing only slightly above random baseline). Warning: this paper contains biased and offensive examples.", }
Large Multimodal Models (LMMs) have achieved great success recently, demonstrating a strong capability to understand multimodal information and to interact with human users. Despite the progress made, the challenge of detecting high-risk interactions in multimodal settings, and in particular in speech modality, remains largely unexplored. Conventional research on risk for speech modality primarily emphasises the content (e.g., what is captured as transcription). However, in speech-based interactions, paralinguistic cues in audio can significantly alter the intended meaning behind utterances. In this work, we propose a speech-specific risk taxonomy, covering 8 risk categories under hostility (malicious sarcasm and threats), malicious imitation (age, gender, ethnicity), and stereotypical biases (age, gender, ethnicity). Based on the taxonomy, we create a small-scale dataset for evaluating current LMMs capability in detecting these categories of risk. We observe even the latest models remain ineffective to detect various paralinguistic-specific risks in speech (e.g., Gemini 1.5 Pro is performing only slightly above random baseline). Warning: this paper contains biased and offensive examples.
[ "Yang, Hao", "Qu, Lizhen", "Shareghi, Ehsan", "Haf, Reza" ]
Towards Probing Speech-Specific Risks in Large Multimodal Models: A Taxonomy, Benchmark, and Insights
emnlp-main.614
Poster
2406.17430
[ "https://github.com/YangHao97/speech_specific_risk" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.615.bib
https://aclanthology.org/2024.emnlp-main.615/
@inproceedings{bhan-etal-2024-self, title = "Self-{AMPLIFY}: Improving Small Language Models with Self Post Hoc Explanations", author = {Bhan, Milan and Vittaut, Jean-No{\"e}l and Chesneau, Nicolas and Lesot, Marie-Jeanne}, editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.615", pages = "10974--10991", abstract = "Incorporating natural language rationales in the prompt and In-Context Learning (ICL) have led to a significant improvement of Large Language Models (LLMs) performance. However, generating high-quality rationales require human-annotation or the use of auxiliary proxy models. In this work, we propose Self-AMPLIFY to automatically generate rationales from post hoc explanation methods applied to Small Language Models (SLMs) to improve their own performance. Self-AMPLIFY is a 3-step method that targets samples, generates rationales and builds a final prompt to leverage ICL. Self-AMPLIFY performance is evaluated on four SLMs and five datasets requiring strong reasoning abilities. Self-AMPLIFY achieves good results against competitors, leading to strong accuracy improvement. Self-AMPLIFY is the first method to apply post hoc explanation methods to autoregressive language models to generate rationales to improve their own performance in a fully automated manner.", }
Incorporating natural language rationales in the prompt and In-Context Learning (ICL) have led to a significant improvement of Large Language Models (LLMs) performance. However, generating high-quality rationales require human-annotation or the use of auxiliary proxy models. In this work, we propose Self-AMPLIFY to automatically generate rationales from post hoc explanation methods applied to Small Language Models (SLMs) to improve their own performance. Self-AMPLIFY is a 3-step method that targets samples, generates rationales and builds a final prompt to leverage ICL. Self-AMPLIFY performance is evaluated on four SLMs and five datasets requiring strong reasoning abilities. Self-AMPLIFY achieves good results against competitors, leading to strong accuracy improvement. Self-AMPLIFY is the first method to apply post hoc explanation methods to autoregressive language models to generate rationales to improve their own performance in a fully automated manner.
[ "Bhan, Milan", "Vittaut, Jean-No{\\\"e}l", "Chesneau, Nicolas", "Lesot, Marie-Jeanne" ]
Self-AMPLIFY: Improving Small Language Models with Self Post Hoc Explanations
emnlp-main.615
Poster
2402.12038
[ "" ]
https://huggingface.co/papers/2402.12038
0
0
0
4
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.616.bib
https://aclanthology.org/2024.emnlp-main.616/
@inproceedings{xu-etal-2024-generator, title = "What are the Generator Preferences for End-to-end Task-Oriented Dialog System?", author = "Xu, Wanshi and Zhuang, Xianwei and Chen, Zhanpeng and Zhu, Zhihong and Cheng, Xuxin and Zou, Yuexian", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.616", pages = "10992--11003", abstract = "Fully end-to-end task-oriented dialogue (EToD) systems have shown excellent performance, which requires the ability to retrieve entities accurately for generation. Existing methods improve the accuracy of entity retrieval and construct data flows between retrieval results and response generator, achieving promising results. However, most of them suffer from the following issues: (1) The entity is retrieved by directly interacting with the context at a coarse-grained level, so the similarity score may be disturbed by irrelevant attributes; (2) The generator pays equal attention to retrieved entities and the context and does not learn the generation preferences for the current turn. In this paper, we propose a framework called Regulating Preferences of Generator (RPG) based on retrieval results, which includes a generator preference extractor, an entity retriever, and a generator with the gate-controlled preference regulator. The generator preference extractor not only improves the entity retriever by filtering the interference of irrelevant attributes but also provides more focused guidance to the generator by performing inter-turn attribute prediction. Experiments and analyses on three standard benchmarks show that our framework outperforms existing methods and improves the quality of the dialogue.", }
Fully end-to-end task-oriented dialogue (EToD) systems have shown excellent performance, which requires the ability to retrieve entities accurately for generation. Existing methods improve the accuracy of entity retrieval and construct data flows between retrieval results and response generator, achieving promising results. However, most of them suffer from the following issues: (1) The entity is retrieved by directly interacting with the context at a coarse-grained level, so the similarity score may be disturbed by irrelevant attributes; (2) The generator pays equal attention to retrieved entities and the context and does not learn the generation preferences for the current turn. In this paper, we propose a framework called Regulating Preferences of Generator (RPG) based on retrieval results, which includes a generator preference extractor, an entity retriever, and a generator with the gate-controlled preference regulator. The generator preference extractor not only improves the entity retriever by filtering the interference of irrelevant attributes but also provides more focused guidance to the generator by performing inter-turn attribute prediction. Experiments and analyses on three standard benchmarks show that our framework outperforms existing methods and improves the quality of the dialogue.
[ "Xu, Wanshi", "Zhuang, Xianwei", "Chen, Zhanpeng", "Zhu, Zhihong", "Cheng, Xuxin", "Zou, Yuexian" ]
What are the Generator Preferences for End-to-end Task-Oriented Dialog System?
emnlp-main.616
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.617.bib
https://aclanthology.org/2024.emnlp-main.617/
@inproceedings{wahle-etal-2024-paraphrase, title = "Paraphrase Types Elicit Prompt Engineering Capabilities", author = "Wahle, Jan Philip and Ruas, Terry and Xu, Yang and Gipp, Bela", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.617", pages = "11004--11033", abstract = "Much of the success of modern language models depends on finding a suitable prompt to instruct the model. Until now, it has been largely unknown how variations in the linguistic expression of prompts affect these models. This study systematically and empirically evaluates which linguistic features influence models through paraphrase types, i.e., different linguistic changes at particular positions. We measure behavioral changes for five models across 120 tasks and six families of paraphrases (i.e., morphology, syntax, lexicon, lexico-syntax, discourse, and others). We also control for other prompt engineering factors (e.g., prompt length, lexical diversity, and proximity to training data). Our results show a potential for language models to improve tasks when their prompts are adapted in specific paraphrase types (e.g., 6.7{\%} median gain in Mixtral 8x7B; 5.5{\%} in LLaMA 3 8B). In particular, changes in morphology and lexicon, i.e., the vocabulary used, showed promise in improving prompts. These findings contribute to developing more robust language models capable of handling variability in linguistic expression.", }
Much of the success of modern language models depends on finding a suitable prompt to instruct the model. Until now, it has been largely unknown how variations in the linguistic expression of prompts affect these models. This study systematically and empirically evaluates which linguistic features influence models through paraphrase types, i.e., different linguistic changes at particular positions. We measure behavioral changes for five models across 120 tasks and six families of paraphrases (i.e., morphology, syntax, lexicon, lexico-syntax, discourse, and others). We also control for other prompt engineering factors (e.g., prompt length, lexical diversity, and proximity to training data). Our results show a potential for language models to improve tasks when their prompts are adapted in specific paraphrase types (e.g., 6.7{\%} median gain in Mixtral 8x7B; 5.5{\%} in LLaMA 3 8B). In particular, changes in morphology and lexicon, i.e., the vocabulary used, showed promise in improving prompts. These findings contribute to developing more robust language models capable of handling variability in linguistic expression.
[ "Wahle, Jan Philip", "Ruas, Terry", "Xu, Yang", "Gipp, Bela" ]
Paraphrase Types Elicit Prompt Engineering Capabilities
emnlp-main.617
Poster
2406.19898
[ "https://github.com/jpwahle/prompt-paraphrase" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.618.bib
https://aclanthology.org/2024.emnlp-main.618/
@inproceedings{cao-etal-2024-vleu, title = "{VLEU}: a Method for Automatic Evaluation for Generalizability of Text-to-Image Models", author = "Cao, Jingtao and Zheng, Zhang and Wang, Hongru and Wong, Kam-Fai", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.618", pages = "11034--11049", abstract = "Progress in Text-to-Image (T2I) models has significantly advanced the generation of images from textual descriptions. Existing metrics, such as CLIP, effectively measure the semantic alignment between single prompts and their corresponding images. However, they fall short in evaluating a model{'}s ability to generalize across a broad spectrum of textual inputs. To address this gap, we propose the VLEU (\textbf{V}isual \textbf{L}anguage \textbf{E}valuation \textbf{U}nderstudy) metric. VLEU leverages the power of Large Language Models (LLMs) to sample from the visual text domain, encompassing the entire range of potential inputs for the T2I task, to generate a wide variety of visual text. The images generated by T2I models from these prompts are then assessed for their alignment with the input text using the CLIP model. VLEU quantitatively measures a model{'}s generalizability by computing the Kullback-Leibler (KL) divergence between the visual text marginal distribution and the conditional distribution over the images generated by the model. This provides a comprehensive metric for comparing the overall generalizability of T2I models, beyond single-prompt evaluations, and offers valuable insights during the finetuning process. Our experimental results demonstrate VLEU{'}s effectiveness in evaluating the generalizability of various T2I models, positioning it as an essential metric for future research and development in image synthesis from text prompts. Our code and data will be publicly available at \url{https://github.com/mio7690/VLEU}.", }
Progress in Text-to-Image (T2I) models has significantly advanced the generation of images from textual descriptions. Existing metrics, such as CLIP, effectively measure the semantic alignment between single prompts and their corresponding images. However, they fall short in evaluating a model{'}s ability to generalize across a broad spectrum of textual inputs. To address this gap, we propose the VLEU (\textbf{V}isual \textbf{L}anguage \textbf{E}valuation \textbf{U}nderstudy) metric. VLEU leverages the power of Large Language Models (LLMs) to sample from the visual text domain, encompassing the entire range of potential inputs for the T2I task, to generate a wide variety of visual text. The images generated by T2I models from these prompts are then assessed for their alignment with the input text using the CLIP model. VLEU quantitatively measures a model{'}s generalizability by computing the Kullback-Leibler (KL) divergence between the visual text marginal distribution and the conditional distribution over the images generated by the model. This provides a comprehensive metric for comparing the overall generalizability of T2I models, beyond single-prompt evaluations, and offers valuable insights during the finetuning process. Our experimental results demonstrate VLEU{'}s effectiveness in evaluating the generalizability of various T2I models, positioning it as an essential metric for future research and development in image synthesis from text prompts. Our code and data will be publicly available at \url{https://github.com/mio7690/VLEU}.
[ "Cao, Jingtao", "Zheng, Zhang", "Wang, Hongru", "Wong, Kam-Fai" ]
VLEU: a Method for Automatic Evaluation for Generalizability of Text-to-Image Models
emnlp-main.618
Poster
2409.14704
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.619.bib
https://aclanthology.org/2024.emnlp-main.619/
@inproceedings{zuo-etal-2024-towards, title = "Towards Online Continuous Sign Language Recognition and Translation", author = "Zuo, Ronglai and Wei, Fangyun and Mak, Brian", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.619", pages = "11050--11067", abstract = "Research on continuous sign language recognition (CSLR) is essential to bridge the communication gap between deaf and hearing individuals. Numerous previous studies have trained their models using the connectionist temporal classification (CTC) loss. During inference, these CTC-based models generally require the entire sign video as input to make predictions, a process known as offline recognition, which suffers from high latency and substantial memory usage. In this work, we take the first step towards online CSLR. Our approach consists of three phases: 1) developing a sign dictionary; 2) training an isolated sign language recognition model on the dictionary; and 3) employing a sliding window approach on the input sign sequence, feeding each sign clip to the optimized model for online recognition. Additionally, our online recognition model can be extended to support online translation by integrating a gloss-to-text network and can enhance the performance of any offline model. With these extensions, our online approach achieves new state-of-the-art performance on three popular benchmarks across various task settings. Code and models are available at https://github.com/FangyunWei/SLRT.", }
Research on continuous sign language recognition (CSLR) is essential to bridge the communication gap between deaf and hearing individuals. Numerous previous studies have trained their models using the connectionist temporal classification (CTC) loss. During inference, these CTC-based models generally require the entire sign video as input to make predictions, a process known as offline recognition, which suffers from high latency and substantial memory usage. In this work, we take the first step towards online CSLR. Our approach consists of three phases: 1) developing a sign dictionary; 2) training an isolated sign language recognition model on the dictionary; and 3) employing a sliding window approach on the input sign sequence, feeding each sign clip to the optimized model for online recognition. Additionally, our online recognition model can be extended to support online translation by integrating a gloss-to-text network and can enhance the performance of any offline model. With these extensions, our online approach achieves new state-of-the-art performance on three popular benchmarks across various task settings. Code and models are available at https://github.com/FangyunWei/SLRT.
[ "Zuo, Ronglai", "Wei, Fangyun", "Mak, Brian" ]
Towards Online Continuous Sign Language Recognition and Translation
emnlp-main.619
Poster
2401.05336
[ "https://github.com/FangyunWei/SLRT" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.620.bib
https://aclanthology.org/2024.emnlp-main.620/
@inproceedings{dai-etal-2024-mitigate, title = "Mitigate Extrinsic Social Bias in Pre-trained Language Models via Continuous Prompts Adjustment", author = "Dai, Yiwei and Gu, Hengrui and Wang, Ying and Wang, Xin", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.620", pages = "11068--11083", abstract = "Although pre-trained language models (PLMs) have been widely used in natural language understandings (NLU), they are still exposed to fairness issues. Most existing extrinsic debiasing methods rely on manually curated word lists for each sensitive groups to modify training data or to add regular constraints. However, these word lists are often limited by length and scope, resulting in the degradation performance of extrinsic bias mitigation. To address the aforementioned issues, we propose a **C**ontinuous **P**rompts **A**djustment **D**ebiasing method (CPAD), which generates continuous token lists from the entire vocabulary space and uses them to bridge the gap between outputs and targets in fairness learning process. Specifically, CPAD encapsulates fine-tuning objective and debiasing objectives into several independent prompts. To avoid the limitation of manual word lists, in fairness learning phase, we extract outputs from the entire vocabulary space via fine-tuned PLM. Then, we aggregate the outputs from the same sensitive group as continuous token lists to map the outputs into protected attribute labels. Finally, after we learn the debiasing prompts in the perspective of adversarial learning, we improve fairness by adjusting continuous prompts at model inference time. Through extensive experiments on three NLU tasks, we evaluate the debiasing performance from the perspectives of group fairness and fairness through unawareness. The experimental results show that CPAD outperforms all baselines in term of single and two-attributes debiasing performance.", }
Although pre-trained language models (PLMs) have been widely used in natural language understandings (NLU), they are still exposed to fairness issues. Most existing extrinsic debiasing methods rely on manually curated word lists for each sensitive groups to modify training data or to add regular constraints. However, these word lists are often limited by length and scope, resulting in the degradation performance of extrinsic bias mitigation. To address the aforementioned issues, we propose a **C**ontinuous **P**rompts **A**djustment **D**ebiasing method (CPAD), which generates continuous token lists from the entire vocabulary space and uses them to bridge the gap between outputs and targets in fairness learning process. Specifically, CPAD encapsulates fine-tuning objective and debiasing objectives into several independent prompts. To avoid the limitation of manual word lists, in fairness learning phase, we extract outputs from the entire vocabulary space via fine-tuned PLM. Then, we aggregate the outputs from the same sensitive group as continuous token lists to map the outputs into protected attribute labels. Finally, after we learn the debiasing prompts in the perspective of adversarial learning, we improve fairness by adjusting continuous prompts at model inference time. Through extensive experiments on three NLU tasks, we evaluate the debiasing performance from the perspectives of group fairness and fairness through unawareness. The experimental results show that CPAD outperforms all baselines in term of single and two-attributes debiasing performance.
[ "Dai, Yiwei", "Gu, Hengrui", "Wang, Ying", "Wang, Xin" ]
Mitigate Extrinsic Social Bias in Pre-trained Language Models via Continuous Prompts Adjustment
emnlp-main.620
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.621.bib
https://aclanthology.org/2024.emnlp-main.621/
@inproceedings{li-etal-2024-split, title = "Split and Merge: Aligning Position Biases in {LLM}-based Evaluators", author = "Li, Zongjie and Wang, Chaozheng and Ma, Pingchuan and Wu, Daoyuan and Wang, Shuai and Gao, Cuiyun and Liu, Yang", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.621", pages = "11084--11108", abstract = "Large language models (LLMs) have shown promise as automated evaluators for assessing the quality of answers generated by AI systems. However, LLM-based evaluators exhibit position bias, or inconsistency, when used to evaluate candidate answers in pairwise comparisons, favoring either the first or second answer regardless of content. To address this limitation, we propose PORTIA, an alignment-based system designed to mimic human comparison strategies to calibrate position bias in a lightweight yet effective manner. Specifically, PORTIA splits the answers into multiple segments, taking into account both length and semantics, and merges them back into a single prompt for evaluation by LLMs. Extensive experiments with six LLMs on 11,520 answer pairs demonstrate that PORTIA markedly enhances the consistency rates for all models and forms of comparison tested, achieving an average relative improvement of 47.46{\%}. It also enables PORTIA-enhanced GPT-3.5 to achieve agreement rates with humans comparable to GPT-4 and elevates GPT-4{'}s consistency rate up to 98{\%}. Subsequent human evaluations indicate that the PORTIA-enhanced GPT-3.5 model can even surpass standalone GPT-4 in terms of alignment with human evaluators, highlighting PORTIA{'}s ability to correct position bias, improve LLM consistency, and boost performance while keeping cost efficiency.", }
Large language models (LLMs) have shown promise as automated evaluators for assessing the quality of answers generated by AI systems. However, LLM-based evaluators exhibit position bias, or inconsistency, when used to evaluate candidate answers in pairwise comparisons, favoring either the first or second answer regardless of content. To address this limitation, we propose PORTIA, an alignment-based system designed to mimic human comparison strategies to calibrate position bias in a lightweight yet effective manner. Specifically, PORTIA splits the answers into multiple segments, taking into account both length and semantics, and merges them back into a single prompt for evaluation by LLMs. Extensive experiments with six LLMs on 11,520 answer pairs demonstrate that PORTIA markedly enhances the consistency rates for all models and forms of comparison tested, achieving an average relative improvement of 47.46{\%}. It also enables PORTIA-enhanced GPT-3.5 to achieve agreement rates with humans comparable to GPT-4 and elevates GPT-4{'}s consistency rate up to 98{\%}. Subsequent human evaluations indicate that the PORTIA-enhanced GPT-3.5 model can even surpass standalone GPT-4 in terms of alignment with human evaluators, highlighting PORTIA{'}s ability to correct position bias, improve LLM consistency, and boost performance while keeping cost efficiency.
[ "Li, Zongjie", "Wang, Chaozheng", "Ma, Pingchuan", "Wu, Daoyuan", "Wang, Shuai", "Gao, Cuiyun", "Liu, Yang" ]
Split and Merge: Aligning Position Biases in LLM-based Evaluators
emnlp-main.621
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.622.bib
https://aclanthology.org/2024.emnlp-main.622/
@inproceedings{saha-srihari-2024-integrating, title = "Integrating Argumentation and Hate-Speech-based Techniques for Countering Misinformation", author = "Saha, Sougata and Srihari, Rohini", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.622", pages = "11109--11124", abstract = "The proliferation of online misinformation presents a significant challenge, requiring scalable strategies for effective mitigation. While detection methods exist, current reactive approaches, like content flagging and banning, are short-term and insufficient. Additionally, advancements like large language models (LLMs) exacerbate the issue by enabling large-scale creation and dissemination of misinformation. Thus, sustainable, scalable solutions that encourage behavior change and broaden perspectives by persuading misinformants against their viewpoints or broadening their perspectives are needed. To this end, we propose persuasive LLM-based dialogue systems to tackle misinformation. However, challenges arise due to the lack of suitable datasets and formal frameworks for generating persuasive responses. Inspired by existing methods for countering online hate speech, we explore adapting counter-hate response strategies for misinformation. Since misinformation and hate speech often coexist despite differing intentions, we develop classifiers to identify and annotate response strategies from hate-speech counter-responses for use in misinformation scenarios. Human evaluations show a 91{\%} agreement on the applicability of these strategies to misinformation. Next, as a scalable counter-misinformation solution, we create an LLM-based argument graph framework that generates persuasive responses, using the strategies as control codes to adjust the style and content. Human evaluations and case studies demonstrate that our framework generates expert-like responses and is 14{\%} more engaging, 21{\%} more natural, and 18{\%} more factual than the best available alternatives.", }
The proliferation of online misinformation presents a significant challenge, requiring scalable strategies for effective mitigation. While detection methods exist, current reactive approaches, like content flagging and banning, are short-term and insufficient. Additionally, advancements like large language models (LLMs) exacerbate the issue by enabling large-scale creation and dissemination of misinformation. Thus, sustainable, scalable solutions that encourage behavior change and broaden perspectives by persuading misinformants against their viewpoints or broadening their perspectives are needed. To this end, we propose persuasive LLM-based dialogue systems to tackle misinformation. However, challenges arise due to the lack of suitable datasets and formal frameworks for generating persuasive responses. Inspired by existing methods for countering online hate speech, we explore adapting counter-hate response strategies for misinformation. Since misinformation and hate speech often coexist despite differing intentions, we develop classifiers to identify and annotate response strategies from hate-speech counter-responses for use in misinformation scenarios. Human evaluations show a 91{\%} agreement on the applicability of these strategies to misinformation. Next, as a scalable counter-misinformation solution, we create an LLM-based argument graph framework that generates persuasive responses, using the strategies as control codes to adjust the style and content. Human evaluations and case studies demonstrate that our framework generates expert-like responses and is 14{\%} more engaging, 21{\%} more natural, and 18{\%} more factual than the best available alternatives.
[ "Saha, Sougata", "Srihari, Rohini" ]
Integrating Argumentation and Hate-Speech-based Techniques for Countering Misinformation
emnlp-main.622
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.623.bib
https://aclanthology.org/2024.emnlp-main.623/
@inproceedings{xu-etal-2024-bpo, title = "{BPO}: Staying Close to the Behavior {LLM} Creates Better Online {LLM} Alignment", author = "Xu, Wenda and Li, Jiachen and Wang, William Yang and Li, Lei", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.623", pages = "11125--11139", abstract = "Direct alignment from preferences (DAP) has emerged as a promising paradigm for aligning large language models (LLMs) to human desiderata from pre-collected, offline preference datasets. While recent studies indicate that existing offline DAP methods can directly benefit from online training samples, we highlight the need to develop specific online DAP algorithms to fully harness the power of online training. Specifically, we identify that the learned LLM should adhere to the proximity of the behavior LLM, which collects the training samples. To this end, we propose online Preference Optimization in proximity to the Behavior LLM (BPO), emphasizing the importance of constructing a proper trust region for LLM alignment.We conduct extensive experiments to validate the effectiveness and applicability of our approach by integrating it with various DAP methods, resulting in significant performance improvements across a wide range of tasks when training with the same amount of preference data. Even when only introducing one additional data collection phase, our online BPO improves its offline DAP baseline from 72.0{\%} to 80.2{\%} on TL;DR and from 82.2{\%} to 89.1{\%} on Anthropic Helpfulness in terms of win rate against human reference text.", }
Direct alignment from preferences (DAP) has emerged as a promising paradigm for aligning large language models (LLMs) to human desiderata from pre-collected, offline preference datasets. While recent studies indicate that existing offline DAP methods can directly benefit from online training samples, we highlight the need to develop specific online DAP algorithms to fully harness the power of online training. Specifically, we identify that the learned LLM should adhere to the proximity of the behavior LLM, which collects the training samples. To this end, we propose online Preference Optimization in proximity to the Behavior LLM (BPO), emphasizing the importance of constructing a proper trust region for LLM alignment.We conduct extensive experiments to validate the effectiveness and applicability of our approach by integrating it with various DAP methods, resulting in significant performance improvements across a wide range of tasks when training with the same amount of preference data. Even when only introducing one additional data collection phase, our online BPO improves its offline DAP baseline from 72.0{\%} to 80.2{\%} on TL;DR and from 82.2{\%} to 89.1{\%} on Anthropic Helpfulness in terms of win rate against human reference text.
[ "Xu, Wenda", "Li, Jiachen", "Wang, William Yang", "Li, Lei" ]
BPO: Staying Close to the Behavior LLM Creates Better Online LLM Alignment
emnlp-main.623
Poster
2406.12168
[ "https://github.com/xu1998hz/bpo" ]
https://huggingface.co/papers/2406.12168
2
7
1
4
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.624.bib
https://aclanthology.org/2024.emnlp-main.624/
@inproceedings{shao-etal-2024-one2set, title = "{O}ne2{S}et + Large Language Model: Best Partners for Keyphrase Generation", author = "Shao, Liangying and Zhang, Liang and Peng, Minlong and Ma, Guoqi and Yue, Hao and Sun, Mingming and Su, Jinsong", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.624", pages = "11140--11153", abstract = "Keyphrase generation (KPG) aims to automatically generate a collection of phrases representing the core concepts of a given document. The dominant paradigms in KPG include one2seq and one2set. Recently, there has been increasing interest in applying large language models (LLMs) to KPG. Our preliminary experiments reveal that it is challenging for a single model to excel in both recall and precision. Further analysis shows that: 1) the one2set paradigm owns the advantage of high recall, but suffers from improper assignments of supervision signals during training; 2) LLMs are powerful in keyphrase selection, but existing selection methods often make redundant selections. Given these observations, we introduce a generate-then-select framework decomposing KPG into two steps, where we adopt a one2set-based model as generator to produce candidates and then use an LLM as selector to select keyphrases from these candidates. Particularly, we make two important improvements on our generator and selector: 1) we design an Optimal Transport-based assignment strategy to address the above improper assignments; 2) we model the keyphrase selection as a sequence labeling task to alleviate redundant selections. Experimental results on multiple benchmark datasets show that our framework significantly surpasses state-of-the-art models, especially in absent keyphrase prediction.", }
Keyphrase generation (KPG) aims to automatically generate a collection of phrases representing the core concepts of a given document. The dominant paradigms in KPG include one2seq and one2set. Recently, there has been increasing interest in applying large language models (LLMs) to KPG. Our preliminary experiments reveal that it is challenging for a single model to excel in both recall and precision. Further analysis shows that: 1) the one2set paradigm owns the advantage of high recall, but suffers from improper assignments of supervision signals during training; 2) LLMs are powerful in keyphrase selection, but existing selection methods often make redundant selections. Given these observations, we introduce a generate-then-select framework decomposing KPG into two steps, where we adopt a one2set-based model as generator to produce candidates and then use an LLM as selector to select keyphrases from these candidates. Particularly, we make two important improvements on our generator and selector: 1) we design an Optimal Transport-based assignment strategy to address the above improper assignments; 2) we model the keyphrase selection as a sequence labeling task to alleviate redundant selections. Experimental results on multiple benchmark datasets show that our framework significantly surpasses state-of-the-art models, especially in absent keyphrase prediction.
[ "Shao, Liangying", "Zhang, Liang", "Peng, Minlong", "Ma, Guoqi", "Yue, Hao", "Sun, Mingming", "Su, Jinsong" ]
One2Set + Large Language Model: Best Partners for Keyphrase Generation
emnlp-main.624
Poster
2410.03421
[ "https://github.com/deeplearnxmu/kpg-setllm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.625.bib
https://aclanthology.org/2024.emnlp-main.625/
@inproceedings{yuan-etal-2024-unlocking, title = "Unlocking Markets: A Multilingual Benchmark to Cross-Market Question Answering", author = "Yuan, Yifei and Deng, Yang and S{\o}gaard, Anders and Aliannejadi, Mohammad", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.625", pages = "11154--11169", abstract = "Users post numerous product-related questions on e-commerce platforms, affecting their purchase decisions. Product-related question answering (PQA) entails utilizing product-related resources to provide precise responses to users. We propose a novel task of Multilingual Cross-market Product-based Question Answering (MCPQA) and define the task as providing answers to product-related questions in a main marketplace by utilizing information from another resource-rich auxiliary marketplace in a multilingual context. We introduce a large-scale dataset comprising over 7 million questions from 17 marketplaces across 11 languages. We then perform automatic translation on the Electronics category of our dataset, naming it as McMarket. We focus on two subtasks: review-based answer generation and product-related question ranking. For each subtask, we label a subset of McMarket using an LLM and further evaluate the quality of the annotations via human assessment. We then conduct experiments to benchmark our dataset, using models ranging from traditional lexical models to LLMs in both single-market and cross-market scenarios across McMarket and the corresponding LLM subset. Results show that incorporating cross-market information significantly enhances performance in both tasks.", }
Users post numerous product-related questions on e-commerce platforms, affecting their purchase decisions. Product-related question answering (PQA) entails utilizing product-related resources to provide precise responses to users. We propose a novel task of Multilingual Cross-market Product-based Question Answering (MCPQA) and define the task as providing answers to product-related questions in a main marketplace by utilizing information from another resource-rich auxiliary marketplace in a multilingual context. We introduce a large-scale dataset comprising over 7 million questions from 17 marketplaces across 11 languages. We then perform automatic translation on the Electronics category of our dataset, naming it as McMarket. We focus on two subtasks: review-based answer generation and product-related question ranking. For each subtask, we label a subset of McMarket using an LLM and further evaluate the quality of the annotations via human assessment. We then conduct experiments to benchmark our dataset, using models ranging from traditional lexical models to LLMs in both single-market and cross-market scenarios across McMarket and the corresponding LLM subset. Results show that incorporating cross-market information significantly enhances performance in both tasks.
[ "Yuan, Yifei", "Deng, Yang", "S{\\o}gaard, Anders", "Aliannejadi, Mohammad" ]
Unlocking Markets: A Multilingual Benchmark to Cross-Market Question Answering
emnlp-main.625
Poster
2409.16025
[ "https://github.com/yfyuan01/mcpqa" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.626.bib
https://aclanthology.org/2024.emnlp-main.626/
@inproceedings{hong-etal-2024-orpo, title = "{ORPO}: Monolithic Preference Optimization without Reference Model", author = "Hong, Jiwoo and Lee, Noah and Thorne, James", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.626", pages = "11170--11189", abstract = "While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence. In this paper, we revisit SFT in the context of preference alignment, emphasizing that a minor penalty for the disfavored style is sufficient for preference alignment. Building on this foundation, we introduce a straightforward reference model-free monolithic odds ratio preference optimization algorithm, ORPO, eliminating the need for an additional preference alignment phase. We demonstrate, both empirically and theoretically, that the odds ratio is a sensible choice for contrasting favored and disfavored styles during SFT across diverse sizes from 125M to 7B. Specifically, fine-tuning Phi-2 (2.7B), Llama-2 (7B), and Mistral (7B) with ORPO on the UltraFeedback alone surpasses the performance of state-of-the-art language models including Llama-2 Chat and Zephyr with more than 7B and 13B parameters: achieving up to 12.20{\%} on AlpacaEval 2.0 (Figure 1), and 7.32 in MT-Bench (Table 2). We release code and model checkpoints for Mistral-ORPO-$\alpha$ (7B) and Mistral-ORPO-$\beta$ (7B).", }
While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence. In this paper, we revisit SFT in the context of preference alignment, emphasizing that a minor penalty for the disfavored style is sufficient for preference alignment. Building on this foundation, we introduce a straightforward reference model-free monolithic odds ratio preference optimization algorithm, ORPO, eliminating the need for an additional preference alignment phase. We demonstrate, both empirically and theoretically, that the odds ratio is a sensible choice for contrasting favored and disfavored styles during SFT across diverse sizes from 125M to 7B. Specifically, fine-tuning Phi-2 (2.7B), Llama-2 (7B), and Mistral (7B) with ORPO on the UltraFeedback alone surpasses the performance of state-of-the-art language models including Llama-2 Chat and Zephyr with more than 7B and 13B parameters: achieving up to 12.20{\%} on AlpacaEval 2.0 (Figure 1), and 7.32 in MT-Bench (Table 2). We release code and model checkpoints for Mistral-ORPO-$\alpha$ (7B) and Mistral-ORPO-$\beta$ (7B).
[ "Hong, Jiwoo", "Lee, Noah", "Thorne, James" ]
ORPO: Monolithic Preference Optimization without Reference Model
emnlp-main.626
Poster
2403.07691
[ "https://github.com/xfactlab/orpo" ]
https://huggingface.co/papers/2403.07691
3
62
6
3
[ "HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1", "shenzhi-wang/Llama3.1-8B-Chinese-Chat", "shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-8bit", "NTQAI/Nxcode-CQ-7B-orpo", "shenzhi-wang/Gemma-2-9B-Chinese-Chat", "allganize/Llama-3-Alpha-Ko-8B-Instruct", "kaist-ai/mistral-orpo-beta", "anakin87/gemma-2b-orpo", "shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-f16", "kaist-ai/mistral-orpo-capybara-7k", "mirlab/AkaLlama-llama3-70b-v0.1", "shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-4bit", "shenzhi-wang/Llama3-70B-Chinese-Chat-GGUF-4bit", "alvarobartt/Mistral-7B-v0.1-ORPO", "mirlab/AkaLlama-llama3-70b-v0.1-GGUF", "rinna/gemma-2-baku-2b-it", "ilsp/Meltemi-7B-Instruct-v1.5", "QuantFactory/Llama3-8B-Chinese-Chat-GGUF", "kaist-ai/mistral-orpo-alpha", "QuantFactory/Llama3.1-8B-Chinese-Chat-GGUF", "INX-TEXT/Bailong-orpo-7B", "allganize/Llama-3-Alpha-Ko-8B-Instruct-GPTQ", "shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat-8bit", "QuantFactory/gemma-2-baku-2b-it-GGUF", "mirlab/AkaLlama-llama3-70b-v0.1-exl2", "alvarobartt/Mistral-7B-v0.1-ORPO-PEFT", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw2.25", "LoneStriker/Nxcode-CQ-7B-orpo-GGUF", "shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat-f16", "shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat-4bit", "QuantFactory/Llama-3-Alpha-Ko-8B-Instruct-GGUF", "QuantFactory/Mistral-7B-v0.3-Chinese-Chat-GGUF", "QuantFactory/Nxcode-CQ-7B-orpo-GGUF", "QuantFactory/Meltemi-7B-Instruct-v1.5-GGUF", "trl-lib/Qwen2-0.5B-ORPO", "mav23/Llama3.1-8B-Chinese-Chat-GGUF", "raphael-gl/orpo", "Z3R6X/Llama-3-8B-ORPO-V1", "alvarobartt/mistral-orpo-mix", "LoneStriker/Llama3-8B-Chinese-Chat-6.0bpw-h6-exl2", "LoneStriker/Llama3-8B-Chinese-Chat-5.0bpw-h6-exl2", "LoneStriker/Llama3-8B-Chinese-Chat-8.0bpw-h8-exl2", "LoneStriker/Llama3-8B-Chinese-Chat-4.0bpw-h6-exl2", "LoneStriker/Llama3-8B-Chinese-Chat-3.0bpw-h6-exl2", "burtenshaw/Qwen1.5-0.5B-dpo-mix-7k-3000", "LiteLLMs/Llama3-8B-Chinese-Chat-GGUF", "RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf", "Amu/orpo-lora-phi2", "RichardErkhov/HuggingFaceH4_-_zephyr-orpo-141b-A35b-v0.1-gguf", "asiansoul/YACHT-Llama-3-Ko-8B-GGUF", "asiansoul/YACHT-Llama-3-Ko-8B", "sunatte/txt2sql", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw4", "adamo1139/Yi-34B-200K-HESOYAM-0905", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw4.2", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw4.4", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw4.6", "RichardErkhov/shenzhi-wang_-_Llama3-8B-Chinese-Chat-4bits", "Cynaptics/ftphi3", "RichardErkhov/shenzhi-wang_-_Llama3-8B-Chinese-Chat-8bits", "Ali-Forootani/OrpoLlama-3-8B_fine_tune_trl", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw4.8", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw5", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw2.5", "SinclairSchneider/zephyr-orpo-141b-A35b-v0.1-bnb-4bit", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw3", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw3.5", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw3.7", "CISCai/Nxcode-CQ-7B-orpo-SOTA-GGUF", "adamo1139/Yi-34B-200K-HESOYAM-0905-4.65bpw-EXL2", "RichardErkhov/anakin87_-_gemma-2b-orpo-gguf", "Apel-sin/nxcode-CQ-7B-orpo-exl2", "Orion-zhen/Llama3.1-70B-Chinese-Chat-4.0bpw-exl2", "Orion-zhen/Llama3.1-8B-Chinese-Chat-4.0bpw-exl2", "Orion-zhen/Llama3.1-8B-Chinese-Chat-6.5bpw-exl2", "BlouseJury/shenzhi-wang_Llama3-8B-Chinese-Chat-6.0bpw-exl2", "oz1115/jy_test_mode", "SungJoo/llama3-8b-instruct-orpo-ko", "RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf", "darkshapes/mistral-7b-v0.3-chinese-chat-4bit", "RichardErkhov/allganize_-_Llama-3-Alpha-Ko-8B-Instruct-gguf", "RichardErkhov/shenzhi-wang_-_Llama3-70B-Chinese-Chat-gguf", "RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-4bits", "RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-8bits", "yusheng123z/llama3.1", "RichardErkhov/shenzhi-wang_-_Mistral-7B-v0.3-Chinese-Chat-gguf", "RichardErkhov/rinna_-_gemma-2-baku-2b-it-gguf", "RichardErkhov/Amu_-_orpo-lora-phi2-gguf", "IlyaGusev/vikhr_nemo_orpo_dostoevsky_12b", "mav23/gemma-2-baku-2b-it-GGUF", "mav23/Nxcode-CQ-7B-orpo-GGUF", "RichardErkhov/mirlab_-_AkaLlama-llama3-70b-v0.1-gguf", "RichardErkhov/rinna_-_gemma-2-baku-2b-it-4bits", "RichardErkhov/rinna_-_gemma-2-baku-2b-it-8bits", "RioShiina/gemma-2-baku-2b-it-exl2", "comfyuiblog/Nxcode-CQ-7B-GGUF", "MachoMaheen/devdock4bit", "bhuvana-ak7/orpo_output", "bhuvana-ak7/orpo_trained_output_gpt_neo", "RichardErkhov/trl-lib_-_Qwen2-0.5B-ORPO-gguf" ]
[]
[ "bigcode/bigcode-models-leaderboard", "Vokturz/can-it-run-llm", "featherless-ai/try-this-model", "eduagarcia/open_pt_llm_leaderboard", "NiansuhAI/HFLLMs", "abidlabs/llm-explorer", "Hansimov/hf-llm-api", "llamafactory/Gemma-2-9B-Chinese-Chat", "prometheus-eval/BiGGen-Bench-Leaderboard", "DIBT/preference_data_by_language", "Justinrune/LLaMA-Factory", "rinna/gemma-2-baku-2b-it", "Granther/try-this-model", "NiansuhAI/Main", "kenken999/fastapi_django_main_live", "awacke1/ChatStreamlitMultiplayer", "Darok/Featherless-Feud", "li-qing/FIRE", "saikub/chatB", "DevsDoCode/DeepInfra", "AdarshJi/Deepifra", "Pyboxs/hf-llm-api", "emekaboris/try-this-model", "featherless-ai/klimbr-demo", "SharryOG/MIA", "Niansuh/HFLLMAPI", "tehen1/HuggingFaceH4-zephyr-orpo-141b-A35b-v0.1", "iseehf/hf-llm-api", "kingtest/hf-llm-api", "heidornj/hf-llm-api", "yxmnjxzx/hf-llm-api", "Shuddho/HFLLMAPI", "AIMaster7/HFLLMAPI2", "AhmedMagdy7/can-it-run-llm", "AIMaster7/HFLLMAPI3", "Mehmetist02/HuggingFaceH4-zephyr-orpo-141b-A35b-v0.1", "Kartik2503/cost-estimator", "phamvanla/HuggingFaceH4-zephyr-orpo-141b-A35b-v0.1", "real0x0a1/real0x0a1", "seawolf2357/kai-zephyr", "zerrin/test_space", "Bofeee5675/FIRE", "Nymbo/llm-explorer", "allknowingroger/llm-explorer", "alessoh/aihive", "brokerrobin/can-it-run-llm", "evelyn-lo/evelyn", "tianlong12/hf-llm-api", "lintasmediadanawa/hf-llm-api", "daniellp/HFLLMs", "EBTRFIO/hf-llm-api", "zjasper666/bf16_vs_fp8", "rickqaq/223", "Tanvir1337/can-it-run-llm", "realaer/src", "srinuksv/Main", "ColamanAI/hf-llm-api", "ajimenez78/python2cplusplus", "ArjDLAI/TestofNxcode-CQ-7B-orpo", "SC999/NV_Nemotron", "Oxygen588/anakin87-gemma-2b-orpo", "JohnKouf/meltemi_space", "Sakalti/Baku", "FunFunTeacher/Teacher.AI", "lqr9500/shenzhi-wang-Llama3.1-8B-Chinese-Chat", "VenturaSpectra/shenzhi-wang-Llama3.1-8B-Chinese-Chat", "kk189mm/shenzhi-wang-Llama3.1-8B-Chinese-Chat", "Granp/shenzhi-wang-Llama3.1-8B-Chinese-Chat", "beginor/test", "smarttang/blingsec" ]
[ "HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1", "shenzhi-wang/Llama3.1-8B-Chinese-Chat", "shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-8bit", "NTQAI/Nxcode-CQ-7B-orpo", "shenzhi-wang/Gemma-2-9B-Chinese-Chat", "allganize/Llama-3-Alpha-Ko-8B-Instruct", "kaist-ai/mistral-orpo-beta", "anakin87/gemma-2b-orpo", "shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-f16", "kaist-ai/mistral-orpo-capybara-7k", "mirlab/AkaLlama-llama3-70b-v0.1", "shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-4bit", "shenzhi-wang/Llama3-70B-Chinese-Chat-GGUF-4bit", "alvarobartt/Mistral-7B-v0.1-ORPO", "mirlab/AkaLlama-llama3-70b-v0.1-GGUF", "rinna/gemma-2-baku-2b-it", "ilsp/Meltemi-7B-Instruct-v1.5", "QuantFactory/Llama3-8B-Chinese-Chat-GGUF", "kaist-ai/mistral-orpo-alpha", "QuantFactory/Llama3.1-8B-Chinese-Chat-GGUF", "INX-TEXT/Bailong-orpo-7B", "allganize/Llama-3-Alpha-Ko-8B-Instruct-GPTQ", "shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat-8bit", "QuantFactory/gemma-2-baku-2b-it-GGUF", "mirlab/AkaLlama-llama3-70b-v0.1-exl2", "alvarobartt/Mistral-7B-v0.1-ORPO-PEFT", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw2.25", "LoneStriker/Nxcode-CQ-7B-orpo-GGUF", "shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat-f16", "shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat-4bit", "QuantFactory/Llama-3-Alpha-Ko-8B-Instruct-GGUF", "QuantFactory/Mistral-7B-v0.3-Chinese-Chat-GGUF", "QuantFactory/Nxcode-CQ-7B-orpo-GGUF", "QuantFactory/Meltemi-7B-Instruct-v1.5-GGUF", "trl-lib/Qwen2-0.5B-ORPO", "mav23/Llama3.1-8B-Chinese-Chat-GGUF", "raphael-gl/orpo", "Z3R6X/Llama-3-8B-ORPO-V1", "alvarobartt/mistral-orpo-mix", "LoneStriker/Llama3-8B-Chinese-Chat-6.0bpw-h6-exl2", "LoneStriker/Llama3-8B-Chinese-Chat-5.0bpw-h6-exl2", "LoneStriker/Llama3-8B-Chinese-Chat-8.0bpw-h8-exl2", "LoneStriker/Llama3-8B-Chinese-Chat-4.0bpw-h6-exl2", "LoneStriker/Llama3-8B-Chinese-Chat-3.0bpw-h6-exl2", "burtenshaw/Qwen1.5-0.5B-dpo-mix-7k-3000", "LiteLLMs/Llama3-8B-Chinese-Chat-GGUF", "RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf", "Amu/orpo-lora-phi2", "RichardErkhov/HuggingFaceH4_-_zephyr-orpo-141b-A35b-v0.1-gguf", "asiansoul/YACHT-Llama-3-Ko-8B-GGUF", "asiansoul/YACHT-Llama-3-Ko-8B", "sunatte/txt2sql", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw4", "adamo1139/Yi-34B-200K-HESOYAM-0905", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw4.2", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw4.4", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw4.6", "RichardErkhov/shenzhi-wang_-_Llama3-8B-Chinese-Chat-4bits", "Cynaptics/ftphi3", "RichardErkhov/shenzhi-wang_-_Llama3-8B-Chinese-Chat-8bits", "Ali-Forootani/OrpoLlama-3-8B_fine_tune_trl", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw4.8", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw5", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw2.5", "SinclairSchneider/zephyr-orpo-141b-A35b-v0.1-bnb-4bit", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw3", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw3.5", "blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw3.7", "CISCai/Nxcode-CQ-7B-orpo-SOTA-GGUF", "adamo1139/Yi-34B-200K-HESOYAM-0905-4.65bpw-EXL2", "RichardErkhov/anakin87_-_gemma-2b-orpo-gguf", "Apel-sin/nxcode-CQ-7B-orpo-exl2", "Orion-zhen/Llama3.1-70B-Chinese-Chat-4.0bpw-exl2", "Orion-zhen/Llama3.1-8B-Chinese-Chat-4.0bpw-exl2", "Orion-zhen/Llama3.1-8B-Chinese-Chat-6.5bpw-exl2", "BlouseJury/shenzhi-wang_Llama3-8B-Chinese-Chat-6.0bpw-exl2", "oz1115/jy_test_mode", "SungJoo/llama3-8b-instruct-orpo-ko", "RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf", "darkshapes/mistral-7b-v0.3-chinese-chat-4bit", "RichardErkhov/allganize_-_Llama-3-Alpha-Ko-8B-Instruct-gguf", "RichardErkhov/shenzhi-wang_-_Llama3-70B-Chinese-Chat-gguf", "RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-4bits", "RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-8bits", "yusheng123z/llama3.1", "RichardErkhov/shenzhi-wang_-_Mistral-7B-v0.3-Chinese-Chat-gguf", "RichardErkhov/rinna_-_gemma-2-baku-2b-it-gguf", "RichardErkhov/Amu_-_orpo-lora-phi2-gguf", "IlyaGusev/vikhr_nemo_orpo_dostoevsky_12b", "mav23/gemma-2-baku-2b-it-GGUF", "mav23/Nxcode-CQ-7B-orpo-GGUF", "RichardErkhov/mirlab_-_AkaLlama-llama3-70b-v0.1-gguf", "RichardErkhov/rinna_-_gemma-2-baku-2b-it-4bits", "RichardErkhov/rinna_-_gemma-2-baku-2b-it-8bits", "RioShiina/gemma-2-baku-2b-it-exl2", "comfyuiblog/Nxcode-CQ-7B-GGUF", "MachoMaheen/devdock4bit", "bhuvana-ak7/orpo_output", "bhuvana-ak7/orpo_trained_output_gpt_neo", "RichardErkhov/trl-lib_-_Qwen2-0.5B-ORPO-gguf" ]
[]
[ "bigcode/bigcode-models-leaderboard", "Vokturz/can-it-run-llm", "featherless-ai/try-this-model", "eduagarcia/open_pt_llm_leaderboard", "NiansuhAI/HFLLMs", "abidlabs/llm-explorer", "Hansimov/hf-llm-api", "llamafactory/Gemma-2-9B-Chinese-Chat", "prometheus-eval/BiGGen-Bench-Leaderboard", "DIBT/preference_data_by_language", "Justinrune/LLaMA-Factory", "rinna/gemma-2-baku-2b-it", "Granther/try-this-model", "NiansuhAI/Main", "kenken999/fastapi_django_main_live", "awacke1/ChatStreamlitMultiplayer", "Darok/Featherless-Feud", "li-qing/FIRE", "saikub/chatB", "DevsDoCode/DeepInfra", "AdarshJi/Deepifra", "Pyboxs/hf-llm-api", "emekaboris/try-this-model", "featherless-ai/klimbr-demo", "SharryOG/MIA", "Niansuh/HFLLMAPI", "tehen1/HuggingFaceH4-zephyr-orpo-141b-A35b-v0.1", "iseehf/hf-llm-api", "kingtest/hf-llm-api", "heidornj/hf-llm-api", "yxmnjxzx/hf-llm-api", "Shuddho/HFLLMAPI", "AIMaster7/HFLLMAPI2", "AhmedMagdy7/can-it-run-llm", "AIMaster7/HFLLMAPI3", "Mehmetist02/HuggingFaceH4-zephyr-orpo-141b-A35b-v0.1", "Kartik2503/cost-estimator", "phamvanla/HuggingFaceH4-zephyr-orpo-141b-A35b-v0.1", "real0x0a1/real0x0a1", "seawolf2357/kai-zephyr", "zerrin/test_space", "Bofeee5675/FIRE", "Nymbo/llm-explorer", "allknowingroger/llm-explorer", "alessoh/aihive", "brokerrobin/can-it-run-llm", "evelyn-lo/evelyn", "tianlong12/hf-llm-api", "lintasmediadanawa/hf-llm-api", "daniellp/HFLLMs", "EBTRFIO/hf-llm-api", "zjasper666/bf16_vs_fp8", "rickqaq/223", "Tanvir1337/can-it-run-llm", "realaer/src", "srinuksv/Main", "ColamanAI/hf-llm-api", "ajimenez78/python2cplusplus", "ArjDLAI/TestofNxcode-CQ-7B-orpo", "SC999/NV_Nemotron", "Oxygen588/anakin87-gemma-2b-orpo", "JohnKouf/meltemi_space", "Sakalti/Baku", "FunFunTeacher/Teacher.AI", "lqr9500/shenzhi-wang-Llama3.1-8B-Chinese-Chat", "VenturaSpectra/shenzhi-wang-Llama3.1-8B-Chinese-Chat", "kk189mm/shenzhi-wang-Llama3.1-8B-Chinese-Chat", "Granp/shenzhi-wang-Llama3.1-8B-Chinese-Chat", "beginor/test", "smarttang/blingsec" ]
1
https://aclanthology.org/2024.emnlp-main.627.bib
https://aclanthology.org/2024.emnlp-main.627/
@inproceedings{chen-etal-2024-multi-perspective, title = "A Multi-Perspective Analysis of Memorization in Large Language Models", author = "Chen, Bowen and Han, Namgi and Miyao, Yusuke", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.627", pages = "11190--11209", abstract = "Large Language Models (LLMs) can generate the same sequences contained in the pre-train corpora, known as memorization.Previous research studied it at a macro level, leaving micro yet important questions under-explored, e.g., what makes sentences memorized, the dynamics when generating memorized sequence, its connection to unmemorized sequence, and its predictability.We answer the above questions by analyzing the relationship of memorization with outputs from LLM, namely, embeddings, probability distributions, and generated tokens.A memorization score is calculated as the overlap between generated tokens and actual continuations when the LLM is prompted with a context sequence from the pre-train corpora.Our findings reveal:(1) The inter-correlation between memorized/unmemorized sentences, model size, continuation size, and context size, as well as the transition dynamics between sentences of different memorization scores,(2) A sudden drop and increase in the frequency of input tokens when generating memorized/unmemorized sequences (boundary effect),(3) Cluster of sentences with different memorization scores in the embedding space,(4) An inverse boundary effect in the entropy of probability distributions for generated memorized/unmemorized sequences,(5) The predictability of memorization is related to model size and continuation length. In addition, we show a Transformer model trained by the hidden states of LLM can predict unmemorized tokens.", }
Large Language Models (LLMs) can generate the same sequences contained in the pre-train corpora, known as memorization.Previous research studied it at a macro level, leaving micro yet important questions under-explored, e.g., what makes sentences memorized, the dynamics when generating memorized sequence, its connection to unmemorized sequence, and its predictability.We answer the above questions by analyzing the relationship of memorization with outputs from LLM, namely, embeddings, probability distributions, and generated tokens.A memorization score is calculated as the overlap between generated tokens and actual continuations when the LLM is prompted with a context sequence from the pre-train corpora.Our findings reveal:(1) The inter-correlation between memorized/unmemorized sentences, model size, continuation size, and context size, as well as the transition dynamics between sentences of different memorization scores,(2) A sudden drop and increase in the frequency of input tokens when generating memorized/unmemorized sequences (boundary effect),(3) Cluster of sentences with different memorization scores in the embedding space,(4) An inverse boundary effect in the entropy of probability distributions for generated memorized/unmemorized sequences,(5) The predictability of memorization is related to model size and continuation length. In addition, we show a Transformer model trained by the hidden states of LLM can predict unmemorized tokens.
[ "Chen, Bowen", "Han, Namgi", "Miyao, Yusuke" ]
A Multi-Perspective Analysis of Memorization in Large Language Models
emnlp-main.627
Poster
2405.11577
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.628.bib
https://aclanthology.org/2024.emnlp-main.628/
@inproceedings{penzo-etal-2024-llms, title = "Do {LLM}s suffer from Multi-Party Hangover? A Diagnostic Approach to Addressee Recognition and Response Selection in Conversations", author = "Penzo, Nicol{\`o} and Sajedinia, Maryam and Lepri, Bruno and Tonelli, Sara and Guerini, Marco", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.628", pages = "11210--11233", abstract = "Assessing the performance of systems to classify Multi-Party Conversations (MPC) is challenging due to the interconnection between linguistic and structural characteristics of conversations. Conventional evaluation methods often overlook variances in model behavior across different levels of structural complexity on interaction graphs. In this work, we propose a methodological pipeline to investigate model performance across specific structural attributes of conversations. As a proof of concept we focus on Response Selection and Addressee Recognition tasks, to diagnose model weaknesses. To this end, we extract representative diagnostic subdatasets with a fixed number of users and a good structural variety from a large and open corpus of online MPCs. We further frame our work in terms of data minimization, avoiding the use of original usernames to preserve privacy, and propose alternatives to using original text messages. Results show that response selection relies more on the textual content of conversations, while addressee recognition requires capturing their structural dimension. Using an LLM in a zero-shot setting, we further highlight how sensitivity to prompt variations is task-dependent.", }
Assessing the performance of systems to classify Multi-Party Conversations (MPC) is challenging due to the interconnection between linguistic and structural characteristics of conversations. Conventional evaluation methods often overlook variances in model behavior across different levels of structural complexity on interaction graphs. In this work, we propose a methodological pipeline to investigate model performance across specific structural attributes of conversations. As a proof of concept we focus on Response Selection and Addressee Recognition tasks, to diagnose model weaknesses. To this end, we extract representative diagnostic subdatasets with a fixed number of users and a good structural variety from a large and open corpus of online MPCs. We further frame our work in terms of data minimization, avoiding the use of original usernames to preserve privacy, and propose alternatives to using original text messages. Results show that response selection relies more on the textual content of conversations, while addressee recognition requires capturing their structural dimension. Using an LLM in a zero-shot setting, we further highlight how sensitivity to prompt variations is task-dependent.
[ "Penzo, Nicol{\\`o}", "Sajedinia, Maryam", "Lepri, Bruno", "Tonelli, Sara", "Guerini, Marco" ]
Do LLMs suffer from Multi-Party Hangover? A Diagnostic Approach to Addressee Recognition and Response Selection in Conversations
emnlp-main.628
Poster
2409.18602
[ "https://github.com/dhfbk/MPH" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.629.bib
https://aclanthology.org/2024.emnlp-main.629/
@inproceedings{puerto-etal-2024-code, title = "Code Prompting Elicits Conditional Reasoning Abilities in {T}ext+{C}ode {LLM}s", author = "Puerto, Haritz and Tutek, Martin and Aditya, Somak and Zhu, Xiaodan and Gurevych, Iryna", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.629", pages = "11234--11258", abstract = "Reasoning is a fundamental component of language understanding. Recent prompting techniques, such as chain of thought, have consistently improved LLMs{'} performance on various reasoning tasks. Nevertheless, there is still little understanding of what triggers reasoning abilities in LLMs in the inference stage. In this paper, we investigate the effect of the input representation on the reasoning abilities of LLMs. We hypothesize that representing natural language tasks as code can enhance specific reasoning abilities such as entity tracking or logical reasoning. To study this, we propose code prompting, a methodology we operationalize as a chain of prompts that transforms a natural language problem into code and directly prompts the LLM using the generated code without resorting to external code execution. We find that code prompting exhibits a high-performance boost for multiple LLMs (up to 22.52 percentage points on GPT 3.5, 7.75 on Mixtral, and 16.78 on Mistral) across multiple conditional reasoning datasets. We then conduct comprehensive experiments to understand how the code representation triggers reasoning abilities and which capabilities are elicited in the underlying models. Our analysis on GPT 3.5 reveals that the code formatting of the input problem is essential for performance improvement. Furthermore, the code representation improves sample efficiency of in-context learning and facilitates state tracking of entities.", }
Reasoning is a fundamental component of language understanding. Recent prompting techniques, such as chain of thought, have consistently improved LLMs{'} performance on various reasoning tasks. Nevertheless, there is still little understanding of what triggers reasoning abilities in LLMs in the inference stage. In this paper, we investigate the effect of the input representation on the reasoning abilities of LLMs. We hypothesize that representing natural language tasks as code can enhance specific reasoning abilities such as entity tracking or logical reasoning. To study this, we propose code prompting, a methodology we operationalize as a chain of prompts that transforms a natural language problem into code and directly prompts the LLM using the generated code without resorting to external code execution. We find that code prompting exhibits a high-performance boost for multiple LLMs (up to 22.52 percentage points on GPT 3.5, 7.75 on Mixtral, and 16.78 on Mistral) across multiple conditional reasoning datasets. We then conduct comprehensive experiments to understand how the code representation triggers reasoning abilities and which capabilities are elicited in the underlying models. Our analysis on GPT 3.5 reveals that the code formatting of the input problem is essential for performance improvement. Furthermore, the code representation improves sample efficiency of in-context learning and facilitates state tracking of entities.
[ "Puerto, Haritz", "Tutek, Martin", "Aditya, Somak", "Zhu, Xiaodan", "Gurevych, Iryna" ]
Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs
emnlp-main.629
Poster
2401.10065
[ "https://github.com/ukplab/arxiv2024-conditional-reasoning-llms" ]
https://huggingface.co/papers/2401.10065
1
1
0
5
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.630.bib
https://aclanthology.org/2024.emnlp-main.630/
@inproceedings{alastruey-etal-2024-unveiling, title = "Unveiling the Role of Pretraining in Direct Speech Translation", author = "Alastruey, Belen and G{\'a}llego, Gerard I. and Costa-juss{\`a}, Marta R.", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.630", pages = "11259--11265", abstract = "Direct speech-to-text translation systems encounter an important drawback in data scarcity. A common solution consists on pretraining the encoder on automatic speech recognition, hence losing efficiency in the training process. In this study, we compare the training dynamics of a system using a pretrained encoder, the conventional approach, and one trained from scratch. We observe that, throughout the training, the randomly initialized model struggles to incorporate information from the speech inputs for its predictions. Hence, we hypothesize that this issue stems from the difficulty of effectively training an encoder for direct speech translation. While a model trained from scratch needs to learn acoustic and semantic modeling simultaneously, a pretrained one can just focus on the latter. Based on these findings, we propose a subtle change in the decoder cross-attention to integrate source information from earlier steps in training. We show that with this change, the model trained from scratch can achieve comparable performance to the pretrained one, while reducing the training time.", }
Direct speech-to-text translation systems encounter an important drawback in data scarcity. A common solution consists on pretraining the encoder on automatic speech recognition, hence losing efficiency in the training process. In this study, we compare the training dynamics of a system using a pretrained encoder, the conventional approach, and one trained from scratch. We observe that, throughout the training, the randomly initialized model struggles to incorporate information from the speech inputs for its predictions. Hence, we hypothesize that this issue stems from the difficulty of effectively training an encoder for direct speech translation. While a model trained from scratch needs to learn acoustic and semantic modeling simultaneously, a pretrained one can just focus on the latter. Based on these findings, we propose a subtle change in the decoder cross-attention to integrate source information from earlier steps in training. We show that with this change, the model trained from scratch can achieve comparable performance to the pretrained one, while reducing the training time.
[ "Alastruey, Belen", "G{\\'a}llego, Gerard I.", "Costa-juss{\\`a}, Marta R." ]
Unveiling the Role of Pretraining in Direct Speech Translation
emnlp-main.630
Poster
2409.18044
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.631.bib
https://aclanthology.org/2024.emnlp-main.631/
@inproceedings{guo-etal-2024-pcqpr, title = "{PCQPR}: Proactive Conversational Question Planning with Reflection", author = "Guo, Shasha and Liao, Lizi and Zhang, Jing and Li, Cuiping and Chen, Hong", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.631", pages = "11266--11278", abstract = "Conversational Question Generation (CQG) enhances the interactivity of conversational question-answering systems in fields such as education, customer service, and entertainment. However, traditional CQG, focusing primarily on the immediate context, lacks the conversational foresight necessary to guide conversations toward specified conclusions. This limitation significantly restricts their ability to achieve conclusion-oriented conversational outcomes. In this work, we redefine the CQG task as Conclusion-driven Conversational Question Generation (CCQG) by focusing on proactivity, not merely reacting to the unfolding conversation but actively steering it towards a conclusion-oriented question-answer pair. To address this, we propose a novel approach, called Proactive Conversational Question Planning with self-Refining (PCQPR). Concretely, by integrating a planning algorithm inspired by Monte Carlo Tree Search (MCTS) with the analytical capabilities of large language models (LLMs), PCQPR predicts future conversation turns and continuously refines its questioning strategies. This iterative self-refining mechanism ensures the generation of contextually relevant questions strategically devised to reach a specified outcome. Our extensive evaluations demonstrate that PCQPR significantly surpasses existing CQG methods, marking a paradigm shift towards conclusion-oriented conversational question-answering systems.", }
Conversational Question Generation (CQG) enhances the interactivity of conversational question-answering systems in fields such as education, customer service, and entertainment. However, traditional CQG, focusing primarily on the immediate context, lacks the conversational foresight necessary to guide conversations toward specified conclusions. This limitation significantly restricts their ability to achieve conclusion-oriented conversational outcomes. In this work, we redefine the CQG task as Conclusion-driven Conversational Question Generation (CCQG) by focusing on proactivity, not merely reacting to the unfolding conversation but actively steering it towards a conclusion-oriented question-answer pair. To address this, we propose a novel approach, called Proactive Conversational Question Planning with self-Refining (PCQPR). Concretely, by integrating a planning algorithm inspired by Monte Carlo Tree Search (MCTS) with the analytical capabilities of large language models (LLMs), PCQPR predicts future conversation turns and continuously refines its questioning strategies. This iterative self-refining mechanism ensures the generation of contextually relevant questions strategically devised to reach a specified outcome. Our extensive evaluations demonstrate that PCQPR significantly surpasses existing CQG methods, marking a paradigm shift towards conclusion-oriented conversational question-answering systems.
[ "Guo, Shasha", "Liao, Lizi", "Zhang, Jing", "Li, Cuiping", "Chen, Hong" ]
PCQPR: Proactive Conversational Question Planning with Reflection
emnlp-main.631
Poster
2410.01363
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.632.bib
https://aclanthology.org/2024.emnlp-main.632/
@inproceedings{tang-etal-2024-codeagent, title = "{C}ode{A}gent: Autonomous Communicative Agents for Code Review", author = "Tang, Xunzhu and Kim, Kisub and Song, Yewei and Lothritz, Cedric and Li, Bei and Ezzini, Saad and Tian, Haoye and Klein, Jacques and Bissyand{\'e}, Tegawend{\'e} F.", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.632", pages = "11279--11313", abstract = "Code review, which aims at ensuring the overall quality and reliability of software, is a cornerstone of software development. Unfortunately, while crucial, Code review is a labor-intensive process that the research community is looking to automate. Existing automated methods rely on single input-output generative models and thus generally struggle to emulate the collaborative nature of code review. This work introduces CodeAgent, a novel multi-agent Large Language Model (LLM) system for code review automation. CodeAgent incorporates a supervisory agent, QA-Checker, to ensure that all the agents{'} contributions address the initial review question. We evaluated CodeAgent on critical code review tasks: (1) detect inconsistencies between code changes and commit messages, (2) identify vulnerability introductions, (3) validate code style adherence, and (4) suggest code revisions. The results demonstrate CodeAgent{'}s effectiveness, contributing to a new state-of-the-art in code review automation. Our data and code are publicly available (\url{https://github.com/Daniel4SE/codeagent}).", }
Code review, which aims at ensuring the overall quality and reliability of software, is a cornerstone of software development. Unfortunately, while crucial, Code review is a labor-intensive process that the research community is looking to automate. Existing automated methods rely on single input-output generative models and thus generally struggle to emulate the collaborative nature of code review. This work introduces CodeAgent, a novel multi-agent Large Language Model (LLM) system for code review automation. CodeAgent incorporates a supervisory agent, QA-Checker, to ensure that all the agents{'} contributions address the initial review question. We evaluated CodeAgent on critical code review tasks: (1) detect inconsistencies between code changes and commit messages, (2) identify vulnerability introductions, (3) validate code style adherence, and (4) suggest code revisions. The results demonstrate CodeAgent{'}s effectiveness, contributing to a new state-of-the-art in code review automation. Our data and code are publicly available (\url{https://github.com/Daniel4SE/codeagent}).
[ "Tang, Xunzhu", "Kim, Kisub", "Song, Yewei", "Lothritz, Cedric", "Li, Bei", "Ezzini, Saad", "Tian, Haoye", "Klein, Jacques", "Bissy", "{\\'e}, Tegawend{\\'e} F." ]
CodeAgent: Autonomous Communicative Agents for Code Review
emnlp-main.632
Poster
2402.02172
[ "https://github.com/code4agent/codeagent" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.633.bib
https://aclanthology.org/2024.emnlp-main.633/
@inproceedings{lee-etal-2024-trol, title = "{T}ro{L}: Traversal of Layers for Large Language and Vision Models", author = "Lee, Byung-Kwan and Chung, Sangyun and Kim, Chae Won and Park, Beomchan and Ro, Yong Man", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.633", pages = "11314--11342", abstract = "Large language and vision models (LLVMs) have been driven by the generalization power of large language models (LLMs) and the advent of visual instruction tuning. Along with scaling them up directly, these models enable LLVMs to showcase powerful vision language (VL) performances by covering diverse tasks via natural language instructions. However, existing open-source LLVMs that perform comparably to closed-source LLVMs such as GPT-4V are often considered too large (e.g., 26B, 34B, and 110B parameters), having a larger number of layers. These large models demand costly, high-end resources for both training and inference. To address this issue, we present a new efficient LLVM family with 1.8B, 3.8B, and 7B LLM model sizes, Traversal of Layers (TroL), which enables the reuse of layers in a token-wise manner. This layer traversing technique simulates the effect of looking back and retracing the answering stream while increasing the number of forward propagation layers without physically adding more layers. We demonstrate that TroL employs a simple layer traversing approach yet efficiently outperforms the open-source LLVMs with larger model sizes and rivals the performances of the closed-source LLVMs with substantial sizes.", }
Large language and vision models (LLVMs) have been driven by the generalization power of large language models (LLMs) and the advent of visual instruction tuning. Along with scaling them up directly, these models enable LLVMs to showcase powerful vision language (VL) performances by covering diverse tasks via natural language instructions. However, existing open-source LLVMs that perform comparably to closed-source LLVMs such as GPT-4V are often considered too large (e.g., 26B, 34B, and 110B parameters), having a larger number of layers. These large models demand costly, high-end resources for both training and inference. To address this issue, we present a new efficient LLVM family with 1.8B, 3.8B, and 7B LLM model sizes, Traversal of Layers (TroL), which enables the reuse of layers in a token-wise manner. This layer traversing technique simulates the effect of looking back and retracing the answering stream while increasing the number of forward propagation layers without physically adding more layers. We demonstrate that TroL employs a simple layer traversing approach yet efficiently outperforms the open-source LLVMs with larger model sizes and rivals the performances of the closed-source LLVMs with substantial sizes.
[ "Lee, Byung-Kwan", "Chung, Sangyun", "Kim, Chae Won", "Park, Beomchan", "Ro, Yong Man" ]
TroL: Traversal of Layers for Large Language and Vision Models
emnlp-main.633
Poster
2406.12246
[ "https://github.com/byungkwanlee/trol" ]
https://huggingface.co/papers/2406.12246
5
34
2
5
[ "BK-Lee/TroL-7B", "BK-Lee/TroL-1.8B", "BK-Lee/TroL-3.8B" ]
[]
[ "BK-Lee/TroL" ]
[ "BK-Lee/TroL-7B", "BK-Lee/TroL-1.8B", "BK-Lee/TroL-3.8B" ]
[]
[ "BK-Lee/TroL" ]
1
https://aclanthology.org/2024.emnlp-main.634.bib
https://aclanthology.org/2024.emnlp-main.634/
@inproceedings{wang-etal-2024-mmte, title = "{MMTE}: Corpus and Metrics for Evaluating Machine Translation Quality of Metaphorical Language", author = "Wang, Shun and Zhang, Ge and Wu, Han and Loakman, Tyler and Huang, Wenhao and Lin, Chenghua", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.634", pages = "11343--11358", abstract = "Machine Translation (MT) has developed rapidly since the release of Large Language Models and current MT evaluation is performed through comparison with reference human translations or by predicting quality scores from human-labeled data. However, these mainstream evaluation methods mainly focus on fluency and factual reliability, whilst paying little attention to figurative quality. In this paper, we investigate the figurative quality of MT and propose a set of human evaluation metrics focused on the translation of figurative language. We additionally present a multilingual parallel metaphor corpus generated by post-editing. Our evaluation protocol is designed to estimate four aspects of MT: Metaphorical Equivalence, Emotion, Authenticity, and Quality. In doing so, we observe that translations of figurative expressions display different traits from literal ones.", }
Machine Translation (MT) has developed rapidly since the release of Large Language Models and current MT evaluation is performed through comparison with reference human translations or by predicting quality scores from human-labeled data. However, these mainstream evaluation methods mainly focus on fluency and factual reliability, whilst paying little attention to figurative quality. In this paper, we investigate the figurative quality of MT and propose a set of human evaluation metrics focused on the translation of figurative language. We additionally present a multilingual parallel metaphor corpus generated by post-editing. Our evaluation protocol is designed to estimate four aspects of MT: Metaphorical Equivalence, Emotion, Authenticity, and Quality. In doing so, we observe that translations of figurative expressions display different traits from literal ones.
[ "Wang, Shun", "Zhang, Ge", "Wu, Han", "Loakman, Tyler", "Huang, Wenhao", "Lin, Chenghua" ]
MMTE: Corpus and Metrics for Evaluating Machine Translation Quality of Metaphorical Language
emnlp-main.634
Poster
2406.13698
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.635.bib
https://aclanthology.org/2024.emnlp-main.635/
@inproceedings{zamaraeva-gomez-rodriguez-2024-revisiting, title = "Revisiting Supertagging for faster {HPSG} parsing", author = "Zamaraeva, Olga and G{\'o}mez-Rodr{\'\i}guez, Carlos", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.635", pages = "11359--11374", abstract = "We present new supertaggers trained on English HPSG-based treebanks and test the effects of the best tagger on parsing speed and accuracy. HPSG treebanks are produced automatically by large manually built grammars and feature high-quality annotation based on a well-developed linguistic theory. The English Resource Grammar treebanks include diverse and challenging test datasets, beyond the usual WSJ section 23 and Wikipedia data. HPSG supertagging has previously relied on MaxEnt-based models. We use SVM and neural CRF- and BERT-based methods and show that both SVM and neural supertaggers achieve considerably higher accuracy compared to the baseline and lead to an increase not only in the parsing speed but also the parser accuracy with respect to gold dependency structures. Our fine-tuned BERT-based tagger achieves 97.26{\%} accuracy on 950 sentences from WSJ23 and 93.88{\%} on the out-of-domain technical essay The Cathedral and the Bazaar. We present experiments with integrating the best supertagger into an HPSG parser and observe a speedup of a factor of 3 with respect to the system which uses no tagging at all, as well as large recall gains and an overall precision gain. We also compare our system to an existing integrated tagger and show that although the well-integrated tagger remains the fastest, our experimental system can be more accurate. Finally, we hope that the diverse and difficult datasets we used for evaluation will gain more popularity in the field: we show that results can differ depending on the dataset, even if it is an in-domain one. We contribute the complete datasets reformatted for Huggingface token classification.", }
We present new supertaggers trained on English HPSG-based treebanks and test the effects of the best tagger on parsing speed and accuracy. HPSG treebanks are produced automatically by large manually built grammars and feature high-quality annotation based on a well-developed linguistic theory. The English Resource Grammar treebanks include diverse and challenging test datasets, beyond the usual WSJ section 23 and Wikipedia data. HPSG supertagging has previously relied on MaxEnt-based models. We use SVM and neural CRF- and BERT-based methods and show that both SVM and neural supertaggers achieve considerably higher accuracy compared to the baseline and lead to an increase not only in the parsing speed but also the parser accuracy with respect to gold dependency structures. Our fine-tuned BERT-based tagger achieves 97.26{\%} accuracy on 950 sentences from WSJ23 and 93.88{\%} on the out-of-domain technical essay The Cathedral and the Bazaar. We present experiments with integrating the best supertagger into an HPSG parser and observe a speedup of a factor of 3 with respect to the system which uses no tagging at all, as well as large recall gains and an overall precision gain. We also compare our system to an existing integrated tagger and show that although the well-integrated tagger remains the fastest, our experimental system can be more accurate. Finally, we hope that the diverse and difficult datasets we used for evaluation will gain more popularity in the field: we show that results can differ depending on the dataset, even if it is an in-domain one. We contribute the complete datasets reformatted for Huggingface token classification.
[ "Zamaraeva, Olga", "G{\\'o}mez-Rodr{\\'\\i}guez, Carlos" ]
Revisiting Supertagging for faster HPSG parsing
emnlp-main.635
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.636.bib
https://aclanthology.org/2024.emnlp-main.636/
@inproceedings{dai-etal-2024-improve, title = "Improve Dense Passage Retrieval with Entailment Tuning", author = "Dai, Lu and Liu, Hao and Xiong, Hui", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.636", pages = "11375--11387", abstract = "Retrieval module can be plugged into many downstream NLP tasks to improve their performance, such as open-domain question answering and retrieval-augmented generation. The key to a retrieval system is to calculate relevance scores to query and passage pairs. However, the definition of relevance is often ambiguous. We observed that a major class of relevance aligns with the concept of entailment in NLI tasks. Based on this observation, we designed a method called entailment tuning to improve the embedding of dense retrievers. Specifically, we unify the form of retrieval data and NLI data using existence claim as a bridge. Then, we train retrievers to predict the claims entailed in a passage with a variant task of masked prediction. Our method can be efficiently plugged into current dense retrieval methods, and experiments show the effectiveness of our method.", }
Retrieval module can be plugged into many downstream NLP tasks to improve their performance, such as open-domain question answering and retrieval-augmented generation. The key to a retrieval system is to calculate relevance scores to query and passage pairs. However, the definition of relevance is often ambiguous. We observed that a major class of relevance aligns with the concept of entailment in NLI tasks. Based on this observation, we designed a method called entailment tuning to improve the embedding of dense retrievers. Specifically, we unify the form of retrieval data and NLI data using existence claim as a bridge. Then, we train retrievers to predict the claims entailed in a passage with a variant task of masked prediction. Our method can be efficiently plugged into current dense retrieval methods, and experiments show the effectiveness of our method.
[ "Dai, Lu", "Liu, Hao", "Xiong, Hui" ]
Improve Dense Passage Retrieval with Entailment Tuning
emnlp-main.636
Poster
2410.15801
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.637.bib
https://aclanthology.org/2024.emnlp-main.637/
@inproceedings{zhang-etal-2024-toolbehonest, title = "{T}ool{B}e{H}onest: A Multi-level Hallucination Diagnostic Benchmark for Tool-Augmented Large Language Models", author = "Zhang, Yuxiang and Chen, Jing and Wang, Junjie and Liu, Yaxin and Yang, Cheng and Shi, Chufan and Zhu, Xinyu and Lin, Zihao and Wan, Hanwen and Yang, Yujiu and Sakai, Tetsuya and Feng, Tian and Yamana, Hayato", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.637", pages = "11388--11422", abstract = "Tool-augmented large language models (LLMs) are rapidly being integrated into real-world applications. Due to the lack of benchmarks, the community has yet to fully understand the hallucination issues within these models. To address this challenge, we introduce a comprehensive diagnostic benchmark, ToolBH. Specifically, we assess the LLM{'}s hallucinations through two perspectives: depth and breadth. In terms of depth, we propose a multi-level diagnostic process, including (1) solvability detection, (2) solution planning, and (3) missing-tool analysis. For breadth, we consider three scenarios based on the characteristics of the toolset: missing necessary tools, potential tools, and limited functionality tools. Furthermore, we developed seven tasks and collected 700 evaluation samples through multiple rounds of manual annotation. The results show the significant challenges presented by the ToolBH benchmark. The current advanced models Gemini-1.5-Pro and GPT-4o only achieve total scores of 45.3 and 37.0, respectively, on a scale of 100. In this benchmark, larger model parameters do not guarantee better performance; the training data and response strategies also play crucial roles in tool-enhanced LLM scenarios. Our diagnostic analysis indicates that the primary reason for model errors lies in assessing task solvability. Additionally, open-weight models suffer from performance drops with verbose replies, whereas proprietary models excel with longer reasoning.", }
Tool-augmented large language models (LLMs) are rapidly being integrated into real-world applications. Due to the lack of benchmarks, the community has yet to fully understand the hallucination issues within these models. To address this challenge, we introduce a comprehensive diagnostic benchmark, ToolBH. Specifically, we assess the LLM{'}s hallucinations through two perspectives: depth and breadth. In terms of depth, we propose a multi-level diagnostic process, including (1) solvability detection, (2) solution planning, and (3) missing-tool analysis. For breadth, we consider three scenarios based on the characteristics of the toolset: missing necessary tools, potential tools, and limited functionality tools. Furthermore, we developed seven tasks and collected 700 evaluation samples through multiple rounds of manual annotation. The results show the significant challenges presented by the ToolBH benchmark. The current advanced models Gemini-1.5-Pro and GPT-4o only achieve total scores of 45.3 and 37.0, respectively, on a scale of 100. In this benchmark, larger model parameters do not guarantee better performance; the training data and response strategies also play crucial roles in tool-enhanced LLM scenarios. Our diagnostic analysis indicates that the primary reason for model errors lies in assessing task solvability. Additionally, open-weight models suffer from performance drops with verbose replies, whereas proprietary models excel with longer reasoning.
[ "Zhang, Yuxiang", "Chen, Jing", "Wang, Junjie", "Liu, Yaxin", "Yang, Cheng", "Shi, Chufan", "Zhu, Xinyu", "Lin, Zihao", "Wan, Hanwen", "Yang, Yujiu", "Sakai, Tetsuya", "Feng, Tian", "Yamana, Hayato" ]
ToolBeHonest: A Multi-level Hallucination Diagnostic Benchmark for Tool-Augmented Large Language Models
emnlp-main.637
Poster
2406.20015
[ "https://github.com/toolbehonest/toolbehonest" ]
https://huggingface.co/papers/2406.20015
3
1
0
13
[]
[ "Joelzhang/ToolBeHonest" ]
[]
[]
[ "Joelzhang/ToolBeHonest" ]
[]
1
https://aclanthology.org/2024.emnlp-main.638.bib
https://aclanthology.org/2024.emnlp-main.638/
@inproceedings{zevallos-etal-2024-tema, title = "{TEMA}: Token Embeddings Mapping for Enriching Low-Resource Language Models", author = "Zevallos, Rodolfo and Bel, N{\'u}ria and Farr{\'u}s, Mireia", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.638", pages = "11423--11435", abstract = "The objective of the research we present is to remedy the problem of the low quality of language models for low-resource languages. We introduce an algorithm, the Token Embedding Mapping Algorithm (TEMA), that maps the token embeddings of a richly pre-trained model L1 to a poorly trained model L2, thus creating a richer L2{'} model. Our experiments show that the L2{'} model reduces perplexity with respect to the original monolingual model L2, and that for downstream tasks, including SuperGLUE, the results are state-of-the-art or better for the most semantic tasks. The models obtained with TEMA are also competitive or better than multilingual or extended models proposed as solutions for mitigating the low-resource language problems.", }
The objective of the research we present is to remedy the problem of the low quality of language models for low-resource languages. We introduce an algorithm, the Token Embedding Mapping Algorithm (TEMA), that maps the token embeddings of a richly pre-trained model L1 to a poorly trained model L2, thus creating a richer L2{'} model. Our experiments show that the L2{'} model reduces perplexity with respect to the original monolingual model L2, and that for downstream tasks, including SuperGLUE, the results are state-of-the-art or better for the most semantic tasks. The models obtained with TEMA are also competitive or better than multilingual or extended models proposed as solutions for mitigating the low-resource language problems.
[ "Zevallos, Rodolfo", "Bel, N{\\'u}ria", "Farr{\\'u}s, Mireia" ]
TEMA: Token Embeddings Mapping for Enriching Low-Resource Language Models
emnlp-main.638
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.639.bib
https://aclanthology.org/2024.emnlp-main.639/
@inproceedings{zhang-etal-2024-decor, title = "{DECOR}: Improving Coherence in {L}2 {E}nglish Writing with a Novel Benchmark for Incoherence Detection, Reasoning, and Rewriting", author = "Zhang, Xuanming and Diaz, Anthony and Chen, Zixun and Wu, Qingyang and Qian, Kun and Voss, Erik and Yu, Zhou", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.639", pages = "11436--11458", abstract = "Coherence in writing, an aspect that L2 English learners often struggle with, is crucial in assessing L2 English writing. Existing automated writing evaluation systems primarily use basic surface linguistic features to detect coherence in writing. However, little effort has been made to correct the detected incoherence, which could significantly benefit L2 language learners seeking to improve their writing. To bridge this gap, we introduce DECOR, a novel benchmark that includes expert annotations for detecting incoherence in L2 English writing, identifying the underlying reasons, and rewriting the incoherent sentences. To our knowledge, DECOR is the first coherence assessment dataset specifically designed for improving L2 English writing, featuring pairs of original incoherent sentences alongside their expert-rewritten counterparts. Additionally, we fine-tuned models to automatically detect and rewrite incoherence in student essays. We find that incorporating specific reasons for incoherence during fine-tuning consistently improves the quality of the rewrites, achieving a level that is favored in both automatic and human evaluations.", }
Coherence in writing, an aspect that L2 English learners often struggle with, is crucial in assessing L2 English writing. Existing automated writing evaluation systems primarily use basic surface linguistic features to detect coherence in writing. However, little effort has been made to correct the detected incoherence, which could significantly benefit L2 language learners seeking to improve their writing. To bridge this gap, we introduce DECOR, a novel benchmark that includes expert annotations for detecting incoherence in L2 English writing, identifying the underlying reasons, and rewriting the incoherent sentences. To our knowledge, DECOR is the first coherence assessment dataset specifically designed for improving L2 English writing, featuring pairs of original incoherent sentences alongside their expert-rewritten counterparts. Additionally, we fine-tuned models to automatically detect and rewrite incoherence in student essays. We find that incorporating specific reasons for incoherence during fine-tuning consistently improves the quality of the rewrites, achieving a level that is favored in both automatic and human evaluations.
[ "Zhang, Xuanming", "Diaz, Anthony", "Chen, Zixun", "Wu, Qingyang", "Qian, Kun", "Voss, Erik", "Yu, Zhou" ]
DECOR: Improving Coherence in L2 English Writing with a Novel Benchmark for Incoherence Detection, Reasoning, and Rewriting
emnlp-main.639
Poster
2406.19650
[ "https://github.com/billyzhang24kobe/writing2coherence" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.640.bib
https://aclanthology.org/2024.emnlp-main.640/
@inproceedings{pesaran-zadeh-etal-2024-text2chart31, title = "{T}ext2{C}hart31: Instruction Tuning for Chart Generation with Automatic Feedback", author = "Pesaran Zadeh, Fatemeh and Kim, Juyeon and Kim, Jin-Hwa and Kim, Gunhee", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.640", pages = "11459--11480", abstract = "Large language models (LLMs) have demonstrated strong capabilities across various language tasks, notably through instruction-tuning methods. However, LLMs face challenges in visualizing complex, real-world data through charts and plots. Firstly, existing datasets rarely cover a full range of chart types, such as 3D, volumetric, and gridded charts. Secondly, supervised fine-tuning methods do not fully leverage the intricate relationships within rich datasets, including text, code, and figures. To address these challenges, we propose a hierarchical pipeline and a new dataset for chart generation. Our dataset, Text2Chart31, includes 31 unique plot types referring to the Matplotlib library, with 11.1K tuples of descriptions, code, data tables, and plots. Moreover, we introduce a reinforcement learning-based instruction tuning technique for chart generation tasks without requiring human feedback. Our experiments show that this approach significantly enhances the model performance, enabling smaller models to outperform larger open-source models and be comparable to state-of-the-art proprietary models in data visualization tasks.", }
Large language models (LLMs) have demonstrated strong capabilities across various language tasks, notably through instruction-tuning methods. However, LLMs face challenges in visualizing complex, real-world data through charts and plots. Firstly, existing datasets rarely cover a full range of chart types, such as 3D, volumetric, and gridded charts. Secondly, supervised fine-tuning methods do not fully leverage the intricate relationships within rich datasets, including text, code, and figures. To address these challenges, we propose a hierarchical pipeline and a new dataset for chart generation. Our dataset, Text2Chart31, includes 31 unique plot types referring to the Matplotlib library, with 11.1K tuples of descriptions, code, data tables, and plots. Moreover, we introduce a reinforcement learning-based instruction tuning technique for chart generation tasks without requiring human feedback. Our experiments show that this approach significantly enhances the model performance, enabling smaller models to outperform larger open-source models and be comparable to state-of-the-art proprietary models in data visualization tasks.
[ "Pesaran Zadeh, Fatemeh", "Kim, Juyeon", "Kim, Jin-Hwa", "Kim, Gunhee" ]
Text2Chart31: Instruction Tuning for Chart Generation with Automatic Feedback
emnlp-main.640
Oral
2410.04064
[ "https://github.com/fatemehpesaran310/text2chart31" ]
https://huggingface.co/papers/2410.04064
0
0
0
4
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.641.bib
https://aclanthology.org/2024.emnlp-main.641/
@inproceedings{leiter-eger-2024-prexme, title = "{P}r{E}x{M}e! Large Scale Prompt Exploration of Open Source {LLM}s for Machine Translation and Summarization Evaluation", author = "Leiter, Christoph and Eger, Steffen", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.641", pages = "11481--11506", abstract = "Large language models (LLMs) have revolutionized NLP research. Notably, in-context learning enables their use as evaluation metrics for natural language generation, making them particularly advantageous in low-resource scenarios and time-restricted applications. In this work, we introduce **PrExMe**, a large-scale **Pr**ompt **Ex**ploration for **Me**trics, where we evaluate more than 720 prompt templates for open-source LLM-based metrics on machine translation (MT) and summarization datasets, totalling over 6.6M evaluations. This extensive comparison (1) benchmarks recent open-source LLMs as metrics and (2) explores the stability and variability of different prompting strategies. We discover that, on the one hand, there are scenarios for which prompts are stable. For instance, some LLMs show idiosyncratic preferences and favor to grade generated texts with textual labels while others prefer to return numeric scores. On the other hand, the stability of prompts and model rankings can be susceptible to seemingly innocuous changes. For example, changing the requested output format from {``}0 to 100{''} to ''-1 to +1{''} can strongly affect the rankings in our evaluation. Our study contributes to understanding the impact of different prompting approaches on LLM-based metrics for MT and summarization evaluation, highlighting the most stable prompting patterns and potential limitations.", }
Large language models (LLMs) have revolutionized NLP research. Notably, in-context learning enables their use as evaluation metrics for natural language generation, making them particularly advantageous in low-resource scenarios and time-restricted applications. In this work, we introduce **PrExMe**, a large-scale **Pr**ompt **Ex**ploration for **Me**trics, where we evaluate more than 720 prompt templates for open-source LLM-based metrics on machine translation (MT) and summarization datasets, totalling over 6.6M evaluations. This extensive comparison (1) benchmarks recent open-source LLMs as metrics and (2) explores the stability and variability of different prompting strategies. We discover that, on the one hand, there are scenarios for which prompts are stable. For instance, some LLMs show idiosyncratic preferences and favor to grade generated texts with textual labels while others prefer to return numeric scores. On the other hand, the stability of prompts and model rankings can be susceptible to seemingly innocuous changes. For example, changing the requested output format from {``}0 to 100{''} to ''-1 to +1{''} can strongly affect the rankings in our evaluation. Our study contributes to understanding the impact of different prompting approaches on LLM-based metrics for MT and summarization evaluation, highlighting the most stable prompting patterns and potential limitations.
[ "Leiter, Christoph", "Eger, Steffen" ]
PrExMe! Large Scale Prompt Exploration of Open Source LLMs for Machine Translation and Summarization Evaluation
emnlp-main.641
Poster
2406.18528
[ "https://github.com/gringham/prexme" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.642.bib
https://aclanthology.org/2024.emnlp-main.642/
@inproceedings{zhao-etal-2024-universal, title = "Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning", author = "Zhao, Shuai and Jia, Meihuizi and Luu, Anh Tuan and Pan, Fengjun and Wen, Jinming", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.642", pages = "11507--11522", abstract = "In-context learning, a paradigm bridging the gap between pre-training and fine-tuning, has demonstrated high efficacy in several NLP tasks, especially in few-shot settings. Despite being widely applied, in-context learning is vulnerable to malicious attacks. In this work, we raise security concerns regarding this paradigm. Our studies demonstrate that an attacker can manipulate the behavior of large language models by poisoning the demonstration context, without the need for fine-tuning the model. Specifically, we design a new backdoor attack method, named ICLAttack, to target large language models based on in-context learning. Our method encompasses two types of attacks: poisoning demonstration examples and poisoning demonstration prompts, which can make models behave in alignment with predefined intentions. ICLAttack does not require additional fine-tuning to implant a backdoor, thus preserving the model{'}s generality. Furthermore, the poisoned examples are correctly labeled, enhancing the natural stealth of our attack method. Extensive experimental results across several language models, ranging in size from 1.3B to 180B parameters, demonstrate the effectiveness of our attack method, exemplified by a high average attack success rate of 95.0{\%} across the three datasets on OPT models.", }
In-context learning, a paradigm bridging the gap between pre-training and fine-tuning, has demonstrated high efficacy in several NLP tasks, especially in few-shot settings. Despite being widely applied, in-context learning is vulnerable to malicious attacks. In this work, we raise security concerns regarding this paradigm. Our studies demonstrate that an attacker can manipulate the behavior of large language models by poisoning the demonstration context, without the need for fine-tuning the model. Specifically, we design a new backdoor attack method, named ICLAttack, to target large language models based on in-context learning. Our method encompasses two types of attacks: poisoning demonstration examples and poisoning demonstration prompts, which can make models behave in alignment with predefined intentions. ICLAttack does not require additional fine-tuning to implant a backdoor, thus preserving the model{'}s generality. Furthermore, the poisoned examples are correctly labeled, enhancing the natural stealth of our attack method. Extensive experimental results across several language models, ranging in size from 1.3B to 180B parameters, demonstrate the effectiveness of our attack method, exemplified by a high average attack success rate of 95.0{\%} across the three datasets on OPT models.
[ "Zhao, Shuai", "Jia, Meihuizi", "Luu, Anh Tuan", "Pan, Fengjun", "Wen, Jinming" ]
Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning
emnlp-main.642
Poster
2401.05949
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.643.bib
https://aclanthology.org/2024.emnlp-main.643/
@inproceedings{chiyah-garcia-etal-2024-repairs, title = "Repairs in a Block World: A New Benchmark for Handling User Corrections with Multi-Modal Language Models", author = "Chiyah-Garcia, Javier and Suglia, Alessandro and Eshghi, Arash", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.643", pages = "11523--11542", abstract = "In dialogue, the addressee may initially misunderstand the speaker and respond erroneously, often prompting the speaker to correct the misunderstanding in the next turn with a Third Position Repair (TPR). The ability to process and respond appropriately to such repair sequences is thus crucial in conversational AI systems. In this paper, we first collect, analyse, and publicly release BlockWorld-Repairs: a dataset of multi-modal TPR sequences in an instruction-following manipulation task that is, by design, rife with referential ambiguity. We employ this dataset to evaluate several state-of-the-art Vision and Language Models (VLM) across multiple settings, focusing on their capability to process and accurately respond to TPRs and thus recover from miscommunication. We find that, compared to humans, all models significantly underperform in this task. We then show that VLMs can benefit from specialised losses targeting relevant tokens during fine-tuning, achieving better performance and generalising better to new scenarios. Our results suggest that these models are not yet ready to be deployed in multi-modal collaborative settings where repairs are common, and highlight the need to design training regimes and objectives that facilitate learning from interaction. Our code and data are available at www.github.com/JChiyah/blockworld-repairs", }
In dialogue, the addressee may initially misunderstand the speaker and respond erroneously, often prompting the speaker to correct the misunderstanding in the next turn with a Third Position Repair (TPR). The ability to process and respond appropriately to such repair sequences is thus crucial in conversational AI systems. In this paper, we first collect, analyse, and publicly release BlockWorld-Repairs: a dataset of multi-modal TPR sequences in an instruction-following manipulation task that is, by design, rife with referential ambiguity. We employ this dataset to evaluate several state-of-the-art Vision and Language Models (VLM) across multiple settings, focusing on their capability to process and accurately respond to TPRs and thus recover from miscommunication. We find that, compared to humans, all models significantly underperform in this task. We then show that VLMs can benefit from specialised losses targeting relevant tokens during fine-tuning, achieving better performance and generalising better to new scenarios. Our results suggest that these models are not yet ready to be deployed in multi-modal collaborative settings where repairs are common, and highlight the need to design training regimes and objectives that facilitate learning from interaction. Our code and data are available at www.github.com/JChiyah/blockworld-repairs
[ "Chiyah-Garcia, Javier", "Suglia, Aless", "ro", "Eshghi, Arash" ]
Repairs in a Block World: A New Benchmark for Handling User Corrections with Multi-Modal Language Models
emnlp-main.643
Oral
2409.14247
[ "https://github.com/jchiyah/blockworld-repairs" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.644.bib
https://aclanthology.org/2024.emnlp-main.644/
@inproceedings{zhang-etal-2024-beyond, title = "Beyond the Turn-Based Game: Enabling Real-Time Conversations with Duplex Models", author = "Zhang, Xinrong and Chen, Yingfa and Hu, Shengding and Han, Xu and Xu, Zihang and Xu, Yuanwei and Zhao, Weilin and Sun, Maosong and Liu, Zhiyuan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.644", pages = "11543--11557", abstract = "As large language models (LLMs) increasingly permeate daily lives, there is a growing demand for real-time interactions that mirror human conversations. Traditional turn-based chat systems driven by LLMs prevent users from verbally interacting with the system while generating responses.To overcome these limitations, we adapt existing LLMs to \textit{duplex models} so that they can listen to users while generating output and dynamically adjust themselves to provide instant feedback.Specifically, we divide the queries and responses of conversations into several time slices and then adopt a time-division-multiplexing (TDM) encoding-decoding strategy to process these slices pseudo-simultaneously.Furthermore, to make LLMs proficient enough to handle real-time conversations, we build a fine-tuning dataset consisting of alternating time slices of queries and responses and covering typical feedback types in instantaneous interactions.Our experiments show that although the queries and responses of conversations are segmented into incomplete slices for processing, LLMs can preserve their original performance on standard benchmarks with a few fine-tuning steps on our dataset. Automatic and human evaluation indicate that duplex models make user-AI interactions more natural and human-like, and greatly improve user satisfaction compared to vanilla LLMs. Our duplex model and dataset will be released soon.", }
As large language models (LLMs) increasingly permeate daily lives, there is a growing demand for real-time interactions that mirror human conversations. Traditional turn-based chat systems driven by LLMs prevent users from verbally interacting with the system while generating responses.To overcome these limitations, we adapt existing LLMs to \textit{duplex models} so that they can listen to users while generating output and dynamically adjust themselves to provide instant feedback.Specifically, we divide the queries and responses of conversations into several time slices and then adopt a time-division-multiplexing (TDM) encoding-decoding strategy to process these slices pseudo-simultaneously.Furthermore, to make LLMs proficient enough to handle real-time conversations, we build a fine-tuning dataset consisting of alternating time slices of queries and responses and covering typical feedback types in instantaneous interactions.Our experiments show that although the queries and responses of conversations are segmented into incomplete slices for processing, LLMs can preserve their original performance on standard benchmarks with a few fine-tuning steps on our dataset. Automatic and human evaluation indicate that duplex models make user-AI interactions more natural and human-like, and greatly improve user satisfaction compared to vanilla LLMs. Our duplex model and dataset will be released soon.
[ "Zhang, Xinrong", "Chen, Yingfa", "Hu, Shengding", "Han, Xu", "Xu, Zihang", "Xu, Yuanwei", "Zhao, Weilin", "Sun, Maosong", "Liu, Zhiyuan" ]
Beyond the Turn-Based Game: Enabling Real-Time Conversations with Duplex Models
emnlp-main.644
Poster
2406.15718
[ "https://github.com/thunlp/duplex-model" ]
https://huggingface.co/papers/2406.15718
5
14
2
9
[ "xinrongzhang2022/MiniCPM-duplex" ]
[ "xinrongzhang2022/Duplex-UltraChat" ]
[]
[ "xinrongzhang2022/MiniCPM-duplex" ]
[ "xinrongzhang2022/Duplex-UltraChat" ]
[]
1
https://aclanthology.org/2024.emnlp-main.645.bib
https://aclanthology.org/2024.emnlp-main.645/
@inproceedings{lindemann-etal-2024-strengthening, title = "Strengthening Structural Inductive Biases by Pre-training to Perform Syntactic Transformations", author = "Lindemann, Matthias and Koller, Alexander and Titov, Ivan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.645", pages = "11558--11573", abstract = "Models need appropriate inductive biases to effectively learn from small amounts of data and generalize systematically outside of the training distribution. While Transformers are highly versatile and powerful, they can still benefit from enhanced structural inductive biases for seq2seq tasks, especially those involving syntactic transformations, such as converting active to passive voice or semantic parsing. In this paper, we propose to strengthen the structural inductive bias of a Transformer by intermediate pre-training to perform synthetically generated syntactic transformations of dependency trees given a description of the transformation. Our experiments confirm that this helps with few-shot learning of syntactic tasks such as chunking, and also improves structural generalization for semantic parsing. Our analysis shows that the intermediate pre-training leads to attention heads that keep track of which syntactic transformation needs to be applied to which token, and that the model can leverage these attention heads on downstream tasks.", }
Models need appropriate inductive biases to effectively learn from small amounts of data and generalize systematically outside of the training distribution. While Transformers are highly versatile and powerful, they can still benefit from enhanced structural inductive biases for seq2seq tasks, especially those involving syntactic transformations, such as converting active to passive voice or semantic parsing. In this paper, we propose to strengthen the structural inductive bias of a Transformer by intermediate pre-training to perform synthetically generated syntactic transformations of dependency trees given a description of the transformation. Our experiments confirm that this helps with few-shot learning of syntactic tasks such as chunking, and also improves structural generalization for semantic parsing. Our analysis shows that the intermediate pre-training leads to attention heads that keep track of which syntactic transformation needs to be applied to which token, and that the model can leverage these attention heads on downstream tasks.
[ "Lindemann, Matthias", "Koller, Alex", "er", "Titov, Ivan" ]
Strengthening Structural Inductive Biases by Pre-training to Perform Syntactic Transformations
emnlp-main.645
Poster
2407.04543
[ "https://github.com/namednil/step" ]
https://huggingface.co/papers/2407.04543
0
0
0
3
[ "namednil/STEP" ]
[]
[]
[ "namednil/STEP" ]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.646.bib
https://aclanthology.org/2024.emnlp-main.646/
@inproceedings{giadikiaroglou-etal-2024-puzzle, title = "Puzzle Solving using Reasoning of Large Language Models: A Survey", author = "Giadikiaroglou, Panagiotis and Lymperaiou, Maria and Filandrianos, Giorgos and Stamou, Giorgos", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.646", pages = "11574--11591", abstract = "Exploring the capabilities of Large Language Models (LLMs) in puzzle solving unveils critical insights into their potential and challenges in AI, marking a significant step towards understanding their applicability in complex reasoning tasks. This survey leverages a unique taxonomy{---}dividing puzzles into rule-based and rule-less categories{---}to critically assess LLMs through various methodologies, including prompting techniques, neuro-symbolic approaches, and fine-tuning. Through a critical review of relevant datasets and benchmarks, we assess LLMs{'} performance, identifying significant challenges in complex puzzle scenarios. Our findings highlight the disparity between LLM capabilities and human-like reasoning, particularly in those requiring advanced logical inference. The survey underscores the necessity for novel strategies and richer datasets to advance LLMs{'} puzzle-solving proficiency and contribute to AI{'}s logical reasoning and creative problem-solving advancements.", }
Exploring the capabilities of Large Language Models (LLMs) in puzzle solving unveils critical insights into their potential and challenges in AI, marking a significant step towards understanding their applicability in complex reasoning tasks. This survey leverages a unique taxonomy{---}dividing puzzles into rule-based and rule-less categories{---}to critically assess LLMs through various methodologies, including prompting techniques, neuro-symbolic approaches, and fine-tuning. Through a critical review of relevant datasets and benchmarks, we assess LLMs{'} performance, identifying significant challenges in complex puzzle scenarios. Our findings highlight the disparity between LLM capabilities and human-like reasoning, particularly in those requiring advanced logical inference. The survey underscores the necessity for novel strategies and richer datasets to advance LLMs{'} puzzle-solving proficiency and contribute to AI{'}s logical reasoning and creative problem-solving advancements.
[ "Giadikiaroglou, Panagiotis", "Lymperaiou, Maria", "Fil", "rianos, Giorgos", "Stamou, Giorgos" ]
Puzzle Solving using Reasoning of Large Language Models: A Survey
emnlp-main.646
Poster
2402.11291
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.647.bib
https://aclanthology.org/2024.emnlp-main.647/
@inproceedings{dinh-etal-2024-sciex, title = "{S}ci{E}x: Benchmarking Large Language Models on Scientific Exams with Human Expert Grading and Automatic Grading", author = {Dinh, Tu Anh and Mullov, Carlos and B{\"a}rmann, Leonard and Li, Zhaolin and Liu, Danni and Rei{\ss}, Simon and Lee, Jueun and Lerzer, Nathan and Gao, Jianfeng and Peller-Konrad, Fabian and R{\"o}ddiger, Tobias and Waibel, Alexander and Asfour, Tamim and Beigl, Michael and Stiefelhagen, Rainer and Dachsbacher, Carsten and B{\"o}hm, Klemens and Niehues, Jan}, editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.647", pages = "11592--11610", abstract = "With the rapid development of Large Language Models (LLMs), it is crucial to have benchmarks which can evaluate the ability of LLMs on different domains. One common use of LLMs is performing tasks on scientific topics, such as writing algorithms, querying databases or giving mathematical proofs. Inspired by the way university students are evaluated on such tasks, in this paper, we propose SciEx - a benchmark consisting of university computer science exam questions, to evaluate LLMs{'} ability on solving scientific tasks. SciEx is (1) multilingual, containing both English and German exams, and (2) multi-modal, containing questions that involve images, and (3) contains various types of freeform questions with different difficulty levels, due to the nature of university exams. We evaluate the performance of various state-of-the-art LLMs on our new benchmark. Since SciEx questions are freeform, it is not straightforward to evaluate LLM performance. Therefore, we provide human expert grading of the LLM outputs on SciEx. We show that the free-form exams in SciEx remain challenging for the current LLMs, where the best LLM only achieves 59.4{\%} exam grade on average. We also provide detailed comparisons between LLM performance and student performance on SciEx. To enable future evaluation of new LLMs, we propose using LLM-as-a-judge to grade the LLM answers on SciEx. Our experiments show that, although they do not perform perfectly on solving the exams, LLMs are decent as graders, achieving 0.948 Pearson correlation with expert grading.", }
With the rapid development of Large Language Models (LLMs), it is crucial to have benchmarks which can evaluate the ability of LLMs on different domains. One common use of LLMs is performing tasks on scientific topics, such as writing algorithms, querying databases or giving mathematical proofs. Inspired by the way university students are evaluated on such tasks, in this paper, we propose SciEx - a benchmark consisting of university computer science exam questions, to evaluate LLMs{'} ability on solving scientific tasks. SciEx is (1) multilingual, containing both English and German exams, and (2) multi-modal, containing questions that involve images, and (3) contains various types of freeform questions with different difficulty levels, due to the nature of university exams. We evaluate the performance of various state-of-the-art LLMs on our new benchmark. Since SciEx questions are freeform, it is not straightforward to evaluate LLM performance. Therefore, we provide human expert grading of the LLM outputs on SciEx. We show that the free-form exams in SciEx remain challenging for the current LLMs, where the best LLM only achieves 59.4{\%} exam grade on average. We also provide detailed comparisons between LLM performance and student performance on SciEx. To enable future evaluation of new LLMs, we propose using LLM-as-a-judge to grade the LLM answers on SciEx. Our experiments show that, although they do not perform perfectly on solving the exams, LLMs are decent as graders, achieving 0.948 Pearson correlation with expert grading.
[ "Dinh, Tu Anh", "Mullov, Carlos", "B{\\\"a}rmann, Leonard", "Li, Zhaolin", "Liu, Danni", "Rei{\\ss}, Simon", "Lee, Jueun", "Lerzer, Nathan", "Gao, Jianfeng", "Peller-Konrad, Fabian", "R{\\\"o}ddiger, Tobias", "Waibel, Alex", "er", "Asfour, Tamim", "Beigl, Michael", "Stiefelhagen, Rainer", "Dachsbacher, Carsten", "B{\\\"o}hm, Klemens", "Niehues, Jan" ]
SciEx: Benchmarking Large Language Models on Scientific Exams with Human Expert Grading and Automatic Grading
emnlp-main.647
Poster
2406.10421
[ "https://github.com/TuAnh23/SciEx" ]
https://huggingface.co/papers/2406.10421
0
0
0
18
[]
[ "tuanh23/SciEx" ]
[]
[]
[ "tuanh23/SciEx" ]
[]
1
https://aclanthology.org/2024.emnlp-main.648.bib
https://aclanthology.org/2024.emnlp-main.648/
@inproceedings{wen-etal-2024-red, title = "Red Teaming Language Models for Processing Contradictory Dialogues", author = "Wen, Xiaofei and Li, Bangzheng and Huang, Tenghao and Chen, Muhao", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.648", pages = "11611--11630", abstract = "Most language models currently available are prone to self-contradiction during dialogues. To mitigate this issue, this study explores a novel contradictory dialogue processing task that aims to detect and modify contradictory statements in a conversation. This task is inspired by research on context faithfulness and dialogue comprehension, which have demonstrated that the detection and understanding of contradictions often necessitate detailed explanations. We develop a dataset comprising contradictory dialogues, in which one side of the conversation contradicts itself. Each dialogue is accompanied by an explanatory label that highlights the location and details of the contradiction. With this dataset, we present a Red Teaming framework for contradictory dialogue processing. The framework detects and attempts to explain the dialogue, then modifies the existing contradictory content using the explanation. Our experiments demonstrate that the framework improves the ability to detect contradictory dialogues and provides valid explanations. Additionally, it showcases distinct capabilities for modifying such dialogues. Our study highlights the importance of the logical inconsistency problem in conversational AI.", }
Most language models currently available are prone to self-contradiction during dialogues. To mitigate this issue, this study explores a novel contradictory dialogue processing task that aims to detect and modify contradictory statements in a conversation. This task is inspired by research on context faithfulness and dialogue comprehension, which have demonstrated that the detection and understanding of contradictions often necessitate detailed explanations. We develop a dataset comprising contradictory dialogues, in which one side of the conversation contradicts itself. Each dialogue is accompanied by an explanatory label that highlights the location and details of the contradiction. With this dataset, we present a Red Teaming framework for contradictory dialogue processing. The framework detects and attempts to explain the dialogue, then modifies the existing contradictory content using the explanation. Our experiments demonstrate that the framework improves the ability to detect contradictory dialogues and provides valid explanations. Additionally, it showcases distinct capabilities for modifying such dialogues. Our study highlights the importance of the logical inconsistency problem in conversational AI.
[ "Wen, Xiaofei", "Li, Bangzheng", "Huang, Tenghao", "Chen, Muhao" ]
Red Teaming Language Models for Processing Contradictory Dialogues
emnlp-main.648
Poster
2405.10128
[ "https://github.com/luka-group/contraDialog" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.649.bib
https://aclanthology.org/2024.emnlp-main.649/
@inproceedings{land-bartolo-2024-fishing, title = "Fishing for Magikarp: Automatically Detecting Under-trained Tokens in Large Language Models", author = "Land, Sander and Bartolo, Max", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.649", pages = "11631--11646", abstract = "The disconnect between tokenizer creation and model training in language models allows for specific inputs, such as the infamous SolidGoldMagikarp token, to induce unwanted model behaviour. Although such {`}glitch tokens{'}, tokens present in the tokenizer vocabulary but that are nearly or entirely absent during model training, have been observed across various models, a reliable method to identify and address them has been missing. We present a comprehensive analysis of Large Language Model tokenizers, specifically targeting this issue of detecting under-trained tokens. Through a combination of tokenizer analysis, model weight-based indicators, and prompting techniques, we develop novel and effective methods for automatically detecting these problematic tokens. Our findings demonstrate the prevalence of such tokens across a diverse set of models and provide insights into improving the efficiency and safety of language models.", }
The disconnect between tokenizer creation and model training in language models allows for specific inputs, such as the infamous SolidGoldMagikarp token, to induce unwanted model behaviour. Although such {`}glitch tokens{'}, tokens present in the tokenizer vocabulary but that are nearly or entirely absent during model training, have been observed across various models, a reliable method to identify and address them has been missing. We present a comprehensive analysis of Large Language Model tokenizers, specifically targeting this issue of detecting under-trained tokens. Through a combination of tokenizer analysis, model weight-based indicators, and prompting techniques, we develop novel and effective methods for automatically detecting these problematic tokens. Our findings demonstrate the prevalence of such tokens across a diverse set of models and provide insights into improving the efficiency and safety of language models.
[ "L", ", S", "er", "Bartolo, Max" ]
Fishing for Magikarp: Automatically Detecting Under-trained Tokens in Large Language Models
emnlp-main.649
Oral
2405.05417
[ "https://github.com/cohere-ai/magikarp" ]
https://huggingface.co/papers/2405.05417
0
0
0
2
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.650.bib
https://aclanthology.org/2024.emnlp-main.650/
@inproceedings{mehrafarin-etal-2024-reasoning, title = "Reasoning or a Semblance of it? A Diagnostic Study of Transitive Reasoning in {LLM}s", author = "Mehrafarin, Houman and Eshghi, Arash and Konstas, Ioannis", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.650", pages = "11647--11662", abstract = "Evaluating Large Language Models (LLMs) on reasoning benchmarks demonstrates their ability to solve compositional questions. However, little is known of whether these models engage in genuine logical reasoning or simply rely on implicit cues to generate answers. In this paper, we investigate the transitive reasoning capabilities of two distinct LLM architectures, LLaMA 2 and Flan-T5, by manipulating facts within two compositional datasets: QASC and Bamboogle. We controlled for potential cues that might influence the models{'} performance, including (a) word/phrase overlaps across sections of test input; (b) models{'} inherent knowledge during pre-training or fine-tuning; and (c) Named Entities. Our findings reveal that while both models leverage (a), Flan-T5 shows more resilience to experiments (b and c), having less variance than LLaMA 2. This suggests that models may develop an understanding of transitivity through fine-tuning on knowingly relevant datasets, a hypothesis we leave to future work.", }
Evaluating Large Language Models (LLMs) on reasoning benchmarks demonstrates their ability to solve compositional questions. However, little is known of whether these models engage in genuine logical reasoning or simply rely on implicit cues to generate answers. In this paper, we investigate the transitive reasoning capabilities of two distinct LLM architectures, LLaMA 2 and Flan-T5, by manipulating facts within two compositional datasets: QASC and Bamboogle. We controlled for potential cues that might influence the models{'} performance, including (a) word/phrase overlaps across sections of test input; (b) models{'} inherent knowledge during pre-training or fine-tuning; and (c) Named Entities. Our findings reveal that while both models leverage (a), Flan-T5 shows more resilience to experiments (b and c), having less variance than LLaMA 2. This suggests that models may develop an understanding of transitivity through fine-tuning on knowingly relevant datasets, a hypothesis we leave to future work.
[ "Mehrafarin, Houman", "Eshghi, Arash", "Konstas, Ioannis" ]
Reasoning or a Semblance of it? A Diagnostic Study of Transitive Reasoning in LLMs
emnlp-main.650
Poster
2410.20200
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.651.bib
https://aclanthology.org/2024.emnlp-main.651/
@inproceedings{gubelmann-2024-pragmatic, title = "Pragmatic Norms Are All You Need {--} Why The Symbol Grounding Problem Does Not Apply to {LLM}s", author = "Gubelmann, Reto", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.651", pages = "11663--11678", abstract = "Do LLMs fall prey to Harnad{'}s symbol grounding problem (SGP), as it has recently been claimed? We argue that this is not the case. Starting out with countering the arguments of Bender and Koller (2020), we trace the origins of the SGP to the computational theory of mind (CTM), and we show that it only arises with natural language when questionable theories of meaning are presupposed. We conclude by showing that it would apply to LLMs only if they were interpreted in the manner of how the CTM conceives the mind, i.e., by postulating that LLMs rely on a version of a language of thought, or by adopting said questionable theories of meaning; since neither option is rational, we conclude that the SGP does not apply to LLMs.", }
Do LLMs fall prey to Harnad{'}s symbol grounding problem (SGP), as it has recently been claimed? We argue that this is not the case. Starting out with countering the arguments of Bender and Koller (2020), we trace the origins of the SGP to the computational theory of mind (CTM), and we show that it only arises with natural language when questionable theories of meaning are presupposed. We conclude by showing that it would apply to LLMs only if they were interpreted in the manner of how the CTM conceives the mind, i.e., by postulating that LLMs rely on a version of a language of thought, or by adopting said questionable theories of meaning; since neither option is rational, we conclude that the SGP does not apply to LLMs.
[ "Gubelmann, Reto" ]
Pragmatic Norms Are All You Need – Why The Symbol Grounding Problem Does Not Apply to LLMs
emnlp-main.651
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.652.bib
https://aclanthology.org/2024.emnlp-main.652/
@inproceedings{sundar-etal-2024-major, title = "Major Entity Identification: A Generalizable Alternative to Coreference Resolution", author = "Sundar, Kawshik Manikantan and Toshniwal, Shubham and Tapaswi, Makarand and Gandhi, Vineet", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.652", pages = "11679--11695", abstract = "The limited generalization of coreference resolution (CR) models has been a major bottleneck in the task{'}s broad application. Prior work has identified annotation differences, especially for mention detection, as one of the main reasons for the generalization gap and proposed using additional annotated target domain data. Rather than relying on this additional annotation, we propose an alternative referential task, Major Entity Identification (MEI), where we: (a) assume the target entities to be specified in the input, and (b) limit the task to only the frequent entities. Through extensive experiments, we demonstrate that MEI models generalize well across domains on multiple datasets with supervised models and LLM-based few-shot prompting. Additionally, MEI fits the classification framework, which enables the use of robust and intuitive classification-based metrics. Finally, MEI is also of practical use as it allows a user to search for all mentions of a particular entity or a group of entities of interest.", }
The limited generalization of coreference resolution (CR) models has been a major bottleneck in the task{'}s broad application. Prior work has identified annotation differences, especially for mention detection, as one of the main reasons for the generalization gap and proposed using additional annotated target domain data. Rather than relying on this additional annotation, we propose an alternative referential task, Major Entity Identification (MEI), where we: (a) assume the target entities to be specified in the input, and (b) limit the task to only the frequent entities. Through extensive experiments, we demonstrate that MEI models generalize well across domains on multiple datasets with supervised models and LLM-based few-shot prompting. Additionally, MEI fits the classification framework, which enables the use of robust and intuitive classification-based metrics. Finally, MEI is also of practical use as it allows a user to search for all mentions of a particular entity or a group of entities of interest.
[ "Sundar, Kawshik Manikantan", "Toshniwal, Shubham", "Tapaswi, Makar", "", "G", "hi, Vineet" ]
Major Entity Identification: A Generalizable Alternative to Coreference Resolution
emnlp-main.652
Poster
2406.14654
[ "https://github.com/KawshikManikantan/MEI" ]
https://huggingface.co/papers/2406.14654
1
0
0
4
[]
[]
[ "KawshikManikantan/MEIRa" ]
[]
[]
[ "KawshikManikantan/MEIRa" ]
1
https://aclanthology.org/2024.emnlp-main.653.bib
https://aclanthology.org/2024.emnlp-main.653/
@inproceedings{wang-etal-2024-enhancing-high, title = "Enhancing High-order Interaction Awareness in {LLM}-based Recommender Model", author = "Wang, Xinfeng and Cui, Jin and Fukumoto, Fumiyo and Suzuki, Yoshimi", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.653", pages = "11696--11711", abstract = "Large language models (LLMs) have demonstrated prominent reasoning capabilities in recommendation tasks by transforming them into text-generation tasks. However, existing approaches either disregard or ineffectively model the user-item high-order interactions. To this end, this paper presents an enhanced LLM-based recommender (ELMRec). We enhance whole-word embeddings to substantially enhance LLMs{'} interpretation of graph-constructed interactions for recommendations, without requiring graph pre-training. This finding may inspire endeavors to incorporate rich knowledge graphs into LLM-based recommenders via whole-word embedding. We also found that LLMs often recommend items based on users{'} earlier interactions rather than recent ones, and present a reranking solution. Our ELMRec outperforms state-of-the-art (SOTA) methods, especially achieving a 124.3{\%} to 293.7{\%} improvement over SOTA LLM-based methods in direct recommendations. Our code is available online.", }
Large language models (LLMs) have demonstrated prominent reasoning capabilities in recommendation tasks by transforming them into text-generation tasks. However, existing approaches either disregard or ineffectively model the user-item high-order interactions. To this end, this paper presents an enhanced LLM-based recommender (ELMRec). We enhance whole-word embeddings to substantially enhance LLMs{'} interpretation of graph-constructed interactions for recommendations, without requiring graph pre-training. This finding may inspire endeavors to incorporate rich knowledge graphs into LLM-based recommenders via whole-word embedding. We also found that LLMs often recommend items based on users{'} earlier interactions rather than recent ones, and present a reranking solution. Our ELMRec outperforms state-of-the-art (SOTA) methods, especially achieving a 124.3{\%} to 293.7{\%} improvement over SOTA LLM-based methods in direct recommendations. Our code is available online.
[ "Wang, Xinfeng", "Cui, Jin", "Fukumoto, Fumiyo", "Suzuki, Yoshimi" ]
Enhancing High-order Interaction Awareness in LLM-based Recommender Model
emnlp-main.653
Poster
2409.19979
[ "https://github.com/WangXFng/ELMRec" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.654.bib
https://aclanthology.org/2024.emnlp-main.654/
@inproceedings{paruchuri-etal-2024-odds, title = "What Are the Odds? Language Models Are Capable of Probabilistic Reasoning", author = "Paruchuri, Akshay and Garrison, Jake and Liao, Shun and Hernandez, John B and Sunshine, Jacob and Althoff, Tim and Liu, Xin and McDuff, Daniel", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.654", pages = "11712--11733", abstract = "Language models (LM) are capable of remarkably complex linguistic tasks; however, numerical reasoning is an area in which they frequently struggle. An important but rarely evaluated form of reasoning is understanding probability distributions. In this paper, we focus on evaluating the probabilistic reasoning capabilities of LMs using idealized and real-world statistical distributions. We perform a systematic evaluation of state-of-the-art LMs on three tasks: estimating percentiles, drawing samples, and calculating probabilities. We evaluate three ways to provide context to LMs 1) anchoring examples from within a distribution or family of distributions, 2) real-world context, 3) summary statistics on which to base a Normal approximation. Models can make inferences about distributions, and can be further aided by the incorporation of real-world context, example shots and simplified assumptions, even if these assumptions are incorrect or misspecified. To conduct this work, we developed a comprehensive benchmark distribution dataset with associated question-answer pairs that we have released publicly.", }
Language models (LM) are capable of remarkably complex linguistic tasks; however, numerical reasoning is an area in which they frequently struggle. An important but rarely evaluated form of reasoning is understanding probability distributions. In this paper, we focus on evaluating the probabilistic reasoning capabilities of LMs using idealized and real-world statistical distributions. We perform a systematic evaluation of state-of-the-art LMs on three tasks: estimating percentiles, drawing samples, and calculating probabilities. We evaluate three ways to provide context to LMs 1) anchoring examples from within a distribution or family of distributions, 2) real-world context, 3) summary statistics on which to base a Normal approximation. Models can make inferences about distributions, and can be further aided by the incorporation of real-world context, example shots and simplified assumptions, even if these assumptions are incorrect or misspecified. To conduct this work, we developed a comprehensive benchmark distribution dataset with associated question-answer pairs that we have released publicly.
[ "Paruchuri, Akshay", "Garrison, Jake", "Liao, Shun", "Hern", "ez, John B", "Sunshine, Jacob", "Althoff, Tim", "Liu, Xin", "McDuff, Daniel" ]
What Are the Odds? Language Models Are Capable of Probabilistic Reasoning
emnlp-main.654
Poster
2406.12830
[ "https://github.com/yahskapar/LLMs-and-Probabilistic-Reasoning" ]
https://huggingface.co/papers/2406.12830
0
0
0
8
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.655.bib
https://aclanthology.org/2024.emnlp-main.655/
@inproceedings{jiang-etal-2024-mare, title = "{MARE}: Multi-Aspect Rationale Extractor on Unsupervised Rationale Extraction", author = "Jiang, Han and Duan, Junwen and Qu, Zhe and Wang, Jianxin", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.655", pages = "11734--11745", abstract = "Unsupervised rationale extraction aims to extract text snippets to support model predictions without explicit rationale annotation.Researchers have made many efforts to solve this task. Previous works often encode each aspect independently, which may limit their ability to capture meaningful internal correlations between aspects. While there has been significant work on mitigating spurious correlations, our approach focuses on leveraging the beneficial internal correlations to improve multi-aspect rationale extraction. In this paper, we propose a Multi-Aspect Rationale Extractor (MARE) to explain and predict multiple aspects simultaneously. Concretely, we propose a Multi-Aspect Multi-Head Attention (MAMHA) mechanism based on hard deletion to encode multiple text chunks simultaneously. Furthermore, multiple special tokens are prepended in front of the text with each corresponding to one certain aspect. Finally, multi-task training is deployed to reduce the training overhead. Experimental results on two unsupervised rationale extraction benchmarks show that MARE achieves state-of-the-art performance. Ablation studies further demonstrate the effectiveness of our method. Our codes have been available at https://github.com/CSU-NLP-Group/MARE.", }
Unsupervised rationale extraction aims to extract text snippets to support model predictions without explicit rationale annotation.Researchers have made many efforts to solve this task. Previous works often encode each aspect independently, which may limit their ability to capture meaningful internal correlations between aspects. While there has been significant work on mitigating spurious correlations, our approach focuses on leveraging the beneficial internal correlations to improve multi-aspect rationale extraction. In this paper, we propose a Multi-Aspect Rationale Extractor (MARE) to explain and predict multiple aspects simultaneously. Concretely, we propose a Multi-Aspect Multi-Head Attention (MAMHA) mechanism based on hard deletion to encode multiple text chunks simultaneously. Furthermore, multiple special tokens are prepended in front of the text with each corresponding to one certain aspect. Finally, multi-task training is deployed to reduce the training overhead. Experimental results on two unsupervised rationale extraction benchmarks show that MARE achieves state-of-the-art performance. Ablation studies further demonstrate the effectiveness of our method. Our codes have been available at https://github.com/CSU-NLP-Group/MARE.
[ "Jiang, Han", "Duan, Junwen", "Qu, Zhe", "Wang, Jianxin" ]
MARE: Multi-Aspect Rationale Extractor on Unsupervised Rationale Extraction
emnlp-main.655
Poster
2410.03531
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.656.bib
https://aclanthology.org/2024.emnlp-main.656/
@inproceedings{elesedy-etal-2024-lora, title = "{L}o{RA}-Guard: Parameter-Efficient Guardrail Adaptation for Content Moderation of Large Language Models", author = "Elesedy, Hayder and Esperanca, Pedro M and Oprea, Silviu Vlad and Ozay, Mete", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.656", pages = "11746--11765", abstract = "Guardrails have emerged as an alternative to safety alignment for content moderation of large language models (LLMs). Existing model-based guardrails have not been designed for resource-constrained computational portable devices, such as mobile phones, more and more of which are running LLM-based applications locally. We introduce LoRA-Guard, a parameter-efficient guardrail adaptation method that relies on knowledge sharing between LLMs and guardrail models. LoRA-Guard extracts language features from the LLMs and adapts them for the content moderation task using low-rank adapters, while a dual-path design prevents any performance degradation on the generative task. We show that LoRA-Guard outperforms existing approaches with 100-1000x lower parameter overhead while maintaining accuracy, enabling on-device content moderation.", }
Guardrails have emerged as an alternative to safety alignment for content moderation of large language models (LLMs). Existing model-based guardrails have not been designed for resource-constrained computational portable devices, such as mobile phones, more and more of which are running LLM-based applications locally. We introduce LoRA-Guard, a parameter-efficient guardrail adaptation method that relies on knowledge sharing between LLMs and guardrail models. LoRA-Guard extracts language features from the LLMs and adapts them for the content moderation task using low-rank adapters, while a dual-path design prevents any performance degradation on the generative task. We show that LoRA-Guard outperforms existing approaches with 100-1000x lower parameter overhead while maintaining accuracy, enabling on-device content moderation.
[ "Elesedy, Hayder", "Esperanca, Pedro M", "Oprea, Silviu Vlad", "Ozay, Mete" ]
LoRA-Guard: Parameter-Efficient Guardrail Adaptation for Content Moderation of Large Language Models
emnlp-main.656
Poster
2407.02987
[ "" ]
https://huggingface.co/papers/2407.02987
0
0
0
4
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.657.bib
https://aclanthology.org/2024.emnlp-main.657/
@inproceedings{xu-etal-2024-good, title = "{``}A good pun is its own reword{''}: Can Large Language Models Understand Puns?", author = "Xu, Zhijun and Yuan, Siyu and Chen, Lingjie and Yang, Deqing", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.657", pages = "11766--11782", abstract = "Puns play a vital role in academic research due to their distinct structure and clear definition, which aid in the comprehensive analysis of linguistic humor. However, the understanding of puns in large language models (LLMs) has not been thoroughly examined, limiting their use in creative writing and humor creation. In this paper, we leverage three popular tasks, i.e., pun recognition, explanation and generation to systematically evaluate the capabilities of LLMs in pun understanding. In addition to adopting the automated evaluation metrics from prior research, we introduce new evaluation methods and metrics that are better suited to the in-context learning paradigm of LLMs. These new metrics offer a more rigorous assessment of an LLM{'}s ability to understand puns and align more closely with human cognition than previous metrics. Our findings reveal the {``}lazy pun generation{''} pattern and identify the primary challenges LLMs encounter in understanding puns.", }
Puns play a vital role in academic research due to their distinct structure and clear definition, which aid in the comprehensive analysis of linguistic humor. However, the understanding of puns in large language models (LLMs) has not been thoroughly examined, limiting their use in creative writing and humor creation. In this paper, we leverage three popular tasks, i.e., pun recognition, explanation and generation to systematically evaluate the capabilities of LLMs in pun understanding. In addition to adopting the automated evaluation metrics from prior research, we introduce new evaluation methods and metrics that are better suited to the in-context learning paradigm of LLMs. These new metrics offer a more rigorous assessment of an LLM{'}s ability to understand puns and align more closely with human cognition than previous metrics. Our findings reveal the {``}lazy pun generation{''} pattern and identify the primary challenges LLMs encounter in understanding puns.
[ "Xu, Zhijun", "Yuan, Siyu", "Chen, Lingjie", "Yang, Deqing" ]
“A good pun is its own reword”: Can Large Language Models Understand Puns?
emnlp-main.657
Poster
[ "https://github.com/zhijun-xu/puneval" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.658.bib
https://aclanthology.org/2024.emnlp-main.658/
@inproceedings{fu-etal-2024-qgeval, title = "{QGE}val: Benchmarking Multi-dimensional Evaluation for Question Generation", author = "Fu, Weiping and Wei, Bifan and Hu, Jianxiang and Cai, Zhongmin and Liu, Jun", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.658", pages = "11783--11803", abstract = "Automatically generated questions often suffer from problems such as unclear expression or factual inaccuracies, requiring a reliable and comprehensive evaluation of their quality. Human evaluation is widely used in the field of question generation (QG) and serves as the gold standard for automatic metrics. However, there is a lack of unified human evaluation criteria, which hampers consistent and reliable evaluations of both QG models and automatic metrics. To address this, we propose **QGEval**, a multi-dimensional **Eval**uation benchmark for **Q**uestion **G**eneration, which evaluates both generated questions and existing automatic metrics across 7 dimensions: fluency, clarity, conciseness, relevance, consistency, answerability, and answer consistency. We demonstrate the appropriateness of these dimensions by examining their correlations and distinctions. Through consistent evaluations of QG models and automatic metrics with QGEval, we find that 1) most QG models perform unsatisfactorily in terms of answerability and answer consistency, and 2) existing metrics fail to align well with human judgments when evaluating generated questions across the 7 dimensions. We expect this work to foster the development of both QG technologies and their evaluation.", }
Automatically generated questions often suffer from problems such as unclear expression or factual inaccuracies, requiring a reliable and comprehensive evaluation of their quality. Human evaluation is widely used in the field of question generation (QG) and serves as the gold standard for automatic metrics. However, there is a lack of unified human evaluation criteria, which hampers consistent and reliable evaluations of both QG models and automatic metrics. To address this, we propose **QGEval**, a multi-dimensional **Eval**uation benchmark for **Q**uestion **G**eneration, which evaluates both generated questions and existing automatic metrics across 7 dimensions: fluency, clarity, conciseness, relevance, consistency, answerability, and answer consistency. We demonstrate the appropriateness of these dimensions by examining their correlations and distinctions. Through consistent evaluations of QG models and automatic metrics with QGEval, we find that 1) most QG models perform unsatisfactorily in terms of answerability and answer consistency, and 2) existing metrics fail to align well with human judgments when evaluating generated questions across the 7 dimensions. We expect this work to foster the development of both QG technologies and their evaluation.
[ "Fu, Weiping", "Wei, Bifan", "Hu, Jianxiang", "Cai, Zhongmin", "Liu, Jun" ]
QGEval: Benchmarking Multi-dimensional Evaluation for Question Generation
emnlp-main.658
Poster
2406.05707
[ "https://github.com/weipingfu/qgeval" ]
https://huggingface.co/papers/2406.05707
0
0
0
5
[ "fwp/BART-base-HotpotQA-finetune", "QGEval2024/bart-base-hotpotqa-finetune-qg" ]
[]
[]
[ "fwp/BART-base-HotpotQA-finetune", "QGEval2024/bart-base-hotpotqa-finetune-qg" ]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.659.bib
https://aclanthology.org/2024.emnlp-main.659/
@inproceedings{ezquerro-etal-2024-dependency, title = "Dependency Graph Parsing as Sequence Labeling", author = "Ezquerro, Ana and Vilares, David and G{\'o}mez-Rodr{\'\i}guez, Carlos", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.659", pages = "11804--11828", abstract = "Various linearizations have been proposed to cast syntactic dependency parsing as sequence labeling. However, these approaches do not support more complex graph-based representations, such as semantic dependencies or enhanced universal dependencies, as they cannot handle reentrancy or cycles. By extending them, we define a range of unbounded and bounded linearizations that can be used to cast graph parsing as a tagging task, enlarging the toolbox of problems that can be solved under this paradigm. Experimental results on semantic dependency and enhanced UD parsing show that with a good choice of encoding, sequence-labeling semantic dependency parsers combine high efficiency with accuracies close to the state of the art, in spite of their simplicity.", }
Various linearizations have been proposed to cast syntactic dependency parsing as sequence labeling. However, these approaches do not support more complex graph-based representations, such as semantic dependencies or enhanced universal dependencies, as they cannot handle reentrancy or cycles. By extending them, we define a range of unbounded and bounded linearizations that can be used to cast graph parsing as a tagging task, enlarging the toolbox of problems that can be solved under this paradigm. Experimental results on semantic dependency and enhanced UD parsing show that with a good choice of encoding, sequence-labeling semantic dependency parsers combine high efficiency with accuracies close to the state of the art, in spite of their simplicity.
[ "Ezquerro, Ana", "Vilares, David", "G{\\'o}mez-Rodr{\\'\\i}guez, Carlos" ]
Dependency Graph Parsing as Sequence Labeling
emnlp-main.659
Poster
2410.17972
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.660.bib
https://aclanthology.org/2024.emnlp-main.660/
@inproceedings{bogdanov-etal-2024-nuner, title = "{N}u{NER}: Entity Recognition Encoder Pre-training via {LLM}-Annotated Data", author = "Bogdanov, Sergei and Constantin, Alexandre and Bernard, Timoth{\'e}e and Crabb{\'e}, Benoit and Bernard, Etienne P", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.660", pages = "11829--11841", abstract = "Large Language Models (LLMs) have shown impressive abilities in data annotation, opening the way for new approaches to solve classic NLP problems. In this paper, we show how to use LLMs to create NuNER, a compact language representation model specialized in the Named Entity Recognition (NER) task. NuNER can be fine-tuned to solve downstream NER problems in a data-efficient way, outperforming similar-sized foundation models in the few-shot regime and competing with much larger LLMs. We find that the size and entity-type diversity of the pre-training dataset are key to achieving good performance. We view NuNER as a member of the broader family of task-specific foundation models, recently unlocked by LLMs. NuNER and NuNER{'}s dataset are open-sourced with MIT License.", }
Large Language Models (LLMs) have shown impressive abilities in data annotation, opening the way for new approaches to solve classic NLP problems. In this paper, we show how to use LLMs to create NuNER, a compact language representation model specialized in the Named Entity Recognition (NER) task. NuNER can be fine-tuned to solve downstream NER problems in a data-efficient way, outperforming similar-sized foundation models in the few-shot regime and competing with much larger LLMs. We find that the size and entity-type diversity of the pre-training dataset are key to achieving good performance. We view NuNER as a member of the broader family of task-specific foundation models, recently unlocked by LLMs. NuNER and NuNER{'}s dataset are open-sourced with MIT License.
[ "Bogdanov, Sergei", "Constantin, Alex", "re", "Bernard, Timoth{\\'e}e", "Crabb{\\'e}, Benoit", "Bernard, Etienne P" ]
NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data
emnlp-main.660
Poster
2402.15343
[ "https://github.com/Serega6678/NuNER" ]
https://huggingface.co/papers/2402.15343
3
12
0
5
[ "numind/NuNER_Zero", "numind/NuNER-v0.1", "numind/NuNER-multilingual-v0.1", "numind/NuNER-v2.0", "numind/NuNER_Zero-4k", "numind/NuNER_Zero-span", "numind/NuNER-v1.0", "guishe/nuner-v1_ontonotes5", "guishe/nuner-v1_orgs", "numind/NuNER-BERT-v1.0", "guishe/nuner-v1_fewnerd_coarse_super", "guishe/nuner-v1_fewnerd_fine_super", "jilijeanlouis/NuNER_Zero", "guishe/nuner-v2_fewnerd_fine_super" ]
[ "numind/NuNER" ]
[ "numind/NuNER_Zero", "Saripudin/zero-shot-demo", "Datasaur/zero-shot-demo" ]
[ "numind/NuNER_Zero", "numind/NuNER-v0.1", "numind/NuNER-multilingual-v0.1", "numind/NuNER-v2.0", "numind/NuNER_Zero-4k", "numind/NuNER_Zero-span", "numind/NuNER-v1.0", "guishe/nuner-v1_ontonotes5", "guishe/nuner-v1_orgs", "numind/NuNER-BERT-v1.0", "guishe/nuner-v1_fewnerd_coarse_super", "guishe/nuner-v1_fewnerd_fine_super", "jilijeanlouis/NuNER_Zero", "guishe/nuner-v2_fewnerd_fine_super" ]
[ "numind/NuNER" ]
[ "numind/NuNER_Zero", "Saripudin/zero-shot-demo", "Datasaur/zero-shot-demo" ]
1
https://aclanthology.org/2024.emnlp-main.661.bib
https://aclanthology.org/2024.emnlp-main.661/
@inproceedings{pavlopoulos-etal-2024-towards, title = "Towards a {G}reek Proverb Atlas: Computational Spatial Exploration and Attribution of {G}reek Proverbs", author = "Pavlopoulos, John and Louridas, Panos and Filos, Panagiotis", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.661", pages = "11842--11854", abstract = "Proverbs carry wisdom transferred orally from generation to generation. Based on the place they were recorded, this study introduces a publicly-available and machine-actionable dataset of more than one hundred thousand Greek proverb variants. By quantifying the spatial distribution of proverbs, we show that the most widespread proverbs come from the mainland while the least widespread proverbs come primarily from the islands. By focusing on the least dispersed proverbs, we present the most frequent tokens per location and undertake a benchmark in geographical attribution, using text classification and regression (text geocoding). Our results show that this is a challenging task for which specific locations can be attributed more successfully compared to others. The potential of our resource and benchmark is showcased by two novel applications. First, we extracted terms moving the regression prediction toward the four cardinal directions. Second, we leveraged conformal prediction to attribute 3,676 unregistered proverbs with statistically rigorous predictions of locations each of these proverbs was possibly registered in.", }
Proverbs carry wisdom transferred orally from generation to generation. Based on the place they were recorded, this study introduces a publicly-available and machine-actionable dataset of more than one hundred thousand Greek proverb variants. By quantifying the spatial distribution of proverbs, we show that the most widespread proverbs come from the mainland while the least widespread proverbs come primarily from the islands. By focusing on the least dispersed proverbs, we present the most frequent tokens per location and undertake a benchmark in geographical attribution, using text classification and regression (text geocoding). Our results show that this is a challenging task for which specific locations can be attributed more successfully compared to others. The potential of our resource and benchmark is showcased by two novel applications. First, we extracted terms moving the regression prediction toward the four cardinal directions. Second, we leveraged conformal prediction to attribute 3,676 unregistered proverbs with statistically rigorous predictions of locations each of these proverbs was possibly registered in.
[ "Pavlopoulos, John", "Louridas, Panos", "Filos, Panagiotis" ]
Towards a Greek Proverb Atlas: Computational Spatial Exploration and Attribution of Greek Proverbs
emnlp-main.661
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.662.bib
https://aclanthology.org/2024.emnlp-main.662/
@inproceedings{liu-etal-2024-unraveling, title = "Unraveling Babel: Exploring Multilingual Activation Patterns of {LLM}s and Their Applications", author = "Liu, Weize and Xu, Yinlong and Xu, Hongxia and Chen, Jintai and Hu, Xuming and Wu, Jian", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.662", pages = "11855--11881", abstract = "Recently, large language models (LLMs) have achieved tremendous breakthroughs in the field of NLP, but still lack understanding of their internal neuron activities when processing different languages. We designed a method to convert dense LLMs into fine-grained MoE architectures, and then visually studied the multilingual activation patterns of LLMs through expert activation frequency heatmaps. Through comprehensive experiments on different model families, different model sizes, and different variants, we analyzed the similarities and differences in the internal neuron activation patterns of LLMs when processing different languages. Specifically, we investigated the distribution of high-frequency activated experts, multilingual shared experts, whether multilingual activation patterns are related to language families, and the impact of instruction tuning on activation patterns. We further explored leveraging the discovered differences in expert activation frequencies to guide sparse activation and pruning. Experimental results demonstrated that our method significantly outperformed random expert pruning and even exceeded the performance of unpruned models in some languages. Additionally, we found that configuring different pruning rates for different layers based on activation level differences could achieve better results. Our findings reveal the multilingual processing mechanisms within LLMs and utilize these insights to offer new perspectives for applications such as sparse activation and model pruning.", }
Recently, large language models (LLMs) have achieved tremendous breakthroughs in the field of NLP, but still lack understanding of their internal neuron activities when processing different languages. We designed a method to convert dense LLMs into fine-grained MoE architectures, and then visually studied the multilingual activation patterns of LLMs through expert activation frequency heatmaps. Through comprehensive experiments on different model families, different model sizes, and different variants, we analyzed the similarities and differences in the internal neuron activation patterns of LLMs when processing different languages. Specifically, we investigated the distribution of high-frequency activated experts, multilingual shared experts, whether multilingual activation patterns are related to language families, and the impact of instruction tuning on activation patterns. We further explored leveraging the discovered differences in expert activation frequencies to guide sparse activation and pruning. Experimental results demonstrated that our method significantly outperformed random expert pruning and even exceeded the performance of unpruned models in some languages. Additionally, we found that configuring different pruning rates for different layers based on activation level differences could achieve better results. Our findings reveal the multilingual processing mechanisms within LLMs and utilize these insights to offer new perspectives for applications such as sparse activation and model pruning.
[ "Liu, Weize", "Xu, Yinlong", "Xu, Hongxia", "Chen, Jintai", "Hu, Xuming", "Wu, Jian" ]
Unraveling Babel: Exploring Multilingual Activation Patterns of LLMs and Their Applications
emnlp-main.662
Poster
2402.16367
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.663.bib
https://aclanthology.org/2024.emnlp-main.663/
@inproceedings{zhang-li-2024-advancing, title = "Advancing Semantic Textual Similarity Modeling: A Regression Framework with Translated {R}e{LU} and Smooth K2 Loss", author = "Zhang, Bowen and Li, Chunping", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.663", pages = "11882--11893", abstract = "Since the introduction of BERT and RoBERTa, research on Semantic Textual Similarity (STS) has made groundbreaking progress. Particularly, the adoption of contrastive learning has substantially elevated state-of-the-art performance across various STS benchmarks. However, contrastive learning categorizes text pairs as either semantically similar or dissimilar, failing to leverage fine-grained annotated information and necessitating large batch sizes to prevent model collapse. These constraints pose challenges for researchers engaged in STS tasks that involve nuanced similarity levels or those with limited computational resources, compelling them to explore alternatives like Sentence-BERT. Despite its efficiency, Sentence-BERT tackles STS tasks from a classification perspective, overlooking the progressive nature of semantic relationships, which results in suboptimal performance. To bridge this gap, this paper presents an innovative regression framework and proposes two simple yet effective loss functions: Translated ReLU and Smooth K2 Loss. Experimental results demonstrate that our method achieves convincing performance across seven established STS benchmarks and offers the potential for further optimization of contrastive learning pre-trained models.", }
Since the introduction of BERT and RoBERTa, research on Semantic Textual Similarity (STS) has made groundbreaking progress. Particularly, the adoption of contrastive learning has substantially elevated state-of-the-art performance across various STS benchmarks. However, contrastive learning categorizes text pairs as either semantically similar or dissimilar, failing to leverage fine-grained annotated information and necessitating large batch sizes to prevent model collapse. These constraints pose challenges for researchers engaged in STS tasks that involve nuanced similarity levels or those with limited computational resources, compelling them to explore alternatives like Sentence-BERT. Despite its efficiency, Sentence-BERT tackles STS tasks from a classification perspective, overlooking the progressive nature of semantic relationships, which results in suboptimal performance. To bridge this gap, this paper presents an innovative regression framework and proposes two simple yet effective loss functions: Translated ReLU and Smooth K2 Loss. Experimental results demonstrate that our method achieves convincing performance across seven established STS benchmarks and offers the potential for further optimization of contrastive learning pre-trained models.
[ "Zhang, Bowen", "Li, Chunping" ]
Advancing Semantic Textual Similarity Modeling: A Regression Framework with Translated ReLU and Smooth K2 Loss
emnlp-main.663
Poster
2406.05326
[ "https://github.com/ZBWpro/STS-Regression" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.664.bib
https://aclanthology.org/2024.emnlp-main.664/
@inproceedings{brinner-zarriess-2024-rationalizing, title = "Rationalizing Transformer Predictions via End-To-End Differentiable Self-Training", author = "Brinner, Marc Felix and Zarrie{\ss}, Sina", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.664", pages = "11894--11907", abstract = "We propose an end-to-end differentiable training paradigm for stable training of a rationalized transformer classifier. Our approach results in a single model that simultaneously classifies a sample and scores input tokens based on their relevance to the classification. To this end, we build on the widely-used three-player-game for training rationalized models, which typically relies on training a rationale selector, a classifier and a complement classifier. We simplify this approach by making a single model fulfill all three roles, leading to a more efficient training paradigm that is not susceptible to the common training instabilities that plague existing approaches. Further, we extend this paradigm to produce class-wise rationales while incorporating recent advances in parameterizing and regularizing the resulting rationales, thus leading to substantially improved and state-of-the-art alignment with human annotations without any explicit supervision.", }
We propose an end-to-end differentiable training paradigm for stable training of a rationalized transformer classifier. Our approach results in a single model that simultaneously classifies a sample and scores input tokens based on their relevance to the classification. To this end, we build on the widely-used three-player-game for training rationalized models, which typically relies on training a rationale selector, a classifier and a complement classifier. We simplify this approach by making a single model fulfill all three roles, leading to a more efficient training paradigm that is not susceptible to the common training instabilities that plague existing approaches. Further, we extend this paradigm to produce class-wise rationales while incorporating recent advances in parameterizing and regularizing the resulting rationales, thus leading to substantially improved and state-of-the-art alignment with human annotations without any explicit supervision.
[ "Brinner, Marc Felix", "Zarrie{\\ss}, Sina" ]
Rationalizing Transformer Predictions via End-To-End Differentiable Self-Training
emnlp-main.664
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.665.bib
https://aclanthology.org/2024.emnlp-main.665/
@inproceedings{frohmann-etal-2024-segment, title = "Segment Any Text: A Universal Approach for Robust, Efficient and Adaptable Sentence Segmentation", author = "Frohmann, Markus and Sterner, Igor and Vuli{\'c}, Ivan and Minixhofer, Benjamin and Schedl, Markus", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.665", pages = "11908--11941", abstract = "Segmenting text into sentences plays an early and crucial role in many NLP systems. This is commonly achieved by using rule-based or statistical methods relying on lexical features such as punctuation. Although some recent works no longer exclusively rely on punctuation, we find that no prior method achieves all of (i) robustness to missing punctuation, (ii) effective adaptability to new domains, and (iii) high efficiency. We introduce a new model {---} Segment any Text (SaT) {---} to solve this problem. To enhance robustness, we propose a new pretraining scheme that ensures less reliance on punctuation. To address adaptability, we introduce an extra stage of parameter-efficient fine-tuning, establishing state-of-the-art performance in distinct domains such as verses from lyrics and legal documents. Along the way, we introduce architectural modifications that result in a threefold gain in speed over the previous state of the art and solve spurious reliance on context far in the future. Finally, we introduce a variant of our model with fine-tuning on a diverse, multilingual mixture of sentence-segmented data, acting as a drop-in replacement and enhancement for existing segmentation tools. Overall, our contributions provide a universal approach for segmenting any text. Our method outperforms all baselines {---} including strong LLMs {---} across 8 corpora spanning diverse domains and languages, especially in practically relevant situations where text is poorly formatted. Our models and code, including documentation, are readily available at https://github.com/segment-any-text/wtpsplit under the MIT license.", }
Segmenting text into sentences plays an early and crucial role in many NLP systems. This is commonly achieved by using rule-based or statistical methods relying on lexical features such as punctuation. Although some recent works no longer exclusively rely on punctuation, we find that no prior method achieves all of (i) robustness to missing punctuation, (ii) effective adaptability to new domains, and (iii) high efficiency. We introduce a new model {---} Segment any Text (SaT) {---} to solve this problem. To enhance robustness, we propose a new pretraining scheme that ensures less reliance on punctuation. To address adaptability, we introduce an extra stage of parameter-efficient fine-tuning, establishing state-of-the-art performance in distinct domains such as verses from lyrics and legal documents. Along the way, we introduce architectural modifications that result in a threefold gain in speed over the previous state of the art and solve spurious reliance on context far in the future. Finally, we introduce a variant of our model with fine-tuning on a diverse, multilingual mixture of sentence-segmented data, acting as a drop-in replacement and enhancement for existing segmentation tools. Overall, our contributions provide a universal approach for segmenting any text. Our method outperforms all baselines {---} including strong LLMs {---} across 8 corpora spanning diverse domains and languages, especially in practically relevant situations where text is poorly formatted. Our models and code, including documentation, are readily available at https://github.com/segment-any-text/wtpsplit under the MIT license.
[ "Frohmann, Markus", "Sterner, Igor", "Vuli{\\'c}, Ivan", "Minixhofer, Benjamin", "Schedl, Markus" ]
Segment Any Text: A Universal Approach for Robust, Efficient and Adaptable Sentence Segmentation
emnlp-main.665
Poster
2406.16678
[ "https://github.com/segment-any-text/wtpsplit" ]
https://huggingface.co/papers/2406.16678
4
14
2
5
[ "segment-any-text/sat-12l-sm", "igorsterner/xlmr-multilingual-sentence-segmentation", "segment-any-text/sat-12l", "segment-any-text/sat-3l-sm", "segment-any-text/sat-6l-sm", "segment-any-text/sat-3l", "segment-any-text/sat-1l", "segment-any-text/sat-1l-sm", "segment-any-text/sat-9l", "segment-any-text/sat-9l-sm", "segment-any-text/sat-6l" ]
[]
[]
[ "segment-any-text/sat-12l-sm", "igorsterner/xlmr-multilingual-sentence-segmentation", "segment-any-text/sat-12l", "segment-any-text/sat-3l-sm", "segment-any-text/sat-6l-sm", "segment-any-text/sat-3l", "segment-any-text/sat-1l", "segment-any-text/sat-1l-sm", "segment-any-text/sat-9l", "segment-any-text/sat-9l-sm", "segment-any-text/sat-6l" ]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.666.bib
https://aclanthology.org/2024.emnlp-main.666/
@inproceedings{ji-etal-2024-applying, title = "Applying Contrastive Learning to Code Vulnerability Type Classification", author = "Ji, Chen and Yang, Su and Sun, Hongyu and Zhang, Yuqing", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.666", pages = "11942--11952", abstract = "Vulnerability classification is a crucial task in software security analysis, essential for identifying and mitigating potential security risks. Learning-based methods often perform poorly due to the long-tail distribution of vulnerability classification datasets. Recent approaches try to address the problem but treat each CWE class in isolation, ignoring their relationships. This results in non-scalable code vector representations, causing significant performance drops when handling complex real-world vulnerabilities. We propose a hierarchical contrastive learning framework for code vulnerability type classification to bring vector representations of related CWEs closer together. To address the issue of class collapse and enhance model robustness, we mix self-supervised contrastive learning loss into our loss function. Additionally, we employ max-pooling to enable the model to handle longer vulnerability code inputs. Extensive experiments demonstrate that our proposed framework outperforms state-of-the-art methods by 2.97{\%}$-$17.90{\%} on accuracy and 0.98{\%}$-$22.27{\%} on weighted-F1, with even better performance on higher-quality datasets. We also utilize an ablation study to prove each component{'}s contribution. These findings underscore the potential and advantages of our approach in the multi-class vulnerability classification task.", }
Vulnerability classification is a crucial task in software security analysis, essential for identifying and mitigating potential security risks. Learning-based methods often perform poorly due to the long-tail distribution of vulnerability classification datasets. Recent approaches try to address the problem but treat each CWE class in isolation, ignoring their relationships. This results in non-scalable code vector representations, causing significant performance drops when handling complex real-world vulnerabilities. We propose a hierarchical contrastive learning framework for code vulnerability type classification to bring vector representations of related CWEs closer together. To address the issue of class collapse and enhance model robustness, we mix self-supervised contrastive learning loss into our loss function. Additionally, we employ max-pooling to enable the model to handle longer vulnerability code inputs. Extensive experiments demonstrate that our proposed framework outperforms state-of-the-art methods by 2.97{\%}$-$17.90{\%} on accuracy and 0.98{\%}$-$22.27{\%} on weighted-F1, with even better performance on higher-quality datasets. We also utilize an ablation study to prove each component{'}s contribution. These findings underscore the potential and advantages of our approach in the multi-class vulnerability classification task.
[ "Ji, Chen", "Yang, Su", "Sun, Hongyu", "Zhang, Yuqing" ]
Applying Contrastive Learning to Code Vulnerability Type Classification
emnlp-main.666
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.667.bib
https://aclanthology.org/2024.emnlp-main.667/
@inproceedings{wang-etal-2024-theoremllama, title = "{T}heorem{L}lama: Transforming General-Purpose {LLM}s into Lean4 Experts", author = "Wang, Ruida and Zhang, Jipeng and Jia, Yizhen and Pan, Rui and Diao, Shizhe and Pi, Renjie and Zhang, Tong", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.667", pages = "11953--11974", abstract = "Proving mathematical theorems using computer-verifiable formal languages like Lean significantly impacts mathematical reasoning. One approach to formal theorem proving involves generating complete proofs using Large Language Models (LLMs) based on Natural Language (NL) proofs. However, due to the scarcity of aligned NL and Formal Language (FL) theorem-proving data most modern LLMs exhibit suboptimal performance.This scarcity results in a paucity of methodologies for training LLMs and techniques to fully utilize their capabilities in composing formal proofs. To address these challenges, this paper proposes **TheoremLlama**, an end-to-end framework that trains a general-purpose LLM to be a Lean4 expert. **TheoremLlama** includes NL-FL dataset generation and bootstrapping method to obtain aligned dataset, curriculum learning and block training techniques to train the model, and iterative proof writing method to write Lean4 proofs that work together synergistically.Using the dataset generation method in **TheoremLlama**, we provide *Open Bootstrapped Theorems* (OBT), an NL-FL aligned and bootstrapped dataset. Our novel NL-FL bootstrapping method, where NL proofs are integrated into Lean4 code for training datasets, leverages the NL reasoning ability of LLMs for formal reasoning. The **TheoremLlama** framework achieves cumulative accuracies of 36.48{\%} and 33.61{\%} on MiniF2F-Valid and Test datasets respectively, surpassing the GPT-4 baseline of 22.95{\%} and 25.41{\%}. Our code, model checkpoints, and the generated dataset is published in GitHub", }
Proving mathematical theorems using computer-verifiable formal languages like Lean significantly impacts mathematical reasoning. One approach to formal theorem proving involves generating complete proofs using Large Language Models (LLMs) based on Natural Language (NL) proofs. However, due to the scarcity of aligned NL and Formal Language (FL) theorem-proving data most modern LLMs exhibit suboptimal performance.This scarcity results in a paucity of methodologies for training LLMs and techniques to fully utilize their capabilities in composing formal proofs. To address these challenges, this paper proposes **TheoremLlama**, an end-to-end framework that trains a general-purpose LLM to be a Lean4 expert. **TheoremLlama** includes NL-FL dataset generation and bootstrapping method to obtain aligned dataset, curriculum learning and block training techniques to train the model, and iterative proof writing method to write Lean4 proofs that work together synergistically.Using the dataset generation method in **TheoremLlama**, we provide *Open Bootstrapped Theorems* (OBT), an NL-FL aligned and bootstrapped dataset. Our novel NL-FL bootstrapping method, where NL proofs are integrated into Lean4 code for training datasets, leverages the NL reasoning ability of LLMs for formal reasoning. The **TheoremLlama** framework achieves cumulative accuracies of 36.48{\%} and 33.61{\%} on MiniF2F-Valid and Test datasets respectively, surpassing the GPT-4 baseline of 22.95{\%} and 25.41{\%}. Our code, model checkpoints, and the generated dataset is published in GitHub
[ "Wang, Ruida", "Zhang, Jipeng", "Jia, Yizhen", "Pan, Rui", "Diao, Shizhe", "Pi, Renjie", "Zhang, Tong" ]
TheoremLlama: Transforming General-Purpose LLMs into Lean4 Experts
emnlp-main.667
Poster
2407.03203
[ "https://github.com/RickySkywalker/TheoremLlama" ]
https://huggingface.co/papers/2407.03203
0
10
1
7
[ "RickyDeSkywalker/TheoremLlama" ]
[ "RickyDeSkywalker/OpenBootstrappedTheorem" ]
[]
[ "RickyDeSkywalker/TheoremLlama" ]
[ "RickyDeSkywalker/OpenBootstrappedTheorem" ]
[]
1
https://aclanthology.org/2024.emnlp-main.668.bib
https://aclanthology.org/2024.emnlp-main.668/
@inproceedings{zhang-etal-2024-multi-level, title = "Multi-Level Cross-Modal Alignment for Speech Relation Extraction", author = "Zhang, Liang and Yang, Zhen and Fu, Biao and Lu, Ziyao and Shao, Liangying and Liu, Shiyu and Meng, Fandong and Zhou, Jie and Wang, Xiaoli and Su, Jinsong", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.668", pages = "11975--11986", abstract = "Speech Relation Extraction (SpeechRE) aims to extract relation triplets from speech data. However, existing studies usually use synthetic speech to train and evaluate SpeechRE models, hindering the further development of SpeechRE due to the disparity between synthetic and real speech. Meanwhile, the modality gap issue, unexplored in SpeechRE, limits the performance of existing models. In this paper, we construct two real SpeechRE datasets to facilitate subsequent researches and propose a Multi-level Cross-modal Alignment Model (MCAM) for SpeechRE. Our model consists of three components: 1) a speech encoder, extracting speech features from the input speech; 2) an alignment adapter, mapping these speech features into a suitable semantic space for the text decoder; and 3) a text decoder, autoregressively generating relation triplets based on the speech features. During training, we first additionally introduce a text encoder to serve as a semantic bridge between the speech encoder and the text decoder, and then train the alignment adapter to align the output features of speech and text encoders at multiple levels. In this way, we can effectively train the alignment adapter to bridge the modality gap between the speech encoder and the text decoder. Experimental results and in-depth analysis on our datasets strongly demonstrate the efficacy of our method.", }
Speech Relation Extraction (SpeechRE) aims to extract relation triplets from speech data. However, existing studies usually use synthetic speech to train and evaluate SpeechRE models, hindering the further development of SpeechRE due to the disparity between synthetic and real speech. Meanwhile, the modality gap issue, unexplored in SpeechRE, limits the performance of existing models. In this paper, we construct two real SpeechRE datasets to facilitate subsequent researches and propose a Multi-level Cross-modal Alignment Model (MCAM) for SpeechRE. Our model consists of three components: 1) a speech encoder, extracting speech features from the input speech; 2) an alignment adapter, mapping these speech features into a suitable semantic space for the text decoder; and 3) a text decoder, autoregressively generating relation triplets based on the speech features. During training, we first additionally introduce a text encoder to serve as a semantic bridge between the speech encoder and the text decoder, and then train the alignment adapter to align the output features of speech and text encoders at multiple levels. In this way, we can effectively train the alignment adapter to bridge the modality gap between the speech encoder and the text decoder. Experimental results and in-depth analysis on our datasets strongly demonstrate the efficacy of our method.
[ "Zhang, Liang", "Yang, Zhen", "Fu, Biao", "Lu, Ziyao", "Shao, Liangying", "Liu, Shiyu", "Meng, F", "ong", "Zhou, Jie", "Wang, Xiaoli", "Su, Jinsong" ]
Multi-Level Cross-Modal Alignment for Speech Relation Extraction
emnlp-main.668
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.669.bib
https://aclanthology.org/2024.emnlp-main.669/
@inproceedings{schroder-heyer-2024-self, title = "Self-Training for Sample-Efficient Active Learning for Text Classification with Pre-Trained Language Models", author = {Schr{\"o}der, Christopher and Heyer, Gerhard}, editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.669", pages = "11987--12004", abstract = "Active learning is an iterative labeling process that is used to obtain a small labeled subset, despite the absence of labeled data, thereby enabling to train a model for supervised tasks such as text classification.While active learning has made considerable progress in recent years due to improvements provided by pre-trained language models, there is untapped potential in the often neglected unlabeled portion of the data, although it is available in considerably larger quantities than the usually small set of labeled data. In this work, we investigate how self-training, a semi-supervised approach that uses a model to obtain pseudo-labels for unlabeled data, can be used to improve the efficiency of active learning for text classification. Building on a comprehensive reproduction of four previous self-training approaches, some of which are evaluated for the first time in the context of active learning or natural language processing, we introduce HAST, a new and effective self-training strategy, which is evaluated on four text classification benchmarks. Our results show that it outperforms the reproduced self-training approaches and reaches classification results comparable to previous experiments for three out of four datasets, using as little as 25{\%} of the data. The code is publicly available at https://github.com/chschroeder/self-training-for-sample-efficient-active-learning.", }
Active learning is an iterative labeling process that is used to obtain a small labeled subset, despite the absence of labeled data, thereby enabling to train a model for supervised tasks such as text classification.While active learning has made considerable progress in recent years due to improvements provided by pre-trained language models, there is untapped potential in the often neglected unlabeled portion of the data, although it is available in considerably larger quantities than the usually small set of labeled data. In this work, we investigate how self-training, a semi-supervised approach that uses a model to obtain pseudo-labels for unlabeled data, can be used to improve the efficiency of active learning for text classification. Building on a comprehensive reproduction of four previous self-training approaches, some of which are evaluated for the first time in the context of active learning or natural language processing, we introduce HAST, a new and effective self-training strategy, which is evaluated on four text classification benchmarks. Our results show that it outperforms the reproduced self-training approaches and reaches classification results comparable to previous experiments for three out of four datasets, using as little as 25{\%} of the data. The code is publicly available at https://github.com/chschroeder/self-training-for-sample-efficient-active-learning.
[ "Schr{\\\"o}der, Christopher", "Heyer, Gerhard" ]
Self-Training for Sample-Efficient Active Learning for Text Classification with Pre-Trained Language Models
emnlp-main.669
Poster
2406.09206
[ "https://github.com/chschroeder/self-training-for-sample-efficient-active-learning" ]
https://huggingface.co/papers/2406.09206
0
1
0
2
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.670.bib
https://aclanthology.org/2024.emnlp-main.670/
@inproceedings{kim-etal-2024-panda, title = "{PANDA}: Persona Attributes Navigation for Detecting and Alleviating Overuse Problem in Large Language Models", author = "Kim, Jinsung and Koo, Seonmin and Lim, Heuiseok", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.670", pages = "12005--12026", abstract = "In the persona-grounded dialogue (PGD) task, it is required not only to respond fluently, but also to ground the attributes according to the current conversation topic properly. However, due to their tendency to overly ground given attributes, LLMs often generate unnatural responses provoked by using attributes that deviate from the flow of the conversation or by exploiting too many attributes at once. We term this phenomenon the *overuse* problem of LLMs. Unfortunately, research devising precise criteria and frameworks to quantitatively verify LLMs{'} *overuse* problem is obviously insufficient. To address this issue, we propose **P**ersona **A**ttributes **N**avigation for **D**etecting and **A**lleviating the *overuse* problem (**PANDA**) framework. **PANDA** is the first study to quantify the persona *overuse* problem of LLMs by establishing clear standards of the problem and verifying various LLMs based on them. Moreover, this framework navigates us into understanding persona attributes by introducing diverse and detailed dialogue topics that consider practical conversation situations. We provide insights related to LLMs{'} persona attribute *overuse* problem through comprehensive verification and analysis with **PANDA** in the PGD task. Our code and resources can be found at http://github.com/jin62304/PANDA.", }
In the persona-grounded dialogue (PGD) task, it is required not only to respond fluently, but also to ground the attributes according to the current conversation topic properly. However, due to their tendency to overly ground given attributes, LLMs often generate unnatural responses provoked by using attributes that deviate from the flow of the conversation or by exploiting too many attributes at once. We term this phenomenon the *overuse* problem of LLMs. Unfortunately, research devising precise criteria and frameworks to quantitatively verify LLMs{'} *overuse* problem is obviously insufficient. To address this issue, we propose **P**ersona **A**ttributes **N**avigation for **D**etecting and **A**lleviating the *overuse* problem (**PANDA**) framework. **PANDA** is the first study to quantify the persona *overuse* problem of LLMs by establishing clear standards of the problem and verifying various LLMs based on them. Moreover, this framework navigates us into understanding persona attributes by introducing diverse and detailed dialogue topics that consider practical conversation situations. We provide insights related to LLMs{'} persona attribute *overuse* problem through comprehensive verification and analysis with **PANDA** in the PGD task. Our code and resources can be found at http://github.com/jin62304/PANDA.
[ "Kim, Jinsung", "Koo, Seonmin", "Lim, Heuiseok" ]
PANDA: Persona Attributes Navigation for Detecting and Alleviating Overuse Problem in Large Language Models
emnlp-main.670
Oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.671.bib
https://aclanthology.org/2024.emnlp-main.671/
@inproceedings{aakanksha-etal-2024-multilingual, title = "The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm", author = "{Aakanksha} and Ahmadian, Arash and Ermis, Beyza and Goldfarb-Tarrant, Seraphina and Kreutzer, Julia and Fadaee, Marzieh and Hooker, Sara", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.671", pages = "12027--12049", abstract = "A key concern with the concept of *{``}alignment{''}* is the implicit question of *{``}alignment to what?{''}*. AI systems are increasingly used across the world, yet safety alignment is often focused on homogeneous monolingual settings. Additionally, preference training and safety measures often overfit to harms common in Western-centric datasets. Here, we explore the viability of different alignment approaches when balancing dual objectives: addressing and optimizing for a non-homogeneous set of languages and cultural preferences while minimizing both global and local harms. We collect the first human annotated red teaming prompts in different languages, distinguishing between global and local harm, which serve as a laboratory to understand the reliability of alignment techniques when faced with preference distributions that are non-stationary across geographies and languages. While this setting is seldom covered by the literature to date, which primarily centers on English harm mitigation, it captures real-world interactions with AI systems around the world. We establish a new precedent for state-of-the-art alignment techniques across 6 languages with minimal degradation in general performance. Our work provides important insights into cross-lingual transfer and novel optimization approaches to safeguard AI systems designed to serve global populations.", }
A key concern with the concept of *{``}alignment{''}* is the implicit question of *{``}alignment to what?{''}*. AI systems are increasingly used across the world, yet safety alignment is often focused on homogeneous monolingual settings. Additionally, preference training and safety measures often overfit to harms common in Western-centric datasets. Here, we explore the viability of different alignment approaches when balancing dual objectives: addressing and optimizing for a non-homogeneous set of languages and cultural preferences while minimizing both global and local harms. We collect the first human annotated red teaming prompts in different languages, distinguishing between global and local harm, which serve as a laboratory to understand the reliability of alignment techniques when faced with preference distributions that are non-stationary across geographies and languages. While this setting is seldom covered by the literature to date, which primarily centers on English harm mitigation, it captures real-world interactions with AI systems around the world. We establish a new precedent for state-of-the-art alignment techniques across 6 languages with minimal degradation in general performance. Our work provides important insights into cross-lingual transfer and novel optimization approaches to safeguard AI systems designed to serve global populations.
[ "{Aakanksha}", "Ahmadian, Arash", "Ermis, Beyza", "Goldfarb-Tarrant, Seraphina", "Kreutzer, Julia", "Fadaee, Marzieh", "Hooker, Sara" ]
The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm
emnlp-main.671
Poster
2406.18682
[ "" ]
https://huggingface.co/papers/2406.18682
2
0
0
7
[ "CohereForAI/aya-expanse-8b", "CohereForAI/aya-expanse-32b", "QuantFactory/aya-expanse-8b-GGUF", "lucyknada/CohereForAI_aya-expanse-8b-exl2", "lucyknada/CohereForAI_aya-expanse-32b-exl2", "adamo1139/aya-expanse-8b-ungated", "jth01/aya-expanse-8b-5.0bpw-exl2", "duyntnet/aya-expanse-8b-imatrix-GGUF", "Andrewwwwww/aya-expanse-32b", "Svngoku/Aya-Expanse-8B-French", "duyntnet/aya-expanse-32b-imatrix-GGUF", "Jellon/aya-expanse-32b-exl2-4bpw", "Jellon/aya-expanse-32b-exl2-6bpw", "adamo1139/aya-expanse-32b-ungated", "Svngoku/French-Aya-Expanse-8B", "RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf" ]
[ "CohereForAI/aya_redteaming", "walledai/AyaRedTeaming", "pbevan11/aya_redteaming_consitutional" ]
[ "CohereForAI/aya_expanse", "logikon/open_cot_leaderboard", "cot-leaderboard/open-cot-dashboard", "Rijgersberg/Aya-Expanse-8B", "IllyrianSpace/aya_expanse", "Svngoku/Aya-Expanse-8B", "Anupam251272/AJ-Chat", "arnavnextai/Aya-Expanse-8B" ]
[ "CohereForAI/aya-expanse-8b", "CohereForAI/aya-expanse-32b", "QuantFactory/aya-expanse-8b-GGUF", "lucyknada/CohereForAI_aya-expanse-8b-exl2", "lucyknada/CohereForAI_aya-expanse-32b-exl2", "adamo1139/aya-expanse-8b-ungated", "jth01/aya-expanse-8b-5.0bpw-exl2", "duyntnet/aya-expanse-8b-imatrix-GGUF", "Andrewwwwww/aya-expanse-32b", "Svngoku/Aya-Expanse-8B-French", "duyntnet/aya-expanse-32b-imatrix-GGUF", "Jellon/aya-expanse-32b-exl2-4bpw", "Jellon/aya-expanse-32b-exl2-6bpw", "adamo1139/aya-expanse-32b-ungated", "Svngoku/French-Aya-Expanse-8B", "RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf" ]
[ "CohereForAI/aya_redteaming", "walledai/AyaRedTeaming", "pbevan11/aya_redteaming_consitutional" ]
[ "CohereForAI/aya_expanse", "logikon/open_cot_leaderboard", "cot-leaderboard/open-cot-dashboard", "Rijgersberg/Aya-Expanse-8B", "IllyrianSpace/aya_expanse", "Svngoku/Aya-Expanse-8B", "Anupam251272/AJ-Chat", "arnavnextai/Aya-Expanse-8B" ]
1
https://aclanthology.org/2024.emnlp-main.672.bib
https://aclanthology.org/2024.emnlp-main.672/
@inproceedings{marco-fraser-2024-subword, title = "Subword Segmentation in {LLM}s: Looking at Inflection and Consistency", author = "Marco, Marion Di and Fraser, Alexander", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.672", pages = "12050--12060", abstract = "The role of subword segmentation in relation to capturing morphological patterns in LLMs is currently not well explored. Ideally, one would train models like GPT using various segmentations and evaluate how well word meanings are captured. Since this is not computationally feasible, we group words according to their segmentation properties and compare how well a model can solve a linguistic task for these groups. We study two criteria: (i) adherence to morpheme boundaries and (ii) the segmentation consistency of the different inflected forms of a lemma. We select word forms with high and low values for these criteria and carry out experiments on GPT-4o{'}s ability to capture verbal inflection for 10 languages. Our results indicate that in particular the criterion of segmentation consistency can help to predict the model{'}s ability to recognize and generate the lemma from an inflected form, providing evidence that subword segmentation is relevant.", }
The role of subword segmentation in relation to capturing morphological patterns in LLMs is currently not well explored. Ideally, one would train models like GPT using various segmentations and evaluate how well word meanings are captured. Since this is not computationally feasible, we group words according to their segmentation properties and compare how well a model can solve a linguistic task for these groups. We study two criteria: (i) adherence to morpheme boundaries and (ii) the segmentation consistency of the different inflected forms of a lemma. We select word forms with high and low values for these criteria and carry out experiments on GPT-4o{'}s ability to capture verbal inflection for 10 languages. Our results indicate that in particular the criterion of segmentation consistency can help to predict the model{'}s ability to recognize and generate the lemma from an inflected form, providing evidence that subword segmentation is relevant.
[ "Marco, Marion Di", "Fraser, Alex", "er" ]
Subword Segmentation in LLMs: Looking at Inflection and Consistency
emnlp-main.672
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.673.bib
https://aclanthology.org/2024.emnlp-main.673/
@inproceedings{sharif-etal-2024-explicit, title = "Explicit, Implicit, and Scattered: Revisiting Event Extraction to Capture Complex Arguments", author = "Sharif, Omar and Gatto, Joseph and Basak, Madhusudan and Preum, Sarah Masud", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.673", pages = "12061--12081", abstract = "Prior works formulate the extraction of event-specific arguments as a span extraction problem, where event arguments are explicit {---} i.e. assumed to be contiguous spans of text in a document. In this study, we revisit this definition of Event Extraction (EE) by introducing two key argument types that cannot be modeled by existing EE frameworks. First, implicit arguments are event arguments which are not explicitly mentioned in the text, but can be inferred through context. Second, scattered arguments are event arguments that are composed of information scattered throughout the text. These two argument types are crucial to elicit the full breadth of information required for proper event modeling.To support the extraction of explicit, implicit, and scattered arguments, we develop a novel dataset, DiscourseEE, which includes 7,464 argument annotations from online health discourse. Notably, 51.2{\%} of the arguments are implicit, and 17.4{\%} are scattered, making DiscourseEE a unique corpus for complex event extraction. Additionally, we formulate argument extraction as a text generation problem to facilitate the extraction of complex argument types. We provide a comprehensive evaluation of state-of-the-art models and highlight critical open challenges in generative event extraction. Our data and codebase are available at https://omar-sharif03.github.io/DiscourseEE.", }
Prior works formulate the extraction of event-specific arguments as a span extraction problem, where event arguments are explicit {---} i.e. assumed to be contiguous spans of text in a document. In this study, we revisit this definition of Event Extraction (EE) by introducing two key argument types that cannot be modeled by existing EE frameworks. First, implicit arguments are event arguments which are not explicitly mentioned in the text, but can be inferred through context. Second, scattered arguments are event arguments that are composed of information scattered throughout the text. These two argument types are crucial to elicit the full breadth of information required for proper event modeling.To support the extraction of explicit, implicit, and scattered arguments, we develop a novel dataset, DiscourseEE, which includes 7,464 argument annotations from online health discourse. Notably, 51.2{\%} of the arguments are implicit, and 17.4{\%} are scattered, making DiscourseEE a unique corpus for complex event extraction. Additionally, we formulate argument extraction as a text generation problem to facilitate the extraction of complex argument types. We provide a comprehensive evaluation of state-of-the-art models and highlight critical open challenges in generative event extraction. Our data and codebase are available at https://omar-sharif03.github.io/DiscourseEE.
[ "Sharif, Omar", "Gatto, Joseph", "Basak, Madhusudan", "Preum, Sarah Masud" ]
Explicit, Implicit, and Scattered: Revisiting Event Extraction to Capture Complex Arguments
emnlp-main.673
Oral
2410.03594
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.674.bib
https://aclanthology.org/2024.emnlp-main.674/
@inproceedings{borges-etal-2024-teach, title = "Let Me Teach You: Pedagogical Foundations of Feedback for Language Models", author = {Borges, Beatriz and Tandon, Niket and K{\"a}ser, Tanja and Bosselut, Antoine}, editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.674", pages = "12082--12104", abstract = "Natural Language Feedback (NLF) is an increasingly popular mechanism for aligning Large Language Models (LLMs) to human preferences. Despite the diversity of the information it can convey, NLF methods are often hand-designed and arbitrary, with little systematic grounding. At the same time, research in learning sciences has long established several effective feedback models. In this opinion piece, we compile ideas from pedagogy to introduce FELT, a feedback framework for LLMs that outlines various characteristics of the feedback space, and a feedback content taxonomy based on these variables, providing a general mapping of the feedback space. In addition to streamlining NLF designs, FELT also brings out new, unexplored directions for research in NLF. We make our taxonomy available to the community, providing guides and examples for mapping our categorizations to future research.", }
Natural Language Feedback (NLF) is an increasingly popular mechanism for aligning Large Language Models (LLMs) to human preferences. Despite the diversity of the information it can convey, NLF methods are often hand-designed and arbitrary, with little systematic grounding. At the same time, research in learning sciences has long established several effective feedback models. In this opinion piece, we compile ideas from pedagogy to introduce FELT, a feedback framework for LLMs that outlines various characteristics of the feedback space, and a feedback content taxonomy based on these variables, providing a general mapping of the feedback space. In addition to streamlining NLF designs, FELT also brings out new, unexplored directions for research in NLF. We make our taxonomy available to the community, providing guides and examples for mapping our categorizations to future research.
[ "Borges, Beatriz", "T", "on, Niket", "K{\\\"a}ser, Tanja", "Bosselut, Antoine" ]
Let Me Teach You: Pedagogical Foundations of Feedback for Language Models
emnlp-main.674
Poster
2307.00279
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.675.bib
https://aclanthology.org/2024.emnlp-main.675/
@inproceedings{bussotti-etal-2024-unknown, title = "Unknown Claims: Generation of Fact-Checking Training Examples from Unstructured and Structured Data", author = "Bussotti, Jean-Flavien and Ragazzi, Luca and Frisoni, Giacomo and Moro, Gianluca and Papotti, Paolo", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.675", pages = "12105--12122", abstract = "Computational fact-checking (FC) relies on supervised models to verify claims based on given evidence, requiring a resource-intensive process to annotate large volumes of training data. We introduce Unown, a novel framework that generates training instances for FC systems automatically using both textual and tabular content. Unown selects relevant evidence and generates supporting and refuting claims with advanced negation artifacts. Designed to be flexible, Unown accommodates various strategies for evidence selection and claim generation, offering unparalleled adaptability. We comprehensively evaluate Unown on both text-only and table+text benchmarks, including Feverous, SciFact, and MMFC, a new multi-modal FC dataset. Our results prove that Unown examples are of comparable quality to expert-labeled data, even enabling models to achieve up to 5{\%} higher accuracy. The code, data, and models are available at https://github.com/disi-unibo-nlp/unown", }
Computational fact-checking (FC) relies on supervised models to verify claims based on given evidence, requiring a resource-intensive process to annotate large volumes of training data. We introduce Unown, a novel framework that generates training instances for FC systems automatically using both textual and tabular content. Unown selects relevant evidence and generates supporting and refuting claims with advanced negation artifacts. Designed to be flexible, Unown accommodates various strategies for evidence selection and claim generation, offering unparalleled adaptability. We comprehensively evaluate Unown on both text-only and table+text benchmarks, including Feverous, SciFact, and MMFC, a new multi-modal FC dataset. Our results prove that Unown examples are of comparable quality to expert-labeled data, even enabling models to achieve up to 5{\%} higher accuracy. The code, data, and models are available at https://github.com/disi-unibo-nlp/unown
[ "Bussotti, Jean-Flavien", "Ragazzi, Luca", "Frisoni, Giacomo", "Moro, Gianluca", "Papotti, Paolo" ]
Unknown Claims: Generation of Fact-Checking Training Examples from Unstructured and Structured Data
emnlp-main.675
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.676.bib
https://aclanthology.org/2024.emnlp-main.676/
@inproceedings{satapara-srijith-2024-tl, title = "{TL}-{CL}: Task And Language Incremental Continual Learning", author = "Satapara, Shrey and Srijith, P. K.", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.676", pages = "12123--12142", abstract = "This paper introduces and investigates the problem of Task and Language Incremental Continual Learning (TLCL), wherein a multilingual model is systematically updated to accommodate new tasks in previously learned languages or new languages for established tasks. This significant yet previously unexplored area holds substantial practical relevance as it mirrors the dynamic requirements of real-world applications. We benchmark a representative set of continual learning (CL) algorithms for TLCL. Furthermore, we propose Task and Language-Specific Adapters (TLSA), an adapter-based parameter-efficient fine-tuning strategy. TLSA facilitates cross-lingual and cross-task transfer and outperforms other parameter-efficient fine-tuning techniques. Crucially, TLSA reduces parameter growth stemming from saving adapters to linear complexity from polynomial complexity as it was with parameter isolation-based adapter tuning. We conducted experiments on several NLP tasks arising across several languages. We observed that TLSA outperforms all other parameter-efficient approaches without requiring access to historical data for replay.", }
This paper introduces and investigates the problem of Task and Language Incremental Continual Learning (TLCL), wherein a multilingual model is systematically updated to accommodate new tasks in previously learned languages or new languages for established tasks. This significant yet previously unexplored area holds substantial practical relevance as it mirrors the dynamic requirements of real-world applications. We benchmark a representative set of continual learning (CL) algorithms for TLCL. Furthermore, we propose Task and Language-Specific Adapters (TLSA), an adapter-based parameter-efficient fine-tuning strategy. TLSA facilitates cross-lingual and cross-task transfer and outperforms other parameter-efficient fine-tuning techniques. Crucially, TLSA reduces parameter growth stemming from saving adapters to linear complexity from polynomial complexity as it was with parameter isolation-based adapter tuning. We conducted experiments on several NLP tasks arising across several languages. We observed that TLSA outperforms all other parameter-efficient approaches without requiring access to historical data for replay.
[ "Satapara, Shrey", "Srijith, P. K." ]
TL-CL: Task And Language Incremental Continual Learning
emnlp-main.676
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.677.bib
https://aclanthology.org/2024.emnlp-main.677/
@inproceedings{jeong-etal-2024-medical, title = "Medical Adaptation of Large Language and Vision-Language Models: Are We Making Progress?", author = "Jeong, Daniel P and Garg, Saurabh and Lipton, Zachary Chase and Oberst, Michael", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.677", pages = "12143--12170", abstract = "Several recent works seek to develop foundation models specifically for medical applications, adapting general-purpose large language models (LLMs) and vision-language models (VLMs) via continued pretraining on publicly available biomedical corpora. These works typically claim that such domain-adaptive pretraining (DAPT) improves performance on downstream medical tasks, such as answering medical licensing exam questions. In this paper, we compare seven public {``}medical{''} LLMs and two VLMs against their corresponding base models, arriving at a different conclusion: all medical VLMs and nearly all medical LLMs fail to consistently improve over their base models in the zero-/few-shot prompting regime for medical question-answering (QA) tasks. For instance, across the tasks and model pairs we consider in the 3-shot setting, medical LLMs only outperform their base models in 12.1{\%} of cases, reach a (statistical) tie in 49.8{\%} of cases, and are significantly worse than their base models in the remaining 38.2{\%} of cases. Our conclusions are based on (i) comparing each medical model head-to-head, directly against the corresponding base model; (ii) optimizing the prompts for each model separately; and (iii) accounting for statistical uncertainty in comparisons. While these basic practices are not consistently adopted in the literature, our ablations show that they substantially impact conclusions. Our findings suggest that state-of-the-art general-domain models may already exhibit strong medical knowledge and reasoning capabilities, and offer recommendations to strengthen the conclusions of future studies.", }
Several recent works seek to develop foundation models specifically for medical applications, adapting general-purpose large language models (LLMs) and vision-language models (VLMs) via continued pretraining on publicly available biomedical corpora. These works typically claim that such domain-adaptive pretraining (DAPT) improves performance on downstream medical tasks, such as answering medical licensing exam questions. In this paper, we compare seven public {``}medical{''} LLMs and two VLMs against their corresponding base models, arriving at a different conclusion: all medical VLMs and nearly all medical LLMs fail to consistently improve over their base models in the zero-/few-shot prompting regime for medical question-answering (QA) tasks. For instance, across the tasks and model pairs we consider in the 3-shot setting, medical LLMs only outperform their base models in 12.1{\%} of cases, reach a (statistical) tie in 49.8{\%} of cases, and are significantly worse than their base models in the remaining 38.2{\%} of cases. Our conclusions are based on (i) comparing each medical model head-to-head, directly against the corresponding base model; (ii) optimizing the prompts for each model separately; and (iii) accounting for statistical uncertainty in comparisons. While these basic practices are not consistently adopted in the literature, our ablations show that they substantially impact conclusions. Our findings suggest that state-of-the-art general-domain models may already exhibit strong medical knowledge and reasoning capabilities, and offer recommendations to strengthen the conclusions of future studies.
[ "Jeong, Daniel P", "Garg, Saurabh", "Lipton, Zachary Chase", "Oberst, Michael" ]
Medical Adaptation of Large Language and Vision-Language Models: Are We Making Progress?
emnlp-main.677
Oral
2411.04118
[ "" ]
https://huggingface.co/papers/2411.04118
1
1
0
4
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.678.bib
https://aclanthology.org/2024.emnlp-main.678/
@inproceedings{ranaldi-etal-2024-empowering-multi, title = "Empowering Multi-step Reasoning across Languages via Program-Aided Language Models", author = "Ranaldi, Leonardo and Pucci, Giulia and Haddow, Barry and Birch, Alexandra", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.678", pages = "12171--12187", abstract = "In-context learning methods are popular inference strategies where Large Language Models (LLMs) are elicited to solve a task using provided demonstrations without parameter updates. Among these approaches are the reasoning methods, best exemplified by Chain-of-Thought (CoT) and Program-Aided Language Models (PAL), which elicit LLMs to generate reasoning paths, thus promoting accuracy and attracting increasing attention. However, despite the success of these methods, the ability to deliver multi-step reasoning remains limited to a single language, making it challenging to generalize to other languages and hindering global development.In this work, we propose Cross-lingual Program-Aided Language Models (CrossPAL), a method for aligning reasoning programs across languages. In particular, our method delivers programs as intermediate reasoning steps in different languages through a double-step cross-lingual prompting mechanism inspired by the Program-Aided approach. In addition, we introduce Self-consistent CrossPAL (SCrossPAL) to ensemble different reasoning paths across languages. Our experimental evaluations show that our method significantly outperforms existing prompting methods, reducing the number of interactions and achieving state-of-the-art performance.", }
In-context learning methods are popular inference strategies where Large Language Models (LLMs) are elicited to solve a task using provided demonstrations without parameter updates. Among these approaches are the reasoning methods, best exemplified by Chain-of-Thought (CoT) and Program-Aided Language Models (PAL), which elicit LLMs to generate reasoning paths, thus promoting accuracy and attracting increasing attention. However, despite the success of these methods, the ability to deliver multi-step reasoning remains limited to a single language, making it challenging to generalize to other languages and hindering global development.In this work, we propose Cross-lingual Program-Aided Language Models (CrossPAL), a method for aligning reasoning programs across languages. In particular, our method delivers programs as intermediate reasoning steps in different languages through a double-step cross-lingual prompting mechanism inspired by the Program-Aided approach. In addition, we introduce Self-consistent CrossPAL (SCrossPAL) to ensemble different reasoning paths across languages. Our experimental evaluations show that our method significantly outperforms existing prompting methods, reducing the number of interactions and achieving state-of-the-art performance.
[ "Ranaldi, Leonardo", "Pucci, Giulia", "Haddow, Barry", "Birch, Alex", "ra" ]
Empowering Multi-step Reasoning across Languages via Program-Aided Language Models
emnlp-main.678
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.679.bib
https://aclanthology.org/2024.emnlp-main.679/
@inproceedings{yuan-etal-2024-llms, title = "Do {LLM}s Overcome Shortcut Learning? An Evaluation of Shortcut Challenges in Large Language Models", author = "Yuan, Yu and Zhao, Lili and Zhang, Kai and Zheng, Guangting and Liu, Qi", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.679", pages = "12188--12200", abstract = "Large Language Models (LLMs) have shown remarkable capabilities in various natural language processing tasks. However, LLMs may rely on dataset biases as shortcuts for prediction, which can significantly impair their robustness and generalization capabilities. This paper presents Shortcut Suite, a comprehensive test suite designed to evaluate the impact of shortcuts on LLMs{'} performance, incorporating six shortcut types, five evaluation metrics, and four prompting strategies. Our extensive experiments yield several key findings: 1) LLMs demonstrate varying reliance on shortcuts for downstream tasks, which significantly impairs their performance. 2) Larger LLMs are more likely to utilize shortcuts under zero-shot and few-shot in-context learning prompts. 3) Chain-of-thought prompting notably reduces shortcut reliance and outperforms other prompting strategies, while few-shot prompts generally underperform compared to zero-shot prompts. 4) LLMs often exhibit overconfidence in their predictions, especially when dealing with datasets that contain shortcuts. 5) LLMs generally have a lower explanation quality in shortcut-laden datasets, with errors falling into three types: distraction, disguised comprehension, and logical fallacy. Our findings offer new insights for evaluating robustness and generalization in LLMs and suggest potential directions for mitigating the reliance on shortcuts.", }
Large Language Models (LLMs) have shown remarkable capabilities in various natural language processing tasks. However, LLMs may rely on dataset biases as shortcuts for prediction, which can significantly impair their robustness and generalization capabilities. This paper presents Shortcut Suite, a comprehensive test suite designed to evaluate the impact of shortcuts on LLMs{'} performance, incorporating six shortcut types, five evaluation metrics, and four prompting strategies. Our extensive experiments yield several key findings: 1) LLMs demonstrate varying reliance on shortcuts for downstream tasks, which significantly impairs their performance. 2) Larger LLMs are more likely to utilize shortcuts under zero-shot and few-shot in-context learning prompts. 3) Chain-of-thought prompting notably reduces shortcut reliance and outperforms other prompting strategies, while few-shot prompts generally underperform compared to zero-shot prompts. 4) LLMs often exhibit overconfidence in their predictions, especially when dealing with datasets that contain shortcuts. 5) LLMs generally have a lower explanation quality in shortcut-laden datasets, with errors falling into three types: distraction, disguised comprehension, and logical fallacy. Our findings offer new insights for evaluating robustness and generalization in LLMs and suggest potential directions for mitigating the reliance on shortcuts.
[ "Yuan, Yu", "Zhao, Lili", "Zhang, Kai", "Zheng, Guangting", "Liu, Qi" ]
Do LLMs Overcome Shortcut Learning? An Evaluation of Shortcut Challenges in Large Language Models
emnlp-main.679
Poster
2410.13343
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.680.bib
https://aclanthology.org/2024.emnlp-main.680/
@inproceedings{chen-etal-2024-controlmath, title = "{C}ontrol{M}ath: Controllable Data Generation Promotes Math Generalist Models", author = "Chen, Nuo and Wu, Ning and Chang, Jianhui and Shou, Linjun and Li, Jia", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.680", pages = "12201--12217", abstract = "Utilizing large language models (LLMs) for data augmentation has yielded encouraging results in mathematical reasoning. However, these approaches face constraints in problem diversity, potentially restricting them to in-domain/distribution data generation. To this end, we propose **ControlMath**, an iterative method involving an equation-generator module and two LLM-based agents. The module creates diverse equations, which the Problem-Crafter agent then transforms into math word problems. The Reverse-Agent filters and selects high-quality data, adhering to the {``}less is more{''} principle. This approach enables the generation of diverse math problems, not limited to specific domains or distributions. As a result, we collect ControlMathQA, which involves 190k math word problems. Extensive results prove that combining our dataset with in-domain datasets like GSM8K can help improve the model{'}s mathematical ability to generalize, leading to improved performance both within and beyond specific domains.", }
Utilizing large language models (LLMs) for data augmentation has yielded encouraging results in mathematical reasoning. However, these approaches face constraints in problem diversity, potentially restricting them to in-domain/distribution data generation. To this end, we propose **ControlMath**, an iterative method involving an equation-generator module and two LLM-based agents. The module creates diverse equations, which the Problem-Crafter agent then transforms into math word problems. The Reverse-Agent filters and selects high-quality data, adhering to the {``}less is more{''} principle. This approach enables the generation of diverse math problems, not limited to specific domains or distributions. As a result, we collect ControlMathQA, which involves 190k math word problems. Extensive results prove that combining our dataset with in-domain datasets like GSM8K can help improve the model{'}s mathematical ability to generalize, leading to improved performance both within and beyond specific domains.
[ "Chen, Nuo", "Wu, Ning", "Chang, Jianhui", "Shou, Linjun", "Li, Jia" ]
ControlMath: Controllable Data Generation Promotes Math Generalist Models
emnlp-main.680
Poster
2409.15376
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.681.bib
https://aclanthology.org/2024.emnlp-main.681/
@inproceedings{li-etal-2024-identifying, title = "Where Am {I} From? Identifying Origin of {LLM}-generated Content", author = "Li, Liying and Bai, Yihan and Cheng, Minhao", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.681", pages = "12218--12229", abstract = "Generative models, particularly large language models (LLMs), have achieved remarkable success in producing natural and high-quality content. However, their widespread adoption raises concerns regarding copyright infringement, privacy violations, and security risks associated with AI-generated content. To address these concerns, we propose a novel digital forensics framework for LLMs, enabling the tracing of AI-generated content back to its source. This framework embeds a secret watermark directly into the generated output, eliminating the need for model retraining. To enhance traceability, especially for short outputs, we introduce a {``}depth watermark{''} that strengthens the link between content and generator. Our approach ensures accurate tracing while maintaining the quality of the generated content. Extensive experiments across various settings and datasets validate the effectiveness and robustness of our proposed framework.", }
Generative models, particularly large language models (LLMs), have achieved remarkable success in producing natural and high-quality content. However, their widespread adoption raises concerns regarding copyright infringement, privacy violations, and security risks associated with AI-generated content. To address these concerns, we propose a novel digital forensics framework for LLMs, enabling the tracing of AI-generated content back to its source. This framework embeds a secret watermark directly into the generated output, eliminating the need for model retraining. To enhance traceability, especially for short outputs, we introduce a {``}depth watermark{''} that strengthens the link between content and generator. Our approach ensures accurate tracing while maintaining the quality of the generated content. Extensive experiments across various settings and datasets validate the effectiveness and robustness of our proposed framework.
[ "Li, Liying", "Bai, Yihan", "Cheng, Minhao" ]
Where Am I From? Identifying Origin of LLM-generated Content
emnlp-main.681
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.682.bib
https://aclanthology.org/2024.emnlp-main.682/
@inproceedings{naous-etal-2024-readme, title = "{R}ead{M}e++: Benchmarking Multilingual Language Models for Multi-Domain Readability Assessment", author = "Naous, Tarek and Ryan, Michael J and Lavrouk, Anton and Chandra, Mohit and Xu, Wei", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.682", pages = "12230--12266", abstract = "We present a comprehensive evaluation of large language models for multilingual readability assessment. Existing evaluation resources lack domain and language diversity, limiting the ability for cross-domain and cross-lingual analyses. This paper introduces ReadMe++, a multilingual multi-domain dataset with human annotations of 9757 sentences in Arabic, English, French, Hindi, and Russian, collected from 112 different data sources. This benchmark will encourage research on developing robust multilingual readability assessment methods. Using ReadMe++, we benchmark multilingual and monolingual language models in the supervised, unsupervised, and few-shot prompting settings. The domain and language diversity in ReadMe++ enable us to test more effective few-shot prompting, and identify shortcomings in state-of-the-art unsupervised methods. Our experiments also reveal exciting results of superior domain generalization and enhanced cross-lingual transfer capabilities by models trained on ReadMe++. We will make our data publicly available and release a python package tool for multilingual sentence readability prediction using our trained models at: https://github.com/tareknaous/readme", }
We present a comprehensive evaluation of large language models for multilingual readability assessment. Existing evaluation resources lack domain and language diversity, limiting the ability for cross-domain and cross-lingual analyses. This paper introduces ReadMe++, a multilingual multi-domain dataset with human annotations of 9757 sentences in Arabic, English, French, Hindi, and Russian, collected from 112 different data sources. This benchmark will encourage research on developing robust multilingual readability assessment methods. Using ReadMe++, we benchmark multilingual and monolingual language models in the supervised, unsupervised, and few-shot prompting settings. The domain and language diversity in ReadMe++ enable us to test more effective few-shot prompting, and identify shortcomings in state-of-the-art unsupervised methods. Our experiments also reveal exciting results of superior domain generalization and enhanced cross-lingual transfer capabilities by models trained on ReadMe++. We will make our data publicly available and release a python package tool for multilingual sentence readability prediction using our trained models at: https://github.com/tareknaous/readme
[ "Naous, Tarek", "Ryan, Michael J", "Lavrouk, Anton", "Ch", "ra, Mohit", "Xu, Wei" ]
ReadMe++: Benchmarking Multilingual Language Models for Multi-Domain Readability Assessment
emnlp-main.682
Oral
2305.14463
[ "https://github.com/tareknaous/readme" ]
https://huggingface.co/papers/2305.14463
1
0
0
5
[ "tareknaous/readabert-en", "tareknaous/readabert-ar", "tareknaous/readabert-ru", "tareknaous/readabert-fr", "tareknaous/readabert-hi" ]
[]
[]
[ "tareknaous/readabert-en", "tareknaous/readabert-ar", "tareknaous/readabert-ru", "tareknaous/readabert-fr", "tareknaous/readabert-hi" ]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.683.bib
https://aclanthology.org/2024.emnlp-main.683/
@inproceedings{ginn-etal-2024-glosslm, title = "{G}loss{LM}: A Massively Multilingual Corpus and Pretrained Model for Interlinear Glossed Text", author = "Ginn, Michael and Tjuatja, Lindia and He, Taiqi and Rice, Enora and Neubig, Graham and Palmer, Alexis and Levin, Lori", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.683", pages = "12267--12286", abstract = "Language documentation projects often involve the creation of annotated text in a format such as interlinear glossed text (IGT), which captures fine-grained morphosyntactic analyses in a morpheme-by-morpheme format. However, there are few existing resources providing large amounts of standardized, easily accessible IGT data, limiting their applicability to linguistic research, and making it difficult to use such data in NLP modeling. We compile the largest existing corpus of IGT data from a variety of sources, covering over 450k examples across 1.8k languages, to enable research on crosslingual transfer and IGT generation. We normalize much of our data to follow a standard set of labels across languages.Furthermore, we explore the task of automatically generating IGT in order to aid documentation projects. As many languages lack sufficient monolingual data, we pretrain a large multilingual model on our corpus. We demonstrate the utility of this model by finetuning it on monolingual corpora, outperforming SOTA models by up to 6.6{\%}. Our pretrained model and dataset are available on Hugging Face: https://huggingface.co/collections/lecslab/glosslm-66da150854209e910113dd87", }
Language documentation projects often involve the creation of annotated text in a format such as interlinear glossed text (IGT), which captures fine-grained morphosyntactic analyses in a morpheme-by-morpheme format. However, there are few existing resources providing large amounts of standardized, easily accessible IGT data, limiting their applicability to linguistic research, and making it difficult to use such data in NLP modeling. We compile the largest existing corpus of IGT data from a variety of sources, covering over 450k examples across 1.8k languages, to enable research on crosslingual transfer and IGT generation. We normalize much of our data to follow a standard set of labels across languages.Furthermore, we explore the task of automatically generating IGT in order to aid documentation projects. As many languages lack sufficient monolingual data, we pretrain a large multilingual model on our corpus. We demonstrate the utility of this model by finetuning it on monolingual corpora, outperforming SOTA models by up to 6.6{\%}. Our pretrained model and dataset are available on Hugging Face: https://huggingface.co/collections/lecslab/glosslm-66da150854209e910113dd87
[ "Ginn, Michael", "Tjuatja, Lindia", "He, Taiqi", "Rice, Enora", "Neubig, Graham", "Palmer, Alexis", "Levin, Lori" ]
GlossLM: A Massively Multilingual Corpus and Pretrained Model for Interlinear Glossed Text
emnlp-main.683
Poster
2403.06399
[ "" ]
https://huggingface.co/papers/2403.06399
1
0
0
7
[ "lecslab/glosslm" ]
[]
[]
[ "lecslab/glosslm" ]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.684.bib
https://aclanthology.org/2024.emnlp-main.684/
@inproceedings{liu-etal-2024-gdtb, title = "{GDTB}: Genre Diverse Data for {E}nglish Shallow Discourse Parsing across Modalities, Text Types, and Domains", author = "Liu, Yang Janet and Aoyama, Tatsuya and Scivetti, Wesley and Zhu, Yilun and Behzad, Shabnam and Levine, Lauren Elizabeth and Lin, Jessica and Tiwari, Devika and Zeldes, Amir", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.684", pages = "12287--12303", abstract = "Work on shallow discourse parsing in English has focused on the Wall Street Journal corpus, the only large-scale dataset for the language in the PDTB framework. However, the data is not openly available, is restricted to the news domain, and is by now 35 years old. In this paper, we present and evaluate a new open-access, multi-genre benchmark for PDTB-style shallow discourse parsing, based on the existing UD English GUM corpus, for which discourse relation annotations in other frameworks already exist. In a series of experiments on cross-domain relation classification, we show that while our dataset is compatible with PDTB, substantial out-of-domain degradation is observed, which can be alleviated by joint training on both datasets.", }
Work on shallow discourse parsing in English has focused on the Wall Street Journal corpus, the only large-scale dataset for the language in the PDTB framework. However, the data is not openly available, is restricted to the news domain, and is by now 35 years old. In this paper, we present and evaluate a new open-access, multi-genre benchmark for PDTB-style shallow discourse parsing, based on the existing UD English GUM corpus, for which discourse relation annotations in other frameworks already exist. In a series of experiments on cross-domain relation classification, we show that while our dataset is compatible with PDTB, substantial out-of-domain degradation is observed, which can be alleviated by joint training on both datasets.
[ "Liu, Yang Janet", "Aoyama, Tatsuya", "Scivetti, Wesley", "Zhu, Yilun", "Behzad, Shabnam", "Levine, Lauren Elizabeth", "Lin, Jessica", "Tiwari, Devika", "Zeldes, Amir" ]
GDTB: Genre Diverse Data for English Shallow Discourse Parsing across Modalities, Text Types, and Domains
emnlp-main.684
Poster
2411.00491
[ "https://github.com/gucorpling/gum2pdtb" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.685.bib
https://aclanthology.org/2024.emnlp-main.685/
@inproceedings{zhu-etal-2024-ra2fd, title = "{RA}2{FD}: Distilling Faithfulness into Efficient Dialogue Systems", author = "Zhu, Zhiyuan and Liao, Yusheng and Xu, Chenxin and Guan, Yunfeng and Wang, Yanfeng and Wang, Yu", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.685", pages = "12304--12317", abstract = "Generating faithful and fast responses is crucial in the knowledge-grounded dialogue. Retrieval Augmented Generation (RAG) strategies are effective but are inference inefficient, while previous Retrieval Free Generations (RFG) are more efficient but sacrifice faithfulness. To solve this faithfulness-efficiency trade-off dilemma, we propose a novel retrieval-free model training scheme named Retrieval Augmented to Retrieval Free Distillation (RA2FD) to build a retrieval-free model that achieves higher faithfulness than the previous RFG method while maintaining inference efficiency. The core idea of RA2FD is to use a teacher-student framework to distill the faithfulness capacity of a teacher, which is an oracle RAG model that generates multiple knowledge-infused responses. The student retrieval-free model learns how to generate faithful responses from these teacher labels through sequence-level distillation and contrastive learning. Experiment results show that RA2FD let the faithfulness performance of an RFG model surpass the previous SOTA RFG baseline on three knowledge-grounded dialogue datasets by an average of 33{\%} and even matching an RAG model{'}s performance while significantly improving inference efficiency. Our code is available at https://github.com/zzysjtuiwct/RA2FD.", }
Generating faithful and fast responses is crucial in the knowledge-grounded dialogue. Retrieval Augmented Generation (RAG) strategies are effective but are inference inefficient, while previous Retrieval Free Generations (RFG) are more efficient but sacrifice faithfulness. To solve this faithfulness-efficiency trade-off dilemma, we propose a novel retrieval-free model training scheme named Retrieval Augmented to Retrieval Free Distillation (RA2FD) to build a retrieval-free model that achieves higher faithfulness than the previous RFG method while maintaining inference efficiency. The core idea of RA2FD is to use a teacher-student framework to distill the faithfulness capacity of a teacher, which is an oracle RAG model that generates multiple knowledge-infused responses. The student retrieval-free model learns how to generate faithful responses from these teacher labels through sequence-level distillation and contrastive learning. Experiment results show that RA2FD let the faithfulness performance of an RFG model surpass the previous SOTA RFG baseline on three knowledge-grounded dialogue datasets by an average of 33{\%} and even matching an RAG model{'}s performance while significantly improving inference efficiency. Our code is available at https://github.com/zzysjtuiwct/RA2FD.
[ "Zhu, Zhiyuan", "Liao, Yusheng", "Xu, Chenxin", "Guan, Yunfeng", "Wang, Yanfeng", "Wang, Yu" ]
RA2FD: Distilling Faithfulness into Efficient Dialogue Systems
emnlp-main.685
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.686.bib
https://aclanthology.org/2024.emnlp-main.686/
@inproceedings{lv-etal-2024-subjective, title = "Subjective Topic meets {LLM}s: Unleashing Comprehensive, Reflective and Creative Thinking through the Negation of Negation", author = "Lv, Fangrui and Gong, Kaixiong and Liang, Jian and Pang, Xinyu and Zhang, Changshui", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.686", pages = "12318--12341", abstract = "Large language models (LLMs) exhibit powerful reasoning capacity, as evidenced by prior studies focusing on objective topics that with unique standard answers such as arithmetic and commonsense reasoning. However, the reasoning to definite answers emphasizes more on logical thinking, and falls short in effectively reflecting the comprehensive, reflective, and creative thinking that is also critical for the overall reasoning prowess of LLMs. In light of this, we build a dataset SJTP comprising diverse SubJective ToPics with free responses, as well as three evaluation indicators to fully explore LLM{'}s reasoning ability. We observe that a sole emphasis on logical thinking falls short in effectively tackling subjective challenges. Therefore, we introduce a framework grounded in the principle of the Negation of Negation (NeoN) to unleash the potential comprehensive, reflective, and creative thinking abilities of LLMs. Comprehensive experiments on SJTP demonstrate the efficacy of NeoN, and the enhanced performance on various objective reasoning tasks unequivocally underscores the benefits of stimulating LLM{'}s subjective thinking in augmenting overall reasoning capabilities.", }
Large language models (LLMs) exhibit powerful reasoning capacity, as evidenced by prior studies focusing on objective topics that with unique standard answers such as arithmetic and commonsense reasoning. However, the reasoning to definite answers emphasizes more on logical thinking, and falls short in effectively reflecting the comprehensive, reflective, and creative thinking that is also critical for the overall reasoning prowess of LLMs. In light of this, we build a dataset SJTP comprising diverse SubJective ToPics with free responses, as well as three evaluation indicators to fully explore LLM{'}s reasoning ability. We observe that a sole emphasis on logical thinking falls short in effectively tackling subjective challenges. Therefore, we introduce a framework grounded in the principle of the Negation of Negation (NeoN) to unleash the potential comprehensive, reflective, and creative thinking abilities of LLMs. Comprehensive experiments on SJTP demonstrate the efficacy of NeoN, and the enhanced performance on various objective reasoning tasks unequivocally underscores the benefits of stimulating LLM{'}s subjective thinking in augmenting overall reasoning capabilities.
[ "Lv, Fangrui", "Gong, Kaixiong", "Liang, Jian", "Pang, Xinyu", "Zhang, Changshui" ]
Subjective Topic meets LLMs: Unleashing Comprehensive, Reflective and Creative Thinking through the Negation of Negation
emnlp-main.686
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.687.bib
https://aclanthology.org/2024.emnlp-main.687/
@inproceedings{misra-etal-2024-experimental, title = "Experimental Contexts Can Facilitate Robust Semantic Property Inference in Language Models, but Inconsistently", author = "Misra, Kanishka and Ettinger, Allyson and Mahowald, Kyle", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.687", pages = "12342--12355", abstract = "Recent zero-shot evaluations have highlighted important limitations in the abilities of language models (LMs) to perform meaning extraction. However, it is now well known that LMs can demonstrate radical improvements in the presence of experimental contexts such as in-context examples and instructions. How well does this translate to previously studied meaning-sensitive tasks? We present a case-study on the extent to which experimental contexts can improve LMs{'} robustness in performing property inheritance{---}predicting semantic properties of novel concepts, a task that they have been previously shown to fail on. Upon carefully controlling the nature of the in-context examples and the instructions, our work reveals that they can indeed lead to non-trivial property inheritance behavior in LMs. However, this ability is inconsistent: with a minimal reformulation of the task, some LMs were found to pick up on shallow, non-semantic heuristics from their inputs, suggesting that the computational principles of semantic property inference are yet to be mastered by LMs.", }
Recent zero-shot evaluations have highlighted important limitations in the abilities of language models (LMs) to perform meaning extraction. However, it is now well known that LMs can demonstrate radical improvements in the presence of experimental contexts such as in-context examples and instructions. How well does this translate to previously studied meaning-sensitive tasks? We present a case-study on the extent to which experimental contexts can improve LMs{'} robustness in performing property inheritance{---}predicting semantic properties of novel concepts, a task that they have been previously shown to fail on. Upon carefully controlling the nature of the in-context examples and the instructions, our work reveals that they can indeed lead to non-trivial property inheritance behavior in LMs. However, this ability is inconsistent: with a minimal reformulation of the task, some LMs were found to pick up on shallow, non-semantic heuristics from their inputs, suggesting that the computational principles of semantic property inference are yet to be mastered by LMs.
[ "Misra, Kanishka", "Ettinger, Allyson", "Mahowald, Kyle" ]
Experimental Contexts Can Facilitate Robust Semantic Property Inference in Language Models, but Inconsistently
emnlp-main.687
Poster
2401.06640
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.688.bib
https://aclanthology.org/2024.emnlp-main.688/
@inproceedings{bai-etal-2024-leveraging, title = "Leveraging Estimated Transferability Over Human Intuition for Model Selection in Text Ranking", author = "Bai, Jun and Chen, Zhuofan and Li, Zhenzi and Hong, Hanhua and Zhang, Jianfei and Li, Chen and Lin, Chenghua and Rong, Wenge", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.688", pages = "12356--12374", abstract = "Text ranking has witnessed significant advancements, attributed to the utilization of dual-encoder enhanced by Pre-trained Language Models (PLMs). Given the proliferation of available PLMs, selecting the most effective one for a given dataset has become a non-trivial challenge. As a promising alternative to human intuition and brute-force fine-tuning, Transferability Estimation (TE) has emerged as an effective approach to model selection. However, current TE methods are primarily designed for classification tasks, and their estimated transferability may not align well with the objectives of text ranking. To address this challenge, we propose to compute the expected rank as transferability, explicitly reflecting the model{'}s ranking capability. Furthermore, to mitigate anisotropy and incorporate training dynamics, we adaptively scale isotropic sentence embeddings to yield an accurate expected rank score. Our resulting method, Adaptive Ranking Transferability (AiRTran), can effectively capture subtle differences between models. On challenging model selection scenarios across various text ranking datasets, it demonstrates significant improvements over previous classification-oriented TE methods, human intuition, and ChatGPT with minor time consumption.", }
Text ranking has witnessed significant advancements, attributed to the utilization of dual-encoder enhanced by Pre-trained Language Models (PLMs). Given the proliferation of available PLMs, selecting the most effective one for a given dataset has become a non-trivial challenge. As a promising alternative to human intuition and brute-force fine-tuning, Transferability Estimation (TE) has emerged as an effective approach to model selection. However, current TE methods are primarily designed for classification tasks, and their estimated transferability may not align well with the objectives of text ranking. To address this challenge, we propose to compute the expected rank as transferability, explicitly reflecting the model{'}s ranking capability. Furthermore, to mitigate anisotropy and incorporate training dynamics, we adaptively scale isotropic sentence embeddings to yield an accurate expected rank score. Our resulting method, Adaptive Ranking Transferability (AiRTran), can effectively capture subtle differences between models. On challenging model selection scenarios across various text ranking datasets, it demonstrates significant improvements over previous classification-oriented TE methods, human intuition, and ChatGPT with minor time consumption.
[ "Bai, Jun", "Chen, Zhuofan", "Li, Zhenzi", "Hong, Hanhua", "Zhang, Jianfei", "Li, Chen", "Lin, Chenghua", "Rong, Wenge" ]
Leveraging Estimated Transferability Over Human Intuition for Model Selection in Text Ranking
emnlp-main.688
Poster
2409.16198
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.689.bib
https://aclanthology.org/2024.emnlp-main.689/
@inproceedings{zhao-etal-2024-unveiling, title = "Unveiling In-Context Learning: A Coordinate System to Understand Its Working Mechanism", author = "Zhao, Anhao and Ye, Fanghua and Fu, Jinlan and Shen, Xiaoyu", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.689", pages = "12375--12400", abstract = "Large language models (LLMs) exhibit remarkable in-context learning (ICL) capabilities. However, the underlying working mechanism of ICL remains poorly understood. Recent research presents two conflicting views on ICL: One emphasizes the impact of similar examples in the demonstrations, stressing the need for label correctness and more shots. The other attributes it to LLMs{'} inherent ability of task recognition, deeming label correctness and shot numbers of demonstrations as not crucial. In this work, we provide a Two-Dimensional Coordinate System that unifies both views into a systematic framework. The framework explains the behavior of ICL through two orthogonal variables: whether similar examples are presented in the demonstrations (perception) and whether LLMs can recognize the task (cognition). We propose the peak inverse rank metric to detect the task recognition ability of LLMs and study LLMs{'} reactions to different definitions of similarity. Based on these, we conduct extensive experiments to elucidate how ICL functions across each quadrant on multiple representative classification tasks. Finally, we extend our analyses to generation tasks, showing that our coordinate system can also be used to interpret ICL for generation tasks effectively.", }
Large language models (LLMs) exhibit remarkable in-context learning (ICL) capabilities. However, the underlying working mechanism of ICL remains poorly understood. Recent research presents two conflicting views on ICL: One emphasizes the impact of similar examples in the demonstrations, stressing the need for label correctness and more shots. The other attributes it to LLMs{'} inherent ability of task recognition, deeming label correctness and shot numbers of demonstrations as not crucial. In this work, we provide a Two-Dimensional Coordinate System that unifies both views into a systematic framework. The framework explains the behavior of ICL through two orthogonal variables: whether similar examples are presented in the demonstrations (perception) and whether LLMs can recognize the task (cognition). We propose the peak inverse rank metric to detect the task recognition ability of LLMs and study LLMs{'} reactions to different definitions of similarity. Based on these, we conduct extensive experiments to elucidate how ICL functions across each quadrant on multiple representative classification tasks. Finally, we extend our analyses to generation tasks, showing that our coordinate system can also be used to interpret ICL for generation tasks effectively.
[ "Zhao, Anhao", "Ye, Fanghua", "Fu, Jinlan", "Shen, Xiaoyu" ]
Unveiling In-Context Learning: A Coordinate System to Understand Its Working Mechanism
emnlp-main.689
Poster
2407.17011
[ "https://github.com/eit-nlp/2d-coordinate-system-for-icl" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.690.bib
https://aclanthology.org/2024.emnlp-main.690/
@inproceedings{yu-etal-2024-self-powered, title = "Self-Powered {LLM} Modality Expansion for Large Speech-Text Models", author = "Yu, Tengfei and Liu, Xuebo and Hou, Zhiyi and Ding, Liang and Tao, Dacheng and Zhang, Min", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.690", pages = "12401--12417", abstract = "Large language models (LLMs) exhibit remarkable performance across diverse tasks, indicating their potential for expansion into large speech-text models (LSMs) by integrating speech capabilities. Although unified speech-text pre-training and multimodal data instruction-tuning offer considerable benefits, these methods generally entail significant resource demands and tend to overfit specific tasks.This study aims to refine the use of speech datasets for LSM training by addressing the limitations of vanilla instruction tuning. We explore the instruction-following dynamics within LSMs, identifying a critical issue termed speech anchor bias{---}a tendency for LSMs to over-rely on speech inputs, mistakenly interpreting the entire speech modality as directives, thereby neglecting textual instructions.To counteract this bias, we introduce a self-powered LSM that leverages augmented automatic speech recognition data generated by the model itself for more effective instruction tuning. Our experiments across a range of speech-based tasks demonstrate that self-powered LSM mitigates speech anchor bias and improves the fusion of speech and text modalities in LSMs. Data, code and scripts are freely available at https://github.com/ytf-philp/Self-powered-LSM.", }
Large language models (LLMs) exhibit remarkable performance across diverse tasks, indicating their potential for expansion into large speech-text models (LSMs) by integrating speech capabilities. Although unified speech-text pre-training and multimodal data instruction-tuning offer considerable benefits, these methods generally entail significant resource demands and tend to overfit specific tasks.This study aims to refine the use of speech datasets for LSM training by addressing the limitations of vanilla instruction tuning. We explore the instruction-following dynamics within LSMs, identifying a critical issue termed speech anchor bias{---}a tendency for LSMs to over-rely on speech inputs, mistakenly interpreting the entire speech modality as directives, thereby neglecting textual instructions.To counteract this bias, we introduce a self-powered LSM that leverages augmented automatic speech recognition data generated by the model itself for more effective instruction tuning. Our experiments across a range of speech-based tasks demonstrate that self-powered LSM mitigates speech anchor bias and improves the fusion of speech and text modalities in LSMs. Data, code and scripts are freely available at https://github.com/ytf-philp/Self-powered-LSM.
[ "Yu, Tengfei", "Liu, Xuebo", "Hou, Zhiyi", "Ding, Liang", "Tao, Dacheng", "Zhang, Min" ]
Self-Powered LLM Modality Expansion for Large Speech-Text Models
emnlp-main.690
Poster
2410.03798
[ "https://github.com/ytf-philp/self-powered-lsm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.691.bib
https://aclanthology.org/2024.emnlp-main.691/
@inproceedings{liang-etal-2024-abseval, title = "{ABSE}val: An Agent-based Framework for Script Evaluation", author = "Liang, Sirui and Zhang, Baoli and Zhao, Jun and Liu, Kang", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.691", pages = "12418--12434", abstract = "Recent research indicates that large language models (LLMs) possess a certain degree of script planning capability. However, there is still a lack of focused work on evaluating scripts generated by LLMs. The evaluation of scripts poses challenges due to their logical structure, sequential organization, adherence to commonsense constraints, and open-endedness. In this work, We introduced a novel script evaluation dataset, MCScript, consisting of more than 1,500 script evaluation tasks and steps, and developed an agent-based script evaluation framework, ABSEval, to collaboratively evaluate scripts generated by LLMs. Our experiments demonstrate that ABSEval provides superior accuracy and relevance, aligning closely with human evaluation. We evaluated the script planning capabilities of 15 mainstream LLMs and provided a detailed analysis. Furthermore, we observed phenomena like the key factor influencing the script planning ability of LLM is not parameter size and suggested improvements for evaluating open-ended questions.", }
Recent research indicates that large language models (LLMs) possess a certain degree of script planning capability. However, there is still a lack of focused work on evaluating scripts generated by LLMs. The evaluation of scripts poses challenges due to their logical structure, sequential organization, adherence to commonsense constraints, and open-endedness. In this work, We introduced a novel script evaluation dataset, MCScript, consisting of more than 1,500 script evaluation tasks and steps, and developed an agent-based script evaluation framework, ABSEval, to collaboratively evaluate scripts generated by LLMs. Our experiments demonstrate that ABSEval provides superior accuracy and relevance, aligning closely with human evaluation. We evaluated the script planning capabilities of 15 mainstream LLMs and provided a detailed analysis. Furthermore, we observed phenomena like the key factor influencing the script planning ability of LLM is not parameter size and suggested improvements for evaluating open-ended questions.
[ "Liang, Sirui", "Zhang, Baoli", "Zhao, Jun", "Liu, Kang" ]
ABSEval: An Agent-based Framework for Script Evaluation
emnlp-main.691
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.692.bib
https://aclanthology.org/2024.emnlp-main.692/
@inproceedings{yu-etal-2024-latent, title = "Latent Concept-based Explanation of {NLP} Models", author = "Yu, Xuemin and Dalvi, Fahim and Durrani, Nadir and Nouri, Marzia and Sajjad, Hassan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.692", pages = "12435--12459", abstract = "Interpreting and understanding the predictions made by deep learning models poses a formidable challenge due to their inherently opaque nature. Many previous efforts aimed at explaining these predictions rely on input features, specifically, the words within NLP models. However, such explanations are often less informative due to the discrete nature of these words and their lack of contextual verbosity. To address this limitation, we introduce the Latent Concept Attribution method (LACOAT), which generates explanations for predictions based on latent concepts. Our foundational intuition is that a word can exhibit multiple facets, contingent upon the context in which it is used. Therefore, given a word in context, the latent space derived from our training process reflects a specific facet of that word. LACOAT functions by mapping the representations of salient input words into the training latent space, allowing it to provide latent context-based explanations of the prediction.", }
Interpreting and understanding the predictions made by deep learning models poses a formidable challenge due to their inherently opaque nature. Many previous efforts aimed at explaining these predictions rely on input features, specifically, the words within NLP models. However, such explanations are often less informative due to the discrete nature of these words and their lack of contextual verbosity. To address this limitation, we introduce the Latent Concept Attribution method (LACOAT), which generates explanations for predictions based on latent concepts. Our foundational intuition is that a word can exhibit multiple facets, contingent upon the context in which it is used. Therefore, given a word in context, the latent space derived from our training process reflects a specific facet of that word. LACOAT functions by mapping the representations of salient input words into the training latent space, allowing it to provide latent context-based explanations of the prediction.
[ "Yu, Xuemin", "Dalvi, Fahim", "Durrani, Nadir", "Nouri, Marzia", "Sajjad, Hassan" ]
Latent Concept-based Explanation of NLP Models
emnlp-main.692
Poster
2404.12545
[ "https://github.com/xuemin-yu/eraser_movie_latentconcept" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.693.bib
https://aclanthology.org/2024.emnlp-main.693/
@inproceedings{ok-etal-2024-decoding, title = "Decoding with Limited Teacher Supervision Requires Understanding When to Trust the Teacher", author = "Ok, Hyunjong and Ryu, Jegwang and Lee, Jaeho", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.693", pages = "12460--12476", }
No abstract found
[ "Ok, Hyunjong", "Ryu, Jegwang", "Lee, Jaeho" ]
Decoding with Limited Teacher Supervision Requires Understanding When to Trust the Teacher
emnlp-main.693
Poster
2406.18002
[ "https://github.com/hj-ok/declimsup" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.694.bib
https://aclanthology.org/2024.emnlp-main.694/
@inproceedings{mu-etal-2024-enhancing, title = "Enhancing Data Quality through Simple De-duplication: Navigating Responsible Computational Social Science Research", author = "Mu, Yida and Jin, Mali and Song, Xingyi and Aletras, Nikolaos", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.694", pages = "12477--12492", abstract = "Research in natural language processing (NLP) for Computational Social Science (CSS) heavily relies on data from social media platforms. This data plays a crucial role in the development of models for analysing socio-linguistic phenomena within online communities. In this work, we conduct an in-depth examination of 20 datasets extensively used in NLP for CSS to comprehensively examine data quality. Our analysis reveals that social media datasets exhibit varying levels of data duplication. Consequently, this gives rise to challenges like label inconsistencies and data leakage, compromising the reliability of models. Our findings also suggest that data duplication has an impact on the current claims of state-of-the-art performance, potentially leading to an overestimation of model effectiveness in real-world scenarios. Finally, we propose new protocols and best practices for improving dataset development from social media data and its usage.", }
Research in natural language processing (NLP) for Computational Social Science (CSS) heavily relies on data from social media platforms. This data plays a crucial role in the development of models for analysing socio-linguistic phenomena within online communities. In this work, we conduct an in-depth examination of 20 datasets extensively used in NLP for CSS to comprehensively examine data quality. Our analysis reveals that social media datasets exhibit varying levels of data duplication. Consequently, this gives rise to challenges like label inconsistencies and data leakage, compromising the reliability of models. Our findings also suggest that data duplication has an impact on the current claims of state-of-the-art performance, potentially leading to an overestimation of model effectiveness in real-world scenarios. Finally, we propose new protocols and best practices for improving dataset development from social media data and its usage.
[ "Mu, Yida", "Jin, Mali", "Song, Xingyi", "Aletras, Nikolaos" ]
Enhancing Data Quality through Simple De-duplication: Navigating Responsible Computational Social Science Research
emnlp-main.694
Poster
2410.03545
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.695.bib
https://aclanthology.org/2024.emnlp-main.695/
@inproceedings{frydenlund-2024-mystery, title = "The Mystery of the Pathological Path-star Task for Language Models", author = "Frydenlund, Arvid", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.695", pages = "12493--12516", abstract = "The recently introduced path-star task is a minimal task designed to exemplify limitations to the abilities of language models (Bachmann and Nagarajan, 2024). It involves a path-star graph where multiple arms radiate from a single starting node and each node is unique. Given the start node and a specified target node that ends an arm, the task is to generate the arm containing that target node. This is straightforward for a human but surprisingly difficult for language models, which did not outperform the random baseline. The authors hypothesized this is due to a deficiency in teacher-forcing and the next-token prediction paradigm. We demonstrate the task is learnable using teacher-forcing in alternative settings and that the issue is partially due to representation. We introduce a regularization method using structured samples of the same graph but with differing target nodes, improving results across a variety of model types. We provide RASP proofs showing the task is theoretically solvable. Finally, we find settings where an encoder-only model can consistently solve the task.", }
The recently introduced path-star task is a minimal task designed to exemplify limitations to the abilities of language models (Bachmann and Nagarajan, 2024). It involves a path-star graph where multiple arms radiate from a single starting node and each node is unique. Given the start node and a specified target node that ends an arm, the task is to generate the arm containing that target node. This is straightforward for a human but surprisingly difficult for language models, which did not outperform the random baseline. The authors hypothesized this is due to a deficiency in teacher-forcing and the next-token prediction paradigm. We demonstrate the task is learnable using teacher-forcing in alternative settings and that the issue is partially due to representation. We introduce a regularization method using structured samples of the same graph but with differing target nodes, improving results across a variety of model types. We provide RASP proofs showing the task is theoretically solvable. Finally, we find settings where an encoder-only model can consistently solve the task.
[ "Frydenlund, Arvid" ]
The Mystery of the Pathological Path-star Task for Language Models
emnlp-main.695
Poster
2410.13779
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.696.bib
https://aclanthology.org/2024.emnlp-main.696/
@inproceedings{vitsakis-etal-2024-voices, title = "Voices in a Crowd: Searching for clusters of unique perspectives", author = "Vitsakis, Nikolas and Parekh, Amit and Konstas, Ioannis", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.696", pages = "12517--12539", abstract = "Language models have been shown to reproduce underlying biases existing in their training data, which is the majority perspective by default. Proposed solutions aim to capture minority perspectives by either modelling annotator disagreements or grouping annotators based on shared metadata, both of which face significant challenges. We propose a framework that trains models without encoding annotator metadata, extracts latent embeddings informed by annotator behaviour, and creates clusters of similar opinions, that we refer to as voices. Resulting clusters are validated post-hoc via internal and external quantitative metrics, as well a qualitative analysis to identify the type of voice that each cluster represents. Our results demonstrate the strong generalisation capability of our framework, indicated by resulting clusters being adequately robust, while also capturing minority perspectives based on different demographic factors throughout two distinct datasets.", }
Language models have been shown to reproduce underlying biases existing in their training data, which is the majority perspective by default. Proposed solutions aim to capture minority perspectives by either modelling annotator disagreements or grouping annotators based on shared metadata, both of which face significant challenges. We propose a framework that trains models without encoding annotator metadata, extracts latent embeddings informed by annotator behaviour, and creates clusters of similar opinions, that we refer to as voices. Resulting clusters are validated post-hoc via internal and external quantitative metrics, as well a qualitative analysis to identify the type of voice that each cluster represents. Our results demonstrate the strong generalisation capability of our framework, indicated by resulting clusters being adequately robust, while also capturing minority perspectives based on different demographic factors throughout two distinct datasets.
[ "Vitsakis, Nikolas", "Parekh, Amit", "Konstas, Ioannis" ]
Voices in a Crowd: Searching for clusters of unique perspectives
emnlp-main.696
Poster
2407.14259
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.697.bib
https://aclanthology.org/2024.emnlp-main.697/
@inproceedings{yu-etal-2024-neeko, title = "Neeko: Leveraging Dynamic {L}o{RA} for Efficient Multi-Character Role-Playing Agent", author = "Yu, Xiaoyan and Luo, Tongxu and Wei, Yifan and Lei, Fangyu and Huang, Yiming and Peng, Hao and Zhu, Liehuang", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.697", pages = "12540--12557", abstract = "Large Language Models (LLMs) have revolutionized open-domain dialogue agents but encounter challenges in multi-character role-playing (MCRP) scenarios. To address the issue, we present Neeko, an innovative framework designed for efficient multiple characters imitation. Neeko employs a dynamic low-rank adapter (LoRA) strategy, enabling it to adapt seamlessly to diverse characters. Our framework breaks down the role-playing process into agent pre-training, multiple characters playing, and character incremental learning, effectively handling both seen and unseen roles. This dynamic approach, coupled with distinct LoRA blocks for each character, enhances Neeko{'}s adaptability to unique attributes, personalities, and speaking patterns. As a result, Neeko demonstrates superior performance in MCRP over most existing methods, offering more engaging and versatile user interaction experiences.", }
Large Language Models (LLMs) have revolutionized open-domain dialogue agents but encounter challenges in multi-character role-playing (MCRP) scenarios. To address the issue, we present Neeko, an innovative framework designed for efficient multiple characters imitation. Neeko employs a dynamic low-rank adapter (LoRA) strategy, enabling it to adapt seamlessly to diverse characters. Our framework breaks down the role-playing process into agent pre-training, multiple characters playing, and character incremental learning, effectively handling both seen and unseen roles. This dynamic approach, coupled with distinct LoRA blocks for each character, enhances Neeko{'}s adaptability to unique attributes, personalities, and speaking patterns. As a result, Neeko demonstrates superior performance in MCRP over most existing methods, offering more engaging and versatile user interaction experiences.
[ "Yu, Xiaoyan", "Luo, Tongxu", "Wei, Yifan", "Lei, Fangyu", "Huang, Yiming", "Peng, Hao", "Zhu, Liehuang" ]
Neeko: Leveraging Dynamic LoRA for Efficient Multi-Character Role-Playing Agent
emnlp-main.697
Poster
2402.13717
[ "https://github.com/weiyifan1023/neeko" ]
https://huggingface.co/papers/2402.13717
1
2
0
7
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.698.bib
https://aclanthology.org/2024.emnlp-main.698/
@inproceedings{mei-etal-2024-slang, title = "{SLANG}: New Concept Comprehension of Large Language Models", author = "Mei, Lingrui and Liu, Shenghua and Wang, Yiwei and Bi, Baolong and Cheng, Xueqi", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.698", pages = "12558--12575", abstract = "The dynamic nature of language, particularly evident in the realm of slang and memes on the Internet, poses serious challenges to the adaptability of Large Language Models (LLMs). Traditionally anchored to static datasets, these models often struggle to keep up with the rapid linguistic evolution characteristic of online communities. This research aims to bridge this gap by enhancing LLMs{'} comprehension of the evolving new concepts on the Internet, without the high cost of continual retraining. In pursuit of this goal, we introduce \textbf{SLNAG}, a benchmark designed to autonomously integrate novel data and assess LLMs{'} ability to comprehend emerging concepts, alongside \textbf{FOCUS}, an approach uses causal inference to enhance LLMs to understand new phrases and their colloquial context. Our benchmark and approach involves understanding real-world instances of linguistic shifts, serving as contextual beacons, to form more precise and contextually relevant connections between newly emerging expressions and their meanings. The empirical analysis shows that our causal inference-based approach outperforms the baseline methods in terms of precision and relevance in the comprehension of Internet slang and memes.", }
The dynamic nature of language, particularly evident in the realm of slang and memes on the Internet, poses serious challenges to the adaptability of Large Language Models (LLMs). Traditionally anchored to static datasets, these models often struggle to keep up with the rapid linguistic evolution characteristic of online communities. This research aims to bridge this gap by enhancing LLMs{'} comprehension of the evolving new concepts on the Internet, without the high cost of continual retraining. In pursuit of this goal, we introduce \textbf{SLNAG}, a benchmark designed to autonomously integrate novel data and assess LLMs{'} ability to comprehend emerging concepts, alongside \textbf{FOCUS}, an approach uses causal inference to enhance LLMs to understand new phrases and their colloquial context. Our benchmark and approach involves understanding real-world instances of linguistic shifts, serving as contextual beacons, to form more precise and contextually relevant connections between newly emerging expressions and their meanings. The empirical analysis shows that our causal inference-based approach outperforms the baseline methods in terms of precision and relevance in the comprehension of Internet slang and memes.
[ "Mei, Lingrui", "Liu, Shenghua", "Wang, Yiwei", "Bi, Baolong", "Cheng, Xueqi" ]
SLANG: New Concept Comprehension of Large Language Models
emnlp-main.698
Poster
2401.12585
[ "https://github.com/meirtz/focusonslang-toolbox" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.699.bib
https://aclanthology.org/2024.emnlp-main.699/
@inproceedings{lan-etal-2024-towards, title = "Towards Interpretable Sequence Continuation: Analyzing Shared Circuits in Large Language Models", author = "Lan, Michael and Torr, Philip and Barez, Fazl", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.699", pages = "12576--12601", abstract = "While transformer models exhibit strong capabilities on linguistic tasks, their complex architectures make them difficult to interpret. Recent work has aimed to reverse engineer transformer models into human-readable representations called circuits that implement algorithmic functions. We extend this research by analyzing and comparing circuits for similar sequence continuation tasks, which include increasing sequences of Arabic numerals, number words, and months. By applying circuit interpretability analysis, we identify a key sub-circuit in both GPT-2 Small and Llama-2-7B responsible for detecting sequence members and for predicting the next member in a sequence. Our analysis reveals that semantically related sequences rely on shared circuit subgraphs with analogous roles. Additionally, we show that this sub-circuit has effects on various math-related prompts, such as on intervaled circuits, Spanish number word and months continuation, and natural language word problems. Overall, documenting shared computational structures enables better model behavior predictions, identification of errors, and safer editing procedures. This mechanistic understanding of transformers is a critical step towards building more robust, aligned, and interpretable language models.", }
While transformer models exhibit strong capabilities on linguistic tasks, their complex architectures make them difficult to interpret. Recent work has aimed to reverse engineer transformer models into human-readable representations called circuits that implement algorithmic functions. We extend this research by analyzing and comparing circuits for similar sequence continuation tasks, which include increasing sequences of Arabic numerals, number words, and months. By applying circuit interpretability analysis, we identify a key sub-circuit in both GPT-2 Small and Llama-2-7B responsible for detecting sequence members and for predicting the next member in a sequence. Our analysis reveals that semantically related sequences rely on shared circuit subgraphs with analogous roles. Additionally, we show that this sub-circuit has effects on various math-related prompts, such as on intervaled circuits, Spanish number word and months continuation, and natural language word problems. Overall, documenting shared computational structures enables better model behavior predictions, identification of errors, and safer editing procedures. This mechanistic understanding of transformers is a critical step towards building more robust, aligned, and interpretable language models.
[ "Lan, Michael", "Torr, Philip", "Barez, Fazl" ]
Towards Interpretable Sequence Continuation: Analyzing Shared Circuits in Large Language Models
emnlp-main.699
Poster
2311.04131
[ "https://github.com/apartresearch/seqcont_circuits" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.700.bib
https://aclanthology.org/2024.emnlp-main.700/
@inproceedings{qin-etal-2024-new, title = "Why Does New Knowledge Create Messy Ripple Effects in {LLM}s?", author = "Qin, Jiaxin and Zhang, Zixuan and Han, Chi and Yu, Pengfei and Li, Manling and Ji, Heng", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.700", pages = "12602--12609", abstract = "Extensive previous research has focused on post-training knowledge editing (KE) for language models (LMs) to ensure that knowledge remains accurate and up-to-date. One desired property and open question in KE is to let edited LMs correctly handle ripple effects, where LM is expected to answer its logically related knowledge accurately. In this paper, we answer the question of why most KE methods still create messy ripple effects. We conduct extensive analysis and identify a salient indicator, GradSim, that effectively reveals when and why updated knowledge ripples in LMs. GradSim is computed by the cosine similarity between gradients of the original fact and its related knowledge. We observe a strong positive correlation between ripple effect performance and GradSim across different LMs, KE methods, and evaluation metrics. Further investigations into three counter-intuitive failure cases (Negation, Over-Ripple, Multi-Lingual) of ripple effects demonstrate that these failures are often associated with very low GradSim. This finding validates that GradSim is an effective indicator of when knowledge ripples in LMs.", }
Extensive previous research has focused on post-training knowledge editing (KE) for language models (LMs) to ensure that knowledge remains accurate and up-to-date. One desired property and open question in KE is to let edited LMs correctly handle ripple effects, where LM is expected to answer its logically related knowledge accurately. In this paper, we answer the question of why most KE methods still create messy ripple effects. We conduct extensive analysis and identify a salient indicator, GradSim, that effectively reveals when and why updated knowledge ripples in LMs. GradSim is computed by the cosine similarity between gradients of the original fact and its related knowledge. We observe a strong positive correlation between ripple effect performance and GradSim across different LMs, KE methods, and evaluation metrics. Further investigations into three counter-intuitive failure cases (Negation, Over-Ripple, Multi-Lingual) of ripple effects demonstrate that these failures are often associated with very low GradSim. This finding validates that GradSim is an effective indicator of when knowledge ripples in LMs.
[ "Qin, Jiaxin", "Zhang, Zixuan", "Han, Chi", "Yu, Pengfei", "Li, Manling", "Ji, Heng" ]
Why Does New Knowledge Create Messy Ripple Effects in LLMs?
emnlp-main.700
Poster
2407.12828
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0