Datasets:

Schema (one record per paper, with observed value ranges):

field                       | dtype    | range
bibtex_url                  | string   | length 41–53
proceedings                 | string   | length 38–50
bibtext                     | string   | length 566–3.75k
abstract                    | string   | length 4–3.1k
authors                     | sequence | length 1–66
title                       | string   | length 12–172
id                          | string   | length 7–19
type                        | string   | 2 classes
arxiv_id                    | string   | length 0–10
GitHub                      | sequence | length 1–1
paper_page                  | string   | length 0–40
n_linked_authors            | int64    | -1 to 21
upvotes                     | int64    | -1 to 116
num_comments                | int64    | -1 to 11
n_authors                   | int64    | -1 to 61
Models                      | sequence | length 0–100
Datasets                    | sequence | length 0–100
Spaces                      | sequence | length 0–100
old_Models                  | sequence | length 0–100
old_Datasets                | sequence | length 0–100
old_Spaces                  | sequence | length 0–100
paper_page_exists_pre_conf  | int64    | 0 to 1
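As a rough, illustrative sketch (not part of the original listing): records with this schema can be loaded and queried with the Hugging Face `datasets` library. The repository id below is a placeholder, since the listing does not name the dataset.

# Minimal sketch, assuming the records below are published as a Hugging Face
# dataset; "org/emnlp-2024-papers" is a hypothetical repository id.
from datasets import load_dataset

ds = load_dataset("org/emnlp-2024-papers", split="train")

# Each row follows the schema: string columns (bibtex_url, title, arxiv_id, ...),
# list columns (authors, GitHub, Models, Datasets, Spaces, ...), and int64
# columns (n_linked_authors, upvotes, num_comments, n_authors), where -1
# marks a missing value for papers without a Hugging Face paper page.
row = ds[0]
print(row["title"])
print(row["authors"])      # list of "Last, First" strings
print(row["paper_page"])   # empty string when no paper page is linked
print(row["upvotes"])      # -1 when the paper page does not exist

# Example query: papers whose paper page existed before the conference and
# that link at least one model or dataset on the Hub.
linked = ds.filter(
    lambda r: r["paper_page_exists_pre_conf"] == 1
    and (len(r["Models"]) > 0 or len(r["Datasets"]) > 0)
)
print(len(linked), "papers with pre-conference paper pages and linked artifacts")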
https://aclanthology.org/2024.emnlp-main.301.bib
https://aclanthology.org/2024.emnlp-main.301/
@inproceedings{sileo-2024-scaling, title = "Scaling Synthetic Logical Reasoning Datasets with Context-Sensitive Declarative Grammars", author = "Sileo, Damien", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.301", pages = "5275--5283", abstract = "Logical reasoning remains a challenge for natural language processing, but it can be improved by training language models to mimic theorem provers on procedurally generated problems. Previous work used domain-specific proof generation algorithms, which biases reasoning toward specific proof traces and limits auditability and extensibility. We present a simpler and more general declarative framework with flexible context-sensitive rules binding multiple languages (specifically, simplified English and the TPTP theorem-proving language). We construct first-order logic problems by selecting up to 32 premises and one hypothesis. We demonstrate that using semantic constraints during generation and careful English verbalization of predicates enhances logical reasoning without hurting natural English tasks. Using relatively small DeBERTa-v3 models, we achieve state-of-the-art accuracy on the FOLIO human-authored logic dataset, surpassing GPT-4 in accuracy with or without an external solver by 12{\%}.", }
Logical reasoning remains a challenge for natural language processing, but it can be improved by training language models to mimic theorem provers on procedurally generated problems. Previous work used domain-specific proof generation algorithms, which biases reasoning toward specific proof traces and limits auditability and extensibility. We present a simpler and more general declarative framework with flexible context-sensitive rules binding multiple languages (specifically, simplified English and the TPTP theorem-proving language). We construct first-order logic problems by selecting up to 32 premises and one hypothesis. We demonstrate that using semantic constraints during generation and careful English verbalization of predicates enhances logical reasoning without hurting natural English tasks. Using relatively small DeBERTa-v3 models, we achieve state-of-the-art accuracy on the FOLIO human-authored logic dataset, surpassing GPT-4 in accuracy with or without an external solver by 12%.
[ "Sileo, Damien" ]
Scaling Synthetic Logical Reasoning Datasets with Context-Sensitive Declarative Grammars
emnlp-main.301
Poster
2406.11035
[ "" ]
https://huggingface.co/papers/2406.11035
1
1
0
1
[]
[ "tasksource/FOL-nli" ]
[]
[]
[ "tasksource/FOL-nli" ]
[]
1
https://aclanthology.org/2024.emnlp-main.302.bib
https://aclanthology.org/2024.emnlp-main.302/
@inproceedings{poli-etal-2024-improving, title = "Improving Spoken Language Modeling with Phoneme Classification: A Simple Fine-tuning Approach", author = "Poli, Maxime and Chemla, Emmanuel and Dupoux, Emmanuel", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.302", pages = "5284--5292", abstract = "Recent progress in Spoken Language Modeling has shown that learning language directly from speech is feasible. Generating speech through a pipeline that operates at the text level typically loses nuances, intonations, and non-verbal vocalizations. Modeling directly from speech opens up the path to more natural and expressive systems. On the other hand, speech-only systems require up to three orders of magnitude more data to catch up to their text-based counterparts in terms of their semantic abilities. We show that fine-tuning speech representation models on phoneme classification leads to more context-invariant representations, and language models trained on these units achieve comparable lexical comprehension to ones trained on hundred times more data.", }
Recent progress in Spoken Language Modeling has shown that learning language directly from speech is feasible. Generating speech through a pipeline that operates at the text level typically loses nuances, intonations, and non-verbal vocalizations. Modeling directly from speech opens up the path to more natural and expressive systems. On the other hand, speech-only systems require up to three orders of magnitude more data to catch up to their text-based counterparts in terms of their semantic abilities. We show that fine-tuning speech representation models on phoneme classification leads to more context-invariant representations, and language models trained on these units achieve comparable lexical comprehension to ones trained on hundred times more data.
[ "Poli, Maxime", "Chemla, Emmanuel", "Dupoux, Emmanuel" ]
Improving Spoken Language Modeling with Phoneme Classification: A Simple Fine-tuning Approach
emnlp-main.302
Poster
2410.00025
[ "https://github.com/bootphon/spokenlm-phoneme" ]
https://huggingface.co/papers/2410.00025
0
0
0
3
[ "coml/hubert-phoneme-classification" ]
[]
[]
[ "coml/hubert-phoneme-classification" ]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.303.bib
https://aclanthology.org/2024.emnlp-main.303/
@inproceedings{zheng-etal-2024-safely, title = "Safely Learning with Private Data: A Federated Learning Framework for Large Language Model", author = "Zheng, Jia-Ying and Zhang, Hainan and Wang, Lingxiang and Qiu, Wangjie and Zheng, Hong-Wei and Zheng, Zhi-Ming", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.303", pages = "5293--5306", abstract = "Private data, being larger and quality-higher than public data, can greatly improve large language models (LLM). However, due to privacy concerns, this data is often dispersed in multiple silos, making its secure utilization for LLM training a challenge. Federated learning (FL) is an ideal solution for training models with distributed private data, but traditional frameworks like FedAvg are unsuitable for LLM due to their high computational demands on clients. An alternative, split learning, offloads most training parameters to the server while training embedding and output layers locally, making it more suitable for LLM. Nonetheless, it faces significant challenges in security and efficiency. Firstly, the gradients of embeddings are prone to attacks, leading to potential reverse engineering of private data. Furthermore, the server{'}s limitation of handling only one client{'}s training request at a time hinders parallel training, severely impacting training efficiency. In this paper, we propose a Federated Learning framework for LLM, named FL-GLM, which prevents data leakage caused by both server-side and peer-client attacks while improving training efficiency. Specifically, we first place the input block and output block on local client to prevent embedding gradient attacks from server. Secondly, we employ key-encryption during client-server communication to prevent reverse engineering attacks from peer-clients. Lastly, we employ optimization methods like client-batching or server-hierarchical, adopting different acceleration methods based on the actual computational capabilities of the server. Experimental results on NLU and generation tasks demonstrate that FL-GLM achieves comparable metrics to centralized chatGLM model, validating the effectiveness of our federated learning framework.", }
Private data, being larger and quality-higher than public data, can greatly improve large language models (LLM). However, due to privacy concerns, this data is often dispersed in multiple silos, making its secure utilization for LLM training a challenge. Federated learning (FL) is an ideal solution for training models with distributed private data, but traditional frameworks like FedAvg are unsuitable for LLM due to their high computational demands on clients. An alternative, split learning, offloads most training parameters to the server while training embedding and output layers locally, making it more suitable for LLM. Nonetheless, it faces significant challenges in security and efficiency. Firstly, the gradients of embeddings are prone to attacks, leading to potential reverse engineering of private data. Furthermore, the server's limitation of handling only one client's training request at a time hinders parallel training, severely impacting training efficiency. In this paper, we propose a Federated Learning framework for LLM, named FL-GLM, which prevents data leakage caused by both server-side and peer-client attacks while improving training efficiency. Specifically, we first place the input block and output block on local client to prevent embedding gradient attacks from server. Secondly, we employ key-encryption during client-server communication to prevent reverse engineering attacks from peer-clients. Lastly, we employ optimization methods like client-batching or server-hierarchical, adopting different acceleration methods based on the actual computational capabilities of the server. Experimental results on NLU and generation tasks demonstrate that FL-GLM achieves comparable metrics to centralized chatGLM model, validating the effectiveness of our federated learning framework.
[ "Zheng, Jia-Ying", "Zhang, Hainan", "Wang, Lingxiang", "Qiu, Wangjie", "Zheng, Hong-Wei", "Zheng, Zhi-Ming" ]
Safely Learning with Private Data: A Federated Learning Framework for Large Language Model
emnlp-main.303
Poster
2406.14898
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.304.bib
https://aclanthology.org/2024.emnlp-main.304/
@inproceedings{li-etal-2024-formality, title = "Formality is Favored: Unraveling the Learning Preferences of Large Language Models on Data with Conflicting Knowledge", author = "Li, Jiahuan and Cao, Yiqing and Huang, Shujian and Chen, Jiajun", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.304", pages = "5307--5320", abstract = "Having been trained on massive pretraining data, large language models have shown excellent performance on many knowledge-intensive tasks. However, pretraining data tends to contain misleading and even conflicting information, and it is intriguing to understand how LLMs handle these noisy data during training. In this study, we systematically analyze LLMs{'} learning preferences for data with conflicting knowledge. We find that pretrained LLMs establish learning preferences similar to humans, i.e., preferences towards formal texts and texts with fewer spelling errors, resulting in faster learning and more favorable treatment of knowledge in data with such features when facing conflicts. This finding is generalizable across models and languages and is more evident in larger models. An in-depth analysis reveals that LLMs tend to trust data with features that signify consistency with the majority of data, and it is possible to instill new preferences and erase old ones by manipulating the degree of consistency with the majority data.", }
Having been trained on massive pretraining data, large language models have shown excellent performance on many knowledge-intensive tasks. However, pretraining data tends to contain misleading and even conflicting information, and it is intriguing to understand how LLMs handle these noisy data during training. In this study, we systematically analyze LLMs' learning preferences for data with conflicting knowledge. We find that pretrained LLMs establish learning preferences similar to humans, i.e., preferences towards formal texts and texts with fewer spelling errors, resulting in faster learning and more favorable treatment of knowledge in data with such features when facing conflicts. This finding is generalizable across models and languages and is more evident in larger models. An in-depth analysis reveals that LLMs tend to trust data with features that signify consistency with the majority of data, and it is possible to instill new preferences and erase old ones by manipulating the degree of consistency with the majority data.
[ "Li, Jiahuan", "Cao, Yiqing", "Huang, Shujian", "Chen, Jiajun" ]
Formality is Favored: Unraveling the Learning Preferences of Large Language Models on Data with Conflicting Knowledge
emnlp-main.304
Poster
2410.04784
[ "https://github.com/CaoYiqingT/Formality-is-Favored" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.305.bib
https://aclanthology.org/2024.emnlp-main.305/
@inproceedings{luo-etal-2024-textual, title = "How Does the Textual Information Affect the Retrieval of Multimodal In-Context Learning?", author = "Luo, Yang and Zheng, Zangwei and Zhu, Zirui and You, Yang", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.305", pages = "5321--5335", abstract = "The increase in parameter size of multimodal large language models (MLLMs) introduces significant capabilities, particularly multimodal in-context learning, where MLLMs enhance task performance without updating pre-trained parameters. However, this effectiveness hinges on the appropriate selection of in-context examples, a process currently biased towards visual data, overlooking textual information. More importantly, the area of supervised retrievers for retrieval of multimodal in-context learning, crucial for optimal in-context example selection, continues to be investigated. Our study provides an in-depth evaluation of the impact of textual information on the unsupervised selection of in-context examples in multimodal contexts, uncovering a notable sensitivity of retriever performance to the employed modalities. Based on the above finding, we introduce a novel supervised MLLM prompt retriever MSIER that leverages a trained retriever based on MLLM{'}s confidence to select examples, which enhances multimodal in-context learning efficiency. This approach is validated through extensive testing across three different tasks, demonstrating the method{'}s effectiveness. Additionally, we investigate the influence of modalities on our supervised retrieval method{'}s training and explore the transferability of the supervised prompt retriever. This exploration paves the way for future advancements, highlighting the potential for refined in-context learning in MLLMs through the strategic use of multimodal data. The public code is available at https://github.com/NUS-HPC-AI-Lab/Multimodal-ICL-Retriever.", }
The increase in parameter size of multimodal large language models (MLLMs) introduces significant capabilities, particularly multimodal in-context learning, where MLLMs enhance task performance without updating pre-trained parameters. However, this effectiveness hinges on the appropriate selection of in-context examples, a process currently biased towards visual data, overlooking textual information. More importantly, the area of supervised retrievers for retrieval of multimodal in-context learning, crucial for optimal in-context example selection, continues to be investigated. Our study provides an in-depth evaluation of the impact of textual information on the unsupervised selection of in-context examples in multimodal contexts, uncovering a notable sensitivity of retriever performance to the employed modalities. Based on the above finding, we introduce a novel supervised MLLM prompt retriever MSIER that leverages a trained retriever based on MLLM's confidence to select examples, which enhances multimodal in-context learning efficiency. This approach is validated through extensive testing across three different tasks, demonstrating the method's effectiveness. Additionally, we investigate the influence of modalities on our supervised retrieval method's training and explore the transferability of the supervised prompt retriever. This exploration paves the way for future advancements, highlighting the potential for refined in-context learning in MLLMs through the strategic use of multimodal data. The public code is available at https://github.com/NUS-HPC-AI-Lab/Multimodal-ICL-Retriever.
[ "Luo, Yang", "Zheng, Zangwei", "Zhu, Zirui", "You, Yang" ]
How Does the Textual Information Affect the Retrieval of Multimodal In-Context Learning?
emnlp-main.305
Poster
2404.12866
[ "https://github.com/nus-hpc-ai-lab/multimodal-icl-retriever" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.306.bib
https://aclanthology.org/2024.emnlp-main.306/
@inproceedings{hayati-etal-2024-far, title = "How Far Can We Extract Diverse Perspectives from Large Language Models?", author = "Hayati, Shirley Anugrah and Lee, Minhwa and Rajagopal, Dheeraj and Kang, Dongyeop", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.306", pages = "5336--5366", abstract = "Collecting diverse human opinions is costly and challenging. This leads to a recent trend in exploiting large language models (LLMs) for generating diverse data for potential scalable and efficient solutions. However, the extent to which LLMs can generate diverse perspectives on subjective topics is still unclear. In this study, we explore LLMs{'} capacity of generating diverse perspectives and rationales on subjective topics such as social norms and argumentative texts. We introduce the problem of extracting maximum diversity from LLMs. Motivated by how humans form opinions based on values, we propose a criteria-based prompting technique to ground diverse opinions. To see how far we can extract diverse perspectives from LLMs, or called diversity coverage, we employ a step-by-step recall prompting to generate more outputs from the model iteratively. Our methods, applied to various tasks, show that LLMs can indeed produce diverse opinions according to the degree of task subjectivity. We also find that LLMs performance of extracting maximum diversity is on par with human.", }
Collecting diverse human opinions is costly and challenging. This leads to a recent trend in exploiting large language models (LLMs) for generating diverse data for potential scalable and efficient solutions. However, the extent to which LLMs can generate diverse perspectives on subjective topics is still unclear. In this study, we explore LLMs' capacity of generating diverse perspectives and rationales on subjective topics such as social norms and argumentative texts. We introduce the problem of extracting maximum diversity from LLMs. Motivated by how humans form opinions based on values, we propose a criteria-based prompting technique to ground diverse opinions. To see how far we can extract diverse perspectives from LLMs, or called diversity coverage, we employ a step-by-step recall prompting to generate more outputs from the model iteratively. Our methods, applied to various tasks, show that LLMs can indeed produce diverse opinions according to the degree of task subjectivity. We also find that LLMs performance of extracting maximum diversity is on par with human.
[ "Hayati, Shirley Anugrah", "Lee, Minhwa", "Rajagopal, Dheeraj", "Kang, Dongyeop" ]
How Far Can We Extract Diverse Perspectives from Large Language Models?
emnlp-main.306
Poster
2311.09799
[ "https://github.com/minnesotanlp/diversity-extraction-from-llms" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.307.bib
https://aclanthology.org/2024.emnlp-main.307/
@inproceedings{purohit-etal-2024-explora, title = "{EXPLORA}: Efficient Exemplar Subset Selection for Complex Reasoning", author = "Purohit, Kiran and V, Venktesh and Devalla, Raghuram and Yerragorla, Krishna Mohan and Bhattacharya, Sourangshu and Anand, Avishek", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.307", pages = "5367--5388", abstract = "Answering reasoning-based complex questions over text and hybrid sources, including tables, is a challenging task. Recent advances in large language models (LLMs) have enabled in-context learning (ICL), allowing LLMs to acquire proficiency in a specific task using only a few demonstration samples (exemplars). A critical challenge in ICL is the selection of optimal exemplars, which can be either task-specific (static) or test-example-specific (dynamic). Static exemplars provide faster inference times and increased robustness across a distribution of test examples. In this paper, we propose an algorithm for static exemplar subset selection for complex reasoning tasks. We introduce EXPLORA, a novel exploration method designed to estimate the parameters of the scoring function, which evaluates exemplar subsets without incorporating confidence information. EXPLORA significantly reduces the number of LLM calls to {\textasciitilde}11{\%} of those required by state-of-the-art methods and achieves a substantial performance improvement of 12.24{\%}. We open-source our code and data (https://github.com/kiranpurohit/EXPLORA).", }
Answering reasoning-based complex questions over text and hybrid sources, including tables, is a challenging task. Recent advances in large language models (LLMs) have enabled in-context learning (ICL), allowing LLMs to acquire proficiency in a specific task using only a few demonstration samples (exemplars). A critical challenge in ICL is the selection of optimal exemplars, which can be either task-specific (static) or test-example-specific (dynamic). Static exemplars provide faster inference times and increased robustness across a distribution of test examples. In this paper, we propose an algorithm for static exemplar subset selection for complex reasoning tasks. We introduce EXPLORA, a novel exploration method designed to estimate the parameters of the scoring function, which evaluates exemplar subsets without incorporating confidence information. EXPLORA significantly reduces the number of LLM calls to ~11% of those required by state-of-the-art methods and achieves a substantial performance improvement of 12.24%. We open-source our code and data (https://github.com/kiranpurohit/EXPLORA).
[ "Purohit, Kiran", "V, Venktesh", "Devalla, Raghuram", "Yerragorla, Krishna Mohan", "Bhattacharya, Sourangshu", "An", ", Avishek" ]
EXPLORA: Efficient Exemplar Subset Selection for Complex Reasoning
emnlp-main.307
Poster
2411.03877
[ "https://github.com/kiranpurohit/explora" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.308.bib
https://aclanthology.org/2024.emnlp-main.308/
@inproceedings{zhou-etal-2024-llm-feature, title = "An {LLM} Feature-based Framework for Dialogue Constructiveness Assessment", author = "Zhou, Lexin and Farag, Youmna and Vlachos, Andreas", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.308", pages = "5389--5409", abstract = "Research on dialogue constructiveness assessment focuses on (i) analysing conversational factors that influence individuals to take specific actions, win debates, change their perspectives or broaden their open-mindedness and (ii) predicting constructiveness outcomes following dialogues for such use cases. These objectives can be achieved by training either interpretable feature-based models (which often involve costly human annotations) or neural models such as pre-trained language models (which have empirically shown higher task accuracy but lack interpretability). In this paper we propose an LLM feature-based framework for dialogue constructiveness assessment that combines the strengths of feature-based and neural approaches, while mitigating their downsides. The framework first defines a set of dataset-independent and interpretable linguistic features, which can be extracted by both prompting an LLM and simple heuristics. Such features are then used to train LLM feature-based models. We apply this framework to three datasets of dialogue constructiveness and find that our LLM feature-based models outperform or performs at least as well as standard feature-based models and neural models. We also find that the LLM feature-based model learns more robust prediction rules instead of relying on superficial shortcuts, which often trouble neural models.", }
Research on dialogue constructiveness assessment focuses on (i) analysing conversational factors that influence individuals to take specific actions, win debates, change their perspectives or broaden their open-mindedness and (ii) predicting constructiveness outcomes following dialogues for such use cases. These objectives can be achieved by training either interpretable feature-based models (which often involve costly human annotations) or neural models such as pre-trained language models (which have empirically shown higher task accuracy but lack interpretability). In this paper we propose an LLM feature-based framework for dialogue constructiveness assessment that combines the strengths of feature-based and neural approaches, while mitigating their downsides. The framework first defines a set of dataset-independent and interpretable linguistic features, which can be extracted by both prompting an LLM and simple heuristics. Such features are then used to train LLM feature-based models. We apply this framework to three datasets of dialogue constructiveness and find that our LLM feature-based models outperform or performs at least as well as standard feature-based models and neural models. We also find that the LLM feature-based model learns more robust prediction rules instead of relying on superficial shortcuts, which often trouble neural models.
[ "Zhou, Lexin", "Farag, Youmna", "Vlachos, Andreas" ]
An LLM Feature-based Framework for Dialogue Constructiveness Assessment
emnlp-main.308
Poster
2406.14760
[ "https://github.com/lexzhou/llm-feature-based-framework-for-DCA" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.309.bib
https://aclanthology.org/2024.emnlp-main.309/
@inproceedings{chen-etal-2024-relevance, title = "Relevance Is a Guiding Light: Relevance-aware Adaptive Learning for End-to-end Task-oriented Dialogue System", author = "Chen, Zhanpeng and Zhu, Zhihong and Xu, Wanshi and Zhuang, Xianwei and Zou, Yuexian", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.309", pages = "5410--5420", abstract = "Retrieving accurate domain knowledge and providing helpful information are crucial in developing an effective end-to-end task-oriented dialogue system (E2ETOD). The field has witnessed numerous methods following the retrieve-then-generate paradigm and training their systems on one specific domain. However, existing approaches still suffer from the Distractive Attributes Problem (DAP): struggling to deal with false but similar knowledge (hard negative entities), which is even more intractable when countless pieces of knowledge from different domains are blended in a real-world scenario. To alleviate DAP, we propose the Relevance-aware Adaptive Learning (ReAL) method, a two-stage training framework that eliminates hard negatives step-by-step and aligns retrieval with generation. In the first stage, we introduce a top-k adaptive contrastive loss and utilize the divergence-driven feedback from the frozen generator to pre-train the retriever. In the second stage, we propose using the metric score distribution as an anchor to align retrieval with generation. Thorough experiments on three benchmark datasets demonstrate ReAL{'}s superiority over existing methods, with extensive analysis validating its strong capabilities of overcoming in- and cross-domain distractions.", }
Retrieving accurate domain knowledge and providing helpful information are crucial in developing an effective end-to-end task-oriented dialogue system (E2ETOD). The field has witnessed numerous methods following the retrieve-then-generate paradigm and training their systems on one specific domain. However, existing approaches still suffer from the Distractive Attributes Problem (DAP): struggling to deal with false but similar knowledge (hard negative entities), which is even more intractable when countless pieces of knowledge from different domains are blended in a real-world scenario. To alleviate DAP, we propose the Relevance-aware Adaptive Learning (ReAL) method, a two-stage training framework that eliminates hard negatives step-by-step and aligns retrieval with generation. In the first stage, we introduce a top-k adaptive contrastive loss and utilize the divergence-driven feedback from the frozen generator to pre-train the retriever. In the second stage, we propose using the metric score distribution as an anchor to align retrieval with generation. Thorough experiments on three benchmark datasets demonstrate ReAL's superiority over existing methods, with extensive analysis validating its strong capabilities of overcoming in- and cross-domain distractions.
[ "Chen, Zhanpeng", "Zhu, Zhihong", "Xu, Wanshi", "Zhuang, Xianwei", "Zou, Yuexian" ]
Relevance Is a Guiding Light: Relevance-aware Adaptive Learning for End-to-end Task-oriented Dialogue System
emnlp-main.309
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.310.bib
https://aclanthology.org/2024.emnlp-main.310/
@inproceedings{burdisso-etal-2024-dialog2flow, title = "{D}ialog2{F}low: Pre-training Soft-Contrastive Action-Driven Sentence Embeddings for Automatic Dialog Flow Extraction", author = "Burdisso, Sergio and Madikeri, Srikanth and Motlicek, Petr", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.310", pages = "5421--5440", abstract = "Efficiently deriving structured workflows from unannotated dialogs remains an underexplored and formidable challenge in computational linguistics. Automating this process could significantly accelerate the manual design of workflows in new domains and enable the grounding of large language models in domain-specific flowcharts, enhancing transparency and controllability.In this paper, we introduce Dialog2Flow (D2F) embeddings, which differ from conventional sentence embeddings by mapping utterances to a latent space where they are grouped according to their communicative and informative functions (i.e., the actions they represent). D2F allows for modeling dialogs as continuous trajectories in a latent space with distinct action-related regions. By clustering D2F embeddings, the latent space is quantized, and dialogs can be converted into sequences of region/action IDs, facilitating the extraction of the underlying workflow.To pre-train D2F, we build a comprehensive dataset by unifying twenty task-oriented dialog datasets with normalized per-turn action annotations. We also introduce a novel soft contrastive loss that leverages the semantic information of these actions to guide the representation learning process, showing superior performance compared to standard supervised contrastive loss.Evaluation against various sentence embeddings, including dialog-specific ones, demonstrates that D2F yields superior qualitative and quantitative results across diverse domains.", }
Efficiently deriving structured workflows from unannotated dialogs remains an underexplored and formidable challenge in computational linguistics. Automating this process could significantly accelerate the manual design of workflows in new domains and enable the grounding of large language models in domain-specific flowcharts, enhancing transparency and controllability. In this paper, we introduce Dialog2Flow (D2F) embeddings, which differ from conventional sentence embeddings by mapping utterances to a latent space where they are grouped according to their communicative and informative functions (i.e., the actions they represent). D2F allows for modeling dialogs as continuous trajectories in a latent space with distinct action-related regions. By clustering D2F embeddings, the latent space is quantized, and dialogs can be converted into sequences of region/action IDs, facilitating the extraction of the underlying workflow. To pre-train D2F, we build a comprehensive dataset by unifying twenty task-oriented dialog datasets with normalized per-turn action annotations. We also introduce a novel soft contrastive loss that leverages the semantic information of these actions to guide the representation learning process, showing superior performance compared to standard supervised contrastive loss. Evaluation against various sentence embeddings, including dialog-specific ones, demonstrates that D2F yields superior qualitative and quantitative results across diverse domains.
[ "Burdisso, Sergio", "Madikeri, Srikanth", "Motlicek, Petr" ]
Dialog2Flow: Pre-training Soft-Contrastive Action-Driven Sentence Embeddings for Automatic Dialog Flow Extraction
emnlp-main.310
Poster
2410.18481
[ "https://github.com/idiap/dialog2flow" ]
https://huggingface.co/papers/2410.18481
1
5
2
3
[ "sergioburdisso/dialog2flow-joint-bert-base", "sergioburdisso/dialog2flow-single-bert-base", "sergioburdisso/dialog2flow-single-dse-base", "sergioburdisso/dialog2flow-joint-dse-base" ]
[ "sergioburdisso/dialog2flow-dataset" ]
[]
[ "sergioburdisso/dialog2flow-joint-bert-base", "sergioburdisso/dialog2flow-single-bert-base", "sergioburdisso/dialog2flow-single-dse-base", "sergioburdisso/dialog2flow-joint-dse-base" ]
[ "sergioburdisso/dialog2flow-dataset" ]
[]
1
https://aclanthology.org/2024.emnlp-main.311.bib
https://aclanthology.org/2024.emnlp-main.311/
@inproceedings{tang-etal-2024-words, title = "Words Worth a Thousand Pictures: Measuring and Understanding Perceptual Variability in Text-to-Image Generation", author = "Tang, Raphael and Zhang, Crystina and Xu, Lixinyu and Lu, Yao and Li, Wenyan and Stenetorp, Pontus and Lin, Jimmy and Ture, Ferhan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.311", pages = "5441--5454", abstract = "Diffusion models are the state of the art in text-to-image generation, but their perceptual variability remains understudied. In this paper, we examine how prompts affect image variability in black-box diffusion-based models. We propose W1KP, a human-calibrated measure of variability in a set of images, bootstrapped from existing image-pair perceptual distances. Current datasets do not cover recent diffusion models, thus we curate three test sets for evaluation. Our best perceptual distance outperforms nine baselines by up to 18 points in accuracy, and our calibration matches graded human judgements 78{\%} of the time. Using W1KP, we study prompt reusability and show that Imagen prompts can be reused for 10-50 random seeds before new images become too similar to already generated images, while Stable Diffusion XL and DALL-E 3 can be reused 50-200 times. Lastly, we analyze 56 linguistic features of real prompts, finding that the prompt{'}s length, CLIP embedding norm, concreteness, and word senses influence variability most. As far as we are aware, we are the first to analyze diffusion variability from a visuolinguistic perspective. Our project page is at http://w1kp.com.", }
Diffusion models are the state of the art in text-to-image generation, but their perceptual variability remains understudied. In this paper, we examine how prompts affect image variability in black-box diffusion-based models. We propose W1KP, a human-calibrated measure of variability in a set of images, bootstrapped from existing image-pair perceptual distances. Current datasets do not cover recent diffusion models, thus we curate three test sets for evaluation. Our best perceptual distance outperforms nine baselines by up to 18 points in accuracy, and our calibration matches graded human judgements 78% of the time. Using W1KP, we study prompt reusability and show that Imagen prompts can be reused for 10-50 random seeds before new images become too similar to already generated images, while Stable Diffusion XL and DALL-E 3 can be reused 50-200 times. Lastly, we analyze 56 linguistic features of real prompts, finding that the prompt's length, CLIP embedding norm, concreteness, and word senses influence variability most. As far as we are aware, we are the first to analyze diffusion variability from a visuolinguistic perspective. Our project page is at http://w1kp.com.
[ "Tang, Raphael", "Zhang, Crystina", "Xu, Lixinyu", "Lu, Yao", "Li, Wenyan", "Stenetorp, Pontus", "Lin, Jimmy", "Ture, Ferhan" ]
Words Worth a Thousand Pictures: Measuring and Understanding Perceptual Variability in Text-to-Image Generation
emnlp-main.311
Poster
2406.08482
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.312.bib
https://aclanthology.org/2024.emnlp-main.312/
@inproceedings{chalkidis-2024-investigating, title = "Investigating {LLM}s as Voting Assistants via Contextual Augmentation: A Case Study on the {E}uropean Parliament Elections 2024", author = "Chalkidis, Ilias", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.312", pages = "5455--5467", abstract = "In light of the recent 2024 European Parliament elections, we are investigating if LLMs can be used as Voting Advice Applications (VAAs). We audit MISTRAL and MIXTRAL models and evaluate their accuracy in predicting the stance of political parties based on the latest {``}EU and I{''} voting assistance questionnaire. Furthermore, we explore alternatives to improve models{'} performance by augmenting the input context via Retrieval-Augmented Generation (RAG) relying on web search, and Self-Reflection using staged conversations that aim to re-collect relevant content from the model{'}s internal memory. We find that MIXTRAL is highly accurate with an 82{\%} accuracy on average with a significant performance disparity across different political groups (50-95{\%}). Augmenting the input context with expert-curated information can lead to a significant boost of approx. 9{\%}, which remains an open challenge for automated RAG approaches, even considering curated content.", }
In light of the recent 2024 European Parliament elections, we are investigating if LLMs can be used as Voting Advice Applications (VAAs). We audit MISTRAL and MIXTRAL models and evaluate their accuracy in predicting the stance of political parties based on the latest "EU and I" voting assistance questionnaire. Furthermore, we explore alternatives to improve models' performance by augmenting the input context via Retrieval-Augmented Generation (RAG) relying on web search, and Self-Reflection using staged conversations that aim to re-collect relevant content from the model's internal memory. We find that MIXTRAL is highly accurate with an 82% accuracy on average with a significant performance disparity across different political groups (50-95%). Augmenting the input context with expert-curated information can lead to a significant boost of approx. 9%, which remains an open challenge for automated RAG approaches, even considering curated content.
[ "Chalkidis, Ilias" ]
Investigating LLMs as Voting Assistants via Contextual Augmentation: A Case Study on the European Parliament Elections 2024
emnlp-main.312
Poster
2407.08495
[ "" ]
https://huggingface.co/papers/2407.08495
0
0
0
1
[]
[ "coastalcph/euandi_2024" ]
[]
[]
[ "coastalcph/euandi_2024" ]
[]
1
https://aclanthology.org/2024.emnlp-main.313.bib
https://aclanthology.org/2024.emnlp-main.313/
@inproceedings{xu-etal-2024-adaption, title = "Adaption-of-Thought: Learning Question Difficulty Improves Large Language Models for Reasoning", author = "Xu, Mayi and Li, Yongqi and Sun, Ke and Qian, Tieyun", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.313", pages = "5468--5495", abstract = "Large language models (LLMs) have shown excellent capability for solving reasoning problems. Existing approaches do not differentiate the question difficulty when designing prompting methods for them. Clearly, a simple method cannot elicit sufficient knowledge from LLMs to answer a hard question. Meanwhile, a sophisticated one will force the LLM to generate redundant or even inaccurate intermediate steps toward a simple question. Consequently, the performance of existing methods fluctuates among various questions.In this work, we propose Adaption-of-Thought (AdoT), an adaptive method to improve LLMs for the reasoning problem, which first measures the question difficulty and then tailors demonstration set construction and difficulty-adapted retrieval strategies for the adaptive demonstration construction. Experimental results on three reasoning tasks prove the superiority of our proposed method, showing an absolute improvement of up to 5.5{\%} on arithmetic reasoning, 7.4{\%} on symbolic reasoning, and 2.3{\%} on commonsense reasoning. Our codes and implementation details are available at: https://github.com/NLPGM/AdoT", }
Large language models (LLMs) have shown excellent capability for solving reasoning problems. Existing approaches do not differentiate the question difficulty when designing prompting methods for them. Clearly, a simple method cannot elicit sufficient knowledge from LLMs to answer a hard question. Meanwhile, a sophisticated one will force the LLM to generate redundant or even inaccurate intermediate steps toward a simple question. Consequently, the performance of existing methods fluctuates among various questions. In this work, we propose Adaption-of-Thought (AdoT), an adaptive method to improve LLMs for the reasoning problem, which first measures the question difficulty and then tailors demonstration set construction and difficulty-adapted retrieval strategies for the adaptive demonstration construction. Experimental results on three reasoning tasks prove the superiority of our proposed method, showing an absolute improvement of up to 5.5% on arithmetic reasoning, 7.4% on symbolic reasoning, and 2.3% on commonsense reasoning. Our codes and implementation details are available at: https://github.com/NLPGM/AdoT
[ "Xu, Mayi", "Li, Yongqi", "Sun, Ke", "Qian, Tieyun" ]
Adaption-of-Thought: Learning Question Difficulty Improves Large Language Models for Reasoning
emnlp-main.313
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.314.bib
https://aclanthology.org/2024.emnlp-main.314/
@inproceedings{fan-etal-2024-logicst, title = "{L}ogic{ST}: A Logical Self-Training Framework for Document-Level Relation Extraction with Incomplete Annotations", author = "Fan, Shengda and Wang, Yanting and Mo, Shasha and Niu, Jianwei", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.314", pages = "5496--5510", abstract = "Document-level relation extraction (DocRE) aims to identify relationships between entities within a document. Due to the vast number of entity pairs, fully annotating all fact triplets is challenging, resulting in datasets with numerous false negative samples. Recently, self-training-based methods have been introduced to address this issue. However, these methods are purely black-box and sub-symbolic, making them difficult to interpret and prone to overlooking symbolic interdependencies between relations.To remedy this deficiency, our insight is that symbolic knowledge, such as logical rules, can be used as diagnostic tools to identify conflicts between pseudo-labels. By resolving these conflicts through logical diagnoses, we can correct erroneous pseudo-labels, thus enhancing the training of neural models.To achieve this, we propose **LogicST**, a neural-logic self-training framework that iteratively resolves conflicts and constructs the minimal diagnostic set for updating models. Extensive experiments demonstrate that LogicST significantly improves performance and outperforms previous state-of-the-art methods. For instance, LogicST achieves an increase of **7.94{\%}** in F1 score compared to CAST (Tan et al., 2023a) on the DocRED benchmark (Yao et al., 2019). Additionally, LogicST is more time-efficient than its self-training counterparts, requiring only **10{\%}** of the training time of CAST.", }
Document-level relation extraction (DocRE) aims to identify relationships between entities within a document. Due to the vast number of entity pairs, fully annotating all fact triplets is challenging, resulting in datasets with numerous false negative samples. Recently, self-training-based methods have been introduced to address this issue. However, these methods are purely black-box and sub-symbolic, making them difficult to interpret and prone to overlooking symbolic interdependencies between relations. To remedy this deficiency, our insight is that symbolic knowledge, such as logical rules, can be used as diagnostic tools to identify conflicts between pseudo-labels. By resolving these conflicts through logical diagnoses, we can correct erroneous pseudo-labels, thus enhancing the training of neural models. To achieve this, we propose **LogicST**, a neural-logic self-training framework that iteratively resolves conflicts and constructs the minimal diagnostic set for updating models. Extensive experiments demonstrate that LogicST significantly improves performance and outperforms previous state-of-the-art methods. For instance, LogicST achieves an increase of **7.94%** in F1 score compared to CAST (Tan et al., 2023a) on the DocRED benchmark (Yao et al., 2019). Additionally, LogicST is more time-efficient than its self-training counterparts, requiring only **10%** of the training time of CAST.
[ "Fan, Shengda", "Wang, Yanting", "Mo, Shasha", "Niu, Jianwei" ]
LogicST: A Logical Self-Training Framework for Document-Level Relation Extraction with Incomplete Annotations
emnlp-main.314
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.315.bib
https://aclanthology.org/2024.emnlp-main.315/
@inproceedings{peng-sogaard-2024-concept, title = "Concept Space Alignment in Multilingual {LLM}s", author = "Peng, Qiwei and S{\o}gaard, Anders", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.315", pages = "5511--5526", abstract = "Multilingual large language models (LLMs) seem to generalize somewhat across languages. We hypothesize this is a result of implicit vector space alignment. Evaluating such alignment, we see that larger models exhibit very high-quality linear alignments between corresponding concepts in different languages. Our experiments show that multilingual LLMs suffer from two familiar weaknesses: generalization works best for languages with similar typology, and for abstract concepts. For some models, e.g., the Llama-2 family of models, prompt-based embeddings align better than word embeddings, but the projections are less linear {--} an observation that holds across almost all model families, indicating that some of the implicitly learned alignments are broken somewhat by prompt-based methods.", }
Multilingual large language models (LLMs) seem to generalize somewhat across languages. We hypothesize this is a result of implicit vector space alignment. Evaluating such alignment, we see that larger models exhibit very high-quality linear alignments between corresponding concepts in different languages. Our experiments show that multilingual LLMs suffer from two familiar weaknesses: generalization works best for languages with similar typology, and for abstract concepts. For some models, e.g., the Llama-2 family of models, prompt-based embeddings align better than word embeddings, but the projections are less linear – an observation that holds across almost all model families, indicating that some of the implicitly learned alignments are broken somewhat by prompt-based methods.
[ "Peng, Qiwei", "S{\\o}gaard, Anders" ]
Concept Space Alignment in Multilingual LLMs
emnlp-main.315
Poster
2410.01079
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.316.bib
https://aclanthology.org/2024.emnlp-main.316/
@inproceedings{yuan-etal-2024-predicting, title = "Predicting Rewards Alongside Tokens: Non-disruptive Parameter Insertion for Efficient Inference Intervention in Large Language Model", author = "Yuan, Chenhan and Huang, Fei and Peng, Ru and Lu, Keming and Yu, Bowen and Zhou, Chang and Zhou, Jingren", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.316", pages = "5527--5542", abstract = "Transformer-based large language models (LLMs) exhibit limitations such as generating unsafe responses, unreliable reasoning, etc. Existing inference intervention approaches attempt to mitigate these issues by finetuning additional models to produce calibration signals (such as rewards) that guide the LLM{'}s decoding process. However, this solution introduces substantial time and space overhead due to the separate models required. This work proposes Non-disruptive parameters insertion (Otter), inserting extra parameters into the transformer architecture to predict calibration signals along with the original LLM output. Otter offers state-of-the-art performance on multiple demanding tasks while saving up to 86.5{\%} extra space and 98.5{\%} extra time. Furthermore, Otter seamlessly integrates with existing inference engines, requiring only a one-line code change, and the original model response remains accessible after the parameter insertion.", }
Transformer-based large language models (LLMs) exhibit limitations such as generating unsafe responses, unreliable reasoning, etc. Existing inference intervention approaches attempt to mitigate these issues by finetuning additional models to produce calibration signals (such as rewards) that guide the LLM's decoding process. However, this solution introduces substantial time and space overhead due to the separate models required. This work proposes Non-disruptive parameters insertion (Otter), inserting extra parameters into the transformer architecture to predict calibration signals along with the original LLM output. Otter offers state-of-the-art performance on multiple demanding tasks while saving up to 86.5% extra space and 98.5% extra time. Furthermore, Otter seamlessly integrates with existing inference engines, requiring only a one-line code change, and the original model response remains accessible after the parameter insertion.
[ "Yuan, Chenhan", "Huang, Fei", "Peng, Ru", "Lu, Keming", "Yu, Bowen", "Zhou, Chang", "Zhou, Jingren" ]
Predicting Rewards Alongside Tokens: Non-disruptive Parameter Insertion for Efficient Inference Intervention in Large Language Model
emnlp-main.316
Poster
2408.10764
[ "" ]
https://huggingface.co/papers/2408.10764
0
7
2
7
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.317.bib
https://aclanthology.org/2024.emnlp-main.317/
@inproceedings{liu-etal-2024-nlebench, title = "{NLEB}ench+{N}or{GLM}: A Comprehensive Empirical Analysis and Benchmark Dataset for Generative Language Models in {N}orwegian", author = "Liu, Peng and Zhang, Lemei and Farup, Terje and Lauvrak, Even W. and Ingvaldsen, Jon Espen and Eide, Simen and Gulla, Jon Atle and Yang, Zhirong", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.317", pages = "5543--5560", abstract = "Norwegian, spoken by only 5 million population, is under-representative within the most impressive breakthroughs in NLP tasks. To the best of our knowledge, there has not yet been a comprehensive evaluation of the existing language models (LMs) on Norwegian generation tasks during the article writing process. To fill this gap, we 1) compiled the existing Norwegian dataset and pre-trained 4 Norwegian Open Language Models varied from parameter scales and architectures, collectively called NorGLM; 2) introduced a comprehensive benchmark, NLEBench, for evaluating natural language generation capabilities in Norwegian, encompassing translation and human annotation. Based on the investigation, we find that: 1) the mainstream, English-dominated LM GPT-3.5 has limited capability in understanding the Norwegian context; 2) the increase in model parameter scales demonstrates limited impact on the performance of downstream tasks when the pre-training dataset is constrained in size; 3) smaller models also demonstrate the reasoning capability through Chain-of-Thought; 4) a multi-task dataset that includes synergy tasks can be used to verify the generalizability of LLMs on natural language understanding and, meanwhile, test the interconnectedness of these NLP tasks. We share our resources and code for reproducibility under a CC BY-NC 4.0 license.", }
Norwegian, spoken by only 5 million people, is under-represented in the most impressive breakthroughs in NLP tasks. To the best of our knowledge, there has not yet been a comprehensive evaluation of the existing language models (LMs) on Norwegian generation tasks during the article writing process. To fill this gap, we 1) compiled the existing Norwegian dataset and pre-trained 4 Norwegian Open Language Models with varied parameter scales and architectures, collectively called NorGLM; 2) introduced a comprehensive benchmark, NLEBench, for evaluating natural language generation capabilities in Norwegian, encompassing translation and human annotation. Based on the investigation, we find that: 1) the mainstream, English-dominated LM GPT-3.5 has limited capability in understanding the Norwegian context; 2) the increase in model parameter scales demonstrates limited impact on the performance of downstream tasks when the pre-training dataset is constrained in size; 3) smaller models also demonstrate the reasoning capability through Chain-of-Thought; 4) a multi-task dataset that includes synergy tasks can be used to verify the generalizability of LLMs on natural language understanding and, meanwhile, test the interconnectedness of these NLP tasks. We share our resources and code for reproducibility under a CC BY-NC 4.0 license.
[ "Liu, Peng", "Zhang, Lemei", "Farup, Terje", "Lauvrak, Even W.", "Ingvaldsen, Jon Espen", "Eide, Simen", "Gulla, Jon Atle", "Yang, Zhirong" ]
NLEBench+NorGLM: A Comprehensive Empirical Analysis and Benchmark Dataset for Generative Language Models in Norwegian
emnlp-main.317
Poster
2312.01314
[ "https://github.com/smartmedia-ai/norglm" ]
https://huggingface.co/papers/2312.01314
1
2
1
8
[ "NorGLM/NorGPT-369M", "NorGLM/NorGPT-3B", "NorGLM/NorLlama-3B", "NorGLM/NorGPT-3B-continue", "NorGLM/Entailment", "NorGLM/NorGPT-3B-rfhl-summarization", "RichardErkhov/NorGLM_-_NorLlama-3B-gguf" ]
[ "NorGLM/NO-Multi-QA-Sum", "NorGLM/NO-ConvAI2", "NorGLM/NO-Alpaca-Plus", "NorGLM/NO-CrowS-Pairs", "NorGLM/NO-BoolQ", "NorGLM/NO-QNLI", "NorGLM/NO-MRPC", "NorGLM/NO-CNN-DailyMail", "NorGLM/NO-Story" ]
[ "hanschbe/NorGLM-NorGPT-3B" ]
[ "NorGLM/NorGPT-369M", "NorGLM/NorGPT-3B", "NorGLM/NorLlama-3B", "NorGLM/NorGPT-3B-continue", "NorGLM/Entailment", "NorGLM/NorGPT-3B-rfhl-summarization", "RichardErkhov/NorGLM_-_NorLlama-3B-gguf" ]
[ "NorGLM/NO-Multi-QA-Sum", "NorGLM/NO-ConvAI2", "NorGLM/NO-Alpaca-Plus", "NorGLM/NO-CrowS-Pairs", "NorGLM/NO-BoolQ", "NorGLM/NO-QNLI", "NorGLM/NO-MRPC", "NorGLM/NO-CNN-DailyMail", "NorGLM/NO-Story" ]
[ "hanschbe/NorGLM-NorGPT-3B" ]
1
https://aclanthology.org/2024.emnlp-main.318.bib
https://aclanthology.org/2024.emnlp-main.318/
@inproceedings{wang-demberg-2024-rsa, title = "{RSA}-Control: A Pragmatics-Grounded Lightweight Controllable Text Generation Framework", author = "Wang, Yifan and Demberg, Vera", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.318", pages = "5561--5582", abstract = "Despite significant advancements in natural language generation, controlling language models to produce texts with desired attributes remains a formidable challenge. In this work, we introduce RSA-Control, a training-free controllable text generation framework grounded in pragmatics. RSA-Control directs the generation process by recursively reasoning between imaginary speakers and listeners, enhancing the likelihood that target attributes are correctly interpreted by listeners amidst distractors. Additionally, we introduce a self-adjustable rationality parameter, which allows for automatic adjustment of control strength based on context. Our experiments, conducted with two task types and two types of language models, demonstrate that RSA-Control achieves strong attribute control while maintaining language fluency and content consistency. Our code is available at https://github.com/Ewanwong/RSA-Control.", }
Despite significant advancements in natural language generation, controlling language models to produce texts with desired attributes remains a formidable challenge. In this work, we introduce RSA-Control, a training-free controllable text generation framework grounded in pragmatics. RSA-Control directs the generation process by recursively reasoning between imaginary speakers and listeners, enhancing the likelihood that target attributes are correctly interpreted by listeners amidst distractors. Additionally, we introduce a self-adjustable rationality parameter, which allows for automatic adjustment of control strength based on context. Our experiments, conducted with two task types and two types of language models, demonstrate that RSA-Control achieves strong attribute control while maintaining language fluency and content consistency. Our code is available at https://github.com/Ewanwong/RSA-Control.
[ "Wang, Yifan", "Demberg, Vera" ]
RSA-Control: A Pragmatics-Grounded Lightweight Controllable Text Generation Framework
emnlp-main.318
Poster
2410.19109
[ "https://github.com/ewanwong/rsa-control" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.319.bib
https://aclanthology.org/2024.emnlp-main.319/
@inproceedings{wang-etal-2024-scaling, title = "Scaling Laws Across Model Architectures: A Comparative Analysis of Dense and {M}o{E} Models in Large Language Models", author = "Wang, Siqi and Chen, Zhengyu and Li, Bei and He, Keqing and Zhang, Min and Wang, Jingang", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.319", pages = "5583--5595", abstract = "The scaling of large language models (LLMs) is a critical research area for the efficiency and effectiveness of model training and deployment. Our work investigates the transferability and discrepancies of scaling laws between Dense Models and Mixture of Experts (MoE) models. Through a combination of theoretical analysis and extensive experiments, including consistent loss scaling, optimal batch size/learning rate scaling, and resource allocation strategies scaling, our findings reveal that the power-law scaling framework also applies to MoE Models, indicating that the fundamental principles governing the scaling behavior of these models are preserved, even though the architecture differs. Additionally, MoE Models demonstrate superior generalization, resulting in lower testing losses with the same training compute budget compared to Dense Models. These findings indicate the scaling consistency and transfer generalization capabilities of MoE Models, providing new insights for optimizing MoE Model training and deployment strategies.", }
The scaling of large language models (LLMs) is a critical research area for the efficiency and effectiveness of model training and deployment. Our work investigates the transferability and discrepancies of scaling laws between Dense Models and Mixture of Experts (MoE) models. Through a combination of theoretical analysis and extensive experiments, including consistent loss scaling, optimal batch size/learning rate scaling, and resource allocation strategies scaling, our findings reveal that the power-law scaling framework also applies to MoE Models, indicating that the fundamental principles governing the scaling behavior of these models are preserved, even though the architecture differs. Additionally, MoE Models demonstrate superior generalization, resulting in lower testing losses with the same training compute budget compared to Dense Models. These findings indicate the scaling consistency and transfer generalization capabilities of MoE Models, providing new insights for optimizing MoE Model training and deployment strategies.
[ "Wang, Siqi", "Chen, Zhengyu", "Li, Bei", "He, Keqing", "Zhang, Min", "Wang, Jingang" ]
Scaling Laws Across Model Architectures: A Comparative Analysis of Dense and MoE Models in Large Language Models
emnlp-main.319
Poster
2410.05661
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.320.bib
https://aclanthology.org/2024.emnlp-main.320/
@inproceedings{saley-etal-2024-synergizing, title = "Synergizing In-context Learning with Hints for End-to-end Task-oriented Dialog Systems", author = "Saley, Vishal Vivek and Das, Rocktim Jyoti and Raghu, Dinesh and ., Mausam", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.320", pages = "5596--5612", abstract = "End-to-end Task-Oriented Dialog (TOD) systems typically require extensive training datasets to perform well. In contrast, large language model (LLM) based TOD systems can excel even with limited data due to their ability to learn tasks through in-context exemplars. However, these models lack alignment with the style of responses in training data and often generate comprehensive responses, making it difficult for users to grasp the information quickly. In response, we propose SyncTOD that synergizes LLMs with task-specific hints to improve alignment in low-data settings. SyncTOD employs small auxiliary models to provide hints and select exemplars for in-context prompts. With ChatGPT, SyncTOD achieves superior performance compared to LLM-based baselines and SoTA models in low-data settings, while retaining competitive performance in full-data settings.", }
End-to-end Task-Oriented Dialog (TOD) systems typically require extensive training datasets to perform well. In contrast, large language model (LLM) based TOD systems can excel even with limited data due to their ability to learn tasks through in-context exemplars. However, these models lack alignment with the style of responses in training data and often generate comprehensive responses, making it difficult for users to grasp the information quickly. In response, we propose SyncTOD that synergizes LLMs with task-specific hints to improve alignment in low-data settings. SyncTOD employs small auxiliary models to provide hints and select exemplars for in-context prompts. With ChatGPT, SyncTOD achieves superior performance compared to LLM-based baselines and SoTA models in low-data settings, while retaining competitive performance in full-data settings.
[ "Saley, Vishal Vivek", "Das, Rocktim Jyoti", "Raghu, Dinesh", "., Mausam" ]
Synergizing In-context Learning with Hints for End-to-end Task-oriented Dialog Systems
emnlp-main.320
Poster
2405.15585
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.321.bib
https://aclanthology.org/2024.emnlp-main.321/
@inproceedings{wang-etal-2024-rear, title = "{REAR}: A Relevance-Aware Retrieval-Augmented Framework for Open-Domain Question Answering", author = "Wang, Yuhao and Ren, Ruiyang and Li, Junyi and Zhao, Xin and Liu, Jing and Wen, Ji-Rong", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.321", pages = "5613--5626", abstract = "Considering the limited internal parametric knowledge, retrieval-augmented generation (RAG) has been widely used to extend the knowledge scope of large language models (LLMs). Despite the extensive efforts on RAG research, in existing methods, LLMs cannot precisely assess the relevance of retrieved documents, thus likely leading to misleading or even incorrect utilization of external knowledge (i.e., retrieved documents). To address this issue, in this paper, we propose REAR, a RElevance-Aware Retrieval-augmented approach for open-domain question answering (QA). As the key motivation, we aim to enhance the self-awareness regarding the reliability of external knowledge for LLMs, so as to adaptively utilize external knowledge in RAG systems. Specially, we develop a novel architecture for LLM based RAG system, by incorporating a specially designed assessnent module that precisely assesses the relevance of retrieved documents. Furthermore, we propose an improved training method based on bi-granularity relevance fusion and noise-resistant training. By combining the improvements in both architecture and training, our proposed REAR can better utilize external knowledge by effectively perceiving the relevance of retrieved documents. Experiments on four open-domain QA tasks show that REAR significantly outperforms previous a number of competitive RAG approaches. Our codes can be accessed at https://github.com/RUCAIBox/REAR.", }
Considering the limited internal parametric knowledge, retrieval-augmented generation (RAG) has been widely used to extend the knowledge scope of large language models (LLMs). Despite the extensive efforts on RAG research, in existing methods, LLMs cannot precisely assess the relevance of retrieved documents, thus likely leading to misleading or even incorrect utilization of external knowledge (i.e., retrieved documents). To address this issue, in this paper, we propose REAR, a RElevance-Aware Retrieval-augmented approach for open-domain question answering (QA). As the key motivation, we aim to enhance the self-awareness regarding the reliability of external knowledge for LLMs, so as to adaptively utilize external knowledge in RAG systems. Specifically, we develop a novel architecture for LLM-based RAG systems, incorporating a specially designed assessment module that precisely assesses the relevance of retrieved documents. Furthermore, we propose an improved training method based on bi-granularity relevance fusion and noise-resistant training. By combining the improvements in both architecture and training, our proposed REAR can better utilize external knowledge by effectively perceiving the relevance of retrieved documents. Experiments on four open-domain QA tasks show that REAR significantly outperforms a number of previous competitive RAG approaches. Our codes can be accessed at https://github.com/RUCAIBox/REAR.
[ "Wang, Yuhao", "Ren, Ruiyang", "Li, Junyi", "Zhao, Xin", "Liu, Jing", "Wen, Ji-Rong" ]
REAR: A Relevance-Aware Retrieval-Augmented Framework for Open-Domain Question Answering
emnlp-main.321
Poster
2402.17497
[ "https://github.com/rucaibox/rear" ]
https://huggingface.co/papers/2402.17497
0
0
0
6
[ "RUCAIBox/rear-llama-7b-hf" ]
[]
[]
[ "RUCAIBox/rear-llama-7b-hf" ]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.322.bib
https://aclanthology.org/2024.emnlp-main.322/
@inproceedings{wang-etal-2024-leave, title = "Leave No Document Behind: Benchmarking Long-Context {LLM}s with Extended Multi-Doc {QA}", author = "Wang, Minzheng and Chen, Longze and Cheng, Fu and Liao, Shengyi and Zhang, Xinghua and Wu, Bingli and Yu, Haiyang and Xu, Nan and Zhang, Lei and Luo, Run and Li, Yunshui and Yang, Min and Huang, Fei and Li, Yongbin", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.322", pages = "5627--5646", abstract = "Long-context modeling capabilities of Large Language Models (LLMs) have garnered widespread attention, leading to the emergence of LLMs with ultra-context windows. Meanwhile, benchmarks for evaluating long-context language models are gradually catching up. However, existing benchmarks employ irrelevant noise texts to artificially extend the length of test cases, diverging from the real-world scenarios of long-context applications. To bridge this gap, we propose a novel long-context benchmark, Loong, aligning with realistic scenarios through extended multi-document question answering (QA). Unlike typical document QA, in Loong{'}s test cases, each document is relevant to the final answer, ignoring any document will lead to the failure of the answer. Furthermore, Loong introduces four types of tasks with a range of context lengths: Spotlight Locating, Comparison, Clustering, and Chain of Reasoning, to facilitate a more realistic and comprehensive evaluation of long-context understanding. Extensive experiments indicate that existing long-context language models still exhibit considerable potential for enhancement. Retrieval augmented generation (RAG) achieves poor performance, demonstrating that Loong can reliably assess the model{'}s long-context modeling capabilities.", }
Long-context modeling capabilities of Large Language Models (LLMs) have garnered widespread attention, leading to the emergence of LLMs with ultra-context windows. Meanwhile, benchmarks for evaluating long-context language models are gradually catching up. However, existing benchmarks employ irrelevant noise texts to artificially extend the length of test cases, diverging from the real-world scenarios of long-context applications. To bridge this gap, we propose a novel long-context benchmark, Loong, aligning with realistic scenarios through extended multi-document question answering (QA). Unlike typical document QA, in Loong{'}s test cases, each document is relevant to the final answer, ignoring any document will lead to the failure of the answer. Furthermore, Loong introduces four types of tasks with a range of context lengths: Spotlight Locating, Comparison, Clustering, and Chain of Reasoning, to facilitate a more realistic and comprehensive evaluation of long-context understanding. Extensive experiments indicate that existing long-context language models still exhibit considerable potential for enhancement. Retrieval augmented generation (RAG) achieves poor performance, demonstrating that Loong can reliably assess the model{'}s long-context modeling capabilities.
[ "Wang, Minzheng", "Chen, Longze", "Cheng, Fu", "Liao, Shengyi", "Zhang, Xinghua", "Wu, Bingli", "Yu, Haiyang", "Xu, Nan", "Zhang, Lei", "Luo, Run", "Li, Yunshui", "Yang, Min", "Huang, Fei", "Li, Yongbin" ]
Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA
emnlp-main.322
Oral
2406.17419
[ "https://github.com/mozerwang/loong" ]
https://huggingface.co/papers/2406.17419
12
16
1
14
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.323.bib
https://aclanthology.org/2024.emnlp-main.323/
@inproceedings{swain-etal-2024-mitigating, title = "On Mitigating Performance Disparities in Multilingual Speech Recognition", author = "Swain, Monorama and Zee, Anna Katrine Van and S{\o}gaard, Anders", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.323", pages = "5647--5655", abstract = "How far have we come in mitigating performance disparities across genders in multilingual speech recognition? We compare the impact on gender disparity of different fine-tuning algorithms for automated speech recognition across model sizes, languages and gender. We look at both performance-focused and fairness-promoting algorithms. Across languages, we see slightly better performance for female speakers for larger models regardless of the fine-tuning algorithm. The best trade-off between performance and parity is found using adapter fusion. Fairness-promoting fine-tuning algorithms (Group-DRO and Spectral Decoupling) hurt performance compared to adapter fusion with only slightly better performance parity. LoRA increases disparities slightly. Fairness-mitigating fine-tuning techniques led to slightly higher variance in performance across languages, with the exception of adapter fusion.", }
How far have we come in mitigating performance disparities across genders in multilingual speech recognition? We compare the impact on gender disparity of different fine-tuning algorithms for automated speech recognition across model sizes, languages and gender. We look at both performance-focused and fairness-promoting algorithms. Across languages, we see slightly better performance for female speakers for larger models regardless of the fine-tuning algorithm. The best trade-off between performance and parity is found using adapter fusion. Fairness-promoting fine-tuning algorithms (Group-DRO and Spectral Decoupling) hurt performance compared to adapter fusion with only slightly better performance parity. LoRA increases disparities slightly. Fairness-mitigating fine-tuning techniques led to slightly higher variance in performance across languages, with the exception of adapter fusion.
[ "Swain, Monorama", "Zee, Anna Katrine Van", "S{\\o}gaard, Anders" ]
On Mitigating Performance Disparities in Multilingual Speech Recognition
emnlp-main.323
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.324.bib
https://aclanthology.org/2024.emnlp-main.324/
@inproceedings{meisenbacher-matthes-2024-thinking, title = "Thinking Outside of the Differential Privacy Box: A Case Study in Text Privatization with Language Model Prompting", author = "Meisenbacher, Stephen and Matthes, Florian", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.324", pages = "5656--5665", abstract = "The field of privacy-preserving Natural Language Processing has risen in popularity, particularly at a time when concerns about privacy grow with the proliferation of large language models. One solution consistently appearing in recent literature has been the integration of Differential Privacy (DP) into NLP techniques. In this paper, we take these approaches into critical view, discussing the restrictions that DP integration imposes, as well as bring to light the challenges that such restrictions entail. To accomplish this, we focus on **DP-Prompt**, a recent method for text privatization leveraging language models to rewrite texts. In particular, we explore this rewriting task in multiple scenarios, both with DP and without DP. To drive the discussion on the merits of DP in NLP, we conduct empirical utility and privacy experiments. Our results demonstrate the need for more discussion on the usability of DP in NLP and its benefits over non-DP approaches.", }
The field of privacy-preserving Natural Language Processing has risen in popularity, particularly at a time when concerns about privacy grow with the proliferation of large language models. One solution consistently appearing in recent literature has been the integration of Differential Privacy (DP) into NLP techniques. In this paper, we take these approaches into critical view, discussing the restrictions that DP integration imposes, as well as bring to light the challenges that such restrictions entail. To accomplish this, we focus on **DP-Prompt**, a recent method for text privatization leveraging language models to rewrite texts. In particular, we explore this rewriting task in multiple scenarios, both with DP and without DP. To drive the discussion on the merits of DP in NLP, we conduct empirical utility and privacy experiments. Our results demonstrate the need for more discussion on the usability of DP in NLP and its benefits over non-DP approaches.
[ "Meisenbacher, Stephen", "Matthes, Florian" ]
Thinking Outside of the Differential Privacy Box: A Case Study in Text Privatization with Language Model Prompting
emnlp-main.324
Poster
2410.00751
[ "https://github.com/sjmeis/DPNONDP" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.325.bib
https://aclanthology.org/2024.emnlp-main.325/
@inproceedings{lin-etal-2024-preserve, title = "To Preserve or To Compress: An In-Depth Study of Connector Selection in Multimodal Large Language Models", author = "Lin, Junyan and Chen, Haoran and Zhu, Dawei and Shen, Xiaoyu", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.325", pages = "5666--5680", abstract = "In recent years, multimodal large language models (MLLMs) have attracted widespread attention from both industry and academia. Based on the integration position, MLLMs can be categorized into external and internal fusion architectures, with the former being more predominant. However, there remains considerable debate on how to construct the optimal external fusion MLLM architecture, especially regarding the performance of different connectors on tasks with varying granularities. This paper systematically investigates the impact of connectors on MLLM performance. Specifically, we classify connectors into feature-preserving and feature-compressing types. Utilizing a unified classification standard, we categorize sub-tasks from three comprehensive benchmarks, MMBench, MME, and SEED-Bench, into three task types: coarse-grained perception, fine-grained perception, and reasoning, and evaluate the performance from this perspective. Our findings reveal significant performance differences between different types of connectors across various tasks, offering essential guidance for MLLM architecture design and advancing the understanding of MLLM architecture optimization.", }
In recent years, multimodal large language models (MLLMs) have attracted widespread attention from both industry and academia. Based on the integration position, MLLMs can be categorized into external and internal fusion architectures, with the former being more predominant. However, there remains considerable debate on how to construct the optimal external fusion MLLM architecture, especially regarding the performance of different connectors on tasks with varying granularities. This paper systematically investigates the impact of connectors on MLLM performance. Specifically, we classify connectors into feature-preserving and feature-compressing types. Utilizing a unified classification standard, we categorize sub-tasks from three comprehensive benchmarks, MMBench, MME, and SEED-Bench, into three task types: coarse-grained perception, fine-grained perception, and reasoning, and evaluate the performance from this perspective. Our findings reveal significant performance differences between different types of connectors across various tasks, offering essential guidance for MLLM architecture design and advancing the understanding of MLLM architecture optimization.
[ "Lin, Junyan", "Chen, Haoran", "Zhu, Dawei", "Shen, Xiaoyu" ]
To Preserve or To Compress: An In-Depth Study of Connector Selection in Multimodal Large Language Models
emnlp-main.325
Poster
2410.06765
[ "https://github.com/eit-nlp/connector-selection-for-mllm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.326.bib
https://aclanthology.org/2024.emnlp-main.326/
@inproceedings{ploeger-etal-2024-typological, title = "What is {''}Typological Diversity{''} in {NLP}?", author = "Ploeger, Esther and Poelman, Wessel and de Lhoneux, Miryam and Bjerva, Johannes", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.326", pages = "5681--5700", abstract = "The NLP research community has devoted increased attention to languages beyond English, resulting in considerable improvements for multilingual NLP. However, these improvements only apply to a small subset of the world{'}s languages. An increasing number of papers aspires to enhance generalizable multilingual performance across languages. To this end, linguistic typology is commonly used to motivate language selection, on the basis that a broad typological sample ought to imply generalization across a broad range of languages. These selections are often described as being typologically diverse. In this meta-analysis, we systematically investigate NLP research that includes claims regarding typological diversity. We find there are no set definitions or criteria for such claims. We introduce metrics to approximate the diversity of resulting language samples along several axes and find that the results vary considerably across papers. Crucially, we show that skewed language selection can lead to overestimated multilingual performance. We recommend future work to include an operationalization of typological diversity that empirically justifies the diversity of language samples. To help facilitate this, we release the code for our diversity measures.", }
The NLP research community has devoted increased attention to languages beyond English, resulting in considerable improvements for multilingual NLP. However, these improvements only apply to a small subset of the world{'}s languages. An increasing number of papers aspires to enhance generalizable multilingual performance across languages. To this end, linguistic typology is commonly used to motivate language selection, on the basis that a broad typological sample ought to imply generalization across a broad range of languages. These selections are often described as being typologically diverse. In this meta-analysis, we systematically investigate NLP research that includes claims regarding typological diversity. We find there are no set definitions or criteria for such claims. We introduce metrics to approximate the diversity of resulting language samples along several axes and find that the results vary considerably across papers. Crucially, we show that skewed language selection can lead to overestimated multilingual performance. We recommend future work to include an operationalization of typological diversity that empirically justifies the diversity of language samples. To help facilitate this, we release the code for our diversity measures.
[ "Ploeger, Esther", "Poelman, Wessel", "de Lhoneux, Miryam", "Bjerva, Johannes" ]
What is “Typological Diversity” in NLP?
emnlp-main.326
Poster
[ "https://github.com/wpoelman/typ-div-survey" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.327.bib
https://aclanthology.org/2024.emnlp-main.327/
@inproceedings{guo-etal-2024-computational, title = "The Computational Anatomy of Humility: Modeling Intellectual Humility in Online Public Discourse", author = "Guo, Xiaobo and Potnis, Neil and Yu, Melody and Gillani, Nabeel and Vosoughi, Soroush", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.327", pages = "5701--5723", abstract = "The ability for individuals to constructively engage with one another across lines of difference is a critical feature of a healthy pluralistic society. This is also true in online discussion spaces like social media platforms. To date, much social media research has focused on preventing ills{---}like political polarization and the spread of misinformation. While this is important, enhancing the quality of online public discourse requires not just reducing ills, but also, promoting foundational human virtues. In this study, we focus on one particular virtue: {``}intellectual humility{''} (IH), or acknowledging the potential limitations in one{'}s own beliefs. Specifically, we explore the development of computational methods for measuring IH at scale. We manually curate and validate an IH codebook on 350 posts about religion drawn from subreddits and use them to develop LLM-based models for automating this measurement. Our best model achieves a Macro-F1 score of 0.64 across labels (and 0.70 when predicting IH/IA/Neutral at the coarse level), higher than an expected naive baseline of 0.51 (0.32 for IH/IA/Neutral) but lower than a human annotator-informed upper bound of 0.85 (0.83 for IH/IA/Neutral). Our results both highlight the challenging nature of detecting IH online{---}opening the door to new directions in NLP research{---}and also lay a foundation for computational social science researchers interested in analyzing and fostering more IH in online public discourse.", }
The ability for individuals to constructively engage with one another across lines of difference is a critical feature of a healthy pluralistic society. This is also true in online discussion spaces like social media platforms. To date, much social media research has focused on preventing ills{---}like political polarization and the spread of misinformation. While this is important, enhancing the quality of online public discourse requires not just reducing ills, but also, promoting foundational human virtues. In this study, we focus on one particular virtue: {``}intellectual humility{''} (IH), or acknowledging the potential limitations in one{'}s own beliefs. Specifically, we explore the development of computational methods for measuring IH at scale. We manually curate and validate an IH codebook on 350 posts about religion drawn from subreddits and use them to develop LLM-based models for automating this measurement. Our best model achieves a Macro-F1 score of 0.64 across labels (and 0.70 when predicting IH/IA/Neutral at the coarse level), higher than an expected naive baseline of 0.51 (0.32 for IH/IA/Neutral) but lower than a human annotator-informed upper bound of 0.85 (0.83 for IH/IA/Neutral). Our results both highlight the challenging nature of detecting IH online{---}opening the door to new directions in NLP research{---}and also lay a foundation for computational social science researchers interested in analyzing and fostering more IH in online public discourse.
[ "Guo, Xiaobo", "Potnis, Neil", "Yu, Melody", "Gillani, Nabeel", "Vosoughi, Soroush" ]
The Computational Anatomy of Humility: Modeling Intellectual Humility in Online Public Discourse
emnlp-main.327
Poster
2410.15182
[ "https://github.com/xiaobo-guo/The-Computational-Anatomy-of-Humility-Modeling-Intellectual-Humility-in-Online-Public-Discourse" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.328.bib
https://aclanthology.org/2024.emnlp-main.328/
@inproceedings{shopov-gerdjikov-2024-consistent, title = "Consistent Bidirectional Language Modelling: Expressive Power and Representational Conciseness", author = "Shopov, Georgi and Gerdjikov, Stefan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.328", pages = "5724--5768", abstract = "The inability to utilise future contexts and the pre-determined left-to-right generation order are major limitations of unidirectional language models. Bidirectionality has been introduced to address those deficiencies. However, a crucial shortcoming of bidirectional language models is the potential inconsistency of their conditional distributions. This fundamental flaw greatly diminishes their applicability and hinders their capability of tractable sampling and likelihood computation. In this work, we introduce a class of bidirectional language models, called latent language models, that are consistent by definition and can be efficiently used both for generation and scoring of sequences. We define latent language models based on the well-understood formalism of bisequential decompositions from automata theory. This formal correspondence allows us to precisely charaterise the abilities and limitations of a subclass of latent language models, called rational language models. As a result, we obtain that latent language models are exponentially more concise and significantly more expressive than unidirectional language models.", }
The inability to utilise future contexts and the pre-determined left-to-right generation order are major limitations of unidirectional language models. Bidirectionality has been introduced to address those deficiencies. However, a crucial shortcoming of bidirectional language models is the potential inconsistency of their conditional distributions. This fundamental flaw greatly diminishes their applicability and hinders their capability of tractable sampling and likelihood computation. In this work, we introduce a class of bidirectional language models, called latent language models, that are consistent by definition and can be efficiently used both for generation and scoring of sequences. We define latent language models based on the well-understood formalism of bisequential decompositions from automata theory. This formal correspondence allows us to precisely characterise the abilities and limitations of a subclass of latent language models, called rational language models. As a result, we obtain that latent language models are exponentially more concise and significantly more expressive than unidirectional language models.
[ "Shopov, Georgi", "Gerdjikov, Stefan" ]
Consistent Bidirectional Language Modelling: Expressive Power and Representational Conciseness
emnlp-main.328
Oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.329.bib
https://aclanthology.org/2024.emnlp-main.329/
@inproceedings{nayak-etal-2024-benchmarking, title = "Benchmarking Vision Language Models for Cultural Understanding", author = "Nayak, Shravan and Jain, Kanishk and Awal, Rabiul and Reddy, Siva and Steenkiste, Sjoerd Van and Hendricks, Lisa Anne and Stanczak, Karolina and Agrawal, Aishwarya", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.329", pages = "5769--5790", abstract = "Foundation models and vision-language pre-training have notably advanced Vision Language Models (VLMs), enabling multimodal processing of visual and linguistic data. However, their performance has been typically assessed on general scene understanding - recognizing objects, attributes, and actions - rather than cultural comprehension. This study introduces CulturalVQA, a visual question-answering benchmark aimed at assessing VLM{'}s geo-diverse cultural understanding. We curate a diverse collection of 2,378 image-question pairs with 1-5 answers per question representing cultures from 11 countries across 5 continents. The questions probe understanding of various facets of culture such as clothing, food, drinks, rituals, and traditions. Benchmarking VLMs on CulturalVQA, including GPT-4V and Gemini, reveals disparity in their level of cultural understanding across regions, with strong cultural understanding capabilities for North America while significantly weaker capabilities for Africa. We observe disparity in their performance across cultural facets too, with clothing, rituals, and traditions seeing higher performances than food and drink. These disparities help us identify areas where VLMs lack cultural understanding and demonstrate the potential of CulturalVQA as a comprehensive evaluation set for gauging VLM progress in understanding diverse cultures.", }
Foundation models and vision-language pre-training have notably advanced Vision Language Models (VLMs), enabling multimodal processing of visual and linguistic data. However, their performance has been typically assessed on general scene understanding - recognizing objects, attributes, and actions - rather than cultural comprehension. This study introduces CulturalVQA, a visual question-answering benchmark aimed at assessing VLM{'}s geo-diverse cultural understanding. We curate a diverse collection of 2,378 image-question pairs with 1-5 answers per question representing cultures from 11 countries across 5 continents. The questions probe understanding of various facets of culture such as clothing, food, drinks, rituals, and traditions. Benchmarking VLMs on CulturalVQA, including GPT-4V and Gemini, reveals disparity in their level of cultural understanding across regions, with strong cultural understanding capabilities for North America while significantly weaker capabilities for Africa. We observe disparity in their performance across cultural facets too, with clothing, rituals, and traditions seeing higher performances than food and drink. These disparities help us identify areas where VLMs lack cultural understanding and demonstrate the potential of CulturalVQA as a comprehensive evaluation set for gauging VLM progress in understanding diverse cultures.
[ "Nayak, Shravan", "Jain, Kanishk", "Awal, Rabiul", "Reddy, Siva", "Steenkiste, Sjoerd Van", "Hendricks, Lisa Anne", "Stanczak, Karolina", "Agrawal, Aishwarya" ]
Benchmarking Vision Language Models for Cultural Understanding
emnlp-main.329
Oral
2407.10920
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.330.bib
https://aclanthology.org/2024.emnlp-main.330/
@inproceedings{iakovenko-hain-2024-methods, title = "Methods of Automatic Matrix Language Determination for Code-Switched Speech", author = "Iakovenko, Olga and Hain, Thomas", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.330", pages = "5791--5800", abstract = "Code-switching (CS) is the process of speakers interchanging between two or more languages which in the modern world becomes increasingly common. In order to better describe CS speech the Matrix Language Frame (MLF) theory introduces the concept of a Matrix Language, which is the language that provides the grammatical structure for a CS utterance. In this work the MLF theory was used to develop systems for Matrix Language Identity (MLID) determination. The MLID of English/Mandarin and English/Spanish CS text and speech was compared to acoustic language identity (LID), which is a typical way to identify a language in monolingual utterances. MLID predictors from audio show higher correlation with the textual principles than LID in all cases while also outperforming LID in an MLID recognition task based on F1 macro (60{\%}) and correlation score (0.38). This novel approach has identified that non-English languages (Mandarin and Spanish) are preferred over the English language as the ML contrary to the monolingual choice of LID.", }
Code-switching (CS) is the process of speakers interchanging between two or more languages which in the modern world becomes increasingly common. In order to better describe CS speech the Matrix Language Frame (MLF) theory introduces the concept of a Matrix Language, which is the language that provides the grammatical structure for a CS utterance. In this work the MLF theory was used to develop systems for Matrix Language Identity (MLID) determination. The MLID of English/Mandarin and English/Spanish CS text and speech was compared to acoustic language identity (LID), which is a typical way to identify a language in monolingual utterances. MLID predictors from audio show higher correlation with the textual principles than LID in all cases while also outperforming LID in an MLID recognition task based on F1 macro (60{\%}) and correlation score (0.38). This novel approach has identified that non-English languages (Mandarin and Spanish) are preferred over the English language as the ML contrary to the monolingual choice of LID.
[ "Iakovenko, Olga", "Hain, Thomas" ]
Methods of Automatic Matrix Language Determination for Code-Switched Speech
emnlp-main.330
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.331.bib
https://aclanthology.org/2024.emnlp-main.331/
@inproceedings{lee-etal-2024-analyzing, title = "Analyzing Key Factors Influencing Emotion Prediction Performance of {VLLM}s in Conversational Contexts", author = "Lee, Jaewook and Jang, Yeajin and Kim, Hongjin and Lee, Woojin and Kim, Harksoo", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.331", pages = "5801--5816", abstract = "Emotional intelligence (EI) in artificial intelligence (AI), which refers to the ability of an AI to understand and respond appropriately to human emotions, has emerged as a crucial research topic. Recent studies have shown that large language models (LLMs) and vision large language models (VLLMs) possess EI and the ability to understand emotional stimuli in the form of text and images, respectively. However, factors influencing the emotion prediction performance of VLLMs in real-world conversational contexts have not been sufficiently explored. This study aims to analyze the key elements affecting the emotion prediction performance of VLLMs in conversational contexts systematically. To achieve this, we reconstructed the MELD dataset, which is based on the popular TV series Friends, and conducted experiments through three sub-tasks: overall emotion tone prediction, character emotion prediction, and contextually appropriate emotion expression selection. We evaluated the performance differences based on various model architectures (e.g., image encoders, modality alignment, and LLMs) and image scopes (e.g., entire scene, person, and facial expression). In addition, we investigated the impact of providing persona information on the emotion prediction performance of the models and analyzed how personality traits and speaking styles influenced the emotion prediction process. We conducted an in-depth analysis of the impact of various other factors, such as gender and regional biases, on the emotion prediction performance of VLLMs. The results revealed that these factors significantly influenced the model performance.", }
Emotional intelligence (EI) in artificial intelligence (AI), which refers to the ability of an AI to understand and respond appropriately to human emotions, has emerged as a crucial research topic. Recent studies have shown that large language models (LLMs) and vision large language models (VLLMs) possess EI and the ability to understand emotional stimuli in the form of text and images, respectively. However, factors influencing the emotion prediction performance of VLLMs in real-world conversational contexts have not been sufficiently explored. This study aims to analyze the key elements affecting the emotion prediction performance of VLLMs in conversational contexts systematically. To achieve this, we reconstructed the MELD dataset, which is based on the popular TV series Friends, and conducted experiments through three sub-tasks: overall emotion tone prediction, character emotion prediction, and contextually appropriate emotion expression selection. We evaluated the performance differences based on various model architectures (e.g., image encoders, modality alignment, and LLMs) and image scopes (e.g., entire scene, person, and facial expression). In addition, we investigated the impact of providing persona information on the emotion prediction performance of the models and analyzed how personality traits and speaking styles influenced the emotion prediction process. We conducted an in-depth analysis of the impact of various other factors, such as gender and regional biases, on the emotion prediction performance of VLLMs. The results revealed that these factors significantly influenced the model performance.
[ "Lee, Jaewook", "Jang, Yeajin", "Kim, Hongjin", "Lee, Woojin", "Kim, Harksoo" ]
Analyzing Key Factors Influencing Emotion Prediction Performance of VLLMs in Conversational Contexts
emnlp-main.331
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.332.bib
https://aclanthology.org/2024.emnlp-main.332/
@inproceedings{huang-etal-2024-context, title = "Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models", author = "Huang, Jerry and Parthasarathi, Prasanna and Rezagholizadeh, Mehdi and Chandar, Sarath", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.332", pages = "5817--5830", abstract = "Despite their widespread adoption, large language models (LLMs) remain prohibitive to use under resource constraints, with their ever growing sizes only increasing the barrier for use. One particular issue stems from the high latency associated with auto-regressive generation in LLMs, rendering the largest LLMs difficult to use without advanced computing infrastructure. Assisted decoding, where a smaller draft model guides a larger expert model{'}s generation, has helped alleviate this concern, but remains dependent on alignment between the two models. Thus if the draft model is insufficiently capable on some domain of interest relative to the target model, performance can degrade. Alternatively, one can leverage multiple draft models to better cover the expertise of the target, but when multiple black-box draft models are available, selecting an assistant without details about its construction can be difficult. To better understand this decision making problem, we observe it as a contextual bandit, where a policy must choose a draft model based on a context. We show that even without prior knowledge of the draft models, creating an offline dataset from only outputs of independent draft/target models and training a policy over the alignment of these outputs can accelerate performance on multiple domains as long as an individual draft model is effective. We observe these results hold on various settings with multiple assisted decoding candidates, highlighting its flexibility and the advantageous role that such decision making can play.", }
Despite their widespread adoption, large language models (LLMs) remain prohibitive to use under resource constraints, with their ever growing sizes only increasing the barrier for use. One particular issue stems from the high latency associated with auto-regressive generation in LLMs, rendering the largest LLMs difficult to use without advanced computing infrastructure. Assisted decoding, where a smaller draft model guides a larger expert model{'}s generation, has helped alleviate this concern, but remains dependent on alignment between the two models. Thus if the draft model is insufficiently capable on some domain of interest relative to the target model, performance can degrade. Alternatively, one can leverage multiple draft models to better cover the expertise of the target, but when multiple black-box draft models are available, selecting an assistant without details about its construction can be difficult. To better understand this decision making problem, we observe it as a contextual bandit, where a policy must choose a draft model based on a context. We show that even without prior knowledge of the draft models, creating an offline dataset from only outputs of independent draft/target models and training a policy over the alignment of these outputs can accelerate performance on multiple domains as long as an individual draft model is effective. We observe these results hold on various settings with multiple assisted decoding candidates, highlighting its flexibility and the advantageous role that such decision making can play.
[ "Huang, Jerry", "Parthasarathi, Prasanna", "Rezagholizadeh, Mehdi", "Ch", "ar, Sarath" ]
Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models
emnlp-main.332
Poster
2408.08470
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.333.bib
https://aclanthology.org/2024.emnlp-main.333/
@inproceedings{feng-etal-2024-teaching-small, title = "Teaching Small Language Models Reasoning through Counterfactual Distillation", author = "Feng, Tao and Li, Yicheng and Chenglin, Li and Chen, Hao and Yu, Fei and Zhang, Yin", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.333", pages = "5831--5842", abstract = "With the rise of large language models (LLMs), many studies are interested in transferring the reasoning capabilities of LLMs to small language models (SLMs). Previous distillation methods usually utilize the capabilities of LLMs to generate chain-of-thought (CoT) samples and teach SLMs via fine-tuning. However, such a standard distillation approach performs poorly when applied to out-of-distribution (OOD) examples, and the diversity of the generated CoT samples is insufficient. In this work, we propose a novel counterfactual distillation framework. Firstly, we leverage LLMs to automatically generate high-quality counterfactual data. Given an input text example, our method generates a counterfactual example that is very similar to the original input, but its task label has been changed to the desired one. Then, we utilize multi-view CoT to enhance the diversity of reasoning samples. Experiments on four NLP benchmarks show that our approach enhances the reasoning capabilities of SLMs and is more robust to OOD data. We also conduct extensive ablations and sample studies to understand the reasoning capabilities of SLMs.", }
With the rise of large language models (LLMs), many studies are interested in transferring the reasoning capabilities of LLMs to small language models (SLMs). Previous distillation methods usually utilize the capabilities of LLMs to generate chain-of-thought (CoT) samples and teach SLMs via fine-tuning. However, such a standard distillation approach performs poorly when applied to out-of-distribution (OOD) examples, and the diversity of the generated CoT samples is insufficient. In this work, we propose a novel counterfactual distillation framework. Firstly, we leverage LLMs to automatically generate high-quality counterfactual data. Given an input text example, our method generates a counterfactual example that is very similar to the original input, but its task label has been changed to the desired one. Then, we utilize multi-view CoT to enhance the diversity of reasoning samples. Experiments on four NLP benchmarks show that our approach enhances the reasoning capabilities of SLMs and is more robust to OOD data. We also conduct extensive ablations and sample studies to understand the reasoning capabilities of SLMs.
[ "Feng, Tao", "Li, Yicheng", "Chenglin, Li", "Chen, Hao", "Yu, Fei", "Zhang, Yin" ]
Teaching Small Language Models Reasoning through Counterfactual Distillation
emnlp-main.333
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.334.bib
https://aclanthology.org/2024.emnlp-main.334/
@inproceedings{doshi-etal-2024-pretraining, title = "Pretraining Language Models Using Translationese", author = "Doshi, Meet and Dabre, Raj and Bhattacharyya, Pushpak", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.334", pages = "5843--5862", abstract = "In this paper, we explore the utility of Translationese as synthetic data created using machine translation for pre-training language models (LMs) for low-resource languages (LRLs). Our simple methodology consists of translating large amounts of web-crawled monolingual documents (clean) into the LRLs, followed by filtering the translated documents using tiny LMs trained on small but clean LRL data. Taking the case of Indian languages, we pre-train LMs from scratch with 28M and 85M parameters, and then fine-tune them for 5 downstream natural language understanding (NLU) and 4 generative (NLG) tasks. We observe that pre-training on filtered synthetic data leads to relative performance drops of only 0.87{\%} for NLU and 2.35{\%} for NLG, compared to pre-training on clean data, and this gap further diminishes upon the inclusion of a small amount of clean data. We also study the impact of synthetic data filtering and the choice of source language for synthetic data generation. Furthermore, evaluating continually pre-trained larger models like Gemma-2B and Llama-3-8B in few-shot settings, we observe that using synthetic data is competitive with using clean data. Our findings suggest that synthetic data shows promise for bridging the pre-training gap between English and LRLs.", }
In this paper, we explore the utility of Translationese as synthetic data created using machine translation for pre-training language models (LMs) for low-resource languages (LRLs). Our simple methodology consists of translating large amounts of web-crawled monolingual documents (clean) into the LRLs, followed by filtering the translated documents using tiny LMs trained on small but clean LRL data. Taking the case of Indian languages, we pre-train LMs from scratch with 28M and 85M parameters, and then fine-tune them for 5 downstream natural language understanding (NLU) and 4 generative (NLG) tasks. We observe that pre-training on filtered synthetic data leads to relative performance drops of only 0.87{\%} for NLU and 2.35{\%} for NLG, compared to pre-training on clean data, and this gap further diminishes upon the inclusion of a small amount of clean data. We also study the impact of synthetic data filtering and the choice of source language for synthetic data generation. Furthermore, evaluating continually pre-trained larger models like Gemma-2B and Llama-3-8B in few-shot settings, we observe that using synthetic data is competitive with using clean data. Our findings suggest that synthetic data shows promise for bridging the pre-training gap between English and LRLs.
[ "Doshi, Meet", "Dabre, Raj", "Bhattacharyya, Pushpak" ]
Pretraining Language Models Using Translationese
emnlp-main.334
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.335.bib
https://aclanthology.org/2024.emnlp-main.335/
@inproceedings{buettner-kovashka-2024-quantifying, title = "Quantifying the Gaps Between Translation and Native Perception in Training for Multimodal, Multilingual Retrieval", author = "Buettner, Kyle and Kovashka, Adriana", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.335", pages = "5863--5870", abstract = "There is a scarcity of multilingual vision-language models that properly account for the perceptual differences that are reflected in image captions across languages and cultures. In this work, through a multimodal, multilingual retrieval case study, we quantify the existing lack of model flexibility. We empirically show performance gaps between training on captions that come from native German perception and captions that have been either machine-translated or human-translated from English into German. To address these gaps, we further propose and evaluate caption augmentation strategies. While we achieve mean recall improvements (+1.3), gaps still remain, indicating an open area of future work for the community.", }
There is a scarcity of multilingual vision-language models that properly account for the perceptual differences that are reflected in image captions across languages and cultures. In this work, through a multimodal, multilingual retrieval case study, we quantify the existing lack of model flexibility. We empirically show performance gaps between training on captions that come from native German perception and captions that have been either machine-translated or human-translated from English into German. To address these gaps, we further propose and evaluate caption augmentation strategies. While we achieve mean recall improvements (+1.3), gaps still remain, indicating an open area of future work for the community.
[ "Buettner, Kyle", "Kovashka, Adriana" ]
Quantifying the Gaps Between Translation and Native Perception in Training for Multimodal, Multilingual Retrieval
emnlp-main.335
Poster
2410.02027
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.336.bib
https://aclanthology.org/2024.emnlp-main.336/
@inproceedings{lu-etal-2024-mta4dpr, title = "{MTA}4{DPR}: Multi-Teaching-Assistants Based Iterative Knowledge Distillation for Dense Passage Retrieval", author = "Lu, Qixi and Xun, Endong and Tang, Gongbo", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.336", pages = "5871--5883", abstract = "Although Dense Passage Retrieval (DPR) models have achieved significantly enhanced performance, their widespread application is still hindered by the demanding inference efficiency and high deployment costs. Knowledge distillation is an efficient method to compress models, which transfers knowledge from strong teacher models to weak student models. Previous studies have proved the effectiveness of knowledge distillation in DPR. However, there often remains a significant performance gap between the teacher and the distilled student. To narrow this performance gap, we propose MTA4DPR, a Multi-Teaching-Assistants based iterative knowledge distillation method for Dense Passage Retrieval, which transfers knowledge from the teacher to the student with the help of multiple assistants in an iterative manner; with each iteration, the student learns from more performant assistants and more difficult data. The experimental results show that our 66M student model achieves the state-of-the-art performance among models with same parameters on multiple datasets, and is very competitive when compared with larger, even LLM-based, DPR models.", }
Although Dense Passage Retrieval (DPR) models have achieved significantly enhanced performance, their widespread application is still hindered by demanding inference efficiency requirements and high deployment costs. Knowledge distillation is an efficient method to compress models, transferring knowledge from strong teacher models to weak student models. Previous studies have proved the effectiveness of knowledge distillation in DPR. However, there often remains a significant performance gap between the teacher and the distilled student. To narrow this performance gap, we propose MTA4DPR, a Multi-Teaching-Assistants based iterative knowledge distillation method for Dense Passage Retrieval, which transfers knowledge from the teacher to the student with the help of multiple assistants in an iterative manner; with each iteration, the student learns from more performant assistants and more difficult data. The experimental results show that our 66M student model achieves state-of-the-art performance among models with the same number of parameters on multiple datasets, and is very competitive when compared with larger, even LLM-based, DPR models.
[ "Lu, Qixi", "Xun, Endong", "Tang, Gongbo" ]
MTA4DPR: Multi-Teaching-Assistants Based Iterative Knowledge Distillation for Dense Passage Retrieval
emnlp-main.336
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.337.bib
https://aclanthology.org/2024.emnlp-main.337/
@inproceedings{kostikova-etal-2024-fine, title = "Fine-Grained Detection of Solidarity for Women and Migrants in 155 Years of {G}erman Parliamentary Debates", author = {Kostikova, Aida and Beese, Dominik and Paassen, Benjamin and P{\"u}tz, Ole and Wiedemann, Gregor and Eger, Steffen}, editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.337", pages = "5884--5907", abstract = "Solidarity is a crucial concept to understand social relations in societies. In this study, we investigate the frequency of (anti-)solidarity towards women and migrants in German parliamentary debates between 1867 and 2022. Using 2,864 manually annotated text snippets, we evaluate large language models (LLMs) like Llama 3, GPT-3.5, and GPT-4. We find that GPT-4 outperforms other models, approaching human annotation accuracy. Using GPT-4, we automatically annotate 18,300 further instances and find that solidarity with migrants outweighs anti-solidarity but that frequencies and solidarity types shift over time. Most importantly, group-based notions of (anti-)solidarity fade in favor of compassionate solidarity, focusing on the vulnerability of migrant groups, and exchange-based anti-solidarity, focusing on the lack of (economic) contribution. This study highlights the interplay of historical events, socio-economic needs, and political ideologies in shaping migration discourse and social cohesion.", }
Solidarity is a crucial concept to understand social relations in societies. In this study, we investigate the frequency of (anti-)solidarity towards women and migrants in German parliamentary debates between 1867 and 2022. Using 2,864 manually annotated text snippets, we evaluate large language models (LLMs) like Llama 3, GPT-3.5, and GPT-4. We find that GPT-4 outperforms other models, approaching human annotation accuracy. Using GPT-4, we automatically annotate 18,300 further instances and find that solidarity with migrants outweighs anti-solidarity but that frequencies and solidarity types shift over time. Most importantly, group-based notions of (anti-)solidarity fade in favor of compassionate solidarity, focusing on the vulnerability of migrant groups, and exchange-based anti-solidarity, focusing on the lack of (economic) contribution. This study highlights the interplay of historical events, socio-economic needs, and political ideologies in shaping migration discourse and social cohesion.
[ "Kostikova, Aida", "Beese, Dominik", "Paassen, Benjamin", "P{\\\"u}tz, Ole", "Wiedemann, Gregor", "Eger, Steffen" ]
Fine-Grained Detection of Solidarity for Women and Migrants in 155 Years of German Parliamentary Debates
emnlp-main.337
Oral
2210.04359
[ "https://github.com/dominikbeese/deuparl-v2" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.338.bib
https://aclanthology.org/2024.emnlp-main.338/
@inproceedings{bai-etal-2024-citrus, title = "{CI}tru{S}: Chunked Instruction-aware State Eviction for Long Sequence Modeling", author = "Bai, Yu and Zou, Xiyuan and Huang, Heyan and Chen, Sanxing and Rondeau, Marc-Antoine and Gao, Yang and Cheung, Jackie CK", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.338", pages = "5908--5930", abstract = "Long sequence modeling has gained broad interest as large language models (LLMs) continue to advance. Recent research has identified that a large portion of hidden states within the key-value caches of Transformer models can be discarded (also termed evicted) withoutaffecting the perplexity performance in generating long sequences. However, we show that these methods, despite preserving perplexity performance, often drop information that is important for solving downstream tasks, a problem which we call information neglect. To address this issue, we introduce Chunked Instruction-aware State Eviction (CItruS), a novel modeling technique that integrates the attention preferences useful for a downstream task into the eviction process of hidden states. In addition, we design a method for chunked sequence processing to further improve efficiency. Our training-free method exhibits superior performance on long sequence comprehension and retrieval tasks over several strong baselines under the same memory budget, while preserving language modeling perplexity. The code and data have been released at https://github.com/ybai-nlp/CItruS.", }
Long sequence modeling has gained broad interest as large language models (LLMs) continue to advance. Recent research has identified that a large portion of hidden states within the key-value caches of Transformer models can be discarded (also termed evicted) without affecting the perplexity performance in generating long sequences. However, we show that these methods, despite preserving perplexity performance, often drop information that is important for solving downstream tasks, a problem which we call information neglect. To address this issue, we introduce Chunked Instruction-aware State Eviction (CItruS), a novel modeling technique that integrates the attention preferences useful for a downstream task into the eviction process of hidden states. In addition, we design a method for chunked sequence processing to further improve efficiency. Our training-free method exhibits superior performance on long sequence comprehension and retrieval tasks over several strong baselines under the same memory budget, while preserving language modeling perplexity. The code and data have been released at https://github.com/ybai-nlp/CItruS.
[ "Bai, Yu", "Zou, Xiyuan", "Huang, Heyan", "Chen, Sanxing", "Rondeau, Marc-Antoine", "Gao, Yang", "Cheung, Jackie CK" ]
CItruS: Chunked Instruction-aware State Eviction for Long Sequence Modeling
emnlp-main.338
Poster
2406.12018
[ "https://github.com/ybai-nlp/CItruS" ]
https://huggingface.co/papers/2406.12018
0
0
0
7
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.339.bib
https://aclanthology.org/2024.emnlp-main.339/
@inproceedings{hatzel-biemann-2024-story, title = "Story Embeddings {---} Narrative-Focused Representations of Fictional Stories", author = "Hatzel, Hans Ole and Biemann, Chris", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.339", pages = "5931--5943", abstract = "We present a novel approach to modeling fictional narratives. The proposed model creates embeddings that represent a story such that similar narratives, that is, reformulations of the same story, will result in similar embeddings. We showcase the prowess of our narrative-focused embeddings on various datasets, exhibiting state-of-the-art performance on multiple retrieval tasks. The embeddings also show promising results on a narrative understanding task. Additionally, we perform an annotation-based evaluation to validate that our introduced computational notion of narrative similarity aligns with human perception. The approach can help to explore vast datasets of stories, with potential applications in recommender systems and in the computational analysis of literature.", }
We present a novel approach to modeling fictional narratives. The proposed model creates embeddings that represent a story such that similar narratives, that is, reformulations of the same story, will result in similar embeddings. We showcase the prowess of our narrative-focused embeddings on various datasets, exhibiting state-of-the-art performance on multiple retrieval tasks. The embeddings also show promising results on a narrative understanding task. Additionally, we perform an annotation-based evaluation to validate that our introduced computational notion of narrative similarity aligns with human perception. The approach can help to explore vast datasets of stories, with potential applications in recommender systems and in the computational analysis of literature.
[ "Hatzel, Hans Ole", "Biemann, Chris" ]
Story Embeddings — Narrative-Focused Representations of Fictional Stories
emnlp-main.339
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.340.bib
https://aclanthology.org/2024.emnlp-main.340/
@inproceedings{li-etal-2024-c, title = "{C}-{LLM}: Learn to Check {C}hinese Spelling Errors Character by Character", author = "Li, Kunting and Hu, Yong and He, Liang and Meng, Fandong and Zhou, Jie", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.340", pages = "5944--5957", abstract = "Chinese Spell Checking (CSC) aims to detect and correct spelling errors in sentences. Despite Large Language Models (LLMs) exhibit robust capabilities and are widely applied in various tasks, their performance on CSC is often unsatisfactory. We find that LLMs fail to meet the Chinese character-level constraints of the CSC task, namely equal length and phonetic similarity, leading to a performance bottleneck. Further analysis reveals that this issue stems from the granularity of tokenization, as current mixed character-word tokenization struggles to satisfy these character-level constraints. To address this issue, we propose C-LLM, a Large Language Model-based Chinese Spell Checking method that learns to check errors Character by Character. Character-level tokenization enables the model to learn character-level alignment, effectively mitigating issues related to character-level constraints. Furthermore, CSC is simplified to replication-dominated and substitution-supplemented tasks. Experiments on two CSC benchmarks demonstrate that C-LLM achieves a 2.1{\%} enhancement in general scenarios and a significant 12{\%} improvement in vertical domain scenarios compared to existing methods, establishing state-of-the-art performance.", }
Chinese Spell Checking (CSC) aims to detect and correct spelling errors in sentences. Although Large Language Models (LLMs) exhibit robust capabilities and are widely applied to various tasks, their performance on CSC is often unsatisfactory. We find that LLMs fail to meet the Chinese character-level constraints of the CSC task, namely equal length and phonetic similarity, leading to a performance bottleneck. Further analysis reveals that this issue stems from the granularity of tokenization, as current mixed character-word tokenization struggles to satisfy these character-level constraints. To address this issue, we propose C-LLM, a Large Language Model-based Chinese Spell Checking method that learns to check errors Character by Character. Character-level tokenization enables the model to learn character-level alignment, effectively mitigating issues related to character-level constraints. Furthermore, CSC is simplified to replication-dominated and substitution-supplemented tasks. Experiments on two CSC benchmarks demonstrate that C-LLM achieves a 2.1{\%} enhancement in general scenarios and a significant 12{\%} improvement in vertical domain scenarios compared to existing methods, establishing state-of-the-art performance.
[ "Li, Kunting", "Hu, Yong", "He, Liang", "Meng, F", "ong", "Zhou, Jie" ]
C-LLM: Learn to Check Chinese Spelling Errors Character by Character
emnlp-main.340
Poster
2406.16536
[ "https://github.com/ktlktl/c-llm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.341.bib
https://aclanthology.org/2024.emnlp-main.341/
@inproceedings{zhu-etal-2024-psc, title = "{PSC}: Extending Context Window of Large Language Models via Phase Shift Calibration", author = "Zhu, Wenqiao and Xu, Chao and Wang, Lulu and Wu, Jun", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.341", pages = "5958--5970", abstract = "Rotary Position Embedding (RoPE) is an efficient position encoding approach and is widely utilized in numerous large language models (LLMs). Recently, a lot of methods have been put forward to further expand the context window based on RoPE. The core concept of those methods is to predefine or search for a set of factors to rescale the base frequencies of RoPE. Nevertheless, it is quite a challenge for existing methods to predefine an optimal factor due to the exponential search space. In view of this, we introduce PSC (Phase Shift Calibration), a small module for calibrating the frequencies predefined by existing methods. With the employment of PSC, we demonstrate that many existing methods can be further enhanced, like PI, YaRN, and LongRoPE. We conducted extensive experiments across multiple models and tasks. The results demonstrate that (1) when PSC is enabled, the comparative reductions in perplexity increase as the context window size is varied from 16k, to 32k, and up to 64k. (2) Our approach is broadly applicable and exhibits robustness across a variety of models and tasks.", }
Rotary Position Embedding (RoPE) is an efficient position encoding approach and is widely utilized in numerous large language models (LLMs). Recently, a lot of methods have been put forward to further expand the context window based on RoPE. The core concept of those methods is to predefine or search for a set of factors to rescale the base frequencies of RoPE. Nevertheless, it is quite a challenge for existing methods to predefine an optimal factor due to the exponential search space. In view of this, we introduce PSC (Phase Shift Calibration), a small module for calibrating the frequencies predefined by existing methods. With the employment of PSC, we demonstrate that many existing methods can be further enhanced, like PI, YaRN, and LongRoPE. We conducted extensive experiments across multiple models and tasks. The results demonstrate that (1) when PSC is enabled, the comparative reductions in perplexity increase as the context window size is varied from 16k, to 32k, and up to 64k. (2) Our approach is broadly applicable and exhibits robustness across a variety of models and tasks.
[ "Zhu, Wenqiao", "Xu, Chao", "Wang, Lulu", "Wu, Jun" ]
PSC: Extending Context Window of Large Language Models via Phase Shift Calibration
emnlp-main.341
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.342.bib
https://aclanthology.org/2024.emnlp-main.342/
@inproceedings{lin-etal-2024-video, title = "Video-{LL}a{VA}: Learning United Visual Representation by Alignment Before Projection", author = "Lin, Bin and Ye, Yang and Zhu, Bin and Cui, Jiaxi and Ning, Munan and Jin, Peng and Yuan, Li", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.342", pages = "5971--5984", abstract = "Large Vision-Language Model (LVLM) has enhanced the performance of various downstream tasks in visual-language understanding. Most existing approaches encode images and videos into separate feature spaces, which are then fed as inputs to large language models. However, due to the lack of unified tokenization for images and videos, namely misalignment before projection, it becomes challenging for a Large Language Model (LLM) to learn multi-modal interactions from several poor projection layers.In this work, we unify visual representation into the language feature space to advance the foundational LLM towards a unified LVLM. As a result, we establish a simple but robust LVLM baseline, Video-LLaVA, which learns from a mixed dataset of images and videos, mutually enhancing each other.As a result, Video-LLaVA outperforms Video-ChatGPT by 5.8{\%}, 9.9{\%}, 18.6{\%}, and 10.1{\%} on MSRVTT, MSVD, TGIF, and ActivityNet, respectively. Additionally, our Video-LLaVA also achieves superior performances on a broad range of 9 image benchmarks.Notably, extensive experiments demonstrate that Video-LLaVA mutually benefits images and videos within a unified visual representation, outperforming models designed specifically for images or videos. We aim for this work to provide modest insights into the multi-modal inputs for the LLM.", }
Large Vision-Language Models (LVLMs) have enhanced the performance of various downstream tasks in visual-language understanding. Most existing approaches encode images and videos into separate feature spaces, which are then fed as inputs to large language models. However, due to the lack of unified tokenization for images and videos, namely misalignment before projection, it becomes challenging for a Large Language Model (LLM) to learn multi-modal interactions from several poor projection layers. In this work, we unify visual representation into the language feature space to advance the foundational LLM towards a unified LVLM. As a result, we establish a simple but robust LVLM baseline, Video-LLaVA, which learns from a mixed dataset of images and videos, mutually enhancing each other. Consequently, Video-LLaVA outperforms Video-ChatGPT by 5.8{\%}, 9.9{\%}, 18.6{\%}, and 10.1{\%} on MSRVTT, MSVD, TGIF, and ActivityNet, respectively. Additionally, our Video-LLaVA also achieves superior performance on a broad range of 9 image benchmarks. Notably, extensive experiments demonstrate that Video-LLaVA mutually benefits images and videos within a unified visual representation, outperforming models designed specifically for images or videos. We aim for this work to provide modest insights into the multi-modal inputs for the LLM.
[ "Lin, Bin", "Ye, Yang", "Zhu, Bin", "Cui, Jiaxi", "Ning, Munan", "Jin, Peng", "Yuan, Li" ]
Video-LLaVA: Learning United Visual Representation by Alignment Before Projection
emnlp-main.342
Poster
2311.10122
[ "https://github.com/PKU-YuanGroup/Video-LLaVA" ]
https://huggingface.co/papers/2311.10122
3
26
1
6
[ "LanguageBind/Video-LLaVA-7B", "LanguageBind/Video-LLaVA-7B-hf", "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e", "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e-384", "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e", "LanguageBind/Video-LLaVA-Pretrain-7B", "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e-384", "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e", "pandalla/MBTIGPT_zh_INTP", "pandalla/MBTIGPT_en_INTP", "pandalla/MBTIGPT_en_ENTP", "pandalla/MBTIGPT_zh_ESFP", "LanguageBind/MoE-LLaVA-Qwen-Pretrain", "pandalla/MBTIGPT_en_ENTJ", "pandalla/MBTIGPT_en_ISFP", "LanguageBind/MoE-LLaVA-OpenChat-7B-4e", "ManishThota/Ollama_Video_llama_7B", "pandalla/MBTIGPT_zh_ISFP", "pandalla/MBTIGPT_zh_ESTP", "pandalla/MBTIGPT_zh_ENTJ", "pandalla/MBTIGPT_zh_ESTJ", "pandalla/MBTIGPT_zh_INFJ", "pandalla/MBTIGPT_zh_INTJ", "pandalla/MBTIGPT_zh_ENFP", "pandalla/MBTIGPT_zh_ENTP", "pandalla/MBTIGPT_zh_ISTJ", "pandalla/MBTIGPT_zh_ESFJ", "pandalla/MBTIGPT_zh_ENFJ", "pandalla/MBTIGPT_zh_ISTP", "pandalla/MBTIGPT_zh_ISFJ", "pandalla/MBTIGPT_en_INTJ", "pandalla/MBTIGPT_en_INFP", "pandalla/MBTIGPT_en_ENFP", "pandalla/MBTIGPT_zh_INFP", "LanguageBind/MoE-LLaVA-Phi2-Pretrain", "LanguageBind/MoE-LLaVA-StableLM-Pretrain", "LanguageBind/MoE-LLaVA-Phi2-384-Pretrain", "pandalla/MBTIGPT_en_ESTP", "pandalla/MBTIGPT_en_ISTJ", "pandalla/MBTIGPT_en_ESFP", "pandalla/MBTIGPT_en_ENFJ", "pandalla/MBTIGPT_en_ESFJ", "pandalla/MBTIGPT_en_ISFJ", "pandalla/MBTIGPT_en_ESTJ", "pandalla/MBTIGPT_en_INFJ", "pandalla/MBTIGPT_en_ISTP" ]
[]
[ "LanguageBind/Video-LLaVA", "EvanTHU/MotionLLM", "llava-hf/video-llava", "lyx97/TempCompass", "anonymousatom/AdViduCopy", "anonymousatom/LanguageBind-MoE-LLaVA-StableLM-1.6B-4e", "Omnibus-archive/Video-LLaVA", "YashGb/Video-LLaVA", "XaviXva/Video-LLaVA", "namangupta2001/LanguageBind-Video-LLaVA-7B", "Nymbo/Video-LLaVA", "abhinav-neophyte/LanguageBind-Video-LLaVA-7B", "pivo197878/LanguageBind-Video-LLaVA-7B", "SNUMPR/vlm-rlaif-demo", "jirvin16/TEOChat", "realaer/src" ]
[ "LanguageBind/Video-LLaVA-7B", "LanguageBind/Video-LLaVA-7B-hf", "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e", "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e-384", "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e", "LanguageBind/Video-LLaVA-Pretrain-7B", "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e-384", "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e", "pandalla/MBTIGPT_zh_INTP", "pandalla/MBTIGPT_en_INTP", "pandalla/MBTIGPT_en_ENTP", "pandalla/MBTIGPT_zh_ESFP", "LanguageBind/MoE-LLaVA-Qwen-Pretrain", "pandalla/MBTIGPT_en_ENTJ", "pandalla/MBTIGPT_en_ISFP", "LanguageBind/MoE-LLaVA-OpenChat-7B-4e", "ManishThota/Ollama_Video_llama_7B", "pandalla/MBTIGPT_zh_ISFP", "pandalla/MBTIGPT_zh_ESTP", "pandalla/MBTIGPT_zh_ENTJ", "pandalla/MBTIGPT_zh_ESTJ", "pandalla/MBTIGPT_zh_INFJ", "pandalla/MBTIGPT_zh_INTJ", "pandalla/MBTIGPT_zh_ENFP", "pandalla/MBTIGPT_zh_ENTP", "pandalla/MBTIGPT_zh_ISTJ", "pandalla/MBTIGPT_zh_ESFJ", "pandalla/MBTIGPT_zh_ENFJ", "pandalla/MBTIGPT_zh_ISTP", "pandalla/MBTIGPT_zh_ISFJ", "pandalla/MBTIGPT_en_INTJ", "pandalla/MBTIGPT_en_INFP", "pandalla/MBTIGPT_en_ENFP", "pandalla/MBTIGPT_zh_INFP", "LanguageBind/MoE-LLaVA-Phi2-Pretrain", "LanguageBind/MoE-LLaVA-StableLM-Pretrain", "LanguageBind/MoE-LLaVA-Phi2-384-Pretrain", "pandalla/MBTIGPT_en_ESTP", "pandalla/MBTIGPT_en_ISTJ", "pandalla/MBTIGPT_en_ESFP", "pandalla/MBTIGPT_en_ENFJ", "pandalla/MBTIGPT_en_ESFJ", "pandalla/MBTIGPT_en_ISFJ", "pandalla/MBTIGPT_en_ESTJ", "pandalla/MBTIGPT_en_INFJ", "pandalla/MBTIGPT_en_ISTP" ]
[]
[ "LanguageBind/Video-LLaVA", "EvanTHU/MotionLLM", "llava-hf/video-llava", "lyx97/TempCompass", "anonymousatom/AdViduCopy", "anonymousatom/LanguageBind-MoE-LLaVA-StableLM-1.6B-4e", "Omnibus-archive/Video-LLaVA", "YashGb/Video-LLaVA", "XaviXva/Video-LLaVA", "namangupta2001/LanguageBind-Video-LLaVA-7B", "Nymbo/Video-LLaVA", "abhinav-neophyte/LanguageBind-Video-LLaVA-7B", "pivo197878/LanguageBind-Video-LLaVA-7B", "SNUMPR/vlm-rlaif-demo", "jirvin16/TEOChat", "realaer/src" ]
1
https://aclanthology.org/2024.emnlp-main.343.bib
https://aclanthology.org/2024.emnlp-main.343/
@inproceedings{xu-etal-2024-sayself, title = "{S}ay{S}elf: Teaching {LLM}s to Express Confidence with Self-Reflective Rationales", author = "Xu, Tianyang and Wu, Shujin and Diao, Shizhe and Liu, Xiaoze and Wang, Xingyao and Chen, Yangyi and Gao, Jing", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.343", pages = "5985--5998", abstract = "Large language models (LLMs) often generate inaccurate or fabricated information and generally fail to indicate their confidence, which limits their broader applications. Previous work has elicited confidence from LLMs by direct or self-consistency prompting, or constructing specific datasets for supervised finetuning. The prompting-based approaches have inferior performance, and the training-based approaches are limited to binary or inaccurate group-level confidence estimates. In this work, we present SaySelf, a novel training framework that teaches LLMs to express more fine-grained confidence estimates. In addition, beyond the confidence scores, SaySelf initiates the process of directing LLMs to produce self-reflective rationales that clearly identify gaps in their parametric knowledge and explain their uncertainty. This is achieved by using an LLM to automatically summarize the uncertainties in specific knowledge via natural language. The summarization is based on the analysis of the inconsistency in multiple sampled reasoning chains, and the resulting data is utilized for supervised fine-tuning. Moreover, we utilize reinforcement learning with a meticulously crafted reward function to calibrate the confidence estimates, motivating LLMs to deliver accurate, high-confidence predictions and to penalize overconfidence in erroneous outputs. Experimental results demonstrate the effectiveness of SaySelf in reducing the confidence calibration error and maintaining the task performance. The generated self-reflective rationales are also reasonable and can further contribute to the calibration. The code is made public at https://github.com/xu1868/SaySelf.", }
Large language models (LLMs) often generate inaccurate or fabricated information and generally fail to indicate their confidence, which limits their broader applications. Previous work has elicited confidence from LLMs by direct or self-consistency prompting, or constructing specific datasets for supervised finetuning. The prompting-based approaches have inferior performance, and the training-based approaches are limited to binary or inaccurate group-level confidence estimates. In this work, we present SaySelf, a novel training framework that teaches LLMs to express more fine-grained confidence estimates. In addition, beyond the confidence scores, SaySelf initiates the process of directing LLMs to produce self-reflective rationales that clearly identify gaps in their parametric knowledge and explain their uncertainty. This is achieved by using an LLM to automatically summarize the uncertainties in specific knowledge via natural language. The summarization is based on the analysis of the inconsistency in multiple sampled reasoning chains, and the resulting data is utilized for supervised fine-tuning. Moreover, we utilize reinforcement learning with a meticulously crafted reward function to calibrate the confidence estimates, motivating LLMs to deliver accurate, high-confidence predictions and to penalize overconfidence in erroneous outputs. Experimental results demonstrate the effectiveness of SaySelf in reducing the confidence calibration error and maintaining the task performance. The generated self-reflective rationales are also reasonable and can further contribute to the calibration. The code is made public at https://github.com/xu1868/SaySelf.
[ "Xu, Tianyang", "Wu, Shujin", "Diao, Shizhe", "Liu, Xiaoze", "Wang, Xingyao", "Chen, Yangyi", "Gao, Jing" ]
SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales
emnlp-main.343
Poster
2405.20974
[ "https://github.com/xu1868/sayself" ]
https://huggingface.co/papers/2405.20974
3
0
0
7
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.344.bib
https://aclanthology.org/2024.emnlp-main.344/
@inproceedings{diehl-martinez-etal-2024-mitigating, title = "Mitigating Frequency Bias and Anisotropy in Language Model Pre-Training with Syntactic Smoothing", author = "Diehl Martinez, Richard and Goriely, Zebulon and Caines, Andrew and Buttery, Paula and Beinborn, Lisa", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.344", pages = "5999--6011", abstract = "Language models strongly rely on frequency information because they maximize the likelihood of tokens during pre-training. As a consequence, language models tend to not generalize well to tokens that are seldom seen during training. Moreover, maximum likelihood training has been discovered to give rise to anisotropy: representations of tokens in a model tend to cluster tightly in a high-dimensional cone, rather than spreading out over their representational capacity.Our work introduces a method for quantifying the frequency bias of a language model by assessing sentence-level perplexity with respect to token-level frequency. We then present a method for reducing the frequency bias of a language model by inducing a syntactic prior over token representations during pre-training. Our Syntactic Smoothing method adjusts the maximum likelihood objective function to distribute the learning signal to syntactically similar tokens. This approach results in better performance on infrequent English tokens and a decrease in anisotropy. We empirically show that the degree of anisotropy in a model correlates with its frequency bias.", }
Language models strongly rely on frequency information because they maximize the likelihood of tokens during pre-training. As a consequence, language models tend not to generalize well to tokens that are seldom seen during training. Moreover, maximum likelihood training has been discovered to give rise to anisotropy: representations of tokens in a model tend to cluster tightly in a high-dimensional cone, rather than spreading out over their representational capacity. Our work introduces a method for quantifying the frequency bias of a language model by assessing sentence-level perplexity with respect to token-level frequency. We then present a method for reducing the frequency bias of a language model by inducing a syntactic prior over token representations during pre-training. Our Syntactic Smoothing method adjusts the maximum likelihood objective function to distribute the learning signal to syntactically similar tokens. This approach results in better performance on infrequent English tokens and a decrease in anisotropy. We empirically show that the degree of anisotropy in a model correlates with its frequency bias.
[ "Diehl Martinez, Richard", "Goriely, Zebulon", "Caines, Andrew", "Buttery, Paula", "Beinborn, Lisa" ]
Mitigating Frequency Bias and Anisotropy in Language Model Pre-Training with Syntactic Smoothing
emnlp-main.344
Poster
2410.11462
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.345.bib
https://aclanthology.org/2024.emnlp-main.345/
@inproceedings{xiao-etal-2024-toxicloakcn, title = "{T}oxi{C}loak{CN}: Evaluating Robustness of Offensive Language Detection in {C}hinese with Cloaking Perturbations", author = "Xiao, Yunze and Hu, Yujia and Choo, Kenny Tsu Wei and Lee, Roy Ka-Wei", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.345", pages = "6012--6025", abstract = "Detecting hate speech and offensive language is essential for maintaining a safe and respectful digital environment. This study examines the limitations of state-of-the-art large language models (LLMs) in identifying offensive content within systematically perturbed data, with a focus on Chinese, a language particularly susceptible to such perturbations. We introduce ToxiCloakCN, an enhanced dataset derived from ToxiCN, augmented with homophonic substitutions and emoji transformations, to test the robustness of LLMs against these cloaking perturbations. Our findings reveal that existing models significantly underperform in detecting offensive content when these perturbations are applied. We provide an in-depth analysis of how different types of offensive content are affected by these perturbations and explore the alignment between human and model explanations of offensiveness. Our work highlights the urgent need for more advanced techniques in offensive language detection to combat the evolving tactics used to evade detection mechanisms.", }
Detecting hate speech and offensive language is essential for maintaining a safe and respectful digital environment. This study examines the limitations of state-of-the-art large language models (LLMs) in identifying offensive content within systematically perturbed data, with a focus on Chinese, a language particularly susceptible to such perturbations. We introduce ToxiCloakCN, an enhanced dataset derived from ToxiCN, augmented with homophonic substitutions and emoji transformations, to test the robustness of LLMs against these cloaking perturbations. Our findings reveal that existing models significantly underperform in detecting offensive content when these perturbations are applied. We provide an in-depth analysis of how different types of offensive content are affected by these perturbations and explore the alignment between human and model explanations of offensiveness. Our work highlights the urgent need for more advanced techniques in offensive language detection to combat the evolving tactics used to evade detection mechanisms.
[ "Xiao, Yunze", "Hu, Yujia", "Choo, Kenny Tsu Wei", "Lee, Roy Ka-Wei" ]
ToxiCloakCN: Evaluating Robustness of Offensive Language Detection in Chinese with Cloaking Perturbations
emnlp-main.345
Poster
2406.12223
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.346.bib
https://aclanthology.org/2024.emnlp-main.346/
@inproceedings{yuan-etal-2024-boosting, title = "Boosting Scientific Concepts Understanding: Can Analogy from Teacher Models Empower Student Models?", author = "Yuan, Siyu and Jiayang, Cheng and Qiu, Lin and Yang, Deqing", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.346", pages = "6026--6036", abstract = "Analogical reasoning plays a critical role in human cognition, enabling us to understand new concepts by associating them with familiar ones. Previous research in the AI community has mainly focused on identifying and generating analogies and then examining their quality under human evaluation, which overlooks the practical application of these analogies in real-world settings. Inspired by the human education process, in this paper, we propose to investigate how analogies created by teacher language models (LMs) can assist student LMs in understanding scientific concepts, thereby aligning more closely with practical scenarios. Our results suggest that free-form analogies can indeed aid LMs in understanding concepts. Additionally, analogies generated by student LMs can improve their own performance on scientific question answering, demonstrating their capability to use analogies for self-learning new knowledge. Resources are available athttps://github.com/siyuyuan/SCUA.", }
Analogical reasoning plays a critical role in human cognition, enabling us to understand new concepts by associating them with familiar ones. Previous research in the AI community has mainly focused on identifying and generating analogies and then examining their quality under human evaluation, which overlooks the practical application of these analogies in real-world settings. Inspired by the human education process, in this paper, we propose to investigate how analogies created by teacher language models (LMs) can assist student LMs in understanding scientific concepts, thereby aligning more closely with practical scenarios. Our results suggest that free-form analogies can indeed aid LMs in understanding concepts. Additionally, analogies generated by student LMs can improve their own performance on scientific question answering, demonstrating their capability to use analogies for self-learning new knowledge. Resources are available at https://github.com/siyuyuan/SCUA.
[ "Yuan, Siyu", "Jiayang, Cheng", "Qiu, Lin", "Yang, Deqing" ]
Boosting Scientific Concepts Understanding: Can Analogy from Teacher Models Empower Student Models?
emnlp-main.346
Poster
2406.11375
[ "https://github.com/siyuyuan/scua" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.347.bib
https://aclanthology.org/2024.emnlp-main.347/
@inproceedings{qi-etal-2024-model, title = "Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation", author = "Qi, Jirui and Sarti, Gabriele and Fern{\'a}ndez, Raquel and Bisazza, Arianna", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.347", pages = "6037--6053", abstract = "Ensuring the verifiability of model answers is a fundamental challenge for retrieval-augmented generation (RAG) in the question answering (QA) domain. Recently, self-citation prompting was proposed to make large language models (LLMs) generate citations to supporting documents along with their answers. However, self-citing LLMs often struggle to match the required format, refer to non-existent sources, and fail to faithfully reflect LLMs{'} context usage throughout the generation. In this work, we present MIRAGE {--} Model Internals-based RAG Explanations {--} a plug-and-play approach using model internals for faithful answer attribution in RAG applications. MIRAGE detects context-sensitive answer tokens and pairs them with retrieved documents contributing to their prediction via saliency methods. We evaluate our proposed approach on a multilingual extractive QA dataset, finding high agreement with human answer attribution. On open-ended QA, MIRAGE achieves citation quality and efficiency comparable to self-citation while also allowing for a finer-grained control of attribution parameters. Our qualitative evaluation highlights the faithfulness of MIRAGE{'}s attributions and underscores the promising application of model internals for RAG answer attribution. Code and data released at https://github.com/Betswish/MIRAGE.", }
Ensuring the verifiability of model answers is a fundamental challenge for retrieval-augmented generation (RAG) in the question answering (QA) domain. Recently, self-citation prompting was proposed to make large language models (LLMs) generate citations to supporting documents along with their answers. However, self-citing LLMs often struggle to match the required format, refer to non-existent sources, and fail to faithfully reflect LLMs{'} context usage throughout the generation. In this work, we present MIRAGE {--} Model Internals-based RAG Explanations {--} a plug-and-play approach using model internals for faithful answer attribution in RAG applications. MIRAGE detects context-sensitive answer tokens and pairs them with retrieved documents contributing to their prediction via saliency methods. We evaluate our proposed approach on a multilingual extractive QA dataset, finding high agreement with human answer attribution. On open-ended QA, MIRAGE achieves citation quality and efficiency comparable to self-citation while also allowing for a finer-grained control of attribution parameters. Our qualitative evaluation highlights the faithfulness of MIRAGE{'}s attributions and underscores the promising application of model internals for RAG answer attribution. Code and data released at https://github.com/Betswish/MIRAGE.
[ "Qi, Jirui", "Sarti, Gabriele", "Fern{\\'a}ndez, Raquel", "Bisazza, Arianna" ]
Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation
emnlp-main.347
Poster
2406.13663
[ "https://github.com/betswish/mirage" ]
https://huggingface.co/papers/2406.13663
3
7
1
4
[]
[]
[ "gsarti/mirage" ]
[]
[]
[ "gsarti/mirage" ]
1
https://aclanthology.org/2024.emnlp-main.348.bib
https://aclanthology.org/2024.emnlp-main.348/
@inproceedings{prato-etal-2024-large, title = "Do Large Language Models Know How Much They Know?", author = "Prato, Gabriele and Huang, Jerry and Parthasarathi, Prasanna and Sodhani, Shagun and Chandar, Sarath", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.348", pages = "6054--6070", abstract = "Large Language Models (LLMs) have emerged as highly capable systems and are increasingly being integrated into various uses. Nevertheless, the rapid advancement in their deployment trails a comprehensive understanding of their internal mechanisms, as well as a delineation of their capabilities and limitations. A desired characteristic of an intelligent system is its ability to recognize the scope of its own knowledge. To investigate whether LLMs embody this attribute, we develop a benchmark that challenges these models to enumerate all information they possess on specific topics. This benchmark assesses whether the models recall excessive, insufficient, or the precise amount of required information, thereby indicating their awareness of how much they know about the given topic. Our findings reveal that the emergence of this property varies across different architectures and manifests at diverse rates. However, with sufficient scaling, all tested models are ultimately capable of performing this task. The insights gained from this research advance our understanding of LLMs, shedding light on their operational capabilities and contributing to the ongoing exploration of their intricate dynamics.", }
Large Language Models (LLMs) have emerged as highly capable systems and are increasingly being integrated into various uses. Nevertheless, the rapid advancement in their deployment trails a comprehensive understanding of their internal mechanisms, as well as a delineation of their capabilities and limitations. A desired characteristic of an intelligent system is its ability to recognize the scope of its own knowledge. To investigate whether LLMs embody this attribute, we develop a benchmark that challenges these models to enumerate all information they possess on specific topics. This benchmark assesses whether the models recall excessive, insufficient, or the precise amount of required information, thereby indicating their awareness of how much they know about the given topic. Our findings reveal that the emergence of this property varies across different architectures and manifests at diverse rates. However, with sufficient scaling, all tested models are ultimately capable of performing this task. The insights gained from this research advance our understanding of LLMs, shedding light on their operational capabilities and contributing to the ongoing exploration of their intricate dynamics.
[ "Prato, Gabriele", "Huang, Jerry", "Parthasarathi, Prasanna", "Sodhani, Shagun", "Ch", "ar, Sarath" ]
Do Large Language Models Know How Much They Know?
emnlp-main.348
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.349.bib
https://aclanthology.org/2024.emnlp-main.349/
@inproceedings{wadhwa-etal-2024-investigating, title = "Investigating Mysteries of {C}o{T}-Augmented Distillation", author = "Wadhwa, Somin and Amir, Silvio and Wallace, Byron C", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.349", pages = "6071--6086", abstract = "Eliciting chain of thought (CoT) rationales - sequences of token that convey a {``}reasoning{''} process has been shown to consistently improve LLM performance on tasks like question answering. More recent efforts have shown that such rationales can also be used for model distillation: Including CoT sequences (elicited from a large {``}teacher{''} model) in addition to target labels when fine-tuning a small student model yields (often substantial) improvements. In this work we ask: Why and how does this additional training signal help in model distillation? We perform ablations to interrogate this, and report some potentially surprising results. Specifically: (1) Placing CoT sequences after labels (rather than before) realizes consistently better downstream performance {--} this means that no student {``}reasoning{''} is necessary at test time to realize gains. (2) When rationales are appended in this way, they need not be coherent reasoning sequences to yield improvements; performance increases are robust to permutations of CoT tokens, for example. In fact, (3) a small number of key tokens are sufficient to achieve improvements equivalent to those observed when full rationales are used in model distillation.", }
Eliciting chain of thought (CoT) rationales - sequences of tokens that convey a {``}reasoning{''} process - has been shown to consistently improve LLM performance on tasks like question answering. More recent efforts have shown that such rationales can also be used for model distillation: Including CoT sequences (elicited from a large {``}teacher{''} model) in addition to target labels when fine-tuning a small student model yields (often substantial) improvements. In this work, we ask: Why and how does this additional training signal help in model distillation? We perform ablations to interrogate this, and report some potentially surprising results. Specifically: (1) Placing CoT sequences after labels (rather than before) realizes consistently better downstream performance {--} this means that no student {``}reasoning{''} is necessary at test time to realize gains. (2) When rationales are appended in this way, they need not be coherent reasoning sequences to yield improvements; performance increases are robust to permutations of CoT tokens, for example. In fact, (3) a small number of key tokens are sufficient to achieve improvements equivalent to those observed when full rationales are used in model distillation.
[ "Wadhwa, Somin", "Amir, Silvio", "Wallace, Byron C" ]
Investigating Mysteries of CoT-Augmented Distillation
emnlp-main.349
Poster
2406.14511
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.350.bib
https://aclanthology.org/2024.emnlp-main.350/
@inproceedings{you-etal-2024-sciprompt, title = "{S}ci{P}rompt: Knowledge-augmented Prompting for Fine-grained Categorization of Scientific Topics", author = "You, Zhiwen and Han, Kanyao and Zhu, Haotian and Ludaescher, Bertram and Diesner, Jana", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.350", pages = "6087--6104", abstract = "Prompt-based fine-tuning has become an essential method for eliciting information encoded in pre-trained language models for a variety of tasks, including text classification. For multi-class classification tasks, prompt-based fine-tuning under low-resource scenarios has resulted in performance levels comparable to those of fully fine-tuning methods. Previous studies have used crafted prompt templates and verbalizers, mapping from the label terms space to the class space, to solve the classification problem as a masked language modeling task. However, cross-domain and fine-grained prompt-based fine-tuning with an automatically enriched verbalizer remains unexplored, mainly due to the difficulty and costs of manually selecting domain label terms for the verbalizer, which requires humans with domain expertise. To address this challenge, we introduce SciPrompt, a framework designed to automatically retrieve scientific topic-related terms for low-resource text classification tasks. To this end, we select semantically correlated and domain-specific label terms within the context of scientific literature for verbalizer augmentation. Furthermore, we propose a new verbalization strategy that uses correlation scores as additional weights to enhance the prediction performance of the language model during model tuning. Our method outperforms state-of-the-art, prompt-based fine-tuning methods on scientific text classification tasks under few and zero-shot settings, especially in classifying fine-grained and emerging scientific topics.", }
Prompt-based fine-tuning has become an essential method for eliciting information encoded in pre-trained language models for a variety of tasks, including text classification. For multi-class classification tasks, prompt-based fine-tuning under low-resource scenarios has resulted in performance levels comparable to those of fully fine-tuning methods. Previous studies have used crafted prompt templates and verbalizers, mapping from the label terms space to the class space, to solve the classification problem as a masked language modeling task. However, cross-domain and fine-grained prompt-based fine-tuning with an automatically enriched verbalizer remains unexplored, mainly due to the difficulty and costs of manually selecting domain label terms for the verbalizer, which requires humans with domain expertise. To address this challenge, we introduce SciPrompt, a framework designed to automatically retrieve scientific topic-related terms for low-resource text classification tasks. To this end, we select semantically correlated and domain-specific label terms within the context of scientific literature for verbalizer augmentation. Furthermore, we propose a new verbalization strategy that uses correlation scores as additional weights to enhance the prediction performance of the language model during model tuning. Our method outperforms state-of-the-art, prompt-based fine-tuning methods on scientific text classification tasks under few and zero-shot settings, especially in classifying fine-grained and emerging scientific topics.
[ "You, Zhiwen", "Han, Kanyao", "Zhu, Haotian", "Ludaescher, Bertram", "Diesner, Jana" ]
SciPrompt: Knowledge-augmented Prompting for Fine-grained Categorization of Scientific Topics
emnlp-main.350
Poster
2410.01946
[ "https://github.com/zhiwenyou103/SciPrompt" ]
https://huggingface.co/papers/2410.01946
1
4
3
5
[]
[]
[]
[]
[]
[]
1
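The SciPrompt abstract describes weighting retrieved label terms by correlation scores inside the verbalizer. A small sketch of that weighted-verbalizer scoring is given below; the term lists, weights, and probabilities are invented placeholders rather than the paper's retrieved vocabulary.

```python
# Hedged sketch of a correlation-weighted verbalizer: each class has several
# label terms with correlation weights, and the class score is the weighted
# sum of the masked-LM probabilities of those terms at the [MASK] position.
from collections import defaultdict

def class_scores(term_probs: dict, verbalizer: dict) -> dict:
    """term_probs: token -> MLM probability at the mask position.
    verbalizer: class -> list of (term, correlation_weight)."""
    scores = defaultdict(float)
    for label, terms in verbalizer.items():
        for term, weight in terms:
            scores[label] += weight * term_probs.get(term, 0.0)
    return dict(scores)

verbalizer = {
    "astrophysics": [("galaxy", 0.9), ("stellar", 0.7)],
    "genomics": [("genome", 0.95), ("sequencing", 0.6)],
}
term_probs = {"galaxy": 0.02, "stellar": 0.01, "genome": 0.2, "sequencing": 0.05}
print(class_scores(term_probs, verbalizer))  # genomics should score highest here
```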
https://aclanthology.org/2024.emnlp-main.351.bib
https://aclanthology.org/2024.emnlp-main.351/
@inproceedings{basu-etal-2024-distilling, title = "Distilling Knowledge from Text-to-Image Generative Models Improves Visio-Linguistic Reasoning in {CLIP}", author = "Basu, Samyadeep and Hu, Shell Xu and Sanjabi, Maziar and Massiceti, Daniela and Feizi, Soheil", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.351", pages = "6105--6113", abstract = "Image-text contrastive models like CLIP have wide applications in zero-shot classification, image-text retrieval, and transfer learning. However, they often struggle on compositional visio-linguistic tasks (e.g., attribute-binding or object-relationships) where their performance is no better than random chance. To address this, we introduce SDS-CLIP, a lightweight and sample-efficient distillation method to enhance CLIP{'}s compositional visio-linguistic reasoning. Our approach fine-tunes CLIP using a distillation objective borrowed from large text-to-image generative models like Stable-Diffusion, which are known for their strong visio-linguistic reasoning abilities. On the challenging Winoground benchmark, SDS-CLIP improves the visio-linguistic performance of various CLIP models by up to 7{\%}, while on the ARO dataset, it boosts performance by up to 3{\%}. This work underscores the potential of well-designed distillation objectives from generative models to enhance contrastive image-text models with improved visio-linguistic reasoning capabilities.", }
Image-text contrastive models like CLIP have wide applications in zero-shot classification, image-text retrieval, and transfer learning. However, they often struggle on compositional visio-linguistic tasks (e.g., attribute-binding or object-relationships) where their performance is no better than random chance. To address this, we introduce SDS-CLIP, a lightweight and sample-efficient distillation method to enhance CLIP{'}s compositional visio-linguistic reasoning. Our approach fine-tunes CLIP using a distillation objective borrowed from large text-to-image generative models like Stable-Diffusion, which are known for their strong visio-linguistic reasoning abilities. On the challenging Winoground benchmark, SDS-CLIP improves the visio-linguistic performance of various CLIP models by up to 7{\%}, while on the ARO dataset, it boosts performance by up to 3{\%}. This work underscores the potential of well-designed distillation objectives from generative models to enhance contrastive image-text models with improved visio-linguistic reasoning capabilities.
[ "Basu, Samyadeep", "Hu, Shell Xu", "Sanjabi, Maziar", "Massiceti, Daniela", "Feizi, Soheil" ]
Distilling Knowledge from Text-to-Image Generative Models Improves Visio-Linguistic Reasoning in CLIP
emnlp-main.351
Poster
2307.09233
[ "" ]
https://huggingface.co/papers/2307.09233
4
7
0
5
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.352.bib
https://aclanthology.org/2024.emnlp-main.352/
@inproceedings{wadhwa-etal-2024-learning, title = "Learning from Natural Language Explanations for Generalizable Entity Matching", author = "Wadhwa, Somin and Krishnan, Adit and Wang, Runhui and Wallace, Byron C and Kong, Luyang", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.352", pages = "6114--6129", abstract = "Entity matching is the task of linking records from different sources that refer to the same real-world entity. Past work has primarily treated entity linking as a standard supervised learning problem. However, supervised entity matching models often do not generalize well to new data, and collecting exhaustive labeled training data is often cost prohibitive. Further, recent efforts have adopted LLMs for this task in few/zero-shot settings, exploiting their general knowledge. But LLMs are prohibitively expensive for performing inference at scale for real-world entity matching tasks.As an efficient alternative, we re-cast entity matching as a conditional generation task as opposed to binary classification. This enables us to {``}distill{''} LLM reasoning into smaller entity matching models via natural language explanations. This approach achieves strong performance, especially on out-of-domain generalization tests (10.85{\%} F-1) where standalone generative methods struggle. We perform ablations that highlight the importance of explanations, both for performance and model robustness.", }
Entity matching is the task of linking records from different sources that refer to the same real-world entity. Past work has primarily treated entity linking as a standard supervised learning problem. However, supervised entity matching models often do not generalize well to new data, and collecting exhaustive labeled training data is often cost prohibitive. Further, recent efforts have adopted LLMs for this task in few/zero-shot settings, exploiting their general knowledge. But LLMs are prohibitively expensive for performing inference at scale for real-world entity matching tasks. As an efficient alternative, we re-cast entity matching as a conditional generation task as opposed to binary classification. This enables us to {``}distill{''} LLM reasoning into smaller entity matching models via natural language explanations. This approach achieves strong performance, especially on out-of-domain generalization tests (10.85{\%} F-1) where standalone generative methods struggle. We perform ablations that highlight the importance of explanations, both for performance and model robustness.
[ "Wadhwa, Somin", "Krishnan, Adit", "Wang, Runhui", "Wallace, Byron C", "Kong, Luyang" ]
Learning from Natural Language Explanations for Generalizable Entity Matching
emnlp-main.352
Poster
2406.09330
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
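The abstract above recasts entity matching as conditional generation with a label plus natural-language explanation as the target. A minimal sketch of that input/target formatting follows; the [COL]/[VAL] serialization and the field names are assumptions for illustration, not the authors' exact format.

```python
# Hedged sketch: serialize a record pair into one input string and pair it
# with a "label + explanation" generation target.
def serialize(record: dict) -> str:
    return " ".join(f"[COL] {k} [VAL] {v}" for k, v in record.items())

def make_example(rec_a: dict, rec_b: dict, label: str, explanation: str) -> dict:
    return {
        "input": f"Entity A: {serialize(rec_a)} Entity B: {serialize(rec_b)}",
        "target": f"{label}. {explanation}",
    }

a = {"title": "Sony WH-1000XM4 headphones", "price": "278"}
b = {"title": "Sony WH1000XM4 wireless headphones", "price": "279.99"}
ex = make_example(a, b, "match",
                  "Both describe the same headphone model; prices are close.")
print(ex["input"])
print(ex["target"])
```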
https://aclanthology.org/2024.emnlp-main.353.bib
https://aclanthology.org/2024.emnlp-main.353/
@inproceedings{li-etal-2024-know, title = "Do You Know What You Are Talking About? Characterizing Query-Knowledge Relevance For Reliable Retrieval Augmented Generation", author = "Li, Zhuohang and Zhang, Jiaxin and Yan, Chao and Das, Kamalika and Kumar, Sricharan and Kantarcioglu, Murat and Malin, Bradley A.", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.353", pages = "6130--6151", abstract = "Language models (LMs) are known to suffer from hallucinations and misinformation. Retrieval augmented generation (RAG) that retrieves verifiable information from an external knowledge corpus to complement the parametric knowledge in LMs provides a tangible solution to these problems. However, the generation quality of RAG is highly dependent on the relevance between a user{'}s query and the retrieved documents. Inaccurate responses may be generated when the query is outside of the scope of knowledge represented in the external knowledge corpus or if the information in the corpus is out-of-date. In this work, we establish a statistical framework that assesses how well a query can be answered by an RAG system by capturing the relevance of knowledge. We introduce an online testing procedure that employs goodness-of-fit (GoF) tests to inspect the relevance of each user query to detect out-of-knowledge queries with low knowledge relevance. Additionally, we develop an offline testing framework that examines a collection of user queries, aiming to detect significant shifts in the query distribution which indicates the knowledge corpus is no longer sufficiently capable of supporting the interests of the users. We demonstrate the capabilities of these strategies through a systematic evaluation on eight question-answering (QA) datasets, the results of which indicate that the new testing framework is an efficient solution to enhance the reliability of existing RAG systems.", }
Language models (LMs) are known to suffer from hallucinations and misinformation. Retrieval augmented generation (RAG) that retrieves verifiable information from an external knowledge corpus to complement the parametric knowledge in LMs provides a tangible solution to these problems. However, the generation quality of RAG is highly dependent on the relevance between a user{'}s query and the retrieved documents. Inaccurate responses may be generated when the query is outside of the scope of knowledge represented in the external knowledge corpus or if the information in the corpus is out-of-date. In this work, we establish a statistical framework that assesses how well a query can be answered by an RAG system by capturing the relevance of knowledge. We introduce an online testing procedure that employs goodness-of-fit (GoF) tests to inspect the relevance of each user query to detect out-of-knowledge queries with low knowledge relevance. Additionally, we develop an offline testing framework that examines a collection of user queries, aiming to detect significant shifts in the query distribution which indicates the knowledge corpus is no longer sufficiently capable of supporting the interests of the users. We demonstrate the capabilities of these strategies through a systematic evaluation on eight question-answering (QA) datasets, the results of which indicate that the new testing framework is an efficient solution to enhance the reliability of existing RAG systems.
[ "Li, Zhuohang", "Zhang, Jiaxin", "Yan, Chao", "Das, Kamalika", "Kumar, Sricharan", "Kantarcioglu, Murat", "Malin, Bradley A." ]
Do You Know What You Are Talking About? Characterizing Query-Knowledge Relevance For Reliable Retrieval Augmented Generation
emnlp-main.353
Oral
2410.08320
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
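The abstract above proposes goodness-of-fit testing of a query's relevance to the knowledge corpus. The sketch below illustrates the general idea with a two-sample KS test over retrieval-similarity profiles; the random embeddings, the calibration procedure, and the threshold are simplified stand-ins, not the authors' exact test.

```python
# Hedged sketch: flag "out-of-knowledge" queries whose top retrieval
# similarities deviate from a calibration distribution of answerable queries.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 64))          # stand-in document embeddings
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

def top_similarities(query_vec, k=20):
    q = query_vec / np.linalg.norm(query_vec)
    sims = corpus @ q
    return np.sort(sims)[-k:]

# Calibration: similarity profiles of queries known to be answerable.
calib = np.concatenate([top_similarities(corpus[i] + 0.1 * rng.normal(size=64))
                        for i in range(50)])

def is_out_of_knowledge(query_vec, alpha=0.01):
    sample = top_similarities(query_vec)
    res = stats.ks_2samp(sample, calib)
    return res.pvalue < alpha  # reject "fits the calibration distribution"

print(is_out_of_knowledge(corpus[3] + 0.1 * rng.normal(size=64)))  # likely False
print(is_out_of_knowledge(rng.normal(size=64)))                    # likely True
```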
https://aclanthology.org/2024.emnlp-main.354.bib
https://aclanthology.org/2024.emnlp-main.354/
@inproceedings{huang-etal-2024-reliability, title = "On the Reliability of Psychological Scales on Large Language Models", author = "Huang, Jen-tse and Jiao, Wenxiang and Lam, Man Ho and Li, Eric John and Wang, Wenxuan and Lyu, Michael", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.354", pages = "6152--6173", abstract = "Recent research has focused on examining Large Language Models{'} (LLMs) characteristics from a psychological standpoint, acknowledging the necessity of understanding their behavioral characteristics. The administration of personality tests to LLMs has emerged as a noteworthy area in this context. However, the suitability of employing psychological scales, initially devised for humans, on LLMs is a matter of ongoing debate. Our study aims to determine the reliability of applying personality assessments to LLMs, explicitly investigating whether LLMs demonstrate consistent personality traits. Analysis of 2,500 settings per model, including GPT-3.5, GPT-4, Gemini-Pro, and LLaMA-3.1, reveals that various LLMs show consistency in responses to the Big Five Inventory, indicating a satisfactory level of reliability. Furthermore, our research explores the potential of GPT-3.5 to emulate diverse personalities and represent various groups{---}a capability increasingly sought after in social sciences for substituting human participants with LLMs to reduce costs. Our findings reveal that LLMs have the potential to represent different personalities with specific prompt instructions.", }
Recent research has focused on examining Large Language Models{'} (LLMs) characteristics from a psychological standpoint, acknowledging the necessity of understanding their behavioral characteristics. The administration of personality tests to LLMs has emerged as a noteworthy area in this context. However, the suitability of employing psychological scales, initially devised for humans, on LLMs is a matter of ongoing debate. Our study aims to determine the reliability of applying personality assessments to LLMs, explicitly investigating whether LLMs demonstrate consistent personality traits. Analysis of 2,500 settings per model, including GPT-3.5, GPT-4, Gemini-Pro, and LLaMA-3.1, reveals that various LLMs show consistency in responses to the Big Five Inventory, indicating a satisfactory level of reliability. Furthermore, our research explores the potential of GPT-3.5 to emulate diverse personalities and represent various groups{---}a capability increasingly sought after in social sciences for substituting human participants with LLMs to reduce costs. Our findings reveal that LLMs have the potential to represent different personalities with specific prompt instructions.
[ "Huang, Jen-tse", "Jiao, Wenxiang", "Lam, Man Ho", "Li, Eric John", "Wang, Wenxuan", "Lyu, Michael" ]
On the Reliability of Psychological Scales on Large Language Models
emnlp-main.354
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.355.bib
https://aclanthology.org/2024.emnlp-main.355/
@inproceedings{arora-etal-2024-contrastive, title = "Contrastive Entity Coreference and Disambiguation for Historical Texts", author = "Arora, Abhishek and Silcock, Emily and Dell, Melissa and Heldring, Leander", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.355", pages = "6174--6186", abstract = "Massive-scale historical document collections are crucial for social science research. Despite increasing digitization, these documents typically lack unique cross-document identifiers for individuals mentioned within the texts, as well as individual identifiers from external knowledge bases like Wikipedia/Wikidata. Existing entity disambiguation methods often fall short in accuracy for historical documents, which are replete with individuals not remembered in contemporary knowledge bases. This study makes three key contributions to improve cross-document coreference resolution and disambiguation in historical texts: a massive-scale training dataset replete with hard negatives - that sources over 190 million entity pairs from Wikipedia contexts and disambiguation pages - high-quality evaluation data from hand-labeled historical newswire articles, and trained models evaluated on this historical benchmark. We contrastively train bi-encoder models for coreferencing and disambiguating individuals in historical texts, achieving accurate, scalable performance that identifies out-of-knowledge base individuals. Our approach significantly surpasses other entity disambiguation models on our historical newswire benchmark. Our models also demonstrate competitive performance on modern entity disambiguation benchmarks, particularly on certain news disambiguation datasets.", }
Massive-scale historical document collections are crucial for social science research. Despite increasing digitization, these documents typically lack unique cross-document identifiers for individuals mentioned within the texts, as well as individual identifiers from external knowledge bases like Wikipedia/Wikidata. Existing entity disambiguation methods often fall short in accuracy for historical documents, which are replete with individuals not remembered in contemporary knowledge bases. This study makes three key contributions to improve cross-document coreference resolution and disambiguation in historical texts: a massive-scale training dataset replete with hard negatives - that sources over 190 million entity pairs from Wikipedia contexts and disambiguation pages - high-quality evaluation data from hand-labeled historical newswire articles, and trained models evaluated on this historical benchmark. We contrastively train bi-encoder models for coreferencing and disambiguating individuals in historical texts, achieving accurate, scalable performance that identifies out-of-knowledge base individuals. Our approach significantly surpasses other entity disambiguation models on our historical newswire benchmark. Our models also demonstrate competitive performance on modern entity disambiguation benchmarks, particularly on certain news disambiguation datasets.
[ "Arora, Abhishek", "Silcock, Emily", "Dell, Melissa", "Heldring, Leander" ]
Contrastive Entity Coreference and Disambiguation for Historical Texts
emnlp-main.355
Poster
2406.15576
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
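The abstract above trains bi-encoders contrastively on entity pairs with hard negatives. The following sketch shows a generic in-batch contrastive (InfoNCE-style) training step; the tiny encoder and random tensors are placeholders, not the paper's model, data, or negative-mining strategy.

```python
# Hedged sketch: in-batch contrastive training for a bi-encoder, where the
# diagonal of the similarity matrix holds positives and off-diagonal entries
# act as negatives.
import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 64))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def info_nce(anchors, positives, temperature=0.05):
    a = F.normalize(encoder(anchors), dim=-1)
    p = F.normalize(encoder(positives), dim=-1)
    logits = a @ p.T / temperature            # in-batch negatives off-diagonal
    labels = torch.arange(a.size(0))
    return F.cross_entropy(logits, labels)

for step in range(3):
    x = torch.randn(16, 32)                   # stand-in mention features
    loss = info_nce(x, x + 0.01 * torch.randn_like(x))
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"step {step}: loss={loss.item():.3f}")
```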
https://aclanthology.org/2024.emnlp-main.356.bib
https://aclanthology.org/2024.emnlp-main.356/
@inproceedings{kim-ji-2024-finer, title = "Finer: Investigating and Enhancing Fine-Grained Visual Concept Recognition in Large Vision Language Models", author = "Kim, Jeonghwan and Ji, Heng", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.356", pages = "6187--6207", abstract = "Recent advances in instruction-tuned Large Vision-Language Models (LVLMs) have imbued the models with the ability to generate high-level, image-grounded explanations with ease. While such capability is largely attributed to the rich world knowledge contained within the Large Language Models (LLMs), our work reveals their shortcomings in fine-grained visual categorization (FGVC) across six different benchmark settings. Most recent state-of-the-art LVLMs such as LLaVa-1.5, InstructBLIP and GPT-4V not only severely deteriorate in terms of classification performance, e.g., average drop of 65.58 in EM for Stanford Dogs for LLaVA-1.5, but also struggle to generate descriptive visual attributes based on a concept that appears within an input image despite their prominent zero-shot image captioning ability. In-depth analyses show that instruction-tuned LVLMs suffer from modality gap, showing discrepancy when given textual and visual inputs that correspond to the same concept. In an effort to further the community{'}s endeavor in this direction, we propose a multiple granularity attribute-centric benchmark and training mixture, Finer, which aims to establish a ground to evaluate LVLMs{'} fine-grained visual comprehension ability and provide significantly improved explainability.", }
Recent advances in instruction-tuned Large Vision-Language Models (LVLMs) have imbued the models with the ability to generate high-level, image-grounded explanations with ease. While such capability is largely attributed to the rich world knowledge contained within the Large Language Models (LLMs), our work reveals their shortcomings in fine-grained visual categorization (FGVC) across six different benchmark settings. Most recent state-of-the-art LVLMs such as LLaVa-1.5, InstructBLIP and GPT-4V not only severely deteriorate in terms of classification performance, e.g., average drop of 65.58 in EM for Stanford Dogs for LLaVA-1.5, but also struggle to generate descriptive visual attributes based on a concept that appears within an input image despite their prominent zero-shot image captioning ability. In-depth analyses show that instruction-tuned LVLMs suffer from modality gap, showing discrepancy when given textual and visual inputs that correspond to the same concept. In an effort to further the community{'}s endeavor in this direction, we propose a multiple granularity attribute-centric benchmark and training mixture, Finer, which aims to establish a ground to evaluate LVLMs{'} fine-grained visual comprehension ability and provide significantly improved explainability.
[ "Kim, Jeonghwan", "Ji, Heng" ]
Finer: Investigating and Enhancing Fine-Grained Visual Concept Recognition in Large Vision Language Models
emnlp-main.356
Poster
2402.16315
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.357.bib
https://aclanthology.org/2024.emnlp-main.357/
@inproceedings{asthana-etal-2024-evaluating, title = "Evaluating {LLM}s for Targeted Concept Simplification for Domain-Specific Texts", author = "Asthana, Sumit and Rashkin, Hannah and Clark, Elizabeth and Huot, Fantine and Lapata, Mirella", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.357", pages = "6208--6226", abstract = "One useful application of NLP models is to support people in reading complex text from unfamiliar domains (e.g., scientific articles). Simplifying the entire text makes it understandable but sometimes removes important details. On the contrary, helping adult readers understand difficult concepts in context can enhance their vocabulary and knowledge. In a preliminary human study, we first identify that lack of context and unfamiliarity with difficult concepts is a major reason for adult readers{'} difficulty with domain-specific text. We then introduce targeted concept simplification, a simplification task for rewriting text to help readers comprehend text containing unfamiliar concepts. We also introduce WikiDomains, a new dataset of 22k definitions from 13 academic domains paired with a difficult concept within each definition. We benchmark the performance of open-source and commercial LLMs and a simple dictionary baseline on this task across human judgments of ease of understanding and meaning preservation. Interestingly, our human judges preferred explanations about the difficult concept more than simplifications of the concept phrase. Further, no single model achieved superior performance across all quality dimensions, and automated metrics also show low correlations with human evaluations of concept simplification ({\textasciitilde}0.2), opening up rich avenues for research on personalized human reading comprehension support.", }
One useful application of NLP models is to support people in reading complex text from unfamiliar domains (e.g., scientific articles). Simplifying the entire text makes it understandable but sometimes removes important details. On the contrary, helping adult readers understand difficult concepts in context can enhance their vocabulary and knowledge. In a preliminary human study, we first identify that lack of context and unfamiliarity with difficult concepts is a major reason for adult readers{'} difficulty with domain-specific text. We then introduce targeted concept simplification, a simplification task for rewriting text to help readers comprehend text containing unfamiliar concepts. We also introduce WikiDomains, a new dataset of 22k definitions from 13 academic domains paired with a difficult concept within each definition. We benchmark the performance of open-source and commercial LLMs and a simple dictionary baseline on this task across human judgments of ease of understanding and meaning preservation. Interestingly, our human judges preferred explanations about the difficult concept more than simplifications of the concept phrase. Further, no single model achieved superior performance across all quality dimensions, and automated metrics also show low correlations with human evaluations of concept simplification ({\textasciitilde}0.2), opening up rich avenues for research on personalized human reading comprehension support.
[ "Asthana, Sumit", "Rashkin, Hannah", "Clark, Elizabeth", "Huot, Fantine", "Lapata, Mirella" ]
Evaluating LLMs for Targeted Concept Simplification for Domain-Specific Texts
emnlp-main.357
Poster
2410.20763
[ "https://github.com/google-deepmind/wikidomains" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.358.bib
https://aclanthology.org/2024.emnlp-main.358/
@inproceedings{li-etal-2024-vlfeedback, title = "{VLF}eedback: A Large-Scale {AI} Feedback Dataset for Large Vision-Language Models Alignment", author = "Li, Lei and Xie, Zhihui and Li, Mukai and Chen, Shunian and Wang, Peiyi and Chen, Liang and Yang, Yazheng and Wang, Benyou and Kong, Lingpeng and Liu, Qi", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.358", pages = "6227--6246", abstract = "As large vision-language models (LVLMs) evolve rapidly, the demand for high-quality and diverse data to align these models becomes increasingly crucial. However, the creation of such data with human supervision proves costly and time-intensive. In this paper, we investigate the efficacy of AI feedback to scale supervision for aligning LVLMs. We introduce VLFeedback, the first large-scale vision-language feedback dataset, comprising over 82K multi-modal instructions and comprehensive rationales generated by off-the-shelf models without human annotations. To evaluate the effectiveness of AI feedback for vision-language alignment, we train Silkie, an LVLM fine-tuned via direct preference optimization on VLFeedback. Silkie showcases exceptional performance regarding helpfulness, visual faithfulness, and safety metrics. It outperforms its base model by 6.9{\%} and 9.5{\%} in perception and cognition tasks, reduces hallucination issues on MMHal-Bench, and exhibits enhanced resilience against red-teaming attacks. Furthermore, our analysis underscores the advantage of AI feedback, particularly in fostering preference diversity to deliver more comprehensive improvements. Our dataset, training code and models are available at \url{https://vlf-silkie.github.io}.", }
As large vision-language models (LVLMs) evolve rapidly, the demand for high-quality and diverse data to align these models becomes increasingly crucial. However, the creation of such data with human supervision proves costly and time-intensive. In this paper, we investigate the efficacy of AI feedback to scale supervision for aligning LVLMs. We introduce VLFeedback, the first large-scale vision-language feedback dataset, comprising over 82K multi-modal instructions and comprehensive rationales generated by off-the-shelf models without human annotations. To evaluate the effectiveness of AI feedback for vision-language alignment, we train Silkie, an LVLM fine-tuned via direct preference optimization on VLFeedback. Silkie showcases exceptional performance regarding helpfulness, visual faithfulness, and safety metrics. It outperforms its base model by 6.9{\%} and 9.5{\%} in perception and cognition tasks, reduces hallucination issues on MMHal-Bench, and exhibits enhanced resilience against red-teaming attacks. Furthermore, our analysis underscores the advantage of AI feedback, particularly in fostering preference diversity to deliver more comprehensive improvements. Our dataset, training code and models are available at \url{https://vlf-silkie.github.io}.
[ "Li, Lei", "Xie, Zhihui", "Li, Mukai", "Chen, Shunian", "Wang, Peiyi", "Chen, Liang", "Yang, Yazheng", "Wang, Benyou", "Kong, Lingpeng", "Liu, Qi" ]
VLFeedback: A Large-Scale AI Feedback Dataset for Large Vision-Language Models Alignment
emnlp-main.358
Poster
2410.09421
[ "" ]
https://huggingface.co/papers/2410.09421
1
0
0
10
[]
[]
[]
[]
[]
[]
1
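The abstract above fine-tunes Silkie with direct preference optimization on AI feedback. A compact sketch of the standard DPO objective is shown below; the random log-probabilities stand in for real policy and reference-model scores on preferred and rejected responses, and beta is an illustrative hyperparameter.

```python
# Hedged sketch of the DPO loss: push the policy's log-probability margin
# between chosen and rejected responses above the reference model's margin.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

logp = lambda: torch.randn(8)   # placeholder per-example log-probabilities
print(dpo_loss(logp(), logp(), logp(), logp()).item())
```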
https://aclanthology.org/2024.emnlp-main.359.bib
https://aclanthology.org/2024.emnlp-main.359/
@inproceedings{yuan-etal-2024-focused, title = "Focused Large Language Models are Stable Many-Shot Learners", author = "Yuan, Peiwen and Feng, Shaoxiong and Li, Yiwei and Wang, Xinglin and Zhang, Yueqi and Tan, Chuyi and Pan, Boyuan and Wang, Heda and Hu, Yao and Li, Kan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.359", pages = "6247--6261", abstract = "In-Context Learning (ICL) enables large language models (LLMs) to achieve rapid task adaptation by learning from demonstrations. With the increase in available context length of LLMs, recent experiments have shown that the performance of ICL does not necessarily scale well in many-shot (demonstration) settings. We hypothesize that the reason lies in more demonstrations dispersing the model attention from the query, hindering its understanding of key content, which we validate both theoretically and experimentally. Inspired by how humans learn from examples, we propose a training-free method FocusICL, which conducts triviality filtering to avoid attention being diverted by unimportant contents at token-level and operates hierarchical attention to further ensure sufficient attention towards current query at demonstration-level. We also design an efficient hyperparameter searching strategy for FocusICL based on model perplexity of demonstrations. Comprehensive experiments validate that FocusICL achieves an average performance improvement of 5.2{\%} over vanilla ICL and scales well with many-shot demonstrations.", }
In-Context Learning (ICL) enables large language models (LLMs) to achieve rapid task adaptation by learning from demonstrations. With the increase in available context length of LLMs, recent experiments have shown that the performance of ICL does not necessarily scale well in many-shot (demonstration) settings. We hypothesize that the reason lies in more demonstrations dispersing the model attention from the query, hindering its understanding of key content, which we validate both theoretically and experimentally. Inspired by how humans learn from examples, we propose a training-free method FocusICL, which conducts triviality filtering to avoid attention being diverted by unimportant contents at token-level and operates hierarchical attention to further ensure sufficient attention towards current query at demonstration-level. We also design an efficient hyperparameter searching strategy for FocusICL based on model perplexity of demonstrations. Comprehensive experiments validate that FocusICL achieves an average performance improvement of 5.2{\%} over vanilla ICL and scales well with many-shot demonstrations.
[ "Yuan, Peiwen", "Feng, Shaoxiong", "Li, Yiwei", "Wang, Xinglin", "Zhang, Yueqi", "Tan, Chuyi", "Pan, Boyuan", "Wang, Heda", "Hu, Yao", "Li, Kan" ]
Focused Large Language Models are Stable Many-Shot Learners
emnlp-main.359
Poster
2408.13987
[ "" ]
https://huggingface.co/papers/2408.13987
0
0
0
10
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.360.bib
https://aclanthology.org/2024.emnlp-main.360/
@inproceedings{tanzer-etal-2024-reconsidering, title = "Reconsidering Sentence-Level Sign Language Translation", author = "Tanzer, Garrett and Shengelia, Maximus and Harrenstien, Ken and Uthus, David", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.360", pages = "6262--6287", abstract = "Historically, sign language machine translation has been posed as a sentence-level task: datasets consisting of continuous narratives are chopped up and presented to the model as isolated clips. In this work, we explore the limitations of this task framing. First, we survey a number of linguistic phenomena in sign languages that depend on discourse-level context. Then as a case study, we perform the first human baseline for sign language translation that actually substitutes a human into the machine learning task framing, rather than provide the human with the entire document as context. This human baseline{---}for ASL to English translation on the How2Sign dataset{---}shows that for 33{\%} of sentences in our sample, our fluent Deaf signer annotators were only able to understand key parts of the clip in light of additional discourse-level context. These results underscore the importance of understanding and sanity checking examples when adapting machine learning to new domains.", }
Historically, sign language machine translation has been posed as a sentence-level task: datasets consisting of continuous narratives are chopped up and presented to the model as isolated clips. In this work, we explore the limitations of this task framing. First, we survey a number of linguistic phenomena in sign languages that depend on discourse-level context. Then as a case study, we perform the first human baseline for sign language translation that actually substitutes a human into the machine learning task framing, rather than provide the human with the entire document as context. This human baseline{---}for ASL to English translation on the How2Sign dataset{---}shows that for 33{\%} of sentences in our sample, our fluent Deaf signer annotators were only able to understand key parts of the clip in light of additional discourse-level context. These results underscore the importance of understanding and sanity checking examples when adapting machine learning to new domains.
[ "Tanzer, Garrett", "Shengelia, Maximus", "Harrenstien, Ken", "Uthus, David" ]
Reconsidering Sentence-Level Sign Language Translation
emnlp-main.360
Poster
2406.11049
[ "" ]
https://huggingface.co/papers/2406.11049
0
0
0
4
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.361.bib
https://aclanthology.org/2024.emnlp-main.361/
@inproceedings{ghosh-etal-2024-gama, title = "{GAMA}: A Large Audio-Language Model with Advanced Audio Understanding and Complex Reasoning Abilities", author = "Ghosh, Sreyan and Kumar, Sonal and Seth, Ashish and Evuru, Chandra Kiran Reddy and Tyagi, Utkarsh and Sakshi, S and Nieto, Oriol and Duraiswami, Ramani and Manocha, Dinesh", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.361", pages = "6288--6313", abstract = "Perceiving and understanding non-speech sounds and non-verbal speech is essential to making decisions that help us interact with our surroundings. In this paper, we propose GAMA, a novel General-purpose Large Audio-Language Model (LALM) with Advanced Audio Understanding and Complex Reasoning Abilities. We build GAMA by integrating an LLM with multiple types of audio representations, including features from a custom Audio Q-Former, a multi-layer aggregator that aggregates features from multiple layers of an audio encoder. We fine-tune GAMA on a large-scale audio-language dataset, which augments it with audio understanding capabilities. Next, we propose CompA-R (Instruction-Tuning for Complex Audio Reasoning), a synthetically generated instruction-tuning (IT) dataset with instructions that require the model to perform complex reasoning on the input audio. We instruction-tune GAMA with CompA-R to endow it with complex reasoning abilities, where we further add a soft prompt as input with high-level semantic evidence by leveraging event tags of the input audio. Finally, we also propose CompA-R-test, a human-labeled evaluation dataset for evaluating the capabilities of LALMs on open-ended audio question-answering that requires complex reasoning. Through automated and expert human evaluations, we show that GAMA outperforms all other LALMs in literature on diverse audio understanding tasks by margins of 1{\%}-84{\%} and demonstrates state-of-the-art performance on deductive reasoning and hallucination evaluation benchmarks. Further, GAMA IT-ed on CompA-R proves to be superior in its complex reasoning capabilities.", }
Perceiving and understanding non-speech sounds and non-verbal speech is essential to making decisions that help us interact with our surroundings. In this paper, we propose GAMA, a novel General-purpose Large Audio-Language Model (LALM) with Advanced Audio Understanding and Complex Reasoning Abilities. We build GAMA by integrating an LLM with multiple types of audio representations, including features from a custom Audio Q-Former, a multi-layer aggregator that aggregates features from multiple layers of an audio encoder. We fine-tune GAMA on a large-scale audio-language dataset, which augments it with audio understanding capabilities. Next, we propose CompA-R (Instruction-Tuning for Complex Audio Reasoning), a synthetically generated instruction-tuning (IT) dataset with instructions that require the model to perform complex reasoning on the input audio. We instruction-tune GAMA with CompA-R to endow it with complex reasoning abilities, where we further add a soft prompt as input with high-level semantic evidence by leveraging event tags of the input audio. Finally, we also propose CompA-R-test, a human-labeled evaluation dataset for evaluating the capabilities of LALMs on open-ended audio question-answering that requires complex reasoning. Through automated and expert human evaluations, we show that GAMA outperforms all other LALMs in literature on diverse audio understanding tasks by margins of 1{\%}-84{\%} and demonstrates state-of-the-art performance on deductive reasoning and hallucination evaluation benchmarks. Further, GAMA IT-ed on CompA-R proves to be superior in its complex reasoning capabilities.
[ "Ghosh, Sreyan", "Kumar, Sonal", "Seth, Ashish", "Evuru, Chandra Kiran Reddy", "Tyagi, Utkarsh", "Sakshi, S", "Nieto, Oriol", "Duraiswami, Ramani", "Manocha, Dinesh" ]
GAMA: A Large Audio-Language Model with Advanced Audio Understanding and Complex Reasoning Abilities
emnlp-main.361
Poster
2406.11768
[ "" ]
https://huggingface.co/papers/2406.11768
6
20
1
9
[]
[]
[ "sonalkum/GAMA", "sonalkum/GAMA-IT" ]
[]
[]
[ "sonalkum/GAMA", "sonalkum/GAMA-IT" ]
1
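The abstract above mentions a multi-layer aggregator that combines features from several layers of an audio encoder. The sketch below shows one simple way such an aggregator can be built (learned softmax weights over layers followed by a projection); it is an assumption for illustration, not GAMA's actual architecture.

```python
# Hedged sketch: aggregate per-layer encoder features with learned layer
# weights, then project back to the model dimension.
import torch

class LayerAggregator(torch.nn.Module):
    def __init__(self, num_layers: int, dim: int):
        super().__init__()
        self.layer_logits = torch.nn.Parameter(torch.zeros(num_layers))
        self.proj = torch.nn.Linear(dim, dim)

    def forward(self, layer_feats):  # list of [batch, time, dim] tensors
        stacked = torch.stack(layer_feats, dim=0)          # [L, B, T, D]
        weights = torch.softmax(self.layer_logits, dim=0)  # [L]
        mixed = (weights[:, None, None, None] * stacked).sum(dim=0)
        return self.proj(mixed)

feats = [torch.randn(2, 50, 128) for _ in range(12)]   # stand-in layer outputs
print(LayerAggregator(12, 128)(feats).shape)            # torch.Size([2, 50, 128])
```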
https://aclanthology.org/2024.emnlp-main.362.bib
https://aclanthology.org/2024.emnlp-main.362/
@inproceedings{dinu-etal-2024-verba, title = "Verba volant, scripta volant? Don{'}t worry! There are computational solutions for protoword reconstruction", author = "Dinu, Liviu P and Uban, Ana Sabina and Cristea, Alina Maria and Iordache, Ioan-Bogdan and Marchitan, Teodor-George and Georgescu, Simona and Zoicas, Laurentiu", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.362", pages = "6314--6326", abstract = "We introduce a new database of cognate words and etymons for the five main Romance languages, the most comprehensive one to date. We propose a strong benchmark for the automatic reconstruction of protowords for Romance languages, by applying a set of machine learning models and features on these data. The best results reach 90{\%} accuracy in predicting the protoword of a given cognate set, surpassing existing state-of-the-art results for this task and showing that computational methods can be very useful in assisting linguists with protoword reconstruction.", }
We introduce a new database of cognate words and etymons for the five main Romance languages, the most comprehensive one to date. We propose a strong benchmark for the automatic reconstruction of protowords for Romance languages, by applying a set of machine learning models and features on these data. The best results reach 90{\%} accuracy in predicting the protoword of a given cognate set, surpassing existing state-of-the-art results for this task and showing that computational methods can be very useful in assisting linguists with protoword reconstruction.
[ "Dinu, Liviu P", "Uban, Ana Sabina", "Cristea, Alina Maria", "Iordache, Ioan-Bogdan", "Marchitan, Teodor-George", "Georgescu, Simona", "Zoicas, Laurentiu" ]
Verba volant, scripta volant? Don't worry! There are computational solutions for protoword reconstruction
emnlp-main.362
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.363.bib
https://aclanthology.org/2024.emnlp-main.363/
@inproceedings{li-etal-2024-chatgpt-doesnt, title = "{C}hat{GPT} Doesn{'}t Trust Chargers Fans: Guardrail Sensitivity in Context", author = "Li, Victoria R and Chen, Yida and Saphra, Naomi", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.363", pages = "6327--6345", abstract = "While the biases of language models in production are extensively documented, the biases of their guardrails have been neglected. This paper studies how contextual information about the user influences the likelihood of an LLM to refuse to execute a request. By generating user biographies that offer ideological and demographic information, we find a number of biases in guardrail sensitivity on GPT-3.5. Younger, female, and Asian-American personas are more likely to trigger a refusal guardrail when requesting censored or illegal information. Guardrails are also sycophantic, refusing to comply with requests for a political position the user is likely to disagree with. We find that certain identity groups and seemingly innocuous information, e.g., sports fandom, can elicit changes in guardrail sensitivity similar to direct statements of political ideology. For each demographic category and even for American football team fandom, we find that ChatGPT appears to infer a likely political ideology and modify guardrail behavior accordingly.", }
While the biases of language models in production are extensively documented, the biases of their guardrails have been neglected. This paper studies how contextual information about the user influences the likelihood of an LLM to refuse to execute a request. By generating user biographies that offer ideological and demographic information, we find a number of biases in guardrail sensitivity on GPT-3.5. Younger, female, and Asian-American personas are more likely to trigger a refusal guardrail when requesting censored or illegal information. Guardrails are also sycophantic, refusing to comply with requests for a political position the user is likely to disagree with. We find that certain identity groups and seemingly innocuous information, e.g., sports fandom, can elicit changes in guardrail sensitivity similar to direct statements of political ideology. For each demographic category and even for American football team fandom, we find that ChatGPT appears to infer a likely political ideology and modify guardrail behavior accordingly.
[ "Li, Victoria R", "Chen, Yida", "Saphra, Naomi" ]
ChatGPT Doesn't Trust Chargers Fans: Guardrail Sensitivity in Context
emnlp-main.363
Poster
2407.06866
[ "https://github.com/vli31/llm-guardrail-sensitivity" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.364.bib
https://aclanthology.org/2024.emnlp-main.364/
@inproceedings{joshi-etal-2024-personas, title = "Personas as a Way to Model Truthfulness in Language Models", author = "Joshi, Nitish and Rando, Javier and Saparov, Abulhair and Kim, Najoung and He, He", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.364", pages = "6346--6359", abstract = "Large language models (LLMs) are trained on vast amounts of text from the internet, which contains both factual and misleading information about the world. While unintuitive from a classic view of LMs, recent work has shown that the truth value of a statement can be elicited from the model{'}s representations. This paper presents an explanation for why LMs appear to know the truth despite not being trained with truth labels. We hypothesize that the pretraining data is generated by groups of (un)truthful agents whose outputs share common features, and they form a (un)truthful persona. By training on this data, LMs can infer and represent the persona in its activation space. This allows the model to separate truth from falsehoods and controls the truthfulness of its generation. We show evidence for the persona hypothesis via two observations: (1) we can probe whether a model{'}s answer will be truthful before it is generated; (2) finetuning a model on a set of facts improves its truthfulness on unseen topics. Next, using arithmetics as a synthetic environment, we show that structures of the pretraining data are crucial for the model to infer the truthful persona. Overall, our findings suggest that models can exploit hierarchical structures in the data to learn abstract concepts like truthfulness.", }
Large language models (LLMs) are trained on vast amounts of text from the internet, which contains both factual and misleading information about the world. While unintuitive from a classic view of LMs, recent work has shown that the truth value of a statement can be elicited from the model{'}s representations. This paper presents an explanation for why LMs appear to know the truth despite not being trained with truth labels. We hypothesize that the pretraining data is generated by groups of (un)truthful agents whose outputs share common features, and they form a (un)truthful persona. By training on this data, LMs can infer and represent the persona in its activation space. This allows the model to separate truth from falsehoods and controls the truthfulness of its generation. We show evidence for the persona hypothesis via two observations: (1) we can probe whether a model{'}s answer will be truthful before it is generated; (2) finetuning a model on a set of facts improves its truthfulness on unseen topics. Next, using arithmetics as a synthetic environment, we show that structures of the pretraining data are crucial for the model to infer the truthful persona. Overall, our findings suggest that models can exploit hierarchical structures in the data to learn abstract concepts like truthfulness.
[ "Joshi, Nitish", "Rando, Javier", "Saparov, Abulhair", "Kim, Najoung", "He, He" ]
Personas as a Way to Model Truthfulness in Language Models
emnlp-main.364
Poster
2310.18168
[ "" ]
https://huggingface.co/papers/2310.18168
2
5
1
5
[]
[]
[]
[]
[]
[]
1
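The abstract above reports that truthfulness of an upcoming answer can be probed from the model's representations before generation. The sketch below shows a generic linear probe of that kind; the random features and synthetic labels stand in for real hidden states and truthfulness annotations, and the probe is ordinary logistic regression rather than the authors' exact setup.

```python
# Hedged sketch: fit a linear probe on (stand-in) hidden states to predict a
# binary truthfulness label, then evaluate on held-out examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(400, 256))                    # stand-in activations
truthful_direction = rng.normal(size=256)
labels = (hidden_states @ truthful_direction > 0).astype(int)  # synthetic labels

probe = LogisticRegression(max_iter=1000).fit(hidden_states[:300], labels[:300])
print("held-out probe accuracy:", probe.score(hidden_states[300:], labels[300:]))
```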
https://aclanthology.org/2024.emnlp-main.365.bib
https://aclanthology.org/2024.emnlp-main.365/
@inproceedings{sterbentz-etal-2024-satyrn, title = "Satyrn: A Platform for Analytics Augmented Generation", author = "Sterbentz, Marko and Barrie, Cameron and Shahi, Shubham and Dutta, Abhratanu and Hooshmand, Donna and Pack, Harper and Hammond, Kristian J", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.365", pages = "6360--6385", abstract = "Large language models (LLMs) are capable of producing documents, and retrieval augmented generation (RAG) has shown itself to be a powerful method for improving accuracy without sacrificing fluency. However, not all information can be retrieved from text. We propose an approach that uses the analysis of structured data to generate fact sets that are used to guide generation in much the same way that retrieved documents are used in RAG. This analytics augmented generation (AAG) approach supports the ability to utilize standard analytic techniques to generate facts that are then converted to text and passed to an LLM. We present a neurosymbolic platform, Satyrn, that leverages AAG to produce accurate, fluent, and coherent reports grounded in large scale databases. In our experiments, we find that Satyrn generates reports in which over 86{\%} of claims are accurate while maintaining high levels of fluency and coherence, even when using smaller language models such as Mistral-7B, as compared to GPT-4 Code Interpreter in which just 57{\%} of claims are accurate.", }
Large language models (LLMs) are capable of producing documents, and retrieval augmented generation (RAG) has shown itself to be a powerful method for improving accuracy without sacrificing fluency. However, not all information can be retrieved from text. We propose an approach that uses the analysis of structured data to generate fact sets that are used to guide generation in much the same way that retrieved documents are used in RAG. This analytics augmented generation (AAG) approach supports the ability to utilize standard analytic techniques to generate facts that are then converted to text and passed to an LLM. We present a neurosymbolic platform, Satyrn, that leverages AAG to produce accurate, fluent, and coherent reports grounded in large scale databases. In our experiments, we find that Satyrn generates reports in which over 86{\%} of claims are accurate while maintaining high levels of fluency and coherence, even when using smaller language models such as Mistral-7B, as compared to GPT-4 Code Interpreter in which just 57{\%} of claims are accurate.
[ "Sterbentz, Marko", "Barrie, Cameron", "Shahi, Shubham", "Dutta, Abhratanu", "Hooshmand, Donna", "Pack, Harper", "Hammond, Kristian J" ]
Satyrn: A Platform for Analytics Augmented Generation
emnlp-main.365
Poster
2406.12069
[ "https://github.com/nu-c3lab/satyrn" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
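The abstract above describes analytics augmented generation: computing facts from structured data and passing them to an LLM instead of retrieved documents. The sketch below illustrates that flow with a toy dataframe; the data, the fact template, and the prompt wording are illustrative assumptions, not Satyrn's actual pipeline.

```python
# Hedged sketch: derive facts from structured data with standard analytics,
# then build a grounding prompt from those facts for an LLM to verbalize.
import pandas as pd

cases = pd.DataFrame({
    "court": ["N.D. Ill.", "N.D. Ill.", "S.D.N.Y."],
    "days_to_close": [320, 410, 150],
})

facts = [
    f"The mean time to close in {court} is {grp['days_to_close'].mean():.0f} days."
    for court, grp in cases.groupby("court")
]

prompt = "Write a short report using only these facts:\n- " + "\n- ".join(facts)
print(prompt)  # this prompt would then be passed to the language model of choice
```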
https://aclanthology.org/2024.emnlp-main.366.bib
https://aclanthology.org/2024.emnlp-main.366/
@inproceedings{seth-etal-2024-eh, title = "{EH}-{MAM}: Easy-to-Hard Masked Acoustic Modeling for Self-Supervised Speech Representation Learning", author = "Seth, Ashish and Selvakumar, Ramaneswaran and Sakshi, S and Kumar, Sonal and Ghosh, Sreyan and Manocha, Dinesh", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.366", pages = "6386--6400", abstract = "In this paper, we present EH-MAM (Easy-to-Hard adaptive Masked Acoustic Modeling), a novel self-supervised learning approach for speech representation learning. In contrast to the prior methods that use random masking schemes for Masked Acoustic Modeling (MAM), we introduce a novel selective and adaptive masking strategy. Specifically, during SSL training, we progressively introduce harder regions to the model for reconstruction. Our approach automatically selects hard regions and is built on the observation that the reconstruction loss of individual frames in MAM can provide natural signals to judge the difficulty of solving the MAM pre-text task for that frame. To identify these hard regions, we employ a teacher model that first predicts the frame-wise losses and then decides which frames to mask. By learning to create challenging problems, such as identifying harder frames and solving them simultaneously, the model is able to learn more effective representations and thereby acquire a more comprehensive understanding of the speech. Quantitatively, EH-MAM outperforms several state-of-the-art baselines across various low-resource speech recognition and SUPERB benchmarks by 5{\%}-10{\%}. Additionally, we conduct a thorough analysis to show that the regions masked by EH-MAM effectively capture useful context across speech frames.", }
In this paper, we present EH-MAM (Easy-to-Hard adaptive Masked Acoustic Modeling), a novel self-supervised learning approach for speech representation learning. In contrast to the prior methods that use random masking schemes for Masked Acoustic Modeling (MAM), we introduce a novel selective and adaptive masking strategy. Specifically, during SSL training, we progressively introduce harder regions to the model for reconstruction. Our approach automatically selects hard regions and is built on the observation that the reconstruction loss of individual frames in MAM can provide natural signals to judge the difficulty of solving the MAM pre-text task for that frame. To identify these hard regions, we employ a teacher model that first predicts the frame-wise losses and then decides which frames to mask. By learning to create challenging problems, such as identifying harder frames and solving them simultaneously, the model is able to learn more effective representations and thereby acquire a more comprehensive understanding of the speech. Quantitatively, EH-MAM outperforms several state-of-the-art baselines across various low-resource speech recognition and SUPERB benchmarks by 5{\%}-10{\%}. Additionally, we conduct a thorough analysis to show that the regions masked by EH-MAM effectively capture useful context across speech frames.
[ "Seth, Ashish", "Selvakumar, Ramaneswaran", "Sakshi, S", "Kumar, Sonal", "Ghosh, Sreyan", "Manocha, Dinesh" ]
EH-MAM: Easy-to-Hard Masked Acoustic Modeling for Self-Supervised Speech Representation Learning
emnlp-main.366
Oral
2410.13179
[ "https://github.com/cs20s030/ehmam" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
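The abstract above selects hard frames to mask using a teacher's predicted per-frame losses, with difficulty increasing over training. A small sketch of that easy-to-hard selection follows; the random "predicted losses" and the linear masking schedule are simplifying assumptions, not the paper's exact strategy.

```python
# Hedged sketch: mask the highest-predicted-loss frames, growing the masked
# fraction from min_ratio to max_ratio over the course of training.
import numpy as np

def select_mask(predicted_losses: np.ndarray, step: int, total_steps: int,
                min_ratio: float = 0.1, max_ratio: float = 0.5) -> np.ndarray:
    """Return a boolean mask over frames, masking the hardest frames."""
    ratio = min_ratio + (max_ratio - min_ratio) * (step / total_steps)
    k = max(1, int(ratio * len(predicted_losses)))
    hardest = np.argsort(predicted_losses)[-k:]
    mask = np.zeros(len(predicted_losses), dtype=bool)
    mask[hardest] = True
    return mask

rng = np.random.default_rng(0)
losses = rng.random(100)                       # stand-in teacher loss predictions
print(select_mask(losses, step=0, total_steps=1000).sum())     # ~10 frames masked
print(select_mask(losses, step=1000, total_steps=1000).sum())  # ~50 frames masked
```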
https://aclanthology.org/2024.emnlp-main.367.bib
https://aclanthology.org/2024.emnlp-main.367/
@inproceedings{zhao-etal-2024-epo, title = "{EPO}: Hierarchical {LLM} Agents with Environment Preference Optimization", author = "Zhao, Qi and Fu, Haotian and Sun, Chen and Konidaris, George", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.367", pages = "6401--6415", abstract = "Long-horizon decision-making tasks present significant challenges for LLM-based agents due to the need for extensive planning over multiple steps. In this paper, we propose a hierarchical framework that decomposes complex tasks into manageable subgoals, utilizing separate LLMs for subgoal prediction and low-level action generation. To address the challenge of creating training signals for unannotated datasets, we develop a reward model that leverages multimodal environment feedback to automatically generate reward signals. We introduce Environment Preference Optimization (EPO), a novel method that generates preference signals from the environment{'}s feedback and uses them to train LLM-based agents. Extensive experiments on ALFRED demonstrate the state-of-the-art performance of our framework, achieving first place on the ALFRED public leaderboard and showcasing its potential to improve long-horizon decision-making in diverse environments.", }
Long-horizon decision-making tasks present significant challenges for LLM-based agents due to the need for extensive planning over multiple steps. In this paper, we propose a hierarchical framework that decomposes complex tasks into manageable subgoals, utilizing separate LLMs for subgoal prediction and low-level action generation. To address the challenge of creating training signals for unannotated datasets, we develop a reward model that leverages multimodal environment feedback to automatically generate reward signals. We introduce Environment Preference Optimization (EPO), a novel method that generates preference signals from the environment{'}s feedback and uses them to train LLM-based agents. Extensive experiments on ALFRED demonstrate the state-of-the-art performance of our framework, achieving first place on the ALFRED public leaderboard and showcasing its potential to improve long-horizon decision-making in diverse environments.
[ "Zhao, Qi", "Fu, Haotian", "Sun, Chen", "Konidaris, George" ]
EPO: Hierarchical LLM Agents with Environment Preference Optimization
emnlp-main.367
Poster
2408.16090
[ "https://github.com/kevinz8866/epo" ]
https://huggingface.co/papers/2408.16090
0
0
0
4
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.368.bib
https://aclanthology.org/2024.emnlp-main.368/
@inproceedings{shaib-etal-2024-detection, title = "Detection and Measurement of Syntactic Templates in Generated Text", author = "Shaib, Chantal and Elazar, Yanai and Li, Junyi Jessy and Wallace, Byron C", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.368", pages = "6416--6431", abstract = "The diversity of text can be measured beyond word-level features, however existing diversity evaluation focuses primarily on word-level features. Here we propose a method for evaluating diversity over syntactic features to characterize general repetition in models, beyond frequent $n$-grams. Specifically, we define \textit{syntactic templates} (e.g., strings comprising parts-of-speech) and show that models tend to produce templated text in downstream tasks at a higher rate than what is found in human-reference texts. We find that most (76{\%}) templates in model-generated text can be found in pre-training data (compared to only 35{\%} of human-authored text), and are not overwritten during fine-tuning or alignment processes such as RLHF. The connection between templates in generated text and the pre-training data allows us to analyze syntactic templates in models where we do not have the pre-training data. We also find that templates as features are able to differentiate between models, tasks, and domains, and are useful for qualitatively evaluating common model constructions. Finally, we demonstrate the use of templates as a useful tool for analyzing style memorization of training data in LLMs.", }
The diversity of text can be measured beyond word-level features, however existing diversity evaluation focuses primarily on word-level features. Here we propose a method for evaluating diversity over syntactic features to characterize general repetition in models, beyond frequent $n$-grams. Specifically, we define \textit{syntactic templates} (e.g., strings comprising parts-of-speech) and show that models tend to produce templated text in downstream tasks at a higher rate than what is found in human-reference texts. We find that most (76{\%}) templates in model-generated text can be found in pre-training data (compared to only 35{\%} of human-authored text), and are not overwritten during fine-tuning or alignment processes such as RLHF. The connection between templates in generated text and the pre-training data allows us to analyze syntactic templates in models where we do not have the pre-training data. We also find that templates as features are able to differentiate between models, tasks, and domains, and are useful for qualitatively evaluating common model constructions. Finally, we demonstrate the use of templates as a useful tool for analyzing style memorization of training data in LLMs.
[ "Shaib, Chantal", "Elazar, Yanai", "Li, Junyi Jessy", "Wallace, Byron C" ]
Detection and Measurement of Syntactic Templates in Generated Text
emnlp-main.368
Poster
2407.00211
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.369.bib
https://aclanthology.org/2024.emnlp-main.369/
@inproceedings{pi-etal-2024-uouo, title = "{UOUO}: Uncontextualized Uncommon Objects for Measuring Knowledge Horizons of Vision Language Models", author = "Pi, Xinyu and Wu, Mingyuan and Jiang, Jize and Zheng, Haozhen and Tian, Beitong and Zhai, ChengXiang and Nahrstedt, Klara and Hu, Zhiting", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.369", pages = "6432--6441", abstract = "Smaller-scale Vision-Language Models (VLMs) often claim to perform on par with larger models in general-domain visual grounding and question-answering benchmarks while offering advantages in computational efficiency and storage. However, their ability to handle rare objects, which fall into the long tail of data distributions, is less understood. To rigorously evaluate this aspect, we introduce the {``}Uncontextualized Uncommon Objects{''} (UOUO) benchmark. This benchmark focuses on systematically testing VLMs with both large and small parameter counts on rare and specialized objects. Our comprehensive analysis reveals that while smaller VLMs maintain competitive performance on common datasets, they significantly underperform on tasks involving uncommon objects. We also propose an advanced, scalable pipeline for data collection and cleaning, ensuring the UOUO benchmark provides high-quality, challenging instances. These findings highlight the need to consider long-tail distributions when assessing the true capabilities of VLMs. Code and project details for UOUO can be found at https://zoezheng126.github.io/UOUO-Website/.", }
Smaller-scale Vision-Language Models (VLMs) often claim to perform on par with larger models in general-domain visual grounding and question-answering benchmarks while offering advantages in computational efficiency and storage. However, their ability to handle rare objects, which fall into the long tail of data distributions, is less understood. To rigorously evaluate this aspect, we introduce the {``}Uncontextualized Uncommon Objects{''} (UOUO) benchmark. This benchmark focuses on systematically testing VLMs with both large and small parameter counts on rare and specialized objects. Our comprehensive analysis reveals that while smaller VLMs maintain competitive performance on common datasets, they significantly underperform on tasks involving uncommon objects. We also propose an advanced, scalable pipeline for data collection and cleaning, ensuring the UOUO benchmark provides high-quality, challenging instances. These findings highlight the need to consider long-tail distributions when assessing the true capabilities of VLMs. Code and project details for UOUO can be found at https://zoezheng126.github.io/UOUO-Website/.
[ "Pi, Xinyu", "Wu, Mingyuan", "Jiang, Jize", "Zheng, Haozhen", "Tian, Beitong", "Zhai, ChengXiang", "Nahrstedt, Klara", "Hu, Zhiting" ]
UOUO: Uncontextualized Uncommon Objects for Measuring Knowledge Horizons of Vision Language Models
emnlp-main.369
Poster
2407.18391
[ "" ]
https://huggingface.co/papers/2407.18391
0
0
0
8
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.370.bib
https://aclanthology.org/2024.emnlp-main.370/
@inproceedings{wagner-etal-2024-optimized, title = "Optimized Speculative Sampling for {GPU} Hardware Accelerators", author = "Wagner, Dominik and Lee, Seanie and Baumann, Ilja and Seeberger, Philipp and Riedhammer, Korbinian and Bocklet, Tobias", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.370", pages = "6442--6458", abstract = "In this work, we optimize speculative sampling for parallel hardware accelerators to improve sampling speed. We notice that substantial portions of the intermediate matrices necessary for speculative sampling can be computed concurrently. This allows us to distribute the workload across multiple GPU threads, enabling simultaneous operations on matrix segments within thread blocks. This results in profiling time improvements ranging from 6{\%} to 13{\%} relative to the baseline implementation, without compromising accuracy. To further accelerate speculative sampling, probability distributions parameterized by softmax are approximated by sigmoid. This approximation approach results in significantly greater relative improvements in profiling time, ranging from 37{\%} to 94{\%}, with a minor decline in accuracy. We conduct extensive experiments on both automatic speech recognition and summarization tasks to validate the effectiveness of our optimization methods.", }
In this work, we optimize speculative sampling for parallel hardware accelerators to improve sampling speed. We notice that substantial portions of the intermediate matrices necessary for speculative sampling can be computed concurrently. This allows us to distribute the workload across multiple GPU threads, enabling simultaneous operations on matrix segments within thread blocks. This results in profiling time improvements ranging from 6{\%} to 13{\%} relative to the baseline implementation, without compromising accuracy. To further accelerate speculative sampling, probability distributions parameterized by softmax are approximated by sigmoid. This approximation approach results in significantly greater relative improvements in profiling time, ranging from 37{\%} to 94{\%}, with a minor decline in accuracy. We conduct extensive experiments on both automatic speech recognition and summarization tasks to validate the effectiveness of our optimization methods.
[ "Wagner, Dominik", "Lee, Seanie", "Baumann, Ilja", "Seeberger, Philipp", "Riedhammer, Korbinian", "Bocklet, Tobias" ]
Optimized Speculative Sampling for GPU Hardware Accelerators
emnlp-main.370
Poster
2406.11016
[ "https://github.com/dwgnr/optimized-speculative-sampling" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.371.bib
https://aclanthology.org/2024.emnlp-main.371/
@inproceedings{tan-etal-2024-personalized, title = "Personalized Pieces: Efficient Personalized Large Language Models through Collaborative Efforts", author = "Tan, Zhaoxuan and Liu, Zheyuan and Jiang, Meng", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.371", pages = "6459--6475", abstract = "Personalized large language models (LLMs) aim to tailor interactions, content, and recommendations to individual user preferences. While parameter-efficient fine-tuning (PEFT) methods excel in performance and generalization, they are costly and limit communal benefits when used individually. To this end, we introduce Personalized Pieces (Per-Pcs), a framework that allows users to safely share and assemble personalized PEFT efficiently with collaborative efforts. Per-Pcs involves selecting sharers, breaking their PEFT into pieces, and training gates for each piece. These pieces are added to a pool, from which target users can select and assemble personalized PEFT using their history data. This approach preserves privacy and enables fine-grained user modeling without excessive storage and computation demands. Experimental results show Per-Pcs outperforms non-personalized and PEFT retrieval baselines, offering performance comparable to OPPU with significantly lower resource use across six tasks. Further analysis highlights Per-Pcs{'}s robustness concerning sharer count and selection strategy, pieces sharing ratio, and scalability in computation time and storage space. Per-Pcs{'}s modularity promotes safe sharing, making LLM personalization more efficient, effective, and widely accessible through collaborative efforts.", }
Personalized large language models (LLMs) aim to tailor interactions, content, and recommendations to individual user preferences. While parameter-efficient fine-tuning (PEFT) methods excel in performance and generalization, they are costly and limit communal benefits when used individually. To this end, we introduce Personalized Pieces (Per-Pcs), a framework that allows users to safely share and assemble personalized PEFT efficiently with collaborative efforts. Per-Pcs involves selecting sharers, breaking their PEFT into pieces, and training gates for each piece. These pieces are added to a pool, from which target users can select and assemble personalized PEFT using their history data. This approach preserves privacy and enables fine-grained user modeling without excessive storage and computation demands. Experimental results show Per-Pcs outperforms non-personalized and PEFT retrieval baselines, offering performance comparable to OPPU with significantly lower resource use across six tasks. Further analysis highlights Per-Pcs{'}s robustness concerning sharer count and selection strategy, pieces sharing ratio, and scalability in computation time and storage space. Per-Pcs{'}s modularity promotes safe sharing, making LLM personalization more efficient, effective, and widely accessible through collaborative efforts.
[ "Tan, Zhaoxuan", "Liu, Zheyuan", "Jiang, Meng" ]
Personalized Pieces: Efficient Personalized Large Language Models through Collaborative Efforts
emnlp-main.371
Poster
2406.10471
[ "https://github.com/TamSiuhin/Per-Pcs" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.372.bib
https://aclanthology.org/2024.emnlp-main.372/
@inproceedings{tan-etal-2024-democratizing, title = "Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning", author = "Tan, Zhaoxuan and Zeng, Qingkai and Tian, Yijun and Liu, Zheyuan and Yin, Bing and Jiang, Meng", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.372", pages = "6476--6491", abstract = "Personalization in large language models (LLMs) is increasingly important, aiming to align the LLMs{'} interactions, content, and recommendations with individual user preferences. Recent advances have highlighted effective prompt design by enriching user queries with non-parametric knowledge through behavior history retrieval and textual profiles. However, these methods faced limitations due to a lack of model ownership, resulting in constrained customization and privacy issues, and often failed to capture complex, dynamic user behavior patterns. To address these shortcomings, we introduce One PEFT Per User (OPPU), employing personalized parameter-efficient fine-tuning (PEFT) modules to store user-specific behavior patterns and preferences. By plugging in personal PEFT parameters, users can own and use their LLMs individually. OPPU integrates parametric user knowledge in the personal PEFT parameters with non-parametric knowledge from retrieval and profiles, adapting LLMs to user behavior shifts. Experimental results demonstrate that OPPU significantly outperforms existing prompt-based methods across seven diverse tasks in the LaMP benchmark. Further studies reveal OPPU{'}s enhanced capabilities in handling user behavior shifts, modeling users at different activity levels, maintaining robustness across various user history formats, and displaying versatility with different PEFT methods.", }
Personalization in large language models (LLMs) is increasingly important, aiming to align the LLMs{'} interactions, content, and recommendations with individual user preferences. Recent advances have highlighted effective prompt design by enriching user queries with non-parametric knowledge through behavior history retrieval and textual profiles. However, these methods faced limitations due to a lack of model ownership, resulting in constrained customization and privacy issues, and often failed to capture complex, dynamic user behavior patterns. To address these shortcomings, we introduce One PEFT Per User (OPPU), employing personalized parameter-efficient fine-tuning (PEFT) modules to store user-specific behavior patterns and preferences. By plugging in personal PEFT parameters, users can own and use their LLMs individually. OPPU integrates parametric user knowledge in the personal PEFT parameters with non-parametric knowledge from retrieval and profiles, adapting LLMs to user behavior shifts. Experimental results demonstrate that OPPU significantly outperforms existing prompt-based methods across seven diverse tasks in the LaMP benchmark. Further studies reveal OPPU{'}s enhanced capabilities in handling user behavior shifts, modeling users at different activity levels, maintaining robustness across various user history formats, and displaying versatility with different PEFT methods.
[ "Tan, Zhaoxuan", "Zeng, Qingkai", "Tian, Yijun", "Liu, Zheyuan", "Yin, Bing", "Jiang, Meng" ]
Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning
emnlp-main.372
Poster
2402.04401
[ "https://github.com/tamsiuhin/oppu" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.373.bib
https://aclanthology.org/2024.emnlp-main.373/
@inproceedings{ma-etal-2024-unifying, title = "Unifying Multimodal Retrieval via Document Screenshot Embedding", author = "Ma, Xueguang and Lin, Sheng-Chieh and Li, Minghan and Chen, Wenhu and Lin, Jimmy", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.373", pages = "6492--6505", abstract = "In the real world, documents are organized in different formats and varied modalities. Traditional retrieval pipelines require tailored document parsing techniques and content extraction modules to prepare input for indexing. This process is tedious, prone to errors, and has information loss. To this end, we propose Document Screenshot Embedding (DSE), a novel retrieval paradigm that regards document screenshots as a unified input format, which does not require any content extraction preprocessing and preserves all the information in a document (e.g., text, image and layout). DSE leverages a large vision-language model to directly encode document screenshots into dense representations for retrieval. To evaluate our method, we first craft Wiki-SS, a corpus of 1.3M Wikipedia web page screenshots, to answer the questions from the Natural Questions dataset. In such a text-intensive document retrieval setting, DSE shows competitive effectiveness compared to other text retrieval methods relying on parsing. For example, DSE outperforms BM25 by 17 points in top-1 retrieval accuracy. Additionally, in a mixed-modality task of slide retrieval, DSE significantly outperforms OCR text retrieval methods by over 15 points in nDCG@10. These experiments show that DSE is an effective document retrieval paradigm for diverse types of documents. Model checkpoints, code, and Wiki-SS collection will be released.", }
In the real world, documents are organized in different formats and varied modalities. Traditional retrieval pipelines require tailored document parsing techniques and content extraction modules to prepare input for indexing. This process is tedious, prone to errors, and has information loss. To this end, we propose Document Screenshot Embedding (DSE), a novel retrieval paradigm that regards document screenshots as a unified input format, which does not require any content extraction preprocessing and preserves all the information in a document (e.g., text, image and layout). DSE leverages a large vision-language model to directly encode document screenshots into dense representations for retrieval. To evaluate our method, we first craft Wiki-SS, a corpus of 1.3M Wikipedia web page screenshots, to answer the questions from the Natural Questions dataset. In such a text-intensive document retrieval setting, DSE shows competitive effectiveness compared to other text retrieval methods relying on parsing. For example, DSE outperforms BM25 by 17 points in top-1 retrieval accuracy. Additionally, in a mixed-modality task of slide retrieval, DSE significantly outperforms OCR text retrieval methods by over 15 points in nDCG@10. These experiments show that DSE is an effective document retrieval paradigm for diverse types of documents. Model checkpoints, code, and Wiki-SS collection will be released.
[ "Ma, Xueguang", "Lin, Sheng-Chieh", "Li, Minghan", "Chen, Wenhu", "Lin, Jimmy" ]
Unifying Multimodal Retrieval via Document Screenshot Embedding
emnlp-main.373
Poster
2406.11251
[ "" ]
https://huggingface.co/papers/2406.11251
1
9
1
5
[ "MrLight/dse-qwen2-2b-mrl-v1", "MrLight/dse-phi35-vidore-ft", "Tevatron/dse-phi3-docmatix-v1", "Tevatron/dse-phi3-docmatix-v2" ]
[ "Tevatron/docmatix-ir" ]
[ "lightonai/vision_pipeline" ]
[ "MrLight/dse-qwen2-2b-mrl-v1", "MrLight/dse-phi35-vidore-ft", "Tevatron/dse-phi3-docmatix-v1", "Tevatron/dse-phi3-docmatix-v2" ]
[ "Tevatron/docmatix-ir" ]
[ "lightonai/vision_pipeline" ]
1
https://aclanthology.org/2024.emnlp-main.374.bib
https://aclanthology.org/2024.emnlp-main.374/
@inproceedings{tan-etal-2024-neuron, title = "Neuron Specialization: Leveraging Intrinsic Task Modularity for Multilingual Machine Translation", author = "Tan, Shaomu and Wu, Di and Monz, Christof", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.374", pages = "6506--6527", abstract = "Training a unified multilingual model promotes knowledge transfer but inevitably introduces negative interference. Language-specific modeling methods show promise in reducing interference. However, they often rely on heuristics to distribute capacity and struggle to foster cross-lingual transfer via isolated modules. In this paper, we explore intrinsic task modularity within multilingual networks and leverage these observations to circumvent interference under multilingual translation. We show that neurons in the feed-forward layers tend to be activated in a language-specific manner. Meanwhile, these specialized neurons exhibit structural overlaps that reflect language proximity, which progress across layers. Based on these findings, we propose Neuron Specialization, an approach that identifies specialized neurons to modularize feed-forward layers and then continuously updates them through sparse networks. Extensive experiments show that our approach achieves consistent performance gains over strong baselines with additional analyses demonstrating reduced interference and increased knowledge transfer.", }
Training a unified multilingual model promotes knowledge transfer but inevitably introduces negative interference. Language-specific modeling methods show promise in reducing interference. However, they often rely on heuristics to distribute capacity and struggle to foster cross-lingual transfer via isolated modules. In this paper, we explore intrinsic task modularity within multilingual networks and leverage these observations to circumvent interference under multilingual translation. We show that neurons in the feed-forward layers tend to be activated in a language-specific manner. Meanwhile, these specialized neurons exhibit structural overlaps that reflect language proximity, which progress across layers. Based on these findings, we propose Neuron Specialization, an approach that identifies specialized neurons to modularize feed-forward layers and then continuously updates them through sparse networks. Extensive experiments show that our approach achieves consistent performance gains over strong baselines with additional analyses demonstrating reduced interference and increased knowledge transfer.
[ "Tan, Shaomu", "Wu, Di", "Monz, Christof" ]
Neuron Specialization: Leveraging Intrinsic Task Modularity for Multilingual Machine Translation
emnlp-main.374
Oral
2404.11201
[ "" ]
https://huggingface.co/papers/2404.11201
0
0
0
3
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.375.bib
https://aclanthology.org/2024.emnlp-main.375/
@inproceedings{narayanan-venkit-etal-2024-audit, title = "An Audit on the Perspectives and Challenges of Hallucinations in {NLP}", author = "Narayanan Venkit, Pranav and Chakravorti, Tatiana and Gupta, Vipul and Biggs, Heidi and Srinath, Mukund and Goswami, Koustava and Rajtmajer, Sarah and Wilson, Shomir", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.375", pages = "6528--6548", abstract = "We audit how hallucination in large language models (LLMs) is characterized in peer-reviewed literature, using a critical examination of 103 publications across NLP research. Through the examination of the literature, we identify a lack of agreement with the term {`}hallucination{'} in the field of NLP. Additionally, to complement our audit, we conduct a survey with 171 practitioners from the field of NLP and AI to capture varying perspectives on hallucination. Our analysis calls for the necessity of explicit definitions and frameworks outlining hallucination within NLP, highlighting potential challenges, and our survey inputs provide a thematic understanding of the influence and ramifications of hallucination in society.", }
We audit how hallucination in large language models (LLMs) is characterized in peer-reviewed literature, using a critical examination of 103 publications across NLP research. Through the examination of the literature, we identify a lack of agreement with the term {`}hallucination{'} in the field of NLP. Additionally, to complement our audit, we conduct a survey with 171 practitioners from the field of NLP and AI to capture varying perspectives on hallucination. Our analysis calls for the necessity of explicit definitions and frameworks outlining hallucination within NLP, highlighting potential challenges, and our survey inputs provide a thematic understanding of the influence and ramifications of hallucination in society.
[ "Narayanan Venkit, Pranav", "Chakravorti, Tatiana", "Gupta, Vipul", "Biggs, Heidi", "Srinath, Mukund", "Goswami, Koustava", "Rajtmajer, Sarah", "Wilson, Shomir" ]
An Audit on the Perspectives and Challenges of Hallucinations in NLP
emnlp-main.375
Poster
2404.07461
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.376.bib
https://aclanthology.org/2024.emnlp-main.376/
@inproceedings{bayazit-etal-2024-discovering, title = "Discovering Knowledge-Critical Subnetworks in Pretrained Language Models", author = "Bayazit, Deniz and Foroutan, Negar and Chen, Zeming and Weiss, Gail and Bosselut, Antoine", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.376", pages = "6549--6583", abstract = "Pretrained language models (LMs) encode implicit representations of knowledge in their parameters. However, localizing these representations and disentangling them from each other remains an open problem. In this work, we investigate whether pretrained language models contain various *knowledge-critical* subnetworks: particular sparse computational subgraphs that can, if removed, precisely suppress specific knowledge the model has memorized. We propose a multi-objective differentiable masking scheme that can be applied to both weights and neurons to discover such subnetworks and show that we can use them to precisely remove specific knowledge from models while minimizing adverse effects on the behavior of the original model. We demonstrate our method on multiple GPT2 variants, uncovering highly sparse subnetworks (98{\%}+ sparsity) that are critical for expressing specific collections of relational knowledge. When these subnetworks are removed, the remaining network maintains most of its initial abilities but struggles to represent the suppressed knowledge.", }
Pretrained language models (LMs) encode implicit representations of knowledge in their parameters. However, localizing these representations and disentangling them from each other remains an open problem. In this work, we investigate whether pretrained language models contain various *knowledge-critical* subnetworks: particular sparse computational subgraphs that can, if removed, precisely suppress specific knowledge the model has memorized. We propose a multi-objective differentiable masking scheme that can be applied to both weights and neurons to discover such subnetworks and show that we can use them to precisely remove specific knowledge from models while minimizing adverse effects on the behavior of the original model. We demonstrate our method on multiple GPT2 variants, uncovering highly sparse subnetworks (98{\%}+ sparsity) that are critical for expressing specific collections of relational knowledge. When these subnetworks are removed, the remaining network maintains most of its initial abilities but struggles to represent the suppressed knowledge.
[ "Bayazit, Deniz", "Foroutan, Negar", "Chen, Zeming", "Weiss, Gail", "Bosselut, Antoine" ]
Discovering Knowledge-Critical Subnetworks in Pretrained Language Models
emnlp-main.376
Poster
2310.03084
[ "https://github.com/bayazitdeniz/know-subnet" ]
https://huggingface.co/papers/2310.03084
1
0
0
5
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.377.bib
https://aclanthology.org/2024.emnlp-main.377/
@inproceedings{chu-etal-2024-reconstruct, title = "Reconstruct Your Previous Conversations! Comprehensively Investigating Privacy Leakage Risks in Conversations with {GPT} Models", author = "Chu, Junjie and Sha, Zeyang and Backes, Michael and Zhang, Yang", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.377", pages = "6584--6600", abstract = "Significant advancements have recently been made in large language models, represented by GPT models. Users frequently have multi-round private conversations with cloud-hosted GPT models for task optimization. Yet, this operational paradigm introduces additional attack surfaces, particularly in custom GPTs and hijacked chat sessions. In this paper, we introduce a straightforward yet potent Conversation Reconstruction Attack. This attack targets the contents of previous conversations between GPT models and benign users, i.e., the benign users{'} input contents during their interaction with GPT models. The adversary could induce GPT models to leak such contents by querying them with designed malicious prompts. Our comprehensive examination of privacy risks during the interactions with GPT models under this attack reveals GPT-4{'}s considerable resilience. We present two advanced attacks targeting improved reconstruction of past conversations, demonstrating significant privacy leakage across all models under these advanced techniques. Evaluating various defense mechanisms, we find them ineffective against these attacks. Our findings highlight the ease with which privacy can be compromised in interactions with GPT models, urging the community to safeguard against potential abuses of these models{'} capabilities.", }
Significant advancements have recently been made in large language models, represented by GPT models. Users frequently have multi-round private conversations with cloud-hosted GPT models for task optimization. Yet, this operational paradigm introduces additional attack surfaces, particularly in custom GPTs and hijacked chat sessions. In this paper, we introduce a straightforward yet potent Conversation Reconstruction Attack. This attack targets the contents of previous conversations between GPT models and benign users, i.e., the benign users{'} input contents during their interaction with GPT models. The adversary could induce GPT models to leak such contents by querying them with designed malicious prompts. Our comprehensive examination of privacy risks during the interactions with GPT models under this attack reveals GPT-4{'}s considerable resilience. We present two advanced attacks targeting improved reconstruction of past conversations, demonstrating significant privacy leakage across all models under these advanced techniques. Evaluating various defense mechanisms, we find them ineffective against these attacks. Our findings highlight the ease with which privacy can be compromised in interactions with GPT models, urging the community to safeguard against potential abuses of these models{'} capabilities.
[ "Chu, Junjie", "Sha, Zeyang", "Backes, Michael", "Zhang, Yang" ]
Reconstruct Your Previous Conversations! Comprehensively Investigating Privacy Leakage Risks in Conversations with GPT Models
emnlp-main.377
Poster
2402.02987
[ "https://github.com/TrustAIRLab/Conversation_Reconstruction_Attack" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.378.bib
https://aclanthology.org/2024.emnlp-main.378/
@inproceedings{toroghi-etal-2024-right, title = "Right for Right Reasons: Large Language Models for Verifiable Commonsense Knowledge Graph Question Answering", author = "Toroghi, Armin and Guo, Willis and Abdollah Pour, Mohammad Mahdi and Sanner, Scott", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.378", pages = "6601--6633", abstract = "Knowledge Graph Question Answering (KGQA) methods seek to answer Natural Language questions using the relational information stored in Knowledge Graphs (KGs). With the recent advancements of Large Language Models (LLMs) and their remarkable reasoning abilities, there is a growing trend to leverage them for KGQA. However, existing methodologies have only focused on answering factual questions, e.g., *{``}In which city was Silvio Berlusconi{'}s first wife born?{''}*, leaving questions involving commonsense reasoning that real-world users may pose more often, e.g., *{``}Do I need separate visas to see the Venus of Willendorf and attend the Olympics this summer?{''}* unaddressed. In this work, we first observe that existing LLM-based methods for KGQA struggle with hallucination on such questions, especially on queries targeting long-tail entities (e.g., non-mainstream and recent entities), thus hindering their applicability in real-world applications especially since their reasoning processes are not easily verifiable. In response, we propose Right for Right Reasons ($R^3$), a commonsense KGQA methodology that allows for a verifiable reasoning procedure by axiomatically surfacing intrinsic commonsense knowledge of LLMs and grounding every factual reasoning step on KG triples. Through experimental evaluations across three different tasks{---}question answering, claim verification, and preference matching{---}our findings showcase $R^3$ as a superior approach, outperforming existing methodologies and notably reducing instances of hallucination and reasoning errors.", }
Knowledge Graph Question Answering (KGQA) methods seek to answer Natural Language questions using the relational information stored in Knowledge Graphs (KGs). With the recent advancements of Large Language Models (LLMs) and their remarkable reasoning abilities, there is a growing trend to leverage them for KGQA. However, existing methodologies have only focused on answering factual questions, e.g., *{``}In which city was Silvio Berlusconi{'}s first wife born?{''}*, leaving questions involving commonsense reasoning that real-world users may pose more often, e.g., *{``}Do I need separate visas to see the Venus of Willendorf and attend the Olympics this summer?{''}* unaddressed. In this work, we first observe that existing LLM-based methods for KGQA struggle with hallucination on such questions, especially on queries targeting long-tail entities (e.g., non-mainstream and recent entities), thus hindering their applicability in real-world applications especially since their reasoning processes are not easily verifiable. In response, we propose Right for Right Reasons ($R^3$), a commonsense KGQA methodology that allows for a verifiable reasoning procedure by axiomatically surfacing intrinsic commonsense knowledge of LLMs and grounding every factual reasoning step on KG triples. Through experimental evaluations across three different tasks{---}question answering, claim verification, and preference matching{---}our findings showcase $R^3$ as a superior approach, outperforming existing methodologies and notably reducing instances of hallucination and reasoning errors.
[ "Toroghi, Armin", "Guo, Willis", "Abdollah Pour, Mohammad Mahdi", "Sanner, Scott" ]
Right for Right Reasons: Large Language Models for Verifiable Commonsense Knowledge Graph Question Answering
emnlp-main.378
Poster
2403.01390
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.379.bib
https://aclanthology.org/2024.emnlp-main.379/
@inproceedings{toroghi-etal-2024-verifiable, title = "Verifiable, Debuggable, and Repairable Commonsense Logical Reasoning via {LLM}-based Theory Resolution", author = "Toroghi, Armin and Guo, Willis and Pesaranghader, Ali and Sanner, Scott", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.379", pages = "6634--6652", abstract = "Recent advances in Large Language Models (LLM) have led to substantial interest in their application to commonsense reasoning tasks. Despite their potential, LLMs are susceptible to reasoning errors and hallucinations that may be harmful in use cases where accurate reasoning is critical. This challenge underscores the need for verifiable, debuggable, and repairable LLM reasoning. Recent works have made progress toward verifiable reasoning with LLMs by using them as either (i) a reasoner over an axiomatic knowledge base, or (ii) a semantic parser for use in existing logical inference systems. However, both settings are unable to extract commonsense axioms from the LLM that are not already formalized in the knowledge base, and also lack a reliable method to repair missed commonsense inferences. In this work, we present LLM-TRes, a logical reasoning framework based on the notion of {``}theory resolution{''} that allows for seamless integration of the commonsense knowledge from LLMs with a verifiable logical reasoning framework that mitigates hallucinations and facilitates debugging of the reasoning procedure as well as repair. We crucially prove that repaired axioms are theoretically guaranteed to be given precedence over flawed ones in our theory resolution inference process. We conclude by evaluating on three diverse language-based reasoning tasks{---}preference reasoning, deductive reasoning, and causal commonsense reasoning{---}and demonstrate the superior performance of LLM-TRes vs. state-of-the-art LLM-based reasoning methods in terms of both accuracy and reasoning correctness.", }
Recent advances in Large Language Models (LLM) have led to substantial interest in their application to commonsense reasoning tasks. Despite their potential, LLMs are susceptible to reasoning errors and hallucinations that may be harmful in use cases where accurate reasoning is critical. This challenge underscores the need for verifiable, debuggable, and repairable LLM reasoning. Recent works have made progress toward verifiable reasoning with LLMs by using them as either (i) a reasoner over an axiomatic knowledge base, or (ii) a semantic parser for use in existing logical inference systems. However, both settings are unable to extract commonsense axioms from the LLM that are not already formalized in the knowledge base, and also lack a reliable method to repair missed commonsense inferences. In this work, we present LLM-TRes, a logical reasoning framework based on the notion of {``}theory resolution{''} that allows for seamless integration of the commonsense knowledge from LLMs with a verifiable logical reasoning framework that mitigates hallucinations and facilitates debugging of the reasoning procedure as well as repair. We crucially prove that repaired axioms are theoretically guaranteed to be given precedence over flawed ones in our theory resolution inference process. We conclude by evaluating on three diverse language-based reasoning tasks{---}preference reasoning, deductive reasoning, and causal commonsense reasoning{---}and demonstrate the superior performance of LLM-TRes vs. state-of-the-art LLM-based reasoning methods in terms of both accuracy and reasoning correctness.
[ "Toroghi, Armin", "Guo, Willis", "Pesaranghader, Ali", "Sanner, Scott" ]
Verifiable, Debuggable, and Repairable Commonsense Logical Reasoning via LLM-based Theory Resolution
emnlp-main.379
Oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.380.bib
https://aclanthology.org/2024.emnlp-main.380/
@inproceedings{marchisio-etal-2024-understanding, title = "Understanding and Mitigating Language Confusion in {LLM}s", author = "Marchisio, Kelly and Ko, Wei-Yin and Berard, Alexandre and Dehaze, Th{\'e}o and Ruder, Sebastian", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.380", pages = "6653--6677", abstract = "We investigate a surprising limitation of LLMs: their inability to consistently generate text in a user{'}s desired language. We create the Language Confusion Benchmark (LCB) to evaluate such failures, covering 15 typologically diverse languages with existing and newly-created English and multilingual prompts. We evaluate a range of LLMs on monolingual and cross-lingual generation reflecting practical use cases, finding that Llama Instruct and Mistral models exhibit high degrees of language confusion and even the strongest models fail to consistently respond in the correct language. We observe that base and English-centric instruct models are more prone to language confusion, which is aggravated by complex prompts and high sampling temperatures. We find that language confusion can be partially mitigated via few-shot prompting, multilingual SFT and preference tuning. We release our language confusion benchmark, which serves as a first layer of efficient, scalable multilingual evaluation.", }
We investigate a surprising limitation of LLMs: their inability to consistently generate text in a user{'}s desired language. We create the Language Confusion Benchmark (LCB) to evaluate such failures, covering 15 typologically diverse languages with existing and newly-created English and multilingual prompts. We evaluate a range of LLMs on monolingual and cross-lingual generation reflecting practical use cases, finding that Llama Instruct and Mistral models exhibit high degrees of language confusion and even the strongest models fail to consistently respond in the correct language. We observe that base and English-centric instruct models are more prone to language confusion, which is aggravated by complex prompts and high sampling temperatures. We find that language confusion can be partially mitigated via few-shot prompting, multilingual SFT and preference tuning. We release our language confusion benchmark, which serves as a first layer of efficient, scalable multilingual evaluation.
[ "Marchisio, Kelly", "Ko, Wei-Yin", "Berard, Alex", "re", "Dehaze, Th{\\'e}o", "Ruder, Sebastian" ]
Understanding and Mitigating Language Confusion in LLMs
emnlp-main.380
Poster
2406.20052
[ "https://github.com/for-ai/language-confusion" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.381.bib
https://aclanthology.org/2024.emnlp-main.381/
@inproceedings{gendron-etal-2024-large, title = "Can Large Language Models Learn Independent Causal Mechanisms?", author = "Gendron, Gael and Nguyen, Bao Trung and Peng, Alex Yuxuan and Witbrock, Michael and Dobbie, Gillian", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.381", pages = "6678--6701", abstract = "Despite impressive performance on language modelling and complex reasoning tasks, Large Language Models (LLMs) fall short on the same tasks in uncommon settings or with distribution shifts, exhibiting a lack of generalisation ability. By contrast, systems such as causal models, that learn abstract variables and causal relationships, can demonstrate increased robustness against changes in the distribution. One reason for this success is the existence and use of Independent Causal Mechanisms (ICMs) representing high-level concepts that only sparsely interact. In this work, we apply two concepts from causality to learn ICMs within LLMs. We develop a new LLM architecture composed of multiple sparsely interacting language modelling modules. We show that such causal constraints can improve out-of-distribution performance on abstract and causal reasoning tasks. We also investigate the level of independence and domain specialisation and show that LLMs rely on pre-trained partially domain-invariant mechanisms resilient to fine-tuning.", }
Despite impressive performance on language modelling and complex reasoning tasks, Large Language Models (LLMs) fall short on the same tasks in uncommon settings or with distribution shifts, exhibiting a lack of generalisation ability. By contrast, systems such as causal models, that learn abstract variables and causal relationships, can demonstrate increased robustness against changes in the distribution. One reason for this success is the existence and use of Independent Causal Mechanisms (ICMs) representing high-level concepts that only sparsely interact. In this work, we apply two concepts from causality to learn ICMs within LLMs. We develop a new LLM architecture composed of multiple sparsely interacting language modelling modules. We show that such causal constraints can improve out-of-distribution performance on abstract and causal reasoning tasks. We also investigate the level of independence and domain specialisation and show that LLMs rely on pre-trained partially domain-invariant mechanisms resilient to fine-tuning.
[ "Gendron, Gael", "Nguyen, Bao Trung", "Peng, Alex Yuxuan", "Witbrock, Michael", "Dobbie, Gillian" ]
Can Large Language Models Learn Independent Causal Mechanisms?
emnlp-main.381
Poster
2402.02636
[ "https://github.com/strong-ai-lab/modular-lm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.382.bib
https://aclanthology.org/2024.emnlp-main.382/
@inproceedings{yunusov-etal-2024-mirrorstories, title = "{M}irror{S}tories: Reflecting Diversity through Personalized Narrative Generation with Large Language Models", author = "Yunusov, Sarfaroz and Sidat, Hamza and Emami, Ali", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.382", pages = "6702--6717", abstract = "This study explores the effectiveness of Large Language Models (LLMs) in creating personalized {``}mirror stories{''} that reflect and resonate with individual readers{'} identities, addressing the significant lack of diversity in literature. We present MirrorStories, a corpus of 1,500 personalized short stories generated by integrating elements such as name, gender, age, ethnicity, reader interest, and story moral. We demonstrate that LLMs can effectively incorporate diverse identity elements into narratives, with human evaluators identifying personalized elements in the stories with high accuracy. Through a comprehensive evaluation involving 26 diverse human judges, we compare the effectiveness of MirrorStories against generic narratives. We find that personalized LLM-generated stories not only outscore generic human-written and LLM-generated ones across all metrics of engagement (with average ratings of 4.22 versus 3.37 on a 5-point scale), but also achieve higher textual diversity while preserving the intended moral. We also provide analyses that include bias assessments and a study on the potential for integrating images into personalized stories.", }
This study explores the effectiveness of Large Language Models (LLMs) in creating personalized {``}mirror stories{''} that reflect and resonate with individual readers{'} identities, addressing the significant lack of diversity in literature. We present MirrorStories, a corpus of 1,500 personalized short stories generated by integrating elements such as name, gender, age, ethnicity, reader interest, and story moral. We demonstrate that LLMs can effectively incorporate diverse identity elements into narratives, with human evaluators identifying personalized elements in the stories with high accuracy. Through a comprehensive evaluation involving 26 diverse human judges, we compare the effectiveness of MirrorStories against generic narratives. We find that personalized LLM-generated stories not only outscore generic human-written and LLM-generated ones across all metrics of engagement (with average ratings of 4.22 versus 3.37 on a 5-point scale), but also achieve higher textual diversity while preserving the intended moral. We also provide analyses that include bias assessments and a study on the potential for integrating images into personalized stories.
[ "Yunusov, Sarfaroz", "Sidat, Hamza", "Emami, Ali" ]
MirrorStories: Reflecting Diversity through Personalized Narrative Generation with Large Language Models
emnlp-main.382
Poster
2409.13935
[ "" ]
https://huggingface.co/papers/2409.13935
0
0
0
3
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.383.bib
https://aclanthology.org/2024.emnlp-main.383/
@inproceedings{liu-etal-2024-interintent, title = "{I}nter{I}ntent: Investigating Social Intelligence of {LLM}s via Intention Understanding in an Interactive Game Context", author = "Liu, Ziyi and Anand, Abhishek and Zhou, Pei and Huang, Jen-tse and Zhao, Jieyu", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.383", pages = "6718--6746", abstract = "Large language models (LLMs) have demonstrated the potential to mimic human social intelligence. However, most studies focus on simplistic and static self-report or performance-based tests, which limits the depth and validity of the analysis. In this paper, we developed a novel framework, InterIntent, to assess LLMs{'} social intelligence by mapping their ability to understand and manage intentions in a game setting. We focus on four dimensions of social intelligence: situational awareness, self-regulation, self-awareness, and theory of mind. Each dimension is linked to a specific game task: intention selection, intention following, intention summarization, and intention guessing. Our findings indicate that while LLMs exhibit high proficiency in selecting intentions, achieving an accuracy of 88{\%}, their ability to infer the intentions of others is significantly weaker, trailing human performance by 20{\%}. Additionally, game performance correlates with intention understanding, highlighting the importance of the four components towards success in this game. These findings underline the crucial role of intention understanding in evaluating LLMs{'} social intelligence and highlight the potential of using social deduction games as a complex testbed to enhance LLM evaluation. InterIntent contributes a structured approach to bridging the evaluation gap in social intelligence within multiplayer LLM-based games.", }
Large language models (LLMs) have demonstrated the potential to mimic human social intelligence. However, most studies focus on simplistic and static self-report or performance-based tests, which limits the depth and validity of the analysis. In this paper, we developed a novel framework, InterIntent, to assess LLMs{'} social intelligence by mapping their ability to understand and manage intentions in a game setting. We focus on four dimensions of social intelligence: situational awareness, self-regulation, self-awareness, and theory of mind. Each dimension is linked to a specific game task: intention selection, intention following, intention summarization, and intention guessing. Our findings indicate that while LLMs exhibit high proficiency in selecting intentions, achieving an accuracy of 88{\%}, their ability to infer the intentions of others is significantly weaker, trailing human performance by 20{\%}. Additionally, game performance correlates with intention understanding, highlighting the importance of the four components towards success in this game. These findings underline the crucial role of intention understanding in evaluating LLMs{'} social intelligence and highlight the potential of using social deduction games as a complex testbed to enhance LLM evaluation. InterIntent contributes a structured approach to bridging the evaluation gap in social intelligence within multiplayer LLM-based games.
[ "Liu, Ziyi", "An", ", Abhishek", "Zhou, Pei", "Huang, Jen-tse", "Zhao, Jieyu" ]
InterIntent: Investigating Social Intelligence of LLMs via Intention Understanding in an Interactive Game Context
emnlp-main.383
Poster
2406.12203
[ "https://github.com/uscnlp-lime/inter-intent" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.384.bib
https://aclanthology.org/2024.emnlp-main.384/
@inproceedings{samir-etal-2024-locating, title = "Locating Information Gaps and Narrative Inconsistencies Across Languages: A Case Study of {LGBT} People Portrayals on {W}ikipedia", author = "Samir, Farhan and Park, Chan Young and Field, Anjalie and Shwartz, Vered and Tsvetkov, Yulia", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.384", pages = "6747--6762", abstract = "To explain social phenomena and identify systematic biases, much research in computational social science focuses on comparative text analyses. These studies often rely on coarse corpus-level statistics or local word-level analyses, mainly in English. We introduce the InfoGap method{---}an efficient and reliable approach to locating information gaps and inconsistencies in articles at the fact level, across languages. We evaluate InfoGap by analyzing LGBT people{'}s portrayals, across 2.7K biography pages on English, Russian, and French Wikipedias. We find large discrepancies in factual coverage across the languages. Moreover, our analysis reveals that biographical facts carrying negative connotations are more likely to be highlighted in Russian Wikipedia. Crucially, InfoGap both facilitates large scale analyses, and pinpoints local document- and fact-level information gaps, laying a new foundation for targeted and nuanced comparative language analysis at scale.", }
To explain social phenomena and identify systematic biases, much research in computational social science focuses on comparative text analyses. These studies often rely on coarse corpus-level statistics or local word-level analyses, mainly in English. We introduce the InfoGap method{---}an efficient and reliable approach to locating information gaps and inconsistencies in articles at the fact level, across languages. We evaluate InfoGap by analyzing LGBT people{'}s portrayals, across 2.7K biography pages on English, Russian, and French Wikipedias. We find large discrepancies in factual coverage across the languages. Moreover, our analysis reveals that biographical facts carrying negative connotations are more likely to be highlighted in Russian Wikipedia. Crucially, InfoGap both facilitates large scale analyses, and pinpoints local document- and fact-level information gaps, laying a new foundation for targeted and nuanced comparative language analysis at scale.
[ "Samir, Farhan", "Park, Chan Young", "Field, Anjalie", "Shwartz, Vered", "Tsvetkov, Yulia" ]
Locating Information Gaps and Narrative Inconsistencies Across Languages: A Case Study of LGBT People Portrayals on Wikipedia
emnlp-main.384
Poster
2410.04282
[ "https://github.com/smfsamir/infogap" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.385.bib
https://aclanthology.org/2024.emnlp-main.385/
@inproceedings{bhatia-etal-2024-local, title = "From Local Concepts to Universals: Evaluating the Multicultural Understanding of Vision-Language Models", author = "Bhatia, Mehar and Ravi, Sahithya and Chinchure, Aditya and Hwang, EunJeong and Shwartz, Vered", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.385", pages = "6763--6782", abstract = "Despite recent advancements in vision-language models, their performance remains suboptimal on images from non-western cultures due to underrepresentation in training datasets. Various benchmarks have been proposed to test models{'} cultural inclusivity. Still, they have limited coverage of cultures and do not adequately assess cultural diversity across universal and culture-specific local concepts. To address these limitations, we introduce the GlobalRG benchmark, comprising two challenging tasks: retrieval across universals and cultural visual grounding. The former task entails retrieving culturally diverse images for universal concepts from 50 countries, while the latter aims at grounding culture-specific concepts within images from 15 countries. Our evaluation across a wide range of models reveals that the performance varies significantly across cultures {--} underscoring the necessity for enhancing multicultural understanding in vision-language models.", }
Despite recent advancements in vision-language models, their performance remains suboptimal on images from non-western cultures due to underrepresentation in training datasets. Various benchmarks have been proposed to test models{'} cultural inclusivity. Still, they have limited coverage of cultures and do not adequately assess cultural diversity across universal and culture-specific local concepts. To address these limitations, we introduce the GlobalRG benchmark, comprising two challenging tasks: retrieval across universals and cultural visual grounding. The former task entails retrieving culturally diverse images for universal concepts from 50 countries, while the latter aims at grounding culture-specific concepts within images from 15 countries. Our evaluation across a wide range of models reveals that the performance varies significantly across cultures {--} underscoring the necessity for enhancing multicultural understanding in vision-language models.
[ "Bhatia, Mehar", "Ravi, Sahithya", "Chinchure, Aditya", "Hwang, EunJeong", "Shwartz, Vered" ]
From Local Concepts to Universals: Evaluating the Multicultural Understanding of Vision-Language Models
emnlp-main.385
Poster
2407.00263
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.386.bib
https://aclanthology.org/2024.emnlp-main.386/
@inproceedings{de-langis-etal-2024-dynamic, title = "Dynamic Multi-Reward Weighting for Multi-Style Controllable Generation", author = "De Langis, Karin and Koo, Ryan and Kang, Dongyeop", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.386", pages = "6783--6800", abstract = "Textual style expresses a diverse set of information, including interpersonal dynamics (e.g., formality) and the author{'}s emotions or attitudes (e.g., disgust). An open question is how language models can be explicitly controlled so that they weave together target styles when generating text: for example, to produce text that is both negative and non-toxic. One approach to such controlled generation is multi-objective reinforcement learning (RL), but how to best combine multiple objectives in a reward function is an open question. In this paper, we investigate various formulations of multi-style reward formulations, including calibrated outputs from discriminators and dynamic weighting by discriminator gradient magnitudes. We find that our proposed dynamic weighting outperforms static weighting approaches with respect style control while maintaining linguistic quality, and we explore its effectiveness in 2- and 3-style control.", }
Textual style expresses a diverse set of information, including interpersonal dynamics (e.g., formality) and the author{'}s emotions or attitudes (e.g., disgust). An open question is how language models can be explicitly controlled so that they weave together target styles when generating text: for example, to produce text that is both negative and non-toxic. One approach to such controlled generation is multi-objective reinforcement learning (RL), but how to best combine multiple objectives in a reward function is an open question. In this paper, we investigate various multi-style reward formulations, including calibrated outputs from discriminators and dynamic weighting by discriminator gradient magnitudes. We find that our proposed dynamic weighting outperforms static weighting approaches with respect to style control while maintaining linguistic quality, and we explore its effectiveness in 2- and 3-style control.
[ "De Langis, Karin", "Koo, Ryan", "Kang, Dongyeop" ]
Dynamic Multi-Reward Weighting for Multi-Style Controllable Generation
emnlp-main.386
Poster
2402.14146
[ "https://github.com/minnesotanlp/dynamic-multi-reward-weighting" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.387.bib
https://aclanthology.org/2024.emnlp-main.387/
@inproceedings{huo-etal-2024-mmneuron, title = "{MMN}euron: Discovering Neuron-Level Domain-Specific Interpretation in Multimodal Large Language Model", author = "Huo, Jiahao and Yan, Yibo and Hu, Boren and Yue, Yutao and Hu, Xuming", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.387", pages = "6801--6816", abstract = "Projecting visual features into word embedding space has become a significant fusion strategy adopted by Multimodal Large Language Models (MLLMs). However, its internal mechanisms have yet to be explored. Inspired by multilingual research, we identify domain-specific neurons in multimodal large language models. Specifically, we investigate the distribution of domain-specific neurons and the mechanism of how MLLMs process features from diverse domains. Furthermore, we propose a three-stage framework for language model modules in MLLMs when handling projected image features, and verify this hypothesis using logit lens. Extensive experiments indicate that while current MLLMs exhibit Visual Question Answering (VQA) capability, they may not fully utilize domain-specific information. Manipulating domain-specific neurons properly will result in a 10{\%} change of accuracy at most, shedding light on the development of cross-domain, all-encompassing MLLMs in the future. The source code is available at https://anonymous.4open.science/r/MMNeuron.", }
Projecting visual features into word embedding space has become a significant fusion strategy adopted by Multimodal Large Language Models (MLLMs). However, its internal mechanisms have yet to be explored. Inspired by multilingual research, we identify domain-specific neurons in multimodal large language models. Specifically, we investigate the distribution of domain-specific neurons and the mechanism of how MLLMs process features from diverse domains. Furthermore, we propose a three-stage framework for language model modules in MLLMs when handling projected image features, and verify this hypothesis using logit lens. Extensive experiments indicate that while current MLLMs exhibit Visual Question Answering (VQA) capability, they may not fully utilize domain-specific information. Manipulating domain-specific neurons properly will result in a 10{\%} change of accuracy at most, shedding light on the development of cross-domain, all-encompassing MLLMs in the future. The source code is available at https://anonymous.4open.science/r/MMNeuron.
[ "Huo, Jiahao", "Yan, Yibo", "Hu, Boren", "Yue, Yutao", "Hu, Xuming" ]
MMNeuron: Discovering Neuron-Level Domain-Specific Interpretation in Multimodal Large Language Model
emnlp-main.387
Poster
2406.11193
[ "https://github.com/z1zs/mmneuron" ]
https://huggingface.co/papers/2406.11193
0
0
0
5
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.388.bib
https://aclanthology.org/2024.emnlp-main.388/
@inproceedings{wu-etal-2024-learning, title = "Learning to Extract Structured Entities Using Language Models", author = "Wu, Haolun and Yuan, Ye and Mikaelyan, Liana and Meulemans, Alexander and Liu, Xue and Hensman, James and Mitra, Bhaskar", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.388", pages = "6817--6834", abstract = "Recent advances in machine learning have significantly impacted the field of information extraction, with Language Models (LMs) playing a pivotal role in extracting structured information from unstructured text. Prior works typically represent information extraction as triplet-centric and use classical metrics such as precision and recall for evaluation. We reformulate the task to be entity-centric, enabling the use of diverse metrics that can provide more insights from various perspectives. We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP (AESOP) metric, designed to appropriately assess model performance. Later, we introduce a new Multistage Structured Entity Extraction (MuSEE) model that harnesses the power of LMs for enhanced effectiveness and efficiency by decomposing the extraction task into multiple stages. Quantitative and human side-by-side evaluations confirm that our model outperforms baselines, offering promising directions for future advancements in structured entity extraction. Our source code is available at https://github.com/microsoft/Structured-Entity-Extraction.", }
Recent advances in machine learning have significantly impacted the field of information extraction, with Language Models (LMs) playing a pivotal role in extracting structured information from unstructured text. Prior works typically represent information extraction as triplet-centric and use classical metrics such as precision and recall for evaluation. We reformulate the task to be entity-centric, enabling the use of diverse metrics that can provide more insights from various perspectives. We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP (AESOP) metric, designed to appropriately assess model performance. Later, we introduce a new Multistage Structured Entity Extraction (MuSEE) model that harnesses the power of LMs for enhanced effectiveness and efficiency by decomposing the extraction task into multiple stages. Quantitative and human side-by-side evaluations confirm that our model outperforms baselines, offering promising directions for future advancements in structured entity extraction. Our source code is available at https://github.com/microsoft/Structured-Entity-Extraction.
[ "Wu, Haolun", "Yuan, Ye", "Mikaelyan, Liana", "Meulemans, Alex", "er", "Liu, Xue", "Hensman, James", "Mitra, Bhaskar" ]
Learning to Extract Structured Entities Using Language Models
emnlp-main.388
Oral
2402.04437
[ "https://github.com/microsoft/structured-entity-extraction" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.389.bib
https://aclanthology.org/2024.emnlp-main.389/
@inproceedings{liusie-etal-2024-efficient, title = "Efficient {LLM} Comparative Assessment: A Product of Experts Framework for Pairwise Comparisons", author = "Liusie, Adian and Raina, Vatsal and Fathullah, Yassir and Gales, Mark", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.389", pages = "6835--6855", abstract = "LLM-as-a-judge approaches are a practical and effective way of assessing a range of text tasks. However, when using pairwise comparisons to rank a set of candidates, the computational cost scales quadratically with the number of candidates, which has practical limitations. This paper introduces a Product of Expert (PoE) framework for efficient LLM Comparative Assessment. Here individual comparisons are considered experts that provide information on a pair{'}s score difference. The PoE framework combines the information from these experts to yield an expression that can be maximized with respect to the underlying set of candidates, and is highly flexible where any form of expert can be assumed. When Gaussian experts are used one can derive simple closed-form solutions for the optimal candidate ranking, as well as expressions for selecting which comparisons should be made to maximize the probability of this ranking. Our approach enables efficient comparative assessment, where by using only a small subset of the possible comparisons, one can generate score predictions that correlate well with human judgements. We evaluate the approach on multiple NLG tasks and demonstrate that our framework can yield considerable computational savings when performing pairwise comparative assessment. With many candidate texts, using as few as 2{\%} of comparisons the PoE solution can achieve similar performance to when all comparisons are used.", }
LLM-as-a-judge approaches are a practical and effective way of assessing a range of text tasks. However, when using pairwise comparisons to rank a set of candidates, the computational cost scales quadratically with the number of candidates, which has practical limitations. This paper introduces a Product of Expert (PoE) framework for efficient LLM Comparative Assessment. Here individual comparisons are considered experts that provide information on a pair{'}s score difference. The PoE framework combines the information from these experts to yield an expression that can be maximized with respect to the underlying set of candidates, and is highly flexible where any form of expert can be assumed. When Gaussian experts are used one can derive simple closed-form solutions for the optimal candidate ranking, as well as expressions for selecting which comparisons should be made to maximize the probability of this ranking. Our approach enables efficient comparative assessment, where by using only a small subset of the possible comparisons, one can generate score predictions that correlate well with human judgements. We evaluate the approach on multiple NLG tasks and demonstrate that our framework can yield considerable computational savings when performing pairwise comparative assessment. With many candidate texts, using as few as 2{\%} of comparisons the PoE solution can achieve similar performance to when all comparisons are used.
[ "Liusie, Adian", "Raina, Vatsal", "Fathullah, Yassir", "Gales, Mark" ]
Efficient LLM Comparative Assessment: A Product of Experts Framework for Pairwise Comparisons
emnlp-main.389
Poster
2405.05894
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.390.bib
https://aclanthology.org/2024.emnlp-main.390/
@inproceedings{wein-opitz-2024-survey, title = "A Survey of {AMR} Applications", author = "Wein, Shira and Opitz, Juri", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.390", pages = "6856--6875", abstract = "In the ten years since the development of the Abstract Meaning Representation (AMR) formalism, substantial progress has been made on AMR-related tasks such as parsing and alignment. Still, the engineering applications of AMR are not fully understood. In this survey, we categorize and characterize more than 100 papers which use AMR for downstream tasks{---} the first survey of this kind for AMR. Specifically, we highlight (1) the range of applications for which AMR has been harnessed, and (2) the techniques for incorporating AMR into those applications. We also detect broader AMR engineering patterns and outline areas of future work that seem ripe for AMR incorporation. We hope that this survey will be useful to those interested in using AMR and that it sparks discussion on the role of symbolic representations in the age of neural-focused NLP research.", }
In the ten years since the development of the Abstract Meaning Representation (AMR) formalism, substantial progress has been made on AMR-related tasks such as parsing and alignment. Still, the engineering applications of AMR are not fully understood. In this survey, we categorize and characterize more than 100 papers which use AMR for downstream tasks{---} the first survey of this kind for AMR. Specifically, we highlight (1) the range of applications for which AMR has been harnessed, and (2) the techniques for incorporating AMR into those applications. We also detect broader AMR engineering patterns and outline areas of future work that seem ripe for AMR incorporation. We hope that this survey will be useful to those interested in using AMR and that it sparks discussion on the role of symbolic representations in the age of neural-focused NLP research.
[ "Wein, Shira", "Opitz, Juri" ]
A Survey of AMR Applications
emnlp-main.390
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.391.bib
https://aclanthology.org/2024.emnlp-main.391/
@inproceedings{zhong-etal-2024-beyond, title = "Beyond Embeddings: The Promise of Visual Table in Visual Reasoning", author = "Zhong, Yiwu and Hu, Zi-Yuan and Lyu, Michael and Wang, Liwei", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.391", pages = "6876--6911", abstract = "Visual representation learning has been a cornerstone in computer vision, involving typical forms such as visual embeddings, structural symbols, and text-based representations. Despite the success of CLIP-type visual embeddings, they often lack access to world knowledge critical for visual reasoning. In this work, we propose Visual Table, a novel form of visual representation tailored for visual reasoning. Visual tables are constructed as hierarchical descriptions of visual scenes, featuring a scene description and multiple object-centric descriptions covering categories, attributes, and knowledge. Thanks to the structural and textual formats, visual tables offer unique properties over mere visual embeddings, such as explainability and controllable editing. Furthermore, they deliver instance-level world knowledge and detailed attributes that are essential for visual reasoning. To create visual tables, we develop a generator trained on the dataset with collected, small-scale annotations. Extensive results on 11 visual reasoning benchmarks demonstrate that the generated visual tables significantly outperform previous structural and text-based representations. Moreover, they consistently enhance state-of-the-art multi-modal large language models across diverse benchmarks, showcasing their potential for advancing visual reasoning tasks. Our code is available at https://github.com/LaVi-Lab/Visual-Table.", }
Visual representation learning has been a cornerstone in computer vision, involving typical forms such as visual embeddings, structural symbols, and text-based representations. Despite the success of CLIP-type visual embeddings, they often lack access to world knowledge critical for visual reasoning. In this work, we propose Visual Table, a novel form of visual representation tailored for visual reasoning. Visual tables are constructed as hierarchical descriptions of visual scenes, featuring a scene description and multiple object-centric descriptions covering categories, attributes, and knowledge. Thanks to the structural and textual formats, visual tables offer unique properties over mere visual embeddings, such as explainability and controllable editing. Furthermore, they deliver instance-level world knowledge and detailed attributes that are essential for visual reasoning. To create visual tables, we develop a generator trained on the dataset with collected, small-scale annotations. Extensive results on 11 visual reasoning benchmarks demonstrate that the generated visual tables significantly outperform previous structural and text-based representations. Moreover, they consistently enhance state-of-the-art multi-modal large language models across diverse benchmarks, showcasing their potential for advancing visual reasoning tasks. Our code is available at https://github.com/LaVi-Lab/Visual-Table.
[ "Zhong, Yiwu", "Hu, Zi-Yuan", "Lyu, Michael", "Wang, Liwei" ]
Beyond Embeddings: The Promise of Visual Table in Visual Reasoning
emnlp-main.391
Poster
2403.18252
[ "https://github.com/lavi-lab/visual-table" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.392.bib
https://aclanthology.org/2024.emnlp-main.392/
@inproceedings{farzana-etal-2024-carecorpus, title = "{C}are{C}orpus+: Expanding and Augmenting Caregiver Strategy Data to Support Pediatric Rehabilitation", author = "Farzana, Shahla and Lucero, Ivana and Villegas, Vivian and Kaelin, Vera C and Khetani, Mary and Parde, Natalie", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.392", pages = "6912--6927", abstract = "Caregiver strategy classification in pediatric rehabilitation contexts is strongly motivated by real-world clinical constraints but highly under-resourced and seldom studied in natural language processing settings. We introduce a large dataset of 4,037 caregiver strategies in this setting, a five-fold increase over the nearest contemporary dataset. These strategies are manually categorized into clinically established constructs with high agreement ($\kappa$=0.68-0.89). We also propose two techniques to further address identified data constraints. First, we manually supplement target task data with publicly relevant data from online child health forums. Next, we propose a novel data augmentation technique to generate synthetic caregiver strategies with high downstream task utility. Extensive experiments showcase the quality of our dataset. They also establish evidence that both the publicly available data and the synthetic strategies result in large performance gains, with relative F$_1$ increases of 22.6{\%} and 50.9{\%}, respectively.", }
Caregiver strategy classification in pediatric rehabilitation contexts is strongly motivated by real-world clinical constraints but highly under-resourced and seldom studied in natural language processing settings. We introduce a large dataset of 4,037 caregiver strategies in this setting, a five-fold increase over the nearest contemporary dataset. These strategies are manually categorized into clinically established constructs with high agreement ($\kappa$=0.68-0.89). We also propose two techniques to further address identified data constraints. First, we manually supplement target task data with publicly relevant data from online child health forums. Next, we propose a novel data augmentation technique to generate synthetic caregiver strategies with high downstream task utility. Extensive experiments showcase the quality of our dataset. They also establish evidence that both the publicly available data and the synthetic strategies result in large performance gains, with relative F$_1$ increases of 22.6{\%} and 50.9{\%}, respectively.
[ "Farzana, Shahla", "Lucero, Ivana", "Villegas, Vivian", "Kaelin, Vera C", "Khetani, Mary", "Parde, Natalie" ]
CareCorpus+: Expanding and Augmenting Caregiver Strategy Data to Support Pediatric Rehabilitation
emnlp-main.392
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.393.bib
https://aclanthology.org/2024.emnlp-main.393/
@inproceedings{wang-etal-2024-taylor, title = "{T}aylor Unswift: Secured Weight Release for Large Language Models via {T}aylor Expansion", author = "Wang, Guanchu and Chuang, Yu-Neng and Tang, Ruixiang and Zhong, Shaochen and Yuan, Jiayi and Jin, Hongye and Liu, Zirui and Chaudhary, Vipin and Xu, Shuai and Caverlee, James and Hu, Xia", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.393", pages = "6928--6941", abstract = "Ensuring the security of released large language models (LLMs) poses a significant dilemma, as existing mechanisms either compromise ownership rights or raise data privacy concerns. To address this dilemma, we introduce TaylorMLP to protect the ownership of released LLMs and prevent their abuse. Specifically, TaylorMLP preserves the ownership of LLMs by transforming the weights of LLMs into parameters of Taylor-series. Instead of releasing the original weights, developers can release the Taylor-series parameters with users, thereby ensuring the security of LLMs. Moreover, TaylorMLP can prevent abuse of LLMs by adjusting the generation speed. It can induce low-speed token generation for the protected LLMs by increasing the terms in the Taylor-series. This intentional delay helps LLM developers prevent potential large-scale unauthorized uses of their models. Empirical experiments across five datasets and three LLM architectures demonstrate that TaylorMLP induces over increase in latency, producing the tokens precisely matched with original LLMs. Subsequent defensive experiments further confirm that TaylorMLP effectively prevents users from reconstructing the weight values based on downstream datasets.", }
Ensuring the security of released large language models (LLMs) poses a significant dilemma, as existing mechanisms either compromise ownership rights or raise data privacy concerns. To address this dilemma, we introduce TaylorMLP to protect the ownership of released LLMs and prevent their abuse. Specifically, TaylorMLP preserves the ownership of LLMs by transforming the weights of LLMs into parameters of Taylor-series. Instead of releasing the original weights, developers can release the Taylor-series parameters with users, thereby ensuring the security of LLMs. Moreover, TaylorMLP can prevent abuse of LLMs by adjusting the generation speed. It can induce low-speed token generation for the protected LLMs by increasing the terms in the Taylor-series. This intentional delay helps LLM developers prevent potential large-scale unauthorized uses of their models. Empirical experiments across five datasets and three LLM architectures demonstrate that TaylorMLP induces over increase in latency, producing the tokens precisely matched with original LLMs. Subsequent defensive experiments further confirm that TaylorMLP effectively prevents users from reconstructing the weight values based on downstream datasets.
[ "Wang, Guanchu", "Chuang, Yu-Neng", "Tang, Ruixiang", "Zhong, Shaochen", "Yuan, Jiayi", "Jin, Hongye", "Liu, Zirui", "Chaudhary, Vipin", "Xu, Shuai", "Caverlee, James", "Hu, Xia" ]
Taylor Unswift: Secured Weight Release for Large Language Models via Taylor Expansion
emnlp-main.393
Poster
2410.05331
[ "https://github.com/guanchuwang/Taylor-Unswift" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.394.bib
https://aclanthology.org/2024.emnlp-main.394/
@inproceedings{qian-etal-2024-timer4, title = "{T}ime{R}$^4$ : Time-aware Retrieval-Augmented Large Language Models for Temporal Knowledge Graph Question Answering", author = "Qian, Xinying and Zhang, Ying and Zhao, Yu and Zhou, Baohang and Sui, Xuhui and Zhang, Li and Song, Kehui", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.394", pages = "6942--6952", abstract = "Temporal Knowledge Graph Question Answering (TKGQA) aims to answer temporal questions using knowledge in Temporal Knowledge Graphs (TKGs). Previous works employ pre-trained TKG embeddings or graph neural networks to incorporate the knowledge of TKGs. However, these methods fail to fully understand the complex semantic information of time constraints in questions.In contrast, Large Language Models (LLMs) have shown exceptional performance in knowledge graph reasoning, unifying both semantic understanding and structural reasoning. To further enhance LLMs{'} temporal reasoning ability, this paper aims to integrate relevant temporal knowledge from TKGs into LLMs through a Time-aware Retrieve-Rewrite-Retrieve-Rerank framework, which we named TimeR$^4$.Specifically, to reduce temporal hallucination in LLMs, we propose a retrieve-rewrite module to rewrite questions using background knowledge stored in the TKGs, thereby acquiring explicit time constraints. Then, we implement a retrieve-rerank module aimed at retrieving semantically and temporally relevant facts from the TKGs and reranking them according to the temporal constraints.To achieve this, we fine-tune a retriever using the contrastive time-aware learning framework.Our approach achieves great improvements, with relative gains of 47.8{\%} and 22.5{\%} on two datasets, underscoring its effectiveness in boosting the temporal reasoning abilities of LLMs. Our code is available at https://github.com/qianxinying/TimeR4.", }
Temporal Knowledge Graph Question Answering (TKGQA) aims to answer temporal questions using knowledge in Temporal Knowledge Graphs (TKGs). Previous works employ pre-trained TKG embeddings or graph neural networks to incorporate the knowledge of TKGs. However, these methods fail to fully understand the complex semantic information of time constraints in questions. In contrast, Large Language Models (LLMs) have shown exceptional performance in knowledge graph reasoning, unifying both semantic understanding and structural reasoning. To further enhance LLMs{'} temporal reasoning ability, this paper aims to integrate relevant temporal knowledge from TKGs into LLMs through a Time-aware Retrieve-Rewrite-Retrieve-Rerank framework, which we named TimeR$^4$. Specifically, to reduce temporal hallucination in LLMs, we propose a retrieve-rewrite module to rewrite questions using background knowledge stored in the TKGs, thereby acquiring explicit time constraints. Then, we implement a retrieve-rerank module aimed at retrieving semantically and temporally relevant facts from the TKGs and reranking them according to the temporal constraints. To achieve this, we fine-tune a retriever using the contrastive time-aware learning framework. Our approach achieves great improvements, with relative gains of 47.8{\%} and 22.5{\%} on two datasets, underscoring its effectiveness in boosting the temporal reasoning abilities of LLMs. Our code is available at https://github.com/qianxinying/TimeR4.
[ "Qian, Xinying", "Zhang, Ying", "Zhao, Yu", "Zhou, Baohang", "Sui, Xuhui", "Zhang, Li", "Song, Kehui" ]
TimeR^4 : Time-aware Retrieval-Augmented Large Language Models for Temporal Knowledge Graph Question Answering
emnlp-main.394
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.395.bib
https://aclanthology.org/2024.emnlp-main.395/
@inproceedings{hu-etal-2024-knowledge, title = "Knowledge-Centric Hallucination Detection", author = "Hu, Xiangkun and Ru, Dongyu and Qiu, Lin and Guo, Qipeng and Zhang, Tianhang and Xu, Yang and Luo, Yun and Liu, Pengfei and Zhang, Yue and Zhang, Zheng", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.395", pages = "6953--6975", abstract = "Large Language Models (LLMs) have shown impressive capabilities but also a concerning tendency to hallucinate. This paper presents RefChecker, a framework that introduces claim-triplets to represent claims in LLM responses, aiming to detect fine-grained hallucinations. In RefChecker, an extractor generates claim-triplets from a response, which are then evaluated by a checker against a reference. We delineate three task settings: Zero, Noisy and Accurate Context, to reflect various real-world use cases. We curated a benchmark spanning various NLP tasks and annotated 11k claim-triplets from 2.1k responses by seven LLMs. RefChecker supports both proprietary and open-source models as the extractor and checker. Experiments demonstrate that claim-triplets enable superior hallucination detection, compared to other granularities such as response, sentence and sub-sentence level claims. RefChecker outperforms prior methods by 18.2 to 27.2 points on our benchmark and the checking results of RefChecker are strongly aligned with human judgments.", }
Large Language Models (LLMs) have shown impressive capabilities but also a concerning tendency to hallucinate. This paper presents RefChecker, a framework that introduces claim-triplets to represent claims in LLM responses, aiming to detect fine-grained hallucinations. In RefChecker, an extractor generates claim-triplets from a response, which are then evaluated by a checker against a reference. We delineate three task settings: Zero, Noisy and Accurate Context, to reflect various real-world use cases. We curated a benchmark spanning various NLP tasks and annotated 11k claim-triplets from 2.1k responses by seven LLMs. RefChecker supports both proprietary and open-source models as the extractor and checker. Experiments demonstrate that claim-triplets enable superior hallucination detection, compared to other granularities such as response, sentence and sub-sentence level claims. RefChecker outperforms prior methods by 18.2 to 27.2 points on our benchmark and the checking results of RefChecker are strongly aligned with human judgments.
[ "Hu, Xiangkun", "Ru, Dongyu", "Qiu, Lin", "Guo, Qipeng", "Zhang, Tianhang", "Xu, Yang", "Luo, Yun", "Liu, Pengfei", "Zhang, Yue", "Zhang, Zheng" ]
Knowledge-Centric Hallucination Detection
emnlp-main.395
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.396.bib
https://aclanthology.org/2024.emnlp-main.396/
@inproceedings{mu-etal-2024-revealing, title = "Revealing the Parallel Multilingual Learning within Large Language Models", author = "Mu, Yongyu and Feng, Peinan and Cao, Zhiquan and Wu, Yuzhang and Li, Bei and Wang, Chenglong and Xiao, Tong and Song, Kai and Liu, Tongran and Zhang, Chunliang and Zhu, JingBo", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.396", pages = "6976--6997", abstract = "Large language models (LLMs) can handle multilingual and cross-lingual text within a single input; however, previous works leveraging multilingualism in LLMs primarily focus on using English as the pivot language to enhance language understanding and reasoning. Given that multiple languages are a compensation for the losses caused by a single language{'}s limitations, it{'}s a natural next step to enrich the model{'}s learning context through the integration of the original input with its multiple translations. In this paper, we start by revealing that LLMs learn from parallel multilingual input (PMI). Our comprehensive evaluation shows that PMI enhances the model{'}s comprehension of the input, achieving superior performance than conventional in-context learning (ICL). Furthermore, to explore how multilingual processing affects prediction, we examine the activated neurons in LLMs. Surprisingly, involving more languages in the input activates fewer neurons, leading to more focused and effective neural activation patterns. Also, this neural reaction coincidently mirrors the neuroscience insight about synaptic pruning, highlighting a similarity between artificial and biological {`}brains{'}.", }
Large language models (LLMs) can handle multilingual and cross-lingual text within a single input; however, previous works leveraging multilingualism in LLMs primarily focus on using English as the pivot language to enhance language understanding and reasoning. Given that multiple languages are a compensation for the losses caused by a single language{'}s limitations, it{'}s a natural next step to enrich the model{'}s learning context through the integration of the original input with its multiple translations. In this paper, we start by revealing that LLMs learn from parallel multilingual input (PMI). Our comprehensive evaluation shows that PMI enhances the model{'}s comprehension of the input, achieving superior performance than conventional in-context learning (ICL). Furthermore, to explore how multilingual processing affects prediction, we examine the activated neurons in LLMs. Surprisingly, involving more languages in the input activates fewer neurons, leading to more focused and effective neural activation patterns. Also, this neural reaction coincidently mirrors the neuroscience insight about synaptic pruning, highlighting a similarity between artificial and biological {`}brains{'}.
[ "Mu, Yongyu", "Feng, Peinan", "Cao, Zhiquan", "Wu, Yuzhang", "Li, Bei", "Wang, Chenglong", "Xiao, Tong", "Song, Kai", "Liu, Tongran", "Zhang, Chunliang", "Zhu, JingBo" ]
Revealing the Parallel Multilingual Learning within Large Language Models
emnlp-main.396
Poster
2403.09073
[ "https://github.com/takagi97/llms-are-parallel-multilingual-learners" ]
https://huggingface.co/papers/2403.09073
0
0
0
11
[ "sunatte/txt2sql", "MachoMaheen/devdock4bit" ]
[]
[ "Justinrune/LLaMA-Factory", "smarttang/blingsec" ]
[ "sunatte/txt2sql", "MachoMaheen/devdock4bit" ]
[]
[ "Justinrune/LLaMA-Factory", "smarttang/blingsec" ]
1
https://aclanthology.org/2024.emnlp-main.397.bib
https://aclanthology.org/2024.emnlp-main.397/
@inproceedings{zeng-etal-2024-automatic, title = "Automatic Instruction Evolving for Large Language Models", author = "Zeng, Weihao and Xu, Can and Zhao, Yingxiu and Lou, Jian-Guang and Chen, Weizhu", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.397", pages = "6998--7018", abstract = "Fine-tuning large pre-trained language models with Evol-Instruct has achieved encouraging results across a wide range of tasks. However, designing effective evolving methods for instruction evolution requires substantial human expertise. This paper proposes Auto Evol-Instruct, an end-to-end framework that evolves instruction datasets using large language models without any human effort. The framework automatically analyzes and summarizes suitable evolutionary strategies for the given instruction data and iteratively improves the evolving method based on issues exposed during the instruction evolution process. Our extensive experiments demonstrate that the best method optimized by Auto Evol-Instruct outperforms human-designed methods on various benchmarks, including MT-Bench, AlpacaEval, GSM8K, and HumanEval.", }
Fine-tuning large pre-trained language models with Evol-Instruct has achieved encouraging results across a wide range of tasks. However, designing effective evolving methods for instruction evolution requires substantial human expertise. This paper proposes Auto Evol-Instruct, an end-to-end framework that evolves instruction datasets using large language models without any human effort. The framework automatically analyzes and summarizes suitable evolutionary strategies for the given instruction data and iteratively improves the evolving method based on issues exposed during the instruction evolution process. Our extensive experiments demonstrate that the best method optimized by Auto Evol-Instruct outperforms human-designed methods on various benchmarks, including MT-Bench, AlpacaEval, GSM8K, and HumanEval.
[ "Zeng, Weihao", "Xu, Can", "Zhao, Yingxiu", "Lou, Jian-Guang", "Chen, Weizhu" ]
Automatic Instruction Evolving for Large Language Models
emnlp-main.397
Poster
2406.00770
[ "" ]
https://huggingface.co/papers/2406.00770
1
1
0
5
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.398.bib
https://aclanthology.org/2024.emnlp-main.398/
@inproceedings{sheng-etal-2024-repeval, title = "{R}ep{E}val: Effective Text Evaluation with {LLM} Representation", author = "Sheng, Shuqian and Xu, Yi and Zhang, Tianhang and Shen, Zanwei and Fu, Luoyi and Ding, Jiaxin and Zhou, Lei and Gan, Xiaoying and Wang, Xinbing and Zhou, Chenghu", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.398", pages = "7019--7033", abstract = "The era of Large Language Models (LLMs) raises new demands for automatic evaluation metrics, which should be adaptable to various application scenarios while maintaining low cost and effectiveness. Traditional metrics for automatic text evaluation are often tailored to specific scenarios, while LLM-based evaluation metrics are costly, requiring fine-tuning or rely heavily on the generation capabilities of LLMs. Besides, previous LLM-based metrics ignore the fact that, within the space of LLM representations, there exist direction vectors that indicate the estimation of text quality. To this end, we introduce RepEval, a metric that leverages the projection of LLM representations for evaluation. Through simple prompt modifications, RepEval can easily transition to various tasks, requiring only minimal sample pairs for direction vector construction. Results on fourteen datasets across two evaluation tasks demonstrate the high effectiveness of our method, which exhibits a higher correlation with human judgments than previous methods, even in complex evaluation scenarios involving pair-wise selection under nuanced aspects. Our work underscores the richness of information regarding text quality embedded within LLM representations, offering insights for the development of new metrics.", }
The era of Large Language Models (LLMs) raises new demands for automatic evaluation metrics, which should be adaptable to various application scenarios while maintaining low cost and effectiveness. Traditional metrics for automatic text evaluation are often tailored to specific scenarios, while LLM-based evaluation metrics are costly, requiring fine-tuning or relying heavily on the generation capabilities of LLMs. Besides, previous LLM-based metrics ignore the fact that, within the space of LLM representations, there exist direction vectors that indicate the estimation of text quality. To this end, we introduce RepEval, a metric that leverages the projection of LLM representations for evaluation. Through simple prompt modifications, RepEval can easily transition to various tasks, requiring only minimal sample pairs for direction vector construction. Results on fourteen datasets across two evaluation tasks demonstrate the high effectiveness of our method, which exhibits a higher correlation with human judgments than previous methods, even in complex evaluation scenarios involving pair-wise selection under nuanced aspects. Our work underscores the richness of information regarding text quality embedded within LLM representations, offering insights for the development of new metrics.
[ "Sheng, Shuqian", "Xu, Yi", "Zhang, Tianhang", "Shen, Zanwei", "Fu, Luoyi", "Ding, Jiaxin", "Zhou, Lei", "Gan, Xiaoying", "Wang, Xinbing", "Zhou, Chenghu" ]
RepEval: Effective Text Evaluation with LLM Representation
emnlp-main.398
Poster
2404.19563
[ "https://github.com/susisheng/repeval" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.399.bib
https://aclanthology.org/2024.emnlp-main.399/
@inproceedings{he-etal-2024-generative, title = "Generative Models for Automatic Medical Decision Rule Extraction from Text", author = "He, Yuxin and Tang, Buzhou and Wang, Xiaoling", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.399", pages = "7034--7048", abstract = "Medical decision rules play a key role in many clinical decision support systems (CDSS). However, these rules are conventionally constructed by medical experts, which is expensive and hard to scale up. In this study, we explore the automatic extraction of medical decision rules from text, leading to a solution to construct large-scale medical decision rules. We adopt a formulation of medical decision rules as binary trees consisting of condition/decision nodes. Such trees are referred to as medical decision trees and we introduce several generative models to extract them from text. The proposed models inherit the merit of two categories of successful natural language generation frameworks, i.e., sequence-to-sequence generation and autoregressive generation. To unleash the potential of pretrained language models, we design three styles of linearization (natural language, augmented natural language and JSON code), acting as the target sequence for our models. Our final system achieves 67{\%} tree accuracy on a comprehensive Chinese benchmark, outperforming state-of-the-art baseline by 12{\%}. The result demonstrates the effectiveness of generative models on explicitly modeling structural decision-making roadmaps, and shows great potential to boost the development of CDSS and explainable AI. Our code will be open-source upon acceptance.", }
Medical decision rules play a key role in many clinical decision support systems (CDSS). However, these rules are conventionally constructed by medical experts, which is expensive and hard to scale up. In this study, we explore the automatic extraction of medical decision rules from text, leading to a solution to construct large-scale medical decision rules. We adopt a formulation of medical decision rules as binary trees consisting of condition/decision nodes. Such trees are referred to as medical decision trees and we introduce several generative models to extract them from text. The proposed models inherit the merit of two categories of successful natural language generation frameworks, i.e., sequence-to-sequence generation and autoregressive generation. To unleash the potential of pretrained language models, we design three styles of linearization (natural language, augmented natural language and JSON code), acting as the target sequence for our models. Our final system achieves 67{\%} tree accuracy on a comprehensive Chinese benchmark, outperforming state-of-the-art baseline by 12{\%}. The result demonstrates the effectiveness of generative models on explicitly modeling structural decision-making roadmaps, and shows great potential to boost the development of CDSS and explainable AI. Our code will be open-source upon acceptance.
[ "He, Yuxin", "Tang, Buzhou", "Wang, Xiaoling" ]
Generative Models for Automatic Medical Decision Rule Extraction from Text
emnlp-main.399
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.400.bib
https://aclanthology.org/2024.emnlp-main.400/
@inproceedings{nguyen-etal-2024-encoding, title = "Encoding and Controlling Global Semantics for Long-form Video Question Answering", author = "Nguyen, Thong Thanh and Hu, Zhiyuan and Wu, Xiaobao and Nguyen, Cong-Duy T and Ng, See-Kiong and Luu, Anh Tuan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.400", pages = "7049--7066", abstract = "Seeking answers effectively for long videos is essential to build video question answering (videoQA) systems. Previous methods adaptively select frames and regions from long videos to save computations. However, this fails to reason over the whole sequence of video, leading to sub-optimal performance. To address this problem, we introduce a state space layer (SSL) into multi-modal Transformer to efficiently integrate global semantics of the video, which mitigates the video information loss caused by frame and region selection modules. Our SSL includes a gating unit to enable controllability over the flow of global semantics into visual representations. To further enhance the controllability, we introduce a cross-modal compositional congruence objective to encourage global semantics aligned with the question. To rigorously evaluate long-form videoQA capacity, we construct two new benchmarks Ego-QA and MAD-QA featuring videos of considerably long length, i.e. 17.5 minutes and 1.9 hours, respectively. Extensive experiments demonstrate the superiority of our framework on these new as well as existing datasets.", }
Seeking answers effectively for long videos is essential to build video question answering (videoQA) systems. Previous methods adaptively select frames and regions from long videos to save computations. However, this fails to reason over the whole sequence of video, leading to sub-optimal performance. To address this problem, we introduce a state space layer (SSL) into multi-modal Transformer to efficiently integrate global semantics of the video, which mitigates the video information loss caused by frame and region selection modules. Our SSL includes a gating unit to enable controllability over the flow of global semantics into visual representations. To further enhance the controllability, we introduce a cross-modal compositional congruence objective to encourage global semantics aligned with the question. To rigorously evaluate long-form videoQA capacity, we construct two new benchmarks Ego-QA and MAD-QA featuring videos of considerably long length, i.e. 17.5 minutes and 1.9 hours, respectively. Extensive experiments demonstrate the superiority of our framework on these new as well as existing datasets.
[ "Nguyen, Thong Thanh", "Hu, Zhiyuan", "Wu, Xiaobao", "Nguyen, Cong-Duy T", "Ng, See-Kiong", "Luu, Anh Tuan" ]
Encoding and Controlling Global Semantics for Long-form Video Question Answering
emnlp-main.400
Poster
2405.19723
[ "https://github.com/zhiyuanhubj/long_form_videoqa" ]
https://huggingface.co/papers/2405.19723
0
0
0
6
[]
[ "thongnguyen5999/madqa", "thongnguyen5999/egoqa" ]
[]
[]
[ "thongnguyen5999/madqa", "thongnguyen5999/egoqa" ]
[]
1