corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-664501 | 2410.01529 | Robo-MUTUAL: Robotic Multimodal Task Specification via Unimodal Learning | <|reference_start|>Robo-MUTUAL: Robotic Multimodal Task Specification via Unimodal Learning: Multimodal task specification is essential for enhanced robotic performance, where \textit{Cross-modality Alignment} enables the robot to holistically understand complex task instructions. Directly annotating multimodal instructions for model training proves impractical due to the sparsity of paired multimodal data. In this study, we demonstrate that by leveraging unimodal instructions abundant in real data, we can effectively teach robots to learn multimodal task specifications. First, we endow the robot with strong \textit{Cross-modality Alignment} capabilities by pretraining a robotic multimodal encoder using extensive out-of-domain data. Then, we employ two Collapse and Corrupt operations to further bridge the remaining modality gap in the learned multimodal representation. This approach projects different modalities of the same task goal as interchangeable representations, thus enabling accurate robotic operations within a well-aligned multimodal latent space. Evaluation across more than 130 tasks and 4000 evaluations on both the simulated LIBERO benchmark and real robot platforms showcases the superior capabilities of our proposed framework, demonstrating a significant advantage in overcoming data constraints in robotic learning. Website: zh1hao.wang/Robo_MUTUAL<|reference_end|> | arxiv | @article{li2024robo-mutual:,
title={Robo-MUTUAL: Robotic Multimodal Task Specification via Unimodal Learning},
author={Jianxiong Li, Zhihao Wang, Jinliang Zheng, Xiaoai Zhou, Guanming Wang,
Guanglu Song, Yu Liu, Jingjing Liu, Ya-Qin Zhang, Junzhi Yu, Xianyuan Zhan},
journal={arXiv preprint arXiv:2410.01529},
year={2024},
archivePrefix={arXiv},
eprint={2410.01529},
primaryClass={cs.RO cs.CV}
} | li2024robo-mutual: |
arxiv-664502 | 2410.01531 | TiVaT: Joint-Axis Attention for Time Series Forecasting with Lead-Lag Dynamics | <|reference_start|>TiVaT: Joint-Axis Attention for Time Series Forecasting with Lead-Lag Dynamics: Multivariate time series (MTS) forecasting plays a crucial role in various real-world applications, yet simultaneously capturing both temporal and inter-variable dependencies remains a challenge. Conventional Channel-Dependent (CD) models handle these dependencies separately, limiting their ability to model complex interactions such as lead-lag dynamics. To address these limitations, we propose TiVaT (Time-Variable Transformer), a novel architecture that integrates temporal and variate dependencies through its Joint-Axis (JA) attention mechanism. TiVaT's ability to capture intricate variate-temporal dependencies, including asynchronous interactions, is further enhanced by the incorporation of Distance-aware Time-Variable (DTV) Sampling, which reduces noise and improves accuracy through a learned 2D map that focuses on key interactions. TiVaT effectively models both temporal and variate dependencies, consistently delivering strong performance across diverse datasets. Notably, it excels in capturing complex patterns within multivariate time series, enabling it to surpass or remain competitive with state-of-the-art methods. This positions TiVaT as a new benchmark in MTS forecasting, particularly in handling datasets characterized by intricate and challenging dependencies.<|reference_end|> | arxiv | @article{ha2024tivat:,
title={TiVaT: Joint-Axis Attention for Time Series Forecasting with Lead-Lag
Dynamics},
author={Junwoo Ha, Hyukjae Kwon, Sungsoo Kim, Kisu Lee, Ha Young Kim},
journal={arXiv preprint arXiv:2410.01531},
year={2024},
archivePrefix={arXiv},
eprint={2410.01531},
primaryClass={cs.LG cs.AI}
} | ha2024tivat: |
arxiv-664503 | 2410.01532 | Seeing Eye to AI: Human Alignment via Gaze-Based Response Rewards for Large Language Models | <|reference_start|>Seeing Eye to AI: Human Alignment via Gaze-Based Response Rewards for Large Language Models: Advancements in Natural Language Processing (NLP) have led to the emergence of Large Language Models (LLMs) such as GPT, Llama, Claude, and Gemini, which excel across a range of tasks but require extensive fine-tuning to align their outputs with human expectations. A widely used method for achieving this alignment is Reinforcement Learning from Human Feedback (RLHF), which, despite its success, faces challenges in accurately modelling human preferences. In this paper, we introduce GazeReward, a novel framework that integrates implicit feedback -- and specifically eye-tracking (ET) data -- into the Reward Model (RM). In addition, we explore how ET-based features can provide insights into user preferences. Through ablation studies we test our framework with different integration methods, LLMs, and ET generator models, demonstrating that our approach significantly improves the accuracy of the RM on established human preference datasets. This work advances the ongoing discussion on optimizing AI alignment with human values, exploring the potential of cognitive data for shaping future NLP research.<|reference_end|> | arxiv | @article{lopez-cardona2024seeing,
title={Seeing Eye to AI: Human Alignment via Gaze-Based Response Rewards for
Large Language Models},
author={Angela Lopez-Cardona and Carlos Segura and Alexandros Karatzoglou and
Sergi Abadal and Ioannis Arapakis},
journal={arXiv preprint arXiv:2410.01532},
year={2024},
archivePrefix={arXiv},
eprint={2410.01532},
primaryClass={cs.CL cs.AI cs.CV cs.HC}
} | lopez-cardona2024seeing |
arxiv-664504 | 2410.01534 | Toward a Holistic Evaluation of Robustness in CLIP Models | <|reference_start|>Toward a Holistic Evaluation of Robustness in CLIP Models: Contrastive Language-Image Pre-training (CLIP) models have shown significant potential, particularly in zero-shot classification across diverse distribution shifts. Building on existing evaluations of overall classification robustness, this work aims to provide a more comprehensive assessment of CLIP by introducing several new perspectives. First, we investigate their robustness to variations in specific visual factors. Second, we assess two critical safety objectives--confidence uncertainty and out-of-distribution detection--beyond mere classification accuracy. Third, we evaluate the finesse with which CLIP models bridge the image and text modalities. Fourth, we extend our examination to 3D awareness in CLIP models, moving beyond traditional 2D image understanding. Finally, we explore the interaction between vision and language encoders within modern large multimodal models (LMMs) that utilize CLIP as the visual backbone, focusing on how this interaction impacts classification robustness. In each aspect, we consider the impact of six factors on CLIP models: model architecture, training distribution, training set size, fine-tuning, contrastive loss, and test-time prompts. Our study uncovers several previously unknown insights into CLIP. For instance, the architecture of the visual encoder in CLIP plays a significant role in their robustness against 3D corruption. CLIP models tend to exhibit a bias towards shape when making predictions. Moreover, this bias tends to diminish after fine-tuning on ImageNet. Vision-language models like LLaVA, leveraging the CLIP vision encoder, could exhibit benefits in classification performance for challenging categories over CLIP alone. Our findings are poised to offer valuable guidance for enhancing the robustness and reliability of CLIP models.<|reference_end|> | arxiv | @article{tu2024toward,
title={Toward a Holistic Evaluation of Robustness in CLIP Models},
author={Weijie Tu, Weijian Deng, Tom Gedeon},
journal={arXiv preprint arXiv:2410.01534},
year={2024},
archivePrefix={arXiv},
eprint={2410.01534},
primaryClass={cs.CV}
} | tu2024toward |
arxiv-664505 | 2410.01535 | GaussianBlock: Building Part-Aware Compositional and Editable 3D Scene by Primitives and Gaussians | <|reference_start|>GaussianBlock: Building Part-Aware Compositional and Editable 3D Scene by Primitives and Gaussians: Recently, with the development of Neural Radiance Fields and Gaussian Splatting, 3D reconstruction techniques have achieved remarkably high fidelity. However, the latent representations learnt by these methods are highly entangled and lack interpretability. In this paper, we propose a novel part-aware compositional reconstruction method, called GaussianBlock, that enables semantically coherent and disentangled representations, allowing for precise and physical editing akin to building blocks, while simultaneously maintaining high fidelity. Our GaussianBlock introduces a hybrid representation that leverages the advantages of both primitives, known for their flexible actionability and editability, and 3D Gaussians, which excel in reconstruction quality. Specifically, we achieve semantically coherent primitives through a novel attention-guided centering loss derived from 2D semantic priors, complemented by a dynamic splitting and fusion strategy. Furthermore, we utilize 3D Gaussians that hybridize with primitives to refine structural details and enhance fidelity. Additionally, a binding inheritance strategy is employed to strengthen and maintain the connection between the two. Our reconstructed scenes are evidenced to be disentangled, compositional, and compact across diverse benchmarks, enabling seamless, direct and precise editing while maintaining high quality.<|reference_end|> | arxiv | @article{jiang2024gaussianblock:,
title={GaussianBlock: Building Part-Aware Compositional and Editable 3D Scene
by Primitives and Gaussians},
author={Shuyi Jiang, Qihao Zhao, Hossein Rahmani, De Wen Soh, Jun Liu, Na Zhao},
journal={arXiv preprint arXiv:2410.01535},
year={2024},
archivePrefix={arXiv},
eprint={2410.01535},
primaryClass={cs.CV}
} | jiang2024gaussianblock: |
arxiv-664506 | 2410.01536 | EUFCC-CIR: a Composed Image Retrieval Dataset for GLAM Collections | <|reference_start|>EUFCC-CIR: a Composed Image Retrieval Dataset for GLAM Collections: The intersection of Artificial Intelligence and Digital Humanities enables researchers to explore cultural heritage collections with greater depth and scale. In this paper, we present EUFCC-CIR, a dataset designed for Composed Image Retrieval (CIR) within Galleries, Libraries, Archives, and Museums (GLAM) collections. Our dataset is built on top of the EUFCC-340K image labeling dataset and contains over 180K annotated CIR triplets. Each triplet is composed of a multi-modal query (an input image plus a short text describing the desired attribute manipulations) and a set of relevant target images. The EUFCC-CIR dataset fills an existing gap in CIR-specific resources for Digital Humanities. We demonstrate the value of the EUFCC-CIR dataset by highlighting its unique qualities in comparison to other existing CIR datasets and evaluating the performance of several zero-shot CIR baselines.<|reference_end|> | arxiv | @article{net2024eufcc-cir:,
title={EUFCC-CIR: a Composed Image Retrieval Dataset for GLAM Collections},
author={Francesc Net and Lluis Gomez},
journal={arXiv preprint arXiv:2410.01536},
year={2024},
archivePrefix={arXiv},
eprint={2410.01536},
primaryClass={cs.CV}
} | net2024eufcc-cir: |
arxiv-664507 | 2410.01537 | Attention layers provably solve single-location regression | <|reference_start|>Attention layers provably solve single-location regression: Attention-based models, such as Transformer, excel across various tasks but lack a comprehensive theoretical understanding, especially regarding token-wise sparsity and internal linear representations. To address this gap, we introduce the single-location regression task, where only one token in a sequence determines the output, and its position is a latent random variable, retrievable via a linear projection of the input. To solve this task, we propose a dedicated predictor, which turns out to be a simplified version of a non-linear self-attention layer. We study its theoretical properties, by showing its asymptotic Bayes optimality and analyzing its training dynamics. In particular, despite the non-convex nature of the problem, the predictor effectively learns the underlying structure. This work highlights the capacity of attention mechanisms to handle sparse token information and internal linear structures.<|reference_end|> | arxiv | @article{marion2024attention,
title={Attention layers provably solve single-location regression},
author={Pierre Marion, Rapha\"el Berthier, G\'erard Biau, Claire Boyer},
journal={arXiv preprint arXiv:2410.01537},
year={2024},
archivePrefix={arXiv},
eprint={2410.01537},
primaryClass={stat.ML cs.LG}
} | marion2024attention |
arxiv-664508 | 2410.01538 | Finite element method. Detailed proofs to be formalized in Coq | <|reference_start|>Finite element method. Detailed proofs to be formalized in Coq: To obtain the highest confidence in the correctness of numerical simulation programs for the resolution of Partial Differential Equations (PDEs), one has to formalize the mathematical notions and results that allow us to establish the soundness of the approach. The finite element method is one of the popular tools for the numerical resolution of a wide range of PDEs. The purpose of this document is to provide the formal proof community with very detailed pen-and-paper proofs for the construction of the Lagrange finite elements of any degree on simplices in positive dimension.<|reference_end|> | arxiv | @article{clément2024finite,
title={Finite element method. Detailed proofs to be formalized in Coq},
author={Fran\c{c}ois Cl\'ement (SERENA, CERMICS), Vincent Martin (LMAC)},
journal={arXiv preprint arXiv:2410.01538},
year={2024},
archivePrefix={arXiv},
eprint={2410.01538},
primaryClass={cs.LO cs.NA math.NA}
} | clément2024finite |
arxiv-664509 | 2410.01539 | Multi-Scale Fusion for Object Representation | <|reference_start|>Multi-Scale Fusion for Object Representation: Representing images or videos as object-level feature vectors, rather than pixel-level feature maps, facilitates advanced visual tasks. Object-Centric Learning (OCL) primarily achieves this by reconstructing the input under the guidance of Variational Autoencoder (VAE) intermediate representation to drive so-called \textit{slots} to aggregate as much object information as possible. However, existing VAE guidance does not explicitly address that objects can vary in pixel sizes while models typically excel at specific pattern scales. We propose \textit{Multi-Scale Fusion} (MSF) to enhance VAE guidance for OCL training. To ensure objects of all sizes fall within VAE's comfort zone, we adopt the \textit{image pyramid}, which produces intermediate representations at multiple scales; To foster scale-invariance/variance in object super-pixels, we devise \textit{inter}/\textit{intra-scale fusion}, which augments low-quality object super-pixels of one scale with corresponding high-quality super-pixels from another scale. On standard OCL benchmarks, our technique improves mainstream methods, including state-of-the-art diffusion-based ones. The source code is available in the supplemental material.<|reference_end|> | arxiv | @article{zhao2024multi-scale,
title={Multi-Scale Fusion for Object Representation},
author={Rongzhen Zhao, Vivienne Wang, Juho Kannala, Joni Pajarinen},
journal={arXiv preprint arXiv:2410.01539},
year={2024},
archivePrefix={arXiv},
eprint={2410.01539},
primaryClass={cs.CV}
} | zhao2024multi-scale |
arxiv-664510 | 2410.01540 | Edge-preserving noise for diffusion models | <|reference_start|>Edge-preserving noise for diffusion models: Classical generative diffusion models learn an isotropic Gaussian denoising process, treating all spatial regions uniformly, thus neglecting potentially valuable structural information in the data. Inspired by the long-established work on anisotropic diffusion in image processing, we present a novel edge-preserving diffusion model that is a generalization of denoising diffusion probabilistic models (DDPM). In particular, we introduce an edge-aware noise scheduler that varies between edge-preserving and isotropic Gaussian noise. We show that our model's generative process converges faster to results that more closely match the target distribution. We demonstrate its capability to better learn the low-to-mid frequencies within the dataset, which plays a crucial role in representing shapes and structural information. Our edge-preserving diffusion process consistently outperforms state-of-the-art baselines in unconditional image generation. It is also more robust for generative tasks guided by a shape-based prior, such as stroke-to-image generation. We present qualitative and quantitative results showing consistent improvements (FID score) of up to 30% for both tasks.<|reference_end|> | arxiv | @article{vandersanden2024edge-preserving,
title={Edge-preserving noise for diffusion models},
author={Jente Vandersanden, Sascha Holl, Xingchang Huang, Gurprit Singh},
journal={arXiv preprint arXiv:2410.01540},
year={2024},
archivePrefix={arXiv},
eprint={2410.01540},
primaryClass={cs.CV cs.AI cs.GR cs.LG}
} | vandersanden2024edge-preserving |
arxiv-664511 | 2410.01541 | Understanding Teams and Productivity in Information Retrieval Research (2000-2018): Academia, Industry, and Cross-Community Collaborations | <|reference_start|>Understanding Teams and Productivity in Information Retrieval Research (2000-2018): Academia, Industry, and Cross-Community Collaborations: Previous research in the information retrieval (IR) field has focused on summarizing progress and synthesizing knowledge and techniques from individual studies and data-driven experiments, but the extent of contributions and collaborations between researchers from different communities (e.g., academia and industry) in advancing IR knowledge remains unclear. To address this gap, this study explores several characteristics of information retrieval research in four areas: productivity patterns and preferred venues, the relationship between citations and downloads, changes in research topics, and changes in patterns of scientific collaboration, by analyzing 53,471 papers published between 2000 and 2018 from the Association for Computing Machinery (ACM) Digital Library dataset. Through analysis and interpretation of empirical datasets, we find that academic research, industry research, and collaborative research between academia and industry focused on different topics. Among the collaboration models, Academia-Industry Collaboration is more oriented towards large teams. Collaborative networks between researchers in academia and industry suggest that the field of information retrieval has become richer over time in terms of themes, foci, and sub-themes, becoming a more diverse field of study.<|reference_end|> | arxiv | @article{lei2024understanding,
title={Understanding Teams and Productivity in Information Retrieval Research
(2000-2018): Academia, Industry, and Cross-Community Collaborations},
author={Jiaqi Lei, Liang Hu, Yi Bu, Jiqun Liu},
journal={arXiv preprint arXiv:2410.01541},
year={2024},
archivePrefix={arXiv},
eprint={2410.01541},
primaryClass={cs.DL}
} | lei2024understanding |
arxiv-664512 | 2410.01544 | Boosting Weakly-Supervised Referring Image Segmentation via Progressive Comprehension | <|reference_start|>Boosting Weakly-Supervised Referring Image Segmentation via Progressive Comprehension: This paper explores the weakly-supervised referring image segmentation (WRIS) problem, and focuses on a challenging setup where target localization is learned directly from image-text pairs. We note that the input text description typically already contains detailed information on how to localize the target object, and we also observe that humans often follow a step-by-step comprehension process (i.e., progressively utilizing target-related attributes and relations as cues) to identify the target object. Hence, we propose a novel Progressive Comprehension Network (PCNet) to leverage target-related textual cues from the input description for progressively localizing the target object. Specifically, we first use a Large Language Model (LLM) to decompose the input text description into short phrases. These short phrases are taken as target-related cues and fed into a Conditional Referring Module (CRM) in multiple stages, to allow updating the referring text embedding and enhance the response map for target localization in a multi-stage manner. Based on the CRM, we then propose a Region-aware Shrinking (RaS) loss to constrain the visual localization to be conducted progressively in a coarse-to-fine manner across different stages. Finally, we introduce an Instance-aware Disambiguation (IaD) loss to suppress instance localization ambiguity by differentiating overlapping response maps generated by different referring texts on the same image. Extensive experiments show that our method outperforms SOTA methods on three common benchmarks.<|reference_end|> | arxiv | @article{yang2024boosting,
title={Boosting Weakly-Supervised Referring Image Segmentation via Progressive
Comprehension},
author={Zaiquan Yang, Yuhao Liu, Jiaying Lin, Gerhard Hancke, Rynson W.H. Lau},
journal={arXiv preprint arXiv:2410.01544},
year={2024},
archivePrefix={arXiv},
eprint={2410.01544},
primaryClass={cs.CV}
} | yang2024boosting |
arxiv-664513 | 2410.01545 | Lines of Thought in Large Language Models | <|reference_start|>Lines of Thought in Large Language Models: Large Language Models achieve next-token prediction by transporting a vectorized piece of text (prompt) across an accompanying embedding space under the action of successive transformer layers. The resulting high-dimensional trajectories realize different contextualization, or 'thinking', steps, and fully determine the output probability distribution. We aim to characterize the statistical properties of ensembles of these 'lines of thought.' We observe that independent trajectories cluster along a low-dimensional, non-Euclidean manifold, and that their path can be well approximated by a stochastic equation with few parameters extracted from data. We find it remarkable that the vast complexity of such large models can be reduced to a much simpler form, and we reflect on implications.<|reference_end|> | arxiv | @article{sarfati2024lines,
title={Lines of Thought in Large Language Models},
author={Rapha\"el Sarfati, Toni J. B. Liu, Nicolas Boull\'e, and Christopher
J. Earls},
journal={arXiv preprint arXiv:2410.01545},
year={2024},
archivePrefix={arXiv},
eprint={2410.01545},
primaryClass={cs.LG physics.data-an}
} | sarfati2024lines |
arxiv-664514 | 2410.01548 | In-Context Transfer Learning: Demonstration Synthesis by Transferring Similar Tasks | <|reference_start|>In-Context Transfer Learning: Demonstration Synthesis by Transferring Similar Tasks: In-context learning (ICL) is an effective approach to help large language models (LLMs) adapt to various tasks by providing demonstrations of the target task. Considering the high cost of labeling demonstrations, many methods propose synthesizing demonstrations from scratch using LLMs. However, the quality of the demonstrations synthesized from scratch is limited by the capabilities and knowledge of LLMs. To address this, inspired by transfer learning, we propose In-Context Transfer Learning (ICTL), which synthesizes target task demonstrations by transferring labeled demonstrations from similar source tasks. ICTL consists of two steps: source sampling and target transfer. First, we define an optimization objective, which minimizes transfer error to sample source demonstrations similar to the target task. Then, we employ LLMs to transfer the sampled source demonstrations to the target task, matching the definition and format of the target task. Experiments on Super-NI show that ICTL outperforms synthesis from scratch by 2.0% on average, demonstrating the effectiveness of our method.<|reference_end|> | arxiv | @article{wang2024in-context,
title={In-Context Transfer Learning: Demonstration Synthesis by Transferring
Similar Tasks},
author={Dingzirui Wang, Xuanliang Zhang, Qiguang Chen, Longxu Dou, Xiao Xu,
Rongyu Cao, Yingwei Ma, Qingfu Zhu, Wanxiang Che, Binhua Li, Fei Huang,
Yongbin Li},
journal={arXiv preprint arXiv:2410.01548},
year={2024},
archivePrefix={arXiv},
eprint={2410.01548},
primaryClass={cs.CL cs.LG}
} | wang2024in-context |
arxiv-664515 | 2410.01553 | MedQA-CS: Benchmarking Large Language Models Clinical Skills Using an AI-SCE Framework | <|reference_start|>MedQA-CS: Benchmarking Large Language Models Clinical Skills Using an AI-SCE Framework: Artificial intelligence (AI) and large language models (LLMs) in healthcare require advanced clinical skills (CS), yet current benchmarks fail to evaluate these comprehensively. We introduce MedQA-CS, an AI-SCE framework inspired by medical education's Objective Structured Clinical Examinations (OSCEs), to address this gap. MedQA-CS evaluates LLMs through two instruction-following tasks, LLM-as-medical-student and LLM-as-CS-examiner, designed to reflect real clinical scenarios. Our contributions include developing MedQA-CS, a comprehensive evaluation framework with publicly available data and expert annotations, and providing the quantitative and qualitative assessment of LLMs as reliable judges in CS evaluation. Our experiments show that MedQA-CS is a more challenging benchmark for evaluating clinical skills than traditional multiple-choice QA benchmarks (e.g., MedQA). Combined with existing benchmarks, MedQA-CS enables a more comprehensive evaluation of LLMs' clinical capabilities for both open- and closed-source LLMs.<|reference_end|> | arxiv | @article{yao2024medqa-cs:,
title={MedQA-CS: Benchmarking Large Language Models Clinical Skills Using an
AI-SCE Framework},
author={Zonghai Yao, Zihao Zhang, Chaolong Tang, Xingyu Bian, Youxia Zhao,
Zhichao Yang, Junda Wang, Huixue Zhou, Won Seok Jang, Feiyun Ouyang, Hong Yu},
journal={arXiv preprint arXiv:2410.01553},
year={2024},
archivePrefix={arXiv},
eprint={2410.01553},
primaryClass={cs.AI cs.CL}
} | yao2024medqa-cs: |
arxiv-664516 | 2410.01554 | Weighted Sum Power Minimization for Cooperative Spectrum Sharing in Cognitive Radio Networks | <|reference_start|>Weighted Sum Power Minimization for Cooperative Spectrum Sharing in Cognitive Radio Networks: This letter introduces weighted sum power (WSP), a new performance metric for wireless resource allocation during cooperative spectrum sharing in cognitive radio networks, where the primary and secondary nodes have different priorities and quality of service (QoS) requirements. Compared to using energy efficiency (EE) and weighted sum energy efficiency (WSEE) as performance metrics and optimization objectives of wireless resource allocation towards green communication, the linear character of WSP can reduce the complexity of optimization problems. Meanwhile, the weights assigned to different nodes are beneficial for managing their power budget. Using WSP as the optimization objective, a suboptimal resource allocation scheme is proposed, leveraging linear programming and Newton's method. Simulations verify that the proposed scheme provides near-optimal performance with low computation time. Furthermore, the initial approximate value selection in Newton's method is also optimized to accelerate the proposed scheme.<|reference_end|> | arxiv | @article{yu2024weighted,
title={Weighted Sum Power Minimization for Cooperative Spectrum Sharing in
Cognitive Radio Networks},
author={Yang Yu},
journal={arXiv preprint arXiv:2410.01554},
year={2024},
archivePrefix={arXiv},
eprint={2410.01554},
primaryClass={cs.NI}
} | yu2024weighted |
arxiv-664517 | 2410.01555 | ACE: A LLM-based Negotiation Coaching System | <|reference_start|>ACE: A LLM-based Negotiation Coaching System: The growing prominence of LLMs has led to an increase in the development of AI tutoring systems. These systems are crucial in providing underrepresented populations with improved access to valuable education. One important area of education that is unavailable to many learners is strategic bargaining related to negotiation. To address this, we develop a LLM-based Assistant for Coaching nEgotiation (ACE). ACE not only serves as a negotiation partner for users but also provides them with targeted feedback for improvement. To build our system, we collect a dataset of negotiation transcripts between MBA students. These transcripts come from trained negotiators and emulate realistic bargaining scenarios. We use the dataset, along with expert consultations, to design an annotation scheme for detecting negotiation mistakes. ACE employs this scheme to identify mistakes and provide targeted feedback to users. To test the effectiveness of ACE-generated feedback, we conducted a user experiment with two consecutive trials of negotiation and found that it improves negotiation performances significantly compared to a system that doesn't provide feedback and one which uses an alternative method of providing feedback.<|reference_end|> | arxiv | @article{shea2024ace:,
title={ACE: A LLM-based Negotiation Coaching System},
author={Ryan Shea, Aymen Kallala, Xin Lucy Liu, Michael W. Morris, Zhou Yu},
journal={arXiv preprint arXiv:2410.01555},
year={2024},
archivePrefix={arXiv},
eprint={2410.01555},
primaryClass={cs.CL cs.HC}
} | shea2024ace: |
arxiv-664518 | 2410.01556 | Integrative Decoding: Improve Factuality via Implicit Self-consistency | <|reference_start|>Integrative Decoding: Improve Factuality via Implicit Self-consistency: Self-consistency-based approaches, which involve repeatedly sampling multiple outputs and selecting the most consistent one as the final response, prove to be remarkably effective in improving the factual accuracy of large language models. Nonetheless, existing methods usually have strict constraints on the task format, largely limiting their applicability. In this paper, we present Integrative Decoding (ID), to unlock the potential of self-consistency in open-ended generation tasks. ID operates by constructing a set of inputs, each prepended with a previously sampled response, and then processing them concurrently, with the next token being selected by aggregating all their corresponding predictions at each decoding step. In essence, this simple approach implicitly incorporates self-consistency in the decoding objective. Extensive evaluation shows that ID consistently enhances factuality over a wide range of language models, with substantial improvements on the TruthfulQA (+11.2%), Biographies (+15.4%) and LongFact (+8.5%) benchmarks. The performance gains amplify progressively as the number of sampled responses increases, indicating the potential of ID to scale up with repeated sampling.<|reference_end|> | arxiv | @article{cheng2024integrative,
title={Integrative Decoding: Improve Factuality via Implicit Self-consistency},
author={Yi Cheng, Xiao Liang, Yeyun Gong, Wen Xiao, Song Wang, Yuji Zhang,
Wenjun Hou, Kaishuai Xu, Wenge Liu, Wenjie Li, Jian Jiao, Qi Chen, Peng
Cheng, Wayne Xiong},
journal={arXiv preprint arXiv:2410.01556},
year={2024},
archivePrefix={arXiv},
eprint={2410.01556},
primaryClass={cs.CL cs.AI cs.LG}
} | cheng2024integrative |
arxiv-664519 | 2410.01560 | OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data | <|reference_start|>OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data: Mathematical reasoning continues to be a critical challenge in large language model (LLM) development with significant interest. However, most of the cutting-edge progress in mathematical reasoning with LLMs has become \emph{closed-source} due to lack of access to training data. This lack of data access limits researchers from understanding the impact of different choices for synthesizing and utilizing the data. With the goal of creating a high-quality finetuning (SFT) dataset for math reasoning, we conduct careful ablation experiments on data synthesis using the recently released \texttt{Llama3.1} family of models. Our experiments show that: (a) solution format matters, with excessively verbose solutions proving detrimental to SFT performance, (b) data generated by a strong teacher outperforms equally-sized data generated by a weak student model, (c) SFT is robust to low-quality solutions, allowing for imprecise data filtering, and (d) question diversity is crucial for achieving data scaling gains. Based on these insights, we create the OpenMathInstruct-2 dataset, which consists of 14M question-solution pairs ($\approx$ 600K unique questions), making it nearly eight times larger than the previous largest open-source math reasoning dataset. Finetuning the \texttt{Llama-3.1-8B-Base} using OpenMathInstruct-2 outperforms \texttt{Llama3.1-8B-Instruct} on MATH by an absolute 15.9\% (51.9\% $\rightarrow$ 67.8\%). Finally, to accelerate the open-source efforts, we release the code, the finetuned models, and the OpenMathInstruct-2 dataset under a commercially permissive license.<|reference_end|> | arxiv | @article{toshniwal2024openmathinstruct-2:,
title={OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source
Instruction Data},
author={Shubham Toshniwal, Wei Du, Ivan Moshkov, Branislav Kisacanin, Alexan
Ayrapetyan, Igor Gitman},
journal={arXiv preprint arXiv:2410.01560},
year={2024},
archivePrefix={arXiv},
eprint={2410.01560},
primaryClass={cs.CL cs.AI cs.LG}
} | toshniwal2024openmathinstruct-2: |
arxiv-664520 | 2410.01562 | HRTF Estimation using a Score-based Prior | <|reference_start|>HRTF Estimation using a Score-based Prior: We present a head-related transfer function (HRTF) estimation method which relies on a data-driven prior given by a score-based diffusion model. The HRTF is estimated in reverberant environments using natural excitation signals, e.g. human speech. The impulse response of the room is estimated along with the HRTF by optimizing a parametric model of reverberation based on the statistical behaviour of room acoustics. The posterior distribution of HRTF given the reverberant measurement and excitation signal is modelled using the score-based HRTF prior and a log-likelihood approximation. We show that the resulting method outperforms several baselines, including an oracle recommender system that assigns the optimal HRTF in our training set based on the smallest distance to the true HRTF at the given direction of arrival. In particular, we show that the diffusion prior can account for the large variability of high-frequency content in HRTFs.<|reference_end|> | arxiv | @article{thuillier2024hrtf,
title={HRTF Estimation using a Score-based Prior},
author={Etienne Thuillier, Jean-Marie Lemercier, Eloi Moliner, Timo Gerkmann,
Vesa V\"alim\"aki},
journal={arXiv preprint arXiv:2410.01562},
year={2024},
archivePrefix={arXiv},
eprint={2410.01562},
primaryClass={eess.AS cs.LG cs.SD}
} | thuillier2024hrtf |
arxiv-664521 | 2410.01564 | Outage Probability Analysis for OTFS in Lossy Communications | <|reference_start|>Outage Probability Analysis for OTFS in Lossy Communications: This paper analyzes the outage probability of orthogonal time frequency space (OTFS) modulation under a lossy communication scenario. First of all, we introduce the channel model and the vector form representation of OTFS that this paper uses. Then, we derive an exact expression of the OTFS outage probability in lossy communication scenarios, using Shannon's lossy source-channel separation theorem. Because the channel is time-varying, calculating the exact outage probability is computationally expensive. Therefore, this paper aims to derive a lower bound of the outage probability, which can relatively easily be calculated. Thus, given the distortion requirement and the number of resolvable paths, we can obtain a performance limit under the optimal condition as a reference. Finally, the experimental results of outage probability are obtained by the Monte-Carlo method and compared with the theoretical results calculated by the closed-form expression of the lower bound.<|reference_end|> | arxiv | @article{zhang2024outage,
title={Outage Probability Analysis for OTFS in Lossy Communications},
author={Xin Zhang, Wensheng Lin, Lixin Li, Fucheng Yang, Zhu Han and Tad
Matsumoto},
journal={arXiv preprint arXiv:2410.01564},
year={2024},
archivePrefix={arXiv},
eprint={2410.01564},
primaryClass={cs.IT cs.NI math.IT}
} | zhang2024outage |
arxiv-664522 | 2410.01565 | Bayes' Power for Explaining In-Context Learning Generalizations | <|reference_start|>Bayes' Power for Explaining In-Context Learning Generalizations: Traditionally, neural network training has been primarily viewed as an approximation of maximum likelihood estimation (MLE). This interpretation originated in a time when training for multiple epochs on small datasets was common and performance was data-bound; but it falls short in the era of large-scale single-epoch trainings ushered in by large self-supervised setups, like language models. In this new setup, performance is compute-bound, but data is readily available. As models became more powerful, in-context learning (ICL), i.e., learning in a single forward-pass based on the context, emerged as one of the dominant paradigms. In this paper, we argue that a more useful interpretation of neural network behavior in this era is as an approximation of the true posterior, as defined by the data-generating process. We demonstrate this interpretation's power for ICL and its usefulness for predicting generalizations to previously unseen tasks. We show how models become robust in-context learners by effectively composing knowledge from their training data. We illustrate this with experiments that reveal surprising generalizations, all explicable through the exact posterior. Finally, we show the inherent constraints of the generalization capabilities of posteriors and the limitations of neural networks in approximating these posteriors.<|reference_end|> | arxiv | @article{müller2024bayes',
title={Bayes' Power for Explaining In-Context Learning Generalizations},
author={Samuel M\"uller, Noah Hollmann, Frank Hutter},
journal={arXiv preprint arXiv:2410.01565},
year={2024},
archivePrefix={arXiv},
eprint={2410.01565},
primaryClass={cs.LG stat.ML}
} | müller2024bayes' |
arxiv-664523 | 2410.01567 | Design of Convolutional Codes for Varying Constraint Lengths | <|reference_start|>Design of Convolutional Codes for Varying Constraint Lengths: This paper explores the design of convolutional codes for varying constraint lengths, focusing on their role in error correction in digital communication systems. Convolutional codes are essential in achieving reliable data transmission across noisy channels. The constraint length, which determines the memory of the encoder, plays a critical role in the performance of convolutional codes. This study investigates the effect of varying constraint lengths on coding performance, including code rate, complexity, and decoding accuracy. Simulation results and theoretical analysis illustrate the trade-offs between constraint length and decoding efficiency.<|reference_end|> | arxiv | @article{dhounde2024design,
title={Design of Convolutional Codes for Varying Constraint Lengths},
author={Parag Dhounde, Avinash Bhute},
journal={arXiv preprint arXiv:2410.01567},
year={2024},
archivePrefix={arXiv},
eprint={2410.01567},
primaryClass={cs.IT eess.SP math.IT}
} | dhounde2024design |
arxiv-664524 | 2410.01568 | Adaptive Exploit Generation against Security Devices and Security APIs | <|reference_start|>Adaptive Exploit Generation against Security Devices and Security APIs: Proof-of-concept exploits help demonstrate a software vulnerability beyond doubt and communicate attacks to non-experts. But exploits can be configuration-specific, for example in Security APIs, where keys are set up specifically for the application and enterprise the API serves. In this work, we show how to automatically derive proof-of-concept exploits against Security APIs using formal methods. We extend the popular protocol verifier ProVerif with a language-agnostic template mechanism. Employing program snippets attached to steps in the model, we can transform attack traces (which ProVerif typically finds automatically) into programs. Our method is general, flexible and convenient. We demonstrate its use for the W3C Web Cryptography API, for PKCS#11 and for the YubiHSM2, providing the first formal model of the latter.<|reference_end|> | arxiv | @article{künnemann2024adaptive,
title={Adaptive Exploit Generation against Security Devices and Security APIs},
author={Robert K\"unnemann and Julian Biehl},
journal={arXiv preprint arXiv:2410.01568},
year={2024},
archivePrefix={arXiv},
eprint={2410.01568},
primaryClass={cs.CR}
} | künnemann2024adaptive |
arxiv-664525 | 2410.01570 | Truncated Kernel Stochastic Gradient Descent on Spheres | <|reference_start|>Truncated Kernel Stochastic Gradient Descent on Spheres: Inspired by the structure of spherical harmonics, we propose the truncated kernel stochastic gradient descent (T-kernel SGD) algorithm with a least-square loss function for spherical data fitting. T-kernel SGD employs a "truncation" operation, enabling the application of series-based kernel functions in stochastic gradient descent, thereby avoiding the difficulties of finding suitable closed-form kernel functions in high-dimensional spaces. In contrast to traditional kernel SGD, T-kernel SGD is more effective in balancing bias and variance by dynamically adjusting the hypothesis space during iterations. The most significant advantage of the proposed algorithm is that it can achieve theoretically optimal convergence rates using a constant step size (independent of the sample size) while overcoming the inherent saturation problem of kernel SGD. Additionally, we leverage the structure of spherical polynomials to derive an equivalent T-kernel SGD, significantly reducing storage and computational costs compared to kernel SGD. Typically, T-kernel SGD requires only $\mathcal{O}(n^{1+\frac{d}{d-1}\epsilon})$ computational complexity and $\mathcal{O}(n^{\frac{d}{d-1}\epsilon})$ storage to achieve optimal rates for the $d$-dimensional sphere, where $0<\epsilon<\frac{1}{2}$ can be arbitrarily small if the optimal fitting or the underlying space possesses sufficient regularity. This regularity is determined by the smoothness parameter of the objective function and the decaying rate of the eigenvalues of the integral operator associated with the kernel function, both of which reflect the difficulty of the estimation problem. Our main results quantitatively characterize how this prior information influences the convergence of T-kernel SGD. The numerical experiments further validate the theoretical findings presented in this paper.<|reference_end|> | arxiv | @article{bai2024truncated,
title={Truncated Kernel Stochastic Gradient Descent on Spheres},
author={JinHui Bai and Lei Shi},
journal={arXiv preprint arXiv:2410.01570},
year={2024},
archivePrefix={arXiv},
eprint={2410.01570},
primaryClass={cs.LG}
} | bai2024truncated |
arxiv-664526 | 2410.01573 | PASS:Test-Time Prompting to Adapt Styles and Semantic Shapes in Medical Image Segmentation | <|reference_start|>PASS:Test-Time Prompting to Adapt Styles and Semantic Shapes in Medical Image Segmentation: Test-time adaptation (TTA) has emerged as a promising paradigm to handle the domain shifts at test time for medical images from different institutions without using extra training data. However, existing TTA solutions for segmentation tasks suffer from (1) dependency on modifying the source training stage and access to source priors or (2) lack of emphasis on shape-related semantic knowledge that is crucial for segmentation tasks.Recent research on visual prompt learning achieves source-relaxed adaptation by extended parameter space but still neglects the full utilization of semantic features, thus motivating our work on knowledge-enriched deep prompt learning. Beyond the general concern of image style shifts, we reveal that shape variability is another crucial factor causing the performance drop. To address this issue, we propose a TTA framework called PASS (Prompting to Adapt Styles and Semantic shapes), which jointly learns two types of prompts: the input-space prompt to reformulate the style of the test image to fit into the pretrained model and the semantic-aware prompts to bridge high-level shape discrepancy across domains. Instead of naively imposing a fixed prompt, we introduce an input decorator to generate the self-regulating visual prompt conditioned on the input data. To retrieve the knowledge representations and customize target-specific shape prompts for each test sample, we propose a cross-attention prompt modulator, which performs interaction between target representations and an enriched shape prompt bank. Extensive experiments demonstrate the superior performance of PASS over state-of-the-art methods on multiple medical image segmentation datasets. The code is available at https://github.com/EndoluminalSurgicalVision-IMR/PASS.<|reference_end|> | arxiv | @article{zhang2024pass:test-time,
title={PASS:Test-Time Prompting to Adapt Styles and Semantic Shapes in Medical
Image Segmentation},
author={Chuyan Zhang, Hao Zheng, Xin You, Yefeng Zheng, Yun Gu},
journal={arXiv preprint arXiv:2410.01573},
year={2024},
archivePrefix={arXiv},
eprint={2410.01573},
primaryClass={cs.CV}
} | zhang2024pass:test-time |
arxiv-664527 | 2410.01574 | Fake It Until You Break It: On the Adversarial Robustness of AI-generated Image Detectors | <|reference_start|>Fake It Until You Break It: On the Adversarial Robustness of AI-generated Image Detectors: While generative AI (GenAI) offers countless possibilities for creative and productive tasks, artificially generated media can be misused for fraud, manipulation, scams, misinformation campaigns, and more. To mitigate the risks associated with maliciously generated media, forensic classifiers are employed to identify AI-generated content. However, current forensic classifiers are often not evaluated in practically relevant scenarios, such as the presence of an attacker or when real-world artifacts like social media degradations affect images. In this paper, we evaluate state-of-the-art AI-generated image (AIGI) detectors under different attack scenarios. We demonstrate that forensic classifiers can be effectively attacked in realistic settings, even when the attacker does not have access to the target model and post-processing occurs after the adversarial examples are created, which is standard on social media platforms. These attacks can significantly reduce detection accuracy to the extent that the risks of relying on detectors outweigh their benefits. Finally, we propose a simple defense mechanism to make CLIP-based detectors, which are currently the best-performing detectors, robust against these attacks.<|reference_end|> | arxiv | @article{mavali2024fake,
title={Fake It Until You Break It: On the Adversarial Robustness of
AI-generated Image Detectors},
author={Sina Mavali, Jonas Ricker, David Pape, Yash Sharma, Asja Fischer, Lea
Sch\"onherr},
journal={arXiv preprint arXiv:2410.01574},
year={2024},
archivePrefix={arXiv},
eprint={2410.01574},
primaryClass={cs.CV cs.LG}
} | mavali2024fake |
arxiv-664528 | 2410.01575 | Computing Ex Ante Equilibrium in Heterogeneous Zero-Sum Team Games | <|reference_start|>Computing Ex Ante Equilibrium in Heterogeneous Zero-Sum Team Games: The ex ante equilibrium for two-team zero-sum games, where agents within each team collaborate to compete against the opposing team, is known to be the best a team can do for coordination. Many existing works on ex ante equilibrium solutions are aiming to extend the scope of ex ante equilibrium solving to large-scale team games based on Policy Space Response Oracle (PSRO). However, the joint team policy space constructed by the most prominent method, Team PSRO, cannot cover the entire team policy space in heterogeneous team games where teammates play distinct roles. Such insufficient policy expressiveness causes Team PSRO to be trapped into a sub-optimal ex ante equilibrium with significantly higher exploitability and never converges to the global ex ante equilibrium. To find the global ex ante equilibrium without introducing additional computational complexity, we first parameterize heterogeneous policies for teammates, and we prove that optimizing the heterogeneous teammates' policies sequentially can guarantee a monotonic improvement in team rewards. We further propose Heterogeneous-PSRO (H-PSRO), a novel framework for heterogeneous team games, which integrates the sequential correlation mechanism into the PSRO framework and serves as the first PSRO framework for heterogeneous team games. We prove that H-PSRO achieves lower exploitability than Team PSRO in heterogeneous team games. Empirically, H-PSRO achieves convergence in matrix heterogeneous games that are unsolvable by non-heterogeneous baselines. Further experiments reveal that H-PSRO outperforms non-heterogeneous baselines in both heterogeneous team games and homogeneous settings.<|reference_end|> | arxiv | @article{liu2024computing,
title={Computing Ex Ante Equilibrium in Heterogeneous Zero-Sum Team Games},
author={Naming Liu, Mingzhi Wang, Xihuai Wang, Weinan Zhang, Yaodong Yang,
Youzhi Zhang, Bo An, Ying Wen},
journal={arXiv preprint arXiv:2410.01575},
year={2024},
archivePrefix={arXiv},
eprint={2410.01575},
primaryClass={cs.GT cs.AI}
} | liu2024computing |
arxiv-664529 | 2410.01577 | Coordinate-Based Neural Representation Enabling Zero-Shot Learning for 3D Multiparametric Quantitative MRI | <|reference_start|>Coordinate-Based Neural Representation Enabling Zero-Shot Learning for 3D Multiparametric Quantitative MRI: Quantitative magnetic resonance imaging (qMRI) offers tissue-specific physical parameters with significant potential for neuroscience research and clinical practice. However, lengthy scan times for 3D multiparametric qMRI acquisition limit its clinical utility. Here, we propose SUMMIT, an innovative imaging methodology that includes data acquisition and an unsupervised reconstruction for simultaneous multiparametric qMRI. SUMMIT first encodes multiple important quantitative properties into highly undersampled k-space. It further leverages implicit neural representation incorporated with a dedicated physics model to reconstruct the desired multiparametric maps without needing external training datasets. SUMMIT delivers co-registered T1, T2, T2*, and quantitative susceptibility mapping. Extensive simulations and phantom imaging demonstrate SUMMIT's high accuracy. Additionally, the proposed unsupervised approach for qMRI reconstruction also introduces a novel zero-shot learning paradigm for multiparametric imaging applicable to various medical imaging modalities.<|reference_end|> | arxiv | @article{lao2024coordinate-based,
title={Coordinate-Based Neural Representation Enabling Zero-Shot Learning for
3D Multiparametric Quantitative MRI},
author={Guoyan Lao, Ruimin Feng, Haikun Qi, Zhenfeng Lv, Qiangqiang Liu,
Chunlei Liu, Yuyao Zhang, and Hongjiang Wei},
journal={arXiv preprint arXiv:2410.01577},
year={2024},
archivePrefix={arXiv},
eprint={2410.01577},
primaryClass={cs.CV cs.LG}
} | lao2024coordinate-based |
arxiv-664530 | 2410.01579 | Spoken Grammar Assessment Using LLM | <|reference_start|>Spoken Grammar Assessment Using LLM: Spoken language assessment (SLA) systems restrict themselves to evaluating the pronunciation and oral fluency of a speaker by analysing the read and spontaneous spoken utterances respectively. The assessment of language grammar or vocabulary is relegated to written language assessment (WLA) systems. Most WLA systems present a set of sentences from a curated finite-size database of sentences thereby making it possible to anticipate the test questions and train oneself. In this paper, we propose a novel end-to-end SLA system to assess language grammar from spoken utterances thus making WLA systems redundant; additionally, we make the assessment largely unteachable by employing a large language model (LLM) to bring in variations in the test. We further demonstrate that a hybrid automatic speech recognition (ASR) with a custom-built language model outperforms the state-of-the-art ASR engine for spoken grammar assessment.<|reference_end|> | arxiv | @article{kopparapu2024spoken,
title={Spoken Grammar Assessment Using LLM},
author={Sunil Kumar Kopparapu and Chitralekha Bhat and Ashish Panda},
journal={arXiv preprint arXiv:2410.01579},
year={2024},
archivePrefix={arXiv},
eprint={2410.01579},
primaryClass={cs.CL cs.AI}
} | kopparapu2024spoken |
arxiv-664531 | 2410.01580 | Learning-Augmented Robust Algorithmic Recourse | <|reference_start|>Learning-Augmented Robust Algorithmic Recourse: The widespread use of machine learning models in high-stakes domains can have a major negative impact, especially on individuals who receive undesirable outcomes. Algorithmic recourse provides such individuals with suggestions of minimum-cost improvements they can make to achieve a desirable outcome in the future. However, machine learning models often get updated over time and this can cause a recourse to become invalid (i.e., not lead to the desirable outcome). The robust recourse literature aims to choose recourses that are less sensitive, even against adversarial model changes, but this comes at a higher cost. To overcome this obstacle, we initiate the study of algorithmic recourse through the learning-augmented framework and evaluate the extent to which a designer equipped with a prediction regarding future model changes can reduce the cost of recourse when the prediction is accurate (consistency) while also limiting the cost even when the prediction is inaccurate (robustness). We propose a novel algorithm for this problem, study the robustness-consistency trade-off, and analyze how prediction accuracy affects performance.<|reference_end|> | arxiv | @article{kayastha2024learning-augmented,
title={Learning-Augmented Robust Algorithmic Recourse},
author={Kshitij Kayastha, Vasilis Gkatzelis, Shahin Jabbari},
journal={arXiv preprint arXiv:2410.01580},
year={2024},
archivePrefix={arXiv},
eprint={2410.01580},
primaryClass={cs.LG}
} | kayastha2024learning-augmented |
arxiv-664532 | 2410.01583 | Iterated Local Search with Linkage Learning | <|reference_start|>Iterated Local Search with Linkage Learning: In pseudo-Boolean optimization, a variable interaction graph represents variables as vertices, and interactions between pairs of variables as edges. In black-box optimization, the variable interaction graph may be at least partially discovered by using empirical linkage learning techniques. These methods never report false variable interactions, but they are computationally expensive. The recently proposed local search with linkage learning discovers the partial variable interaction graph as a side-effect of iterated local search. However, information about the strength of the interactions is not learned by the algorithm. We propose local search with linkage learning 2, which builds a weighted variable interaction graph that stores information about the strength of the interaction between variables. The weighted variable interaction graph can provide new insights about the optimization problem and behavior of optimizers. Experiments with NK landscapes, knapsack problem, and feature selection show that local search with linkage learning 2 is able to efficiently build weighted variable interaction graphs. In particular, experiments with feature selection show that the weighted variable interaction graphs can be used for visualizing the feature interactions in machine learning. Additionally, new transformation operators that exploit the interactions between variables can be designed. We illustrate this ability by proposing a new perturbation operator for iterated local search.<|reference_end|> | arxiv | @article{tinós2024iterated,
title={Iterated Local Search with Linkage Learning},
author={Renato Tin\'os, Michal W. Przewozniczek, Darrell Whitley, Francisco
Chicano},
journal={ACM Transactions on Evolutionary Learning and Optimization, Volume
4, Issue 2, Article No. 7, Pages 1-29, 2024},
year={2024},
doi={10.1145/3651165},
archivePrefix={arXiv},
eprint={2410.01583},
primaryClass={cs.AI}
} | tinós2024iterated |
arxiv-664533 | 2410.01584 | AI-Native Network Digital Twin for Intelligent Network Management in 6G | <|reference_start|>AI-Native Network Digital Twin for Intelligent Network Management in 6G: As a pivotal virtualization technology, network digital twin is expected to accurately reflect real-time status and abstract features in the on-going sixth generation (6G) networks. In this article, we propose an artificial intelligence (AI)-native network digital twin framework for 6G networks to enable the synergy of AI and network digital twin, thereby facilitating intelligent network management. In the proposed framework, AI models are utilized to establish network digital twin models to facilitate network status prediction, network pattern abstraction, and network management decision-making. Furthermore, potential solutions are proposed to enhance the performance of the network digital twin. Finally, a case study is presented, followed by a discussion of open research issues that are essential for AI-native network digital twin in 6G networks.<|reference_end|> | arxiv | @article{wu2024ai-native,
title={AI-Native Network Digital Twin for Intelligent Network Management in 6G},
author={Wen Wu, Xinyu Huang, and Tom H. Luan},
journal={arXiv preprint arXiv:2410.01584},
year={2024},
archivePrefix={arXiv},
eprint={2410.01584},
primaryClass={cs.NI cs.SY eess.SY}
} | wu2024ai-native |
arxiv-664534 | 2410.01585 | Avatar Appearance and Behavior of Potential Harassers Affect Users' Perceptions and Response Strategies in Social Virtual Reality (VR): A Mixed-Methods Study | <|reference_start|>Avatar Appearance and Behavior of Potential Harassers Affect Users' Perceptions and Response Strategies in Social Virtual Reality (VR): A Mixed-Methods Study: Sexual harassment has been recognized as a significant social issue. In recent years, the emergence of harassment in social virtual reality (VR) has become an important and urgent research topic. We employed a mixed-methods approach by conducting online surveys with VR users (N = 166) and semi-structured interviews with social VR users (N = 18) to investigate how users perceive sexual harassment in social VR, focusing on the influence of avatar appearance. Moreover, we derived users' response strategies to sexual harassment and gained insights on platform regulation. This study contributes to the research on sexual harassment in social VR by examining the moderating effect of avatar appearance on user perception of sexual harassment and uncovering the underlying reasons behind response strategies. Moreover, it presents novel prospects and challenges in platform design and regulation domains.<|reference_end|> | arxiv | @article{wang2024avatar,
title={Avatar Appearance and Behavior of Potential Harassers Affect Users'
Perceptions and Response Strategies in Social Virtual Reality (VR): A
Mixed-Methods Study},
author={Xuetong Wang, Ziyan Wang, Mingmin Zhang, Kangyou Yu, Pan Hui, Mingming
Fan},
journal={arXiv preprint arXiv:2410.01585},
year={2024},
archivePrefix={arXiv},
eprint={2410.01585},
primaryClass={cs.HC}
} | wang2024avatar |
arxiv-664535 | 2410.01588 | DynFrs: An Efficient Framework for Machine Unlearning in Random Forest | <|reference_start|>DynFrs: An Efficient Framework for Machine Unlearning in Random Forest: Random Forests are widely recognized for their efficacy in classification and regression tasks, standing out in various domains such as medical diagnosis, finance, and personalized recommendations. These domains, however, are inherently sensitive to privacy concerns, as personal and confidential data are involved. With increasing demand for the right to be forgotten, particularly under regulations such as GDPR and CCPA, the ability to perform machine unlearning has become crucial for Random Forests. However, insufficient attention has been paid to this topic, and existing approaches face difficulties in being applied to real-world scenarios. Addressing this gap, we propose the DynFrs framework designed to enable efficient machine unlearning in Random Forests while preserving predictive accuracy. DynFrs leverages the subsampling method Occ(q) and a lazy tag strategy Lzy, and is still adaptable to any Random Forest variant. In essence, Occ(q) ensures that each sample in the training set occurs only in a proportion of trees so that the impact of deleting samples is limited, and Lzy delays the reconstruction of a tree node until necessary, thereby avoiding unnecessary modifications to tree structures. In experiments, applying DynFrs to Extremely Randomized Trees yields substantial improvements, achieving orders of magnitude faster unlearning performance and better predictive accuracy than existing machine unlearning methods for Random Forests.<|reference_end|> | arxiv | @article{wang2024dynfrs:,
title={DynFrs: An Efficient Framework for Machine Unlearning in Random Forest},
author={Shurong Wang, Zhuoyang Shen, Xinbao Qiao, Tongning Zhang, Meng Zhang},
journal={arXiv preprint arXiv:2410.01588},
year={2024},
archivePrefix={arXiv},
eprint={2410.01588},
primaryClass={cs.LG}
} | wang2024dynfrs: |
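The Occ(q) idea above, where each training sample occurs in only a proportion q of the trees, bounds how many trees a single deletion can invalidate. A minimal sketch of this bookkeeping follows, with hypothetical class and method names; the actual DynFrs implementation additionally applies the Lzy lazy-tag strategy inside each affected tree.

```python
import random

class OccForest:
    """Occ(q) bookkeeping sketch: every sample is assigned to a random
    subset of ceil(q * T) trees, so deleting it only touches those."""
    def __init__(self, n_trees=100, q=0.1, seed=0):
        self.n_trees = n_trees
        self.k = max(1, int(q * n_trees))
        self.rng = random.Random(seed)
        self.sample_to_trees = {}                            # id -> tree ids
        self.tree_to_samples = {t: set() for t in range(n_trees)}

    def add(self, sample_id):
        trees = self.rng.sample(range(self.n_trees), self.k)
        self.sample_to_trees[sample_id] = trees
        for t in trees:
            self.tree_to_samples[t].add(sample_id)

    def unlearn(self, sample_id):
        """Return the (small) set of trees that must be patched."""
        trees = self.sample_to_trees.pop(sample_id)
        for t in trees:
            self.tree_to_samples[t].discard(sample_id)
        return trees

forest = OccForest()
for i in range(1000):
    forest.add(i)
print(len(forest.unlearn(42)), "of", forest.n_trees, "trees affected")
```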
arxiv-664536 | 2410.01589 | Min-Time Escape of a Dubins Car from a Polygon | <|reference_start|>Min-Time Escape of a Dubins Car from a Polygon: A turn-constrained vehicle is initially located inside a polygonal region and desires to escape in minimum time. First, the method of characteristics is used to describe the time-optimal strategies for reaching a line of infinite length. Next, the approach is extended to polygons constructed of a series of line segments. Using this construction technique, the min-time path to reach each edge is obtained; the resulting minimum of the set of optimal trajectories is then selected for escaping the polygon.<|reference_end|> | arxiv | @article{weintraub2024min-time,
title={Min-Time Escape of a Dubins Car from a Polygon},
author={Isaac E. Weintraub, Alexander Von Moll, David Casbeer, Satyanarayana G
Manyam, Meir Pachter, and Colin Taylor},
journal={arXiv preprint arXiv:2410.01589},
year={2024},
archivePrefix={arXiv},
eprint={2410.01589},
primaryClass={eess.SY cs.SY}
} | weintraub2024min-time |
arxiv-664537 | 2410.01590 | Active Learning of Deterministic Transducers with Outputs in Arbitrary Monoids | <|reference_start|>Active Learning of Deterministic Transducers with Outputs in Arbitrary Monoids: We study monoidal transducers, transition systems arising as deterministic automata whose transitions also produce outputs in an arbitrary monoid, for instance allowing outputs to commute or to cancel out. We use the categorical framework for minimization and learning of Colcombet, Petri\c{s}an and Stabile to recover the notion of minimal transducer recognizing a language, and give necessary and sufficient conditions on the output monoid for this minimal transducer to exist and be unique (up to isomorphism). The categorical framework then provides an abstract algorithm for learning it using membership and equivalence queries, and we discuss practical aspects of this algorithm's implementation.<|reference_end|> | arxiv | @article{aristote2024active,
title={Active Learning of Deterministic Transducers with Outputs in Arbitrary
Monoids},
author={Quentin Aristote (Universit\'e Paris Cit\'e, CNRS, Inria, IRIF,
F-75013, Paris, France)},
journal={arXiv preprint arXiv:2410.01590},
year={2024},
doi={10.4230/LIPIcs.CSL.2024.11},
archivePrefix={arXiv},
eprint={2410.01590},
primaryClass={cs.FL}
} | aristote2024active |
arxiv-664538 | 2410.01591 | Imaging foundation model for universal enhancement of non-ideal measurement CT | <|reference_start|>Imaging foundation model for universal enhancement of non-ideal measurement CT: Non-ideal measurement computed tomography (NICT), which sacrifices optimal imaging standards for new advantages in CT imaging, is expanding the clinical application scope of CT images. However, with the reduction of imaging standards, the image quality has also been reduced, severely limiting clinical acceptability. Although numerous studies have demonstrated the feasibility of deep learning for the NICT enhancement in specific scenarios, their high data cost and limited generalizability have become major obstacles. The recent research on the foundation model has brought new opportunities for building a universal NICT enhancement model - bridging the image quality degradation with minimal data cost. However, owing to the challenges in the collection of large pre-training datasets and the compatibility of data variation, no success has been reported. In this paper, we propose a multi-scale integrated Transformer AMPlifier (TAMP), the first imaging foundation model for universal NICT enhancement. It has been pre-trained on a large-scale physics-driven simulation dataset with 3.6 million NICT-ICT image pairs, and is able to directly generalize to the NICT enhancement tasks with various non-ideal settings and body regions. Via the adaptation with few data, it can further achieve professional performance in real-world specific scenarios. Our extensive experiments have demonstrated that the proposed TAMP has significant potential for promoting the exploration and application of NICT and serving a wider range of medical scenarios.<|reference_end|> | arxiv | @article{liu2024imaging,
title={Imaging foundation model for universal enhancement of non-ideal
measurement CT},
author={Yuxin Liu, Rongjun Ge, Yuting He, Zhan Wu, Chenyu You, Shuo Li, Yang
Chen},
journal={arXiv preprint arXiv:2410.01591},
year={2024},
archivePrefix={arXiv},
eprint={2410.01591},
primaryClass={eess.IV cs.AI cs.CV}
} | liu2024imaging |
arxiv-664539 | 2410.01594 | MM-LDM: Multi-Modal Latent Diffusion Model for Sounding Video Generation | <|reference_start|>MM-LDM: Multi-Modal Latent Diffusion Model for Sounding Video Generation: Sounding Video Generation (SVG) is an audio-video joint generation task challenged by high-dimensional signal spaces, distinct data formats, and different patterns of content information. To address these issues, we introduce a novel multi-modal latent diffusion model (MM-LDM) for the SVG task. We first unify the representation of audio and video data by converting them into a single or a couple of images. Then, we introduce a hierarchical multi-modal autoencoder that constructs a low-level perceptual latent space for each modality and a shared high-level semantic feature space. The former space is perceptually equivalent to the raw signal space of each modality but drastically reduces signal dimensions. The latter space serves to bridge the information gap between modalities and provides more insightful cross-modal guidance. Our proposed method achieves new state-of-the-art results with significant quality and efficiency gains. Specifically, our method achieves a comprehensive improvement on all evaluation metrics and a faster training and sampling speed on Landscape and AIST++ datasets. Moreover, we explore its performance on open-domain sounding video generation, long sounding video generation, audio continuation, video continuation, and conditional single-modal generation tasks for a comprehensive evaluation, where our MM-LDM demonstrates exciting adaptability and generalization ability.<|reference_end|> | arxiv | @article{sun2024mm-ldm:,
title={MM-LDM: Multi-Modal Latent Diffusion Model for Sounding Video Generation},
author={Mingzhen Sun, Weining Wang, Yanyuan Qiao, Jiahui Sun, Zihan Qin,
Longteng Guo, Xinxin Zhu, Jing Liu},
journal={arXiv preprint arXiv:2410.01594},
year={2024},
archivePrefix={arXiv},
eprint={2410.01594},
primaryClass={cs.CV}
} | sun2024mm-ldm: |
arxiv-664540 | 2410.01595 | KnobGen: Controlling the Sophistication of Artwork in Sketch-Based Diffusion Models | <|reference_start|>KnobGen: Controlling the Sophistication of Artwork in Sketch-Based Diffusion Models: Recent advances in diffusion models have significantly improved text-to-image (T2I) generation, but they often struggle to balance fine-grained precision with high-level control. Methods like ControlNet and T2I-Adapter excel at following sketches by seasoned artists but tend to be overly rigid, replicating unintentional flaws in sketches from novice users. Meanwhile, coarse-grained methods, such as sketch-based abstraction frameworks, offer more accessible input handling but lack the precise control needed for detailed, professional use. To address these limitations, we propose KnobGen, a dual-pathway framework that democratizes sketch-based image generation by seamlessly adapting to varying levels of sketch complexity and user skill. KnobGen uses a Coarse-Grained Controller (CGC) module for high-level semantics and a Fine-Grained Controller (FGC) module for detailed refinement. The relative strength of these two modules can be adjusted through our knob inference mechanism to align with the user's specific needs. These mechanisms ensure that KnobGen can flexibly generate images from both novice sketches and those drawn by seasoned artists. This maintains control over the final output while preserving the natural appearance of the image, as evidenced on the MultiGen-20M dataset and a newly collected sketch dataset.<|reference_end|> | arxiv | @article{navard2024knobgen:,
title={KnobGen: Controlling the Sophistication of Artwork in Sketch-Based
Diffusion Models},
author={Pouyan Navard, Amin Karimi Monsefi, Mengxi Zhou, Wei-Lun Chao, Alper
Yilmaz, Rajiv Ramnath},
journal={arXiv preprint arXiv:2410.01595},
year={2024},
archivePrefix={arXiv},
eprint={2410.01595},
primaryClass={cs.CV cs.AI}
} | navard2024knobgen: |
arxiv-664541 | 2410.01597 | SAFE: Semantic Adaptive Feature Extraction with Rate Control for 6G Wireless Communications | <|reference_start|>SAFE: Semantic Adaptive Feature Extraction with Rate Control for 6G Wireless Communications: Most current Deep Learning-based Semantic Communication (DeepSC) systems are designed and trained exclusively for particular single-channel conditions, which restricts their adaptability and overall bandwidth utilization. To address this, we propose an innovative Semantic Adaptive Feature Extraction (SAFE) framework, which significantly improves bandwidth efficiency by allowing users to select different sub-semantic combinations based on their channel conditions. This paper also introduces three advanced learning algorithms to optimize the performance of the SAFE framework as a whole. Through a series of simulation experiments, we demonstrate that the SAFE framework can effectively and adaptively extract and transmit semantics under different channel bandwidth conditions, whose effectiveness is verified through objective and subjective quality evaluations.<|reference_end|> | arxiv | @article{yan2024safe:,
title={SAFE: Semantic Adaptive Feature Extraction with Rate Control for 6G
Wireless Communications},
author={Yuna Yan, Lixin Li, Xin Zhang, Wensheng Lin, Wenchi Cheng and Zhu Han},
journal={arXiv preprint arXiv:2410.01597},
year={2024},
archivePrefix={arXiv},
eprint={2410.01597},
primaryClass={cs.NI cs.LG eess.SP}
} | yan2024safe: |
arxiv-664542 | 2410.01598 | Elaborative Subtopic Query Reformulation for Broad and Indirect Queries in Travel Destination Recommendation | <|reference_start|>Elaborative Subtopic Query Reformulation for Broad and Indirect Queries in Travel Destination Recommendation: In Query-driven Travel Recommender Systems (RSs), it is crucial to understand the user intent behind challenging natural language(NL) destination queries such as the broadly worded "youth-friendly activities" or the indirect description "a high school graduation trip". Such queries are challenging due to the wide scope and subtlety of potential user intents that confound the ability of retrieval methods to infer relevant destinations from available textual descriptions such as WikiVoyage. While query reformulation (QR) has proven effective in enhancing retrieval by addressing user intent, existing QR methods tend to focus only on expanding the range of potentially matching query subtopics (breadth) or elaborating on the potential meaning of a query (depth), but not both. In this paper, we introduce Elaborative Subtopic Query Reformulation (EQR), a large language model-based QR method that combines both breadth and depth by generating potential query subtopics with information-rich elaborations. We also release TravelDest, a novel dataset for query-driven travel destination RSs. Experiments on TravelDest show that EQR achieves significant improvements in recall and precision over existing state-of-the-art QR methods.<|reference_end|> | arxiv | @article{wen2024elaborative,
title={Elaborative Subtopic Query Reformulation for Broad and Indirect Queries
in Travel Destination Recommendation},
author={Qianfeng Wen, Yifan Liu, Joshua Zhang, George Saad, Anton Korikov,
Yury Sambale and Scott Sanner},
journal={arXiv preprint arXiv:2410.01598},
year={2024},
archivePrefix={arXiv},
eprint={2410.01598},
primaryClass={cs.IR cs.AI}
} | wen2024elaborative |
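A rough sketch of how the breadth-plus-depth reformulation described above could be realized in one LLM call; the prompt wording, the `ask_llm` callable, and the paragraph-splitting convention are all hypothetical, not the paper's implementation.

```python
EQR_PROMPT = """\
Query: "{query}"
List {k} distinct subtopics a traveler might mean by this query.
For each subtopic, write 2-3 sentences elaborating on what a matching
destination would offer (activities, atmosphere, typical visitors)."""

def elaborative_subtopic_reformulation(query, ask_llm, k=5):
    """Breadth (subtopics) and depth (elaborations) in one LLM call.
    `ask_llm` is a hypothetical callable: prompt string -> response text.
    Each returned paragraph is then embedded and matched against
    destination descriptions in place of the raw query."""
    response = ask_llm(EQR_PROMPT.format(query=query, k=k))
    return [p.strip() for p in response.split("\n\n") if p.strip()]
```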
arxiv-664543 | 2410.01599 | Towards Model Discovery Using Domain Decomposition and PINNs | <|reference_start|>Towards Model Discovery Using Domain Decomposition and PINNs: We enhance machine learning algorithms for learning model parameters in complex systems represented by ordinary differential equations (ODEs) with domain decomposition methods. The study evaluates the performance of two approaches, namely (vanilla) Physics-Informed Neural Networks (PINNs) and Finite Basis Physics-Informed Neural Networks (FBPINNs), in learning the dynamics of test models with a quasi-stationary long-time behavior. We test the approaches for data sets in different dynamical regions and with varying noise levels. As a result, we find better performance for the FBPINN approach compared to the vanilla PINN approach, even in cases with data from only a quasi-stationary time domain with few dynamics.<|reference_end|> | arxiv | @article{saha2024towards,
title={Towards Model Discovery Using Domain Decomposition and PINNs},
author={Tirtho S. Saha and Alexander Heinlein and Cordula Reisch},
journal={arXiv preprint arXiv:2410.01599},
year={2024},
archivePrefix={arXiv},
eprint={2410.01599},
primaryClass={math.NA cs.LG cs.NA}
} | saha2024towards |
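For readers unfamiliar with the PINN side of the comparison above, a minimal residual-plus-data loss for a scalar ODE du/dt = rhs(u) in PyTorch follows; the toy ODE and the equal loss weighting are assumptions, and an FBPINN would additionally split the time domain into overlapping subdomains, each covered by a small network.

```python
import torch

def pinn_loss(net, t_coll, t_data, u_data, rhs):
    """Physics residual + data misfit for the scalar ODE du/dt = rhs(u).
    net maps time t (N, 1) to state u (N, 1); t_coll are collocation points."""
    t = t_coll.clone().requires_grad_(True)
    u = net(t)
    du_dt, = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)
    residual = du_dt - rhs(u)
    return (residual ** 2).mean() + ((net(t_data) - u_data) ** 2).mean()

# Toy setup: logistic growth du/dt = u * (1 - u) with one observed point.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
t_coll = torch.linspace(0.0, 5.0, 64).unsqueeze(-1)
t_data = torch.tensor([[0.0]])
u_data = torch.tensor([[0.1]])
loss = pinn_loss(net, t_coll, t_data, u_data, lambda u: u * (1.0 - u))
loss.backward()
```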
arxiv-664544 | 2410.01600 | ENTP: Encoder-only Next Token Prediction | <|reference_start|>ENTP: Encoder-only Next Token Prediction: Next-token prediction models have predominantly relied on decoder-only Transformers with causal attention, driven by the common belief that causal attention is essential to prevent "cheating" by masking future tokens. We challenge this widely accepted notion and argue that this design choice is about efficiency rather than necessity. While decoder-only Transformers are still a good choice for practical reasons, they are not the only viable option. In this work, we introduce Encoder-only Next Token Prediction (ENTP). We explore the differences between ENTP and decoder-only Transformers in expressive power and complexity, highlighting potential advantages of ENTP. We introduce the Triplet-Counting task and show, both theoretically and experimentally, that while ENTP can perform this task easily, a decoder-only Transformer cannot. Finally, we empirically demonstrate ENTP's superior performance across various realistic tasks, such as length generalization and in-context learning.<|reference_end|> | arxiv | @article{ewer2024entp:,
title={ENTP: Encoder-only Next Token Prediction},
author={Ethan Ewer, Daewon Chae, Thomas Zeng, Jinkyu Kim, Kangwook Lee},
journal={arXiv preprint arXiv:2410.01600},
year={2024},
archivePrefix={arXiv},
eprint={2410.01600},
primaryClass={cs.LG cs.CL}
} | ewer2024entp: |
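The core of ENTP can be stated in a few lines: every prefix is re-encoded from scratch with full bidirectional attention, and only the last position predicts the next token. A sketch under assumed interfaces (`encoder` returning per-position states, `head` mapping states to vocabulary logits):

```python
import torch
import torch.nn.functional as F

def entp_loss(encoder, head, tokens):
    """Encoder-only next-token prediction (sketch).
    encoder: bidirectional model, (1, t) token ids -> (1, t, d) states.
    head:    linear map, (1, d) -> (1, vocab) logits.
    Each prefix is re-encoded from scratch, so the last position may
    attend over the *entire* prefix without a causal mask."""
    losses = []
    for t in range(1, tokens.size(1)):
        states = encoder(tokens[:, :t])   # full attention over the prefix
        logits = head(states[:, -1])      # last position predicts token t
        losses.append(F.cross_entropy(logits, tokens[:, t]))
    return torch.stack(losses).mean()
```

The inner loop makes the O(T) encoder calls per sequence explicit, which is exactly the efficiency price that causal decoder-only attention avoids.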
arxiv-664545 | 2410.01603 | Beamforming in Secure Integrated Sensing and Communication Systems with Antenna Allocation | <|reference_start|>Beamforming in Secure Integrated Sensing and Communication Systems with Antenna Allocation: In this paper, we consider joint antenna allocation and transmit beamforming design in secure integrated sensing and communication (ISAC) systems. A dual-function base station (DFBS) aims to securely deliver messages to a single-antenna receiver while detecting potential eavesdroppers. To prevent eavesdropping, we incorporate specialized sensing signals, intentionally reducing communication signal power toward suspicious targets to improve sensing. We prioritize minimizing the matching error between the transmitting and required beampatterns for sensing and communication. Our design optimizes antenna allocation and beamforming at the DFBS, meeting minimum secrecy rate and power constraints. We propose solvers based on alternating optimization for the non-convex design problem. Simulations show that the antenna allocation scheme significantly improves safety performance.<|reference_end|> | arxiv | @article{shi2024beamforming,
title={Beamforming in Secure Integrated Sensing and Communication Systems with
Antenna Allocation},
author={Yunxiang Shi, Lixin Li, Wensheng Lin, Wei Liang, and Zhu Han},
journal={arXiv preprint arXiv:2410.01603},
year={2024},
archivePrefix={arXiv},
eprint={2410.01603},
primaryClass={cs.NI}
} | shi2024beamforming |
arxiv-664546 | 2410.01604 | Customizing Generated Signs and Voices of AI Avatars: Deaf-Centric Mixed-Reality Design for Deaf-Hearing Communication | <|reference_start|>Customizing Generated Signs and Voices of AI Avatars: Deaf-Centric Mixed-Reality Design for Deaf-Hearing Communication: This study investigates innovative interaction designs for communication and collaborative learning between learners of mixed hearing and signing abilities, leveraging advancements in mixed reality technologies like Apple Vision Pro and generative AI for animated avatars. Adopting a participatory design approach, we engaged 15 d/Deaf and hard of hearing (DHH) students to brainstorm ideas for an AI avatar with interpreting ability (sign language to English, voice to English) that would facilitate their face-to-face communication with hearing peers. Participants envisioned the AI avatars to address some issues with human interpreters, such as lack of availability, and provide affordable alternatives to expensive personalized interpreting services. Our findings indicate a range of preferences for integrating the AI avatars with actual human figures of both DHH and hearing communication partners. The participants highlighted the importance of having control over customizing the AI avatar, such as AI-generated signs, voices, facial expressions, and their synchronization for enhanced emotional display in communication. Based on our findings, we propose a suite of design recommendations that balance respecting sign language norms with adherence to hearing social norms. Our study offers insights on improving the authenticity of generative AI in scenarios involving specific, and sometimes unfamiliar, social norms.<|reference_end|> | arxiv | @article{chen2024customizing,
title={Customizing Generated Signs and Voices of AI Avatars: Deaf-Centric
Mixed-Reality Design for Deaf-Hearing Communication},
author={Si Chen, Haocong Cheng, Suzy Su, Stephanie Patterson, Raja
Kushalnagar, Qi Wang, Yun Huang},
journal={arXiv preprint arXiv:2410.01604},
year={2024},
archivePrefix={arXiv},
eprint={2410.01604},
primaryClass={cs.HC}
} | chen2024customizing |
arxiv-664547 | 2410.01606 | Automated Red Teaming with GOAT: the Generative Offensive Agent Tester | <|reference_start|>Automated Red Teaming with GOAT: the Generative Offensive Agent Tester: Red teaming assesses how large language models (LLMs) can produce content that violates norms, policies, and rules set during their safety training. However, most existing automated methods in the literature are not representative of the way humans tend to interact with AI models. Common users of AI models may not have advanced knowledge of adversarial machine learning methods or access to model internals, and they do not spend a lot of time crafting a single highly effective adversarial prompt. Instead, they are likely to make use of techniques commonly shared online and exploit the multiturn conversational nature of LLMs. While manual testing addresses this gap, it is an inefficient and often expensive process. To address these limitations, we introduce the Generative Offensive Agent Tester (GOAT), an automated agentic red teaming system that simulates plain language adversarial conversations while leveraging multiple adversarial prompting techniques to identify vulnerabilities in LLMs. We instantiate GOAT with 7 red teaming attacks by prompting a general-purpose model in a way that encourages reasoning through the choices of methods available, the current target model's response, and the next steps. Our approach is designed to be extensible and efficient, allowing human testers to focus on exploring new areas of risk while automation covers the scaled adversarial stress-testing of known risk territory. We present the design and evaluation of GOAT, demonstrating its effectiveness in identifying vulnerabilities in state-of-the-art LLMs, with an ASR@10 of 97% against Llama 3.1 and 88% against GPT-4 on the JailbreakBench dataset.<|reference_end|> | arxiv | @article{pavlova2024automated,
title={Automated Red Teaming with GOAT: the Generative Offensive Agent Tester},
author={Maya Pavlova, Erik Brinkman, Krithika Iyer, Vitor Albiero, Joanna
Bitton, Hailey Nguyen, Joe Li, Cristian Canton Ferrer, Ivan Evtimov, Aaron
Grattafiori},
journal={arXiv preprint arXiv:2410.01606},
year={2024},
archivePrefix={arXiv},
eprint={2410.01606},
primaryClass={cs.LG cs.AI}
} | pavlova2024automated |
arxiv-664548 | 2410.01608 | Computational Teaching for Driving via Multi-Task Imitation Learning | <|reference_start|>Computational Teaching for Driving via Multi-Task Imitation Learning: Learning motor skills for sports or performance driving is often done with professional instruction from expert human teachers, whose availability is limited. Our goal is to enable automated teaching via a learned model that interacts with the student similar to a human teacher. However, training such automated teaching systems is limited by the availability of high-quality annotated datasets of expert teacher and student interactions that are difficult to collect at scale. To address this data scarcity problem, we propose an approach for training a coaching system for complex motor tasks such as high performance driving via a Multi-Task Imitation Learning (MTIL) paradigm. MTIL allows our model to learn robust representations by utilizing self-supervised training signals from more readily available non-interactive datasets of humans performing the task of interest. We validate our approach with (1) a semi-synthetic dataset created from real human driving trajectories, (2) a professional track driving instruction dataset, (3) a track-racing driving simulator human-subject study, and (4) a system demonstration on an instrumented car at a race track. Our experiments show that the right set of auxiliary machine learning tasks improves performance in predicting teaching instructions. Moreover, in the human subjects study, students exposed to the instructions from our teaching system improve their ability to stay within track limits, and show favorable perception of the model's interaction with them, in terms of usefulness and satisfaction.<|reference_end|> | arxiv | @article{gopinath2024computational,
title={Computational Teaching for Driving via Multi-Task Imitation Learning},
author={Deepak Gopinath, Xiongyi Cui, Jonathan DeCastro, Emily Sumner, Jean
Costa, Hiroshi Yasuda, Allison Morgan, Laporsha Dees, Sheryl Chau, John
Leonard, Tiffany Chen, Guy Rosman, Avinash Balachandran},
journal={arXiv preprint arXiv:2410.01608},
year={2024},
archivePrefix={arXiv},
eprint={2410.01608},
primaryClass={cs.RO}
} | gopinath2024computational |
arxiv-664549 | 2410.01609 | DAViD: Domain Adaptive Visually-Rich Document Understanding with Synthetic Insights | <|reference_start|>DAViD: Domain Adaptive Visually-Rich Document Understanding with Synthetic Insights: Visually-Rich Documents (VRDs), encompassing elements like charts, tables, and references, convey complex information across various fields. However, extracting information from these rich documents is labor-intensive, especially given their inconsistent formats and domain-specific requirements. While pretrained models for VRD Understanding have progressed, their reliance on large, annotated datasets limits scalability. This paper introduces the Domain Adaptive Visually-rich Document Understanding (DAViD) framework, which utilises machine-generated synthetic data for domain adaptation. DAViD integrates fine-grained and coarse-grained document representation learning and employs synthetic annotations to reduce the need for costly manual labelling. By leveraging pretrained models and synthetic data, DAViD achieves competitive performance with minimal annotated datasets. Extensive experiments validate DAViD's effectiveness, demonstrating its ability to efficiently adapt to domain-specific VRDU tasks.<|reference_end|> | arxiv | @article{ding2024david:,
title={DAViD: Domain Adaptive Visually-Rich Document Understanding with
Synthetic Insights},
author={Yihao Ding, Soyeon Caren Han, Zechuan Li, Hyunsuk Chung},
journal={arXiv preprint arXiv:2410.01609},
year={2024},
archivePrefix={arXiv},
eprint={2410.01609},
primaryClass={cs.CV}
} | ding2024david: |
arxiv-664550 | 2410.01610 | Upcycling Instruction Tuning from Dense to Mixture-of-Experts via Parameter Merging | <|reference_start|>Upcycling Instruction Tuning from Dense to Mixture-of-Experts via Parameter Merging: Mixture-of-Experts (MoE) shines brightly in large language models (LLMs) and demonstrates outstanding performance in a wide range of natural language processing tasks. However, existing methods transforming LLMs from dense to MoE face significant data requirements and typically rely on large-scale post-training. In this paper, we propose Upcycling Instruction Tuning (UpIT), a data-efficient approach for tuning a dense pre-trained model into a MoE instruction model. Specifically, we first point out that intermediate checkpoints during instruction tuning of the dense model are naturally suitable for specialized experts, and then propose an expert expansion stage to obtain models with flexible numbers of experts, where a genetic algorithm and parameter merging are introduced to ensure sufficient diversity of new extended experts. To ensure that each specialized expert in the MoE model works as expected, we select a small amount of seed data at which each expert excels to pre-optimize the router. Extensive experiments with various data scales and upcycling settings demonstrate the outstanding performance and data efficiency of UpIT, as well as stable improvement in expert or data scaling. Further analysis reveals the importance of ensuring expert diversity in upcycling.<|reference_end|> | arxiv | @article{hui2024upcycling,
title={Upcycling Instruction Tuning from Dense to Mixture-of-Experts via
Parameter Merging},
author={Tingfeng Hui, Zhenyu Zhang, Shuohuan Wang, Yu Sun, Hua Wu, Sen Su},
journal={arXiv preprint arXiv:2410.01610},
year={2024},
archivePrefix={arXiv},
eprint={2410.01610},
primaryClass={cs.CL cs.AI cs.LG}
} | hui2024upcycling |
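The expert expansion stage above can be caricatured as minting new experts by merging the FFN weights of intermediate checkpoints. The pairwise linear merge below is a deliberately crude stand-in for the paper's genetic algorithm, and all names are assumptions.

```python
import copy
import random
import torch

def expand_experts(ckpt_ffns, n_experts, seed=0):
    """Grow a list of FFN weight dicts (one per dense checkpoint) into
    `n_experts` candidates by linearly merging random pairs."""
    rng = random.Random(seed)
    experts = [copy.deepcopy(w) for w in ckpt_ffns]
    while len(experts) < n_experts:
        a, b = rng.sample(experts, 2)
        lam = rng.uniform(0.3, 0.7)
        merged = {k: lam * a[k] + (1 - lam) * b[k] for k in a}
        experts.append(merged)
    return experts

# Two toy "checkpoints", each holding a single FFN weight matrix.
ckpts = [{"w": torch.randn(8, 8)} for _ in range(2)]
print(len(expand_experts(ckpts, n_experts=8)))
```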
arxiv-664551 | 2410.01611 | DRUPI: Dataset Reduction Using Privileged Information | <|reference_start|>DRUPI: Dataset Reduction Using Privileged Information: Dataset reduction (DR) seeks to select or distill samples from large datasets into smaller subsets while preserving performance on target tasks. Existing methods primarily focus on pruning or synthesizing data in the same format as the original dataset, typically the input data and corresponding labels. However, in DR settings, we find it is possible to synthesize more information beyond the data-label pair as an additional learning target to facilitate model training. In this paper, we introduce Dataset Reduction Using Privileged Information (DRUPI), which enriches DR by synthesizing privileged information alongside the reduced dataset. This privileged information can take the form of feature labels or attention labels, providing auxiliary supervision to improve model learning. Our findings reveal that effective feature labels must balance between being overly discriminative and excessively diverse, with a moderate level proving optimal for improving the reduced dataset's efficacy. Extensive experiments on ImageNet, CIFAR-10/100, and Tiny ImageNet demonstrate that DRUPI integrates seamlessly with existing dataset reduction methods, offering significant performance gains. *The code will be released after the paper is accepted.*<|reference_end|> | arxiv | @article{wang2024drupi:,
title={DRUPI: Dataset Reduction Using Privileged Information},
author={Shaobo Wang, Yantai Yang, Shuaiyu Zhang, Chenghao Sun, Weiya Li,
Xuming Hu, Linfeng Zhang},
journal={arXiv preprint arXiv:2410.01611},
year={2024},
archivePrefix={arXiv},
eprint={2410.01611},
primaryClass={cs.CV cs.AI cs.LG}
} | wang2024drupi: |
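Training on a DRUPI-style reduced dataset amounts to adding an auxiliary loss against the synthesized feature labels. A sketch assuming the model returns (logits, features) and a simple MSE matching term with weight `alpha`; the paper's actual objective and label-synthesis procedure are richer.

```python
import torch
import torch.nn.functional as F

def drupi_step(model, x, y, feat_labels, alpha=0.5):
    """One training step on a reduced dataset with privileged feature
    labels. `model` exposing intermediate features by returning
    (logits, features) is an assumption of this sketch."""
    logits, feats = model(x)
    task_loss = F.cross_entropy(logits, y)      # standard data-label loss
    priv_loss = F.mse_loss(feats, feat_labels)  # auxiliary supervision
    return task_loss + alpha * priv_loss
```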
arxiv-664552 | 2410.01614 | Gaussian Splatting in Mirrors: Reflection-Aware Rendering via Virtual Camera Optimization | <|reference_start|>Gaussian Splatting in Mirrors: Reflection-Aware Rendering via Virtual Camera Optimization: Recent advancements in 3D Gaussian Splatting (3D-GS) have revolutionized novel view synthesis, facilitating real-time, high-quality image rendering. However, in scenarios involving reflective surfaces, particularly mirrors, 3D-GS often misinterprets reflections as virtual spaces, resulting in blurred and inconsistent multi-view rendering within mirrors. Our paper presents a novel method aimed at obtaining high-quality multi-view consistent reflection rendering by modelling reflections as physically-based virtual cameras. We estimate mirror planes with depth and normal estimates from 3D-GS and define virtual cameras that are placed symmetrically about the mirror plane. These virtual cameras are then used to explain mirror reflections in the scene. To address imperfections in mirror plane estimates, we propose a straightforward yet effective virtual camera optimization method to enhance reflection quality. We collect a new mirror dataset including three real-world scenarios for more diverse evaluation. Experimental validation on both Mirror-Nerf and our real-world dataset demonstrate the efficacy of our approach. We achieve comparable or superior results while significantly reducing training time compared to previous state-of-the-art.<|reference_end|> | arxiv | @article{wang2024gaussian,
title={Gaussian Splatting in Mirrors: Reflection-Aware Rendering via Virtual
Camera Optimization},
author={Zihan Wang, Shuzhe Wang, Matias Turkulainen, Junyuan Fang, Juho
Kannala},
journal={arXiv preprint arXiv:2410.01614},
year={2024},
archivePrefix={arXiv},
eprint={2410.01614},
primaryClass={cs.CV}
} | wang2024gaussian |
arxiv-664553 | 2410.01615 | Saliency-Guided DETR for Moment Retrieval and Highlight Detection | <|reference_start|>Saliency-Guided DETR for Moment Retrieval and Highlight Detection: Existing approaches for video moment retrieval and highlight detection are not able to align text and video features efficiently, resulting in unsatisfying performance and limited production usage. To address this, we propose a novel architecture that utilizes recent foundational video models designed for such alignment. Combined with the introduced Saliency-Guided Cross Attention mechanism and a hybrid DETR architecture, our approach significantly enhances performance in both moment retrieval and highlight detection tasks. For even better improvement, we developed InterVid-MR, a large-scale and high-quality dataset for pretraining. Using it, our architecture achieves state-of-the-art results on the QVHighlights, Charades-STA and TACoS benchmarks. The proposed approach provides an efficient and scalable solution for both zero-shot and fine-tuning scenarios in video-language tasks.<|reference_end|> | arxiv | @article{gordeev2024saliency-guided,
title={Saliency-Guided DETR for Moment Retrieval and Highlight Detection},
author={Aleksandr Gordeev, Vladimir Dokholyan, Irina Tolstykh, Maksim
Kuprashevich},
journal={arXiv preprint arXiv:2410.01615},
year={2024},
archivePrefix={arXiv},
eprint={2410.01615},
primaryClass={cs.CV}
} | gordeev2024saliency-guided |
arxiv-664554 | 2410.01617 | On Using Certified Training towards Empirical Robustness | <|reference_start|>On Using Certified Training towards Empirical Robustness: Adversarial training is arguably the most popular way to provide empirical robustness against specific adversarial examples. While variants based on multi-step attacks incur significant computational overhead, single-step variants are vulnerable to a failure mode known as catastrophic overfitting, which hinders their practical utility for large perturbations. A parallel line of work, certified training, has focused on producing networks amenable to formal guarantees of robustness against any possible attack. However, the wide gap between the best-performing empirical and certified defenses has severely limited the applicability of the latter. Inspired by recent developments in certified training, which rely on a combination of adversarial attacks with network over-approximations, and by the connections between local linearity and catastrophic overfitting, we present experimental evidence on the practical utility and limitations of using certified training towards empirical robustness. We show that, when tuned for the purpose, a recent certified training algorithm can prevent catastrophic overfitting on single-step attacks, and that it can bridge the gap to multi-step baselines under appropriate experimental settings. Finally, we present a novel regularizer for network over-approximations that can achieve similar effects while markedly reducing runtime.<|reference_end|> | arxiv | @article{de palma2024on,
title={On Using Certified Training towards Empirical Robustness},
author={Alessandro De Palma, Serge Durand, Zakaria Chihani, Fran\c{c}ois
Terrier, Caterina Urban},
journal={arXiv preprint arXiv:2410.01617},
year={2024},
archivePrefix={arXiv},
eprint={2410.01617},
primaryClass={cs.LG cs.CR stat.ML}
} | de palma2024on |
arxiv-664555 | 2410.01618 | SGBA: Semantic Gaussian Mixture Model-Based LiDAR Bundle Adjustment | <|reference_start|>SGBA: Semantic Gaussian Mixture Model-Based LiDAR Bundle Adjustment: LiDAR bundle adjustment (BA) is an effective approach to reduce the drifts in pose estimation from the front-end. Existing works on LiDAR BA usually rely on predefined geometric features for landmark representation. This reliance restricts generalizability, as the system will inevitably deteriorate in environments where these specific features are absent. To address this issue, we propose SGBA, a LiDAR BA scheme that models the environment as a semantic Gaussian mixture model (GMM) without predefined feature types. This approach encodes both geometric and semantic information, offering a comprehensive and general representation adaptable to various environments. Additionally, to limit computational complexity while ensuring generalizability, we propose an adaptive semantic selection framework that selects the most informative semantic clusters for optimization by evaluating the condition number of the cost function. Lastly, we introduce a probabilistic feature association scheme that considers the entire probability density of assignments, which can manage uncertainties in measurement and initial pose estimation. We have conducted various experiments and the results demonstrate that SGBA can achieve accurate and robust pose refinement even in challenging scenarios with low-quality initial pose estimation and limited geometric features. We plan to open-source the work for the benefit of the community https://github.com/Ji1Xinyu/SGBA.<|reference_end|> | arxiv | @article{ji2024sgba:,
title={SGBA: Semantic Gaussian Mixture Model-Based LiDAR Bundle Adjustment},
author={Xingyu Ji, Shenghai Yuan, Jianping Li, Pengyu Yin, Haozhi Cao, Lihua
Xie},
journal={arXiv preprint arXiv:2410.01618},
year={2024},
archivePrefix={arXiv},
eprint={2410.01618},
primaryClass={cs.CV cs.RO}
} | ji2024sgba: |
arxiv-664556 | 2410.01620 | LMOD: A Large Multimodal Ophthalmology Dataset and Benchmark for Large Vision-Language Models | <|reference_start|>LMOD: A Large Multimodal Ophthalmology Dataset and Benchmark for Large Vision-Language Models: Ophthalmology relies heavily on detailed image analysis for diagnosis and treatment planning. While large vision-language models (LVLMs) have shown promise in understanding complex visual information, their performance on ophthalmology images remains underexplored. We introduce LMOD, a dataset and benchmark for evaluating LVLMs on ophthalmology images, covering anatomical understanding, diagnostic analysis, and demographic extraction. LMOD includes 21,993 images spanning optical coherence tomography, scanning laser ophthalmoscopy, eye photos, surgical scenes, and color fundus photographs. We benchmark 13 state-of-the-art LVLMs and find that they are far from perfect at comprehending ophthalmology images. Models struggle with diagnostic analysis and demographic extraction, revealing weaknesses in spatial reasoning, handling of out-of-domain queries, and safeguards for handling biomarkers of ophthalmology images.<|reference_end|> | arxiv | @article{qin2024lmod:,
title={LMOD: A Large Multimodal Ophthalmology Dataset and Benchmark for Large
Vision-Language Models},
author={Zhenyue Qin, Yu Yin, Dylan Campbell, Xuansheng Wu, Ke Zou, Yih-Chung
Tham, Ninghao Liu, Xiuzhen Zhang, Qingyu Chen},
journal={arXiv preprint arXiv:2410.01620},
year={2024},
archivePrefix={arXiv},
eprint={2410.01620},
primaryClass={cs.CV}
} | qin2024lmod: |
arxiv-664557 | 2410.01623 | Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? | <|reference_start|>Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint?: Low-rank training has emerged as a promising approach for reducing memory usage in training Large Language Models (LLMs). Previous methods either rely on decomposing weight matrices (e.g., LoRA), or seek to decompose gradient matrices (e.g., GaLore) to ensure reduced memory consumption. However, both of them constrain the training in a low-rank subspace, thus inevitably leading to sub-optimal performance. This raises a question: whether it is possible to consistently preserve the low-rank constraint for memory efficiency, while achieving full-rank training (i.e., training with full-rank gradients of full-rank weights) to avoid inferior outcomes? In this paper, we propose a new plug-and-play training framework for LLMs called Fira, as the first attempt to achieve this goal. First, we observe an interesting phenomenon during LLM training: the scaling impact of adaptive optimizers (e.g., Adam) on the gradient norm remains similar from low-rank to full-rank training. Based on this observation, we propose a norm-based scaling method, which utilizes the scaling impact of low-rank optimizers as substitutes for that of original full-rank optimizers to enable full-rank training. In this way, we can preserve the low-rank constraint in the optimizer while achieving full-rank training for better performance. Moreover, we find that there are sudden gradient rises during the optimization process, potentially causing loss spikes. To address this, we further put forward a norm-growth limiter to smooth the gradient via regulating the relative increase of gradient norms. Extensive experiments on the pre-training and fine-tuning of LLMs show that Fira outperforms both LoRA and GaLore, achieving performance that is comparable to or even better than full-rank training.<|reference_end|> | arxiv | @article{chen2024fira:,
title={Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank
Constraint?},
author={Xi Chen, Kaituo Feng, Changsheng Li, Xunhao Lai, Xiangyu Yue, Ye Yuan,
Guoren Wang},
journal={arXiv preprint arXiv:2410.01623},
year={2024},
archivePrefix={arXiv},
eprint={2410.01623},
primaryClass={cs.LG cs.AI}
} | chen2024fira: |
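The norm-growth limiter described above is easy to sketch: cap the step-to-step growth of the gradient norm at a factor `gamma`. The ratio test and the value of gamma are illustrative assumptions; the companion norm-based scaling (borrowing the low-rank optimizer's scaling factor for the full-rank gradient) is omitted here.

```python
import torch

def limit_norm_growth(grad, prev_norm, gamma=1.01):
    """Cap the relative growth of the gradient norm at `gamma`:
    if ||g_t|| > gamma * ||g_{t-1}||, rescale g_t accordingly."""
    norm = grad.norm()
    if prev_norm is not None and norm > gamma * prev_norm:
        grad = grad * (gamma * prev_norm / norm)
        norm = gamma * prev_norm
    return grad, norm

prev = None
for step in range(3):                 # stands in for a training loop
    g = torch.randn(10) * (step + 1)  # increasingly spiky gradients
    g, prev = limit_norm_growth(g, prev)
```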
arxiv-664558 | 2410.01626 | Constant pH Simulation with FMM Electrostatics in GROMACS (A) Design and Applications | <|reference_start|>Constant pH Simulation with FMM Electrostatics in GROMACS (A) Design and Applications: The structural dynamics of biological macromolecules, such as proteins, DNA/RNA, or complexes thereof, are strongly influenced by protonation changes of their typically many titratable groups, which explains their sensitivity to pH changes. Conversely, conformational and environmental changes of the biomolecule affect the protonation state of these groups. With few exceptions, conventional force field-based molecular dynamics (MD) simulations do not account for these effects, nor do they allow for coupling to a pH buffer. Here we present a GROMACS implementation of a rigorous Hamiltonian interpolation $\lambda$-dynamics constant pH method, which rests on GPU-accelerated Fast Multipole Method (FMM) electrostatics. Our implementation supports both CHARMM36m and Amber99sb*-ILDN force fields and is largely automated to enable seamless switching from regular MD to constant pH MD, involving minimal changes to the input files. Here, the first of two companion papers describes the underlying constant pH protocol and sample applications to several prototypical benchmark systems such as cardiotoxin V, lysozyme, and staphylococcal nuclease. Enhanced convergence is achieved through a new dynamic barrier height optimization method, and high p$K_a$ accuracy is demonstrated. We use Functional Mode Analysis and Mutual Information to explore the complex intra- and intermolecular couplings between the protonation states of titratable groups as well as those between protonation states and conformational dynamics. We identify striking conformation-dependent p$K_a$ variations and unexpected inter-residue couplings. Conformation-protonation coupling is identified as a primary cause of the slow protonation convergence notorious to constant pH simulations involving multiple titratable groups, suggesting enhanced sampling methods to accelerate convergence.<|reference_end|> | arxiv | @article{briand2024constant,
title={Constant pH Simulation with FMM Electrostatics in GROMACS. (A) Design
and Applications},
author={Eliane Briand, Bartosz Kohnke, Carsten Kutzner and Helmut Grubm\"uller},
journal={arXiv preprint arXiv:2410.01626},
year={2024},
archivePrefix={arXiv},
eprint={2410.01626},
primaryClass={cs.DC physics.chem-ph physics.comp-ph q-bio.BM}
} | briand2024constant |
arxiv-664559 | 2410.01627 | Intent Detection in the Age of LLMs | <|reference_start|>Intent Detection in the Age of LLMs: Intent detection is a critical component of task-oriented dialogue systems (TODS), which enables the identification of suitable actions to address user utterances at each dialog turn. Traditional approaches relied on computationally efficient supervised sentence transformer encoder models, which require substantial training data and struggle with out-of-scope (OOS) detection. The emergence of generative large language models (LLMs) with intrinsic world knowledge presents new opportunities to address these challenges. In this work, we adapt 7 SOTA LLMs using adaptive in-context learning and chain-of-thought prompting for intent detection, and compare their performance with contrastively fine-tuned sentence transformer (SetFit) models to highlight the prediction quality and latency tradeoff. We propose a hybrid system using an uncertainty-based routing strategy to combine the two approaches that, along with negative data augmentation, achieves the best of both worlds (i.e., within 2% of native LLM accuracy with 50% less latency). To better understand LLM OOS detection capabilities, we perform controlled experiments revealing that this capability is significantly influenced by the scope of intent labels and the size of the label space. We also introduce a two-step approach utilizing internal LLM representations, demonstrating empirical gains in OOS detection accuracy and F1-score by >5% for the Mistral-7B model.<|reference_end|> | arxiv | @article{arora2024intent,
title={Intent Detection in the Age of LLMs},
author={Gaurav Arora, Shreya Jain, Srujana Merugu},
journal={arXiv preprint arXiv:2410.01627},
year={2024},
archivePrefix={arXiv},
eprint={2410.01627},
primaryClass={cs.CL}
} | arora2024intent |
arxiv-664560 | 2410.01628 | Entropy-Based Uncertainty Modeling for Trajectory Prediction in Autonomous Driving | <|reference_start|>Entropy-Based Uncertainty Modeling for Trajectory Prediction in Autonomous Driving: In autonomous driving, accurate motion prediction is essential for safe and efficient motion planning. To ensure safety, planners must rely on reliable uncertainty information about the predicted future behavior of surrounding agents, yet this aspect has received limited attention. This paper addresses the so-far neglected problem of uncertainty modeling in trajectory prediction. We adopt a holistic approach that focuses on uncertainty quantification, decomposition, and the influence of model composition. Our method is based on a theoretically grounded information-theoretic approach to measure uncertainty, allowing us to decompose total uncertainty into its aleatoric and epistemic components. We conduct extensive experiments on the nuScenes dataset to assess how different model architectures and configurations affect uncertainty quantification and model robustness.<|reference_end|> | arxiv | @article{distelzweig2024entropy-based,
title={Entropy-Based Uncertainty Modeling for Trajectory Prediction in
Autonomous Driving},
author={Aron Distelzweig, Andreas Look, Eitan Kosman, Faris Janjo\v{s}, J\"org
Wagner, Abhinav Valada},
journal={arXiv preprint arXiv:2410.01628},
year={2024},
archivePrefix={arXiv},
eprint={2410.01628},
primaryClass={cs.RO cs.AI}
} | distelzweig2024entropy-based |
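The aleatoric/epistemic split used above is the standard information-theoretic decomposition over an ensemble: total predictive entropy minus expected member entropy equals the mutual information (the epistemic part). A worked example over categorical predictions; treating ensemble members as rows of `probs` is an assumption, since the paper applies the same identity to trajectory distributions.

```python
import numpy as np

def decompose_uncertainty(probs):
    """Information-theoretic uncertainty split.
    probs: (M, K) array of M ensemble members' predictive distributions.
    total     = H(mean_m p_m)        (predictive entropy)
    aleatoric = mean_m H(p_m)        (expected member entropy)
    epistemic = total - aleatoric    (mutual information)"""
    eps = 1e-12
    mean = probs.mean(axis=0)
    total = -(mean * np.log(mean + eps)).sum()
    aleatoric = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    return total, aleatoric, total - aleatoric

# Members that disagree (high epistemic) vs. a uniform one (high aleatoric).
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
print(decompose_uncertainty(probs))
```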
arxiv-664561 | 2410.01630 | One-Shot Robust Imitation Learning for Long-Horizon Visuomotor Tasks from Unsegmented Demonstrations | <|reference_start|>One-Shot Robust Imitation Learning for Long-Horizon Visuomotor Tasks from Unsegmented Demonstrations: In contrast to single-skill tasks, long-horizon tasks play a crucial role in our daily life, e.g., a pouring task requires a proper concatenation of reaching, grasping and pouring subtasks. As an efficient solution for transferring human skills to robots, imitation learning has achieved great progress over the last two decades. However, when learning long-horizon visuomotor skills, imitation learning often demands a large amount of semantically segmented demonstrations. Moreover, the performance of imitation learning could be susceptible to external perturbation and visual occlusion. In this paper, we exploit dynamical movement primitives and meta-learning to provide a new framework for imitation learning, called Meta-Imitation Learning with Adaptive Dynamical Primitives (MiLa). MiLa allows for learning unsegmented long-horizon demonstrations and adapting to unseen tasks with a single demonstration. MiLa can also resist external disturbances and visual occlusion during task execution. Real-world robotic experiments demonstrate the superiority of MiLa, irrespective of visual occlusion and random perturbations on robots.<|reference_end|> | arxiv | @article{wu2024one-shot,
title={One-Shot Robust Imitation Learning for Long-Horizon Visuomotor Tasks
from Unsegmented Demonstrations},
author={Shaokang Wu, Yijin Wang and Yanlong Huang},
journal={arXiv preprint arXiv:2410.01630},
year={2024},
archivePrefix={arXiv},
eprint={2410.01630},
primaryClass={cs.RO}
} | wu2024one-shot |
arxiv-664562 | 2410.01633 | A Thematic Framework for Analyzing Large-scale Self-reported Social Media Data on Opioid Use Disorder Treatment Using Buprenorphine Product | <|reference_start|>A Thematic Framework for Analyzing Large-scale Self-reported Social Media Data on Opioid Use Disorder Treatment Using Buprenorphine Product: Background: One of the key FDA-approved medications for Opioid Use Disorder (OUD) is buprenorphine. Despite its popularity, individuals often report various information needs regarding buprenorphine treatment on social media platforms like Reddit. However, the key challenge is to characterize these needs. In this study, we propose a theme-based framework to curate and analyze large-scale data from social media to characterize self-reported treatment information needs (TINs). Methods: We collected 15,253 posts from r/Suboxone, one of the largest Reddit sub-communities for buprenorphine products. Following the standard protocol, we first identified and defined five main themes from the data and then coded 6,000 posts based on these themes, where one post can be labeled with one to three applicable themes. Finally, we determined the most frequently appearing sub-themes (topics) for each theme by analyzing samples from each group. Results: Among the 6,000 posts, 40.3% contained a single theme, 36% two themes, and 13.9% three themes. The most frequent topics for each theme or theme combination revealed several key findings: prevalent reporting of psychological and physical effects during recovery, complexities in accessing buprenorphine, and significant information gaps regarding medication administration, tapering, and usage of substances during different stages of recovery. Moreover, self-treatment strategies and peer-driven advice reveal valuable insights and potential misconceptions. Conclusions: The findings obtained using our proposed framework can inform better patient education and patient-provider communication, design systematic interventions to address treatment-related misconceptions and rumors, and streamline the generation of hypotheses for future research.<|reference_end|> | arxiv | @article{basak2024a,
title={A Thematic Framework for Analyzing Large-scale Self-reported Social
Media Data on Opioid Use Disorder Treatment Using Buprenorphine Product},
author={Madhusudan Basak, Omar Sharif, Sarah E. Lord, Jacob T. Borodovsky,
Lisa A. Marsch, Sandra A. Springer, Edward Nunes, Charlie D. Brackett, Luke
J. ArchiBald, Sarah M. Preum},
journal={arXiv preprint arXiv:2410.01633},
year={2024},
archivePrefix={arXiv},
eprint={2410.01633},
primaryClass={cs.CY cs.CL}
} | basak2024a |
arxiv-664563 | 2410.01635 | Does Graph Prompt Work? A Data Operation Perspective with Theoretical Analysis | <|reference_start|>Does Graph Prompt Work? A Data Operation Perspective with Theoretical Analysis: In recent years, graph prompting has emerged as a promising research direction, enabling the learning of additional tokens or subgraphs appended to the original graphs without requiring retraining of pre-trained graph models across various applications. This novel paradigm, shifting from the traditional pretraining and finetuning to pretraining and prompting, has shown significant empirical success in simulating graph data operations, with applications ranging from recommendation systems to biological networks and graph transferring. However, despite its potential, the theoretical underpinnings of graph prompting remain underexplored, raising critical questions about its fundamental effectiveness. The lack of a rigorous theoretical account of why and how well it works hangs like a dark cloud over further progress in the graph prompt area. To fill this gap, this paper introduces a theoretical framework that rigorously analyzes graph prompting from a data operation perspective. Our contributions are threefold: First, we provide a formal guarantee theorem, demonstrating graph prompts' capacity to approximate graph transformation operators, effectively linking upstream and downstream tasks. Second, we derive upper bounds on the error of these data operations by graph prompts for a single graph and extend this discussion to batches of graphs, which are common in graph model training. Third, we analyze the distribution of data operation errors, extending our theoretical findings from linear graph models (e.g., GCN) to non-linear graph models (e.g., GAT). Extensive experiments support our theoretical results and confirm the practical implications of these guarantees.<|reference_end|> | arxiv | @article{wang2024does,
title={Does Graph Prompt Work? A Data Operation Perspective with Theoretical
Analysis},
author={Qunzhong Wang, Xiangguo Sun, Hong Cheng},
journal={arXiv preprint arXiv:2410.01635},
year={2024},
archivePrefix={arXiv},
eprint={2410.01635},
primaryClass={cs.LG cs.AI cs.SI}
} | wang2024does |
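As a concrete anchor for the data-operation view above, the simplest graph prompt is a single learnable vector added to every node feature while the pre-trained GNN stays frozen; the prompt then has to emulate the graph transformation linking upstream and downstream tasks. The additive variant below is one common instantiation, not the paper's full generality.

```python
import torch

class GraphPrompt(torch.nn.Module):
    """Minimal additive graph prompt: a learnable token added to every
    node feature before a *frozen* pre-trained GNN."""
    def __init__(self, dim):
        super().__init__()
        self.p = torch.nn.Parameter(torch.zeros(dim))

    def forward(self, x):          # x: (num_nodes, dim) node features
        return x + self.p          # prompted features; GNN left untouched

prompt = GraphPrompt(16)
x = torch.randn(5, 16)
x_prompted = prompt(x)             # only `prompt.p` receives gradients
```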
arxiv-664564 | 2410.01637 | On The Adaptation of Unlimiformer for Decoder-Only Transformers | <|reference_start|>On The Adaptation of Unlimiformer for Decoder-Only Transformers: One of the prominent issues stifling the current generation of large language models is their limited context length. Recent proprietary models such as GPT-4 and Claude 2 have introduced longer context lengths, 8k/32k and 100k, respectively; however, despite the efforts in the community, most common models, such as LLama-2, have a context length of 4k or less. Unlimiformer (Bertsch et al., 2023) is a recently popular vector-retrieval augmentation method that offloads cross-attention computations to a kNN index. However, its main limitation is incompatibility with decoder-only transformers out of the box. In this work, we explore practical considerations of adapting Unlimiformer to decoder-only transformers and introduce a series of modifications to overcome this limitation. Moreover, we expand the original experimental setup on summarization to include a new task (i.e., free-form Q&A) and an instruction-tuned model (i.e., a custom 6.7B GPT model). Our results showcase the effectiveness of these modifications on summarization, performing on par with a model with 2x the context length. Moreover, we discuss limitations and future directions for free-form Q&A and instruction-tuned models.<|reference_end|> | arxiv | @article{ahrabian2024on,
title={On The Adaptation of Unlimiformer for Decoder-Only Transformers},
author={Kian Ahrabian, Alon Benhaim, Barun Patra, Jay Pujara, Saksham Singhal,
Xia Song},
journal={arXiv preprint arXiv:2410.01637},
year={2024},
archivePrefix={arXiv},
eprint={2410.01637},
primaryClass={cs.CL cs.LG}
} | ahrabian2024on |
arxiv-664565 | 2410.01638 | Data Extrapolation for Text-to-image Generation on Small Datasets | <|reference_start|>Data Extrapolation for Text-to-image Generation on Small Datasets: Text-to-image generation requires a large amount of training data to synthesize high-quality images. To augment training data, previous methods rely on data interpolations like cropping, flipping, and mixing up, which fail to introduce new information and yield only marginal improvements. In this paper, we propose a new data augmentation method for text-to-image generation using linear extrapolation. Specifically, we apply linear extrapolation only on the text features, and new image data are retrieved from the internet by search engines. To ensure the reliability of the new text-image pairs, we design two outlier detectors to purify the retrieved images. Based on extrapolation, we construct training samples dozens of times larger than the original dataset, resulting in a significant improvement in text-to-image performance. Moreover, we propose NULL-guidance to refine score estimation, and apply recurrent affine transformation to fuse text information. Our model achieves FID scores of 7.91, 9.52 and 5.00 on the CUB, Oxford and COCO datasets. The code and data will be available on GitHub (https://github.com/senmaoy/RAT-Diffusion).<|reference_end|> | arxiv | @article{ye2024data,
title={Data Extrapolation for Text-to-image Generation on Small Datasets},
author={Senmao Ye, Fei Liu},
journal={arXiv preprint arXiv:2410.01638},
year={2024},
archivePrefix={arXiv},
eprint={2410.01638},
primaryClass={cs.CV cs.AI}
} | ye2024data |
arxiv-664566 | 2410.01639 | Moral Alignment for LLM Agents | <|reference_start|>Moral Alignment for LLM Agents: Decision-making agents based on pre-trained Large Language Models (LLMs) are increasingly being deployed across various domains of human activity. While their applications are currently rather specialized, several research efforts are under way to develop more generalist agents. As LLM-based systems become more agentic, their influence on human activity will grow and the transparency of this will decrease. Consequently, developing effective methods for aligning them to human values is vital. The prevailing practice in alignment often relies on human preference data (e.g., in RLHF or DPO), in which values are implicit and are essentially deduced from relative preferences over different model outputs. In this work, instead of relying on human feedback, we introduce the design of reward functions that explicitly encode core human values for Reinforcement Learning-based fine-tuning of foundation agent models. Specifically, we use intrinsic rewards for the moral alignment of LLM agents. We evaluate our approach using the traditional philosophical frameworks of Deontological Ethics and Utilitarianism, quantifying moral rewards for agents in terms of actions and consequences on the Iterated Prisoner's Dilemma (IPD) environment. We also show how moral fine-tuning can be deployed to enable an agent to unlearn a previously developed selfish strategy. Finally, we find that certain moral strategies learned on the IPD game generalize to several other matrix game environments. In summary, we demonstrate that fine-tuning with intrinsic rewards is a promising general solution for aligning LLM agents to human values, and it might represent a more transparent and cost-effective alternative to currently predominant alignment techniques.<|reference_end|> | arxiv | @article{tennant2024moral,
title={Moral Alignment for LLM Agents},
author={Elizaveta Tennant, Stephen Hailes, Mirco Musolesi},
journal={arXiv preprint arXiv:2410.01639},
year={2024},
archivePrefix={arXiv},
eprint={2410.01639},
primaryClass={cs.LG cs.AI cs.CY}
} | tennant2024moral |
arxiv-664567 | 2410.01643 | Stable Offline Value Function Learning with Bisimulation-based Representations | <|reference_start|>Stable Offline Value Function Learning with Bisimulation-based Representations: In reinforcement learning, offline value function learning is the procedure of using an offline dataset to estimate the expected discounted return from each state when taking actions according to a fixed target policy. The stability of this procedure, i.e., whether it converges to its fixed point, critically depends on the representations of the state-action pairs. Poorly learned representations can make value function learning unstable or even divergent. Therefore, it is critical to stabilize value function learning by explicitly shaping the state-action representations. Recently, the class of bisimulation-based algorithms has shown promise in shaping representations for control. However, it is still unclear if this class of methods can stabilize value function learning. In this work, we investigate this question and answer it affirmatively. We introduce a bisimulation-based algorithm called kernel representations for offline policy evaluation (KROPE). KROPE uses a kernel to shape state-action representations such that state-action pairs that have similar immediate rewards and lead to similar next state-action pairs under the target policy also have similar representations. We show that KROPE: 1) learns stable representations and 2) leads to lower value error than baselines. Our analysis provides new theoretical insight into the stability properties of bisimulation-based methods and suggests that practitioners can use these methods for stable and accurate evaluation of offline reinforcement learning agents.<|reference_end|> | arxiv | @article{pavse2024stable,
title={Stable Offline Value Function Learning with Bisimulation-based
Representations},
author={Brahma S. Pavse, Yudong Chen, Qiaomin Xie, Josiah P. Hanna},
journal={arXiv preprint arXiv:2410.01643},
year={2024},
archivePrefix={arXiv},
eprint={2410.01643},
primaryClass={cs.LG cs.AI}
} | pavse2024stable |
arxiv-664568 | 2410.01644 | A Novel Framework of Horizontal-Vertical Hybrid Federated Learning for EdgeIoT | <|reference_start|>A Novel Framework of Horizontal-Vertical Hybrid Federated Learning for EdgeIoT: This letter puts forth a new hybrid horizontal-vertical federated learning (HoVeFL) framework for mobile edge computing-enabled Internet of Things (EdgeIoT). In this framework, certain EdgeIoT devices train local models using the same data samples but analyze disparate data features, while the others focus on the same features using non-independent and identically distributed (non-IID) data samples. Thus, even though the data features are consistent, the data samples vary across devices. The proposed HoVeFL formulates the training of local and global models to minimize the global loss function. Performance evaluations on CIFAR-10 and SVHN datasets reveal that the testing loss of HoVeFL with 12 horizontal FL devices and six vertical FL devices is 5.5% and 25.2% higher, respectively, compared to a setup with six horizontal FL devices and 12 vertical FL devices.<|reference_end|> | arxiv | @article{li2024a,
title={A Novel Framework of Horizontal-Vertical Hybrid Federated Learning for
EdgeIoT},
author={Kai Li, Yilei Liang, Xin Yuan, Wei Ni, Jon Crowcroft, Chau Yuen, Ozgur
B. Akan},
journal={arXiv preprint arXiv:2410.01644},
year={2024},
archivePrefix={arXiv},
eprint={2410.01644},
primaryClass={cs.DC cs.LG eess.SP}
} | li2024a |
arxiv-664569 | 2410.01647 | 3DGS-DET: Empower 3D Gaussian Splatting with Boundary Guidance and Box-Focused Sampling for 3D Object Detection | <|reference_start|>3DGS-DET: Empower 3D Gaussian Splatting with Boundary Guidance and Box-Focused Sampling for 3D Object Detection: Neural Radiance Fields (NeRF) are widely used for novel-view synthesis and have been adapted for 3D Object Detection (3DOD), offering a promising approach to 3DOD through view-synthesis representation. However, NeRF faces inherent limitations: (i) limited representational capacity for 3DOD due to its implicit nature, and (ii) slow rendering speeds. Recently, 3D Gaussian Splatting (3DGS) has emerged as an explicit 3D representation that addresses these limitations. Inspired by these advantages, this paper introduces 3DGS into 3DOD for the first time, identifying two main challenges: (i) Ambiguous spatial distribution of Gaussian blobs: 3DGS primarily relies on 2D pixel-level supervision, resulting in unclear 3D spatial distribution of Gaussian blobs and poor differentiation between objects and background, which hinders 3DOD; (ii) Excessive background blobs: 2D images often include numerous background pixels, leading to densely reconstructed 3DGS with many noisy Gaussian blobs representing the background, negatively affecting detection. To tackle the challenge (i), we leverage the fact that 3DGS reconstruction is derived from 2D images, and propose an elegant and efficient solution by incorporating 2D Boundary Guidance to significantly enhance the spatial distribution of Gaussian blobs, resulting in clearer differentiation between objects and their background. To address the challenge (ii), we propose a Box-Focused Sampling strategy using 2D boxes to generate object probability distribution in 3D spaces, allowing effective probabilistic sampling in 3D to retain more object blobs and reduce noisy background blobs. Benefiting from our designs, our 3DGS-DET significantly outperforms the SOTA NeRF-based method, NeRF-Det, achieving improvements of +6.6 on [email protected] and +8.1 on [email protected] for the ScanNet dataset, and impressive +31.5 on [email protected] for the ARKITScenes dataset.<|reference_end|> | arxiv | @article{cao20243dgs-det:,
title={3DGS-DET: Empower 3D Gaussian Splatting with Boundary Guidance and
Box-Focused Sampling for 3D Object Detection},
author={Yang Cao, Yuanliang Jv, Dan Xu},
journal={arXiv preprint arXiv:2410.01647},
year={2024},
archivePrefix={arXiv},
eprint={2410.01647},
primaryClass={cs.CV}
} | cao20243dgs-det: |
arxiv-664570 | 2410.01648 | DeIDClinic: A Multi-Layered Framework for De-identification of Clinical Free-text Data | <|reference_start|>DeIDClinic: A Multi-Layered Framework for De-identification of Clinical Free-text Data: De-identification is important in protecting patients' privacy for healthcare text analytics. The MASK framework is one of the best-performing systems on the de-identification shared tasks organised by the n2c2/i2b2 challenges. This work enhances the MASK framework by integrating ClinicalBERT, a deep learning model specifically fine-tuned on clinical texts, alongside traditional de-identification methods like dictionary lookup and rule-based approaches. The system effectively identifies and either redacts or replaces sensitive identifiable entities within clinical documents, while also allowing users to customise the masked documents according to their specific needs. The integration of ClinicalBERT significantly improves the performance of entity recognition, achieving a 0.9732 F1-score, especially for common entities such as names, dates, and locations. A risk assessment feature has also been developed, which analyses the uniqueness of context within documents to classify them into risk levels, guiding further de-identification efforts. While the system demonstrates strong overall performance, this work highlights areas for future improvement, including handling more complex entity occurrences and enhancing the system's adaptability to different clinical settings.<|reference_end|> | arxiv | @article{paul2024deidclinic:,
title={DeIDClinic: A Multi-Layered Framework for De-identification of Clinical
Free-text Data},
author={Angel Paul, Dhivin Shaji, Lifeng Han, Warren Del-Pinto, Goran Nenadic},
journal={arXiv preprint arXiv:2410.01648},
year={2024},
archivePrefix={arXiv},
eprint={2410.01648},
primaryClass={cs.CL}
} | paul2024deidclinic: |
arxiv-664571 | 2410.01649 | shapiq: Shapley Interactions for Machine Learning | <|reference_start|>shapiq: Shapley Interactions for Machine Learning: Originally rooted in game theory, the Shapley Value (SV) has recently become an important tool in machine learning research. Perhaps most notably, it is used for feature attribution and data valuation in explainable artificial intelligence. Shapley Interactions (SIs) naturally extend the SV and address its limitations by assigning joint contributions to groups of entities, which enhance understanding of black box machine learning models. Due to the exponential complexity of computing SVs and SIs, various methods have been proposed that exploit structural assumptions or yield probabilistic estimates given limited resources. In this work, we introduce shapiq, an open-source Python package that unifies state-of-the-art algorithms to efficiently compute SVs and any-order SIs in an application-agnostic framework. Moreover, it includes a benchmarking suite containing 11 machine learning applications of SIs with pre-computed games and ground-truth values to systematically assess computational performance across domains. For practitioners, shapiq is able to explain and visualize any-order feature interactions in predictions of models, including vision transformers, language models, as well as XGBoost and LightGBM with TreeSHAP-IQ. With shapiq, we extend shap beyond feature attributions and consolidate the application of SVs and SIs in machine learning that facilitates future research. The source code and documentation are available at https://github.com/mmschlk/shapiq.<|reference_end|> | arxiv | @article{muschalik2024shapiq:,
title={shapiq: Shapley Interactions for Machine Learning},
author={Maximilian Muschalik, Hubert Baniecki, Fabian Fumagalli, Patrick
Kolpaczki, Barbara Hammer, Eyke H\"ullermeier},
journal={arXiv preprint arXiv:2410.01649},
year={2024},
archivePrefix={arXiv},
eprint={2410.01649},
primaryClass={cs.LG cs.AI}
} | muschalik2024shapiq: |
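A hedged usage sketch for the package described above; the class and argument names (TabularExplainer, index, max_order, budget) are assumptions recalled from the project documentation, so consult https://github.com/mmschlk/shapiq for the authoritative API:

import numpy as np
import shapiq                                   # assumed import name
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(200, 5)
y = X[:, 0] * X[:, 1] + X[:, 2]                 # target with a feature interaction
model = RandomForestRegressor().fit(X, y)

# estimate pairwise Shapley Interactions (k-SII up to order 2) for one instance;
# budget caps the number of model evaluations used by the approximator
explainer = shapiq.TabularExplainer(
    model=model, data=X, index="k-SII", max_order=2
)
interaction_values = explainer.explain(X[0], budget=256)
print(interaction_values)                       # per-feature and per-pair contributions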
arxiv-664572 | 2410.01651 | Efficient Long-range Language Modeling with Self-supervised Causal Retrieval | <|reference_start|>Efficient Long-range Language Modeling with Self-supervised Causal Retrieval: Recently, retrieval-based language models (RLMs) have received much attention. However, most of them leverage a pre-trained retriever with fixed parameters, which may not adapt well to causal language models. In this work, we propose Grouped Cross-Attention, a novel module enabling joint pre-training of the retriever and causal LM, and apply it to long-context modeling. For a given input sequence, we split it into chunks and use the current chunk to retrieve past chunks for subsequent text generation. Our innovation allows the retriever to learn how to retrieve past chunks that better minimize the auto-regressive loss of subsequent tokens in an end-to-end manner. By integrating top-$k$ retrieval, our model can be pre-trained efficiently from scratch with context lengths up to 64K tokens. Our experiments show our model, compared with long-range LM baselines, can achieve lower perplexity with comparable or lower pre-training and inference costs.<|reference_end|> | arxiv | @article{hu2024efficient,
title={Efficient Long-range Language Modeling with Self-supervised Causal
Retrieval},
author={Xiang Hu, Zhihao Teng, Wei Wu, Kewei Tu},
journal={arXiv preprint arXiv:2410.01651},
year={2024},
archivePrefix={arXiv},
eprint={2410.01651},
primaryClass={cs.CL cs.AI}
} | hu2024efficient |
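The chunk-and-retrieve step underlying the abstract above can be sketched as follows; mean-pooling and dot-product scoring are illustrative assumptions, not the paper's learned retriever:

import numpy as np

def retrieve_past_chunks(token_embs, chunk_len=64, top_k=4):
    """Split a sequence into chunks; for the last (current) chunk, retrieve
    the top-k most relevant past chunks by dot-product similarity of
    mean-pooled chunk embeddings (illustrative scoring only)."""
    n_chunks = token_embs.shape[0] // chunk_len
    chunks = token_embs[: n_chunks * chunk_len].reshape(n_chunks, chunk_len, -1)
    keys = chunks.mean(axis=1)                  # one embedding per chunk
    query = keys[-1]                            # current chunk as query
    scores = keys[:-1] @ query                  # similarity to past chunks
    top = np.argsort(scores)[::-1][:top_k]      # indices of retrieved chunks
    return chunks[top]                          # contexts for cross-attention

past = retrieve_past_chunks(np.random.randn(64 * 16, 128), top_k=4)
print(past.shape)                               # (4, 64, 128)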
arxiv-664573 | 2410.01654 | Releasing the Parameter Latency of Neural Representation for High-Efficiency Video Compression | <|reference_start|>Releasing the Parameter Latency of Neural Representation for High-Efficiency Video Compression: For decades, video compression technology has been a prominent research area. Traditional hybrid video compression frameworks and end-to-end frameworks continue to explore various intra- and inter-frame reference and prediction strategies based on discrete transforms and deep learning techniques. However, the emerging implicit neural representation (INR) technique models entire videos as basic units, automatically capturing intra-frame and inter-frame correlations and obtaining promising performance. INR uses a compact neural network to store video information in network parameters, effectively eliminating spatial and temporal redundancy in the original video. In this paper, our exploration and verification reveal that current INR video compression methods do not fully exploit their potential to preserve information. We investigate the potential of enhancing network parameter storage through parameter reuse. By deepening the network, we design a feasible INR parameter reuse scheme to further improve compression performance. Extensive experimental results show that our method significantly enhances the rate-distortion performance of INR video compression.<|reference_end|> | arxiv | @article{zhang2024releasing,
title={Releasing the Parameter Latency of Neural Representation for
High-Efficiency Video Compression},
author={Gai Zhang, Xinfeng Zhang, Lv Tang, Yue Li, Kai Zhang, Li Zhang},
journal={arXiv preprint arXiv:2410.01654},
year={2024},
archivePrefix={arXiv},
eprint={2410.01654},
primaryClass={eess.IV cs.CV cs.MM}
} | zhang2024releasing |
arxiv-664574 | 2410.01655 | Extending Contextual Self-Modulation: Meta-Learning Across Modalities, Task Dimensionalities, and Data Regimes | <|reference_start|>Extending Contextual Self-Modulation: Meta-Learning Across Modalities, Task Dimensionalities, and Data Regimes: Contextual Self-Modulation (CSM) is a potent regularization mechanism for the Neural Context Flow (NCF) framework which demonstrates powerful meta-learning of physical systems. However, CSM has limitations in its applicability across different modalities and in high-data regimes. In this work, we introduce two extensions: $i$CSM, which expands CSM to infinite-dimensional tasks, and StochasticNCF, which improves scalability. These extensions are demonstrated through comprehensive experimentation on a range of tasks, including dynamical systems with parameter variations, computer vision challenges, and curve fitting problems. $i$CSM embeds the contexts into an infinite-dimensional function space, as opposed to CSM which uses finite-dimensional context vectors. StochasticNCF enables the application of both CSM and $i$CSM to high-data scenarios by providing an unbiased approximation of meta-gradient updates through a sampled set of nearest environments. Additionally, we incorporate higher-order Taylor expansions via Taylor-Mode automatic differentiation, revealing that higher-order approximations do not necessarily enhance generalization. Finally, we demonstrate how CSM can be integrated into other meta-learning frameworks with FlashCAVIA, a computationally efficient extension of the CAVIA meta-learning framework (Zintgraf et al. 2019). FlashCAVIA outperforms its predecessor across various benchmarks and reinforces the utility of bi-level optimization techniques. Together, these contributions establish a robust framework for tackling an expanded spectrum of meta-learning tasks, offering practical insights for out-of-distribution generalization. Our open-sourced library, designed for flexible integration of self-modulation into contextual meta-learning workflows, is available at \url{github.com/ddrous/self-mod}.<|reference_end|> | arxiv | @article{nzoyem2024extending,
title={Extending Contextual Self-Modulation: Meta-Learning Across Modalities,
Task Dimensionalities, and Data Regimes},
author={Roussel Desmond Nzoyem, David A.W. Barton, Tom Deakin},
journal={arXiv preprint arXiv:2410.01655},
year={2024},
archivePrefix={arXiv},
eprint={2410.01655},
primaryClass={cs.LG math.DS}
} | nzoyem2024extending |
arxiv-664575 | 2410.01656 | Efficient Statistics With Unknown Truncation, Polynomial Time Algorithms, Beyond Gaussians | <|reference_start|>Efficient Statistics With Unknown Truncation, Polynomial Time Algorithms, Beyond Gaussians: We study the estimation of distributional parameters when samples are shown only if they fall in some unknown set $S \subseteq \mathbb{R}^d$. Kontonis, Tzamos, and Zampetakis (FOCS'19) gave a $d^{\mathrm{poly}(1/\varepsilon)}$ time algorithm for finding $\varepsilon$-accurate parameters for the special case of Gaussian distributions with diagonal covariance matrix. Recently, Diakonikolas, Kane, Pittas, and Zarifis (COLT'24) showed that this exponential dependence on $1/\varepsilon$ is necessary even when $S$ belongs to some well-behaved classes. These works leave the following open problems which we address in this work: Can we estimate the parameters of any Gaussian or even extend beyond Gaussians? Can we design $\mathrm{poly}(d/\varepsilon)$ time algorithms when $S$ is a simple set such as a halfspace? We make progress on both of these questions by providing the following results: 1. Toward the first question, we give a $d^{\mathrm{poly}(\ell/\varepsilon)}$ time algorithm for any exponential family that satisfies some structural assumptions and any unknown set $S$ that is $\varepsilon$-approximable by degree-$\ell$ polynomials. This result has two important applications: 1a) The first algorithm for estimating arbitrary Gaussian distributions from samples truncated to an unknown $S$; and 1b) The first algorithm for linear regression with unknown truncation and Gaussian features. 2. To address the second question, we provide an algorithm with runtime $\mathrm{poly}(d/\varepsilon)$ that works for a set of exponential families (containing all Gaussians) when $S$ is a halfspace or an axis-aligned rectangle. Along the way, we develop tools that may be of independent interest, including, a reduction from PAC learning with positive and unlabeled samples to PAC learning with positive and negative samples that is robust to certain covariate shifts.<|reference_end|> | arxiv | @article{lee2024efficient,
title={Efficient Statistics With Unknown Truncation, Polynomial Time
Algorithms, Beyond Gaussians},
author={Jane H. Lee and Anay Mehrotra and Manolis Zampetakis},
journal={arXiv preprint arXiv:2410.01656},
year={2024},
archivePrefix={arXiv},
eprint={2410.01656},
primaryClass={math.ST cs.DS cs.LG stat.CO stat.ML stat.TH}
} | lee2024efficient |
arxiv-664576 | 2410.01657 | Scalable and Consistent Graph Neural Networks for Distributed Mesh-based Data-driven Modeling | <|reference_start|>Scalable and Consistent Graph Neural Networks for Distributed Mesh-based Data-driven Modeling: This work develops a distributed graph neural network (GNN) methodology for mesh-based modeling applications using a consistent neural message passing layer. As the name implies, the focus is on enabling scalable operations that satisfy physical consistency via halo nodes at sub-graph boundaries. Here, consistency refers to the fact that a GNN trained and evaluated on one rank (one large graph) is arithmetically equivalent to evaluations on multiple ranks (a partitioned graph). This concept is demonstrated by interfacing GNNs with NekRS, a GPU-capable exascale CFD solver developed at Argonne National Laboratory. It is shown how the NekRS mesh partitioning can be linked to the distributed GNN training and inference routines, resulting in a scalable mesh-based data-driven modeling workflow. We study the impact of consistency on the scalability of mesh-based GNNs, demonstrating efficient scaling in consistent GNNs for up to O(1B) graph nodes on the Frontier exascale supercomputer.<|reference_end|> | arxiv | @article{barwey2024scalable,
title={Scalable and Consistent Graph Neural Networks for Distributed Mesh-based
Data-driven Modeling},
author={Shivam Barwey, Riccardo Balin, Bethany Lusch, Saumil Patel, Ramesh
Balakrishnan, Pinaki Pal, Romit Maulik, Venkatram Vishwanath},
journal={arXiv preprint arXiv:2410.01657},
year={2024},
archivePrefix={arXiv},
eprint={2410.01657},
primaryClass={cs.DC cs.LG physics.comp-ph}
} | barwey2024scalable |
arxiv-664577 | 2410.01658 | Smaller Confidence Intervals From IPW Estimators via Data-Dependent Coarsening | <|reference_start|>Smaller Confidence Intervals From IPW Estimators via Data-Dependent Coarsening: Inverse propensity-score weighted (IPW) estimators are prevalent in causal inference for estimating average treatment effects in observational studies. Under unconfoundedness, given accurate propensity scores and $n$ samples, the size of confidence intervals of IPW estimators scales down with $n$, and, several of their variants improve the rate of scaling. However, neither IPW estimators nor their variants are robust to inaccuracies: even if a single covariate has an $\varepsilon>0$ additive error in the propensity score, the size of confidence intervals of these estimators can increase arbitrarily. Moreover, even without errors, the rate with which the confidence intervals of these estimators go to zero with $n$ can be arbitrarily slow in the presence of extreme propensity scores (those close to 0 or 1). We introduce a family of Coarse IPW (CIPW) estimators that captures existing IPW estimators and their variants. Each CIPW estimator is an IPW estimator on a coarsened covariate space, where certain covariates are merged. Under mild assumptions, e.g., Lipschitzness in expected outcomes and sparsity of extreme propensity scores, we give an efficient algorithm to find a robust estimator: given $\varepsilon$-inaccurate propensity scores and $n$ samples, its confidence interval size scales with $\varepsilon+1/\sqrt{n}$. In contrast, under the same assumptions, existing estimators' confidence interval sizes are $\Omega(1)$ irrespective of $\varepsilon$ and $n$. Crucially, our estimator is data-dependent and we show that no data-independent CIPW estimator can be robust to inaccuracies.<|reference_end|> | arxiv | @article{kalavasis2024smaller,
title={Smaller Confidence Intervals From IPW Estimators via Data-Dependent
Coarsening},
author={Alkis Kalavasis and Anay Mehrotra and Manolis Zampetakis},
journal={arXiv preprint arXiv:2410.01658},
year={2024},
archivePrefix={arXiv},
eprint={2410.01658},
primaryClass={stat.ME cs.LG econ.EM math.ST stat.ML stat.TH}
} | kalavasis2024smaller |
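For context, a minimal sketch of the standard IPW estimator this abstract builds on, plus a toy coarsening step that merges covariate strata before propensity estimation; the fixed merge map is an illustrative assumption, whereas the paper's CIPW estimator chooses the coarsening in a data-dependent way:

import numpy as np

def ipw_ate(treat, outcome, propensity):
    """Standard inverse propensity-score weighted ATE estimate."""
    return np.mean(treat * outcome / propensity
                   - (1 - treat) * outcome / (1 - propensity))

def coarsen(strata, merge_map):
    """Merge covariate strata (toy coarsening: stratum -> group)."""
    return np.array([merge_map[s] for s in strata])

rng = np.random.default_rng(0)
strata = rng.integers(0, 4, size=5000)              # discrete covariate
true_p = np.array([0.5, 0.5, 0.02, 0.98])[strata]   # extreme scores in strata 2, 3
treat = rng.binomial(1, true_p)
outcome = treat * 1.0 + strata * 0.1 + rng.normal(0, 1, 5000)

# coarsening merges the extreme strata, taming the weights
groups = coarsen(strata, {0: 0, 1: 0, 2: 1, 3: 1})
p_hat = np.array([treat[groups == g].mean() for g in range(2)])[groups]
# compare to the true ATE of 1.0; coarsening tames extreme weights at some bias cost
print(ipw_ate(treat, outcome, p_hat))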
arxiv-664578 | 2410.01659 | Execution-time opacity problems in one-clock parametric timed automata | <|reference_start|>Execution-time opacity problems in one-clock parametric timed automata: Parametric timed automata (PTAs) extend the concept of timed automata, by allowing timing delays not only specified by concrete values but also by parameters, allowing the analysis of systems with uncertainty regarding timing behaviors. The full execution-time opacity is defined as the problem in which an attacker must never be able to deduce whether some private location was visited, by only observing the execution time. The problem of full ET-opacity emptiness (i.e., the emptiness over the parameter valuations for which full execution-time opacity is satisfied) is known to be undecidable for general PTAs. We therefore focus here on one-clock PTAs with integer-valued parameters over dense time. We show that the full ET-opacity emptiness is undecidable for a sufficiently large number of parameters, but is decidable for a single parameter, and exact synthesis can be effectively achieved. Our proofs rely on a novel construction as well as on variants of Presburger arithmetics. We finally prove an additional decidability result on an existential variant of execution-time opacity.<|reference_end|> | arxiv | @article{andré2024execution-time,
title={Execution-time opacity problems in one-clock parametric timed automata},
author={\'Etienne Andr\'e, Johan Arcile and Engel Lefaucheux},
journal={arXiv preprint arXiv:2410.01659},
year={2024},
archivePrefix={arXiv},
eprint={2410.01659},
primaryClass={cs.FL}
} | andré2024execution-time |
arxiv-664579 | 2410.01660 | Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering | <|reference_start|>Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering: Generative models lack rigorous statistical guarantees for their outputs and are therefore unreliable in safety-critical applications. In this work, we propose Sequential Conformal Prediction for Generative Models (SCOPE-Gen), a sequential conformal prediction method producing prediction sets that satisfy a rigorous statistical guarantee called conformal admissibility control. This guarantee states that with high probability, the prediction sets contain at least one admissible (or valid) example. To this end, our method first samples an initial set of i.i.d. examples from a black box generative model. Then, this set is iteratively pruned via so-called greedy filters. As a consequence of the iterative generation procedure, admissibility of the final prediction set factorizes as a Markov chain. This factorization is crucial, because it allows to control each factor separately, using conformal prediction. In comparison to prior work, our method demonstrates a large reduction in the number of admissibility evaluations during calibration. This reduction is important in safety-critical applications, where these evaluations must be conducted manually by domain experts and are therefore costly and time consuming. We highlight the advantages of our method in terms of admissibility evaluations and cardinality of the prediction sets through experiments in natural language generation and molecular graph extension tasks.<|reference_end|> | arxiv | @article{kladny2024conformal,
title={Conformal Generative Modeling with Improved Sample Efficiency through
Sequential Greedy Filtering},
author={Klaus-Rudolf Kladny, Bernhard Sch\"olkopf, Michael Muehlebach},
journal={arXiv preprint arXiv:2410.01660},
year={2024},
archivePrefix={arXiv},
eprint={2410.01660},
primaryClass={cs.LG cs.AI}
} | kladny2024conformal |
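A much-simplified, single-filter special case of the procedure above, with assumed scoring and quantile details; the paper chains several greedy filters and controls admissibility through the resulting Markov factorization:

import numpy as np

def calibrate_threshold(cal_scores, alpha=0.1):
    """Split-conformal style threshold: with probability roughly 1 - alpha,
    an admissible example scores above it (finite-sample-corrected quantile)."""
    n = len(cal_scores)
    q = np.floor(alpha * (n + 1)) / n            # lower quantile level
    return np.quantile(cal_scores, q)

def scope_gen_single_filter(sample_fn, score_fn, tau, n_samples=20):
    """Sample candidates from a generator, prune by the calibrated score."""
    candidates = [sample_fn() for _ in range(n_samples)]
    return [c for c in candidates if score_fn(c) >= tau]

# toy: quality scores of admissible calibration examples ~ N(1, 1)
rng = np.random.default_rng(0)
tau = calibrate_threshold(rng.normal(1.0, 1.0, size=500), alpha=0.1)
kept = scope_gen_single_filter(lambda: rng.normal(), lambda c: c, tau, n_samples=20)
print(tau, len(kept))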
arxiv-664580 | 2410.01661 | Finding path and cycle counting formulae in graphs with Deep Reinforcement Learning | <|reference_start|>Finding path and cycle counting formulae in graphs with Deep Reinforcement Learning: This paper presents Grammar Reinforcement Learning (GRL), a reinforcement learning algorithm that uses Monte Carlo Tree Search (MCTS) and a transformer architecture that models a Pushdown Automaton (PDA) within a context-free grammar (CFG) framework. Taking as a use case the problem of efficiently counting paths and cycles in graphs, a key challenge in network analysis, computer science, biology, and social sciences, GRL discovers new matrix-based formulas for path/cycle counting that improve computational efficiency by factors of two to six w.r.t. state-of-the-art approaches. Our contributions include: (i) a framework for generating gramformers that operate within a CFG, (ii) the development of GRL for optimizing formulas within grammatical structures, and (iii) the discovery of novel formulas for graph substructure counting, leading to significant computational improvements.<|reference_end|> | arxiv | @article{piquenot2024finding,
title={Finding path and cycle counting formulae in graphs with Deep
Reinforcement Learning},
author={Jason Piquenot, Maxime B\'erar, Pierre H\'eroux, Jean-Yves Ramel,
Romain Raveaux, S\'ebastien Adam},
journal={arXiv preprint arXiv:2410.01661},
year={2024},
archivePrefix={arXiv},
eprint={2410.01661},
primaryClass={cs.AI cs.FL}
} | piquenot2024finding |
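For context on the kind of matrix-based counting formulae the abstract refers to, two classical textbook identities (not the formulas discovered by GRL): closed walks of length k are counted by tr(A^k), and triangles by tr(A^3)/6:

import numpy as np

def count_closed_walks(A, k):
    """Number of closed walks of length k = trace(A^k)."""
    return int(np.trace(np.linalg.matrix_power(A, k)))

def count_triangles(A):
    """Each triangle yields 6 closed 3-walks (3 start points x 2 directions)."""
    return count_closed_walks(A, 3) // 6

# toy graph: a triangle plus one pendant vertex
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
print(count_triangles(A))   # -> 1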
arxiv-664581 | 2410.01665 | Towards a vision foundation model for comprehensive assessment of Cardiac MRI | <|reference_start|>Towards a vision foundation model for comprehensive assessment of Cardiac MRI: Cardiac magnetic resonance imaging (CMR), considered the gold standard for noninvasive cardiac assessment, is a diverse and complex modality requiring a wide variety of image processing tasks for comprehensive assessment of cardiac morphology and function. Advances in deep learning have enabled the development of state-of-the-art (SoTA) models for these tasks. However, model training is challenging due to data and label scarcity, especially in the less common imaging sequences. Moreover, each model is often trained for a specific task, with no connection between related tasks. In this work, we introduce a vision foundation model for CMR assessment, trained in a self-supervised fashion on 36 million CMR images. We then finetune the model in a supervised way for 9 clinical tasks typical of a CMR workflow, across classification, segmentation, landmark localization, and pathology detection. We demonstrate improved accuracy and robustness across all tasks, over a range of available labeled dataset sizes. We also demonstrate improved few-shot learning with fewer labeled samples, a common challenge in medical image analyses. We achieve out-of-the-box performance comparable to SoTA for most clinical tasks. The proposed method thus presents a resource-efficient, unified framework for CMR assessment, with the potential to accelerate the development of deep learning-based solutions for image analysis tasks, even when few annotated data are available.<|reference_end|> | arxiv | @article{jacob2024towards,
title={Towards a vision foundation model for comprehensive assessment of
Cardiac MRI},
author={Athira J Jacob, Indraneel Borgohain, Teodora Chitiboi, Puneet Sharma,
Dorin Comaniciu, Daniel Rueckert},
journal={arXiv preprint arXiv:2410.01665},
year={2024},
archivePrefix={arXiv},
eprint={2410.01665},
primaryClass={eess.IV cs.AI cs.CV}
} | jacob2024towards |
arxiv-664582 | 2410.01669 | Sparse Covariance Neural Networks | <|reference_start|>Sparse Covariance Neural Networks: Covariance Neural Networks (VNNs) perform graph convolutions on the covariance matrix of tabular data and achieve success in a variety of applications. However, the empirical covariance matrix on which the VNNs operate may contain many spurious correlations, making VNNs' performance inconsistent due to these noisy estimates and decreasing their computational efficiency. To tackle this issue, we put forth Sparse coVariance Neural Networks (S-VNNs), a framework that applies sparsification techniques on the sample covariance matrix before convolution. When the true covariance matrix is sparse, we propose hard and soft thresholding to improve covariance estimation and reduce computational cost. Instead, when the true covariance is dense, we propose stochastic sparsification where data correlations are dropped in probability according to principled strategies. We show that S-VNNs are more stable than nominal VNNs as well as sparse principal component analysis. By analyzing the impact of sparsification on their behavior, we provide novel connections between S-VNN stability and data distribution. We support our theoretical findings with experimental results on various application scenarios, ranging from brain data to human action recognition, and show an improved task performance, stability, and computational efficiency of S-VNNs compared with nominal VNNs.<|reference_end|> | arxiv | @article{cavallo2024sparse,
title={Sparse Covariance Neural Networks},
author={Andrea Cavallo, Zhan Gao, Elvin Isufi},
journal={arXiv preprint arXiv:2410.01669},
year={2024},
archivePrefix={arXiv},
eprint={2410.01669},
primaryClass={cs.LG stat.ML}
} | cavallo2024sparse |
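A minimal sketch of the hard-thresholding variant described above, followed by one polynomial covariance filter; the filter form and threshold value are illustrative assumptions:

import numpy as np

def hard_threshold_cov(X, tau):
    """Sample covariance with entries |c_ij| <= tau zeroed (diagonal kept)."""
    C = np.cov(X, rowvar=False)
    mask = (np.abs(C) > tau) | np.eye(C.shape[0], dtype=bool)
    return C * mask

def covariance_filter(C, x, h):
    """Polynomial 'graph' filter on the covariance: y = sum_k h[k] C^k x."""
    y = np.zeros_like(x)
    Ck = np.eye(C.shape[0])
    for hk in h:
        y = y + hk * (Ck @ x)
        Ck = Ck @ C
    return y

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))              # 1000 samples, 20 features
C_sparse = hard_threshold_cov(X, tau=0.1)    # drop small, spurious correlations
y = covariance_filter(C_sparse, rng.normal(size=20), h=[0.5, 0.3, 0.2])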
arxiv-664583 | 2410.01671 | Bridging Context Gaps: Leveraging Coreference Resolution for Long Contextual Understanding | <|reference_start|>Bridging Context Gaps: Leveraging Coreference Resolution for Long Contextual Understanding: Large language models (LLMs) have shown remarkable capabilities in natural language processing; however, they still face difficulties when tasked with understanding lengthy contexts and executing effective question answering. These challenges often arise due to the complexity and ambiguity present in longer texts. To enhance the performance of LLMs in such scenarios, we introduce the Long Question Coreference Adaptation (LQCA) method. This innovative framework focuses on coreference resolution tailored to long contexts, allowing the model to identify and manage references effectively. The LQCA method encompasses four key steps: resolving coreferences within sub-documents, computing the distances between mentions, defining a representative mention for coreference, and answering questions through mention replacement. By processing information systematically, the framework provides easier-to-handle partitions for LLMs, promoting better understanding. Experimental evaluations on a range of LLMs and datasets have yielded positive results, with notable improvements on the OpenAI-o1-mini and GPT-4o models, highlighting the effectiveness of leveraging coreference resolution to bridge context gaps in question answering.<|reference_end|> | arxiv | @article{liu2024bridging,
title={Bridging Context Gaps: Leveraging Coreference Resolution for Long
Contextual Understanding},
author={Yanming Liu, Xinyue Peng, Jiannan Cao, Shi Bo, Yanxin Shen, Xuhong
Zhang, Sheng Cheng, Xun Wang, Jianwei Yin, Tianyu Du},
journal={arXiv preprint arXiv:2410.01671},
year={2024},
archivePrefix={arXiv},
eprint={2410.01671},
primaryClass={cs.CL cs.AI}
} | liu2024bridging |
arxiv-664584 | 2410.01672 | Practicing Stress Relief for the Everyday: Designing Social Simulation Using VR, AR, and LLMs | <|reference_start|>Practicing Stress Relief for the Everyday: Designing Social Simulation Using VR, AR, and LLMs: Stress is an inevitable part of day-to-day life, yet many find themselves unable to manage it on their own, particularly when professional or peer support is not readily available. As self-care becomes increasingly vital for mental well-being, this paper explores the potential of social simulation as a safe, virtual environment for practicing stress relief for everyday situations. Leveraging the immersive capabilities of VR, AR, and LLMs, we developed eight interactive prototypes for various everyday stressful scenarios (e.g., public speaking) and then conducted prototype-driven semi-structured interviews with 19 participants. We find that people currently lack effective means to support themselves through everyday stress and that social simulation fills a gap by simulating realistic environments for practicing mental health techniques. We outline key considerations for future development of simulation for self-care, including risks of trauma from hyper-realism, distrust of LLM-recommended timing for mental health recommendations, and the value of accessibility for self-care interventions.<|reference_end|> | arxiv | @article{fang2024practicing,
title={Practicing Stress Relief for the Everyday: Designing Social Simulation
Using VR, AR, and LLMs},
author={Anna Fang, Hriday Chhabria, Alekhya Maram, Haiyi Zhu},
journal={arXiv preprint arXiv:2410.01672},
year={2024},
archivePrefix={arXiv},
eprint={2410.01672},
primaryClass={cs.HC}
} | fang2024practicing |
arxiv-664585 | 2410.01673 | MaxSAT decoders for arbitrary CSS codes | <|reference_start|>MaxSAT decoders for arbitrary CSS codes: Quantum error correction (QEC) is essential for operating quantum computers in the presence of noise. Here, we accurately decode arbitrary Calderbank-Shor-Steane (CSS) codes via the maximum satisfiability (MaxSAT) problem. We show how to map the quantum maximum likelihood problem of CSS codes of arbitrary geometry and parity check weight into MaxSAT problems. We incorporate the syndrome measurements as hard clauses, while qubit and measurement error probabilities, including biased and non-uniform, are encoded as soft MaxSAT clauses. For the code capacity of color codes on a hexagonal lattice, our decoder has a higher threshold and superior scaling in noise suppression compared to belief propagation with ordered statistics post-processing (BP-OSD), while showing similar scaling in computational cost. Further, we decode surface codes and recently proposed bivariate quantum low-density parity check (QLDPC) codes where we find lower error rates than BP-OSD. Finally, we connect the complexity of MaxSAT decoding to a computational phase transition controlled by the clause density of the MaxSAT problem, where we show that our mapping is always in the computationally ``easy'' phase. Our MaxSAT decoder can be further parallelised or implemented on ASICs and FPGAs, promising further speedups of several orders of magnitude. Our work provides a flexible platform towards practical applications on quantum computers.<|reference_end|> | arxiv | @article{noormandipour2024maxsat,
title={MaxSAT decoders for arbitrary CSS codes},
author={Mohammadreza Noormandipour and Tobias Haug},
journal={arXiv preprint arXiv:2410.01673},
year={2024},
archivePrefix={arXiv},
eprint={2410.01673},
primaryClass={quant-ph cs.IT math.IT}
} | noormandipour2024maxsat |
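To make the hard/soft clause weighting concrete, here is a toy maximum-likelihood decoder for the 3-bit repetition code in the weighted-MaxSAT style described above: parity checks are hard constraints and each error variable carries a soft weight of -log p. Brute-force enumeration stands in for a real MaxSAT solver; this is not the paper's implementation:

import itertools, math

def ml_decode_repetition(syndrome, p=0.05):
    """Find the error pattern e in {0,1}^3 that (i) satisfies the parity
    checks z1 = e1 xor e2, z2 = e2 xor e3 (hard clauses) and (ii) minimizes
    the soft-clause cost sum_i -log Pr(e_i)."""
    best, best_cost = None, math.inf
    for e in itertools.product([0, 1], repeat=3):
        if (e[0] ^ e[1], e[1] ^ e[2]) != syndrome:
            continue                              # hard clause violated
        cost = sum(-math.log(p if ei else 1 - p) for ei in e)
        if cost < best_cost:
            best, best_cost = e, cost
    return best

print(ml_decode_repetition((1, 0)))   # -> (1, 0, 0): single error on bit 1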
arxiv-664586 | 2410.01674 | Optimal Control of Fractional Punishment in Optional Public Goods Game | <|reference_start|>Optimal Control of Fractional Punishment in Optional Public Goods Game: Punishment is probably the most frequently used mechanism to increase cooperation in Public Goods Games (PGG); however, it is expensive. To address this problem, this paper introduces an optimal control problem that uses fractional punishment to promote cooperation. We present a series of computational experiments illustrating the effects of single and combined terms of the optimization cost function. Our findings show that the optimal controller outperforms constant fractional punishment and provides insight into the timing and magnitude of the penalization that should be applied in response to defection in the game.<|reference_end|> | arxiv | @article{grau2024optimal,
title={Optimal Control of Fractional Punishment in Optional Public Goods Game},
author={J. Grau, R. Botta, C. E. Schaerer},
journal={arXiv preprint arXiv:2410.01674},
year={2024},
archivePrefix={arXiv},
eprint={2410.01674},
primaryClass={eess.SY cs.SY math.OC}
} | grau2024optimal |
arxiv-664587 | 2410.01675 | Trying to be human: Linguistic traces of stochastic empathy in language models | <|reference_start|>Trying to be human: Linguistic traces of stochastic empathy in language models: Differentiating between generated and human-written content is important for navigating the modern world. Large language models (LLMs) are crucial drivers behind the increased quality of computer-generated content. Reportedly, humans find it increasingly difficult to identify whether an AI model generated a piece of text. Our work tests how two important factors contribute to the human vs AI race: empathy and an incentive to appear human. We address both aspects in two experiments: human participants and a state-of-the-art LLM wrote relationship advice (Study 1, n=530) or mere descriptions (Study 2, n=610), either instructed to be as human as possible or not. New samples of humans (n=428 and n=408) then judged the texts' source. Our findings show that when empathy is required, humans excel. Contrary to expectations, instructions to appear human were only effective for the LLM, so the human advantage diminished. Computational text analysis revealed that LLMs become more human because they may have an implicit representation of what makes a text human and effortlessly apply these heuristics. The model resorts to a conversational, self-referential, informal tone with a simpler vocabulary to mimic stochastic empathy. We discuss these findings in light of recent claims on the on-par performance of LLMs.<|reference_end|> | arxiv | @article{kleinberg2024trying,
title={Trying to be human: Linguistic traces of stochastic empathy in language
models},
author={Bennett Kleinberg, Jari Zegers, Jonas Festor, Stefana Vida, Julian
Pr"asent, Riccardo Loconte, Sanne Peereboom},
journal={arXiv preprint arXiv:2410.01675},
year={2024},
archivePrefix={arXiv},
eprint={2410.01675},
primaryClass={cs.CL cs.AI}
} | kleinberg2024trying |
arxiv-664588 | 2410.01676 | Lossy Semantic Communication for the Logical Deduction of the State of the World | <|reference_start|>Lossy Semantic Communication for the Logical Deduction of the State of the World: In this paper, we address the problem of lossy semantic communication to reduce uncertainty about the State of the World (SotW) for deductive tasks in point to point communication. A key challenge is transmitting the maximum semantic information with minimal overhead suitable for downstream applications. Our solution involves maximizing semantic content information within a constrained bit budget, where SotW is described using First-Order Logic, and content informativeness is measured by the usefulness of the transmitted information in reducing the uncertainty of the SotW perceived by the receiver. Calculating content information requires computing inductive logical probabilities of state descriptions; however, naive approaches are infeasible due to the massive size of the state space. To address this, our algorithm draws inspiration from state-of-the-art model counters and employs tree search-based model counting to reduce the computational burden. These algorithmic model counters, designed to count the number of models that satisfy a Boolean equation, efficiently estimate the number of world states that validate the observed evidence. Empirical validation using the FOLIO and custom deduction datasets demonstrate that our algorithm reduces uncertainty and improves task performance with fewer bits compared to baselines.<|reference_end|> | arxiv | @article{saz2024lossy,
title={Lossy Semantic Communication for the Logical Deduction of the State of
the World},
author={Ahmet Faruk Saz, Siheng Xiong, Faramarz Fekri},
journal={arXiv preprint arXiv:2410.01676},
year={2024},
archivePrefix={arXiv},
eprint={2410.01676},
primaryClass={cs.IT math.IT}
} | saz2024lossy |
arxiv-664589 | 2410.01677 | Mind Scramble: Unveiling Large Language Model Psychology Via Typoglycemia | <|reference_start|>Mind Scramble: Unveiling Large Language Model Psychology Via Typoglycemia: Research into the external behaviors and internal mechanisms of large language models (LLMs) has shown promise in addressing complex tasks in the physical world. Studies suggest that powerful LLMs, like GPT-4, are beginning to exhibit human-like cognitive abilities, including planning, reasoning, and reflection. In this paper, we introduce a research line and methodology called LLM Psychology, leveraging human psychology experiments to investigate the cognitive behaviors and mechanisms of LLMs. We migrate the Typoglycemia phenomenon from psychology to explore the "mind" of LLMs. Unlike human brains, which rely on context and word patterns to comprehend scrambled text, LLMs use distinct encoding and decoding processes. Through Typoglycemia experiments at the character, word, and sentence levels, we observe: (I) LLMs demonstrate human-like behaviors on a macro scale, such as lower task accuracy and higher token/time consumption; (II) LLMs exhibit varying robustness to scrambled input, making Typoglycemia a benchmark for model evaluation without new datasets; (III) Different task types have varying impacts, with complex logical tasks (e.g., math) being more challenging in scrambled form; (IV) Each LLM has a unique and consistent "cognitive pattern" across tasks, revealing general mechanisms in its psychology process. We provide an in-depth analysis of hidden layers to explain these phenomena, paving the way for future research in LLM Psychology and deeper interpretability.<|reference_end|> | arxiv | @article{yu2024mind,
title={Mind Scramble: Unveiling Large Language Model Psychology Via
Typoglycemia},
author={Miao Yu, Junyuan Mao, Guibin Zhang, Jingheng Ye, Junfeng Fang, Aoxiao
Zhong, Yang Liu, Yuxuan Liang, Kun Wang, Qingsong Wen},
journal={arXiv preprint arXiv:2410.01677},
year={2024},
archivePrefix={arXiv},
eprint={2410.01677},
primaryClass={cs.AI}
} | yu2024mind |
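The word-level Typoglycemia transform is straightforward to reproduce; the common variant below (an assumption; the paper also probes character- and sentence-level scrambling) keeps each word's first and last letters fixed and shuffles the interior:

import random
import re

def typoglycemia(text, seed=0):
    """Scramble the interior letters of every word, keeping the first and
    last characters (and all punctuation/whitespace) in place."""
    rng = random.Random(seed)

    def scramble(match):
        w = match.group(0)
        if len(w) <= 3:
            return w
        mid = list(w[1:-1])
        rng.shuffle(mid)
        return w[0] + "".join(mid) + w[-1]

    return re.sub(r"[A-Za-z]+", scramble, text)

print(typoglycemia("Large language models exhibit cognitive patterns."))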
arxiv-664590 | 2410.01678 | Open3DTrack: Towards Open-Vocabulary 3D Multi-Object Tracking | <|reference_start|>Open3DTrack: Towards Open-Vocabulary 3D Multi-Object Tracking: 3D multi-object tracking plays a critical role in autonomous driving by enabling the real-time monitoring and prediction of multiple objects' movements. Traditional 3D tracking systems are typically constrained by predefined object categories, limiting their adaptability to novel, unseen objects in dynamic environments. To address this limitation, we introduce open-vocabulary 3D tracking, which extends the scope of 3D tracking to include objects beyond predefined categories. We formulate the problem of open-vocabulary 3D tracking and introduce dataset splits designed to represent various open-vocabulary scenarios. We propose a novel approach that integrates open-vocabulary capabilities into a 3D tracking framework, allowing for generalization to unseen object classes. Our method effectively reduces the performance gap between tracking known and novel objects through strategic adaptation. Experimental results demonstrate the robustness and adaptability of our method in diverse outdoor driving scenarios. To the best of our knowledge, this work is the first to address open-vocabulary 3D tracking, presenting a significant advancement for autonomous systems in real-world settings. Code, trained models, and dataset splits are available publicly.<|reference_end|> | arxiv | @article{ishaq2024open3dtrack:,
title={Open3DTrack: Towards Open-Vocabulary 3D Multi-Object Tracking},
author={Ayesha Ishaq, Mohamed El Amine Boudjoghra, Jean Lahoud, Fahad Shahbaz
Khan, Salman Khan, Hisham Cholakkal, Rao Muhammad Anwer},
journal={arXiv preprint arXiv:2410.01678},
year={2024},
archivePrefix={arXiv},
eprint={2410.01678},
primaryClass={cs.CV cs.RO}
} | ishaq2024open3dtrack: |
arxiv-664591 | 2410.01679 | VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment | <|reference_start|>VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment: Large language models (LLMs) are increasingly applied to complex reasoning tasks that require executing several complex steps before receiving any reward. Properly assigning credit to these steps is essential for enhancing model performance. Proximal Policy Optimization (PPO), a state-of-the-art reinforcement learning (RL) algorithm used for LLM finetuning, employs value networks to tackle credit assignment. However, value networks face challenges in predicting the expected cumulative rewards accurately in complex reasoning tasks, often leading to high-variance updates and suboptimal performance. In this work, we systematically evaluate the efficacy of value networks and reveal their significant shortcomings in reasoning-heavy LLM tasks, showing that they barely outperform a random baseline when comparing alternative steps. To address this, we propose VinePPO, a straightforward approach that leverages the flexibility of language environments to compute unbiased Monte Carlo-based estimates, bypassing the need for large value networks. Our method consistently outperforms PPO and other RL-free baselines across MATH and GSM8K datasets with fewer gradient updates (up to 9x) and less wall-clock time (up to 3.0x). These results emphasize the importance of accurate credit assignment in RL finetuning of LLMs and demonstrate VinePPO's potential as a superior alternative.<|reference_end|> | arxiv | @article{kazemnejad2024vineppo:,
title={VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit
Assignment},
author={Amirhossein Kazemnejad, Milad Aghajohari, Eva Portelance, Alessandro
Sordoni, Siva Reddy, Aaron Courville, Nicolas Le Roux},
journal={arXiv preprint arXiv:2410.01679},
year={2024},
archivePrefix={arXiv},
eprint={2410.01679},
primaryClass={cs.LG cs.CL}
} | kazemnejad2024vineppo: |
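The core substitution described above (replacing a learned value network with unbiased Monte Carlo estimates, possible because a language environment can be re-entered at any intermediate state) reduces to a few lines; the rollout interface is an assumed stand-in, not the authors' code:

import random
import statistics

def mc_value_estimate(state, rollout_fn, k=8):
    """Unbiased value estimate: average the returns of K independent
    continuations sampled from the current policy starting at `state`.
    `rollout_fn(state)` is an assumed interface that completes one episode
    (e.g., finishes the chain-of-thought) and returns its scalar reward."""
    return statistics.fmean(rollout_fn(state) for _ in range(k))

# toy environment: the continuation "solves" the task 30% of the time
rng = random.Random(0)
value = mc_value_estimate("partial solution...", lambda s: rng.random() < 0.3, k=64)
print(value)   # roughly 0.3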
arxiv-664592 | 2410.01680 | PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation | <|reference_start|>PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation: Various visual foundation models have distinct strengths and weaknesses, both of which can be improved through heterogeneous multi-teacher knowledge distillation without labels, termed "agglomerative models." We build upon this body of work by studying the effect of the teachers' activation statistics, particularly the impact of the loss function on the resulting student model quality. We explore a standard toolkit of statistical normalization techniques to better align the different distributions and assess their effects. Further, we examine the impact on downstream teacher-matching metrics, which motivates the use of Hadamard matrices. With these matrices, we demonstrate useful properties, showing how they can be used for isotropic standardization, where each dimension of a multivariate distribution is standardized using the same scale. We call this technique "PHI Standardization" (PHI-S) and empirically demonstrate that it produces the best student model across the suite of methods studied.<|reference_end|> | arxiv | @article{ranzinger2024phi-s:,
title={PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation},
author={Mike Ranzinger, Jon Barker, Greg Heinrich, Pavlo Molchanov, Bryan
Catanzaro, Andrew Tao},
journal={arXiv preprint arXiv:2410.01680},
year={2024},
archivePrefix={arXiv},
eprint={2410.01680},
primaryClass={cs.LG cs.AI cs.CV}
} | ranzinger2024phi-s: |
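A sketch of PHI Standardization as the abstract presents it: rotate centered features with a normalized Hadamard matrix so variance spreads evenly across dimensions, then divide by one shared scale. The Sylvester construction requires a power-of-two dimension; that restriction and the exact scale are assumptions here:

import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def phi_standardize(X):
    """Rotate centered features with H/sqrt(d), then apply one shared scale
    so every output dimension has (approximately) unit variance."""
    d = X.shape[1]
    H = hadamard(d) / np.sqrt(d)                 # orthonormal rotation
    Z = (X - X.mean(axis=0)) @ H.T
    return Z / Z.std()                           # a single isotropic scale

X = np.random.randn(1000, 8) * np.array([10, 1, 1, 1, 1, 1, 1, 1])
Z = phi_standardize(X)
print(Z.var(axis=0).round(2))                    # variance roughly equal across dims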
arxiv-664593 | 2410.01684 | A Microgrid Deployment Framework to Support Drayage Electrification | <|reference_start|>A Microgrid Deployment Framework to Support Drayage Electrification: The electrification of heavy-duty commercial vehicles (HDCVs) is pivotal in reducing greenhouse gas emissions and urban air pollution; however, this transition poses significant challenges for the existing electric grid, which is not designed to meet the high electricity demands of HDCVs. This can lead to a less effective reduction in freight transportation's carbon intensity despite significant electrification efforts. Deploying renewable energy sources, such as photovoltaics, alongside energy storage solutions, is essential to address these challenges. This paper examines the current grid limitations and explores the critical role of microgrid deployment, integrating solar and battery energy storage systems, in supporting the electrification of HDCVs. We propose an integrated framework that is designed to enhance regional grid capacity and decrease carbon intensity by identifying viable sites where a microgrid can be deployed and provide estimates for the deployment cost. Furthermore, using this framework, we quantify the maximal impact of microgrid deployment in reducing CO2 emissions when we optimize the use of the available power. As a demonstration, we apply our framework to the region of the Port of Savannah, GA USA.<|reference_end|> | arxiv | @article{lucero2024a,
title={A Microgrid Deployment Framework to Support Drayage Electrification},
author={Joseph N. E. Lucero, Ruixiao Sun, Brandon A. Miller, Simona Onori,
Vivek A. Sujan},
journal={arXiv preprint arXiv:2410.01684},
year={2024},
archivePrefix={arXiv},
eprint={2410.01684},
primaryClass={eess.SY cs.SY}
} | lucero2024a |
arxiv-664594 | 2410.01685 | Effects of eco-driving on energy consumption and battery degradation for electric vehicles at signalized intersections | <|reference_start|>Effects of eco-driving on energy consumption and battery degradation for electric vehicles at signalized intersections: Eco-driving has been shown to reduce energy consumption for electric vehicles (EVs). Such strategies can also be implemented to both reduce energy consumption and improve battery lifetime. This study considers the eco-driving of a connected electric vehicle equipped with vehicle-to-infrastructure (V2I) communication passing through two signalized intersections. Dynamic programming is employed to construct an eco-driving algorithm that incorporates a battery degradation model in addition to minimizing energy consumption to optimize the vehicle's speed trajectory while transiting the control zone. A parametric study is conducted for various signal timings and distances between the two intersections. It is found that eco-driving can provide up to 49\% in cost benefits over regular driving due to energy savings and improved battery life, which could boost consumers' interest in EVs. This study also considered different battery capacity decay rates based on battery chemistry. Although a higher decay rate affects the optimal speed trajectories only slightly, it amplifies the benefits of eco-driving on battery life. Two battery sizes were also studied to show that the larger battery is associated with a drastically increased lifetime, thus creating opportunities for electric vehicles in other applications such as vehicle-to-grid (V2G) integration. Field tests were also conducted using a simplified rule-based version of the eco-driving algorithm implemented as a phone app that issues audio speed recommendations to the driver. The field test results were promising and validated the results from simulations. The phone app implementation is convenient and could facilitate broader adoption and widespread use of eco-driving, which helps to improve transportation efficiency and protect the environment.<|reference_end|> | arxiv | @article{wang2024effects,
title={Effects of eco-driving on energy consumption and battery degradation for
electric vehicles at signalized intersections},
author={Yongqiang Wang, Suresh G. Advani, Ajay K. Prasad},
journal={arXiv preprint arXiv:2410.01685},
year={2024},
archivePrefix={arXiv},
eprint={2410.01685},
primaryClass={eess.SY cs.SY}
} | wang2024effects |
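
The entry above pairs dynamic programming with a degradation-aware cost. A compact sketch of that style of DP for a single signalized intersection follows; the energy model, the throughput-based degradation proxy, and all vehicle and signal numbers are simplified assumptions, not the paper's calibrated models.

    # Sketch of a dynamic-programming eco-driving pass toward one signal.
    # Energy model, degradation proxy, and all numbers are stand-ins.
    DX = 50.0                            # distance step (m)
    N_STEPS = 10                         # 500 m control zone
    SPEEDS = [5.0, 10.0, 15.0, 20.0]     # admissible speeds (m/s)
    T_RES = 1.0                          # time-bucket resolution (s)
    GREEN = (20.0, 50.0)                 # green window at the stop line (s)
    MASS, CRR, CDA, RHO, G = 20000.0, 0.006, 5.0, 1.2, 9.81

    def step_cost(v0, v1):
        """Energy (J) over one step plus a throughput-style degradation term."""
        v_avg = 0.5 * (v0 + v1)
        e = (MASS * CRR * G + 0.5 * RHO * CDA * v_avg ** 2) * DX
        dk = 0.5 * MASS * (v1 ** 2 - v0 ** 2)
        e += dk if dk > 0 else 0.7 * dk          # 70% regenerative braking
        return e + 1e-4 * abs(e)                 # degradation ~ throughput

    # Forward DP over (speed level, arrival-time bucket) states.
    frontier = {(SPEEDS.index(15.0), 0): 0.0}    # start at 15 m/s, t = 0
    for _ in range(N_STEPS):
        nxt = {}
        for (i, tb), cost in frontier.items():
            for j, v1 in enumerate(SPEEDS):
                if abs(v1 - SPEEDS[i]) > 5.0:    # crude per-step accel limit
                    continue
                t = tb * T_RES + DX / (0.5 * (SPEEDS[i] + v1))
                key = (j, round(t / T_RES))
                c = cost + step_cost(SPEEDS[i], v1)
                if key not in nxt or c < nxt[key]:
                    nxt[key] = c
        frontier = nxt

    # Keep only trajectories that reach the stop line during green.
    feasible = [c for (j, tb), c in frontier.items()
                if GREEN[0] <= tb * T_RES <= GREEN[1]]
    print(f"best feasible cost: {min(feasible) / 3.6e6:.3f} kWh-equivalent")
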
arxiv-664595 | 2410.01686 | Positional Attention: Out-of-Distribution Generalization and Expressivity for Neural Algorithmic Reasoning | <|reference_start|>Positional Attention: Out-of-Distribution Generalization and Expressivity for Neural Algorithmic Reasoning: There has been a growing interest in the ability of neural networks to solve algorithmic tasks, such as arithmetic, summary statistics, and sorting. While state-of-the-art models like Transformers have demonstrated good generalization performance on in-distribution tasks, their out-of-distribution (OOD) performance is poor when trained end-to-end. In this paper, we focus on value generalization, a common instance of OOD generalization where the test distribution has the same input sequence length as the training distribution, but the value ranges in the training and test distributions do not necessarily overlap. To address this issue, we propose that using fixed positional encodings to determine attention weights, referred to as positional attention, enhances empirical OOD performance while maintaining expressivity. We support our claim about expressivity by proving that Transformers with positional attention can effectively simulate parallel algorithms.<|reference_end|> | arxiv | @article{de luca2024positional,
title={Positional Attention: Out-of-Distribution Generalization and
Expressivity for Neural Algorithmic Reasoning},
author={Artur Back de Luca, George Giapitzakis, Shenghao Yang, Petar
Veli\v{c}kovi\'c, Kimon Fountoulakis},
journal={arXiv preprint arXiv:2410.01686},
year={2024},
archivePrefix={arXiv},
eprint={2410.01686},
primaryClass={cs.LG cs.AI cs.DS}
} | de luca2024positional |
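
A minimal PyTorch sketch of the core idea, attention weights computed solely from fixed positional encodings while values still come from token content, follows. The random encodings and single-head layout are assumptions for illustration, not the authors' reference implementation.

    # Positional attention sketch: queries/keys act on fixed positional
    # encodings, so attention weights are input-independent; only the
    # value path sees token content.
    import math
    import torch
    import torch.nn as nn

    class PositionalAttention(nn.Module):
        def __init__(self, seq_len, d_pos, d_model):
            super().__init__()
            pe = torch.randn(seq_len, d_pos)      # fixed, non-trainable encodings
            self.register_buffer("pe", pe)
            self.wq = nn.Linear(d_pos, d_pos)     # queries/keys act on positions
            self.wk = nn.Linear(d_pos, d_pos)
            self.wv = nn.Linear(d_model, d_model) # values act on token content

        def forward(self, x):                     # x: (batch, seq_len, d_model)
            q, k = self.wq(self.pe), self.wk(self.pe)
            attn = torch.softmax(q @ k.T / math.sqrt(q.shape[-1]), dim=-1)
            return attn @ self.wv(x)              # same weights for every item

    x = torch.randn(2, 8, 16)
    layer = PositionalAttention(seq_len=8, d_pos=4, d_model=16)
    print(layer(x).shape)                         # torch.Size([2, 8, 16])

Because the attention map never depends on input values, shifting the value range at test time cannot distort the routing pattern, which is one intuition for the value-generalization claim.
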
arxiv-664596 | 2410.01687 | Uncertainty Quantification with Bayesian Higher Order ReLU KANs | <|reference_start|>Uncertainty Quantification with Bayesian Higher Order ReLU KANs: We introduce the first method of uncertainty quantification in the domain of Kolmogorov-Arnold Networks, specifically focusing on (Higher Order) ReLUKANs to enhance computational efficiency given the computational demands of Bayesian methods. The method we propose is general in nature, providing access to both epistemic and aleatoric uncertainties. It also generalizes to various other basis functions. We validate our method through a series of closure tests, including simple one-dimensional functions and application to the domain of (Stochastic) Partial Differential Equations. Referring to the latter, we demonstrate the method's ability to correctly identify functional dependencies introduced through the inclusion of a stochastic term. The code supporting this work can be found at https://github.com/wmdataphys/Bayesian-HR-KAN<|reference_end|> | arxiv | @article{giroux2024uncertainty,
title={Uncertainty Quantification with Bayesian Higher Order ReLU KANs},
author={James Giroux, Cristiano Fanelli},
journal={arXiv preprint arXiv:2410.01687},
year={2024},
archivePrefix={arXiv},
eprint={2410.01687},
primaryClass={cs.LG cs.AI physics.data-an}
} | giroux2024uncertainty |
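
One way to read the construction above is a fixed ReLU-product basis (raised to a power for the higher-order variant) with a Bayesian posterior over the combination coefficients. The sketch below uses a mean-field Gaussian and Monte Carlo sampling; the basis shape and the variational treatment are illustrative guesses, not the paper's exact formulation.

    # Bayesian ReLU-KAN-style 1D unit: fixed ReLU-product basis functions
    # with mean-field Gaussian coefficients; epistemic spread comes from
    # Monte Carlo sampling of the coefficients.
    import torch

    def relu_basis(x, grid=8, order=2, lo=-1.0, hi=1.0):
        """phi_i(x) = [relu(x - a_i) * relu(b_i - x)]^order on grid bins."""
        edges = torch.linspace(lo, hi, grid + 1)
        a, b = edges[:-1], edges[1:]
        phi = torch.relu(x[:, None] - a) * torch.relu(b - x[:, None])
        return (phi / phi.max().clamp_min(1e-9)) ** order   # (N, grid)

    # Mean-field Gaussian over the basis coefficients (trainable in practice).
    mu = torch.zeros(8, requires_grad=True)
    log_sigma = torch.full((8,), -2.0, requires_grad=True)

    def predict(x, samples=64):
        phi = relu_basis(x)
        w = mu + log_sigma.exp() * torch.randn(samples, 8)  # (S, grid)
        y = phi @ w.T                                       # (N, S)
        return y.mean(1), y.std(1)   # predictive mean, epistemic spread

    x = torch.linspace(-1, 1, 5)
    mean, std = predict(x)
    print(mean.shape, std.shape)     # torch.Size([5]) torch.Size([5])
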
arxiv-664597 | 2410.01690 | Why context matters in VQA and Reasoning: Semantic interventions for VLM input modalities | <|reference_start|>Why context matters in VQA and Reasoning: Semantic interventions for VLM input modalities: The various limitations of Generative AI, such as hallucinations and model failures, have made it crucial to understand the role of different modalities in Visual Language Model (VLM) predictions. Our work investigates how the integration of information from image and text modalities influences the performance and behavior of VLMs in visual question answering (VQA) and reasoning tasks. We measure this effect through answer accuracy, reasoning quality, model uncertainty, and modality relevance. We study the interplay between text and image modalities in different configurations where visual content is essential for solving the VQA task. Our contributions include (1) the Semantic Interventions (SI)-VQA dataset, (2) a benchmark study of various VLM architectures under different modality configurations, and (3) the Interactive Semantic Interventions (ISI) tool. The SI-VQA dataset serves as the foundation for the benchmark, while the ISI tool provides an interface to test and apply semantic interventions in image and text inputs, enabling more fine-grained analysis. Our results show that complementary information between modalities improves answer and reasoning quality, while contradictory information harms model performance and confidence. Image text annotations have minimal impact on accuracy and uncertainty, slightly increasing image relevance. Attention analysis confirms the dominant role of image inputs over text in VQA tasks. In this study, we evaluate state-of-the-art VLMs that allow us to extract attention coefficients for each modality. A key finding is PaliGemma's harmful overconfidence, which poses a higher risk of silent failures compared to the LLaVA models. This work sets the foundation for rigorous analysis of modality integration, supported by datasets specifically designed for this purpose.<|reference_end|> | arxiv | @article{amara2024why,
title={Why context matters in VQA and Reasoning: Semantic interventions for VLM
input modalities},
author={Kenza Amara, Lukas Klein, Carsten L\"uth, Paul J\"ager, Hendrik
Strobelt, Mennatallah El-Assady},
journal={arXiv preprint arXiv:2410.01690},
year={2024},
archivePrefix={arXiv},
eprint={2410.01690},
primaryClass={cs.AI}
} | amara2024why |
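
The study's modality-relevance measure rests on attention coefficients; a minimal sketch of one plausible aggregation, the share of attention mass landing on image versus text tokens, follows. The token layout and the averaging rule are assumptions for illustration, not the paper's exact protocol.

    # Modality relevance sketch: fraction of a generated token's attention
    # mass over the input sequence that falls on image vs. text tokens.
    import torch

    def modality_relevance(attn, n_image_tokens):
        """attn: (layers, heads, n_input_tokens) attention from a generated
        token to the input sequence, with image tokens placed first."""
        mass = attn.mean(dim=(0, 1))             # average over layers/heads
        img = mass[:n_image_tokens].sum()
        txt = mass[n_image_tokens:].sum()
        total = img + txt
        return (img / total).item(), (txt / total).item()

    attn = torch.rand(24, 16, 600)               # e.g. 576 image + 24 text tokens
    img_rel, txt_rel = modality_relevance(attn, n_image_tokens=576)
    print(f"image: {img_rel:.2f}, text: {txt_rel:.2f}")
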
arxiv-664598 | 2410.01691 | FactAlign: Long-form Factuality Alignment of Large Language Models | <|reference_start|>FactAlign: Long-form Factuality Alignment of Large Language Models: Large language models have demonstrated significant potential as next-generation information access engines. However, their reliability is hindered by issues of hallucination and generating non-factual content. This is particularly problematic in long-form responses, where assessing and ensuring factual accuracy is complex. In this paper, we address this gap by proposing FactAlign, a novel alignment framework designed to enhance the factuality of LLMs' long-form responses while maintaining their helpfulness. We introduce fKTO, a fine-grained, sentence-level alignment algorithm that extends the Kahneman-Tversky Optimization (KTO) alignment method. Leveraging recent advances in automatic factuality evaluation, FactAlign utilizes fine-grained factuality assessments to guide the alignment process. Our experiments on open-domain prompts and information-seeking questions demonstrate that FactAlign significantly improves the factual accuracy of LLM responses while also improving their helpfulness. Further analyses identify that FactAlign is capable of training LLMs to provide more information without losing factual precision, thus improving the factual F1 score. Our source code, datasets, and trained models are publicly available at https://github.com/MiuLab/FactAlign<|reference_end|> | arxiv | @article{huang2024factalign:,
title={FactAlign: Long-form Factuality Alignment of Large Language Models},
author={Chao-Wei Huang and Yun-Nung Chen},
journal={arXiv preprint arXiv:2410.01691},
year={2024},
archivePrefix={arXiv},
eprint={2410.01691},
primaryClass={cs.CL cs.AI}
} | huang2024factalign: |
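
fKTO extends KTO to sentence granularity; a sketch of what such a sentence-level KTO-style objective could look like follows, with per-sentence factuality labels standing in for the automatic judge. The weighting and the reference-point handling are simplifications, not the paper's exact algorithm.

    # Sentence-level KTO-style loss sketch: each sentence is scored as
    # desirable (factual) or undesirable (non-factual) and pushed toward
    # or away from the reference policy accordingly.
    import torch

    def fkto_loss(logratios, labels, kl_ref, beta=0.1, lam_d=1.0, lam_u=1.0):
        """logratios: (S,) sum over each sentence's tokens of
        log pi_theta - log pi_ref; labels: (S,) 1 = factual, 0 = not."""
        z = kl_ref.detach()                      # reference point (KL estimate)
        desirable = lam_d * (1 - torch.sigmoid(beta * (logratios - z)))
        undesirable = lam_u * (1 - torch.sigmoid(beta * (z - logratios)))
        losses = torch.where(labels.bool(), desirable, undesirable)
        return losses.mean()

    logratios = torch.tensor([0.8, -0.3, 0.1])   # 3 sentences in one response
    labels = torch.tensor([1, 0, 1])             # judge: factual, not, factual
    print(fkto_loss(logratios, labels, kl_ref=torch.tensor(0.05)))
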
arxiv-664599 | 2410.01692 | U-shaped and Inverted-U Scaling behind Emergent Abilities of Large Language Models | <|reference_start|>U-shaped and Inverted-U Scaling behind Emergent Abilities of Large Language Models: Large language models (LLMs) have been shown to exhibit emergent abilities in some downstream tasks, where performance seems to stagnate at first and then improve sharply and unpredictably with scale beyond a threshold. By dividing the questions in each dataset into difficulty levels based on average performance, we observe U-shaped scaling for hard questions, and inverted-U scaling followed by steady improvement for easy questions. Moreover, the emergence threshold roughly coincides with the point at which performance on easy questions reverts from inverse scaling to standard scaling. Capitalizing on the observable though opposing scaling trends on easy and hard questions, we propose a simple yet effective pipeline, called Slice-and-Sandwich, to predict both the emergence threshold and model performance beyond the threshold.<|reference_end|> | arxiv | @article{wu2024u-shaped,
title={U-shaped and Inverted-U Scaling behind Emergent Abilities of Large
Language Models},
author={Tung-Yu Wu and Pei-Yu Lo},
journal={arXiv preprint arXiv:2410.01692},
year={2024},
archivePrefix={arXiv},
eprint={2410.01692},
primaryClass={cs.AI cs.CL}
} | wu2024u-shaped |
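
A rough numerical sketch of a Slice-and-Sandwich-style forecast follows: fit the easy and hard slices separately against log-compute, read the threshold off the easy slice's turning point, then recombine. The quadratic fits, the threshold rule, and every number are illustrative only.

    # Slice-and-Sandwich-style sketch: per-slice trend fits, a threshold
    # from the easy slice's reversal point, and a recombined forecast.
    import numpy as np

    logc = np.array([1, 2, 3, 4, 5], dtype=float)    # log10 training compute
    easy = np.array([0.55, 0.48, 0.45, 0.60, 0.78])  # inverted-U then recovery
    hard = np.array([0.30, 0.22, 0.15, 0.18, 0.35])  # U-shaped

    fe = np.poly1d(np.polyfit(logc, easy, deg=2))    # quadratic fit per slice
    fh = np.poly1d(np.polyfit(logc, hard, deg=2))

    # Emergence threshold ~ where the easy slice reverts from inverse to
    # standard scaling, i.e. the minimum of the fitted easy curve.
    threshold = fe.deriv().roots[0]
    grid = np.linspace(4, 7, 4)                      # extrapolate past threshold
    overall = 0.5 * fe(grid) + 0.5 * fh(grid)        # 50/50 easy/hard mix
    print(f"threshold ~ 10^{threshold:.2f}; forecast: {overall.round(2)}")
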
arxiv-664600 | 2410.01695 | From Prohibition to Adoption: How Hong Kong Universities Are Navigating ChatGPT in Academic Workflows | <|reference_start|>From Prohibition to Adoption: How Hong Kong Universities Are Navigating ChatGPT in Academic Workflows: This paper compares the period when Hong Kong universities banned ChatGPT with the current period, in which it has become integrated into academic workflows. Initially prompted by concerns over academic integrity and the ethics of the technology, institutions have since adapted, adopting policies centered on AI literacy and responsible use. This study examines the new frameworks developed to realize these benefits while preventing negative effects on academia. Keywords: ChatGPT, Academic Integrity, AI Literacy, Ethical AI Use, Generative AI in Education, University Policy, AI Integration in Academia, Higher Education and Technology<|reference_end|> | arxiv | @article{huang2024from,
title={From Prohibition to Adoption: How Hong Kong Universities Are Navigating
ChatGPT in Academic Workflows},
author={Junjun Huang, Jifan Wu, Qing Wang, Kemeng Yuan, Jiefeng Li, Di Lu},
journal={arXiv preprint arXiv:2410.01695},
year={2024},
archivePrefix={arXiv},
eprint={2410.01695},
primaryClass={cs.CY cs.AI}
} | huang2024from |