corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-666101 | 2410.04245 | Towards Propositional KLM-Style Defeasible Standpoint Logics | <|reference_start|>Towards Propositional KLM-Style Defeasible Standpoint Logics: The KLM approach to defeasible reasoning introduces a weakened form of implication into classical logic. This allows one to incorporate exceptions to general rules into a logical system, and for old conclusions to be withdrawn upon learning new contradictory information. Standpoint logics are a group of logics, introduced to the field of Knowledge Representation in the last 5 years, which allow for multiple viewpoints to be integrated into the same ontology, even when certain viewpoints may hold contradicting beliefs. In this paper, we aim to integrate standpoints into KLM propositional logic in a restricted setting. We introduce the logical system of Defeasible Restricted Standpoint Logic (DRSL) and define its syntax and semantics. Specifically, we integrate ranked interpretations and standpoint structures, which provide the semantics for propositional KLM and propositional standpoint logic respectively, in order to introduce ranked standpoint structures for DRSL. Moreover, we extend the non-monotonic entailment relation of rational closure from the propositional KLM case to the DRSL case. The main contribution of this paper is to characterize rational closure for DRSL both algorithmically and semantically, showing that rational closure can be characterized through a single representative ranked standpoint structure. Finally, we conclude that the semantic and algorithmic characterizations of rational closure are equivalent, and that entailment-checking for DRSL under rational closure is in the same complexity class as entailment-checking for propositional KLM.<|reference_end|> | arxiv | @article{leisegang2024towards,
title={Towards Propositional KLM-Style Defeasible Standpoint Logics},
author={Nicholas Leisegang and Thomas Meyer and Sebastian Rudolph},
journal={arXiv preprint arXiv:2410.04245},
year={2024},
archivePrefix={arXiv},
eprint={2410.04245},
primaryClass={cs.AI}
} | leisegang2024towards |
arxiv-666102 | 2410.04246 | Navigating the Future of Healthcare HR: Agile Strategies for Overcoming Modern Challenges | <|reference_start|>Navigating the Future of Healthcare HR: Agile Strategies for Overcoming Modern Challenges: This study examines the challenges hospitals encounter in managing human resources and proposes potential solutions. It provides an overview of current HR practices in hospitals, highlighting key issues affecting recruitment, retention, and professional development of medical staff. The study further explores how these challenges impact patient outcomes and overall hospital performance. A comprehensive framework for effective human resource management is presented, outlining strategies for recruiting, retaining, training, and advancing medical professionals. This framework is informed by industry best practices and the latest research in healthcare HR management. The findings underscore that effective HR management is crucial for hospital success and offer recommendations for executives and policymakers to enhance their HR strategies. Additionally, our project introduces a Dropbox feature to facilitate patient care. This allows patients to report their issues, enabling doctors to quickly address ailments via our app. Patients can easily identify local doctors and schedule appointments. The app will also provide emergency medical services and accept online payments, while maintaining a record of patient interactions. Both patients and doctors can file complaints through the app, ensuring appropriate follow-up actions.<|reference_end|> | arxiv | @article{karim2024navigating,
title={Navigating the Future of Healthcare HR: Agile Strategies for Overcoming
Modern Challenges},
author={Syeda Aynul Karim and Md. Juniadul Islam},
journal={arXiv preprint arXiv:2410.04246},
year={2024},
archivePrefix={arXiv},
eprint={2410.04246},
primaryClass={cs.CY}
} | karim2024navigating |
arxiv-666103 | 2410.04247 | Unraveling the Nuances of AI Accountability: A Synthesis of Dimensions Across Disciplines | <|reference_start|>Unraveling the Nuances of AI Accountability: A Synthesis of Dimensions Across Disciplines: The widespread diffusion of Artificial Intelligence (AI)-based systems offers many opportunities to contribute to the well-being of individuals and the advancement of economies and societies. This diffusion is, however, closely accompanied by public scandals causing harm to individuals, markets, or society, and leading to the increasing importance of accountability. AI accountability itself faces conceptual ambiguity, with research scattered across multiple disciplines. To address these issues, we review current research across multiple disciplines and identify key dimensions of accountability in the context of AI. We reveal six themes with 13 corresponding dimensions and additional accountability facilitators that future research can utilize to specify accountability scenarios in the context of AI-based systems.<|reference_end|> | arxiv | @article{nguyen2024unraveling,
title={Unraveling the Nuances of AI Accountability: A Synthesis of Dimensions
Across Disciplines},
author={L. H. Nguyen and S. Lins and M. Renner and A. Sunyaev},
journal={arXiv preprint arXiv:2410.04247},
year={2024},
doi={10.5445/IR/1000170105},
archivePrefix={arXiv},
eprint={2410.04247},
primaryClass={cs.CY}
} | nguyen2024unraveling |
arxiv-666104 | 2410.04249 | DiffSpec: Differential Testing with LLMs using Natural Language Specifications and Code Artifacts | <|reference_start|>DiffSpec: Differential Testing with LLMs using Natural Language Specifications and Code Artifacts: Differential testing can be an effective way to find bugs in software systems with multiple implementations that conform to the same specification, like compilers, network protocol parsers, and language runtimes. Specifications for such systems are often standardized in natural language documents, like Instruction Set Architecture (ISA) specifications, Wasm specifications or IETF RFCs. Large Language Models (LLMs) have demonstrated potential in both generating tests and handling large volumes of natural language text, making them well-suited for utilizing artifacts like specification documents, bug reports, and code implementations. In this work, we leverage natural language and code artifacts to guide LLMs to generate targeted, meaningful tests that highlight meaningful behavioral differences between implementations, including those corresponding to bugs. We introduce DiffSpec, a framework for generating differential tests with LLMs using prompt chaining. We demonstrate the efficacy of DiffSpec on two different systems, namely, eBPF runtimes and Wasm validators. Using DiffSpec, we generated 359 differentiating tests, uncovering at least four distinct and confirmed bugs in eBPF, including a kernel memory leak, inconsistent behavior in jump instructions, and undefined behavior when using the stack pointer. We also found 279 differentiating tests in Wasm validators that point to at least 2 confirmed and fixed bugs.<|reference_end|> | arxiv | @article{rao2024diffspec:,
title={DiffSpec: Differential Testing with LLMs using Natural Language
Specifications and Code Artifacts},
author={Nikitha Rao and Elizabeth Gilbert and Tahina Ramananandro and Nikhil
Swamy and Claire Le Goues and Sarah Fakhoury},
journal={arXiv preprint arXiv:2410.04249},
year={2024},
archivePrefix={arXiv},
eprint={2410.04249},
primaryClass={cs.SE}
} | rao2024diffspec: |
arxiv-666105 | 2410.04250 | ETHcavation: A Dataset and Pipeline for Panoptic Scene Understanding and Object Tracking in Dynamic Construction Environments | <|reference_start|>ETHcavation: A Dataset and Pipeline for Panoptic Scene Understanding and Object Tracking in Dynamic Construction Environments: Construction sites are challenging environments for autonomous systems due to their unstructured nature and the presence of dynamic actors, such as workers and machinery. This work presents a comprehensive panoptic scene understanding solution designed to handle the complexities of such environments by integrating 2D panoptic segmentation with 3D LiDAR mapping. Our system generates detailed environmental representations in real-time by combining semantic and geometric data, supported by Kalman Filter-based tracking for dynamic object detection. We introduce a fine-tuning method that adapts large pre-trained panoptic segmentation models for construction site applications using a limited number of domain-specific samples. For this use case, we release a first-of-its-kind dataset of 502 hand-labeled sample images with panoptic annotations from construction sites. In addition, we propose a dynamic panoptic mapping technique that enhances scene understanding in unstructured environments. As a case study, we demonstrate the system's application for autonomous navigation, utilizing real-time RRT* for reactive path planning in dynamic scenarios. The dataset (https://leggedrobotics.github.io/panoptic-scene-understanding.github.io/) and code (https://github.com/leggedrobotics/rsl_panoptic_mapping) for training and deployment are publicly available to support future research.<|reference_end|> | arxiv | @article{terenzi2024ethcavation:,
title={ETHcavation: A Dataset and Pipeline for Panoptic Scene Understanding and
Object Tracking in Dynamic Construction Environments},
author={Lorenzo Terenzi and Julian Nubert and Pol Eyschen and Pascal Roth and
Simin Fei and Edo Jelavic and Marco Hutter},
journal={arXiv preprint arXiv:2410.04250},
year={2024},
archivePrefix={arXiv},
eprint={2410.04250},
primaryClass={cs.RO}
} | terenzi2024ethcavation: |
arxiv-666106 | 2410.04251 | Enhancing Future Link Prediction in Quantum Computing Semantic Networks through LLM-Initiated Node Features | <|reference_start|>Enhancing Future Link Prediction in Quantum Computing Semantic Networks through LLM-Initiated Node Features: Quantum computing is rapidly evolving in both physics and computer science, offering the potential to solve complex problems and accelerate computational processes. The development of quantum chips necessitates understanding the correlations among diverse experimental conditions. Semantic networks built on scientific literature, representing meaningful relationships between concepts, have been used across various domains to identify knowledge gaps and novel concept combinations. Neural network-based approaches have shown promise in link prediction within these networks. This study proposes initializing node features using LLMs to enhance node representations for link prediction tasks in graph neural networks. LLMs can provide rich descriptions, reducing the need for manual feature creation and lowering costs. Our method, evaluated using various link prediction models on a quantum computing semantic network, demonstrated efficacy compared to traditional node embedding techniques.<|reference_end|> | arxiv | @article{park2024enhancing,
title={Enhancing Future Link Prediction in Quantum Computing Semantic Networks
through LLM-Initiated Node Features},
author={Gilchan Park and Paul Baity and Byung-Jun Yoon and Adolfy Hoisie},
journal={arXiv preprint arXiv:2410.04251},
year={2024},
archivePrefix={arXiv},
eprint={2410.04251},
primaryClass={cs.LG cs.AI cs.CL cs.SI quant-ph}
} | park2024enhancing |
arxiv-666107 | 2410.04252 | Lazy Qubit Reordering for Accelerating Parallel State-Vector-based Quantum Circuit Simulation | <|reference_start|>Lazy Qubit Reordering for Accelerating Parallel State-Vector-based Quantum Circuit Simulation: This paper proposes two quantum operation scheduling methods for accelerating parallel state-vector-based quantum circuit simulation using multiple graphics processing units (GPUs). The proposed methods reduce all-to-all communication caused by qubit reordering (QR), which can dominate the overhead of parallel simulation. Our approach eliminates redundant QRs by introducing intentional delays in QR communications such that multiple QRs can be aggregated into a single QR. The delays are carefully introduced based on the principles of time-space tiling, or a cache optimization technique for classical computers, which we use to arrange the execution order of quantum operations. Moreover, we present an extended scheduling method for the hierarchical interconnection of GPU cluster systems to avoid slow inter-node communication. We develop these methods tailored for two primary procedures in variational quantum eigensolver (VQE) simulation: quantum state update (QSU) and expectation value computation (EVC). Experimental validation on 32-GPU executions demonstrates acceleration in QSU and EVC -- up to 54$\times$ and 606$\times$, respectively -- compared to existing methods. Moreover, our extended scheduling method further reduced communication time by up to 15\% in a two-layered interconnected cluster system. Our approach is useful for any quantum circuit simulations, including QSU and/or EVC.<|reference_end|> | arxiv | @article{teranishi2024lazy,
title={Lazy Qubit Reordering for Accelerating Parallel State-Vector-based
Quantum Circuit Simulation},
author={Yusuke Teranishi and Shoma Hiraoka and Wataru Mizukami and Masao
Okita and Fumihiko Ino},
journal={arXiv preprint arXiv:2410.04252},
year={2024},
archivePrefix={arXiv},
eprint={2410.04252},
primaryClass={quant-ph cs.DC}
} | teranishi2024lazy |
arxiv-666108 | 2410.04253 | Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills | <|reference_start|>Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills: People's decision-making abilities often fail to improve or may even erode when they rely on AI for decision-support, even when the AI provides informative explanations. We argue this is partly because people intuitively seek contrastive explanations, which clarify the difference between the AI's decision and their own reasoning, while most AI systems offer "unilateral" explanations that justify the AI's decision but do not account for users' thinking. To align human-AI knowledge on decision tasks, we introduce a framework for generating human-centered contrastive explanations that explain the difference between AI's choice and a predicted, likely human choice about the same task. Results from a large-scale experiment (N = 628) demonstrate that contrastive explanations significantly enhance users' independent decision-making skills compared to unilateral explanations, without sacrificing decision accuracy. Amid rising deskilling concerns, our research demonstrates that incorporating human reasoning into AI design can foster human skill development.<|reference_end|> | arxiv | @article{buçinca2024contrastive,
title={Contrastive Explanations That Anticipate Human Misconceptions Can
Improve Human Decision-Making Skills},
author={Zana Bu\c{c}inca and Siddharth Swaroop and Amanda E. Paluch and Finale
Doshi-Velez and Krzysztof Z. Gajos},
journal={arXiv preprint arXiv:2410.04253},
year={2024},
archivePrefix={arXiv},
eprint={2410.04253},
primaryClass={cs.HC cs.AI}
} | buçinca2024contrastive |
arxiv-666109 | 2410.04254 | Entity Insertion in Multilingual Linked Corpora: The Case of Wikipedia | <|reference_start|>Entity Insertion in Multilingual Linked Corpora: The Case of Wikipedia: Links are a fundamental part of information networks, turning isolated pieces of knowledge into a network of information that is much richer than the sum of its parts. However, adding a new link to the network is not trivial: it requires not only the identification of a suitable pair of source and target entities but also the understanding of the content of the source to locate a suitable position for the link in the text. The latter problem has not been addressed effectively, particularly in the absence of text spans in the source that could serve as anchors to insert a link to the target entity. To bridge this gap, we introduce and operationalize the task of entity insertion in information networks. Focusing on the case of Wikipedia, we empirically show that this problem is both relevant and challenging for editors. We compile a benchmark dataset in 105 languages and develop a framework for entity insertion called LocEI (Localized Entity Insertion) and its multilingual variant XLocEI. We show that XLocEI outperforms all baseline models (including state-of-the-art prompt-based ranking with LLMs such as GPT-4) and that it can be applied in a zero-shot manner on languages not seen during training with minimal performance drop. These findings are important for applying entity insertion models in practice, e.g., to support editors in adding links across the more than 300 language versions of Wikipedia.<|reference_end|> | arxiv | @article{feith2024entity,
title={Entity Insertion in Multilingual Linked Corpora: The Case of Wikipedia},
author={Tom\'as Feith and Akhil Arora and Martin Gerlach and Debjit Paul and Robert West},
journal={arXiv preprint arXiv:2410.04254},
year={2024},
archivePrefix={arXiv},
eprint={2410.04254},
primaryClass={cs.CL cs.AI cs.IR cs.LG cs.SI}
} | feith2024entity |
arxiv-666110 | 2410.04255 | Advancements in Robotics Process Automation: A Novel Model with Enhanced Empirical Validation and Theoretical Insights | <|reference_start|>Advancements in Robotics Process Automation: A Novel Model with Enhanced Empirical Validation and Theoretical Insights: Robotics Process Automation is revolutionizing business operations by significantly enhancing efficiency, productivity, and operational excellence across various industries. This manuscript delivers a comprehensive review of recent advancements in RPA technologies and proposes a novel model designed to elevate RPA capabilities.<|reference_end|> | arxiv | @article{pandy2024advancements,
title={Advancements in Robotics Process Automation: A Novel Model with Enhanced
Empirical Validation and Theoretical Insights},
author={Gokul Pandy and Vivekananda Jayaram and Manjunatha Sughaturu
Krishnappa and Balaji Shesharao Ingole and Koushik Kumar Ganeeb and Shenson
Joseph},
journal={arXiv preprint arXiv:2410.04255},
year={2024},
doi={10.37745/ejcsit.2013/vol12n56473},
archivePrefix={arXiv},
eprint={2410.04255},
primaryClass={cs.RO cs.DC}
} | pandy2024advancements |
arxiv-666111 | 2410.04256 | Implicit to Explicit Entropy Regularization: Benchmarking ViT Fine-tuning under Noisy Labels | <|reference_start|>Implicit to Explicit Entropy Regularization: Benchmarking ViT Fine-tuning under Noisy Labels: Automatic annotation of large-scale datasets can introduce noisy training data labels, which adversely affect the learning process of deep neural networks (DNNs). Consequently, Noisy Labels Learning (NLL) has become a critical research field for Convolutional Neural Networks (CNNs), though it remains less explored for Vision Transformers (ViTs). In this study, we evaluate the vulnerability of ViT fine-tuning to noisy labels and compare its robustness with CNNs. We also investigate whether NLL methods developed for CNNs are equally effective for ViTs. Using linear probing and MLP-K fine-tuning, we benchmark two ViT backbones (ViT-B/16 and ViT-L/16) using three commonly used classification losses: Cross Entropy (CE), Focal Loss (FL), and Mean Absolute Error (MAE), alongside six robust NLL methods: GCE, SCE, NLNL, APL, NCE+AGCE, and ANL-CE. The evaluation is conducted across six datasets including MNIST, CIFAR-10/100, WebVision, Clothing1M, and Food-101N. Furthermore, we explore whether implicit prediction entropy minimization contributes to ViT robustness against noisy labels, noting a general trend of prediction entropy reduction across most NLL methods. Building on this observation, we examine whether explicit entropy minimization could enhance ViT resilience to noisy labels. Our findings indicate that incorporating entropy regularization enhances the performance of established loss functions such as CE and FL, as well as the robustness of the six studied NLL methods across both ViT backbones.<|reference_end|> | arxiv | @article{marrium2024implicit,
title={Implicit to Explicit Entropy Regularization: Benchmarking ViT
Fine-tuning under Noisy Labels},
author={Maria Marrium and Arif Mahmood and Mohammed Bennamoun},
journal={arXiv preprint arXiv:2410.04256},
year={2024},
archivePrefix={arXiv},
eprint={2410.04256},
primaryClass={cs.CV cs.AI}
} | marrium2024implicit |
arxiv-666112 | 2410.04259 | Is deeper always better? Replacing linear mappings with deep learning networks in the Discriminative Lexicon Model | <|reference_start|>Is deeper always better? Replacing linear mappings with deep learning networks in the Discriminative Lexicon Model: Recently, deep learning models have increasingly been used in cognitive modelling of language. This study asks whether deep learning can help us to better understand the learning problem that needs to be solved by speakers, above and beyond linear methods. We utilise the Discriminative Lexicon Model (DLM, Baayen et al., 2019), which models comprehension and production with mappings between numeric form and meaning vectors. While so far, these mappings have been linear (Linear Discriminative Learning, LDL), in the present study we replace them with deep dense neural networks (Deep Discriminative Learning, DDL). We find that DDL affords more accurate mappings for large and diverse datasets from English and Dutch, but not necessarily for Estonian and Taiwan Mandarin. DDL outperforms LDL in particular for words with pseudo-morphological structure such as slend+er. Applied to average reaction times, we find that DDL is outperformed by frequency-informed linear mappings (FIL). However, DDL trained in a frequency-informed way ('frequency-informed' deep learning, FIDDL) substantially outperforms FIL. Finally, while linear mappings can very effectively be updated from trial-to-trial to model incremental lexical learning (Heitmeier et al., 2023), deep mappings cannot do so as effectively. At present, both linear and deep mappings are informative for understanding language.<|reference_end|> | arxiv | @article{heitmeier2024is,
title={Is deeper always better? Replacing linear mappings with deep learning
networks in the Discriminative Lexicon Model},
author={Maria Heitmeier and Valeria Schmidt and Hendrik P.A. Lensch and R.
Harald Baayen},
journal={arXiv preprint arXiv:2410.04259},
year={2024},
archivePrefix={arXiv},
eprint={2410.04259},
primaryClass={cs.CL}
} | heitmeier2024is |
arxiv-666113 | 2410.04260 | Pareto Control Barrier Function for Inner Safe Set Maximization Under Input Constraints | <|reference_start|>Pareto Control Barrier Function for Inner Safe Set Maximization Under Input Constraints: This article introduces the Pareto Control Barrier Function (PCBF) algorithm to maximize the inner safe set of dynamical systems under input constraints. Traditional Control Barrier Functions (CBFs) ensure safety by maintaining system trajectories within a safe set but often fail to account for realistic input constraints. To address this problem, we leverage the Pareto multi-task learning framework to balance competing objectives of safety and safe set volume. The PCBF algorithm is applicable to high-dimensional systems and is computationally efficient. We validate its effectiveness through comparison with Hamilton-Jacobi reachability for an inverted pendulum and through simulations on a 12-dimensional quadrotor system. Results show that the PCBF consistently outperforms existing methods, yielding larger safe sets and ensuring safety under input constraints.<|reference_end|> | arxiv | @article{cao2024pareto,
title={Pareto Control Barrier Function for Inner Safe Set Maximization Under
Input Constraints},
author={Xiaoyang Cao and Zhe Fu and Alexandre M. Bayen},
journal={arXiv preprint arXiv:2410.04260},
year={2024},
archivePrefix={arXiv},
eprint={2410.04260},
primaryClass={math.OC cs.AI cs.RO}
} | cao2024pareto |
arxiv-666114 | 2410.04261 | Compositional Diffusion Models for Powered Descent Trajectory Generation with Flexible Constraints | <|reference_start|>Compositional Diffusion Models for Powered Descent Trajectory Generation with Flexible Constraints: This work introduces TrajDiffuser, a compositional diffusion-based flexible and concurrent trajectory generator for 6 degrees of freedom powered descent guidance. TrajDiffuser is a statistical model that learns the multi-modal distributions of a dataset of simulated optimal trajectories, each subject to only one or few constraints that may vary for different trajectories. During inference, the trajectory is generated simultaneously over time, providing stable long-horizon planning, and constraints can be composed together, increasing the model's generalizability and decreasing the training data required. The generated trajectory is then used to initialize an optimizer, increasing its robustness and speed.<|reference_end|> | arxiv | @article{briden2024compositional,
title={Compositional Diffusion Models for Powered Descent Trajectory Generation
with Flexible Constraints},
author={Julia Briden and Yilun Du and Enrico M. Zucchelli and Richard Linares},
journal={arXiv preprint arXiv:2410.04261},
year={2024},
archivePrefix={arXiv},
eprint={2410.04261},
primaryClass={cs.RO cs.LG cs.SY eess.SY math.OC}
} | briden2024compositional |
arxiv-666115 | 2410.04263 | DeFoG: Discrete Flow Matching for Graph Generation | <|reference_start|>DeFoG: Discrete Flow Matching for Graph Generation: Graph generation is fundamental in diverse scientific applications, due to its ability to reveal the underlying distribution of complex data, and eventually generate new, realistic data points. Despite the success of diffusion models in this domain, they face limitations in sampling efficiency and flexibility, stemming from the tight coupling between the training and sampling stages. To address this, we propose DeFoG, a novel framework using discrete flow matching for graph generation. DeFoG employs a flow-based approach that features an efficient linear interpolation noising process and a flexible denoising process based on a continuous-time Markov chain formulation. We leverage an expressive graph transformer and ensure desirable node permutation properties to respect graph symmetry. Crucially, our framework enables a disentangled design of the training and sampling stages, enabling more effective and efficient optimization of model performance. We navigate this design space by introducing several algorithmic improvements that boost the model performance, consistently surpassing existing diffusion models. We also theoretically demonstrate that, for general discrete data, discrete flow models can faithfully replicate the ground truth distribution - a result that naturally extends to graph data and reinforces DeFoG's foundations. Extensive experiments show that DeFoG achieves state-of-the-art results on synthetic and molecular datasets, improving both training and sampling efficiency over diffusion models, and excels in conditional generation on a digital pathology dataset.<|reference_end|> | arxiv | @article{qin2024defog:,
title={DeFoG: Discrete Flow Matching for Graph Generation},
author={Yiming Qin and Manuel Madeira and Dorina Thanou and Pascal Frossard},
journal={arXiv preprint arXiv:2410.04263},
year={2024},
archivePrefix={arXiv},
eprint={2410.04263},
primaryClass={cs.LG}
} | qin2024defog: |
arxiv-666116 | 2410.04264 | Visualising Feature Learning in Deep Neural Networks by Diagonalizing the Forward Feature Map | <|reference_start|>Visualising Feature Learning in Deep Neural Networks by Diagonalizing the Forward Feature Map: Deep neural networks (DNNs) exhibit a remarkable ability to automatically learn data representations, finding appropriate features without human input. Here we present a method for analysing feature learning by decomposing DNNs into 1) a forward feature-map $\Phi$ that maps the input dataspace to the post-activations of the penultimate layer, and 2) a final linear layer that classifies the data. We diagonalize $\Phi$ with respect to the gradient descent operator and track feature learning by measuring how the eigenfunctions and eigenvalues of $\Phi$ change during training. Across many popular architectures and classification datasets, we find that DNNs converge, after just a few epochs, to a minimal feature (MF) regime dominated by a number of eigenfunctions equal to the number of classes. This behaviour resembles the neural collapse phenomenon studied at longer training times. For other DNN-data combinations, such as a fully connected network on CIFAR10, we find an extended feature (EF) regime where significantly more features are used. Optimal generalisation performance upon hyperparameter tuning typically coincides with the MF regime, but we also find examples of poor performance within the MF regime. Finally, we recast the phenomenon of neural collapse into a kernel picture which can be extended to broader tasks such as regression.<|reference_end|> | arxiv | @article{nam2024visualising,
title={Visualising Feature Learning in Deep Neural Networks by Diagonalizing
the Forward Feature Map},
author={Yoonsoo Nam and Chris Mingard and Seok Hyeong Lee and Soufiane Hayou and Ard Louis},
journal={arXiv preprint arXiv:2410.04264},
year={2024},
archivePrefix={arXiv},
eprint={2410.04264},
primaryClass={stat.ML cs.LG}
} | nam2024visualising |
arxiv-666117 | 2410.04265 | AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text | <|reference_start|>AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text: Creativity has long been considered one of the most difficult aspects of human intelligence for AI to mimic. However, the rise of Large Language Models (LLMs), like ChatGPT, has raised questions about whether AI can match or even surpass human creativity. We present CREATIVITY INDEX as the first step to quantify the linguistic creativity of a text by reconstructing it from existing text snippets on the web. CREATIVITY INDEX is motivated by the hypothesis that the seemingly remarkable creativity of LLMs may be attributable in large part to the creativity of human-written texts on the web. To compute CREATIVITY INDEX efficiently, we introduce DJ SEARCH, a novel dynamic programming algorithm that can search verbatim and near-verbatim matches of text snippets from a given document against the web. Experiments reveal that the CREATIVITY INDEX of professional human authors is on average 66.2% higher than that of LLMs, and that alignment reduces the CREATIVITY INDEX of LLMs by an average of 30.1%. In addition, we find that distinguished authors like Hemingway exhibit measurably higher CREATIVITY INDEX compared to other human writers. Finally, we demonstrate that CREATIVITY INDEX can be used as a surprisingly effective criterion for zero-shot machine text detection, surpassing the strongest existing zero-shot system, DetectGPT, by a significant margin of 30.2%, and even outperforming the strongest supervised system, GhostBuster, in five out of six domains.<|reference_end|> | arxiv | @article{lu2024ai,
title={AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language
Models via Systematic Attribution of Machine Text against Web Text},
author={Ximing Lu and Melanie Sclar and Skyler Hallinan and Niloofar
Mireshghallah and Jiacheng Liu and Seungju Han and Allyson Ettinger and Liwei
Jiang and Khyathi Chandu and Nouha Dziri and Yejin Choi},
journal={arXiv preprint arXiv:2410.04265},
year={2024},
archivePrefix={arXiv},
eprint={2410.04265},
primaryClass={cs.CL}
} | lu2024ai |
arxiv-666118 | 2410.04266 | Constructing Cloze Questions Generatively | <|reference_start|>Constructing Cloze Questions Generatively: We present a generative method called CQG for constructing cloze questions from a given article using neural networks and WordNet, with an emphasis on generating multigram distractors. Built on sense disambiguation, text-to-text transformation, WordNet's synset taxonomies and lexical labels, CQG selects an answer key for a given sentence, segments it into a sequence of instances, generates instance-level distractor candidates (IDCs) using a transformer and sibling synsets. It then removes inappropriate IDCs, ranks the remaining IDCs based on contextual embedding similarities, as well as synset and lexical relatedness, forms distractor candidates by combinatorially replacing instances with the corresponding top-ranked IDCs, and checks if they are legitimate phrases. Finally, it selects top-ranked distractor candidates based on contextual semantic similarities to the answer key. Experiments show that this method significantly outperforms SOTA results. Human judges also confirm the high quality of the generated distractors.<|reference_end|> | arxiv | @article{sun2024constructing,
title={Constructing Cloze Questions Generatively},
author={Yicheng Sun (1) and Jie Wang (2)},
journal={2023 International Joint Conference on Neural Networks (IJCNN),
Gold Coast, Australia, 2023, pp. 1-8},
year={2024},
doi={10.1109/IJCNN54540.2023.10191481},
archivePrefix={arXiv},
eprint={2410.04266},
primaryClass={cs.CL cs.AI}
} | sun2024constructing |
arxiv-666119 | 2410.04268 | Slim-ABC: An Optimized Atomic Broadcast Protocol | <|reference_start|>Slim-ABC: An Optimized Atomic Broadcast Protocol: The Byzantine Agreement (BA) problem is a fundamental challenge in distributed systems, focusing on reaching an agreement among parties, some of which may behave maliciously. With the rise of cryptocurrencies, there has been significant interest in developing atomic broadcast protocols, which facilitate agreement on a subset of parties' requests. However, these protocols often come with high communication complexity ($O(ln^2 + \lambda n^3 \log n)$, where $l$ is the bit length of the input, $n$ is the number of parties, and $\lambda$ represents the security parameter bit length). This can lead to inefficiency, especially when the requests across parties exhibit little variation, resulting in unnecessary resource consumption. In this paper, we introduce Slim-ABC, a novel atomic broadcast protocol that eliminates the $O(ln^2 + \lambda n^3 \log n)$ term associated with traditional atomic broadcast protocols. While Slim-ABC reduces the number of accepted requests, it significantly mitigates resource wastage, making it more efficient. The protocol leverages the asynchronous common subset and provable-broadcast mechanisms to achieve a communication complexity of $O(ln^2 + \lambda n^2)$. Despite the trade-off in accepted requests, Slim-ABC maintains robust security by allowing only a fraction ($f+1$) of parties to broadcast requests. We present an extensive efficiency analysis of Slim-ABC, evaluating its performance across key metrics such as message complexity, communication complexity, and time complexity. Additionally, we provide a rigorous security analysis, demonstrating that Slim-ABC satisfies the \textit{agreement}, \textit{validity}, and \textit{totality} properties of the asynchronous common subset protocol.<|reference_end|> | arxiv | @article{sony2024slim-abc:,
title={Slim-ABC: An Optimized Atomic Broadcast Protocol},
author={Nasit S Sony and Xianzhong Ding and Mukesh Singhal},
journal={arXiv preprint arXiv:2410.04268},
year={2024},
archivePrefix={arXiv},
eprint={2410.04268},
primaryClass={cs.DC}
} | sony2024slim-abc: |
arxiv-666120 | 2410.04269 | RoQLlama: A Lightweight Romanian Adapted Language Model | <|reference_start|>RoQLlama: A Lightweight Romanian Adapted Language Model: The remarkable achievements obtained by open-source large language models (LLMs) in recent years have predominantly been concentrated on tasks involving the English language. In this paper, we aim to advance the performance of Llama2 models on Romanian tasks. We tackle the problem of reduced computing resources by using QLoRA for training. We release RoQLlama-7b, a quantized LLM, which shows equal or improved results compared to its full-sized counterpart when tested on seven Romanian downstream tasks in the zero-shot setup. Also, it consistently achieves higher average scores across all few-shot prompts. Additionally, we introduce a novel Romanian dataset, namely RoMedQA, which contains single-choice medical questions in Romanian.<|reference_end|> | arxiv | @article{dima2024roqllama:,
title={RoQLlama: A Lightweight Romanian Adapted Language Model},
author={George-Andrei Dima and Andrei-Marius Avram and Cristian-George
Cr\u{a}ciun and Dumitru-Clementin Cercel},
journal={arXiv preprint arXiv:2410.04269},
year={2024},
archivePrefix={arXiv},
eprint={2410.04269},
primaryClass={cs.CL}
} | dima2024roqllama: |
arxiv-666121 | 2410.04271 | Fundamental Limitations on Subquadratic Alternatives to Transformers | <|reference_start|>Fundamental Limitations on Subquadratic Alternatives to Transformers: The Transformer architecture is widely deployed in many popular and impactful Large Language Models. At its core is the attention mechanism for calculating correlations between pairs of tokens. Performing an attention computation takes quadratic time in the input size, and has become the time bottleneck for transformer operations. In order to circumvent this, researchers have used a variety of approaches, including designing heuristic algorithms for performing attention computations faster, and proposing alternatives to the attention mechanism which can be computed more quickly. For instance, state space models such as Mamba were designed to replace attention with an almost linear time alternative. In this paper, we prove that any such approach cannot perform important tasks that Transformer is able to perform (assuming a popular conjecture from fine-grained complexity theory). We focus on document similarity tasks, where one is given as input many documents and would like to find a pair which is (approximately) the most similar. We prove that Transformer is able to perform this task, and we prove that this task cannot be performed in truly subquadratic time by any algorithm. Thus, any model which can be evaluated in subquadratic time - whether because of subquadratic-time heuristics for attention, faster attention replacements like Mamba, or any other reason - cannot perform this task. In other words, in order to perform tasks that (implicitly or explicitly) involve document similarity, one may as well use Transformer and cannot avoid its quadratic running time.<|reference_end|> | arxiv | @article{alman2024fundamental,
title={Fundamental Limitations on Subquadratic Alternatives to Transformers},
author={Josh Alman and Hantao Yu},
journal={arXiv preprint arXiv:2410.04271},
year={2024},
archivePrefix={arXiv},
eprint={2410.04271},
primaryClass={cs.LG cs.CC cs.CL}
} | alman2024fundamental |
arxiv-666122 | 2410.04272 | Evaluating Language Model Character Traits | <|reference_start|>Evaluating Language Model Character Traits: Language models (LMs) can exhibit human-like behaviour, but it is unclear how to describe this behaviour without undue anthropomorphism. We formalise a behaviourist view of LM character traits: qualities such as truthfulness, sycophancy, or coherent beliefs and intentions, which may manifest as consistent patterns of behaviour. Our theory is grounded in empirical demonstrations of LMs exhibiting different character traits, such as accurate and logically coherent beliefs, and helpful and harmless intentions. We find that the consistency with which LMs exhibit certain character traits varies with model size, fine-tuning, and prompting. In addition to characterising LM character traits, we evaluate how these traits develop over the course of an interaction. We find that traits such as truthfulness and harmfulness can be stationary, i.e., consistent over an interaction, in certain contexts, but may be reflective in different contexts, meaning they mirror the LM's behavior in the preceding interaction. Our formalism enables us to describe LM behaviour precisely in intuitive language, without undue anthropomorphism.<|reference_end|> | arxiv | @article{ward2024evaluating,
title={Evaluating Language Model Character Traits},
author={Francis Rhys Ward and Zejia Yang and Alex Jackson and Randy Brown and
Chandler Smith and Grace Colverd and Louis Thomson and Raymond Douglas and
Patrik Bartak and Andrew Rowan},
journal={arXiv preprint arXiv:2410.04272},
year={2024},
archivePrefix={arXiv},
eprint={2410.04272},
primaryClass={cs.CL}
} | ward2024evaluating |
arxiv-666123 | 2410.04274 | Bosonic Quantum Computational Complexity | <|reference_start|>Bosonic Quantum Computational Complexity: Quantum computing involving physical systems with continuous degrees of freedom, such as the quantum states of light, has recently attracted significant interest. However, a well-defined quantum complexity theory for these bosonic computations over infinite-dimensional Hilbert spaces is missing. In this work, we lay foundations for such a research program. We introduce natural complexity classes and problems based on bosonic generalizations of BQP, the local Hamiltonian problem, and QMA. We uncover several relationships and subtle differences between standard Boolean classical and discrete variable quantum complexity classes and identify outstanding open problems. In particular: 1. We show that the power of quadratic (Gaussian) quantum dynamics is equivalent to the class BQL. More generally, we define classes of continuous-variable quantum polynomial time computations with a bounded probability of error based on higher-degree gates. Due to the infinite dimensional Hilbert space, it is not a priori clear whether a decidable upper bound can be obtained for these classes. We identify complete problems for these classes and demonstrate a BQP lower and EXPSPACE upper bound. We further show that the problem of computing expectation values of polynomial bosonic observables is in PSPACE. 2. We prove that the problem of deciding the boundedness of the spectrum of a bosonic Hamiltonian is co-NP-hard. Furthermore, we show that the problem of finding the minimum energy of a bosonic Hamiltonian critically depends on the non-Gaussian stellar rank of the family of energy-constrained states one optimizes over: for constant stellar rank, it is NP-complete; for polynomially-bounded rank, it is in QMA; for unbounded rank, it is undecidable.<|reference_end|> | arxiv | @article{chabaud2024bosonic,
title={Bosonic Quantum Computational Complexity},
author={Ulysse Chabaud and Michael Joseph and Saeed Mehraban and Arsalan Motamedi},
journal={arXiv preprint arXiv:2410.04274},
year={2024},
archivePrefix={arXiv},
eprint={2410.04274},
primaryClass={quant-ph cs.CC}
} | chabaud2024bosonic |
arxiv-666124 | 2410.04275 | Language Model-Driven Data Pruning Enables Efficient Active Learning | <|reference_start|>Language Model-Driven Data Pruning Enables Efficient Active Learning: Active learning (AL) optimizes data labeling efficiency by selecting the most informative instances for annotation. A key component in this procedure is an acquisition function that guides the selection process and identifies the suitable instances for labeling from the unlabeled pool. However, these acquisition methods suffer from high computational costs with large unlabeled data pools, posing a roadblock to their applicability on large datasets. To address this challenge and bridge this gap, we introduce a novel plug-and-play unlabeled data pruning strategy, ActivePrune, which leverages language models to prune the unlabeled pool. ActivePrune implements a two-stage pruning process: an initial fast evaluation using perplexity scores from an n-gram language model, followed by a high-quality selection using metrics for data quality computed through a quantized LLM. Additionally, to enhance the diversity in the unlabeled pool, we propose a novel perplexity reweighting method that systematically brings forward underrepresented instances for selection in subsequent labeling iterations. Experiments on translation, sentiment analysis, topic classification, and summarization tasks on four diverse datasets and four active learning strategies demonstrate that ActivePrune outperforms existing data pruning methods. Finally, we compare the selection quality $\leftrightarrow$ efficiency tradeoff of the data pruning methods and demonstrate that ActivePrune is computationally more efficient than other LLM score-based pruning methods, and provides up to 74% reduction in the end-to-end time required for active learning.<|reference_end|> | arxiv | @article{azeemi2024language,
title={Language Model-Driven Data Pruning Enables Efficient Active Learning},
author={Abdul Hameed Azeemi and Ihsan Ayyub Qazi and Agha Ali Raza},
journal={arXiv preprint arXiv:2410.04275},
year={2024},
archivePrefix={arXiv},
eprint={2410.04275},
primaryClass={cs.LG cs.CL}
} | azeemi2024language |
arxiv-666125 | 2410.04277 | Mechanistic Behavior Editing of Language Models | <|reference_start|>Mechanistic Behavior Editing of Language Models: Large Language Models trained on web-scale text acquire language generation abilities that can solve a wide range of tasks, particularly when task knowledge is refined into the generative prior using in-context examples. However, spurious features learned from noisy data hinder their generalizability. Supervised finetuning can introduce task specificity, but introduces data inefficiency. Prior studies indicate that (i) noisy neural circuitries coexist with generalizable ones within LLMs, and (ii) finetuning typically enhances (or suppresses) existing abilities without introducing newer ones. Building upon these, we propose TaRot, a novel method for task adaptation. TaRot intervenes in the neural circuitries using learnable rotation matrices that are optimized using Bayesian Optimization, on labelled samples in the order of standard few-shot prompting examples. Experiments on multiple classification and generation tasks using LLMs of varying sizes reveal the efficacy of TaRot, improving upon both zero- as well as few-shot performance, with average improvements (across models and tasks) of 23.81% and 11.15%, respectively. The source code is available at https://github.com/joykirat18/TaRot<|reference_end|> | arxiv | @article{singh2024mechanistic,
title={Mechanistic Behavior Editing of Language Models},
author={Joykirat Singh and Subhabrata Dutta and Tanmoy Chakraborty},
journal={arXiv preprint arXiv:2410.04277},
year={2024},
archivePrefix={arXiv},
eprint={2410.04277},
primaryClass={cs.CL cs.AI}
} | singh2024mechanistic |
arxiv-666126 | 2410.04279 | Black Boxes and Looking Glasses: Multilevel Symmetries, Reflection Planes, and Convex Optimization in Deep Networks | <|reference_start|>Black Boxes and Looking Glasses: Multilevel Symmetries, Reflection Planes, and Convex Optimization in Deep Networks: We show that training deep neural networks (DNNs) with absolute value activation and arbitrary input dimension can be formulated as equivalent convex Lasso problems with novel features expressed using geometric algebra. This formulation reveals geometric structures encoding symmetry in neural networks. Using the equivalent Lasso form of DNNs, we formally prove a fundamental distinction between deep and shallow networks: deep networks inherently favor symmetric structures in their fitted functions, with greater depth enabling multilevel symmetries, i.e., symmetries within symmetries. Moreover, Lasso features represent distances to hyperplanes that are reflected across training points. These reflection hyperplanes are spanned by training data and are orthogonal to optimal weight vectors. Numerical experiments support theory and demonstrate theoretically predicted features when training networks using embeddings generated by Large Language Models.<|reference_end|> | arxiv | @article{zeger2024black,
title={Black Boxes and Looking Glasses: Multilevel Symmetries, Reflection
Planes, and Convex Optimization in Deep Networks},
author={Emi Zeger and Mert Pilanci},
journal={arXiv preprint arXiv:2410.04279},
year={2024},
archivePrefix={arXiv},
eprint={2410.04279},
primaryClass={cs.LG stat.ML}
} | zeger2024black |
arxiv-666127 | 2410.04280 | The Visualization JUDGE : Can Multimodal Foundation Models Guide Visualization Design Through Visual Perception? | <|reference_start|>The Visualization JUDGE : Can Multimodal Foundation Models Guide Visualization Design Through Visual Perception?: Foundation models for vision and language are the basis of AI applications across numerous sectors of society. The success of these models stems from their ability to mimic human capabilities, namely visual perception in vision models, and analytical reasoning in large language models. As visual perception and analysis are fundamental to data visualization, in this position paper we ask: how can we harness foundation models to advance progress in visualization design? Specifically, how can multimodal foundation models (MFMs) guide visualization design through visual perception? We approach these questions by investigating the effectiveness of MFMs for perceiving visualization, and formalizing the overall visualization design and optimization space. Specifically, we think that MFMs can best be viewed as judges, equipped with the ability to criticize visualizations, and provide us with actions on how to improve a visualization. We provide a deeper characterization for text-to-image generative models, and multi-modal large language models, organized by what these models provide as output, and how to utilize the output for guiding design decisions. We hope that our perspective can inspire researchers in visualization on how to approach MFMs for visualization design.<|reference_end|> | arxiv | @article{berger2024the,
title={The Visualization JUDGE : Can Multimodal Foundation Models Guide
Visualization Design Through Visual Perception?},
author={Matthew Berger and Shusen Liu},
journal={arXiv preprint arXiv:2410.04280},
year={2024},
archivePrefix={arXiv},
eprint={2410.04280},
primaryClass={cs.HC}
} | berger2024the |
arxiv-666128 | 2410.04281 | Age of Synchronization Minimization in Wireless Networks with Random Updates and Time-Varying Timeliness Requirement | <|reference_start|>Age of Synchronization Minimization in Wireless Networks with Random Updates and Time-Varying Timeliness Requirement: This study considers a wireless network where multiple nodes transmit status updates to a base station (BS) via a shared, error-free channel with limited bandwidth. The status updates arrive at each node randomly. We use the Age of Synchronization (AoS) as a metric to measure the information freshness of the updates. The AoS of each node has a time-varying importance which follows a Markov chain. Our objective is to minimize the weighted sum AoS of the system. The optimization problem is relaxed and formulated as a constrained Markov decision process (CMDP). Solving the relaxed CMDP by a linear programming algorithm yields a stationary policy, which helps us propose a near-stationary policy for the original problem. Numerical simulations show that in most configurations, the AoS performance of our policy outperforms the policy choosing the maximum AoS regardless of weight variations.<|reference_end|> | arxiv | @article{he2024age,
title={Age of Synchronization Minimization in Wireless Networks with Random
Updates and Time-Varying Timeliness Requirement},
author={Yuqiao He and Yuchao Chen and Jintao Wang and Jian Song},
journal={arXiv preprint arXiv:2410.04281},
year={2024},
archivePrefix={arXiv},
eprint={2410.04281},
primaryClass={cs.IT math.IT}
} | he2024age |
arxiv-666129 | 2410.04282 | Locating Information Gaps and Narrative Inconsistencies Across Languages: A Case Study of LGBT People Portrayals on Wikipedia | <|reference_start|>Locating Information Gaps and Narrative Inconsistencies Across Languages: A Case Study of LGBT People Portrayals on Wikipedia: To explain social phenomena and identify systematic biases, much research in computational social science focuses on comparative text analyses. These studies often rely on coarse corpus-level statistics or local word-level analyses, mainly in English. We introduce the InfoGap method -- an efficient and reliable approach to locating information gaps and inconsistencies in articles at the fact level, across languages. We evaluate InfoGap by analyzing LGBT people's portrayals, across 2.7K biography pages on English, Russian, and French Wikipedias. We find large discrepancies in factual coverage across the languages. Moreover, our analysis reveals that biographical facts carrying negative connotations are more likely to be highlighted in Russian Wikipedia. Crucially, InfoGap both facilitates large scale analyses, and pinpoints local document- and fact-level information gaps, laying a new foundation for targeted and nuanced comparative language analysis at scale.<|reference_end|> | arxiv | @article{samir2024locating,
title={Locating Information Gaps and Narrative Inconsistencies Across
Languages: A Case Study of LGBT People Portrayals on Wikipedia},
author={Farhan Samir and Chan Young Park and Anjalie Field and Vered Shwartz
and Yulia Tsvetkov},
journal={arXiv preprint arXiv:2410.04282},
year={2024},
archivePrefix={arXiv},
eprint={2410.04282},
primaryClass={cs.CL}
} | samir2024locating |
arxiv-666130 | 2410.04283 | Applying Hybrid Graph Neural Networks to Strengthen Credit Risk Analysis | <|reference_start|>Applying Hybrid Graph Neural Networks to Strengthen Credit Risk Analysis: This paper presents a novel approach to credit risk prediction by employing Graph Convolutional Neural Networks (GCNNs) to assess the creditworthiness of borrowers. Leveraging the power of big data and artificial intelligence, the proposed method addresses the challenges faced by traditional credit risk assessment models, particularly in handling imbalanced datasets and extracting meaningful features from complex relationships. The paper begins by transforming raw borrower data into graph-structured data, where borrowers and their relationships are represented as nodes and edges, respectively. A classic subgraph convolutional model is then applied to extract local features, followed by the introduction of a hybrid GCNN model that integrates both local and global convolutional operators to capture a comprehensive representation of node features. The hybrid model incorporates an attention mechanism to adaptively select features, mitigating issues of over-smoothing and insufficient feature consideration. The study demonstrates the potential of GCNNs in improving the accuracy of credit risk prediction, offering a robust solution for financial institutions seeking to enhance their lending decision-making processes.<|reference_end|> | arxiv | @article{sun2024applying,
title={Applying Hybrid Graph Neural Networks to Strengthen Credit Risk Analysis},
author={Mengfang Sun and Wenying Sun and Ying Sun and Shaobo Liu and Mohan Jiang and Zhen Xu},
journal={arXiv preprint arXiv:2410.04283},
year={2024},
archivePrefix={arXiv},
eprint={2410.04283},
primaryClass={cs.LG}
} | sun2024applying |
arxiv-666131 | 2410.04285 | MindFlayer: Efficient Asynchronous Parallel SGD in the Presence of Heterogeneous and Random Worker Compute Times | <|reference_start|>MindFlayer: Efficient Asynchronous Parallel SGD in the Presence of Heterogeneous and Random Worker Compute Times: We study the problem of minimizing the expectation of smooth nonconvex functions with the help of several parallel workers whose role is to compute stochastic gradients. In particular, we focus on the challenging situation where the workers' compute times are arbitrarily heterogeneous and random. In the simpler regime characterized by arbitrarily heterogeneous but deterministic compute times, Tyurin and Richt\'arik (NeurIPS 2023) recently designed the first theoretically optimal asynchronous SGD method, called Rennala SGD, in terms of a novel complexity notion called time complexity. The starting point of our work is the observation that Rennala SGD can have arbitrarily bad performance in the presence of random compute times -- a setting it was not designed to handle. To advance our understanding of stochastic optimization in this challenging regime, we propose a new asynchronous SGD method, for which we coin the name MindFlayer SGD. Our theory and empirical results demonstrate the superiority of MindFlayer SGD over existing baselines, including Rennala SGD, in cases when the noise is heavy tailed.<|reference_end|> | arxiv | @article{maranjyan2024mindflayer:,
title={MindFlayer: Efficient Asynchronous Parallel SGD in the Presence of
Heterogeneous and Random Worker Compute Times},
author={Artavazd Maranjyan, Omar Shaikh Omar, Peter Richt\'arik},
journal={arXiv preprint arXiv:2410.04285},
year={2024},
archivePrefix={arXiv},
eprint={2410.04285},
primaryClass={math.OC cs.DC cs.LG stat.ML}
} | maranjyan2024mindflayer: |
arxiv-666132 | 2410.04286 | Open Science Practices by Early Career HCI Researchers: Perceptions, Challenges, and Benefits | <|reference_start|>Open Science Practices by Early Career HCI Researchers: Perceptions, Challenges, and Benefits: Many fields of science, including Human-Computer Interaction (HCI), have heightened introspection in the wake of concerns around reproducibility and replicability of published findings. Notably, in recent years the HCI community has worked to implement policy changes and mainstream open science practices. Our work investigates early-career HCI researchers' perceptions of open science and engagement with best practices through 18 semi-structured interviews. Our findings highlight key barriers to the widespread adoption of data and materials sharing, and preregistration, namely: lack of clear incentives; cultural resistance; limited training; time constraints; concerns about intellectual property; and data privacy issues. We observe that small changes at major conferences like CHI could meaningfully impact community norms. We offer recommendations to address these barriers and to promote transparency and openness in HCI.<|reference_end|> | arxiv | @article{chakravorti2024open,
title={Open Science Practices by Early Career HCI Researchers: Perceptions,
Challenges, and Benefits},
author={Tatiana Chakravorti, Sanjana Gautam, Priya Silverstein, Sarah M.
Rajtmajer},
journal={arXiv preprint arXiv:2410.04286},
year={2024},
archivePrefix={arXiv},
eprint={2410.04286},
primaryClass={cs.HC}
} | chakravorti2024open |
arxiv-666133 | 2410.04287 | Unveiling the Impact of Local Homophily on GNN Fairness: In-Depth Analysis and New Benchmarks | <|reference_start|>Unveiling the Impact of Local Homophily on GNN Fairness: In-Depth Analysis and New Benchmarks: Graph Neural Networks (GNNs) often struggle to generalize when graphs exhibit both homophily (same-class connections) and heterophily (different-class connections). Specifically, GNNs tend to underperform for nodes with local homophily levels that differ significantly from the global homophily level. This issue poses a risk in user-centric applications where underrepresented homophily levels are present. Concurrently, fairness within GNNs has received substantial attention due to the potential amplification of biases via message passing. However, the connection between local homophily and fairness in GNNs remains underexplored. In this work, we move beyond global homophily and explore how local homophily levels can lead to unfair predictions. We begin by formalizing the challenge of fair predictions for underrepresented homophily levels as an out-of-distribution (OOD) problem. We then conduct a theoretical analysis that demonstrates how local homophily levels can alter predictions for differing sensitive attributes. We additionally introduce three new GNN fairness benchmarks, as well as a novel semi-synthetic graph generator, to empirically study the OOD problem. Across extensive analysis we find that two factors can promote unfairness: (a) OOD distance, and (b) heterophilous nodes situated in homophilous graphs. In cases where these two conditions are met, fairness drops by up to 24% on real world datasets, and 30% in semi-synthetic datasets. Together, our theoretical insights, empirical analysis, and algorithmic contributions unveil a previously overlooked source of unfairness rooted in the graph's homophily information.<|reference_end|> | arxiv | @article{loveland2024unveiling,
title={Unveiling the Impact of Local Homophily on GNN Fairness: In-Depth
Analysis and New Benchmarks},
author={Donald Loveland, Danai Koutra},
journal={arXiv preprint arXiv:2410.04287},
year={2024},
archivePrefix={arXiv},
eprint={2410.04287},
primaryClass={cs.LG cs.CY cs.SI}
} | loveland2024unveiling |
arxiv-666134 | 2410.04288 | Enhancing Carbon Emission Reduction Strategies using OCO and ICOS data | <|reference_start|>Enhancing Carbon Emission Reduction Strategies using OCO and ICOS data: We propose a methodology to enhance local CO2 monitoring by integrating satellite data from the Orbiting Carbon Observatories (OCO-2 and OCO-3) with ground level observations from the Integrated Carbon Observation System (ICOS) and weather data from the ECMWF Reanalysis v5 (ERA5). Unlike traditional methods that downsample national data, our approach uses multimodal data fusion for high-resolution CO2 estimations. We employ weighted K-nearest neighbor (KNN) interpolation with machine learning models to predict ground level CO2 from satellite measurements, achieving a Root Mean Squared Error of 3.92 ppm. Our results show the effectiveness of integrating diverse data sources in capturing local emission patterns, highlighting the value of high-resolution atmospheric transport models. The developed model improves the granularity of CO2 monitoring, providing precise insights for targeted carbon mitigation strategies, and represents a novel application of neural networks and KNN in environmental monitoring, adaptable to various regions and temporal scales.<|reference_end|> | arxiv | @article{åström2024enhancing,
title={Enhancing Carbon Emission Reduction Strategies using OCO and ICOS data},
author={Oskar {\AA}str\"om, Carina Geldhauser, Markus Grillitsch, Ola Hall,
Alexandros Sopasakis},
journal={arXiv preprint arXiv:2410.04288},
year={2024},
archivePrefix={arXiv},
eprint={2410.04288},
primaryClass={cs.LG}
} | åström2024enhancing |
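The record above (arXiv:2410.04288) hinges on a weighted K-nearest-neighbor interpolation from satellite and weather features to ground-level CO2. A minimal sketch of that step follows; the inverse-distance weighting, k=5, and the synthetic stand-in features are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def knn_interpolate(train_X, train_y, query_X, k=5, eps=1e-9):
    """Inverse-distance-weighted KNN regression (illustrative sketch).

    train_X: (n, d) features, e.g. satellite XCO2 plus ERA5 weather columns
    train_y: (n,) ground-level CO2 from ICOS stations (ppm)
    query_X: (m, d) points where ground-level CO2 is estimated
    """
    preds = np.empty(len(query_X))
    for i, q in enumerate(query_X):
        d = np.linalg.norm(train_X - q, axis=1)      # distances to all stations
        nn = np.argsort(d)[:k]                       # k nearest neighbors
        w = 1.0 / (d[nn] + eps)                      # inverse-distance weights
        preds[i] = np.dot(w, train_y[nn]) / w.sum()  # weighted average
    return preds

# Toy usage with random stand-ins for the real OCO-2/ICOS/ERA5 features.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 4)), 410 + rng.normal(size=200)
y_hat = knn_interpolate(X[:150], y[:150], X[150:])
rmse = np.sqrt(np.mean((y_hat - y[150:]) ** 2))      # paper reports 3.92 ppm
print(f"RMSE: {rmse:.2f} ppm")
```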
arxiv-666135 | 2410.04289 | Self-Supervised Anomaly Detection in the Wild: Favor Joint Embeddings Methods | <|reference_start|>Self-Supervised Anomaly Detection in the Wild: Favor Joint Embeddings Methods: Accurate anomaly detection is critical in vision-based infrastructure inspection, where it helps prevent costly failures and enhances safety. Self-Supervised Learning (SSL) offers a promising approach by learning robust representations from unlabeled data. However, its application in anomaly detection remains underexplored. This paper addresses this gap by providing a comprehensive evaluation of SSL methods for real-world anomaly detection, focusing on sewer infrastructure. Using the Sewer-ML dataset, we evaluate lightweight models such as ViT-Tiny and ResNet-18 across SSL frameworks, including BYOL, Barlow Twins, SimCLR, DINO, and MAE, under varying class imbalance levels. Through 250 experiments, we rigorously assess the performance of these SSL methods to ensure a robust and comprehensive evaluation. Our findings highlight the superiority of joint-embedding methods like SimCLR and Barlow Twins over reconstruction-based approaches such as MAE, which struggle to maintain performance under class imbalance. Furthermore, we find that the SSL model choice is more critical than the backbone architecture. Additionally, we emphasize the need for better label-free assessments of SSL representations, as current methods like RankMe fail to adequately evaluate representation quality, making cross-validation without labels infeasible. Despite the remaining performance gap between SSL and supervised models, these findings highlight the potential of SSL to enhance anomaly detection, paving the way for further research in this underexplored area of SSL applications.<|reference_end|> | arxiv | @article{otero2024self-supervised,
title={Self-Supervised Anomaly Detection in the Wild: Favor Joint Embeddings
Methods},
author={Daniel Otero, Rafael Mateus, and Randall Balestriero},
journal={arXiv preprint arXiv:2410.04289},
year={2024},
archivePrefix={arXiv},
eprint={2410.04289},
primaryClass={cs.CV cs.AI cs.LG}
} | otero2024self-supervised |
arxiv-666136 | 2410.04292 | Efficiently Identifying Low-Quality Language Subsets in Multilingual Datasets: A Case Study on a Large-Scale Multilingual Audio Dataset | <|reference_start|>Efficiently Identifying Low-Quality Language Subsets in Multilingual Datasets: A Case Study on a Large-Scale Multilingual Audio Dataset: Curating datasets that span multiple languages is challenging. To make the collection more scalable, researchers often incorporate one or more imperfect classifiers in the process, like language identification models. These models, however, are prone to failure, resulting in some language subsets being unreliable for downstream tasks. We introduce a statistical test, the Preference Proportion Test, for identifying such unreliable subsets. By annotating only 20 samples for a language subset, we're able to identify systematic transcription errors for 10 language subsets in a recent large multilingual transcribed audio dataset, X-IPAPack (Zhu et al., 2024). We find that filtering this low-quality data out when training models for the downstream task of phonetic transcription brings substantial benefits, most notably a 25.7% relative improvement on transcribing recordings in out-of-distribution languages. Our method lays a path forward for systematic and reliable multilingual dataset auditing.<|reference_end|> | arxiv | @article{samir2024efficiently,
title={Efficiently Identifying Low-Quality Language Subsets in Multilingual
Datasets: A Case Study on a Large-Scale Multilingual Audio Dataset},
author={Farhan Samir, Emily P. Ahn, Shreya Prakash, M\'arton Soskuthy, Vered
Shwartz, Jian Zhu},
journal={arXiv preprint arXiv:2410.04292},
year={2024},
archivePrefix={arXiv},
eprint={2410.04292},
primaryClass={cs.CL}
} | samir2024efficiently |
arxiv-666137 | 2410.04294 | Spectral Densities, Structured Noise and Ensemble Averaging within Open Quantum Dynamics | <|reference_start|>Spectral Densities, Structured Noise and Ensemble Averaging within Open Quantum Dynamics: Although recent advances in simulating open quantum systems have lead to significant progress, the applicability of numerically exact methods is still restricted to rather small systems. Hence, more approximate methods remain relevant due to their computational efficiency, enabling simulations of larger systems over extended timescales. In this study, we present advances for one such method, namely the Numerical Integration of Schr\"odinger Equation (NISE). Firstly, we introduce a modified ensemble-averaging procedure that improves the long-time behavior of the thermalized variant of the NISE scheme, termed Thermalized NISE. Secondly, we demonstrate how to use the NISE in conjunction with (highly) structured spectral densities by utilizing a noise generating algorithm for arbitrary structured noise. This algorithm also serves as a tool for establishing best practices in determining spectral densities from excited state calculations along molecular dynamics or quantum mechanics/molecular mechanics trajectories. Finally, we assess the ability of the NISE approach to calculate absorption spectra and demonstrate the utility of the proposed modifications by determining population dynamics.<|reference_end|> | arxiv | @article{holtkamp2024spectral,
title={Spectral Densities, Structured Noise and Ensemble Averaging within Open
Quantum Dynamics},
author={Yannick Marcel Holtkamp, Emiliano Godinez-Ramirez and Ulrich
Kleinekath\"ofer},
journal={J. Chem. Phys. 161, 134101 (2024)},
year={2024},
doi={10.1063/5.0224807},
archivePrefix={arXiv},
eprint={2410.04294},
primaryClass={quant-ph cs.LG physics.chem-ph physics.comp-ph}
} | holtkamp2024spectral |
arxiv-666138 | 2410.04297 | Bootstrap Sampling Rate Greater than 1.0 May Improve Random Forest Performance | <|reference_start|>Bootstrap Sampling Rate Greater than 1.0 May Improve Random Forest Performance: Random forests utilize bootstrap sampling to create an individual training set for each component tree. This involves sampling with replacement, with the number of instances equal to the size of the original training set ($N$). Research literature indicates that drawing fewer than $N$ observations can also yield satisfactory results. The ratio of the number of observations in each bootstrap sample to the total number of training instances is called the bootstrap rate (BR). Sampling more than $N$ observations (BR $>$ 1) has been explored in the literature only to a limited extent and has generally proven ineffective. In this paper, we re-examine this approach using 36 diverse datasets and consider BR values ranging from 1.2 to 5.0. Contrary to previous findings, we show that such parameterization can result in statistically significant improvements in classification accuracy compared to standard settings (BR $\leq$ 1). Furthermore, we investigate what the optimal BR depends on and conclude that it is more a property of the dataset than a dependence on the random forest hyperparameters. Finally, we develop a binary classifier to predict whether the optimal BR is $\leq$ 1 or $>$ 1 for a given dataset, achieving between 81.88\% and 88.81\% accuracy, depending on the experiment configuration.<|reference_end|> | arxiv | @article{kaźmierczak2024bootstrap,
title={Bootstrap Sampling Rate Greater than 1.0 May Improve Random Forest
Performance},
author={Stanis{\l}aw Ka\'zmierczak and Jacek Ma\'ndziuk},
journal={arXiv preprint arXiv:2410.04297},
year={2024},
archivePrefix={arXiv},
eprint={2410.04297},
primaryClass={cs.LG stat.ML}
} | kaźmierczak2024bootstrap |
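The procedure studied in the record above (arXiv:2410.04297) is easy to reproduce in miniature: draw int(BR * N) samples with replacement for each tree, allowing BR to exceed 1.0. The sketch below is an illustrative re-implementation on a stock dataset, not the paper's experimental setup; the tree count, max_features choice, and dataset are arbitrary.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def fit_forest(X, y, n_trees=100, bootstrap_rate=2.0, seed=0):
    """Random forest whose trees each see int(BR * N) bootstrap samples (BR may exceed 1)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    m = int(bootstrap_rate * n)
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, n, size=m)              # sampling with replacement
        t = DecisionTreeClassifier(max_features="sqrt",
                                   random_state=int(rng.integers(1 << 31)))
        trees.append(t.fit(X[idx], y[idx]))
    return trees

def predict(trees, X):
    votes = np.stack([t.predict(X) for t in trees])   # (n_trees, n_samples)
    return np.round(votes.mean(axis=0))               # majority vote over binary labels

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for br in (1.0, 2.0, 5.0):
    acc = (predict(fit_forest(Xtr, ytr, bootstrap_rate=br), Xte) == yte).mean()
    print(f"BR={br}: test accuracy {acc:.3f}")
```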
arxiv-666139 | 2410.04298 | Test-Time Adaptation for Keypoint-Based Spacecraft Pose Estimation Based on Predicted-View Synthesis | <|reference_start|>Test-Time Adaptation for Keypoint-Based Spacecraft Pose Estimation Based on Predicted-View Synthesis: Due to the difficulty of replicating the real conditions during training, supervised algorithms for spacecraft pose estimation experience a drop in performance when trained on synthetic data and applied to real operational data. To address this issue, we propose a test-time adaptation approach that leverages the temporal redundancy between images acquired during close proximity operations. Our approach involves extracting features from sequential spacecraft images, estimating their poses, and then using this information to synthesise a reconstructed view. We establish a self-supervised learning objective by comparing the synthesised view with the actual one. During training, we supervise both pose estimation and image synthesis, while at test-time, we optimise the self-supervised objective. Additionally, we introduce a regularisation loss to prevent solutions that are not consistent with the keypoint structure of the spacecraft. Our code is available at: https://github.com/JotaBravo/spacecraft-tta.<|reference_end|> | arxiv | @article{pérez-villar2024test-time,
title={Test-Time Adaptation for Keypoint-Based Spacecraft Pose Estimation Based
on Predicted-View Synthesis},
author={Juan Ignacio Bravo P\'erez-Villar, \'Alvaro Garc\'ia-Mart\'in, Jes\'us
Besc\'os, Juan C. SanMiguel},
journal={IEEE Transactions on Aerospace and Electronic Systems (2024)},
year={2024},
doi={10.1109/TAES.2024.3410956},
archivePrefix={arXiv},
eprint={2410.04298},
primaryClass={cs.CV}
} | pérez-villar2024test-time |
arxiv-666140 | 2410.04299 | Integrating Physics-Informed Deep Learning and Numerical Methods for Robust Dynamics Discovery and Parameter Estimation | <|reference_start|>Integrating Physics-Informed Deep Learning and Numerical Methods for Robust Dynamics Discovery and Parameter Estimation: Incorporating a priori physics knowledge into machine learning leads to more robust and interpretable algorithms. In this work, we combine deep learning techniques and classic numerical methods for differential equations to solve two challenging problems in dynamical systems theory: dynamics discovery and parameter estimation. Results demonstrate the effectiveness of the proposed approaches on a suite of test problems exhibiting oscillatory and chaotic dynamics. When comparing the performance of various numerical schemes, such as the Runge-Kutta and linear multistep families of methods, we observe promising results in predicting the system dynamics and estimating physical parameters, given appropriate choices of spatial and temporal discretization schemes and numerical method orders.<|reference_end|> | arxiv | @article{ho2024integrating,
title={Integrating Physics-Informed Deep Learning and Numerical Methods for
Robust Dynamics Discovery and Parameter Estimation},
author={Caitlin Ho, Andrea Arnold},
journal={arXiv preprint arXiv:2410.04299},
year={2024},
archivePrefix={arXiv},
eprint={2410.04299},
primaryClass={cs.LG cs.NA math.DS math.NA}
} | ho2024integrating |
arxiv-666141 | 2410.04300 | Decentralized Equitable Energy Access in Energy Communities | <|reference_start|>Decentralized Equitable Energy Access in Energy Communities: We address the issue of equitable energy access within an energy community consisting of members with diverse socioeconomic backgrounds, including varying income levels and differing capacities to access distributed energy resources such as solar power and storage systems. While optimal energy consumption scheduling is well-studied, integrating equity into decentralized real-time energy access remains under-explored. This paper formulates Equity-regarding Welfare Maximization (EqWM)--a welfare optimization energy scheduling subject to equity constraints. We further develop a decentralized implementation (D-EqWM) as a bi-level optimization, where a non-profit operator designs a community pricing policy aimed at maximizing overall welfare, subject to constraints that ensure equitable access. Community members, in turn, optimize their individual consumption based on these prices. We present the optimal pricing policy along with its key properties.<|reference_end|> | arxiv | @article{li2024decentralized,
title={Decentralized Equitable Energy Access in Energy Communities},
author={Siying Li, Timothy Douglas Mount, Lang Tong},
journal={arXiv preprint arXiv:2410.04300},
year={2024},
archivePrefix={arXiv},
eprint={2410.04300},
primaryClass={eess.SY cs.SY}
} | li2024decentralized |
arxiv-666142 | 2410.04301 | Coalescing Force of Group Pressure: Consensus in Nonlinear Opinion Dynamics | <|reference_start|>Coalescing Force of Group Pressure: Consensus in Nonlinear Opinion Dynamics: This work extends the recent opinion dynamics model from Cheng et al., emphasizing the role of group pressure in consensus formation. We generalize the findings to incorporate social influence algorithms with general time-varying, opinion-dependent weights and multidimensional opinions, beyond bounded confidence dynamics. We demonstrate that, with uniformly positive conformity levels, group pressure consistently drives consensus and provide a tighter estimate for the convergence rate. Unlike previous models, the common public opinion in our framework can assume arbitrary forms within the convex hull of current opinions, offering flexibility applicable to real-world scenarios such as opinion polls with random participant selection. This analysis provides deeper insights into how group pressure mechanisms foster consensus under diverse conditions.<|reference_end|> | arxiv | @article{zabarianska2024coalescing,
title={Coalescing Force of Group Pressure: Consensus in Nonlinear Opinion
Dynamics},
author={Iryna Zabarianska and Anton V. Proskurnikov},
journal={arXiv preprint arXiv:2410.04301},
year={2024},
archivePrefix={arXiv},
eprint={2410.04301},
primaryClass={physics.soc-ph cs.MA cs.SY eess.SY math.DS math.OC}
} | zabarianska2024coalescing |
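The abstract above (arXiv:2410.04301) does not spell out the update rule, but models in this family typically mix neighbor averaging with attraction to a shared public opinion, e.g. x_i(t+1) = (1 - c_i) * sum_j w_ij(t) x_j(t) + c_i * s(t), with conformity c_i > 0 and s(t) in the convex hull of current opinions. The simulation below is an invented instance of that template, using bounded-confidence weights and a poll-style s(t), meant only to show the coalescing effect, not the paper's exact dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 30, 200
x = rng.uniform(-1, 1, size=n)            # initial scalar opinions
c = rng.uniform(0.05, 0.3, size=n)        # uniformly positive conformity levels

for t in range(T):
    # Row-stochastic, opinion-dependent weights (bounded-confidence style).
    W = (np.abs(x[:, None] - x[None, :]) < 0.4).astype(float)
    W /= W.sum(axis=1, keepdims=True)     # rows are nonzero: each agent sees itself
    # Public opinion: a poll of a random subset -> stays in the convex hull.
    poll = rng.choice(n, size=5, replace=False)
    s = x[poll].mean()
    x = (1 - c) * (W @ x) + c * s         # group pressure pulls toward s

print(np.ptp(x))                          # opinion spread shrinks toward consensus
```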
arxiv-666143 | 2410.04302 | PANav: Toward Privacy-Aware Robot Navigation via Vision-Language Models | <|reference_start|>PANav: Toward Privacy-Aware Robot Navigation via Vision-Language Models: Navigating robots discreetly in human work environments while considering the possible privacy implications of robotic tasks presents significant challenges. Such scenarios are increasingly common, for instance, when robots transport sensitive objects that demand high levels of privacy in spaces crowded with human activities. While extensive research has been conducted on robotic path planning and social awareness, current robotic systems still lack the functionality of privacy-aware navigation in public environments. To address this, we propose a new framework for mobile robot navigation that leverages vision-language models to incorporate privacy awareness into adaptive path planning. Specifically, all potential paths from the starting point to the destination are generated using the A* algorithm. Concurrently, the vision-language model is used to infer the optimal path for privacy-awareness, given the environmental layout and the navigational instruction. This approach aims to minimize the robot's exposure to human activities and preserve the privacy of the robot and its surroundings. Experimental results on the S3DIS dataset demonstrate that our framework significantly enhances mobile robots' privacy awareness of navigation in human-shared public environments. Furthermore, we demonstrate the practical applicability of our framework by successfully navigating a robotic platform through real-world office environments. The supplementary video and code can be accessed via the following link: https://sites.google.com/view/privacy-aware-nav.<|reference_end|> | arxiv | @article{yu2024panav:,
title={PANav: Toward Privacy-Aware Robot Navigation via Vision-Language Models},
author={Bangguo Yu, Hamidreza Kasaei, Ming Cao},
journal={arXiv preprint arXiv:2410.04302},
year={2024},
archivePrefix={arXiv},
eprint={2410.04302},
primaryClass={cs.RO}
} | yu2024panav: |
arxiv-666144 | 2410.04304 | Robotics Meets Software Engineering: A First Look at the Robotics Discussions on Stackoverflow | <|reference_start|>Robotics Meets Software Engineering: A First Look at the Robotics Discussions on Stackoverflow: Robots can greatly enhance human capabilities, yet their development presents a range of challenges. This collaborative study, conducted by a team of software engineering and robotics researchers, seeks to identify the challenges encountered by robot developers by analyzing questions posted on StackOverflow. We created a filtered dataset of 500 robotics-related questions and examined their characteristics, comparing them with randomly selected questions from the platform. Our findings indicate that the small size of the robotics community limits the visibility of these questions, resulting in fewer responses. While the number of robotics questions has been steadily increasing, they remain less popular than the average question and answer on StackOverflow. This underscores the importance of research that focuses on the challenges faced by robotics practitioners. Consequently, we conducted a thematic analysis of the 500 robotics questions to uncover common inquiry patterns. We identified 11 major themes, with questions about robot movement being the most frequent. Our analysis of yearly trends revealed that certain themes, such as Specifications, were prominent from 2009 to 2014 but have since diminished in relevance. In contrast, themes like Moving, Actuator, and Remote have consistently dominated discussions over the years. These findings suggest that challenges in robotics may vary over time. Notably, the majority of robotics questions are framed as How questions, rather than Why or What questions, revealing the lack of enough resources for the practitioners. These insights can help guide researchers and educators in developing effective and timely educational materials for robotics practitioners.<|reference_end|> | arxiv | @article{kidwai2024robotics,
title={Robotics Meets Software Engineering: A First Look at the Robotics
Discussions on Stackoverflow},
author={Hisham Kidwai, Danika Passler Bates, Sujana Islam Suhi, James Young,
Shaiful Chowdhury},
journal={arXiv preprint arXiv:2410.04304},
year={2024},
archivePrefix={arXiv},
eprint={2410.04304},
primaryClass={cs.SE}
} | kidwai2024robotics |
arxiv-666145 | 2410.04309 | Discovering Hidden Pollution Hotspots Using Sparse Sensor Measurements | <|reference_start|>Discovering Hidden Pollution Hotspots Using Sparse Sensor Measurements: Effective air pollution management in urban areas relies on both monitoring and mitigation strategies, yet high costs often limit sensor networks to a few key pollution hotspots. In this paper, we show that New Delhi's public sensor network is insufficient for identifying all pollution hotspots. To address this, we augmented the city's network with 28 low-cost sensors, monitoring PM 2.5 concentrations over 30 months (May 2018 to November 2020). Our analysis uncovered 189 additional hotspots, supplementing the 660 already detected by the government network. We observed that Space-Time Kriging with limited but accurate sensor data provides a more robust and generalizable approach for identifying these hotspots, as compared to deep learning models that require large amounts of fine-grained multi-modal data (emissions inventory, meteorology, etc.) which was not reliably, frequently and accurately available in the New Delhi context. Using Space-Time Kriging, we achieved 98% precision and 95.4% recall in detecting hotspots with 50% sensor failure. Furthermore, this method proved effective in predicting hotspots in areas without sensors, achieving 95.3% precision and 88.5% recall in the case of 50% missing sensors. Our findings revealed that a significant portion of New Delhi's population, around 23 million people, was exposed to pollution hotspots for at least half of the study period. We also identified areas beyond the reach of the public sensor network that should be prioritized for pollution control. These results highlight the need for more comprehensive monitoring networks and suggest Space-Time Kriging as a viable solution for cities facing similar resource constraints.<|reference_end|> | arxiv | @article{bhardwaj2024discovering,
title={Discovering Hidden Pollution Hotspots Using Sparse Sensor Measurements},
author={Ankit Bhardwaj, Ananth Balashankar, Shiva Iyer, Nita Soans, Anant
Sudarshan, Rohini Pande, Lakshminarayanan Subramanian},
journal={arXiv preprint arXiv:2410.04309},
year={2024},
archivePrefix={arXiv},
eprint={2410.04309},
primaryClass={cs.CY cs.LG}
} | bhardwaj2024discovering |
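For the record above (arXiv:2410.04309), the workhorse is Space-Time Kriging. A full space-time variogram fit is beyond a sketch, but ordinary kriging with a fixed Gaussian covariance conveys the mechanics: solve a bordered linear system for weights that interpolate station values under an unbiasedness constraint. Treating time as just a third coordinate, the covariance model, and the toy PM 2.5 values are all simplifying assumptions, not the paper's method.

```python
import numpy as np

def ordinary_kriging(X, y, Xq, range_=1.0, nugget=1e-6):
    """Ordinary kriging with a Gaussian covariance (illustrative sketch;
    the paper uses Space-Time Kriging with fitted variograms, not shown here)."""
    def cov(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / range_**2)
    n = len(X)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = cov(X, X) + nugget * np.eye(n)
    K[n, :n] = K[:n, n] = 1.0                     # unbiasedness constraint row/col
    preds = []
    for q in Xq:
        rhs = np.append(cov(X, q[None, :])[:, 0], 1.0)
        w = np.linalg.solve(K, rhs)[:n]           # kriging weights
        preds.append(w @ y)
    return np.array(preds)

# Toy space-time points (lon, lat, t) with hypothetical PM 2.5 values.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 3))
y = 80 + 20 * np.sin(4 * X[:, 0]) + rng.normal(0, 2, 50)
Xq = rng.uniform(0, 1, size=(5, 3))
print(ordinary_kriging(X, y, Xq))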
arxiv-666146 | 2410.04313 | Vehicle-in-Virtual-Environment Method for ADAS and Connected and Automated Driving Function Development/Demonstration/Evaluation | <|reference_start|>Vehicle-in-Virtual-Environment Method for ADAS and Connected and Automated Driving Function Development/Demonstration/Evaluation: The current approach for new Advanced Driver Assistance System (ADAS) and Connected and Automated Driving (CAD) function development involves a significant amount of public road testing which is inefficient due to the number miles that need to be driven for rare and extreme events to take place, thereby being very costly also, and unsafe as the rest of the road users become involuntary test subjects. A new development, evaluation and demonstration method for safe, efficient, and repeatable development, demonstration and evaluation of ADAS and CAD functions called VehicleInVirtualEnvironment (VVE) was recently introduced as a solution to this problem. The vehicle is operated in a large, empty, and flat area during VVE while its localization and perception sensor data is fed from the virtual environment with other traffic and rare and extreme events being generated as needed. The virtual environment can be easily configured and modified to construct different testing scenarios on demand. This paper focuses on the VVE approach and introduces the coordinate transformations needed to sync pose (location and orientation) in the virtual and physical worlds and handling of localization and perception sensor data using the highly realistic 3D simulation model of a recent autonomous shuttle deployment site in Columbus, Ohio as the virtual world. As a further example that uses multiple actors, the use of VVE for VehicleToVRU communication based Vulnerable Road User (VRU) safety is presented in the paper using VVE experiments and real pedestrian(s) in a safe and repeatable manner. VVE experiments are used to demonstrate the efficacy of the method.<|reference_end|> | arxiv | @article{cao2024vehicle-in-virtual-environment,
title={Vehicle-in-Virtual-Environment Method for ADAS and Connected and
Automated Driving Function Development/Demonstration/Evaluation},
author={Xincheng Cao, Haochong Chen, Bilin Aksun-Guvenc, Levent Guvenc},
journal={arXiv preprint arXiv:2410.04313},
year={2024},
archivePrefix={arXiv},
eprint={2410.04313},
primaryClass={cs.RO cs.SY eess.SY}
} | cao2024vehicle-in-virtual-environment |
arxiv-666147 | 2410.04315 | Calibrating Expressions of Certainty | <|reference_start|>Calibrating Expressions of Certainty: We present a novel approach to calibrating linguistic expressions of certainty, e.g., "Maybe" and "Likely". Unlike prior work that assigns a single score to each certainty phrase, we model uncertainty as distributions over the simplex to capture their semantics more accurately. To accommodate this new representation of certainty, we generalize existing measures of miscalibration and introduce a novel post-hoc calibration method. Leveraging these tools, we analyze the calibration of both humans (e.g., radiologists) and computational models (e.g., language models) and provide interpretable suggestions to improve their calibration.<|reference_end|> | arxiv | @article{wang2024calibrating,
title={Calibrating Expressions of Certainty},
author={Peiqi Wang, Barbara D. Lam, Yingcheng Liu, Ameneh Asgari-Targhi,
Rameswar Panda, William M. Wells, Tina Kapur, Polina Golland},
journal={arXiv preprint arXiv:2410.04315},
year={2024},
archivePrefix={arXiv},
eprint={2410.04315},
primaryClass={cs.CL cs.LG}
} | wang2024calibrating |
arxiv-666148 | 2410.04316 | Data-driven Under Frequency Load Shedding Using Reinforcement Learning | <|reference_start|>Data-driven Under Frequency Load Shedding Using Reinforcement Learning: Underfrequency load shedding (UFLS) is a critical control strategy in power systems aimed at maintaining system stability and preventing blackouts during severe frequency drops. Traditional UFLS schemes often rely on predefined rules and thresholds, which may not adapt effectively to the dynamic and complex nature of modern power grids. Reinforcement learning (RL) methods have been proposed to effectively handle the UFLS problem. However, training these RL agents is computationally burdensome due to solving multiple differential equations at each step of training. This computational burden also limits the effectiveness of the RL agents for use in real-time. To reduce the computational burden, a machine learning (ML) classifier is trained to capture the frequency response of the system to various disturbances. The RL agent is then trained using the classifier, thus avoiding multiple computations during each step of agent training. Key features of this approach include reduced training time, as well as faster real-time application compared to other RL agents, and its potential to improve system resilience by minimizing the amount of load shed while effectively stabilizing the frequency. Comparative studies with conventional UFLS schemes demonstrate that the RL-based strategy achieves superior performance while significantly reducing the time required. Simulation results on the IEEE 68-bus system validate the performance of the proposed RL method.<|reference_end|> | arxiv | @article{justin2024data-driven,
title={Data-driven Under Frequency Load Shedding Using Reinforcement Learning},
author={Glory Justin, Santiago Paternain},
journal={arXiv preprint arXiv:2410.04316},
year={2024},
archivePrefix={arXiv},
eprint={2410.04316},
primaryClass={eess.SY cs.SY}
} | justin2024data-driven |
arxiv-666149 | 2410.04317 | Enabling Asymptotic Truth Learning in a Social Network | <|reference_start|>Enabling Asymptotic Truth Learning in a Social Network: Consider a network of agents that all want to guess the correct value of some ground truth state. In a sequential order, each agent makes its decision using a single private signal which has a constant probability of error, as well as observations of actions from its network neighbors earlier in the order. We are interested in enabling \emph{network-wide asymptotic truth learning} -- that in a network of $n$ agents, almost all agents make a correct prediction with probability approaching one as $n$ goes to infinity. In this paper we study both random orderings and carefully crafted decision orders with respect to the graph topology as well as sufficient or necessary conditions for a graph to support such a good ordering. We first show that on a sparse graph of average constant degree with a random ordering asymptotic truth learning does not happen. We then show a rather modest sufficient condition to enable asymptotic truth learning. With the help of this condition we characterize graphs generated from the Erd\"os R\'enyi model and preferential attachment model. In an Erd\"os R\'enyi graph, unless the graph is super sparse (with $O(n)$ edges) or super dense (nearly a complete graph), there exists a decision ordering that supports asymptotic truth learning. Similarly, any preferential attachment network with a constant number of edges per node can achieve asymptotic truth learning under a carefully designed ordering but not under either a random ordering nor the arrival order. We also evaluated a variant of the decision ordering on different network topologies and demonstrated clear effectiveness in improving truth learning over random orderings.<|reference_end|> | arxiv | @article{lu2024enabling,
title={Enabling Asymptotic Truth Learning in a Social Network},
author={Kevin Lu and Jordan Chong and Matt Lu and Jie Gao},
journal={arXiv preprint arXiv:2410.04317},
year={2024},
archivePrefix={arXiv},
eprint={2410.04317},
primaryClass={cs.SI cs.DS}
} | lu2024enabling |
arxiv-666150 | 2410.04318 | Urban Computing for Climate and Environmental Justice: Early Perspectives From Two Research Initiatives | <|reference_start|>Urban Computing for Climate and Environmental Justice: Early Perspectives From Two Research Initiatives: The impacts of climate change are intensifying existing vulnerabilities and disparities within urban communities around the globe, as extreme weather events, including floods and heatwaves, are becoming more frequent and severe, disproportionately affecting low-income and underrepresented groups. Tackling these increasing challenges requires novel approaches that integrate expertise across multiple domains, including computer science, engineering, climate science, and public health. Urban computing can play a pivotal role in these efforts by integrating data from multiple sources to support decision-making and provide actionable insights into weather patterns, infrastructure weaknesses, and population vulnerabilities. However, the capacity to leverage technological advancements varies significantly between the Global South and Global North. In this paper, we present two multiyear, multidisciplinary projects situated in Chicago, USA and Niter\'oi, Brazil, highlighting the opportunities and limitations of urban computing in these diverse contexts. Reflecting on our experiences, we then discuss the essential requirements, as well as existing gaps, for visual analytics tools that facilitate the understanding and mitigation of climate-related risks in urban environments.<|reference_end|> | arxiv | @article{veiga2024urban,
title={Urban Computing for Climate and Environmental Justice: Early
Perspectives From Two Research Initiatives},
author={Carolina Veiga, Ashish Sharma, Daniel de Oliveira, Marcos Lage, Fabio
Miranda},
journal={arXiv preprint arXiv:2410.04318},
year={2024},
archivePrefix={arXiv},
eprint={2410.04318},
primaryClass={cs.CY cs.HC}
} | veiga2024urban |
arxiv-666151 | 2410.04320 | Channel-Aware Throughput Maximization for Cooperative Data Fusion in CAV | <|reference_start|>Channel-Aware Throughput Maximization for Cooperative Data Fusion in CAV: Connected and autonomous vehicles (CAVs) have garnered significant attention due to their extended perception range and enhanced sensing coverage. To address challenges such as blind spots and obstructions, CAVs employ vehicle-to-vehicle (V2V) communications to aggregate sensory data from surrounding vehicles. However, cooperative perception is often constrained by the limitations of achievable network throughput and channel quality. In this paper, we propose a channel-aware throughput maximization approach to facilitate CAV data fusion, leveraging a self-supervised autoencoder for adaptive data compression. We formulate the problem as a mixed integer programming (MIP) model, which we decompose into two sub-problems to derive optimal data rate and compression ratio solutions under given link conditions. An autoencoder is then trained to minimize bitrate with the determined compression ratio, and a fine-tuning strategy is employed to further reduce spectrum resource consumption. Experimental evaluation on the OpenCOOD platform demonstrates the effectiveness of our proposed algorithm, showing more than 20.19\% improvement in network throughput and a 9.38\% increase in average precision (AP@IoU) compared to state-of-the-art methods, with an optimal latency of 19.99 ms.<|reference_end|> | arxiv | @article{an2024channel-aware,
title={Channel-Aware Throughput Maximization for Cooperative Data Fusion in CAV},
author={Haonan An, Zhengru Fang, Yuang Zhang, Senkang Hu, Xianhao Chen, Guowen
Xu, and Yuguang Fang},
journal={arXiv preprint arXiv:2410.04320},
year={2024},
archivePrefix={arXiv},
eprint={2410.04320},
primaryClass={cs.AI}
} | an2024channel-aware |
arxiv-666152 | 2410.04322 | Toward Debugging Deep Reinforcement Learning Programs with RLExplorer | <|reference_start|>Toward Debugging Deep Reinforcement Learning Programs with RLExplorer: Deep reinforcement learning (DRL) has shown success in diverse domains such as robotics, computer games, and recommendation systems. However, like any other software system, DRL-based software systems are susceptible to faults that pose unique challenges for debugging and diagnosing. These faults often result in unexpected behavior without explicit failures and error messages, making debugging difficult and time-consuming. Therefore, automating the monitoring and diagnosis of DRL systems is crucial to alleviate the burden on developers. In this paper, we propose RLExplorer, the first fault diagnosis approach for DRL-based software systems. RLExplorer automatically monitors training traces and runs diagnosis routines based on properties of the DRL learning dynamics to detect the occurrence of DRL-specific faults. It then logs the results of these diagnoses as warnings that cover theoretical concepts, recommended practices, and potential solutions to the identified faults. We conducted two sets of evaluations to assess RLExplorer. Our first evaluation of faulty DRL samples from Stack Overflow revealed that our approach can effectively diagnose real faults in 83% of the cases. Our second evaluation of RLExplorer with 15 DRL experts/developers showed that (1) RLExplorer could identify 3.6 times more defects than manual debugging and (2) RLExplorer is easily integrated into DRL applications.<|reference_end|> | arxiv | @article{bouchoucha2024toward,
title={Toward Debugging Deep Reinforcement Learning Programs with RLExplorer},
author={Rached Bouchoucha, Ahmed Haj Yahmed, Darshan Patil, Janarthanan
Rajendran, Amin Nikanjam, Sarath Chandar, Foutse Khomh},
journal={arXiv preprint arXiv:2410.04322},
year={2024},
archivePrefix={arXiv},
eprint={2410.04322},
primaryClass={cs.SE cs.AI cs.LG}
} | bouchoucha2024toward |
arxiv-666153 | 2410.04324 | SONAR: A Synthetic AI-Audio Detection Framework and Benchmark | <|reference_start|>SONAR: A Synthetic AI-Audio Detection Framework and Benchmark: Recent advances in Text-to-Speech (TTS) and Voice-Conversion (VC) using generative Artificial Intelligence (AI) technology have made it possible to generate high-quality and realistic human-like audio. This introduces significant challenges to distinguishing AI-synthesized speech from the authentic human voice and could raise potential issues of misuse for malicious purposes such as impersonation and fraud, spreading misinformation, deepfakes, and scams. However, existing detection techniques for AI-synthesized audio have not kept pace and often exhibit poor generalization across diverse datasets. In this paper, we introduce SONAR, a synthetic AI-Audio Detection Framework and Benchmark, aiming to provide a comprehensive evaluation for distinguishing cutting-edge AI-synthesized auditory content. SONAR includes a novel evaluation dataset sourced from 9 diverse audio synthesis platforms, including leading TTS providers and state-of-the-art TTS models. It is the first framework to uniformly benchmark AI-audio detection across both traditional and foundation model-based deepfake detection systems. Through extensive experiments, we reveal the generalization limitations of existing detection methods and demonstrate that foundation models exhibit stronger generalization capabilities, which can be attributed to their model size and the scale and quality of pretraining data. Additionally, we explore the effectiveness and efficiency of few-shot fine-tuning in improving generalization, highlighting its potential for tailored applications, such as personalized detection systems for specific entities or individuals. Code and dataset are available at https://github.com/Jessegator/SONAR.<|reference_end|> | arxiv | @article{li2024sonar:,
title={SONAR: A Synthetic AI-Audio Detection Framework and Benchmark},
author={Xiang Li, Pin-Yu Chen, Wenqi Wei},
journal={arXiv preprint arXiv:2410.04324},
year={2024},
archivePrefix={arXiv},
eprint={2410.04324},
primaryClass={cs.SD cs.AI eess.AS}
} | li2024sonar: |
arxiv-666154 | 2410.04327 | Leveraging Hierarchical Taxonomies in Prompt-based Continual Learning | <|reference_start|>Leveraging Hierarchical Taxonomies in Prompt-based Continual Learning: Drawing inspiration from human learning behaviors, this work proposes a novel approach to mitigate catastrophic forgetting in Prompt-based Continual Learning models by exploiting the relationships between continuously emerging class data. We find that applying human habits of organizing and connecting information can serve as an efficient strategy when training deep learning models. Specifically, by building a hierarchical tree structure based on the expanding set of labels, we gain fresh insights into the data, identifying groups of similar classes could easily cause confusion. Additionally, we delve deeper into the hidden connections between classes by exploring the original pretrained model's behavior through an optimal transport-based approach. From these insights, we propose a novel regularization loss function that encourages models to focus more on challenging knowledge areas, thereby enhancing overall performance. Experimentally, our method demonstrated significant superiority over the most robust state-of-the-art models on various benchmarks.<|reference_end|> | arxiv | @article{tran2024leveraging,
title={Leveraging Hierarchical Taxonomies in Prompt-based Continual Learning},
author={Quyen Tran, Minh Le, Tuan Truong, Dinh Phung, Linh Ngo, Thien Nguyen,
Nhat Ho, Trung Le},
journal={arXiv preprint arXiv:2410.04327},
year={2024},
archivePrefix={arXiv},
eprint={2410.04327},
primaryClass={cs.LG}
} | tran2024leveraging |
arxiv-666155 | 2410.04328 | OD-Stega: LLM-Based Near-Imperceptible Steganography via Optimized Distributions | <|reference_start|>OD-Stega: LLM-Based Near-Imperceptible Steganography via Optimized Distributions: We consider coverless steganography where a Large Language Model (LLM) drives an arithmetic coding decoder to generate stego-texts. An efficient method should embed secret message bits in as few language tokens as possible, while still keeping the stego-text natural and fluent. We show that on the individual token level, this problem is mathematically equivalent to maximizing the entropy of a replacement probability distribution of the next token generation, subject to a constraint on the KL divergence between the chosen probability distribution and the original distribution given by the LLM. A closed-form solution is provided for the optimization problem, which can be computed efficiently. Several important practical issues are also tackled: 1) An often-overlooked tokenization mismatch issue is resolved with a simple prompt selection approach, 2) The combination of the optimized distribution and the vocabulary truncation technique is considered, and 3) The combination of the optimized distribution with other sequence-level selection heuristics to further enhance the efficiency and reliability is studied.<|reference_end|> | arxiv | @article{huang2024od-stega:,
title={OD-Stega: LLM-Based Near-Imperceptible Steganography via Optimized
Distributions},
author={Yu-Shin Huang, Peter Just, Krishna Narayanan, and Chao Tian},
journal={arXiv preprint arXiv:2410.04328},
year={2024},
archivePrefix={arXiv},
eprint={2410.04328},
primaryClass={cs.IT cs.AI cs.CL cs.CR cs.LG math.IT}
} | huang2024od-stega: |
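The record above (arXiv:2410.04328) reports a closed-form solution for maximizing next-token entropy subject to a KL budget against the LLM's distribution p. A standard Lagrangian treatment of that program yields a tempered family q proportional to p^beta with beta in [0, 1]; the sketch below assumes that form (the paper's exact derivation is not reproduced here) and selects beta by bisection so the KL constraint binds.

```python
import numpy as np

def entropy(q):
    q = q[q > 0]
    return -np.sum(q * np.log(q))

def kl(q, p):
    mask = q > 0
    return np.sum(q[mask] * np.log(q[mask] / p[mask]))

def max_entropy_under_kl(p, eps, iters=60):
    """Return q maximizing H(q) s.t. KL(q||p) <= eps, assuming the tempered
    family q(beta) ~ p**beta (a standard Lagrangian solution; illustrative)."""
    temper = lambda b: (p ** b) / np.sum(p ** b)
    if kl(temper(0.0), p) <= eps:                 # uniform is already feasible
        return temper(0.0)
    lo, hi = 0.0, 1.0                             # KL decreases as beta -> 1
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if kl(temper(mid), p) > eps else (lo, mid)
    return temper(hi)

# Toy next-token distribution from a language model (hypothetical values).
p = np.array([0.6, 0.25, 0.1, 0.04, 0.01])
q = max_entropy_under_kl(p, eps=0.1)
print(q, entropy(q), kl(q, p))   # entropy(q) > entropy(p), KL(q||p) ~ 0.1
```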
arxiv-666156 | 2410.04332 | Gradient Routing: Masking Gradients to Localize Computation in Neural Networks | <|reference_start|>Gradient Routing: Masking Gradients to Localize Computation in Neural Networks: Neural networks are trained primarily based on their inputs and outputs, without regard for their internal mechanisms. These neglected mechanisms determine properties that are critical for safety, like (i) transparency; (ii) the absence of sensitive information or harmful capabilities; and (iii) reliable generalization of goals beyond the training distribution. To address this shortcoming, we introduce gradient routing, a training method that isolates capabilities to specific subregions of a neural network. Gradient routing applies data-dependent, weighted masks to gradients during backpropagation. These masks are supplied by the user in order to configure which parameters are updated by which data points. We show that gradient routing can be used to (1) learn representations which are partitioned in an interpretable way; (2) enable robust unlearning via ablation of a pre-specified network subregion; and (3) achieve scalable oversight of a reinforcement learner by localizing modules responsible for different behaviors. Throughout, we find that gradient routing localizes capabilities even when applied to a limited, ad-hoc subset of the data. We conclude that the approach holds promise for challenging, real-world applications where quality data are scarce.<|reference_end|> | arxiv | @article{cloud2024gradient,
title={Gradient Routing: Masking Gradients to Localize Computation in Neural
Networks},
author={Alex Cloud, Jacob Goldman-Wetzler, Ev\v{z}en Wybitul, Joseph Miller,
Alexander Matt Turner},
journal={arXiv preprint arXiv:2410.04332},
year={2024},
archivePrefix={arXiv},
eprint={2410.04332},
primaryClass={cs.LG cs.AI}
} | cloud2024gradient |
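The mechanism in the record above (arXiv:2410.04332), data-dependent weighted masks applied to gradients during backpropagation, can be illustrated with a PyTorch autograd function that is the identity on the forward pass and scales per-example gradients on the backward pass. The two-region MLP and the routing rule (class-0 examples update region A, the rest update region B) are invented for the example; this is a sketch of the idea, not the paper's implementation.

```python
import torch
import torch.nn as nn

class RouteGrad(torch.autograd.Function):
    """Identity on the forward pass; element-wise gradient mask on backward."""
    @staticmethod
    def forward(ctx, x, mask):
        ctx.save_for_backward(mask)
        return x
    @staticmethod
    def backward(ctx, grad_out):
        (mask,) = ctx.saved_tensors
        return grad_out * mask, None              # no gradient w.r.t. the mask

class RoutedMLP(nn.Module):
    def __init__(self, d_in=16, d_hid=32, d_out=2):
        super().__init__()
        self.inp = nn.Linear(d_in, d_hid)
        self.region_a = nn.Linear(d_hid, d_hid)   # capability region A
        self.region_b = nn.Linear(d_hid, d_hid)   # capability region B
        self.out = nn.Linear(2 * d_hid, d_out)

    def forward(self, x, route_a):                # route_a: (batch,) in {0., 1.}
        h = torch.relu(self.inp(x))
        m = route_a.view(-1, 1)
        a = RouteGrad.apply(torch.relu(self.region_a(h)), m)        # updated only by routed rows
        b = RouteGrad.apply(torch.relu(self.region_b(h)), 1.0 - m)  # complement
        return self.out(torch.cat([a, b], dim=-1))

model = RoutedMLP()
x = torch.randn(8, 16)
y = torch.randint(0, 2, (8,))
route_a = (y == 0).float()                        # hypothetical routing rule
loss = nn.functional.cross_entropy(model(x, route_a), y)
loss.backward()                                   # region_a grads come only from y==0 rows
```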
arxiv-666157 | 2410.04334 | AI Assistants for Incident Lifecycle in a Microservice Environment: A Systematic Literature Review | <|reference_start|>AI Assistants for Incident Lifecycle in a Microservice Environment: A Systematic Literature Review: Incidents in microservice environments can be costly and challenging to recover from due to their complexity and distributed nature. Recent advancements in artificial intelligence (AI) offer promising solutions for improving incident management. This paper systematically reviews primary studies on AI assistants designed to support different phases of the incident lifecycle. It highlights successful applications of AI, identifies gaps in current research, and suggests future opportunities for enhancing incident management through AI. By examining these studies, the paper aims to provide insights into the effectiveness of AI tools and their potential to address ongoing challenges in incident recovery.<|reference_end|> | arxiv | @article{zhou2024ai,
title={AI Assistants for Incident Lifecycle in a Microservice Environment: A
Systematic Literature Review},
author={Dahlia Ziqi Zhou, Marios Fokaefs},
journal={arXiv preprint arXiv:2410.04334},
year={2024},
archivePrefix={arXiv},
eprint={2410.04334},
primaryClass={cs.SE}
} | zhou2024ai |
arxiv-666158 | 2410.04335 | ReTok: Replacing Tokenizer to Enhance Representation Efficiency in Large Language Model | <|reference_start|>ReTok: Replacing Tokenizer to Enhance Representation Efficiency in Large Language Model: Tokenizer is an essential component for large language models (LLMs), and a tokenizer with a high compression rate can improve the model's representation and processing efficiency. However, the tokenizer cannot ensure high compression rate in all scenarios, and an increase in the average input and output lengths will increases the training and inference costs of the model. Therefore, it is crucial to find ways to improve the model's efficiency with minimal cost while maintaining the model's performance. In this work, we propose a method to improve model representation and processing efficiency by replacing the tokenizers of LLMs. We propose replacing and reinitializing the parameters of the model's input and output layers with the parameters of the original model, and training these parameters while keeping other parameters fixed. We conducted experiments on different LLMs, and the results show that our method can maintain the performance of the model after replacing the tokenizer, while significantly improving the decoding speed for long texts.<|reference_end|> | arxiv | @article{gu2024retok:,
title={ReTok: Replacing Tokenizer to Enhance Representation Efficiency in Large
Language Model},
author={Shuhao Gu, Mengdi Zhao, Bowen Zhang, Liangdong Wang, Jijie Li, Guang
Liu},
journal={arXiv preprint arXiv:2410.04335},
year={2024},
archivePrefix={arXiv},
eprint={2410.04335},
primaryClass={cs.CL}
} | gu2024retok: |
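The recipe in the record above (arXiv:2410.04335), replacing the tokenizer, reinitializing only the input/output embedding layers, training those, and freezing everything else, maps onto a few lines of Hugging Face transformers. The model and tokenizer names below are placeholders and the normal-initialization is illustrative; the paper's exact scheme (reusing parameters from the original model) is not reproduced here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")            # placeholder model
new_tok = AutoTokenizer.from_pretrained("bert-base-uncased")   # placeholder new tokenizer

# Resize the vocab to the new tokenizer and reinitialize the mismatched embeddings.
base.resize_token_embeddings(len(new_tok))
emb = base.get_input_embeddings()
torch.nn.init.normal_(emb.weight, std=0.02)                    # illustrative init
out = base.get_output_embeddings()
if out is not None and out.weight is not emb.weight:           # skip if weights are tied
    torch.nn.init.normal_(out.weight, std=0.02)

# Train only the input/output layers; keep all other parameters fixed.
for p in base.parameters():
    p.requires_grad = False
for p in emb.parameters():
    p.requires_grad = True
if out is not None:
    out.weight.requires_grad = True

trainable = sum(p.numel() for p in base.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```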
arxiv-666159 | 2410.04336 | A Meshfree Method for Eigenvalues of Differential Operators on Surfaces, Including Steklov Problems | <|reference_start|>A Meshfree Method for Eigenvalues of Differential Operators on Surfaces, Including Steklov Problems: We present and study techniques for investigating the spectra of linear differential operators on surfaces and flat domains using symmetric meshfree methods: meshfree methods that arise from finding norm-minimizing Hermite-Birkhoff interpolants in a Hilbert space. Meshfree methods are desirable for surface problems due to the increased difficulties associated with mesh creation and refinement on curved surfaces. While meshfree methods have been used for solving a wide range of partial differential equations (PDEs) in recent years, the spectra of operators discretized using radial basis functions (RBFs) often suffer from the presence of non-physical eigenvalues (spurious modes). This makes many RBF methods unhelpful for eigenvalue problems. We provide rigorously justified processes for finding eigenvalues based on results concerning the norm of the solution in its native space; specifically, only PDEs with solutions in the native space produce numerical solutions with bounded norms as the fill distance approaches zero. For certain problems, we prove that eigenvalue and eigenfunction estimates converge at a high-order rate. The technique we present is general enough to work for a wide variety of problems, including Steklov problems, where the eigenvalue parameter is in the boundary condition. Numerical experiments for a mix of standard and Steklov eigenproblems on surfaces with and without boundary, as well as flat domains, are presented, including a Steklov-Helmholtz problem.<|reference_end|> | arxiv | @article{venn2024a,
title={A Meshfree Method for Eigenvalues of Differential Operators on Surfaces,
Including Steklov Problems},
author={Daniel R. Venn, Steven J. Ruuth},
journal={arXiv preprint arXiv:2410.04336},
year={2024},
archivePrefix={arXiv},
eprint={2410.04336},
primaryClass={math.NA cs.NA}
} | venn2024a |
arxiv-666160 | 2410.04342 | Accelerating Inference of Networks in the Frequency Domain | <|reference_start|>Accelerating Inference of Networks in the Frequency Domain: It has been demonstrated that networks' parameters can be significantly reduced in the frequency domain with a very small decrease in accuracy. However, given the cost of frequency transforms, the computational complexity is not significantly decreased. In this work, we propose performing network inference in the frequency domain to speed up networks whose frequency parameters are sparse. In particular, we propose a frequency inference chain that is dual to the network inference in the spatial domain. In order to handle the non-linear layers, we make a compromise to apply non-linear operations on frequency data directly, which works effectively. Enabled by the frequency inference chain and the strategy for non-linear layers, the proposed approach completes the entire inference in the frequency domain. Unlike previous approaches which require extra frequency or inverse transforms for all layers, the proposed approach only needs the frequency transform and its inverse once at the beginning and once at the end of a network. Comparisons with state-of-the-art methods demonstrate that the proposed approach significantly improves accuracy in the case of a high speedup ratio (over 100x). The source code is available at \url{https://github.com/guanfangdong/FreqNet-Infer}.<|reference_end|> | arxiv | @article{zhao2024accelerating,
title={Accelerating Inference of Networks in the Frequency Domain},
author={Chenqiu Zhao, Guanfang Dong, Anup Basu},
journal={arXiv preprint arXiv:2410.04342},
year={2024},
archivePrefix={arXiv},
eprint={2410.04342},
primaryClass={cs.CV}
} | zhao2024accelerating |
arxiv-666161 | 2410.04343 | Inference Scaling for Long-Context Retrieval Augmented Generation | <|reference_start|>Inference Scaling for Long-Context Retrieval Augmented Generation: The scaling of inference computation has unlocked the potential of long-context large language models (LLMs) across diverse settings. For knowledge-intensive tasks, the increased compute is often allocated to incorporate more external knowledge. However, without effectively utilizing such knowledge, solely expanding context does not always enhance performance. In this work, we investigate inference scaling for retrieval augmented generation (RAG), exploring strategies beyond simply increasing the quantity of knowledge. We focus on two inference scaling strategies: in-context learning and iterative prompting. These strategies provide additional flexibility to scale test-time computation (e.g., by increasing retrieved documents or generation steps), thereby enhancing LLMs' ability to effectively acquire and utilize contextual information. We address two key questions: (1) How does RAG performance benefit from the scaling of inference computation when optimally configured? (2) Can we predict the optimal test-time compute allocation for a given budget by modeling the relationship between RAG performance and inference parameters? Our observations reveal that increasing inference computation leads to nearly linear gains in RAG performance when optimally allocated, a relationship we describe as the inference scaling laws for RAG. Building on this, we further develop the computation allocation model to estimate RAG performance across different inference configurations. The model predicts optimal inference parameters under various computation constraints, which align closely with the experimental results. By applying these optimal configurations, we demonstrate that scaling inference compute on long-context LLMs achieves up to 58.9% gains on benchmark datasets compared to standard RAG.<|reference_end|> | arxiv | @article{yue2024inference,
title={Inference Scaling for Long-Context Retrieval Augmented Generation},
author={Zhenrui Yue, Honglei Zhuang, Aijun Bai, Kai Hui, Rolf Jagerman, Hansi
Zeng, Zhen Qin, Dong Wang, Xuanhui Wang, Michael Bendersky},
journal={arXiv preprint arXiv:2410.04343},
year={2024},
archivePrefix={arXiv},
eprint={2410.04343},
primaryClass={cs.CL}
} | yue2024inference |
arxiv-666162 | 2410.04344 | DeepONet for Solving PDEs: Generalization Analysis in Sobolev Training | <|reference_start|>DeepONet for Solving PDEs: Generalization Analysis in Sobolev Training: In this paper, we investigate the application of operator learning, specifically DeepONet, to solve partial differential equations (PDEs). Unlike function learning methods that require training separate neural networks for each PDE, operator learning generalizes across different PDEs without retraining. We focus on the performance of DeepONet in Sobolev training, addressing two key questions: the approximation ability of deep branch and trunk networks, and the generalization error in Sobolev norms. Our findings highlight that deep branch networks offer significant performance benefits, while trunk networks are best kept simple. Moreover, standard sampling methods without adding derivative information in the encoding part are sufficient for minimizing generalization error in Sobolev training, based on generalization analysis. This paper fills a theoretical gap by providing error estimations for a wide range of physics-informed machine learning models and applications.<|reference_end|> | arxiv | @article{yang2024deeponet,
title={DeepONet for Solving PDEs: Generalization Analysis in Sobolev Training},
author={Yahong Yang},
journal={arXiv preprint arXiv:2410.04344},
year={2024},
archivePrefix={arXiv},
eprint={2410.04344},
primaryClass={cs.LG cs.NA math.NA}
} | yang2024deeponet |
arxiv-666163 | 2410.04345 | MVP-Bench: Can Large Vision--Language Models Conduct Multi-level Visual Perception Like Humans? | <|reference_start|>MVP-Bench: Can Large Vision--Language Models Conduct Multi-level Visual Perception Like Humans?: Humans perform visual perception at multiple levels, including low-level object recognition and high-level semantic interpretation such as behavior understanding. Subtle differences in low-level details can lead to substantial changes in high-level perception. For example, substituting the shopping bag held by a person with a gun suggests violent behavior, implying criminal or violent activity. Despite significant advancements in various multimodal tasks, Large Visual-Language Models (LVLMs) remain unexplored in their capabilities to conduct such multi-level visual perceptions. To investigate the perception gap between LVLMs and humans, we introduce MVP-Bench, the first visual-language benchmark systematically evaluating both low- and high-level visual perception of LVLMs. We construct MVP-Bench across natural and synthetic images to investigate how manipulated content influences model perception. Using MVP-Bench, we diagnose the visual perception of 10 open-source and 2 closed-source LVLMs, showing that high-level perception tasks significantly challenge existing LVLMs. The state-of-the-art GPT-4o only achieves an accuracy of $56\%$ on Yes/No questions, compared with $74\%$ in low-level scenarios. Furthermore, the performance gap between natural and manipulated images indicates that current LVLMs do not generalize in understanding the visual semantics of synthetic images as humans do. Our data and code are publicly available at https://github.com/GuanzhenLi/MVP-Bench.<|reference_end|> | arxiv | @article{li2024mvp-bench:,
title={MVP-Bench: Can Large Vision--Language Models Conduct Multi-level Visual
Perception Like Humans?},
author={Guanzhen Li, Yuxi Xie, Min-Yen Kan},
journal={arXiv preprint arXiv:2410.04345},
year={2024},
archivePrefix={arXiv},
eprint={2410.04345},
primaryClass={cs.CV cs.AI}
} | li2024mvp-bench: |
arxiv-666164 | 2410.04346 | Ordinal Preference Optimization: Aligning Human Preferences via NDCG | <|reference_start|>Ordinal Preference Optimization: Aligning Human Preferences via NDCG: Aligning Large Language Models (LLMs) with diverse human preferences is a pivotal technique for controlling model behaviors and enhancing generation quality. Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), and their variants optimize language models by pairwise comparisons. However, when multiple responses are available, these approaches fall short of leveraging the extensive information in the ranking given by the reward models or human feedback. In this work, we propose a novel listwise approach named Ordinal Preference Optimization (OPO), which employs the Normalized Discounted Cumulative Gain (NDCG), a widely-used ranking metric, to better utilize relative proximity within ordinal multiple responses. We develop an end-to-end preference optimization algorithm by approximating NDCG with a differentiable surrogate loss. This approach builds a connection between ranking models in information retrieval and the alignment problem. In aligning multi-response datasets assigned with ordinal rewards, OPO outperforms existing pairwise and listwise approaches on evaluation sets and general benchmarks like AlpacaEval. Moreover, we demonstrate that increasing the pool of negative samples can enhance model performance by reducing the adverse effects of trivial negatives.<|reference_end|> | arxiv | @article{zhao2024ordinal,
title={Ordinal Preference Optimization: Aligning Human Preferences via NDCG},
author={Yang Zhao, Yixin Wang, Mingzhang Yin},
journal={arXiv preprint arXiv:2410.04346},
year={2024},
archivePrefix={arXiv},
eprint={2410.04346},
primaryClass={cs.CL}
} | zhao2024ordinal |
arxiv-666165 | 2410.04347 | Latent Feature Mining for Predictive Model Enhancement with Large Language Models | <|reference_start|>Latent Feature Mining for Predictive Model Enhancement with Large Language Models: Predictive modeling often faces challenges due to limited data availability and quality, especially in domains where collected features are weakly correlated with outcomes and where additional feature collection is constrained by ethical or practical difficulties. Traditional machine learning (ML) models struggle to incorporate unobserved yet critical factors. In this work, we introduce an effective approach to formulate latent feature mining as text-to-text propositional logical reasoning. We propose FLAME (Faithful Latent Feature Mining for Predictive Model Enhancement), a framework that leverages large language models (LLMs) to augment observed features with latent features and enhance the predictive power of ML models in downstream tasks. Our framework is generalizable across various domains with necessary domain-specific adaptation, as it is designed to incorporate contextual information unique to each area, ensuring effective transfer to different areas facing similar data availability challenges. We validate our framework with two case studies: (1) the criminal justice system, a domain characterized by limited and ethically challenging data collection; (2) the healthcare domain, where patient privacy concerns and the complexity of medical data limit comprehensive feature collection. Our results show that inferred latent features align well with ground truth labels and significantly enhance the downstream classifier.<|reference_end|> | arxiv | @article{li2024latent,
title={Latent Feature Mining for Predictive Model Enhancement with Large
Language Models},
author={Bingxuan Li, Pengyi Shi, Amy Ward},
journal={arXiv preprint arXiv:2410.04347},
year={2024},
archivePrefix={arXiv},
eprint={2410.04347},
primaryClass={cs.LG cs.CL}
} | li2024latent |
arxiv-666166 | 2410.04349 | HyperBlocker: Accelerating Rule-based Blocking in Entity Resolution using GPUs | <|reference_start|>HyperBlocker: Accelerating Rule-based Blocking in Entity Resolution using GPUs: This paper studies rule-based blocking in Entity Resolution (ER). We propose HyperBlocker, a GPU-accelerated system for blocking in ER. As opposed to previous blocking algorithms and parallel blocking solvers, HyperBlocker employs a pipelined architecture to overlap data transfer and GPU operations. It generates a data-aware and rule-aware execution plan on CPUs, for specifying how rules are evaluated, and develops a number of hardware-aware optimizations to achieve massive parallelism on GPUs. Using real-life datasets, we show that HyperBlocker is at least 6.8x and 9.1x faster than prior CPU-powered distributed systems and GPU-based ER solvers, respectively. Better still, by combining HyperBlocker with the state-of-the-art ER matcher, we can speed up the overall ER process by at least 30% with comparable accuracy.<|reference_end|> | arxiv | @article{zhu2024hyperblocker:,
title={HyperBlocker: Accelerating Rule-based Blocking in Entity Resolution
using GPUs},
author={Xiaoke Zhu, Min Xie, Ting Deng, Qi Zhang},
journal={arXiv preprint arXiv:2410.04349},
year={2024},
archivePrefix={arXiv},
eprint={2410.04349},
primaryClass={cs.DB}
} | zhu2024hyperblocker: |
arxiv-666167 | 2410.04350 | TIS-DPO: Token-level Importance Sampling for Direct Preference Optimization With Estimated Weights | <|reference_start|>TIS-DPO: Token-level Importance Sampling for Direct Preference Optimization With Estimated Weights: Direct Preference Optimization (DPO) has been widely adopted for preference alignment of Large Language Models (LLMs) due to its simplicity and effectiveness. However, DPO is derived as a bandit problem in which the whole response is treated as a single arm, ignoring the importance differences between tokens, which may affect optimization efficiency and make it difficult to achieve optimal results. In this work, we propose that the optimal data for DPO has equal expected rewards for each token in winning and losing responses, as there is no difference in token importance. However, since the optimal dataset is unavailable in practice, we propose using the original dataset for importance sampling to achieve unbiased optimization. Accordingly, we propose a token-level importance sampling DPO objective named TIS-DPO that assigns importance weights to each token based on its reward. Inspired by previous works, we estimate the token importance weights using the difference in prediction probabilities from a pair of contrastive LLMs. We explore three methods to construct these contrastive LLMs: (1) guiding the original LLM with contrastive prompts, (2) training two separate LLMs using winning and losing responses, and (3) performing forward and reverse DPO training with winning and losing responses. Experiments show that TIS-DPO significantly outperforms various baseline methods on harmlessness and helpfulness alignment and summarization tasks. We also visualize the estimated weights, demonstrating their ability to identify key token positions.<|reference_end|> | arxiv | @article{liu2024tis-dpo:,
title={TIS-DPO: Token-level Importance Sampling for Direct Preference
Optimization With Estimated Weights},
author={Aiwei Liu, Haoping Bai, Zhiyun Lu, Yanchao Sun, Xiang Kong, Simon
Wang, Jiulong Shan, Albin Madappally Jose, Xiaojiang Liu, Lijie Wen, Philip
S. Yu, Meng Cao},
journal={arXiv preprint arXiv:2410.04350},
year={2024},
archivePrefix={arXiv},
eprint={2410.04350},
primaryClass={cs.CL}
} | liu2024tis-dpo: |
arxiv-666168 | 2410.04352 | Enhancing Android Malware Detection: The Influence of ChatGPT on Decision-centric Task | <|reference_start|>Enhancing Android Malware Detection: The Influence of ChatGPT on Decision-centric Task: With the rise of large language models, such as ChatGPT, non-decisional models have been applied to various tasks. Moreover, ChatGPT has drawn attention to the traditional decision-centric task of Android malware detection. Despite effective detection methods proposed by scholars, they face low interpretability issues. Specifically, while these methods excel in classifying applications as benign or malicious and can detect malicious behavior, they often fail to provide detailed explanations for the decisions they make. This challenge raises concerns about the reliability of existing detection schemes and questions their true ability to understand complex data. In this study, we investigate the influence of the non-decisional model, ChatGPT, on the traditional decision-centric task of Android malware detection. We choose three state-of-the-art solutions, Drebin, XMAL, and MaMaDroid, conduct a series of experiments on publicly available datasets, and carry out a comprehensive comparison and analysis. Our findings indicate that these decision-driven solutions primarily rely on statistical patterns within datasets to make decisions, rather than genuinely understanding the underlying data. In contrast, ChatGPT, as a non-decisional model, excels in providing comprehensive analysis reports, substantially enhancing interpretability. Furthermore, we conduct surveys among experienced developers. The result highlights developers' preference for ChatGPT, as it offers in-depth insights and enhances efficiency and understanding of challenges. Meanwhile, these studies and analyses offer profound insights, presenting developers with a novel perspective on Android malware detection--enhancing the reliability of detection results from a non-decisional perspective.<|reference_end|> | arxiv | @article{li2024enhancing,
title={Enhancing Android Malware Detection: The Influence of ChatGPT on
Decision-centric Task},
author={Yao Li, Sen Fang, Tao Zhang, Haipeng Cai},
journal={arXiv preprint arXiv:2410.04352},
year={2024},
archivePrefix={arXiv},
eprint={2410.04352},
primaryClass={cs.CR cs.SE}
} | li2024enhancing |
arxiv-666169 | 2410.04353 | Multi-Attribute Auctions for Efficient Operation of Non-Cooperative Relaying Systems | <|reference_start|>Multi-Attribute Auctions for Efficient Operation of Non-Cooperative Relaying Systems: This paper studies the use of a multi-attribute auction in a communication system to bring about efficient relaying in a non-cooperative setting. We consider a system where a source seeks to offload data to an access point (AP) while balancing both the timeliness and energy-efficiency of the transmission. A deep fade in the communication channel (due to, e.g., a line-of-sight blockage) makes direct communication costly, and the source may alternatively rely on non-cooperative UEs to act as relays. We propose a multi-attribute auction to select a UE and to determine the duration and power of the transmission, with payments to the UE taking the form of energy sent via wireless power transfer (WPT). The quality of the channel from a UE to the AP constitutes private information, and bids consist of a transmission time and transmission power. We show that under a second-preferred-offer auction, truthful bidding by all candidate UEs forms a Nash Equilibrium. However, this auction is not incentive compatible, and we present a modified auction in which truthful bidding is in fact a dominant strategy. Extensive numerical experimentation illustrates the efficacy of our approach, which we compare to a cooperative baseline. We demonstrate that with as few as two candidates, our improved mechanism leads to as much as a 76% reduction in energy consumption, and that with as few as three candidates, the transmission time decreases by as much as 55%. Further, we see that as the number of candidates increases, the performance of our mechanism approaches that of the cooperative baseline. Overall, our findings highlight the potential of multi-attribute auctions to enhance the efficiency of data transfer in non-cooperative settings.<|reference_end|> | arxiv | @article{hurst2024multi-attribute,
title={Multi-Attribute Auctions for Efficient Operation of Non-Cooperative
Relaying Systems},
author={Winston Hurst and Yasamin Mostofi},
journal={arXiv preprint arXiv:2410.04353},
year={2024},
archivePrefix={arXiv},
eprint={2410.04353},
primaryClass={cs.GT cs.SY eess.SY}
} | hurst2024multi-attribute |
arxiv-666170 | 2410.04354 | StreetSurfGS: Scalable Urban Street Surface Reconstruction with Planar-based Gaussian Splatting | <|reference_start|>StreetSurfGS: Scalable Urban Street Surface Reconstruction with Planar-based Gaussian Splatting: Reconstructing urban street scenes is crucial due to its vital role in applications such as autonomous driving and urban planning. These scenes are characterized by long and narrow camera trajectories, occlusion, complex object relationships, and data sparsity across multiple scales. Despite recent advancements, existing surface reconstruction methods, which are primarily designed for object-centric scenarios, struggle to adapt effectively to the unique characteristics of street scenes. To address this challenge, we introduce StreetSurfGS, the first method to employ Gaussian Splatting specifically tailored for scalable urban street scene surface reconstruction. StreetSurfGS utilizes a planar-based octree representation and segmented training to reduce memory costs, accommodate unique camera characteristics, and ensure scalability. Additionally, to mitigate depth inaccuracies caused by object overlap, we propose a guided smoothing strategy within regularization to eliminate inaccurate boundary points and outliers. Furthermore, to address sparse views and multi-scale challenges, we use a dual-step matching strategy that leverages adjacent and long-term information. Extensive experiments validate the efficacy of StreetSurfGS in both novel view synthesis and surface reconstruction.<|reference_end|> | arxiv | @article{cui2024streetsurfgs:,
title={StreetSurfGS: Scalable Urban Street Surface Reconstruction with
Planar-based Gaussian Splatting},
author={Xiao Cui, Weicai Ye, Yifan Wang, Guofeng Zhang, Wengang Zhou and
Houqiang Li},
journal={arXiv preprint arXiv:2410.04354},
year={2024},
archivePrefix={arXiv},
eprint={2410.04354},
primaryClass={cs.CV}
} | cui2024streetsurfgs: |
arxiv-666171 | 2410.04360 | GenSim: A General Social Simulation Platform with Large Language Model based Agents | <|reference_start|>GenSim: A General Social Simulation Platform with Large Language Model based Agents: With the rapid advancement of large language models (LLMs), recent years have witnessed many promising studies on leveraging LLM-based agents to simulate human social behavior. While prior work has demonstrated significant potential across various domains, much of it has focused on specific scenarios involving a limited number of agents and has lacked the ability to adapt when errors occur during simulation. To overcome these limitations, we propose a novel LLM-agent-based simulation platform called \textit{GenSim}, which: (1) \textbf{Abstracts a set of general functions} to simplify the simulation of customized social scenarios; (2) \textbf{Supports one hundred thousand agents} to better simulate large-scale populations in real-world contexts; (3) \textbf{Incorporates error-correction mechanisms} to ensure more reliable and long-term simulations. To evaluate our platform, we assess both the efficiency of large-scale agent simulations and the effectiveness of the error-correction mechanisms. To our knowledge, GenSim represents an initial step toward a general, large-scale, and correctable social simulation platform based on LLM agents, promising to further advance the field of social science.<|reference_end|> | arxiv | @article{tang2024gensim:,
title={GenSim: A General Social Simulation Platform with Large Language Model
based Agents},
author={Jiakai Tang, Heyang Gao, Xuchen Pan, Lei Wang, Haoran Tan, Dawei Gao,
Yushuo Chen, Xu Chen, Yankai Lin, Yaliang Li, Bolin Ding, Jingren Zhou, Jun
Wang, Ji-Rong Wen},
journal={arXiv preprint arXiv:2410.04360},
year={2024},
archivePrefix={arXiv},
eprint={2410.04360},
primaryClass={cs.MA cs.AI}
} | tang2024gensim: |
arxiv-666172 | 2410.04363 | Multi Armed Bandit Algorithms Based Virtual Machine Allocation Policy for Security in Multi-Tenant Distributed Systems | <|reference_start|>Multi Armed Bandit Algorithms Based Virtual Machine Allocation Policy for Security in Multi-Tenant Distributed Systems: This work proposes a secure and dynamic VM allocation strategy for multi-tenant distributed systems using the Thompson sampling approach. The method proves more effective and secure compared to epsilon-greedy and upper confidence bound methods, showing lower regret levels. Initially, VM allocation was static, but the unpredictable nature of attacks necessitated a dynamic approach. Historical VM data was analyzed to understand attack responses, with rewards granted for unsuccessful attacks and reduced for successful ones, influencing regret levels. The paper introduces a Multi Arm Bandit-based VM allocation policy, utilizing a Weighted Average Ensemble Learning algorithm trained on known attacks and non-attacks. This ensemble approach outperforms traditional algorithms like Logistic Regression, SVM, K Nearest Neighbors, and XGBoost. For suspicious activity detection, a Stacked Anomaly Detector algorithm is proposed, trained on known non-attacks. This method surpasses existing techniques such as Isolation Forest and PCA-based approaches. Overall, this paper presents an advanced solution for VM allocation policies, enhancing cloud-based system security through a combination of dynamic allocation, ensemble learning, and anomaly detection techniques.<|reference_end|> | arxiv | @article{patil2024multi,
title={Multi Armed Bandit Algorithms Based Virtual Machine Allocation Policy
for Security in Multi-Tenant Distributed Systems},
author={Pravin Patil, Geetanjali Kale, Tanmay Karmarkar, Ruturaj Ghatage},
journal={arXiv preprint arXiv:2410.04363},
year={2024},
archivePrefix={arXiv},
eprint={2410.04363},
primaryClass={cs.DC}
} | patil2024multi |
arxiv-666173 | 2410.04364 | VideoGuide: Improving Video Diffusion Models without Training Through a Teacher's Guide | <|reference_start|>VideoGuide: Improving Video Diffusion Models without Training Through a Teacher's Guide: Text-to-image (T2I) diffusion models have revolutionized visual content creation, but extending these capabilities to text-to-video (T2V) generation remains a challenge, particularly in preserving temporal consistency. Existing methods that aim to improve consistency often cause trade-offs such as reduced imaging quality and impractical computational time. To address these issues we introduce VideoGuide, a novel framework that enhances the temporal consistency of pretrained T2V models without the need for additional training or fine-tuning. Instead, VideoGuide leverages any pretrained video diffusion model (VDM) or itself as a guide during the early stages of inference, improving temporal quality by interpolating the guiding model's denoised samples into the sampling model's denoising process. The proposed method brings about significant improvement in temporal consistency and image fidelity, providing a cost-effective and practical solution that synergizes the strengths of various video diffusion models. Furthermore, we demonstrate prior distillation, revealing that base models can achieve enhanced text coherence by utilizing the superior data prior of the guiding model through the proposed method. Project Page: https://dohunlee1.github.io/videoguide.github.io/<|reference_end|> | arxiv | @article{lee2024videoguide:,
title={VideoGuide: Improving Video Diffusion Models without Training Through a
Teacher's Guide},
author={Dohun Lee, Bryan S Kim, Geon Yeong Park, Jong Chul Ye},
journal={arXiv preprint arXiv:2410.04364},
year={2024},
archivePrefix={arXiv},
eprint={2410.04364},
primaryClass={cs.CV cs.AI cs.LG}
} | lee2024videoguide: |
arxiv-666174 | 2410.04365 | Generative Co-Learners: Enhancing Cognitive and Social Presence of Students in Asynchronous Learning with Generative AI | <|reference_start|>Generative Co-Learners: Enhancing Cognitive and Social Presence of Students in Asynchronous Learning with Generative AI: Cognitive presence and social presence are crucial for a comprehensive learning experience. Despite the flexibility of asynchronous learning environments to accommodate individual schedules, the inherent constraints of asynchronous environments make augmenting cognitive and social presence particularly challenging. Students often face challenges such as a lack of timely feedback and support, a lack of non-verbal cues in communication, and a sense of isolation. To address this challenge, this paper introduces Generative Co-Learners, a system designed to leverage generative AI-powered agents, simulating co-learners supporting multimodal interactions, to improve cognitive and social presence in asynchronous learning environments. We conducted a study involving 12 student participants who used our system to engage with online programming tutorials to assess the system's effectiveness. The results show that by implementing features to support textual and visual communication and simulate an interactive learning environment with generative agents, our system enhances the cognitive and social presence in the asynchronous learning environment. These results suggest the potential to use generative AI to support students learning at scale and transform asynchronous learning into a more inclusive, engaging, and efficacious educational approach.<|reference_end|> | arxiv | @article{wang2024generative,
title={Generative Co-Learners: Enhancing Cognitive and Social Presence of
Students in Asynchronous Learning with Generative AI},
author={Tianjia Wang, Tong Wu, Huayi Liu, Chris Brown, Yan Chen},
journal={arXiv preprint arXiv:2410.04365},
year={2024},
archivePrefix={arXiv},
eprint={2410.04365},
primaryClass={cs.HC}
} | wang2024generative |
arxiv-666175 | 2410.04366 | RespDiff: An End-to-End Multi-scale RNN Diffusion Model for Respiratory Waveform Estimation from PPG Signals | <|reference_start|>RespDiff: An End-to-End Multi-scale RNN Diffusion Model for Respiratory Waveform Estimation from PPG Signals: Respiratory rate (RR) is a critical health indicator often monitored under inconvenient scenarios, limiting its practicality for continuous monitoring. Photoplethysmography (PPG) sensors, increasingly integrated into wearable devices, offer a chance to continuously estimate RR in a portable manner. In this paper, we propose RespDiff, an end-to-end multi-scale RNN diffusion model for respiratory waveform estimation from PPG signals. RespDiff does not require hand-crafted features or the exclusion of low-quality signal segments, making it suitable for real-world scenarios. The model employs multi-scale encoders, to extract features at different resolutions, and a bidirectional RNN to process PPG signals and extract respiratory waveform. Additionally, a spectral loss term is introduced to optimize the model further. Experiments conducted on the BIDMC dataset demonstrate that RespDiff outperforms notable previous works, achieving a mean absolute error (MAE) of 1.18 bpm for RR estimation while others range from 1.66 to 2.15 bpm, showing its potential for robust and accurate respiratory monitoring in real-world applications.<|reference_end|> | arxiv | @article{miao2024respdiff:,
title={RespDiff: An End-to-End Multi-scale RNN Diffusion Model for Respiratory
Waveform Estimation from PPG Signals},
author={Yuyang Miao, Zehua Chen, Chang Li, Danilo Mandic},
journal={arXiv preprint arXiv:2410.04366},
year={2024},
archivePrefix={arXiv},
eprint={2410.04366},
primaryClass={eess.SP cs.AI cs.HC}
} | miao2024respdiff: |
arxiv-666176 | 2410.04367 | IMAGine: An In-Memory Accelerated GEMV Engine Overlay | <|reference_start|>IMAGine: An In-Memory Accelerated GEMV Engine Overlay: Processor-in-Memory (PIM) overlays and new redesigned reconfigurable tile fabrics have been proposed to eliminate the von Neumann bottleneck and enable processing performance to scale with BRAM capacity. The performance of these FPGA-based PIM architectures has been limited due to a reduction of the BRAMs maximum clock frequencies and less than ideal scaling of processing elements with increased BRAM capacity. This paper presents IMAGine, an In-Memory Accelerated GEMV engine, a PIM-array accelerator that clocks at the maximum frequency of the BRAM and scales to 100% of the available BRAMs. Comparative analyses are presented showing execution speeds over existing PIM-based GEMV engines on FPGAs and achieving a 2.65x - 3.2x faster clock. An AMD Alveo U55 implementation is presented that achieves a system clock speed of 737 MHz, providing 64K bit-serial multiply-accumulate (MAC) units for GEMV operation. This establishes IMAGine as the fastest PIM-based GEMV overlay, outperforming even the custom PIM-based FPGA accelerators reported to date. Additionally, it surpasses TPU v1-v2 and Alibaba Hanguang 800 in clock speed while offering an equal or greater number of MAC units.<|reference_end|> | arxiv | @article{kabir2024imagine:,
title={IMAGine: An In-Memory Accelerated GEMV Engine Overlay},
author={MD Arafat Kabir, Tendayi Kamucheka, Nathaniel Fredricks, Joel Mandebi,
Jason Bakos, Miaoqing Huang, David Andrews},
journal={arXiv preprint arXiv:2410.04367},
year={2024},
archivePrefix={arXiv},
eprint={2410.04367},
primaryClass={cs.AR}
} | kabir2024imagine: |
arxiv-666177 | 2410.04368 | Algorithmic Capabilities of Random Transformers | <|reference_start|>Algorithmic Capabilities of Random Transformers: Trained transformer models have been found to implement interpretable procedures for tasks like arithmetic and associative recall, but little is understood about how the circuits that implement these procedures originate during training. To what extent do they depend on the supervisory signal provided to models, and to what extent are they attributable to behavior already present in models at the beginning of training? To investigate these questions, we study what functions can be learned by randomly initialized transformers in which only the embedding layers are optimized, so that the only input--output mappings learnable from data are those already implemented (up to a choice of encoding scheme) by the randomly initialized model. We find that these random transformers can perform a wide range of meaningful algorithmic tasks, including modular arithmetic, in-weights and in-context associative recall, decimal addition, parenthesis balancing, and even some aspects of natural language text generation. Our results indicate that some algorithmic capabilities are present in transformers (and accessible via appropriately structured inputs) even before these models are trained. Code is available at https://github.com/fjzzq2002/random_transformers.<|reference_end|> | arxiv | @article{zhong2024algorithmic,
title={Algorithmic Capabilities of Random Transformers},
author={Ziqian Zhong, Jacob Andreas},
journal={arXiv preprint arXiv:2410.04368},
year={2024},
archivePrefix={arXiv},
eprint={2410.04368},
primaryClass={cs.LG cs.AI cs.CL}
} | zhong2024algorithmic |
arxiv-666178 | 2410.04370 | DABI: Evaluation of Data Augmentation Methods Using Downsampling in Bilateral Control-Based Imitation Learning with Images | <|reference_start|>DABI: Evaluation of Data Augmentation Methods Using Downsampling in Bilateral Control-Based Imitation Learning with Images: Autonomous robot manipulation is a complex and continuously evolving robotics field. This paper focuses on data augmentation methods in imitation learning. Imitation learning consists of three stages: data collection from experts, model learning, and execution. However, collecting expert data requires manual effort and is time-consuming. Additionally, as sensors have different data acquisition intervals, preprocessing such as downsampling to match the lowest frequency is necessary. Downsampling enables data augmentation and also contributes to the stabilization of robot operations. In light of this background, this paper proposes the Data Augmentation Method for Bilateral Control-Based Imitation Learning with Images, called "DABI". DABI collects robot joint angles, velocities, and torques at 1000 Hz, and uses images from gripper and environmental cameras captured at 100 Hz as the basis for data augmentation. This enables a tenfold increase in data. In this paper, we collected just 5 expert demonstration datasets. We trained the bilateral control Bi-ACT model with the unaltered dataset and two augmentation methods for comparative experiments and conducted real-world experiments. The results confirmed a significant improvement in success rates, thereby proving the effectiveness of DABI. For additional material, please check https://mertcookimg.github.io/dabi<|reference_end|> | arxiv | @article{kobayashi2024dabi:,
title={DABI: Evaluation of Data Augmentation Methods Using Downsampling in
Bilateral Control-Based Imitation Learning with Images},
author={Masato Kobayashi, Thanpimon Buamanee, Yuki Uranishi},
journal={arXiv preprint arXiv:2410.04370},
year={2024},
archivePrefix={arXiv},
eprint={2410.04370},
primaryClass={cs.RO}
} | kobayashi2024dabi: |
arxiv-666179 | 2410.04371 | A physics-based sensor simulation environment for lunar ground operations | <|reference_start|>A physics-based sensor simulation environment for lunar ground operations: This contribution reports on a software framework that uses physically-based rendering to simulate camera operation in lunar conditions. The focus is on generating synthetic images qualitatively similar to those produced by an actual camera operating on a vehicle traversing and/or actively interacting with lunar terrain, e.g., for construction operations. The highlights of this simulator are its ability to capture (i) light transport in lunar conditions and (ii) artifacts related to the vehicle-terrain interaction, which might include dust formation and transport. The simulation infrastructure is built within an in-house developed physics engine called Chrono, which simulates the dynamics of the deformable terrain-vehicle interaction, as well as fallout of this interaction. The Chrono::Sensor camera model draws on ray tracing and Hapke Photometric Functions. We analyze the performance of the simulator using two virtual experiments featuring digital twins of NASA's VIPER rover navigating a lunar environment, and of NASA's RASSOR excavator engaged in a digging operation. The sensor simulation solution presented can be used for the design and testing of perception algorithms, or as a component of in-silico experiments that pertain to large lunar operations, e.g., traversability, construction tasks.<|reference_end|> | arxiv | @article{batagoda2024a,
title={A physics-based sensor simulation environment for lunar ground
operations},
author={Nevindu M. Batagoda, Bo-Hsun Chen, Harry Zhang, Radu Serban, Dan
Negrut},
journal={arXiv preprint arXiv:2410.04371},
year={2024},
archivePrefix={arXiv},
eprint={2410.04371},
primaryClass={cs.RO}
} | batagoda2024a |
arxiv-666180 | 2410.04372 | DiffusionFake: Enhancing Generalization in Deepfake Detection via Guided Stable Diffusion | <|reference_start|>DiffusionFake: Enhancing Generalization in Deepfake Detection via Guided Stable Diffusion: The rapid progress of Deepfake technology has made face swapping highly realistic, raising concerns about the malicious use of fabricated facial content. Existing methods often struggle to generalize to unseen domains due to the diverse nature of facial manipulations. In this paper, we revisit the generation process and identify a universal principle: Deepfake images inherently contain information from both source and target identities, while genuine faces maintain a consistent identity. Building upon this insight, we introduce DiffusionFake, a novel plug-and-play framework that reverses the generative process of face forgeries to enhance the generalization of detection models. DiffusionFake achieves this by injecting the features extracted by the detection model into a frozen pre-trained Stable Diffusion model, compelling it to reconstruct the corresponding target and source images. This guided reconstruction process constrains the detection network to capture the source- and target-related features to facilitate the reconstruction, thereby learning rich and disentangled representations that are more resilient to unseen forgeries. Extensive experiments demonstrate that DiffusionFake significantly improves cross-domain generalization of various detector architectures without introducing additional parameters during inference. Our code is available at https://github.com/skJack/DiffusionFake.git.<|reference_end|> | arxiv | @article{sun2024diffusionfake:,
title={DiffusionFake: Enhancing Generalization in Deepfake Detection via Guided
Stable Diffusion},
author={Ke Sun, Shen Chen, Taiping Yao, Hong Liu, Xiaoshuai Sun, Shouhong
Ding, Rongrong Ji},
journal={arXiv preprint arXiv:2410.04372},
year={2024},
archivePrefix={arXiv},
eprint={2410.04372},
primaryClass={cs.CV}
} | sun2024diffusionfake: |
arxiv-666181 | 2410.04376 | Putting Gale & Shapley to Work: Guaranteeing Stability Through Learning | <|reference_start|>Putting Gale & Shapley to Work: Guaranteeing Stability Through Learning: Two-sided matching markets describe a large class of problems wherein participants from one side of the market must be matched to those from the other side according to their preferences. In many real-world applications (e.g. content matching or online labor markets), the knowledge about preferences may not be readily available and must be learned, i.e., one side of the market (aka agents) may not know their preferences over the other side (aka arms). Recent research on online settings has focused primarily on welfare optimization aspects (i.e. minimizing the overall regret) while paying little attention to the game-theoretic properties such as the stability of the final matching. In this paper, we exploit the structure of stable solutions to devise algorithms that improve the likelihood of finding stable solutions. We initiate the study of the sample complexity of finding a stable matching, and provide theoretical bounds on the number of samples needed to reach a stable matching with high probability. Finally, our empirical results demonstrate intriguing tradeoffs between stability and optimality of the proposed algorithms, further complementing our theoretical findings.<|reference_end|> | arxiv | @article{hosseini2024putting,
title={Putting Gale & Shapley to Work: Guaranteeing Stability Through Learning},
author={Hadi Hosseini, Sanjukta Roy, Duohan Zhang},
journal={arXiv preprint arXiv:2410.04376},
year={2024},
archivePrefix={arXiv},
eprint={2410.04376},
primaryClass={cs.GT cs.LG}
} | hosseini2024putting |
arxiv-666182 | 2410.04377 | Suspiciousness of Adversarial Texts to Human | <|reference_start|>Suspiciousness of Adversarial Texts to Human: Adversarial examples pose a significant challenge to deep neural networks (DNNs) across both image and text domains, with the intent to degrade model performance through meticulously altered inputs. Adversarial texts, however, are distinct from adversarial images due to their requirement for semantic similarity and the discrete nature of the textual contents. This study delves into the concept of human suspiciousness, a quality distinct from the traditional focus on imperceptibility found in image-based adversarial examples. Unlike images, where adversarial changes are meant to be indistinguishable to the human eye, textual adversarial content must often remain undetected or non-suspicious to human readers, even when the text's purpose is to deceive NLP systems or bypass filters. In this research, we expand the study of human suspiciousness by analyzing how individuals perceive adversarial texts. We gather and publish a novel dataset of Likert-scale human evaluations on the suspiciousness of adversarial sentences, crafted by four widely used adversarial attack methods, and assess their correlation with the human ability to detect machine-generated alterations. Additionally, we develop a regression-based model to quantify suspiciousness and establish a baseline for future research in reducing the suspiciousness in adversarial text generation. We also demonstrate how the regressor-generated suspiciousness scores can be incorporated into adversarial generation methods to produce texts that are less likely to be perceived as computer-generated. We make our dataset of human suspiciousness annotations and our code available.<|reference_end|> | arxiv | @article{tonni2024suspiciousness,
title={Suspiciousness of Adversarial Texts to Human},
author={Shakila Mahjabin Tonni, Pedro Faustini, Mark Dras},
journal={arXiv preprint arXiv:2410.04377},
year={2024},
archivePrefix={arXiv},
eprint={2410.04377},
primaryClass={cs.LG cs.CL cs.CR}
} | tonni2024suspiciousness |
arxiv-666183 | 2410.04380 | HALL-E: Hierarchical Neural Codec Language Model for Minute-Long Zero-Shot Text-to-Speech Synthesis | <|reference_start|>HALL-E: Hierarchical Neural Codec Language Model for Minute-Long Zero-Shot Text-to-Speech Synthesis: Recently, Text-to-speech (TTS) models based on large language models (LLMs) that translate natural language text into sequences of discrete audio tokens have gained great research attention, with advances in neural audio codec (NAC) models using residual vector quantization (RVQ). However, long-form speech synthesis remains a significant challenge due to the high frame rate, which increases the length of audio tokens and makes it difficult for autoregressive language models to generate audio tokens for even a minute of speech. To address this challenge, this paper introduces two novel post-training approaches: 1) Multi-Resolution Requantization (MReQ) and 2) HALL-E. MReQ is a framework to reduce the frame rate of pre-trained NAC models. Specifically, it incorporates a multi-resolution residual vector quantization (MRVQ) module that hierarchically reorganizes discrete audio tokens through teacher-student distillation. HALL-E is an LLM-based TTS model designed to predict hierarchical tokens of MReQ. Specifically, it incorporates the technique of using MRVQ sub-modules and continues training from a pre-trained LLM-based TTS model. Furthermore, to promote TTS research, we create MinutesSpeech, a new benchmark dataset consisting of 40k hours of filtered speech data for training and evaluating speech synthesis ranging from 3s up to 180s. In experiments, we demonstrated the effectiveness of our approaches by applying our post-training framework to VALL-E. We reduced the frame rate to as low as 8 Hz, enabling stable minute-long speech synthesis in a single inference step. Audio samples, dataset, code, and pre-trained models are available at https://yutonishimura-v2.github.io/HALL-E_DEMO/.<|reference_end|> | arxiv | @article{nishimura2024hall-e:,
title={HALL-E: Hierarchical Neural Codec Language Model for Minute-Long
Zero-Shot Text-to-Speech Synthesis},
author={Yuto Nishimura, Takumi Hirose, Masanari Ohi, Hideki Nakayama, and
Nakamasa Inoue},
journal={arXiv preprint arXiv:2410.04380},
year={2024},
archivePrefix={arXiv},
eprint={2410.04380},
primaryClass={eess.AS cs.SD}
} | nishimura2024hall-e: |
arxiv-666184 | 2410.04383 | BrainCodec: Neural fMRI codec for the decoding of cognitive brain states | <|reference_start|>BrainCodec: Neural fMRI codec for the decoding of cognitive brain states: Recently, leveraging big data in deep learning has led to significant performance improvements, as confirmed in applications like mental state decoding using fMRI data. However, fMRI datasets remain relatively small in scale, and the inherent issue of low signal-to-noise ratios (SNR) in fMRI data further exacerbates these challenges. To address this, we apply compression techniques as a preprocessing step for fMRI data. We propose BrainCodec, a novel fMRI codec inspired by the neural audio codec. We evaluated BrainCodec's compression capability in mental state decoding, demonstrating further improvements over previous methods. Furthermore, we analyzed the latent representations obtained through BrainCodec, elucidating the similarities and differences between task and resting state fMRI, highlighting the interpretability of BrainCodec. Additionally, we demonstrated that fMRI reconstructions using BrainCodec can enhance the visibility of brain activity by achieving higher SNR, suggesting its potential as a novel denoising method. Our study shows that BrainCodec not only enhances performance over previous methods but also offers new analytical possibilities for neuroscience. Our codes, dataset, and model weights are available at https://github.com/amano-k-lab/BrainCodec.<|reference_end|> | arxiv | @article{nishimura2024braincodec:,
title={BrainCodec: Neural fMRI codec for the decoding of cognitive brain states},
author={Yuto Nishimura, Masataka Sawayama, Ayumu Yamashita, Hideki Nakayama,
and Kaoru Amano},
journal={arXiv preprint arXiv:2410.04383},
year={2024},
archivePrefix={arXiv},
eprint={2410.04383},
primaryClass={q-bio.NC cs.CL}
} | nishimura2024braincodec: |
arxiv-666185 | 2410.04385 | HaTT: Hadamard avoiding TT recompression | <|reference_start|>HaTT: Hadamard avoiding TT recompression: The Hadamard product of tensor train (TT) tensors is one of the most fundamental nonlinear operations in scientific computing and data analysis. Due to its tendency to significantly increase TT ranks, the Hadamard product presents a major computational challenge in TT tensor-based algorithms. Therefore, it is essential to develop recompression algorithms that mitigate the effects of this rank increase. Existing recompression algorithms require an explicit representation of the Hadamard product, resulting in high computational and storage complexity. In this work, we propose the Hadamard avoiding TT recompression (HaTT) algorithm. Leveraging the structure of the Hadamard product in TT tensors and its Hadamard product-free property, the overall complexity of the HaTT algorithm is significantly lower than that of existing TT recompression algorithms. This is validated through complexity analysis and several numerical experiments.<|reference_end|> | arxiv | @article{sun2024hatt:,
title={HaTT: Hadamard avoiding TT recompression},
author={Zhonghao Sun, Jizu Huang, Chuanfu Xiao, Chao Yang},
journal={arXiv preprint arXiv:2410.04385},
year={2024},
archivePrefix={arXiv},
eprint={2410.04385},
primaryClass={math.NA cs.NA}
} | sun2024hatt: |
arxiv-666186 | 2410.04386 | Data Distribution Valuation | <|reference_start|>Data Distribution Valuation: Data valuation is a class of techniques for quantitatively assessing the value of data for applications like pricing in data marketplaces. Existing data valuation methods define a value for a discrete dataset. However, in many use cases, users are interested in not only the value of the dataset, but that of the distribution from which the dataset was sampled. For example, consider a buyer trying to evaluate whether to purchase data from different vendors. The buyer may observe (and compare) only a small preview sample from each vendor, to decide which vendor's data distribution is most useful to the buyer and purchase accordingly. The core question is: how should we compare the values of data distributions from their samples? Under a Huber characterization of the data heterogeneity across vendors, we propose a maximum mean discrepancy (MMD)-based valuation method which enables theoretically principled and actionable policies for comparing data distributions from samples. We empirically demonstrate that our method is sample-efficient and effective in identifying valuable data distributions against several existing baselines, on multiple real-world datasets (e.g., network intrusion detection, credit card fraud detection) and downstream applications (classification, regression).<|reference_end|> | arxiv | @article{xu2024data,
title={Data Distribution Valuation},
author={Xinyi Xu, Shuaiqi Wang, Chuan-Sheng Foo, Bryan Kian Hsiang Low and
Giulia Fanti},
journal={arXiv preprint arXiv:2410.04386},
year={2024},
archivePrefix={arXiv},
eprint={2410.04386},
primaryClass={cs.LG}
} | xu2024data |
arxiv-666187 | 2410.04387 | WISE: Unraveling Business Process Metrics with Domain Knowledge | <|reference_start|>WISE: Unraveling Business Process Metrics with Domain Knowledge: Anomalies in complex industrial processes are often obscured by high variability and complexity of event data, which hinders their identification and interpretation using process mining. To address this problem, we introduce WISE (Weighted Insights for Evaluating Efficiency), a novel method for analyzing business process metrics through the integration of domain knowledge, process mining, and machine learning. The methodology involves defining business goals and establishing Process Norms with weighted constraints at the activity level, incorporating input from domain experts and process analysts. Individual process instances are scored based on these constraints, and the scores are normalized to identify features impacting process goals. Evaluation using the BPIC 2019 dataset and real industrial contexts demonstrates that WISE enhances automation in business process analysis and effectively detects deviations from desired process flows. While LLMs support the analysis, the inclusion of domain experts ensures the accuracy and relevance of the findings.<|reference_end|> | arxiv | @article{jessen2024wise:,
title={WISE: Unraveling Business Process Metrics with Domain Knowledge},
author={Urszula Jessen, Dirk Fahland},
journal={arXiv preprint arXiv:2410.04387},
year={2024},
archivePrefix={arXiv},
eprint={2410.04387},
primaryClass={cs.SE}
} | jessen2024wise: |
arxiv-666188 | 2410.04389 | Non-conflicting no-where zero $Z_2\times Z_2$ flows in cubic graphs | <|reference_start|>Non-conflicting no-where zero $Z_2\times Z_2$ flows in cubic graphs: Let $Z_2\times Z_2=\{0, \alpha, \beta, \alpha+\beta\}$. If $G$ is a bridgeless cubic graph, $F$ is a perfect matching of $G$ and $\overline{F}$ is the complementary 2-factor of $F$, then a no-where zero $Z_2\times Z_2$-flow $\theta$ of $G/\overline{F}$ is called non-conflicting with respect to $\overline{F}$, if $\overline{F}$ contains no edge $e=uv$, such that $u$ is incident to an edge with $\theta$-value $\alpha$ and $v$ is incident to an edge with $\theta$-value $\beta$. In this paper, we demonstrate the usefulness of non-conflicting flows by showing that if a cubic graph $G$ admits such a flow with respect to some perfect matching $F$, then $G$ admits a normal 6-edge-coloring. We use this observation in order to show that claw-free bridgeless cubic graphs and bridgeless cubic graphs possessing a 2-factor having at most two cycles admit a normal 6-edge-coloring. We demonstrate the usefulness of non-conflicting flows further by relating them to a recent conjecture of Thomassen about edge-disjoint perfect matchings in highly connected regular graphs. At the end of the paper, we construct infinitely many 2-edge-connected cubic graphs such that $G/\overline{F}$ does not admit a non-conflicting no-where zero $Z_2\times Z_2$-flow with respect to any perfect matching $F$.<|reference_end|> | arxiv | @article{mkrtchyan2024non-conflicting,
title={Non-conflicting no-where zero $Z_2\times Z_2$ flows in cubic graphs},
author={Vahan Mkrtchyan},
journal={arXiv preprint arXiv:2410.04389},
year={2024},
archivePrefix={arXiv},
eprint={2410.04389},
primaryClass={math.CO cs.DM}
} | mkrtchyan2024non-conflicting |
arxiv-666189 | 2410.04397 | Towards Understanding and Enhancing Security of Proof-of-Training for DNN Model Ownership Verification | <|reference_start|>Towards Understanding and Enhancing Security of Proof-of-Training for DNN Model Ownership Verification: The great economic values of deep neural networks (DNNs) urge AI enterprises to protect their intellectual property (IP) for these models. Recently, proof-of-training (PoT) has been proposed as a promising solution to DNN IP protection, through which AI enterprises can utilize the record of DNN training process as their ownership proof. To prevent attackers from forging ownership proof, a secure PoT scheme should be able to distinguish honest training records from those forged by attackers. Although existing PoT schemes provide various distinction criteria, these criteria are based on intuitions or observations. The effectiveness of these criteria lacks clear and comprehensive analysis, resulting in existing schemes initially deemed secure being swiftly compromised by simple ideas. In this paper, we make the first move to identify distinction criteria in the style of formal methods, so that their effectiveness can be explicitly demonstrated. Specifically, we conduct systematic modeling to cover a wide range of attacks and then theoretically analyze the distinctions between honest and forged training records. The analysis results not only induce a universal distinction criterion, but also provide detailed reasoning to demonstrate its effectiveness in defending against attacks covered by our model. Guided by the criterion, we propose a generic PoT construction that can be instantiated into concrete schemes. This construction sheds light on the realization that trajectory matching algorithms, previously employed in data distillation, possess significant advantages in PoT construction. Experimental results demonstrate that our scheme can resist attacks that have compromised existing PoT schemes, which corroborates its superiority in security.<|reference_end|> | arxiv | @article{chang2024towards,
title={Towards Understanding and Enhancing Security of Proof-of-Training for
DNN Model Ownership Verification},
author={Yijia Chang, Hanrui Jiang, Chao Lin, Xinyi Huang, Jian Weng},
journal={arXiv preprint arXiv:2410.04397},
year={2024},
archivePrefix={arXiv},
eprint={2410.04397},
primaryClass={cs.CR cs.AI}
} | chang2024towards |
arxiv-666190 | 2410.04402 | Deformable NeRF using Recursively Subdivided Tetrahedra | <|reference_start|>Deformable NeRF using Recursively Subdivided Tetrahedra: While neural radiance fields (NeRF) have shown promise in novel view synthesis, their implicit representation limits explicit control over object manipulation. Existing research has proposed the integration of explicit geometric proxies to enable deformation. However, these methods face two primary challenges: firstly, the time-consuming and computationally demanding tetrahedralization process; and secondly, handling complex or thin structures often leads to either excessive, storage-intensive tetrahedral meshes or poor-quality ones that impair deformation capabilities. To address these challenges, we propose DeformRF, a method that seamlessly integrates the manipulability of tetrahedral meshes with the high-quality rendering capabilities of feature grid representations. To avoid ill-shaped tetrahedra and tetrahedralization for each object, we propose a two-stage training strategy. Starting with an almost-regular tetrahedral grid, our model initially retains key tetrahedra surrounding the object and subsequently refines object details using finer-granularity mesh in the second stage. We also present the concept of recursively subdivided tetrahedra to create higher-resolution meshes implicitly. This enables multi-resolution encoding while only necessitating the storage of the coarse tetrahedral mesh generated in the first training stage. We conduct a comprehensive evaluation of our DeformRF on both synthetic and real-captured datasets. Both quantitative and qualitative results demonstrate the effectiveness of our method for novel view synthesis and deformation tasks. Project page: https://ustc3dv.github.io/DeformRF/<|reference_end|> | arxiv | @article{qiu2024deformable,
title={Deformable NeRF using Recursively Subdivided Tetrahedra},
author={Zherui Qiu, Chenqu Ren, Kaiwen Song, Xiaoyi Zeng, Leyuan Yang, Juyong
Zhang},
journal={arXiv preprint arXiv:2410.04402},
year={2024},
archivePrefix={arXiv},
eprint={2410.04402},
primaryClass={cs.CV cs.GR}
} | qiu2024deformable |
arxiv-666191 | 2410.04404 | CiMaTe: Citation Count Prediction Effectively Leveraging the Main Text | <|reference_start|>CiMaTe: Citation Count Prediction Effectively Leveraging the Main Text: Prediction of the future citation counts of papers is increasingly important to find interesting papers among an ever-growing number of papers. Although a paper's main text is an important factor for citation count prediction, it is difficult to handle in machine learning models because the main text is typically very long; thus previous studies have not fully explored how to leverage it. In this paper, we propose a BERT-based citation count prediction model, called CiMaTe, that leverages the main text by explicitly capturing a paper's sectional structure. Through experiments with papers from computational linguistics and biology domains, we demonstrate CiMaTe's effectiveness, outperforming the previous methods in Spearman's rank correlation coefficient by 5.1 points in the computational linguistics domain and 1.8 points in the biology domain.<|reference_end|> | arxiv | @article{hirako2024cimate:,
title={CiMaTe: Citation Count Prediction Effectively Leveraging the Main Text},
author={Jun Hirako, Ryohei Sasano, Koichi Takeda},
journal={arXiv preprint arXiv:2410.04404},
year={2024},
archivePrefix={arXiv},
eprint={2410.04404},
primaryClass={cs.CL}
} | hirako2024cimate: |
arxiv-666192 | 2410.04407 | Lens: Rethinking Multilingual Enhancement for Large Language Models | <|reference_start|>Lens: Rethinking Multilingual Enhancement for Large Language Models: Despite the growing global demand for large language models (LLMs) that serve users from diverse linguistic backgrounds, most cutting-edge LLMs remain predominantly English-centric. This creates a performance gap across languages, restricting access to advanced AI services for non-English speakers. Current methods to enhance multilingual capabilities largely rely on data-driven post-training techniques, such as multilingual instruction tuning or continual pre-training. However, these approaches encounter significant challenges, including the scarcity of high-quality multilingual datasets and the limited enhancement of multilingual capabilities. They often suffer from off-target issues and catastrophic forgetting of central language abilities. To this end, we propose Lens, a novel approach to enhance multilingual capabilities of LLMs by leveraging their internal language representation spaces. Specifically, Lens operates by manipulating the hidden representations within the language-agnostic and language-specific subspaces from top layers of LLMs. Using the central language as a pivot, the target language is drawn closer to it within the language-agnostic subspace, allowing it to inherit well-established semantic representations. Meanwhile, in the language-specific subspace, the representations of the target and central languages are pushed apart, enabling the target language to express itself distinctly. Extensive experiments on one English-centric and two multilingual LLMs demonstrate that Lens effectively improves multilingual performance without sacrificing the original central language capabilities of the backbone model, achieving superior results with far fewer computational resources compared to existing post-training approaches.<|reference_end|> | arxiv | @article{zhao2024lens:,
title={Lens: Rethinking Multilingual Enhancement for Large Language Models},
author={Weixiang Zhao, Yulin Hu, Jiahe Guo, Xingyu Sui, Tongtong Wu, Yang
Deng, Yanyan Zhao, Bing Qin, Wanxiang Che, Ting Liu},
journal={arXiv preprint arXiv:2410.04407},
year={2024},
archivePrefix={arXiv},
eprint={2410.04407},
primaryClass={cs.CL}
} | zhao2024lens: |
arxiv-666193 | 2410.04408 | Anti-Malicious ISAC Using Proactive Monitoring | <|reference_start|>Anti-Malicious ISAC Using Proactive Monitoring: In this paper, we investigate proactive monitoring to mitigate malicious activities in integrated sensing and communication (ISAC) systems. Our focus is on a scenario where a cell-free massive multiple-input multiple-output (CF-mMIMO) architecture is exploited by malicious actors. Malicious actors use multiple access points (APs) to illegally sense a legitimate target while communicating with users (UEs), one of which is suspected of illegal activities. In our approach, a proactive monitor overhears the suspicious UE and simultaneously sends a jamming signal to degrade the communication links between the APs and the suspicious UE. At the same time, the monitor sends a precoded jamming signal toward the legitimate target to hinder the malicious sensing attempts. We derive closed-form expressions for the sensing signal-to-interference-plus-noise ratio (SINR), as well as the received SINR at the UEs and the overheard SINR at the monitor. The simulation results show that our anti-malicious CF-mMIMO ISAC strategy can significantly reduce the malicious sensing performance while offering excellent monitoring performance.<|reference_end|> | arxiv | @article{wang2024anti-malicious,
title={Anti-Malicious ISAC Using Proactive Monitoring},
author={Zonghan Wang, Zahra Mobini, Hien Quoc Ngo, and Michail Matthaiou},
journal={arXiv preprint arXiv:2410.04408},
year={2024},
archivePrefix={arXiv},
eprint={2410.04408},
primaryClass={cs.IT eess.SP math.IT}
} | wang2024anti-malicious |
arxiv-666194 | 2410.04409 | Quantum Approximate Optimization Algorithms for Maximum Cut on Low-Girth Graphs | <|reference_start|>Quantum Approximate Optimization Algorithms for Maximum Cut on Low-Girth Graphs: Maximum cut (MaxCut) on graphs is a classic NP-hard problem. In quantum computing, Farhi, Gutmann, and Goldstone proposed the Quantum Approximate Optimization Algorithm (QAOA) for solving the MaxCut problem. Its guarantee on cut fraction (the fraction of edges in the output cut over all edges) was mainly studied for high-girth graphs, i.e., graphs with only long cycles. On the other hand, low-girth graphs are ubiquitous in theoretical computer science, with expander graphs as outstanding examples that have wide applications in theory and beyond. In this paper, we apply QAOA to MaxCut on a set of expander graphs proposed by Mohanty and O'Donnell known as additive product graphs. Additionally, we apply multi-angle QAOA (ma-QAOA) to better utilize the graph structure of additive product graphs in ansatz design. In theory, we derive an iterative formula to calculate the expected cut fraction of such graphs. On the other hand, we conduct numerical experiments to compare the best-known classical local algorithms with constant-depth QAOA. Our results demonstrate that QAOA outperforms the best-known classical algorithms by 0.3% to 5.2% on several additive product graphs, while ma-QAOA further enhances this advantage by an additional 0.6% to 2.5%. In particular, we observe cases where ma-QAOA exhibits superiority over the best-known classical algorithms while QAOA does not. Furthermore, we extend our experiments to planar graphs such as tiling grid graphs, where QAOA also demonstrates an advantage.<|reference_end|> | arxiv | @article{li2024quantum,
title={Quantum Approximate Optimization Algorithms for Maximum Cut on
Low-Girth Graphs},
author={Tongyang Li, Yuexin Su, Ziyi Yang, Shengyu Zhang},
journal={arXiv preprint arXiv:2410.04409},
year={2024},
archivePrefix={arXiv},
eprint={2410.04409},
primaryClass={quant-ph cs.DS math.OC}
} | li2024quantum |
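
The cut-fraction metric defined in this abstract is easy to make concrete. Below is a minimal brute-force sketch in Python; the helper names and the 4-cycle example are illustrative additions, and QAOA itself is not reproduced here.

```python
import itertools

def cut_fraction(edges, side):
    """Fraction of edges crossing the partition `side` (dict: vertex -> 0/1)."""
    cut = sum(1 for u, v in edges if side[u] != side[v])
    return cut / len(edges)

# 4-cycle: the optimal cut contains all 4 edges, so the best fraction is 1.0.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
best = max(
    (dict(zip(range(4), bits)) for bits in itertools.product([0, 1], repeat=4)),
    key=lambda s: cut_fraction(edges, s),
)
print(cut_fraction(edges, best))  # 1.0
```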
arxiv-666195 | 2410.04410 | Blocks Architecture (BloArk): Efficient, Cost-Effective, and Incremental Dataset Architecture for Wikipedia Revision History | <|reference_start|>Blocks Architecture (BloArk): Efficient, Cost-Effective, and Incremental Dataset Architecture for Wikipedia Revision History: Wikipedia (Wiki) is one of the most widely used and publicly available resources for natural language processing (NLP) applications. Wikipedia Revision History (WikiRevHist) shows the order in which edits were made to any Wiki page since its first modification. While the most up-to-date Wiki has been widely used as a training source, WikiRevHist can also be a valuable resource for NLP applications. However, there are few tools for processing WikiRevHist that do not demand substantial computing resources, additional customization, and extra time spent adapting others' work. Therefore, we report Blocks Architecture (BloArk), an efficiency-focused data processing architecture that reduces running time, computing resource requirements, and repeated work in processing the WikiRevHist dataset. BloArk consists of three parts in its infrastructure: blocks, segments, and warehouses. On top of that, we build the core data processing pipeline: builder and modifier. The BloArk builder transforms the original WikiRevHist dataset from XML syntax into JSON Lines (JSONL) format to improve concurrency and storage efficiency. The BloArk modifier applies incremental modifications to previously built warehouses, improving the utilization of existing databases and reducing the cost of reusing others' work. In the end, BloArk can scale up easily, both in processing Wikipedia Revision History and in incrementally modifying existing datasets for downstream NLP use cases. The source code, documentation, and example usages are publicly available online and open-sourced under the GPL-2.0 license.<|reference_end|> | arxiv | @article{li2024blocks,
title={Blocks Architecture (BloArk): Efficient, Cost-Effective, and Incremental
Dataset Architecture for Wikipedia Revision History},
author={Lingxi Li, Zonghai Yao, Sunjae Kwon, Hong Yu},
journal={arXiv preprint arXiv:2410.04410},
year={2024},
archivePrefix={arXiv},
eprint={2410.04410},
primaryClass={cs.CL}
} | li2024blocks |
arxiv-666196 | 2410.04412 | Log-Concave Sequences in Coding Theory | <|reference_start|>Log-Concave Sequences in Coding Theory: We introduce the notion of logarithmically concave (or log-concave) sequences in Coding Theory. A sequence $a_0, a_1, \dots, a_n$ of real numbers is called log-concave if $a_i^2 \ge a_{i-1}a_{i+1}$ for all $1 \le i \le n-1$. A natural sequence of positive numbers in coding theory is the weight distribution of a linear code, consisting of the nonzero values among the $A_i$'s, where $A_i$ denotes the number of codewords of weight $i$. We call a linear code log-concave if its nonzero weight distribution is log-concave. Our main contribution is to show that the binary general Hamming codes of length $2^r - 1$ ($r = 3$ or $r \ge 5$), the binary extended Hamming codes of length $2^r$ ($r \ge 3$), and the second-order Reed-Muller codes $R(2, m)$ ($m \ge 2$) are all log-concave, while the homogeneous and projective second-order Reed-Muller codes are either log-concave or 1-gap log-concave. Furthermore, we show that any MDS $[n, k]$ code over $\mathbb F_q$ satisfying $3 \leqslant k \leqslant n/2 + 3$ is log-concave if $q \geqslant q_0(n, k)$, where $q_0(n, k)$ is the larger root of a quadratic polynomial. Hence, we expect that the concept of log-concavity in coding theory will stimulate many interesting problems.<|reference_end|> | arxiv | @article{shi2024log-concave,
title={Log-Concave Sequences in Coding Theory},
author={Minjia Shi, Xuan Wang, Junmin An, Jon-Lark Kim},
journal={arXiv preprint arXiv:2410.04412},
year={2024},
archivePrefix={arXiv},
eprint={2410.04412},
primaryClass={cs.IT math.CO math.IT}
} | shi2024log-concave |
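
The log-concavity condition stated in this abstract can be checked mechanically. Below is a minimal sketch; the function name is an illustrative assumption, and the worked example uses the well-known nonzero weight distribution (1, 7, 7, 1) of the binary [7, 4] Hamming code, consistent with the r = 3 case above.

```python
def is_log_concave(seq):
    """Return True if a_i^2 >= a_{i-1} * a_{i+1} holds for all interior terms."""
    return all(
        seq[i] ** 2 >= seq[i - 1] * seq[i + 1] for i in range(1, len(seq) - 1)
    )

# Binary [7, 4] Hamming code: weight enumerator 1 + 7x^3 + 7x^4 + x^7,
# so the nonzero weight distribution is (1, 7, 7, 1).
print(is_log_concave([1, 7, 7, 1]))  # True: 49 >= 7 at both interior positions
```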
arxiv-666197 | 2410.04414 | Spatial Multiplexing Oriented Channel Reconfiguration in Multi-IRS Aided MIMO Systems | <|reference_start|>Spatial Multiplexing Oriented Channel Reconfiguration in Multi-IRS Aided MIMO Systems: Spatial multiplexing plays a significant role in improving the capacity of multiple-input multiple-output (MIMO) communication systems. To improve the spectral efficiency (SE) of a point-to-point MIMO system, we exploit the channel reconfiguration capabilities provided by multiple intelligent reflecting surfaces (IRSs) to enhance spatial multiplexing. Unlike most existing works, we address both IRS placement and element allocation. To this end, we first introduce an orthogonal placement strategy to mitigate channel correlation, thereby enabling interference-free multi-stream transmission. Subsequently, we propose a successive convex approximation (SCA)-based approach to jointly optimize the allocation of IRS elements and transmit power. Our theoretical analysis reveals that the equal IRS element/power allocation scheme becomes asymptotically optimal as the number of IRS elements and the transmit power tend to infinity. Numerical results demonstrate that when the total number of IRS elements or the power exceeds a certain threshold, a multi-IRS assisted system outperforms a single-IRS configuration.<|reference_end|> | arxiv | @article{chen2024spatial,
title={Spatial Multiplexing Oriented Channel Reconfiguration in Multi-IRS Aided
MIMO Systems},
author={Yuxuan Chen, Qingqing Wu, Guangji Chen, Wen Chen},
journal={arXiv preprint arXiv:2410.04414},
year={2024},
archivePrefix={arXiv},
eprint={2410.04414},
primaryClass={cs.IT eess.SP math.IT}
} | chen2024spatial |
arxiv-666198 | 2410.04415 | Optimizing AI Reasoning: A Hamiltonian Dynamics Approach to Multi-Hop Question Answering | <|reference_start|>Optimizing AI Reasoning: A Hamiltonian Dynamics Approach to Multi-Hop Question Answering: This paper introduces an innovative approach to analyzing and improving multi-hop reasoning in AI systems by drawing inspiration from Hamiltonian mechanics. We propose a novel framework that maps reasoning chains in embedding spaces to Hamiltonian systems, allowing us to leverage powerful analytical tools from classical physics. Our method defines a Hamiltonian function that balances the progression of reasoning (kinetic energy) against the relevance to the question at hand (potential energy). Using this framework, we analyze a large dataset of reasoning chains from a multi-hop question-answering task, revealing intriguing patterns that distinguish valid from invalid reasoning. We show that valid reasoning chains have lower Hamiltonian energy and evolve in ways that best balance acquiring new information against staying relevant to the question. Furthermore, we demonstrate the application of this framework to steer the creation of more efficient reasoning algorithms within AI systems. Our results not only provide new insights into the nature of valid reasoning but also open up exciting possibilities for physics-inspired approaches to understanding and improving artificial intelligence.<|reference_end|> | arxiv | @article{marin2024optimizing,
title={Optimizing AI Reasoning: A Hamiltonian Dynamics Approach to Multi-Hop
Question Answering},
author={Javier Marin},
journal={arXiv preprint arXiv:2410.04415},
year={2024},
archivePrefix={arXiv},
eprint={2410.04415},
primaryClass={cs.AI cs.LG}
} | marin2024optimizing |
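
To make the energy balance described in this abstract concrete, here is a hedged toy sketch of a Hamiltonian over a reasoning chain in embedding space, with a kinetic term from step sizes and a potential term from (negative) relevance to the question. The exact functional form used in the paper may differ, and all names here are illustrative.

```python
import numpy as np

def hamiltonian_energy(chain, question, mass=1.0):
    """Toy Hamiltonian for a reasoning chain in embedding space.

    chain:    (T, d) array, one embedding per reasoning step
    question: (d,) embedding of the question
    Kinetic term ~ squared step size between consecutive embeddings;
    potential term ~ negative cosine relevance to the question.
    """
    steps = np.diff(chain, axis=0)                   # "velocities" between steps
    kinetic = 0.5 * mass * (steps ** 2).sum(axis=1)  # T-1 kinetic contributions
    sims = chain[1:] @ question / (
        np.linalg.norm(chain[1:], axis=1) * np.linalg.norm(question) + 1e-9
    )
    potential = -sims                                # more relevant => lower energy
    return float((kinetic + potential).sum())

rng = np.random.default_rng(0)
q = rng.normal(size=64)
chain = np.cumsum(rng.normal(scale=0.1, size=(5, 64)), axis=0) + q
print(hamiltonian_energy(chain, q))
```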
arxiv-666199 | 2410.04416 | Consistent and Repeatable Testing of O-RAN Distributed Unit (O-DU) across Continents | <|reference_start|>Consistent and Repeatable Testing of O-RAN Distributed Unit (O-DU) across Continents: Open Radio Access Networks (O-RAN) are expected to revolutionize the telecommunications industry with benefits like cost reduction, vendor diversity, and improved network performance through AI optimization. Supporting the O-RAN ALLIANCE's mission to achieve more intelligent, open, virtualized and fully interoperable mobile networks, O-RAN Open Testing and Integration Centers (OTICs) play a key role in accelerating the adoption of O-RAN specifications based on rigorous testing and validation. One theme in the recent O-RAN Global PlugFest Spring 2024 focused on demonstrating consistent and repeatable Open Fronthaul testing in multiple labs. In response to this theme, we present in this paper a detailed analysis of the testing methodologies and results for the O-RAN Distributed Unit (O-DU) across two OTICs. We identify key differences in testing setups, share challenges encountered, and propose best practices for achieving repeatable and consistent testing results. Our findings highlight the impact of different deployment technologies and testing environments on performance and conformance testing outcomes, providing valuable insights for future O-RAN implementations.<|reference_end|> | arxiv | @article{ngo2024consistent,
title={Consistent and Repeatable Testing of O-RAN Distributed Unit (O-DU)
across Continents},
author={Tuan V. Ngo, Mao V. Ngo, Binbin Chen, Gabriele Gemmi, Eduardo Baena,
Michele Polese, Tommaso Melodia, William Chien, and Tony Quek},
journal={arXiv preprint arXiv:2410.04416},
year={2024},
archivePrefix={arXiv},
eprint={2410.04416},
primaryClass={cs.NI}
} | ngo2024consistent |
arxiv-666200 | 2410.04417 | SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference | <|reference_start|>SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference: In vision-language models (VLMs), visual tokens usually incur significant computational overhead despite their sparser information density compared to text tokens. To address this, most existing methods learn a network to prune redundant visual tokens and require additional training data. In contrast, we propose an efficient training-free token optimization mechanism dubbed SparseVLM without extra parameters or fine-tuning costs. Concretely, given that visual tokens complement text tokens in VLMs for linguistic reasoning, we select visually relevant text tokens to rate the significance of vision tokens within the self-attention matrix extracted from the VLMs. Then we progressively prune irrelevant tokens. To maximize sparsity while retaining essential information, we introduce a rank-based strategy to adaptively determine the sparsification ratio for each layer, alongside a token recycling method that compresses pruned tokens into more compact representations. Experimental results show that our SparseVLM improves the efficiency of various VLMs across a range of image and video understanding tasks. In particular, LLaVA equipped with SparseVLM reduces FLOPs by 61% to 67% at a compression ratio of 78% while maintaining 93% of the accuracy. Our code is available at https://github.com/Gumpest/SparseVLMs.<|reference_end|> | arxiv | @article{zhang2024sparsevlm:,
title={SparseVLM: Visual Token Sparsification for Efficient Vision-Language
Model Inference},
author={Yuan Zhang, Chun-Kai Fan, Junpeng Ma, Wenzhao Zheng, Tao Huang, Kuan
Cheng, Denis Gudovskiy, Tomoyuki Okuno, Yohei Nakata, Kurt Keutzer, Shanghang
Zhang},
journal={arXiv preprint arXiv:2410.04417},
year={2024},
archivePrefix={arXiv},
eprint={2410.04417},
primaryClass={cs.CV}
} | zhang2024sparsevlm: |
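
As a rough, assumption-laden sketch of the training-free pruning idea this abstract describes (rating visual tokens by the attention they receive from selected text tokens and keeping the top fraction), not the authors' released implementation, which lives in the linked repository:

```python
import torch

def prune_visual_tokens(attn, keep_ratio=0.22):
    """attn: (num_text_tokens, num_visual_tokens) cross-attention scores.

    Scores each visual token by the attention it receives from the
    (already selected) vision-relevant text tokens, then keeps the
    top `keep_ratio` fraction. Illustrative simplification only.
    """
    scores = attn.mean(dim=0)                           # relevance per visual token
    k = max(1, int(keep_ratio * attn.shape[1]))
    keep = torch.topk(scores, k).indices.sort().values  # preserve original order
    return keep

attn = torch.rand(16, 576)              # e.g., 576 visual tokens, LLaVA-style
print(prune_visual_tokens(attn).shape)  # 126 tokens kept, i.e., ~78% pruned
```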