Dataset fields:
  corpus_id     string, length 7-12
  paper_id      string, length 9-16
  title         string, length 1-261
  abstract      string, length 70-4.02k
  source        string, 1 class
  bibtex        string, length 208-20.9k
  citation_key  string, length 6-100
arxiv-660301
2409.14151
Numerical calculation method for function integration on submanifolds of $\mathbb{R}^n$ or compact Riemannian manifolds
<|reference_start|>Numerical calculation method for function integration on submanifolds of $\mathbb{R}^n$ or compact Riemannian manifolds: In this paper, we present a method for digitally representing the "volume element" and calculating the integral of a function on compact hypersurfaces with or without boundary, and low-dimensional submanifolds in $\mathbb{R}^n$. We also extend this calculation to hypersurfaces in compact Riemannian manifolds.<|reference_end|>
arxiv
@article{deng2024numerical, title={Numerical calculation method for function integration on submanifolds of $\mathbb{R}^n$ or compact Riemannian manifolds}, author={Fusheng Deng, Gang Huang, Yingyi Wu}, journal={arXiv preprint arXiv:2409.14151}, year={2024}, archivePrefix={arXiv}, eprint={2409.14151}, primaryClass={math.NA cs.NA math.DG} }
deng2024numerical
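As background for the entry above (standard differential geometry, not the paper's specific discretization): the "volume element" has a concrete form once the submanifold is parametrized. For a parametrization $\varphi: U \subset \mathbb{R}^k \to M \subset \mathbb{R}^n$,
\[
\int_M f \, dV \;=\; \int_U f(\varphi(u)) \, \sqrt{\det\!\big(D\varphi(u)^{\top} D\varphi(u)\big)} \, du,
\]
and numerical schemes approximate this integral by quadrature over $U$ or directly from points sampled on $M$.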
arxiv-660302
2409.14154
MSSDA: Multi-Sub-Source Adaptation for Diabetic Foot Neuropathy Recognition
<|reference_start|>MSSDA: Multi-Sub-Source Adaptation for Diabetic Foot Neuropathy Recognition: Diabetic foot neuropathy (DFN) is a critical factor leading to diabetic foot ulcers, which are among the most common and severe complications of diabetes mellitus (DM) and are associated with high risks of amputation and mortality. Despite its significance, existing datasets do not directly derive from plantar data and lack continuous, long-term foot-specific information. To advance DFN research, we have collected a novel dataset comprising continuous plantar pressure data to recognize diabetic foot neuropathy. This dataset includes data from 94 DM patients with DFN and 41 DM patients without DFN. Moreover, traditional methods divide datasets by individuals, potentially leading to significant domain discrepancies in some feature spaces due to the absence of mid-domain data. In this paper, we propose an effective domain adaptation method to address this problem. We split the dataset based on convolutional feature statistics and select appropriate sub-source domains to enhance efficiency and avoid negative transfer. We then align the distributions of each source and target domain pair in specific feature spaces to minimize the domain gap. Comprehensive results validate the effectiveness of our method on both the newly proposed dataset for DFN recognition and an existing dataset.<|reference_end|>
arxiv
@article{zhong2024mssda:, title={MSSDA: Multi-Sub-Source Adaptation for Diabetic Foot Neuropathy Recognition}, author={Yan Zhong, Zhixin Yan, Yi Xie, Shibin Wu, Huaidong Zhang, Lin Shu and Peiru Zhou}, journal={arXiv preprint arXiv:2409.14154}, year={2024}, archivePrefix={arXiv}, eprint={2409.14154}, primaryClass={cs.CV cs.AI} }
zhong2024mssda:
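The splitting step described in the abstract above (dividing source data into sub-source domains based on convolutional feature statistics) can be illustrated with a minimal sketch: cluster per-sample feature means and standard deviations. The feature shape, the use of k-means, and the number of sub-domains are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch: form candidate sub-source domains by clustering simple
# statistics (mean, std) of each sample's convolutional feature maps.
import numpy as np
from sklearn.cluster import KMeans

def split_into_subdomains(features: np.ndarray, n_subdomains: int = 3) -> np.ndarray:
    """features: (N, C, T) convolutional feature maps; returns a sub-domain id per sample."""
    stats = np.concatenate([features.mean(axis=2), features.std(axis=2)], axis=1)  # (N, 2C)
    return KMeans(n_clusters=n_subdomains, n_init=10).fit_predict(stats)

# usage: split_into_subdomains(np.random.randn(100, 64, 50))
```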
arxiv-660303
2409.14156
Computing the Proximal Operator of the $\ell_1,q$-norm for Group Sparsity
<|reference_start|>Computing the Proximal Operator of the $\ell_1,q$-norm for Group Sparsity: In this note, we comprehensively characterize the proximal operator of the $\ell_{1,q}$-norm with $0\!<\!q\!<\!1$ by exploiting the well-known proximal operator of the $\ell_q$-norm on the real line. In particular, much more explicit characterizations can be obtained whenever $q\!=\!1/2$ and $q\!=\!2/3$ due to the existence of closed-form expressions for the proximal operators of the $\ell_{1/2}$- and $\ell_{2/3}$-norms. Numerical experiments demonstrate potential advantages of the $\ell_{1,q}$-regularization in the group sparse vector recovery.<|reference_end|>
arxiv
@article{lin2024computing, title={Computing the Proximal Operator of the $\ell_{1,q}$-norm for Group Sparsity}, author={Rongrong Lin, Shihai Chen, Han Feng, Yulan Liu}, journal={arXiv preprint arXiv:2409.14156}, year={2024}, archivePrefix={arXiv}, eprint={2409.14156}, primaryClass={math.NA cs.NA} }
lin2024computing
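For readers unfamiliar with the object characterized above: assuming the usual group-sparse definition $\|x\|_{1,q} = \sum_{g} \|x_g\|_q$ over disjoint groups (a notational assumption, since the entry does not spell it out), the proximal operator separates across groups,
\[
\operatorname{prox}_{\lambda \|\cdot\|_{1,q}}(y)_g \;=\; \arg\min_{x_g} \; \tfrac{1}{2}\|x_g - y_g\|_2^2 + \lambda \|x_g\|_q, \qquad 0 < q < 1,
\]
so the analysis reduces to independent per-group subproblems, which the paper then characterizes via the one-dimensional $\ell_q$ proximal operator.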
arxiv-660304
2409.14158
The Foundational Pose as a Selection Mechanism for the Design of Tool-Wielding Multi-Finger Robotic Hands
<|reference_start|>The Foundational Pose as a Selection Mechanism for the Design of Tool-Wielding Multi-Finger Robotic Hands: To wield an object means to hold and move it in a way that exploits its functions. When we wield tools -- such as writing with a pen or cutting with scissors -- our hands would reach very specific poses, often drastically different from how we pick up the same objects just to transport them. In this work, we investigate the design of tool-wielding multi-finger robotic hands based on a hypothesis: the poses that a tool and a hand reach during tool-wielding -- what we call "foundational poses" (FPs) -- can be used as a selection mechanism in the design process. We interpret FPs as snapshots that capture the workings of underlying mechanisms formed by the tool and the hand, and one hand can form multiple mechanisms with the same tool. We tested our hypothesis in a hand design experiment, where we developed a sampling-based design optimization framework that uses FPs to computationally generate many different hand designs and evaluate them in multiple metrics. The results show that more than $99\%$ of the $10,785$ generated hand designs successfully wielded tools in simulation, supporting our hypothesis. Meanwhile, our methods provide insights into the non-convex, multi-objective hand design optimization problem that could be hard to unveil otherwise, such as clustering and the Pareto front. Lastly, we demonstrate our methods' real-world feasibility and potential with a hardware prototype equipped with rigid endoskeleton and soft skin.<|reference_end|>
arxiv
@article{wang2024the, title={The Foundational Pose as a Selection Mechanism for the Design of Tool-Wielding Multi-Finger Robotic Hands}, author={Sunyu Wang, Jean H. Oh, and Nancy S. Pollard}, journal={arXiv preprint arXiv:2409.14158}, year={2024}, archivePrefix={arXiv}, eprint={2409.14158}, primaryClass={cs.RO} }
wang2024the
arxiv-660305
2409.14160
Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI
<|reference_start|>Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI: With the growing attention and investment in recent AI approaches such as large language models, the narrative that the larger the AI system, the more valuable, powerful, and interesting it is, is increasingly seen as common sense. But what is this assumption based on, and how are we measuring value, power, and performance? And what are the collateral consequences of this race to ever-increasing scale? Here, we scrutinize the current scaling trends and trade-offs across multiple axes and refute two common assumptions underlying the 'bigger-is-better' AI paradigm: 1) that improved performance is a product of increased scale, and 2) that all interesting problems addressed by AI require large-scale models. Rather, we argue that this approach is not only fragile scientifically, but comes with undesirable consequences. First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint. Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate. Finally, it exacerbates a concentration of power, which centralizes decision-making in the hands of a few actors while threatening to disempower others in the context of shaping both AI research and its applications throughout society.<|reference_end|>
arxiv
@article{varoquaux2024hype,, title={Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI}, author={Ga\"el Varoquaux, Alexandra Sasha Luccioni, Meredith Whittaker}, journal={arXiv preprint arXiv:2409.14160}, year={2024}, archivePrefix={arXiv}, eprint={2409.14160}, primaryClass={cs.CY} }
varoquaux2024hype,
arxiv-660306
2409.14161
When Witnesses Defend: A Witness Graph Topological Layer for Adversarial Graph Learning
<|reference_start|>When Witnesses Defend: A Witness Graph Topological Layer for Adversarial Graph Learning: Capitalizing on the intuitive premise that shape characteristics are more robust to perturbations, we bridge adversarial graph learning with the emerging tools from computational topology, namely, persistent homology representations of graphs. We introduce the concept of witness complex to adversarial analysis on graphs, which allows us to focus only on the salient shape characteristics of graphs, yielded by the subset of the most essential nodes (i.e., landmarks), with minimal loss of topological information on the whole graph. The remaining nodes are then used as witnesses, governing which higher-order graph substructures are incorporated into the learning process. Armed with the witness mechanism, we design Witness Graph Topological Layer (WGTL), which systematically integrates both local and global topological graph feature representations, the impact of which is, in turn, automatically controlled by the robust regularized topological loss. Given the attacker's budget, we derive the important stability guarantees of both local and global topology encodings and the associated robust topological loss. We illustrate the versatility and efficiency of WGTL by its integration with five GNNs and three existing non-topological defense mechanisms. Our extensive experiments across six datasets demonstrate that WGTL boosts the robustness of GNNs across a range of perturbations and against a range of adversarial attacks, leading to relative gains of up to 18%.<|reference_end|>
arxiv
@article{arafat2024when, title={When Witnesses Defend: A Witness Graph Topological Layer for Adversarial Graph Learning}, author={Naheed Anjum Arafat, Debabrota Basu, Yulia Gel, Yuzhou Chen}, journal={arXiv preprint arXiv:2409.14161}, year={2024}, archivePrefix={arXiv}, eprint={2409.14161}, primaryClass={cs.LG} }
arafat2024when
arxiv-660307
2409.14162
On Importance of Pruning and Distillation for Efficient Low Resource NLP
<|reference_start|>On Importance of Pruning and Distillation for Efficient Low Resource NLP: The rise of large transformer models has revolutionized Natural Language Processing, leading to significant advances in tasks like text classification. However, this progress demands substantial computational resources, escalating training duration, and expenses with larger model sizes. Efforts have been made to downsize and accelerate English models (e.g., Distilbert, MobileBert). Yet, research in this area is scarce for low-resource languages. In this study, we explore the case of the low-resource Indic language Marathi. Leveraging the marathi-topic-all-doc-v2 model as our baseline, we implement optimization techniques to reduce computation time and memory usage. Our focus is on enhancing the efficiency of Marathi transformer models while maintaining top-tier accuracy and reducing computational demands. Using the MahaNews document classification dataset and the marathi-topic-all-doc-v2 model from L3Cube, we apply Block Movement Pruning, Knowledge Distillation, and Mixed Precision methods individually and in combination to boost efficiency. We demonstrate the importance of strategic pruning levels in achieving desired efficiency gains. Furthermore, we analyze the balance between efficiency improvements and environmental impact, highlighting how optimized model architectures can contribute to a more sustainable computational ecosystem. Implementing these techniques on a single GPU system, we determine that the optimal configuration is 25\% pruning + knowledge distillation. This approach yielded a 2.56x speedup in computation time while maintaining baseline accuracy levels.<|reference_end|>
arxiv
@article{mirashi2024on, title={On Importance of Pruning and Distillation for Efficient Low Resource NLP}, author={Aishwarya Mirashi, Purva Lingayat, Srushti Sonavane, Tejas Padhiyar, Raviraj Joshi, Geetanjali Kale}, journal={arXiv preprint arXiv:2409.14162}, year={2024}, archivePrefix={arXiv}, eprint={2409.14162}, primaryClass={cs.CL cs.LG} }
mirashi2024on
arxiv-660308
2409.14163
PromptTA: Prompt-driven Text Adapter for Source-free Domain Generalization
<|reference_start|>PromptTA: Prompt-driven Text Adapter for Source-free Domain Generalization: Source-free domain generalization (SFDG) tackles the challenge of adapting models to unseen target domains without access to source domain data. To deal with this challenging task, recent advances in SFDG have primarily focused on leveraging the text modality of vision-language models such as CLIP. These methods involve developing a transferable linear classifier based on diverse style features extracted from the text and learned prompts or deriving domain-unified text representations from domain banks. However, both style features and domain banks have limitations in capturing comprehensive domain knowledge. In this work, we propose Prompt-Driven Text Adapter (PromptTA) method, which is designed to better capture the distribution of style features and employ resampling to ensure thorough coverage of domain knowledge. To further leverage this rich domain information, we introduce a text adapter that learns from these style features for efficient domain information storage. Extensive experiments conducted on four benchmark datasets demonstrate that PromptTA achieves state-of-the-art performance. The code is available at https://github.com/zhanghr2001/PromptTA.<|reference_end|>
arxiv
@article{zhang2024promptta:, title={PromptTA: Prompt-driven Text Adapter for Source-free Domain Generalization}, author={Haoran Zhang, Shuanghao Bai, Wanqi Zhou, Jingwen Fu, Badong Chen}, journal={arXiv preprint arXiv:2409.14163}, year={2024}, archivePrefix={arXiv}, eprint={2409.14163}, primaryClass={cs.CV cs.CL cs.LG} }
zhang2024promptta:
arxiv-660309
2409.14165
Will Large Language Models be a Panacea to Autonomous Driving?
<|reference_start|>Will Large Language Models be a Panacea to Autonomous Driving?: Artificial intelligence (AI) plays a crucial role in autonomous driving (AD) research, propelling its development towards intelligence and efficiency. Currently, the development of AD technology follows two main technical paths: modularization and end-to-end. Modularization decomposes the driving task into modules such as perception, prediction, planning, and control, and trains them separately. Due to the inconsistency of training objectives between modules, the integrated effect suffers from bias. End-to-end attempts to address this issue by utilizing a single model that directly maps from sensor data to control signals. This path has limited learning capabilities in a comprehensive set of features and struggles to handle unpredictable long-tail events and complex urban traffic scenarios. In the face of challenges encountered in both paths, many researchers believe that large language models (LLMs) with powerful reasoning capabilities and extensive knowledge understanding may be the solution, expecting LLMs to provide AD systems with deeper levels of understanding and decision-making capabilities. To understand if LLMs could enhance AD, this paper conducts a thorough analysis of the potential applications of LLMs in AD systems, including exploring their optimization strategies in both modular and end-to-end approaches, with a particular focus on how LLMs can tackle the problems and challenges present in current solutions. Furthermore, we discuss an important question: Can LLM-based artificial general intelligence (AGI) be a key to achieving high-level AD? We further analyze the potential limitations and challenges that LLMs may encounter in promoting the development of AD technology.<|reference_end|>
arxiv
@article{zhu2024will, title={Will Large Language Models be a Panacea to Autonomous Driving?}, author={Yuxuan Zhu, Shiyi Wang, Wenqing Zhong, Nianchen Shen, Yunqi Li, Siqi Wang, Zhiheng Li, Cathy Wu, Zhengbing He, Li Li}, journal={arXiv preprint arXiv:2409.14165}, year={2024}, archivePrefix={arXiv}, eprint={2409.14165}, primaryClass={cs.AI cs.CL cs.LG cs.RO cs.SY eess.SY} }
zhu2024will
arxiv-660310
2409.14168
Towards Building Efficient Sentence BERT Models using Layer Pruning
<|reference_start|>Towards Building Efficient Sentence BERT Models using Layer Pruning: This study examines the effectiveness of layer pruning in creating efficient Sentence BERT (SBERT) models. Our goal is to create smaller sentence embedding models that reduce complexity while maintaining strong embedding similarity. We assess BERT models like Muril and MahaBERT-v2 before and after pruning, comparing them with smaller, scratch-trained models like MahaBERT-Small and MahaBERT-Smaller. Through a two-phase SBERT fine-tuning process involving Natural Language Inference (NLI) and Semantic Textual Similarity (STS), we evaluate the impact of layer reduction on embedding quality. Our findings show that pruned models, despite fewer layers, perform competitively with fully layered versions. Moreover, pruned models consistently outperform similarly sized, scratch-trained models, establishing layer pruning as an effective strategy for creating smaller, efficient embedding models. These results highlight layer pruning as a practical approach for reducing computational demand while preserving high-quality embeddings, making SBERT models more accessible for languages with limited technological resources.<|reference_end|>
arxiv
@article{shelke2024towards, title={Towards Building Efficient Sentence BERT Models using Layer Pruning}, author={Anushka Shelke, Riya Savant, Raviraj Joshi}, journal={arXiv preprint arXiv:2409.14168}, year={2024}, archivePrefix={arXiv}, eprint={2409.14168}, primaryClass={cs.CL cs.LG} }
shelke2024towards
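The layer-pruning step above reduces, in one common realization, to keeping only the first k transformer layers of the encoder before SBERT fine-tuning. The snippet below is a hedged sketch using the Hugging Face transformers API; the checkpoint name and k are placeholders (the paper works with MahaBERT-v2 and MuRIL), and the authors' exact recipe may differ.

```python
# Minimal sketch: keep only the first k encoder layers of a BERT-style model.
import torch
from transformers import AutoModel

def prune_to_k_layers(model_name: str, k: int):
    model = AutoModel.from_pretrained(model_name)
    model.encoder.layer = torch.nn.ModuleList(model.encoder.layer[:k])  # drop the top layers
    model.config.num_hidden_layers = k  # keep the config consistent with the new depth
    return model

# usage (placeholder checkpoint): pruned = prune_to_k_layers("bert-base-multilingual-cased", 6)
```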
arxiv-660311
2409.14170
LFP: Efficient and Accurate End-to-End Lane-Level Planning via Camera-LiDAR Fusion
<|reference_start|>LFP: Efficient and Accurate End-to-End Lane-Level Planning via Camera-LiDAR Fusion: Multi-modal systems enhance performance in autonomous driving but face inefficiencies due to indiscriminate processing within each modality. Additionally, the independent feature learning of each modality lacks interaction, which results in extracted features that lack complementary characteristics. These issues increase the cost of fusing redundant information across modalities. To address these challenges, we propose targeting driving-relevant elements, which reduces the volume of LiDAR features while preserving critical information. This approach enhances lane-level interaction between the image and LiDAR branches, allowing for the extraction and fusion of their respective advantageous features. Building upon the camera-only framework PHP, we introduce the Lane-level camera-LiDAR Fusion Planning (LFP) method, which balances efficiency with performance by using lanes as the unit for sensor fusion. Specifically, we design three modules to enhance efficiency and performance. For efficiency, we propose an image-guided coarse lane prior generation module that forecasts the region of interest (ROI) for lanes and assigns a confidence score, guiding LiDAR processing. The LiDAR feature extraction module leverages lane-aware priors from the image branch to guide pillar sampling, retaining only the essential pillars. For performance, the lane-level cross-modal query integration and feature enhancement module uses the confidence score from the ROI to combine low-confidence image queries with LiDAR queries, extracting complementary depth features. These features enhance the low-confidence image features, compensating for the lack of depth. Experiments on the Carla benchmarks show that our method achieves state-of-the-art performance in both driving score and infraction score, with maximum improvements of 15% and 14% over existing algorithms, respectively, while maintaining a high frame rate of 19.27 FPS.<|reference_end|>
arxiv
@article{you2024lfp:, title={LFP: Efficient and Accurate End-to-End Lane-Level Planning via Camera-LiDAR Fusion}, author={Guoliang You and Xiaomeng Chu and Yifan Duan and Xingchen Li and Sha Zhang and Jianmin Ji and Yanyong Zhang}, journal={arXiv preprint arXiv:2409.14170}, year={2024}, archivePrefix={arXiv}, eprint={2409.14170}, primaryClass={cs.CV} }
you2024lfp:
arxiv-660312
2409.14173
An Evolutionary Algorithm For the Vehicle Routing Problem with Drones with Interceptions
<|reference_start|>An Evolutionary Algorithm For the Vehicle Routing Problem with Drones with Interceptions: The use of trucks and drones as a solution to address last-mile delivery challenges is a new and promising research direction explored in this paper. The variation of the problem where the drone can intercept the truck while in movement or at the customer location is part of an optimisation problem called the vehicle routing problem with drones with interception (VRPDi). This paper proposes an evolutionary algorithm to solve the VRPDi. In this variation of the VRPDi, multiple pairs of trucks and drones need to be scheduled. The pairs leave and return to a depot location together or separately to make deliveries to customer nodes. The drone can intercept the truck after the delivery or meet up with the truck at the following customer location. The algorithm was executed on the travelling salesman problem with drones (TSPD) datasets by Bouman et al. (2015), and the performance of the algorithm was compared by benchmarking the results of the VRPDi against the results of the VRP of the same dataset. This comparison showed improvements in total delivery time between 39% and 60%. Further detailed analysis of the algorithm results examined the total delivery time, distance, node delivery scheduling and the degree of diversity during the algorithm execution. This analysis also considered how the algorithm handled the VRPDi constraints. The results of the algorithm were then benchmarked against algorithms in Dillon et al. (2023) and Ernst (2024). The latter solved the problem with a maximum drone distance constraint added to the VRPDi. The analysis and benchmarking of the algorithm results showed that the algorithm satisfactorily solved 50- and 100-node problems in a reasonable amount of time, and the solutions found were better than those found by the algorithms in Dillon et al. (2023) and Ernst (2024) for the same problems.<|reference_end|>
arxiv
@article{pambo2024an, title={An Evolutionary Algorithm For the Vehicle Routing Problem with Drones with Interceptions}, author={Carlos Pambo and Jacomine Grobler}, journal={arXiv preprint arXiv:2409.14173}, year={2024}, archivePrefix={arXiv}, eprint={2409.14173}, primaryClass={cs.AI cs.CY cs.ET math.OC} }
pambo2024an
arxiv-660313
2409.14174
Component-based Sketching for Deep ReLU Nets
<|reference_start|>Component-based Sketching for Deep ReLU Nets: Deep learning has made profound impacts in the domains of data mining and AI, distinguished by the groundbreaking achievements in numerous real-world applications and the innovative algorithm design philosophy. However, it suffers from the inconsistency issue between optimization and generalization, as achieving good generalization, guided by the bias-variance trade-off principle, favors under-parameterized networks, whereas ensuring effective convergence of gradient-based algorithms demands over-parameterized networks. To address this issue, we develop a novel sketching scheme based on deep net components for various tasks. Specifically, we use deep net components with specific efficacy to build a sketching basis that embodies the advantages of deep networks. Subsequently, we transform deep net training into a linear empirical risk minimization problem based on the constructed basis, successfully avoiding the complicated convergence analysis of iterative algorithms. The efficacy of the proposed component-based sketching is validated through both theoretical analysis and numerical experiments. Theoretically, we show that the proposed component-based sketching provides almost optimal rates in approximating saturated functions for shallow nets and also achieves almost optimal generalization error bounds. Numerically, we demonstrate that, compared with the existing gradient-based training methods, component-based sketching possesses superior generalization performance with reduced training costs.<|reference_end|>
arxiv
@article{wang2024component-based, title={Component-based Sketching for Deep ReLU Nets}, author={Di Wang, Shao-Bo Lin, Deyu Meng, Feilong Cao}, journal={arXiv preprint arXiv:2409.14174}, year={2024}, archivePrefix={arXiv}, eprint={2409.14174}, primaryClass={cs.LG math.ST stat.TH} }
wang2024component-based
arxiv-660314
2409.14175
QMOS: Enhancing LLMs for Telecommunication with Question Masked loss and Option Shuffling
<|reference_start|>QMOS: Enhancing LLMs for Telecommunication with Question Masked loss and Option Shuffling: Large Language Models (LLMs) have brought about substantial advancements in the field of Question Answering (QA) systems. These models do remarkably well in addressing intricate inquiries in a variety of disciplines. However, because of domain-specific vocabulary, complex technological concepts, and the requirement for exact responses, applying LLMs to specialized sectors like telecommunications presents additional obstacles. GPT-3.5 has been used in recent work to obtain noteworthy accuracy for telecom-related questions in a Retrieval Augmented Generation (RAG) framework. Notwithstanding these developments, the practical use of models such as GPT-3.5 is restricted by their proprietary nature and high computing demands. This paper introduces QMOS, an innovative approach which uses a Question-Masked loss and Option Shuffling trick to enhance the performance of LLMs in answering Multiple-Choice Questions in the telecommunications domain. Our focus was on using open-source, smaller language models (Phi-2 and Falcon-7B) within an enhanced RAG framework. Our multi-faceted approach involves several enhancements to the whole LLM-RAG pipeline of finetuning, retrieval, prompt engineering and inference. Our approaches significantly outperform existing results, achieving accuracy improvements from baselines of 24.70% to 49.30% with Falcon-7B and from 42.07% to 84.65% with Phi-2.<|reference_end|>
arxiv
@article{guda2024qmos:, title={QMOS: Enhancing LLMs for Telecommunication with Question Masked loss and Option Shuffling}, author={Blessed Guda, Gabrial Zencha A., Lawrence Francis and Carlee Joe-Wong}, journal={arXiv preprint arXiv:2409.14175}, year={2024}, archivePrefix={arXiv}, eprint={2409.14175}, primaryClass={cs.CL cs.AI cs.LG} }
guda2024qmos:
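The two ingredients named in the title above can be illustrated independently of the authors' pipeline; both snippets are assumptions about one reasonable implementation, not the paper's code. Option shuffling permutes the multiple-choice options and remaps the gold label; a question-masked loss supervises only the answer tokens by assigning the ignore index to question positions.

```python
# Minimal sketch of option shuffling and question-masked label construction.
import random

def shuffle_options(options, answer_idx, seed=None):
    """Permute MCQ options and return them together with the new index of the correct answer."""
    rng = random.Random(seed)
    order = list(range(len(options)))
    rng.shuffle(order)
    return [options[i] for i in order], order.index(answer_idx)

def masked_labels(question_ids, answer_ids, ignore_index=-100):
    """Causal-LM labels that place the loss only on the answer tokens."""
    return [ignore_index] * len(question_ids) + list(answer_ids)

# usage:
# opts, gold = shuffle_options(["5G NR", "LTE", "UMTS", "GSM"], answer_idx=0, seed=7)
# labels = masked_labels(question_ids=[101, 2054, 2003], answer_ids=[1037, 102])
```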
arxiv-660315
2409.14176
Fast Local Search Strategies for Large-Scale General Quadratic Integer Programming
<|reference_start|>Fast Local Search Strategies for Large-Scale General Quadratic Integer Programming: This study investigates the area of general quadratic integer programming (QIP), encompassing both unconstrained (UQIP) and constrained (CQIP) variants. These NP-hard problems have far-reaching applications, yet the non-convex cases have received limited attention in the literature. To address this gap, we introduce a closed-form formula for single-variable changes, establishing novel necessary and sufficient conditions for 1-Opt local improvement in UQIP and CQIP. We develop a simple local search and a sophisticated tabu search with an oscillation strategy tailored for large-scale problems. Experimental results on instances with up to 8000 variables demonstrate the efficiency of these strategies, producing high-quality solutions within a short time. Our approaches significantly outperform the Gurobi 11.0.2 solver.<|reference_end|>
arxiv
@article{wang2024fast, title={Fast Local Search Strategies for Large-Scale General Quadratic Integer Programming}, author={Haibo Wang and Bahram Alidaee}, journal={arXiv preprint arXiv:2409.14176}, year={2024}, archivePrefix={arXiv}, eprint={2409.14176}, primaryClass={cs.DM cs.DS math.OC} }
wang2024fast
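The "closed-form formula for single-variable changes" mentioned above can be sketched for the generic objective $f(x) = x^{\top} Q x + c^{\top} x$ with symmetric $Q$ (this form is an assumption for illustration; the paper's exact model and constraints may differ). Changing a single variable $x_i$ by an integer step $\delta$ gives
\[
f(x + \delta e_i) - f(x) \;=\; \delta \big( 2 (Qx)_i + c_i \big) + \delta^2 Q_{ii},
\]
so each 1-Opt move can be scored in $O(1)$ time once $Qx$ is maintained incrementally, which is what makes local and tabu search practical at 8000 variables.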
arxiv-660316
2409.14177
PathSeeker: Exploring LLM Security Vulnerabilities with a Reinforcement Learning-Based Jailbreak Approach
<|reference_start|>PathSeeker: Exploring LLM Security Vulnerabilities with a Reinforcement Learning-Based Jailbreak Approach: In recent years, Large Language Models (LLMs) have gained widespread use, raising concerns about their security. Traditional jailbreak attacks often rely on the model's internal information or have limitations when exploring the unsafe behavior of the victim model, reducing their general applicability. In this paper, we introduce PathSeeker, a novel black-box jailbreak method, which is inspired by the game of rats escaping a maze. We think that each LLM has its unique "security maze", and attackers attempt to find the exit by learning from the received feedback and their accumulated experience to compromise the target LLM's security defences. Our approach leverages multi-agent reinforcement learning, where smaller models collaborate to guide the main LLM in performing mutation operations to achieve the attack objectives. By progressively modifying inputs based on the model's feedback, our system induces richer, harmful responses. During our manual attempts to perform jailbreak attacks, we found that the vocabulary of the target model's responses gradually became richer and eventually produced harmful responses. Based on this observation, we also introduce a reward mechanism that exploits the expansion of vocabulary richness in LLM responses to weaken security constraints. Our method outperforms five state-of-the-art attack techniques when tested across 13 commercial and open-source LLMs, achieving high attack success rates, especially in commercial models with strong safety alignment such as GPT-4o-mini, Claude-3.5, and GLM-4-air. This study aims to improve the understanding of LLM security vulnerabilities, and we hope that it can contribute to the development of more robust defenses.<|reference_end|>
arxiv
@article{lin2024pathseeker:, title={PathSeeker: Exploring LLM Security Vulnerabilities with a Reinforcement Learning-Based Jailbreak Approach}, author={Zhihao Lin, Wei Ma, Mingyi Zhou, Yanjie Zhao, Haoyu Wang, Yang Liu, Jun Wang, Li Li}, journal={arXiv preprint arXiv:2409.14177}, year={2024}, archivePrefix={arXiv}, eprint={2409.14177}, primaryClass={cs.CR cs.AI} }
lin2024pathseeker:
arxiv-660317
2409.14178
A Distribution-Aware Flow-Matching for Generating Unstructured Data for Few-Shot Reinforcement Learning
<|reference_start|>A Distribution-Aware Flow-Matching for Generating Unstructured Data for Few-Shot Reinforcement Learning: Generating realistic and diverse unstructured data is a significant challenge in reinforcement learning (RL), particularly in few-shot learning scenarios where data is scarce. Traditional RL methods often rely on extensive datasets or simulations, which are costly and time-consuming. In this paper, we introduce a distribution-aware flow-matching approach, designed to generate synthetic unstructured data tailored specifically for an application of few-shot RL called Dynamic Voltage and Frequency Scaling (DVFS) on embedded processors. This method leverages the sample efficiency of flow matching and incorporates statistical learning techniques such as bootstrapping to improve the generalization and robustness of the latent space. Additionally, we apply feature weighting through Random Forests to prioritize critical data aspects, thereby improving the precision of the generated synthetic data. This approach not only mitigates the challenges of overfitting and data correlation in unstructured data in traditional Model-Based RL but also aligns with the Law of Large Numbers, ensuring convergence to true empirical values and an optimal policy as the number of samples increases. Through extensive experimentation on an application of DVFS for low-energy processing, we demonstrate that our method provides stable convergence of the max Q-value while enhancing the frame rate by 30\% in the very first timestamps, making this RL model efficient in resource-constrained environments.<|reference_end|>
arxiv
@article{pivezhandi2024a, title={A Distribution-Aware Flow-Matching for Generating Unstructured Data for Few-Shot Reinforcement Learning}, author={Mohammad Pivezhandi, Abusayeed Saifullah}, journal={arXiv preprint arXiv:2409.14178}, year={2024}, archivePrefix={arXiv}, eprint={2409.14178}, primaryClass={cs.LG} }
pivezhandi2024a
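As background for the generative component above (this is the standard conditional flow-matching objective with linear interpolation paths, stated as context rather than the paper's distribution-aware variant): a velocity field $v_\theta$ is trained with
\[
\mathcal{L}_{\mathrm{FM}}(\theta) = \mathbb{E}_{t \sim \mathcal{U}[0,1],\, x_0 \sim p_0,\, x_1 \sim p_{\mathrm{data}}} \big\| v_\theta\big((1-t)x_0 + t x_1,\, t\big) - (x_1 - x_0) \big\|^2 ,
\]
and samples are then generated by integrating $\dot{x} = v_\theta(x, t)$ from noise at $t=0$ to data at $t=1$.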
arxiv-660318
2409.14179
On Lexical Invariance on Multisets and Graphs
<|reference_start|>On Lexical Invariance on Multisets and Graphs: In this draft, we study a novel problem, called lexical invariance, using the medium of multisets and graphs. Traditionally in the NLP domain, lexical invariance indicates that the semantic meaning of a sentence should remain unchanged regardless of the specific lexical or word-based representation of the input. For example, ``The movie was extremely entertaining'' would have the same meaning as ``The film was very enjoyable''. In this paper, we study a more challenging setting, where the output of a function is invariant to any injective transformation applied to the input lexical space. For example, multiset {1,2,3,2} is equivalent to multiset {a,b,c,b} if we specify an injective transformation that maps 1 to a, 2 to b and 3 to c. We study the sufficient and necessary conditions for a most expressive lexical invariant (and permutation invariant) function on multisets and graphs, and prove that for multisets, the function must have a form that only takes the multiset of counts of the unique elements in the original multiset as input. For example, a most expressive lexical invariant function on {a,b,c,b} must have a form that only operates on {1,1,2} (meaning that there are 1, 1, 2 unique elements corresponding to a,c,b). For graphs, we prove that a most expressive lexical invariant and permutation invariant function must have a form that only takes the adjacency matrix and a difference matrix as input, where the (i,j)th element of the difference matrix is 1 if node i and node j have the same feature and 0 otherwise. We perform synthetic experiments on TU datasets to verify our theorems.<|reference_end|>
arxiv
@article{zhang2024on, title={On Lexical Invariance on Multisets and Graphs}, author={Muhan Zhang}, journal={arXiv preprint arXiv:2409.14179}, year={2024}, archivePrefix={arXiv}, eprint={2409.14179}, primaryClass={cs.LG cs.CL} }
zhang2024on
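The multiset result above has a direct computational reading: a most expressive lexical-invariant (and permutation-invariant) function may only depend on the multiset of counts of the unique elements. A minimal sketch of that canonical form (illustrative, not the paper's code):

```python
# Canonical form for lexical invariance on multisets: keep only the multiset of
# counts of unique elements, e.g. ['a', 'b', 'c', 'b'] -> (1, 1, 2).
from collections import Counter

def lexical_invariant_form(multiset):
    return tuple(sorted(Counter(multiset).values()))

assert lexical_invariant_form([1, 2, 3, 2]) == lexical_invariant_form(['a', 'b', 'c', 'b'])
```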
arxiv-660319
2409.14181
Democratising Artificial Intelligence for Pandemic Preparedness and Global Governance in Latin American and Caribbean Countries
<|reference_start|>Democratising Artificial Intelligence for Pandemic Preparedness and Global Governance in Latin American and Caribbean Countries: Infectious diseases, transmitted directly or indirectly, are among the leading causes of epidemics and pandemics. Consequently, several open challenges exist in predicting epidemic outbreaks, detecting variants, tracing contacts, discovering new drugs, and fighting misinformation. Artificial Intelligence (AI) can provide tools to deal with these scenarios, demonstrating promising results in the fight against the COVID-19 pandemic. AI is becoming increasingly integrated into various aspects of society. However, ensuring that AI benefits are distributed equitably and that they are used responsibly is crucial. Multiple countries are creating regulations to address these concerns, but the borderless nature of AI requires global cooperation to define regulatory and guideline consensus. Considering this, The Global South AI for Pandemic & Epidemic Preparedness & Response Network (AI4PEP) has developed an initiative comprising 16 projects across 16 countries in the Global South, seeking to strengthen equitable and responsive public health systems that leverage Southern-led responsible AI solutions to improve prevention, preparedness, and response to emerging and re-emerging infectious disease outbreaks. This opinion introduces our branches in Latin American and Caribbean (LAC) countries and discusses AI governance in LAC in the light of biotechnology. Our network in LAC has high potential to help fight infectious diseases, particularly in low- and middle-income countries, generating opportunities for the widespread use of AI techniques to improve the health and well-being of their communities.<|reference_end|>
arxiv
@article{de carvalho2024democratising, title={Democratising Artificial Intelligence for Pandemic Preparedness and Global Governance in Latin American and Caribbean Countries}, author={Andre de Carvalho, Robson Bonidia, Jude Dzevela Kong, Mariana Dauhajre, Claudio Struchiner, Guilherme Goedert, Peter F. Stadler, Maria Emilia Walter, Danilo Sanches, Troy Day, Marcia Castro, John Edmunds, Manuel Colome-Hidalgo, Demian Arturo Herrera Morban, Edian F. Franco, Cesar Ugarte-Gil, Patricia Espinoza-Lopez, Gabriel Carrasco-Escobar, Ulisses Rocha}, journal={arXiv preprint arXiv:2409.14181}, year={2024}, archivePrefix={arXiv}, eprint={2409.14181}, primaryClass={cs.AI} }
de carvalho2024democratising
arxiv-660320
2409.14183
Quantum Computing for Automotive Applications: From Algorithms to Applications
<|reference_start|>Quantum Computing for Automotive Applications: From Algorithms to Applications: Quantum computing could impact various industries, with the automotive industry, which faces many computational challenges from optimizing supply chains and manufacturing to vehicle engineering, being particularly promising. This chapter investigates state-of-the-art quantum algorithms to enhance efficiency, accuracy, and scalability across the automotive value chain. We explore recent advances in quantum optimization, machine learning, and numerical and chemistry simulations, highlighting their potential and limitations. We identify and discuss key challenges in near-term and fault-tolerant algorithms and their practical use in industrial applications. While quantum algorithms show potential in many application domains, current noisy intermediate-scale quantum hardware limits scale and, thus, business benefits. In the long term, fault-tolerant systems promise theoretical speedups; however, they also require further progress in hardware and software (e.g., related to error correction and data loading). We expect that with this progress, significant practical benefits will emerge eventually.<|reference_end|>
arxiv
@article{bmw group quantum team2024quantum, title={Quantum Computing for Automotive Applications: From Algorithms to Applications}, author={BMW Group Quantum Team -- Carlos A. Riofr\'io, Johannes Klepsch, Jernej Rudi Fin\v{z}gar, Florian Kiwit, Leonhard H\"olscher, Marvin Erdmann, Lukas M\"uller, Chandan Kumar and Andre Luckow}, journal={arXiv preprint arXiv:2409.14183}, year={2024}, archivePrefix={arXiv}, eprint={2409.14183}, primaryClass={quant-ph cs.ET} }
bmw group quantum team2024quantum
arxiv-660321
2409.14184
Content-aware Tile Generation using Exterior Boundary Inpainting
<|reference_start|>Content-aware Tile Generation using Exterior Boundary Inpainting: We present a novel and flexible learning-based method for generating tileable image sets. Our method goes beyond simple self-tiling, supporting sets of mutually tileable images that exhibit a high degree of diversity. To promote diversity we decouple structure from content by foregoing explicit copying of patches from an exemplar image. Instead we leverage the prior knowledge of natural images and textures embedded in large-scale pretrained diffusion models to guide tile generation constrained by exterior boundary conditions and a text prompt to specify the content. By carefully designing and selecting the exterior boundary conditions, we can reformulate the tile generation process as an inpainting problem, allowing us to directly employ existing diffusion-based inpainting models without the need to retrain a model on a custom training set. We demonstrate the flexibility and efficacy of our content-aware tile generation method on different tiling schemes, such as Wang tiles, from only a text prompt. Furthermore, we introduce a novel Dual Wang tiling scheme that provides greater texture continuity and diversity than existing Wang tile variants.<|reference_end|>
arxiv
@article{sartor2024content-aware, title={Content-aware Tile Generation using Exterior Boundary Inpainting}, author={Sam Sartor and Pieter Peers}, journal={arXiv preprint arXiv:2409.14184}, year={2024}, doi={10.1145/3687981}, archivePrefix={arXiv}, eprint={2409.14184}, primaryClass={cs.CV cs.GR} }
sartor2024content-aware
arxiv-660322
2409.14191
Addressing and Visualizing Misalignments in Human Task-Solving Trajectories
<|reference_start|>Addressing and Visualizing Misalignments in Human Task-Solving Trajectories: The effectiveness of AI model training hinges on the quality of the trajectory data used, particularly in aligning the model's decision with human intentions. However, in the human task-solving trajectories, we observe significant misalignments between human intentions and the recorded trajectories, which can undermine AI model training. This paper addresses the challenges of these misalignments by proposing a visualization tool and a heuristic algorithm designed to detect and categorize discrepancies in trajectory data. Although the heuristic algorithm requires a set of predefined human intentions to function, which we currently cannot extract, the visualization tool offers valuable insights into the nature of these misalignments. We expect that eliminating these misalignments could significantly improve the utility of trajectory data for AI model training. We also propose that future work should focus on developing methods, such as Topic Modeling, to accurately extract human intentions from trajectory data, thereby enhancing the alignment between user actions and AI learning processes.<|reference_end|>
arxiv
@article{kim2024addressing, title={Addressing and Visualizing Misalignments in Human Task-Solving Trajectories}, author={Sejin Kim and Hosung Lee and Sundong Kim}, journal={arXiv preprint arXiv:2409.14191}, year={2024}, archivePrefix={arXiv}, eprint={2409.14191}, primaryClass={cs.AI cs.HC} }
kim2024addressing
arxiv-660323
2409.14192
Knowledge in Triples for LLMs: Enhancing Table QA Accuracy with Semantic Extraction
<|reference_start|>Knowledge in Triples for LLMs: Enhancing Table QA Accuracy with Semantic Extraction: Integrating structured knowledge from tabular formats poses significant challenges within natural language processing (NLP), mainly when dealing with complex, semi-structured tables like those found in the FeTaQA dataset. These tables require advanced methods to interpret and generate meaningful responses accurately. Traditional approaches, such as SQL and SPARQL, often fail to fully capture the semantics of such data, especially in the presence of irregular table structures like web tables. This paper addresses these challenges by proposing a novel approach that extracts triples directly from tabular data and integrates them with a retrieval-augmented generation (RAG) model to enhance the accuracy, coherence, and contextual richness of responses generated by a fine-tuned GPT-3.5-turbo-0125 model. Our approach significantly outperforms existing baselines on the FeTaQA dataset, particularly excelling in Sacre-BLEU and ROUGE metrics. It effectively generates contextually accurate and detailed long-form answers from tables, showcasing its strength in complex data interpretation.<|reference_end|>
arxiv
@article{sholehrasa2024knowledge, title={Knowledge in Triples for LLMs: Enhancing Table QA Accuracy with Semantic Extraction}, author={Hossein Sholehrasa, Sanaz Saki Norouzi, Pascal Hitzler, Majid Jaberi-Douraki}, journal={arXiv preprint arXiv:2409.14192}, year={2024}, archivePrefix={arXiv}, eprint={2409.14192}, primaryClass={cs.CL cs.IR} }
sholehrasa2024knowledge
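A hedged sketch of the "triples extracted directly from tabular data" idea above, using the first column as the subject and column headers as predicates; this convention and the pandas-based helper are illustrative assumptions, not necessarily the authors' extraction rule.

```python
# Turn a table into (subject, predicate, object) triples for RAG-style indexing.
import pandas as pd

def table_to_triples(df: pd.DataFrame, subject_col=None):
    subject_col = subject_col or df.columns[0]
    triples = []
    for _, row in df.iterrows():
        for col in df.columns:
            if col != subject_col and pd.notna(row[col]):
                triples.append((str(row[subject_col]), str(col), str(row[col])))
    return triples

# usage:
# table_to_triples(pd.DataFrame({"Team": ["A", "B"], "Wins": [10, 7]}))
# -> [('A', 'Wins', '10'), ('B', 'Wins', '7')]
```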
arxiv-660324
2409.14194
Data-Driven Approach to assess and identify gaps in healthcare set up in South Asia
<|reference_start|>Data-Driven Approach to assess and identify gaps in healthcare set up in South Asia: Primary healthcare is a crucial strategy for achieving universal health coverage. South Asian countries are working to improve their primary healthcare system through their country specific policies designed in line with WHO health system framework using the six thematic pillars: Health Financing, Health Service delivery, Human Resource for Health, Health Information Systems, Governance, Essential Medicines and Technology, and an addition area of Cross-Sectoral Linkages. Measuring the current accessibility of healthcare facilities and workforce availability is essential for improving healthcare standards and achieving universal health coverage in developing countries. Data-driven surveillance approaches are required that can provide rapid, reliable, and geographically scalable solutions to understand a) which communities and areas are most at risk of inequitable access and when, b) what barriers to health access exist, and c) how they can be overcome in ways tailored to the specific challenges faced by individual communities. We propose to harness current breakthroughs in Earth-observation (EO) technology, which provide the ability to generate accurate, up-to-date, publicly accessible, and reliable data, which is necessary for equitable access planning and resource allocation to ensure that vaccines, and other interventions reach everyone, particularly those in greatest need, during normal and crisis times. This requires collaboration among countries to identify evidence based solutions to shape health policy and interventions, and drive innovations and research in the region.<|reference_end|>
arxiv
@article{elahi2024data-driven, title={Data-Driven Approach to assess and identify gaps in healthcare set up in South Asia}, author={Rusham Elahi, Zia Tahseen, Tehreem Fatima, Syed Wafa Zahra, Hafiz Muhammad Abubakar, Tehreem Zafar, Aqs Younas, Muhammad Talha Quddoos, Usman Nazir}, journal={ICONIP 2024}, year={2024}, archivePrefix={arXiv}, eprint={2409.14194}, primaryClass={cs.CY cs.AI} }
elahi2024data-driven
arxiv-660325
2409.14195
The Imperative of Conversation Analysis in the Era of LLMs: A Survey of Tasks, Techniques, and Trends
<|reference_start|>The Imperative of Conversation Analysis in the Era of LLMs: A Survey of Tasks, Techniques, and Trends: In the era of large language models (LLMs), a vast amount of conversation logs will be accumulated thanks to the rapid development trend of language UI. Conversation Analysis (CA) strives to uncover and analyze critical information from conversation data, streamlining manual processes and supporting business insights and decision-making. The need for CA to extract actionable insights and drive empowerment is becoming increasingly prominent and attracting widespread attention. However, the lack of a clear scope for CA leads to a dispersion of various techniques, making it difficult to form a systematic technical synergy to empower business applications. In this paper, we perform a thorough review and systematize the CA task to summarize the existing related work. Specifically, we formally define the CA task to confront the fragmented and chaotic landscape in this field, and derive four key steps of CA: conversation scene reconstruction, in-depth attribution analysis, targeted training, and finally conversation generation based on the targeted training to achieve specific goals. In addition, we showcase the relevant benchmarks, discuss potential challenges and point out future directions in both industry and academia. In view of current advancements, it is evident that the majority of efforts are still concentrated on the analysis of shallow conversation elements, which presents a considerable gap between research and business, and with the assistance of LLMs, recent work has shown a trend towards research on causality and strategic tasks, which are sophisticated and high-level. The analyzed experiences and insights will inevitably have broader application value in business operations that target conversation logs.<|reference_end|>
arxiv
@article{zhang2024the, title={The Imperative of Conversation Analysis in the Era of LLMs: A Survey of Tasks, Techniques, and Trends}, author={Xinghua Zhang, Haiyang Yu, Yongbin Li, Minzheng Wang, Longze Chen, Fei Huang}, journal={arXiv preprint arXiv:2409.14195}, year={2024}, archivePrefix={arXiv}, eprint={2409.14195}, primaryClass={cs.CL} }
zhang2024the
arxiv-660326
2409.14196
Adversarial and Reactive Traffic Agents for Realistic Driving Simulation
<|reference_start|>Adversarial and Reactive Traffic Agents for Realistic Driving Simulation: Despite advancements in perception and planning for autonomous vehicles (AVs), validating their performance remains a significant challenge. The deployment of planning algorithms in real-world environments is often ineffective due to discrepancies between simulations and real traffic conditions. Evaluating AVs planning algorithms in simulation typically involves replaying driving logs from recorded real-world traffic. However, agents replayed from offline data are not reactive, lack the ability to respond to arbitrary AV behavior, and cannot behave in an adversarial manner to test certain properties of the driving policy. Therefore, simulation with realistic and potentially adversarial agents represents a critical task for AV planning software validation. In this work, we aim to review current research efforts in the field of adversarial and reactive traffic agents, with a particular focus on the application of classical and adversarial learning-based techniques. The objective of this work is to categorize existing approaches based on the proposed scenario controllability, defined by the number of reactive or adversarial agents. Moreover, we examine existing traffic simulations with respect to their employed default traffic agents and potential extensions, collate datasets that provide initial driving data, and collect relevant evaluation metrics.<|reference_end|>
arxiv
@article{ransiek2024adversarial, title={Adversarial and Reactive Traffic Agents for Realistic Driving Simulation}, author={Joshua Ransiek, Philipp Reis, Eric Sax}, journal={arXiv preprint arXiv:2409.14196}, year={2024}, archivePrefix={arXiv}, eprint={2409.14196}, primaryClass={cs.RO} }
ransiek2024adversarial
arxiv-660327
2409.14197
Advancing Employee Behavior Analysis through Synthetic Data: Leveraging ABMs, GANs, and Statistical Models for Enhanced Organizational Efficiency
<|reference_start|>Advancing Employee Behavior Analysis through Synthetic Data: Leveraging ABMs, GANs, and Statistical Models for Enhanced Organizational Efficiency: Success in today's data-driven corporate climate requires a deep understanding of employee behavior. Companies aim to improve employee satisfaction, boost output, and optimize workflow. This research study delves into creating synthetic data, a powerful tool that allows us to comprehensively understand employee performance, flexibility, cooperation, and team dynamics. Synthetic data provides a detailed and accurate picture of employee activities while protecting individual privacy thanks to cutting-edge methods like agent-based models (ABMs), Generative Adversarial Networks (GANs), and statistical models. Through the creation of multiple situations, this method offers insightful viewpoints regarding increasing teamwork, improving adaptability, and accelerating overall productivity. We examine how synthetic data has evolved from a specialized field to an essential resource for researching employee behavior and enhancing management efficiency. Keywords: Agent-Based Model, Generative Adversarial Network, workflow optimization, organizational success<|reference_end|>
arxiv
@article{jayashankar2024advancing, title={Advancing Employee Behavior Analysis through Synthetic Data: Leveraging ABMs, GANs, and Statistical Models for Enhanced Organizational Efficiency}, author={Rakshitha Jayashankar, Mahesh Balan}, journal={arXiv preprint arXiv:2409.14197}, year={2024}, archivePrefix={arXiv}, eprint={2409.14197}, primaryClass={cs.LG cs.FL stat.OT} }
jayashankar2024advancing
arxiv-660328
2409.14198
A Sinkhorn Regularized Adversarial Network for Image Guided DEM Super-resolution using Frequency Selective Hybrid Graph Transformer
<|reference_start|>A Sinkhorn Regularized Adversarial Network for Image Guided DEM Super-resolution using Frequency Selective Hybrid Graph Transformer: Digital Elevation Model (DEM) is an essential aspect in the remote sensing (RS) domain to analyze various applications related to surface elevations. Here, we address the generation of high-resolution (HR) DEMs using HR multi-spectral (MX) satellite imagery as a guide by introducing a novel hybrid transformer model consisting of Densely connected Multi-Residual Block (DMRB) and multi-headed Frequency Selective Graph Attention (M-FSGA). To promptly regulate this process, we utilize the notion of discriminator spatial maps as the conditional attention to the MX guide. Further, we present a novel adversarial objective related to optimizing Sinkhorn distance with classical GAN. In this regard, we provide both theoretical and empirical substantiation of better performance in terms of vanishing gradient issues and numerical convergence. Based on our experiments on 4 different DEM datasets, we demonstrate both qualitative and quantitative comparisons with available baseline methods and show that the performance of our proposed model is superior to others with sharper details and minimal errors.<|reference_end|>
arxiv
@article{paul2024a, title={A Sinkhorn Regularized Adversarial Network for Image Guided DEM Super-resolution using Frequency Selective Hybrid Graph Transformer}, author={Subhajit Paul, Ashutosh Gupta}, journal={International Conference on Pattern Recognition (ICPR), 2024}, year={2024}, archivePrefix={arXiv}, eprint={2409.14198}, primaryClass={eess.IV cs.CV} }
paul2024a
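For readers unfamiliar with the Sinkhorn distance used in the adversarial objective above, the classical entropy-regularized optimal transport computation via Sinkhorn-Knopp scaling looks as follows; this is standard background, not the paper's exact loss.

```python
# Entropy-regularized optimal transport via Sinkhorn-Knopp iterations.
import numpy as np

def sinkhorn_cost(a, b, C, eps=0.1, n_iters=200):
    """a, b: histograms summing to 1; C: pairwise cost matrix; returns the transport cost."""
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):          # alternate scaling to match the two marginals
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # approximate optimal transport plan
    return float(np.sum(P * C))

# usage:
# C = np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0)))
# sinkhorn_cost(np.ones(4) / 4, np.ones(4) / 4, C)
```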
arxiv-660329
2409.14199
Loop-Residual Neural Networks for Iterative Refinement
<|reference_start|>Loop-Residual Neural Networks for Iterative Refinement: The success of large-scale language models like GPT can be attributed to their ability to efficiently predict the next token in a sequence. However, these models rely on constant computational effort regardless of the complexity of the token they are predicting, lacking the capacity for iterative refinement. In this paper, we introduce a novel Loop-Residual Neural Network, which achieves better performance by utilizing longer computational time without increasing the model size. Our approach revisits the input multiple times, refining the prediction by iteratively looping over a subset of the model with residual connections. We demonstrate the effectiveness of this method through experiments comparing versions of GPT-2 with our Loop-Residual models, showing improved performance in language modeling tasks while maintaining similar parameter counts. Importantly, these improvements are achieved without the need for extra training data.<|reference_end|>
arxiv
@article{ng2024loop, title={Loop Neural Networks for Parameter Sharing}, author={Kei-Sing Ng, Qingchen Wang}, journal={arXiv preprint arXiv:2409.14199}, year={2024}, archivePrefix={arXiv}, eprint={2409.14199}, primaryClass={cs.AI} }
ng2024loop
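The core mechanism described above (revisiting the input by looping one block with residual connections instead of stacking more layers) reduces to a few lines. This PyTorch sketch illustrates the idea under assumed shapes and block contents; it is not the authors' implementation.

```python
# Loop-Residual idea: reuse a single residual block for n_loops refinement steps.
import torch
import torch.nn as nn

class LoopResidual(nn.Module):
    def __init__(self, d_model: int = 256, n_loops: int = 4):
        super().__init__()
        self.block = nn.Sequential(
            nn.LayerNorm(d_model), nn.Linear(d_model, d_model), nn.GELU(),
            nn.Linear(d_model, d_model))
        self.n_loops = n_loops

    def forward(self, x):
        for _ in range(self.n_loops):
            x = x + self.block(x)  # iterative refinement via a residual update
        return x

# usage: LoopResidual()(torch.randn(2, 10, 256)).shape  # -> torch.Size([2, 10, 256])
```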
arxiv-660330
2409.14200
Data-centric NLP Backdoor Defense from the Lens of Memorization
<|reference_start|>Data-centric NLP Backdoor Defense from the Lens of Memorization: Backdoor attack is a severe threat to the trustworthiness of DNN-based language models. In this paper, we first extend the definition of memorization of language models from sample-wise to more fine-grained sentence element-wise (e.g., word, phrase, structure, and style), and then point out that language model backdoors are a type of element-wise memorization. Through further analysis, we find that the strength of such memorization is positively correlated to the frequency of duplicated elements in the training dataset. In conclusion, duplicated sentence elements are necessary for successful backdoor attacks. Based on this, we propose a data-centric defense. We first detect trigger candidates in training data by finding memorizable elements, i.e., duplicated elements, and then confirm real triggers by testing if the candidates can activate backdoor behaviors (i.e., malicious elements). Results show that our method outperforms state-of-the-art defenses in defending against different types of NLP backdoors.<|reference_end|>
arxiv
@article{wang2024data-centric, title={Data-centric NLP Backdoor Defense from the Lens of Memorization}, author={Zhenting Wang, Zhizhi Wang, Mingyu Jin, Mengnan Du, Juan Zhai, Shiqing Ma}, journal={arXiv preprint arXiv:2409.14200}, year={2024}, archivePrefix={arXiv}, eprint={2409.14200}, primaryClass={cs.CL cs.CR cs.LG} }
wang2024data-centric
arxiv-660331
2409.14201
LATTE: Improving Latex Recognition for Tables and Formulae with Iterative Refinement
<|reference_start|>LATTE: Improving Latex Recognition for Tables and Formulae with Iterative Refinement: Portable Document Format (PDF) files are dominantly used for storing and disseminating scientific research, legal documents, and tax information. LaTeX is a popular application for creating PDF documents. Despite its advantages, LaTeX is not WYSIWYG -- what you see is what you get, i.e., the LaTeX source and rendered PDF images look drastically different, especially for formulae and tables. This gap makes it hard to modify or export LaTeX sources for formulae and tables from PDF images, and existing work is still limited. First, prior work generates LaTeX sources in a single iteration and struggles with complex LaTeX formulae. Second, existing work mainly recognizes and extracts LaTeX sources for formulae and is incapable or ineffective for tables. This paper proposes LATTE, the first iterative refinement framework for LaTeX recognition. Specifically, we propose delta-view as feedback, which compares and pinpoints the differences between a pair of rendered images of the extracted LaTeX source and the expected correct image. Such delta-view feedback enables our fault localization model to localize the faulty parts of the incorrect recognition more accurately and enables our LaTeX refinement model to repair the incorrect extraction more accurately. LATTE improves the LaTeX source extraction accuracy of both LaTeX formulae and tables, outperforming existing techniques as well as GPT-4V by at least 7.07% in exact match, with a success refinement rate of 46.08% (formula) and 25.51% (table).<|reference_end|>
arxiv
@article{jiang2024latte:, title={LATTE: Improving Latex Recognition for Tables and Formulae with Iterative Refinement}, author={Nan Jiang, Shanchao Liang, Chengxiao Wang, Jiannan Wang, Lin Tan}, journal={arXiv preprint arXiv:2409.14201}, year={2024}, archivePrefix={arXiv}, eprint={2409.14201}, primaryClass={cs.CV} }
jiang2024latte:
arxiv-660332
2409.14204
UniMo: Universal Motion Correction For Medical Images without Network Retraining
<|reference_start|>UniMo: Universal Motion Correction For Medical Images without Network Retraining: In this paper, we introduce a Universal Motion Correction (UniMo) framework, leveraging deep neural networks to tackle the challenges of motion correction across diverse imaging modalities. Our approach employs advanced neural network architectures with equivariant filters, overcoming the limitations of current models that require iterative inference or retraining for new image modalities. UniMo enables one-time training on a single modality while maintaining high stability and adaptability for inference across multiple unseen image modalities. We developed a joint learning framework that integrates multimodal knowledge from both shape and images that faithfully improve motion correction accuracy despite image appearance variations. UniMo features a geometric deformation augmenter that enhances the robustness of global motion correction by addressing any local deformations whether they are caused by object deformations or geometric distortions, and also generates augmented data to improve the training process. Our experimental results, conducted on various datasets with four different image modalities, demonstrate that UniMo surpasses existing motion correction methods in terms of accuracy. By offering a comprehensive solution to motion correction, UniMo marks a significant advancement in medical imaging, especially in challenging applications with wide ranges of motion, such as fetal imaging. The code for this work is available online, https://github.com/IntelligentImaging/UNIMO/.<|reference_end|>
arxiv
@article{wang2024unimo:, title={UniMo: Universal Motion Correction For Medical Images without Network Retraining}, author={Jian Wang, Razieh Faghihpirayesh, Danny Joca, Polina Golland, Ali Gholipour}, journal={arXiv preprint arXiv:2409.14204}, year={2024}, archivePrefix={arXiv}, eprint={2409.14204}, primaryClass={eess.IV cs.CV} }
wang2024unimo:
arxiv-660333
2409.14205
Egocentric zone-aware action recognition across environments
<|reference_start|>Egocentric zone-aware action recognition across environments: Human activities exhibit a strong correlation between actions and the places where these are performed, such as washing something at a sink. More specifically, in daily living environments we may identify particular locations, hereinafter named activity-centric zones, which may afford a set of homogeneous actions. Their knowledge can serve as a prior to favor vision models to recognize human activities. However, the appearance of these zones is scene-specific, limiting the transferability of this prior information to unfamiliar areas and domains. This problem is particularly relevant in egocentric vision, where the environment takes up most of the image, making it even more difficult to separate the action from the context. In this paper, we discuss the importance of decoupling the domain-specific appearance of activity-centric zones from their universal, domain-agnostic representations, and show how the latter can improve the cross-domain transferability of Egocentric Action Recognition (EAR) models. We validate our solution on the EPIC-Kitchens-100 and Argo1M datasets.<|reference_end|>
arxiv
@article{peirone2024egocentric, title={Egocentric zone-aware action recognition across environments}, author={Simone Alberto Peirone, Gabriele Goletto, Mirco Planamente, Andrea Bottino, Barbara Caputo, Giuseppe Averta}, journal={arXiv preprint arXiv:2409.14205}, year={2024}, archivePrefix={arXiv}, eprint={2409.14205}, primaryClass={cs.CV} }
peirone2024egocentric
arxiv-660334
2409.14206
AI Assistants for Spaceflight Procedures: Combining Generative Pre-Trained Transformer and Retrieval-Augmented Generation on Knowledge Graphs With Augmented Reality Cues
<|reference_start|>AI Assistants for Spaceflight Procedures: Combining Generative Pre-Trained Transformer and Retrieval-Augmented Generation on Knowledge Graphs With Augmented Reality Cues: This paper describes the capabilities and potential of the intelligent personal assistant (IPA) CORE (Checklist Organizer for Research and Exploration), designed to support astronauts during procedures onboard the International Space Station (ISS), the Lunar Gateway station, and beyond. We reflect on the importance of a reliable and flexible assistant capable of offline operation and highlight the usefulness of audiovisual interaction using augmented reality elements to intuitively display checklist information. We argue that current approaches to the design of IPAs in space operations fall short of meeting these criteria. Therefore, we propose CORE as an assistant that combines Knowledge Graphs (KGs), Retrieval-Augmented Generation (RAG) for a Generative Pre-Trained Transformer (GPT), and Augmented Reality (AR) elements to ensure an intuitive understanding of procedure steps, reliability, offline availability, and flexibility in terms of response style and procedure updates.<|reference_end|>
arxiv
@article{bensch2024ai, title={AI Assistants for Spaceflight Procedures: Combining Generative Pre-Trained Transformer and Retrieval-Augmented Generation on Knowledge Graphs With Augmented Reality Cues}, author={Oliver Bensch, Leonie Bensch, Tommy Nilsson, Florian Saling, Bernd Bewer, Sophie Jentzsch, Tobias Hecking, J. Nathan Kutz}, journal={arXiv preprint arXiv:2409.14206}, year={2024}, archivePrefix={arXiv}, eprint={2409.14206}, primaryClass={cs.AI cs.HC} }
bensch2024ai
arxiv-660335
2409.14207
Stabilization of vertical motion of a vehicle on bumpy terrain using deep reinforcement learning
<|reference_start|>Stabilization of vertical motion of a vehicle on bumpy terrain using deep reinforcement learning: Stabilizing vertical dynamics for on-road and off-road vehicles is an important research area that has been looked at mostly from the point of view of ride comfort. The advent of autonomous vehicles now shifts the focus more towards developing stabilizing techniques from the point of view of onboard proprioceptive and exteroceptive sensors whose real-time measurements influence the performance of an autonomous vehicle. The current solutions to this problem of managing the vertical oscillations usually limit themselves to the realm of active suspension systems without much consideration to modulating the vehicle velocity, which plays an important role by virtue of the fact that vertical and longitudinal dynamics of a ground vehicle are coupled. The task of stabilizing vertical oscillations for military ground vehicles becomes even more challenging due to the lack of structured environments, like city roads or highways, in off-road scenarios. Moreover, changes in structural parameters of the vehicle, such as mass (due to changes in vehicle loading), suspension stiffness and damping values can have a significant effect on the controller's performance. This creates the need for deep learning based control policies that can take into account an extremely large number of input features and approximate a near optimal control action. In this work, these problems are addressed by training a deep reinforcement learning agent to minimize the vertical acceleration of a scaled vehicle travelling over bumps by controlling its velocity.<|reference_end|>
arxiv
@article{salvi2024stabilization, title={Stabilization of vertical motion of a vehicle on bumpy terrain using deep reinforcement learning}, author={Ameya Salvi, John Coleman, Jake Buzhardt, Venkat Krovi, Phanindra Tallapragada}, journal={arXiv preprint arXiv:2409.14207}, year={2024}, doi={10.1016/j.ifacol.2022.11.197}, archivePrefix={arXiv}, eprint={2409.14207}, primaryClass={cs.RO} }
salvi2024stabilization
arxiv-660336
2409.14209
A Polynomial Kernel for Deletion to the Scattered Class of Cliques and Trees
<|reference_start|>A Polynomial Kernel for Deletion to the Scattered Class of Cliques and Trees: The class of graph deletion problems has been extensively studied in theoretical computer science, particularly in the field of parameterized complexity. Recently, a new notion of graph deletion problems was introduced, called deletion to scattered graph classes, where after deletion, each connected component of the graph should belong to at least one of the given graph classes. While fixed-parameter algorithms were given for a wide variety of problems, little progress has been made on the kernelization complexity of any of them. In this paper, we present the first non-trivial polynomial kernel for one such deletion problem, where, after deletion, each connected component should be a clique or a tree - that is, as dense as possible or as sparse as possible (while being connected). We develop a kernel consisting of O(k^5) vertices for this problem.<|reference_end|>
arxiv
@article{jacob2024a, title={A Polynomial Kernel for Deletion to the Scattered Class of Cliques and Trees}, author={Ashwin Jacob, Diptapriyo Majumdar, Meirav Zehavi}, journal={arXiv preprint arXiv:2409.14209}, year={2024}, archivePrefix={arXiv}, eprint={2409.14209}, primaryClass={cs.DS cs.DM} }
jacob2024a
arxiv-660337
2409.14215
@Bench: Benchmarking Vision-Language Models for Human-centered Assistive Technology
<|reference_start|>@Bench: Benchmarking Vision-Language Models for Human-centered Assistive Technology: As Vision-Language Models (VLMs) advance, human-centered Assistive Technologies (ATs) for helping People with Visual Impairments (PVIs) are evolving into generalists, capable of performing multiple tasks simultaneously. However, benchmarking VLMs for ATs remains under-explored. To bridge this gap, we first create a novel AT benchmark (@Bench). Guided by a pre-design user study with PVIs, our benchmark includes the five most crucial vision-language tasks: Panoptic Segmentation, Depth Estimation, Optical Character Recognition (OCR), Image Captioning, and Visual Question Answering (VQA). Besides, we propose a novel AT model (@Model) that addresses all tasks simultaneously and can be expanded to more assistive functions for helping PVIs. Our framework exhibits outstanding performance across tasks by integrating multi-modal information, and it offers PVIs a more comprehensive assistance. Extensive experiments prove the effectiveness and generalizability of our framework.<|reference_end|>
arxiv
@article{jiang2024@bench:, title={@Bench: Benchmarking Vision-Language Models for Human-centered Assistive Technology}, author={Xin Jiang, Junwei Zheng, Ruiping Liu, Jiahang Li, Jiaming Zhang, Sven Matthiesen, and Rainer Stiefelhagen}, journal={arXiv preprint arXiv:2409.14215}, year={2024}, archivePrefix={arXiv}, eprint={2409.14215}, primaryClass={cs.CV} }
jiang2024@bench:
arxiv-660338
2409.14216
R-AIF: Solving Sparse-Reward Robotic Tasks from Pixels with Active Inference and World Models
<|reference_start|>R-AIF: Solving Sparse-Reward Robotic Tasks from Pixels with Active Inference and World Models: Although research has produced promising results demonstrating the utility of active inference (AIF) in Markov decision processes (MDPs), there is relatively less work that builds AIF models in the context of environments and problems that take the form of partially observable Markov decision processes (POMDPs). In POMDP scenarios, the agent must infer the unobserved environmental state from raw sensory observations, e.g., pixels in an image. Additionally, less work exists in examining the most difficult form of POMDP-centered control: continuous action space POMDPs under sparse reward signals. In this work, we address issues facing the AIF modeling paradigm by introducing novel prior preference learning techniques and self-revision schedules to help the agent excel in sparse-reward, continuous action, goal-based robotic control POMDP environments. Empirically, we show that our agents offer improved performance over state-of-the-art models in terms of cumulative rewards, relative stability, and success rate. The code in support of this work can be found at https://github.com/NACLab/robust-active-inference.<|reference_end|>
arxiv
@article{nguyen2024r-aif:, title={R-AIF: Solving Sparse-Reward Robotic Tasks from Pixels with Active Inference and World Models}, author={Viet Dung Nguyen, Zhizhuo Yang, Christopher L. Buckley, Alexander Ororbia}, journal={arXiv preprint arXiv:2409.14216}, year={2024}, archivePrefix={arXiv}, eprint={2409.14216}, primaryClass={cs.RO cs.AI cs.CV cs.LG} }
nguyen2024r-aif:
arxiv-660339
2409.14217
Revisiting BPR: A Replicability Study of a Common Recommender System Baseline
<|reference_start|>Revisiting BPR: A Replicability Study of a Common Recommender System Baseline: Bayesian Personalized Ranking (BPR), a collaborative filtering approach based on matrix factorization, frequently serves as a benchmark for recommender systems research. However, numerous studies often overlook the nuances of BPR implementation, claiming that it performs worse than newly proposed methods across various tasks. In this paper, we thoroughly examine the features of the BPR model, indicating their impact on its performance, and investigate open-source BPR implementations. Our analysis reveals inconsistencies between these implementations and the original BPR paper, leading to a significant decrease in performance of up to 50% for specific implementations. Furthermore, through extensive experiments on real-world datasets under modern evaluation settings, we demonstrate that with proper tuning of its hyperparameters, the BPR model can achieve performance levels close to state-of-the-art methods on the top-n recommendation tasks and even outperform them on specific datasets. Specifically, on the Million Song Dataset, the BPR model with hyperparameter tuning statistically significantly outperforms Mult-VAE by 10% in NDCG@100 with binary relevance function.<|reference_end|>
arxiv
@article{milogradskii2024revisiting, title={Revisiting BPR: A Replicability Study of a Common Recommender System Baseline}, author={Aleksandr Milogradskii, Oleg Lashinin, Alexander P, Marina Ananyeva, Sergey Kolesnikov}, journal={arXiv preprint arXiv:2409.14217}, year={2024}, doi={10.1145/3640457.3688073}, archivePrefix={arXiv}, eprint={2409.14217}, primaryClass={cs.IR} }
milogradskii2024revisiting
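For readers unfamiliar with the baseline being replicated above, the standard BPR pairwise objective over a matrix-factorization scorer can be sketched as follows; the embedding dimension, learning rate, and regularization constant are illustrative, and this is not any of the implementations examined in the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BPRMF(nn.Module):
    """Matrix-factorization scorer trained with the BPR pairwise objective."""

    def __init__(self, n_users: int, n_items: int, dim: int = 32):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)

    def score(self, u, i):
        return (self.user(u) * self.item(i)).sum(-1)

    def bpr_loss(self, u, pos, neg, reg=1e-4):
        diff = self.score(u, pos) - self.score(u, neg)   # prefer observed over unobserved items
        loss = -F.logsigmoid(diff).mean()
        l2 = (self.user(u).pow(2) + self.item(pos).pow(2)
              + self.item(neg).pow(2)).sum(-1).mean()    # L2 penalty on involved embeddings
        return loss + reg * l2

# toy usage: one optimization step on random (user, positive, negative) triples
model = BPRMF(n_users=100, n_items=500)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
u = torch.randint(0, 100, (64,))
pos = torch.randint(0, 500, (64,))
neg = torch.randint(0, 500, (64,))
opt.zero_grad()
loss = model.bpr_loss(u, pos, neg)
loss.backward()
opt.step()
print(float(loss))
```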
arxiv-660340
2409.14219
MEGA-PT: A Meta-Game Framework for Agile Penetration Testing
<|reference_start|>MEGA-PT: A Meta-Game Framework for Agile Penetration Testing: Penetration testing is an essential means of proactive defense in the face of escalating cybersecurity incidents. Traditional manual penetration testing methods are time-consuming, resource-intensive, and prone to human errors. Current trends in automated penetration testing are also impractical, facing significant challenges such as the curse of dimensionality, scalability issues, and lack of adaptability to network changes. To address these issues, we propose MEGA-PT, a meta-game penetration testing framework, featuring micro tactic games for node-level local interactions and a macro strategy process for network-wide attack chains. The micro- and macro-level modeling enables distributed, adaptive, collaborative, and fast penetration testing. MEGA-PT offers agile solutions for various security schemes, including optimal local penetration plans, purple teaming solutions, and risk assessment, providing fundamental principles to guide future automated penetration testing. Our experiments demonstrate the effectiveness and agility of our model by providing improved defense strategies and adaptability to changes at both local and network levels.<|reference_end|>
arxiv
@article{ge2024mega-pt:, title={MEGA-PT: A Meta-Game Framework for Agile Penetration Testing}, author={Yunfei Ge, Quanyan Zhu}, journal={arXiv preprint arXiv:2409.14219}, year={2024}, archivePrefix={arXiv}, eprint={2409.14219}, primaryClass={cs.CR cs.AI cs.GT} }
ge2024mega-pt:
arxiv-660341
2409.14220
Masks and Boxes: Combining the Best of Both Worlds for Multi-Object Tracking
<|reference_start|>Masks and Boxes: Combining the Best of Both Worlds for Multi-Object Tracking: Multi-object tracking (MOT) involves identifying and consistently tracking objects across video sequences. Traditional tracking-by-detection methods, while effective, often require extensive tuning and lack generalizability. On the other hand, segmentation mask-based methods are more generic but struggle with tracking management, making them unsuitable for MOT. We propose a novel approach, McByte, which incorporates a temporally propagated segmentation mask as a strong association cue within a tracking-by-detection framework. By combining bounding box and mask information, McByte enhances robustness and generalizability without per-sequence tuning. Evaluated on four benchmark datasets - DanceTrack, MOT17, SoccerNet-tracking 2022, and KITTI-tracking - McByte demonstrates performance gain in all cases examined. At the same time, it outperforms existing mask-based methods. Implementation code will be provided upon acceptance.<|reference_end|>
arxiv
@article{stanczyk2024masks, title={Masks and Boxes: Combining the Best of Both Worlds for Multi-Object Tracking}, author={Tomasz Stanczyk, Francois Bremond}, journal={arXiv preprint arXiv:2409.14220}, year={2024}, archivePrefix={arXiv}, eprint={2409.14220}, primaryClass={cs.CV} }
stanczyk2024masks
arxiv-660342
2409.14221
Strong Alone, Stronger Together: Synergizing Modality-Binding Foundation Models with Optimal Transport for Non-Verbal Emotion Recognition
<|reference_start|>Strong Alone, Stronger Together: Synergizing Modality-Binding Foundation Models with Optimal Transport for Non-Verbal Emotion Recognition: In this study, we investigate multimodal foundation models (MFMs) for emotion recognition from non-verbal sounds. We hypothesize that MFMs, with their joint pre-training across multiple modalities, will be more effective in non-verbal sounds emotion recognition (NVER) by better interpreting and differentiating subtle emotional cues that may be ambiguous in audio-only foundation models (AFMs). To validate our hypothesis, we extract representations from state-of-the-art (SOTA) MFMs and AFMs and evaluate them on benchmark NVER datasets. We also investigate the potential of combining selected foundation model representations to further enhance NVER, inspired by research in speech recognition and audio deepfake detection. To achieve this, we propose a framework called MATA (Intra-Modality Alignment through Transport Attention). Through MATA coupled with the combination of MFMs: LanguageBind and ImageBind, we report the best performance with accuracies of 76.47%, 77.40%, 75.12% and F1-scores of 70.35%, 76.19%, 74.63% for ASVP-ESD, JNV, and VIVAE datasets against individual FMs and baseline fusion techniques and report SOTA on the benchmark datasets.<|reference_end|>
arxiv
@article{phukan2024strong, title={Strong Alone, Stronger Together: Synergizing Modality-Binding Foundation Models with Optimal Transport for Non-Verbal Emotion Recognition}, author={Orchid Chetia Phukan, Mohd Mujtaba Akhtar, Girish, Swarup Ranjan Behera, Sishir Kalita, Arun Balaji Buduru, Rajesh Sharma and S.R Mahadeva Prasanna}, journal={arXiv preprint arXiv:2409.14221}, year={2024}, archivePrefix={arXiv}, eprint={2409.14221}, primaryClass={eess.AS cs.SD} }
phukan2024strong
arxiv-660343
2409.14222
Achieving $h$- and $p$-robust monolithic multigrid solvers for the Stokes equations
<|reference_start|>Achieving $h$- and $p$-robust monolithic multigrid solvers for the Stokes equations: The numerical analysis of higher-order mixed finite-element discretizations for saddle-point problems, such as the Stokes equations, has been well-studied in recent years. While the theory and practice of such discretizations is now well-understood, the same cannot be said for efficient preconditioners for solving the resulting linear (or linearized) systems of equations. In this work, we propose and study variants of the well-known Vanka relaxation scheme that lead to effective geometric multigrid preconditioners for both the conforming Taylor-Hood discretizations and non-conforming ${\bf H}(\text{div})$-$L^2$ discretizations of the Stokes equations. Numerical results demonstrate robust performance with respect to FGMRES iteration counts for increasing polynomial order for some of the considered discretizations, and expose open questions about stopping tolerances for effectively preconditioned iterations at high polynomial order.<|reference_end|>
arxiv
@article{rafiei2024achieving, title={Achieving $h$- and $p$-robust monolithic multigrid solvers for the Stokes equations}, author={Amin Rafiei, Scott MacLachlan}, journal={arXiv preprint arXiv:2409.14222}, year={2024}, archivePrefix={arXiv}, eprint={2409.14222}, primaryClass={math.NA cs.NA} }
rafiei2024achieving
arxiv-660344
2409.14223
Collaborative Human-AI Risk Annotation: Co-Annotating Online Incivility with CHAIRA
<|reference_start|>Collaborative Human-AI Risk Annotation: Co-Annotating Online Incivility with CHAIRA: Collaborative human-AI annotation is a promising approach for various tasks with large-scale and complex data. Tools and methods to support effective human-AI collaboration for data annotation are an important direction for research. In this paper, we present CHAIRA: a Collaborative Human-AI Risk Annotation tool that enables human and AI agents to collaboratively annotate online incivility. We leveraged Large Language Models (LLMs) to facilitate the interaction between human and AI annotators and examine four different prompting strategies. The developed CHAIRA system combines multiple prompting approaches with human-AI collaboration for online incivility data annotation. We evaluated CHAIRA on 457 user comments with ground truth labels based on the inter-rater agreement between human and AI coders. We found that the most collaborative prompt supported a high level of agreement between a human agent and AI, comparable to that of two human coders. While the AI missed some implicit incivility that human coders easily identified, it also spotted politically nuanced incivility that human coders overlooked. Our study reveals the benefits and challenges of using AI agents for incivility annotation and provides design implications and best practices for human-AI collaboration in subjective data annotation.<|reference_end|>
arxiv
@article{park2024collaborative, title={Collaborative Human-AI Risk Annotation: Co-Annotating Online Incivility with CHAIRA}, author={Jinkyung Katie Park, Rahul Dev Ellezhuthil, Pamela Wisniewski, Vivek Singh}, journal={arXiv preprint arXiv:2409.14223}, year={2024}, archivePrefix={arXiv}, eprint={2409.14223}, primaryClass={cs.HC} }
park2024collaborative
arxiv-660345
2409.14226
Current Trends and Future Directions for Sexual Health Conversational Agents (CAs) for Youth: A Scoping Review
<|reference_start|>Current Trends and Future Directions for Sexual Health Conversational Agents (CAs) for Youth: A Scoping Review: Conversational Agents (CAs, chatbots) are systems with the ability to interact with users using natural human dialogue. While much of the research on CAs for sexual health has focused on adult populations, the insights from such research may not apply to CAs for youth. The study aimed to comprehensively evaluate the state-of-the-art research on sexual health CAs for youth. Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we synthesized peer-reviewed studies specific to sexual health CAs designed for youth over the past 14 years. We found that most sexual health CAs were designed to adopt the persona of health professionals to provide general sexual and reproductive health information for youth. Text was the primary communication mode in all sexual health CAs, with half supporting multimedia output. Many sexual health CAs employed rule-based techniques to deliver pre-written expert knowledge on sexual health; yet most sexual health CAs did not have safety features in place. While youth appreciated accessibility to non-judgmental and confidential conversations about sexual health topics, they perceived that current sexual health CAs provided limited sexual health information that is not inclusive of sexual and/or gender minorities. Our review brings to light the need for further development and evaluation of sexual health CAs, and we identify multiple important areas for future work. While the new trend of large language model (LLM) based CAs can make such technologies more feasible, the privacy and safety of the systems should be prioritized. Finally, best practices for risk mitigation and ethical development of sexual health CAs with and for youth are needed.<|reference_end|>
arxiv
@article{park2024current, title={Current Trends and Future Directions for Sexual Health Conversational Agents (CAs) for Youth: A Scoping Review}, author={Jinkyung Katie Park, Vivek Singh, Pamela Wisniewski}, journal={arXiv preprint arXiv:2409.14226}, year={2024}, archivePrefix={arXiv}, eprint={2409.14226}, primaryClass={cs.HC} }
park2024current
arxiv-660346
2409.14227
Characterizing graph-nonedge pairs with single interval Cayley configuration spaces in 3-dimensions
<|reference_start|>Characterizing graph-nonedge pairs with single interval Cayley configuration spaces in 3-dimensions: A linkage $(G,\ell)$ is a pair containing a finite simple undirected graph $G$ and a squared edge-length map $\ell$ that assigns squared Euclidean lengths to the edges of $G$. A $d$-realization of $(G,\ell)$ is an assignment of points in $\mathbb{R}^d$ to the vertices of $G$ for which pair-wise distances between points agree with $\ell$. For $d \leq 3$, we characterize pairs $(G,f)$, where $f$ is a nonedge of $G$, such that, for any squared edge-length map $\ell$, there is a single interval of attained distance values between the endpoints of $f$ over all $d$-realizations of $(G,\ell)$, answering a question posed in \cite{sitharam2010characterizing} a decade ago, which gave an equivalent characterization for $d\le 2$ that does not generalize to $d\ge 3$. Two notable byproducts of this characterization are a new tool for partial $3$-tree completion, a well-studied problem, and a deeper understanding of the realization space of partial $3$-tree linkages through the lens of Cayley realization spaces. Although related to the minor closed class of $3$-flattenable graphs, the class of pairs $(G,f)$ with the above property is not minor closed, has no obvious well quasi-ordering, and there are infinitely many minimal graphs-nonedge pairs - w.r.t. edge contractions - in the complement class. Our characterization overcomes these obstacles, is based on the forbidden minors of the class of $3$-flattenable graphs, and contributes to the theory of Cayley configurations used for analyzing distance-constrained configuration spaces in a range of application domains. Generalizations to higher dimensions and efficient algorithmic characterizations are conjectured.<|reference_end|>
arxiv
@article{sims2024characterizing, title={Characterizing graph-nonedge pairs with single interval Cayley configuration spaces in 3-dimensions}, author={William Sims and Meera Sitharam}, journal={arXiv preprint arXiv:2409.14227}, year={2024}, archivePrefix={arXiv}, eprint={2409.14227}, primaryClass={cs.CG math.CO} }
sims2024characterizing
arxiv-660347
2409.14228
Mentigo: An Intelligent Agent for Mentoring Students in the Creative Problem Solving Process
<|reference_start|>Mentigo: An Intelligent Agent for Mentoring Students in the Creative Problem Solving Process: With the increasing integration of large language models (LLMs) in education, there is growing interest in using AI agents to support student learning in creative tasks. This study presents an interactive Mentor Agent system named Mentigo, which is designed to assist middle school students in the creative problem solving (CPS) process. We created a comprehensive dataset of real classroom interactions between students and mentors, which includes structured CPS task management, diverse guidance techniques, and personalized feedback mechanisms. Based on this dataset, we create an agentic workflow for the Mentigo system. The system's effectiveness was evaluated through a comparative experiment with 12 students and reviewed by five expert teachers. The Mentigo system demonstrated significant improvements in student engagement and creative outcomes. The findings provide design implications for leveraging LLMs to support CPS and offer insights into the application of AI mentor agents in educational contexts.<|reference_end|>
arxiv
@article{zha2024mentigo:, title={Mentigo: An Intelligent Agent for Mentoring Students in the Creative Problem Solving Process}, author={Siyu Zha, Yujia Liu, Chengbo Zheng, Jiaqi XU, Fuze Yu, Jiangtao Gong and Yingqing XU}, journal={arXiv preprint arXiv:2409.14228}, year={2024}, archivePrefix={arXiv}, eprint={2409.14228}, primaryClass={cs.HC} }
zha2024mentigo:
arxiv-660348
2409.14231
Predicting Coronary Heart Disease Using a Suite of Machine Learning Models
<|reference_start|>Predicting Coronary Heart Disease Using a Suite of Machine Learning Models: Coronary Heart Disease affects millions of people worldwide and is a well-studied area of healthcare. There are many viable and accurate methods for the diagnosis and prediction of heart disease, but they have limitations such as invasiveness, late detection, or cost. Supervised learning via machine learning algorithms presents a low-cost (computationally speaking), non-invasive solution that can be a precursor for early diagnosis. In this study, we applied several well-known methods and benchmarked their performance against each other. It was found that Random Forest with oversampling of the predictor variable produced the highest accuracy of 84%.<|reference_end|>
arxiv
@article{al-karaki2024predicting, title={Predicting Coronary Heart Disease Using a Suite of Machine Learning Models}, author={Jamal Al-Karaki, Philip Ilono, Sanchit Baweja, Jalal Naghiyev, Raja Singh Yadav, Muhammad Al-Zafar Khan}, journal={arXiv preprint arXiv:2409.14231}, year={2024}, archivePrefix={arXiv}, eprint={2409.14231}, primaryClass={cs.AI} }
al-karaki2024predicting
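A minimal sketch of the kind of pipeline the abstract above describes — a random forest fit after naive oversampling, interpreted here as oversampling the minority class — using synthetic stand-in data rather than the clinical dataset used in the study; the estimator count and split settings are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

# synthetic, imbalanced stand-in for a heart-disease table (13 features, ~20% positives)
X, y = make_classification(n_samples=1000, n_features=13, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# oversample the minority class until the training split is balanced
minority = y_tr == 1
X_min, y_min = resample(X_tr[minority], y_tr[minority],
                        n_samples=int((~minority).sum()), replace=True, random_state=0)
X_bal = np.vstack([X_tr[~minority], X_min])
y_bal = np.concatenate([y_tr[~minority], y_min])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```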
arxiv-660349
2409.14232
ReFine: Boosting Time Series Prediction of Extreme Events by Reweighting and Fine-tuning
<|reference_start|>ReFine: Boosting Time Series Prediction of Extreme Events by Reweighting and Fine-tuning: Extreme events are of great importance since they often represent impactive occurrences. For instance, in terms of climate and weather, extreme events might be major storms, floods, extreme heat or cold waves, and more. However, they are often located at the tail of the data distribution. Consequently, accurately predicting these extreme events is challenging due to their rarity and irregularity. Prior studies have also referred to this as the out-of-distribution (OOD) problem, which occurs when the distribution of the test data is substantially different from that used for training. In this work, we propose two strategies, reweighting and fine-tuning, to tackle the challenge. Reweighting is a strategy used to force machine learning models to focus on extreme events, which is achieved by a weighted loss function that assigns greater penalties to the prediction errors for the extreme samples relative to those on the remainder of the data. Unlike previous intuitive reweighting methods based on simple heuristics of data distribution, we employ meta-learning to dynamically optimize these penalty weights. To further boost the performance on extreme samples, we start from the reweighted models and fine-tune them using only rare extreme samples. Through extensive experiments on multiple data sets, we empirically validate that our meta-learning-based reweighting outperforms existing heuristic ones, and the fine-tuning strategy can further increase the model performance. More importantly, these two strategies are model-agnostic, which can be implemented on any type of neural network for time series forecasting. The open-sourced code is available at \url{https://github.com/JimengShi/ReFine}.<|reference_end|>
arxiv
@article{shi2024refine:, title={ReFine: Boosting Time Series Prediction of Extreme Events by Reweighting and Fine-tuning}, author={Jimeng Shi, Azam Shirali, Giri Narasimhan}, journal={arXiv preprint arXiv:2409.14232}, year={2024}, archivePrefix={arXiv}, eprint={2409.14232}, primaryClass={cs.LG} }
shi2024refine:
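The reweighting idea in the abstract above — larger penalties on prediction errors for extreme samples — can be illustrated with a simple static weighting scheme; note that the paper optimizes these weights via meta-learning, so the fixed threshold and weight below are only hypothetical stand-ins for the baseline intuition:

```python
import torch

def reweighted_mse(pred, target, threshold, extreme_weight=5.0):
    """MSE with larger penalties on samples whose targets exceed a threshold.

    `threshold` and `extreme_weight` are illustrative stand-ins for the
    meta-learned per-sample weights described in the abstract.
    """
    weights = torch.where(target.abs() > threshold,
                          torch.full_like(target, extreme_weight),
                          torch.ones_like(target))
    return (weights * (pred - target) ** 2).mean()

# toy usage
pred = torch.randn(8)
target = torch.randn(8) * 3
print(float(reweighted_mse(pred, target, threshold=2.0)))
```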
arxiv-660350
2409.14235
Structure Learning via Mutual Information
<|reference_start|>Structure Learning via Mutual Information: This paper presents a novel approach to machine learning algorithm design based on information theory, specifically mutual information (MI). We propose a framework for learning and representing functional relationships in data using MI-based features. Our method aims to capture the underlying structure of information in datasets, enabling more efficient and generalizable learning algorithms. We demonstrate the efficacy of our approach through experiments on synthetic and real-world datasets, showing improved performance in tasks such as function classification, regression, and cross-dataset transfer. This work contributes to the growing field of metalearning and automated machine learning, offering a new perspective on how to leverage information theory for algorithm design and dataset analysis and proposing new mutual information theoretic foundations to learning algorithms.<|reference_end|>
arxiv
@article{nixon2024structure, title={Structure Learning via Mutual Information}, author={Jeremy Nixon}, journal={arXiv preprint arXiv:2409.14235}, year={2024}, archivePrefix={arXiv}, eprint={2409.14235}, primaryClass={cs.LG cs.IT math.IT} }
nixon2024structure
arxiv-660351
2409.14237
An Instance-based Plus Ensemble Learning Method for Classification of Scientific Papers
<|reference_start|>An Instance-based Plus Ensemble Learning Method for Classification of Scientific Papers: The exponential growth of scientific publications in recent years has posed a significant challenge in effective and efficient categorization. This paper introduces a novel approach that combines instance-based learning and ensemble learning techniques for classifying scientific papers into relevant research fields. Working with a classification system comprising a group of research fields, a number of typical seed papers are first manually allocated to each field. Then, for each paper that needs to be classified, we compare it with all the seed papers in every field. Contents and citations are considered separately. An ensemble-based method is then employed to make the final decision. Experimenting with the datasets from DBLP, our experimental results demonstrate that the proposed classification method is effective and efficient in categorizing papers into various research areas. We also find that both content and citation features are useful for the classification of scientific papers.<|reference_end|>
arxiv
@article{zhang2024an, title={An Instance-based Plus Ensemble Learning Method for Classification of Scientific Papers}, author={Fang Zhang and Shengli Wu}, journal={arXiv preprint arXiv:2409.14237}, year={2024}, archivePrefix={arXiv}, eprint={2409.14237}, primaryClass={cs.DL cs.AI} }
zhang2024an
arxiv-660352
2409.14240
Cloud Adversarial Example Generation for Remote Sensing Image Classification
<|reference_start|>Cloud Adversarial Example Generation for Remote Sensing Image Classification: Most existing adversarial attack methods for remote sensing images merely add adversarial perturbations or patches, resulting in unnatural modifications. Clouds are common atmospheric effects in remote sensing images. Generating clouds on these images can produce adversarial examples better aligning with human perception. In this paper, we propose a Perlin noise based cloud generation attack method. Common Perlin noise based cloud generation is a random, non-optimizable process, which cannot be directly used to attack the target models. We design a Perlin Gradient Generator Network (PGGN), which takes a gradient parameter vector as input and outputs the grids of Perlin noise gradient vectors at different scales. After a series of computations based on the gradient vectors, cloud masks at corresponding scales can be produced. These cloud masks are then weighted and summed depending on a mixing coefficient vector and a scaling factor to produce the final cloud masks. The gradient vector, coefficient vector and scaling factor are collectively represented as a cloud parameter vector, transforming the cloud generation into a black-box optimization problem. The Differential Evolution (DE) algorithm is employed to solve for the optimal solution of the cloud parameter vector, achieving a query-based black-box attack. Detailed experiments confirm that this method has strong attack capabilities and achieves high query efficiency. Additionally, we analyze the transferability of the generated adversarial examples and their robustness in adversarial defense scenarios.<|reference_end|>
arxiv
@article{ma2024cloud, title={Cloud Adversarial Example Generation for Remote Sensing Image Classification}, author={Fei Ma, Yuqiang Feng, Fan Zhang, Yongsheng Zhou}, journal={arXiv preprint arXiv:2409.14240}, year={2024}, archivePrefix={arXiv}, eprint={2409.14240}, primaryClass={cs.CV} }
ma2024cloud
arxiv-660353
2409.14241
Research Pearl: The ROSI Operating System Interface
<|reference_start|>Research Pearl: The ROSI Operating System Interface: This paper presents some preliminary results concerning a new user-friendly operating system interface based on the relational data model that is currently under development at the University of Texas at Austin. The premise of our work is that a relational model of the operating system environment will produce a user and programmer interface to the system that is easier to use, is easier to learn, and allows greater portability as compared with existing operating system interfaces. Our approach is to model elements of the operating system environment as relations and to model operating system commands as statements in a relational language. In adapting the relational model to an operating system environment, we found it necessary to extend the model and improve existing relational languages. The extensions to the relational model are designed to allow a more natural representation of elements of the environment. Our language extensions exploit the universal relation model and utilize the graphical capabilities of modern workstations. The nature of our investigations ranges from practical implementation issues to the more theoretical questions of modeling and language semantics.<|reference_end|>
arxiv
@article{soulé2024research, title={Research Pearl: The ROSI Operating System Interface}, author={Robert Soul'e and Peter Alvaro and Henry F. Korth and Abraham Silberschatz}, journal={arXiv preprint arXiv:2409.14241}, year={2024}, archivePrefix={arXiv}, eprint={2409.14241}, primaryClass={cs.DB} }
soulé2024research
arxiv-660354
2409.14242
Design of wavelet filter banks for any dilation using Extended Laplacian Pyramid Matrices
<|reference_start|>Design of wavelet filter banks for any dilation using Extended Laplacian Pyramid Matrices: In this paper, we present a new method for designing wavelet filter banks for any dilation matrices and in any dimension. Our approach utilizes extended Laplacian pyramid matrices to achieve this flexibility. By generalizing recent tight wavelet frame construction methods based on the sum of squares representation, we introduce the sum of vanishing products (SVP) condition, which is significantly easier to satisfy. These flexible design methods rely on our main results, which establish the equivalence between the SVP and mixed unitary extension principle conditions. Additionally, we provide illustrative examples to showcase our main findings.<|reference_end|>
arxiv
@article{hur2024design, title={Design of wavelet filter banks for any dilation using Extended Laplacian Pyramid Matrices}, author={Youngmi Hur and Sung Joo Kim}, journal={arXiv preprint arXiv:2409.14242}, year={2024}, archivePrefix={arXiv}, eprint={2409.14242}, primaryClass={cs.IT math.IT} }
hur2024design
arxiv-660355
2409.14244
Cross-course Process Mining of Student Clickstream Data -- Aggregation and Group Comparison
<|reference_start|>Cross-course Process Mining of Student Clickstream Data -- Aggregation and Group Comparison: This paper introduces novel methods for preparing and analyzing student interaction data extracted from course management systems like Moodle to facilitate process mining, like the creation of graphs that show the process flow. Such graphs can get very complex as Moodle courses can contain hundreds of different activities, which makes it difficult to compare the paths of different student cohorts. Moreover, existing research often confines its focus to individual courses, overlooking potential patterns that may transcend course boundaries. Our research addresses these challenges by implementing an automated dataflow that directly queries data from the Moodle database via SQL, offering the flexibility of filtering on individual courses if needed. In addition to analyzing individual Moodle activities, we explore patterns at an aggregated course section level. Furthermore, we present a method for standardizing section labels across courses, facilitating cross-course analysis to uncover broader usage patterns. Our findings reveal, among other insights, that higher-performing students demonstrate a propensity to engage more frequently with available activities and exhibit more dynamic movement between objects. While these patterns are discernible when analyzing individual course activity-events, they become more pronounced when aggregated to the section level and analyzed across multiple courses.<|reference_end|>
arxiv
@article{hildebrandt2024cross-course, title={Cross-course Process Mining of Student Clickstream Data -- Aggregation and Group Comparison}, author={Tobias Hildebrandt, Lars Mehnen}, journal={arXiv preprint arXiv:2409.14244}, year={2024}, archivePrefix={arXiv}, eprint={2409.14244}, primaryClass={cs.CY cs.HC} }
hildebrandt2024cross-course
arxiv-660356
2409.14245
Multi-objective Memetic Algorithm with Adaptive Weights for Inverse Antenna Design
<|reference_start|>Multi-objective Memetic Algorithm with Adaptive Weights for Inverse Antenna Design: This paper describes the modification of a single-objective algorithm into its multi-objective counterpart. The outcome is a considerable increase in speed in the order of tens to hundreds and the resulting Pareto front is of higher quality compared to conventional state-of-the-art automated inverse design setups. This advancement is possible thanks to a memetic algorithm combining a gradient-based search for local minima with heuristic optimization to maintain sufficient diversity. The local algorithm is based on rank-1 perturbations; the global algorithm is NSGA-II. An important advancement is the adaptive weighting of objective functions during optimization. The procedure is tested on three challenging examples dealing with both physical and topological metrics and multi-objective settings. The results are compared with standard techniques, and the superb performance of the proposed technique is reported. The implemented algorithm applies to antenna inverse design problems and is an efficient data miner for machine learning tools.<|reference_end|>
arxiv
@article{kadlec2024multi-objective, title={Multi-objective Memetic Algorithm with Adaptive Weights for Inverse Antenna Design}, author={Petr Kadlec and Miloslav Capek}, journal={arXiv preprint arXiv:2409.14245}, year={2024}, archivePrefix={arXiv}, eprint={2409.14245}, primaryClass={cs.NE} }
kadlec2024multi-objective
arxiv-660357
2409.14247
Repairs in a Block World: A New Benchmark for Handling User Corrections with Multi-Modal Language Models
<|reference_start|>Repairs in a Block World: A New Benchmark for Handling User Corrections with Multi-Modal Language Models: In dialogue, the addressee may initially misunderstand the speaker and respond erroneously, often prompting the speaker to correct the misunderstanding in the next turn with a Third Position Repair (TPR). The ability to process and respond appropriately to such repair sequences is thus crucial in conversational AI systems. In this paper, we first collect, analyse, and publicly release BlockWorld-Repairs: a dataset of multi-modal TPR sequences in an instruction-following manipulation task that is, by design, rife with referential ambiguity. We employ this dataset to evaluate several state-of-the-art Vision and Language Models (VLM) across multiple settings, focusing on their capability to process and accurately respond to TPRs and thus recover from miscommunication. We find that, compared to humans, all models significantly underperform in this task. We then show that VLMs can benefit from specialised losses targeting relevant tokens during fine-tuning, achieving better performance and generalising better to new scenarios. Our results suggest that these models are not yet ready to be deployed in multi-modal collaborative settings where repairs are common, and highlight the need to design training regimes and objectives that facilitate learning from interaction. Our code and data are available at www.github.com/JChiyah/blockworld-repairs<|reference_end|>
arxiv
@article{chiyah-garcia2024repairs, title={Repairs in a Block World: A New Benchmark for Handling User Corrections with Multi-Modal Language Models}, author={Javier Chiyah-Garcia, Alessandro Suglia, Arash Eshghi}, journal={arXiv preprint arXiv:2409.14247}, year={2024}, archivePrefix={arXiv}, eprint={2409.14247}, primaryClass={cs.CL cs.HC} }
chiyah-garcia2024repairs
arxiv-660358
2409.14248
Higher-order-ReLU-KANs (HRKANs) for solving physics-informed neural networks (PINNs) more accurately, robustly and faster
<|reference_start|>Higher-order-ReLU-KANs (HRKANs) for solving physics-informed neural networks (PINNs) more accurately, robustly and faster: Finding solutions to partial differential equations (PDEs) is an important and essential component in many scientific and engineering discoveries. One of the common approaches empowered by deep learning is Physics-informed Neural Networks (PINNs). Recently, a new type of fundamental neural network model, Kolmogorov-Arnold Networks (KANs), has been proposed as a substitute of Multilayer Perceptrons (MLPs), and possesses trainable activation functions. To enhance KANs in fitting accuracy, a modification of KANs, so-called ReLU-KANs, using "square of ReLU" as the basis of its activation functions, has been suggested. In this work, we propose another basis of activation functions, namely, Higher-order-ReLU (HR), which is simpler than the basis of activation functions used in KANs, namely, B-splines; allows efficient KAN matrix operations; and possesses smooth and non-zero higher-order derivatives, essential to physics-informed neural networks. We name such KANs with Higher-order-ReLU (HR) as their activations, HRKANs. Our detailed experiments on two famous and representative PDEs, namely, the linear Poisson equation and nonlinear Burgers' equation with viscosity, reveal that our proposed Higher-order-ReLU-KANs (HRKANs) achieve the highest fitting accuracy and training robustness and the lowest training time, by a significant margin, among KANs, ReLU-KANs and HRKANs. The codes to replicate our experiments are available at https://github.com/kelvinhkcs/HRKAN.<|reference_end|>
arxiv
@article{so2024higher-order-relu-kans, title={Higher-order-ReLU-KANs (HRKANs) for solving physics-informed neural networks (PINNs) more accurately, robustly and faster}, author={Chi Chiu So, Siu Pang Yung}, journal={arXiv preprint arXiv:2409.14248}, year={2024}, archivePrefix={arXiv}, eprint={2409.14248}, primaryClass={cs.NE cs.AI cs.CE cs.LG physics.comp-ph} }
so2024higher-order-relu-kans
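A simplified, hypothetical sketch of the Higher-order-ReLU idea from the abstract above — ReLU raised to an integer power m, which is C^{m-1}-smooth and so has the non-zero higher-order derivatives a PINN loss needs — arranged as a small learnable-basis layer; this is not the authors' HRKAN code, and the grid, order, and initialization are placeholders:

```python
import torch
import torch.nn as nn

class HigherOrderReLULayer(nn.Module):
    """Each output is a learnable combination of shifted ReLU^m basis functions
    of the inputs: a simplified stand-in for a KAN-style layer."""

    def __init__(self, in_dim, out_dim, n_basis=8, order=3, x_range=(-2.0, 2.0)):
        super().__init__()
        self.order = order
        self.register_buffer("shifts", torch.linspace(*x_range, n_basis))
        self.coef = nn.Parameter(torch.randn(out_dim, in_dim, n_basis) * 0.1)

    def forward(self, x):                       # x: (batch, in_dim)
        # basis: (batch, in_dim, n_basis); ReLU^m keeps higher derivatives finite and non-zero
        basis = torch.relu(x.unsqueeze(-1) - self.shifts) ** self.order
        return torch.einsum("bif,oif->bo", basis, self.coef)

# toy usage
layer = HigherOrderReLULayer(in_dim=2, out_dim=4)
print(layer(torch.randn(5, 2)).shape)   # torch.Size([5, 4])
```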
arxiv-660359
2409.14249
End to End Face Reconstruction via Differentiable PnP
<|reference_start|>End to End Face Reconstruction via Differentiable PnP: This is a challenge report of the ECCV 2022 WCPA Challenge, Face Reconstruction Track. Inside this report is a brief explanation of how we accomplish this challenge. We design a two-branch network to accomplish this task, whose roles are Face Reconstruction and Face Landmark Detection. The former outputs canonical 3D face coordinates. The latter outputs pixel coordinates, i.e. 2D mapping of 3D coordinates with head pose and perspective projection. In addition, we utilize a differentiable PnP (Perspective-n-Points) layer to finetune the outputs of the two branch. Our method achieves very competitive quantitative results on the MVP-Human dataset and wins a $3^{rd}$ prize in the challenge.<|reference_end|>
arxiv
@article{lu2024end, title={End to End Face Reconstruction via Differentiable PnP}, author={Yiren Lu, Huawei Wei}, journal={arXiv preprint arXiv:2409.14249}, year={2024}, doi={10.1007/978-3-031-25072-9_28}, archivePrefix={arXiv}, eprint={2409.14249}, primaryClass={cs.CV} }
lu2024end
arxiv-660360
2409.14252
Collaborative Text Editing with Eg-walker: Better, Faster, Smaller
<|reference_start|>Collaborative Text Editing with Eg-walker: Better, Faster, Smaller: Collaborative text editing algorithms allow several users to concurrently modify a text file, and automatically merge concurrent edits into a consistent state. Existing algorithms fall in two categories: Operational Transformation (OT) algorithms are slow to merge files that have diverged substantially due to offline editing; CRDTs are slow to load and consume a lot of memory. We introduce Eg-walker, a collaboration algorithm for text that avoids these weaknesses. Compared to existing CRDTs, it consumes an order of magnitude less memory in the steady state, and loading a document from disk is orders of magnitude faster. Compared to OT, merging long-running branches is orders of magnitude faster. In the worst case, the merging performance of Eg-walker is comparable with existing CRDT algorithms. Eg-walker can be used everywhere CRDTs are used, including peer-to-peer systems without a central server. By offering performance that is competitive with centralised algorithms, our result paves the way towards the widespread adoption of peer-to-peer collaboration software.<|reference_end|>
arxiv
@article{gentle2024collaborative, title={Collaborative Text Editing with Eg-walker: Better, Faster, Smaller}, author={Joseph Gentle and Martin Kleppmann}, journal={arXiv preprint arXiv:2409.14252}, year={2024}, doi={10.1145/3689031.3696076}, archivePrefix={arXiv}, eprint={2409.14252}, primaryClass={cs.DC} }
gentle2024collaborative
arxiv-660361
2409.14254
Instruction Following without Instruction Tuning
<|reference_start|>Instruction Following without Instruction Tuning: Instruction tuning commonly means finetuning a language model on instruction-response pairs. We discover two forms of adaptation (tuning) that are deficient compared to instruction tuning, yet still yield instruction following; we call this implicit instruction tuning. We first find that instruction-response pairs are not necessary: training solely on responses, without any corresponding instructions, yields instruction following. This suggests pretrained models have an instruction-response mapping which is revealed by teaching the model the desired distribution of responses. However, we then find it's not necessary to teach the desired distribution of responses: instruction-response training on narrow-domain data like poetry still leads to broad instruction-following behavior like recipe generation. In particular, when instructions are very different from those in the narrow finetuning domain, models' responses do not adhere to the style of the finetuning domain. To begin to explain implicit instruction tuning, we hypothesize that very simple changes to a language model's distribution yield instruction following. We support this by hand-writing a rule-based language model which yields instruction following in a product-of-experts with a pretrained model. The rules are to slowly increase the probability of ending the sequence, penalize repetition, and uniformly change 15 words' probabilities. In summary, adaptations made without being designed to yield instruction following can do so implicitly.<|reference_end|>
arxiv
@article{hewitt2024instruction, title={Instruction Following without Instruction Tuning}, author={John Hewitt, Nelson F. Liu, Percy Liang, Christopher D. Manning}, journal={arXiv preprint arXiv:2409.14254}, year={2024}, archivePrefix={arXiv}, eprint={2409.14254}, primaryClass={cs.CL} }
hewitt2024instruction
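The three hand-written rules mentioned in the abstract above (slowly favour ending the sequence, penalise repetition, and uniformly shift a small fixed set of words) can be sketched as a simple adjustment to a base model's next-token logits; the constants, token ids, and function name below are hypothetical and not the paper's rule-based expert:

```python
import numpy as np

def adjust_logits(logits, generated_ids, step, eos_id, boosted_ids,
                  eos_rate=0.05, repeat_penalty=1.5, boost=2.0):
    """Rule-based adjustment applied to a base model's next-token logits:
    encourage stopping as the sequence grows, discourage repetition,
    and uniformly shift the scores of a small fixed vocabulary subset."""
    adjusted = logits.copy()
    adjusted[eos_id] += eos_rate * step                   # slowly favour ending the sequence
    adjusted[list(set(generated_ids))] -= repeat_penalty  # penalise already-generated tokens
    adjusted[boosted_ids] += boost                        # uniformly change a few words' scores
    return adjusted

# toy usage on a random 100-token vocabulary
rng = np.random.default_rng(0)
logits = rng.normal(size=100)
print(adjust_logits(logits, generated_ids=[3, 7, 7], step=12,
                    eos_id=0, boosted_ids=[5, 9, 11])[:12])
```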
arxiv-660362
2409.14259
Combining Switching Mechanism with Re-Initialization and Anomaly Detection for Resiliency of Cyber-Physical Systems
<|reference_start|>Combining Switching Mechanism with Re-Initialization and Anomaly Detection for Resiliency of Cyber-Physical Systems: Cyber-physical systems (CPS) play a pivotal role in numerous critical real-world applications that have stringent requirements for safety. To enhance the CPS resiliency against attacks, redundancy can be integrated in real-time controller implementations by designing strategies that switch among multiple controllers. However, existing switching strategies typically overlook remediation measures for compromised controllers, opting instead to simply exclude them. Such a solution reduces the CPS redundancy since only a subset of controllers are used. To address this gap, this work proposes a multi-controller switching strategy with periodic re-initialization to remove attacks. Controllers that finish re-initialization can be reused by the switching strategy, preserving the CPS redundancy and resiliency. The proposed switching strategy is designed to ensure that at each switching moment, a controller that has just completed re-initialization is available, minimizing the likelihood of compromise. Additionally, the controller's working period decreases with the number of involved controllers, reducing the controller's exposure time to attacks. An anomaly detector is used to detect CPS attacks during the controller's working period. Upon alarm activation, the current control signal is set to a predefined value, and a switch to an alternative controller occurs at the earliest switching moment. Our switching strategy is shown to be still effective even if the anomaly detector fails to detect (stealthy) attacks.<|reference_end|>
arxiv
@article{fu2024combining, title={Combining Switching Mechanism with Re-Initialization and Anomaly Detection for Resiliency of Cyber-Physical Systems}, author={Hao Fu, Prashanth Krishnamurthy, and Farshad Khorrami}, journal={arXiv preprint arXiv:2409.14259}, year={2024}, archivePrefix={arXiv}, eprint={2409.14259}, primaryClass={eess.SY cs.SY} }
fu2024combining
arxiv-660363
2409.14260
Perfect Gradient Inversion in Federated Learning: A New Paradigm from the Hidden Subset Sum Problem
<|reference_start|>Perfect Gradient Inversion in Federated Learning: A New Paradigm from the Hidden Subset Sum Problem: Federated Learning (FL) has emerged as a popular paradigm for collaborative learning among multiple parties. It is considered privacy-friendly because local data remains on personal devices, and only intermediate parameters -- such as gradients or model updates -- are shared. Although gradient inversion is widely viewed as a common attack method in FL, analytical research on reconstructing input training samples from shared gradients remains limited and is typically confined to constrained settings like small batch sizes. In this paper, we aim to overcome these limitations by addressing the problem from a cryptographic perspective. We mathematically formulate the input reconstruction problem using the gradient information shared in FL as the Hidden Subset Sum Problem (HSSP), an extension of the well-known NP-complete Subset Sum Problem (SSP). Leveraging this formulation allows us to achieve perfect input reconstruction, thereby mitigating issues such as dependence on label diversity and underperformance with large batch sizes that hinder existing empirical gradient inversion attacks. Moreover, our analysis provides insights into why empirical input reconstruction attacks degrade with larger batch sizes. By modeling the problem as HSSP, we demonstrate that the batch size \( B \) significantly affects attack complexity, with time complexity reaching \( \mathcal{O}(B^9) \). We further show that applying secure data aggregation techniques -- such as homomorphic encryption and secure multiparty computation -- provides a strong defense by increasing the time complexity to \( \mathcal{O}(N^9 B^9) \), where \( N \) is the number of local clients in FL. To the best of our knowledge, this is the first work to rigorously analyze privacy issues in FL by modeling them as HSSP, providing a concrete analytical foundation for further exploration and development of defense strategies.<|reference_end|>
arxiv
@article{li2024perfect, title={Perfect Gradient Inversion in Federated Learning: A New Paradigm from the Hidden Subset Sum Problem}, author={Qiongxiu Li, Lixia Luo, Agnese Gini, Changlong Ji, Zhanhao Hu, Xiao Li, Chengfang Fang, Jie Shi, Xiaolin Hu}, journal={arXiv preprint arXiv:2409.14260}, year={2024}, archivePrefix={arXiv}, eprint={2409.14260}, primaryClass={cs.CR} }
li2024perfect
arxiv-660364
2409.14261
Re-Evaluating Privacy in Centralized and Decentralized Learning: An Information-Theoretical and Empirical Study
<|reference_start|>Re-Evaluating Privacy in Centralized and Decentralized Learning: An Information-Theoretical and Empirical Study: Decentralized Federated Learning (DFL) has garnered attention for its robustness and scalability compared to Centralized Federated Learning (CFL). While DFL is commonly believed to offer privacy advantages due to the decentralized control of sensitive data, recent work by Pasquini et al. challenges this view, demonstrating that DFL does not inherently improve privacy against empirical attacks under certain assumptions. To investigate this issue fully, a formal theoretical framework is required. Our study offers a novel perspective by conducting a rigorous information-theoretical analysis of privacy leakage in FL using mutual information. We further investigate the effectiveness of privacy-enhancing techniques like Secure Aggregation (SA) in both CFL and DFL. Our simulations and real-world experiments show that DFL generally offers stronger privacy preservation than CFL in practical scenarios where a fully trusted server is not available. We address discrepancies in previous research by highlighting limitations in their assumptions about graph topology and privacy attacks, which inadequately capture information leakage in FL.<|reference_end|>

arxiv
@article{ji2024re-evaluating, title={Re-Evaluating Privacy in Centralized and Decentralized Learning: An Information-Theoretical and Empirical Study}, author={Changlong Ji, Stephane Maag, Richard Heusdens, Qiongxiu Li}, journal={arXiv preprint arXiv:2409.14261}, year={2024}, archivePrefix={arXiv}, eprint={2409.14261}, primaryClass={cs.CR} }
ji2024re-evaluating
arxiv-660365
2409.14262
GND: Global Navigation Dataset with Multi-Modal Perception and Multi-Category Traversability in Outdoor Campus Environments
<|reference_start|>GND: Global Navigation Dataset with Multi-Modal Perception and Multi-Category Traversability in Outdoor Campus Environments: Navigating large-scale outdoor environments requires complex reasoning in terms of geometric structures, environmental semantics, and terrain characteristics, which are typically captured by onboard sensors such as LiDAR and cameras. While current mobile robots can navigate such environments using pre-defined, high-precision maps based on hand-crafted rules catered for the specific environment, they lack commonsense reasoning capabilities that most humans possess when navigating unknown outdoor spaces. To address this gap, we introduce the Global Navigation Dataset (GND), a large-scale dataset that integrates multi-modal sensory data, including 3D LiDAR point clouds and RGB and 360-degree images, as well as multi-category traversability maps (pedestrian walkways, vehicle roadways, stairs, off-road terrain, and obstacles) from ten university campuses. These environments encompass a variety of parks, urban settings, elevation changes, and campus layouts of different scales. The dataset covers approximately 2.7 km^2 and includes at least 350 buildings in total. We also present a set of novel applications of GND to showcase its utility to enable global robot navigation, such as map-based global navigation, mapless navigation, and global place recognition.<|reference_end|>
arxiv
@article{liang2024gnd:, title={GND: Global Navigation Dataset with Multi-Modal Perception and Multi-Category Traversability in Outdoor Campus Environments}, author={Jing Liang, Dibyendu Das, Daeun Song, Md Nahid Hasan Shuvo, Mohammad Durrani, Karthik Taranath, Ivan Penskiy, Dinesh Manocha, Xuesu Xiao}, journal={arXiv preprint arXiv:2409.14262}, year={2024}, archivePrefix={arXiv}, eprint={2409.14262}, primaryClass={cs.RO} }
liang2024gnd:
arxiv-660366
2409.14264
The Differential and Boomerang Properties of a Class of Binomials
<|reference_start|>The Differential and Boomerang Properties of a Class of Binomials: Let $q$ be an odd prime power with $q\equiv 3\ ({\rm{mod}}\ 4)$. In this paper, we study the differential and boomerang properties of the function $F_{2,u}(x)=x^2\big(1+u\eta(x)\big)$ over $\mathbb{F}_{q}$, where $u\in\mathbb{F}_{q}^*$ and $\eta$ is the quadratic character of $\mathbb{F}_{q}$. We determine the differential uniformity of $F_{2,u}$ for any $u\in\mathbb{F}_{q}^*$ and determine the differential spectra and boomerang uniformity of the locally-APN functions $F_{2,\pm 1}$, thereby disproving a conjecture proposed in \cite{budaghyan2024arithmetization} which states that there exist infinitely many $q$ and $u$ such that $F_{2,u}$ is an APN function.<|reference_end|>
arxiv
@article{mesnager2024the, title={The Differential and Boomerang Properties of a Class of Binomials}, author={Sihem Mesnager and Huawei Wu}, journal={arXiv preprint arXiv:2409.14264}, year={2024}, archivePrefix={arXiv}, eprint={2409.14264}, primaryClass={math.NT cs.CR cs.IT math.IT} }
mesnager2024the
arxiv-660367
2409.14268
FeDETR: a Federated Approach for Stenosis Detection in Coronary Angiography
<|reference_start|>FeDETR: a Federated Approach for Stenosis Detection in Coronary Angiography: Assessing the severity of stenoses in coronary angiography is critical to the patient's health, as coronary stenosis is an underlying factor in heart failure. Current practice for grading coronary lesions, i.e. fractional flow reserve (FFR) or instantaneous wave-free ratio (iFR), suffers from several drawbacks, including time, cost and invasiveness, alongside potential interobserver variability. In this context, some deep learning methods have emerged to assist cardiologists in automating the estimation of FFR/iFR values. Despite the effectiveness of these methods, their reliance on large datasets is challenging due to the distributed nature of sensitive medical data. Federated learning addresses this challenge by aggregating knowledge from multiple nodes to improve model generalization, while preserving data privacy. We propose the first federated detection transformer approach, FeDETR, to assess stenosis severity in angiography videos based on FFR/iFR values estimation. In our approach, each node trains a detection transformer (DETR) on its local dataset, with the central server federating the backbone part of the network. The proposed method is trained and evaluated on a dataset collected from five hospitals, consisting of 1001 angiographic examinations, and its performance is compared with state-of-the-art federated learning methods.<|reference_end|>
arxiv
@article{mineo2024fedetr:, title={FeDETR: a Federated Approach for Stenosis Detection in Coronary Angiography}, author={Raffaele Mineo, Amelia Sorrenti, Federica Proietto Salanitri}, journal={arXiv preprint arXiv:2409.14268}, year={2024}, doi={10.1007/978-3-031-51026-7_17}, archivePrefix={arXiv}, eprint={2409.14268}, primaryClass={eess.IV cs.CV cs.LG} }
mineo2024fedetr:
arxiv-660368
2409.14269
Combining Absolute and Semi-Generalized Relative Poses for Visual Localization
<|reference_start|>Combining Absolute and Semi-Generalized Relative Poses for Visual Localization: Visual localization is the problem of estimating the camera pose of a given query image within a known scene. Most state-of-the-art localization approaches follow the structure-based paradigm and use 2D-3D matches between pixels in a query image and 3D points in the scene for pose estimation. These approaches assume an accurate 3D model of the scene, which might not always be available, especially if only a few images are available to compute the scene representation. In contrast, structure-less methods rely on 2D-2D matches and do not require any 3D scene model. However, they are also less accurate than structure-based methods. Although one prior work proposed to combine structure-based and structure-less pose estimation strategies, its practical relevance has not been shown. We analyze combining structure-based and structure-less strategies while exploring how to select between poses obtained from 2D-2D and 2D-3D matches, respectively. We show that combining both strategies improves localization performance in multiple practically relevant scenarios.<|reference_end|>
arxiv
@article{panek2024combining, title={Combining Absolute and Semi-Generalized Relative Poses for Visual Localization}, author={Vojtech Panek, Torsten Sattler, Zuzana Kukelova}, journal={arXiv preprint arXiv:2409.14269}, year={2024}, archivePrefix={arXiv}, eprint={2409.14269}, primaryClass={cs.CV} }
panek2024combining
arxiv-660369
2409.14273
Lidar Panoptic Segmentation in an Open World
<|reference_start|>Lidar Panoptic Segmentation in an Open World: Addressing Lidar Panoptic Segmentation (LPS) is crucial for safe deployment of autonomous vehicles. LPS aims to recognize and segment lidar points w.r.t. a pre-defined vocabulary of semantic classes, including thing classes of countable objects (e.g., pedestrians and vehicles) and stuff classes of amorphous regions (e.g., vegetation and road). Importantly, LPS requires segmenting individual thing instances (e.g., every single vehicle). Current LPS methods make an unrealistic assumption that the semantic class vocabulary is fixed in the real open world, but in fact, class ontologies usually evolve over time as robots encounter instances of novel classes that are considered to be unknowns w.r.t. the pre-defined class vocabulary. To address this unrealistic assumption, we study LPS in the Open World (LiPSOW): we train models on a dataset with a pre-defined semantic class vocabulary and study their generalization to a larger dataset where novel instances of thing and stuff classes can appear. This experimental setting leads to interesting conclusions. While prior art trains class-specific instance segmentation methods and obtains state-of-the-art results on known classes, methods based on class-agnostic bottom-up grouping perform favorably on classes outside of the initial class vocabulary (i.e., unknown classes). Unfortunately, these methods do not perform on par with fully data-driven methods on known classes. Our work suggests a middle ground: we perform class-agnostic point clustering and over-segment the input cloud in a hierarchical fashion, followed by binary point segment classification, akin to Region Proposal Network [1]. We obtain the final point cloud segmentation by computing a cut in the weighted hierarchical tree of point segments, independently of semantic classification. Remarkably, this unified approach leads to strong performance on both known and unknown classes.<|reference_end|>
arxiv
@article{chakravarthy2024lidar, title={Lidar Panoptic Segmentation in an Open World}, author={Anirudh S Chakravarthy, Meghana Reddy Ganesina, Peiyun Hu, Laura Leal-Taixe, Shu Kong, Deva Ramanan, Aljosa Osep}, journal={arXiv preprint arXiv:2409.14273}, year={2024}, doi={10.1007/s11263-024-02166-9}, archivePrefix={arXiv}, eprint={2409.14273}, primaryClass={cs.CV} }
chakravarthy2024lidar
arxiv-660370
2409.14274
Proof Automation with Large Language Models
<|reference_start|>Proof Automation with Large Language Models: Interactive theorem provers such as Coq are powerful tools to formally guarantee the correctness of software. However, using these tools requires significant manual effort and expertise. While Large Language Models (LLMs) have shown promise in automatically generating informal proofs in natural language, they are less effective at generating formal proofs in interactive theorem provers. In this paper, we conduct a formative study to identify common mistakes made by LLMs when asked to generate formal proofs. By analyzing 520 proof generation errors made by GPT-3.5, we found that GPT-3.5 often identified the correct high-level structure of a proof, but struggled to get the lower-level details correct. Based on this insight, we propose PALM, a novel generate-then-repair approach that first prompts an LLM to generate an initial proof and then leverages targeted symbolic methods to iteratively repair low-level problems. We evaluate PALM on a large dataset that includes more than 10K theorems. Our results show that PALM significantly outperforms other state-of-the-art approaches, successfully proving 76.6% to 180.4% more theorems. Moreover, PALM proves 1270 theorems beyond the reach of existing approaches. We also demonstrate the generalizability of PALM across different LLMs.<|reference_end|>
arxiv
@article{lu2024proof, title={Proof Automation with Large Language Models}, author={Minghai Lu, Benjamin Delaware, Tianyi Zhang}, journal={In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering (ASE 2024)}, year={2024}, archivePrefix={arXiv}, eprint={2409.14274}, primaryClass={cs.SE cs.AI cs.LG cs.LO cs.PL} }
lu2024proof
arxiv-660371
2409.14275
Dynamic Scattering-channel-based Approach for Multiuser Image Encryption
<|reference_start|>Dynamic Scattering-channel-based Approach for Multiuser Image Encryption: Conventional scattering-based encryption systems that operate based on a static complex medium which is used by all users are vulnerable to learning-based attacks that exploit ciphertext-plaintext pairs to model and reverse-engineer the scattering medium's response, enabling unauthorized decryption without the physical medium. In this contribution, a new dynamic scattering-channel-based technique for multiuser image encryption is developed. The established approach employs variable, dynamic scattering media which are modeled as tunable aggregates of multiple scattering nanoparticles. The proposed system supports multiple users by allowing distinct combinations of scattering matrices for different time blocks, each combined with user-specific complex-valued coefficients, enabling the creation of unique, hard-to-guess encryption keys for each user. The derived methodology enhances the practical feasibility of multiuser secure communication and storage channels employing scattering media as the encryption mechanism.<|reference_end|>
arxiv
@article{taghavi2024dynamic, title={Dynamic Scattering-channel-based Approach for Multiuser Image Encryption}, author={Mohammadrasoul Taghavi and Edwin A. Marengo}, journal={arXiv preprint arXiv:2409.14275}, year={2024}, archivePrefix={arXiv}, eprint={2409.14275}, primaryClass={cs.CR physics.optics} }
taghavi2024dynamic
arxiv-660372
2409.14276
Making Space for Time: The Special Galilean Group and Its Application to Some Robotics Problems
<|reference_start|>Making Space for Time: The Special Galilean Group and Its Application to Some Robotics Problems: The special Galilean group, usually denoted SGal(3), is a 10-dimensional Lie group whose important subgroups include the special orthogonal group, the special Euclidean group, and the group of extended poses. We briefly describe SGal(3) and its Lie algebra and show how the group structure supports a unified representation of uncertainty in space and time. Our aim is to highlight the potential usefulness of this group for several robotics problems.<|reference_end|>
arxiv
@article{kelly2024making, title={Making Space for Time: The Special Galilean Group and Its Application to Some Robotics Problems}, author={Jonathan Kelly}, journal={arXiv preprint arXiv:2409.14276}, year={2024}, archivePrefix={arXiv}, eprint={2409.14276}, primaryClass={cs.RO math.GR} }
kelly2024making
arxiv-660373
2409.14277
Can-Do! A Dataset and Neuro-Symbolic Grounded Framework for Embodied Planning with Large Multimodal Models
<|reference_start|>Can-Do! A Dataset and Neuro-Symbolic Grounded Framework for Embodied Planning with Large Multimodal Models: Large multimodal models have demonstrated impressive problem-solving abilities in vision and language tasks, and have the potential to encode extensive world knowledge. However, it remains an open challenge for these models to perceive, reason, plan, and act in realistic environments. In this work, we introduce Can-Do, a benchmark dataset designed to evaluate embodied planning abilities through more diverse and complex scenarios than previous datasets. Our dataset includes 400 multimodal samples, each consisting of natural language user instructions, visual images depicting the environment, state changes, and corresponding action plans. The data encompasses diverse aspects of commonsense knowledge, physical understanding, and safety awareness. Our fine-grained analysis reveals that state-of-the-art models, including GPT-4V, face bottlenecks in visual perception, comprehension, and reasoning abilities. To address these challenges, we propose NeuroGround, a neurosymbolic framework that first grounds the plan generation in the perceived environment states and then leverages symbolic planning engines to augment the model-generated plans. Experimental results demonstrate the effectiveness of our framework compared to strong baselines. Our code and dataset are available at https://embodied-planning.github.io.<|reference_end|>
arxiv
@article{chia2024can-do!, title={Can-Do! A Dataset and Neuro-Symbolic Grounded Framework for Embodied Planning with Large Multimodal Models}, author={Yew Ken Chia, Qi Sun, Lidong Bing, Soujanya Poria}, journal={arXiv preprint arXiv:2409.14277}, year={2024}, archivePrefix={arXiv}, eprint={2409.14277}, primaryClass={cs.AI cs.CL cs.CV cs.RO} }
chia2024can-do!
arxiv-660374
2409.14278
Quasi-interpolation for high-dimensional function approximation
<|reference_start|>Quasi-interpolation for high-dimensional function approximation: The paper proposes a general quasi-interpolation scheme for high-dimensional function approximation. To facilitate error analysis, we view our quasi-interpolation as a two-step procedure. In the first step, we approximate a target function by a purpose-built convolution operator (with an error term referred to as convolution error). In the second step, we discretize the underlying convolution operator using certain quadrature rules at the given sampling data sites (with an error term called discretization error). The final approximation error is obtained as an optimally balanced sum of these two errors, which in turn views our quasi-interpolation as a regularization technique that balances convolution error and discretization error. As a concrete example, we construct a sparse grid quasi-interpolation scheme for high-dimensional function approximation. Both theoretical analysis and numerical implementations provide evidence that our quasi-interpolation scheme is robust and capable of mitigating the curse of dimensionality for approximating high-dimensional functions.<|reference_end|>
arxiv
@article{gao2024quasi-interpolation, title={Quasi-interpolation for high-dimensional function approximation}, author={Wenwu Gao, Jiecheng Wang, Zhengjie Sun, Gregory E. Fasshauer}, journal={arXiv preprint arXiv:2409.14278}, year={2024}, archivePrefix={arXiv}, eprint={2409.14278}, primaryClass={math.NA cs.NA} }
gao2024quasi-interpolation
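As a rough illustration of the two-step view described above (convolution with a purpose-built kernel, then quadrature at the data sites), the sketch below builds a one-dimensional Gaussian quasi-interpolant on a uniform grid. The Gaussian kernel, the shape parameter tied to the grid spacing, and the test function are assumptions for illustration only; the paper's actual construction is a sparse-grid scheme for high dimensions.

```python
import numpy as np

# Toy 1-D quasi-interpolation with a Gaussian kernel: convolution with a
# Gaussian, discretized by the rectangle rule at the data sites.
def quasi_interpolate(f_vals, sites, h, d, x_eval):
    # Q f(x) = h / (sqrt(pi) * d) * sum_j f(x_j) * exp(-((x - x_j) / d)^2)
    w = np.exp(-((x_eval[:, None] - sites[None, :]) / d) ** 2)
    return (h / (np.sqrt(np.pi) * d)) * (w @ f_vals)

h = 0.02
sites = np.arange(-2, 2 + h, h)
d = 3 * h                                   # shape parameter tied to the spacing
x_eval = np.linspace(-1, 1, 200)
approx = quasi_interpolate(np.sin(sites), sites, h, d, x_eval)
print("max error:", np.abs(approx - np.sin(x_eval)).max())
```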
arxiv-660375
2409.14280
Accelerated Stochastic ExtraGradient: Mixing Hessian and Gradient Similarity to Reduce Communication in Distributed and Federated Learning
<|reference_start|>Accelerated Stochastic ExtraGradient: Mixing Hessian and Gradient Similarity to Reduce Communication in Distributed and Federated Learning: Modern realities and trends in learning require ever greater generalization ability from models, which leads to an increase in both model size and training sample size. It is already difficult to solve such tasks on a single device. This is the reason why distributed and federated learning approaches are becoming more popular every day. Distributed computing involves communication between devices, which requires solving two key problems: efficiency and privacy. One of the most well-known approaches to combat communication costs is to exploit the similarity of local data. Both Hessian similarity and homogeneous gradients have been studied in the literature, but separately. In this paper, we combine both of these assumptions in analyzing a new method that incorporates the ideas of using data similarity and client sampling. Moreover, to address privacy concerns, we apply the technique of additional noise and analyze its impact on the convergence of the proposed method. The theory is confirmed by training on real datasets.<|reference_end|>
arxiv
@article{bylinkin2024accelerated, title={Accelerated Stochastic ExtraGradient: Mixing Hessian and Gradient Similarity to Reduce Communication in Distributed and Federated Learning}, author={Dmitry Bylinkin, Kirill Degtyarev, Aleksandr Beznosikov}, journal={arXiv preprint arXiv:2409.14280}, year={2024}, archivePrefix={arXiv}, eprint={2409.14280}, primaryClass={math.OC cs.LG} }
bylinkin2024accelerated
arxiv-660376
2409.14281
Creative Writers' Attitudes on Writing as Training Data for Large Language Models
<|reference_start|>Creative Writers' Attitudes on Writing as Training Data for Large Language Models: The use of creative writing as training data for large language models (LLMs) is highly contentious. While some argue that such use constitutes "fair use" and therefore does not require consent or compensation, others argue that consent and compensation are the morally correct approach. In this paper, we seek to understand how creative writers reason about the real or hypothetical use of their writing as training data and under what conditions, if any, they would consent to their writing being used. We interviewed 33 writers with variation across genre, method of publishing, degree of professionalization, and attitudes toward and engagement with LLMs. Through a grounded theory analysis, we report on core principles that writers express and how these principles can be at odds with their realistic expectations for how institutions engage with their work.<|reference_end|>
arxiv
@article{gero2024creative, title={Creative Writers' Attitudes on Writing as Training Data for Large Language Models}, author={Katy Ilonka Gero, Meera Desai, Carly Schnitzler, Nayun Eom, Jack Cushman, Elena L. Glassman}, journal={arXiv preprint arXiv:2409.14281}, year={2024}, archivePrefix={arXiv}, eprint={2409.14281}, primaryClass={cs.HC} }
gero2024creative
arxiv-660377
2409.14282
AutoPeel: Adhesion-aware Safe Peeling Trajectory Optimization for Robotic Wound Care
<|reference_start|>AutoPeel: Adhesion-aware Safe Peeling Trajectory Optimization for Robotic Wound Care: Chronic wounds, including diabetic ulcers, pressure ulcers, and ulcers secondary to venous hypertension, affect more than 6.5 million patients and incur a yearly cost of more than $25 billion in the United States alone. Chronic wound treatment is currently a manual process, and we envision a future where robotics and automation will aid in this treatment to reduce cost and improve patient care. In this work, we present the development of the first robotic system for wound dressing removal, which is reported to be the worst aspect of living with chronic wounds. Our method leverages differentiable physics-based simulation to perform gradient-based Model Predictive Control (MPC) for optimized trajectory planning. By integrating fracture mechanics of adhesion, we are able to model the peeling effect inherent to dressing adhesion. The system is further guided by carefully designed objective functions that promote both efficient and safe control, reducing the risk of tissue damage. We validated the efficacy of our approach through a series of experiments conducted on both synthetic skin phantoms and real human subjects. Our results demonstrate the system's ability to achieve precise and safe dressing removal trajectories, offering a promising solution for automating this essential healthcare procedure.<|reference_end|>
arxiv
@article{liang2024autopeel:, title={AutoPeel: Adhesion-aware Safe Peeling Trajectory Optimization for Robotic Wound Care}, author={Xiao Liang, Youcheng Zhang, Fei Liu, Florian Richter, Michael Yip}, journal={arXiv preprint arXiv:2409.14282}, year={2024}, archivePrefix={arXiv}, eprint={2409.14282}, primaryClass={cs.RO} }
liang2024autopeel:
arxiv-660378
2409.14283
Flag Proxy Networks: Tackling the Architectural, Scheduling, and Decoding Obstacles of Quantum LDPC codes
<|reference_start|>Flag Proxy Networks: Tackling the Architectural, Scheduling, and Decoding Obstacles of Quantum LDPC codes: Quantum error correction is necessary for achieving exponential speedups on important applications. The planar surface code has remained the most studied error-correcting code for the last two decades because of its relative simplicity. However, encoding a single logical qubit with the planar surface code requires physical qubits quadratic in the code distance~($d$), making it space-inefficient for the large-distance codes necessary for promising applications. Thus, {\em Quantum Low-Density Parity-Check (QLDPC)} codes have emerged as an alternative to the planar surface code but require a higher degree of connectivity. Furthermore, the problems of fault-tolerant syndrome extraction and decoding are understudied for these codes and also remain obstacles to their usage. In this paper, we consider two under-studied families of QLDPC codes: hyperbolic surface codes and hyperbolic color codes. We tackle the three challenges mentioned above as follows. {\em First}, we propose {\em Flag-Proxy Networks (FPNs)}, a generalizable architecture for quantum codes that achieves low connectivity through flag and proxy qubits. {\em Second}, we propose a {\em greedy syndrome extraction scheduling} algorithm for general quantum codes and further use this algorithm for fault-tolerant syndrome extraction on FPNs. {\em Third}, we present two decoders that leverage flag measurements to decode the hyperbolic codes accurately. Our work finds that degree-4 FPNs of the hyperbolic surface and color codes are respectively $2.9\times$ and $5.5\times$ more space-efficient than the $d = 5$ planar surface code, and become even more space-efficient when considering higher distances. The hyperbolic codes also have error rates comparable to their planar counterparts.<|reference_end|>
arxiv
@article{vittal2024flag, title={Flag Proxy Networks: Tackling the Architectural, Scheduling, and Decoding Obstacles of Quantum LDPC codes}, author={Suhas Vittal, Ali Javadi-Abhari, Andrew W. Cross, Lev S. Bishop, Moinuddin Qureshi}, journal={arXiv preprint arXiv:2409.14283}, year={2024}, archivePrefix={arXiv}, eprint={2409.14283}, primaryClass={quant-ph cs.AR} }
vittal2024flag
arxiv-660379
2409.14285
ESPERANTO: Evaluating Synthesized Phrases to Enhance Robustness in AI Detection for Text Origination
<|reference_start|>ESPERANTO: Evaluating Synthesized Phrases to Enhance Robustness in AI Detection for Text Origination: While large language models (LLMs) exhibit significant utility across various domains, they simultaneously are susceptible to exploitation for unethical purposes, including academic misconduct and dissemination of misinformation. Consequently, AI-generated text detection systems have emerged as a countermeasure. However, these detection mechanisms demonstrate vulnerability to evasion techniques and lack robustness against textual manipulations. This paper introduces back-translation as a novel technique for evading detection, underscoring the need to enhance the robustness of current detection systems. The proposed method involves translating AI-generated text through multiple languages before back-translating to English. We present a model that combines these back-translated texts to produce a manipulated version of the original AI-generated text. Our findings demonstrate that the manipulated text retains the original semantics while significantly reducing the true positive rate (TPR) of existing detection methods. We evaluate this technique on nine AI detectors, including six open-source and three proprietary systems, revealing their susceptibility to back-translation manipulation. In response to the identified shortcomings of existing AI text detectors, we present a countermeasure to improve the robustness against this form of manipulation. Our results indicate that the TPR of the proposed method declines by only 1.85% after back-translation manipulation. Furthermore, we build a large dataset of 720k texts using eight different LLMs. Our dataset contains both human-authored and LLM-generated texts in various domains and writing styles to assess the performance of our method and existing detectors. This dataset is publicly shared for the benefit of the research community.<|reference_end|>
arxiv
@article{ayoobi2024esperanto:, title={ESPERANTO: Evaluating Synthesized Phrases to Enhance Robustness in AI Detection for Text Origination}, author={Navid Ayoobi, Lily Knab, Wen Cheng, David Pantoja, Hamidreza Alikhani, Sylvain Flamant, Jin Kim, Arjun Mukherjee}, journal={arXiv preprint arXiv:2409.14285}, year={2024}, archivePrefix={arXiv}, eprint={2409.14285}, primaryClass={cs.CL cs.AI} }
ayoobi2024esperanto:
arxiv-660380
2409.14287
MEDiC: Autonomous Surgical Robotic Assistance to Maximizing Exposure for Dissection and Cautery
<|reference_start|>MEDiC: Autonomous Surgical Robotic Assistance to Maximizing Exposure for Dissection and Cautery: Surgical automation has the capability to improve the consistency of patient outcomes and broaden access to advanced surgical care in underprivileged communities. Shared autonomy, where the robot automates routine subtasks while the surgeon retains partial teleoperative control, offers great potential to make an impact. In this paper, we focus on one important skill within surgical shared autonomy: Automating robotic assistance to maximize visual exposure and apply tissue tension for dissection and cautery. Ensuring consistent exposure to visualize the surgical site is crucial for both efficiency and patient safety. However, achieving this is highly challenging due to the complexities of manipulating deformable volumetric tissues that are prevalent in surgery. To address these challenges, we propose MEDiC, a framework for autonomous surgical robotic assistance to maximize exposure for dissection and cautery. We integrate a differentiable physics model with perceptual feedback to achieve our two key objectives: 1) Maximizing tissue exposure and applying tension for a specified dissection site through visual-servoing control and 2) Selecting optimal control positions for a dissection target based on deformable Jacobian analysis. We quantitatively assess our method through repeated real robot experiments on a tissue phantom, and showcase its capabilities through dissection experiments using shared autonomy on real animal tissue.<|reference_end|>
arxiv
@article{liang2024medic:, title={MEDiC: Autonomous Surgical Robotic Assistance to Maximizing Exposure for Dissection and Cautery}, author={Xiao Liang, Chung-Pang Wang, Nikhil Uday Shinde, Fei Liu, Florian Richter, Michael Yip}, journal={arXiv preprint arXiv:2409.14287}, year={2024}, archivePrefix={arXiv}, eprint={2409.14287}, primaryClass={cs.RO} }
liang2024medic:
arxiv-660381
2409.14289
Deep Learning Technology for Face Forgery Detection: A Survey
<|reference_start|>Deep Learning Technology for Face Forgery Detection: A Survey: Currently, the rapid development of computer vision and deep learning has enabled the creation or manipulation of high-fidelity facial images and videos via deep generative approaches. This technology, also known as deepfake, has achieved dramatic progress and become increasingly popular in social media. However, the technology can generate threats to personal privacy and national security by spreading misinformation. To diminish the risks of deepfake, it is desirable to develop powerful forgery detection methods to distinguish fake faces from real faces. This paper presents a comprehensive survey of recent deep learning-based approaches for facial forgery detection. We attempt to provide the reader with a deeper understanding of the current advances as well as the major challenges for deepfake detection based on deep learning. We present an overview of deepfake techniques and analyse the characteristics of various deepfake datasets. We then provide a systematic review of different categories of deepfake detection and state-of-the-art deepfake detection methods. The drawbacks of existing detection methods are analyzed, and future research directions are discussed to address the challenges in improving both the performance and generalization of deepfake detection.<|reference_end|>
arxiv
@article{ma2024deep, title={Deep Learning Technology for Face Forgery Detection: A Survey}, author={Lixia Ma, Puning Yang, Yuting Xu, Ziming Yang, Peipei Li, Huaibo Huang}, journal={arXiv preprint arXiv:2409.14289}, year={2024}, archivePrefix={arXiv}, eprint={2409.14289}, primaryClass={cs.CV} }
ma2024deep
arxiv-660382
2409.14292
Opinion Mining on Offshore Wind Energy for Environmental Engineering
<|reference_start|>Opinion Mining on Offshore Wind Energy for Environmental Engineering: In this paper, we conduct sentiment analysis on social media data to study mass opinion about offshore wind energy. We adapt three machine learning models, namely, TextBlob, VADER, and SentiWordNet because different functions are provided by each model. TextBlob provides subjectivity analysis as well as polarity classification. VADER offers cumulative sentiment scores. SentiWordNet considers sentiments with reference to context and performs classification accordingly. Techniques in NLP are harnessed to gather meaning from the textual data in social media. Data visualization tools are suitably deployed to display the overall results. This work is much in line with citizen science and smart governance via involvement of mass opinion to guide decision support. It exemplifies the role of Machine Learning and NLP here.<|reference_end|>
arxiv
@article{bittencourt2024opinion, title={Opinion Mining on Offshore Wind Energy for Environmental Engineering}, author={Isabele Bittencourt, Aparna S. Varde, Pankaj Lal}, journal={Springer IEMTRONICS 2024 conference}, year={2024}, archivePrefix={arXiv}, eprint={2409.14292}, primaryClass={cs.LG cs.AI cs.CL} }
bittencourt2024opinion
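A minimal sketch of how two of the three tools named above (TextBlob and VADER) can be applied to social-media text is given below; it assumes the textblob and vaderSentiment packages are installed, and the example posts are invented rather than taken from the study's data.

```python
from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Hypothetical social-media posts; not from the paper's dataset.
posts = [
    "Offshore wind farms are a great step toward clean energy!",
    "These turbines ruin the coastline view and hurt fishing.",
]

vader = SentimentIntensityAnalyzer()
for text in posts:
    blob = TextBlob(text)
    scores = vader.polarity_scores(text)   # dict with neg/neu/pos/compound
    print(repr(text))
    print(f"  TextBlob polarity={blob.sentiment.polarity:+.2f} "
          f"subjectivity={blob.sentiment.subjectivity:.2f}")
    print(f"  VADER compound={scores['compound']:+.2f}")
```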
arxiv-660383
2409.14293
A novel load distribution strategy for aggregators using IoT-enabled mobile devices
<|reference_start|>A novel load distribution strategy for aggregators using IoT-enabled mobile devices: The rapid proliferation of Internet-of-things (IoT) as well as mobile devices such as Electric Vehicles (EVs) has led to unpredictable load at the grid. The demand-to-supply ratio is particularly exacerbated at a few grid aggregators (charging stations) with excessive demand due to the geographic location, peak time, etc. Existing solutions on demand response cannot achieve significant improvements based only on time-shifting the loads without considering the device properties such as charging modes and movement capabilities to enable geographic migration. Additionally, the information on the spare capacity at a few aggregators can aid in re-channeling the load from other aggregators facing excess demand to allow migration of devices. In this paper, we model these flexible properties of the devices as a mixed-integer non-linear problem (MINLP) to minimize excess load and improve the utility (benefit) across all devices. We propose an online distributed low-complexity heuristic that prioritizes devices based on demand and deadlines to minimize the cumulative loss in utility. The proposed heuristic is tested on an exhaustive set of synthetic data and compared with solutions from a solver/optimization tool for the same runtime to show the impracticality of using a solver. Real-world EV testbed data is also tested with our proposed solution and other scheduling solutions to show the practicality of generating a feasible schedule and a loss improvement of at least 57.23%.<|reference_end|>
arxiv
@article{shivaraman2024a, title={A novel load distribution strategy for aggregators using IoT-enabled mobile devices}, author={Nitin Shivaraman, Jakob Fittler, Saravanan Ramanathan, Arvind Easwaran, Sebastian Steinhorst}, journal={10.1109/SmartGridComm51999.2021.9632317}, year={2024}, archivePrefix={arXiv}, eprint={2409.14293}, primaryClass={cs.MA cs.SY eess.SY math.OC} }
shivaraman2024a
arxiv-660384
2409.14296
HM3D-OVON: A Dataset and Benchmark for Open-Vocabulary Object Goal Navigation
<|reference_start|>HM3D-OVON: A Dataset and Benchmark for Open-Vocabulary Object Goal Navigation: We present the Habitat-Matterport 3D Open Vocabulary Object Goal Navigation dataset (HM3D-OVON), a large-scale benchmark that broadens the scope and semantic range of prior Object Goal Navigation (ObjectNav) benchmarks. Leveraging the HM3DSem dataset, HM3D-OVON incorporates over 15k annotated instances of household objects across 379 distinct categories, derived from photo-realistic 3D scans of real-world environments. In contrast to earlier ObjectNav datasets, which limit goal objects to a predefined set of 6-20 categories, HM3D-OVON facilitates the training and evaluation of models with an open-set of goals defined through free-form language at test-time. Through this open-vocabulary formulation, HM3D-OVON encourages progress towards learning visuo-semantic navigation behaviors that are capable of searching for any object specified by text in an open-vocabulary manner. Additionally, we systematically evaluate and compare several different types of approaches on HM3D-OVON. We find that HM3D-OVON can be used to train an open-vocabulary ObjectNav agent that achieves both higher performance and is more robust to localization and actuation noise than the state-of-the-art ObjectNav approach. We hope that our benchmark and baseline results will drive interest in developing embodied agents that can navigate real-world spaces to find household objects specified through free-form language, taking a step towards more flexible and human-like semantic visual navigation. Code and videos available at: naoki.io/ovon.<|reference_end|>
arxiv
@article{yokoyama2024hm3d-ovon:, title={HM3D-OVON: A Dataset and Benchmark for Open-Vocabulary Object Goal Navigation}, author={Naoki Yokoyama, Ram Ramrakhya, Abhishek Das, Dhruv Batra, Sehoon Ha}, journal={arXiv preprint arXiv:2409.14296}, year={2024}, archivePrefix={arXiv}, eprint={2409.14296}, primaryClass={cs.AI cs.RO} }
yokoyama2024hm3d-ovon:
arxiv-660385
2409.14298
A Neuromorphic Implementation of the DBSCAN Algorithm
<|reference_start|>A Neuromorphic Implementation of the DBSCAN Algorithm: DBSCAN is an algorithm that performs clustering in the presence of noise. In this paper, we provide two constructions that allow DBSCAN to be implemented neuromorphically, using spiking neural networks. The first construction is termed "flat," resulting in large spiking neural networks that compute the algorithm quickly, in five timesteps. Moreover, the networks allow pipelining, so that a new DBSCAN calculation may be performed every timestep. The second construction is termed "systolic", and generates much smaller networks, but requires the inputs to be spiked in over several timesteps, column by column. We provide precise specifications of the constructions and analyze them in practical neuromorphic computing settings. We also provide an open-source implementation.<|reference_end|>
arxiv
@article{rizzo2024a, title={A Neuromorphic Implementation of the DBSCAN Algorithm}, author={Charles P. Rizzo and James S. Plank}, journal={arXiv preprint arXiv:2409.14298}, year={2024}, archivePrefix={arXiv}, eprint={2409.14298}, primaryClass={cs.NE} }
rizzo2024a
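For reference, the clustering behavior that the flat and systolic spiking constructions above are built to reproduce is ordinary DBSCAN; a minimal non-neuromorphic sketch using scikit-learn is shown below, with toy data and parameter values chosen arbitrarily (this is not the paper's spiking implementation).

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Reference (non-spiking) DBSCAN on toy 2-D points; the paper's flat and
# systolic spiking networks are designed to reproduce this kind of labeling.
rng = np.random.default_rng(0)
cluster_a = rng.normal([0, 0], 0.1, size=(20, 2))
cluster_b = rng.normal([3, 3], 0.1, size=(20, 2))
noise = rng.uniform(-1, 4, size=(5, 2))
X = np.vstack([cluster_a, cluster_b, noise])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
print("cluster labels:", set(labels))   # DBSCAN marks noise points with -1
```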
arxiv-660386
2409.14300
A competitive baseline for deep learning enhanced data assimilation using conditional Gaussian ensemble Kalman filtering
<|reference_start|>A competitive baseline for deep learning enhanced data assimilation using conditional Gaussian ensemble Kalman filtering: Ensemble Kalman Filtering (EnKF) is a popular technique for data assimilation, with far ranging applications. However, the vanilla EnKF framework is not well-defined when perturbations are nonlinear. We study two non-linear extensions of the vanilla EnKF - dubbed the conditional-Gaussian EnKF (CG-EnKF) and the normal score EnKF (NS-EnKF) - which sidestep assumptions of linearity by constructing the Kalman gain matrix with the `conditional Gaussian' update formula in place of the traditional one. We then compare these models against a state-of-the-art deep learning based particle filter called the score filter (SF). This model uses an expensive score diffusion model for estimating densities and also requires a strong assumption on the perturbation operator for validity. In our comparison, we find that CG-EnKF and NS-EnKF dramatically outperform SF for a canonical problem in high-dimensional multiscale data assimilation given by the Lorenz-96 system. Our analysis also demonstrates that the CG-EnKF and NS-EnKF can handle highly non-Gaussian additive noise perturbations, with the latter typically outperforming the former.<|reference_end|>
arxiv
@article{malik2024a, title={A competitive baseline for deep learning enhanced data assimilation using conditional Gaussian ensemble Kalman filtering}, author={Zachariah Malik, Romit Maulik}, journal={arXiv preprint arXiv:2409.14300}, year={2024}, archivePrefix={arXiv}, eprint={2409.14300}, primaryClass={stat.ML cs.LG math.DS physics.ao-ph} }
malik2024a
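A minimal sketch of the kind of ensemble update involved is shown below: a stochastic EnKF analysis step whose gain is formed from sample cross-covariances, K = C_xy (C_yy + R)^{-1}, which avoids assuming a linear observation operator. The state dimension, observation map, and noise covariance are placeholders; this is not the authors' CG-EnKF or NS-EnKF code.

```python
import numpy as np

# Minimal stochastic EnKF analysis step with the gain built from sample
# cross-covariances, K = C_xy (C_yy + R)^{-1}; dimensions and h() are illustrative.
def enkf_update(X, y_obs, h, R, rng):
    # X: (n_state, n_ens) forecast ensemble; y_obs: (n_obs,) observation.
    Y = np.array([h(x) for x in X.T]).T              # predicted observations
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    n = X.shape[1]
    C_xy = Xc @ Yc.T / (n - 1)
    C_yy = Yc @ Yc.T / (n - 1)
    K = C_xy @ np.linalg.inv(C_yy + R)
    noise = rng.multivariate_normal(np.zeros(len(y_obs)), R, size=n).T
    return X + K @ (y_obs[:, None] + noise - Y)      # perturbed-observation update

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 50))                          # 4-dim state, 50 members
h = lambda x: x[:2] ** 2                              # nonlinear observation map
R = 0.1 * np.eye(2)
X_a = enkf_update(X, np.array([0.5, 1.0]), h, R, rng)
print(X_a.mean(axis=1))
```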
arxiv-660387
2409.14301
Multi-Grained Specifications for Distributed System Model Checking and Verification
<|reference_start|>Multi-Grained Specifications for Distributed System Model Checking and Verification: This paper presents our experience specifying and verifying the correctness of ZooKeeper, a complex and evolving distributed coordination system. We use TLA+ to model fine-grained behaviors of ZooKeeper and use the TLC model checker to verify its correctness properties; we also check conformance between the model and code. The fundamental challenge is to balance the granularity of specifications and the scalability of model checking -- fine-grained specifications lead to state-space explosion, while coarse-grained specifications introduce model-code gaps. To address this challenge, we write specifications with different granularities for composable modules, and compose them into mixed-grained specifications based on specific scenarios. For example, to verify code changes, we compose fine-grained specifications of changed modules and coarse-grained specifications that abstract away details of unchanged code with preserved interactions. We show that writing multi-grained specifications is a viable practice and can cope with model-code gaps without untenable state space, especially for evolving software where changes are typically local and incremental. We detected six severe bugs that violate five types of invariants and verified their code fixes; the fixes have been merged to ZooKeeper. We also improve the protocol design to make it easy to implement correctly.<|reference_end|>
arxiv
@article{ouyang2024multi-grained, title={Multi-Grained Specifications for Distributed System Model Checking and Verification}, author={Lingzhi Ouyang, Xudong Sun, Ruize Tang, Yu Huang, Madhav Jivrajani, Xiaoxing Ma and Tianyin Xu}, journal={arXiv preprint arXiv:2409.14301}, year={2024}, archivePrefix={arXiv}, eprint={2409.14301}, primaryClass={cs.DC cs.SE} }
ouyang2024multi-grained
arxiv-660388
2409.14302
Reliable and diverse evaluation of LLM medical knowledge mastery
<|reference_start|>Reliable and diverse evaluation of LLM medical knowledge mastery: Mastering medical knowledge is crucial for medical-specific LLMs. However, despite the existence of medical benchmarks like MedQA, a unified framework that fully leverages existing knowledge bases to evaluate LLMs' mastery of medical knowledge is still lacking. In this study, we propose a novel framework, PretexEval, which dynamically generates reliable and diverse test samples to evaluate LLMs for any given medical knowledge base. We notice that test samples produced directly from knowledge bases by templates or LLMs may introduce factual errors and also lack diversity. To address these issues, we introduce a novel schema into our proposed evaluation framework that employs predicate equivalence transformations to produce a series of variants for any given medical knowledge point. Finally, these produced predicate variants are converted into textual language, resulting in a series of reliable and diverse test samples to evaluate whether LLMs fully master the given medical factual knowledge point. Here, we use our proposed framework to systematically investigate the mastery of medical factual knowledge of 12 well-known LLMs, based on two knowledge bases that are crucial for clinical diagnosis and treatment. The evaluation results illustrate that current LLMs still exhibit significant deficiencies in fully mastering medical knowledge, despite achieving considerable success on some famous public benchmarks. These new findings provide valuable insights for developing medical-specific LLMs, highlighting that current LLMs urgently need to strengthen their comprehensive and in-depth mastery of medical knowledge before being applied to real-world medical scenarios.<|reference_end|>
arxiv
@article{zhou2024reliable, title={Reliable and diverse evaluation of LLM medical knowledge mastery}, author={Yuxuan Zhou, Xien Liu, Chen Ning, Xiao Zhang, Ji Wu}, journal={arXiv preprint arXiv:2409.14302}, year={2024}, archivePrefix={arXiv}, eprint={2409.14302}, primaryClass={cs.CL cs.AI} }
zhou2024reliable
arxiv-660389
2409.14305
UU-Mamba: Uncertainty-aware U-Mamba for Cardiovascular Segmentation
<|reference_start|>UU-Mamba: Uncertainty-aware U-Mamba for Cardiovascular Segmentation: Building on the success of deep learning models in cardiovascular structure segmentation, increasing attention has been focused on improving generalization and robustness, particularly in small, annotated datasets. Despite recent advancements, current approaches often face challenges such as overfitting and accuracy limitations, largely due to their reliance on large datasets and narrow optimization techniques. This paper introduces the UU-Mamba model, an extension of the U-Mamba architecture, designed to address these challenges in both cardiac and vascular segmentation. By incorporating Sharpness-Aware Minimization (SAM), the model enhances generalization by targeting flatter minima in the loss landscape. Additionally, we propose an uncertainty-aware loss function that combines region-based, distribution-based, and pixel-based components to improve segmentation accuracy by capturing both local and global features. While the UU-Mamba model has already demonstrated great performance, further testing is required to fully assess its generalization and robustness. We expand our evaluation by conducting new trials on the ImageCAS (coronary artery) and Aorta (aortic branches and zones) datasets, which present more complex segmentation challenges than the ACDC dataset (left and right ventricles) used in our previous work, showcasing the model's adaptability and resilience. We confirm UU-Mamba's superior performance over leading models such as TransUNet, Swin-Unet, nnUNet, and nnFormer. Moreover, we provide a more comprehensive evaluation of the model's robustness and segmentation accuracy, as demonstrated by extensive experiments.<|reference_end|>
arxiv
@article{tsai2024uu-mamba:, title={UU-Mamba: Uncertainty-aware U-Mamba for Cardiovascular Segmentation}, author={Ting Yu Tsai, Li Lin, Shu Hu, Connie W. Tsao, Xin Li, Ming-Ching Chang, Hongtu Zhu, Xin Wang}, journal={arXiv preprint arXiv:2409.14305}, year={2024}, archivePrefix={arXiv}, eprint={2409.14305}, primaryClass={cs.AI} }
tsai2024uu-mamba:
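The Sharpness-Aware Minimization component mentioned above follows the standard two-pass recipe: perturb the weights toward the locally worst case, then update with the gradient taken there. A generic PyTorch sketch of one SAM step is given below; the tiny model, data, and rho value are placeholders and do not reflect the UU-Mamba training setup.

```python
import torch

# Minimal Sharpness-Aware Minimization (SAM) step sketch.
def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
    loss_fn(model(x), y).backward()                       # gradient at w
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                                      # climb to w + e
            eps.append((p, e))
    model.zero_grad()
    loss_fn(model(x), y).backward()                        # gradient at w + e
    with torch.no_grad():
        for p, e in eps:
            p.sub_(e)                                      # return to w
    base_opt.step()                                        # descend with the SAM gradient
    base_opt.zero_grad()

model = torch.nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(16, 8), torch.randn(16, 1)
sam_step(model, torch.nn.functional.mse_loss, x, y, opt)
```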
arxiv-660390
2409.14306
LLMs are One-Shot URL Classifiers and Explainers
<|reference_start|>LLMs are One-Shot URL Classifiers and Explainers: Malicious URL classification represents a crucial aspect of cyber security. Although existing work comprises numerous machine learning and deep learning-based URL classification models, most suffer from generalisation and domain-adaptation issues arising from the lack of representative training datasets. Furthermore, these models fail to provide explanations for a given URL classification in natural human language. In this work, we investigate and demonstrate the use of Large Language Models (LLMs) to address this issue. Specifically, we propose an LLM-based one-shot learning framework that uses Chain-of-Thought (CoT) reasoning to predict whether a given URL is benign or phishing. We evaluate our framework using three URL datasets and five state-of-the-art LLMs and show that one-shot LLM prompting indeed provides performances close to supervised models, with GPT 4-Turbo being the best model, followed by Claude 3 Opus. We conduct a quantitative analysis of the LLM explanations and show that most of the explanations provided by LLMs align with the post-hoc explanations of the supervised classifiers, and the explanations have high readability, coherency, and informativeness.<|reference_end|>
arxiv
@article{rashid2024llms, title={LLMs are One-Shot URL Classifiers and Explainers}, author={Fariza Rashid, Nishavi Ranaweera, Ben Doyle, Suranga Seneviratne}, journal={arXiv preprint arXiv:2409.14306}, year={2024}, archivePrefix={arXiv}, eprint={2409.14306}, primaryClass={cs.AI} }
rashid2024llms
arxiv-660391
2409.14307
DilateQuant: Accurate and Efficient Diffusion Quantization via Weight Dilation
<|reference_start|>DilateQuant: Accurate and Efficient Diffusion Quantization via Weight Dilation: Diffusion models have shown excellent performance on various image generation tasks, but the substantial computational costs and huge memory footprint hinder their low-latency applications in real-world scenarios. Quantization is a promising way to compress and accelerate models. Nevertheless, due to the wide-ranging and time-varying activations in diffusion models, existing methods cannot maintain both accuracy and efficiency simultaneously for low-bit quantization. To tackle this issue, we propose DilateQuant, a novel quantization framework for diffusion models that offers comparable accuracy and high efficiency. Specifically, we observe numerous unsaturated in-channel weights, which can be cleverly exploited to reduce the range of activations without additional computation cost. Based on this insight, we propose Weight Dilation (WD), which maximally dilates the unsaturated in-channel weights to a constrained range through a mathematically equivalent scaling. WD costlessly absorbs the activation quantization errors into weight quantization. The range of activations decreases, which makes activation quantization easier. The range of weights remains constant, which helps the model converge during training. Considering that the temporal network leads to time-varying activations, we design a Temporal Parallel Quantizer (TPQ), which sets time-step quantization parameters and supports parallel quantization for different time steps, significantly improving the performance and reducing time cost. To further enhance performance while preserving efficiency, we introduce Block-wise Knowledge Distillation (BKD) to align the quantized models with the full-precision models at a block level. The simultaneous training of time-step quantization parameters and weights minimizes the time required, and the shorter backpropagation paths decrease the memory footprint of the quantization process.<|reference_end|>
arxiv
@article{liu2024dilatequant:, title={DilateQuant: Accurate and Efficient Diffusion Quantization via Weight Dilation}, author={Xuewen Liu, Zhikai Li, Qingyi Gu}, journal={arXiv preprint arXiv:2409.14307}, year={2024}, archivePrefix={arXiv}, eprint={2409.14307}, primaryClass={cs.CV cs.AI} }
liu2024dilatequant:
arxiv-660392
2409.14308
NP-Completeness and Physical Zero-Knowledge Proofs for Zeiger
<|reference_start|>NP-Completeness and Physical Zero-Knowledge Proofs for Zeiger: Zeiger is a pencil puzzle consisting of a rectangular grid, with each cell having an arrow pointing in horizontal or vertical direction. Some cells also contain a positive integer. The objective of this puzzle is to fill a positive integer into every unnumbered cell such that the integer in each cell is equal to the number of different integers in all cells along the direction an arrow in that cell points to. In this paper, we prove that deciding solvability of a given Zeiger puzzle is NP-complete via a reduction from the not-all-equal positive 3SAT (NAE3SAT+) problem. We also construct a card-based physical zero-knowledge proof protocol for Zeiger, which enables a prover to physically show a verifier the existence of the puzzle's solution without revealing it.<|reference_end|>
arxiv
@article{ruangwises2024np-completeness, title={NP-Completeness and Physical Zero-Knowledge Proofs for Zeiger}, author={Suthee Ruangwises}, journal={arXiv preprint arXiv:2409.14308}, year={2024}, archivePrefix={arXiv}, eprint={2409.14308}, primaryClass={cs.CC cs.CR} }
ruangwises2024np-completeness
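To make the puzzle rule concrete, the sketch below checks a filled Zeiger grid: each cell's number must equal the count of distinct integers in the cells its arrow points to. Counting only the cells strictly along the arrow direction (excluding the cell itself) is one reading of the rule, and the tiny instance is invented, not taken from the paper.

```python
# Sketch of a verifier for a filled Zeiger grid. Arrows are 'U', 'D', 'L', 'R'.
DIRS = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}

def verify(grid, arrows):
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            dr, dc = DIRS[arrows[r][c]]
            seen, i, j = set(), r + dr, c + dc
            while 0 <= i < rows and 0 <= j < cols:
                seen.add(grid[i][j])       # collect integers along the arrow
                i, j = i + dr, j + dc
            if grid[r][c] != len(seen):    # cell value must equal distinct count
                return False
    return True

# Tiny hypothetical instance (not from the paper).
arrows = [["R", "L"],
          ["R", "L"]]
grid = [[1, 1],
        [1, 1]]
print(verify(grid, arrows))   # True: each arrow sees exactly one distinct integer
```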
arxiv-660393
2409.14309
Sketch-and-Solve: Optimized Overdetermined Least-Squares Using Randomized Numerical Linear Algebra
<|reference_start|>Sketch-and-Solve: Optimized Overdetermined Least-Squares Using Randomized Numerical Linear Algebra: Sketch-and-solve is a powerful paradigm for tackling large-scale computational problems by reducing their dimensionality using sketching matrices. This paper focuses on applying sketch-and-solve algorithms to efficiently solve the overdetermined least squares problem, which is fundamental in various domains such as machine learning, signal processing, and numerical optimization. We provide a comprehensive overview of the sketch-and-solve paradigm and analyze different sketching operators, including dense and sparse variants. We introduce the Sketch-and-Apply (SAA-SAS) algorithm, which leverages randomized numerical linear algebra techniques to compute approximate solutions efficiently. Through extensive experiments on large-scale least squares problems, we demonstrate that our proposed approach significantly outperforms the traditional Least-Squares QR (LSQR) algorithm in terms of runtime while maintaining comparable accuracy. Our results highlight the potential of sketch-and-solve techniques in efficiently handling large-scale numerical linear algebra problems.<|reference_end|>
arxiv
@article{lavaee2024sketch-and-solve:, title={Sketch-and-Solve: Optimized Overdetermined Least-Squares Using Randomized Numerical Linear Algebra}, author={Alex Lavaee}, journal={arXiv preprint arXiv:2409.14309}, year={2024}, archivePrefix={arXiv}, eprint={2409.14309}, primaryClass={cs.LG cs.NA math.NA} }
lavaee2024sketch-and-solve:
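A generic sketch-and-solve routine for overdetermined least squares takes only a few lines; the dense Gaussian sketch and sketch size below are common illustrative choices and are not meant to reproduce the SAA-SAS algorithm or the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20000, 50                      # tall overdetermined system
A = rng.normal(size=(m, n))
b = A @ rng.normal(size=n) + 0.01 * rng.normal(size=m)

# Dense Gaussian sketching operator (sparse alternatives such as CountSketch
# trade embedding quality for application speed).
d = 8 * n                             # sketch size, a common rule of thumb
S = rng.normal(size=(d, m)) / np.sqrt(d)

x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)   # sketch-and-solve
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)            # reference solution

ratio = np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_exact - b)
print("residual ratio (sketched / exact):", ratio)
```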
arxiv-660394
2409.14312
Avengers Assemble: Amalgamation of Non-Semantic Features for Depression Detection
<|reference_start|>Avengers Assemble: Amalgamation of Non-Semantic Features for Depression Detection: In this study, we address the challenge of depression detection from speech, focusing on the potential of non-semantic features (NSFs) to capture subtle markers of depression. While prior research has leveraged various features for this task, NSFs extracted from pre-trained models (PTMs) designed for non-semantic tasks, such as paralinguistic speech processing (TRILLsson), speaker recognition (x-vector), and emotion recognition (emoHuBERT), have shown significant promise. However, the potential of combining these diverse features has not been fully explored. In this work, we demonstrate that the amalgamation of NSFs results in complementary behavior, leading to enhanced depression detection performance. Furthermore, to this end, we introduce a simple, novel framework, FuSeR, designed to effectively combine these features. Our results show that FuSeR outperforms models utilizing individual NSFs as well as baseline fusion techniques and obtains state-of-the-art (SOTA) performance on the E-DAIC benchmark with an RMSE of 5.51 and an MAE of 4.48, establishing it as a robust approach for depression detection.<|reference_end|>
arxiv
@article{phukan2024avengers, title={Avengers Assemble: Amalgamation of Non-Semantic Features for Depression Detection}, author={Orchid Chetia Phukan, Swarup Ranjan Behera, Shubham Singh, Muskaan Singh, Vandana Rajan, Arun Balaji Buduru, Rajesh Sharma and S. R. Mahadeva Prasanna}, journal={arXiv preprint arXiv:2409.14312}, year={2024}, archivePrefix={arXiv}, eprint={2409.14312}, primaryClass={eess.AS cs.SD} }
phukan2024avengers
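As a point of reference for combining non-semantic features, the sketch below z-normalizes three embedding sets, concatenates them, and fits a ridge regressor to a severity score. It is a plain late-fusion baseline on synthetic data with assumed embedding sizes, not the FuSeR framework or the E-DAIC setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                        # utterances (synthetic data)
trillsson = rng.normal(size=(n, 1024))         # assumed embedding sizes,
xvector   = rng.normal(size=(n, 512))          # not the PTMs' actual dims
emohubert = rng.normal(size=(n, 768))
severity  = rng.uniform(0, 24, size=n)         # placeholder depression score

def znorm(f):
    return (f - f.mean(0)) / (f.std(0) + 1e-8)

# Late fusion: normalise each feature set, concatenate, ridge-regress.
X = np.concatenate([znorm(trillsson), znorm(xvector), znorm(emohubert)], axis=1)
lam = 10.0
# Dual form of ridge keeps the linear solve at n x n instead of d x d.
w = X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), severity)
pred = X @ w
print("train RMSE of the fused baseline:", np.sqrt(np.mean((pred - severity) ** 2)))
```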
arxiv-660395
2409.14313
Anisotropic Diffusion Probabilistic Model for Imbalanced Image Classification
<|reference_start|>Anisotropic Diffusion Probabilistic Model for Imbalanced Image Classification: Real-world data often has a long-tailed distribution, where the scarcity of tail samples significantly limits the model's generalization ability. Denoising Diffusion Probabilistic Models (DDPM) are generative models based on stochastic differential equation theory and have demonstrated impressive performance in image classification tasks. However, existing diffusion probabilistic models do not perform satisfactorily in classifying tail classes. In this work, we propose the Anisotropic Diffusion Probabilistic Model (ADPM) for imbalanced image classification problems. We utilize the data distribution to control the diffusion speed of different class samples during the forward process, effectively improving the classification accuracy of the denoiser in the reverse process. Specifically, we provide a theoretical strategy for selecting noise levels for different categories in the diffusion process based on error analysis theory to address the imbalanced classification problem. Furthermore, we integrate global and local image priors in the forward process to enhance the model's discriminative ability in the spatial dimension, while incorporating semantic-level contextual information in the reverse process to boost the model's discriminative power and robustness. Through comparisons with state-of-the-art methods on four medical benchmark datasets, we validate the effectiveness of the proposed method in handling long-tailed data. Our results confirm that the anisotropic diffusion model significantly improves the classification accuracy of rare classes while maintaining the accuracy of head classes. On the skin lesion datasets, PAD-UFES and HAM10000, the F1-scores of our method improved by 4% and 3%, respectively, compared to the original diffusion probabilistic model.<|reference_end|>
arxiv
@article{kong2024anisotropic, title={Anisotropic Diffusion Probabilistic Model for Imbalanced Image Classification}, author={Jingyu Kong, Yuan Guo, Yu Wang and Yuping Duan}, journal={arXiv preprint arXiv:2409.14313}, year={2024}, archivePrefix={arXiv}, eprint={2409.14313}, primaryClass={cs.CV} }
kong2024anisotropic
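The class-dependent forward process can be pictured with the standard DDPM forward kernel where the effective diffusion speed depends on class frequency. The frequency-to-speed mapping below is a placeholder assumption for illustration only; the paper derives its noise-level selection from error analysis, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard DDPM forward kernel: x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
abar = np.cumprod(1.0 - betas)

# Class-dependent diffusion speed (illustrative assumption, not the paper's
# rule): rare classes are diffused more slowly by shrinking their effective
# time index, so tail samples keep more signal at the same nominal step t.
class_freq = np.array([0.70, 0.25, 0.05])            # head ... tail
speed = class_freq / class_freq.max()                # in (0, 1]

def forward_diffuse(x0, t, cls):
    t_eff = max(int(t * speed[cls]), 1)              # slower for tail classes
    eps = rng.normal(size=x0.shape)
    return np.sqrt(abar[t_eff - 1]) * x0 + np.sqrt(1.0 - abar[t_eff - 1]) * eps

x0 = rng.normal(size=(32, 32))
for c in range(3):
    xt = forward_diffuse(x0, t=800, cls=c)
    corr = np.corrcoef(x0.ravel(), xt.ravel())[0, 1]
    print(f"class {c} (freq {class_freq[c]:.2f}): remaining correlation with x0 = {corr:.2f}")
```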
arxiv-660396
2409.14316
MVPGS: Excavating Multi-view Priors for Gaussian Splatting from Sparse Input Views
<|reference_start|>MVPGS: Excavating Multi-view Priors for Gaussian Splatting from Sparse Input Views: Recently, the Neural Radiance Field (NeRF) advancement has facilitated few-shot Novel View Synthesis (NVS), which is a significant challenge in 3D vision applications. Despite numerous attempts to reduce the dense input requirement in NeRF, it still suffers from time-consuming training and rendering processes. More recently, 3D Gaussian Splatting (3DGS) achieves real-time high-quality rendering with an explicit point-based representation. However, similar to NeRF, it tends to overfit the training views due to a lack of constraints. In this paper, we propose \textbf{MVPGS}, a few-shot NVS method that excavates the multi-view priors based on 3D Gaussian Splatting. We leverage the recent learning-based Multi-view Stereo (MVS) to enhance the quality of geometric initialization for 3DGS. To mitigate overfitting, we propose a forward-warping method that provides additional appearance constraints consistent with the scene, based on the computed geometry. Furthermore, we introduce a view-consistent geometry constraint for Gaussian parameters to facilitate proper optimization convergence and utilize a monocular depth regularization as compensation. Experiments show that the proposed method achieves state-of-the-art performance with real-time rendering speed. Project page: https://zezeaaa.github.io/projects/MVPGS/<|reference_end|>
arxiv
@article{xu2024mvpgs:, title={MVPGS: Excavating Multi-view Priors for Gaussian Splatting from Sparse Input Views}, author={Wangze Xu, Huachen Gao, Shihe Shen, Rui Peng, Jianbo Jiao, Ronggang Wang}, journal={arXiv preprint arXiv:2409.14316}, year={2024}, archivePrefix={arXiv}, eprint={2409.14316}, primaryClass={cs.CV} }
xu2024mvpgs:
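Forward warping of a source view into a target view, given per-pixel depth and relative pose, is the kind of operation used above to impose additional appearance constraints. The sketch below is a nearest-pixel, z-buffered version with shared intrinsics on synthetic data; it is a simplification for illustration, not MVPGS's implementation.

```python
import numpy as np

def forward_warp(src_img, src_depth, K, T_src2tgt):
    """Splat source pixels into the target view (nearest-pixel, z-buffered).
    K: 3x3 intrinsics shared by both views; T_src2tgt: 4x4 rigid transform."""
    H, W, _ = src_img.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    # Back-project to source camera coordinates, then move to the target frame.
    cam = (np.linalg.inv(K) @ pix.T) * src_depth.reshape(1, -1)
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    tgt = (T_src2tgt @ cam_h)[:3]
    proj = K @ tgt
    z = proj[2]
    x = np.round(proj[0] / z).astype(int)
    y = np.round(proj[1] / z).astype(int)
    out = np.zeros_like(src_img)
    zbuf = np.full((H, W), np.inf)
    valid = (z > 0) & (x >= 0) & (x < W) & (y >= 0) & (y < H)
    colors = src_img.reshape(-1, 3)
    for i in np.flatnonzero(valid):               # simple z-buffered splat
        if z[i] < zbuf[y[i], x[i]]:
            zbuf[y[i], x[i]] = z[i]
            out[y[i], x[i]] = colors[i]
    return out

# Toy usage: the identity pose reproduces the source image.
H, W = 24, 32
K = np.array([[30.0, 0, W / 2], [0, 30.0, H / 2], [0, 0, 1]])
img = np.random.default_rng(0).random((H, W, 3))
depth = np.full((H, W), 2.0)
warped = forward_warp(img, depth, K, np.eye(4))
print("identity warp reproduces source:", np.allclose(warped, img))
```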
arxiv-660397
2409.14317
Dissecting CXL Memory Performance at Scale: Analysis, Modeling, and Optimization
<|reference_start|>Dissecting CXL Memory Performance at Scale: Analysis, Modeling, and Optimization: We present SupMario, a characterization framework designed to thoroughly analyze, model, and optimize CXL memory performance. SupMario is based on extensive evaluation of 265 workloads spanning 4 real CXL devices within 7 memory latency configurations across 4 processor platforms. SupMario uncovers many key insights, including detailed workload performance at sub-microsecond memory latencies (140-410 ns), CXL tail latencies, CPU tolerance to CXL latencies, CXL performance root-cause analysis, and precise performance prediction models. In particular, SupMario performance models rely solely on 12 CPU performance counters and accurately fit over 99% and 91%-94% of workloads with a 10% misprediction target for NUMA and CXL memory, respectively. We demonstrate the practical utility of SupMario characterization findings, models, and insights by applying them to popular CXL memory management schemes, such as page interleaving and tiering policies, to identify system inefficiencies during runtime. We introduce a novel ``best-shot'' page interleaving policy and a regulated page tiering policy (Alto) tailored for memory bandwidth- and latency-sensitive workloads. In bandwidth-bound scenarios, our ``best-shot'' interleaving, guided by our novel performance prediction model, achieves close-to-optimal performance by exploiting the aggregate system and CXL/NUMA memory bandwidth. For latency-sensitive workloads, Alto, driven by our key insight of utilizing ``amortized'' memory latency to regulate unnecessary page migrations, achieves up to 177% improvement over state-of-the-art memory tiering systems like TPP, as demonstrated through extensive evaluation with 8 real-world applications.<|reference_end|>
arxiv
@article{liu2024dissecting, title={Dissecting CXL Memory Performance at Scale: Analysis, Modeling, and Optimization}, author={Jinshu Liu, Hamid Hadian, Hanchen Xu, Daniel S. Berger and Huaicheng Li}, journal={arXiv preprint arXiv:2409.14317}, year={2024}, archivePrefix={arXiv}, eprint={2409.14317}, primaryClass={cs.OS} }
liu2024dissecting
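A counter-based performance model of the kind described above can be sketched as a plain least-squares regression from a small set of CPU counters to a slowdown target. The data here is synthetic and the linear form is an assumption; the paper's actual model and counter selection are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_workloads, n_counters = 265, 12            # mirrors the scale described above
X = rng.normal(size=(n_workloads, n_counters))          # placeholder counters
true_w = rng.normal(size=n_counters)
slowdown = X @ true_w + 0.05 * rng.normal(size=n_workloads)   # synthetic target

# Least-squares fit: slowdown under a given memory-latency configuration as a
# linear function of the 12 counters (a generic regression sketch; the paper's
# exact model form and counter set are not reproduced here).
w, *_ = np.linalg.lstsq(X, slowdown, rcond=None)
mispred = np.abs(X @ w - slowdown) / np.maximum(np.abs(slowdown), 1e-9)
print("share of workloads within the 10% misprediction target:",
      np.mean(mispred <= 0.10))
```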
arxiv-660398
2409.14319
Scene-Text Grounding for Text-Based Video Question Answering
<|reference_start|>Scene-Text Grounding for Text-Based Video Question Answering: Existing efforts in text-based video question answering (TextVideoQA) are criticized for their opaque decision-making and heavy reliance on scene-text recognition. In this paper, we propose to study Grounded TextVideoQA by forcing models to answer questions and spatio-temporally localize the relevant scene-text regions, thus decoupling QA from scene-text recognition and promoting research towards interpretable QA. The task has three-fold significance. First, it encourages scene-text evidence versus other short-cuts for answer predictions. Second, it directly accepts scene-text regions as visual answers, thus circumventing the problem of ineffective answer evaluation by stringent string matching. Third, it isolates the challenges inherent in VideoQA and scene-text recognition. This enables the diagnosis of the root causes of failure predictions, e.g., wrong QA or wrong scene-text recognition? To achieve Grounded TextVideoQA, we propose the T2S-QA model that highlights a disentangled temporal-to-spatial contrastive learning strategy for weakly-supervised scene-text grounding and grounded TextVideoQA. To facilitate evaluation, we construct a new dataset ViTXT-GQA which features 52K scene-text bounding boxes within 2.2K temporal segments related to 2K questions and 729 videos. With ViTXT-GQA, we perform extensive experiments and demonstrate the severe limitations of existing techniques in Grounded TextVideoQA. While T2S-QA achieves superior results, the large performance gap with humans leaves ample space for improvement. Our further analysis of oracle scene-text inputs posits that the major challenge is scene-text recognition. To advance the research of Grounded TextVideoQA, our dataset and code are at \url{https://github.com/zhousheng97/ViTXT-GQA.git}<|reference_end|>
arxiv
@article{zhou2024scene-text, title={Scene-Text Grounding for Text-Based Video Question Answering}, author={Sheng Zhou, Junbin Xiao, Xun Yang, Peipei Song, Dan Guo, Angela Yao, Meng Wang, Tat-Seng Chua}, journal={arXiv preprint arXiv:2409.14319}, year={2024}, archivePrefix={arXiv}, eprint={2409.14319}, primaryClass={cs.CV cs.MM} }
zhou2024scene-text
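Grounded answers of this kind are typically scored by overlap between predicted and ground-truth scene-text boxes. The helper below computes a standard IoU-at-threshold accuracy; it is a generic criterion with illustrative boxes and threshold, not the exact ViTXT-GQA evaluation protocol.

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def grounding_accuracy(preds, gts, thr=0.5):
    """Fraction of frames whose predicted scene-text box overlaps the
    ground-truth box with IoU >= thr (a generic criterion, not necessarily
    the exact ViTXT-GQA protocol)."""
    hits = sum(box_iou(p, g) >= thr for p, g in zip(preds, gts))
    return hits / max(len(gts), 1)

print(grounding_accuracy([(10, 10, 50, 30), (0, 0, 5, 5)],
                         [(12, 12, 48, 28), (100, 100, 120, 110)]))   # 0.5
```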
arxiv-660399
2409.14320
Exploring the Use of Contingency for Nuclear Electrical Studies
<|reference_start|>Exploring the Use of Contingency for Nuclear Electrical Studies: This paper examines the use of contingency analysis for a nuclear power plant to determine its potential benefits for the nuclear industry. Various N-1 contingencies were analyzed for a model of an existing nuclear plant, primarily inspecting voltage violations resulting from a failure. Remedial Action Schemes were suggested to support the reduction of voltage violations in the event of a failure within the system. Many of the schemes presented were addressed by existing redundancies and protection schemes provided through the industry-standard bounding analysis used in the design process. This paper proposes the future use of real-time contingency analysis for nuclear power plants, conducted using continuously updated voltage, current, and power measurements throughout the system. This will provide real-time information about the system and can serve as historical data to reduce the analysis needed for pending design changes in the plant.<|reference_end|>
arxiv
@article{khanpour2024exploring, title={Exploring the Use of Contingency for Nuclear Electrical Studies}, author={Cameron Khanpour and Jon T. Fontejon}, journal={arXiv preprint arXiv:2409.14320}, year={2024}, archivePrefix={arXiv}, eprint={2409.14320}, primaryClass={eess.SY cs.SY} }
khanpour2024exploring
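The N-1 loop at the heart of contingency analysis can be illustrated on a toy network. The sketch below uses a linear DC power-flow approximation and checks branch flow limits rather than the voltage violations studied above on a full AC plant model; the 4-bus ring, injections, and limits are made up for illustration.

```python
import numpy as np

# Toy 4-bus ring, bus 0 = slack/generator; reactances and limits in p.u.
lines = [(0, 1, 0.1), (1, 2, 0.1), (2, 3, 0.1), (3, 0, 0.1)]
limit = 1.2                                   # thermal limit per line
P = np.array([0.0, -0.5, -0.6, -0.4])         # bus injections; slack balances

def dc_flows(active):
    """Branch flows from a DC power flow over the active line set."""
    B = np.zeros((4, 4))
    for i, j, x in active:
        B[i, i] += 1 / x; B[j, j] += 1 / x
        B[i, j] -= 1 / x; B[j, i] -= 1 / x
    theta = np.zeros(4)
    theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])    # slack angle fixed at 0
    return {(i, j): (theta[i] - theta[j]) / x for i, j, x in active}

print("base-case flows:", {k: round(v, 3) for k, v in dc_flows(lines).items()})

# N-1 screening: take each line out in turn and flag overloaded survivors.
for k, out in enumerate(lines):
    remaining = lines[:k] + lines[k + 1:]
    for branch, f in dc_flows(remaining).items():
        if abs(f) > limit:
            print(f"outage of line {out[:2]} overloads line {branch}: {f:+.2f} p.u.")
```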
arxiv-660400
2409.14323
Cluster-based Network Time Synchronization for Resilience with Energy Efficiency
<|reference_start|>Cluster-based Network Time Synchronization for Resilience with Energy Efficiency: Time synchronization of devices in Internet-of-Things (IoT) networks is one of the challenging problems and a pre-requisite for the design of low-latency applications. Although many existing solutions have tried to address this problem, almost all solutions assume all the devices (nodes) in the network are faultless. Furthermore, these solutions exchange a large number of messages to achieve synchronization, leading to significant communication and energy overhead. To address these shortcomings, we propose C-sync, a clustering-based decentralized time synchronization protocol that provides resilience against several types of faults with energy-efficient communication. C-sync achieves scalability by introducing multiple reference nodes in the network that restrict the maximum number of hops any node can have to its time source. The protocol is designed with a modular structure on the Contiki platform to allow application transitions. We evaluate C-sync on a real testbed that comprises over 40 Tmote Sky hardware nodes distributed across different levels in a building and show through experiments the fault resilience, energy efficiency, and scalability of the protocol. C-sync detects and isolates faults to a cluster and recovers quickly. The evaluation makes a qualitative comparison with state-of-the-art protocols and a quantitative comparison with a class of decentralized protocols (derived from GTSP) that provide synchronization with no/limited fault-tolerance. Results also show a reduction of 56.12% and 75.75% in power consumption in the worst-case and best-case scenarios, respectively, compared to GTSP, while achieving similar accuracy.<|reference_end|>
arxiv
@article{shivaraman2024cluster-based, title={Cluster-based Network Time Synchronization for Resilience with Energy Efficiency}, author={Nitin Shivaraman, Patrick Schuster, Saravanan Ramanathan, Arvind Easwaran, Sebastian Steinhorst}, journal={arXiv preprint arXiv:2409.14323}, year={2024}, doi={10.1109/RTSS52674.2021.00024}, archivePrefix={arXiv}, eprint={2409.14323}, primaryClass={cs.DC cs.NI cs.SY eess.SY} }
shivaraman2024cluster-based
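To make the hierarchical synchronization idea concrete, the toy simulation below lets members correct toward cluster heads and heads toward a single reference, using an NTP-style two-way offset estimate under an assumed symmetric link delay. It is a schematic of hierarchical offset correction only, not the C-sync protocol, its fault handling, or its energy model.

```python
import random

random.seed(0)

# Two-level hierarchy: reference -> cluster heads -> members. Each round, a
# node estimates its offset to its parent via a two-way exchange and corrects
# a fraction of it. Symmetric link delay and static offsets are assumptions.
delay = 0.002            # one-way link delay in seconds (assumed symmetric)
alpha = 0.5              # correction gain per round

nodes = {"ref": 0.0}
nodes.update({f"head{i}": random.uniform(-0.05, 0.05) for i in range(2)})
nodes.update({f"n{i}": random.uniform(-0.05, 0.05) for i in range(6)})
parent = {f"head{i}": "ref" for i in range(2)}
parent.update({f"n{i}": f"head{i % 2}" for i in range(6)})

def sync_round():
    for child, par in parent.items():
        # NTP-style two-way timestamps under the symmetric-delay assumption.
        t1 = nodes[child]
        t2 = nodes[par] + delay
        t3 = nodes[par] + delay
        t4 = nodes[child] + 2 * delay
        offset = ((t2 - t1) + (t3 - t4)) / 2
        nodes[child] += alpha * offset

for _ in range(10):
    sync_round()
print("max clock error after 10 rounds:",
      max(abs(v - nodes["ref"]) for v in nodes.values()))
```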