Columns: corpus_id (string, 7-12 chars); paper_id (string, 9-16 chars); title (string, 1-261 chars); abstract (string, 70-4.02k chars); source (string, 1 class); bibtex (string, 208-20.9k chars); citation_key (string, 6-100 chars)
arxiv-667401
2410.06627
Variations in Multi-Agent Actor-Critic Frameworks for Joint Optimizations in UAV Swarm Networks: Recent Evolution, Challenges, and Directions
<|reference_start|>Variations in Multi-Agent Actor-Critic Frameworks for Joint Optimizations in UAV Swarm Networks: Recent Evolution, Challenges, and Directions: Autonomous unmanned aerial vehicle (UAV) swarm networks (UAVSNs) can effectively provide surveillance, connectivity, and computing services to ground users (GUs). These missions require trajectory planning, UAV-GUs association, task offloading, next-hop selection, and the allocation of resources such as transmit power, bandwidth, caching, and computing to improve network performance. Owing to the highly dynamic topology, limited resources, and non-availability of global knowledge, optimizing network performance in UAVSNs is very intricate. Hence, it requires an adaptive joint optimization framework that can tackle both discrete and continuous decision variables to ensure optimal network performance under dynamic constraints. Multi-agent deep reinforcement learning-based adaptive actor-critic frameworks can efficiently address these problems. This paper investigates the recent evolution of actor-critic frameworks to deal with joint optimization problems in UAVSNs. In addition, challenges and potential solutions are addressed as research directions.<|reference_end|>
arxiv
@article{alam2024variations, title={Variations in Multi-Agent Actor-Critic Frameworks for Joint Optimizations in UAV Swarm Networks: Recent Evolution, Challenges, and Directions}, author={Muhammad Morshed Alam, Muhammad Yeasir Aarafat, and Tamim Hossain}, journal={arXiv preprint arXiv:2410.06627}, year={2024}, archivePrefix={arXiv}, eprint={2410.06627}, primaryClass={eess.SY cs.SY} }
alam2024variations
arxiv-667402
2410.06628
Does Vec2Text Pose a New Corpus Poisoning Threat?
<|reference_start|>Does Vec2Text Pose a New Corpus Poisoning Threat?: The emergence of Vec2Text -- a method for text embedding inversion -- has raised serious privacy concerns for dense retrieval systems which use text embeddings. This threat comes from the ability of an attacker with access to embeddings to reconstruct the original text. In this paper, we take a new look at Vec2Text and investigate how much of a threat it poses when used for corpus poisoning, whereby an attacker injects adversarial passages into a retrieval corpus with the intention of misleading dense retrievers. Theoretically, Vec2Text is far more dangerous than previous attack methods because it does not need access to the embedding model's weights and it can efficiently generate many adversarial passages. We show that, under certain conditions, corpus poisoning with Vec2Text can pose a serious threat to dense retriever system integrity and user experience by injecting adversarial passages into top-ranked positions. Code and data are made available at https://github.com/ielab/vec2text-corpus-poisoning<|reference_end|>
arxiv
@article{zhuang2024does, title={Does Vec2Text Pose a New Corpus Poisoning Threat?}, author={Shengyao Zhuang, Bevan Koopman, Guido Zuccon}, journal={arXiv preprint arXiv:2410.06628}, year={2024}, archivePrefix={arXiv}, eprint={2410.06628}, primaryClass={cs.IR} }
zhuang2024does
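The attack loop this abstract describes can be pictured with a short sketch: cluster the embeddings of anticipated queries, then invert each cluster centroid back into a passage and inject it into the corpus. The `invert_embedding` callable below is a hypothetical stand-in for a Vec2Text-style inversion model, not the paper's actual interface.

```python
# Sketch of the corpus-poisoning loop described in the abstract: cluster the
# query embeddings, then invert each cluster centroid back to text and inject
# the resulting passages into the corpus. `invert_embedding` is a hypothetical
# stand-in for a Vec2Text-style inversion model, NOT the paper's API.
import numpy as np
from sklearn.cluster import KMeans

def poison_corpus(query_embeddings: np.ndarray, n_passages: int, invert_embedding):
    """Return adversarial passages, one per query-embedding cluster."""
    kmeans = KMeans(n_clusters=n_passages, n_init=10).fit(query_embeddings)
    # A passage whose embedding sits at a cluster centroid is close to every
    # query in that cluster, so a dense retriever tends to rank it highly.
    return [invert_embedding(c) for c in kmeans.cluster_centers_]
```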
arxiv-667403
2410.06634
Tree of Problems: Improving structured problem solving with compositionality
<|reference_start|>Tree of Problems: Improving structured problem solving with compositionality: Large Language Models (LLMs) have demonstrated remarkable performance across multiple tasks through in-context learning. For complex reasoning tasks that require step-by-step thinking, Chain-of-Thought (CoT) prompting has given impressive results, especially when combined with self-consistency. Nonetheless, some tasks remain particularly difficult for LLMs to solve. Tree of Thoughts (ToT) and Graph of Thoughts (GoT) emerged as alternatives, dividing the complex problem into paths of subproblems. In this paper, we propose Tree of Problems (ToP), a simpler version of ToT, which we hypothesise can work better for complex tasks that can be divided into identical subtasks. Our empirical results show that our approach outperforms ToT and GoT, and in addition performs better than CoT on complex reasoning tasks. All code for this paper is publicly available here: https://github.com/ArmelRandy/tree-of-problems.<|reference_end|>
arxiv
@article{zebaze2024tree, title={Tree of Problems: Improving structured problem solving with compositionality}, author={Armel Zebaze and Benoît Sagot and Rachel Bawden}, journal={arXiv preprint arXiv:2410.06634}, year={2024}, archivePrefix={arXiv}, eprint={2410.06634}, primaryClass={cs.CL} }
zebaze2024tree
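The decompose-solve-merge recursion this abstract outlines can be sketched in a few lines; `llm`, `decompose`, and `merge` are placeholder callables, and the prompt wording is invented for illustration.

```python
# Minimal decompose-solve-merge recursion in the spirit of Tree of Problems:
# split a complex instance into identical, simpler subproblems, solve each
# (recursively or with a single model call), then merge the answers.
# `llm` is a placeholder for any text-completion function.
def tree_of_problems(problem: str, llm, decompose, merge, depth: int = 1) -> str:
    if depth == 0:
        return llm(f"Solve: {problem}")           # leaf: direct CoT-style call
    subproblems = decompose(problem)              # identical, simpler subtasks
    answers = [tree_of_problems(p, llm, decompose, merge, depth - 1)
               for p in subproblems]
    return merge(problem, answers)                # combine partial answers
```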
arxiv-667404
2410.06638
Subtle Errors Matter: Preference Learning via Error-injected Self-editing
<|reference_start|>Subtle Errors Matter: Preference Learning via Error-injected Self-editing: Large Language Models (LLMs) have exhibited strong mathematical reasoning and computational prowess, tackling tasks ranging from basic arithmetic to advanced competition-level problems. However, frequently occurring subtle errors, such as miscalculations or incorrect substitutions, limit the models' full mathematical potential. Existing studies to improve mathematical ability typically involve distilling reasoning skills from stronger LLMs or applying preference learning to step-wise response pairs. Although these methods leverage samples of varying granularity to mitigate reasoning errors, they overlook the frequently occurring subtle errors. A major reason is that sampled preference pairs involve differences unrelated to the errors, which may distract the model from focusing on subtle errors. In this work, we propose a novel preference learning framework called eRror-Injected Self-Editing (RISE), which injects predefined subtle errors into partial tokens of correct solutions to construct hard pairs for error mitigation. In detail, RISE uses the model itself to edit a small number of tokens in the solution, injecting designed subtle errors. Then, pairs composed of self-edited solutions and their corresponding correct ones, along with pairs of correct and incorrect solutions obtained through sampling, are used together for subtle error-aware DPO training. Compared with other preference learning methods, RISE further refines the training objective to focus on predefined errors and their tokens, without requiring fine-grained sampling or preference annotation. Extensive experiments validate the effectiveness of RISE, with preference learning on Qwen2-7B-Instruct yielding notable improvements of 3.0% on GSM8K and 7.9% on MATH.<|reference_end|>
arxiv
@article{xu2024subtle, title={Subtle Errors Matter: Preference Learning via Error-injected Self-editing}, author={Kaishuai Xu, Tiezheng Yu, Wenjun Hou, Yi Cheng, Chak Tou Leong, Liangyou Li, Xin Jiang, Lifeng Shang, Qun Liu, Wenjie Li}, journal={arXiv preprint arXiv:2410.06638}, year={2024}, archivePrefix={arXiv}, eprint={2410.06638}, primaryClass={cs.CL cs.AI} }
xu2024subtle
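A minimal sketch of how the preference pairs described here could be assembled, assuming a `self_edit` callable that injects one predefined subtle error into a correct solution; this is a reading of the abstract, not the paper's code.

```python
# Sketch of RISE-style pair construction: each correct solution is self-edited
# to inject a predefined subtle error (e.g., a miscalculation), and the
# (correct, edited) pair becomes a hard DPO preference pair. These pairs are
# combined with ordinary sampled correct/incorrect pairs. `self_edit` is a
# hypothetical stand-in for the model editing a small number of tokens.
def build_rise_pairs(correct_solutions, sampled_incorrect, self_edit, error_types):
    pairs = []
    for sol in correct_solutions:
        for err in error_types:                   # e.g. "miscalculation"
            pairs.append({"chosen": sol, "rejected": self_edit(sol, err)})
    # Plus coarser-grained pairs from plain sampling, as in standard DPO data.
    pairs += [{"chosen": c, "rejected": r} for c, r in sampled_incorrect]
    return pairs
```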
arxiv-667405
2410.06645
Continual Learning in the Frequency Domain
<|reference_start|>Continual Learning in the Frequency Domain: Continual learning (CL) is designed to learn new tasks while preserving existing knowledge. Replaying samples from earlier tasks has proven to be an effective method to mitigate the forgetting of previously acquired knowledge. However, the current research on the training efficiency of rehearsal-based methods is insufficient, which limits the practical application of CL systems in resource-limited scenarios. The human visual system (HVS) exhibits varying sensitivities to different frequency components, enabling the efficient elimination of visually redundant information. Inspired by HVS, we propose a novel framework called Continual Learning in the Frequency Domain (CLFD). To our knowledge, this is the first study to utilize frequency domain features to enhance the performance and efficiency of CL training on edge devices. For the input features of the feature extractor, CLFD employs wavelet transform to map the original input image into the frequency domain, thereby effectively reducing the size of input feature maps. Regarding the output features of the feature extractor, CLFD selectively utilizes output features for distinct classes for classification, thereby balancing the reusability and interference of output features based on the frequency domain similarity of the classes across various tasks. Optimizing only the input and output features of the feature extractor allows for seamless integration of CLFD with various rehearsal-based methods. Extensive experiments conducted in both cloud and edge environments demonstrate that CLFD consistently improves the performance of state-of-the-art (SOTA) methods in both precision and training efficiency. Specifically, CLFD can increase the accuracy of the SOTA CL method by up to 6.83% and reduce the training time by 2.6$\times$.<|reference_end|>
arxiv
@article{liu2024continual, title={Continual Learning in the Frequency Domain}, author={Ruiqi Liu, Boyu Diao, Libo Huang, Zijia An, Zhulin An, Yongjun Xu}, journal={arXiv preprint arXiv:2410.06645}, year={2024}, archivePrefix={arXiv}, eprint={2410.06645}, primaryClass={cs.CV} }
liu2024continual
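The input-side idea (shrinking feature maps with a wavelet transform before the feature extractor) can be reproduced with PyWavelets; a single-level 2-D DWT halves each spatial dimension. The Haar wavelet and toy data are illustrative choices, not the paper's configuration.

```python
# Input-side idea of CLFD illustrated with PyWavelets: a single-level 2-D
# discrete wavelet transform maps an HxW image to four H/2 x W/2 subbands;
# keeping only selected subbands shrinks the feature maps fed to the extractor.
import numpy as np
import pywt

img = np.random.rand(3, 224, 224)                  # CHW image (toy data)
cA, (cH, cV, cD) = pywt.dwt2(img, "haar", axes=(-2, -1))
print(cA.shape)                                    # (3, 112, 112): 4x fewer pixels
```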
arxiv-667406
2410.06647
Achieving Interference-Free Degrees of Freedom in Cellular Networks via RIS
<|reference_start|>Achieving Interference-Free Degrees of Freedom in Cellular Networks via RIS: It's widely perceived that Reconfigurable Intelligent Surfaces (RIS) cannot increase Degrees of Freedom (DoF) due to their relay nature. A notable exception is Jiang \& Yu's work. They demonstrate via simulation that in an ideal $K$-user interference channel, passive RIS can achieve the interference-free DoF. In this paper, we investigate the DoF gain of RIS in more realistic systems, namely cellular networks, and more challenging scenarios with direct links. We prove that RIS can boost the DoF per cell to that of the interference-free scenario even \textit{with direct links}. Furthermore, we \textit{theoretically} quantify the number of RIS elements required to achieve that goal, i.e. $\max\left\{ 2L, (\sqrt{L} + c)\eta + L \right\}$ (where $L=GM(GM-1)$, $c$ is a constant and $\eta$ denotes the ratio of channel strength) for $G$ cells with more single-antenna users $K$ than base station antennas $M$ per cell. The main challenge lies in addressing the feasibility of a system of algebraic equations, which is difficult by itself in algebraic geometry. We tackle this problem in a probabilistic way, by exploiting the randomness of the involved coefficients and addressing the problem from the perspective of extreme value statistics and convex geometry. Moreover, numerical results confirm the tightness of our theoretical results.<|reference_end|>
arxiv
@article{wang2024achieving, title={Achieving Interference-Free Degrees of Freedom in Cellular Networks via RIS}, author={Junzhi Wang, Jun Sun, Zheng Xiao, Limin Liao, Yingzhuang Liu}, journal={arXiv preprint arXiv:2410.06647}, year={2024}, archivePrefix={arXiv}, eprint={2410.06647}, primaryClass={cs.IT math.IT} }
wang2024achieving
arxiv-667407
2410.06648
Q-WSL: Leveraging Dynamic Programming for Weighted Supervised Learning in Goal-conditioned RL
<|reference_start|>Q-WSL: Leveraging Dynamic Programming for Weighted Supervised Learning in Goal-conditioned RL: A novel class of advanced algorithms, termed Goal-Conditioned Weighted Supervised Learning (GCWSL), has recently emerged to tackle the challenges posed by sparse rewards in goal-conditioned reinforcement learning (RL). GCWSL consistently delivers strong performance across a diverse set of goal-reaching tasks due to its simplicity, effectiveness, and stability. However, GCWSL methods lack a crucial capability known as trajectory stitching, which is essential for learning optimal policies when faced with unseen skills during testing. This limitation becomes particularly pronounced when the replay buffer is predominantly filled with sub-optimal trajectories. In contrast, traditional TD-based RL methods, such as Q-learning, which utilize Dynamic Programming, do not face this issue but often experience instability due to the inherent difficulties in value function approximation. In this paper, we propose Q-learning Weighted Supervised Learning (Q-WSL), a novel framework designed to overcome the limitations of GCWSL by incorporating the strengths of Dynamic Programming found in Q-learning. Q-WSL leverages Dynamic Programming results to output the optimal action for (state, goal) pairs across different trajectories within the replay buffer. This approach synergizes the strengths of both Q-learning and GCWSL, effectively mitigating their respective weaknesses and enhancing overall performance. Empirical evaluations on challenging goal-reaching tasks demonstrate that Q-WSL surpasses other goal-conditioned approaches in terms of both performance and sample efficiency. Additionally, Q-WSL exhibits notable robustness in environments characterized by binary reward structures and environmental stochasticity.<|reference_end|>
arxiv
@article{lei2024q-wsl:, title={Q-WSL: Optimizing Goal-Conditioned RL with Weighted Supervised Learning via Dynamic Programming}, author={Xing Lei, Xuetao Zhang, Zifeng Zhuang, Donglin Wang}, journal={arXiv preprint arXiv:2410.06648}, year={2024}, archivePrefix={arXiv}, eprint={2410.06648}, primaryClass={cs.LG} }
lei2024q-wsl:
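One plausible reading of the combination described here, stated as an assumption rather than the paper's exact objective: weight goal-conditioned behavior cloning by advantages derived from a learned Q-function.

```python
# One plausible reading of Q-WSL's combination (an assumption, not the paper's
# exact objective): weight goal-conditioned behavior cloning by Q-derived
# advantages, so Dynamic-Programming value estimates favor the best action for
# each (state, goal) pair across trajectories in the replay buffer.
import numpy as np

def weighted_supervised_loss(log_probs, q_values, values, beta=1.0):
    """log_probs: log pi(a|s,g); q_values: Q(s,g,a); values: V(s,g)."""
    advantages = q_values - values
    weights = np.exp(np.clip(advantages / beta, -10, 10))  # exp-advantage weights
    return -(weights * log_probs).mean()
```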
arxiv-667408
2410.06651
Toward Physics-guided Time Series Embedding
<|reference_start|>Toward Physics-guided Time Series Embedding: In various scientific and engineering fields, the primary research areas have revolved around physics-based dynamical systems modeling and data-driven time series analysis. According to the embedding theory, dynamical systems and time series can be mutually transformed using observation functions and physical reconstruction techniques. Based on this, we propose Embedding Duality Theory, where the parameterized embedding layer essentially provides a linear estimation of the non-linear time series dynamics. This theory enables us to bypass the parameterized embedding layer and directly employ physical reconstruction techniques to acquire a data embedding representation. Utilizing physical priors results in a 10X reduction in parameters, a 3X increase in speed, and maximum performance boosts of 18% in expert, 22% in few-shot, and 53% in zero-shot tasks, without any hyper-parameter tuning. All methods are encapsulated as a plug-and-play module.<|reference_end|>
arxiv
@article{hu2024toward, title={Toward Physics-guided Time Series Embedding}, author={Jiaxi Hu, Bowen Zhang, Qingsong Wen, Fugee Tsung, Yuxuan Liang}, journal={arXiv preprint arXiv:2410.06651}, year={2024}, archivePrefix={arXiv}, eprint={2410.06651}, primaryClass={cs.LG cs.AI} }
hu2024toward
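The "physical reconstruction" the abstract alludes to is classically a time-delay (Takens) embedding; a minimal sketch follows, with the delay and dimension chosen arbitrarily for illustration.

```python
# Classical time-delay (Takens) embedding: the kind of physical reconstruction
# the abstract proposes to use in place of a learned embedding layer.
# Delay tau and dimension d are illustrative; in practice they are chosen via
# e.g. mutual-information and false-nearest-neighbor criteria.
import numpy as np

def delay_embed(x: np.ndarray, d: int = 3, tau: int = 2) -> np.ndarray:
    """Map a scalar series of length T to (T-(d-1)*tau, d) delay vectors."""
    n = len(x) - (d - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(d)], axis=1)

z = delay_embed(np.sin(np.linspace(0, 20, 500)))   # (496, 3) reconstructed states
```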
arxiv-667409
2410.06652
Task-oriented Time Series Imputation Evaluation via Generalized Representers
<|reference_start|>Task-oriented Time Series Imputation Evaluation via Generalized Representers: Time series analysis is widely used in many fields such as power energy, economics, and transportation, and includes different tasks such as forecasting, anomaly detection, classification, etc. Missing values are widely observed in these tasks and often lead to unpredictable negative effects on existing methods, hindering their further application. In response to this situation, existing time series imputation methods mainly focus on restoring sequences based on their data characteristics, while ignoring the performance of the restored sequences in downstream tasks. Considering the different requirements of downstream tasks (e.g., forecasting), this paper proposes an efficient downstream task-oriented time series imputation evaluation approach. By combining time series imputation with neural network models used for downstream tasks, the gain of different imputation strategies on downstream tasks is estimated without retraining, and the most favorable imputation value for downstream tasks is given by combining different imputation strategies according to the estimated gain.<|reference_end|>
arxiv
@article{wang2024task-oriented, title={Task-oriented Time Series Imputation Evaluation via Generalized Representers}, author={Zhixian Wang and Linxiao Yang and Liang Sun and Qingsong Wen and Yi Wang}, journal={arXiv preprint arXiv:2410.06652}, year={2024}, archivePrefix={arXiv}, eprint={2410.06652}, primaryClass={cs.LG cs.AI} }
wang2024task-oriented
arxiv-667410
2410.06654
Performance Evaluation in Multimedia Retrieval
<|reference_start|>Performance Evaluation in Multimedia Retrieval: Performance evaluation in multimedia retrieval, as in the information retrieval domain at large, relies heavily on retrieval experiments, employing a broad range of techniques and metrics. These can involve human-in-the-loop and machine-only settings for the retrieval process itself and the subsequent verification of results. Such experiments can be elaborate and use-case-specific, which can make them difficult to compare or replicate. In this paper, we present a formal model to express all relevant aspects of such retrieval experiments, as well as a flexible open-source evaluation infrastructure that implements the model. These contributions intend to make a step towards lowering the hurdles for conducting retrieval experiments and improving their reproducibility.<|reference_end|>
arxiv
@article{sauter2024performance, title={Performance Evaluation in Multimedia Retrieval}, author={Loris Sauter, Ralph Gasser, Heiko Schuldt, Abraham Bernstein, Luca Rossetto}, journal={arXiv preprint arXiv:2410.06654}, year={2024}, doi={10.1145/3678881}, archivePrefix={arXiv}, eprint={2410.06654}, primaryClass={cs.IR cs.MM} }
sauter2024performance
arxiv-667411
2410.06656
WardropNet: Traffic Flow Predictions via Equilibrium-Augmented Learning
<|reference_start|>WardropNet: Traffic Flow Predictions via Equilibrium-Augmented Learning: When optimizing transportation systems, anticipating traffic flows is a central element. Yet, computing such traffic equilibria remains computationally expensive. Against this background, we introduce a novel combinatorial optimization augmented neural network architecture that allows for fast and accurate traffic flow predictions. We propose WardropNet, a neural network that combines classical layers with a subsequent equilibrium layer: the first ones inform the latter by predicting the parameterization of the equilibrium problem's latency functions. Using supervised learning we minimize the difference between the actual traffic flow and the predicted output. We show how to leverage a Bregman divergence fitting the geometry of the equilibria, which allows for end-to-end learning. WardropNet outperforms pure learning-based approaches in predicting traffic equilibria for realistic and stylized traffic scenarios. On realistic scenarios, WardropNet improves on average for time-invariant predictions by up to 72% and for time-variant predictions by up to 23% over pure learning-based approaches.<|reference_end|>
arxiv
@article{jungel2024wardropnet:, title={WardropNet: Traffic Flow Predictions via Equilibrium-Augmented Learning}, author={Kai Jungel, Dario Paccagnan, Axel Parmentier, Maximilian Schiffer}, journal={arXiv preprint arXiv:2410.06656}, year={2024}, archivePrefix={arXiv}, eprint={2410.06656}, primaryClass={cs.LG} }
jungel2024wardropnet:
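A toy version of the equilibrium layer on two parallel routes: given latency-function parameters (as the upstream network would predict), return the Wardrop-equilibrium split where latencies equalize. The real layer handles general networks and is trained end-to-end via a Bregman divergence.

```python
# Toy "equilibrium layer": given latency-function parameters predicted by the
# upstream network for two parallel routes, return the Wardrop-equilibrium
# flow split (latencies equalized) by bisection. The actual WardropNet layer
# handles general networks and is differentiated via a Bregman divergence.
def wardrop_split(a1, b1, a2, b2, demand=1.0, iters=60):
    """Affine latencies l_i(f) = a_i + b_i * f; find f1 with l1(f1) = l2(d - f1)."""
    lo, hi = 0.0, demand
    for _ in range(iters):
        f1 = (lo + hi) / 2
        if a1 + b1 * f1 > a2 + b2 * (demand - f1):
            hi = f1              # route 1 too slow -> shift flow to route 2
        else:
            lo = f1
    return f1, demand - f1

print(wardrop_split(a1=1.0, b1=2.0, a2=2.0, b2=1.0))  # ~ (0.667, 0.333)
```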
arxiv-667412
2410.06662
A data-driven approach for safety quantification of non-linear stochastic systems with unknown additive noise distribution
<|reference_start|>A data-driven approach for safety quantification of non-linear stochastic systems with unknown additive noise distribution: In this paper, we present a novel data-driven approach to quantify safety for non-linear, discrete-time stochastic systems with unknown noise distribution. We define safety as the probability that the system remains in a given region of the state space for a given time horizon and, to quantify it, we present an approach based on Stochastic Barrier Functions (SBFs). In particular, we introduce an inner approximation of the stochastic program to design a SBF in terms of a chance-constrained optimisation problem, which allows us to leverage the scenario approach theory to design a SBF from samples of the system with Probably Approximately Correct (PAC) guarantees. Our approach leads to tractable, robust linear programs, which enable us to assert safety for non-linear models that were otherwise deemed infeasible with existing methods. To further mitigate the computational complexity of our approach, we exploit the structure of the system dynamics and rely on spatial data structures to accelerate the construction and solution of the underlying optimisation problem. We show the efficacy and validity of our framework in several benchmarks, showing that our approach can obtain substantially tighter certificates compared to state-of-the-art with a confidence that is several orders of magnitude higher.<|reference_end|>
arxiv
@article{mathiesen2024a, title={A data-driven approach for safety quantification of non-linear stochastic systems with unknown additive noise distribution}, author={Frederik Baymler Mathiesen, Licio Romao, Simeon C. Calvert, Luca Laurenti, Alessandro Abate}, journal={arXiv preprint arXiv:2410.06662}, year={2024}, archivePrefix={arXiv}, eprint={2410.06662}, primaryClass={eess.SY cs.SY} }
mathiesen2024a
arxiv-667413
2410.06663
Data-informed modeling of the formation, persistence, and evolution of social norms and conventions
<|reference_start|>Data-informed modeling of the formation, persistence, and evolution of social norms and conventions: Social norms and conventions are commonly accepted and adopted behaviors and practices within a social group that guide interactions -- e.g., how to spell a word or how to greet people -- and are central to a group's culture and identity. Understanding the key mechanisms that govern the formation, persistence, and evolution of social norms and conventions in social communities is a problem of paramount importance for a broad range of real-world applications, spanning from preparedness for future emergencies to promotion of sustainable practices. In the past decades, mathematical modeling has emerged as a powerful tool to reproduce and study the complex dynamics of norm and convention change, gaining insights into their mechanisms, and ultimately deriving tools to predict their evolution. The first goal of this chapter is to introduce some of the main mathematical approaches for modeling social norms and conventions, including population models and agent-based models relying on the theories of dynamical systems, evolutionary dynamics, and game theory. The second goal of the chapter is to illustrate how quantitative observations and empirical data can be incorporated into these mathematical models in a systematic manner, establishing a data-based approach to mathematical modeling of formation, persistence, and evolution of social norms and conventions. Finally, current challenges and future opportunities in this growing field of research are discussed.<|reference_end|>
arxiv
@article{ye2024data-informed, title={Data-informed modeling of the formation, persistence, and evolution of social norms and conventions}, author={Mengbin Ye and Lorenzo Zino}, journal={arXiv preprint arXiv:2410.06663}, year={2024}, archivePrefix={arXiv}, eprint={2410.06663}, primaryClass={cs.SI cs.SY eess.SY math.DS physics.soc-ph} }
ye2024data-informed
arxiv-667414
2410.06664
Decouple-Then-Merge: Towards Better Training for Diffusion Models
<|reference_start|>Decouple-Then-Merge: Towards Better Training for Diffusion Models: Diffusion models are trained by learning a sequence of models that reverse each step of noise corruption. Typically, the model parameters are fully shared across multiple timesteps to enhance training efficiency. However, since the denoising tasks differ at each timestep, the gradients computed at different timesteps may conflict, potentially degrading the overall performance of image generation. To solve this issue, this work proposes a Decouple-then-Merge (DeMe) framework, which begins with a pretrained model and finetunes separate models tailored to specific timesteps. We introduce several improved techniques during the finetuning stage to promote effective knowledge sharing while minimizing training interference across timesteps. Finally, after finetuning, these separate models can be merged into a single model in the parameter space, ensuring efficient and practical inference. Experimental results show significant generation quality improvements across 6 benchmarks, including Stable Diffusion on COCO30K, ImageNet1K, PartiPrompts, and DDPM on LSUN Church, LSUN Bedroom, and CIFAR10.<|reference_end|>
arxiv
@article{ma2024decouple-then-merge:, title={Decouple-Then-Merge: Towards Better Training for Diffusion Models}, author={Qianli Ma, Xuefei Ning, Dongrui Liu, Li Niu, Linfeng Zhang}, journal={arXiv preprint arXiv:2410.06664}, year={2024}, archivePrefix={arXiv}, eprint={2410.06664}, primaryClass={cs.CV cs.AI} }
ma2024decouple-then-merge:
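The final merge step reduces to combining the timestep-specialized checkpoints in parameter space; a uniform average is sketched below, while the paper's actual merging scheme may weight models differently.

```python
# The final DeMe step merges the timestep-specialized checkpoints back into a
# single model in parameter space. A uniform average is shown; the paper's
# merging scheme may weight models differently (an assumption here).
import torch

def merge_state_dicts(state_dicts):
    keys = state_dicts[0].keys()
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(0)
            for k in keys}

# merged = merge_state_dicts([m1.state_dict(), m2.state_dict(), m3.state_dict()])
# model.load_state_dict(merged)
```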
arxiv-667415
2410.06665
Revisiting Multi-Permutation Equivariance through the Lens of Irreducible Representations
<|reference_start|>Revisiting Multi-Permutation Equivariance through the Lens of Irreducible Representations: This paper explores the characterization of equivariant linear layers for representations of permutations and related groups. Unlike traditional approaches, which address these problems using parameter-sharing, we consider an alternative methodology based on irreducible representations and Schur's lemma. Using this methodology, we obtain an alternative derivation for existing models like DeepSets, 2-IGN graph equivariant networks, and Deep Weight Space (DWS) networks. The derivation for DWS networks is significantly simpler than that of previous results. Next, we extend our approach to unaligned symmetric sets, where equivariance to the wreath product of groups is required. Previous works have addressed this problem in a rather restrictive setting, in which almost all wreath equivariant layers are Siamese. In contrast, we give a full characterization of layers in this case and show that there is a vast number of additional non-Siamese layers in some settings. We also show empirically that these additional non-Siamese layers can improve performance in tasks like graph anomaly detection, weight space alignment, and learning Wasserstein distances. Our code is available at \href{https://github.com/yonatansverdlov/Irreducible-Representations-of-Deep-Weight-Spaces}{GitHub}.<|reference_end|>
arxiv
@article{sverdlov2024revisiting, title={Revisiting Multi-Permutation Equivariance through the Lens of Irreducible Representations}, author={Yonatan Sverdlov, Ido Springer, Nadav Dym}, journal={arXiv preprint arXiv:2410.06665}, year={2024}, archivePrefix={arXiv}, eprint={2410.06665}, primaryClass={cs.LG cs.AI} }
sverdlov2024revisiting
arxiv-667416
2410.06667
Large Language Models as Code Executors: An Exploratory Study
<|reference_start|>Large Language Models as Code Executors: An Exploratory Study: The capabilities of Large Language Models (LLMs) have significantly evolved, extending from natural language processing to complex tasks like code understanding and generation. We expand the scope of LLMs' capabilities to a broader context, using LLMs to execute code snippets to obtain the output. This paper pioneers the exploration of LLMs as code executors, where code snippets are directly fed to the models for execution, and outputs are returned. We are the first to comprehensively examine this feasibility across various LLMs, including OpenAI's o1, GPT-4o, GPT-3.5, DeepSeek, and Qwen-Coder. Notably, the o1 model achieved over 90% accuracy in code execution, while others demonstrated lower accuracy levels. Furthermore, we introduce an Iterative Instruction Prompting (IIP) technique that processes code snippets line by line, enhancing the accuracy of weaker models by an average of 7.22% (with the highest improvement of 18.96%) and an absolute average improvement of 3.86% against CoT prompting (with the highest improvement of 19.46%). Our study not only highlights the transformative potential of LLMs in coding but also lays the groundwork for future advancements in automated programming and the completion of complex tasks.<|reference_end|>
arxiv
@article{lyu2024large, title={Large Language Models as Code Executors: An Exploratory Study}, author={Chenyang Lyu, Lecheng Yan, Rui Xing, Wenxi Li, Younes Samih, Tianbo Ji, Longyue Wang}, journal={arXiv preprint arXiv:2410.06667}, year={2024}, archivePrefix={arXiv}, eprint={2410.06667}, primaryClass={cs.CL cs.AI} }
lyu2024large
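A rough sketch of what line-by-line Iterative Instruction Prompting (IIP) could look like; `llm` is a stub and the prompt template is invented for illustration, not the paper's actual prompt.

```python
# Rough shape of Iterative Instruction Prompting (IIP) as the abstract
# describes it: process the snippet line by line, carrying the model's own
# account of the program state forward. `llm` is a stub; the prompt wording
# here is invented for illustration, not the paper's template.
def iip_execute(code: str, llm) -> str:
    state = "no variables defined yet"
    for line in code.splitlines():
        state = llm(f"Current program state: {state}\n"
                    f"Execute this line and describe the new state: {line}")
    return llm(f"Given the final state: {state}\nWhat is the program's output?")
```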
arxiv-667417
2410.06670
LS-EEND: Long-Form Streaming End-to-End Neural Diarization with Online Attractor Extraction
<|reference_start|>LS-EEND: Long-Form Streaming End-to-End Neural Diarization with Online Attractor Extraction: This work proposes a frame-wise online/streaming end-to-end neural diarization (EEND) method, which detects speaker activities in a frame-in-frame-out fashion. The proposed model mainly consists of a causal embedding encoder and an online attractor decoder. Speakers are modeled in the self-attention-based decoder along both the time and speaker dimensions, and frame-wise speaker attractors are automatically generated and updated for new speakers and existing speakers, respectively. A retention mechanism is employed and specially adapted for long-form diarization with linear temporal complexity. A multi-step progressive training strategy is proposed for gradually learning from easy tasks to hard tasks in terms of the number of speakers and audio length. Finally, the proposed model (referred to as long-form streaming EEND, LS-EEND) is able to perform streaming diarization for a high (up to 8) and flexible number of speakers and very long (say one hour) audio recordings. Experiments on various simulated and real-world datasets show that: 1) when not using oracle speech activity information, the proposed model achieves new state-of-the-art online diarization error rates on all datasets, including CALLHOME (12.11%), DIHARD II (27.58%), DIHARD III (19.61%), and AMI (20.76%); 2) due to the frame-in-frame-out processing fashion and the linear temporal complexity, the proposed model achieves a real-time factor several times lower than comparable online diarization models.<|reference_end|>
arxiv
@article{liang2024ls-eend:, title={LS-EEND: Long-Form Streaming End-to-End Neural Diarization with Online Attractor Extraction}, author={Di Liang, Xiaofei Li}, journal={arXiv preprint arXiv:2410.06670}, year={2024}, archivePrefix={arXiv}, eprint={2410.06670}, primaryClass={eess.AS cs.SD} }
liang2024ls-eend:
arxiv-667418
2410.06671
GLA-DA: Global-Local Alignment Domain Adaptation for Multivariate Time Series
<|reference_start|>GLA-DA: Global-Local Alignment Domain Adaptation for Multivariate Time Series: Unlike images and natural language tokens, time series data is highly semantically sparse, resulting in labor-intensive label annotations. Unsupervised and Semi-supervised Domain Adaptation (UDA and SSDA) have demonstrated efficiency in addressing this issue by utilizing pre-labeled source data to train on unlabeled or partially labeled target data. However, in domain adaptation methods designed for downstream classification tasks, directly adapting labeled source samples with unlabeled target samples often results in similar distributions across various classes, thereby compromising the performance of the target classification task. To tackle this challenge, we propose a Global-Local Alignment Domain Adaptation (GLA-DA) method for multivariate time series data. Data from the two domains are first encoded adversarially to align in an intermediate feature space, achieving Global Feature Alignment (GFA). Subsequently, GLA-DA leverages the consistency between similarity-based and deep learning-based models to assign pseudo labels to unlabeled target data. This process aims to preserve differences among data with distinct labels by aligning the samples with the same class labels together, achieving Local Class Alignment (LCA). We implemented GLA-DA in both UDA and SSDA scenarios, showcasing its superiority over state-of-the-art methods through extensive experiments on various public datasets. Ablation experiments underscored the significance of key components within GLA-DA.<|reference_end|>
arxiv
@article{tu2024gla-da:, title={GLA-DA: Global-Local Alignment Domain Adaptation for Multivariate Time Series}, author={Gang Tu, Dan Li, Bingxin Lin, Zibin Zheng, See-Kiong Ng}, journal={arXiv preprint arXiv:2410.06671}, year={2024}, archivePrefix={arXiv}, eprint={2410.06671}, primaryClass={cs.LG} }
tu2024gla-da:
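The consistency-based pseudo-labeling can be sketched as follows, under the assumption that the similarity-based model is a nearest-source-prototype classifier: a target sample receives a pseudo label only when both predictors agree.

```python
# Sketch of GLA-DA's consistency-based pseudo-labeling (as read from the
# abstract): keep a pseudo label for an unlabeled target sample only when a
# similarity-based prediction (nearest source-class prototype) agrees with
# the deep classifier's prediction.
import numpy as np

def pseudo_labels(target_feats, prototypes, clf_probs):
    """prototypes: (C, D) source class centroids; clf_probs: (N, C)."""
    sim = target_feats @ prototypes.T          # cosine if rows are L2-normalized
    sim_pred = sim.argmax(1)
    clf_pred = clf_probs.argmax(1)
    mask = sim_pred == clf_pred                # the two models agree
    return np.where(mask, clf_pred, -1)        # -1 = leave unlabeled
```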
arxiv-667419
2410.06672
Towards Universality: Studying Mechanistic Similarity Across Language Model Architectures
<|reference_start|>Towards Universality: Studying Mechanistic Similarity Across Language Model Architectures: The hypothesis of Universality in interpretability suggests that different neural networks may converge to implement similar algorithms on similar tasks. In this work, we investigate two mainstream architectures for language modeling, namely Transformers and Mambas, to explore the extent of their mechanistic similarity. We propose to use Sparse Autoencoders (SAEs) to isolate interpretable features from these models and show that most features are similar in these two models. We also validate the correlation between feature similarity and Universality. We then delve into the circuit-level analysis of Mamba models and find that the induction circuits in Mamba are structurally analogous to those in Transformers. We also identify a nuanced difference, which we call the \emph{Off-by-One motif}: the information of one token is written into the SSM state at its next position, whereas interaction between tokens in Transformers does not exhibit such a trend.<|reference_end|>
arxiv
@article{wang2024towards, title={Towards Universality: Studying Mechanistic Similarity Across Language Model Architectures}, author={Junxuan Wang, Xuyang Ge, Wentao Shu, Qiong Tang, Yunhua Zhou, Zhengfu He, Xipeng Qiu}, journal={arXiv preprint arXiv:2410.06672}, year={2024}, archivePrefix={arXiv}, eprint={2410.06672}, primaryClass={cs.CL} }
wang2024towards
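A minimal version of the cross-model feature comparison: score each SAE feature of one model by its best cosine match among the other model's decoder directions. This is a simplification, offered as an illustration of the idea rather than the paper's actual matching procedure.

```python
# Minimal version of the cross-architecture feature comparison the abstract
# describes: for every SAE feature of model A, find its best-matching feature
# in model B by cosine similarity of decoder directions.
import numpy as np

def feature_similarity(dec_a: np.ndarray, dec_b: np.ndarray) -> np.ndarray:
    """dec_a: (Fa, D), dec_b: (Fb, D) SAE decoder matrices."""
    a = dec_a / np.linalg.norm(dec_a, axis=1, keepdims=True)
    b = dec_b / np.linalg.norm(dec_b, axis=1, keepdims=True)
    return (a @ b.T).max(axis=1)       # best match in B for each feature of A
```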
arxiv-667420
2410.06675
SCOREQ: Speech Quality Assessment with Contrastive Regression
<|reference_start|>SCOREQ: Speech Quality Assessment with Contrastive Regression: In this paper, we present SCOREQ, a novel approach for speech quality prediction. SCOREQ is a triplet loss function for contrastive regression that addresses the domain generalisation shortcoming exhibited by state-of-the-art no-reference speech quality metrics. In the paper we: (i) illustrate the problem of L2 loss training failing to capture the continuous nature of the mean opinion score (MOS) labels; (ii) demonstrate the lack of generalisation through a benchmarking evaluation across several speech domains; (iii) outline our approach and explore the impact of the architectural design decisions through incremental evaluation; (iv) evaluate the final model against state-of-the-art models for a wide variety of data and domains. The results show that the lack of generalisation observed in state-of-the-art speech quality metrics is addressed by SCOREQ. We conclude that using a triplet loss function for contrastive regression improves generalisation for speech quality prediction models and also has potential utility across a wide range of applications using regression-based predictive models.<|reference_end|>
arxiv
@article{ragano2024scoreq:, title={SCOREQ: Speech Quality Assessment with Contrastive Regression}, author={Alessandro Ragano, Jan Skoglund, Andrew Hines}, journal={arXiv preprint arXiv:2410.06675}, year={2024}, archivePrefix={arXiv}, eprint={2410.06675}, primaryClass={cs.SD eess.AS} }
ragano2024scoreq:
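One standard way to adapt a triplet loss to regression labels, offered as an assumption about the general shape of SCOREQ rather than its exact formulation: order embedding distances like MOS distances, with a label-dependent margin.

```python
# One standard way to turn a triplet loss into *contrastive regression* (an
# assumption about SCOREQ's exact form): for anchor a, positive p, negative n
# chosen so that |MOS_a - MOS_p| < |MOS_a - MOS_n|, require the embedding
# distances to respect the same ordering, with a label-dependent margin.
import numpy as np

def triplet_regression_loss(za, zp, zn, mos_a, mos_p, mos_n, scale=0.1):
    margin = scale * (abs(mos_a - mos_n) - abs(mos_a - mos_p))
    d_ap = np.linalg.norm(za - zp)
    d_an = np.linalg.norm(za - zn)
    return max(0.0, d_ap - d_an + margin)   # hinge on the distance ordering
```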
arxiv-667421
2410.06678
M$^3$Bench: Benchmarking Whole-body Motion Generation for Mobile Manipulation in 3D Scenes
<|reference_start|>M$^3$Bench: Benchmarking Whole-body Motion Generation for Mobile Manipulation in 3D Scenes: We propose M^3Bench, a new benchmark for whole-body motion generation for mobile manipulation tasks. Given a 3D scene context, M^3Bench requires an embodied agent to understand its configuration, environmental constraints and task objectives, then generate coordinated whole-body motion trajectories for object rearrangement tasks. M^3Bench features 30k object rearrangement tasks across 119 diverse scenes, providing expert demonstrations generated by our newly developed M^3BenchMaker. This automatic data generation tool produces coordinated whole-body motion trajectories from high-level task instructions, requiring only basic scene and robot information. Our benchmark incorporates various task splits to assess generalization across different dimensions and leverages realistic physics simulation for trajectory evaluation. Through extensive experimental analyses, we reveal that state-of-the-art models still struggle with coordinated base-arm motion while adhering to environment-context and task-specific constraints, highlighting the need to develop new models that address this gap. Through M^3Bench, we aim to facilitate future robotics research towards more adaptive and capable mobile manipulation in diverse, real-world environments.<|reference_end|>
arxiv
@article{zhang2024m3bench:, title={M3Bench: Benchmarking Whole-body Motion Generation for Mobile Manipulation in 3D Scenes}, author={Zeyu Zhang, Sixu Yan, Muzhi Han, Zaijin Wang, Xinggang Wang, Song-Chun Zhu, Hangxin Liu}, journal={arXiv preprint arXiv:2410.06678}, year={2024}, archivePrefix={arXiv}, eprint={2410.06678}, primaryClass={cs.RO cs.AI cs.CV cs.LG} }
zhang2024m3bench:
arxiv-667422
2410.06681
AI, Climate, and Regulation: From Data Centers to the AI Act
<|reference_start|>AI, Climate, and Regulation: From Data Centers to the AI Act: We live in a world that is experiencing an unprecedented boom of AI applications that increasingly penetrate and enhance all sectors of private and public life, from education, media, medicine, and mobility to the industrial and professional workspace, and -- potentially particularly consequentially -- robotics. As this world is simultaneously grappling with climate change, the climate and environmental implications of the development and use of AI have become an important subject of public and academic debate. In this paper, we aim to provide guidance on climate-related regulation for data centers and AI specifically, and discuss how to operationalize these requirements. We also highlight challenges and room for improvement, and make a number of policy proposals to this end. In particular, we propose a specific interpretation of the AI Act to bring reporting on the previously unaddressed energy consumption from AI inferences back into scope. We also find that the AI Act fails to address indirect greenhouse gas emissions from AI applications. Furthermore, for the purpose of energy consumption reporting, we compare levels of measurement within data centers and recommend measurement at the cumulative server level. We also argue for an interpretation of the AI Act that includes environmental concerns in the mandatory risk assessment (sustainability impact assessment, SIA), and provide guidance on its operationalization. The EU data center regulation proves to be a good first step but requires further development by including binding renewable energy and efficiency targets for data centers. Overall, we make twelve concrete policy proposals in four main areas: Energy and Environmental Reporting Obligations; Legal and Regulatory Clarifications; Transparency and Accountability Mechanisms; and Future Far-Reaching Measures beyond Transparency.<|reference_end|>
arxiv
@article{ebert2024ai, title={AI, Climate, and Regulation: From Data Centers to the AI Act}, author={Kai Ebert, Nicolas Alder, Ralf Herbrich, Philipp Hacker}, journal={arXiv preprint arXiv:2410.06681}, year={2024}, archivePrefix={arXiv}, eprint={2410.06681}, primaryClass={cs.CY cs.AI} }
ebert2024ai
arxiv-667423
2410.06682
Enhancing Multimodal LLM for Detailed and Accurate Video Captioning using Multi-Round Preference Optimization
<|reference_start|>Enhancing Multimodal LLM for Detailed and Accurate Video Captioning using Multi-Round Preference Optimization: Videos contain a wealth of information, and generating detailed and accurate descriptions in natural language is a key aspect of video understanding. In this paper, we present video-SALMONN 2, an advanced audio-visual large language model (LLM) with low-rank adaptation (LoRA) designed for enhanced video (with paired audio) captioning through direct preference optimization (DPO). We propose new metrics to evaluate the completeness and accuracy of video descriptions, which are optimized using DPO. To further improve training, we introduce a novel multi-round DPO (mrDPO) approach, which involves periodically updating the DPO reference model, merging and re-initializing the LoRA module as a proxy for parameter updates after each training round (1,000 steps), and incorporating guidance from ground-truth video captions to stabilize the process. To address potential catastrophic forgetting of non-captioning abilities due to mrDPO, we propose rebirth tuning, which finetunes the pre-DPO LLM by using the captions generated by the mrDPO-trained model as supervised labels. Experiments show that mrDPO significantly enhances video-SALMONN 2's captioning accuracy, reducing global and local error rates by 40\% and 20\%, respectively, while decreasing the repetition rate by 35\%. The final video-SALMONN 2 model, with just 7 billion parameters, surpasses leading models such as GPT-4o and Gemini-1.5-Pro in video captioning tasks, while maintaining performance competitive with the state of the art on widely used video question-answering benchmarks among models of similar size. Upon acceptance, we will release the code, model checkpoints, and training and test data. Demos are available at \href{https://video-salmonn-2.github.io}{https://video-salmonn-2.github.io}.<|reference_end|>
arxiv
@article{tang2024enhancing, title={Enhancing Multimodal LLM for Detailed and Accurate Video Captioning using Multi-Round Preference Optimization}, author={Changli Tang, Yixuan Li, Yudong Yang, Jimin Zhuang, Guangzhi Sun, Wei Li, Zujun Ma, Chao Zhang}, journal={arXiv preprint arXiv:2410.06682}, year={2024}, archivePrefix={arXiv}, eprint={2410.06682}, primaryClass={cs.CV cs.CL eess.IV} }
tang2024enhancing
arxiv-667424
2410.06683
Barter Exchange with Bounded Trading Cycles
<|reference_start|>Barter Exchange with Bounded Trading Cycles: Consider a barter exchange problem over a finite set of agents, where each agent owns an item and is also associated with a (privately known) wish list of items belonging to the other agents. An outcome of the problem is a (re)allocation of the items to the agents such that each agent either keeps her own item or receives an item from her (reported) wish list, subject to the constraint that the length of the trading cycles induced by the allocation is upper-bounded by a prespecified length bound $k$. The utility of an agent from an allocation is 1 if she receives an item from her (true) wish list and 0 if she keeps her own item (the agent incurs a large disutility if she receives an item that is neither hers nor belongs to her wish list). In this paper, we investigate the aforementioned barter exchange problem from the perspective of mechanism design without money, aiming for truthful (and individually rational) mechanisms whose objective is to maximize the social welfare. As the construction of a social welfare maximizing allocation is computationally intractable for length bounds $k \geq 3$, this paper focuses on (computationally efficient) truthful mechanisms that approximate the (combinatorially) optimal social welfare. We also study a more general version of the barter exchange problem, where the utility of an agent from participating in a trading cycle of length $2 \leq \ell \leq k$ is $\lambda(\ell)$, where $\lambda$ is a general (monotonically non-increasing) length function. Our results include upper and lower bounds on the guaranteed approximation ratio, expressed in terms of the length bound $k$ and the length function $\lambda$. On the technical side, our main contribution is an algorithmic tool that can be viewed as a truthful version of the local search paradigm. As it turns out, this tool can be applied to more general (bounded size) coalition formation problems.<|reference_end|>
arxiv
@article{emek2024barter, title={Barter Exchange with Bounded Trading Cycles}, author={Yuval Emek and Matan-El Shpiro}, journal={arXiv preprint arXiv:2410.06683}, year={2024}, archivePrefix={arXiv}, eprint={2410.06683}, primaryClass={cs.GT} }
emek2024barter
arxiv-667425
2410.06687
Convergence and superconvergence analysis of discontinuous Galerkin methods for index-2 integral-algebraic equations
<|reference_start|>Convergence and superconvergence analysis of discontinuous Galerkin methods for index-2 integral-algebraic equations: The integral-algebraic equation (IAE) is a mixed system of first-kind and second-kind Volterra integral equations (VIEs). This paper mainly focuses on the discontinuous Galerkin (DG) method to solve index-2 IAEs. First, the convergence theory of perturbed DG methods for first-kind VIEs is established, and then used to derive the optimal convergence properties of DG methods for index-2 IAEs. It is shown that an $(m-1)$-th degree DG approximation exhibits global convergence of order~$m$ when~$m$ is odd, and of order~$m-1$ when~$m$ is even, for the first component~$x_1$ of the exact solution, corresponding to the second-kind VIE, whereas the convergence order is reduced by two for the second component~$x_2$ of the exact solution, corresponding to the first-kind VIE. Each component also exhibits local superconvergence of one order higher when~$m$ is even. When~$m$ is odd, superconvergence occurs only if $x_1$ satisfies $x_1^{(m)}(0)=0$. Moreover, with this condition, we can extend the local superconvergence result for~$x_2$ to global superconvergence when~$m$ is odd. Note that in the DG method for an index-1 IAE, generally, the global superconvergence of the exact solution component corresponding to the second-kind VIE can only be obtained by iteration. However, we can get superconvergence for all components of the exact solution of the index-2 IAE directly. Some numerical experiments are given to illustrate the obtained theoretical results.<|reference_end|>
arxiv
@article{gao2024convergence, title={Convergence and superconvergence analysis of discontinuous Galerkin methods for index-2 integral-algebraic equations}, author={Hecong Gao and Hui Liang}, journal={arXiv preprint arXiv:2410.06687}, year={2024}, archivePrefix={arXiv}, eprint={2410.06687}, primaryClass={math.NA cs.NA} }
gao2024convergence
arxiv-667426
2410.06688
Non-overshooting output shaping for switched linear systems under arbitrary switching using eigenstructure assignment
<|reference_start|>Non-overshooting output shaping for switched linear systems under arbitrary switching using eigenstructure assignment: We consider the analytical control design for a pair of switched linear multiple-input multiple-output (MIMO) systems that are subject to arbitrary switching signals. A state feedback controller design method is proposed to obtain an eigenstructure assignment that ensures that the closed-loop switched system is globally asymptotically stable, and the outputs achieve the non-overshooting tracking of a step reference. Our analysis indicates whether non-overshooting or even monotonic tracking is achievable for the given system and considered outputs and provides a choice of possible eigenstructures to be assigned to the constituent subsystems. We derive a structural condition that verifies the feasibility of the chosen assignment. A constructive algorithm to obtain suitable feedback matrices is provided, and the method is illustrated with numerical examples.<|reference_end|>
arxiv
@article{wulff2024non-overshooting, title={Non-overshooting output shaping for switched linear systems under arbitrary switching using eigenstructure assignment}, author={Kai Wulff, Maria Christine Honecker, Robert Schmid and Johann Reger}, journal={arXiv preprint arXiv:2410.06688}, year={2024}, archivePrefix={arXiv}, eprint={2410.06688}, primaryClass={eess.SY cs.SY} }
wulff2024non-overshooting
arxiv-667427
2410.06689
Perceptual Quality Assessment of Trisoup-Lifting Encoded 3D Point Clouds
<|reference_start|>Perceptual Quality Assessment of Trisoup-Lifting Encoded 3D Point Clouds: No-reference bitstream-layer point cloud quality assessment (PCQA) can be deployed without full decoding at any network node to achieve real-time quality monitoring. In this work, we develop the first PCQA model dedicated to Trisoup-Lifting encoded 3D point clouds by analyzing bitstreams without full decoding. Specifically, we investigate the relationship among texture bitrate per point (TBPP), texture complexity (TC) and texture quantization parameter (TQP) while geometry encoding is lossless. Subsequently, we estimate TC by utilizing TQP and TBPP. Then, we establish a texture distortion evaluation model based on TC, TBPP and TQP. Ultimately, by integrating this texture distortion model with a geometry attenuation factor, a function of trisoupNodeSizeLog2 (tNSL), we acquire a comprehensive NR bitstream-layer PCQA model named streamPCQ-TL. In addition, this work establishes a database named WPC6.0, the first and largest PCQA database dedicated to the Trisoup-Lifting encoding mode, encompassing 400 distorted point clouds covering 4 geometry distortion levels combined with 5 texture distortion levels. Experimental results on M-PCCD, ICIP2020 and the proposed WPC6.0 database suggest that the proposed streamPCQ-TL model exhibits robust and notable performance compared with existing advanced PCQA metrics, particularly in terms of computational cost. The dataset and source code will be publicly released at \href{https://github.com/qdushl/Waterloo-Point-Cloud-Database-6.0}{\textit{https://github.com/qdushl/Waterloo-Point-Cloud-Database-6.0}}<|reference_end|>
arxiv
@article{long2024perceptual, title={Perceptual Quality Assessment of Trisoup-Lifting Encoded 3D Point Clouds}, author={Juncheng Long, Honglei Su, Qi Liu, Hui Yuan, Wei Gao, Jiarun Song and Zhou Wang}, journal={arXiv preprint arXiv:2410.06689}, year={2024}, archivePrefix={arXiv}, eprint={2410.06689}, primaryClass={cs.CV eess.IV} }
long2024perceptual
arxiv-667428
2410.06692
How hard can it be? Quantifying MITRE attack campaigns with attack trees and cATM logic
<|reference_start|>How hard can it be? Quantifying MITRE attack campaigns with attack trees and cATM logic: The landscape of cyber threats grows more complex by the day. Advanced Persistent Threats carry out systematic attack campaigns against which cybersecurity practitioners must defend. Examples of such organized attacks are operations Dream Job, Wocao, WannaCry or the SolarWinds Compromise. To evaluate which risks are most threatening, and which campaigns to prioritize against when defending, cybersecurity experts must be equipped with the right toolbox. In particular, they must be able to (a) obtain likelihood values for each attack campaign recorded in the wild and (b) reliably and transparently operationalize these values to carry out quantitative comparisons among campaigns. This will allow security experts to perform quantitatively-informed decision making that is transparent and accountable. In this paper we construct such a framework by: (1) quantifying the likelihood of attack campaigns via data-driven procedures on the MITRE knowledge base and (2) introducing a methodology for automatic modelling of MITRE intelligence data: this is complete in the sense that it captures any attack campaign via template attack tree models. (3) We further propose a computational framework to carry out these comparisons based on the cATM formal logic, and implement it in an open-source Python tool. Finally, we validate our approach by quantifying the likelihood of all MITRE campaigns, and comparing the likelihood of the Wocao and Dream Job MITRE campaigns -- generated with our proposed approach -- against "ad hoc" traditionally-built attack tree models, demonstrating how our methodology is substantially lighter in modelling effort, and still capable of capturing all the relevant quantitative data.<|reference_end|>
arxiv
@article{nicoletti2024how, title={How hard can it be? Quantifying MITRE attack campaigns with attack trees and cATM logic}, author={Stefano M. Nicoletti, Milan Lopuhaä-Zwakenberg, Mariëlle Stoelinga, Fabio Massacci, Carlos E. Budde}, journal={arXiv preprint arXiv:2410.06692}, year={2024}, archivePrefix={arXiv}, eprint={2410.06692}, primaryClass={cs.CR cs.LO} }
nicoletti2024how
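Once each MITRE technique carries a data-driven likelihood, a template attack tree can be scored with the usual recursion; the independence of leaves assumed below is a simplification (the paper works in the cATM logic).

```python
# Once each leaf (MITRE technique) carries a data-driven likelihood, a
# template attack tree is scored by the usual recursion, assuming
# independence of leaves: AND = prod(p_i), OR = 1 - prod(1 - p_i).
from math import prod

def likelihood(node) -> float:
    if "p" in node:                                   # leaf technique
        return node["p"]
    ps = [likelihood(c) for c in node["children"]]
    return prod(ps) if node["gate"] == "AND" else 1 - prod(1 - p for p in ps)

campaign = {"gate": "AND", "children": [
    {"p": 0.6},                                            # initial access
    {"gate": "OR", "children": [{"p": 0.3}, {"p": 0.5}]},  # lateral movement
]}
print(likelihood(campaign))                           # 0.6 * (1 - 0.7*0.5) = 0.39
```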
arxiv-667429
2410.06693
Autonomous localization of multiple ionizing radiation sources using miniature single-layer Compton cameras onboard a group of micro aerial vehicles
<|reference_start|>Autonomous localization of multiple ionizing radiation sources using miniature single-layer Compton cameras onboard a group of micro aerial vehicles: A novel method for autonomous localization of multiple sources of gamma radiation using a group of Micro Aerial Vehicles (MAVs) is presented in this paper. The method utilizes an extremely lightweight (44 g) Compton camera MiniPIX TPX3. The compact size of the detector allows for deployment onboard safe and agile small-scale Unmanned Aerial Vehicles (UAVs). The proposed radiation mapping approach fuses measurements from multiple distributed Compton camera sensors to accurately estimate the positions of multiple radioactive sources in real time. Unlike commonly used intensity-based detectors, the Compton camera reconstructs the set of possible directions towards a radiation source from just a single ionizing particle. Therefore, the proposed approach can localize radiation sources without having to estimate the gradient of a radiation field or contour lines, which require longer measurements. The instant estimation is able to fully exploit the potential of highly mobile MAVs. The radiation mapping method is combined with an active search strategy, which coordinates the future actions of the MAVs in order to improve the quality of the estimate of the sources' positions, as well as to explore the area of interest faster. The proposed solution is evaluated in simulation and in real-world experiments with multiple Cesium-137 radiation sources.<|reference_end|>
arxiv
@article{werner2024autonomous, title={Autonomous localization of multiple ionizing radiation sources using miniature single-layer Compton cameras onboard a group of micro aerial vehicles}, author={Michal Werner, Tomáš Báča, Petr Štibinger, Daniela Doubravová, Jaroslav Šolc, Jan Rusňák, Martin Saska}, journal={arXiv preprint arXiv:2410.06693}, year={2024}, archivePrefix={arXiv}, eprint={2410.06693}, primaryClass={cs.RO} }
werner2024autonomous
arxiv-667430
2410.06694
OmniPose6D: Towards Short-Term Object Pose Tracking in Dynamic Scenes from Monocular RGB
<|reference_start|>OmniPose6D: Towards Short-Term Object Pose Tracking in Dynamic Scenes from Monocular RGB: To address the challenge of short-term object pose tracking in dynamic environments with monocular RGB input, we introduce a large-scale synthetic dataset OmniPose6D, crafted to mirror the diversity of real-world conditions. We additionally present a benchmarking framework for a comprehensive comparison of pose tracking algorithms. We propose a pipeline featuring an uncertainty-aware keypoint refinement network, employing probabilistic modeling to refine pose estimation. Comparative evaluations demonstrate that our approach achieves performance superior to existing baselines on real datasets, underscoring the effectiveness of our synthetic dataset and refinement technique in enhancing tracking precision in dynamic contexts. Our contributions set a new precedent for the development and assessment of object pose tracking methodologies in complex scenes.<|reference_end|>
arxiv
@article{lin2024omnipose6d:, title={OmniPose6D: Towards Short-Term Object Pose Tracking in Dynamic Scenes from Monocular RGB}, author={Yunzhi Lin, Yipu Zhao, Fu-Jen Chu, Xingyu Chen, Weiyao Wang, Hao Tang, Patricio A. Vela, Matt Feiszli, Kevin Liang}, journal={arXiv preprint arXiv:2410.06694}, year={2024}, archivePrefix={arXiv}, eprint={2410.06694}, primaryClass={cs.CV cs.RO} }
lin2024omnipose6d:
arxiv-667431
2410.06695
Energy Efficient Scheduling for Serverless Systems
<|reference_start|>Energy Efficient Scheduling for Serverless Systems: Serverless computing, also referred to as Function-as-a-Service (FaaS), is a cloud computing model that has attracted significant attention and has been widely adopted in recent years. The serverless computing model offers an intuitive, event-based interface that makes the development and deployment of scalable cloud-based applications easier and cost-effective. An important aspect that has not been examined in these systems is their energy consumption during application execution. One way to deal with this issue is to schedule the function invocations in an energy-efficient way. However, efficient scheduling of applications in a multi-tenant environment, like FaaS systems, poses significant challenges. The trade-off between the server's energy usage and the hosted functions' performance requirements needs to be taken into consideration. In this work, we propose an Energy Efficient Scheduler for orchestrating the execution of serverless functions so as to minimize energy consumption while satisfying the applications' performance demands. Our approach considers real-time performance measurements and historical data and applies a novel DVFS technique to minimize energy consumption. Our detailed experimental evaluation using realistic workloads on our local cluster demonstrates the operation and benefits of our approach.<|reference_end|>
arxiv
@article{tsenos2024energy, title={Energy Efficient Scheduling for Serverless Systems}, author={Michail Tsenos, Aristotelis Peri, Vana Kalogeraki}, journal={arXiv preprint arXiv:2410.06695}, year={2024}, archivePrefix={arXiv}, eprint={2410.06695}, primaryClass={cs.DC} }
tsenos2024energy
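To make the DVFS trade-off concrete: lowering the CPU frequency reduces power draw but lengthens execution, so the core scheduling decision is to pick the lowest-energy frequency that still meets the function's deadline. A minimal sketch with illustrative power numbers (not the paper's scheduler or measurements):

```python
def pick_frequency(cycles, deadline_s, freqs_ghz, power_w):
    """Return the (frequency, time, energy) triple with minimal energy
    among frequencies that meet the deadline; energy = power * time,
    time = cycles / frequency. All numbers below are illustrative."""
    best = None
    for f, p in zip(freqs_ghz, power_w):
        t = cycles / (f * 1e9)
        if t <= deadline_s:
            e = p * t
            if best is None or e < best[2]:
                best = (f, t, e)
    return best  # None means no frequency can meet the deadline

# 2e9 cycles, 1.5 s deadline: 1 GHz misses the deadline, 3 GHz wastes
# energy, so the 2 GHz setting (10 J) is selected.
print(pick_frequency(2e9, 1.5, [1.0, 2.0, 3.0], [4.0, 10.0, 22.0]))
```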
arxiv-667432
2410.06698
Fourier-based Action Recognition for Wildlife Behavior Quantification with Event Cameras
<|reference_start|>Fourier-based Action Recognition for Wildlife Behavior Quantification with Event Cameras: Event cameras are novel bio-inspired vision sensors that asynchronously measure pixel-wise brightness changes instead of capturing images at a fixed frame rate. They offer promising advantages, namely a high dynamic range, low latency, and minimal motion blur. Modern computer vision algorithms often rely on artificial neural network approaches, which require image-like representations of the data and cannot fully exploit the characteristics of event data. We propose approaches to action recognition based on the Fourier Transform. The approaches are intended to recognize oscillating motion patterns commonly present in nature. In particular, we apply our approaches to a recent dataset of breeding penguins annotated for "ecstatic display", a behavior where the observed penguins flap their wings at a certain frequency. We find that our approaches are both simple and effective, producing slightly lower results than a deep neural network (DNN) while relying on only a tiny fraction of its parameters (five orders of magnitude fewer). They work well despite the uncontrolled, diverse data present in the dataset. We hope this work opens a new perspective on event-based processing and action recognition.<|reference_end|>
arxiv
@article{hamann2024fourier-based, title={Fourier-based Action Recognition for Wildlife Behavior Quantification with Event Cameras}, author={Friedhelm Hamann, Suman Ghosh, Ignacio Juarez Martinez, Tom Hart, Alex Kacelnik, Guillermo Gallego}, journal={arXiv preprint arXiv:2410.06698}, year={2024}, archivePrefix={arXiv}, eprint={2410.06698}, primaryClass={cs.CV cs.ET} }
hamann2024fourier-based
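The Fourier-based idea is simple enough to sketch: bin the event stream into an activity signal, take its FFT, and look for energy at the oscillation frequency of interest (such as wing flapping). A minimal numpy version; the sampling rate and the synthetic signal are assumptions, not the paper's settings:

```python
import numpy as np

def dominant_frequency(event_times_s, duration_s, fs_hz=1000.0):
    """Histogram event timestamps into an activity signal, remove the
    DC component, and return the frequency (Hz) of the FFT peak."""
    bins = np.arange(0.0, duration_s + 1.0 / fs_hz, 1.0 / fs_hz)
    rate, _ = np.histogram(np.asarray(event_times_s), bins=bins)
    rate = rate - rate.mean()
    spectrum = np.abs(np.fft.rfft(rate))
    freqs = np.fft.rfftfreq(len(rate), d=1.0 / fs_hz)
    return freqs[1 + np.argmax(spectrum[1:])]  # skip the zero bin

# Synthetic 4 Hz "flapping": events cluster near the peaks of a sinusoid.
t = np.linspace(0, 5, 20000)
events = t[np.sin(2 * np.pi * 4.0 * t) > 0.9]
print(dominant_frequency(events, 5.0))  # prints ~4.0
```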
arxiv-667433
2410.06699
Break the Visual Perception: Adversarial Attacks Targeting Encoded Visual Tokens of Large Vision-Language Models
<|reference_start|>Break the Visual Perception: Adversarial Attacks Targeting Encoded Visual Tokens of Large Vision-Language Models: Large vision-language models (LVLMs) integrate visual information into large language models, showcasing remarkable multi-modal conversational capabilities. However, the visual module introduces new challenges in terms of robustness for LVLMs, as attackers can craft adversarial images that are visually clean but may mislead the model to generate incorrect answers. In general, LVLMs rely on vision encoders to transform images into visual tokens, which are crucial for the language models to perceive image contents effectively. Therefore, we are curious about one question: Can LVLMs still generate correct responses when the encoded visual tokens are attacked, disrupting the visual information? To this end, we propose a non-targeted attack method referred to as VT-Attack (Visual Tokens Attack), which constructs adversarial examples from multiple perspectives, with the goal of comprehensively disrupting feature representations and inherent relationships as well as the semantic properties of visual tokens output by image encoders. Requiring access only to the image encoder, the proposed attack generates adversarial examples that exhibit transferability across diverse LVLMs utilizing the same image encoder and generality across different tasks. Extensive experiments validate the superior attack performance of the VT-Attack over baseline methods, demonstrating its effectiveness in attacking LVLMs with image encoders, which in turn can provide guidance on the robustness of LVLMs, particularly in terms of the stability of the visual feature space.<|reference_end|>
arxiv
@article{wang2024break, title={Break the Visual Perception: Adversarial Attacks Targeting Encoded Visual Tokens of Large Vision-Language Models}, author={Yubo Wang, Chaohu Liu, Yanqiu Qu, Haoyu Cao, Deqiang Jiang, Linli Xu}, journal={arXiv preprint arXiv:2410.06699}, year={2024}, archivePrefix={arXiv}, eprint={2410.06699}, primaryClass={cs.CV cs.AI cs.LG} }
wang2024break
arxiv-667434
2410.06700
Optimizing Integrated Terrestrial and Non-Terrestrial Networks Performance with Traffic-Aware Resource Management
<|reference_start|>Optimizing Integrated Terrestrial and Non-Terrestrial Networks Performance with Traffic-Aware Resource Management: To address an ever-increasing demand for ubiquitous high-speed connectivity, mobile networks have intensified their deployment process. However, achieving this target has proven to be a challenge and has led to a surge in overall energy consumption. In recent years, non-terrestrial networks (NTNs) have been endorsed as a potential solution to these problems by complementing the coverage of the terrestrial network in areas with limited network deployment. To this end, this paper proposes an integrated terrestrial and non-terrestrial network (TN-NTN) that utilises the overall available communication resources to expand coverage and meet Quality of Service (QoS) requirements during high-traffic hours in any deployment scenario. Importantly, our framework allows to drastically reduce the terrestrial network energy consumption during low-traffic hours. Specifically, we introduce a novel radio resource management algorithm, BLASTER (Bandwidth SpLit, User ASsociation, and PowEr ContRol), which integrates bandwidth allocation, user equipment (UE) association, power control, and base station activation within the TN-NTN. This algorithm aims to optimize network resource allocation fairness and energy consumption dynamically, demonstrating new opportunities in deploying satellite networks in legacy cellular systems. Our study offers a comprehensive analysis of the integrated network model, emphasizing the effective balance between energy saving and QoS, and proposing practical solutions to meet the fluctuating traffic demands of cellular networks.<|reference_end|>
arxiv
@article{alam2024optimizing, title={Optimizing Integrated Terrestrial and Non-Terrestrial Networks Performance with Traffic-Aware Resource Management}, author={Henri Alam, Antonio de Domenico, David L\'opez-P\'erez, Florian Kaltenberger}, journal={arXiv preprint arXiv:2410.06700}, year={2024}, archivePrefix={arXiv}, eprint={2410.06700}, primaryClass={cs.NI} }
alam2024optimizing
arxiv-667435
2410.06703
ST-WebAgentBench: A Benchmark for Evaluating Safety and Trustworthiness in Web Agents
<|reference_start|>ST-WebAgentBench: A Benchmark for Evaluating Safety and Trustworthiness in Web Agents: Recent advancements in LLM-based web agents have introduced novel architectures and benchmarks showcasing progress in autonomous web navigation and interaction. However, most existing benchmarks prioritize effectiveness and accuracy, overlooking crucial factors like safety and trustworthiness which are essential for deploying web agents in enterprise settings. The risks of unsafe web agent behavior, such as accidentally deleting user accounts or performing unintended actions in critical business operations, pose significant barriers to widespread adoption. In this paper, we present ST-WebAgentBench, a new online benchmark specifically designed to evaluate the safety and trustworthiness of web agents in enterprise contexts. This benchmark is grounded in a detailed framework that defines safe and trustworthy (ST) agent behavior, outlines how ST policies should be structured and introduces the Completion under Policies metric to assess agent performance. Our evaluation reveals that current SOTA agents struggle with policy adherence and cannot yet be relied upon for critical business applications. Additionally, we propose architectural principles aimed at improving policy awareness and compliance in web agents. We open-source this benchmark and invite the community to contribute, with the goal of fostering a new generation of safer, more trustworthy AI agents. All code, data, environment reproduction resources, and video demonstrations are available at https://sites.google.com/view/st-webagentbench/home.<|reference_end|>
arxiv
@article{levy2024st-webagentbench:, title={ST-WebAgentBench: A Benchmark for Evaluating Safety and Trustworthiness in Web Agents}, author={Ido Levy, Ben Wiesel, Sami Marreed, Alon Oved, Avi Yaeli, Segev Shlomov}, journal={arXiv preprint arXiv:2410.06703}, year={2024}, archivePrefix={arXiv}, eprint={2410.06703}, primaryClass={cs.AI} }
levy2024st-webagentbench:
arxiv-667436
2410.06704
PII-Scope: A Benchmark for Training Data PII Leakage Assessment in LLMs
<|reference_start|>PII-Scope: A Benchmark for Training Data PII Leakage Assessment in LLMs: In this work, we introduce PII-Scope, a comprehensive benchmark designed to evaluate state-of-the-art methodologies for PII extraction attacks targeting LLMs across diverse threat settings. Our study provides a deeper understanding of these attacks by uncovering several hyperparameters (e.g., demonstration selection) crucial to their effectiveness. Building on this understanding, we extend our study to more realistic attack scenarios, exploring PII attacks that employ advanced adversarial strategies, including repeated and diverse querying, and leveraging iterative learning for continual PII extraction. Through extensive experimentation, our results reveal a notable underestimation of PII leakage in existing single-query attacks. In fact, we show that with sophisticated adversarial capabilities and a limited query budget, PII extraction rates can increase by up to fivefold when targeting the pretrained model. Moreover, we evaluate PII leakage on finetuned models, showing that they are more vulnerable to leakage than pretrained models. Overall, our work establishes a rigorous empirical benchmark for PII extraction attacks in realistic threat scenarios and provides a strong foundation for developing effective mitigation strategies.<|reference_end|>
arxiv
@article{nakka2024pii-scope:, title={PII-Scope: A Benchmark for Training Data PII Leakage Assessment in LLMs}, author={Krishna Kanth Nakka, Ahmed Frikha, Ricardo Mendes, Xue Jiang, Xuebing Zhou}, journal={arXiv preprint arXiv:2410.06704}, year={2024}, archivePrefix={arXiv}, eprint={2410.06704}, primaryClass={cs.CL cs.AI cs.LG} }
nakka2024pii-scope:
arxiv-667437
2410.06705
MERGE: Matching Electronic Results with Genuine Evidence for verifiable voting in person at remote locations
<|reference_start|>MERGE: Matching Electronic Results with Genuine Evidence for verifiable voting in person at remote locations: Overseas military personnel often face significant challenges in participating in elections due to the slow pace of traditional mail systems, which can result in ballots missing crucial deadlines. While internet-based voting offers a faster alternative, it introduces serious risks to the integrity and privacy of the voting process. We introduce the MERGE protocol to address these issues by combining the speed of electronic ballot delivery with the reliability of paper returns. This protocol allows voters to submit an electronic record of their vote quickly while simultaneously mailing a paper ballot for verification. The electronic record can be used for preliminary results, but the paper ballot is used in a Risk Limiting Audit (RLA) if received in time, ensuring the integrity of the election. This approach extends the time window for ballot arrival without undermining the security and accuracy of the vote count.<|reference_end|>
arxiv
@article{adida2024merge:, title={MERGE: Matching Electronic Results with Genuine Evidence for verifiable voting in person at remote locations}, author={Ben Adida, John Caron, Arash Mirzaei, Vanessa Teague}, journal={arXiv preprint arXiv:2410.06705}, year={2024}, archivePrefix={arXiv}, eprint={2410.06705}, primaryClass={cs.CR} }
adida2024merge:
arxiv-667438
2410.06707
Calibrating Verbalized Probabilities for Large Language Models
<|reference_start|>Calibrating Verbalized Probabilities for Large Language Models: Calibrating verbalized probabilities presents a novel approach for reliably assessing and leveraging outputs from black-box Large Language Models (LLMs). Recent methods have demonstrated improved calibration by applying techniques like Platt scaling or temperature scaling to the confidence scores generated by LLMs. In this paper, we explore the calibration of verbalized probability distributions for discriminative tasks. First, we investigate the capability of LLMs to generate probability distributions over categorical labels. We theoretically and empirically identify the issue of re-softmax arising from the scaling of verbalized probabilities, and propose using the invert softmax trick to approximate the "logit" by inverting verbalized probabilities. Through extensive evaluation on three public datasets, we demonstrate: (1) the robust capability of LLMs in generating class distributions, and (2) the effectiveness of the invert softmax trick in estimating logits, which, in turn, facilitates post-calibration adjustments.<|reference_end|>
arxiv
@article{wang2024calibrating, title={Calibrating Verbalized Probabilities for Large Language Models}, author={Cheng Wang, Gyuri Szarvas, Georges Balazs, Pavel Danchenko, Patrick Ernst}, journal={arXiv preprint arXiv:2410.06707}, year={2024}, archivePrefix={arXiv}, eprint={2410.06707}, primaryClass={cs.CL cs.AI} }
wang2024calibrating
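The "invert softmax trick" can be made concrete: since softmax(log p) = p, taking the log of verbalized probabilities recovers the logits up to an additive constant, after which standard temperature scaling can be applied. A minimal sketch (the clipping constant and function names are assumptions, not the paper's code):

```python
import numpy as np

def invert_softmax(probs, eps=1e-9):
    """Approximate logits from verbalized probabilities; valid up to an
    additive constant because softmax is shift-invariant."""
    return np.log(np.clip(probs, eps, 1.0))

def temperature_scale(probs, T):
    """Calibrate by rescaling the recovered logits, then re-normalize."""
    z = invert_softmax(probs) / T
    e = np.exp(z - z.max())
    return e / e.sum()

p = np.array([0.7, 0.2, 0.1])
print(temperature_scale(p, 1.0))  # T=1 recovers p itself
print(temperature_scale(p, 2.0))  # flattened (less confident) distribution
```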
arxiv-667439
2410.06708
Do Developers Adopt Green Architectural Tactics for ML-Enabled Systems? A Mining Software Repository Study
<|reference_start|>Do Developers Adopt Green Architectural Tactics for ML-Enabled Systems? A Mining Software Repository Study: As machine learning (ML) and artificial intelligence (AI) technologies become increasingly prevalent in society, concerns about their environmental sustainability have grown. Developing and deploying ML-enabled systems, especially during training and inference, are resource-intensive, raising sustainability issues. Green AI has emerged as a response, advocating for reducing the computational demands of AI while maintaining accuracy. While recent research has identified various green tactics for developing sustainable ML-enabled systems, there is a gap in understanding the extent to which these tactics are adopted in real-world projects and whether additional, undocumented practices exist. This paper addresses these gaps by presenting a mining software repository study that evaluates the adoption of green tactics in 168 open-source ML projects on GitHub. In doing so, we introduce a novel mining mechanism based on large language models to identify and analyze green tactics within software repositories. Our results provide insights into the adoption of green tactics found in the literature and expand previous catalogs by providing 12 new tactics, with code examples to support wider implementation. This study contributes to the development of more sustainable ML systems by identifying adopted green tactics that offer substantial environmental benefits with minimal implementation effort. It provides practical insights for developers to green their systems and offers a path for future research to automate the integration of these tactics.<|reference_end|>
arxiv
@article{de martino2024do, title={Do Developers Adopt Green Architectural Tactics for ML-Enabled Systems? A Mining Software Repository Study}, author={Vincenzo De Martino, Silverio Mart\'inez-Fern\'andez, Fabio Palomba}, journal={arXiv preprint arXiv:2410.06708}, year={2024}, archivePrefix={arXiv}, eprint={2410.06708}, primaryClass={cs.SE} }
de martino2024do
arxiv-667440
2410.06711
Analysis of different disparity estimation techniques on aerial stereo image datasets
<|reference_start|>Analysis of different disparity estimation techniques on aerial stereo image datasets: With the advent of aerial image datasets, dense stereo matching has gained tremendous progress. This work analyses dense stereo correspondence on aerial images using different techniques. Traditional methods, optimization-based methods, and learning-based methods have been implemented and compared here for aerial images. For traditional methods, we implemented the architecture of Stereo SGBM while using different cost functions to get an understanding of their performance on aerial datasets. Analysis of most of the methods on standard datasets has shown good performance; however, in the case of aerial datasets, little benchmarking is available. Qualitative and quantitative analyses have been carried out for two aerial stereo datasets in order to compare different cost functions and techniques for the purpose of depth estimation from stereo images. Using existing pre-trained models, recent learning-based architectures have also been tested on stereo pairs along with different cost functions in SGBM. The outputs and the given ground truth are compared using MSE, SSIM, and other error metrics.<|reference_end|>
arxiv
@article{narayan2024analysis, title={Analysis of different disparity estimation techniques on aerial stereo image datasets}, author={Ishan Narayan, Shashi Poddar}, journal={arXiv preprint arXiv:2410.06711}, year={2024}, archivePrefix={arXiv}, eprint={2410.06711}, primaryClass={cs.CV cs.LG} }
narayan2024analysis
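For reference, the kind of Stereo SGBM baseline described above can be set up in a few lines of OpenCV. The parameter values and file names here are illustrative, and swapping the internal matching cost as the study does requires going beyond this stock API:

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # example file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block = 5
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,          # must be divisible by 16
    blockSize=block,
    P1=8 * block * block,        # penalty for small disparity changes
    P2=32 * block * block,       # penalty for large disparity changes
    uniquenessRatio=10,
)
# OpenCV returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype("float32") / 16.0
```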
arxiv-667441
2410.06713
SHRINK: Data Compression by Semantic Extraction and Residuals Encoding
<|reference_start|>SHRINK: Data Compression by Semantic Extraction and Residuals Encoding: The distributed data infrastructure in Internet of Things (IoT) ecosystems requires efficient data-series compression methods, along with the ability to serve different accuracy demands. However, the compression performance of existing compression methods degrades sharply when calling for ultra-accurate data recovery. In this paper, we introduce SHRINK, a novel highly accurate data compression method that offers a higher compression ratio and a lower runtime than prior compressors. SHRINK extracts data semantics in the form of linear segments to construct a compact knowledge base, using a dynamic error threshold that it adapts to data characteristics. Then, it captures the remaining data details as residuals to support lossy compression at diverse resolutions as well as lossless compression. As SHRINK identifies repeated semantics, its compression ratio increases with data size. Our experimental evaluation demonstrates that SHRINK outperforms state-of-the-art methods with up to a threefold improvement in compression ratio.<|reference_end|>
arxiv
@article{sun2024shrink:, title={SHRINK: Data Compression by Semantic Extraction and Residuals Encoding}, author={Guoyou Sun, Panagiotis Karras, Qi Zhang}, journal={arXiv preprint arXiv:2410.06713}, year={2024}, archivePrefix={arXiv}, eprint={2410.06713}, primaryClass={cs.DC} }
sun2024shrink:
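The segments-plus-residuals decomposition at the heart of the method can be sketched as follows: greedily grow linear segments while they stay within an error threshold, and keep the leftover per-point differences as residuals (lossless if the residuals are stored exactly, lossy if they are quantized). This sketch omits SHRINK's dynamic threshold and knowledge-base reuse:

```python
import numpy as np

def segments_and_residuals(x, err):
    """Greedy piecewise-linear approximation of a 1-D series `x`.
    Returns (start, end, slope, intercept) tuples and the residuals."""
    x = np.asarray(x, dtype=float)
    segments, residuals, i = [], [], 0
    while i < len(x):
        j = i + 1
        while j < len(x):
            t = np.arange(i, j + 1)
            a, b = np.polyfit(t, x[i:j + 1], 1)
            if np.max(np.abs(a * t + b - x[i:j + 1])) > err:
                break
            j += 1
        t = np.arange(i, j)
        a, b = np.polyfit(t, x[i:j], 1) if j - i > 1 else (0.0, x[i])
        segments.append((i, j, a, b))
        residuals.extend(x[i:j] - (a * t + b))
        i = j
    return segments, np.array(residuals)

sine = np.sin(np.linspace(0, 6, 200))
segs, res = segments_and_residuals(sine, err=0.05)
print(len(segs), float(np.max(np.abs(res))))  # few segments, small residuals
```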
arxiv-667442
2410.06715
FRESCO: Fast and Reliable Edge Offloading with Reputation-based Hybrid Smart Contracts
<|reference_start|>FRESCO: Fast and Reliable Edge Offloading with Reputation-based Hybrid Smart Contracts: Mobile devices offload latency-sensitive application tasks to edge servers to satisfy applications' Quality of Service (QoS) deadlines. Consequently, ensuring reliable offloading without QoS violations is challenging in distributed and unreliable edge environments. However, current edge offloading solutions are either centralized or do not adequately address challenges in distributed environments. We propose FRESCO, a fast and reliable edge offloading framework that utilizes a blockchain-based reputation system, which enhances the reliability of offloading in the distributed edge. The distributed reputation system tracks the historical performance of edge servers, while blockchain through a consensus mechanism ensures that sensitive reputation information is secured against tampering. However, blockchain consensus typically has high latency, and therefore we employ a Hybrid Smart Contract (HSC) that automatically computes and stores reputation securely on-chain (i.e., on the blockchain) while allowing fast offloading decisions off-chain (i.e., outside of blockchain). The offloading decision engine uses a reputation score to derive fast offloading decisions, which are based on Satisfiability Modulo Theory (SMT). The SMT models edge resource constraints, and QoS deadlines, and can formally guarantee a feasible solution that is valuable for latency-sensitive applications that require high reliability. With a combination of on-chain HSC reputation state management and an off-chain SMT decision engine, FRESCO offloads tasks to reliable servers without being hindered by blockchain consensus. We evaluate FRESCO against real availability traces and simulated applications. FRESCO reduces response time by up to 7.86 times and saves energy by up to 5.4% compared to all baselines while minimizing QoS violations to 0.4% and achieving an average decision time of 5.05 milliseconds.<|reference_end|>
arxiv
@article{zilic2024fresco:, title={FRESCO: Fast and Reliable Edge Offloading with Reputation-based Hybrid Smart Contracts}, author={Josip Zilic, Vincenzo de Maio, Shashikant Ilager, Ivona Brandic}, journal={arXiv preprint arXiv:2410.06715}, year={2024}, archivePrefix={arXiv}, eprint={2410.06715}, primaryClass={cs.DC} }
zilic2024fresco:
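The SMT side of the decision engine can be illustrated with a toy feasibility query in z3: encode the candidate servers, the QoS deadline, and the capacity constraints, then let the solver pick a server or prove that none fits. The encoding and numbers below are illustrative assumptions, not FRESCO's actual model:

```python
from z3 import Bool, Real, Solver, Implies, Not, Or, is_true, sat

def pick_server(cpu_need, deadline_ms, servers):
    """servers: name -> (network_rtt_ms, processing_ms, free_cpu_cores).
    Returns a server whose selection satisfies every constraint, or None."""
    s = Solver()
    t = Real("t")                              # end-to-end response time
    pick = {name: Bool(f"use_{name}") for name in servers}
    s.add(Or(*pick.values()))                  # the task must go somewhere
    for name, (rtt, proc, free) in servers.items():
        s.add(Implies(pick[name], t == rtt + proc))
        if cpu_need > free:                    # capacity constraint
            s.add(Not(pick[name]))
    s.add(t <= deadline_ms)                    # QoS deadline
    if s.check() == sat:
        m = s.model()
        return next(n for n, p in pick.items()
                    if is_true(m.evaluate(p, model_completion=True)))
    return None

# The cloud's round trip blows the 6 ms budget, so the edge server wins.
print(pick_server(2, 6.0, {"edge-a": (1.5, 3.0, 4), "cloud": (9.0, 1.0, 64)}))
```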
arxiv-667443
2410.06716
Guaranteed Generation from Large Language Models
<|reference_start|>Guaranteed Generation from Large Language Models: As large language models (LLMs) are increasingly used across various applications, there is a growing need to control text generation to satisfy specific constraints or requirements. This raises a crucial question: Is it possible to guarantee strict constraint satisfaction in generated outputs while preserving the distribution of the original model as much as possible? We first define the ideal distribution - the one closest to the original model, which also always satisfies the expressed constraint - as the ultimate goal of guaranteed generation. We then state a fundamental limitation, namely that it is impossible to reach that goal through autoregressive training alone. This motivates the necessity of combining training-time and inference-time methods to enforce such guarantees. Based on this insight, we propose GUARD, a simple yet effective approach that combines an autoregressive proposal distribution with rejection sampling. Through GUARD's theoretical properties, we show how controlling the KL divergence between a specific proposal and the target ideal distribution simultaneously optimizes inference speed and distributional closeness. To validate these theoretical concepts, we conduct extensive experiments on two text generation settings with hard-to-satisfy constraints: a lexical constraint scenario and a sentiment reversal scenario. These experiments show that GUARD achieves perfect constraint satisfaction while almost preserving the ideal distribution with highly improved inference efficiency. GUARD provides a principled approach to enforcing strict guarantees for LLMs without compromising their generative capabilities.<|reference_end|>
arxiv
@article{kim2024guaranteed, title={Guaranteed Generation from Large Language Models}, author={Minbeom Kim, Thibaut Thonet, Jos Rozen, Hwaran Lee, Kyomin Jung, Marc Dymetman}, journal={arXiv preprint arXiv:2410.06716}, year={2024}, archivePrefix={arXiv}, eprint={2410.06716}, primaryClass={cs.CL} }
kim2024guaranteed
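The inference-time half of the approach is plain rejection sampling, which is worth seeing in code: draw from the proposal until the constraint predicate accepts, yielding samples from the proposal restricted to the constraint set (and hence approximating the ideal distribution when the proposal is close to the base model). The sampler and predicate below are illustrative stand-ins, not GUARD's trained proposal:

```python
import random

def rejection_sample(propose, satisfies, max_tries=1000):
    """Sample from `propose` until `satisfies` holds. The acceptance
    rate falls as the proposal strays from the constraint set, which
    is why GUARD also trains the proposal toward it."""
    for _ in range(max_tries):
        y = propose()
        if satisfies(y):
            return y
    raise RuntimeError("acceptance rate too low; train a better proposal")

# Toy stand-ins: a random 'generator' and a lexical constraint.
words = ["the cat sat", "hello world", "the amazing cat", "goodbye"]
sample = rejection_sample(lambda: random.choice(words),
                          lambda y: "amazing" in y)
print(sample)  # always contains the required keyword
```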
arxiv-667444
2410.06717
Exact full-RSB SAT/UNSAT transition in infinitely wide two-layer neural networks
<|reference_start|>Exact full-RSB SAT/UNSAT transition in infinitely wide two-layer neural networks: We analyze the problem of storing random pattern-label associations using two classes of continuous non-convex weight models, namely the perceptron with negative margin and an infinite-width two-layer neural network with non-overlapping receptive fields and a generic activation function. Using a full-RSB ansatz we compute the exact value of the SAT/UNSAT transition. Furthermore, in the case of the negative perceptron model we show that, depending on the value of the margin and the constrained density, there is a line separating a phase in which the distribution of overlaps of typical states does not possess a gap from one in which it does. Our results show that the hypothesis underlying some recently developed theorems claiming that Approximate Message Passing (AMP) based algorithms are able to reach capacity does not hold in general. Finally, we show that Gradient Descent is not able to reach the maximal capacity both in cases where there is and there is not a non-overlap-gap phase for the typical states. This, similarly to what occurs in binary weight models, suggests that gradient-based algorithms are biased towards highly atypical states, whose inaccessibility determines the algorithmic threshold.<|reference_end|>
arxiv
@article{annesi2024exact, title={Exact full-RSB SAT/UNSAT transition in infinitely wide two-layer neural networks}, author={Brandon L. Annesi, Enrico M. Malatesta and Francesco Zamponi}, journal={arXiv preprint arXiv:2410.06717}, year={2024}, archivePrefix={arXiv}, eprint={2410.06717}, primaryClass={cond-mat.dis-nn cs.LG math.PR} }
annesi2024exact
arxiv-667445
2410.06718
MatMamba: A Matryoshka State Space Model
<|reference_start|>MatMamba: A Matryoshka State Space Model: State Space Models (SSMs) like Mamba2 are a promising alternative to Transformers, with faster theoretical training and inference times -- especially for long context lengths. Recent work on Matryoshka Representation Learning -- and its application to Transformer backbones in works like MatFormer -- showed how to introduce nested granularities of smaller submodels in one universal elastic model. In this work, we present MatMamba: a state space model which combines Matryoshka-style learning with Mamba2, by modifying the block to contain nested dimensions to enable joint training and adaptive inference. MatMamba allows for efficient and adaptive deployment across various model sizes. We train a single large MatMamba model and are able to get a number of smaller nested models for free -- while maintaining or improving upon the performance of a baseline smaller model trained from scratch. We train language and image models at a variety of parameter sizes from 35M to 1.4B. Our results on ImageNet and FineWeb show that MatMamba models scale comparably to Transformers, while having more efficient inference characteristics. This makes MatMamba a practically viable option for deploying large-scale models in an elastic way based on the available inference compute. Code and models are open sourced at \url{https://github.com/ScaledFoundations/MatMamba}<|reference_end|>
arxiv
@article{shukla2024matmamba:, title={MatMamba: A Matryoshka State Space Model}, author={Abhinav Shukla, Sai Vemprala, Aditya Kusupati, Ashish Kapoor}, journal={arXiv preprint arXiv:2410.06718}, year={2024}, archivePrefix={arXiv}, eprint={2410.06718}, primaryClass={cs.LG cs.CL cs.CV} }
shukla2024matmamba:
arxiv-667446
2410.06719
Suppress Content Shift: Better Diffusion Features via Off-the-Shelf Generation Techniques
<|reference_start|>Suppress Content Shift: Better Diffusion Features via Off-the-Shelf Generation Techniques: Diffusion models are powerful generative models, and this capability can also be applied to discrimination. The inner activations of a pre-trained diffusion model can serve as features for discriminative tasks, namely, the diffusion feature. We discover that the diffusion feature has been hindered by a hidden yet universal phenomenon that we call content shift. To be specific, there are content differences between features and the input image, such as the exact shape of a certain object. We trace the cause of content shift to an inherent characteristic of diffusion models, which suggests that this phenomenon broadly affects diffusion features. Further empirical study indicates that its negative impact is not negligible even when content shift is not visually perceivable. Hence, we propose to suppress content shift to enhance the overall quality of diffusion features. Specifically, content shift is related to the information drift during the process of recovering an image from the noisy input, pointing out the possibility of turning off-the-shelf generation techniques into tools for content shift suppression. We further propose a practical guideline named GATE to efficiently evaluate the potential benefit of a technique and provide an implementation of our methodology. Despite its simplicity, the proposed approach has achieved superior results on various tasks and datasets, validating its potential as a generic booster for diffusion features. Our code is available at https://github.com/Darkbblue/diffusion-content-shift.<|reference_end|>
arxiv
@article{meng2024suppress, title={Suppress Content Shift: Better Diffusion Features via Off-the-Shelf Generation Techniques}, author={Benyuan Meng, Qianqian Xu, Zitai Wang, Zhiyong Yang, Xiaochun Cao, Qingming Huang}, journal={arXiv preprint arXiv:2410.06719}, year={2024}, archivePrefix={arXiv}, eprint={2410.06719}, primaryClass={cs.CV cs.AI} }
meng2024suppress
arxiv-667447
2410.06720
Collective perception for tracking people with a robot swarm
<|reference_start|>Collective perception for tracking people with a robot swarm: Swarm perception refers to the ability of a robot swarm to utilize the perception capabilities of each individual robot, forming a collective understanding of the environment. Their distributed nature enables robot swarms to continuously monitor dynamic environments by maintaining a constant presence throughout the space. In this study, we present a preliminary experiment on the collective tracking of people using a robot swarm. The experiment was conducted in simulation across four different office environments, with swarms of varying sizes. The robots were provided with images sampled from a dataset of real-world office environment pictures. We measured the time distribution required for a robot to detect a person changing location and to propagate this information to increasing fractions of the swarm. The results indicate that robot swarms show significant promise in monitoring dynamic environments.<|reference_end|>
arxiv
@article{kegeleirs2024collective, title={Collective perception for tracking people with a robot swarm}, author={Miquel Kegeleirs, David Garz\'on Ramos, Guillermo Legarda Herranz, Ilyes Gharbi, Jeanne Szpirer, Olivier Debeir, Ken Hasselmann, Lorenzo Garattoni, Gianpiero Francesca, and Mauro Birattari}, journal={arXiv preprint arXiv:2410.06720}, year={2024}, archivePrefix={arXiv}, eprint={2410.06720}, primaryClass={cs.RO} }
kegeleirs2024collective
arxiv-667448
2410.06721
Orchestrating the Execution of Serverless Functions in Hybrid Clouds
<|reference_start|>Orchestrating the Execution of Serverless Functions in Hybrid Clouds: In recent years, serverless computing, especially Function as a Service (FaaS), has been rapidly growing in popularity as a cloud programming model. The serverless computing model provides an intuitive interface for developing cloud-based applications, where the development and deployment of scalable microservices have become easier and more cost-effective. An increasing number of batch-processing applications are deployed as pipelines that comprise a sequence of functions that must meet their deadline targets to be practical. In this paper, we present our Hybrid Cloud Scheduler (HCS) for orchestrating the execution of serverless batch-processing pipelines deployed over heterogeneous infrastructures. Our framework enables developers to (i) automatically schedule and execute batch-processing applications in heterogeneous environments such as private edge and public cloud serverless infrastructures, (ii) benefit from cost reduction through the utilization of their own resources in a private cluster, and (iii) significantly improve the probability of meeting the deadline requirements of their applications. Our experimental evaluation demonstrates the efficiency and benefits of our approach.<|reference_end|>
arxiv
@article{peri2024orchestrating, title={Orchestrating the Execution of Serverless Functions in Hybrid Clouds}, author={Aristotelis Peri, Michail Tsenos, Vana Kalogeraki}, journal={arXiv preprint arXiv:2410.06721}, year={2024}, archivePrefix={arXiv}, eprint={2410.06721}, primaryClass={cs.DC} }
peri2024orchestrating
arxiv-667449
2410.06722
Scaling Laws for Mixed quantization in Large Language Models
<|reference_start|>Scaling Laws for Mixed quantization in Large Language Models: Post-training quantization of Large Language Models (LLMs) has proven effective in reducing the computational requirements for running inference on these models. In this study, we focus on a straightforward question: When aiming for a specific accuracy or perplexity target for low-precision quantization, how many high-precision numbers or calculations need to be preserved as we scale LLMs to larger sizes? We first introduce a critical metric named the quantization ratio, which compares the number of parameters quantized to low-precision arithmetic against the total parameter count. Through extensive and carefully controlled experiments across different model families, arithmetic types, and quantization granularities (e.g., layer-wise, matmul-wise), we identify two central phenomena. 1) The larger the models, the better they can preserve performance with an increased quantization ratio, as measured by perplexity in pre-training tasks or accuracy in downstream tasks. 2) The finer the granularity of mixed-precision quantization (e.g., matmul-wise), the more the model can increase the quantization ratio. We believe these observed phenomena offer valuable insights for future AI hardware design and the development of advanced Efficient AI algorithms.<|reference_end|>
arxiv
@article{cao2024scaling, title={Scaling Laws for Mixed quantization in Large Language Models}, author={Zeyu Cao, Cheng Zhang, Pedro Gimenes, Jianqiao Lu, Jianyi Cheng, Yiren Zhao}, journal={arXiv preprint arXiv:2410.06722}, year={2024}, archivePrefix={arXiv}, eprint={2410.06722}, primaryClass={cs.CL cs.LG} }
cao2024scaling
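The quantization ratio metric is easy to state in code: the fraction of parameters kept in low precision out of the total, computed at whatever granularity (layer-wise, matmul-wise) is being studied. A minimal sketch with made-up component names, not the paper's model breakdown:

```python
def quantization_ratio(param_counts, low_precision):
    """param_counts: component name -> parameter count.
    low_precision: the set of components quantized to low precision."""
    total = sum(param_counts.values())
    quantized = sum(n for name, n in param_counts.items()
                    if name in low_precision)
    return quantized / total

# Illustrative example: quantize every matmul except the two kept in
# high precision (component names and counts are hypothetical).
counts = {"embed": 50e6, "attn.qkv": 100e6, "attn.out": 40e6,
          "mlp.up": 160e6, "mlp.down": 160e6, "lm_head": 50e6}
low = set(counts) - {"embed", "lm_head"}
print(quantization_ratio(counts, low))  # 460e6 / 560e6 ~= 0.82
```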
arxiv-667450
2410.06723
Evaluating Computational Pathology Foundation Models for Prostate Cancer Grading under Distribution Shifts
<|reference_start|>Evaluating Computational Pathology Foundation Models for Prostate Cancer Grading under Distribution Shifts: Foundation models have recently become a popular research direction within computational pathology. They are intended to be general-purpose feature extractors, promising to achieve good performance on a range of downstream tasks. Real-world pathology image data does however exhibit considerable variability. Foundation models should be robust to these variations and other distribution shifts which might be encountered in practice. We evaluate two computational pathology foundation models: UNI (trained on more than 100,000 whole-slide images) and CONCH (trained on more than 1.1 million image-caption pairs), by utilizing them as feature extractors within prostate cancer grading models. We find that while UNI and CONCH perform well relative to baselines, the absolute performance can still be far from satisfactory in certain settings. The fact that foundation models have been trained on large and varied datasets does not guarantee that downstream models always will be robust to common distribution shifts.<|reference_end|>
arxiv
@article{gustafsson2024evaluating, title={Evaluating Computational Pathology Foundation Models for Prostate Cancer Grading under Distribution Shifts}, author={Fredrik K. Gustafsson, Mattias Rantalainen}, journal={arXiv preprint arXiv:2410.06723}, year={2024}, archivePrefix={arXiv}, eprint={2410.06723}, primaryClass={eess.IV cs.CV cs.LG} }
gustafsson2024evaluating
arxiv-667451
2410.06725
Evaluating the Impact of Point Cloud Colorization on Semantic Segmentation Accuracy
<|reference_start|>Evaluating the Impact of Point Cloud Colorization on Semantic Segmentation Accuracy: Point cloud semantic segmentation, the process of classifying each point into predefined categories, is essential for 3D scene understanding. While image-based segmentation is widely adopted due to its maturity, methods relying solely on RGB information often suffer from degraded performance due to color inaccuracies. Recent advancements have incorporated additional features such as intensity and geometric information, yet RGB channels continue to negatively impact segmentation accuracy when errors in colorization occur. Despite this, previous studies have not rigorously quantified the effects of erroneous colorization on segmentation performance. In this paper, we propose a novel statistical approach to evaluate the impact of inaccurate RGB information on image-based point cloud segmentation. We categorize RGB inaccuracies into two types: incorrect color information and similar color information. Our results demonstrate that both types of color inaccuracies significantly degrade segmentation accuracy, with similar color errors particularly affecting the extraction of geometric features. These findings highlight the critical need to reassess the role of RGB information in point cloud segmentation and its implications for future algorithm design.<|reference_end|>
arxiv
@article{zhu2024evaluating, title={Evaluating the Impact of Point Cloud Colorization on Semantic Segmentation Accuracy}, author={Qinfeng Zhu, Jiaze Cao, Yuanzhi Cai, Lei Fan}, journal={arXiv preprint arXiv:2410.06725}, year={2024}, archivePrefix={arXiv}, eprint={2410.06725}, primaryClass={cs.CV cs.AI cs.LG cs.MM} }
zhu2024evaluating
arxiv-667452
2410.06726
Sharp Bounds of the Causal Effect Under MNAR Confounding
<|reference_start|>Sharp Bounds of the Causal Effect Under MNAR Confounding: We report bounds for any contrast between the probabilities of the counterfactual outcome under exposure and non-exposure when the confounders are missing not at random. We assume that the missingness mechanism is outcome-independent, and prove that our bounds are arbitrarily sharp, i.e., practically attainable or logically possible.<|reference_end|>
arxiv
@article{peña2024bounds, title={Bounds and Sensitivity Analysis of the Causal Effect Under Outcome-Independent MNAR Confounding}, author={Jose M. Pe\~na}, journal={arXiv preprint arXiv:2410.06726}, year={2024}, archivePrefix={arXiv}, eprint={2410.06726}, primaryClass={stat.ME cs.LG stat.ML} }
peña2024bounds
arxiv-667453
2410.06729
Perceptual Quality Assessment of Octree-RAHT Encoded 3D Point Clouds
<|reference_start|>Perceptual Quality Assessment of Octree-RAHT Encoded 3D Point Clouds: No-reference bitstream-layer point cloud quality assessment (PCQA) can be deployed without full decoding at any network node to achieve real-time quality monitoring. In this work, we focus on the PCQA problem dedicated to Octree-RAHT encoding mode. First, to address the issue that existing PCQA databases have a small scale and limited distortion levels, we establish the WPC5.0 database which is the first one dedicated to Octree-RAHT encoding mode with a scale of 400 distorted point clouds (PCs) including 4 geometric multiplied by 5 attitude distortion levels. Then, we propose the first PCQA model dedicated to Octree-RAHT encoding mode by parsing PC bitstreams without full decoding. The model introduces texture bitrate (TBPP) to predict texture complexity (TC) and further derives the texture distortion factor. In addition, the Geometric Quantization Parameter (PQS) is used to estimate the geometric distortion factor, which is then integrated into the model along with the texture distortion factor to obtain the proposed PCQA model named streamPCQ-OR. The proposed model has been compared with other advanced PCQA methods on the WPC5.0, BASICS and M-PCCD databases, and experimental results show that our model has excellent performance while having very low computational complexity, providing a reliable choice for time-critical applications. To facilitate subsequent research, the database and source code will be publicly released at https://github.com/qdushl/Waterloo-Point-Cloud-Database-5.0.<|reference_end|>
arxiv
@article{duan2024perceptual, title={Perceptual Quality Assessment of Octree-RAHT Encoded 3D Point Clouds}, author={Dongshuai Duan, Honglei Su, Qi Liu, Hui Yuan, Wei Gao, Jiarun Song, Zhou Wang}, journal={arXiv preprint arXiv:2410.06729}, year={2024}, archivePrefix={arXiv}, eprint={2410.06729}, primaryClass={cs.MM} }
duan2024perceptual
arxiv-667454
2410.06731
Gridded Transformer Neural Processes for Large Unstructured Spatio-Temporal Data
<|reference_start|>Gridded Transformer Neural Processes for Large Unstructured Spatio-Temporal Data: Many important problems require modelling large-scale spatio-temporal datasets, with one prevalent example being weather forecasting. Recently, transformer-based approaches have shown great promise in a range of weather forecasting problems. However, these have mostly focused on gridded data sources, neglecting the wealth of unstructured, off-the-grid data from observational measurements such as those at weather stations. A promising family of models suitable for such tasks are neural processes (NPs), notably the family of transformer neural processes (TNPs). Although TNPs have shown promise on small spatio-temporal datasets, they are unable to scale to the quantities of data used by state-of-the-art weather and climate models. This limitation stems from their lack of efficient attention mechanisms. We address this shortcoming through the introduction of gridded pseudo-token TNPs which employ specialised encoders and decoders to handle unstructured observations and utilise a processor containing gridded pseudo-tokens that leverage efficient attention mechanisms. Our method consistently outperforms a range of strong baselines on various synthetic and real-world regression tasks involving large-scale data, while maintaining competitive computational efficiency. The real-life experiments are performed on weather data, demonstrating the potential of our approach to bring performance and computational benefits when applied at scale in a weather modelling pipeline.<|reference_end|>
arxiv
@article{ashman2024gridded, title={Gridded Transformer Neural Processes for Large Unstructured Spatio-Temporal Data}, author={Matthew Ashman, Cristiana Diaconu, Eric Langezaal, Adrian Weller, Richard E. Turner}, journal={arXiv preprint arXiv:2410.06731}, year={2024}, archivePrefix={arXiv}, eprint={2410.06731}, primaryClass={stat.ML cs.LG} }
ashman2024gridded
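The key architectural move, unstructured observations cross-attending into a regular grid of pseudo-tokens so that efficient gridded attention can take over, can be sketched in a few lines of PyTorch. One generic attention layer stands in for the paper's specialised encoder, and all shapes are illustrative:

```python
import torch
import torch.nn as nn

class GriddedPseudoTokenEncoder(nn.Module):
    """Off-grid observation tokens are absorbed into `grid_size`
    learned pseudo-tokens via cross-attention; a gridded processor
    (not shown) can then run efficient attention over the grid."""
    def __init__(self, dim, grid_size, heads=4):
        super().__init__()
        self.pseudo = nn.Parameter(torch.randn(grid_size, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, obs):                    # obs: (batch, n_obs, dim)
        q = self.pseudo.unsqueeze(0).expand(obs.shape[0], -1, -1)
        out, _ = self.attn(q, obs, obs)        # grid queries, obs keys/values
        return out                             # (batch, grid_size, dim)

enc = GriddedPseudoTokenEncoder(dim=64, grid_size=16 * 16)
print(enc(torch.randn(2, 5000, 64)).shape)     # torch.Size([2, 256, 64])
```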
arxiv-667455
2410.06733
Weak-eval-Strong: Evaluating and Eliciting Lateral Thinking of LLMs with Situation Puzzles
<|reference_start|>Weak-eval-Strong: Evaluating and Eliciting Lateral Thinking of LLMs with Situation Puzzles: While advancements in NLP have significantly improved the performance of Large Language Models (LLMs) on tasks requiring vertical thinking, their lateral thinking capabilities remain under-explored and challenging to measure due to the complexity of assessing creative thought processes and the scarcity of relevant data. To address these challenges, we introduce SPLAT, a benchmark leveraging Situation Puzzles to evaluate and elicit LAteral Thinking of LLMs. This benchmark, containing 975 graded situation puzzles across three difficulty levels, employs a new multi-turn player-judge framework instead of the traditional model-based evaluation, which often necessitates a stronger evaluation model. This framework simulates an interactive game where the model (player) asks the evaluation model (judge) questions about an incomplete story to infer the full scenario. The judge answers based on a detailed reference scenario or evaluates whether the player's predictions align with the reference one. This approach lessens dependence on more robust evaluation models, enabling the assessment of state-of-the-art LLMs. The experiments demonstrate that a robust evaluation model, such as WizardLM-2, closely matches human judgements in both intermediate question-answering and final scenario accuracy, achieving over 80% agreement, similar to the agreement levels among humans. Furthermore, applying data and reasoning processes from our benchmark to other lateral thinking-related benchmarks, e.g., RiddleSense and BrainTeaser, leads to performance enhancements. This suggests that our benchmark effectively evaluates and elicits the lateral thinking abilities of LLMs. Code is available at: https://github.com/chenqi008/LateralThinking.<|reference_end|>
arxiv
@article{chen2024weak-eval-strong:, title={Weak-eval-Strong: Evaluating and Eliciting Lateral Thinking of LLMs with Situation Puzzles}, author={Qi Chen, Bowen Zhang, Gang Wang, Qi Wu}, journal={arXiv preprint arXiv:2410.06733}, year={2024}, archivePrefix={arXiv}, eprint={2410.06733}, primaryClass={cs.CL cs.AI cs.CV} }
chen2024weak-eval-strong:
arxiv-667456
2410.06734
MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes
<|reference_start|>MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes: Talking face generation (TFG) aims to animate a target identity's face to create realistic talking videos. Personalized TFG is a variant that emphasizes the perceptual identity similarity of the synthesized result (from the perspective of appearance and talking style). While previous works typically solve this problem by learning an individual neural radiance field (NeRF) for each identity to implicitly store its static and dynamic information, we find it inefficient and non-generalized due to the per-identity-per-training framework and the limited training data. To this end, we propose MimicTalk, the first attempt that exploits the rich knowledge from a NeRF-based person-agnostic generic model for improving the efficiency and robustness of personalized TFG. To be specific, (1) we first come up with a person-agnostic 3D TFG model as the base model and propose to adapt it to a specific identity; (2) we propose a static-dynamic-hybrid adaptation pipeline to help the model learn the personalized static appearance and facial dynamic features; (3) To generate the facial motion of the personalized talking style, we propose an in-context stylized audio-to-motion model that mimics the implicit talking style provided in the reference video, avoiding the information loss of an explicit style representation. The adaptation process to an unseen identity can be performed in 15 minutes, which is 47 times faster than previous person-dependent methods. Experiments show that our MimicTalk surpasses previous baselines regarding video quality, efficiency, and expressiveness. Source code and video samples are available at https://mimictalk.github.io .<|reference_end|>
arxiv
@article{ye2024mimictalk:, title={MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes}, author={Zhenhui Ye, Tianyun Zhong, Yi Ren, Ziyue Jiang, Jiawei Huang, Rongjie Huang, Jinglin Liu, Jinzheng He, Chen Zhang, Zehan Wang, Xize Chen, Xiang Yin, Zhou Zhao}, journal={arXiv preprint arXiv:2410.06734}, year={2024}, archivePrefix={arXiv}, eprint={2410.06734}, primaryClass={cs.CV} }
ye2024mimictalk:
arxiv-667457
2410.06735
Which Programming Language and What Features at Pre-training Stage Affect Downstream Logical Inference Performance?
<|reference_start|>Which Programming Language and What Features at Pre-training Stage Affect Downstream Logical Inference Performance?: Recent large language models (LLMs) have demonstrated remarkable generalization abilities in mathematics and logical reasoning tasks. Prior research indicates that LLMs pre-trained with programming language data exhibit high mathematical and reasoning abilities; however, this causal relationship has not been rigorously tested. Our research aims to verify which programming languages and features during pre-training affect logical inference performance. Specifically, we pre-trained decoder-based language models from scratch using datasets from ten programming languages (e.g., Python, C, Java) and three natural language datasets (Wikipedia, Fineweb, C4) under identical conditions. Thereafter, we evaluated the trained models in a few-shot in-context learning setting on logical reasoning tasks: FLD and bAbi, which do not require commonsense or world knowledge. The results demonstrate that nearly all models trained with programming languages consistently outperform those trained with natural languages, indicating that programming languages contain factors that elicit logical inference performance. In addition, we found that models trained with programming languages exhibit a better ability to follow instructions compared to those trained with natural languages. Further analysis reveals that the depth of the Abstract Syntax Trees representing the parsed programs also affects logical reasoning performance. These findings will offer insights into the essential elements of pre-training for acquiring the foundational abilities of LLMs.<|reference_end|>
arxiv
@article{uchiyama2024which, title={Which Programming Language and What Features at Pre-training Stage Affect Downstream Logical Inference Performance?}, author={Fumiya Uchiyama, Takeshi Kojima, Andrew Gambardella, Qi Cao, Yusuke Iwasawa, Yutaka Matsuo}, journal={arXiv preprint arXiv:2410.06735}, year={2024}, archivePrefix={arXiv}, eprint={2410.06735}, primaryClass={cs.CL cs.AI} }
uchiyama2024which
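The AST-depth measure from the analysis is straightforward to reproduce with Python's standard library: parse a program and take the longest root-to-leaf path in its abstract syntax tree. (The exact normalization the authors use is not specified here; this is the generic definition.)

```python
import ast

def ast_depth(node):
    """Longest root-to-leaf path length in the abstract syntax tree."""
    children = list(ast.iter_child_nodes(node))
    if not children:
        return 1
    return 1 + max(ast_depth(c) for c in children)

flat = "x = 1\ny = 2\n"
nested = "def f(x):\n    if x > 0:\n        return [i * 2 for i in range(x)]\n"
# The nested program yields a visibly deeper tree than the flat one.
print(ast_depth(ast.parse(flat)), ast_depth(ast.parse(nested)))
```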
arxiv-667458
2410.06741
CoBa: Convergence Balancer for Multitask Finetuning of Large Language Models
<|reference_start|>CoBa: Convergence Balancer for Multitask Finetuning of Large Language Models: Multi-task learning (MTL) benefits the fine-tuning of large language models (LLMs) by providing a single model with improved performance and generalization ability across tasks, presenting a resource-efficient alternative to developing separate models for each task. Yet, existing MTL strategies for LLMs often fall short by either being computationally intensive or failing to ensure simultaneous task convergence. This paper presents CoBa, a new MTL approach designed to effectively manage task convergence balance with minimal computational overhead. Utilizing Relative Convergence Scores (RCS), Absolute Convergence Scores (ACS), and a Divergence Factor (DF), CoBa dynamically adjusts task weights during the training process, ensuring that the validation losses of all tasks progress towards convergence at an even pace while mitigating the issue of individual task divergence. The results of our experiments involving three disparate datasets underscore that this approach not only fosters equilibrium in task improvement but also enhances the LLMs' performance by up to 13% relative to the second-best baselines. Code is open-sourced at https://github.com/codefuse-ai/MFTCoder.<|reference_end|>
arxiv
@article{gong2024coba:, title={CoBa: Convergence Balancer for Multitask Finetuning of Large Language Models}, author={Zi Gong, Hang Yu, Cong Liao, Bingchang Liu, Chaoyu Chen, Jianguo Li}, journal={arXiv preprint arXiv:2410.06741}, year={2024}, archivePrefix={arXiv}, eprint={2410.06741}, primaryClass={cs.CL cs.LG} }
gong2024coba:
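As a rough illustration of convergence-balanced weighting (not the paper's exact RCS/ACS/DF formulas): estimate each task's recent validation-loss slope, then softmax the slopes so that tasks still descending steeply are relatively down-weighted and slower tasks catch up:

```python
import numpy as np

def convergence_balanced_weights(val_losses, window=5, temp=0.05):
    """val_losses: one list of recent validation losses per task.
    A flatter (slower-converging) loss curve gets a larger weight."""
    slopes = []
    for hist in val_losses:
        h = np.asarray(hist[-window:], dtype=float)
        slope, _ = np.polyfit(np.arange(len(h)), h, 1)
        slopes.append(slope)                 # negative = still improving
    z = np.asarray(slopes) / temp
    w = np.exp(z - z.max())                  # softmax over slopes
    return w / w.sum()

fast = [1.0, 0.7, 0.5, 0.35, 0.25]           # converging quickly
slow = [1.0, 0.97, 0.95, 0.93, 0.91]         # barely moving
print(convergence_balanced_weights([fast, slow]))  # slow task upweighted
```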
arxiv-667459
2410.06742
Inference over Unseen Entities, Relations and Literals on Knowledge Graphs
<|reference_start|>Inference over Unseen Entities, Relations and Literals on Knowledge Graphs: In recent years, knowledge graph embedding models have been successfully applied in the transductive setting to tackle various challenging tasks, including link prediction and query answering. Yet, the transductive setting does not allow for reasoning over unseen entities or relations, let alone numerical or non-numerical literals. Although increasing efforts are put into exploring inductive scenarios, inference over unseen entities, relations, and literals has yet to be achieved. This limitation prohibits the existing methods from handling real-world dynamic knowledge graphs involving heterogeneous information about the world. Here, we propose a remedy to this limitation. We propose the attentive byte-pair encoding layer (BytE) to construct a triple embedding from a sequence of byte-pair encoded subword units of entities and relations. Compared to the conventional setting, BytE leads to massive feature reuse via weight tying, since it forces a knowledge graph embedding model to learn embeddings for subword units instead of entities and relations directly. Consequently, the sizes of the embedding matrices are no longer bound to the number of unique entities and relations of a knowledge graph. Experimental results show that BytE improves the link prediction performance of 4 knowledge graph embedding models on datasets where the syntactic representations of triples are semantically meaningful. However, the benefits of training a knowledge graph embedding model with BytE dissipate on knowledge graphs where entities and relations are represented with plain numbers or URIs. We provide an open-source implementation of BytE to foster reproducible research.<|reference_end|>
arxiv
@article{demir2024inference, title={Inference over Unseen Entities, Relations and Literals on Knowledge Graphs}, author={Caglar Demir, N'Dah Jean Kouagou, Arnab Sharma, Axel-Cyrille Ngonga Ngomo}, journal={arXiv preprint arXiv:2410.06742}, year={2024}, archivePrefix={arXiv}, eprint={2410.06742}, primaryClass={cs.LG} }
demir2024inference
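The core of the BytE idea, composing entity and relation embeddings from byte-pair subword units so that the embedding table scales with the subword vocabulary rather than with the graph, can be sketched as below. Mean pooling stands in for the paper's attentive layer, and the DistMult-style scorer is just one possible host model:

```python
import torch
import torch.nn as nn

class BytEStyleEncoder(nn.Module):
    def __init__(self, subword_vocab, dim):
        super().__init__()
        # One row per subword unit, shared by all entities and relations.
        self.emb = nn.Embedding(subword_vocab, dim, padding_idx=0)

    def encode(self, ids):                    # ids: (batch, max_units)
        e = self.emb(ids)                     # (batch, max_units, dim)
        mask = (ids != 0).unsqueeze(-1).float()
        return (e * mask).sum(1) / mask.sum(1).clamp(min=1.0)

    def score(self, head_ids, rel_ids, tail_ids):
        h, r, t = map(self.encode, (head_ids, rel_ids, tail_ids))
        return (h * r * t).sum(-1)            # DistMult triple score

m = BytEStyleEncoder(subword_vocab=5000, dim=32)
ids = torch.randint(1, 5000, (4, 6))          # 4 triples, <=6 units each
print(m.score(ids, ids, ids).shape)           # torch.Size([4])
```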
arxiv-667460
2410.06743
Utilizing Transfer Learning and pre-trained Models for Effective Forest Fire Detection: A Case Study of Uttarakhand
<|reference_start|>Utilizing Transfer Learning and pre-trained Models for Effective Forest Fire Detection: A Case Study of Uttarakhand: Forest fires pose a significant threat to the environment, human life, and property. Early detection and response are crucial to mitigating the impact of these disasters. However, traditional forest fire detection methods are often hindered by their reliance on manual observation and on satellite imagery with low spatial resolution. This paper emphasizes the role of transfer learning in enhancing forest fire detection in India, particularly in overcoming data collection challenges and improving model accuracy across various regions. We compare traditional learning methods with transfer learning, focusing on the unique challenges posed by regional differences in terrain, climate, and vegetation. Transfer learning can be categorized into several types based on the similarity between the source and target tasks, as well as the type of knowledge transferred. One key method is utilizing pre-trained models for efficient transfer learning, which significantly reduces the need for extensive labeled data. We outline the transfer learning process, demonstrating how researchers can adapt pre-trained models like MobileNetV2 for specific tasks such as forest fire detection. Finally, we present experimental results from training and evaluating a deep learning model using the Uttarakhand forest fire dataset, showcasing the effectiveness of transfer learning in this context.<|reference_end|>
arxiv
@article{gupta2024utilizing, title={Utilizing Transfer Learning and pre-trained Models for Effective Forest Fire Detection: A Case Study of Uttarakhand}, author={Hari Prabhat Gupta and Rahul Mishra}, journal={arXiv preprint arXiv:2410.06743}, year={2024}, archivePrefix={arXiv}, eprint={2410.06743}, primaryClass={cs.CV cs.LG} }
gupta2024utilizing
arxiv-667461
2410.06746
Cluster-wise Graph Transformer with Dual-granularity Kernelized Attention
<|reference_start|>Cluster-wise Graph Transformer with Dual-granularity Kernelized Attention: In the realm of graph learning, there is a category of methods that conceptualize graphs as hierarchical structures, utilizing node clustering to capture broader structural information. While generally effective, these methods often rely on a fixed graph coarsening routine, leading to overly homogeneous cluster representations and loss of node-level information. In this paper, we envision the graph as a network of interconnected node sets without compressing each cluster into a single embedding. To enable effective information transfer among these node sets, we propose the Node-to-Cluster Attention (N2C-Attn) mechanism. N2C-Attn incorporates techniques from Multiple Kernel Learning into the kernelized attention framework, effectively capturing information at both node and cluster levels. We then devise an efficient form for N2C-Attn using the cluster-wise message-passing framework, achieving linear time complexity. We further analyze how N2C-Attn combines bi-level feature maps of queries and keys, demonstrating its capability to merge dual-granularity information. The resulting architecture, Cluster-wise Graph Transformer (Cluster-GT), which uses node clusters as tokens and employs our proposed N2C-Attn module, shows superior performance on various graph-level tasks. Code is available at https://github.com/LUMIA-Group/Cluster-wise-Graph-Transformer.<|reference_end|>
arxiv
@article{huang2024cluster-wise, title={Cluster-wise Graph Transformer with Dual-granularity Kernelized Attention}, author={Siyuan Huang, Yunchong Song, Jiayue Zhou, Zhouhan Lin}, journal={arXiv preprint arXiv:2410.06746}, year={2024}, archivePrefix={arXiv}, eprint={2410.06746}, primaryClass={cs.LG} }
huang2024cluster-wise
arxiv-667462
2410.06756
DreamMesh4D: Video-to-4D Generation with Sparse-Controlled Gaussian-Mesh Hybrid Representation
<|reference_start|>DreamMesh4D: Video-to-4D Generation with Sparse-Controlled Gaussian-Mesh Hybrid Representation: Recent advancements in 2D/3D generative techniques have facilitated the generation of dynamic 3D objects from monocular videos. Previous methods mainly rely on the implicit neural radiance fields (NeRF) or explicit Gaussian Splatting as the underlying representation, and struggle to achieve satisfactory spatial-temporal consistency and surface appearance. Drawing inspiration from modern 3D animation pipelines, we introduce DreamMesh4D, a novel framework combining mesh representation with a geometric skinning technique to generate high-quality 4D objects from a monocular video. Instead of utilizing a classical texture map for appearance, we bind Gaussian splats to the triangle faces of the mesh for differentiable optimization of both the texture and mesh vertices. In particular, DreamMesh4D begins with a coarse mesh obtained through an image-to-3D generation procedure. Sparse points are then uniformly sampled across the mesh surface and used to build a deformation graph that drives the motion of the 3D object, both for computational efficiency and to provide additional constraints. At each step, transformations of the sparse control points are predicted using a deformation network, and the mesh vertices as well as the surface Gaussians are deformed via a novel geometric skinning algorithm, which is a hybrid approach combining LBS (linear blending skinning) and DQS (dual-quaternion skinning), mitigating drawbacks associated with both approaches. The static surface Gaussians and mesh vertices, as well as the deformation network, are learned via a reference-view photometric loss, a score distillation loss, and other regularizers in a two-stage manner. Extensive experiments demonstrate the superior performance of our method. Furthermore, our method is compatible with modern graphics pipelines, showcasing its potential in the 3D gaming and film industry.<|reference_end|>
arxiv
@article{li2024dreammesh4d:, title={DreamMesh4D: Video-to-4D Generation with Sparse-Controlled Gaussian-Mesh Hybrid Representation}, author={Zhiqi Li and Yiming Chen and Peidong Liu}, journal={arXiv preprint arXiv:2410.06756}, year={2024}, archivePrefix={arXiv}, eprint={2410.06756}, primaryClass={cs.CV} }
li2024dreammesh4d:
arxiv-667463
2410.06757
Diff-FMT: Diffusion Models for Fluorescence Molecular Tomography
<|reference_start|>Diff-FMT: Diffusion Models for Fluorescence Molecular Tomography: Fluorescence molecular tomography (FMT) is a real-time, noninvasive optical imaging technology that plays a significant role in biomedical research. Nevertheless, the ill-posedness of the inverse problem poses huge challenges in FMT reconstructions. Various deep learning algorithms have been extensively explored to address these critical issues, but they still face the challenges of high data dependency and poor image quality. In this paper, we, for the first time, propose an FMT reconstruction method based on a denoising diffusion probabilistic model (DDPM), termed Diff-FMT, which is capable of obtaining high-quality reconstructed images from noisy images. Specifically, we utilize the noise addition mechanism of DDPM to generate diverse training samples. Through the step-by-step probability sampling mechanism in the inverse process, we achieve fine-grained reconstruction of the image, avoiding issues such as the loss of image detail that can occur with end-to-end deep-learning methods. Additionally, we introduce the fluorescence signals as conditional information in the model training to sample a reconstructed image that is highly consistent with the input fluorescence signals from the noisy images. Numerous experimental results show that Diff-FMT can achieve high-resolution reconstructed images without relying on large-scale datasets, in contrast with other cutting-edge algorithms.<|reference_end|>
arxiv
@article{xue2024diff-fmt:, title={Diff-FMT: Diffusion Models for Fluorescence Molecular Tomography}, author={Qianqian Xue and Peng Zhang and Xingyu Liu and Wenjian Wang and Guanglei Zhang}, journal={arXiv preprint arXiv:2410.06757}, year={2024}, archivePrefix={arXiv}, eprint={2410.06757}, primaryClass={eess.IV cs.CV} }
xue2024diff-fmt:
arxiv-667464
2410.06762
Finite-Time Trajectory Tracking of a Four wheeled Mecanum Mobile Robot
<|reference_start|>Finite-Time Trajectory Tracking of a Four wheeled Mecanum Mobile Robot: A Four-Wheeled Mecanum Robot (FWMR) possesses the capability to move in any direction on a plane, making it a cornerstone system in modern industrial operations. Despite the extreme maneuverability offered by the FWMR, the practical implementation or real-time simulation of Mecanum wheel robots encounters substantial challenges in trajectory tracking control. In this research work, we present a finite-time control law using the backstepping technique to achieve stabilization and trajectory tracking objectives for an FWMR system. A rigorous stability proof is presented and an explicit computation of the finite time is provided. For the tracking objective, we demonstrate the results on an S-shaped trajectory oriented towards collision avoidance applications. Simulation validation in real time using Gazebo-ROS on a Mecanum robot model is carried out, and the outcomes comply with the theoretical results.<|reference_end|>
arxiv
@article{b2024finite-time, title={Finite-Time Trajectory Tracking of a Four wheeled Mecanum Mobile Robot}, author={Anil B, Mayank Pandey and Sneha Gajbhiye}, journal={arXiv preprint arXiv:2410.06762}, year={2024}, archivePrefix={arXiv}, eprint={2410.06762}, primaryClass={eess.SY cs.SY} }
b2024finite-time
arxiv-667465
2410.06763
Computation of harmonic functions on higher genus surfaces
<|reference_start|>Computation of harmonic functions on higher genus surfaces: We introduce a method to compute efficiently and with arbitrary precision a basis of harmonic functions with prescribed singularities on a general compact surface of genus two or higher. This basis is obtained as a composition of theta functions and the Abel-Jacobi map, which is approximated at spectral speed by complex polynomials. We then implement this method to compute harmonic extensions on genus $2$ surfaces with boundary, which are described by their Fenchel-Nielsen coordinates and a smooth parametrization of the boundary. Finally, we prove the spectral convergence of the method for the harmonic extension.<|reference_end|>
arxiv
@article{nahon2024computation, title={Computation of harmonic functions on higher genus surfaces}, author={Micka\"el Nahon, \'Edouard Oudet}, journal={arXiv preprint arXiv:2410.06763}, year={2024}, archivePrefix={arXiv}, eprint={2410.06763}, primaryClass={math.NA cs.NA math.AP} }
nahon2024computation
arxiv-667466
2410.06764
An Optimal Algorithm for the Stacker Crane Problem on Fixed Topologies
<|reference_start|>An Optimal Algorithm for the Stacker Crane Problem on Fixed Topologies: The Stacker Crane Problem (SCP) is a variant of the Traveling Salesman Problem. In SCP, pairs of pickup and delivery points are designated on a graph, and a crane must visit these points to move objects from each pickup location to its respective delivery point. The goal is to minimize the total distance traveled. SCP is known to be NP-hard, even on tree structures. The only positive results, in terms of polynomial-time solvability, apply to graphs that are topologically equivalent to a path or a cycle. We propose an algorithm that is optimal for each fixed topology, running in near-linear time. This is achieved by demonstrating that the problem is fixed-parameter tractable (FPT) when parameterized by both the cycle rank and the number of branch vertices.<|reference_end|>
arxiv
@article{chen2024an, title={An Optimal Algorithm for the Stacker Crane Problem on Fixed Topologies}, author={Yike Chen, Ke Shi, Chao Xu}, journal={arXiv preprint arXiv:2410.06764}, year={2024}, archivePrefix={arXiv}, eprint={2410.06764}, primaryClass={cs.DS math.OC} }
chen2024an
arxiv-667467
2410.06765
To Preserve or To Compress: An In-Depth Study of Connector Selection in Multimodal Large Language Models
<|reference_start|>To Preserve or To Compress: An In-Depth Study of Connector Selection in Multimodal Large Language Models: In recent years, multimodal large language models (MLLMs) have garnered significant attention from both industry and academia. However, there is still considerable debate on constructing MLLM architectures, particularly regarding the selection of appropriate connectors for perception tasks of varying granularities. This paper systematically investigates the impact of connectors on MLLM performance. Specifically, we classify connectors into feature-preserving and feature-compressing types. Utilizing a unified classification standard, we categorize sub-tasks from three comprehensive benchmarks, MMBench, MME, and SEED-Bench, into three task types: coarse-grained perception, fine-grained perception, and reasoning, and evaluate performance on each. Our findings reveal that feature-preserving connectors excel in \emph{fine-grained perception} tasks due to their ability to retain detailed visual information. In contrast, feature-compressing connectors, while less effective in fine-grained perception tasks, offer significant speed advantages and perform comparably in \emph{coarse-grained perception} and \emph{reasoning} tasks. These insights are crucial for guiding the design and optimization of MLLM architectures.<|reference_end|>
arxiv
@article{lin2024to, title={To Preserve or To Compress: An In-Depth Study of Connector Selection in Multimodal Large Language Models}, author={Junyan Lin, Haoran Chen, Dawei Zhu and Xiaoyu Shen}, journal={arXiv preprint arXiv:2410.06765}, year={2024}, archivePrefix={arXiv}, eprint={2410.06765}, primaryClass={cs.CL cs.CV} }
lin2024to
arxiv-667468
2410.06767
On the Performance of Pilot-Aided Simultaneous Communication and Tracking
<|reference_start|>On the Performance of Pilot-Aided Simultaneous Communication and Tracking: In this paper, a symbol error rate (SER) performance analysis is provided for a pilot-aided simultaneous communication and tracking (PASCAT) system. Specifically, we employ multiple drones to actively transmit signals to a BS, which is responsible for continuously monitoring the locations of the drones over time and decoding the symbols they transmit. It is found that the estimated location parameters at a given moment during tracking follow Gaussian distributions with means equal to the actual values and variances equal to the root mean square error (RMSE). Afterwards, the obtained location information is employed to derive the channel information, which is then used to preprocess the received signal before decoding using the maximum ratio combining (MRC) technique. The average SER is also evaluated over the distribution of the estimated location parameters, and an approximate value for the average SER is obtained using a Taylor approximation with fast convergence. The result indicates a close relationship between the RMSE of the estimated location parameters and the average SER. In addition, the effect of the number of pilot signals is analysed as well. It is found that employing more pilots enhances both the communication and sensing functionalities. Furthermore, the SER performance of our PASCAT system is similar to that of maximum likelihood detection (MLD) when a number of pilot signals are employed, which demonstrates the efficiency of the PASCAT system. Finally, all results are validated using Monte Carlo simulations.<|reference_end|>
arxiv
@article{han2024on, title={On the Performance of Pilot-Aided Simultaneous Communication and Tracking}, author={Shuaishuai Han, Emad Alsusa, Mohammad Ahmad Al-Jarrah, Mahmoud AlaaEldin}, journal={arXiv preprint arXiv:2410.06767}, year={2024}, archivePrefix={arXiv}, eprint={2410.06767}, primaryClass={cs.IT eess.SP math.IT} }
han2024on
arxiv-667469
2410.06768
Patterns of Creativity: How User Input Shapes AI-Generated Visual Diversity
<|reference_start|>Patterns of Creativity: How User Input Shapes AI-Generated Visual Diversity: Recent critiques of artificial intelligence (AI)-generated visual content highlight concerns about the erosion of artistic originality, as these systems often replicate patterns from their training datasets, leading to significant uniformity and reduced diversity. Our research adopts a novel approach by focusing on user behavior during interactions with Text-to-Image models. Instead of solely analyzing training data patterns, we examine how users' tendencies to create original prompts or rely on common templates influence content homogenization. We developed three originality metrics -- lexical, thematic, and word-sequence originality -- and applied them to user-generated prompts from two datasets, DiffusionDB and Civiverse. Additionally, we explored how characteristics such as topic choice, language originality, and the presence of NSFW content affect image popularity, using a linear regression model to predict user engagement. Our research enhances the discourse on AI's impact on creativity by emphasizing the critical role of user behavior in shaping the diversity of AI-generated visual content.<|reference_end|>
arxiv
@article{palmini2024patterns, title={Patterns of Creativity: How User Input Shapes AI-Generated Visual Diversity}, author={Maria-Teresa De Rosa Palmini and Eva Cetinic}, journal={arXiv preprint arXiv:2410.06768}, year={2024}, archivePrefix={arXiv}, eprint={2410.06768}, primaryClass={cs.HC} }
palmini2024patterns
arxiv-667470
2410.06769
LR Parsing of Permutation Phrases
<|reference_start|>LR Parsing of Permutation Phrases: This paper presents an efficient method for LR parsing of permutation phrases. In practical cases, the proposed algorithm constructs an LR(0) automaton that requires significantly fewer states to process a permutation phrase compared to the standard construction. For most real-world grammars, the number of states is typically reduced from $\Omega(n!)$ to $O(2^{n})$, resulting in a much more compact parsing table. The state reduction increases with longer permutation phrases and a higher number of permutation phrases within the right-hand side of a rule. We demonstrate the effectiveness of this method through its application to parsing a JSON document.<|reference_end|>
arxiv
@article{kostičová2024lr, title={LR Parsing of Permutation Phrases}, author={Jana Kosti\v{c}ov\'a}, journal={arXiv preprint arXiv:2410.06769}, year={2024}, archivePrefix={arXiv}, eprint={2410.06769}, primaryClass={cs.FL cs.PL} }
kostičová2024lr
arxiv-667471
2410.06770
BLAS-like Interface for Binary Tensor Contractions
<|reference_start|>BLAS-like Interface for Binary Tensor Contractions: In the world of linear algebra computation, a well-established standard exists called BLAS (Basic Linear Algebra Subprograms). This standard has been crucial for the development of software using linear algebra operations. Its benefits include portability with efficiency and mitigation of suboptimal re-implementations of linear algebra operations. Multilinear algebra is an extension of linear algebra in which the central objects are tensors, which are generalizations of vectors and matrices. Though tensor operations are becoming more common, they do not have a standard like BLAS. Such standardization would be beneficial and would decrease the currently visible replication of work, as many libraries today use their own implementations. This master thesis aims to work towards such a standard by discovering whether or not a BLAS-like interface is possible for the binary tensor contraction operation. To answer this, an interface was developed in the C programming language, together with an implementation, and tested to see whether it would be sufficient. The developed interface is xGETT(RANKA, EXTA, INCA, A, RANKB, EXTB, INCB, B, CONTS, CONTA, CONTB, PERM, INCC, C). Based on the implementation and tests, it has been deemed sufficient as a BLAS-like interface for binary tensor contractions and suitable for use in a BLAS-like standardization of tensor operations.<|reference_end|>
arxiv
@article{hörnblad2024blas-like, title={BLAS-like Interface for Binary Tensor Contractions}, author={Niklas H\"ornblad}, journal={arXiv preprint arXiv:2410.06770}, year={2024}, archivePrefix={arXiv}, eprint={2410.06770}, primaryClass={cs.MS} }
hörnblad2024blas-like
arxiv-667472
2410.06771
Safe and High-Performance Learning of Model Predicitve Control using Kernel-Based Interpolation
<|reference_start|>Safe and High-Performance Learning of Model Predicitve Control using Kernel-Based Interpolation: We present a method that allows efficient and safe approximation of model predictive controllers using kernel interpolation. Since the computational complexity of the approximating function scales linearly with the number of data points, we propose to use a scoring function that chooses the most promising data. To further reduce the complexity of the approximation, we restrict our considerations to the set of closed-loop reachable states. That is, the approximating function only has to be accurate within this set. This makes our method especially suited for systems where the set of initial conditions is small. In order to guarantee the safety and high performance of the designed approximate controller, we use reachability analysis based on Monte Carlo methods.<|reference_end|>
arxiv
@article{rose2024safe, title={Safe and High-Performance Learning of Model Predicitve Control using Kernel-Based Interpolation}, author={Alexander Rose, Philipp Schaub, Rolf Findeisen}, journal={arXiv preprint arXiv:2410.06771}, year={2024}, archivePrefix={arXiv}, eprint={2410.06771}, primaryClass={eess.SY cs.LG cs.SY} }
rose2024safe
arxiv-667473
2410.06773
A Hybrid Renewable-Battery-Electrolyzer Facility under the Single Imbalance Pricing Scheme
<|reference_start|>A Hybrid Renewable-Battery-Electrolyzer Facility under the Single Imbalance Pricing Scheme: European energy markets are decentralized and assign balance responsibility to each market player. This stresses the importance of imbalance management of renewable energy sources (RES), as the imbalance payments can strongly reduce their profitability. According to the EU Electricity Balancing Guideline, each European transmission system operator should use the single imbalance pricing method, which treats both deviation directions the same, regardless of whether a deviation helps the system or pushes it further from balance. This paper aims to investigate the behavior of a hybrid facility consisting of an uncontrollable RES, a battery and an electrolyzer under such a market setting. The formulated mathematical model of the hybrid facility seeks to maximize profit in the day-ahead energy market, while minimizing the imbalance costs. Uncertainty of the RES output is captured using stochastic scenarios, while the direction of the power system deviation, relevant for the imbalance pricing, is modeled using a newly proposed robust approach. Results of the case study indicate that the single imbalance pricing scheme might tempt flexible assets into intentional deviations should they anticipate favorable imbalance prices.<|reference_end|>
arxiv
@article{draskovic2024a, title={A Hybrid Renewable-Battery-Electrolyzer Facility under the Single Imbalance Pricing Scheme}, author={Petra Draskovic, Ivan Pavic, Karlo Sepetanc, Hrvoje Pandzic}, journal={arXiv preprint arXiv:2410.06773}, year={2024}, archivePrefix={arXiv}, eprint={2410.06773}, primaryClass={eess.SY cs.SY} }
draskovic2024a
arxiv-667474
2410.06775
Participatory Budget Allocation Method for Approval Ballots
<|reference_start|>Participatory Budget Allocation Method for Approval Ballots: In this paper, we study the problem of Participatory Budgeting (PB) with approval ballots, inspired by Multi-Winner Voting schemes. We present generalized preference aggregation methods for participatory budgeting, especially for finding seemingly fair budget allocations. To achieve this, we generalize two well-known methods from social choice theory, namely the Sequential Chamberlin-Courant rule and the Sequential Monroe rule. Further, we provide an experimental evaluation of the preference aggregation methods using an impartial culture method of preference generation and study the extent to which such polynomial-time algorithms satisfy one of the most popular notions of fairness, namely proportional representation.<|reference_end|>
arxiv
@article{page2024participatory, title={Participatory Budget Allocation Method for Approval Ballots}, author={Rutvik Page, Arnav Doifode, Jitendra Tembhurne, Aishwarya Sagar Anand Ukey}, journal={arXiv preprint arXiv:2410.06775}, year={2024}, archivePrefix={arXiv}, eprint={2410.06775}, primaryClass={cs.CE} }
page2024participatory
arxiv-667475
2410.06777
HERM: Benchmarking and Enhancing Multimodal LLMs for Human-Centric Understanding
<|reference_start|>HERM: Benchmarking and Enhancing Multimodal LLMs for Human-Centric Understanding: The significant advancements in visual understanding and instruction following from Multimodal Large Language Models (MLLMs) have opened up more possibilities for broader applications in diverse and universal human-centric scenarios. However, existing image-text data may not support the precise modality alignment and integration of multi-grained information, which is crucial for human-centric visual understanding. In this paper, we introduce HERM-Bench, a benchmark for evaluating the human-centric understanding capabilities of MLLMs. Our work reveals the limitations of existing MLLMs in understanding complex human-centric scenarios. To address these challenges, we present HERM-100K, a comprehensive dataset with multi-level human-centric annotations, aimed at enhancing MLLM training. Furthermore, we develop HERM-7B, an MLLM that leverages enhanced training data from HERM-100K. Evaluations on HERM-Bench demonstrate that HERM-7B significantly outperforms existing MLLMs across various human-centric dimensions, reflecting the current inadequacy of the data annotations used in MLLM training for human-centric visual understanding. This research emphasizes the importance of specialized datasets and benchmarks in advancing MLLMs' capabilities for human-centric understanding.<|reference_end|>
arxiv
@article{li2024herm:, title={HERM: Benchmarking and Enhancing Multimodal LLMs for Human-Centric Understanding}, author={Keliang Li, Zaifei Yang, Jiahe Zhao, Hongze Shen, Ruibing Hou, Hong Chang, Shiguang Shan and Xilin Chen}, journal={arXiv preprint arXiv:2410.06777}, year={2024}, archivePrefix={arXiv}, eprint={2410.06777}, primaryClass={cs.CV} }
li2024herm:
arxiv-667476
2410.06781
Transesophageal Echocardiography Generation using Anatomical Models
<|reference_start|>Transesophageal Echocardiography Generation using Anatomical Models: Through automation, deep learning (DL) can enhance the analysis of transesophageal echocardiography (TEE) images. However, DL methods require large amounts of high-quality data to produce accurate results, which is difficult to satisfy. Data augmentation is commonly used to tackle this issue. In this work, we develop a pipeline to generate synthetic TEE images and corresponding semantic labels. The proposed data generation pipeline expands on an existing pipeline that generates synthetic transthoracic echocardiography images by transforming slices from anatomical models into synthetic images. We also demonstrate that such images can improve DL network performance through a left-ventricle semantic segmentation task. For the pipeline's unpaired image-to-image (I2I) translation section, we explore two generative methods: CycleGAN and contrastive unpaired translation. Next, we evaluate the synthetic images quantitatively using the Fr\'echet Inception Distance (FID) score and qualitatively through a human perception quiz involving expert cardiologists and the average researcher. In this study, we achieve a Dice score improvement of up to 10% when we augment datasets with our synthetic images. Furthermore, we compare established methods of assessing unpaired I2I translation and observe a disagreement when evaluating the synthetic images. Finally, we assess which metric better predicts the generated data's efficacy when used for data augmentation.<|reference_end|>
arxiv
@article{oladokun2024transesophageal, title={Transesophageal Echocardiography Generation using Anatomical Models}, author={Emmanuel Oladokun, Musa Abdulkareem, Jurica \v{S}prem, and Vicente Grau}, journal={arXiv preprint arXiv:2410.06781}, year={2024}, doi={10.1007/978-3-031-58171-7_5}, archivePrefix={arXiv}, eprint={2410.06781}, primaryClass={eess.IV cs.CV} }
oladokun2024transesophageal
arxiv-667477
2410.06782
Mind Your Questions! Towards Backdoor Attacks on Text-to-Visualization Models
<|reference_start|>Mind Your Questions! Towards Backdoor Attacks on Text-to-Visualization Models: Text-to-visualization (text-to-vis) models have become valuable tools in the era of big data, enabling users to generate data visualizations and make informed decisions through natural language queries (NLQs). Despite their widespread application, the security vulnerabilities of these models have been largely overlooked. To address this gap, we propose VisPoison, a novel framework designed to systematically identify the vulnerabilities of current text-to-vis models. VisPoison introduces two types of triggers that activate three distinct backdoor attacks, potentially leading to data exposure, misleading visualizations, or denial-of-service (DoS) incidents. The framework features both proactive and passive attack mechanisms: proactive attacks leverage rare-word triggers to access confidential data, while passive attacks, triggered unintentionally by users, exploit a first-word trigger method, causing errors or DoS events in visualizations. Through extensive experiments on both trainable and in-context learning (ICL)-based text-to-vis models, \textit{VisPoison} achieves attack success rates of over 90\%, highlighting the security risks of current text-to-vis models. Additionally, we explore two types of defense mechanisms against these attacks, but the results show that existing countermeasures are insufficient, underscoring the pressing need for more robust security solutions in text-to-vis systems.<|reference_end|>
arxiv
@article{li2024mind, title={Mind Your Questions! Towards Backdoor Attacks on Text-to-Visualization Models}, author={Shuaimin Li, Yuanfeng Song, Xuanang Chen, Anni Peng, Zhuoyue Wan, Chen Jason Zhang and Raymond Chi-Wing Wong}, journal={arXiv preprint arXiv:2410.06782}, year={2024}, archivePrefix={arXiv}, eprint={2410.06782}, primaryClass={cs.CR} }
li2024mind
arxiv-667478
2410.06786
Deep End-to-End Survival Analysis with Temporal Consistency
<|reference_start|>Deep End-to-End Survival Analysis with Temporal Consistency: In this study, we present a novel Survival Analysis algorithm designed to efficiently handle large-scale longitudinal data. Our approach draws inspiration from Reinforcement Learning principles, particularly the Deep Q-Network paradigm, extending Temporal Learning concepts to Survival Regression. A central idea in our method is temporal consistency, a hypothesis that past and future outcomes in the data evolve smoothly over time. Our framework uniquely incorporates temporal consistency into large datasets by providing a stable training signal that captures long-term temporal relationships and ensures reliable updates. Additionally, the method supports arbitrarily complex architectures, enabling the modeling of intricate temporal dependencies, and allows for end-to-end training. Through numerous experiments we provide empirical evidence demonstrating our framework's ability to exploit temporal consistency across datasets of varying sizes. Moreover, our algorithm outperforms benchmarks on datasets with long sequences, demonstrating its ability to capture long-term patterns. Finally, ablation studies show how our method enhances training stability.<|reference_end|>
arxiv
@article{vieyra2024deep, title={Deep End-to-End Survival Analysis with Temporal Consistency}, author={Mariana Vargas Vieyra and Pascal Frossard}, journal={arXiv preprint arXiv:2410.06786}, year={2024}, archivePrefix={arXiv}, eprint={2410.06786}, primaryClass={cs.LG} }
vieyra2024deep
arxiv-667479
2410.06788
Convergence of spectral discretization for the flow of diffeomorphisms
<|reference_start|>Convergence of spectral discretization for the flow of diffeomorphisms: The Large Deformation Diffeomorphic Metric Mapping (LDDMM) or flow of diffeomorphism is a classical framework in the field of shape spaces and is widely applied in mathematical imaging and computational anatomy. Essentially, it equips a group of diffeomorphisms with a right-invariant Riemannian metric, which allows to compute (Riemannian) distances or interpolations between different deformations. The associated Euler--Lagrange equation of shortest interpolation paths is one of the standard examples of a partial differential equation that can be approached with Lie group theory (by interpreting it as a geodesic ordinary differential equation on the Lie group of diffeomorphisms). The particular group $\mathcal D^m$ of Sobolev diffeomorphisms is by now sufficiently understood to allow the analysis of geodesics and their numerical approximation. We prove convergence of a widely used Fourier-type space discretization of the geodesic equation. It is based on a new regularity estimate: We prove that geodesics in $\mathcal D^m$ preserve any higher order Sobolev regularity of their initial velocity.<|reference_end|>
arxiv
@article{wirth2024convergence, title={Convergence of spectral discretization for the flow of diffeomorphisms}, author={Benedikt Wirth}, journal={arXiv preprint arXiv:2410.06788}, year={2024}, archivePrefix={arXiv}, eprint={2410.06788}, primaryClass={math.NA cs.NA} }
wirth2024convergence
arxiv-667480
2410.06790
Discrete time model predictive control for humanoid walking with step adjustment
<|reference_start|>Discrete time model predictive control for humanoid walking with step adjustment: This paper presents a Discrete-Time Model Predictive Controller (MPC) for humanoid walking with online footstep adjustment. The proposed controller utilizes a hierarchical control approach. The high-level controller uses a low-dimensional Linear Inverted Pendulum Model (LIPM) to determine the desired foot placement and Center of Mass (CoM) motion, preventing falls while maintaining the desired velocity. A Task Space Controller (TSC) then tracks the desired motion obtained from the high-level controller, exploiting the whole-body dynamics of the humanoid. Our approach differs from existing MPC methods for walking pattern generation by not relying on a predefined foot-plan or a reference center of pressure (CoP) trajectory. The overall approach is tested in simulation on a torque-controlled humanoid robot. Results show that the proposed control approach generates stable walking and prevents falls under push disturbances.<|reference_end|>
arxiv
@article{joshi2024discrete, title={Discrete time model predictive control for humanoid walking with step adjustment}, author={Vishnu Joshi, Suraj Kumar, Nithin V and Shishir Kolathaya}, journal={arXiv preprint arXiv:2410.06790}, year={2024}, archivePrefix={arXiv}, eprint={2410.06790}, primaryClass={cs.RO} }
joshi2024discrete
arxiv-667481
2410.06792
An Internal Logic of Virtual Double Categories
<|reference_start|>An Internal Logic of Virtual Double Categories: We present a type theory called fibrational virtual double type theory (FVDblTT) designed specifically for formal category theory, which is a succinct reformulation of New and Licata's Virtual Equipment Type Theory (VETT). FVDblTT formalizes reasoning on isomorphisms that are commonly employed in category theory. Virtual double categories are one of the most successful frameworks for developing formal category theory, and FVDblTT has them as a theoretical foundation. We validate its worth as an internal language of virtual double categories by providing a syntax-semantics duality between virtual double categories and specifications in FVDblTT as a biadjunction.<|reference_end|>
arxiv
@article{nasu2024an, title={An Internal Logic of Virtual Double Categories}, author={Hayato Nasu}, journal={arXiv preprint arXiv:2410.06792}, year={2024}, archivePrefix={arXiv}, eprint={2410.06792}, primaryClass={math.CT cs.LO} }
nasu2024an
arxiv-667482
2410.06793
A Polynomial Time Algorithm for Steiner Tree when Terminals Avoid a $K_4$-Minor
<|reference_start|>A Polynomial Time Algorithm for Steiner Tree when Terminals Avoid a $K_4$-Minor: We study a special case of the Steiner Tree problem in which the input graph does not have a minor model of a complete graph on 4 vertices for which all branch sets contain a terminal. We show that this problem can be solved in $O(n^4)$ time, where $n$ denotes the number of vertices in the input graph. This generalizes a seminal paper by Erickson et al. [Math. Oper. Res., 1987] that solves Steiner tree on planar graphs with all terminals on one face in polynomial time.<|reference_end|>
arxiv
@article{groenland2024a, title={A Polynomial Time Algorithm for Steiner Tree when Terminals Avoid a $K_4$-Minor}, author={Carla Groenland and Jesper Nederlof and Tomohiro Koana}, journal={arXiv preprint arXiv:2410.06793}, year={2024}, archivePrefix={arXiv}, eprint={2410.06793}, primaryClass={cs.DS} }
groenland2024a
arxiv-667483
2410.06795
From Pixels to Tokens: Revisiting Object Hallucinations in Large Vision-Language Models
<|reference_start|>From Pixels to Tokens: Revisiting Object Hallucinations in Large Vision-Language Models: Hallucinations in large vision-language models (LVLMs) are a significant challenge, i.e., generating objects that are not present in the visual input, which impairs their reliability. Recent studies often attribute hallucinations to a lack of understanding of visual input, yet ignore a more fundamental issue: the model's inability to effectively extract or decouple visual features. In this paper, we revisit the hallucinations in LVLMs from an architectural perspective, investigating whether the primary cause lies in the visual encoder (feature extraction) or the modal alignment module (feature decoupling). Motivated by the findings of our preliminary investigation, we propose a novel tuning strategy, PATCH, to mitigate hallucinations in LVLMs. This plug-and-play method can be integrated into various LVLMs, utilizing adaptive virtual tokens to extract object features from bounding boxes, thereby addressing hallucinations caused by insufficient decoupling of visual features. PATCH achieves state-of-the-art performance on multiple multi-modal hallucination datasets. We hope this approach provides researchers with deeper insights into the underlying causes of hallucinations in LVLMs, fostering further advancements and innovation in this field.<|reference_end|>
arxiv
@article{shang2024from, title={From Pixels to Tokens: Revisiting Object Hallucinations in Large Vision-Language Models}, author={Yuying Shang, Xinyi Zeng, Yutao Zhu, Xiao Yang, Zhengwei Fang, Jingyuan Zhang, Jiawei Chen, Zinan Liu and Yu Tian}, journal={arXiv preprint arXiv:2410.06795}, year={2024}, archivePrefix={arXiv}, eprint={2410.06795}, primaryClass={cs.CL cs.CV} }
shang2024from
arxiv-667484
2410.06796
Diffuse or Confuse: A Diffusion Deepfake Speech Dataset
<|reference_start|>Diffuse or Confuse: A Diffusion Deepfake Speech Dataset: Advancements in artificial intelligence and machine learning have significantly improved synthetic speech generation. This paper explores diffusion models, a novel method for creating realistic synthetic speech. We create a diffusion dataset using available tools and pretrained models. Additionally, this study assesses the quality of diffusion-generated deepfakes versus non-diffusion ones and their potential threat to current deepfake detection systems. Findings indicate that the detection of diffusion-based deepfakes is generally comparable to non-diffusion deepfakes, with some variability based on detector architecture. Re-vocoding with diffusion vocoders shows minimal impact, and the overall speech quality is comparable to non-diffusion methods.<|reference_end|>
arxiv
@article{firc2024diffuse, title={Diffuse or Confuse: A Diffusion Deepfake Speech Dataset}, author={Anton Firc, Kamil Malinka, Petr Han\'a\v{c}ek}, journal={arXiv preprint arXiv:2410.06796}, year={2024}, archivePrefix={arXiv}, eprint={2410.06796}, primaryClass={cs.CR cs.AI cs.LG cs.SD} }
firc2024diffuse
arxiv-667485
2410.06797
Cooperate or Compete: Coalition Formation in Congestion Games
<|reference_start|>Cooperate or Compete: Coalition Formation in Congestion Games: This paper investigates the potential benefits of cooperation in scenarios where finitely many agents compete for shared resources, leading to congestion and thereby reduced rewards. Through appropriate coordination, the members of a cooperating group (a.k.a. coalition) can minimize the congestion losses caused by fellow members, while efficiently facing the competition from outsiders (the coalitions engage in a non-cooperative congestion game). The quest in this paper is to identify stable partitions of coalitions that are not challenged by any new coalition. In contrast to traditional cooperative games, the worth of a coalition in our game also depends upon the arrangement of the opponents. Every arrangement leads to a partition and a corresponding congestion game; the resultant Nash equilibria (NEs) dictate the `worth'. The analysis is further complicated due to the presence of multiple NEs for each such game.<|reference_end|>
arxiv
@article{sultana2024cooperate, title={Cooperate or Compete: Coalition Formation in Congestion Games}, author={Riya Sultana, Veeraruna Kavitha}, journal={arXiv preprint arXiv:2410.06797}, year={2024}, archivePrefix={arXiv}, eprint={2410.06797}, primaryClass={cs.GT} }
sultana2024cooperate
arxiv-667486
2410.06800
Efficient Weight-Space Laplace-Gaussian Filtering and Smoothing for Sequential Deep Learning
<|reference_start|>Efficient Weight-Space Laplace-Gaussian Filtering and Smoothing for Sequential Deep Learning: Efficiently learning a sequence of related tasks, such as in continual learning, poses a significant challenge for neural nets due to the delicate trade-off between catastrophic forgetting and loss of plasticity. We address this challenge with a grounded framework for sequentially learning related tasks based on Bayesian inference. Specifically, we treat the model's parameters as a nonlinear Gaussian state-space model and perform efficient inference using Gaussian filtering and smoothing. This general formalism subsumes existing continual learning approaches, while also offering a clearer conceptual understanding of its components. Leveraging Laplace approximations during filtering, we construct Gaussian posterior measures on the weight space of a neural network for each task. We use these posteriors as efficient regularizers by exploiting the structure of the generalized Gauss-Newton matrix (GGN) to construct diagonal plus low-rank approximations. The dynamics model allows targeted control of the learning process and the incorporation of domain-specific knowledge, such as modeling the type of shift between tasks. Additionally, using Bayesian approximate smoothing can enhance the performance of task-specific models without needing to re-access any data.<|reference_end|>
arxiv
@article{sliwa2024efficient, title={Efficient Weight-Space Laplace-Gaussian Filtering and Smoothing for Sequential Deep Learning}, author={Joanna Sliwa, Frank Schneider, Nathanael Bosch, Agustinus Kristiadi, Philipp Hennig}, journal={arXiv preprint arXiv:2410.06800}, year={2024}, archivePrefix={arXiv}, eprint={2410.06800}, primaryClass={cs.LG stat.ML} }
sliwa2024efficient
arxiv-667487
2410.06802
Seg2Act: Global Context-aware Action Generation for Document Logical Structuring
<|reference_start|>Seg2Act: Global Context-aware Action Generation for Document Logical Structuring: Document logical structuring aims to extract the underlying hierarchical structure of documents, which is crucial for document intelligence. Traditional approaches often fall short in handling the complexity and the variability of lengthy documents. To address these issues, we introduce Seg2Act, an end-to-end, generation-based method for document logical structuring, revisiting logical structure extraction as an action generation task. Specifically, given the text segments of a document, Seg2Act iteratively generates the action sequence via a global context-aware generative model, and simultaneously updates its global context and current logical structure based on the generated actions. Experiments on ChCatExt and HierDoc datasets demonstrate the superior performance of Seg2Act in both supervised and transfer learning settings.<|reference_end|>
arxiv
@article{li2024seg2act:, title={Seg2Act: Global Context-aware Action Generation for Document Logical Structuring}, author={Zichao Li, Shaojie He, Meng Liao, Xuanang Chen, Yaojie Lu, Hongyu Lin, Yanxiong Lu, Xianpei Han, Le Sun}, journal={arXiv preprint arXiv:2410.06802}, year={2024}, archivePrefix={arXiv}, eprint={2410.06802}, primaryClass={cs.CL} }
li2024seg2act:
arxiv-667488
2410.06804
The Clear Sky Corridor: Insights Towards Aerosol Formation in Exoplanets Using An AI-based Survey of Exoplanet Atmospheres
<|reference_start|>The Clear Sky Corridor: Insights Towards Aerosol Formation in Exoplanets Using An AI-based Survey of Exoplanet Atmospheres: Producing optimized and accurate transmission spectra of exoplanets from telescope data has traditionally been a manual and labor-intensive procedure. Here we present the results of the first attempt to improve and standardize this procedure using artificial intelligence (AI) based processing of light curves and spectroscopic data from transiting exoplanets observed with the Hubble Space Telescope's (HST) Wide Field Camera 3 (WFC3) instrument. We implement an AI-based parameter optimizer that autonomously operates the Eureka pipeline to produce homogeneous transmission spectra of publicly available HST WFC3 datasets, spanning exoplanet types from hot Jupiters to sub-Neptunes. Surveying 43 exoplanets with temperatures between 280 and 2580 Kelvin, we confirm modeled relationships between the amplitude of the water band at 1.4um in hot Jupiters and their equilibrium temperatures. We also identify a similar, novel trend in Neptune/sub-Neptune atmospheres, but shifted to cooler temperatures. Excitingly, a planet mass versus equilibrium temperature diagram reveals a "Clear Sky Corridor," where planets between 700 and 1700 Kelvin (depending on the mass) show stronger 1.4um H2O band measurements. This novel trend points to metallicity as a potentially important driver of aerosol formation. As we unveil and include these new discoveries into our understanding of aerosol formation, we enter a thrilling future for the study of exoplanet atmospheres. With HST sculpting this foundational understanding for aerosol formation in various exoplanet types, ranging from Jupiters to sub-Neptunes, we present a compelling platform for the James Webb Space Telescope (JWST) to discover similar atmospheric trends for more planets across a broader wavelength range.<|reference_end|>
arxiv
@article{ashtari2024the, title={The Clear Sky Corridor: Insights Towards Aerosol Formation in Exoplanets Using An AI-based Survey of Exoplanet Atmospheres}, author={Reza Ashtari, Kevin B. Stevenson, David Sing, Mercedes Lopez-Morales, Munazza K. Alam, Nikolay K. Nikolov, Thomas M. Evans-Soma}, journal={arXiv preprint arXiv:2410.06804}, year={2024}, archivePrefix={arXiv}, eprint={2410.06804}, primaryClass={astro-ph.EP astro-ph.IM cs.LG} }
ashtari2024the
arxiv-667489
2410.06806
QuadMamba: Learning Quadtree-based Selective Scan for Visual State Space Model
<|reference_start|>QuadMamba: Learning Quadtree-based Selective Scan for Visual State Space Model: Recent advancements in State Space Models, notably Mamba, have demonstrated superior performance over the dominant Transformer models, particularly in reducing the computational complexity from quadratic to linear. Yet, difficulties in adapting Mamba from language to vision tasks arise due to the distinct characteristics of visual data, such as the spatial locality and adjacency within images and large variations in information granularity across visual tokens. Existing vision Mamba approaches either flatten tokens into sequences in a raster scan fashion, which breaks the local adjacency of images, or manually partition tokens into windows, which limits their long-range modeling and generalization capabilities. To address these limitations, we present a new vision Mamba model, coined QuadMamba, that effectively captures local dependencies of varying granularities via quadtree-based image partition and scan. Concretely, our lightweight quadtree-based scan module learns to preserve the 2D locality of spatial regions within learned window quadrants. The module estimates the locality score of each token from their features, before adaptively partitioning tokens into window quadrants. An omnidirectional window shifting scheme is also introduced to capture more intact and informative features across different local regions. To make the discretized quadtree partition end-to-end trainable, we further devise a sequence masking strategy based on Gumbel-Softmax and its straight-through gradient estimator. Extensive experiments demonstrate that QuadMamba achieves state-of-the-art performance in various vision tasks, including image classification, object detection, instance segmentation, and semantic segmentation. The code is in https://github.com/VISION-SJTU/QuadMamba.<|reference_end|>
arxiv
@article{xie2024quadmamba:, title={QuadMamba: Learning Quadtree-based Selective Scan for Visual State Space Model}, author={Fei Xie, Weijia Zhang, Zhongdao Wang, Chao Ma}, journal={arXiv preprint arXiv:2410.06806}, year={2024}, archivePrefix={arXiv}, eprint={2410.06806}, primaryClass={cs.CV} }
xie2024quadmamba:
arxiv-667490
2410.06808
Near-Optimal-Time Quantum Algorithms for Approximate Pattern Matching
<|reference_start|>Near-Optimal-Time Quantum Algorithms for Approximate Pattern Matching: Approximate Pattern Matching is among the most fundamental string-processing tasks. Given a text $T$ of length $n$, a pattern $P$ of length $m$, and a threshold $k$, the task is to identify the fragments of $T$ that are at distance at most $k$ to $P$. We consider the two most common distances: Hamming distance (the number of character substitutions) in Pattern Matching with Mismatches and edit distance (the minimum number of character insertions, deletions, and substitutions) in Pattern Matching with Edits. We revisit the complexity of these two problems in the quantum setting. Our recent work [STOC'24] shows that $\hat{O}(\sqrt{nk})$ quantum queries are sufficient to solve (the decision version of) Pattern Matching with Edits. However, the quantum time complexity of the underlying solution does not provide any improvement over classical computation. On the other hand, the state-of-the-art algorithm for Pattern Matching with Mismatches [Jin and Nogler; SODA'23] achieves query complexity $\hat{O}(\sqrt{nk^{3/2}})$ and time complexity $\tilde{O}(\sqrt{nk^2})$, falling short of an unconditional lower bound of $\Omega(\sqrt{nk})$ queries. In this work, we present quantum algorithms with a time complexity of $\tilde{O}(\sqrt{nk}+\sqrt{n/m}\cdot k^2)$ for Pattern Matching with Mismatches and $\hat{O}(\sqrt{nk}+\sqrt{n/m}\cdot k^{3.5})$ for Pattern Matching with Edits; both solutions use $\hat{O}(\sqrt{nk})$ queries. The running times are near-optimal for $k\ll m^{1/3}$ and $k\ll m^{1/6}$, respectively, and offer advantage over classical algorithms for $k\ll (mn)^{1/4}$ and $k\ll (mn)^{1/7}$, respectively. Our solutions can also report the starting positions of approximate occurrences of $P$ in $T$ (represented as collections of arithmetic progressions); in this case, the unconditional lower bound and the complexities of our algorithms increase by a $\Theta(\sqrt{n/m})$ factor.<|reference_end|>
arxiv
@article{kociumaka2024near-optimal-time, title={Near-Optimal-Time Quantum Algorithms for Approximate Pattern Matching}, author={Tomasz Kociumaka, Jakob Nogler and Philip Wellnitz}, journal={arXiv preprint arXiv:2410.06808}, year={2024}, archivePrefix={arXiv}, eprint={2410.06808}, primaryClass={cs.DS quant-ph} }
kociumaka2024near-optimal-time
arxiv-667491
2410.06809
Root Defence Strategies: Ensuring Safety of LLM at the Decoding Level
<|reference_start|>Root Defence Strategies: Ensuring Safety of LLM at the Decoding Level: Large language models (LLMs) have demonstrated immense utility across various industries. However, as LLMs advance, the risk of harmful outputs increases due to incorrect or malicious instruction prompts. While current methods effectively address jailbreak risks, they share common limitations: 1) Judging harmful responses at the prefill level fails to utilize the model's decoding outputs, leading to relatively lower effectiveness and robustness. 2) Rejecting potentially harmful responses based on a single evaluation can significantly impair the model's helpfulness. This paper examines the LLMs' capability to recognize harmful outputs, revealing and quantifying their proficiency in assessing the danger of previous tokens. Motivated by pilot experiment results, we design a robust defense mechanism at the decoding level. Our novel decoder-oriented, step-by-step defense architecture corrects harmful queries directly rather than rejecting them outright. We introduce speculative decoding to boost secure decoding speed, enhancing usability and facilitating deployment. Extensive experiments demonstrate that our approach improves model security without compromising reasoning speed. Notably, our method leverages the model's ability to discern hazardous information, maintaining its helpfulness compared to existing methods.<|reference_end|>
arxiv
@article{zeng2024root, title={Root Defence Strategies: Ensuring Safety of LLM at the Decoding Level}, author={Xinyi Zeng, Yuying Shang, Yutao Zhu, Jiawei Chen, Yu Tian}, journal={arXiv preprint arXiv:2410.06809}, year={2024}, archivePrefix={arXiv}, eprint={2410.06809}, primaryClass={cs.CL cs.CR} }
zeng2024root
arxiv-667492
2410.06811
Rethinking the Evaluation of Visible and Infrared Image Fusion
<|reference_start|>Rethinking the Evaluation of Visible and Infrared Image Fusion: Visible and Infrared Image Fusion (VIF) has garnered significant interest across a wide range of high-level vision tasks, such as object detection and semantic segmentation. However, the evaluation of VIF methods remains challenging due to the absence of ground truth. This paper proposes a Segmentation-oriented Evaluation Approach (SEA) to assess VIF methods by incorporating the semantic segmentation task and leveraging segmentation labels available in the latest VIF datasets. Specifically, SEA utilizes universal segmentation models, capable of handling diverse images and classes, to predict segmentation outputs from fused images and compare these outputs with segmentation labels. Our evaluation of recent VIF methods using SEA reveals that their performance is comparable or even inferior to using visible images only, despite nearly half of the infrared images demonstrating better performance than visible images. Further analysis indicates that the two metrics most correlated to our SEA are the gradient-based fusion metric $Q_{\text{ABF}}$ and the visual information fidelity metric $Q_{\text{VIFF}}$ in conventional VIF evaluation metrics, which can serve as proxies when segmentation labels are unavailable. We hope that our evaluation will guide the development of novel and practical VIF methods. The code has been released at \url{https://github.com/Yixuan-2002/SEA/}.<|reference_end|>
arxiv
@article{guan2024rethinking, title={Rethinking the Evaluation of Visible and Infrared Image Fusion}, author={Dayan Guan, Yixuan Wu, Tianzhu Liu, Alex C. Kot, Yanfeng Gu}, journal={arXiv preprint arXiv:2410.06811}, year={2024}, archivePrefix={arXiv}, eprint={2410.06811}, primaryClass={cs.CV} }
guan2024rethinking
arxiv-667493
2410.06814
Defending Membership Inference Attacks via Privacy-aware Sparsity Tuning
<|reference_start|>Defending Membership Inference Attacks via Privacy-aware Sparsity Tuning: Over-parameterized models are typically vulnerable to membership inference attacks, which aim to determine whether a specific sample is included in the training of a given model. Previous weight regularizations (e.g., L1 regularization) typically impose uniform penalties on all parameters, leading to a suboptimal tradeoff between model utility and privacy. In this work, we first show that only a small fraction of parameters substantially impact the privacy risk. In light of this, we propose Privacy-aware Sparsity Tuning (PAST), a simple fix to L1 regularization that employs adaptive penalties for different parameters. Our key idea behind PAST is to promote sparsity in parameters that significantly contribute to privacy leakage. In particular, we construct the adaptive weight for each parameter based on its privacy sensitivity, i.e., the gradient of the loss gap with respect to the parameter. Using PAST, the network shrinks the loss gap between members and non-members, leading to strong resistance to privacy attacks. Extensive experiments demonstrate the superiority of PAST, achieving a state-of-the-art balance in the privacy-utility trade-off.<|reference_end|>
arxiv
@article{hu2024defending, title={Defending Membership Inference Attacks via Privacy-aware Sparsity Tuning}, author={Qiang Hu, Hengxiang Zhang, Hongxin Wei}, journal={arXiv preprint arXiv:2410.06814}, year={2024}, archivePrefix={arXiv}, eprint={2410.06814}, primaryClass={cs.LG cs.AI} }
hu2024defending
arxiv-667494
2410.06815
Shap-Select: Lightweight Feature Selection Using SHAP Values and Regression
<|reference_start|>Shap-Select: Lightweight Feature Selection Using SHAP Values and Regression: Feature selection is an essential process in machine learning, especially when dealing with high-dimensional datasets. It helps reduce the complexity of machine learning models, improve performance, mitigate overfitting, and decrease computation time. This paper presents a novel feature selection framework, shap-select. The framework conducts a linear or logistic regression of the target on the Shapley values of the features, on the validation set, and uses the signs and significance levels of the regression coefficients to implement an efficient heuristic for feature selection in tabular regression and classification tasks. We evaluate shap-select on the Kaggle credit card fraud dataset, demonstrating its effectiveness compared to established methods such as Recursive Feature Elimination (RFE), HISEL (a mutual information-based feature selection method), Boruta and a simpler Shapley value-based method. Our findings show that shap-select combines interpretability, computational efficiency, and performance, offering a robust solution for feature selection.<|reference_end|>
arxiv
@article{kraev2024shap-select:, title={Shap-Select: Lightweight Feature Selection Using SHAP Values and Regression}, author={Egor Kraev, Baran Koseoglu, Luca Traverso, Mohammed Topiwalla}, journal={arXiv preprint arXiv:2410.06815}, year={2024}, archivePrefix={arXiv}, eprint={2410.06815}, primaryClass={cs.LG} }
kraev2024shap-select:
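
The heuristic in the record above can be sketched in a few lines. Note that the released shap-select package exposes its own, more general API; the function below only illustrates the described idea for a regression task, and the "keep positively signed, significant coefficients" rule is one plausible reading of the abstract's sign-and-significance heuristic.

# Illustration of the described heuristic for a regression task; the
# shap-select package itself has a different, more general interface.
import numpy as np
import shap
import statsmodels.api as sm

def shap_select_sketch(model, X_val, y_val, alpha=0.05):
    """Keep features whose validation-set Shapley values predict the
    target with a significant, positively signed regression coefficient."""
    shap_vals = shap.TreeExplainer(model).shap_values(X_val)  # (n, d)
    design = sm.add_constant(np.asarray(shap_vals))
    fit = sm.OLS(np.asarray(y_val, dtype=float), design).fit()
    coefs, pvals = fit.params[1:], fit.pvalues[1:]            # drop intercept
    return [j for j in range(len(coefs)) if coefs[j] > 0 and pvals[j] < alpha]

# model would be any fitted tree ensemble (e.g., a gradient-boosting
# regressor); the returned list holds the indices of the selected features.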
arxiv-667495
2410.06816
Multi-Neuron Unleashes Expressivity of ReLU Networks Under Convex Relaxation
<|reference_start|>Multi-Neuron Unleashes Expressivity of ReLU Networks Under Convex Relaxation: Neural network certification has established itself as a crucial tool for ensuring the robustness of neural networks. Certification methods typically rely on convex relaxations of the feasible output set to provide sound bounds. However, complete certification requires exact bounds, which strongly limits the expressivity of ReLU networks: even for the simple ``$\max$'' function in $\mathbb{R}^2$, there does not exist a ReLU network that expresses this function and can be exactly bounded by single-neuron relaxation methods. This raises the question of whether there exists a convex relaxation that can provide exact bounds for general continuous piecewise linear functions in $\mathbb{R}^n$. In this work, we answer this question affirmatively by showing that (layer-wise) multi-neuron relaxation provides complete certification for general ReLU networks. Based on this novel result, we show that the expressivity of ReLU networks is no longer limited under multi-neuron relaxation. To the best of our knowledge, this is the first positive result on the completeness of convex relaxations, shedding light on the practice of certified robustness.<|reference_end|>
arxiv
@article{mao2024multi-neuron, title={Multi-Neuron Unleashes Expressivity of ReLU Networks Under Convex Relaxation}, author={Yuhao Mao, Yani Zhang, Martin Vechev}, journal={arXiv preprint arXiv:2410.06816}, year={2024}, archivePrefix={arXiv}, eprint={2410.06816}, primaryClass={cs.LG cs.AI} }
mao2024multi-neuron
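
The impossibility claim in the record above can be made tangible with the identity max(x1, x2) = x1 + relu(x2 - x1): a one-ReLU network that expresses max exactly, yet whose bounds under a coarse single-neuron-style relaxation (plain interval propagation, used here only for illustration) are strictly loose.

# Worked example: an exact ReLU expression of max, bounded by interval
# propagation. Over the box [0,1]^2 the true output range is [0, 1],
# but the propagated interval is [0, 2] -- the relaxation is loose.

def relu_interval(lo, hi):
    return max(0.0, lo), max(0.0, hi)

l1, u1 = 0.0, 1.0   # bounds on x1
l2, u2 = 0.0, 1.0   # bounds on x2

z_lo, z_hi = l2 - u1, u2 - l1            # z = x2 - x1      -> [-1, 1]
r_lo, r_hi = relu_interval(z_lo, z_hi)   # relu(z)          -> [0, 1]
y_lo, y_hi = l1 + r_lo, u1 + r_hi        # y = x1 + relu(z) -> [0, 2]

exact = (max(l1, l2), max(u1, u2))       # true range of max: (0, 1)
print(f"interval bound: [{y_lo}, {y_hi}], exact range: {exact}")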
arxiv-667496
2410.06818
An Improved Approach for Cardiac MRI Segmentation based on 3D UNet Combined with Papillary Muscle Exclusion
<|reference_start|>An Improved Approach for Cardiac MRI Segmentation based on 3D UNet Combined with Papillary Muscle Exclusion: Left ventricular ejection fraction (LVEF) is the most important clinical parameter of cardiovascular function. The accuracy in estimating this parameter is highly dependent upon the precise segmentation of the left ventricle (LV) structure at the end diastole and systole phases. Therefore, it is crucial to develop robust algorithms for the precise segmentation of the heart structure during different phases. Methodology: In this work, an improved 3D UNet model is introduced to segment the myocardium and LV, while excluding papillary muscles, as per the recommendation of the Society for Cardiovascular Magnetic Resonance. For the practical testing of the proposed framework, a total of 8,400 cardiac MRI images were collected and analysed from the military hospital in Tunis (HMPIT), as well as the popular ACDC public dataset. As performance metrics, we used the Dice coefficient and the F1 score for validation/testing of the LV and myocardium segmentation. Results: The data was split into 70%, 10%, and 20% for training, validation, and testing, respectively. It is worth noting that the proposed segmentation model was tested across three axis views: basal, medio-basal, and apical, at two different cardiac phases: end diastole and end systole. The experimental results showed a Dice index of 0.965 and 0.945, and an F1 score of 0.801 and 0.799, at the end diastolic and systolic phases, respectively. Additionally, clinical evaluation outcomes revealed a significant difference in the LVEF and other clinical parameters when the papillary muscles were included or excluded.<|reference_end|>
arxiv
@article{benameur2024an, title={An Improved Approach for Cardiac MRI Segmentation based on 3D UNet Combined with Papillary Muscle Exclusion}, author={Narjes Benameur, Ramzi Mahmoudi, Mohamed Deriche, Amira Fayouka, Imene Masmoudi, Nessrine Zoghlami}, journal={arXiv preprint arXiv:2410.06818}, year={2024}, archivePrefix={arXiv}, eprint={2410.06818}, primaryClass={cs.CV cs.AI cs.LG eess.IV} }
benameur2024an
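
For reference, the Dice coefficient reported in the record above is computed per structure; a minimal sketch follows, with an assumed label encoding (the paper's 3D UNet and its papillary-muscle exclusion step are not reproduced here).

# Per-structure Dice for 3D label volumes. The label encoding below
# (0 = background, 1 = LV cavity, 2 = myocardium) is an assumption.
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary volumes of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def per_structure_dice(pred_labels, gt_labels):
    return {"LV": dice(pred_labels == 1, gt_labels == 1),
            "myocardium": dice(pred_labels == 2, gt_labels == 2)}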
arxiv-667497
2410.06819
Dynamic Neural Potential Field: Online Trajectory Optimization in Presence of Moving Obstacles
<|reference_start|>Dynamic Neural Potential Field: Online Trajectory Optimization in Presence of Moving Obstacles: We address the task of local trajectory planning for a mobile robot in the presence of static and dynamic obstacles. The local trajectory is obtained as a numerical solution of a Model Predictive Control (MPC) problem. Collision avoidance can be provided by adding the repulsive potential of the obstacles to the cost function of the MPC. We develop an approach in which the repulsive potential is estimated by a neural model. We propose and explore three possible strategies for handling dynamic obstacles. First, the environment with dynamic obstacles is considered as a sequence of static environments. Second, the neural model predicts the whole sequence of repulsive potentials at once. Third, the neural model predicts future repulsive potentials step by step in autoregressive mode. We implement these strategies and compare them with CIAO* and MPPI using the BenchMR framework. The first two strategies showed higher performance than CIAO* and MPPI while preserving safety constraints. The third strategy was somewhat slower, but it still satisfied the time limits. We deploy our approach on a Husky UGV mobile platform, which moves through office corridors under the proposed MPC local trajectory planner. The code and trained models are available at \url{https://github.com/CognitiveAISystems/Dynamic-Neural-Potential-Field}.<|reference_end|>
arxiv
@article{staroverov2024dynamic, title={Dynamic Neural Potential Field: Online Trajectory Optimization in Presence of Moving Obstacles}, author={Aleksey Staroverov, Muhammad Alhaddad, Aditya Narendra, Konstantin Mironov and Aleksandr Panov}, journal={arXiv preprint arXiv:2410.06819}, year={2024}, archivePrefix={arXiv}, eprint={2410.06819}, primaryClass={cs.RO cs.AI} }
staroverov2024dynamic
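
The strategies in the record above reduce to how the learned repulsive term enters the MPC cost; a schematic PyTorch sketch follows, where the network interfaces are assumptions rather than the authors' models (the repository linked above contains the real implementation).

# Sketch of an MPC stage cost with a learned repulsive potential
# (the network interfaces below are assumed, not the authors' exact models).
import torch

def mpc_cost(states, goal, obstacle_feats, potential_net, w_goal=1.0, w_obs=1.0):
    """states: (H, state_dim) candidate trajectory over horizon H.
    obstacle_feats: per-step obstacle encoding, shape (H, feat_dim)."""
    goal_cost = ((states[:, :2] - goal) ** 2).sum()
    # Strategies 1 and 2 from the abstract: the net scores every horizon
    # step given the (predicted) obstacle features for that step.
    repulsion = potential_net(states, obstacle_feats).sum()
    return w_goal * goal_cost + w_obs * repulsion

# Strategy 3 (autoregressive): roll the potential forward step by step.
def autoregressive_repulsion(states, obs0, potential_step_net):
    obs, total = obs0, states.new_zeros(())
    for s in states:                           # one horizon step at a time
        phi, obs = potential_step_net(s, obs)  # potential and next obstacle state
        total = total + phi
    return total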
arxiv-667498
2410.06820
Learning a Neural Solver for Parametric PDE to Enhance Physics-Informed Methods
<|reference_start|>Learning a Neural Solver for Parametric PDE to Enhance Physics-Informed Methods: Physics-informed deep learning often faces optimization challenges due to the complexity of solving partial differential equations (PDEs), which involve exploring large solution spaces, require numerous iterations, and can lead to unstable training. These challenges arise particularly from the ill-conditioning of the optimization problem, caused by the differential terms in the loss function. To address these issues, we propose learning a solver, i.e., solving PDEs using a physics-informed iterative algorithm trained on data. Our method learns to condition a gradient descent algorithm that automatically adapts to each PDE instance, significantly accelerating and stabilizing the optimization process and enabling faster convergence of physics-aware models. Furthermore, while traditional physics-informed methods solve for a single PDE instance, our approach addresses parametric PDEs. Specifically, our method integrates the physical loss gradient with the PDE parameters to solve over a distribution of PDE parameters, including coefficients, initial conditions, or boundary conditions. We demonstrate the effectiveness of our method through empirical experiments on multiple datasets, comparing training and test-time optimization performance.<|reference_end|>
arxiv
@article{boudec2024learning, title={Learning a Neural Solver for Parametric PDE to Enhance Physics-Informed Methods}, author={Lise Le Boudec, Emmanuel de Bezenac, Louis Serrano, Ramon Daniel Regueiro-Espino, Yuan Yin, Patrick Gallinari}, journal={arXiv preprint arXiv:2410.06820}, year={2024}, archivePrefix={arXiv}, eprint={2410.06820}, primaryClass={cs.LG} }
boudec2024learning
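
A schematic of the learned-solver loop described above: a network maps the physics-loss gradient, conditioned on the PDE parameters, to an update step. The interfaces are assumptions, and the sketch shows test-time optimization only, not the training of solver_net.

# Sketch of a learned, physics-informed update rule (assumed interfaces;
# the paper conditions the step on the physics-loss gradient and the PDE
# parameters, such as coefficients, initial or boundary conditions).
import torch

def solve(u0, pde_params, physics_loss, solver_net, n_steps=100):
    """Iteratively refine a solution estimate u with a learned step.

    physics_loss: differentiable PDE residual of the estimate u.
    solver_net:   maps (gradient, pde_params) to an update direction.
    """
    u = u0.clone().requires_grad_(True)
    for _ in range(n_steps):
        loss = physics_loss(u, pde_params)
        (grad,) = torch.autograd.grad(loss, u)
        with torch.no_grad():
            u = u - solver_net(grad, pde_params)  # learned preconditioned step
        u.requires_grad_(True)
    return u.detach()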
arxiv-667499
2410.06822
Unary counting quantifiers do not increase the expressive power of Presburger arithmetic: an alternative shorter proof
<|reference_start|>Unary counting quantifiers do not increase the expressive power of Presburger arithmetic: an alternative shorter proof: This work was presented on June 5-7, 2017, at the conference "Journ\'{e}es sur les Arithm\'{e}tiques Faibles -- Weak Arithmetics Days" held in Saint-Petersburg, for which no proceedings were ever published. It was not a new result but showed that a different approach is possible. The paper presented at ICALP 2024 addresses, among other problems, the complexity issues that were ignored in my 2017 talk.<|reference_end|>
arxiv
@article{choffrut2024unary, title={Unary counting quantifiers do not increase the expressive power of Presburger arithmetic: an alternative shorter proof}, author={Christian Choffrut}, journal={arXiv preprint arXiv:2410.06822}, year={2024}, archivePrefix={arXiv}, eprint={2410.06822}, primaryClass={cs.LO} }
choffrut2024unary
arxiv-667500
2410.06823
Stabilization of Predator-Prey Age-Structured Hyperbolic PDE when Harvesting both Species is Inevitable
<|reference_start|>Stabilization of Predator-Prey Age-Structured Hyperbolic PDE when Harvesting both Species is Inevitable: Populations not only interact over time but also age over time. It is therefore common to model them as age-structured PDEs, where age is the space variable. Since the models also involve integrals over age, both in the birth process and in the interaction among species, they are in fact integro-partial differential equations (IPDEs) with positive states. To regulate the population densities to desired profiles, harvesting is used as the input. But non-discriminating harvesting, in which repressing one species inevitably represses the other as well, the positivity restriction on the input (no insertion of population), and the multiplicative nature of harvesting make control challenging even for ODE versions of such dynamics, let alone for their IPDE versions on an infinite-dimensional nonnegative state space. We introduce a design for a benchmark version of such a problem: a two-population predator-prey setup. The model is equivalent to two coupled ordinary differential equations (ODEs), actuated by harvesting which must not drop below zero, and strongly disturbed by two autonomous but exponentially stable integral delay equations (IDEs). We develop two control designs. With a modified Volterra-like control Lyapunov function, we design a simple feedback law which employs possibly negative harvesting for global stabilization of the ODE model, while guaranteeing regional regulation with positive harvesting. With a more sophisticated, restrained controller we achieve regulation for the ODE model globally, with positive harvesting. For the full IPDE model, with the IDE dynamics acting as large disturbances, we provide explicit estimates of the regions of attraction for both the simple and saturated feedback laws. The paper charts a new pathway for control designs for infinite-dimensional multi-species dynamics and for nonlinear positive systems with positive controls.<|reference_end|>
arxiv
@article{veil2024stabilization, title={Stabilization of Predator-Prey Age-Structured Hyperbolic PDE when Harvesting both Species is Inevitable}, author={Carina Veil, Miroslav Krsti\'c, Iasson Karafyllis, Mamadou Diagne, Oliver Sawodny}, journal={arXiv preprint arXiv:2410.06823}, year={2024}, archivePrefix={arXiv}, eprint={2410.06823}, primaryClass={eess.SY cs.SY} }
veil2024stabilization
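
As a toy illustration of the setting in the record above (multiplicative, nonnegative harvesting acting on both species at once), the following simulates the undisturbed ODE core with a clipped proportional feedback. The paper's control-Lyapunov designs and the IDE disturbance terms are deliberately omitted, so this only demonstrates the input constraint, not the authors' guarantees; all coefficients are assumed.

# Illustrative simulation only: Lotka-Volterra ODE core with a clipped,
# nonnegative harvesting input acting on both species simultaneously.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 0.5, 1.0, 0.5      # assumed interaction coefficients
x_star, y_star = c / d, a / b        # open-loop equilibrium

def feedback(x, y, k=0.8):
    """Proportional harvesting, saturated at zero (no population insertion)."""
    return max(0.0, k * (x - x_star) + k * (y - y_star))

def dynamics(t, z):
    x, y = z
    u = feedback(x, y)               # the same harvesting hits both species
    return [x * (a - b * y) - u * x,
            y * (-c + d * x) - u * y]

sol = solve_ivp(dynamics, (0.0, 40.0), [3.0, 1.5], max_step=0.01)
print("final state:", sol.y[:, -1], "target:", (x_star, y_star))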