Column schema (name: type, length range):
corpus_id: string, length 7-12
paper_id: string, length 9-16
title: string, length 1-261
abstract: string, length 70-4.02k
source: string, 1 class
bibtex: string, length 208-20.9k
citation_key: string, length 6-100
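Each data row follows the seven-column schema above, with abstracts wrapped in `<|reference_start|>`/`<|reference_end|>` sentinels and the `citation_key` column duplicating the key inside the `bibtex` entry. A minimal consistency-check sketch in Python (the `row` dict is a hand-copied, truncated sample from the rows below; how rows are actually serialized and loaded is not specified in this dump, so the loading step is omitted):

```python
import re

REF_START, REF_END = "<|reference_start|>", "<|reference_end|>"

def clean_abstract(abstract: str) -> str:
    """Strip the sentinel markers wrapping each stored abstract."""
    return abstract.removeprefix(REF_START).removesuffix(REF_END)

def bibtex_key(bibtex: str) -> str:
    """Extract the citation key from an @article{key, ...} entry."""
    m = re.match(r"@\w+\{([^,]+),", bibtex.strip())
    if m is None:
        raise ValueError("not a BibTeX entry")
    return m.group(1)

# Hand-copied (and truncated) sample row from the dump below.
row = {
    "corpus_id": "arxiv-664001",
    "paper_id": "2410.00498",
    "citation_key": "ando'2024exponential",
    "bibtex": "@article{ando'2024exponential, title={...}, year={2024}}",
    "abstract": REF_START + "Exponential Runge-Kutta methods..." + REF_END,
}
# The stored citation_key should match the key embedded in the bibtex field.
assert bibtex_key(row["bibtex"]) == row["citation_key"]
```

Run over every row, the same check also surfaces duplicate keys (e.g. `herkersdorf2024optimized` appears for two different papers in this dump).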
arxiv-664001
2410.00498
Exponential Runge-Kutta methods for delay equations in the sun-star abstract framework
<|reference_start|>Exponential Runge-Kutta methods for delay equations in the sun-star abstract framework: Exponential Runge-Kutta methods for semilinear ordinary differential equations can be extended to abstract differential equations, defined on Banach spaces. Thanks to the sun-star theory, both delay differential equations and renewal equations can be recast as abstract differential equations, which motivates the present work. The result is a general approach that allows us to define the methods explicitly and analyze their convergence properties in a unifying way.<|reference_end|>
arxiv
@article{ando'2024exponential, title={Exponential Runge-Kutta methods for delay equations in the sun-star abstract framework}, author={Alessia Ando' and Rossana Vermiglio}, journal={arXiv preprint arXiv:2410.00498}, year={2024}, archivePrefix={arXiv}, eprint={2410.00498}, primaryClass={math.NA cs.NA} }
ando'2024exponential
arxiv-664002
2410.00500
Optimized Excitation Signal Tailored to Pertinent Dynamic Process Characteristics
<|reference_start|>Optimized Excitation Signal Tailored to Pertinent Dynamic Process Characteristics: The effectiveness of data-driven techniques significantly relies on the input signal used to generate the training data. Nevertheless, there is a notable gap in research when it comes to designing excitation signals for identifying nonlinear dynamic systems, likely because of the challenges involved. Based on current knowledge, it is crucial for excitation signals to effectively capture the nonlinearity across the entire operational area and to gather insights into the area-specific dynamic process characteristics. The Incremental Dynamic Space-Filling Design (IDS-FID) strategy designs excitation signals to achieve a space-filling distribution across the input space of a nonlinear approximator used in external dynamics modeling, gathering information throughout its operational area. Simultaneously, the approach enables a heightened focus on either the system's steady-state or transient responses during information acquisition by altering the excitation signal's dynamics, facilitating targeted insights into dynamic process characteristics.<|reference_end|>
arxiv
@article{herkersdorf2024optimized, title={Optimized Excitation Signal Tailored to Pertinent Dynamic Process Characteristics}, author={Max Heinz Herkersdorf and Tarek Koesters and Oliver Nelles}, journal={arXiv preprint arXiv:2410.00500}, year={2024}, archivePrefix={arXiv}, eprint={2410.00500}, primaryClass={eess.SY cs.SY} }
herkersdorf2024optimized
arxiv-664003
2410.00502
Multi-Target Cross-Lingual Summarization: a novel task and a language-neutral approach
<|reference_start|>Multi-Target Cross-Lingual Summarization: a novel task and a language-neutral approach: Cross-lingual summarization aims to bridge language barriers by summarizing documents in different languages. However, ensuring semantic coherence across languages is an overlooked challenge and can be critical in several contexts. To fill this gap, we introduce multi-target cross-lingual summarization as the task of summarizing a document into multiple target languages while ensuring that the produced summaries are semantically similar. We propose a principled re-ranking approach to this problem and a multi-criteria evaluation protocol to assess semantic coherence across target languages, marking a first step that will hopefully stimulate further research on this problem.<|reference_end|>
arxiv
@article{pernes2024multi-target, title={Multi-Target Cross-Lingual Summarization: a novel task and a language-neutral approach}, author={Diogo Pernes and Gon\c{c}alo M. Correia and Afonso Mendes}, journal={arXiv preprint arXiv:2410.00502}, year={2024}, archivePrefix={arXiv}, eprint={2410.00502}, primaryClass={cs.CL cs.AI cs.LG} }
pernes2024multi-target
arxiv-664004
2410.00503
Drone Stereo Vision for Radiata Pine Branch Detection and Distance Measurement: Utilizing Deep Learning and YOLO Integration
<|reference_start|>Drone Stereo Vision for Radiata Pine Branch Detection and Distance Measurement: Utilizing Deep Learning and YOLO Integration: This research focuses on the development of a drone equipped with pruning tools and a stereo vision camera to accurately detect and measure the spatial positions of tree branches. YOLO is employed for branch segmentation, while two depth estimation approaches, monocular and stereo, are investigated. In comparison to SGBM, deep learning techniques produce more refined and accurate depth maps. In the absence of ground-truth data, a fine-tuning process using deep neural networks is applied to approximate optimal depth values. This methodology facilitates precise branch detection and distance measurement, addressing critical challenges in the automation of pruning operations. The results demonstrate notable advancements in both accuracy and efficiency, underscoring the potential of deep learning to drive innovation and enhance automation in the agricultural sector.<|reference_end|>
arxiv
@article{lin2024drone, title={Drone Stereo Vision for Radiata Pine Branch Detection and Distance Measurement: Utilizing Deep Learning and YOLO Integration}, author={Yida Lin and Bing Xue and Mengjie Zhang and Sam Schofield and Richard Green}, journal={arXiv preprint arXiv:2410.00503}, year={2024}, archivePrefix={arXiv}, eprint={2410.00503}, primaryClass={cs.CV cs.AI} }
lin2024drone
arxiv-664005
2410.00504
Optimized Excitation Signal Design Employing Receding Horizon Control
<|reference_start|>Optimized Excitation Signal Design Employing Receding Horizon Control: A novel excitation signal design strategy based on a receding horizon control inspired optimization is presented. The proposed method is shown to effectively generate space-filling designs within the input space of a nonlinear dynamic process, thereby enabling sophisticated acquisition of information in previously unexplored operational areas. Additionally, the strategy can intensify the exploitation of specific operational areas during information gathering, offering flexibility in meeting application-specific requirements.<|reference_end|>
arxiv
@article{herkersdorf2024optimized, title={Optimized Excitation Signal Design Employing Receding Horizon Control}, author={Max Heinz Herkersdorf and Oliver Nelles}, journal={arXiv preprint arXiv:2410.00504}, year={2024}, archivePrefix={arXiv}, eprint={2410.00504}, primaryClass={eess.SY cs.SY} }
herkersdorf2024optimized
arxiv-664006
2410.00506
A five-bar mechanism to assist finger flexion-extension movement: system implementation
<|reference_start|>A five-bar mechanism to assist finger flexion-extension movement: system implementation: The lack of specialized personnel and assistive technology to assist in rehabilitation therapies is one of the challenges facing the health sector today, and it is projected to increase. For researchers and engineers, it represents an opportunity to innovate and develop devices that improve and optimize rehabilitation services for the benefit of society. Among the different types of injuries, hand injuries occur most frequently. These injuries require a rehabilitation process in order for the hand to regain its functionality. This article presents the fabrication and instrumentation of an end-effector prototype, based on a five-bar configuration, for finger rehabilitation that executes a natural flexion-extension movement. The dimensions were obtained through gradient-method optimization and evaluated in Matlab. Experimental tests were carried out to demonstrate the prototype's functionality and the effectiveness of a five-bar mechanism acting in a vertical plane, where gravity influences the mechanism's performance. Position control using fifth-order polynomials with via points was implemented in the joint space. The design of the end-effector was also evaluated through a theoretical comparison, calculated as a function of a real flexion-extension trajectory of the fingers and the angle of rotation obtained through an IMU. As a result, controlling the two degrees of freedom of the mechanism at several points of the trajectory assures the end-effector trajectory and therefore the fingers' range of motion, which aids full patient recovery.<|reference_end|>
arxiv
@article{zapatero-gutiérrez2024a, title={A five-bar mechanism to assist finger flexion-extension movement: system implementation}, author={Araceli Zapatero-Guti\'errez and Eduardo Castillo-Casta\~neda and Med Amine Laribi (COBRA)}, journal={Robotica, 2022, pp.1-19}, year={2024}, doi={10.1017/S0263574722001217}, archivePrefix={arXiv}, eprint={2410.00506}, primaryClass={cs.RO cs.HC physics.med-ph} }
zapatero-gutiérrez2024a
arxiv-664007
2410.00508
FlipGuard: Defending Preference Alignment against Update Regression with Constrained Optimization
<|reference_start|>FlipGuard: Defending Preference Alignment against Update Regression with Constrained Optimization: Recent breakthroughs in preference alignment have significantly improved Large Language Models' ability to generate texts that align with human preferences and values. However, current alignment metrics typically emphasize the post-hoc overall improvement, while overlooking a critical aspect: regression, which refers to the backsliding on previously correctly-handled data after updates. This potential pitfall may arise from excessive fine-tuning on already well-aligned data, which subsequently leads to over-alignment and degeneration. To address this challenge, we propose FlipGuard, a constrained optimization approach to detect and mitigate update regression with focal attention. Specifically, FlipGuard identifies performance degradation using a customized reward characterization and strategically enforces a constraint to encourage conditional congruence with the pre-aligned model during training. Comprehensive experiments demonstrate that FlipGuard effectively alleviates update regression while demonstrating excellent overall performance, with the added benefit of knowledge preservation while aligning preferences.<|reference_end|>
arxiv
@article{zhu2024flipguard:, title={FlipGuard: Defending Preference Alignment against Update Regression with Constrained Optimization}, author={Mingye Zhu and Yi Liu and Quan Wang and Junbo Guo and Zhendong Mao}, journal={arXiv preprint arXiv:2410.00508}, year={2024}, archivePrefix={arXiv}, eprint={2410.00508}, primaryClass={cs.CL cs.AI} }
zhu2024flipguard:
arxiv-664008
2410.00509
Learning Personalized Treatment Decisions in Precision Medicine: Disentangling Treatment Assignment Bias in Counterfactual Outcome Prediction and Biomarker Identification
<|reference_start|>Learning Personalized Treatment Decisions in Precision Medicine: Disentangling Treatment Assignment Bias in Counterfactual Outcome Prediction and Biomarker Identification: Precision medicine offers the potential to tailor treatment decisions to individual patients, yet it faces significant challenges due to the complex biases in clinical observational data and the high-dimensional nature of biological data. This study models various types of treatment assignment biases using mutual information and investigates their impact on machine learning (ML) models for counterfactual prediction and biomarker identification. Unlike traditional counterfactual benchmarks that rely on fixed treatment policies, our work focuses on modeling different characteristics of the underlying observational treatment policy in distinct clinical settings. We validate our approach through experiments on toy datasets, semi-synthetic tumor cancer genome atlas (TCGA) data, and real-world biological outcomes from drug and CRISPR screens. By incorporating empirical biological mechanisms, we create a more realistic benchmark that reflects the complexities of real-world data. Our analysis reveals that different biases lead to varying model performances, with some biases, especially those unrelated to outcome mechanisms, having minimal effect on prediction accuracy. This highlights the crucial need to account for specific biases in clinical observational data in counterfactual ML model development, ultimately enhancing the personalization of treatment decisions in precision medicine.<|reference_end|>
arxiv
@article{vollenweider2024learning, title={Learning Personalized Treatment Decisions in Precision Medicine: Disentangling Treatment Assignment Bias in Counterfactual Outcome Prediction and Biomarker Identification}, author={Michael Vollenweider and Manuel Sch\"urch and Chiara Rohrer and Gabriele Gut and Michael Krauthammer and Andreas Wicki}, journal={arXiv preprint arXiv:2410.00509}, year={2024}, archivePrefix={arXiv}, eprint={2410.00509}, primaryClass={cs.LG cs.IT math.IT q-bio.QM} }
vollenweider2024learning
arxiv-664009
2410.00510
Advancing RVFL networks: Robust classification with the HawkEye loss function
<|reference_start|>Advancing RVFL networks: Robust classification with the HawkEye loss function: Random vector functional link (RVFL), a variant of single-layer feedforward neural network (SLFN), has garnered significant attention due to its lower computational cost and robustness to overfitting. Despite its advantages, the RVFL network's reliance on the square error loss function makes it highly sensitive to outliers and noise, leading to degraded model performance in real-world applications. To remedy it, we propose the incorporation of the HawkEye loss (H-loss) function into the RVFL framework. The H-loss function features nice mathematical properties, including smoothness and boundedness, while simultaneously incorporating an insensitive zone. Each characteristic brings its own advantages: 1) Boundedness limits the impact of extreme errors, enhancing robustness against outliers; 2) Smoothness facilitates the use of gradient-based optimization algorithms, ensuring stable and efficient convergence; and 3) The insensitive zone mitigates the effect of minor discrepancies and noise. Leveraging the H-loss function, we embed it into the RVFL framework and develop a novel robust RVFL model termed H-RVFL. Notably, this work addresses a significant gap, as no bounded loss function has been incorporated into RVFL to date. The non-convex optimization of the proposed H-RVFL is effectively addressed by the Nesterov accelerated gradient (NAG) algorithm, whose computational complexity is also discussed. The proposed H-RVFL model's effectiveness is validated through extensive experiments on $40$ benchmark datasets from UCI and KEEL repositories, with and without label noise. The results highlight significant improvements in robustness and efficiency, establishing the H-RVFL model as a powerful tool for applications in noisy and outlier-prone environments.<|reference_end|>
arxiv
@article{akhtar2024advancing, title={Advancing RVFL networks: Robust classification with the HawkEye loss function}, author={Mushir Akhtar and Ritik Mishra and M. Tanveer and Mohd. Arshad}, journal={31st International Conference on Neural Information Processing (ICONIP), 2024}, year={2024}, archivePrefix={arXiv}, eprint={2410.00510}, primaryClass={cs.LG} }
akhtar2024advancing
arxiv-664010
2410.00511
Pre-training with Synthetic Patterns for Audio
<|reference_start|>Pre-training with Synthetic Patterns for Audio: In this paper, we propose to pre-train audio encoders using synthetic patterns instead of real audio data. Our proposed framework consists of two key elements. The first one is Masked Autoencoder (MAE), a self-supervised learning framework that learns from reconstructing data from randomly masked counterparts. MAEs tend to focus on low-level information such as visual patterns and regularities within data. Therefore, it is unimportant what is portrayed in the input, whether it be images, audio mel-spectrograms, or even synthetic patterns. This leads to the second key element, which is synthetic data. Synthetic data, unlike real audio, is free from privacy and licensing infringement issues. By combining MAEs and synthetic patterns, our framework enables the model to learn generalized feature representations without real data, while addressing the issues related to real audio. To evaluate the efficacy of our framework, we conduct extensive experiments across a total of 13 audio tasks and 17 synthetic datasets. The experiments provide insights into which types of synthetic patterns are effective for audio. Our results demonstrate that our framework achieves performance comparable to models pre-trained on AudioSet-2M and partially outperforms image-based pre-training methods.<|reference_end|>
arxiv
@article{ishikawa2024pre-training, title={Pre-training with Synthetic Patterns for Audio}, author={Yuchi Ishikawa and Tatsuya Komatsu and Yoshimitsu Aoki}, journal={arXiv preprint arXiv:2410.00511}, year={2024}, archivePrefix={arXiv}, eprint={2410.00511}, primaryClass={eess.AS cs.AI cs.CV} }
ishikawa2024pre-training
arxiv-664011
2410.00513
Cross-lingual Back-Parsing: Utterance Synthesis from Meaning Representation for Zero-Resource Semantic Parsing
<|reference_start|>Cross-lingual Back-Parsing: Utterance Synthesis from Meaning Representation for Zero-Resource Semantic Parsing: Recent efforts have aimed to utilize multilingual pretrained language models (mPLMs) to extend semantic parsing (SP) across multiple languages without requiring extensive annotations. However, achieving zero-shot cross-lingual transfer for SP remains challenging, leading to a performance gap between source and target languages. In this study, we propose Cross-Lingual Back-Parsing (CBP), a novel data augmentation methodology designed to enhance cross-lingual transfer for SP. Leveraging the representation geometry of the mPLMs, CBP synthesizes target language utterances from source meaning representations. Our methodology effectively performs cross-lingual data augmentation in challenging zero-resource settings, by utilizing only labeled data in the source language and monolingual corpora. Extensive experiments on two cross-language SP benchmarks (Mschema2QA and Xspider) demonstrate that CBP brings substantial gains in the target language. Further analysis of the synthesized utterances shows that our method successfully generates target language utterances with high slot value alignment rates while preserving semantic integrity. Our codes and data are publicly available at https://github.com/deokhk/CBP.<|reference_end|>
arxiv
@article{kang2024cross-lingual, title={Cross-lingual Back-Parsing: Utterance Synthesis from Meaning Representation for Zero-Resource Semantic Parsing}, author={Deokhyung Kang and Seonjeong Hwang and Yunsu Kim and Gary Geunbae Lee}, journal={arXiv preprint arXiv:2410.00513}, year={2024}, archivePrefix={arXiv}, eprint={2410.00513}, primaryClass={cs.CL cs.AI} }
kang2024cross-lingual
arxiv-664012
2410.00516
Enhancing Sentinel-2 Image Resolution: Evaluating Advanced Techniques based on Convolutional and Generative Neural Networks
<|reference_start|>Enhancing Sentinel-2 Image Resolution: Evaluating Advanced Techniques based on Convolutional and Generative Neural Networks: This paper investigates the enhancement of spatial resolution in Sentinel-2 bands that contain spectral information using advanced super-resolution techniques by a factor of 2. State-of-the-art CNN models are compared with enhanced GAN approaches in terms of quality and feasibility. Therefore, a representative dataset comprising Sentinel-2 low-resolution images and corresponding high-resolution aerial orthophotos is required. A literature study revealed no suitable dataset for the land type of interest (forests), so an adequate dataset had to be generated, accounting for accurate alignment and image source optimization. The results reveal that while CNN-based approaches produce satisfactory outcomes, they tend to yield blurry images. In contrast, GAN-based models not only provide clear and detailed images, but also demonstrate superior performance in terms of quantitative assessment, underscoring the potential of the framework beyond the specific land type investigated.<|reference_end|>
arxiv
@article{kramer2024enhancing, title={Enhancing Sentinel-2 Image Resolution: Evaluating Advanced Techniques based on Convolutional and Generative Neural Networks}, author={Patrick Kramer and Alexander Steinhardt and Barbara Pedretscher}, journal={arXiv preprint arXiv:2410.00516}, year={2024}, archivePrefix={arXiv}, eprint={2410.00516}, primaryClass={eess.IV cs.AI cs.CV} }
kramer2024enhancing
arxiv-664013
2410.00517
Human-Robot Collaborative Minimum Time Search through Sub-priors in Ant Colony Optimization
<|reference_start|>Human-Robot Collaborative Minimum Time Search through Sub-priors in Ant Colony Optimization: Human-Robot Collaboration (HRC) has become a highly promising field owing to the latest breakthroughs in Artificial Intelligence (AI) and Human-Robot Interaction (HRI), among other factors. This growth increases the need to design multi-agent algorithms that can also accommodate human preferences. This paper presents an extension of the Ant Colony Optimization (ACO) meta-heuristic to solve the Minimum Time Search (MTS) task, in the case where humans and robots perform an object searching task together. The proposed model consists of two main blocks. The first one is a convolutional neural network (CNN) that provides the prior probabilities about where an object may be from a segmented image. The second one is the Sub-prior MTS-ACO algorithm (SP-MTS-ACO), which takes as inputs the prior probabilities and the particular search preferences of the agents in different sub-priors to generate search plans for all agents. The model has been tested in real experiments for the joint search of an object through a Vizanti web-based visualization on a tablet computer. The designed interface allows communication between a human and our humanoid robot named IVO. The obtained results show an improvement in users' perception of the search without loss of efficiency.<|reference_end|>
arxiv
@article{viyuela2024human-robot, title={Human-Robot Collaborative Minimum Time Search through Sub-priors in Ant Colony Optimization}, author={Oscar Gil Viyuela and Alberto Sanfeliu}, journal={arXiv preprint arXiv:2410.00517}, year={2024}, doi={10.1109/LRA.2024.3471451}, archivePrefix={arXiv}, eprint={2410.00517}, primaryClass={cs.RO cs.AI} }
viyuela2024human-robot
arxiv-664014
2410.00518
Analysing the Influence of Reorder Strategies for Cartesian Genetic Programming
<|reference_start|>Analysing the Influence of Reorder Strategies for Cartesian Genetic Programming: Cartesian Genetic Programming (CGP) suffers from a specific limitation: Positional bias, a phenomenon in which mostly genes at the start of the genome contribute to a program output, while genes at the end rarely do. This can lead to an overall worse performance of CGP. One solution to overcome positional bias is to introduce reordering methods, which shuffle the current genotype without changing its corresponding phenotype. There are currently two different reorder operators that extend the classic CGP formula and improve its fitness value. In this work, we discuss possible shortcomings of these two existing operators. Afterwards, we introduce three novel operators which reorder the genotype of a graph defined by CGP. We show empirically on four Boolean and four symbolic regression benchmarks that, when CGP is used with a reorder method, the number of iterations until a solution is found decreases and/or the fitness value improves. However, there is no consistently best-performing reorder operator. Furthermore, we analyse their behaviour by investigating their convergence plots and show that all operators behave the same in terms of convergence type.<|reference_end|>
arxiv
@article{cui2024analysing, title={Analysing the Influence of Reorder Strategies for Cartesian Genetic Programming}, author={Henning Cui (1) and Andreas Margraf (2) and J\"org H\"ahner (1) ((1) University of Augsburg, (2) Fraunhofer IGCV)}, journal={arXiv preprint arXiv:2410.00518}, year={2024}, archivePrefix={arXiv}, eprint={2410.00518}, primaryClass={cs.NE} }
cui2024analysing
arxiv-664015
2410.00519
Exploring the Learning Capabilities of Language Models using LEVERWORLDS
<|reference_start|>Exploring the Learning Capabilities of Language Models using LEVERWORLDS: Learning a model of a stochastic setting often involves learning both general structure rules and specific properties of the instance. This paper investigates the interplay between learning the general and the specific in various learning methods, with emphasis on sample efficiency. We design a framework called {\sc LeverWorlds}, which allows the generation of simple physics-inspired worlds that follow a similar generative process with different distributions, and their instances can be expressed in natural language. These worlds allow for controlled experiments to assess the sample complexity of different learning methods. We experiment with classic learning algorithms as well as Transformer language models, both with fine-tuning and In-Context Learning (ICL). Our general finding is that (1) Transformers generally succeed in the task; but (2) they are considerably less sample efficient than classic methods that make stronger assumptions about the structure, such as Maximum Likelihood Estimation and Logistic Regression. This finding is in tension with the recent tendency to use Transformers as general-purpose estimators. We propose an approach that leverages the ICL capabilities of contemporary language models to apply simple algorithms for this type of data. Our experiments show that models currently struggle with the task but show promising potential.<|reference_end|>
arxiv
@article{wagner2024exploring, title={Exploring the Learning Capabilities of Language Models using LEVERWORLDS}, author={Eitan Wagner and Amir Feder and Omri Abend}, journal={arXiv preprint arXiv:2410.00519}, year={2024}, archivePrefix={arXiv}, eprint={2410.00519}, primaryClass={cs.CL cs.AI} }
wagner2024exploring
arxiv-664016
2410.00521
Design and Identification of Keypoint Patches in Unstructured Environments
<|reference_start|>Design and Identification of Keypoint Patches in Unstructured Environments: Reliable perception of targets is crucial for the stable operation of autonomous robots. A widely preferred method is keypoint identification in an image, as it allows direct mapping from raw images to 2D coordinates, facilitating integration with other algorithms like localization and path planning. In this study, we closely examine the design and identification of keypoint patches in cluttered environments, where factors such as blur and shadows can hinder detection. We propose four simple yet distinct designs that account for various scales, rotations, and camera projections using a limited number of pixels. Additionally, we customize the SuperPoint network to ensure robust detection under various types of image degradation. The effectiveness of our approach is demonstrated through real-world video tests, highlighting its potential for vision-based autonomous systems.<|reference_end|>
arxiv
@article{park2024design, title={Design and Identification of Keypoint Patches in Unstructured Environments}, author={Taewook Park and Seunghwan Kim and Hyondong Oh}, journal={arXiv preprint arXiv:2410.00521}, year={2024}, archivePrefix={arXiv}, eprint={2410.00521}, primaryClass={cs.RO cs.CV} }
park2024design
arxiv-664017
2410.00522
Annotation Guidelines for Corpus Novelties: Part 2 -- Alias Resolution Version 1.0
<|reference_start|>Annotation Guidelines for Corpus Novelties: Part 2 -- Alias Resolution Version 1.0: The Novelties corpus is a collection of novels (and parts of novels) annotated for Alias Resolution, among other tasks. This document describes the guidelines applied during the annotation process. It contains the instructions used by the annotators, as well as a number of examples retrieved from the annotated novels, illustrating how canonical names should be defined and which names should be considered as referring to the same entity.<|reference_end|>
arxiv
@article{amalvy2024annotation, title={Annotation Guidelines for Corpus Novelties: Part 2 -- Alias Resolution Version 1.0}, author={Arthur Amalvy (LIA) and Vincent Labatut (LIA)}, journal={arXiv preprint arXiv:2410.00522}, year={2024}, archivePrefix={arXiv}, eprint={2410.00522}, primaryClass={cs.CL} }
amalvy2024annotation
arxiv-664018
2410.00523
Building a simple oscillator based Ising machine for research and education
<|reference_start|>Building a simple oscillator based Ising machine for research and education: Oscillator based Ising machines are non-von-Neumann machines ideally suited for solving combinatorial problems otherwise intractable on classic stored-program digital computers due to their run-time complexity. Possible future applications are manifold, ranging from quantum simulations to protein folding, and are of high academic and commercial interest as well. Described in the following is a very simple such machine aimed at educational and research applications.<|reference_end|>
arxiv
@article{ulmann2024building, title={Building a simple oscillator based Ising machine for research and education}, author={Bernd Ulmann and Shrish Roy}, journal={arXiv preprint arXiv:2410.00523}, year={2024}, archivePrefix={arXiv}, eprint={2410.00523}, primaryClass={cs.ET} }
ulmann2024building
arxiv-664019
2410.00524
Deep Model Interpretation with Limited Data : A Coreset-based Approach
<|reference_start|>Deep Model Interpretation with Limited Data : A Coreset-based Approach: Model Interpretation aims at the extraction of insights from the internals of a trained model. A common approach to address this task is the characterization of relevant features internally encoded in the model that are critical for its proper operation. Despite recent progress of these methods, they come with the weakness of being computationally expensive due to the dense evaluation of datasets that they require. As a consequence, research on the design of these methods has focused on smaller data subsets, which may lead to reduced insights. To address these computational costs, we propose a coreset-based interpretation framework that utilizes coreset selection methods to sample a representative subset of the large dataset for the interpretation task. Towards this goal, we propose a similarity-based evaluation protocol to assess the robustness of model interpretation methods towards the amount of data they take as input. Experiments considering several interpretation methods, DNN models, and coreset selection methods show the effectiveness of the proposed framework.<|reference_end|>
arxiv
@article{behzadi-khormouji2024deep, title={Deep Model Interpretation with Limited Data : A Coreset-based Approach}, author={Hamed Behzadi-Khormouji and Jos\'e Oramas}, journal={arXiv preprint arXiv:2410.00524}, year={2024}, archivePrefix={arXiv}, eprint={2410.00524}, primaryClass={cs.LG cs.CV} }
behzadi-khormouji2024deep
arxiv-664020
2410.00525
Improving sampling by modifying the effective diffusion
<|reference_start|>Improving sampling by modifying the effective diffusion: This is a preliminary version. Markov chain Monte Carlo samplers based on discretizations of (overdamped) Langevin dynamics are commonly used in the Bayesian inference and computational statistical physics literature to estimate high-dimensional integrals. One can introduce a non-constant diffusion matrix to precondition these dynamics, and recent works have optimized it in order to reach stationarity sooner by overcoming entropic and energy barriers. However, the methodology introduced to compute these optimal diffusions is not suited to high-dimensional settings, as it relies on costly optimization procedures. In this work, we propose a class of diffusion matrices, based on one-dimensional collective variables (CVs), which helps the dynamics explore the latent space defined by the CV. The form of the diffusion matrix is such that the effective dynamics, which are approximations of the processes as observed on the latent space, are governed by the optimal effective diffusion coefficient in a homogenized limit, which possesses an analytical expression. We describe how this class of diffusion matrices can be constructed and learned during the simulation. We provide implementations of the Metropolis--Adjusted Langevin Algorithm and Riemann Manifold (Generalized) Hamiltonian Monte Carlo algorithms, and discuss numerical optimizations in the case where the CV depends only on a small number of components of the position of the system. We illustrate the efficiency gains of using this class of diffusion matrices by computing mean transition durations between two configurations of a dimer in a solvent.<|reference_end|>
arxiv
@article{lelièvre2024improving, title={Improving sampling by modifying the effective diffusion}, author={Tony Leli\`evre, R\'egis Santet, Gabriel Stoltz}, journal={arXiv preprint arXiv:2410.00525}, year={2024}, archivePrefix={arXiv}, eprint={2410.00525}, primaryClass={math.NA cs.NA} }
lelièvre2024improving
arxiv-664021
2410.00526
Benchmarking Large Language Models for Conversational Question Answering in Multi-instructional Documents
<|reference_start|>Benchmarking Large Language Models for Conversational Question Answering in Multi-instructional Documents: Instructional documents are rich sources of knowledge for completing various tasks, yet their unique challenges in conversational question answering (CQA) have not been thoroughly explored. Existing benchmarks have primarily focused on basic factual question-answering from single narrative documents, making them inadequate for assessing a model's ability to comprehend complex real-world instructional documents and provide accurate step-by-step guidance in daily life. To bridge this gap, we present InsCoQA, a novel benchmark tailored for evaluating large language models (LLMs) in the context of CQA with instructional documents. Sourced from extensive, encyclopedia-style instructional content, InsCoQA assesses models on their ability to retrieve, interpret, and accurately summarize procedural guidance from multiple documents, reflecting the intricate and multi-faceted nature of real-world instructional tasks. Additionally, to comprehensively assess state-of-the-art LLMs on the InsCoQA benchmark, we propose InsEval, an LLM-assisted evaluator that measures the integrity and accuracy of generated responses and procedural instructions.<|reference_end|>
arxiv
@article{wu2024benchmarking, title={Benchmarking Large Language Models for Conversational Question Answering in Multi-instructional Documents}, author={Shiwei Wu, Chen Zhang, Yan Gao, Qimeng Wang, Tong Xu, Yao Hu, Enhong Chen}, journal={arXiv preprint arXiv:2410.00526}, year={2024}, archivePrefix={arXiv}, eprint={2410.00526}, primaryClass={cs.CL} }
wu2024benchmarking
arxiv-664022
2410.00529
Pointwise order of generalized Hofstadter functions $G, H$ and beyond
<|reference_start|>Pointwise order of generalized Hofstadter functions $G, H$ and beyond: Hofstadter's $G$ function is recursively defined via $G(0)=0$ and then $G(n)=n-G(G(n-1))$. Following Hofstadter, a family $(F_k)$ of similar functions is obtained by varying the number $k$ of nested recursive calls in this equation. We establish here that this family is ordered pointwise: for all $k$ and $n$, $F_k(n) \le F_{k+1}(n)$. For achieving this, a detour is made via infinite morphic words generalizing the Fibonacci word. Various properties of these words are proved, concerning the lengths of substituted prefixes of these words and the counts of some specific letters in these prefixes. We also relate the limits of $\frac{1}{n}F_k(n)$ to the frequencies of letters in the considered words.<|reference_end|>
arxiv
@article{letouzey2024pointwise, title={Pointwise order of generalized Hofstadter functions $G, H$ and beyond}, author={Pierre Letouzey (IRIF), Shuo Li, Wolfgang Steiner (LIAFA)}, journal={arXiv preprint arXiv:2410.00529}, year={2024}, archivePrefix={arXiv}, eprint={2410.00529}, primaryClass={cs.DM cs.FL math.CO math.NT} }
letouzey2024pointwise
arxiv-664023
2410.00531
TPI-LLM: Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices
<|reference_start|>TPI-LLM: Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices: Large model inference is shifting from cloud to edge due to concerns about the privacy of user interaction data. However, edge devices often struggle with limited computing power, memory, and bandwidth, requiring collaboration across multiple devices to run and speed up LLM inference. Pipeline parallelism, the mainstream solution, is inefficient for single-user scenarios, while tensor parallelism struggles with frequent communications. In this paper, we argue that tensor parallelism can be more effective than pipeline on low-resource devices, and present a compute- and memory-efficient tensor parallel inference system, named TPI-LLM, to serve 70B-scale models. TPI-LLM keeps sensitive raw data local in the users' devices and introduces a sliding window memory scheduler to dynamically manage layer weights during inference, with disk I/O latency overlapped with the computation and communication. This allows larger models to run smoothly on memory-limited devices. We analyze the communication bottleneck and find that link latency, not bandwidth, emerges as the main issue, so a star-based allreduce algorithm is implemented. Through extensive experiments on both emulated and real testbeds, TPI-LLM demonstrated over 80% less time-to-first-token and token latency compared to Accelerate, and over 90% compared to Transformers and Galaxy, while cutting the peak memory footprint of Llama 2-70B by 90%, requiring only 3.1 GB of memory for 70B-scale models.<|reference_end|>
arxiv
@article{li2024tpi-llm:, title={TPI-LLM: Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices}, author={Zonghang Li and Wenjiao Feng and Mohsen Guizani and Hongfang Yu}, journal={arXiv preprint arXiv:2410.00531}, year={2024}, archivePrefix={arXiv}, eprint={2410.00531}, primaryClass={cs.DC cs.AI} }
li2024tpi-llm:
arxiv-664024
2410.00535
Optimal Causal Representations and the Causal Information Bottleneck
<|reference_start|>Optimal Causal Representations and the Causal Information Bottleneck: To effectively study complex causal systems, it is often useful to construct representations that simplify parts of the system by discarding irrelevant details while preserving key features. The Information Bottleneck (IB) method is a widely used approach in representation learning that compresses random variables while retaining information about a target variable. Traditional methods like IB are purely statistical and ignore underlying causal structures, making them ill-suited for causal tasks. We propose the Causal Information Bottleneck (CIB), a causal extension of the IB, which compresses a set of chosen variables while maintaining causal control over a target variable. This method produces representations which are causally interpretable, and which can be used when reasoning about interventions. We present experimental results demonstrating that the learned representations accurately capture causality as intended.<|reference_end|>
arxiv
@article{simoes2024optimal, title={Optimal Causal Representations and the Causal Information Bottleneck}, author={Francisco N. F. Q. Simoes, Mehdi Dastani, Thijs van Ommen}, journal={arXiv preprint arXiv:2410.00535}, year={2024}, archivePrefix={arXiv}, eprint={2410.00535}, primaryClass={cs.LG cs.AI cs.IT math.IT stat.ML} }
simoes2024optimal
arxiv-664025
2410.00536
Arges: Spatio-Temporal Transformer for Ulcerative Colitis Severity Assessment in Endoscopy Videos
<|reference_start|>Arges: Spatio-Temporal Transformer for Ulcerative Colitis Severity Assessment in Endoscopy Videos: Accurate assessment of disease severity from endoscopy videos in ulcerative colitis (UC) is crucial for evaluating drug efficacy in clinical trials. Severity is often measured by the Mayo Endoscopic Subscore (MES) and Ulcerative Colitis Endoscopic Index of Severity (UCEIS) score. However, expert MES/UCEIS annotation is time-consuming and susceptible to inter-rater variability, factors addressable by automation. Automation attempts with frame-level labels face challenges in fully-supervised solutions due to the prevalence of video-level labels in clinical trials. CNN-based weakly-supervised models (WSL) with end-to-end (e2e) training lack generalization to new disease scores and ignore spatio-temporal information crucial for accurate scoring. To address these limitations, we propose "Arges", a deep learning framework that utilizes a transformer with positional encoding to incorporate spatio-temporal information from frame features to estimate disease severity scores in endoscopy video. Extracted features are derived from a foundation model (ArgesFM), pre-trained on a large diverse dataset from multiple clinical trials (61M frames, 3927 videos). We evaluate four UC disease severity scores, including MES and three UCEIS component scores. Test set evaluation indicates significant improvements, with F1 scores increasing by 4.1% for MES and 18.8%, 6.6%, 3.8% for the three UCEIS component scores compared to state-of-the-art methods. Prospective validation on previously unseen clinical trial data further demonstrates the model's successful generalization.<|reference_end|>
arxiv
@article{chaitanya2024arges:, title={Arges: Spatio-Temporal Transformer for Ulcerative Colitis Severity Assessment in Endoscopy Videos}, author={Krishna Chaitanya, Pablo F. Damasceno, Shreyas Fadnavis, Pooya Mobadersany, Chaitanya Parmar, Emily Scherer, Natalia Zemlianskaia, Lindsey Surace, Louis R. Ghanem, Oana Gabriela Cula, Tommaso Mansi, Kristopher Standish}, journal={arXiv preprint arXiv:2410.00536}, year={2024}, archivePrefix={arXiv}, eprint={2410.00536}, primaryClass={eess.IV cs.AI cs.CV cs.LG} }
chaitanya2024arges:
arxiv-664026
2410.00537
Partial Typing for Asynchronous Multiparty Sessions
<|reference_start|>Partial Typing for Asynchronous Multiparty Sessions: Formal verification methods for concurrent systems cannot always be scaled-down or tailored in order to be applied on specific subsystems. We address such an issue in a MultiParty Session Types setting by devising a partial type assignment system for multiparty sessions (i.e. sets of concurrent participants) with asynchronous communications. Sessions are possibly typed by "asynchronous global types" describing the overall behaviour of specific subsets of participants only (from which the word "partial"). Typability is proven to ensure that sessions enjoy the partial versions of the well-known properties of lock- and orphan-message-freedom.<|reference_end|>
arxiv
@article{barbanera2024partial, title={Partial Typing for Asynchronous Multiparty Sessions}, author={Franco Barbanera (University of Catania), Mariangiola Dezani-Ciancaglini (University of Torino), Ugo de'Liguoro (University of Torino)}, journal={EPTCS 408, 2024, pp. 1-20}, year={2024}, doi={10.4204/EPTCS.408.1}, archivePrefix={arXiv}, eprint={2410.00537}, primaryClass={cs.LO} }
barbanera2024partial
arxiv-664027
2410.00538
From Compactifying Lambda-Letrec Terms to Recognizing Regular-Expression Processes
<|reference_start|>From Compactifying Lambda-Letrec Terms to Recognizing Regular-Expression Processes: As a supplement to my talk at the workshop, this extended abstract motivates and summarizes my work with co-authors on problems in two separate areas: first, in the lambda-calculus with letrec, a universal model of computation, and second, on Milner's process interpretation of regular expressions, a proper subclass of the finite-state processes. The aim of my talk was to motivate a transferal of ideas for workable concepts of structure-constrained graphs: from the problem of finding compact graph representations for terms in the lambda-calculus with letrec to the problem of recognizing finite process graphs that can be expressed by regular expressions. In both cases the construction of structure-constrained graphs was expedient in order to enable to go back and forth easily between, in the first case, lambda-terms and term graphs, and in the second case, regular expressions and process graphs. The main focus here is on providing pointers to my work with co-authors, in both areas separately. A secondary focus is on explaining directions of my present projects, and describing research questions of possibly general interest that have developed out of my work in these two areas.<|reference_end|>
arxiv
@article{grabmayer2024from, title={From Compactifying Lambda-Letrec Terms to Recognizing Regular-Expression Processes}, author={Clemens Grabmayer (Gran Sasso Science Institute)}, journal={EPTCS 408, 2024, pp. 21-41}, year={2024}, doi={10.4204/EPTCS.408.2}, archivePrefix={arXiv}, eprint={2410.00538}, primaryClass={cs.LO cs.FL} }
grabmayer2024from
arxiv-664028
2410.00539
When Do You Start Counting? Revisiting Counting and Pnueli Modalities in Timed Logics
<|reference_start|>When Do You Start Counting? Revisiting Counting and Pnueli Modalities in Timed Logics: Pnueli first noticed that certain simple 'counting' properties appear to be inexpressible in popular timed temporal logics such as Metric Interval Temporal Logic (MITL). This interesting observation has since been studied extensively, culminating in strong timed logics that are capable of expressing such properties yet remain decidable. A slightly more general case, namely where one asserts the existence of a sequence of events in an arbitrary interval of the form <a, b> (instead of an upper-bound interval of the form [0, b>, which starts from the current point in time), has however not been addressed satisfactorily in the existing literature. We show that counting in [0, b> is in fact as powerful as counting in <a, b>; moreover, the general property 'there exist x', x'' in I such that x' <= x'' and phi(x', x'') holds' can be expressed in Extended Metric Interval Temporal Logic (EMITL) with only [0, b>.<|reference_end|>
arxiv
@article{ho2024when, title={When Do You Start Counting? Revisiting Counting and Pnueli Modalities in Timed Logics}, author={Hsi-Ming Ho (University of Sussex), Khushraj Madnani (Max Planck Institute for Software Systems)}, journal={EPTCS 408, 2024, pp. 73-89}, year={2024}, doi={10.4204/EPTCS.408.5}, archivePrefix={arXiv}, eprint={2410.00539}, primaryClass={cs.LO} }
ho2024when
arxiv-664029
2410.00540
Conditional Nested Pattern Matching in Interaction Net
<|reference_start|>Conditional Nested Pattern Matching in Interaction Net: Interaction nets are a form of restricted graph rewrite system that can serve as a graphical or textual programming language. As such, benefits include one-step confluence, ease of parallelism and explicit garbage collection. However, some of these restrictions burden the programmer, so they have been extended in several ways, notably to include data types and conditional rules. This paper introduces a further extension to allow nested pattern matching and to do so in a way that preserves these benefits and fundamental properties of interaction nets. We also show that by introducing a translation to non-nested matching, this extension is conservative in rewriting. In addition, we propose a new notation to express this pattern matching.<|reference_end|>
arxiv
@article{sato2024conditional, title={Conditional Nested Pattern Matching in Interaction Net}, author={Shinya Sato}, journal={EPTCS 408, 2024, pp. 90-106}, year={2024}, doi={10.4204/EPTCS.408.6}, archivePrefix={arXiv}, eprint={2410.00540}, primaryClass={cs.PL} }
sato2024conditional
arxiv-664030
2410.00541
Random Graph Generation in Context-Free Graph Languages
<|reference_start|>Random Graph Generation in Context-Free Graph Languages: We present a method for generating random hypergraphs in context-free hypergraph languages. It is obtained by adapting Mairson's generation algorithm for context-free string grammars to the setting of hyperedge replacement grammars. Our main results are that for non-ambiguous hyperedge replacement grammars, the method generates hypergraphs uniformly at random and in quadratic time. We illustrate our approach by a running example of a hyperedge replacement grammar generating term graphs.<|reference_end|>
arxiv
@article{vastarini2024random, title={Random Graph Generation in Context-Free Graph Languages}, author={Federico Vastarini (University of York), Detlef Plump (University of York)}, journal={EPTCS 408, 2024, pp. 107-120}, year={2024}, doi={10.4204/EPTCS.408.7}, archivePrefix={arXiv}, eprint={2410.00541}, primaryClass={cs.LO cs.FL} }
vastarini2024random
arxiv-664031
2410.00542
Differentially Private Active Learning: Balancing Effective Data Selection and Privacy
<|reference_start|>Differentially Private Active Learning: Balancing Effective Data Selection and Privacy: Active learning (AL) is a widely used technique for optimizing data labeling in machine learning by iteratively selecting, labeling, and training on the most informative data. However, its integration with formal privacy-preserving methods, particularly differential privacy (DP), remains largely underexplored. While some works have explored differentially private AL for specialized scenarios like online learning, the fundamental challenge of combining AL with DP in standard learning settings has remained unaddressed, severely limiting AL's applicability in privacy-sensitive domains. This work addresses this gap by introducing differentially private active learning (DP-AL) for standard learning settings. We demonstrate that naively integrating DP-SGD training into AL presents substantial challenges in privacy budget allocation and data utilization. To overcome these challenges, we propose step amplification, which leverages individual sampling probabilities in batch creation to maximize data point participation in training steps, thus optimizing data utilization. Additionally, we investigate the effectiveness of various acquisition functions for data selection under privacy constraints, revealing that many commonly used functions become impractical. Our experiments on vision and natural language processing tasks show that DP-AL can improve performance for specific datasets and model architectures. However, our findings also highlight the limitations of AL in privacy-constrained environments, emphasizing the trade-offs between privacy, model accuracy, and data selection accuracy.<|reference_end|>
arxiv
@article{schwethelm2024differentially, title={Differentially Private Active Learning: Balancing Effective Data Selection and Privacy}, author={Kristian Schwethelm, Johannes Kaiser, Jonas Kuntzer, Mehmet Yigitsoy, Daniel Rueckert, Georgios Kaissis}, journal={arXiv preprint arXiv:2410.00542}, year={2024}, archivePrefix={arXiv}, eprint={2410.00542}, primaryClass={cs.LG cs.CR} }
schwethelm2024differentially
arxiv-664032
2410.00544
Best Practices for Multi-Fidelity Bayesian Optimization in Materials and Molecular Research
<|reference_start|>Best Practices for Multi-Fidelity Bayesian Optimization in Materials and Molecular Research: Multi-fidelity Bayesian Optimization (MFBO) is a promising framework to speed up materials and molecular discovery as sources of information of different accuracies are at hand at increasing cost. Despite its potential use in chemical tasks, there is a lack of systematic evaluation of the many parameters playing a role in MFBO. In this work, we provide guidelines and recommendations to decide when to use MFBO in experimental settings. We investigate MFBO methods applied to molecules and materials problems. First, we test two different families of acquisition functions in two synthetic problems and study the effect of the informativeness and cost of the approximate function. We use our implementation and guidelines to benchmark three real discovery problems and compare them against their single-fidelity counterparts. Our results may help guide future efforts to implement MFBO as a routine tool in the chemical sciences.<|reference_end|>
arxiv
@article{sabanza-gil2024best, title={Best Practices for Multi-Fidelity Bayesian Optimization in Materials and Molecular Research}, author={V\'ictor Sabanza-Gil, Riccardo Barbano, Daniel Pacheco Guti\'errez, Jeremy S. Luterbacher, Jos\'e Miguel Hern\'andez-Lobato, Philippe Schwaller, Lo\"ic Roch}, journal={arXiv preprint arXiv:2410.00544}, year={2024}, archivePrefix={arXiv}, eprint={2410.00544}, primaryClass={cs.LG} }
sabanza-gil2024best
arxiv-664033
2410.00545
What the Harm? Quantifying the Tangible Impact of Gender Bias in Machine Translation with a Human-centered Study
<|reference_start|>What the Harm? Quantifying the Tangible Impact of Gender Bias in Machine Translation with a Human-centered Study: Gender bias in machine translation (MT) is recognized as an issue that can harm people and society. And yet, advancements in the field rarely involve people, the final MT users, or inform how they might be impacted by biased technologies. Current evaluations are often restricted to automatic methods, which offer an opaque estimate of what the downstream impact of gender disparities might be. We conduct an extensive human-centered study to examine if and to what extent bias in MT brings harms with tangible costs, such as quality of service gaps across women and men. To this aim, we collect behavioral data from 90 participants, who post-edited MT outputs to ensure correct gender translation. Across multiple datasets, languages, and types of users, our study shows that feminine post-editing demands significantly more technical and temporal effort, also corresponding to higher financial costs. Existing bias measurements, however, fail to reflect the found disparities. Our findings advocate for human-centered approaches that can inform the societal impact of bias.<|reference_end|>
arxiv
@article{savoldi2024what, title={What the Harm? Quantifying the Tangible Impact of Gender Bias in Machine Translation with a Human-centered Study}, author={Beatrice Savoldi and Sara Papi and Matteo Negri and Ana Guerberof and Luisa Bentivogli}, journal={arXiv preprint arXiv:2410.00545}, year={2024}, archivePrefix={arXiv}, eprint={2410.00545}, primaryClass={cs.CL} }
savoldi2024what
arxiv-664034
2410.00548
The complexity of separability for semilinear sets and Parikh automata
<|reference_start|>The complexity of separability for semilinear sets and Parikh automata: In a separability problem, we are given two sets $K$ and $L$ from a class $\mathcal{C}$, and we want to decide whether there exists a set $S$ from a class $\mathcal{S}$ such that $K\subseteq S$ and $S\cap L=\emptyset$. In this case, we speak of separability of sets in $\mathcal{C}$ by sets in $\mathcal{S}$. We study two types of separability problems. First, we consider separability of semilinear sets by recognizable sets of vectors (equivalently, by sets definable by quantifier-free monadic Presburger formulas). Second, we consider separability of languages of Parikh automata by regular languages. A Parikh automaton is a machine with access to counters that can only be incremented, and have to meet a semilinear constraint at the end of the run. Both of these separability problems are known to be decidable with elementary complexity. Our main results are that both problems are coNP-complete. In the case of semilinear sets, coNP-completeness holds regardless of whether the input sets are specified by existential Presburger formulas, quantifier-free formulas, or semilinear representations. Our results imply that recognizable separability of rational subsets of $\Sigma^*\times\mathbb{N}^d$ (shown decidable by Choffrut and Grigorieff) is coNP-complete as well. Another application is that regularity of deterministic Parikh automata (where the target set is specified using a quantifier-free Presburger formula) is coNP-complete as well.<|reference_end|>
arxiv
@article{collins2024the, title={The complexity of separability for semilinear sets and Parikh automata}, author={Elias Rojas Collins, Chris K\"ocher, Georg Zetzsche}, journal={arXiv preprint arXiv:2410.00548}, year={2024}, archivePrefix={arXiv}, eprint={2410.00548}, primaryClass={cs.FL} }
collins2024the
arxiv-664035
2410.00557
STanH : Parametric Quantization for Variable Rate Learned Image Compression
<|reference_start|>STanH : Parametric Quantization for Variable Rate Learned Image Compression: In end-to-end learned image compression, encoder and decoder are jointly trained to minimize a $R + {\lambda}D$ cost function, where ${\lambda}$ controls the trade-off between rate of the quantized latent representation and image quality. Unfortunately, a distinct encoder-decoder pair with millions of parameters must be trained for each ${\lambda}$, hence the need to switch encoders and to store multiple encoders and decoders on the user device for every target rate. This paper proposes to exploit a differentiable quantizer designed around a parametric sum of hyperbolic tangents, called STanH , that relaxes the step-wise quantization function. STanH is implemented as a differentiable activation layer with learnable quantization parameters that can be plugged into a pre-trained fixed rate model and refined to achieve different target bitrates. Experimental results show that our method enables variable rate coding with comparable efficiency to the state-of-the-art, yet with significant savings in terms of ease of deployment, training time, and storage costs<|reference_end|>
arxiv
@article{presta2024stanh, title={STanH : Parametric Quantization for Variable Rate Learned Image Compression}, author={Alberto Presta, Enzo Tartaglione, Attilio Fiandrotti, Marco Grangetto}, journal={arXiv preprint arXiv:2410.00557}, year={2024}, archivePrefix={arXiv}, eprint={2410.00557}, primaryClass={cs.CV cs.MM} }
presta2024stanh
arxiv-664036
2410.00558
AMR-Evol: Adaptive Modular Response Evolution Elicits Better Knowledge Distillation for Large Language Models in Code Generation
<|reference_start|>AMR-Evol: Adaptive Modular Response Evolution Elicits Better Knowledge Distillation for Large Language Models in Code Generation: The impressive performance of proprietary LLMs like GPT4 in code generation has led to a trend to replicate these capabilities in open-source models through knowledge distillation (e.g. Code Evol-Instruct). However, these efforts often neglect the crucial aspect of response quality, relying heavily on teacher models for direct response distillation. This paradigm, especially for complex instructions, can degrade the quality of synthesized data, compromising the knowledge distillation process. To this end, our study introduces the Adaptive Modular Response Evolution (AMR-Evol) framework, which employs a two-stage process to refine response distillation. The first stage, modular decomposition, breaks down the direct response into more manageable sub-modules. The second stage, adaptive response evolution, automatically evolves the response with the related function modules. Our experiments with three popular code benchmarks (HumanEval, MBPP, and EvalPlus) attest to the superiority of the AMR-Evol framework over baseline response distillation methods. By comparing with the open-source Code LLMs trained on a similar scale of data, we observed performance enhancements: more than +3.0 points on HumanEval-Plus and +1.0 points on MBPP-Plus, which underscores the effectiveness of our framework. Our codes are available at https://github.com/ChiYeungLaw/AMR-Evol.<|reference_end|>
arxiv
@article{luo2024amr-evol:, title={AMR-Evol: Adaptive Modular Response Evolution Elicits Better Knowledge Distillation for Large Language Models in Code Generation}, author={Ziyang Luo, Xin Li, Hongzhan Lin, Jing Ma, Lidong Bing}, journal={arXiv preprint arXiv:2410.00558}, year={2024}, archivePrefix={arXiv}, eprint={2410.00558}, primaryClass={cs.CL cs.AI cs.SE} }
luo2024amr-evol:
arxiv-664037
2410.00564
Scaling Offline Model-Based RL via Jointly-Optimized World-Action Model Pretraining
<|reference_start|>Scaling Offline Model-Based RL via Jointly-Optimized World-Action Model Pretraining: A significant aspiration of offline reinforcement learning (RL) is to develop a generalist agent with high capabilities from large and heterogeneous datasets. However, prior approaches that scale offline RL either rely heavily on expert trajectories or struggle to generalize to diverse unseen tasks. Inspired by the excellent generalization of world models in conditional video generation, we explore the potential of image observation-based world models for scaling offline RL and enhancing generalization on novel tasks. In this paper, we introduce JOWA: Jointly-Optimized World-Action model, an offline model-based RL agent pretrained on multiple Atari games with 6 billion tokens of data to learn general-purpose representation and decision-making ability. Our method jointly optimizes a world-action model through a shared transformer backbone, which stabilizes temporal difference learning with large models during pretraining. Moreover, we propose a provably efficient and parallelizable planning algorithm to compensate for the Q-value estimation error and thus search out better policies. Experimental results indicate that our largest agent, with 150 million parameters, achieves 78.9% human-level performance on pretrained games using only 10% subsampled offline data, outperforming existing state-of-the-art large-scale offline RL baselines by 31.6% on average. Furthermore, JOWA scales favorably with model capacity and can sample-efficiently transfer to novel games using only 5k offline fine-tuning data (approximately 4 trajectories) per game, demonstrating superior generalization. We will release codes and model weights at https://github.com/CJReinforce/JOWA.<|reference_end|>
arxiv
@article{cheng2024scaling, title={Scaling Offline Model-Based RL via Jointly-Optimized World-Action Model Pretraining}, author={Jie Cheng, Ruixi Qiao, Gang Xiong, Qinghai Miao, Yingwei Ma, Binhua Li, Yongbin Li, Yisheng Lv}, journal={arXiv preprint arXiv:2410.00564}, year={2024}, archivePrefix={arXiv}, eprint={2410.00564}, primaryClass={cs.LG cs.AI} }
cheng2024scaling
arxiv-664038
2410.00568
A Note on Approximation of Spanning Tree Congestion
<|reference_start|>A Note on Approximation of Spanning Tree Congestion: The {\em Spanning Tree Congestion} problem is an easy-to-state NP-hard problem: given a graph $G$, construct a spanning tree $T$ of $G$ minimizing its maximum edge congestion where the congestion of an edge $e\in T$ is the number of edges $uv$ in $G$ such that the unique path between $u$ and $v$ in $T$ passes through $e$; the optimum value for a given graph $G$ is denoted $STC(G)$. It is known that {\em every} spanning tree is an $n/2$-approximation. A long-standing problem is to design a better approximation algorithm. Our contribution towards this goal is an $O(\Delta\cdot\log^{3/2}n)$-approximation algorithm for the minimum congestion spanning tree problem where $\Delta$ is the maximum degree in $G$. For graphs with maximum degree bounded by polylog of the number of vertices, this is an exponential improvement over the previous best approximation. For graphs with maximum degree bounded by $o(n/\log^{3/2}n)$, we get $o(n)$-approximation; this is the largest class of graphs that we know of, for which sublinear approximation is known for this problem. Our main tool for the algorithm is a new lower bound on the spanning tree congestion which is of independent interest. We prove that for every graph $G$, $STC(G)\geq \Omega(hb(G)/\Delta)$ where $hb(G)$ denotes the maximum bisection width over all subgraphs of $G$.<|reference_end|>
arxiv
@article{kolman2024a, title={A Note on Approximation of Spanning Tree Congestion}, author={Petr Kolman}, journal={arXiv preprint arXiv:2410.00568}, year={2024}, archivePrefix={arXiv}, eprint={2410.00568}, primaryClass={cs.DS cs.DM} }
kolman2024a
arxiv-664039
2410.00572
Obstacle-Avoidant Leader Following with a Quadruped Robot
<|reference_start|>Obstacle-Avoidant Leader Following with a Quadruped Robot: Personal mobile robotic assistants are expected to find wide applications in industry and healthcare. For example, people with limited mobility can benefit from robots helping with daily tasks, or construction workers can have robots perform precision monitoring tasks on-site. However, manually steering a robot while in motion requires significant concentration from the operator, especially in tight or crowded spaces. This reduces walking speed, and the constant need for vigilance increases fatigue and, thus, the risk of accidents. This work presents a virtual leash with which a robot can naturally follow an operator. We use a sensor fusion based on a custom-built RF transponder, RGB cameras, and a LiDAR. In addition, we customize a local avoidance planner for legged platforms, which enables us to navigate dynamic and narrow environments. We successfully validate on the ANYmal platform the robustness and performance of our entire pipeline in real-world experiments.<|reference_end|>
arxiv
@article{scheidemann2024obstacle-avoidant, title={Obstacle-Avoidant Leader Following with a Quadruped Robot}, author={Carmen Scheidemann and Lennart Werner and Victor Reijgwart and Andrei Cramariuc and Joris Chomarat and Jia-Ruei Chiu and Roland Siegwart and Marco Hutter}, journal={arXiv preprint arXiv:2410.00572}, year={2024}, archivePrefix={arXiv}, eprint={2410.00572}, primaryClass={cs.RO} }
scheidemann2024obstacle-avoidant
arxiv-664040
2410.00578
Towards an Argument Pattern for the Use of Safety Performance Indicators
<|reference_start|>Towards an Argument Pattern for the Use of Safety Performance Indicators: UL 4600, the safety standard for autonomous products, mandates the use of Safety Performance Indicators (SPIs) to continuously ensure the validity of safety cases by monitoring and taking action when violations are identified. Despite numerous examples of concrete SPIs available in the standard and companion literature, their contribution rationale for achieving safety is often left implicit. In this paper, we present our initial work towards an argument pattern for the use of SPIs to ensure validity of safety cases throughout the entire lifecycle of the system. Our aim is to make the implicit argument behind using SPIs explicit, and based on this, to analyze the situations that can undermine confidence in the chosen set of SPIs. To maintain the confidence in SPIs' effectiveness, we propose an approach to continuously monitor their expected performance by using meta-SPIs.<|reference_end|>
arxiv
@article{ratiu2024towards, title={Towards an Argument Pattern for the Use of Safety Performance Indicators}, author={Daniel Ratiu and Tihomir Rohlinger and Torben Stolte and Stefan Wagner}, journal={Lecture Notes in Computer Science (ISSN 0302-9743, ISBN 978-3-031-68738-9), Springer, published 08 September 2024}, year={2024}, archivePrefix={arXiv}, eprint={2410.00578}, primaryClass={cs.SE} }
ratiu2024towards
arxiv-664041
2410.00580
Deep activity propagation via weight initialization in spiking neural networks
<|reference_start|>Deep activity propagation via weight initialization in spiking neural networks: Spiking Neural Networks (SNNs) and neuromorphic computing offer bio-inspired advantages such as sparsity and ultra-low power consumption, providing a promising alternative to conventional networks. However, training deep SNNs from scratch remains a challenge, as SNNs process and transmit information by quantizing the real-valued membrane potentials into binary spikes. This can lead to information loss and vanishing spikes in deeper layers, impeding effective training. While weight initialization is known to be critical for training deep neural networks, what constitutes an effective initial state for a deep SNN is not well-understood. Existing weight initialization methods designed for conventional networks (ANNs) are often applied to SNNs without accounting for their distinct computational properties. In this work we derive an optimal weight initialization method specifically tailored for SNNs, taking into account the quantization operation. We show theoretically that, unlike standard approaches, this method enables the propagation of activity in deep SNNs without loss of spikes. We demonstrate this behavior in numerical simulations of SNNs with up to 100 layers across multiple time steps. We present an in-depth analysis of the numerical conditions, regarding layer width and neuron hyperparameters, which are necessary to accurately apply our theoretical findings. Furthermore, our experiments on MNIST demonstrate higher accuracy and faster convergence when using the proposed weight initialization scheme. Finally, we show that the newly introduced weight initialization is robust against variations in several network and neuron hyperparameters.<|reference_end|>
arxiv
@article{micheli2024deep, title={Deep activity propagation via weight initialization in spiking neural networks}, author={Aurora Micheli and Olaf Booij and Jan van Gemert and Nergis T\"omen}, journal={arXiv preprint arXiv:2410.00580}, year={2024}, archivePrefix={arXiv}, eprint={2410.00580}, primaryClass={cs.CV} }
micheli2024deep
arxiv-664042
2410.00582
Can We Remove the Ground? Obstacle-aware Point Cloud Compression for Remote Object Detection
<|reference_start|>Can We Remove the Ground? Obstacle-aware Point Cloud Compression for Remote Object Detection: Efficient point cloud (PC) compression is crucial for streaming applications, such as augmented reality and cooperative perception. Classic PC compression techniques encode all the points in a frame. Tailoring compression towards perception tasks at the receiver side, we ask the question, "Can we remove the ground points during transmission without sacrificing the detection performance?" Our study reveals a strong dependency on the ground from state-of-the-art (SOTA) 3D object detection models, especially on those points below and around the object. In this work, we propose a lightweight obstacle-aware Pillar-based Ground Removal (PGR) algorithm. PGR filters out ground points that do not provide context to object recognition, significantly improving compression ratio without sacrificing the receiver side perception performance. Not using heavy object detection or semantic segmentation models, PGR is lightweight, highly parallelizable, and effective. Our evaluations on KITTI and Waymo Open Dataset show that SOTA detection models work equally well with PGR removing 20-30% of the points, with a speed of 86 FPS.<|reference_end|>
arxiv
@article{zeng2024can, title={Can We Remove the Ground? Obstacle-aware Point Cloud Compression for Remote Object Detection}, author={Pengxi Zeng and Alberto Presta and Jonah Reinis and Dinesh Bharadia and Hang Qiu and Pamela Cosman}, journal={arXiv preprint arXiv:2410.00582}, year={2024}, archivePrefix={arXiv}, eprint={2410.00582}, primaryClass={cs.CV cs.RO} }
zeng2024can
arxiv-664043
2410.00583
A Mathematical Theory of Hyper-simplex Fractal Network for Blockchain: Part I
<|reference_start|>A Mathematical Theory of Hyper-simplex Fractal Network for Blockchain: Part I: Blockchain technology holds promise for Web 3.0, but scalability remains a critical challenge. Here, we present a mathematical theory for a novel blockchain network topology based on fractal N-dimensional simplexes. This Hyper-simplex fractal network folds one-dimensional data blocks into geometric shapes, reflecting both underlying and overlaying network connectivities. Our approach offers near-infinite scalability, accommodating trillions of nodes while maintaining efficiency. We derive the mathematical foundations for generating and describing these network topologies, proving key properties such as node count, connectivity patterns, and fractal dimension. The resulting structure facilitates a hierarchical consensus mechanism and enables deterministic address mapping for rapid routing. This theoretical framework lays the groundwork for next-generation blockchain architectures, potentially revolutionizing large-scale decentralized systems. The Part I work was conducted between March and September 2024.<|reference_end|>
arxiv
@article{yang2024a, title={A Mathematical Theory of Hyper-simplex Fractal Network for Blockchain: Part I}, author={Kaiwen Yang and Hao Xu and Yunqing Sun and Jiacheng Qian and Zihan Zhou and Xiaoshuai Zhang and Erwu Liu and Lei Zhang and Chih-Lin I}, journal={arXiv preprint arXiv:2410.00583}, year={2024}, archivePrefix={arXiv}, eprint={2410.00583}, primaryClass={cs.NI cs.DC} }
yang2024a
arxiv-664044
2410.00584
Asymmetrically connected reservoir networks learn better
<|reference_start|>Asymmetrically connected reservoir networks learn better: We show that connectivity within the high-dimensional recurrent layer of a reservoir network is crucial for its performance. To this end, we systematically investigate the impact of network connectivity on its performance, i.e., we examine the symmetry and structure of the reservoir in relation to its computational power. Reservoirs with random and asymmetric connections are found to perform better for an exemplary Mackey-Glass time series than all structured reservoirs, including biologically inspired connectivities, such as small-world topologies. This result is quantified by the information processing capacity of the different network topologies which becomes highest for asymmetric and randomly connected networks.<|reference_end|>
arxiv
@article{rathor2024asymmetrically, title={Asymmetrically connected reservoir networks learn better}, author={Shailendra K. Rathor and Martin Ziegler and J\"org Schumacher}, journal={arXiv preprint arXiv:2410.00584}, year={2024}, archivePrefix={arXiv}, eprint={2410.00584}, primaryClass={cs.NE nlin.CD} }
rathor2024asymmetrically
arxiv-664045
2410.00589
GERA: Geometric Embedding for Efficient Point Registration Analysis
<|reference_start|>GERA: Geometric Embedding for Efficient Point Registration Analysis: Point cloud registration aims to provide estimated transformations to align point clouds, which plays a crucial role in pose estimation of various navigation systems, such as surgical guidance systems and autonomous vehicles. Despite the impressive performance of recent models on benchmark datasets, many rely on complex modules like KPConv and Transformers, which impose significant computational and memory demands. These requirements hinder their practical application, particularly in resource-constrained environments such as mobile robotics. In this paper, we propose a novel point cloud registration network that leverages a pure MLP architecture, constructing geometric information offline. This approach eliminates the computational and memory burdens associated with traditional complex feature extractors and significantly reduces inference time and resource consumption. Our method is the first to replace 3D coordinate inputs with offline-constructed geometric encoding, improving generalization and stability, as demonstrated by Maximum Mean Discrepancy (MMD) comparisons. This efficient and accurate geometric representation marks a significant advancement in point cloud analysis, particularly for applications requiring speed and reliability.<|reference_end|>
arxiv
@article{li2024gera:, title={GERA: Geometric Embedding for Efficient Point Registration Analysis}, author={Geng Li and Haozhi Cao and Mingyang Liu and Shenghai Yuan and Jianfei Yang}, journal={arXiv preprint arXiv:2410.00589}, year={2024}, archivePrefix={arXiv}, eprint={2410.00589}, primaryClass={cs.CV cs.AI} }
li2024gera:
arxiv-664046
2410.00592
Ultra-low-crosstalk Silicon Switches Driven Thermally and Electrically
<|reference_start|>Ultra-low-crosstalk Silicon Switches Driven Thermally and Electrically: Silicon photonic switches are widely considered as a cost-effective solution for addressing the ever-growing data traffic in datacenter networks, as they offer unique advantages such as low power consumption, low latency, small footprint and high bandwidth. Despite extensive research efforts, crosstalk in large-scale photonic circuits still poses a threat to the signal integrity. In this paper, we present two designs of silicon Mach-Zehnder Interferometer (MZI) switches achieving ultra-low-crosstalk, driven thermally and electrically. Each switch fabric is optimized at both the device and circuit level to suppress crosstalk and reduce system complexity. Notably, for the first time to the best of our knowledge, we harness the inherent self-heating effect in a carrier-injection-based MZI switch to create a pair of phase shifters that offer arbitrary phase differences. Such a pair of phase shifters induces matched insertion loss at each arm, thus minimizing crosstalk. Experimentally, an ultra-low crosstalk ratio below -40 dB is demonstrated for both thermo-optic (T-O) and electro-optic (E-O) switches. The T-O switch exhibits an on-chip loss of less than 5 dB with a switching time of 500 microseconds, whereas the E-O switch achieves an on-chip loss as low as 8.5 dB with a switching time of under 100 ns. In addition, data transmission of a 50 Gb/s on-off keying signal is demonstrated with high fidelity on the E-O switch, showing the great potential of the proposed switch designs.<|reference_end|>
arxiv
@article{bao2024ultra-low-crosstalk, title={Ultra-low-crosstalk Silicon Switches Driven Thermally and Electrically}, author={Peng Bao and Chunhui Yao and Chenxi Tan and Alan Yilun Yuan and Minjia Chen and Seb J. Savory and Richard Penty and Qixiang Cheng}, journal={arXiv preprint arXiv:2410.00592}, year={2024}, archivePrefix={arXiv}, eprint={2410.00592}, primaryClass={physics.optics cs.SY eess.SY} }
bao2024ultra-low-crosstalk
arxiv-664047
2410.00593
Style-Specific Neurons for Steering LLMs in Text Style Transfer
<|reference_start|>Style-Specific Neurons for Steering LLMs in Text Style Transfer: Text style transfer (TST) aims to modify the style of a text without altering its original meaning. Large language models (LLMs) demonstrate superior performance across multiple tasks, including TST. However, in zero-shot setups, they tend to directly copy a significant portion of the input text to the output without effectively changing its style. To enhance the stylistic variety and fluency of the text, we present sNeuron-TST, a novel approach for steering LLMs using style-specific neurons in TST. Specifically, we identify neurons associated with the source and target styles and deactivate source-style-only neurons to give target-style words a higher probability, aiming to enhance the stylistic diversity of the generated text. However, we find that this deactivation negatively impacts the fluency of the generated text, which we address by proposing an improved contrastive decoding method that accounts for rapid token probability shifts across layers caused by deactivated source-style neurons. Empirical experiments demonstrate the effectiveness of the proposed method on six benchmarks, encompassing formality, toxicity, politics, politeness, authorship, and sentiment.<|reference_end|>
arxiv
@article{lai2024style-specific, title={Style-Specific Neurons for Steering LLMs in Text Style Transfer}, author={Wen Lai and Viktor Hangya and Alexander Fraser}, journal={arXiv preprint arXiv:2410.00593}, year={2024}, archivePrefix={arXiv}, eprint={2410.00593}, primaryClass={cs.CL} }
lai2024style-specific
arxiv-664048
2410.00595
On the Interaction of Adaptive Population Control with Cumulative Step-Size Adaptation
<|reference_start|>On the Interaction of Adaptive Population Control with Cumulative Step-Size Adaptation: Three state-of-the-art adaptive population control strategies (PCS) are theoretically and empirically investigated for a multi-recombinative, cumulative step-size adaptation Evolution Strategy $(\mu/\mu_I, \lambda)$-CSA-ES. First, scaling properties for the generation number and mutation strength rescaling are derived on the sphere in the limit of large population sizes. Then, the adaptation properties of three standard CSA-variants are studied as a function of the population size and dimensionality, and compared to the predicted scaling results. Thereafter, three PCS are implemented along the CSA-ES and studied on a test bed of sphere, random, and Rastrigin functions. The CSA-adaptation properties significantly influence the performance of the PCS, which is shown in more detail. Given the test bed, well-performing parameter sets (in terms of scaling, efficiency, and success rate) for both the CSA- and PCS-subroutines are identified.<|reference_end|>
arxiv
@article{omeradzic2024on, title={On the Interaction of Adaptive Population Control with Cumulative Step-Size Adaptation}, author={Amir Omeradzic and Hans-Georg Beyer}, journal={arXiv preprint arXiv:2410.00595}, year={2024}, archivePrefix={arXiv}, eprint={2410.00595}, primaryClass={cs.NE} }
omeradzic2024on
arxiv-664049
2410.00596
Dynamic and Scalable Data Preparation for Object-Centric Process Mining
<|reference_start|>Dynamic and Scalable Data Preparation for Object-Centric Process Mining: Object-centric process mining is emerging as a promising paradigm across diverse industries, drawing substantial academic attention. To support its data requirements, existing object-centric data formats primarily facilitate the exchange of static event logs between data owners, researchers, and analysts, rather than serving as a robust foundational data model for continuous data ingestion and transformation pipelines for subsequent storage and analysis. This focus results in suboptimal design choices in terms of flexibility, scalability, and maintainability. For example, it is difficult for current object-centric event log formats to deal with novel object types or new attributes in case of streaming data. This paper proposes a database format designed for an intermediate data storage hub, which segregates process mining applications from their data sources using a hub-and-spoke architecture. It delineates essential requirements for robust object-centric event log storage from a data engineering perspective and introduces a novel relational schema tailored to these requirements. To validate the efficacy of the proposed database format, an end-to-end solution is implemented using a lightweight, open-source data stack. Our implementation includes data extractors for various object-centric event log formats, automated data quality assessments, and intuitive process data visualization capabilities.<|reference_end|>
arxiv
@article{bosmans2024dynamic, title={Dynamic and Scalable Data Preparation for Object-Centric Process Mining}, author={Lien Bosmans and Jari Peeperkorn and Alexandre Goossens and Giovanni Lugaresi and Johannes De Smedt and Jochen De Weerdt}, journal={arXiv preprint arXiv:2410.00596}, year={2024}, archivePrefix={arXiv}, eprint={2410.00596}, primaryClass={cs.DB} }
bosmans2024dynamic
arxiv-664050
2410.00598
FPT Approximations for Fair $k$-Min-Sum-Radii
<|reference_start|>FPT Approximations for Fair $k$-Min-Sum-Radii: We consider the $k$-min-sum-radii ($k$-MSR) clustering problem with fairness constraints. The $k$-min-sum-radii problem is a mixture of the classical $k$-center and $k$-median problems. We are given a set of points $P$ in a metric space and a number $k$ and aim to partition the points into $k$ clusters, each of the clusters having one designated center. The objective to minimize is the sum of the radii of the $k$ clusters (where in $k$-center we would only consider the maximum radius and in $k$-median we would consider the sum of the individual points' costs). Various notions of fair clustering have been introduced lately, and we follow the definitions due to Chierichetti, Kumar, Lattanzi and Vassilvitskii [NeurIPS 2017] which demand that cluster compositions shall follow the proportions of the input point set with respect to some given sensitive attribute. For the easier case where the sensitive attribute only has two possible values and each is equally frequent in the input, the aim is to compute a clustering where all clusters have a 1:1 ratio with respect to this attribute. We call this the 1:1 case. There has been a surge of FPT-approximation algorithms for the $k$-MSR problem lately, solving the problem both in the unconstrained case and in several constrained problem variants. We add to this research area by designing an FPT $(6+\epsilon)$-approximation that works for $k$-MSR under the mentioned general fairness notion. For the special 1:1 case, we improve our algorithm to achieve a $(3+\epsilon)$-approximation.<|reference_end|>
arxiv
@article{carta2024fpt, title={FPT Approximations for Fair $k$-Min-Sum-Radii}, author={Lena Carta and Lukas Drexler and Annika Hennes and Clemens R\"osner and Melanie Schmidt}, journal={arXiv preprint arXiv:2410.00598}, year={2024}, archivePrefix={arXiv}, eprint={2410.00598}, primaryClass={cs.DS} }
carta2024fpt
arxiv-664051
2410.00601
$k$-local Graphs
<|reference_start|>$k$-local Graphs: In 2017 Day et al. introduced the notion of locality as a structural complexity-measure for patterns in the field of pattern matching established by Angluin in 1980. In 2019 Casel et al. showed that determining the locality of an arbitrary pattern is NP-complete. Inspired by hierarchical clustering, we extend the notion to coloured graphs, i.e., given a coloured graph determine an enumeration of the colours such that colouring the graph stepwise according to the enumeration leads to as few clusters as possible. Next to first theoretical results on graph classes, we propose a priority search algorithm to compute the $k$-locality of a graph. The algorithm is optimal in the number of marking prefix expansions, and is faster by orders of magnitude than an exhaustive search. Finally, we perform a case study on a DBLP subgraph to demonstrate the potential of $k$-locality for knowledge discovery.<|reference_end|>
arxiv
@article{beth2024k-local, title={$k$-local Graphs}, author={Christian Beth and Pamela Fleischmann and Annika Huch and Daniyal Kazempour and Peer Kr\"oger and Andrea Kulow and Matthias Renz}, journal={arXiv preprint arXiv:2410.00601}, year={2024}, archivePrefix={arXiv}, eprint={2410.00601}, primaryClass={math.CO cs.DS cs.SI} }
beth2024k-local
arxiv-664052
2410.00603
An Empirical Study of Large Language Models for Type and Call Graph Analysis
<|reference_start|>An Empirical Study of Large Language Models for Type and Call Graph Analysis: Large Language Models (LLMs) are increasingly being explored for their potential in software engineering, particularly in static analysis tasks. In this study, we investigate the potential of current LLMs to enhance call-graph analysis and type inference for Python and JavaScript programs. We empirically evaluated 24 LLMs, including OpenAI's GPT series and open-source models like LLaMA and Mistral, using existing and newly developed benchmarks. Specifically, we enhanced TypeEvalPy, a micro-benchmarking framework for type inference in Python, with auto-generation capabilities, expanding its scope from 860 to 77,268 type annotations for Python. Additionally, we introduced SWARM-CG and SWARM-JS, comprehensive benchmarking suites for evaluating call-graph construction tools across multiple programming languages. Our findings reveal a contrasting performance of LLMs in static analysis tasks. For call-graph generation in Python, traditional static analysis tools like PyCG significantly outperform LLMs. In JavaScript, the static tool TAJS underperforms due to its inability to handle modern language features, while LLMs, despite showing potential with models like mistral-large-it-2407-123b and GPT-4o, struggle with completeness and soundness in both languages for call-graph analysis. Conversely, LLMs demonstrate a clear advantage in type inference for Python, surpassing traditional tools like HeaderGen and hybrid approaches such as HiTyper. These results suggest that while LLMs hold promise in type inference, their limitations in call-graph analysis highlight the need for further research. Our study provides a foundation for integrating LLMs into static analysis workflows, offering insights into their strengths and current limitations.<|reference_end|>
arxiv
@article{venkatesh2024an, title={An Empirical Study of Large Language Models for Type and Call Graph Analysis}, author={Ashwin Prasad Shivarpatna Venkatesh and Rose Sunil and Samkutty Sabu and Amir M. Mir and Sofia Reis and Eric Bodden}, journal={arXiv preprint arXiv:2410.00603}, year={2024}, archivePrefix={arXiv}, eprint={2410.00603}, primaryClass={cs.SE} }
venkatesh2024an
arxiv-664053
2410.00605
Random large eddy simulation for 3-dimensional incompressible viscous flows
<|reference_start|>Random large eddy simulation for 3-dimensional incompressible viscous flows: We develop a numerical method for simulation of incompressible viscous flows by integrating the random vortex method with the core idea of Large Eddy Simulation (LES). Specifically, we utilize the filtering method in LES, interpreted as spatial averaging, along with the integral representation theorem for parabolic equations, to achieve a closure scheme which may be used for calculating solutions of Navier-Stokes equations. This approach circumvents the challenge associated with handling the non-locally integrable 3-dimensional integral kernel in the random vortex method and facilitates the computation of numerical solutions for flow systems via Monte-Carlo method. Numerical simulations are carried out for both laminar and turbulent flows, demonstrating the validity and effectiveness of the method.<|reference_end|>
arxiv
@article{guo2024random, title={Random large eddy simulation for 3-dimensional incompressible viscous flows}, author={Zihao Guo and Zhongmin Qian}, journal={arXiv preprint arXiv:2410.00605}, year={2024}, archivePrefix={arXiv}, eprint={2410.00605}, primaryClass={physics.flu-dyn cs.NA math.AP math.NA math.PR} }
guo2024random
arxiv-664054
2410.00608
Measurement challenges in AI catastrophic risk governance and safety frameworks
<|reference_start|>Measurement challenges in AI catastrophic risk governance and safety frameworks: Safety frameworks represent a significant development in AI governance: they are the first type of publicly shared catastrophic risk management framework developed by major AI companies and focus specifically on AI scaling decisions. I identify six critical measurement challenges in their implementation and propose three policy recommendations to improve their validity and reliability.<|reference_end|>
arxiv
@article{kasirzadeh2024measurement, title={Measurement challenges in AI catastrophic risk governance and safety frameworks}, author={Atoosa Kasirzadeh}, journal={arXiv preprint arXiv:2410.00608}, year={2024}, archivePrefix={arXiv}, eprint={2410.00608}, primaryClass={cs.CY} }
kasirzadeh2024measurement
arxiv-664055
2410.00611
The combinatorial structure and value distributions of plateaued functions
<|reference_start|>The combinatorial structure and value distributions of plateaued functions: We study combinatorial properties of plateaued functions. All quadratic functions, bent functions and most known APN functions are plateaued, so many cryptographic primitives rely on plateaued functions as building blocks. The main focus of our study is the interplay of the Walsh transform and linearity of a plateaued function, its differential properties, and their value distributions, i.e., the sizes of image and preimage sets. In particular, we study the special case of ``almost balanced'' plateaued functions, which only have two nonzero preimage set sizes, generalizing for instance all monomial functions. We achieve several direct connections and (non)existence conditions for these functions, showing for instance that plateaued $d$-to-$1$ functions (and thus plateaued monomials) only exist for a very select choice of $d$, and we derive for all these functions their linearity as well as bounds on their differential uniformity. We also specifically study the Walsh transform of plateaued APN functions and their relation to their value distribution.<|reference_end|>
arxiv
@article{kolsch2024the, title={The combinatorial structure and value distributions of plateaued functions}, author={Lukas K\"olsch and Alexandr Polujan}, journal={arXiv preprint arXiv:2410.00611}, year={2024}, archivePrefix={arXiv}, eprint={2410.00611}, primaryClass={math.CO cs.IT math.IT} }
kolsch2024the
arxiv-664056
2410.00616
Detecci\'on Autom\'atica de Patolog\'ias en Notas Cl\'inicas en Espa\~nol Combinando Modelos de Lenguaje y Ontolog\'ias M\'edicos
<|reference_start|>Detecci\'on Autom\'atica de Patolog\'ias en Notas Cl\'inicas en Espa\~nol Combinando Modelos de Lenguaje y Ontolog\'ias M\'edicos: In this paper we present a hybrid method for the automatic detection of dermatological pathologies in medical reports. We use a large language model combined with medical ontologies to predict, given a first appointment or follow-up medical report, the pathology a person may suffer from. The results show that teaching the model to learn the type, severity and location on the body of a dermatological pathology as well as in which order it has to learn these three features significantly increases its accuracy. The article presents the demonstration of state-of-the-art results for classification of medical texts with a precision of 0.84, micro and macro F1-score of 0.82 and 0.75, and makes both the method and the dataset used available to the community. -- En este art\'iculo presentamos un m\'etodo h\'ibrido para la detecci\'on autom\'atica de patolog\'ias dermatol\'ogicas en informes m\'edicos. Usamos un modelo de lenguaje amplio en espa\~nol combinado con ontolog\'ias m\'edicas para predecir, dado un informe m\'edico de primera cita o de seguimiento, la patolog\'ia del paciente. Los resultados muestran que el tipo, la gravedad y el sitio en el cuerpo de una patolog\'ia dermatol\'ogica, as\'i como en qu\'e orden tiene un modelo que aprender esas tres caracter\'isticas, aumentan su precisi\'on. El art\'iculo presenta la demostraci\'on de resultados comparables al estado del arte de clasificaci\'on de textos m\'edicos con una precisi\'on de 0.84, micro y macro F1-score de 0.82 y 0.75, y deja a disposici\'on de la comunidad tanto el m\'etodo como el conjunto de datos utilizado.<|reference_end|>
arxiv
@article{torre2024deteccion, title={Detecci\'on Autom\'atica de Patolog\'ias en Notas Cl\'inicas en Espa\~nol Combinando Modelos de Lenguaje y Ontolog\'ias M\'edicos}, author={L\'eon-Paul Schaub Torre and Pelayo Quir\'os and Helena Garc\'ia Mieres}, journal={arXiv preprint arXiv:2410.00616}, year={2024}, archivePrefix={arXiv}, eprint={2410.00616}, primaryClass={cs.CL} }
torre2024deteccion
arxiv-664057
2410.00617
Radio Foundation Models: Pre-training Transformers for 5G-based Indoor Localization
<|reference_start|>Radio Foundation Models: Pre-training Transformers for 5G-based Indoor Localization: Artificial Intelligence (AI)-based radio fingerprinting (FP) outperforms classic localization methods in propagation environments with strong multipath effects. However, the model and data orchestration of FP are time-consuming and costly, as it requires many reference positions and extensive measurement campaigns for each environment. Instead, modern unsupervised and self-supervised learning schemes require less reference data for localization, but either their accuracy is low or they require additional sensor information, rendering them impractical. In this paper we propose a self-supervised learning framework that pre-trains a general transformer (TF) neural network on 5G channel measurements that we collect on-the-fly without expensive equipment. Our novel pretext task randomly masks and drops input information to learn to reconstruct it. So, it implicitly learns the spatiotemporal patterns and information of the propagation environment that enable FP-based localization. Most interestingly, when we optimize this pre-trained model for localization in a given environment, it achieves the accuracy of state-of-the-art methods but requires ten times less reference data and significantly reduces the time from training to operation.<|reference_end|>
arxiv
@article{ott2024radio, title={Radio Foundation Models: Pre-training Transformers for 5G-based Indoor Localization}, author={Jonathan Ott and Jonas Pirkl and Maximilian Stahlke and Tobias Feigl and Christopher Mutschler}, journal={arXiv preprint arXiv:2410.00617}, year={2024}, archivePrefix={arXiv}, eprint={2410.00617}, primaryClass={eess.SP cs.LG} }
ott2024radio
arxiv-664058
2410.00620
Differentiable Interacting Multiple Model Particle Filtering
<|reference_start|>Differentiable Interacting Multiple Model Particle Filtering: We propose a sequential Monte Carlo algorithm for parameter learning when the studied model exhibits random discontinuous jumps in behaviour. To facilitate the learning of high dimensional parameter sets, such as those associated to neural networks, we adopt the emerging framework of differentiable particle filtering, wherein parameters are trained by gradient descent. We design a new differentiable interacting multiple model particle filter to be capable of learning the individual behavioural regimes and the model which controls the jumping simultaneously. In contrast to previous approaches, our algorithm allows control of the computational effort assigned per regime whilst using the probability of being in a given regime to guide sampling. Furthermore, we develop a new gradient estimator that has a lower variance than established approaches and remains fast to compute, for which we prove consistency. We establish new theoretical results of the presented algorithms and demonstrate superior numerical performance compared to the previous state-of-the-art algorithms.<|reference_end|>
arxiv
@article{brady2024differentiable, title={Differentiable Interacting Multiple Model Particle Filtering}, author={John-Joseph Brady, Yuhui Luo, Wenwu Wang, V\'ictor Elvira, Yunpeng Li}, journal={arXiv preprint arXiv:2410.00620}, year={2024}, archivePrefix={arXiv}, eprint={2410.00620}, primaryClass={stat.ML cs.LG eess.SP} }
brady2024differentiable
arxiv-664059
2410.00622
A Reconfigurable Approximate Computing RISC-V Platform for Fault-Tolerant Applications
<|reference_start|>A Reconfigurable Approximate Computing RISC-V Platform for Fault-Tolerant Applications: The demand for energy-efficient and high performance embedded systems drives the evolution of new hardware architectures, including concepts like approximate computing. This paper presents a novel reconfigurable embedded platform named "phoeniX", using the standard RISC-V ISA, maximizing energy efficiency while maintaining acceptable application-level accuracy. The platform enables the integration of approximate circuits at the core level with diverse structures, accuracies, and timings without requiring modifications to the core, particularly in the control logic. The platform introduces novel control features, allowing configurable trade-offs between accuracy and energy consumption based on specific application requirements. To evaluate the effectiveness of the platform, experiments were conducted on a set of applications, such as image processing and Dhrystone benchmark. The core with its original execution engine, occupies 0.024mm2 of area, with average power consumption of 4.23mW at 1.1V operating voltage, average energy-efficiency of 7.85pJ per operation at 620MHz frequency in 45nm CMOS technology. The configurable platform with a highly optimized 3-stage pipelined RV32I(E)M architecture, possesses a DMIPS/MHz of 1.89, and a CPI of 1.13, showcasing remarkable capabilities for an embedded processor.<|reference_end|>
arxiv
@article{delavari2024a, title={A Reconfigurable Approximate Computing RISC-V Platform for Fault-Tolerant Applications}, author={Arvin Delavari, Faraz Ghoreishy, Hadi Shahriar Shahhoseini, and Sattar Mirzakuchaki}, journal={arXiv preprint arXiv:2410.00622}, year={2024}, doi={10.1109/DSD64264.2024.00020}, archivePrefix={arXiv}, eprint={2410.00622}, primaryClass={cs.AR} }
delavari2024a
arxiv-664060
2410.00623
Adoption and Adaptation of CI/CD Practices in Very Small Software Development Entities: A Systematic Literature Review
<|reference_start|>Adoption and Adaptation of CI/CD Practices in Very Small Software Development Entities: A Systematic Literature Review: This study presents a systematic literature review on the adoption of Continuous Integration and Continuous Delivery (CI/CD) practices in Very Small Entities (VSEs) in software development. The research analyzes 13 selected studies to identify common CI/CD practices, characterize the specific limitations of VSEs, and explore strategies for adapting these practices to small-scale environments. The findings reveal that VSEs face significant challenges in implementing CI/CD due to resource constraints and complex tool ecosystems. However, the adoption of accessible tools like Jenkins and Docker, coupled with micro-pipeline practices and simplified frameworks such as ISO 29110, can effectively address these challenges. The study highlights the growing trend of microservices architecture adoption and the importance of tailoring CI/CD processes to VSE-specific needs. This research contributes to the understanding of how small software entities can leverage CI/CD practices to enhance their competitiveness and software quality, despite limited resources.<|reference_end|>
arxiv
@article{ccallo2024adoption, title={Adoption and Adaptation of CI/CD Practices in Very Small Software Development Entities: A Systematic Literature Review}, author={Mario Ccallo, Alex Quispe-Quispe}, journal={arXiv preprint arXiv:2410.00623}, year={2024}, archivePrefix={arXiv}, eprint={2410.00623}, primaryClass={cs.SE} }
ccallo2024adoption
arxiv-664061
2410.00627
Parallel state estimation for systems with integrated measurements
<|reference_start|>Parallel state estimation for systems with integrated measurements: This paper presents parallel-in-time state estimation methods for systems with Slow-Rate inTegrated Measurements (SRTM). Integrated measurements are common in various applications, and they appear in analysis of data resulting from processes that require material collection or integration over the sampling period. Current state estimation methods for SRTM are inherently sequential, preventing temporal parallelization in their standard form. This paper proposes parallel Bayesian filters and smoothers for linear Gaussian SRTM models. For that purpose, we develop a novel smoother for SRTM models and develop parallel-in-time filters and smoother for them using an associative scan-based parallel formulation. Empirical experiments ran on a GPU demonstrate the superior time complexity of the proposed methods over traditional sequential approaches.<|reference_end|>
arxiv
@article{yaghoobi2024parallel, title={Parallel state estimation for systems with integrated measurements}, author={Fatemeh Yaghoobi and Simo S\"arkk\"a}, journal={arXiv preprint arXiv:2410.00627}, year={2024}, archivePrefix={arXiv}, eprint={2410.00627}, primaryClass={stat.CO cs.DC} }
yaghoobi2024parallel
arxiv-664062
2410.00629
An Illumination-Robust Feature Extractor Augmented by Relightable 3D Reconstruction
<|reference_start|>An Illumination-Robust Feature Extractor Augmented by Relightable 3D Reconstruction: Visual features, whose description often relies on the local intensity and gradient direction, have found wide applications in robot navigation and localization in recent years. However, the extraction of visual features is usually disturbed by the variation of illumination conditions, making it challenging for real-world applications. Previous works have addressed this issue by establishing datasets with variations in illumination conditions, but can be costly and time-consuming. This paper proposes a design procedure for an illumination-robust feature extractor, where the recently developed relightable 3D reconstruction techniques are adopted for rapid and direct data generation with varying illumination conditions. A self-supervised framework is proposed for extracting features with advantages in repeatability for key points and similarity for descriptors across good and bad illumination conditions. Experiments are conducted to demonstrate the effectiveness of the proposed method for robust feature extraction. Ablation studies also indicate the effectiveness of the self-supervised framework design.<|reference_end|>
arxiv
@article{zhao2024an, title={An Illumination-Robust Feature Extractor Augmented by Relightable 3D Reconstruction}, author={Shunyi Zhao, Zehuan Yu, Zuxin Fan, Zhihao Zhou, Lecheng Ruan and Qining Wang}, journal={arXiv preprint arXiv:2410.00629}, year={2024}, archivePrefix={arXiv}, eprint={2410.00629}, primaryClass={cs.CV} }
zhao2024an
arxiv-664063
2410.00630
Cafca: High-quality Novel View Synthesis of Expressive Faces from Casual Few-shot Captures
<|reference_start|>Cafca: High-quality Novel View Synthesis of Expressive Faces from Casual Few-shot Captures: Volumetric modeling and neural radiance field representations have revolutionized 3D face capture and photorealistic novel view synthesis. However, these methods often require hundreds of multi-view input images and are thus inapplicable to cases with less than a handful of inputs. We present a novel volumetric prior on human faces that allows for high-fidelity expressive face modeling from as few as three input views captured in the wild. Our key insight is that an implicit prior trained on synthetic data alone can generalize to extremely challenging real-world identities and expressions and render novel views with fine idiosyncratic details like wrinkles and eyelashes. We leverage a 3D Morphable Face Model to synthesize a large training set, rendering each identity with different expressions, hair, clothing, and other assets. We then train a conditional Neural Radiance Field prior on this synthetic dataset and, at inference time, fine-tune the model on a very sparse set of real images of a single subject. On average, the fine-tuning requires only three inputs to cross the synthetic-to-real domain gap. The resulting personalized 3D model reconstructs strong idiosyncratic facial expressions and outperforms the state-of-the-art in high-quality novel view synthesis of faces from sparse inputs in terms of perceptual and photo-metric quality.<|reference_end|>
arxiv
@article{bühler2024cafca:, title={Cafca: High-quality Novel View Synthesis of Expressive Faces from Casual Few-shot Captures}, author={Marcel C. B\"uhler, Gengyan Li, Erroll Wood, Leonhard Helminger, Xu Chen, Tanmay Shah, Daoye Wang, Stephan Garbin, Sergio Orts-Escolano, Otmar Hilliges, Dmitry Lagun, J\'er\'emy Riviere, Paulo Gotardo, Thabo Beeler, Abhimitra Meka, Kripasindhu Sarkar}, journal={arXiv preprint arXiv:2410.00630}, year={2024}, doi={10.1145/3680528.3687580}, archivePrefix={arXiv}, eprint={2410.00630}, primaryClass={cs.CV cs.AI} }
bühler2024cafca:
arxiv-664064
2410.00637
High-order numerical integration on self-affine sets
<|reference_start|>High-order numerical integration on self-affine sets: We construct an interpolatory high-order cubature rule to compute integrals of smooth functions over self-affine sets with respect to an invariant measure. The main difficulty is the computation of the cubature weights, which we characterize algebraically, by exploiting a self-similarity property of the integral. We propose an $h$-version and a $p$-version of the cubature, present an error analysis and conduct numerical experiments.<|reference_end|>
arxiv
@article{joly2024high-order, title={High-order numerical integration on self-affine sets}, author={Patrick Joly and Maryna Kachanovska and Zo\"is Moitier}, journal={arXiv preprint arXiv:2410.00637}, year={2024}, archivePrefix={arXiv}, eprint={2410.00637}, primaryClass={math.NA cs.NA} }
joly2024high-order
arxiv-664065
2410.00639
On the Creation of Representative Samples of Software Repositories
<|reference_start|>On the Creation of Representative Samples of Software Repositories: Software repositories is one of the sources of data in Empirical Software Engineering, primarily in the Mining Software Repositories field, aimed at extracting knowledge from the dynamics and practice of software projects. With the emergence of social coding platforms such as GitHub, researchers have now access to millions of software repositories to use as source data for their studies. With this massive amount of data, sampling techniques are needed to create more manageable datasets. The creation of these datasets is a crucial step, and researchers have to carefully select the repositories to create representative samples according to a set of variables of interest. However, current sampling methods are often based on random selection or rely on variables which may not be related to the research study (e.g., popularity or activity). In this paper, we present a methodology for creating representative samples of software repositories, where such representativeness is properly aligned with both the characteristics of the population of repositories and the requirements of the empirical study. We illustrate our approach with use cases based on Hugging Face repositories.<|reference_end|>
arxiv
@article{gorostidi2024on, title={On the Creation of Representative Samples of Software Repositories}, author={June Gorostidi, Adem Ait, Jordi Cabot, Javier Luis C\'anovas Izquierdo}, journal={arXiv preprint arXiv:2410.00639}, year={2024}, archivePrefix={arXiv}, eprint={2410.00639}, primaryClass={cs.SE} }
gorostidi2024on
arxiv-664066
2410.00643
Cross-Camera Data Association via GNN for Supervised Graph Clustering
<|reference_start|>Cross-Camera Data Association via GNN for Supervised Graph Clustering: Cross-camera data association is one of the cornerstones of the multi-camera computer vision field. Although often integrated into detection and tracking tasks through architecture design and loss definition, it is also recognized as an independent challenge. The ultimate goal is to connect appearances of one item from all cameras, wherever it is visible. Therefore, one possible perspective on this task involves supervised clustering of the affinity graph, where nodes are instances captured by all cameras. They are represented by appropriate visual features and positional attributes. We leverage the advantages of GNN (Graph Neural Network) architecture to examine nodes' relations and generate representative edge embeddings. These embeddings are then classified to determine the existence or non-existence of connections in node pairs. Therefore, the core of this approach is graph connectivity prediction. Experimental validation was conducted on multicamera pedestrian datasets across diverse environments such as the laboratory, basketball court, and terrace. Our proposed method, named SGC-CCA, outperformed the state-of-the-art method named GNN-CCA across all clustering metrics, offering an end-to-end clustering solution without the need for graph post-processing. The code is available at https://github.com/djordjened92/cca-gnnclust.<|reference_end|>
arxiv
@article{nedeljković2024cross-camera, title={Cross-Camera Data Association via GNN for Supervised Graph Clustering}, author={{\DJ}or{\dj}e Nedeljkovi\'c}, journal={arXiv preprint arXiv:2410.00643}, year={2024}, archivePrefix={arXiv}, eprint={2410.00643}, primaryClass={cs.CV} }
nedeljković2024cross-camera
arxiv-664067
2410.00644
PARSIR: a Package for Effective Parallel Discrete Event Simulation on Multi-processor Machines
<|reference_start|>PARSIR: a Package for Effective Parallel Discrete Event Simulation on Multi-processor Machines: In this article we present PARSIR (PARallel SImulation Runner), a package that enables the effective exploitation of shared-memory multi-processor machines for running discrete event simulation models. PARSIR is a compile/run-time environment for discrete event simulation models developed with the {\tt C} programming language. The architecture of PARSIR has been designed in order to keep low the amount of CPU-cycles required for running models. This is achieved via the combination of a set of techniques like: 1) causally consistent batch-processing of simulation events at an individual simulation object for caching effectiveness; 2) high likelihood of disjoint access parallelism; 3) the favoring of memory accesses on local NUMA (Non-Uniform-Memory-Access) nodes in the architecture, while still enabling well balanced workload distribution via work-stealing from remote nodes; 4) the use of RMW (Read-Modify-Write) machine instructions for fast access to simulation engine data required by the worker threads for managing the concurrent simulation objects and distributing the workload. Furthermore, any architectural solution embedded in the PARSIR engine is fully transparent to the application level code implementing the simulation model. We also provide experimental results showing the effectiveness of PARSIR when running the reference PHOLD benchmark on a NUMA shared-memory multi-processor machine equipped with 40 CPUs.<|reference_end|>
arxiv
@article{quaglia2024parsir:, title={PARSIR: a Package for Effective Parallel Discrete Event Simulation on Multi-processor Machines}, author={Francesco Quaglia}, journal={arXiv preprint arXiv:2410.00644}, year={2024}, archivePrefix={arXiv}, eprint={2410.00644}, primaryClass={cs.DC} }
quaglia2024parsir:
arxiv-664068
2410.00645
ICL-TSVD: Bridging Theory and Practice in Continual Learning with Pre-trained Models
<|reference_start|>ICL-TSVD: Bridging Theory and Practice in Continual Learning with Pre-trained Models: The goal of continual learning (CL) is to train a model that can solve multiple tasks presented sequentially. Recent CL approaches have achieved strong performance by leveraging large pre-trained models that generalize well to downstream tasks. However, such methods lack theoretical guarantees, making them prone to unexpected failures. Conversely, principled CL approaches often fail to achieve competitive performance. In this work, we bridge this gap between theory and practice by integrating an empirically strong approach (RanPAC) into a principled framework, Ideal Continual Learner (ICL), designed to prevent forgetting. Specifically, we lift pre-trained features into a higher dimensional space and formulate an over-parametrized minimum-norm least-squares problem. We find that the lifted features are highly ill-conditioned, potentially leading to large training errors (numerical instability) and increased generalization errors (double descent). We address these challenges by continually truncating the singular value decomposition (SVD) of the lifted features. Our approach, termed ICL-TSVD, is stable with respect to the choice of hyperparameters, can handle hundreds of tasks, and outperforms state-of-the-art CL methods on multiple datasets. Importantly, our method satisfies a recurrence relation throughout its continual learning process, which allows us to prove it maintains small training and generalization errors by appropriately truncating a fraction of SVD factors. This results in a stable continual learning method with strong empirical performance and theoretical guarantees.<|reference_end|>
arxiv
@article{peng2024icl-tsvd:, title={ICL-TSVD: Bridging Theory and Practice in Continual Learning with Pre-trained Models}, author={Liangzu Peng, Juan Elenter, Joshua Agterberg, Alejandro Ribeiro, Ren\'e Vidal}, journal={arXiv preprint arXiv:2410.00645}, year={2024}, archivePrefix={arXiv}, eprint={2410.00645}, primaryClass={cs.LG} }
peng2024icl-tsvd:
arxiv-664069
2410.00649
LASMP: Language Aided Subset Sampling Based Motion Planner
<|reference_start|>LASMP: Language Aided Subset Sampling Based Motion Planner: This paper presents the Language Aided Subset Sampling Based Motion Planner (LASMP), a system that helps mobile robots plan their movements by using natural language instructions. LASMP uses a modified version of the Rapidly Exploring Random Tree (RRT) method, which is guided by user-provided commands processed through a language model (RoBERTa). The system improves efficiency by focusing on specific areas of the robot's workspace based on these instructions, making it faster and less resource-intensive. Compared to traditional RRT methods, LASMP reduces the number of nodes needed by 55% and cuts random sample queries by 80%, while still generating safe, collision-free paths. Tested in both simulated and real-world environments, LASMP has shown better performance in handling complex indoor scenarios. The results highlight the potential of combining language processing with motion planning to make robot navigation more efficient.<|reference_end|>
arxiv
@article{bhattacharjee2024lasmp:, title={LASMP: Language Aided Subset Sampling Based Motion Planner}, author={Saswati Bhattacharjee, Anirban Sinha, Chinwe Ekenna}, journal={arXiv preprint arXiv:2410.00649}, year={2024}, archivePrefix={arXiv}, eprint={2410.00649}, primaryClass={cs.RO cs.AI cs.HC cs.LG} }
bhattacharjee2024lasmp:
arxiv-664070
2410.00650
A Survey on Testing and Analysis of Quantum Software
<|reference_start|>A Survey on Testing and Analysis of Quantum Software: Quantum computing is getting increasing interest from both academia and industry, and the quantum software landscape has been growing rapidly. The quantum software stack comprises quantum programs, implementing algorithms, and platforms like IBM Qiskit, Google Cirq, and Microsoft Q#, enabling their development. To ensure the reliability and performance of quantum software, various techniques for testing and analyzing it have been proposed, such as test generation, bug pattern detection, and circuit optimization. However, the large amount of work and the fact that work on quantum software is performed by several research communities, make it difficult to get a comprehensive overview of the existing techniques. In this work, we provide an extensive survey of the state of the art in testing and analysis of quantum software. We discuss literature from several research communities, including quantum computing, software engineering, programming languages, and formal methods. Our survey covers a wide range of topics, including expected and unexpected behavior of quantum programs, testing techniques, program analysis approaches, optimizations, and benchmarks for testing and analyzing quantum software. We create novel connections between the discussed topics and present them in an accessible way. Finally, we discuss key challenges and open problems to inspire future research.<|reference_end|>
arxiv
@article{paltenghi2024a, title={A Survey on Testing and Analysis of Quantum Software}, author={Matteo Paltenghi, Michael Pradel}, journal={arXiv preprint arXiv:2410.00650}, year={2024}, archivePrefix={arXiv}, eprint={2410.00650}, primaryClass={cs.SE quant-ph} }
paltenghi2024a
arxiv-664071
2410.00654
Explainable Multi-Stakeholder Job Recommender Systems
<|reference_start|>Explainable Multi-Stakeholder Job Recommender Systems: Public opinion on recommender systems has become increasingly wary in recent years. In line with this trend, lawmakers have also started to become more critical of such systems, resulting in the introduction of new laws focusing on aspects such as privacy, fairness, and explainability for recommender systems and AI at large. These concepts are especially crucial in high-risk domains such as recruitment. In recruitment specifically, decisions carry substantial weight, as the outcomes can significantly impact individuals' careers and companies' success. Additionally, there is a need for a multi-stakeholder approach, as these systems are used by job seekers, recruiters, and companies simultaneously, each with its own requirements and expectations. In this paper, I summarize my current research on the topic of explainable, multi-stakeholder job recommender systems and set out a number of future research directions.<|reference_end|>
arxiv
@article{schellingerhout2024explainable, title={Explainable Multi-Stakeholder Job Recommender Systems}, author={Roan Schellingerhout}, journal={arXiv preprint arXiv:2410.00654}, year={2024}, doi={10.1145/3640457.3688014}, archivePrefix={arXiv}, eprint={2410.00654}, primaryClass={cs.HC cs.AI} }
schellingerhout2024explainable
arxiv-664072
2410.00655
AutoTM 2.0: Automatic Topic Modeling Framework for Documents Analysis
<|reference_start|>AutoTM 20: Automatic Topic Modeling Framework for Documents Analysis: In this work, we present an AutoTM 2.0 framework for optimizing additively regularized topic models. Comparing to the previous version, this version includes such valuable improvements as novel optimization pipeline, LLM-based quality metrics and distributed mode. AutoTM 2.0 is a comfort tool for specialists as well as non-specialists to work with text documents to conduct exploratory data analysis or to perform clustering task on interpretable set of features. Quality evaluation is based on specially developed metrics such as coherence and gpt-4-based approaches. Researchers and practitioners can easily integrate new optimization algorithms and adapt novel metrics to enhance modeling quality and extend their experiments. We show that AutoTM 2.0 achieves better performance compared to the previous AutoTM by providing results on 5 datasets with different features and in two different languages.<|reference_end|>
arxiv
@article{khodorchenko2024autotm, title={AutoTM 2.0: Automatic Topic Modeling Framework for Documents Analysis}, author={Maria Khodorchenko and Nikolay Butakov and Maxim Zuev and Denis Nasonov}, journal={arXiv preprint arXiv:2410.00655}, year={2024}, archivePrefix={arXiv}, eprint={2410.00655}, primaryClass={cs.LG cs.CL} }
khodorchenko2024autotm
arxiv-664073
2410.00656
Circuit and Graver Walks and Linear and Integer Programming
<|reference_start|>Circuit and Graver Walks and Linear and Integer Programming: We show that a circuit walk from a given feasible point of a given linear program to an optimal point can be computed in polynomial time using only linear algebra operations and the solution of the single given linear program. We also show that a Graver walk from a given feasible point of a given integer program to an optimal point is polynomial time computable using an integer programming oracle, but without such an oracle, it is hard to compute such a walk even if an optimal solution to the given program is given as well. Combining our oracle algorithm with recent results on sparse integer programming, we also show that Graver walks from any point are polynomial time computable over matrices of bounded tree-depth and subdeterminants.<|reference_end|>
arxiv
@article{onn2024circuit, title={Circuit and Graver Walks and Linear and Integer Programming}, author={Shmuel Onn}, journal={Discrete Optimization, 54:100862 (7 pages), 2024}, year={2024}, doi={10.1016/j.disopt.2024.100862}, archivePrefix={arXiv}, eprint={2410.00656}, primaryClass={math.OC cs.DM cs.DS math.CO} }
onn2024circuit
arxiv-664074
2410.00659
Multimodal Coherent Explanation Generation of Robot Failures
<|reference_start|>Multimodal Coherent Explanation Generation of Robot Failures: The explainability of a robot's actions is crucial to its acceptance in social spaces. Explaining why a robot fails to complete a given task is particularly important for non-expert users to be aware of the robot's capabilities and limitations. So far, research on explaining robot failures has only considered generating textual explanations, even though several studies have shown the benefits of multimodal ones. However, a simple combination of multiple modalities may lead to semantic incoherence between the information across different modalities - a problem that is not well-studied. An incoherent multimodal explanation can be difficult to understand, and it may even become inconsistent with what the robot and the human observe and how they perform reasoning with the observations. Such inconsistencies may lead to wrong conclusions about the robot's capabilities. In this paper, we introduce an approach to generate coherent multimodal explanations by checking the logical coherence of explanations from different modalities, followed by refinements as required. We propose a classification approach for coherence assessment, where we evaluate if an explanation logically follows another. Our experiments suggest that fine-tuning a neural network that was pre-trained to recognize textual entailment, performs well for coherence assessment of multimodal explanations. Code & data: https://pradippramanick.github.io/coherent-explain/.<|reference_end|>
arxiv
@article{pramanick2024multimodal, title={Multimodal Coherent Explanation Generation of Robot Failures}, author={Pradip Pramanick, Silvia Rossi}, journal={arXiv preprint arXiv:2410.00659}, year={2024}, archivePrefix={arXiv}, eprint={2410.00659}, primaryClass={cs.RO cs.AI} }
pramanick2024multimodal
arxiv-664075
2410.00660
Stabilizing the Kumaraswamy Distribution
<|reference_start|>Stabilizing the Kumaraswamy Distribution: Large-scale latent variable models require expressive continuous distributions that support efficient sampling and low-variance differentiation, achievable through the reparameterization trick. The Kumaraswamy (KS) distribution is both expressive and supports the reparameterization trick with a simple closed-form inverse CDF. Yet, its adoption remains limited. We identify and resolve numerical instabilities in the inverse CDF and log-pdf, exposing issues in libraries like PyTorch and TensorFlow. We then introduce simple and scalable latent variable models based on the KS, improving exploration-exploitation trade-offs in contextual multi-armed bandits and enhancing uncertainty quantification for link prediction with graph neural networks. Our results support the stabilized KS distribution as a core component in scalable variational models for bounded latent variables.<|reference_end|>
arxiv
@article{wasserman2024stabilizing, title={Stabilizing the Kumaraswamy Distribution}, author={Max Wasserman, Gonzalo Mateos}, journal={arXiv preprint arXiv:2410.00660}, year={2024}, archivePrefix={arXiv}, eprint={2410.00660}, primaryClass={cs.LG stat.ML} }
wasserman2024stabilizing
arxiv-664076
2410.00661
Integrating PETs into Software Applications: A Game-Based Learning Approach
<|reference_start|>Integrating PETs into Software Applications: A Game-Based Learning Approach: The absence of data protection measures in software applications leads to data breaches, threatening end-user privacy and causing instabilities in organisations that developed those software. Privacy Enhancing Technologies (PETs) emerge as promising safeguards against data breaches. PETs minimise threats to personal data while enabling software to extract valuable insights from them. However, software developers often lack the adequate knowledge and awareness to develop PETs integrated software. This issue is exacerbated by insufficient PETs related learning approaches customised for software developers. Therefore, we propose "PETs-101", a novel game-based learning framework that motivates developers to integrate PETs into software. By doing so, it aims to improve developers' privacy-preserving software development behaviour rather than simply delivering the learning content on PETs. In future, the proposed framework will be empirically investigated and used as a foundation for developing an educational gaming intervention that trains developers to put PETs into practice.<|reference_end|>
arxiv
@article{boteju2024integrating, title={Integrating PETs into Software Applications: A Game-Based Learning Approach}, author={Maisha Boteju, Thilina Ranbaduge, Dinusha Vatsalan, Nalin Arachchilage}, journal={Forty-Fifth International Conference on Information Systems, Bangkok, Thailand 2024}, year={2024}, archivePrefix={arXiv}, eprint={2410.00661}, primaryClass={cs.CR} }
boteju2024integrating
arxiv-664077
2410.00664
Warped geometries of Segre-Veronese manifolds
<|reference_start|>Warped geometries of Segre-Veronese manifolds: Segre-Veronese manifolds are smooth manifolds consisting of partially symmetric rank-$1$ tensors. They are naturally viewed as submanifolds of a Euclidean space of tensors. However, they can also be equipped with other metrics than their induced metrics, such as warped product metrics. We investigate a one-parameter family of warped geometries, which includes the standard Euclidean geometry, and whose parameter controls by how much spherical tangent directions are weighted relative to radial tangent directions. We compute the exponential and logarithmic maps of these warped Segre-Veronese manifolds, including the corresponding starting and connecting geodesics. A closed formula for the intrinsic distance between points on the warped Segre-Veronese manifold is presented. We determine for which warping parameters Segre-Veronese manifolds are geodesically connected and show that they are not geodesically connected in the standard Euclidean geometry. The benefits of connecting geodesics may outweigh using the Euclidean geometry in certain applications. One such application is presented: numerically computing the Riemannian center of mass for averaging rank-$1$ tensors.<|reference_end|>
arxiv
@article{jacobsson2024warped, title={Warped geometries of Segre-Veronese manifolds}, author={Simon Jacobsson, Lars Swijsen, Joeri Van der Veken, Nick Vannieuwenhoven}, journal={arXiv preprint arXiv:2410.00664}, year={2024}, archivePrefix={arXiv}, eprint={2410.00664}, primaryClass={math.NA cs.NA math.DG} }
jacobsson2024warped
arxiv-664078
2410.00665
TAVRNN: Temporal Attention-enhanced Variational Graph RNN Captures Neural Dynamics and Behavior
<|reference_start|>TAVRNN: Temporal Attention-enhanced Variational Graph RNN Captures Neural Dynamics and Behavior: We introduce Temporal Attention-enhanced Variational Graph Recurrent Neural Network (TAVRNN), a novel framework for analyzing the evolving dynamics of neuronal connectivity networks in response to external stimuli and behavioral feedback. TAVRNN captures temporal changes in network structure by modeling sequential snapshots of neuronal activity, enabling the identification of key connectivity patterns. Leveraging temporal attention mechanisms and variational graph techniques, TAVRNN uncovers how connectivity shifts align with behavior over time. We validate TAVRNN on two datasets: in vivo calcium imaging data from freely behaving rats and novel in vitro electrophysiological data from the DishBrain system, where biological neurons control a simulated environment during the game of pong. We show that TAVRNN outperforms previous baseline models in classification and clustering tasks and in computational efficiency while accurately linking connectivity changes to performance variations. Crucially, TAVRNN reveals that high game performance in the DishBrain system correlates with the alignment of sensory and motor subregion channels, a relationship not evident in earlier models. This framework represents the first application of dynamic graph representation of electrophysiological (neuronal) data from the DishBrain system, providing insights into the reorganization of neuronal networks during learning. TAVRNN's ability to differentiate between neuronal states associated with successful and unsuccessful learning outcomes offers significant implications for real-time monitoring and manipulation of biological neuronal systems.<|reference_end|>
arxiv
@article{khajehnejad2024tavrnn:, title={TAVRNN: Temporal Attention-enhanced Variational Graph RNN Captures Neural Dynamics and Behavior}, author={Moein Khajehnejad, Forough Habibollahi, Ahmad Khajehnejad, Brett J. Kagan, Adeel Razi}, journal={arXiv preprint arXiv:2410.00665}, year={2024}, archivePrefix={arXiv}, eprint={2410.00665}, primaryClass={q-bio.NC cs.LG cs.NE} }
khajehnejad2024tavrnn:
arxiv-664079
2410.00667
Contribution of soundscape appropriateness to soundscape quality assessment in space: a mediating variable affecting acoustic comfort
<|reference_start|>Contribution of soundscape appropriateness to soundscape quality assessment in space: a mediating variable affecting acoustic comfort: Soundscape appropriateness (SA) provides supplemental information on the matching degree between auditory information and the surrounding scene in soundscape perception. This indicator has been integrated into the standard ISO process for collecting soundscape data, forming a component of the sound quality assessment questionnaire. However, its role in soundscape quality assessment has not been fully understood. Herein, we present the findings from soundscape data collected from Beiling Park in Shenyang, China. A method was developed that integrates mediation effect models with multiscale geographically weighted regression (MGWR) models to explore the mediating role of SA in the impact of sound source types on soundscape quality, as well as the spatial heterogeneity of this mediation effect. The results confirm that SA does mediate the influence of sound source types on acoustic comfort (AC). Specifically, natural sounds (indirect effect / total effect = 0.19 / 0.19), traffic sounds (indirect effect / total effect = -0.46 / -0.65), and commercial sounds (indirect effect / total effect = -0.25 / -0.12) impact the perception of AC by either enhancing or reducing SA. Moreover, the relationships among variables depicted in this model exhibit spatial heterogeneity, indicating that in urban open spaces with complex structures, local spatial models may be needed for soundscape assessment. The research reaffirms the significance of SA in urban open spaces. In terms of practical implications for urban and landscape planners, when sound sources cannot be controlled or altered, coordinating between the sound and the surrounding environment through landscape optimisation could also improve the quality of the soundscape by enhancing SA and help achieve the goal of creating healthy urban open spaces.<|reference_end|>
arxiv
@article{yang2024contribution, title={Contribution of soundscape appropriateness to soundscape quality assessment in space: a mediating variable affecting acoustic comfort}, author={Xinhao Yang, Guangyu Zhang, Xiaodong Lu, Yuan Zhang, Jian Kang}, journal={arXiv preprint arXiv:2410.00667}, year={2024}, archivePrefix={arXiv}, eprint={2410.00667}, primaryClass={cs.SD eess.AS physics.class-ph} }
yang2024contribution
arxiv-664080
2410.00668
Unifying a Public Software Ecosystem: How Omaolo Responded to the COVID-19 Challenge
<|reference_start|>Unifying a Public Software Ecosystem: How Omaolo Responded to the COVID-19 Challenge: Public actors are often seen as slow, especially in renewing information systems, due to complex tendering and competition regulations, which delay decisions. This challenge is even greater in multi-company ecosystems. However, when faced with a common threat, the ecosystem needs to unite to face the challenge. This study explores how the Omaolo ecosystem in Finland evolved from traditional public-private cooperation to an alliance model during the COVID-19 pandemic from 2020 to 2022. It highlights how the crisis accelerated changes in operations and collaboration between public and private participants, identifying key shifts, benefits, and challenges. Key findings include the removal of traditional barriers and the creation of an alliance approach that sped up the development of Omaolo's symptom assessment tool. This improved collaboration, service scalability, and responsiveness to healthcare needs despite the initial regulatory and stakeholder alignment challenges. The study concludes that crises can drive agile responses in public ecosystems. The new collaboration model helped Omaolo to adapt quickly to changing service demands, managing healthcare patient loads more effectively. These findings highlight the value of flexible, collaborative strategies for responding to emergencies in public software ecosystems.<|reference_end|>
arxiv
@article{kolehmainen2024unifying, title={Unifying a Public Software Ecosystem: How Omaolo Responded to the COVID-19 Challenge}, author={Taija Kolehmainen, Reetta Ghezzi, Sami Hyrynsalmi, Tommi Mikkonen, Samuli Pekkola, Manu Set{\"a}l{\"a}}, journal={arXiv preprint arXiv:2410.00668}, year={2024}, archivePrefix={arXiv}, eprint={2410.00668}, primaryClass={cs.SI} }
kolehmainen2024unifying
arxiv-664081
2410.00672
GMT: Enhancing Generalizable Neural Rendering via Geometry-Driven Multi-Reference Texture Transfer
<|reference_start|>GMT: Enhancing Generalizable Neural Rendering via Geometry-Driven Multi-Reference Texture Transfer: Novel view synthesis (NVS) aims to generate images at arbitrary viewpoints using multi-view images, and recent insights from neural radiance fields (NeRF) have contributed to remarkable improvements. Recently, studies on generalizable NeRF (G-NeRF) have addressed the challenge of per-scene optimization in NeRFs. The construction of radiance fields on-the-fly in G-NeRF simplifies the NVS process, making it well-suited for real-world applications. Meanwhile, G-NeRF still struggles in representing fine details for a specific scene due to the absence of per-scene optimization, even with texture-rich multi-view source inputs. As a remedy, we propose a Geometry-driven Multi-reference Texture transfer network (GMT) available as a plug-and-play module designed for G-NeRF. Specifically, we propose ray-imposed deformable convolution (RayDCN), which aligns input and reference features reflecting scene geometry. Additionally, the proposed texture preserving transformer (TP-Former) aggregates multi-view source features while preserving texture information. Consequently, our module enables direct interaction between adjacent pixels during the image enhancement process, which is deficient in G-NeRF models with an independent rendering process per pixel. This addresses constraints that hinder the ability to capture high-frequency details. Experiments show that our plug-and-play module consistently improves G-NeRF models on various benchmark datasets.<|reference_end|>
arxiv
@article{yoon2024gmt:, title={GMT: Enhancing Generalizable Neural Rendering via Geometry-Driven Multi-Reference Texture Transfer}, author={Youngho Yoon, Hyun-Kurl Jang, and Kuk-Jin Yoon}, journal={arXiv preprint arXiv:2410.00672}, year={2024}, archivePrefix={arXiv}, eprint={2410.00672}, primaryClass={cs.CV} }
yoon2024gmt:
arxiv-664082
2410.00675
Fibrational perspectives on determinization of finite-state automata
<|reference_start|>Fibrational perspectives on determinization of finite-state automata: Colcombet and Petri\c{s}an argued that automata may be usefully considered from a functorial perspective, introducing a general notion of "$\mathcal{V}$-automaton" based on functors into $\mathcal{V}$. This enables them to recover different standard notions of automata by choosing $\mathcal{V}$ appropriately, and they further analyzed the determinization for \textbf{Rel}-automata using the Kleisli adjunction between \textbf{Set} and \textbf{Rel}. In this paper, we revisit Colcombet and Petri\c{s}an's analysis from a fibrational perspective, building on Melli\`es and Zeilberger's recent alternative but related definition of categorical automata as functors $p : \mathcal{Q} \to \mathcal{C}$ satisfying the finitary fiber and unique lifting of factorizations property. In doing so, we improve the understanding of determinization in three regards: Firstly, we carefully describe the universal property of determinization in terms of forward-backward simulations. Secondly, we generalize the determinization procedure for \textbf{Rel} automata using a local adjunction between \textbf{SpanSet} and \textbf{Rel}, which provides us with a canonical forward simulation. Finally, we also propose an alternative determinization based on the multiset relative adjunction which retains paths, and we leverage this to provide a canonical forward-backward simulation.<|reference_end|>
arxiv
@article{li2024fibrational, title={Fibrational perspectives on determinization of finite-state automata}, author={Thea Li}, journal={arXiv preprint arXiv:2410.00675}, year={2024}, archivePrefix={arXiv}, eprint={2410.00675}, primaryClass={math.CT cs.FL cs.LO} }
li2024fibrational
arxiv-664083
2410.00676
User-Guided Verification of Security Protocols via Sound Animation
<|reference_start|>User-Guided Verification of Security Protocols via Sound Animation: Current formal verification of security protocols relies on specialized researchers and complex tools, inaccessible to protocol designers who informally evaluate their work with emulators. This paper addresses this gap by embedding symbolic analysis into the design process. Our approach implements the Dolev-Yao attack model using a variant of CSP based on Interaction Trees (ITrees) to compile protocols into animators -- executable programs that designers can use for debugging and inspection. To guarantee the soundness of our compilation, we mechanised our approach in the theorem prover Isabelle/HOL. As traditionally done with symbolic tools, we refer to the Diffie-Hellman key exchange and the Needham-Schroeder public-key protocol (and Lowe's patched variant). We demonstrate how our animator can easily reveal the mechanics of attacks and verify corrections. This work facilitates security integration at the design level and supports further security property analysis and software-engineered integrations.<|reference_end|>
arxiv
@article{ye2024user-guided, title={User-Guided Verification of Security Protocols via Sound Animation}, author={Kangfeng Ye, Roberto Metere, Poonam Yadav}, journal={arXiv preprint arXiv:2410.00676}, year={2024}, archivePrefix={arXiv}, eprint={2410.00676}, primaryClass={cs.CR cs.LO} }
ye2024user-guided
arxiv-664084
2410.00678
On high-order/low-order and micro-macro methods for implicit time-stepping of the BGK model
<|reference_start|>On high-order/low-order and micro-macro methods for implicit time-stepping of the BGK model: In this paper, a high-order/low-order (HOLO) method is combined with a micro-macro (MM) decomposition to accelerate iterative solvers in fully implicit time-stepping of the BGK equation for gas dynamics. The MM formulation represents a kinetic distribution as the sum of a local Maxwellian and a perturbation. In highly collisional regimes, the perturbation away from initial and boundary layers is small and can be compressed to reduce the overall storage cost of the distribution. The convergence behavior of the MM methods, the usual HOLO method, and the standard source iteration method is analyzed on a linear BGK model. Both the HOLO and MM methods are implemented using a discontinuous Galerkin (DG) discretization in phase space, which naturally preserves the consistency between high- and low-order models required by the HOLO approach. The accuracy and performance of these methods are compared on the Sod shock tube problem and a sudden wall heating boundary layer problem. Overall, the results demonstrate the robustness of the MM and HOLO approaches and illustrate the compression benefits enabled by the MM formulation when the kinetic distribution is near equilibrium.<|reference_end|>
arxiv
@article{hauck2024on, title={On high-order/low-order and micro-macro methods for implicit time-stepping of the BGK model}, author={Cory Hauck, M. Paul Laiu, Stefan Schnake}, journal={arXiv preprint arXiv:2410.00678}, year={2024}, archivePrefix={arXiv}, eprint={2410.00678}, primaryClass={math.NA cs.NA} }
hauck2024on
arxiv-664085
2410.00680
The Conformer Encoder May Reverse the Time Dimension
<|reference_start|>The Conformer Encoder May Reverse the Time Dimension: We sometimes observe monotonically decreasing cross-attention weights in our Conformer-based global attention-based encoder-decoder (AED) models. Further investigation shows that the Conformer encoder internally reverses the sequence in the time dimension. We analyze the initial behavior of the decoder cross-attention mechanism and find that it encourages the Conformer encoder self-attention to build a connection between the initial frames and all other informative frames. Furthermore, we show that, at some point in training, the self-attention module of the Conformer starts dominating the output over the preceding feed-forward module, which then only allows the reversed information to pass through. We propose several methods and ideas of how this flipping can be avoided. Additionally, we investigate a novel method to obtain label-frame-position alignments by using the gradients of the label log probabilities w.r.t. the encoder input frames.<|reference_end|>
arxiv
@article{schmitt2024the, title={The Conformer Encoder May Reverse the Time Dimension}, author={Robin Schmitt, Albert Zeyer, Mohammad Zeineldeen, Ralf Schl{\"u}ter, Hermann Ney}, journal={arXiv preprint arXiv:2410.00680}, year={2024}, archivePrefix={arXiv}, eprint={2410.00680}, primaryClass={eess.AS cs.SD stat.ML} }
schmitt2024the
arxiv-664086
2410.00681
Advanced Arabic Alphabet Sign Language Recognition Using Transfer Learning and Transformer Models
<|reference_start|>Advanced Arabic Alphabet Sign Language Recognition Using Transfer Learning and Transformer Models: This paper presents an Arabic Alphabet Sign Language recognition approach, using deep learning methods in conjunction with transfer learning and transformer-based models. We study the performance of the different variants on two publicly available datasets, namely ArSL2018 and AASL. This study makes full use of state-of-the-art CNN architectures like ResNet50, MobileNetV2, and EfficientNetB7, and the latest transformer models such as Google ViT and Microsoft Swin Transformer. These pre-trained models have been fine-tuned on the above datasets in an attempt to capture some unique features of Arabic sign language motions. Experimental results present evidence that the suggested methodology can achieve high recognition accuracy, up to 99.6\% and 99.43\% on ArSL2018 and AASL, respectively, far surpassing previously reported state-of-the-art approaches. This performance opens up even more avenues for communication that may be more accessible to Arabic-speaking deaf and hard-of-hearing individuals, and thus encourages a more inclusive society.<|reference_end|>
arxiv
@article{balat2024advanced, title={Advanced Arabic Alphabet Sign Language Recognition Using Transfer Learning and Transformer Models}, author={Mazen Balat, Rewaa Awaad, Hend Adel, Ahmed B. Zaky, Salah A. Aly}, journal={IEEE ICCA 2024}, year={2024}, archivePrefix={arXiv}, eprint={2410.00681}, primaryClass={cs.CV cs.AI cs.LG} }
balat2024advanced
arxiv-664087
2410.00683
Efficient Technical Term Translation: A Knowledge Distillation Approach for Parenthetical Terminology Translation
<|reference_start|>Efficient Technical Term Translation: A Knowledge Distillation Approach for Parenthetical Terminology Translation: This paper addresses the challenge of accurately translating technical terms, which are crucial for clear communication in specialized fields. We introduce the Parenthetical Terminology Translation (PTT) task, designed to mitigate potential inaccuracies by displaying the original term in parentheses alongside its translation. To implement this approach, we generated a representative PTT dataset using a collaborative approach with large language models and applied knowledge distillation to fine-tune traditional Neural Machine Translation (NMT) models and small-sized Large Language Models (sLMs). Additionally, we developed a novel evaluation metric to assess both overall translation accuracy and the correct parenthetical presentation of terms. Our findings indicate that sLMs did not consistently outperform NMT models, with fine-tuning proving more effective than few-shot prompting, particularly in models with continued pre-training in the target language. These insights contribute to the advancement of more reliable terminology translation methodologies.<|reference_end|>
arxiv
@article{myung2024efficient, title={Efficient Technical Term Translation: A Knowledge Distillation Approach for Parenthetical Terminology Translation}, author={Jiyoon Myung, Jihyeon Park, Jungki Son, Kyungro Lee, Joohyung Han}, journal={arXiv preprint arXiv:2410.00683}, year={2024}, archivePrefix={arXiv}, eprint={2410.00683}, primaryClass={cs.CL cs.AI} }
myung2024efficient
arxiv-664088
2410.00687
High-order primal mixed finite element method for boundary-value correction on curved domain
<|reference_start|>High-order primal mixed finite element method for boundary-value correction on curved domain: This paper addresses the non-homogeneous Neumann boundary condition on domains with curved boundaries. We consider the Raviart-Thomas element (RT$_k$) of degree $k \geq 1$ on a triangular mesh. A key feature of our boundary value correction method is the shift from the true boundary to a surrogate boundary. We present a high-order version of the method, achieving an $O(h^{k+1/2})$ convergence estimate in the $L^2$-norm for the velocity field and an $O(h^k)$ convergence estimate in the $H^1$-norm for the pressure. Finally, numerical experiments validate our theoretical results.<|reference_end|>
arxiv
@article{hou2024high-order, title={High-order primal mixed finite element method for boundary-value correction on curved domain}, author={Yongli Hou, Yi Liu and Tengjin Zhao}, journal={arXiv preprint arXiv:2410.00687}, year={2024}, archivePrefix={arXiv}, eprint={2410.00687}, primaryClass={math.NA cs.NA} }
hou2024high-order
arxiv-664089
2410.00688
Supercomputer 3D Digital Twin for User Focused Real-Time Monitoring
<|reference_start|>Supercomputer 3D Digital Twin for User Focused Real-Time Monitoring: Real-time supercomputing performance analysis is a critical aspect of evaluating and optimizing computational systems in a dynamic user environment. The operation of supercomputers produces vast quantities of analytic data from multiple sources and of varying types, so compiling this data in an efficient manner is critical to the process. MIT Lincoln Laboratory Supercomputing Center has been utilizing the Unity 3D game engine to create a Digital Twin of our supercomputing systems for several years to perform system monitoring. Unity offers robust visualization capabilities making it ideal for creating a sophisticated representation of the computational processes. As we scale the systems to include a diversity of resources such as accelerators and the addition of more users, we need to implement new analysis tools for the monitoring system. The workloads in research continuously change, as does the capability of Unity, and this allows us to adapt our monitoring tools to scale and incorporate features enabling efficient replay of system wide events, user isolation, and machine level granularity. Our system fully takes advantage of the modern capabilities of the Unity Engine in a way that intuitively represents the real time workload performed on a supercomputer. It allows HPC system engineers to quickly diagnose usage related errors with its responsive user interface which scales efficiently with large data sets.<|reference_end|>
arxiv
@article{bergeron2024supercomputer, title={Supercomputer 3D Digital Twin for User Focused Real-Time Monitoring}, author={William Bergeron, Matthew Hubbell, Daniel Mojica, Albert Reuther, William Arcand, David Bestor, Daniel Burrill, Chansup Byun, Vijay Gadepally, Michael Houle, Hayden Jananthan, Michael Jones, Piotr Luszczek, Peter Michaleas, Lauren Milechin, Julie Mullen, Andrew Prout, Antonio Rosa, Charles Yee, Jeremy Kepner}, journal={arXiv preprint arXiv:2410.00688}, year={2024}, archivePrefix={arXiv}, eprint={2410.00688}, primaryClass={cs.DC} }
bergeron2024supercomputer
arxiv-664090
2410.00689
Multimodal Auto Validation For Self-Refinement in Web Agents
<|reference_start|>Multimodal Auto Validation For Self-Refinement in Web Agents: As our world digitizes, web agents that can automate complex and monotonous tasks are becoming essential in streamlining workflows. This paper introduces an approach to improving web agent performance through multi-modal validation and self-refinement. We present a comprehensive study of different modalities (text, vision) and the effect of hierarchy for the automatic validation of web agents, building upon the state-of-the-art Agent-E web automation framework. We also introduce a self-refinement mechanism for web automation, using the developed auto-validator, that enables web agents to detect and self-correct workflow failures. Our results show significant gains over the prior state-of-the-art performance of Agent-E (a SOTA web agent), boosting task-completion rates from 76.2\% to 81.24\% on the subset of the WebVoyager benchmark. The approach presented in this paper paves the way for more reliable digital assistants in complex, real-world scenarios.<|reference_end|>
arxiv
@article{azam2024multimodal, title={Multimodal Auto Validation For Self-Refinement in Web Agents}, author={Ruhana Azam and Tamer Abuelsaad and Aditya Vempaty and Ashish Jagmohan}, journal={arXiv preprint arXiv:2410.00689}, year={2024}, archivePrefix={arXiv}, eprint={2410.00689}, primaryClass={cs.AI cs.SE} }
azam2024multimodal
arxiv-664091
2410.00690
Beyond Minimax Rates in Group Distributionally Robust Optimization via a Novel Notion of Sparsity
<|reference_start|>Beyond Minimax Rates in Group Distributionally Robust Optimization via a Novel Notion of Sparsity: The minimax sample complexity of group distributionally robust optimization (GDRO) has been determined up to a $\log(K)$ factor, for $K$ the number of groups. In this work, we venture beyond the minimax perspective via a novel notion of sparsity that we dub $(\lambda, \beta)$-sparsity. In short, this condition means that at any parameter $\theta$, there is a set of at most $\beta$ groups whose risks at $\theta$ all are at least $\lambda$ larger than the risks of the other groups. To find an $\epsilon$-optimal $\theta$, we show via a novel algorithm and analysis that the $\epsilon$-dependent term in the sample complexity can swap a linear dependence on $K$ for a linear dependence on the potentially much smaller $\beta$. This improvement leverages recent progress in sleeping bandits, showing a fundamental connection between the two-player zero-sum game optimization framework for GDRO and per-action regret bounds in sleeping bandits. The aforementioned result assumes having a particular $\lambda$ as input. Perhaps surprisingly, we next show an adaptive algorithm which, up to log factors, gets sample complexity that adapts to the best $(\lambda, \beta)$-sparsity condition that holds. Finally, for a particular input $\lambda$, we also show how to get a dimension-free sample complexity result.<|reference_end|>
arxiv
@article{nguyen2024beyond, title={Beyond Minimax Rates in Group Distributionally Robust Optimization via a Novel Notion of Sparsity}, author={Quan Nguyen, Nishant A. Mehta, Crist{\'o}bal Guzm{\'a}n}, journal={arXiv preprint arXiv:2410.00690}, year={2024}, archivePrefix={arXiv}, eprint={2410.00690}, primaryClass={cs.LG cs.AI math.OC} }
nguyen2024beyond
arxiv-664092
2410.00693
Optimizing Photoplethysmography-Based Sleep Staging Models by Leveraging Temporal Context for Wearable Devices Applications
<|reference_start|>Optimizing Photoplethysmography-Based Sleep Staging Models by Leveraging Temporal Context for Wearable Devices Applications: Accurate sleep stage classification is crucial for diagnosing sleep disorders and evaluating sleep quality. While polysomnography (PSG) remains the gold standard, photoplethysmography (PPG) is more practical due to its affordability and widespread use in wearable devices. However, state-of-the-art sleep staging methods often require prolonged continuous signal acquisition, making them impractical for wearable devices due to high energy consumption. Shorter signal acquisitions are more feasible but less accurate. Our work proposes an adapted sleep staging model based on top-performing state-of-the-art methods and evaluates its performance with different PPG segment sizes. We concatenate 30-second PPG segments over 15-minute intervals to leverage longer segment contexts. This approach achieved an accuracy of 0.75, a Cohen's Kappa of 0.60, an F1-Weighted score of 0.74, and an F1-Macro score of 0.60. Although reducing segment size decreased sensitivity for deep and REM stages, our strategy outperformed single 30-second window methods, particularly for these stages.<|reference_end|>
arxiv
@article{quino2024optimizing, title={Optimizing Photoplethysmography-Based Sleep Staging Models by Leveraging Temporal Context for Wearable Devices Applications}, author={Joseph A. P. Quino, Diego A. C. Cardenas, Marcelo A. F. Toledo, Felipe M. Dias, Estela Ribeiro, Jose E. Krieger and Marco A. Gutierrez}, journal={arXiv preprint arXiv:2410.00693}, year={2024}, archivePrefix={arXiv}, eprint={2410.00693}, primaryClass={eess.SP cs.HC cs.LG} }
quino2024optimizing
arxiv-664093
2410.00695
E-MPC: Edge-assisted Model Predictive Control
<|reference_start|>E-MPC: Edge-assisted Model Predictive Control: Model predictive control (MPC) has become the de facto standard action space for local planning and learning-based control in many continuous robotic control tasks, including autonomous driving. MPC solves a long-horizon cost optimization as a series of short-horizon optimizations based on a global planner-supplied reference path. The primary challenge in MPC, however, is that the computational budget for re-planning has a hard limit, which frequently inhibits exact optimization. Modern edge networks provide low-latency communication and heterogeneous properties that can be especially beneficial in this situation. We propose a novel framework for edge-assisted MPC (E-MPC) for path planning that exploits the heterogeneity of edge networks in three important ways: 1) varying computational capacity, 2) localized sensor information, and 3) localized observation histories. Theoretical analysis and extensive simulations are undertaken to demonstrate quantitatively the benefits of E-MPC in various scenarios, including maps, channel dynamics, and availability and density of edge nodes. The results confirm that E-MPC has the potential to reduce costs by a greater percentage than standard MPC does.<|reference_end|>
arxiv
@article{lou2024e-mpc:, title={E-MPC: Edge-assisted Model Predictive Control}, author={Yuan-Yao Lou, Jonathan Spencer, Kwang Taik Kim, Mung Chiang}, journal={arXiv preprint arXiv:2410.00695}, year={2024}, archivePrefix={arXiv}, eprint={2410.00695}, primaryClass={cs.DC cs.RO} }
lou2024e-mpc:
arxiv-664094
2410.00696
Stroboscopic averaging methods to study autoresonance and other problems with slowly varying forcing frequencies
<|reference_start|>Stroboscopic averaging methods to study autoresonance and other problems with slowly varying forcing frequencies: Autoresonance is a phenomenon of physical interest that may take place when a nonlinear oscillator is forced at a frequency that varies slowly. The stroboscopic averaging method (SAM), which provides an efficient numerical technique for the integration of highly oscillatory systems, cannot be used directly to study autoresonance due to the slow changes of the forcing frequency. We study how to modify SAM to cater for such slow variations. Numerical experiments show the computational advantages of using SAM.<|reference_end|>
arxiv
@article{calvo2024stroboscopic, title={Stroboscopic averaging methods to study autoresonance and other problems with slowly varying forcing frequencies}, author={M.P. Calvo, J.M. Sanz-Serna and Beibei Zhu}, journal={arXiv preprint arXiv:2410.00696}, year={2024}, archivePrefix={arXiv}, eprint={2410.00696}, primaryClass={math.NA cs.NA} }
calvo2024stroboscopic
arxiv-664095
2410.00698
Analysis of Cross-Domain Message Passing for OTFS Transmissions
<|reference_start|>Analysis of Cross-Domain Message Passing for OTFS Transmissions: In this paper, we investigate the performance of the cross-domain iterative detection (CDID) framework with orthogonal time frequency space (OTFS) modulation, where two distinct CDID algorithms are presented. The proposed schemes estimate/detect the information symbols iteratively across the frequency domain and the delay-Doppler (DD) domain via passing either the a posteriori or extrinsic information. Building upon this framework, we investigate the error performance by considering the bias evolution and state evolution. Furthermore, we discuss their error performance in convergence and the DD domain error state lower bounds in each iteration. Specifically, we demonstrate that in convergence, the ultimate error performance of the CDID passing the a posteriori information can be characterized by two potential convergence points. In contrast, the ultimate error performance of the CDID passing the extrinsic information has only one convergence point, which, interestingly, aligns with the matched filter bound. Our numerical results confirm our analytical findings and unveil the promising error performance achieved by the proposed designs.<|reference_end|>
arxiv
@article{chong2024analysis, title={Analysis of Cross-Domain Message Passing for OTFS Transmissions}, author={Ruoxi Chong, Shuangyang Li, Zhiqiang Wei, Michail Matthaiou, Derrick Wing Kwan Ng, and Giuseppe Caire}, journal={arXiv preprint arXiv:2410.00698}, year={2024}, archivePrefix={arXiv}, eprint={2410.00698}, primaryClass={cs.IT eess.SP math.IT} }
chong2024analysis
arxiv-664096
2410.00699
Investigating the Impact of Model Complexity in Large Language Models
<|reference_start|>Investigating the Impact of Model Complexity in Large Language Models: Large Language Models (LLMs) based on the pre-trained fine-tuning paradigm have become pivotal in solving natural language processing tasks, consistently achieving state-of-the-art performance. Nevertheless, the theoretical understanding of how model complexity influences fine-tuning performance remains challenging and has not been well explored yet. In this paper, we focus on autoregressive LLMs and propose to employ Hidden Markov Models (HMMs) to model them. Based on the HMM modeling, we investigate the relationship between model complexity and the generalization capability in downstream tasks. Specifically, we consider a popular tuning paradigm for downstream tasks, head tuning, where all pre-trained parameters are frozen and only individual heads are trained atop pre-trained LLMs. Our theoretical analysis reveals that the risk initially increases and then decreases with rising model complexity, showcasing a "double descent" phenomenon. In this case, the initial "descent" is degenerate, signifying that the "sweet spot" where bias and variance are balanced occurs when the model size is zero. Obtaining the conclusion presented in this study confronts several challenges, primarily revolving around effectively modeling autoregressive LLMs and downstream tasks, as well as conducting a comprehensive risk analysis for multivariate regression. Our research is substantiated by experiments conducted on data generated from HMMs, which provided empirical support and alignment with our theoretical insights.<|reference_end|>
arxiv
@article{luo2024investigating, title={Investigating the Impact of Model Complexity in Large Language Models}, author={Jing Luo, Huiyuan Wang, Weiran Huang}, journal={arXiv preprint arXiv:2410.00699}, year={2024}, archivePrefix={arXiv}, eprint={2410.00699}, primaryClass={cs.LG stat.ML} }
luo2024investigating
arxiv-664097
2410.00700
Mining Your Own Secrets: Diffusion Classifier Scores for Continual Personalization of Text-to-Image Diffusion Models
<|reference_start|>Mining Your Own Secrets: Diffusion Classifier Scores for Continual Personalization of Text-to-Image Diffusion Models: Personalized text-to-image diffusion models have grown popular for their ability to efficiently acquire a new concept from user-defined text descriptions and a few images. However, in the real world, a user may wish to personalize a model on multiple concepts but one at a time, with no access to the data from previous concepts due to storage/privacy concerns. When faced with this continual learning (CL) setup, most personalization methods fail to find a balance between acquiring new concepts and retaining previous ones -- a challenge that continual personalization (CP) aims to solve. Inspired by the successful CL methods that rely on class-specific information for regularization, we resort to the inherent class-conditioned density estimates, also known as diffusion classifier (DC) scores, for continual personalization of text-to-image diffusion models. Namely, we propose using DC scores for regularizing the parameter-space and function-space of text-to-image diffusion models, to achieve continual personalization. Using several diverse evaluation setups, datasets, and metrics, we show that our proposed regularization-based CP methods outperform the state-of-the-art C-LoRA, and other baselines. Finally, by operating in the replay-free CL setup and on low-rank adapters, our method incurs zero storage and parameter overhead, respectively, over the state-of-the-art.<|reference_end|>
arxiv
@article{jha2024mining, title={Mining Your Own Secrets: Diffusion Classifier Scores for Continual Personalization of Text-to-Image Diffusion Models}, author={Saurav Jha and Shiqi Yang and Masato Ishii and Mengjie Zhao and Christian Simon and Muhammad Jehanzeb Mirza and Dong Gong and Lina Yao and Shusuke Takahashi and Yuki Mitsufuji}, journal={arXiv preprint arXiv:2410.00700}, year={2024}, archivePrefix={arXiv}, eprint={2410.00700}, primaryClass={cs.CV cs.AI} }
jha2024mining
arxiv-664098
2410.00702
FlashMix: Fast Map-Free LiDAR Localization via Feature Mixing and Contrastive-Constrained Accelerated Training
<|reference_start|>FlashMix: Fast Map-Free LiDAR Localization via Feature Mixing and Contrastive-Constrained Accelerated Training: Map-free LiDAR localization systems accurately localize within known environments by predicting sensor position and orientation directly from raw point clouds, eliminating the need for large maps and descriptors. However, their long training times hinder rapid adaptation to new environments. To address this, we propose FlashMix, which uses a frozen, scene-agnostic backbone to extract local point descriptors, aggregated with an MLP mixer to predict sensor pose. A buffer of local descriptors is used to accelerate training by orders of magnitude, combined with metric learning or contrastive loss regularization of aggregated descriptors to improve performance and convergence. We evaluate FlashMix on various LiDAR localization benchmarks, examining different regularizations and aggregators, demonstrating its effectiveness for rapid and accurate LiDAR localization in real-world scenarios. The code is available at https://github.com/raktimgg/FlashMix.<|reference_end|>
arxiv
@article{goswami2024flashmix:, title={FlashMix: Fast Map-Free LiDAR Localization via Feature Mixing and Contrastive-Constrained Accelerated Training}, author={Raktim Gautam Goswami and Naman Patel and Prashanth Krishnamurthy and Farshad Khorrami}, journal={arXiv preprint arXiv:2410.00702}, year={2024}, archivePrefix={arXiv}, eprint={2410.00702}, primaryClass={cs.CV} }
goswami2024flashmix:
arxiv-664099
2410.00703
Koopman Spectral Analysis from Noisy Measurements based on Bayesian Learning and Kalman Smoothing
<|reference_start|>Koopman Spectral Analysis from Noisy Measurements based on Bayesian Learning and Kalman Smoothing: Koopman spectral analysis plays a crucial role in understanding and modeling nonlinear dynamical systems as it reveals key system behaviors and long-term dynamics. However, the presence of measurement noise poses a significant challenge to accurately extracting spectral properties. In this work, we propose a robust method for identifying the Koopman operator and extracting its spectral characteristics in noisy environments. To address the impact of noise, our approach tackles an identification problem that accounts for both systematic errors from finite-dimensional approximations and measurement noise in the data. By incorporating Bayesian learning and Kalman smoothing, the method simultaneously identifies the Koopman operator and estimates system states, effectively decoupling these two error sources. The method's efficiency and robustness are demonstrated through extensive experiments, showcasing its accuracy across varying noise levels.<|reference_end|>
arxiv
@article{zeng2024koopman, title={Koopman Spectral Analysis from Noisy Measurements based on Bayesian Learning and Kalman Smoothing}, author={Zhexuan Zeng and Jun Zhou and Yasen Wang and Zuowei Ping}, journal={arXiv preprint arXiv:2410.00703}, year={2024}, archivePrefix={arXiv}, eprint={2410.00703}, primaryClass={eess.SY cs.SY math.DS} }
zeng2024koopman
arxiv-664100
2410.00704
Contrastive Abstraction for Reinforcement Learning
<|reference_start|>Contrastive Abstraction for Reinforcement Learning: Learning agents with reinforcement learning is difficult when dealing with long trajectories that involve a large number of states. To address these learning problems effectively, the number of states can be reduced by abstract representations that cluster states. In principle, deep reinforcement learning can find abstract states, but end-to-end learning is unstable. We propose contrastive abstraction learning to find abstract states, where we assume that successive states in a trajectory belong to the same abstract state. Such abstract states may be basic locations, achieved subgoals, inventory, or health conditions. Contrastive abstraction learning first constructs clusters of state representations by contrastive learning and then applies modern Hopfield networks to determine the abstract states. The first phase of contrastive abstraction learning is self-supervised learning, where contrastive learning forces states with sequential proximity to have similar representations. The second phase uses modern Hopfield networks to map similar state representations to the same fixed point, i.e., to an abstract state. The level of abstraction can be adjusted by determining the number of fixed points of the modern Hopfield network. Furthermore, contrastive abstraction learning does not require rewards and facilitates efficient reinforcement learning for a wide range of downstream tasks. Our experiments demonstrate the effectiveness of contrastive abstraction learning for reinforcement learning.<|reference_end|>
arxiv
@article{patil2024contrastive, title={Contrastive Abstraction for Reinforcement Learning}, author={Vihang Patil and Markus Hofmarcher and Elisabeth Rumetshofer and Sepp Hochreiter}, journal={arXiv preprint arXiv:2410.00704}, year={2024}, archivePrefix={arXiv}, eprint={2410.00704}, primaryClass={cs.LG cs.AI} }
patil2024contrastive