corpus_id (string, 7-12 chars) | paper_id (string, 9-16 chars) | title (string, 1-261 chars) | abstract (string, 70-4.02k chars) | source (1 class) | bibtex (string, 208-20.9k chars) | citation_key (string, 6-100 chars) |
---|---|---|---|---|---|---|
arxiv-661301
|
2409.15926
|
QHyper: an integration library for hybrid quantum-classical optimization
|
<|reference_start|>QHyper: an integration library for hybrid quantum-classical optimization: We propose the QHyper library, which is aimed at researchers working on computational experiments with a variety of quantum combinatorial optimization solvers. The library offers a simple and extensible interface for formulating combinatorial optimization problems, selecting and running solvers, and optimizing hyperparameters. The supported solver set includes variational gate-based algorithms, quantum annealers, and classical solutions. The solvers can be combined with provided local and global (hyper)optimizers. The main features of the library are its extensibility on different levels of use as well as a straightforward and flexible experiment configuration format presented in the paper.<|reference_end|>
|
arxiv
|
@article{lamża2024qhyper:,
title={QHyper: an integration library for hybrid quantum-classical optimization},
author={Tomasz Lam\.za, Justyna Zawalska, Kacper Jurek, Mariusz Sterzel,
Katarzyna Rycerz},
journal={arXiv preprint arXiv:2409.15926},
year={2024},
archivePrefix={arXiv},
eprint={2409.15926},
primaryClass={cs.MS}
}
|
lamża2024qhyper:
|
arxiv-661302
|
2409.15927
|
Facing Asymmetry -- Uncovering the Causal Link between Facial Symmetry and Expression Classifiers using Synthetic Interventions
|
<|reference_start|>Facing Asymmetry -- Uncovering the Causal Link between Facial Symmetry and Expression Classifiers using Synthetic Interventions: Understanding expressions is vital for deciphering human behavior, and nowadays, end-to-end trained black box models achieve high performance. Due to the black-box nature of these models, it is unclear how they behave when applied out-of-distribution. Specifically, these models show decreased performance for unilateral facial palsy patients. We hypothesize that one crucial factor guiding the internal decision rules is facial symmetry. In this work, we use insights from causal reasoning to investigate the hypothesis. After deriving a structural causal model, we develop a synthetic interventional framework. This approach allows us to analyze how facial symmetry impacts a network's output behavior while keeping other factors fixed. All 17 investigated expression classifiers significantly lower their output activations for reduced symmetry. This result is congruent with observed behavior on real-world data from healthy subjects and facial palsy patients. As such, our investigation serves as a case study for identifying causal factors that influence the behavior of black-box models.<|reference_end|>
|
arxiv
|
@article{büchner2024facing,
title={Facing Asymmetry -- Uncovering the Causal Link between Facial Symmetry
and Expression Classifiers using Synthetic Interventions},
author={Tim B\"uchner, Niklas Penzel, Orlando Guntinas-Lichius, Joachim
Denzler},
journal={arXiv preprint arXiv:2409.15927},
year={2024},
archivePrefix={arXiv},
eprint={2409.15927},
primaryClass={cs.CV}
}
|
büchner2024facing
|
arxiv-661303
|
2409.15930
|
On the Lifecycle of a Lightning Network Payment Channel
|
<|reference_start|>On the Lifecycle of a Lightning Network Payment Channel: The Bitcoin Lightning Network, launched in 2018, serves as a layer 2 scaling solution for Bitcoin. The Lightning Network allows users to establish channels between each other and subsequently exchange off-chain payments. Together, these channels form a network that facilitates payments between parties even if they do not have a channel in common. The Lightning Network has gained popularity over the past five years as it offers an attractive alternative to on-chain transactions by substantially reducing transaction costs and processing times. Nevertheless, due to the privacy-centric design of the Lightning Network, little is understood about its inner workings. In this work, we conduct a measurement study of the Lightning Network to shed light on the lifecycle of channels. By combining Lightning gossip messages with on-chain Bitcoin data, we investigate the lifecycle of a channel from its opening through its lifetime to its closing. In particular, our analysis offers unique insights into the utilization patterns of the Lightning Network. Even more so, through decoding the channel closing transactions, we obtain the first dataset of Lightning Network payments, observe the imbalance of channels during the closing, and investigate whether both parties are involved in the closing, or one closes the channel unilaterally. For instance, we find nearly 60% of cooperatively closed channels are resurrected, i.e., their outputs were used to fund another channel.<|reference_end|>
|
arxiv
|
@article{grötschla2024on,
title={On the Lifecycle of a Lightning Network Payment Channel},
author={Florian Gr\"otschla and Lioba Heimbach and Severin Richner and Roger
Wattenhofer},
journal={arXiv preprint arXiv:2409.15930},
year={2024},
archivePrefix={arXiv},
eprint={2409.15930},
primaryClass={cs.DC}
}
|
grötschla2024on
|
arxiv-661304
|
2409.15931
|
Automatic Registration of SHG and H&E Images with Feature-based Initial Alignment and Intensity-based Instance Optimization: Contribution to the COMULIS Challenge
|
<|reference_start|>Automatic Registration of SHG and H&E Images with Feature-based Initial Alignment and Intensity-based Instance Optimization: Contribution to the COMULIS Challenge: The automatic registration of noninvasive second-harmonic generation microscopy to hematoxylin and eosin slides is a highly desired, yet still unsolved problem. The task is challenging because the second-harmonic images contain only partial information, in contrast to the stained H&E slides that provide more information about the tissue morphology. Moreover, both imaging methods have different intensity distributions. Therefore, the task can be formulated as a multi-modal registration problem with missing data. In this work, we propose a method based on automatic keypoint matching followed by deformable registration based on instance optimization. The method does not require any training and is evaluated using the dataset provided in the Learn2Reg challenge by the COMULIS organization. The method achieved relatively good generalizability resulting in 88% of success rate in the initial alignment and average target registration error equal to 2.48 on the external validation set. We openly release the source code and incorporate it in the DeeperHistReg image registration framework.<|reference_end|>
|
arxiv
|
@article{wodzinski2024automatic,
title={Automatic Registration of SHG and H&E Images with Feature-based Initial
Alignment and Intensity-based Instance Optimization: Contribution to the
COMULIS Challenge},
author={Marek Wodzinski, Henning M\"uller},
journal={arXiv preprint arXiv:2409.15931},
year={2024},
archivePrefix={arXiv},
eprint={2409.15931},
primaryClass={cs.CV}
}
|
wodzinski2024automatic
|
arxiv-661305
|
2409.15933
|
SLIMER-IT: Zero-Shot NER on Italian Language
|
<|reference_start|>SLIMER-IT: Zero-Shot NER on Italian Language: Traditional approaches to Named Entity Recognition (NER) frame the task into a BIO sequence labeling problem. Although these systems often excel in the downstream task at hand, they require extensive annotated data and struggle to generalize to out-of-distribution input domains and unseen entity types. On the contrary, Large Language Models (LLMs) have demonstrated strong zero-shot capabilities. While several works address Zero-Shot NER in English, little has been done in other languages. In this paper, we define an evaluation framework for Zero-Shot NER, applying it to the Italian language. Furthermore, we introduce SLIMER-IT, the Italian version of SLIMER, an instruction-tuning approach for zero-shot NER leveraging prompts enriched with definition and guidelines. Comparisons with other state-of-the-art models, demonstrate the superiority of SLIMER-IT on never-seen-before entity tags.<|reference_end|>
|
arxiv
|
@article{zamai2024slimer-it:,
title={SLIMER-IT: Zero-Shot NER on Italian Language},
author={Andrew Zamai, Leonardo Rigutini, Marco Maggini, Andrea Zugarini},
journal={arXiv preprint arXiv:2409.15933},
year={2024},
archivePrefix={arXiv},
eprint={2409.15933},
primaryClass={cs.CL cs.IR}
}
|
zamai2024slimer-it:
|
arxiv-661306
|
2409.15934
|
Automated test generation to evaluate tool-augmented LLMs as conversational AI agents
|
<|reference_start|>Automated test generation to evaluate tool-augmented LLMs as conversational AI agents: Tool-augmented LLMs are a promising approach to create AI agents that can have realistic conversations, follow procedures, and call appropriate functions. However, evaluating them is challenging due to the diversity of possible conversations, and existing datasets focus only on single interactions and function-calling. We present a test generation pipeline to evaluate LLMs as conversational AI agents. Our framework uses LLMs to generate diverse tests grounded on user-defined procedures. For that, we use intermediate graphs to limit the LLM test generator's tendency to hallucinate content that is not grounded on input procedures, and enforces high coverage of the possible conversations. Additionally, we put forward ALMITA, a manually curated dataset for evaluating AI agents in customer support, and use it to evaluate existing LLMs. Our results show that while tool-augmented LLMs perform well in single interactions, they often struggle to handle complete conversations. While our focus is on customer support, our method is general and capable of AI agents for different domains.<|reference_end|>
|
arxiv
|
@article{arcadinho2024automated,
title={Automated test generation to evaluate tool-augmented LLMs as
conversational AI agents},
author={Samuel Arcadinho, David Aparicio, Mariana Almeida},
journal={arXiv preprint arXiv:2409.15934},
year={2024},
archivePrefix={arXiv},
eprint={2409.15934},
primaryClass={cs.CL cs.AI cs.LG}
}
|
arcadinho2024automated
|
arxiv-661307
|
2409.15936
|
DepMamba: Progressive Fusion Mamba for Multimodal Depression Detection
|
<|reference_start|>DepMamba: Progressive Fusion Mamba for Multimodal Depression Detection: Depression is a common mental disorder that affects millions of people worldwide. Although promising, current multimodal methods hinge on aligned or aggregated multimodal fusion, suffering two significant limitations: (i) inefficient long-range temporal modeling, and (ii) sub-optimal multimodal fusion between intermodal fusion and intramodal processing. In this paper, we propose an audio-visual progressive fusion Mamba for multimodal depression detection, termed DepMamba. DepMamba features two core designs: hierarchical contextual modeling and progressive multimodal fusion. On the one hand, hierarchical modeling introduces convolution neural networks and Mamba to extract the local-to-global features within long-range sequences. On the other hand, the progressive fusion first presents a multimodal collaborative State Space Model (SSM) extracting intermodal and intramodal information for each modality, and then utilizes a multimodal enhanced SSM for modality cohesion. Extensive experimental results on two large-scale depression datasets demonstrate the superior performance of our DepMamba over existing state-of-the-art methods. Code is available at https://github.com/Jiaxin-Ye/DepMamba.<|reference_end|>
|
arxiv
|
@article{ye2024depmamba:,
title={DepMamba: Progressive Fusion Mamba for Multimodal Depression Detection},
author={Jiaxin Ye and Junping Zhang and Hongming Shan},
journal={arXiv preprint arXiv:2409.15936},
year={2024},
archivePrefix={arXiv},
eprint={2409.15936},
primaryClass={cs.CY cs.CV cs.HC}
}
|
ye2024depmamba:
|
arxiv-661308
|
2409.15937
|
Numerical determination of the width and shape of the effective string using Stochastic Normalizing Flows
|
<|reference_start|>Numerical determination of the width and shape of the effective string using Stochastic Normalizing Flows: Flow-based architectures have recently proved to be an efficient tool for numerical simulations of Effective String Theories regularized on the lattice that otherwise cannot be efficiently sampled by standard Monte Carlo methods. In this work we use Stochastic Normalizing Flows, a state-of-the-art deep-learning architecture based on non-equilibrium Monte Carlo simulations, to study different effective string models. After testing the reliability of this approach through a comparison with exact results for the Nambu-Got\={o} model, we discuss results on observables that are challenging to study analytically, such as the width of the string and the shape of the flux density. Furthermore, we perform a novel numerical study of Effective String Theories with terms beyond the Nambu-Got\={o} action, including a broader discussion on their significance for lattice gauge theories. These results establish the reliability and feasibility of flow-based samplers for Effective String Theories and pave the way for future applications on more complex models.<|reference_end|>
|
arxiv
|
@article{caselle2024numerical,
title={Numerical determination of the width and shape of the effective string
using Stochastic Normalizing Flows},
author={Michele Caselle, Elia Cellini, Alessandro Nada},
journal={arXiv preprint arXiv:2409.15937},
year={2024},
archivePrefix={arXiv},
eprint={2409.15937},
primaryClass={hep-lat cs.LG hep-th}
}
|
caselle2024numerical
|
arxiv-661309
|
2409.15939
|
Self-supervised Shape Completion via Involution and Implicit Correspondences
|
<|reference_start|>Self-supervised Shape Completion via Involution and Implicit Correspondences: 3D shape completion is traditionally solved using supervised training or by distribution learning on complete shape examples. Recently self-supervised learning approaches that do not require any complete 3D shape examples have gained more interests. In this paper, we propose a non-adversarial self-supervised approach for the shape completion task. Our first finding is that completion problems can be formulated as an involutory function trivially, which implies a special constraint on the completion function G, such that G(G(X)) = X. Our second constraint on self-supervised shape completion relies on the fact that shape completion becomes easier to solve with correspondences and similarly, completion can simplify the correspondences problem. We formulate a consistency measure in the canonical space in order to supervise the completion function. We efficiently optimize the completion and correspondence modules using "freeze and alternate" strategy. The overall approach performs well for rigid shapes in a category as well as dynamic non-rigid shapes. We ablate our design choices and compare our solution against state-of-the-art methods, showing remarkable accuracy approaching supervised accuracy in some cases.<|reference_end|>
|
arxiv
|
@article{liu2024self-supervised,
title={Self-supervised Shape Completion via Involution and Implicit
Correspondences},
author={Mengya Liu, Ajad Chhatkuli, Janis Postels, Luc Van Gool, and Federico
Tombari},
journal={arXiv preprint arXiv:2409.15939},
year={2024},
archivePrefix={arXiv},
eprint={2409.15939},
primaryClass={cs.CV}
}
|
liu2024self-supervised
|
arxiv-661310
|
2409.15940
|
A Formalization of Image Vectorization by Region Merging
|
<|reference_start|>A Formalization of Image Vectorization by Region Merging: Image vectorization converts raster images into vector graphics composed of regions separated by curves. Typical vectorization methods first define the regions by grouping similar colored regions via color quantization, then approximate their boundaries by Bezier curves. In that way, the raster input is converted into an SVG format parameterizing the regions' colors and the Bezier control points. This compact representation has many graphical applications thanks to its universality and resolution-independence. In this paper, we remark that image vectorization is nothing but an image segmentation, and that it can be built by fine to coarse region merging. Our analysis of the problem leads us to propose a vectorization method alternating region merging and curve smoothing. We formalize the method by alternate operations on the dual and primal graph induced from any domain partition. In that way, we address a limitation of current vectorization methods, which separate the update of regional information from curve approximation. We formalize region merging methods by associating them with various gain functionals, including the classic Beaulieu-Goldberg and Mumford-Shah functionals. More generally, we introduce and compare region merging criteria involving region number, scale, area, and internal standard deviation. We also show that the curve smoothing, implicit in all vectorization methods, can be performed by the shape-preserving affine scale space. We extend this flow to a network of curves and give a sufficient condition for the topological preservation of the segmentation. The general vectorization method that follows from this analysis shows explainable behaviors, explicitly controlled by a few intuitive parameters. It is experimentally compared to state-of-the-art software and proved to have comparable or superior fidelity and cost efficiency.<|reference_end|>
|
arxiv
|
@article{he2024a,
title={A Formalization of Image Vectorization by Region Merging},
author={Roy Y. He, Sung Ha Kang, Jean-Michel Morel},
journal={arXiv preprint arXiv:2409.15940},
year={2024},
archivePrefix={arXiv},
eprint={2409.15940},
primaryClass={cs.CV cs.GR cs.NA math.NA}
}
|
he2024a
|
arxiv-661311
|
2409.15941
|
Sampling in CMA-ES: Low Numbers of Low Discrepancy Points
|
<|reference_start|>Sampling in CMA-ES: Low Numbers of Low Discrepancy Points: The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is one of the most successful examples of a derandomized evolution strategy. However, it still relies on randomly sampling offspring, which can be done via a uniform distribution and subsequently transforming into the required Gaussian. Previous work has shown that replacing this uniform sampling with a low-discrepancy sampler, such as Halton or Sobol sequences, can improve performance over a wide set of problems. We show that iterating through small, fixed sets of low-discrepancy points can still perform better than the default uniform distribution. Moreover, using only 128 points throughout the search is sufficient to closely approximate the empirical performance of using the complete pseudorandom sequence up to dimensionality 40 on the BBOB benchmark. For lower dimensionalities (below 10), we find that using as little as 32 unique low discrepancy points performs similar or better than uniform sampling. In 2D, for which we have highly optimized low discrepancy samples available, we demonstrate that using these points yields the highest empirical performance and requires only 16 samples to improve over uniform sampling. Overall, we establish a clear relation between the $L_2$ discrepancy of the used point set and the empirical performance of the CMA-ES.<|reference_end|>
|
arxiv
|
@article{de nobel2024sampling,
title={Sampling in CMA-ES: Low Numbers of Low Discrepancy Points},
author={Jacob de Nobel, Diederick Vermetten, Thomas H.W. B\"ack, Anna V.
Kononova},
journal={arXiv preprint arXiv:2409.15941},
year={2024},
archivePrefix={arXiv},
eprint={2409.15941},
primaryClass={cs.NE}
}
|
de nobel2024sampling
|
arxiv-661312
|
2409.15947
|
Vulnerabilities that arise from poor governance in Distributed Ledger Technologies
|
<|reference_start|>Vulnerabilities that arise from poor governance in Distributed Ledger Technologies: Current implementations of governance in Distributed Ledger Technologies leave them susceptible to a number of attacks. We survey the state of the art of Distributed Ledger Technologies (DLTs) governance protocols and work carried out to systematise good governance properties in the context of DLTs. We then select the most appropriate taxonomy of good governance properties and point to formal security notions that good governance protocols should satisfy. We point practitioners to existing solutions to deliver them, where possible. Furthermore, we outline a number of vulnerabilities that arise in the absence of good governance properties. We call on the research community and DLT research practitioners to prioritise delivering these good governance properties and continue to develop tools to do so, to avoid attacks to DLT protocols that exploit their poor governance models.<|reference_end|>
|
arxiv
|
@article{kharman2024vulnerabilities,
title={Vulnerabilities that arise from poor governance in Distributed Ledger
Technologies},
author={Aida Manzano Kharman, William Sanders},
journal={arXiv preprint arXiv:2409.15947},
year={2024},
archivePrefix={arXiv},
eprint={2409.15947},
primaryClass={cs.CR cs.CY}
}
|
kharman2024vulnerabilities
|
arxiv-661313
|
2409.15949
|
Beats of Bias: Analyzing Lyrics with Topic Modeling and Gender Bias Measurements
|
<|reference_start|>Beats of Bias: Analyzing Lyrics with Topic Modeling and Gender Bias Measurements: This paper uses topic modeling and bias measurement techniques to analyze and determine gender bias in English song lyrics. We utilize BERTopic to cluster 537,553 English songs into distinct topics and chart their development over time. Our analysis shows the thematic shift in song lyrics over the years, from themes of romance to the increasing sexualization of women in songs. We observe large amounts of profanity and misogynistic lyrics on various topics, especially in the overall biggest cluster. Furthermore, to analyze gender bias across topics and genres, we employ the Single Category Word Embedding Association Test (SC-WEAT) to compute bias scores for the word embeddings trained on the most popular topics as well as for each genre. We find that words related to intelligence and strength tend to show a male bias across genres, as opposed to appearance and weakness words, which are more female-biased; however, a closer look also reveals differences in biases across topics.<|reference_end|>
|
arxiv
|
@article{chen2024beats,
title={Beats of Bias: Analyzing Lyrics with Topic Modeling and Gender Bias
Measurements},
author={Danqing Chen, Adithi Satish, Rasul Khanbayov, Carolin M. Schuster and
Georg Groh},
journal={arXiv preprint arXiv:2409.15949},
year={2024},
archivePrefix={arXiv},
eprint={2409.15949},
primaryClass={cs.CL}
}
|
chen2024beats
|
arxiv-661314
|
2409.15950
|
TSFeatLIME: An Online User Study in Enhancing Explainability in Univariate Time Series Forecasting
|
<|reference_start|>TSFeatLIME: An Online User Study in Enhancing Explainability in Univariate Time Series Forecasting: Time series forecasting, while vital in various applications, often employs complex models that are difficult for humans to understand. Effective explainable AI techniques are crucial to bridging the gap between model predictions and user understanding. This paper presents a framework - TSFeatLIME, extending TSLIME, tailored specifically for explaining univariate time series forecasting. TSFeatLIME integrates an auxiliary feature into the surrogate model and considers the pairwise Euclidean distances between the queried time series and the generated samples to improve the fidelity of the surrogate models. However, the usefulness of such explanations for human beings remains an open question. We address this by conducting a user study with 160 participants through two interactive interfaces, aiming to measure how individuals from different backgrounds can simulate or predict model output changes in the treatment group and control group. Our results show that the surrogate model under the TSFeatLIME framework is able to better simulate the behaviour of the black-box considering distance, without sacrificing accuracy. In addition, the user study suggests that the explanations were significantly more effective for participants without a computer science background.<|reference_end|>
|
arxiv
|
@article{ma2024tsfeatlime:,
title={TSFeatLIME: An Online User Study in Enhancing Explainability in
Univariate Time Series Forecasting},
author={Hongnan Ma, Kevin McAreavey, Weiru Liu},
journal={arXiv preprint arXiv:2409.15950},
year={2024},
archivePrefix={arXiv},
eprint={2409.15950},
primaryClass={cs.AI}
}
|
ma2024tsfeatlime:
|
arxiv-661315
|
2409.15952
|
Multiscale method for image denoising using nonlinear diffusion process: local denoising and spectral multiscale basis functions
|
<|reference_start|>Multiscale method for image denoising using nonlinear diffusion process: local denoising and spectral multiscale basis functions: We consider image denoising using a nonlinear diffusion process, where we solve unsteady partial differential equations with nonlinear coefficients. The noised image is given as an initial condition, and nonlinear coefficients are used to preserve the main image features. In this paper, we present a multiscale method for the resulting nonlinear parabolic equation in order to construct an efficient solver. To both filter out noise and preserve essential image features during the denoising process, we utilize a time-dependent nonlinear diffusion model known as Perona-Malik. Here, the noised image is fed as an initial condition and the denoised image is stimulated with given parameters. We numerically implement this model by constructing a discrete system for a given image resolution using a finite volume method and employing an implicit time approximation scheme to avoid time-step restriction. However, the resulting discrete system size is proportional to the number of pixels which leads to computationally expensive numerical algorithms for high-resolution images. In order to reduce the size of the system and construct efficient computational algorithms, we construct a coarse-resolution representation of the system using the Generalized Multiscale Finite Element Method (GMsFEM). We incorporate local noise reduction in the coarsening process to construct an efficient algorithm with fewer denoising iterations. We propose a computational approach with two main ingredients: (1) performing local image denoising in each local domain of basis support; and (2) constructing spectral multiscale basis functions to construct a coarse resolution representation by a Galerkin coupling. We present numerical results for several test images to demonstrate the effectiveness of the proposed multiscale approach with local denoising and local spectral representation.<|reference_end|>
|
arxiv
|
@article{vasilyeva2024multiscale,
title={Multiscale method for image denoising using nonlinear diffusion process:
local denoising and spectral multiscale basis functions},
author={Maria Vasilyeva, Aleksei Krasnikov, Kelum Gajamannage, Mehrube
Mehrubeoglu},
journal={arXiv preprint arXiv:2409.15952},
year={2024},
archivePrefix={arXiv},
eprint={2409.15952},
primaryClass={math.NA cs.NA}
}
|
vasilyeva2024multiscale
|
arxiv-661316
|
2409.15953
|
Mind the Prompt: A Novel Benchmark for Prompt-based Class-Agnostic Counting
|
<|reference_start|>Mind the Prompt: A Novel Benchmark for Prompt-based Class-Agnostic Counting: Class-agnostic counting (CAC) is a recent task in computer vision that aims to estimate the number of instances of arbitrary object classes never seen during model training. With the recent advancement of robust vision-and-language foundation models, there is a growing interest in prompt-based CAC, where object categories to be counted can be specified using natural language. However, we identify significant limitations in current benchmarks for evaluating this task, which hinder both accurate assessment and the development of more effective solutions. Specifically, we argue that the current evaluation protocols do not measure the ability of the model to understand which object has to be counted. This is due to two main factors: (i) the shortcomings of CAC datasets, which primarily consist of images containing objects from a single class, and (ii) the limitations of current counting performance evaluators, which are based on traditional class-specific counting and focus solely on counting errors. To fill this gap, we introduce the Prompt-Aware Counting (PrACo) benchmark, which comprises two targeted tests, each accompanied by appropriate evaluation metrics. We evaluate state-of-the-art methods and demonstrate that, although some achieve impressive results on standard class-specific counting metrics, they exhibit a significant deficiency in understanding the input prompt, indicating the need for more careful training procedures or revised designs. The code for reproducing our results is available at https://github.com/ciampluca/PrACo.<|reference_end|>
|
arxiv
|
@article{ciampi2024mind,
title={Mind the Prompt: A Novel Benchmark for Prompt-based Class-Agnostic
Counting},
author={Luca Ciampi, Nicola Messina, Matteo Pierucci, Giuseppe Amato, Marco
Avvenuti, Fabrizio Falchi},
journal={arXiv preprint arXiv:2409.15953},
year={2024},
archivePrefix={arXiv},
eprint={2409.15953},
primaryClass={cs.CV}
}
|
ciampi2024mind
|
arxiv-661317
|
2409.15955
|
A Historical Trajectory Assisted Optimization Method for Zeroth-Order Federated Learning
|
<|reference_start|>A Historical Trajectory Assisted Optimization Method for Zeroth-Order Federated Learning: Federated learning heavily relies on distributed gradient descent techniques. In the situation where gradient information is not available, the gradients need to be estimated from zeroth-order information, which typically involves computing finite-differences along isotropic random directions. This method suffers from high estimation errors, as the geometric features of the objective landscape may be overlooked during the isotropic sampling. In this work, we propose a non-isotropic sampling method to improve the gradient estimation procedure. Gradients in our method are estimated in a subspace spanned by historical trajectories of solutions, aiming to encourage the exploration of promising regions and hence improve the convergence. The proposed method uses a covariance matrix for sampling which is a convex combination of two parts. The first part is a thin projection matrix containing the basis of the subspace which is designed to improve the exploitation ability. The second part is the historical trajectories. We implement this method in zeroth-order federated settings, and show that the convergence rate aligns with existing ones while introducing no significant overheads in communication or local computation. The effectiveness of our proposal is verified on several numerical experiments in comparison to several commonly-used zeroth-order federated optimization algorithms.<|reference_end|>
|
arxiv
|
@article{wu2024a,
title={A Historical Trajectory Assisted Optimization Method for Zeroth-Order
Federated Learning},
author={Chenlin Wu, Xiaoyu He, Zike Li, Jing Gong, Zibin Zheng},
journal={arXiv preprint arXiv:2409.15955},
year={2024},
archivePrefix={arXiv},
eprint={2409.15955},
primaryClass={cs.LG cs.AI}
}
|
wu2024a
|
arxiv-661318
|
2409.15957
|
ASD-Diffusion: Anomalous Sound Detection with Diffusion Models
|
<|reference_start|>ASD-Diffusion: Anomalous Sound Detection with Diffusion Models: Unsupervised Anomalous Sound Detection (ASD) aims to design a generalizable method that can be used to detect anomalies when only normal sounds are given. In this paper, Anomalous Sound Detection based on Diffusion Models (ASD-Diffusion) is proposed for ASD in real-world factories. In our pipeline, the anomalies in acoustic features are reconstructed from their noisy corrupted features into their approximate normal pattern. Secondly, a post-processing anomalies filter algorithm is proposed to detect anomalies that exhibit significant deviation from the original input after reconstruction. Furthermore, denoising diffusion implicit model is introduced to accelerate the inference speed by a longer sampling interval of the denoising process. The proposed method is innovative in the application of diffusion models as a new scheme. Experimental results on the development set of DCASE 2023 challenge task 2 outperform the baseline by 7.75%, demonstrating the effectiveness of the proposed method.<|reference_end|>
|
arxiv
|
@article{zhang2024asd-diffusion:,
title={ASD-Diffusion: Anomalous Sound Detection with Diffusion Models},
author={Fengrun Zhang and Xiang Xie and Kai Guo},
journal={arXiv preprint arXiv:2409.15957},
year={2024},
archivePrefix={arXiv},
eprint={2409.15957},
primaryClass={cs.SD cs.AI eess.AS}
}
|
zhang2024asd-diffusion:
|
arxiv-661319
|
2409.15958
|
An ensemble framework approach of hybrid Quantum convolutional neural networks for classification of breast cancer images
|
<|reference_start|>An ensemble framework approach of hybrid Quantum convolutional neural networks for classification of breast cancer images: Quantum neural networks are deemed suitable to replace classical neural networks in their ability to learn and scale up network models using quantum-exclusive phenomena like superposition and entanglement. However, in the noisy intermediate scale quantum (NISQ) era, the trainability and expressibility of quantum models are yet under investigation. Medical image classification on the other hand, pertains well to applications in deep learning, particularly, convolutional neural networks. In this paper, we carry out a study of three hybrid classical-quantum neural network architectures and combine them using standard ensembling techniques on a breast cancer histopathological dataset. The best accuracy percentage obtained by an individual model is 85.59. Whereas, on performing ensemble, we have obtained accuracy as high as 86.72%, an improvement over the individual hybrid network as well as classical neural network counterparts of the hybrid network models.<|reference_end|>
|
arxiv
|
@article{guha2024an,
title={An ensemble framework approach of hybrid Quantum convolutional neural
networks for classification of breast cancer images},
author={Dibyasree Guha, Shyamali Mitra, Somenath Kuiry, Nibaran Das},
journal={arXiv preprint arXiv:2409.15958},
year={2024},
archivePrefix={arXiv},
eprint={2409.15958},
primaryClass={cs.CV}
}
|
guha2024an
|
arxiv-661320
|
2409.15959
|
Semantics-Controlled Gaussian Splatting for Outdoor Scene Reconstruction and Rendering in Virtual Reality
|
<|reference_start|>Semantics-Controlled Gaussian Splatting for Outdoor Scene Reconstruction and Rendering in Virtual Reality: Advancements in 3D rendering like Gaussian Splatting (GS) allow novel view synthesis and real-time rendering in virtual reality (VR). However, GS-created 3D environments are often difficult to edit. For scene enhancement or to incorporate 3D assets, segmenting Gaussians by class is essential. Existing segmentation approaches are typically limited to certain types of scenes, e.g., ''circular'' scenes, to determine clear object boundaries. However, this method is ineffective when removing large objects in non-''circling'' scenes such as large outdoor scenes. We propose Semantics-Controlled GS (SCGS), a segmentation-driven GS approach, enabling the separation of large scene parts in uncontrolled, natural environments. SCGS allows scene editing and the extraction of scene parts for VR. Additionally, we introduce a challenging outdoor dataset, overcoming the ''circling'' setup. We outperform the state-of-the-art in visual quality on our dataset and in segmentation quality on the 3D-OVS dataset. We conducted an exploratory user study, comparing a 360-video, plain GS, and SCGS in VR with a fixed viewpoint. In our subsequent main study, users were allowed to move freely, evaluating plain GS and SCGS. Our main study results show that participants clearly prefer SCGS over plain GS. We overall present an innovative approach that surpasses the state-of-the-art both technically and in user experience.<|reference_end|>
|
arxiv
|
@article{schieber2024semantics-controlled,
title={Semantics-Controlled Gaussian Splatting for Outdoor Scene Reconstruction
and Rendering in Virtual Reality},
author={Hannah Schieber, Jacob Young, Tobias Langlotz, Stefanie Zollmann,
Daniel Roth},
journal={arXiv preprint arXiv:2409.15959},
year={2024},
archivePrefix={arXiv},
eprint={2409.15959},
primaryClass={cs.CV cs.GR}
}
|
schieber2024semantics-controlled
|
arxiv-661321
|
2409.15961
|
Toward Scalable and Efficient Visual Data Transmission in 6G Networks
|
<|reference_start|>Toward Scalable and Efficient Visual Data Transmission in 6G Networks: 6G network technology will emerge in a landscape where visual data transmissions dominate global mobile traffic and are expected to grow continuously, driven by the increasing demand for AI-based computer vision applications. This will make already challenging task of visual data transmission even more difficult. In this work, we review effective techniques for visual data transmission, such as content compression and adaptive video streaming, highlighting their advantages and limitations. Further, considering the scalability and cost issues of cloud-based and on-device AI services, we explore distributed in-network computing architecture like fog-computing as a direction of 6G networks, and investigate the necessary technical properties for the timely delivery of visual data.<|reference_end|>
|
arxiv
|
@article{cai2024toward,
title={Toward Scalable and Efficient Visual Data Transmission in 6G Networks},
author={Junhao Cai and Taegun An and Changhee Joo},
journal={arXiv preprint arXiv:2409.15961},
year={2024},
archivePrefix={arXiv},
eprint={2409.15961},
primaryClass={cs.NI eess.SP}
}
|
cai2024toward
|
arxiv-661322
|
2409.15963
|
Provably Efficient Exploration in Inverse Constrained Reinforcement Learning
|
<|reference_start|>Provably Efficient Exploration in Inverse Constrained Reinforcement Learning: To obtain the optimal constraints in complex environments, Inverse Constrained Reinforcement Learning (ICRL) seeks to recover these constraints from expert demonstrations in a data-driven manner. Existing ICRL algorithms collect training samples from an interactive environment. However, the efficacy and efficiency of these sampling strategies remain unknown. To bridge this gap, we introduce a strategic exploration framework with guaranteed efficiency. Specifically, we define a feasible constraint set for ICRL problems and investigate how expert policy and environmental dynamics influence the optimality of constraints. Motivated by our findings, we propose two exploratory algorithms to achieve efficient constraint inference via 1) dynamically reducing the bounded aggregate error of cost estimation and 2) strategically constraining the exploration policy. Both algorithms are theoretically grounded with tractable sample complexity. We empirically demonstrate the performance of our algorithms under various environments.<|reference_end|>
|
arxiv
|
@article{yue2024provably,
title={Provably Efficient Exploration in Inverse Constrained Reinforcement
Learning},
author={Bo Yue, Jian Li, Guiliang Liu},
journal={arXiv preprint arXiv:2409.15963},
year={2024},
archivePrefix={arXiv},
eprint={2409.15963},
primaryClass={cs.LG cs.AI}
}
|
yue2024provably
|
arxiv-661323
|
2409.15968
|
Adversarial Backdoor Defense in CLIP
|
<|reference_start|>Adversarial Backdoor Defense in CLIP: Multimodal contrastive pretraining, exemplified by models like CLIP, has been found to be vulnerable to backdoor attacks. While current backdoor defense methods primarily employ conventional data augmentation to create augmented samples aimed at feature alignment, these methods fail to capture the distinct features of backdoor samples, resulting in suboptimal defense performance. Observations reveal that adversarial examples and backdoor samples exhibit similarities in the feature space within the compromised models. Building on this insight, we propose Adversarial Backdoor Defense (ABD), a novel data augmentation strategy that aligns features with meticulously crafted adversarial examples. This approach effectively disrupts the backdoor association. Our experiments demonstrate that ABD provides robust defense against both traditional uni-modal and multimodal backdoor attacks targeting CLIP. Compared to the current state-of-the-art defense method, CleanCLIP, ABD reduces the attack success rate by 8.66% for BadNet, 10.52% for Blended, and 53.64% for BadCLIP, while maintaining a minimal average decrease of just 1.73% in clean accuracy.<|reference_end|>
|
arxiv
|
@article{kuang2024adversarial,
title={Adversarial Backdoor Defense in CLIP},
author={Junhao Kuang and Siyuan Liang and Jiawei Liang and Kuanrong Liu and
Xiaochun Cao},
journal={arXiv preprint arXiv:2409.15968},
year={2024},
archivePrefix={arXiv},
eprint={2409.15968},
primaryClass={cs.CV}
}
|
kuang2024adversarial
|
arxiv-661324
|
2409.15970
|
Non-Boolean OMv: One More Reason to Believe Lower Bounds for Dynamic Problems
|
<|reference_start|>Non-Boolean OMv: One More Reason to Believe Lower Bounds for Dynamic Problems: Most of the known tight lower bounds for dynamic problems are based on the Online Boolean Matrix-Vector Multiplication (OMv) Hypothesis, which is not as well studied and understood as some more popular hypotheses in fine-grained complexity. It would be desirable to base hardness of dynamic problems on a more believable hypothesis. We propose analogues of the OMv Hypothesis for variants of matrix multiplication that are known to be harder than Boolean product in the offline setting, namely: equality, dominance, min-witness, min-max, and bounded monotone min-plus products. These hypotheses are a priori weaker assumptions than the standard (Boolean) OMv Hypothesis. Somewhat surprisingly, we show that they are actually equivalent to it. This establishes the first such fine-grained equivalence class for dynamic problems.<|reference_end|>
|
arxiv
|
@article{hu2024non-boolean,
title={Non-Boolean OMv: One More Reason to Believe Lower Bounds for Dynamic
Problems},
author={Bingbing Hu and Adam Polak},
journal={arXiv preprint arXiv:2409.15970},
year={2024},
archivePrefix={arXiv},
eprint={2409.15970},
primaryClass={cs.CC cs.DS}
}
|
hu2024non-boolean
|
arxiv-661325
|
2409.15971
|
Creating Healthy Friction: Determining Stakeholder Requirements of Job Recommendation Explanations
|
<|reference_start|>Creating Healthy Friction: Determining Stakeholder Requirements of Job Recommendation Explanations: The increased use of information retrieval in recruitment, primarily through job recommender systems (JRSs), can have a large impact on job seekers, recruiters, and companies. As a result, such systems have been determined to be high-risk in recent legislature. This requires JRSs to be trustworthy and transparent, allowing stakeholders to understand why specific recommendations were made. To fulfill this requirement, the stakeholders' exact preferences and needs need to be determined. To do so, we evaluated an explainable job recommender system using a realistic, task-based, mixed-design user study (n=30) in which stakeholders had to make decisions based on the model's explanations. This mixed-methods evaluation consisted of two objective metrics - correctness and efficiency, along with three subjective metrics - trust, transparency, and usefulness. These metrics were evaluated twice per participant, once using real explanations and once using random explanations. The study included a qualitative analysis following a think-aloud protocol while performing tasks adapted to each stakeholder group. We find that providing stakeholders with real explanations does not significantly improve decision-making speed and accuracy. Our results showed a non-significant trend for the real explanations to outperform the random ones on perceived trust, usefulness, and transparency of the system for all stakeholder types. We determine that stakeholders benefit more from interacting with explanations as decision support capable of providing healthy friction, rather than as previously-assumed persuasive tools.<|reference_end|>
|
arxiv
|
@article{schellingerhout2024creating,
title={Creating Healthy Friction: Determining Stakeholder Requirements of Job
Recommendation Explanations},
author={Roan Schellingerhout, Francesco Barile, Nava Tintarev},
journal={arXiv preprint arXiv:2409.15971},
year={2024},
archivePrefix={arXiv},
eprint={2409.15971},
primaryClass={cs.HC cs.AI}
}
|
schellingerhout2024creating
|
arxiv-661326
|
2409.15972
|
Analysis of a dislocation model for earthquakes
|
<|reference_start|>Analysis of a dislocation model for earthquakes: Approximation of problems in linear elasticity having small shear modulus in a thin region is considered. Problems of this type arise when modeling ground motion due to earthquakes where rupture occurs in a thin fault. It is shown that, under appropriate scaling, solutions of these problems can be approximated by solutions of a limit problem where the fault region is represented by a surface. In a numerical context this eliminates the need to resolve the large deformations in the fault; a numerical example is presented to illustrate efficacy of this strategy.<|reference_end|>
|
arxiv
|
@article{liu2024analysis,
title={Analysis of a dislocation model for earthquakes},
author={Jing Liu, Xin Yang Lu, Noel J Walkington},
journal={arXiv preprint arXiv:2409.15972},
year={2024},
archivePrefix={arXiv},
eprint={2409.15972},
primaryClass={math.NA cs.NA}
}
|
liu2024analysis
|
arxiv-661327
|
2409.15973
|
Edge-device Collaborative Computing for Multi-view Classification
|
<|reference_start|>Edge-device Collaborative Computing for Multi-view Classification: Motivated by the proliferation of Internet-of-Thing (IoT) devices and the rapid advances in the field of deep learning, there is a growing interest in pushing deep learning computations, conventionally handled by the cloud, to the edge of the network to deliver faster responses to end users, reduce bandwidth consumption to the cloud, and address privacy concerns. However, to fully realize deep learning at the edge, two main challenges still need to be addressed: (i) how to meet the high resource requirements of deep learning on resource-constrained devices, and (ii) how to leverage the availability of multiple streams of spatially correlated data, to increase the effectiveness of deep learning and improve application-level performance. To address the above challenges, we explore collaborative inference at the edge, in which edge nodes and end devices share correlated data and the inference computational burden by leveraging different ways to split computation and fuse data. Besides traditional centralized and distributed schemes for edge-end device collaborative inference, we introduce selective schemes that decrease bandwidth resource consumption by effectively reducing data redundancy. As a reference scenario, we focus on multi-view classification in a networked system in which sensing nodes can capture overlapping fields of view. The proposed schemes are compared in terms of accuracy, computational expenditure at the nodes, communication overhead, inference latency, robustness, and noise sensitivity. Experimental results highlight that selective collaborative schemes can achieve different trade-offs between the above performance metrics, with some of them bringing substantial communication savings (from 18% to 74% of the transmitted data with respect to centralized inference) while still keeping the inference accuracy well above 90%.<|reference_end|>
|
arxiv
|
@article{palena2024edge-device,
title={Edge-device Collaborative Computing for Multi-view Classification},
author={Marco Palena, Tania Cerquitelli and Carla Fabiana Chiasserini},
journal={arXiv preprint arXiv:2409.15973},
year={2024},
archivePrefix={arXiv},
eprint={2409.15973},
primaryClass={cs.LG cs.AI cs.DC cs.NI}
}
|
palena2024edge-device
|
arxiv-661328
|
2409.15974
|
Disentangling Age and Identity with a Mutual Information Minimization Approach for Cross-Age Speaker Verification
|
<|reference_start|>Disentangling Age and Identity with a Mutual Information Minimization Approach for Cross-Age Speaker Verification: There has been an increasing research interest in cross-age speaker verification~(CASV). However, existing speaker verification systems perform poorly in CASV due to the great individual differences in voice caused by aging. In this paper, we propose a disentangled representation learning framework for CASV based on mutual information~(MI) minimization. In our method, a backbone model is trained to disentangle the identity- and age-related embeddings from speaker information, and an MI estimator is trained to minimize the correlation between age- and identity-related embeddings via MI minimization, resulting in age-invariant speaker embeddings. Furthermore, by using the age gaps between positive and negative samples, we propose an aging-aware MI minimization loss function that allows the backbone model to focus more on the vocal changes with large age gaps. Experimental results show that the proposed method outperforms other methods on multiple Cross-Age test sets of Vox-CA.<|reference_end|>
|
arxiv
|
@article{zhang2024disentangling,
title={Disentangling Age and Identity with a Mutual Information Minimization
Approach for Cross-Age Speaker Verification},
author={Fengrun Zhang and Wangjin Zhou and Yiming Liu and Wang Geng and Yahui
Shan and Chen Zhang},
journal={arXiv preprint arXiv:2409.15974},
year={2024},
archivePrefix={arXiv},
eprint={2409.15974},
primaryClass={cs.SD cs.AI eess.AS}
}
|
zhang2024disentangling
|
arxiv-661329
|
2409.15977
|
TCSinger: Zero-Shot Singing Voice Synthesis with Style Transfer and Multi-Level Style Control
|
<|reference_start|>TCSinger: Zero-Shot Singing Voice Synthesis with Style Transfer and Multi-Level Style Control: Zero-shot singing voice synthesis (SVS) with style transfer and style control aims to generate high-quality singing voices with unseen timbres and styles (including singing method, emotion, rhythm, technique, and pronunciation) from audio and text prompts. However, the multifaceted nature of singing styles poses a significant challenge for effective modeling, transfer, and control. Furthermore, current SVS models often fail to generate singing voices rich in stylistic nuances for unseen singers. To address these challenges, we introduce TCSinger, the first zero-shot SVS model for style transfer across cross-lingual speech and singing styles, along with multi-level style control. Specifically, TCSinger proposes three primary modules: 1) the clustering style encoder employs a clustering vector quantization model to stably condense style information into a compact latent space; 2) the Style and Duration Language Model (S\&D-LM) concurrently predicts style information and phoneme duration, which benefits both; 3) the style adaptive decoder uses a novel mel-style adaptive normalization method to generate singing voices with enhanced details. Experimental results show that TCSinger outperforms all baseline models in synthesis quality, singer similarity, and style controllability across various tasks, including zero-shot style transfer, multi-level style control, cross-lingual style transfer, and speech-to-singing style transfer. Singing voice samples can be accessed at https://tcsinger.github.io/.<|reference_end|>
|
arxiv
|
@article{zhang2024tcsinger:,
title={TCSinger: Zero-Shot Singing Voice Synthesis with Style Transfer and
Multi-Level Style Control},
author={Yu Zhang, Ziyue Jiang, Ruiqi Li, Changhao Pan, Jinzheng He, Rongjie
Huang, Chuxin Wang, Zhou Zhao},
journal={arXiv preprint arXiv:2409.15977},
year={2024},
archivePrefix={arXiv},
eprint={2409.15977},
primaryClass={eess.AS cs.CL cs.SD}
}
|
zhang2024tcsinger:
|
arxiv-661330
|
2409.15979
|
Finetuning LLMs for Comparative Assessment Tasks
|
<|reference_start|>Finetuning LLMs for Comparative Assessment Tasks: Automated assessment in natural language generation is a challenging task. Instruction-tuned large language models (LLMs) have shown promise in reference-free evaluation, particularly through comparative assessment. However, the quadratic computational complexity of pairwise comparisons limits its scalability. To address this, efficient comparative assessment has been explored by applying comparative strategies on zero-shot LLM probabilities. We propose a framework for finetuning LLMs for comparative assessment to align the model's output with the target distribution of comparative probabilities. By training on soft probabilities, our approach improves state-of-the-art performance while maintaining high performance with an efficient subset of comparisons.<|reference_end|>
|
arxiv
|
@article{raina2024finetuning,
title={Finetuning LLMs for Comparative Assessment Tasks},
author={Vatsal Raina, Adian Liusie, Mark Gales},
journal={arXiv preprint arXiv:2409.15979},
year={2024},
archivePrefix={arXiv},
eprint={2409.15979},
primaryClass={cs.CL}
}
|
raina2024finetuning
|
arxiv-661331
|
2409.15980
|
Leveraging Unsupervised Learning for Cost-Effective Visual Anomaly Detection
|
<|reference_start|>Leveraging Unsupervised Learning for Cost-Effective Visual Anomaly Detection: Traditional machine learning-based visual inspection systems require extensive data collection and repetitive model training to improve accuracy. These systems typically require expensive camera, computing equipment and significant machine learning expertise, which can substantially burden small and medium-sized enterprises. This study explores leveraging unsupervised learning methods with pre-trained models and low-cost hardware to create a cost-effective visual anomaly detection system. The research aims to develop a low-cost visual anomaly detection solution that uses minimal data for model training while maintaining generalizability and scalability. The system utilises unsupervised learning models from Anomalib and is deployed on affordable Raspberry Pi hardware through openVINO. The results show that this cost-effective system can complete anomaly defection training and inference on a Raspberry Pi in just 90 seconds using only 10 normal product images, achieving an F1 macro score exceeding 0.95. While the system is slightly sensitive to environmental changes like lighting, product positioning, or background, it remains a swift and economical method for factory automation inspection for small and medium-sized manufacturers<|reference_end|>
|
arxiv
|
@article{long2024leveraging,
title={Leveraging Unsupervised Learning for Cost-Effective Visual Anomaly
Detection},
author={Yunbo Long, Zhengyang Ling, Sam Brook, Duncan McFarlane, Alexandra
Brintrup},
journal={arXiv preprint arXiv:2409.15980},
year={2024},
archivePrefix={arXiv},
eprint={2409.15980},
primaryClass={cs.CV cs.AI}
}
|
long2024leveraging
|
arxiv-661332
|
2409.15981
|
GPT-4 as a Homework Tutor can Improve Student Engagement and Learning Outcomes
|
<|reference_start|>GPT-4 as a Homework Tutor can Improve Student Engagement and Learning Outcomes: This work contributes to the scarce empirical literature on LLM-based interactive homework in real-world educational settings and offers a practical, scalable solution for improving homework in schools. Homework is an important part of education in schools across the world, but in order to maximize benefit, it needs to be accompanied with feedback and followup questions. We developed a prompting strategy that enables GPT-4 to conduct interactive homework sessions for high-school students learning English as a second language. Our strategy requires minimal efforts in content preparation, one of the key challenges of alternatives like home tutors or ITSs. We carried out a Randomized Controlled Trial (RCT) in four high-school classes, replacing traditional homework with GPT-4 homework sessions for the treatment group. We observed significant improvements in learning outcomes, specifically a greater gain in grammar, and student engagement. In addition, students reported high levels of satisfaction with the system and wanted to continue using it after the end of the RCT.<|reference_end|>
|
arxiv
|
@article{vanzo2024gpt-4,
title={GPT-4 as a Homework Tutor can Improve Student Engagement and Learning
Outcomes},
author={Alessandro Vanzo and Sankalan Pal Chowdhury and Mrinmaya Sachan},
journal={arXiv preprint arXiv:2409.15981},
year={2024},
archivePrefix={arXiv},
eprint={2409.15981},
primaryClass={cs.CY}
}
|
vanzo2024gpt-4
|
arxiv-661333
|
2409.15985
|
DataGpt-SQL-7B: An Open-Source Language Model for Text-to-SQL
|
<|reference_start|>DataGpt-SQL-7B: An Open-Source Language Model for Text-to-SQL: In addressing the pivotal role of translating natural language queries into SQL commands, we propose a suite of compact, fine-tuned models and self-refine mechanisms to democratize data access and analysis for non-expert users, mitigating risks associated with closed-source Large Language Models. Specifically, we constructed a dataset of over 20K samples for Text-to-SQL, as well as a preference dataset, to improve efficiency in the domain of SQL generation. To further ensure code validity, a code corrector was integrated into the model. Our system, DataGpt-SQL, achieved 87.2\% accuracy on the Spider dev set, showcasing the effectiveness of our solution in text-to-SQL conversion tasks. Our code, data, and models are available at \url{https://github.com/CainiaoTechAi/datagpt-sql-7b}<|reference_end|>
|
arxiv
|
@article{wu2024datagpt-sql-7b:,
title={DataGpt-SQL-7B: An Open-Source Language Model for Text-to-SQL},
author={Lixia Wu, Peng Li, Junhong Lou and Lei Fu},
journal={arXiv preprint arXiv:2409.15985},
year={2024},
archivePrefix={arXiv},
eprint={2409.15985},
primaryClass={cs.AI}
}
|
wu2024datagpt-sql-7b:
|
arxiv-661334
|
2409.15986
|
Exploring the Impact of Outlier Variability on Anomaly Detection Evaluation Metrics
|
<|reference_start|>Exploring the Impact of Outlier Variability on Anomaly Detection Evaluation Metrics: Anomaly detection is a dynamic field, in which the evaluation of models plays a critical role in understanding their effectiveness. The selection and interpretation of the evaluation metrics are pivotal, particularly in scenarios with varying amounts of anomalies. This study focuses on examining the behaviors of three widely used anomaly detection metrics under different conditions: F1 score, Receiver Operating Characteristic Area Under Curve (ROC AUC), and Precision-Recall Curve Area Under Curve (AUCPR). Our study critically analyzes the extent to which these metrics provide reliable and distinct insights into model performance, especially considering varying levels of outlier fractions and contamination thresholds in datasets. Through a comprehensive experimental setup involving widely recognized algorithms for anomaly detection, we present findings that challenge the conventional understanding of these metrics and reveal nuanced behaviors under varying conditions. We demonstrated that while the F1 score and AUCPR are sensitive to outlier fractions, the ROC AUC maintains consistency and is unaffected by such variability. Additionally, under conditions of a fixed outlier fraction in the test set, we observe an alignment between ROC AUC and AUCPR, indicating that the choice between these two metrics may be less critical in such scenarios. The results of our study contribute to a more refined understanding of metric selection and interpretation in anomaly detection, offering valuable insights for both researchers and practitioners in the field.<|reference_end|>
|
arxiv
|
@article{ok2024exploring,
title={Exploring the Impact of Outlier Variability on Anomaly Detection
Evaluation Metrics},
author={Minjae Ok and Simon Kl\"uttermann and Emmanuel M\"uller},
journal={arXiv preprint arXiv:2409.15986},
year={2024},
archivePrefix={arXiv},
eprint={2409.15986},
primaryClass={cs.LG}
}
|
ok2024exploring
|
arxiv-661335
|
2409.15988
|
Semi-strong Efficient Market of Bitcoin and Twitter: an Analysis of Semantic Vector Spaces of Extracted Keywords and Light Gradient Boosting Machine Models
|
<|reference_start|>Semi-strong Efficient Market of Bitcoin and Twitter: an Analysis of Semantic Vector Spaces of Extracted Keywords and Light Gradient Boosting Machine Models: This study extends the examination of the Efficient-Market Hypothesis in the Bitcoin market during a five-year fluctuation period, from September 1, 2017 to September 1, 2022, by analyzing 28,739,514 qualified tweets containing the targeted topic "Bitcoin". Unlike previous studies, we extracted fundamental keywords as an informative proxy for carrying out the study of the EMH in the Bitcoin market rather than focusing on sentiment analysis, information volume, or price data. We tested market efficiency in hourly, 4-hourly, and daily time periods to understand the speed and accuracy of market reactions towards the information within different thresholds. A sequence of machine learning methods and textual analyses was used, including measurements of distances between semantic vector spaces of information, a keyword extraction and encoding model, and Light Gradient Boosting Machine (LGBM) classifiers. Our results suggest that 78.06% (83.08%), 84.63% (87.77%), and 94.03% (94.60%) of hourly, 4-hourly, and daily bullish (bearish) market movements can be attributed to public information within organic tweets.<|reference_end|>
|
arxiv
|
@article{wang2024semi-strong,
title={Semi-strong Efficient Market of Bitcoin and Twitter: an Analysis of
Semantic Vector Spaces of Extracted Keywords and Light Gradient Boosting
Machine Models},
author={Fang Wang (Florence Wong) and Marko Gacesa},
journal={arXiv preprint arXiv:2409.15988},
year={2024},
archivePrefix={arXiv},
eprint={2409.15988},
primaryClass={econ.GN cs.LG q-fin.EC}
}
|
wang2024semi-strong
|
arxiv-661336
|
2409.15990
|
PACE: Poisoning Attacks on Learned Cardinality Estimation
|
<|reference_start|>PACE: Poisoning Attacks on Learned Cardinality Estimation: Cardinality estimation (CE) plays a crucial role in the database optimizer. Recently, we have witnessed the emergence of numerous learned CE models that can outperform traditional methods such as histograms and sampling. However, learned models also bring many security risks. For example, a query-driven learned CE model learns a query-to-cardinality mapping based on the historical workload. Such a learned model could be attacked by poisoning queries, which are crafted by malicious attackers and woven into the historical workload, leading to performance degradation of CE. In this paper, we explore the potential security risks in learned CE and study a new problem of poisoning attacks on learned CE in a black-box setting. Experiments show that our proposed attack, PACE, reduces the accuracy of the learned CE models by 178 times, leading to a 10 times decrease in the end-to-end performance of the target database.<|reference_end|>
|
arxiv
|
@article{zhang2024pace:,
title={PACE: Poisoning Attacks on Learned Cardinality Estimation},
author={Jintao Zhang (1), Chao Zhang (1), Guoliang Li (1), Chengliang Chai (2)
((1) Tsinghua University, (2) Beijing Institute of Technology)},
journal={SIGMOD 2024},
year={2024},
archivePrefix={arXiv},
eprint={2409.15990},
primaryClass={cs.DB cs.CR}
}
|
zhang2024pace:
|
arxiv-661337
|
2409.15994
|
A Multi-operator Ensemble LSHADE with Restart and Local Search Mechanisms for Single-objective Optimization
|
<|reference_start|>A Multi-operator Ensemble LSHADE with Restart and Local Search Mechanisms for Single-objective Optimization: In recent years, multi-operator and multi-method algorithms have succeeded, encouraging their combination within single frameworks. Despite promising results, there remains room for improvement, as only some evolutionary algorithms (EAs) consistently excel across all optimization problems. This paper proposes mLSHADE-RL, an enhanced version of LSHADE-cnEpSin, which is one of the winners of the CEC 2017 competition in real-parameter single-objective optimization. mLSHADE-RL integrates multiple EAs and search operators to improve performance further. Three mutation strategies, namely DE/current-to-pbest-weight/1 with archive, DE/current-to-pbest/1 without archive, and DE/current-to-ordpbest-weight/1, are integrated into the original LSHADE-cnEpSin. A restart mechanism is also proposed to overcome the tendency to become trapped in local optima. Additionally, a local search method is applied in the later phase of the evolutionary procedure to enhance the exploitation capability of mLSHADE-RL. mLSHADE-RL is tested on 30 dimensions in the CEC 2024 competition on single-objective bound-constrained optimization, demonstrating superior performance over other state-of-the-art algorithms in producing high-quality solutions across various optimization scenarios.<|reference_end|>
|
arxiv
|
@article{chauhan2024a,
title={A Multi-operator Ensemble LSHADE with Restart and Local Search
Mechanisms for Single-objective Optimization},
author={Dikshit Chauhan, Anupam Trivedi, Shivani},
journal={arXiv preprint arXiv:2409.15994},
year={2024},
archivePrefix={arXiv},
eprint={2409.15994},
primaryClass={cs.NE}
}
|
chauhan2024a
|
arxiv-661338
|
2409.15997
|
Improvements to SDXL in NovelAI Diffusion V3
|
<|reference_start|>Improvements to SDXL in NovelAI Diffusion V3: In this technical report, we document the changes we made to SDXL in the process of training NovelAI Diffusion V3, our state of the art anime image generation model.<|reference_end|>
|
arxiv
|
@article{ossa2024improvements,
title={Improvements to SDXL in NovelAI Diffusion V3},
author={Juan Ossa, Eren Do\u{g}an, Alex Birch, F. Johnson},
journal={arXiv preprint arXiv:2409.15997},
year={2024},
archivePrefix={arXiv},
eprint={2409.15997},
primaryClass={cs.CV cs.AI cs.LG}
}
|
ossa2024improvements
|
arxiv-661339
|
2409.15998
|
Bridging the Transparency Gap: Exploring Multi-Stakeholder Preferences for Targeted Advertisement Explanations
|
<|reference_start|>Bridging the Transparency Gap: Exploring Multi-Stakeholder Preferences for Targeted Advertisement Explanations: Limited transparency in targeted advertising on online content delivery platforms can breed mistrust for both viewers (of the content and ads) and advertisers. This user study (n=864) explores how explanations for targeted ads can bridge this gap, fostering transparency for two of the key stakeholders. We explore participants' preferences for explanations and allow them to tailor the content and format. Acting as viewers or advertisers, participants chose which details about viewing habits and user data to include in explanations. Participants expressed concerns not only about the inclusion of personal data in explanations but also about the use of it in ad placing. Surprisingly, we found no significant differences in the features selected by the two groups to be included in the explanations. Furthermore, both groups showed overall high satisfaction, while "advertisers" perceived the explanations as significantly more transparent than "viewers". Additionally, we observed significant variations in the use of personal data and the features presented in explanations between the two phases of the experiment. This study also provided insights into participants' preferences for how explanations are presented and their assumptions regarding advertising practices and data usage. This research broadens our understanding of transparent advertising practices by highlighting the unique dynamics between viewers and advertisers on online platforms, and suggesting that viewers' priorities should be considered in the process of ad placement and creation of explanations.<|reference_end|>
|
arxiv
|
@article{zilbershtein2024bridging,
title={Bridging the Transparency Gap: Exploring Multi-Stakeholder Preferences
for Targeted Advertisement Explanations},
author={Dina Zilbershtein, Francesco Barile, Daan Odijk, Nava Tintarev},
journal={arXiv preprint arXiv:2409.15998},
year={2024},
archivePrefix={arXiv},
eprint={2409.15998},
primaryClass={cs.HC}
}
|
zilbershtein2024bridging
|
arxiv-661340
|
2409.16001
|
Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI
|
<|reference_start|>Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI: Human intelligence, the most evident and accessible source of reasoning, hosted by biological hardware, has evolved and been refined over thousands of years, positioning itself today to create new artificial forms and preparing to self-design their evolutionary path forward. Beginning with the advent of foundation models, the rate at which human and artificial intelligence interact with each other has surpassed any anticipated quantitative figures. This close engagement has impacted both forms of intelligence in various ways, naturally resulting in complex confluences that warrant close scrutiny. In the sequel, we shall explore the interplay between human and machine intelligence, focusing on the crucial role humans play in developing ethical, responsible, and robust intelligent systems. We briefly delve into interesting aspects of implementation inspired by the mechanisms underlying neuroscience and human cognition. Additionally, we propose future perspectives, capitalizing on the advantages of symbiotic designs to suggest a human-centered direction for next-generation AI development. We finalize this evolving document with a few thoughts and open questions yet to be addressed by the broader community.<|reference_end|>
|
arxiv
|
@article{arslan2024artificial,
title={Artificial Human Intelligence: The role of Humans in the Development of
Next Generation AI},
author={Suayb S. Arslan},
journal={arXiv preprint arXiv:2409.16001},
year={2024},
archivePrefix={arXiv},
eprint={2409.16001},
primaryClass={cs.AI q-bio.NC}
}
|
arslan2024artificial
|
arxiv-661341
|
2409.16002
|
Unleashing the Potential of Synthetic Images: A Study on Histopathology Image Classification
|
<|reference_start|>Unleashing the Potential of Synthetic Images: A Study on Histopathology Image Classification: Histopathology image classification is crucial for the accurate identification and diagnosis of various diseases but requires large and diverse datasets. Obtaining such datasets, however, is often costly and time-consuming due to the need for expert annotations and ethical constraints. To address this, we examine the suitability of different generative models and image selection approaches to create realistic synthetic histopathology image patches conditioned on class labels. Our findings highlight the importance of selecting an appropriate generative model type and architecture to enhance performance. Our experiments over the PCam dataset show that diffusion models are effective for transfer learning, while GAN-generated samples are better suited for augmentation. Additionally, transformer-based generative models do not require image filtering, in contrast to those derived from Convolutional Neural Networks (CNNs), which benefit from realism score-based selection. Therefore, we show that synthetic images can effectively augment existing datasets, ultimately improving the performance of the downstream histopathology image classification task.<|reference_end|>
|
arxiv
|
@article{benito-del-valle2024unleashing,
title={Unleashing the Potential of Synthetic Images: A Study on Histopathology
Image Classification},
author={Leire Benito-Del-Valle, Aitor Alvarez-Gila, Itziar Eguskiza, and
Cristina L. Saratxaga},
journal={arXiv preprint arXiv:2409.16002},
year={2024},
archivePrefix={arXiv},
eprint={2409.16002},
primaryClass={cs.CV}
}
|
benito-del-valle2024unleashing
|
arxiv-661342
|
2409.16004
|
Second order divergence constraint preserving entropy stable finite difference schemes for ideal two-fluid plasma flow equations
|
<|reference_start|>Second order divergence constraint preserving entropy stable finite difference schemes for ideal two-fluid plasma flow equations: Two-fluid plasma flow equations describe the flow of ions and electrons with different densities, velocities, and pressures. We consider ideal plasma flow, i.e., we ignore viscous, resistive, and collision effects. The resulting system of equations has a flux consisting of three independent components, one for ions, one for electrons, and a linear Maxwell's equation flux for the electromagnetic fields. The coupling of these components is via source terms. In this article, we present conservative second-order finite difference schemes that ensure the consistent evolution of the divergence constraints on the electric and magnetic fields. The key idea is to design a numerical solver for Maxwell's equations using the multidimensional Riemann solver at the vertices, ensuring discrete divergence constraints; for the fluid parts, we use an entropy-stable discretization. The proposed schemes are co-located, second-order accurate, entropy stable, and ensure divergence-free evolution of the magnetic field. We use explicit and IMplicit-EXplicit (IMEX) schemes for time discretizations. To demonstrate the accuracy, stability, and divergence constraint-preserving ability of the proposed schemes, we present several test cases in one and two dimensions. We also compare the numerical results with those obtained from schemes with no divergence cleaning and those employing perfectly hyperbolic Maxwell (PHM) equations-based divergence cleaning methods for Maxwell's equations.<|reference_end|>
|
arxiv
|
@article{agnihotri2024second,
title={Second order divergence constraint preserving entropy stable finite
difference schemes for ideal two-fluid plasma flow equations},
author={Jaya Agnihotri, Deepak Bhoriya, Harish Kumar, Praveen Chandrashekhar,
Dinshaw S. Balsara},
journal={arXiv preprint arXiv:2409.16004},
year={2024},
archivePrefix={arXiv},
eprint={2409.16004},
primaryClass={math.NA cs.NA}
}
|
agnihotri2024second
|
arxiv-661343
|
2409.16005
|
Bridging Speech and Text: Enhancing ASR with Pinyin-to-Character Pre-training in LLMs
|
<|reference_start|>Bridging Speech and Text: Enhancing ASR with Pinyin-to-Character Pre-training in LLMs: The integration of large language models (LLMs) with pre-trained speech models has opened up new avenues in automatic speech recognition (ASR). While LLMs excel in multimodal understanding tasks, effectively leveraging their capabilities for ASR remains a significant challenge. This paper presents a novel training approach to enhance LLM performance in ASR tasks. We propose pre-training LLMs on Pinyin embedding sequences, which represent pronunciation features, to generate corresponding Chinese characters. This step enables the LLM to adapt to generating text from pronunciation features before encountering real speech data. Furthermore, we fine-tune the LoRA parameters to enhance the LLM's understanding of speech modality information. On the AISHELL-1 corpus, our approach yields a 9.5% relative improvement in ASR tasks compared to the baseline without Pinyin-to-Character pre-training. Additionally, incorporating auxiliary text data for Pinyin-to-Character pre-training further boosts performance, achieving a 19.0% relative improvement.<|reference_end|>
|
arxiv
|
@article{yuhang2024bridging,
title={Bridging Speech and Text: Enhancing ASR with Pinyin-to-Character
Pre-training in LLMs},
author={Yang Yuhang, Peng Yizhou, Eng Siong Chng and Xionghu Zhong},
journal={arXiv preprint arXiv:2409.16005},
year={2024},
archivePrefix={arXiv},
eprint={2409.16005},
primaryClass={cs.CL cs.SD eess.AS}
}
|
yuhang2024bridging
|
arxiv-661344
|
2409.16008
|
Robust Neural IDA-PBC: passivity-based stabilization under approximations
|
<|reference_start|>Robust Neural IDA-PBC: passivity-based stabilization under approximations: In this paper, we restructure the Neural Interconnection and Damping Assignment - Passivity Based Control (Neural IDA-PBC) design methodology, and we formally analyze its closed-loop properties. Neural IDA-PBC redefines the IDA-PBC design approach as an optimization problem by building on the framework of Physics Informed Neural Networks (PINNs). However, the closed-loop stability and robustness properties under Neural IDA-PBC remain unexplored. To address the issue, we study the behavior of classical IDA-PBC under approximations. Our theoretical analysis allows deriving conditions for practical and asymptotic stability of the desired equilibrium point. Moreover, it extends the Neural IDA-PBC applicability to port-Hamiltonian systems where the matching conditions cannot be solved exactly. Our renewed optimization-based design introduces three significant aspects: i) it involves a novel optimization objective including stability and robustness constraints issued from our theoretical analysis; ii) it employs separate Neural Networks (NNs), which can be structured to reduce the search space to relevant functions; iii) it does not require knowledge about the port-Hamiltonian formulation of the system's model. Our methodology is validated with simulations on three standard benchmarks: a double pendulum, a nonlinear mass-spring-damper and a cartpole. Notably, classical IDA-PBC designs cannot be analytically derived for the latter.<|reference_end|>
|
arxiv
|
@article{sanchez-escalonilla2024robust,
title={Robust Neural IDA-PBC: passivity-based stabilization under
approximations},
author={Santiago Sanchez-Escalonilla, Samuele Zoboli and Bayu Jayawardhana},
journal={arXiv preprint arXiv:2409.16008},
year={2024},
archivePrefix={arXiv},
eprint={2409.16008},
primaryClass={eess.SY cs.LG cs.SY math.OC}
}
|
sanchez-escalonilla2024robust
|
arxiv-661345
|
2409.16009
|
Investigating the Impact of Trust in Multi-Human Multi-Robot Task Allocation
|
<|reference_start|>Investigating the Impact of Trust in Multi-Human Multi-Robot Task Allocation: Trust is essential in human-robot collaboration, even more so in multi-human multi-robot teams, where trust is vital to ensure teaming cohesion in complex operational environments. Yet, at the moment, trust is rarely considered a factor during task allocation and reallocation in algorithms used in multi-human, multi-robot collaboration contexts. Prior work on trust in single-human-robot interaction has identified that including trust as a parameter in human-robot interaction significantly improves both performance outcomes and human experience with robotic systems. However, very little research has explored the impact of trust in multi-human multi-robot collaboration, specifically in the context of task allocation. In this paper, we introduce a new trust model, the Expectation Comparison Trust (ECT) model, and employ it with three trust models from prior work and a baseline no-trust model to investigate the impact of trust on task allocation outcomes in multi-human multi-robot collaboration. Our experiment involved different team configurations, including teams of 2 humans and 2 robots, 5 humans and 5 robots, and 10 humans and 10 robots. Results showed that using trust-based models generally led to better task allocation outcomes in larger teams (10 humans and 10 robots) than in smaller teams. We discuss the implications of our findings and provide recommendations for future work on integrating trust as a variable for task allocation in multi-human, multi-robot collaboration.<|reference_end|>
|
arxiv
|
@article{obi2024investigating,
title={Investigating the Impact of Trust in Multi-Human Multi-Robot Task
Allocation},
author={Ike Obi, Ruiqi Wang, Wonse Jo, Byung-Cheol Min},
journal={arXiv preprint arXiv:2409.16009},
year={2024},
archivePrefix={arXiv},
eprint={2409.16009},
primaryClass={cs.RO}
}
|
obi2024investigating
|
arxiv-661346
|
2409.16011
|
CrowdSurfer: Sampling Optimization Augmented with Vector-Quantized Variational AutoEncoder for Dense Crowd Navigation
|
<|reference_start|>CrowdSurfer: Sampling Optimization Augmented with Vector-Quantized Variational AutoEncoder for Dense Crowd Navigation: Navigation amongst densely packed crowds remains a challenge for mobile robots. The complexity increases further if the environment layout changes, making the prior computed global plan infeasible. In this paper, we show that it is possible to dramatically enhance crowd navigation by just improving the local planner. Our approach combines generative modelling with inference time optimization to generate sophisticated long-horizon local plans at interactive rates. More specifically, we train a Vector Quantized Variational AutoEncoder to learn a prior over the expert trajectory distribution conditioned on the perception input. At run-time, this is used as an initialization for a sampling-based optimizer for further refinement. Our approach does not require any sophisticated prediction of dynamic obstacles and yet provides state-of-the-art performance. In particular, we compare against the recent DRL-VO approach and show a 40% improvement in success rate and a 6% improvement in travel time.<|reference_end|>
|
arxiv
|
@article{kumar2024crowdsurfer:,
title={CrowdSurfer: Sampling Optimization Augmented with Vector-Quantized
Variational AutoEncoder for Dense Crowd Navigation},
author={Naman Kumar, Antareep Singha, Laksh Nanwani, Dhruv Potdar, Tarun R,
Fatemeh Rastgar, Simon Idoko, Arun Kumar Singh, K. Madhava Krishna},
journal={arXiv preprint arXiv:2409.16011},
year={2024},
archivePrefix={arXiv},
eprint={2409.16011},
primaryClass={cs.RO math.OC}
}
|
kumar2024crowdsurfer:
|
arxiv-661347
|
2409.16012
|
PRESTO: Fast motion planning using diffusion models based on key-configuration environment representation
|
<|reference_start|>PRESTO: Fast motion planning using diffusion models based on key-configuration environment representation: We introduce a learning-guided motion planning framework that provides initial seed trajectories using a diffusion model for trajectory optimization. Given a workspace, our method approximates the configuration space (C-space) obstacles through a key-configuration representation that consists of a sparse set of task-related key configurations, and uses this as an input to the diffusion model. The diffusion model integrates regularization terms that encourage collision avoidance and smooth trajectories during training, and trajectory optimization refines the generated seed trajectories to further correct any colliding segments. Our experimental results demonstrate that using high-quality trajectory priors, learned through our C-space-grounded diffusion model, enables efficient generation of collision-free trajectories in narrow-passage environments, outperforming prior learning- and planning-based baselines. Videos and additional materials can be found on the project page: https://kiwi-sherbet.github.io/PRESTO.<|reference_end|>
|
arxiv
|
@article{seo2024presto:,
title={PRESTO: Fast motion planning using diffusion models based on
key-configuration environment representation},
author={Mingyo Seo, Yoonyoung Cho, Yoonchang Sung, Peter Stone, Yuke Zhu,
Beomjoon Kim},
journal={arXiv preprint arXiv:2409.16012},
year={2024},
archivePrefix={arXiv},
eprint={2409.16012},
primaryClass={cs.RO}
}
|
seo2024presto:
|
arxiv-661348
|
2409.16016
|
VascX Models: Model Ensembles for Retinal Vascular Analysis from Color Fundus Images
|
<|reference_start|>VascX Models: Model Ensembles for Retinal Vascular Analysis from Color Fundus Images: We introduce VascX models, a comprehensive set of model ensembles for analyzing retinal vasculature from color fundus images (CFIs). Annotated CFIs were aggregated from public datasets for vessel, artery-vein, and disc segmentation; and fovea localization. Additional CFIs from the population-based Rotterdam Study were included, with arteries and veins annotated by graders at pixel level. Our models achieved robust performance across devices from different vendors, varying image quality levels, and diverse pathologies. Our models demonstrated superior segmentation performance compared to existing systems under a variety of conditions. Significant enhancements were observed in artery-vein and disc segmentation performance, particularly in segmentations of these structures on CFIs of intermediate quality, a common characteristic of large cohorts and clinical datasets. Our model segmented vessels with greater precision than human graders. With VascX models we provide a robust, ready-to-use set of model ensembles and inference code aimed at simplifying the implementation and enhancing the quality of automated retinal vasculature analyses. The precise vessel parameters generated by the model can serve as starting points for the identification of disease patterns in and outside of the eye.<|reference_end|>
|
arxiv
|
@article{quiros2024vascx,
title={VascX Models: Model Ensembles for Retinal Vascular Analysis from Color
Fundus Images},
author={Jose Vargas Quiros, Bart Liefers, Karin van Garderen, Jeroen
Vermeulen, Eyened Reading Center, Sinergia Consortium and Caroline Klaver},
journal={arXiv preprint arXiv:2409.16016},
year={2024},
archivePrefix={arXiv},
eprint={2409.16016},
primaryClass={eess.IV cs.CV q-bio.TO}
}
|
quiros2024vascx
|
arxiv-661349
|
2409.16018
|
Lattice-Based Vulnerabilities in Lee Metric Post-Quantum Cryptosystems
|
<|reference_start|>Lattice-Based Vulnerabilities in Lee Metric Post-Quantum Cryptosystems: Post-quantum cryptography has gained attention due to the need for secure cryptographic systems in the face of quantum computing. Code-based and lattice-based cryptography are two prominent approaches, both heavily studied within the NIST standardization project. Code-based cryptography -- most prominently exemplified by the McEliece cryptosystem -- is based on the hardness of decoding random linear error-correcting codes. Despite the McEliece cryptosystem having been unbroken for several decades, it suffers from large key sizes, which has led to exploring variants using metrics other than the Hamming metric, such as the Lee metric. This alternative metric may allow for smaller key sizes, but requires further analysis for potential vulnerabilities to lattice-based attack techniques. In this paper, we consider a generic Lee metric based McEliece type cryptosystem and evaluate its security against lattice-based attacks.<|reference_end|>
|
arxiv
|
@article{horlemann2024lattice-based,
title={Lattice-Based Vulnerabilities in Lee Metric Post-Quantum Cryptosystems},
author={Anna-Lena Horlemann, Karan Khathuria, Marc Newman, Amin Sakzad, Carlos
Vela Cabello},
journal={arXiv preprint arXiv:2409.16018},
year={2024},
archivePrefix={arXiv},
eprint={2409.16018},
primaryClass={cs.CR cs.IT math.IT}
}
|
horlemann2024lattice-based
|
arxiv-661350
|
2409.16019
|
AIR-Embodied: An Efficient Active 3DGS-based Interaction and Reconstruction Framework with Embodied Large Language Model
|
<|reference_start|>AIR-Embodied: An Efficient Active 3DGS-based Interaction and Reconstruction Framework with Embodied Large Language Model: Recent advancements in 3D reconstruction and neural rendering have enhanced the creation of high-quality digital assets, yet existing methods struggle to generalize across varying object shapes, textures, and occlusions. While Next Best View (NBV) planning and Learning-based approaches offer solutions, they are often limited by predefined criteria and fail to manage occlusions with human-like common sense. To address these problems, we present AIR-Embodied, a novel framework that integrates embodied AI agents with large-scale pretrained multi-modal language models to improve active 3DGS reconstruction. AIR-Embodied utilizes a three-stage process: understanding the current reconstruction state via multi-modal prompts, planning tasks with viewpoint selection and interactive actions, and employing closed-loop reasoning to ensure accurate execution. The agent dynamically refines its actions based on discrepancies between the planned and actual outcomes. Experimental evaluations across virtual and real-world environments demonstrate that AIR-Embodied significantly enhances reconstruction efficiency and quality, providing a robust solution to challenges in active 3D reconstruction.<|reference_end|>
|
arxiv
|
@article{qi2024air-embodied:,
title={AIR-Embodied: An Efficient Active 3DGS-based Interaction and
Reconstruction Framework with Embodied Large Language Model},
author={Zhenghao Qi, Shenghai Yuan, Fen Liu, Haozhi Cao, Tianchen Deng,
Jianfei Yang, and Lihua Xie},
journal={arXiv preprint arXiv:2409.16019},
year={2024},
archivePrefix={arXiv},
eprint={2409.16019},
primaryClass={cs.RO}
}
|
qi2024air-embodied:
|
arxiv-661351
|
2409.16022
|
AI Can Be Cognitively Biased: An Exploratory Study on Threshold Priming in LLM-Based Batch Relevance Assessment
|
<|reference_start|>AI Can Be Cognitively Biased: An Exploratory Study on Threshold Priming in LLM-Based Batch Relevance Assessment: Cognitive biases are systematic deviations in thinking that lead to irrational judgments and problematic decision-making, extensively studied across various fields. Recently, large language models (LLMs) have shown advanced understanding capabilities but may inherit human biases from their training data. While social biases in LLMs have been well-studied, cognitive biases have received less attention, with existing research focusing on specific scenarios. The broader impact of cognitive biases on LLMs in various decision-making contexts remains underexplored. We investigated whether LLMs are influenced by the threshold priming effect in relevance judgments, a core task and widely-discussed research topic in the Information Retrieval (IR) community. The priming effect occurs when exposure to certain stimuli unconsciously affects subsequent behavior and decisions. Our experiment employed 10 topics from the TREC 2019 Deep Learning passage track collection, and tested AI judgments under different document relevance scores, batch lengths, and LLM models, including GPT-3.5, GPT-4, LLaMa2-13B and LLaMa2-70B. Results showed that LLMs tend to give lower scores to later documents if earlier ones have high relevance, and vice versa, regardless of the combination and model used. Our findings demonstrate that LLMs' judgments, similar to human judgments, are also influenced by threshold priming biases, and suggest that researchers and system engineers should take into account potential human-like cognitive biases in designing, evaluating, and auditing LLMs in IR tasks and beyond.<|reference_end|>
|
arxiv
|
@article{chen2024ai,
title={AI Can Be Cognitively Biased: An Exploratory Study on Threshold Priming
in LLM-Based Batch Relevance Assessment},
author={Nuo Chen, Jiqun Liu, Xiaoyu Dong, Qijiong Liu, Tetsuya Sakai,
Xiao-Ming Wu},
journal={arXiv preprint arXiv:2409.16022},
year={2024},
archivePrefix={arXiv},
eprint={2409.16022},
primaryClass={cs.CL cs.AI}
}
|
chen2024ai
|
arxiv-661352
|
2409.16024
|
Bridging Environments and Language with Rendering Functions and Vision-Language Models
|
<|reference_start|>Bridging Environments and Language with Rendering Functions and Vision-Language Models: Vision-language models (VLMs) have tremendous potential for grounding language, and thus enabling language-conditioned agents (LCAs) to perform diverse tasks specified with text. This has motivated the study of LCAs based on reinforcement learning (RL) with rewards given by rendering images of an environment and evaluating those images with VLMs. If single-task RL is employed, such approaches are limited by the cost and time required to train a policy for each new task. Multi-task RL (MTRL) is a natural alternative, but requires a carefully designed corpus of training tasks and does not always generalize reliably to new tasks. Therefore, this paper introduces a novel decomposition of the problem of building an LCA: first find an environment configuration that has a high VLM score for text describing a task; then use a (pretrained) goal-conditioned policy to reach that configuration. We also explore several enhancements to the speed and quality of VLM-based LCAs, notably, the use of distilled models, and the evaluation of configurations from multiple viewpoints to resolve the ambiguities inherent in a single 2D view. We demonstrate our approach on the Humanoid environment, showing that it results in LCAs that outperform MTRL baselines in zero-shot generalization, without requiring any textual task descriptions or other forms of environment-specific annotation during training. Videos and an interactive demo can be found at https://europe.naverlabs.com/text2control<|reference_end|>
|
arxiv
|
@article{cachet2024bridging,
title={Bridging Environments and Language with Rendering Functions and
Vision-Language Models},
author={Theo Cachet and Christopher R. Dance and Olivier Sigaud},
journal={arXiv preprint arXiv:2409.16024},
year={2024},
archivePrefix={arXiv},
eprint={2409.16024},
primaryClass={cs.AI}
}
|
cachet2024bridging
|
arxiv-661353
|
2409.16025
|
Unlocking Markets: A Multilingual Benchmark to Cross-Market Question Answering
|
<|reference_start|>Unlocking Markets: A Multilingual Benchmark to Cross-Market Question Answering: Users post numerous product-related questions on e-commerce platforms, affecting their purchase decisions. Product-related question answering (PQA) entails utilizing product-related resources to provide precise responses to users. We propose a novel task of Multilingual Cross-market Product-based Question Answering (MCPQA) and define the task as providing answers to product-related questions in a main marketplace by utilizing information from another resource-rich auxiliary marketplace in a multilingual context. We introduce a large-scale dataset comprising over 7 million questions from 17 marketplaces across 11 languages. We then perform automatic translation on the Electronics category of our dataset, naming it McMarket. We focus on two subtasks: review-based answer generation and product-related question ranking. For each subtask, we label a subset of McMarket using an LLM and further evaluate the quality of the annotations via human assessment. We then conduct experiments to benchmark our dataset, using models ranging from traditional lexical models to LLMs in both single-market and cross-market scenarios across McMarket and the corresponding LLM subset. Results show that incorporating cross-market information significantly enhances performance in both tasks.<|reference_end|>
|
arxiv
|
@article{yuan2024unlocking,
title={Unlocking Markets: A Multilingual Benchmark to Cross-Market Question
Answering},
author={Yifei Yuan, Yang Deng, Anders S{\o}gaard, Mohammad Aliannejadi},
journal={arXiv preprint arXiv:2409.16025},
year={2024},
archivePrefix={arXiv},
eprint={2409.16025},
primaryClass={cs.CL}
}
|
yuan2024unlocking
|
arxiv-661354
|
2409.16027
|
AutoCE: An Accurate and Efficient Model Advisor for Learned Cardinality Estimation
|
<|reference_start|>AutoCE: An Accurate and Efficient Model Advisor for Learned Cardinality Estimation: Cardinality estimation (CE) plays a crucial role in many database-related tasks such as query generation, cost estimation, and join ordering. Lately, we have witnessed the emergence of numerous learned CE models. However, no single CE model is invincible when it comes to datasets with various data distributions. To facilitate data-intensive applications with accurate and efficient cardinality estimation, it is important to have an approach that can judiciously and efficiently select the most suitable CE model for an arbitrary dataset. In this paper, we study a new problem of selecting the best CE models for a variety of datasets. This problem is rather challenging as it is hard to capture the relationship from various datasets to the performance of disparate models. To address this problem, we propose a model advisor, named AutoCE, which can adaptively select the best model for a dataset. The main contribution of AutoCE is the learning-based model selection, where deep metric learning is used to learn a recommendation model and incremental learning is proposed to reduce the labeling overhead and improve the model robustness. We have integrated AutoCE into PostgreSQL and evaluated its impact on query optimization. The results showed that AutoCE achieved the best performance (27% better) and outperformed the baselines concerning accuracy (2.1 times better) and efficacy (4.2 times better).<|reference_end|>
|
arxiv
|
@article{zhang2024autoce:,
title={AutoCE: An Accurate and Efficient Model Advisor for Learned Cardinality
Estimation},
author={Jintao Zhang (1), Chao Zhang (1), Guoliang Li (1), Chengliang Chai (2)
((1) Tsinghua University, (2) Beijing Institute of Technology)},
journal={ICDE 2023},
year={2024},
archivePrefix={arXiv},
eprint={2409.16027},
primaryClass={cs.DB}
}
|
zhang2024autoce:
|
arxiv-661355
|
2409.16030
|
MHRC: Closed-loop Decentralized Multi-Heterogeneous Robot Collaboration with Large Language Models
|
<|reference_start|>MHRC: Closed-loop Decentralized Multi-Heterogeneous Robot Collaboration with Large Language Models: The integration of large language models (LLMs) with robotics has significantly advanced robots' abilities in perception, cognition, and task planning. The use of natural language interfaces offers a unified approach for expressing the capability differences of heterogeneous robots, facilitating communication between them, and enabling seamless task allocation and collaboration. Currently, the utilization of LLMs to achieve decentralized multi-heterogeneous robot collaborative tasks remains an under-explored area of research. In this paper, we introduce a novel framework that utilizes LLMs to achieve decentralized collaboration among multiple heterogeneous robots. Our framework supports three robot categories, mobile robots, manipulation robots, and mobile manipulation robots, working together to complete tasks such as exploration, transportation, and organization. We developed a rich set of textual feedback mechanisms and chain-of-thought (CoT) prompts to enhance task planning efficiency and overall system performance. The mobile manipulation robot can adjust its base position flexibly, ensuring optimal conditions for grasping tasks. The manipulation robot can comprehend task requirements, seek assistance when necessary, and handle objects appropriately. Meanwhile, the mobile robot can explore the environment extensively, map object locations, and communicate this information to the mobile manipulation robot, thus improving task execution efficiency. We evaluated the framework using PyBullet, creating scenarios with three different room layouts and three distinct operational tasks. We tested various LLM models and conducted ablation studies to assess the contributions of different modules. The experimental results confirm the effectiveness and necessity of our proposed framework.<|reference_end|>
|
arxiv
|
@article{yu2024mhrc:,
title={MHRC: Closed-loop Decentralized Multi-Heterogeneous Robot Collaboration
with Large Language Models},
author={Wenhao Yu, Jie Peng, Yueliang Ying, Sai Li, Jianmin Ji, Yanyong Zhang},
journal={arXiv preprint arXiv:2409.16030},
year={2024},
archivePrefix={arXiv},
eprint={2409.16030},
primaryClass={cs.RO}
}
|
yu2024mhrc:
|
arxiv-661356
|
2409.16032
|
Deep chroma compression of tone-mapped images
|
<|reference_start|>Deep chroma compression of tone-mapped images: Acquisition of high dynamic range (HDR) images is thriving due to the increasing use of smart devices and the demand for high-quality output. Extensive research has focused on developing methods for reducing the luminance range in HDR images using conventional and deep learning-based tone mapping operators to enable accurate reproduction on conventional 8 and 10-bit digital displays. However, these methods often fail to account for pixels that may lie outside the target display's gamut, resulting in visible chromatic distortions or color clipping artifacts. Previous studies suggested that a gamut management step ensures that all pixels remain within the target gamut. However, such approaches are computationally expensive and cannot be deployed on devices with limited computational resources. We propose a generative adversarial network for fast and reliable chroma compression of HDR tone-mapped images. We design a loss function that considers the hue property of generated images to improve color accuracy, and train the model on an extensive image dataset. Quantitative experiments demonstrate that the proposed model outperforms state-of-the-art image generation and enhancement networks in color accuracy, while a subjective study suggests that the generated images are on par or superior to those produced by conventional chroma compression methods in terms of visual quality. Additionally, the model achieves real-time performance, showing promising results for deployment on devices with limited computational resources.<|reference_end|>
|
arxiv
|
@article{milidonis2024deep,
title={Deep chroma compression of tone-mapped images},
author={Xenios Milidonis, Francesco Banterle, Alessandro Artusi},
journal={arXiv preprint arXiv:2409.16032},
year={2024},
archivePrefix={arXiv},
eprint={2409.16032},
primaryClass={eess.IV cs.AI cs.CV}
}
|
milidonis2024deep
|
arxiv-661357
|
2409.16033
|
RTAGrasp: Learning Task-Oriented Grasping from Human Videos via Retrieval, Transfer, and Alignment
|
<|reference_start|>RTAGrasp: Learning Task-Oriented Grasping from Human Videos via Retrieval, Transfer, and Alignment: Task-oriented grasping (TOG) is crucial for robots to accomplish manipulation tasks, requiring the determination of TOG positions and directions. Existing methods either rely on costly manual TOG annotations or only extract coarse grasping positions or regions from human demonstrations, limiting their practicality in real-world applications. To address these limitations, we introduce RTAGrasp, a Retrieval, Transfer, and Alignment framework inspired by human grasping strategies. Specifically, our approach first effortlessly constructs a robot memory from human grasping demonstration videos, extracting both TOG position and direction constraints. Then, given a task instruction and a visual observation of the target object, RTAGrasp retrieves the most similar human grasping experience from its memory and leverages semantic matching capabilities of vision foundation models to transfer the TOG constraints to the target object in a training-free manner. Finally, RTAGrasp aligns the transferred TOG constraints with the robot's action for execution. Evaluations on the public TOG benchmark, TaskGrasp dataset, show the competitive performance of RTAGrasp on both seen and unseen object categories compared to existing baseline methods. Real-world experiments further validate its effectiveness on a robotic arm. Our code, appendix, and video are available at \url{https://sites.google.com/view/rtagrasp/home}.<|reference_end|>
|
arxiv
|
@article{dong2024rtagrasp:,
title={RTAGrasp: Learning Task-Oriented Grasping from Human Videos via
Retrieval, Transfer, and Alignment},
author={Wenlong Dong, Dehao Huang, Jiangshan Liu, Chao Tang and Hong Zhang},
journal={arXiv preprint arXiv:2409.16033},
year={2024},
archivePrefix={arXiv},
eprint={2409.16033},
primaryClass={cs.RO}
}
|
dong2024rtagrasp:
|
arxiv-661358
|
2409.16036
|
Grounded Computation & Consciousness: A Framework for Exploring Consciousness in Machines & Other Organisms
|
<|reference_start|>Grounded Computation & Consciousness: A Framework for Exploring Consciousness in Machines & Other Organisms: Computational modeling is a critical tool for understanding consciousness, but is it enough on its own? This paper discusses the necessity for an ontological basis of consciousness, and introduces a formal framework for grounding computational descriptions into an ontological substrate. Utilizing this technique, a method is demonstrated for estimating the difference in qualitative experience between two systems. This framework has wide applicability to computational theories of consciousness.<|reference_end|>
|
arxiv
|
@article{williams2024grounded,
title={Grounded Computation & Consciousness: A Framework for Exploring
Consciousness in Machines & Other Organisms},
author={Ryan Williams},
journal={arXiv preprint arXiv:2409.16036},
year={2024},
archivePrefix={arXiv},
eprint={2409.16036},
primaryClass={q-bio.NC cs.AI}
}
|
williams2024grounded
|
arxiv-661359
|
2409.16037
|
Using Virtual Reality as a Simulation Tool for Augmented Reality Virtual Windows: Effects on Cognitive Workload and Task Performance
|
<|reference_start|>Using Virtual Reality as a Simulation Tool for Augmented Reality Virtual Windows: Effects on Cognitive Workload and Task Performance: Virtual content in Augmented Reality (AR) applications can be constructed according to the designer's requirements, but real environments are difficult to accurately control or completely reproduce. This makes it difficult to prototype AR applications for certain real environments. One way to address this issue is to use Virtual Reality (VR) to simulate an AR system, enabling the design of controlled experiments and conducting usability evaluations. However, the effectiveness of using VR to simulate AR has not been well studied. In this paper, we report on a user study (N=20) conducted to investigate the impact of using a VR simulation of AR on participants' task performance and cognitive workload (CWL). Participants performed several office tasks in an AR scene with virtual monitors and then again in the VR-simulated AR scene. While participants used the interfaces, CWL was measured with Electroencephalography (EEG) data and a subjective questionnaire. Results showed that frequent visual checks on the keyboard resulted in decreased task performance and increased cognitive workload. This study found that AR centered on virtual monitors can be effectively simulated using VR. However, there is more research that can be done, so we also report on the study limitations and directions for future work.<|reference_end|>
|
arxiv
|
@article{liu2024using,
title={Using Virtual Reality as a Simulation Tool for Augmented Reality Virtual
Windows: Effects on Cognitive Workload and Task Performance},
author={Tianyu Liu, Weiping He, Mark Billinghurst},
journal={arXiv preprint arXiv:2409.16037},
year={2024},
archivePrefix={arXiv},
eprint={2409.16037},
primaryClass={cs.HC cs.GR}
}
|
liu2024using
|
arxiv-661360
|
2409.16040
|
Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts
|
<|reference_start|>Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts: Deep learning for time series forecasting has seen significant advancements over the past decades. However, despite the success of large-scale pre-training in language and vision domains, pre-trained time series models remain limited in scale and operate at a high cost, hindering the development of larger capable forecasting models in real-world applications. In response, we introduce Time-MoE, a scalable and unified architecture designed to pre-train larger, more capable forecasting foundation models while reducing inference costs. By leveraging a sparse mixture-of-experts (MoE) design, Time-MoE enhances computational efficiency by activating only a subset of networks for each prediction, reducing computational load while maintaining high model capacity. This allows Time-MoE to scale effectively without a corresponding increase in inference costs. Time-MoE comprises a family of decoder-only transformer models that operate in an auto-regressive manner and support flexible forecasting horizons with varying input context lengths. We pre-trained these models on our newly introduced large-scale dataset Time-300B, which spans over 9 domains and encompasses over 300 billion time points. For the first time, we scaled a time series foundation model up to 2.4 billion parameters, achieving significantly improved forecasting precision. Our results validate the applicability of scaling laws for training tokens and model size in the context of time series forecasting. Compared to dense models with the same number of activated parameters or equivalent computation budgets, our models consistently outperform them by a large margin. These advancements position Time-MoE as a state-of-the-art solution for tackling real-world time series forecasting challenges with superior capability, efficiency, and flexibility.<|reference_end|>
|
arxiv
|
@article{shi2024time-moe:,
title={Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of
Experts},
author={Xiaoming Shi, Shiyu Wang, Yuqi Nie, Dianqi Li, Zhou Ye, Qingsong Wen,
Ming Jin},
journal={arXiv preprint arXiv:2409.16040},
year={2024},
archivePrefix={arXiv},
eprint={2409.16040},
primaryClass={cs.LG cs.AI}
}
|
shi2024time-moe:
|
arxiv-661361
|
2409.16041
|
Safe Output Feedback Improvement with Baselines
|
<|reference_start|>Safe Output Feedback Improvement with Baselines: In data-driven control design, an important problem is to deal with uncertainty due to limited and noisy data. One way to do this is to use a min-max approach, which aims to minimize some design criteria for the worst-case scenario. However, a strategy based on this approach can lead to overly conservative controllers. To overcome this issue, we apply the idea of baseline regret, and it is seen that minimizing the baseline regret under model uncertainty can guarantee safe controller improvement with less conservatism and variance in the resulting controllers. To exemplify the use of baseline controllers, we focus on the output feedback setting and propose a two-step control design method; first, an uncertainty set is constructed by a data-driven system identification approach based on finite impulse response models; then a control design criterion based on model reference control is used. To solve the baseline regret optimization problem efficiently, we use a convex approximation of the criterion and apply the scenario approach in optimization. The numerical examples show that the inclusion of baseline regret indeed improves the performance and reduces the variance of the resulting controller.<|reference_end|>
|
arxiv
|
@article{zhang2024safe,
title={Safe Output Feedback Improvement with Baselines},
author={Ruoqi Zhang, Per Mattsson, Dave Zachariah},
journal={arXiv preprint arXiv:2409.16041},
year={2024},
archivePrefix={arXiv},
eprint={2409.16041},
primaryClass={eess.SY cs.SY}
}
|
zhang2024safe
|
arxiv-661362
|
2409.16042
|
Enhanced Unsupervised Image-to-Image Translation Using Contrastive Learning and Histogram of Oriented Gradients
|
<|reference_start|>Enhanced Unsupervised Image-to-Image Translation Using Contrastive Learning and Histogram of Oriented Gradients: Image-to-Image Translation is a vital area of computer vision that focuses on transforming images from one visual domain to another while preserving their core content and structure. However, this field faces two major challenges: first, the data from the two domains are often unpaired, making it difficult to train generative adversarial networks effectively; second, existing methods tend to produce artifacts or hallucinations during image generation, leading to a decline in image quality. To address these issues, this paper proposes an enhanced unsupervised image-to-image translation method based on the Contrastive Unpaired Translation (CUT) model, incorporating Histogram of Oriented Gradients (HOG) features. This novel approach ensures the preservation of the semantic structure of images, even without semantic labels, by minimizing the loss between the HOG features of input and generated images. The method was tested on translating synthetic game environments from the GTA5 dataset to realistic urban scenes in the Cityscapes dataset, demonstrating significant improvements in reducing hallucinations and enhancing image quality.<|reference_end|>
|
arxiv
|
@article{zhao2024enhanced,
title={Enhanced Unsupervised Image-to-Image Translation Using Contrastive
Learning and Histogram of Oriented Gradients},
author={Wanchen Zhao},
journal={arXiv preprint arXiv:2409.16042},
year={2024},
archivePrefix={arXiv},
eprint={2409.16042},
primaryClass={eess.IV cs.CV}
}
|
zhao2024enhanced
|
arxiv-661363
|
2409.16045
|
LTNtorch: PyTorch Implementation of Logic Tensor Networks
|
<|reference_start|>LTNtorch: PyTorch Implementation of Logic Tensor Networks: Logic Tensor Networks (LTN) is a Neuro-Symbolic framework that effectively incorporates deep learning and logical reasoning. In particular, LTN allows defining a logical knowledge base and using it as the objective of a neural model. This makes learning by logical reasoning possible as the parameters of the model are optimized by minimizing a loss function composed of a set of logical formulas expressing facts about the learning task. The framework learns via gradient-descent optimization. Fuzzy logic, a relaxation of classical logic permitting continuous truth values in the interval [0,1], makes this learning possible. Specifically, the training of an LTN consists of three steps. Firstly, (1) the training data is used to ground the formulas. Then, (2) the formulas are evaluated, and the loss function is computed. Lastly, (3) the gradients are back-propagated through the logical computational graph, and the weights of the neural model are changed so the knowledge base is maximally satisfied. LTNtorch is the fully documented and tested PyTorch implementation of Logic Tensor Networks. This paper presents the formalization of LTN and how LTNtorch implements it. Moreover, it provides a basic binary classification example.<|reference_end|>
|
arxiv
|
@article{carraro2024ltntorch:,
title={LTNtorch: PyTorch Implementation of Logic Tensor Networks},
author={Tommaso Carraro, Luciano Serafini, Fabio Aiolli},
journal={arXiv preprint arXiv:2409.16045},
year={2024},
archivePrefix={arXiv},
eprint={2409.16045},
primaryClass={cs.AI}
}
|
carraro2024ltntorch:
|
arxiv-661364
|
2409.16047
|
Examples of slow convergence for adaptive regularization optimization methods are not isolated
|
<|reference_start|>Examples of slow convergence for adaptive regularization optimization methods are not isolated: The adaptive regularization algorithm for unconstrained nonconvex optimization was shown in Nesterov and Polyak (2006) and Cartis, Gould and Toint (2011) to require, under standard assumptions, at most $\mathcal{O}(\epsilon^{3/(3-q)})$ evaluations of the objective function and its derivatives of degrees one and two to produce an $\epsilon$-approximate critical point of order $q\in\{1,2\}$. This bound was shown to be sharp for $q \in\{1,2\}$. This note revisits these results and shows that the example for which slow convergence is exhibited is not isolated, but that this behaviour occurs for a subset of univariate functions of nonzero measure.<|reference_end|>
|
arxiv
|
@article{toint2024examples,
title={Examples of slow convergence for adaptive regularization optimization
methods are not isolated},
author={Philippe L. Toint},
journal={arXiv preprint arXiv:2409.16047},
year={2024},
archivePrefix={arXiv},
eprint={2409.16047},
primaryClass={math.OC cs.CC}
}
|
toint2024examples
|
arxiv-661365
|
2409.16048
|
Whole-body end-effector pose tracking
|
<|reference_start|>Whole-body end-effector pose tracking: Combining manipulation with the mobility of legged robots is essential for a wide range of robotic applications. However, integrating an arm with a mobile base significantly increases the system's complexity, making precise end-effector control challenging. Existing model-based approaches are often constrained by their modeling assumptions, leading to limited robustness. Meanwhile, recent Reinforcement Learning (RL) implementations restrict the arm's workspace to be in front of the robot or track only the position to obtain decent tracking accuracy. In this work, we address these limitations by introducing a whole-body RL formulation for end-effector pose tracking in a large workspace on rough, unstructured terrains. Our proposed method involves a terrain-aware sampling strategy for the robot's initial configuration and end-effector pose commands, as well as a game-based curriculum to extend the robot's operating range. We validate our approach on the ANYmal quadrupedal robot with a six DoF robotic arm. Through our experiments, we show that the learned controller achieves precise command tracking over a large workspace and adapts across varying terrains such as stairs and slopes. On deployment, it achieves a pose-tracking error of 2.64 cm and 3.64 degrees, outperforming existing competitive baselines.<|reference_end|>
|
arxiv
|
@article{portela2024whole-body,
title={Whole-body end-effector pose tracking},
author={Tifanny Portela, Andrei Cramariuc, Mayank Mittal, Marco Hutter},
journal={arXiv preprint arXiv:2409.16048},
year={2024},
archivePrefix={arXiv},
eprint={2409.16048},
primaryClass={cs.RO cs.AI cs.LG cs.SY eess.SY}
}
|
portela2024whole-body
|
arxiv-661366
|
2409.16052
|
Denoising Graph Super-Resolution towards Improved Collider Event Reconstruction
|
<|reference_start|>Denoising Graph Super-Resolution towards Improved Collider Event Reconstruction: Accurately reconstructing particles from detector data is a critical challenge in experimental particle physics, where the spatial resolution of calorimeters has a crucial impact. This study explores the integration of super-resolution techniques into an LHC-like reconstruction pipeline to effectively enhance the granularity of calorimeter data and suppress noise. We find that this software preprocessing step can significantly improve reconstruction quality without physical changes to detectors. To demonstrate the impact of our approach, we propose a novel particle flow model that offers enhanced particle reconstruction quality and interpretability. These advancements underline the potential of super-resolution to impact both current and future particle physics experiments.<|reference_end|>
|
arxiv
|
@article{kakati2024denoising,
title={Denoising Graph Super-Resolution towards Improved Collider Event
Reconstruction},
author={Nilotpal Kakati, Etienne Dreyer, Eilam Gross},
journal={arXiv preprint arXiv:2409.16052},
year={2024},
archivePrefix={arXiv},
eprint={2409.16052},
primaryClass={hep-ex cs.LG}
}
|
kakati2024denoising
|
arxiv-661367
|
2409.16056
|
Adversarial Watermarking for Face Recognition
|
<|reference_start|>Adversarial Watermarking for Face Recognition: Watermarking is an essential technique for embedding an identifier (i.e., watermark message) within digital images to assert ownership and monitor unauthorized alterations. In face recognition systems, watermarking plays a pivotal role in ensuring data integrity and security. However, an adversary could potentially interfere with the watermarking process, significantly impairing recognition performance. We explore the interaction between watermarking and adversarial attacks on face recognition models. Our findings reveal that while watermarking or input-level perturbation alone may have a negligible effect on recognition accuracy, the combined effect of watermarking and perturbation can result in an adversarial watermarking attack, significantly degrading recognition performance. Specifically, we introduce a novel threat model, the adversarial watermarking attack, which remains stealthy in the absence of watermarking, allowing images to be correctly recognized initially. However, once watermarking is applied, the attack is activated, causing recognition failures. Our study reveals a previously unrecognized vulnerability: adversarial perturbations can exploit the watermark message to evade face recognition systems. Evaluated on the CASIA-WebFace dataset, our proposed adversarial watermarking attack reduces face matching accuracy by 67.2% with an $\ell_\infty$ norm-measured perturbation strength of ${2}/{255}$ and by 95.9% with a strength of ${4}/{255}$.<|reference_end|>
|
arxiv
|
@article{yao2024adversarial,
title={Adversarial Watermarking for Face Recognition},
author={Yuguang Yao, Anil Jain, Sijia Liu},
journal={arXiv preprint arXiv:2409.16056},
year={2024},
archivePrefix={arXiv},
eprint={2409.16056},
primaryClass={cs.CV cs.AI}
}
|
yao2024adversarial
|
arxiv-661368
|
2409.16057
|
Towards Robust Object Detection: Identifying and Removing Backdoors via Module Inconsistency Analysis
|
<|reference_start|>Towards Robust Object Detection: Identifying and Removing Backdoors via Module Inconsistency Analysis: Object detection models, widely used in security-critical applications, are vulnerable to backdoor attacks that cause targeted misclassifications when triggered by specific patterns. Existing backdoor defense techniques, primarily designed for simpler models like image classifiers, often fail to effectively detect and remove backdoors in object detectors. We propose a backdoor defense framework tailored to object detection models, based on the observation that backdoor attacks cause significant inconsistencies between local modules' behaviors, such as the Region Proposal Network (RPN) and classification head. By quantifying and analyzing these inconsistencies, we develop an algorithm to detect backdoors. We find that the inconsistent module is usually the main source of backdoor behavior, leading to a removal method that localizes the affected module, resets its parameters, and fine-tunes the model on a small clean dataset. Extensive experiments with state-of-the-art two-stage object detectors show our method achieves a 90% improvement in backdoor removal rate over fine-tuning baselines, while limiting clean data accuracy loss to less than 4%. To the best of our knowledge, this work presents the first approach that addresses both the detection and removal of backdoors in two-stage object detection models, advancing the field of securing these complex systems against backdoor attacks.<|reference_end|>
|
arxiv
|
@article{zhang2024towards,
title={Towards Robust Object Detection: Identifying and Removing Backdoors via
Module Inconsistency Analysis},
author={Xianda Zhang and Siyuan Liang},
journal={arXiv preprint arXiv:2409.16057},
year={2024},
archivePrefix={arXiv},
eprint={2409.16057},
primaryClass={cs.CV cs.AI}
}
|
zhang2024towards
|
arxiv-661369
|
2409.16058
|
Generative 3D Cardiac Shape Modelling for In-Silico Trials
|
<|reference_start|>Generative 3D Cardiac Shape Modelling for In-Silico Trials: We propose a deep learning method to model and generate synthetic aortic shapes based on representing shapes as the zero-level set of a neural signed distance field, conditioned by a family of trainable embedding vectors which encode the geometric features of each shape. The network is trained on a dataset of aortic root meshes reconstructed from CT images by making the neural field vanish on sampled surface points and enforcing its spatial gradient to have unit norm. Empirical results show that our model can represent aortic shapes with high fidelity. Moreover, by sampling from the learned embedding vectors, we can generate novel shapes that resemble real patient anatomies, which can be used for in-silico trials.<|reference_end|>
|
arxiv
|
@article{gasparovici2024generative,
title={Generative 3D Cardiac Shape Modelling for In-Silico Trials},
author={Andrei Gasparovici, Alex Serban},
journal={arXiv preprint arXiv:2409.16058},
year={2024},
archivePrefix={arXiv},
eprint={2409.16058},
primaryClass={cs.CV eess.IV}
}
|
gasparovici2024generative
|
arxiv-661370
|
2409.16063
|
Benchmarking Robustness of Endoscopic Depth Estimation with Synthetically Corrupted Data
|
<|reference_start|>Benchmarking Robustness of Endoscopic Depth Estimation with Synthetically Corrupted Data: Accurate depth perception is crucial for patient outcomes in endoscopic surgery, yet it is compromised by image distortions common in surgical settings. To tackle this issue, our study presents a benchmark for assessing the robustness of endoscopic depth estimation models. We have compiled a comprehensive dataset that reflects real-world conditions, incorporating a range of synthetically induced corruptions at varying severity levels. To further this effort, we introduce the Depth Estimation Robustness Score (DERS), a novel metric that combines measures of error, accuracy, and robustness to meet the multifaceted requirements of surgical applications. This metric acts as a foundational element for evaluating performance, establishing a new paradigm for the comparative analysis of depth estimation technologies. Additionally, we set forth a benchmark focused on robustness for the evaluation of depth estimation in endoscopic surgery, with the aim of driving progress in model refinement. A thorough analysis of two monocular depth estimation models using our framework reveals crucial information about their reliability under adverse conditions. Our results emphasize the essential need for algorithms that can tolerate data corruption, thereby advancing discussions on improving model robustness. The impact of this research transcends theoretical frameworks, providing concrete gains in surgical precision and patient safety. This study establishes a benchmark for the robustness of depth estimation and serves as a foundation for developing more resilient surgical support technologies. Code is available at https://github.com/lofrienger/EndoDepthBenchmark.<|reference_end|>
|
arxiv
|
@article{wang2024benchmarking,
title={Benchmarking Robustness of Endoscopic Depth Estimation with
Synthetically Corrupted Data},
author={An Wang, Haochen Yin, Beilei Cui, Mengya Xu, Hongliang Ren},
journal={arXiv preprint arXiv:2409.16063},
year={2024},
archivePrefix={arXiv},
eprint={2409.16063},
primaryClass={cs.CV eess.IV}
}
|
wang2024benchmarking
|
arxiv-661371
|
2409.16068
|
A decision-theoretic model for a principal-agent collaborative learning problem
|
<|reference_start|>A decision-theoretic model for a principal-agent collaborative learning problem: In this technical note, we consider a collaborative learning framework with a principal-agent setting, in which the principal at each time-step determines a set of appropriate aggregation coefficients based on how effectively the current parameter estimates from a group of $K$ agents performed in connection with a separate test dataset, which is not part of the agents' training model datasets. The agents, who act together as a team, then update their parameter estimates using a discrete-time version of Langevin dynamics with a mean-field-like interaction term, guided by their respective, different training model datasets. Here, we propose a decision-theoretic framework that explicitly describes how the principal progressively determines a set of nonnegative aggregation coefficients, summing to one, that are used by the agents in their mean-field-like interaction term, eventually leading them to reach a consensus optimal parameter estimate. Interestingly, due to the inherent feedback and cooperative behavior among the agents, the proposed framework offers some advantages in terms of stability and generalization, even though neither the principal nor the agents necessarily have any knowledge of the sample distributions or the quality of each other's datasets.<|reference_end|>
|
arxiv
|
@article{befekadu2024a,
title={A decision-theoretic model for a principal-agent collaborative learning
problem},
author={Getachew K Befekadu},
journal={arXiv preprint arXiv:2409.16068},
year={2024},
archivePrefix={arXiv},
eprint={2409.16068},
primaryClass={stat.ML cs.LG}
}
|
befekadu2024a
|
arxiv-661372
|
2409.16069
|
Machine learning approaches for automatic defect detection in photovoltaic systems
|
<|reference_start|>Machine learning approaches for automatic defect detection in photovoltaic systems: Solar photovoltaic (PV) modules are prone to damage during manufacturing, installation and operation which reduces their power conversion efficiency. This diminishes their positive environmental impact over the lifecycle. Continuous monitoring of PV modules during operation via unmanned aerial vehicles is essential to ensure that defective panels are promptly replaced or repaired to maintain high power conversion efficiencies. Computer vision provides an automatic, non-destructive and cost-effective tool for monitoring defects in large-scale PV plants. We review the current landscape of deep learning-based computer vision techniques used for detecting defects in solar modules. We compare and evaluate the existing approaches at different levels, namely the type of images used, data collection and processing method, deep learning architectures employed, and model interpretability. Most approaches use convolutional neural networks together with data augmentation or generative adversarial network-based techniques. We evaluate the deep learning approaches by performing interpretability analysis on classification tasks. This analysis reveals that the model focuses on the darker regions of the image to perform the classification. We find clear gaps in the existing approaches while also laying out the groundwork for mitigating these challenges when building new models. We conclude with the relevant research gaps that need to be addressed and approaches for progress in this field: integrating geometric deep learning with existing approaches for building more robust and reliable models, leveraging physics-based neural networks that combine domain expertise of physical laws to build more domain-aware deep learning models, and incorporating interpretability as a factor for building models that can be trusted. The review points towards a clear roadmap for making this technology commercially relevant.<|reference_end|>
|
arxiv
|
@article{mohanty2024machine,
title={Machine learning approaches for automatic defect detection in
photovoltaic systems},
author={Swayam Rajat Mohanty and Moin Uddin Maruf and Vaibhav Singh and
Zeeshan Ahmad},
journal={arXiv preprint arXiv:2409.16069},
year={2024},
archivePrefix={arXiv},
eprint={2409.16069},
primaryClass={cs.CV physics.app-ph}
}
|
mohanty2024machine
|
arxiv-661373
|
2409.16071
|
Learning with Confidence: Training Better Classifiers from Soft Labels
|
<|reference_start|>Learning with Confidence: Training Better Classifiers from Soft Labels: In supervised machine learning, models are typically trained using data with hard labels, i.e., definite assignments of class membership. This traditional approach, however, does not take the inherent uncertainty in these labels into account. We investigate whether incorporating label uncertainty, represented as discrete probability distributions over the class labels -- known as soft labels -- improves the predictive performance of classification models. We first demonstrate the potential value of soft label learning (SLL) for estimating model parameters in a simulation experiment, particularly for limited sample sizes and imbalanced data. Subsequently, we compare the performance of various wrapper methods for learning from both hard and soft labels using identical base classifiers. On real-world-inspired synthetic data with clean labels, the SLL methods consistently outperform hard label methods. Since real-world data is often noisy and precise soft labels are challenging to obtain, we study the effect that noisy probability estimates have on model performance. Alongside conventional noise models, our study examines four types of miscalibration that are known to affect human annotators. The results show that SLL methods outperform the hard label methods in the majority of settings. Finally, we evaluate the methods on a real-world dataset with confidence scores, where the SLL methods are shown to match the traditional methods for predicting the (noisy) hard labels while providing more accurate confidence estimates.<|reference_end|>
|
arxiv
|
@article{de vries2024learning,
title={Learning with Confidence: Training Better Classifiers from Soft Labels},
author={Sjoerd de Vries and Dirk Thierens},
journal={arXiv preprint arXiv:2409.16071},
year={2024},
archivePrefix={arXiv},
eprint={2409.16071},
primaryClass={cs.LG}
}
|
de vries2024learning
|
arxiv-661374
|
2409.16073
|
Open-World Object Detection with Instance Representation Learning
|
<|reference_start|>Open-World Object Detection with Instance Representation Learning: While humans naturally identify novel objects and understand their relationships, deep learning-based object detectors struggle to detect and relate objects that are not observed during training. To overcome this issue, Open World Object Detection (OWOD) has been introduced to enable models to detect unknown objects in open-world scenarios. However, OWOD methods fail to capture the fine-grained relationships between detected objects, which are crucial for comprehensive scene understanding and applications such as class discovery and tracking. In this paper, we propose a method to train an object detector that can both detect novel objects and extract semantically rich features in open-world conditions by leveraging the knowledge of Vision Foundation Models (VFM). We first utilize the semantic masks from the Segment Anything Model to supervise the box regression of unknown objects, ensuring accurate localization. By transferring the instance-wise similarities obtained from the VFM features to the detector's instance embeddings, our method then learns a semantically rich feature space of these embeddings. Extensive experiments show that our method learns a robust and generalizable feature space, outperforming other OWOD-based feature extraction methods. Additionally, we demonstrate that the enhanced feature from our model increases the detector's applicability to tasks such as open-world tracking.<|reference_end|>
|
arxiv
|
@article{lee2024open-world,
title={Open-World Object Detection with Instance Representation Learning},
author={Sunoh Lee, Minsik Jeon, Jihong Min, Junwon Seo},
journal={arXiv preprint arXiv:2409.16073},
year={2024},
archivePrefix={arXiv},
eprint={2409.16073},
primaryClass={cs.CV cs.RO}
}
|
lee2024open-world
|
arxiv-661375
|
2409.16074
|
Real-time Planning of Minimum-time Trajectories for Agile UAV Flight
|
<|reference_start|>Real-time Planning of Minimum-time Trajectories for Agile UAV Flight: We address the challenge of real-time planning of minimum-time trajectories over multiple waypoints, onboard multirotor UAVs. Previous works demonstrated that achieving a truly time-optimal trajectory is computationally too demanding to enable frequent replanning during agile flight, especially on less powerful flight computers. Our approach overcomes this stumbling block by utilizing a point-mass model with a novel iterative thrust decomposition algorithm, enabling the UAV to use all of its collective thrust, something previous point-mass approaches could not achieve. The approach enables gravity and drag modeling integration, significantly reducing tracking errors in high-speed trajectories, which is proven through an ablation study. When combined with a new multi-waypoint optimization algorithm, which uses a gradient-based method to converge to optimal velocities in waypoints, the proposed method generates minimum-time multi-waypoint trajectories within milliseconds. The proposed approach, which we provide as an open-source package, is validated both in simulation and in the real world, using Nonlinear Model Predictive Control. With accelerations of up to 3.5g and speeds over 100 km/h, trajectories generated by the proposed method yield similar or even smaller tracking errors than the trajectories generated for a full multirotor model.<|reference_end|>
|
arxiv
|
@article{teissing2024real-time,
title={Real-time Planning of Minimum-time Trajectories for Agile UAV Flight},
author={Krystof Teissing, Matej Novosad, Robert Penicka, Martin Saska},
journal={arXiv preprint arXiv:2409.16074},
year={2024},
doi={10.1109/LRA.2024.3471388},
archivePrefix={arXiv},
eprint={2409.16074},
primaryClass={cs.RO}
}
|
teissing2024real-time
|
arxiv-661376
|
2409.16075
|
Ultra-low latency quantum-inspired machine learning predictors implemented on FPGA
|
<|reference_start|>Ultra-low latency quantum-inspired machine learning predictors implemented on FPGA: Tensor Networks (TNs) are a computational paradigm used for representing quantum many-body systems. Recent works have shown how TNs can also be applied to perform Machine Learning (ML) tasks, yielding comparable results to standard supervised learning techniques. In this work, we study the use of Tree Tensor Networks (TTNs) in high-frequency real-time applications by exploiting the low-latency hardware of the Field-Programmable Gate Array (FPGA) technology. We present different implementations of TTN classifiers, capable of performing inference on classical ML datasets as well as on complex physics data. A preparatory analysis of bond dimensions and weight quantization is realized in the training phase, together with entanglement entropy and correlation measurements, that help setting the choice of the TTN architecture. The generated TTNs are then deployed on a hardware accelerator; using an FPGA integrated into a server, the inference of the TTN is completely offloaded. Eventually, a classifier for High Energy Physics (HEP) applications is implemented and executed fully pipelined with sub-microsecond latency.<|reference_end|>
|
arxiv
|
@article{borella2024ultra-low,
title={Ultra-low latency quantum-inspired machine learning predictors
implemented on FPGA},
author={Lorenzo Borella, Alberto Coppi, Jacopo Pazzini, Andrea Stanco, Marco
Trenti, Andrea Triossi, Marco Zanetti},
journal={arXiv preprint arXiv:2409.16075},
year={2024},
archivePrefix={arXiv},
eprint={2409.16075},
primaryClass={hep-ex cs.LG quant-ph}
}
|
borella2024ultra-low
|
arxiv-661377
|
2409.16077
|
Leveraging Mixture of Experts for Improved Speech Deepfake Detection
|
<|reference_start|>Leveraging Mixture of Experts for Improved Speech Deepfake Detection: Speech deepfakes pose a significant threat to personal security and content authenticity. Several detectors have been proposed in the literature, and one of the primary challenges these systems have to face is the generalization over unseen data to identify fake signals across a wide range of datasets. In this paper, we introduce a novel approach for enhancing speech deepfake detection performance using a Mixture of Experts architecture. The Mixture of Experts framework is well-suited for the speech deepfake detection task due to its ability to specialize in different input types and handle data variability efficiently. This approach offers superior generalization and adaptability to unseen data compared to traditional single models or ensemble methods. Additionally, its modular structure supports scalable updates, making it more flexible in managing the evolving complexity of deepfake techniques while maintaining high detection accuracy. We propose an efficient, lightweight gating mechanism to dynamically assign expert weights for each input, optimizing detection performance. Experimental results across multiple datasets demonstrate the effectiveness and potential of our proposed approach.<|reference_end|>
|
arxiv
|
@article{negroni2024leveraging,
title={Leveraging Mixture of Experts for Improved Speech Deepfake Detection},
author={Viola Negroni, Davide Salvi, Alessandro Ilic Mezza, Paolo Bestagini,
Stefano Tubaro},
journal={arXiv preprint arXiv:2409.16077},
year={2024},
archivePrefix={arXiv},
eprint={2409.16077},
primaryClass={cs.SD cs.AI eess.AS}
}
|
negroni2024leveraging
|
arxiv-661378
|
2409.16078
|
Assessing strategies to manage distributed photovoltaics in Swiss low-voltage networks: An analysis of curtailment, export tariffs, and resource sharing
|
<|reference_start|>Assessing strategies to manage distributed photovoltaics in Swiss low-voltage networks: An analysis of curtailment, export tariffs, and resource sharing: The integration of photovoltaic systems poses several challenges for the distribution grid, mainly due to the infrastructure not being designed to handle the upstream flow and being dimensioned for consumption only, potentially leading to reliability and stability issues. This study investigates the use of capacity-based tariffs, export tariffs, and curtailment policies to reduce negative grid impacts without hampering PV deployment. We analyze the effect of such export tariffs on three typical Swiss low-voltage networks (rural, semi-urban, and urban), using power flow analysis to evaluate the power exchanges at the transformer station, as well as line overloading and voltage violations. Finally, a simple case of mutualization of resources is analyzed to assess its potential contribution to relieving network constraints and the economic costs of managing LV networks. We found that the tariff with capacity-based components on the export (CT export daily) severely penalizes PV penetration. This applies to other tariffs as well (e.g. IRR monthly, Curtailment 30, and DT variable) but to a lesser extent. However, the inclusion of curtailment at 50% and 70%, as well as mixed tariffs with capacity-based components at import and curtailment, allows for a high degree of PV installations in the three zones studied and helps to mitigate the impact of PV on the distributed network.<|reference_end|>
|
arxiv
|
@article{pena-bello2024assessing,
title={Assessing strategies to manage distributed photovoltaics in Swiss
low-voltage networks: An analysis of curtailment, export tariffs, and
resource sharing},
author={Alejandro Pena-Bello, Gerard Marias Gonzalez, Nicolas Wyrsch,
Christophe Ballif},
journal={arXiv preprint arXiv:2409.16078},
year={2024},
archivePrefix={arXiv},
eprint={2409.16078},
primaryClass={eess.SY cs.SY}
}
|
pena-bello2024assessing
|
arxiv-661379
|
2409.16081
|
Online Multi-level Contrastive Representation Distillation for Cross-Subject fNIRS Emotion Recognition
|
<|reference_start|>Online Multi-level Contrastive Representation Distillation for Cross-Subject fNIRS Emotion Recognition: Utilizing functional near-infrared spectroscopy (fNIRS) signals for emotion recognition is a significant advancement in understanding human emotions. However, due to the lack of artificial intelligence data and algorithms in this field, current research faces the following challenges: 1) The portable wearable devices have higher requirements for lightweight models; 2) The objective differences of physiology and psychology among different subjects aggravate the difficulty of emotion recognition. To address these challenges, we propose a novel cross-subject fNIRS emotion recognition method, called the Online Multi-level Contrastive Representation Distillation framework (OMCRD). Specifically, OMCRD is a framework designed for mutual learning among multiple lightweight student networks. It utilizes a multi-level fNIRS feature extractor for each sub-network and conducts multi-view sentimental mining using physiological signals. The proposed Inter-Subject Interaction Contrastive Representation (IS-ICR) facilitates knowledge transfer for interactions between student models, enhancing cross-subject emotion recognition performance. The optimal student network can be selected and deployed on a wearable device. Experimental results demonstrate that OMCRD achieves state-of-the-art results in emotional perception and affective imagery tasks.<|reference_end|>
|
arxiv
|
@article{lai2024online,
title={Online Multi-level Contrastive Representation Distillation for
Cross-Subject fNIRS Emotion Recognition},
author={Zhili Lai, Chunmei Qing, Junpeng Tan, Wanxiang Luo, Xiangmin Xu},
journal={arXiv preprint arXiv:2409.16081},
year={2024},
archivePrefix={arXiv},
eprint={2409.16081},
primaryClass={cs.HC cs.AI}
}
|
lai2024online
|
arxiv-661380
|
2409.16082
|
GS-Net: Global Self-Attention Guided CNN for Multi-Stage Glaucoma Classification
|
<|reference_start|>GS-Net: Global Self-Attention Guided CNN for Multi-Stage Glaucoma Classification: Glaucoma is a common eye disease that leads to irreversible blindness unless timely detected. Hence, glaucoma detection at an early stage is of utmost importance for a better treatment plan and ultimately saving the vision. The recent literature has shown the prominence of CNN-based methods to detect glaucoma from retinal fundus images. However, such methods mainly focus on solving binary classification tasks and have not been thoroughly explored for the detection of different glaucoma stages, which is relatively challenging due to minute lesion size variations and high inter-class similarities. This paper proposes a global self-attention based network called GS-Net for efficient multi-stage glaucoma classification. We introduce a global self-attention module (GSAM) consisting of two parallel attention modules, a channel attention module (CAM) and a spatial attention module (SAM), to learn global feature dependencies across channel and spatial dimensions. The GSAM encourages extracting more discriminative and class-specific features from the fundus images. The experimental results on a publicly available dataset demonstrate that our GS-Net outperforms state-of-the-art methods. Also, the GSAM achieves competitive performance against popular attention modules.<|reference_end|>
|
arxiv
|
@article{das2024gs-net:,
title={GS-Net: Global Self-Attention Guided CNN for Multi-Stage Glaucoma
Classification},
author={Dipankar Das, Deepak Ranjan Nayak},
journal={ICIP 2023},
year={2024},
doi={10.1109/ICIP49359.2023.10222689},
archivePrefix={arXiv},
eprint={2409.16082},
primaryClass={cs.CV}
}
|
das2024gs-net:
|
arxiv-661381
|
2409.16083
|
Multi-Model Ensemble Approach for Accurate Bi-Atrial Segmentation in LGE-MRI of Atrial Fibrillation Patients
|
<|reference_start|>Multi-Model Ensemble Approach for Accurate Bi-Atrial Segmentation in LGE-MRI of Atrial Fibrillation Patients: Atrial fibrillation (AF) is the most prevalent form of cardiac arrhythmia and is associated with increased morbidity and mortality. The effectiveness of current clinical interventions for AF is often limited by an incomplete understanding of the atrial anatomical structures that sustain this arrhythmia. Late Gadolinium-Enhanced MRI (LGE-MRI) has emerged as a critical imaging modality for assessing atrial fibrosis and scarring, which are essential markers for predicting the success of ablation procedures in AF patients. The Multi-class Bi-Atrial Segmentation (MBAS) challenge at MICCAI 2024 aims to enhance the segmentation of both left and right atria and their walls using a comprehensive dataset of 200 multi-center 3D LGE-MRIs, labelled by experts. This work presents an ensemble approach that integrates multiple machine learning models, including Unet, ResNet, EfficientNet and VGG, to perform automatic bi-atrial segmentation from LGE-MRI data. The ensemble model was evaluated using the Dice Similarity Coefficient (DSC) and 95% Hausdorff distance (HD95) on the left & right atrium wall, right atrium cavity, and left atrium cavity. On the internal testing dataset, the model achieved a DSC of 88.41%, 98.48%, 98.45% and an HD95 of 1.07, 0.95, 0.64 respectively. This demonstrates the effectiveness of the ensemble model in improving segmentation accuracy. The approach contributes to advancing the understanding of AF and supports the development of more targeted and effective ablation strategies.<|reference_end|>
|
arxiv
|
@article{beveridge2024multi-model,
title={Multi-Model Ensemble Approach for Accurate Bi-Atrial Segmentation in
LGE-MRI of Atrial Fibrillation Patients},
author={Lucas Beveridge and Le Zhang},
journal={arXiv preprint arXiv:2409.16083},
year={2024},
archivePrefix={arXiv},
eprint={2409.16083},
primaryClass={eess.IV cs.CV}
}
|
beveridge2024multi-model
|
arxiv-661382
|
2409.16084
|
MM-CamObj: A Comprehensive Multimodal Dataset for Camouflaged Object Scenarios
|
<|reference_start|>MM-CamObj: A Comprehensive Multimodal Dataset for Camouflaged Object Scenarios: Large visual-language models (LVLMs) have achieved great success in multiple applications. However, they still encounter challenges in complex scenes, especially those involving camouflaged objects. This is primarily due to the lack of samples related to camouflaged scenes in the training dataset. To mitigate this issue, we construct the MM-CamObj dataset for the first time, comprising two subsets: CamObj-Align and CamObj-Instruct. Specifically, CamObj-Align contains 11,363 image-text pairs, and it is designed for VL alignment and injecting rich knowledge of camouflaged scenes into LVLMs. CamObj-Instruct is collected for fine-tuning the LVLMs with improved instruction-following capabilities, and it includes 11,363 images and 68,849 conversations with diverse instructions. Based on the MM-CamObj dataset, we propose the CamObj-Llava, an LVLM specifically designed for addressing tasks in camouflaged scenes. To facilitate our model's effective acquisition of knowledge about camouflaged objects and scenes, we introduce a curriculum learning strategy with six distinct modes. Additionally, we construct the CamObj-Bench to evaluate the existing LVLMs' capabilities of understanding, recognition, localization and count in camouflage scenes. This benchmark includes 600 images and 7 tasks, with a total of 9,449 questions. Extensive experiments are conducted on the CamObj-Bench with CamObj-Llava, 8 existing open-source and 3 closed-source LVLMs. Surprisingly, the results indicate that our model achieves a 25.84% improvement in 4 out of 7 tasks compared to GPT-4o. Code and datasets will be available at https://github.com/JCruan519/MM-CamObj.<|reference_end|>
|
arxiv
|
@article{ruan2024mm-camobj:,
title={MM-CamObj: A Comprehensive Multimodal Dataset for Camouflaged Object
Scenarios},
author={Jiacheng Ruan, Wenzhen Yuan, Zehao Lin, Ning Liao, Zhiyu Li, Feiyu
Xiong, Ting Liu, Yuzhuo Fu},
journal={arXiv preprint arXiv:2409.16084},
year={2024},
archivePrefix={arXiv},
eprint={2409.16084},
primaryClass={cs.CV}
}
|
ruan2024mm-camobj:
|
arxiv-661383
|
2409.16086
|
Assessing Simplification Levels in Neural Networks: The Impact of Hyperparameter Configurations on Complexity and Sensitivity
|
<|reference_start|>Assessing Simplification Levels in Neural Networks: The Impact of Hyperparameter Configurations on Complexity and Sensitivity: This paper presents an experimental study focused on understanding the simplification properties of neural networks under different hyperparameter configurations, specifically investigating the effects on Lempel Ziv complexity and sensitivity. By adjusting key hyperparameters such as activation functions, hidden layers, and learning rate, this study evaluates how these parameters impact the complexity of network outputs and their robustness to input perturbations. The experiments conducted using the MNIST dataset aim to provide insights into the relationships between hyperparameters, complexity, and sensitivity, contributing to a deeper theoretical understanding of these concepts in neural networks.<|reference_end|>
|
arxiv
|
@article{guan2024assessing,
title={Assessing Simplification Levels in Neural Networks: The Impact of
Hyperparameter Configurations on Complexity and Sensitivity},
author={(Joy) Huixin Guan},
journal={arXiv preprint arXiv:2409.16086},
year={2024},
archivePrefix={arXiv},
eprint={2409.16086},
primaryClass={cs.LG cs.AI}
}
|
guan2024assessing
|
arxiv-661384
|
2409.16089
|
From Pixels to Words: Leveraging Explainability in Face Recognition through Interactive Natural Language Processing
|
<|reference_start|>From Pixels to Words: Leveraging Explainability in Face Recognition through Interactive Natural Language Processing: Face Recognition (FR) has advanced significantly with the development of deep learning, achieving high accuracy in several applications. However, the lack of interpretability of these systems raises concerns about their accountability, fairness, and reliability. In the present study, we propose an interactive framework to enhance the explainability of FR models by combining model-agnostic Explainable Artificial Intelligence (XAI) and Natural Language Processing (NLP) techniques. The proposed framework is able to accurately answer various questions of the user through an interactive chatbot. In particular, the explanations generated by our proposed method are in the form of natural language text and visual representations, which for example can describe how different facial regions contribute to the similarity measure between two faces. This is achieved through the automatic analysis of the output's saliency heatmaps of the face images and a BERT question-answering model, providing users with an interface that facilitates a comprehensive understanding of the FR decisions. The proposed approach is interactive, allowing the users to ask questions to get more precise information based on the user's background knowledge. More importantly, in contrast to previous studies, our solution does not decrease the face recognition performance. We demonstrate the effectiveness of the method through different experiments, highlighting its potential to make FR systems more interpretable and user-friendly, especially in sensitive applications where decision-making transparency is crucial.<|reference_end|>
|
arxiv
|
@article{deandres-tame2024from,
title={From Pixels to Words: Leveraging Explainability in Face Recognition
through Interactive Natural Language Processing},
author={Ivan DeAndres-Tame and Muhammad Faisal and Ruben Tolosana and Rouqaiah
Al-Refai and Ruben Vera-Rodriguez and Philipp Terhörst},
journal={27th International Conference on Pattern Recognition Workshops
(ICPRw 2024)},
year={2024},
archivePrefix={arXiv},
eprint={2409.16089},
primaryClass={cs.CV cs.AI cs.CY cs.LG}
}
|
deandres-tame2024from
|
arxiv-661385
|
2409.16096
|
Exploring Hint Generation Approaches in Open-Domain Question Answering
|
<|reference_start|>Exploring Hint Generation Approaches in Open-Domain Question Answering: Automatic Question Answering (QA) systems rely on contextual information to provide accurate answers. Commonly, contexts are prepared through either retrieval-based or generation-based methods. The former involves retrieving relevant documents from a corpus like Wikipedia, whereas the latter uses generative models such as Large Language Models (LLMs) to generate the context. In this paper, we introduce a novel context preparation approach called HINTQA, which employs Automatic Hint Generation (HG) techniques. Unlike traditional methods, HINTQA prompts LLMs to produce hints about potential answers for the question rather than generating relevant context. We evaluate our approach across three QA datasets including TriviaQA, NaturalQuestions, and Web Questions, examining how the number and order of hints impact performance. Our findings show that the HINTQA surpasses both retrieval-based and generation-based approaches. We demonstrate that hints enhance the accuracy of answers more than retrieved and generated contexts.<|reference_end|>
|
arxiv
|
@article{mozafari2024exploring,
title={Exploring Hint Generation Approaches in Open-Domain Question Answering},
author={Jamshid Mozafari, Abdelrahman Abdallah, Bhawna Piryani, Adam Jatowt},
journal={arXiv preprint arXiv:2409.16096},
year={2024},
archivePrefix={arXiv},
eprint={2409.16096},
primaryClass={cs.CL cs.IR}
}
|
mozafari2024exploring
|
arxiv-661386
|
2409.16098
|
The Digital Transformation in Health: How AI Can Improve the Performance of Health Systems
|
<|reference_start|>The Digital Transformation in Health: How AI Can Improve the Performance of Health Systems: Mobile health has the potential to revolutionize health care delivery and patient engagement. In this work, we discuss how integrating Artificial Intelligence into digital health applications (focused on supply chain, patient management, and capacity building, among other use cases) can improve the health system and public health performance. We present an Artificial Intelligence and Reinforcement Learning platform that allows the delivery of adaptive interventions whose impact can be optimized through experimentation and real-time monitoring. The system can integrate multiple data sources and digital health applications. The flexibility of this platform to connect to various mobile health applications and digital devices and send personalized recommendations based on past data and predictions can significantly improve the impact of digital tools on health system outcomes. The potential for resource-poor settings, where the impact of this approach on health outcomes could be more decisive, is discussed specifically. This framework is, however, similarly applicable to improving efficiency in health systems where scarcity is not an issue.<|reference_end|>
|
arxiv
|
@article{periáñez2024the,
title={The Digital Transformation in Health: How AI Can Improve the Performance
of Health Systems},
author={África Periáñez, Ana Fernández del Río, Ivan Nazarov, Enric
Jané, Moiz Hassan, Aditya Rastogi, and Dexian Tang},
journal={arXiv preprint arXiv:2409.16098},
year={2024},
archivePrefix={arXiv},
eprint={2409.16098},
primaryClass={cs.LG cs.AI cs.CY cs.HC}
}
|
periáñez2024the
|
arxiv-661387
|
2409.16099
|
Neuromorphic Drone Detection: an Event-RGB Multimodal Approach
|
<|reference_start|>Neuromorphic Drone Detection: an Event-RGB Multimodal Approach: In recent years, drone detection has quickly become a subject of extreme interest: the potential for fast-moving objects of contained dimensions to be used for malicious intents or even terrorist attacks has drawn attention to the necessity for precise and resilient systems for detecting and identifying such elements. While extensive literature and works exist on object detection based on RGB data, it is also critical to recognize the limits of this modality when applied to UAV detection. Detecting drones indeed poses several challenges such as fast-moving objects and scenes with a high dynamic range or, even worse, scarce illumination levels. Neuromorphic cameras, on the other hand, can retain precise and rich spatio-temporal information in situations that are challenging for RGB cameras. They are resilient to both high-speed moving objects and scarce illumination settings, while prone to suffer a rapid loss of information when the objects in the scene are static. In this context, we present a novel model for integrating both domains together, leveraging multimodal data to take advantage of the best of both worlds. To this end, we also release NeRDD (Neuromorphic-RGB Drone Detection), a novel spatio-temporally synchronized Event-RGB Drone detection dataset of more than 3.5 hours of multimodal annotated recordings.<|reference_end|>
|
arxiv
|
@article{magrini2024neuromorphic,
title={Neuromorphic Drone Detection: an Event-RGB Multimodal Approach},
author={Gabriele Magrini, Federico Becattini, Pietro Pala, Alberto Del Bimbo,
Antonio Porta},
journal={arXiv preprint arXiv:2409.16099},
year={2024},
archivePrefix={arXiv},
eprint={2409.16099},
primaryClass={cs.CV cs.AI}
}
|
magrini2024neuromorphic
|
arxiv-661388
|
2409.16106
|
Scenario of Use Scheme: Threat Model Specification for Speaker Privacy Protection in the Medical Domain
|
<|reference_start|>Scenario of Use Scheme: Threat Model Specification for Speaker Privacy Protection in the Medical Domain: Speech recordings are being more frequently used to detect and monitor disease, leading to privacy concerns. Beyond cryptography, protection of speech can be addressed by approaches, such as perturbation, disentanglement, and re-synthesis, that eliminate sensitive information of the speaker, leaving the information necessary for medical analysis purposes. In order for such privacy protective approaches to be developed, clear and systematic specifications of assumptions concerning medical settings and the needs of medical professionals are necessary. In this paper, we propose a Scenario of Use Scheme that incorporates an Attacker Model, which characterizes the adversary against whom the speaker's privacy must be defended, and a Protector Model, which specifies the defense. We discuss the connection of the scheme with previous work on speech privacy. Finally, we present a concrete example of a specified Scenario of Use and a set of experiments about protecting speaker data against gender inference attacks while maintaining utility for Parkinson's detection.<|reference_end|>
|
arxiv
|
@article{rahman2024scenario,
title={Scenario of Use Scheme: Threat Model Specification for Speaker Privacy
Protection in the Medical Domain},
author={Mehtab Ur Rahman, Martha Larson, Louis ten Bosch, Cristian
Tejedor-García},
journal={Pages: 21-25, Proc. 4th Symposium on Security and Privacy in
Speech Communication (SPSC) at Interspeech 2024},
year={2024},
doi={10.21437/SPSC.2024-4},
archivePrefix={arXiv},
eprint={2409.16106},
primaryClass={eess.AS cs.AI cs.CR cs.SD}
}
|
rahman2024scenario
|
arxiv-661389
|
2409.16107
|
Ciphertext Malleability in Lattice-Based KEMs as a Countermeasure to Side Channel Analysis
|
<|reference_start|>Ciphertext Malleability in Lattice-Based KEMs as a Countermeasure to Side Channel Analysis: Due to developments in quantum computing, classical asymmetric cryptography is at risk of being breached. Consequently, new Post-Quantum Cryptography (PQC) primitives using lattices are studied. Another point of scrutiny is the resilience of these new primitives to Side Channel Analysis (SCA), where an attacker can study physical leakages. In this work we discuss an SCA vulnerability due to the ciphertext malleability of some PQC primitives exposed by a work from Ravi et al. We propose a novel countermeasure to this vulnerability exploiting the same ciphertext malleability and discuss its practical application to several PQC primitives. We also extend the seminal work of Ravi et al. by detailing their attack on the different security levels of a post-quantum Key Encapsulation Mechanism (KEM), namely FrodoKEM.<|reference_end|>
|
arxiv
|
@article{berthet2024ciphertext,
title={Ciphertext Malleability in Lattice-Based KEMs as a Countermeasure to
Side Channel Analysis},
author={Pierre-Augustin Berthet},
journal={arXiv preprint arXiv:2409.16107},
year={2024},
archivePrefix={arXiv},
eprint={2409.16107},
primaryClass={cs.CR}
}
|
berthet2024ciphertext
|
arxiv-661390
|
2409.16110
|
Wind lulls and slews; consequences for the stability of future UK electricity systems
|
<|reference_start|>Wind lulls and slews; consequences for the stability of future UK electricity systems: As the United Kingdom wind fleet increases in size, wind lulls and slews will increasingly challenge the stability of its electricity system. The paper describes the use of models based on real time records and including solar slews, to investigate the most extreme wind variations likely to be encountered in future, enabling strategies to be devised to mitigate them. Wind lulls are surprisingly frequent, occasionally lasting a week or more, and are always likely to be beyond the capabilities of stored or imported electrical energy to mitigate them. The models indicate that there will be a continuing need for gas powered generation to mitigate wind lulls. Currently, Combined Cycle Gas Turbines (CCGTs) provide most of the dispatchable generation. However, CCGTs are not sufficiently fast acting to cope with the wind and solar slews anticipated in future. The paper suggests that a range of already proven fast-acting sources of dispatchable generation, including Open Cycle Gas Turbines (OCGTs), Internal Combustion Gas-Fired Reciprocating engines (ICGRs) and stored electrical energy systems, should be capable of coping with the largest wind and solar slews likely to be encountered up to the year 2035. Examples are given of the recent introduction of these fast-acting sources of generation which, it is suggested, will progressively replace CCGTs as the wind and solar fleets increase in size. Moreover, we see the pattern of recent investments, summarised in the paper, as a good indication of likely future investments, with OCGT investments mainly serving the 440 kV grid, and ICGRs and stored electrical energy more local networks.<|reference_end|>
|
arxiv
|
@article{stephens2024wind,
title={Wind lulls and slews; consequences for the stability of future UK
electricity systems},
author={Anthony D Stephens, David R Walwyn},
journal={arXiv preprint arXiv:2409.16110},
year={2024},
archivePrefix={arXiv},
eprint={2409.16110},
primaryClass={eess.SY cs.SY}
}
|
stephens2024wind
|
arxiv-661391
|
2409.16111
|
CloudTrack: Scalable UAV Tracking with Cloud Semantics
|
<|reference_start|>CloudTrack: Scalable UAV Tracking with Cloud Semantics: Nowadays, unmanned aerial vehicles (UAVs) are commonly used in search and rescue scenarios to gather information in the search area. The automatic identification of the person searched for in aerial footage could increase the autonomy of such systems, reduce the search time, and thus increase the missed person's chances of survival. In this paper, we present a novel approach to perform semantically conditioned open vocabulary object tracking that is specifically designed to cope with the limitations of UAV hardware. Our approach has several advantages. It can run with verbal descriptions of the missing person, e.g., the color of the shirt, it does not require dedicated training to execute the mission and can efficiently track a potentially moving person. Our experimental results demonstrate the versatility and efficacy of our approach.<|reference_end|>
|
arxiv
|
@article{blei2024cloudtrack:,
title={CloudTrack: Scalable UAV Tracking with Cloud Semantics},
author={Yannik Blei, Michael Krawez, Nisarga Nilavadi, Tanja Katharina Kaiser
and Wolfram Burgard},
journal={arXiv preprint arXiv:2409.16111},
year={2024},
archivePrefix={arXiv},
eprint={2409.16111},
primaryClass={cs.RO cs.CV}
}
|
blei2024cloudtrack:
|
arxiv-661392
|
2409.16112
|
Self-attention as an attractor network: transient memories without backpropagation
|
<|reference_start|>Self-attention as an attractor network: transient memories without backpropagation: Transformers are one of the most successful architectures of modern neural networks. At their core there is the so-called attention mechanism, which recently interested the physics community as it can be written as the derivative of an energy function in certain cases: while it is possible to write the cross-attention layer as a modern Hopfield network, the same is not possible for the self-attention, which is used in the GPT architectures and other autoregressive models. In this work we show that it is possible to obtain the self-attention layer as the derivative of local energy terms, which resemble a pseudo-likelihood. We leverage the analogy with pseudo-likelihood to design a recurrent model that can be trained without backpropagation: the dynamics shows transient states that are strongly correlated with both train and test examples. Overall we present a novel framework to interpret self-attention as an attractor network, potentially paving the way for new theoretical approaches inspired from physics to understand transformers.<|reference_end|>
|
arxiv
|
@article{d'amico2024self-attention,
title={Self-attention as an attractor network: transient memories without
backpropagation},
author={Francesco D'Amico, Matteo Negri},
journal={arXiv preprint arXiv:2409.16112},
year={2024},
archivePrefix={arXiv},
eprint={2409.16112},
primaryClass={cs.LG cond-mat.dis-nn}
}
|
d'amico2024self-attention
|
arxiv-661393
|
2409.16115
|
Mean Age of Information in Partial Offloading Mobile Edge Computing Networks
|
<|reference_start|>Mean Age of Information in Partial Offloading Mobile Edge Computing Networks: The age of information (AoI) performance analysis is essential for evaluating the information freshness in large-scale mobile edge computing (MEC) networks. This work presents the first analysis of the mean AoI (MAoI) performance of large-scale partial offloading MEC networks. Firstly, we derive and validate the closed-form expressions of MAoI by using queueing theory and stochastic geometry. Based on these expressions, we analyse the effects of computing offloading ratio (COR) and task generation rate (TGR) on the MAoI performance and compare the MAoI performance under the local computing, remote computing, and partial offloading schemes. The results show that by jointly optimising the COR and TGR, the partial offloading scheme outperforms the local and remote computing schemes in terms of the MAoI, which can be improved by up to 51% and 61%, respectively. This encourages the MEC networks to adopt the partial offloading scheme to improve the MAoI performance.<|reference_end|>
|
arxiv
|
@article{dong2024mean,
title={Mean Age of Information in Partial Offloading Mobile Edge Computing
Networks},
author={Ying Dong, Hang Xiao, Haonan Hu, Jiliang Zhang, Qianbin Chen, Jie
Zhang},
journal={arXiv preprint arXiv:2409.16115},
year={2024},
archivePrefix={arXiv},
eprint={2409.16115},
primaryClass={eess.SY cs.SY}
}
|
dong2024mean
|
arxiv-661394
|
2409.16117
|
Generative Speech Foundation Model Pretraining for High-Quality Speech Extraction and Restoration
|
<|reference_start|>Generative Speech Foundation Model Pretraining for High-Quality Speech Extraction and Restoration: This paper proposes a generative pretraining foundation model for high-quality speech restoration tasks. By directly operating on complex-valued short-time Fourier transform coefficients, our model does not rely on any vocoders for time-domain signal reconstruction. As a result, our model simplifies the synthesis process and removes the quality upper-bound introduced by any mel-spectrogram vocoder compared to prior work SpeechFlow. The proposed method is evaluated on multiple speech restoration tasks, including speech denoising, bandwidth extension, codec artifact removal, and target speaker extraction. In all scenarios, finetuning our pretrained model results in superior performance over strong baselines. Notably, in the target speaker extraction task, our model outperforms existing systems, including those leveraging SSL-pretrained encoders like WavLM. The code and the pretrained checkpoints are publicly available in the NVIDIA NeMo framework.<|reference_end|>
|
arxiv
|
@article{ku2024generative,
title={Generative Speech Foundation Model Pretraining for High-Quality Speech
Extraction and Restoration},
author={Pin-Jui Ku, Alexander H. Liu, Roman Korostik, Sung-Feng Huang, Szu-Wei
Fu and Ante Juki\'c},
journal={arXiv preprint arXiv:2409.16117},
year={2024},
archivePrefix={arXiv},
eprint={2409.16117},
primaryClass={eess.AS cs.SD}
}
|
ku2024generative
|
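For the entry above (arXiv:2409.16117): a minimal round trip through a complex-valued STFT, illustrating why a model that predicts complex STFT coefficients needs no mel-spectrogram vocoder — the waveform is recovered directly by the inverse STFT. The sine-wave input is a placeholder; this is not the paper's model or NeMo code.

```python
# Sketch of the "no vocoder" point: process complex STFT coefficients, then
# reconstruct the waveform with the inverse STFT alone.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t)          # stand-in for a speech signal

f, frames, Z = stft(x, fs=fs, nperseg=512)     # Z is complex-valued (magnitude and phase)
# ... a restoration model would map degraded Z to enhanced Z here ...
_, x_rec = istft(Z, fs=fs, nperseg=512)        # direct reconstruction, no vocoder

print(np.max(np.abs(x - x_rec[: len(x)])))     # ~1e-15: the round trip is lossless
```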
arxiv-661395
|
2409.16118
|
TabEBM: A Tabular Data Augmentation Method with Distinct Class-Specific Energy-Based Models
|
<|reference_start|>TabEBM: A Tabular Data Augmentation Method with Distinct Class-Specific Energy-Based Models: Data collection is often difficult in critical fields such as medicine, physics, and chemistry. As a result, classification methods usually perform poorly with these small datasets, leading to weak predictive performance. Increasing the training set with additional synthetic data, similar to data augmentation in images, is commonly believed to improve downstream classification performance. However, current tabular generative methods that learn either the joint distribution $ p(\mathbf{x}, y) $ or the class-conditional distribution $ p(\mathbf{x} \mid y) $ often overfit on small datasets, resulting in poor-quality synthetic data, usually worsening classification performance compared to using real data alone. To solve these challenges, we introduce TabEBM, a novel class-conditional generative method using Energy-Based Models (EBMs). Unlike existing methods that use a shared model to approximate all class-conditional densities, our key innovation is to create distinct EBM generative models for each class, each modelling its class-specific data distribution individually. This approach creates robust energy landscapes, even in ambiguous class distributions. Our experiments show that TabEBM generates synthetic data with higher quality and better statistical fidelity than existing methods. When used for data augmentation, our synthetic data consistently improves the classification performance across diverse datasets of various sizes, especially small ones.<|reference_end|>
|
arxiv
|
@article{margeloiu2024tabebm:,
title={TabEBM: A Tabular Data Augmentation Method with Distinct Class-Specific
Energy-Based Models},
author={Andrei Margeloiu, Xiangjian Jiang, Nikola Simidjievski, Mateja Jamnik},
journal={arXiv preprint arXiv:2409.16118},
year={2024},
archivePrefix={arXiv},
eprint={2409.16118},
primaryClass={cs.LG}
}
|
margeloiu2024tabebm:
|
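For the entry above (arXiv:2409.16118): a toy sketch of the per-class idea only — one energy-based generator fitted per class, sampled with Langevin dynamics. The quadratic (Gaussian) energy and all hyperparameters here are assumptions and differ from TabEBM's actual models and training.

```python
# Per-class energy models for class-conditional synthetic data (toy version).
import numpy as np

rng = np.random.default_rng(1)

class ToyClassEBM:
    def __init__(self, X):                      # X: (n_samples, n_features) of one class
        self.mu = X.mean(axis=0)
        self.var = X.var(axis=0) + 1e-3
    def grad_energy(self, x):                   # gradient of a quadratic energy
        return (x - self.mu) / self.var
    def sample(self, n, steps=200, step=1e-2):  # unadjusted Langevin dynamics
        x = rng.normal(size=(n, len(self.mu)))
        for _ in range(steps):
            x += -step * self.grad_energy(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)
        return x

# One distinct generative model per class, as advocated in the abstract.
X0 = rng.normal(loc=0.0, size=(30, 4))
X1 = rng.normal(loc=3.0, size=(30, 4))
ebms = {0: ToyClassEBM(X0), 1: ToyClassEBM(X1)}
synthetic = {c: m.sample(100) for c, m in ebms.items()}   # augmentation data per class
print({c: s.mean().round(2) for c, s in synthetic.items()})
```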
arxiv-661396
|
2409.16119
|
Stochastic Minimum Spanning Trees with a Single Sample
|
<|reference_start|>Stochastic Minimum Spanning Trees with a Single Sample: We consider the minimum spanning tree problem in a setting where the edge weights are stochastic from unknown distributions, and the only available information is a single sample of each edge's weight distribution. In this setting, we analyze the expected performance of the algorithm that outputs a minimum spanning tree for the sampled weights. We compare to the optimal solution when the distributions are known. For every graph with weights that are exponentially distributed, we show that the sampling based algorithm has a performance guarantee that is equal to the size of the largest bond in the graph. Furthermore, we show that for every graph this performance guarantee is tight. The proof is based on two separate inductive arguments via edge contractions, which can be interpreted as reducing the spanning tree problem to a stochastic item selection problem. We also generalize these results to arbitrary matroids, where the performance guarantee is equal to the size of the largest co-circuit of the matroid.<|reference_end|>
|
arxiv
|
@article{hoeksma2024stochastic,
title={Stochastic Minimum Spanning Trees with a Single Sample},
author={Ruben Hoeksma, Gavin Speek, and Marc Uetz},
journal={arXiv preprint arXiv:2409.16119},
year={2024},
archivePrefix={arXiv},
eprint={2409.16119},
primaryClass={cs.DS}
}
|
hoeksma2024stochastic
|
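For the entry above (arXiv:2409.16119): a sketch of the algorithm being analysed — sample each edge weight once, output the MST of the sampled weights — compared against the MST of the expected weights. Evaluating a chosen tree by its expected cost is one reading of the model, and the graph and rates are arbitrary assumptions.

```python
# Single-sample MST versus the MST of known expected weights (exponential edges).
import numpy as np
import networkx as nx

rng = np.random.default_rng(42)

# Complete graph on 5 nodes; each edge weight ~ Exp(rate), so its mean is 1/rate.
G = nx.complete_graph(5)
for u, v in G.edges:
    G.edges[u, v]["rate"] = rng.uniform(0.5, 2.0)
    G.edges[u, v]["mean_w"] = 1.0 / G.edges[u, v]["rate"]

def tree_mean_cost(G, edges):
    return sum(G.edges[e]["mean_w"] for e in edges)

# Benchmark: MST with respect to the known expected weights.
opt = tree_mean_cost(G, nx.minimum_spanning_tree(G, weight="mean_w").edges)

# Single-sample algorithm, averaged over trials: sample once, take that MST.
costs = []
for _ in range(2000):
    for u, v in G.edges:
        G.edges[u, v]["sample_w"] = rng.exponential(1.0 / G.edges[u, v]["rate"])
    T = nx.minimum_spanning_tree(G, weight="sample_w")
    costs.append(tree_mean_cost(G, T.edges))

print(f"optimal expected cost {opt:.3f}, single-sample expected cost {np.mean(costs):.3f}")
```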
arxiv-661397
|
2409.16120
|
MOSS: Enabling Code-Driven Evolution and Context Management for AI Agents
|
<|reference_start|>MOSS: Enabling Code-Driven Evolution and Context Management for AI Agents: Developing AI agents powered by large language models (LLMs) faces significant challenges in achieving true Turing completeness and adaptive, code-driven evolution. Current approaches often generate code independently of its runtime context, relying heavily on the LLM's memory, which results in inefficiencies and limits adaptability. Manual protocol development in sandbox environments further constrains the agent's autonomous adaptability. Crucially, achieving consistency in code and context across multi-turn interactions and ensuring isolation of local variables within each interaction remains an unsolved problem. We introduce MOSS (llM-oriented Operating System Simulation), a novel framework that addresses these challenges by integrating code generation with a dynamic context management system. MOSS ensures consistency and adaptability by using a mechanism that maintains the Python context across interactions, including isolation of local variables and preservation of runtime integrity. At its core, the framework employs an Inversion of Control (IoC) container in conjunction with decorators to enforce the least knowledge principle, allowing agents to focus on abstract interfaces rather than concrete implementations. This facilitates seamless integration of new tools and libraries, enables runtime instance replacement, and reduces prompt complexity, providing a "what you see is what you get" environment for the agent. Through a series of case studies, we show how this framework can enhance the efficiency and capabilities of agent development and highlight its advantages in moving towards Turing-complete agents capable of evolving through code.<|reference_end|>
|
arxiv
|
@article{zhu2024moss:,
title={MOSS: Enabling Code-Driven Evolution and Context Management for AI
Agents},
author={Ming Zhu, Yi Zhou},
journal={arXiv preprint arXiv:2409.16120},
year={2024},
archivePrefix={arXiv},
eprint={2409.16120},
primaryClass={cs.SE cs.AI cs.CL}
}
|
zhu2024moss:
|
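For the entry above (arXiv:2409.16120): a hypothetical, minimal inversion-of-control container with a registration decorator, shown only to illustrate how agents can depend on abstract interfaces while concrete tools remain replaceable at runtime. This is not MOSS's actual API; all names are invented.

```python
# Minimal IoC container: agents see only abstract interfaces (least knowledge),
# while concrete implementations are registered via a decorator and swappable.
from abc import ABC, abstractmethod

class Container:
    def __init__(self):
        self._bindings = {}
    def provides(self, interface):              # decorator: bind interface -> implementation
        def register(cls):
            self._bindings[interface] = cls()
            return cls
        return register
    def get(self, interface):
        return self._bindings[interface]

container = Container()

class Search(ABC):                               # abstract interface the agent sees
    @abstractmethod
    def run(self, query: str) -> str: ...

@container.provides(Search)
class FakeSearch(Search):                        # concrete tool, replaceable at runtime
    def run(self, query: str) -> str:
        return f"stub results for {query!r}"

def agent_step(container: Container, query: str) -> str:
    # Agent code references only the abstract interface, not the implementation.
    return container.get(Search).run(query)

print(agent_step(container, "weather in Paris"))
```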
arxiv-661398
|
2409.16122
|
RIS-aided Trajectory Optimization in Layered Urban Air Mobility
|
<|reference_start|>RIS-aided Trajectory Optimization in Layered Urban Air Mobility: Urban Air Mobility (UAM) relies on developing aerospace industries, where safe aviation and efficient communication are critical features of aircraft. However, it is challenging for aircraft to sustain efficient air-ground communication in urban environments, and without continuous air-ground communication, aircraft may experience course deviations and safety accidents. To address these problems, a reconfigurable intelligent surface (RIS)-aided trajectory optimization scheme is proposed that enables efficient air-ground communication and safe aviation in UAM with a layered airspace structure. This paper first devises a dual-plane RIS communication scheme for layered airspace, which fully exploits the omnidirectional and directional signal attributes to reduce the transmission delay of air-ground communication. Based on the dual-plane RIS configuration, we jointly develop intra- and inter-layer trajectory schemes to optimize communication and aviation safety. In the intra-layer trajectory optimization, we propose a dual-time-scale flight scheme to improve communication capacity and horizontal flight safety. Meanwhile, in the inter-layer trajectory optimization, we propose a safe layer-switching method to ensure collision avoidance during vertical flight. Compared with the benchmarks in the layered airspace, the communication load of the proposed scheme can be improved by 40% and the time to restore safe separation can be reduced by 66%.<|reference_end|>
|
arxiv
|
@article{xiong2024ris-aided,
title={RIS-aided Trajectory Optimization in Layered Urban Air Mobility},
author={Kai Xiong, Supeng Leng, Liyuan Chen, Dapei Zhang, Chongwen Huang, Chau
Yuen},
journal={arXiv preprint arXiv:2409.16122},
year={2024},
archivePrefix={arXiv},
eprint={2409.16122},
primaryClass={cs.CE}
}
|
xiong2024ris-aided
|
arxiv-661399
|
2409.16125
|
Analyzing Probabilistic Methods for Evaluating Agent Capabilities
|
<|reference_start|>Analyzing Probabilistic Methods for Evaluating Agent Capabilities: To mitigate risks from AI systems, we need to assess their capabilities accurately. This is especially difficult in cases where capabilities are only rarely displayed. Phuong et al. propose two methods that aim to obtain better estimates of the probability of an AI agent successfully completing a given task. The milestone method decomposes tasks into subtasks, aiming to improve overall success rate estimation, while the expert best-of-N method leverages human guidance as a proxy for the model's independent performance. Our analysis of these methods as Monte Carlo estimators reveals that while both effectively reduce variance compared to naive Monte Carlo sampling, they also introduce bias. Experimental results demonstrate that the milestone method underestimates true solve rates for many real-world tasks due to its constraining assumptions. The expert best-of-N method exhibits even more severe underestimation across all tasks, attributed to an inherently flawed re-weighting factor. To enhance the accuracy of capability estimates of AI agents on difficult tasks, we suggest future work should leverage the rich literature on Monte Carlo Estimators.<|reference_end|>
|
arxiv
|
@article{højmark2024analyzing,
title={Analyzing Probabilistic Methods for Evaluating Agent Capabilities},
author={Axel H{\o}jmark and Govind Pimpale and Arjun Panickssery and Marius
Hobbhahn and J\'er\'emy Scheurer},
journal={arXiv preprint arXiv:2409.16125},
year={2024},
archivePrefix={arXiv},
eprint={2409.16125},
primaryClass={cs.AI}
}
|
højmark2024analyzing
|
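For the entry above (arXiv:2409.16125): a toy simulation of the two estimators discussed — naive Monte Carlo on the full task versus a milestone-style product of per-subtask estimates — on a synthetic task with correlated subtasks whose true solve rate is known. The task model and numbers are assumptions, not those of Phuong et al.

```python
# Compare bias and spread of a naive Monte Carlo estimator and a milestone-style
# estimator (product of per-subtask success rates) on a synthetic task.
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: 3 subtasks whose successes share a per-episode "skill" draw,
# so they are positively correlated rather than independent.
def run_task(n):
    skill = rng.normal(size=n)
    sub = np.stack([rng.random(n) < 1 / (1 + np.exp(-(skill + b))) for b in (0.0, -0.5, -1.0)])
    return sub            # shape (3, n) booleans

true_rate = run_task(200_000).all(axis=0).mean()

naive, milestone = [], []
for _ in range(500):
    sub = run_task(30)                              # a small evaluation budget
    naive.append(sub.all(axis=0).mean())            # plain Monte Carlo on the full task
    milestone.append(np.prod(sub.mean(axis=1)))     # product of per-subtask estimates

print(f"true={true_rate:.3f}  naive mean={np.mean(naive):.3f} sd={np.std(naive):.3f}")
print(f"              milestone mean={np.mean(milestone):.3f} sd={np.std(milestone):.3f}")
```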
arxiv-661400
|
2409.16126
|
VisioPhysioENet: Multimodal Engagement Detection using Visual and Physiological Signals
|
<|reference_start|>VisioPhysioENet: Multimodal Engagement Detection using Visual and Physiological Signals: This paper presents VisioPhysioENet, a novel multimodal system that leverages visual cues and physiological signals to detect learner engagement. It employs a two-level approach for visual feature extraction using the Dlib library for facial landmark extraction and the OpenCV library for further estimations. This is complemented by extracting physiological signals using the plane-orthogonal-to-skin method to assess cardiovascular activity. These features are integrated using advanced machine learning classifiers, enhancing the detection of various engagement levels. We rigorously evaluate VisioPhysioENet on the DAiSEE dataset, where it achieves an accuracy of 63.09%, demonstrating a superior ability to discern various levels of engagement compared to existing methodologies. The proposed system's code can be accessed at https://github.com/MIntelligence-Group/VisioPhysioENet.<|reference_end|>
|
arxiv
|
@article{singh2024visiophysioenet:,
title={VisioPhysioENet: Multimodal Engagement Detection using Visual and
Physiological Signals},
author={Alakhsimar Singh, Nischay Verma, Kanav Goyal, Amritpal Singh, Puneet
Kumar and Xiaobai Li},
journal={arXiv preprint arXiv:2409.16126},
year={2024},
archivePrefix={arXiv},
eprint={2409.16126},
primaryClass={cs.CV}
}
|
singh2024visiophysioenet:
|
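For the entry above (arXiv:2409.16126): a sketch of the visual front end named in the abstract — per-frame face detection and 68-point landmark extraction with OpenCV and dlib. The landmark-model and video paths are placeholders, and the engagement classifiers and the POS-based physiological branch are not reproduced.

```python
# Per-frame facial landmark extraction with OpenCV + dlib (visual branch only).
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # placeholder path

cap = cv2.VideoCapture("learner_video.mp4")    # placeholder input video
landmarks_per_frame = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if faces:
        shape = predictor(gray, faces[0])
        pts = np.array([(p.x, p.y) for p in shape.parts()])   # (68, 2) landmark array
        landmarks_per_frame.append(pts)
cap.release()

print(f"extracted landmarks for {len(landmarks_per_frame)} frames")
```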