Dataset schema (column name: type, observed value-length range):
corpus_id: string, lengths 7 to 12
paper_id: string, lengths 9 to 16
title: string, lengths 1 to 261
abstract: string, lengths 70 to 4.02k
source: string, 1 class
bibtex: string, lengths 208 to 20.9k
citation_key: string, lengths 6 to 100
arxiv-667001
2410.05884
A Robust Quadruped Robot with Twisting Waist for Flexible Motions
<|reference_start|>A Robust Quadruped Robot with Twisting Waist for Flexible Motions: The waist plays a crucial role in the agile movement of many animals in nature. It provides the torso with additional degrees of freedom and flexibility, inspiring researchers to incorporate this biological feature into robotic structures to enhance robot locomotion. This paper presents a cost-effective and low-complexity waist mechanism integrated into the structure of the open-source robot solo8, adding a new degree of freedom (DOF) to its torso. We refer to this novel robot as solo9. Additionally, we propose a full-body control method for the waist-equipped quadruped robot based on generative adversarial imitation learning (GAIL). During training, the discriminator is used as input for iterative optimization of the policy and dataset, enabling solo9 to achieve flexible steering maneuvers across various gaits. Extensive tests of solo9's steering capabilities, terrain adaptability, and robustness are conducted in both simulation and real-world scenarios, with detailed comparisons to solo8 and solo12, demonstrating the effectiveness of the control algorithm and the advantages of the waist mechanism.<|reference_end|>
arxiv
@article{qian2024a, title={A Robust Quadruped Robot with Twisting Waist for Flexible Motions}, author={Quancheng Qian, Xiaoyi Wei, Zonghao Zhang, Jiaxin Tu, Yueqi Zhang, Taixian Hou, Xiaofei Gao, Peng Zhai, Lihua Zhang}, journal={arXiv preprint arXiv:2410.05884}, year={2024}, archivePrefix={arXiv}, eprint={2410.05884}, primaryClass={cs.RO} }
qian2024a
arxiv-667002
2410.05889
Deep learning-based fault identification in condition monitoring
<|reference_start|>Deep learning-based fault identification in condition monitoring: Vibration-based condition monitoring techniques are commonly used to identify faults in rolling element bearings. Accuracy and speed of fault detection procedures are critical performance measures in condition monitoring. Delay is especially important in remote condition monitoring and time-sensitive industrial applications. While most existing methods focus on accuracy, little attention has been given to the inference time in the fault identification process. In this paper, we address this gap by presenting a Convolutional Neural Network (CNN) based approach for real-time fault identification in rolling element bearings. We encode raw vibration signals into two-dimensional images using various encoding methods and use these with a CNN to classify several categories of bearing fault types and sizes. We analyse the interplay between fault identification accuracy and processing time. For training and evaluation, we use the CWRU bearing failure dataset.<|reference_end|>
arxiv
@article{dhungana2024deep, title={Deep learning-based fault identification in condition monitoring}, author={Hariom Dhungana, Suresh Kumar Mukhiya, Pragya Dhungana, and Benjamin Karic}, journal={arXiv preprint arXiv:2410.05889}, year={2024}, archivePrefix={arXiv}, eprint={2410.05889}, primaryClass={cs.LG cs.AI} }
dhungana2024deep
arxiv-667003
2410.05890
Ordering-Based Causal Discovery for Linear and Nonlinear Relations
<|reference_start|>Ordering-Based Causal Discovery for Linear and Nonlinear Relations: Identifying causal relations from purely observational data typically requires additional assumptions on relations and/or noise. Most current methods restrict their analysis to datasets that are assumed to have pure linear or nonlinear relations, which is often not reflective of real-world datasets that contain a combination of both. This paper presents CaPS, an ordering-based causal discovery algorithm that effectively handles linear and nonlinear relations. CaPS introduces a novel identification criterion for topological ordering and incorporates the concept of "parent score" during the post-processing optimization stage. These scores quantify the strength of the average causal effect, helping to accelerate the pruning process and correct inaccurate predictions in the pruning step. Experimental results demonstrate that our proposed solutions outperform state-of-the-art baselines on synthetic data with varying ratios of linear and nonlinear relations. The results obtained from real-world data also support the competitiveness of CaPS. Code and datasets are available at https://github.com/E2real/CaPS.<|reference_end|>
arxiv
@article{xu2024ordering-based, title={Ordering-Based Causal Discovery for Linear and Nonlinear Relations}, author={Zhuopeng Xu, Yujie Li, Cheng Liu, Ning Gui}, journal={arXiv preprint arXiv:2410.05890}, year={2024}, archivePrefix={arXiv}, eprint={2410.05890}, primaryClass={cs.LG} }
xu2024ordering-based
arxiv-667004
2410.05892
Towards an Autonomous Surface Vehicle Prototype for Artificial Intelligence Applications of Water Quality Monitoring
<|reference_start|>Towards an Autonomous Surface Vehicle Prototype for Artificial Intelligence Applications of Water Quality Monitoring: The use of Autonomous Surface Vehicles, equipped with water quality sensors and artificial vision systems, allows for a smart and adaptive deployment in water resources environmental monitoring. This paper presents a real implementation of a vehicle prototype designed to address the use of Artificial Intelligence algorithms and enhanced sensing techniques for water quality monitoring. The vehicle is fully equipped with high-quality sensors to measure water quality parameters and water depth. Furthermore, by means of a stereo camera and deep visual models such as YOLOv5, it can also detect and locate macro-plastics in real environments. Experimental results, obtained in Lago Mayor (Sevilla), are presented as proof of the capabilities of the proposed architecture. The overall system, and the early results obtained, are expected to provide a solid example of a real platform useful for water resource monitoring, and to serve as a real case scenario for deploying Artificial Intelligence algorithms, such as path planning and artificial vision.<|reference_end|>
arxiv
@article{díaz2024towards, title={Towards an Autonomous Surface Vehicle Prototype for Artificial Intelligence Applications of Water Quality Monitoring}, author={Luis Miguel Díaz, Samuel Yanes Luis, Alejandro Mendoza Barrionuevo, Dame Seck Diop, Manuel Perales, Alejandro Casado, Sergio Toral, Daniel Gutiérrez}, journal={arXiv preprint arXiv:2410.05892}, year={2024}, archivePrefix={arXiv}, eprint={2410.05892}, primaryClass={cs.RO cs.AI cs.LG} }
díaz2024towards
arxiv-667005
2410.05894
DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning
<|reference_start|>DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning: In the realm of computational physics, an enduring topic is the numerical solutions to partial differential equations (PDEs). Recently, the attention of researchers has shifted towards Neural Operator methods, renowned for their capability to approximate "operators", i.e., mappings from functions to functions. Despite the universal approximation theorem within neural operators, ensuring error bounds often requires employing numerous Fourier layers. However, what about lightweight models? In response to this question, we introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis. To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers, enhancing their ability to handle sum-of-products structures inherent in many physical systems. Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets. Furthermore, by analyzing Fourier components' weights, we can symbolically discern the physical significance of each term. This sheds light on the opaque nature of neural networks, unveiling underlying physical principles.<|reference_end|>
arxiv
@article{song2024dimol:, title={DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning}, author={Yichen Song, Yunbo Wang, Xiaokang Yang}, journal={arXiv preprint arXiv:2410.05894}, year={2024}, archivePrefix={arXiv}, eprint={2410.05894}, primaryClass={cs.LG} }
song2024dimol:
arxiv-667006
2410.05896
Exact space-time symmetry conservation and automatic mesh refinement for classical lattice field theory
<|reference_start|>Exact space-time symmetry conservation and automatic mesh refinement for classical lattice field theory: The breaking of space-time symmetries and the non-conservation of the associated Noether charges constitutes a central artifact in lattice field theory. In prior work we have shown how to overcome this limitation for classical actions describing point particle motion, using the world-line formalism of general relativity. The key is to treat coordinate maps (from an abstract parameter space into space-time) as dynamical and dependent degrees of freedom, which remain continuous after discretization of the underlying parameter space. Here we present latest results where we construct a reparameterization invariant classical action for scalar fields, which features dynamical coordinate maps. We highlight the following achievements of our approach: 1) global space-time symmetries remain intact after discretization and the associated Noether charges remain exactly preserved 2) coordinate maps adapt to the dynamics of the scalar field leading to adaptive grid resolution guided by the symmetries.<|reference_end|>
arxiv
@article{rothkopf2024exact, title={Exact space-time symmetry conservation and automatic mesh refinement for classical lattice field theory}, author={A. Rothkopf, W. A. Horowitz and J. Nordström}, journal={arXiv preprint arXiv:2410.05896}, year={2024}, archivePrefix={arXiv}, eprint={2410.05896}, primaryClass={hep-lat cs.NA math.NA} }
rothkopf2024exact
arxiv-667007
2410.05898
Manifolds, Random Matrices and Spectral Gaps: The geometric phases of generative diffusion
<|reference_start|>Manifolds, Random Matrices and Spectral Gaps: The geometric phases of generative diffusion: In this paper, we investigate the latent geometry of generative diffusion models under the manifold hypothesis. To this purpose, we analyze the spectrum of eigenvalues (and singular values) of the Jacobian of the score function, whose discontinuities (gaps) reveal the presence and dimensionality of distinct sub-manifolds. Using a statistical physics approach, we derive the spectral distributions and formulas for the spectral gaps under several distributional assumptions and we compare these theoretical predictions with the spectra estimated from trained networks. Our analysis reveals the existence of three distinct qualitative phases during the generative process: a trivial phase; a manifold coverage phase where the diffusion process fits the distribution internal to the manifold; a consolidation phase where the score becomes orthogonal to the manifold and all particles are projected on the support of the data. This "division of labor" between different timescales provides an elegant explanation of why generative diffusion models are not affected by the manifold overfitting phenomenon that plagues likelihood-based models, since the internal distribution and the manifold geometry are produced at different time points during generation.<|reference_end|>
arxiv
@article{ventura2024manifolds, title={Manifolds, Random Matrices and Spectral Gaps: The geometric phases of generative diffusion}, author={Enrico Ventura, Beatrice Achilli, Gianluigi Silvestri, Carlo Lucibello, Luca Ambrogioni}, journal={arXiv preprint arXiv:2410.05898}, year={2024}, archivePrefix={arXiv}, eprint={2410.05898}, primaryClass={stat.ML cs.LG} }
ventura2024manifolds
arxiv-667008
2410.05899
Brain-inspired continual pre-trained learner via silent synaptic consolidation
<|reference_start|>Brain-inspired continual pre-trained learner via silent synaptic consolidation: Pre-trained models have demonstrated impressive generalization capabilities, yet they remain vulnerable to catastrophic forgetting when incrementally trained on new tasks. Existing architecture-based strategies encounter two primary challenges: 1) Integrating a pre-trained network with a trainable sub-network complicates the delicate balance between learning plasticity and memory stability across evolving tasks during learning. 2) The absence of robust interconnections between pre-trained networks and various sub-networks limits the effective retrieval of pertinent information during inference. In this study, we introduce the Artsy, inspired by the activation mechanisms of silent synapses via spike-timing-dependent plasticity observed in mature brains, to enhance the continual learning capabilities of pre-trained models. The Artsy integrates two key components: During training, the Artsy mimics mature brain dynamics by maintaining memory stability for previously learned knowledge within the pre-trained network while simultaneously promoting learning plasticity in task-specific sub-networks. During inference, artificial silent and functional synapses are utilized to establish precise connections between the pre-synaptic neurons in the pre-trained network and the post-synaptic neurons in the sub-networks, facilitated through synaptic consolidation, thereby enabling effective extraction of relevant information from test samples. Comprehensive experimental evaluations reveal that our model significantly outperforms conventional methods on class-incremental learning tasks, while also providing enhanced biological interpretability for architecture-based approaches. Moreover, we propose that the Artsy offers a promising avenue for simulating biological synaptic mechanisms, potentially advancing our understanding of neural plasticity in both artificial and biological systems.<|reference_end|>
arxiv
@article{ran2024brain-inspired, title={Brain-inspired continual pre-trained learner via silent synaptic consolidation}, author={Xuming Ran, Juntao Yao, Yusong Wang, Mingkun Xu, Dianbo Liu}, journal={arXiv preprint arXiv:2410.05899}, year={2024}, archivePrefix={arXiv}, eprint={2410.05899}, primaryClass={cs.LG} }
ran2024brain-inspired
arxiv-667009
2410.05900
MTFL: Multi-Timescale Feature Learning for Weakly-Supervised Anomaly Detection in Surveillance Videos
<|reference_start|>MTFL: Multi-Timescale Feature Learning for Weakly-Supervised Anomaly Detection in Surveillance Videos: Detection of anomaly events is relevant for public safety and requires a combination of fine-grained motion information and contextual events at variable time-scales. To this end, we propose a Multi-Timescale Feature Learning (MTFL) method to enhance the representation of anomaly features. Short, medium, and long temporal tubelets are employed to extract spatio-temporal video features using a Video Swin Transformer. Experimental results demonstrate that MTFL outperforms state-of-the-art methods on the UCF-Crime dataset, achieving an anomaly detection performance of 89.78% AUC. Moreover, it performs complementarily to the SotA, with 95.32% AUC on ShanghaiTech and 84.57% AP on the XD-Violence dataset. Furthermore, we generate an extended version of UCF-Crime, the Video Anomaly Detection Dataset (VADD), for development and evaluation on a wider range of anomalies; it involves 2,591 videos in 18 classes with extensive coverage of realistic anomalies.<|reference_end|>
arxiv
@article{zhang2024mtfl:, title={MTFL: Multi-Timescale Feature Learning for Weakly-Supervised Anomaly Detection in Surveillance Videos}, author={Yiling Zhang, Erkut Akdag, Egor Bondarev, Peter H. N. De With}, journal={arXiv preprint arXiv:2410.05900}, year={2024}, archivePrefix={arXiv}, eprint={2410.05900}, primaryClass={cs.CV} }
zhang2024mtfl:
arxiv-667010
2410.05901
On implicit time methods and discontinuous Galerkin space reconstruction for conservation laws
<|reference_start|>On implicit time methods and discontinuous Galerkin space reconstruction for conservation laws: In many physical scenarios governed by systems of hyperbolic conservation laws, stiffness necessitates small time-steps due to the stringent CFL stability criterion. Implicit time integration schemes leverage superior stability properties enabling the selection of time-steps based solely on accuracy requirements, thereby bypassing the need for minute time-steps. In this work, we consider high-order diagonally implicit Runge-Kutta time integration methods coupled with discontinuous Galerkin space reconstructions for stiff hyperbolic systems. We analyze dispersion-diffusion properties to select the best combination of the space-time discretization for high Courant numbers, in terms of controlling spurious oscillations. Working with high-order methods, however, requires introducing local space limiters, which make the whole implicit scheme highly nonlinear. Therefore, we propose to use appropriate space limiters that can be precomputed on a first-order predictor of the solution. This approach follows the methodology proposed by Puppo et al. (Commun. Comput. Phys., 2024) for high-order finite volume schemes. Numerical experiments involve both scalar equations and systems.<|reference_end|>
arxiv
@article{briani2024on, title={On implicit time methods and discontinuous Galerkin space reconstruction for conservation laws}, author={Maya Briani, Gabriella Puppo, Giuseppe Visconti}, journal={arXiv preprint arXiv:2410.05901}, year={2024}, number={Roma01.Math.NA}, archivePrefix={arXiv}, eprint={2410.05901}, primaryClass={math.NA cs.NA} }
briani2024on
arxiv-667011
2410.05902
Mini-Batch Kernel $k$-means
<|reference_start|>Mini-Batch Kernel $k$-means: We present the first mini-batch kernel $k$-means algorithm, offering an order of magnitude improvement in running time compared to the full batch algorithm. A single iteration of our algorithm takes $\widetilde{O}(kb^2)$ time, significantly faster than the $O(n^2)$ time required by the full batch kernel $k$-means, where $n$ is the dataset size and $b$ is the batch size. Extensive experiments demonstrate that our algorithm consistently achieves a 10-100x speedup with minimal loss in quality, addressing the slow runtime that has limited kernel $k$-means adoption in practice. We further complement these results with a theoretical analysis under an early stopping condition, proving that with a batch size of $\widetilde{\Omega}(\max \{\gamma^{4}, \gamma^{2}\} \cdot \epsilon^{-2})$, the algorithm terminates in $O(\gamma^2/\epsilon)$ iterations with high probability, where $\gamma$ bounds the norm of points in feature space and $\epsilon$ is a termination threshold. Our analysis holds for any reasonable center initialization, and when using $k$-means++ initialization, the algorithm achieves an approximation ratio of $O(\log k)$ in expectation. For normalized kernels, such as Gaussian or Laplacian, it holds that $\gamma=1$. Taking $\epsilon = O(1)$ and $b=\Theta(\log n)$, the algorithm terminates in $O(1)$ iterations, with each iteration running in $\widetilde{O}(k)$ time.<|reference_end|>
arxiv
@article{jourdan2024mini-batch, title={Mini-Batch Kernel $k$-means}, author={Ben Jourdan, Gregory Schwartzman}, journal={arXiv preprint arXiv:2410.05902}, year={2024}, archivePrefix={arXiv}, eprint={2410.05902}, primaryClass={cs.LG cs.AI cs.DS} }
jourdan2024mini-batch
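The $\widetilde{O}(kb^2)$ per-iteration cost quoted in the abstract above comes from restricting kernel evaluations to the sampled batch and the stored cluster members. As a hedged illustration of the general idea (not the authors' algorithm; rbf_kernel, minibatch_kernel_kmeans, and the naive growing-cluster representation are all assumptions made here for brevity), one mini-batch iteration can be sketched as:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def minibatch_kernel_kmeans(X, k, b, iters, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    # Seed each cluster with one random point; the paper analyzes k-means++.
    clusters = [[int(i)] for i in rng.choice(n, size=k, replace=False)]
    for _ in range(iters):
        for x in rng.choice(n, size=b, replace=False):
            # Kernel distance from x to a cluster's implicit centroid:
            # K(x,x) - 2*mean_i K(x,c_i) + mean_{i,j} K(c_i,c_j).
            kxx = rbf_kernel(X[[x]], X[[x]])[0, 0]
            d = [kxx - 2 * rbf_kernel(X[[x]], X[c]).mean()
                 + rbf_kernel(X[c], X[c]).mean() for c in clusters]
            clusters[int(np.argmin(d))].append(int(x))
    return clusters
```

A faithful implementation would also cap the per-cluster sample so each iteration touches only O(b) points per cluster, which is presumably where the stated $\widetilde{O}(kb^2)$ bound comes from.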
arxiv-667012
2410.05903
Automatic Summarization of Long Documents
<|reference_start|>Automatic Summarization of Long Documents: A vast amount of textual data is added to the internet daily, making utilization and interpretation of such data difficult and cumbersome. As a result, automatic text summarization is crucial for extracting relevant information, saving precious reading time. Although many transformer-based models excel in summarization, they are constrained by their input size, preventing them from processing texts longer than their context size. This study introduces three novel algorithms that allow any LLM to efficiently overcome its input size limitation, effectively utilizing its full potential without any architectural modifications. We test our algorithms on texts with more than 70,000 words, and our experiments show a significant increase in BERTScore with competitive ROUGE scores.<|reference_end|>
arxiv
@article{chhibbar2024automatic, title={Automatic Summarization of Long Documents}, author={Naman Chhibbar, Jugal Kalita}, journal={arXiv preprint arXiv:2410.05903}, year={2024}, archivePrefix={arXiv}, eprint={2410.05903}, primaryClass={cs.CL cs.AI} }
chhibbar2024automatic
arxiv-667013
2410.05905
MedUniSeg: 2D and 3D Medical Image Segmentation via a Prompt-driven Universal Model
<|reference_start|>MedUniSeg: 2D and 3D Medical Image Segmentation via a Prompt-driven Universal Model: Universal segmentation models offer significant potential in addressing a wide range of tasks by effectively leveraging discrete annotations. As the scope of tasks and modalities expands, it becomes increasingly important to generate and strategically position task- and modal-specific priors within the universal model. However, existing universal models often overlook the correlations between different priors, and the optimal placement and frequency of these priors remain underexplored. In this paper, we introduce MedUniSeg, a prompt-driven universal segmentation model designed for 2D and 3D multi-task segmentation across diverse modalities and domains. MedUniSeg employs multiple modal-specific prompts alongside a universal task prompt to accurately characterize the modalities and tasks. To generate the related priors, we propose the modal map (MMap) and the fusion and selection (FUSE) modules, which transform modal and task prompts into corresponding priors. These modal and task priors are systematically introduced at the start and end of the encoding process. We evaluate MedUniSeg on a comprehensive multi-modal upstream dataset consisting of 17 sub-datasets. The results demonstrate that MedUniSeg achieves superior multi-task segmentation performance, attaining a 1.2% improvement in the mean Dice score across the 17 upstream tasks compared to nnUNet baselines, while using less than 1/10 of the parameters. For tasks that underperform during the initial multi-task joint training, we freeze MedUniSeg and introduce new modules to re-learn these tasks. This approach yields an enhanced version, MedUniSeg*, which consistently outperforms MedUniSeg across all tasks. Moreover, MedUniSeg surpasses advanced self-supervised and supervised pre-trained models on six downstream tasks, establishing itself as a high-quality, highly generalizable pre-trained segmentation model.<|reference_end|>
arxiv
@article{ye2024meduniseg:, title={MedUniSeg: 2D and 3D Medical Image Segmentation via a Prompt-driven Universal Model}, author={Yiwen Ye, Ziyang Chen, Jianpeng Zhang, Yutong Xie, Yong Xia}, journal={arXiv preprint arXiv:2410.05905}, year={2024}, archivePrefix={arXiv}, eprint={2410.05905}, primaryClass={cs.CV} }
ye2024meduniseg:
arxiv-667014
2410.05910
Digital Labor and the Inconspicuous Production of Artificial Intelligence
<|reference_start|>Digital Labor and the Inconspicuous Production of Artificial Intelligence: Digital platforms capitalize on users' labor, often disguising essential contributions as casual activities or consumption, regardless of users' recognition of their efforts. Data annotation, content creation, and engagement with advertising are all aspects of this hidden productivity. Despite playing a crucial role in driving AI development, such tasks remain largely unrecognized and undercompensated. This chapter exposes the systemic devaluation of these activities in the digital economy, by drawing on historical theories about unrecognized labor, from housework to audience labor. This approach advocates for a broader understanding of digital labor by introducing the concept of "inconspicuous production." It moves beyond the traditional notion of "invisible work" to highlight the hidden elements inherent in all job types, especially in light of growing automation and platform-based employment.<|reference_end|>
arxiv
@article{casilli2024digital, title={Digital Labor and the Inconspicuous Production of Artificial Intelligence}, author={Antonio A. Casilli (I3 SES, NOS, IP Paris)}, journal={arXiv preprint arXiv:2410.05910}, year={2024}, archivePrefix={arXiv}, eprint={2410.05910}, primaryClass={cs.CY} }
casilli2024digital
arxiv-667015
2410.05911
Accelerating Error Correction Code Transformers
<|reference_start|>Accelerating Error Correction Code Transformers: Error correction codes (ECC) are crucial for ensuring reliable information transmission in communication systems. Choukroun & Wolf (2022b) recently introduced the Error Correction Code Transformer (ECCT), which has demonstrated promising performance across various transmission channels and families of codes. However, its high computational and memory demands limit its practical applications compared to traditional decoding algorithms. Achieving effective quantization of the ECCT presents significant challenges due to its inherently small architecture, since existing, very low-precision quantization techniques often lead to performance degradation in compact neural networks. In this paper, we introduce a novel acceleration method for transformer-based decoders. We first propose a ternary weight quantization method specifically designed for the ECCT, inducing a decoder with multiplication-free linear layers. We present an optimized self-attention mechanism to reduce computational complexity via code-aware multi-head processing. Finally, we provide positional encoding via the Tanner graph eigendecomposition, enabling a richer representation of the graph connectivity. The approach not only matches or surpasses ECCT's performance but also significantly reduces energy consumption, memory footprint, and computational complexity. Our method brings transformer-based error correction closer to practical implementation in resource-constrained environments, achieving a 90% compression ratio and reducing arithmetic operation energy consumption by at least 224 times on modern hardware.<|reference_end|>
arxiv
@article{levy2024accelerating, title={Accelerating Error Correction Code Transformers}, author={Matan Levy, Yoni Choukroun, Lior Wolf}, journal={arXiv preprint arXiv:2410.05911}, year={2024}, archivePrefix={arXiv}, eprint={2410.05911}, primaryClass={cs.LG cs.AI cs.IT math.IT} }
levy2024accelerating
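The "multiplication-free linear layers" mentioned in the abstract above follow from ternary weights: with a weight matrix in $\alpha \cdot \{-1, 0, +1\}$, a matrix-vector product reduces to additions, subtractions, and a single final scaling. The sketch below shows generic ternary quantization in the spirit of Ternary Weight Networks; the ECCT-specific scheme in the paper differs in its details, and the 0.7 threshold heuristic and all names here are illustrative assumptions.

```python
import numpy as np

def ternarize(W, delta_scale=0.7):
    # Map W to alpha * T with T in {-1, 0, +1}; delta_scale=0.7 is the
    # common Ternary Weight Networks heuristic, not necessarily the paper's.
    delta = delta_scale * np.abs(W).mean()
    T = np.where(W > delta, 1, np.where(W < -delta, -1, 0)).astype(np.int8)
    mask = T != 0
    alpha = np.abs(W[mask]).mean() if mask.any() else 0.0
    return alpha, T

def ternary_linear(x, alpha, T, bias=None):
    # The matmul against T only adds and subtracts entries of x; the single
    # multiplication by alpha is applied once to the result.
    y = alpha * (x @ T.T.astype(x.dtype))
    return y + bias if bias is not None else y
```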
arxiv-667016
2410.05915
Give me a hint: Can LLMs take a hint to solve math problems?
<|reference_start|>Give me a hint: Can LLMs take a hint to solve math problems?: While many state-of-the-art LLMs have shown poor logical and basic mathematical reasoning, recent works try to improve their problem-solving abilities using prompting techniques. We propose giving "hints" to improve the language model's performance on advanced mathematical problems, taking inspiration from how humans approach math pedagogically. We also test the model's adversarial robustness to wrong hints. We demonstrate the effectiveness of our approach by evaluating various LLMs, presenting them with a diverse set of problems of different difficulties and topics from the MATH dataset and comparing against techniques such as one-shot, few-shot, and chain of thought prompting.<|reference_end|>
arxiv
@article{agrawal2024give, title={Give me a hint: Can LLMs take a hint to solve math problems?}, author={Vansh Agrawal, Pratham Singla, Amitoj Singh Miglani, Shivank Garg, Ayush Mangal}, journal={arXiv preprint arXiv:2410.05915}, year={2024}, archivePrefix={arXiv}, eprint={2410.05915}, primaryClass={cs.CL cs.AI cs.CV} }
agrawal2024give
arxiv-667017
2410.05916
TIMBA: Time series Imputation with Bi-directional Mamba Blocks and Diffusion models
<|reference_start|>TIMBA: Time series Imputation with Bi-directional Mamba Blocks and Diffusion models: The problem of imputing multivariate time series spans a wide range of fields, from clinical healthcare to multi-sensor systems. Initially, Recurrent Neural Networks (RNNs) were employed for this task; however, their error accumulation issues led to the adoption of Transformers, leveraging attention mechanisms to mitigate these problems. Concurrently, the promising results of diffusion models in capturing original distributions have positioned them at the forefront of current research, often in conjunction with Transformers. In this paper, we propose replacing time-oriented Transformers with State-Space Models (SSM), which are better suited for temporal data modeling. Specifically, we utilize the latest SSM variant, S6, which incorporates attention-like mechanisms. By embedding S6 within Mamba blocks, we develop a model that integrates SSM, Graph Neural Networks, and node-oriented Transformers to achieve enhanced spatiotemporal representations. Implementing these architectural modifications, previously unexplored in this field, we present Time series Imputation with Bi-directional mamba blocks and diffusion models (TIMBA). TIMBA achieves superior performance in almost all benchmark scenarios and performs comparably in others across a diverse range of missing value situations and three real-world datasets. We also evaluate how the performance of our model varies with different amounts of missing values and analyse its performance on downstream tasks. In addition, we provide the original code to replicate the results.<|reference_end|>
arxiv
@article{solís-garcía2024timba:, title={TIMBA: Time series Imputation with Bi-directional Mamba Blocks and Diffusion models}, author={Javier Solís-García, Belén Vega-Márquez, Juan A. Nepomuceno, Isabel A. Nepomuceno-Chamorro}, journal={arXiv preprint arXiv:2410.05916}, year={2024}, archivePrefix={arXiv}, eprint={2410.05916}, primaryClass={cs.LG} }
solís-garcía2024timba:
arxiv-667018
2410.05920
FINALLY: fast and universal speech enhancement with studio-like quality
<|reference_start|>FINALLY: fast and universal speech enhancement with studio-like quality: In this paper, we address the challenge of speech enhancement in real-world recordings, which often contain various forms of distortion, such as background noise, reverberation, and microphone artifacts. We revisit the use of Generative Adversarial Networks (GANs) for speech enhancement and theoretically show that GANs are naturally inclined to seek the point of maximum density within the conditional clean speech distribution, which, as we argue, is essential for the speech enhancement task. We study various feature extractors for perceptual loss to facilitate the stability of adversarial training, developing a methodology for probing the structure of the feature space. This leads us to integrate WavLM-based perceptual loss into MS-STFT adversarial training pipeline, creating an effective and stable training procedure for the speech enhancement model. The resulting speech enhancement model, which we refer to as FINALLY, builds upon the HiFi++ architecture, augmented with a WavLM encoder and a novel training pipeline. Empirical results on various datasets confirm our model's ability to produce clear, high-quality speech at 48 kHz, achieving state-of-the-art performance in the field of speech enhancement.<|reference_end|>
arxiv
@article{babaev2024finally:, title={FINALLY: fast and universal speech enhancement with studio-like quality}, author={Nicholas Babaev, Kirill Tamogashev, Azat Saginbaev, Ivan Shchekotov, Hanbin Bae, Hosang Sung, WonJun Lee, Hoon-Young Cho and Pavel Andreev}, journal={arXiv preprint arXiv:2410.05920}, year={2024}, archivePrefix={arXiv}, eprint={2410.05920}, primaryClass={cs.SD cs.AI eess.AS} }
babaev2024finally:
arxiv-667019
2410.05926
Bayesian model of individual learning to control a motor imagery BCI
<|reference_start|>Bayesian model of individual learning to control a motor imagery BCI: The cognitive mechanisms underlying subjects' self-regulation in Brain-Computer Interface (BCI) and neurofeedback (NF) training remain poorly understood. Yet, a mechanistic computational model of each individual learning trajectory is required to improve the reliability of BCI applications. The few existing attempts mostly rely on model-free (reinforcement learning) approaches. Hence, they can neither capture the strategy developed by each subject nor finely predict their learning curves. In this study, we propose an alternative, model-based approach rooted in cognitive skill learning within the Active Inference framework. We show how BCI training may be framed as an inference problem under high uncertainties. We illustrate the proposed approach on a previously published synthetic Motor Imagery ERD laterality training. We show how simple changes in model parameters allow us to qualitatively match experimental results and account for various subjects. In the near future, this approach may provide a powerful computational tool to model individual skill learning and thus optimize and finely characterize BCI training.<|reference_end|>
arxiv
@article{annicchiarico2024bayesian, title={Bayesian model of individual learning to control a motor imagery BCI}, author={Côme Annicchiarico, Fabien Lotte (Potioc), Jérémie Mattout}, journal={arXiv preprint arXiv:2410.05926}, year={2024}, doi={10.3217/978-3-99161-014-4-083}, archivePrefix={arXiv}, eprint={2410.05926}, primaryClass={cs.HC} }
annicchiarico2024bayesian
arxiv-667020
2410.05928
Beyond Captioning: Task-Specific Prompting for Improved VLM Performance in Mathematical Reasoning
<|reference_start|>Beyond Captioning: Task-Specific Prompting for Improved VLM Performance in Mathematical Reasoning: Vision-Language Models (VLMs) have transformed tasks requiring visual and reasoning abilities, such as image retrieval and Visual Question Answering (VQA). Despite their success, VLMs face significant challenges with tasks involving geometric reasoning, algebraic problem-solving, and counting. These limitations stem from difficulties in effectively integrating multiple modalities and accurately interpreting geometry-related tasks. Various works claim that introducing a captioning pipeline before VQA tasks enhances performance. We incorporated this pipeline for tasks involving geometry, algebra, and counting. We found that captioning results are not generalizable: larger VLMs, primarily trained on downstream QnA tasks, show random performance on math-related challenges. However, we present a promising alternative: task-based prompting, enriching the prompt with task-specific guidance. This approach shows promise and proves more effective than direct captioning methods for math-heavy problems.<|reference_end|>
arxiv
@article{singh2024beyond, title={Beyond Captioning: Task-Specific Prompting for Improved VLM Performance in Mathematical Reasoning}, author={Ayush Singh, Mansi Gupta, Shivank Garg, Abhinav Kumar, Vansh Agrawal}, journal={arXiv preprint arXiv:2410.05928}, year={2024}, archivePrefix={arXiv}, eprint={2410.05928}, primaryClass={cs.CV cs.AI cs.CL} }
singh2024beyond
arxiv-667021
2410.05930
Fortify Your Foundations: Practical Privacy and Security for Foundation Model Deployments In The Cloud
<|reference_start|>Fortify Your Foundations: Practical Privacy and Security for Foundation Model Deployments In The Cloud: Foundation Models (FMs) display exceptional performance in tasks such as natural language processing and are being applied across a growing range of disciplines. Although typically trained on large public datasets, FMs are often fine-tuned or integrated into Retrieval-Augmented Generation (RAG) systems, which rely on private data. This access, along with their size and costly training, heightens the risk of intellectual property theft. Moreover, multimodal FMs may expose sensitive information. In this work, we examine the FM threat model and discuss the practicality and comprehensiveness of various approaches for securing against them, such as ML-based methods and trusted execution environments (TEEs). We demonstrate that TEEs offer an effective balance between strong security properties, usability, and performance. Specifically, we present a solution achieving less than 10% overhead versus bare metal for the full Llama2 7B and 13B inference pipelines running inside Intel SGX and Intel TDX. We also share our configuration files and insights from our implementation. To our knowledge, our work is the first to show the practicality of TEEs for securing FMs.<|reference_end|>
arxiv
@article{chrapek2024fortify, title={Fortify Your Foundations: Practical Privacy and Security for Foundation Model Deployments In The Cloud}, author={Marcin Chrapek, Anjo Vahldiek-Oberwagner, Marcin Spoczynski, Scott Constable, Mona Vij, Torsten Hoefler}, journal={arXiv preprint arXiv:2410.05930}, year={2024}, archivePrefix={arXiv}, eprint={2410.05930}, primaryClass={cs.CR cs.AI} }
chrapek2024fortify
arxiv-667022
2410.05931
Construction of Musculoskeletal Simulation for Shoulder Complex with Ligaments and Its Validation via Model Predictive Control
<|reference_start|>Construction of Musculoskeletal Simulation for Shoulder Complex with Ligaments and Its Validation via Model Predictive Control: The complex ways in which humans utilize their bodies in sports and martial arts are remarkable, and human motion analysis is one of the most effective tools for robot body design and control. On the other hand, motion analysis is not easy, and it is difficult to measure complex body motions in detail due to the influence of numerous muscles and soft tissues, mainly ligaments. In response, various musculoskeletal simulators have been developed and applied to motion analysis and robotics. However, they reproduce only the muscles, not the ligaments, and none of them focus on the shoulder complex, including the clavicle and scapula, which is one of the most complex parts of the body. Therefore, in this study, a detailed simulation model of the shoulder complex including ligaments is constructed. The model mimics not only the skeletal structure and muscle arrangement but also the ligament arrangement and maximum muscle strength. Through model predictive control based on the constructed simulation, we confirmed that the ligaments contribute to joint stabilization in the initial movement and that the proper distribution of maximum muscle force contributes to equalizing the load on each muscle, demonstrating the effectiveness of this simulation.<|reference_end|>
arxiv
@article{sahara2024construction, title={Construction of Musculoskeletal Simulation for Shoulder Complex with Ligaments and Its Validation via Model Predictive Control}, author={Yuta Sahara, Akihiro Miki, Yoshimoto Ribayashi, Shunnosuke Yoshimura, Kento Kawaharazuka, Kei Okada, Masayuki Inaba}, journal={arXiv preprint arXiv:2410.05931}, year={2024}, archivePrefix={arXiv}, eprint={2410.05931}, primaryClass={cs.RO} }
sahara2024construction
arxiv-667023
2410.05933
CubiX: Portable Wire-Driven Parallel Robot Connecting to and Utilizing the Environment
<|reference_start|>CubiX: Portable Wire-Driven Parallel Robot Connecting to and Utilizing the Environment: A wire-driven parallel robot is a type of robotic system in which multiple wires are used to control the movement of an end-effector. The wires are attached to the end-effector and anchored to fixed points on external structures. This configuration allows for the separation of actuators and end-effectors, enabling lightweight and simplified movable parts in the robot. However, its range of motion remains confined within the space formed by the wires, limiting wire-driven operation to the pre-designed operational range. In this study, we develop a wire-driven robot, CubiX, capable of connecting to and utilizing the environment. CubiX connects itself to the environment using up to 8 wires and drives itself by winding these wires. By integrating the actuators for winding the wires into CubiX, a portable wire-driven parallel robot is realized without limitations on its workspace. Consequently, the robot can form parallel wire-driven structures by connecting wires to the environment at any operational location.<|reference_end|>
arxiv
@article{inoue2024cubix:, title={CubiX: Portable Wire-Driven Parallel Robot Connecting to and Utilizing the Environment}, author={Shintaro Inoue, Kento Kawaharazuka, Temma Suzuki, Sota Yuzaki, Kei Okada, Masayuki Inaba}, journal={arXiv preprint arXiv:2410.05933}, year={2024}, archivePrefix={arXiv}, eprint={2410.05933}, primaryClass={cs.RO} }
inoue2024cubix:
arxiv-667024
2410.05934
Chameleon: An Efficient FHE Scheme Switching Acceleration on GPUs
<|reference_start|>Chameleon: An Efficient FHE Scheme Switching Acceleration on GPUs: Fully homomorphic encryption (FHE) enables direct computation on encrypted data, making it a crucial technology for privacy protection. However, FHE suffers from significant performance bottlenecks. In this context, GPU acceleration offers a promising solution to bridge the performance gap. Existing efforts primarily focus on single-class FHE schemes, which fail to meet the diverse requirements of data types and functions, prompting the development of hybrid multi-class FHE schemes. However, studies have yet to thoroughly investigate specific GPU optimizations for hybrid FHE schemes. In this paper, we present an efficient GPU-based FHE scheme switching acceleration named Chameleon. First, we propose a scalable NTT acceleration design that adapts to larger CKKS polynomials and smaller TFHE polynomials. Specifically, Chameleon tackles synchronization issues by fusing stages to reduce synchronization, employing polynomial coefficient shuffling to minimize synchronization scale, and utilizing an SM-aware combination strategy to identify the optimal switching point. Second, Chameleon is the first to comprehensively analyze and optimize critical switching operations. It introduces CMux-level parallelization to accelerate LUT evaluation and a homomorphic rotation-free matrix-vector multiplication to improve repacking efficiency. Finally, Chameleon outperforms the state-of-the-art GPU implementations by 1.23x in CKKS HMUL and 1.15x in bootstrapping. It also achieves up to 4.87x and 1.51x speedups for TFHE gate bootstrapping compared to CPU and GPU versions, respectively, and delivers a 67.3x average speedup for scheme switching over CPU-based implementation.<|reference_end|>
arxiv
@article{wang2024chameleon:, title={Chameleon: An Efficient FHE Scheme Switching Acceleration on GPUs}, author={Zhiwei Wang, Haoqi He, Lutan Zhao, Peinan Li, Zhihao Li, Dan Meng, Rui Hou}, journal={arXiv preprint arXiv:2410.05934}, year={2024}, archivePrefix={arXiv}, eprint={2410.05934}, primaryClass={cs.CR} }
wang2024chameleon:
arxiv-667025
2410.05935
Learning Gaussian Data Augmentation in Feature Space for One-shot Object Detection in Manga
<|reference_start|>Learning Gaussian Data Augmentation in Feature Space for One-shot Object Detection in Manga: We tackle one-shot object detection in Japanese Manga. The rising global popularity of Japanese manga has made the object detection of character faces increasingly important, with potential applications such as automatic colorization. However, obtaining sufficient data for training conventional object detectors is challenging due to copyright restrictions. Additionally, new characters appear every time a new volume of manga is released, making it impractical to re-train object detectors each time to detect these new characters. Therefore, one-shot object detection, where only a single query (reference) image is required to detect a new character, is an essential task in the manga industry. One challenge with one-shot object detection in manga is the large variation in the poses and facial expressions of characters in target images, despite having only one query image as a reference. Another challenge is that the frequency of character appearances follows a long-tail distribution. To overcome these challenges, we propose a data augmentation method in feature space to increase the variation of the query. The proposed method augments the feature from the query by adding Gaussian noise, with the noise variance at each channel learned during training. The experimental results show that the proposed method improves the performance for both seen and unseen classes, surpassing data augmentation methods in image space.<|reference_end|>
arxiv
@article{taniguchi2024learning, title={Learning Gaussian Data Augmentation in Feature Space for One-shot Object Detection in Manga}, author={Takara Taniguchi, Ryosuke Furuta}, journal={arXiv preprint arXiv:2410.05935}, year={2024}, archivePrefix={arXiv}, eprint={2410.05935}, primaryClass={cs.CV cs.MM} }
taniguchi2024learning
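The augmentation described in the abstract above is concrete enough to sketch: add zero-mean Gaussian noise to the query feature, with a per-channel standard deviation learned during training. The PyTorch module below is a hedged illustration of that idea, not the authors' code; the module name, the softplus parameterization, and the (B, C, H, W) layout are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianFeatureAugment(nn.Module):
    def __init__(self, num_channels):
        super().__init__()
        # One learnable noise scale per channel, kept positive via softplus.
        self.raw_sigma = nn.Parameter(torch.zeros(num_channels))

    def forward(self, feat):  # feat: (B, C, H, W) query feature map
        if not self.training:
            return feat  # augmentation is applied only during training
        sigma = F.softplus(self.raw_sigma).view(1, -1, 1, 1)
        return feat + torch.randn_like(feat) * sigma
```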
arxiv-667026
2410.05937
Athanor: Local Search over Abstract Constraint Specifications
<|reference_start|>Athanor: Local Search over Abstract Constraint Specifications: Local search is a common method for solving combinatorial optimisation problems. We focus on general-purpose local search solvers that accept as input a constraint model - a declarative description of a problem consisting of a set of decision variables under a set of constraints. Existing approaches typically take as input models written in solver-independent constraint modelling languages like MiniZinc. The Athanor solver we describe herein differs in that it begins from a specification of a problem in the abstract constraint specification language Essence, which allows problems to be described without commitment to low-level modelling decisions through its support for a rich set of abstract types. The advantage of proceeding from Essence is that the structure apparent in a concise, abstract specification of a problem can be exploited to generate high quality neighbourhoods automatically, avoiding the difficult task of identifying that structure in an equivalent constraint model. Based on the twin benefits of neighbourhoods derived from high level types and the scalability derived by searching directly over those types, our empirical results demonstrate strong performance in practice relative to existing solution methods.<|reference_end|>
arxiv
@article{attieh2024athanor:, title={Athanor: Local Search over Abstract Constraint Specifications}, author={Saad Attieh, Nguyen Dang, Christopher Jefferson, Ian Miguel, Peter Nightingale}, journal={arXiv preprint arXiv:2410.05937}, year={2024}, archivePrefix={arXiv}, eprint={2410.05937}, primaryClass={cs.AI} }
attieh2024athanor:
arxiv-667027
2410.05938
EMMA: Empowering Multi-modal Mamba with Structural and Hierarchical Alignment
<|reference_start|>EMMA: Empowering Multi-modal Mamba with Structural and Hierarchical Alignment: Mamba-based architectures have been shown to be a promising new direction for deep learning models owing to their competitive performance and sub-quadratic deployment speed. However, current Mamba multi-modal large language models (MLLM) are insufficient in extracting visual features, leading to imbalanced cross-modal alignment between visual and textual latents, negatively impacting performance on multi-modal tasks. In this work, we propose Empowering Multi-modal Mamba with Structural and Hierarchical Alignment (EMMA), which enables the MLLM to extract fine-grained visual information. Specifically, we propose a pixel-wise alignment module to autoregressively optimize the learning and processing of spatial image-level features along with textual tokens, enabling structural alignment at the image level. In addition, to prevent the degradation of visual information during the cross-modal alignment process, we propose a multi-scale feature fusion (MFF) module to combine multi-scale visual features from intermediate layers, enabling hierarchical alignment at the feature level. Extensive experiments are conducted across a variety of multi-modal benchmarks. Our model shows lower latency than other Mamba-based MLLMs and is nearly four times faster than transformer-based MLLMs of similar scale during inference. Due to better cross-modal alignment, our model exhibits lower degrees of hallucination and enhanced sensitivity to visual details, which manifests in superior performance across diverse multi-modal benchmarks. Code will be provided.<|reference_end|>
arxiv
@article{xing2024emma:, title={EMMA: Empowering Multi-modal Mamba with Structural and Hierarchical Alignment}, author={Yifei Xing, Xiangyuan Lan, Ruiping Wang, Dongmei Jiang, Wenjun Huang, Qingfang Zheng, Yaowei Wang}, journal={arXiv preprint arXiv:2410.05938}, year={2024}, archivePrefix={arXiv}, eprint={2410.05938}, primaryClass={cs.CV cs.AI} }
xing2024emma:
arxiv-667028
2410.05939
RLRF4Rec: Reinforcement Learning from Recsys Feedback for Enhanced Recommendation Reranking
<|reference_start|>RLRF4Rec: Reinforcement Learning from Recsys Feedback for Enhanced Recommendation Reranking: Large Language Models (LLMs) have demonstrated remarkable performance across diverse domains, prompting researchers to explore their potential for use in recommendation systems. Initial attempts have leveraged the exceptional capabilities of LLMs, such as rich knowledge and strong generalization through In-context Learning, which involves phrasing the recommendation task as prompts. Nevertheless, the performance of LLMs in recommendation tasks remains suboptimal due to a substantial disparity between the training tasks for LLMs and recommendation tasks, and inadequate recommendation data during pre-training. This paper introduces RLRF4Rec, a novel framework integrating Reinforcement Learning from Recsys Feedback with LLMs for enhanced recommendation reranking, to address these challenges. Specifically, we first have the LLM generate inferred user preferences based on user interaction history, which are then used to augment traditional ID-based sequence recommendation models. Subsequently, we train a reward model based on knowledge-augmented recommendation models to evaluate the quality of the reasoning knowledge from the LLM. We then select the best and worst responses from the N samples to construct a dataset for LLM tuning. Finally, we design a structure alignment strategy with Direct Preference Optimization (DPO). We validate the effectiveness of RLRF4Rec through extensive experiments, demonstrating significant improvements in recommendation re-ranking metrics compared to baselines. This demonstrates that our approach significantly improves the capability of LLMs to respond to instructions within recommender systems.<|reference_end|>
arxiv
@article{sun2024rlrf4rec:, title={RLRF4Rec: Reinforcement Learning from Recsys Feedback for Enhanced Recommendation Reranking}, author={Chao Sun, Yaobo Liang, Yaming Yang, Shilin Xu, Tianmeng Yang, Yunhai Tong}, journal={arXiv preprint arXiv:2410.05939}, year={2024}, archivePrefix={arXiv}, eprint={2410.05939}, primaryClass={cs.IR} }
sun2024rlrf4rec:
arxiv-667029
2410.05940
TouchInsight: Uncertainty-aware Rapid Touch and Text Input for Mixed Reality from Egocentric Vision
<|reference_start|>TouchInsight: Uncertainty-aware Rapid Touch and Text Input for Mixed Reality from Egocentric Vision: While passive surfaces offer numerous benefits for interaction in mixed reality, reliably detecting touch input solely from head-mounted cameras has been a long-standing challenge. Camera specifics, hand self-occlusion, and rapid movements of both head and fingers introduce considerable uncertainty about the exact location of touch events. Existing methods have thus not been capable of achieving the performance needed for robust interaction. In this paper, we present a real-time pipeline that detects touch input from all ten fingers on any physical surface, purely based on egocentric hand tracking. Our method TouchInsight comprises a neural network to predict the moment of a touch event, the finger making contact, and the touch location. TouchInsight represents locations through a bivariate Gaussian distribution to account for uncertainties due to sensing inaccuracies, which we resolve through contextual priors to accurately infer intended user input. We first evaluated our method offline and found that it locates input events with a mean error of 6.3 mm, and accurately detects touch events (F1=0.99) and identifies the finger used (F1=0.96). In an online evaluation, we then demonstrate the effectiveness of our approach for a core application of dexterous touch input: two-handed text entry. In our study, participants typed 37.0 words per minute with an uncorrected error rate of 2.9% on average.<|reference_end|>
arxiv
@article{streli2024touchinsight:, title={TouchInsight: Uncertainty-aware Rapid Touch and Text Input for Mixed Reality from Egocentric Vision}, author={Paul Streli, Mark Richardson, Fadi Botros, Shugao Ma, Robert Wang, Christian Holz}, journal={arXiv preprint arXiv:2410.05940}, year={2024}, doi={10.1145/3654777.3676330}, archivePrefix={arXiv}, eprint={2410.05940}, primaryClass={cs.CV cs.HC} }
streli2024touchinsight:
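Representing each touch location as a bivariate Gaussian, as in the abstract above, makes combining it with a contextual prior a closed-form operation: the product of two Gaussians is again Gaussian, with precisions adding. The snippet below illustrates only that standard fusion step; it is not the paper's pipeline, and the example numbers are arbitrary.

```python
import numpy as np

def fuse_gaussians(mu_m, cov_m, mu_p, cov_p):
    # Posterior of a Gaussian measurement combined with a Gaussian prior:
    # precisions (inverse covariances) add, and means are precision-weighted.
    prec_m, prec_p = np.linalg.inv(cov_m), np.linalg.inv(cov_p)
    cov = np.linalg.inv(prec_m + prec_p)
    mu = cov @ (prec_m @ mu_m + prec_p @ mu_p)
    return mu, cov

# A noisy 2D touch estimate fused with a prior centered on the intended key.
mu, cov = fuse_gaussians(np.array([10.2, 4.9]), np.diag([4.0, 4.0]),
                         np.array([10.0, 5.0]), np.diag([2.0, 2.0]))
```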
arxiv-667030
2410.05942
Single Point-Based Distributed Zeroth-Order Optimization with a Non-Convex Stochastic Objective Function
<|reference_start|>Single Point-Based Distributed Zeroth-Order Optimization with a Non-Convex Stochastic Objective Function: Zero-order (ZO) optimization is a powerful tool for dealing with realistic constraints. On the other hand, the gradient-tracking (GT) technique has proved to be an efficient method for distributed optimization aiming to achieve consensus. However, it is a first-order (FO) method that requires knowledge of the gradient, which is not always possible in practice. In this work, we introduce a zero-order distributed optimization method based on the gradient-tracking technique with a one-point estimate of the gradient. We prove that this new technique converges with a single noisy function query at a time in the non-convex setting. We then establish a convergence rate of $O(\frac{1}{\sqrt[3]{K}})$ after a number of iterations K, which competes with the rate of $O(\frac{1}{\sqrt[4]{K}})$ of its centralized counterparts. Finally, a numerical example validates our theoretical results.<|reference_end|>
arxiv
@article{mhanna2024single, title={Single Point-Based Distributed Zeroth-Order Optimization with a Non-Convex Stochastic Objective Function}, author={Elissa Mhanna and Mohamad Assaad}, journal={Proceedings of the 40th International Conference on Machine Learning, PMLR 202:24701-24719, 2023}, year={2024}, archivePrefix={arXiv}, eprint={2410.05942}, primaryClass={cs.LG math.OC} }
mhanna2024single
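The "single noisy function query at a time" in the abstract above refers to a one-point gradient estimator, which the paper combines with gradient tracking across a network of agents. The single-agent sketch below shows only the classic estimator itself; the toy objective and step sizes are arbitrary choices, not the paper's.

```python
import numpy as np

def one_point_grad_estimate(f, x, delta, rng):
    # One noisy query f(x + delta*u) along a random unit direction u yields
    # g = (d / delta) * f(x + delta*u) * u, a biased gradient estimate whose
    # bias vanishes as delta -> 0 (at the price of high variance).
    u = rng.standard_normal(x.size)
    u /= np.linalg.norm(u)
    return (x.size / delta) * f(x + delta * u) * u

# Toy usage: zeroth-order descent on f(x) = ||x||^2 with noisy evaluations.
rng = np.random.default_rng(0)
f = lambda x: x @ x + 0.01 * rng.standard_normal()
x = np.ones(5)
for t in range(1, 2001):
    x -= (0.5 / t) * one_point_grad_estimate(f, x, delta=0.1, rng=rng)
```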
arxiv-667031
2410.05947
Maximal Length Cellular Automata : A Survey
<|reference_start|>Maximal Length Cellular Automata : A Survey: This article surveys some theoretical aspects of Cellular Automata (CAs) research. In particular, we discuss maximal length CAs. An n-cell CA is a maximal length CA if all the configurations except one form a single cycle. There is a close connection between maximal length CAs and primitive polynomials, so primitive polynomials occupy a good amount of space in this survey. The main goal of this survey is to provide a tutorial on maximal length CA theory to researchers, with classical and new results on maximality. We also give a compact collection of known results with references to their proofs, and suggest some open problems. Additionally, some new theorems and corollaries are added to bridge the gaps among several known results.<|reference_end|>
arxiv
@article{adak2024maximal, title={Maximal Length Cellular Automata : A Survey}, author={Sumit Adak and Sukanta Das}, journal={arXiv preprint arXiv:2410.05947}, year={2024}, archivePrefix={arXiv}, eprint={2410.05947}, primaryClass={cs.FL cs.CC} }
adak2024maximal
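The definition quoted above (all configurations except one forming a single cycle) can be checked by brute force for small n, and the link to primitive polynomials arises through linear XOR rules such as rule-90/150 hybrids. The sketch below is illustrative only: the null boundary and the example rule vector <90,150,90,150> (whose characteristic polynomial works out to the primitive x^4 + x + 1) are choices made here, and the check is practical only for small n.

```python
import numpy as np

def step_90_150(state, rule150_mask):
    # One step of a null-boundary 90/150 hybrid CA over GF(2): each cell is
    # the XOR of its two neighbors, plus itself where rule150_mask is 1.
    left = np.roll(state, 1)
    left[0] = 0
    right = np.roll(state, -1)
    right[-1] = 0
    return left ^ right ^ (state & rule150_mask)

def is_maximal_length(rule150_mask):
    # Maximal length iff the 2**n - 1 nonzero configurations form one cycle.
    n = len(rule150_mask)
    start = np.zeros(n, dtype=np.uint8)
    start[-1] = 1
    s = start.copy()
    for steps in range(1, 2 ** n + 1):
        s = step_90_150(s, rule150_mask)
        if np.array_equal(s, start):
            return steps == 2 ** n - 1
    return False  # the orbit never returned to the start configuration

# <90,150,90,150> should give a single cycle of length 2**4 - 1 = 15.
print(is_maximal_length(np.array([0, 1, 0, 1], dtype=np.uint8)))
```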
arxiv-667032
2410.05951
Hyper Adversarial Tuning for Boosting Adversarial Robustness of Pretrained Large Vision Models
<|reference_start|>Hyper Adversarial Tuning for Boosting Adversarial Robustness of Pretrained Large Vision Models: Large vision models have been found vulnerable to adversarial examples, emphasizing the need for enhancing their adversarial robustness. While adversarial training is an effective defense for deep convolutional models, it often faces scalability issues with large vision models due to high computational costs. Recent approaches propose robust fine-tuning methods, such as adversarial tuning of low-rank adaptation (LoRA) in large vision models, but they still struggle to match the accuracy of full-parameter adversarial fine-tuning. The integration of various defense mechanisms offers a promising approach to enhancing the robustness of large vision models, yet this paradigm remains underexplored. To address this, we propose hyper adversarial tuning (HyperAT), which leverages shared defensive knowledge among different methods to improve model robustness both efficiently and effectively. Specifically, adversarial tuning of each defense method is formulated as a learning task, and a hypernetwork generates a LoRA specific to this defense. Then, a random sampling and tuning strategy is proposed to extract and facilitate defensive knowledge transfer between different defenses. Finally, diverse LoRAs are merged to enhance the adversarial robustness. Experiments on various datasets and model architectures demonstrate that HyperAT significantly enhances the adversarial robustness of pretrained large vision models without excessive computational overhead, establishing a new state-of-the-art benchmark.<|reference_end|>
arxiv
@article{lv2024hyper, title={Hyper Adversarial Tuning for Boosting Adversarial Robustness of Pretrained Large Vision Models}, author={Kangtao Lv, Huangsen Cao, Kainan Tu, Yihuai Xu, Zhimeng Zhang, Xin Ding, Yongwei Wang}, journal={arXiv preprint arXiv:2410.05951}, year={2024}, archivePrefix={arXiv}, eprint={2410.05951}, primaryClass={cs.CV} }
lv2024hyper
arxiv-667033
2410.05952
Active Evaluation Acquisition for Efficient LLM Benchmarking
<|reference_start|>Active Evaluation Acquisition for Efficient LLM Benchmarking: As large language models (LLMs) become increasingly versatile, numerous large scale benchmarks have been developed to thoroughly assess their capabilities. These benchmarks typically consist of diverse datasets and prompts to evaluate different aspects of LLM performance. However, comprehensive evaluations on hundreds or thousands of prompts incur tremendous costs in terms of computation, money, and time. In this work, we investigate strategies to improve evaluation efficiency by selecting a subset of examples from each benchmark using a learned policy. Our approach models the dependencies across test examples, allowing accurate prediction of the evaluation outcomes for the remaining examples based on the outcomes of the selected ones. Consequently, we only need to acquire the actual evaluation outcomes for the selected subset. We rigorously explore various subset selection policies and introduce a novel RL-based policy that leverages the captured dependencies. Empirical results demonstrate that our approach significantly reduces the number of evaluation prompts required while maintaining accurate performance estimates compared to previous methods.<|reference_end|>
arxiv
@article{li2024active, title={Active Evaluation Acquisition for Efficient LLM Benchmarking}, author={Yang Li, Jie Ma, Miguel Ballesteros, Yassine Benajiba, Graham Horwood}, journal={arXiv preprint arXiv:2410.05952}, year={2024}, archivePrefix={arXiv}, eprint={2410.05952}, primaryClass={cs.LG} }
li2024active
arxiv-667034
2410.05953
The Cyber Alliance Game: How Alliances Influence Cyber-Warfare
<|reference_start|>The Cyber Alliance Game: How Alliances Influence Cyber-Warfare: Cyber-warfare has become the norm in current ongoing military conflicts. Over the past decade, numerous examples have shown the extent to which nation-states become vulnerable if they do not focus on building their cyber capacities. Adding to the inherent complexity of cyberwar scenarios, a state is usually a member of one or more alliances. Alliance policies and internal struggles could shape the individual actions of member states; intuitively, this also holds for the cyber domain. In this paper, we define and study a simple Cyber Alliance Game with the objective of understanding the fundamental influence of alliances on cyber conflicts between nation-states. Specifically, we focus on the decision of whether to exploit a newly found vulnerability individually or share it with the alliance. First, we characterize the impact of vulnerability-sharing rewards on the resulting equilibrium. Second, we study the implications of the internal power structure of alliances on cyberwar outcomes and infer the expected behavior of Dictator, Veto, and Dummy players. Finally, we investigate how alliances can nudge their members via rewards and punishments to adhere to their defensive or offensive cyber policy. We believe that our results contribute to the fundamental understanding of real-world cyber-conflicts by characterizing the impact of alliances.<|reference_end|>
arxiv
@article{benkő2024the, title={The Cyber Alliance Game: How Alliances Influence Cyber-Warfare}, author={Gergely Benk\H{o} and Gergely Bicz\'ok}, journal={arXiv preprint arXiv:2410.05953}, year={2024}, archivePrefix={arXiv}, eprint={2410.05953}, primaryClass={cs.GT cs.CR} }
benkő2024the
arxiv-667035
2410.05954
Pyramidal Flow Matching for Efficient Video Generative Modeling
<|reference_start|>Pyramidal Flow Matching for Efficient Video Generative Modeling: Video generation requires modeling a vast spatiotemporal space, which demands significant computational resources and data usage. To reduce the complexity, the prevailing approaches employ a cascaded architecture to avoid direct training with full resolution. Despite reducing computational demands, the separate optimization of each sub-stage hinders knowledge sharing and sacrifices flexibility. This work introduces a unified pyramidal flow matching algorithm. It reinterprets the original denoising trajectory as a series of pyramid stages, where only the final stage operates at the full resolution, thereby enabling more efficient video generative modeling. Through our sophisticated design, the flows of different pyramid stages can be interlinked to maintain continuity. Moreover, we craft autoregressive video generation with a temporal pyramid to compress the full-resolution history. The entire framework can be optimized in an end-to-end manner and with a single unified Diffusion Transformer (DiT). Extensive experiments demonstrate that our method supports generating high-quality 5-second (up to 10-second) videos at 768p resolution and 24 FPS within 20.7k A100 GPU training hours. All code and models will be open-sourced at https://pyramid-flow.github.io.<|reference_end|>
arxiv
@article{jin2024pyramidal, title={Pyramidal Flow Matching for Efficient Video Generative Modeling}, author={Yang Jin, Zhicheng Sun, Ningyuan Li, Kun Xu, Kun Xu, Hao Jiang, Nan Zhuang, Quzhe Huang, Yang Song, Yadong Mu, Zhouchen Lin}, journal={arXiv preprint arXiv:2410.05954}, year={2024}, archivePrefix={arXiv}, eprint={2410.05954}, primaryClass={cs.CV cs.LG} }
jin2024pyramidal
arxiv-667036
2410.05956
Waveguide-multiplexed photonic matrix-vector multiplication processor using multiport photodetectors
<|reference_start|>Waveguide-multiplexed photonic matrix-vector multiplication processor using multiport photodetectors: The slowing down of Moore's law has driven the development of application-specific processors for deep learning. Analog photonic processors offer a promising solution for accelerating matrix-vector multiplications (MVMs) in deep learning by leveraging parallel computations in the optical domain. Intensity-based photonic MVM processors, which do not utilize the phase information of light, are appealing due to their simplified operations. However, existing intensity-based schemes for such processors often employ wavelength multiplexing or mode multiplexing, both of which have limited scalability due to high insertion loss or wavelength crosstalk. In this work, we present a scalable intensity-based photonic MVM processor based on the concept of waveguide multiplexing. This scheme employs multiport photodetectors (PDs) to sum the intensities of multiple optical signals, eliminating the need for multiple wavelengths or modes. A 16-port Ge PD with a 3 dB bandwidth of 11.8 GHz at a bias voltage of -3 V is demonstrated, and it can be further scaled up to handle 250 ports while maintaining a 6.1 GHz operation bandwidth. A 4 $\times$ 4 circuit fabricated on a Si-on-insulator (SOI) platform is used to perform MVMs in a 3-layer neural network designed for classifying Iris flowers, achieving a classification accuracy of 93.3%. Furthermore, the performance of large-scale circuits in a convolutional neural network (CNN) for Fashion-MNIST is simulated, resulting in a classification accuracy of 90.53%. This work provides a simplified and scalable approach to photonic MVM, laying a foundation for large-scale and multi-dimensional photonic matrix-matrix multiplication in optical neural networks.<|reference_end|>
arxiv
@article{tang2024waveguide-multiplexed, title={Waveguide-multiplexed photonic matrix-vector multiplication processor using multiport photodetectors}, author={Rui Tang, Makoto Okano, Chao Zhang, Kasidit Toprasertpong, Shinichi Takagi, Mitsuru Takenaka}, journal={arXiv preprint arXiv:2410.05956}, year={2024}, archivePrefix={arXiv}, eprint={2410.05956}, primaryClass={physics.optics cs.ET} }
tang2024waveguide-multiplexed
arxiv-667037
2410.05961
Active and Passive Beamforming Designs for SER Minimization in RIS-Assisted MIMO Systems
<|reference_start|>Active and Passive Beamforming Designs for SER Minimization in RIS-Assisted MIMO Systems: This research investigates reconfigurable intelligent surface (RIS)-assisted multiple input multiple output (MIMO) systems, with a focus on enhancing communication reliability with modulated signals. Specifically, we first derive the analytical downlink symbol error rate (SER) of each user as a multivariate function of both the phase-shift and beamforming vectors. The analytical SER enables us to obtain insights into the synergistic dynamics between the RIS and MIMO communication. We then introduce a novel average SER minimization problem subject to the practical constraints of the transmitted power budget and phase shift coefficients, which is NP-hard. By incorporating the differential evolution (DE) algorithm as a pivotal tool for optimizing the intricate active and passive beamforming variables in RIS-assisted communication systems, the non-convexity of the considered SER optimization problem can be effectively handled. Furthermore, an efficient local search is incorporated into the DE algorithm to escape local optima and hence offer low SER and high communication reliability. Monte Carlo simulations validate the analytical results and the proposed optimization framework, indicating that the joint active and passive beamforming design is superior to the other benchmarks.<|reference_end|>
arxiv
@article{van chien2024active, title={Active and Passive Beamforming Designs for SER Minimization in RIS-Assisted MIMO Systems}, author={Trinh Van Chien, Bui Trong Duc, Ho Viet Duc Luong, Huynh Thi Thanh Binh, Hien Quoc Ngo, and Symeon Chatzinotas}, journal={arXiv preprint arXiv:2410.05961}, year={2024}, archivePrefix={arXiv}, eprint={2410.05961}, primaryClass={cs.IT eess.SP math.IT} }
van chien2024active
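Differential evolution itself is a standard population-based optimizer; a minimal DE/rand/1/bin loop is sketched below on a stand-in objective (aligning the phases of a toy 8-element array), since the paper's actual SER expression is not reproduced here. The population size, mutation factor F, and crossover rate CR are generic defaults, not the paper's settings:

```python
import numpy as np

def differential_evolution(obj, bounds, pop=20, F=0.8, CR=0.9,
                           iters=200, seed=0):
    """Classic DE/rand/1/bin minimizer for a black-box objective."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    d = len(lo)
    X = rng.uniform(lo, hi, size=(pop, d))          # initial population
    fX = np.array([obj(x) for x in X])
    for _ in range(iters):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i],
                                   3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True               # binomial crossover
            trial = np.where(cross, mutant, X[i])
            f_trial = obj(trial)
            if f_trial <= fX[i]:                        # greedy selection
                X[i], fX[i] = trial, f_trial
    best = np.argmin(fX)
    return X[best], fX[best]

# Stand-in for the SER objective: tune phase shifts of a toy 8-element array.
bounds = np.array([[0.0, 2 * np.pi]] * 8)
obj = lambda theta: -abs(np.exp(1j * theta).sum())  # maximize coherent gain
x, fx = differential_evolution(obj, bounds)
print(fx)   # approaches -8 as the phases align
```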
arxiv-667038
2410.05963
Training-Free Open-Ended Object Detection and Segmentation via Attention as Prompts
<|reference_start|>Training-Free Open-Ended Object Detection and Segmentation via Attention as Prompts: Existing perception models achieve great success by learning from large amounts of labeled data, but they still struggle with open-world scenarios. To alleviate this issue, researchers introduce open-set perception tasks to detect or segment unseen objects in the training set. However, these models require predefined object categories as inputs during inference, which are not available in real-world scenarios. Recently, researchers pose a new and more practical problem, \textit{i.e.}, open-ended object detection, which discovers unseen objects without any object categories as inputs. In this paper, we present VL-SAM, a training-free framework that combines the generalized object recognition model (\textit{i.e.,} Vision-Language Model) with the generalized object localization model (\textit{i.e.,} Segment-Anything Model), to address the open-ended object detection and segmentation task. Without additional training, we connect these two generalized models with attention maps as the prompts. Specifically, we design an attention map generation module by employing head aggregation and a regularized attention flow to aggregate and propagate attention maps across all heads and layers in VLM, yielding high-quality attention maps. Then, we iteratively sample positive and negative points from the attention maps with a prompt generation module and send the sampled points to SAM to segment corresponding objects. Experimental results on the long-tail instance segmentation dataset (LVIS) show that our method surpasses the previous open-ended method on the object detection task and can provide additional instance segmentation masks. Besides, VL-SAM achieves favorable performance on the corner case object detection dataset (CODA), demonstrating the effectiveness of VL-SAM in real-world applications. Moreover, VL-SAM exhibits good model generalization that can incorporate various VLMs and SAMs.<|reference_end|>
arxiv
@article{lin2024training-free, title={Training-Free Open-Ended Object Detection and Segmentation via Attention as Prompts}, author={Zhiwei Lin, Yongtao Wang, Zhi Tang}, journal={arXiv preprint arXiv:2410.05963}, year={2024}, archivePrefix={arXiv}, eprint={2410.05963}, primaryClass={cs.CV} }
lin2024training-free
arxiv-667039
2410.05964
STNet: Deep Audio-Visual Fusion Network for Robust Speaker Tracking
<|reference_start|>STNet: Deep Audio-Visual Fusion Network for Robust Speaker Tracking: Audio-visual speaker tracking aims to determine the location of human targets in a scene using signals captured by a multi-sensor platform, whose accuracy and robustness can be improved by multi-modal fusion methods. Recently, several fusion methods have been proposed to model the correlation in multiple modalities. However, for the speaker tracking problem, the cross-modal interaction between audio and visual signals has not been well exploited. To this end, we present a novel Speaker Tracking Network (STNet) with a deep audio-visual fusion model in this work. We design a visual-guided acoustic measurement method to fuse heterogeneous cues in a unified localization space, which employs visual observations via a camera model to construct the enhanced acoustic map. For feature fusion, a cross-modal attention module is adopted to jointly model multi-modal contexts and interactions. The correlated information between audio and visual features is further exchanged within the fusion model. Moreover, the STNet-based tracker is applied to multi-speaker cases by a quality-aware module, which evaluates the reliability of multi-modal observations to achieve robust tracking in complex scenarios. Experiments on the AV16.3 and CAV3D datasets show that the proposed STNet-based tracker outperforms uni-modal methods and state-of-the-art audio-visual speaker trackers.<|reference_end|>
arxiv
@article{li2024stnet:, title={STNet: Deep Audio-Visual Fusion Network for Robust Speaker Tracking}, author={Yidi Li and Hong Liu and Bing Yang}, journal={arXiv preprint arXiv:2410.05964}, year={2024}, archivePrefix={arXiv}, eprint={2410.05964}, primaryClass={cs.CV cs.AI} }
li2024stnet:
arxiv-667040
2410.05966
FLOPS: Forward Learning with OPtimal Sampling
<|reference_start|>FLOPS: Forward Learning with OPtimal Sampling: Given the limitations of backpropagation, perturbation-based gradient computation methods have recently gained focus for learning with only forward passes, also referred to as queries. Conventional forward learning consumes an enormous number of queries on each data point for accurate gradient estimation through Monte Carlo sampling, which hinders the scalability of those algorithms. However, not all data points deserve equal queries for gradient estimation. In this paper, we study the problem of improving the forward learning efficiency from a novel perspective: how to reduce the gradient estimation variance with minimum cost? For this, we propose to allocate the optimal number of queries across the data points in one batch during training to achieve a good balance between estimation accuracy and computational efficiency. Specifically, with a simplified proxy objective and a reparameterization technique, we derive a novel plug-and-play query allocator with minimal parameters. Theoretical results are carried out to verify its optimality. We conduct extensive experiments for fine-tuning Vision Transformers on various datasets and further deploy the allocator to two black-box applications: prompt tuning and multimodal alignment for foundation models. All findings demonstrate that our proposed allocator significantly enhances the scalability of forward-learning algorithms, paving the way for real-world applications.<|reference_end|>
arxiv
@article{ren2024flops:, title={FLOPS: Forward Learning with OPtimal Sampling}, author={Tao Ren, Zishi Zhang, Jinyang Jiang, Guanghao Li, Zeliang Zhang, Mingqian Feng, Yijie Peng}, journal={arXiv preprint arXiv:2410.05966}, year={2024}, archivePrefix={arXiv}, eprint={2410.05966}, primaryClass={cs.LG cs.AI} }
ren2024flops:
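The idea of giving different data points different query budgets has a classical closed-form special case: if per-sample gradient-estimation variances were known, minimizing the summed variance under a fixed total budget yields Neyman allocation, with n_i proportional to sigma_i. The sketch below illustrates only this principle; it is not the paper's learned allocator, and the variance estimates are assumed given:

```python
import numpy as np

def allocate_queries(sigmas, budget, min_q=1):
    """Minimize sum_i sigma_i^2 / n_i subject to sum_i n_i = budget:
    the optimum is n_i proportional to sigma_i (Neyman allocation)."""
    sigmas = np.asarray(sigmas, dtype=float)
    raw = budget * sigmas / sigmas.sum()
    n = np.maximum(min_q, np.floor(raw).astype(int))
    # Hand out any leftover queries to the largest fractional parts.
    leftover = budget - n.sum()
    if leftover > 0:
        order = np.argsort(raw - np.floor(raw))[::-1]
        n[order[:leftover]] += 1
    return n

sigmas = [0.1, 1.0, 2.0, 0.5]   # illustrative per-sample noise levels
n = allocate_queries(sigmas, budget=36)
print(n, n.sum())               # noisier samples receive more queries
```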
arxiv-667041
2410.05968
A meshless geometric conservation weighted least square method for solving the shallow water equations
<|reference_start|>A meshless geometric conservation weighted least square method for solving the shallow water equations: The shallow water equations are numerically solved to simulate free surface flows. The convective flux terms in the shallow water equations need to be discretized using a Riemann solver to capture shocks and discontinuities for certain flow situations such as hydraulic jumps, dam-break wave propagation or bore wave propagation, levee-breaching flows, etc. The approximate Riemann solver can capture shocks and is popular for studying open-channel flow dynamics with traditional mesh-based numerical methods. Though meshless methods can work on highly irregular geometry without involving the complex mesh generation procedure, the shock-capturing capability has not been implemented, especially for solving open-channel flows. Therefore, we have proposed a numerical method, namely, a shock-capturing meshless geometric conservation weighted least square (GC-WLS) method for solving the shallow water equations. The HLL (Harten-Lax-Van Leer) Riemann solver is implemented within the framework of the proposed meshless method. The spatial derivatives in the shallow water equations and the reconstruction of conservative variables for high-order accuracy are computed using the GC-WLS method. The proposed meshless method is tested for various numerically challenging open-channel flow problems, including analytical benchmarks, laboratory experiments, and a large-scale physical model study of a dam-break event.<|reference_end|>
arxiv
@article{satyaprasad2024a, title={A meshless geometric conservation weighted least square method for solving the shallow water equations}, author={D. Satyaprasad, Soumendra Nath Kuiry and S. Sundar}, journal={arXiv preprint arXiv:2410.05968}, year={2024}, archivePrefix={arXiv}, eprint={2410.05968}, primaryClass={physics.flu-dyn cs.NA math.NA} }
satyaprasad2024a
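The HLL (Harten-Lax-Van Leer) flux that the method embeds has a compact textbook form. The sketch below evaluates it for the 1-D shallow water equations with simple Davis-type wave-speed estimates; it illustrates the solver generically and is not the paper's GC-WLS implementation:

```python
import numpy as np

g = 9.81  # gravitational acceleration

def swe_flux(h, hu):
    """Physical flux of the 1-D shallow water equations."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def hll_flux(hL, huL, hR, huR):
    """HLL interface flux with Davis-type wave-speed estimates S_L, S_R."""
    uL, uR = huL / hL, huR / hR
    cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
    SL = min(uL - cL, uR - cR)
    SR = max(uL + cL, uR + cR)
    FL, FR = swe_flux(hL, huL), swe_flux(hR, huR)
    if SL >= 0:
        return FL
    if SR <= 0:
        return FR
    UL, UR = np.array([hL, huL]), np.array([hR, huR])
    return (SR * FL - SL * FR + SL * SR * (UR - UL)) / (SR - SL)

# Dam-break style interface: deep water on the left, shallow on the right.
print(hll_flux(2.0, 0.0, 1.0, 0.0))
```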
arxiv-667042
2410.05969
Deep neural network-based detection of counterfeit products from smartphone images
<|reference_start|>Deep neural network-based detection of counterfeit products from smartphone images: Counterfeit products such as drugs and vaccines as well as luxury items such as high-fashion handbags, watches, jewelry, garments, and cosmetics, represent significant direct losses of revenue to legitimate manufacturers and vendors, as well as indirect costs to societies at large. We present the world's first purely computer-vision-based system to combat such counterfeiting: one that does not require special security tags or other alterations to the products or modifications to supply chain tracking. Our deep neural network system shows high accuracy on branded garments from our first manufacturer tested (99.71% after 3.06% rejections) using images captured under natural, weakly controlled conditions, such as in retail stores, customs checkpoints, warehouses, and outdoors. Our system, suitably transfer-trained on a small number of fake and genuine articles, should find application in additional product categories as well, for example fashion accessories, perfume boxes, medicines, and more.<|reference_end|>
arxiv
@article{garcia-cotte2024deep, title={Deep neural network-based detection of counterfeit products from smartphone images}, author={Hugo Garcia-Cotte, Dorra Mellouli, Abdul Rehman, Li Wang, David G. Stork}, journal={arXiv preprint arXiv:2410.05969}, year={2024}, archivePrefix={arXiv}, eprint={2410.05969}, primaryClass={cs.CV} }
garcia-cotte2024deep
arxiv-667043
2410.05970
PDF-WuKong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling
<|reference_start|>PDF-WuKong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling: Document understanding is a challenging task to process and comprehend large amounts of textual and visual information. Recent advances in Large Language Models (LLMs) have significantly improved the performance of this task. However, existing methods typically focus on either plain text or a limited number of document images, struggling to handle long PDF documents with interleaved text and images, especially in academic papers. In this paper, we introduce PDF-WuKong, a multimodal large language model (MLLM) which is designed to enhance multimodal question-answering (QA) for long PDF documents. PDF-WuKong incorporates a sparse sampler that operates on both text and image representations, significantly improving the efficiency and capability of the MLLM. The sparse sampler is integrated with the MLLM's image encoder and selects the paragraphs or diagrams most pertinent to user queries for processing by the language model. To effectively train and evaluate our model, we construct PaperPDF, a dataset consisting of a broad collection of academic papers sourced from arXiv; multiple strategies are proposed to automatically generate 1M QA pairs along with their corresponding evidence sources. Experimental results demonstrate the superiority and high efficiency of our approach over other models on the task of long multimodal PDF understanding, surpassing proprietary products by an average of 8.6% on F1. Our code and dataset will be released at https://github.com/yh-hust/PDF-Wukong.<|reference_end|>
arxiv
@article{xie2024pdf-wukong:, title={PDF-WuKong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling}, author={Xudong Xie, Liang Yin, Hao Yan, Yang Liu, Jing Ding, Minghui Liao, Yuliang Liu, Wei Chen, Xiang Bai}, journal={arXiv preprint arXiv:2410.05970}, year={2024}, archivePrefix={arXiv}, eprint={2410.05970}, primaryClass={cs.CV cs.AI cs.CL} }
xie2024pdf-wukong:
arxiv-667044
2410.05973
Komet: A Serverless Platform for Low-Earth Orbit Edge Services
<|reference_start|>Komet: A Serverless Platform for Low-Earth Orbit Edge Services: Low-Earth orbit satellite networks can provide global broadband Internet access using constellations of thousands of satellites. Integrating edge computing resources in such networks can enable global low-latency access to compute services, supporting end users in rural areas, remote industrial applications, or the IoT. To achieve this, resources must be carefully allocated to various services from multiple tenants. Moreover, applications must navigate the dynamic nature of satellite networks, where orbital mechanics necessitate frequent client hand-offs. Therefore, managing applications on the low-Earth orbit edge will require the right platform abstractions. We introduce Komet, a serverless platform for low-Earth orbit edge computing. Komet integrates Function-as-a-Service compute with data replication, enabling on-demand elastic edge resource allocation and frequent service migration against satellite orbital trajectories to keep services deployed in the same geographic region. We implement Komet as a proof-of-concept prototype and demonstrate how its abstractions can be used to build low-Earth orbit edge applications with high availability despite constant mobility. Further, we propose simple heuristics for service migration scheduling in different application scenarios and evaluate them in simulation based on our experiment traces, showing the trade-off between selecting an optimal satellite server at every instance and minimizing service migration frequency.<|reference_end|>
arxiv
@article{pfandzelter2024komet:, title={Komet: A Serverless Platform for Low-Earth Orbit Edge Services}, author={Tobias Pfandzelter and David Bermbach}, journal={arXiv preprint arXiv:2410.05973}, year={2024}, doi={10.1145/3698038.3698517}, archivePrefix={arXiv}, eprint={2410.05973}, primaryClass={cs.DC} }
pfandzelter2024komet:
arxiv-667045
2410.05975
ConML: A Universal Meta-Learning Framework with Task-Level Contrastive Learning
<|reference_start|>ConML: A Universal Meta-Learning Framework with Task-Level Contrastive Learning: Meta-learning enables learning systems to adapt quickly to new tasks, similar to humans. To emulate this human-like rapid learning and enhance alignment and discrimination abilities, we propose ConML, a universal meta-learning framework that can be applied to various meta-learning algorithms without relying on specific model architectures or target models. The core of ConML is task-level contrastive learning, which extends contrastive learning from the representation space in unsupervised learning to the model space in meta-learning. By leveraging task identity as an additional supervision signal during meta-training, we contrast the outputs of the meta-learner in the model space, minimizing inner-task distance (between models trained on different subsets of the same task) and maximizing inter-task distance (between models from different tasks). We demonstrate that ConML integrates seamlessly with optimization-based, metric-based, and amortization-based meta-learning algorithms, as well as in-context learning, resulting in performance improvements across diverse few-shot learning tasks.<|reference_end|>
arxiv
@article{wu2024conml:, title={ConML: A Universal Meta-Learning Framework with Task-Level Contrastive Learning}, author={Shiguang Wu, Yaqing Wang, Yatao Bian, Quanming Yao}, journal={arXiv preprint arXiv:2410.05975}, year={2024}, archivePrefix={arXiv}, eprint={2410.05975}, primaryClass={cs.LG} }
wu2024conml:
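The task-level contrast described above can be written as an InfoNCE-style loss over embeddings of subset-trained models: same-task models are positives, different-task models negatives. The sketch below is a schematic NumPy version; the model-to-embedding map, the temperature, and the example vectors are all illustrative assumptions, not the paper's construction:

```python
import numpy as np

def task_contrastive_loss(embs, task_ids, tau=0.5):
    """InfoNCE-style task-level contrast: for each model embedding,
    embeddings from the same task are positives, all others negatives.
    `embs` is (m, d): one vector per model trained on a task subset.
    Assumes every task contributes at least two models."""
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sim = embs @ embs.T / tau                    # scaled cosine similarities
    task_ids = np.asarray(task_ids)
    loss, m = 0.0, len(task_ids)
    for i in range(m):
        pos = task_ids == task_ids[i]
        pos[i] = False                           # exclude self-similarity
        logits = np.delete(sim[i], i)
        labels = np.delete(pos, i)
        log_den = np.log(np.exp(logits).sum())
        loss += -(logits[labels] - log_den).mean()
    return loss / m

# Two tasks, two subset-trained models each: same-task pairs attract.
embs = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.8]])
print(task_contrastive_loss(embs, [0, 0, 1, 1]))
```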
arxiv-667046
2410.05980
Generalizing to any diverse distribution: uniformity, gentle finetuning and rebalancing
<|reference_start|>Generalizing to any diverse distribution: uniformity, gentle finetuning and rebalancing: As training datasets grow larger, we aspire to develop models that generalize well to any diverse test distribution, even if the latter deviates significantly from the training data. Various approaches like domain adaptation, domain generalization, and robust optimization attempt to address the out-of-distribution challenge by posing assumptions about the relation between training and test distribution. Differently, we adopt a more conservative perspective by accounting for the worst-case error across all sufficiently diverse test distributions within a known domain. Our first finding is that training on a uniform distribution over this domain is optimal. We also interrogate practical remedies when uniform samples are unavailable by considering methods for mitigating non-uniformity through finetuning and rebalancing. Our theory provides a mathematical grounding for previous observations on the role of entropy and rebalancing for o.o.d. generalization and foundation model training. We also provide new empirical evidence across tasks involving o.o.d. shifts which illustrate the broad applicability of our perspective.<|reference_end|>
arxiv
@article{loukas2024generalizing, title={Generalizing to any diverse distribution: uniformity, gentle finetuning and rebalancing}, author={Andreas Loukas, Karolis Martinkus, Ed Wagstaff, Kyunghyun Cho}, journal={arXiv preprint arXiv:2410.05980}, year={2024}, archivePrefix={arXiv}, eprint={2410.05980}, primaryClass={cs.LG} }
loukas2024generalizing
arxiv-667047
2410.05982
DeMo: Decoupling Motion Forecasting into Directional Intentions and Dynamic States
<|reference_start|>DeMo: Decoupling Motion Forecasting into Directional Intentions and Dynamic States: Accurate motion forecasting for traffic agents is crucial for ensuring the safety and efficiency of autonomous driving systems in dynamically changing environments. Mainstream methods adopt a one-query-one-trajectory paradigm, where each query corresponds to a unique trajectory for predicting multi-modal trajectories. While straightforward and effective, the absence of detailed representation of future trajectories may yield suboptimal outcomes, given that the agent states dynamically evolve over time. To address this problem, we introduce DeMo, a framework that decouples multi-modal trajectory queries into two types: mode queries capturing distinct directional intentions and state queries tracking the agent's dynamic states over time. By leveraging this format, we separately optimize the multi-modality and dynamic evolutionary properties of trajectories. Subsequently, the mode and state queries are integrated to obtain a comprehensive and detailed representation of the trajectories. To achieve these operations, we additionally introduce combined Attention and Mamba techniques for global information aggregation and state sequence modeling, leveraging their respective strengths. Extensive experiments on both the Argoverse 2 and nuScenes benchmarks demonstrate that our DeMo achieves state-of-the-art performance in motion forecasting.<|reference_end|>
arxiv
@article{zhang2024demo:, title={DeMo: Decoupling Motion Forecasting into Directional Intentions and Dynamic States}, author={Bozhou Zhang, Nan Song, Li Zhang}, journal={arXiv preprint arXiv:2410.05982}, year={2024}, archivePrefix={arXiv}, eprint={2410.05982}, primaryClass={cs.CV cs.RO} }
zhang2024demo:
arxiv-667048
2410.05983
Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG
<|reference_start|>Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG: Retrieval-augmented generation (RAG) empowers large language models (LLMs) to utilize external knowledge sources. The increasing capacity of LLMs to process longer input sequences opens up avenues for providing more retrieved information, to potentially enhance the quality of generated outputs. It is plausible to assume that a larger retrieval set would contain more relevant information (higher recall), which might result in improved performance. However, our empirical findings demonstrate that for many long-context LLMs, the quality of the generated output first improves but then declines as the number of retrieved passages increases. This paper investigates this phenomenon, identifying the detrimental impact of retrieved "hard negatives" as a key contributor. To mitigate this and enhance the robustness of long-context LLM-based RAG, we propose both training-free and training-based approaches. We first showcase the effectiveness of retrieval reordering as a simple yet powerful training-free optimization. Furthermore, we explore training-based methods, specifically RAG-specific implicit LLM fine-tuning and RAG-oriented fine-tuning with intermediate reasoning, demonstrating their capacity for substantial performance gains. Finally, we conduct a systematic analysis of design choices for these training-based methods, including data distribution, retriever selection, and training context length.<|reference_end|>
arxiv
@article{jin2024long-context, title={Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG}, author={Bowen Jin, Jinsung Yoon, Jiawei Han, Sercan O. Arik}, journal={arXiv preprint arXiv:2410.05983}, year={2024}, archivePrefix={arXiv}, eprint={2410.05983}, primaryClass={cs.CL cs.AI cs.LG} }
jin2024long-context
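The training-free retrieval reordering mentioned here can be sketched in a few lines: since long-context models attend most reliably to the edges of the prompt ("lost in the middle"), top-ranked passages are interleaved toward both ends so likely hard negatives land in the middle. A minimal sketch, assuming passages arrive sorted by retriever score; the paper's exact reordering rule may differ:

```python
def reorder_for_long_context(passages_by_score):
    """Interleave ranked passages so the best ones sit at the edges of
    the prompt: ranks 1, 3, 5, ... fill the front, ranks 2, 4, 6, ...
    fill the back, burying weaker passages in the middle."""
    front, back = [], []
    for rank, passage in enumerate(passages_by_score):
        (front if rank % 2 == 0 else back).append(passage)
    return front + back[::-1]

print(reorder_for_long_context(["p1", "p2", "p3", "p4", "p5"]))
# ['p1', 'p3', 'p5', 'p4', 'p2'] -- the two best passages at both ends
```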
arxiv-667049
2410.05984
Are Minimal Radial Distortion Solvers Necessary for Relative Pose Estimation?
<|reference_start|>Are Minimal Radial Distortion Solvers Necessary for Relative Pose Estimation?: Estimating the relative pose between two cameras is a fundamental step in many applications such as Structure-from-Motion. The common approach to relative pose estimation is to apply a minimal solver inside a RANSAC loop. Highly efficient solvers exist for pinhole cameras. Yet, (nearly) all cameras exhibit radial distortion. Not modeling radial distortion leads to (significantly) worse results. However, minimal radial distortion solvers are significantly more complex than pinhole solvers, both in terms of run-time and implementation efforts. This paper compares radial distortion solvers with a simple-to-implement approach that combines an efficient pinhole solver with sampled radial distortion parameters. Extensive experiments on multiple datasets and RANSAC variants show that this simple approach performs similarly or better than the most accurate minimal distortion solvers at faster run-times while being significantly more accurate than faster non-minimal solvers. We clearly show that complex radial distortion solvers are not necessary in practice. Code and benchmark are available at https://github.com/kocurvik/rd.<|reference_end|>
arxiv
@article{tzamos2024are, title={Are Minimal Radial Distortion Solvers Necessary for Relative Pose Estimation?}, author={Charalambos Tzamos, Viktor Kocur, Yaqing Ding, Torsten Sattler, Zuzana Kukelova}, journal={arXiv preprint arXiv:2410.05984}, year={2024}, archivePrefix={arXiv}, eprint={2410.05984}, primaryClass={cs.CV} }
tzamos2024are
arxiv-667050
2410.05985
Asynchronous Stochastic Gradient Descent with Decoupled Backpropagation and Layer-Wise Updates
<|reference_start|>Asynchronous Stochastic Gradient Descent with Decoupled Backpropagation and Layer-Wise Updates: The increasing size of deep learning models has created the need for more efficient alternatives to the standard error backpropagation algorithm, that make better use of asynchronous, parallel and distributed computing. One major shortcoming of backpropagation is the interlocking between the forward phase of the algorithm, which computes a global loss, and the backward phase where the loss is backpropagated through all layers to compute the gradients, which are used to update the network parameters. To address this problem, we propose a method that parallelises SGD updates across the layers of a model by asynchronously updating them from multiple threads. Furthermore, since we observe that the forward pass is often much faster than the backward pass, we use separate threads for the forward and backward pass calculations, which allows us to use a higher ratio of forward to backward threads than the usual 1:1 ratio, reducing the overall staleness of the parameters. Thus, our approach performs asynchronous stochastic gradient descent using separate threads for the loss (forward) and gradient (backward) computations and performs layer-wise partial updates to parameters in a distributed way. We show that this approach yields close to state-of-the-art results while running up to 2.97x faster than Hogwild! scaled on multiple devices (Locally-Partitioned-Asynchronous-Parallel SGD). We theoretically prove the convergence of the algorithm using a novel theoretical framework based on stochastic differential equations and the drift diffusion process, by modeling the asynchronous parameter updates as a stochastic process.<|reference_end|>
arxiv
@article{fokam2024asynchronous, title={Asynchronous Stochastic Gradient Descent with Decoupled Backpropagation and Layer-Wise Updates}, author={Cabrel Teguemne Fokam, Khaleelulla Khan Nazeer, Lukas K\"onig, David Kappel, Anand Subramoney}, journal={arXiv preprint arXiv:2410.05985}, year={2024}, archivePrefix={arXiv}, eprint={2410.05985}, primaryClass={cs.LG cs.AI cs.NE} }
fokam2024asynchronous
arxiv-667051
2410.05986
The USTC-NERCSLIP Systems for the CHiME-8 MMCSG Challenge
<|reference_start|>The USTC-NERCSLIP Systems for the CHiME-8 MMCSG Challenge: In the two-person conversation scenario with one participant wearing smart glasses, transcribing and displaying the speaker's content in real time is an intriguing application, providing a priori information for subsequent tasks such as translation and comprehension. Meanwhile, multi-modal data captured from the smart glasses is scarce. Therefore, we propose utilizing simulated data with multiple overlap rates and a one-to-one matching training strategy to narrow the gap between real and simulated data during model training. In addition, incorporating IMU data into the model helps the audio modality achieve better real-time speech recognition performance.<|reference_end|>
arxiv
@article{jiang2024the, title={The USTC-NERCSLIP Systems for the CHiME-8 MMCSG Challenge}, author={Ya Jiang, Hongbo Lan, Jun Du, Qing Wang, Shutong Niu}, journal={arXiv preprint arXiv:2410.05986}, year={2024}, archivePrefix={arXiv}, eprint={2410.05986}, primaryClass={eess.AS cs.SD} }
jiang2024the
arxiv-667052
2410.05988
Utilizing Lyapunov Exponents in designing deep neural networks
<|reference_start|>Utilizing Lyapunov Exponents in designing deep neural networks: Training large deep neural networks is resource intensive. This study investigates whether Lyapunov exponents can accelerate this process by aiding in the selection of hyperparameters. To study this, I formulate an optimization problem using neural networks with different activation functions in the hidden layers. By initializing model weights with different random seeds, I calculate the Lyapunov exponent while performing traditional gradient descent on these model weights. The findings demonstrate that variations in the learning rate can induce chaotic changes in model weights. I also show that activation functions with more negative Lyapunov exponents exhibit better convergence properties. Additionally, the study demonstrates that Lyapunov exponents can be utilized to select effective initial model weights for deep neural networks, potentially enhancing the optimization process.<|reference_end|>
arxiv
@article{mittra2024utilizing, title={Utilizing Lyapunov Exponents in designing deep neural networks}, author={Tirthankar Mittra}, journal={arXiv preprint arXiv:2410.05988}, year={2024}, archivePrefix={arXiv}, eprint={2410.05988}, primaryClass={cs.LG cs.AI} }
mittra2024utilizing
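A maximal Lyapunov exponent for a gradient-descent weight trajectory can be estimated Benettin-style: evolve a nearby perturbed copy of the weights, renormalize the gap each step, and average the log expansion factors. The sketch below applies this to plain GD on a toy quadratic; the objective, learning rates, and perturbation size are illustrative, not the study's setup:

```python
import numpy as np

def lyapunov_exponent(grad, w0, lr, steps=100, eps=1e-6, seed=0):
    """Benettin-style estimate for the GD map w -> w - lr * grad(w):
    track a perturbed twin trajectory, renormalize the separation to
    eps each step, and average the log expansion factors."""
    rng = np.random.default_rng(seed)
    w = w0.copy()
    d = w0 + eps * rng.standard_normal(w0.shape)   # perturbed twin
    total = 0.0
    for _ in range(steps):
        w = w - lr * grad(w)
        d = d - lr * grad(d)
        gap = np.linalg.norm(d - w)
        total += np.log(gap / eps)
        d = w + (d - w) * (eps / gap)              # renormalize the gap
    return total / steps

grad = lambda w: 2.0 * w                       # gradient of ||w||^2
w0 = np.ones(3)
print(lyapunov_exponent(grad, w0, lr=0.4))     # ~log 0.2 < 0: stable
print(lyapunov_exponent(grad, w0, lr=1.1))     # ~log 1.2 > 0: divergent
```

On this quadratic the estimate matches the exact value log|1 - 2*lr|, so the sign cleanly separates stable from divergent learning rates.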
arxiv-667053
2410.05991
Vector Grimoire: Codebook-based Shape Generation under Raster Image Supervision
<|reference_start|>Vector Grimoire: Codebook-based Shape Generation under Raster Image Supervision: Scalable Vector Graphics (SVG) is a popular format on the web and in the design industry. However, despite the great strides made in generative modeling, SVG has remained underexplored due to the discrete and complex nature of such data. We introduce GRIMOIRE, a text-guided SVG generative model comprising two modules: a Visual Shape Quantizer (VSQ) learns to map raster images onto a discrete codebook by reconstructing them as vector shapes, and an Auto-Regressive Transformer (ART) models the joint probability distribution over shape tokens, positions and textual descriptions, allowing us to generate vector graphics from natural language. Unlike existing models that require direct supervision from SVG data, GRIMOIRE learns shape image patches using only raster image supervision, which opens up vector generative modeling to significantly more data. We demonstrate the effectiveness of our method by fitting GRIMOIRE for closed filled shapes on MNIST and for outline strokes on icon and font data, surpassing previous image-supervised methods in generative quality and vector-supervised approaches in flexibility.<|reference_end|>
arxiv
@article{feuerpfeil2024vector, title={Vector Grimoire: Codebook-based Shape Generation under Raster Image Supervision}, author={Moritz Feuerpfeil, Marco Cipriano, Gerard de Melo}, journal={arXiv preprint arXiv:2410.05991}, year={2024}, archivePrefix={arXiv}, eprint={2410.05991}, primaryClass={cs.CV cs.AI cs.GR} }
feuerpfeil2024vector
arxiv-667054
2410.05992
Linking Code and Documentation Churn: Preliminary Analysis
<|reference_start|>Linking Code and Documentation Churn: Preliminary Analysis: Code churn refers to the amount of code added, modified, or deleted in a project and is often used to assess codebase stability and maintainability. Program comprehension, or how understandable the changes are, is equally important for maintainability. Documentation is crucial for knowledge transfer, especially when new maintainers take over abandoned code. We emphasize the need for corresponding documentation updates, as this reflects project health and trustworthiness as a third-party library. Therefore, we argue that every code change should prompt a documentation update (defined as documentation churn). Linking code churn changes with documentation updates is important for project sustainability, as it facilitates knowledge transfer and reduces the effort required for program comprehension. This study investigates the synchrony between code churn and documentation updates in three GitHub open-source projects. We will use qualitative analysis and repository mining to examine the alignment and correlation of code churn and documentation updates over time. We want to identify which code changes are likely synchronized with documentation and to what extent documentation can be auto-generated. Preliminary results indicate varying degrees of synchrony across projects, highlighting the importance of integrated concurrent documentation practices and providing insights into how recent technologies like AI, in the form of Large Language Models (i.e., LLMs), could be leveraged to keep code and documentation churn in sync. The novelty of this study lies in demonstrating how synchronizing code changes with documentation updates can improve the development lifecycle by enhancing diversity and efficiency.<|reference_end|>
arxiv
@article{hovhannisyan2024linking, title={Linking Code and Documentation Churn: Preliminary Analysis}, author={Ani Hovhannisyan, Youmei Fan, Gema Rodriguez-Perez, Raula Gaikovina Kula}, journal={arXiv preprint arXiv:2410.05992}, year={2024}, archivePrefix={arXiv}, eprint={2410.05992}, primaryClass={cs.SE} }
hovhannisyan2024linking
arxiv-667055
2410.05993
Aria: An Open Multimodal Native Mixture-of-Experts Model
<|reference_start|>Aria: An Open Multimodal Native Mixture-of-Experts Model: Information comes in diverse modalities. Multimodal native AI models are essential to integrate real-world information and deliver comprehensive understanding. While proprietary multimodal native models exist, their lack of openness imposes obstacles to adoption, let alone adaptation. To fill this gap, we introduce Aria, an open multimodal native model with best-in-class performance across a wide range of multimodal, language, and coding tasks. Aria is a mixture-of-experts model with 3.9B and 3.5B activated parameters per visual token and text token, respectively. It outperforms Pixtral-12B and Llama3.2-11B, and is competitive against the best proprietary models on various multimodal tasks. We pre-train Aria from scratch following a 4-stage pipeline, which progressively equips the model with strong capabilities in language understanding, multimodal understanding, long context window, and instruction following. We open-source the model weights along with a codebase that facilitates easy adoption and adaptation of Aria in real-world applications.<|reference_end|>
arxiv
@article{li2024aria:, title={Aria: An Open Multimodal Native Mixture-of-Experts Model}, author={Dongxu Li, Yudong Liu, Haoning Wu, Yue Wang, Zhiqi Shen, Bowen Qu, Xinyao Niu, Guoyin Wang, Bei Chen, Junnan Li}, journal={arXiv preprint arXiv:2410.05993}, year={2024}, archivePrefix={arXiv}, eprint={2410.05993}, primaryClass={cs.CV} }
li2024aria:
arxiv-667056
2410.05996
AIVIO: Closed-loop, Object-relative Navigation of UAVs with AI-aided Visual Inertial Odometry
<|reference_start|>AIVIO: Closed-loop, Object-relative Navigation of UAVs with AI-aided Visual Inertial Odometry: Object-relative mobile robot navigation is essential for a variety of tasks, e.g. autonomous critical infrastructure inspection, but requires the capability to extract semantic information about the objects of interest from raw sensory data. While deep learning-based (DL) methods excel at inferring semantic object information from images, such as class and relative 6 degree of freedom (6-DoF) pose, they are computationally demanding and thus often not suitable for payload constrained mobile robots. In this letter we present a real-time capable unmanned aerial vehicle (UAV) system for object-relative, closed-loop navigation with a minimal sensor configuration consisting of an inertial measurement unit (IMU) and RGB camera. Utilizing a DL-based object pose estimator, solely trained on synthetic data and optimized for companion board deployment, the object-relative pose measurements are fused with the IMU data to perform object-relative localization. We conduct multiple real-world experiments to validate the performance of our system for the challenging use case of power pole inspection. An example closed-loop flight is presented in the supplementary video.<|reference_end|>
arxiv
@article{jantos2024aivio:, title={AIVIO: Closed-loop, Object-relative Navigation of UAVs with AI-aided Visual Inertial Odometry}, author={Thomas Jantos, Martin Scheiber, Christian Brommer, Eren Allak, Stephan Weiss and Jan Steinbrener}, journal={arXiv preprint arXiv:2410.05996}, year={2024}, doi={10.1109/LRA.2024.3479713}, archivePrefix={arXiv}, eprint={2410.05996}, primaryClass={cs.RO} }
jantos2024aivio:
arxiv-667057
2410.05997
An Eye for an Ear: Zero-shot Audio Description Leveraging an Image Captioner using Audiovisual Distribution Alignment
<|reference_start|>An Eye for an Ear: Zero-shot Audio Description Leveraging an Image Captioner using Audiovisual Distribution Alignment: Multimodal large language models have fueled progress in image captioning. These models, fine-tuned on vast image datasets, exhibit a deep understanding of semantic concepts. In this work, we show that this ability can be re-purposed for audio captioning, where the joint image-language decoder can be leveraged to describe auditory content associated with image sequences within videos featuring audiovisual content. This can be achieved via multimodal alignment. Yet, this multimodal alignment task is non-trivial due to the inherent disparity between audible and visible elements in real-world videos. Moreover, multimodal representation learning often relies on contrastive learning, facing the challenge of the so-called modality gap, which hinders smooth integration between modalities. In this work, we introduce a novel methodology for bridging the audiovisual modality gap by matching the distributions of tokens produced by an audio backbone and those of an image captioner. Our approach aligns the audio token distribution with that of the image tokens, enabling the model to perform zero-shot audio captioning in an unsupervised fashion while keeping the initial image captioning component unaltered. This alignment allows for the use of either audio or audiovisual input by combining or substituting the image encoder with the aligned audio encoder. Our method achieves significantly improved performance in zero-shot audio captioning compared to existing approaches.<|reference_end|>
arxiv
@article{malard2024an, title={An Eye for an Ear: Zero-shot Audio Description Leveraging an Image Captioner using Audiovisual Distribution Alignment}, author={Hugo Malard, Michel Olvera, St\'ephane Lathuiliere, Slim Essid}, journal={arXiv preprint arXiv:2410.05997}, year={2024}, archivePrefix={arXiv}, eprint={2410.05997}, primaryClass={eess.AS cs.CV cs.LG cs.SD} }
malard2024an
arxiv-667058
2410.06001
TapType: Ten-finger text entry on everyday surfaces via Bayesian inference
<|reference_start|>TapType: Ten-finger text entry on everyday surfaces via Bayesian inference: Despite the advent of touchscreens, typing on physical keyboards remains most efficient for entering text, because users can leverage all fingers across a full-size keyboard for convenient typing. As users increasingly type on the go, text input on mobile and wearable devices has had to compromise on full-size typing. In this paper, we present TapType, a mobile text entry system for full-size typing on passive surfaces--without an actual keyboard. From the inertial sensors inside a band on either wrist, TapType decodes and relates surface taps to a traditional QWERTY keyboard layout. The key novelty of our method is to predict the most likely character sequences by fusing the finger probabilities from our Bayesian neural network classifier with the characters' prior probabilities from an n-gram language model. In our online evaluation, participants on average typed 19 words per minute with a character error rate of 0.6% after 30 minutes of training. Expert typists thereby consistently achieved more than 25 WPM at a similar error rate. We demonstrate applications of TapType in mobile use around smartphones and tablets, as a complement to interaction in situated Mixed Reality outside visual control, and as an eyes-free mobile text input method using an audio feedback-only interface.<|reference_end|>
arxiv
@article{streli2024taptype:, title={TapType: Ten-finger text entry on everyday surfaces via Bayesian inference}, author={Paul Streli, Jiaxi Jiang, Andreas Fender, Manuel Meier, Hugo Romat, Christian Holz}, journal={arXiv preprint arXiv:2410.06001}, year={2024}, doi={10.1145/3491102.3501878}, archivePrefix={arXiv}, eprint={2410.06001}, primaryClass={cs.HC cs.CV} }
streli2024taptype:
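The Bayesian fusion at the core of the decoder, combining the classifier's finger probabilities with a language-model prior, reduces to a per-tap product of probabilities. The sketch below scores candidate characters for a single tap; the finger-to-key map, classifier outputs, and bigram prior are all made-up illustrative values, not TapType's trained components:

```python
# Hypothetical mapping from fingers to the keys they usually strike.
FINGER_KEYS = {
    "l_index": ["f", "g", "r", "t", "v", "b"],
    "l_middle": ["d", "e", "c"],
    "r_index": ["j", "h", "u", "y", "n", "m"],
}

def fuse_tap(finger_probs, char_prior):
    """Posterior over characters for one tap:
    p(char | tap) ~ p(finger | tap) * p(char | finger) * p(char | history).
    Assumes a uniform p(char | finger) over that finger's keys."""
    scores = {}
    for finger, pf in finger_probs.items():
        keys = FINGER_KEYS[finger]
        for ch in keys:
            scores[ch] = scores.get(ch, 0.0) + pf / len(keys) * char_prior(ch)
    z = sum(scores.values())
    return {ch: s / z for ch, s in scores.items()}

# Classifier output for this tap, and a toy bigram prior after typing "th".
finger_probs = {"l_index": 0.2, "l_middle": 0.7, "r_index": 0.1}
bigram = lambda ch: {"e": 0.5}.get(ch, 0.05)   # "the" is likely
posterior = fuse_tap(finger_probs, bigram)
print(max(posterior, key=posterior.get))        # -> 'e'
```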
arxiv-667059
2410.06003
Is the MMI Criterion Necessary for Interpretability? Degenerating Non-causal Features to Plain Noise for Self-Rationalization
<|reference_start|>Is the MMI Criterion Necessary for Interpretability? Degenerating Non-causal Features to Plain Noise for Self-Rationalization: An important line of research in the field of explainability is to extract a small subset of crucial rationales from the full input. The most widely used criterion for rationale extraction is the maximum mutual information (MMI) criterion. However, in certain datasets, there are spurious features that are non-causally correlated with the label yet still attain high mutual information, complicating the loss landscape of MMI. Although some penalty-based methods have been developed to penalize the spurious features (e.g., invariance penalty, intervention penalty) to help MMI work better, these are merely remedial measures. In the optimization objectives of these methods, spurious features are still distinguished from plain noise, which hinders the discovery of causal rationales. This paper aims to develop a new criterion that treats spurious features as plain noise, allowing the model to work on datasets rich in spurious features as if it were working on clean datasets, thereby making rationale extraction easier. We theoretically observe that removing either plain noise or spurious features from the input does not alter the conditional distribution of the remaining components relative to the task label. However, significant changes in the conditional distribution occur only when causal features are eliminated. Based on this discovery, the paper proposes a criterion for \textbf{M}aximizing the \textbf{R}emaining \textbf{D}iscrepancy (MRD). Experiments on six widely used datasets show that our MRD criterion improves rationale quality (measured by the overlap with human-annotated rationales) by up to $10.4\%$ as compared to several recent competitive MMI variants. Code: \url{https://github.com/jugechengzi/Rationalization-MRD}.<|reference_end|>
arxiv
@article{liu2024is, title={Is the MMI Criterion Necessary for Interpretability? Degenerating Non-causal Features to Plain Noise for Self-Rationalization}, author={Wei Liu, Zhiying Deng, Zhongyu Niu, Jun Wang, Haozhao Wang, YuanKai Zhang, Ruixuan Li}, journal={arXiv preprint arXiv:2410.06003}, year={2024}, archivePrefix={arXiv}, eprint={2410.06003}, primaryClass={cs.LG} }
liu2024is
arxiv-667060
2410.06006
Finite Element Approximations of Stochastic Linear Schr\"odinger equation driven by additive Wiener noise
<|reference_start|>Finite Element Approximations of Stochastic Linear Schr\"odinger equation driven by additive Wiener noise: In this article, we have analyzed semi-discrete finite element approximations of the stochastic linear Schr\"{o}dinger equation in a bounded convex polygonal domain driven by additive Wiener noise. We use the finite element method for spatial discretization and derive an error estimate with respect to the discretization parameter of the finite element approximation. Numerical experiments have also been performed to support the theoretical bounds.<|reference_end|>
arxiv
@article{bhar2024finite, title={Finite Element Approximations of Stochastic Linear Schr\"{o}dinger equation driven by additive Wiener noise}, author={Suprio Bhar, Mrinmay Biswas and Mangala Prasad}, journal={arXiv preprint arXiv:2410.06006}, year={2024}, archivePrefix={arXiv}, eprint={2410.06006}, primaryClass={math.NA cs.NA math-ph math.AP math.MP math.PR} }
bhar2024finite
arxiv-667061
2410.06007
Motion Forecasting in Continuous Driving
<|reference_start|>Motion Forecasting in Continuous Driving: Motion forecasting for agents in autonomous driving is highly challenging due to the numerous possibilities for each agent's next action and their complex interactions in space and time. In real applications, motion forecasting takes place repeatedly and continuously as the self-driving car moves. However, existing forecasting methods typically process each driving scene within a certain range independently, totally ignoring the situational and contextual relationships between successive driving scenes. This significantly simplifies the forecasting task, making the solutions suboptimal and inefficient to use in practice. To address this fundamental limitation, we propose a novel motion forecasting framework for continuous driving, named RealMotion. It comprises two integral streams both at the scene level: (1) The scene context stream progressively accumulates historical scene information until the present moment, capturing temporal interactive relationships among scene elements. (2) The agent trajectory stream optimizes current forecasting by sequentially relaying past predictions. Besides, a data reorganization strategy is introduced to narrow the gap between existing benchmarks and real-world applications, consistent with our network. These approaches enable exploiting more broadly the situational and progressive insights of dynamic motion across space and time. Extensive experiments on Argoverse series with different settings demonstrate that our RealMotion achieves state-of-the-art performance, along with the advantage of efficient real-world inference. The source code will be available at https://github.com/fudan-zvg/RealMotion.<|reference_end|>
arxiv
@article{song2024motion, title={Motion Forecasting in Continuous Driving}, author={Nan Song, Bozhou Zhang, Xiatian Zhu and Li Zhang}, journal={arXiv preprint arXiv:2410.06007}, year={2024}, archivePrefix={arXiv}, eprint={2410.06007}, primaryClass={cs.CV} }
song2024motion
arxiv-667062
2410.06008
Sitting, Standing and Walking Control of the Series-Parallel Hybrid Recupera-Reha Exoskeleton
<|reference_start|>Sitting, Standing and Walking Control of the Series-Parallel Hybrid Recupera-Reha Exoskeleton: This paper presents advancements in the functionalities of the Recupera-Reha lower extremity exoskeleton robot. The exoskeleton features a series-parallel hybrid design characterized by multiple kinematic loops resulting in 148 degrees of freedom in its spanning tree and 102 independent loop closure constraints, which poses significant challenges for modeling and control. To address these challenges, we applied an optimal control approach to generate feasible trajectories such as sitting, standing, and static walking, and tested these trajectories on the exoskeleton robot. Our method efficiently solves the optimal control problem using a serial abstraction of the model to generate trajectories. It then utilizes the full series-parallel hybrid model, which takes all the kinematic loop constraints into account to generate the final actuator commands. The experimental results demonstrate the effectiveness of our approach in generating the desired motions for the exoskeleton.<|reference_end|>
arxiv
@article{tijjani2024sitting, title={Sitting, Standing and Walking Control of the Series-Parallel Hybrid Recupera-Reha Exoskeleton}, author={Ibrahim Tijjani, Rohit Kumar, Melya Boukheddimi, Mathias Trampler, Shivesh Kumar and Frank Kirchner}, journal={arXiv preprint arXiv:2410.06008}, year={2024}, archivePrefix={arXiv}, eprint={2410.06008}, primaryClass={cs.RO} }
tijjani2024sitting
arxiv-667063
2410.06010
A large collection of bioinformatics question-query pairs over federated knowledge graphs: methodology and applications
<|reference_start|>A large collection of bioinformatics question-query pairs over federated knowledge graphs: methodology and applications: Background. In the last decades, several life science resources have structured data using the same framework and made these accessible using the same query language to facilitate interoperability. Knowledge graphs have seen increased adoption in bioinformatics due to their advantages for representing data in a generic graph format. For example, yummydata.org catalogs more than 60 knowledge graphs accessible through SPARQL, a technical query language. Although SPARQL allows powerful, expressive queries, even across physically distributed knowledge graphs, formulating such queries is a challenge for most users. Therefore, to guide users in retrieving the relevant data, many of these resources provide representative examples. These examples can also be an important source of information for machine learning, if a sufficiently large number of examples are provided and published in a common, machine-readable and standardized format across different resources. Findings. We introduce a large collection of human-written natural language questions and their corresponding SPARQL queries over federated bioinformatics knowledge graphs (KGs) collected for several years across different research groups at the SIB Swiss Institute of Bioinformatics. The collection comprises more than 1000 example questions and queries, including 65 federated queries. We propose a methodology to uniformly represent the examples with minimal metadata, based on existing standards. Furthermore, we introduce an extensive set of open-source applications, including query graph visualizations and smart query editors, easily reusable by KG maintainers who adopt the proposed methodology. Conclusions. We encourage the community to adopt and extend the proposed methodology, towards richer KG metadata and improved Semantic Web services.<|reference_end|>
arxiv
@article{bolleman2024a, title={A large collection of bioinformatics question-query pairs over federated knowledge graphs: methodology and applications}, author={Jerven Bolleman, Vincent Emonet, Adrian Altenhoff, Amos Bairoch, Marie-Claude Blatter, Alan Bridge, Severine Duvaud, Elisabeth Gasteiger, Dmitry Kuznetsov, Sebastien Moretti, Pierre-Andre Michel, Anne Morgat, Marco Pagni, Nicole Redaschi, Monique Zahn-Zabal, Tarcisio Mendes de Farias, and Ana Claudia Sima}, journal={arXiv preprint arXiv:2410.06010}, year={2024}, archivePrefix={arXiv}, eprint={2410.06010}, primaryClass={cs.DB cs.AI cs.IR} }
bolleman2024a
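A minimal sketch of consuming such a question-query pair in Python, assuming the public UniProt SPARQL endpoint and the SPARQLWrapper library; the question and query below are illustrative stand-ins of our own, not entries from the collection itself:

```python
# Hypothetical question-query pair in the spirit of the collection above.
from SPARQLWrapper import SPARQLWrapper, JSON

question = "How many reviewed (Swiss-Prot) protein entries are there?"
query = """
PREFIX up: <http://purl.uniprot.org/core/>
SELECT (COUNT(?protein) AS ?n)
WHERE { ?protein a up:Protein ; up:reviewed true . }
"""

endpoint = SPARQLWrapper("https://sparql.uniprot.org/sparql")
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
result = endpoint.query().convert()
print(question, "->", result["results"]["bindings"][0]["n"]["value"])
```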
arxiv-667064
2410.06011
Large Language Model Enhanced Text-to-SQL Generation: A Survey
<|reference_start|>Large Language Model Enhanced Text-to-SQL Generation: A Survey: Text-to-SQL translates natural language queries into Structured Query Language (SQL) commands, enabling users to interact with databases using natural language. Essentially, text-to-SQL is a text generation task, so its development closely tracks advances in language models. With the rapid development of Large Language Models (LLMs) in particular, the landscape of text-to-SQL has changed significantly. Existing survey work mainly covers rule-based and neural approaches, but a survey of text-to-SQL with LLMs is still lacking. In this paper, we survey LLM-enhanced text-to-SQL generation, classifying approaches into prompt-engineering, fine-tuning, pre-training, and agent groups according to their training strategies. We also comprehensively summarize datasets and evaluation metrics. This survey should help readers better understand the patterns, research status, and challenges of LLM-based text-to-SQL generation.<|reference_end|>
arxiv
@article{zhu2024large, title={Large Language Model Enhanced Text-to-SQL Generation: A Survey}, author={Xiaohu Zhu, Qian Li, Lizhen Cui, Yongkang Liu}, journal={arXiv preprint arXiv:2410.06011}, year={2024}, archivePrefix={arXiv}, eprint={2410.06011}, primaryClass={cs.DB} }
zhu2024large
arxiv-667065
2410.06012
Generalized Sparse Additive Model with Unknown Link Function
<|reference_start|>Generalized Sparse Additive Model with Unknown Link Function: Generalized additive models (GAMs) have been successfully applied to high-dimensional data analysis. However, most existing methods cannot simultaneously estimate the link function, the component functions, and the variable interactions. To alleviate this problem, we propose a new sparse additive model, named the generalized sparse additive model with unknown link function (GSAMUL), in which the component functions are estimated with a B-spline basis and the unknown link function is estimated by a multi-layer perceptron (MLP) network. Furthermore, an $\ell_{2,1}$-norm regularizer is used for variable selection. The proposed GSAMUL can realize both variable selection and hidden interaction. We integrate this estimation into a bilevel optimization problem, where the data is split into a training set and a validation set. In theory, we provide guarantees on the convergence of the approximation procedure. In applications, experimental evaluations on both synthetic and real-world data sets consistently validate the effectiveness of the proposed approach.<|reference_end|>
arxiv
@article{yuan2024generalized, title={Generalized Sparse Additive Model with Unknown Link Function}, author={Peipei Yuan, Xinge You, Hong Chen, Xuelin Zhang, Qinmu Peng}, journal={arXiv preprint arXiv:2410.06012}, year={2024}, archivePrefix={arXiv}, eprint={2410.06012}, primaryClass={stat.ML cs.LG} }
yuan2024generalized
arxiv-667066
2410.06013
Characterization of input-to-output stability for infinite dimensional systems
<|reference_start|>Characterization of input-to-output stability for infinite dimensional systems: We prove a superposition theorem for input-to-output stability (IOS) of a broad class of nonlinear infinite-dimensional systems with outputs, including both continuous-time and discrete-time systems. It contains, as a special case, the superposition theorem for input-to-state stability (ISS) of infinite-dimensional systems from [1] and the IOS superposition theorem for systems of ordinary differential equations from [2]. To achieve this result, we introduce and examine several novel stability and attractivity concepts for infinite-dimensional systems with outputs: we prove criteria for the uniform limit property for systems with outputs, several of which are new even for systems with full-state output; we provide superposition theorems for systems which satisfy both the output-Lagrange stability property (OL) and IOS; we give a sufficient condition for OL; and we characterize ISS in terms of IOS and input/output-to-state stability. Finally, by means of counterexamples, we illustrate the challenges that appear when extending the superposition theorems from [1] and [2] to infinite-dimensional systems with outputs.<|reference_end|>
arxiv
@article{bachmann2024characterization, title={Characterization of input-to-output stability for infinite dimensional systems}, author={Patrick Bachmann, Sergey Dashkovskiy, Andrii Mironchenko}, journal={arXiv preprint arXiv:2410.06013}, year={2024}, archivePrefix={arXiv}, eprint={2410.06013}, primaryClass={math.OC cs.SY eess.SY math.DS} }
bachmann2024characterization
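For orientation, the finite-dimensional prototype of the IOS property reads as follows; the paper's infinite-dimensional setting refines this with additional structural assumptions, and the comparison-function classes are the usual ones:

```latex
% Input-to-output stability (IOS): there exist \beta \in \mathcal{KL} and
% \gamma \in \mathcal{K} such that, for all initial states x_0 and inputs u,
\| y(t; x_0, u) \| \;\le\; \beta(\| x_0 \|, t) + \gamma(\| u \|_\infty),
\qquad t \ge 0.
```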
arxiv-667067
2410.06014
SplaTraj: Camera Trajectory Generation with Semantic Gaussian Splatting
<|reference_start|>SplaTraj: Camera Trajectory Generation with Semantic Gaussian Splatting: Many recent developments for robots to represent environments have focused on photorealistic reconstructions. This paper focuses particularly on generating sequences of images from photorealistic Gaussian Splatting models that match instructions given by user-inputted language. We contribute a novel framework, SplaTraj, which formulates the generation of images within photorealistic environment representations as a continuous-time trajectory optimization problem. Costs are designed so that a camera following the trajectory's poses will smoothly traverse the environment and render the specified spatial information in a photogenic manner. This is achieved by querying a photorealistic representation with a language embedding to isolate regions that correspond to the user-specified inputs. These regions are then projected into the camera's view as it moves over time, and a cost is constructed. We can then apply gradient-based optimization and differentiate through the rendering to optimize the trajectory for the defined cost. The resulting trajectory moves to photogenically view each of the specified objects. We empirically evaluate our approach on a suite of environments and instructions, and demonstrate the quality of generated image sequences.<|reference_end|>
arxiv
@article{liu2024splatraj:, title={SplaTraj: Camera Trajectory Generation with Semantic Gaussian Splatting}, author={Xinyi Liu, Tianyi Zhang, Matthew Johnson-Roberson, Weiming Zhi}, journal={arXiv preprint arXiv:2410.06014}, year={2024}, archivePrefix={arXiv}, eprint={2410.06014}, primaryClass={cs.RO cs.AI cs.CV cs.LG} }
liu2024splatraj:
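The optimization loop the SplaTraj abstract describes can be sketched as follows; `render` and `semantic_cost` are hypothetical placeholders for the differentiable Gaussian Splatting renderer and the language-conditioned cost, and the smoothness weight is our own choice:

```python
import torch

def optimize_trajectory(poses, render, semantic_cost, steps=200, lr=1e-2):
    """Differentiate through rendering to refine a camera trajectory.
    poses: (T, D) tensor of camera poses along the trajectory."""
    poses = poses.clone().requires_grad_(True)
    opt = torch.optim.Adam([poses], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        images = render(poses)  # assumed differentiable renderer
        smooth = (poses[1:] - poses[:-1]).pow(2).sum()  # penalize jerky motion
        loss = semantic_cost(images) + 1e-2 * smooth
        loss.backward()
        opt.step()
    return poses.detach()
```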
arxiv-667068
2410.06016
Variable Bitrate Residual Vector Quantization for Audio Coding
<|reference_start|>Variable Bitrate Residual Vector Quantization for Audio Coding: Recent state-of-the-art neural audio compression models have progressively adopted residual vector quantization (RVQ). Despite this success, these models employ a fixed number of codebooks per frame, which can be suboptimal in terms of rate-distortion tradeoff, particularly in scenarios with simple input audio, such as silence. To address this limitation, we propose variable bitrate RVQ (VRVQ) for audio codecs, which allows for more efficient coding by adapting the number of codebooks used per frame. Furthermore, we propose a gradient estimation method for the non-differentiable masking operation that transforms the importance map into the binary importance mask, improving model training via a straight-through estimator. We demonstrate that the proposed training framework achieves superior results compared to the baseline method and shows further improvement when applied to the current state-of-the-art codec.<|reference_end|>
arxiv
@article{chae2024vrvq:, title={VRVQ: Variable Bitrate Residual Vector Quantization for Audio Compression}, author={Yunkee Chae, Woosung Choi, Yuhta Takida, Junghyun Koo, Yukara Ikemiya, Zhi Zhong, Kin Wai Cheuk, Marco A. Mart\'inez-Ram\'irez, Kyogu Lee, Wei-Hsiang Liao, Yuki Mitsufuji}, journal={arXiv preprint arXiv:2410.06016}, year={2024}, archivePrefix={arXiv}, eprint={2410.06016}, primaryClass={cs.SD cs.LG eess.AS} }
chae2024vrvq:
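The non-differentiable step the VRVQ abstract mentions, thresholding an importance map into a binary mask, is classically handled with a straight-through estimator; the paper proposes an improved gradient estimator, but a minimal textbook STE (function name and threshold ours) looks like this:

```python
import torch

def binarize_ste(importance_map: torch.Tensor, threshold: float = 0.5):
    """Hard-threshold the importance map into a binary mask while letting
    gradients pass through unchanged (straight-through estimator)."""
    hard = (importance_map > threshold).float()
    # Forward value equals `hard`; backward gradient flows as identity
    # through `importance_map` because the detached terms cancel.
    return hard + importance_map - importance_map.detach()
```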
arxiv-667069
2410.06017
Evacuation patterns and socioeconomic stratification in the context of wildfires in Chile
<|reference_start|>Evacuation patterns and socioeconomic stratification in the context of wildfires in Chile: Climate change is altering the frequency and intensity of wildfires, leading to increased evacuation events that disrupt human mobility and socioeconomic structures. These disruptions affect access to resources, employment, and housing, amplifying existing vulnerabilities within communities. Understanding the interplay between climate change, wildfires, evacuation patterns, and socioeconomic factors is crucial for developing effective mitigation and adaptation strategies. To contribute to this challenge, we use high-definition mobile phone records to analyse evacuation patterns during the wildfires in Valpara\'iso, Chile, that took place between February 2-3, 2024. This data allows us to track the movements of individuals in the disaster area, providing insight into how people respond to large-scale evacuations in the context of severe wildfires. We apply a causal inference approach that combines regression discontinuity and difference-in-differences methodologies to observe evacuation behaviours during wildfires, with a focus on socioeconomic stratification. This approach allows us to isolate the impact of the wildfires on different socioeconomic groups by comparing the evacuation patterns of affected populations before and after the event, while accounting for underlying trends and discontinuities at the threshold of the disaster. We find that many people spent nights away from home, with those in the lowest socioeconomic segment staying away the longest. In general, people reduced their travel distance during the evacuation, and the lowest socioeconomic group moved the least. Initially, movements became more random, as people sought refuge in a rush, but eventually gravitated towards areas with similar socioeconomic status. Our results show that socioeconomic differences play a role in evacuation dynamics, providing useful insights for response planning.<|reference_end|>
arxiv
@article{naushirvanov2024evacuation, title={Evacuation patterns and socioeconomic stratification in the context of wildfires in Chile}, author={Timur Naushirvanov, Erick Elejalde, Kyriaki Kalimeri, Elisa Omodei, M\'arton Karsai, Leo Ferres}, journal={arXiv preprint arXiv:2410.06017}, year={2024}, archivePrefix={arXiv}, eprint={2410.06017}, primaryClass={physics.soc-ph cs.CY} }
naushirvanov2024evacuation
arxiv-667070
2410.06019
Unveiling Transformer Perception by Exploring Input Manifolds
<|reference_start|>Unveiling Transformer Perception by Exploring Input Manifolds: This paper introduces a general method for the exploration of equivalence classes in the input space of Transformer models. The proposed approach is based on sound mathematical theory which describes the internal layers of a Transformer architecture as sequential deformations of the input manifold. Using eigendecomposition of the pullback of the distance metric defined on the output space through the Jacobian of the model, we are able to reconstruct equivalence classes in the input space and navigate across them. We illustrate how this method can be used as a powerful tool for investigating how a Transformer sees the input space, facilitating local and task-agnostic explainability in Computer Vision and Natural Language Processing tasks.<|reference_end|>
arxiv
@article{benfenati2024unveiling, title={Unveiling Transformer Perception by Exploring Input Manifolds}, author={Alessandro Benfenati and Alfio Ferrara and Alessio Marta and Davide Riva and Elisabetta Rocchetti}, journal={arXiv preprint arXiv:2410.06019}, year={2024}, archivePrefix={arXiv}, eprint={2410.06019}, primaryClass={cs.LG cs.AI cs.CL} }
benfenati2024unveiling
arxiv-667071
2410.06020
QT-DoG: Quantization-aware Training for Domain Generalization
<|reference_start|>QT-DoG: Quantization-aware Training for Domain Generalization: Domain Generalization (DG) aims to train models that perform well not only on the training (source) domains but also on novel, unseen target data distributions. A key challenge in DG is preventing overfitting to source domains, which can be mitigated by finding flatter minima in the loss landscape. In this work, we propose Quantization-aware Training for Domain Generalization (QT-DoG) and demonstrate that weight quantization effectively leads to flatter minima in the loss landscape, thereby enhancing domain generalization. Unlike traditional quantization methods focused on model compression, QT-DoG exploits quantization as an implicit regularizer by inducing noise in model weights, guiding the optimization process toward flatter minima that are less sensitive to perturbations and overfitting. We provide both theoretical insights and empirical evidence demonstrating that quantization inherently encourages flatter minima, leading to better generalization across domains. Moreover, with the benefit of reducing the model size through quantization, we demonstrate that an ensemble of multiple quantized models yields even higher accuracy than state-of-the-art DG approaches, with no computational or memory overhead. Our extensive experiments demonstrate that QT-DoG generalizes across various datasets, architectures, and quantization algorithms, and can be combined with other DG methods, establishing its versatility and robustness.<|reference_end|>
arxiv
@article{javed2024qt-dog:, title={QT-DoG: Quantization-aware Training for Domain Generalization}, author={Saqib Javed, Hieu Le, Mathieu Salzmann}, journal={arXiv preprint arXiv:2410.06020}, year={2024}, archivePrefix={arXiv}, eprint={2410.06020}, primaryClass={cs.LG cs.AI cs.CV cs.RO} }
javed2024qt-dog:
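Quantization-aware training hinges on fake quantization with a straight-through gradient. The sketch below shows this standard building block, a generic uniform symmetric quantizer rather than QT-DoG's specific configuration:

```python
import torch

def fake_quantize(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Uniform symmetric fake quantization with a straight-through gradient.
    The rounding noise injected here is what QT-DoG exploits as an implicit
    regularizer toward flat minima."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    # Forward: quantized weights. Backward: identity gradient w.r.t. w.
    return w + (w_q - w).detach()
```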
arxiv-667072
2410.06021
Efficient Solution of State-Constrained Distributed Parabolic Optimal Control Problems
<|reference_start|>Efficient Solution of State-Constrained Distributed Parabolic Optimal Control Problems: We consider a space-time finite element method for the numerical solution of a distributed tracking-type optimal control problem subject to the heat equation with state constraints. The cost or regularization term is formulated in an anisotropic Sobolev norm for the state, and the optimal state is then characterized as the unique solution of a first kind variational inequality. We discuss an efficient realization of the anisotropic Sobolev norm in the case of a space-time tensor-product finite element mesh, and the iterative solution of the resulting discrete variational inequality by means of a semi-smooth Newton method, i.e., using an active set strategy.<|reference_end|>
arxiv
@article{löscher2024efficient, title={Efficient Solution of State-Constrained Distributed Parabolic Optimal Control Problems}, author={Richard L\"oscher, Michael Reichelt, Olaf Steinbach}, journal={arXiv preprint arXiv:2410.06021}, year={2024}, archivePrefix={arXiv}, eprint={2410.06021}, primaryClass={math.NA cs.NA math.OC} }
löscher2024efficient
arxiv-667073
2410.06022
Can Language Models Induce Grammatical Knowledge from Indirect Evidence?
<|reference_start|>Can Language Models Induce Grammatical Knowledge from Indirect Evidence?: What kinds of data, and how much of it, are necessary for language models to induce grammatical knowledge to judge sentence acceptability? Recent language models still have much room for improvement in their data efficiency compared to humans. This paper investigates whether language models efficiently use indirect data (indirect evidence), from which they infer sentence acceptability. In contrast, humans use indirect evidence efficiently, which is considered one of the inductive biases contributing to efficient language acquisition. To explore this question, we introduce the Wug InDirect Evidence Test (WIDET), a dataset consisting of training instances inserted into the pre-training data and evaluation instances. We inject synthetic instances with newly coined wug words into pretraining data and explore the model's behavior on evaluation data that assesses grammatical acceptability regarding those words. We prepare the injected instances by varying their levels of indirectness and quantity. Our experiments surprisingly show that, for certain linguistic phenomena, language models do not induce grammatical knowledge even after repeated exposure to instances that share the structure of the evaluation instances and differ from them only in their lexical items. Our findings suggest a potential direction for future research: developing models that use latent indirect evidence to induce grammatical knowledge.<|reference_end|>
arxiv
@article{oba2024can, title={Can Language Models Induce Grammatical Knowledge from Indirect Evidence?}, author={Miyu Oba, Yohei Oseki, Akiyo Fukatsu, Akari Haga, Hiroki Ouchi, Taro Watanabe, Saku Sugawara}, journal={arXiv preprint arXiv:2410.06022}, year={2024}, archivePrefix={arXiv}, eprint={2410.06022}, primaryClass={cs.CL} }
oba2024can
arxiv-667074
2410.06024
Jet Expansions of Residual Computation
<|reference_start|>Jet Expansions of Residual Computation: We introduce a framework for expanding residual computational graphs using jets, operators that generalize truncated Taylor series. Our method provides a systematic approach to disentangle contributions of different computational paths to model predictions. In contrast to existing techniques such as distillation, probing, or early decoding, our expansions rely solely on the model itself and require no data, training, or sampling from the model. We demonstrate how our framework grounds and subsumes logit lens, reveals a (super-)exponential path structure in the recursive residual depth and opens up several applications. These include sketching a transformer large language model with $n$-gram statistics extracted from its computations, and indexing the models' levels of toxicity knowledge. Our approach enables data-free analysis of residual computation for model interpretability, development, and evaluation.<|reference_end|>
arxiv
@article{chen2024jet, title={Jet Expansions of Residual Computation}, author={Yihong Chen, Xiangxiang Xu, Yao Lu, Pontus Stenetorp, Luca Franceschi}, journal={arXiv preprint arXiv:2410.06024}, year={2024}, archivePrefix={arXiv}, eprint={2410.06024}, primaryClass={cs.LG cs.AI cs.CL cs.SC} }
chen2024jet
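One of the techniques the framework is said to ground and subsume, the logit lens, can be sketched in a few lines; `hidden_states`, `final_norm`, and `unembed` are assumed handles into a residual-stream model rather than a specific library API:

```python
import torch

@torch.no_grad()
def logit_lens(hidden_states, final_norm, unembed):
    """Project intermediate residual-stream states through the model's final
    norm and unembedding matrix to read off per-layer token predictions."""
    return [unembed(final_norm(h)) for h in hidden_states]
```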
arxiv-667075
2410.06025
Sparse Repellency for Shielded Generation in Text-to-image Diffusion Models
<|reference_start|>Sparse Repellency for Shielded Generation in Text-to-image Diffusion Models: The increased adoption of diffusion models in text-to-image generation has triggered concerns on their reliability. Such models are now closely scrutinized under the lens of various metrics, notably calibration, fairness, or compute efficiency. We focus in this work on two issues that arise when deploying these models: a lack of diversity when prompting images, and a tendency to recreate images from the training set. To solve both problems, we propose a method that coaxes the sampled trajectories of pretrained diffusion models to land on images that fall outside of a reference set. We achieve this by adding repellency terms to the diffusion SDE throughout the generation trajectory, which are triggered whenever the path is expected to land too closely to an image in the shielded reference set. Our method is sparse in the sense that these repellency terms are zero and inactive most of the time, and even more so towards the end of the generation trajectory. Our method, named SPELL for sparse repellency, can be used either with a static reference set that contains protected images, or dynamically, by updating the set at each timestep with the expected images concurrently generated within a batch. We show that adding SPELL to popular diffusion models improves their diversity while impacting their FID only marginally, and performs comparatively better than other recent training-free diversity methods. We also demonstrate how SPELL can ensure a shielded generation away from a very large set of protected images by considering all 1.2M images from ImageNet as the protected set.<|reference_end|>
arxiv
@article{kirchhof2024sparse, title={Sparse Repellency for Shielded Generation in Text-to-image Diffusion Models}, author={Michael Kirchhof, James Thornton, Pierre Ablin, Louis B\'ethune, Eugene Ndiaye, Marco Cuturi}, journal={arXiv preprint arXiv:2410.06025}, year={2024}, archivePrefix={arXiv}, eprint={2410.06025}, primaryClass={cs.CV cs.LG stat.ML} }
kirchhof2024sparse
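A heavily simplified schematic of the sparse repellency idea: a term that is zero unless the current sample comes within a radius of a protected image, at which point it pushes the trajectory away. Function name, distance choice, and parameters are ours, not the paper's exact SDE term:

```python
import torch

def repellency_term(x, protected, radius, strength):
    """Sparse repellency sketch: push samples x away from protected images,
    but only when within `radius` of one (zero and inactive otherwise).
    x: (batch, C, H, W); protected: (n_protected, C, H, W)."""
    diffs = x.flatten(1)[:, None, :] - protected.flatten(1)[None, :, :]
    dists = diffs.norm(dim=-1)                    # (batch, n_protected)
    active = (dists < radius).float()             # sparse: mostly zeros
    push = (active / dists.clamp(min=1e-8)).unsqueeze(-1) * diffs
    return strength * push.sum(dim=1).view_as(x)  # points away from the set
```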
arxiv-667076
2410.06026
Content-based Wake-up for Energy-efficient and Timely Top-k IoT Sensing Data Retrieval
<|reference_start|>Content-based Wake-up for Energy-efficient and Timely Top-k IoT Sensing Data Retrieval: Energy efficiency and information freshness are key requirements for sensor nodes serving Industrial Internet of Things (IIoT) applications, where a sink node collects informative and fresh data before a deadline, e.g., to control an external actuator. Content-based wake-up (CoWu) activates a subset of nodes that hold data relevant for the sink's goal, thereby offering an energy-efficient way to attain objectives related to information freshness. This paper focuses on a scenario where the sink collects fresh information on top-k values, defined as data from the nodes observing the k highest readings at the deadline. We introduce a new metric called top-k Query Age of Information (k-QAoI), which allows us to characterize the performance of CoWu by considering the characteristics of the physical process. Further, we show how to select the CoWu parameters, such as its timing and threshold, to attain both information freshness and energy efficiency. The numerical results reveal the effectiveness of the CoWu approach, which is able to collect top-k data with higher energy efficiency while reducing k-QAoI when compared to round-robin scheduling, especially when the number of nodes is large and the required size of k is small.<|reference_end|>
arxiv
@article{shiraishi2024content-based, title={Content-based Wake-up for Energy-efficient and Timely Top-k IoT Sensing Data Retrieval}, author={Junya Shiraishi, Anders E. Kal{\o}r, Israel Leyva-Mayorga, Federico Chiariotti, Petar Popovski, Hiroyuki Yomo}, journal={arXiv preprint arXiv:2410.06026}, year={2024}, archivePrefix={arXiv}, eprint={2410.06026}, primaryClass={cs.NI eess.SP} }
shiraishi2024content-based
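The content-based wake-up primitive itself is simple to sketch: the sink broadcasts a threshold and only nodes whose readings clear it respond. This toy version (names ours) ignores the timing and threshold optimization that the paper analyzes:

```python
import numpy as np

def cowu_round(readings: np.ndarray, threshold: float):
    """One content-based wake-up round: each node compares its local reading
    to the broadcast threshold; only matching nodes spend energy replying."""
    awake = readings >= threshold
    return np.flatnonzero(awake), readings[awake]

# e.g., waking only the nodes likely to hold top-k values:
ids, values = cowu_round(np.array([3.1, 9.7, 4.2, 8.8]), threshold=8.0)
```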
arxiv-667077
2410.06028
SpecTrack: Learned Multi-Rotation Tracking via Speckle Imaging
<|reference_start|>SpecTrack: Learned Multi-Rotation Tracking via Speckle Imaging: Precision pose detection is increasingly demanded in fields such as personal fabrication, Virtual Reality (VR), and robotics due to its critical role in ensuring accurate positioning information. However, conventional vision-based approaches used in these applications often struggle to achieve high precision and accuracy, particularly when dealing with complex environments or fast-moving objects. To address these limitations, we investigate Laser Speckle Imaging (LSI), an emerging optical tracking method that offers promising potential for improving pose estimation accuracy. Specifically, our proposed LSI-Based Tracking (SpecTrack) leverages captures from a lensless camera and a retro-reflector marker with a coded aperture to achieve multi-axis rotational pose estimation with high precision. Our extensive trials using our in-house built testbed have shown that SpecTrack achieves an accuracy of 0.31{\deg} (std=0.43{\deg}), significantly outperforming state-of-the-art approaches and improving accuracy up to 200%.<|reference_end|>
arxiv
@article{chen2024spectrack:, title={SpecTrack: Learned Multi-Rotation Tracking via Speckle Imaging}, author={Ziyang Chen, Mustafa Do\u{g}a Do\u{g}an, Josef Spjut and Kaan Ak\c{s}it}, journal={arXiv preprint arXiv:2410.06028}, year={2024}, doi={10.1145/3681756.3697875}, archivePrefix={arXiv}, eprint={2410.06028}, primaryClass={cs.ET cs.CV} }
chen2024spectrack:
arxiv-667078
2410.06029
Unclonable Functional Encryption
<|reference_start|>Unclonable Functional Encryption: In a functional encryption (FE) scheme, a user that holds a ciphertext and a function-key can learn the result of applying the function to the plaintext message. Security requires that the user does not learn anything beyond the function evaluation. On the other hand, unclonable encryption (UE) is a uniquely quantum primitive, which ensures that an adversary cannot duplicate a ciphertext to decrypt the same message multiple times. In this work we introduce unclonable quantum functional encryption (UFE), which both extends the notion of FE to the quantum setting and also possesses the unclonable security of UE. We give a construction for UFE that supports arbitrary quantum messages and polynomially-sized circuits, and achieves unclonable-indistinguishable security for independently sampled function keys. In particular, our UFE guarantees that two parties cannot simultaneously recover the correct function outputs using two independently sampled function keys. Our construction combines quantum garbled circuits [BY22] and quantum-key unclonable encryption [AKY24], and leverages techniques from the plaintext expansion arguments in [Hir+23]. As an application, we give the first construction for public-key UE with variable decryption keys. Lastly, we establish a connection between quantum indistinguishability obfuscation (qiO) and quantum functional encryption (QFE), showing that any multi-input indistinguishability-secure quantum functional encryption scheme unconditionally implies the existence of qiO.<|reference_end|>
arxiv
@article{mehta2024unclonable, title={Unclonable Functional Encryption}, author={Arthur Mehta, Anne M\"uller}, journal={arXiv preprint arXiv:2410.06029}, year={2024}, archivePrefix={arXiv}, eprint={2410.06029}, primaryClass={quant-ph cs.CR} }
mehta2024unclonable
arxiv-667079
2410.06030
Data Quality Issues in Vulnerability Detection Datasets
<|reference_start|>Data Quality Issues in Vulnerability Detection Datasets: Vulnerability detection is a crucial yet challenging task to identify potential weaknesses in software for cyber security. Recently, deep learning (DL) has made great progress in automating the detection process. Due to the complex multi-layer structure and a large number of parameters, a DL model requires massive labeled (vulnerable or secure) source code to gain knowledge to effectively distinguish between vulnerable and secure code. In the literature, many datasets have been created to train DL models for this purpose. However, these datasets suffer from several issues that will lead to low detection accuracy of DL models. In this paper, we define three critical issues (i.e., data imbalance, low vulnerability coverage, biased vulnerability distribution) that can significantly affect the model performance and three secondary issues (i.e., errors in source code, mislabeling, noisy historical data) that also affect the performance but can be addressed through a dedicated pre-processing procedure. In addition, we conduct a study of 14 papers along with 54 datasets for vulnerability detection to confirm these defined issues. Furthermore, we discuss good practices to use existing datasets and to create new ones.<|reference_end|>
arxiv
@article{guo2024data, title={Data Quality Issues in Vulnerability Detection Datasets}, author={Yuejun Guo and Seifeddine Bettaieb}, journal={arXiv preprint arXiv:2410.06030}, year={2024}, doi={10.1109/EuroSPW59978.2023.00008}, archivePrefix={arXiv}, eprint={2410.06030}, primaryClass={cs.CR cs.AI} }
guo2024data
arxiv-667080
2410.06031
Patient flow networks absorb healthcare stress during pandemic crises
<|reference_start|>Patient flow networks absorb healthcare stress during pandemic crises: Disasters, such as the recent COVID-19 pandemic, impose recurrent and heterogeneous stress on healthcare systems, necessitating the redistribution of stress to enhance healthcare resilience. However, existing studies have been hindered by limited datasets and approaches for assessing its absorptive capacity - defined as the system's ability to absorb stress by redistributing patient flows. This study addresses this gap by analyzing patient flow networks constructed from billions of electronic medical records and introducing an approach to quantify network absorptivity under crisis conditions. Our analysis of U.S. healthcare systems reveals that during the COVID-19 pandemic, cross-regional patient flows increased by 3.89%, a 0.90% rise from pre-pandemic levels. The networks exhibited an average absorptivity of 0.21, representing a 10% increase over pre-pandemic conditions. Flow networks with higher connectivity and heterogeneity showed a greater capacity to alleviate system burdens. These empirical and analytical insights underscore the critical role of proactive patient flow management in strengthening healthcare resilience during crises.<|reference_end|>
arxiv
@article{zhong2024patient, title={Patient flow networks absorb healthcare stress during pandemic crises}, author={Lu Zhong, Sen Pei, Jianxi Gao}, journal={arXiv preprint arXiv:2410.06031}, year={2024}, archivePrefix={arXiv}, eprint={2410.06031}, primaryClass={cs.SI} }
zhong2024patient
arxiv-667081
2410.06033
Nationally Scalable Hydrogen Fueling Infrastructure Deployment: A Megaregion Analysis and Optimization Approach
<|reference_start|>Nationally Scalable Hydrogen Fueling Infrastructure Deployment: A Megaregion Analysis and Optimization Approach: Decarbonizing regional and long-haul freight faces challenges due to the limitations of battery-electric vehicles and infrastructure. Hydrogen fuel cell medium- and heavy-duty vehicles (MHDVs) present a promising alternative, aligning with the Department of Energy's decarbonization goals. Historically, alternative fuels like compressed natural gas and propane gas have seen slow adoption due to infrastructure barriers. To prevent similar setbacks, planning for zero-emission hydrogen fueling infrastructure is critical. This research develops plans for affordable and accessible hydrogen refueling stations, supporting the decarbonized freight system and benefiting underserved and rural communities by improving air quality, reducing noise pollution, and enhancing energy resilience. It provides a blueprint for replacing diesel in Class 8 trucks with hydrogen fueling solutions, focusing on the Texas Triangle Megaregion (I-45, I-35, I-10), the I-10 corridor between San Antonio, TX, and Los Angeles, CA, and the I-5/CA-99 corridors between Los Angeles and San Francisco. This area accounts for ~8.5% of U.S. heavy-duty freight volume. Using the OR-AGENT (Optimal Regional Architecture Generation for Electrified National Transport) framework, the study analyzes vehicles, freight networks, and energy systems. The framework integrates data on freight mobility, traffic, weather, and energy pathways to deliver optimized powertrain architectures and hydrogen fueling infrastructure deployment. It assesses all vehicle origin-destination pairs and feasible fueling station locations, using a genetic algorithm to identify the minimum number and optimal locations of hydrogen stations. It also determines fuel schedules and quantities, ensuring no vehicle is stranded. A deployment roadmap outlines strategic hydrogen refueling infrastructure rollout across multiple adoption scenarios.<|reference_end|>
arxiv
@article{sujan2024nationally, title={Nationally Scalable Hydrogen Fueling Infrastructure Deployment: A Megaregion Analysis and Optimization Approach}, author={Vivek Sujan, Junchaun Fan, Gurneesh Jatana, Ruixiao Sun}, journal={arXiv preprint arXiv:2410.06033}, year={2024}, archivePrefix={arXiv}, eprint={2410.06033}, primaryClass={eess.SY cs.SY} }
sujan2024nationally
arxiv-667082
2410.06040
QERA: an Analytical Framework for Quantization Error Reconstruction
<|reference_start|>QERA: an Analytical Framework for Quantization Error Reconstruction: The growing number of parameters and computational demands of large language models (LLMs) present significant challenges for their efficient deployment. Recently, there has been increasing interest in quantizing weights to extremely low precision while offsetting the resulting error with low-rank, high-precision error reconstruction terms. The combination of quantization and low-rank approximation is now popular in both adapter-based, parameter-efficient fine-tuning methods such as LoftQ and low-precision inference techniques including ZeroQuant-V2. Usually, the low-rank terms are calculated via the singular value decomposition (SVD) of the weight quantization error, minimizing the Frobenius and spectral norms of the weight approximation error. Recent methods like LQ-LoRA and LQER introduced hand-crafted heuristics to minimize errors in layer outputs (activations) rather than weights, resulting in improved quantization results. However, these heuristic methods lack an analytical solution to guide the design of quantization error reconstruction terms. In this paper, we revisit this problem and formulate an analytical framework, named Quantization Error Reconstruction Analysis (QERA), and offer a closed-form solution to the problem. We show QERA benefits both existing low-precision fine-tuning and inference methods -- QERA achieves a fine-tuned accuracy gain of $\Delta_{\text{acc}}$ = 6.05% of 2-bit RoBERTa-base on GLUE compared to LoftQ; and obtains $\Delta_{\text{acc}}$ = 2.97% higher post-training quantization accuracy of 4-bit Llama-3.1-70B on average than ZeroQuant-V2 and $\Delta_{\text{ppl}}$ = -0.28 lower perplexity on WikiText2 than LQER.<|reference_end|>
arxiv
@article{zhang2024qera:, title={QERA: an Analytical Framework for Quantization Error Reconstruction}, author={Cheng Zhang, Jeffrey T. H. Wong, Can Xiao, George A. Constantinides and Yiren Zhao}, journal={arXiv preprint arXiv:2410.06040}, year={2024}, archivePrefix={arXiv}, eprint={2410.06040}, primaryClass={cs.LG} }
zhang2024qera:
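The weight-space baseline the QERA abstract describes, quantize and then absorb the error with a rank-r SVD term, can be sketched as below; QERA's contribution is a closed-form, activation-aware alternative to this Frobenius-norm solution, which we do not reproduce here:

```python
import torch

def quantize_with_lowrank_residual(W, quantize, rank):
    """Baseline: W ≈ Q(W) + U_r S_r V_r^T, where the low-rank factors are the
    best rank-r Frobenius-norm fit to the quantization error W - Q(W)."""
    Wq = quantize(W)
    U, S, Vh = torch.linalg.svd(W - Wq, full_matrices=False)
    L = U[:, :rank] * S[:rank]   # (m, r) high-precision left factor
    R = Vh[:rank, :]             # (r, n) high-precision right factor
    # Inference computes (Wq + L @ R) @ x, i.e. Wq @ x + L @ (R @ x).
    return Wq, L, R
```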
arxiv-667083
2410.06041
Block Induced Signature Generative Adversarial Network (BISGAN): Signature Spoofing Using GANs and Their Evaluation
<|reference_start|>Block Induced Signature Generative Adversarial Network (BISGAN): Signature Spoofing Using GANs and Their Evaluation: Deep learning is actively being used in biometrics to develop efficient identification and verification systems. Handwritten signatures are a common subset of biometric data for authentication purposes. Generative adversarial networks (GANs) learn from original and forged signatures to generate forged signatures. While most GAN techniques create a strong signature verifier, which is the discriminator, there is a need to focus more on the quality of forgeries generated by the generator model. This work focuses on creating a generator that produces forged samples that achieve a benchmark in spoofing signature verification systems. We use CycleGANs infused with Inception model-like blocks with attention heads as the generator and a variation of the SigCNN model as the base Discriminator. We train our model with a new technique that results in 80% to 100% success in signature spoofing. Additionally, we create a custom evaluation technique to act as a goodness measure of the generated forgeries. Our work advocates generator-focused GAN architectures for spoofing data quality that aid in a better understanding of biometric data generation and evaluation.<|reference_end|>
arxiv
@article{amjad2024block, title={Block Induced Signature Generative Adversarial Network (BISGAN): Signature Spoofing Using GANs and Their Evaluation}, author={Haadia Amjad, Kilian Goeller, Steffen Seitz, Carsten Knoll, Naseer Bajwa, Ronald Tetzlaff and Muhammad Imran Malik}, journal={arXiv preprint arXiv:2410.06041}, year={2024}, archivePrefix={arXiv}, eprint={2410.06041}, primaryClass={cs.CV cs.AI} }
amjad2024block
arxiv-667084
2410.06042
Weighted Embeddings for Low-Dimensional Graph Representation
<|reference_start|>Weighted Embeddings for Low-Dimensional Graph Representation: Learning low-dimensional numerical representations from symbolic data, e.g., embedding the nodes of a graph into a geometric space, is an important concept in machine learning. While embedding into Euclidean space is common, recent observations indicate that hyperbolic geometry is better suited to represent hierarchical information and heterogeneous data (e.g., graphs with a scale-free degree distribution). Despite their potential for more accurate representations, hyperbolic embeddings also have downsides like being more difficult to compute and harder to use in downstream tasks. We propose embedding into a weighted space, which is closely related to hyperbolic geometry but mathematically simpler. We provide the embedding algorithm WEmbed and demonstrate, based on generated as well as over 2000 real-world graphs, that our weighted embeddings heavily outperform state-of-the-art Euclidean embeddings for heterogeneous graphs while using fewer dimensions. The running time of WEmbed and its embedding quality on the remaining instances are on par with state-of-the-art Euclidean embedders.<|reference_end|>
arxiv
@article{bläsius2024weighted, title={Weighted Embeddings for Low-Dimensional Graph Representation}, author={Thomas Bl\"asius, Jean-Pierre von der Heydt, Maximilian Katzmann, Nikolai Maas}, journal={arXiv preprint arXiv:2410.06042}, year={2024}, archivePrefix={arXiv}, eprint={2410.06042}, primaryClass={cs.LG cs.DS cs.SI} }
bläsius2024weighted
arxiv-667085
2410.06043
KwicKwocKwac, a tool for rapidly generating concordances and marking up a literary text
<|reference_start|>KwicKwocKwac, a tool for rapidly generating concordances and marking up a literary text: This paper introduces KwicKwocKwac 1.0 (KwicKK), a web application designed to enhance the annotation and enrichment of digital texts in the humanities. KwicKK provides a user-friendly interface that enables scholars and researchers to perform semi-automatic markup of textual documents, facilitating the identification of relevant entities such as people, organizations, and locations. Key functionalities include the visualization of annotated texts using KeyWord in Context (KWIC), KeyWord Out Of Context (KWOC), and KeyWord After Context (KWAC) methodologies, alongside automatic disambiguation of generic references and integration with Wikidata for Linked Open Data connections. The application supports metadata input and offers multiple download formats, promoting accessibility and ease of use. Developed primarily for the National Edition of Aldo Moro's works, KwicKK aims to lower the technical barriers for users while fostering deeper engagement with digital scholarly resources. The architecture leverages contemporary web technologies, ensuring scalability and reliability. Future developments will explore user experience enhancements, collaborative features, and integration of additional data sources.<|reference_end|>
arxiv
@article{barzaghi2024kwickwockwac, title={KwicKwocKwac, a tool for rapidly generating concordances and marking up a literary text}, author={Sebastian Barzaghi, Francesco Paolucci, Francesca Tomasi, Fabio Vitali}, journal={arXiv preprint arXiv:2410.06043}, year={2024}, archivePrefix={arXiv}, eprint={2410.06043}, primaryClass={cs.DL cs.IR} }
barzaghi2024kwickwockwac
arxiv-667086
2410.06044
HyperDet: Generalizable Detection of Synthesized Images by Generating and Merging A Mixture of Hyper LoRAs
<|reference_start|>HyperDet: Generalizable Detection of Synthesized Images by Generating and Merging A Mixture of Hyper LoRAs: The emergence of diverse generative vision models has recently enabled the synthesis of visually realistic images, underscoring the critical need for effectively detecting these generated images from real photos. Despite advances in this field, existing detection approaches often struggle to accurately identify synthesized images generated by different generative models. In this work, we introduce a novel and generalizable detection framework termed HyperDet, which innovatively captures and integrates shared knowledge from a collection of functionally distinct and lightweight expert detectors. HyperDet leverages a large pretrained vision model to extract general detection features while simultaneously capturing and enhancing task-specific features. To achieve this, HyperDet first groups SRM filters into five distinct groups to efficiently capture varying levels of pixel artifacts based on their different functionality and complexity. Then, HyperDet utilizes a hypernetwork to generate LoRA model weights with distinct embedding parameters. Finally, we merge the LoRA networks to form an efficient model ensemble. Also, we propose a novel objective function that balances the pixel and semantic artifacts effectively. Extensive experiments on the UnivFD and Fake2M datasets demonstrate the effectiveness of our approach, achieving state-of-the-art performance. Moreover, our work paves a new way to establish generalizable domain-specific fake image detectors based on pretrained large vision models.<|reference_end|>
arxiv
@article{cao2024hyperdet:, title={HyperDet: Generalizable Detection of Synthesized Images by Generating and Merging A Mixture of Hyper LoRAs}, author={Huangsen Cao, Yongwei Wang, Yinfeng Liu, Sixian Zheng, Kangtao Lv, Zhimeng Zhang, Bo Zhang, Xin Ding, Fei Wu}, journal={arXiv preprint arXiv:2410.06044}, year={2024}, archivePrefix={arXiv}, eprint={2410.06044}, primaryClass={cs.CV} }
cao2024hyperdet:
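A toy version of the hypernetwork-to-LoRA mechanism the HyperDet abstract describes: an expert embedding is mapped to the low-rank factors of a weight update for one linear layer. This is a schematic reading of the abstract (dimensions and layer choice ours), not the paper's architecture:

```python
import torch
import torch.nn as nn

class LoRAHyperNet(nn.Module):
    """Map a per-expert embedding z to LoRA factors (A, B) for one target
    linear layer; HyperDet additionally groups SRM filters into experts and
    merges the resulting LoRA networks into an ensemble."""
    def __init__(self, embed_dim: int, d_in: int, d_out: int, rank: int):
        super().__init__()
        self.to_A = nn.Linear(embed_dim, rank * d_in)
        self.to_B = nn.Linear(embed_dim, d_out * rank)
        self.rank, self.d_in, self.d_out = rank, d_in, d_out

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (embed_dim,) expert embedding -> low-rank update for W + B @ A
        A = self.to_A(z).view(self.rank, self.d_in)
        B = self.to_B(z).view(self.d_out, self.rank)
        return B @ A
```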
arxiv-667087
2410.06045
Extracting Finite State Machines from Transformers
<|reference_start|>Extracting Finite State Machines from Transformers: Fueled by the popularity of the transformer architecture in deep learning, several works have investigated what formal languages a transformer can learn. Nonetheless, existing results remain hard to compare and a fine-grained understanding of the trainability of transformers on regular languages is still lacking. We investigate transformers trained on regular languages from a mechanistic interpretability perspective. Using an extension of the $L^*$ algorithm, we extract Moore machines from transformers. We empirically find tighter lower bounds on the trainability of transformers, when a finite number of symbols determine the state. Additionally, our mechanistic insight allows us to characterise the regular languages a one-layer transformer can learn with good length generalisation. However, we also identify failure cases where the determining symbols get misrecognised due to saturation of the attention mechanism.<|reference_end|>
arxiv
@article{adriaensen2024extracting, title={Extracting Finite State Machines from Transformers}, author={Rik Adriaensen, Jaron Maene}, journal={arXiv preprint arXiv:2410.06045}, year={2024}, archivePrefix={arXiv}, eprint={2410.06045}, primaryClass={cs.LG cs.AI} }
adriaensen2024extracting
arxiv-667088
2410.06049
"Diversity is Having the Diversity": Unpacking and Designing for Diversity in Applicant Selection
<|reference_start|>"Diversity is Having the Diversity": Unpacking and Designing for Diversity in Applicant Selection: When selecting applicants for scholarships, universities, or jobs, practitioners often aim for a diverse cohort of qualified recipients. However, differing articulations, constructs, and notions of diversity prevents decision-makers from operationalising and progressing towards the diversity they all agree is needed. To understand this challenge of translation from values, to requirements, to decision support tools (DSTs), we conducted participatory design studies exploring professionals' varied perceptions of diversity and how to build for them. Our results suggest three definitions of diversity: bringing together different perspectives; ensuring representativeness of a base population; and contextualising applications, which we use to create the Diversity Triangle. We experience-prototyped DSTs reflecting each angle of the Diversity Triangle to enhance decision-making around diversity. We find that notions of diversity are highly diverse; efforts to design DSTs for diversity should start by working with organisations to distil 'diversity' into definitions and design requirements.<|reference_end|>
arxiv
@article{natarajan2024"diversity, title={"Diversity is Having the Diversity": Unpacking and Designing for Diversity in Applicant Selection}, author={Neil Natarajan and Sruthi Viswanathan and Reuben Binns and Nigel Shadbolt}, journal={arXiv preprint arXiv:2410.06049}, year={2024}, archivePrefix={arXiv}, eprint={2410.06049}, primaryClass={cs.HC} }
natarajan2024"diversity
arxiv-667089
2410.06051
Gaussian-Based and Outside-the-Box Runtime Monitoring Join Forces
<|reference_start|>Gaussian-Based and Outside-the-Box Runtime Monitoring Join Forces: Since neural networks can make wrong predictions even with high confidence, monitoring their behavior at runtime is important, especially in safety-critical domains like autonomous driving. In this paper, we combine ideas from previous monitoring approaches based on observing the activation values of hidden neurons. In particular, we combine the Gaussian-based approach, which observes whether the current value of each monitored neuron is similar to typical values observed during training, and the Outside-the-Box monitor, which creates clusters of the acceptable activation values, and, thus, considers the correlations of the neurons' values. Our experiments evaluate the achieved improvement.<|reference_end|>
arxiv
@article{hashemi2024gaussian-based, title={Gaussian-Based and Outside-the-Box Runtime Monitoring Join Forces}, author={Vahid Hashemi, Jan K\v{r}et\'insk\'y, Sabine Rieder, Torsten Sch\"on and Jan Vorhoff}, journal={arXiv preprint arXiv:2410.06051}, year={2024}, archivePrefix={arXiv}, eprint={2410.06051}, primaryClass={cs.LG} }
hashemi2024gaussian-based
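A simplified combination of the two monitors, per-neuron Gaussian plausibility plus an activation bounding box, might look as follows; the paper clusters activations into several boxes, whereas this sketch uses a single box for brevity:

```python
import numpy as np

class CombinedMonitor:
    """Sketch of joining the two ideas: per-neuron Gaussian z-score checks
    and an axis-aligned box over acceptable activation values."""
    def fit(self, acts: np.ndarray):            # acts: (n_samples, n_neurons)
        self.mu, self.sigma = acts.mean(0), acts.std(0) + 1e-8
        # Outside-the-Box builds one box per cluster; a single box here.
        self.lo, self.hi = acts.min(0), acts.max(0)
        return self

    def flag(self, a: np.ndarray, z_max: float = 3.0) -> bool:
        gaussian_ok = np.all(np.abs((a - self.mu) / self.sigma) <= z_max)
        box_ok = np.all((a >= self.lo) & (a <= self.hi))
        return not (gaussian_ok and box_ok)     # True = raise an alarm
```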
arxiv-667090
2410.06052
Concurrent-Learning Based Relative Localization in Shape Formation of Robot Swarms
<|reference_start|>Concurrent-Learning Based Relative Localization in Shape Formation of Robot Swarms: In this paper, we address the shape formation problem for massive robot swarms in environments where external localization systems are unavailable. Achieving this task effectively with only onboard measurements remains scarcely explored and faces several practical challenges. To solve this challenging problem, we propose the following novel results. Firstly, to estimate the relative positions among neighboring robots, a concurrent-learning based estimator is proposed. It relaxes the persistent-excitation condition required by classical estimators such as the least-squares estimator. Secondly, we introduce a finite-time agreement protocol to determine the shape location. This is achieved by estimating the relative position between each robot and a randomly assigned seed robot, whose initial position marks the shape location. Thirdly, based on the theoretical results of the relative localization, a novel behavior-based control strategy is devised. This strategy not only enables adaptive shape formation of large groups of robots but also enhances the observability of inter-robot relative localization. Numerical simulation results are provided to verify the performance of our proposed strategy compared to state-of-the-art methods. Additionally, outdoor experiments on real robots further demonstrate the practical effectiveness and robustness of our methods.<|reference_end|>
arxiv
@article{lü2024concurrent-learning, title={Concurrent-Learning Based Relative Localization in Shape Formation of Robot Swarms}, author={Jinhu L\"u, Kunrui Ze, Shuoyu Yue, Kexin Liu, Wei Wang, Guibin Sun}, journal={arXiv preprint arXiv:2410.06052}, year={2024}, archivePrefix={arXiv}, eprint={2410.06052}, primaryClass={cs.RO cs.MA} }
lü2024concurrent-learning
arxiv-667091
2410.06055
AP-LDM: Attentive and Progressive Latent Diffusion Model for Training-Free High-Resolution Image Generation
<|reference_start|>AP-LDM: Attentive and Progressive Latent Diffusion Model for Training-Free High-Resolution Image Generation: Latent diffusion models (LDMs), such as Stable Diffusion, often experience significant structural distortions when directly generating high-resolution (HR) images that exceed their original training resolutions. A straightforward and cost-effective solution is to adapt pre-trained LDMs for HR image generation; however, existing methods often suffer from poor image quality and long inference time. In this paper, we propose an Attentive and Progressive LDM (AP-LDM), a novel, training-free framework aimed at enhancing HR image quality while accelerating the generation process. AP-LDM decomposes the denoising process of LDMs into two stages: (i) attentive training-resolution denoising, and (ii) progressive high-resolution denoising. The first stage generates a latent representation of a higher-quality training-resolution image through the proposed attentive guidance, which utilizes a novel parameter-free self-attention mechanism to enhance the structural consistency. The second stage progressively performs upsampling in pixel space, alleviating the severe artifacts caused by latent space upsampling. Leveraging the effective initialization from the first stage enables denoising at higher resolutions with significantly fewer steps, enhancing overall efficiency. Extensive experimental results demonstrate that AP-LDM significantly outperforms state-of-the-art methods, delivering up to a 5x speedup in HR image generation, thereby highlighting its substantial advantages for real-world applications. Code is available at https://github.com/kmittle/AP-LDM.<|reference_end|>
arxiv
@article{cao2024ap-ldm:, title={AP-LDM: Attentive and Progressive Latent Diffusion Model for Training-Free High-Resolution Image Generation}, author={Boyuan Cao, Jiaxin Ye, Yujie Wei, Hongming Shan}, journal={arXiv preprint arXiv:2410.06055}, year={2024}, archivePrefix={arXiv}, eprint={2410.06055}, primaryClass={cs.CV} }
cao2024ap-ldm:
arxiv-667092
2410.06059
Maximum Achievable Rate of Resistive Random-Access Memory Channels by Mutual Information Spectrum Analysis
<|reference_start|>Maximum Achievable Rate of Resistive Random-Access Memory Channels by Mutual Information Spectrum Analysis: The maximum achievable rate is derived for the resistive random-access memory (ReRAM) channel with sneak path interference. Based on the mutual information spectrum analysis, the maximum achievable rate of the ReRAM channel with independent and identically distributed (i.i.d.) binary inputs is derived as an explicit function of channel parameters such as the distribution of cell selector failures and channel noise level. Due to the randomness of cell selector failures, the ReRAM channel demonstrates a multi-status characteristic. For each status, it is shown that as the array size grows large, the fraction of cells affected by sneak paths approaches a constant value. Therefore, the mutual information spectrum of the ReRAM channel is formulated as a mixture of multiple stationary channels. Maximum achievable rates of the ReRAM channel with different settings, such as single- and across-array codings, with and without data shaping, and optimal and treating-interference-as-noise (TIN) decodings, are compared. These results provide valuable insights into code design for ReRAM.<|reference_end|>
arxiv
@article{song2024maximum, title={Maximum Achievable Rate of Resistive Random-Access Memory Channels by Mutual Information Spectrum Analysis}, author={Guanghui Song, Kui Cai, Ying Li, and Kees A. Schouhamer Immink}, journal={arXiv preprint arXiv:2410.06059}, year={2024}, archivePrefix={arXiv}, eprint={2410.06059}, primaryClass={cs.IT math.IT} }
song2024maximum
arxiv-667093
2410.06060
Hierarchical Matrix Completion for the Prediction of Properties of Binary Mixtures
<|reference_start|>Hierarchical Matrix Completion for the Prediction of Properties of Binary Mixtures: Predicting the thermodynamic properties of mixtures is crucial for process design and optimization in chemical engineering. Machine learning (ML) methods are gaining increasing attention in this field, but experimental data for training are often scarce, which hampers their application. In this work, we introduce a novel generic approach for improving data-driven models: inspired by the ancient rule "similia similibus solvuntur", we lump components that behave similarly into chemical classes and model them jointly in the first step of a hierarchical approach. While the information on class affiliations can stem in principle from any source, we demonstrate how classes can reproducibly be defined based on mixture data alone by agglomerative clustering. The information from this clustering step is then used as an informed prior for fitting the individual data. We demonstrate the benefits of this approach by applying it in connection with a matrix completion method (MCM) for predicting isothermal activity coefficients at infinite dilution in binary mixtures. Using clustering leads to significantly improved predictions compared to an MCM without clustering. Furthermore, the chemical classes learned from the clustering give exciting insights into what matters on the molecular level for modeling given mixture properties.<|reference_end|>
arxiv
@article{gond2024hierarchical, title={Hierarchical Matrix Completion for the Prediction of Properties of Binary Mixtures}, author={Dominik Gond, Jan-Tobias Sohns, Heike Leitte, Hans Hasse, Fabian Jirasek}, journal={arXiv preprint arXiv:2410.06060}, year={2024}, archivePrefix={arXiv}, eprint={2410.06060}, primaryClass={cs.LG} }
gond2024hierarchical
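The clustering step in the hierarchical approach above can be sketched with standard tools: components are grouped by the similarity of their rows in the partially observed mixture-property matrix, and the resulting class labels then serve as the informed prior for the per-component fit. The handling of missing entries is deliberately crude here and the function name is ours:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_components(M: np.ndarray, n_classes: int) -> np.ndarray:
    """Agglomeratively cluster components (rows of the mixture matrix M,
    with NaN marking unobserved entries) into n_classes chemical classes."""
    rows = np.nan_to_num(M)              # crude imputation of missing data
    Z = linkage(rows, method="average")  # Euclidean distances by default
    return fcluster(Z, t=n_classes, criterion="maxclust")
```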
arxiv-667094
2410.06062
LLM-based SPARQL Query Generation from Natural Language over Federated Knowledge Graphs
<|reference_start|>LLM-based SPARQL Query Generation from Natural Language over Federated Knowledge Graphs: We introduce a Retrieval-Augmented Generation (RAG) system for translating user questions into accurate federated SPARQL queries over bioinformatics knowledge graphs (KGs) leveraging Large Language Models (LLMs). To enhance accuracy and reduce hallucinations in query generation, our system utilises metadata from the KGs, including query examples and schema information, and incorporates a validation step to correct generated queries. The system is available online at chat.expasy.org.<|reference_end|>
arxiv
@article{emonet2024llm-based, title={LLM-based SPARQL Query Generation from Natural Language over Federated Knowledge Graphs}, author={Vincent Emonet, Jerven Bolleman, Severine Duvaud, Tarcisio Mendes de Farias and Ana Claudia Sima}, journal={arXiv preprint arXiv:2410.06062}, year={2024}, archivePrefix={arXiv}, eprint={2410.06062}, primaryClass={cs.DB cs.AI cs.IR} }
emonet2024llm-based
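One inexpensive ingredient of the validation step the abstract mentions can be sketched with rdflib: reject any generated query that does not even parse. The deployed system presumably also checks queries against KG schema information, which this sketch omits:

```python
from rdflib.plugins.sparql import prepareQuery

def is_syntactically_valid(query: str) -> bool:
    """Cheap first-pass validation: discard generated SPARQL that fails to
    parse before attempting any correction or execution."""
    try:
        prepareQuery(query)
        return True
    except Exception:
        return False
```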
arxiv-667095
2410.06065
Posets and Bounded Probabilities for Discovering Order-inducing Features in Event Knowledge Graphs
<|reference_start|>Posets and Bounded Probabilities for Discovering Order-inducing Features in Event Knowledge Graphs: Event knowledge graphs (EKG) extend the classical notion of a trace to capture multiple, interacting views of a process execution. In this paper, we tackle the open problem of automating EKG discovery from uncurated data through a principled, probabilistic framing based on the outcome space resulting from feature-derived partial orders on events. From this, we derive an EKG discovery algorithm based upon statistical inference rather than an ad-hoc or heuristic-based strategy, or relying on manual analysis from domain experts. This approach comes at the computational cost of exploring a large, non-convex hypothesis space. In particular, solving the maximum likelihood term involves counting the number of linear extensions of posets, which in general is #P-complete. Fortunately, bound estimates suffice for model comparison, and admit incorporation into a bespoke branch-and-bound algorithm. We show that the posterior probability as defined is antitonic w.r.t. search depth for branching rules that are monotonic w.r.t. model inclusion. This allows pruning of large portions of the search space, which we show experimentally leads to rapid convergence toward optimal solutions that are consistent with manually built EKGs.<|reference_end|>
arxiv
@article{back2024posets, title={Posets and Bounded Probabilities for Discovering Order-inducing Features in Event Knowledge Graphs}, author={Christoffer Olling Back and Jakob Grue Simonsen}, journal={arXiv preprint arXiv:2410.06065}, year={2024}, archivePrefix={arXiv}, eprint={2410.06065}, primaryClass={cs.LG cs.AI} }
back2024posets
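To make the #P-complete quantity in the abstract above concrete, here is a brute-force counter for the number of linear extensions of a toy poset. It is exponential and only viable for tiny instances, which is exactly why bounds inside branch-and-bound are attractive; the example poset is invented for illustration.

```python
# Count linear extensions of a small poset by recursively removing minimal
# elements; exponential in general (the counting problem is #P-complete).
from functools import lru_cache

# toy poset on {0..4}: 0 < 1, 0 < 2, 1 < 3, 2 < 3; element 4 is unordered
preds = {0: frozenset(), 1: frozenset({0}), 2: frozenset({0}),
         3: frozenset({1, 2}), 4: frozenset()}

@lru_cache(maxsize=None)
def count_extensions(remaining: frozenset) -> int:
    if not remaining:
        return 1
    # an element is currently minimal if all its predecessors are placed
    return sum(count_extensions(remaining - {v})
               for v in remaining if not (preds[v] & remaining))

print(count_extensions(frozenset(preds)))  # 10: two orders of the diamond
                                           # times five slots for element 4
```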
arxiv-667096
2410.06066
An Analysis of QUIC Connection Migration in the Wild
<|reference_start|>An Analysis of QUIC Connection Migration in the Wild: As QUIC gains attention, more applications that leverage its capabilities are emerging. These include defenses against on-path IP tracking and traffic analysis. However, the deployment of the underlying required support for connection migration remains largely unexplored. This paper provides a comprehensive examination of the support of the QUIC connection migration mechanism over the Internet. We perform Internet-wide scans revealing that despite a rapid evolution in the deployment of QUIC on web servers, some of the most popular destinations do not support connection migration yet.<|reference_end|>
arxiv
@article{buchet2024an, title={An Analysis of QUIC Connection Migration in the Wild}, author={Aur\'elien Buchet and Cristel Pelsser}, journal={arXiv preprint arXiv:2410.06066}, year={2024}, archivePrefix={arXiv}, eprint={2410.06066}, primaryClass={cs.NI} }
buchet2024an
arxiv-667097
2410.06067
Contrastive Learning to Fine-Tune Feature Extraction Models for the Visual Cortex
<|reference_start|>Contrastive Learning to Fine-Tune Feature Extraction Models for the Visual Cortex: Predicting the neural response to natural images in the visual cortex requires extracting relevant features from the images and relating those features to the observed responses. In this work, we optimize the feature extraction in order to maximize the information shared between the image features and the neural response across voxels in a given region of interest (ROI) extracted from the BOLD signal measured by fMRI. We adapt contrastive learning (CL) to fine-tune a convolutional neural network, which was pretrained for image classification, such that the mapping of a given image's features is more similar to the corresponding fMRI response than to the responses to other images. We exploit the recently released Natural Scenes Dataset (Allen et al., 2022) as organized for the Algonauts Project (Gifford et al., 2023), which contains the high-resolution fMRI responses of eight subjects to tens of thousands of naturalistic images. We show that CL fine-tuning creates feature extraction models that enable higher encoding accuracy in early visual ROIs as compared to both the pretrained network and a baseline approach that uses a regression loss at the output of the network to tune it for fMRI response encoding. We investigate inter-subject transfer of the CL fine-tuned models, including subjects from another, lower-resolution dataset (Gong et al., 2023). We also pool subjects for fine-tuning to further improve the encoding performance. Finally, we examine the performance of the fine-tuned models on common image classification tasks, explore the landscape of ROI-specific models by applying dimensionality reduction on the Bhattacharya dissimilarity matrix created using the predictions on those tasks (Mao et al., 2024), and investigate lateralization of the processing for early visual ROIs using salience maps of the classifiers built on the CL-tuned models.<|reference_end|>
arxiv
@article{mulrooney2024contrastive, title={Contrastive Learning to Fine-Tune Feature Extraction Models for the Visual Cortex}, author={Alex Mulrooney and Austin J. Brockmeier}, journal={arXiv preprint arXiv:2410.06067}, year={2024}, archivePrefix={arXiv}, eprint={2410.06067}, primaryClass={cs.CV cs.LG} }
mulrooney2024contrastive
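The training signal described in the abstract above can be sketched as an InfoNCE-style loss between mapped image features and fMRI responses. The feature and voxel dimensions, the linear head, and the temperature are assumptions for illustration; the paper's exact objective may differ in detail.

```python
# Sketch of the contrastive idea: an image's mapped features should be more
# similar to its own fMRI response than to other images' responses.
# Dimensions (512 features, 1000 voxels) and the linear head are assumptions.
import torch
import torch.nn.functional as F

head = torch.nn.Linear(512, 1000)  # maps CNN features into voxel space

def contrastive_loss(img_feats, fmri, temperature=0.1):
    proj = F.normalize(head(img_feats), dim=1)   # (B, V)
    targ = F.normalize(fmri, dim=1)              # (B, V)
    logits = proj @ targ.T / temperature         # pairwise similarities
    labels = torch.arange(logits.size(0))        # positives on the diagonal
    return F.cross_entropy(logits, labels)

loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 1000))
loss.backward()  # gradients flow into the head (and the CNN, in training)
```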
arxiv-667098
2410.06068
Resolution limit of the eye: how many pixels can we see?
<|reference_start|>Resolution limit of the eye: how many pixels can we see?: As large engineering efforts go towards improving the resolution of mobile, AR and VR displays, it is important to know the maximum resolution at which further improvements bring no noticeable benefit. This limit is often referred to as the "retinal resolution", although the limiting factor may not necessarily be attributed to the retina. To determine the ultimate resolution at which an image appears sharp to our eyes with no perceivable blur, we created an experimental setup with a sliding display, which allows for continuous control of the resolution. The lack of such control was the main limitation of the previous studies. We measure achromatic (black-white) and chromatic (red-green and yellow-violet) resolution limits for foveal vision, and at two eccentricities (10 and 20 deg). Our results demonstrate that the resolution limit is higher than what was previously believed, reaching 94 pixels-per-degree (ppd) for foveal achromatic vision, 89 ppd for red-green patterns, and 53 ppd for yellow-violet patterns. We also observe a much larger drop in the resolution limit for chromatic patterns (red-green and yellow-violet) than for achromatic. Our results set the north star for display development, with implications for future imaging, rendering and video coding technologies.<|reference_end|>
arxiv
@article{ashraf2024resolution, title={Resolution limit of the eye: how many pixels can we see?}, author={Maliha Ashraf, Alexandre Chapiro, Rafa{\l} K. Mantiuk}, journal={arXiv preprint arXiv:2410.06068}, year={2024}, archivePrefix={arXiv}, eprint={2410.06068}, primaryClass={cs.HC cs.GR cs.MM eess.IV} }
ashraf2024resolution
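As a quick worked example of what the reported limit implies for hardware, the following converts 94 ppd into the pixel density a display would need; the 30 cm viewing distance is an assumed typical value, not a figure from the paper.

```python
# Back-of-envelope: pixel density needed to reach 94 pixels-per-degree.
import math

ppd = 94                    # foveal achromatic limit from the abstract
distance_cm = 30.0          # assumed viewing distance
one_deg_cm = 2 * distance_cm * math.tan(math.radians(0.5))  # span of 1 degree
ppi = ppd / (one_deg_cm / 2.54)
print(f"{ppi:.0f} ppi")     # ~456 ppi at 30 cm
```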
arxiv-667099
2410.06069
Provable Methods for Searching with an Imperfect Sensor
<|reference_start|>Provable Methods for Searching with an Imperfect Sensor: Assume that a target is known to be present at an unknown point among a finite set of locations in the plane. We search for it using a mobile robot that has imperfect sensing capabilities. It takes time for the robot to move between locations and search a location; we have a total time budget within which to conduct the search. We study the problem of computing a search path/strategy for the robot that maximizes the probability of detection of the target. Considering non-uniform travel times between points (e.g., based on the distance between them) is crucial for search and rescue applications; such problems have been investigated to a limited extent due to their inherent complexity. In this paper, we describe fast algorithms with performance guarantees for this search problem and some variants, complement them with complexity results, and perform experiments to observe their performance.<|reference_end|>
arxiv
@article{chakraborty2024provable, title={Provable Methods for Searching with an Imperfect Sensor}, author={Nilanjan Chakraborty, Prahlad Narasimhan Kasthurirangan, Joseph S.B. Mitchell, Linh Nguyen, and Michael Perk}, journal={arXiv preprint arXiv:2410.06069}, year={2024}, archivePrefix={arXiv}, eprint={2410.06069}, primaryClass={cs.RO cs.CG} }
chakraborty2024provable
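For intuition only, here is a naive greedy baseline for the stated problem: repeatedly visit the location offering the most detection probability per unit of time (travel plus search) until the budget runs out. This is an invented illustration with no performance guarantee, unlike the algorithms the paper provides.

```python
# Naive greedy baseline (illustrative only): maximize detection probability
# gained per unit of time spent, subject to a total time budget.
import math

def greedy_plan(points, p_detect, search_time, budget, start=(0.0, 0.0)):
    pos, used, plan = start, 0.0, []
    remaining = set(range(len(points)))
    while remaining:
        best = max(remaining, key=lambda i: p_detect[i] /
                   (math.dist(pos, points[i]) + search_time[i]))
        cost = math.dist(pos, points[best]) + search_time[best]
        if used + cost > budget:
            break
        used += cost
        pos = points[best]
        plan.append(best)
        remaining.remove(best)
    return plan

# three candidate locations with prior probabilities and search times
print(greedy_plan([(1, 0), (0, 2), (5, 5)], [0.5, 0.3, 0.2], [1, 1, 2], budget=8))
```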
arxiv-667100
2410.06070
Enforcing Interpretability in Time Series Transformers: A Concept Bottleneck Framework
<|reference_start|>Enforcing Interpretability in Time Series Transformers: A Concept Bottleneck Framework: There has been a recent push of research on Transformer-based models for long-term time series forecasting, even though they are inherently difficult to interpret and explain. While there is a large body of work on interpretability methods for various domains and architectures, the interpretability of Transformer-based forecasting models remains largely unexplored. To address this gap, we develop a framework based on Concept Bottleneck Models to enforce interpretability of time series Transformers. We modify the training objective to encourage a model to develop representations similar to predefined interpretable concepts. In our experiments, we enforce similarity using Centered Kernel Alignment, and the predefined concepts include time features and an interpretable, autoregressive surrogate model (AR). We apply the framework to the Autoformer model, and present an in-depth analysis for a variety of benchmark tasks. We find that the model performance remains mostly unaffected, while the model shows much improved interpretability. Additionally, interpretable concepts become local, which makes the trained model easily intervenable. As a proof of concept, we demonstrate a successful intervention in the scenario of a time shift in the data, which eliminates the need to retrain.<|reference_end|>
arxiv
@article{vansprang2024enforcing, title={Enforcing Interpretability in Time Series Transformers: A Concept Bottleneck Framework}, author={Angela van Sprang, Erman Acar, Willem Zuidema}, journal={arXiv preprint arXiv:2410.06070}, year={2024}, archivePrefix={arXiv}, eprint={2410.06070}, primaryClass={cs.LG} }
vansprang2024enforcing
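The similarity measure named in the abstract above, Centered Kernel Alignment, has a simple linear form (Kornblith et al., 2019). A sketch follows; the array shapes are invented for the example.

```python
# Linear CKA between two sets of representations (rows = the same samples).
import numpy as np

def linear_cka(X, Y):
    X = X - X.mean(axis=0)   # center each feature column
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, "fro") *
                    np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
acts = rng.normal(size=(64, 32))          # e.g. Transformer activations
print(linear_cka(acts, acts))             # 1.0 for identical representations
print(linear_cka(acts, rng.normal(size=(64, 8))))  # small for unrelated features
```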