Dataset schema (per-record fields and observed value lengths):
  corpus_id      string, length 7-12
  paper_id       string, length 9-16
  title          string, length 1-261
  abstract       string, length 70-4.02k
  source         string, 1 distinct value (arxiv)
  bibtex         string, length 208-20.9k
  citation_key   string, length 6-100
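Each record below follows this layout in order: corpus_id, paper_id, title, abstract (wrapped in <|reference_start|> ... <|reference_end|> and prefixed with the title), source, bibtex, and citation_key. As a minimal sketch of how such a record might be cleaned in Python, the snippet below strips the abstract markers and the repeated title prefix; the helper name clean_abstract and the abbreviated sample record are illustrative assumptions, not part of the dataset's own tooling.

```python
# Minimal parsing sketch (assumes Python 3.9+ for str.removeprefix/removesuffix).
# Field names follow the schema above; the sample record is abbreviated.

def clean_abstract(record: dict) -> str:
    """Return the abstract text of one record without the <|reference_start|> /
    <|reference_end|> markers and without the repeated 'Title: ' prefix."""
    text = record["abstract"]
    text = text.removeprefix("<|reference_start|>").removesuffix("<|reference_end|>")
    prefix = record["title"] + ": "
    if text.startswith(prefix):
        text = text[len(prefix):]
    return text.strip()


if __name__ == "__main__":
    sample = {
        "corpus_id": "arxiv-667502",
        "paper_id": "2410.06828",
        "title": "Transfer Learning for a Class of Cascade Dynamical Systems",
        "abstract": "<|reference_start|>Transfer Learning for a Class of Cascade "
                    "Dynamical Systems: This work considers the problem of transfer "
                    "learning in the context of reinforcement learning.<|reference_end|>",
        "source": "arxiv",
    }
    print(clean_abstract(sample))
```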
arxiv-667501
2410.06825
K-SAM: A Prompting Method Using Pretrained U-Net to Improve Zero Shot Performance of SAM on Lung Segmentation in CXR Images
<|reference_start|>K-SAM: A Prompting Method Using Pretrained U-Net to Improve Zero Shot Performance of SAM on Lung Segmentation in CXR Images: In clinical procedures, precise localization of the target area is an essential step for diagnosis and screening. For many diagnostic applications, lung segmentation of chest X-ray images is an essential first step that significantly reduces the image size and speeds up the subsequent analysis. One of the primary difficulties of this task is segmenting lung regions covered by dense abnormalities, also known as opacities, caused by diseases such as pneumonia and tuberculosis. SAM has astonishing generalization capabilities for category-agnostic segmentation. In this study we propose an algorithm that improves the zero-shot performance of SAM on the lung region segmentation task through automatic prompt selection. Two separate U-Net models were trained, one for predicting the lung segments and another for the heart segment. Though these predictions lack fine detail around the edges, they provide positive and negative points as prompts for SAM. Using the proposed prompting method, the zero-shot performance of SAM is evaluated on two benchmark datasets. The ViT-l version of the model achieved slightly better performance than the other two versions, ViT-h and ViT-b, yielding average Dice scores of 95.5 percent and 94.9 percent on hold-out data for the two datasets, respectively. Although SAM produced outstanding segmentations for most images, its predictions were far off for some of them. Careful inspection revealed that all of these images exhibited either extreme abnormalities or distorted lung shapes. Unlike most prior research on lung segmentation from CXR images using SAM, this study proposes a fully automated prompt selection process that uses only the input image. Our findings indicate that using pretrained models for prompt selection can exploit SAM's impressive generalization capability to its full extent.<|reference_end|>
arxiv
@article{deriche2024k-sam:, title={K-SAM: A Prompting Method Using Pretrained U-Net to Improve Zero Shot Performance of SAM on Lung Segmentation in CXR Images}, author={Mohamed Deriche, Mohammad Marufur}, journal={arXiv preprint arXiv:2410.06825}, year={2024}, archivePrefix={arXiv}, eprint={2410.06825}, primaryClass={eess.IV cs.LG} }
deriche2024k-sam:
arxiv-667502
2410.06828
Transfer Learning for a Class of Cascade Dynamical Systems
<|reference_start|>Transfer Learning for a Class of Cascade Dynamical Systems: This work considers the problem of transfer learning in the context of reinforcement learning. Specifically, we consider training a policy in a reduced order system and deploying it in the full state system. The motivation for this training strategy is that running simulations in the full-state system may take excessive time if the dynamics are complex. While transfer learning alleviates the computational issue, the transfer guarantees depend on the discrepancy between the two systems. In this work, we consider a class of cascade dynamical systems, where the dynamics of a subset of the state-space influence the rest of the states but not vice-versa. The reinforcement learning policy learns in a model that ignores the dynamics of these states and treats them as commanded inputs. In the full-state system, these dynamics are handled using a classic controller (e.g., a PID). These systems have vast applications in the control literature and their structure allows us to provide transfer guarantees that depend on the stability of the inner loop controller. Numerical experiments on a quadrotor support the theoretical findings.<|reference_end|>
arxiv
@article{rabiei2024transfer, title={Transfer Learning for a Class of Cascade Dynamical Systems}, author={Shima Rabiei, Sandipan Mishra, Santiago Paternain}, journal={arXiv preprint arXiv:2410.06828}, year={2024}, archivePrefix={arXiv}, eprint={2410.06828}, primaryClass={cs.LG} }
rabiei2024transfer
arxiv-667503
2410.06832
Learning a generalized multiscale prolongation operator
<|reference_start|>Learning a generalized multiscale prolongation operator: Multigrid preconditioners are among the most powerful techniques for solving large sparse linear systems. In this research, we address Darcy flow problems with random permeability using the conjugate gradient method, enhanced by a two-grid preconditioner based on a generalized multiscale prolongation operator, which has been demonstrated to be stable for high contrast profiles. To circumvent the need for repeatedly solving spectral problems with varying coefficients, we harness deep learning techniques to expedite the construction of the generalized multiscale prolongation operator. Considering that linear transformations of the multiscale basis have no impact on the performance of the preconditioner, we devise a loss function based on a coefficient-based distance between subspaces instead of the $l^2$-norm of the difference of the corresponding multiscale bases. We discover that leveraging the inherent symmetry in the local spectral problem can effectively accelerate the neural network training process. In scenarios where training data are limited, we utilize the Karhunen-Lo\`eve expansion to augment the dataset. Extensive numerical experiments with various types of random coefficient models are presented, showing that the proposed method can significantly reduce the time required to generate the prolongation operator while maintaining the original efficiency of the two-grid preconditioner.<|reference_end|>
arxiv
@article{liu2024learning, title={Learning a generalized multiscale prolongation operator}, author={Yucheng Liu, Shubin Fu, Yingjie Zhou, Changqing Ye, Eric T. Chung}, journal={arXiv preprint arXiv:2410.06832}, year={2024}, archivePrefix={arXiv}, eprint={2410.06832}, primaryClass={math.NA cs.NA} }
liu2024learning
arxiv-667504
2410.06833
Dynamic metastability in the self-attention model
<|reference_start|>Dynamic metastability in the self-attention model: We consider the self-attention model - an interacting particle system on the unit sphere, which serves as a toy model for Transformers, the deep neural network architecture behind the recent successes of large language models. We prove the appearance of dynamic metastability conjectured in [GLPR23] - although particles collapse to a single cluster in infinite time, they remain trapped near a configuration of several clusters for an exponentially long period of time. By leveraging a gradient flow interpretation of the system, we also connect our result to an overarching framework of slow motion of gradient flows proposed by Otto and Reznikoff [OR07] in the context of coarsening and the Allen-Cahn equation. We finally probe the dynamics beyond the exponentially long period of metastability, and illustrate that, under an appropriate time-rescaling, the energy reaches its global maximum in finite time and has a staircase profile, with trajectories manifesting saddle-to-saddle-like behavior, reminiscent of recent works in the analysis of training dynamics via gradient descent for two-layer neural networks.<|reference_end|>
arxiv
@article{geshkovski2024dynamic, title={Dynamic metastability in the self-attention model}, author={Borjan Geshkovski, Hugo Koubbi, Yury Polyanskiy, Philippe Rigollet}, journal={arXiv preprint arXiv:2410.06833}, year={2024}, archivePrefix={arXiv}, eprint={2410.06833}, primaryClass={cs.LG math.AP math.DS} }
geshkovski2024dynamic
arxiv-667505
2410.06837
Faster and Simpler Online Computation of String Net Frequency
<|reference_start|>Faster and Simpler Online Computation of String Net Frequency: An occurrence of a repeated substring $u$ in a string $S$ is called a net occurrence if extending the occurrence to the left or to the right decreases the number of occurrences to 1. The net frequency (NF) of a repeated substring $u$ in a string $S$ is the number of net occurrences of $u$ in $S$. Very recently, Guo et al. [SPIRE 2024] proposed an online $O(n \log \sigma)$-time algorithm that maintains a data structure of $O(n)$ space which answers Single-NF queries in $O(m\log \sigma + \sigma^2)$ time and reports all answers of the All-NF problem in $O(n\sigma^2)$ time. Here, $n$ is the length of the input string $S$, $m$ is the query pattern length, and $\sigma$ is the alphabet size. The $\sigma^2$ term is a major drawback of their method since computing string net frequencies is originally motivated for Chinese language processing where $\sigma$ can be in the thousands. This paper presents an improved online $O(n \log \sigma)$-time algorithm, which answers Single-NF queries in $O(m \log \sigma)$ time and reports all answers to the All-NF problem in output-optimal $O(|\mathsf{NF}^+(S)|)$ time, where $\mathsf{NF}^+(S)$ is the set of substrings of $S$ paired with their positive NF values. We note that $|\mathsf{NF}^+(S)| = O(n)$ always holds. In contrast to Guo et al.'s algorithm, which is based on Ukkonen's suffix tree construction, our algorithm is based on Weiner's suffix tree construction.<|reference_end|>
arxiv
@article{inenaga2024faster, title={Faster and Simpler Online Computation of String Net Frequency}, author={Shunsuke Inenaga}, journal={arXiv preprint arXiv:2410.06837}, year={2024}, archivePrefix={arXiv}, eprint={2410.06837}, primaryClass={cs.DS} }
inenaga2024faster
arxiv-667506
2410.06841
Boosting Few-Shot Detection with Large Language Models and Layout-to-Image Synthesis
<|reference_start|>Boosting Few-Shot Detection with Large Language Models and Layout-to-Image Synthesis: Recent advancements in diffusion models have enabled a wide range of works exploiting their ability to generate high-volume, high-quality data for use in various downstream tasks. One subclass of such models, dubbed Layout-to-Image Synthesis (LIS), learns to generate images conditioned on a spatial layout (bounding boxes, masks, poses, etc.) and has shown a promising ability to generate realistic images, albeit with limited layout-adherence. Moreover, the question of how to effectively transfer those models for scalable augmentation of few-shot detection data remains unanswered. Thus, we propose a collaborative framework employing a Large Language Model (LLM) and an LIS model for enhancing few-shot detection beyond state-of-the-art generative augmentation approaches. We leverage LLM's reasoning ability to extrapolate the spatial prior of the annotation space by generating new bounding boxes given only a few example annotations. Additionally, we introduce our novel layout-aware CLIP score for sample ranking, enabling tight coupling between generated layouts and images. Significant improvements on COCO few-shot benchmarks are observed. With our approach, a YOLOX-S baseline is boosted by more than 140%, 50%, 35% in mAP on the COCO 5-,10-, and 30-shot settings, respectively.<|reference_end|>
arxiv
@article{abdullah2024boosting, title={Boosting Few-Shot Detection with Large Language Models and Layout-to-Image Synthesis}, author={Ahmed Abdullah, Nikolas Ebert, Oliver Wasenm\"uller}, journal={arXiv preprint arXiv:2410.06841}, year={2024}, archivePrefix={arXiv}, eprint={2410.06841}, primaryClass={cs.CV} }
abdullah2024boosting
arxiv-667507
2410.06842
SurANet: Surrounding-Aware Network for Concealed Object Detection via Highly-Efficient Interactive Contrastive Learning Strategy
<|reference_start|>SurANet: Surrounding-Aware Network for Concealed Object Detection via Highly-Efficient Interactive Contrastive Learning Strategy: Concealed object detection (COD) in cluttered scenes is significant for various image processing applications. However, because concealed objects are typically very similar to their background, they are extremely hard to distinguish. The major obstacle is the tiny feature difference between the regions just inside and outside the object boundary, which makes it difficult for existing COD methods to achieve accurate results. In this paper, considering that surrounding environment information can be exploited to identify concealed objects, we propose a novel deep Surrounding-Aware Network, namely SurANet, for COD tasks, which introduces surrounding information into the feature extraction and the loss function to improve discrimination. First, we enhance the semantics of feature maps using differential fusion of surrounding features to highlight concealed objects. Next, a Surrounding-Aware Contrastive Loss is applied to identify the concealed object by learning surrounding feature maps contrastively. SurANet can then be trained end-to-end with high efficiency via our proposed Spatial-Compressed Correlation Transmission strategy; our investigation of feature dynamics and extensive experiments show that such features are well preserved. Finally, experimental results demonstrate that the proposed SurANet outperforms state-of-the-art COD methods on multiple real datasets. Our source code will be available at https://github.com/kyh433/SurANet.<|reference_end|>
arxiv
@article{kang2024suranet:, title={SurANet: Surrounding-Aware Network for Concealed Object Detection via Highly-Efficient Interactive Contrastive Learning Strategy}, author={Yuhan Kang, Qingpeng Li, Leyuan Fang, Jian Zhao, and Xuelong Li}, journal={arXiv preprint arXiv:2410.06842}, year={2024}, archivePrefix={arXiv}, eprint={2410.06842}, primaryClass={cs.CV} }
kang2024suranet:
arxiv-667508
2410.06843
Point-to-Point MIMO Channel Estimation by Exploiting Array Geometry and Clustered Multipath Propagation
<|reference_start|>Point-to-Point MIMO Channel Estimation by Exploiting Array Geometry and Clustered Multipath Propagation: A large-scale MIMO (multiple-input multiple-output) system offers significant advantages in wireless communication, including potential spatial multiplexing and beamforming capabilities. However, channel estimation becomes challenging with multiple antennas at both the transmitter and receiver ends. The minimum mean-squared error (MMSE) estimator, for instance, requires a spatial correlation matrix whose dimensions scale with the square of the product of the number of antennas on the transmitter and receiver sides. This scaling presents a substantial challenge, particularly as antenna counts increase in line with current technological trends. Traditional MIMO literature offers alternative channel estimators that mitigate the need to fully acquire the spatial correlation matrix. In this paper, we revisit point-to-point MIMO channel estimation and introduce a reduced-subspace least squares (RS-LS) channel estimator designed to eliminate physically impossible channel dimensions inherent in uniform planar arrays. Additionally, we propose a cluster-aware RS-LS estimator that leverages both reduced and cluster-specific subspace properties, significantly enhancing performance over the conventional RS-LS approach. Notably, both proposed methods obviate the need for full or partial knowledge of the spatial correlation matrix.<|reference_end|>
arxiv
@article{demir2024point-to-point, title={Point-to-Point MIMO Channel Estimation by Exploiting Array Geometry and Clustered Multipath Propagation}, author={\"Ozlem Tu\u{g}fe Demir and Emil Bj\"ornson}, journal={arXiv preprint arXiv:2410.06843}, year={2024}, archivePrefix={arXiv}, eprint={2410.06843}, primaryClass={eess.SP cs.IT math.IT} }
demir2024point-to-point
arxiv-667509
2410.06845
MentalArena: Self-play Training of Language Models for Diagnosis and Treatment of Mental Health Disorders
<|reference_start|>MentalArena: Self-play Training of Language Models for Diagnosis and Treatment of Mental Health Disorders: Mental health disorders are among the most serious diseases in the world. Most people with such a disease lack access to adequate care, which highlights the importance of training models for the diagnosis and treatment of mental health disorders. However, in the mental health domain, privacy concerns limit the accessibility of personalized treatment data, making it challenging to build powerful models. In this paper, we introduce MentalArena, a self-play framework to train language models by generating domain-specific personalized data, where we obtain a better model capable of making a personalized diagnosis and treatment (as a therapist) and providing information (as a patient). To accurately model human-like mental health patients, we devise Symptom Encoder, which simulates a real patient from both cognition and behavior perspectives. To address intent bias during patient-therapist interactions, we propose Symptom Decoder to compare diagnosed symptoms with encoded symptoms, and dynamically manage the dialogue between patient and therapist according to the identified deviations. We evaluated MentalArena on 6 benchmarks, including biomedical QA and mental health tasks, and compared it against 6 advanced models. Our models, fine-tuned on both GPT-3.5 and Llama-3-8b, significantly outperform their counterparts, including GPT-4o. We hope that our work can inspire future research on personalized care. Code is available at https://github.com/Scarelette/MentalArena/tree/main<|reference_end|>
arxiv
@article{li2024mentalarena:, title={MentalArena: Self-play Training of Language Models for Diagnosis and Treatment of Mental Health Disorders}, author={Cheng Li, May Fung, Qingyun Wang, Chi Han, Manling Li, Jindong Wang, Heng Ji}, journal={arXiv preprint arXiv:2410.06845}, year={2024}, archivePrefix={arXiv}, eprint={2410.06845}, primaryClass={cs.CL cs.AI cs.MA} }
li2024mentalarena:
arxiv-667510
2410.06846
Joint Fine-tuning and Conversion of Pretrained Speech and Language Models towards Linear Complexity
<|reference_start|>Joint Fine-tuning and Conversion of Pretrained Speech and Language Models towards Linear Complexity: Architectures such as Linformer and Mamba have recently emerged as competitive linear time replacements for transformers. However, corresponding large pretrained models are often unavailable, especially in non-text domains. To remedy this, we present a Cross-Architecture Layerwise Distillation (CALD) approach that jointly converts a transformer model to a linear time substitute and fine-tunes it to a target task. We also compare several means to guide the fine-tuning to optimally retain the desired inference capability from the original model. The methods differ in their use of the target model and the trajectory of the parameters. In a series of empirical studies on language processing, language modeling, and speech processing, we show that CALD can effectively recover the result of the original model, and that the guiding strategy contributes to the result. Some reasons for the variation are suggested.<|reference_end|>
arxiv
@article{he2024joint, title={Joint Fine-tuning and Conversion of Pretrained Speech and Language Models towards Linear Complexity}, author={Mutian He, Philip N. Garner}, journal={arXiv preprint arXiv:2410.06846}, year={2024}, archivePrefix={arXiv}, eprint={2410.06846}, primaryClass={cs.CL cs.AI cs.LG cs.SD eess.AS} }
he2024joint
arxiv-667511
2410.06847
A Safety Modulator Actor-Critic Method in Model-Free Safe Reinforcement Learning and Application in UAV Hovering
<|reference_start|>A Safety Modulator Actor-Critic Method in Model-Free Safe Reinforcement Learning and Application in UAV Hovering: This paper proposes a safety modulator actor-critic (SMAC) method to address safety constraints and mitigate overestimation in model-free safe reinforcement learning (RL). A safety modulator is developed to satisfy safety constraints by modulating actions, allowing the policy to ignore the safety constraint and focus on maximizing reward. Additionally, a distributional critic with a theoretical update rule for SMAC is proposed to mitigate the overestimation of Q-values under safety constraints. Both simulation and real-world experiments on Unmanned Aerial Vehicle (UAV) hovering confirm that SMAC can effectively maintain safety constraints and outperform mainstream baseline algorithms.<|reference_end|>
arxiv
@article{qi2024a, title={A Safety Modulator Actor-Critic Method in Model-Free Safe Reinforcement Learning and Application in UAV Hovering}, author={Qihan Qi, Xinsong Yang, Gang Xia, Daniel W. C. Ho, Pengyang Tang}, journal={arXiv preprint arXiv:2410.06847}, year={2024}, archivePrefix={arXiv}, eprint={2410.06847}, primaryClass={cs.AI cs.LG cs.RO} }
qi2024a
arxiv-667512
2410.06848
Forgetting Through Transforming: Enabling Federated Unlearning via Class-Aware Representation Transformation
<|reference_start|>Forgetting Through Transforming: Enabling Federated Unlearning via Class-Aware Representation Transformation: Federated Unlearning (FU) enables clients to selectively remove the influence of specific data from a trained federated learning model, addressing privacy concerns and regulatory requirements. However, existing FU methods often struggle to balance effective erasure with model utility preservation, especially for class-level unlearning in non-IID settings. We propose Federated Unlearning via Class-aware Representation Transformation (FUCRT), a novel method that achieves unlearning through class-aware representation transformation. FUCRT employs two key components: (1) a transformation class selection strategy to identify optimal forgetting directions, and (2) a transformation alignment technique using dual class-aware contrastive learning to ensure consistent transformations across clients. Extensive experiments on four datasets demonstrate FUCRT's superior performance in terms of erasure guarantee, model utility preservation, and efficiency. FUCRT achieves complete (100\%) erasure of unlearning classes while maintaining or improving performance on remaining classes, outperforming state-of-the-art baselines across both IID and Non-IID settings. Analysis of the representation space reveals FUCRT's ability to effectively merge unlearning class representations with the transformation class from remaining classes, closely mimicking the model retrained from scratch.<|reference_end|>
arxiv
@article{guo2024forgetting, title={Forgetting Through Transforming: Enabling Federated Unlearning via Class-Aware Representation Transformation}, author={Qi Guo, Zhen Tian, Minghao Yao, Yong Qi, Saiyu Qi, Yun Li, and Jin Song Dong}, journal={arXiv preprint arXiv:2410.06848}, year={2024}, archivePrefix={arXiv}, eprint={2410.06848}, primaryClass={cs.LG} }
guo2024forgetting
arxiv-667513
2410.06849
On the Security and Design of Cryptosystems Using Gabidulin-Kronecker Product Codes
<|reference_start|>On the Security and Design of Cryptosystems Using Gabidulin-Kronecker Product Codes: This paper is a preliminary study on the security and design of cryptosystems using Gabidulin-Kronecker Product Codes. In particular, we point out the design impracticality of the system, and propose ways to improve it.<|reference_end|>
arxiv
@article{lau2024on, title={On the Security and Design of Cryptosystems Using Gabidulin-Kronecker Product Codes}, author={Terry Shue Chien Lau, Zhe Sun, Sook-Chin Yip, Ji-Jian Chin and Choo-Yee Ting}, journal={arXiv preprint arXiv:2410.06849}, year={2024}, archivePrefix={arXiv}, eprint={2410.06849}, primaryClass={cs.CR} }
lau2024on
arxiv-667514
2410.06850
A robust solver for large-scale heat transfer topology optimization
<|reference_start|>A robust solver for large-scale heat transfer topology optimization: This paper presents a large-scale parallel solver, specifically designed to tackle the challenges of solving high-dimensional and high-contrast linear systems in heat transfer topology optimization. The solver incorporates an interpolation technique to accelerate convergence in high-resolution domains, along with a multiscale multigrid preconditioner to handle complex coefficient fields with significant contrast. All modules of the optimization solver are implemented on a high performance computing cluster by the PETSc numerical library. Through a series of numerical investigations, we demonstrate the effectiveness of our approach in enhancing convergence and robustness during the optimization process, particularly in high-contrast scenarios with resolutions up to $1024^3$. Our performance results indicate that the proposed preconditioner achieves over $2\times$ speedup against the default algebraic multigrid in PETSc for high-contrast cases.<|reference_end|>
arxiv
@article{zhou2024a, title={A robust solver for large-scale heat transfer topology optimization}, author={Yingjie Zhou, Changqing Ye, Yucheng Liu, Shubin Fu, Eric T. Chung}, journal={arXiv preprint arXiv:2410.06850}, year={2024}, archivePrefix={arXiv}, eprint={2410.06850}, primaryClass={math.NA cs.NA} }
zhou2024a
arxiv-667515
2410.06851
Understanding Model Ensemble in Transferable Adversarial Attack
<|reference_start|>Understanding Model Ensemble in Transferable Adversarial Attack: Model ensemble adversarial attack has become a powerful method for generating transferable adversarial examples that can target even unknown models, but its theoretical foundation remains underexplored. To address this gap, we provide early theoretical insights that serve as a roadmap for advancing model ensemble adversarial attack. We first define transferability error to measure the error in adversarial transferability, alongside concepts of diversity and empirical model ensemble Rademacher complexity. We then decompose the transferability error into vulnerability, diversity, and a constant, which rigidly explains the origin of transferability error in model ensemble attack: the vulnerability of an adversarial example to ensemble components, and the diversity of ensemble components. Furthermore, we apply the latest mathematical tools in information theory to bound the transferability error using complexity and generalization terms, contributing to three practical guidelines for reducing transferability error: (1) incorporating more surrogate models, (2) increasing their diversity, and (3) reducing their complexity in cases of overfitting. Finally, extensive experiments with 54 models validate our theoretical framework, representing a significant step forward in understanding transferable model ensemble adversarial attacks.<|reference_end|>
arxiv
@article{yao2024understanding, title={Understanding Model Ensemble in Transferable Adversarial Attack}, author={Wei Yao, Zeliang Zhang, Huayi Tang, Yong Liu}, journal={arXiv preprint arXiv:2410.06851}, year={2024}, archivePrefix={arXiv}, eprint={2410.06851}, primaryClass={cs.LG cs.AI} }
yao2024understanding
arxiv-667516
2410.06852
Safe Reinforcement Learning Filter for Multicopter Collision-Free Tracking under disturbances
<|reference_start|>Safe Reinforcement Learning Filter for Multicopter Collision-Free Tracking under disturbances: This paper proposes a safe reinforcement learning filter (SRLF) to realize multicopter collision-free trajectory tracking with input disturbance. A novel robust control barrier function (RCBF) with its analysis techniques is introduced to avoid collisions with unknown disturbances during tracking. To ensure the system state remains within the safe set, the RCBF gain is designed in control action. A safety filter is introduced to transform unsafe reinforcement learning (RL) control inputs into safe ones, allowing RL training to proceed without explicitly considering safety constraints. The SRLF obtains rigorous guaranteed safe control action by solving a quadratic programming (QP) problem that incorporates forward invariance of RCBF and input saturation constraints. Both simulation and real-world experiments on multicopters demonstrate the effectiveness and excellent performance of SRLF in achieving collision-free tracking under input disturbances and saturation.<|reference_end|>
arxiv
@article{qi2024safe, title={Safe Reinforcement Learning Filter for Multicopter Collision-Free Tracking under disturbances}, author={Qihan Qi, Xinsong Yang, Gang Xia}, journal={arXiv preprint arXiv:2410.06852}, year={2024}, archivePrefix={arXiv}, eprint={2410.06852}, primaryClass={cs.RO} }
qi2024safe
arxiv-667517
2410.06854
Focal Surface Holographic Light Transport using Learned Spatially Adaptive Convolutions
<|reference_start|>Focal Surface Holographic Light Transport using Learned Spatially Adaptive Convolutions: Computer-Generated Holography (CGH) is a set of algorithmic methods for identifying holograms that reconstruct Three-Dimensional (3D) scenes in holographic displays. CGH algorithms decompose 3D scenes into multiplanes at different depth levels and rely on simulations of light that propagated from a source plane to a targeted plane. Thus, for n planes, CGH typically optimizes holograms using n plane-to-plane light transport simulations, leading to major time and computational demands. Our work replaces multiple planes with a focal surface and introduces a learned light transport model that could propagate a light field from a source plane to the focal surface in a single inference. Our learned light transport model leverages spatially adaptive convolution to achieve depth-varying propagation demanded by targeted focal surfaces. The proposed model reduces the hologram optimization process up to 1.5x, which contributes to hologram dataset generation and the training of future learned CGH models.<|reference_end|>
arxiv
@article{zheng2024focal, title={Focal Surface Holographic Light Transport using Learned Spatially Adaptive Convolutions}, author={Chuanjun Zheng and Yicheng Zhan and Liang Shi and Ozan Cakmakci and Kaan Ak\c{s}it}, journal={arXiv preprint arXiv:2410.06854}, year={2024}, archivePrefix={arXiv}, eprint={2410.06854}, primaryClass={cs.GR cs.HC} }
zheng2024focal
arxiv-667518
2410.06855
RIS-Assisted ISAC: Precoding and Phase-Shift Optimization for Mono-Static Target Detection
<|reference_start|>RIS-Assisted ISAC: Precoding and Phase-Shift Optimization for Mono-Static Target Detection: The reconfigurable intelligent surface (RIS) technology emerges as a highly useful component of the rapidly evolving integrated sensing and communications paradigm, primarily owing to its remarkable signal-to-noise ratio enhancement capabilities. In this paper, our focus is on mono-static target detection while considering the communication requirement of a user equipment. Both sensing and communication benefit from the presence of an RIS, which makes the channels richer and stronger. Diverging from prior research, we comprehensively examine three target echo paths: the direct (static) channel path, the path via the RIS, and a combination of these, each characterized by distinct radar cross sections (RCSs). We take both the line-of-sight (LOS) and the non-line-of-sight (NLOS) paths into account under clutter whose distribution is not known, but whose low-rank subspace is known. We derive the generalized likelihood ratio test (GLRT) detector and introduce a novel approach for jointly optimizing the configuration of RIS phase-shifts and precoding. Our simulation results underscore the paramount importance of this combined design in terms of enhancing detection probability. Moreover, it becomes evident that the derived clutter-aware target detection significantly enhances detection performance, especially when the clutter is strong.<|reference_end|>
arxiv
@article{demir2024ris-assisted, title={RIS-Assisted ISAC: Precoding and Phase-Shift Optimization for Mono-Static Target Detection}, author={\"Ozlem Tu\u{g}fe Demir and Emil Bj\"ornson}, journal={arXiv preprint arXiv:2410.06855}, year={2024}, archivePrefix={arXiv}, eprint={2410.06855}, primaryClass={eess.SP cs.ET} }
demir2024ris-assisted
arxiv-667519
2410.06856
On Wagner's k-Tree Algorithm Over Integers
<|reference_start|>On Wagner's k-Tree Algorithm Over Integers: The k-Tree algorithm [Wagner 02] is a non-trivial algorithm for the average-case k-SUM problem that has found widespread use in cryptanalysis. Its input consists of k lists, each containing n integers from a range of size m. Wagner's original heuristic analysis suggested that this algorithm succeeds with constant probability if n = m^{1/(\log{k}+1)}, and that in this case it runs in time O(kn). Subsequent rigorous analysis of the algorithm [Lyubashevsky 05, Shallue 08, Joux-Kippen-Loss 24] has shown that it succeeds with high probability if the input list sizes are significantly larger than this. We present a broader rigorous analysis of the k-Tree algorithm, showing upper and lower bounds on its success probability and complexity for any size of the input lists. Our results confirm Wagner's heuristic conclusions, and also give meaningful bounds for a wide range of list sizes that are not covered by existing analyses. We present analytical bounds that are asymptotically tight, as well as an efficient algorithm that computes (provably correct) bounds for a wide range of concrete parameter settings. We also do the same for the k-Tree algorithm over Z_m. Finally, we present experimental evaluation of the tightness of our results.<|reference_end|>
arxiv
@article{lin2024on, title={On Wagner's k-Tree Algorithm Over Integers}, author={Haoxing Lin and Prashant Nalini Vasudevan}, journal={arXiv preprint arXiv:2410.06856}, year={2024}, archivePrefix={arXiv}, eprint={2410.06856}, primaryClass={cs.CR cs.DS} }
lin2024on
arxiv-667520
2410.06857
Digital Dotted Lines: Design and Evaluation of a Prototype for Digitally Signing Documents Using Identity Wallets
<|reference_start|>Digital Dotted Lines: Design and Evaluation of a Prototype for Digitally Signing Documents Using Identity Wallets: Documents are largely stored and shared digitally. Yet, digital documents are still commonly signed using (copies of) handwritten signatures, which are sensitive to fraud. Though secure, cryptography-based signature solutions exist, they are hardly used due to usability issues. This paper proposes to use digital identity wallets for securely and intuitively signing digital documents with verified personal data. Using expert feedback, we implemented this vision in an interactive prototype. The prototype was assessed in a moderated usability test (N = 15) and a subsequent unmoderated remote usability test (N = 99). While participants generally expressed satisfaction with the system, they also misunderstood how to interpret the signature information displayed by the prototype. Specifically, signed documents were also trusted when the document was signed with irrelevant personal data of the signer. We conclude that such unwarranted trust forms a threat to usable digital signatures and requires attention by the usable security community.<|reference_end|>
arxiv
@article{last2024digital, title={Digital Dotted Lines: Design and Evaluation of a Prototype for Digitally Signing Documents Using Identity Wallets}, author={Yorick Last, Jorrit Geels, Hanna Schraffenberger}, journal={Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems. Article 108, 1-11}, year={2024}, doi={10.1145/3613905.3650977}, archivePrefix={arXiv}, eprint={2410.06857}, primaryClass={cs.HC} }
last2024digital
arxiv-667521
2410.06863
Application of a Temporal Multiscale Method for Efficient Simulation of Degradation in PEM Water Electrolysis under Dynamic Operation
<|reference_start|>Application of a Temporal Multiscale Method for Efficient Simulation of Degradation in PEM Water Electrolysis under Dynamic Operation: Hydrogen is vital for sectors like chemicals and others, driven by the need to reduce carbon emissions. Proton Exchange Membrane Water Electrolysis (PEMWE) is a key technology for the production of green hydrogen under the fluctuating conditions of renewable power sources. However, due to the scarcity of noble metal materials, the stability of the anode catalyst layer under dynamic operating conditions must be better understood. Model-aided investigation approaches are essential due to the black-box nature of the electrochemical system and the high costs of experimental long-term testing. In this work, a temporal multiscale method based on a heterogeneous technique is applied to reduce the computational effort of simulating long-term degradation, focused on catalyst dissolution. Such an approach characterizes the problem in terms of fast, locally periodic processes influenced by the dynamic operation, and slow processes attributed to the gradual degradation of the catalyst layer. A mechanistic model that includes the oxygen evolution reaction, catalyst dissolution and hydrogen permeation from the cathode to the anode side is hypothesized and implemented. The multiscale approach notably reduces the computational effort of the simulation from hours to mere minutes. This efficiency gain is ascribed to the limited evolution of the Slow-Scale variables during each period of the Fast-Scale variables. Consequently, simulation of the fast processes is required only until local periodicity is achieved within each Slow-Scale time step. Thus, the developed temporal multiscale approach proves to be highly effective in accelerating parameter estimation and predictive simulation steps, as verified by the results of this article. In this way, the method can support systematic model development to describe degradation in PEMWE under dynamic operating conditions.<|reference_end|>
arxiv
@article{dominguez2024application, title={Application of a Temporal Multiscale Method for Efficient Simulation of Degradation in PEM Water Electrolysis under Dynamic Operation}, author={Dayron Chang Dominguez (1), An Phuc Dam (2), Thomas Richter (1), Kai Sundmacher (2), Shaun M. Alia (3) ((1) Otto-von-Guericke University, Magdeburg Germany, (2) Max-Planck-Institute for Dynamics of Complex Technical Systems, Magdeburg Germany, (3) National Renewable Energy Laboratory, Colorado United States of America)}, journal={arXiv preprint arXiv:2410.06863}, year={2024}, archivePrefix={arXiv}, eprint={2410.06863}, primaryClass={math.NA cs.NA} }
dominguez2024application
arxiv-667522
2410.06865
Students' Perceptions and Use of Generative AI Tools for Programming Across Different Computing Courses
<|reference_start|>Students' Perceptions and Use of Generative AI Tools for Programming Across Different Computing Courses: Investigation of students' perceptions and opinions on the use of generative artificial intelligence (GenAI) in education is a topic gaining much interest. Studies addressing this are typically conducted with large heterogeneous groups, at one moment in time. However, how students perceive and use GenAI tools can potentially depend on many factors, including their background knowledge, familiarity with the tools, and the learning goals and policies of the courses they are taking. In this study we explore how students following computing courses use GenAI for programming-related tasks across different programs and courses: Bachelor and Master, in courses in which learning programming is the learning goal, courses that require programming as a means to achieve another goal, and in courses in which programming is optional, but can be useful. We are also interested in changes over time, since GenAI capabilities are changing at a fast pace, and users are adopting GenAI increasingly. We conducted three consecutive surveys (fall `23, winter `23, and spring `24) among students of all computing programs of a large European research university. We asked questions on the use in education, ethics, and job prospects, and we included specific questions on the (dis)allowed use of GenAI tools in the courses they were taking at the time. We received 264 responses, which we quantitatively and qualitatively analyzed, to find out how students have employed GenAI tools across 59 different computing courses, and whether the opinion of an average student about these tools evolves over time. Our study contributes to the emerging discussion of how to differentiate GenAI use across different courses, and how to align its use with the learning goals of a computing course.<|reference_end|>
arxiv
@article{keuning2024students', title={Students' Perceptions and Use of Generative AI Tools for Programming Across Different Computing Courses}, author={Hieke Keuning, Isaac Alpizar-Chacon, Ioanna Lykourentzou, Lauren Beehler, Christian K\"oppe, Imke de Jong, Sergey Sosnovsky}, journal={arXiv preprint arXiv:2410.06865}, year={2024}, archivePrefix={arXiv}, eprint={2410.06865}, primaryClass={cs.CY cs.AI} }
keuning2024students'
arxiv-667523
2410.06866
Secure Video Quality Assessment Resisting Adversarial Attacks
<|reference_start|>Secure Video Quality Assessment Resisting Adversarial Attacks: The exponential surge in video traffic has intensified the imperative for Video Quality Assessment (VQA). Leveraging cutting-edge architectures, current VQA models have achieved human-comparable accuracy. However, recent studies have revealed the vulnerability of existing VQA models against adversarial attacks. To establish a reliable and practical assessment system, a secure VQA model capable of resisting such malicious attacks is urgently demanded. Unfortunately, no attempt has been made to explore this issue. This paper first attempts to investigate general adversarial defense principles, aiming at endowing existing VQA models with security. Specifically, we first introduce random spatial grid sampling on the video frame for intra-frame defense. Then, we design pixel-wise randomization through a guardian map, globally neutralizing adversarial perturbations. Meanwhile, we extract temporal information from the video sequence as compensation for inter-frame defense. Building upon these principles, we present a novel VQA framework from the security-oriented perspective, termed SecureVQA. Extensive experiments indicate that SecureVQA sets a new benchmark in security while achieving competitive VQA performance compared with state-of-the-art models. Ablation studies delve deeper into analyzing the principles of SecureVQA, demonstrating their generalization and contributions to the security of leading VQA models.<|reference_end|>
arxiv
@article{zhang2024secure, title={Secure Video Quality Assessment Resisting Adversarial Attacks}, author={Ao-Xiang Zhang, Yu Ran, Weixuan Tang, Yuan-Gen Wang, Qingxiao Guan, and Chunsheng Yang}, journal={arXiv preprint arXiv:2410.06866}, year={2024}, archivePrefix={arXiv}, eprint={2410.06866}, primaryClass={cs.CV eess.IV} }
zhang2024secure
arxiv-667524
2410.06868
Online Matching Meets Sampling Without Replacement
<|reference_start|>Online Matching Meets Sampling Without Replacement: Sampling without replacement is a natural online rounding strategy for converting fractional bipartite matching into an integral one. In Online Bipartite Matching, we can use the Balance algorithm to fractionally match each online vertex, and then sample an unmatched offline neighbor with probability proportional to the fractional matching. In Online Stochastic Matching, we can take the solution to a linear program relaxation as a reference, and then match each online vertex to an unmatched offline neighbor with probability proportional to the fractional matching of the online vertex's type. On the one hand, we find empirical evidence that online matching algorithms based on sampling without replacement outperform existing algorithms. On the other hand, the literature offers little theoretical understanding of the power of sampling without replacement in online matching problems. This paper fills the gap in the literature by giving the first non-trivial competitive analyses of sampling without replacement for online matching problems. In Online Stochastic Matching, we develop a potential function analysis framework to show that sampling without replacement is at least $0.707$-competitive. The new analysis framework further allows us to derandomize the algorithm to obtain the first polynomial-time deterministic algorithm that breaks the $1-\frac{1}{e}$ barrier. In Online Bipartite Matching, we show that sampling without replacement provides provable online correlated selection guarantees when the selection probabilities correspond to the fractional matching chosen by the Balance algorithm. As a result, we prove that sampling without replacement is at least $0.513$-competitive for Online Bipartite Matching.<|reference_end|>
arxiv
@article{huang2024online, title={Online Matching Meets Sampling Without Replacement}, author={Zhiyi Huang, Chui Shan Lee, Jianqiao Lu, Xinkai Shu}, journal={arXiv preprint arXiv:2410.06868}, year={2024}, archivePrefix={arXiv}, eprint={2410.06868}, primaryClass={cs.DS cs.GT} }
huang2024online
arxiv-667525
2410.06875
Group Shapley Value and Counterfactual Simulations in a Structural Model
<|reference_start|>Group Shapley Value and Counterfactual Simulations in a Structural Model: We propose a variant of the Shapley value, the group Shapley value, to interpret counterfactual simulations in structural economic models by quantifying the importance of different components. Our framework compares two sets of parameters, partitioned into multiple groups, and applying group Shapley value decomposition yields unique additive contributions to the changes between these sets. The relative contributions sum to one, enabling us to generate an importance table that is as easily interpretable as a regression table. The group Shapley value can be characterized as the solution to a constrained weighted least squares problem. Using this property, we develop robust decomposition methods to address scenarios where inputs for the group Shapley value are missing. We first apply our methodology to a simple Roy model and then illustrate its usefulness by revisiting two published papers.<|reference_end|>
arxiv
@article{kwon2024group, title={Group Shapley Value and Counterfactual Simulations in a Structural Model}, author={Yongchan Kwon, Sokbae Lee, Guillaume A. Pouliot}, journal={arXiv preprint arXiv:2410.06875}, year={2024}, archivePrefix={arXiv}, eprint={2410.06875}, primaryClass={econ.EM cs.LG stat.ME} }
kwon2024group
arxiv-667526
2410.06877
Best-of-Both-Worlds Fair Allocation of Indivisible and Mixed Goods
<|reference_start|>Best-of-Both-Worlds Fair Allocation of Indivisible and Mixed Goods: We study the problem of fairly allocating either a set of indivisible goods or a set of mixed divisible and indivisible goods (i.e., mixed goods) to agents with additive utilities, taking the best-of-both-worlds perspective of guaranteeing fairness properties both ex ante and ex post. The ex-post fairness notions considered in this paper are relaxations of envy-freeness, specifically, EFX for indivisible-goods allocation, and EFM for mixed-goods allocation. For two agents, we show that there is a polynomial-time randomized algorithm that achieves ex-ante envy-freeness and ex-post EFX / EFM simultaneously. For $n$ agents with bi-valued utilities, we show there exist randomized allocations that are (i) ex-ante proportional and ex-post EFM, and (ii) ex-ante envy-free, ex-post EFX, and ex-post fractionally Pareto optimal.<|reference_end|>
arxiv
@article{bu2024best-of-both-worlds, title={Best-of-Both-Worlds Fair Allocation of Indivisible and Mixed Goods}, author={Xiaolin Bu, Zihao Li, Shengxin Liu, Xinhang Lu, Biaoshuai Tao}, journal={arXiv preprint arXiv:2410.06877}, year={2024}, archivePrefix={arXiv}, eprint={2410.06877}, primaryClass={cs.GT} }
bu2024best-of-both-worlds
arxiv-667527
2410.06878
Noise is All You Need: Private Second-Order Convergence of Noisy SGD
<|reference_start|>Noise is All You Need: Private Second-Order Convergence of Noisy SGD: Private optimization is a topic of major interest in machine learning, with differentially private stochastic gradient descent (DP-SGD) playing a key role in both theory and practice. Furthermore, DP-SGD is known to be a powerful tool in contexts beyond privacy, including robustness, machine unlearning, etc. Existing analyses of DP-SGD either make relatively strong assumptions (e.g., Lipschitz continuity of the loss function, or even convexity) or prove only first-order convergence (and thus might end at a saddle point in the non-convex setting). At the same time, there has been progress in proving second-order convergence of the non-private version of ``noisy SGD'', as well as progress in designing algorithms that are more complex than DP-SGD and do guarantee second-order convergence. We revisit DP-SGD and show that ``noise is all you need'': the noise necessary for privacy already implies second-order convergence under the standard smoothness assumptions, even for non-Lipschitz loss functions. Hence, we get second-order convergence essentially for free: DP-SGD, the workhorse of modern private optimization, under minimal assumptions can be used to find a second-order stationary point.<|reference_end|>
arxiv
@article{avdiukhin2024noise, title={Noise is All You Need: Private Second-Order Convergence of Noisy SGD}, author={Dmitrii Avdiukhin, Michael Dinitz, Chenglin Fan, Grigory Yaroslavtsev}, journal={arXiv preprint arXiv:2410.06878}, year={2024}, archivePrefix={arXiv}, eprint={2410.06878}, primaryClass={cs.LG} }
avdiukhin2024noise
arxiv-667528
2410.06879
Evaluating Model Performance with Hard-Swish Activation Function Adjustments
<|reference_start|>Evaluating Model Performance with Hard-Swish Activation Function Adjustments: In the field of pattern recognition, achieving high accuracy is essential. While training a model to recognize different complex images, it is vital to fine-tune the model to achieve the highest accuracy possible. One strategy for fine-tuning a model involves changing its activation function. Most pre-trained models use ReLU as their default activation function, but switching to a different activation function like Hard-Swish could be beneficial. This study evaluates the performance of models using ReLU, Swish and Hard-Swish activation functions across diverse image datasets. Our results show a 2.06% increase in accuracy for models on the CIFAR-10 dataset and a 0.30% increase in accuracy for models on the ATLAS dataset. Modifying the activation functions in the architecture of pre-trained models leads to improved overall accuracy.<|reference_end|>
arxiv
@article{pydimarry2024evaluating, title={Evaluating Model Performance with Hard-Swish Activation Function Adjustments}, author={Sai Abhinav Pydimarry, Shekhar Madhav Khairnar, Sofia Garces Palacios, Ganesh Sankaranarayanan, Darian Hoagland, Dmitry Nepomnayshy, Huu Phong Nguyen}, journal={RECPAD 2024}, year={2024}, archivePrefix={arXiv}, eprint={2410.06879}, primaryClass={cs.CV} }
pydimarry2024evaluating
arxiv-667529
2410.06880
Cooperative UAV-Relay based Satellite Aerial Ground Integrated Networks
<|reference_start|>Cooperative UAV-Relay based Satellite Aerial Ground Integrated Networks: In the post-fifth generation (5G) era, escalating user quality of service (QoS) demands strain terrestrial network capacity, especially in urban areas with dynamic traffic distributions. This paper introduces a novel cooperative unmanned aerial vehicle relay-based deployment (CUD) framework in satellite air-ground integrated networks (SAGIN). The CUD strategy deploys an unmanned aerial vehicle-based relay (UAVr) in an amplify-and-forward (AF) mode to enhance user QoS when terrestrial base stations fall short of network capacity. By combining low earth orbit (LEO) satellite and UAVr signals using cooperative diversity, the CUD framework enhances the signal-to-noise ratio (SNR) at the user. Comparative evaluations against existing frameworks reveal performance improvements, demonstrating the effectiveness of the CUD framework in addressing the evolving demands of next-generation networks.<|reference_end|>
arxiv
@article{bhola2024cooperative, title={Cooperative UAV-Relay based Satellite Aerial Ground Integrated Networks}, author={Bhola, Yu-Jia Chen, Ashutosh Balakrishnan, Swades De, Li-Chun Wang}, journal={arXiv preprint arXiv:2410.06880}, year={2024}, archivePrefix={arXiv}, eprint={2410.06880}, primaryClass={eess.SY cs.SY eess.SP} }
bhola2024cooperative
arxiv-667530
2410.06881
Privately Counting Partially Ordered Data
<|reference_start|>Privately Counting Partially Ordered Data: We consider differentially private counting when each data point consists of $d$ bits satisfying a partial order. Our main technical contribution is a problem-specific $K$-norm mechanism that runs in time $O(d^2)$. Experiments show that, depending on the partial order in question, our solution dominates existing pure differentially private mechanisms, and can reduce their error by an order of magnitude or more.<|reference_end|>
arxiv
@article{joseph2024privately, title={Privately Counting Partially Ordered Data}, author={Matthew Joseph and M\'onica Ribero and Alexander Yu}, journal={arXiv preprint arXiv:2410.06881}, year={2024}, archivePrefix={arXiv}, eprint={2410.06881}, primaryClass={cs.CR} }
joseph2024privately
arxiv-667531
2410.06883
Degree Distribution based Spiking Graph Networks for Domain Adaptation
<|reference_start|>Degree Distribution based Spiking Graph Networks for Domain Adaptation: Spiking Graph Networks (SGNs) have garnered significant attention from both researchers and industry due to their ability to address energy consumption challenges in graph classification. However, SGNs are only effective for in-distribution data and cannot tackle out-of-distribution data. In this paper, we first propose the domain adaptation problem in SGNs, and introduce a novel framework named Degree-aware Spiking Graph Domain Adaptation for Classification. The proposed DeSGDA addresses the spiking graph domain adaptation problem from three aspects: node degree-aware personalized spiking representation, adversarial feature distribution alignment, and pseudo-label distillation. First, we introduce the personalized spiking representation method for generating degree-dependent spiking signals. Specifically, the threshold for triggering a spike is determined by the node degree, allowing this personalized approach to capture more expressive information for classification. Then, we propose the graph feature distribution alignment module that is adversarially trained using membrane potential against a domain discriminator. Such an alignment module can efficiently maintain high performance and low energy consumption in the case of inconsistent distributions. Additionally, we extract consistent predictions across two spaces to create reliable pseudo-labels, effectively leveraging unlabeled data to enhance graph classification performance. Extensive experiments on benchmark datasets validate the superiority of the proposed DeSGDA compared with competitive baselines.<|reference_end|>
arxiv
@article{wang2024degree, title={Degree Distribution based Spiking Graph Networks for Domain Adaptation}, author={Yingxu Wang, Siwei Liu, Mengzhu Wang, Shangsong Liang, Nan Yin}, journal={arXiv preprint arXiv:2410.06883}, year={2024}, archivePrefix={arXiv}, eprint={2410.06883}, primaryClass={cs.LG cs.AI} }
wang2024degree
arxiv-667532
2410.06884
Adaptive Refinement Protocols for Distributed Distribution Estimation under $\ell^p$-Losses
<|reference_start|>Adaptive Refinement Protocols for Distributed Distribution Estimation under $\ell^p$-Losses: Consider the communication-constrained estimation of discrete distributions under $\ell^p$ losses, where each distributed terminal holds multiple independent samples and uses limited number of bits to describe the samples. We obtain the minimax optimal rates of the problem in most parameter regimes. An elbow effect of the optimal rates at $p=2$ is clearly identified. To show the optimal rates, we first design estimation protocols to achieve them. The key ingredient of these protocols is to introduce adaptive refinement mechanisms, which first generate rough estimate by partial information and then establish refined estimate in subsequent steps guided by the rough estimate. The protocols leverage successive refinement, sample compression and thresholding methods to achieve the optimal rates in different parameter regimes. The optimality of the protocols is shown by deriving compatible minimax lower bounds.<|reference_end|>
arxiv
@article{yuan2024adaptive, title={Adaptive Refinement Protocols for Distributed Distribution Estimation under $\ell^p$-Losses}, author={Deheng Yuan, Tao Guo and Zhongyi Huang}, journal={arXiv preprint arXiv:2410.06884}, year={2024}, archivePrefix={arXiv}, eprint={2410.06884}, primaryClass={cs.LG cs.IT math.IT math.ST stat.TH} }
yuan2024adaptive
arxiv-667533
2410.06885
F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching
<|reference_start|>F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching: This paper introduces F5-TTS, a fully non-autoregressive text-to-speech system based on flow matching with Diffusion Transformer (DiT). Without requiring complex designs such as duration model, text encoder, and phoneme alignment, the text input is simply padded with filler tokens to the same length as input speech, and then the denoising is performed for speech generation, which was originally proved feasible by E2 TTS. However, the original design of E2 TTS makes it hard to follow due to its slow convergence and low robustness. To address these issues, we first model the input with ConvNeXt to refine the text representation, making it easy to align with the speech. We further propose an inference-time Sway Sampling strategy, which significantly improves our model's performance and efficiency. This sampling strategy for flow step can be easily applied to existing flow matching based models without retraining. Our design allows faster training and achieves an inference RTF of 0.15, which is greatly improved compared to state-of-the-art diffusion-based TTS models. Trained on a public 100K hours multilingual dataset, our Fairytaler Fakes Fluent and Faithful speech with Flow matching (F5-TTS) exhibits highly natural and expressive zero-shot ability, seamless code-switching capability, and speed control efficiency. Demo samples can be found at https://SWivid.github.io/F5-TTS. We release all code and checkpoints to promote community development.<|reference_end|>
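As a small illustration of the input convention described above (padding text with filler tokens to the length of the speech sequence), here is a minimal Python sketch; it is not the released F5-TTS code, and the token IDs and function name are assumptions.

    def pad_text_to_speech_length(text_ids, num_mel_frames, filler_id=0):
        """Pad (or truncate) text token IDs so they match the number of mel frames,
        mirroring the filler-token input convention described in the abstract."""
        if len(text_ids) >= num_mel_frames:
            return text_ids[:num_mel_frames]
        return text_ids + [filler_id] * (num_mel_frames - len(text_ids))

    # usage: pad_text_to_speech_length([12, 7, 42], num_mel_frames=8)
    #        -> [12, 7, 42, 0, 0, 0, 0, 0]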
arxiv
@article{chen2024f5-tts:, title={F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching}, author={Yushen Chen, Zhikang Niu, Ziyang Ma, Keqi Deng, Chunhui Wang, Jian Zhao, Kai Yu, Xie Chen}, journal={arXiv preprint arXiv:2410.06885}, year={2024}, archivePrefix={arXiv}, eprint={2410.06885}, primaryClass={eess.AS cs.SD} }
chen2024f5-tts:
arxiv-667534
2410.06886
FltLM: An Intergrated Long-Context Large Language Model for Effective Context Filtering and Understanding
<|reference_start|>FltLM: An Intergrated Long-Context Large Language Model for Effective Context Filtering and Understanding: The development of Long-Context Large Language Models (LLMs) has markedly advanced natural language processing by facilitating the processing of textual data across long documents and multiple corpora. However, Long-Context LLMs still face two critical challenges: the "lost in the middle" phenomenon, where crucial middle-context information is likely to be missed, and the distraction issue, where models lose focus due to overly extended contexts. To address these challenges, we propose the Context Filtering Language Model (FltLM), a novel integrated Long-Context LLM which enhances the ability of the model on multi-document question-answering (QA) tasks. Specifically, FltLM innovatively incorporates a context filter with a soft mask mechanism, identifying and dynamically excluding irrelevant content to concentrate on pertinent information for better comprehension and reasoning. Our approach not only mitigates these two challenges, but also enables the model to operate conveniently in a single forward pass. Experimental results demonstrate that FltLM significantly outperforms supervised fine-tuning and retrieval-based methods in complex QA scenarios, suggesting a promising solution for more accurate and reliable long-context natural language understanding applications.<|reference_end|>
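The soft-mask idea described above can be sketched in a few lines of Python, assuming chunk-level relevance scores are already produced by a learned context filter; this is not the FltLM implementation, and the sigmoid gating and temperature are illustrative assumptions.

    import torch

    def soft_mask_context(chunk_hidden, relevance_logits, temperature=1.0):
        """Down-weight likely-irrelevant context chunks with a soft (differentiable) mask.

        chunk_hidden:      (num_chunks, hidden_dim) pooled chunk representations
        relevance_logits:  (num_chunks,) scores from a learned context filter
        """
        gate = torch.sigmoid(relevance_logits / temperature)   # soft mask in [0, 1]
        return chunk_hidden * gate.unsqueeze(-1)               # filtered chunk states

    # usage: filtered = soft_mask_context(torch.randn(5, 768), torch.randn(5))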
arxiv
@article{deng2024fltlm:, title={FltLM: An Intergrated Long-Context Large Language Model for Effective Context Filtering and Understanding}, author={Jingyang Deng, Zhengyang Shen, Boyang Wang, Lixin Su, Suqi Cheng, Ying Nie, Junfeng Wang, Dawei Yin and Jinwen Ma}, journal={arXiv preprint arXiv:2410.06886}, year={2024}, archivePrefix={arXiv}, eprint={2410.06886}, primaryClass={cs.CL} }
deng2024fltlm:
arxiv-667535
2410.06889
Subspace method of moments for ab initio 3-D single-particle Cryo-EM reconstruction
<|reference_start|>Subspace method of moments for ab initio 3-D single-particle Cryo-EM reconstruction: Cryo-electron microscopy (Cryo-EM) is a widely-used technique for recovering the 3-D structure of biological molecules from a large number of experimentally generated noisy 2-D tomographic projection images of the 3-D structure, taken from unknown viewing angles. Through computationally intensive algorithms, these observed images are processed to reconstruct the 3-D structures. Many popular computational methods rely on estimating the unknown angles as part of the reconstruction process, which becomes particularly challenging at low signal-to-noise ratio. The method of moments (MoM) offers an alternative approach that circumvents the estimation of viewing angles of individual projection images by instead estimating the underlying distribution of the viewing angles, and is robust to noise given sufficiently many images. However, the method of moments typically entails computing high-order moments of the projection images, incurring significant storage and computational costs. To mitigate this, we propose a new approach called the subspace method of moments (subspace MoM), which compresses the first three moments using data-driven low-rank tensor techniques as well as expansion into a suitable function basis. The compressed moments can be efficiently computed from the set of projection images using numerical quadrature and can be employed to jointly recover the 3-D structure and the distribution of viewing angles. We illustrate the practical applicability of the subspace MoM in numerical experiments using up to the third-order moment, which significantly improves the resolution of MoM reconstructions compared to previous approaches.<|reference_end|>
arxiv
@article{hoskins2024subspace, title={Subspace method of moments for ab initio 3-D single-particle Cryo-EM reconstruction}, author={Jeremy Hoskins, Yuehaw Khoo, Oscar Mickelin, Amit Singer, Yuguan Wang}, journal={arXiv preprint arXiv:2410.06889}, year={2024}, archivePrefix={arXiv}, eprint={2410.06889}, primaryClass={math.NA cs.NA} }
hoskins2024subspace
arxiv-667536
2410.06892
Selecting the Best Sequential Transfer Path for Medical Image Segmentation with Limited Labeled Data
<|reference_start|>Selecting the Best Sequential Transfer Path for Medical Image Segmentation with Limited Labeled Data: The medical image processing field often encounters the critical issue of scarce annotated data. Transfer learning has emerged as a solution, yet how to select an adequate source task and effectively transfer the knowledge to the target task remains challenging. To address this, we propose a novel sequential transfer scheme with a task affinity metric tailored for medical images. Considering the characteristics of medical image segmentation tasks, we analyze the image and label similarity between tasks and compute the task affinity scores, which assess the relatedness among tasks. Based on this, we select appropriate source tasks and develop an effective sequential transfer strategy by incorporating intermediate source tasks to gradually narrow the domain discrepancy and minimize the transfer cost. Thereby we identify the best sequential transfer path for the given target task. Extensive experiments on three MRI medical datasets, FeTS 2022, iSeg-2019, and WMH, demonstrate the efficacy of our method in finding the best source sequence. Compared with directly transferring from a single source task, the sequential transfer results underline a significant improvement in target task performance, achieving an average of 2.58% gain in terms of segmentation Dice score, notably, 6.00% for FeTS 2022. Code is available at the git repository.<|reference_end|>
arxiv
@article{yang2024selecting, title={Selecting the Best Sequential Transfer Path for Medical Image Segmentation with Limited Labeled Data}, author={Jingyun Yang, Jingge Wang, Guoqing Zhang, Yang Li}, journal={arXiv preprint arXiv:2410.06892}, year={2024}, archivePrefix={arXiv}, eprint={2410.06892}, primaryClass={eess.IV cs.CV} }
yang2024selecting
arxiv-667537
2410.06893
Learning from Spatio-temporal Correlation for Semi-Supervised LiDAR Semantic Segmentation
<|reference_start|>Learning from Spatio-temporal Correlation for Semi-Supervised LiDAR Semantic Segmentation: We address the challenges of the semi-supervised LiDAR segmentation (SSLS) problem, particularly in low-budget scenarios. The two main issues in low-budget SSLS are the poor-quality pseudo-labels for unlabeled data, and the performance drops due to the significant imbalance between ground-truth and pseudo-labels. This imbalance leads to a vicious training cycle. To overcome these challenges, we leverage the spatio-temporal prior by recognizing the substantial overlap between temporally adjacent LiDAR scans. We propose a proximity-based label estimation, which generates highly accurate pseudo-labels for unlabeled data by utilizing semantic consistency with adjacent labeled data. Additionally, we enhance this method by progressively expanding the pseudo-labels from the nearest unlabeled scans, which helps significantly reduce errors linked to dynamic classes. Additionally, we employ a dual-branch structure to mitigate performance degradation caused by data imbalance. Experimental results demonstrate remarkable performance in low-budget settings (i.e., <= 5%) and meaningful improvements in normal budget settings (i.e., 5 - 50%). Finally, our method has achieved new state-of-the-art results on SemanticKITTI and nuScenes in semi-supervised LiDAR segmentation. With only 5% labeled data, it offers competitive results against fully-supervised counterparts. Moreover, it surpasses the performance of the previous state-of-the-art at 100% labeled data (75.2%) using only 20% of labeled data (76.0%) on nuScenes. The code is available on https://github.com/halbielee/PLE.<|reference_end|>
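The proximity-based label estimation described above can be illustrated by a simple nearest-neighbour transfer from a labelled scan to a temporally adjacent unlabelled scan, assuming both scans have already been aligned into a common coordinate frame; this Python sketch uses scipy's KD-tree and is not the authors' code.

    import numpy as np
    from scipy.spatial import cKDTree

    def propagate_labels(labeled_xyz, labeled_sem, unlabeled_xyz, max_dist=0.5, ignore=-1):
        """Give each unlabeled point the label of its nearest labeled point,
        keeping only matches within max_dist (meters); labeled_sem is a numpy array."""
        tree = cKDTree(labeled_xyz)
        dist, idx = tree.query(unlabeled_xyz, k=1)
        pseudo = labeled_sem[idx].copy()
        pseudo[dist > max_dist] = ignore        # too far away: leave unlabeled
        return pseudo

    # usage: pseudo = propagate_labels(xyz_t, sem_t, xyz_t_plus_1)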
arxiv
@article{lee2024learning, title={Learning from Spatio-temporal Correlation for Semi-Supervised LiDAR Semantic Segmentation}, author={Seungho Lee, Hwijeong Lee and Hyunjung Shim}, journal={arXiv preprint arXiv:2410.06893}, year={2024}, archivePrefix={arXiv}, eprint={2410.06893}, primaryClass={cs.CV} }
lee2024learning
arxiv-667538
2410.06895
Average Certified Radius is a Poor Metric for Randomized Smoothing
<|reference_start|>Average Certified Radius is a Poor Metric for Randomized Smoothing: Randomized smoothing is a popular approach for providing certified robustness guarantees against adversarial attacks, and has become a very active area of research. Over the past years, the average certified radius (ACR) has emerged as the single most important metric for comparing methods and tracking progress in the field. However, in this work, we show that ACR is an exceptionally poor metric for evaluating robustness guarantees provided by randomized smoothing. We theoretically show not only that a trivial classifier can have arbitrarily large ACR, but also that ACR is much more sensitive to improvements on easy samples than on hard ones. Empirically, we confirm that existing training strategies that improve ACR reduce the model's robustness on hard samples. Further, we show that by focusing on easy samples, we can effectively replicate the increase in ACR. We develop strategies, including explicitly discarding hard samples, reweighing the dataset with certified radius, and extreme optimization for easy samples, to achieve state-of-the-art ACR, although these strategies ignore robustness for the general data distribution. Overall, our results suggest that ACR has introduced a strong undesired bias to the field, and better metrics are required to holistically evaluate randomized smoothing.<|reference_end|>
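For reference, randomized-smoothing certificates are commonly computed as R = sigma * Phi^{-1}(p_A) for correctly classified samples and 0 otherwise, and ACR is the mean of these radii over the test set. The toy Python sketch below, with made-up probabilities and sigma, shows how a few high-confidence easy samples dominate the average, which is the failure mode argued above.

    import numpy as np
    from scipy.stats import norm

    def certified_radius(p_a, sigma):
        """Standard randomized-smoothing radius: sigma * Phi^{-1}(p_A)."""
        return sigma * norm.ppf(p_a)

    def average_certified_radius(p_a, correct, sigma=0.5):
        """ACR: mean radius over all samples, counting 0 for misclassified ones."""
        radii = np.where(correct, certified_radius(p_a, sigma), 0.0)
        return radii.mean()

    # two very easy samples and two hard ones: the easy pair dominates the mean
    p_a = np.array([0.999, 0.999, 0.55, 0.55])
    correct = np.array([True, True, True, True])
    print(average_certified_radius(p_a, correct, sigma=0.5))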
arxiv
@article{sun2024average, title={Average Certified Radius is a Poor Metric for Randomized Smoothing}, author={Chenhao Sun, Yuhao Mao, Mark Niklas M\"uller, Martin Vechev}, journal={arXiv preprint arXiv:2410.06895}, year={2024}, archivePrefix={arXiv}, eprint={2410.06895}, primaryClass={cs.LG} }
sun2024average
arxiv-667539
2410.06898
Generative Model for Less-Resourced Language with 1 billion parameters
<|reference_start|>Generative Model for Less-Resourced Language with 1 billion parameters: Large language models (LLMs) are a basic infrastructure for modern natural language processing. Many commercial and open-source LLMs exist for English, e.g., ChatGPT, Llama, Falcon, and Mistral. As these models are trained on mostly English texts, their fluency and knowledge of low-resource languages and societies are superficial. We present the development of large generative language models for a less-resourced language. GaMS 1B (Generative Model for Slovene with 1 billion parameters) was created by continued pretraining of the existing English OPT model. We developed a new tokenizer adapted to the Slovene, Croatian, and English languages and used the embedding initialization methods FOCUS and WECHSEL to transfer the embeddings from the English OPT model. We evaluate our models on several classification datasets from the Slovene suite of benchmarks and the generative sentence simplification task SENTA. We used only few-shot in-context learning with our models, which are not yet instruction-tuned. For classification tasks, in this mode, the generative models lag behind the existing Slovene BERT-type models fine-tuned for specific tasks. On the sentence simplification task, the GaMS models achieve comparable or better performance than the GPT-3.5-Turbo model.<|reference_end|>
arxiv
@article{vreš2024generative, title={Generative Model for Less-Resourced Language with 1 billion parameters}, author={Domen Vre\v{s}, Martin Bo\v{z}i\v{c}, Alja\v{z} Poto\v{c}nik, Toma\v{z} Martin\v{c}i\v{c}, Marko Robnik-\v{S}ikonja}, journal={arXiv preprint arXiv:2410.06898}, year={2024}, archivePrefix={arXiv}, eprint={2410.06898}, primaryClass={cs.CL} }
vreš2024generative
arxiv-667540
2410.06903
On the restriction to unitarity for rational approximations to the exponential function
<|reference_start|>On the restriction to unitarity for rational approximations to the exponential function: In the present work we consider rational best approximations to the exponential function that minimize a uniform error on a subset of the imaginary axis. Namely, Chebyshev approximation and unitary best approximation where the latter is subject to further restriction to unitarity, i.e., requiring that the imaginary axis is mapped to the unit circle. We show that Chebyshev approximants are not unitary, and consequently, distinct to unitary best approximants. However, unitary best approximation attains at most twice the error of Chebyshev approximation, and thus, the restriction to unitarity is not a severe restriction in a practical setting. Moreover, Chebyshev approximation and unitary best approximation attain the same asymptotic error as the underlying domain of approximation shrinks to the origin.<|reference_end|>
arxiv
@article{jawecki2024on, title={On the restriction to unitarity for rational approximations to the exponential function}, author={Tobias Jawecki}, journal={arXiv preprint arXiv:2410.06903}, year={2024}, archivePrefix={arXiv}, eprint={2410.06903}, primaryClass={math.NA cs.NA} }
jawecki2024on
arxiv-667541
2410.06905
Reliable Probabilistic Human Trajectory Prediction for Autonomous Applications
<|reference_start|>Reliable Probabilistic Human Trajectory Prediction for Autonomous Applications: Autonomous systems, like vehicles or robots, require reliable, accurate, fast, resource-efficient, scalable, and low-latency trajectory predictions to get initial knowledge about future locations and movements of surrounding objects for safe human-machine interaction. Furthermore, they need to know the uncertainty of the predictions for risk assessment to provide safe path planning. This paper presents a lightweight method to address these requirements, combining Long Short-Term Memory and Mixture Density Networks. Our method predicts probability distributions, including confidence level estimations for positional uncertainty to support subsequent risk management applications and runs on a low-power embedded platform. We discuss essential requirements for human trajectory prediction in autonomous vehicle applications and demonstrate our method's performance using multiple traffic-related datasets. Furthermore, we explain reliability and sharpness metrics and show how important they are to guarantee the correctness and robustness of a model's predictions and uncertainty assessments. These essential evaluations have so far received little attention for no good reason. Our approach focuses entirely on real-world applicability. Verifying prediction uncertainties and a model's reliability are central to autonomous real-world applications. Our framework and code are available at: https://github.com/kav-institute/mdn_trajectory_forecasting.<|reference_end|>
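A compact PyTorch sketch of the kind of architecture described above (an LSTM encoder feeding a mixture density head that outputs a 2-D Gaussian mixture for one future step) is given below; the layer sizes and output parameterization are assumptions, not the authors' exact model.

    import torch
    import torch.nn as nn

    class LSTMMDN(nn.Module):
        """LSTM encoder + mixture density network head for one prediction step."""
        def __init__(self, in_dim=2, hidden=64, n_mix=5):
            super().__init__()
            self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
            self.n_mix = n_mix
            # per mixture component: weight logit, 2 means, 2 log-sigmas -> 5 values
            self.head = nn.Linear(hidden, n_mix * 5)

        def forward(self, track):                      # track: (B, T, 2) past positions
            _, (h, _) = self.lstm(track)
            out = self.head(h[-1]).view(-1, self.n_mix, 5)
            pi = torch.softmax(out[..., 0], dim=-1)    # mixture weights
            mu = out[..., 1:3]                         # (B, n_mix, 2) means
            sigma = torch.exp(out[..., 3:5])           # positive standard deviations
            return pi, mu, sigma

    # usage: pi, mu, sigma = LSTMMDN()(torch.randn(8, 12, 2))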
arxiv
@article{hetzel2024reliable, title={Reliable Probabilistic Human Trajectory Prediction for Autonomous Applications}, author={Manuel Hetzel, Hannes Reichert, Konrad Doll and Bernhard Sick}, journal={arXiv preprint arXiv:2410.06905}, year={2024}, archivePrefix={arXiv}, eprint={2410.06905}, primaryClass={cs.CV} }
hetzel2024reliable
arxiv-667542
2410.06911
Combining Planning and Diffusion for Mobility with Unknown Dynamics
<|reference_start|>Combining Planning and Diffusion for Mobility with Unknown Dynamics: Manipulation of large objects over long horizons (such as carts in a warehouse) is an essential skill for deployable robotic systems. Large objects require mobile manipulation which involves simultaneous manipulation, navigation, and movement with the object in tow. In many real-world situations, object dynamics are incredibly complex, such as the interaction of an office chair (with a rotating base and five caster wheels) and the ground. We present a hierarchical algorithm for long-horizon robot manipulation problems in which the dynamics are partially unknown. We observe that diffusion-based behavior cloning is highly effective for short-horizon problems with unknown dynamics, so we decompose the problem into an abstract high-level, obstacle-aware motion-planning problem that produces a waypoint sequence. We use a short-horizon, relative-motion diffusion policy to achieve the waypoints in sequence. We train mobile manipulation policies on a Spot robot that has to push and pull an office chair. Our hierarchical manipulation policy performs consistently better, especially when the horizon increases, compared to a diffusion policy trained on long-horizon demonstrations or motion planning assuming a rigidly-attached object (success rate of 8 (versus 0 and 5 respectively) out of 10 runs). Importantly, our learned policy generalizes to new layouts, grasps, chairs, and flooring that induces more friction, without any further training, showing promise for other complex mobile manipulation problems. Project Page: https://yravan.github.io/plannerorderedpolicy/<|reference_end|>
arxiv
@article{ravan2024combining, title={Combining Planning and Diffusion for Mobility with Unknown Dynamics}, author={Yajvan Ravan, Zhutian Yang, Tao Chen, Tom\'as Lozano-P\'erez, Leslie Pack Kaelbling}, journal={arXiv preprint arXiv:2410.06911}, year={2024}, archivePrefix={arXiv}, eprint={2410.06911}, primaryClass={cs.RO cs.AI} }
ravan2024combining
arxiv-667543
2410.06912
Compositional Entailment Learning for Hyperbolic Vision-Language Models
<|reference_start|>Compositional Entailment Learning for Hyperbolic Vision-Language Models: Image-text representation learning forms a cornerstone in vision-language models, where pairs of images and textual descriptions are contrastively aligned in a shared embedding space. Since visual and textual concepts are naturally hierarchical, recent work has shown that hyperbolic space can serve as a high-potential manifold to learn vision-language representation with strong downstream performance. In this work, for the first time we show how to fully leverage the innate hierarchical nature of hyperbolic embeddings by looking beyond individual image-text pairs. We propose Compositional Entailment Learning for hyperbolic vision-language models. The idea is that an image is not only described by a sentence but is itself a composition of multiple object boxes, each with their own textual description. Such information can be obtained freely by extracting nouns from sentences and using openly available localized grounding models. We show how to hierarchically organize images, image boxes, and their textual descriptions through contrastive and entailment-based objectives. Empirical evaluation on a hyperbolic vision-language model trained with millions of image-text pairs shows that the proposed compositional learning approach outperforms conventional Euclidean CLIP learning, as well as recent hyperbolic alternatives, with better zero-shot and retrieval generalization and clearly stronger hierarchical performance.<|reference_end|>
arxiv
@article{pal2024compositional, title={Compositional Entailment Learning for Hyperbolic Vision-Language Models}, author={Avik Pal, Max van Spengler, Guido Maria D'Amely di Melendugno, Alessandro Flaborea, Fabio Galasso, Pascal Mettes}, journal={arXiv preprint arXiv:2410.06912}, year={2024}, archivePrefix={arXiv}, eprint={2410.06912}, primaryClass={cs.CV cs.AI cs.LG} }
pal2024compositional
arxiv-667544
2410.06913
Utilize the Flow before Stepping into the Same River Twice: Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning
<|reference_start|>Utilize the Flow before Stepping into the Same River Twice: Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning: Refusal-Aware Instruction Tuning (RAIT) enables Large Language Models (LLMs) to refuse to answer unknown questions. By modifying responses of unknown questions in the training data to refusal responses such as "I don't know", RAIT enhances the reliability of LLMs and reduces their hallucination. Generally, RAIT modifies training samples based on the correctness of the initial LLM's response. However, this crude approach can cause LLMs to excessively refuse answering questions they could have correctly answered, the problem we call over-refusal. In this paper, we explore two primary causes of over-refusal: Static conflict emerges when the RAIT data is constructed solely on correctness criteria, causing similar samples in the LLM's feature space to be assigned different labels (original vs. modified "I don't know"). Dynamic conflict occurs due to the changes of LLM's knowledge state during fine-tuning, which transforms previous unknown questions into knowns, while the training data, which is constructed based on the initial LLM, remains unchanged. These conflicts cause the trained LLM to misclassify known questions as unknown, resulting in over-refusal. To address this issue, we introduce Certainty Represented Knowledge Flow for Refusal-Aware Instructions Construction (CRaFT). CRaFT centers on two main contributions: First, we additionally incorporate response certainty to selectively filter and modify data, reducing static conflicts. Second, we implement preliminary rehearsal training to characterize changes in the LLM's knowledge state, which helps mitigate dynamic conflicts during the fine-tuning process. We conducted extensive experiments on open-ended question answering and multiple-choice question task. Experiment results show that CRaFT can improve LLM's overall performance during the RAIT process. Source code and training data will be released at Github.<|reference_end|>
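A minimal Python sketch of the certainty-based filtering step described above is given below, assuming several answers per question can be sampled from the initial model; the threshold values, the agreement-based certainty measure, and the sample structure are illustrative assumptions rather than the paper's exact procedure.

    from collections import Counter

    def certainty(answers):
        """Fraction of sampled answers agreeing with the most common one."""
        top_count = Counter(answers).most_common(1)[0][1]
        return top_count / len(answers)

    def build_rait_sample(question, gold, sampled_answers, low=0.3, high=0.8):
        """Keep confident-and-correct questions as knowns, turn confidently-wrong
        ones into refusals, and drop ambiguous middle-ground samples."""
        cert = certainty(sampled_answers)
        majority = Counter(sampled_answers).most_common(1)[0][0]
        if majority == gold and cert >= high:
            return {"q": question, "a": gold}             # known question, keep answer
        if majority != gold and cert <= low:
            return {"q": question, "a": "I don't know."}  # refusal sample
        return None                                        # filtered out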
arxiv
@article{zhu2024utilize, title={Utilize the Flow before Stepping into the Same River Twice: Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning}, author={Runchuan Zhu, Zhipeng Ma, Jiang Wu, Junyuan Gao, Jiaqi Wang, Dahua Lin, Conghui He}, journal={arXiv preprint arXiv:2410.06913}, year={2024}, archivePrefix={arXiv}, eprint={2410.06913}, primaryClass={cs.CL} }
zhu2024utilize
arxiv-667545
2410.06916
SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration
<|reference_start|>SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration: Speculative decoding (SD) has emerged as a widely used paradigm to accelerate the inference of large language models (LLMs) without compromising generation quality. It works by first employing a compact model to draft multiple tokens efficiently and then using the target LLM to verify them in parallel. While this technique has achieved notable speedups, most existing approaches necessitate either additional parameters or extensive training to construct effective draft models, thereby restricting their applicability across different LLMs and tasks. To address this limitation, we explore a novel plug-and-play SD solution with layer-skipping, which skips intermediate layers of the target LLM as the compact draft model. Our analysis reveals that LLMs exhibit great potential for self-acceleration through layer sparsity and the task-specific nature of this sparsity. Building on these insights, we introduce SWIFT, an on-the-fly self-speculative decoding algorithm that adaptively selects intermediate layers of LLMs to skip during inference. SWIFT does not require auxiliary models or additional training, making it a plug-and-play solution for accelerating LLM inference across diverse input data streams. Our extensive experiments across a wide range of models and downstream tasks demonstrate that SWIFT can achieve over a 1.3x-1.6x speedup while preserving the original distribution of the generated text.<|reference_end|>
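The layer-skipping draft-then-verify loop described above can be outlined schematically in Python; draft_with_skipped_layers and verify_full_model are hypothetical helpers standing in for running the target LLM with a subset of layers and with all layers, so this is a sketch of the control flow rather than the SWIFT implementation.

    def self_speculative_step(prompt_ids, skip_layers,
                              draft_with_skipped_layers, verify_full_model,
                              draft_len=4):
        """One draft-and-verify round: the thinned model proposes draft_len tokens,
        the full model checks them in a single parallel pass, and the longest
        verified prefix (plus one corrected token) is kept."""
        draft = draft_with_skipped_layers(prompt_ids, skip_layers, draft_len)
        verified = verify_full_model(prompt_ids, draft)   # [(token, accepted, fix), ...]
        accepted = []
        for tok, ok, fix in verified:
            if ok:
                accepted.append(tok)
            else:
                accepted.append(fix)                      # full model's correction
                break
        return prompt_ids + accepted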
arxiv
@article{xia2024swift:, title={SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration}, author={Heming Xia, Yongqi Li, Jun Zhang, Cunxiao Du, Wenjie Li}, journal={arXiv preprint arXiv:2410.06916}, year={2024}, archivePrefix={arXiv}, eprint={2410.06916}, primaryClass={cs.CL} }
xia2024swift:
arxiv-667546
2410.06917
A structural description of Zykov and Blanche Descartes graphs
<|reference_start|>A structural description of Zykov and Blanche Descartes graphs: In 1949, Zykov proposed the first explicit construction of triangle-free graphs with arbitrarily large chromatic number. We define a Zykov graph as any induced subgraph of a graph created using Zykov's construction. We give a structural characterization of Zykov graphs based on a specific type of stable set, that we call splitting stable set. It implies that recognizing this class is NP-complete, while being FPT in the treewidth of the input graph. We provide similar results for the Blanche Descartes construction.<|reference_end|>
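For background, a common presentation of Zykov's construction, which may differ in details from the variant used in the paper, is the following: given triangle-free graphs G_1, ..., G_k, take their disjoint union and, for every way of choosing one vertex from each G_i, add a new vertex adjacent exactly to the chosen vertices. A small Python sketch of one such step, with graphs stored as adjacency dictionaries:

    from itertools import product

    def zykov_step(graphs):
        """Combine graphs G_1..G_k (each a dict vertex -> set of neighbours)
        into the next graph of the construction."""
        # disjoint union with relabelled vertices (i, v)
        union = {(i, v): {(i, u) for u in nbrs}
                 for i, g in enumerate(graphs) for v, nbrs in g.items()}
        # one new vertex per transversal, adjacent to its k chosen vertices
        for j, picks in enumerate(product(*[list(g) for g in graphs])):
            new = ("t", j)
            union[new] = set()
            for i, v in enumerate(picks):
                union[new].add((i, v))
                union[(i, v)].add(new)
        return union

    # usage: Z1 = {"a": set()}; Z2 = zykov_step([Z1]); Z3 = zykov_step([Z1, Z2])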
arxiv
@article{marin2024a, title={A structural description of Zykov and Blanche Descartes graphs}, author={Malory Marin, St\'ephan Thomass\'e, Nicolas Trotignon and R\'emi Watrigant}, journal={arXiv preprint arXiv:2410.06917}, year={2024}, archivePrefix={arXiv}, eprint={2410.06917}, primaryClass={math.CO cs.CC cs.DM} }
marin2024a
arxiv-667547
2410.06919
Neural Green's Function Accelerated Iterative Methods for Solving Indefinite Boundary Value Problems
<|reference_start|>Neural Green's Function Accelerated Iterative Methods for Solving Indefinite Boundary Value Problems: Neural operators, which learn mappings between function spaces, have been applied to solve boundary value problems in various ways, including learning mappings from the space of forcing terms to the space of solutions, which requires a substantial number of data pairs. In this work, we present a data-free neural operator integrated with physics, which learns the Green's kernel directly. Our method proceeds in three steps: 1. The governing equations for the Green's function are reformulated into an interface problem, where the Dirac delta function is removed; 2. The interface problem is embedded in a lifted space of higher dimension to handle the jump in the derivative, but still solved on a two-dimensional surface without additional sampling cost; 3. Deep neural networks are employed to address the curse of dimensionality caused by this lifting operation. The approximate Green's function obtained through our approach is then used, as its mathematical properties allow, to construct preconditioners for the linear systems. Furthermore, its spectral bias, revealed through both theoretical analysis and numerical validation, contrasts with the smoothing effects of traditional iterative solvers, which motivates us to propose a hybrid iterative method that combines these two solvers. Numerical experiments demonstrate the effectiveness of our approximate Green's function in accelerating iterative methods, proving fast convergence for solving indefinite problems even involving discontinuous coefficients.<|reference_end|>
arxiv
@article{li2024neural, title={Neural Green's Function Accelerated Iterative Methods for Solving Indefinite Boundary Value Problems}, author={Shengyan Li, Qi Sun, Xuejun Xu and Bowen Zheng}, journal={arXiv preprint arXiv:2410.06919}, year={2024}, archivePrefix={arXiv}, eprint={2410.06919}, primaryClass={math.NA cs.NA} }
li2024neural
arxiv-667548
2410.06920
To Be or Not to Be (in the EU): Measurement of Discrepancies Presented in Cookie Paywalls
<|reference_start|>To Be or Not to Be (in the EU): Measurement of Discrepancies Presented in Cookie Paywalls: Cookie paywalls allow visitors to access the content of a website only after making a choice between paying a fee (paying option) or accepting tracking (cookie option). The practice has been studied in previous research in regard to its prevalence and legal standing, but the effects of the clients' device and geographic location remain unexplored. To address these questions, this study explores the effects of three factors: 1) the clients' browser, 2) the device type (desktop or mobile), and 3) the geographic location on the presence and behavior of cookie paywalls and the handling of users' data. Using an automatic crawler on our dataset composed of 804 websites that present a cookie paywall, we observed that the presence of a cookie paywall was most affected by the geographic location of the user. We further showed that both the behavior of a cookie paywall and the processing of user data are impacted by all three factors, but no patterns of significance could be found. Finally, an additional type of paywall was discovered to be used on approximately 11% of the studied websites, coined the "double paywall", which consists of a cookie paywall complemented by another paywall once tracking is accepted.<|reference_end|>
arxiv
@article{stenwreth2024to, title={To Be or Not to Be (in the EU): Measurement of Discrepancies Presented in Cookie Paywalls}, author={Andreas Stenwreth, Simon T\"ang and Victor Morel}, journal={arXiv preprint arXiv:2410.06920}, year={2024}, archivePrefix={arXiv}, eprint={2410.06920}, primaryClass={cs.CY cs.ET} }
stenwreth2024to
arxiv-667549
2410.06921
Adversarial Vulnerability as a Consequence of On-Manifold Inseparibility
<|reference_start|>Adversarial Vulnerability as a Consequence of On-Manifold Inseparibility: Recent works have shown theoretically and empirically that redundant data dimensions are a source of adversarial vulnerability. However, the inverse doesn't seem to hold in practice; employing dimension-reduction techniques doesn't exhibit robustness as expected. In this work, we consider classification tasks and characterize the data distribution as a low-dimensional manifold, with high/low variance features defining the on/off manifold direction. We argue that clean training experiences poor convergence in the off-manifold direction caused by the ill-conditioning in widely used first-order optimizers like gradient descent. The poor convergence then acts as a source of adversarial vulnerability when the dataset is inseparable in the on-manifold direction. We provide theoretical results for logistic regression and a 2-layer linear network on the considered data distribution. Furthermore, we advocate using second-order methods that are immune to ill-conditioning and lead to better robustness. We perform experiments and exhibit tremendous robustness improvements in clean training through long training and the employment of second-order methods, corroborating our framework. Additionally, we find the inclusion of batch-norm layers hinders such robustness gains. We attribute this to differing implicit biases between traditional and batch-normalized neural networks.<|reference_end|>
arxiv
@article{haldar2024adversarial, title={Adversarial Vulnerability as a Consequence of On-Manifold Inseparibility}, author={Rajdeep Haldar, Yue Xing, Qifan Song, Guang Lin}, journal={arXiv preprint arXiv:2410.06921}, year={2024}, archivePrefix={arXiv}, eprint={2410.06921}, primaryClass={stat.ML cs.LG} }
haldar2024adversarial
arxiv-667550
2410.06922
Estimating Exoplanet Mass using Machine Learning on Incomplete Datasets
<|reference_start|>Estimating Exoplanet Mass using Machine Learning on Incomplete Datasets: The exoplanet archive is an incredible resource of information on the properties of discovered extrasolar planets, but statistical analysis has been limited by the number of missing values. One of the most informative bulk properties is planet mass, which is particularly challenging to measure: more than 70\% of discovered planets have no measured value. We compare the capabilities of five different machine learning algorithms that can utilize multidimensional incomplete datasets to estimate missing properties for imputing planet mass. The results are compared when using a partial subset of the archive with a complete set of six planet properties, and where all planet discoveries are leveraged in an incomplete set of six and eight planet properties. We find that imputation results improve with more data even when the additional data is incomplete, and allow a mass prediction for any planet regardless of which properties are known. Our favored algorithm is the newly developed $k$NN$\times$KDE, which can return a probability distribution for the imputed properties. The shape of this distribution can indicate the algorithm's level of confidence, and also inform on the underlying demographics of the exoplanet population. We demonstrate how the distributions can be interpreted with a series of examples for planets where the discovery was made with either the transit method or the radial velocity method. Finally, we test the generative capability of the $k$NN$\times$KDE to create a large synthetic population of planets based on the archive, and identify potential categories of planets from groups of properties in the multidimensional space. All code is open source.<|reference_end|>
arxiv
@article{lalande2024estimating, title={Estimating Exoplanet Mass using Machine Learning on Incomplete Datasets}, author={Florian Lalande, Elizabeth Tasker, Kenji Doya}, journal={arXiv preprint arXiv:2410.06922}, year={2024}, doi={10.33232/001c.124538}, archivePrefix={arXiv}, eprint={2410.06922}, primaryClass={astro-ph.EP astro-ph.IM cs.LG} }
lalande2024estimating
arxiv-667551
2410.06927
Spectral and Rhythm Features for Audio Classification with Deep Convolutional Neural Networks
<|reference_start|>Spectral and Rhythm Features for Audio Classification with Deep Convolutional Neural Networks: Convolutional neural networks (CNNs) are widely used in computer vision. They can be used not only to recognize patterns in conventional digital images, but also to extract features from image-like representations of spectral and rhythm features computed from time-domain digital audio signals for the acoustic classification of sounds. Different spectral and rhythm feature representations like mel-scaled spectrograms, mel-frequency cepstral coefficients (MFCCs), cyclic tempograms, short-time Fourier transform (STFT) chromagrams, constant-Q transform (CQT) chromagrams and chroma energy normalized statistics (CENS) chromagrams are investigated in terms of the audio classification performance using a deep convolutional neural network. It can be clearly shown that the mel-scaled spectrograms and the mel-frequency cepstral coefficients (MFCCs) perform significantly better than the other spectral and rhythm features investigated in this research for audio classification tasks using deep CNNs. The experiments were carried out with the aid of the ESC-50 dataset with 2,000 labeled environmental audio recordings.<|reference_end|>
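For readers who want to reproduce such features, the Python snippet below computes a mel-scaled spectrogram and MFCCs with the librosa library; the file path and parameter values are illustrative defaults, not necessarily those used in the paper.

    import librosa
    import numpy as np

    y, sr = librosa.load("example.wav", sr=22050)          # placeholder path, mono audio

    # mel-scaled spectrogram (power), converted to dB
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    mel_db = librosa.power_to_db(mel, ref=np.max)

    # mel-frequency cepstral coefficients
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)

    print(mel_db.shape, mfcc.shape)   # (n_mels, frames), (n_mfcc, frames)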
arxiv
@article{wolf-monheim2024spectral, title={Spectral and Rhythm Features for Audio Classification with Deep Convolutional Neural Networks}, author={Friedrich Wolf-Monheim}, journal={arXiv preprint arXiv:2410.06927}, year={2024}, archivePrefix={arXiv}, eprint={2410.06927}, primaryClass={cs.SD cs.LG eess.AS} }
wolf-monheim2024spectral
arxiv-667552
2410.06932
Reproducing and Extending Experiments in Behavioral Strategy with Large Language Models
<|reference_start|>Reproducing and Extending Experiments in Behavioral Strategy with Large Language Models: In this study, we propose LLM agents as a novel approach in behavioral strategy research, complementing simulations and laboratory experiments to advance our understanding of cognitive processes in decision-making. Specifically, we reproduce a human laboratory experiment in behavioral strategy using large language model (LLM) generated agents and investigate how LLM agents compare to observed human behavior. Our results show that LLM agents effectively reproduce search behavior and decision-making comparable to humans. Extending our experiment, we analyze LLM agents' simulated "thoughts," discovering that more forward-looking thoughts correlate with favoring exploitation over exploration to maximize wealth. We show how this new approach can be leveraged in behavioral strategy research and address limitations.<|reference_end|>
arxiv
@article{albert2024reproducing, title={Reproducing and Extending Experiments in Behavioral Strategy with Large Language Models}, author={Daniel Albert and Stephan Billinger}, journal={arXiv preprint arXiv:2410.06932}, year={2024}, archivePrefix={arXiv}, eprint={2410.06932}, primaryClass={econ.GN cs.AI q-fin.EC} }
albert2024reproducing
arxiv-667553
2410.06934
VEC-Sim: A Simulation Platform for Evaluating Service Caching and Computation Offloading Policies in Vehicular Edge Networks
<|reference_start|>VEC-Sim: A Simulation Platform for Evaluating Service Caching and Computation Offloading Policies in Vehicular Edge Networks: Computer simulation platforms offer an alternative solution by emulating complex systems in a controlled manner. However, existing Edge Computing (EC) simulators, as well as general-purpose vehicular network simulators, are not tailored for VEC and lack dedicated support for modeling the distinct access pattern, entity mobility trajectory and other unique characteristics of VEC networks. To fill this gap, this paper proposes VEC-Sim, a versatile simulation platform for in-depth evaluation and analysis of various service caching and computation offloading policies in VEC networks. VEC-Sim incorporates realistic mechanisms to replicate real-world access patterns, including service feature vector, vehicle mobility modeling, evolving service popularity, new service upload and user preference shifts, etc. Moreover, its modular architecture and extensive Application Programming Interfaces (APIs) allow seamless integration of customized scheduling policies and user-defined metrics. A comprehensive evaluation of VEC-Sim's capabilities is undertaken in comparison to real-world ground truths. Results prove it to be accurate in reproducing classical scheduling algorithms and extremely effective in conducting case studies.<|reference_end|>
arxiv
@article{wu2024vec-sim:, title={VEC-Sim: A Simulation Platform for Evaluating Service Caching and Computation Offloading Policies in Vehicular Edge Networks}, author={Fan Wu, Xiaolong Xu, Muhammad Bilal, Xiangwei Wang, Hao Cheng and Siyu Wu}, journal={arXiv preprint arXiv:2410.06934}, year={2024}, archivePrefix={arXiv}, eprint={2410.06934}, primaryClass={cs.NI} }
wu2024vec-sim:
arxiv-667554
2410.06935
Predicting Bitcoin Market Trends with Enhanced Technical Indicator Integration and Classification Models
<|reference_start|>Predicting Bitcoin Market Trends with Enhanced Technical Indicator Integration and Classification Models: Thanks to the high potential for profit, trading has become increasingly attractive to investors as the cryptocurrency and stock markets rapidly expand. However, because financial markets are intricate and dynamic, accurately predicting prices remains a significant challenge. The volatile nature of the cryptocurrency market makes it even harder for traders and investors to make decisions. This study presents a machine learning model based on classification to forecast the direction of the cryptocurrency market, i.e., whether prices will increase or decrease. The model is trained using historical data and important technical indicators such as the Moving Average Convergence Divergence, the Relative Strength Index, and Bollinger Bands. We illustrate our approach with an empirical study of the closing price of Bitcoin. Several simulations, including a confusion matrix and Receiver Operating Characteristic curve, are used to assess the model's performance, and the results show a buy/sell signal accuracy of over 92%. These findings demonstrate how machine learning models can assist investors and traders of cryptocurrencies in making wise/informed decisions in a very volatile market.<|reference_end|>
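The three indicators named above have standard textbook formulas; a pandas sketch follows, using the conventional window lengths, which may differ from those chosen in the study.

    import pandas as pd

    def add_indicators(df):
        """Append MACD, RSI(14) and Bollinger Bands(20, 2) to a DataFrame with a
        'close' price column (standard formulas, illustrative windows)."""
        close = df["close"]
        # MACD: difference of 12- and 26-period EMAs, with a 9-period signal line
        ema12 = close.ewm(span=12, adjust=False).mean()
        ema26 = close.ewm(span=26, adjust=False).mean()
        df["macd"] = ema12 - ema26
        df["macd_signal"] = df["macd"].ewm(span=9, adjust=False).mean()
        # RSI over 14 periods
        delta = close.diff()
        gain = delta.clip(lower=0).rolling(14).mean()
        loss = (-delta.clip(upper=0)).rolling(14).mean()
        df["rsi"] = 100 - 100 / (1 + gain / loss)
        # Bollinger Bands: 20-period SMA +/- 2 standard deviations
        mid = close.rolling(20).mean()
        std = close.rolling(20).std()
        df["bb_upper"], df["bb_lower"] = mid + 2 * std, mid - 2 * std
        return df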
arxiv
@article{hafid2024predicting, title={Predicting Bitcoin Market Trends with Enhanced Technical Indicator Integration and Classification Models}, author={Abdelatif Hafid, Mohamed Rahouti, Linglong Kong, Maad Ebrahim, Mohamed Adel Serhani}, journal={arXiv preprint arXiv:2410.06935}, year={2024}, archivePrefix={arXiv}, eprint={2410.06935}, primaryClass={cs.LG} }
hafid2024predicting
arxiv-667555
2410.06940
Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think
<|reference_start|>Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think: Recent studies have shown that the denoising process in (generative) diffusion models can induce meaningful (discriminative) representations inside the model, though the quality of these representations still lags behind those learned through recent self-supervised learning methods. We argue that one main bottleneck in training large-scale diffusion models for generation lies in effectively learning these representations. Moreover, training can be made easier by incorporating high-quality external visual representations, rather than relying solely on the diffusion models to learn them independently. We study this by introducing a straightforward regularization called REPresentation Alignment (REPA), which aligns the projections of noisy input hidden states in denoising networks with clean image representations obtained from external, pretrained visual encoders. The results are striking: our simple strategy yields significant improvements in both training efficiency and generation quality when applied to popular diffusion and flow-based transformers, such as DiTs and SiTs. For instance, our method can speed up SiT training by over 17.5$\times$, matching the performance (without classifier-free guidance) of a SiT-XL model trained for 7M steps in less than 400K steps. In terms of final generation quality, our approach achieves state-of-the-art results of FID=1.42 using classifier-free guidance with the guidance interval.<|reference_end|>
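The alignment regularizer described above can be sketched as a projection of the denoiser's hidden states into the dimension of a frozen pretrained encoder followed by a negative cosine-similarity loss; the projection head and the token-level pairing here are assumptions, not the exact REPA objective.

    import torch
    import torch.nn.functional as F

    def repa_style_loss(denoiser_hidden, encoder_feats, proj):
        """Align denoiser hidden states with frozen external encoder features.

        denoiser_hidden: (B, N, D_model) tokens from an intermediate DiT/SiT layer
        encoder_feats:   (B, N, D_enc)   patch features from a pretrained encoder
        proj:            module mapping D_model -> D_enc
        """
        z = proj(denoiser_hidden)                                  # (B, N, D_enc)
        cos = F.cosine_similarity(z, encoder_feats.detach(), dim=-1)
        return -cos.mean()                                         # maximize similarity

    # usage: loss = diffusion_loss + lambda_align * repa_style_loss(h, feats, proj_head)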
arxiv
@article{yu2024representation, title={Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think}, author={Sihyun Yu, Sangkyung Kwak, Huiwon Jang, Jongheon Jeong, Jonathan Huang, Jinwoo Shin, Saining Xie}, journal={arXiv preprint arXiv:2410.06940}, year={2024}, archivePrefix={arXiv}, eprint={2410.06940}, primaryClass={cs.CV cs.LG} }
yu2024representation
arxiv-667556
2410.06941
WorkflowHub: a registry for computational workflows
<|reference_start|>WorkflowHub: a registry for computational workflows: The rising popularity of computational workflows is driven by the need for repetitive and scalable data processing, sharing of processing know-how, and transparent methods. As both combined records of analysis and descriptions of processing steps, workflows should be reproducible, reusable, adaptable, and available. Workflow sharing presents opportunities to reduce unnecessary reinvention, promote reuse, increase access to best practice analyses for non-experts, and increase productivity. In reality, workflows are scattered and difficult to find, in part due to the diversity of available workflow engines and ecosystems, and because workflow sharing is not yet part of research practice. WorkflowHub provides a unified registry for all computational workflows that links to community repositories, and supports both the workflow lifecycle and making workflows findable, accessible, interoperable, and reusable (FAIR). By interoperating with diverse platforms, services, and external registries, WorkflowHub adds value by supporting workflow sharing, explicitly assigning credit, enhancing FAIRness, and promoting workflows as scholarly artefacts. The registry has a global reach, with hundreds of research organisations involved, and more than 700 workflows registered.<|reference_end|>
arxiv
@article{gustafsson2024workflowhub:, title={WorkflowHub: a registry for computational workflows}, author={Ove Johan Ragnar Gustafsson, Sean R. Wilkinson, Finn Bacall, Luca Pireddu, Stian Soiland-Reyes, Simone Leo, Stuart Owen, Nick Juty, Jos\'e M. Fern\'andez, Bj\"orn Gr\"uning, Tom Brown, Herv\'e M\'enager, Salvador Capella-Gutierrez, Frederik Coppens and Carole Goble}, journal={arXiv preprint arXiv:2410.06941}, year={2024}, archivePrefix={arXiv}, eprint={2410.06941}, primaryClass={cs.DL cs.SE} }
gustafsson2024workflowhub:
arxiv-667557
2410.06943
AutoFeedback: An LLM-based Framework for Efficient and Accurate API Request Generation
<|reference_start|>AutoFeedback: An LLM-based Framework for Efficient and Accurate API Request Generation: Large Language Models (LLMs) leverage external tools primarily by generating API requests to enhance task completion efficiency. The accuracy of API request generation significantly determines the capability of LLMs to accomplish tasks. Due to the inherent hallucinations within the LLM, it is difficult to efficiently and accurately generate the correct API request. Current research uses prompt-based feedback to facilitate the LLM-based API request generation. However, existing methods lack factual information and are insufficiently detailed. To address these issues, we propose AutoFeedback, an LLM-based framework for efficient and accurate API request generation, with a Static Scanning Component (SSC) and a Dynamic Analysis Component (DAC). SSC incorporates errors detected in the API requests as pseudo-facts into the feedback, enriching the factual information. DAC retrieves information from API documentation, enhancing the level of detail in the feedback. Based on these two components, AutoFeedback implements two feedback loops during the LLM's API request generation process. Extensive experiments demonstrate that it significantly improves the accuracy of API request generation and reduces the interaction cost. AutoFeedback achieves an accuracy of 100.00\% on a real-world API dataset and reduces the cost of interaction with GPT-3.5 Turbo by 23.44\% and with GPT-4 Turbo by 11.85\%.<|reference_end|>
arxiv
@article{liu2024autofeedback:, title={AutoFeedback: An LLM-based Framework for Efficient and Accurate API Request Generation}, author={Huanxi Liu, Jiaqi Liao, Dawei Feng, Kele Xu, Huaimin Wang}, journal={arXiv preprint arXiv:2410.06943}, year={2024}, archivePrefix={arXiv}, eprint={2410.06943}, primaryClass={cs.SE cs.AI} }
liu2024autofeedback:
arxiv-667558
2410.06944
CSSL: Contrastive Self-Supervised Learning for Dependency Parsing on Relatively Free Word Ordered and Morphologically Rich Low Resource Languages
<|reference_start|>CSSL: Contrastive Self-Supervised Learning for Dependency Parsing on Relatively Free Word Ordered and Morphologically Rich Low Resource Languages: Neural dependency parsing has achieved remarkable performance for low resource morphologically rich languages. It has also been well-studied that morphologically rich languages exhibit relatively free word order. This prompts a fundamental investigation: Is there a way to enhance dependency parsing performance, making the model robust to word order variations utilizing the relatively free word order nature of morphologically rich languages? In this work, we examine the robustness of graph-based parsing architectures on 7 relatively free word order languages. We focus on scrutinizing essential modifications such as data augmentation and the removal of position encoding required to adapt these architectures accordingly. To this end, we propose a contrastive self-supervised learning method to make the model robust to word order variations. Furthermore, our proposed modification demonstrates a substantial average gain of 3.03/2.95 points in 7 relatively free word order languages, as measured by the UAS/LAS Score metric when compared to the best performing baseline.<|reference_end|>
arxiv
@article{ray2024cssl:, title={CSSL: Contrastive Self-Supervised Learning for Dependency Parsing on Relatively Free Word Ordered and Morphologically Rich Low Resource Languages}, author={Pretam Ray, Jivnesh Sandhan, Amrith Krishna, Pawan Goyal}, journal={arXiv preprint arXiv:2410.06944}, year={2024}, archivePrefix={arXiv}, eprint={2410.06944}, primaryClass={cs.CL} }
ray2024cssl:
arxiv-667559
2410.06946
A Trilogy of AI Safety Frameworks: Paths from Facts and Knowledge Gaps to Reliable Predictions and New Knowledge
<|reference_start|>A Trilogy of AI Safety Frameworks: Paths from Facts and Knowledge Gaps to Reliable Predictions and New Knowledge: AI Safety has become a vital front-line concern of many scientists within and outside the AI community. There are many immediate and long term anticipated risks that range from existential risk to human existence to deep fakes and bias in machine learning systems [1-5]. In this paper, we reduce the full scope and immense complexity of AI safety concerns to a trilogy of three important but tractable opportunities for advances that have the short-term potential to improve AI safety and reliability without reducing AI innovation in critical domains. In this perspective, we discuss this vision based on several case studies that already produced proofs of concept in critical ML applications in biomedical science.<|reference_end|>
arxiv
@article{kasif2024a, title={A Trilogy of AI Safety Frameworks: Paths from Facts and Knowledge Gaps to Reliable Predictions and New Knowledge}, author={Simon Kasif}, journal={arXiv preprint arXiv:2410.06946}, year={2024}, archivePrefix={arXiv}, eprint={2410.06946}, primaryClass={cs.AI} }
kasif2024a
arxiv-667560
2410.06948
An Overview of zbMATH Open Digital Library
<|reference_start|>An Overview of zbMATH Open Digital Library: Mathematical research thrives on the effective dissemination and discovery of knowledge. zbMATH Open has emerged as a pivotal platform in this landscape, offering a comprehensive repository of mathematical literature. Beyond indexing and abstracting, it serves as a unified quality-assured infrastructure for finding, evaluating, and connecting mathematical information that advances mathematical research as well as interdisciplinary exploration. zbMATH Open enables scientific quality control by post-publication reviews and promotes connections between researchers, institutions, and research outputs. This paper presents the functionalities of the most significant features of this open-access service, highlighting its role in shaping the future of mathematical information retrieval.<|reference_end|>
arxiv
@article{deb2024an, title={An Overview of zbMATH Open Digital Library}, author={Madhurima Deb, Isabel Beckenbach, Matteo Petrera, Dariush Ehsani, Marcel Fuhrmann, Yun Hao, Olaf Teschke, Moritz Schubotz}, journal={arXiv preprint arXiv:2410.06948}, year={2024}, doi={10.1145/3677389.3702597}, archivePrefix={arXiv}, eprint={2410.06948}, primaryClass={cs.DL cs.IR} }
deb2024an
arxiv-667561
2410.06949
Seeker: Enhancing Exception Handling in Code with LLM-based Multi-Agent Approach
<|reference_start|>Seeker: Enhancing Exception Handling in Code with LLM-based Multi-Agent Approach: In real world software development, improper or missing exception handling can severely impact the robustness and reliability of code. Exception handling mechanisms require developers to detect, capture, and manage exceptions according to high standards, but many developers struggle with these tasks, leading to fragile code. This problem is particularly evident in open source projects and impacts the overall quality of the software ecosystem. To address this challenge, we explore the use of large language models (LLMs) to improve exception handling in code. Through extensive analysis, we identify three key issues: Insensitive Detection of Fragile Code, Inaccurate Capture of Exception Types, and Distorted Handling Solutions. These problems are widespread across real world repositories, suggesting that robust exception handling practices are often overlooked or mishandled. In response, we propose Seeker, a multi agent framework inspired by expert developer strategies for exception handling. Seeker uses agents: Scanner, Detector, Predator, Ranker, and Handler to assist LLMs in detecting, capturing, and resolving exceptions more effectively. Our work is the first systematic study on leveraging LLMs to enhance exception handling practices, providing valuable insights for future improvements in code reliability.<|reference_end|>
arxiv
@article{zhang2024seeker:, title={Seeker: Enhancing Exception Handling in Code with LLM-based Multi-Agent Approach}, author={Xuanming Zhang, Yuxuan Chen, Yuan Yuan, Minlie Huang}, journal={arXiv preprint arXiv:2410.06949}, year={2024}, archivePrefix={arXiv}, eprint={2410.06949}, primaryClass={cs.SE cs.CL} }
zhang2024seeker:
arxiv-667562
2410.06950
Faithful Interpretation for Graph Neural Networks
<|reference_start|>Faithful Interpretation for Graph Neural Networks: Currently, attention mechanisms have garnered increasing attention in Graph Neural Networks (GNNs), such as Graph Attention Networks (GATs) and Graph Transformers (GTs). This is due not only to the commendable boost in performance they offer but also to their capacity to provide a more lucid rationale for model behaviors, which are often viewed as inscrutable. However, Attention-based GNNs have demonstrated instability in interpretability when subjected to various sources of perturbations during both training and testing phases, including factors like additional edges or nodes. In this paper, we propose a solution to this problem by introducing a novel notion called Faithful Graph Attention-based Interpretation (FGAI). In particular, FGAI has four crucial properties regarding stability and sensitivity to interpretation and final output distribution. Built upon this notion, we propose an efficient methodology for obtaining FGAI, which can be viewed as an ad hoc modification to the canonical Attention-based GNNs. To validate our proposed solution, we introduce two novel metrics tailored for graph interpretation assessment. Experimental results demonstrate that FGAI exhibits superior stability and preserves the interpretability of attention under various forms of perturbations and randomness, which makes FGAI a more faithful and reliable explanation tool.<|reference_end|>
arxiv
@article{hu2024faithful, title={Faithful Interpretation for Graph Neural Networks}, author={Lijie Hu, Tianhao Huang, Lu Yu, Wanyu Lin, Tianhang Zheng, Di Wang}, journal={arXiv preprint arXiv:2410.06950}, year={2024}, archivePrefix={arXiv}, eprint={2410.06950}, primaryClass={cs.LG cs.AI} }
hu2024faithful
arxiv-667563
2410.06953
Control System Design and Experiments for Autonomous Underwater Helicopter Docking Procedure Based on Acoustic-inertial-optical Guidance
<|reference_start|>Control System Design and Experiments for Autonomous Underwater Helicopter Docking Procedure Based on Acoustic-inertial-optical Guidance: A control system structure for the underwater docking procedure of an Autonomous Underwater Helicopter (AUH) is proposed in this paper, which utilizes acoustic-inertial-optical guidance. Unlike conventional Autonomous Underwater Vehicles (AUVs), the maneuverability requirements for AUHs are more stringent during the docking procedure, requiring it to remain stationary or have minimal horizontal movement while moving vertically. The docking procedure is divided into two stages: Homing and Landing, each stage utilizing different guidance methods. Additionally, a segmented aligning strategy operating at various altitudes and a linear velocity decision are both adopted in Landing stage. Due to the unique structure of the Subsea Docking System (SDS), the AUH is required to dock onto the SDS in a fixed orientation with specific attitude and altitude. Therefore, a particular criterion is proposed to determine whether the AUH has successfully docked onto the SDS. Furthermore, the effectiveness and robustness of the proposed control method in AUH's docking procedure are demonstrated through pool experiments and sea trials.<|reference_end|>
arxiv
@article{li2024control, title={Control System Design and Experiments for Autonomous Underwater Helicopter Docking Procedure Based on Acoustic-inertial-optical Guidance}, author={Haoda Li, Xinyu An, Rendong Feng, Zhenwei Rong, Zhuoyu Zhang, Zhipeng Li, Liming Zhao and Ying Chen}, journal={arXiv preprint arXiv:2410.06953}, year={2024}, archivePrefix={arXiv}, eprint={2410.06953}, primaryClass={cs.RO} }
li2024control
arxiv-667564
2410.06954
How Unique is Whose Web Browser? The role of demographics in browser fingerprinting among US users
<|reference_start|>How Unique is Whose Web Browser? The role of demographics in browser fingerprinting among US users: Browser fingerprinting can be used to identify and track users across the Web, even without cookies, by collecting attributes from users' devices to create unique "fingerprints". This technique and resulting privacy risks have been studied for over a decade. Yet further research is limited because prior studies used data not publicly available. Additionally, data in prior studies lacked user demographics. Here we provide a first-of-its-kind dataset to enable further research. It includes browser attributes with users' demographics and survey responses, collected with informed consent from 8,400 US study participants. We use this dataset to demonstrate how fingerprinting risks differ across demographic groups. For example, we find lower income users are more at risk, and find that as users' age increases, they are both more likely to be concerned about fingerprinting and at real risk of fingerprinting. Furthermore, we demonstrate an overlooked risk: user demographics, such as gender, age, income level and race, can be inferred from browser attributes commonly used for fingerprinting, and we identify which browser attributes most contribute to this risk. Our data collection process also conducted an experiment to study what impacts users' likelihood to share browser data for open research, in order to inform future data collection efforts, with responses from 12,461 total participants. Female participants were significantly less likely to share their browser data, as were participants who were shown the browser data we asked to collect. Overall, we show the important role of user demographics in the ongoing work that intends to assess fingerprinting risks and improve user privacy, with findings to inform future privacy enhancing browser developments. The dataset and data collection tool we provide can be used to further study research questions not addressed in this work.<|reference_end|>
arxiv
@article{berke2024how, title={How Unique is Whose Web Browser? The role of demographics in browser fingerprinting among US users}, author={Alex Berke, Enrico Bacis, Badih Ghazi, Pritish Kamath, Ravi Kumar, Robin Lassonde, Pasin Manurangsi, Umar Syed}, journal={arXiv preprint arXiv:2410.06954}, year={2024}, archivePrefix={arXiv}, eprint={2410.06954}, primaryClass={cs.CY} }
berke2024how
arxiv-667565
2410.06957
Support Vector Boosting Machine (SVBM): Enhancing Classification Performance with AdaBoost and Residual Connections
<|reference_start|>Support Vector Boosting Machine (SVBM): Enhancing Classification Performance with AdaBoost and Residual Connections: In traditional boosting algorithms, the focus on misclassified training samples emphasizes their importance based on difficulty during the learning process. While using a standard Support Vector Machine (SVM) as a weak learner in an AdaBoost framework can enhance model performance by concentrating on error samples, this approach introduces significant challenges. Specifically, SVMs, characterized by their stability and robustness, may require destabilization to fit the boosting paradigm, which in turn can constrain performance due to reliance on the weighted results from preceding iterations. To address these challenges, we propose the Support Vector Boosting Machine (SVBM), which integrates a novel subsampling process with SVM algorithms and residual connection techniques. This method updates sample weights by considering both the current model's predictions and the outputs from prior rounds, allowing for effective sparsity control. The SVBM framework enhances the ability to form complex decision boundaries, thereby improving classification performance. The MATLAB source code for SVBM can be accessed at https://github.com/junbolian/SVBM.<|reference_end|>
arxiv
@article{lian2024support, title={Support Vector Boosting Machine (SVBM): Enhancing Classification Performance with AdaBoost and Residual Connections}, author={Junbo Jacob Lian}, journal={arXiv preprint arXiv:2410.06957}, year={2024}, archivePrefix={arXiv}, eprint={2410.06957}, primaryClass={cs.LG cs.AI} }
lian2024support
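The SVBM abstract above builds on the familiar pattern of AdaBoost with sample-weighted SVM weak learners. The following is a minimal Python sketch of that generic pattern only, assuming scikit-learn; it does not reproduce SVBM's subsampling process or its residual-connection weight update, and the RBF kernel, C value, and round count are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.svm import SVC

def adaboost_svm(X, y, n_rounds=10, C=1.0):
    """AdaBoost with sample-weighted SVM weak learners; labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(n_rounds):
        clf = SVC(C=C, kernel="rbf")
        clf.fit(X, y, sample_weight=w)            # weighting focuses the SVM on hard samples
        pred = clf.predict(X)
        err = np.clip(np.sum(w * (pred != y)) / np.sum(w), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)     # weight of this weak learner
        w *= np.exp(-alpha * y * pred)            # up-weight misclassified samples
        w /= w.sum()
        learners.append(clf)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(learners, alphas, X):
    score = sum(a * clf.predict(X) for clf, a in zip(learners, alphas))
    return np.sign(score)
```

Per the abstract, SVBM would additionally mix the outputs of prior rounds into the weight update; the sketch keeps only the classical exponential reweighting.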
arxiv-667566
2410.06958
Constrained TLBO algorithm for lightweight cable-stiffened scissor-like deployable structures
<|reference_start|>Constrained TLBO algorithm for lightweight cable-stiffened scissor-like deployable structures: The present work discusses the efficient structural analysis and weight optimization of cable-stiffened deployable structures. The stiffening effect of cables is incorporated through a matrix analysis based iterative strategy to identify the active and passive cables. The structural form can be easily deployed to Cartesian as well as polar coordinates through the arrangement of duplet members. The large span utility of cable-stiffened bar members can pose challenges to the deployability due to increased weight. A novel teaching-learning based optimization (TLBO) algorithm is utilized to optimize the overall weight of the structure through efficient section designs with proper constraint on the yield criteria. The penalty function approach is adopted to identify the unfeasible designs. A number of example cases are analysed and a comparison is presented with the existing literature to show the suitability of the proposed approach. Finally, a new form of three-dimensional deployable structure is proposed. It is seen that such a deployable structure can be accurately analysed using the iterative matrix analysis approach and efficiently optimized using the present algorithm.<|reference_end|>
arxiv
@article{manna2024constrained, title={Constrained TLBO algorithm for lightweight cable-stiffened scissor-like deployable structures}, author={Soumyajit Manna, Arijit Sau and Devesh Punera}, journal={arXiv preprint arXiv:2410.06958}, year={2024}, archivePrefix={arXiv}, eprint={2410.06958}, primaryClass={cs.CE cs.SY eess.SY math.OC} }
manna2024constrained
arxiv-667567
2410.06961
Self-Boosting Large Language Models with Synthetic Preference Data
<|reference_start|>Self-Boosting Large Language Models with Synthetic Preference Data: Through alignment with human preferences, Large Language Models (LLMs) have advanced significantly in generating honest, harmless, and helpful responses. However, collecting high-quality preference data is a resource-intensive and creativity-demanding process, especially for the continual improvement of LLMs. We introduce SynPO, a self-boosting paradigm that leverages synthetic preference data for model alignment. SynPO employs an iterative mechanism wherein a self-prompt generator creates diverse prompts, and a response improver refines model responses progressively. This approach trains LLMs to autonomously learn the generative rewards for their own outputs and eliminates the need for large-scale annotation of prompts and human preferences. After four SynPO iterations, Llama3-8B and Mistral-7B show significant enhancements in instruction-following abilities, achieving over 22.1% win rate improvements on AlpacaEval 2.0 and ArenaHard. Simultaneously, SynPO improves the general performance of LLMs on various tasks, validated by a 3.2 to 5.0 average score increase on the well-recognized Open LLM leaderboard.<|reference_end|>
arxiv
@article{dong2024self-boosting, title={Self-Boosting Large Language Models with Synthetic Preference Data}, author={Qingxiu Dong, Li Dong, Xingxing Zhang, Zhifang Sui, Furu Wei}, journal={arXiv preprint arXiv:2410.06961}, year={2024}, archivePrefix={arXiv}, eprint={2410.06961}, primaryClass={cs.CL cs.AI} }
dong2024self-boosting
arxiv-667568
2410.06963
ELMO: Enhanced Real-time LiDAR Motion Capture through Upsampling
<|reference_start|>ELMO: Enhanced Real-time LiDAR Motion Capture through Upsampling: This paper introduces ELMO, a real-time upsampling motion capture framework designed for a single LiDAR sensor. Modeled as a conditional autoregressive transformer-based upsampling motion generator, ELMO achieves 60 fps motion capture from a 20 fps LiDAR point cloud sequence. The key feature of ELMO is the coupling of the self-attention mechanism with thoughtfully designed embedding modules for motion and point clouds, significantly elevating the motion quality. To facilitate accurate motion capture, we develop a one-time skeleton calibration model capable of predicting user skeleton offsets from a single-frame point cloud. Additionally, we introduce a novel data augmentation technique utilizing a LiDAR simulator, which enhances global root tracking to improve environmental understanding. To demonstrate the effectiveness of our method, we compare ELMO with state-of-the-art methods in both image-based and point cloud-based motion capture. We further conduct an ablation study to validate our design principles. ELMO's fast inference time makes it well-suited for real-time applications, exemplified in our demo video featuring live streaming and interactive gaming scenarios. Furthermore, we contribute a high-quality LiDAR-mocap synchronized dataset comprising 20 different subjects performing a range of motions, which can serve as a valuable resource for future research. The dataset and evaluation code are available at https://movin3d.github.io/ELMO_SIGASIA2024/<|reference_end|>
arxiv
@article{jang2024elmo:, title={ELMO: Enhanced Real-time LiDAR Motion Capture through Upsampling}, author={Deok-Kyeong Jang, Dongseok Yang, Deok-Yun Jang, Byeoli Choi, Donghoon Shin, Sung-hee Lee}, journal={arXiv preprint arXiv:2410.06963}, year={2024}, archivePrefix={arXiv}, eprint={2410.06963}, primaryClass={cs.GR cs.AI cs.CV cs.LG} }
jang2024elmo:
arxiv-667569
2410.06964
Bridge the Points: Graph-based Few-shot Segment Anything Semantically
<|reference_start|>Bridge the Points: Graph-based Few-shot Segment Anything Semantically: The recent advancements in large-scale pre-training techniques have significantly enhanced the capabilities of vision foundation models, notably the Segment Anything Model (SAM), which can generate precise masks based on point and box prompts. Recent studies extend SAM to Few-shot Semantic Segmentation (FSS), focusing on prompt generation for SAM-based automatic semantic segmentation. However, these methods struggle with selecting suitable prompts, require specific hyperparameter settings for different scenarios, and experience prolonged one-shot inference times due to the overuse of SAM, resulting in low efficiency and limited automation ability. To address these issues, we propose a simple yet effective approach based on graph analysis. In particular, a Positive-Negative Alignment module dynamically selects the point prompts for generating masks, especially uncovering the potential of the background context as the negative reference. Another subsequent Point-Mask Clustering module aligns the granularity of masks and selected points as a directed graph, based on mask coverage over points. These points are then aggregated by decomposing the weakly connected components of the directed graph in an efficient manner, constructing distinct natural clusters. Finally, the positive and overshooting gating, benefiting from graph-based granularity alignment, aggregate high-confident masks and filter out the false-positive masks for final prediction, reducing the usage of additional hyperparameters and redundant mask generation. Extensive experimental analysis across standard FSS, One-shot Part Segmentation, and Cross Domain FSS datasets validate the effectiveness and efficiency of the proposed approach, surpassing state-of-the-art generalist models with a mIoU of 58.7% on COCO-20i and 35.2% on LVIS-92i. The code is available in https://andyzaq.github.io/GF-SAM/.<|reference_end|>
arxiv
@article{zhang2024bridge, title={Bridge the Points: Graph-based Few-shot Segment Anything Semantically}, author={Anqi Zhang, Guangyu Gao, Jianbo Jiao, Chi Harold Liu, and Yunchao Wei}, journal={arXiv preprint arXiv:2410.06964}, year={2024}, archivePrefix={arXiv}, eprint={2410.06964}, primaryClass={cs.CV} }
zhang2024bridge
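The Point-Mask Clustering step described in the abstract above decomposes the weakly connected components of a directed point-mask coverage graph. Below is a small illustrative Python sketch of that decomposition, assuming SciPy; the coverage test, node ordering, and data layout are assumptions, and the Positive-Negative Alignment and gating stages are not shown.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def cluster_points_and_masks(masks, points):
    """Group candidate masks and point prompts by the weakly connected components
    of a directed coverage graph (edge: mask -> point it covers)."""
    M, P = len(masks), len(points)
    rows, cols = [], []
    for mi, mask in enumerate(masks):
        for pi, (r, c) in enumerate(points):
            if mask[r, c]:
                rows.append(mi)          # mask node
                cols.append(M + pi)      # point node, offset by the number of masks
    adj = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(M + P, M + P))
    n_clusters, labels = connected_components(adj, directed=True, connection="weak")
    return n_clusters, labels[:M], labels[M:]

# Toy usage: two 4x4 masks and three point prompts; the masks fall into separate clusters.
masks = [np.zeros((4, 4), bool), np.zeros((4, 4), bool)]
masks[0][0:2, 0:2] = True
masks[1][2:4, 2:4] = True
points = [(0, 0), (1, 1), (3, 3)]
print(cluster_points_and_masks(masks, points))
```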
arxiv-667570
2410.06965
Uncovering Factor Level Preferences to Improve Human-Model Alignment
<|reference_start|>Uncovering Factor Level Preferences to Improve Human-Model Alignment: Despite advancements in Large Language Model (LLM) alignment, understanding the reasons behind LLM preferences remains crucial for bridging the gap between desired and actual behavior. LLMs often exhibit biases or tendencies that diverge from human preferences, such as favoring certain writing styles or producing overly verbose outputs. However, current methods for evaluating preference alignment often lack explainability, relying on coarse-grained comparisons. To address this, we introduce PROFILE (PRObing Factors of InfLuence for Explainability), a novel framework that uncovers and quantifies the influence of specific factors driving preferences. PROFILE's factor level analysis explains the 'why' behind human-model alignment and misalignment, offering insights into the direction of model improvement. We apply PROFILE to analyze human and LLM preferences across three tasks: summarization, helpful response generation, and document-based question-answering. Our factor level analysis reveals a substantial discrepancy between human and LLM preferences in generation tasks, whereas LLMs show strong alignment with human preferences in evaluation tasks. We demonstrate how leveraging factor level insights, including addressing misaligned factors or exploiting the generation-evaluation gap, can improve alignment with human preferences. This work underscores the importance of explainable preference analysis and highlights PROFILE's potential to provide valuable training signals, driving further improvements in human-model alignment.<|reference_end|>
arxiv
@article{oh2024uncovering, title={Uncovering Factor Level Preferences to Improve Human-Model Alignment}, author={Juhyun Oh, Eunsu Kim, Jiseon Kim, Wenda Xu, Inha Cha, William Yang Wang, Alice Oh}, journal={arXiv preprint arXiv:2410.06965}, year={2024}, archivePrefix={arXiv}, eprint={2410.06965}, primaryClass={cs.CL cs.AI} }
oh2024uncovering
arxiv-667571
2410.06967
$\texttt{ModSCAN}$: Measuring Stereotypical Bias in Large Vision-Language Models from Vision and Language Modalities
<|reference_start|>$\texttt{ModSCAN}$: Measuring Stereotypical Bias in Large Vision-Language Models from Vision and Language Modalities: Large vision-language models (LVLMs) have been rapidly developed and widely used in various fields, but the (potential) stereotypical bias in the model is largely unexplored. In this study, we present a pioneering measurement framework, $\texttt{ModSCAN}$, to $\underline{SCAN}$ the stereotypical bias within LVLMs from both vision and language $\underline{Mod}$alities. $\texttt{ModSCAN}$ examines stereotypical biases with respect to two typical stereotypical attributes (gender and race) across three kinds of scenarios: occupations, descriptors, and persona traits. Our findings suggest that 1) the currently popular LVLMs show significant stereotype biases, with CogVLM emerging as the most biased model; 2) these stereotypical biases may stem from the inherent biases in the training dataset and pre-trained models; 3) the utilization of specific prompt prefixes (from both vision and language modalities) performs well in reducing stereotypical biases. We believe our work can serve as the foundation for understanding and addressing stereotypical bias in LVLMs.<|reference_end|>
arxiv
@article{jiang2024$\texttt{modscan}$:, title={$\texttt{ModSCAN}$: Measuring Stereotypical Bias in Large Vision-Language Models from Vision and Language Modalities}, author={Yukun Jiang, Zheng Li, Xinyue Shen, Yugeng Liu, Michael Backes, Yang Zhang}, journal={arXiv preprint arXiv:2410.06967}, year={2024}, archivePrefix={arXiv}, eprint={2410.06967}, primaryClass={cs.CR cs.CY} }
jiang2024$\texttt{modscan}$:
arxiv-667572
2410.06968
RM4D: A Combined Reachability and Inverse Reachability Map for Common 6-/7-axis Robot Arms by Dimensionality Reduction to 4D
<|reference_start|>RM4D: A Combined Reachability and Inverse Reachability Map for Common 6-/7-axis Robot Arms by Dimensionality Reduction to 4D: Knowledge of a manipulator's workspace is fundamental for a variety of tasks including robot design, grasp planning and robot base placement. Consequently, workspace representations are well studied in robotics. Two important representations are reachability maps and inverse reachability maps. The former predicts whether a given end-effector pose is reachable from where the robot currently is, and the latter suggests suitable base positions for a desired end-effector pose. Typically, the reachability map is built by discretizing the 6D space containing the robot's workspace and determining, for each cell, whether it is reachable or not. The reachability map is subsequently inverted to build the inverse map. This is a cumbersome process which restricts the applications of such maps. In this work, we exploit commonalities of existing six and seven axis robot arms to reduce the dimension of the discretization from 6D to 4D. We propose Reachability Map 4D (RM4D), a map that only requires a single 4D data structure for both forward and inverse queries. This gives a much more compact map that can be constructed by an order of magnitude faster than existing maps, with no inversion overheads and no loss in accuracy. Our experiments showcase the usefulness of RM4D for grasp planning with a mobile manipulator.<|reference_end|>
arxiv
@article{rudorfer2024rm4d:, title={RM4D: A Combined Reachability and Inverse Reachability Map for Common 6-/7-axis Robot Arms by Dimensionality Reduction to 4D}, author={Martin Rudorfer}, journal={arXiv preprint arXiv:2410.06968}, year={2024}, archivePrefix={arXiv}, eprint={2410.06968}, primaryClass={cs.RO} }
rudorfer2024rm4d:
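RM4D's starting point, per the abstract above, is the classical sampling-based reachability map: discretize the workspace and mark the cells reached by sampled configurations. The Python sketch below illustrates only that classical construction for a toy 2D planar arm; the link lengths, cell size, and sample count are arbitrary assumptions, and RM4D's 4D parameterization for 6-/7-axis arms and its inverse queries are not reproduced.

```python
import numpy as np

def fk_planar(q, link_lengths):
    """Forward kinematics of a planar serial arm: joint angles -> end-effector (x, y)."""
    angles = np.cumsum(q)
    return np.sum(link_lengths * np.cos(angles)), np.sum(link_lengths * np.sin(angles))

def build_reachability_map(link_lengths, n_samples=200_000, cell=0.05):
    """Monte Carlo reachability map: mark every workspace cell hit by a sampled configuration."""
    link_lengths = np.asarray(link_lengths, dtype=float)
    reach = np.sum(link_lengths)
    n_cells = int(np.ceil(2 * reach / cell))
    grid = np.zeros((n_cells, n_cells), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(n_samples):
        q = rng.uniform(-np.pi, np.pi, size=len(link_lengths))
        x, y = fk_planar(q, link_lengths)
        i, j = int((x + reach) / cell), int((y + reach) / cell)
        if 0 <= i < n_cells and 0 <= j < n_cells:
            grid[i, j] = True
    return grid

# Example: 3-link planar arm; grid.mean() approximates the reachable workspace fraction.
grid = build_reachability_map([0.4, 0.3, 0.2])
```

A map built this way would still need a separate inversion step for base placement; the abstract's point is that RM4D answers both forward and inverse queries from one shared structure.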
arxiv-667573
2410.06969
DLGNet: Hyperedge Classification through Directed Line Graphs for Chemical Reactions
<|reference_start|>DLGNet: Hyperedge Classification through Directed Line Graphs for Chemical Reactions: Graphs and hypergraphs provide powerful abstractions for modeling interactions among a set of entities of interest and have been attracting a growing interest in the literature thanks to many successful applications in several fields. In particular, they are rapidly expanding in domains such as chemistry and biology, especially in the areas of drug discovery and molecule generation. One of the areas witnessing the fastest growth is the chemical reactions field, where chemical reactions can be naturally encoded as directed hyperedges of a hypergraph. In this paper, we address the chemical reaction classification problem by introducing the notion of a Directed Line Graph (DLG) associated with a given directed hypergraph. On top of it, we build the Directed Line Graph Network (DLGNet), the first spectral-based Graph Neural Network (GNN) expressly designed to operate on a hypergraph via its DLG transformation. The foundation of DLGNet is a novel Hermitian matrix, the Directed Line Graph Laplacian, which compactly encodes the directionality of the interactions taking place within the directed hyperedges of the hypergraph thanks to the DLG representation. The Directed Line Graph Laplacian enjoys many desirable properties, including admitting an eigenvalue decomposition and being positive semidefinite, which make it well-suited for its adoption within a spectral-based GNN. Through extensive experiments on chemical reaction datasets, we show that DLGNet significantly outperforms the existing approaches, achieving on a collection of real-world datasets an average relative-percentage-difference improvement of 33.01%, with a maximum improvement of 37.71%.<|reference_end|>
arxiv
@article{fiorini2024dlgnet:, title={DLGNet: Hyperedge Classification through Directed Line Graphs for Chemical Reactions}, author={Stefano Fiorini, Giulia M. Bovolenta, Stefano Coniglio, Michele Ciavotta, Pietro Morerio, Michele Parrinello, Alessio Del Bue}, journal={arXiv preprint arXiv:2410.06969}, year={2024}, archivePrefix={arXiv}, eprint={2410.06969}, primaryClass={cs.LG cs.AI} }
fiorini2024dlgnet:
arxiv-667574
2410.06972
Diamond of Thought: A Design Thinking-Based Framework for LLMs in Wearable Design
<|reference_start|>Diamond of Thought: A Design Thinking-Based Framework for LLMs in Wearable Design: Wearable design is an interdisciplinary field that balances technological innovation, human factors, and human-computer interactions. Despite contributions from various disciplines, many projects lack stable interdisciplinary teams, which often leads to design failures. Large language models (LLMs) integrate diverse information and generate innovative solutions, making them a valuable tool for enhancing design processes. Thus, we have explored the use of LLMs in wearable design by combining design-thinking principles with LLM capabilities. We have developed the "Diamond of Thought" framework and analysed 1,603 prototypes and 1,129 products from a body-centric perspective to create a comprehensive database. We employed retrieval-augmented generation to input database details into the LLMs, ensuring applicability to wearable design challenges and integration of embodied cognition into the process. Our LLM-based methodology for wearables has been experimentally validated, demonstrating the potential of LLMs for the advancement of design practices. This study offers new tools and methods for future wearable designs.<|reference_end|>
arxiv
@article{miao2024diamond, title={Diamond of Thought: A Design Thinking-Based Framework for LLMs in Wearable Design}, author={Qiyang Miao and Jiang Xu and Zhihao Song and Chengrui Wang and Yu Cui}, journal={arXiv preprint arXiv:2410.06972}, year={2024}, archivePrefix={arXiv}, eprint={2410.06972}, primaryClass={cs.HC} }
miao2024diamond
arxiv-667575
2410.06973
Personal Intelligence System UniLM: Hybrid On-Device Small Language Model and Server-Based Large Language Model for Malay Nusantara
<|reference_start|>Personal Intelligence System UniLM: Hybrid On-Device Small Language Model and Server-Based Large Language Model for Malay Nusantara: In contexts with limited computational and data resources, high-resource language models often prove inadequate, particularly when addressing the specific needs of Malay languages. This paper introduces a Personal Intelligence System designed to efficiently integrate both on-device and server-based models. The system incorporates SLiM-34M for on-device processing, optimized for low memory and power usage, and MANYAK-1.3B for server-based tasks, allowing for scalable, high-performance language processing. The models achieve significant results across various tasks, such as machine translation, question-answering, and the translated IndoMMLU benchmark. Particularly noteworthy is SLiM-34M's ability to achieve a high improvement in accuracy compared to other LLMs while using 2 times fewer pre-training tokens. This work challenges the prevailing assumption that large-scale computational resources are necessary to build effective language models, contributing to the development of resource-efficient models for the Malay language with the unique orchestration between SLiM-34M and MANYAK-1.3B.<|reference_end|>
arxiv
@article{nazri2024personal, title={Personal Intelligence System UniLM: Hybrid On-Device Small Language Model and Server-Based Large Language Model for Malay Nusantara}, author={Azree Nazri, Olalekan Agbolade, Faisal Aziz}, journal={arXiv preprint arXiv:2410.06973}, year={2024}, archivePrefix={arXiv}, eprint={2410.06973}, primaryClass={cs.CL cs.AI} }
nazri2024personal
arxiv-667576
2410.06974
Diagnosis of Malignant Lymphoma Cancer Using Hybrid Optimized Techniques Based on Dense Neural Networks
<|reference_start|>Diagnosis of Malignant Lymphoma Cancer Using Hybrid Optimized Techniques Based on Dense Neural Networks: Lymphoma diagnosis, particularly distinguishing between subtypes, is critical for effective treatment but remains challenging due to the subtle morphological differences in histopathological images. This study presents a novel hybrid deep learning framework that combines DenseNet201 for feature extraction with a Dense Neural Network (DNN) for classification, optimized using the Harris Hawks Optimization (HHO) algorithm. The model was trained on a dataset of 15,000 biopsy images, spanning three lymphoma subtypes: Chronic Lymphocytic Leukemia (CLL), Follicular Lymphoma (FL), and Mantle Cell Lymphoma (MCL). Our approach achieved a testing accuracy of 99.33%, demonstrating significant improvements in both accuracy and model interpretability. Comprehensive evaluation using precision, recall, F1-score, and ROC-AUC underscores the model's robustness and potential for clinical adoption. This framework offers a scalable solution for improving diagnostic accuracy and efficiency in oncology.<|reference_end|>
arxiv
@article{aly2024diagnosis, title={Diagnosis of Malignant Lymphoma Cancer Using Hybrid Optimized Techniques Based on Dense Neural Networks}, author={Salah A. Aly, Ali Bakhiet, Mazen Balat}, journal={arXiv preprint arXiv:2410.06974}, year={2024}, archivePrefix={arXiv}, eprint={2410.06974}, primaryClass={eess.IV cs.CV cs.LG} }
aly2024diagnosis
arxiv-667577
2410.06975
Neural network solvers for parametrized elasticity problems that conserve linear and angular momentum
<|reference_start|>Neural network solvers for parametrized elasticity problems that conserve linear and angular momentum: We consider a mixed formulation of parametrized elasticity problems in terms of stress, displacement, and rotation. The latter two variables act as Lagrange multipliers to enforce conservation of linear and angular momentum. Due to the saddle-point structure, the resulting system is computationally demanding to solve directly, and we therefore propose an efficient solution strategy based on a decomposition of the stress variable. First, a triangular system is solved to obtain a stress field that balances the body and boundary forces. Second, a trained neural network is employed to provide a correction without affecting the conservation equations. The displacement and rotation can be obtained by post-processing, if necessary. The potential of the approach is highlighted by three numerical test cases, including a non-linear model.<|reference_end|>
arxiv
@article{boon2024neural, title={Neural network solvers for parametrized elasticity problems that conserve linear and angular momentum}, author={Wietse M. Boon, Nicola R. Franco, Alessio Fumagalli}, journal={arXiv preprint arXiv:2410.06975}, year={2024}, archivePrefix={arXiv}, eprint={2410.06975}, primaryClass={math.NA cs.NA} }
boon2024neural
arxiv-667578
2410.06976
AdaRC: Mitigating Graph Structure Shifts during Test-Time
<|reference_start|>AdaRC: Mitigating Graph Structure Shifts during Test-Time: Powerful as they are, graph neural networks (GNNs) are known to be vulnerable to distribution shifts. Recently, test-time adaptation (TTA) has attracted attention due to its ability to adapt a pre-trained model to a target domain without re-accessing the source domain. However, existing TTA algorithms are primarily designed for attribute shifts in vision tasks, where samples are independent. These methods perform poorly on graph data that experience structure shifts, where node connectivity differs between source and target graphs. We attribute this performance gap to the distinct impact of node attribute shifts versus graph structure shifts: the latter significantly degrades the quality of node representations and blurs the boundaries between different node categories. To address structure shifts in graphs, we propose AdaRC, an innovative framework designed for effective and efficient adaptation to structure shifts by adjusting the hop-aggregation parameters in GNNs. To enhance the representation quality, we design a prediction-informed clustering loss to encourage the formation of distinct clusters for different node categories. Additionally, AdaRC seamlessly integrates with existing TTA algorithms, allowing it to handle attribute shifts effectively while improving overall performance under combined structure and attribute shifts. We validate the effectiveness of AdaRC on both synthetic and real-world datasets, demonstrating its robustness across various combinations of structure and attribute shifts.<|reference_end|>
arxiv
@article{bao2024adarc:, title={AdaRC: Mitigating Graph Structure Shifts during Test-Time}, author={Wenxuan Bao, Zhichen Zeng, Zhining Liu, Hanghang Tong, Jingrui He}, journal={arXiv preprint arXiv:2410.06976}, year={2024}, archivePrefix={arXiv}, eprint={2410.06976}, primaryClass={cs.LG} }
bao2024adarc:
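AdaRC, per the abstract above, adapts the hop-aggregation parameters of a GNN at test time. As context, a minimal PyTorch sketch of a decoupled GNN with explicit, learnable per-hop aggregation weights is given below; it is not AdaRC itself (the prediction-informed clustering loss and the test-time adaptation loop are omitted), and the layer sizes and hop count are illustrative assumptions.

```python
import torch
import torch.nn as nn

def normalize_adj(adj):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2} (dense adjacency)."""
    a = adj + torch.eye(adj.shape[0])
    d_inv_sqrt = a.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

class HopAggregationGNN(nn.Module):
    """Decoupled GNN whose per-hop aggregation weights could be adjusted at test time."""
    def __init__(self, in_dim, n_classes, n_hops=4):
        super().__init__()
        self.lin = nn.Linear(in_dim, n_classes)
        self.hop_weights = nn.Parameter(torch.ones(n_hops + 1) / (n_hops + 1))

    def forward(self, x, adj_norm):
        h = self.lin(x)                      # node-wise prediction before propagation
        out = self.hop_weights[0] * h
        for k in range(1, len(self.hop_weights)):
            h = adj_norm @ h                 # one more hop of propagation
            out = out + self.hop_weights[k] * h
        return out
```

At test time one could freeze `lin` and optimize only `hop_weights` against an unsupervised objective, which is the kind of lightweight structure-shift adaptation the abstract describes.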
arxiv-667579
2410.06977
Adaptive High-Frequency Transformer for Diverse Wildlife Re-Identification
<|reference_start|>Adaptive High-Frequency Transformer for Diverse Wildlife Re-Identification: Wildlife ReID involves utilizing visual technology to identify specific individuals of wild animals in different scenarios, holding significant importance for wildlife conservation, ecological research, and environmental monitoring. Existing wildlife ReID methods are predominantly tailored to specific species, exhibiting limited applicability. Although some approaches leverage extensively studied person ReID techniques, they struggle to address the unique challenges posed by wildlife. Therefore, in this paper, we present a unified, multi-species general framework for wildlife ReID. Given that high-frequency information is a consistent representation of unique features in various species, significantly aiding in identifying contours and details such as fur textures, we propose the Adaptive High-Frequency Transformer model with the goal of enhancing high-frequency information learning. To mitigate the inevitable high-frequency interference in the wilderness environment, we introduce an object-aware high-frequency selection strategy to adaptively capture more valuable high-frequency components. Notably, we unify the experimental settings of multiple wildlife datasets for ReID, achieving superior performance over state-of-the-art ReID methods. In domain generalization scenarios, our approach demonstrates robust generalization to unknown species.<|reference_end|>
arxiv
@article{li2024adaptive, title={Adaptive High-Frequency Transformer for Diverse Wildlife Re-Identification}, author={Chenyue Li, Shuoyi Chen, Mang Ye}, journal={arXiv preprint arXiv:2410.06977}, year={2024}, archivePrefix={arXiv}, eprint={2410.06977}, primaryClass={cs.CV cs.AI} }
li2024adaptive
arxiv-667580
2410.06981
Sparse Autoencoders Reveal Universal Feature Spaces Across Large Language Models
<|reference_start|>Sparse Autoencoders Reveal Universal Feature Spaces Across Large Language Models: We investigate feature universality in large language models (LLMs), a research field that aims to understand how different models similarly represent concepts in the latent spaces of their intermediate layers. Demonstrating feature universality allows discoveries about latent representations to generalize across several models. However, comparing features across LLMs is challenging due to polysemanticity, in which individual neurons often correspond to multiple features rather than distinct ones. This makes it difficult to disentangle and match features across different models. To address this issue, we employ a method known as dictionary learning by using sparse autoencoders (SAEs) to transform LLM activations into more interpretable spaces spanned by neurons corresponding to individual features. After matching feature neurons across models via activation correlation, we apply representational space similarity metrics like Singular Value Canonical Correlation Analysis to analyze these SAE features across different LLMs. Our experiments reveal significant similarities in SAE feature spaces across various LLMs, providing new evidence for feature universality.<|reference_end|>
arxiv
@article{lan2024sparse, title={Sparse Autoencoders Reveal Universal Feature Spaces Across Large Language Models}, author={Michael Lan, Philip Torr, Austin Meek, Ashkan Khakzar, David Krueger, Fazl Barez}, journal={arXiv preprint arXiv:2410.06981}, year={2024}, archivePrefix={arXiv}, eprint={2410.06981}, primaryClass={cs.LG cs.AI cs.CL} }
lan2024sparse
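The workflow in the abstract above trains sparse autoencoders on LLM activations and then matches feature neurons across models by activation correlation before applying representational similarity metrics such as SVCCA. A minimal PyTorch sketch of those first two steps follows; the dictionary size, L1 coefficient, and training schedule are illustrative assumptions, and the SVCCA comparison itself is not implemented.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder with an L1 sparsity penalty on the hidden code."""
    def __init__(self, d_model, d_dict):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, acts):
        feats = torch.relu(self.encoder(acts))
        return self.decoder(feats), feats

def train_sae(acts, d_dict=4096, l1_coef=1e-3, epochs=20, lr=1e-3, batch=1024):
    """acts: (N, d_model) tensor of residual-stream activations."""
    sae = SparseAutoencoder(acts.shape[1], d_dict)
    opt = torch.optim.Adam(sae.parameters(), lr=lr)
    for _ in range(epochs):
        perm = torch.randperm(len(acts))
        for i in range(0, len(acts), batch):
            x = acts[perm[i:i + batch]]
            recon, feats = sae(x)
            loss = ((recon - x) ** 2).mean() + l1_coef * feats.abs().mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return sae

def match_features(feats_a, feats_b):
    """Pair feature neurons of two SAEs by maximal activation correlation over shared inputs."""
    a = (feats_a - feats_a.mean(0)) / (feats_a.std(0) + 1e-8)
    b = (feats_b - feats_b.mean(0)) / (feats_b.std(0) + 1e-8)
    corr = a.T @ b / len(a)                    # (d_dict_a, d_dict_b) correlation matrix
    return corr.argmax(dim=1), corr.max(dim=1).values
```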
arxiv-667581
2410.06982
Structure-Centric Robust Monocular Depth Estimation via Knowledge Distillation
<|reference_start|>Structure-Centric Robust Monocular Depth Estimation via Knowledge Distillation: Monocular depth estimation, enabled by self-supervised learning, is a key technique for 3D perception in computer vision. However, it faces significant challenges in real-world scenarios, which encompass adverse weather variations, motion blur, as well as scenes with poor lighting conditions at night. Our research reveals that we can divide monocular depth estimation into three sub-problems: depth structure consistency, local texture disambiguation, and semantic-structural correlation. Our approach tackles the non-robustness of existing self-supervised monocular depth estimation models to interference textures by adopting a structure-centered perspective and utilizing the scene structure characteristics demonstrated by semantics and illumination. We devise a novel approach to reduce over-reliance on local textures, enhancing robustness against missing or interfering patterns. Additionally, we incorporate a semantic expert model as the teacher and construct inter-model feature dependencies via learnable isomorphic graphs to enable aggregation of semantic structural knowledge. Our approach achieves state-of-the-art out-of-distribution monocular depth estimation performance across a range of public adverse scenario datasets. It demonstrates notable scalability and compatibility, without necessitating extensive model engineering. This showcases the potential for customizing models for diverse industrial applications.<|reference_end|>
arxiv
@article{chen2024structure-centric, title={Structure-Centric Robust Monocular Depth Estimation via Knowledge Distillation}, author={Runze Chen and Haiyong Luo and Fang Zhao and Jingze Yu and Yupeng Jia and Juan Wang and Xuepeng Ma}, journal={arXiv preprint arXiv:2410.06982}, year={2024}, archivePrefix={arXiv}, eprint={2410.06982}, primaryClass={cs.CV} }
chen2024structure-centric
arxiv-667582
2410.06984
Observability rank conditions for analysing practical identifiability a priori
<|reference_start|>Observability rank conditions for analysing practical identifiability a priori: The concept of identifiability describes the possibility of inferring the parameters of a dynamic model by observing its output. It is common and useful to distinguish between structural and practical identifiability. The former property is fully determined by the model equations, while the latter is also influenced by the characteristics of the available experimental data. Structural identifiability can be determined by means of symbolic computations, which may be performed before collecting experimental data, and are hence sometimes called a priori analyses. Practical identifiability is typically assessed numerically, with methods that require simulations - and often also optimization - and are applied a posteriori. An approach to study structural local identifiability is to consider it as a particular case of observability, which is the possibility of inferring the internal state of a system from its output. Thus, both properties can be analysed jointly, by building a generalized observability matrix and computing its rank. The aim of this paper is to investigate to which extent such observability-based methods can also inform about practical identifiability. To this end, we explore a number of possible extensions of the rank tests, and discuss the purposes for which they can be informative as well as others for which they cannot.<|reference_end|>
arxiv
@article{villaverde2024observability, title={Observability rank conditions for analysing practical identifiability a priori}, author={Alejandro F. Villaverde}, journal={arXiv preprint arXiv:2410.06984}, year={2024}, archivePrefix={arXiv}, eprint={2410.06984}, primaryClass={q-bio.QM cs.SY eess.SY} }
villaverde2024observability
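The rank test referenced in the abstract above stacks the gradients of successive Lie derivatives of the outputs into a (generalized) observability matrix and checks its rank. A small SymPy sketch of that construction is shown below, with unknown parameters appended as constant states; the toy two-compartment model at the end is an invented example, not one from the paper.

```python
import sympy as sp

def observability_matrix(f, h, x, order=None):
    """Nonlinear observability matrix from Lie derivatives of the outputs h along dynamics f."""
    n = len(x)
    order = n - 1 if order is None else order
    rows = []
    Lh = list(h)  # 0th-order Lie derivatives are the outputs themselves
    for _ in range(order + 1):
        rows += [[sp.diff(hi, xj) for xj in x] for hi in Lh]
        # next-order Lie derivative: L_f h = grad(h) . f
        Lh = [sum(sp.diff(hi, xj) * fj for xj, fj in zip(x, f)) for hi in Lh]
    return sp.Matrix(rows)

# Toy example: x1' = -p*x1, x2' = p*x1, y = x2, with unknown parameter p as an extra state (p' = 0).
x1, x2, p = sp.symbols('x1 x2 p')
states = [x1, x2, p]
dynamics = [-p * x1, p * x1, 0]
outputs = [x2]
O = observability_matrix(dynamics, outputs, states)
print(O.rank())  # generic rank 3 => locally observable / structurally locally identifiable
```

Rank deficiency of `O` would flag a structurally non-identifiable parameter; the paper's question is how far such symbolic rank tests can also say something about practical identifiability.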
arxiv-667583
2410.06985
Jointly Generating Multi-view Consistent PBR Textures using Collaborative Control
<|reference_start|>Jointly Generating Multi-view Consistent PBR Textures using Collaborative Control: Multi-view consistency remains a challenge for image diffusion models. Even within the Text-to-Texture problem, where perfect geometric correspondences are known a priori, many methods fail to yield aligned predictions across views, necessitating non-trivial fusion methods to incorporate the results onto the original mesh. We explore this issue for a Collaborative Control workflow specifically in PBR Text-to-Texture. Collaborative Control directly models PBR image probability distributions, including normal bump maps; to our knowledge, it is the only diffusion model to directly output full PBR stacks. We discuss the design decisions involved in making this model multi-view consistent, and demonstrate the effectiveness of our approach in ablation studies, as well as practical applications.<|reference_end|>
arxiv
@article{vainer2024jointly, title={Jointly Generating Multi-view Consistent PBR Textures using Collaborative Control}, author={Shimon Vainer, Konstantin Kutsy, Dante De Nigris, Ciara Rowles, Slava Elizarov, Simon Donn\'e}, journal={arXiv preprint arXiv:2410.06985}, year={2024}, archivePrefix={arXiv}, eprint={2410.06985}, primaryClass={cs.CV cs.GR} }
vainer2024jointly
arxiv-667584
2410.06986
Diffusion Density Estimators
<|reference_start|>Diffusion Density Estimators: We investigate the use of diffusion models as neural density estimators. The current approach to this problem involves converting the generative process to a smooth flow, known as the Probability Flow ODE. The log density at a given sample can be obtained by solving the ODE with a black-box solver. We introduce a new, highly parallelizable method that computes log densities without the need to solve a flow. Our approach is based on estimating a path integral by Monte Carlo, in a manner identical to the simulation-free training of diffusion models. We also study how different training parameters affect the accuracy of the density calculation, and offer insights into how these models can be made more scalable and efficient.<|reference_end|>
arxiv
@article{premkumar2024diffusion, title={Diffusion Density Estimators}, author={Akhil Premkumar}, journal={arXiv preprint arXiv:2410.06986}, year={2024}, archivePrefix={arXiv}, eprint={2410.06986}, primaryClass={cs.LG stat.ML} }
premkumar2024diffusion
arxiv-667585
2410.06987
Radio signal propagation in 5G systems equipped with RISs (PL: Propagacja sygnału radiowego w systemach 5G wyposażonych w matryce IPR)
<|reference_start|>Radio signal propagation in 5G systems equipped with RISs (PL: Propagacja sygnału radiowego w systemach 5G wyposażonych w matryce IPR): In this paper, the characteristics of radio signal propagation within the boundaries of the city of Poznan (Poland) are analyzed. The study considers the use of a Radio Access Network (RAN) of the 5th generation wireless system (5G NR - New Radio), which includes 8 base stations (BSs) utilizing Single Input Single Output (SISO) or Multiple Input Multiple Output (MIMO) antenna technology depending on the adopted configuration of network cells. Additionally, 15 reflecting arrays known as Reconfigurable Intelligent Surfaces (RISs) were placed in the studied area, and their impact on radio signal propagation at different suspension heights was taken into account.<|reference_end|>
arxiv
@article{samorzewski2024radio, title={Radio signal propagation in 5G systems equipped with RISs (PL: Propagacja sygna{\l}u radiowego w systemach 5G wyposa\.zonych w matryce IPR)}, author={Adam Samorzewski, Adrian Kliks}, journal={Telecommunication Review - Telecommunication News, 2024, no. 4, pp. 280-283}, year={2024}, doi={10.15199/59.2024.4.61}, archivePrefix={arXiv}, eprint={2410.06987}, primaryClass={cs.NI} }
samorzewski2024radio
arxiv-667586
2410.06990
Structure and Control of Biology-inspired Networks
<|reference_start|>Structure and Control of Biology-inspired Networks: There is increasing interest in developing the theoretical foundations of networked control systems that illuminate how brain networks function so as to enable sensory perception, control of movement, memory and all the operations that are needed for animals to survive. The present paper proposes a biologically inspired network model featuring dynamic connections regulated by Hebbian learning. Drawing on the machinery of graph theory and classical control, we show that our novel nonlinear model exhibits such biologically plausible features as bounded evolution, stability, resilience, and a kind of structural stability -- meaning that perturbations of the model parameters leave the essential properties of the model intact. The proposed network model involves generalized cactus graphs with multiple control input nodes, and it is shown that the properties of the network are resilient to various changes in network topology provided these changes preserve the generalized cactus structure. A particular example described in what follows is an idealized network model of the visual system of a macaque monkey. The model displays resilience to network disruptions such as might occur in a living organism due to disease or injury. A different model of the same type provides an example of a system that can perform data classification.<|reference_end|>
arxiv
@article{sun2024structure, title={Structure and Control of Biology-inspired Networks}, author={Zexin Sun, John Baillieul}, journal={arXiv preprint arXiv:2410.06990}, year={2024}, archivePrefix={arXiv}, eprint={2410.06990}, primaryClass={eess.SY cs.SY} }
sun2024structure
arxiv-667587
2410.06992
SWE-Bench+: Enhanced Coding Benchmark for LLMs
<|reference_start|>SWE-Bench+: Enhanced Coding Benchmark for LLMs: Large Language Models (LLMs) in Software Engineering (SE) can offer assistance for coding. To facilitate a rigorous evaluation of LLMs in practical coding contexts, Carlos et al. introduced the SWE-bench dataset, which comprises 2,294 real-world GitHub issues and their corresponding pull requests, collected from 12 widely used Python repositories. Several impressive LLM-based toolkits have recently been developed and evaluated on this dataset. However, a systematic evaluation of the quality of SWE-bench remains missing. In this paper, we addressed this gap by presenting an empirical analysis of the SWE-bench dataset. We conducted a manual screening of instances where SWE-Agent + GPT-4 successfully resolved issues by comparing the model-generated patches with the actual pull requests. SWE-Agent + GPT-4 was at the top of the SWE-bench leaderboard at the time of our study. Our analysis reveals some critical issues with the SWE-bench dataset: 1) 32.67% of the successful patches involve cheating as the solutions were directly provided in the issue report or the comments. We refer to this as the solution leakage problem. 2) 31.08% of the passed patches are suspicious patches due to weak test cases, i.e., the tests were not adequate to verify the correctness of a patch. When we filtered out these problematic issues, the resolution rate of SWE-Agent + GPT-4 dropped from 12.47% to 3.97%. We also observed that the same data quality issues exist in the two variants of SWE-bench, i.e., SWE-bench Lite and SWE-Bench Verified. In addition, over 94% of the issues were created before the LLMs' knowledge cutoff dates, posing potential data leakage issues.<|reference_end|>
arxiv
@article{aleithan2024swe-bench+:, title={SWE-Bench+: Enhanced Coding Benchmark for LLMs}, author={Reem Aleithan, Haoran Xue, Mohammad Mahdi Mohajer, Elijah Nnorom, Gias Uddin, Song Wang}, journal={arXiv preprint arXiv:2410.06992}, year={2024}, archivePrefix={arXiv}, eprint={2410.06992}, primaryClass={cs.SE} }
aleithan2024swe-bench+:
arxiv-667588
2410.06993
Efficient Distribution Matching of Representations via Noise-Injected Deep InfoMax
<|reference_start|>Efficient Distribution Matching of Representations via Noise-Injected Deep InfoMax: Deep InfoMax (DIM) is a well-established method for self-supervised representation learning (SSRL) based on maximization of the mutual information between the input and the output of a deep neural network encoder. Although DIM, and contrastive SSRL in general, are well explored, the task of learning representations conforming to a specific distribution (i.e., distribution matching, DM) is still under-addressed. Motivated by the importance of DM to several downstream tasks (including generative modeling, disentanglement, outlier detection, and others), we enhance DIM to enable automatic matching of learned representations to a selected prior distribution. To achieve this, we propose injecting independent noise into the normalized outputs of the encoder, while keeping the same InfoMax training objective. We show that such a modification allows for learning uniformly and normally distributed representations, as well as representations of other absolutely continuous distributions. Our approach is tested on various downstream tasks. The results indicate a moderate trade-off between the performance on the downstream tasks and quality of DM.<|reference_end|>
arxiv
@article{butakov2024efficient, title={Efficient Distribution Matching of Representations via Noise-Injected Deep InfoMax}, author={Ivan Butakov, Alexander Sememenko, Alexander Tolmachev, Andrey Gladkov, Marina Munkhoeva, Alexey Frolov}, journal={arXiv preprint arXiv:2410.06993}, year={2024}, archivePrefix={arXiv}, eprint={2410.06993}, primaryClass={cs.LG cs.IT math.IT stat.ML} }
butakov2024efficient
arxiv-667589
2410.06997
A Diffusion-based Xray2MRI Model: Generating Pseudo-MRI Volumes From one Single X-ray
<|reference_start|>A Diffusion-based Xray2MRI Model: Generating Pseudo-MRI Volumes From one Single X-ray: Knee osteoarthritis (KOA) is a prevalent musculoskeletal disorder, and X-rays are commonly used for its diagnosis due to their cost-effectiveness. Magnetic Resonance Imaging (MRI), on the other hand, offers detailed soft tissue visualization and has become a valuable supplementary diagnostic tool for KOA. Unfortunately, the high cost and limited accessibility of MRI hinder its widespread use, leaving many patients with KOA reliant solely on X-ray imaging. In this study, we introduce a novel diffusion-based Xray2MRI model capable of generating pseudo-MRI volumes from a single X-ray image. In addition to using X-rays as conditional input, our model integrates target depth, KOA probability distribution, and image intensity distribution modules to guide the synthesis process, ensuring that the generated slices accurately correspond to the anatomical structures. Experimental results demonstrate that by integrating information from X-rays with additional input data, our proposed approach is capable of generating pseudo-MRI sequences that approximate real MRI scans. Moreover, by increasing the inference times, the model achieves effective interpolation, further improving the continuity and smoothness of the generated MRI sequences, representing a promising initial attempt at cost-effective medical imaging solutions.<|reference_end|>
arxiv
@article{wang2024a, title={A Diffusion-based Xray2MRI Model: Generating Pseudo-MRI Volumes From one Single X-ray}, author={Zhe Wang, Rachid Jennane, Aladine Chetouani, Yung Hsin Chen, Fabian Bauer, Mohamed Jarraya}, journal={arXiv preprint arXiv:2410.06997}, year={2024}, archivePrefix={arXiv}, eprint={2410.06997}, primaryClass={eess.IV cs.CV} }
wang2024a
arxiv-667590
2410.06998
An Improved ESO-Based Line-of-Sight Guidance Law for Path Following of Underactuated Autonomous Underwater Helicopter With Nonlinear Tracking Differentiator and Anti-saturation Controller
<|reference_start|>An Improved ESO-Based Line-of-Sight Guidance Law for Path Following of Underactuated Autonomous Underwater Helicopter With Nonlinear Tracking Differentiator and Anti-saturation Controller: This paper presents an Improved Extended-state-observer based Line-of-Sight (IELOS) guidance law for path following of underactuated Autonomous Underwater helicopter (AUH) utilizing a nonlinear tracking differentiator and anti-saturation controller. Due to the high mobility of the AUH, the classical reduced-order Extended-State-Observer (ESO) struggles to accurately track the sideslip angle, especially when rapid variation occurs. By incorporating the nonlinear tracking differentiator and anti-saturation controller, the IELOS guidance law can precisely track sideslip angle and mitigate propeller thrust buffet compared to the classical Extended-state-observer based Line-of-Sight (ELOS) guidance law. The performance of ESO is significantly influenced by the bandwidth, with the Improved Extended-State-Observer (IESO) proving effective at low bandwidths where the classical ESO falls short. The paper establishes the input-to-state stability of the closed-loop system. Subsequently, simulation and pool experimental results are showcased to validate the effectiveness of the IELOS guidance law, which outperforms both the Line-of-Sight (LOS) and Adaptive Line-of-Sight (ALOS) guidance laws in terms of performance.<|reference_end|>
arxiv
@article{li2024an, title={An Improved ESO-Based Line-of-Sight Guidance Law for Path Following of Underactuated Autonomous Underwater Helicopter With Nonlinear Tracking Differentiator and Anti-saturation Controller}, author={Haoda Li, Zichen Liu, Jin Huang, Xinyu An, and Ying Chen}, journal={arXiv preprint arXiv:2410.06998}, year={2024}, archivePrefix={arXiv}, eprint={2410.06998}, primaryClass={eess.SY cs.SY} }
li2024an
arxiv-667591
2410.07001
WebTigerPython -- A Low-Floor High-Ceiling Python IDE for the Browser
<|reference_start|>WebTigerPython -- A Low-Floor High-Ceiling Python IDE for the Browser: With the large diversity of platforms and devices used by students, web applications increasingly suggest themselves as the solution of choice. Developing adequate educational programming environments in the browser, however, remains a challenge and often involves trade-offs between desired functionalities and navigating the limitations of web applications, in particular the blocking single-threaded execution model. We introduce a fully browser-based Python programming environment that explores the possibilities and demonstrates that a web application can indeed support a rich and mature set of features, ranging from Turtle graphics over educational robotics to data processing.<|reference_end|>
arxiv
@article{bachmann2024webtigerpython, title={WebTigerPython -- A Low-Floor High-Ceiling Python IDE for the Browser}, author={Clemens Bachmann, Alexandra Maximova, Tobias Kohn, Dennis Komm}, journal={arXiv preprint arXiv:2410.07001}, year={2024}, archivePrefix={arXiv}, eprint={2410.07001}, primaryClass={cs.PL} }
bachmann2024webtigerpython
arxiv-667592
2410.07002
CursorCore: Assist Programming through Aligning Anything
<|reference_start|>CursorCore: Assist Programming through Aligning Anything: Large language models have been successfully applied to programming assistance tasks, such as code completion, code insertion, and instructional code editing. However, these applications remain insufficiently automated and struggle to effectively integrate various types of information during the programming process, including coding history, current code, and user instructions. In this work, we propose a new conversational framework that comprehensively integrates these information sources, collect data to train our models, and evaluate their performance. Firstly, to thoroughly evaluate how well models align with different types of information and the quality of their outputs, we introduce a new benchmark, APEval (Assist Programming Eval), to comprehensively assess the performance of models in programming assistance tasks. Then, for data collection, we develop a data generation pipeline, Programming-Instruct, which synthesizes training data from diverse sources, such as GitHub and online judge platforms. This pipeline can automatically generate various types of messages throughout the programming process. Finally, using this pipeline, we generate 219K samples, fine-tune multiple models, and develop the CursorCore series. We show that CursorCore outperforms other models of comparable size. This framework unifies applications such as inline chat and automated editing, contributing to the advancement of coding assistants. Code, models and data are freely available at https://github.com/TechxGenus/CursorCore.<|reference_end|>
arxiv
@article{jiang2024cursorcore:, title={CursorCore: Assist Programming through Aligning Anything}, author={Hao Jiang, Qi Liu, Rui Li, Shengyu Ye, Shijin Wang}, journal={arXiv preprint arXiv:2410.07002}, year={2024}, archivePrefix={arXiv}, eprint={2410.07002}, primaryClass={cs.CL cs.AI cs.SE} }
jiang2024cursorcore:
arxiv-667593
2410.07003
Through the Looking Glass: Mirror Schr\"odinger Bridges
<|reference_start|>Through the Looking Glass: Mirror Schr\"odinger Bridges: Resampling from a target measure whose density is unknown is a fundamental problem in mathematical statistics and machine learning. A setting that dominates the machine learning literature consists of learning a map from an easy-to-sample prior, such as the Gaussian distribution, to a target measure. Under this model, samples from the prior are pushed forward to generate a new sample on the target measure, which is often difficult to sample from directly. In this paper, we propose a new model for conditional resampling called mirror Schr\"odinger bridges. Our key observation is that solving the Schr\"odinger bridge problem between a distribution and itself provides a natural way to produce new samples from conditional distributions, giving in-distribution variations of an input data point. We show how to efficiently solve this largely overlooked version of the Schr\"odinger bridge problem. We prove that our proposed method leads to significant algorithmic simplifications over existing alternatives, in addition to providing control over in-distribution variation. Empirically, we demonstrate how these benefits can be leveraged to produce proximal samples in a number of application domains.<|reference_end|>
arxiv
@article{da silva2024through, title={Through the Looking Glass: Mirror Schr\"odinger Bridges}, author={Leticia Mattos Da Silva, Silvia Sell\'an, Justin Solomon}, journal={arXiv preprint arXiv:2410.07003}, year={2024}, archivePrefix={arXiv}, eprint={2410.07003}, primaryClass={cs.LG} }
da silva2024through
arxiv-667594
2410.07009
Pap2Pat: Towards Automated Paper-to-Patent Drafting using Chunk-based Outline-guided Generation
<|reference_start|>Pap2Pat: Towards Automated Paper-to-Patent Drafting using Chunk-based Outline-guided Generation: The patent domain is gaining attention in natural language processing research, offering practical applications in streamlining the patenting process and providing challenging benchmarks for large language models (LLMs). However, the generation of the description sections of patents, which constitute more than 90% of the patent document, has not been studied to date. We address this gap by introducing the task of outline-guided paper-to-patent generation, where an academic paper provides the technical specification of the invention and an outline conveys the desired patent structure. We present PAP2PAT, a new challenging benchmark of 1.8k patent-paper pairs with document outlines, collected using heuristics that reflect typical research lab practices. Our experiments with current open-weight LLMs and outline-guided chunk-based generation show that they can effectively use information from the paper but struggle with repetitions, likely due to the inherent repetitiveness of patent language. We release our data and code.<|reference_end|>
arxiv
@article{knappich2024pap2pat:, title={Pap2Pat: Towards Automated Paper-to-Patent Drafting using Chunk-based Outline-guided Generation}, author={Valentin Knappich, Simon Razniewski, Anna H\"atty, Annemarie Friedrich}, journal={arXiv preprint arXiv:2410.07009}, year={2024}, archivePrefix={arXiv}, eprint={2410.07009}, primaryClass={cs.CL cs.AI} }
knappich2024pap2pat:
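A hypothetical sketch of outline-guided, chunk-based drafting as described in the abstract; `call_llm`, the chunk size, and the naive lexical retriever are placeholders chosen for illustration, not the PAP2PAT pipeline:

```python
# Illustrative only: walk the outline, pair each section heading with a few
# relevant paper chunks, and generate section by section.

from typing import List


def call_llm(prompt: str) -> str:
    """Placeholder for any text-generation backend (hypothetical)."""
    return f"[generated text for a prompt of {len(prompt)} characters]"


def chunk(text: str, size: int = 2000) -> List[str]:
    """Split the paper into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def top_chunks(chunks: List[str], heading: str, k: int = 3) -> List[str]:
    """Naive lexical-overlap retriever as a stand-in for a proper one."""
    words = set(heading.lower().split())
    scored = sorted(chunks, key=lambda c: -len(words & set(c.lower().split())))
    return scored[:k]


def draft_patent(paper: str, outline: List[str]) -> str:
    chunks = chunk(paper)
    sections = []
    for heading in outline:
        context = "\n\n".join(top_chunks(chunks, heading))
        prompt = (f"Paper excerpts:\n{context}\n\n"
                  f"Write the patent section '{heading}' based on the excerpts.")
        sections.append(f"{heading}\n{call_llm(prompt)}")
    return "\n\n".join(sections)


if __name__ == "__main__":
    print(draft_patent("paper text " * 500, ["Field of the Invention", "Summary"]))
```

Generating per outline section keeps each prompt short, which is one plausible way to control the repetition issue the abstract mentions.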
arxiv-667595
2410.07013
Causal Representation Learning in Temporal Data via Single-Parent Decoding
<|reference_start|>Causal Representation Learning in Temporal Data via Single-Parent Decoding: Scientific research often seeks to understand the causal structure underlying high-level variables in a system. For example, climate scientists study how phenomena, such as El Ni\~no, affect other climate processes at remote locations across the globe. However, scientists typically collect low-level measurements, such as geographically distributed temperature readings. From these, one needs to learn both a mapping to causally-relevant latent variables, such as a high-level representation of the El Ni\~no phenomenon and other processes, as well as the causal model over them. The challenge is that this task, called causal representation learning, is highly underdetermined from observational data alone, requiring other constraints during learning to resolve the indeterminacies. In this work, we consider a temporal model with a sparsity assumption, namely single-parent decoding: each observed low-level variable is only affected by a single latent variable. Such an assumption is reasonable in many scientific applications that require finding groups of low-level variables, such as extracting regions from geographically gridded measurement data in climate research or capturing brain regions from neural activity data. We demonstrate the identifiability of the resulting model and propose a differentiable method, Causal Discovery with Single-parent Decoding (CDSD), that simultaneously learns the underlying latents and a causal graph over them. We assess the validity of our theoretical results using simulated data and showcase the practical validity of our method in an application to real-world data from the climate science field.<|reference_end|>
arxiv
@article{brouillard2024causal, title={Causal Representation Learning in Temporal Data via Single-Parent Decoding}, author={Philippe Brouillard, S\'ebastien Lachapelle, Julia Kaltenborn, Yaniv Gurwicz, Dhanya Sridhar, Alexandre Drouin, Peer Nowack, Jakob Runge, David Rolnick}, journal={arXiv preprint arXiv:2410.07013}, year={2024}, archivePrefix={arXiv}, eprint={2410.07013}, primaryClass={cs.LG} }
brouillard2024causal
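A generic sketch of the single-parent decoding assumption described in the abstract, in notation of our choosing rather than the paper's:

```latex
% Each observed low-level variable x^j_t is decoded from exactly one latent
% parent z^{\pi(j)}_t, while the latents follow a temporal causal model with
% graph G:
\[
  z_t \sim p\bigl(z_t \mid z_{t-1}, G\bigr),
  \qquad
  x^j_t = f_j\bigl(z^{\pi(j)}_t\bigr) + \varepsilon^j_t ,
\]
% where \pi(j) assigns observed variable j to its single latent parent and
% \varepsilon^j_t is independent noise. Learning then amounts to recovering the
% assignment \pi, the decoders f_j, and the causal graph G over the latents.
```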
arxiv-667596
2410.07014
Optimizing Estimators of Squared Calibration Errors in Classification
<|reference_start|>Optimizing Estimators of Squared Calibration Errors in Classification: In this work, we propose a mean-squared error-based risk that enables the comparison and optimization of estimators of squared calibration errors in practical settings. Improving the calibration of classifiers is crucial for enhancing the trustworthiness and interpretability of machine learning models, especially in sensitive decision-making scenarios. Although various calibration (error) estimators exist in the current literature, there is a lack of guidance on selecting the appropriate estimator and tuning its hyperparameters. By leveraging the bilinear structure of squared calibration errors, we reformulate calibration estimation as a regression problem with independent and identically distributed (i.i.d.) input pairs. This reformulation allows us to quantify the performance of different estimators even for the most challenging calibration criterion, known as canonical calibration. Our approach advocates for a training-validation-testing pipeline when estimating a calibration error on an evaluation dataset. We demonstrate the effectiveness of our pipeline by optimizing existing calibration estimators and comparing them with novel kernel ridge regression-based estimators on standard image classification tasks.<|reference_end|>
arxiv
@article{gruber2024optimizing, title={Optimizing Estimators of Squared Calibration Errors in Classification}, author={Sebastian G. Gruber, Francis Bach}, journal={arXiv preprint arXiv:2410.07014}, year={2024}, archivePrefix={arXiv}, eprint={2410.07014}, primaryClass={cs.LG stat.ML} }
gruber2024optimizing
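As context, one common definition of the squared canonical calibration error, stated here as a sketch in generic notation (not necessarily the paper's exact setup):

```latex
% Squared canonical calibration error of a probabilistic classifier f with
% one-hot labels Y:
\[
  \mathrm{CE}^2(f) \;=\; \mathbb{E}\,\bigl\|\, \mathbb{E}[\,Y \mid f(X)\,] - f(X) \,\bigr\|_2^2 .
\]
% Expanding the square yields terms that are (bi)linear in the unknown
% conditional expectation \mathbb{E}[Y \mid f(X)], which is what allows, as the
% abstract notes, estimation to be cast as a regression problem over i.i.d.
% input pairs rather than requiring the conditional in closed form.
```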
arxiv-667597
2410.07018
Tri-Level Navigator: LLM-Empowered Tri-Level Learning for Time Series OOD Generalization
<|reference_start|>Tri-Level Navigator: LLM-Empowered Tri-Level Learning for Time Series OOD Generalization: Out-of-Distribution (OOD) generalization in machine learning is a burgeoning area of study. Its primary goal is to enhance the adaptability and resilience of machine learning models when faced with new, unseen, and potentially adversarial data that significantly diverges from their original training datasets. In this paper, we investigate time series OOD generalization via pre-trained Large Language Models (LLMs). We first propose a novel \textbf{T}ri-level learning framework for \textbf{T}ime \textbf{S}eries \textbf{O}OD generalization, termed TTSO, which considers both sample-level and group-level uncertainties. This formulation offers a fresh theoretical perspective for formulating and analyzing the OOD generalization problem. In addition, we provide a theoretical analysis to justify that this method is well motivated. We then develop a stratified localization algorithm tailored for this tri-level optimization problem, theoretically demonstrating the guaranteed convergence of the proposed algorithm. Our analysis also reveals that the iteration complexity to obtain an $\epsilon$-stationary point is bounded by O($\frac{1}{\epsilon^{2}}$). Extensive experiments on real-world datasets have been conducted to demonstrate the effectiveness of the proposed method.<|reference_end|>
arxiv
@article{jian2024tri-level, title={Tri-Level Navigator: LLM-Empowered Tri-Level Learning for Time Series OOD Generalization}, author={Chengtao Jian, Kai Yang, Yang Jiao}, journal={arXiv preprint arXiv:2410.07018}, year={2024}, archivePrefix={arXiv}, eprint={2410.07018}, primaryClass={cs.LG cs.AI} }
jian2024tri-level
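Purely as a generic illustration of what a tri-level (nested) optimization problem looks like — this is not claimed to be TTSO's exact objective:

```latex
% Generic tri-level structure with outer variables x, middle variables y, and
% inner variables z:
\[
  \min_{x} \; F\bigl(x,\, y^\star(x)\bigr)
  \quad \text{s.t.} \quad
  y^\star(x) \in \arg\min_{y} \, G\bigl(x,\, y,\, z^\star(x,y)\bigr),
  \quad
  z^\star(x,y) \in \arg\min_{z} \, H(x, y, z).
\]
% In the abstract's setting, the nested levels are tied to group-level and
% sample-level uncertainty, and the proposed stratified localization algorithm
% is shown to reach an \epsilon-stationary point in O(1/\epsilon^{2}) iterations.
```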
arxiv-667598
2410.07020
What Makes Programmers Laugh? Exploring the Subreddit r/ProgrammerHumor
<|reference_start|>What Makes Programmers Laugh? Exploring the Subreddit r/ProgrammerHumor: Background: Humor is a fundamental part of human communication, with prior work linking positive humor in the workplace to positive outcomes, such as improved performance and job satisfaction. Aims: This study aims to investigate programming-related humor in a large social media community. Methodology: We collected 139,718 submissions from the Reddit subreddit r/ProgrammerHumor. Both textual and image-based (memes) submissions were considered. The image data was processed with OCR to extract text from images for NLP analysis. Multiple regression models were built to investigate what makes submissions humorous. Additionally, a random sample of 800 submissions was labeled by human annotators regarding their relation to theories of humor, suitability for the workplace, the need for programming knowledge to understand the submission, and whether images in image-based submissions added context to the submission. Results: Our results indicate that predicting the humor of software developers is difficult. Our best regression model was able to explain only 10% of the variance. However, statistically significant differences were observed between topics, submission times, and associated humor theories. Our analysis reveals that the highest submission scores are achieved by image-based submissions that are created during the winter months in the northern hemisphere, between 2-3pm UTC on weekends, which are distinctly related to superiority and incongruity theories of humor, and are about the topic of "Learning". Conclusions: Predicting humor with natural language processing methods is challenging. We discuss the benefits and inherent difficulties in assessing the perceived humor of submissions, as well as possible avenues for future work. Additionally, our replication package should help future studies and can act as a joke repository for the software industry and education.<|reference_end|>
arxiv
@article{kuutila2024what, title={What Makes Programmers Laugh? Exploring the Subreddit r/ProgrammerHumor}, author={Miikka Kuutila, Leevi Rantala, Junhao Li, Simo Hosio, Mika M\"antyl\"a}, journal={arXiv preprint arXiv:2410.07020}, year={2024}, doi={10.1145/3674805.3686696}, archivePrefix={arXiv}, eprint={2410.07020}, primaryClass={cs.SE} }
kuutila2024what
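A minimal sketch, not the authors' pipeline, of the kind of OCR-plus-regression setup the abstract describes; the library choices (pytesseract, scikit-learn) and the feature design are assumptions for illustration:

```python
# Sketch: extract text from meme images with OCR, then fit a simple regression
# on TF-IDF features of the text to predict submission score.

from PIL import Image
import pytesseract
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split


def ocr_text(image_path: str) -> str:
    """Pull whatever text the meme image contains."""
    return pytesseract.image_to_string(Image.open(image_path))


def fit_score_model(texts, scores):
    """TF-IDF features plus ridge regression on submission scores."""
    X_train, X_test, y_train, y_test = train_test_split(
        texts, scores, test_size=0.2, random_state=0
    )
    vec = TfidfVectorizer(min_df=2, ngram_range=(1, 2))
    model = Ridge(alpha=1.0).fit(vec.fit_transform(X_train), y_train)
    r2 = r2_score(y_test, model.predict(vec.transform(X_test)))
    return model, vec, r2  # a low R^2 here would mirror the ~10% variance explained
```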
arxiv-667599
2410.07021
Do Contemporary CATE Models Capture Real-World Heterogeneity? Findings from a Large-Scale Benchmark
<|reference_start|>Do Contemporary CATE Models Capture Real-World Heterogeneity? Findings from a Large-Scale Benchmark: We present unexpected findings from a large-scale benchmark study evaluating Conditional Average Treatment Effect (CATE) estimation algorithms. By running 16 modern CATE models across 43,200 datasets, we find that: (a) 62\% of CATE estimates have a higher Mean Squared Error (MSE) than a trivial zero-effect predictor, rendering them ineffective; (b) in datasets with at least one useful CATE estimate, 80\% still have higher MSE than a constant-effect model; and (c) orthogonality-based models outperform other models only 30\% of the time, despite widespread optimism about their performance. These findings expose significant limitations in current CATE models and suggest ample opportunities for further research. Our findings stem from a novel application of \textit{observational sampling}, originally developed to evaluate Average Treatment Effect (ATE) estimates from observational methods with experimental data. To adapt observational sampling for CATE evaluation, we introduce a statistical parameter, $Q$, equal to the MSE minus a constant, which preserves the ranking of models by their MSE. We then derive a family of sample statistics, collectively called $\hat{Q}$, that can be computed from real-world data. We prove that $\hat{Q}$ is a consistent estimator of $Q$ under mild technical conditions. When used in observational sampling, $\hat{Q}$ is unbiased and asymptotically selects the model with the smallest MSE. To ensure the benchmark reflects real-world heterogeneity, we handpick datasets where outcomes come from the field rather than from simulation. By combining the new observational sampling method, new statistics, and real-world datasets, the benchmark provides a unique perspective on CATE estimator performance and uncovers gaps in capturing real-world heterogeneity.<|reference_end|>
arxiv
@article{yu2024do, title={Do Contemporary CATE Models Capture Real-World Heterogeneity? Findings from a Large-Scale Benchmark}, author={Haining Yu and Yizhou Sun}, journal={arXiv preprint arXiv:2410.07021}, year={2024}, archivePrefix={arXiv}, eprint={2410.07021}, primaryClass={stat.ML cs.LG} }
yu2024do
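One illustrative way to see how a parameter can equal "MSE minus a constant" yet preserve model rankings (a sketch of the standard decomposition, not necessarily the paper's exact construction of $Q$):

```latex
% For a CATE estimate \hat{\tau} and true CATE
% \tau(X) = E[Y | X, T=1] - E[Y | X, T=0]:
\[
  \mathrm{MSE}(\hat{\tau})
  = \mathbb{E}\bigl[(\hat{\tau}(X) - \tau(X))^2\bigr]
  = \underbrace{\mathbb{E}[\hat{\tau}(X)^2] - 2\,\mathbb{E}[\hat{\tau}(X)\,\tau(X)]}_{=:\,Q(\hat{\tau})}
  \;+\; \underbrace{\mathbb{E}[\tau(X)^2]}_{\text{constant in } \hat{\tau}} .
\]
% The last term does not depend on the model, so ranking models by Q is the
% same as ranking them by MSE, and Q only involves moments that a sample
% statistic \hat{Q} can plausibly target from data.
```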
arxiv-667600
2410.07022
Exploiting Distribution Constraints for Scalable and Efficient Image Retrieval
<|reference_start|>Exploiting Distribution Constraints for Scalable and Efficient Image Retrieval: Image retrieval is crucial in robotics and computer vision, with downstream applications in robot place recognition and vision-based product recommendations. Modern retrieval systems face two key challenges: scalability and efficiency. State-of-the-art image retrieval systems train specific neural networks for each dataset, an approach that lacks scalability. Furthermore, since retrieval time is directly proportional to embedding size, existing systems that use large embeddings lack efficiency. To tackle scalability, recent works propose using off-the-shelf foundation models. However, these models, though applicable across datasets, fall short in achieving performance comparable to that of dataset-specific models. Our key observation is that, while foundation models capture necessary subtleties for effective retrieval, the underlying distribution of their embedding space can negatively impact cosine similarity searches. We introduce Autoencoders with Strong Variance Constraints (AE-SVC), which, when used for projection, significantly improves the performance of foundation models. We provide an in-depth theoretical analysis of AE-SVC. Addressing efficiency, we introduce Single-shot Similarity Space Distillation ((SS)$_2$D), a novel approach to learn embeddings with adaptive sizes that offers a better trade-off between size and performance. We conducted extensive experiments on four retrieval datasets, including Stanford Online Products (SoP) and Pittsburgh30k, using four different off-the-shelf foundation models, including DinoV2 and CLIP. AE-SVC demonstrates up to a $16\%$ improvement in retrieval performance, while (SS)$_2$D shows a further $10\%$ improvement for smaller embedding sizes.<|reference_end|>
arxiv
@article{omama2024exploiting, title={Exploiting Distribution Constraints for Scalable and Efficient Image Retrieval}, author={Mohammad Omama, Po-han Li, Sandeep P. Chinchali}, journal={arXiv preprint arXiv:2410.07022}, year={2024}, archivePrefix={arXiv}, eprint={2410.07022}, primaryClass={cs.IR} }
omama2024exploiting
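An illustrative sketch of the underlying idea that the embedding distribution affects cosine-similarity search: here a simple PCA-whitening projection stands in for AE-SVC (which the abstract describes as an autoencoder with variance constraints, not plain whitening), followed by cosine retrieval over the projected embeddings.

```python
# Sketch only: equalize per-direction variance of gallery embeddings before
# cosine-similarity search.

import numpy as np


def fit_whitening(emb: np.ndarray, eps: float = 1e-6):
    """Estimate mean and a whitening matrix from a gallery of embeddings."""
    mean = emb.mean(axis=0)
    cov = np.cov(emb - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs / np.sqrt(eigvals + eps)  # scale each principal direction to unit variance
    return mean, W


def project(emb: np.ndarray, mean: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Whiten and L2-normalize so dot products are cosine similarities."""
    z = (emb - mean) @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)


def cosine_topk(query: np.ndarray, gallery: np.ndarray, k: int = 5) -> np.ndarray:
    """Indices of the k most similar gallery embeddings to the query."""
    return np.argsort(-(gallery @ query))[:k]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "foundation model" embeddings with very uneven per-dimension scale.
    gallery = rng.normal(size=(1000, 64)) * rng.uniform(0.1, 5.0, size=64)
    mean, W = fit_whitening(gallery)
    g = project(gallery, mean, W)
    q = project(gallery[:1], mean, W)[0]
    print(cosine_topk(q, g))  # the query's own index should rank first
```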