corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-666401 | 2410.04794 | Extended multi-adjoint logic programming | <|reference_start|>Extended multi-adjoint logic programming: Extended multi-adjoint logic programming arises as an extension of multi-adjoint normal logic programming where constraints and a special type of aggregator operator have been included. The use of this general aggregator operator makes it possible to consider, for example, different negation operators in the body of the rules of a logic program. We have introduced the syntax and the semantics of this new paradigm, as well as an interesting mechanism for obtaining a multi-adjoint normal logic program from an extended multi-adjoint logic program. This mechanism will allow us to establish technical properties relating the different stable models of both logic programming frameworks. Moreover, it makes it possible to apply the already developed and future theory associated with stable models of multi-adjoint normal logic programs to extended multi-adjoint logic programs.<|reference_end|> | arxiv | @article{cornejo2024extended,
title={Extended multi-adjoint logic programming},
author={M. Eugenia Cornejo, David Lobo, Jes\'us Medina},
journal={Fuzzy Sets and Systems 388 (2020) 124-145},
year={2024},
doi={10.1016/j.fss.2019.03.016},
archivePrefix={arXiv},
eprint={2410.04794},
primaryClass={cs.LO math.LO}
} | cornejo2024extended |
arxiv-666402 | 2410.04795 | Representing the Under-Represented: Cultural and Core Capability Benchmarks for Developing Thai Large Language Models | <|reference_start|>Representing the Under-Represented: Cultural and Core Capability Benchmarks for Developing Thai Large Language Models: The rapid advancement of large language models (LLMs) has highlighted the need for robust evaluation frameworks that assess their core capabilities, such as reasoning, knowledge, and commonsense, leading to the inception of certain widely-used benchmark suites such as the H6 benchmark. However, these benchmark suites are primarily built for the English language, and comparable suites are largely lacking for languages that are under-represented in LLM development, such as Thai. On the other hand, developing LLMs for Thai should also include enhancing cultural understanding as well as core capabilities. To address this dual challenge in Thai LLM research, we propose two key benchmarks: Thai-H6 and the Thai Cultural and Linguistic Intelligence Benchmark (ThaiCLI). Through a thorough evaluation of various LLMs with multi-lingual capabilities, we provide a comprehensive analysis of the proposed benchmarks and how they contribute to Thai LLM development. Furthermore, we will make both the datasets and evaluation code publicly available to encourage further research and development for Thai LLMs.<|reference_end|> | arxiv | @article{kim2024representing,
title={Representing the Under-Represented: Cultural and Core Capability
Benchmarks for Developing Thai Large Language Models},
author={Dahyun Kim, Sukyung Lee, Yungi Kim, Attapol Rutherford, Chanjun Park},
journal={arXiv preprint arXiv:2410.04795},
year={2024},
archivePrefix={arXiv},
eprint={2410.04795},
primaryClass={cs.CL cs.AI}
} | kim2024representing |
arxiv-666403 | 2410.04797 | Attentive-based Multi-level Feature Fusion for Voice Disorder Diagnosis | <|reference_start|>Attentive-based Multi-level Feature Fusion for Voice Disorder Diagnosis: Voice disorders negatively impact the quality of daily life in various ways. However, accurately recognizing the category of pathological features from raw audio remains a considerable challenge due to limited datasets. A promising method to handle this issue is extracting multi-level pathological information from speech in a comprehensive manner by fusing features in the latent space. In this paper, a novel framework is designed to explore the way of high-quality feature fusion for effective and generalized detection performance. Specifically, the proposed model follows a two-stage training paradigm: (1) ECAPA-TDNN and Wav2vec 2.0, which have shown remarkable effectiveness in various domains, are employed to learn the universal pathological information from raw audio; (2) an attentive fusion module is specifically designed to establish the interaction between the pathological features projected by ECAPA-TDNN and Wav2vec 2.0, respectively, and to guide the multi-layer fusion; the entire model is jointly fine-tuned from pre-trained features on the automatic voice pathology detection task. Finally, comprehensive experiments on the FEMH and SVD datasets demonstrate that the proposed framework outperforms the competitive baselines, achieving accuracies of 90.51% and 87.68%, respectively.<|reference_end|> | arxiv | @article{shen2024attentive-based,
title={Attentive-based Multi-level Feature Fusion for Voice Disorder Diagnosis},
author={Lipeng Shen, Yifan Xiong, Dongyue Guo, Wei Mo, Lingyu Yu, Hui Yang, Yi
Lin},
journal={arXiv preprint arXiv:2410.04797},
year={2024},
archivePrefix={arXiv},
eprint={2410.04797},
primaryClass={cs.SD cs.MM eess.AS}
} | shen2024attentive-based |
arxiv-666404 | 2410.04798 | DAPE V2: Process Attention Score as Feature Map for Length Extrapolation | <|reference_start|>DAPE V2: Process Attention Score as Feature Map for Length Extrapolation: The attention mechanism is a fundamental component of the Transformer model, contributing to interactions among distinct tokens, in contrast to earlier feed-forward neural networks. In general, the attention scores are determined simply by the key-query products. However, this work's occasional trial (combining DAPE and NoPE) of including additional MLPs on attention scores without position encoding indicates that the classical key-query multiplication may limit the performance of Transformers. In this work, we conceptualize attention as a feature map and apply the convolution operator (for neighboring attention scores across different heads) to mimic the processing methods in computer vision. Specifically, the main contribution of this paper is identifying and interpreting the Transformer length extrapolation problem as a result of the limited expressiveness of the naive query and key dot product, and we successfully translate the length extrapolation issue into a well-understood feature map processing problem. The novel insight, which can be adapted to various attention-related models, reveals that the current Transformer architecture has the potential for further evolution. Extensive experiments demonstrate that treating attention as a feature map and applying convolution as a processing method significantly enhances Transformer performance.<|reference_end|> | arxiv | @article{zheng2024dape,
title={DAPE V2: Process Attention Score as Feature Map for Length Extrapolation},
author={Chuanyang Zheng, Yihang Gao, Han Shi, Jing Xiong, Jiankai Sun, Jingyao
Li, Minbin Huang, Xiaozhe Ren, Michael Ng, Xin Jiang, Zhenguo Li, Yu Li},
journal={arXiv preprint arXiv:2410.04798},
year={2024},
archivePrefix={arXiv},
eprint={2410.04798},
primaryClass={cs.CL}
} | zheng2024dape |
arxiv-666405 | 2410.04799 | Transforming Color: A Novel Image Colorization Method | <|reference_start|>Transforming Color: A Novel Image Colorization Method: This paper introduces a novel method for image colorization that utilizes a color transformer and generative adversarial networks (GANs) to address the challenge of generating visually appealing colorized images. Conventional approaches often struggle with capturing long-range dependencies and producing realistic colorizations. The proposed method integrates a transformer architecture to capture global information and a GAN framework to improve visual quality. In this study, a color encoder that utilizes a random normal distribution to generate color features is applied. These features are then integrated with grayscale image features to enhance the overall representation of the images. Our method demonstrates superior performance compared with existing approaches by utilizing the capacity of the transformer, which can capture long-range dependencies, and of the GAN, which generates realistic colorizations. Experimental results show that the proposed network significantly outperforms other state-of-the-art colorization techniques, highlighting its potential for image colorization. This research opens new possibilities for precise and visually compelling image colorization in domains such as digital restoration and historical image analysis.<|reference_end|> | arxiv | @article{shafiq2024transforming,
title={Transforming Color: A Novel Image Colorization Method},
author={Hamza Shafiq and Bumshik Lee},
journal={Electronics 2024, 13, 2511},
year={2024},
doi={10.3390/electronics13132511},
archivePrefix={arXiv},
eprint={2410.04799},
primaryClass={cs.CV cs.AI}
} | shafiq2024transforming |
arxiv-666406 | 2410.04801 | Improving Image Clustering with Artifacts Attenuation via Inference-Time Attention Engineering | <|reference_start|>Improving Image Clustering with Artifacts Attenuation via Inference-Time Attention Engineering: The goal of this paper is to improve the performance of pretrained Vision Transformer (ViT) models, particularly DINOv2, in the image clustering task without requiring re-training or fine-tuning. As model size increases, a high-norm artifact anomaly appears in the patches of multi-head attention. We observe that this anomaly leads to reduced accuracy in zero-shot image clustering. These artifacts are characterized by disproportionately large values in the attention map compared to other patch tokens. To address these artifacts, we propose an approach called Inference-Time Attention Engineering (ITAE), which manipulates the attention function during inference. Specifically, we identify the artifacts by investigating one of the Query-Key-Value (QKV) patches in the multi-head attention and attenuate their corresponding attention values inside the pretrained models. ITAE shows improved clustering accuracy on multiple datasets by exhibiting more expressive features in latent space. Our findings highlight the potential of ITAE as a practical solution for reducing artifacts in pretrained ViT models and improving model performance in clustering tasks without the need for re-training or fine-tuning.<|reference_end|> | arxiv | @article{nakamura2024improving,
title={Improving Image Clustering with Artifacts Attenuation via Inference-Time
Attention Engineering},
author={Kazumoto Nakamura, Yuji Nozawa, Yu-Chieh Lin, Kengo Nakata, Youyang Ng},
journal={arXiv preprint arXiv:2410.04801},
year={2024},
archivePrefix={arXiv},
eprint={2410.04801},
primaryClass={cs.CV cs.LG}
} | nakamura2024improving |
arxiv-666407 | 2410.04802 | Building Damage Assessment in Conflict Zones: A Deep Learning Approach Using Geospatial Sub-Meter Resolution Data | <|reference_start|>Building Damage Assessment in Conflict Zones: A Deep Learning Approach Using Geospatial Sub-Meter Resolution Data: Very High Resolution (VHR) geospatial image analysis is crucial for humanitarian assistance in both natural and anthropogenic crises, as it allows rapid identification of the most critical areas that need support. Nonetheless, manually inspecting large areas is time-consuming and requires domain expertise. Thanks to their accuracy, generalization capabilities, and highly parallelizable workload, Deep Neural Networks (DNNs) provide an excellent way to automate this task. Nevertheless, there is a scarcity of VHR data pertaining to conflict situations, and consequently, of studies on the effectiveness of DNNs in those scenarios. Motivated by this, our work extensively studies the applicability of a collection of state-of-the-art Convolutional Neural Networks (CNNs), originally developed for natural disaster damage assessment, in a war scenario. To this end, we build an annotated dataset with pre- and post-conflict images of the Ukrainian city of Mariupol. We then explore the transferability of the CNN models in both zero-shot and learning scenarios, demonstrating their potential and limitations. To the best of our knowledge, this is the first study to use sub-meter resolution imagery to assess building damage in combat zones.<|reference_end|> | arxiv | @article{risso2024building,
title={Building Damage Assessment in Conflict Zones: A Deep Learning Approach
Using Geospatial Sub-Meter Resolution Data},
author={Matteo Risso, Alessia Goffi, Beatrice Alessandra Motetti, Alessio
Burrello, Jean Baptiste Bove, Enrico Macii, Massimo Poncino, Daniele Jahier
Pagliari, Giuseppe Maffeis},
journal={arXiv preprint arXiv:2410.04802},
year={2024},
archivePrefix={arXiv},
eprint={2410.04802},
primaryClass={cs.CV cs.LG}
} | risso2024building |
arxiv-666408 | 2410.04803 | Timer-XL: Long-Context Transformers for Unified Time Series Forecasting | <|reference_start|>Timer-XL: Long-Context Transformers for Unified Time Series Forecasting: We present Timer-XL, a generative Transformer for unified time series forecasting. To uniformly predict 1D and 2D time series, we generalize next token prediction, predominantly adopted for causal generation of 1D sequences, to multivariate next token prediction. The proposed paradigm uniformly formulates various forecasting scenarios as a long-context generation problem. We opt for the generative Transformer, which can capture global-range and causal dependencies while providing contextual flexibility, to implement unified forecasting on univariate series characterized by non-stationarity, multivariate time series with complicated dynamics and correlations, and covariate-informed contexts that include both endogenous and exogenous variables. Technically, we propose a universal TimeAttention to facilitate generative Transformers on time series, which can effectively capture fine-grained intra- and inter-series dependencies of flattened time series tokens (patches) and is further strengthened by position embeddings in both temporal and variable dimensions. Timer-XL achieves state-of-the-art performance across challenging forecasting benchmarks through a unified approach. As a large time series model, it demonstrates notable model transferability by large-scale pre-training, as well as contextual flexibility in token lengths, positioning it as a one-for-all forecaster.<|reference_end|> | arxiv | @article{liu2024timer-xl:,
title={Timer-XL: Long-Context Transformers for Unified Time Series Forecasting},
author={Yong Liu, Guo Qin, Xiangdong Huang, Jianmin Wang, Mingsheng Long},
journal={arXiv preprint arXiv:2410.04803},
year={2024},
archivePrefix={arXiv},
eprint={2410.04803},
primaryClass={cs.LG stat.ML}
} | liu2024timer-xl: |
arxiv-666409 | 2410.04805 | HF-NTT: Hazard-Free Dataflow Accelerator for Number Theoretic Transform | <|reference_start|>HF-NTT: Hazard-Free Dataflow Accelerator for Number Theoretic Transform: Polynomial multiplication is one of the fundamental operations in many applications, such as fully homomorphic encryption (FHE). However, the computational inefficiency stemming from polynomials with many large-bit coefficients poses a significant challenge for the practical implementation of FHE. The Number Theoretic Transform (NTT) has proven an effective tool in enhancing polynomial multiplication, but a fast and adaptable method for generating NTT accelerators is lacking. In this paper, we introduce HF-NTT, a novel NTT accelerator. HF-NTT efficiently handles polynomials of varying degrees and moduli, allowing for a balance between performance and hardware resources by adjusting the number of Processing Elements (PEs). Meanwhile, we introduce a data movement strategy that eliminates the need for bit-reversal operations, resolves different hazards, and reduces clock cycles. Furthermore, our accelerator includes a hardware-friendly modular multiplication design and a configurable PE capable of adapting its data path, resulting in a universal architecture. We synthesized and implemented a prototype using Vivado 2022.2 and evaluated it on the Xilinx Virtex-7 FPGA platform. The results demonstrate significant improvements in Area-Time-Product (ATP) and processing speed for different polynomial degrees. In scenarios involving multi-modulus polynomial multiplication, our prototype consistently outperforms other designs in both ATP and latency metrics.<|reference_end|> | arxiv | @article{meng2024hf-ntt:,
title={HF-NTT: Hazard-Free Dataflow Accelerator for Number Theoretic Transform},
author={Xiangchen Meng, Zijun Jiang, Yangdi Lyu},
journal={arXiv preprint arXiv:2410.04805},
year={2024},
archivePrefix={arXiv},
eprint={2410.04805},
primaryClass={cs.AR cs.CR}
} | meng2024hf-ntt: |
arxiv-666410 | 2410.04808 | LPZero: Language Model Zero-cost Proxy Search from Zero | <|reference_start|>LPZero: Language Model Zero-cost Proxy Search from Zero: In spite of the outstanding performance, Neural Architecture Search (NAS) is criticized for massive computation. Recently, Zero-shot NAS has emerged as a promising approach by exploiting Zero-cost (ZC) proxies, which markedly reduce computational demands. Despite this, existing ZC proxies heavily rely on expert knowledge and incur significant trial-and-error costs. Particularly in NLP tasks, most existing ZC proxies fail to surpass the performance of the naive baseline. To address these challenges, we introduce a novel framework, \textbf{LPZero}, which is the first to automatically design ZC proxies for various tasks, achieving higher ranking consistency than human-designed proxies. Specifically, we model the ZC proxy as a symbolic equation and incorporate a unified proxy search space that encompasses existing ZC proxies, which are composed of a predefined set of mathematical symbols. To heuristically search for the best ZC proxy, LPZero incorporates genetic programming to find the optimal symbolic composition. We propose a \textit{Rule-based Pruning Strategy (RPS),} which preemptively eliminates unpromising proxies, thereby mitigating the risk of proxy degradation. Extensive experiments on FlexiBERT, GPT-2, and LLaMA-7B demonstrate LPZero's superior ranking ability and performance on downstream tasks compared to current approaches.<|reference_end|> | arxiv | @article{dong2024lpzero:,
title={LPZero: Language Model Zero-cost Proxy Search from Zero},
author={Peijie Dong, Lujun Li, Xiang Liu, Zhenheng Tang, Xuebo Liu, Qiang
Wang, Xiaowen Chu},
journal={arXiv preprint arXiv:2410.04808},
year={2024},
archivePrefix={arXiv},
eprint={2410.04808},
primaryClass={cs.CL}
} | dong2024lpzero: |
arxiv-666411 | 2410.04809 | Data-driven Diffusion Models for Enhancing Safety in Autonomous Vehicle Traffic Simulations | <|reference_start|>Data-driven Diffusion Models for Enhancing Safety in Autonomous Vehicle Traffic Simulations: Safety-critical traffic scenarios are integral to the development and validation of autonomous driving systems. These scenarios provide crucial insights into vehicle responses under high-risk conditions rarely encountered in real-world settings. Recent advancements in critical scenario generation have demonstrated the superiority of diffusion-based approaches over traditional generative models in terms of effectiveness and realism. However, current diffusion-based methods fail to adequately address the complexity of driver behavior and traffic density information, both of which significantly influence driver decision-making processes. In this work, we present a novel approach to overcome these limitations by introducing adversarial guidance functions for diffusion models that incorporate behavior complexity and traffic density, thereby enhancing the generation of more effective and realistic safety-critical traffic scenarios. The proposed method is evaluated on two metrics, effectiveness and realism, and demonstrates better efficacy compared to other state-of-the-art methods.<|reference_end|> | arxiv | @article{lu2024data-driven,
title={Data-driven Diffusion Models for Enhancing Safety in Autonomous Vehicle
Traffic Simulations},
author={Jinxiong Lu, Shoaib Azam, Gokhan Alcan, and Ville Kyrki},
journal={arXiv preprint arXiv:2410.04809},
year={2024},
archivePrefix={arXiv},
eprint={2410.04809},
primaryClass={cs.RO}
} | lu2024data-driven |
arxiv-666412 | 2410.04810 | FedBiP: Heterogeneous One-Shot Federated Learning with Personalized Latent Diffusion Models | <|reference_start|>FedBiP: Heterogeneous One-Shot Federated Learning with Personalized Latent Diffusion Models: One-Shot Federated Learning (OSFL), a special decentralized machine learning paradigm, has recently gained significant attention. OSFL requires only a single round of client data or model upload, which reduces communication costs and mitigates privacy threats compared to traditional FL. Despite these promising prospects, existing methods face challenges due to client data heterogeneity and limited data quantity when applied to real-world OSFL systems. Recently, Latent Diffusion Models (LDM) have shown remarkable advancements in synthesizing high-quality images through pretraining on large-scale datasets, thereby presenting a potential solution to overcome these issues. However, directly applying pretrained LDM to heterogeneous OSFL results in significant distribution shifts in synthetic data, leading to performance degradation in classification models trained on such data. This issue is particularly pronounced in rare domains, such as medical imaging, which are underrepresented in LDM's pretraining data. To address this challenge, we propose Federated Bi-Level Personalization (FedBiP), which personalizes the pretrained LDM at both instance-level and concept-level. Hereby, FedBiP synthesizes images following the client's local data distribution without compromising the privacy regulations. FedBiP is also the first approach to simultaneously address feature space heterogeneity and client data scarcity in OSFL. Our method is validated through extensive experiments on three OSFL benchmarks with feature space heterogeneity, as well as on challenging medical and satellite image datasets with label heterogeneity. The results demonstrate the effectiveness of FedBiP, which substantially outperforms other OSFL methods.<|reference_end|> | arxiv | @article{chen2024fedbip:,
title={FedBiP: Heterogeneous One-Shot Federated Learning with Personalized
Latent Diffusion Models},
author={Haokun Chen, Hang Li, Yao Zhang, Gengyuan Zhang, Jinhe Bi, Philip
Torr, Jindong Gu, Denis Krompass, Volker Tresp},
journal={arXiv preprint arXiv:2410.04810},
year={2024},
archivePrefix={arXiv},
eprint={2410.04810},
primaryClass={cs.LG cs.CV cs.DC cs.MM}
} | chen2024fedbip: |
arxiv-666413 | 2410.04811 | Learning Efficient and Effective Trajectories for Differential Equation-based Image Restoration | <|reference_start|>Learning Efficient and Effective Trajectories for Differential Equation-based Image Restoration: The differential equation-based image restoration approach aims to establish learnable trajectories connecting high-quality images to a tractable distribution, e.g., low-quality images or a Gaussian distribution. In this paper, we reformulate the trajectory optimization of this kind of method, focusing on enhancing both reconstruction quality and efficiency. Initially, we navigate effective restoration paths through a reinforcement learning process, gradually steering potential trajectories toward the most precise options. Additionally, to mitigate the considerable computational burden associated with iterative sampling, we propose cost-aware trajectory distillation to streamline complex paths into several manageable steps with adaptable sizes. Moreover, we fine-tune a foundational diffusion model (FLUX) with 12B parameters by using our algorithms, producing a unified framework for handling 7 kinds of image restoration tasks. Extensive experiments showcase the significant superiority of the proposed method, achieving a maximum PSNR improvement of 2.1 dB over state-of-the-art methods, while also greatly enhancing visual perceptual quality. Project page: \url{https://zhu-zhiyu.github.io/FLUX-IR/}.<|reference_end|> | arxiv | @article{zhu2024learning,
title={Learning Efficient and Effective Trajectories for Differential
Equation-based Image Restoration},
author={Zhiyu Zhu, Jinhui Hou, Hui Liu, Huanqiang Zeng, and Junhui Hou},
journal={arXiv preprint arXiv:2410.04811},
year={2024},
archivePrefix={arXiv},
eprint={2410.04811},
primaryClass={cs.CV}
} | zhu2024learning |
arxiv-666414 | 2410.04814 | Learning Interpretable Hierarchical Dynamical Systems Models from Time Series Data | <|reference_start|>Learning Interpretable Hierarchical Dynamical Systems Models from Time Series Data: In science, we are often interested in obtaining a generative model of the underlying system dynamics from observed time series. While powerful methods for dynamical systems reconstruction (DSR) exist when data come from a single domain, how to best integrate data from multiple dynamical regimes and leverage it for generalization is still an open question. This becomes particularly important when individual time series are short, and group-level information may help to fill in for gaps in single-domain data. At the same time, averaging is not an option in DSR, as it will wipe out crucial dynamical properties (e.g., limit cycles in one domain vs. chaos in another). Hence, a framework is needed that can efficiently harvest group-level (multi-domain) information while retaining all single-domain dynamical characteristics. Here we provide such a hierarchical approach and showcase it on popular DSR benchmarks, as well as on neuroscientific and medical time series. In addition to faithful reconstruction of all individual dynamical regimes, our unsupervised methodology discovers common low-dimensional feature spaces in which datasets with similar dynamics cluster. The features spanning these spaces were, moreover, dynamically highly interpretable, often standing in a surprisingly linear relation to the control parameters that govern the dynamics of the underlying system. Finally, we illustrate transfer learning and generalization to new parameter regimes.<|reference_end|> | arxiv | @article{brenner2024learning,
title={Learning Interpretable Hierarchical Dynamical Systems Models from Time
Series Data},
author={Manuel Brenner, Elias Weber, Georgia Koppe, Daniel Durstewitz},
journal={arXiv preprint arXiv:2410.04814},
year={2024},
archivePrefix={arXiv},
eprint={2410.04814},
primaryClass={cs.LG cs.AI math.DS nlin.CD physics.data-an}
} | brenner2024learning |
arxiv-666415 | 2410.04815 | A Review of Artificial Intelligence based Biological-Tree Construction: Priorities, Methods, Applications and Trends | <|reference_start|>A Review of Artificial Intelligence based Biological-Tree Construction: Priorities, Methods, Applications and Trends: Biological tree analysis serves as a pivotal tool in uncovering the evolutionary and differentiation relationships among organisms, genes, and cells. Its applications span diverse fields including phylogenetics, developmental biology, ecology, and medicine. Traditional tree inference methods, while foundational in early studies, face increasing limitations in processing the large-scale, complex datasets generated by modern high-throughput technologies. Recent advances in deep learning offer promising solutions, providing enhanced data processing and pattern recognition capabilities. However, challenges remain, particularly in accurately representing the inherently discrete and non-Euclidean nature of biological trees. In this review, we first outline the key biological priors fundamental to phylogenetic and differentiation tree analyses, facilitating a deeper interdisciplinary understanding between deep learning researchers and biologists. We then systematically examine the commonly used data formats and databases, serving as a comprehensive resource for model testing and development. We provide a critical analysis of traditional tree generation methods, exploring their underlying biological assumptions, technical characteristics, and limitations. Current developments in deep learning-based tree generation are reviewed, highlighting both recent advancements and existing challenges. Furthermore, we discuss the diverse applications of biological trees across various biological domains. Finally, we propose potential future directions and trends in leveraging deep learning for biological tree research, aiming to guide further exploration and innovation in this field.<|reference_end|> | arxiv | @article{zang2024a,
title={A Review of Artificial Intelligence based Biological-Tree Construction:
Priorities, Methods, Applications and Trends},
author={Zelin Zang, Yongjie Xu, Chenrui Duan, Jinlin Wu, Stan Z. Li, Zhen Lei},
journal={arXiv preprint arXiv:2410.04815},
year={2024},
archivePrefix={arXiv},
eprint={2410.04815},
primaryClass={q-bio.PE cs.AI}
} | zang2024a |
arxiv-666416 | 2410.04817 | Resource-Efficient Multiview Perception: Integrating Semantic Masking with Masked Autoencoders | <|reference_start|>Resource-Efficient Multiview Perception: Integrating Semantic Masking with Masked Autoencoders: Multiview systems have become a key technology in modern computer vision, offering advanced capabilities in scene understanding and analysis. However, these systems face critical challenges in bandwidth limitations and computational constraints, particularly for resource-limited camera nodes like drones. This paper presents a novel approach for communication-efficient distributed multiview detection and tracking using masked autoencoders (MAEs). We introduce a semantic-guided masking strategy that leverages pre-trained segmentation models and a tunable power function to prioritize informative image regions. This approach, combined with an MAE, reduces communication overhead while preserving essential visual information. We evaluate our method on both virtual and real-world multiview datasets, demonstrating comparable performance in terms of detection and tracking performance metrics compared to state-of-the-art techniques, even at high masking ratios. Our selective masking algorithm outperforms random masking, maintaining higher accuracy and precision as the masking ratio increases. Furthermore, our approach achieves a significant reduction in transmission data volume compared to baseline methods, thereby balancing multiview tracking performance with communication efficiency.<|reference_end|> | arxiv | @article{dakic2024resource-efficient,
title={Resource-Efficient Multiview Perception: Integrating Semantic Masking
with Masked Autoencoders},
author={Kosta Dakic, Kanchana Thilakarathna, Rodrigo N. Calheiros, Teng Joon
Lim},
journal={arXiv preprint arXiv:2410.04817},
year={2024},
archivePrefix={arXiv},
eprint={2410.04817},
primaryClass={cs.CV cs.AI eess.IV eess.SP}
} | dakic2024resource-efficient |
arxiv-666417 | 2410.04818 | Physics-Informed GNN for non-linear constrained optimization: PINCO a solver for the AC-optimal power flow | <|reference_start|>Physics-Informed GNN for non-linear constrained optimization: PINCO a solver for the AC-optimal power flow: The energy transition is driving the integration of large shares of intermittent power sources in the electric power grid. Therefore, addressing the AC optimal power flow (AC-OPF) effectively becomes increasingly essential. The AC-OPF, which is a fundamental optimization problem in power systems, must be solved more frequently to ensure the safe and cost-effective operation of power systems. Due to its non-linear nature, AC-OPF is often solved in its linearized form, despite inherent inaccuracies. Non-linear solvers, such as the interior point method, are typically employed to solve the full OPF problem. However, these iterative methods may not converge for large systems and do not guarantee global optimality. This work explores a physics-informed graph neural network, PINCO, to solve the AC-OPF. We demonstrate that this method provides accurate solutions in a fraction of the computational time when compared to the established non-linear programming solvers. Remarkably, PINCO generalizes effectively across a diverse set of loading conditions in the power system. We show that our method can solve the AC-OPF without violating inequality constraints. Furthermore, it can function both as a solver and as a hybrid universal function approximator. Moreover, the approach can be easily adapted to different power systems with minimal adjustments to the hyperparameters, including systems with multiple generators at each bus. Overall, this work demonstrates an advancement in the field of power system optimization to tackle the challenges of the energy transition. The code and data utilized in this paper are available at https://anonymous.4open.science/r/opf_pinn_iclr-B83E/.<|reference_end|> | arxiv | @article{varbella2024physics-informed,
title={Physics-Informed GNN for non-linear constrained optimization: PINCO a
solver for the AC-optimal power flow},
author={Anna Varbella, Damien Briens, Blazhe Gjorgiev, Giuseppe Alessio
D'Inverno, Giovanni Sansavini},
journal={arXiv preprint arXiv:2410.04818},
year={2024},
archivePrefix={arXiv},
eprint={2410.04818},
primaryClass={eess.SY cs.LG cs.SY}
} | varbella2024physics-informed |
arxiv-666418 | 2410.04819 | MINER: Mining the Underlying Pattern of Modality-Specific Neurons in Multimodal Large Language Models | <|reference_start|>MINER: Mining the Underlying Pattern of Modality-Specific Neurons in Multimodal Large Language Models: In recent years, multimodal large language models (MLLMs) have significantly advanced, integrating more modalities into diverse applications. However, the lack of explainability remains a major barrier to their use in scenarios requiring decision transparency. Current neuron-level explanation paradigms mainly focus on knowledge localization or language- and domain-specific analyses, leaving the exploration of multimodality largely unaddressed. To tackle these challenges, we propose MINER, a transferable framework for mining modality-specific neurons (MSNs) in MLLMs, which comprises four stages: (1) modality separation, (2) importance score calculation, (3) importance score aggregation, (4) modality-specific neuron selection. Extensive experiments across six benchmarks and two representative MLLMs show that (I) deactivating ONLY 2% of MSNs significantly reduces MLLMs performance (0.56 to 0.24 for Qwen2-VL, 0.69 to 0.31 for Qwen2-Audio), (II) different modalities mainly converge in the lower layers, (III) MSNs influence how key information from various modalities converges to the last token, (IV) two intriguing phenomena worth further investigation, i.e., semantic probing and semantic telomeres. The source code is available at this URL.<|reference_end|> | arxiv | @article{huang2024miner:,
title={MINER: Mining the Underlying Pattern of Modality-Specific Neurons in
Multimodal Large Language Models},
author={Kaichen Huang, Jiahao Huo, Yibo Yan, Kun Wang, Yutao Yue, Xuming Hu},
journal={arXiv preprint arXiv:2410.04819},
year={2024},
archivePrefix={arXiv},
eprint={2410.04819},
primaryClass={cs.CL}
} | huang2024miner: |
arxiv-666419 | 2410.04820 | BCIM: Budget and capacity constrained influence maximization in multilayer networks | <|reference_start|>BCIM: Budget and capacity constrained influence maximization in multilayer networks: Influence maximization (IM) seeks to identify a seed set that maximizes influence within a network, with applications in areas such as viral marketing, disease control, and political campaigns. The budgeted influence maximization (BIM) problem extends IM by incorporating cost constraints for different nodes. However, the current BIM problem, limited by budget alone, often results in the selection of numerous low-cost nodes, which may not be applicable to real-world scenarios. Moreover, considering that users can transmit information across multiple social platforms, solving the BIM problem across these platforms could lead to more optimized resource utilization. To address these challenges, we propose the Budget and Capacity Constrained Influence Maximization (BCIM) problem within multilayer networks and introduce a Multilayer Multi-population Genetic Algorithm (MMGA) to solve it. The MMGA employs modules, such as initialization, repair, and parallel evolution, designed not only to meet budget and capacity constraints but also to significantly enhance algorithmic efficiency. Extensive experiments on both synthetic and empirical multilayer networks demonstrate that MMGA improves spreading performance by at least 10% under the two constraints compared to baselines extended from classical IM problems. The BCIM framework introduces a novel direction in influence maximization, providing an effective and efficient solution to the problem.<|reference_end|> | arxiv | @article{zhang2024bcim:,
title={BCIM: Budget and capacity constrained influence maximization in
multilayer networks},
author={Su-Su Zhang, Chuang Liu, Huijuan Wang, Yang Chen, Xiu-Xiu Zhan},
journal={arXiv preprint arXiv:2410.04820},
year={2024},
archivePrefix={arXiv},
eprint={2410.04820},
primaryClass={cs.SI}
} | zhang2024bcim: |
arxiv-666420 | 2410.04823 | CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models | <|reference_start|>CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models: Despite the transformative impact of deep learning across multiple domains, the inherent opacity of these models has driven the development of Explainable Artificial Intelligence (XAI). Among these efforts, Concept Bottleneck Models (CBMs) have emerged as a key approach to improve interpretability by leveraging high-level semantic information. However, CBMs, like other machine learning models, are susceptible to security threats, particularly backdoor attacks, which can covertly manipulate model behaviors. Understanding that the community has not yet studied the concept level backdoor attack of CBM, because of "Better the devil you know than the devil you don't know.", we introduce CAT (Concept-level Backdoor ATtacks), a methodology that leverages the conceptual representations within CBMs to embed triggers during training, enabling controlled manipulation of model predictions at inference time. An enhanced attack pattern, CAT+, incorporates a correlation function to systematically select the most effective and stealthy concept triggers, thereby optimizing the attack's impact. Our comprehensive evaluation framework assesses both the attack success rate and stealthiness, demonstrating that CAT and CAT+ maintain high performance on clean data while achieving significant targeted effects on backdoored datasets. This work underscores the potential security risks associated with CBMs and provides a robust testing methodology for future security assessments.<|reference_end|> | arxiv | @article{lai2024cat:,
title={CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models},
author={Songning Lai, Jiayu Yang, Yu Huang, Lijie Hu, Tianlang Xue, Zhangyi
Hu, Jiaxu Li, Haicheng Liao, Yutao Yue},
journal={arXiv preprint arXiv:2410.04823},
year={2024},
archivePrefix={arXiv},
eprint={2410.04823},
primaryClass={cs.CV cs.CR}
} | lai2024cat: |
arxiv-666421 | 2410.04824 | Taming Gradient Oversmoothing and Expansion in Graph Neural Networks | <|reference_start|>Taming Gradient Oversmoothing and Expansion in Graph Neural Networks: Oversmoothing has been claimed as a primary bottleneck for multi-layered graph neural networks (GNNs). Multiple analyses have examined how and why oversmoothing occurs. However, none of the prior work addressed how optimization is performed under the oversmoothing regime. In this work, we show the presence of $\textit{gradient oversmoothing}$ preventing optimization during training. We further analyze that GNNs with residual connections, a well-known solution to help gradient flow in deep architecture, introduce $\textit{gradient expansion}$, a phenomenon of the gradient explosion in diverse directions. Therefore, adding residual connections cannot be a solution for making a GNN deep. Our analysis reveals that constraining the Lipschitz bound of each layer can neutralize the gradient expansion. To this end, we provide a simple yet effective normalization method to prevent the gradient expansion. An empirical study shows that the residual GNNs with hundreds of layers can be efficiently trained with the proposed normalization without compromising performance. Additional studies show that the empirical observations corroborate our theoretical analysis.<|reference_end|> | arxiv | @article{park2024taming,
title={Taming Gradient Oversmoothing and Expansion in Graph Neural Networks},
author={MoonJeong Park, Dongwoo Kim},
journal={arXiv preprint arXiv:2410.04824},
year={2024},
archivePrefix={arXiv},
eprint={2410.04824},
primaryClass={cs.LG stat.ML}
} | park2024taming |
arxiv-666422 | 2410.04825 | The divide between us: Internet access among people with and without disabilities in the post-pandemic era | <|reference_start|>The divide between us: Internet access among people with and without disabilities in the post-pandemic era: The COVID-19 pandemic highlighted the importance of internet access across various aspects of life, from remote work and online education to healthcare services and social connections. As we transition to a post-pandemic era, a pressing need arises to update our understanding of the multifaceted nature of internet access. This study is one of the first attempts to do so. Using survey data from New Zealand adult internet users (n=960), it compares internet connection types, frequency of internet use at home, social media use, and concerns about online risk between people with and without disabilities. Results show people with disabilities have restricted fibre access and higher wireless broadband (a much slower connection type). People with disabilities use social media platforms less and are more concerned about certain online risks. The findings highlight persistent disparities in internet access for people with disabilities in the post-pandemic era. Implications of the study are discussed.<|reference_end|> | arxiv | @article{pacheco2024the,
title={The divide between us: Internet access among people with and without
disabilities in the post-pandemic era},
author={Edgar Pacheco and Hannah Burgess},
journal={Disability & Society 2024},
year={2024},
doi={10.1080/09687599.2024.2411541},
archivePrefix={arXiv},
eprint={2410.04825},
primaryClass={cs.CY cs.SI}
} | pacheco2024the |
arxiv-666423 | 2410.04826 | A Planar-Symmetric SO(3) Representation for Learning Grasp Detection | <|reference_start|>A Planar-Symmetric SO(3) Representation for Learning Grasp Detection: Planar-symmetric hands, such as parallel grippers, are widely adopted in both research and industrial fields. Their symmetry, however, introduces ambiguity and discontinuity in the SO(3) representation, which hinders both the training and inference of neural-network-based grasp detectors. We propose a novel SO(3) representation that can parametrize a pair of planar-symmetric poses with a single parameter set by leveraging the 2D Bingham distribution. We also detail a grasp detector based on our representation, which provides a more consistent rotation output. An intensive evaluation with multiple grippers and objects in both the simulation and the real world quantitatively shows our approach's contribution.<|reference_end|> | arxiv | @article{ko2024a,
title={A Planar-Symmetric SO(3) Representation for Learning Grasp Detection},
author={Tianyi Ko, Takuya Ikeda, Hiroya Sato, Koichi Nishiwaki},
journal={arXiv preprint arXiv:2410.04826},
year={2024},
archivePrefix={arXiv},
eprint={2410.04826},
primaryClass={cs.RO}
} | ko2024a |
arxiv-666424 | 2410.04830 | Correcting for Popularity Bias in Recommender Systems via Item Loss Equalization | <|reference_start|>Correcting for Popularity Bias in Recommender Systems via Item Loss Equalization: Recommender Systems (RS) often suffer from popularity bias, where a small set of popular items dominate the recommendation results due to their high interaction rates, leaving many less popular items overlooked. This phenomenon disproportionately benefits users with mainstream tastes while neglecting those with niche interests, leading to unfairness among users and exacerbating disparities in recommendation quality across different user groups. In this paper, we propose an in-processing approach to address this issue by intervening in the training process of recommendation models. Drawing inspiration from fair empirical risk minimization in machine learning, we augment the objective function of the recommendation model with an additional term aimed at minimizing the disparity in loss values across different item groups during the training process. Our approach is evaluated through extensive experiments on two real-world datasets and compared against state-of-the-art baselines. The results demonstrate the superior efficacy of our method in mitigating the unfairness of popularity bias while incurring only negligible loss in recommendation accuracy.<|reference_end|> | arxiv | @article{prent2024correcting,
title={Correcting for Popularity Bias in Recommender Systems via Item Loss
Equalization},
author={Juno Prent and Masoud Mansoury},
journal={arXiv preprint arXiv:2410.04830},
year={2024},
archivePrefix={arXiv},
eprint={2410.04830},
primaryClass={cs.IR}
} | prent2024correcting |
arxiv-666425 | 2410.04833 | Multimodal Fusion Strategies for Mapping Biophysical Landscape Features | <|reference_start|>Multimodal Fusion Strategies for Mapping Biophysical Landscape Features: Multimodal aerial data are used to monitor natural systems, and machine learning can significantly accelerate the classification of landscape features within such imagery to benefit ecology and conservation. It remains under-explored, however, how these multiple modalities ought to be fused in a deep learning model. As a step towards filling this gap, we study three strategies (Early fusion, Late fusion, and Mixture of Experts) for fusing thermal, RGB, and LiDAR imagery using a dataset of spatially-aligned orthomosaics in these three modalities. In particular, we aim to map three ecologically-relevant biophysical landscape features in African savanna ecosystems: rhino middens, termite mounds, and water. The three fusion strategies differ in whether the modalities are fused early or late, and if late, whether the model learns fixed weights per modality for each class or generates weights for each class adaptively, based on the input. Overall, the three methods have similar macro-averaged performance with Late fusion achieving an AUC of 0.698, but their per-class performance varies strongly, with Early fusion achieving the best recall for middens and water and Mixture of Experts achieving the best recall for mounds.<|reference_end|> | arxiv | @article{gordon2024multimodal,
title={Multimodal Fusion Strategies for Mapping Biophysical Landscape Features},
author={Lucia Gordon and Nico Lang and Catherine Ressijac and Andrew Davies},
journal={arXiv preprint arXiv:2410.04833},
year={2024},
archivePrefix={arXiv},
eprint={2410.04833},
primaryClass={cs.CV cs.AI cs.LG}
} | gordon2024multimodal |
arxiv-666426 | 2410.04834 | As Simple as Fine-tuning: LLM Alignment via Bidirectional Negative Feedback Loss | <|reference_start|>As Simple as Fine-tuning: LLM Alignment via Bidirectional Negative Feedback Loss: Direct Preference Optimization (DPO) has emerged as a more computationally efficient alternative to Reinforcement Learning from Human Feedback (RLHF) with Proximal Policy Optimization (PPO), eliminating the need for reward models and online sampling. Despite these benefits, DPO and its variants remain sensitive to hyper-parameters and prone to instability, particularly on mathematical datasets. We argue that these issues arise from the unidirectional likelihood-derivative negative feedback inherent in the log-likelihood loss function. To address this, we propose a novel LLM alignment loss that establishes a stable Bidirectional Negative Feedback (BNF) during optimization. Our proposed BNF loss eliminates the need for pairwise contrastive losses and does not require any extra tunable hyper-parameters or pairwise preference data, streamlining the alignment pipeline to be as simple as supervised fine-tuning. We conduct extensive experiments across two challenging QA benchmarks and four reasoning benchmarks. The experimental results show that BNF achieves comparable performance to the best methods on QA benchmarks, while its performance decrease on the four reasoning benchmarks is significantly lower compared to the best methods, thus striking a better balance between value alignment and reasoning ability. In addition, we further validate the performance of BNF on non-pairwise datasets, and conduct in-depth analysis of log-likelihood and logit shifts across different preference optimization methods.<|reference_end|> | arxiv | @article{mao2024as,
title={As Simple as Fine-tuning: LLM Alignment via Bidirectional Negative
Feedback Loss},
author={Xin Mao, Feng-Lin Li, Huimin Xu, Wei Zhang, Wang Chen, Anh Tuan Luu},
journal={arXiv preprint arXiv:2410.04834},
year={2024},
archivePrefix={arXiv},
eprint={2410.04834},
primaryClass={cs.CL}
} | mao2024as |
arxiv-666427 | 2410.04836 | An Optimized H5 Hysteresis Current Control with Clamped Diodes in Transformer-less Grid-PV Inverter | <|reference_start|>An Optimized H5 Hysteresis Current Control with Clamped Diodes in Transformer-less Grid-PV Inverter: With the rise of renewable energy penetration in the grid, photovoltaic (PV) panels are connected to the grid via inverters to supply solar energy. Transformer-less grid-tied PV inverters are gaining popularity because of their improved efficiency, reduced size, and lower costs. However, they can induce a path for leakage currents between the PV and the grid part due to the absence of galvanic isolation between them. This leads to serious electromagnetic interference, loss in efficiency and safety concerns. The leakage current is primarily influenced by the nature of the common mode voltage (CMV), which is determined by the switching techniques of the inverter. In this paper, a novel inverter topology of Hysteresis Controlled H5 with Two Clamping Diodes (HCH5-D2) has been derived. The HCH5-D2 topology helps to decouple the AC part (Grid) and DC part (PV) during the freewheeling to make the CMV constant and in turn, reduces the leakage current. Also, the additional diodes help to reduce the voltage spikes generated during the freewheeling period and maintain the CMV at a constant value. Finally, a 2.2kW grid-connected single-phase HCH5-D2 PV inverter system's MATLAB simulation has been presented with better results when compared with a traditional H4 inverter.<|reference_end|> | arxiv | @article{phuyal2024an,
title={An Optimized H5 Hysteresis Current Control with Clamped Diodes in
Transformer-less Grid-PV Inverter},
author={Sushil Phuyal, Shashwot Shrestha, Swodesh Sharma, Rachana Subedi, Anil
Kumar Panjiyar, Mukesh Gautam},
journal={arXiv preprint arXiv:2410.04836},
year={2024},
archivePrefix={arXiv},
eprint={2410.04836},
primaryClass={eess.SY cs.SY}
} | phuyal2024an |
arxiv-666428 | 2410.04838 | Rationale-Aware Answer Verification by Pairwise Self-Evaluation | <|reference_start|>Rationale-Aware Answer Verification by Pairwise Self-Evaluation: Answer verification identifies correct solutions among candidates generated by large language models (LLMs). Current approaches typically train verifier models by labeling solutions as correct or incorrect based solely on whether the final answer matches the gold answer. However, this approach neglects any flawed rationale in the solution yielding the correct answer, undermining the verifier's ability to distinguish between sound and flawed rationales. We empirically show that in StrategyQA, only 19% of LLM-generated solutions with correct answers have valid rationales, thus leading to an unreliable verifier. Furthermore, we demonstrate that training a verifier on valid rationales significantly improves its ability to distinguish valid and flawed rationale. To make a better verifier without extra human supervision, we introduce REPS (Rationale Enhancement through Pairwise Selection), a method for selecting valid rationales from candidates by iteratively applying pairwise self-evaluation using the same LLM that generates the solutions. Verifiers trained on solutions selected by REPS outperform those trained using conventional training methods on three reasoning benchmarks (ARC-Challenge, DROP, and StrategyQA). Our results suggest that training reliable verifiers requires ensuring the validity of rationales in addition to the correctness of the final answers, which would be critical for models assisting humans in solving complex reasoning tasks.<|reference_end|> | arxiv | @article{kawabata2024rationale-aware,
title={Rationale-Aware Answer Verification by Pairwise Self-Evaluation},
author={Akira Kawabata, Saku Sugawara},
journal={arXiv preprint arXiv:2410.04838},
year={2024},
archivePrefix={arXiv},
eprint={2410.04838},
primaryClass={cs.CL}
} | kawabata2024rationale-aware |
arxiv-666429 | 2410.04840 | Strong Model Collapse | <|reference_start|>Strong Model Collapse: Within the scaling laws paradigm, which underpins the training of large neural networks like ChatGPT and Llama, we consider a supervised regression setting and establish the existence of a strong form of the model collapse phenomenon, a critical performance degradation due to synthetic data in the training corpus. Our results show that even the smallest fraction of synthetic data (e.g., as little as 1\% of the total training dataset) can still lead to model collapse: larger and larger training sets do not enhance performance. We further investigate whether increasing model size, an approach aligned with current trends in training large language models, exacerbates or mitigates model collapse. In a simplified regime where neural networks are approximated via random projections of tunable size, we both theoretically and empirically show that larger models can amplify model collapse. Interestingly, our theory also indicates that, beyond the interpolation threshold (which can be extremely high for very large datasets), larger models may mitigate the collapse, although they do not entirely prevent it. Our theoretical findings are empirically verified through experiments on language models and feed-forward neural networks for images.<|reference_end|> | arxiv | @article{dohmatob2024strong,
title={Strong Model Collapse},
author={Elvis Dohmatob, Yunzhen Feng, Arjun Subramonian, Julia Kempe},
journal={arXiv preprint arXiv:2410.04840},
year={2024},
archivePrefix={arXiv},
eprint={2410.04840},
primaryClass={cs.LG stat.ML}
} | dohmatob2024strong |
arxiv-666430 | 2410.04842 | A Simple Image Segmentation Framework via In-Context Examples | <|reference_start|>A Simple Image Segmentation Framework via In-Context Examples: Recently, there have been explorations of generalist segmentation models that can effectively tackle a variety of image segmentation tasks within a unified in-context learning framework. However, these methods still struggle with task ambiguity in in-context segmentation, as not all in-context examples can accurately convey the task information. In order to address this issue, we present SINE, a simple image Segmentation framework utilizing in-context examples. Our approach leverages a Transformer encoder-decoder structure, where the encoder provides high-quality image representations, and the decoder is designed to yield multiple task-specific output masks to effectively eliminate task ambiguity. Specifically, we introduce an In-context Interaction module to complement in-context information and produce correlations between the target image and the in-context example and a Matching Transformer that uses fixed matching and a Hungarian algorithm to eliminate differences between different tasks. In addition, we have further perfected the current evaluation system for in-context image segmentation, aiming to facilitate a holistic appraisal of these models. Experiments on various segmentation tasks show the effectiveness of the proposed method.<|reference_end|> | arxiv | @article{liu2024a,
title={A Simple Image Segmentation Framework via In-Context Examples},
author={Yang Liu, Chenchen Jing, Hengtao Li, Muzhi Zhu, Hao Chen, Xinlong
Wang, Chunhua Shen},
journal={arXiv preprint arXiv:2410.04842},
year={2024},
archivePrefix={arXiv},
eprint={2410.04842},
primaryClass={cs.CV}
} | liu2024a |
arxiv-666431 | 2410.04844 | PostEdit: Posterior Sampling for Efficient Zero-Shot Image Editing | <|reference_start|>PostEdit: Posterior Sampling for Efficient Zero-Shot Image Editing: In the field of image editing, three core challenges persist: controllability, background preservation, and efficiency. Inversion-based methods rely on time-consuming optimization to preserve the features of the initial images, which results in low efficiency due to the requirement for extensive network inference. Conversely, inversion-free methods lack theoretical support for background similarity, as they circumvent the issue of maintaining initial features to achieve efficiency. As a consequence, none of these methods can achieve both high efficiency and background consistency. To tackle the challenges and the aforementioned disadvantages, we introduce PostEdit, a method that incorporates a posterior scheme to govern the diffusion sampling process. Specifically, a corresponding measurement term related to both the initial features and Langevin dynamics is introduced to optimize the estimated image generated by the given target prompt. Extensive experimental results indicate that the proposed PostEdit achieves state-of-the-art editing performance while accurately preserving unedited regions. Furthermore, the method is both inversion- and training-free, necessitating approximately 1.5 seconds and 18 GB of GPU memory to generate high-quality results.<|reference_end|> | arxiv | @article{tian2024postedit:,
title={PostEdit: Posterior Sampling for Efficient Zero-Shot Image Editing},
author={Feng Tian, Yixuan Li, Yichao Yan, Shanyan Guan, Yanhao Ge and Xiaokang
Yang},
journal={arXiv preprint arXiv:2410.04844},
year={2024},
archivePrefix={arXiv},
eprint={2410.04844},
primaryClass={cs.CV cs.AI}
} | tian2024postedit: |
arxiv-666432 | 2410.04847 | Causal Context Adjustment Loss for Learned Image Compression | <|reference_start|>Causal Context Adjustment Loss for Learned Image Compression: In recent years, learned image compression (LIC) technologies have surpassed conventional methods notably in terms of rate-distortion (RD) performance. Most present learned techniques are VAE-based with an autoregressive entropy model, which notably promotes RD performance by utilizing the decoded causal context. However, extant methods are highly dependent on the fixed hand-crafted causal context. The question of how to guide the auto-encoder to generate a causal context that is more beneficial for the autoregressive entropy model is worth exploring. In this paper, we make the first attempt at investigating how to explicitly adjust the causal context with our proposed Causal Context Adjustment loss (CCA-loss). By imposing the CCA-loss, we enable the neural network to spontaneously adjust important information into the early stage of the autoregressive entropy model. Furthermore, as transformer technology has developed remarkably, its variants have been adopted by many state-of-the-art (SOTA) LIC techniques. Existing computing devices do not handle the computation of the attention mechanism well, which leads to a burden on both computation and inference latency. To overcome this, we establish a convolutional neural network (CNN) image compression model and adopt an uneven channel-wise grouping strategy for high efficiency. Ultimately, the proposed CNN-based LIC network trained with our Causal Context Adjustment loss attains a great trade-off between inference latency and rate-distortion performance.<|reference_end|> | arxiv | @article{han2024causal,
title={Causal Context Adjustment Loss for Learned Image Compression},
author={Minghao Han, Shiyin Jiang, Shengxi Li, Xin Deng, Mai Xu, Ce Zhu,
Shuhang Gu},
journal={arXiv preprint arXiv:2410.04847},
year={2024},
archivePrefix={arXiv},
eprint={2410.04847},
primaryClass={eess.IV cs.CV}
} | han2024causal |
arxiv-666433 | 2410.04850 | Artificial Barriers for stochastic differential equations and for construction of Boundary-preserving schemes | <|reference_start|>Artificial Barriers for stochastic differential equations and for construction of Boundary-preserving schemes: We develop the novel method of artificial barriers for scalar stochastic differential equations (SDEs) and use it to construct boundary-preserving numerical schemes for strong approximations of scalar SDEs, possibly with non-globally Lipschitz drift and diffusion coefficients, whose state-space is either bounded or half-bounded. The idea of artificial barriers is to augment the SDE with artificial barriers outside the state-space to not change the solution process, and then apply a boundary-preserving numerical scheme to the resulting reflected SDE (RSDE). This enables us to construct boundary-preserving numerical schemes with the same strong convergence rate as the strong convergence rate of the numerical scheme for the corresponding RSDE. Based on the method of artificial barriers, we construct two boundary-preserving schemes that we call the Artificial Barrier Euler-Maruyama (ABEM) scheme and the Artificial Barrier Euler-Peano (ABEP) scheme. We provide numerical experiments for the ABEM scheme and the numerical results agree with the obtained theoretical results.<|reference_end|> | arxiv | @article{ulander2024artificial,
title={Artificial Barriers for stochastic differential equations and for
construction of Boundary-preserving schemes},
author={Johan Ulander},
journal={arXiv preprint arXiv:2410.04850},
year={2024},
archivePrefix={arXiv},
eprint={2410.04850},
primaryClass={math.NA cs.NA math.PR}
} | ulander2024artificial |
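The boundary-preserving idea in the record above can be pictured with a generic reflected Euler-Maruyama step. The sketch below only illustrates reflection at a lower barrier on a half-bounded state space [a, inf); it is an editor's assumption for illustration, not the ABEM or ABEP scheme from the paper.

```python
# Minimal sketch of a reflected Euler-Maruyama scheme on [a, inf)
# (generic reflection, not the paper's artificial-barrier construction).
import numpy as np

def reflected_euler_maruyama(x0, drift, diffusion, a, T, n_steps, rng=None):
    """Simulate dX = drift(X) dt + diffusion(X) dW, reflected at the barrier a."""
    rng = rng or np.random.default_rng(0)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))
        y = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dw
        x[k + 1] = a + abs(y - a)   # mirror reflection keeps the iterate in [a, inf)
    return x

# Example: CIR-type dynamics, whose exact solution stays non-negative.
path = reflected_euler_maruyama(
    x0=0.5,
    drift=lambda v: 1.0 * (0.4 - v),
    diffusion=lambda v: 0.3 * np.sqrt(max(v, 0.0)),
    a=0.0, T=1.0, n_steps=1000,
)
print(path.min())  # >= 0 by construction
```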
arxiv-666434 | 2410.04852 | Single Vs Dual: Influence of the Number of Displays on User Experience within Virtually Embodied Conversational Systems | <|reference_start|>Single Vs Dual: Influence of the Number of Displays on User Experience within Virtually Embodied Conversational Systems: The current research evaluates user experience and preference when interacting with a patient-reported outcome measure (PROM) healthcare application displayed on a single tablet in comparison to interaction with the same application distributed across two tablets. We conducted a within-subject user study with 43 participants who engaged with and rated the usability of our system and participated in a post-experiment interview to collect subjective data. Our findings showed significantly higher usability and higher pragmatic quality ratings for the single tablet condition. However, some users attribute a higher level of presence to the avatar and prefer it to be placed on a second tablet.<|reference_end|> | arxiv | @article{ashrafi2024single,
title={Single Vs Dual: Influence of the Number of Displays on User Experience
within Virtually Embodied Conversational Systems},
author={Navid Ashrafi, Francesco Vona, Philipp Graf, Philipp Harnisch, Sina
Hinzmann, Jan-Niklas Voigt-Antons},
journal={arXiv preprint arXiv:2410.04852},
year={2024},
doi={10.1145/3641825.3689700},
archivePrefix={arXiv},
eprint={2410.04852},
primaryClass={cs.HC}
} | ashrafi2024single |
arxiv-666435 | 2410.04853 | TimeCNN: Refining Cross-Variable Interaction on Time Point for Time Series Forecasting | <|reference_start|>TimeCNN: Refining Cross-Variable Interaction on Time Point for Time Series Forecasting: Time series forecasting is extensively applied across diverse domains. Transformer-based models demonstrate significant potential in modeling cross-time and cross-variable interaction. However, we notice that the cross-variable correlations of multivariate time series are multifaceted (both positive and negative) and progress dynamically over time, which is not well captured by existing Transformer-based models. To address this issue, we propose a TimeCNN model to refine cross-variable interactions to enhance time series forecasting. Its key innovation is timepoint independence: each time point has an independent convolution kernel, giving it its own model for capturing relationships among variables. This approach effectively handles both positive and negative correlations and adapts to the evolving nature of variable relationships over time. Extensive experiments conducted on 12 real-world datasets demonstrate that TimeCNN consistently outperforms state-of-the-art models. Notably, our model achieves significant reductions in computational requirements (approximately 60.46%) and parameter count (about 57.50%), while delivering inference speeds 3 to 4 times faster than the benchmark iTransformer model.<|reference_end|> | arxiv | @article{hu2024timecnn:,
title={TimeCNN: Refining Cross-Variable Interaction on Time Point for Time
Series Forecasting},
author={Ao Hu, Dongkai Wang, Yong Dai, Shiyi Qi, Liangjian Wen, Jun Wang, Zhi
Chen, Xun Zhou, Zenglin Xu, Jiang Duan},
journal={arXiv preprint arXiv:2410.04853},
year={2024},
archivePrefix={arXiv},
eprint={2410.04853},
primaryClass={cs.LG cs.AI stat.ML}
} | hu2024timecnn: |
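One plausible reading of "each time point has an independent convolution kernel" in the record above is a per-timepoint linear mixing of the variables. The PyTorch sketch below is an editor's assumption about the layer shape, not the released TimeCNN code.

```python
# Hedged sketch of a timepoint-independent cross-variable mixing layer
# (one weight matrix per time point); not the official TimeCNN implementation.
import torch
import torch.nn as nn

class TimepointIndependentMixing(nn.Module):
    def __init__(self, seq_len: int, n_vars: int):
        super().__init__()
        # one (n_vars x n_vars) mixing matrix for every time point
        self.weight = nn.Parameter(torch.randn(seq_len, n_vars, n_vars) * 0.02)
        self.bias = nn.Parameter(torch.zeros(seq_len, n_vars))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_vars); mix variables independently per time point
        return torch.einsum("btv,tvw->btw", x, self.weight) + self.bias

layer = TimepointIndependentMixing(seq_len=96, n_vars=7)
out = layer(torch.randn(32, 96, 7))
print(out.shape)  # torch.Size([32, 96, 7])
```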
arxiv-666436 | 2410.04854 | State Observer for the Fourth-order Model of a Salient Pole Synchronous Generator with Stator Losses: Known and Partially Unknown Input Cases | <|reference_start|>State Observer for the Fourth-order Model of a Salient Pole Synchronous Generator with Stator Losses: Known and Partially Unknown Input Cases: In this paper we study the question of how to reconstruct the state of a power system using Phasor Measurement Units (PMUs). In our previous research we proved that this question has an affirmative answer under some rather strict structural assumptions: namely, neglecting the generator rotor saliency and assuming that the stator resistance of the synchronous generator is zero. It was shown in simulations that the performance of the proposed observer was sensitive to these assumptions, with a degradation of transient quality observed in realistic simulations that do not impose these assumptions. Moreover, it was assumed in our previous work that the mechanical power and the field voltage are available for measurement, a scenario that is not always realistic. In this paper we accomplish two ambitious objectives. First, we propose a new observer that does not impose the simplifying assumptions on the generator model. Secondly, we consider the more realistic scenario where only mechanical power is available for measurement. That is, we solve a problem of state reconstruction of a nonlinear system with partially known input measurements -- that is well-known to be a very challenging task. The design of the first observer relies on two recent developments proposed by the authors, a parameter estimation based approach to the problem of state estimation and the use of the Dynamic Regressor Extension and Mixing (DREM) technique to estimate these parameters. The use of DREM allows us to overcome the problem of lack of persistent excitation that stymies the application of standard parameter estimation designs. On the other hand, the observer for the partial input measurement scenario relies on the clever exploitation of the system's model. Simulation results illustrate the good performance of the proposed observers.<|reference_end|> | arxiv | @article{bobtsov2024state,
title={State Observer for the Fourth-order Model of a Salient Pole Synchronous
Generator with Stator Losses: Known and Partially Unknown Input Cases},
author={Alexey Bobtsov, Romeo Ortega, Nicolai Lorenz-Meyer, Johannes Schiffer},
journal={arXiv preprint arXiv:2410.04854},
year={2024},
archivePrefix={arXiv},
eprint={2410.04854},
primaryClass={eess.SY cs.SY}
} | bobtsov2024state |
arxiv-666437 | 2410.04855 | Unsupervised Skill Discovery for Robotic Manipulation through Automatic Task Generation | <|reference_start|>Unsupervised Skill Discovery for Robotic Manipulation through Automatic Task Generation: Learning skills that interact with objects is of major importance for robotic manipulation. These skills can indeed serve as an efficient prior for solving various manipulation tasks. We propose a novel Skill Learning approach that discovers composable behaviors by solving a large and diverse number of autonomously generated tasks. Our method learns skills allowing the robot to consistently and robustly interact with objects in its environment. The discovered behaviors are embedded in primitives which can be composed with Hierarchical Reinforcement Learning to solve unseen manipulation tasks. In particular, we leverage Asymmetric Self-Play to discover behaviors and Multiplicative Compositional Policies to embed them. We compare our method to Skill Learning baselines and find that our skills are more interactive. Furthermore, the learned skills can be used to solve a set of unseen manipulation tasks, in simulation as well as on a real robotic platform.<|reference_end|> | arxiv | @article{jansonnie2024unsupervised,
title={Unsupervised Skill Discovery for Robotic Manipulation through Automatic
Task Generation},
author={Paul Jansonnie, Bingbing Wu, Julien Perez, Jan Peters},
journal={arXiv preprint arXiv:2410.04855},
year={2024},
archivePrefix={arXiv},
eprint={2410.04855},
primaryClass={cs.RO cs.AI cs.LG}
} | jansonnie2024unsupervised |
arxiv-666438 | 2410.04862 | Temporal-Assisted Dynamic Beampattern Optimization in Integrated Sensing and Communication Systems | <|reference_start|>Temporal-Assisted Dynamic Beampattern Optimization in Integrated Sensing and Communication Systems: In this paper, an integrated sensing and communication (ISAC) system is investigated. Initially, we introduce a design criterion wherein sensing data acquired from the preceding time slot is employed for instantaneous optimal beamforming in the succeeding time slot, aiming to enhance the communication rate. Subsequently, the development of optimal beamforming is addressed, and a high-caliber suboptimal solution is derived utilizing successive convex approximation (SCA) techniques combined with the iterative rank minimization (IRM) methodology. Our evaluations, grounded in numerical analyses, reveal that the communication rate of the introduced beamforming strategy surpasses that of conventional omnidirectional sensing and pilot based approaches.<|reference_end|> | arxiv | @article{zhou2024temporal-assisted,
title={Temporal-Assisted Dynamic Beampattern Optimization in Integrated Sensing
and Communication Systems},
author={Shengcai Zhou, Luping Xiang, Kun Yang},
journal={arXiv preprint arXiv:2410.04862},
year={2024},
archivePrefix={arXiv},
eprint={2410.04862},
primaryClass={cs.CG}
} | zhou2024temporal-assisted |
arxiv-666439 | 2410.04865 | Mastering Chinese Chess AI (Xiangqi) Without Search | <|reference_start|>Mastering Chinese Chess AI (Xiangqi) Without Search: We have developed a high-performance Chinese Chess AI that operates without reliance on search algorithms. This AI has demonstrated the capability to compete at a level commensurate with the top 0.1\% of human players. By eliminating the search process typically associated with such systems, this AI achieves a Queries Per Second (QPS) rate that exceeds those of systems based on the Monte Carlo Tree Search (MCTS) algorithm by over a thousandfold and surpasses those based on the AlphaBeta pruning algorithm by more than a hundredfold. The AI training system consists of two parts: supervised learning and reinforcement learning. Supervised learning provides an initial human-like Chinese chess AI, while reinforcement learning, based on supervised learning, elevates the strength of the entire AI to a new level. Based on this training system, we carried out extensive ablation experiments and discovered that: 1. With the same parameter count, a Transformer architecture achieves higher performance than a CNN on Chinese chess; 2. Including the possible moves of both sides as features can greatly improve the training process; 3. A selective opponent pool, compared to pure self-play training, results in a faster improvement curve and a higher strength limit; 4. Value Estimation with Cutoff (VECT) improves the original PPO training process, and we explain why.<|reference_end|> | arxiv | @article{chen2024mastering,
title={Mastering Chinese Chess AI (Xiangqi) Without Search},
author={Yu Chen, Juntong Lin, Zhichao Shu},
journal={arXiv preprint arXiv:2410.04865},
year={2024},
archivePrefix={arXiv},
eprint={2410.04865},
primaryClass={cs.LG cs.AI}
} | chen2024mastering |
arxiv-666440 | 2410.04866 | Art Forgery Detection using Kolmogorov Arnold and Convolutional Neural Networks | <|reference_start|>Art Forgery Detection using Kolmogorov Arnold and Convolutional Neural Networks: Art authentication has historically established itself as a task requiring profound connoisseurship of one particular artist. Nevertheless, famous art forgers such as Wolfgang Beltracchi were able to deceive dozens of art experts. In recent years Artificial Intelligence algorithms have been successfully applied to various image processing tasks. In this work, we leverage the growing improvements in AI to present an art authentication framework for the identification of the forger Wolfgang Beltracchi. Differently from existing literature on AI-aided art authentication, we focus on a specialized model of a forger, rather than an artist, flipping the approach of traditional AI methods. We use a carefully compiled dataset of known artists forged by Beltracchi and a set of known works by the forger to train a multiclass image classification model based on EfficientNet. We compare the results with Kolmogorov Arnold Networks (KAN) which, to the best of our knowledge, have never been tested in the art domain. The results show a general agreement between the different models' predictions on artworks flagged as forgeries, which are then closely studied using visual analysis.<|reference_end|> | arxiv | @article{boccuzzo2024art,
title={Art Forgery Detection using Kolmogorov Arnold and Convolutional Neural
Networks},
author={Sandro Boccuzzo, Deborah Desir'ee Meyer, Ludovica Schaerf},
journal={arXiv preprint arXiv:2410.04866},
year={2024},
archivePrefix={arXiv},
eprint={2410.04866},
primaryClass={cs.CV}
} | boccuzzo2024art |
arxiv-666441 | 2410.04868 | Predictive Spliner: Data-Driven Overtaking in Autonomous Racing Using Opponent Trajectory Prediction | <|reference_start|>Predictive Spliner: Data-Driven Overtaking in Autonomous Racing Using Opponent Trajectory Prediction: Head-to-head racing against opponents is a challenging and emerging topic in the domain of autonomous racing. We propose Predictive Spliner, a data-driven overtaking planner that learns the behavior of opponents through Gaussian Process (GP) regression, which is then leveraged to compute viable overtaking maneuvers in future sections of the racing track. Experimentally validated on a 1:10 scale autonomous racing platform using Light Detection and Ranging (LiDAR) information to perceive the opponent, Predictive Spliner outperforms State-of-the-Art (SotA) algorithms by overtaking opponents at up to 83.1% of its own speed, being on average 8.4% faster than the previous best-performing method. Additionally, it achieves an average success rate of 84.5%, which is 47.6% higher than the previous best-performing method. The method maintains computational efficiency with a Central Processing Unit (CPU) load of 22.79% and a computation time of 8.4 ms, evaluated on a Commercial off-the-Shelf (CotS) Intel i7-1165G7, making it suitable for real-time robotic applications. These results highlight the potential of Predictive Spliner to enhance the performance and safety of autonomous racing vehicles. The code for Predictive Spliner is available at: https://github.com/ForzaETH/predictive-spliner.<|reference_end|> | arxiv | @article{baumann2024predictive,
title={Predictive Spliner: Data-Driven Overtaking in Autonomous Racing Using
Opponent Trajectory Prediction},
author={Nicolas Baumann, Edoardo Ghignone, Cheng Hu, Benedict Hildisch, Tino
H"ammerle, Alessandro Bettoni, Andrea Carron, Lei Xie, and Michele Magno},
journal={arXiv preprint arXiv:2410.04868},
year={2024},
archivePrefix={arXiv},
eprint={2410.04868},
primaryClass={cs.RO cs.SY eess.SY}
} | baumann2024predictive |
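The opponent-modelling step in the record above can be illustrated with a plain Gaussian Process regression of the opponent's progress along the track versus time. Everything below (data shapes, kernel choice, the arc-length parameterization) is an illustrative assumption, not the ForzaETH implementation.

```python
# Schematic of GP-based opponent trajectory prediction (editor's illustration):
# fit the opponent's arc length s(t) from past observations and extrapolate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical LiDAR-derived observations: time stamps and opponent arc length.
t_obs = np.linspace(0.0, 2.0, 21).reshape(-1, 1)
s_obs = 4.0 * t_obs.ravel() + 0.05 * np.random.default_rng(0).normal(size=21)

gp = GaussianProcessRegressor(
    kernel=1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2),
    normalize_y=True,
)
gp.fit(t_obs, s_obs)

# Predict where the opponent will be over the next second, with uncertainty.
t_future = np.linspace(2.0, 3.0, 11).reshape(-1, 1)
s_mean, s_std = gp.predict(t_future, return_std=True)
print(s_mean[-1], s_std[-1])  # overtaking sections are planned where uncertainty is low
```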
arxiv-666442 | 2410.04869 | Active Inference for Closed-loop transmit beamsteering in Fetal Doppler Ultrasound | <|reference_start|>Active Inference for Closed-loop transmit beamsteering in Fetal Doppler Ultrasound: Doppler ultrasound is widely used to monitor fetal heart rate during labor and pregnancy. Unfortunately, it is highly sensitive to fetal and maternal movements, which can cause the displacement of the fetal heart with respect to the ultrasound beam, in turn reducing the Doppler signal-to-noise ratio and leading to erratic, noisy, or missing heart rate readings. To tackle this issue, we augment the conventional Doppler ultrasound system with a rational agent that autonomously steers the ultrasound beam to track the position of the fetal heart. The proposed cognitive ultrasound system leverages a sequential Monte Carlo method to infer the fetal heart position from the power Doppler signal, and employs a greedy information-seeking criterion to select the steering angle that minimizes the positional uncertainty for future timesteps. The fetal heart rate is then calculated using the Doppler signal at the estimated fetal heart position. Our results show that the system can accurately track the fetal heart position across challenging signal-to-noise ratio scenarios, mainly thanks to its dynamic transmit beam steering capability. Additionally, we find that optimizing the transmit beamsteering to minimize positional uncertainty also optimizes downstream heart rate estimation performance. In conclusion, this work showcases the power of closed-loop cognitive ultrasound in boosting the capabilities of traditional systems.<|reference_end|> | arxiv | @article{federici2024active,
title={Active Inference for Closed-loop transmit beamsteering in Fetal Doppler
Ultrasound},
author={Beatrice Federici, Ruud JG van Sloun, Massimo Mischi},
journal={arXiv preprint arXiv:2410.04869},
year={2024},
archivePrefix={arXiv},
eprint={2410.04869},
primaryClass={eess.SP cs.SY eess.SY}
} | federici2024active |
arxiv-666443 | 2410.04870 | On the Optimization and Generalization of Two-layer Transformers with Sign Gradient Descent | <|reference_start|>On the Optimization and Generalization of Two-layer Transformers with Sign Gradient Descent: The Adam optimizer is widely used for transformer optimization in practice, which makes understanding the underlying optimization mechanisms an important problem. However, due to Adam's complexity, theoretical analysis of how it optimizes transformers remains a challenging task. Fortunately, Sign Gradient Descent (SignGD) serves as an effective surrogate for Adam. Despite its simplicity, theoretical understanding of how SignGD optimizes transformers still lags behind. In this work, we study how SignGD optimizes a two-layer transformer -- consisting of a softmax attention layer with trainable query-key parameterization followed by a linear layer -- on a linearly separable noisy dataset. We identify four stages in the training dynamics, each exhibiting intriguing behaviors. Based on the training dynamics, we prove the fast convergence but poor generalization of the learned transformer on the noisy dataset. We also show that Adam behaves similarly to SignGD in terms of both optimization and generalization in this setting. Additionally, we find that the poor generalization of SignGD is not solely due to data noise, suggesting that both SignGD and Adam require high-quality data for real-world tasks. Finally, experiments on synthetic and real-world datasets empirically support our theoretical results.<|reference_end|> | arxiv | @article{li2024on,
title={On the Optimization and Generalization of Two-layer Transformers with
Sign Gradient Descent},
author={Bingrui Li, Wei Huang, Andi Han, Zhanpeng Zhou, Taiji Suzuki, Jun Zhu,
Jianfei Chen},
journal={arXiv preprint arXiv:2410.04870},
year={2024},
archivePrefix={arXiv},
eprint={2410.04870},
primaryClass={cs.LG stat.ML}
} | li2024on |
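For reference, the SignGD update studied in the record above replaces the raw gradient with its sign. A minimal PyTorch sketch follows; the toy model and step size are arbitrary choices, not from the paper.

```python
# Minimal sketch of the Sign Gradient Descent (SignGD) update used as a
# surrogate for Adam: each parameter moves a fixed step in the sign of its gradient.
import torch

@torch.no_grad()
def signgd_step(params, lr=1e-3):
    for p in params:
        if p.grad is not None:
            p -= lr * torch.sign(p.grad)

# Toy usage on an arbitrary two-layer model.
model = torch.nn.Sequential(torch.nn.Linear(10, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
signgd_step(model.parameters(), lr=1e-2)
```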
arxiv-666444 | 2410.04871 | Distributed Collaborative User Positioning for Cell-Free Massive MIMO with Multi-Agent Reinforcement Learning | <|reference_start|>Distributed Collaborative User Positioning for Cell-Free Massive MIMO with Multi-Agent Reinforcement Learning: In this paper, we investigate a cell-free massive multiple-input multiple-output system, which exhibits great potential in enhancing the capabilities of next-generation mobile communication networks. We first study the distributed positioning problem to lay the groundwork for solving resource allocation and interference management issues. Instead of relying on computationally and spatially complex fingerprint positioning methods, we propose a novel two-stage distributed collaborative positioning architecture with multi-agent reinforcement learning (MARL) network, consisting of a received signal strength-based preliminary positioning network and an angle of arrival-based auxiliary correction network. Our experimental results demonstrate that the two-stage distributed collaborative user positioning architecture can outperform conventional fingerprint positioning methods in terms of positioning accuracy.<|reference_end|> | arxiv | @article{liu2024distributed,
title={Distributed Collaborative User Positioning for Cell-Free Massive MIMO
with Multi-Agent Reinforcement Learning},
author={Ziheng Liu, Jiayi Zhang, Enyu Shi, Yiyang Zhu, Derrick Wing Kwan Ng,
and Bo Ai},
journal={arXiv preprint arXiv:2410.04871},
year={2024},
archivePrefix={arXiv},
eprint={2410.04871},
primaryClass={cs.IT eess.SP math.IT}
} | liu2024distributed |
arxiv-666445 | 2410.04873 | TeX-NeRF: Neural Radiance Fields from Pseudo-TeX Vision | <|reference_start|>TeX-NeRF: Neural Radiance Fields from Pseudo-TeX Vision: Neural radiance fields (NeRF) has gained significant attention for its exceptional visual effects. However, most existing NeRF methods reconstruct 3D scenes from RGB images captured by visible light cameras. In practical scenarios like darkness, low light, or bad weather, visible light cameras become ineffective. Therefore, we propose TeX-NeRF, a 3D reconstruction method using only infrared images, which introduces the object material emissivity as prior knowledge, preprocesses the infrared images using Pseudo-TeX vision, and maps the temperatures (T), emissivities (e), and textures (X) of the scene into the saturation (S), hue (H), and value (V) channels of the HSV color space, respectively. Novel view synthesis using the processed images has yielded excellent results. Additionally, we introduce 3D-TeX Datasets, the first dataset comprising infrared images and their corresponding Pseudo-TeX vision images. Experiments demonstrate that our method not only matches the quality of scene reconstruction achieved with high-quality RGB images but also provides accurate temperature estimations for objects in the scene.<|reference_end|> | arxiv | @article{zhong2024tex-nerf:,
title={TeX-NeRF: Neural Radiance Fields from Pseudo-TeX Vision},
author={Chonghao Zhong, Chao Xu},
journal={arXiv preprint arXiv:2410.04873},
year={2024},
archivePrefix={arXiv},
eprint={2410.04873},
primaryClass={cs.CV cs.RO}
} | zhong2024tex-nerf: |
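The Pseudo-TeX mapping described in the record above (T -> S, e -> H, X -> V, then HSV to RGB) can be sketched in a few lines of array code. The normalization ranges below are assumptions for illustration only, not values from the paper.

```python
# Hedged sketch of a Pseudo-TeX style mapping: temperature -> saturation,
# emissivity -> hue, texture -> value, then HSV -> RGB for visualization.
import numpy as np
from matplotlib.colors import hsv_to_rgb

def pseudo_tex(temperature, emissivity, texture, t_min=250.0, t_max=350.0):
    """All inputs are HxW arrays; returns an HxWx3 RGB image in [0, 1]."""
    s = np.clip((temperature - t_min) / (t_max - t_min), 0.0, 1.0)  # T -> S
    h = np.clip(emissivity, 0.0, 1.0)                               # e -> H
    v = np.clip(texture, 0.0, 1.0)                                  # X -> V
    hsv = np.stack([h, s, v], axis=-1)
    return hsv_to_rgb(hsv)

rgb = pseudo_tex(
    temperature=np.random.uniform(260, 340, (64, 64)),
    emissivity=np.random.uniform(0.1, 0.95, (64, 64)),
    texture=np.random.uniform(0, 1, (64, 64)),
)
print(rgb.shape)  # (64, 64, 3)
```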
arxiv-666446 | 2410.04878 | Leveraging Grammar Induction for Language Understanding and Generation | <|reference_start|>Leveraging Grammar Induction for Language Understanding and Generation: Grammar induction has made significant progress in recent years. However, it is not clear how the application of induced grammar could enhance practical performance in downstream tasks. In this work, we introduce an unsupervised grammar induction method for language understanding and generation. We construct a grammar parser to induce constituency structures and dependency relations, which is simultaneously trained on downstream tasks without additional syntax annotations. The induced grammar features are subsequently incorporated into Transformer as a syntactic mask to guide self-attention. We evaluate and apply our method to multiple machine translation tasks and natural language understanding tasks. Our method demonstrates superior performance compared to the original Transformer and other models enhanced with external parsers. Experimental results indicate that our method is effective in both from-scratch and pre-trained scenarios. Additionally, our research highlights the contribution of explicitly modeling the grammatical structure of texts to neural network models.<|reference_end|> | arxiv | @article{kai2024leveraging,
title={Leveraging Grammar Induction for Language Understanding and Generation},
author={Jushi Kai, Shengyuan Hou, Yusheng Huang, Zhouhan Lin},
journal={arXiv preprint arXiv:2410.04878},
year={2024},
archivePrefix={arXiv},
eprint={2410.04878},
primaryClass={cs.CL cs.AI}
} | kai2024leveraging |
arxiv-666447 | 2410.04880 | Improved detection of discarded fish species through BoxAL active learning | <|reference_start|>Improved detection of discarded fish species through BoxAL active learning: In recent years, powerful data-driven deep-learning techniques have been developed and applied for automated catch registration. However, these methods depend on labelled data, which is time-consuming, labour-intensive, and expensive to collect and requires expert knowledge. In this study, we present an active learning technique, named BoxAL, which includes estimation of epistemic certainty of the Faster R-CNN object-detection model. The method allows selecting the most uncertain training images from an unlabeled pool, which are then used to train the object-detection model. To evaluate the method, we used an open-source image dataset obtained with a dedicated image-acquisition system developed for commercial trawlers targeting demersal species. We demonstrated that our approach reaches the same object-detection performance as random sampling with 400 fewer labelled images. Moreover, the mean AP score was significantly higher at the last training iteration with 1100 training images, specifically 39.0±1.6 for certainty-based sampling versus 34.8±1.8 for random sampling. Additionally, we showed that epistemic certainty is a suitable criterion for sampling images that the current iteration of the model cannot yet handle. Our study additionally showed that the newly sampled data are more valuable for training than the remaining unlabeled data. Our software is available at https://github.com/pieterblok/boxal.<|reference_end|> | arxiv | @article{sokolova2024improved,
title={Improved detection of discarded fish species through BoxAL active
learning},
author={Maria Sokolova, Pieter M. Blok, Angelo Mencarelli, Arjan Vroegop,
Aloysius van Helmond, and Gert Kootstra},
journal={arXiv preprint arXiv:2410.04880},
year={2024},
archivePrefix={arXiv},
eprint={2410.04880},
primaryClass={cs.CV}
} | sokolova2024improved |
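The certainty-based sampling loop in the record above reduces to a standard active-learning cycle. In the sketch below, `train_detector`, `epistemic_certainty`, and `oracle_label` are hypothetical stand-ins for the Faster R-CNN training, the uncertainty estimate, and human annotation; only the selection logic reflects what the abstract describes.

```python
# Editor's sketch of a certainty-based active-learning loop in the spirit of BoxAL.
def active_learning_loop(labeled, unlabeled, oracle_label,
                         train_detector, epistemic_certainty,
                         batch_size=100, iterations=10):
    model = train_detector(labeled)
    for _ in range(iterations):
        if not unlabeled:
            break
        # rank the unlabeled pool by model certainty (lowest, i.e. most uncertain, first)
        ranked = sorted(unlabeled, key=lambda img: epistemic_certainty(model, img))
        query = ranked[:batch_size]
        for img in query:
            labeled.append((img, oracle_label(img)))  # human annotation step
            unlabeled.remove(img)
        model = train_detector(labeled)               # retrain on the enlarged set
    return model
```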
arxiv-666448 | 2410.04883 | Improving the Sampling Strategy in KernelSHAP | <|reference_start|>Improving the Sampling Strategy in KernelSHAP: Shapley values are a popular model-agnostic explanation framework for explaining predictions made by complex machine learning models. The framework provides feature contribution scores that sum to the predicted response and represent each feature's importance. The computation of exact Shapley values is computationally expensive due to estimating an exponential amount of non-trivial conditional expectations. The KernelSHAP framework enables us to approximate the Shapley values using a sampled subset of weighted conditional expectations. We propose three main novel contributions: a stabilizing technique to reduce the variance of the weights in the current state-of-the-art strategy, a novel weighing scheme that corrects the Shapley kernel weights based on sampled subsets, and a straightforward strategy that includes the important subsets and integrates them with the corrected Shapley kernel weights. We compare these new approximation strategies against existing ones by evaluating their Shapley value accuracy as a function of the number of subsets. The results demonstrate that our sampling strategies significantly enhance the accuracy of the approximated Shapley value explanations, making them more reliable in practical applications. This work provides valuable insights and practical recommendations for researchers and practitioners seeking to implement Shapley value-based explainability of their models.<|reference_end|> | arxiv | @article{olsen2024improving,
title={Improving the Sampling Strategy in KernelSHAP},
author={Lars Henry Berge Olsen and Martin Jullum},
journal={arXiv preprint arXiv:2410.04883},
year={2024},
archivePrefix={arXiv},
eprint={2410.04883},
primaryClass={cs.LG stat.ML}
} | olsen2024improving |
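For context on the record above, the Shapley kernel weight that KernelSHAP assigns to a coalition of size s out of M features is w(s) = (M-1)/(C(M,s)*s*(M-s)); the paper's corrected weights build on this quantity. A small sketch:

```python
# The standard (unnormalized) Shapley kernel weight for a coalition of size s
# out of M features, as used by KernelSHAP.
from math import comb

def shapley_kernel_weight(M: int, s: int) -> float:
    if s == 0 or s == M:
        return float("inf")  # full and empty coalitions are handled separately
    return (M - 1) / (comb(M, s) * s * (M - s))

M = 8
weights = {s: shapley_kernel_weight(M, s) for s in range(1, M)}
print(weights)  # the smallest and largest coalitions receive the largest weights
```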
arxiv-666449 | 2410.04884 | Patch is Enough: Naturalistic Adversarial Patch against Vision-Language Pre-training Models | <|reference_start|>Patch is Enough: Naturalistic Adversarial Patch against Vision-Language Pre-training Models: Visual language pre-training (VLP) models have demonstrated significant success across various domains, yet they remain vulnerable to adversarial attacks. Addressing these adversarial vulnerabilities is crucial for enhancing security in multimodal learning. Traditionally, adversarial methods targeting VLP models involve simultaneously perturbing images and text. However, this approach faces notable challenges: first, adversarial perturbations often fail to translate effectively into real-world scenarios; second, direct modifications to the text are conspicuously visible. To overcome these limitations, we propose a novel strategy that exclusively employs image patches for attacks, thus preserving the integrity of the original text. Our method leverages prior knowledge from diffusion models to enhance the authenticity and naturalness of the perturbations. Moreover, to optimize patch placement and improve the efficacy of our attacks, we utilize the cross-attention mechanism, which encapsulates intermodal interactions by generating attention maps to guide strategic patch placements. Comprehensive experiments conducted in a white-box setting for image-to-text scenarios reveal that our proposed method significantly outperforms existing techniques, achieving a 100% attack success rate. Additionally, it demonstrates commendable performance in transfer tasks involving text-to-image configurations.<|reference_end|> | arxiv | @article{kong2024patch,
title={Patch is Enough: Naturalistic Adversarial Patch against Vision-Language
Pre-training Models},
author={Dehong Kong, Siyuan Liang, Xiaopeng Zhu, Yuansheng Zhong and Wenqi Ren},
journal={arXiv preprint arXiv:2410.04884},
year={2024},
archivePrefix={arXiv},
eprint={2410.04884},
primaryClass={cs.CV cs.AI}
} | kong2024patch |
arxiv-666450 | 2410.04885 | The error of Chebyshev approximations on shrinking domains | <|reference_start|>The error of Chebyshev approximations on shrinking domains: Previous works show convergence of rational Chebyshev approximants to the Pad\'e approximant as the underlying domain of approximation shrinks to the origin. In the present work, the asymptotic error and interpolation properties of rational Chebyshev approximants are studied in such settings. Namely, the point-wise error of Chebyshev approximants is shown to approach a Chebyshev polynomial multiplied by the asymptotically leading order term of the error of the Pad\'e approximant, and similar results hold true for the uniform error and Chebyshev constants. Moreover, rational Chebyshev approximants are shown to attain interpolation nodes which approach scaled Chebyshev nodes in the limit. Main results are formulated for interpolatory best approximations and apply for complex Chebyshev approximation as well as real Chebyshev approximation to real functions and unitary best approximation to the exponential function.<|reference_end|> | arxiv | @article{jawecki2024the,
title={The error of Chebyshev approximations on shrinking domains},
author={Tobias Jawecki},
journal={arXiv preprint arXiv:2410.04885},
year={2024},
archivePrefix={arXiv},
eprint={2410.04885},
primaryClass={math.NA cs.NA}
} | jawecki2024the |
arxiv-666451 | 2410.04886 | High Information Density and Low Coverage Data Storage in DNA with Efficient Channel Coding Schemes | <|reference_start|>High Information Density and Low Coverage Data Storage in DNA with Efficient Channel Coding Schemes: DNA-based data storage has been attracting significant attention due to its extremely high density, low power consumption, and long duration compared to traditional data storage mediums. Despite the recent advancements in DNA data storage technology, significant challenges remain. In particular, various types of errors can occur during the processes of DNA synthesis, storage, and sequencing, including substitution errors, insertion errors, and deletion errors. Furthermore, the entire oligo may be lost. In this work, we report a DNA-based data storage architecture that incorporates efficient channel coding schemes, including different types of error-correcting codes (ECCs) and constrained codes, for both the inner coding and outer coding for the DNA data storage channel. We also carry out large scale experiments to validate our proposed DNA data storage architecture. Specifically, 1.61 and 1.69 MB data are encoded into 30,000 oligos each, with information densities of 1.731 and 1.815, respectively. It has been found that the stored information can be fully recovered without any error at average coverages 4.5 and 6.0, respectively. Compared to previous experimental studies, our architecture achieves higher information density and lower coverage, demonstrating the efficiency of the proposed channel coding schemes.<|reference_end|> | arxiv | @article{ding2024high,
title={High Information Density and Low Coverage Data Storage in DNA with
Efficient Channel Coding Schemes},
author={Yi Ding, Xuan He, Tuan Thanh Nguyen, Wentu Song, Zohar Yakhini, Eitan
Yaakobi, Linqiang Pan, Xiaohu Tang, Kui Cai},
journal={arXiv preprint arXiv:2410.04886},
year={2024},
archivePrefix={arXiv},
eprint={2410.04886},
primaryClass={cs.IT math.IT}
} | ding2024high |
arxiv-666452 | 2410.04887 | Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse | <|reference_start|>Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse: Deep neural networks (DNNs) at convergence consistently represent the training data in the last layer via a highly symmetric geometric structure referred to as neural collapse. This empirical evidence has spurred a line of theoretical research aimed at proving the emergence of neural collapse, mostly focusing on the unconstrained features model. Here, the features of the penultimate layer are free variables, which makes the model data-agnostic and, hence, puts into question its ability to capture DNN training. Our work addresses the issue, moving away from unconstrained features and studying DNNs that end with at least two linear layers. We first prove generic guarantees on neural collapse that assume (i) low training error and balancedness of the linear layers (for within-class variability collapse), and (ii) bounded conditioning of the features before the linear part (for orthogonality of class-means, as well as their alignment with weight matrices). We then show that such assumptions hold for gradient descent training with weight decay: (i) for networks with a wide first layer, we prove low training error and balancedness, and (ii) for solutions that are either nearly optimal or stable under large learning rates, we additionally prove the bounded conditioning. Taken together, our results are the first to show neural collapse in the end-to-end training of DNNs.<|reference_end|> | arxiv | @article{jacot2024wide,
title={Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural
Collapse},
author={Arthur Jacot, Peter S\'uken\'ik, Zihan Wang, Marco Mondelli},
journal={arXiv preprint arXiv:2410.04887},
year={2024},
archivePrefix={arXiv},
eprint={2410.04887},
primaryClass={cs.LG math.OC stat.ML}
} | jacot2024wide |
arxiv-666453 | 2410.04889 | D-PoSE: Depth as an Intermediate Representation for 3D Human Pose and Shape Estimation | <|reference_start|>D-PoSE: Depth as an Intermediate Representation for 3D Human Pose and Shape Estimation: We present D-PoSE (Depth as an Intermediate Representation for 3D Human Pose and Shape Estimation), a one-stage method that estimates human pose and SMPL-X shape parameters from a single RGB image. Recent works use larger models with transformer backbones and decoders to improve the accuracy in human pose and shape (HPS) benchmarks. D-PoSE proposes a vision based approach that uses the estimated human depth-maps as an intermediate representation for HPS and leverages training with synthetic data and the ground-truth depth-maps provided with them for depth supervision during training. Although trained on synthetic datasets, D-PoSE achieves state-of-the-art performance on the real-world benchmark datasets, EMDB and 3DPW. Despite its simple lightweight design and the CNN backbone, it outperforms ViT-based models that have a number of parameters that is larger by almost an order of magnitude. D-PoSE code is available at: https://github.com/nvasilik/D-PoSE<|reference_end|> | arxiv | @article{vasilikopoulos2024d-pose:,
title={D-PoSE: Depth as an Intermediate Representation for 3D Human Pose and
Shape Estimation},
author={Nikolaos Vasilikopoulos, Drosakis Drosakis, Antonis Argyros},
journal={arXiv preprint arXiv:2410.04889},
year={2024},
archivePrefix={arXiv},
eprint={2410.04889},
primaryClass={cs.CV}
} | vasilikopoulos2024d-pose: |
arxiv-666454 | 2410.04891 | Low-Rank Continual Personalization of Diffusion Models | <|reference_start|>Low-Rank Continual Personalization of Diffusion Models: Recent personalization methods for diffusion models, such as Dreambooth, allow fine-tuning pre-trained models to generate new concepts. However, applying these techniques across multiple tasks in order to include, e.g., several new objects or styles, leads to mutual interference between their adapters. While recent studies attempt to mitigate this issue by combining trained adapters across tasks after fine-tuning, we adopt a more rigorous regime and investigate the personalization of large diffusion models under a continual learning scenario, where such interference leads to catastrophic forgetting of previous knowledge. To that end, we evaluate the na\"ive continual fine-tuning of customized models and compare this approach with three methods for consecutive adapters' training: sequentially merging new adapters, merging orthogonally initialized adapters, and updating only relevant parameters according to the task. In our experiments, we show that the proposed approaches mitigate forgetting when compared to the na\"ive approach.<|reference_end|> | arxiv | @article{staniszewski2024low-rank,
title={Low-Rank Continual Personalization of Diffusion Models},
author={{\L}ukasz Staniszewski, Katarzyna Zaleska, Kamil Deja},
journal={arXiv preprint arXiv:2410.04891},
year={2024},
archivePrefix={arXiv},
eprint={2410.04891},
primaryClass={cs.LG}
} | staniszewski2024low-rank |
arxiv-666455 | 2410.04897 | Complexity results for a cops and robber game on directed graphs | <|reference_start|>Complexity results for a cops and robber game on directed graphs: We investigate a cops and robber game on directed graphs, where the robber moves along the arcs of the graph, while the cops can select any position at each time step. Our main focus is on the cop number: the minimum number of cops required to guarantee the capture of the robber. We prove that deciding whether the cop number of a digraph is equal to 1 is NP-hard, whereas this is decidable in polynomial time for tournaments. Furthermore, we show that computing the cop number for general digraphs is fixed parameter tractable when parameterized by a generalization of vertex cover. However, for tournaments, tractability is achieved with respect to the minimum size of a feedback vertex set. Among our findings, we prove that the cop number of a digraph is equal to that of its reverse digraph, and we draw connections to the matrix mortality problem.<|reference_end|> | arxiv | @article{ben-ameur2024complexity,
title={Complexity results for a cops and robber game on directed graphs},
author={Walid Ben-Ameur and Alessandro Maddaloni},
journal={arXiv preprint arXiv:2410.04897},
year={2024},
archivePrefix={arXiv},
eprint={2410.04897},
primaryClass={cs.CC cs.DM math.CO}
} | ben-ameur2024complexity |
arxiv-666456 | 2410.04899 | Working with Mixed Reality in Public: Effects of Virtual Display Layouts on Productivity, Feeling of Safety, and Social Acceptability | <|reference_start|>Working with Mixed Reality in Public: Effects of Virtual Display Layouts on Productivity, Feeling of Safety, and Social Acceptability: Nowadays, Mixed Reality (MR) headsets are a game-changer for knowledge work. Unlike stationary monitors, MR headsets allow users to work with large virtual displays anywhere they wear the headset, whether in a professional office, a public setting like a cafe, or a quiet space like a library. This study compares four different layouts (eye level-close, eye level-far, below eye level-close, below eye level-far) of virtual displays regarding feelings of safety, perceived productivity, and social acceptability when working with MR in public. We test which layout is most preferred by users and seek to understand which factors affect users' layout preferences. The aim is to derive useful insights for designing better MR layouts. A field study in a public library was conducted using a within-subject design. While the participants interact with a layout, they are asked to work on a planning task. The results from a repeated measure ANOVA show a statistically significant effect on productivity but not on safety and social acceptability. Additionally, we report preferences expressed by the users regarding the layouts and using MR in public.<|reference_end|> | arxiv | @article{kaeder2024working,
title={Working with Mixed Reality in Public: Effects of Virtual Display Layouts
on Productivity, Feeling of Safety, and Social Acceptability},
author={Janne Kaeder, Maurizio Vergari, Verena Biener, Tanja Koji\'c, Jens
Grubert, Sebastian M\"oller, Jan-Niklas Voigt-Antons},
journal={arXiv preprint arXiv:2410.04899},
year={2024},
archivePrefix={arXiv},
eprint={2410.04899},
primaryClass={cs.HC}
} | kaeder2024working |
arxiv-666457 | 2410.04905 | Equations in wreath products | <|reference_start|>Equations in wreath products: We survey solvability of equations in wreath products of groups, and prove that the quadratic diophantine problem is solvable in wreath products of Abelian groups. We consider the related question of determining commutator width, and prove that the quadratic diophantine problem is also solvable in Baumslag's finitely presented metabelian group. This text is a short version of an extensive article by the first-named authors.<|reference_end|> | arxiv | @article{bartholdi2024equations,
title={Equations in wreath products},
author={Laurent Bartholdi, Ruiwen Dong, Leon Pernak, Jan Philipp W\"achter},
journal={arXiv preprint arXiv:2410.04905},
year={2024},
archivePrefix={arXiv},
eprint={2410.04905},
primaryClass={math.GR cs.FL math.LO}
} | bartholdi2024equations |
arxiv-666458 | 2410.04906 | Art2Mus: Bridging Visual Arts and Music through Cross-Modal Generation | <|reference_start|>Art2Mus: Bridging Visual Arts and Music through Cross-Modal Generation: Artificial Intelligence and generative models have revolutionized music creation, with many models leveraging textual or visual prompts for guidance. However, existing image-to-music models are limited to simple images, lacking the capability to generate music from complex digitized artworks. To address this gap, we introduce $\mathcal{A}\textit{rt2}\mathcal{M}\textit{us}$, a novel model designed to create music from digitized artworks or text inputs. $\mathcal{A}\textit{rt2}\mathcal{M}\textit{us}$ extends the AudioLDM~2 architecture, a text-to-audio model, and employs our newly curated datasets, created via ImageBind, which pair digitized artworks with music. Experimental results demonstrate that $\mathcal{A}\textit{rt2}\mathcal{M}\textit{us}$ can generate music that resonates with the input stimuli. These findings suggest promising applications in multimedia art, interactive installations, and AI-driven creative tools.<|reference_end|> | arxiv | @article{rinaldi2024art2mus:,
title={Art2Mus: Bridging Visual Arts and Music through Cross-Modal Generation},
author={Ivan Rinaldi, Nicola Fanelli, Giovanna Castellano, Gennaro Vessio},
journal={arXiv preprint arXiv:2410.04906},
year={2024},
archivePrefix={arXiv},
eprint={2410.04906},
primaryClass={cs.MM cs.CV cs.SD eess.AS}
} | rinaldi2024art2mus: |
arxiv-666459 | 2410.04907 | Decomposition Polyhedra of Piecewise Linear Functions | <|reference_start|>Decomposition Polyhedra of Piecewise Linear Functions: In this paper we contribute to the frequently studied question of how to decompose a continuous piecewise linear (CPWL) function into a difference of two convex CPWL functions. Every CPWL function has infinitely many such decompositions, but for applications in optimization and neural network theory, it is crucial to find decompositions with as few linear pieces as possible. This is a highly challenging problem, as we further demonstrate by disproving a recently proposed approach by Tran and Wang [Minimal representations of tropical rational functions. Algebraic Statistics, 15(1):27-59, 2024]. To make the problem more tractable, we propose to fix an underlying polyhedral complex determining the possible locus of nonlinearity. Under this assumption, we prove that the set of decompositions forms a polyhedron that arises as intersection of two translated cones. We prove that irreducible decompositions correspond to the bounded faces of this polyhedron and minimal solutions must be vertices. We then identify cases with a unique minimal decomposition, and illustrate how our insights have consequences in the theory of submodular functions. Finally, we improve upon previous constructions of neural networks for a given convex CPWL function and apply our framework to obtain results in the nonconvex case.<|reference_end|> | arxiv | @article{brandenburg2024decomposition,
title={Decomposition Polyhedra of Piecewise Linear Functions},
author={Marie-Charlotte Brandenburg and Moritz Grillo and Christoph Hertrich},
journal={arXiv preprint arXiv:2410.04907},
year={2024},
archivePrefix={arXiv},
eprint={2410.04907},
primaryClass={math.CO cs.DM cs.LG cs.NE math.OC}
} | brandenburg2024decomposition |
arxiv-666460 | 2410.04909 | Gibbs state preparation for commuting Hamiltonian: Mapping to classical Gibbs sampling | <|reference_start|>Gibbs state preparation for commuting Hamiltonian: Mapping to classical Gibbs sampling: Gibbs state preparation, or Gibbs sampling, is a key computational technique extensively used in physics, statistics, and other scientific fields. Recent efforts for designing fast mixing Gibbs samplers for quantum Hamiltonians have largely focused on commuting local Hamiltonians (CLHs), a non-trivial subclass of Hamiltonians which include highly entangled systems such as the Toric code and quantum double model. Most previous Gibbs samplers relied on simulating the Davies generator, which is a Lindbladian associated with the thermalization process in nature. Instead of using the Davies generator, we design a different Gibbs sampler for various CLHs by giving a reduction to classical Hamiltonians, in the sense that one can efficiently prepare the Gibbs state for some CLH $H$ on a quantum computer as long as one can efficiently do classical Gibbs sampling for the corresponding classical Hamiltonian $H^{(c)}$. We demonstrate that our Gibbs sampler is able to replicate state-of-the-art results as well as prepare the Gibbs state in regimes which were previously unknown, such as the low temperature region, as long as there exists fast mixing Gibbs samplers for the corresponding classical Hamiltonians. Our reductions are as follows. - If $H$ is a 2-local qudit CLH, then $H^{(c)}$ is a 2-local qudit classical Hamiltonian. - If $H$ is a 4-local qubit CLH on 2D lattice and there are no classical qubits, then $H^{(c)}$ is a 2-local qudit classical Hamiltonian on a planar graph. As an example, our algorithm can prepare the Gibbs state for the (defected) Toric code at any non-zero temperature in $\mathcal O(n^2)$ time. - If $H$ is a 4-local qubit CLH on 2D lattice and there are classical qubits, assuming that quantum terms are uniformly correctable, then $H^{(c)}$ is a constant-local classical Hamiltonian.<|reference_end|> | arxiv | @article{hwang2024gibbs,
title={Gibbs state preparation for commuting Hamiltonian: Mapping to classical
Gibbs sampling},
author={Yeongwoo Hwang, Jiaqing Jiang},
journal={arXiv preprint arXiv:2410.04909},
year={2024},
archivePrefix={arXiv},
eprint={2410.04909},
primaryClass={quant-ph cs.CC cs.DS}
} | hwang2024gibbs |
arxiv-666461 | 2410.04912 | On the Capacity of the Peak Limited and Band Limited Channel | <|reference_start|>On the Capacity of the Peak Limited and Band Limited Channel: We investigate the Peak-Power Limited (PPL) Additive White Gaussian Noise (AWGN) channels in which the signal is band-limited, and its instantaneous power cannot exceed the power P. This model is relevant to many communication systems; however, its capacity is still unknown. We use a new geometry-based approach which evaluates the maximal entropy of the transmitted signal by assessing the volume of the body, in the space of Nyquist-rate samples, comprising all the points the transmitted signal can reach. This leads to lower bounds on capacity which are tight at high Signal to Noise Ratios (SNR). We found lower bounds on capacity, expressed as power efficiency, higher than the known ones by a factor of 3.35 and 8.6 in the low pass and the band pass cases respectively. The gap to the upper bounds is reduced to a power ratio of 1.5. The new bounds are numerically evaluated for FDMA-style signals with limited duration and also derived in the general case as a conjecture. The penalty in power efficiency due to the peak power constraint is roughly 6 dB at high SNR. Further research is needed to develop effective modulation and coding for this channel.<|reference_end|> | arxiv | @article{peleg2024on,
title={On the Capacity of the Peak Limited and Band Limited Channel},
author={Michael Peleg and Shlomo Shamai},
journal={arXiv preprint arXiv:2410.04912},
year={2024},
archivePrefix={arXiv},
eprint={2410.04912},
primaryClass={cs.IT math.IT}
} | peleg2024on |
arxiv-666462 | 2410.04915 | Shear-flexible geometrically exact beam element based on finite differences | <|reference_start|>Shear-flexible geometrically exact beam element based on finite differences: The proposed two-dimensional geometrically exact beam element extends our previous work by including the effects of shear distortion, and also of distributed forces and moments acting along the beam. The general flexibility-based formulation exploits the kinematic equations combined with the inverted sectional equations and the integrated form of equilibrium equations. The resulting set of three first-order differential equations is discretized by finite differences and the boundary value problem is converted into an initial value problem using the shooting method. Due to the special structure of the governing equations, the scheme remains explicit even though the first derivatives are approximated by central differences, leading to high accuracy. The main advantage of the adopted approach is that the error can be efficiently reduced by refining the computational grid used for finite differences at the element level while keeping the number of global degrees of freedom low. The efficiency is also increased by dealing directly with the global centerline coordinates and sectional inclination with respect to global axes as the primary unknowns at the element level, thereby avoiding transformations between local and global coordinates. Two formulations of the sectional equations, referred to as the Reissner and Ziegler models, are presented and compared. In particular, stability of an axially loaded beam/column is investigated and the connections to the Haringx and Engesser stability theories are discussed. Both approaches are tested in a series of numerical examples, which illustrate (i) high accuracy with quadratic convergence when the spatial discretization is refined, (ii) easy modeling of variable stiffness along the element (such as rigid joint offsets), (iii) efficient and accurate characterization of the buckling and post-buckling behavior.<|reference_end|> | arxiv | @article{jirasek2024shear-flexible,
title={Shear-flexible geometrically exact beam element based on finite
differences},
author={Milan Jirasek, Martin Horak, Emma La Malfa Ribolla, Chiara Bonvissuto},
journal={arXiv preprint arXiv:2410.04915},
year={2024},
archivePrefix={arXiv},
eprint={2410.04915},
primaryClass={math.NA cs.NA}
} | jirasek2024shear-flexible |
arxiv-666463 | 2410.04916 | Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models | <|reference_start|>Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models: With the trend of large graph learning models, business owners tend to employ a model provided by a third party to deliver business services to users. However, these models might be backdoored, and malicious users can submit trigger-embedded inputs to manipulate the model predictions. Current graph backdoor defenses have several limitations: 1) depending on model-related details, 2) requiring additional model fine-tuning, and 3) relying upon extra explainability tools, all of which are infeasible under stringent privacy policies. To address those limitations, we propose GraphProt, which allows resource-constrained business owners to rely on third parties to avoid backdoor attacks on GNN-based graph classifiers. Our GraphProt is model-agnostic and only relies on the input graph. The key insight is to leverage subgraph information for prediction, thereby mitigating backdoor effects induced by triggers. GraphProt comprises two components: clustering-based trigger elimination and robust subgraph ensemble. Specifically, we first propose feature-topology clustering that aims to remove most of the anomalous subgraphs (triggers). Moreover, we design subgraph sampling strategies based on feature-topology clustering to build a robust classifier via majority vote. Experimental results across three backdoor attacks and six benchmark datasets demonstrate that GraphProt significantly reduces the backdoor attack success rate while preserving the model accuracy on regular graph classification tasks.<|reference_end|> | arxiv | @article{yang2024defense-as-a-service:,
title={Defense-as-a-Service: Black-box Shielding against Backdoored Graph
Models},
author={Xiao Yang, Kai Zhou, Yuni Lai, Gaolei Li},
journal={arXiv preprint arXiv:2410.04916},
year={2024},
archivePrefix={arXiv},
eprint={2410.04916},
primaryClass={cs.LG cs.AI cs.CR}
} | yang2024defense-as-a-service: |
arxiv-666464 | 2410.04917 | Why am I seeing this: Democratizing End User Auditing for Online Content Recommendations | <|reference_start|>Why am I seeing this: Democratizing End User Auditing for Online Content Recommendations: Personalized recommendation systems tailor content based on user attributes, which are either provided or inferred from private data. Research suggests that users often hypothesize about reasons behind contents they encounter (e.g., "I see this jewelry ad because I am a woman"), but they lack the means to confirm these hypotheses due to the opaqueness of these systems. This hinders informed decision-making about privacy and system use and contributes to the lack of algorithmic accountability. To address these challenges, we introduce a new interactive sandbox approach. This approach creates sets of synthetic user personas and corresponding personal data that embody realistic variations in personal attributes, allowing users to test their hypotheses by observing how a website's algorithms respond to these personas. We tested the sandbox in the context of targeted advertisement. Our user study demonstrates its usability, usefulness, and effectiveness in empowering end-user auditing in a case study of targeting ads.<|reference_end|> | arxiv | @article{chen2024why,
title={Why am I seeing this: Democratizing End User Auditing for Online Content
Recommendations},
author={Chaoran Chen, Leyang Li, Luke Cao, Yanfang Ye, Tianshi Li, Yaxing Yao,
Toby Jia-jun Li},
journal={arXiv preprint arXiv:2410.04917},
year={2024},
archivePrefix={arXiv},
eprint={2410.04917},
primaryClass={cs.HC}
} | chen2024why |
arxiv-666465 | 2410.04920 | Cloud-Based Scheduling Mechanism for Scalable and Resource-Efficient Centralized Controllers | <|reference_start|>Cloud-Based Scheduling Mechanism for Scalable and Resource-Efficient Centralized Controllers: This paper proposes a novel approach to address the challenges of deploying complex robotic software in large-scale systems, i.e., Centralized Nonlinear Model Predictive Controllers (CNMPCs) for multi-agent systems. The proposed approach is based on a Kubernetes-based scheduling mechanism designed to monitor and optimize the operation of CNMPCs, while addressing the scalability limitation of centralized control schemes. By leveraging a cluster in a real-time cloud environment, the proposed mechanism effectively offloads the computational burden of CNMPCs. Through experiments, we have demonstrated the effectiveness and performance of our system, especially in scenarios where the number of robots is subject to change. Our work contributes to the advancement of cloud-based control strategies and lays the foundation for enhanced performance in cloud-controlled robotic systems.<|reference_end|> | arxiv | @article{seisa2024cloud-based,
title={Cloud-Based Scheduling Mechanism for Scalable and Resource-Efficient
Centralized Controllers},
author={Achilleas Santi Seisa, Sumeet Gajanan Satpute, George Nikolakopoulos},
journal={arXiv preprint arXiv:2410.04920},
year={2024},
archivePrefix={arXiv},
eprint={2410.04920},
primaryClass={cs.DC cs.MA cs.RO cs.SY eess.SY}
} | seisa2024cloud-based |
arxiv-666466 | 2410.04921 | Music-triggered fashion design: from songs to the metaverse | <|reference_start|>Music-triggered fashion design: from songs to the metaverse: The advent of ever-growing virtual realities presents unprecedented opportunities and challenges to different societies. Artistic collectives are no exception, and here we pay special attention to musicians. Compositions, lyrics and even show advertisements are constituents of a message that artists transmit about their reality. As such, artistic creations are ultimately linked to feelings and emotions, with aesthetics playing a crucial role when it comes to transmitting artists' intentions. In this context, we analyze how virtual realities can help to broaden the opportunities for musicians to connect with their audiences, by devising a dynamical fashion-design recommendation system inspired by sound stimuli. We present our first steps towards re-defining musical experiences in the metaverse, opening up alternative opportunities for artists to engage with both real and virtual audiences (e.g., machine-learning agents operating in the metaverse) in potentially broader ways.<|reference_end|> | arxiv | @article{delgado2024music-triggered,
title={Music-triggered fashion design: from songs to the metaverse},
author={Martina Delgado, Marta Llopart, Eva Sarabia, Sandra Taboada, Pol
Vierge, Fernando Vilariño, Joan Moya Kohler, Julieta Grimberg Golijov,
Matías Bilkis},
journal={arXiv preprint arXiv:2410.04921},
year={2024},
archivePrefix={arXiv},
eprint={2410.04921},
primaryClass={cs.HC cs.CY cs.SI}
} | delgado2024music-triggered |
arxiv-666467 | 2410.04923 | Integrated or Segregated? User Behavior Change after Cross-Party Interactions on Reddit | <|reference_start|>Integrated or Segregated? User Behavior Change after Cross-Party Interactions on Reddit: It has been a widely shared concern that social media reinforces echo chambers of like-minded users and exacerbate political polarization. While fostering interactions across party lines is recognized as an important strategy to break echo chambers, there is a lack of empirical evidence on whether users will actually become more integrated or instead more segregated following such interactions on real social media platforms. We fill this gap by inspecting how users change their community engagement after receiving a cross-party reply in the U.S. politics discussion on Reddit. More specifically, we investigate if they increase their activity in communities of the opposing party, or in communities of their own party. We find that receiving a cross-party reply to a comment in a non-partisan discussion space is not significantly associated with increased out-party subreddit activity, unless the comment itself is already a reply to another comment. Meanwhile, receiving a cross-party reply is significantly associated with increased in-party subreddit activity, but the effect is comparable to that of receiving a same-party reply. Our results reveal a highly conditional depolarization effect following cross-party interactions in spurring activity in out-party communities, which is likely part of a more general dynamic of feedback-boosted engagement.<|reference_end|> | arxiv | @article{xia2024integrated,
title={Integrated or Segregated? User Behavior Change after Cross-Party
Interactions on Reddit},
author={Yan Xia, Corrado Monti, Barbara Keller, Mikko Kivelä},
journal={arXiv preprint arXiv:2410.04923},
year={2024},
archivePrefix={arXiv},
eprint={2410.04923},
primaryClass={cs.SI cs.CY}
} | xia2024integrated |
arxiv-666468 | 2410.04925 | Intent Classification for Bank Chatbots through LLM Fine-Tuning | <|reference_start|>Intent Classification for Bank Chatbots through LLM Fine-Tuning: This study evaluates the application of large language models (LLMs) for intent classification within a chatbot with predetermined responses designed for banking industry websites. Specifically, the research examines the effectiveness of fine-tuning SlovakBERT compared to employing multilingual generative models, such as Llama 8b instruct and Gemma 7b instruct, in both their pre-trained and fine-tuned versions. The findings indicate that SlovakBERT outperforms the other models in terms of in-scope accuracy and out-of-scope false positive rate, establishing it as the benchmark for this application.<|reference_end|> | arxiv | @article{lajčinová2024intent,
title={Intent Classification for Bank Chatbots through LLM Fine-Tuning},
author={Bibiána Lajčinová, Patrik Valábek, Michal Spišiak},
journal={arXiv preprint arXiv:2410.04925},
year={2024},
archivePrefix={arXiv},
eprint={2410.04925},
primaryClass={cs.CL}
} | lajčinová2024intent |
arxiv-666469 | 2410.04927 | FELLAS: Enhancing Federated Sequential Recommendation with LLM as External Services | <|reference_start|>FELLAS: Enhancing Federated Sequential Recommendation with LLM as External Services: Federated sequential recommendation (FedSeqRec) has gained growing attention due to its ability to protect user privacy. Unfortunately, the performance of FedSeqRec is still unsatisfactory because the models used in FedSeqRec have to be lightweight to accommodate communication bandwidth and clients' on-device computational resource constraints. Recently, large language models (LLMs) have exhibited strong transferable and generalized language understanding abilities and therefore, in the NLP area, many downstream tasks now utilize LLMs as a service to achieve superior performance without constructing complex models. Inspired by this successful practice, we propose a generic FedSeqRec framework, FELLAS, which aims to enhance FedSeqRec by utilizing LLMs as an external service. Specifically, FELLAS employs an LLM server to provide both item-level and sequence-level representation assistance. The item-level representation service is queried by the central server to enrich the original ID-based item embedding with textual information, while the sequence-level representation service is accessed by each client. However, invoking the sequence-level representation service requires clients to send sequences to the external LLM server. To safeguard privacy, we implement dx-privacy satisfied sequence perturbation, which protects clients' sensitive data with guarantees. Additionally, a contrastive learning-based method is designed to transfer knowledge from the noisy sequence representation to clients' sequential recommendation models. Furthermore, to empirically validate the privacy protection capability of FELLAS, we propose two interacted item inference attacks. Extensive experiments conducted on three datasets with two widely used sequential recommendation models demonstrate the effectiveness and privacy-preserving capability of FELLAS.<|reference_end|> | arxiv | @article{yuan2024fellas:,
title={FELLAS: Enhancing Federated Sequential Recommendation with LLM as
External Services},
author={Wei Yuan, Chaoqun Yang, Guanhua Ye, Tong Chen, Quoc Viet Hung Nguyen,
Hongzhi Yin},
journal={arXiv preprint arXiv:2410.04927},
year={2024},
archivePrefix={arXiv},
eprint={2410.04927},
primaryClass={cs.IR}
} | yuan2024fellas: |
arxiv-666470 | 2410.04929 | Goal-Conditioned Terminal Value Estimation for Real-time and Multi-task Model Predictive Control | <|reference_start|>Goal-Conditioned Terminal Value Estimation for Real-time and Multi-task Model Predictive Control: While MPC enables nonlinear feedback control by solving an optimal control problem at each timestep, the computational burden tends to be significantly large, making it difficult to optimize a policy within the control period. To address this issue, one possible approach is to utilize terminal value learning to reduce computational costs. However, the learned value cannot be used for other tasks in situations where the task dynamically changes in the original MPC setup. In this study, we develop an MPC framework with goal-conditioned terminal value learning to achieve multitask policy optimization while reducing computational time. Furthermore, by using a hierarchical control structure that allows the upper-level trajectory planner to output appropriate goal-conditioned trajectories, we demonstrate that a robot model is able to generate diverse motions. We evaluate the proposed method on a bipedal inverted pendulum robot model and confirm that combining goal-conditioned terminal value learning with an upper-level trajectory planner enables real-time control; thus, the robot successfully tracks a target trajectory on sloped terrain.<|reference_end|> | arxiv | @article{morita2024goal-conditioned,
title={Goal-Conditioned Terminal Value Estimation for Real-time and Multi-task
Model Predictive Control},
author={Mitsuki Morita, Satoshi Yamamori, Satoshi Yagi, Norikazu Sugimoto, Jun
Morimoto},
journal={arXiv preprint arXiv:2410.04929},
year={2024},
archivePrefix={arXiv},
eprint={2410.04929},
primaryClass={cs.RO cs.LG cs.SY eess.SY}
} | morita2024goal-conditioned |
arxiv-666471 | 2410.04931 | The Role of Governments in Increasing Interconnected Post-Deployment Monitoring of AI | <|reference_start|>The Role of Governments in Increasing Interconnected Post-Deployment Monitoring of AI: Language-based AI systems are diffusing into society, bringing positive and negative impacts. Mitigating negative impacts depends on accurate impact assessments, drawn from an empirical evidence base that makes causal connections between AI usage and impacts. Interconnected post-deployment monitoring combines information about model integration and use, application use, and incidents and impacts. For example, inference time monitoring of chain-of-thought reasoning can be combined with long-term monitoring of sectoral AI diffusion, impacts and incidents. Drawing on information sharing mechanisms in other industries, we highlight example data sources and specific data points that governments could collect to inform AI risk management.<|reference_end|> | arxiv | @article{stein2024the,
title={The Role of Governments in Increasing Interconnected Post-Deployment
Monitoring of AI},
author={Merlin Stein, Jamie Bernardi, Connor Dunlop},
journal={arXiv preprint arXiv:2410.04931},
year={2024},
archivePrefix={arXiv},
eprint={2410.04931},
primaryClass={cs.CY cs.AI cs.HC}
} | stein2024the |
arxiv-666472 | 2410.04932 | OmniBooth: Learning Latent Control for Image Synthesis with Multi-modal Instruction | <|reference_start|>OmniBooth: Learning Latent Control for Image Synthesis with Multi-modal Instruction: We present OmniBooth, an image generation framework that enables spatial control with instance-level multi-modal customization. For all instances, the multimodal instruction can be described through text prompts or image references. Given a set of user-defined masks and associated text or image guidance, our objective is to generate an image, where multiple objects are positioned at specified coordinates and their attributes are precisely aligned with the corresponding guidance. This approach significantly expands the scope of text-to-image generation, and elevates it to a more versatile and practical dimension in controllability. In this paper, our core contribution lies in the proposed latent control signals, a high-dimensional spatial feature that provides a unified representation to integrate the spatial, textual, and image conditions seamlessly. The text condition extends ControlNet to provide instance-level open-vocabulary generation. The image condition further enables fine-grained control with personalized identity. In practice, our method empowers users with more flexibility in controllable generation, as users can choose multi-modal conditions from text or images as needed. Furthermore, thorough experiments demonstrate our enhanced performance in image synthesis fidelity and alignment across different tasks and datasets. Project page: https://len-li.github.io/omnibooth-web/<|reference_end|> | arxiv | @article{li2024omnibooth:,
title={OmniBooth: Learning Latent Control for Image Synthesis with Multi-modal
Instruction},
author={Leheng Li, Weichao Qiu, Xu Yan, Jing He, Kaiqiang Zhou, Yingjie Cai,
Qing Lian, Bingbing Liu, Ying-Cong Chen},
journal={arXiv preprint arXiv:2410.04932},
year={2024},
archivePrefix={arXiv},
eprint={2410.04932},
primaryClass={cs.CV}
} | li2024omnibooth: |
arxiv-666473 | 2410.04936 | Training Interactive Agent in Large FPS Game Map with Rule-enhanced Reinforcement Learning | <|reference_start|>Training Interactive Agent in Large FPS Game Map with Rule-enhanced Reinforcement Learning: In the realm of competitive gaming, 3D first-person shooter (FPS) games have gained immense popularity, prompting the development of game AI systems to enhance gameplay. However, deploying game AI in practical scenarios still poses challenges, particularly in large-scale and complex FPS games. In this paper, we focus on the practical deployment of game AI in the online multiplayer competitive 3D FPS game called Arena Breakout, developed by Tencent Games. We propose a novel gaming AI system named Private Military Company Agent (PMCA), which is interactable within a large game map and engages in combat with players while utilizing tactical advantages provided by the surrounding terrain. To address the challenges of navigation and combat in modern 3D FPS games, we introduce a method that combines navigation mesh (Navmesh) and shooting-rule with deep reinforcement learning (NSRL). The integration of Navmesh enhances the agent's global navigation capabilities while shooting behavior is controlled using rule-based methods to ensure controllability. NSRL employs a DRL model to predict when to enable the navigation mesh, resulting in a diverse range of behaviors for the game AI. Customized rewards for human-like behaviors are also employed to align PMCA's behavior with that of human players.<|reference_end|> | arxiv | @article{zhang2024training,
title={Training Interactive Agent in Large FPS Game Map with Rule-enhanced
Reinforcement Learning},
author={Chen Zhang, Huan Hu, Yuan Zhou, Qiyang Cao, Ruochen Liu, Wenya Wei,
Elvis S. Liu},
journal={arXiv preprint arXiv:2410.04936},
year={2024},
archivePrefix={arXiv},
eprint={2410.04936},
primaryClass={cs.AI}
} | zhang2024training |
arxiv-666474 | 2410.04939 | PRFusion: Toward Effective and Robust Multi-Modal Place Recognition with Image and Point Cloud Fusion | <|reference_start|>PRFusion: Toward Effective and Robust Multi-Modal Place Recognition with Image and Point Cloud Fusion: Place recognition plays a crucial role in the fields of robotics and computer vision, finding applications in areas such as autonomous driving, mapping, and localization. Place recognition identifies a place using query sensor data and a known database. One of the main challenges is to develop a model that can deliver accurate results while being robust to environmental variations. We propose two multi-modal place recognition models, namely PRFusion and PRFusion++. PRFusion utilizes global fusion with manifold metric attention, enabling effective interaction between features without requiring camera-LiDAR extrinsic calibrations. In contrast, PRFusion++ assumes the availability of extrinsic calibrations and leverages pixel-point correspondences to enhance feature learning on local windows. Additionally, both models incorporate neural diffusion layers, which enable reliable operation even in challenging environments. We verify the state-of-the-art performance of both models on three large-scale benchmarks. Notably, they outperform existing models by a substantial margin of +3.0 AR@1 on the demanding Boreas dataset. Furthermore, we conduct ablation studies to validate the effectiveness of our proposed methods. The codes are available at: https://github.com/sijieaaa/PRFusion<|reference_end|> | arxiv | @article{wang2024prfusion:,
title={PRFusion: Toward Effective and Robust Multi-Modal Place Recognition with
Image and Point Cloud Fusion},
author={Sijie Wang, Qiyu Kang, Rui She, Kai Zhao, Yang Song, Wee Peng Tay},
journal={arXiv preprint arXiv:2410.04939},
year={2024},
archivePrefix={arXiv},
eprint={2410.04939},
primaryClass={cs.CV}
} | wang2024prfusion: |
arxiv-666475 | 2410.04940 | Next state prediction gives rise to entangled, yet compositional representations of objects | <|reference_start|>Next state prediction gives rise to entangled, yet compositional representations of objects: Compositional representations are thought to enable humans to generalize across combinatorially vast state spaces. Models with learnable object slots, which encode information about objects in separate latent codes, have shown promise for this type of generalization but rely on strong architectural priors. Models with distributed representations, on the other hand, use overlapping, potentially entangled neural codes, and their ability to support compositional generalization remains underexplored. In this paper we examine whether distributed models can develop linearly separable representations of objects, like slotted models, through unsupervised training on videos of object interactions. We show that, surprisingly, models with distributed representations often match or outperform models with object slots in downstream prediction tasks. Furthermore, we find that linearly separable object representations can emerge without object-centric priors, with auxiliary objectives like next-state prediction playing a key role. Finally, we observe that distributed models' object representations are never fully disentangled, even if they are linearly separable: Multiple objects can be encoded through partially overlapping neural populations while still being highly separable with a linear classifier. We hypothesize that maintaining partially shared codes enables distributed models to better compress object dynamics, potentially enhancing generalization.<|reference_end|> | arxiv | @article{saanum2024next,
title={Next state prediction gives rise to entangled, yet compositional
representations of objects},
author={Tankred Saanum, Luca M. Schulze Buschoff, Peter Dayan, Eric Schulz},
journal={arXiv preprint arXiv:2410.04940},
year={2024},
archivePrefix={arXiv},
eprint={2410.04940},
primaryClass={cs.LG cs.CV}
} | saanum2024next |
arxiv-666476 | 2410.04941 | Detecting and Approximating Redundant Computational Blocks in Neural Networks | <|reference_start|>Detecting and Approximating Redundant Computational Blocks in Neural Networks: Deep neural networks often learn similar internal representations, both across different models and within their own layers. While inter-network similarities have enabled techniques such as model stitching and merging, intra-network similarities present new opportunities for designing more efficient architectures. In this paper, we investigate the emergence of these internal similarities across different layers in diverse neural architectures, showing that similarity patterns emerge independently of the dataset used. We introduce a simple metric, Block Redundancy, to detect redundant blocks, providing a foundation for future architectural optimization methods. Building on this, we propose Redundant Blocks Approximation (RBA), a general framework that identifies and approximates one or more redundant computational blocks using simpler transformations. We show that the transformation $\mathcal{T}$ between two representations can be efficiently computed in closed form, and that it suffices to replace the redundant blocks in the network. RBA reduces model parameters and time complexity while maintaining good performance. We validate our method on classification tasks in the vision domain using a variety of pretrained foundational models and datasets.<|reference_end|> | arxiv | @article{cannistraci2024detecting,
title={Detecting and Approximating Redundant Computational Blocks in Neural
Networks},
author={Irene Cannistraci, Emanuele Rodolà, Bastian Rieck},
journal={arXiv preprint arXiv:2410.04941},
year={2024},
archivePrefix={arXiv},
eprint={2410.04941},
primaryClass={cs.LG cs.AI}
} | cannistraci2024detecting |
arxiv-666477 | 2410.04943 | A posteriori error estimates for Schrödinger operators discretized with linear combinations of atomic orbitals | <|reference_start|>We establish guaranteed and practically computable a posteriori error bounds for source problems and eigenvalue problems involving linear Schrödinger operators with atom-centered potentials discretized with linear combinations of atomic orbitals. We show that the energy norm of the discretization error can be estimated by the dual energy norm of the residual, that further decomposes into atomic contributions, characterizing the error localized on atoms. Moreover, we show that the practical computation of the dual norms of atomic residuals involves diagonalizing radial Schrödinger operators which can easily be precomputed in practice. We provide numerical illustrations of the performance of such a posteriori analysis on several test cases, showing that the error bounds accurately estimate the error, and that the localized error components allow for optimized adaptive basis sets.<|reference_end|> | arxiv | @article{dusson2024a,
title={A posteriori error estimates for Schr{\"o}dinger operators discretized
with linear combinations of atomic orbitals},
author={Geneviève Dusson (LMB), Mi-Song Dupuy (LJLL (UMR_7598)),
Ioanna-Maria Lygatsika (CEA/DAM)},
journal={arXiv preprint arXiv:2410.04943},
year={2024},
archivePrefix={arXiv},
eprint={2410.04943},
primaryClass={math.NA cs.NA}
} | dusson2024a |
arxiv-666478 | 2410.04946 | Real-time Ship Recognition and Georeferencing for the Improvement of Maritime Situational Awareness | <|reference_start|>Real-time Ship Recognition and Georeferencing for the Improvement of Maritime Situational Awareness: In an era where maritime infrastructures are crucial, advanced situational awareness solutions are increasingly important. The use of optical camera systems can allow real-time usage of maritime footage. This thesis presents an investigation into leveraging deep learning and computer vision to advance real-time ship recognition and georeferencing for the improvement of maritime situational awareness. A novel dataset, ShipSG, is introduced, containing 3,505 images and 11,625 ship masks with corresponding class and geographic position. After an exploration of the state of the art, a custom real-time segmentation architecture, ScatYOLOv8+CBAM, is designed for the NVIDIA Jetson AGX Xavier embedded system. This architecture adds the 2D scattering transform and attention mechanisms to YOLOv8, achieving an mAP of 75.46% and a runtime of 25.3 ms per frame, outperforming state-of-the-art methods by over 5%. To improve small and distant ship recognition in high-resolution images on embedded systems, an enhanced slicing mechanism is introduced, improving mAP by 8% to 11%. Additionally, a georeferencing method is proposed, achieving positioning errors of 18 m for ships up to 400 m away and 44 m for ships between 400 m and 1200 m. The findings are also applied in real-world scenarios, such as the detection of abnormal ship behaviour, camera integrity assessment and 3D reconstruction. The approach of this thesis outperforms existing methods and provides a framework for integrating recognized and georeferenced ships into real-time systems, enhancing operational effectiveness and decision-making for maritime stakeholders. This thesis contributes to the maritime computer vision field by establishing a benchmark for ship segmentation and georeferencing research, demonstrating the viability of deep-learning-based recognition and georeferencing methods for real-time maritime monitoring.<|reference_end|> | arxiv | @article{perez2024real-time,
title={Real-time Ship Recognition and Georeferencing for the Improvement of
Maritime Situational Awareness},
author={Borja Carrillo Perez},
journal={Staats- und Universitaetsbibliothek Bremen (2024)},
year={2024},
doi={10.26092/elib/3265},
archivePrefix={arXiv},
eprint={2410.04946},
primaryClass={cs.CV cs.AI}
} | perez2024real-time |
arxiv-666479 | 2410.04949 | Leverage Knowledge Graph and Large Language Model for Law Article Recommendation: A Case Study of Chinese Criminal Law | <|reference_start|>Leverage Knowledge Graph and Large Language Model for Law Article Recommendation: A Case Study of Chinese Criminal Law: Court efficiency is vital for social stability. However, in most countries around the world, the grassroots courts face case backlogs, with decisions relying heavily on judicial personnel's cognitive labor, lacking intelligent tools to improve efficiency. To address this issue, we propose an efficient law article recommendation approach utilizing a Knowledge Graph (KG) and a Large Language Model (LLM). Firstly, we propose a Case-Enhanced Law Article Knowledge Graph (CLAKG) as a database to store current law statutes, historical case information, and correspondence between law articles and historical cases. Additionally, we introduce an automated CLAKG construction method based on LLM. On this basis, we propose a closed-loop law article recommendation method. Finally, through a series of experiments using judgment documents from the website "China Judgements Online", we have improved the accuracy of law article recommendation in cases from 0.549 to 0.694, demonstrating that our proposed method significantly outperforms baseline approaches.<|reference_end|> | arxiv | @article{chen2024leverage,
title={Leverage Knowledge Graph and Large Language Model for Law Article
Recommendation: A Case Study of Chinese Criminal Law},
author={Yongming Chen, Miner Chen, Ye Zhu, Juan Pei, Siyu Chen, Yu Zhou, Yi
Wang, Yifan Zhou, Hao Li, Songan Zhang},
journal={arXiv preprint arXiv:2410.04949},
year={2024},
archivePrefix={arXiv},
eprint={2410.04949},
primaryClass={cs.IR cs.AI}
} | chen2024leverage |
arxiv-666480 | 2410.04951 | A decade of DCASE: Achievements, practices, evaluations and future challenges | <|reference_start|>A decade of DCASE: Achievements, practices, evaluations and future challenges: This paper introduces briefly the history and growth of the Detection and Classification of Acoustic Scenes and Events (DCASE) challenge, workshop, research area and research community. Created in 2013 as a data evaluation challenge, DCASE has become a major research topic in the Audio and Acoustic Signal Processing area. Its success comes from a combination of factors: the challenge offers a large variety of tasks that are renewed each year; and the workshop offers a channel for dissemination of related work, engaging a young and dynamic community. At the same time, DCASE faces its own challenges, growing and expanding to different areas. One of the core principles of DCASE is open science and reproducibility: publicly available datasets, baseline systems, technical reports and workshop publications. While the DCASE challenge and workshop are independent of IEEE SPS, the challenge receives annual endorsement from the AASP TC, and the DCASE community contributes significantly to the ICASSP flagship conference and the success of SPS in many of its activities.<|reference_end|> | arxiv | @article{mesaros2024a,
title={A decade of DCASE: Achievements, practices, evaluations and future
challenges},
author={Annamaria Mesaros, Romain Serizel, Toni Heittola, Tuomas Virtanen,
Mark D. Plumbley},
journal={arXiv preprint arXiv:2410.04951},
year={2024},
archivePrefix={arXiv},
eprint={2410.04951},
primaryClass={eess.AS cs.SD}
} | mesaros2024a |
arxiv-666481 | 2410.04956 | Maximizing the practical achievability of quantum annealing attacks on factorization-based cryptography | <|reference_start|>Maximizing the practical achievability of quantum annealing attacks on factorization-based cryptography: This work focuses on quantum methods for cryptanalysis of schemes based on the integer factorization problem and the discrete logarithm problem. We demonstrate how to practically solve the largest instances of the factorization problem by improving an approach that combines quantum and classical computations, assuming the use of the best publicly available special-class quantum computer: the quantum annealer. We achieve new computational experiment results by solving the largest instance of the factorization problem ever announced as solved using quantum annealing, with a size of 29 bits. The core idea of the improved approach is to leverage known sub-exponential classical method to break the problem down into many smaller computations and perform the most critical ones on a quantum computer. This approach does not reduce the complexity class, but it assesses the pragmatic capabilities of an attacker. It also marks a step forward in the development of hybrid methods, which in practice may surpass classical methods in terms of efficiency sooner than purely quantum computations will.<|reference_end|> | arxiv | @article{żołnierczyk2024maximizing,
title={Maximizing the practical achievability of quantum annealing attacks on
factorization-based cryptography},
author={Olgierd Żołnierczyk},
journal={arXiv preprint arXiv:2410.04956},
year={2024},
archivePrefix={arXiv},
eprint={2410.04956},
primaryClass={cs.CR}
} | żołnierczyk2024maximizing |
arxiv-666482 | 2410.04959 | Failure-Proof Non-Contrastive Self-Supervised Learning | <|reference_start|>Failure-Proof Non-Contrastive Self-Supervised Learning: We identify sufficient conditions to avoid known failure modes, including representation, dimensional, cluster and intracluster collapses, occurring in non-contrastive self-supervised learning. Based on these findings, we propose a principled design for the projector and loss function. We theoretically demonstrate that this design introduces an inductive bias that promotes learning representations that are both decorrelated and clustered without explicitly enforcing these properties, leading to improved generalization. To the best of our knowledge, this is the first solution that achieves robust training with respect to these failure modes while guaranteeing enhanced generalization performance in downstream tasks. We validate our theoretical findings on image datasets including SVHN, CIFAR10, CIFAR100 and ImageNet-100, and show that our solution, dubbed FALCON, outperforms existing feature decorrelation and cluster-based self-supervised learning methods in terms of generalization to clustering and linear classification tasks.<|reference_end|> | arxiv | @article{sansone2024failure-proof,
title={Failure-Proof Non-Contrastive Self-Supervised Learning},
author={Emanuele Sansone, Tim Lebailly, Tinne Tuytelaars},
journal={arXiv preprint arXiv:2410.04959},
year={2024},
archivePrefix={arXiv},
eprint={2410.04959},
primaryClass={cs.LG stat.ML}
} | sansone2024failure-proof |
arxiv-666483 | 2410.04960 | On Efficient Variants of Segment Anything Model: A Survey | <|reference_start|>On Efficient Variants of Segment Anything Model: A Survey: The Segment Anything Model (SAM) is a foundational model for image segmentation tasks, known for its strong generalization across diverse applications. However, its impressive performance comes with significant computational and resource demands, making it challenging to deploy in resource-limited environments such as mobile devices. To address this, a variety of SAM variants have been proposed to enhance efficiency without sacrificing accuracy. This survey provides the first comprehensive review of these efficient SAM variants. We begin by exploring the motivations driving this research. We then present core techniques used in SAM and model acceleration. This is followed by an in-depth analysis of various acceleration strategies, categorized by approach. Finally, we offer a unified and extensive evaluation of these methods, assessing their efficiency and accuracy on representative benchmarks, and providing a clear comparison of their overall performance.<|reference_end|> | arxiv | @article{sun2024on,
title={On Efficient Variants of Segment Anything Model: A Survey},
author={Xiaorui Sun, Jun Liu, Heng Tao Shen, Xiaofeng Zhu, Ping Hu},
journal={arXiv preprint arXiv:2410.04960},
year={2024},
archivePrefix={arXiv},
eprint={2410.04960},
primaryClass={cs.CV}
} | sun2024on |
arxiv-666484 | 2410.04962 | Activation Scaling for Steering and Interpreting Language Models | <|reference_start|>Activation Scaling for Steering and Interpreting Language Models: Given the prompt "Rome is in", can we steer a language model to flip its prediction of an incorrect token "France" to a correct token "Italy" by only multiplying a few relevant activation vectors with scalars? We argue that successfully intervening on a model is a prerequisite for interpreting its internal workings. Concretely, we establish a three-term objective: a successful intervention should flip the correct with the wrong token and vice versa (effectiveness), and leave other tokens unaffected (faithfulness), all while being sparse (minimality). Using gradient-based optimization, this objective lets us learn (and later evaluate) a specific kind of efficient and interpretable intervention: activation scaling only modifies the signed magnitude of activation vectors to strengthen, weaken, or reverse the steering directions already encoded in the model. On synthetic tasks, this intervention performs comparably with steering vectors in terms of effectiveness and faithfulness, but is much more minimal allowing us to pinpoint interpretable model components. We evaluate activation scaling from different angles, compare performance on different datasets, and make activation scalars a learnable function of the activation vectors themselves to generalize to varying-length prompts.<|reference_end|> | arxiv | @article{stoehr2024activation,
title={Activation Scaling for Steering and Interpreting Language Models},
author={Niklas Stoehr, Kevin Du, Vésteinn Snæbjarnarson, Robert West,
Ryan Cotterell, Aaron Schein},
journal={arXiv preprint arXiv:2410.04962},
year={2024},
archivePrefix={arXiv},
eprint={2410.04962},
primaryClass={cs.CL cs.AI}
} | stoehr2024activation |
arxiv-666485 | 2410.04965 | Revealing Directions for Text-guided 3D Face Editing | <|reference_start|>Revealing Directions for Text-guided 3D Face Editing: 3D face editing is a significant task in multimedia, aimed at the manipulation of 3D face models across various control signals. The success of 3D-aware GAN provides expressive 3D models learned from 2D single-view images only, encouraging researchers to discover semantic editing directions in its latent space. However, previous methods face challenges in balancing quality, efficiency, and generalization. To solve the problem, we explore the possibility of introducing the strength of diffusion model into 3D-aware GANs. In this paper, we present Face Clan, a fast and text-general approach for generating and manipulating 3D faces based on arbitrary attribute descriptions. To achieve disentangled editing, we propose to diffuse on the latent space under a pair of opposite prompts to estimate the mask indicating the region of interest on latent codes. Based on the mask, we then apply denoising to the masked latent codes to reveal the editing direction. Our method offers a precisely controllable manipulation method, allowing users to intuitively customize regions of interest with the text description. Experiments demonstrate the effectiveness and generalization of our Face Clan for various pre-trained GANs. It offers an intuitive and wide application for text-guided face editing that contributes to the landscape of multimedia content creation.<|reference_end|> | arxiv | @article{chen2024revealing,
title={Revealing Directions for Text-guided 3D Face Editing},
author={Zhuo Chen, Yichao Yan, Shengqi Liu, Yuhao Cheng, Weiming Zhao,
Lincheng Li, Mengxiao Bi, Xiaokang Yang},
journal={arXiv preprint arXiv:2410.04965},
year={2024},
archivePrefix={arXiv},
eprint={2410.04965},
primaryClass={cs.CV}
} | chen2024revealing |
arxiv-666486 | 2410.04968 | Collaboration! Towards Robust Neural Methods for Routing Problems | <|reference_start|>Collaboration! Towards Robust Neural Methods for Routing Problems: Despite enjoying desirable efficiency and reduced reliance on domain expertise, existing neural methods for vehicle routing problems (VRPs) suffer from severe robustness issues -- their performance significantly deteriorates on clean instances with crafted perturbations. To enhance robustness, we propose an ensemble-based Collaborative Neural Framework (CNF) w.r.t. the defense of neural VRP methods, which is crucial yet underexplored in the literature. Given a neural VRP method, we adversarially train multiple models in a collaborative manner to synergistically promote robustness against attacks, while boosting standard generalization on clean instances. A neural router is designed to adeptly distribute training instances among models, enhancing overall load balancing and collaborative efficacy. Extensive experiments verify the effectiveness and versatility of CNF in defending against various attacks across different neural VRP methods. Notably, our approach also achieves impressive out-of-distribution generalization on benchmark instances.<|reference_end|> | arxiv | @article{zhou2024collaboration!,
title={Collaboration! Towards Robust Neural Methods for Routing Problems},
author={Jianan Zhou, Yaoxin Wu, Zhiguang Cao, Wen Song, Jie Zhang, Zhiqi Shen},
journal={arXiv preprint arXiv:2410.04968},
year={2024},
archivePrefix={arXiv},
eprint={2410.04968},
primaryClass={cs.AI cs.LG}
} | zhou2024collaboration! |
arxiv-666487 | 2410.04970 | Contest design with a finite type-space: A unifying approach | <|reference_start|>Contest design with a finite type-space: A unifying approach: We study the classical contest design problem of allocating a budget across different prizes to maximize effort in a finite type-space environment. For any contest, we characterize the unique symmetric equilibrium. In this equilibrium, different agent types mix over contiguous intervals so that more efficient agents always exert greater effort than less efficient agents. We then solve for the expected equilibrium effort, investigate the effect of increasing competition under linear costs, and identify conditions under which this effect persists under general costs. As a result, we find that the winner-takes-all contest is optimal under linear and concave costs. Lastly, we obtain an equilibrium convergence result for the continuum type-space, and since the finite type-space encompasses the complete information environment as a special case, our analysis offers a unified approach to studying contests in these classical environments.<|reference_end|> | arxiv | @article{baranski2024contest,
title={Contest design with a finite type-space: A unifying approach},
author={Andrzej Baranski, Sumit Goel},
journal={arXiv preprint arXiv:2410.04970},
year={2024},
archivePrefix={arXiv},
eprint={2410.04970},
primaryClass={econ.TH cs.GT}
} | baranski2024contest |
arxiv-666488 | 2410.04972 | L-C4: Language-Based Video Colorization for Creative and Consistent Color | <|reference_start|>L-C4: Language-Based Video Colorization for Creative and Consistent Color: Automatic video colorization is inherently an ill-posed problem because each monochrome frame has multiple optional color candidates. Previous exemplar-based video colorization methods restrict the user's imagination due to the elaborate retrieval process. Alternatively, conditional image colorization methods combined with post-processing algorithms still struggle to maintain temporal consistency. To address these issues, we present Language-based video Colorization for Creative and Consistent Colors (L-C4) to guide the colorization process using user-provided language descriptions. Our model is built upon a pre-trained cross-modality generative model, leveraging its comprehensive language understanding and robust color representation abilities. We introduce the cross-modality pre-fusion module to generate instance-aware text embeddings, enabling the application of creative colors. Additionally, we propose temporally deformable attention to prevent flickering or color shifts, and cross-clip fusion to maintain long-term color consistency. Extensive experimental results demonstrate that L-C4 outperforms relevant methods, achieving semantically accurate colors, unrestricted creative correspondence, and temporally robust consistency.<|reference_end|> | arxiv | @article{chang2024l-c4:,
title={L-C4: Language-Based Video Colorization for Creative and Consistent
Color},
author={Zheng Chang, Shuchen Weng, Huan Ouyang, Yu Li, Si Li, Boxin Shi},
journal={arXiv preprint arXiv:2410.04972},
year={2024},
archivePrefix={arXiv},
eprint={2410.04972},
primaryClass={cs.CV}
} | chang2024l-c4: |
arxiv-666489 | 2410.04974 | 6DGS: Enhanced Direction-Aware Gaussian Splatting for Volumetric Rendering | <|reference_start|>6DGS: Enhanced Direction-Aware Gaussian Splatting for Volumetric Rendering: Novel view synthesis has advanced significantly with the development of neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS). However, achieving high quality without compromising real-time rendering remains challenging, particularly for physically-based ray tracing with view-dependent effects. Recently, N-dimensional Gaussians (N-DG) introduced a 6D spatial-angular representation to better incorporate view-dependent effects, but the Gaussian representation and control scheme are sub-optimal. In this paper, we revisit 6D Gaussians and introduce 6D Gaussian Splatting (6DGS), which enhances color and opacity representations and leverages the additional directional information in the 6D space for optimized Gaussian control. Our approach is fully compatible with the 3DGS framework and significantly improves real-time radiance field rendering by better modeling view-dependent effects and fine details. Experiments demonstrate that 6DGS significantly outperforms 3DGS and N-DG, achieving up to a 15.73 dB improvement in PSNR with a reduction of 66.5% Gaussian points compared to 3DGS. The project page is: https://gaozhongpai.github.io/6dgs/<|reference_end|> | arxiv | @article{gao20246dgs:,
title={6DGS: Enhanced Direction-Aware Gaussian Splatting for Volumetric
Rendering},
author={Zhongpai Gao, Benjamin Planche, Meng Zheng, Anwesa Choudhuri, Terrence
Chen, Ziyan Wu},
journal={arXiv preprint arXiv:2410.04974},
year={2024},
archivePrefix={arXiv},
eprint={2410.04974},
primaryClass={cs.CV cs.AI}
} | gao20246dgs: |
arxiv-666490 | 2410.04976 | Noise-Domain Non-Orthogonal Multiple Access | <|reference_start|>Noise-Domain Non-Orthogonal Multiple Access: In this paper, we present noise-domain non-orthogonal multiple access (ND-NOMA), an innovative communication scheme that utilizes the modulation of artificial noise mean and variance to convey information. Distinct from traditional methods such as power-domain non-orthogonal multiple access (PD-NOMA) that heavily relies on successive interference cancellation (SIC), ND-NOMA utilizes the noise domain, considerably reducing power consumption and system complexity. Inspired by noise modulation, ND-NOMA enhances energy efficiency and provides lower bit error probability (BEP), making it highly suitable for next-generation Internet-of-things (IoT) networks. Our theoretical analyses and computer simulations reveal that ND-NOMA can achieve exceptionally low bit error rates in both uplink and downlink scenarios, in the presence of Rician fading channels. The proposed multi-user system is supported by a minimum distance detector for mean detection and a threshold-based detector for variance detection, ensuring robust communication in low-power environments. By leveraging the inherent properties of noise, ND-NOMA offers a promising platform for long-term deployments of low-cost and low-complexity devices.<|reference_end|> | arxiv | @article{yapici2024noise-domain,
title={Noise-Domain Non-Orthogonal Multiple Access},
author={Erkin Yapici, Yusuf Islam Tek, Ertugrul Basar},
journal={arXiv preprint arXiv:2410.04976},
year={2024},
archivePrefix={arXiv},
eprint={2410.04976},
primaryClass={cs.IT eess.SP math.IT}
} | yapici2024noise-domain |
arxiv-666491 | 2410.04980 | Comparison of marker-less 2D image-based methods for infant pose estimation | <|reference_start|>Comparison of marker-less 2D image-based methods for infant pose estimation: There are increasing efforts to automate clinical methods for early diagnosis of developmental disorders, among them the General Movement Assessment (GMA), a video-based tool to classify infant motor functioning. Optimal pose estimation is a crucial part of the automated GMA. In this study we compare the performance of available generic- and infant-pose estimators, and the choice of viewing angle for optimal recordings, i.e., conventional diagonal view used in GMA vs. top-down view. For this study, we used 4500 annotated video-frames from 75 recordings of infant spontaneous motor functions from 4 to 26 weeks. To determine which available pose estimation method and camera angle yield the best pose estimation accuracy on infants in a GMA related setting, the distance to human annotations as well as the percentage of correct key-points (PCK) were computed and compared. The results show that the best performing generic model trained on adults, ViTPose, also performs best on infants. We see no improvement from using specialized infant-pose estimators over the generic pose estimators on our own infant dataset. However, when retraining a generic model on our data, there is a significant improvement in pose estimation accuracy. The pose estimation accuracy obtained from the top-down view is significantly better than that obtained from the diagonal view, especially for the detection of the hip key-points. The results also indicate only limited generalization capabilities of infant-pose estimators to other infant datasets, which hints that one should be careful when choosing infant pose estimators and using them on infant datasets which they were not trained on. While the standard GMA method uses a diagonal view for assessment, pose estimation accuracy significantly improves using a top-down view. This suggests that a top-down view should be included in recording setups for automated GMA research.<|reference_end|> | arxiv | @article{jahn2024comparison,
title={Comparison of marker-less 2D image-based methods for infant pose
estimation},
author={Lennart Jahn, Sarah Flügge, Dajie Zhang, Luise Poustka, Sven
Bölte, Florentin Wörgötter, Peter B Marschik and Tomas Kulvicius},
journal={arXiv preprint arXiv:2410.04980},
year={2024},
archivePrefix={arXiv},
eprint={2410.04980},
primaryClass={cs.CV}
} | jahn2024comparison |
arxiv-666492 | 2410.04981 | On the Rigour of Scientific Writing: Criteria, Analysis, and Insights | <|reference_start|>On the Rigour of Scientific Writing: Criteria, Analysis, and Insights: Rigour is crucial for scientific research as it ensures the reproducibility and validity of results and findings. Despite its importance, little work exists on modelling rigour computationally, and there is a lack of analysis on whether these criteria can effectively signal or measure the rigour of scientific papers in practice. In this paper, we introduce a bottom-up, data-driven framework to automatically identify and define rigour criteria and assess their relevance in scientific writing. Our framework includes rigour keyword extraction, detailed rigour definition generation, and salient criteria identification. Furthermore, our framework is domain-agnostic and can be tailored to the evaluation of scientific rigour for different areas, accommodating the distinct salient criteria across fields. We conducted comprehensive experiments based on datasets collected from two high impact venues for Machine Learning and NLP (i.e., ICLR and ACL) to demonstrate the effectiveness of our framework in modelling rigour. In addition, we analyse linguistic patterns of rigour, revealing that framing certainty is crucial for enhancing the perception of scientific rigour, while suggestion certainty and probability uncertainty diminish it.<|reference_end|> | arxiv | @article{james2024on,
title={On the Rigour of Scientific Writing: Criteria, Analysis, and Insights},
author={Joseph James, Chenghao Xiao, Yucheng Li, Chenghua Lin},
journal={arXiv preprint arXiv:2410.04981},
year={2024},
archivePrefix={arXiv},
eprint={2410.04981},
primaryClass={cs.CL}
} | james2024on |
arxiv-666493 | 2410.04982 | Safe Learning-Based Optimization of Model Predictive Control: Application to Battery Fast-Charging | <|reference_start|>Safe Learning-Based Optimization of Model Predictive Control: Application to Battery Fast-Charging: Model predictive control (MPC) is a powerful tool for controlling complex nonlinear systems under constraints, but often struggles with model uncertainties and the design of suitable cost functions. To address these challenges, we discuss an approach that integrates MPC with safe Bayesian optimization to optimize long-term closed-loop performance despite significant model-plant mismatches. By parameterizing the MPC stage cost function using a radial basis function network, we employ Bayesian optimization as a multi-episode learning strategy to tune the controller without relying on precise system models. This method mitigates conservativeness introduced by overly cautious soft constraints in the MPC cost function and provides probabilistic safety guarantees during learning, ensuring that safety-critical constraints are met with high probability. As a practical application, we apply our approach to fast charging of lithium-ion batteries, a challenging task due to the complicated battery dynamics and strict safety requirements, subject to the requirement to be implementable in real time. Simulation results demonstrate that, in the context of model-plant mismatch, our method reduces charging times compared to traditional MPC methods while maintaining safety. This work extends previous research by emphasizing closed-loop constraint satisfaction and offers a promising solution for enhancing performance in systems where model uncertainties and safety are critical concerns.<|reference_end|> | arxiv | @article{hirt2024safe,
title={Safe Learning-Based Optimization of Model Predictive Control:
Application to Battery Fast-Charging},
author={Sebastian Hirt, Andreas Höhl, Johannes Pohlodek, Joachim Schaeffer,
Maik Pfefferkorn, Richard D. Braatz, Rolf Findeisen},
journal={arXiv preprint arXiv:2410.04982},
year={2024},
archivePrefix={arXiv},
eprint={2410.04982},
primaryClass={eess.SY cs.LG cs.SY}
} | hirt2024safe |
arxiv-666494 | 2410.04983 | RoWeeder: Unsupervised Weed Mapping through Crop-Row Detection | <|reference_start|>RoWeeder: Unsupervised Weed Mapping through Crop-Row Detection: Precision agriculture relies heavily on effective weed management to ensure robust crop yields. This study presents RoWeeder, an innovative framework for unsupervised weed mapping that combines crop-row detection with a noise-resilient deep learning model. By leveraging crop-row information to create a pseudo-ground truth, our method trains a lightweight deep learning model capable of distinguishing between crops and weeds, even in the presence of noisy data. Evaluated on the WeedMap dataset, RoWeeder achieves an F1 score of 75.3, outperforming several baselines. Comprehensive ablation studies further validate the model's performance. By integrating RoWeeder with drone technology, farmers can conduct real-time aerial surveys, enabling precise weed management across large fields. The code is available at: https://github.com/pasqualedem/RoWeeder.<|reference_end|> | arxiv | @article{de marinis2024roweeder:,
title={RoWeeder: Unsupervised Weed Mapping through Crop-Row Detection},
author={Pasquale De Marinis, Gennaro Vessio, Giovanna Castellano},
journal={arXiv preprint arXiv:2410.04983},
year={2024},
archivePrefix={arXiv},
eprint={2410.04983},
primaryClass={cs.CV}
} | de marinis2024roweeder: |
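
RoWeeder's key unsupervised step, as described in the abstract, is turning detected crop rows into pseudo-ground truth: vegetation lying on a row is labeled crop, vegetation off every row is labeled weed. A minimal sketch of that labeling on binary masks follows; the row detector, the vegetation index, and the downstream segmentation network are assumed to exist elsewhere and are not shown.

```python
import numpy as np

def pseudo_labels(vegetation_mask, row_mask):
    """Derive crop/weed pseudo-ground truth from vegetation and crop-row masks.

    vegetation_mask: (H, W) bool, True wherever a plant was detected.
    row_mask:        (H, W) bool, True inside the detected crop-row bands.
    Returns an int map: 0 = background, 1 = crop (vegetation on a row),
    2 = weed (vegetation off every row).
    """
    labels = np.zeros(vegetation_mask.shape, dtype=np.uint8)
    labels[vegetation_mask & row_mask] = 1
    labels[vegetation_mask & ~row_mask] = 2
    return labels

# Toy 6x6 field: a crop row occupying columns 2-3 and one stray plant at (0, 5).
veg = np.zeros((6, 6), dtype=bool)
veg[:, 2] = True
veg[0, 5] = True
rows = np.zeros((6, 6), dtype=bool)
rows[:, 2:4] = True
print(pseudo_labels(veg, rows))
```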
arxiv-666495 | 2410.04984 | A Meta-Complexity Characterization of Quantum Cryptography | <|reference_start|>A Meta-Complexity Characterization of Quantum Cryptography: We prove the first meta-complexity characterization of a quantum cryptographic primitive. We show that one-way puzzles exist if and only if there is some quantum samplable distribution of binary strings over which it is hard to approximate Kolmogorov complexity. Therefore, we characterize one-way puzzles by the average-case hardness of an uncomputable problem. This brings to the quantum setting a recent line of work that characterizes classical cryptography with the average-case hardness of a meta-complexity problem, initiated by Liu and Pass. Moreover, since the average-case hardness of Kolmogorov complexity over classically polynomial-time samplable distributions characterizes one-way functions, this result poses one-way puzzles as a natural generalization of one-way functions to the quantum setting. Furthermore, our equivalence goes through probability estimation, giving us the additional equivalence that one-way puzzles exist if and only if there is a quantum samplable distribution over which probability estimation is hard. We also observe that the oracle worlds defined by Kretschmer et al. rule out any relativizing characterization of one-way puzzles by the hardness of a problem in NP or QMA, which means that it may not be possible with current techniques to characterize one-way puzzles with another meta-complexity problem.<|reference_end|> | arxiv | @article{cavalar2024a,
title={A Meta-Complexity Characterization of Quantum Cryptography},
author={Bruno P. Cavalar, Eli Goldin, Matthew Gray, and Peter Hall},
journal={arXiv preprint arXiv:2410.04984},
year={2024},
archivePrefix={arXiv},
eprint={2410.04984},
primaryClass={cs.CR cs.CC quant-ph}
} | cavalar2024a |
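
Stated compactly, the equivalences claimed in the cavalar2024a abstract read roughly as below. The notation is mine and the quantitative parameters (approximation gap, average-case success probability) are suppressed, so this is an informal paraphrase rather than the paper's formal theorem statements.

```latex
% Informal restatement; quantitative parameters are suppressed.
\[
  \text{one-way puzzles exist}
  \;\Longleftrightarrow\;
  \exists\, \text{QPT-samplable } \mathcal{D} \text{ over } \{0,1\}^{*}
  \text{ such that approximating } K(x) \text{ for } x \sim \mathcal{D}
  \text{ is hard on average,}
\]
\[
  \text{one-way puzzles exist}
  \;\Longleftrightarrow\;
  \exists\, \text{QPT-samplable } \mathcal{D}
  \text{ such that estimating } \Pr_{\mathcal{D}}[x]
  \text{ for } x \sim \mathcal{D} \text{ is hard on average.}
\]
```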
arxiv-666496 | 2410.04986 | Finding Safety Violations of AI-Enabled Control Systems through the Lens of Synthesized Proxy Programs | <|reference_start|>Finding Safety Violations of AI-Enabled Control Systems through the Lens of Synthesized Proxy Programs: Given the increasing adoption of modern AI-enabled control systems, ensuring their safety and reliability has become a critical task in software testing. One prevalent approach to testing control systems is falsification, which aims to find an input signal that causes the control system to violate a formal safety specification using optimization algorithms. However, applying falsification to AI-enabled control systems poses two significant challenges: (1) it requires the system to execute numerous candidate test inputs, which can be time-consuming, particularly for systems with AI models that have many parameters, and (2) multiple safety requirements are typically defined as a conjunctive specification, which is difficult for existing falsification approaches to comprehensively cover. This paper introduces Synthify, a falsification framework tailored for AI-enabled control systems. Our approach performs falsification in a two-phase process. At the start, Synthify synthesizes a program that implements one or a few linear controllers to serve as a proxy for the AI controller. This proxy program mimics the AI controller's functionality but is computationally more efficient. Then, Synthify employs the $\epsilon$-greedy strategy to sample a promising sub-specification from the conjunctive safety specification. It then uses a Simulated Annealing-based falsification algorithm to find violations of the sampled sub-specification for the control system. To evaluate Synthify, we compare it to PSY-TaLiRo, a state-of-the-art and industrial-strength falsification tool, on 8 publicly available control systems. On average, Synthify achieves an 83.5% higher success rate in falsification compared to PSY-TaLiRo with the same budget of falsification trials. The safety violations found by Synthify are also more diverse than those found by PSY-TaLiRo, covering 137.7% more sub-specifications.<|reference_end|> | arxiv | @article{shi2024finding,
title={Finding Safety Violations of AI-Enabled Control Systems through the Lens
of Synthesized Proxy Programs},
author={Jieke Shi, Zhou Yang, Junda He, Bowen Xu, Dongsun Kim, DongGyun Han,
and David Lo},
journal={arXiv preprint arXiv:2410.04986},
year={2024},
archivePrefix={arXiv},
eprint={2410.04986},
primaryClass={cs.SE}
} | shi2024finding |
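
The second phase of Synthify, as the abstract describes it, combines an epsilon-greedy choice of one sub-specification from the conjunction with a simulated-annealing search for a violating input. The sketch below shows that combination in isolation; the robustness function is a synthetic placeholder rather than an STL monitor, the signal is just a small parameter vector, and nothing here is taken from the tool's actual code.

```python
import math
import random

def epsilon_greedy_pick(estimates, counts, eps=0.2):
    """Pick a sub-specification: explore uniformly with probability eps,
    otherwise exploit the one with the lowest average robustness seen so far
    (i.e. the one that currently looks easiest to falsify)."""
    if random.random() < eps or not any(counts):
        return random.randrange(len(estimates))
    avg = [e / c if c else math.inf for e, c in zip(estimates, counts)]
    return min(range(len(avg)), key=avg.__getitem__)

def robustness(sub_spec, signal):
    """Synthetic placeholder: negative means the sub-spec is violated.
    A real falsifier would simulate the (proxy) control system on `signal`
    and evaluate the robustness of the sub-specification on the trace."""
    target = [0.8, -0.3, 0.5][sub_spec]
    return min(abs(s - target) for s in signal) - 0.05

def simulated_annealing(sub_spec, dim=3, iters=200, temp0=1.0):
    x = [random.uniform(-1, 1) for _ in range(dim)]
    fx = robustness(sub_spec, x)
    for k in range(iters):
        temp = temp0 * (1 - k / iters) + 1e-9
        cand = [xi + random.gauss(0, 0.2) for xi in x]
        fc = robustness(sub_spec, cand)
        if fc < fx or random.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
        if fx < 0:
            break                    # violation found
    return x, fx

random.seed(1)
n_specs = 3
est, cnt = [0.0] * n_specs, [0] * n_specs
for trial in range(20):
    spec = epsilon_greedy_pick(est, cnt)
    signal, rob = simulated_annealing(spec)
    est[spec] += rob
    cnt[spec] += 1
    if rob < 0:
        print(f"trial {trial}: sub-spec {spec} violated, robustness {rob:.3f}")
        break
```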
arxiv-666497 | 2410.04987 | icon: Fast Simulation of Epidemics on Coevolving Networks | <|reference_start|>icon: Fast Simulation of Epidemics on Coevolving Networks: We introduce a fast simulation technique for modeling epidemics on adaptive networks. Our rejection-based algorithm efficiently simulates the co-evolution of the network structure and the epidemic dynamics. We extend the classical SIS model by incorporating stochastic rules that allow for the association of susceptible nodes and the dissociation of infected nodes. The method outperforms standard baselines in terms of computational efficiency while revealing new emergent patterns in epidemic spread. Code is made available at github.com/GerritGr/icon.<|reference_end|> | arxiv | @article{großmann2024icon:,
title={icon: Fast Simulation of Epidemics on Coevolving Networks},
author={Gerrit Gro{\ss}mann, Sebastian Vollmer},
journal={arXiv preprint arXiv:2410.04987},
year={2024},
archivePrefix={arXiv},
eprint={2410.04987},
primaryClass={cs.SI physics.soc-ph}
} | großmann2024icon: |
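
To make the coevolution in the icon abstract concrete, the sketch below runs a plain Gillespie-style simulation (not the paper's rejection-based algorithm, so without its speed-up) of an SIS process in which susceptible nodes may gain edges and infected nodes may drop them. The rates, the initial ring graph, and the node count are arbitrary illustrative choices.

```python
import random

def adaptive_sis(n=50, beta=0.4, gamma=0.2, attach=0.1, detach=0.1, t_max=20.0, seed=3):
    """Event-driven SIS simulation on a coevolving graph.

    Events and rates:
      infection    along an S-I edge (the S endpoint becomes I)  rate beta   per S-I edge
      recovery     I -> S                                        rate gamma  per infected node
      association  a susceptible node gains a random edge        rate attach per susceptible node
      dissociation an infected node drops one of its edges       rate detach per infected node
    """
    rng = random.Random(seed)
    edges = {frozenset((i, (i + 1) % n)) for i in range(n)}   # start from a ring
    state = ["S"] * n
    state[0] = "I"
    t = 0.0
    while t < t_max:
        si_edges = [e for e in edges if {state[v] for v in e} == {"S", "I"}]
        infected = [v for v in range(n) if state[v] == "I"]
        susceptible = [v for v in range(n) if state[v] == "S"]
        rates = [beta * len(si_edges), gamma * len(infected),
                 attach * len(susceptible), detach * len(infected)]
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)
        event = rng.choices(range(4), weights=rates)[0]
        if event == 0:                                # infection along a random S-I edge
            for v in rng.choice(si_edges):
                state[v] = "I"
        elif event == 1:                              # recovery
            state[rng.choice(infected)] = "S"
        elif event == 2:                              # association: susceptible node gains an edge
            u = rng.choice(susceptible)
            w = rng.choice([v for v in range(n) if v != u])
            edges.add(frozenset((u, w)))
        else:                                         # dissociation: infected node drops an edge
            u = rng.choice(infected)
            incident = [e for e in edges if u in e]
            if incident:
                edges.discard(rng.choice(incident))
    return sum(s == "I" for s in state), len(edges)

print("(final number infected, final edge count):", adaptive_sis())
```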
arxiv-666498 | 2410.04988 | Efficient Model-Based Reinforcement Learning Through Optimistic Thompson Sampling | <|reference_start|>Efficient Model-Based Reinforcement Learning Through Optimistic Thompson Sampling: Learning complex robot behavior through interactions with the environment necessitates principled exploration. Effective strategies should prioritize exploring regions of the state-action space that maximize rewards, with optimistic exploration emerging as a promising direction aligned with this idea and enabling sample-efficient reinforcement learning. However, existing methods overlook a crucial aspect: the need for optimism to be informed by a belief connecting the reward and state. To address this, we propose a practical, theoretically grounded approach to optimistic exploration based on Thompson sampling. Our model structure is the first that allows for reasoning about joint uncertainty over transitions and rewards. We apply our method on a set of MuJoCo and VMAS continuous control tasks. Our experiments demonstrate that optimistic exploration significantly accelerates learning in environments with sparse rewards, action penalties, and difficult-to-explore regions. Furthermore, we provide insights into when optimism is beneficial and emphasize the critical role of model uncertainty in guiding exploration.<|reference_end|> | arxiv | @article{bayrooti2024efficient,
title={Efficient Model-Based Reinforcement Learning Through Optimistic Thompson
Sampling},
author={Jasmine Bayrooti, Carl Henrik Ek, Amanda Prorok},
journal={arXiv preprint arXiv:2410.04988},
year={2024},
archivePrefix={arXiv},
eprint={2410.04988},
primaryClass={cs.LG cs.RO}
} | bayrooti2024efficient |
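
The core loop suggested by the bayrooti2024efficient abstract is: maintain a posterior over a joint transition-and-reward model, sample from it, and act greedily on the sample. The toy tabular sketch below uses Dirichlet and Gaussian posteriors and injects optimism by drawing a few joint models and keeping the most promising one; that particular optimism mechanism, and everything else here, is an illustrative assumption rather than the paper's continuous-control construction.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 3, 2, 0.9

# Unknown ground-truth MDP the agent interacts with.
P_true = rng.dirichlet(np.ones(nS), size=(nS, nA))
R_true = rng.normal(size=(nS, nA))

# Posterior statistics: Dirichlet counts for transitions, Gaussian summaries for rewards.
trans_counts = np.ones((nS, nA, nS))
r_sum = np.zeros((nS, nA))
r_n = np.zeros((nS, nA))

def value_iteration(P, R, iters=60):
    V = np.zeros(nS)
    for _ in range(iters):
        Q = R + gamma * P @ V
        V = Q.max(axis=1)
    return Q

s = 0
for step in range(300):
    # Optimistic Thompson sampling step: draw a few joint (P, R) models from the
    # posterior and act on the most promising one.
    best_Q, best_val = None, -np.inf
    for _ in range(3):
        P_s = np.array([[rng.dirichlet(trans_counts[si, ai]) for ai in range(nA)]
                        for si in range(nS)])
        R_s = rng.normal(r_sum / np.maximum(r_n, 1), 1.0 / np.sqrt(r_n + 1))
        Q = value_iteration(P_s, R_s)
        if Q.max() > best_val:
            best_Q, best_val = Q, Q.max()
    a = int(best_Q[s].argmax())
    # Interact with the environment and update both posteriors jointly.
    s_next = rng.choice(nS, p=P_true[s, a])
    r = R_true[s, a] + 0.1 * rng.normal()
    trans_counts[s, a, s_next] += 1
    r_sum[s, a] += r
    r_n[s, a] += 1
    s = s_next

Q_mean = value_iteration(trans_counts / trans_counts.sum(axis=2, keepdims=True),
                         r_sum / np.maximum(r_n, 1))
print("policy from posterior means:", Q_mean.argmax(axis=1))
print("policy under the true model:", value_iteration(P_true, R_true).argmax(axis=1))
```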
arxiv-666499 | 2410.04989 | Conditional Variational Autoencoders for Probabilistic Pose Regression | <|reference_start|>Conditional Variational Autoencoders for Probabilistic Pose Regression: Robots rely on visual relocalization to estimate their pose from camera images when they lose track. One of the challenges in visual relocalization is repetitive structures in the operation environment of the robot. This calls for probabilistic methods that support multiple hypotheses for the robot's pose. We propose such a probabilistic method to predict the posterior distribution of camera poses given an observed image. Our proposed training strategy results in a generative model of camera poses given an image, which can be used to draw samples from the pose posterior distribution. Our method is streamlined and well-founded in theory and outperforms existing methods on localization in the presence of ambiguities.<|reference_end|> | arxiv | @article{zangeneh2024conditional,
title={Conditional Variational Autoencoders for Probabilistic Pose Regression},
author={Fereidoon Zangeneh, Leonard Bruns, Amit Dekel, Alessandro Pieropan,
Patric Jensfelt},
journal={arXiv preprint arXiv:2410.04989},
year={2024},
archivePrefix={arXiv},
eprint={2410.04989},
primaryClass={cs.CV}
} | zangeneh2024conditional |
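
At inference time, the conditional-VAE idea in the zangeneh2024conditional abstract amounts to decoding many latent samples, conditioned on image features, into pose hypotheses. The sketch below shows only that sampling step with untrained toy weights; the feature extractor, the network sizes, and the 3-D translation plus unit-quaternion output format are assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, latent_dim, hidden = 128, 16, 64

# Untrained toy decoder weights; a real model would be trained with the CVAE
# objective (pose reconstruction plus a KL term on the encoder's latent posterior).
W1 = rng.normal(0, 0.1, size=(feat_dim + latent_dim, hidden))
W2 = rng.normal(0, 0.1, size=(hidden, 7))    # 3-D translation + 4-D quaternion

def decode_pose(image_features, z):
    """Map (image features, latent sample) to one pose hypothesis (t, q)."""
    h = np.tanh(np.concatenate([image_features, z]) @ W1)
    out = h @ W2
    t, q = out[:3], out[3:]
    q = q / (np.linalg.norm(q) + 1e-9)        # normalize to a unit quaternion
    return t, q

def pose_posterior_samples(image_features, n_samples=32):
    """Draw pose hypotheses for one image by sampling the latent prior z ~ N(0, I)."""
    return [decode_pose(image_features, rng.normal(size=latent_dim)) for _ in range(n_samples)]

features = rng.normal(size=feat_dim)          # stand-in for a CNN image embedding
samples = pose_posterior_samples(features)
translations = np.stack([t for t, _ in samples])
print("per-axis std of sampled translations:", translations.std(axis=0).round(3))
```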
arxiv-666500 | 2410.04990 | Stage-Wise and Prior-Aware Neural Speech Phase Prediction | <|reference_start|>Stage-Wise and Prior-Aware Neural Speech Phase Prediction: This paper proposes a novel Stage-wise and Prior-aware Neural Speech Phase Prediction (SP-NSPP) model, which predicts the phase spectrum from the input amplitude spectrum using two-stage neural networks. In the initial prior-construction stage, we preliminarily predict a rough prior phase spectrum from the amplitude spectrum. The subsequent refinement stage transforms the amplitude spectrum into a refined high-quality phase spectrum conditioned on the prior phase. Networks in both stages use ConvNeXt v2 blocks as the backbone and adopt adversarial training by innovatively introducing a phase spectrum discriminator (PSD). To further improve the continuity of the refined phase, we also incorporate a time-frequency integrated difference (TFID) loss in the refinement stage. Experimental results confirm that, compared to neural network-based no-prior phase prediction methods, the proposed SP-NSPP achieves higher phase prediction accuracy, thanks to the introduction of coarse phase priors and diverse training criteria. Compared to iterative phase estimation algorithms, our proposed SP-NSPP does not require multiple rounds of staged iterations, resulting in higher generation efficiency.<|reference_end|> | arxiv | @article{liu2024stage-wise,
title={Stage-Wise and Prior-Aware Neural Speech Phase Prediction},
author={Fei Liu, Yang Ai, Hui-Peng Du, Ye-Xin Lu, Rui-Chen Zheng, Zhen-Hua
Ling},
journal={arXiv preprint arXiv:2410.04990},
year={2024},
archivePrefix={arXiv},
eprint={2410.04990},
primaryClass={cs.SD cs.AI eess.AS}
} | liu2024stage-wise |
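
The two-stage layout described in the liu2024stage-wise abstract can be sketched as a prior network that maps the amplitude spectrum to a rough phase, followed by a refinement network conditioned on both. The stub below uses untrained linear maps and a parallel real/imaginary head with atan2 to keep phases wrapped; the ConvNeXt v2 backbone, the discriminator, and the TFID loss are all omitted, so this only illustrates the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)
n_freq = 257   # e.g. FFT size 512 -> 257 frequency bins per frame

def phase_head(features, W_real, W_imag):
    """Predict a wrapped phase spectrum via a parallel real/imaginary projection and
    atan2, which keeps outputs in (-pi, pi] without an explicit activation constraint."""
    return np.arctan2(features @ W_imag, features @ W_real)

# Untrained toy weights standing in for the two stages' networks.
W_r1, W_i1 = rng.normal(0, 0.1, (n_freq, n_freq)), rng.normal(0, 0.1, (n_freq, n_freq))
W_r2, W_i2 = rng.normal(0, 0.1, (2 * n_freq, n_freq)), rng.normal(0, 0.1, (2 * n_freq, n_freq))

def predict_phase(log_amplitude):
    # Stage 1: rough prior phase predicted from the amplitude spectrum alone.
    prior_phase = phase_head(log_amplitude, W_r1, W_i1)
    # Stage 2: refined phase conditioned on both the amplitude and the prior phase.
    conditioned = np.concatenate([log_amplitude, prior_phase])
    refined_phase = phase_head(conditioned, W_r2, W_i2)
    return prior_phase, refined_phase

amp = rng.normal(size=n_freq)        # stand-in for one frame's log-amplitude spectrum
prior, refined = predict_phase(amp)
print(prior.shape, refined.shape, float(refined.min()), float(refined.max()))
```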