Dataset schema (7 string fields per record):
  corpus_id     string, length 7–12
  paper_id      string, length 9–16
  title         string, length 1–261
  abstract      string, length 70–4.02k
  source        string, 1 class (1 distinct value)
  bibtex        string, length 208–20.9k
  citation_key  string, length 6–100
arxiv-664401
2410.01337
PhyMPGN: Physics-encoded Message Passing Graph Network for spatiotemporal PDE systems
<|reference_start|>PhyMPGN: Physics-encoded Message Passing Graph Network for spatiotemporal PDE systems: Solving partial differential equations (PDEs) serves as a cornerstone for modeling complex dynamical systems. Recent progress has demonstrated substantial benefits of data-driven neural-based models for predicting spatiotemporal dynamics (e.g., tremendous speedup gains compared with classical numerical methods). However, most existing neural models rely on rich training data, have limited extrapolation and generalization abilities, and struggle to produce precise or reliable physical predictions under intricate conditions (e.g., irregular mesh or geometry, complex boundary conditions, diverse PDE parameters, etc.). To this end, we propose a new graph learning approach, namely, Physics-encoded Message Passing Graph Network (PhyMPGN), to model spatiotemporal PDE systems on irregular meshes given small training datasets. Specifically, we incorporate a GNN into a numerical integrator to approximate the temporal marching of spatiotemporal dynamics for a given PDE system. Considering that many physical phenomena are governed by diffusion processes, we further design a learnable Laplace block, which encodes the discrete Laplace-Beltrami operator, to aid and guide the GNN learning in a physically feasible solution space. A boundary condition padding strategy is also designed to improve the model convergence and accuracy. Extensive experiments demonstrate that PhyMPGN is capable of accurately predicting various types of spatiotemporal dynamics on coarse unstructured meshes, consistently achieves state-of-the-art results, and outperforms other baselines with considerable gains.<|reference_end|>
arxiv
@article{zeng2024phympgn, title={PhyMPGN: Physics-encoded Message Passing Graph Network for spatiotemporal PDE systems}, author={Bocheng Zeng, Qi Wang, Mengtao Yan, Yang Liu, Ruizhi Chengze, Yi Zhang, Hongsheng Liu, Zidong Wang, Hao Sun}, journal={arXiv preprint arXiv:2410.01337}, year={2024}, archivePrefix={arXiv}, eprint={2410.01337}, primaryClass={cs.LG cs.AI cs.CE} }
zeng2024phympgn
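The abstract above hinges on embedding a GNN inside a classical time integrator. The sketch below illustrates that coupling in outline only, assuming a toy one-layer message-passing network with random weights in place of PhyMPGN's learned components; the chain mesh and the names `gnn_step` and `rk2_march` are hypothetical, not the authors' code.

```python
# Minimal sketch: a GNN as the right-hand side of an explicit RK2 integrator.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_feat = 50, 8
edges = np.stack([np.arange(n_nodes - 1), np.arange(1, n_nodes)])  # chain mesh
W_msg = rng.normal(scale=0.1, size=(n_feat, n_feat))      # message weights
W_upd = rng.normal(scale=0.1, size=(2 * n_feat, n_feat))  # update weights

def gnn_step(u):
    """One message-passing layer standing in for the learned du/dt."""
    src, dst = edges
    msgs = np.zeros_like(u)
    np.add.at(msgs, dst, np.tanh((u[src] - u[dst]) @ W_msg))   # aggregate edges
    return np.tanh(np.concatenate([u, msgs], axis=1) @ W_upd)  # node update

def rk2_march(u, dt):
    """Second-order Runge-Kutta marching with the GNN as the RHS."""
    k1 = gnn_step(u)
    k2 = gnn_step(u + dt * k1)
    return u + 0.5 * dt * (k1 + k2)

u = rng.normal(size=(n_nodes, n_feat))  # initial nodal state on the mesh
for _ in range(10):
    u = rk2_march(u, dt=0.01)
print(u.shape)  # (50, 8): state after 10 learned time steps
```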
arxiv-664402
2410.01340
Response Estimation and System Identification of Dynamical Systems via Physics-Informed Neural Networks
<|reference_start|>Response Estimation and System Identification of Dynamical Systems via Physics-Informed Neural Networks: The accurate modelling of structural dynamics is crucial across numerous engineering applications, such as Structural Health Monitoring (SHM), seismic analysis, and vibration control. Often, these models originate from physics-based principles and can be derived from corresponding governing equations, often of differential equation form. However, complex system characteristics, such as nonlinearities and energy dissipation mechanisms, often imply that such models are approximate and imprecise. This challenge is further compounded in SHM, where sensor data is often sparse, making it difficult to fully observe the system's states. To address these issues, this paper explores the use of Physics-Informed Neural Networks (PINNs), a class of physics-enhanced machine learning (PEML) techniques, for the identification and estimation of dynamical systems. PINNs offer a unique advantage by embedding known physical laws directly into the neural network's loss function, allowing for simple embedding of complex phenomena, even in the presence of uncertainties. This study specifically investigates three key applications of PINNs: state estimation in systems with sparse sensing; joint state-parameter estimation, when both system response and parameters are unknown; and parameter estimation within a Bayesian framework to quantify uncertainties. The results demonstrate that PINNs deliver an efficient tool across all aforementioned tasks, even in the presence of modelling errors. However, these errors tend to have a more significant impact on parameter estimation, as the optimization process must reconcile discrepancies between the prescribed model and the true system behavior. Despite these challenges, PINNs show promise in dynamical system modeling, offering a robust approach to handling uncertainties.<|reference_end|>
arxiv
@article{haywood-alexander2024response, title={Response Estimation and System Identification of Dynamical Systems via Physics-Informed Neural Networks}, author={Marcus Haywood-Alexander, Giacomo Arcieri, Antonios Kamariotis, Eleni Chatzi}, journal={arXiv preprint arXiv:2410.01340}, year={2024}, archivePrefix={arXiv}, eprint={2410.01340}, primaryClass={physics.comp-ph cs.LG} }
haywood-alexander2024response
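Since the abstract's key idea is embedding the governing equation into the loss, here is a minimal, self-contained PINN sketch under an assumed single-degree-of-freedom oscillator m·x'' + c·x' + k·x = 0; the network size, loss weighting, and single-observation setup are illustrative choices, not the authors' configuration.

```python
# Minimal PINN sketch: physics residual + sparse data term in one loss.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
m, c, k = 1.0, 0.2, 4.0  # oscillator parameters (assumed known here)
t_phys = torch.linspace(0, 10, 200).reshape(-1, 1).requires_grad_(True)
t_obs = torch.tensor([[0.0]])   # sparse sensing: a single observation
x_obs = torch.tensor([[1.0]])

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x = net(t_phys)
    dx = torch.autograd.grad(x, t_phys, torch.ones_like(x), create_graph=True)[0]
    ddx = torch.autograd.grad(dx, t_phys, torch.ones_like(dx), create_graph=True)[0]
    residual = m * ddx + c * dx + k * x              # governing-equation residual
    loss = (residual ** 2).mean() + 100.0 * ((net(t_obs) - x_obs) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())  # combined physics + data loss after training
```

For the joint state-parameter estimation the abstract mentions, `c` and `k` would simply become `torch.nn.Parameter`s optimized alongside the network weights.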
arxiv-664403
2410.01341
Cognition Transferring and Decoupling for Text-supervised Egocentric Semantic Segmentation
<|reference_start|>Cognition Transferring and Decoupling for Text-supervised Egocentric Semantic Segmentation: In this paper, we explore a novel Text-supervised Egocentric Semantic Segmentation (TESS) task that aims to assign pixel-level categories to egocentric images weakly supervised by texts from image-level labels. In this task, which holds promising potential, the egocentric scenes contain dense wearer-object relations and inter-object interference. However, most recent third-view methods leverage the frozen Contrastive Language-Image Pre-training (CLIP) model, which is pre-trained on semantic-oriented third-view data and lapses in the egocentric view due to the "relation insensitive" problem. Hence, we propose a Cognition Transferring and Decoupling Network (CTDN) that first learns the egocentric wearer-object relations via correlating the image and text. Besides, a Cognition Transferring Module (CTM) is developed to distill cognitive knowledge from the large-scale pre-trained model to our model for recognizing egocentric objects with various semantics. Based on the transferred cognition, the Foreground-background Decoupling Module (FDM) disentangles the visual representations to explicitly discriminate the foreground and background regions, mitigating false activation areas caused by foreground-background interferential objects during egocentric relation learning. Extensive experiments on four TESS benchmarks demonstrate the effectiveness of our approach, which outperforms many recent related methods by a large margin. Code will be available at https://github.com/ZhaofengSHI/CTDN.<|reference_end|>
arxiv
@article{shi2024cognition, title={Cognition Transferring and Decoupling for Text-supervised Egocentric Semantic Segmentation}, author={Zhaofeng Shi, Heqian Qiu, Lanxiao Wang, Fanman Meng, Qingbo Wu and Hongliang Li}, journal={arXiv preprint arXiv:2410.01341}, year={2024}, archivePrefix={arXiv}, eprint={2410.01341}, primaryClass={cs.CV} }
shi2024cognition
arxiv-664404
2410.01345
Towards Generalizable Vision-Language Robotic Manipulation: A Benchmark and LLM-guided 3D Policy
<|reference_start|>Towards Generalizable Vision-Language Robotic Manipulation: A Benchmark and LLM-guided 3D Policy: Generalizing language-conditioned robotic policies to new tasks remains a significant challenge, hampered by the lack of suitable simulation benchmarks. In this paper, we address this gap by introducing GemBench, a novel benchmark to assess generalization capabilities of vision-language robotic manipulation policies. GemBench incorporates seven general action primitives and four levels of generalization, spanning novel placements, rigid and articulated objects, and complex long-horizon tasks. We evaluate state-of-the-art approaches on GemBench and also introduce a new method. Our approach 3D-LOTUS leverages rich 3D information for action prediction conditioned on language. While 3D-LOTUS excels in both efficiency and performance on seen tasks, it struggles with novel tasks. To address this, we present 3D-LOTUS++, a framework that integrates 3D-LOTUS's motion planning capabilities with the task planning capabilities of LLMs and the object grounding accuracy of VLMs. 3D-LOTUS++ achieves state-of-the-art performance on novel tasks of GemBench, setting a new standard for generalization in robotic manipulation. The benchmark, code, and trained models are available at \url{https://www.di.ens.fr/willow/research/gembench/}.<|reference_end|>
arxiv
@article{garcia2024towards, title={Towards Generalizable Vision-Language Robotic Manipulation: A Benchmark and LLM-guided 3D Policy}, author={Ricardo Garcia, Shizhe Chen, Cordelia Schmid}, journal={arXiv preprint arXiv:2410.01345}, year={2024}, archivePrefix={arXiv}, eprint={2410.01345}, primaryClass={cs.RO cs.CV} }
garcia2024towards
arxiv-664405
2410.01349
Life, uh, Finds a Way: Systematic Neural Search
<|reference_start|>Life, uh, Finds a Way: Systematic Neural Search: We tackle the challenge of rapidly adapting an agent's behavior to solve spatiotemporally continuous problems in novel settings. Animals exhibit extraordinary abilities to adapt to new contexts, a capacity unmatched by artificial systems. Instead of focusing on generalization through deep reinforcement learning, we propose viewing behavior as the physical manifestation of a search procedure, where robust problem-solving emerges from an exhaustive search across all possible behaviors. Surprisingly, this can be done efficiently using online modification of a cognitive graph that guides action, challenging the predominant view that exhaustive search in continuous spaces is impractical. We describe an algorithm that implicitly enumerates behaviors by regulating the tight feedback loop between execution of behaviors and mutation of the graph, and provide a neural implementation based on Hebbian learning and a novel high-dimensional harmonic representation inspired by entorhinal cortex. By framing behavior as search, we provide a mathematically simple and biologically plausible model for real-time behavioral adaptation, successfully solving a variety of continuous state-space navigation problems. This framework not only offers a flexible neural substrate for other applications but also presents a powerful paradigm for understanding adaptive behavior. Our results suggest potential advancements in developmental learning and unsupervised skill acquisition, paving the way for autonomous robots to master complex skills in data-sparse environments demanding flexibility.<|reference_end|>
arxiv
@article{baranski2024life, title={Life, uh, Finds a Way: Systematic Neural Search}, author={Alex Baranski, Jun Tani}, journal={arXiv preprint arXiv:2410.01349}, year={2024}, archivePrefix={arXiv}, eprint={2410.01349}, primaryClass={cs.AI cs.NE} }
baranski2024life
arxiv-664406
2410.01350
Takin-VC: Zero-shot Voice Conversion via Jointly Hybrid Content and Memory-Augmented Context-Aware Timbre Modeling
<|reference_start|>Takin-VC: Zero-shot Voice Conversion via Jointly Hybrid Content and Memory-Augmented Context-Aware Timbre Modeling: Zero-shot voice conversion (VC) aims to transform the source speaker timbre into an arbitrary unseen one without altering the original speech content. While recent advancements in zero-shot VC methods have shown remarkable progress, there remains considerable room to improve speaker similarity and speech naturalness. In this paper, we propose Takin-VC, a novel zero-shot VC framework based on jointly hybrid content and memory-augmented context-aware timbre modeling to tackle this challenge. Specifically, we first present an effective hybrid content encoder, guided by neural codec training, that leverages quantized features from pre-trained WavLM and HybridFormer to extract the linguistic content of the source speech. Subsequently, we introduce an advanced cross-attention-based context-aware timbre modeling approach that learns the fine-grained, semantically associated target timbre features. To further enhance both speaker similarity and real-time performance, we utilize a conditional flow matching model to reconstruct the Mel-spectrogram of the source speech. Additionally, we advocate an efficient memory-augmented module designed to generate high-quality conditional target inputs for the flow matching process, thereby improving the overall performance of the proposed system. Experimental results demonstrate that the proposed Takin-VC method surpasses state-of-the-art zero-shot VC systems, delivering superior performance in terms of both speech naturalness and speaker similarity.<|reference_end|>
arxiv
@article{yang2024takin-vc, title={Takin-VC: Zero-shot Voice Conversion via Jointly Hybrid Content and Memory-Augmented Context-Aware Timbre Modeling}, author={Yuguang Yang, Yu Pan, Jixun Yao, Xiang Zhang, Jianhao Ye, Hongbin Zhou, Lei Xie, Lei Ma, Jianjun Zhao}, journal={arXiv preprint arXiv:2410.01350}, year={2024}, archivePrefix={arXiv}, eprint={2410.01350}, primaryClass={cs.SD cs.AI eess.AS} }
yang2024takin-vc
arxiv-664407
2410.01351
Learning and teaching biological data science in the Bioconductor community
<|reference_start|>Learning and teaching biological data science in the Bioconductor community: Modern biological research is increasingly data-intensive, leading to a growing demand for effective training in biological data science. In this article, we provide an overview of key resources and best practices available within the Bioconductor project - an open-source software community focused on omics data analysis. This guide serves as a valuable reference for both learners and educators in the field.<|reference_end|>
arxiv
@article{drnevich2024learning, title={Learning and teaching biological data science in the Bioconductor community}, author={Jenny Drnevich, Frederick J. Tan, Fabricio Almeida-Silva, Robert Castelo, Aedin C. Culhane, Sean Davis, Maria A. Doyle, Susan Holmes, Leo Lahti, Alexandru Mahmoud, Kozo Nishida, Marcel Ramos, Kevin Rue-Albrecht, David J.H. Shih, Laurent Gatto and Charlotte Soneson}, journal={arXiv preprint arXiv:2410.01351}, year={2024}, archivePrefix={arXiv}, eprint={2410.01351}, primaryClass={cs.CY q-bio.OT stat.AP} }
drnevich2024learning
arxiv-664408
2410.01353
Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion?
<|reference_start|>Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion?: Code completion, a key downstream task in code generation, is one of the most frequent and impactful methods for enhancing developer productivity in software development. As intelligent completion tools evolve, we need a robust evaluation benchmark that enables meaningful comparisons between products and guides future advancements. However, existing benchmarks focus on coarse-grained tasks resembling general code generation rather than the real-world scenarios developers encounter, and they lack analysis grounded in industrial practice. Moreover, these benchmarks often rely on costly and time-consuming human annotation, and their standalone test cases fail to leverage minimal tests for maximum repository-level understanding and code coverage. To address these limitations, we first analyze business data from an industrial code completion tool and redefine the evaluation criteria to better align with the developer's intent and desired completion behavior throughout the coding process. Based on these insights, we introduce Codev-Agent, an agent-based system that automates repository crawling, constructs execution environments, extracts dynamic calling chains from existing unit tests, and generates new test samples to avoid data leakage, ensuring fair and effective comparisons. Using Codev-Agent, we present the Code-Development Benchmark (Codev-Bench), a fine-grained, real-world, repository-level, and developer-centric evaluation framework. Codev-Bench assesses whether a code completion tool can capture a developer's immediate intent and suggest appropriate code across diverse contexts, providing a more realistic benchmark for code completion in modern software development.<|reference_end|>
arxiv
@article{pan2024codev-bench, title={Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion?}, author={Zhenyu Pan, Rongyu Cao, Yongchang Cao, Yingwei Ma, Binhua Li, Fei Huang, Han Liu, Yongbin Li}, journal={arXiv preprint arXiv:2410.01353}, year={2024}, archivePrefix={arXiv}, eprint={2410.01353}, primaryClass={cs.SE cs.AI} }
pan2024codev-bench
arxiv-664409
2410.01356
Assisted Data Annotation for Business Process Information Extraction from Textual Documents
<|reference_start|>Assisted Data Annotation for Business Process Information Extraction from Textual Documents: Machine-learning based generation of process models from natural language text process descriptions provides a solution for the time-intensive and expensive process discovery phase. Many organizations have to carry out this phase, before they can utilize business process management and its benefits. Yet, research towards this is severely restrained by an apparent lack of large and high-quality datasets. This lack of data can be attributed to, among other things, an absence of proper tool assistance for dataset creation, resulting in high workloads and inferior data quality. We explore two assistance features to support dataset creation, a recommendation system for identifying process information in the text and visualization of the current state of already identified process information as a graphical business process model. A controlled user study with 31 participants shows that assisting dataset creators with recommendations lowers all aspects of workload, up to $-51.0\%$, and significantly improves annotation quality, up to $+38.9\%$. We make all data and code available to encourage further research on additional novel assistance strategies.<|reference_end|>
arxiv
@article{neuberger2024assisted, title={Assisted Data Annotation for Business Process Information Extraction from Textual Documents}, author={Julian Neuberger, Han van der Aa, Lars Ackermann, Daniel Buschek, Jannic Herrmann, Stefan Jablonski}, journal={arXiv preprint arXiv:2410.01356}, year={2024}, archivePrefix={arXiv}, eprint={2410.01356}, primaryClass={cs.CL} }
neuberger2024assisted
arxiv-664410
2410.01359
FlashMask: Efficient and Rich Mask Extension of FlashAttention
<|reference_start|>FlashMask: Efficient and Rich Mask Extension of FlashAttention: The computational and memory demands of vanilla attention scale quadratically with the sequence length $N$, posing significant challenges for processing long sequences in Transformer models. FlashAttention alleviates these challenges by eliminating the $O(N^2)$ memory dependency and reducing attention latency through IO-aware memory optimizations. However, its native support for certain attention mask types is limited, and it does not inherently accommodate more complex masking requirements. Previous approaches resort to using dense masks with $O(N^2)$ memory complexity, leading to inefficiencies. In this paper, we propose FlashMask, an extension of FlashAttention that introduces a column-wise sparse representation of attention masks. This approach efficiently represents a wide range of mask types and facilitates the development of optimized kernel implementations. By adopting this novel representation, FlashMask achieves linear memory complexity $O(N)$, suitable for modeling long-context sequences. Moreover, this representation enables kernel optimizations that eliminate unnecessary computations by leveraging sparsity in the attention mask, without sacrificing computational accuracy, resulting in higher computational efficiency. We evaluate FlashMask's performance in LLM fine-tuning and alignment training tasks such as SFT, LoRA, DPO, and RM. FlashMask achieves significant throughput improvements, with end-to-end speedups ranging from 1.65x to 3.22x compared to the existing FlashAttention dense-mask method. Additionally, our kernel-level comparisons demonstrate that FlashMask surpasses the latest counterpart, FlexAttention, by 12.1% to 60.7% in terms of kernel TFLOPs/s, achieving 37.8% to 62.3% of the theoretical maximum FLOPs/s on the A100 GPU. The code is open-sourced on PaddlePaddle and integrated into PaddleNLP, supporting models with over 100 billion parameters for contexts up to 128K tokens.<|reference_end|>
arxiv
@article{wang2024flashmask, title={FlashMask: Efficient and Rich Mask Extension of FlashAttention}, author={Guoxia Wang, Jinle Zeng, Xiyuan Xiao, Siming Wu, Jiabin Yang, Lujing Zheng, Zeyu Chen, Jiang Bian, Dianhai Yu, and Haifeng Wang}, journal={arXiv preprint arXiv:2410.01359}, year={2024}, archivePrefix={arXiv}, eprint={2410.01359}, primaryClass={cs.LG} }
wang2024flashmask
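A way to picture the column-wise sparse representation described above: if the masked rows in every key column form a contiguous span (causal, sliding-window, or document masks), the whole (N, N) mask compresses to two length-N index vectors. The encoding below is my reading of that idea, not FlashMask's actual kernel format; `dense_to_columnwise` is a hypothetical name.

```python
# Minimal sketch: compress a dense (N, N) mask into per-column row spans.
import numpy as np

def dense_to_columnwise(mask):
    """mask[i, j] == True means query i must NOT attend key j.

    Returns per-column [start, end) masked-row spans; valid whenever each
    column's masked rows are contiguous. Memory: O(N) instead of O(N^2).
    """
    N = mask.shape[0]
    starts = np.full(N, N, dtype=np.int64)  # sentinel N = nothing masked
    ends = np.zeros(N, dtype=np.int64)
    for j in range(N):
        rows = np.nonzero(mask[:, j])[0]
        if rows.size:
            assert rows[-1] - rows[0] + 1 == rows.size, "span not contiguous"
            starts[j], ends[j] = rows[0], rows[-1] + 1
    return starts, ends

# Causal mask: query i may only attend keys j <= i, so column j masks rows [0, j).
N = 6
causal = np.triu(np.ones((N, N), dtype=bool), k=1)
starts, ends = dense_to_columnwise(causal)
print(starts)  # [6 0 0 0 0 0]  (column 0 masks nothing)
print(ends)    # [0 1 2 3 4 5]
```

Presumably a kernel can then skip whole tiles whose row range falls entirely inside or outside a column's span, which matches the abstract's claim of eliminating unnecessary computation.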
arxiv-664411
2410.01360
High-quality Animatable Eyelid Shapes from Lightweight Captures
<|reference_start|>High-quality Animatable Eyelid Shapes from Lightweight Captures: High-quality eyelid reconstruction and animation are challenging due to the subtle details and complicated deformations involved. Previous works usually suffer from a trade-off between capture costs and the quality of details. In this paper, we propose a novel method that can achieve detailed eyelid reconstruction and animation using only an RGB video captured by a mobile phone. Our method utilizes both static and dynamic information of eyeballs (e.g., positions and rotations) to assist the eyelid reconstruction, cooperating with an automatic eyeball calibration method to get the required eyeball parameters. Furthermore, we develop a neural eyelid control module to achieve the semantic animation control of eyelids. To the best of our knowledge, we present the first method for high-quality eyelid reconstruction and animation from lightweight captures. Extensive experiments on both synthetic and real data show that our method can provide more detailed and realistic results compared with previous methods based on the same-level capture setups. The code is available at https://github.com/StoryMY/AniEyelid.<|reference_end|>
arxiv
@article{lyu2024high-quality, title={High-quality Animatable Eyelid Shapes from Lightweight Captures}, author={Junfeng Lyu, Feng Xu}, journal={arXiv preprint arXiv:2410.01360}, year={2024}, doi={10.1145/3680528.3687583}, archivePrefix={arXiv}, eprint={2410.01360}, primaryClass={cs.CV} }
lyu2024high-quality
arxiv-664412
2410.01363
PCQPR: Proactive Conversational Question Planning with Reflection
<|reference_start|>PCQPR: Proactive Conversational Question Planning with Reflection: Conversational Question Generation (CQG) enhances the interactivity of conversational question-answering systems in fields such as education, customer service, and entertainment. However, traditional CQG, focusing primarily on the immediate context, lacks the conversational foresight necessary to guide conversations toward specified conclusions. This limitation significantly restricts their ability to achieve conclusion-oriented conversational outcomes. In this work, we redefine the CQG task as Conclusion-driven Conversational Question Generation (CCQG) by focusing on proactivity, not merely reacting to the unfolding conversation but actively steering it towards a conclusion-oriented question-answer pair. To address this, we propose a novel approach, called Proactive Conversational Question Planning with self-Refining (PCQPR). Concretely, by integrating a planning algorithm inspired by Monte Carlo Tree Search (MCTS) with the analytical capabilities of large language models (LLMs), PCQPR predicts future conversation turns and continuously refines its questioning strategies. This iterative self-refining mechanism ensures the generation of contextually relevant questions strategically devised to reach a specified outcome. Our extensive evaluations demonstrate that PCQPR significantly surpasses existing CQG methods, marking a paradigm shift towards conclusion-oriented conversational question-answering systems.<|reference_end|>
arxiv
@article{guo2024pcqpr, title={PCQPR: Proactive Conversational Question Planning with Reflection}, author={Shasha Guo, Lizi Liao, Jing Zhang, Cuiping Li, Hong Chen}, journal={Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing}, year={2024}, archivePrefix={arXiv}, eprint={2410.01363}, primaryClass={cs.CL cs.AI} }
guo2024pcqpr
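Since the abstract leans on an MCTS-inspired planner, the following is the bare UCT selection step such planners typically use to balance exploring new question candidates against exploiting promising ones; the dictionary layout and values are invented for illustration, and PCQPR's LLM-based simulation and self-refining loop are not reproduced.

```python
# Minimal sketch: UCB1/UCT selection over candidate next questions.
import math

def uct_select(children, c=1.4):
    """Pick the child question candidate maximizing the UCB1 score."""
    total = sum(ch["visits"] for ch in children)
    return max(
        children,
        key=lambda ch: ch["value"] / ch["visits"]
        + c * math.sqrt(math.log(total) / ch["visits"]),
    )

children = [
    {"question": "q1", "visits": 10, "value": 6.0},
    {"question": "q2", "visits": 3, "value": 2.5},
]
print(uct_select(children)["question"])  # q2: larger exploration bonus wins
```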
arxiv-664413
2410.01364
MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics
<|reference_start|>MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics: The issue of traffic congestion poses a significant obstacle to the development of global cities. One promising solution to tackle this problem is intelligent traffic signal control (TSC). Recently, TSC strategies leveraging reinforcement learning (RL) have garnered attention among researchers. However, the evaluation of these models has primarily relied on fixed metrics like reward and queue length. This limited evaluation approach provides only a narrow view of the model's decision-making process, impeding its practical implementation. Moreover, effective TSC necessitates coordinated actions across multiple intersections. Existing visual analysis solutions fall short when applied in multi-agent settings. In this study, we delve into the challenge of interpretability in multi-agent reinforcement learning (MARL), particularly within the context of TSC. We propose MARLens, a visual analytics system tailored to understand MARL-based TSC. Our system serves as a versatile platform for both RL and TSC researchers. It empowers them to explore the model's features from various perspectives, revealing its decision-making processes and shedding light on interactions among different agents. To facilitate quick identification of critical states, we have devised multiple visualization views, complemented by a traffic simulation module that allows users to replay specific training scenarios. To validate the utility of our proposed system, we present three comprehensive case studies, incorporate insights from domain experts through interviews, and conduct a user study. These collective efforts underscore the feasibility and effectiveness of MARLens in enhancing our understanding of MARL-based TSC systems and pave the way for more informed and efficient traffic management strategies.<|reference_end|>
arxiv
@article{zhang2024marlens, title={MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics}, author={Yutian Zhang, Guohong Zheng, Zhiyuan Liu, Quan Li and Haipeng Zeng}, journal={arXiv preprint arXiv:2410.01364}, year={2024}, doi={10.1109/TVCG.2024.3392587}, archivePrefix={arXiv}, eprint={2410.01364}, primaryClass={cs.HC} }
zhang2024marlens
arxiv-664414
2410.01365
Anti-biofouling Lensless Camera System with Deep Learning based Image Reconstruction
<|reference_start|>Anti-biofouling Lensless Camera System with Deep Learning based Image Reconstruction: In recent years, there has been an increasing demand for underwater cameras that monitor the condition of offshore structures and check the number of individuals in aquaculture environments through long-period observation. One significant issue with this observation is that biofouling sticks densely to the aperture and lens and prevents cameras from capturing clear images. This study examines an underwater lens-less camera that combines material technologies with high inherent resistance to biofouling and computer vision technologies based on deep-learning image reconstruction. For this purpose, our prototype camera uses a coded aperture with 1k rectangular pinholes in a thin metal plate, such as copper, which hinders the growth of biofouling and keeps the surface clean. Although images taken by lens-less cameras are usually not well formed due to the lack of a traditional glass-based lens, a deep learning approach using ViT (Vision Transformer) has recently been shown to reconstruct the original photo images well, and our study shows that using a gated MLP (Multilayer Perceptron) also yields good results. On the other hand, bio-repellent materials require a certain thickness to exhibit their effect, while the aperture must remain sufficiently thinner than the pinhole size to avoid unintentional reflection and absorption on the sidewalls. Therefore, we prepared a sufficiently thin plate for image reconstruction, and we are currently testing the lens-less camera with the bio-repellent aperture in actual seawater environments to determine whether it resists biofouling better than a conventional camera that is merely waterproof.<|reference_end|>
arxiv
@article{ide2024anti-biofouling, title={Anti-biofouling Lensless Camera System with Deep Learning based Image Reconstruction}, author={Naoki Ide, Tomohiro Kawahara, Hiroshi Ueno, Daiki Yanagidaira, and Susumu Takatsuka}, journal={arXiv preprint arXiv:2410.01365}, year={2024}, archivePrefix={arXiv}, eprint={2410.01365}, primaryClass={eess.IV cs.CV} }
ide2024anti-biofouling
arxiv-664415
2410.01366
Harnessing the Latent Diffusion Model for Training-Free Image Style Transfer
<|reference_start|>Harnessing the Latent Diffusion Model for Training-Free Image Style Transfer: Diffusion models have recently shown the ability to generate high-quality images. However, controlling their generation process still poses challenges. Image style transfer, which transfers the visual attributes of a style image to another content image, is one such challenge. A typical obstacle of this task is the requirement of additional training of a pre-trained model. We propose a training-free style transfer algorithm, Style Tracking Reverse Diffusion Process (STRDP), for a pretrained Latent Diffusion Model (LDM). Our algorithm employs the Adaptive Instance Normalization (AdaIN) function in a distinct manner during the reverse diffusion process of an LDM while tracking the encoding history of the style image. This algorithm enables style transfer in the latent space of the LDM for reduced computational cost and provides compatibility with various LDM models. Through a series of experiments and a user study, we show that our method can quickly transfer the style of an image without additional training. The speed, compatibility, and training-free aspects of our algorithm facilitate agile experiments with combinations of styles and LDMs for extensive application.<|reference_end|>
arxiv
@article{masui2024harnessing, title={Harnessing the Latent Diffusion Model for Training-Free Image Style Transfer}, author={Kento Masui, Mayu Otani, Masahiro Nomura and Hideki Nakayama}, journal={arXiv preprint arXiv:2410.01366}, year={2024}, archivePrefix={arXiv}, eprint={2410.01366}, primaryClass={cs.CV cs.MM} }
masui2024harnessing
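AdaIN, the ingredient the abstract applies inside the reverse diffusion process, has a compact standard definition: renormalize content features to the channel-wise statistics of the style features. Below is that function alone, applied to latent-shaped tensors; the surrounding STRDP loop, the LDM itself, and the style-encoding tracking are not shown.

```python
# Minimal sketch: Adaptive Instance Normalization (AdaIN) on LDM-like latents.
import torch

def adain(content, style, eps=1e-5):
    """Align channel-wise mean/std of `content` to those of `style`.

    Shapes are (batch, channels, height, width), as in LDM latent space.
    """
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

content = torch.randn(1, 4, 64, 64)  # e.g., a Stable-Diffusion-sized latent
style = torch.randn(1, 4, 64, 64)
print(adain(content, style).shape)   # torch.Size([1, 4, 64, 64])
```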
arxiv-664416
2410.01367
Towards Dynamic Graph Neural Networks with Provably High-Order Expressive Power
<|reference_start|>Towards Dynamic Graph Neural Networks with Provably High-Order Expressive Power: Dynamic Graph Neural Networks (DyGNNs) have garnered increasing research attention for learning representations on evolving graphs. Despite their effectiveness, the limited expressive power of existing DyGNNs hinders them from capturing important evolving patterns of dynamic graphs. Although some works attempt to enhance expressive capability with heuristic features, there remains a lack of DyGNN frameworks with provable and quantifiable high-order expressive power. To address this research gap, we first propose the k-dimensional Dynamic WL tests (k-DWL) as reference algorithms to quantify the expressive power of DyGNNs. We demonstrate that the expressive power of existing DyGNNs is upper bounded by the 1-DWL test. To enhance the expressive power, we propose the Dynamic Graph Neural Network with High-order expressive power (HopeDGN), which updates the representation of a central node pair by aggregating the interaction history with neighboring node pairs. Our theoretical results demonstrate that HopeDGN can achieve expressive power equivalent to the 2-DWL test. We then present a Transformer-based implementation for the local variant of HopeDGN. Experimental results show that HopeDGN achieves performance improvements of up to 3.12%, demonstrating its effectiveness.<|reference_end|>
arxiv
@article{wang2024towards, title={Towards Dynamic Graph Neural Networks with Provably High-Order Expressive Power}, author={Zhe Wang, Tianjian Zhao, Zhen Zhang, Jiawei Chen, Sheng Zhou, Yan Feng, Chun Chen, Can Wang}, journal={arXiv preprint arXiv:2410.01367}, year={2024}, archivePrefix={arXiv}, eprint={2410.01367}, primaryClass={cs.LG} }
wang2024towards
arxiv-664417
2410.01368
Theoretical Lower Bounds for the Oven Scheduling Problem
<|reference_start|>Theoretical Lower Bounds for the Oven Scheduling Problem: The Oven Scheduling Problem (OSP) is an NP-hard real-world parallel batch scheduling problem arising in the semiconductor industry. The objective of the problem is to schedule a set of jobs on ovens while minimizing several factors, namely total oven runtime, job tardiness, and setup costs. At the same time, it must adhere to various constraints such as oven eligibility and availability, job release dates, setup times between batches, and oven capacity limitations. The key to obtaining efficient schedules is to process compatible jobs simultaneously in batches. In this paper, we develop theoretical, problem-specific lower bounds for the OSP that can be computed very quickly. We thoroughly examine these lower bounds, evaluating their quality and exploring their integration into existing solution methods. Specifically, we investigate their contribution to exact methods and a metaheuristic local search approach using simulated annealing. Moreover, these problem-specific lower bounds enable us to assess the solution quality for large instances for which exact methods often fail to provide tight lower bounds.<|reference_end|>
arxiv
@article{daros2024theoretical, title={Theoretical Lower Bounds for the Oven Scheduling Problem}, author={Francesca Da Ros and Marie-Louise Lackner and Nysret Musliu}, journal={Proceedings of the 14th International Conference on the Practice and Theory of Automated Timetabling, 2024}, year={2024}, archivePrefix={arXiv}, eprint={2410.01368}, primaryClass={cs.AI cs.DC cs.DS} }
daros2024theoretical
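For intuition on what a fast, problem-specific lower bound can look like, here is a toy capacity-based bound for a single oven: any feasible schedule needs at least ceil(total job size / capacity) batches, each running at least as long as the shortest job. This is my own simplified illustration; the paper's bounds additionally account for oven eligibility, setup costs, tardiness, and availability.

```python
# Minimal sketch: a capacity/area lower bound on total oven runtime.
import math

def runtime_lower_bound(jobs, capacity):
    """Lower-bound total runtime for one oven of batch size `capacity`.

    `jobs` is a list of (size, processing_time) pairs. Every schedule needs
    at least ceil(total size / capacity) batches, and each batch runs at
    least as long as the shortest job it could contain.
    """
    total_size = sum(size for size, _ in jobs)
    min_batches = math.ceil(total_size / capacity)
    min_proc = min(p for _, p in jobs)
    return min_batches * min_proc

jobs = [(2, 30), (3, 45), (1, 30), (4, 60)]   # (size, minutes)
print(runtime_lower_bound(jobs, capacity=5))  # 2 batches * 30 min = 60
```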
arxiv-664418
2410.01371
A method to estimate well flowing gas-oil ratio and composition using pressure and temperature measurements across a production choke, a seed composition of oil and gas, and a thermodynamic simulator
<|reference_start|>A method to estimate well flowing gas-oil ratio and composition using pressure and temperature measurements across a production choke, a seed composition of oil and gas, and a thermodynamic simulator: In this work we propose and demonstrate a method to estimate the flowing gas-oil ratio and composition of a hydrocarbon well stream using measurements of pressure and temperature across a production choke. The method consists of using a numerical solver on a thermodynamic simulator to recombine a seed oil and gas until the simulated temperature drop across the choke is equal to the measured value. This method is meant for cases where it is not possible to measure individual well composition periodically. A study case and reference solution were generated using the reservoir model presented in the SPE (Society of Petroleum Engineers) comparative case Nr. 5 linked with a process simulator. Time profiles of well producing gas-oil ratio, wellstream compositions, compositions of surface-conditions oil and gas, and temperature drop across the choke were generated with the models. The proposed method was then employed to estimate the flowing gas-oil ratio of the reference solution. Results show that the proposed method predicts the well gas-oil ratio and compositions during the life of the field with reasonable accuracy (12% maximum percent error) when using compositions of surface oil and gas from the initial time. When using compositions of surface oil and gas from later times, the prediction accuracy of the gas-oil ratio improves at those times but worsens for times before and after. A measurement accuracy for the temperature drop across the choke of at least 0.01 °C is required to achieve convergence of the method. The mean percent error between the predicted and real mole fractions has an upper bound in time of 21% when using initial surface oil and gas as seed compositions.<|reference_end|>
arxiv
@article{moon2024a, title={A method to estimate well flowing gas-oil ratio and composition using pressure and temperature measurements across a production choke, a seed composition of oil and gas, and a thermodynamic simulator}, author={Seok Ki Moon and Milan Stanko}, journal={arXiv preprint arXiv:2410.01371}, year={2024}, archivePrefix={arXiv}, eprint={2410.01371}, primaryClass={cs.CE physics.app-ph} }
moon2024a
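The estimation loop the abstract describes is, at its core, a one-dimensional root-finding problem: adjust the gas-oil ratio until the simulator's predicted temperature drop across the choke matches the measured one. In the sketch below the thermodynamic simulator is replaced by a hypothetical monotone stand-in (`simulated_dT`), so only the solver structure is meaningful; all numbers are invented.

```python
# Minimal sketch: solve for the GOR matching a measured choke temperature drop.
from scipy.optimize import brentq

def simulated_dT(gor, p_up, p_down):
    """Placeholder for the thermodynamic simulator: temperature drop across
    the choke when the seed oil and gas are recombined at gas-oil ratio
    `gor` (Sm3/Sm3). The monotone toy relation exists only to be runnable."""
    return 0.02 * (p_up - p_down) * (1.0 - 1.0 / (1.0 + 0.01 * gor))

measured_dT = 1.0            # deg C, from the temperature gauges
p_up, p_down = 180.0, 120.0  # bara, upstream/downstream of the choke

# Numerical solver: find the GOR whose simulated temperature drop matches
# the measured one, as in the proposed recombination procedure.
gor = brentq(lambda g: simulated_dT(g, p_up, p_down) - measured_dT, 1.0, 5000.0)
print(f"estimated GOR = {gor:.1f} Sm3/Sm3")  # 500.0 for this toy relation
```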
arxiv-664419
2410.01374
Newton Meets Marchenko-Pastur: Massively Parallel Second-Order Optimization with Hessian Sketching and Debiasing
<|reference_start|>Newton Meets Marchenko-Pastur: Massively Parallel Second-Order Optimization with Hessian Sketching and Debiasing: Motivated by recent advances in serverless cloud computing, in particular the "function as a service" (FaaS) model, we consider the problem of minimizing a convex function in a massively parallel fashion, where communication between workers is limited. Focusing on the case of a twice-differentiable objective subject to an L2 penalty, we propose a scheme where the central node (server) effectively runs a Newton method, offloading its high per-iteration cost -- stemming from the need to invert the Hessian -- to the workers. In our solution, workers produce independently coarse but low-bias estimates of the inverse Hessian, using an adaptive sketching scheme. The server then averages the descent directions produced by the workers, yielding a good approximation of the exact Newton step. The main component of our adaptive sketching scheme is a low-complexity procedure for selecting the sketching dimension, an issue that was left largely unaddressed in the existing literature on Hessian sketching for distributed optimization. Our solution is based on ideas from asymptotic random matrix theory, specifically the Marchenko-Pastur law. For Gaussian sketching matrices, we derive non-asymptotic guarantees for our algorithm which are essentially dimension-free. Lastly, when the objective is self-concordant, we provide convergence guarantees for the approximate Newton's method with noisy Hessians, which may be of independent interest beyond the setting considered in this paper.<|reference_end|>
arxiv
@article{romanov2024newton, title={Newton Meets Marchenko-Pastur: Massively Parallel Second-Order Optimization with Hessian Sketching and Debiasing}, author={Elad Romanov, Fangzhao Zhang, Mert Pilanci}, journal={arXiv preprint arXiv:2410.01374}, year={2024}, archivePrefix={arXiv}, eprint={2410.01374}, primaryClass={math.OC cs.IT eess.SP math.IT stat.ML} }
romanov2024newton
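The serverless Newton scheme above can be caricatured in a few lines: each worker sketches the data matrix, inverts a small regularized sketched Hessian, and returns a coarse Newton direction; the server averages. The fixed sketch dimension and plain averaging below are deliberate simplifications; the paper's adaptive, Marchenko-Pastur-based sketch-size selection and debiasing are not reproduced.

```python
# Minimal sketch: averaged sketched-Newton directions for the L2-regularized
# least-squares objective f(x) = 0.5*||Ax - b||^2 + 0.5*lam*||x||^2.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 2000, 50, 1.0
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
x = rng.normal(size=d)
g = A.T @ (A @ x - b) + lam * x          # exact gradient at x

def worker_direction(m):
    """One worker: Gaussian-sketch A, form a coarse Hessian estimate, solve."""
    S = rng.normal(size=(m, n)) / np.sqrt(m)
    SA = S @ A
    H_hat = SA.T @ SA + lam * np.eye(d)  # sketched (hence noisy) Hessian
    return -np.linalg.solve(H_hat, g)

directions = [worker_direction(m=400) for _ in range(20)]    # 20 workers
step = np.mean(directions, axis=0)                           # server average
exact = -np.linalg.solve(A.T @ A + lam * np.eye(d), g)       # true Newton step
print(np.linalg.norm(step - exact) / np.linalg.norm(exact))  # modest rel. error
```

Note that averaging inverses of sketched Hessians is biased, which is precisely why the paper pairs sketching with a debiasing step.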
arxiv-664420
2410.01376
Learning Physics From Video: Unsupervised Physical Parameter Estimation for Continuous Dynamical Systems
<|reference_start|>Learning Physics From Video: Unsupervised Physical Parameter Estimation for Continuous Dynamical Systems: Extracting physical dynamical system parameters from videos is of great interest to applications in natural science and technology. The state-of-the-art in automatic parameter estimation from video is addressed by training supervised deep networks on large datasets. Such datasets require labels, which are difficult to acquire. While some unsupervised techniques -- which depend on frame prediction -- exist, they suffer from long training times, instability under different initializations, and are limited to hand-picked motion problems. In this work, we propose a method to estimate the physical parameters of any known, continuous governing equation from single videos; our solution is suitable for different dynamical systems beyond motion and is robust to initialization compared to previous approaches. Moreover, we remove the need for frame prediction by implementing a KL-divergence-based loss function in the latent space, which avoids convergence to trivial solutions and reduces model size and compute.<|reference_end|>
arxiv
@article{garcia2024learning, title={Learning Physics From Video: Unsupervised Physical Parameter Estimation for Continuous Dynamical Systems}, author={Alejandro Castañeda Garcia, Jan van Gemert, Daan Brinks and Nergis Tömen}, journal={arXiv preprint arXiv:2410.01376}, year={2024}, archivePrefix={arXiv}, eprint={2410.01376}, primaryClass={cs.CV physics.comp-ph} }
garcia2024learning
arxiv-664421
2410.01380
Knowledge Entropy Decay during Language Model Pretraining Hinders New Knowledge Acquisition
<|reference_start|>Knowledge Entropy Decay during Language Model Pretraining Hinders New Knowledge Acquisition: In this work, we investigate how a model's tendency to broadly integrate its parametric knowledge evolves throughout pretraining, and how this behavior affects overall performance, particularly in terms of knowledge acquisition and forgetting. We introduce the concept of knowledge entropy, which quantifies the range of memory sources the model engages with; high knowledge entropy indicates that the model utilizes a wide range of memory sources, while low knowledge entropy suggests reliance on specific sources with greater certainty. Our analysis reveals a consistent decline in knowledge entropy as pretraining advances. We also find that the decline is closely associated with a reduction in the model's ability to acquire and retain knowledge, leading us to conclude that diminishing knowledge entropy (smaller number of active memory sources) impairs the model's knowledge acquisition and retention capabilities. We find further support for this by demonstrating that increasing the activity of inactive memory sources enhances the model's capacity for knowledge acquisition and retention.<|reference_end|>
arxiv
@article{kim2024knowledge, title={Knowledge Entropy Decay during Language Model Pretraining Hinders New Knowledge Acquisition}, author={Jiyeon Kim, Hyunji Lee, Hyowon Cho, Joel Jang, Hyeonbin Hwang, Seungpil Won, Youbin Ahn, Dohaeng Lee, Minjoon Seo}, journal={arXiv preprint arXiv:2410.01380}, year={2024}, archivePrefix={arXiv}, eprint={2410.01380}, primaryClass={cs.CL cs.AI} }
kim2024knowledge
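The abstract's central quantity lends itself to a one-function illustration: treat the model's engagement with memory sources as a probability distribution and take its Shannon entropy. The per-source weights below are synthetic, and the paper's concrete way of measuring engagement may differ in detail.

```python
# Minimal sketch: Shannon entropy of a model's engagement over memory sources.
import numpy as np

def knowledge_entropy(weights):
    """Entropy of the (normalized) distribution over engaged memory sources."""
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    p = p[p > 0]  # 0 * log(0) contributes nothing
    return float(-(p * np.log(p)).sum())

early = [0.2, 0.2, 0.2, 0.2, 0.2]      # broad engagement early in pretraining
late = [0.85, 0.05, 0.05, 0.03, 0.02]  # concentrated engagement later
print(knowledge_entropy(early), knowledge_entropy(late))  # ~1.61 vs ~0.62
```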
arxiv-664422
2410.01383
PairDistill: Pairwise Relevance Distillation for Dense Retrieval
<|reference_start|>PairDistill: Pairwise Relevance Distillation for Dense Retrieval: Effective information retrieval (IR) from vast datasets relies on advanced techniques to extract relevant information in response to queries. Recent advancements in dense retrieval have showcased remarkable efficacy compared to traditional sparse retrieval methods. To further enhance retrieval performance, knowledge distillation techniques, often leveraging robust cross-encoder rerankers, have been extensively explored. However, existing approaches primarily distill knowledge from pointwise rerankers, which assign absolute relevance scores to documents, thus facing challenges related to inconsistent comparisons. This paper introduces Pairwise Relevance Distillation (PairDistill) to leverage pairwise reranking, offering fine-grained distinctions between similarly relevant documents to enrich the training of dense retrieval models. Our experiments demonstrate that PairDistill outperforms existing methods, achieving new state-of-the-art results across multiple benchmarks. This highlights the potential of PairDistill in advancing dense retrieval techniques effectively. Our source code and trained models are released at https://github.com/MiuLab/PairDistill<|reference_end|>
arxiv
@article{huang2024pairdistill, title={PairDistill: Pairwise Relevance Distillation for Dense Retrieval}, author={Chao-Wei Huang and Yun-Nung Chen}, journal={arXiv preprint arXiv:2410.01383}, year={2024}, archivePrefix={arXiv}, eprint={2410.01383}, primaryClass={cs.IR cs.CL} }
huang2024pairdistill
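One plausible reading of pairwise relevance distillation, sketched under assumptions: the pairwise reranker emits a preference probability for document i over document j, and the student retriever is trained so that its score gap, passed through a sigmoid, matches that preference. The loss below is that idea in PyTorch; the authors' exact objective may differ.

```python
# Minimal sketch: distill a pairwise reranker's preferences into a retriever.
import torch

def pairwise_distill_loss(student_scores, teacher_pref):
    """student_scores: (B, 2) retriever scores for document pairs (i, j).
    teacher_pref: (B,) reranker probability that doc i beats doc j."""
    gap = student_scores[:, 0] - student_scores[:, 1]
    return torch.nn.functional.binary_cross_entropy_with_logits(gap, teacher_pref)

student_scores = torch.tensor([[5.0, 4.2], [3.1, 3.9]])
teacher_pref = torch.tensor([0.9, 0.2])  # strong / weak preference for doc i
print(pairwise_distill_loss(student_scores, teacher_pref))
```

The appeal over pointwise distillation is visible even here: only score gaps matter, so the student is never forced to reproduce the teacher's absolute (and mutually inconsistent) relevance scores.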
arxiv-664423
2410.01384
CSLens: Towards Better Deploying Charging Stations via Visual Analytics -- A Coupled Networks Perspective
<|reference_start|>CSLens: Towards Better Deploying Charging Stations via Visual Analytics -- A Coupled Networks Perspective: In recent years, the global adoption of electric vehicles (EVs) has surged, prompting a corresponding rise in the installation of charging stations. This proliferation has underscored the importance of expediting the deployment of charging infrastructure. Both academia and industry have thus devoted effort to addressing the charging station location problem (CSLP) to streamline this process. However, prevailing algorithms addressing the CSLP are hampered by restrictive assumptions and computational overhead, leading to a dearth of comprehensive evaluations in the spatiotemporal dimensions. Consequently, their practical viability is restricted. Moreover, the placement of charging stations exerts a significant impact on both the road network and the power grid, which necessitates a holistic evaluation of the potential post-deployment impacts on these interconnected networks. In this study, we propose CSLens, a visual analytics system designed to inform charging station deployment decisions through the lens of coupled transportation and power networks. CSLens offers multiple visualizations and interactive features, empowering users to delve into the existing charging station layout, explore alternative deployment solutions, and assess the ensuing impact. To validate the efficacy of CSLens, we conducted two case studies and engaged in interviews with domain experts. Through these efforts, we substantiated the usability and practical utility of CSLens in enhancing the decision-making process surrounding charging station deployment. Our findings underscore CSLens's potential to serve as a valuable asset in navigating the complexities of charging infrastructure planning.<|reference_end|>
arxiv
@article{zhang2024cslens, title={CSLens: Towards Better Deploying Charging Stations via Visual Analytics -- A Coupled Networks Perspective}, author={Yutian Zhang, Liwen Xu, Shaocong Tao, Quanxue Guan, Quan Li and Haipeng Zeng}, journal={arXiv preprint arXiv:2410.01384}, year={2024}, doi={10.1109/TVCG.2024.3456392}, archivePrefix={arXiv}, eprint={2410.01384}, primaryClass={cs.HC} }
zhang2024cslens
arxiv-664424
2410.01386
FLAME: Adaptive and Reactive Concept Drift Mitigation for Federated Learning Deployments
<|reference_start|>FLAME: Adaptive and Reactive Concept Drift Mitigation for Federated Learning Deployments: This paper presents Federated Learning with Adaptive Monitoring and Elimination (FLAME), a novel solution capable of detecting and mitigating concept drift in Federated Learning (FL) Internet of Things (IoT) environments. Concept drift poses significant challenges for FL models deployed in dynamic and real-world settings. FLAME leverages an FL architecture, considers a real-world FL pipeline, and proves capable of maintaining model performance and accuracy while addressing bandwidth and privacy constraints. Introducing various features and extensions on previous works, FLAME offers a robust solution to concept drift, significantly reducing computational load and communication overhead. Compared to well-known lightweight mitigation methods, FLAME demonstrates superior performance in maintaining high F1 scores and reducing resource utilisation in large-scale IoT deployments, making it a promising approach for real-world applications.<|reference_end|>
arxiv
@article{mavromatis2024flame, title={FLAME: Adaptive and Reactive Concept Drift Mitigation for Federated Learning Deployments}, author={Ioannis Mavromatis and Stefano De Feo and Aftab Khan}, journal={arXiv preprint arXiv:2410.01386}, year={2024}, archivePrefix={arXiv}, eprint={2410.01386}, primaryClass={cs.LG cs.AI} }
mavromatis2024flame
arxiv-664425
2410.01391
Quantifying Cancer Likeness: A Statistical Approach for Pathological Image Diagnosis
<|reference_start|>Quantifying Cancer Likeness: A Statistical Approach for Pathological Image Diagnosis: In this paper, we present a new statistical approach to automatically identify cancer regions in pathological images. The proposed method is built from statistical theory in line with evidence-based medicine. Its two core technologies are the classification information of image features, introduced on the basis of information theory, in which cancer features take positive values and normal features take negative values, and a calculation technique for determining the spatial distribution of this information. The method then estimates areas where the classification information content shows a positive value as cancer areas in the pathological image. The method achieves AUCs of 0.95 or higher in cancer classification tasks. In addition, the proposed method has the practical advantage of not requiring a precise demarcation line between cancer and normal tissue. This frees pathologists from the monotonous and tedious work of building consensus with other pathologists.<|reference_end|>
arxiv
@article{kindo2024quantifying, title={Quantifying Cancer Likeness: A Statistical Approach for Pathological Image Diagnosis}, author={Toshiki Kindo}, journal={arXiv preprint arXiv:2410.01391}, year={2024}, archivePrefix={arXiv}, eprint={2410.01391}, primaryClass={cs.CV} }
kindo2024quantifying
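Reading "classification information" as a pointwise log-likelihood ratio between cancer and normal feature models (an assumption on my part, consistent with cancer features being positive and normal features negative), the decision rule can be illustrated in a few lines; the paper's actual feature extraction from pathology images is not reproduced.

```python
# Minimal sketch: sign of a log-likelihood ratio flags cancer-like regions.
import numpy as np

rng = np.random.default_rng(0)

def classification_information(feature, mu_c, sd_c, mu_n, sd_n):
    """log p(f | cancer) - log p(f | normal) under Gaussian class models."""
    def log_gauss(f, mu, sd):
        return -0.5 * ((f - mu) / sd) ** 2 - np.log(sd)
    return log_gauss(feature, mu_c, sd_c) - log_gauss(feature, mu_n, sd_n)

# Toy 2-region "image": left half normal-like, right half cancer-like features.
features = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])
info = classification_information(features, mu_c=2.0, sd_c=1.0, mu_n=0.0, sd_n=1.0)
cancer_mask = info > 0  # positive information content -> estimated cancer area
print(cancer_mask[:100].mean(), cancer_mask[100:].mean())  # low vs high fraction
```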
arxiv-664426
2410.01392
Causal Inference Tools for a Better Evaluation of Machine Learning
<|reference_start|>Causal Inference Tools for a Better Evaluation of Machine Learning: We present a comprehensive framework for applying rigorous statistical techniques from econometrics to analyze and improve machine learning systems. We introduce key statistical methods such as Ordinary Least Squares (OLS) regression, Analysis of Variance (ANOVA), and logistic regression, explaining their theoretical foundations and practical applications in machine learning evaluation. The document serves as a guide for researchers and practitioners, detailing how these techniques can provide deeper insights into model behavior, performance, and fairness. We cover the mathematical principles behind each method, discuss their assumptions and limitations, and provide step-by-step instructions for their implementation. The paper also addresses how to interpret results, emphasizing the importance of statistical significance and effect size. Through illustrative examples, we demonstrate how these tools can reveal subtle patterns and interactions in machine learning models that are not apparent from traditional evaluation metrics. By connecting the fields of econometrics and machine learning, this work aims to equip readers with powerful analytical tools for more rigorous and comprehensive evaluation of AI systems. The framework presented here contributes to developing more robust, interpretable, and fair machine learning technologies.<|reference_end|>
arxiv
@article{soumm2024causal, title={Causal Inference Tools for a Better Evaluation of Machine Learning}, author={Michaël Soumm}, journal={arXiv preprint arXiv:2410.01392}, year={2024}, archivePrefix={arXiv}, eprint={2410.01392}, primaryClass={cs.LG} }
soumm2024causal
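As a taste of the econometric toolkit the abstract describes, the snippet below regresses a model's per-example error on two covariates with statsmodels OLS; the data are synthetic, and the variable names (`group`, `difficulty`) are invented for illustration.

```python
# Minimal sketch: OLS regression of model error on example attributes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 2, n)     # e.g., a sensitive attribute (fairness check)
difficulty = rng.normal(size=n)   # e.g., an example-difficulty proxy
error = 0.5 * difficulty + 0.3 * group + rng.normal(scale=0.5, size=n)

X = sm.add_constant(np.column_stack([difficulty, group]))
ols = sm.OLS(error, X).fit()
print(ols.summary())  # coefficient on `group` near 0.3 with a small p-value
```

A significant coefficient on `group` here would be exactly the kind of pattern, invisible to a single aggregate metric, that the paper argues these tools surface.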
arxiv-664427
2410.01393
Signal Adversarial Examples Generation for Signal Detection Network via White-Box Attack
<|reference_start|>Signal Adversarial Examples Generation for Signal Detection Network via White-Box Attack: With the development and application of deep learning in signal detection tasks, the vulnerability of neural networks to adversarial attacks has also become a security threat to signal detection networks. This paper defines a signal adversarial example generation model for signal detection networks from the perspective of adding perturbations to the signal. The model uses the L2-norm inequality relationship between the time domain and the time-frequency domain to constrain the energy of signal perturbations. Building upon this model, we propose a method for generating signal adversarial examples utilizing gradient-based attacks and the Short-Time Fourier Transform. Experimental results show that, under the constraint of a signal perturbation energy ratio of less than 3%, our adversarial attack results in a 28.1% reduction in the mean Average Precision (mAP), a 24.7% reduction in recall, and a 30.4% reduction in precision of the signal detection network. Compared to random noise perturbation of equivalent intensity, our adversarial attack demonstrates a significant attack effect.<|reference_end|>
arxiv
@article{li2024signal, title={Signal Adversarial Examples Generation for Signal Detection Network via White-Box Attack}, author={Dongyang Li, Linyuan Wang, Guangwei Xiong, Bin Yan, Dekui Ma, Jinxian Peng}, journal={arXiv preprint arXiv:2410.01393}, year={2024}, archivePrefix={arXiv}, eprint={2410.01393}, primaryClass={cs.CV cs.CR} }
li2024signal
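The energy constraint in the abstract (perturbation energy at most 3% of the signal's) can be grafted onto a standard gradient-sign attack by rescaling the perturbation, as sketched below; the convolutional detector is a random stand-in, and the paper's STFT-based construction is not reproduced.

```python
# Minimal sketch: gradient-sign perturbation under a 3% energy-ratio budget.
import torch

torch.manual_seed(0)
signal = torch.randn(1, 1, 1024)  # a 1-D signal segment
net = torch.nn.Sequential(        # stand-in for a signal detection network
    torch.nn.Conv1d(1, 8, 7, padding=3),
    torch.nn.ReLU(), torch.nn.Flatten(),
    torch.nn.Linear(8 * 1024, 2),
)
label = torch.tensor([1])

x = signal.clone().requires_grad_(True)
loss = torch.nn.functional.cross_entropy(net(x), label)
loss.backward()
delta = x.grad.sign()                                 # gradient-based direction
budget = 0.03 * signal.pow(2).sum()                   # 3% of signal energy
delta = delta * torch.sqrt(budget / delta.pow(2).sum())
adversarial = signal + delta
print((delta.pow(2).sum() / signal.pow(2).sum()).item())  # ~0.03, as required
```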
arxiv-664428
2410.01394
Gaussian kernel expansion with basis functions uniformly bounded in $\mathcal{L}_\infty$
<|reference_start|>Gaussian kernel expansion with basis functions uniformly bounded in $\mathcal{L}_\infty$: Kernel expansions are a topic of considerable interest in machine learning, also because of their relation to the so-called feature maps used in machine learning. Properties of the associated basis functions and weights (corresponding to eigenfunctions and eigenvalues in the Mercer setting) give insight into, for example, the structure of the associated reproducing kernel Hilbert space, the quality of approximation schemes, and the convergence rates and generalization properties of kernel machines. Recent work in the literature has derived some of these results by assuming uniformly bounded basis functions in $\mathcal{L}_\infty$. Motivated by this line of research, we investigate under this constraint all possible kernel expansions of the Gaussian kernel, one of the most widely used models in machine learning. Our main result is the construction on $\mathbb{R}^2$ of a Gaussian kernel expansion with weights in $\ell_p$ for any $p>1$. This result is optimal since we also prove that $p=1$ cannot be reached by the Gaussian kernel, nor by any of the other radial basis function kernels commonly used in the literature. A consequence for this kind of kernel is also the non-existence of Mercer expansions on $\mathbb{R}^2$, with respect to any finite measure, whose eigenfunctions all belong to a closed ball of $\mathcal{L}_\infty$.<|reference_end|>
arxiv
@article{bisiacco2024gaussian, title={Gaussian kernel expansion with basis functions uniformly bounded in $\mathcal{L}_{\infty}$}, author={Mauro Bisiacco, Gianluigi Pillonetto}, journal={arXiv preprint arXiv:2410.01394}, year={2024}, archivePrefix={arXiv}, eprint={2410.01394}, primaryClass={cs.LG} }
bisiacco2024gaussian
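In the abstract's notation, the object of study can be written as a Mercer-type expansion of the Gaussian kernel with uniformly bounded basis functions (a standard form; the paper's specific construction on $\mathbb{R}^2$ is not reproduced here):

```latex
% Gaussian kernel expansion with uniformly L_infty-bounded basis functions.
\[
  e^{-\gamma \|x - y\|^{2}}
  \;=\; \sum_{i=1}^{\infty} \lambda_i \, \varphi_i(x)\, \varphi_i(y),
  \qquad
  \sup_{i}\, \|\varphi_i\|_{\mathcal{L}_{\infty}} < \infty .
\]
% Main result: such expansions exist on R^2 with weights (lambda_i) in l_p
% for every p > 1, while weights in l_1 are provably unattainable.
```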
arxiv-664429
2410.01395
Toward Zero-Shot Learning for Visual Dehazing of Urological Surgical Robots
<|reference_start|>Toward Zero-Shot Learning for Visual Dehazing of Urological Surgical Robots: Robot-assisted surgery has profoundly influenced current forms of minimally invasive surgery. However, transurethral and suburethral urological surgical robots need to work in a liquid environment. When shearing and heating are performed, the liquid vaporizes, and the resulting bubble atomization affects the robot's visual perception. This can force pauses in the surgical procedure, which makes the surgery take longer. To address the atomization characteristics of liquids under urological surgical robotic vision, we propose an unsupervised zero-shot dehaze method (RSF-Dehaze) for urological surgical robotic vision. Specifically, the proposed Region Similarity Filling Module (RSFM) of RSF-Dehaze significantly improves the recovery of blurred region tissues. In addition, we organize and propose a dehaze dataset for robotic vision in urological surgery (the USRobot-Dehaze dataset), which covers the three most common urological surgical robot operation scenarios. To the best of our knowledge, we are the first to organize and propose a publicly available dehaze dataset for urological surgical robot vision. Extensive comparative experiments with 20 classical and state-of-the-art dehazing and image recovery algorithms demonstrate the effectiveness of RSF-Dehaze in the three urological surgical robot operation scenarios. The source code and dataset are available at https://github.com/wurenkai/RSF-Dehaze.<|reference_end|>
arxiv
@article{wu2024toward, title={Toward Zero-Shot Learning for Visual Dehazing of Urological Surgical Robots}, author={Renkai Wu, Xianjin Wang, Pengchen Liang, Zhenyu Zhang, Qing Chang, Hao Tang}, journal={arXiv preprint arXiv:2410.01395}, year={2024}, archivePrefix={arXiv}, eprint={2410.01395}, primaryClass={eess.IV cs.CV} }
wu2024toward
arxiv-664430
2410.01396
Can We Delegate Learning to Automation?: A Comparative Study of LLM Chatbots, Search Engines, and Books
<|reference_start|>Can We Delegate Learning to Automation?: A Comparative Study of LLM Chatbots, Search Engines, and Books: Learning is a key motivator behind information search behavior. With the emergence of LLM-based chatbots, students are increasingly turning to these tools as their primary resource for acquiring knowledge. However, the transition from traditional resources like textbooks and web searches raises concerns among educators. They worry that these fully-automated LLMs might lead students to delegate critical steps of search as learning. In this paper, we systematically uncover three main concerns from educators' perspectives. In response to these concerns, we conducted a mixed-methods study with 92 university students to compare three learning sources with different automation levels. Our results show that LLMs support comprehensive understanding of key concepts without promoting passive learning, though their effectiveness in knowledge retention was limited. Additionally, we found that academic performance impacted both learning outcomes and search patterns. Notably, higher-competence learners engaged more deeply with content through reading-intensive behaviors rather than relying on search activities.<|reference_end|>
arxiv
@article{yang2024can, title={Can We Delegate Learning to Automation?: A Comparative Study of LLM Chatbots, Search Engines, and Books}, author={Yeonsun Yang, Ahyeon Shin, Mincheol Kang, Jiheon Kang, Jean Young Song}, journal={arXiv preprint arXiv:2410.01396}, year={2024}, archivePrefix={arXiv}, eprint={2410.01396}, primaryClass={cs.HC cs.AI cs.IR} }
yang2024can
arxiv-664431
2410.01398
WiFi-CSI Sensing and Bearing Estimation in Multi-Robot Systems: An Open-Source Simulation Framework
<|reference_start|>WiFi-CSI Sensing and Bearing Estimation in Multi-Robot Systems: An Open-Source Simulation Framework: Development and testing of multi-robot systems employing wireless signal-based sensing requires access to suitable hardware, such as channel monitoring WiFi transceivers, which can pose significant limitations. The WiFi Sensor for Robotics (WSR) toolbox, introduced by Jadhav et al. in 2022, provides a novel solution by using WiFi Channel State Information (CSI) to compute relative bearing between robots. The toolbox leverages the amplitude and phase of WiFi signals and creates virtual antenna arrays by exploiting the motion of mobile robots, eliminating the need for physical antenna arrays. However, the WSR toolbox's reliance on obsolete WiFi transceiver hardware has limited its operability and accessibility, hindering broader application and development of relevant tools. We present an open-source simulation framework that replicates the WSR toolbox's capabilities using Gazebo and Matlab. By simulating WiFi-CSI data collection, our framework emulates the behavior of mobile robots equipped with the WSR toolbox, enabling precise bearing estimation without physical hardware. We validate the framework through experiments with both simulated and real Turtlebot3 robots, showing a close match between the obtained CSI data and the resulting bearing estimates. This work provides a virtual environment for developing and testing WiFi-CSI-based multi-robot localization without relying on physical hardware. All code and experimental setup information are publicly available at https://github.com/BrendanxP/CSI-Simulation-Framework<|reference_end|>
arxiv
@article{dijkstra2024wifi-csi, title={WiFi-CSI Sensing and Bearing Estimation in Multi-Robot Systems: An Open-Source Simulation Framework}, author={Brendan Dijkstra, Ninad Jadhav, Alex Sloot, Matteo Marcantoni, Bayu Jayawardhana, Stephanie Gil, Bahar Haghighat}, journal={arXiv preprint arXiv:2410.01398}, year={2024}, archivePrefix={arXiv}, eprint={2410.01398}, primaryClass={cs.RO cs.SY eess.SY} }
dijkstra2024wifi-csi
arxiv-664432
2410.01399
Overpredictive Signal Analytics in Federated Learning: Algorithms and Analysis
<|reference_start|>Overpredictive Signal Analytics in Federated Learning: Algorithms and Analysis: Edge signal processing facilitates distributed learning and inference in the client-server model proposed in federated learning. In traditional machine learning, clients (IoT devices) that acquire raw signal samples can help a data center (server) learn a global signal model by pooling these distributed samples at a third-party location. Despite the promising capabilities of IoTs, these distributed deployments often face the challenge of sensitive private data and communication rate constraints. This necessitates a learning approach that communicates a processed approximation of the distributed samples instead of the raw signals. Such a decentralized learning approach using signal approximations will be termed distributed signal analytics in this work. Overpredictive signal approximations may be desired for distributed signal analytics, especially in network demand (capacity) planning applications motivated by federated learning. In this work, we propose algorithms that compute an overpredictive signal approximation at the client devices using an efficient convex optimization framework. Tradeoffs between communication cost, sampling rate, and the signal approximation error are quantified using mathematical analysis. We also show the performance of the proposed distributed algorithms on a publicly available residential energy consumption dataset.<|reference_end|>
arxiv
@article{anavangot2024overpredictive, title={Overpredictive Signal Analytics in Federated Learning: Algorithms and Analysis}, author={Vijay Anavangot}, journal={arXiv preprint arXiv:2410.01399}, year={2024}, archivePrefix={arXiv}, eprint={2410.01399}, primaryClass={eess.SP cs.LG stat.ML} }
anavangot2024overpredictive
arxiv-664433
2410.01400
CrowdCounter: A benchmark type-specific multi-target counterspeech dataset
<|reference_start|>CrowdCounter: A benchmark type-specific multi-target counterspeech dataset: Counterspeech presents a viable alternative to banning or suspending users for hate speech while upholding freedom of expression. However, writing effective counterspeech is challenging for moderators/users. Hence, developing suggestion tools for writing counterspeech is the need of the hour. One critical challenge in developing such a tool is the lack of quality and diversity of the responses in the existing datasets. Hence, we introduce a new dataset - CrowdCounter containing 3,425 hate speech-counterspeech pairs spanning six different counterspeech types (empathy, humor, questioning, warning, shaming, contradiction), which is the first of its kind. The design of our annotation platform itself encourages annotators to write type-specific, non-redundant and high-quality counterspeech. We evaluate two frameworks for generating counterspeech responses - vanilla and type-controlled prompts - across four large language models. In terms of metrics, we evaluate the responses using relevance, diversity and quality. We observe that Flan-T5 is the best model in the vanilla framework across different models. Type-specific prompts enhance the relevance of the responses, although they might reduce the language quality. DialoGPT proves to be the best at following the instructions and generating the type-specific counterspeech accurately.<|reference_end|>
arxiv
@article{saha2024crowdcounter:, title={CrowdCounter: A benchmark type-specific multi-target counterspeech dataset}, author={Punyajoy Saha, Abhilash Datta, Abhik Jana, Animesh Mukherjee}, journal={arXiv preprint arXiv:2410.01400}, year={2024}, archivePrefix={arXiv}, eprint={2410.01400}, primaryClass={cs.CL} }
saha2024crowdcounter:
arxiv-664434
2410.01401
Question-guided Knowledge Graph Re-scoring and Injection for Knowledge Graph Question Answering
<|reference_start|>Question-guided Knowledge Graph Re-scoring and Injection for Knowledge Graph Question Answering: Knowledge graph question answering (KGQA) involves answering natural language questions by leveraging structured information stored in a knowledge graph. Typically, KGQA systems first retrieve a targeted subgraph from a large-scale knowledge graph, which serves as the basis for reasoning models to address queries. However, the retrieved subgraph inevitably introduces distracting information for knowledge utilization, impeding the model's ability to perform accurate reasoning. To address this issue, we propose a Question-guided Knowledge Graph Re-scoring method (Q-KGR) to eliminate noisy pathways for the input question, thereby focusing specifically on pertinent factual knowledge. Moreover, we introduce Knowformer, a parameter-efficient method for injecting the re-scored knowledge graph into large language models to enhance their ability to perform factual reasoning. Extensive experiments on multiple KGQA benchmarks demonstrate the superiority of our method over existing systems.<|reference_end|>
arxiv
@article{zhang2024question-guided, title={Question-guided Knowledge Graph Re-scoring and Injection for Knowledge Graph Question Answering}, author={Yu Zhang, Kehai Chen, Xuefeng Bai, Zhao Kang, Quanjiang Guo, Min Zhang}, journal={arXiv preprint arXiv:2410.01401}, year={2024}, archivePrefix={arXiv}, eprint={2410.01401}, primaryClass={cs.CL} }
zhang2024question-guided
arxiv-664435
2410.01403
Detection and suppression of epileptiform seizures via model-free control and derivatives in a noisy environment
<|reference_start|>Detection and suppression of epileptiform seizures via model-free control and derivatives in a noisy environment: Recent advances in control theory yield closed-loop neurostimulations for suppressing epileptiform seizures. These advances are illustrated by computer experiments which are easy to implement and to tune. The feedback synthesis is provided by an intelligent proportional-derivative (iPD) regulator associated to model-free control. This approach has already been successfully exploited in many concrete situations in engineering, since no precise computational modeling is needed. iPDs permit tracking a large variety of signals including high-amplitude epileptic activity. Those unpredictable pathological brain oscillations should be detected in order to avoid continuous stimulation, which might induce detrimental side effects. This is achieved by introducing a data mining method based on the maxima of the recorded signals. The real-time derivative estimation in a particularly noisy epileptiform environment is made possible due to a newly developed algebraic differentiator. The virtual patient is the Wendling model, i.e., a set of ordinary differential equations adapted from the Jansen-Rit neural mass model in order to generate epileptiform activity via appropriate values of excitation- and inhibition-related parameters. Several simulations, which lead to a large variety of possible scenarios, are discussed. They show the robustness of our control synthesis with respect to different virtual patients and external disturbances.<|reference_end|>
arxiv
@article{join2024detection, title={Detection and suppression of epileptiform seizures via model-free control and derivatives in a noisy environment}, author={Cédric Join, D. Blair Jovellar, Emmanuel Delaleau, Michel Fliess}, journal={arXiv preprint arXiv:2410.01403}, year={2024}, archivePrefix={arXiv}, eprint={2410.01403}, primaryClass={eess.SY cs.SY math.OC} }
join2024detection
arxiv-664436
2410.01404
Gaussian-Det: Learning Closed-Surface Gaussians for 3D Object Detection
<|reference_start|>Gaussian-Det: Learning Closed-Surface Gaussians for 3D Object Detection: Skin wrapping around our bodies, leather covering the sofa, sheet metal coating the car - these suggest that objects are enclosed by a series of continuous surfaces, which provides us with an informative geometry prior for objectness deduction. In this paper, we propose Gaussian-Det which leverages Gaussian Splatting as surface representation for multi-view based 3D object detection. Unlike existing monocular or NeRF-based methods which depict the objects via discrete positional data, Gaussian-Det models the objects in a continuous manner by formulating the input Gaussians as feature descriptors on a mass of partial surfaces. Furthermore, to address the numerous outliers inherently introduced by Gaussian splatting, we accordingly devise a Closure Inferring Module (CIM) for the comprehensive surface-based objectness deduction. CIM firstly estimates the probabilistic feature residuals for partial surfaces given the underdetermined nature of Gaussian Splatting, which are then coalesced into a holistic representation on the overall surface closure of the object proposal. In this way, the surface information Gaussian-Det exploits serves as the prior on the quality and reliability of objectness and the information basis of proposal refinement. Experiments on both synthetic and real-world datasets demonstrate that Gaussian-Det outperforms various existing approaches, in terms of both average precision and recall.<|reference_end|>
arxiv
@article{yan2024gaussian-det:, title={Gaussian-Det: Learning Closed-Surface Gaussians for 3D Object Detection}, author={Hongru Yan, Yu Zheng, Yueqi Duan}, journal={arXiv preprint arXiv:2410.01404}, year={2024}, archivePrefix={arXiv}, eprint={2410.01404}, primaryClass={cs.CV} }
yan2024gaussian-det:
arxiv-664437
2410.01405
On Expressive Power of Looped Transformers: Theoretical Analysis and Enhancement via Timestep Encoding
<|reference_start|>On Expressive Power of Looped Transformers: Theoretical Analysis and Enhancement via Timestep Encoding: Looped Transformers offer advantages in parameter efficiency and Turing completeness. However, their expressive power for function approximation and approximation rate remains underexplored. In this paper, we establish approximation rates of Looped Transformers by defining the concept of the modulus of continuity for sequence-to-sequence functions. This reveals a limitation specific to the looped architecture. That is, the analysis prompts us to incorporate scaling parameters for each loop, conditioned on timestep encoding. Experimental results demonstrate that increasing the number of loops enhances performance, with further gains achieved through the timestep encoding architecture.<|reference_end|>
arxiv
@article{xu2024on, title={On Expressive Power of Looped Transformers: Theoretical Analysis and Enhancement via Timestep Encoding}, author={Kevin Xu and Issei Sato}, journal={arXiv preprint arXiv:2410.01405}, year={2024}, archivePrefix={arXiv}, eprint={2410.01405}, primaryClass={cs.LG} }
xu2024on
arxiv-664438
2410.01407
AgriCLIP: Adapting CLIP for Agriculture and Livestock via Domain-Specialized Cross-Model Alignment
<|reference_start|>AgriCLIP: Adapting CLIP for Agriculture and Livestock via Domain-Specialized Cross-Model Alignment: Capitalizing on vast amounts of image-text data, large-scale vision-language pre-training has demonstrated remarkable zero-shot capabilities and has been utilized in several applications. However, models trained on general everyday web-crawled data often exhibit sub-optimal performance for specialized domains, likely due to domain shift. Recent works have tackled this problem for some domains (e.g., healthcare) by constructing domain-specialized image-text data. However, constructing a dedicated large-scale image-text dataset for the sustainable area of agriculture and livestock is still open to research. Further, this domain demands fine-grained feature learning due to the subtle nature of the downstream tasks (e.g., nutrient deficiency detection, livestock breed classification). To address this, we present AgriCLIP, a vision-language foundational model dedicated to the domain of agriculture and livestock. First, we propose a large-scale dataset, named ALive, that leverages a customized prompt generation strategy to overcome the scarcity of expert annotations. Our ALive dataset covers crops, livestock, and fishery, with around 600,000 image-text pairs. Second, we propose a training pipeline that integrates both contrastive and self-supervised learning to learn both global semantic and local fine-grained domain-specialized features. Experiments on a diverse set of 20 downstream tasks demonstrate the effectiveness of the AgriCLIP framework, achieving an absolute gain of 7.8\% in terms of average zero-shot classification accuracy, over standard CLIP adaptation via the domain-specialized ALive dataset. Our ALive dataset and code are accessible at \href{https://github.com/umair1221/AgriCLIP/tree/main}{Github}.<|reference_end|>
arxiv
@article{nawaz2024agriclip:, title={AgriCLIP: Adapting CLIP for Agriculture and Livestock via Domain-Specialized Cross-Model Alignment}, author={Umair Nawaz, Muhammad Awais, Hanan Gani, Muzammal Naseer, Fahad Khan, Salman Khan, Rao Muhammad Anwer}, journal={arXiv preprint arXiv:2410.01407}, year={2024}, archivePrefix={arXiv}, eprint={2410.01407}, primaryClass={cs.CV} }
nawaz2024agriclip:
arxiv-664439
2410.01408
SHAP-CAT: An interpretable multi-modal framework enhancing WSI classification via virtual staining and Shapley-value-based multimodal fusion
<|reference_start|>SHAP-CAT: An interpretable multi-modal framework enhancing WSI classification via virtual staining and Shapley-value-based multimodal fusion: Multimodal models have demonstrated promise in histopathology. However, most multimodal models are based on H\&E and genomics, adopting increasingly complex yet black-box designs. In our paper, we propose a novel interpretable multimodal framework named SHAP-CAT, which uses a Shapley-value-based dimension reduction technique for effective multimodal fusion. Starting with two paired modalities -- H\&E and IHC images, we employ virtual staining techniques to enhance limited input data by generating a new clinically related modality. Lightweight bag-level representations are extracted from image modalities and a Shapley-value-based mechanism is used for dimension reduction. For each dimension of the bag-level representation, attribution values are calculated to indicate how changes in the specific dimensions of the input affect the model output. In this way, we select a few of the most important dimensions of the bag-level representation for each image modality for late fusion. Our experimental results demonstrate that the proposed SHAP-CAT framework incorporating synthetic modalities significantly enhances model performance, yielding a 5\% increase in accuracy for the BCI, an 8\% increase for IHC4BC-ER, and an 11\% increase for the IHC4BC-PR dataset.<|reference_end|>
arxiv
@article{wang2024shap-cat:, title={SHAP-CAT: An interpretable multi-modal framework enhancing WSI classification via virtual staining and Shapley-value-based multimodal fusion}, author={Jun Wang, Yu Mao, Nan Guan, Chun Jason Xue}, journal={arXiv preprint arXiv:2410.01408}, year={2024}, archivePrefix={arXiv}, eprint={2410.01408}, primaryClass={cs.CV} }
wang2024shap-cat:
arxiv-664440
2410.01409
Hexahedral mesh of anatomical atlas for construction of computational human brain models: Applications to modeling biomechanics and bioelectric field propagation
<|reference_start|>Hexahedral mesh of anatomical atlas for construction of computational human brain models: Applications to modeling biomechanics and bioelectric field propagation: Numerical simulations rely on constructing accurate and detailed models to produce reliable results - a task that is often challenging. This task becomes notably more difficult when the model is of the human brain. We create an anatomically comprehensive hexahedral mesh of the human brain using an open-source digital brain atlas. Digital atlases are valuable tools currently used by medical professionals, medical students, and researchers for gathering, presenting, and discovering knowledge about the human brain. We demonstrate that the atlas can be used to efficiently create an accurate and detailed hexahedral finite element mesh of the brain for scientific computing. We present two case studies. The first case study constructs a biomechanical model of the brain to compute brain deformations and predict traumatic brain injury risk due to violent impact. In the second case study, we construct a bioelectrical model of the brain to solve the electroencephalography (EEG) forward problem, a frequent simulation process used in electrophysiology to study electromagnetic fields generated by the nervous system. We demonstrate efficient and accurate model construction using the meshed anatomical brain atlas, as well as emphasize the importance of effective communication and contextual analysis of results for enabling multi-disciplinary scientific computing research.<|reference_end|>
arxiv
@article{huynh2024hexahedral, title={Hexahedral mesh of anatomical atlas for construction of computational human brain models: Applications to modeling biomechanics and bioelectric field propagation}, author={Andy Huynh, Benjamin Zwick, Mostafa Jamshidian, Michael Halle, Adam Wittek and Karol Miller}, journal={arXiv preprint arXiv:2410.01409}, year={2024}, archivePrefix={arXiv}, eprint={2410.01409}, primaryClass={cs.CE} }
huynh2024hexahedral
arxiv-664441
2410.01410
On the Convergence of FedProx with Extrapolation and Inexact Prox
<|reference_start|>On the Convergence of FedProx with Extrapolation and Inexact Prox: Enhancing the FedProx federated learning algorithm (Li et al., 2020) with server-side extrapolation, Li et al. (2024a) recently introduced the FedExProx method. Their theoretical analysis, however, relies on the assumption that each client computes a certain proximal operator exactly, which is impractical since this is virtually never possible to do in real settings. In this paper, we investigate the behavior of FedExProx without this exactness assumption in the smooth and globally strongly convex setting. We establish a general convergence result, showing that inexactness leads to convergence to a neighborhood of the solution. Additionally, we demonstrate that, with careful control, the adverse effects of this inexactness can be mitigated. By linking inexactness to biased compression (Beznosikov et al., 2023), we refine our analysis, highlighting the robustness of extrapolation to inexact proximal updates. We also examine the local iteration complexity required by each client to achieve the required level of inexactness using various local optimizers. Our theoretical insights are validated through comprehensive numerical experiments.<|reference_end|>
arxiv
@article{li2024on, title={On the Convergence of FedProx with Extrapolation and Inexact Prox}, author={Hanmin Li, Peter Richtárik}, journal={arXiv preprint arXiv:2410.01410}, year={2024}, archivePrefix={arXiv}, eprint={2410.01410}, primaryClass={math.OC cs.AI} }
li2024on
arxiv-664442
2410.01411
CSIM: A Copula-based similarity index sensitive to local changes for Image quality assessment
<|reference_start|>CSIM: A Copula-based similarity index sensitive to local changes for Image quality assessment: Image similarity metrics play an important role in computer vision applications, as they are used in image processing, computer vision and machine learning. Furthermore, those metrics enable tasks such as image retrieval, object recognition and quality assessment, essential in fields like healthcare, astronomy and surveillance. Existing metrics, such as PSNR, MSE, SSIM, ISSM and FSIM, often face limitations in terms of either speed, complexity or sensitivity to small changes in images. To address these challenges, a novel image similarity metric, namely CSIM, that combines real-time performance with sensitivity to subtle image variations, is investigated in this paper. The novel metric uses Gaussian Copula from probability theory to transform an image into vectors of pixel distributions associated with local image patches. These vectors contain, in addition to intensities and pixel positions, information on the dependencies between pixel values, capturing the structural relationships within the image. By leveraging the properties of Copulas, CSIM effectively models the joint distribution of pixel intensities, enabling a more nuanced comparison of image patches, making it more sensitive to local changes compared to other metrics. Experimental results demonstrate that CSIM outperforms existing similarity metrics in various image distortion scenarios, including noise, compression artifacts and blur. The metric's ability to detect subtle differences makes it suitable for applications requiring high precision, such as medical imaging, where the detection of minor anomalies can be of high importance. The results obtained in this work can be reproduced from this Github repository: https://github.com/safouaneelg/copulasimilarity.<|reference_end|>
arxiv
@article{ghazouali2024csim:, title={CSIM: A Copula-based similarity index sensitive to local changes for Image quality assessment}, author={Safouane El Ghazouali, Umberto Michelucci, Yassin El Hillali, Hichem Nouira}, journal={arXiv preprint arXiv:2410.01411}, year={2024}, archivePrefix={arXiv}, eprint={2410.01411}, primaryClass={eess.IV cs.CV math.PR} }
ghazouali2024csim:
arxiv-664443
2410.01413
Improving Fuzzy Rule Classifier with Brain Storm Optimization and Rule Modification
<|reference_start|>Improving Fuzzy Rule Classifier with Brain Storm Optimization and Rule Modification: The expanding complexity and dimensionality in the search space can adversely affect inductive learning in fuzzy rule classifiers, thus impacting the scalability and accuracy of fuzzy systems. This research specifically addresses the challenge of diabetic classification by employing the Brain Storm Optimization (BSO) algorithm to propose a novel fuzzy system that redefines rule generation for this context. An exponential model is integrated into the standard BSO algorithm to enhance rule derivation, tailored specifically for diabetes-related data. The innovative fuzzy system is then applied to classification tasks involving diabetic datasets, demonstrating a substantial improvement in classification accuracy, as evidenced by our experiments.<|reference_end|>
arxiv
@article{huang2024improving, title={Improving Fuzzy Rule Classifier with Brain Storm Optimization and Rule Modification}, author={Yan Huang, Wei Liu, Xiaogang Zang}, journal={arXiv preprint arXiv:2410.01413}, year={2024}, archivePrefix={arXiv}, eprint={2410.01413}, primaryClass={cs.AI cs.NE} }
huang2024improving
arxiv-664444
2410.01415
QCRMut: Quantum Circuit Random Mutant generator tool
<|reference_start|>QCRMut: Quantum Circuit Random Mutant generator tool: Quantum computing has been on the rise in recent years, evidenced by a surge in publications on quantum software engineering and testing. Progress in quantum hardware has also been notable, with the introduction of impressive systems like Condor boasting 1121 qubits, and IBM Quantum System Two, which employs three 133-qubit Heron processors. As this technology edges closer to practical application, ensuring the efficacy of our software becomes imperative. Mutation testing, a well-established technique in classical computing, emerges as a valuable approach in this context. In our paper, we aim to introduce QCRMut, a mutation tool tailored for quantum programs, leveraging the inherent Quantum Circuit structure. In contrast to previous works with exhaustive mutant creation processes, we propose a randomised approach, together with the capability of marking immutable positions within the circuit. These features facilitate the preservation of program structure, which is crucial for future applications such as metamorphic testing.<|reference_end|>
arxiv
@article{gil2024qcrmut:, title={QCRMut: Quantum Circuit Random Mutant generator tool}, author={Sinhué García Gil and Luis Llana Díaz and José Ignacio Requeno Jarabo}, journal={arXiv preprint arXiv:2410.01415}, year={2024}, archivePrefix={arXiv}, eprint={2410.01415}, primaryClass={cs.SE} }
gil2024qcrmut:
arxiv-664445
2410.01417
The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs
<|reference_start|>The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs: Multi-modal Large Language Models (MLLMs) have exhibited impressive capability. However, recently many deficiencies of MLLMs have been found compared to human intelligence, $\textit{e.g.}$, hallucination. To drive the MLLMs study, the community dedicated efforts to building larger benchmarks with complex tasks. In this paper, we propose benchmarking an essential but usually overlooked intelligence: $\textbf{association}$, a human's basic capability to link observation and prior practice memory. To comprehensively investigate MLLM's performance on the association, we formulate the association task and devise a standard benchmark based on adjective and verb semantic concepts. Instead of costly data annotation and curation, we propose a convenient $\textbf{annotation-free}$ construction method transforming the general dataset for our association tasks. Simultaneously, we devise a rigorous data refinement process to eliminate confusion in the raw dataset. Building on this database, we establish three levels of association tasks: single-step, synchronous, and asynchronous associations. Moreover, we conduct a comprehensive investigation into the MLLMs' zero-shot association capabilities, addressing multiple dimensions, including three distinct memory strategies, both open-source and closed-source MLLMs, cutting-edge Mixture-of-Experts (MoE) models, and the involvement of human experts. Our systematic investigation shows that current open-source MLLMs consistently exhibit poor capability in our association tasks, even the currently state-of-the-art GPT-4V(vision) also has a significant gap compared to humans. We believe our benchmark would pave the way for future MLLM studies. $\textit{Our data and code are available at:}$ https://mvig-rhos.com/llm_inception.<|reference_end|>
arxiv
@article{li2024the, title={The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs}, author={Hong Li, Nanxi Li, Yuanjie Chen, Jianbin Zhu, Qinlu Guo, Cewu Lu, Yong-Lu Li}, journal={arXiv preprint arXiv:2410.01417}, year={2024}, archivePrefix={arXiv}, eprint={2410.01417}, primaryClass={cs.CV cs.AI cs.CL cs.LG} }
li2024the
arxiv-664446
2410.01421
Disconnection Rules are Complete for Chemical Reactions
<|reference_start|>Disconnection Rules are Complete for Chemical Reactions: We provide a category theoretical framework capturing two approaches to graph-based models of chemistry: formal reactions and disconnection rules. We model a translation from the latter to the former as a functor, which is faithful, and full up to isomorphism. This allows us to state, as our main result, that the disconnection rules are sound, complete and universal with respect to the reactions. Concretely, this means that every reaction can be decomposed into a sequence of disconnection rules in an essentially unique way. This provides a uniform way to store reaction data, and gives an algorithmic interface between (forward) reaction prediction and (backward) reaction search or retrosynthesis.<|reference_end|>
arxiv
@article{gale2024disconnection, title={Disconnection Rules are Complete for Chemical Reactions}, author={Ella Gale, Leo Lobski, Fabio Zanasi}, journal={arXiv preprint arXiv:2410.01421}, year={2024}, archivePrefix={arXiv}, eprint={2410.01421}, primaryClass={cs.LO math.CT} }
gale2024disconnection
arxiv-664447
2410.01423
Fair4Free: Generating High-fidelity Fair Synthetic Samples using Data Free Distillation
<|reference_start|>Fair4Free: Generating High-fidelity Fair Synthetic Samples using Data Free Distillation: This work presents Fair4Free, a novel generative model to generate synthetic fair data using data-free distillation in the latent space. Fair4Free can work even when the data is private or inaccessible. In our approach, we first train a teacher model to create a fair representation and then distil the knowledge to a student model (using a smaller architecture). The process of distilling the student model is data-free, i.e. the student model does not have access to the training dataset while distilling. After the distillation, we use the distilled model to generate fair synthetic samples. Our extensive experiments show that our synthetic samples outperform state-of-the-art models in all three criteria (fairness, utility and synthetic quality) with a performance increase of 5% for fairness, 8% for utility and 12% for synthetic quality for both tabular and image datasets.<|reference_end|>
arxiv
@article{sikder2024fair4free:, title={Fair4Free: Generating High-fidelity Fair Synthetic Samples using Data Free Distillation}, author={Md Fahim Sikder, Daniel de Leng, Fredrik Heintz}, journal={arXiv preprint arXiv:2410.01423}, year={2024}, archivePrefix={arXiv}, eprint={2410.01423}, primaryClass={cs.LG cs.AI} }
sikder2024fair4free:
arxiv-664448
2410.01425
EVA-Gaussian: 3D Gaussian-based Real-time Human Novel View Synthesis under Diverse Camera Settings
<|reference_start|>EVA-Gaussian: 3D Gaussian-based Real-time Human Novel View Synthesis under Diverse Camera Settings: The feed-forward based 3D Gaussian Splatting method has demonstrated exceptional capability in real-time human novel view synthesis. However, existing approaches are restricted to dense viewpoint settings, which limits their flexibility in free-viewpoint rendering across a wide range of camera view angle discrepancies. To address this limitation, we propose a real-time pipeline named EVA-Gaussian for 3D human novel view synthesis across diverse camera settings. Specifically, we first introduce an Efficient cross-View Attention (EVA) module to accurately estimate the position of each 3D Gaussian from the source images. Then, we integrate the source images with the estimated Gaussian position map to predict the attributes and feature embeddings of the 3D Gaussians. Moreover, we employ a recurrent feature refiner to correct artifacts caused by geometric errors in position estimation and enhance visual fidelity. To further improve synthesis quality, we incorporate a powerful anchor loss function for both 3D Gaussian attributes and human face landmarks. Experimental results on the THuman2.0 and THumansit datasets showcase the superiority of our EVA-Gaussian approach in rendering quality across diverse camera settings. Project page: https://zhenliuzju.github.io/huyingdong/EVA-Gaussian.<|reference_end|>
arxiv
@article{hu2024eva-gaussian:, title={EVA-Gaussian: 3D Gaussian-based Real-time Human Novel View Synthesis under Diverse Camera Settings}, author={Yingdong Hu, Zhening Liu, Jiawei Shao, Zehong Lin, Jun Zhang}, journal={arXiv preprint arXiv:2410.01425}, year={2024}, archivePrefix={arXiv}, eprint={2410.01425}, primaryClass={cs.CV} }
hu2024eva-gaussian:
arxiv-664449
2410.01426
Approximation by Steklov Neural Network Operators
<|reference_start|>Approximation by Steklov Neural Network Operators: The present paper deals with the construction of a new family of Neural Network operators, that is, Steklov Neural Network operators. By using a Steklov-type integral, we introduce a new version of Neural Network operators and obtain convergence theorems for the family, such as pointwise and uniform convergence, and the rate of convergence via moduli of smoothness of order $r$.<|reference_end|>
arxiv
@article{karaman2024approximation, title={Approximation by Steklov Neural Network Operators}, author={S. N. Karaman, M. Turgay, T. Acar}, journal={arXiv preprint arXiv:2410.01426}, year={2024}, archivePrefix={arXiv}, eprint={2410.01426}, primaryClass={math.FA cs.LG} }
karaman2024approximation
arxiv-664450
2410.01428
Can We Further Elicit Reasoning in LLMs? Critic-Guided Planning with Retrieval-Augmentation for Solving Challenging Tasks
<|reference_start|>Can We Further Elicit Reasoning in LLMs? Critic-Guided Planning with Retrieval-Augmentation for Solving Challenging Tasks: State-of-the-art large language models (LLMs) exhibit impressive problem-solving capabilities but may struggle with complex reasoning and factual correctness. Existing methods harness the strengths of chain-of-thought and retrieval-augmented generation (RAG) to decompose a complex problem into simpler steps and apply retrieval to improve factual correctness. These methods work well on straightforward reasoning tasks but often falter on challenging tasks such as competitive programming and mathematics, due to frequent reasoning errors and irrelevant knowledge retrieval. To address this, we introduce Critic-guided planning with Retrieval-augmentation, CR-Planner, a novel framework that leverages fine-tuned critic models to guide both reasoning and retrieval processes through planning. CR-Planner solves a problem by iteratively selecting and executing sub-goals. Initially, it identifies the most promising sub-goal from reasoning, query generation, and retrieval, guided by rewards given by a critic model named sub-goal critic. It then executes this sub-goal through sampling and selecting the optimal output based on evaluations from another critic model named execution critic. This iterative process, informed by retrieved information and critic models, enables CR-Planner to effectively navigate the solution space towards the final answer. We employ Monte Carlo Tree Search to collect the data for training the critic models, allowing for a systematic exploration of action sequences and their long-term impacts. We validate CR-Planner on challenging domain-knowledge-intensive and reasoning-heavy tasks, including competitive programming, theorem-driven math reasoning, and complex domain retrieval problems. Our experiments demonstrate that CR-Planner significantly outperforms baselines, highlighting its effectiveness in addressing challenging problems by improving both reasoning and retrieval.<|reference_end|>
arxiv
@article{li2024can, title={Can We Further Elicit Reasoning in LLMs? Critic-Guided Planning with Retrieval-Augmentation for Solving Challenging Tasks}, author={Xingxuan Li, Weiwen Xu, Ruochen Zhao, Fangkai Jiao, Shafiq Joty, Lidong Bing}, journal={arXiv preprint arXiv:2410.01428}, year={2024}, archivePrefix={arXiv}, eprint={2410.01428}, primaryClass={cs.CL} }
li2024can
arxiv-664451
2410.01431
Scalable Reinforcement Learning-based Neural Architecture Search
<|reference_start|>Scalable Reinforcement Learning-based Neural Architecture Search: In this publication, we assess the ability of a novel Reinforcement Learning-based solution to the problem of Neural Architecture Search, where a Reinforcement Learning (RL) agent learns to search for good architectures, rather than to return a single optimal architecture. We consider both the NAS-Bench-101 and NAS-Bench-301 settings, and compare against various known strong baselines, such as local search and random search. We conclude that our Reinforcement Learning agent displays strong scalability with regard to the size of the search space, but limited robustness to hyperparameter changes.<|reference_end|>
arxiv
@article{cassimon2024scalable, title={Scalable Reinforcement Learning-based Neural Architecture Search}, author={Amber Cassimon, Siegfried Mercelis, Kevin Mets}, journal={arXiv preprint arXiv:2410.01431}, year={2024}, archivePrefix={arXiv}, eprint={2410.01431}, primaryClass={cs.LG} }
cassimon2024scalable
arxiv-664452
2410.01432
Adaptive teachers for amortized samplers
<|reference_start|>Adaptive teachers for amortized samplers: Amortized inference is the task of training a parametric model, such as a neural network, to approximate a distribution with a given unnormalized density where exact sampling is intractable. When sampling is implemented as a sequential decision-making process, reinforcement learning (RL) methods, such as generative flow networks, can be used to train the sampling policy. Off-policy RL training facilitates the discovery of diverse, high-reward candidates, but existing methods still face challenges in efficient exploration. We propose to use an adaptive training distribution (the Teacher) to guide the training of the primary amortized sampler (the Student) by prioritizing high-loss regions. The Teacher, an auxiliary behavior model, is trained to sample high-error regions of the Student and can generalize across unexplored modes, thereby enhancing mode coverage by providing an efficient training curriculum. We validate the effectiveness of this approach in a synthetic environment designed to present an exploration challenge, two diffusion-based sampling tasks, and four biochemical discovery tasks demonstrating its ability to improve sample efficiency and mode coverage.<|reference_end|>
arxiv
@article{kim2024adaptive, title={Adaptive teachers for amortized samplers}, author={Minsu Kim, Sanghyeok Choi, Taeyoung Yun, Emmanuel Bengio, Leo Feng, Jarrid Rector-Brooks, Sungsoo Ahn, Jinkyoo Park, Nikolay Malkin, Yoshua Bengio}, journal={arXiv preprint arXiv:2410.01432}, year={2024}, archivePrefix={arXiv}, eprint={2410.01432}, primaryClass={cs.LG stat.ML} }
kim2024adaptive
arxiv-664453
2410.01434
Circuit Compositions: Exploring Modular Structures in Transformer-Based Language Models
<|reference_start|>Circuit Compositions: Exploring Modular Structures in Transformer-Based Language Models: A fundamental question in interpretability research is to what extent neural networks, particularly language models, implement reusable functions via subnetworks that can be composed to perform more complex tasks. Recent developments in mechanistic interpretability have made progress in identifying subnetworks, often referred to as circuits, which represent the minimal computational subgraph responsible for a model's behavior on specific tasks. However, most studies focus on identifying circuits for individual tasks without investigating how functionally similar circuits relate to each other. To address this gap, we examine the modularity of neural networks by analyzing circuits for highly compositional subtasks within a transformer-based language model. Specifically, given a probabilistic context-free grammar, we identify and compare circuits responsible for ten modular string-edit operations. Our results indicate that functionally similar circuits exhibit both notable node overlap and cross-task faithfulness. Moreover, we demonstrate that the circuits identified can be reused and combined through subnetwork set operations to represent more complex functional capabilities of the model.<|reference_end|>
arxiv
@article{mondorf2024circuit, title={Circuit Compositions: Exploring Modular Structures in Transformer-Based Language Models}, author={Philipp Mondorf, Sondre Wold, Barbara Plank}, journal={arXiv preprint arXiv:2410.01434}, year={2024}, archivePrefix={arXiv}, eprint={2410.01434}, primaryClass={cs.LG cs.CL} }
mondorf2024circuit
arxiv-664454
2410.01438
Information-Theoretical Principled Trade-off between Jailbreakability and Stealthiness on Vision Language Models
<|reference_start|>Information-Theoretical Principled Trade-off between Jailbreakability and Stealthiness on Vision Language Models: In recent years, Vision-Language Models (VLMs) have demonstrated significant advancements in artificial intelligence, transforming tasks across various domains. Despite their capabilities, these models are susceptible to jailbreak attacks, which can compromise their safety and reliability. This paper explores the trade-off between jailbreakability and stealthiness in VLMs, presenting a novel algorithm to detect non-stealthy jailbreak attacks and enhance model robustness. We introduce a stealthiness-aware jailbreak attack using diffusion models, highlighting the challenge of detecting AI-generated content. Our approach leverages Fano's inequality to elucidate the relationship between attack success rates and stealthiness scores, providing an explainable framework for evaluating these threats. Our contributions aim to fortify AI systems against sophisticated attacks, ensuring their outputs remain aligned with ethical standards and user expectations.<|reference_end|>
arxiv
@article{kao2024information-theoretical, title={Information-Theoretical Principled Trade-off between Jailbreakability and Stealthiness on Vision Language Models}, author={Ching-Chia Kao, Chia-Mu Yu, Chun-Shien Lu, Chu-Song Chen}, journal={arXiv preprint arXiv:2410.01438}, year={2024}, archivePrefix={arXiv}, eprint={2410.01438}, primaryClass={cs.LG} }
kao2024information-theoretical
arxiv-664455
2410.01440
Closed-Loop Long-Horizon Robotic Planning via Equilibrium Sequence Modeling
<|reference_start|>Closed-Loop Long-Horizon Robotic Planning via Equilibrium Sequence Modeling: In the endeavor to make autonomous robots take actions, task planning is a major challenge that requires translating high-level task descriptions into long-horizon action sequences. Despite recent advances in language model agents, they remain prone to planning errors and limited in their ability to plan ahead. To address these limitations in robotic planning, we advocate a self-refining scheme that iteratively refines a draft plan until an equilibrium is reached. Remarkably, this process can be optimized end-to-end from an analytical perspective without the need to curate additional verifiers or reward models, allowing us to train self-refining planners in a simple supervised learning fashion. Meanwhile, a nested equilibrium sequence modeling procedure is devised for efficient closed-loop planning that incorporates useful feedback from the environment (or an internal world model). Our method is evaluated on the VirtualHome-Env benchmark, showing advanced performance with better scaling for inference computation. Code is available at https://github.com/Singularity0104/equilibrium-planner.<|reference_end|>
arxiv
@article{li2024closed-loop, title={Closed-Loop Long-Horizon Robotic Planning via Equilibrium Sequence Modeling}, author={Jinghan Li, Zhicheng Sun, Fei Li, Cao Sheng, Jiazhong Yu, Yadong Mu}, journal={arXiv preprint arXiv:2410.01440}, year={2024}, archivePrefix={arXiv}, eprint={2410.01440}, primaryClass={cs.RO cs.LG} }
li2024closed-loop
arxiv-664456
2410.01441
Decorrelation-based Self-Supervised Visual Representation Learning for Writer Identification
<|reference_start|>Decorrelation-based Self-Supervised Visual Representation Learning for Writer Identification: Self-supervised learning has developed rapidly over the last decade and has been applied in many areas of computer vision. Decorrelation-based self-supervised pretraining has shown great promise among non-contrastive algorithms, yielding performance at par with supervised and contrastive self-supervised baselines. In this work, we explore the decorrelation-based paradigm of self-supervised learning and apply the same to learning disentangled stroke features for writer identification. Here we propose a modified formulation of the decorrelation-based framework named SWIS, originally proposed for signature verification, by standardizing the features along each dimension on top of the existing framework. We show that the proposed framework outperforms the contemporary self-supervised learning framework on the writer identification benchmark and also outperforms several supervised methods as well. To the best of our knowledge, this work is the first of its kind to apply self-supervised learning for learning representations for writer verification tasks.<|reference_end|>
arxiv
@article{maitra2024decorrelation-based, title={Decorrelation-based Self-Supervised Visual Representation Learning for Writer Identification}, author={Arkadip Maitra and Shree Mitra and Siladittya Manna and Saumik Bhattacharya and Umapada Pal}, journal={arXiv preprint arXiv:2410.01441}, year={2024}, archivePrefix={arXiv}, eprint={2410.01441}, primaryClass={cs.CV} }
maitra2024decorrelation-based
arxiv-664457
2410.01442
Using a Performance Model to Implement a Superscalar CVA6
<|reference_start|>Using a Performance Model to Implement a Superscalar CVA6: A performance model of CVA6 RISC-V processor is built to evaluate performance related modifications before implementing them in RTL. Its accuracy is 99.2% on CoreMark. This model is used to evaluate a superscalar feature for CVA6. During design phase, the model helped detecting and fixing performance bugs. The superscalar feature resulted in a CVA6 performance improvement of 40% on CoreMark.<|reference_end|>
arxiv
@article{allart2024using, title={Using a Performance Model to Implement a Superscalar CVA6}, author={Côme Allart, Jean-Roch Coulon, André Sintzoff, Olivier Potin, Jean-Baptiste Rigaud}, journal={arXiv preprint arXiv:2410.01442}, year={2024}, archivePrefix={arXiv}, eprint={2410.01442}, primaryClass={cs.AR} }
allart2024using
arxiv-664458
2410.01443
SurgPointTransformer: Vertebrae Shape Completion with RGB-D Data
<|reference_start|>SurgPointTransformer: Vertebrae Shape Completion with RGB-D Data: State-of-the-art computer- and robot-assisted surgery systems heavily depend on intraoperative imaging technologies such as CT and fluoroscopy to generate detailed 3D visualization of the patient's anatomy. While imaging techniques are highly accurate, they are based on ionizing radiation and expose patients and clinicians to radiation. This study introduces an alternative, radiation-free approach for reconstructing the 3D spine anatomy using RGB-D data. Drawing inspiration from the 3D "mental map" that surgeons form during surgeries, we introduce SurgPointTransformer, a shape completion approach for surgical applications that can accurately reconstruct the unexposed spine regions from sparse observations of the exposed surface. Our method involves two main steps: segmentation and shape completion. The segmentation step includes spinal column localization and segmentation, followed by vertebra-wise segmentation. The segmented vertebra point clouds are then subjected to SurgPointTransformer, which leverages an attention mechanism to learn patterns between visible surface features and the underlying anatomy. For evaluation, we utilize an ex-vivo dataset of nine specimens. Their CT data are used to establish the ground truth against which the outputs of our method are compared. Our method significantly outperforms the state-of-the-art baselines, achieving an average Chamfer Distance of 5.39, an F-Score of 0.85, an Earth Mover's Distance of 0.011, and a Signal-to-Noise Ratio of 22.90 dB. This study demonstrates the potential of our reconstruction method for 3D vertebral shape completion. It enables 3D reconstruction of the entire lumbar spine and surgical guidance without ionizing radiation or invasive imaging. Our work contributes to computer-aided and robot-assisted surgery, advancing the perception and intelligence of these systems.<|reference_end|>
arxiv
@article{massalimova2024surgpointtransformer:, title={SurgPointTransformer: Vertebrae Shape Completion with RGB-D Data}, author={Aidana Massalimova, Florentin Liebmann, Sascha Jecklin, Fabio Carrillo, Mazda Farshad and Philipp Fürnstahl}, journal={arXiv preprint arXiv:2410.01443}, year={2024}, archivePrefix={arXiv}, eprint={2410.01443}, primaryClass={eess.IV cs.CV} }
massalimova2024surgpointtransformer:
arxiv-664459
2410.01444
Geometric Signatures of Compositionality Across a Language Model's Lifetime
<|reference_start|>Geometric Signatures of Compositionality Across a Language Model's Lifetime: Compositionality, the notion that the meaning of an expression is constructed from the meaning of its parts and syntactic rules, permits the infinite productivity of human language. For the first time, artificial language models (LMs) are able to match human performance in a number of compositional generalization tasks. However, much remains to be understood about the representational mechanisms underlying these abilities. We take a high-level geometric approach to this problem by relating the degree of compositionality in a dataset to the intrinsic dimensionality of its representations under an LM, a measure of feature complexity. We find not only that the degree of dataset compositionality is reflected in representations' intrinsic dimensionality, but that the relationship between compositionality and geometric complexity arises due to learned linguistic features over training. Finally, our analyses reveal a striking contrast between linear and nonlinear dimensionality, showing that they respectively encode formal and semantic aspects of linguistic composition.<|reference_end|>
arxiv
@article{lee2024geometric, title={Geometric Signatures of Compositionality Across a Language Model's Lifetime}, author={Jin Hwa Lee, Thomas Jiralerspong, Lei Yu, Yoshua Bengio, Emily Cheng}, journal={arXiv preprint arXiv:2410.01444}, year={2024}, archivePrefix={arXiv}, eprint={2410.01444}, primaryClass={cs.CL cs.AI cs.IT cs.LG math.IT} }
lee2024geometric
arxiv-664460
2410.01448
Analyzing Byte-Pair Encoding on Monophonic and Polyphonic Symbolic Music: A Focus on Musical Phrase Segmentation
<|reference_start|>Analyzing Byte-Pair Encoding on Monophonic and Polyphonic Symbolic Music: A Focus on Musical Phrase Segmentation: Byte-Pair Encoding (BPE) is an algorithm commonly used in Natural Language Processing to build a vocabulary of subwords, which has been recently applied to symbolic music. Given that symbolic music can differ significantly from text, particularly with polyphony, we investigate how BPE behaves with different types of musical content. This study provides a qualitative analysis of BPE's behavior across various instrumentations and evaluates its impact on a musical phrase segmentation task for both monophonic and polyphonic music. Our findings show that the BPE training process is highly dependent on the instrumentation and that BPE "supertokens" succeed in capturing abstract musical content. In a musical phrase segmentation task, BPE notably improves performance in a polyphonic setting, but enhances performance in monophonic tunes only within a specific range of BPE merges.<|reference_end|>
arxiv
@article{le2024analyzing, title={Analyzing Byte-Pair Encoding on Monophonic and Polyphonic Symbolic Music: A Focus on Musical Phrase Segmentation}, author={Dinh-Viet-Toan Le, Louis Bigo, Mikaela Keller}, journal={arXiv preprint arXiv:2410.01448}, year={2024}, archivePrefix={arXiv}, eprint={2410.01448}, primaryClass={cs.IR cs.CL cs.SD eess.AS} }
le2024analyzing
arxiv-664461
2410.01450
Agent-Driven Large Language Models for Mandarin Lyric Generation
<|reference_start|>Agent-Driven Large Language Models for Mandarin Lyric Generation: Generative Large Language Models have shown impressive in-context learning abilities, performing well across various tasks with just a prompt. Previous melody-to-lyric research has been limited by scarce high-quality aligned data and unclear standards for creativity. Most efforts focused on general themes or emotions, which are less valuable given current language model capabilities. In tonal contour languages like Mandarin, pitch contours are influenced by both melody and tone, leading to variations in lyric-melody fit. Our study, validated by the Mpop600 dataset, confirms that lyricists and melody writers consider this fit during their composition process. In this research, we developed a multi-agent system that decomposes the melody-to-lyric task into sub-tasks, with each agent controlling rhyme, syllable count, lyric-melody alignment, and consistency. Listening tests were conducted via a diffusion-based singing voice synthesizer to evaluate the quality of lyrics generated by different agent groups.<|reference_end|>
arxiv
@article{liu2024agent-driven, title={Agent-Driven Large Language Models for Mandarin Lyric Generation}, author={Hong-Hsiang Liu, Yi-Wen Liu}, journal={arXiv preprint arXiv:2410.01450}, year={2024}, archivePrefix={arXiv}, eprint={2410.01450}, primaryClass={cs.CL cs.AI} }
liu2024agent-driven
arxiv-664462
2410.01452
Ensembles provably learn equivariance through data augmentation
<|reference_start|>Ensembles provably learn equivariance through data augmentation: Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.<|reference_end|>
arxiv
@article{nordenfors2024ensembles, title={Ensembles provably learn equivariance through data augmentation}, author={Oskar Nordenfors, Axel Flinth}, journal={arXiv preprint arXiv:2410.01452}, year={2024}, archivePrefix={arXiv}, eprint={2410.01452}, primaryClass={cs.LG cs.NA math.NA} }
nordenfors2024ensembles
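A toy numerical check in the spirit of the record above (an assumed setup, not the authors' experiments): train an ensemble of small MLPs with full cyclic-shift augmentation on a shift-invariant target and compare the invariance gap of the ensemble mean against the average gap of its members.

```python
# Toy check: does the ensemble mean become more equivariant (here invariant)
# than its individual members under the cyclic-shift group?
import torch

torch.manual_seed(0)
d, n_models, steps = 8, 32, 500

def orbit(x):
    # All cyclic shifts of a batch: the full group orbit used for augmentation.
    return torch.cat([torch.roll(x, k, dims=1) for k in range(d)])

models = [torch.nn.Sequential(torch.nn.Linear(d, 16), torch.nn.Tanh(),
                              torch.nn.Linear(16, 1)) for _ in range(n_models)]
for m in models:
    opt = torch.optim.Adam(m.parameters(), lr=1e-2)
    for _ in range(steps):
        xa = orbit(torch.randn(64, d))                 # full augmentation
        y = xa.sum(dim=1, keepdim=True).abs()          # shift-invariant target
        loss = ((m(xa) - y) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

x = torch.randn(256, d)
with torch.no_grad():
    gaps = [(m(torch.roll(x, 1, 1)) - m(x)).abs().mean() for m in models]
    single = torch.stack(gaps).mean()
    ens = lambda z: torch.stack([m(z) for m in models]).mean(0)
    ensemble = (ens(torch.roll(x, 1, 1)) - ens(x)).abs().mean()
print(f"mean single-model gap: {single.item():.4f}")
print(f"ensemble gap:          {ensemble.item():.4f}")
```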
arxiv-664463
2410.01454
The Impact of the COVID-19 Pandemic on Women's Contribution to Public Code
<|reference_start|>The Impact of the COVID-19 Pandemic on Women's Contribution to Public Code: Despite its promise of openness and inclusiveness, the development of free and open source software (FOSS) remains significantly unbalanced in terms of gender representation among contributors. To assist open source project maintainers and communities in addressing this imbalance, it is crucial to understand the causes of this inequality. In this study, we aim to establish how the COVID-19 pandemic has influenced the ability of women to contribute to public code. To do so, we use the Software Heritage archive, which holds the largest dataset of commits to public code, and the difference-in-differences (DID) methodology from econometrics that enables the derivation of causality from historical data. Our findings show that the COVID-19 pandemic has disproportionately impacted women's ability to contribute to the development of public code, relative to men. Further, our observations of specific contributor subgroups indicate that COVID-19 particularly affected women hobbyists, identified using contribution patterns and email address domains.<|reference_end|>
arxiv
@article{casanueva2024the, title={The Impact of the COVID-19 Pandemic on Women's Contribution to Public Code}, author={Annalí Casanueva, Davide Rossi (UNIBO), Stefano Zacchiroli (IP Paris, LTCI, ACES, INFRES), Théo Zimmermann (IP Paris, LTCI, ACES, INFRES)}, journal={arXiv preprint arXiv:2410.01454}, year={2024}, archivePrefix={arXiv}, eprint={2410.01454}, primaryClass={cs.SE cs.CY} }
casanueva2024the
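For readers unfamiliar with the difference-in-differences (DID) methodology the study applies, a hedged sketch of the estimate (the file name, column names, and clustering choice are illustrative, not the Software Heritage schema):

```python
# DID sketch: the interaction coefficient woman:post estimates the change in
# women's contributions after COVID-19 onset, relative to the change for men.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("commits_per_author_month.csv")     # hypothetical input
df["woman"] = df["gender"].eq("female").astype(int)
df["post"] = (df["month"] >= "2020-03").astype(int)  # months as "YYYY-MM"

model = smf.ols("commits ~ woman * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["author_id"]})
print(model.params["woman:post"], model.pvalues["woman:post"])
```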
arxiv-664464
2410.01457
Verbalized Graph Representation Learning: A Fully Interpretable Graph Model Based on Large Language Models Throughout the Entire Process
<|reference_start|>Verbalized Graph Representation Learning: A Fully Interpretable Graph Model Based on Large Language Models Throughout the Entire Process: Representation learning on text-attributed graphs (TAGs) has attracted significant interest due to its wide-ranging real-world applications, particularly through Graph Neural Networks (GNNs). Traditional GNN methods focus on encoding the structural information of graphs, often using shallow text embeddings for node or edge attributes. This limits the model's ability to understand the rich semantic information in the data and to reason about complex downstream tasks, and it also lacks interpretability. With the rise of large language models (LLMs), an increasing number of studies are combining them with GNNs for graph representation learning and downstream tasks. While these approaches effectively leverage the rich semantic information in TAGs datasets, their main drawback is that they are only partially interpretable, which limits their application in critical fields. In this paper, we propose a verbalized graph representation learning (VGRL) method which is fully interpretable. In contrast to traditional graph machine learning models, which are usually optimized within a continuous parameter space, VGRL constrains this parameter space to textual descriptions, which ensures complete interpretability throughout the entire process, making it easier for users to understand and trust the decisions of the model. We conduct several studies to empirically evaluate the effectiveness of VGRL, and we believe this method can serve as a stepping stone in graph representation learning.<|reference_end|>
arxiv
@article{ji2024verbalized, title={Verbalized Graph Representation Learning: A Fully Interpretable Graph Model Based on Large Language Models Throughout the Entire Process}, author={Xingyu Ji, Jiale Liu, Lu Li, Maojun Wang, Zeyu Zhang}, journal={arXiv preprint arXiv:2410.01457}, year={2024}, archivePrefix={arXiv}, eprint={2410.01457}, primaryClass={cs.LG} }
ji2024verbalized
arxiv-664465
2410.01458
From Reward Shaping to Q-Shaping: Achieving Unbiased Learning with LLM-Guided Knowledge
<|reference_start|>From Reward Shaping to Q-Shaping: Achieving Unbiased Learning with LLM-Guided Knowledge: Q-shaping is an extension of Q-value initialization and serves as an alternative to reward shaping for incorporating domain knowledge to accelerate agent training, thereby improving sample efficiency by directly shaping Q-values. This approach is both general and robust across diverse tasks, allowing for immediate impact assessment while guaranteeing optimality. We evaluated Q-shaping across 20 different environments using a large language model (LLM) as the heuristic provider. The results demonstrate that Q-shaping significantly enhances sample efficiency, achieving a \textbf{16.87\%} improvement over the best baseline in each environment and a \textbf{253.80\%} improvement compared to LLM-based reward shaping methods. These findings establish Q-shaping as a superior and unbiased alternative to conventional reward shaping in reinforcement learning.<|reference_end|>
arxiv
@article{wu2024from, title={From Reward Shaping to Q-Shaping: Achieving Unbiased Learning with LLM-Guided Knowledge}, author={Xiefeng Wu}, journal={arXiv preprint arXiv:2410.01458}, year={2024}, archivePrefix={arXiv}, eprint={2410.01458}, primaryClass={cs.AI cs.LG} }
wu2024from
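A minimal tabular sketch of the Q-shaping idea described above (the heuristic interface is an assumption, and the paper targets deep RL rather than Q-tables):

```python
# Q-shaping sketch: initialize the Q-table with an LLM-provided heuristic
# instead of shaping rewards. The heuristic biases early exploration, but
# standard TD updates leave the fixed point (the optimal Q-values) unchanged
# in the tabular case, which is the claimed unbiasedness.
import numpy as np

n_states, n_actions, gamma, alpha = 50, 4, 0.99, 0.1

def llm_heuristic(s, a):
    # Stand-in for domain knowledge elicited from an LLM.
    return 1.0 if a == s % n_actions else 0.0

Q = np.array([[llm_heuristic(s, a) for a in range(n_actions)]
              for s in range(n_states)])

def td_update(s, a, r, s_next):
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
```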
arxiv-664466
2410.01459
A Smart Chair for Health Monitoring in Daily Life
<|reference_start|>A Smart Chair for Health Monitoring in Daily Life: Recent research has focused on the risks associated with poor sitting posture and the impact of sitting on biological parameters, such as heart rate, because prolonged sitting is common across all ages and professions. In this work, we propose a novel approach that can simultaneously display posture and heart rate in real time. In this device, pressure sensors are embedded into a flexible, separate cushion that is easily placed on any chair to capture sitting behaviour, and a smartwatch-like PPG module is worn on the user's wrist. For posture classification, readings from the ten pressure sensors under the seat bottom serve as inputs to four machine learning models, giving a high accuracy of 99 percent. In addition, the electrocardiography recording module produces results consistent with a commercial device from DFRobot. Another advantage of this smart chair is that it not only simultaneously displays sitting posture and heart rate on external devices such as laptops, mobile phones, or televisions through microcontrollers, but also reveals the relationship between them, helping people adjust their sitting behaviour to avoid affecting their heart rate. The smart chair is expected to be useful equipment for people with a sedentary lifestyle, especially office workers.<|reference_end|>
arxiv
@article{huong2024a, title={A Smart Chair for Health Monitoring in Daily Life}, author={Nguyen Thi Minh Huong, Vo Quoc Bao, Nguyen Trung Hau, Huynh Quang Linh}, journal={arXiv preprint arXiv:2410.01459}, year={2024}, archivePrefix={arXiv}, eprint={2410.01459}, primaryClass={cs.HC eess.SP} }
huong2024a
arxiv-664467
2410.01463
Selective Aggregation for Low-Rank Adaptation in Federated Learning
<|reference_start|>Selective Aggregation for Low-Rank Adaptation in Federated Learning: We investigate LoRA in federated learning through the lens of the asymmetry analysis of the learned $A$ and $B$ matrices. In doing so, we uncover that $A$ matrices are responsible for learning general knowledge, while $B$ matrices focus on capturing client-specific knowledge. Based on this finding, we introduce Federated Share-A Low-Rank Adaptation (FedSA-LoRA), which employs two low-rank trainable matrices $A$ and $B$ to model the weight update, but only $A$ matrices are shared with the server for aggregation. Moreover, we delve into the relationship between the learned $A$ and $B$ matrices in other LoRA variants, such as rsLoRA and VeRA, revealing a consistent pattern. Consequently, we extend our FedSA-LoRA method to these LoRA variants, resulting in FedSA-rsLoRA and FedSA-VeRA. In this way, we establish a general paradigm for integrating LoRA with FL, offering guidance for future work on subsequent LoRA variants combined with FL. Extensive experimental results on natural language understanding and generation tasks demonstrate the effectiveness of the proposed method. Our code is available at https://github.com/Pengxin-Guo/FedSA-LoRA.<|reference_end|>
arxiv
@article{guo2024selective, title={Selective Aggregation for Low-Rank Adaptation in Federated Learning}, author={Pengxin Guo, Shuang Zeng, Yanran Wang, Huijie Fan, Feifei Wang, Liangqiong Qu}, journal={arXiv preprint arXiv:2410.01463}, year={2024}, archivePrefix={arXiv}, eprint={2410.01463}, primaryClass={cs.LG} }
guo2024selective
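A minimal sketch of the aggregation rule the abstract describes, under FedAvg-style uniform weighting (the weighting and naming are assumptions):

```python
# FedSA-LoRA sketch: clients train LoRA pairs (A, B), but only the A matrices
# (general knowledge) are averaged on the server; B (client-specific
# knowledge) stays local.
import torch

def server_aggregate_A(client_As):
    """client_As: list of {layer_name: A tensor}, one dict per client."""
    keys = client_As[0].keys()
    return {k: torch.stack([c[k] for c in client_As]).mean(0) for k in keys}

# Each client then reloads the shared A and keeps its personalized B:
#   W_eff = W_frozen + B_local @ A_shared
```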
arxiv-664468
2410.01464
Flow Matching for Accelerated Simulation of Atomic Transport in Materials
<|reference_start|>Flow Matching for Accelerated Simulation of Atomic Transport in Materials: We introduce LiFlow, a generative framework to accelerate molecular dynamics (MD) simulations for crystalline materials that formulates the task as conditional generation of atomic displacements. The model uses flow matching, with a Propagator submodel to generate atomic displacements and a Corrector to locally correct unphysical geometries, and incorporates an adaptive prior based on the Maxwell-Boltzmann distribution to account for chemical and thermal conditions. We benchmark LiFlow on a dataset comprising 25-ps trajectories of lithium diffusion across 4,186 solid-state electrolyte (SSE) candidates at four temperatures. The model obtains a consistent Spearman rank correlation of 0.7-0.8 for lithium mean squared displacement (MSD) predictions on unseen compositions. Furthermore, LiFlow generalizes from short training trajectories to larger supercells and longer simulations while maintaining high accuracy. With speed-ups of up to 600,000$\times$ compared to first-principles methods, LiFlow enables scalable simulations at significantly larger length and time scales.<|reference_end|>
arxiv
@article{nam2024flow, title={Flow Matching for Accelerated Simulation of Atomic Transport in Materials}, author={Juno Nam, Sulin Liu, Gavin Winter, KyuJung Jun, Soojung Yang, Rafael Gómez-Bombarelli}, journal={arXiv preprint arXiv:2410.01464}, year={2024}, archivePrefix={arXiv}, eprint={2410.01464}, primaryClass={cond-mat.mtrl-sci cs.LG physics.comp-ph} }
nam2024flow
arxiv-664469
2410.01465
A generalized spectral concentration problem and the varying masks algorithm
<|reference_start|>A generalized spectral concentration problem and the varying masks algorithm: In this paper we generalize the spectral concentration problem as formulated by Slepian, Pollak and Landau in the 1960s. We show that a generalized version with arbitrary space and Fourier masks is well-posed, and we prove some new results concerning general quadratic domains and Gaussian filters. We also propose a more general splitting representation of the spectral concentration operator, allowing the construction of quasi-modes in some situations. We then study its discretization and illustrate the fact that standard eigen-algorithms are not robust because of a clustering of eigenvalues. We propose a new alternative algorithm that can be implemented in any dimension and for any domain shape, and that gives very efficient results in practice.<|reference_end|>
arxiv
@article{faou2024a, title={A generalized spectral concentration problem and the varying masks algorithm}, author={Erwan Faou (IRMAR, Inria), Yoann Le Henaff}, journal={arXiv preprint arXiv:2410.01465}, year={2024}, archivePrefix={arXiv}, eprint={2410.01465}, primaryClass={math.NA cs.NA math.SP} }
faou2024a
arxiv-664470
2410.01466
A complete formalization of Fermat's Last Theorem for regular primes in Lean
<|reference_start|>A complete formalization of Fermat's Last Theorem for regular primes in Lean: We formalize a complete proof of the regular case of Fermat's Last Theorem in the Lean4 theorem prover. Our formalization includes a proof of Kummer's lemma, which is the main obstruction to Fermat's Last Theorem for regular primes. Rather than following the modern proof of Kummer's lemma via class field theory, we prove it by using Hilbert's Theorems 90-94 in a way that is more amenable to formalization.<|reference_end|>
arxiv
@article{brasca2024a, title={A complete formalization of Fermat's Last Theorem for regular primes in Lean}, author={Riccardo Brasca (IMJ-PRG (UMR_7586), UPCité), Christopher Birkbeck (UEA), Eric Rodriguez Boidi, Alex Best, Ruben van De Velde, Andrew Yang}, journal={arXiv preprint arXiv:2410.01466}, year={2024}, archivePrefix={arXiv}, eprint={2410.01466}, primaryClass={cs.FL cs.LO math.NT} }
brasca2024a
arxiv-664471
2410.01467
A fast numerical scheme for fractional viscoelastic models of wave propagation
<|reference_start|>A fast numerical scheme for fractional viscoelastic models of wave propagation: We propose a fast scheme for approximating the Mittag-Leffler function by an efficient sum-of-exponentials (SOE), and apply the scheme to the viscoelastic model of wave propagation with mixed finite element methods for the spatial discretization and the Newmark-beta scheme for the second-order temporal derivative. Compared with traditional L1 scheme for fractional derivative, our fast scheme reduces the memory complexity from $\mathcal O(N_sN) $ to $\mathcal O(N_sN_{exp})$ and the computation complexity from $\mathcal O(N_sN^2)$ to $\mathcal O(N_sN_{exp}N)$, where $N$ denotes the total number of temporal grid points, $N_{exp}$ is the number of exponentials in SOE, and $N_s$ represents the complexity of memory and computation related to the spatial discretization. Numerical experiments are provided to verify the theoretical results.<|reference_end|>
arxiv
@article{yuan2024a, title={A fast numerical scheme for fractional viscoelastic models of wave propagation}, author={Hao Yuan and Xiaoping Xie}, journal={arXiv preprint arXiv:2410.01467}, year={2024}, archivePrefix={arXiv}, eprint={2410.01467}, primaryClass={math.NA cs.NA} }
yuan2024a
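To see where the stated complexity reduction comes from, consider this hedged sketch: once the memory kernel is approximated as K(t) ≈ Σ_j w_j exp(-s_j t), the history convolution obeys a one-step recurrence and old time levels never need to be stored (the nodes and weights below are placeholders for the paper's Mittag-Leffler fit):

```python
# Fast-history sketch: h_n ≈ ∑_{k<n} K(t_n - t_k) f_k Δt with an SOE kernel,
# updated in O(N_exp) work and memory per step instead of O(n).
import numpy as np

def fast_history(f_vals, dt, s, w):
    H = np.zeros_like(s, dtype=float)   # one scalar state per exponential
    out = []
    for f_k in f_vals:
        H = np.exp(-s * dt) * (H + w * f_k * dt)   # recurrence per mode
        out.append(H.sum())
    return np.array(out)
```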
arxiv-664472
2410.01469
TIGER: Time-frequency Interleaved Gain Extraction and Reconstruction for Efficient Speech Separation
<|reference_start|>TIGER: Time-frequency Interleaved Gain Extraction and Reconstruction for Efficient Speech Separation: In recent years, much speech separation research has focused primarily on improving model performance. However, for low-latency speech processing systems, high efficiency is equally important. Therefore, we propose a speech separation model with significantly reduced parameters and computational costs: Time-frequency Interleaved Gain Extraction and Reconstruction network (TIGER). TIGER leverages prior knowledge to divide frequency bands and compresses frequency information. We employ a multi-scale selective attention module to extract contextual features, while introducing a full-frequency-frame attention module to capture both temporal and frequency contextual information. Additionally, to more realistically evaluate the performance of speech separation models in complex acoustic environments, we introduce a dataset called EchoSet. This dataset includes noise and more realistic reverberation (e.g., considering object occlusions and material properties), with speech from two speakers overlapping at random proportions. Experimental results showed that models trained on EchoSet had better generalization ability than those trained on other datasets to the data collected in the physical world, which validated the practical value of the EchoSet. On EchoSet and real-world data, TIGER significantly reduces the number of parameters by 94.3% and the MACs by 95.3% while achieving performance surpassing state-of-the-art (SOTA) model TF-GridNet. This is the first speech separation model with fewer than 1 million parameters that achieves performance comparable to the SOTA model.<|reference_end|>
arxiv
@article{xu2024tiger:, title={TIGER: Time-frequency Interleaved Gain Extraction and Reconstruction for Efficient Speech Separation}, author={Mohan Xu, Kai Li, Guo Chen, Xiaolin Hu}, journal={arXiv preprint arXiv:2410.01469}, year={2024}, archivePrefix={arXiv}, eprint={2410.01469}, primaryClass={cs.SD cs.AI eess.AS} }
xu2024tiger:
arxiv-664473
2410.01470
Peeling Back the Layers: An In-Depth Evaluation of Encoder Architectures in Neural News Recommenders
<|reference_start|>Peeling Back the Layers: An In-Depth Evaluation of Encoder Architectures in Neural News Recommenders: Encoder architectures play a pivotal role in neural news recommenders by embedding the semantic and contextual information of news and users. Thus, research has heavily focused on enhancing the representational capabilities of news and user encoders to improve recommender performance. Despite the significant impact of encoder architectures on the quality of news and user representations, existing analyses of encoder designs focus only on the overall downstream recommendation performance. This offers a one-sided assessment of the encoders' similarity, ignoring more nuanced differences in their behavior, and potentially resulting in sub-optimal model selection. In this work, we perform a comprehensive analysis of encoder architectures in neural news recommender systems. We systematically evaluate the most prominent news and user encoder architectures, focusing on their (i) representational similarity, measured with Centered Kernel Alignment (CKA), (ii) overlap of generated recommendation lists, quantified with the Jaccard similarity, and (iii) the overall recommendation performance. Our analysis reveals that the complexity of certain encoding techniques is often empirically unjustified, highlighting the potential for simpler, more efficient architectures. By isolating the effects of individual components, we provide valuable insights for researchers and practitioners to make better informed decisions about encoder selection and avoid unnecessary complexity in the design of news recommenders.<|reference_end|>
arxiv
@article{iana2024peeling, title={Peeling Back the Layers: An In-Depth Evaluation of Encoder Architectures in Neural News Recommenders}, author={Andreea Iana, Goran Glavaš, Heiko Paulheim}, journal={arXiv preprint arXiv:2410.01470}, year={2024}, archivePrefix={arXiv}, eprint={2410.01470}, primaryClass={cs.IR cs.AI} }
iana2024peeling
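Two of the measures named above are compact enough to state in code; a standard sketch of linear CKA and top-k Jaccard overlap (not the authors' implementation):

```python
# X, Y: [n_samples, dim] outputs of two encoders on the same inputs.
import numpy as np

def linear_cka(X, Y):
    X = X - X.mean(0, keepdims=True)
    Y = Y - Y.mean(0, keepdims=True)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") *
                   np.linalg.norm(Y.T @ Y, "fro"))

def jaccard_at_k(rec_a, rec_b, k=10):
    """Overlap of two top-k recommendation lists."""
    a, b = set(rec_a[:k]), set(rec_b[:k])
    return len(a & b) / len(a | b)
```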
arxiv-664474
2410.01473
SinkSAM: A Monocular Depth-Guided SAM Framework for Automatic Sinkhole Segmentation
<|reference_start|>SinkSAM: A Monocular Depth-Guided SAM Framework for Automatic Sinkhole Segmentation: Soil sinkholes significantly influence soil degradation, but their irregular shapes, along with interference from shadow and vegetation, make it challenging to accurately quantify their properties using remotely sensed data. We present a novel framework for sinkhole segmentation that combines traditional topographic computations of closed depressions with the newly developed prompt-based Segment Anything Model (SAM). Within this framework, termed SinkSAM, we highlight four key improvements: (1) The integration of topographic computations with SAM enables pixel-level refinement of sinkhole boundaries segmentation; (2) A coherent mathematical prompting strategy, based on closed depressions, addresses the limitations of purely learning-based models (CNNs) in detecting and segmenting undefined sinkhole features, while improving generalization to new, unseen regions; (3) Using Depth Anything V2 monocular depth for automatic prompts eliminates photogrammetric biases, enabling sinkhole mapping without the dependence on LiDAR data; and (4) An established sinkhole database facilitates fine-tuning of SAM, improving its zero-shot performance in sinkhole segmentation. These advancements allow the deployment of SinkSAM, in an unseen test area, in the highly variable semiarid region, achieving an intersection-over-union (IoU) of 40.27\% and surpassing previous results. This paper also presents the first SAM implementation for sinkhole segmentation and demonstrates the robustness of SinkSAM in extracting sinkhole maps using a single RGB image.<|reference_end|>
arxiv
@article{rafaeli2024sinksam:, title={SinkSAM: A Monocular Depth-Guided SAM Framework for Automatic Sinkhole Segmentation}, author={Osher Rafaeli, Tal Svoray and Ariel Nahlieli}, journal={arXiv preprint arXiv:2410.01473}, year={2024}, archivePrefix={arXiv}, eprint={2410.01473}, primaryClass={cs.CV} }
rafaeli2024sinksam:
arxiv-664475
2410.01476
Reducing Variance in Meta-Learning via Laplace Approximation for Regression Tasks
<|reference_start|>Reducing Variance in Meta-Learning via Laplace Approximation for Regression Tasks: Given a finite set of sample points, meta-learning algorithms aim to learn an optimal adaptation strategy for new, unseen tasks. Often, this data can be ambiguous as it might belong to different tasks concurrently. This is particularly the case in meta-regression tasks. In such cases, the estimated adaptation strategy is subject to high variance due to the limited amount of support data for each task, which often leads to sub-optimal generalization performance. In this work, we address the problem of variance reduction in gradient-based meta-learning and formalize the class of problems prone to this, a condition we refer to as \emph{task overlap}. Specifically, we propose a novel approach that reduces the variance of the gradient estimate by weighing each support point individually by the variance of its posterior over the parameters. To estimate the posterior, we utilize the Laplace approximation, which allows us to express the variance in terms of the curvature of the loss landscape of our meta-learner. Experimental results demonstrate the effectiveness of the proposed method and highlight the importance of variance reduction in meta-learning.<|reference_end|>
arxiv
@article{reichlin2024reducing, title={Reducing Variance in Meta-Learning via Laplace Approximation for Regression Tasks}, author={Alfredo Reichlin, Gustaf Tegnér, Miguel Vasco, Hang Yin, Mårten Björkman and Danica Kragic}, journal={arXiv preprint arXiv:2410.01476}, year={2024}, archivePrefix={arXiv}, eprint={2410.01476}, primaryClass={cs.LG stat.ML} }
reichlin2024reducing
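A rough sketch of the weighting idea as we read it (heavily assumed: a diagonal, gradient-based curvature proxy stands in for the Laplace approximation, and the variance-to-weight mapping is illustrative):

```python
# Weight each support point by (an inverse function of) its Laplace posterior
# variance, so ambiguous points contribute less to the adaptation gradient.
import torch

def weighted_support_loss(model, loss_fn, xs, ys, prior_prec=1.0):
    per_point, weights = [], []
    for x, y in zip(xs, ys):
        l = loss_fn(model(x[None]), y[None])
        g = torch.autograd.grad(l, list(model.parameters()), retain_graph=True)
        # Diagonal Fisher proxy for curvature; Laplace posterior variance
        # behaves like 1 / (prior precision + curvature).
        curv = sum(float((gi ** 2).sum()) for gi in g)
        var = 1.0 / (prior_prec + curv)
        per_point.append(l)
        weights.append(1.0 / (1.0 + var))
    w = torch.tensor(weights)
    w = w / w.sum()
    return sum(wi * li for wi, li in zip(w, per_point))
```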
arxiv-664476
2410.01480
Introducing Flexible Monotone Multiple Choice Item Response Theory Models and Bit Scales
<|reference_start|>Introducing Flexible Monotone Multiple Choice Item Response Theory Models and Bit Scales: Item Response Theory (IRT) is a powerful statistical approach for evaluating test items and determining test taker abilities through response analysis. An IRT model that better fits the data leads to more accurate latent trait estimates. In this study, we present a new model for multiple choice data, the monotone multiple choice (MMC) model, which we fit using autoencoders. Using both simulated scenarios and real data from the Swedish Scholastic Aptitude Test, we demonstrate empirically that the MMC model outperforms the traditional nominal response IRT model in terms of fit. Furthermore, we illustrate how the latent trait scale from any fitted IRT model can be transformed into a ratio scale, aiding in score interpretation and making it easier to compare different types of IRT models. We refer to these new scales as bit scales. Bit scales are especially useful for models for which minimal or no assumptions are made for the latent trait scale distributions, such as for the autoencoder fitted models in this study.<|reference_end|>
arxiv
@article{wallmark2024introducing, title={Introducing Flexible Monotone Multiple Choice Item Response Theory Models and Bit Scales}, author={Joakim Wallmark, Maria Josefsson, Marie Wiberg}, journal={arXiv preprint arXiv:2410.01480}, year={2024}, archivePrefix={arXiv}, eprint={2410.01480}, primaryClass={stat.ML cs.LG stat.ME} }
wallmark2024introducing
arxiv-664477
2410.01481
SonicSim: A customizable simulation platform for speech processing in moving sound source scenarios
<|reference_start|>SonicSim: A customizable simulation platform for speech processing in moving sound source scenarios: The systematic evaluation of speech separation and enhancement models under moving sound source conditions typically requires extensive data comprising diverse scenarios. However, real-world datasets often contain insufficient data to meet the training and evaluation requirements of models. Although synthetic datasets offer a larger volume of data, their acoustic simulations lack realism. Consequently, neither real-world nor synthetic datasets effectively fulfill practical needs. To address these issues, we introduce SonicSim, a synthetic toolkit designed to generate highly customizable data for moving sound sources. SonicSim is developed based on the embodied AI simulation platform Habitat-sim, supporting multi-level adjustments, including scene-level, microphone-level, and source-level, thereby generating more diverse synthetic data. Leveraging SonicSim, we constructed a moving sound source benchmark dataset, SonicSet, using LibriSpeech, the Freesound Dataset 50k (FSD50K), the Free Music Archive (FMA), and 90 scenes from Matterport3D to evaluate speech separation and enhancement models. Additionally, to validate the differences between synthetic data and real-world data, we randomly selected 5 hours of raw data without reverberation from the SonicSet validation set to record a real-world speech separation dataset, which was then compared with the corresponding synthetic datasets. Similarly, we utilized the real-world speech enhancement dataset RealMAN to validate the acoustic gap between other synthetic datasets and the SonicSet dataset for speech enhancement. The results indicate that the synthetic data generated by SonicSim can effectively generalize to real-world scenarios. Demo and code are publicly available at https://cslikai.cn/SonicSim/.<|reference_end|>
arxiv
@article{li2024sonicsim:, title={SonicSim: A customizable simulation platform for speech processing in moving sound source scenarios}, author={Kai Li, Wendi Sang, Chang Zeng, Runxuan Yang, Guo Chen, Xiaolin Hu}, journal={arXiv preprint arXiv:2410.01481}, year={2024}, archivePrefix={arXiv}, eprint={2410.01481}, primaryClass={cs.SD cs.AI eess.AS} }
li2024sonicsim:
arxiv-664478
2410.01482
One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability
<|reference_start|>One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability: Despite the growing use of deep neural networks in safety-critical decision-making, their inherent black-box nature hinders transparency and interpretability. Explainable AI (XAI) methods have thus emerged to understand a model's internal workings, notably attribution methods, also called saliency maps. Conventional attribution methods typically identify the locations -- the where -- of significant regions within an input. However, because they overlook the inherent structure of the input data, these methods often fail to interpret what these regions represent in terms of structural components (e.g., textures in images or transients in sounds). Furthermore, existing methods are usually tailored to a single data modality, limiting their generalizability. In this paper, we propose leveraging the wavelet domain as a robust mathematical foundation for attribution. Our approach, the Wavelet Attribution Method (WAM), extends the existing gradient-based feature attributions into the wavelet domain, providing a unified framework for explaining classifiers across images, audio, and 3D shapes. Empirical evaluations demonstrate that WAM matches or surpasses state-of-the-art methods across faithfulness metrics and models in image, audio, and 3D explainability. Finally, we show how our method explains not only the where -- the important parts of the input -- but also the what -- the relevant patterns in terms of structural components.<|reference_end|>
arxiv
@article{kasmi2024one, title={One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability}, author={Gabriel Kasmi and Amandine Brunetto and Thomas Fel and Jayneel Parekh}, journal={arXiv preprint arXiv:2410.01482}, year={2024}, archivePrefix={arXiv}, eprint={2410.01482}, primaryClass={stat.ML cs.AI cs.LG} }
kasmi2024one
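One simplified reading of the method: for an orthonormal wavelet transform W with x = Wᵀz, the chain rule gives ∂f/∂z = W(∂f/∂x), so a wavelet-domain saliency map can be sketched by transforming the ordinary input gradient (a hedged sketch using pywt, assuming a grayscale input of shape (H, W)):

```python
# Wavelet-domain saliency sketch: take the standard input gradient, then
# express it per wavelet band via a single-level Haar DWT.
import torch
import pywt

def wavelet_saliency(model, x, target):
    x = x.clone().requires_grad_(True)
    score = model(x[None])[0, target]
    score.backward()
    grad = x.grad.detach().cpu().numpy()        # input-space gradient (H, W)
    cA, (cH, cV, cD) = pywt.dwt2(grad, "haar")  # gradient per wavelet band
    return {"approx": cA, "horiz": cH, "vert": cV, "diag": cD}
```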
arxiv-664479
2410.01483
Foldable SuperNets: Scalable Merging of Transformers with Different Initializations and Tasks
<|reference_start|>Foldable SuperNets: Scalable Merging of Transformers with Different Initializations and Tasks: Many recent methods aim to merge neural networks (NNs) with identical architectures trained on different tasks to obtain a single multi-task model. Most existing works tackle the simpler setup of merging NNs initialized from a common pre-trained network, where simple heuristics like weight averaging work well. This work targets a more challenging goal: merging large transformers trained on different tasks from distinct initializations. First, we demonstrate that traditional merging methods fail catastrophically in this setup. To overcome this challenge, we propose Foldable SuperNet Merge (FS-Merge), a method that optimizes a SuperNet to fuse the original models using a feature reconstruction loss. FS-Merge is simple, data-efficient, and capable of merging models of varying widths. We test FS-Merge against existing methods, including knowledge distillation, on MLPs and transformers across various settings, sizes, tasks, and modalities. FS-Merge consistently outperforms them, achieving SOTA results, particularly in limited data scenarios.<|reference_end|>
arxiv
@article{kinderman2024foldable, title={Foldable SuperNets: Scalable Merging of Transformers with Different Initializations and Tasks}, author={Edan Kinderman, Itay Hubara, Haggai Maron, Daniel Soudry}, journal={arXiv preprint arXiv:2410.01483}, year={2024}, archivePrefix={arXiv}, eprint={2410.01483}, primaryClass={cs.LG} }
kinderman2024foldable
arxiv-664480
2410.01485
A Little Goes a Long Way: Efficient Long Context Training and Inference with Partial Contexts
<|reference_start|>A Little Goes a Long Way: Efficient Long Context Training and Inference with Partial Contexts: Training and serving long-context large language models (LLMs) incurs substantial overhead. To address this, two critical steps are often required: a pretrained LLM typically undergoes a separate stage for context length extension by training on long-context data, followed by architectural modifications to reduce the overhead of KV cache during serving. This paper argues that integrating length extension with a GPU-friendly KV cache reduction architecture not only reduces training overhead during length extension, but also achieves better long-context performance. This leads to our proposed LongGen, which finetunes a pretrained LLM into an efficient architecture during length extension. LongGen builds on three key insights: (1) Sparse attention patterns, such as window attention (attending to recent tokens), attention sink (initial ones), and blockwise sparse attention (strided token blocks) are well-suited for building efficient long-context models, primarily due to their GPU-friendly memory access patterns, enabling efficiency gains not just theoretically but in practice as well. (2) It is essential for the model to have direct access to all tokens. A hybrid architecture with 1/3 full attention layers and 2/3 efficient ones achieves a balanced trade-off between efficiency and long-context performance. (3) Lightweight training on 5B long-context data is sufficient to extend the hybrid model's context length from 4K to 128K. We evaluate LongGen on both Llama-2 7B and Llama-2 70B, demonstrating its effectiveness across different scales. During training with 128K-long contexts, LongGen achieves 1.55x training speedup and reduces wall-clock time by 36%, compared to a full-attention baseline. During inference, LongGen reduces KV cache memory by 62%, achieving 1.67x prefilling speedup and 1.41x decoding speedup.<|reference_end|>
arxiv
@article{ge2024a, title={A Little Goes a Long Way: Efficient Long Context Training and Inference with Partial Contexts}, author={Suyu Ge, Xihui Lin, Yunan Zhang, Jiawei Han, Hao Peng}, journal={arXiv preprint arXiv:2410.01485}, year={2024}, archivePrefix={arXiv}, eprint={2410.01485}, primaryClass={cs.CL} }
ge2024a
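A sketch of the GPU-friendly sparse pattern the abstract combines: a few initial sink tokens plus a recent window, under a causal mask (sizes are illustrative; in the hybrid architecture roughly 1/3 of layers would keep the full causal mask):

```python
# Build a boolean attention mask (True = may attend) mixing attention sinks
# with window attention, as in the sparse layers described above.
import torch

def sink_window_mask(seq_len, n_sink=4, window=256):
    i = torch.arange(seq_len)[:, None]   # query positions
    j = torch.arange(seq_len)[None, :]   # key positions
    causal = j <= i
    sink = j < n_sink                    # always-visible initial tokens
    recent = (i - j) < window            # sliding local window
    return causal & (sink | recent)

mask = sink_window_mask(1024)
```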
arxiv-664481
2410.01487
Small Language Models Like Small Vocabularies: Probing the Linguistic Abilities of Grapheme- and Phoneme-Based Baby Llamas
<|reference_start|>Small Language Models Like Small Vocabularies: Probing the Linguistic Abilities of Grapheme- and Phoneme-Based Baby Llamas: Current language models use subword-based tokenization algorithms like Byte Pair Encoding, which put their validity as models of linguistic representations into question. In this paper, we explore the potential of tokenization-free, phoneme- and grapheme-based language models. We demonstrate that small models based on the Llama architecture can achieve strong linguistic performance on standard syntactic and novel lexical/phonetic benchmarks when trained with character-level vocabularies. We further show that phoneme-based models without any graphemic biases almost match grapheme-based models in standard tasks and novel evaluations. Our findings suggest a promising direction for creating more linguistically plausible language models that are better suited for computational studies of language acquisition and processing.<|reference_end|>
arxiv
@article{bunzeck2024small, title={Small Language Models Like Small Vocabularies: Probing the Linguistic Abilities of Grapheme- and Phoneme-Based Baby Llamas}, author={Bastian Bunzeck, Daniel Duran, Leonie Schade, Sina Zarrieß}, journal={arXiv preprint arXiv:2410.01487}, year={2024}, archivePrefix={arXiv}, eprint={2410.01487}, primaryClass={cs.CL} }
bunzeck2024small
arxiv-664482
2410.01488
SecCoder: Towards Generalizable and Robust Secure Code Generation
<|reference_start|>SecCoder: Towards Generalizable and Robust Secure Code Generation: As large models (LMs) have gained widespread acceptance in code-related tasks, their strong generative capacity has greatly promoted the adoption of code LMs. Nevertheless, the security of the generated code has drawn attention because of its potential for harm. Existing secure code generation methods have limited generalizability to unseen test cases and poor robustness against attacked models, leading to safety failures in code generation. In this paper, we propose a generalizable and robust secure code generation method, SecCoder, which uses in-context learning (ICL) and safe demonstrations. A dense retriever is also used to select the most helpful demonstration to maximize the improvement of the generated code's security. Experimental results show the superior generalizability of the proposed model SecCoder compared to the current secure code generation method, achieving a significant security improvement of an average of 7.20% on unseen test cases. The results also show the better robustness of SecCoder compared to the current attacked code LM, achieving a significant security improvement of an average of 7.74%. Our analysis indicates that SecCoder enhances the security of LMs in generating code, and it is more generalizable and robust.<|reference_end|>
arxiv
@article{zhang2024seccoder:, title={SecCoder: Towards Generalizable and Robust Secure Code Generation}, author={Boyu Zhang, Tianyu Du, Junkai Tong, Xuhong Zhang, Kingsum Chow, Sheng Cheng, Xun Wang, Jianwei Yin}, journal={arXiv preprint arXiv:2410.01488}, year={2024}, archivePrefix={arXiv}, eprint={2410.01488}, primaryClass={cs.PL} }
zhang2024seccoder:
arxiv-664483
2410.01490
Extending Context Window of Large Language Models from a Distributional Perspective
<|reference_start|>Extending Context Window of Large Language Models from a Distributional Perspective: Scaling the rotary position embedding (RoPE) has become a common method for extending the context window of RoPE-based large language models (LLMs). However, existing scaling methods often rely on empirical approaches and lack a profound understanding of the internal distribution within RoPE, resulting in suboptimal performance in extending the context window length. In this paper, we propose to optimize the context window extending task from the view of rotary angle distribution. Specifically, we first estimate the distribution of the rotary angles within the model and analyze the extent to which length extension perturbs this distribution. Then, we present a novel extension strategy that minimizes the disturbance between rotary angle distributions to maintain consistency with the pre-training phase, enhancing the model's capability to generalize to longer sequences. Experimental results compared to the strong baseline methods demonstrate that our approach reduces the distributional disturbance by up to 72% when extending LLaMA2's context window to 8k, and by up to 32% when extending to 16k. On the LongBench-E benchmark, our method achieves an average improvement of up to 4.33% over existing state-of-the-art methods. Furthermore, our method maintains the model's performance on the Hugging Face Open LLM benchmark after context window extension, with only an average performance fluctuation ranging from -0.12 to +0.22.<|reference_end|>
arxiv
@article{wu2024extending, title={Extending Context Window of Large Language Models from a Distributional Perspective}, author={Yingsheng Wu, Yuxuan Gu, Xiaocheng Feng, Weihong Zhong, Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin}, journal={arXiv preprint arXiv:2410.01490}, year={2024}, archivePrefix={arXiv}, eprint={2410.01490}, primaryClass={cs.CL} }
wu2024extending
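The quantity under analysis is easy to reproduce: the rotary angles are θ_{m,i} = m · base^{-2i/d}. The sketch below computes them and contrasts naive extrapolation with position interpolation, whose rescaling of positions perturbs the angle distribution seen during pre-training (this illustrates the object of the analysis, not the paper's proposed strategy):

```python
# Rotary angles per (position, frequency) pair, wrapped to [0, 2π).
import numpy as np

def rotary_angles(seq_len, dim, base=10000.0, scale=1.0):
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)  # ω_i, shape (dim/2,)
    pos = np.arange(seq_len) / scale                  # scale > 1: interpolation
    return np.outer(pos, inv_freq) % (2 * np.pi)

train = rotary_angles(4096, 128)             # distribution seen in pre-training
naive = rotary_angles(16384, 128)            # extrapolation: new angle mass
interp = rotary_angles(16384, 128, scale=4)  # compressed toward the train range
```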
arxiv-664484
2410.01495
Open-vocabulary Multimodal Emotion Recognition: Dataset, Metric, and Benchmark
<|reference_start|>Open-vocabulary Multimodal Emotion Recognition: Dataset, Metric, and Benchmark: Multimodal Emotion Recognition (MER) is an important research topic. This paper advocates for a transformative paradigm in MER. The rationale behind our work is that current approaches often rely on a limited set of basic emotion labels, which do not adequately represent the rich spectrum of human emotions. These traditional and overly simplistic emotion categories fail to capture the inherent complexity and subtlety of human emotional experiences, leading to limited generalizability and practicality. Therefore, we propose a new MER paradigm called Open-vocabulary MER (OV-MER), which encompasses a broader range of emotion labels to reflect the richness of human emotions. This paradigm relaxes the label space, allowing for the prediction of arbitrary numbers and categories of emotions. To support this transition, we provide a comprehensive solution that includes a newly constructed database based on LLM and human collaborative annotations, along with corresponding metrics and a series of benchmarks. We hope this work advances emotion recognition from basic emotions to more nuanced emotions, contributing to the development of emotional AI.<|reference_end|>
arxiv
@article{lian2024open-vocabulary, title={Open-vocabulary Multimodal Emotion Recognition: Dataset, Metric, and Benchmark}, author={Zheng Lian, Haiyang Sun, Licai Sun, Lan Chen, Haoyu Chen, Hao Gu, Zhuofan Wen, Shun Chen, Siyuan Zhang, Hailiang Yao, Mingyu Xu, Kang Chen, Bin Liu, Rui Liu, Shan Liang, Ya Li, Jiangyan Yi, Jianhua Tao}, journal={arXiv preprint arXiv:2410.01495}, year={2024}, archivePrefix={arXiv}, eprint={2410.01495}, primaryClass={cs.HC} }
lian2024open-vocabulary
arxiv-664485
2410.01497
DLP-LoRA: Efficient Task-Specific LoRA Fusion with a Dynamic, Lightweight Plugin for Large Language Models
<|reference_start|>DLP-LoRA: Efficient Task-Specific LoRA Fusion with a Dynamic, Lightweight Plugin for Large Language Models: Recent advancements in Large Language Models (LLMs) have achieved robust performance across diverse tasks, but fine-tuning these models for specific domains remains resource-intensive. Parameter-Efficient Fine-Tuning (PEFT) methods like Low-Rank Adaptation (LoRA) address this challenge by fine-tuning a small subset of parameters. However, existing methods for fusing multiple LoRAs lack dynamic fusion based on contextual inputs and often increase inference time due to token-level operations. We propose DLP-LoRA, a Dynamic Lightweight Plugin that employs a mini-MLP module with only 5M parameters to dynamically fuse multiple LoRAs at the sentence level using top-p sampling strategies. This approach reduces inference time to less than twice that of single LoRA inference by leveraging parallel computation. Evaluations across 26 tasks, including multiple-choice questions and question answering, demonstrate that DLP-LoRA achieves an average accuracy of 92.34% on multiple-choice datasets and significant improvements in BLEU and ROUGE scores on QA datasets, outperforming different LLM backbones under composite task settings. DLP-LoRA effectively balances performance and efficiency, making it a practical solution for dynamic multi-task adaptation in LLMs. Our code is available at https://github.com/MeCuping/DLP-LoRA.<|reference_end|>
arxiv
@article{zhang2024dlp-lora:, title={DLP-LoRA: Efficient Task-Specific LoRA Fusion with a Dynamic, Lightweight Plugin for Large Language Models}, author={Yuxuan Zhang, Ruizhe Li}, journal={arXiv preprint arXiv:2410.01497}, year={2024}, archivePrefix={arXiv}, eprint={2410.01497}, primaryClass={cs.CL cs.AI cs.LG} }
zhang2024dlp-lora:
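A hedged sketch of the sentence-level fusion mechanism as described: a small MLP scores the available adapters once per input, top-p sampling keeps a few, and the selected low-rank updates are combined by their normalized scores (module names and dimensions are assumptions):

```python
# Sentence-level LoRA routing: one forward pass of a mini-MLP per input
# selects adapters via top-p, avoiding per-token routing overhead.
import torch

class LoRARouter(torch.nn.Module):
    def __init__(self, hidden, n_adapters):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(hidden, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, n_adapters))

    def forward(self, sent_emb, top_p=0.9):
        # sent_emb: (hidden,) sentence embedding of the input.
        probs = self.mlp(sent_emb).softmax(-1)
        sorted_p, idx = probs.sort(descending=True)
        keep = sorted_p.cumsum(-1) - sorted_p < top_p  # smallest top-p set
        w = sorted_p * keep
        return idx[keep], w[keep] / w.sum()

def fused_delta(A_list, B_list, ids, weights):
    # ΔW = Σ_k w_k · B_k A_k over the selected adapters.
    return sum(w * (B_list[i] @ A_list[i])
               for i, w in zip(ids.tolist(), weights))
```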
arxiv-664486
2410.01498
Quo Vadis RankList-based System in Face Recognition?
<|reference_start|>Quo Vadis RankList-based System in Face Recognition?: Face recognition in the wild has gained a lot of focus in the last few years, and many face recognition models are designed to verify faces in medium-quality images. Especially due to the availability of large training datasets with similar conditions, deep face recognition models perform exceptionally well in such tasks. However, in other tasks where substantially less training data is available, such methods struggle, especially when required to compare high-quality enrollment images with low-quality probes. On the other hand, traditional RankList-based methods have been developed that compare faces indirectly by comparing to cohort faces with similar conditions. In this paper, we revisit these RankList methods and extend them to use the logits of the state-of-the-art DaliFace network, instead of an external cohort. We show that through a reasonable Logit-Cohort Selection (LoCoS) the performance of RankList-based functions can be improved drastically. Experiments on two challenging face recognition datasets not only demonstrate the enhanced performance of our proposed method but also set the stage for future advancements in handling diverse image qualities.<|reference_end|>
arxiv
@article{zhang2024quo, title={Quo Vadis RankList-based System in Face Recognition?}, author={Xinyi Zhang, Manuel Günther}, journal={arXiv preprint arXiv:2410.01498}, year={2024}, archivePrefix={arXiv}, eprint={2410.01498}, primaryClass={cs.CV} }
zhang2024quo
arxiv-664487
2410.01500
Discrete Diffusion Schr\"odinger Bridge Matching for Graph Transformation
<|reference_start|>Discrete Diffusion Schr\"odinger Bridge Matching for Graph Transformation: Transporting between arbitrary distributions is a fundamental goal in generative modeling. Recently proposed diffusion bridge models provide a potential solution, but they rely on a joint distribution that is difficult to obtain in practice. Furthermore, formulations based on continuous domains limit their applicability to discrete domains such as graphs. To overcome these limitations, we propose Discrete Diffusion Schr\"odinger Bridge Matching (DDSBM), a novel framework that utilizes continuous-time Markov chains to solve the SB problem in a high-dimensional discrete state space. Our approach extends Iterative Markovian Fitting to discrete domains, and we have proved its convergence to the SB. Furthermore, we adapt our framework for the graph transformation and show that our design choice of underlying dynamics characterized by independent modifications of nodes and edges can be interpreted as the entropy-regularized version of optimal transport with a cost function described by the graph edit distance. To demonstrate the effectiveness of our framework, we have applied DDSBM to molecular optimization in the field of chemistry. Experimental results demonstrate that DDSBM effectively optimizes molecules' property-of-interest with minimal graph transformation, successfully retaining other features.<|reference_end|>
arxiv
@article{kim2024discrete, title={Discrete Diffusion Schr\"odinger Bridge Matching for Graph Transformation}, author={Jun Hyeong Kim, Seonghwan Kim, Seokhyun Moon, Hyeongwoo Kim, Jeheon Woo, Woo Youn Kim}, journal={arXiv preprint arXiv:2410.01500}, year={2024}, archivePrefix={arXiv}, eprint={2410.01500}, primaryClass={cs.LG cs.AI} }
kim2024discrete
arxiv-664488
2410.01502
Personalized Federated Learning on Flowing Data Heterogeneity under Restricted Storage
<|reference_start|>Personalized Federated Learning on Flowing Data Heterogeneity under Restricted Storage: In recent years, researchers have focused on personalized federated learning (pFL) to address the inconsistent requirements of clients caused by data heterogeneity in federated learning (FL). However, existing pFL methods typically assume that the local data distribution remains unchanged during FL training; in real heterogeneous data scenarios, changing data distributions can slow model convergence and reduce model performance. In this paper, we focus on solving the pFL problem in the setting where data flows through each client like a stream, which we call Flowing Data Heterogeneity under Restricted Storage, and shift the training goal to the comprehensive performance of the model throughout the FL training process. Therefore, based on the idea of category decoupling, we design a local data distribution reconstruction scheme and a related generator architecture to reduce the error of the controllable replayed data distribution, and then propose our pFL framework, pFedGRP, to achieve knowledge transfer and personalized aggregation. Comprehensive experiments on five datasets with multiple settings show the superiority of pFedGRP over eight baseline methods.<|reference_end|>
arxiv
@article{tan2024personalized, title={Personalized Federated Learning on Flowing Data Heterogeneity under Restricted Storage}, author={Sixing Tan and Xianmin Liu}, journal={arXiv preprint arXiv:2410.01502}, year={2024}, archivePrefix={arXiv}, eprint={2410.01502}, primaryClass={cs.DC} }
tan2024personalized
arxiv-664489
2410.01504
PersonaMath: Enhancing Math Reasoning through Persona-Driven Data Augmentation
<|reference_start|>PersonaMath: Enhancing Math Reasoning through Persona-Driven Data Augmentation: While closed-source Large Language Models (LLMs) demonstrate strong mathematical problem-solving abilities, open-source models continue to struggle with such tasks. To bridge this gap, we propose a data augmentation approach and introduce PersonaMathQA, a dataset derived from MATH and GSM8K, on which we train the PersonaMath models. Our approach consists of two stages: the first stage is learning from Persona Diversification, and the second stage is learning from Reflection. In the first stage, we regenerate detailed chain-of-thought (CoT) solutions as instructions using a closed-source LLM and introduce a novel persona-driven data augmentation technique to enhance the dataset's quantity and diversity. In the second stage, we incorporate reflection to fully leverage more challenging and valuable questions. Evaluation of our PersonaMath models on MATH and GSM8K reveals that the PersonaMath-7B model (based on LLaMA-2-7B) achieves an accuracy of 24.2% on MATH and 68.7% on GSM8K, surpassing all baseline methods and achieving state-of-the-art performance. Notably, our dataset contains only 70.3K data points, merely 17.8% of MetaMathQA and 27% of MathInstruct, yet our model outperforms these baselines, demonstrating the high quality and diversity of our dataset, which enables more efficient model training. We open-source the PersonaMathQA dataset, PersonaMath models, and our code for public usage.<|reference_end|>
arxiv
@article{luo2024personamath:, title={PersonaMath: Enhancing Math Reasoning through Persona-Driven Data Augmentation}, author={Jing Luo, Run Luo, Longze Chen, Liang Zhu, Chang Ao, Jiaming Li, Yukun Chen, Xin Cheng, Wen Yang, Jiayuan Su, Chengming Li, Min Yang}, journal={arXiv preprint arXiv:2410.01504}, year={2024}, archivePrefix={arXiv}, eprint={2410.01504}, primaryClass={cs.CL} }
luo2024personamath:
arxiv-664490
2410.01506
LEGO: Learnable Expansion of Graph Operators for Multi-Modal Feature Fusion
<|reference_start|>LEGO: Learnable Expansion of Graph Operators for Multi-Modal Feature Fusion: In computer vision tasks, features often come from diverse representations, domains, and modalities, such as text, images, and videos. Effectively fusing these features is essential for robust performance, especially with the availability of powerful pre-trained models like vision-language models. However, common fusion methods, such as concatenation, element-wise operations, and non-linear techniques, often fail to capture structural relationships and deep feature interactions, and suffer from inefficiency or misalignment of features across domains. In this paper, we shift from high-dimensional feature space to a lower-dimensional, interpretable graph space by constructing similarity graphs that encode feature relationships at different levels, e.g., clip, frame, patch, token, etc. To capture deeper interactions, we use graph power expansions and introduce a learnable graph fusion operator to combine these graph powers for more effective fusion. Our approach is relationship-centric, operates in a homogeneous space, and is mathematically principled, resembling element-wise similarity score aggregation via multilinear polynomials. We demonstrate the effectiveness of our graph-based fusion method on video anomaly detection, showing strong performance across multi-representational, multi-modal, and multi-domain feature fusion tasks.<|reference_end|>
arxiv
@article{ding2024lego:, title={LEGO: Learnable Expansion of Graph Operators for Multi-Modal Feature Fusion}, author={Dexuan Ding, Lei Wang, Liyun Zhu, Tom Gedeon, Piotr Koniusz}, journal={arXiv preprint arXiv:2410.01506}, year={2024}, archivePrefix={arXiv}, eprint={2410.01506}, primaryClass={cs.CV cs.AI cs.LG} }
ding2024lego:
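A simplified single-graph version of the learnable expansion described above (an interpretation, not the authors' code): build a similarity graph from features, take its matrix powers to encode multi-hop relations, and fuse the powers with learned coefficients.

```python
# Learnable fusion of graph powers: softmax-weighted sum of A^0 .. A^K,
# where A is a cosine-similarity graph over the input features.
import torch

class GraphPowerFusion(torch.nn.Module):
    def __init__(self, max_power=3):
        super().__init__()
        self.coef = torch.nn.Parameter(torch.ones(max_power + 1))

    def forward(self, feats):
        x = torch.nn.functional.normalize(feats, dim=-1)
        A = x @ x.t()                                   # similarity graph
        powers, P = [torch.eye(A.shape[0], device=A.device)], A
        for _ in range(len(self.coef) - 1):
            powers.append(P)
            P = P @ A
        w = self.coef.softmax(0)
        return sum(wi * Pi for wi, Pi in zip(w, powers))
```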
arxiv-664491
2410.01508
Disentangling Latent Shifts of In-Context Learning Through Self-Training
<|reference_start|>Disentangling Latent Shifts of In-Context Learning Through Self-Training: In-context learning (ICL) has become essential in natural language processing, particularly with autoregressive large language models capable of learning from demonstrations provided within the prompt. However, ICL faces challenges with stability and long contexts, especially as the number of demonstrations grows, leading to poor generalization and inefficient inference. To address these issues, we introduce STICL (Self-Training ICL), an approach that disentangles the latent shifts of demonstrations from the latent shift of the query through self-training. STICL employs a teacher model to generate pseudo-labels and trains a student model using these labels, encoded in an adapter module. The student model exhibits weak-to-strong generalization, progressively refining its predictions over time. Our empirical results show that STICL improves generalization and stability, consistently outperforming traditional ICL methods and other disentangling strategies across both in-domain and out-of-domain data.<|reference_end|>
arxiv
@article{jukić2024disentangling, title={Disentangling Latent Shifts of In-Context Learning Through Self-Training}, author={Josip Jukić, Jan Šnajder}, journal={arXiv preprint arXiv:2410.01508}, year={2024}, archivePrefix={arXiv}, eprint={2410.01508}, primaryClass={cs.CL cs.LG} }
jukić2024disentangling
arxiv-664492
2410.01512
InstaTrans: An Instruction-Aware Translation Framework for Non-English Instruction Datasets
<|reference_start|>InstaTrans: An Instruction-Aware Translation Framework for Non-English Instruction Datasets: It is challenging to generate high-quality instruction datasets for non-English languages due to tail phenomena, which limit performance on less frequently observed data. To mitigate this issue, we propose translating existing high-quality English instruction datasets as a solution, emphasizing the need for complete and instruction-aware translations to maintain the inherent attributes of these datasets. We claim that fine-tuning LLMs with datasets translated in this way can improve their performance in the target language. To this end, we introduce a new translation framework tailored for instruction datasets, named InstaTrans (INSTruction-Aware TRANSlation). Through extensive experiments, we demonstrate the superiority of InstaTrans over other competitors in terms of completeness and instruction-awareness of translation, highlighting its potential to broaden the accessibility of LLMs across diverse languages at a relatively low cost. Furthermore, we have validated that fine-tuning LLMs with datasets translated by InstaTrans can effectively improve their performance in the target language.<|reference_end|>
arxiv
@article{kim2024instatrans:, title={InstaTrans: An Instruction-Aware Translation Framework for Non-English Instruction Datasets}, author={Yungi Kim, Chanjun Park}, journal={arXiv preprint arXiv:2410.01512}, year={2024}, archivePrefix={arXiv}, eprint={2410.01512}, primaryClass={cs.CL cs.AI} }
kim2024instatrans:
arxiv-664493
2410.01514
Multi-level Memory-Centric Profiling on ARM Processors with ARM SPE
<|reference_start|>Multi-level Memory-Centric Profiling on ARM Processors with ARM SPE: High-end ARM processors are emerging in data centers and HPC systems, posing as a strong contender to x86 machines. Memory-centric profiling is an important approach for dissecting an application's bottlenecks on memory access and guiding optimizations. Many existing memory profiling tools leverage hardware performance counters and precise event sampling, such as Intel PEBS and AMD IBS, to achieve high accuracy and low overhead. In this work, we present a multi-level memory profiling tool for ARM processors, leveraging Statistical Profiling Extension (SPE). We evaluate the tool using both HPC and Cloud workloads on the ARM Ampere processor. Our results provide the first quantitative assessment of time overhead and sampling accuracy of ARM SPE for memory-centric profiling at different sampling periods and aux buffer sizes.<|reference_end|>
arxiv
@article{miksits2024multi-level, title={Multi-level Memory-Centric Profiling on ARM Processors with ARM SPE}, author={Samuel Miksits, Ruimin Shi, Maya Gokhale, Jacob Wahlgren, Gabin Schieffer, Ivy Peng}, journal={arXiv preprint arXiv:2410.01514}, year={2024}, archivePrefix={arXiv}, eprint={2410.01514}, primaryClass={cs.DC} }
miksits2024multi-level
arxiv-664494
2410.01515
Task-Oriented Edge-Assisted Cooperative Data Compression, Communications and Computing for UGV-Enhanced Warehouse Logistics
<|reference_start|>Task-Oriented Edge-Assisted Cooperative Data Compression, Communications and Computing for UGV-Enhanced Warehouse Logistics: This paper explores the growing need for task-oriented communications in warehouse logistics, where traditional communication Key Performance Indicators (KPIs), such as latency, reliability, and throughput, often do not fully meet task requirements. As the complexity of data flow management in large-scale device networks increases, there is also a pressing need for innovative cross-system designs that balance data compression, communication, and computation. To address these challenges, we propose a task-oriented, edge-assisted framework for cooperative data compression, communication, and computing in Unmanned Ground Vehicle (UGV)-enhanced warehouse logistics. In this framework, two UGVs collaborate to transport cargo, with control functions (navigation for the front UGV and following/conveyance for the rear UGV) offloaded to the edge server to accommodate their limited on-board computing resources. We develop a Deep Reinforcement Learning (DRL)-based two-stage point cloud data compression algorithm that dynamically and collaboratively adjusts compression ratios according to task requirements, significantly reducing communication overhead. System-level simulations of our UGV logistics prototype demonstrate the framework's effectiveness and its potential for swift real-world implementation.<|reference_end|>
arxiv
@article{yang2024task-oriented, title={Task-Oriented Edge-Assisted Cooperative Data Compression, Communications and Computing for UGV-Enhanced Warehouse Logistics}, author={Jiaming Yang, Zhen Meng, Xiangmin Xu, Kan Chen, Emma Liying Li, Philip Guodong G. Zhao}, journal={arXiv preprint arXiv:2410.01515}, year={2024}, archivePrefix={arXiv}, eprint={2410.01515}, primaryClass={cs.NI eess.SP} }
yang2024task-oriented
arxiv-664495
2410.01516
Bounds on $L_p$ Errors in Density Ratio Estimation via $f$-Divergence Loss Functions
<|reference_start|>Bounds on $L_p$ Errors in Density Ratio Estimation via $f$-Divergence Loss Functions: Density ratio estimation (DRE) is a fundamental machine learning technique for identifying relationships between two probability distributions. $f$-divergence loss functions, derived from variational representations of $f$-divergence, are commonly employed in DRE to achieve state-of-the-art results. This study presents a novel perspective on DRE using $f$-divergence loss functions by deriving the upper and lower bounds on $L_p$ errors. These bounds apply to any estimator within a class of Lipschitz continuous estimators, irrespective of the specific $f$-divergence loss functions utilized. The bounds are formulated as a product of terms that include the data dimension and the expected value of the density ratio raised to the power of $p$. Notably, the lower bound incorporates an exponential term dependent on the Kullback--Leibler divergence, indicating that the $L_p$ error significantly increases with the Kullback--Leibler divergence for $p > 1$, and this increase becomes more pronounced as $p$ increases. Furthermore, these theoretical findings are substantiated through numerical experiments.<|reference_end|>
arxiv
@article{kitazawa2024bounds, title={Bounds on $L_p$ Errors in Density Ratio Estimation via $f$-Divergence Loss Functions}, author={Yoshiaki Kitazawa}, journal={arXiv preprint arXiv:2410.01516}, year={2024}, archivePrefix={arXiv}, eprint={2410.01516}, primaryClass={cs.LG} }
kitazawa2024bounds
arxiv-664496
2410.01517
UW-GS: Distractor-Aware 3D Gaussian Splatting for Enhanced Underwater Scene Reconstruction
<|reference_start|>UW-GS: Distractor-Aware 3D Gaussian Splatting for Enhanced Underwater Scene Reconstruction: 3D Gaussian splatting (3DGS) offers the capability to achieve real-time, high-quality 3D scene rendering. However, 3DGS assumes that the scene is in a clear medium and struggles to generate satisfactory representations in underwater scenes, where light absorption and scattering are prevalent and moving objects are involved. To overcome these limitations, we introduce a novel Gaussian Splatting-based method, UW-GS, designed specifically for underwater applications. It introduces a color appearance model that captures distance-dependent color variation, employs a new physics-based density control strategy to enhance the clarity of distant objects, and uses a binary motion mask to handle dynamic content. Optimized with a well-designed loss function supporting scattering media and strengthened by pseudo-depth maps, UW-GS outperforms existing methods with PSNR gains of up to 1.26 dB. To fully verify the effectiveness of the model, we also developed a new underwater dataset, S-UW, with dynamic object masks.<|reference_end|>
arxiv
@article{wang2024uw-gs:, title={UW-GS: Distractor-Aware 3D Gaussian Splatting for Enhanced Underwater Scene Reconstruction}, author={Haoran Wang, Nantheera Anantrasirichai, Fan Zhang, and David Bull}, journal={arXiv preprint arXiv:2410.01517}, year={2024}, archivePrefix={arXiv}, eprint={2410.01517}, primaryClass={cs.CV} }
wang2024uw-gs:
arxiv-664497
2410.01518
InfiniPot: Infinite Context Processing on Memory-Constrained LLMs
<|reference_start|>InfiniPot: Infinite Context Processing on Memory-Constrained LLMs: Handling long input contexts remains a significant challenge for Large Language Models (LLMs), particularly in resource-constrained environments such as mobile devices. Our work aims to address this limitation by introducing InfiniPot, a novel KV cache control framework designed to enable pre-trained LLMs to manage extensive sequences within fixed memory constraints efficiently, without requiring additional training. InfiniPot leverages Continual Context Distillation (CCD), an iterative process that compresses and retains essential information through novel importance metrics, effectively maintaining critical data even without access to future context. Our comprehensive evaluations indicate that InfiniPot significantly outperforms models trained for long contexts in various NLP tasks, establishing its efficacy and versatility. This work represents a substantial advancement toward making LLMs applicable to a broader range of real-world scenarios.<|reference_end|>
arxiv
@article{kim2024infinipot:, title={InfiniPot: Infinite Context Processing on Memory-Constrained LLMs}, author={Minsoo Kim, Kyuhong Shim, Jungwook Choi, Simyung Chang}, journal={arXiv preprint arXiv:2410.01518}, year={2024}, archivePrefix={arXiv}, eprint={2410.01518}, primaryClass={cs.CL cs.LG} }
kim2024infinipot:
arxiv-664498
2410.01521
MiraGe: Editable 2D Images using Gaussian Splatting
<|reference_start|>MiraGe: Editable 2D Images using Gaussian Splatting: Implicit Neural Representations (INRs) approximate discrete data through continuous functions and are commonly used for encoding 2D images. Traditional image-based INRs employ neural networks to map pixel coordinates to RGB values, capturing shapes, colors, and textures within the network's weights. Recently, GaussianImage has been proposed as an alternative, using Gaussian functions instead of neural networks; it achieves quality and compression ratios comparable to classical INR models but does not allow image modification. In contrast, our work introduces a novel method, MiraGe, which uses mirror reflections to perceive 2D images in 3D space and employs flat-controlled Gaussians for precise 2D image editing. Our approach improves rendering quality and allows realistic image modifications, reflecting a human-inspired perception of photos in the 3D world. Thanks to modeling images in 3D space, we obtain the illusion of 3D-based modification of 2D images. We also show that our Gaussian representation can easily be combined with a physics engine to produce physics-based modifications of 2D images. Consequently, MiraGe achieves better quality than the standard approach while allowing natural modification of 2D images.<|reference_end|>
arxiv
@article{waczyńska2024mirage:, title={MiraGe: Editable 2D Images using Gaussian Splatting}, author={Joanna Waczyńska, Tomasz Szczepanik, Piotr Borycki, Sławomir Tadeja, Thomas Bohné, Przemysław Spurek}, journal={arXiv preprint arXiv:2410.01521}, year={2024}, archivePrefix={arXiv}, eprint={2410.01521}, primaryClass={cs.CV} }
waczyńska2024mirage:
arxiv-664499
2410.01524
HarmAug: Effective Data Augmentation for Knowledge Distillation of Safety Guard Models
<|reference_start|>HarmAug: Effective Data Augmentation for Knowledge Distillation of Safety Guard Models: Safety guard models that detect malicious queries aimed at large language models (LLMs) are essential for ensuring the secure and responsible deployment of LLMs in real-world applications. However, deploying existing safety guard models with billions of parameters alongside LLMs on mobile devices is impractical due to substantial memory requirements and latency. To reduce this cost, we distill a large teacher safety guard model into a smaller one using a labeled dataset of instruction-response pairs with binary harmfulness labels. Due to the limited diversity of harmful instructions in the existing labeled dataset, naively distilled models tend to underperform compared to larger models. To bridge the gap between small and large models, we propose HarmAug, a simple yet effective data augmentation method that involves jailbreaking an LLM and prompting it to generate harmful instructions. Given a prompt such as "Make a single harmful instruction prompt that would elicit offensive content," we add an affirmative prefix (e.g., "I have an idea for a prompt:") to the LLM's response. This encourages the LLM to continue generating the rest of the response, leading it to sample harmful instructions. Another LLM generates a response to each harmful instruction, and the teacher model labels the instruction-response pair. We empirically show that HarmAug outperforms other relevant baselines. Moreover, a 435-million-parameter safety guard model trained with HarmAug achieves an F1 score comparable to larger models with over 7 billion parameters, and even outperforms them in AUPRC, while operating at less than 25% of their computational cost.<|reference_end|>
arxiv
@article{lee2024harmaug:, title={HarmAug: Effective Data Augmentation for Knowledge Distillation of Safety Guard Models}, author={Seanie Lee, Haebin Seong, Dong Bok Lee, Minki Kang, Xiaoyin Chen, Dominik Wagner, Yoshua Bengio, Juho Lee, Sung Ju Hwang}, journal={arXiv preprint arXiv:2410.01524}, year={2024}, archivePrefix={arXiv}, eprint={2410.01524}, primaryClass={cs.CL cs.LG} }
lee2024harmaug:
arxiv-664500
2410.01528
Global Scheduling of Weakly-Hard Real-Time Tasks using Job-Level Priority Classes
<|reference_start|>Global Scheduling of Weakly-Hard Real-Time Tasks using Job-Level Priority Classes: Real-time systems are intrinsic components of many pivotal applications, such as self-driving vehicles and aerospace and defense systems. The trend in these applications is to consolidate multiple tasks onto fewer, more powerful hardware platforms, e.g., multi-core systems, mainly to reduce cost and power consumption. Many real-time tasks, such as control tasks, can tolerate occasional deadline misses thanks to robust algorithms. These tasks can be modeled using the weakly-hard model. The literature shows that leveraging the weakly-hard model can relax the over-provisioning associated with designing real-time systems. However, a wide range of this research focuses on single-core platforms. We therefore extend the state of the art in scheduling weakly-hard real-time tasks to multi-core platforms. We present a global job-level fixed-priority scheduling algorithm together with its schedulability analysis. The scheduling algorithm leverages the tolerable consecutive deadline misses to assign priorities to jobs. The proposed analysis extends the Response Time Analysis (RTA) for global scheduling to test the schedulability of tasks. Hence, our analysis scales with the number of tasks and the number of cores because, unlike prior work, it depends neither on Integer Linear Programming nor on reachability trees. Schedulability analyses show that the schedulability ratio is improved by 40% compared to global Rate Monotonic (RM) scheduling and by up to 60% compared to global EDF scheduling, which are the state-of-the-art schedulers in the RTEMS real-time operating system. Our evaluation on an industrial embedded multi-core platform running RTEMS shows that the scheduling overhead of our proposal does not exceed 60 nanoseconds.<|reference_end|>
arxiv
@article{moyano2024global, title={Global Scheduling of Weakly-Hard Real-Time Tasks using Job-Level Priority Classes}, author={V. Gabriel Moyano, Zain A. H. Hammadeh, Selma Saidi, Daniel Lüdtke}, journal={arXiv preprint arXiv:2410.01528}, year={2024}, archivePrefix={arXiv}, eprint={2410.01528}, primaryClass={cs.OS} }
moyano2024global