corpus_id (stringlengths 7-12) | paper_id (stringlengths 9-16) | title (stringlengths 1-261) | abstract (stringlengths 70-4.02k) | source (stringclasses 1) | bibtex (stringlengths 208-20.9k) | citation_key (stringlengths 6-100)
---|---|---|---|---|---|---|
arxiv-661701
|
2409.16703
|
The 2-domination number of cylindrical graphs
|
<|reference_start|>The 2-domination number of cylindrical graphs: A vertex subset S of a graph G is said to 2-dominate the graph if each vertex not in S has at least two neighbors in it. As usual, the associated parameter is the minimum cardinal of a 2-dominating set, which is called the 2-domination number of the graph G. We present both lower and upper bounds of the 2-domination number of cylinders, which are the Cartesian products of a path and a cycle. These bounds allow us to compute the exact value of the 2-domination number of cylinders where the path is arbitrary, and the order of the cycle is n $\equiv$ 0(mod 3) and as large as desired. In the case of the lower bound, we adapt the technique of the wasted domination to this parameter and we use the so-called tropical matrix product to obtain the desired bound. Moreover, we provide a regular patterned construction of a minimum 2-dominating set in the cylinders having the mentioned cycle order.<|reference_end|>
|
arxiv
|
@article{martínez2024the,
title={The 2-domination number of cylindrical graphs},
author={Jos\'e Antonio Mart\'inez and Ana Bel\'en Casta\~no-Fern\'andez and
Mar\'ia Luz Puertas},
journal={Comp. Appl. Math. 41, 424 (2022)},
year={2024},
doi={10.1007/s40314-022-02137-1},
archivePrefix={arXiv},
eprint={2409.16703},
primaryClass={math.CO cs.DM}
}
|
martínez2024the
|
arxiv-661702
|
2409.16706
|
Pix2Next: Leveraging Vision Foundation Models for RGB to NIR Image Translation
|
<|reference_start|>Pix2Next: Leveraging Vision Foundation Models for RGB to NIR Image Translation: This paper proposes Pix2Next, a novel image-to-image translation framework designed to address the challenge of generating high-quality Near-Infrared (NIR) images from RGB inputs. Our approach leverages a state-of-the-art Vision Foundation Model (VFM) within an encoder-decoder architecture, incorporating cross-attention mechanisms to enhance feature integration. This design captures detailed global representations and preserves essential spectral characteristics, treating RGB-to-NIR translation as more than a simple domain transfer problem. A multi-scale PatchGAN discriminator ensures realistic image generation at various detail levels, while carefully designed loss functions couple global context understanding with local feature preservation. We performed experiments on the RANUS dataset to demonstrate Pix2Next's advantages in quantitative metrics and visual quality, improving the FID score by 34.81% compared to existing methods. Furthermore, we demonstrate the practical utility of Pix2Next by showing improved performance on a downstream object detection task using generated NIR data to augment limited real NIR datasets. The proposed approach enables the scaling up of NIR datasets without additional data acquisition or annotation efforts, potentially accelerating advancements in NIR-based computer vision applications.<|reference_end|>
|
arxiv
|
@article{jin2024pix2next:,
title={Pix2Next: Leveraging Vision Foundation Models for RGB to NIR Image
Translation},
author={Youngwan Jin and Incheol Park and Hanbin Song and Hyeongjin Ju and
Yagiz Nalcakan and Shiho Kim},
journal={arXiv preprint arXiv:2409.16706},
year={2024},
archivePrefix={arXiv},
eprint={2409.16706},
primaryClass={cs.CV cs.AI}
}
|
jin2024pix2next:
|
arxiv-661703
|
2409.16707
|
Probing Omissions and Distortions in Transformer-based RDF-to-Text Models
|
<|reference_start|>Probing Omissions and Distortions in Transformer-based RDF-to-Text Models: In Natural Language Generation (NLG), important information is sometimes omitted in the output text. To better understand and analyse how this type of mistake arises, we focus on RDF-to-Text generation and explore two methods of probing omissions in the encoder output of BART (Lewis et al, 2020) and of T5 (Raffel et al, 2019): (i) a novel parameter-free probing method based on the computation of cosine similarity between embeddings of RDF graphs and of RDF graphs in which we removed some entities and (ii) a parametric probe which performs binary classification on the encoder embeddings to detect omitted entities. We also extend our analysis to distorted entities, i.e. entities that are not fully correctly mentioned in the generated text (e.g. misspelling of entity, wrong units of measurement). We found that both omitted and distorted entities can be probed in the encoder's output embeddings. This suggests that the encoder emits a weaker signal for these entities and therefore is responsible for some loss of information. This also shows that probing methods can be used to detect mistakes in the output of NLG models.<|reference_end|>
|
arxiv
|
@article{faille2024probing,
title={Probing Omissions and Distortions in Transformer-based RDF-to-Text
Models},
author={Juliette Faille and Albert Gatt and Claire Gardent},
journal={arXiv preprint arXiv:2409.16707},
year={2024},
archivePrefix={arXiv},
eprint={2409.16707},
primaryClass={cs.CL}
}
|
faille2024probing
|
arxiv-661704
|
2409.16708
|
AI Makes You Smarter, But None The Wiser: The Disconnect Between Performance and Metacognition
|
<|reference_start|>AI Makes You Smarter, But None The Wiser: The Disconnect Between Performance and Metacognition: Optimizing human-AI interaction requires users to reflect on their own performance critically. Our study examines whether people using AI to complete tasks can accurately monitor how well they perform. Participants (N = 246) used AI to solve 20 logical problems from the Law School Admission Test. While their task performance improved by three points compared to a norm population, participants overestimated their performance by four points. Interestingly, higher AI literacy was linked to less accurate self-assessment. Participants with more technical knowledge of AI were more confident but less precise in judging their own performance. Using a computational model, we explored individual differences in metacognitive accuracy and found that the Dunning-Kruger effect, usually observed in this task, ceased to exist with AI use. We discuss how AI levels our cognitive and metacognitive performance and consider the consequences of performance overestimation for designing interactive AI systems that enhance cognition.<|reference_end|>
|
arxiv
|
@article{fernandes2024ai,
title={AI Makes You Smarter, But None The Wiser: The Disconnect Between
Performance and Metacognition},
author={Daniela Fernandes and Steeven Villa and Salla Nicholls and Otso
Haavisto and Daniel Buschek and Albrecht Schmidt and Thomas Kosch and
Chenxinran Shen and Robin Welsch},
journal={arXiv preprint arXiv:2409.16708},
year={2024},
archivePrefix={arXiv},
eprint={2409.16708},
primaryClass={cs.HC}
}
|
fernandes2024ai
|
arxiv-661705
|
2409.16709
|
Pose-Guided Fine-Grained Sign Language Video Generation
|
<|reference_start|>Pose-Guided Fine-Grained Sign Language Video Generation: Sign language videos are an important medium for spreading and learning sign language. However, most existing human image synthesis methods produce sign language images with details that are distorted, blurred, or structurally incorrect. They also produce sign language video frames with poor temporal consistency, with anomalies such as flickering and abrupt detail changes between the previous and next frames. To address these limitations, we propose a novel Pose-Guided Motion Model (PGMM) for generating fine-grained and motion-consistent sign language videos. Firstly, we propose a new Coarse Motion Module (CMM), which completes the deformation of features by optical flow warping, thus transferring the motion of coarse-grained structures without changing the appearance; Secondly, we propose a new Pose Fusion Module (PFM), which guides the modal fusion of RGB and pose features, thus completing the fine-grained generation. Finally, we design a new metric, Temporal Consistency Difference (TCD) to quantitatively assess the degree of temporal consistency of a video by comparing the difference between the frames of the reconstructed video and the previous and next frames of the target video. Extensive qualitative and quantitative experiments show that our method outperforms state-of-the-art methods in most benchmark tests, with visible improvements in details and temporal consistency.<|reference_end|>
|
arxiv
|
@article{shi2024pose-guided,
title={Pose-Guided Fine-Grained Sign Language Video Generation},
author={Tongkai Shi and Lianyu Hu and Fanhua Shang and Jichao Feng and
Peidong Liu and Wei Feng},
journal={arXiv preprint arXiv:2409.16709},
year={2024},
archivePrefix={arXiv},
eprint={2409.16709},
primaryClass={cs.CV}
}
|
shi2024pose-guided
|
arxiv-661706
|
2409.16710
|
Beyond Turing Test: Can GPT-4 Sway Experts' Decisions?
|
<|reference_start|>Beyond Turing Test: Can GPT-4 Sway Experts' Decisions?: In the post-Turing era, evaluating large language models (LLMs) involves assessing generated text based on readers' reactions rather than merely its indistinguishability from human-produced content. This paper explores how LLM-generated text impacts readers' decisions, focusing on both amateur and expert audiences. Our findings indicate that GPT-4 can generate persuasive analyses affecting the decisions of both amateurs and professionals. Furthermore, we evaluate the generated text from the aspects of grammar, convincingness, logical coherence, and usefulness. The results highlight a high correlation between real-world evaluation through audience reactions and the current multi-dimensional evaluators commonly used for generative models. Overall, this paper shows the potential and risk of using generated text to sway human decisions and also points out a new direction for evaluating generated text, i.e., leveraging the reactions and decisions of readers. We release our dataset to assist future research.<|reference_end|>
|
arxiv
|
@article{takayanagi2024beyond,
title={Beyond Turing Test: Can GPT-4 Sway Experts' Decisions?},
author={Takehiro Takayanagi and Hiroya Takamura and Kiyoshi Izumi and
Chung-Chi Chen},
journal={arXiv preprint arXiv:2409.16710},
year={2024},
archivePrefix={arXiv},
eprint={2409.16710},
primaryClass={cs.CE cs.CL}
}
|
takayanagi2024beyond
|
arxiv-661707
|
2409.16711
|
A numerical method for reconstructing the potential in fractional Calder\'on problem with a single measurement
|
<|reference_start|>A numerical method for reconstructing the potential in fractional Calder\'on problem with a single measurement: In this paper, we develop a numerical method for determining the potential in one and two dimensional fractional Calder\'{o}n problems with a single measurement. Finite difference scheme is employed to discretize the fractional Laplacian, and the parameter reconstruction is formulated into a variational problem based on Tikhonov regularization to obtain a stable and accurate solution. Conjugate gradient method is utilized to solve the variational problem. Moreover, we also provide a suggestion to choose the regularization parameter. Numerical experiments are performed to illustrate the efficiency and effectiveness of the developed method and verify the theoretical results.<|reference_end|>
|
arxiv
|
@article{li2024a,
title={A numerical method for reconstructing the potential in fractional
Calder\'{o}n problem with a single measurement},
author={Xinyan Li},
journal={arXiv preprint arXiv:2409.16711},
year={2024},
archivePrefix={arXiv},
eprint={2409.16711},
primaryClass={math.NA cs.NA}
}
|
li2024a
|
arxiv-661708
|
2409.16713
|
Repairing Databases over Metric Spaces with Coincidence Constraints
|
<|reference_start|>Repairing Databases over Metric Spaces with Coincidence Constraints: Datasets often contain values that naturally reside in a metric space: numbers, strings, geographical locations, machine-learned embeddings in a Euclidean space, and so on. We study the computational complexity of repairing inconsistent databases that violate integrity constraints, where the database values belong to an underlying metric space. The goal is to update the database values to retain consistency while minimizing the total distance between the original values and the repaired ones. We consider what we refer to as \emph{coincidence constraints}, which include key constraints, inclusion, foreign keys, and generally any restriction on the relationship between the numbers of cells of different labels (attributes) coinciding in a single value, for a fixed attribute set. We begin by showing that the problem is APX-hard for general metric spaces. We then present an algorithm solving the problem optimally for tree metrics, which generalize both the line metric (i.e., where repaired values are numbers) and the discrete metric (i.e., where we simply count the number of changed values). Combining our algorithm for tree metrics and a classic result on probabilistic tree embeddings, we design a (high probability) logarithmic-ratio approximation for general metrics. We also study the variant of the problem where each individual value's allowed change is limited. In this variant, it is already NP-complete to decide the existence of any legal repair for a general metric, and we present a polynomial-time repairing algorithm for the case of a line metric.<|reference_end|>
|
arxiv
|
@article{kaminsky2024repairing,
title={Repairing Databases over Metric Spaces with Coincidence Constraints},
author={Youri Kaminsky and Benny Kimelfeld and Ester Livshits and Felix
Naumann and David Wajc},
journal={arXiv preprint arXiv:2409.16713},
year={2024},
archivePrefix={arXiv},
eprint={2409.16713},
primaryClass={cs.DB}
}
|
kaminsky2024repairing
|
arxiv-661709
|
2409.16714
|
Stochastic Modelling of Elasticity Tensors
|
<|reference_start|>Stochastic Modelling of Elasticity Tensors: We present a novel framework for the probabilistic modelling of random fourth order material tensor fields, with a focus on tensors that are physically symmetric and positive definite (SPD), of which the elasticity tensor is a prime example. Given the critical role that spatial symmetries and invariances play in determining material behaviour, it is essential to incorporate these aspects into the probabilistic description and modelling of material properties. In particular, we focus on spatial point symmetries or invariances under rotations, a classical subject in elasticity. Following this, we formulate a stochastic modelling framework using a Lie algebra representation via a memoryless transformation that respects the requirements of positive definiteness and invariance. With this, it is shown how to generate a random ensemble of elasticity tensors that allows an independent control of strength, eigenstrain, and orientation. The procedure also accommodates the requirement to prescribe specific spatial symmetries and invariances for each member of the whole ensemble, while ensuring that the mean or expected value of the ensemble conforms to a potentially 'higher' class of spatial invariance. Furthermore, it is important to highlight that the set of SPD tensors forms a differentiable manifold, which geometrically corresponds to an open cone within the ambient space of symmetric tensors. Thus, we explore the mathematical structure of the underlying sample space of such tensors, and introduce a new distance measure or metric, called the 'elasticity metric', between the tensors.<|reference_end|>
|
arxiv
|
@article{shivanand2024stochastic,
title={Stochastic Modelling of Elasticity Tensors},
author={Sharana Kumar Shivanand and Bojana Rosi\'c and Hermann G. Matthies},
journal={arXiv preprint arXiv:2409.16714},
year={2024},
archivePrefix={arXiv},
eprint={2409.16714},
primaryClass={cs.CE cs.NA math-ph math.MP math.NA}
}
|
shivanand2024stochastic
|
arxiv-661710
|
2409.16716
|
Simultaneously reconstructing potentials and internal sources for fractional Schr\"odinger equations
|
<|reference_start|>Simultaneously reconstructing potentials and internal sources for fractional Schr\"odinger equations: The inverse problems about fractional Calder\'on problem and fractional Schr\"odinger equations are of interest in the study of mathematics. In this paper, we propose the inverse problem to simultaneously reconstruct potentials and sources for fractional Schr\"odinger equations with internal source terms. We show the uniqueness for reconstructing the two terms under measurements from two different nonhomogeneous boundary conditions. By introducing the variational Tikhonov regularization functional, numerical method based on conjugate gradient method(CGM) is provided to realize this inverse problem. Numerical experiments are given to gauge the performance of the numerical method.<|reference_end|>
|
arxiv
|
@article{li2024simultaneously,
title={Simultaneously reconstructing potentials and internal sources for
fractional Schr\"odinger equations},
author={Xinyan Li},
journal={arXiv preprint arXiv:2409.16716},
year={2024},
archivePrefix={arXiv},
eprint={2409.16716},
primaryClass={math.NA cs.NA}
}
|
li2024simultaneously
|
arxiv-661711
|
2409.16717
|
The Bayesian Separation Principle for Data-driven Control
|
<|reference_start|>The Bayesian Separation Principle for Data-driven Control: This paper investigates the existence of a separation principle between model identification and control design in the context of model predictive control. First, we elucidate that the separation principle holds asymptotically in the number of data in a Fisherian setting, and universally in a Bayesian setting. Then, by formulating model predictive control within a Gaussian regression framework, we describe how the Bayesian separation principle can be used to derive explicit, uncertainty-aware expressions for the control cost and optimal input sequence, thereby bridging direct and indirect data-driven approaches.<|reference_end|>
|
arxiv
|
@article{grimaldi2024the,
title={The Bayesian Separation Principle for Data-driven Control},
author={Riccardo Alessandro Grimaldi and Giacomo Baggio and Ruggero Carli
and Gianluigi Pillonetto},
journal={arXiv preprint arXiv:2409.16717},
year={2024},
archivePrefix={arXiv},
eprint={2409.16717},
primaryClass={eess.SY cs.SY}
}
|
grimaldi2024the
|
arxiv-661712
|
2409.16718
|
Vision-Language Model Fine-Tuning via Simple Parameter-Efficient Modification
|
<|reference_start|>Vision-Language Model Fine-Tuning via Simple Parameter-Efficient Modification: Recent advances in fine-tuning Vision-Language Models (VLMs) have witnessed the success of prompt tuning and adapter tuning, while the classic model fine-tuning on inherent parameters seems to be overlooked. It is believed that fine-tuning the parameters of VLMs with few-shot samples corrupts the pre-trained knowledge since fine-tuning the CLIP model even degrades performance. In this paper, we revisit this viewpoint, and propose a new perspective: fine-tuning the specific parameters instead of all will uncover the power of classic model fine-tuning on VLMs. Through our meticulous study, we propose ClipFit, a simple yet effective method to fine-tune CLIP without introducing any overhead of extra parameters. We demonstrate that by only fine-tuning the specific bias terms and normalization layers, ClipFit can improve the performance of zero-shot CLIP by 7.27\% average harmonic mean accuracy. Lastly, to understand how fine-tuning in CLIPFit affects the pre-trained models, we conducted extensive experimental analyses w.r.t. changes in internal parameters and representations. We found that low-level text bias layers and the first layer normalization layer change much more than other layers. The code is available at \url{https://github.com/minglllli/CLIPFit}.<|reference_end|>
|
arxiv
|
@article{li2024vision-language,
title={Vision-Language Model Fine-Tuning via Simple Parameter-Efficient
Modification},
author={Ming Li and Jike Zhong and Chenxin Li and Liuzhuozheng Li and Nie
Lin and Masashi Sugiyama},
journal={arXiv preprint arXiv:2409.16718},
year={2024},
archivePrefix={arXiv},
eprint={2409.16718},
primaryClass={cs.CV cs.AI cs.CL cs.LG cs.RO}
}
|
li2024vision-language
|
arxiv-661713
|
2409.16720
|
Dashing for the Golden Snitch: Multi-Drone Time-Optimal Motion Planning with Multi-Agent Reinforcement Learning
|
<|reference_start|>Dashing for the Golden Snitch: Multi-Drone Time-Optimal Motion Planning with Multi-Agent Reinforcement Learning: Recent innovations in autonomous drones have facilitated time-optimal flight in single-drone configurations and enhanced maneuverability in multi-drone systems through the application of optimal control and learning-based methods. However, few studies have achieved time-optimal motion planning for multi-drone systems, particularly during highly agile maneuvers or in dynamic scenarios. This paper presents a decentralized policy network for time-optimal multi-drone flight using multi-agent reinforcement learning. To strike a balance between flight efficiency and collision avoidance, we introduce a soft collision penalty inspired by optimization-based methods. By customizing PPO in a centralized training, decentralized execution (CTDE) fashion, we unlock higher efficiency and stability in training, while ensuring lightweight implementation. Extensive simulations show that, despite slight performance trade-offs compared to single-drone systems, our multi-drone approach maintains near-time-optimal performance with low collision rates. Real-world experiments validate our method, with two quadrotors using the same network as simulation achieving a maximum speed of 13.65 m/s and a maximum body rate of 13.4 rad/s in a 5.5 m * 5.5 m * 2.0 m space across various tracks, relying entirely on onboard computation.<|reference_end|>
|
arxiv
|
@article{wang2024dashing,
title={Dashing for the Golden Snitch: Multi-Drone Time-Optimal Motion Planning
with Multi-Agent Reinforcement Learning},
author={Xian Wang and Jin Zhou and Yuanli Feng and Jiahao Mei and Jiming
Chen and Shuo Li},
journal={arXiv preprint arXiv:2409.16720},
year={2024},
archivePrefix={arXiv},
eprint={2409.16720},
primaryClass={cs.RO cs.LG}
}
|
wang2024dashing
|
arxiv-661714
|
2409.16721
|
A Multi-Dataset Classification-Based Deep Learning Framework for Electronic Health Records and Predictive Analysis in Healthcare
|
<|reference_start|>A Multi-Dataset Classification-Based Deep Learning Framework for Electronic Health Records and Predictive Analysis in Healthcare: In contemporary healthcare, to protect patient data, electronic health records have become invaluable repositories, creating vast opportunities to leverage deep learning techniques for predictive analysis. Retinal fundus images, cirrhosis stages, and heart disease diagnostic predictions have shown promising results through the integration of deep learning techniques for classifying diverse datasets. This study proposes a novel deep learning predictive analysis framework for classifying multiple datasets by pre-processing data from three distinct sources. A hybrid deep learning model combining Residual Networks and Artificial Neural Networks is proposed to detect acute and chronic diseases such as heart diseases, cirrhosis, and retinal conditions, outperforming existing models. Dataset preparation involves aspects such as categorical data transformation, dimensionality reduction, and missing data synthesis. Feature extraction is effectively performed using scaler transformation for categorical datasets and ResNet architecture for image datasets. The resulting features are integrated into a unified classification model. Rigorous experimentation and evaluation resulted in high accuracies of 93%, 99%, and 95% for retinal fundus images, cirrhosis stages, and heart disease diagnostic predictions, respectively. The efficacy of the proposed method is demonstrated through a detailed analysis of F1-score, precision, and recall metrics. This study offers a comprehensive exploration of methodologies and experiments, providing in-depth knowledge of deep learning predictive analysis in electronic health records.<|reference_end|>
|
arxiv
|
@article{malik2024a,
title={A Multi-Dataset Classification-Based Deep Learning Framework for
Electronic Health Records and Predictive Analysis in Healthcare},
author={Syed Mohd Faisal Malik and Md Tabrez Nafis and Mohd Abdul Ahad and
Safdar Tanweer},
journal={arXiv preprint arXiv:2409.16721},
year={2024},
archivePrefix={arXiv},
eprint={2409.16721},
primaryClass={cs.AI}
}
|
malik2024a
|
arxiv-661715
|
2409.16722
|
PMSS: Pretrained Matrices Skeleton Selection for LLM Fine-tuning
|
<|reference_start|>PMSS: Pretrained Matrices Skeleton Selection for LLM Fine-tuning: Low-rank adaptation (LoRA) and its variants have recently gained much interest due to their ability to avoid excessive inference costs. However, LoRA still encounters the following challenges: (1) Limitation of low-rank assumption; and (2) Its initialization method may be suboptimal. To this end, we propose PMSS(Pre-trained Matrices Skeleton Selection), which enables high-rank updates with low costs while leveraging semantic and linguistic information inherent in pre-trained weight. It achieves this by selecting skeletons from the pre-trained weight matrix and only learning a small matrix instead. Experiments demonstrate that PMSS outperforms LoRA and other fine-tuning methods across tasks with much less trainable parameters. We demonstrate its effectiveness, especially in handling complex tasks such as DROP benchmark(+3.4%/+5.9% on LLaMA2-7B/13B) and math reasoning(+12.89%/+5.61%/+3.11% on LLaMA2-7B, Mistral-7B and Gemma-7B of GSM8K). The code and model will be released soon.<|reference_end|>
|
arxiv
|
@article{wang2024pmss:,
title={PMSS: Pretrained Matrices Skeleton Selection for LLM Fine-tuning},
author={Qibin Wang and Xiaolin Hu and Weikai Xu and Wei Liu and Jian Luan
and Bin Wang},
journal={arXiv preprint arXiv:2409.16722},
year={2024},
archivePrefix={arXiv},
eprint={2409.16722},
primaryClass={cs.CL cs.LG}
}
|
wang2024pmss:
|
arxiv-661716
|
2409.16723
|
EAGLE: Towards Efficient Arbitrary Referring Visual Prompts Comprehension for Multimodal Large Language Models
|
<|reference_start|>EAGLE: Towards Efficient Arbitrary Referring Visual Prompts Comprehension for Multimodal Large Language Models: Recently, Multimodal Large Language Models (MLLMs) have sparked great research interests owing to their exceptional content-reasoning and instruction-following capabilities. To effectively instruct an MLLM, in addition to conventional language expressions, the practice of referring to objects by painting with brushes on images has emerged as a prevalent tool (referred to as "referring visual prompts") due to its efficacy in aligning the user's intention with specific image regions. To accommodate the most common referring visual prompts, namely points, boxes, and masks, existing approaches initially utilize specialized feature encoding modules to capture the semantics of the highlighted areas indicated by these prompts. Subsequently, these encoded region features are adapted to MLLMs through fine-tuning on a meticulously curated multimodal instruction dataset. However, such designs suffer from redundancy in architecture. Moreover, they face challenges in effectively generalizing when encountering a diverse range of arbitrary referring visual prompts in real-life scenarios. To address the above issues, we propose EAGLE, a novel MLLM that empowers comprehension of arbitrary referring visual prompts with less training efforts than existing approaches. Specifically, our EAGLE maintains the innate format of the referring visual prompts as colored patches rendered on the given image for conducting the instruction tuning. Our approach embeds referring visual prompts as spatial concepts conveying specific spatial areas comprehensible to the MLLM, with the semantic comprehension of these regions originating from the MLLM itself. Besides, we also propose a Geometry-Agnostic Learning paradigm (GAL) to further disentangle the MLLM's region-level comprehension with the specific formats of referring visual prompts. 
Extensive experiments are conducted to prove the effectiveness of our proposed method.<|reference_end|>
|
arxiv
|
@article{zhang2024eagle:,
title={EAGLE: Towards Efficient Arbitrary Referring Visual Prompts
Comprehension for Multimodal Large Language Models},
author={Jiacheng Zhang and Yang Jiao and Shaoxiang Chen and Jingjing Chen
and Yu-Gang Jiang},
journal={arXiv preprint arXiv:2409.16723},
year={2024},
archivePrefix={arXiv},
eprint={2409.16723},
primaryClass={cs.CV}
}
|
zhang2024eagle:
|
arxiv-661717
|
2409.16724
|
pyGANDALF -- An open-source, Geometric, ANimation, Directed, Algorithmic, Learning Framework for Computer Graphics
|
<|reference_start|>pyGANDALF -- An open-source, Geometric, ANimation, Directed, Algorithmic, Learning Framework for Computer Graphics: In computer graphics (CG) education, the challenge of finding modern, versatile tools is significant, particularly when integrating both legacy and advanced technologies. Traditional frameworks, often reliant on solid, yet outdated APIs like OpenGL, limit the exploration of cutting-edge graphics techniques. To address this, we introduce pyGANDALF, a unique, lightweight, open-source CG framework built on three pillars: Entity-Component-System (ECS) architecture, Python programming, and WebGPU integration. This combination sets pyGANDALF apart by providing a streamlined ECS design with an editor layer, compatibility with WebGPU for state-of-the-art features like compute and ray tracing pipelines, and a programmer-friendly Python environment. The framework supports modern features, such as Physically Based Rendering (PBR) capabilities and integration with Universal Scene Description (USD) formats, making it suitable for both educational demonstrations and real-world applications. Evaluations by expert users confirmed that pyGANDALF effectively balances ease of use with advanced functionality, preparing students for contemporary CG development challenges.<|reference_end|>
|
arxiv
|
@article{petropoulos2024pygandalf,
title={pyGANDALF -- An open-source, Geometric, ANimation, Directed,
Algorithmic, Learning Framework for Computer Graphics},
author={John Petropoulos and Manos Kamarianakis and Antonis Protopsaltis
and George Papagiannakis},
journal={arXiv preprint arXiv:2409.16724},
year={2024},
archivePrefix={arXiv},
eprint={2409.16724},
primaryClass={cs.GR}
}
|
petropoulos2024pygandalf
|
arxiv-661718
|
2409.16726
|
Verified Relative Safety Margins for Neural Network Twins
|
<|reference_start|>Verified Relative Safety Margins for Neural Network Twins: Given two Deep Neural Network (DNN) classifiers with the same input and output domains, our goal is to quantify the robustness of the two networks in relation to each other. Towards this, we introduce the notion of Relative Safety Margins (RSMs). Intuitively, given two classes and a common input, RSM of one classifier with respect to another reflects the relative margins with which decisions are made. The proposed notion is relevant in the context of several applications domains, including to compare a trained network and its corresponding compact network (e.g., pruned, quantized, distilled network). Not only can RSMs establish whether decisions are preserved, but they can also quantify their qualities. We also propose a framework to establish safe bounds on RSM gains or losses given an input and a family of perturbations. We evaluate our approach using the MNIST, CIFAR10, and two real-world medical datasets, to show the relevance of our results.<|reference_end|>
|
arxiv
|
@article{baninajjar2024verified,
title={Verified Relative Safety Margins for Neural Network Twins},
author={Anahita Baninajjar and Kamran Hosseini and Ahmed Rezine and Amir
Aminifar},
journal={arXiv preprint arXiv:2409.16726},
year={2024},
archivePrefix={arXiv},
eprint={2409.16726},
primaryClass={cs.LG}
}
|
baninajjar2024verified
|
arxiv-661719
|
2409.16727
|
RoleBreak: Character Hallucination as a Jailbreak Attack in Role-Playing Systems
|
<|reference_start|>RoleBreak: Character Hallucination as a Jailbreak Attack in Role-Playing Systems: Role-playing systems powered by large language models (LLMs) have become increasingly influential in emotional communication applications. However, these systems are susceptible to character hallucinations, where the model deviates from predefined character roles and generates responses that are inconsistent with the intended persona. This paper presents the first systematic analysis of character hallucination from an attack perspective, introducing the RoleBreak framework. Our framework identifies two core mechanisms, query sparsity and role-query conflict, as key factors driving character hallucination. Leveraging these insights, we construct a novel dataset, RoleBreakEval, to evaluate existing hallucination mitigation techniques. Our experiments reveal that even enhanced models trained to minimize hallucination remain vulnerable to attacks. To address these vulnerabilities, we propose a novel defence strategy, the Narrator Mode, which generates supplemental context through narration to mitigate role-query conflicts and improve query generalization. Experimental results demonstrate that Narrator Mode significantly outperforms traditional refusal-based strategies by reducing hallucinations, enhancing fidelity to character roles and queries, and improving overall narrative coherence.<|reference_end|>
|
arxiv
|
@article{tang2024rolebreak:,
title={RoleBreak: Character Hallucination as a Jailbreak Attack in Role-Playing
Systems},
author={Yihong Tang, Bo Wang, Xu Wang, Dongming Zhao, Jing Liu, Jijun Zhang,
Ruifang He, Yuexian Hou},
journal={arXiv preprint arXiv:2409.16727},
year={2024},
archivePrefix={arXiv},
eprint={2409.16727},
primaryClass={cs.CL}
}
|
tang2024rolebreak:
|
arxiv-661720
|
2409.16728
|
SDCL: Students Discrepancy-Informed Correction Learning for Semi-supervised Medical Image Segmentation
|
<|reference_start|>SDCL: Students Discrepancy-Informed Correction Learning for Semi-supervised Medical Image Segmentation: Semi-supervised medical image segmentation (SSMIS) has demonstrated the potential to mitigate the issue of limited labeled medical data. However, confirmation and cognitive biases may affect the prevalent teacher-student based SSMIS methods due to erroneous pseudo-labels. To tackle this challenge, we improve the mean teacher approach and propose the Students Discrepancy-Informed Correction Learning (SDCL) framework that includes two students and one non-trainable teacher, which utilizes the segmentation difference between the two students to guide the self-correcting learning. The essence of SDCL is to identify the areas of segmentation discrepancy as the potential bias areas, and then encourage the model to review the correct cognition and rectify its own biases in these areas. To facilitate the bias correction learning with continuous review and rectification, two correction loss functions are employed to minimize the correct segmentation voxel distance and maximize the erroneous segmentation voxel entropy. We conducted experiments on three public medical image datasets: two 3D datasets (CT and MRI) and one 2D dataset (MRI). The results show that our SDCL surpasses the current State-of-the-Art (SOTA) methods by 2.57\%, 3.04\%, and 2.34\% in the Dice score on the Pancreas, LA, and ACDC datasets, respectively. In addition, the accuracy of our method is very close to the fully supervised method on the ACDC dataset, and even exceeds the fully supervised method on the Pancreas and LA datasets. (Code available at \url{https://github.com/pascalcpp/SDCL}).<|reference_end|>
|
arxiv
|
@article{song2024sdcl:,
title={SDCL: Students Discrepancy-Informed Correction Learning for
Semi-supervised Medical Image Segmentation},
author={Bentao Song, Qingfeng Wang},
journal={arXiv preprint arXiv:2409.16728},
year={2024},
archivePrefix={arXiv},
eprint={2409.16728},
primaryClass={eess.IV cs.CV}
}
|
song2024sdcl:
|
arxiv-661721
|
2409.16730
|
Non-stationary BERT: Exploring Augmented IMU Data For Robust Human Activity Recognition
|
<|reference_start|>Non-stationary BERT: Exploring Augmented IMU Data For Robust Human Activity Recognition: Human Activity Recognition (HAR) has gained great attention from researchers due to the popularity of mobile devices and the need to observe users' daily activity data for better human-computer interaction. In this work, we collect a human activity recognition dataset called OPPOHAR consisting of phone IMU data. To facilitate the deployment of HAR systems on mobile phones and to achieve user-specific activity recognition, we propose a novel light-weight network called Non-stationary BERT with a two-stage training method. We also propose a simple yet effective data augmentation method to explore the deeper relationship between the accelerometer and gyroscope data from the IMU. The network achieves state-of-the-art performance when tested on various activity recognition datasets, and the data augmentation method demonstrates its wide applicability.<|reference_end|>
|
arxiv
|
@article{sun2024non-stationary,
title={Non-stationary BERT: Exploring Augmented IMU Data For Robust Human
Activity Recognition},
author={Ning Sun, Yufei Wang, Yuwei Zhang, Jixiang Wan, Shenyue Wang, Ping
Liu, Xudong Zhang},
journal={arXiv preprint arXiv:2409.16730},
year={2024},
archivePrefix={arXiv},
eprint={2409.16730},
primaryClass={cs.AI cs.CV}
}
|
sun2024non-stationary
|
arxiv-661722
|
2409.16732
|
"It Explains What I am Currently Going Through Perfectly to a Tee": Understanding User Perceptions on LLM-Enhanced Narrative Interventions
|
<|reference_start|>"It Explains What I am Currently Going Through Perfectly to a Tee": Understanding User Perceptions on LLM-Enhanced Narrative Interventions: Stories about overcoming personal struggles can effectively illustrate the application of psychological theories in real life, yet they may fail to resonate with individuals' experiences. In this work, we employ large language models (LLMs) to create tailored narratives that acknowledge and address unique challenging thoughts and situations faced by individuals. Our study, involving 346 young adults across two settings, demonstrates that LLM-enhanced stories were perceived to be better than human-written ones in conveying key takeaways, promoting reflection, and reducing belief in negative thoughts. These stories were not only seen as more relatable but also similarly authentic to human-written ones, highlighting the potential of LLMs in helping young adults manage their struggles. The findings of this work provide crucial design considerations for future narrative-based digital mental health interventions, such as the need to maintain relatability without veering into implausibility and refining the wording and tone of AI-enhanced content.<|reference_end|>
|
arxiv
|
@article{bhattacharjee2024"it,
title={"It Explains What I am Currently Going Through Perfectly to a Tee":
Understanding User Perceptions on LLM-Enhanced Narrative Interventions},
author={Ananya Bhattacharjee, Sarah Yi Xu, Pranav Rao, Yuchen Zeng, Jonah
Meyerhoff, Syed Ishtiaque Ahmed, David C Mohr, Michael Liut, Alex Mariakakis,
Rachel Kornfield, Joseph Jay Williams},
journal={arXiv preprint arXiv:2409.16732},
year={2024},
archivePrefix={arXiv},
eprint={2409.16732},
primaryClass={cs.HC}
}
|
bhattacharjee2024"it
|
arxiv-661723
|
2409.16733
|
The Effect of Lossy Compression on 3D Medical Images Segmentation with Deep Learning
|
<|reference_start|>The Effect of Lossy Compression on 3D Medical Images Segmentation with Deep Learning: Image compression is a critical tool in decreasing the cost of storage and improving the speed of transmission over the internet. While deep learning applications for natural images widely adopt lossy compression techniques, their use is not widespread for 3D medical images. Using three CT datasets (17 tasks) and one MRI dataset (3 tasks), we demonstrate that lossy compression of up to 20 times has no negative impact on segmentation quality with deep neural networks (DNN). In addition, we demonstrate the ability of DNN models trained on compressed data to predict on uncompressed data and vice versa with no quality deterioration.<|reference_end|>
|
arxiv
|
@article{kurmukov2024the,
title={The Effect of Lossy Compression on 3D Medical Images Segmentation with
Deep Learning},
author={Anvar Kurmukov and Bogdan Zavolovich and Aleksandra Dalechina and
Vladislav Proskurov and Boris Shirokikh},
journal={arXiv preprint arXiv:2409.16733},
year={2024},
archivePrefix={arXiv},
eprint={2409.16733},
primaryClass={eess.IV cs.CV}
}
|
kurmukov2024the
|
arxiv-661724
|
2409.16735
|
GB-RVFL: Fusion of Randomized Neural Network and Granular Ball Computing
|
<|reference_start|>GB-RVFL: Fusion of Randomized Neural Network and Granular Ball Computing: The random vector functional link (RVFL) network is a prominent classification model with strong generalization ability. However, RVFL treats all samples uniformly, ignoring whether they are pure or noisy, and its scalability is limited due to the need for inverting the entire training matrix. To address these issues, we propose the granular ball RVFL (GB-RVFL) model, which uses granular balls (GBs) as inputs instead of training samples. This approach enhances scalability by requiring only the inverse of the GB center matrix and improves robustness against noise and outliers through the coarse granularity of GBs. Furthermore, RVFL overlooks the dataset's geometric structure. To address this, we propose the graph embedding GB-RVFL (GE-GB-RVFL) model, which fuses granular computing and graph embedding (GE) to preserve the topological structure of GBs. The proposed GB-RVFL and GE-GB-RVFL models are evaluated on KEEL, UCI, NDC and biomedical datasets, demonstrating superior performance compared to baseline models.<|reference_end|>
|
arxiv
|
@article{sajid2024gb-rvfl:,
title={GB-RVFL: Fusion of Randomized Neural Network and Granular Ball Computing},
author={M. Sajid, A. Quadir, M. Tanveer},
journal={arXiv preprint arXiv:2409.16735},
year={2024},
archivePrefix={arXiv},
eprint={2409.16735},
primaryClass={cs.LG cs.AI}
}
|
sajid2024gb-rvfl:
|
arxiv-661725
|
2409.16736
|
Commonly Interesting Images
|
<|reference_start|>Commonly Interesting Images: Images tell stories, trigger emotions, and let us recall memories -- they make us think. Thus, they have the ability to attract and hold one's attention, which is the definition of being "interesting". Yet, the appeal of an image is highly subjective. Looking at the image of my son taking his first steps will always bring me back to this emotional moment, while it is just a blurry, quickly taken snapshot to most others. Preferences vary widely: some adore cats, others are dog enthusiasts, and a third group may not be fond of either. We argue that every image can be interesting to a particular observer under certain circumstances. This work particularly emphasizes subjective preferences. However, our analysis of 2.5k image collections from diverse users of the photo-sharing platform Flickr reveals that specific image characteristics make them commonly more interesting. For instance, images, including professionally taken landscapes, appeal broadly due to their aesthetic qualities. In contrast, subjectively interesting images, such as those depicting personal or niche community events, resonate on a more individual level, often evoking personal memories and emotions.<|reference_end|>
|
arxiv
|
@article{abdullahu2024commonly,
title={Commonly Interesting Images},
author={Fitim Abdullahu, Helmut Grabner},
journal={arXiv preprint arXiv:2409.16736},
year={2024},
archivePrefix={arXiv},
eprint={2409.16736},
primaryClass={cs.CV}
}
|
abdullahu2024commonly
|
arxiv-661726
|
2409.16739
|
Context-Enhanced LLM-Based Framework for Automatic Test Refactoring
|
<|reference_start|>Context-Enhanced LLM-Based Framework for Automatic Test Refactoring: Test smells arise from poor design practices and insufficient domain knowledge, which can lower the quality of test code and make it harder to maintain and update. Manually refactoring test smells is time-consuming and error-prone, highlighting the necessity for automated approaches. Current rule-based refactoring methods often struggle in scenarios not covered by predefined rules and lack the flexibility needed to handle diverse cases effectively. In this paper, we propose a novel approach called UTRefactor, a context-enhanced, LLM-based framework for automatic test refactoring in Java projects. UTRefactor extracts relevant context from test code and leverages an external knowledge base that includes test smell definitions, descriptions, and DSL-based refactoring rules. By simulating the manual refactoring process through a chain-of-thought approach, UTRefactor guides the LLM to eliminate test smells in a step-by-step process, ensuring both accuracy and consistency throughout the refactoring. Additionally, we implement a checkpoint mechanism to facilitate comprehensive refactoring, particularly when multiple smells are present. We evaluate UTRefactor on 879 tests from six open-source Java projects, reducing the number of test smells from 2,375 to 265, achieving an 89% reduction. UTRefactor outperforms direct LLM-based refactoring methods by 61.82% in smell elimination and significantly surpasses the performance of a rule-based test smell refactoring tool. Our results demonstrate the effectiveness of UTRefactor in enhancing test code quality while minimizing manual involvement.<|reference_end|>
|
arxiv
|
@article{gao2024context-enhanced,
title={Context-Enhanced LLM-Based Framework for Automatic Test Refactoring},
author={Yi Gao, Xing Hu, Xiaohu Yang and Xin Xia},
journal={arXiv preprint arXiv:2409.16739},
year={2024},
archivePrefix={arXiv},
eprint={2409.16739},
primaryClass={cs.SE}
}
|
gao2024context-enhanced
|
arxiv-661727
|
2409.16743
|
Event-Triggered Non-Linear Control of Offshore MMC Grids for Asymmetrical AC Faults
|
<|reference_start|>Event-Triggered Non-Linear Control of Offshore MMC Grids for Asymmetrical AC Faults: Fault ride-through capability studies of MMC-HVDC connected wind power plants have focused primarily on the DC link and onshore AC grid faults. Offshore AC faults, mainly asymmetrical faults, have not gained much attention in the literature despite being included in the future development at national levels in the ENTSO-E HVDC code. The proposed work presents an event-triggered control scheme to stabilize the system once the offshore AC fault has occurred and has been identified and isolated. Different types of control actions, such as a proportional-integral (PI) controller and super-twisted sliding mode control (STSMC), are used to smoothly transition the post-fault system to a new steady-state operating point by suppressing the negative sequence control. Initially, the effect of a negative sequence current control scheme on the transient behavior of the power system with a PI controller is discussed in this paper. Further, a non-linear control strategy (STSMC) is proposed, which achieves quicker post-fault convergence of the system than the PI control action. These post-fault control operations are only triggered in the presence of a fault in the system, i.e., they are event-triggered. The validity of the proposed strategy is demonstrated by simulation on a $\pm$525 kV, three-terminal meshed MMC-HVDC system model in a Real Time Digital Simulator (RTDS).<|reference_end|>
|
arxiv
|
@article{cherat2024event-triggered,
title={Event-Triggered Non-Linear Control of Offshore MMC Grids for
Asymmetrical AC Faults},
author={Naajein Cherat, Vaibhav Nougain, Milovan Majstorovi\'c, Peter
Palensky, and Aleksandra Leki\'c},
journal={ISGT 2024},
year={2024},
archivePrefix={arXiv},
eprint={2409.16743},
primaryClass={eess.SY cs.SY}
}
|
cherat2024event-triggered
|
arxiv-661728
|
2409.16746
|
Adaptive Single-Terminal Fault Location for DC Microgrids
|
<|reference_start|>Adaptive Single-Terminal Fault Location for DC Microgrids: Identifying faulty lines and their accurate location is key for rapidly restoring distribution systems. This will become a greater challenge as the penetration of power electronics increases, and contingencies are seen across larger areas. This paper proposes a single-terminal methodology (i.e., no communication involved) that is robust to variations of key parameters (e.g., sampling frequency, system parameters, etc.) and performs particularly well for low-resistance faults, which constitute the majority of faults in low voltage DC systems. The proposed method uses local measurements to estimate the current caused by the other terminals affected by the contingency. This mimics the strategy followed by double-terminal methods that require communications and decouples the accuracy of the methodology from the fault resistance. The algorithm takes consecutive voltage and current samples, including the estimated current of the other terminal, into the analysis. This mathematical methodology results in better accuracy than other single-terminal approaches found in the literature. The robustness of the proposed strategy against different fault resistances and locations is demonstrated using MATLAB simulations.<|reference_end|>
|
arxiv
|
@article{nougain2024adaptive,
title={Adaptive Single-Terminal Fault Location for DC Microgrids},
author={Vaibhav Nougain, Sukumar Mishra, Joan-Marc Rodriguez-Bernuz, Adria
Junyent-Ferre, Aditya Shekhar, Aleksandra Lekic},
journal={SEST 2024 Proceedings},
year={2024},
archivePrefix={arXiv},
eprint={2409.16746},
primaryClass={eess.SY cs.SY}
}
|
nougain2024adaptive
|
arxiv-661729
|
2409.16749
|
Rapid Prototyping of 3D Microstructures: A Simplified Grayscale Lithography Encoding Method Using Blender
|
<|reference_start|>Rapid Prototyping of 3D Microstructures: A Simplified Grayscale Lithography Encoding Method Using Blender: The democratization of fabrication equipment has spurred recent interest in maskless grayscale lithography for both 2D and 3D microfabrication. However, the design of suitable template images remains a challenge. This work presents a simplified method for encoding 3D objects into grayscale image files optimized for grayscale lithography. Leveraging the widely used, open-source 3D modeling software Blender, we developed a robust approach to convert geometric heights into grayscale levels and generate image files through top-view rendering. Our method accurately reproduced the overall shape of simple structures like stairs and ramps compared to the original designs. We extended this approach to complex 3D sinusoidal surfaces, achieving similar results. Given the increasing accessibility and user-friendliness of digital rendering tools, this study offers a promising strategy for rapid prototyping of initial designs with minimal effort.<|reference_end|>
|
arxiv
|
@article{borghi2024rapid,
title={Rapid Prototyping of 3D Microstructures: A Simplified Grayscale
Lithography Encoding Method Using Blender},
author={Fabricio Frizera Borghi, Mohammed Bendimerad, Marie-Ly Chapon, Tatiana
Petithory, Laurent Vonna and Laurent Pieuchot},
journal={arXiv preprint arXiv:2409.16749},
year={2024},
archivePrefix={arXiv},
eprint={2409.16749},
primaryClass={cs.GR physics.app-ph physics.optics q-bio.CB}
}
|
borghi2024rapid
|
arxiv-661730
|
2409.16750
|
Distributed Robust Optimization Method for AC/MTDC Hybrid Power Systems with DC Network Cognizance
|
<|reference_start|>Distributed Robust Optimization Method for AC/MTDC Hybrid Power Systems with DC Network Cognizance: AC/multi-terminal DC (MTDC) hybrid power systems have emerged as a solution for the large-scale and long-distance accommodation of power produced by renewable energy systems (RESs). To ensure the optimal operation of such hybrid power systems, this paper addresses three key issues: system operational flexibility, centralized communication limitations, and RES uncertainties. Accordingly, a specific AC/DC optimal power flow (OPF) model and a distributed robust optimization method are proposed. Firstly, we apply a set of linear approximation and convex relaxation techniques to formulate the mixed-integer convex AC/DC OPF model. This model incorporates the DC network-cognizant constraint and enables DC topology reconfiguration. Next, generalized Benders decomposition (GBD) is employed to provide distributed optimization. Enhanced approaches are incorporated into GBD to achieve parallel computation and asynchronous updating. Additionally, the extreme scenario method (ESM) is embedded into the AC/DC OPF model to provide robust decisions to hedge against RES uncertainties. ESM is further extended to align the GBD procedure. Numerical results are finally presented to validate the effectiveness of our proposed method.<|reference_end|>
|
arxiv
|
@article{li2024distributed,
title={Distributed Robust Optimization Method for AC/MTDC Hybrid Power Systems
with DC Network Cognizance},
author={Haixiao Li, Aleksandra Leki\'c},
journal={SEST 2024 Proceedings},
year={2024},
archivePrefix={arXiv},
eprint={2409.16750},
primaryClass={math.OC cs.SY eess.SY}
}
|
li2024distributed
|
arxiv-661731
|
2409.16751
|
E-SQL: Direct Schema Linking via Question Enrichment in Text-to-SQL
|
<|reference_start|>E-SQL: Direct Schema Linking via Question Enrichment in Text-to-SQL: Translating Natural Language Queries into Structured Query Language (Text-to-SQL or NLQ-to-SQL) is a critical task extensively studied by both the natural language processing and database communities, aimed at providing a natural language interface to databases (NLIDB) and lowering the barrier for non-experts. Despite recent advancements made through the use of Large Language Models (LLMs), significant challenges remain. These include handling complex database schemas, resolving ambiguity in user queries, and generating SQL queries with intricate structures that accurately reflect the user's intent. In this work, we introduce E-SQL, a novel pipeline specifically designed to address these challenges through direct schema linking and candidate predicate augmentation. E-SQL enhances the natural language query by incorporating relevant database items (i.e., tables, columns, and values) and conditions directly into the question, bridging the gap between the query and the database structure. The pipeline leverages candidate predicate augmentation to mitigate erroneous or incomplete predicates in generated SQLs. We further investigate the impact of schema filtering, a technique widely explored in previous work, and demonstrate its diminishing returns when applied alongside advanced large language models. Comprehensive evaluations on the BIRD benchmark illustrate that E-SQL achieves competitive performance, particularly excelling in complex queries with a 66.29% execution accuracy on the test set. All code required to reproduce the reported results is publicly available on our GitHub repository.<|reference_end|>
|
arxiv
|
@article{caferoğlu2024e-sql:,
title={E-SQL: Direct Schema Linking via Question Enrichment in Text-to-SQL},
author={Hasan Alp Cafero\u{g}lu, \"Ozg\"ur Ulusoy},
journal={arXiv preprint arXiv:2409.16751},
year={2024},
archivePrefix={arXiv},
eprint={2409.16751},
primaryClass={cs.CL}
}
|
caferoğlu2024e-sql:
|
arxiv-661732
|
2409.16753
|
Perfect Hermitian rank-metric codes
|
<|reference_start|>Perfect Hermitian rank-metric codes: This study investigates Hermitian rank-metric codes, a special class of rank-metric codes, focusing on perfect codes and on the analysis of their covering properties. Firstly, we establish bounds on the size of spheres in the space of Hermitian matrices and, as a consequence, we show that non-trivial perfect codes do not exist in the Hermitian case. We conclude the paper by examining their covering density.<|reference_end|>
|
arxiv
|
@article{mushrraf2024perfect,
title={Perfect Hermitian rank-metric codes},
author={Usman Mushrraf},
journal={arXiv preprint arXiv:2409.16753},
year={2024},
archivePrefix={arXiv},
eprint={2409.16753},
primaryClass={cs.IT math.IT}
}
|
mushrraf2024perfect
|
arxiv-661733
|
2409.16754
|
xDevSM: Streamlining xApp Development With a Flexible Framework for O-RAN E2 Service Models
|
<|reference_start|>xDevSM: Streamlining xApp Development With a Flexible Framework for O-RAN E2 Service Models: RAN Intelligent Controllers (RICs) are programmable platforms that enable data-driven closed-loop control in the O-RAN architecture. They collect telemetry and data from the RAN, process it in custom applications, and enforce control or new configurations on the RAN. Such custom applications in the Near-Real-Time (Near-RT) RIC are called xApps, and enable a variety of use cases related to radio resource management. Despite numerous open-source and commercial projects focused on the Near-RT RIC, developing and testing xApps that are interoperable across multiple RAN implementations is a time-consuming and technically challenging process. This is primarily caused by the complexity of the protocol of the E2 interface, which enables communication between the RIC and the RAN while providing a high degree of flexibility, with multiple Service Models (SMs) providing plug-and-play functionalities such as data reporting and RAN control. In this paper, we propose xDevSM, an open-source flexible framework for O-RAN service models, aimed at simplifying xApp development for the O-RAN Software Community (OSC) Near-RT RIC. xDevSM reduces the complexity of the xApp development process, allowing developers to focus on the control logic of their xApps and moving the logic of the E2 service models behind simple Application Programming Interfaces (APIs). We demonstrate the effectiveness of this framework by deploying and testing xApps across various RAN software platforms, including OpenAirInterface and srsRAN. This framework significantly facilitates the development and validation of solutions and algorithms on O-RAN networks, including the testing of data-driven solutions across multiple RAN implementations.<|reference_end|>
|
arxiv
|
@article{feraudo2024xdevsm:,
title={xDevSM: Streamlining xApp Development With a Flexible Framework for
O-RAN E2 Service Models},
author={Angelo Feraudo, Stefano Maxenti, Andrea Lacava, Paolo Bellavista,
Michele Polese, Tommaso Melodia},
journal={arXiv preprint arXiv:2409.16754},
year={2024},
doi={10.1145/3636534.3697325},
archivePrefix={arXiv},
eprint={2409.16754},
primaryClass={cs.NI}
}
|
feraudo2024xdevsm:
|
arxiv-661734
|
2409.16756
|
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
|
<|reference_start|>Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics: Explainable AI (XAI) is a rapidly growing domain with a myriad of proposed methods as well as metrics aiming to evaluate their efficacy. However, current studies are often of limited scope, examining only a handful of XAI methods and ignoring underlying design parameters for performance, such as the model architecture or the nature of input data. Moreover, they often rely on one or a few metrics and neglect thorough validation, increasing the risk of selection bias and ignoring discrepancies among metrics. These shortcomings leave practitioners confused about which method to choose for their problem. In response, we introduce LATEC, a large-scale benchmark that critically evaluates 17 prominent XAI methods using 20 distinct metrics. We systematically incorporate vital design parameters like varied architectures and diverse input modalities, resulting in 7,560 examined combinations. Through LATEC, we showcase the high risk of conflicting metrics leading to unreliable rankings and consequently propose a more robust evaluation scheme. Further, we comprehensively evaluate various XAI methods to assist practitioners in selecting appropriate methods aligning with their needs. Curiously, the emerging top-performing method, Expected Gradients, is not examined in any relevant related study. LATEC reinforces its role in future XAI research by publicly releasing all 326k saliency maps and 378k metric scores as a (meta-)evaluation dataset. The benchmark is hosted at: https://github.com/IML-DKFZ/latec.<|reference_end|>
|
arxiv
|
@article{klein2024navigating,
title={Navigating the Maze of Explainable AI: A Systematic Approach to
Evaluating Methods and Metrics},
author={Lukas Klein, Carsten T. L\"uth, Udo Schlegel, Till J. Bungert,
Mennatallah El-Assady, Paul F. J\"ager},
journal={arXiv preprint arXiv:2409.16756},
year={2024},
archivePrefix={arXiv},
eprint={2409.16756},
primaryClass={cs.CV}
}
|
klein2024navigating
|
arxiv-661735
|
2409.16757
|
An Adaptive Re-evaluation Method for Evolution Strategy under Additive Noise
|
<|reference_start|>An Adaptive Re-evaluation Method for Evolution Strategy under Additive Noise: The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is one of the most advanced algorithms in numerical black-box optimization. For noisy objective functions, several approaches have been proposed to mitigate the noise, e.g., re-evaluations of the same solution or adapting the population size. In this paper, we devise a novel method to adaptively choose the optimal re-evaluation number for function values corrupted by additive Gaussian white noise. We derive a theoretical lower bound of the expected improvement achieved in one iteration of CMA-ES, given an estimation of the noise level and the Lipschitz constant of the function's gradient. Solving for the maximum of the lower bound, we obtain a simple expression of the optimal re-evaluation number. We experimentally compare our method to the state-of-the-art noise-handling methods for CMA-ES on a set of artificial test functions across various noise levels, optimization budgets, and dimensionality. Our method demonstrates significant advantages in terms of the probability of hitting near-optimal function values.<|reference_end|>
|
arxiv
|
@article{dinu2024an,
title={An Adaptive Re-evaluation Method for Evolution Strategy under Additive
Noise},
author={Catalin-Viorel Dinu, Yash J. Patel, Xavier Bonet-Monroig, Hao Wang},
journal={arXiv preprint arXiv:2409.16757},
year={2024},
archivePrefix={arXiv},
eprint={2409.16757},
primaryClass={cs.NE}
}
|
dinu2024an
|
arxiv-661736
|
2409.16760
|
Enhancing Automatic Keyphrase Labelling with Text-to-Text Transfer Transformer (T5) Architecture: A Framework for Keyphrase Generation and Filtering
|
<|reference_start|>Enhancing Automatic Keyphrase Labelling with Text-to-Text Transfer Transformer (T5) Architecture: A Framework for Keyphrase Generation and Filtering: Automatic keyphrase labelling refers to the ability of models to retrieve words or short phrases that adequately describe documents' content. Previous work has put much effort into exploring extractive techniques to address this task; however, these methods cannot produce keyphrases not found in the text. Given this limitation, keyphrase generation approaches have arisen lately. This paper presents a keyphrase generation model based on the Text-to-Text Transfer Transformer (T5) architecture. Having a document's title and abstract as input, we train a T5 model to generate keyphrases that adequately define its content. We name this model docT5keywords. We not only perform the classic inference approach, where the output sequence is directly selected as the predicted values, but we also report results from a majority voting approach. In this approach, multiple sequences are generated, and the keyphrases are ranked based on their frequency of occurrence across these sequences. Along with this model, we present a novel keyphrase filtering technique based on the T5 architecture. We train a T5 model to learn whether a given keyphrase is relevant to a document. We devise two evaluation methodologies to prove our model's capability to filter inadequate keyphrases. First, we perform a binary evaluation where our model has to predict if a keyphrase is relevant for a given document. Second, we filter the keyphrases predicted by several automatic keyphrase generation (AKG) models and check if the evaluation scores are improved. Experimental results demonstrate that our keyphrase generation model significantly outperforms all the baselines, with gains exceeding 100\% in some cases. The proposed filtering technique also achieves near-perfect accuracy in eliminating false positives across all datasets.<|reference_end|>
|
arxiv
|
@article{gabín2024enhancing,
title={Enhancing Automatic Keyphrase Labelling with Text-to-Text Transfer
Transformer (T5) Architecture: A Framework for Keyphrase Generation and
Filtering},
author={Jorge Gab\'in and M. Eduardo Ares and Javier Parapar},
journal={arXiv preprint arXiv:2409.16760},
year={2024},
archivePrefix={arXiv},
eprint={2409.16760},
primaryClass={cs.IR}
}
|
gabín2024enhancing
|
arxiv-661737
|
2409.16763
|
Statewide Visual Geolocalization in the Wild
|
<|reference_start|>Statewide Visual Geolocalization in the Wild: This work presents a method that is able to predict the geolocation of a street-view photo taken in the wild within a state-sized search region by matching against a database of aerial reference imagery. We partition the search region into geographical cells and train a model to map cells and corresponding photos into a joint embedding space that is used to perform retrieval at test time. The model utilizes aerial images for each cell at multiple levels-of-detail to provide sufficient information about the surrounding scene. We propose a novel layout of the search region with consistent cell resolutions that allows scaling to large geographical regions. Experiments demonstrate that the method successfully localizes 60.6% of all non-panoramic street-view photos uploaded to the crowd-sourcing platform Mapillary in the state of Massachusetts to within 50m of their ground-truth location. Source code is available at https://github.com/fferflo/statewide-visual-geolocalization.<|reference_end|>
|
arxiv
|
@article{fervers2024statewide,
title={Statewide Visual Geolocalization in the Wild},
author={Florian Fervers and Sebastian Bullinger and Christoph Bodensteiner and
Michael Arens and Rainer Stiefelhagen},
journal={arXiv preprint arXiv:2409.16763},
year={2024},
archivePrefix={arXiv},
eprint={2409.16763},
primaryClass={cs.CV}
}
|
fervers2024statewide
|
arxiv-661738
|
2409.16764
|
Offline and Distributional Reinforcement Learning for Radio Resource Management
|
<|reference_start|>Offline and Distributional Reinforcement Learning for Radio Resource Management: Reinforcement learning (RL) has proved to have a promising role in future intelligent wireless networks. Online RL has been adopted for radio resource management (RRM), taking over traditional schemes. However, due to its reliance on online interaction with the environment, its role becomes limited in practical, real-world problems where online interaction is not feasible. In addition, traditional RL stands short in front of the uncertainties and risks in real-world stochastic environments. In this manner, we propose an offline and distributional RL scheme for the RRM problem, enabling offline training using a static dataset without any interaction with the environment and considering the sources of uncertainties using the distributions of the return. Simulation results demonstrate that the proposed scheme outperforms conventional resource management models. In addition, it is the only scheme that surpasses online RL and achieves a $16 \%$ gain over online RL.<|reference_end|>
|
arxiv
|
@article{eldeeb2024offline,
title={Offline and Distributional Reinforcement Learning for Radio Resource
Management},
author={Eslam Eldeeb and Hirley Alves},
journal={arXiv preprint arXiv:2409.16764},
year={2024},
archivePrefix={arXiv},
eprint={2409.16764},
primaryClass={cs.LG cs.AI cs.MA}
}
|
eldeeb2024offline
|
arxiv-661739
|
2409.16765
|
MaViLS, a Benchmark Dataset for Video-to-Slide Alignment, Assessing Baseline Accuracy with a Multimodal Alignment Algorithm Leveraging Speech, OCR, and Visual Features
|
<|reference_start|>MaViLS, a Benchmark Dataset for Video-to-Slide Alignment, Assessing Baseline Accuracy with a Multimodal Alignment Algorithm Leveraging Speech, OCR, and Visual Features: This paper presents a benchmark dataset for aligning lecture videos with corresponding slides and introduces a novel multimodal algorithm leveraging features from speech, text, and images. It achieves an average accuracy of 0.82 in comparison to SIFT (0.56) while being approximately 11 times faster. Using dynamic programming the algorithm tries to determine the optimal slide sequence. The results show that penalizing slide transitions increases accuracy. Features obtained via optical character recognition (OCR) contribute the most to a high matching accuracy, followed by image features. The findings highlight that audio transcripts alone provide valuable information for alignment and are beneficial if OCR data is lacking. Variations in matching accuracy across different lectures highlight the challenges associated with video quality and lecture style. The novel multimodal algorithm demonstrates robustness to some of these challenges, underscoring the potential of the approach.<|reference_end|>
|
arxiv
|
@article{anderer2024mavils,
title={MaViLS, a Benchmark Dataset for Video-to-Slide Alignment, Assessing
Baseline Accuracy with a Multimodal Alignment Algorithm Leveraging Speech,
OCR, and Visual Features},
author={Katharina Anderer and Andreas Reich and Matthias W\"olfel},
journal={Proceedings of Interspeech 2024},
year={2024},
doi={10.21437/Interspeech.2024-978},
archivePrefix={arXiv},
eprint={2409.16765},
primaryClass={cs.CV cs.AI cs.LG eess.IV}
}
|
anderer2024mavils
|
arxiv-661740
|
2409.16766
|
Let There Be Light: Robust Lensless Imaging Under External Illumination With Deep Learning
|
<|reference_start|>Let There Be Light: Robust Lensless Imaging Under External Illumination With Deep Learning: Lensless cameras relax the design constraints of traditional cameras by shifting image formation from analog optics to digital post-processing. While new camera designs and applications can be enabled, lensless imaging is very sensitive to unwanted interference (other sources, noise, etc.). In this work, we address a prevalent noise source that has not been studied for lensless imaging: external illumination e.g. from ambient and direct lighting. Being robust to a variety of lighting conditions would increase the practicality and adoption of lensless imaging. To this end, we propose multiple recovery approaches that account for external illumination by incorporating its estimate into the image recovery process. At the core is a physics-based reconstruction that combines learnable image recovery and denoisers, all of whose parameters are trained using experimentally gathered data. Compared to standard reconstruction methods, our approach yields significant qualitative and quantitative improvements. We open-source our implementations and a 25K dataset of measurements under multiple lighting conditions.<|reference_end|>
|
arxiv
|
@article{bezzam2024let,
title={Let There Be Light: Robust Lensless Imaging Under External Illumination
With Deep Learning},
author={Eric Bezzam and Stefan Peters and Martin Vetterli},
journal={arXiv preprint arXiv:2409.16766},
year={2024},
archivePrefix={arXiv},
eprint={2409.16766},
primaryClass={eess.IV cs.CV}
}
|
bezzam2024let
|
arxiv-661741
|
2409.16767
|
Exploring Information-Theoretic Metrics Associated with Neural Collapse in Supervised Training
|
<|reference_start|>Exploring Information-Theoretic Metrics Associated with Neural Collapse in Supervised Training: In this paper, we utilize information-theoretic metrics like matrix entropy and mutual information to analyze supervised learning. We explore the information content of data representations and classification head weights and their information interplay during supervised training. Experiments show that matrix entropy cannot solely describe the interaction of the information content of data representation and classification head weights but it can effectively reflect the similarity and clustering behavior of the data. Inspired by this, we propose a cross-modal alignment loss to improve the alignment between the representations of the same class from different modalities. Moreover, in order to assess the interaction of the information content of data representation and classification head weights more accurately, we utilize new metrics like matrix mutual information ratio (MIR) and matrix information entropy difference ratio (HDR). Through theory and experiment, we show that HDR and MIR can not only effectively describe the information interplay of supervised training but also improve the performance of supervised and semi-supervised learning.<|reference_end|>
|
arxiv
|
@article{song2024exploring,
title={Exploring Information-Theoretic Metrics Associated with Neural Collapse
in Supervised Training},
author={Kun Song and Zhiquan Tan and Bochao Zou and Jiansheng Chen and
Huimin Ma and Weiran Huang},
journal={arXiv preprint arXiv:2409.16767},
year={2024},
archivePrefix={arXiv},
eprint={2409.16767},
primaryClass={cs.LG}
}
|
song2024exploring
|
arxiv-661742
|
2409.16768
|
Interpreting Deep Neural Network-Based Receiver Under Varying Signal-To-Noise Ratios
|
<|reference_start|>Interpreting Deep Neural Network-Based Receiver Under Varying Signal-To-Noise Ratios: We propose a novel method for interpreting neural networks, focusing on convolutional neural network-based receiver model. The method identifies which unit or units of the model contain most (or least) information about the channel parameter(s) of the interest, providing insights at both global and local levels -- with global explanations aggregating local ones. Experiments on link-level simulations demonstrate the method's effectiveness in identifying units that contribute most (and least) to signal-to-noise ratio processing. Although we focus on a radio receiver model, the method generalizes to other neural network architectures and applications, offering robust estimation even in high-dimensional settings.<|reference_end|>
|
arxiv
|
@article{tuononen2024interpreting,
title={Interpreting Deep Neural Network-Based Receiver Under Varying
Signal-To-Noise Ratios},
author={Marko Tuononen and Dani Korpi and Ville Hautam\"aki},
journal={arXiv preprint arXiv:2409.16768},
year={2024},
archivePrefix={arXiv},
eprint={2409.16768},
primaryClass={cs.LG cs.NI}
}
|
tuononen2024interpreting
|
arxiv-661743
|
2409.16769
|
Super Level Sets and Exponential Decay: A Synergistic Approach to Stable Neural Network Training
|
<|reference_start|>Super Level Sets and Exponential Decay: A Synergistic Approach to Stable Neural Network Training: The objective of this paper is to enhance the optimization process for neural networks by developing a dynamic learning rate algorithm that effectively integrates exponential decay and advanced anti-overfitting strategies. Our primary contribution is the establishment of a theoretical framework where we demonstrate that the optimization landscape, under the influence of our algorithm, exhibits unique stability characteristics defined by Lyapunov stability principles. Specifically, we prove that the superlevel sets of the loss function, as influenced by our adaptive learning rate, are always connected, ensuring consistent training dynamics. Furthermore, we establish the "equiconnectedness" property of these superlevel sets, which maintains uniform stability across varying training conditions and epochs. This paper contributes to the theoretical understanding of dynamic learning rate mechanisms in neural networks and also paves the way for the development of more efficient and reliable neural optimization techniques. This study intends to formalize and validate the equiconnectedness of loss function superlevel sets in the context of neural network training, opening newer avenues for future research in adaptive machine learning algorithms. We leverage previous theoretical discoveries to propose training mechanisms that can effectively handle complex and high-dimensional data landscapes, particularly in applications requiring high precision and reliability.<|reference_end|>
|
arxiv
|
@article{chaudhary2024super,
title={Super Level Sets and Exponential Decay: A Synergistic Approach to Stable
Neural Network Training},
author={Jatin Chaudhary and Dipak Nidhi and Jukka Heikkonen and Haari
Merisaari and Rajiv Kanth},
journal={arXiv preprint arXiv:2409.16769},
year={2024},
archivePrefix={arXiv},
eprint={2409.16769},
primaryClass={cs.LG cs.AI}
}
|
chaudhary2024super
|
arxiv-661744
|
2409.16770
|
Evolutionary Greedy Algorithm for Optimal Sensor Placement Problem in Urban Sewage Surveillance
|
<|reference_start|>Evolutionary Greedy Algorithm for Optimal Sensor Placement Problem in Urban Sewage Surveillance: Designing a cost-effective sensor placement plan for sewage surveillance is a crucial task because it allows cost-effective early pandemic outbreak detection as supplementation for individual testing. However, this problem is computationally challenging to solve, especially for massive sewage networks having complicated topologies. In this paper, we formulate this problem as a multi-objective optimization problem to consider the conflicting objectives and put forward a novel evolutionary greedy algorithm (EG) to enable efficient and effective optimization for large-scale directed networks. The proposed model is evaluated on both small-scale synthetic networks and a large-scale, real-world sewage network in Hong Kong. The experiments on small-scale synthetic networks demonstrate a consistent efficiency improvement with reasonable optimization performance and the real-world application shows that our method is effective in generating optimal sensor placement plans to guide policy-making.<|reference_end|>
|
arxiv
|
@article{wang2024evolutionary,
title={Evolutionary Greedy Algorithm for Optimal Sensor Placement Problem in
Urban Sewage Surveillance},
author={Sunyu Wang and Yutong Xia and Huanfa Chen and Xinyi Tong and Yulun Zhou},
journal={arXiv preprint arXiv:2409.16770},
year={2024},
archivePrefix={arXiv},
eprint={2409.16770},
primaryClass={cs.CY cs.NE}
}
|
wang2024evolutionary
|
arxiv-661745
|
2409.16774
|
MixPolyp: Integrating Mask, Box and Scribble Supervision for Enhanced Polyp Segmentation
|
<|reference_start|>MixPolyp: Integrating Mask, Box and Scribble Supervision for Enhanced Polyp Segmentation: Limited by the expensive labeling, polyp segmentation models are plagued by data shortages. To tackle this, we propose the mixed supervised polyp segmentation paradigm (MixPolyp). Unlike traditional models relying on a single type of annotation, MixPolyp combines diverse annotation types (mask, box, and scribble) within a single model, thereby expanding the range of available data and reducing labeling costs. To achieve this, MixPolyp introduces three novel supervision losses to handle various annotations: Subspace Projection loss (L_SP), Binary Minimum Entropy loss (L_BME), and Linear Regularization loss (L_LR). For box annotations, L_SP eliminates shape inconsistencies between the prediction and the supervision. For scribble annotations, L_BME provides supervision for unlabeled pixels through minimum entropy constraint, thereby alleviating supervision sparsity. Furthermore, L_LR provides dense supervision by enforcing consistency among the predictions, thus reducing the non-uniqueness. These losses are independent of the model structure, making them generally applicable. They are used only during training, adding no computational cost during inference. Extensive experiments on five datasets demonstrate MixPolyp's effectiveness.<|reference_end|>
|
arxiv
|
@article{hu2024mixpolyp:,
title={MixPolyp: Integrating Mask, Box and Scribble Supervision for Enhanced
Polyp Segmentation},
author={Yiwen Hu and Jun Wei and Yuncheng Jiang and Haoyang Li and Shuguang Cui
and Zhen Li and Song Wu},
journal={arXiv preprint arXiv:2409.16774},
year={2024},
archivePrefix={arXiv},
eprint={2409.16774},
primaryClass={cs.CV}
}
|
hu2024mixpolyp:
|
arxiv-661746
|
2409.16777
|
PhD Forum: Efficient Privacy-Preserving Processing via Memory-Centric Computing
|
<|reference_start|>PhD Forum: Efficient Privacy-Preserving Processing via Memory-Centric Computing: Privacy-preserving computation techniques like homomorphic encryption (HE) and secure multi-party computation (SMPC) enhance data security by enabling processing on encrypted data. However, the significant computational and CPU-DRAM data movement overhead resulting from the underlying cryptographic algorithms impedes the adoption of these techniques in practice. Existing approaches focus on improving computational overhead using specialized hardware like GPUs and FPGAs, but these methods still suffer from the same processor-DRAM overhead. Novel hardware technologies that support in-memory processing have the potential to address this problem. Memory-centric computing, or processing-in-memory (PIM), brings computation closer to data by introducing low-power processors called data processing units (DPUs) into memory. Besides its in-memory computation capability, PIM provides extensive parallelism, resulting in significant performance improvement over state-of-the-art approaches. We propose a framework that uses recently available PIM hardware to achieve efficient privacy-preserving computation. Our design consists of a four-layer architecture: (1) an application layer that decouples privacy-preserving applications from the underlying protocols and hardware; (2) a protocol layer that implements existing secure computation protocols (HE and MPC); (3) a data orchestration layer that leverages data compression techniques to mitigate the data transfer overhead between DPUs and host memory; (4) a computation layer which implements DPU kernels on which secure computation algorithms are built.<|reference_end|>
|
arxiv
|
@article{mwaisela2024phd,
title={PhD Forum: Efficient Privacy-Preserving Processing via Memory-Centric
Computing},
author={Mpoki Mwaisela},
journal={arXiv preprint arXiv:2409.16777},
year={2024},
archivePrefix={arXiv},
eprint={2409.16777},
primaryClass={cs.CR cs.AR cs.DC}
}
|
mwaisela2024phd
|
arxiv-661747
|
2409.16779
|
LLaMa-SciQ: An Educational Chatbot for Answering Science MCQ
|
<|reference_start|>LLaMa-SciQ: An Educational Chatbot for Answering Science MCQ: Large Language Models (LLMs) often struggle with tasks requiring mathematical reasoning, particularly multiple-choice questions (MCQs). To address this issue, we developed LLaMa-SciQ, an educational chatbot designed to assist college students in solving and understanding MCQs in STEM fields. We begin by fine-tuning and aligning the models to human preferences. After comparing the performance of Mistral-7B and LLaMa-8B, we selected the latter as the base model due to its higher evaluation accuracy. To further enhance accuracy, we implement Retrieval-Augmented Generation (RAG) and apply quantization to compress the model, reducing inference time and increasing accessibility for students. For mathematical reasoning, LLaMa-SciQ achieved 74.5% accuracy on the GSM8k dataset and 30% on the MATH dataset. However, RAG does not improve performance and even reduces it, likely due to retriever issues or the model's unfamiliarity with context. Despite this, the quantized model shows only a 5% loss in performance, demonstrating significant efficiency improvements.<|reference_end|>
|
arxiv
|
@article{allard2024llama-sciq:,
title={LLaMa-SciQ: An Educational Chatbot for Answering Science MCQ},
author={Marc-Antoine Allard and Matin Ansaripour and Maria Yuffa and Paul Teiletche},
journal={arXiv preprint arXiv:2409.16779},
year={2024},
archivePrefix={arXiv},
eprint={2409.16779},
primaryClass={cs.AI}
}
|
allard2024llama-sciq:
|
arxiv-661748
|
2409.16781
|
miniLB: A Performance Portability Study of Lattice-Boltzmann Simulations
|
<|reference_start|>miniLB: A Performance Portability Study of Lattice-Boltzmann Simulations: The Lattice Boltzmann Method (LBM) is a computational technique of Computational Fluid Dynamics (CFD) that has gained popularity due to its high parallelism and ability to handle complex geometries with minimal effort. Although LBM frameworks are increasingly important in various industries and research fields, their complexity makes them difficult to modify and can lead to suboptimal performance. This paper presents miniLB, the first, to the best of our knowledge, SYCL-based LBM mini-app. miniLB addresses the need for a performance-portable LBM proxy app capable of abstracting complex fluid dynamics simulations across heterogeneous computing systems. We analyze SYCL semantics for performance portability and evaluate miniLB on multiple GPU architectures using various SYCL implementations. Our results, compared against a manually-tuned FORTRAN version, demonstrate the effectiveness of miniLB in assessing LBM performance across diverse hardware, offering valuable insights for optimizing large-scale LBM frameworks in modern computing environments.<|reference_end|>
|
arxiv
|
@article{crisci2024minilb:,
title={miniLB: A Performance Portability Study of Lattice-Boltzmann Simulations},
author={Luigi Crisci and Biagio Cosenza and Giorgio Amati and Matteo Turisini},
journal={arXiv preprint arXiv:2409.16781},
year={2024},
archivePrefix={arXiv},
eprint={2409.16781},
primaryClass={cs.DC}
}
|
crisci2024minilb:
|
arxiv-661749
|
2409.16783
|
Holistic Automated Red Teaming for Large Language Models through Top-Down Test Case Generation and Multi-turn Interaction
|
<|reference_start|>Holistic Automated Red Teaming for Large Language Models through Top-Down Test Case Generation and Multi-turn Interaction: Automated red teaming is an effective method for identifying misaligned behaviors in large language models (LLMs). Existing approaches, however, often focus primarily on improving attack success rates while overlooking the need for comprehensive test case coverage. Additionally, most of these methods are limited to single-turn red teaming, failing to capture the multi-turn dynamics of real-world human-machine interactions. To overcome these limitations, we propose HARM (Holistic Automated Red teaMing), which scales up the diversity of test cases using a top-down approach based on an extensible, fine-grained risk taxonomy. Our method also leverages a novel fine-tuning strategy and reinforcement learning techniques to facilitate multi-turn adversarial probing in a human-like manner. Experimental results demonstrate that our framework enables a more systematic understanding of model vulnerabilities and offers more targeted guidance for the alignment process.<|reference_end|>
|
arxiv
|
@article{zhang2024holistic,
title={Holistic Automated Red Teaming for Large Language Models through
Top-Down Test Case Generation and Multi-turn Interaction},
author={Jinchuan Zhang and Yan Zhou and Yaxin Liu and Ziming Li and Songlin Hu},
journal={arXiv preprint arXiv:2409.16783},
year={2024},
archivePrefix={arXiv},
eprint={2409.16783},
primaryClass={cs.CL cs.AI cs.CR}
}
|
zhang2024holistic
|
arxiv-661750
|
2409.16784
|
World Model-based Perception for Visual Legged Locomotion
|
<|reference_start|>World Model-based Perception for Visual Legged Locomotion: Legged locomotion over various terrains is challenging and requires precise perception of the robot and its surroundings from both proprioception and vision. However, learning directly from high-dimensional visual input is often data-inefficient and intricate. To address this issue, traditional methods attempt to learn a teacher policy with access to privileged information first and then learn a student policy to imitate the teacher's behavior with visual input. Despite some progress, this imitation framework prevents the student policy from achieving optimal performance due to the information gap between inputs. Furthermore, the learning process is unnatural since animals intuitively learn to traverse different terrains based on their understanding of the world without privileged knowledge. Inspired by this natural ability, we propose a simple yet effective method, World Model-based Perception (WMP), which builds a world model of the environment and learns a policy based on the world model. We illustrate that though completely trained in simulation, the world model can make accurate predictions of real-world trajectories, thus providing informative signals for the policy controller. Extensive simulated and real-world experiments demonstrate that WMP outperforms state-of-the-art baselines in traversability and robustness. Videos and Code are available at: https://wmp-loco.github.io/.<|reference_end|>
|
arxiv
|
@article{lai2024world,
title={World Model-based Perception for Visual Legged Locomotion},
author={Hang Lai and Jiahang Cao and Jiafeng Xu and Hongtao Wu and Yunfeng Lin
and Tao Kong and Yong Yu and Weinan Zhang},
journal={arXiv preprint arXiv:2409.16784},
year={2024},
archivePrefix={arXiv},
eprint={2409.16784},
primaryClass={cs.RO cs.LG}
}
|
lai2024world
|
arxiv-661751
|
2409.16787
|
Enhancing Feature Selection and Interpretability in AI Regression Tasks Through Feature Attribution
|
<|reference_start|>Enhancing Feature Selection and Interpretability in AI Regression Tasks Through Feature Attribution: Research in Explainable Artificial Intelligence (XAI) is increasing, aiming to make deep learning models more transparent. Most XAI methods focus on justifying the decisions made by Artificial Intelligence (AI) systems in security-relevant applications. However, relatively little attention has been given to using these methods to improve the performance and robustness of deep learning algorithms. Additionally, much of the existing XAI work primarily addresses classification problems. In this study, we investigate the potential of feature attribution methods to filter out uninformative features in input data for regression problems, thereby improving the accuracy and stability of predictions. We introduce a feature selection pipeline that combines Integrated Gradients with k-means clustering to select an optimal set of variables from the initial data space. To validate the effectiveness of this approach, we apply it to a real-world industrial problem - blade vibration analysis in the development process of turbo machinery.<|reference_end|>
|
arxiv
|
@article{hinterleitner2024enhancing,
title={Enhancing Feature Selection and Interpretability in AI Regression Tasks
Through Feature Attribution},
author={Alexander Hinterleitner and Thomas Bartz-Beielstein and Richard Schulz
and Sebastian Spengler and Thomas Winter and Christoph Leitenmeier},
journal={arXiv preprint arXiv:2409.16787},
year={2024},
archivePrefix={arXiv},
eprint={2409.16787},
primaryClass={cs.LG cs.AI}
}
|
hinterleitner2024enhancing
|
arxiv-661752
|
2409.16788
|
Mitigating the Bias of Large Language Model Evaluation
|
<|reference_start|>Mitigating the Bias of Large Language Model Evaluation: Recently, there has been a trend of evaluating the Large Language Model (LLM) quality in the flavor of LLM-as-a-Judge, namely leveraging another LLM to evaluate the current output quality. However, existing judges are proven to be biased, namely they would favor answers which present better superficial quality (such as verbosity, fluency) while ignoring the instruction following ability. In this work, we propose systematic research about the bias of LLM-as-a-Judge. Specifically, for closed-source judge models, we apply calibration to mitigate the significance of superficial quality, both on probability level and prompt level. For open-source judge models, we propose to mitigate the bias by contrastive training, with curated negative samples that deviate from instruction but present better superficial quality. We apply our methods on the bias evaluation benchmark, and experiment results show our methods mitigate the bias by a large margin while maintaining a satisfactory evaluation accuracy.<|reference_end|>
|
arxiv
|
@article{zhou2024mitigating,
title={Mitigating the Bias of Large Language Model Evaluation},
author={Hongli Zhou and Hui Huang and Yunfei Long and Bing Xu and Conghui Zhu
and Hailong Cao and Muyun Yang and Tiejun Zhao},
journal={arXiv preprint arXiv:2409.16788},
year={2024},
archivePrefix={arXiv},
eprint={2409.16788},
primaryClass={cs.CL}
}
|
zhou2024mitigating
|
arxiv-661753
|
2409.16791
|
Symbolic State Partitioning for Reinforcement Learning
|
<|reference_start|>Symbolic State Partitioning for Reinforcement Learning: Tabular reinforcement learning methods cannot operate directly on continuous state spaces. One solution for this problem is to partition the state space. A good partitioning enables generalization during learning and more efficient exploitation of prior experiences. Consequently, the learning process becomes faster and produces more reliable policies. However, partitioning introduces approximation, which is particularly harmful in the presence of nonlinear relations between state components. An ideal partition should be as coarse as possible, while capturing the key structure of the state space for the given problem. This work extracts partitions from the environment dynamics by symbolic execution. We show that symbolic partitioning improves state space coverage with respect to environmental behavior and allows reinforcement learning to perform better for sparse rewards. We evaluate symbolic state space partitioning with respect to precision, scalability, learning agent performance and state space coverage for the learnt policies.<|reference_end|>
|
arxiv
|
@article{ghaffari2024symbolic,
title={Symbolic State Partitioning for Reinforcement Learning},
author={Mohsen Ghaffari and Mahsa Varshosaz and Einar Broch Johnsen and
Andrzej W\k{a}sowski},
journal={arXiv preprint arXiv:2409.16791},
year={2024},
archivePrefix={arXiv},
eprint={2409.16791},
primaryClass={cs.LG cs.AI}
}
|
ghaffari2024symbolic
|
arxiv-661754
|
2409.16793
|
Spacewalker: Traversing Representation Spaces for Fast Interactive Exploration and Annotation of Unstructured Data
|
<|reference_start|>Spacewalker: Traversing Representation Spaces for Fast Interactive Exploration and Annotation of Unstructured Data: Unstructured data in industries such as healthcare, finance, and manufacturing presents significant challenges for efficient analysis and decision making. Detecting patterns within this data and understanding their impact is critical but complex without the right tools. Traditionally, these tasks relied on the expertise of data analysts or labor-intensive manual reviews. In response, we introduce Spacewalker, an interactive tool designed to explore and annotate data across multiple modalities. Spacewalker allows users to extract data representations and visualize them in low-dimensional spaces, enabling the detection of semantic similarities. Through extensive user studies, we assess Spacewalker's effectiveness in data annotation and integrity verification. Results show that the tool's ability to traverse latent spaces and perform multi-modal queries significantly enhances the user's capacity to quickly identify relevant data. Moreover, Spacewalker allows for annotation speed-ups far superior to conventional methods, making it a promising tool for efficiently navigating unstructured data and improving decision making processes. The code of this work is open-source and can be found at: https://github.com/code-lukas/Spacewalker<|reference_end|>
|
arxiv
|
@article{heine2024spacewalker:,
title={Spacewalker: Traversing Representation Spaces for Fast Interactive
Exploration and Annotation of Unstructured Data},
author={Lukas Heine and Fabian H\"orst and Jana Fragemann and Gijs Luijten and
Miriam Balzer and Jan Egger and Fin Bahnsen and M. Saquib Sarfraz and
Jens Kleesiek and Constantin Seibold},
journal={arXiv preprint arXiv:2409.16793},
year={2024},
archivePrefix={arXiv},
eprint={2409.16793},
primaryClass={cs.CV cs.HC cs.IR}
}
|
heine2024spacewalker:
|
arxiv-661755
|
2409.16794
|
Optimal Denial-of-Service Attacks Against Partially-Observable Real-Time Monitoring Systems
|
<|reference_start|>Optimal Denial-of-Service Attacks Against Partially-Observable Real-Time Monitoring Systems: In this paper, we investigate the impact of denial-of-service attacks on the status updating of a cyber-physical system with one or more sensors connected to a remote monitor via unreliable channels. We approach the problem from the perspective of an adversary that can strategically jam a subset of the channels. The sources are modeled as Markov chains, and the performance of status updating is measured based on the age of incorrect information at the monitor. Our objective is to derive jamming policies that strike a balance between the degradation of the system's performance and the conservation of the adversary's energy. For a single-source scenario, we formulate the problem as a partially-observable Markov decision process, and rigorously prove that the optimal jamming policy is of a threshold form. We then extend the problem to a multi-source scenario. We formulate this problem as a restless multi-armed bandit, and provide a jamming policy based on the Whittle's index. Our numerical results highlight the performance of our policies compared to baseline policies.<|reference_end|>
|
arxiv
|
@article{kriouile2024optimal,
title={Optimal Denial-of-Service Attacks Against Partially-Observable Real-Time
Monitoring Systems},
author={Saad Kriouile, Mohamad Assaad, Amira Alloum, and Touraj Soleymani},
journal={arXiv preprint arXiv:2409.16794},
year={2024},
archivePrefix={arXiv},
eprint={2409.16794},
primaryClass={cs.IT math.IT math.OC}
}
|
kriouile2024optimal
|
arxiv-661756
|
2409.16796
|
The Detection and Correction of Silent Errors in Pipelined Krylov Subspace Methods
|
<|reference_start|>The Detection and Correction of Silent Errors in Pipelined Krylov Subspace Methods: As computational machines are becoming larger and more complex, the probability of hardware failure rises. ``Silent errors'', or, bit flips, may not be immediately apparent but can cause detrimental effects to algorithm behavior. In this work, we examine an algorithm-based approach to silent error detection in the context of pipelined Krylov subspace methods, in particular, Pipe-PR-CG, for the solution of linear systems. Our approach is based on using finite precision error analysis to bound the differences between quantities which should be equal in exact arithmetic. Through inexpensive monitoring during the iteration, we can detect when these bounds are violated, which indicates that a silent error has occurred. We use this approach to develop a fault-tolerance variant and also suggest a strategy for dynamically adapting the detection criteria. Our numerical experiments demonstrate the effectiveness of our approach.<|reference_end|>
|
arxiv
|
@article{carson2024the,
title={The Detection and Correction of Silent Errors in Pipelined Krylov
Subspace Methods},
author={Erin Claire Carson and Jakub Herc\'ik},
journal={arXiv preprint arXiv:2409.16796},
year={2024},
archivePrefix={arXiv},
eprint={2409.16796},
primaryClass={math.NA cs.NA}
}
|
carson2024the
|
arxiv-661757
|
2409.16797
|
Scalable Ensemble Diversification for OOD Generalization and Detection
|
<|reference_start|>Scalable Ensemble Diversification for OOD Generalization and Detection: Training a diverse ensemble of models has several practical applications such as providing candidates for model selection with better out-of-distribution (OOD) generalization, and enabling the detection of OOD samples via Bayesian principles. An existing approach to diverse ensemble training encourages the models to disagree on provided OOD samples. However, the approach is computationally expensive and it requires well-separated ID and OOD examples, such that it has only been demonstrated in small-scale settings. $\textbf{Method.}$ This work presents a method for Scalable Ensemble Diversification (SED) applicable to large-scale settings (e.g. ImageNet) that does not require OOD samples. Instead, SED identifies hard training samples on the fly and encourages the ensemble members to disagree on these. To improve scaling, we show how to avoid the expensive computations in existing methods of exhaustive pairwise disagreements across models. $\textbf{Results.}$ We evaluate the benefits of diversification with experiments on ImageNet. First, for OOD generalization, we observe large benefits from the diversification in multiple settings including output-space (classical) ensembles and weight-space ensembles (model soups). Second, for OOD detection, we turn the diversity of ensemble hypotheses into a novel uncertainty score estimator that surpasses a large number of OOD detection baselines. Code is available here: https://github.com/AlexanderRubinstein/diverse-universe-public.<|reference_end|>
|
arxiv
|
@article{rubinstein2024scalable,
title={Scalable Ensemble Diversification for OOD Generalization and Detection},
author={Alexander Rubinstein, Luca Scimeca, Damien Teney, Seong Joon Oh},
journal={arXiv preprint arXiv:2409.16797},
year={2024},
archivePrefix={arXiv},
eprint={2409.16797},
primaryClass={cs.LG cs.AI cs.CV}
}
|
rubinstein2024scalable
|
arxiv-661758
|
2409.16799
|
Large Language Model Predicts Above Normal All India Summer Monsoon Rainfall in 2024
|
<|reference_start|>Large Language Model Predicts Above Normal All India Summer Monsoon Rainfall in 2024: Reliable prediction of the All India Summer Monsoon Rainfall (AISMR) is pivotal for informed policymaking for the country, impacting the lives of billions of people. However, accurate simulation of AISMR has been a persistent challenge due to the complex interplay of various multi-scale factors and the inherent variability of the monsoon system. This research focuses on adapting and fine-tuning the latest LLM model, PatchTST, to accurately predict AISMR with a lead time of three months. The fine-tuned PatchTST model, trained with historical AISMR data, the Ni\~no3.4 index, and categorical Indian Ocean Dipole values, outperforms several popular neural network models and statistical models. This fine-tuned LLM model exhibits an exceptionally low RMSE percentage of 0.07% and a Spearman correlation of 0.976. This is particularly impressive, since it is nearly 80% more accurate than the best-performing NN models. The model predicts an above-normal monsoon for the year 2024, with an accumulated rainfall of 921.6 mm in the months of June-September for the entire country.<|reference_end|>
|
arxiv
|
@article{sharma2024large,
title={Large Language Model Predicts Above Normal All India Summer Monsoon
Rainfall in 2024},
author={Ujjawal Sharma, Madhav Biyani, Akhil Dev Suresh, Debi Prasad Bhuyan,
Saroj Kanta Mishra, Tanmoy Chakraborty},
journal={arXiv preprint arXiv:2409.16799},
year={2024},
archivePrefix={arXiv},
eprint={2409.16799},
primaryClass={cs.AI cs.LG stat.AP}
}
|
sharma2024large
|
arxiv-661759
|
2409.16800
|
Programming of Skill-based Robots
|
<|reference_start|>Programming of Skill-based Robots: Manufacturing is facing ever changing market demands, with faster innovation cycles resulting in growing agility and flexibility requirements. Industry 4.0 has been transforming the manufacturing world towards digital automation and the importance of software has increased drastically. Easy and fast task programming and execution in robot - sensor systems become a prerequisite for agile and flexible automation and in this paper, we propose such a system. Our solution relies on a robot skill library, which provides the user with high level and parametrized operations, i.e., robot skills, for task programming and execution. Programming actions results in a control recipe in a neutral product context and is based on use of product CAD models or alternatively collaborative use of pointers and tracking sensor with real parts. Practical tests are also reported to show the feasibility of our approach.<|reference_end|>
|
arxiv
|
@article{lohi2024programming,
title={Programming of Skill-based Robots},
author={Taneli Lohi, Samuli Soutukorva and Tapio Heikkil\"a},
journal={arXiv preprint arXiv:2409.16800},
year={2024},
doi={10.1109/ICIEA61579.2024.10664981},
archivePrefix={arXiv},
eprint={2409.16800},
primaryClass={cs.RO}
}
|
lohi2024programming
|
arxiv-661760
|
2409.16802
|
Do We Need iPhone Moment or Xiaomi Moment for Robots? Design of Affordable Home Robots for Health Monitoring
|
<|reference_start|>Do We Need iPhone Moment or Xiaomi Moment for Robots? Design of Affordable Home Robots for Health Monitoring: In this paper, we study cost-effective home robot solutions which are designed for home health monitoring. The recent advancements in Artificial Intelligence (AI) have significantly advanced the capabilities of the robots, enabling them to better and efficiently understand and interact with their surroundings. The most common robots currently used in homes are toy robots and cleaning robots. While these are relatively affordable, their functionalities are very limited. On the other hand, humanoid and quadruped robots offer more sophisticated features and capabilities, albeit at a much higher cost. Another category is educational robots, which provide educators with the flexibility to attach various sensors and integrate different design methods with the integrated operating systems. However, the challenge still exists in bridging the gap between affordability and functionality. Our research aims to address this by exploring the potential of developing advanced yet affordable and accessible robots for home robots, aiming for health monitoring, by using edge computing techniques and taking advantage of existing computing resources for home robots, such as mobile phones.<|reference_end|>
|
arxiv
|
@article{wei2024do,
title={Do We Need iPhone Moment or Xiaomi Moment for Robots? Design of
Affordable Home Robots for Health Monitoring},
author={Bo Wei, Yaya Bian, Mingcen Gao},
journal={arXiv preprint arXiv:2409.16802},
year={2024},
archivePrefix={arXiv},
eprint={2409.16802},
primaryClass={cs.RO}
}
|
wei2024do
|
arxiv-661761
|
2409.16803
|
Incorporating Spatial Cues in Modular Speaker Diarization for Multi-channel Multi-party Meetings
|
<|reference_start|>Incorporating Spatial Cues in Modular Speaker Diarization for Multi-channel Multi-party Meetings: Although fully end-to-end speaker diarization systems have made significant progress in recent years, modular systems often achieve superior results in real-world scenarios due to their greater adaptability and robustness. Historically, modular speaker diarization methods have seldom discussed how to leverage spatial cues from multi-channel speech. This paper proposes a three-stage modular system to enhance single-channel neural speaker diarization systems and recognition performance by utilizing spatial cues from multi-channel speech to provide more accurate initialization for each stage of neural speaker diarization (NSD) decoding: (1) Overlap detection and continuous speech separation (CSS) on multi-channel speech are used to obtain cleaner single speaker speech segments for clustering, followed by the first NSD decoding pass. (2) The results from the first pass initialize a complex Angular Central Gaussian Mixture Model (cACGMM) to estimate speaker-wise masks on multi-channel speech, and through Overlap-add and Mask-to-VAD, achieve initialization with lower speaker error (SpkErr), followed by the second NSD decoding pass. (3) The second decoding results are used for guided source separation (GSS), recognizing and filtering short segments containing less than one word to obtain cleaner speech segments, followed by re-clustering and the final NSD decoding pass. We presented the progressively explored evaluation results from the CHiME-8 NOTSOFAR-1 (Natural Office Talkers in Settings Of Far-field Audio Recordings) challenge, demonstrating the effectiveness of our system and its contribution to improving recognition performance. Our final system achieved the first place in the challenge.<|reference_end|>
|
arxiv
|
@article{wang2024incorporating,
title={Incorporating Spatial Cues in Modular Speaker Diarization for
Multi-channel Multi-party Meetings},
author={Ruoyu Wang, Shutong Niu, Gaobin Yang, Jun Du, Shuangqing Qian, Tian
Gao, Jia Pan},
journal={arXiv preprint arXiv:2409.16803},
year={2024},
archivePrefix={arXiv},
eprint={2409.16803},
primaryClass={eess.AS cs.SD}
}
|
wang2024incorporating
|
arxiv-661762
|
2409.16806
|
Topological SLAM in colonoscopies leveraging deep features and topological priors
|
<|reference_start|>Topological SLAM in colonoscopies leveraging deep features and topological priors: We introduce ColonSLAM, a system that combines classical multiple-map metric SLAM with deep features and topological priors to create topological maps of the whole colon. The SLAM pipeline by itself is able to create disconnected individual metric submaps representing locations from short video subsections of the colon, but is not able to merge covisible submaps due to deformations and the limited performance of the SIFT descriptor in the medical domain. ColonSLAM is guided by topological priors and combines a deep localization network trained to distinguish if two images come from the same place or not and the soft verification of a transformer-based matching network, being able to relate far-in-time submaps during an exploration, grouping them in nodes imaging the same colon place, building more complex maps than any other approach in the literature. We demonstrate our approach in the Endomapper dataset, showing its potential for producing maps of the whole colon in real human explorations. Code and models are available at: https://github.com/endomapper/ColonSLAM.<|reference_end|>
|
arxiv
|
@article{morlana2024topological,
title={Topological SLAM in colonoscopies leveraging deep features and
topological priors},
author={Javier Morlana, Juan D. Tard\'os and Jos\'e M. M. Montiel},
journal={arXiv preprint arXiv:2409.16806},
year={2024},
archivePrefix={arXiv},
eprint={2409.16806},
primaryClass={cs.CV}
}
|
morlana2024topological
|
arxiv-661763
|
2409.16807
|
A Few Hypocrites: Few-Shot Learning and Subtype Definitions for Detecting Hypocrisy Accusations in Online Climate Change Debates
|
<|reference_start|>A Few Hypocrites: Few-Shot Learning and Subtype Definitions for Detecting Hypocrisy Accusations in Online Climate Change Debates: The climate crisis is a salient issue in online discussions, and hypocrisy accusations are a central rhetorical element in these debates. However, for large-scale text analysis, hypocrisy accusation detection is an understudied tool, most often defined as a smaller subtask of fallacious argument detection. In this paper, we define hypocrisy accusation detection as an independent task in NLP, and identify different relevant subtypes of hypocrisy accusations. Our Climate Hypocrisy Accusation Corpus (CHAC) consists of 420 Reddit climate debate comments, expert-annotated into two different types of hypocrisy accusations: personal versus political hypocrisy. We evaluate few-shot in-context learning with 6 shots and 3 instruction-tuned Large Language Models (LLMs) for detecting hypocrisy accusations in this dataset. Results indicate that the GPT-4o and Llama-3 models in particular show promise in detecting hypocrisy accusations (F1 reaching 0.68, while previous work shows F1 of 0.44). However, context matters for a complex semantic concept such as hypocrisy accusations, and we find models struggle especially at identifying political hypocrisy accusations compared to personal moral hypocrisy. Our study contributes new insights in hypocrisy detection and climate change discourse, and is a stepping stone for large-scale analysis of hypocrisy accusation in online climate debates.<|reference_end|>
|
arxiv
|
@article{corral2024a,
title={A Few Hypocrites: Few-Shot Learning and Subtype Definitions for
Detecting Hypocrisy Accusations in Online Climate Change Debates},
author={Paulina Garcia Corral, Avishai Green, Hendrik Meyer, Anke Stoll,
Xiaoyue Yan, Myrthe Reuver},
journal={arXiv preprint arXiv:2409.16807},
year={2024},
archivePrefix={arXiv},
eprint={2409.16807},
primaryClass={cs.CL}
}
|
corral2024a
|
arxiv-661764
|
2409.16808
|
Benchmarking Deep Learning Models for Object Detection on Edge Computing Devices
|
<|reference_start|>Benchmarking Deep Learning Models for Object Detection on Edge Computing Devices: Modern applications, such as autonomous vehicles, require deploying deep learning algorithms on resource-constrained edge devices for real-time image and video processing. However, there is limited understanding of the efficiency and performance of various object detection models on these devices. In this paper, we evaluate state-of-the-art object detection models, including YOLOv8 (Nano, Small, Medium), EfficientDet Lite (Lite0, Lite1, Lite2), and SSD (SSD MobileNet V1, SSDLite MobileDet). We deployed these models on popular edge devices like the Raspberry Pi 3, 4, and 5 with/without TPU accelerators, and Jetson Orin Nano, collecting key performance metrics such as energy consumption, inference time, and Mean Average Precision (mAP). Our findings highlight that lower mAP models such as SSD MobileNet V1 are more energy-efficient and faster in inference, whereas higher mAP models like YOLOv8 Medium generally consume more energy and have slower inference, though with exceptions when accelerators like TPUs are used. Among the edge devices, Jetson Orin Nano stands out as the fastest and most energy-efficient option for request handling, despite having the highest idle energy consumption. These results emphasize the need to balance accuracy, speed, and energy efficiency when deploying deep learning models on edge devices, offering valuable guidance for practitioners and researchers selecting models and devices for their applications.<|reference_end|>
|
arxiv
|
@article{alqahtani2024benchmarking,
title={Benchmarking Deep Learning Models for Object Detection on Edge Computing
Devices},
author={Daghash K. Alqahtani, Aamir Cheema, Adel N. Toosi},
journal={arXiv preprint arXiv:2409.16808},
year={2024},
archivePrefix={arXiv},
eprint={2409.16808},
primaryClass={cs.CV cs.AR cs.DC cs.SE}
}
|
alqahtani2024benchmarking
|
arxiv-661765
|
2409.16809
|
Analytical assessment of workers' safety concerning direct and indirect ways of getting infected by dangerous pathogen
|
<|reference_start|>Analytical assessment of workers' safety concerning direct and indirect ways of getting infected by dangerous pathogen: The development of safety policies for protecting large groups of individuals working in indoor environments against disease spreading provides an important and challenging task. To address this issue, we investigate the scenario of workers getting infected by the dangerous airborne pathogen in a close to real-life industrial environment. We present the simple analytical model based on the observations made during the recent pandemic, and business expectations concerning the protection of workers. The model can be tuned to handle other epidemic or non-epidemic threats, including dangerous vapors from industrial processes. In the presented model, we consider direct and indirect ways of getting infected, the first by direct contact with an infected agent, and the second by contact with a contaminated environment, including air in compartments or working surfaces. Our analysis is based on the simplified droplet/aerosol spreading diffusion model, validated by droplets' spreading simulations. The model can be easily applied to new scenarios and has modest computational requirements compared with the simulations. Hence, the model can be applied in an automated protection ecosystem in the industrial environment, where the time for assessing danger is limited, and computation has to be performed almost in real time. Using a simple agent-based model, we confirm the general research conclusion on disease spreading. From our results, we draft a set of countermeasures for infection spreading, which could be used as the basis of the prevention policy, suitable for use in industrial scenarios.<|reference_end|>
|
arxiv
|
@article{domino2024analytical,
title={Analytical assessment of workers' safety concerning direct and indirect
ways of getting infected by dangerous pathogen},
author={Krzysztof Domino, Arkadiusz Sochan, Jaros{\l}aw Adam Miszczak},
journal={arXiv preprint arXiv:2409.16809},
year={2024},
archivePrefix={arXiv},
eprint={2409.16809},
primaryClass={cs.CE}
}
|
domino2024analytical
|
arxiv-661766
|
2409.16810
|
Inline Photometrically Calibrated Hybrid Visual SLAM
|
<|reference_start|>Inline Photometrically Calibrated Hybrid Visual SLAM: This paper presents an integrated approach to Visual SLAM, merging online sequential photometric calibration within a Hybrid direct-indirect visual SLAM (H-SLAM). Photometric calibration helps normalize pixel intensity values under different lighting conditions, and thereby improves the direct component of our H-SLAM. A tangential benefit also extends to the indirect component of H-SLAM given that the detected features are more stable across variable lighting conditions. Our proposed photometrically calibrated H-SLAM is tested on several datasets, including the TUM monoVO as well as on a dataset we created. Calibrated H-SLAM outperforms other state-of-the-art direct, indirect, and hybrid Visual SLAM systems in all the experiments. Furthermore, in online SLAM tested at our site, it also significantly outperformed the other SLAM systems.<|reference_end|>
|
arxiv
|
@article{abboud2024inline,
title={Inline Photometrically Calibrated Hybrid Visual SLAM},
author={Nicolas Abboud, Malak Sayour, Imad H. Elhajj, John Zelek, Daniel Asmar},
journal={arXiv preprint arXiv:2409.16810},
year={2024},
archivePrefix={arXiv},
eprint={2409.16810},
primaryClass={cs.RO cs.CV cs.SY eess.SY}
}
|
abboud2024inline
|
arxiv-661767
|
2409.16811
|
Performance Boundary Analyses for Statistical Multi-QoS Framework Over 6G SAGINs
|
<|reference_start|>Performance Boundary Analyses for Statistical Multi-QoS Framework Over 6G SAGINs: To enable the cost-effective universal access and the enhancement of current communication services, the space-air-ground integrated networks (SAGINs) have recently been developed due to their exceptional 3D coverage and the ability to guarantee rigorous and multidimensional demands for quality-of-service (QoS) provisioning, including delay and reliability across vast distances. In response to the complex, heterogeneous, and dynamic serving scenarios and stringent performance expectations for 6G SAGINs, it is crucial to undertake modeling, assurance, and analysis of the key technologies, aligned with the diverse demands for QoS provisioning in the non-asymptotic regime, i.e., when implementing finite blocklength coding (FBC) as a new dimension for error-rate bounded QoS metric. However, how to design new statistical QoS-driven performance modeling approaches that accurately delineate the complex and dynamic behaviors of networks, particularly in terms of constraining both delay and error rate, persists as a significant challenge for implementing mURLLC within 6G SAGINs in the finite blocklength regime. To overcome these difficulties, in this paper we propose to develop a set of analytical modeling frameworks for 6G SAGIN in supporting statistical delay and error-rate bounded QoS in the finite blocklength regime. First, we establish the SAGIN system architecture model. Second, the aggregate interference and decoding error probability functions are modeled and examined using the Laplace transform. Third, we introduce modeling techniques aimed at defining the $\epsilon$-effective capacity function as a crucial metric for facilitating statistical QoS standards with respect to delay and error-rate. To validate the effectiveness of the developed performance modeling schemes, we have executed a series of simulations over SAGINs.<|reference_end|>
|
arxiv
|
@article{wang2024performance,
title={Performance Boundary Analyses for Statistical Multi-QoS Framework Over
6G SAGINs},
author={Jingqing Wang, Wenchi Cheng, and Wei Zhang},
journal={arXiv preprint arXiv:2409.16811},
year={2024},
archivePrefix={arXiv},
eprint={2409.16811},
primaryClass={eess.SY cs.SY}
}
|
wang2024performance
|
arxiv-661768
|
2409.16813
|
PeerArg: Argumentative Peer Review with LLMs
|
<|reference_start|>PeerArg: Argumentative Peer Review with LLMs: Peer review is an essential process to determine the quality of papers submitted to scientific conferences or journals. However, it is subjective and prone to biases. Several studies have been conducted to apply techniques from NLP to support peer review, but they are based on black-box techniques and their outputs are difficult to interpret and trust. In this paper, we propose a novel pipeline to support and understand the reviewing and decision-making processes of peer review: the PeerArg system combining LLMs with methods from knowledge representation. PeerArg takes in input a set of reviews for a paper and outputs the paper acceptance prediction. We evaluate the performance of the PeerArg pipeline on three different datasets, in comparison with a novel end-2-end LLM that uses few-shot learning to predict paper acceptance given reviews. The results indicate that the end-2-end LLM is capable of predicting paper acceptance from reviews, but a variant of the PeerArg pipeline outperforms this LLM.<|reference_end|>
|
arxiv
|
@article{sukpanichnant2024peerarg:,
title={PeerArg: Argumentative Peer Review with LLMs},
author={Purin Sukpanichnant, Anna Rapberger, Francesca Toni},
journal={arXiv preprint arXiv:2409.16813},
year={2024},
archivePrefix={arXiv},
eprint={2409.16813},
primaryClass={cs.AI}
}
|
sukpanichnant2024peerarg:
|
arxiv-661769
|
2409.16815
|
Accelerating TinyML Inference on Microcontrollers through Approximate Kernels
|
<|reference_start|>Accelerating TinyML Inference on Microcontrollers through Approximate Kernels: The rapid growth of microcontroller-based IoT devices has opened up numerous applications, from smart manufacturing to personalized healthcare. Despite the widespread adoption of energy-efficient microcontroller units (MCUs) in the Tiny Machine Learning (TinyML) domain, they still face significant limitations in terms of performance and memory (RAM, Flash). In this work, we combine approximate computing and software kernel design to accelerate the inference of approximate CNN models on MCUs. Our kernel-based approximation framework firstly unpacks the operands of each convolution layer and then conducts an offline calculation to determine the significance of each operand. Subsequently, through a design space exploration, it employs a computation skipping approximation strategy based on the calculated significance. Our evaluation on an STM32-Nucleo board and 2 popular CNNs trained on the CIFAR-10 dataset shows that, compared to state-of-the-art exact inference, our Pareto optimal solutions can feature on average 21% latency reduction with no degradation in Top-1 classification accuracy, while for lower accuracy requirements, the corresponding reduction becomes even more pronounced.<|reference_end|>
|
arxiv
|
@article{armeniakos2024accelerating,
title={Accelerating TinyML Inference on Microcontrollers through Approximate
Kernels},
author={Giorgos Armeniakos, Georgios Mentzos, Dimitrios Soudris},
journal={arXiv preprint arXiv:2409.16815},
year={2024},
archivePrefix={arXiv},
eprint={2409.16815},
primaryClass={cs.LG}
}
|
armeniakos2024accelerating
|
arxiv-661770
|
2409.16817
|
A parametric framework for kernel-based dynamic mode decomposition using deep learning
|
<|reference_start|>A parametric framework for kernel-based dynamic mode decomposition using deep learning: Surrogate modelling is widely applied in computational science and engineering to mitigate computational efficiency issues for the real-time simulations of complex and large-scale computational models or for many-query scenarios, such as uncertainty quantification and design optimisation. In this work, we propose a parametric framework for kernel-based dynamic mode decomposition method based on the linear and nonlinear disambiguation optimization (LANDO) algorithm. The proposed parametric framework consists of two stages, offline and online. The offline stage prepares the essential component for prediction, namely a series of LANDO models that emulate the dynamics of the system with particular parameters from a training dataset. The online stage leverages those LANDO models to generate new data at a desired time instant, and approximate the mapping between parameters and the state with the data using deep learning techniques. Moreover, dimensionality reduction technique is applied to high-dimensional dynamical systems to reduce the computational cost of training. Three numerical examples including Lotka-Volterra model, heat equation and reaction-diffusion equation are presented to demonstrate the efficiency and effectiveness of the proposed framework.<|reference_end|>
|
arxiv
|
@article{kevopoulos2024a,
title={A parametric framework for kernel-based dynamic mode decomposition using
deep learning},
author={Konstantinos Kevopoulos, Dongwei Ye},
journal={arXiv preprint arXiv:2409.16817},
year={2024},
archivePrefix={arXiv},
eprint={2409.16817},
primaryClass={cs.LG cs.CE}
}
|
kevopoulos2024a
|
arxiv-661771
|
2409.16818
|
Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation
|
<|reference_start|>Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation: Multimodal brain magnetic resonance (MR) imaging is indispensable in neuroscience and neurology. However, due to the accessibility of MRI scanners and their lengthy acquisition time, multimodal MR images are not commonly available. Current MR image synthesis approaches are typically trained on independent datasets for specific tasks, leading to suboptimal performance when applied to novel datasets and tasks. Here, we present TUMSyn, a Text-guided Universal MR image Synthesis generalist model, which can flexibly generate brain MR images with demanded imaging metadata from routinely acquired scans guided by text prompts. To ensure TUMSyn's image synthesis precision, versatility, and generalizability, we first construct a brain MR database comprising 31,407 3D images with 7 MRI modalities from 13 centers. We then pre-train an MRI-specific text encoder using contrastive learning to effectively control MR image synthesis based on text prompts. Extensive experiments on diverse datasets and physician assessments indicate that TUMSyn can generate clinically meaningful MR images with specified imaging metadata in supervised and zero-shot scenarios. Therefore, TUMSyn can be utilized along with acquired MR scan(s) to facilitate large-scale MRI-based screening and diagnosis of brain diseases.<|reference_end|>
|
arxiv
|
@article{wang2024towards,
title={Towards General Text-guided Image Synthesis for Customized Multimodal
Brain MRI Generation},
author={Yulin Wang, Honglin Xiong, Kaicong Sun, Shuwei Bai, Ling Dai,
Zhongxiang Ding, Jiameng Liu, Qian Wang, Qian Liu, Dinggang Shen},
journal={arXiv preprint arXiv:2409.16818},
year={2024},
archivePrefix={arXiv},
eprint={2409.16818},
primaryClass={eess.IV cs.CV}
}
|
wang2024towards
|
arxiv-661772
|
2409.16819
|
CodeInsight: A Curated Dataset of Practical Coding Solutions from Stack Overflow
|
<|reference_start|>CodeInsight: A Curated Dataset of Practical Coding Solutions from Stack Overflow: We introduce a novel dataset tailored for code generation, aimed at aiding developers in common tasks. Our dataset provides examples that include a clarified intent, associated code snippets, and an average of three related unit tests. It encompasses a range of libraries such as \texttt{Pandas}, \texttt{Numpy}, and \texttt{Regex}, along with more than 70 standard libraries in Python code derived from Stack Overflow. Comprising 3,409 examples crafted by Python experts, our dataset is designed for both model finetuning and standalone evaluation. To complete the unit-test evaluation, we categorize examples in order to obtain a more fine-grained analysis, enhancing the understanding of models' strengths and weaknesses in specific coding tasks. The examples have been refined to reduce data contamination, a process confirmed by the performance of three leading models: Mistral 7B, CodeLLaMa 13B, and Starcoder 15B. We further investigate data contamination by testing GPT-4 performance on a part of our dataset. The benchmark can be accessed at \url{https://github.com/NathanaelBeau/CodeInsight}.<|reference_end|>
|
arxiv
|
@article{beau2024codeinsight:,
title={CodeInsight: A Curated Dataset of Practical Coding Solutions from Stack
Overflow},
author={Nathana\"el Beau and Beno\^it Crabb\'e},
journal={arXiv preprint arXiv:2409.16819},
year={2024},
archivePrefix={arXiv},
eprint={2409.16819},
primaryClass={cs.CL cs.SE}
}
|
beau2024codeinsight:
|
arxiv-661773
|
2409.16820
|
Spotlight Text Detector: Spotlight on Candidate Regions Like a Camera
|
<|reference_start|>Spotlight Text Detector: Spotlight on Candidate Regions Like a Camera: The irregular contour representation is one of the tough challenges in scene text detection. Although segmentation-based methods have achieved significant progress with the help of flexible pixel prediction, the overlap of geographically close texts hinders detecting them separately. To alleviate this problem, some shrink-based methods predict text kernels and expand them to restructure texts. However, the text kernel is an artificial object with incomplete semantic features that are prone to incorrect or missing detection. In addition, different from the general objects, the geometry features (aspect ratio, scale, and shape) of scene texts vary significantly, which makes it difficult to detect them accurately. To address the above problems, we propose an effective spotlight text detector (STD), which consists of a spotlight calibration module (SCM) and a multivariate information extraction module (MIEM). The former concentrates efforts on the candidate kernel, like a camera focusing on the target. It obtains candidate features through a mapping filter and calibrates them precisely to eliminate some false positive samples. The latter designs different shape schemes to explore multiple geometric features for scene texts. It helps extract various spatial relationships to improve the model's ability to recognize kernel regions. Ablation studies prove the effectiveness of the designed SCM and MIEM. Extensive experiments verify that our STD is superior to existing state-of-the-art methods on various datasets, including ICDAR2015, CTW1500, MSRA-TD500, and Total-Text.<|reference_end|>
|
arxiv
|
@article{han2024spotlight,
title={Spotlight Text Detector: Spotlight on Candidate Regions Like a Camera},
author={Xu Han and Junyu Gao and Chuang Yang and Yuan Yuan and Qi Wang},
journal={arXiv preprint arXiv:2409.16820},
year={2024},
archivePrefix={arXiv},
eprint={2409.16820},
primaryClass={cs.CV}
}
|
han2024spotlight
|
arxiv-661774
|
2409.16821
|
XAI-guided Insulator Anomaly Detection for Imbalanced Datasets
|
<|reference_start|>XAI-guided Insulator Anomaly Detection for Imbalanced Datasets: Power grids serve as a vital component in numerous industries, seamlessly delivering electrical energy to industrial processes and technologies, making their safe and reliable operation indispensable. However, powerlines can be hard to inspect due to difficult terrain or harsh climatic conditions. Therefore, unmanned aerial vehicles are increasingly deployed to inspect powerlines, resulting in a substantial stream of visual data which requires swift and accurate processing. Deep learning methods have become widely popular for this task, proving to be a valuable asset in fault detection. In particular, the detection of insulator defects is crucial for predicting powerline failures, since their malfunction can lead to transmission disruptions. It is therefore of great interest to continuously maintain and rigorously inspect insulator components. In this work we propose a novel pipeline to tackle this task. We utilize state-of-the-art object detection to detect and subsequently classify individual insulator anomalies. Our approach addresses dataset challenges such as imbalance and motion-blurred images through a fine-tuning methodology which allows us to alter the classification focus of the model by increasing the classification accuracy of anomalous insulators. In addition, we employ explainable-AI tools for precise localization and explanation of anomalies. This proposed method contributes to the field of anomaly detection, particularly vision-based industrial inspection and predictive maintenance. We significantly improve defect detection accuracy by up to 13%, while also offering a detailed analysis of model mis-classifications and localization quality, showcasing the potential of our method on real-world data.<|reference_end|>
|
arxiv
|
@article{hoefler2024xai-guided,
title={XAI-guided Insulator Anomaly Detection for Imbalanced Datasets},
author={Maximilian Andreas Hoefler and Karsten Mueller and Wojciech Samek},
journal={arXiv preprint arXiv:2409.16821},
year={2024},
archivePrefix={arXiv},
eprint={2409.16821},
primaryClass={cs.CV cs.AI}
}
|
hoefler2024xai-guided
|
arxiv-661775
|
2409.16822
|
Gripenberg-like algorithm for the lower spectral radius
|
<|reference_start|>Gripenberg-like algorithm for the lower spectral radius: This article presents an extended algorithm for computing the lower spectral radius of finite, non-negative matrix sets. Given a set of matrices $\mathcal{F} = \{A_1, \ldots, A_m\}$, the lower spectral radius represents the minimal growth rate of sequences in the product semigroup generated by $\mathcal{F}$. This quantity is crucial for characterizing optimal stable trajectories in discrete dynamical systems of the form $x_{k+1} = A_{i_k} x_k$, where $A_{i_k} \in \mathcal{F}$ for all $k \ge 0$. For the well-known joint spectral radius (which represents the highest growth rate), a famous algorithm providing suitable lower and upper bounds and able to approximate the joint spectral radius with arbitrary accuracy was proposed by Gripenberg in 1996. For the lower spectral radius, where a lower bound is not directly available (contrary to the joint spectral radius), this computation appears more challenging. Our work extends Gripenberg's approach to the lower spectral radius computation for non-negative matrix families. The proposed algorithm employs a time-varying antinorm and demonstrates rapid convergence. Its success is related to the property that the lower spectral radius can be obtained as a Gelfand limit, which was recently proved in Guglielmi and Zennaro (2020). Additionally, we propose an improvement to the classical Gripenberg algorithm for approximating the joint spectral radius of arbitrary matrix sets.<|reference_end|>
|
arxiv
|
@article{guglielmi2024gripenberg-like,
title={Gripenberg-like algorithm for the lower spectral radius},
author={Nicola Guglielmi and Francesco Paolo Maiale},
journal={arXiv preprint arXiv:2409.16822},
year={2024},
archivePrefix={arXiv},
eprint={2409.16822},
primaryClass={math.NA cs.NA}
}
|
guglielmi2024gripenberg-like
|
arxiv-661776
|
2409.16824
|
Uncertainty Representations in State-Space Layers for Deep Reinforcement Learning under Partial Observability
|
<|reference_start|>Uncertainty Representations in State-Space Layers for Deep Reinforcement Learning under Partial Observability: Optimal decision-making under partial observability requires reasoning about the uncertainty of the environment's hidden state. However, most reinforcement learning architectures handle partial observability with sequence models that have no internal mechanism to incorporate uncertainty in their hidden state representation, such as recurrent neural networks, deterministic state-space models and transformers. Inspired by advances in probabilistic world models for reinforcement learning, we propose a standalone Kalman filter layer that performs closed-form Gaussian inference in linear state-space models and train it end-to-end within a model-free architecture to maximize returns. Similar to efficient linear recurrent layers, the Kalman filter layer processes sequential data using a parallel scan, which scales logarithmically with the sequence length. By design, Kalman filter layers are a drop-in replacement for other recurrent layers in standard model-free architectures, but importantly they include an explicit mechanism for probabilistic filtering of the latent state representation. Experiments in a wide variety of tasks with partial observability show that Kalman filter layers excel in problems where uncertainty reasoning is key for decision-making, outperforming other stateful models.<|reference_end|>
|
arxiv
|
@article{luis2024uncertainty,
title={Uncertainty Representations in State-Space Layers for Deep Reinforcement
Learning under Partial Observability},
author={Carlos E. Luis and Alessandro G. Bottero and Julia Vinogradska and
Felix Berkenkamp and Jan Peters},
journal={arXiv preprint arXiv:2409.16824},
year={2024},
archivePrefix={arXiv},
eprint={2409.16824},
primaryClass={cs.LG cs.AI}
}
|
luis2024uncertainty
|
arxiv-661777
|
2409.16825
|
Measurements and System Identification for the Characterization of Smooth Muscle Cell Dynamics
|
<|reference_start|>Measurements and System Identification for the Characterization of Smooth Muscle Cell Dynamics: Biological tissue integrity is actively maintained by cells. It is essential to comprehend how cells accomplish this in order to stage tissue diseases. However, addressing the complexity of a cell's system of interrelated mechanisms poses a challenge. This necessitates a well-structured identification framework and an effective integration of measurements. Here we introduce the use of state-of-the-art frequency-domain system identification techniques combined with an indentation measurement platform to analyze the underlying mechanisms from the perspective of control system theory. The ultimate goal is to explore how mechanical and biological factors are related in induced Pluripotent Stem Cell-derived vascular smooth muscle cells. We focus on frequency-domain analysis for the investigation and characterization of the cellular dynamics of smooth muscle cells from the measured data. The measurement model in this study exploits the availability of human tissue and samples, enabling fundamental investigations of vascular tissue disease. This approach using human cell lines holds significant potential to decrease the necessity for animal-based safety and efficacy studies. The focus of this review is to investigate the cellular dynamics underlying the myogenic response and to demonstrate the practicability of employing a nano-indentation measurement setup for the broadband frequency-domain characterization of induced Pluripotent Stem Cell-derived vascular smooth muscle cells.<|reference_end|>
|
arxiv
|
@article{ozturk2024measurements,
title={Measurements and System Identification for the Characterization of
Smooth Muscle Cell Dynamics},
author={Dilan Ozturk and Pepijn Saraber and Kevin Bielawski and Alessandro
Giudici and Leon Schurgers and Koen Reesink and Maarten Schoukens},
journal={2024 IEEE International Symposium on Medical Measurements and
Applications (MeMeA)},
year={2024},
doi={10.1109/MeMeA60663.2024.10596921},
archivePrefix={arXiv},
eprint={2409.16825},
primaryClass={eess.SY cs.SY math.DS}
}
|
ozturk2024measurements
|
arxiv-661778
|
2409.16826
|
Learning phase-space flows using time-discrete implicit Runge-Kutta PINNs
|
<|reference_start|>Learning phase-space flows using time-discrete implicit Runge-Kutta PINNs: We present a computational framework for obtaining multidimensional phase-space solutions of systems of non-linear coupled differential equations, using high-order implicit Runge-Kutta Physics-Informed Neural Networks (IRK-PINNs) schemes. Building upon foundational work originally solving differential equations for fields depending on coordinates [J. Comput. Phys. 378, 686 (2019)], we adapt the scheme to a context where the coordinates are treated as functions. This modification enables us to efficiently solve equations of motion for a particle in an external field. Our scheme is particularly useful for explicitly time-independent and periodic fields. We apply this approach to successfully solve the equations of motion for a mass particle placed in a central force field and a charged particle in a periodic electric field.<|reference_end|>
|
arxiv
|
@article{corral2024learning,
title={Learning phase-space flows using time-discrete implicit Runge-Kutta
PINNs},
author={\'Alvaro Fern\'andez Corral and Nicol\'as Mendoza and Armin Iske and
Andrey Yachmenev and Jochen K\"upper},
journal={arXiv preprint arXiv:2409.16826},
year={2024},
archivePrefix={arXiv},
eprint={2409.16826},
primaryClass={cs.LG cs.AI cs.NA math.DS math.NA}
}
|
corral2024learning
|
arxiv-661779
|
2409.16827
|
Focus Entirety and Perceive Environment for Arbitrary-Shaped Text Detection
|
<|reference_start|>Focus Entirety and Perceive Environment for Arbitrary-Shaped Text Detection: Due to the diversity of scene text in aspects such as font, color, shape, and size, accurately and efficiently detecting text is still a formidable challenge. Among the various detection approaches, segmentation-based approaches have emerged as prominent contenders owing to their flexible pixel-level predictions. However, these methods typically model text instances in a bottom-up manner, which is highly susceptible to noise. In addition, the prediction of pixels is isolated without introducing pixel-feature interaction, which also influences the detection performance. To alleviate these problems, we propose a multi-information level arbitrary-shaped text detector consisting of a focus entirety module (FEM) and a perceive environment module (PEM). The former extracts instance-level features and adopts a top-down scheme to model texts to reduce the influence of noises. Specifically, it assigns consistent entirety information to pixels within the same instance to improve their cohesion. In addition, it emphasizes the scale information, enabling the model to distinguish varying scale texts effectively. The latter extracts region-level information and encourages the model to focus on the distribution of positive samples in the vicinity of a pixel, which perceives environment information. It treats the kernel pixels as positive samples and helps the model differentiate text and kernel features. Extensive experiments demonstrate the FEM's ability to efficiently support the model in handling different scale texts and confirm the PEM can assist in perceiving pixels more accurately by focusing on pixel vicinities. Comparisons show the proposed model outperforms existing state-of-the-art approaches on four public datasets.<|reference_end|>
|
arxiv
|
@article{han2024focus,
title={Focus Entirety and Perceive Environment for Arbitrary-Shaped Text
Detection},
author={Xu Han and Junyu Gao and Chuang Yang and Yuan Yuan and Qi Wang},
journal={arXiv preprint arXiv:2409.16827},
year={2024},
archivePrefix={arXiv},
eprint={2409.16827},
primaryClass={cs.CV}
}
|
han2024focus
|
arxiv-661780
|
2409.16828
|
On the role of Artificial Intelligence methods in modern force-controlled manufacturing robotic tasks
|
<|reference_start|>On the role of Artificial Intelligence methods in modern force-controlled manufacturing robotic tasks: This position paper explores the integration of Artificial Intelligence (AI) into force-controlled robotic tasks within the scope of advanced manufacturing, a cornerstone of Industry 4.0. AI's role in enhancing robotic manipulators - key drivers in the Fourth Industrial Revolution - is rapidly leading to significant innovations in smart manufacturing. The objective of this article is to frame these innovations in practical force-controlled applications - e.g. deburring, polishing, and assembly tasks like peg-in-hole (PiH) - highlighting their necessity for maintaining high-quality production standards. By reporting on recent AI-based methodologies, this article contrasts them and identifies current challenges to be addressed in future research. The analysis concludes with a perspective on future research directions, emphasizing the need for common performance metrics to validate AI techniques, integration of various enhancements for performance optimization, and the importance of validating them in relevant scenarios. These future directions aim to provide consistency with already adopted approaches, so as to be compatible with manufacturing standards, increasing the relevance of AI-driven methods in both academic and industrial contexts.<|reference_end|>
|
arxiv
|
@article{petrone2024on,
title={On the role of Artificial Intelligence methods in modern
force-controlled manufacturing robotic tasks},
author={Vincenzo Petrone and Enrico Ferrentino and Pasquale Chiacchio},
journal={arXiv preprint arXiv:2409.16828},
year={2024},
archivePrefix={arXiv},
eprint={2409.16828},
primaryClass={cs.RO cs.AI}
}
|
petrone2024on
|
arxiv-661781
|
2409.16830
|
OffRIPP: Offline RL-based Informative Path Planning
|
<|reference_start|>OffRIPP: Offline RL-based Informative Path Planning: Informative path planning (IPP) is a crucial task in robotics, where agents must design paths to gather valuable information about a target environment while adhering to resource constraints. Reinforcement learning (RL) has been shown to be effective for IPP; however, it requires environment interactions, which are risky and expensive in practice. To address this problem, we propose an offline RL-based IPP framework that optimizes information gain without requiring real-time interaction during training, offering safety and cost-efficiency by avoiding interaction, as well as superior performance and fast computation during execution -- key advantages of RL. Our framework leverages batch-constrained reinforcement learning to mitigate extrapolation errors, enabling the agent to learn from pre-collected datasets generated by arbitrary algorithms. We validate the framework through extensive simulations and real-world experiments. The numerical results show that our framework outperforms the baselines, demonstrating the effectiveness of the proposed approach.<|reference_end|>
|
arxiv
|
@article{gadipudi2024offripp:,
title={OffRIPP: Offline RL-based Informative Path Planning},
author={Srikar Babu Gadipudi and Srujan Deolasee and Siva Kailas and Wenhao
Luo and Katia Sycara and Woojun Kim},
journal={arXiv preprint arXiv:2409.16830},
year={2024},
archivePrefix={arXiv},
eprint={2409.16830},
primaryClass={cs.RO cs.AI}
}
|
gadipudi2024offripp:
|
arxiv-661782
|
2409.16831
|
Joint Mobile Cell Positioning and Scheduler Selection in Locations Characterised by Substantial Obstacles
|
<|reference_start|>Joint Mobile Cell Positioning and Scheduler Selection in Locations Characterised by Substantial Obstacles: Positioning a mobile cell in a seaport environment presents unique challenges due to the high density of User Equipments (UEs) and obstacles causing shadowing effects. This paper addresses the problem of optimal positioning for a mobile cell within a defined area containing UEs, fixed cells, and obstacles. By formulating an optimisation problem, we consider variables including user associations and different types of scheduling for packet transmission. The mobile cell wireless backhaul is designed to meet the total capacity requirements of the UEs it serves, based on the optimal positioning determined by our solution approach. Using a Genetic Algorithm (GA) solver, we achieve significant gains, with objective capacity improvements of up to 200% for the 90th percentile. The proposed solution enhances network performance, especially in scenarios requiring increased capacity for emergency situations.<|reference_end|>
|
arxiv
|
@article{correia2024joint,
title={Joint Mobile Cell Positioning and Scheduler Selection in Locations
Characterised by Substantial Obstacles},
author={Paulo Furtado Correia and Andre Coelho and Manuel Ricardo},
journal={arXiv preprint arXiv:2409.16831},
year={2024},
archivePrefix={arXiv},
eprint={2409.16831},
primaryClass={cs.NI}
}
|
correia2024joint
|
arxiv-661783
|
2409.16832
|
Asynchronous Fractional Multi-Agent Deep Reinforcement Learning for Age-Minimal Mobile Edge Computing
|
<|reference_start|>Asynchronous Fractional Multi-Agent Deep Reinforcement Learning for Age-Minimal Mobile Edge Computing: In the realm of emerging real-time networked applications like cyber-physical systems (CPS), the Age of Information (AoI) has emerged as a pivotal metric for evaluating timeliness. To meet the high computational demands, such as those in intelligent manufacturing within CPS, mobile edge computing (MEC) presents a promising solution for optimizing computing and reducing AoI. In this work, we study the timeliness of computation-intensive updates and jointly optimize the task updating and offloading policies to minimize AoI. Specifically, we consider edge load dynamics and formulate a task scheduling problem to minimize the expected time-average AoI. The fractional objective introduced by AoI and the semi-Markov game nature of the problem render this challenge particularly difficult, with existing approaches not directly applicable. To this end, we present a comprehensive framework for fractional reinforcement learning (RL). We first introduce a fractional single-agent RL framework and prove its linear convergence. We then extend this to a fractional multi-agent RL framework with a convergence analysis. To tackle the challenge of asynchronous control in the semi-Markov game, we further design an asynchronous model-free fractional multi-agent RL algorithm, where each device makes scheduling decisions with the hybrid action space without knowing the system dynamics and decisions of other devices. Experimental results show that our proposed algorithms reduce the average AoI by up to 52.6% compared with the best baseline algorithm in our experiments.<|reference_end|>
|
arxiv
|
@article{jin2024asynchronous,
title={Asynchronous Fractional Multi-Agent Deep Reinforcement Learning for
Age-Minimal Mobile Edge Computing},
author={Lyudong Jin and Ming Tang and Jiayu Pan and Meng Zhang and Hao Wang},
journal={arXiv preprint arXiv:2409.16832},
year={2024},
archivePrefix={arXiv},
eprint={2409.16832},
primaryClass={cs.LG cs.NI}
}
|
jin2024asynchronous
|
arxiv-661784
|
2409.16834
|
Conditional Generative Denoiser for Nighttime UAV Tracking
|
<|reference_start|>Conditional Generative Denoiser for Nighttime UAV Tracking: State-of-the-art (SOTA) visual object tracking methods have significantly enhanced the autonomy of unmanned aerial vehicles (UAVs). However, in low-light conditions, the presence of irregular real noise from the environments severely degrades the performance of these SOTA methods. Moreover, existing SOTA denoising techniques often fail to meet the real-time processing requirements when deployed as plug-and-play denoisers for UAV tracking. To address this challenge, this work proposes a novel conditional generative denoiser (CGDenoiser), which breaks free from the limitations of traditional deterministic paradigms and generates the noise conditioned on the input, subsequently removing it. To better align the input dimensions and accelerate inference, a novel nested residual Transformer conditionalizer is developed. Furthermore, an innovative multi-kernel conditional refiner is designed to pertinently refine the denoised output. Extensive experiments show that CGDenoiser promotes the tracking precision of the SOTA tracker by 18.18\% on DarkTrack2021 whereas working 5.8 times faster than the second best-performing denoiser. Real-world tests with complex challenges also prove the effectiveness and practicality of CGDenoiser. Code, video demo and supplementary proof for CGDenoiser are now available at: \url{https://github.com/vision4robotics/CGDenoiser}.<|reference_end|>
|
arxiv
|
@article{wang2024conditional,
title={Conditional Generative Denoiser for Nighttime UAV Tracking},
author={Yucheng Wang and Changhong Fu and Kunhan Lu and Liangliang Yao and Haobo Zuo},
journal={Proceedings of the IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2024)},
year={2024},
archivePrefix={arXiv},
eprint={2409.16834},
primaryClass={cs.RO}
}
|
wang2024conditional
|
arxiv-661785
|
2409.16837
|
Demo2Vec: Learning Region Embedding with Demographic Information
|
<|reference_start|>Demo2Vec: Learning Region Embedding with Demographic Information: Demographic data, such as income, education level, and employment rate, contain valuable information of urban regions, yet few studies have integrated demographic information to generate region embedding. In this study, we show how the simple and easy-to-access demographic data can improve the quality of state-of-the-art region embedding and provide better predictive performances in urban areas across three common urban tasks, namely check-in prediction, crime rate prediction, and house price prediction. We find that existing pre-train methods based on KL divergence are potentially biased towards mobility information and propose to use Jensen-Shannon divergence as a more appropriate loss function for multi-view representation learning. Experimental results from both New York and Chicago show that mobility + income is the best pre-train data combination, providing up to 10.22\% better predictive performances than existing models. Considering that mobility big data can be hard to access in many developing cities, we suggest geographic proximity + income to be a simple but effective data combination for region embedding pre-training.<|reference_end|>
|
arxiv
|
@article{wen2024demo2vec:,
title={Demo2Vec: Learning Region Embedding with Demographic Information},
author={Ya Wen and Yulun Zhou},
journal={arXiv preprint arXiv:2409.16837},
year={2024},
archivePrefix={arXiv},
eprint={2409.16837},
primaryClass={cs.LG cs.CY}
}
|
wen2024demo2vec:
|
arxiv-661786
|
2409.16838
|
Explicitly Modeling Pre-Cortical Vision with a Neuro-Inspired Front-End Improves CNN Robustness
|
<|reference_start|>Explicitly Modeling Pre-Cortical Vision with a Neuro-Inspired Front-End Improves CNN Robustness: While convolutional neural networks (CNNs) excel at clean image classification, they struggle to classify images corrupted with different common corruptions, limiting their real-world applicability. Recent work has shown that incorporating a CNN front-end block that simulates some features of the primate primary visual cortex (V1) can improve overall model robustness. Here, we expand on this approach by introducing two novel biologically-inspired CNN model families that incorporate a new front-end block designed to simulate pre-cortical visual processing. RetinaNet, a hybrid architecture containing the novel front-end followed by a standard CNN back-end, shows a relative robustness improvement of 12.3% when compared to the standard model; and EVNet, which further adds a V1 block after the pre-cortical front-end, shows a relative gain of 18.5%. The improvement in robustness was observed for all the different corruption categories, though accompanied by a small decrease in clean image accuracy, and generalized to a different back-end architecture. These findings show that simulating multiple stages of early visual processing in CNN early layers provides cumulative benefits for model robustness.<|reference_end|>
|
arxiv
|
@article{piper2024explicitly,
title={Explicitly Modeling Pre-Cortical Vision with a Neuro-Inspired Front-End
Improves CNN Robustness},
author={Lucas Piper and Arlindo L. Oliveira and Tiago Marques},
journal={arXiv preprint arXiv:2409.16838},
year={2024},
archivePrefix={arXiv},
eprint={2409.16838},
primaryClass={cs.CV q-bio.NC}
}
|
piper2024explicitly
|
arxiv-661787
|
2409.16840
|
Modeling the Modqueue: Towards Understanding and Improving Report Resolution on Reddit
|
<|reference_start|>Modeling the Modqueue: Towards Understanding and Improving Report Resolution on Reddit: There are three common stages in the moderation process employed by platforms like Reddit: rule creation, reporting/triaging, and report resolution. While the first two stages are well-studied in HCI, the third stage remains under-explored. Directly observing report resolution is challenging, since it requires using invasive tracking tools that moderators may feel uncomfortable with. However, evaluating the current state of this stage is crucial to improve moderation outcomes, especially as online communities continue to grow. In this paper, we present a non-invasive methodology to study report resolution via modeling and simulations. Using agent-based modeling, we analyze the performance of report resolution on Reddit using theory-driven measures and use our results to motivate interventions. We then highlight potential improvements that can be gained by adopting these interventions. We conclude by discussing how modeling and simulations can be used to navigate processes like report resolution and inform the design of new moderation interventions.<|reference_end|>
|
arxiv
|
@article{bajpai2024modeling,
title={Modeling the Modqueue: Towards Understanding and Improving Report
Resolution on Reddit},
author={Tanvi Bajpai and Eshwar Chandrasekharan},
journal={arXiv preprint arXiv:2409.16840},
year={2024},
archivePrefix={arXiv},
eprint={2409.16840},
primaryClass={cs.HC}
}
|
bajpai2024modeling
|
arxiv-661788
|
2409.16843
|
Optimal starting point for time series forecasting
|
<|reference_start|>Optimal starting point for time series forecasting: Recent advances in time series forecasting mainly focus on improving the forecasting models themselves. However, managing the length of the input data can also significantly enhance prediction performance. In this paper, we introduce a novel approach called Optimal Starting Point Time Series Forecast (OSP-TSP) to capture the intrinsic characteristics of time series data. By adjusting the sequence length via leveraging the XGBoost and LightGBM models, the proposed approach can determine the optimal starting point (OSP) of the time series and thus enhance the prediction performances. The performances of the OSP-TSP approach are then evaluated across various frequencies on the M4 dataset and other real-world datasets. Empirical results indicate that predictions based on the OSP-TSP approach consistently outperform those using the complete dataset. Moreover, recognizing the necessity of sufficient data to effectively train models for OSP identification, we further propose targeted solutions to address the issue of data insufficiency.<|reference_end|>
|
arxiv
|
@article{zhong2024optimal,
title={Optimal starting point for time series forecasting},
author={Yiming Zhong and Yinuo Ren and Guangyao Cao and Feng Li and Haobo Qi},
journal={arXiv preprint arXiv:2409.16843},
year={2024},
archivePrefix={arXiv},
eprint={2409.16843},
primaryClass={stat.AP cs.LG}
}
|
zhong2024optimal
|
arxiv-661789
|
2409.16845
|
IRASNet: Improved Feature-Level Clutter Reduction for Domain Generalized SAR-ATR
|
<|reference_start|>IRASNet: Improved Feature-Level Clutter Reduction for Domain Generalized SAR-ATR: Recently, computer-aided design models and electromagnetic simulations have been used to augment synthetic aperture radar (SAR) data for deep learning. However, an automatic target recognition (ATR) model struggles with domain shift when using synthetic data because the model learns specific clutter patterns present in such data, which disturbs performance when applied to measured data with different clutter distributions. This study proposes a framework particularly designed for domain-generalized SAR-ATR called IRASNet, enabling effective feature-level clutter reduction and domain-invariant feature learning. First, we propose a clutter reduction module (CRM) that maximizes the signal-to-clutter ratio on feature maps. The module reduces the impact of clutter at the feature level while preserving target and shadow information, thereby improving ATR performance. Second, we integrate adversarial learning with CRM to extract clutter-reduced domain-invariant features. The integration bridges the gap between synthetic and measured datasets without requiring measured data during training. Third, we improve feature extraction from target and shadow regions by implementing a positional supervision task using mask ground truth encoding. The improvement enhances the ability of the model to discriminate between classes. Our proposed IRASNet sets a new state of the art on public SAR datasets by utilizing target and shadow information, achieving superior performance across various test conditions. IRASNet not only enhances generalization performance but also significantly improves feature-level clutter reduction, making it a valuable advancement in the field of radar image pattern recognition.<|reference_end|>
|
arxiv
|
@article{jang2024irasnet:,
title={IRASNet: Improved Feature-Level Clutter Reduction for Domain Generalized
SAR-ATR},
author={Oh-Tae Jang and Hae-Kang Song and Min-Jun Kim and Kyung-Hwan Lee and
Geon Lee and Sung-Ho Kim and Hee-Sub Shin and Jae-Woo Ok and Min-Young Back
and Jae-Hyuk Yoon and Kyung-Tae Kim},
journal={arXiv preprint arXiv:2409.16845},
year={2024},
archivePrefix={arXiv},
eprint={2409.16845},
primaryClass={cs.CV}
}
|
jang2024irasnet:
|
arxiv-661790
|
2409.16847
|
CREVE: An Acceleration-based Constraint Approach for Robust Radar Ego-Velocity Estimation
|
<|reference_start|>CREVE: An Acceleration-based Constraint Approach for Robust Radar Ego-Velocity Estimation: Ego-velocity estimation from point cloud measurements of a millimeter-wave frequency-modulated continuous wave (mmWave FMCW) radar has become a crucial component of radar-inertial odometry (RIO) systems. Conventional approaches often perform poorly when the number of point cloud outliers exceeds that of inliers. In this paper, we propose CREVE, an acceleration-based inequality constraints filter that leverages additional measurements from an inertial measurement unit (IMU) to achieve robust ego-velocity estimations. To further enhance accuracy and robustness against sensor errors, we introduce a practical accelerometer bias estimation method and a parameter adaptation rule. The effectiveness of the proposed method is evaluated using five open-source drone datasets. Experimental results demonstrate that our algorithm significantly outperforms three existing state-of-the-art methods, achieving reductions in absolute trajectory error of approximately 53%, 84%, and 35% compared to them.<|reference_end|>
|
arxiv
|
@article{do2024creve:,
title={CREVE: An Acceleration-based Constraint Approach for Robust Radar
Ego-Velocity Estimation},
author={Hoang Viet Do, Bo Sung Ko, and Jin Woo Song},
journal={arXiv preprint arXiv:2409.16847},
year={2024},
archivePrefix={arXiv},
eprint={2409.16847},
primaryClass={cs.RO}
}
|
do2024creve:
|
arxiv-661791
|
2409.16849
|
Exposing Assumptions in AI Benchmarks through Cognitive Modelling
|
<|reference_start|>Exposing Assumptions in AI Benchmarks through Cognitive Modelling: Cultural AI benchmarks often rely on implicit assumptions about measured constructs, leading to vague formulations with poor validity and unclear interrelations. We propose exposing these assumptions using explicit cognitive models formulated as Structural Equation Models. Using cross-lingual alignment transfer as an example, we show how this approach can answer key research questions and identify missing datasets. This framework grounds benchmark construction theoretically and guides dataset development to improve construct measurement. By embracing transparency, we move towards more rigorous, cumulative AI evaluation science, challenging researchers to critically examine their assessment foundations.<|reference_end|>
|
arxiv
|
@article{rystrøm2024exposing,
title={Exposing Assumptions in AI Benchmarks through Cognitive Modelling},
author={Jonathan H. Rystr{\o}m and Kenneth C. Enevoldsen},
journal={arXiv preprint arXiv:2409.16849},
year={2024},
archivePrefix={arXiv},
eprint={2409.16849},
primaryClass={cs.AI cs.CL}
}
|
rystrøm2024exposing
|
arxiv-661792
|
2409.16850
|
Robust Scene Change Detection Using Visual Foundation Models and Cross-Attention Mechanisms
|
<|reference_start|>Robust Scene Change Detection Using Visual Foundation Models and Cross-Attention Mechanisms: We present a novel method for scene change detection that leverages the robust feature extraction capabilities of a visual foundational model, DINOv2, and integrates full-image cross-attention to address key challenges such as varying lighting, seasonal variations, and viewpoint differences. In order to effectively learn correspondences and mis-correspondences between an image pair for the change detection task, we propose to a) ``freeze'' the backbone in order to retain the generality of dense foundation features, and b) employ ``full-image'' cross-attention to better tackle the viewpoint variations between the image pair. We evaluate our approach on two benchmark datasets, VL-CMU-CD and PSCD, along with their viewpoint-varied versions. Our experiments demonstrate significant improvements in F1-score, particularly in scenarios involving geometric changes between image pairs. The results indicate our method's superior generalization capabilities over existing state-of-the-art approaches, showing robustness against photometric and geometric variations as well as better overall generalization when fine-tuned to adapt to new environments. Detailed ablation studies further validate the contributions of each component in our architecture. Source code will be made publicly available upon acceptance.<|reference_end|>
|
arxiv
|
@article{lin2024robust,
title={Robust Scene Change Detection Using Visual Foundation Models and
Cross-Attention Mechanisms},
author={Chun-Jung Lin, Sourav Garg, Tat-Jun Chin, Feras Dayoub},
journal={arXiv preprint arXiv:2409.16850},
year={2024},
archivePrefix={arXiv},
eprint={2409.16850},
primaryClass={cs.CV}
}
|
lin2024robust
|
arxiv-661793
|
2409.16851
|
Communication Backbone Reconfiguration with Connectivity Maintenance
|
<|reference_start|>Communication Backbone Reconfiguration with Connectivity Maintenance: The exchange of information is key in applications that involve multiple agents, such as search and rescue, military operations, and disaster response. In this work, we propose a simple and effective trajectory planning framework that tackles the design, deployment, and reconfiguration of a communication backbone by reframing the problem of networked multi-agent motion planning as a manipulator motion planning problem. Our approach works for backbones of variable configurations both in terms of the number of robots utilized and the distance limit between each robot. While research has been conducted on connection-restricted navigation for multi-robot systems in the last years, the field of manipulators is arguably more developed both in theory and practice. Hence, our methodology facilitates practical applications built on top of widely available motion planning algorithms and frameworks for manipulators.<|reference_end|>
|
arxiv
|
@article{santos2024communication,
title={Communication Backbone Reconfiguration with Connectivity Maintenance},
author={Leonardo Santos, Caio C. G. Ribeiro, Douglas G. Macharet},
journal={arXiv preprint arXiv:2409.16851},
year={2024},
archivePrefix={arXiv},
eprint={2409.16851},
primaryClass={cs.RO}
}
|
santos2024communication
|
arxiv-661794
|
2409.16854
|
Dispute resolution in legal mediation with quantitative argumentation
|
<|reference_start|>Dispute resolution in legal mediation with quantitative argumentation: Mediation is often treated as an extension of negotiation, without taking into account the unique role that norms and facts play in legal mediation. Additionally, current approaches for updating argument acceptability in response to changing variables frequently require the introduction of new arguments or the removal of existing ones, which can be inefficient and cumbersome in decision-making processes within legal disputes. In this paper, our contribution is two-fold. First, we introduce a QuAM (Quantitative Argumentation Mediate) framework, which integrates the parties' knowledge and the mediator's knowledge, including facts and legal norms, when determining the acceptability of a mediation goal. Second, we develop a new formalism to model the relationship between the acceptability of a goal argument and the values assigned to a variable associated with the argument. We use a real-world legal mediation as a running example to illustrate our approach.<|reference_end|>
|
arxiv
|
@article{chi2024dispute,
title={Dispute resolution in legal mediation with quantitative argumentation},
author={Xiao Chi},
journal={arXiv preprint arXiv:2409.16854},
year={2024},
archivePrefix={arXiv},
eprint={2409.16854},
primaryClass={cs.AI}
}
|
chi2024dispute
|
arxiv-661795
|
2409.16855
|
A Versatile and Differentiable Hand-Object Interaction Representation
|
<|reference_start|>A Versatile and Differentiable Hand-Object Interaction Representation: Synthesizing accurate hand-object interactions (HOI) is critical for applications in Computer Vision, Augmented Reality (AR), and Mixed Reality (MR). Despite recent advances, the accuracy of reconstructed or generated HOI leaves room for refinement. Some techniques have improved the accuracy of dense correspondences by shifting focus from generating explicit contacts to using rich HOI fields. Still, they lack full differentiability or continuity and are tailored to specific tasks. In contrast, we present a Coarse Hand-Object Interaction Representation (CHOIR), a novel, versatile and fully differentiable field for HOI modelling. CHOIR leverages discrete unsigned distances for continuous shape and pose encoding, alongside multivariate Gaussian distributions to represent dense contact maps with few parameters. To demonstrate the versatility of CHOIR, we design JointDiffusion, a diffusion model to learn a grasp distribution conditioned on noisy hand-object interactions or only object geometries, for both refinement and synthesis applications. We demonstrate JointDiffusion's improvements over the SOTA in both applications: it increases the contact F1 score by $5\%$ for refinement and decreases the sim. displacement by $46\%$ for synthesis. Our experiments show that JointDiffusion with CHOIR yield superior contact accuracy and physical realism compared to SOTA methods designed for specific tasks. Our models and code will be publicly available to the research community.<|reference_end|>
|
arxiv
|
@article{morales2024a,
title={A Versatile and Differentiable Hand-Object Interaction Representation},
author={Th\'eo Morales, Omid Taheri, Gerard Lacey},
journal={arXiv preprint arXiv:2409.16855},
year={2024},
archivePrefix={arXiv},
eprint={2409.16855},
primaryClass={cs.CV}
}
|
morales2024a
|
arxiv-661796
|
2409.16856
|
Comparison of Atom Detection Algorithms for Neutral Atom Quantum Computing
|
<|reference_start|>Comparison of Atom Detection Algorithms for Neutral Atom Quantum Computing: In neutral atom quantum computers, readout and preparation of the atomic qubits are usually based on fluorescence imaging and subsequent analysis of the acquired image. For each atom site, the brightness or some comparable metric is estimated and used to predict the presence or absence of an atom. Across different setups, we can see a vast number of different approaches used to analyze these images. Often, the choice of detection algorithm is either not mentioned at all or it is not justified. We investigate several different algorithms and compare their performance in terms of both precision and execution run time. To do so, we rely on a set of synthetic images across different simulated exposure times with known occupancy states. Since the use of simulation provides us with the ground truth of atom site occupancy, we can easily state precise error rates and variances of the reconstructed property. To also rule out the possibility of better algorithms existing, we calculated the Cram\'er-Rao bound in order to establish an upper limit that even a perfect estimator cannot outperform. As the metric of choice, we used the number of photoelectrons that can be attributed to a specific atom site. Since the bound depends on the occupancy of neighboring sites, we provide the best and worst cases, as well as a half-filled one. Our comparison shows that of our tested algorithms, a global non-linear least-squares solver that uses the optical system's PSF to return each site's number of photoelectrons performed the best, on average crossing the worst-case bound for longer exposure times. Its main drawback is its huge computational complexity and, thus, required calculation time. We manage to somewhat reduce this problem, suggesting that its use may be viable. However, our study also shows that for cases where utmost speed is required, simple algorithms may be preferable.<|reference_end|>
|
arxiv
|
@article{winklmann2024comparison,
title={Comparison of Atom Detection Algorithms for Neutral Atom Quantum
Computing},
author={Jonas Winklmann, Andrea Alberti, Martin Schulz},
journal={2024 IEEE International Conference on Quantum Computing and
Engineering (QCE), Montreal, QC, Canada, 2024, pp. 1048-1057},
year={2024},
doi={10.1109/QCE60285.2024.00124},
archivePrefix={arXiv},
eprint={2409.16856},
primaryClass={quant-ph cs.SE}
}
|
winklmann2024comparison
|
arxiv-661797
|
2409.16860
|
The Role of Language Models in Modern Healthcare: A Comprehensive Review
|
<|reference_start|>The Role of Language Models in Modern Healthcare: A Comprehensive Review: The application of large language models (LLMs) in healthcare has gained significant attention due to their ability to process complex medical data and provide insights for clinical decision-making. These models have demonstrated substantial capabilities in understanding and generating natural language, which is crucial for medical documentation, diagnostics, and patient interaction. This review examines the trajectory of language models from their early stages to the current state-of-the-art LLMs, highlighting their strengths in healthcare applications and discussing challenges such as data privacy, bias, and ethical considerations. The potential of LLMs to enhance healthcare delivery is explored, alongside the necessary steps to ensure their ethical and effective integration into medical practice.<|reference_end|>
|
arxiv
|
@article{khalid2024the,
title={The Role of Language Models in Modern Healthcare: A Comprehensive Review},
author={Amna Khalid, Ayma Khalid, Umar Khalid},
journal={arXiv preprint arXiv:2409.16860},
year={2024},
archivePrefix={arXiv},
eprint={2409.16860},
primaryClass={cs.CV cs.AI cs.CL}
}
|
khalid2024the
|
arxiv-661798
|
2409.16861
|
Limitations of (Procrustes) Alignment in Assessing Multi-Person Human Pose and Shape Estimation
|
<|reference_start|>Limitations of (Procrustes) Alignment in Assessing Multi-Person Human Pose and Shape Estimation: We delve into the challenges of accurately estimating 3D human pose and shape in video surveillance scenarios. Beginning with the advocacy for metrics like W-MPJPE and W-PVE, which omit the (Procrustes) realignment step, to improve model evaluation, we then introduce RotAvat. This technique aims to enhance these metrics by refining the alignment of 3D meshes with the ground plane. Through qualitative comparisons, we demonstrate RotAvat's effectiveness in addressing the limitations of existing approaches.<|reference_end|>
|
arxiv
|
@article{martin2024limitations,
title={Limitations of (Procrustes) Alignment in Assessing Multi-Person Human
Pose and Shape Estimation},
author={Drazic Martin and Pierre Perrault},
journal={arXiv preprint arXiv:2409.16861},
year={2024},
archivePrefix={arXiv},
eprint={2409.16861},
primaryClass={cs.CV cs.GR}
}
|
martin2024limitations
|
arxiv-661799
|
2409.16862
|
Behavior evolution-inspired approach to walking gait reinforcement training for quadruped robots
|
<|reference_start|>Behavior evolution-inspired approach to walking gait reinforcement training for quadruped robots: Reinforcement learning method is extremely competitive in gait generation techniques for quadrupedal robot, which is mainly due to the fact that stochastic exploration in reinforcement training is beneficial to achieve an autonomous gait. Nevertheless, although incremental reinforcement learning is employed to improve training success and movement smoothness by relying on the continuity inherent during limb movements, challenges remain in adapting gait policy to diverse terrain and external disturbance. Inspired by the association between reinforcement learning and the evolution of animal motion behavior, a self-improvement mechanism for reference gait is introduced in this paper to enable incremental learning of action and self-improvement of reference action together to imitate the evolution of animal motion behavior. Further, a new framework for reinforcement training of quadruped gait is proposed. In this framework, genetic algorithm is specifically adopted to perform global probabilistic search for the initial value of the arbitrary foot trajectory to update the reference trajectory with better fitness. Subsequently, the improved reference gait is used for incremental reinforcement learning of gait. The above process is repeatedly and alternately executed to finally train the gait policy. The analysis considering terrain, model dimensions, and locomotion condition is presented in detail based on simulation, and the results show that the framework is significantly more adaptive to terrain compared to regular incremental reinforcement learning.<|reference_end|>
|
arxiv
|
@article{wang2024behavior,
title={Behavior evolution-inspired approach to walking gait reinforcement
training for quadruped robots},
author={Yu Wang, Wenchuan Jia, Yi Sun and Dong He},
journal={arXiv preprint arXiv:2409.16862},
year={2024},
archivePrefix={arXiv},
eprint={2409.16862},
primaryClass={cs.RO}
}
|
wang2024behavior
|
arxiv-661800
|
2409.16863
|
Towards Unified 3D Hair Reconstruction from Single-View Portraits
|
<|reference_start|>Towards Unified 3D Hair Reconstruction from Single-View Portraits: Single-view 3D hair reconstruction is challenging, due to the wide range of shape variations among diverse hairstyles. Current state-of-the-art methods are specialized in recovering un-braided 3D hairs and often take braided styles as their failure cases, because of the inherent difficulty to define priors for complex hairstyles, whether rule-based or data-based. We propose a novel strategy to enable single-view 3D reconstruction for a variety of hair types via a unified pipeline. To achieve this, we first collect a large-scale synthetic multi-view hair dataset SynMvHair with diverse 3D hair in both braided and un-braided styles, and learn two diffusion priors specialized on hair. Then we optimize 3D Gaussian-based hair from the priors with two specially designed modules, i.e. view-wise and pixel-wise Gaussian refinement. Our experiments demonstrate that reconstructing braided and un-braided 3D hair from single-view images via a unified approach is possible and our method achieves the state-of-the-art performance in recovering complex hairstyles. It is worth mentioning that our method shows good generalization ability to real images, although it learns hair priors from synthetic data.<|reference_end|>
|
arxiv
|
@article{zheng2024towards,
title={Towards Unified 3D Hair Reconstruction from Single-View Portraits},
author={Yujian Zheng, Yuda Qiu, Leyang Jin, Chongyang Ma, Haibin Huang, Di
Zhang, Pengfei Wan and Xiaoguang Han},
journal={arXiv preprint arXiv:2409.16863},
year={2024},
archivePrefix={arXiv},
eprint={2409.16863},
primaryClass={cs.CV}
}
|
zheng2024towards
|