corpus_id (7-12 chars) | paper_id (9-16 chars) | title (1-261 chars) | abstract (70-4.02k chars) | source (1 class: "arxiv") | bibtex (208-20.9k chars) | citation_key (6-100 chars)
---|---|---|---|---|---|---|
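Each record below is one arXiv row in the schema above; the bibtex cell holds a full @article entry as plain text. A minimal parsing sketch, assuming each bibtex cell is available as a single Python string (the regexes here are illustrative, not a full BibTeX parser):

```python
# Illustrative only: recover the citation key and simple fields from one
# "bibtex" cell of this corpus, assuming the cell is a plain @article string.
import re

def parse_bibtex(entry: str) -> dict:
    """Extract the citation key and brace-delimited fields from one entry."""
    key = re.match(r"@article\{([^,]+),", entry)
    fields = dict(re.findall(r"(\w+)=\{([^{}]*)\}", entry))
    return {"citation_key": key.group(1) if key else None, **fields}

sample = "@article{paul2024parameter-efficient,\n  title={...},\n  year={2024}\n}"
print(parse_bibtex(sample))  # {'citation_key': 'paul2024parameter-efficient', ...}
```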
arxiv-661901
|
2409.17085
|
Parameter-efficient Bayesian Neural Networks for Uncertainty-aware Depth Estimation
|
<|reference_start|>Parameter-efficient Bayesian Neural Networks for Uncertainty-aware Depth Estimation: State-of-the-art computer vision tasks, like monocular depth estimation (MDE), rely heavily on large, modern Transformer-based architectures. However, their application in safety-critical domains demands reliable predictive performance and uncertainty quantification. While Bayesian neural networks provide a conceptually simple approach to serve those requirements, they suffer from the high dimensionality of the parameter space. Parameter-efficient fine-tuning (PEFT) methods, in particular low-rank adaptations (LoRA), have emerged as a popular strategy for adapting large-scale models to down-stream tasks by performing parameter inference on lower-dimensional subspaces. In this work, we investigate the suitability of PEFT methods for subspace Bayesian inference in large-scale Transformer-based vision models. We show that, indeed, combining BitFit, DiffFit, LoRA, and CoLoRA, a novel LoRA-inspired PEFT method, with Bayesian inference enables more robust and reliable predictive performance in MDE.<|reference_end|>
|
arxiv
|
@article{paul2024parameter-efficient,
title={Parameter-efficient Bayesian Neural Networks for Uncertainty-aware Depth
Estimation},
author={Richard D. Paul and Alessio Quercia and Vincent Fortuin and Katharina
N\"oh and Hanno Scharr},
journal={arXiv preprint arXiv:2409.17085},
year={2024},
archivePrefix={arXiv},
eprint={2409.17085},
primaryClass={cs.CV stat.ML}
}
|
paul2024parameter-efficient
|
arxiv-661902
|
2409.17087
|
SEN12-WATER: A New Dataset for Hydrological Applications and its Benchmarking
|
<|reference_start|>SEN12-WATER: A New Dataset for Hydrological Applications and its Benchmarking: Climate change and increasing droughts pose significant challenges to water resource management around the world. These problems lead to severe water shortages that threaten ecosystems, agriculture, and human communities. To advance the fight against these challenges, we present a new dataset, SEN12-WATER, along with a benchmark using a novel end-to-end Deep Learning (DL) framework for proactive drought-related analysis. The dataset, identified as a spatiotemporal datacube, integrates SAR polarization, elevation, slope, and multispectral optical bands. Our DL framework enables the analysis and estimation of water losses over time in reservoirs of interest, revealing significant insights into water dynamics for drought analysis by examining temporal changes in physical quantities such as water volume. Our methodology takes advantage of the multitemporal and multimodal characteristics of the proposed dataset, enabling robust generalization and advancing understanding of drought, contributing to climate change resilience and sustainable water resource management. The proposed framework involves, among its several components, speckle noise removal from SAR data, water body segmentation through a U-Net architecture, time series analysis, and the predictive capability of a Time-Distributed Convolutional Neural Network (TD-CNN). Results are validated through ground truth data acquired on the ground via dedicated sensors and (tailored) metrics, such as Precision, Recall, Intersection over Union, Mean Squared Error, Structural Similarity Index Measure and Peak Signal-to-Noise Ratio.<|reference_end|>
|
arxiv
|
@article{russo2024sen12-water:,
title={SEN12-WATER: A New Dataset for Hydrological Applications and its
Benchmarking},
author={Luigi Russo and Francesco Mauro and Alessandro Sebastianelli and
Paolo Gamba and Silvia Liberata Ullo},
journal={arXiv preprint arXiv:2409.17087},
year={2024},
archivePrefix={arXiv},
eprint={2409.17087},
primaryClass={eess.IV cs.AI cs.LG}
}
|
russo2024sen12-water:
|
arxiv-661903
|
2409.17088
|
Textoshop: Interactions Inspired by Drawing Software to Facilitate Text Editing
|
<|reference_start|>Textoshop: Interactions Inspired by Drawing Software to Facilitate Text Editing: We explore how interactions inspired by drawing software can help edit text. Making an analogy between visual and text editing, we consider words as pixels, sentences as regions, and tones as colours. For instance, direct manipulations move, shorten, expand, and reorder text; tools change number, tense, and grammar; colours map to tones explored along three dimensions in a tone picker; and layers help organize and version text. This analogy also leads to new workflows, such as boolean operations on text fragments to construct more elaborate text. A study shows participants were more successful at editing text and preferred using the proposed interface over existing solutions. Broadly, our work highlights the potential of interaction analogies to rethink existing workflows, while capitalizing on familiar features.<|reference_end|>
|
arxiv
|
@article{masson2024textoshop:,
title={Textoshop: Interactions Inspired by Drawing Software to Facilitate Text
Editing},
author={Damien Masson and Young-Ho Kim and Fanny Chevalier},
journal={arXiv preprint arXiv:2409.17088},
year={2024},
archivePrefix={arXiv},
eprint={2409.17088},
primaryClass={cs.HC}
}
|
masson2024textoshop:
|
arxiv-661904
|
2409.17090
|
Locally Regularized Sparse Graph by Fast Proximal Gradient Descent
|
<|reference_start|>Locally Regularized Sparse Graph by Fast Proximal Gradient Descent: Sparse graphs built by sparse representation have been demonstrated to be effective in clustering high-dimensional data. Despite the compelling empirical performance, the vanilla sparse graph ignores the geometric information of the data by performing sparse representation for each datum separately. In order to obtain a sparse graph aligned with the local geometric structure of data, we propose a novel Support Regularized Sparse Graph, abbreviated as SRSG, for data clustering. SRSG encourages local smoothness on the neighborhoods of nearby data points by a well-defined support regularization term. We propose a fast proximal gradient descent method to solve the non-convex optimization problem of SRSG with the convergence matching Nesterov's optimal convergence rate of first-order methods on smooth and convex objective functions with Lipschitz continuous gradient. Extensive experimental results on various real data sets demonstrate the superiority of SRSG over other competing clustering methods.<|reference_end|>
|
arxiv
|
@article{sun2024locally,
title={Locally Regularized Sparse Graph by Fast Proximal Gradient Descent},
author={Dongfang Sun and Yingzhen Yang},
journal={arXiv preprint arXiv:2409.17090},
year={2024},
archivePrefix={arXiv},
eprint={2409.17090},
primaryClass={cs.LG math.OC}
}
|
sun2024locally
|
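The SRSG entry above builds on proximal gradient solvers for sparse representation. A hedged illustration of that solver family, generic FISTA for l1-regularized least squares, not the authors' SRSG objective or code:

```python
# A minimal sketch of accelerated proximal gradient descent (FISTA) for
# 0.5*||A x - b||^2 + lam*||x||_1, the kind of subproblem sparse graphs use.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 with Nesterov acceleration."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - b) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)   # extrapolation step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A, b = rng.normal(size=(30, 50)), rng.normal(size=30)
print(np.count_nonzero(np.round(fista(A, b, lam=0.5), 6)))  # sparse solution
```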
arxiv-661905
|
2409.17091
|
Ctrl-GenAug: Controllable Generative Augmentation for Medical Sequence Classification
|
<|reference_start|>Ctrl-GenAug: Controllable Generative Augmentation for Medical Sequence Classification: In the medical field, the limited availability of large-scale datasets and labor-intensive annotation processes hinder the performance of deep models. Diffusion-based generative augmentation approaches present a promising solution to this issue, having been proven effective in advancing downstream medical recognition tasks. Nevertheless, existing works lack sufficient semantic and sequential steerability for challenging video/3D sequence generation, and neglect quality control of noisy synthesized samples, resulting in unreliable synthetic databases and severely limiting the performance of downstream tasks. In this work, we present Ctrl-GenAug, a novel and general generative augmentation framework that enables highly semantic- and sequential-customized sequence synthesis and suppresses incorrectly synthesized samples, to aid medical sequence classification. Specifically, we first design a multimodal conditions-guided sequence generator for controllably synthesizing diagnosis-promotive samples. A sequential augmentation module is integrated to enhance the temporal/stereoscopic coherence of generated samples. Then, we propose a noisy synthetic data filter to suppress unreliable cases at semantic and sequential levels. Extensive experiments on 3 medical datasets, using 11 networks trained on 3 paradigms, comprehensively analyze the effectiveness and generality of Ctrl-GenAug, particularly in underrepresented high-risk populations and out-domain conditions.<|reference_end|>
|
arxiv
|
@article{zhou2024ctrl-genaug:,
title={Ctrl-GenAug: Controllable Generative Augmentation for Medical Sequence
Classification},
author={Xinrui Zhou and Yuhao Huang and Haoran Dou and Shijing Chen and Ao
Chang and Jia Liu and Weiran Long and Jian Zheng and Erjiao Xu and Jie Ren and
Ruobing Huang and Jun Cheng and Wufeng Xue and Dong Ni},
journal={arXiv preprint arXiv:2409.17091},
year={2024},
archivePrefix={arXiv},
eprint={2409.17091},
primaryClass={cs.CV cs.AI cs.LG}
}
|
zhou2024ctrl-genaug:
|
arxiv-661906
|
2409.17092
|
Accumulator-Aware Post-Training Quantization
|
<|reference_start|>Accumulator-Aware Post-Training Quantization: Several recent studies have investigated low-precision accumulation, reporting improvements in throughput, power, and area across various platforms. However, the accompanying proposals have only considered the quantization-aware training (QAT) paradigm, in which models are fine-tuned or trained from scratch with quantization in the loop. As models continue to grow in size, QAT techniques become increasingly more expensive, which has motivated the recent surge in post-training quantization (PTQ) research. To the best of our knowledge, ours marks the first formal study of accumulator-aware quantization in the PTQ setting. To bridge this gap, we introduce AXE, a practical framework of accumulator-aware extensions designed to endow existing layer-wise PTQ algorithms with overflow avoidance guarantees. We theoretically motivate AXE and demonstrate its flexibility by implementing it on top of two state-of-the-art PTQ algorithms: GPFQ and OPTQ. We further generalize AXE to support multi-stage accumulation for the first time, opening the door for full datapath optimization and scaling to large language models (LLMs). We evaluate AXE across image classification and language generation models, and observe significant improvements in the trade-off between accumulator bit width and model accuracy over baseline methods.<|reference_end|>
|
arxiv
|
@article{colbert2024accumulator-aware,
title={Accumulator-Aware Post-Training Quantization},
author={Ian Colbert and Fabian Grob and Giuseppe Franco and Jinjie Zhang and
Rayan Saab},
journal={arXiv preprint arXiv:2409.17092},
year={2024},
archivePrefix={arXiv},
eprint={2409.17092},
primaryClass={cs.LG cs.AI cs.DM}
}
|
colbert2024accumulator-aware
|
arxiv-661907
|
2409.17093
|
BitQ: Tailoring Block Floating Point Precision for Improved DNN Efficiency on Resource-Constrained Devices
|
<|reference_start|>BitQ: Tailoring Block Floating Point Precision for Improved DNN Efficiency on Resource-Constrained Devices: Deep neural networks (DNNs) are powerful for cognitive tasks such as image classification, object detection, and scene segmentation. One drawback, however, is their significantly high computational complexity and memory consumption, which make them infeasible to run in real time on embedded platforms because of the limited hardware resources. Block floating point (BFP) quantization is one of the representative compression approaches for reducing the memory and computational burden owing to its capability to effectively capture the broad data distribution of DNN models. Unfortunately, prior works on BFP-based quantization empirically choose the block size and the precision that preserve accuracy. In this paper, we develop a BFP-based bitwidth-aware analytical modeling framework (called ``BitQ'') for the best BFP implementation of DNN inference on embedded platforms. We formulate and resolve an optimization problem to identify the optimal BFP block size and bitwidth distribution by the trade-off of both accuracy and performance loss. Experimental results show that compared with an equal bitwidth setting, the BFP DNNs with optimized bitwidth allocation provide efficient computation, preserving accuracy on well-known benchmarks. The source code and data are available at https://github.com/Cheliosoops/BitQ.<|reference_end|>
|
arxiv
|
@article{xu2024bitq:,
title={BitQ: Tailoring Block Floating Point Precision for Improved DNN
Efficiency on Resource-Constrained Devices},
author={Yongqi Xu and Yujian Lee and Gao Yi and Bosheng Liu and Yucong Chen
and Peng Liu and Jigang Wu and Xiaoming Chen and Yinhe Han},
journal={arXiv preprint arXiv:2409.17093},
year={2024},
archivePrefix={arXiv},
eprint={2409.17093},
primaryClass={cs.CV}
}
|
xu2024bitq:
|
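For the BitQ entry above, block floating point shares one exponent across each block of values while keeping low-bit per-value mantissas. A minimal sketch under assumed parameters (block size 8, 4-bit mantissas; not the paper's implementation):

```python
# Hedged illustration of block floating point (BFP): values in a block share
# one power-of-two scale; block size and mantissa width are the knobs BitQ
# optimizes, and the numbers here are arbitrary.
import numpy as np

def bfp_quantize(x, block_size=8, mantissa_bits=4):
    """Quantize a 1-D tensor block-wise with a shared power-of-two scale."""
    out = np.empty_like(x, dtype=np.float64)
    for i in range(0, len(x), block_size):
        block = x[i:i + block_size]
        shared_exp = np.ceil(np.log2(np.max(np.abs(block)) + 1e-12))
        scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
        out[i:i + block_size] = np.round(block / scale) * scale
    return out

x = np.random.default_rng(1).normal(size=16)
print(np.max(np.abs(x - bfp_quantize(x))))  # block-wise quantization error
```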
arxiv-661908
|
2409.17095
|
General Detection-based Text Line Recognition
|
<|reference_start|>General Detection-based Text Line Recognition: We introduce a general detection-based approach to text line recognition, be it printed (OCR) or handwritten (HTR), with Latin, Chinese, or ciphered characters. Detection-based approaches have until now been largely discarded for HTR because reading characters separately is often challenging, and character-level annotation is difficult and expensive. We overcome these challenges thanks to three main insights: (i) synthetic pre-training with sufficiently diverse data enables learning reasonable character localization for any script; (ii) modern transformer-based detectors can jointly detect a large number of instances, and, if trained with an adequate masking strategy, leverage consistency between the different detections; (iii) once a pre-trained detection model with approximate character localization is available, it is possible to fine-tune it with line-level annotation on real data, even with a different alphabet. Our approach, dubbed DTLR, builds on a completely different paradigm than state-of-the-art HTR methods, which rely on autoregressive decoding, predicting character values one by one, while we treat a complete line in parallel. Remarkably, we demonstrate good performance on a large range of scripts, usually tackled with specialized approaches. In particular, we improve state-of-the-art performances for Chinese script recognition on the CASIA v2 dataset, and for cipher recognition on the Borg and Copiale datasets. Our code and models are available at https://github.com/raphael-baena/DTLR.<|reference_end|>
|
arxiv
|
@article{baena2024general,
title={General Detection-based Text Line Recognition},
author={Raphael Baena and Syrine Kalleli and Mathieu Aubry},
journal={arXiv preprint arXiv:2409.17095},
year={2024},
archivePrefix={arXiv},
eprint={2409.17095},
primaryClass={cs.CV}
}
|
baena2024general
|
arxiv-661909
|
2409.17098
|
Pentagon Minimization without Computation
|
<|reference_start|>Pentagon Minimization without Computation: Erd\H{o}s and Guy initiated a line of research studying $\mu_k(n)$, the minimum number of convex $k$-gons one can obtain by placing $n$ points in the plane without any three of them being collinear. Asymptotically, the limits $c_k := \lim_{n\to \infty} \mu_k(n)/\binom{n}{k}$ exist for all $k$, and are strictly positive due to the Erd\H{o}s-Szekeres theorem. This article focuses on the case $k=5$, where $c_5$ was known to be between $0.0608516$ and $0.0625$ (Goaoc et al., 2018; Subercaseaux et al., 2023). The lower bound was obtained through the Flag Algebra method of Razborov using semi-definite programming. In this article we prove a more modest lower bound of $\frac{5\sqrt{5}-11}{4} \approx 0.04508$ without any computation; we exploit ``planar-point equations'' that count, in different ways, the number of convex pentagons (or other geometric objects) in a point placement. To derive our lower bound we combine such equations by viewing them from a statistical perspective, which we believe can be fruitful for other related problems.<|reference_end|>
|
arxiv
|
@article{mackey2024pentagon,
title={Pentagon Minimization without Computation},
author={John Mackey and Bernardo Subercaseaux},
journal={arXiv preprint arXiv:2409.17098},
year={2024},
archivePrefix={arXiv},
eprint={2409.17098},
primaryClass={math.CO cs.CG cs.DM}
}
|
mackey2024pentagon
|
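A quick numeric sanity check of the constants quoted in the abstract above: the computation-free bound (5*sqrt(5) - 11)/4 indeed evaluates to about 0.04508 and sits below both the SDP-based lower bound 0.0608516 and the upper bound 0.0625:

```python
# Verify the ordering of the bounds stated in the Pentagon Minimization entry.
import math

lower_bound = (5 * math.sqrt(5) - 11) / 4
print(round(lower_bound, 5))          # 0.04508
assert 0.04508 < lower_bound < 0.0608516 < 0.0625
```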
arxiv-661910
|
2409.17100
|
Generic Diagonalizability, Structural Functional Observability and Output Controllability
|
<|reference_start|>Generic Diagonalizability, Structural Functional Observability and Output Controllability: This paper investigates the structural functional observability (SFO) and structural output controllability (SOC) of a class of systems with generically diagonalizable state matrices and explores the associated minimal sensor and actuator placement problems. The verification of SOC and the corresponding sensor and actuator placement problems, i.e., the problems of determining the minimum number of outputs and inputs required to achieve SFO and SOC, respectively, are yet open for general systems, which motivates our focus on a class of systems enabling polynomial-time solutions. In this line, we first define and characterize generically diagonalizable systems, referring to structured systems for which almost all realizations of the state matrices are diagonalizable. We then develop computationally efficient criteria for SFO and SOC within the context of generically diagonalizable systems. Our work expands the class of systems amenable to polynomial-time SOC verification. Thanks to the simplicity of the obtained criteria, we derive closed-form solutions for determining the minimal sensor placement to achieve SFO and the minimal actuator deployment to achieve SOC in such systems, along with efficient weighted maximum matching based and weighted maximum flow based algorithms. For more general systems to achieve SFO, an upper bound is given by identifying a non-decreasing property of SFO with respect to a specific class of edge additions, which is shown to be optimal under certain circumstances.<|reference_end|>
|
arxiv
|
@article{zhang2024generic,
title={Generic Diagonalizability, Structural Functional Observability and
Output Controllability},
author={Yuan Zhang and Tyrone Fernando and Mohamed Darouach},
journal={arXiv preprint arXiv:2409.17100},
year={2024},
archivePrefix={arXiv},
eprint={2409.17100},
primaryClass={eess.SY cs.SY}
}
|
zhang2024generic
|
arxiv-661911
|
2409.17104
|
Language-oriented Semantic Communication for Image Transmission with Fine-Tuned Diffusion Model
|
<|reference_start|>Language-oriented Semantic Communication for Image Transmission with Fine-Tuned Diffusion Model: Ubiquitous image transmission in emerging applications brings huge overheads to limited wireless resources. Since text conveys a large amount of information with very little data, transmitting the descriptive text of an image can reduce the amount of transmitted data. In this context, this paper develops a novel semantic communication framework based on a text-2-image generative model (Gen-SC). In particular, a transmitter converts the input image to textual modality data. Then the text is transmitted through a noisy channel to the receiver. The receiver then uses the received text to generate images. Additionally, to improve the robustness of text transmission over noisy channels, we designed a transformer-based text transmission codec model. Moreover, we obtained a personalized knowledge base by fine-tuning the diffusion model to meet the requirements of task-oriented transmission scenarios. Simulation results show that the proposed framework can achieve high perceptual quality while reducing the transmitted data volume by up to 99% and is robust to wireless channel noise in terms of portrait image transmission.<|reference_end|>
|
arxiv
|
@article{wei2024language-oriented,
title={Language-oriented Semantic Communication for Image Transmission with
Fine-Tuned Diffusion Model},
author={Xinfeng Wei and Haonan Tong and Nuocheng Yang and Changchuan Yin},
journal={arXiv preprint arXiv:2409.17104},
year={2024},
archivePrefix={arXiv},
eprint={2409.17104},
primaryClass={cs.MM}
}
|
wei2024language-oriented
|
arxiv-661912
|
2409.17106
|
Text2CAD: Generating Sequential CAD Models from Beginner-to-Expert Level Text Prompts
|
<|reference_start|>Text2CAD: Generating Sequential CAD Models from Beginner-to-Expert Level Text Prompts: Prototyping complex computer-aided design (CAD) models in modern software can be very time-consuming. This is due to the lack of intelligent systems that can quickly generate simpler intermediate parts. We propose Text2CAD, the first AI framework for generating text-to-parametric CAD models using designer-friendly instructions for all skill levels. Furthermore, we introduce a data annotation pipeline for generating text prompts based on natural language instructions for the DeepCAD dataset using Mistral and LLaVA-NeXT. The dataset contains $\sim170$K models and $\sim660$K text annotations, from abstract CAD descriptions (e.g., generate two concentric cylinders) to detailed specifications (e.g., draw two circles with center $(x,y)$ and radius $r_{1}$, $r_{2}$, and extrude along the normal by $d$...). Within the Text2CAD framework, we propose an end-to-end transformer-based auto-regressive network to generate parametric CAD models from input texts. We evaluate the performance of our model through a mixture of metrics, including visual quality, parametric precision, and geometrical accuracy. Our proposed framework shows great potential in AI-aided design applications. Our source code and annotations will be publicly available.<|reference_end|>
|
arxiv
|
@article{khan2024text2cad:,
title={Text2CAD: Generating Sequential CAD Models from Beginner-to-Expert Level
Text Prompts},
author={Mohammad Sadil Khan and Sankalp Sinha and Talha Uddin Sheikh and
Didier Stricker and Sk Aziz Ali and Muhammad Zeshan Afzal},
journal={arXiv preprint arXiv:2409.17106},
year={2024},
archivePrefix={arXiv},
eprint={2409.17106},
primaryClass={cs.CV cs.GR}
}
|
khan2024text2cad:
|
arxiv-661913
|
2409.17107
|
Non-asymptotic convergence analysis of the stochastic gradient Hamiltonian Monte Carlo algorithm with discontinuous stochastic gradient with applications to training of ReLU neural networks
|
<|reference_start|>Non-asymptotic convergence analysis of the stochastic gradient Hamiltonian Monte Carlo algorithm with discontinuous stochastic gradient with applications to training of ReLU neural networks: In this paper, we provide a non-asymptotic analysis of the convergence of the stochastic gradient Hamiltonian Monte Carlo (SGHMC) algorithm to a target measure in Wasserstein-1 and Wasserstein-2 distance. Crucially, compared to the existing literature on SGHMC, we allow its stochastic gradient to be discontinuous. This allows us to provide explicit upper bounds, which can be controlled to be arbitrarily small, for the expected excess risk of non-convex stochastic optimization problems with discontinuous stochastic gradients, including, among others, the training of neural networks with ReLU activation function. To illustrate the applicability of our main results, we consider numerical experiments on quantile estimation and on several optimization problems involving ReLU neural networks relevant in finance and artificial intelligence.<|reference_end|>
|
arxiv
|
@article{liang2024non-asymptotic,
title={Non-asymptotic convergence analysis of the stochastic gradient
Hamiltonian Monte Carlo algorithm with discontinuous stochastic gradient with
applications to training of ReLU neural networks},
author={Luxu Liang and Ariel Neufeld and Ying Zhang},
journal={arXiv preprint arXiv:2409.17107},
year={2024},
archivePrefix={arXiv},
eprint={2409.17107},
primaryClass={math.OC cs.LG cs.NA math.NA math.PR stat.ML}
}
|
liang2024non-asymptotic
|
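The SGHMC entry above analyzes the standard momentum-plus-friction update. A toy sampler sketch for a 1-D Gaussian target (exact gradients stand in for stochastic ones; the step size and friction values are arbitrary choices, not from the paper):

```python
# Minimal SGHMC sketch: v gets a friction term and injected noise with
# variance 2*alpha*eta, following the standard discretization.
import numpy as np

def sghmc(grad_U, theta0, n_steps=5000, eta=1e-3, alpha=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    v = np.zeros_like(theta)
    samples = []
    for _ in range(n_steps):
        noise = rng.normal(scale=np.sqrt(2 * alpha * eta), size=theta.shape)
        v = (1 - alpha) * v - eta * grad_U(theta) + noise
        theta = theta + v
        samples.append(theta.copy())
    return np.array(samples)

# Target: standard Gaussian, U(theta) = theta^2 / 2, so grad_U(theta) = theta.
draws = sghmc(lambda th: th, theta0=np.array([3.0]))
print(draws[1000:].mean(), draws[1000:].std())  # roughly 0 and 1
```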
arxiv-661914
|
2409.17109
|
Unveiling Ontological Commitment in Multi-Modal Foundation Models
|
<|reference_start|>Unveiling Ontological Commitment in Multi-Modal Foundation Models: Ontological commitment, i.e., the concepts, relations, and assumptions used, is a cornerstone of qualitative reasoning (QR) models. The state-of-the-art for processing raw inputs, though, are deep neural networks (DNNs), nowadays often based on multimodal foundation models. These automatically learn rich representations of concepts and respective reasoning. Unfortunately, the learned qualitative knowledge is opaque, preventing easy inspection, validation, or adaptation against available QR models. So far, it is possible to associate pre-defined concepts with latent representations of DNNs, but extractable relations are mostly limited to semantic similarity. As a next step towards QR for validation and verification of DNNs, we propose a method that extracts the learned superclass hierarchy from a multimodal DNN for a given set of leaf concepts. Under the hood we (1) obtain leaf concept embeddings using the DNN's textual input modality; (2) apply hierarchical clustering to them, using the fact that DNNs encode semantic similarities via vector distances; and (3) label the such-obtained parent concepts using search in available ontologies from QR. An initial evaluation study shows that meaningful ontological class hierarchies can be extracted from state-of-the-art foundation models. Furthermore, we demonstrate how to validate and verify a DNN's learned representations against given ontologies. Lastly, we discuss potential future applications in the context of QR.<|reference_end|>
|
arxiv
|
@article{keser2024unveiling,
title={Unveiling Ontological Commitment in Multi-Modal Foundation Models},
author={Mert Keser and Gesina Schwalbe and Niki Amini-Naieni and Matthias
Rottmann and Alois Knoll},
journal={arXiv preprint arXiv:2409.17109},
year={2024},
archivePrefix={arXiv},
eprint={2409.17109},
primaryClass={cs.CV cs.AI}
}
|
keser2024unveiling
|
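Step (2) of the pipeline in the abstract above is plain hierarchical clustering over concept embeddings. A sketch with stand-in random embeddings; real usage would substitute the DNN's text-encoder outputs for `emb`:

```python
# Hierarchical clustering of concept embeddings by vector distance, as in
# step (2) above; the embeddings here are random placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage

concepts = ["cat", "dog", "car", "bus", "apple", "pear"]
rng = np.random.default_rng(0)
emb = rng.normal(size=(len(concepts), 16))          # placeholder embeddings

tree = linkage(emb, method="average", metric="cosine")
for left, right, dist, size in tree:                # merge steps, bottom-up
    print(int(left), int(right), round(float(dist), 3), int(size))
```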
arxiv-661915
|
2409.17110
|
MorphoSeg: An Uncertainty-Aware Deep Learning Method for Biomedical Segmentation of Complex Cellular Morphologies
|
<|reference_start|>MorphoSeg: An Uncertainty-Aware Deep Learning Method for Biomedical Segmentation of Complex Cellular Morphologies: Deep learning has revolutionized medical and biological imaging, particularly in segmentation tasks. However, segmenting biological cells remains challenging due to the high variability and complexity of cell shapes. Addressing this challenge requires high-quality datasets that accurately represent the diverse morphologies found in biological cells. Existing cell segmentation datasets are often limited by their focus on regular and uniform shapes. In this paper, we introduce a novel benchmark dataset of Ntera-2 (NT2) cells, a pluripotent carcinoma cell line, exhibiting diverse morphologies across multiple stages of differentiation, capturing the intricate and heterogeneous cellular structures that complicate segmentation tasks. To address these challenges, we propose an uncertainty-aware deep learning framework for complex cellular morphology segmentation (MorphoSeg) by incorporating sampling of virtual outliers from low-likelihood regions during training. Our comprehensive experimental evaluations against state-of-the-art baselines demonstrate that MorphoSeg significantly enhances segmentation accuracy, achieving up to a 7.74% increase in the Dice Similarity Coefficient (DSC) and a 28.36% reduction in the Hausdorff Distance. These findings highlight the effectiveness of our dataset and methodology in advancing cell segmentation capabilities, especially for complex and variable cell morphologies. The dataset and source code are publicly available at https://github.com/RanchoGoose/MorphoSeg.<|reference_end|>
|
arxiv
|
@article{zhang2024morphoseg:,
title={MorphoSeg: An Uncertainty-Aware Deep Learning Method for Biomedical
Segmentation of Complex Cellular Morphologies},
author={Tianhao Zhang and Heather J. McCourty and Berardo M. Sanchez-Tafolla
and Anton Nikolaev and Lyudmila S. Mihaylova},
journal={arXiv preprint arXiv:2409.17110},
year={2024},
archivePrefix={arXiv},
eprint={2409.17110},
primaryClass={cs.CV}
}
|
zhang2024morphoseg:
|
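The MorphoSeg entry above samples virtual outliers from low-likelihood regions. One hedged way to realize that idea, fitting a Gaussian to in-distribution features and keeping proposals whose density falls below a small quantile; the feature space, widening factor, and threshold are assumptions, not the paper's settings:

```python
# Sketch: draw "virtual outliers" from the low-likelihood tail of a Gaussian
# fit to (here random, stand-in) feature vectors.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 8))                    # stand-in cell features

mu, cov = feats.mean(axis=0), np.cov(feats, rowvar=False)
dist = multivariate_normal(mean=mu, cov=cov, allow_singular=True)
cand = rng.multivariate_normal(mu, 4 * cov, size=2000)   # widened proposals
logp = dist.logpdf(cand)
outliers = cand[logp < np.quantile(logp, 0.05)]      # low-likelihood tail
print(outliers.shape)
```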
arxiv-661916
|
2409.17111
|
Self-Sensing for Proprioception and Contact Detection in Soft Robots Using Shape Memory Alloy Artificial Muscles
|
<|reference_start|>Self-Sensing for Proprioception and Contact Detection in Soft Robots Using Shape Memory Alloy Artificial Muscles: Estimating a soft robot's pose and applied forces, also called proprioception, is crucial for safe interaction of the robot with its environment. However, most solutions for soft robot proprioception use dedicated sensors, particularly for external forces, which introduce design trade-offs, rigidity, and risk of failure. This work presents an approach for pose estimation and contact detection for soft robots actuated by shape memory alloy (SMA) artificial muscles, using no dedicated force sensors. Our framework uses the unique material properties of SMAs to self-sense their internal stress, via offboard measurements of their electrical resistance and in-situ temperature readings, in an existing fully-soft limb design. We demonstrate that a simple polynomial regression model on these measurements is sufficient to predict the robot's pose, under no-contact conditions. Then, we show that if an additional measurement of the true pose is available (e.g. from an already-in-place bending sensor), it is possible to predict a binary contact/no-contact using multiple combinations of self-sensing signals. Our hardware tests verify our hypothesis via a contact detection test with a human operator. This proof-of-concept validates that self-sensing signals in soft SMA-actuated soft robots can be used for proprioception and contact detection, and suggests a direction for integrating proprioception into soft robots without design compromises. Future work could employ machine learning for enhanced accuracy.<|reference_end|>
|
arxiv
|
@article{jing2024self-sensing,
title={Self-Sensing for Proprioception and Contact Detection in Soft Robots
Using Shape Memory Alloy Artificial Muscles},
author={Ran Jing and Meredith L. Anderson and Juan C. Pacheco Garcia and
Andrew P. Sabelhaus},
journal={arXiv preprint arXiv:2409.17111},
year={2024},
archivePrefix={arXiv},
eprint={2409.17111},
primaryClass={cs.RO}
}
|
jing2024self-sensing
|
arxiv-661917
|
2409.17113
|
Characterizing stable regions in the residual stream of LLMs
|
<|reference_start|>Characterizing stable regions in the residual stream of LLMs: We identify "stable regions" in the residual stream of Transformers, where the model's output remains insensitive to small activation changes, but exhibits high sensitivity at region boundaries. These regions emerge during training and become more defined as training progresses or model size increases. The regions appear to be much larger than previously studied polytopes. Our analysis suggests that these stable regions align with semantic distinctions, where similar prompts cluster within regions, and activations from the same region lead to similar next token predictions. This work provides a promising research direction for understanding the complexity of neural networks, shedding light on training dynamics, and advancing interpretability.<|reference_end|>
|
arxiv
|
@article{janiak2024characterizing,
title={Characterizing stable regions in the residual stream of LLMs},
author={Jett Janiak and Jacek Karwowski and Chatrik Singh Mangat and Giorgi
Giglemiani and Nora Petrova and Stefan Heimersheim},
journal={arXiv preprint arXiv:2409.17113},
year={2024},
archivePrefix={arXiv},
eprint={2409.17113},
primaryClass={cs.LG}
}
|
janiak2024characterizing
|
arxiv-661918
|
2409.17114
|
Towards human-like kinematics in industrial robotic arms: a case study on a UR3 robot
|
<|reference_start|>Towards human-like kinematics in industrial robotic arms: a case study on a UR3 robot: Safety in industrial robotic environments is a hot research topic in the area of human-robot interaction (HRI). Up to now, a robotic arm on an assembly line interacts with other machines away from human workers. Nowadays, robotic arm manufacturers aim for their robots to increasingly perform tasks in collaboration with humans. One of the ways to improve this collaboration is by making the movement of robots more humanlike. This way, it would be easier for a human to foresee the movement of the robot and approach it without fear of contact. The main difference between the movement of a human and of a robotic arm is that the former has a bell-shaped speed profile while the latter has a uniform speed one. To generate this speed profile, the kinematic theory of rapid human movements and its Sigma-Lognormal model has been used. This model is widely used to explain most of the basic phenomena related to the control of human movements. Both human-like and robotic-like movements are transferred to the UR3 robot. In this paper we detail how the UR3 robot was programmed to produce both kinds of movement. The dissimilarity results between the input and output motions of the robot confirm the possibility of producing human-like velocities on the UR3 robot.<|reference_end|>
|
arxiv
|
@article{wolniakowski2024towards,
title={Towards human-like kinematics in industrial robotic arms: a case study
on a UR3 robot},
author={Adam Wolniakowski and Kanstantsin Miatliuk and Jose J. Quintana and
Miguel A. Ferrer and Moises Diaz},
journal={2021 International Carnahan Conference on Security Technology
(ICCST). IEEE, 2021},
year={2024},
doi={10.1109/ICCST49569.2021.9717393},
archivePrefix={arXiv},
eprint={2409.17114},
primaryClass={cs.RO cs.SY eess.SY}
}
|
wolniakowski2024towards
|
arxiv-661919
|
2409.17115
|
Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale
|
<|reference_start|>Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale: Large language model pre-training has traditionally relied on human experts to craft heuristics for improving the corpora quality, resulting in numerous rules developed to date. However, these rules lack the flexibility to address the unique characteristics of individual examples effectively. Meanwhile, applying tailored rules to every example is impractical for human experts. In this paper, we demonstrate that even small language models, with as few as 0.3B parameters, can exhibit substantial data refining capabilities comparable to those of human experts. We introduce Programming Every Example (ProX), a novel framework that treats data refinement as a programming task, enabling models to refine corpora by generating and executing fine-grained operations, such as string normalization, for each individual example at scale. Experimental results show that models pre-trained on ProX-curated data outperform either original data or data filtered by other selection methods by more than 2% across various downstream benchmarks. Its effectiveness spans various model sizes and pre-training corpora, including C4, RedPajama-V2, and FineWeb. Furthermore, ProX exhibits significant potential in domain-specific continual pre-training: without domain specific design, models trained on OpenWebMath refined by ProX outperform human-crafted rule-based methods, improving average accuracy by 7.6% over Mistral-7B, with 14.6% for Llama-2-7B and 20.3% for CodeLlama-7B, all within 10B tokens to be comparable to models like Llemma-7B trained on 200B tokens. Further analysis highlights that ProX significantly saves training FLOPs, offering a promising path for efficient LLM pre-training. We are open-sourcing ProX with >100B corpus, models, and sharing all training and implementation details for reproducible research and future innovation. Code: https://github.com/GAIR-NLP/ProX<|reference_end|>
|
arxiv
|
@article{zhou2024programming,
title={Programming Every Example: Lifting Pre-training Data Quality like
Experts at Scale},
author={Fan Zhou and Zengzhi Wang and Qian Liu and Junlong Li and Pengfei
Liu},
journal={arXiv preprint arXiv:2409.17115},
year={2024},
archivePrefix={arXiv},
eprint={2409.17115},
primaryClass={cs.CL cs.AI cs.LG}
}
|
zhou2024programming
|
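The ProX entry above treats refinement as generating and executing per-example operations. A toy rendition of the execution side; the operation names and the emitted program below are invented purely for illustration:

```python
# Made-up operation registry plus an executor: a small model would emit the
# list of op names per example, and the executor applies them in order.
import re

OPS = {
    "normalize_whitespace": lambda s: re.sub(r"\s+", " ", s).strip(),
    "strip_boilerplate": lambda s: re.sub(r"(?i)click here.*?$", "", s).strip(),
    "drop_if_short": lambda s: s if len(s.split()) >= 3 else None,
}

def refine(example: str, program: list[str]) -> str | None:
    for op in program:                      # ops applied in emitted order
        example = OPS[op](example)
        if example is None:
            return None                     # example filtered out entirely
    return example

doc = "Deep   learning  basics.   Click here to subscribe!"
print(refine(doc, ["normalize_whitespace", "strip_boilerplate"]))
```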
arxiv-661920
|
2409.17116
|
Hierarchical Tri-manual Planning for Vision-assisted Fruit Harvesting with Quadrupedal Robots
|
<|reference_start|>Hierarchical Tri-manual Planning for Vision-assisted Fruit Harvesting with Quadrupedal Robots: This paper addresses the challenge of developing a multi-arm quadrupedal robot capable of efficiently harvesting fruit in complex, natural environments. To overcome the inherent limitations of traditional bimanual manipulation, we introduce the first three-arm quadrupedal robot LocoHarv-3 and propose a novel hierarchical tri-manual planning approach, enabling automated fruit harvesting with collision-free trajectories. Our comprehensive semi-autonomous framework integrates teleoperation, supported by LiDAR-based odometry and mapping, with learning-based visual perception for accurate fruit detection and pose estimation. Validation is conducted through a series of controlled indoor experiments using motion capture and extensive field tests in natural settings. Results demonstrate a 90\% success rate in in-lab settings with a single attempt, and field trials further verify the system's robustness and efficiency in more challenging real-world environments.<|reference_end|>
|
arxiv
|
@article{liu2024hierarchical,
title={Hierarchical Tri-manual Planning for Vision-assisted Fruit Harvesting
with Quadrupedal Robots},
author={Zhichao Liu and Jingzong Zhou and Konstantinos Karydis},
journal={arXiv preprint arXiv:2409.17116},
year={2024},
archivePrefix={arXiv},
eprint={2409.17116},
primaryClass={cs.RO}
}
|
liu2024hierarchical
|
arxiv-661921
|
2409.17119
|
Small data deep learning methodology for in-field disease detection
|
<|reference_start|>Small data deep learning methodology for in-field disease detection: Early detection of diseases in crops is essential to prevent harvest losses and improve the quality of the final product. In this context, the combination of machine learning and proximity sensors is emerging as a technique capable of achieving this detection efficiently and effectively. For example, this machine learning approach has been applied to potato crops -- to detect late blight (Phytophthora infestans) -- and grapevine crops -- to detect downy mildew. However, most of these AI models found in the specialised literature have been developed using leaf-by-leaf images taken in the lab, which do not represent field conditions and limit their applicability. In this study, we present the first machine learning model capable of detecting mild symptoms of late blight in potato crops through the analysis of high-resolution RGB images captured directly in the field, overcoming the limitations of other publications in the literature and presenting real-world applicability. Our proposal exploits the availability of high-resolution images via the concept of patching, and is based on deep convolutional neural networks with a focal loss function, which makes the model focus on the complex patterns that arise in field conditions. Additionally, we present a data augmentation scheme that facilitates the training of these neural networks with few high-resolution images, which allows for the development of models under the small data paradigm. Our model correctly detects all cases of late blight in the test dataset, demonstrating a high level of accuracy and effectiveness in identifying early symptoms. These promising results reinforce the potential use of machine learning for the early detection of diseases and pests in agriculture, enabling better treatment and reducing their impact on crops.<|reference_end|>
|
arxiv
|
@article{herrera-poyato2024small,
title={Small data deep learning methodology for in-field disease detection},
author={David Herrera-Poyato and Jacinto Dom\'inguez-Rull and Rosana Montes
and In\'es Hern\'ande and Ignacio Barrio and Carlos Poblete-Echeverria and
Javier Tardaguila and Francisco Herrera and Andr\'es Herrera-Poyatos},
journal={arXiv preprint arXiv:2409.17119},
year={2024},
archivePrefix={arXiv},
eprint={2409.17119},
primaryClass={cs.CV}
}
|
herrera-poyato2024small
|
arxiv-661922
|
2409.17120
|
Deep Learning and Machine Learning, Advancing Big Data Analytics and Management: Handy Appetizer
|
<|reference_start|>Deep Learning and Machine Learning, Advancing Big Data Analytics and Management: Handy Appetizer: This book explores the role of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) in driving the progress of big data analytics and management. The book focuses on simplifying the complex mathematical concepts behind deep learning, offering intuitive visualizations and practical case studies to help readers understand how neural networks and technologies like Convolutional Neural Networks (CNNs) work. It introduces several classic models and technologies such as Transformers, GPT, ResNet, BERT, and YOLO, highlighting their applications in fields like natural language processing, image recognition, and autonomous driving. The book also emphasizes the importance of pre-trained models and how they can enhance model performance and accuracy, with instructions on how to apply these models in various real-world scenarios. Additionally, it provides an overview of key big data management technologies like SQL and NoSQL databases, as well as distributed computing frameworks such as Apache Hadoop and Spark, explaining their importance in managing and processing vast amounts of data. Ultimately, the book underscores the value of mastering deep learning and big data management skills as critical tools for the future workforce, making it an essential resource for both beginners and experienced professionals.<|reference_end|>
|
arxiv
|
@article{peng2024deep,
title={Deep Learning and Machine Learning, Advancing Big Data Analytics and
Management: Handy Appetizer},
author={Benji Peng and Xuanhe Pan and Yizhu Wen and Ziqian Bi and Keyu Chen
and Ming Li and Ming Liu and Qian Niu and Junyu Liu and Jinlang Wang and Sen
Zhang and Jiawei Xu and Pohsun Feng},
journal={arXiv preprint arXiv:2409.17120},
year={2024},
archivePrefix={arXiv},
eprint={2409.17120},
primaryClass={cs.CL cs.LG}
}
|
peng2024deep
|
arxiv-661923
|
2409.17122
|
Classification of Gleason Grading in Prostate Cancer Histopathology Images Using Deep Learning Techniques: YOLO, Vision Transformers, and Vision Mamba
|
<|reference_start|>Classification of Gleason Grading in Prostate Cancer Histopathology Images Using Deep Learning Techniques: YOLO, Vision Transformers, and Vision Mamba: Prostate cancer ranks among the leading health issues impacting men, with the Gleason scoring system serving as the primary method for diagnosis and prognosis. This system relies on expert pathologists to evaluate samples of prostate tissue and assign a Gleason grade, a task that requires significant time and manual effort. To address this challenge, artificial intelligence (AI) solutions have been explored to automate the grading process. In light of these challenges, this study evaluates and compares the effectiveness of three deep learning methodologies, YOLO, Vision Transformers, and Vision Mamba, in accurately classifying Gleason grades from histopathology images. The goal is to enhance diagnostic precision and efficiency in prostate cancer management. This study utilized two publicly available datasets, Gleason2019 and SICAPv2, to train and test the performance of YOLO, Vision Transformers, and Vision Mamba models. Each model was assessed based on its ability to classify Gleason grades accurately, considering metrics such as false positive rate, false negative rate, precision, and recall. The study also examined the computational efficiency and applicability of each method in a clinical setting. Vision Mamba demonstrated superior performance across all metrics, achieving high precision and recall rates while minimizing false positives and negatives. YOLO showed promise in terms of speed and efficiency, particularly beneficial for real-time analysis. Vision Transformers excelled in capturing long-range dependencies within images, although they presented higher computational complexity compared to the other models. Vision Mamba emerges as the most effective model for Gleason grade classification in histopathology images, offering a balance between accuracy and computational efficiency.<|reference_end|>
|
arxiv
|
@article{malekmohammadi2024classification,
title={Classification of Gleason Grading in Prostate Cancer Histopathology
Images Using Deep Learning Techniques: YOLO, Vision Transformers, and Vision
Mamba},
author={Amin Malekmohammadi and Ali Badiezadeh and Seyed Mostafa Mirhassani
and Parisa Gifani and Majid Vafaeezadeh},
journal={arXiv preprint arXiv:2409.17122},
year={2024},
archivePrefix={arXiv},
eprint={2409.17122},
primaryClass={eess.IV cs.CV}
}
|
malekmohammadi2024classification
|
arxiv-661924
|
2409.17124
|
PokeFlex: Towards a Real-World Dataset of Deformable Objects for Robotic Manipulation
|
<|reference_start|>PokeFlex: Towards a Real-World Dataset of Deformable Objects for Robotic Manipulation: Advancing robotic manipulation of deformable objects can enable automation of repetitive tasks across multiple industries, from food processing to textiles and healthcare. Yet robots struggle with the high dimensionality of deformable objects and their complex dynamics. While data-driven methods have shown potential for solving manipulation tasks, their application in the domain of deformable objects has been constrained by the lack of data. To address this, we propose PokeFlex, a pilot dataset featuring real-world 3D mesh data of actively deformed objects, together with the corresponding forces and torques applied by a robotic arm, using a simple poking strategy. Deformations are captured with a professional volumetric capture system that allows for complete 360-degree reconstruction. The PokeFlex dataset consists of five deformable objects with varying stiffness and shapes. Additionally, we leverage the PokeFlex dataset to train a vision model for online 3D mesh reconstruction from a single image and a template mesh. We refer readers to the supplementary material and to our website ( https://pokeflex-dataset.github.io/ ) for demos and examples of our dataset.<|reference_end|>
|
arxiv
|
@article{obrist2024pokeflex:,
title={PokeFlex: Towards a Real-World Dataset of Deformable Objects for Robotic
Manipulation},
author={Jan Obrist and Miguel Zamora and Hehui Zheng and Juan Zarate and
Robert K. Katzschmann and Stelian Coros},
journal={arXiv preprint arXiv:2409.17124},
year={2024},
archivePrefix={arXiv},
eprint={2409.17124},
primaryClass={cs.RO}
}
|
obrist2024pokeflex:
|
arxiv-661925
|
2409.17125
|
On-orbit Servicing for Spacecraft Collision Avoidance With Autonomous Decision Making
|
<|reference_start|>On-orbit Servicing for Spacecraft Collision Avoidance With Autonomous Decision Making: This study develops an AI-based implementation of autonomous On-Orbit Servicing (OOS) mission to assist with spacecraft collision avoidance maneuvers (CAMs). We propose an autonomous `servicer' trained with Reinforcement Learning (RL) to autonomously detect potential collisions between a target satellite and space debris, rendezvous and dock with endangered satellites, and execute optimal CAM. The RL model integrates collision risk estimates, satellite specifications, and debris data to generate an optimal maneuver matrix for OOS rendezvous and collision prevention. We employ the Cross-Entropy algorithm to find optimal decision policies efficiently. Initial results demonstrate the feasibility of autonomous robotic OOS for collision avoidance services, focusing on one servicer spacecraft to one endangered satellite scenario. However, merging spacecraft rendezvous and optimal CAM presents significant complexities. We discuss design challenges and critical parameters for the successful implementation of the framework presented through a case study.<|reference_end|>
|
arxiv
|
@article{patnala2024on-orbit,
title={On-orbit Servicing for Spacecraft Collision Avoidance With Autonomous
Decision Making},
author={Susmitha Patnala and Adam Abdin},
journal={arXiv preprint arXiv:2409.17125},
year={2024},
archivePrefix={arXiv},
eprint={2409.17125},
primaryClass={cs.AI}
}
|
patnala2024on-orbit
|
arxiv-661926
|
2409.17126
|
Blox-Net: Generative Design-for-Robot-Assembly Using VLM Supervision, Physics Simulation, and a Robot with Reset
|
<|reference_start|>Blox-Net: Generative Design-for-Robot-Assembly Using VLM Supervision, Physics Simulation, and a Robot with Reset: Generative AI systems have shown impressive capabilities in creating text, code, and images. Inspired by the rich history of research in industrial ''Design for Assembly'', we introduce a novel problem: Generative Design-for-Robot-Assembly (GDfRA). The task is to generate an assembly based on a natural language prompt (e.g., ''giraffe'') and an image of available physical components, such as 3D-printed blocks. The output is an assembly, a spatial arrangement of these components, and instructions for a robot to build this assembly. The output must 1) resemble the requested object and 2) be reliably assembled by a 6 DoF robot arm with a suction gripper. We then present Blox-Net, a GDfRA system that combines generative vision language models with well-established methods in computer vision, simulation, perturbation analysis, motion planning, and physical robot experimentation to solve a class of GDfRA problems with minimal human supervision. Blox-Net achieved a Top-1 accuracy of 63.5% in the ''recognizability'' of its designed assemblies (e.g., resembling a giraffe as judged by a VLM). These designs, after automated perturbation redesign, were reliably assembled by a robot, achieving near-perfect success across 10 consecutive assembly iterations with human intervention only during reset prior to assembly. Surprisingly, this entire design process from textual word (''giraffe'') to reliable physical assembly is performed with zero human intervention.<|reference_end|>
|
arxiv
|
@article{goldberg2024blox-net:,
title={Blox-Net: Generative Design-for-Robot-Assembly Using VLM Supervision,
Physics Simulation, and a Robot with Reset},
author={Andrew Goldberg and Kavish Kondap and Tianshuang Qiu and Zehan Ma and
Letian Fu and Justin Kerr and Huang Huang and Kaiyuan Chen and Kuan Fang and
Ken Goldberg},
journal={arXiv preprint arXiv:2409.17126},
year={2024},
archivePrefix={arXiv},
eprint={2409.17126},
primaryClass={cs.RO cs.AI cs.LG}
}
|
goldberg2024blox-net:
|
arxiv-661927
|
2409.17128
|
NetScaNDN: A Scalable and Flexible Testbed To Evaluate NDN on Multiple Infrastructures
|
<|reference_start|>NetScaNDN: A Scalable and Flexible Testbed To Evaluate NDN on Multiple Infrastructures: The evolution from traditional IP-based networking to Named Data Networking (NDN) represents a paradigm shift to address the inherent limitations of current network architectures, such as scalability, mobility, and efficient data distribution. NDN introduces an information-centric approach where data is identified and retrieved based on names rather than locations, offering more efficient data dissemination and enhanced security. However, the transition to NDN, alongside the need to integrate it with existing IP infrastructures, necessitates the development of flexible and scalable testbeds that support diverse experimental scenarios across various physical media and networking protocol stacks. In this paper, we present NetScaNDN, a scalable, flexible, and plug-and-play testbed designed to facilitate such experiments. NetScaNDN employs an automated process for node discovery, configuration, and installation, enabling seamless setup and execution of experiments on both wired and wireless infrastructures simultaneously. Additionally, it incorporates a central log repository using the syslog protocol, allowing comprehensive measurement and evaluation of user-defined metrics across different network layers. NetScaNDN offers a robust platform for researchers to explore and validate various networking scenarios, advancing the study of IP and NDN-based applications.<|reference_end|>
|
arxiv
|
@article{esmaeili2024netscandn:,
title={NetScaNDN: A Scalable and Flexible Testbed To Evaluate NDN on Multiple
Infrastructures},
author={Amir Esmaeili and Maryam Fazli},
journal={arXiv preprint arXiv:2409.17128},
year={2024},
archivePrefix={arXiv},
eprint={2409.17128},
primaryClass={cs.NI}
}
|
esmaeili2024netscandn:
|
arxiv-661928
|
2409.17130
|
Assessing the Level of Toxicity Against Distinct Groups in Bangla Social Media Comments: A Comprehensive Investigation
|
<|reference_start|>Assessing the Level of Toxicity Against Distinct Groups in Bangla Social Media Comments: A Comprehensive Investigation: Social media platforms have a vital role in the modern world, serving as conduits for communication, the exchange of ideas, and the establishment of networks. However, the misuse of these platforms through toxic comments, which can range from offensive remarks to hate speech, is a concerning issue. This study focuses on identifying toxic comments in the Bengali language targeting three specific groups: transgender people, indigenous people, and migrant people, from multiple social media sources. The study delves into the intricate process of identifying and categorizing toxic language while considering the varying degrees of toxicity: high, medium, and low. The methodology involves creating a dataset, manual annotation, and employing pre-trained transformer models like Bangla-BERT, bangla-bert-base, distil-BERT, and Bert-base-multilingual-cased for classification. Diverse assessment metrics such as accuracy, recall, precision, and F1-score are employed to evaluate the model's effectiveness. The experimental findings reveal that Bangla-BERT surpasses alternative models, achieving an F1-score of 0.8903. This research exposes the complexity of toxicity in Bangla social media dialogues, revealing its differing impacts on diverse demographic groups.<|reference_end|>
|
arxiv
|
@article{moin2024assessing,
title={Assessing the Level of Toxicity Against Distinct Groups in Bangla Social
Media Comments: A Comprehensive Investigation},
author={Mukaffi Bin Moin and Pronay Debnath and Usafa Akther Rifa and Rijeet
Bin Anis},
journal={arXiv preprint arXiv:2409.17130},
year={2024},
archivePrefix={arXiv},
eprint={2409.17130},
primaryClass={cs.CL}
}
|
moin2024assessing
|
arxiv-661929
|
2409.17131
|
Enhancing robot reliability for health-care facilities by means of Human-Aware Navigation Planning
|
<|reference_start|>Enhancing robot reliability for health-care facilities by means of Human-Aware Navigation Planning: With the aim of enabling robots to cooperate with humans, carry out human-like tasks, or navigate among humans, we need to ensure that they are equipped with the ability to comprehend human behaviors and use the extracted knowledge for intelligent decision-making. This ability is particularly important in the safety-critical and human-centred environment of health-care institutions. In the field of robotic navigation, the most cutting-edge approaches to enhancing robot reliability in the application domain of healthcare facilities and in general pertain to augmenting navigation systems with human-aware properties. To implement this in our work, the Co-operative Human-Aware Navigation planner has been integrated into the ROS-based differential-drive robot MARRtina and exhaustively challenged within various simulated contexts and scenarios (mainly modelling the situations relevant in the medical domain) to draw attention to the integrated system's benefits and identify its drawbacks or instances of poor performance while exploring the scope of system capabilities and creating a full characterization of its applicability. The simulation results are then presented to medical experts, and the enhanced robot acceptability within the domain is validated with them as the robot is further planned for deployment.<|reference_end|>
|
arxiv
|
@article{sorokoletova2024enhancing,
title={Enhancing robot reliability for health-care facilities by means of
Human-Aware Navigation Planning},
author={Olga E. Sorokoletova and Lucca Iocchi},
journal={arXiv preprint arXiv:2409.17131},
year={2024},
archivePrefix={arXiv},
eprint={2409.17131},
primaryClass={cs.RO}
}
|
sorokoletova2024enhancing
|
arxiv-661930
|
2409.17132
|
Complex-Phase, Data-Driven Identification of Grid-Forming Inverter Dynamics
|
<|reference_start|>Complex-Phase, Data-Driven Identification of Grid-Forming Inverter Dynamics: The increasing integration of renewable energy sources (RESs) into power systems requires the deployment of grid-forming inverters to ensure a stable operation. Accurate modeling of these devices is necessary. In this paper, a system identification approach to obtain low-dimensional models of grid-forming inverters is presented. The proposed approach is based on a Hammerstein-Wiener parametrization of the normal-form model. The normal-form is a gray-box model that utilizes complex frequency and phase to capture non-linear inverter dynamics. The model is validated on two well-known control strategies: droop-control and dispatchable virtual oscillators. Simulations and hardware-in-the-loop experiments demonstrate that the normal-form accurately models inverter dynamics across various operating conditions. The approach shows great potential for enhancing the modeling of RES-dominated power systems, especially when component models are unavailable or computationally expensive.<|reference_end|>
|
arxiv
|
@article{büttner2024complex-phase,
title={Complex-Phase, Data-Driven Identification of Grid-Forming Inverter
Dynamics},
author={Anna Büttner, Hans Würfel, Sebastian Liemann, Johannes Schiffer,
Frank Hellmann},
journal={arXiv preprint arXiv:2409.17132},
year={2024},
archivePrefix={arXiv},
eprint={2409.17132},
primaryClass={eess.SY cs.SY}
}
|
büttner2024complex-phase
|
arxiv-661931
|
2409.17134
|
Streaming Neural Images
|
<|reference_start|>Streaming Neural Images: Implicit Neural Representations (INRs) are a novel paradigm for signal representation that have attracted considerable interest for image compression. INRs offer unprecedented advantages in signal resolution and memory efficiency, enabling new possibilities for compression techniques. However, the existing limitations of INRs for image compression have not been sufficiently addressed in the literature. In this work, we explore the critical yet overlooked limiting factors of INRs, such as computational cost, unstable performance, and robustness. Through extensive experiments and empirical analysis, we provide a deeper and more nuanced understanding of implicit neural image compression methods such as Fourier Feature Networks and Siren. Our work also offers valuable insights for future research in this area.<|reference_end|>
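For reference, here is a minimal sketch of a SIREN-style implicit neural representation, one of the two INR families the paper analyzes; fitting such a network to a single image's coordinate-to-color mapping is the compression step. The layer widths and w0 follow common defaults and are assumptions here.

```python
# A minimal SIREN-style INR: sinusoidal activations with the standard init scheme.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_f, out_f, w0=30.0, first=False):
        super().__init__()
        self.w0, self.linear = w0, nn.Linear(in_f, out_f)
        # SIREN init: uniform(-1/in, 1/in) for the first layer, scaled by w0 after.
        bound = 1 / in_f if first else (6 / in_f) ** 0.5 / w0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

# Map 2D pixel coordinates to RGB; fitting this net to one image *is* the codec.
siren = nn.Sequential(SineLayer(2, 256, first=True), SineLayer(256, 256), nn.Linear(256, 3))
coords = torch.rand(1024, 2) * 2 - 1   # coordinates normalized to [-1, 1]
rgb = siren(coords)                    # predicted colors at those pixels
```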
|
arxiv
|
@article{conde2024streaming,
title={Streaming Neural Images},
author={Marcos V. Conde and Andy Bigos and Radu Timofte},
journal={arXiv preprint arXiv:2409.17134},
year={2024},
archivePrefix={arXiv},
eprint={2409.17134},
primaryClass={cs.CV eess.IV}
}
|
conde2024streaming
|
arxiv-661932
|
2409.17136
|
Adaptive Cost Model for Query Optimization
|
<|reference_start|>Adaptive Cost Model for Query Optimization: The principal component of conventional database query optimizers is a cost model that is used to estimate expected performance of query plans. The accuracy of the cost model has a direct impact on the optimality of execution plans selected by the optimizer and, thus, on the resulting query latency. Several common parameters of cost models in modern DBMS are related to the performance of CPU and I/O and are typically set by a database administrator upon system tuning. However, these performance characteristics are not stable and, therefore, a single-point estimate may not suffice for all DB load regimes. In this paper, we propose an Adaptive Cost Model (ACM) which dynamically optimizes CPU- and I/O-related plan cost parameters at DB runtime. By continuously monitoring query execution statistics and the state of the DB buffer cache, ACM adjusts cost parameters without the need for manual intervention from a database administrator. This allows for responding to changes in the workload and system performance, ensuring more optimal query execution plans. We describe the main ideas in the implementation of ACM and report on a preliminary experimental evaluation showing 20\% end-to-end latency improvement on the TPC-H benchmark.<|reference_end|>
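A minimal sketch of the runtime-calibration idea follows, assuming hypothetical per-query statistics (elapsed time, tuples processed, pages read, cache hit ratio) and a simple exponential-moving-average update; ACM's actual estimators and monitored state are not reproduced here.

```python
# A hypothetical sketch: adapt CPU/IO cost parameters from observed execution stats.
class AdaptiveCostModel:
    def __init__(self, cpu_tuple_cost=0.01, seq_page_cost=1.0, alpha=0.1):
        self.cpu_tuple_cost = cpu_tuple_cost   # cost units per tuple processed
        self.seq_page_cost = seq_page_cost     # cost units per page read
        self.alpha = alpha                     # EMA smoothing factor

    def observe(self, elapsed_ms, tuples, pages_read, cache_hit_ratio):
        # Crude 50/50 CPU-vs-I/O attribution (an assumption); cached pages are ~free.
        effective_pages = pages_read * (1.0 - cache_hit_ratio)
        if tuples:
            self.cpu_tuple_cost += self.alpha * (0.5 * elapsed_ms / tuples - self.cpu_tuple_cost)
        if effective_pages:
            self.seq_page_cost += self.alpha * (0.5 * elapsed_ms / effective_pages - self.seq_page_cost)

    def cost(self, tuples, pages_read):
        return tuples * self.cpu_tuple_cost + pages_read * self.seq_page_cost

acm = AdaptiveCostModel()
acm.observe(elapsed_ms=120.0, tuples=50_000, pages_read=800, cache_hit_ratio=0.7)
print(acm.cost(tuples=10_000, pages_read=200))
```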
|
arxiv
|
@article{vasilenko2024adaptive,
title={Adaptive Cost Model for Query Optimization},
author={Nikita Vasilenko, Alexander Demin, Denis Ponomaryov},
journal={arXiv preprint arXiv:2409.17136},
year={2024},
archivePrefix={arXiv},
eprint={2409.17136},
primaryClass={cs.DB}
}
|
vasilenko2024adaptive
|
arxiv-661933
|
2409.17137
|
PACE: marrying generalization in PArameter-efficient fine-tuning with Consistency rEgularization
|
<|reference_start|>PACE: marrying generalization in PArameter-efficient fine-tuning with Consistency rEgularization: Parameter-Efficient Fine-Tuning (PEFT) effectively adapts pre-trained vision transformers to downstream tasks. However, optimizing for task performance often comes at the cost of generalizability in fine-tuned models. To address this issue, we theoretically connect smaller weight gradient norms during training and larger datasets to improved model generalization. Motivated by this connection, we propose reducing gradient norms for enhanced generalization and aligning the fine-tuned model with its pre-trained counterpart to retain knowledge from large-scale pre-training data. Yet, naive alignment does not guarantee gradient reduction and can potentially cause gradient explosion, complicating efforts to manage gradients. To address such issues, we propose PACE, marrying generalization of PArameter-efficient fine-tuning with Consistency rEgularization. We perturb features learned from the adapter with multiplicative noise and ensure the fine-tuned model remains consistent for the same sample under different perturbations. Theoretical analysis shows that PACE not only implicitly regularizes gradients for enhanced generalization, but also implicitly aligns the fine-tuned and pre-trained models to retain knowledge. Experimental evidence supports our theories. PACE outperforms existing PEFT methods in four visual adaptation tasks: VTAB-1k, FGVC, few-shot learning and domain adaptation. Code will be available at https://github.com/MaxwellYaoNi/PACE<|reference_end|>
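A minimal sketch of the consistency term follows: two forward passes with independent multiplicative noise on the adapter features, penalizing output disagreement. The `adapter_noise` interface and noise scale are hypothetical stand-ins for whatever hook a concrete implementation exposes.

```python
# A sketch of the consistency regularizer, assuming a hypothetical model
# interface that scales its adapter features by a supplied noise tensor.
import torch
import torch.nn.functional as F

def pace_consistency_loss(model, x, sigma=0.1):
    def noisy_forward():
        # Multiplicative Gaussian perturbation of the adapter features.
        noise = 1.0 + sigma * torch.randn(x.shape[0], model.adapter_dim, device=x.device)
        return model(x, adapter_noise=noise)
    out1, out2 = noisy_forward(), noisy_forward()
    return F.mse_loss(out1, out2)  # consistency across the two perturbations

# Typical usage: total_loss = task_loss + lambda_reg * pace_consistency_loss(model, x)
```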
|
arxiv
|
@article{ni2024pace:,
title={PACE: marrying generalization in PArameter-efficient fine-tuning with
Consistency rEgularization},
author={Yao Ni, Shan Zhang, Piotr Koniusz},
journal={arXiv preprint arXiv:2409.17137},
year={2024},
archivePrefix={arXiv},
eprint={2409.17137},
primaryClass={cs.LG cs.CV}
}
|
ni2024pace:
|
arxiv-661934
|
2409.17138
|
Landscape of Policy Optimization for Finite Horizon MDPs with General State and Action
|
<|reference_start|>Landscape of Policy Optimization for Finite Horizon MDPs with General State and Action: Policy gradient methods are widely used in reinforcement learning. Yet, the nonconvexity of policy optimization imposes significant challenges in understanding the global convergence of policy gradient methods. For a class of finite-horizon Markov Decision Processes (MDPs) with general state and action spaces, we develop a framework that provides a set of easily verifiable assumptions to ensure the Kurdyka-Lojasiewicz (KL) condition of the policy optimization. Leveraging the KL condition, policy gradient methods converge to the globally optimal policy with a non-asymptotic rate despite nonconvexity. Our results find applications in various control and operations models, including entropy-regularized tabular MDPs, Linear Quadratic Regulator (LQR) problems, stochastic inventory models, and stochastic cash balance problems, for which we show an $\epsilon$-optimal policy can be obtained using a sample size in $\tilde{\mathcal{O}}(\epsilon^{-1})$ and polynomial in terms of the planning horizon by stochastic policy gradient methods. Our result establishes the first sample complexity for multi-period inventory systems with Markov-modulated demands and stochastic cash balance problems in the literature.<|reference_end|>
|
arxiv
|
@article{chen2024landscape,
title={Landscape of Policy Optimization for Finite Horizon MDPs with General
State and Action},
author={Xin Chen, Yifan Hu, Minda Zhao},
journal={arXiv preprint arXiv:2409.17138},
year={2024},
archivePrefix={arXiv},
eprint={2409.17138},
primaryClass={math.OC cs.LG}
}
|
chen2024landscape
|
arxiv-661935
|
2409.17139
|
Learning with Dynamics: Autonomous Regulation of UAV Based Communication Networks with Dynamic UAV Crew
|
<|reference_start|>Learning with Dynamics: Autonomous Regulation of UAV Based Communication Networks with Dynamic UAV Crew: Unmanned Aerial Vehicle (UAV) based communication networks (UCNs) are a key component in future mobile networking. To handle the dynamic environments in UCNs, reinforcement learning (RL) has been a promising solution, owing to its strong capability for adaptive decision-making free of environment models. However, most existing RL-based research focuses on control strategy design assuming a fixed set of UAVs. Few works have investigated how UCNs should be adaptively regulated when the serving UAVs change dynamically. This article discusses RL-based strategy design for adaptive UCN regulation given a dynamic UAV set, addressing both reactive strategies in general UCNs and proactive strategies in solar-powered UCNs. An overview of the UCN and the RL framework is first provided. Potential research directions with key challenges and possible solutions are then elaborated. Some of our recent works are presented as case studies to inspire innovative ways to handle a dynamic UAV crew with different RL algorithms.<|reference_end|>
|
arxiv
|
@article{zhang2024learning,
title={Learning with Dynamics: Autonomous Regulation of UAV Based Communication
Networks with Dynamic UAV Crew},
author={Ran Zhang, Bowei Li, Liyuan Zhang, Jiang (Linda) Xie, and Miao Wang},
journal={arXiv preprint arXiv:2409.17139},
year={2024},
archivePrefix={arXiv},
eprint={2409.17139},
primaryClass={eess.SY cs.LG cs.NI cs.SY}
}
|
zhang2024learning
|
arxiv-661936
|
2409.17140
|
Turn Every Application into an Agent: Towards Efficient Human-Agent-Computer Interaction with API-First LLM-Based Agents
|
<|reference_start|>Turn Every Application into an Agent: Towards Efficient Human-Agent-Computer Interaction with API-First LLM-Based Agents: Multimodal large language models (MLLMs) have enabled LLM-based agents to directly interact with application user interfaces (UIs), enhancing agents' performance in complex tasks. However, these agents often suffer from high latency and low reliability due to the extensive sequential UI interactions. To address this issue, we propose AXIS, a novel LLM-based agent framework that prioritizes actions through application programming interfaces (APIs) over UI actions. This framework also facilitates the creation and expansion of APIs through automated exploration of applications. Our experiments on Office Word demonstrate that AXIS reduces task completion time by 65%-70% and cognitive workload by 38%-53%, while maintaining an accuracy of 97%-98% compared to humans. Our work contributes to a new human-agent-computer interaction (HACI) framework and a fresh UI design principle for application providers in the era of LLMs. It also explores the possibility of turning every application into an agent, paving the way towards an agent-centric operating system (Agent OS).<|reference_end|>
|
arxiv
|
@article{lu2024turn,
title={Turn Every Application into an Agent: Towards Efficient
Human-Agent-Computer Interaction with API-First LLM-Based Agents},
author={Junting Lu, Zhiyang Zhang, Fangkai Yang, Jue Zhang, Lu Wang, Chao Du,
Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang},
journal={arXiv preprint arXiv:2409.17140},
year={2024},
archivePrefix={arXiv},
eprint={2409.17140},
primaryClass={cs.AI}
}
|
lu2024turn
|
arxiv-661937
|
2409.17141
|
FineZip : Pushing the Limits of Large Language Models for Practical Lossless Text Compression
|
<|reference_start|>FineZip : Pushing the Limits of Large Language Models for Practical Lossless Text Compression: While the language modeling objective has been shown to be deeply connected with compression, it is surprising that modern LLMs are not employed in practical text compression systems. In this paper, we provide an in-depth analysis of neural network and transformer-based compression techniques to answer this question. We compare traditional text compression systems with neural network and LLM-based text compression methods. Although LLM-based systems significantly outperform conventional compression methods, they are highly impractical. Specifically, LLMZip, a recent text compression system using Llama3-8B, requires 9.5 days to compress just 10 MB of text, although with huge improvements in compression ratios. To overcome this, we present FineZip - a novel LLM-based text compression system that combines ideas of online memorization and dynamic context to reduce the compression time immensely. FineZip can compress the above corpus in approximately 4 hours compared to 9.5 days, a 54 times improvement over LLMZip with comparable performance. FineZip outperforms traditional algorithmic compression methods by a large margin, improving compression ratios by approximately 50\%. With this work, we take the first step towards making lossless text compression with LLMs a reality. While FineZip presents a significant step in that direction, LLMs are still not a viable solution for large-scale text compression. We hope our work paves the way for future research and innovation to solve this problem.<|reference_end|>
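To illustrate the general mechanism behind LLM-based lossless compressors such as LLMZip and FineZip, the sketch below encodes each token as its rank under the model's predicted next-token distribution; predictable text yields mostly small ranks, which a standard entropy coder then stores compactly. GPT-2 stands in for Llama here, and FineZip's online memorization and dynamic context are not reproduced.

```python
# A minimal rank-coding sketch; decoding runs the same model in reverse (not shown).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def text_to_ranks(text):
    ids = tok(text, return_tensors="pt").input_ids[0]
    logits = lm(ids.unsqueeze(0)).logits[0]           # next-token logits per position
    ranks = []
    for pos in range(len(ids) - 1):
        order = logits[pos].argsort(descending=True)  # most-likely tokens first
        ranks.append((order == ids[pos + 1]).nonzero().item())
    return ids[0].item(), ranks                       # first token sent verbatim

first, ranks = text_to_ranks("The quick brown fox jumps over the lazy dog.")
print(ranks)  # many small ints -> compressible with e.g. arithmetic coding
```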
|
arxiv
|
@article{mittu2024finezip,
title={FineZip : Pushing the Limits of Large Language Models for Practical
Lossless Text Compression},
author={Fazal Mittu, Yihuan Bu, Akshat Gupta, Ashok Devireddy, Alp Eren
Ozdarendeli, Anant Singh, Gopala Anumanchipalli},
journal={arXiv preprint arXiv:2409.17141},
year={2024},
archivePrefix={arXiv},
eprint={2409.17141},
primaryClass={cs.CL cs.AI cs.LG}
}
|
mittu2024finezip
|
arxiv-661938
|
2409.17143
|
Attention Prompting on Image for Large Vision-Language Models
|
<|reference_start|>Attention Prompting on Image for Large Vision-Language Models: Compared with Large Language Models (LLMs), Large Vision-Language Models (LVLMs) can also accept images as input, thus showcasing more interesting emergent capabilities and demonstrating impressive performance on various vision-language tasks. Motivated by text prompting in LLMs, visual prompting has been explored to enhance LVLMs' capabilities of perceiving visual information. However, previous visual prompting techniques solely process visual inputs without considering text queries, limiting the models' ability to follow text instructions to complete tasks. To fill this gap, in this work, we propose a new prompting technique named Attention Prompting on Image, which simply overlays a text-query-guided attention heatmap on the original input image and effectively enhances LVLMs on various tasks. Specifically, we generate an attention heatmap for the input image dependent on the text query with an auxiliary model like CLIP. Then the heatmap simply multiplies the pixel values of the original image to obtain the actual input image for the LVLM. Extensive experiments on various vision-language benchmarks verify the effectiveness of our technique. For example, Attention Prompting on Image improves LLaVA-1.5 by 3.8% and 2.9% on MM-Vet and LLaVA-Wild benchmarks, respectively.<|reference_end|>
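The overlay step itself is simple elementwise arithmetic; the sketch below applies a given heatmap to an image exactly as described (multiplying pixel values). The heatmap here is a constant placeholder; the paper derives it from the text query with an auxiliary model like CLIP.

```python
# A minimal sketch of the overlay step, with a placeholder relevance map.
import numpy as np
from PIL import Image

def apply_attention_prompt(image: Image.Image, heatmap: np.ndarray) -> Image.Image:
    arr = np.asarray(image).astype(np.float32)
    h = np.clip(heatmap, 0.0, 1.0)[..., None]   # (H, W, 1), broadcast over RGB
    return Image.fromarray((arr * h).astype(np.uint8))

img = Image.new("RGB", (224, 224), "gray")
fake_heatmap = np.ones((224, 224)) * 0.5        # stand-in for a text-query-guided map
prompted = apply_attention_prompt(img, fake_heatmap)  # fed to the LVLM in place of img
```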
|
arxiv
|
@article{yu2024attention,
title={Attention Prompting on Image for Large Vision-Language Models},
author={Runpeng Yu and Weihao Yu and Xinchao Wang},
journal={arXiv preprint arXiv:2409.17143},
year={2024},
archivePrefix={arXiv},
eprint={2409.17143},
primaryClass={cs.CV cs.AI}
}
|
yu2024attention
|
arxiv-661939
|
2409.17144
|
Differential Privacy Regularization: Protecting Training Data Through Loss Function Regularization
|
<|reference_start|>Differential Privacy Regularization: Protecting Training Data Through Loss Function Regularization: Training machine learning models based on neural networks requires large datasets, which may contain sensitive information. The models, however, should not expose private information from these datasets. Differentially private SGD [DP-SGD] requires the modification of the standard stochastic gradient descent [SGD] algorithm for training new models. In this short paper, a novel regularization strategy is proposed to achieve the same goal in a more efficient manner.<|reference_end|>
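For contrast with the proposed regularization strategy (whose details are not reproduced here), the following is a minimal sketch of the DP-SGD baseline the paper aims to replace: per-example gradient clipping followed by calibrated Gaussian noise.

```python
# A minimal DP-SGD step sketch: clip each example's gradient, then add noise.
import torch

def dp_sgd_step(model, loss_fn, xs, ys, lr=0.1, clip=1.0, noise_mult=1.0):
    grads = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xs, ys):                    # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
        scale = torch.clamp(clip / (norm + 1e-12), max=1.0)  # L2 clipping
        for g, p in zip(grads, model.parameters()):
            g += p.grad * scale
    with torch.no_grad():
        for g, p in zip(grads, model.parameters()):
            g += noise_mult * clip * torch.randn_like(g)     # Gaussian mechanism
            p -= lr * g / len(xs)
```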
|
arxiv
|
@article{aguilera-martínez2024differential,
title={Differential Privacy Regularization: Protecting Training Data Through
Loss Function Regularization},
author={Francisco Aguilera-Martínez, Fernando Berzal},
journal={arXiv preprint arXiv:2409.17144},
year={2024},
archivePrefix={arXiv},
eprint={2409.17144},
primaryClass={cs.LG cs.AI cs.CR cs.NE}
}
|
aguilera-martínez2024differential
|
arxiv-661940
|
2409.17145
|
DreamWaltz-G: Expressive 3D Gaussian Avatars from Skeleton-Guided 2D Diffusion
|
<|reference_start|>DreamWaltz-G: Expressive 3D Gaussian Avatars from Skeleton-Guided 2D Diffusion: Leveraging pretrained 2D diffusion models and score distillation sampling (SDS), recent methods have shown promising results for text-to-3D avatar generation. However, generating high-quality 3D avatars capable of expressive animation remains challenging. In this work, we present DreamWaltz-G, a novel learning framework for animatable 3D avatar generation from text. The core of this framework lies in Skeleton-guided Score Distillation and Hybrid 3D Gaussian Avatar representation. Specifically, the proposed skeleton-guided score distillation integrates skeleton controls from 3D human templates into 2D diffusion models, enhancing the consistency of SDS supervision in terms of view and human pose. This facilitates the generation of high-quality avatars, mitigating issues such as multiple faces, extra limbs, and blurring. The proposed hybrid 3D Gaussian avatar representation builds on the efficient 3D Gaussians, combining neural implicit fields and parameterized 3D meshes to enable real-time rendering, stable SDS optimization, and expressive animation. Extensive experiments demonstrate that DreamWaltz-G is highly effective in generating and animating 3D avatars, outperforming existing methods in both visual quality and animation expressiveness. Our framework further supports diverse applications, including human video reenactment and multi-subject scene composition.<|reference_end|>
|
arxiv
|
@article{huang2024dreamwaltz-g:,
title={DreamWaltz-G: Expressive 3D Gaussian Avatars from Skeleton-Guided 2D
Diffusion},
author={Yukun Huang, Jianan Wang, Ailing Zeng, Zheng-Jun Zha, Lei Zhang, Xihui
Liu},
journal={arXiv preprint arXiv:2409.17145},
year={2024},
archivePrefix={arXiv},
eprint={2409.17145},
primaryClass={cs.CV cs.GR cs.LG}
}
|
huang2024dreamwaltz-g:
|
arxiv-661941
|
2409.17146
|
Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models
|
<|reference_start|>Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models: Today's most advanced multimodal models remain proprietary. The strongest open-weight models rely heavily on synthetic data from proprietary VLMs to achieve good performance, effectively distilling these closed models into open ones. As a result, the community is still missing foundational knowledge about how to build performant VLMs from scratch. We present Molmo, a new family of VLMs that are state-of-the-art in their class of openness. Our key innovation is a novel, highly detailed image caption dataset collected entirely from human annotators using speech-based descriptions. To enable a wide array of user interactions, we also introduce a diverse dataset mixture for fine-tuning that includes in-the-wild Q&A and innovative 2D pointing data. The success of our approach relies on careful choices for the model architecture details, a well-tuned training pipeline, and, most critically, the quality of our newly collected datasets, all of which will be released. The best-in-class 72B model within the Molmo family not only outperforms others in the class of open weight and data models but also compares favorably against proprietary systems like GPT-4o, Claude 3.5, and Gemini 1.5 on both academic benchmarks and human evaluation. We will be releasing all of our model weights, captioning and fine-tuning data, and source code in the near future. Select model weights, inference code, and demo are available at https://molmo.allenai.org.<|reference_end|>
|
arxiv
|
@article{deitke2024molmo,
title={Molmo and PixMo: Open Weights and Open Data for State-of-the-Art
Multimodal Models},
author={Matt Deitke, Christopher Clark, Sangho Lee, Rohun Tripathi, Yue Yang,
Jae Sung Park, Mohammadreza Salehi, Niklas Muennighoff, Kyle Lo, Luca
Soldaini, Jiasen Lu, Taira Anderson, Erin Bransom, Kiana Ehsani, Huong Ngo,
YenSung Chen, Ajay Patel, Mark Yatskar, Chris Callison-Burch, Andrew Head,
Rose Hendrix, Favyen Bastani, Eli VanderBilt, Nathan Lambert, Yvonne Chou,
Arnavi Chheda, Jenna Sparks, Sam Skjonsberg, Michael Schmitz, Aaron Sarnat,
Byron Bischoff, Pete Walsh, Chris Newell, Piper Wolters, Tanmay Gupta,
Kuo-Hao Zeng, Jon Borchardt, Dirk Groeneveld, Jen Dumas, Crystal Nam, Sophie
Lebrecht, Caitlin Wittlif, Carissa Schoenick, Oscar Michel, Ranjay Krishna,
Luca Weihs, Noah A. Smith, Hannaneh Hajishirzi, Ross Girshick, Ali Farhadi,
Aniruddha Kembhavi},
journal={arXiv preprint arXiv:2409.17146},
year={2024},
archivePrefix={arXiv},
eprint={2409.17146},
primaryClass={cs.CV cs.CL cs.LG}
}
|
deitke2024molmo
|
arxiv-661942
|
2409.17155
|
Exploring the Boundaries of Content Moderation in Text-to-Image Generation
|
<|reference_start|>Exploring the Boundaries of Content Moderation in Text-to-Image Generation: This paper analyzes the community safety guidelines of five text-to-image (T2I) generation platforms and audits five T2I models, focusing on prompts related to the representation of humans in areas that might lead to societal stigma. While current research primarily focuses on ensuring safety by restricting the generation of harmful content, our study offers a complementary perspective. We argue that the concept of safety is difficult to define and operationalize, reflected in a discrepancy between the officially published safety guidelines and the actual behavior of the T2I models, and leading at times to over-censorship. Our findings call for more transparency and an inclusive dialogue about the platforms' content moderation practices, bearing in mind their global cultural and social impact.<|reference_end|>
|
arxiv
|
@article{riccio2024exploring,
title={Exploring the Boundaries of Content Moderation in Text-to-Image
Generation},
author={Piera Riccio, Georgina Curto, Nuria Oliver},
journal={arXiv preprint arXiv:2409.17155},
year={2024},
archivePrefix={arXiv},
eprint={2409.17155},
primaryClass={cs.CY}
}
|
riccio2024exploring
|
arxiv-661943
|
2409.17156
|
An Art-centric perspective on AI-based content moderation of nudity
|
<|reference_start|>An Art-centric perspective on AI-based content moderation of nudity: At a time when the influence of generative Artificial Intelligence on visual arts is a highly debated topic, we raise the attention towards a more subtle phenomenon: the algorithmic censorship of artistic nudity online. We analyze the performance of three "Not-Safe-For-Work'' image classifiers on artistic nudity, and empirically uncover the existence of a gender and a stylistic bias, as well as evident technical limitations, especially when only considering visual information. Hence, we propose a multi-modal zero-shot classification approach that improves artistic nudity classification. From our research, we draw several implications that we hope will inform future research on this topic.<|reference_end|>
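A minimal sketch of the multi-modal zero-shot direction is shown below, assuming CLIP as the backbone; the label prompts and checkpoint are illustrative, not the paper's exact setup.

```python
# A minimal CLIP zero-shot classification sketch with illustrative label prompts.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a classical painting with artistic nudity",
          "an explicit photograph",
          "a safe-for-work image"]
image = Image.new("RGB", (224, 224))  # placeholder for an artwork image

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```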
|
arxiv
|
@article{riccio2024an,
title={An Art-centric perspective on AI-based content moderation of nudity},
author={Piera Riccio, Georgina Curto, Thomas Hofmann, Nuria Oliver},
journal={arXiv preprint arXiv:2409.17156},
year={2024},
archivePrefix={arXiv},
eprint={2409.17156},
primaryClass={cs.CV cs.SI}
}
|
riccio2024an
|
arxiv-661944
|
2409.17157
|
Confident Teacher, Confident Student? A Novel User Study Design for Investigating the Didactic Potential of Explanations and their Impact on Uncertainty
|
<|reference_start|>Confident Teacher, Confident Student? A Novel User Study Design for Investigating the Didactic Potential of Explanations and their Impact on Uncertainty: Evaluating the quality of explanations in Explainable Artificial Intelligence (XAI) is to this day a challenging problem, with ongoing debate in the research community. While some advocate for establishing standardized offline metrics, others emphasize the importance of human-in-the-loop (HIL) evaluation. Here we propose an experimental design to evaluate the potential of XAI in human-AI collaborative settings as well as the potential of XAI for didactics. In a user study with 1200 participants we investigate the impact of explanations on human performance on a challenging visual task - annotation of biological species in complex taxonomies. Our results demonstrate the potential of XAI in complex visual annotation tasks: users become more accurate in their annotations and demonstrate less uncertainty with AI assistance. The increase in accuracy was, however, not significantly different when users were shown the mere prediction of the model compared to when also providing an explanation. We also find negative effects of explanations: users tend to replicate the model's predictions more often when shown explanations, even when those predictions are wrong. When evaluating the didactic effects of explanations in collaborative human-AI settings, we find that users' annotations are not significantly better after performing annotation with AI assistance. This suggests that explanations in visual human-AI collaboration do not appear to induce lasting learning effects. All code and experimental data can be found in our GitHub repository: https://github.com/TeodorChiaburu/beexplainable.<|reference_end|>
|
arxiv
|
@article{chiaburu2024confident,
title={Confident Teacher, Confident Student? A Novel User Study Design for
Investigating the Didactic Potential of Explanations and their Impact on
Uncertainty},
author={Teodor Chiaburu, Frank Haußer, Felix Bießmann},
journal={arXiv preprint arXiv:2409.17157},
year={2024},
archivePrefix={arXiv},
eprint={2409.17157},
primaryClass={cs.HC cs.AI cs.CY}
}
|
chiaburu2024confident
|
arxiv-661945
|
2409.17158
|
Cross Dataset Analysis and Network Architecture Repair for Autonomous Car Lane Detection
|
<|reference_start|>Cross Dataset Analysis and Network Architecture Repair for Autonomous Car Lane Detection: Transfer learning has become one of the standard methods to overcome the isolated learning paradigm by utilizing knowledge acquired for one task to solve another related one. However, research needs to be done to identify the initial steps before applying transfer learning to applications for further verification and explainability. In this research, we have performed cross-dataset analysis and network architecture repair for the lane detection application in autonomous vehicles. Lane detection is an important aspect of an autonomous vehicle's driving assistance system. In most circumstances, modern deep-learning-based lane recognition systems are successful, but they struggle with lanes with complex topologies. The proposed architecture, ERFCondLaneNet, is an enhancement of the CondLaneNet lane identification framework that addresses the difficulty of detecting lane lines with complex topologies like dense, curved, and fork lines. The newly proposed technique was tested on two common lane detection benchmarks, CULane and CurveLanes, and two different backbones, ResNet and ERFNet. ERFCondLaneNet exhibited performance similar to that of ResnetCondLaneNet while using 33% fewer features, resulting in a 46% reduction in model size.<|reference_end|>
|
arxiv
|
@article{ganeriwala2024cross,
title={Cross Dataset Analysis and Network Architecture Repair for Autonomous
Car Lane Detection},
author={Parth Ganeriwala, Siddhartha Bhattacharyya, Raja Muthalagu},
journal={arXiv preprint arXiv:2409.17158},
year={2024},
doi={10.1109/IV55152.2023.10186721},
archivePrefix={arXiv},
eprint={2409.17158},
primaryClass={cs.CV cs.AI cs.LG}
}
|
ganeriwala2024cross
|
arxiv-661946
|
2409.17160
|
BERTScoreVisualizer: A Web Tool for Understanding Simplified Text Evaluation with BERTScore
|
<|reference_start|>BERTScoreVisualizer: A Web Tool for Understanding Simplified Text Evaluation with BERTScore: The BERTScore metric is commonly used to evaluate automatic text simplification systems. However, current implementations of the metric fail to provide complete visibility into all information the metric can produce. Notably, the specific token matchings can be incredibly useful in generating clause-level insight into the quality of simplified text. We address this by introducing BERTScoreVisualizer, a web application that goes beyond reporting precision, recall, and F1 score and provides a visualization of the matching between tokens. We believe that our software can help improve the analysis of text simplification systems by specifically showing where generated, simplified text deviates from reference text. We host our code and demo on GitHub.<|reference_end|>
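The token matching that BERTScoreVisualizer surfaces can be reproduced in a few lines: cosine similarities between contextual embeddings with greedy best-match pooling. The sketch below uses the final hidden layer of roberta-base and skips BERTScore's layer selection, IDF weighting, and special-token handling, so treat it as an approximation.

```python
# A minimal greedy-matching sketch of BERTScore precision/recall/F1.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")   # a common BERTScore backbone
enc = AutoModel.from_pretrained("roberta-base").eval()

@torch.no_grad()
def embed(text):
    out = enc(**tok(text, return_tensors="pt")).last_hidden_state[0]
    return torch.nn.functional.normalize(out, dim=-1)

cand, ref = embed("The cat sat."), embed("A cat was sitting.")
sim = cand @ ref.T                        # pairwise cosine similarities
precision = sim.max(dim=1).values.mean()  # each candidate token -> best ref token
recall = sim.max(dim=0).values.mean()     # each reference token -> best cand token
f1 = 2 * precision * recall / (precision + recall)
print(float(precision), float(recall), float(f1))
print(sim.argmax(dim=1))                  # the per-token matches a visualizer would draw
```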
|
arxiv
|
@article{jaskowski2024bertscorevisualizer:,
title={BERTScoreVisualizer: A Web Tool for Understanding Simplified Text
Evaluation with BERTScore},
author={Sebastian Jaskowski, Sahasra Chava, Agam Shah},
journal={arXiv preprint arXiv:2409.17160},
year={2024},
archivePrefix={arXiv},
eprint={2409.17160},
primaryClass={cs.CL}
}
|
jaskowski2024bertscorevisualizer:
|
arxiv-661947
|
2409.17161
|
Optimizing Control Strategies for Wheeled Mobile Robots Using Fuzzy Type I and II Controllers and Parallel Distributed Compensation
|
<|reference_start|>Optimizing Control Strategies for Wheeled Mobile Robots Using Fuzzy Type I and II Controllers and Parallel Distributed Compensation: Adjusting the control actions of a wheeled robot to eliminate oscillations and ensure smoother motion is critical in applications requiring accurate and soft movements. Fuzzy controllers enable a robot to operate smoothly while accounting for uncertainties in the system. This work uses fuzzy theories and parallel distributed compensation to establish a robust controller for wheeled mobile robots. The use of fuzzy logic type I and type II controllers is covered in the study, and their performance is compared with that of a PID controller. Experimental results demonstrate that fuzzy logic type II outperforms type I and the classic controller. Further, we deploy parallel distributed compensation, the sector-of-nonlinearity approach, and the local approximation strategy in our design. These strategies help analyze the stability of each rule of the fuzzy controller separately and map the if-then rules of the fuzzy box into parallel distributed compensation using Linear Matrix Inequalities (LMI) analysis. Also, they help manage the uncertainty flow in the equations of the robot's kinematic model. Lastly, we propose Bezier curves to represent the different pathways for the wheeled mobile robot.<|reference_end|>
|
arxiv
|
@article{paykari2024optimizing,
title={Optimizing Control Strategies for Wheeled Mobile Robots Using Fuzzy Type
I and II Controllers and Parallel Distributed Compensation},
author={Nasim Paykari, Razieh Jokar, Ali Alfatemi, Damian Lyons, Mohamed
Rahouti},
journal={arXiv preprint arXiv:2409.17161},
year={2024},
archivePrefix={arXiv},
eprint={2409.17161},
primaryClass={cs.RO}
}
|
paykari2024optimizing
|
arxiv-661948
|
2409.17162
|
Autonomous Vehicle Decision-Making Framework for Considering Malicious Behavior at Unsignalized Intersections
|
<|reference_start|>Autonomous Vehicle Decision-Making Framework for Considering Malicious Behavior at Unsignalized Intersections: In this paper, we propose a Q-learning-based decision-making framework to improve the safety and efficiency of Autonomous Vehicles when they encounter other maliciously behaving vehicles while passing through unsignalized intersections. In Autonomous Vehicles, conventional reward signals are defined in terms of feedback factors such as safety and efficiency. In this paper, safety gains are modulated by variable weighting parameters to ensure that safety can be emphasized more in emergency situations. The framework proposed in this paper introduces first-order theory of mind inferences on top of conventional rewards, using first-order beliefs as additional reward signals. The decision framework enables Autonomous Vehicles to make informed decisions when encountering vehicles with potentially malicious behaviors at unsignalized intersections, thereby improving the overall safety and efficiency of Autonomous Vehicle transportation systems. To verify the performance of the decision framework, this paper uses Prescan/Simulink co-simulation, and the results show that the framework meets the set requirements.<|reference_end|>
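A minimal tabular sketch of the safety-weighted reward shaping is given below; the states, actions, and weights are toy assumptions, and the paper's first-order belief term is omitted.

```python
# A toy Q-learning sketch with an emergency-modulated safety weight.
import numpy as np

n_states, n_actions = 16, 3               # illustrative discretization
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95

def reward(safety, efficiency, emergency):
    w_safety = 5.0 if emergency else 1.0  # variable weighting parameter
    return w_safety * safety + efficiency

def q_update(s, a, s_next, safety, efficiency, emergency):
    r = reward(safety, efficiency, emergency)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

q_update(s=0, a=1, s_next=4, safety=-1.0, efficiency=0.2, emergency=True)
```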
|
arxiv
|
@article{li2024autonomous,
title={Autonomous Vehicle Decision-Making Framework for Considering Malicious
Behavior at Unsignalized Intersections},
author={Qing Li, Jinxing Hua, Qiuxia Sun},
journal={arXiv preprint arXiv:2409.17162},
year={2024},
archivePrefix={arXiv},
eprint={2409.17162},
primaryClass={cs.RO cs.LG}
}
|
li2024autonomous
|
arxiv-661949
|
2409.17163
|
Towards Using Active Learning Methods for Human-Seat Interactions To Generate Realistic Occupant Motion
|
<|reference_start|>Towards Using Active Learning Methods for Human-Seat Interactions To Generate Realistic Occupant Motion: In the context of developing new vehicle concepts, especially autonomous vehicles with novel seating arrangements and occupant activities, predicting occupant motion can be a tool for ensuring safety and comfort. In this study, a data-driven surrogate contact model integrated into an optimal control framework to predict human occupant behavior during driving maneuvers is presented. High-fidelity finite element simulations are utilized to generate a dataset of interaction forces and moments for various human body configurations and velocities. To automate the generation of training data, an active learning approach is introduced, which iteratively queries the high-fidelity finite element simulation for an additional dataset. The feasibility and effectiveness of the proposed method are demonstrated through a case study of a head interaction with an automotive headrest, showing promising results in accurately replicating contact forces and moments while reducing manual effort.<|reference_end|>
|
arxiv
|
@article{fahse2024towards,
title={Towards Using Active Learning Methods for Human-Seat Interactions To
Generate Realistic Occupant Motion},
author={Niklas Fahse, Monika Harant, Marius Obentheuer, Joachim Linn, Jörg
Fehr},
journal={Proc. Appl. Math. Mech. (2024) e202400142},
year={2024},
doi={10.1002/pamm.202400142},
archivePrefix={arXiv},
eprint={2409.17163},
primaryClass={cs.RO math.DS}
}
|
fahse2024towards
|
arxiv-661950
|
2409.17165
|
Mamba for Scalable and Efficient Personalized Recommendations
|
<|reference_start|>Mamba for Scalable and Efficient Personalized Recommendations: In this effort, we propose using Mamba for handling tabular data in personalized recommendation systems. We present \textit{FT-Mamba} (Feature Tokenizer\,$+$\,Mamba), a novel hybrid model that replaces Transformer layers with Mamba layers within the FT-Transformer architecture. The \textit{Mamba model} offers an efficient alternative to Transformers, reducing computational complexity from quadratic to linear by enhancing the capabilities of State Space Models (SSMs). FT-Mamba is designed to improve the scalability and efficiency of recommendation systems while maintaining performance. We evaluate FT-Mamba in comparison to a traditional Transformer-based model within a Two-Tower architecture on three datasets: Spotify music recommendation, H\&M fashion recommendation, and vaccine messaging recommendation. Each model is trained on 160,000 user-action pairs, and performance is measured using precision (P), recall (R), Mean Reciprocal Rank (MRR), and Hit Ratio (HR) at several truncation values. Our results demonstrate that FT-Mamba outperforms the Transformer-based model in terms of computational efficiency while maintaining or exceeding performance across key recommendation metrics. By leveraging Mamba layers, FT-Mamba provides a scalable and effective solution for large-scale personalized recommendation systems, showcasing the potential of the Mamba architecture to enhance both efficiency and accuracy.<|reference_end|>
|
arxiv
|
@article{starnes2024mamba,
title={Mamba for Scalable and Efficient Personalized Recommendations},
author={Andrew Starnes, Clayton Webster},
journal={arXiv preprint arXiv:2409.17165},
year={2024},
archivePrefix={arXiv},
eprint={2409.17165},
primaryClass={cs.IR cs.LG}
}
|
starnes2024mamba
|
arxiv-661951
|
2409.17166
|
ScriptSmith: A Unified LLM Framework for Enhancing IT Operations via Automated Bash Script Generation, Assessment, and Refinement
|
<|reference_start|>ScriptSmith: A Unified LLM Framework for Enhancing IT Operations via Automated Bash Script Generation, Assessment, and Refinement: In the rapidly evolving landscape of site reliability engineering (SRE), the demand for efficient and effective solutions to manage and resolve issues in site and cloud applications is paramount. This paper presents an innovative approach to action automation using large language models (LLMs) for script generation, assessment, and refinement. By leveraging the capabilities of LLMs, we aim to significantly reduce the human effort involved in writing and debugging scripts, thereby enhancing the productivity of SRE teams. Our experiments focus on Bash scripts, a commonly used tool in SRE, and involve the CodeSift dataset of 100 tasks and the InterCode dataset of 153 tasks. The results show that LLMs can automatically assess and refine scripts efficiently, reducing the need for script validation in an execution environment. Results demonstrate that the framework shows an overall improvement of 7-10% in script generation.<|reference_end|>
|
arxiv
|
@article{chatterjee2024scriptsmith:,
title={ScriptSmith: A Unified LLM Framework for Enhancing IT Operations via
Automated Bash Script Generation, Assessment, and Refinement},
author={Oishik Chatterjee, Pooja Aggarwal, Suranjana Samanta, Ting Dai,
Prateeti Mohapatra, Debanjana Kar, Ruchi Mahindru, Steve Barbieri, Eugen
Postea, Brad Blancett, Arthur De Magalhaes},
journal={arXiv preprint arXiv:2409.17166},
year={2024},
archivePrefix={arXiv},
eprint={2409.17166},
primaryClass={cs.SE cs.AI}
}
|
chatterjee2024scriptsmith:
|
arxiv-661952
|
2409.17167
|
StressPrompt: Does Stress Impact Large Language Models and Human Performance Similarly?
|
<|reference_start|>StressPrompt: Does Stress Impact Large Language Models and Human Performance Similarly?: Human beings often experience stress, which can significantly influence their performance. This study explores whether Large Language Models (LLMs) exhibit stress responses similar to those of humans and whether their performance fluctuates under different stress-inducing prompts. To investigate this, we developed a novel set of prompts, termed StressPrompt, designed to induce varying levels of stress. These prompts were derived from established psychological frameworks and carefully calibrated based on ratings from human participants. We then applied these prompts to several LLMs to assess their responses across a range of tasks, including instruction-following, complex reasoning, and emotional intelligence. The findings suggest that LLMs, like humans, perform optimally under moderate stress, consistent with the Yerkes-Dodson law. Notably, their performance declines under both low and high-stress conditions. Our analysis further revealed that these StressPrompts significantly alter the internal states of LLMs, leading to changes in their neural representations that mirror human responses to stress. This research provides critical insights into the operational robustness and flexibility of LLMs, demonstrating the importance of designing AI systems capable of maintaining high performance in real-world scenarios where stress is prevalent, such as in customer service, healthcare, and emergency response contexts. Moreover, this study contributes to the broader AI research community by offering a new perspective on how LLMs handle different scenarios and their similarities to human cognition.<|reference_end|>
|
arxiv
|
@article{shen2024stressprompt:,
title={StressPrompt: Does Stress Impact Large Language Models and Human
Performance Similarly?},
author={Guobin Shen, Dongcheng Zhao, Aorigele Bao, Xiang He, Yiting Dong, Yi
Zeng},
journal={arXiv preprint arXiv:2409.17167},
year={2024},
archivePrefix={arXiv},
eprint={2409.17167},
primaryClass={cs.HC cs.AI cs.CL}
}
|
shen2024stressprompt:
|
arxiv-661953
|
2409.17169
|
REAL: Response Embedding-based Alignment for LLMs
|
<|reference_start|>REAL: Response Embedding-based Alignment for LLMs: Aligning large language models (LLMs) to human preferences is a crucial step in building helpful and safe AI tools, which usually involve training on supervised datasets. Popular algorithms such as Direct Preference Optimization rely on pairs of AI-generated responses ranked according to human feedback. The labeling process is the most labor-intensive and costly part of the alignment pipeline, and improving its efficiency would have a meaningful impact on AI development. We propose a strategy for sampling a high-quality training dataset that focuses on acquiring the most informative response pairs for labeling out of a set of AI-generated responses. Experimental results on synthetic HH-RLHF benchmarks indicate that choosing dissimilar response pairs enhances the direct alignment of LLMs while reducing inherited labeling errors. We also applied our method to the real-world dataset SHP2, selecting optimal pairs from multiple responses. The model aligned on dissimilar response pairs obtained the best win rate on the dialogue task. Our findings suggest that focusing on less similar pairs can improve the efficiency of LLM alignment, saving up to 65% of annotators' work.<|reference_end|>
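The sampling strategy reduces to a small amount of linear algebra: embed each prompt's candidate responses and send the least similar pair to annotators. In the sketch below the embeddings are random stand-ins for a real sentence encoder.

```python
# A minimal dissimilar-pair selection sketch over response embeddings.
import numpy as np

def most_dissimilar_pair(embeddings: np.ndarray):
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = e @ e.T                          # pairwise cosine similarities
    np.fill_diagonal(sim, np.inf)          # ignore self-similarity
    i, j = np.unravel_index(np.argmin(sim), sim.shape)
    return i, j

responses = np.random.default_rng(0).standard_normal((8, 384))  # 8 candidates
print(most_dissimilar_pair(responses))     # indices of the pair to label
```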
|
arxiv
|
@article{zhang2024real:,
title={REAL: Response Embedding-based Alignment for LLMs},
author={Honggen Zhang, Xufeng Zhao, Igor Molybog, June Zhang},
journal={arXiv preprint arXiv:2409.17169},
year={2024},
archivePrefix={arXiv},
eprint={2409.17169},
primaryClass={cs.CL cs.AI}
}
|
zhang2024real:
|
arxiv-661954
|
2409.17171
|
Cross-Domain Content Generation with Domain-Specific Small Language Models
|
<|reference_start|>Cross-Domain Content Generation with Domain-Specific Small Language Models: Generating domain-specific content using small language models poses challenges, especially when dealing with multiple distinct datasets with minimal overlap. In this study, we explore methods to enable a small language model to produce coherent and relevant outputs for two different domains: stories (Dataset A) and recipes (Dataset B). Our initial experiments show that training individual models on each dataset yields satisfactory results, with each model generating appropriate content within its domain. We find that utilizing custom tokenizers tailored to each dataset significantly enhances generation quality compared to using a generic tokenizer. Attempts to adapt a single model to both domains using Low-Rank Adaptation (LoRA) or standard fine-tuning do not yield substantial results, often failing to produce meaningful outputs. Moreover, full fine-tuning without freezing the model's existing weights leads to catastrophic forgetting, where the model loses previously learned information and only retains knowledge from the new data. To overcome these challenges, we employ a knowledge expansion strategy: training only with additional parameters. This approach enables the model to generate both stories and recipes upon request, effectively handling multiple domains without suffering from catastrophic forgetting. Our findings demonstrate that knowledge expansion with frozen layers is an effective method for small language models to generate domain-specific content across distinct datasets. This work contributes to the development of efficient multi-domain language models and provides insights into managing catastrophic forgetting in small-scale architectures.<|reference_end|>
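A minimal sketch of the knowledge-expansion idea follows: freeze every pre-trained weight and train only newly added parameters on the new domain, so the old domain cannot be overwritten. The adapter placement and sizes are assumptions, not the paper's exact recipe.

```python
# A sketch of training only additional parameters on top of a frozen base.
import torch
import torch.nn as nn

class ExpandedBlock(nn.Module):
    def __init__(self, base: nn.Module, hidden: int, bottleneck: int = 64):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False        # frozen: original knowledge preserved
        self.expansion = nn.Sequential(    # new, trainable parameters only
            nn.Linear(hidden, bottleneck), nn.GELU(), nn.Linear(bottleneck, hidden))

    def forward(self, x):
        h = self.base(x)
        return h + self.expansion(h)       # residual, so the base path stays intact

block = ExpandedBlock(nn.Linear(128, 128), hidden=128)
optimizer = torch.optim.AdamW((p for p in block.parameters() if p.requires_grad), lr=1e-4)
```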
|
arxiv
|
@article{maloo2024cross-domain,
title={Cross-Domain Content Generation with Domain-Specific Small Language
Models},
author={Ankit Maloo, Abhinav Garg},
journal={arXiv preprint arXiv:2409.17171},
year={2024},
archivePrefix={arXiv},
eprint={2409.17171},
primaryClass={cs.CL cs.AI}
}
|
maloo2024cross-domain
|
arxiv-661955
|
2409.17172
|
What Would You Ask When You First Saw $a^2+b^2=c^2$? Evaluating LLM on Curiosity-Driven Questioning
|
<|reference_start|>What Would You Ask When You First Saw $a^2+b^2=c^2$? Evaluating LLM on Curiosity-Driven Questioning: Large language models (LLMs) can store a massive amount of knowledge, yet their potential to acquire new knowledge remains unknown. We propose a novel evaluation framework that evaluates this capability. This framework prompts LLMs to generate questions about a statement introducing scientific knowledge, simulating a curious person when facing the statement for the first time. We score the qualities of the generated questions, thereby evaluating the knowledge acquisition potential of the LLM. We apply controlled ablation studies to validate our scoring procedures. Additionally, we created a synthetic dataset consisting of 1101 statements in physics, chemistry, and maths with distinct levels of difficulty, 300 general knowledge statements, and 567 incorrect statements. Human evaluations were conducted to validate our model assessments, achieving an approximate weighted Cohen's kappa of 0.7 on all three metrics considered. We find that while large models like GPT-4 and Mistral 8x7b are adept at generating coherent and relevant questions, the smaller Phi-2 model is equally or more effective. This indicates that size does not solely determine a model's knowledge acquisition potential. The proposed framework quantifies a critical model capability that was commonly overlooked and opens up research opportunities for developing more knowledgeable AI systems.<|reference_end|>
|
arxiv
|
@article{javaji2024what,
title={What Would You Ask When You First Saw $a^2+b^2=c^2$? Evaluating LLM on
Curiosity-Driven Questioning},
author={Shashidhar Reddy Javaji, Zining Zhu},
journal={arXiv preprint arXiv:2409.17172},
year={2024},
archivePrefix={arXiv},
eprint={2409.17172},
primaryClass={cs.CL cs.AI cs.LG}
}
|
javaji2024what
|
arxiv-661956
|
2409.17173
|
A Multiple-Fill-in-the-Blank Exam Approach for Enhancing Zero-Resource Hallucination Detection in Large Language Models
|
<|reference_start|>A Multiple-Fill-in-the-Blank Exam Approach for Enhancing Zero-Resource Hallucination Detection in Large Language Models: Large language models (LLMs) often fabricate hallucinatory text. Several methods have been developed to detect such text by semantically comparing it with multiple probabilistically regenerated versions. However, a significant issue is that if the storyline of each regenerated text changes, the generated texts become incomparable, which worsens detection accuracy. In this paper, we propose a hallucination detection method that incorporates a multiple-fill-in-the-blank exam approach to address this storyline-changing issue. First, our method creates a multiple-fill-in-the-blank exam by masking multiple objects from the original text. Second, it prompts an LLM to repeatedly answer this exam. This approach ensures that the storylines of the exam answers align with the original ones. Finally, it quantifies the degree of hallucination for each original sentence by scoring the exam answers, considering the potential for \emph{hallucination snowballing} within the original text itself. Experimental results show that our method alone not only outperforms existing methods, but also achieves clearer state-of-the-art performance in ensembles with existing methods.<|reference_end|>
|
arxiv
|
@article{munakata2024a,
title={A Multiple-Fill-in-the-Blank Exam Approach for Enhancing Zero-Resource
Hallucination Detection in Large Language Models},
author={Satoshi Munakata, Taku Fukui and Takao Mohri},
journal={arXiv preprint arXiv:2409.17173},
year={2024},
archivePrefix={arXiv},
eprint={2409.17173},
primaryClass={cs.CL cs.AI}
}
|
munakata2024a
|
arxiv-661957
|
2409.17174
|
CSCE: Boosting LLM Reasoning by Simultaneous Enhancing of Causal Significance and Consistency
|
<|reference_start|>CSCE: Boosting LLM Reasoning by Simultaneous Enhancing of Casual Significance and Consistency: Chain-based reasoning methods like chain of thought (CoT) play a rising role in solving reasoning tasks for large language models (LLMs). However, the causal illusions between \textit{a step of reasoning} and \textit{corresponding state transitions} are becoming a significant obstacle to advancing LLMs' reasoning capabilities, especially in long-range reasoning tasks. This paper proposes a non-chain-based reasoning framework for simultaneous consideration of causal significance and consistency, i.e., the Causal Significance and Consistency Enhancer (CSCE). We customize LLM's loss function utilizing treatment effect assessments to enhance its reasoning ability from two aspects: causal significance and consistency. This ensures that the model captures essential causal relationships and maintains robust and consistent performance across various scenarios. Additionally, we transform the reasoning process from the cascading multiple one-step reasoning commonly used in Chain-Based methods, like CoT, to a causal-enhanced method that outputs the entire reasoning process in one go, further improving the model's reasoning efficiency. Extensive experiments show that our method improves both the reasoning success rate and speed. These improvements further demonstrate that non-chain-based methods can also aid LLMs in completing reasoning tasks.<|reference_end|>
|
arxiv
|
@article{wang2024csce:,
title={CSCE: Boosting LLM Reasoning by Simultaneous Enhancing of Causal
Significance and Consistency},
author={Kangsheng Wang, Xiao Zhang, Zizheng Guo, Tianyu Hu, Huimin Ma},
journal={arXiv preprint arXiv:2409.17174},
year={2024},
archivePrefix={arXiv},
eprint={2409.17174},
primaryClass={cs.CL cs.AI}
}
|
wang2024csce:
|
arxiv-661958
|
2409.17176
|
XDC Gasless Subnet: Gasless Subnet Staking dApp for XDC Network
|
<|reference_start|>XDC Gasless Subnet: Gasless Subnet Staking dApp for XDC Network: With a delegated proof-of-stake (XDPoS) consensus mechanism, the XDC Network is an enterprise-focused blockchain platform that combines the strength of public and private blockchains to provide quick transaction times, low energy consumption, and economical gas fees. XDC is designed for interoperability, supports decentralized apps (dApps), and integrates smoothly with financial systems. It is perfect for trade financing and tokenisation of physical assets because of its emphasis on security and scalability. However, there are a few critical issues that hamper wider acceptance and usability for certain high-frequency applications. This whitepaper introduces a novel and enthralling dApp for establishing a gasless subnet in which mainnet XDC can be staked to spin off a subnet that functions similarly to a non-crypto network, accepting currency fees on the XDC network. This would allow users to stake their tokens without incurring gas fees, making the staking process more efficient and cost-effective while simultaneously enhancing scalability. Performance evaluation of the dApp shows promising results in terms of throughput, latency, scalability, security, and cost efficiency. The use cases and applications of this approach, along with challenges and ensuing solutions, are included.<|reference_end|>
|
arxiv
|
@article{chakraborty2024xdc,
title={XDC Gasless Subnet: Gasless Subnet Staking dApp for XDC Network},
author={Mohuya Chakraborty, Atul Khekade},
journal={arXiv preprint arXiv:2409.17176},
year={2024},
number={Sep 2024},
archivePrefix={arXiv},
eprint={2409.17176},
primaryClass={cs.CR}
}
|
chakraborty2024xdc
|
arxiv-661959
|
2409.17178
|
MODEL&CO: Exoplanet detection in angular differential imaging by learning across multiple observations
|
<|reference_start|>MODEL&CO: Exoplanet detection in angular differential imaging by learning across multiple observations: Direct imaging of exoplanets is particularly challenging due to the high contrast between the planet and the star luminosities, and their small angular separation. In addition to tailored instrumental facilities implementing adaptive optics and coronagraphy, post-processing methods combining several images recorded in pupil tracking mode are needed to attenuate the nuisances corrupting the signals of interest. Most of these post-processing methods build a model of the nuisances from the target observations themselves, resulting in strongly limited detection sensitivity at short angular separations due to the lack of angular diversity. To address this issue, we propose to build the nuisance model from an archive of multiple observations by leveraging supervised deep learning techniques. The proposed approach casts the detection problem as a reconstruction task and captures the structure of the nuisance from two complementary representations of the data. Unlike methods inspired by reference differential imaging, the proposed model is highly non-linear and does not resort to explicit image-to-image similarity measurements and subtractions. The proposed approach also encompasses statistical modeling of learnable spatial features. The latter is beneficial to improve both the detection sensitivity and the robustness against heterogeneous data. We apply the proposed algorithm to several datasets from the VLT/SPHERE instrument, and demonstrate a superior precision-recall trade-off compared to the PACO algorithm. Interestingly, the gain is especially important when the diversity induced by ADI is the most limited, thus supporting the ability of the proposed approach to learn information across multiple observations.<|reference_end|>
|
arxiv
|
@article{bodrito2024model&co:,
title={MODEL&CO: Exoplanet detection in angular differential imaging by
learning across multiple observations},
author={Théo Bodrito, Olivier Flasseur, Julien Mairal, Jean Ponce, Maud
Langlois, Anne-Marie Lagrange},
journal={arXiv preprint arXiv:2409.17178},
year={2024},
archivePrefix={arXiv},
eprint={2409.17178},
primaryClass={astro-ph.IM astro-ph.EP cs.CV physics.data-an}
}
|
bodrito2024model&co:
|
arxiv-661960
|
2409.17179
|
Fully automatic extraction of morphological traits from the Web: utopia or reality?
|
<|reference_start|>Fully automatic extraction of morphological traits from the Web: utopia or reality?: Plant morphological traits, their observable characteristics, are fundamental to understand the role played by each species within their ecosystem. However, compiling trait information for even a moderate number of species is a demanding task that may take experts years to accomplish. At the same time, massive amounts of information about species descriptions is available online in the form of text, although the lack of structure makes this source of data impossible to use at scale. To overcome this, we propose to leverage recent advances in large language models (LLMs) and devise a mechanism for gathering and processing information on plant traits in the form of unstructured textual descriptions, without manual curation. We evaluate our approach by automatically replicating three manually created species-trait matrices. Our method managed to find values for over half of all species-trait pairs, with an F1-score of over 75%. Our results suggest that large-scale creation of structured trait databases from unstructured online text is currently feasible thanks to the information extraction capabilities of LLMs, being limited by the availability of textual descriptions covering all the traits of interest.<|reference_end|>
|
arxiv
|
@article{marcos2024fully,
title={Fully automatic extraction of morphological traits from the Web: utopia
or reality?},
author={Diego Marcos, Robert van de Vlasakker, Ioannis N. Athanasiadis, Pierre
Bonnet, Hervé Goeau, Alexis Joly, W. Daniel Kissling, César Leblanc,
André S.J. van Proosdij, Konstantinos P. Panousis},
journal={arXiv preprint arXiv:2409.17179},
year={2024},
archivePrefix={arXiv},
eprint={2409.17179},
primaryClass={cs.CL cs.AI cs.LG}
}
|
marcos2024fully
|
arxiv-661961
|
2409.17181
|
A Mobile Payment Scheme Using Biometric Identification with Mutual Authentication
|
<|reference_start|>A Mobile Payment Scheme Using Biometric Identification with Mutual Authentication: Cashless payment systems offer many benefits over cash, but also have some drawbacks. Fake terminals, skimming, wireless connectivity, and relay attacks are persistent problems. Attempts to overcome one problem often lead to another - for example, some systems use QR codes to avoid skimming and connexion issues, but QR codes can be stolen at distance and relayed. In this paper, we propose a novel mobile payment scheme based on biometric identification that provides mutual authentication to protect the user from rogue terminals. Our scheme imposes only minimal requirements on terminal hardware, does not depend on wireless connectivity between the user and the verifier during the authentication phase, and does not require the user to trust the terminal until it has authenticated itself to the user. We show that our scheme is resistant against phishing, replay, relay, and presentation attacks.<|reference_end|>
|
arxiv
|
@article{sturgess2024a,
title={A Mobile Payment Scheme Using Biometric Identification with Mutual
Authentication},
author={Jack Sturgess and Ivan Martinovic},
journal={arXiv preprint arXiv:2409.17181},
year={2024},
archivePrefix={arXiv},
eprint={2409.17181},
primaryClass={cs.CR}
}
|
sturgess2024a
|
arxiv-661962
|
2409.17183
|
Transfer learning for financial data predictions: a systematic review
|
<|reference_start|>Transfer learning for financial data predictions: a systematic review: The literature highlights that financial time series data pose significant challenges for accurate stock price prediction: these data are characterized by noise and susceptibility to news, and traditional statistical methodologies make assumptions, such as linearity and normality, that are unsuitable for the non-linear nature of financial time series; machine learning methodologies, by contrast, are able to capture non-linear relationships in the data. To date, neural networks are considered the main machine learning tool for financial price prediction. Transfer learning, as a method aimed at transferring knowledge from source tasks to target tasks, can be a very useful methodological tool for improving financial prediction capability. Current reviews of this body of knowledge focus mainly on neural network architectures for financial prediction, with very little emphasis on the transfer learning methodology; this paper therefore goes deeper into the topic by developing a systematic review of the application of transfer learning to financial market prediction and of the challenges and potential future directions of transfer learning methodologies for stock market prediction.<|reference_end|>
|
arxiv
|
@article{lanzetta2024transfer,
title={Transfer learning for financial data predictions: a systematic review},
author={V. Lanzetta},
journal={arXiv preprint arXiv:2409.17183},
year={2024},
archivePrefix={arXiv},
eprint={2409.17183},
primaryClass={q-fin.TR cs.AI cs.LG q-fin.CP}
}
|
lanzetta2024transfer
|
arxiv-661963
|
2409.17186
|
Don't Trust A Single Gerrymandering Metric
|
<|reference_start|>Don't Trust A Single Gerrymandering Metric: In recent years, in an effort to promote fairness in the election process, a wide variety of techniques and metrics have been proposed to determine whether a map is a partisan gerrymander. The most accessible measures, requiring easily obtained data, are metrics such as the Mean-Median Difference, Efficiency Gap, Declination, and GEO metric. But for most of these metrics, researchers have struggled to describe, given no additional information, how a value of that metric on a single map indicates the presence or absence of gerrymandering. Our main result is that each of these metrics is gameable when used as a single, isolated quantity to detect gerrymandering (or the lack thereof). That is, for each of the four metrics, we can find district plans for a given state with an extremely large number of Democratic-won (or Republican-won) districts while the metric value of that plan falls within a reasonable, predetermined bound. We do this by using a hill-climbing method to generate district plans that are constrained by the bounds on the metric but also maximize or nearly maximize the number of districts won by a party. In addition, extreme values of the Mean-Median Difference do not necessarily correspond to maps with an extreme number of districts won. Thus, the Mean-Median Difference metric is particularly misleading, as it cannot distinguish more extreme maps from less extreme maps. The other metrics are more nuanced, but when assessed on an ensemble, none perform substantially differently from simply measuring the number of districts won by a fixed party. One clear consequence of these results is that they demonstrate the folly of specifying a priori bounds on a metric that a redistricting commission must meet in order to avoid gerrymandering.<|reference_end|>
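For concreteness, here is a small sketch of two of the four metrics using their standard published definitions (the paper's hill-climbing search is not reproduced, and the five-district vote counts are invented):

```python
# Illustrative sketch, not the authors' code: Mean-Median Difference and
# Efficiency Gap computed from district-level two-party vote counts.
from statistics import mean, median

def mean_median_difference(dem_shares):
    """Mean minus median of Democratic two-party vote shares across
    districts; values far from 0 are often read as partisan skew."""
    return mean(dem_shares) - median(dem_shares)

def efficiency_gap(dem_votes, rep_votes):
    """(Wasted R votes - wasted D votes) / total votes, where all losing
    votes and winning votes beyond 50% count as 'wasted'."""
    wasted_d = wasted_r = total = 0
    for d, r in zip(dem_votes, rep_votes):
        n = d + r
        total += n
        threshold = n / 2
        if d > r:                       # Democratic win
            wasted_d += d - threshold
            wasted_r += r
        else:                           # Republican win
            wasted_r += r - threshold
            wasted_d += d
    return (wasted_r - wasted_d) / total

# Toy 5-district state: modest metric values despite a 4-1 seat split.
dem = [5200, 5100, 5050, 5150, 2500]
rep = [4800, 4900, 4950, 4850, 7500]
print(mean_median_difference([d / (d + r) for d, r in zip(dem, rep)]))
print(efficiency_gap(dem, rep))
```

Sign conventions vary in the literature; as written here, a positive efficiency gap indicates a Democratic advantage.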
|
arxiv
|
@article{ratliff2024don't,
title={Don't Trust A Single Gerrymandering Metric},
author={Thomas Ratliff, Stephanie Somersille, Ellen Veomett},
journal={arXiv preprint arXiv:2409.17186},
year={2024},
archivePrefix={arXiv},
eprint={2409.17186},
primaryClass={physics.soc-ph cs.CY}
}
|
ratliff2024don't
|
arxiv-661964
|
2409.17187
|
Applications and Novel Regularization of the Thin-Film Equation
|
<|reference_start|>Applications and Novel Regularization of the Thin-Film Equation: The classical no-slip boundary condition of the Navier-Stokes equations fails to describe the spreading motion of a droplet on a substrate due to the missing small-scale physics near the contact line. In this thesis, we introduce a novel regularization of the thin-film equation to model droplet spreading. The solution of the regularized thin-film equation, the Geometric Thin-Film Equation, is studied and characterized. Two robust numerical solvers are discussed, notably a fast, mesh-free numerical scheme for simulating thin-film flows in two and three spatial dimensions. Moreover, we prove the regularity and convergence of the numerical solutions. The existence and uniqueness of the solution of the Geometric Thin-Film Equation with respect to a wide range of measure-valued initial conditions are also discussed.<|reference_end|>
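For context, the classical equation being regularized is, up to constants, the surface-tension-driven thin-film equation of lubrication theory, shown here in one spatial dimension (this is the standard textbook form, not a formula quoted from the thesis):

```latex
\[
  \partial_t h + \partial_x\!\left( h^{3}\, \partial_x^{3} h \right) = 0,
\]
```

where h(x, t) is the film height. The mobility h^3 degenerating as h -> 0 is what makes the moving contact line singular under no-slip, which motivates regularization.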
|
arxiv
|
@article{pang2024applications,
title={Applications and Novel Regularization of the Thin-Film Equation},
author={Khang Ee Pang},
journal={arXiv preprint arXiv:2409.17187},
year={2024},
archivePrefix={arXiv},
eprint={2409.17187},
primaryClass={physics.flu-dyn cs.NA math.NA}
}
|
pang2024applications
|
arxiv-661965
|
2409.17189
|
Decentralized Federated Learning with Gradient Tracking over Time-Varying Directed Networks
|
<|reference_start|>Decentralized Federated Learning with Gradient Tracking over Time-Varying Directed Networks: We investigate the problem of agent-to-agent interaction in decentralized (federated) learning over time-varying directed graphs, and, in doing so, propose a consensus-based algorithm called DSGTm-TV. The proposed algorithm incorporates gradient tracking and heavy-ball momentum to distributively optimize a global objective function, while preserving local data privacy. Under DSGTm-TV, agents will update local model parameters and gradient estimates using information exchange with neighboring agents enabled through row- and column-stochastic mixing matrices, which we show guarantee both consensus and optimality. Our analysis establishes that DSGTm-TV exhibits linear convergence to the exact global optimum when exact gradient information is available, and converges in expectation to a neighborhood of the global optimum when employing stochastic gradients. Moreover, in contrast to existing methods, DSGTm-TV preserves convergence for networks with uncoordinated stepsizes and momentum parameters, for which we provide explicit bounds. These results enable agents to operate in a fully decentralized manner, independently optimizing their local hyper-parameters. We demonstrate the efficacy of our approach via comparisons with state-of-the-art baselines on real-world image classification and natural language processing tasks.<|reference_end|>
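To make the update structure concrete, here is a minimal sketch of gradient tracking with heavy-ball momentum over a static directed ring, using a row-stochastic matrix for the parameter mix and a column-stochastic matrix for the tracker mix (AB/push-pull style). The exact DSGTm-TV recursion, its time-varying matrices, and uncoordinated stepsizes are not reproduced; the constants below are chosen only to make the toy converge.

```python
# Minimal sketch under stated assumptions, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
n, d, alpha, beta = 4, 3, 0.05, 0.3     # agents, dimension, step, momentum

# Local quadratics f_i(x) = 0.5*||Q_i x - b_i||^2 sharing one minimizer.
Q = [rng.standard_normal((d, d)) + 2 * np.eye(d) for _ in range(n)]
x_star = rng.standard_normal(d)
b = [Qi @ x_star for Qi in Q]
grad = lambda i, x: Q[i].T @ (Q[i] @ x - b[i])

# Directed ring: A row-stochastic (mixes x), B column-stochastic (mixes y).
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = A[i, (i - 1) % n] = 0.5
B = A.T.copy()                            # columns of B sum to 1

x = rng.standard_normal((n, d)); x_prev = x.copy()
y = np.stack([grad(i, x[i]) for i in range(n)])    # gradient trackers

for _ in range(1000):
    x_new = A @ x - alpha * y + beta * (x - x_prev)   # mix + step + momentum
    y = B @ y + np.stack([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x_prev, x = x, x_new

print(np.linalg.norm(x - x_star, axis=1))  # every agent near the optimum
```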
|
arxiv
|
@article{nguyen2024decentralized,
title={Decentralized Federated Learning with Gradient Tracking over
Time-Varying Directed Networks},
author={Duong Thuy Anh Nguyen, Su Wang, Duong Tung Nguyen, Angelia Nedich, H.
Vincent Poor},
journal={arXiv preprint arXiv:2409.17189},
year={2024},
archivePrefix={arXiv},
eprint={2409.17189},
primaryClass={math.OC cs.LG}
}
|
nguyen2024decentralized
|
arxiv-661966
|
2409.17190
|
Enhancing Guardrails for Safe and Secure Healthcare AI
|
<|reference_start|>Enhancing Guardrails for Safe and Secure Healthcare AI: Generative AI holds immense promise in addressing global healthcare access challenges, with numerous innovative applications now ready for use across various healthcare domains. However, a significant barrier to the widespread adoption of these domain-specific AI solutions is the lack of robust safety mechanisms to effectively manage issues such as hallucination, misinformation, and ensuring truthfulness. Left unchecked, these risks can compromise patient safety and erode trust in healthcare AI systems. While general-purpose frameworks like Llama Guard are useful for filtering toxicity and harmful content, they do not fully address the stringent requirements for truthfulness and safety in healthcare contexts. This paper examines the unique safety and security challenges inherent to healthcare AI, particularly the risk of hallucinations, the spread of misinformation, and the need for factual accuracy in clinical settings. I propose enhancements to existing guardrails frameworks, such as Nvidia NeMo Guardrails, to better suit healthcare-specific needs. By strengthening these safeguards, I aim to ensure the secure, reliable, and accurate use of AI in healthcare, mitigating misinformation risks and improving patient safety.<|reference_end|>
|
arxiv
|
@article{gangavarapu2024enhancing,
title={Enhancing Guardrails for Safe and Secure Healthcare AI},
author={Ananya Gangavarapu},
journal={arXiv preprint arXiv:2409.17190},
year={2024},
archivePrefix={arXiv},
eprint={2409.17190},
primaryClass={cs.CR cs.AI}
}
|
gangavarapu2024enhancing
|
arxiv-661967
|
2409.17191
|
An Effective, Robust and Fairness-aware Hate Speech Detection Framework
|
<|reference_start|>An Effective, Robust and Fairness-aware Hate Speech Detection Framework: With the widespread use of online social networks, hate speech is spreading faster and causing more damage than ever before. Existing hate speech detection methods have limitations in several aspects, such as handling data insufficiency, estimating model uncertainty, improving robustness against malicious attacks, and handling unintended bias (i.e., fairness). There is an urgent need for accurate, robust, and fair hate speech classification in online social networks. To bridge the gap, we design a novel data-augmented, fairness-addressed, and uncertainty-estimated framework. As part of the framework, we propose Bidirectional Quaternion-Quasi-LSTM layers to balance effectiveness and efficiency. To build a generalized model, we combine five datasets collected from three platforms. Experiment results show that our model outperforms eight state-of-the-art methods under both the no-attack scenario and various attack scenarios, indicating the effectiveness and robustness of our model. We share our code along with the combined dataset to support future research.<|reference_end|>
|
arxiv
|
@article{mou2024an,
title={An Effective, Robust and Fairness-aware Hate Speech Detection Framework},
author={Guanyi Mou, Kyumin Lee},
journal={IEEE BigData 2021},
year={2024},
doi={10.1109/BigData52589.2021.9672022},
archivePrefix={arXiv},
eprint={2409.17191},
primaryClass={cs.CL cs.LG}
}
|
mou2024an
|
arxiv-661968
|
2409.17192
|
Constrain Path Optimization on Time-Dependent Road Networks
|
<|reference_start|>Constrain Path Optimization on Time-Dependent Road Networks: Time-Dependent Constrained Path Optimization (TD-CPO) takes the following input: (i) time-dependent (TD) road network, (ii) source ($s$), (iii) destination ($d$), (iv) departure time ($t$) and, (v) budget ($\mathcal{B}$). In TD graph, each edge is characterized by a time-dependent arrival time and a score function. TD-CPO aims to determine a loopless path $s$--$d$ departing from $s$ at time $t$ and arriving at $d$ on or before $t+\mathcal{B}$ while maximizing the score. TD-CPO has applications in urban navigation. TD-CPO is a variant of the Arc Orienteering Problem (AOP) known to be NP-hard in nature. The key computational challenge of TD-CPO is that we need to find the "longest path" in terms of score within the given budget constraint in a TD graph. Current works prune down the search space very aggressively. Thus, despite having low execution time, these algorithms often produce low-quality solutions. In contrast, our proposed approach $\mathcal{SCOPE}$ efficiently solves TD-CPO by exploiting road networks' spatial and temporal properties. The inherent computational structure of $\mathcal{SCOPE}$ enables trivial parallelization for improved performance. Our experiments indicate that $\mathcal{SCOPE}$ produces superior quality solutions (nearly $2x$) compared to the state-of-the-art algorithm while having comparable running times. Furthermore, $\mathcal{SCOPE}$ exhibits almost linear speedup as the number of CPUs (cores) increases (up to 24 CPUs).<|reference_end|>
|
arxiv
|
@article{dutta2024constrain,
title={Constrain Path Optimization on Time-Dependent Road Networks},
author={Kousik Kumar Dutta, Venkata M. V. Gunturi},
journal={arXiv preprint arXiv:2409.17192},
year={2024},
archivePrefix={arXiv},
eprint={2409.17192},
primaryClass={cs.OH}
}
|
dutta2024constrain
|
arxiv-661969
|
2409.17200
|
A random measure approach to reinforcement learning in continuous time
|
<|reference_start|>A random measure approach to reinforcement learning in continuous time: We present a random measure approach for modeling exploration, i.e., the execution of measure-valued controls, in continuous-time reinforcement learning (RL) with controlled diffusion and jumps. First, we consider the case when sampling the randomized control in continuous time takes place on a discrete-time grid and reformulate the resulting stochastic differential equation (SDE) as an equation driven by suitable random measures. The construction of these random measures makes use of the Brownian motion and the Poisson random measure (which are the sources of noise in the original model dynamics) as well as the additional random variables, which are sampled on the grid for the control execution. Then, we prove a limit theorem for these random measures as the mesh-size of the sampling grid goes to zero, which leads to the grid-sampling limit SDE that is jointly driven by white noise random measures and a Poisson random measure. We also argue that the grid-sampling limit SDE can substitute the exploratory SDE and the sample SDE of the recent continuous-time RL literature, i.e., it can be applied for the theoretical analysis of exploratory control problems and for the derivation of learning algorithms.<|reference_end|>
|
arxiv
|
@article{bender2024a,
title={A random measure approach to reinforcement learning in continuous time},
author={Christian Bender and Nguyen Tran Thuan},
journal={arXiv preprint arXiv:2409.17200},
year={2024},
archivePrefix={arXiv},
eprint={2409.17200},
primaryClass={cs.LG math.PR stat.ML}
}
|
bender2024a
|
arxiv-661970
|
2409.17201
|
Immersion and Invariance-based Coding for Privacy-Preserving Federated Learning
|
<|reference_start|>Immersion and Invariance-based Coding for Privacy-Preserving Federated Learning: Federated learning (FL) has emerged as a method to preserve privacy in collaborative distributed learning. In FL, clients train AI models directly on their devices rather than sharing data with a centralized server, which can pose privacy risks. However, it has been shown that despite FL's partial protection of local data privacy, information about clients' data can still be inferred from shared model updates during training. In recent years, several privacy-preserving approaches have been developed to mitigate this privacy leakage in FL, though they often provide privacy at the cost of model performance or system efficiency. Balancing these trade-offs presents a significant challenge in implementing FL schemes. In this manuscript, we introduce a privacy-preserving FL framework that combines differential privacy and system immersion tools from control theory. The core idea is to treat the optimization algorithms used in standard FL schemes (e.g., gradient-based algorithms) as a dynamical system that we seek to immerse into a higher-dimensional system (referred to as the target optimization algorithm). The target algorithm's dynamics are designed such that, first, the model parameters of the original algorithm are immersed in its parameters; second, it operates on distorted parameters; and third, it converges to an encoded version of the true model parameters from the original algorithm. These encoded parameters can then be decoded at the server to retrieve the original model parameters. We demonstrate that the proposed privacy-preserving scheme can be tailored to offer any desired level of differential privacy for both local and global model parameters, while maintaining the same accuracy and convergence rate as standard FL algorithms.<|reference_end|>
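A same-dimension affine toy makes part of this easy to see. Note the paper immerses the algorithm into a *higher*-dimensional system and calibrates differential privacy; the sketch below, with an invertible map A and offset c of our own choosing, only illustrates "operates on distorted parameters" and "decodes to the true parameters".

```python
# Simplified sketch under stated assumptions, not the paper's scheme.
import numpy as np

rng = np.random.default_rng(0)
d = 4
M = rng.standard_normal((d, d))
Q = M @ M.T + d * np.eye(d)              # SPD Hessian of f
b = rng.standard_normal(d)
grad = lambda th: Q @ th - b             # f(th) = 0.5*th'Q th - b'th
theta_true = np.linalg.solve(Q, b)

A = rng.standard_normal((d, d)) + d * np.eye(d)   # private invertible map
c = rng.standard_normal(d)
encode = lambda th: A @ th + c
decode = lambda z: np.linalg.solve(A, z - c)

# The aggregator only ever sees distorted iterates z. Pushing plain
# gradient descent through the secret coordinate change gives
#   z_{k+1} = z_k - alpha * A @ grad(decode(z_k)),
# whose decoded trajectory is exactly vanilla gradient descent.
z = encode(rng.standard_normal(d))
for _ in range(3000):
    z = z - 0.02 * A @ grad(decode(z))

print(np.allclose(decode(z), theta_true, atol=1e-6))   # True
```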
|
arxiv
|
@article{hayati2024immersion,
title={Immersion and Invariance-based Coding for Privacy-Preserving Federated
Learning},
author={Haleh Hayati, Carlos Murguia, Nathan van de Wouw},
journal={arXiv preprint arXiv:2409.17201},
year={2024},
archivePrefix={arXiv},
eprint={2409.17201},
primaryClass={cs.CR cs.LG}
}
|
hayati2024immersion
|
arxiv-661971
|
2409.17203
|
AACLiteNet: A Lightweight Model for Detection of Fine-Grained Abdominal Aortic Calcification
|
<|reference_start|>AACLiteNet: A Lightweight Model for Detection of Fine-Grained Abdominal Aortic Calcification: Cardiovascular Diseases (CVDs) are the leading cause of death worldwide, taking 17.9 million lives annually. Abdominal Aortic Calcification (AAC) is an established marker for CVD, which can be observed in lateral view Vertebral Fracture Assessment (VFA) scans, usually done for vertebral fracture detection. Early detection of AAC may help reduce the risk of developing clinical CVDs by encouraging preventive measures. Manual analysis of VFA scans for AAC measurement is time consuming and requires trained human assessors. Recently, efforts have been made to automate the process, however, the proposed models are either low in accuracy, lack granular level score prediction, or are too heavy in terms of inference time and memory footprint. Considering all these shortcomings of existing algorithms, we propose 'AACLiteNet', a lightweight deep learning model that predicts both cumulative and granular level AAC scores with high accuracy, and also has a low memory footprint, and computation cost (Floating Point Operations (FLOPs)). The AACLiteNet achieves a significantly improved one-vs-rest average accuracy of 85.94% as compared to the previous best 81.98%, with 19.88 times less computational cost and 2.26 times less memory footprint, making it implementable on portable computing devices.<|reference_end|>
|
arxiv
|
@article{ilyas2024aaclitenet:,
title={AACLiteNet: A Lightweight Model for Detection of Fine-Grained Abdominal
Aortic Calcification},
author={Zaid Ilyas, Afsah Saleem, David Suter, Siobhan Reid, John Schousboe,
William Leslie, Joshua Lewis, Syed Zulqarnain Gilani},
journal={arXiv preprint arXiv:2409.17203},
year={2024},
archivePrefix={arXiv},
eprint={2409.17203},
primaryClass={cs.CV}
}
|
ilyas2024aaclitenet:
|
arxiv-661972
|
2409.17204
|
Exploring the Roles of NLP-based Dialog Indicators in Predicting User Experience in interacting with Large Language Model System
|
<|reference_start|>Exploring the Roles of NLP-based Dialog Indicators in Predicting User Experience in interacting with Large Language Model System: The use of Large Language Models for dialogue systems is rising, presenting a new challenge: how do we assess users' chat experience in these systems? Leveraging Natural Language Processing (NLP)-powered dialog analyzers to create dialog indicators like Coherence and Emotion has the potential to predict the chat experience. In this paper, we propose a conceptual model to explain the relationship between the dialog indicators and various factors related to the chat experience, such as users' intentions, affinity toward dialog agents, and prompts of the agents' characters. We evaluated the conceptual model using PLS-SEM with 120 participants and found a good model fit. Our results suggest that dialog indicators can predict the chat experience and fully mediate the impact of prompts and user intentions. Additionally, users' affinity toward agents can partially explain these predictions. Our findings demonstrate the potential of using dialog indicators in predicting the chat experience. Through the conceptual model we propose, researchers can apply the dialog analyzers to generate dialog indicators to constantly monitor the dialog process and improve the user's chat experience accordingly.<|reference_end|>
|
arxiv
|
@article{chen2024exploring,
title={Exploring the Roles of NLP-based Dialog Indicators in Predicting User
Experience in interacting with Large Language Model System},
author={Eason Chen},
journal={arXiv preprint arXiv:2409.17204},
year={2024},
archivePrefix={arXiv},
eprint={2409.17204},
primaryClass={cs.HC}
}
|
chen2024exploring
|
arxiv-661973
|
2409.17207
|
Finite State Machine with Input and Process Render
|
<|reference_start|>Finite State Machine with Input and Process Render: Finite State Machines are a concept widely taught in undergraduate theory of computing courses. Educators typically use tools with static representations of FSMs to help students visualize these objects and processes; however, all existing tools require manual editing by the instructor. In this poster, we created an automatic visualization tool for FSMs that generates videos of FSM simulation, named Finite State Machine with Input and Process Render (FSMIPR). Educators can input any formal definition of an FSM and an input string, and FSMIPR generates an accompanying video of its simulation. We believe that FSMIPR will be beneficial to students who learn difficult computer theory concepts. We conclude with future work currently in-progress with FSMIPR.<|reference_end|>
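The core simulation step such a tool animates is tiny; here is a sketch for a DFA (the definition format below is hypothetical, not FSMIPR's actual input syntax):

```python
# Toy sketch of the simulation step a tool like FSMIPR would render.
def run_dfa(states, alphabet, delta, start, accepting, word):
    """Return the visited state sequence and whether the word is accepted."""
    current, trace = start, [start]
    for symbol in word:
        assert symbol in alphabet, f"symbol {symbol!r} not in alphabet"
        current = delta[(current, symbol)]
        trace.append(current)
    return trace, current in accepting

# DFA accepting binary strings with an even number of 1s.
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd",  ("odd", "1"): "even"}
trace, ok = run_dfa({"even", "odd"}, {"0", "1"}, delta,
                    start="even", accepting={"even"}, word="1011")
print(trace, ok)   # ['even', 'odd', 'odd', 'even', 'odd'] False
```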
|
arxiv
|
@article{bennett-manke2024finite,
title={Finite State Machine with Input and Process Render},
author={Sierra Zoe Bennett-Manke, Sebastian Neumann, Ryan E. Dougherty},
journal={arXiv preprint arXiv:2409.17207},
year={2024},
archivePrefix={arXiv},
eprint={2409.17207},
primaryClass={cs.CY}
}
|
bennett-manke2024finite
|
arxiv-661974
|
2409.17208
|
First Place Solution to the ECCV 2024 BRAVO Challenge: Evaluating Robustness of Vision Foundation Models for Semantic Segmentation
|
<|reference_start|>First Place Solution to the ECCV 2024 BRAVO Challenge: Evaluating Robustness of Vision Foundation Models for Semantic Segmentation: In this report, we present the first place solution to the ECCV 2024 BRAVO Challenge, where a model is trained on Cityscapes and its robustness is evaluated on several out-of-distribution datasets. Our solution leverages the powerful representations learned by vision foundation models, by attaching a simple segmentation decoder to DINOv2 and fine-tuning the entire model. This approach outperforms more complex existing approaches, and achieves first place in the challenge. Our code is publicly available at https://github.com/tue-mps/benchmark-vfm-ss.<|reference_end|>
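The recipe is simple enough to sketch. The hub entry point and get_intermediate_layers call below follow the public DINOv2 repository's API as we understand it, and the 1x1-conv decoder is a stand-in for the submission's actual head, not a reproduction of it:

```python
# Sketch under stated assumptions; downloads DINOv2 weights via torch.hub.
import torch
import torch.nn as nn

backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14").eval()
decoder = nn.Conv2d(768, 19, kernel_size=1)      # 19 Cityscapes classes

x = torch.randn(1, 3, 518, 518)                  # multiple of patch size 14
with torch.no_grad():
    # Patch tokens reshaped to a (1, 768, 37, 37) feature map.
    feats = backbone.get_intermediate_layers(x, n=1, reshape=True)[0]
logits = decoder(feats)
seg = nn.functional.interpolate(logits, size=x.shape[-2:], mode="bilinear")
print(seg.shape)                                 # torch.Size([1, 19, 518, 518])
```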
|
arxiv
|
@article{kerssies2024first,
title={First Place Solution to the ECCV 2024 BRAVO Challenge: Evaluating
Robustness of Vision Foundation Models for Semantic Segmentation},
author={Tommie Kerssies, Daan de Geus, Gijs Dubbelman},
journal={arXiv preprint arXiv:2409.17208},
year={2024},
archivePrefix={arXiv},
eprint={2409.17208},
primaryClass={cs.CV cs.AI cs.LG cs.RO}
}
|
kerssies2024first
|
arxiv-661975
|
2409.17210
|
Neural Network Architecture Search Enabled Wide-Deep Learning (NAS-WD) for Spatially Heterogenous Property Awared Chicken Woody Breast Classification and Hardness Regression
|
<|reference_start|>Neural Network Architecture Search Enabled Wide-Deep Learning (NAS-WD) for Spatially Heterogenous Property Awared Chicken Woody Breast Classification and Hardness Regression: Due to intensive genetic selection for rapid growth rates and high broiler yields in recent years, the global poultry industry has faced a challenging problem in the form of woody breast (WB) conditions. This condition has caused significant economic losses as high as $200 million annually, and the root cause of WB has yet to be identified. Human palpation is the most common method of distinguishing a WB from others. However, this method is time-consuming and subjective. Hyperspectral imaging (HSI) combined with machine learning algorithms can evaluate the WB conditions of fillets in a non-invasive, objective, and high-throughput manner. In this study, 250 raw chicken breast fillet samples (normal, mild, severe) were taken, and spatially heterogeneous hardness distribution was first considered when designing HSI processing models. The study not only classified the WB levels from HSI but also built a regression model to correlate the spectral information with sample hardness data. To achieve a satisfactory classification and regression model, a neural network architecture search (NAS) enabled a wide-deep neural network model named NAS-WD, which was developed. In NAS-WD, NAS was first used to automatically optimize the network architecture and hyperparameters. The classification results show that NAS-WD can classify the three WB levels with an overall accuracy of 95%, outperforming the traditional machine learning model, and the regression correlation between the spectral data and hardness was 0.75, which performs significantly better than traditional regression models.<|reference_end|>
|
arxiv
|
@article{pallerla2024neural,
title={Neural Network Architecture Search Enabled Wide-Deep Learning (NAS-WD)
for Spatially Heterogenous Property Awared Chicken Woody Breast
Classification and Hardness Regression},
author={Chaitanya Pallerla, Yihong Feng, Casey M. Owens, Ramesh Bahadur Bist,
Siavash Mahmoudi, Pouya Sohrabipour, Amirreza Davar, Dongyi Wang},
journal={arXiv preprint arXiv:2409.17210},
year={2024},
archivePrefix={arXiv},
eprint={2409.17210},
primaryClass={cs.CV cs.CE}
}
|
pallerla2024neural
|
arxiv-661976
|
2409.17213
|
Plurals: A System for Guiding LLMs Via Simulated Social Ensembles
|
<|reference_start|>Plurals: A System for Guiding LLMs Via Simulated Social Ensembles: Recent debates raised concerns that language models may favor certain viewpoints. But what if the solution is not to aim for a 'view from nowhere' but rather to leverage different viewpoints? We introduce Plurals, a system and Python library for pluralistic AI deliberation. Plurals consists of Agents (LLMs, optionally with personas) which deliberate within customizable Structures, with Moderators overseeing deliberation. Plurals is a generator of simulated social ensembles. Plurals integrates with government datasets to create nationally representative personas, includes deliberation templates inspired by democratic deliberation theory, and allows users to customize both information-sharing structures and deliberation behavior within Structures. Six case studies demonstrate fidelity to theoretical constructs and efficacy. Three randomized experiments show simulated focus groups produced output resonant with an online sample of the relevant audiences (chosen over zero-shot generation in 75% of trials). Plurals is both a paradigm and a concrete system for pluralistic AI. The Plurals library is available at https://github.com/josh-ashkinaze/plurals and will be continually updated.<|reference_end|>
|
arxiv
|
@article{ashkinaze2024plurals:,
title={Plurals: A System for Guiding LLMs Via Simulated Social Ensembles},
author={Joshua Ashkinaze, Emily Fry, Narendra Edara, Eric Gilbert, Ceren Budak},
journal={arXiv preprint arXiv:2409.17213},
year={2024},
archivePrefix={arXiv},
eprint={2409.17213},
primaryClass={cs.CL cs.AI cs.CY cs.HC cs.MA}
}
|
ashkinaze2024plurals:
|
arxiv-661977
|
2409.17214
|
Grounded Predictions of Teamwork as a One-Shot Game: A Multiagent Multi-Armed Bandits Approach
|
<|reference_start|>Grounded Predictions of Teamwork as a One-Shot Game: A Multiagent Multi-Armed Bandits Approach: Humans possess innate collaborative capacities. However, effective teamwork often remains challenging. This study delves into the feasibility of collaboration within teams of rational, self-interested agents who engage in teamwork without the obligation to contribute. Drawing from psychological and game theoretical frameworks, we formalise teamwork as a one-shot aggregative game, integrating insights from Steiner's theory of group productivity. We characterise this novel game's Nash equilibria and propose a multiagent multi-armed bandit system that learns to converge to approximations of such equilibria. Our research contributes value to the areas of game theory and multiagent systems, paving the way for a better understanding of voluntary collaborative dynamics. We examine how team heterogeneity, task typology, and assessment difficulty influence agents' strategies and resulting teamwork outcomes. Finally, we empirically study the behaviour of work teams under incentive systems that defy analytical treatment. Our agents demonstrate human-like behaviour patterns, corroborating findings from social psychology research.<|reference_end|>
|
arxiv
|
@article{gómez2024grounded,
title={Grounded Predictions of Teamwork as a One-Shot Game: A Multiagent
Multi-Armed Bandits Approach},
author={Alejandra López de Aberasturi Gómez, Carles Sierra and Jordi
Sabater-Mir},
journal={arXiv preprint arXiv:2409.17214},
year={2024},
archivePrefix={arXiv},
eprint={2409.17214},
primaryClass={cs.MA cs.GT}
}
|
gómez2024grounded
|
arxiv-661978
|
2409.17216
|
Data-Centric AI Governance: Addressing the Limitations of Model-Focused Policies
|
<|reference_start|>Data-Centric AI Governance: Addressing the Limitations of Model-Focused Policies: Current regulations on powerful AI capabilities are narrowly focused on "foundation" or "frontier" models. However, these terms are vague and inconsistently defined, leading to an unstable foundation for governance efforts. Critically, policy debates often fail to consider the data used with these models, despite the clear link between data and model performance. Even (relatively) "small" models that fall outside the typical definitions of foundation and frontier models can achieve equivalent outcomes when exposed to sufficiently specific datasets. In this work, we illustrate the importance of considering dataset size and content as essential factors in assessing the risks posed by models both today and in the future. More broadly, we emphasize the risk posed by over-regulating reactively and provide a path towards careful, quantitative evaluation of capabilities that can lead to a simplified regulatory environment.<|reference_end|>
|
arxiv
|
@article{gupta2024data-centric,
title={Data-Centric AI Governance: Addressing the Limitations of Model-Focused
Policies},
author={Ritwik Gupta, Leah Walker, Rodolfo Corona, Stephanie Fu, Suzanne
Petryk, Janet Napolitano, Trevor Darrell, Andrew W. Reddie},
journal={arXiv preprint arXiv:2409.17216},
year={2024},
archivePrefix={arXiv},
eprint={2409.17216},
primaryClass={cs.CY cs.AI}
}
|
gupta2024data-centric
|
arxiv-661979
|
2409.17221
|
Walker: Self-supervised Multiple Object Tracking by Walking on Temporal Appearance Graphs
|
<|reference_start|>Walker: Self-supervised Multiple Object Tracking by Walking on Temporal Appearance Graphs: The supervision of state-of-the-art multiple object tracking (MOT) methods requires enormous annotation efforts to provide bounding boxes for all frames of all videos, and instance IDs to associate them through time. To this end, we introduce Walker, the first self-supervised tracker that learns from videos with sparse bounding box annotations, and no tracking labels. First, we design a quasi-dense temporal object appearance graph, and propose a novel multi-positive contrastive objective to optimize random walks on the graph and learn instance similarities. Then, we introduce an algorithm to enforce mutually-exclusive connective properties across instances in the graph, optimizing the learned topology for MOT. At inference time, we propose to associate detected instances to tracklets based on the max-likelihood transition state under motion-constrained bi-directional walks. Walker is the first self-supervised tracker to achieve competitive performance on MOT17, DanceTrack, and BDD100K. Remarkably, our proposal outperforms the previous self-supervised trackers even when drastically reducing the annotation requirements by up to 400x.<|reference_end|>
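The graph-walk ingredient can be sketched compactly. The snippet below builds row-stochastic transition matrices from instance embeddings and scores a frame1 -> frame2 -> frame1 walk against the identity, in the spirit of contrastive random walks; Walker's multi-positive objective, mutual-exclusivity constraints, and motion-constrained inference are not reproduced here.

```python
# Minimal sketch of a cycle-consistent walk on a temporal appearance graph.
import torch
import torch.nn.functional as F

def transition(feat_a, feat_b, tau=0.07):
    """Row-stochastic transitions between the instances of two frames,
    from temperature-scaled cosine affinities."""
    a, b = F.normalize(feat_a, dim=1), F.normalize(feat_b, dim=1)
    return F.softmax(a @ b.t() / tau, dim=1)

n, d = 8, 128                              # instances per frame, embed dim
f1 = torch.randn(n, d, requires_grad=True)
f2 = torch.randn(n, d, requires_grad=True)

# A cycle-consistent walk frame1 -> frame2 -> frame1 should land back on
# the instance it started from, i.e. match the identity permutation.
round_trip = transition(f1, f2) @ transition(f2, f1)   # rows sum to 1
loss = F.nll_loss(torch.log(round_trip + 1e-8), torch.arange(n))
loss.backward()
print(float(loss))
```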
|
arxiv
|
@article{segu2024walker:,
title={Walker: Self-supervised Multiple Object Tracking by Walking on Temporal
Appearance Graphs},
author={Mattia Segu, Luigi Piccinelli, Siyuan Li, Luc Van Gool, Fisher Yu,
Bernt Schiele},
journal={arXiv preprint arXiv:2409.17221},
year={2024},
archivePrefix={arXiv},
eprint={2409.17221},
primaryClass={cs.CV}
}
|
segu2024walker:
|
arxiv-661980
|
2409.17228
|
Disk2Planet: A Robust and Automated Machine Learning Tool for Parameter Inference in Disk-Planet Systems
|
<|reference_start|>Disk2Planet: A Robust and Automated Machine Learning Tool for Parameter Inference in Disk-Planet Systems: We introduce Disk2Planet, a machine learning-based tool to infer key parameters in disk-planet systems from observed protoplanetary disk structures. Disk2Planet takes as input the disk structures in the form of two-dimensional density and velocity maps, and outputs disk and planet properties, that is, the Shakura--Sunyaev viscosity, the disk aspect ratio, the planet--star mass ratio, and the planet's radius and azimuth. We integrate the Covariance Matrix Adaptation Evolution Strategy (CMA--ES), an evolutionary algorithm tailored for complex optimization problems, and the Protoplanetary Disk Operator Network (PPDONet), a neural network designed to predict solutions of disk--planet interactions. Our tool is fully automated and can retrieve parameters in one system in three minutes on an Nvidia A100 graphics processing unit. We empirically demonstrate that our tool achieves percent-level or higher accuracy, and is able to handle missing data and unknown levels of noise.<|reference_end|>
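The outer optimization loop is easy to sketch with the pip-installable cma package and its ask/tell interface; the analytic toy `surrogate` below stands in for PPDONet, and the parameter names follow the abstract:

```python
# Sketch under stated assumptions; requires `pip install cma`.
import numpy as np
import cma

# Toy stand-in for PPDONet: parameters -> a small 2D "map".
grid = np.linspace(-1, 1, 32)
X, Y = np.meshgrid(grid, grid)
def surrogate(p):
    visc, aspect, mass_ratio, radius, azimuth = p
    return visc * X + aspect * Y + mass_ratio * X * Y + radius * np.sin(3 * X + azimuth)

true_params = np.array([0.3, -0.5, 0.8, 0.4, 1.2])
observed = surrogate(true_params)

def loss(p):
    # nanmean tolerates masked/missing pixels in the observed maps.
    return float(np.nanmean((surrogate(p) - observed) ** 2))

es = cma.CMAEvolutionStrategy(np.zeros(5), 0.5)
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [loss(x) for x in candidates])
print(es.result.xbest)   # close to true_params, up to the toy map's symmetries
```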
|
arxiv
|
@article{mao2024disk2planet:,
title={Disk2Planet: A Robust and Automated Machine Learning Tool for Parameter
Inference in Disk-Planet Systems},
author={Shunyuan Mao, Ruobing Dong, Kwang Moo Yi, Lu Lu, Sifan Wang, Paris
Perdikaris},
journal={arXiv preprint arXiv:2409.17228},
year={2024},
archivePrefix={arXiv},
eprint={2409.17228},
primaryClass={astro-ph.EP cs.AI cs.LG}
}
|
mao2024disk2planet:
|
arxiv-661981
|
2409.17250
|
Kernelization Complexity of Solution Discovery Problems
|
<|reference_start|>Kernelization Complexity of Solution Discovery Problems: In the solution discovery variant of a vertex (edge) subset problem $\Pi$ on graphs, we are given an initial configuration of tokens on the vertices (edges) of an input graph $G$ together with a budget $b$. The question is whether we can transform this configuration into a feasible solution of $\Pi$ on $G$ with at most $b$ modification steps. We consider the token sliding variant of the solution discovery framework, where each modification step consists of sliding a token to an adjacent vertex (edge). The framework of solution discovery was recently introduced by Fellows et al. [Fellows et al., ECAI 2023] and for many solution discovery problems the classical as well as the parameterized complexity has been established. In this work, we study the kernelization complexity of the solution discovery variants of Vertex Cover, Independent Set, Dominating Set, Shortest Path, Matching, and Vertex Cut with respect to the parameters number of tokens $k$, discovery budget $b$, as well as structural parameters such as pathwidth.<|reference_end|>
|
arxiv
|
@article{grobler2024kernelization,
title={Kernelization Complexity of Solution Discovery Problems},
author={Mario Grobler, Stephanie Maaz, Amer E. Mouawad, Naomi Nishimura,
Vijayaragunathan Ramamoorthi, Sebastian Siebertz},
journal={arXiv preprint arXiv:2409.17250},
year={2024},
archivePrefix={arXiv},
eprint={2409.17250},
primaryClass={cs.DS cs.CC math.CO}
}
|
grobler2024kernelization
|
arxiv-661982
|
2409.17253
|
Evaluation of Galaxy as a User-friendly Bioinformatics Tool for Enhancing Clinical Diagnostics in Genetics Laboratories
|
<|reference_start|>Evaluation of Galaxy as a User-friendly Bioinformatics Tool for Enhancing Clinical Diagnostics in Genetics Laboratories: Bioinformatics platforms have significantly changed clinical diagnostics by facilitating the analysis of genomic data, thereby advancing personalized medicine and improving patient care. This study examines the integration, usage patterns, challenges, and impact of the Galaxy platform within clinical diagnostics laboratories. We employed a convergent parallel mixed-methods design, collecting quantitative survey data and qualitative insights from structured interviews with fifteen participants across various clinical roles. The findings indicate a wide adoption of Galaxy, with participants expressing high satisfaction due to its user-friendly interface and notable improvements in workflow efficiency and diagnostic accuracy. Challenges such as data security and training needs were also identified, highlighting the platform's role in simplifying complex data analysis tasks. This study contributes to understanding the transformative potential of Galaxy in clinical practice and offers recommendations for optimizing its integration and functionality. These insights are crucial for advancing clinical diagnostics and enhancing patient outcomes.<|reference_end|>
|
arxiv
|
@article{almohab2024evaluation,
title={Evaluation of Galaxy as a User-friendly Bioinformatics Tool for
Enhancing Clinical Diagnostics in Genetics Laboratories},
author={Hadi Almohab and Ramzy Al-Othmany},
journal={International Journal on Bioinformatics & Biosciences (IJBB),
4(3), 19-40 (2024)},
year={2024},
doi={10.5121/ijbb.2024.14303},
archivePrefix={arXiv},
eprint={2409.17253},
primaryClass={cs.HC}
}
|
almohab2024evaluation
|
arxiv-661983
|
2409.17256
|
AIM 2024 Challenge on Efficient Video Super-Resolution for AV1 Compressed Content
|
<|reference_start|>AIM 2024 Challenge on Efficient Video Super-Resolution for AV1 Compressed Content: Video super-resolution (VSR) is a critical task for enhancing low-bitrate and low-resolution videos, particularly in streaming applications. While numerous solutions have been developed, they often suffer from high computational demands, resulting in low frame rates (FPS) and poor power efficiency, especially on mobile platforms. In this work, we compile different methods to address these challenges, the solutions are end-to-end real-time video super-resolution frameworks optimized for both high performance and low runtime. We also introduce a new test set of high-quality 4K videos to further validate the approaches. The proposed solutions tackle video up-scaling for two applications: 540p to 4K (x4) as a general case, and 360p to 1080p (x3) more tailored towards mobile devices. In both tracks, the solutions have a reduced number of parameters and operations (MACs), allow high FPS, and improve VMAF and PSNR over interpolation baselines. This report gauges some of the most efficient video super-resolution methods to date.<|reference_end|>
|
arxiv
|
@article{conde2024aim,
title={AIM 2024 Challenge on Efficient Video Super-Resolution for AV1
Compressed Content},
author={Marcos V Conde, Zhijun Lei, Wen Li, Christos Bampis, Ioannis
Katsavounidis, Radu Timofte},
journal={arXiv preprint arXiv:2409.17256},
year={2024},
archivePrefix={arXiv},
eprint={2409.17256},
primaryClass={eess.IV cs.CV cs.GR cs.MM}
}
|
conde2024aim
|
arxiv-661984
|
2409.17262
|
CROSS-GAiT: Cross-Attention-Based Multimodal Representation Fusion for Parametric Gait Adaptation in Complex Terrains
|
<|reference_start|>CROSS-GAiT: Cross-Attention-Based Multimodal Representation Fusion for Parametric Gait Adaptation in Complex Terrains: We present CROSS-GAiT, a novel algorithm for quadruped robots that uses Cross Attention to fuse terrain representations derived from visual and time-series inputs, including linear accelerations, angular velocities, and joint efforts. These fused representations are used to adjust the robot's step height and hip splay, enabling adaptive gaits that respond dynamically to varying terrain conditions. We generate these terrain representations by processing visual inputs through a masked Vision Transformer (ViT) encoder and time-series data through a dilated causal convolutional encoder. The cross-attention mechanism then selects and integrates the most relevant features from each modality, combining terrain characteristics with robot dynamics for better-informed gait adjustments. CROSS-GAiT uses the combined representation to dynamically adjust gait parameters in response to varying and unpredictable terrains. We train CROSS-GAiT on data from diverse terrains, including asphalt, concrete, brick pavements, grass, dense vegetation, pebbles, gravel, and sand. Our algorithm generalizes well and adapts to unseen environmental conditions, enhancing real-time navigation performance. CROSS-GAiT was implemented on a Ghost Robotics Vision 60 robot and extensively tested in complex terrains with high vegetation density, uneven/unstable surfaces, sand banks, deformable substrates, etc. We observe at least a 7.04% reduction in IMU energy density and a 27.3% reduction in total joint effort, which directly correlates with increased stability and reduced energy usage when compared to state-of-the-art methods. Furthermore, CROSS-GAiT demonstrates at least a 64.5% increase in success rate and a 4.91% reduction in time to reach the goal in four complex scenarios. Additionally, the learned representations perform 4.48% better than the state-of-the-art on a terrain classification task.<|reference_end|>
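A sketch of the cross-attention fusion step under assumed token shapes: the masked ViT and dilated causal convolutional encoders are abstracted as precomputed tokens, and the two-parameter head (step height, hip splay) is our reading of the abstract, not the released model.

```python
# Minimal sketch of cross-attention fusion of two modalities.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 2)            # step height, hip splay

    def forward(self, visual_tokens, timeseries_tokens):
        # Proprioceptive queries attend over visual terrain tokens, so each
        # time-series feature selects the most relevant visual features.
        fused, _ = self.attn(query=timeseries_tokens,
                             key=visual_tokens, value=visual_tokens)
        return self.head(fused.mean(dim=1))      # pooled -> gait parameters

vis = torch.randn(1, 196, 256)   # e.g. 14x14 ViT patch tokens
ts = torch.randn(1, 50, 256)     # IMU / joint-effort sequence features
print(CrossModalFusion()(vis, ts).shape)         # torch.Size([1, 2])
```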
|
arxiv
|
@article{seneviratne2024cross-gait:,
title={CROSS-GAiT: Cross-Attention-Based Multimodal Representation Fusion for
Parametric Gait Adaptation in Complex Terrains},
author={Gershom Seneviratne, Kasun Weerakoon, Mohamed Elnoor, Vignesh
Rajgopal, Harshavarthan Varatharajan, Mohamed Khalid M Jaffar, Jason Pusey,
Dinesh Manocha},
journal={arXiv preprint arXiv:2409.17262},
year={2024},
archivePrefix={arXiv},
eprint={2409.17262},
primaryClass={cs.RO}
}
|
seneviratne2024cross-gait:
|
arxiv-661985
|
2409.17263
|
Collaborative Comic Generation: Integrating Visual Narrative Theories with AI Models for Enhanced Creativity
|
<|reference_start|>Collaborative Comic Generation: Integrating Visual Narrative Theories with AI Models for Enhanced Creativity: This study presents a theory-inspired visual narrative generative system that integrates conceptual principles-comic authoring idioms-with generative and language models to enhance the comic creation process. Our system combines human creativity with AI models to support parts of the generative process, providing a collaborative platform for creating comic content. These comic-authoring idioms, derived from prior human-created image sequences, serve as guidelines for crafting and refining storytelling. The system translates these principles into system layers that facilitate comic creation through sequential decision-making, addressing narrative elements such as panel composition, story tension changes, and panel transitions. Key contributions include integrating machine learning models into the human-AI cooperative comic generation process, deploying abstract narrative theories into AI-driven comic creation, and a customizable tool for narrative-driven image sequences. This approach improves narrative elements in generated image sequences and engages human creativity in an AI-generative process of comics. We open-source the code at https://github.com/RimiChen/Collaborative_Comic_Generation.<|reference_end|>
|
arxiv
|
@article{chen2024collaborative,
title={Collaborative Comic Generation: Integrating Visual Narrative Theories
with AI Models for Enhanced Creativity},
author={Yi-Chun Chen and Arnav Jhala},
journal={arXiv preprint arXiv:2409.17263},
year={2024},
archivePrefix={arXiv},
eprint={2409.17263},
primaryClass={cs.AI}
}
|
chen2024collaborative
|
arxiv-661986
|
2409.17264
|
Mnemosyne: Parallelization Strategies for Efficiently Serving Multi-Million Context Length LLM Inference Requests Without Approximations
|
<|reference_start|>Mnemosyne: Parallelization Strategies for Efficiently Serving Multi-Million Context Length LLM Inference Requests Without Approximations: As large language models (LLMs) evolve to handle increasingly longer contexts, serving inference requests for context lengths in the range of millions of tokens presents unique challenges. While existing techniques are effective for training, they fail to address the unique challenges of inference, such as varying prefill and decode phases and their associated latency constraints, like Time to First Token (TTFT) and Time Between Tokens (TBT). Furthermore, no existing long-context inference solutions allow batching requests to increase hardware utilization. In this paper, we propose three key innovations for efficient interactive long context LLM inference, without resorting to any approximation: adaptive chunking to reduce prefill overheads in mixed batching, Sequence Pipeline Parallelism (SPP) to lower TTFT, and KV Cache Parallelism (KVP) to minimize TBT. These contributions are combined into a 3D parallelism strategy, enabling Mnemosyne to scale interactive inference to context lengths at least up to 10 million tokens with high throughput enabled by batching. To our knowledge, Mnemosyne is the first system to efficiently support inference over 10-million-token contexts while satisfying production-grade SLOs on TBT (30ms) on contexts up to and including 10 million tokens.<|reference_end|>
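A toy sketch of the adaptive-chunking idea as we read it; the policy below is a guess at the mechanism, not Mnemosyne's actual scheduling heuristic:

```python
# Illustrative sketch: chunk a long prefill so that each iteration's chunk
# plus the co-scheduled decode tokens fits a fixed per-iteration budget,
# protecting decode latency (TBT) during mixed batching.
def adaptive_chunks(prefill_tokens, decode_batch, token_budget=4096):
    chunks, remaining = [], prefill_tokens
    while remaining > 0:
        chunk = max(1, min(remaining, token_budget - decode_batch))
        chunks.append(chunk)
        remaining -= chunk
    return chunks

# A 10M-token prefill co-scheduled with 256 in-flight decode requests.
chunks = adaptive_chunks(10_000_000, decode_batch=256)
print(len(chunks), chunks[0])   # number of iterations, tokens per chunk
```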
|
arxiv
|
@article{agrawal2024mnemosyne:,
title={Mnemosyne: Parallelization Strategies for Efficiently Serving
Multi-Million Context Length LLM Inference Requests Without Approximations},
author={Amey Agrawal, Junda Chen, Íñigo Goiri, Ramachandran Ramjee, Chaojie
Zhang, Alexey Tumanov, Esha Choukse},
journal={arXiv preprint arXiv:2409.17264},
year={2024},
archivePrefix={arXiv},
eprint={2409.17264},
primaryClass={cs.LG cs.DC}
}
|
agrawal2024mnemosyne:
|
arxiv-661987
|
2409.17265
|
CodonMPNN for Organism Specific and Codon Optimal Inverse Folding
|
<|reference_start|>CodonMPNN for Organism Specific and Codon Optimal Inverse Folding: Generating protein sequences conditioned on protein structures is an impactful technique for protein engineering. When synthesizing engineered proteins, they are commonly translated into DNA and expressed in an organism such as yeast. One difficulty in this process is that the expression rates can be low due to suboptimal codon sequences for expressing a protein in a host organism. We propose CodonMPNN, which generates a codon sequence conditioned on a protein backbone structure and an organism label. If naturally occurring DNA sequences are close to codon optimality, CodonMPNN could learn to generate codon sequences with higher expression yields than heuristic codon choices for generated amino acid sequences. Experiments show that CodonMPNN retains the performance of previous inverse folding approaches and recovers wild-type codons more frequently than baselines. Furthermore, CodonMPNN has a higher likelihood of generating high-fitness codon sequences than low-fitness codon sequences for the same protein sequence. Code is available at https://github.com/HannesStark/CodonMPNN.<|reference_end|>
|
arxiv
|
@article{stark2024codonmpnn,
title={CodonMPNN for Organism Specific and Codon Optimal Inverse Folding},
author={Hannes Stark, Umesh Padia, Julia Balla, Cameron Diao, George Church},
journal={arXiv preprint arXiv:2409.17265},
year={2024},
archivePrefix={arXiv},
eprint={2409.17265},
primaryClass={cs.LG q-bio.QM}
}
|
stark2024codonmpnn
|
arxiv-661988
|
2409.17266
|
AAPM: Large Language Model Agent-based Asset Pricing Models
|
<|reference_start|>AAPM: Large Language Model Agent-based Asset Pricing Models: In this study, we propose a novel asset pricing approach, LLM Agent-based Asset Pricing Models (AAPM), which fuses qualitative discretionary investment analysis from LLM agents and quantitative manual financial economic factors to predict excess asset returns. The experimental results show that our approach outperforms machine learning-based asset pricing baselines in portfolio optimization and asset pricing errors. Specifically, the Sharpe ratio and average $|\alpha|$ for anomaly portfolios improved significantly by 9.6\% and 10.8\% respectively. In addition, we conducted extensive ablation studies on our model and analysis of the data to reveal further insights into the proposed method.<|reference_end|>
|
arxiv
|
@article{cheng2024aapm:,
title={AAPM: Large Language Model Agent-based Asset Pricing Models},
author={Junyan Cheng, Peter Chin},
journal={arXiv preprint arXiv:2409.17266},
year={2024},
archivePrefix={arXiv},
eprint={2409.17266},
primaryClass={cs.AI cs.CE}
}
|
cheng2024aapm:
|
arxiv-661989
|
2409.17267
|
Model aggregation: minimizing empirical variance outperforms minimizing empirical error
|
<|reference_start|>Model aggregation: minimizing empirical variance outperforms minimizing empirical error: Whether deterministic or stochastic, models can be viewed as functions designed to approximate a specific quantity of interest. We propose a data-driven framework that aggregates predictions from diverse models into a single, more accurate output. This aggregation approach exploits each model's strengths to enhance overall accuracy. It is non-intrusive (treating models as black-box functions), model-agnostic, requires minimal assumptions, and can combine outputs from a wide range of models, including those from machine learning and numerical solvers. We argue that the aggregation process should be point-wise linear and propose two methods to find an optimal aggregate: Minimal Error Aggregation (MEA), which minimizes the aggregate's prediction error, and Minimal Variance Aggregation (MVA), which minimizes its variance. While MEA is inherently more accurate when correlations between models and the target quantity are perfectly known, Minimal Empirical Variance Aggregation (MEVA), an empirical version of MVA, consistently outperforms Minimal Empirical Error Aggregation (MEEA), the empirical counterpart of MEA, when these correlations must be estimated from data. The key difference is that MEVA constructs an aggregate by estimating model errors, while MEEA treats the models as features for direct interpolation of the quantity of interest. This makes MEEA more susceptible to overfitting and poor generalization, where the aggregate may underperform individual models during testing. We demonstrate the versatility and effectiveness of our framework in various applications, such as data science and partial differential equations, showing how it successfully integrates traditional solvers with machine learning models to improve both robustness and accuracy.<|reference_end|>
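A minimal numerical contrast between the two empirical aggregates, under one simple reading of the abstract (the paper's estimators are more elaborate): MEEA regresses the target directly on model outputs, while the MEVA-style aggregate estimates each model's error statistics and weights accordingly.

```python
# Sketch under stated assumptions, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(1)
n_tr, n_te, m = 30, 1000, 5
truth = lambda x: np.sin(3 * x)

x_tr = rng.uniform(-1, 1, n_tr)
x_te = rng.uniform(-1, 1, n_te)

def model_preds(x):
    """m black-box 'models': the truth plus model-specific bias and noise."""
    return np.stack([truth(x) + 0.1 * k + rng.normal(0, 0.05 * (k + 1), x.shape)
                     for k in range(m)])

P_tr, P_te = model_preds(x_tr), model_preds(x_te)

# MEEA: treat model outputs as features and least-squares fit the target.
w = np.linalg.lstsq(P_tr.T, truth(x_tr), rcond=None)[0]
meea = w @ P_te

# MEVA (one simple instance): estimate each model's bias and error
# variance on the training data, debias, then inverse-variance weight.
err = P_tr - truth(x_tr)
bias = err.mean(axis=1)
var = err.var(axis=1) + 1e-12
v = (1 / var) / (1 / var).sum()
meva = v @ (P_te - bias[:, None])

mse = lambda p: float(((p - truth(x_te)) ** 2).mean())
print({"MEEA": mse(meea), "MEVA": mse(meva)})
```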
|
arxiv
|
@article{bourdais2024model,
title={Model aggregation: minimizing empirical variance outperforms minimizing
empirical error},
author={Théo Bourdais and Houman Owhadi},
journal={arXiv preprint arXiv:2409.17267},
year={2024},
archivePrefix={arXiv},
eprint={2409.17267},
primaryClass={cs.LG cs.AI cs.NA math.NA stat.ML}
}
|
bourdais2024model
|
arxiv-661990
|
2409.17270
|
Proof of Thought : Neurosymbolic Program Synthesis allows Robust and Interpretable Reasoning
|
<|reference_start|>Proof of Thought : Neurosymbolic Program Synthesis allows Robust and Interpretable Reasoning: Large Language Models (LLMs) have revolutionized natural language processing, yet they struggle with inconsistent reasoning, particularly in novel domains and complex logical sequences. This research introduces Proof of Thought, a framework that enhances the reliability and transparency of LLM outputs. Our approach bridges LLM-generated ideas with formal logic verification, employing a custom interpreter to convert LLM outputs into First Order Logic constructs for theorem prover scrutiny. Central to our method is an intermediary JSON-based Domain-Specific Language, which by design balances precise logical structures with intuitive human concepts. This hybrid representation enables both rigorous validation and accessible human comprehension of LLM reasoning processes. Key contributions include a robust type system with sort management for enhanced logical integrity, explicit representation of rules for clear distinction between factual and inferential knowledge, and a flexible architecture that allows for easy extension to various domain-specific applications. We demonstrate Proof of Thought's effectiveness through benchmarking on StrategyQA and a novel multimodal reasoning task, showing improved performance in open-ended scenarios. By providing verifiable and interpretable results, our technique addresses critical needs for AI system accountability and sets a foundation for human-in-the-loop oversight in high-stakes domains.<|reference_end|>
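A toy version of the verification step, assuming the z3-solver package; the JSON schema below is invented for illustration and is not the paper's DSL:

```python
# Minimal sketch: check an LLM-asserted rule set for logical consistency.
import json
from z3 import Bool, Implies, And, Not, Solver, sat

spec = json.loads("""
{
  "facts": ["is_bird", "is_penguin"],
  "rules": [
    {"if": ["is_bird"], "then": "can_fly"},
    {"if": ["is_penguin"], "then": "not can_fly"}
  ]
}
""")

syms = {}
def atom(name):
    """Map a (possibly negated) proposition name to a Z3 literal."""
    neg = name.startswith("not ")
    base = name[4:] if neg else name
    lit = syms.setdefault(base, Bool(base))
    return Not(lit) if neg else lit

s = Solver()
for fact in spec["facts"]:
    s.add(atom(fact))
for rule in spec["rules"]:
    body = [atom(a) for a in rule["if"]]
    s.add(Implies(body[0] if len(body) == 1 else And(*body), atom(rule["then"])))

# A satisfiable check would pass the asserted reasoning; this penguin
# example is unsatisfiable, so the faulty chain gets flagged.
print(s.check() == sat)   # False
```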
|
arxiv
|
@article{ganguly2024proof,
title={Proof of Thought : Neurosymbolic Program Synthesis allows Robust and
Interpretable Reasoning},
author={Debargha Ganguly, Srinivasan Iyengar, Vipin Chaudhary and Shivkumar
Kalyanaraman},
journal={arXiv preprint arXiv:2409.17270},
year={2024},
archivePrefix={arXiv},
eprint={2409.17270},
primaryClass={cs.AI cs.CL cs.LG cs.LO cs.NE}
}
|
ganguly2024proof
|
arxiv-661991
|
2409.17272
|
Design and development of desktop braille printing machine at Fablab Nepal
|
<|reference_start|>Design and development of desktop braille printing machine at Fablab Nepal: The development of a desktop Braille printing machine aims to create an affordable, user-friendly device for visually impaired users. This document outlines the entire process, from research and requirement analysis to distribution and support, leveraging the content and guidelines from the GitHub repository, https://github.com/fablabnepal1/Desktop-Braille-Printing-Machine.<|reference_end|>
|
arxiv
|
@article{ghimire2024design,
title={Design and development of desktop braille printing machine at Fablab
Nepal},
author={Daya Bandhu Ghimire, Pallab Shrestha},
journal={arXiv preprint arXiv:2409.17272},
year={2024},
archivePrefix={arXiv},
eprint={2409.17272},
primaryClass={cs.HC cs.CY}
}
|
ghimire2024design
|
arxiv-661992
|
2409.17273
|
An Integrated Deep Learning Framework for Effective Brain Tumor Localization, Segmentation, and Classification from Magnetic Resonance Images
|
<|reference_start|>An Integrated Deep Learning Framework for Effective Brain Tumor Localization, Segmentation, and Classification from Magnetic Resonance Images: Tumors in the brain result from abnormal cell growth within the brain tissue, arising from various types of brain cells. When left undiagnosed, they lead to severe neurological deficits such as cognitive impairment, motor dysfunction, and sensory loss. As the tumor grows, it causes an increase in intracranial pressure, potentially leading to life-threatening complications such as brain herniation. Therefore, early detection and treatment are necessary to manage the complications caused by such tumors to slow down their growth. Numerous works involving deep learning (DL) and artificial intelligence (AI) are being carried out to assist physicians in early diagnosis by utilizing the scans obtained through Magnetic Resonance Imaging (MRI). Our research proposes DL frameworks for localizing, segmenting, and classifying the grade of these gliomas from MRI images to solve this critical issue. In our localization framework, we enhance the LinkNet framework with a VGG19-inspired encoder architecture for improved multimodal tumor feature extraction, along with spatial and graph attention mechanisms to refine feature focus and inter-feature relationships. Following this, we integrated the SeResNet101 CNN model as the encoder backbone into the LinkNet framework for tumor segmentation, which achieved an IoU Score of 96%. To classify the segmented tumors, we combined the SeResNet152 feature extractor with an Adaptive Boosting classifier, which yielded an accuracy of 98.53%. Our proposed models demonstrated promising results, with the potential to advance medical AI by enabling early diagnosis and providing more accurate treatment options for patients.<|reference_end|>
|
arxiv
|
@article{v2024an,
title={An Integrated Deep Learning Framework for Effective Brain Tumor
Localization, Segmentation, and Classification from Magnetic Resonance Images},
author={Pandiyaraju V, Shravan Venkatraman, Abeshek A, Aravintakshan S A,
Pavan Kumar S, Madhan S},
journal={arXiv preprint arXiv:2409.17273},
year={2024},
archivePrefix={arXiv},
eprint={2409.17273},
primaryClass={eess.IV cs.CV cs.LG}
}
|
v2024an
|
arxiv-661993
|
2409.17275
|
On the Vulnerability of Applying Retrieval-Augmented Generation within Knowledge-Intensive Application Domains
|
<|reference_start|>On the Vulnerability of Applying Retrieval-Augmented Generation within Knowledge-Intensive Application Domains: Retrieval-Augmented Generation (RAG) has been empirically shown to enhance the performance of large language models (LLMs) in knowledge-intensive domains such as healthcare, finance, and legal contexts. Given a query, RAG retrieves relevant documents from a corpus and integrates them into the LLMs' generation process. In this study, we investigate the adversarial robustness of RAG, focusing specifically on examining the retrieval system. First, across 225 different setup combinations of corpus, retriever, query, and targeted information, we show that retrieval systems are vulnerable to universal poisoning attacks in medical Q\&A. In such attacks, adversaries generate poisoned documents containing a broad spectrum of targeted information, such as personally identifiable information. When these poisoned documents are inserted into a corpus, they can be accurately retrieved by any user, as long as attacker-specified queries are used. To understand this vulnerability, we analyzed the deviation of the poisoned document's embedding from the query's embedding and found that it follows a pattern that retains high similarity between the poisoned document and the query, thereby enabling precise retrieval. Based on these findings, we develop a new detection-based defense to ensure the safe use of RAG. Through extensive experiments spanning various Q\&A domains, we observed that our proposed method consistently achieves excellent detection rates in nearly all cases.<|reference_end|>
|
arxiv
|
@article{xian2024on,
title={On the Vulnerability of Applying Retrieval-Augmented Generation within
Knowledge-Intensive Application Domains},
author={Xun Xian and Ganghua Wang and Xuan Bi and Jayanth Srinivasa and Ashish
Kundu and Charles Fleming and Mingyi Hong and Jie Ding},
journal={arXiv preprint arXiv:2409.17275},
year={2024},
archivePrefix={arXiv},
eprint={2409.17275},
primaryClass={cs.CR cs.AI cs.CL cs.DB cs.ET cs.IR cs.LG}
}
|
xian2024on
|
arxiv-661994
|
2409.17276
|
Multiview Canonical Correlation Analysis for Automatic Pathological Speech Detection
|
<|reference_start|>Multiview Canonical Correlation Analysis for Automatic Pathological Speech Detection: Recently proposed automatic pathological speech detection approaches rely on spectrogram input representations or wav2vec2 embeddings. These representations may contain pathology-irrelevant uncorrelated information, such as changing phonetic content or variations in speaking style across time, which can adversely affect classification performance. To address this issue, we propose to use Multiview Canonical Correlation Analysis (MCCA) on these input representations prior to automatic pathological speech detection. Our results demonstrate that, unlike other dimensionality reduction techniques, the use of MCCA leads to a considerable improvement in pathological speech detection performance by eliminating uncorrelated information present in the input representations. Employing MCCA with traditional classifiers yields comparable or higher performance than using sophisticated architectures, while preserving the representation structure and providing interpretability.<|reference_end|>
|
arxiv
|
@article{kaloga2024multiview,
title={Multiview Canonical Correlation Analysis for Automatic Pathological
Speech Detection},
author={Yacouba Kaloga and Shakeel A. Sheikh and Ina Kodrasi},
journal={arXiv preprint arXiv:2409.17276},
year={2024},
archivePrefix={arXiv},
eprint={2409.17276},
primaryClass={eess.AS cs.LG cs.SD}
}
|
kaloga2024multiview
|
arxiv-661995
|
2409.17277
|
Building Real-time Awareness of Out-of-distribution in Trajectory Prediction for Autonomous Vehicles
|
<|reference_start|>Building Real-time Awareness of Out-of-distribution in Trajectory Prediction for Autonomous Vehicles: Trajectory prediction describes the motions of surrounding moving obstacles for an autonomous vehicle; it plays a crucial role in enabling timely decision-making, such as collision avoidance and trajectory replanning. Accurate trajectory planning is the key to reliable vehicle deployments in open-world environments, where unstructured obstacles bring in uncertainties that are impossible to fully capture with training data. For traditional machine learning tasks, such uncertainties are often addressed reasonably well via methods such as continual learning. On the one hand, naively applying those methods to trajectory prediction can result in continuous data collection and frequent model updates, which can be resource-intensive. On the other hand, the predicted trajectories can be far away from the true trajectories, leading to unsafe decision-making. In this paper, we aim to establish real-time awareness of out-of-distribution in trajectory prediction for autonomous vehicles. We focus on the challenging and practically relevant setting where the out-of-distribution is deceptive, that is, one not easily detectable by human intuition. Drawing on the well-established techniques of sequential analysis, we build real-time awareness of out-of-distribution by monitoring prediction errors using quickest change point detection (QCD). Our solutions are lightweight and can handle the occurrence of out-of-distribution at any time during trajectory prediction inference. Experimental results on multiple real-world datasets using a benchmark trajectory prediction model demonstrate the effectiveness of our methods.<|reference_end|>
|
arxiv
|
@article{tongfei2024building,
title={Building Real-time Awareness of Out-of-distribution in Trajectory
Prediction for Autonomous Vehicles},
author={Tongfei (Felicia) Guo and Taposh Banerjee and Rui Liu and Lili Su},
journal={arXiv preprint arXiv:2409.17277},
year={2024},
archivePrefix={arXiv},
eprint={2409.17277},
primaryClass={cs.RO cs.LG}
}
|
tongfei2024building
|
arxiv-661996
|
2409.17279
|
SHEATH: Defending Horizontal Collaboration for Distributed CNNs against Adversarial Noise
|
<|reference_start|>SHEATH: Defending Horizontal Collaboration for Distributed CNNs against Adversarial Noise: As edge computing and the Internet of Things (IoT) expand, horizontal collaboration (HC) emerges as a distributed data processing solution for resource-constrained devices. In particular, a convolutional neural network (CNN) model can be deployed on multiple IoT devices, allowing distributed inference execution for image recognition while ensuring model and data privacy. Yet, this distributed architecture remains vulnerable to adversaries who want to make subtle alterations that impact the model, even if they lack access to the entire model. Such vulnerabilities can have severe implications for various sectors, including healthcare, military, and autonomous systems. However, security solutions for these vulnerabilities have not been explored. This paper presents a novel framework for Secure Horizontal Edge with Adversarial Threat Handling (SHEATH) to detect adversarial noise and eliminate its effect on CNN inference by recovering the original feature maps. Specifically, SHEATH aims to address vulnerabilities without requiring complete knowledge of the CNN model in HC edge architectures based on sequential partitioning. It ensures data and model integrity, offering security against adversarial attacks in diverse HC environments. Our evaluations demonstrate SHEATH's adaptability and effectiveness across diverse CNN configurations.<|reference_end|>
|
arxiv
|
@article{asif2024sheath:,
title={SHEATH: Defending Horizontal Collaboration for Distributed CNNs against
Adversarial Noise},
author={Muneeba Asif and Mohammad Kumail Kazmi and Mohammad Ashiqur Rahman and
Syed Rafay Hasan and Soamar Homsi},
journal={arXiv preprint arXiv:2409.17279},
year={2024},
archivePrefix={arXiv},
eprint={2409.17279},
primaryClass={cs.CR cs.DC}
}
|
asif2024sheath:
|
arxiv-661997
|
2409.17280
|
Disco4D: Disentangled 4D Human Generation and Animation from a Single Image
|
<|reference_start|>Disco4D: Disentangled 4D Human Generation and Animation from a Single Image: We present \textbf{Disco4D}, a novel Gaussian Splatting framework for 4D human generation and animation from a single image. Unlike existing methods, Disco4D distinctively disentangles clothing (with Gaussian models) from the human body (with the SMPL-X model), significantly enhancing the generation details and flexibility. It has the following technical innovations. \textbf{1)} Disco4D learns to efficiently fit the clothing Gaussians over the SMPL-X Gaussians. \textbf{2)} It adopts diffusion models to enhance the 3D generation process, \textit{e.g.}, modeling occluded parts not visible in the input image. \textbf{3)} It learns an identity encoding for each clothing Gaussian to facilitate the separation and extraction of clothing assets. Furthermore, Disco4D naturally supports 4D human animation with vivid dynamics. Extensive experiments demonstrate the superiority of Disco4D on 4D human generation and animation tasks. Our visualizations can be found at \url{https://disco-4d.github.io/}.<|reference_end|>
|
arxiv
|
@article{pang2024disco4d:,
title={Disco4D: Disentangled 4D Human Generation and Animation from a Single
Image},
author={Hui En Pang and Shuai Liu and Zhongang Cai and Lei Yang and Tianwei
Zhang and Ziwei Liu},
journal={arXiv preprint arXiv:2409.17280},
year={2024},
archivePrefix={arXiv},
eprint={2409.17280},
primaryClass={cs.CV}
}
|
pang2024disco4d:
|
arxiv-661998
|
2409.17282
|
Memory Networks: Towards Fully Biologically Plausible Learning
|
<|reference_start|>Memory Networks: Towards Fully Biologically Plausible Learning: The field of artificial intelligence faces significant challenges in achieving both biological plausibility and computational efficiency, particularly in visual learning tasks. Current artificial neural networks, such as convolutional neural networks, rely on techniques like backpropagation and weight sharing, which do not align with the brain's natural information processing methods. To address these issues, we propose the Memory Network, a model inspired by biological principles that avoids backpropagation and convolutions, and operates in a single pass. This approach enables rapid and efficient learning, mimicking the brain's ability to adapt quickly with minimal exposure to data. Our experiments demonstrate that the Memory Network achieves efficient and biologically plausible learning, showing strong performance on simpler datasets like MNIST. However, further refinement is needed for the model to handle more complex datasets such as CIFAR10, highlighting the need to develop new algorithms and techniques that closely align with biological processes while maintaining computational efficiency.<|reference_end|>
|
arxiv
|
@article{ruiz2024memory,
title={Memory Networks: Towards Fully Biologically Plausible Learning},
author={Jacobo Ruiz and Manas Gupta},
journal={arXiv preprint arXiv:2409.17282},
year={2024},
archivePrefix={arXiv},
eprint={2409.17282},
primaryClass={cs.LG cs.AI cs.NE}
}
|
ruiz2024memory
|
arxiv-661999
|
2409.17283
|
Investigating Privacy Attacks in the Gray-Box Setting to Enhance Collaborative Learning Schemes
|
<|reference_start|>Investigating Privacy Attacks in the Gray-Box Setting to Enhance Collaborative Learning Schemes: The notion that collaborative machine learning can ensure privacy by just withholding the raw data is widely acknowledged to be flawed. Over the past seven years, the literature has revealed several privacy attacks that enable adversaries to extract information about a model's training dataset by exploiting access to model parameters during or after training. In this work, we study privacy attacks in the gray-box setting, where the attacker has only limited access - in terms of view and actions - to the model. The findings of our investigation provide new insights for the development of privacy-preserving collaborative learning solutions. We deploy SmartCryptNN, a framework that tailors homomorphic encryption to protect the portions of the model posing higher privacy risks. Our solution offers a trade-off between privacy and efficiency, which varies based on the extent and selection of the model components we choose to protect. We explore it on dense neural networks, where, through extensive evaluation of diverse datasets and architectures, we uncover instances where a favorable sweet spot in the trade-off can be achieved by safeguarding only a single layer of the network. In one such instance, our approach trains ~4 times faster compared to fully encrypted solutions, while reducing membership leakage by 17.8 times compared to plaintext solutions.<|reference_end|>
|
arxiv
|
@article{mazzone2024investigating,
title={Investigating Privacy Attacks in the Gray-Box Setting to Enhance
Collaborative Learning Schemes},
author={Federico Mazzone and Ahmad Al Badawi and Yuriy Polyakov and Maarten
Everts and Florian Hahn and Andreas Peter},
journal={arXiv preprint arXiv:2409.17283},
year={2024},
archivePrefix={arXiv},
eprint={2409.17283},
primaryClass={cs.CR}
}
|
mazzone2024investigating
|
arxiv-662000
|
2409.17285
|
SpoofCeleb: Speech Deepfake Detection and SASV In The Wild
|
<|reference_start|>SpoofCeleb: Speech Deepfake Detection and SASV In The Wild: This paper introduces SpoofCeleb, a dataset designed for Speech Deepfake Detection (SDD) and Spoofing-robust Automatic Speaker Verification (SASV), utilizing source data from real-world conditions and spoofing attacks generated by Text-To-Speech (TTS) systems also trained on the same real-world data. Training robust recognition systems requires speech data recorded in varied acoustic environments with different levels of noise. However, existing datasets typically include clean, high-quality recordings (bona fide data) due to the requirements for TTS training; studio-quality or well-recorded read speech is typically necessary to train TTS models. Existing SDD datasets also have limited usefulness for training SASV models due to insufficient speaker diversity. We present SpoofCeleb, which leverages a fully automated pipeline that processes the VoxCeleb1 dataset, transforming it into a suitable form for TTS training. We subsequently train 23 contemporary TTS systems. The resulting SpoofCeleb dataset comprises over 2.5 million utterances from 1,251 unique speakers, collected under natural, real-world conditions. The dataset includes carefully partitioned training, validation, and evaluation sets with well-controlled experimental protocols. We provide baseline results for both SDD and SASV tasks. All data, protocols, and baselines are publicly available at https://jungjee.github.io/spoofceleb.<|reference_end|>
|
arxiv
|
@article{jung2024spoofceleb:,
title={SpoofCeleb: Speech Deepfake Detection and SASV In The Wild},
author={Jee-weon Jung and Yihan Wu and Xin Wang and Ji-Hoon Kim and Soumi Maiti
and Yuta Matsunaga and Hye-jin Shim and Jinchuan Tian and Nicholas Evans and
Joon Son Chung and Wangyou Zhang and Seyun Um and Shinnosuke Takamichi and
Shinji Watanabe},
journal={arXiv preprint arXiv:2409.17285},
year={2024},
archivePrefix={arXiv},
eprint={2409.17285},
primaryClass={cs.SD cs.AI eess.AS}
}
|
jung2024spoofceleb:
|