Dataset columns:
corpus_id — string, 7–12 chars
paper_id — string, 9–16 chars
title — string, 1–261 chars
abstract — string, 70–4.02k chars
source — string, 1 class
bibtex — string, 208–20.9k chars
citation_key — string, 6–100 chars
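The records below follow this schema. As a minimal sketch (none of this code ships with the dataset; `validate_row`, its checks, and the sample values are illustrative assumptions), a row could be sanity-checked before use like so:

```python
# Hypothetical validator for one row of this corpus, assuming the column
# names listed in the schema above.
REQUIRED_FIELDS = [
    "corpus_id", "paper_id", "title", "abstract",
    "source", "bibtex", "citation_key",
]

def validate_row(row: dict) -> list:
    """Return a list of problems found in a single dataset row (empty = OK)."""
    problems = []
    for field in REQUIRED_FIELDS:
        value = row.get(field)
        if not isinstance(value, str) or not value:
            problems.append(f"missing or non-string field: {field}")
    bibtex = row.get("bibtex", "")
    # Every bibtex value in this dump is a single @article entry.
    if not bibtex.lstrip().startswith("@article{"):
        problems.append("bibtex does not start with @article{")
    else:
        # The citation_key column should equal the key inside the bibtex entry.
        key = bibtex.split("{", 1)[-1].split(",", 1)[0].strip()
        if key != row.get("citation_key"):
            problems.append("citation_key does not match bibtex key")
    return problems
```

A check like the key/`citation_key` cross-match is worth having because the two fields are stored redundantly and can drift apart.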
arxiv-662801
2409.18753
Enhancing Explainability in Multimodal Large Language Models Using Ontological Context
<|reference_start|>Enhancing Explainability in Multimodal Large Language Models Using Ontological Context: Recently, there has been a growing interest in Multimodal Large Language Models (MLLMs) due to their remarkable potential in various tasks integrating different modalities, such as image and text, as well as applications such as image captioning and visual question answering. However, such models still face challenges in accurately captioning and interpreting specific visual concepts and classes, particularly in domain-specific applications. We argue that integrating domain knowledge in the form of an ontology can significantly address these issues. In this work, as a proof of concept, we propose a new framework that combines ontology with MLLMs to classify images of plant diseases. Our method uses concepts about plant diseases from an existing disease ontology to query MLLMs and extract relevant visual concepts from images. Then, we use the reasoning capabilities of the ontology to classify the disease according to the identified concepts. Ensuring that the model accurately uses the concepts describing the disease is crucial in domain-specific applications. By employing an ontology, we can assist in verifying this alignment. Additionally, using the ontology's inference capabilities increases transparency, explainability, and trust in the decision-making process while serving as a judge by checking if the annotations of the concepts by MLLMs are aligned with those in the ontology and displaying the rationales behind their errors. Our framework offers a new direction for synergizing ontologies and MLLMs, supported by an empirical study using different well-known MLLMs.<|reference_end|>
arxiv
@article{amara2024enhancing, title={Enhancing Explainability in Multimodal Large Language Models Using Ontological Context}, author={Jihen Amara and Birgitta K{\"o}nig-Ries and Sheeba Samuel}, journal={arXiv preprint arXiv:2409.18753}, year={2024}, archivePrefix={arXiv}, eprint={2409.18753}, primaryClass={cs.CV} }
amara2024enhancing
arxiv-662802
2409.18755
Transparency evaluation for the Kinematic Design of the Harnesses through Human-Exoskeleton Interaction Modeling
<|reference_start|>Transparency evaluation for the Kinematic Design of the Harnesses through Human-Exoskeleton Interaction Modeling: Lower Limb Exoskeletons (LLEs) are wearable robots that provide mechanical power to the user. Human-exoskeleton (HE) connections must preserve the user's natural behavior during the interaction, avoiding undesired forces. Therefore, numerous works focus on their minimization. Given the inherent complications of repeatedly prototyping and experimentally testing a device, modeling the exoskeleton and its physical interaction with the user emerges as a valuable approach for assessing the design effects. This paper proposes a novel method to compare different exoskeleton configurations with a flexible simulation tool. This approach contemplates simulating the dynamics of the device, including its interaction with the wearer, to evaluate multiple connection mechanism designs along with the kinematics and actuation of the LLE. This evaluation is based on the minimization of the interaction wrenches through an optimization process that includes the impedance parameters at the interfaces as optimization variables and the similarity of the LLE's joint variables trajectories with the motion of the wearer's articulations. Exploratory tests are conducted using the Wearable Walker LLE in different configurations and measuring the interaction forces. Experimental data are then compared to the optimization outcomes, proving that the proposed method provides contact wrench estimations consistent with the collected measurements and previous outcomes from the literature. Copyright 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.<|reference_end|>
arxiv
@article{bezzini2024transparency, title={Transparency evaluation for the Kinematic Design of the Harnesses through Human-Exoskeleton Interaction Modeling}, author={Riccardo Bezzini and Carlo Alberto Avizzano and Francesco Porcini and Alessandro Filippeschi}, journal={arXiv preprint arXiv:2409.18755}, year={2024}, archivePrefix={arXiv}, eprint={2409.18755}, primaryClass={cs.RO cs.SY eess.SY} }
bezzini2024transparency
arxiv-662803
2409.18757
$L_2$-approximation using randomized lattice algorithms
<|reference_start|>$L_2$-approximation using randomized lattice algorithms: We propose a randomized lattice algorithm for approximating multivariate periodic functions over the $d$-dimensional unit cube from the weighted Korobov space with mixed smoothness $\alpha > 1/2$ and product weights $\gamma_1,\gamma_2,\ldots\in [0,1]$. Building upon the deterministic lattice algorithm by Kuo, Sloan, and Wo\'{z}niakowski (2006), we incorporate a randomized quadrature rule by Dick, Goda, and Suzuki (2022) to accelerate the convergence rate. This randomization involves drawing the number of points for function evaluations randomly, and selecting a good generating vector for rank-1 lattice points using the randomized component-by-component algorithm. We prove that our randomized algorithm achieves a worst-case root mean squared $L_2$-approximation error of order $M^{-\alpha/2 - 1/8 + \varepsilon}$ for an arbitrarily small $\varepsilon > 0$, where $M$ denotes the maximum number of function evaluations, and that the error bound is independent of the dimension $d$ if the weights satisfy $\sum_{j=1}^\infty \gamma_j^{1/\alpha} < \infty$. Our upper bound converges faster than a lower bound on the worst-case $L_2$-approximation error for deterministic rank-1 lattice-based approximation proved by Byrenheid, K\"{a}mmerer, Ullrich, and Volkmer (2017). We also show a lower error bound of order $M^{-\alpha/2-1/2}$ for our randomized algorithm, leaving a slight gap between the upper and lower bounds open for future research.<|reference_end|>
arxiv
@article{cai2024l2approximation, title={$L_2$-approximation using randomized lattice algorithms}, author={Mou Cai and Takashi Goda and Yoshihito Kazashi}, journal={arXiv preprint arXiv:2409.18757}, year={2024}, archivePrefix={arXiv}, eprint={2409.18757}, primaryClass={math.NA cs.NA} }
cai2024l2approximation
arxiv-662804
2409.18761
Geometric deep learning for galaxy-halo connection: a case study for galaxy intrinsic alignments
<|reference_start|>Geometric deep learning for galaxy-halo connection: a case study for galaxy intrinsic alignments: Forthcoming cosmological imaging surveys, such as the Rubin Observatory LSST, require large-scale simulations encompassing realistic galaxy populations for a variety of scientific applications. Of particular concern is the phenomenon of intrinsic alignments (IA), whereby galaxies orient themselves towards overdensities, potentially introducing significant systematic biases in weak gravitational lensing analyses if they are not properly modeled. Due to computational constraints, simulating the intricate details of galaxy formation and evolution relevant to IA across vast volumes is impractical. As an alternative, we propose a Deep Generative Model trained on the IllustrisTNG-100 simulation to sample 3D galaxy shapes and orientations to accurately reproduce intrinsic alignments along with correlated scalar features. We model the cosmic web as a set of graphs, each graph representing a halo with nodes representing the subhalos/galaxies. The architecture consists of a SO(3) $\times$ $\mathbb{R}^n$ diffusion generative model, for galaxy orientations and $n$ scalars, implemented with E(3) equivariant Graph Neural Networks that explicitly respect the Euclidean symmetries of our Universe. The model is able to learn and predict features such as galaxy orientations that are statistically consistent with the reference simulation. Notably, our model demonstrates the ability to jointly model Euclidean-valued scalars (galaxy sizes, shapes, and colors) along with non-Euclidean valued SO(3) quantities (galaxy orientations) that are governed by highly complex galactic physics at non-linear scales.<|reference_end|>
arxiv
@article{jagvaral2024geometric, title={Geometric deep learning for galaxy-halo connection: a case study for galaxy intrinsic alignments}, author={Yesukhei Jagvaral and Francois Lanusse and Rachel Mandelbaum}, journal={arXiv preprint arXiv:2409.18761}, year={2024}, archivePrefix={arXiv}, eprint={2409.18761}, primaryClass={astro-ph.GA cs.LG} }
jagvaral2024geometric
arxiv-662805
2409.18764
Charting the Future: Using Chart Question-Answering for Scalable Evaluation of LLM-Driven Data Visualizations
<|reference_start|>Charting the Future: Using Chart Question-Answering for Scalable Evaluation of LLM-Driven Data Visualizations: We propose a novel framework that leverages Visual Question Answering (VQA) models to automate the evaluation of LLM-generated data visualizations. Traditional evaluation methods often rely on human judgment, which is costly and unscalable, or focus solely on data accuracy, neglecting the effectiveness of visual communication. By employing VQA models, we assess data representation quality and the general communicative clarity of charts. Experiments were conducted using two leading VQA benchmark datasets, ChartQA and PlotQA, with visualizations generated by OpenAI's GPT-3.5 Turbo and Meta's Llama 3.1 70B-Instruct models. Our results indicate that LLM-generated charts do not match the accuracy of the original non-LLM-generated charts based on VQA performance measures. Moreover, while our results demonstrate that few-shot prompting significantly boosts the accuracy of chart generation, considerable progress remains to be made before LLMs can fully match the precision of human-generated graphs. This underscores the importance of our work, which expedites the research process by enabling rapid iteration without the need for human annotation, thus accelerating advancements in this field.<|reference_end|>
arxiv
@article{ford2024charting, title={Charting the Future: Using Chart Question-Answering for Scalable Evaluation of LLM-Driven Data Visualizations}, author={James Ford and Xingmeng Zhao and Dan Schumacher and Anthony Rios}, journal={arXiv preprint arXiv:2409.18764}, year={2024}, archivePrefix={arXiv}, eprint={2409.18764}, primaryClass={cs.CV cs.CL} }
ford2024charting
arxiv-662806
2409.18766
Dual Pricing to Prioritize Renewable Energy and Consumer Preferences in Electricity Markets
<|reference_start|>Dual Pricing to Prioritize Renewable Energy and Consumer Preferences in Electricity Markets: Electricity markets currently fail to incorporate preferences of buyers, treating polluting and renewable energy sources as having equal social benefit under a system of uniform clearing prices. Meanwhile, renewable energy is prone to curtailment due to transmission constraints, forcing grid operators to reduce or shut down renewable energy production despite its availability and need. This paper proposes a ``dual pricing mechanism'' which allows buyers to bid both their willingness to pay for electricity, and additionally, their preference for green energy. Designed for use in deregulated electricity markets, this mechanism prioritizes the dispatch of more renewable energy sources according to consumer preferences. Traditional uniform clearing prices, which treat all energy sources equally, do not reflect the growing share of green energy in the power grid and the environmental values of consumers. By allowing load-serving entities to bid their willingness to pay for renewable energy directly into the clearing market, our proposed framework generates distinct pricing signals for green and ``black'' electricity.<|reference_end|>
arxiv
@article{jong2024dual, title={Dual Pricing to Prioritize Renewable Energy and Consumer Preferences in Electricity Markets}, author={Emilie Jong and Samuel Chevalier and Spyros Chatzivasileiadis and Shie Mannor}, journal={arXiv preprint arXiv:2409.18766}, year={2024}, archivePrefix={arXiv}, eprint={2409.18766}, primaryClass={eess.SY cs.SY} }
jong2024dual
arxiv-662807
2409.18768
Learning from Demonstration with Implicit Nonlinear Dynamics Models
<|reference_start|>Learning from Demonstration with Implicit Nonlinear Dynamics Models: Learning from Demonstration (LfD) is a useful paradigm for training policies that solve tasks involving complex motions, such as those encountered in robotic manipulation. In practice, the successful application of LfD requires overcoming error accumulation during policy execution, i.e. the problem of drift due to errors compounding over time and the consequent out-of-distribution behaviours. Existing works seek to address this problem through scaling data collection, correcting policy errors with a human-in-the-loop, temporally ensembling policy predictions or through learning a dynamical system model with convergence guarantees. In this work, we propose and validate an alternative approach to overcoming this issue. Inspired by reservoir computing, we develop a recurrent neural network layer that includes a fixed nonlinear dynamical system with tunable dynamical properties for modelling temporal dynamics. We validate the efficacy of our neural network layer on the task of reproducing human handwriting motions using the LASA Human Handwriting Dataset. Through empirical experiments we demonstrate that incorporating our layer into existing neural network architectures addresses the issue of compounding errors in LfD. Furthermore, we perform a comparative evaluation against existing approaches including a temporal ensemble of policy predictions and an Echo State Network (ESN) implementation. We find that our approach yields greater policy precision and robustness on the handwriting task while also generalising to multiple dynamics regimes and maintaining competitive latency scores.<|reference_end|>
arxiv
@article{fagan2024learning, title={Learning from Demonstration with Implicit Nonlinear Dynamics Models}, author={Peter David Fagan and Subramanian Ramamoorthy}, journal={arXiv preprint arXiv:2409.18768}, year={2024}, archivePrefix={arXiv}, eprint={2409.18768}, primaryClass={cs.AI cs.LG cs.RO cs.SY eess.SY} }
fagan2024learning
arxiv-662808
2409.18769
State-of-the-Art Periorbital Distance Prediction and Disease Classification Using Periorbital Features
<|reference_start|>State-of-the-Art Periorbital Distance Prediction and Disease Classification Using Periorbital Features: Periorbital distances and features around the eyes and lids hold valuable information for disease quantification and monitoring of surgical and medical intervention. These distances are commonly measured manually, a process that is both subjective and highly time-consuming. Here, we set out to develop three deep-learning methods for segmentation and periorbital distance prediction, and to evaluate the utility of periorbital distances for disease classification. The MAE of our deep learning predicted distances was less than or very close to the error observed between trained human annotators. We compared our models to the current state-of-the-art (SOTA) method for periorbital distance prediction and found that our methods outperformed SOTA on all of our datasets on all but one periorbital measurement. We also show that robust segmentation can be achieved on diseased eyes using models trained on open-source, healthy eyes, and that periorbital distances can be used as high-quality features in downstream classification models. Leveraging segmentation networks as intermediary steps in classification has broad implications for increasing the generalizability of classification models in ophthalmic plastic and craniofacial surgery by avoiding the out-of-distribution problem observed in traditional convolutional neural networks.<|reference_end|>
arxiv
@article{nahass2024state-of-the-art, title={State-of-the-Art Periorbital Distance Prediction and Disease Classification Using Periorbital Features}, author={George R. Nahass and Ghasem Yazdanpanah and Madison Cheung and Alex Palacios and Jeffery Peterson and Kevin Heinze and Sasha Hubschman and Chad A. Purnell and Pete Setabutr and Ann Q. Tran and Darvin Yi}, journal={arXiv preprint arXiv:2409.18769}, year={2024}, archivePrefix={arXiv}, eprint={2409.18769}, primaryClass={cs.CV cs.AI} }
nahass2024state-of-the-art
arxiv-662809
2409.18770
Relighting from a Single Image: Datasets and Deep Intrinsic-based Architecture
<|reference_start|>Relighting from a Single Image: Datasets and Deep Intrinsic-based Architecture: Single image scene relighting aims to generate a realistic new version of an input image so that it appears to be illuminated by a new target light condition. Although existing works have explored this problem from various perspectives, generating relit images under arbitrary light conditions remains highly challenging, and related datasets are scarce. Our work addresses this problem from both the dataset and methodological perspectives. We propose two new datasets: a synthetic dataset with the ground truth of intrinsic components and a real dataset collected under laboratory conditions. These datasets alleviate the scarcity of existing datasets. To incorporate physical consistency in the relighting pipeline, we establish a two-stage network based on intrinsic decomposition, giving outputs at intermediate steps, thereby introducing physical constraints. When the training set lacks ground truth for intrinsic decomposition, we introduce an unsupervised module to ensure that the intrinsic outputs are satisfactory. Our method outperforms the state-of-the-art methods in performance, as tested on both existing datasets and our newly developed datasets. Furthermore, pretraining our method or other prior methods using our synthetic dataset can enhance their performance on other datasets. Since our method can accommodate any light conditions, it is capable of producing animated results. The dataset, method, and videos are publicly available.<|reference_end|>
arxiv
@article{yang2024relighting, title={Relighting from a Single Image: Datasets and Deep Intrinsic-based Architecture}, author={Yixiong Yang and Hassan Ahmed Sial and Ramon Baldrich and Maria Vanrell}, journal={arXiv preprint arXiv:2409.18770}, year={2024}, archivePrefix={arXiv}, eprint={2409.18770}, primaryClass={cs.CV} }
yang2024relighting
arxiv-662810
2409.18772
A method of using RSVD in residual calculation of LowBit GEMM
<|reference_start|>A method of using RSVD in residual calculation of LowBit GEMM: The advancement of hardware technology in recent years has brought many possibilities for low-precision applications. However, the use of low precision can introduce significant computational errors, posing a considerable challenge to maintaining computational accuracy. We propose the low-rank residual quantized matrix multiplication (LRQMM) method, which introduces low-rank approximation into residual compensation for dense low-precision quantized matrix multiplication. It can bring a several-fold accuracy improvement with only BLAS-2-level extra time overhead. Moreover, LRQMM is a completely data-free quantization method that requires no additional data for pre-training, and it works only with the low-precision GEMM operator, which makes it easy to couple with other methods. Through experimentation, LRQMM can reduce the error of direct quantized matrix multiplication by 1~2 orders of magnitude; when dealing with larger matrix sizes, the computational speed is reduced by only approximately 20\%. In deep learning networks, LRQMM-4bit achieves 61.8% ImageNet Top-1 accuracy with ResNet-50, while the Direct Quant accuracy is only 8.3%.<|reference_end|>
arxiv
@article{gu2024a, title={A method of using RSVD in residual calculation of LowBit GEMM}, author={Hongyaoxing Gu}, journal={arXiv preprint arXiv:2409.18772}, year={2024}, archivePrefix={arXiv}, eprint={2409.18772}, primaryClass={cs.MS cs.LG} }
gu2024a
arxiv-662811
2409.18775
A POMDP-based hierarchical planning framework for manipulation under pose uncertainty
<|reference_start|>A POMDP-based hierarchical planning framework for manipulation under pose uncertainty: Robots often face challenges in domestic environments where visual feedback is ineffective, such as retrieving objects obstructed by occlusions or finding a light switch in the dark. In these cases, utilizing contacts to localize the target object can be effective. We propose an online planning framework using binary contact signals for manipulation tasks with pose uncertainty, formulated as a Partially Observable Markov Decision Process (POMDP). Naively representing the belief as a particle set makes planning infeasible due to the large uncertainties in domestic settings, as identifying the best sequence of actions requires rolling out thousands of actions across millions of particles, taking significant compute time. To address this, we propose a hierarchical belief representation. Initially, we represent the uncertainty coarsely in a 3D volumetric space. Policies that refine uncertainty in this space are computed and executed, and once uncertainty is sufficiently reduced, the problem is translated back into the particle space for further refinement before task completion. We utilize a closed-loop planning and execution framework with a heuristic-search-based anytime solver that computes partial policies within a limited time budget. The performance of the framework is demonstrated both in real world and in simulation on the high-precision task of inserting a plug into a port using a UR10e manipulator, resolving positional uncertainties up to 50 centimeters and angular uncertainties close to $2\pi$. Experimental results highlight the framework's effectiveness, achieving a 93\% success rate in the real world and over 50\% improvement in solution quality compared to greedy baselines, significantly accelerating planning and enabling real-time solutions for complex problems.<|reference_end|>
arxiv
@article{saleem2024a, title={A POMDP-based hierarchical planning framework for manipulation under pose uncertainty}, author={Muhammad Suhail Saleem and Rishi Veerapaneni and Maxim Likhachev}, journal={arXiv preprint arXiv:2409.18775}, year={2024}, archivePrefix={arXiv}, eprint={2409.18775}, primaryClass={cs.RO} }
saleem2024a
arxiv-662812
2409.18776
Can AI Enhance its Creativity to Beat Humans?
<|reference_start|>Can AI Enhance its Creativity to Beat Humans?: Creativity is a fundamental pillar of human expression and a driving force behind innovation, yet it now stands at a crossroads. As artificial intelligence advances at an astonishing pace, the question arises: can machines match and potentially surpass human creativity? This study investigates the creative performance of artificial intelligence (AI) compared to humans by analyzing the effects of two distinct prompting strategies (a Naive and an Expert AI) on AI and across three different tasks (Text, Draw and Alternative Uses tasks). Human external evaluators have scored creative outputs generated by humans and AI, and these subjective creative scores were complemented with objective measures based on quantitative measurements and NLP tools. The results reveal that AI generally outperforms humans in creative tasks, though this advantage is nuanced by the specific nature of each task and the chosen creativity criteria. Ultimately, while AI demonstrates superior performance in certain creative domains, our results suggest that integrating human feedback is crucial for maximizing AI's creative potential.<|reference_end|>
arxiv
@article{maltese2024can, title={Can AI Enhance its Creativity to Beat Humans?}, author={Anne-Ga{\"e}lle Maltese and Pierre Pelletier and R{\'e}my Guichardaz}, journal={arXiv preprint arXiv:2409.18776}, year={2024}, archivePrefix={arXiv}, eprint={2409.18776}, primaryClass={econ.GN cs.CY q-fin.EC} }
maltese2024can
arxiv-662813
2409.18778
HardCore Generation: Generating Hard UNSAT Problems for Data Augmentation
<|reference_start|>HardCore Generation: Generating Hard UNSAT Problems for Data Augmentation: Efficiently determining the satisfiability of a boolean equation -- known as the SAT problem for brevity -- is crucial in various industrial problems. Recently, the advent of deep learning methods has introduced significant potential for enhancing SAT solving. However, a major barrier to the advancement of this field has been the scarcity of large, realistic datasets. The majority of current public datasets are either randomly generated or extremely limited, containing only a few examples from unrelated problem families. These datasets are inadequate for meaningful training of deep learning methods. In light of this, researchers have started exploring generative techniques to create data that more accurately reflect SAT problems encountered in practical situations. These methods have so far suffered from either the inability to produce challenging SAT problems or time-scalability obstacles. In this paper we address both by identifying and manipulating the key contributors to a problem's ``hardness'', known as cores. Although some previous work has addressed cores, the time costs are unacceptably high due to the expense of traditional heuristic core detection techniques. We introduce a fast core detection procedure that uses a graph neural network. Our empirical results demonstrate that we can efficiently generate problems that remain hard to solve and retain key attributes of the original example problems. We show via experiment that the generated synthetic SAT problems can be used in a data augmentation setting to provide improved prediction of solver runtimes.<|reference_end|>
arxiv
@article{cotnareanu2024hardcore, title={HardCore Generation: Generating Hard UNSAT Problems for Data Augmentation}, author={Joseph Cotnareanu and Zhanguang Zhang and Hui-Ling Zhen and Yingxue Zhang and Mark Coates}, journal={arXiv preprint arXiv:2409.18778}, year={2024}, archivePrefix={arXiv}, eprint={2409.18778}, primaryClass={cs.LG cs.AI} }
cotnareanu2024hardcore
arxiv-662814
2409.18779
Hello SME! Generating Fast Matrix Multiplication Kernels Using the Scalable Matrix Extension
<|reference_start|>Hello SME! Generating Fast Matrix Multiplication Kernels Using the Scalable Matrix Extension: Modern central processing units (CPUs) feature single-instruction, multiple-data pipelines to accelerate compute-intensive floating-point and fixed-point workloads. Traditionally, these pipelines and corresponding instruction set architectures (ISAs) were designed for vector parallelism. In recent years, major hardware vendors have further increased the throughput of their CPUs by introducing matrix units with corresponding ISA extensions. The Scalable Matrix Extension (SME) was announced for the Arm architecture in 2021, and Apple's M4 chip is the first to support SME. This paper presents an in-depth study of SME on M4. Our microbenchmarks determine the maximum floating-point and fixed-point throughput of M4's SME acceleration and study the achievable bandwidth for transfers to and from the matrix registers. Furthermore, we used the insights gained to design a just-in-time code generator for SME-based small matrix multiplications. The results presented show that M4's SME support is FP32-centric, with an achievable throughput of over 2.3 FP32 TFLOPS. To maximize read and write bandwidth, loading and storing to and from the matrix registers must be done in two steps. Our just-in-time generated small matrix multiplication kernels outperform the vendor-optimized BLAS implementation in almost all tested configurations.<|reference_end|>
arxiv
@article{remke2024hello, title={Hello SME! Generating Fast Matrix Multiplication Kernels Using the Scalable Matrix Extension}, author={Stefan Remke and Alexander Breuer}, journal={arXiv preprint arXiv:2409.18779}, year={2024}, archivePrefix={arXiv}, eprint={2409.18779}, primaryClass={cs.DC} }
remke2024hello
arxiv-662815
2409.18783
DualDn: Dual-domain Denoising via Differentiable ISP
<|reference_start|>DualDn: Dual-domain Denoising via Differentiable ISP: Image denoising is a critical component in a camera's Image Signal Processing (ISP) pipeline. There are two typical ways to inject a denoiser into the ISP pipeline: applying a denoiser directly to captured raw frames (raw domain) or to the ISP's output sRGB images (sRGB domain). However, both approaches have their limitations. Residual noise from raw-domain denoising can be amplified by the subsequent ISP processing, and the sRGB domain struggles to handle spatially varying noise since it only sees noise distorted by the ISP. Consequently, most raw or sRGB domain denoising works only for specific noise distributions and ISP configurations. To address these challenges, we propose DualDn, a novel learning-based dual-domain denoising. Unlike previous single-domain denoising, DualDn consists of two denoising networks: one in the raw domain and one in the sRGB domain. The raw domain denoising adapts to sensor-specific noise as well as spatially varying noise levels, while the sRGB domain denoising adapts to ISP variations and removes residual noise amplified by the ISP. Both denoising networks are connected with a differentiable ISP, which is trained end-to-end and discarded during the inference stage. With this design, DualDn achieves greater generalizability compared to most learning-based denoising methods, as it can adapt to different unseen noises, ISP parameters, and even novel ISP pipelines. Experiments show that DualDn achieves state-of-the-art performance and can adapt to different denoising architectures. Moreover, DualDn can be used as a plug-and-play denoising module with real cameras without retraining, and still demonstrate better performance than commercial on-camera denoising. The project website is available at: https://openimaginglab.github.io/DualDn/<|reference_end|>
arxiv
@article{li2024dualdn:, title={DualDn: Dual-domain Denoising via Differentiable ISP}, author={Ruikang Li and Yujin Wang and Shiqi Chen and Fan Zhang and Jinwei Gu and Tianfan Xue}, journal={arXiv preprint arXiv:2409.18783}, year={2024}, archivePrefix={arXiv}, eprint={2409.18783}, primaryClass={eess.IV cs.CV} }
li2024dualdn:
arxiv-662816
2409.18785
Student-Oriented Teacher Knowledge Refinement for Knowledge Distillation
<|reference_start|>Student-Oriented Teacher Knowledge Refinement for Knowledge Distillation: Knowledge distillation has become widely recognized for its ability to transfer knowledge from a large teacher network to a compact and more streamlined student network. Traditional knowledge distillation methods primarily follow a teacher-oriented paradigm that imposes the task of learning the teacher's complex knowledge onto the student network. However, significant disparities in model capacity and architectural design hinder the student's comprehension of the complex knowledge imparted by the teacher, resulting in sub-optimal performance. This paper introduces a novel perspective emphasizing student-oriented and refining the teacher's knowledge to better align with the student's needs, thereby improving knowledge transfer effectiveness. Specifically, we present the Student-Oriented Knowledge Distillation (SoKD), which incorporates a learnable feature augmentation strategy during training to refine the teacher's knowledge of the student dynamically. Furthermore, we deploy the Distinctive Area Detection Module (DAM) to identify areas of mutual interest between the teacher and student, concentrating knowledge transfer within these critical areas to avoid transferring irrelevant information. This customized module ensures a more focused and effective knowledge distillation process. Our approach, functioning as a plug-in, could be integrated with various knowledge distillation methods. Extensive experimental results demonstrate the efficacy and generalizability of our method.<|reference_end|>
arxiv
@article{shen2024student-oriented, title={Student-Oriented Teacher Knowledge Refinement for Knowledge Distillation}, author={Chaomin Shen and Yaomin Huang and Haokun Zhu and Jinsong Fan and Guixu Zhang}, journal={arXiv preprint arXiv:2409.18785}, year={2024}, archivePrefix={arXiv}, eprint={2409.18785}, primaryClass={cs.CV} }
shen2024student-oriented
arxiv-662817
2409.18786
A Survey on the Honesty of Large Language Models
<|reference_start|>A Survey on the Honesty of Large Language Models: Honesty is a fundamental principle for aligning large language models (LLMs) with human values, requiring these models to recognize what they know and don't know and be able to faithfully express their knowledge. Despite this promise, current LLMs still exhibit significant dishonest behaviors, such as confidently presenting wrong answers or failing to express what they know. In addition, research on the honesty of LLMs also faces challenges, including varying definitions of honesty, difficulties in distinguishing between known and unknown knowledge, and a lack of comprehensive understanding of related research. To address these issues, we provide a survey on the honesty of LLMs, covering its clarification, evaluation approaches, and strategies for improvement. Moreover, we offer insights for future research, aiming to inspire further exploration in this important area.<|reference_end|>
arxiv
@article{li2024a, title={A Survey on the Honesty of Large Language Models}, author={Siheng Li and Cheng Yang and Taiqiang Wu and Chufan Shi and Yuji Zhang and Xinyu Zhu and Zesen Cheng and Deng Cai and Mo Yu and Lemao Liu and Jie Zhou and Yujiu Yang and Ngai Wong and Xixin Wu and Wai Lam}, journal={arXiv preprint arXiv:2409.18786}, year={2024}, archivePrefix={arXiv}, eprint={2409.18786}, primaryClass={cs.CL cs.AI} }
li2024a
arxiv-662818
2409.18787
Asymptotic tracking control of dynamic reference over homomorphically encrypted data with finite modulus
<|reference_start|>Asymptotic tracking control of dynamic reference over homomorphically encrypted data with finite modulus: This paper considers a tracking control problem, in which the dynamic controller is encrypted with an additively homomorphic encryption scheme and the output of a process tracks a dynamic reference asymptotically. Our paper is motivated by the following problem: When dealing with both asymptotic tracking and dynamic reference, we find that the control input is generally subject to overflow issues under a finite modulus, though the dynamic controller consists of only integer coefficients. First, we provide a new controller design method such that the coefficients of the tracking controller can be transformed into integers leveraging the zooming-in factor of dynamic quantization. By the Cayley-Hamilton theorem, we represent the control input as linear combination of the previous control inputs. Leveraging the property above, we design an algorithm on the actuator side such that it can restore the control input from the lower bits under a finite modulus. A lower bound of the modulus is also provided. As an extension of the first result, we further solve the problem of unbounded internal state taking place in the actuator. In particular, the actuator can restore the correct control input under the same modulus. A simulation example is provided to verify the control schemes proposed in our paper.<|reference_end|>
arxiv
@article{feng2024asymptotic, title={Asymptotic tracking control of dynamic reference over homomorphically encrypted data with finite modulus}, author={Shuai Feng and Junsoo Kim}, journal={arXiv preprint arXiv:2409.18787}, year={2024}, archivePrefix={arXiv}, eprint={2409.18787}, primaryClass={eess.SY cs.SY} }
feng2024asymptotic
arxiv-662819
2409.18788
Excavating in the Wild: The GOOSE-Ex Dataset for Semantic Segmentation
<|reference_start|>Excavating in the Wild: The GOOSE-Ex Dataset for Semantic Segmentation: The successful deployment of deep learning-based techniques for autonomous systems is highly dependent on the data availability for the respective system in its deployment environment. Especially for unstructured outdoor environments, very few datasets exist for even fewer robotic platforms and scenarios. In an earlier work, we presented the German Outdoor and Offroad Dataset (GOOSE) framework along with 10000 multimodal frames from an offroad vehicle to enhance the perception capabilities in unstructured environments. In this work, we address the generalizability of the GOOSE framework. To accomplish this, we open-source the GOOSE-Ex dataset, which contains an additional 5000 labeled multimodal frames from various completely different environments, recorded on a robotic excavator and a quadruped platform. We perform a comprehensive analysis of the semantic segmentation performance on different platforms and sensor modalities in unseen environments. In addition, we demonstrate how the combined datasets can be utilized for different downstream applications or competitions such as offroad navigation, object manipulation or scene completion. The dataset, its platform documentation and pre-trained state-of-the-art models for offroad perception will be made available on https://goose-dataset.de/.<|reference_end|>
arxiv
@article{hagmanns2024excavating, title={Excavating in the Wild: The GOOSE-Ex Dataset for Semantic Segmentation}, author={Raphael Hagmanns and Peter Mortimer and Miguel Granero and Thorsten Luettel and Janko Petereit}, journal={arXiv preprint arXiv:2409.18788}, year={2024}, archivePrefix={arXiv}, eprint={2409.18788}, primaryClass={cs.RO cs.CV} }
hagmanns2024excavating
arxiv-662820
2409.18792
asQ: parallel-in-time finite element simulations using ParaDiag for geoscientific models and beyond
<|reference_start|>asQ: parallel-in-time finite element simulations using ParaDiag for geoscientific models and beyond: Modern high performance computers are massively parallel; for many PDE applications spatial parallelism saturates long before the computer's capability is reached. Parallel-in-time methods enable further speedup beyond spatial saturation by solving multiple timesteps simultaneously to expose additional parallelism. ParaDiag is a particular approach to parallel-in-time based on preconditioning the simultaneous timestep system with a perturbation that allows block diagonalisation via a Fourier transform in time. In this article, we introduce asQ, a new library for implementing ParaDiag parallel-in-time methods, with a focus on applications in the geosciences, especially weather and climate. asQ is built on Firedrake, a library for the automated solution of finite element models, and the PETSc library of scalable linear and nonlinear solvers. This enables asQ to build ParaDiag solvers for general finite element models and provide a range of solution strategies, making testing a wide array of problems straightforward. We use a quasi-Newton formulation that encompasses a range of ParaDiag methods, and expose building blocks for constructing more complex methods. The performance and flexibility of asQ is demonstrated on a hierarchy of linear and nonlinear atmospheric flow models. We show that ParaDiag can offer promising speedups and that asQ is a productive testbed for further developing these methods.<|reference_end|>
arxiv
@article{hope-collins2024asq:, title={asQ: parallel-in-time finite element simulations using ParaDiag for geoscientific models and beyond}, author={Joshua Hope-Collins and Abdalaziz Hamdan and Werner Bauer and Lawrence Mitchell and Colin Cotter}, journal={arXiv preprint arXiv:2409.18792}, year={2024}, archivePrefix={arXiv}, eprint={2409.18792}, primaryClass={math.NA cs.NA} }
hope-collins2024asq:
arxiv-662821
2409.18794
Open-Nav: Exploring Zero-Shot Vision-and-Language Navigation in Continuous Environment with Open-Source LLMs
<|reference_start|>Open-Nav: Exploring Zero-Shot Vision-and-Language Navigation in Continuous Environment with Open-Source LLMs: Vision-and-Language Navigation (VLN) tasks require an agent to follow textual instructions to navigate through 3D environments. Traditional approaches use supervised learning methods, relying heavily on domain-specific datasets to train VLN models. Recent methods try to utilize closed-source large language models (LLMs) like GPT-4 to solve VLN tasks in zero-shot manners, but face challenges related to expensive token costs and potential data breaches in real-world applications. In this work, we introduce Open-Nav, a novel study that explores open-source LLMs for zero-shot VLN in the continuous environment. Open-Nav employs a spatial-temporal chain-of-thought (CoT) reasoning approach to break down tasks into instruction comprehension, progress estimation, and decision-making. It enhances scene perceptions with fine-grained object and spatial knowledge to improve LLM's reasoning in navigation. Our extensive experiments in both simulated and real-world environments demonstrate that Open-Nav achieves competitive performance compared to using closed-source LLMs.<|reference_end|>
arxiv
@article{qiao2024open-nav:, title={Open-Nav: Exploring Zero-Shot Vision-and-Language Navigation in Continuous Environment with Open-Source LLMs}, author={Yanyuan Qiao and Wenqi Lyu and Hui Wang and Zixu Wang and Zerui Li and Yuan Zhang and Mingkui Tan and Qi Wu}, journal={arXiv preprint arXiv:2409.18794}, year={2024}, archivePrefix={arXiv}, eprint={2409.18794}, primaryClass={cs.RO cs.CV} }
qiao2024open-nav:
arxiv-662822
2409.18796
Hierarchical Federated ADMM
<|reference_start|>Hierarchical Federated ADMM: In this paper, we depart from the widely-used gradient descent-based hierarchical federated learning (FL) algorithms to develop a novel hierarchical FL framework based on the alternating direction method of multipliers (ADMM). Within this framework, we propose two novel FL algorithms, which both use ADMM in the top layer: one that employs ADMM in the lower layer and another that uses the conventional gradient descent-based approach. The proposed framework enhances privacy, and experiments demonstrate the superiority of the proposed algorithms compared to the conventional algorithms in terms of learning convergence and accuracy. Additionally, gradient descent on the lower layer performs well even if the number of local steps is very limited, while ADMM on both layers leads to better performance otherwise.<|reference_end|>
arxiv
@article{azimi-abarghouyi2024hierarchical, title={Hierarchical Federated ADMM}, author={Seyed Mohammad Azimi-Abarghouyi and Nicola Bastianello and Karl H. Johansson and Viktoria Fodor}, journal={arXiv preprint arXiv:2409.18796}, year={2024}, archivePrefix={arXiv}, eprint={2409.18796}, primaryClass={cs.LG cs.AI cs.DC cs.IT cs.SY eess.SY math.IT} }
azimi-abarghouyi2024hierarchical
arxiv-662823
2409.18797
Supervised Learning Model for Key Frame Identification from Cow Teat Videos
<|reference_start|>Supervised Learning Model for Key Frame Identification from Cow Teat Videos: This paper proposes a method for improving the accuracy of mastitis risk assessment in cows using neural networks and video analysis. Mastitis, an infection of the udder tissue, is a critical health problem for cows and can be detected by examining the cow's teat. Traditionally, veterinarians assess the health of a cow's teat during the milking process, but this process is limited in time and can weaken the accuracy of the assessment. In commercial farms, cows are recorded by cameras when they are milked in the milking parlor. This paper uses a neural network to identify key frames in the recorded video where the cow's udder appears intact. These key frames allow veterinarians to have more flexible time to perform health assessments on the teat, increasing their efficiency and accuracy. However, there are challenges in using cow teat video for mastitis risk assessment, such as complex environments, changing cow positions and postures, and difficulty in identifying the udder from the video. To address these challenges, a fusion distance and an ensemble model are proposed to improve the performance (F-score) of identifying key frames from cow teat videos. The results show that these two approaches improve performance compared to using a single distance measure or model.<|reference_end|>
arxiv
@article{wang2024supervised, title={Supervised Learning Model for Key Frame Identification from Cow Teat Videos}, author={Minghao Wang and Pinxue Lin}, journal={arXiv preprint arXiv:2409.18797}, year={2024}, archivePrefix={arXiv}, eprint={2409.18797}, primaryClass={cs.CV cs.AI cs.LG eess.IV} }
wang2024supervised
arxiv-662824
2409.18798
Esports Debut as a Medal Event at 2023 Asian Games: Exploring Public Perceptions with BERTopic and GPT-4 Topic Fine-Tuning
<|reference_start|>Esports Debut as a Medal Event at 2023 Asian Games: Exploring Public Perceptions with BERTopic and GPT-4 Topic Fine-Tuning: This study examined the public opinions of esports at the 2023 Asian Games and value co-creation during the event using an LLM-enhanced BERTopic modeling analysis. We identified five major themes representing public perceptions, as well as how major stakeholders co-created value within and beyond the esports ecosystem. Key findings highlighted the strategic use of social media marketing to influence public opinion and promote esports events and brands, emphasizing the importance of event logistics and infrastructure. Additionally, the study revealed the co-creation value contributed by stakeholders outside the traditional esports ecosystem, particularly in promoting national representation and performance. Our findings supported the ongoing efforts to legitimize esports as a sport, noting that mainstream recognition remains a challenge. The inclusion of esports as a medal event showcased broader acceptance and helped mitigate negative public perceptions. Moreover, contributions from non-traditional stakeholders underscored the value of cross-subcultural collaborations in esports.<|reference_end|>
arxiv
@article{qian2024esports, title={Esports Debut as a Medal Event at 2023 Asian Games: Exploring Public Perceptions with BERTopic and GPT-4 Topic Fine-Tuning}, author={Tyreal Yizhou Qian and Bo Yu and Weizhe Li and Chenglong Xu}, journal={arXiv preprint arXiv:2409.18798}, year={2024}, archivePrefix={arXiv}, eprint={2409.18798}, primaryClass={cs.HC cs.AI cs.LG} }
qian2024esports
arxiv-662825
2409.18799
Drawing the boundaries between Blockchain and Blockchain-like systems: A Comprehensive Survey on Distributed Ledger Technologies
<|reference_start|>Drawing the boundaries between Blockchain and Blockchain-like systems: A Comprehensive Survey on Distributed Ledger Technologies: Bitcoin's global success has led to the rise of blockchain, but many systems labeled as "blockchain" deviate from its core principles, adding complexity to the ecosystem. This survey addresses the need for a comprehensive review and taxonomy to clarify the differences between blockchain and blockchain-like systems. We propose a reference model with four key layers: data, consensus, execution, and application, and introduce a new taxonomy for better classification. Through a qualitative and quantitative analysis of 44 DLT solutions and 26 consensus mechanisms, we highlight key challenges and offer research directions in the field.<|reference_end|>
arxiv
@article{bellaj2024drawing, title={Drawing the boundaries between Blockchain and Blockchain-like systems: A Comprehensive Survey on Distributed Ledger Technologies}, author={Badr Bellaj and Aafaf Ouaddah and Noel Crespi and Abdelatif Mezrioui and Emmanuel Bertin}, journal={arXiv preprint arXiv:2409.18799}, year={2024}, archivePrefix={arXiv}, eprint={2409.18799}, primaryClass={cs.CR cs.DC} }
bellaj2024drawing
arxiv-662826
2409.18800
MiniVLN: Efficient Vision-and-Language Navigation by Progressive Knowledge Distillation
<|reference_start|>MiniVLN: Efficient Vision-and-Language Navigation by Progressive Knowledge Distillation: In recent years, Embodied Artificial Intelligence (Embodied AI) has advanced rapidly, yet the increasing size of models conflicts with the limited computational capabilities of Embodied AI platforms. To address this challenge, we aim to achieve both high model performance and practical deployability. Specifically, we focus on Vision-and-Language Navigation (VLN), a core task in Embodied AI. This paper introduces a two-stage knowledge distillation framework, producing a student model, MiniVLN, and showcasing the significant potential of distillation techniques in developing lightweight models. The proposed method aims to capture fine-grained knowledge during the pretraining phase and navigation-specific knowledge during the fine-tuning phase. Our findings indicate that the two-stage distillation approach is more effective in narrowing the performance gap between the teacher model and the student model compared to single-stage distillation. On the public R2R and REVERIE benchmarks, MiniVLN achieves performance on par with the teacher model while having only about 12% of the teacher model's parameter count.<|reference_end|>
arxiv
@article{zhu2024minivln:, title={MiniVLN: Efficient Vision-and-Language Navigation by Progressive Knowledge Distillation}, author={Junyou Zhu and Yanyuan Qiao and Siqi Zhang and Xingjian He and Qi Wu and Jing Liu}, journal={arXiv preprint arXiv:2409.18800}, year={2024}, archivePrefix={arXiv}, eprint={2409.18800}, primaryClass={cs.CV} }
zhu2024minivln:
arxiv-662827
2409.18804
Convergence of Diffusion Models Under the Manifold Hypothesis in High-Dimensions
<|reference_start|>Convergence of Diffusion Models Under the Manifold Hypothesis in High-Dimensions: Denoising Diffusion Probabilistic Models (DDPM) are powerful state-of-the-art methods used to generate synthetic data from high-dimensional data distributions and are widely used for image, audio and video generation as well as many more applications in science and beyond. The manifold hypothesis states that high-dimensional data often lie on lower-dimensional manifolds within the ambient space, and is widely believed to hold in provided examples. While recent results have provided invaluable insight into how diffusion models adapt to the manifold hypothesis, they do not capture the great empirical success of these models, making this a very fruitful research direction. In this work, we study DDPMs under the manifold hypothesis and prove that they achieve rates independent of the ambient dimension in terms of learning the score. In terms of sampling, we obtain rates independent of the ambient dimension w.r.t. the Kullback-Leibler divergence, and $O(\sqrt{D})$ w.r.t. the Wasserstein distance. We do this by developing a new framework connecting diffusion models to the well-studied theory of extrema of Gaussian Processes.<|reference_end|>
arxiv
@article{azangulov2024convergence, title={Convergence of Diffusion Models Under the Manifold Hypothesis in High-Dimensions}, author={Iskander Azangulov and George Deligiannidis and Judith Rousseau}, journal={arXiv preprint arXiv:2409.18804}, year={2024}, archivePrefix={arXiv}, eprint={2409.18804}, primaryClass={stat.ML cs.LG math.ST stat.TH} }
azangulov2024convergence
arxiv-662828
2409.18806
Path Following Model Predictive Control of a Coupled Autonomous Underwater Vehicle
<|reference_start|>Path Following Model Predictive Control of a Coupled Autonomous Underwater Vehicle: The operation of an autonomous underwater vehicle (AUV) faces challenges in following predetermined waypoints due to coupled motions under environmental disturbances. To address this, a 3D path following guidance and control system is developed in this work based on the line-of-sight (LOS) guidance method. Conventionally, the 3D path following problem is transformed into heading and depth control problems, assuming that the motion of the vehicle is decoupled in horizontal and depth coordinates. The proposed control system design avoids this simplifying assumption by transforming the problem into a 3D position and orientation tracking problem. This design is achieved by computing a 2D horizontal coordinate based on the desired heading and then computing a corresponding LOS depth coordinate. A model predictive controller (MPC) is then implemented using the 3D LOS coordinate and the computed orientation vector. The MPC obtains a robust control by solving a minimax optimisation problem considering the effects of unknown ocean disturbances. The effectiveness of the proposed guidance and control system is demonstrated through the simulation of a prototype AUV system. Numerical results show that the AUV can follow predetermined waypoints in the presence of time-varying disturbances, and the system is steered at a constant surge speed that is proportional to the radius of the circle of acceptance used to implement the guidance system.<|reference_end|>
arxiv
@article{jimoh2024path, title={Path Following Model Predictive Control of a Coupled Autonomous Underwater Vehicle}, author={Isah A. Jimoh and Hong Yue}, journal={arXiv preprint arXiv:2409.18806}, year={2024}, archivePrefix={arXiv}, eprint={2409.18806}, primaryClass={eess.SY cs.SY} }
jimoh2024path
arxiv-662829
2409.18807
LLM With Tools: A Survey
<|reference_start|>LLM With Tools: A Survey: The integration of tools in augmenting large language models presents a novel approach toward enhancing the efficiency and accuracy of these models in handling specific, complex tasks. This paper delves into the methodology, challenges, and developments in the realm of teaching LLMs to use external tools, thereby pushing the boundaries of their capabilities beyond pre-existing knowledge bases. We introduce a standardized paradigm for tool integration guided by a series of functions that map user instructions to actionable plans and their execution, emphasizing the significance of understanding user intent, tool selection, and dynamic plan adjustment. Our exploration reveals the various challenges encountered, such as tool invocation timing, selection accuracy, and the need for robust reasoning processes. In addressing these challenges, we investigate techniques within the context of fine-tuning and in-context learning paradigms, highlighting innovative approaches to ensure diversity, augment datasets, and improve generalization. Furthermore, we investigate a perspective on enabling LLMs to not only utilize but also autonomously create tools, which may redefine their role from mere tool users to tool creators. Finally, we reproduced Chameleon's results on ScienceQA and analyzed the code structure.<|reference_end|>
arxiv
@article{shen2024llm, title={LLM With Tools: A Survey}, author={Zhuocheng Shen}, journal={arXiv preprint arXiv:2409.18807}, year={2024}, archivePrefix={arXiv}, eprint={2409.18807}, primaryClass={cs.AI} }
shen2024llm
arxiv-662830
2409.18811
Moldable Development Patterns
<|reference_start|>Moldable Development Patterns: Moldable development supports decision-making by making software systems explainable. This is done by making it cheap to add numerous custom tools to your software, turning it into a live, explorable domain model. Based on several years of experience of applying moldable development to both open-source and industrial systems, we have identified several mutually supporting patterns to explain how moldable development works in practice. This paper targets (i) readers curious to learn about moldable development, (ii) current users of the Glamorous Toolkit moldable IDE wanting to learn best practices, and (iii) developers interested in applying moldable development using other platforms and technology.<|reference_end|>
arxiv
@article{nierstrasz2024moldable, title={Moldable Development Patterns}, author={Oscar Nierstrasz and Tudor G\^{\i}rba}, journal={arXiv preprint arXiv:2409.18811}, year={2024}, archivePrefix={arXiv}, eprint={2409.18811}, primaryClass={cs.SE} }
nierstrasz2024moldable
arxiv-662831
2409.18812
LLMs4Synthesis: Leveraging Large Language Models for Scientific Synthesis
<|reference_start|>LLMs4Synthesis: Leveraging Large Language Models for Scientific Synthesis: In response to the growing complexity and volume of scientific literature, this paper introduces the LLMs4Synthesis framework, designed to enhance the capabilities of Large Language Models (LLMs) in generating high-quality scientific syntheses. This framework addresses the need for rapid, coherent, and contextually rich integration of scientific insights, leveraging both open-source and proprietary LLMs. It also examines the effectiveness of LLMs in evaluating the integrity and reliability of these syntheses, alleviating inadequacies in current quantitative metrics. Our study contributes to this field by developing a novel methodology for processing scientific papers, defining new synthesis types, and establishing nine detailed quality criteria for evaluating syntheses. The integration of LLMs with reinforcement learning and AI feedback is proposed to optimize synthesis quality, ensuring alignment with established criteria. The LLMs4Synthesis framework and its components are made available, promising to enhance both the generation and evaluation processes in scientific research synthesis.<|reference_end|>
arxiv
@article{giglou2024llms4synthesis:, title={LLMs4Synthesis: Leveraging Large Language Models for Scientific Synthesis}, author={Hamed Babaei Giglou and Jennifer D'Souza and S\"{o}ren Auer}, journal={arXiv preprint arXiv:2409.18812}, year={2024}, archivePrefix={arXiv}, eprint={2409.18812}, primaryClass={cs.CL cs.AI cs.DL} }
giglou2024llms4synthesis:
arxiv-662832
2409.18813
EyeTrAES: Fine-grained, Low-Latency Eye Tracking via Adaptive Event Slicing
<|reference_start|>EyeTrAES: Fine-grained, Low-Latency Eye Tracking via Adaptive Event Slicing: Eye-tracking technology has gained significant attention in recent years due to its wide range of applications in human-computer interaction, virtual and augmented reality, and wearable health. Traditional RGB camera-based eye-tracking systems often struggle with poor temporal resolution and computational constraints, limiting their effectiveness in capturing rapid eye movements. To address these limitations, we propose EyeTrAES, a novel approach using neuromorphic event cameras for high-fidelity tracking of natural pupillary movement that shows significant kinematic variance. One of EyeTrAES's highlights is the use of a novel adaptive windowing/slicing algorithm that ensures just the right amount of descriptive asynchronous event data accumulation within an event frame, across a wide range of eye movement patterns. EyeTrAES then applies lightweight image processing functions over accumulated event frames from just a single eye to perform pupil segmentation and tracking. We show that these methods boost pupil tracking fidelity by 6+%, achieving IoU~=92%, while incurring at least 3x lower latency than competing pure event-based eye tracking alternatives [38]. We additionally demonstrate that the microscopic pupillary motion captured by EyeTrAES exhibits distinctive variations across individuals and can thus serve as a biometric fingerprint. For robust user authentication, we train a lightweight per-user Random Forest classifier using a novel feature vector of short-term pupillary kinematics, comprising a sliding window of pupil (location, velocity, acceleration) triples. Experimental studies with two different datasets demonstrate that the EyeTrAES-based authentication technique can simultaneously achieve high authentication accuracy (~=0.82) and low processing latency (~=12ms), and significantly outperform multiple state-of-the-art competitive baselines.<|reference_end|>
arxiv
@article{sen2024eyetraes:, title={EyeTrAES: Fine-grained, Low-Latency Eye Tracking via Adaptive Event Slicing}, author={Argha Sen and Nuwan Bandara and Ila Gokarn and Thivya Kandappu and Archan Misra}, journal={arXiv preprint arXiv:2409.18813}, year={2024}, archivePrefix={arXiv}, eprint={2409.18813}, primaryClass={cs.CV cs.HC} }
sen2024eyetraes:
arxiv-662833
2409.18814
Early diagnosis of Alzheimer's disease from MRI images with deep learning model
<|reference_start|>Early diagnosis of Alzheimer's disease from MRI images with deep learning model: It is acknowledged that the most common cause of dementia worldwide is Alzheimer's disease (AD). This condition progresses in severity from mild to severe and interferes with people's everyday routines. Early diagnosis plays a critical role in patient care and clinical trials. Convolutional neural networks (CNN) are used to create a framework for identifying specific disease features from MRI scans. Classification of dementia involves approaches such as medical history review, neuropsychological tests, and magnetic resonance imaging (MRI). However, the image dataset obtained from Kaggle faces a significant issue of class imbalance, which requires an equal distribution of samples from each class to address. In this article, to address this imbalance, the Synthetic Minority Oversampling Technique (SMOTE) is utilized. Furthermore, a pre-trained convolutional neural network has been applied to the DEMNET dementia network to extract key features from AD images. The proposed model achieved an impressive accuracy of 98.67%.<|reference_end|>
arxiv
@article{javid2024early, title={Early diagnosis of Alzheimer's disease from MRI images with deep learning model}, author={Sajjad Aghasi Javid and Mahmood Mohassel Feghhi}, journal={arXiv preprint arXiv:2409.18814}, year={2024}, doi={10.1109/AISP61396.2024.10475240}, archivePrefix={arXiv}, eprint={2409.18814}, primaryClass={eess.IV cs.AI cs.CV cs.LG} }
javid2024early
arxiv-662834
2409.18817
Facility Location Problem with Aleatory Agents
<|reference_start|>Facility Location Problem with Aleatory Agents: In this paper, we introduce and study the Facility Location Problem with Aleatory Agents (FLPAA), where the facility accommodates $n$ agents, which is larger than the number of agents reporting their preferences, namely $n_r$. The spare capacity is used by $n_u = n - n_r$ aleatory agents sampled from a probability distribution $\mu$. The goal of FLPAA is to find a location that minimizes the ex-ante social cost, which is the expected cost of the $n_u$ agents sampled from $\mu$ plus the cost incurred by the agents reporting their position. We investigate the mechanism design aspects of the FLPAA under the assumption that the Mechanism Designer (MD) lacks knowledge of the distribution $\mu$ but can query $k$ quantiles of $\mu$. We explore the trade-off between acquiring more insights into the probability distribution and designing a better-performing mechanism, which we describe through the strong approximation ratio (SAR). The SAR of a mechanism measures the highest ratio between the cost of the mechanism and the cost of the optimal solution on the worst-case input $x$ and worst-case distribution $\mu$, offering a metric for efficiency that does not depend on $\mu$. We divide our study into four different information settings: the zero information case, in which the MD has access to no quantiles; the median information case, in which the MD has access to the median of $\mu$; the $n_u$-quantile information case, in which the MD has access to $n_u$ quantiles of its choice; and the $k$-quantile information case, in which the MD has access to $k < n_u$ quantiles of its choice. For all frameworks, we propose a mechanism that is optimal or achieves a small constant SAR and pairs it with a lower bound on the SAR. In most cases, the lower bound matches the upper bound, thus no truthful mechanism can achieve a lower SAR. Lastly, we extend the FLPAA to include instances in which we must locate two facilities.<|reference_end|>
arxiv
@article{auricchio2024facility, title={Facility Location Problem with Aleatory Agents}, author={Gennaro Auricchio and Jie Zhang}, journal={arXiv preprint arXiv:2409.18817}, year={2024}, archivePrefix={arXiv}, eprint={2409.18817}, primaryClass={cs.GT cs.MA} }
auricchio2024facility
arxiv-662835
2409.18819
Local Transcription Models in Home Care Nursing in Switzerland: an Interdisciplinary Case Study
<|reference_start|>Local Transcription Models in Home Care Nursing in Switzerland: an Interdisciplinary Case Study: Latest advances in the field of natural language processing (NLP) enable new use cases for different domains, including the medical sector. In particular, transcription can be used to support automation in the nursing documentation process and give nurses more time to interact with the patients. However, different challenges including (a) data privacy, (b) local languages and dialects, and (c) domain-specific vocabulary need to be addressed. In this case study, we investigate the case of home care nursing documentation in Switzerland. We assessed different transcription tools and models, and conducted several experiments with OpenAI Whisper, involving different variations of German (i.e., dialects, foreign accent) and manually curated example texts by a domain expert of home care nursing. Our results indicate that even the used out-of-the-box model performs sufficiently well to be a good starting point for future research in the field.<|reference_end|>
arxiv
@article{kramer2024local, title={Local Transcription Models in Home Care Nursing in Switzerland: an Interdisciplinary Case Study}, author={Jeremy Kramer, Tetiana Kravchenko, Beatrice Kaufmann, Friederike J.S. Thilo, Mascha Kurpicz-Briki}, journal={arXiv preprint arXiv:2409.18819}, year={2024}, archivePrefix={arXiv}, eprint={2409.18819}, primaryClass={cs.CL} }
kramer2024local
arxiv-662836
2409.18820
An $11/6$-Approximation Algorithm for Vertex Cover on String Graphs
<|reference_start|>An $11/6$-Approximation Algorithm for Vertex Cover on String Graphs: We present a 1.8334-approximation algorithm for Vertex Cover on string graphs given with a representation, which takes polynomial time in the size of the representation; the exact approximation factor is $11/6$. Recently, the barrier of 2 was broken by Lokshtanov et al. [SoGC '24] with a 1.9999-approximation algorithm. Thus we increase by three orders of magnitude the distance of the approximation ratio to the trivial bound of 2. Our algorithm is very simple. The intricacies reside in its analysis, where we mainly establish that string graphs without odd cycles of length at most 11 are 8-colorable. Previously, Chudnovsky, Scott, and Seymour [JCTB '21] showed that string graphs without odd cycles of length at most 7 are 80-colorable, and string graphs without odd cycles of length at most 5 have bounded chromatic number.<|reference_end|>
arxiv
@article{bonnet2024an, title={An $11/6$-Approximation Algorithm for Vertex Cover on String Graphs}, author={\'Edouard Bonnet and Pawe{\l} Rz\k{a}\.{z}ewski}, journal={arXiv preprint arXiv:2409.18820}, year={2024}, archivePrefix={arXiv}, eprint={2409.18820}, primaryClass={cs.DS cs.CG cs.DM math.CO} }
bonnet2024an
arxiv-662837
2409.18821
Data Generation for Testing Complex Queries
<|reference_start|>Data Generation for Testing Complex Queries: Generation of sample data for testing SQL queries has been an important task for many years, with applications such as testing of SQL queries used for data analytics and in application software, as well as student SQL queries. More recently, with the increasing use of text-to-SQL systems, test data is key for the validation of generated queries. Earlier work for test data generation handled basic single block SQL queries, as well as simple nested SQL queries, but could not handle more complex queries. In this paper, we present a novel data generation approach that is designed to handle complex queries, and show its effectiveness on queries for which the earlier XData approach is not as effective. We also show that it can outperform the state-of-the-art VeriEQL system in showing non-equivalence of queries.<|reference_end|>
arxiv
@article{somwase2024data, title={Data Generation for Testing Complex Queries}, author={Sunanda Somwase and Parismita Das and S. Sudarshan}, journal={arXiv preprint arXiv:2409.18821}, year={2024}, archivePrefix={arXiv}, eprint={2409.18821}, primaryClass={cs.DB} }
somwase2024data
arxiv-662838
2409.18824
Fully integrating the Flang Fortran compiler with standard MLIR
<|reference_start|>Fully integrating the Flang Fortran compiler with standard MLIR: Fortran is the lingua franca of HPC code development and as such it is crucial that we as a community have open source Fortran compilers capable of generating high performance executables. Flang is LLVM's Fortran compiler and leverages MLIR which is a reusable compiler infrastructure which, as part of LLVM, has become popular in recent years. However, whilst Flang leverages MLIR it does not fully integrate with it and instead provides bespoke translation and optimisation passes to target LLVM-IR. In this paper we first explore the performance of Flang against other compilers popular in HPC for a range of benchmarks before describing a mapping between Fortran and standard MLIR, exploring the performance of this. The result of this work is an up to three times speed up compared with Flang's existing approach across the benchmarks and experiments run, demonstrating that the Flang community should seriously consider leveraging standard MLIR.<|reference_end|>
arxiv
@article{brown2024fully, title={Fully integrating the Flang Fortran compiler with standard MLIR}, author={Nick Brown}, journal={arXiv preprint arXiv:2409.18824}, year={2024}, archivePrefix={arXiv}, eprint={2409.18824}, primaryClass={cs.DC cs.PL} }
brown2024fully
arxiv-662839
2409.18826
YOLOv8-ResCBAM: YOLOv8 Based on An Effective Attention Module for Pediatric Wrist Fracture Detection
<|reference_start|>YOLOv8-ResCBAM: YOLOv8 Based on An Effective Attention Module for Pediatric Wrist Fracture Detection: Wrist trauma and even fractures occur frequently in daily life, particularly among children who account for a significant proportion of fracture cases. Before performing surgery, surgeons often request patients to undergo X-ray imaging first, and prepare for the surgery based on the analysis of the X-ray images. With the development of neural networks, You Only Look Once (YOLO) series models have been widely used in fracture detection for Computer-Assisted Diagnosis, where the YOLOv8 model has obtained satisfactory results. Applying attention modules to neural networks is one of the effective methods to improve model performance. This paper proposes YOLOv8-ResCBAM, which incorporates a Convolutional Block Attention Module integrated with resblock (ResCBAM) into the original YOLOv8 network architecture. The experimental results on the GRAZPEDWRI-DX dataset demonstrate that the mean Average Precision calculated at Intersection over Union threshold of 0.5 (mAP 50) of the proposed model increased from 63.6% of the original YOLOv8 model to 65.8%, which achieves state-of-the-art performance. The implementation code is available at https://github.com/RuiyangJu/Fracture_Detection_Improved_YOLOv8.<|reference_end|>
arxiv
@article{ju2024yolov8-rescbam:, title={YOLOv8-ResCBAM: YOLOv8 Based on An Effective Attention Module for Pediatric Wrist Fracture Detection}, author={Rui-Yang Ju, Chun-Tse Chien, Jen-Shiun Chiang}, journal={arXiv preprint arXiv:2409.18826}, year={2024}, archivePrefix={arXiv}, eprint={2409.18826}, primaryClass={cs.CV} }
ju2024yolov8-rescbam:
arxiv-662840
2409.18827
ARLBench: Flexible and Efficient Benchmarking for Hyperparameter Optimization in Reinforcement Learning
<|reference_start|>ARLBench: Flexible and Efficient Benchmarking for Hyperparameter Optimization in Reinforcement Learning: Hyperparameters are a critical factor in reliably training well-performing reinforcement learning (RL) agents. Unfortunately, developing and evaluating automated approaches for tuning such hyperparameters is both costly and time-consuming. As a result, such approaches are often only evaluated on a single domain or algorithm, making comparisons difficult and limiting insights into their generalizability. We propose ARLBench, a benchmark for hyperparameter optimization (HPO) in RL that allows comparisons of diverse HPO approaches while being highly efficient in evaluation. To enable research into HPO in RL, even in settings with low compute resources, we select a representative subset of HPO tasks spanning a variety of algorithm and environment combinations. This selection allows for generating a performance profile of an automated RL (AutoRL) method using only a fraction of the compute previously necessary, enabling a broader range of researchers to work on HPO in RL. With the extensive and large-scale dataset on hyperparameter landscapes that our selection is based on, ARLBench is an efficient, flexible, and future-oriented foundation for research on AutoRL. Both the benchmark and the dataset are available at https://github.com/automl/arlbench.<|reference_end|>
arxiv
@article{becktepe2024arlbench:, title={ARLBench: Flexible and Efficient Benchmarking for Hyperparameter Optimization in Reinforcement Learning}, author={Jannis Becktepe, Julian Dierkes, Carolin Benjamins, Aditya Mohan, David Salinas, Raghu Rajan, Frank Hutter, Holger Hoos, Marius Lindauer, Theresa Eimer}, journal={17th European Workshop on Reinforcement Learning 2024}, year={2024}, archivePrefix={arXiv}, eprint={2409.18827}, primaryClass={cs.LG} }
becktepe2024arlbench:
arxiv-662841
2409.18828
MECG-E: Mamba-based ECG Enhancer for Baseline Wander Removal
<|reference_start|>MECG-E: Mamba-based ECG Enhancer for Baseline Wander Removal: Electrocardiogram (ECG) is an important non-invasive method for diagnosing cardiovascular disease. However, ECG signals are susceptible to noise contamination, such as electrical interference or signal wandering, which reduces diagnostic accuracy. Various ECG denoising methods have been proposed, but most existing methods yield suboptimal performance under very noisy conditions or require several steps during inference, leading to latency during online processing. In this paper, we propose a novel ECG denoising model, namely Mamba-based ECG Enhancer (MECG-E), which leverages the Mamba architecture known for its fast inference and outstanding nonlinear mapping capabilities. Experimental results indicate that MECG-E surpasses several well-known existing models across multiple metrics under different noise conditions. Additionally, MECG-E requires less inference time than state-of-the-art diffusion-based ECG denoisers, demonstrating the model's functionality and efficiency.<|reference_end|>
arxiv
@article{hung2024mecg-e:, title={MECG-E: Mamba-based ECG Enhancer for Baseline Wander Removal}, author={Kuo-Hsuan Hung, Kuan-Chen Wang, Kai-Chun Liu, Wei-Lun Chen, Xugang Lu, Yu Tsao and Chii-Wann Lin}, journal={arXiv preprint arXiv:2409.18828}, year={2024}, archivePrefix={arXiv}, eprint={2409.18828}, primaryClass={eess.SP cs.AI} }
hung2024mecg-e:
arxiv-662842
2409.18832
Classification and regression of trajectories rendered as images via 2D Convolutional Neural Networks
<|reference_start|>Classification and regression of trajectories rendered as images via 2D Convolutional Neural Networks: Trajectories can be regarded as time-series of coordinates, typically arising from motile objects. Methods for trajectory classification are particularly important to detect different movement patterns, while methods for regression serve to compute motility metrics and for forecasting. Recent advances in computer vision have facilitated the processing of trajectories rendered as images via artificial neural networks with 2d convolutional layers (CNNs). This approach leverages the capability of CNNs to learn spatial hierarchies of features from images, necessary to recognize complex shapes. Moreover, it overcomes the limitation of other machine learning methods that require input trajectories with a fixed number of points. However, rendering trajectories as images can introduce poorly investigated artifacts such as information loss due to the plotting of coordinates on a discrete grid, and spectral changes due to line thickness and aliasing. In this study, we investigate the effectiveness of CNNs for solving classification and regression problems from synthetic trajectories that have been rendered as images using different modalities. The parameters considered in this study include line thickness, image resolution, usage of motion history (color-coding of the temporal component) and anti-aliasing. Results highlight the importance of choosing an appropriate image resolution according to model depth and motion history in applications where movement direction is critical.<|reference_end|>
arxiv
@article{nicolai2024classification, title={Classification and regression of trajectories rendered as images via 2D Convolutional Neural Networks}, author={Mariaclaudia Nicolai, Raffaella Fiamma Cabini, Diego Ulisse Pizzagalli}, journal={arXiv preprint arXiv:2409.18832}, year={2024}, archivePrefix={arXiv}, eprint={2409.18832}, primaryClass={cs.CV cs.LG} }
nicolai2024classification
arxiv-662843
2409.18835
Accelerating stencils on the Tenstorrent Grayskull RISC-V accelerator
<|reference_start|>Accelerating stencils on the Tenstorrent Grayskull RISC-V accelerator: The RISC-V Instruction Set Architecture (ISA) has enjoyed phenomenal growth in recent years, however it has yet to gain popularity in HPC. Whilst adopting RISC-V CPU solutions in HPC might be some way off, RISC-V based PCIe accelerators offer a middle ground where vendors benefit from the flexibility of RISC-V yet fit into existing systems. In this paper we focus on the Tenstorrent Grayskull PCIe RISC-V based accelerator which, built upon Tensix cores, decouples data movement from compute. Using the Jacobi iterative method as a vehicle, we explore the suitability of stencils on the Grayskull e150. We explore best practice in structuring these codes for the accelerator and demonstrate that the e150 provides similar performance to a Xeon Platinum CPU (albeit BF16 vs FP32) but the e150 uses around five times less energy. Over four e150s we obtain around four times the CPU performance, again at around five times less energy.<|reference_end|>
arxiv
@article{brown2024accelerating, title={Accelerating stencils on the Tenstorrent Grayskull RISC-V accelerator}, author={Nick Brown, Ryan Barton}, journal={arXiv preprint arXiv:2409.18835}, year={2024}, archivePrefix={arXiv}, eprint={2409.18835}, primaryClass={cs.DC} }
brown2024accelerating
arxiv-662844
2409.18836
Constructing Confidence Intervals for 'the' Generalization Error -- a Comprehensive Benchmark Study
<|reference_start|>Constructing Confidence Intervals for 'the' Generalization Error -- a Comprehensive Benchmark Study: When assessing the quality of prediction models in machine learning, confidence intervals (CIs) for the generalization error, which measures predictive performance, are a crucial tool. Luckily, there exist many methods for computing such CIs and new promising approaches are continuously being proposed. Typically, these methods combine various resampling procedures, most popular among them cross-validation and bootstrapping, with different variance estimation techniques. Unfortunately, however, there is currently no consensus on when any of these combinations may be most reliably employed and how they generally compare. In this work, we conduct the first large-scale study comparing CIs for the generalization error - empirically evaluating 13 different methods on a total of 18 tabular regression and classification problems, using four different inducers and a total of eight loss functions. We give an overview of the methodological foundations and inherent challenges of constructing CIs for the generalization error and provide a concise review of all 13 methods in a unified framework. Finally, the CI methods are evaluated in terms of their relative coverage frequency, width, and runtime. Based on these findings, we are able to identify a subset of methods that we would recommend. We also publish the datasets as a benchmarking suite on OpenML and our code on GitHub to serve as a basis for further studies.<|reference_end|>
arxiv
@article{schulz-kümpel2024constructing, title={Constructing Confidence Intervals for 'the' Generalization Error -- a Comprehensive Benchmark Study}, author={Hannah Schulz-K{\"u}mpel, Sebastian Fischer, Thomas Nagler, Anne-Laure Boulesteix, Bernd Bischl, Roman Hornung}, journal={arXiv preprint arXiv:2409.18836}, year={2024}, archivePrefix={arXiv}, eprint={2409.18836}, primaryClass={stat.ML cs.LG} }
schulz-kümpel2024constructing
arxiv-662845
2409.18839
MinerU: An Open-Source Solution for Precise Document Content Extraction
<|reference_start|>MinerU: An Open-Source Solution for Precise Document Content Extraction: Document content analysis has been a crucial research area in computer vision. Despite significant advancements in methods such as OCR, layout detection, and formula recognition, existing open-source solutions struggle to consistently deliver high-quality content extraction due to the diversity in document types and content. To address these challenges, we present MinerU, an open-source solution for high-precision document content extraction. MinerU leverages the sophisticated PDF-Extract-Kit models to extract content from diverse documents effectively and employs finely-tuned preprocessing and postprocessing rules to ensure the accuracy of the final results. Experimental results demonstrate that MinerU consistently achieves high performance across various document types, significantly enhancing the quality and consistency of content extraction. The MinerU open-source project is available at https://github.com/opendatalab/MinerU.<|reference_end|>
arxiv
@article{wang2024mineru:, title={MinerU: An Open-Source Solution for Precise Document Content Extraction}, author={Bin Wang, Chao Xu, Xiaomeng Zhao, Linke Ouyang, Fan Wu, Zhiyuan Zhao, Rui Xu, Kaiwen Liu, Yuan Qu, Fukai Shang, Bo Zhang, Liqun Wei, Zhihao Sui, Wei Li, Botian Shi, Yu Qiao, Dahua Lin, Conghui He}, journal={arXiv preprint arXiv:2409.18839}, year={2024}, archivePrefix={arXiv}, eprint={2409.18839}, primaryClass={cs.CV} }
wang2024mineru:
arxiv-662846
2409.18841
RNC: Efficient RRAM-aware NAS and Compilation for DNNs on Resource-Constrained Edge Devices
<|reference_start|>RNC: Efficient RRAM-aware NAS and Compilation for DNNs on Resource-Constrained Edge Devices: Computing-in-memory (CIM) is an emerging computing paradigm, offering noteworthy potential for accelerating neural networks with high parallelism, low latency, and energy efficiency compared to conventional von Neumann architectures. However, existing research has primarily focused on hardware architecture and network co-design for large-scale neural networks, without considering resource constraints. In this study, we aim to develop edge-friendly deep neural networks (DNNs) for accelerators based on resistive random-access memory (RRAM). To achieve this, we propose an edge compilation and resource-constrained RRAM-aware neural architecture search (NAS) framework to search for optimized neural networks meeting specific hardware constraints. Our compilation approach integrates layer partitioning, duplication, and network packing to maximize the utilization of computation units. The resulting network architecture can be optimized for either high accuracy or low latency using a one-shot neural network approach with Pareto optimality achieved through the Non-dominated Sorted Genetic Algorithm II (NSGA-II). The compilation of mobile-friendly networks, like Squeezenet and MobilenetV3 small can achieve over 80% of utilization and over 6x speedup compared to ISAAC-like framework with different crossbar resources. The resulting model from NAS optimized for speed achieved 5x-30x speedup. The code for this paper is available at https://github.com/ArChiiii/rram_nas_comp_pack.<|reference_end|>
arxiv
@article{loong2024rnc:, title={RNC: Efficient RRAM-aware NAS and Compilation for DNNs on Resource-Constrained Edge Devices}, author={Kam Chi Loong, Shihao Han, Sishuo Liu, Ning Lin, Zhongrui Wang}, journal={arXiv preprint arXiv:2409.18841}, year={2024}, archivePrefix={arXiv}, eprint={2409.18841}, primaryClass={cs.NE} }
loong2024rnc:
arxiv-662847
2409.18842
Classical Statistical (In-Sample) Intuitions Don't Generalize Well: A Note on Bias-Variance Tradeoffs, Overfitting and Moving from Fixed to Random Designs
<|reference_start|>Classical Statistical (In-Sample) Intuitions Don't Generalize Well: A Note on Bias-Variance Tradeoffs, Overfitting and Moving from Fixed to Random Designs: The sudden appearance of modern machine learning (ML) phenomena like double descent and benign overfitting may leave many classically trained statisticians feeling uneasy -- these phenomena appear to go against the very core of statistical intuitions conveyed in any introductory class on learning from data. The historical lack of earlier observation of such phenomena is usually attributed to today's reliance on more complex ML methods, overparameterization, interpolation and/or higher data dimensionality. In this note, we show that there is another reason why we observe behaviors today that appear at odds with intuitions taught in classical statistics textbooks, which is much simpler to understand yet rarely discussed explicitly. In particular, many intuitions originate in fixed design settings, in which in-sample prediction error (under resampling of noisy outcomes) is of interest, while modern ML evaluates its predictions in terms of generalization error, i.e. out-of-sample prediction error in random designs. Here, we highlight that this simple move from fixed to random designs has (perhaps surprisingly) far-reaching consequences on textbook intuitions relating to the bias-variance tradeoff, and comment on the resulting (im)possibility of observing double descent and benign overfitting in fixed versus random designs.<|reference_end|>
arxiv
@article{curth2024classical, title={Classical Statistical (In-Sample) Intuitions Don't Generalize Well: A Note on Bias-Variance Tradeoffs, Overfitting and Moving from Fixed to Random Designs}, author={Alicia Curth}, journal={arXiv preprint arXiv:2409.18842}, year={2024}, archivePrefix={arXiv}, eprint={2409.18842}, primaryClass={stat.ML cs.LG} }
curth2024classical
arxiv-662848
2409.18847
Text2FX: Harnessing CLAP Embeddings for Text-Guided Audio Effects
<|reference_start|>Text2FX: Harnessing CLAP Embeddings for Text-Guided Audio Effects: This work introduces Text2FX, a method that leverages CLAP embeddings and differentiable digital signal processing to control audio effects, such as equalization and reverberation, using open-vocabulary natural language prompts (e.g., "make this sound in-your-face and bold"). Text2FX operates without retraining any models, relying instead on single-instance optimization within the existing embedding space. We show that CLAP encodes valuable information for controlling audio effects and propose two optimization approaches using CLAP to map text to audio effect parameters. While we demonstrate with CLAP, this approach is applicable to any shared text-audio embedding space. Similarly, while we demonstrate with equalization and reverberation, any differentiable audio effect may be controlled. We conduct a listener study with diverse text prompts and source audio to evaluate the quality and alignment of these methods with human perception.<|reference_end|>
arxiv
@article{chu2024text2fx:, title={Text2FX: Harnessing CLAP Embeddings for Text-Guided Audio Effects}, author={Annie Chu, Patrick O'Reilly, Julia Barnett, Bryan Pardo}, journal={arXiv preprint arXiv:2409.18847}, year={2024}, archivePrefix={arXiv}, eprint={2409.18847}, primaryClass={eess.AS cs.SD} }
chu2024text2fx:
arxiv-662849
2409.18850
Two Sparse Matrices are Better than One: Sparsifying Neural Networks with Double Sparse Factorization
<|reference_start|>Two Sparse Matrices are Better than One: Sparsifying Neural Networks with Double Sparse Factorization: Neural networks are often challenging to work with due to their large size and complexity. To address this, various methods aim to reduce model size by sparsifying or decomposing weight matrices, such as magnitude pruning and low-rank or block-diagonal factorization. In this work, we present Double Sparse Factorization (DSF), where we factorize each weight matrix into two sparse matrices. Although solving this problem exactly is computationally infeasible, we propose an efficient heuristic based on alternating minimization via ADMM that achieves state-of-the-art results, enabling unprecedented sparsification of neural networks. For instance, in a one-shot pruning setting, our method can reduce the size of the LLaMA2-13B model by 50% while maintaining better performance than the dense LLaMA2-7B model. We also compare favorably with Optimal Brain Compression, the state-of-the-art layer-wise pruning approach for convolutional neural networks. Furthermore, accuracy improvements of our method persist even after further model fine-tuning. Code available at: https://github.com/usamec/double_sparse.<|reference_end|>
arxiv
@article{boža2024two, title={Two Sparse Matrices are Better than One: Sparsifying Neural Networks with Double Sparse Factorization}, author={Vladim\'ir Bo\v{z}a, Vladim\'ir Macko}, journal={arXiv preprint arXiv:2409.18850}, year={2024}, archivePrefix={arXiv}, eprint={2409.18850}, primaryClass={cs.LG} }
boža2024two
arxiv-662850
2409.18852
Space-time 2D Gaussian Splatting for Accurate Surface Reconstruction under Complex Dynamic Scenes
<|reference_start|>Space-time 2D Gaussian Splatting for Accurate Surface Reconstruction under Complex Dynamic Scenes: Previous surface reconstruction methods either suffer from low geometric accuracy or lengthy training times when dealing with real-world complex dynamic scenes involving multi-person activities, and human-object interactions. To tackle the dynamic contents and the occlusions in complex scenes, we present a space-time 2D Gaussian Splatting approach. Specifically, to improve geometric quality in dynamic scenes, we learn canonical 2D Gaussian splats and deform these 2D Gaussian splats while enforcing the disks of the Gaussian located on the surface of the objects by introducing depth and normal regularizers. Further, to tackle the occlusion issues in complex scenes, we introduce a compositional opacity deformation strategy, which further reduces the surface recovery of those occluded areas. Experiments on real-world sparse-view video datasets and monocular dynamic datasets demonstrate that our reconstructions outperform state-of-the-art methods, especially for the surface of the details. The project page and more visualizations can be found at: https://tb2-sy.github.io/st-2dgs/.<|reference_end|>
arxiv
@article{wang2024space-time, title={Space-time 2D Gaussian Splatting for Accurate Surface Reconstruction under Complex Dynamic Scenes}, author={Shuo Wang, Binbin Huang, Ruoyu Wang, Shenghua Gao}, journal={arXiv preprint arXiv:2409.18852}, year={2024}, archivePrefix={arXiv}, eprint={2409.18852}, primaryClass={cs.CV} }
wang2024space-time
arxiv-662851
2409.18857
Mitigating Selection Bias with Node Pruning and Auxiliary Options
<|reference_start|>Mitigating Selection Bias with Node Pruning and Auxiliary Options: Large language models (LLMs) often show unwarranted preference for certain choice options when responding to multiple-choice questions, posing significant reliability concerns in LLM-automated systems. To mitigate this selection bias problem, previous solutions utilized debiasing methods to adjust the model's input and/or output. Our work, in contrast, investigates the model's internal representation of the selection bias. Specifically, we introduce a novel debiasing approach, Bias Node Pruning (BNP), which eliminates the linear layer parameters that contribute to the bias. Furthermore, we present Auxiliary Option Injection (AOI), a simple yet effective input modification technique for debiasing, which is compatible even with black-box LLMs. To provide a more systematic evaluation of selection bias, we review existing metrics and introduce Choice Kullback-Leibler Divergence (CKLD), which addresses the insensitivity of the commonly used metrics to label imbalance. Experiments show that our methods are robust and adaptable across various datasets when applied to three LLMs.<|reference_end|>
arxiv
@article{choi2024mitigating, title={Mitigating Selection Bias with Node Pruning and Auxiliary Options}, author={Hyeong Kyu Choi, Weijie Xu, Chi Xue, Stephanie Eckman, Chandan K. Reddy}, journal={arXiv preprint arXiv:2409.18857}, year={2024}, archivePrefix={arXiv}, eprint={2409.18857}, primaryClass={cs.AI} }
choi2024mitigating
arxiv-662852
2409.18858
Predicting and analyzing memorization within fine-tuned Large Language Models
<|reference_start|>Predicting and analyzing memorization within fine-tuned Large Language Models: Large Language Models have received significant attention due to their abilities to solve a wide range of complex tasks. However these models memorize a significant proportion of their training data, posing a serious threat when disclosed at inference time. To mitigate this unintended memorization, it is crucial to understand what elements are memorized and why. Most existing works provide a posteriori explanations, which has a limited interest in practice. To address this gap, we propose a new approach based on sliced mutual information to detect memorized samples a priori, in a classification setting. It is efficient from the early stages of training, and is readily adaptable to practical scenarios. Our method is supported by new theoretical results that we demonstrate, and requires a low computational budget. We obtain strong empirical results, paving the way for systematic inspection and protection of these vulnerable samples before memorization happens.<|reference_end|>
arxiv
@article{dentan2024predicting, title={Predicting and analyzing memorization within fine-tuned Large Language Models}, author={J\'er\'emie Dentan, Davide Buscaldi, Aymen Shabou, Sonia Vanier}, journal={arXiv preprint arXiv:2409.18858}, year={2024}, archivePrefix={arXiv}, eprint={2409.18858}, primaryClass={cs.CR} }
dentan2024predicting
arxiv-662853
2409.18859
Challenges of Generating Structurally Diverse Graphs
<|reference_start|>Challenges of Generating Structurally Diverse Graphs: For many graph-related problems, it can be essential to have a set of structurally diverse graphs. For instance, such graphs can be used for testing graph algorithms or their neural approximations. However, to the best of our knowledge, the problem of generating structurally diverse graphs has not been explored in the literature. In this paper, we fill this gap. First, we discuss how to define diversity for a set of graphs, why this task is non-trivial, and how one can choose a proper diversity measure. Then, for a given diversity measure, we propose and compare several algorithms optimizing it: we consider approaches based on standard random graph models, local graph optimization, genetic algorithms, and neural generative models. We show that it is possible to significantly improve diversity over basic random graph generators. Additionally, our analysis of generated graphs allows us to better understand the properties of graph distances: depending on which diversity measure is used for optimization, the obtained graphs may possess very different structural properties which gives insights about the sensitivity of the graph distance underlying the diversity measure.<|reference_end|>
arxiv
@article{velikonivtsev2024challenges, title={Challenges of Generating Structurally Diverse Graphs}, author={Fedor Velikonivtsev, Mikhail Mironov, Liudmila Prokhorenkova}, journal={arXiv preprint arXiv:2409.18859}, year={2024}, archivePrefix={arXiv}, eprint={2409.18859}, primaryClass={cs.LG} }
velikonivtsev2024challenges
arxiv-662854
2409.18860
LW2G: Learning Whether to Grow for Prompt-based Continual Learning
<|reference_start|>LW2G: Learning Whether to Grow for Prompt-based Continual Learning: Continual Learning (CL) aims to learn in non-stationary scenarios, progressively acquiring and maintaining knowledge from sequential tasks. Recent Prompt-based Continual Learning (PCL) has achieved remarkable performance with Pre-Trained Models (PTMs). These approaches grow a pool of prompt sets by adding a new set of prompts when learning each new task (\emph{prompt learning}) and adopt a matching mechanism to select the correct set for each testing sample (\emph{prompt retrieval}). Previous studies focus on the latter stage by improving the matching mechanism to enhance Prompt Retrieval Accuracy (PRA). To promote cross-task knowledge facilitation and form an effective and efficient pool of prompt sets, we propose a plug-in module in the former stage to \textbf{Learn Whether to Grow (LW2G)} based on the disparities between tasks. Specifically, a shared set of prompts is utilized when several tasks share certain commonalities, and a new set is added when there are significant differences between the new task and previous tasks. Inspired by Gradient Projection Continual Learning, our LW2G develops a metric called Hinder Forward Capability (HFC) to measure the hindrance imposed on learning new tasks by surgically modifying the original gradient onto the orthogonal complement of the old feature space. With HFC, an automated scheme Dynamic Growing Approach adaptively learns whether to grow with a dynamic threshold. Furthermore, we design a gradient-based constraint to ensure the consistency between the updating prompts and pre-trained knowledge, and a prompts weights reusing strategy to enhance forward transfer. Extensive experiments show the effectiveness of our method. The source codes are available at \url{https://github.com/RAIAN08/LW2G}.<|reference_end|>
arxiv
@article{feng2024lw2g:, title={LW2G: Learning Whether to Grow for Prompt-based Continual Learning}, author={Qian Feng, Dawei Zhou, Hanbin Zhao, Chao Zhang, Hui Qian}, journal={arXiv preprint arXiv:2409.18860}, year={2024}, archivePrefix={arXiv}, eprint={2409.18860}, primaryClass={cs.CV} }
feng2024lw2g:
arxiv-662855
2409.18862
Safe Decentralized Multi-Agent Control using Black-Box Predictors, Conformal Decision Policies, and Control Barrier Functions
<|reference_start|>Safe Decentralized Multi-Agent Control using Black-Box Predictors, Conformal Decision Policies, and Control Barrier Functions: We address the challenge of safe control in decentralized multi-agent robotic settings, where agents use uncertain black-box models to predict other agents' trajectories. We use the recently proposed conformal decision theory to adapt the restrictiveness of control barrier functions-based safety constraints based on observed prediction errors. We use these constraints to synthesize controllers that balance between the objectives of safety and task accomplishment, despite the prediction errors. We provide an upper bound on the average over time of the value of a monotonic function of the difference between the safety constraint based on the predicted trajectories and the constraint based on the ground truth ones. We validate our theory through experimental results showing the performance of our controllers when navigating a robot in the multi-agent scenes in the Stanford Drone Dataset.<|reference_end|>
arxiv
@article{huriot2024safe, title={Safe Decentralized Multi-Agent Control using Black-Box Predictors, Conformal Decision Policies, and Control Barrier Functions}, author={Sacha Huriot and Hussein Sibai}, journal={arXiv preprint arXiv:2409.18862}, year={2024}, archivePrefix={arXiv}, eprint={2409.18862}, primaryClass={eess.SY cs.MA cs.RO cs.SY} }
huriot2024safe
arxiv-662856
2409.18865
Positional Encoder Graph Quantile Neural Networks for Geographic Data
<|reference_start|>Positional Encoder Graph Quantile Neural Networks for Geographic Data: Positional Encoder Graph Neural Networks (PE-GNNs) are a leading approach for modeling continuous spatial data. However, they often fail to produce calibrated predictive distributions, limiting their effectiveness for uncertainty quantification. We introduce the Positional Encoder Graph Quantile Neural Network (PE-GQNN), a novel method that integrates PE-GNNs, Quantile Neural Networks, and recalibration techniques in a fully nonparametric framework, requiring minimal assumptions about the predictive distributions. We propose a new network architecture that, when combined with a quantile-based loss function, yields accurate and reliable probabilistic models without increasing computational complexity. Our approach provides a flexible, robust framework for conditional density estimation, applicable beyond spatial data contexts. We further introduce a structured method for incorporating a KNN predictor into the model while avoiding data leakage through the GNN layer operation. Experiments on benchmark datasets demonstrate that PE-GQNN significantly outperforms existing state-of-the-art methods in both predictive accuracy and uncertainty quantification.<|reference_end|>
arxiv
@article{deamorim2024positional, title={Positional Encoder Graph Quantile Neural Networks for Geographic Data}, author={William E. R. de Amorim, Scott A. Sisson, T. Rodrigues, David J. Nott, Guilherme S. Rodrigues}, journal={arXiv preprint arXiv:2409.18865}, year={2024}, archivePrefix={arXiv}, eprint={2409.18865}, primaryClass={stat.ML cs.AI cs.CV cs.LG cs.SI} }
deamorim2024positional
arxiv-662857
2409.18866
MCUBench: A Benchmark of Tiny Object Detectors on MCUs
<|reference_start|>MCUBench: A Benchmark of Tiny Object Detectors on MCUs: We introduce MCUBench, a benchmark featuring over 100 YOLO-based object detection models evaluated on the VOC dataset across seven different MCUs. This benchmark provides detailed data on average precision, latency, RAM, and Flash usage for various input resolutions and YOLO-based one-stage detectors. By conducting a controlled comparison with a fixed training pipeline, we collect comprehensive performance metrics. Our Pareto-optimal analysis shows that integrating modern detection heads and training techniques allows various YOLO architectures, including legacy models like YOLOv3, to achieve a highly efficient tradeoff between mean Average Precision (mAP) and latency. MCUBench serves as a valuable tool for benchmarking the MCU performance of contemporary object detectors and aids in model selection based on specific constraints.<|reference_end|>
arxiv
@article{sah2024mcubench:, title={MCUBench: A Benchmark of Tiny Object Detectors on MCUs}, author={Sudhakar Sah, Darshan C. Ganji, Matteo Grimaldi, Ravish Kumar, Alexander Hoffman, Honnesh Rohmetra, Ehsan Saboori}, journal={arXiv preprint arXiv:2409.18866}, year={2024}, archivePrefix={arXiv}, eprint={2409.18866}, primaryClass={cs.CV} }
sah2024mcubench:
arxiv-662858
2409.18867
Robust and efficient data-driven predictive control
<|reference_start|>Robust and efficient data-driven predictive control: We propose a robust and efficient data-driven predictive control (eDDPC) scheme which is more sample efficient (requires less offline data) compared to existing schemes, and is also computationally efficient. This is done by leveraging an alternative data-based representation of the trajectories of linear time-invariant (LTI) systems. The proposed scheme relies only on using (short and potentially irregularly measured) noisy input-output data, the amount of which is independent of the prediction horizon. To account for measurement noise, we provide a novel result that quantifies the uncertainty between the true (unknown) restricted behavior of the system and the estimated one from noisy data. Furthermore, we show that the robust eDDPC scheme is recursively feasible and that the resulting closed-loop system is practically stable. Finally, we compare the performance of this scheme to existing ones on a case study of a four tank system.<|reference_end|>
arxiv
@article{alsalti2024robust, title={Robust and efficient data-driven predictive control}, author={Mohammad Alsalti, Manuel Barkey, Victor G. Lopez and Matthias A. M\"uller}, journal={arXiv preprint arXiv:2409.18867}, year={2024}, archivePrefix={arXiv}, eprint={2409.18867}, primaryClass={eess.SY cs.SY} }
alsalti2024robust
arxiv-662859
2409.18868
Individuation in Neural Models with and without Visual Grounding
<|reference_start|>Individuation in Neural Models with and without Visual Grounding: We show differences between a language-and-vision model CLIP and two text-only models - FastText and SBERT - when it comes to the encoding of individuation information. We study latent representations that CLIP provides for substrates, granular aggregates, and various numbers of objects. We demonstrate that CLIP embeddings capture quantitative differences in individuation better than models trained on text-only data. Moreover, the individuation hierarchy we deduce from the CLIP embeddings agrees with the hierarchies proposed in linguistics and cognitive science.<|reference_end|>
arxiv
@article{tikhonov2024individuation, title={Individuation in Neural Models with and without Visual Grounding}, author={Alexey Tikhonov, Lisa Bylinina, Ivan P. Yamshchikov}, journal={arXiv preprint arXiv:2409.18868}, year={2024}, archivePrefix={arXiv}, eprint={2409.18868}, primaryClass={cs.CL cs.AI cs.LG} }
tikhonov2024individuation
arxiv-662860
2409.18869
Emu3: Next-Token Prediction is All You Need
<|reference_start|>Emu3: Next-Token Prediction is All You Need: While next-token prediction is considered a promising path towards artificial general intelligence, it has struggled to excel in multimodal tasks, which are still dominated by diffusion models (e.g., Stable Diffusion) and compositional approaches (e.g., CLIP combined with LLMs). In this paper, we introduce Emu3, a new suite of state-of-the-art multimodal models trained solely with next-token prediction. By tokenizing images, text, and videos into a discrete space, we train a single transformer from scratch on a mixture of multimodal sequences. Emu3 outperforms several well-established task-specific models in both generation and perception tasks, surpassing flagship models such as SDXL and LLaVA-1.6, while eliminating the need for diffusion or compositional architectures. Emu3 is also capable of generating high-fidelity video via predicting the next token in a video sequence. We simplify complex multimodal model designs by converging on a singular focus: tokens, unlocking great potential for scaling both during training and inference. Our results demonstrate that next-token prediction is a promising path towards building general multimodal intelligence beyond language. We open-source key techniques and models to support further research in this direction.<|reference_end|>
arxiv
@article{wang2024emu3:, title={Emu3: Next-Token Prediction is All You Need}, author={Xinlong Wang, Xiaosong Zhang, Zhengxiong Luo, Quan Sun, Yufeng Cui, Jinsheng Wang, Fan Zhang, Yueze Wang, Zhen Li, Qiying Yu, Yingli Zhao, Yulong Ao, Xuebin Min, Tao Li, Boya Wu, Bo Zhao, Bowen Zhang, Liangdong Wang, Guang Liu, Zheqi He, Xi Yang, Jingjing Liu, Yonghua Lin, Tiejun Huang, Zhongyuan Wang}, journal={arXiv preprint arXiv:2409.18869}, year={2024}, archivePrefix={arXiv}, eprint={2409.18869}, primaryClass={cs.CV} }
wang2024emu3:
arxiv-662861
2409.18872
Simulating Dynamic Tumor Contrast Enhancement in Breast MRI using Conditional Generative Adversarial Networks
<|reference_start|>Simulating Dynamic Tumor Contrast Enhancement in Breast MRI using Conditional Generative Adversarial Networks: This paper presents a method for virtual contrast enhancement in breast MRI, offering a promising non-invasive alternative to traditional contrast agent-based DCE-MRI acquisition. Using a conditional generative adversarial network, we predict DCE-MRI images, including jointly-generated sequences of multiple corresponding DCE-MRI timepoints, from non-contrast-enhanced MRIs, enabling tumor localization and characterization without the associated health risks. Furthermore, we qualitatively and quantitatively evaluate the synthetic DCE-MRI images, proposing a multi-metric Scaled Aggregate Measure (SAMe), assessing their utility in a tumor segmentation downstream task, and conclude with an analysis of the temporal patterns in multi-sequence DCE-MRI generation. Our approach demonstrates promising results in generating realistic and useful DCE-MRI sequences, highlighting the potential of virtual contrast enhancement for improving breast cancer diagnosis and treatment, particularly for patients where contrast agent administration is contraindicated.<|reference_end|>
arxiv
@article{osuala2024simulating, title={Simulating Dynamic Tumor Contrast Enhancement in Breast MRI using Conditional Generative Adversarial Networks}, author={Richard Osuala, Smriti Joshi, Apostolia Tsirikoglou, Lidia Garrucho, Walter H.L. Pinaya, Daniel M. Lang, Julia A. Schnabel, Oliver Diaz, Karim Lekadir}, journal={arXiv preprint arXiv:2409.18872}, year={2024}, archivePrefix={arXiv}, eprint={2409.18872}, primaryClass={eess.IV cs.CV cs.LG} }
osuala2024simulating
arxiv-662862
2409.18874
CESNET-TimeSeries24: Time Series Dataset for Network Traffic Anomaly Detection and Forecasting
<|reference_start|>CESNET-TimeSeries24: Time Series Dataset for Network Traffic Anomaly Detection and Forecasting: Anomaly detection in network traffic is crucial for maintaining the security of computer networks and identifying malicious activities. One of the primary approaches to anomaly detection are methods based on forecasting. Nevertheless, extensive real-world network datasets for forecasting and anomaly detection techniques are missing, potentially causing performance overestimation of anomaly detection algorithms. This manuscript addresses this gap by introducing a dataset comprising time series data of network entities' behavior, collected from the CESNET3 network. The dataset was created from 40 weeks of network traffic of 275 thousand active IP addresses. The ISP origin of the presented data ensures a high level of variability among network entities, which forms a unique and authentic challenge for forecasting and anomaly detection models. It provides valuable insights into the practical deployment of forecast-based anomaly detection approaches.<|reference_end|>
arxiv
@article{koumar2024cesnet-timeseries24:, title={CESNET-TimeSeries24: Time Series Dataset for Network Traffic Anomaly Detection and Forecasting}, author={Josef Koumar and Karel Hynek and Tom\'a\v{s} \v{C}ejka and Pavel \v{S}i\v{s}ka}, journal={arXiv preprint arXiv:2409.18874}, year={2024}, archivePrefix={arXiv}, eprint={2409.18874}, primaryClass={cs.LG cs.AI} }
koumar2024cesnet-timeseries24:
arxiv-662863
2409.18876
CemiFace: Center-based Semi-hard Synthetic Face Generation for Face Recognition
<|reference_start|>CemiFace: Center-based Semi-hard Synthetic Face Generation for Face Recognition: Privacy issue is a main concern in developing face recognition techniques. Although synthetic face images can partially mitigate potential legal risks while maintaining effective face recognition (FR) performance, FR models trained by face images synthesized by existing generative approaches frequently suffer from performance degradation problems due to the insufficient discriminative quality of these synthesized samples. In this paper, we systematically investigate what contributes to solid face recognition model training, and reveal that face images with certain degree of similarities to their identity centers show great effectiveness in the performance of trained FR models. Inspired by this, we propose a novel diffusion-based approach (namely Center-based Semi-hard Synthetic Face Generation (CemiFace)) which produces facial samples with various levels of similarity to the subject center, thus allowing to generate face datasets containing effective discriminative samples for training face recognition. Experimental results show that with a modest degree of similarity, training on the generated dataset can produce competitive performance compared to previous generation methods.<|reference_end|>
arxiv
@article{sun2024cemiface:, title={CemiFace: Center-based Semi-hard Synthetic Face Generation for Face Recognition}, author={Zhonglin Sun, Siyang Song, Ioannis Patras, Georgios Tzimiropoulos}, journal={arXiv preprint arXiv:2409.18876}, year={2024}, archivePrefix={arXiv}, eprint={2409.18876}, primaryClass={cs.CV} }
sun2024cemiface:
arxiv-662864
2409.18877
UniEmoX: Cross-modal Semantic-Guided Large-Scale Pretraining for Universal Scene Emotion Perception
<|reference_start|>UniEmoX: Cross-modal Semantic-Guided Large-Scale Pretraining for Universal Scene Emotion Perception: Visual emotion analysis holds significant research value in both computer vision and psychology. However, existing methods for visual emotion analysis suffer from limited generalizability due to the ambiguity of emotion perception and the diversity of data scenarios. To tackle this issue, we introduce UniEmoX, a cross-modal semantic-guided large-scale pretraining framework. Inspired by psychological research emphasizing the inseparability of the emotional exploration process from the interaction between individuals and their environment, UniEmoX integrates scene-centric and person-centric low-level image spatial structural information, aiming to derive more nuanced and discriminative emotional representations. By exploiting the similarity between paired and unpaired image-text samples, UniEmoX distills rich semantic knowledge from the CLIP model to enhance emotional embedding representations more effectively. To the best of our knowledge, this is the first large-scale pretraining framework that integrates psychological theories with contemporary contrastive learning and masked image modeling techniques for emotion analysis across diverse scenarios. Additionally, we develop a visual emotional dataset titled Emo8. Emo8 samples cover a range of domains, including cartoon, natural, realistic, science fiction and advertising cover styles, covering nearly all common emotional scenes. Comprehensive experiments conducted on six benchmark datasets across two downstream tasks validate the effectiveness of UniEmoX. The source code is available at https://github.com/chincharles/u-emo.<|reference_end|>
arxiv
@article{chen2024uniemox:, title={UniEmoX: Cross-modal Semantic-Guided Large-Scale Pretraining for Universal Scene Emotion Perception}, author={Chuang Chen, Xiao Sun and Zhi Liu}, journal={arXiv preprint arXiv:2409.18877}, year={2024}, archivePrefix={arXiv}, eprint={2409.18877}, primaryClass={cs.AI cs.CV} }
chen2024uniemox:
arxiv-662865
2409.18878
Suicide Phenotyping from Clinical Notes in Safety-Net Psychiatric Hospital Using Multi-Label Classification with Pre-Trained Language Models
<|reference_start|>Suicide Phenotyping from Clinical Notes in Safety-Net Psychiatric Hospital Using Multi-Label Classification with Pre-Trained Language Models: Accurate identification and categorization of suicidal events can yield better suicide precautions, reducing operational burden, and improving care quality in high-acuity psychiatric settings. Pre-trained language models offer promise for identifying suicidality from unstructured clinical narratives. We evaluated the performance of four BERT-based models using two fine-tuning strategies (multiple single-label and single multi-label) for detecting coexisting suicidal events from 500 annotated psychiatric evaluation notes. The notes were labeled for suicidal ideation (SI), suicide attempts (SA), exposure to suicide (ES), and non-suicidal self-injury (NSSI). RoBERTa outperformed other models using multiple single-label classification strategy (acc=0.86, F1=0.78). MentalBERT (acc=0.83, F1=0.74) also exceeded BioClinicalBERT (acc=0.82, F1=0.72) which outperformed BERT (acc=0.80, F1=0.70). RoBERTa fine-tuned with single multi-label classification further improved the model performance (acc=0.88, F1=0.81). The findings highlight that the model optimization, pretraining with domain-relevant data, and the single multi-label classification strategy enhance the model performance of suicide phenotyping. Keywords: EHR-based Phenotyping; Natural Language Processing; Secondary Use of EHR Data; Suicide Classification; BERT-based Model; Psychiatry; Mental Health<|reference_end|>
arxiv
@article{li2024suicide, title={Suicide Phenotyping from Clinical Notes in Safety-Net Psychiatric Hospital Using Multi-Label Classification with Pre-Trained Language Models}, author={Zehan Li, Yan Hu, Scott Lane, Salih Selek, Lokesh Shahani, Rodrigo Machado-Vieira, Jair Soares, Hua Xu, Hongfang Liu, Ming Huang}, journal={arXiv preprint arXiv:2409.18878}, year={2024}, archivePrefix={arXiv}, eprint={2409.18878}, primaryClass={cs.CL cs.AI cs.CY cs.IR} }
li2024suicide
arxiv-662866
2409.18881
Explainable Artifacts for Synthetic Western Blot Source Attribution
<|reference_start|>Explainable Artifacts for Synthetic Western Blot Source Attribution: Recent advancements in artificial intelligence have enabled generative models to produce synthetic scientific images that are indistinguishable from pristine ones, posing a challenge even for expert scientists habituated to working with such content. When exploited by organizations known as paper mills, which systematically generate fraudulent articles, these technologies can significantly contribute to the spread of misinformation about ungrounded science, potentially undermining trust in scientific research. While previous studies have explored black-box solutions, such as Convolutional Neural Networks, for identifying synthetic content, only some have addressed the challenge of generalizing across different models and providing insight into the artifacts in synthetic images that inform the detection process. This study aims to identify explainable artifacts generated by state-of-the-art generative models (e.g., Generative Adversarial Networks and Diffusion Models) and leverage them for open-set identification and source attribution (i.e., pointing to the model that created the image).<|reference_end|>
arxiv
@article{cardenuto2024explainable, title={Explainable Artifacts for Synthetic Western Blot Source Attribution}, author={Jo\~ao Phillipe Cardenuto, Sara Mandelli, Daniel Moreira, Paolo Bestagini, Edward Delp, and Anderson Rocha}, journal={arXiv preprint arXiv:2409.18881}, year={2024}, archivePrefix={arXiv}, eprint={2409.18881}, primaryClass={cs.CV} }
cardenuto2024explainable
arxiv-662867
2409.18884
An Overview and Catalogue of Dependency Challenges in Open Source Software Package Registries
<|reference_start|>An Overview and Catalogue of Dependency Challenges in Open Source Software Package Registries: While open-source software has enabled significant levels of reuse to speed up software development, it has also given rise to the dreadful dependency hell that all software practitioners face on a regular basis. This article provides a catalogue of dependency-related challenges that come with relying on OSS packages or libraries. The catalogue is based on a review of the abundant scientific literature on empirical research that has been conducted to understand, quantify and overcome these challenges. Our results can be used as a starting point for junior and senior researchers as well as practitioners that would like to learn more about research advances in dealing with the challenges that come with the dependency networks of large OSS package registries.<|reference_end|>
arxiv
@article{mens2024an, title={An Overview and Catalogue of Dependency Challenges in Open Source Software Package Registries}, author={Tom Mens, Alexandre Decan}, journal={arXiv preprint arXiv:2409.18884}, year={2024}, archivePrefix={arXiv}, eprint={2409.18884}, primaryClass={cs.SE} }
mens2024an
arxiv-662868
2409.18885
HR-Extreme: A High-Resolution Dataset for Extreme Weather Forecasting
<|reference_start|>HR-Extreme: A High-Resolution Dataset for Extreme Weather Forecasting: The application of large deep learning models in weather forecasting has led to significant advancements in the field, including higher-resolution forecasting and extended prediction periods exemplified by models such as Pangu and Fuxi. Despite these successes, previous research has largely been characterized by the neglect of extreme weather events, and the availability of datasets specifically curated for such events remains limited. Given the critical importance of accurately forecasting extreme weather, this study introduces a comprehensive dataset that incorporates high-resolution extreme weather cases derived from the High-Resolution Rapid Refresh (HRRR) data, a 3-km real-time dataset provided by NOAA. We also evaluate the current state-of-the-art deep learning models and Numerical Weather Prediction (NWP) systems on HR-Extreme, and provide an improved baseline deep learning model called HR-Heim which has superior performance on both general loss and HR-Extreme compared to others. Our results reveal that the errors of extreme weather cases are significantly larger than overall forecast error, highlighting them as a crucial source of loss in weather prediction. These findings underscore the necessity for future research to focus on improving the accuracy of extreme weather forecasts to enhance their practical utility.<|reference_end|>
arxiv
@article{ran2024hr-extreme:, title={HR-Extreme: A High-Resolution Dataset for Extreme Weather Forecasting}, author={Nian Ran, Peng Xiao, Yue Wang, Wesley Shi, Jianxin Lin, Qi Meng, Richard Allmendinger}, journal={arXiv preprint arXiv:2409.18885}, year={2024}, archivePrefix={arXiv}, eprint={2409.18885}, primaryClass={cs.LG} }
ran2024hr-extreme:
arxiv-662869
2409.18892
IDGen: Item Discrimination Induced Prompt Generation for LLM Evaluation
<|reference_start|>IDGen: Item Discrimination Induced Prompt Generation for LLM Evaluation: As Large Language Models (LLMs) grow increasingly adept at managing complex tasks, the evaluation set must keep pace with these advancements to ensure it remains sufficiently discriminative. Item Discrimination (ID) theory, which is widely used in educational assessment, measures the ability of individual test items to differentiate between high and low performers. Inspired by this theory, we propose an ID-induced prompt synthesis framework for evaluating LLMs to ensure the evaluation set can continually update and refine according to model abilities. Our data synthesis framework prioritizes both breadth and specificity. It can generate prompts that comprehensively evaluate the capabilities of LLMs while revealing meaningful performance differences between models, allowing for effective discrimination of their relative strengths and weaknesses across various tasks and domains. To produce high-quality data, we incorporate a self-correct mechanism into our generalization framework, and develop two models to predict prompt discrimination and difficulty score to facilitate our data synthesis framework, contributing valuable tools to evaluation data synthesis research. We apply our generated data to evaluate five SOTA models. Our data achieves an average score of 51.92, accompanied by a variance of 10.06. By contrast, previous works (i.e., SELF-INSTRUCT and WizardLM) obtain an average score exceeding 67, with a variance below 3.2. The results demonstrate that the data generated by our framework is more challenging and discriminative compared to previous works. We will release a dataset of over 3,000 carefully crafted prompts to facilitate evaluation research of LLMs.<|reference_end|>
arxiv
@article{lin2024idgen:, title={IDGen: Item Discrimination Induced Prompt Generation for LLM Evaluation}, author={Fan Lin, Shuyi Xie, Yong Dai, Wenlin Yao, Tianjiao Lang, Zishan Xu, Zhichao Hu, Xiao Xiao, Yuhong Liu, Yu Zhang}, journal={arXiv preprint arXiv:2409.18892}, year={2024}, archivePrefix={arXiv}, eprint={2409.18892}, primaryClass={cs.CL} }
lin2024idgen:
arxiv-662870
2409.18893
HM3: Hierarchical Multi-Objective Model Merging for Pretrained Models
<|reference_start|>HM3: Hierarchical Multi-Objective Model Merging for Pretrained Models: Model merging is a technique that combines multiple large pretrained models into a single model with enhanced performance and broader task adaptability. It has gained popularity in large pretrained model development due to its ability to bypass the need for original training data and further training processes. However, most existing model merging approaches focus solely on exploring the parameter space, merging models with identical architectures. Merging within the architecture space, despite its potential, remains in its early stages due to the vast search space and the challenges of layer compatibility. This paper marks a significant advance toward more flexible and comprehensive model merging techniques by modeling the architecture-space merging process as a reinforcement learning task. We train policy and value networks using offline sampling of weight vectors, which are then employed for the online optimization of merging strategies. Moreover, a multi-objective optimization paradigm is introduced to accommodate users' diverse task preferences, learning the Pareto front of optimal models to offer customized merging suggestions. Experimental results across multiple tasks, including text translation, mathematical reasoning, and code generation, validate the effectiveness and superiority of the proposed framework in model merging. The code will be made publicly available after the review process.<|reference_end|>
arxiv
@article{zhou2024hm3:, title={HM3: Hierarchical Multi-Objective Model Merging for Pretrained Models}, author={Yu Zhou, Xingyu Wu, Jibin Wu, Liang Feng, Kay Chen Tan}, journal={arXiv preprint arXiv:2409.18893}, year={2024}, archivePrefix={arXiv}, eprint={2409.18893}, primaryClass={cs.LG} }
zhou2024hm3:
arxiv-662871
2409.18895
Multi-Source Hard and Soft Information Fusion Approach for Accurate Cryptocurrency Price Movement Prediction
<|reference_start|>Multi-Source Hard and Soft Information Fusion Approach for Accurate Cryptocurrency Price Movement Prediction: One of the most important challenges in the financial and cryptocurrency field is accurately predicting cryptocurrency price trends. Leveraging artificial intelligence (AI) is beneficial in addressing this challenge. Cryptocurrency markets, marked by substantial growth and volatility, attract investors and scholars keen on deciphering and forecasting cryptocurrency price movements. The vast and diverse array of data available for such predictions increases the complexity of the task. In our study, we introduce a novel approach termed hard and soft information fusion (HSIF) to enhance the accuracy of cryptocurrency price movement forecasts. The hard information component of our approach encompasses historical price records alongside technical indicators. Complementing this, the soft data component extracts from X (formerly Twitter), encompassing news headlines and tweets about the cryptocurrency. To use this data, we use the Bidirectional Encoder Representations from Transformers (BERT)-based sentiment analysis method, financial BERT (FinBERT), which performs best. Finally, our model feeds on the information set including processed hard and soft data. We employ the bidirectional long short-term memory (BiLSTM) model because processing information in both forward and backward directions can capture long-term dependencies in sequential information. Our empirical findings emphasize the superiority of the HSIF approach over models dependent on single-source data by testing on Bitcoin-related data. By fusing hard and soft information on Bitcoin dataset, our model has about 96.8\% accuracy in predicting price movement. Incorporating information enables our model to grasp the influence of social sentiment on price fluctuations, thereby supplementing the technical analysis-based predictions derived from hard information.<|reference_end|>
arxiv
@article{dashtaki2024multi-source, title={Multi-Source Hard and Soft Information Fusion Approach for Accurate Cryptocurrency Price Movement Prediction}, author={Saeed Mohammadi Dashtaki and Mehdi Hosseini Chagahi and Behzad Moshiri and Md. Jalil Piran}, journal={arXiv preprint arXiv:2409.18895}, year={2024}, archivePrefix={arXiv}, eprint={2409.18895}, primaryClass={cs.LG cs.AI} }
dashtaki2024multi-source
arxiv-662872
2409.18896
S2O: Static to Openable Enhancement for Articulated 3D Objects
<|reference_start|>S2O: Static to Openable Enhancement for Articulated 3D Objects: Despite much progress in large 3D datasets there are currently few interactive 3D object datasets, and their scale is limited due to the manual effort required in their construction. We introduce the static to openable (S2O) task which creates interactive articulated 3D objects from static counterparts through openable part detection, motion prediction, and interior geometry completion. We formulate a unified framework to tackle this task, and curate a challenging dataset of openable 3D objects that serves as a test bed for systematic evaluation. Our experiments benchmark methods from prior work and simple yet effective heuristics for the S2O task. We find that turning static 3D objects into interactively openable counterparts is possible but that all methods struggle to generalize to realistic settings of the task, and we highlight promising future work directions.<|reference_end|>
arxiv
@article{iliash2024s2o, title={S2O: Static to Openable Enhancement for Articulated 3D Objects}, author={Denys Iliash and Hanxiao Jiang and Yiming Zhang and Manolis Savva and Angel X. Chang}, journal={arXiv preprint arXiv:2409.18896}, year={2024}, archivePrefix={arXiv}, eprint={2409.18896}, primaryClass={cs.CV} }
iliash2024s2o
arxiv-662873
2409.18897
Detecting Dataset Abuse in Fine-Tuning Stable Diffusion Models for Text-to-Image Synthesis
<|reference_start|>Detecting Dataset Abuse in Fine-Tuning Stable Diffusion Models for Text-to-Image Synthesis: Text-to-image synthesis has become highly popular for generating realistic and stylized images, often requiring fine-tuning generative models with domain-specific datasets for specialized tasks. However, these valuable datasets face risks of unauthorized usage and unapproved sharing, compromising the rights of the owners. In this paper, we address the issue of dataset abuse during the fine-tuning of Stable Diffusion models for text-to-image synthesis. We present a dataset watermarking framework designed to detect unauthorized usage and trace data leaks. The framework employs two key strategies across multiple watermarking schemes and is effective for large-scale dataset authorization. Extensive experiments demonstrate the framework's effectiveness, minimal impact on the dataset (only 2% of the data required to be modified for high detection accuracy), and ability to trace data leaks. Our results also highlight the robustness and transferability of the framework, proving its practical applicability in detecting dataset abuse.<|reference_end|>
arxiv
@article{wang2024detecting, title={Detecting Dataset Abuse in Fine-Tuning Stable Diffusion Models for Text-to-Image Synthesis}, author={Songrui Wang and Yubo Zhu and Wei Tong and Sheng Zhong}, journal={arXiv preprint arXiv:2409.18897}, year={2024}, archivePrefix={arXiv}, eprint={2409.18897}, primaryClass={cs.CV} }
wang2024detecting
arxiv-662874
2409.18899
Unsupervised Low-light Image Enhancement with Lookup Tables and Diffusion Priors
<|reference_start|>Unsupervised Low-light Image Enhancement with Lookup Tables and Diffusion Priors: Low-light image enhancement (LIE) aims at precisely and efficiently recovering an image degraded in poor illumination environments. Recent advanced LIE techniques are using deep neural networks, which require lots of low-normal light image pairs, network parameters, and computational resources. As a result, their practicality is limited. In this work, we devise a novel unsupervised LIE framework based on diffusion priors and lookup tables (DPLUT) to achieve efficient low-light image recovery. The proposed approach comprises two critical components: a light adjustment lookup table (LLUT) and a noise suppression lookup table (NLUT). LLUT is optimized with a set of unsupervised losses. It aims at predicting pixel-wise curve parameters for the dynamic range adjustment of a specific image. NLUT is designed to remove the amplified noise after the light brightens. As diffusion models are sensitive to noise, diffusion priors are introduced to achieve high-performance noise suppression. Extensive experiments demonstrate that our approach outperforms state-of-the-art methods in terms of visual quality and efficiency.<|reference_end|>
arxiv
@article{lin2024unsupervised, title={Unsupervised Low-light Image Enhancement with Lookup Tables and Diffusion Priors}, author={Yunlong Lin and Zhenqi Fu and Kairun Wen and Tian Ye and Sixiang Chen and Ge Meng and Yingying Wang and Yue Huang and Xiaotong Tu and Xinghao Ding}, journal={arXiv preprint arXiv:2409.18899}, year={2024}, archivePrefix={arXiv}, eprint={2409.18899}, primaryClass={cs.CV eess.IV} }
lin2024unsupervised
arxiv-662875
2409.18901
Improving Visual Object Tracking through Visual Prompting
<|reference_start|>Improving Visual Object Tracking through Visual Prompting: Learning a discriminative model to distinguish a target from its surrounding distractors is essential to generic visual object tracking. Dynamic target representation adaptation against distractors is challenging due to the limited discriminative capabilities of prevailing trackers. We present a new visual Prompting mechanism for generic Visual Object Tracking (PiVOT) to address this issue. PiVOT proposes a prompt generation network with the pre-trained foundation model CLIP to automatically generate and refine visual prompts, enabling the transfer of foundation model knowledge for tracking. While CLIP offers broad category-level knowledge, the tracker, trained on instance-specific data, excels at recognizing unique object instances. Thus, PiVOT first compiles a visual prompt highlighting potential target locations. To transfer the knowledge of CLIP to the tracker, PiVOT leverages CLIP to refine the visual prompt based on the similarities between candidate objects and the reference templates across potential targets. Once the visual prompt is refined, it can better highlight potential target locations, thereby reducing irrelevant prompt information. With the proposed prompting mechanism, the tracker can generate improved instance-aware feature maps through the guidance of the visual prompt, thus effectively reducing distractors. The proposed method does not involve CLIP during training, thereby keeping the same training complexity and preserving the generalization capability of the pretrained foundation model. Extensive experiments across multiple benchmarks indicate that PiVOT, using the proposed prompting method can suppress distracting objects and enhance the tracker.<|reference_end|>
arxiv
@article{chen2024improving, title={Improving Visual Object Tracking through Visual Prompting}, author={Shih-Fang Chen and Jun-Cheng Chen and I-Hong Jhuo and Yen-Yu Lin}, journal={arXiv preprint arXiv:2409.18901}, year={2024}, archivePrefix={arXiv}, eprint={2409.18901}, primaryClass={cs.CV cs.AI cs.MM eess.IV} }
chen2024improving
arxiv-662876
2409.18903
On the convergence rate of a numerical method for the Hunter-Saxton equation
<|reference_start|>On the convergence rate of a numerical method for the Hunter-Saxton equation: We derive a robust error estimate for a recently proposed numerical method for $\alpha$-dissipative solutions of the Hunter-Saxton equation, where $\alpha \in [0, 1]$. In particular, if the following two conditions hold: i) there exist a constant $C > 0$ and $\beta \in (0, 1]$ such that the initial spatial derivative $\bar{u}_{x}$ satisfies $\|\bar{u}_x(\cdot + h) - \bar{u}_x(\cdot)\|_2 \leq Ch^{\beta}$ for all $h \in (0, 2]$, and ii), the singular continuous part of the initial energy measure is zero, then the numerical wave profile converges with order $O(\Delta x^{\frac{\beta}{8}})$ in $L^{\infty}(\mathbb{R})$. Moreover, if $\alpha=0$, then the rate improves to $O(\Delta x^{\frac{1}{4}})$ without the above assumptions, and we also obtain a convergence rate for the associated energy measure - it converges with order $O(\Delta x^{\frac{1}{2}})$ in the bounded Lipschitz metric. These convergence rates are illustrated by several examples.<|reference_end|>
arxiv
@article{christiansen2024on, title={On the convergence rate of a numerical method for the Hunter-Saxton equation}, author={Thomas Christiansen}, journal={arXiv preprint arXiv:2409.18903}, year={2024}, archivePrefix={arXiv}, eprint={2409.18903}, primaryClass={math.NA cs.NA math.AP} }
christiansen2024on
arxiv-662877
2409.18905
Probabilistic Analysis of Least Squares, Orthogonal Projection, and QR Factorization Algorithms Subject to Gaussian Noise
<|reference_start|>Probabilistic Analysis of Least Squares, Orthogonal Projection, and QR Factorization Algorithms Subject to Gaussian Noise: In this paper, we extend the work of Liesen et al. (2002), which analyzes how the condition number of an orthonormal matrix Q changes when a column is added ([Q, c]), particularly focusing on the perpendicularity of c to the span of Q. Their result, presented in Theorem 2.3 of Liesen et al. (2002), assumes exact arithmetic and orthonormality of Q, which is a strong assumption when applying these results to numerical methods such as QR factorization algorithms. In our work, we address this gap by deriving bounds on the condition number increase for a matrix B without assuming perfect orthonormality, even when a column is not perfectly orthogonal to the span of B. This framework allows us to analyze QR factorization methods where orthogonalization is imperfect and subject to Gaussian noise. We also provide results on the performance of orthogonal projection and least squares under Gaussian noise, further supporting the development of this theory.<|reference_end|>
arxiv
@article{lotfi2024probabilistic, title={Probabilistic Analysis of Least Squares, Orthogonal Projection, and QR Factorization Algorithms Subject to Gaussian Noise}, author={Ali Lotfi and Julien Langou and Mohammad Meysami}, journal={arXiv preprint arXiv:2409.18905}, year={2024}, archivePrefix={arXiv}, eprint={2409.18905}, primaryClass={math.NA cs.LG cs.NA math.PR math.ST stat.TH} }
lotfi2024probabilistic
arxiv-662878
2409.18907
In-depth Analysis of Privacy Threats in Federated Learning for Medical Data
<|reference_start|>In-depth Analysis of Privacy Threats in Federated Learning for Medical Data: Federated learning is emerging as a promising machine learning technique in the medical field for analyzing medical images, as it is considered an effective method to safeguard sensitive patient data and comply with privacy regulations. However, recent studies have revealed that the default settings of federated learning may inadvertently expose private training data to privacy attacks. Thus, the intensity of such privacy risks and potential mitigation strategies in the medical domain remain unclear. In this paper, we make three original contributions to privacy risk analysis and mitigation in federated learning for medical data. First, we propose a holistic framework, MedPFL, for analyzing privacy risks in processing medical data in the federated learning environment and developing effective mitigation strategies for protecting privacy. Second, through our empirical analysis, we demonstrate the severe privacy risks in federated learning to process medical images, where adversaries can accurately reconstruct private medical images by performing privacy attacks. Third, we illustrate that the prevalent defense mechanism of adding random noises may not always be effective in protecting medical images against privacy attacks in federated learning, which poses unique and pressing challenges related to protecting the privacy of medical data. Furthermore, the paper discusses several unique research questions related to the privacy protection of medical data in the federated learning environment. We conduct extensive experiments on several benchmark medical image datasets to analyze and mitigate the privacy risks associated with federated learning for medical data.<|reference_end|>
arxiv
@article{das2024in-depth, title={In-depth Analysis of Privacy Threats in Federated Learning for Medical Data}, author={Badhan Chandra Das and M. Hadi Amini and Yanzhao Wu}, journal={arXiv preprint arXiv:2409.18907}, year={2024}, archivePrefix={arXiv}, eprint={2409.18907}, primaryClass={cs.LG} }
das2024in-depth
arxiv-662879
2409.18909
Best Arm Identification with Minimal Regret
<|reference_start|>Best Arm Identification with Minimal Regret: Motivated by real-world applications that necessitate responsible experimentation, we introduce the problem of best arm identification (BAI) with minimal regret. This innovative variant of the multi-armed bandit problem elegantly amalgamates two of its most ubiquitous objectives: regret minimization and BAI. More precisely, the agent's goal is to identify the best arm with a prescribed confidence level $\delta$, while minimizing the cumulative regret up to the stopping time. Focusing on single-parameter exponential families of distributions, we leverage information-theoretic techniques to establish an instance-dependent lower bound on the expected cumulative regret. Moreover, we present an intriguing impossibility result that underscores the tension between cumulative regret and sample complexity in fixed-confidence BAI. Complementarily, we design and analyze the Double KL-UCB algorithm, which achieves asymptotic optimality as the confidence level tends to zero. Notably, this algorithm employs two distinct confidence bounds to guide arm selection in a randomized manner. Our findings elucidate a fresh perspective on the inherent connections between regret minimization and BAI.<|reference_end|>
arxiv
@article{yang2024best, title={Best Arm Identification with Minimal Regret}, author={Junwen Yang and Vincent Y. F. Tan and Tianyuan Jin}, journal={arXiv preprint arXiv:2409.18909}, year={2024}, archivePrefix={arXiv}, eprint={2409.18909}, primaryClass={cs.LG cs.IT math.IT stat.ML} }
yang2024best
arxiv-662880
2409.18910
A Robin-Robin splitting method for the Stokes-Biot fluid-poroelastic structure interaction model
<|reference_start|>A Robin-Robin splitting method for the Stokes-Biot fluid-poroelastic structure interaction model: We develop and analyze a splitting method for fluid-poroelastic structure interaction. The fluid is described using the Stokes equations and the poroelastic structure is described using the Biot equations. The transmission conditions on the interface are mass conservation, balance of stresses, and the Beavers-Joseph-Saffman condition. The splitting method involves single and decoupled Stokes and Biot solves at each time step. The subdomain problems use Robin boundary conditions on the interface, which are obtained from the transmission conditions. The Robin data is represented by an auxiliary interface variable. We prove that the method is unconditionally stable and establish that the time discretization error is $\mathcal{O}(\sqrt{T}\Delta t)$, where $T$ is the final time and $\Delta t$ is the time step. We further study the iterative version of the algorithm, which involves an iteration between the Stokes and Biot sub-problems at each time step. We prove that the iteration converges to a monolithic scheme with a Robin Lagrange multiplier used to impose the continuity of the velocity. Numerical experiments are presented to illustrate the theoretical results.<|reference_end|>
arxiv
@article{dalal2024a, title={A Robin-Robin splitting method for the Stokes-Biot fluid-poroelastic structure interaction model}, author={Aashi Dalal and Rebecca Durst and Annalisa Quaini and Ivan Yotov}, journal={arXiv preprint arXiv:2409.18910}, year={2024}, archivePrefix={arXiv}, eprint={2409.18910}, primaryClass={math.NA cs.NA} }
dalal2024a
arxiv-662881
2409.18911
Soft Measures for Extracting Causal Collective Intelligence
<|reference_start|>Soft Measures for Extracting Causal Collective Intelligence: Understanding and modeling collective intelligence is essential for addressing complex social systems. Directed graphs called fuzzy cognitive maps (FCMs) offer a powerful tool for encoding causal mental models, but extracting high-integrity FCMs from text is challenging. This study presents an approach using large language models (LLMs) to automate FCM extraction. We introduce novel graph-based similarity measures and evaluate them by correlating their outputs with human judgments through the Elo rating system. Results show positive correlations with human evaluations, but even the best-performing measure exhibits limitations in capturing FCM nuances. Fine-tuning LLMs improves performance, but existing measures still fall short. This study highlights the need for soft similarity measures tailored to FCM extraction, advancing collective intelligence modeling with NLP.<|reference_end|>
arxiv
@article{berijanian2024soft, title={Soft Measures for Extracting Causal Collective Intelligence}, author={Maryam Berijanian and Spencer Dork and Kuldeep Singh and Michael Riley Millikan and Ashlin Riggs and Aadarsh Swaminathan and Sarah L. Gibbs and Scott E. Friedman and Nathan Brugnone}, journal={arXiv preprint arXiv:2409.18911}, year={2024}, archivePrefix={arXiv}, eprint={2409.18911}, primaryClass={cs.CL cs.AI cs.CY cs.SI} }
berijanian2024soft
arxiv-662882
2409.18915
A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs
<|reference_start|>A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs: As a popular paradigm for juggling data privacy and collaborative training, federated learning (FL) is flourishing to distributively process the large scale of heterogeneous datasets on edged clients. Due to bandwidth limitations and security considerations, it ingeniously splits the original problem into multiple subproblems to be solved in parallel, which empowers primal dual solutions to great application values in FL. In this paper, we review the recent development of classical federated primal dual methods and point out a serious common defect of such methods in non-convex scenarios, which we say is a "dual drift" caused by dual hysteresis of those longstanding inactive clients under partial participation training. To further address this problem, we propose a novel Aligned Federated Primal Dual (A-FedPD) method, which constructs virtual dual updates to align global consensus and local dual variables for those protracted unparticipated local clients. Meanwhile, we provide a comprehensive analysis of the optimization and generalization efficiency for the A-FedPD method on smooth non-convex objectives, which confirms its high efficiency and practicality. Extensive experiments are conducted on several classical FL setups to validate the effectiveness of our proposed method.<|reference_end|>
arxiv
@article{sun2024a-fedpd, title={A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs}, author={Yan Sun and Li Shen and Dacheng Tao}, journal={arXiv preprint arXiv:2409.18915}, year={2024}, archivePrefix={arXiv}, eprint={2409.18915}, primaryClass={cs.LG} }
sun2024a-fedpd
arxiv-662883
2409.18921
Cluster-BPI: Efficient Fine-Grain Blind Power Identification for Defending against Hardware Thermal Trojans in Multicore SoCs
<|reference_start|>Cluster-BPI: Efficient Fine-Grain Blind Power Identification for Defending against Hardware Thermal Trojans in Multicore SoCs: Modern multicore System-on-Chips (SoCs) feature hardware monitoring mechanisms that measure total power consumption. However, these aggregate measurements are often insufficient for fine-grained thermal and power management. This paper presents an enhanced Clustering Blind Power Identification (ICBPI) approach, designed to improve the sensitivity and robustness of the traditional Blind Power Identification (BPI) method. BPI estimates the power consumption of individual cores and models the thermal behavior of an SoC using only thermal sensor data and total power measurements. The proposed ICBPI approach refines BPI's initialization process, particularly improving the non-negative matrix factorization (NNMF) step, which is critical to the accuracy of BPI. ICBPI introduces density-based spatial clustering of applications with noise (DBSCAN) to better align temperature and power consumption data, thereby providing more accurate power consumption estimates. We validate the ICBPI method through two key tasks. The first task evaluates power estimation accuracy across four different multicore architectures, including a heterogeneous processor. Results show that ICBPI significantly enhances accuracy, reducing error rates by 77.56% compared to the original BPI and by 68.44% compared to the state-of-the-art BPISS method. The second task focuses on improving the detection and localization of malicious thermal sensor attacks in heterogeneous processors. The results demonstrate that ICBPI enhances the security and robustness of multicore SoCs against such attacks.<|reference_end|>
arxiv
@article{elshamy2024cluster-bpi, title={Cluster-BPI: Efficient Fine-Grain Blind Power Identification for Defending against Hardware Thermal Trojans in Multicore SoCs}, author={Mohamed R. Elshamy and Mehdi Elahi and Ahmad Patooghy and Abdel-Hameed A. Badawy}, journal={arXiv preprint arXiv:2409.18921}, year={2024}, archivePrefix={arXiv}, eprint={2409.18921}, primaryClass={cs.CR cs.PF eess.SP} }
elshamy2024cluster-bpi
arxiv-662884
2409.18922
SurfaceAI: Automated creation of cohesive road surface quality datasets based on open street-level imagery
<|reference_start|>SurfaceAI: Automated creation of cohesive road surface quality datasets based on open street-level imagery: This paper introduces SurfaceAI, a pipeline designed to generate comprehensive georeferenced datasets on road surface type and quality from openly available street-level imagery. The motivation stems from the significant impact of road unevenness on the safety and comfort of traffic participants, especially vulnerable road users, emphasizing the need for detailed road surface data in infrastructure modeling and analysis. SurfaceAI addresses this gap by leveraging crowdsourced Mapillary data to train models that predict the type and quality of road surfaces visible in street-level images, which are then aggregated to provide cohesive information on entire road segment conditions.<|reference_end|>
arxiv
@article{kapp2024surfaceai, title={SurfaceAI: Automated creation of cohesive road surface quality datasets based on open street-level imagery}, author={Alexandra Kapp and Edith Hoffmann and Esther Weigmann and Helena Mihaljević}, journal={arXiv preprint arXiv:2409.18922}, year={2024}, archivePrefix={arXiv}, eprint={2409.18922}, primaryClass={cs.CV} }
kapp2024surfaceai
arxiv-662885
2409.18924
AIPatient: Simulating Patients with EHRs and LLM Powered Agentic Workflow
<|reference_start|>AIPatient: Simulating Patients with EHRs and LLM Powered Agentic Workflow: Simulated patient systems play a crucial role in modern medical education and research, providing safe, integrative learning environments and enabling clinical decision-making simulations. Large Language Models (LLM) could advance simulated patient systems by replicating medical conditions and patient-doctor interactions with high fidelity and low cost. However, ensuring the effectiveness and trustworthiness of these systems remains a challenge, as they require a large, diverse, and precise patient knowledgebase, along with a robust and stable knowledge diffusion to users. Here, we developed AIPatient, an advanced simulated patient system with AIPatient Knowledge Graph (AIPatient KG) as the input and the Reasoning Retrieval-Augmented Generation (Reasoning RAG) agentic workflow as the generation backbone. AIPatient KG samples data from Electronic Health Records (EHRs) in the Medical Information Mart for Intensive Care (MIMIC)-III database, producing a clinically diverse and relevant cohort of 1,495 patients with high knowledgebase validity (F1 0.89). Reasoning RAG leverages six LLM powered agents spanning tasks including retrieval, KG query generation, abstraction, checker, rewrite, and summarization. This agentic framework reaches an overall accuracy of 94.15% in EHR-based medical Question Answering (QA), outperforming benchmarks that use either no agent or only partial agent integration. Our system also presents high readability (median Flesch Reading Ease 77.23; median Flesch Kincaid Grade 5.6), robustness (ANOVA F-value 0.6126, p>0.1), and stability (ANOVA F-value 0.782, p>0.1). The promising performance of the AIPatient system highlights its potential to support a wide range of applications, including medical education, model evaluation, and system integration.<|reference_end|>
arxiv
@article{yu2024aipatient, title={AIPatient: Simulating Patients with EHRs and LLM Powered Agentic Workflow}, author={Huizi Yu and Jiayan Zhou and Lingyao Li and Shan Chen and Jack Gallifant and Anye Shi and Xiang Li and Wenyue Hua and Mingyu Jin and Guang Chen and Yang Zhou and Zhao Li and Trisha Gupte and Ming-Li Chen and Zahra Azizi and Yongfeng Zhang and Themistocles L. Assimes and Xin Ma and Danielle S. Bitterman and Lin Lu and Lizhou Fan}, journal={arXiv preprint arXiv:2409.18924}, year={2024}, archivePrefix={arXiv}, eprint={2409.18924}, primaryClass={cs.CL cs.AI} }
yu2024aipatient
arxiv-662886
2409.18930
Nonlinear orbital stability of stationary discrete shock profiles for scalar conservation laws
<|reference_start|>Nonlinear orbital stability of stationary discrete shock profiles for scalar conservation laws: For scalar conservation laws, we prove that spectrally stable stationary Lax discrete shock profiles are nonlinearly stable in some polynomially-weighted $\ell^1$ and $\ell^\infty$ spaces. In comparison with several previous nonlinear stability results on discrete shock profiles, we avoid the introduction of any weakness assumption on the amplitude of the shock and apply our analysis to a large family of schemes that introduce some artificial possibly high-order viscosity. The proof relies on a precise description of the Green's function of the linearization of the numerical scheme about spectrally stable discrete shock profiles obtained in [Coeu23]. The present article also pinpoints the ideas for a possible extension of this nonlinear orbital stability result for discrete shock profiles in the case of systems of conservation laws.<|reference_end|>
arxiv
@article{coeuret2024nonlinear, title={Nonlinear orbital stability of stationary discrete shock profiles for scalar conservation laws}, author={Lucas Coeuret}, journal={arXiv preprint arXiv:2409.18930}, year={2024}, archivePrefix={arXiv}, eprint={2409.18930}, primaryClass={math.AP cs.NA math.NA} }
coeuret2024nonlinear
arxiv-662887
2409.18931
Social Media Bot Policies: Evaluating Passive and Active Enforcement
<|reference_start|>Social Media Bot Policies: Evaluating Passive and Active Enforcement: The emergence of Multimodal Foundation Models (MFMs) holds significant promise for transforming social media platforms. However, this advancement also introduces substantial security and ethical concerns, as it may facilitate malicious actors in the exploitation of online users. We aim to evaluate the strength of security protocols on prominent social media platforms in mitigating the deployment of MFM bots. We examined the bot and content policies of eight popular social media platforms: X (formerly Twitter), Instagram, Facebook, Threads, TikTok, Mastodon, Reddit, and LinkedIn. Using Selenium, we developed a web bot to test bot deployment and AI-generated content policies and their enforcement mechanisms. Our findings indicate significant vulnerabilities within the current enforcement mechanisms of these platforms. Despite having explicit policies against bot activity, all platforms failed to detect and prevent the operation of our MFM bots. This finding reveals a critical gap in the security measures employed by these social media platforms, underscoring the potential for malicious actors to exploit these weaknesses to disseminate misinformation, commit fraud, or manipulate users.<|reference_end|>
arxiv
@article{radivojevic2024social, title={Social Media Bot Policies: Evaluating Passive and Active Enforcement}, author={Kristina Radivojevic and Christopher McAleer and Catrell Conley and Cormac Kennedy and Paul Brenner}, journal={arXiv preprint arXiv:2409.18931}, year={2024}, archivePrefix={arXiv}, eprint={2409.18931}, primaryClass={cs.SI cs.CY} }
radivojevic2024social
arxiv-662888
2409.18932
ReviveDiff: A Universal Diffusion Model for Restoring Images in Adverse Weather Conditions
<|reference_start|>ReviveDiff: A Universal Diffusion Model for Restoring Images in Adverse Weather Conditions: Images captured in challenging environments--such as nighttime, foggy, rainy weather, and underwater--often suffer from significant degradation, resulting in a substantial loss of visual quality. Effective restoration of these degraded images is critical for the subsequent vision tasks. While many existing approaches have successfully incorporated specific priors for individual tasks, these tailored solutions limit their applicability to other degradations. In this work, we propose a universal network architecture, dubbed "ReviveDiff", which can address a wide range of degradations and bring images back to life by enhancing and restoring their quality. Our approach is inspired by the observation that, unlike degradation caused by movement or electronic issues, quality degradation under adverse conditions primarily stems from natural media (such as fog, water, and low luminance), which generally preserves the original structures of objects. To restore the quality of such images, we leveraged the latest advancements in diffusion models and developed ReviveDiff to restore image quality from both macro and micro levels across some key factors determining image quality, such as sharpness, distortion, noise level, dynamic range, and color accuracy. We rigorously evaluated ReviveDiff on seven benchmark datasets covering five types of degrading conditions: Rainy, Underwater, Low-light, Smoke, and Nighttime Hazy. Our experimental results demonstrate that ReviveDiff outperforms the state-of-the-art methods both quantitatively and visually.<|reference_end|>
arxiv
@article{huang2024revivediff, title={ReviveDiff: A Universal Diffusion Model for Restoring Images in Adverse Weather Conditions}, author={Wenfeng Huang and Guoan Xu and Wenjing Jia and Stuart Perry and Guangwei Gao}, journal={arXiv preprint arXiv:2409.18932}, year={2024}, archivePrefix={arXiv}, eprint={2409.18932}, primaryClass={cs.CV} }
huang2024revivediff
arxiv-662889
2409.18937
Robust Deep Reinforcement Learning for Volt-VAR Optimization in Active Distribution System under Uncertainty
<|reference_start|>Robust Deep Reinforcement Learning for Volt-VAR Optimization in Active Distribution System under Uncertainty: The deep reinforcement learning (DRL) based Volt-VAR optimization (VVO) methods have been widely studied for active distribution networks (ADNs). However, most of them lack safety guarantees in terms of power injection uncertainties due to the increase in distributed energy resources (DERs) and load demand, such as electric vehicles. This article proposes a robust deep reinforcement learning (RDRL) framework for VVO via a robust deep deterministic policy gradient (DDPG) algorithm. This algorithm can effectively manage hybrid action spaces, considering control devices like capacitors, voltage regulators, and smart inverters. Additionally, it is designed to handle uncertainties by quantifying uncertainty sets with conformal prediction and modeling uncertainties as adversarial attacks to guarantee safe exploration across action spaces. Numerical results on three IEEE test cases demonstrate the sample efficiency and safety of the proposed robust DDPG against uncertainties compared to the benchmark algorithms.<|reference_end|>
arxiv
@article{chen2024robust, title={Robust Deep Reinforcement Learning for Volt-VAR Optimization in Active Distribution System under Uncertainty}, author={Zhengrong Chen and Siyao Cai and A.P. Sakis Meliopoulos}, journal={arXiv preprint arXiv:2409.18937}, year={2024}, archivePrefix={arXiv}, eprint={2409.18937}, primaryClass={eess.SY cs.SY} }
chen2024robust
arxiv-662890
2409.18938
From Seconds to Hours: Reviewing MultiModal Large Language Models on Comprehensive Long Video Understanding
<|reference_start|>From Seconds to Hours: Reviewing MultiModal Large Language Models on Comprehensive Long Video Understanding: The integration of Large Language Models (LLMs) with visual encoders has recently shown promising performance in visual understanding tasks, leveraging their inherent capability to comprehend and generate human-like text for visual reasoning. Given the diverse nature of visual data, MultiModal Large Language Models (MM-LLMs) exhibit variations in model designing and training for understanding images, short videos, and long videos. Our paper focuses on the substantial differences and unique challenges posed by long video understanding compared to static image and short video understanding. Unlike static images, short videos encompass sequential frames with both spatial and within-event temporal information, while long videos consist of multiple events with between-event and long-term temporal information. In this survey, we aim to trace and summarize the advancements of MM-LLMs from image understanding to long video understanding. We review the differences among various visual understanding tasks and highlight the challenges in long video understanding, including more fine-grained spatiotemporal details, dynamic events, and long-term dependencies. We then provide a detailed summary of the advancements in MM-LLMs in terms of model design and training methodologies for understanding long videos. Finally, we compare the performance of existing MM-LLMs on video understanding benchmarks of various lengths and discuss potential future directions for MM-LLMs in long video understanding.<|reference_end|>
arxiv
@article{zou2024from, title={From Seconds to Hours: Reviewing MultiModal Large Language Models on Comprehensive Long Video Understanding}, author={Heqing Zou and Tianze Luo and Guiyang Xie and Victor (Xiao Jie) Zhang and Fengmao Lv and Guangcong Wang and Juanyang Chen and Zhuochen Wang and Hansheng Zhang and Huaijian Zhang}, journal={arXiv preprint arXiv:2409.18938}, year={2024}, archivePrefix={arXiv}, eprint={2409.18938}, primaryClass={cs.CV cs.AI} }
zou2024from
arxiv-662891
2409.18939
Towards Super-Nominal Payload Handling: Inverse Dynamics Analysis for Multi-Skill Robotic Manipulation
<|reference_start|>Towards Super-Nominal Payload Handling: Inverse Dynamics Analysis for Multi-Skill Robotic Manipulation: Motion planning for articulated robots has traditionally been governed by algorithms that operate within manufacturer-defined payload limits. Our empirical analysis of the Franka Emika Panda robot demonstrates that this approach unnecessarily restricts the robot's dynamically-reachable task space. These results establish an expanded operational envelope for such robots, showing that they can handle payloads of more than twice their rated capacity. Additionally, our preliminary findings indicate that integrating non-prehensile motion primitives with grasping-based manipulation has the potential to further increase the success rates of manipulation tasks involving payloads exceeding nominal limits.<|reference_end|>
arxiv
@article{pasricha2024towards, title={Towards Super-Nominal Payload Handling: Inverse Dynamics Analysis for Multi-Skill Robotic Manipulation}, author={Anuj Pasricha and Alessandro Roncone}, journal={arXiv preprint arXiv:2409.18939}, year={2024}, archivePrefix={arXiv}, eprint={2409.18939}, primaryClass={cs.RO} }
pasricha2024towards
arxiv-662892
2409.18941
Building Trust Through Voice: How Vocal Tone Impacts User Perception of Attractiveness of Voice Assistants
<|reference_start|>Building Trust Through Voice: How Vocal Tone Impacts User Perception of Attractiveness of Voice Assistants: Voice Assistants (VAs) are popular for simple tasks, but users are often hesitant to use them for complex activities like online shopping. We explored whether the vocal characteristics like the VA's vocal tone, can make VAs perceived as more attractive and trustworthy to users for complex tasks. Our findings show that the tone of the VA voice significantly impacts its perceived attractiveness and trustworthiness. Participants in our experiment were more likely to be attracted to VAs with positive or neutral tones and ultimately trusted the VAs they found more attractive. We conclude that VA's perceived trustworthiness can be enhanced through thoughtful voice design, incorporating a variety of vocal tones.<|reference_end|>
arxiv
@article{pias2024building, title={Building Trust Through Voice: How Vocal Tone Impacts User Perception of Attractiveness of Voice Assistants}, author={Sabid Bin Habib Pias and Alicia Freel and Ran Huang and Donald Williamson and Minjeong Kim and Apu Kapadia}, journal={arXiv preprint arXiv:2409.18941}, year={2024}, archivePrefix={arXiv}, eprint={2409.18941}, primaryClass={cs.HC cs.AI} }
pias2024building
arxiv-662893
2409.18943
Ruler: A Model-Agnostic Method to Control Generated Length for Large Language Models
<|reference_start|>Ruler: A Model-Agnostic Method to Control Generated Length for Large Language Models: The instruction-following ability of large language models enables humans to interact with AI agents in a natural way. However, when required to generate responses of a specific length, large language models often struggle to meet users' needs due to their inherent difficulty in accurately perceiving numerical constraints. To explore the ability of large language models to control the length of generated responses, we propose the Target Length Generation Task (TLG) and design two metrics, Precise Match (PM) and Flexible Match (FM) to evaluate the model's performance in adhering to specified response lengths. Furthermore, we introduce a novel, model-agnostic approach called Ruler, which employs Meta Length Tokens (MLTs) to enhance the instruction-following ability of large language models under length-constrained instructions. Specifically, Ruler equips LLMs with the ability to generate responses of a specified length based on length constraints within the instructions. Moreover, Ruler can automatically generate appropriate MLT when length constraints are not explicitly provided, demonstrating excellent versatility and generalization. Comprehensive experiments show the effectiveness of Ruler across different LLMs on Target Length Generation Task, e.g., at All Level 27.97 average gain on PM, 29.57 average gain on FM. In addition, we conduct extensive ablation experiments to further substantiate the efficacy and generalization of Ruler. Our code and data is available at https://github.com/Geaming2002/Ruler.<|reference_end|>
arxiv
@article{li2024ruler, title={Ruler: A Model-Agnostic Method to Control Generated Length for Large Language Models}, author={Jiaming Li and Lei Zhang and Yunshui Li and Ziqiang Liu and Yuelin Bai and Run Luo and Longze Chen and Min Yang}, journal={arXiv preprint arXiv:2409.18943}, year={2024}, archivePrefix={arXiv}, eprint={2409.18943}, primaryClass={cs.CL} }
li2024ruler
arxiv-662894
2409.18946
Unconditional stability of a recurrent neural circuit implementing divisive normalization
<|reference_start|>Unconditional stability of a recurrent neural circuit implementing divisive normalization: Stability in recurrent neural models poses a significant challenge, particularly in developing biologically plausible neurodynamical models that can be seamlessly trained. Traditional cortical circuit models are notoriously difficult to train due to expansive nonlinearities in the dynamical system, leading to an optimization problem with nonlinear stability constraints that are difficult to impose. Conversely, recurrent neural networks (RNNs) excel in tasks involving sequential data but lack biological plausibility and interpretability. In this work, we address these challenges by linking dynamic divisive normalization (DN) to the stability of ORGaNICs, a biologically plausible recurrent cortical circuit model that dynamically achieves DN and has been shown to simulate a wide range of neurophysiological phenomena. By using the indirect method of Lyapunov, we prove the remarkable property of unconditional local stability for an arbitrary-dimensional ORGaNICs circuit when the recurrent weight matrix is the identity. We thus connect ORGaNICs to a system of coupled damped harmonic oscillators, which enables us to derive the circuit's energy function, providing a normative principle of what the circuit, and individual neurons, aim to accomplish. Further, for a generic recurrent weight matrix, we prove the stability of the 2D model and demonstrate empirically that stability holds in higher dimensions. Finally, we show that ORGaNICs can be trained by backpropagation through time without gradient clipping/scaling, thanks to its intrinsic stability property and adaptive time constants, which address the problems of exploding, vanishing, and oscillating gradients. By evaluating the model's performance on RNN benchmarks, we find that ORGaNICs outperform alternative neurodynamical models on static image classification tasks and perform comparably to LSTMs on sequential tasks.<|reference_end|>
arxiv
@article{rawat2024unconditional, title={Unconditional stability of a recurrent neural circuit implementing divisive normalization}, author={Shivang Rawat and David J. Heeger and Stefano Martiniani}, journal={arXiv preprint arXiv:2409.18946}, year={2024}, archivePrefix={arXiv}, eprint={2409.18946}, primaryClass={q-bio.NC cs.AI cs.LG math.DS} }
rawat2024unconditional
arxiv-662895
2409.18951
Spectral Wavelet Dropout: Regularization in the Wavelet Domain
<|reference_start|>Spectral Wavelet Dropout: Regularization in the Wavelet Domain: Regularization techniques help prevent overfitting and therefore improve the ability of convolutional neural networks (CNNs) to generalize. One reason for overfitting is the complex co-adaptations among different parts of the network, which make the CNN dependent on their joint response rather than encouraging each part to learn a useful feature representation independently. Frequency domain manipulation is a powerful strategy for modifying data that has temporal and spatial coherence by utilizing frequency decomposition. This work introduces Spectral Wavelet Dropout (SWD), a novel regularization method that includes two variants: 1D-SWD and 2D-SWD. These variants improve CNN generalization by randomly dropping detailed frequency bands in the discrete wavelet decomposition of feature maps. Our approach distinguishes itself from the pre-existing Spectral "Fourier" Dropout (2D-SFD), which eliminates coefficients in the Fourier domain. Notably, SWD requires only a single hyperparameter, unlike the two required by SFD. We also extend the literature by implementing a one-dimensional version of Spectral "Fourier" Dropout (1D-SFD), setting the stage for a comprehensive comparison. Our evaluation shows that both 1D and 2D SWD variants have competitive performance on CIFAR-10/100 benchmarks relative to both 1D-SFD and 2D-SFD. Specifically, 1D-SWD has a significantly lower computational complexity compared to 1D/2D-SFD. In the Pascal VOC Object Detection benchmark, SWD variants surpass 1D-SFD and 2D-SFD in performance and demonstrate lower computational complexity during training.<|reference_end|>
arxiv
@article{cakaj2024spectral, title={Spectral Wavelet Dropout: Regularization in the Wavelet Domain}, author={Rinor Cakaj and Jens Mehnert and Bin Yang}, journal={arXiv preprint arXiv:2409.18951}, year={2024}, archivePrefix={arXiv}, eprint={2409.18951}, primaryClass={cs.CV cs.LG} }
cakaj2024spectral
arxiv-662896
2409.18952
RepairBench: Leaderboard of Frontier Models for Program Repair
<|reference_start|>RepairBench: Leaderboard of Frontier Models for Program Repair: AI-driven program repair uses AI models to repair buggy software by producing patches. Rapid advancements in AI surely impact state-of-the-art performance of program repair. Yet, grasping this progress requires frequent and standardized evaluations. We propose RepairBench, a novel leaderboard for AI-driven program repair. The key characteristics of RepairBench are: 1) it is execution-based: all patches are compiled and executed against a test suite, 2) it assesses frontier models in a frequent and standardized way. RepairBench leverages two high-quality benchmarks, Defects4J and GitBug-Java, to evaluate frontier models against real-world program repair tasks. We publicly release the evaluation framework of RepairBench. We will update the leaderboard as new frontier models are released.<|reference_end|>
arxiv
@article{silva2024repairbench, title={RepairBench: Leaderboard of Frontier Models for Program Repair}, author={Andr\'e Silva and Martin Monperrus}, journal={arXiv preprint arXiv:2409.18952}, year={2024}, archivePrefix={arXiv}, eprint={2409.18952}, primaryClass={cs.SE cs.LG} }
silva2024repairbench
arxiv-662897
2409.18953
UniCal: Unified Neural Sensor Calibration
<|reference_start|>UniCal: Unified Neural Sensor Calibration: Self-driving vehicles (SDVs) require accurate calibration of LiDARs and cameras to fuse sensor data accurately for autonomy. Traditional calibration methods typically leverage fiducials captured in a controlled and structured scene and compute correspondences to optimize over. These approaches are costly and require substantial infrastructure and operations, making it challenging to scale for vehicle fleets. In this work, we propose UniCal, a unified framework for effortlessly calibrating SDVs equipped with multiple LiDARs and cameras. Our approach is built upon a differentiable scene representation capable of rendering multi-view geometrically and photometrically consistent sensor observations. We jointly learn the sensor calibration and the underlying scene representation through differentiable volume rendering, utilizing outdoor sensor data without the need for specific calibration fiducials. This "drive-and-calibrate" approach significantly reduces costs and operational overhead compared to existing calibration systems, enabling efficient calibration for large SDV fleets at scale. To ensure geometric consistency across observations from different sensors, we introduce a novel surface alignment loss that combines feature-based registration with neural rendering. Comprehensive evaluations on multiple datasets demonstrate that UniCal outperforms or matches the accuracy of existing calibration approaches while being more efficient, demonstrating the value of UniCal for scalable calibration.<|reference_end|>
arxiv
@article{yang2024unical, title={UniCal: Unified Neural Sensor Calibration}, author={Ze Yang and George Chen and Haowei Zhang and Kevin Ta and Ioan Andrei B\^arsan and Daniel Murphy and Sivabalan Manivasagam and Raquel Urtasun}, journal={arXiv preprint arXiv:2409.18953}, year={2024}, archivePrefix={arXiv}, eprint={2409.18953}, primaryClass={cs.CV cs.RO} }
yang2024unical
arxiv-662898
2409.18957
LML-DAP: Language Model Learning a Dataset for Data-Augmented Prediction
<|reference_start|>LML-DAP: Language Model Learning a Dataset for Data-Augmented Prediction: Classification tasks are typically handled using Machine Learning (ML) models, which lack a balance between accuracy and interpretability. This paper introduces a new approach to using Large Language Models (LLMs) for classification tasks in an explainable way. Unlike ML models that rely heavily on data cleaning and feature engineering, this method streamlines the process using LLMs. This paper proposes a new concept called "Language Model Learning (LML)" powered by a new method called "Data-Augmented Prediction (DAP)". The classification is performed by LLMs using a method similar to humans manually exploring and understanding the data and deciding classifications using data as a reference. In the LML process, a dataset is summarized and evaluated to determine the features that lead to the classification of each label the most. In the process of DAP, the system uses the data summary and a row of the testing dataset to automatically generate a query, which is used to retrieve relevant rows from the dataset. A classification is generated by the LLM using data summary and relevant rows, ensuring satisfactory accuracy even with complex data using context-aware decision-making. LML and DAP unlock the possibilities of new applications. The proposed method uses the words "Act as an Explainable Machine Learning Model" in the prompt to enhance the interpretability of the predictions by allowing users to review the logic behind each prediction. In some test cases, the system scored an accuracy above 90%, proving the effectiveness of the system and its potential to outperform conventional ML models in various scenarios. The code is available at https://github.com/Pro-GenAI/LML-DAP<|reference_end|>
arxiv
@article{vadlapati2024lml-dap, title={LML-DAP: Language Model Learning a Dataset for Data-Augmented Prediction}, author={Praneeth Vadlapati}, journal={arXiv preprint arXiv:2409.18957}, year={2024}, archivePrefix={arXiv}, eprint={2409.18957}, primaryClass={cs.CL cs.AI cs.IR cs.LG} }
vadlapati2024lml-dap
arxiv-662899
2409.18959
$O(d/T)$ Convergence Theory for Diffusion Probabilistic Models under Minimal Assumptions
<|reference_start|>$O(d/T)$ Convergence Theory for Diffusion Probabilistic Models under Minimal Assumptions: Score-based diffusion models, which generate new data by learning to reverse a diffusion process that perturbs data from the target distribution into noise, have achieved remarkable success across various generative tasks. Despite their superior empirical performance, existing theoretical guarantees are often constrained by stringent assumptions or suboptimal convergence rates. In this paper, we establish a fast convergence theory for a popular SDE-based sampler under minimal assumptions. Our analysis shows that, provided $\ell_{2}$-accurate estimates of the score functions, the total variation distance between the target and generated distributions is upper bounded by $O(d/T)$ (ignoring logarithmic factors), where $d$ is the data dimensionality and $T$ is the number of steps. This result holds for any target distribution with finite first-order moment. To our knowledge, this improves upon existing convergence theory for both the SDE-based sampler and another ODE-based sampler, while imposing minimal assumptions on the target data distribution and score estimates. This is achieved through a novel set of analytical tools that provides a fine-grained characterization of how the error propagates at each step of the reverse process.<|reference_end|>
arxiv
@article{li2024convergence, title={$O(d/T)$ Convergence Theory for Diffusion Probabilistic Models under Minimal Assumptions}, author={Gen Li and Yuling Yan}, journal={arXiv preprint arXiv:2409.18959}, year={2024}, archivePrefix={arXiv}, eprint={2409.18959}, primaryClass={cs.LG cs.AI math.ST stat.ML stat.TH} }
li2024convergence
arxiv-662900
2409.18961
ProMerge: Prompt and Merge for Unsupervised Instance Segmentation
<|reference_start|>ProMerge: Prompt and Merge for Unsupervised Instance Segmentation: Unsupervised instance segmentation aims to segment distinct object instances in an image without relying on human-labeled data. This field has recently seen significant advancements, partly due to the strong local correspondences afforded by rich visual feature representations from self-supervised models (e.g., DINO). Recent state-of-the-art approaches use self-supervised features to represent images as graphs and solve a generalized eigenvalue system (i.e., normalized-cut) to generate foreground masks. While effective, this strategy is limited by its attendant computational demands, leading to slow inference speeds. In this paper, we propose Prompt and Merge (ProMerge), which leverages self-supervised visual features to obtain initial groupings of patches and applies a strategic merging to these segments, aided by a sophisticated background-based mask pruning technique. ProMerge not only yields competitive results but also offers a significant reduction in inference time compared to state-of-the-art normalized-cut-based approaches. Furthermore, when training an object detector using our mask predictions as pseudo-labels, the resulting detector surpasses the current leading unsupervised model on various challenging instance segmentation benchmarks.<|reference_end|>
arxiv
@article{li2024promerge, title={ProMerge: Prompt and Merge for Unsupervised Instance Segmentation}, author={Dylan Li and Gyungin Shin}, journal={arXiv preprint arXiv:2409.18961}, year={2024}, archivePrefix={arXiv}, eprint={2409.18961}, primaryClass={cs.CV cs.AI} }
li2024promerge