corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-662701
|
2409.18573
|
Optimal Tree-Based Mechanisms for Differentially Private Approximate CDFs
|
<|reference_start|>Optimal Tree-Based Mechanisms for Differentially Private Approximate CDFs: This paper considers the $\varepsilon$-differentially private (DP) release of an approximate cumulative distribution function (CDF) of the samples in a dataset. We assume that the true (approximate) CDF is obtained after lumping the data samples into a fixed number $K$ of bins. In this work, we extend the well-known binary tree mechanism to the class of \emph{level-uniform tree-based} mechanisms and identify $\varepsilon$-DP mechanisms that have a small $\ell_2$-error. We identify optimal or close-to-optimal tree structures when either of the parameters, which are the branching factors or the privacy budgets at each tree level, is given, and when the algorithm designer is free to choose both sets of parameters. Interestingly, when we allow the branching factors to take on real values, under certain mild restrictions, the optimal level-uniform tree-based mechanism is obtained by choosing equal branching factors \emph{independent} of $K$, and equal privacy budgets at all levels. Furthermore, for selected $K$ values, we explicitly identify the optimal \emph{integer} branching factors and tree height, assuming equal privacy budgets at all levels. Finally, we describe general strategies for improving the private CDF estimates further, by combining multiple noisy estimates and by post-processing the estimates for consistency.<|reference_end|>
|
arxiv
|
@article{rameshwar2024optimal,
title={Optimal Tree-Based Mechanisms for Differentially Private Approximate
CDFs},
author={V. Arvind Rameshwar and Anshoo Tandon and Abhay Sharma},
journal={arXiv preprint arXiv:2409.18573},
year={2024},
archivePrefix={arXiv},
eprint={2409.18573},
primaryClass={cs.IT math.IT}
}
|
rameshwar2024optimal
|
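For orientation, the binary tree mechanism this paper generalizes can be sketched in a few lines: every sample contributes to one node per tree level, so Laplace noise of scale (levels/eps) at every node yields eps-DP overall, and each prefix (CDF) query touches at most one noisy node per level. A minimal sketch with binary branching and equal per-level budgets — the textbook baseline, not the paper's optimized level-uniform trees; names are illustrative:

```python
import math
import random

def laplace(scale):
    # The difference of two i.i.d. exponentials is Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_prefix_counts(counts, eps):
    """eps-DP noisy prefix counts (an unnormalized CDF) over K bins via
    the classic binary tree mechanism: each sample touches one node per
    level, so Laplace(levels/eps) noise per node gives eps-DP overall."""
    K = len(counts)
    n_levels = math.ceil(math.log2(K)) + 1 if K > 1 else 1
    size = 1 << (n_levels - 1)              # pad bin count to a power of two
    scale = n_levels / eps

    level = [counts[i] if i < K else 0 for i in range(size)]
    noisy = [[c + laplace(scale) for c in level]]
    while len(level) > 1:                   # exact subtree sums, noised per node
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
        noisy.append([c + laplace(scale) for c in level])

    def prefix(j):                          # noisy count of bins 0..j
        total, lvl, node, lo, hi = 0.0, len(noisy) - 1, 0, 0, size - 1
        while True:
            if j == hi:                     # node's range ends exactly at j
                return total + noisy[lvl][node]
            mid = (lo + hi) // 2
            if j <= mid:                    # recurse into the left child
                lvl, node, hi = lvl - 1, 2 * node, mid
            else:                           # take whole left child, go right
                total += noisy[lvl - 1][2 * node]
                lvl, node, lo = lvl - 1, 2 * node + 1, mid + 1

    return [prefix(j) for j in range(K)]

print(private_prefix_counts([5, 3, 0, 2, 7, 1, 4, 6], eps=1.0))
```

The paper's level-uniform generalization replaces the fixed branching factor of 2 and the equal budget split with tunable per-level choices.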
arxiv-662702
|
2409.18574
|
Climate Adaptation with Reinforcement Learning: Experiments with Flooding and Transportation in Copenhagen
|
<|reference_start|>Climate Adaptation with Reinforcement Learning: Experiments with Flooding and Transportation in Copenhagen: Due to climate change the frequency and intensity of extreme rainfall events, which contribute to urban flooding, are expected to increase in many places. These floods can damage transport infrastructure and disrupt mobility, highlighting the need for cities to adapt to escalating risks. Reinforcement learning (RL) serves as a powerful tool for uncovering optimal adaptation strategies, determining how and where to deploy adaptation measures effectively, even under significant uncertainty. In this study, we leverage RL to identify the most effective timing and locations for implementing measures, aiming to reduce both direct and indirect impacts of flooding. Our framework integrates climate change projections of future rainfall events and floods, models city-wide motorized trips, and quantifies direct and indirect impacts on infrastructure and mobility. Preliminary results suggest that our RL-based approach can significantly enhance decision-making by prioritizing interventions in specific urban areas and identifying the optimal periods for their implementation.<|reference_end|>
|
arxiv
|
@article{costa2024climate,
title={Climate Adaptation with Reinforcement Learning: Experiments with
Flooding and Transportation in Copenhagen},
author={Miguel Costa and Morten W. Petersen and Arthur Vandervoort and Martin
Drews and Karyn Morrissey and Francisco C. Pereira},
journal={arXiv preprint arXiv:2409.18574},
year={2024},
archivePrefix={arXiv},
eprint={2409.18574},
primaryClass={cs.LG}
}
|
costa2024climate
|
arxiv-662703
|
2409.18575
|
Corpus-informed Retrieval Augmented Generation of Clarifying Questions
|
<|reference_start|>Corpus-informed Retrieval Augmented Generation of Clarifying Questions: This study aims to develop models that generate corpus-informed clarifying questions for web search, in a way that ensures the questions align with the available information in the retrieval corpus. We demonstrate the effectiveness of Retrieval Augmented Language Models (RAG) in this process, emphasising their ability to (i) jointly model the user query and retrieval corpus to pinpoint the uncertainty and ask for clarifications end-to-end and (ii) model more evidence documents, which can be used towards increasing the breadth of the questions asked. However, we observe that in current datasets search intents are largely unsupported by the corpus, which is problematic both for training and evaluation. This causes question generation models to ``hallucinate'', i.e., suggest intents that are not in the corpus, which can have detrimental effects on performance. To address this, we propose dataset augmentation methods that align the ground truth clarifications with the retrieval corpus. Additionally, we explore techniques to enhance the relevance of the evidence pool during inference, but find that identifying ground truth intents within the corpus remains challenging. Our analysis suggests that this challenge is partly due to the bias of current datasets towards clarification taxonomies and calls for data that can support generating corpus-informed clarifications.<|reference_end|>
|
arxiv
|
@article{krasakis2024corpus-informed,
title={Corpus-informed Retrieval Augmented Generation of Clarifying Questions},
author={Antonios Minas Krasakis and Andrew Yates and Evangelos Kanoulas},
journal={arXiv preprint arXiv:2409.18575},
year={2024},
archivePrefix={arXiv},
eprint={2409.18575},
primaryClass={cs.IR}
}
|
krasakis2024corpus-informed
|
arxiv-662704
|
2409.18578
|
An Enhanced Federated Prototype Learning Method under Domain Shift
|
<|reference_start|>An Enhanced Federated Prototype Learning Method under Domain Shift: Federated Learning (FL) allows collaborative machine learning training without sharing private data. Numerous studies have shown that one significant factor affecting the performance of federated learning models is the heterogeneity of data across different clients, especially when the data is sampled from various domains. A recent paper introduces variance-aware dual-level prototype clustering and uses a novel $\alpha$-sparsity prototype loss, which increases intra-class similarity and reduces inter-class similarity. To ensure that the features converge within specific clusters, we introduce an improved algorithm, Federated Prototype Learning with Convergent Clusters, abbreviated as FedPLCC. To increase inter-class distances, we weight each prototype with the size of the cluster it represents. To reduce intra-class distances, considering that prototypes with larger distances might come from different domains, we select only a certain proportion of prototypes for the loss function calculation. Evaluations on the Digit-5, Office-10, and DomainNet datasets show that our method performs better than existing approaches.<|reference_end|>
|
arxiv
|
@article{kuang2024an,
title={An Enhanced Federated Prototype Learning Method under Domain Shift},
author={Liang Kuang and Kuangpu Guo and Jian Liang and Jianguo Zhang},
journal={arXiv preprint arXiv:2409.18578},
year={2024},
archivePrefix={arXiv},
eprint={2409.18578},
primaryClass={cs.LG cs.AI}
}
|
kuang2024an
|
arxiv-662705
|
2409.18581
|
Using Deep Autoregressive Models as Causal Inference Engines
|
<|reference_start|>Using Deep Autoregressive Models as Causal Inference Engines: Existing causal inference (CI) models are limited to primarily handling low-dimensional confounders and singleton actions. We propose an autoregressive (AR) CI framework capable of handling complex confounders and sequential actions common in modern applications. We accomplish this by {\em sequencification}, transforming data from an underlying causal diagram into a sequence of tokens. This approach not only enables training with data generated from any DAG but also extends existing CI capabilities to accommodate estimating several statistical quantities using a {\em single} model. We can directly predict interventional probabilities, simplifying inference and enhancing outcome prediction accuracy. We demonstrate that an AR model adapted for CI is efficient and effective in various complex applications such as navigating mazes, playing chess endgames, and evaluating the impact of certain keywords on paper acceptance rates.<|reference_end|>
|
arxiv
|
@article{im2024using,
title={Using Deep Autoregressive Models as Causal Inference Engines},
author={Daniel Jiwoong Im and Kevin Zhang and Nakul Verma and Kyunghyun Cho},
journal={arXiv preprint arXiv:2409.18581},
year={2024},
archivePrefix={arXiv},
eprint={2409.18581},
primaryClass={cs.LG stat.ML}
}
|
im2024using
|
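The abstract's "sequencification" is only named, not specified; a hypothetical sketch of the idea — flattening samples from a causal DAG into token sequences, with made-up `<DO>`/`<OBS>` markers to distinguish interventions from observations:

```python
def sequencify(sample, topo_order):
    """Flatten one sample from a causal DAG into a token sequence, in a
    fixed topological order so parents always precede children. `sample`
    maps node name -> (kind, value), kind in {"obs", "do"}. The token
    scheme below is illustrative, not the paper's exact vocabulary."""
    tokens = []
    for node in topo_order:
        kind, value = sample[node]
        tokens.append(f"<{node}>")
        tokens.append("<DO>" if kind == "do" else "<OBS>")
        tokens.append(str(value))
    return tokens

# e.g. a confounder Z, an intervened action A, and an outcome Y:
print(sequencify({"Z": ("obs", 1), "A": ("do", 0), "Y": ("obs", 1)},
                 topo_order=["Z", "A", "Y"]))
# ['<Z>', '<OBS>', '1', '<A>', '<DO>', '0', '<Y>', '<OBS>', '1']
```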
arxiv-662706
|
2409.18582
|
Optimistic Games for Combinatorial Bayesian Optimization with Application to Protein Design
|
<|reference_start|>Optimistic Games for Combinatorial Bayesian Optimization with Application to Protein Design: Bayesian optimization (BO) is a powerful framework to optimize black-box expensive-to-evaluate functions via sequential interactions. In several important problems (e.g. drug discovery, circuit design, neural architecture search, etc.), though, such functions are defined over large $\textit{combinatorial and unstructured}$ spaces. This makes existing BO algorithms infeasible due to the intractable maximization of the acquisition function over these domains. To address this issue, we propose $\textbf{GameOpt}$, a novel game-theoretical approach to combinatorial BO. $\textbf{GameOpt}$ establishes a cooperative game between the different optimization variables, and selects points that are game $\textit{equilibria}$ of an upper confidence bound acquisition function. These are stable configurations from which no variable has an incentive to deviate $-$ analogous to local optima in continuous domains. Crucially, this allows us to efficiently break down the complexity of the combinatorial domain into individual decision sets, making $\textbf{GameOpt}$ scalable to large combinatorial spaces. We demonstrate the application of $\textbf{GameOpt}$ to the challenging $\textit{protein design}$ problem and validate its performance on four real-world protein datasets. Each protein can take up to $20^{X}$ possible configurations, where $X$ is the length of a protein, making standard BO methods infeasible. Instead, our approach iteratively selects informative protein configurations and very quickly discovers highly active protein variants compared to other baselines.<|reference_end|>
|
arxiv
|
@article{bal2024optimistic,
title={Optimistic Games for Combinatorial Bayesian Optimization with
Application to Protein Design},
author={Melis Ilayda Bal and Pier Giuseppe Sessa and Mojmir Mutny and Andreas Krause},
journal={arXiv preprint arXiv:2409.18582},
year={2024},
archivePrefix={arXiv},
eprint={2409.18582},
primaryClass={cs.LG cs.AI cs.NE q-bio.BM q-bio.QM}
}
|
bal2024optimistic
|
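A rough sketch of the game-theoretic core: treat each optimization variable as a player that best-responds on a shared upper-confidence-bound acquisition, so a fixed point is an equilibrium from which no single variable benefits by deviating. The `ucb` callable and the plain best-response loop are illustrative stand-ins, not the authors' implementation:

```python
import random

def best_response_equilibrium(ucb, domains, max_rounds=100, seed=0):
    """Coordinate-wise best response on a shared UCB acquisition: each
    variable in turn picks its best value with the others frozen, until
    no variable can improve -- a pure equilibrium of the cooperative game."""
    rng = random.Random(seed)
    x = [rng.choice(d) for d in domains]            # random initial profile
    for _ in range(max_rounds):
        changed = False
        for i, d in enumerate(domains):             # player i best-responds
            best = max(d, key=lambda v: ucb(tuple(x[:i] + [v] + x[i + 1:])))
            if best != x[i]:
                x[i], changed = best, True
        if not changed:                             # no profitable deviation
            return tuple(x)
    return tuple(x)

# Toy acquisition over 4 binary sites; each round costs a *sum* of domain
# sizes rather than their product, which is the point for 20**X proteins.
score = lambda x: sum(x) - 2 * (x[0] ^ x[1])
print(best_response_equilibrium(score, [[0, 1]] * 4))
```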
arxiv-662707
|
2409.18583
|
Hit the Sweet Spot! Span-Level Ensemble for Large Language Models
|
<|reference_start|>Hit the Sweet Spot! Span-Level Ensemble for Large Language Models: Ensembling various LLMs to unlock their complementary potential and leverage their individual strengths is highly valuable. Previous studies typically focus on two main paradigms: sample-level and token-level ensembles. Sample-level ensemble methods either select or blend fully generated outputs, which hinders dynamic correction and enhancement of outputs during the generation process. On the other hand, token-level ensemble methods enable real-time correction through fine-grained ensemble at each generation step. However, the information carried by an individual token is quite limited, leading to suboptimal decisions at each step. To address these issues, we propose SweetSpan, a span-level ensemble method that effectively balances the need for real-time adjustments and the information required for accurate ensemble decisions. Our approach involves two key steps: First, we have each candidate model independently generate candidate spans based on the shared prefix. Second, we calculate perplexity scores to facilitate mutual evaluation among the candidate models and achieve robust span selection by filtering out unfaithful scores. To comprehensively evaluate ensemble methods, we propose a new challenging setting (ensemble models with significant performance gaps) in addition to the standard setting (ensemble the best-performing models) to assess the performance of model ensembles in more realistic scenarios. Experimental results in both standard and challenging settings across various language generation tasks demonstrate the effectiveness, robustness, and versatility of our approach compared with previous ensemble methods.<|reference_end|>
|
arxiv
|
@article{xu2024hit,
title={Hit the Sweet Spot! Span-Level Ensemble for Large Language Models},
author={Yangyifan Xu and Jianghao Chen and Junhong Wu and Jiajun Zhang},
journal={arXiv preprint arXiv:2409.18583},
year={2024},
archivePrefix={arXiv},
eprint={2409.18583},
primaryClass={cs.CL}
}
|
xu2024hit
|
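A schematic of the span-level ensemble step as the abstract describes it — shared prefix, candidate spans from each model, mutual perplexity scoring with outlier filtering — where `generate_span` and `perplexity` are assumed interfaces, not any real API:

```python
import statistics

def ensemble_step(models, prefix, outlier_z=2.0):
    """One span-level ensemble step in the spirit of SweetSpan (a sketch).
    Every model proposes a span for the shared prefix; each span is scored
    by all models' perplexities, with extreme ("unfaithful") scores dropped
    before averaging; the best-scoring span extends the prefix."""
    spans = [m.generate_span(prefix) for m in models]
    best_span, best_score = None, float("inf")
    for span in spans:
        ppls = [m.perplexity(prefix + span) for m in models]
        mu = statistics.mean(ppls)
        sd = statistics.pstdev(ppls) or 1.0
        kept = [p for p in ppls if abs(p - mu) <= outlier_z * sd]
        score = statistics.mean(kept) if kept else mu
        if score < best_score:
            best_span, best_score = span, score
    return prefix + best_span
```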
arxiv-662708
|
2409.18584
|
ChildMandarin: A Comprehensive Mandarin Speech Dataset for Young Children Aged 3-5
|
<|reference_start|>ChildMandarin: A Comprehensive Mandarin Speech Dataset for Young Children Aged 3-5: Automatic speech recognition (ASR) systems have advanced significantly with models like Whisper, Conformer, and self-supervised frameworks such as Wav2vec 2.0 and HuBERT. However, developing robust ASR models for young children's speech remains challenging due to differences in pronunciation, tone, and pace compared to adult speech. In this paper, we introduce a new Mandarin speech dataset focused on children aged 3 to 5, addressing the scarcity of resources in this area. The dataset comprises 41.25 hours of speech with carefully crafted manual transcriptions, collected from 397 speakers across various provinces in China, with balanced gender representation. We provide a comprehensive analysis of speaker demographics, speech duration distribution and geographic coverage. Additionally, we evaluate ASR performance on models trained from scratch, such as Conformer, as well as fine-tuned pre-trained models like HuBERT and Whisper, where fine-tuning demonstrates significant performance improvements. Furthermore, we assess speaker verification (SV) on our dataset, showing that, despite the challenges posed by the unique vocal characteristics of young children, the dataset effectively supports both ASR and SV tasks. This dataset is a valuable contribution to Mandarin child speech research and holds potential for applications in educational technology and child-computer interaction. It will be open-source and freely available for all academic purposes.<|reference_end|>
|
arxiv
|
@article{zhou2024childmandarin:,
title={ChildMandarin: A Comprehensive Mandarin Speech Dataset for Young
Children Aged 3-5},
author={Jiaming Zhou and Shiyao Wang and Shiwan Zhao and Jiabei He and Haoqin
Sun and Hui Wang and Cheng Liu and Aobo Kong and Yujie Guo and Yong Qin},
journal={arXiv preprint arXiv:2409.18584},
year={2024},
archivePrefix={arXiv},
eprint={2409.18584},
primaryClass={cs.SD eess.AS}
}
|
zhou2024childmandarin:
|
arxiv-662709
|
2409.18585
|
Unscented Transform-based Pure Pursuit Path-Tracking Algorithm under Uncertainty
|
<|reference_start|>Unscented Transform-based Pure Pursuit Path-Tracking Algorithm under Uncertainty: Automated driving has become more and more popular due to its potential to eliminate road accidents by taking over driving tasks from humans. One of the remaining challenges is to follow a planned path autonomously, especially when uncertainties in self-localizing or understanding the surroundings can influence the decisions made by autonomous vehicles, such as calculating how much they need to steer to minimize tracking errors. In this paper, a modified geometric pure pursuit path-tracking algorithm is proposed, taking into consideration such uncertainties using the unscented transform. The algorithm is tested through simulations for typical road geometries, such as straight and circular lines.<|reference_end|>
|
arxiv
|
@article{nantabut2024unscented,
title={Unscented Transform-based Pure Pursuit Path-Tracking Algorithm under
Uncertainty},
author={Chinnawut Nantabut},
journal={arXiv preprint arXiv:2409.18585},
year={2024},
archivePrefix={arXiv},
eprint={2409.18585},
primaryClass={cs.RO cs.SY eess.SY}
}
|
nantabut2024unscented
|
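A compact sketch of the paper's idea: instead of steering from the mean pose, push sigma points of the pose distribution through the geometric pure pursuit law and average the commands. The kappa-weighting follows the standard unscented transform; the paper's exact formulation may differ:

```python
import numpy as np

def pure_pursuit_steer(pose, goal, wheelbase, lookahead):
    """Classic geometric pure pursuit: steering angle that drives the
    rear axle at `pose` = [x, y, heading] toward the lookahead point."""
    x, y, th = pose
    alpha = np.arctan2(goal[1] - y, goal[0] - x) - th   # bearing error
    return np.arctan2(2.0 * wheelbase * np.sin(alpha), lookahead)

def ut_pure_pursuit_steer(mean_pose, P, goal, wheelbase, lookahead, kappa=1.0):
    """Unscented-transform variant: propagate 2n+1 sigma points of the
    pose distribution (mean `mean_pose`, covariance `P`) through the
    steering law and return the weighted mean command. Suitable for small
    heading uncertainty; angle wrap-around is ignored in this sketch."""
    n = mean_pose.size                                  # n = 3 for (x, y, th)
    S = np.linalg.cholesky((n + kappa) * P)             # sigma-point spread
    sigmas = [mean_pose] + [mean_pose + s for s in S.T] \
                         + [mean_pose - s for s in S.T]
    weights = [kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n)
    return sum(w * pure_pursuit_steer(p, goal, wheelbase, lookahead)
               for w, p in zip(weights, sigmas))

pose, P = np.array([0.0, 0.0, 0.0]), np.diag([0.1, 0.1, 0.02])
print(ut_pure_pursuit_steer(pose, P, np.array([5.0, 1.0]),
                            wheelbase=2.7, lookahead=5.0))
```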
arxiv-662710
|
2409.18586
|
Analysis of Truncated Singular Value Decomposition for Koopman Operator-Based Lane Change Model
|
<|reference_start|>Analysis of Truncated Singular Value Decomposition for Koopman Operator-Based Lane Change Model: Understanding and modeling complex dynamic systems is crucial for enhancing vehicle performance and safety, especially in the context of autonomous driving. Recently, popular methods such as Koopman operators and their approximators, known as Extended Dynamic Mode Decomposition (EDMD), have emerged for their effectiveness in transforming strongly nonlinear system behavior into linear representations. This allows them to be integrated with conventional linear controllers. To achieve this, Singular Value Decomposition (SVD), specifically truncated SVD, is employed to approximate Koopman operators from extensive datasets efficiently. This study evaluates different basis functions used in EDMD and ranks for truncated SVD for representing lane change behavior models, aiming to balance computational efficiency with information loss. The findings, however, suggest that the technique of truncated SVD does not necessarily achieve substantial reductions in computational training time and results in significant information loss.<|reference_end|>
|
arxiv
|
@article{nantabut2024analysis,
title={Analysis of Truncated Singular Value Decomposition for Koopman
Operator-Based Lane Change Model},
author={Chinnawut Nantabut},
journal={arXiv preprint arXiv:2409.18586},
year={2024},
archivePrefix={arXiv},
eprint={2409.18586},
primaryClass={eess.SY cs.AI cs.RO cs.SY}
}
|
nantabut2024analysis
|
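The EDMD-with-truncated-SVD pipeline the abstract evaluates reduces, in generic form, to a rank-r least-squares fit of a linear operator on lifted snapshots; a minimal sketch, not the paper's lane-change model or basis functions:

```python
import numpy as np

def edmd_truncated(Psi, Psi_next, r):
    """Koopman-operator approximation by EDMD with a rank-r truncated SVD.
    Columns of `Psi` are lifted states psi(x_k); `Psi_next` holds the
    corresponding psi(x_{k+1}). Solves Psi_next ~ K @ Psi in least squares
    through the truncated pseudoinverse, trading training cost against the
    information carried by the discarded singular values."""
    U, s, Vt = np.linalg.svd(Psi, full_matrices=False)
    Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].T
    K = Psi_next @ Vr @ np.diag(1.0 / sr) @ Ur.T   # K = Psi_next @ pinv_r(Psi)
    retained = float(np.sum(s[:r] ** 2) / np.sum(s ** 2))
    return K, retained                             # low `retained` flags loss
```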
arxiv-662711
|
2409.18589
|
Towards Event-Triggered NMPC for Efficient 6G Communications: Experimental Results and Open Problems
|
<|reference_start|>Towards Event-Triggered NMPC for Efficient 6G Communications: Experimental Results and Open Problems: Networked control systems enable real-time control and coordination of distributed systems, leveraging the low latency, high reliability, and massive connectivity offered by 5G and future 6G networks. Applications include autonomous vehicles, robotics, industrial automation, and smart grids. Despite networked control algorithms admitting nominal stability guarantees even in the presence of delays and packet dropouts, their practical performance still heavily depends on the specific characteristics and conditions of the underlying network. To achieve the desired performance while efficiently using communication resources, co-design of control and communication is pivotal. Although periodic schemes, where communication instances are fixed, can provide reliable control performance, unnecessary transmissions, when updates are not needed, result in inefficient usage of network resources. In this paper, we investigate the potential for co-design of model predictive control and network communication. To this end, we design and implement an event-triggered nonlinear model predictive controller for stabilizing a Furuta pendulum communicating over a tailored open radio access network 6G research platform. We analyze the control performance as well as network utilization under varying channel conditions and event-triggering criteria. Our results show that the event-triggered control scheme achieves similar performance to periodic control with reduced communication demand.<|reference_end|>
|
arxiv
|
@article{püttschneider2024towards,
title={Towards Event-Triggered NMPC for Efficient 6G Communications:
Experimental Results and Open Problems},
author={Jens P\"uttschneider and Julian Golembiewski and Niklas A. Wagner and
Christian Wietfeld and Timm Faulwasser},
journal={arXiv preprint arXiv:2409.18589},
year={2024},
archivePrefix={arXiv},
eprint={2409.18589},
primaryClass={eess.SY cs.NI cs.SY math.OC}
}
|
püttschneider2024towards
|
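As a toy illustration of what "event-triggered" means operationally — the paper evaluates several triggering criteria under varying channel conditions, and this simple error-norm rule is just one common choice, not necessarily theirs:

```python
def should_transmit(x_measured, x_predicted, threshold):
    """Transmit a freshly optimized NMPC input sequence only when the
    measured state has drifted from the last transmitted prediction by
    more than `threshold`; otherwise keep applying the buffered inputs."""
    err = sum((a - b) ** 2 for a, b in zip(x_measured, x_predicted)) ** 0.5
    return err > threshold

# e.g. pendulum state (angle, velocity) vs. the NMPC's prediction:
print(should_transmit((0.10, 0.0), (0.02, 0.0), threshold=0.05))  # True
```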
arxiv-662712
|
2409.18590
|
Accessibility Issues in Ad-Driven Web Applications
|
<|reference_start|>Accessibility Issues in Ad-Driven Web Applications: Website accessibility is essential for inclusiveness and regulatory compliance. Although third-party advertisements (ads) are a vital revenue source for free web services, they introduce significant accessibility challenges. Leasing a website's space to ad-serving technologies like DoubleClick results in developers losing control over ad content accessibility. Even on highly accessible websites, third-party ads can undermine adherence to Web Content Accessibility Guidelines (WCAG). We conduct the first large-scale investigation of 430K website elements, including nearly 100K ad elements, to understand the accessibility of ads on websites. We seek to understand the prevalence of inaccessible ads and their overall impact on the accessibility of websites. Our findings show that 67% of websites experience increased accessibility violations due to ads, with common violations including Focus Visible and On Input. Popular ad-serving technologies like Taboola, DoubleClick, and RevContent often serve ads that fail to comply with WCAG standards. Even when ads are WCAG compliant, 27% of them have alternative text in ad images that misrepresents information, potentially deceiving users. Manual inspection of a sample of these misleading ads revealed that user-identifiable data is collected on 94% of websites through interactions, such as hovering or pressing enter. Since users with disabilities often rely on tools like screen readers that require hover events to access website content, they have no choice but to compromise their privacy in order to navigate website ads. Based on our findings, we further dissect the root cause of these violations and provide design guidelines to both website developers and ad-serving technologies to achieve WCAG-compliant ad integration.<|reference_end|>
|
arxiv
|
@article{amjad2024accessibility,
title={Accessibility Issues in Ad-Driven Web Applications},
author={Abdul Haddi Amjad and Muhammad Danish and Bless Jah and Muhammad Ali Gulzar},
journal={arXiv preprint arXiv:2409.18590},
year={2024},
archivePrefix={arXiv},
eprint={2409.18590},
primaryClass={cs.SE}
}
|
amjad2024accessibility
|
arxiv-662713
|
2409.18591
|
Off to new Shores: A Dataset & Benchmark for (near-)coastal Flood Inundation Forecasting
|
<|reference_start|>Off to new Shores: A Dataset & Benchmark for (near-)coastal Flood Inundation Forecasting: Floods are among the most common and devastating natural hazards, imposing immense costs on our society and economy due to their disastrous consequences. Recent progress in weather prediction and spaceborne flood mapping demonstrated the feasibility of anticipating extreme events and reliably detecting their catastrophic effects afterwards. However, these efforts are rarely linked to one another and there is a critical lack of datasets and benchmarks to enable the direct forecasting of flood extent. To resolve this issue, we curate a novel dataset enabling a timely prediction of flood extent. Furthermore, we provide a representative evaluation of state-of-the-art methods, structured into two benchmark tracks for forecasting flood inundation maps i) in general and ii) focused on coastal regions. Altogether, our dataset and benchmark provide a comprehensive platform for evaluating flood forecasts, enabling future solutions for this critical challenge. Data, code & models are shared at https://github.com/Multihuntr/GFF under a CC0 license.<|reference_end|>
|
arxiv
|
@article{victor2024off,
title={Off to new Shores: A Dataset & Benchmark for (near-)coastal Flood
Inundation Forecasting},
author={Brandon Victor and Mathilde Letard and Peter Naylor and Karim Douch
and Nicolas Long\'ep\'e and Zhen He and Patrick Ebel},
journal={arXiv preprint arXiv:2409.18591},
year={2024},
archivePrefix={arXiv},
eprint={2409.18591},
primaryClass={cs.CV}
}
|
victor2024off
|
arxiv-662714
|
2409.18592
|
From One to the Power of Many: Augmentations for Invariance to Multi-LiDAR Perception from Single-Sensor Datasets
|
<|reference_start|>From One to the Power of Many: Augmentations for Invariance to Multi-LiDAR Perception from Single-Sensor Datasets: Recently, LiDAR perception methods for autonomous vehicles, powered by deep neural networks have experienced steep growth in performance on classic benchmarks, such as nuScenes and SemanticKITTI. However, there are still large gaps in performance when deploying models trained on such single-sensor setups to modern multi-sensor vehicles. In this work, we investigate if a lack of invariance may be responsible for these performance gaps, and propose some initial solutions in the form of application-specific data augmentations, which can facilitate better transfer to multi-sensor LiDAR setups. We provide experimental evidence that our proposed augmentations improve generalization across LiDAR sensor setups, and investigate how these augmentations affect the models' invariance properties on simulations of different LiDAR sensor setups.<|reference_end|>
|
arxiv
|
@article{uecker2024from,
title={From One to the Power of Many: Augmentations for Invariance to
Multi-LiDAR Perception from Single-Sensor Datasets},
author={Marc Uecker and J. Marius Z\"ollner},
journal={arXiv preprint arXiv:2409.18592},
year={2024},
archivePrefix={arXiv},
eprint={2409.18592},
primaryClass={cs.CV cs.RO}
}
|
uecker2024from
|
arxiv-662715
|
2409.18594
|
"Oh LLM, I'm Asking Thee, Please Give Me a Decision Tree": Zero-Shot Decision Tree Induction and Embedding with Large Language Models
|
<|reference_start|>"Oh LLM, I'm Asking Thee, Please Give Me a Decision Tree": Zero-Shot Decision Tree Induction and Embedding with Large Language Models: Large language models (LLMs) provide powerful means to leverage prior knowledge for predictive modeling when data is limited. In this work, we demonstrate how LLMs can use their compressed world knowledge to generate intrinsically interpretable machine learning models, i.e., decision trees, without any training data. We find that these zero-shot decision trees can surpass data-driven trees on some small-sized tabular datasets and that embeddings derived from these trees perform on par with data-driven tree-based embeddings on average. Our knowledge-driven decision tree induction and embedding approaches therefore serve as strong new baselines for data-driven machine learning methods in the low-data regime.<|reference_end|>
|
arxiv
|
@article{knauer2024oh,
title={"Oh LLM, I'm Asking Thee, Please Give Me a Decision Tree": Zero-Shot
Decision Tree Induction and Embedding with Large Language Models},
author={Ricardo Knauer and Mario Koddenbrock and Raphael Wallsberger and
Nicholas M. Brisson and Georg N. Duda and Deborah Falla and David W. Evans
and Erik Rodner},
journal={arXiv preprint arXiv:2409.18594},
year={2024},
archivePrefix={arXiv},
eprint={2409.18594},
primaryClass={cs.AI cs.CL cs.LG}
}
|
knauer2024"oh
|
arxiv-662716
|
2409.18596
|
ASAG2024: A Combined Benchmark for Short Answer Grading
|
<|reference_start|>ASAG2024: A Combined Benchmark for Short Answer Grading: Open-ended questions test a more thorough understanding than closed-ended questions and are often a preferred assessment method. However, open-ended questions are tedious to grade and subject to personal bias. Therefore, there have been efforts to speed up the grading process through automation. Short Answer Grading (SAG) systems aim to automatically score students' answers. Despite growth in SAG methods and capabilities, there exists no comprehensive short-answer grading benchmark across different subjects, grading scales, and distributions. Thus, it is hard to assess the capabilities of current automated grading methods in terms of their generalizability. In this preliminary work, we introduce the combined ASAG2024 benchmark to facilitate the comparison of automated grading systems, combining seven commonly used short-answer grading datasets into a common structure and grading scale. For our benchmark, we evaluate a set of recent SAG methods, revealing that while LLM-based approaches reach new high scores, they are still far from reaching human performance. This opens up avenues for future research on human-machine SAG systems.<|reference_end|>
|
arxiv
|
@article{meyer2024asag2024:,
title={ASAG2024: A Combined Benchmark for Short Answer Grading},
author={G\'er\^ome Meyer and Philip Breuer and Jonathan F\"urst},
journal={arXiv preprint arXiv:2409.18596},
year={2024},
doi={10.1145/3649409.3691083},
archivePrefix={arXiv},
eprint={2409.18596},
primaryClass={cs.AI cs.CL cs.LG}
}
|
meyer2024asag2024:
|
arxiv-662717
|
2409.18597
|
TemporalPaD: a reinforcement-learning framework for temporal feature representation and dimension reduction
|
<|reference_start|>TemporalPaD: a reinforcement-learning framework for temporal feature representation and dimension reduction: Recent advancements in feature representation and dimension reduction have highlighted their crucial role in enhancing the efficacy of predictive modeling. This work introduces TemporalPaD, a novel end-to-end deep learning framework designed for temporal pattern datasets. TemporalPaD integrates reinforcement learning (RL) with neural networks to achieve concurrent feature representation and feature reduction. The framework consists of three cooperative modules: a Policy Module, a Representation Module, and a Classification Module, structured based on the Actor-Critic (AC) framework. The Policy Module, responsible for dimensionality reduction through RL, functions as the actor, while the Representation Module for feature extraction and the Classification Module collectively serve as the critic. We comprehensively evaluate TemporalPaD using 29 UCI datasets, a well-known benchmark for validating feature reduction algorithms, through 10 independent tests and 10-fold cross-validation. Additionally, given that TemporalPaD is specifically designed for time series data, we apply it to a real-world DNA classification problem involving enhancer category and enhancer strength. The results demonstrate that TemporalPaD is an efficient and effective framework for achieving feature reduction, applicable to both structured data and sequence datasets. The source code of the proposed TemporalPaD is freely available as supplementary material to this article and at http://www.healthinformaticslab.org/supp/.<|reference_end|>
|
arxiv
|
@article{mu2024temporalpad:,
title={TemporalPaD: a reinforcement-learning framework for temporal feature
representation and dimension reduction},
author={Xuechen Mu and Zhenyu Huang and Kewei Li and Haotian Zhang and Xiuli
Wang and Yusi Fan and Kai Zhang and Fengfeng Zhou},
journal={arXiv preprint arXiv:2409.18597},
year={2024},
archivePrefix={arXiv},
eprint={2409.18597},
primaryClass={cs.LG cs.AI q-bio.GN}
}
|
mu2024temporalpad:
|
arxiv-662718
|
2409.18601
|
Privacy-Preserving Quantum Annealing for Quadratic Unconstrained Binary Optimization (QUBO) Problems
|
<|reference_start|>Privacy-Preserving Quantum Annealing for Quadratic Unconstrained Binary Optimization (QUBO) Problems: Quantum annealers offer a promising approach to solve Quadratic Unconstrained Binary Optimization (QUBO) problems, which have a wide range of applications. However, when a user submits its QUBO problem to a third-party quantum annealer, the problem itself may disclose the user's private information to the quantum annealing service provider. To mitigate this risk, we introduce a privacy-preserving QUBO framework and propose a novel solution method. Our approach employs a combination of digit-wise splitting and matrix permutation to obfuscate the QUBO problem's model matrix $Q$, effectively concealing the matrix elements. In addition, based on the solution to the obfuscated version of the QUBO problem, we can reconstruct the solution to the original problem with high accuracy. Theoretical analysis and empirical tests confirm the efficacy and efficiency of our proposed technique, demonstrating its potential for preserving user privacy in quantum annealing services.<|reference_end|>
|
arxiv
|
@article{xie2024privacy-preserving,
title={Privacy-Preserving Quantum Annealing for Quadratic Unconstrained Binary
Optimization (QUBO) Problems},
author={Moyang Xie and Yuan Zhang and Sheng Zhong and Qun Li},
journal={arXiv preprint arXiv:2409.18601},
year={2024},
archivePrefix={arXiv},
eprint={2409.18601},
primaryClass={cs.CR quant-ph}
}
|
xie2024privacy-preserving
|
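One ingredient of the scheme, permutation-based obfuscation, is easy to sketch: relabeling variables leaves the optimum value untouched, so the user can keep the permutation secret and undo it on the returned solution. The digit-wise splitting step is omitted here:

```python
import numpy as np

def permute_qubo(Q, rng):
    """Obfuscate a QUBO matrix with a secret simultaneous row/column
    permutation. Relabeling variables preserves the objective:
    x^T Q x == z^T Q' z  with  Q'[i, j] = Q[perm[i], perm[j]], z[i] = x[perm[i]]."""
    perm = rng.permutation(Q.shape[0])        # secret key, kept by the user
    return Q[np.ix_(perm, perm)], perm

def recover_solution(z, perm):
    """Map the annealer's solution of the obfuscated problem back to the
    original variable order: x[perm[i]] = z[i]."""
    x = np.empty_like(z)
    x[perm] = z
    return x

rng = np.random.default_rng(0)
Q = np.array([[-1.0, 2.0], [2.0, -1.0]])
Q_obf, perm = permute_qubo(Q, rng)            # Q_obf is what the provider sees
# ... solve Q_obf remotely, receive z, then: x = recover_solution(z, perm)
```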
arxiv-662719
|
2409.18602
|
Do LLMs suffer from Multi-Party Hangover? A Diagnostic Approach to Addressee Recognition and Response Selection in Conversations
|
<|reference_start|>Do LLMs suffer from Multi-Party Hangover? A Diagnostic Approach to Addressee Recognition and Response Selection in Conversations: Assessing the performance of systems to classify Multi-Party Conversations (MPC) is challenging due to the interconnection between linguistic and structural characteristics of conversations. Conventional evaluation methods often overlook variances in model behavior across different levels of structural complexity on interaction graphs. In this work, we propose a methodological pipeline to investigate model performance across specific structural attributes of conversations. As a proof of concept we focus on Response Selection and Addressee Recognition tasks, to diagnose model weaknesses. To this end, we extract representative diagnostic subdatasets with a fixed number of users and a good structural variety from a large and open corpus of online MPCs. We further frame our work in terms of data minimization, avoiding the use of original usernames to preserve privacy, and propose alternatives to using original text messages. Results show that response selection relies more on the textual content of conversations, while addressee recognition requires capturing their structural dimension. Using an LLM in a zero-shot setting, we further highlight how sensitivity to prompt variations is task-dependent.<|reference_end|>
|
arxiv
|
@article{penzo2024do,
title={Do LLMs suffer from Multi-Party Hangover? A Diagnostic Approach to
Addressee Recognition and Response Selection in Conversations},
author={Nicol\`o Penzo and Maryam Sajedinia and Bruno Lepri and Sara Tonelli
and Marco Guerini},
journal={arXiv preprint arXiv:2409.18602},
year={2024},
archivePrefix={arXiv},
eprint={2409.18602},
primaryClass={cs.CL}
}
|
penzo2024do
|
arxiv-662720
|
2409.18606
|
Error analysis of an Algebraic Flux Correction Scheme for a nonlinear Scalar Conservation Law Using SSP-RK2
|
<|reference_start|>Error analysis of an Algebraic Flux Correction Scheme for a nonlinear Scalar Conservation Law Using SSP-RK2: We consider a scalar conservation law with linear and nonlinear flux function on a bounded domain $\Omega\subset\mathbb{R}^2$ with Lipschitz boundary $\partial\Omega$. We discretize the spatial variable with the standard finite element method, where we use a local extremum diminishing flux limiter which is linearity preserving. For temporal discretization, we use the second-order explicit strong stability preserving Runge--Kutta method. It is known that the resulting fully-discrete scheme satisfies the discrete maximum principle. Under sufficient regularity of the weak solution and the CFL condition $k = \mathcal{O}(h^2)$, we derive error estimates for the algebraic flux correction scheme in the $L^2$-norm in space and the $\ell^\infty$-norm in time. We also present numerical experiments that validate the temporal order of convergence of the fully-discrete scheme proved in the theoretical analysis.<|reference_end|>
|
arxiv
|
@article{pervolianakis2024error,
title={Error analysis of an Algebraic Flux Correction Scheme for a nonlinear
Scalar Conservation Law Using SSP-RK2},
author={Christos Pervolianakis},
journal={arXiv preprint arXiv:2409.18606},
year={2024},
archivePrefix={arXiv},
eprint={2409.18606},
primaryClass={math.NA cs.NA}
}
|
pervolianakis2024error
|
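For reference, the SSP-RK2 scheme named in the title is the standard two-stage Heun-type method; writing the spatially discretized system as $u' = L(u)$ with time step $k$:

```latex
u^{(1)} = u^n + k\,L(u^n), \qquad
u^{n+1} = \tfrac{1}{2}\,u^n + \tfrac{1}{2}\left(u^{(1)} + k\,L\!\left(u^{(1)}\right)\right).
```

Each stage is a forward-Euler step combined convexly, which is why bound-preserving properties such as the discrete maximum principle carry over under a suitable CFL restriction (here $k = \mathcal{O}(h^2)$).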
arxiv-662721
|
2409.18611
|
Differentially Private Non Parametric Copulas: Generating synthetic data with non parametric copulas under privacy guarantees
|
<|reference_start|>Differentially Private Non Parametric Copulas: Generating synthetic data with non parametric copulas under privacy guarantees: Creation of synthetic data models has represented a significant advancement across diverse scientific fields, but this technology also brings important privacy considerations for users. This work focuses on enhancing a non-parametric copula-based synthetic data generation model, DPNPC, by incorporating Differential Privacy through an Enhanced Fourier Perturbation method. The model generates synthetic data for mixed tabular databases while preserving privacy. We compare DPNPC with three other models (PrivBayes, DP-Copula, and DP-Histogram) across three public datasets, evaluating privacy, utility, and execution time. DPNPC outperforms others in modeling multivariate dependencies, maintaining privacy for small $\epsilon$ values, and reducing training times. However, limitations include the need to assess the model's performance with different encoding methods and consider additional privacy attacks. Future research should address these areas to enhance privacy-preserving synthetic data generation.<|reference_end|>
|
arxiv
|
@article{osorio-marulanda2024differentially,
title={Differentially Private Non Parametric Copulas: Generating synthetic data
with non parametric copulas under privacy guarantees},
author={Pablo A. Osorio-Marulanda and John Esteban Castro Ramirez and Mikel
Hern\'andez Jim\'enez and Nicolas Moreno Reyes and Gorka Epelde Unanue},
journal={arXiv preprint arXiv:2409.18611},
year={2024},
archivePrefix={arXiv},
eprint={2409.18611},
primaryClass={cs.LG cs.DB}
}
|
osorio-marulanda2024differentially
|
arxiv-662722
|
2409.18612
|
Libros en abierto de las editoriales universitarias espa\~nolas
|
<|reference_start|>Libros en abierto de las editoriales universitarias espa\~nolas: This paper analyses the set of open-access scientific publications other than journals (monographs, conference proceedings, teaching materials and grey literature) published by Spanish public universities, studying their volume, documentary typology, level of description and open access policies, with the aim of measuring their degree of incorporation of, and compliance with, the principles of Open Science. An exhaustive review of the material made openly available by these publishers has been carried out, which allowed a diagnosis of their level of open access publishing. Grey literature is the most common documentary type in these publishers' open output, followed by the monograph; such open publication does not reach even 5% of average editorial production. The results allow us to conclude that academic publishing, and more specifically academic books in open access, still has a very reduced presence within the editorial production of these institutions.<|reference_end|>
|
arxiv
|
@article{lopez-carreño2024libros,
title={Libros en abierto de las editoriales universitarias espa\~nolas},
author={Rosana Lopez-Carre\~no and Angel-Maria Delgado Vazquez and
Francisco-Javier Martinez-Mendez},
journal={El Profesional de la informacion, v. 30, n. 1, 2021},
year={2024},
doi={10.3145/epi.2021.ene.16},
archivePrefix={arXiv},
eprint={2409.18612},
primaryClass={cs.DL}
}
|
lopez-carreño2024libros
|
arxiv-662723
|
2409.18614
|
Metasurface-generated large and arbitrary analog convolution kernels for accelerated machine vision
|
<|reference_start|>Metasurface-generated large and arbitrary analog convolution kernels for accelerated machine vision: In the rapidly evolving field of artificial intelligence, convolutional neural networks are essential for tackling complex challenges such as machine vision and medical diagnosis. Recently, to address the challenges in processing speed and power consumption of conventional digital convolution operations, many optical components have been suggested to replace the digital convolution layer in the neural network, accelerating various machine vision tasks. Nonetheless, the analog nature of the optical convolution kernel has not been fully explored. Here, we develop a spatial frequency domain training method to create arbitrarily shaped analog convolution kernels using an optical metasurface as the convolution layer, with its receptive field largely surpassing digital convolution kernels. By employing spatial multiplexing, the multiple parallel convolution kernels with both positive and negative weights are generated under the incoherent illumination condition. We experimentally demonstrate a 98.59% classification accuracy on the MNIST dataset, with simulations showing 92.63% and 68.67% accuracy on the Fashion-MNIST and CIFAR-10 datasets with additional digital layers. This work underscores the unique advantage of analog optical convolution, offering a promising avenue to accelerate machine vision tasks, especially in edge devices.<|reference_end|>
|
arxiv
|
@article{liang2024metasurface-generated,
title={Metasurface-generated large and arbitrary analog convolution kernels for
accelerated machine vision},
author={Ruiqi Liang and Shuai Wang and Yiying Dong and Liu Li and Ying Kuang
and Bohan Zhang and Yuanmu Yang},
journal={arXiv preprint arXiv:2409.18614},
year={2024},
archivePrefix={arXiv},
eprint={2409.18614},
primaryClass={physics.optics cs.CV}
}
|
liang2024metasurface-generated
|
arxiv-662724
|
2409.18616
|
Enhanced Drug Delivery via Localization-Enabled Relaying in Molecular Communication Nanonetworks
|
<|reference_start|>Enhanced Drug Delivery via Localization-Enabled Relaying in Molecular Communication Nanonetworks: Intra-body nanonetworks hold promise for advancing targeted drug delivery (TDD) systems through molecular communications (MC). In the baseline MC-TDD system, drug-loaded nanomachines (DgNs) are positioned near the infected tissues to deliver drug molecules directly. To mitigate the decline in drug delivery efficiency caused by diffusion, we propose an enhanced MC-TDD system with a relay network. This network employs a novel localization-enabled relaying mechanism, where a nano-controller broadcasts a localization signal. DgNs then measure the received signal strength against thresholds to determine their clusters relative to the infected tissue. Additionally, our study considers the effect of multiple absorbing DgNs on the channel impulse response (CIR), a factor overlooked in previous works. Our approach improves drug delivery efficiency by $17\%$ compared to the baseline system. Importantly, we find that optimizing CIR is crucial for enhancing drug delivery efficiency. These findings pave the way for further research into optimizing CIR-based relay selection, as well as investigating the impact of factors such as drug molecule lifespan, obstruction probabilities, and flow dynamics.<|reference_end|>
|
arxiv
|
@article{shitiri2024enhanced,
title={Enhanced Drug Delivery via Localization-Enabled Relaying in Molecular
Communication Nanonetworks},
author={Ethungshan Shitiri and Akarsh Yadav and Sergi Abadal and Eduard
Alarc\'on and Ho-Shin Cho},
journal={arXiv preprint arXiv:2409.18616},
year={2024},
archivePrefix={arXiv},
eprint={2409.18616},
primaryClass={cs.ET}
}
|
shitiri2024enhanced
|
arxiv-662725
|
2409.18618
|
Model-based Preference Optimization in Abstractive Summarization without Human Feedback
|
<|reference_start|>Model-based Preference Optimization in Abstractive Summarization without Human Feedback: In abstractive summarization, the challenge of producing concise and accurate summaries arises from the vast amount of information contained in the source document. Consequently, although Large Language Models (LLMs) can generate fluent text, they often introduce inaccuracies by hallucinating content not found in the original source. While supervised fine-tuning methods that maximize likelihood contribute to this issue, they do not consistently enhance the faithfulness of the summaries. Preference-based optimization methods, such as Direct Preference Optimization (DPO), can further refine the model to align with human preferences. However, these methods still heavily depend on costly human feedback. In this work, we introduce a novel and straightforward approach called Model-based Preference Optimization (MPO) to fine-tune LLMs for improved summarization abilities without any human feedback. By leveraging the model's inherent summarization capabilities, we create a preference dataset that is fully generated by the model using different decoding strategies. Our experiments on standard summarization datasets and various metrics demonstrate that our proposed MPO significantly enhances the quality of generated summaries without relying on human feedback.<|reference_end|>
|
arxiv
|
@article{choi2024model-based,
title={Model-based Preference Optimization in Abstractive Summarization without
Human Feedback},
author={Jaepill Choi and Kyubyung Chae and Jiwoo Song and Yohan Jo and Taesup Kim},
journal={arXiv preprint arXiv:2409.18618},
year={2024},
archivePrefix={arXiv},
eprint={2409.18618},
primaryClass={cs.CL cs.AI}
}
|
choi2024model-based
|
arxiv-662726
|
2409.18620
|
Toward Greener Matrix Operations by Lossless Compressed Formats
|
<|reference_start|>Toward Greener Matrix Operations by Lossless Compressed Formats: Sparse matrix-vector multiplication (SpMV) is a fundamental operation in machine learning, scientific computing, and graph algorithms. In this paper, we investigate the space, time, and energy efficiency of SpMV using various compressed formats for large sparse matrices, focusing specifically on Boolean matrices and real-valued vectors. Through extensive analysis and experiments conducted on server and edge devices, we found that different matrix compression formats offer distinct trade-offs among space usage, execution time, and energy consumption. Notably, by employing the appropriate compressed format, we can reduce energy consumption by an order of magnitude on both server and single-board computers. Furthermore, our experiments indicate that while data parallelism can enhance execution speed and energy efficiency, achieving simultaneous time and energy efficiency presents partially distinct challenges. Specifically, we show that for certain compression schemes, the optimal degree of parallelism for time does not align with that for energy, thereby challenging prevailing assumptions about a straightforward linear correlation between execution time and energy consumption. Our results have significant implications for software engineers in all domains where SpMV operations are prevalent. They also suggest that similar studies exploring the trade-offs between time, space, and energy for other compressed data structures can substantially contribute to designing more energy-efficient software components.<|reference_end|>
|
arxiv
|
@article{tosoni2024toward,
title={Toward Greener Matrix Operations by Lossless Compressed Formats},
author={Francesco Tosoni and Philip Bille and Valerio Brunacci and Alessio De
Angelis and Paolo Ferragina and Giovanni Manzini},
journal={arXiv preprint arXiv:2409.18620},
year={2024},
archivePrefix={arXiv},
eprint={2409.18620},
primaryClass={cs.DS cs.PF}
}
|
tosoni2024toward
|
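For Boolean matrices, the value array of a standard CSR representation is redundant — every stored entry is 1 — so SpMV needs only the index structure. A minimal sketch of that baseline (the paper benchmarks more sophisticated lossless-compressed formats on top of this idea):

```python
def bool_csr_spmv(row_ptr, col_idx, x):
    """y = A @ x for a Boolean sparse matrix in CSR form. Since all stored
    entries are 1, only the index arrays are kept (no value array), which
    already trims the memory traffic relative to a generic CSR.
    `row_ptr[i]:row_ptr[i+1]` delimits row i's column indices in `col_idx`."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += x[col_idx[k]]          # stored value is implicitly 1
        y[i] = s
    return y

# 2x3 Boolean matrix [[1,0,1],[0,1,0]] times x:
print(bool_csr_spmv([0, 2, 3], [0, 2, 1], [1.0, 2.0, 3.0]))  # [4.0, 2.0]
```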
arxiv-662727
|
2409.18621
|
A New Bound on the Cumulant Generating Function of Dirichlet Processes
|
<|reference_start|>A New Bound on the Cumulant Generating Function of Dirichlet Processes: In this paper, we introduce a novel approach for bounding the cumulant generating function (CGF) of a Dirichlet process (DP) $X \sim \text{DP}(\alpha \nu_0)$, using superadditivity. In particular, our key technical contribution is the demonstration of the superadditivity of $\alpha \mapsto \log \mathbb{E}_{X \sim \text{DP}(\alpha \nu_0)}[\exp( \mathbb{E}_X[\alpha f])]$, where $\mathbb{E}_X[f] = \int f dX$. This result, combined with Fekete's lemma and Varadhan's integral lemma, converts the known asymptotic large deviation principle into a practical upper bound on the CGF $ \log\mathbb{E}_{X\sim \text{DP}(\alpha\nu_0)}{\exp(\mathbb{E}_{X}{[f]})} $ for any $\alpha > 0$. The bound is given by the convex conjugate of the scaled reversed Kullback-Leibler divergence $\alpha\mathrm{KL}(\nu_0\Vert \cdot)$. This new bound provides particularly effective confidence regions for sums of independent DPs, making it applicable across various fields.<|reference_end|>
|
arxiv
|
@article{perrault2024a,
title={A New Bound on the Cumulant Generating Function of Dirichlet Processes},
author={Pierre Perrault and Denis Belomestny and Pierre M\'enard and \'Eric
Moulines and Alexey Naumov and Daniil Tiapkin and Michal Valko},
journal={arXiv preprint arXiv:2409.18621},
year={2024},
archivePrefix={arXiv},
eprint={2409.18621},
primaryClass={math.PR cs.IT math.IT}
}
|
perrault2024a
|
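Written out, the bound described in the abstract — the convex conjugate of the scaled reversed KL divergence, under the usual variational duality, with the supremum over probability measures $\nu$ (those with $\mathrm{KL}(\nu_0 \Vert \nu) < \infty$):

```latex
\log \mathbb{E}_{X \sim \mathrm{DP}(\alpha\nu_0)}\!\left[\exp\!\left(\mathbb{E}_X[f]\right)\right]
\;\le\; \sup_{\nu}\left\{ \mathbb{E}_{\nu}[f] - \alpha\,\mathrm{KL}(\nu_0 \,\Vert\, \nu) \right\}.
```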
arxiv-662728
|
2409.18622
|
Audio-Based Linguistic Feature Extraction for Enhancing Multi-lingual and Low-Resource Text-to-Speech
|
<|reference_start|>Audio-Based Linguistic Feature Extraction for Enhancing Multi-lingual and Low-Resource Text-to-Speech: The difficulty of acquiring abundant, high-quality data, especially in multi-lingual contexts, has sparked interest in addressing low-resource scenarios. Moreover, the current literature relies on fixed expressions from language IDs, which results in the inadequate learning of language representations, and the failure to generate speech in unseen languages. To address these challenges, we propose a novel method that directly extracts linguistic features from audio input while effectively filtering out miscellaneous acoustic information including speaker-specific attributes like timbre. Subjective and objective evaluations affirm the effectiveness of our approach for multi-lingual text-to-speech, and highlight its superiority in low-resource transfer learning for previously unseen languages.<|reference_end|>
|
arxiv
|
@article{kim2024audio-based,
title={Audio-Based Linguistic Feature Extraction for Enhancing Multi-lingual
and Low-Resource Text-to-Speech},
author={Youngjae Kim and Yejin Jeon and Gary Geunbae Lee},
journal={arXiv preprint arXiv:2409.18622},
year={2024},
archivePrefix={arXiv},
eprint={2409.18622},
primaryClass={cs.SD eess.AS}
}
|
kim2024audio-based
|
arxiv-662729
|
2409.18624
|
Unsupervised Cognition
|
<|reference_start|>Unsupervised Cognition: Unsupervised learning methods have a soft inspiration in cognition models. To this day, the most successful unsupervised learning methods revolve around clustering samples in a mathematical space. In this paper we propose a state-of-the-art primitive-based unsupervised learning approach for decision-making inspired by novel cognition models. This representation-centric approach models the input space constructively as a distributed hierarchical structure in an input-agnostic way. We compared our approach with current state-of-the-art in unsupervised learning classification, and with current state-of-the-art in cancer type classification. We show how our proposal outperforms previous state-of-the-art. We also evaluate some cognition-like properties of our proposal where it not only outperforms the compared algorithms (even supervised learning ones), but it also shows a different, more cognition-like, behaviour.<|reference_end|>
|
arxiv
|
@article{ibias2024unsupervised,
title={Unsupervised Cognition},
author={Alfredo Ibias and Hector Antona and Guillem Ramirez-Miranda and Enric
Guinovart and Eduard Alarcon},
journal={arXiv preprint arXiv:2409.18624},
year={2024},
archivePrefix={arXiv},
eprint={2409.18624},
primaryClass={cs.AI cs.LG}
}
|
ibias2024unsupervised
|
arxiv-662730
|
2409.18626
|
Refutation of Spectral Graph Theory Conjectures with Search Algorithms
|
<|reference_start|>Refutation of Spectral Graph Theory Conjectures with Search Algorithms: We are interested in the automatic refutation of spectral graph theory conjectures. Most existing works address this problem either with the exhaustive generation of graphs with a limited size or with deep reinforcement learning. Exhaustive generation is limited by the size of the generated graphs and deep reinforcement learning takes hours or days to refute a conjecture. We propose to use search algorithms to address these shortcomings to find potentially large counter-examples to spectral graph theory conjectures in seconds. We apply a wide range of search algorithms to a selection of conjectures from Graffiti. Out of 13 already refuted conjectures from Graffiti, our algorithms are able to refute 12 in seconds. We also refute conjecture 197 from Graffiti which was open until now.<|reference_end|>
|
arxiv
|
@article{roucairol2024refutation,
title={Refutation of Spectral Graph Theory Conjectures with Search Algorithms},
author={Milo Roucairol and Tristan Cazenave},
journal={arXiv preprint arXiv:2409.18626},
year={2024},
archivePrefix={arXiv},
eprint={2409.18626},
primaryClass={cs.AI}
}
|
roucairol2024refutation
|
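The refutation loop itself is simple to sketch: propose graphs, evaluate the conjectured spectral inequality, stop at the first violation. Plain random sampling below is a stand-in for the paper's search algorithms, and the "conjecture" is a deliberately false toy claim, not one from Graffiti:

```python
import networkx as nx
import numpy as np

def spectral_radius(G):
    # Largest absolute adjacency eigenvalue of an undirected graph.
    return float(max(abs(np.linalg.eigvalsh(nx.to_numpy_array(G)))))

def random_search_refute(conjecture, n_nodes=12, iters=5000, seed=0):
    """Random search for a counter-example: `conjecture` returns True where
    the claimed inequality holds, so any connected graph on which it
    returns False refutes the claim."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        G = nx.gnp_random_graph(n_nodes, rng.uniform(0.1, 0.6),
                                seed=int(rng.integers(1 << 30)))
        if nx.is_connected(G) and not conjecture(G):
            return G
    return None

# Toy *false* claim: "every connected graph attains spectral radius >= its
# maximum degree". It holds only for regular graphs, so a counter-example
# appears almost immediately.
toy = lambda G: spectral_radius(G) >= max(d for _, d in G.degree())
counterexample = random_search_refute(toy)
print(counterexample is not None)   # True: the claim is refuted
```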
arxiv-662731
|
2409.18628
|
Towards Integrating Epistemic Uncertainty Estimation into the Radiotherapy Workflow
|
<|reference_start|>Towards Integrating Epistemic Uncertainty Estimation into the Radiotherapy Workflow: The precision of contouring target structures and organs-at-risk (OAR) in radiotherapy planning is crucial for ensuring treatment efficacy and patient safety. Recent advancements in deep learning (DL) have significantly improved OAR contouring performance, yet the reliability of these models, especially in the presence of out-of-distribution (OOD) scenarios, remains a concern in clinical settings. This application study explores the integration of epistemic uncertainty estimation within the OAR contouring workflow to enable OOD detection in clinically relevant scenarios, using specifically compiled data. Furthermore, we introduce an advanced statistical method for OOD detection to enhance the methodological framework of uncertainty estimation. Our empirical evaluation demonstrates that epistemic uncertainty estimation is effective in identifying instances where model predictions are unreliable and may require an expert review. Notably, our approach achieves an AUC-ROC of 0.95 for OOD detection, with a specificity of 0.95 and a sensitivity of 0.92 for implant cases, underscoring its efficacy. This study addresses significant gaps in the current research landscape, such as the lack of ground truth for uncertainty estimation and limited empirical evaluations. Additionally, it provides a clinically relevant application of epistemic uncertainty estimation in an FDA-approved and widely used clinical solution for OAR segmentation from Varian, a Siemens Healthineers company, highlighting its practical benefits.<|reference_end|>
|
arxiv
|
@article{teichmann2024towards,
title={Towards Integrating Epistemic Uncertainty Estimation into the
Radiotherapy Workflow},
author={Marvin Tom Teichmann and Manasi Datar and Lisa Kratzke and Fernando
Vega and Florin C. Ghesu},
journal={arXiv preprint arXiv:2409.18628},
year={2024},
archivePrefix={arXiv},
eprint={2409.18628},
primaryClass={eess.IV cs.AI cs.CV cs.LG}
}
|
teichmann2024towards
|
arxiv-662732
|
2409.18629
|
Structure-preserving scheme for fractional nonlinear diffusion equations
|
<|reference_start|>Structure-preserving scheme for fractional nonlinear diffusion equations: In this paper, we introduce and analyze a numerical scheme for solving the Cauchy-Dirichlet problem associated with fractional nonlinear diffusion equations. These equations generalize the porous medium equation and the fast diffusion equation by incorporating a fractional diffusion term. We provide a rigorous analysis showing that the discretization preserves main properties of the continuous equations, including algebraic decay in the fractional porous medium case and the extinction phenomenon in the fractional fast diffusion case. The study is supported by extensive numerical simulations. In addition, we propose a novel method for accurately computing the extinction time for the fractional fast diffusion equation and illustrate numerically the convergence of rescaled solutions towards asymptotic profiles near the extinction time.<|reference_end|>
|
arxiv
|
@article{hivert2024structure-preserving,
title={Structure-preserving scheme for fractional nonlinear diffusion equations},
author={H\'el\`ene Hivert and Florian Salin},
journal={arXiv preprint arXiv:2409.18629},
year={2024},
archivePrefix={arXiv},
eprint={2409.18629},
primaryClass={math.NA cs.NA}
}
|
hivert2024structure-preserving
|
arxiv-662733
|
2409.18630
|
Entropy, concentration, and learning: a statistical mechanics primer
|
<|reference_start|>Entropy, concentration, and learning: a statistical mechanics primer: Artificial intelligence models trained through loss minimization have demonstrated significant success, grounded in principles from fields like information theory and statistical physics. This work explores these established connections through the lens of statistical mechanics, starting from first-principles sample concentration behaviors that underpin AI and machine learning. Our development of statistical mechanics for modeling highlights the key role of exponential families, and quantities of statistics, physics, and information theory.<|reference_end|>
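For concreteness, the central exponential-family object such a primer builds on is the Gibbs distribution; a minimal sketch computing it together with its entropy and mean energy (energies and temperature are arbitrary):

    import numpy as np

    def gibbs(energies, T):
        # Boltzmann/exponential-family distribution: p_i proportional to exp(-E_i / T)
        logits = -np.asarray(energies) / T
        p = np.exp(logits - logits.max())          # subtract max for numerical stability
        return p / p.sum()

    E = np.array([0.0, 1.0, 2.0, 4.0])
    p = gibbs(E, T=1.0)
    entropy = -(p * np.log(p)).sum()               # Shannon entropy of the ensemble
    mean_energy = (p * E).sum()                    # expected energy under p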
|
arxiv
|
@article{balsubramani2024entropy,
title={Entropy, concentration, and learning: a statistical mechanics primer},
author={Akshay Balsubramani},
journal={arXiv preprint arXiv:2409.18630},
year={2024},
archivePrefix={arXiv},
eprint={2409.18630},
primaryClass={cs.LG cond-mat.stat-mech cs.AI cs.IT math.IT stat.ML}
}
|
balsubramani2024entropy
|
arxiv-662734
|
2409.18631
|
Quantum Algorithms for Drone Mission Planning
|
<|reference_start|>Quantum Algorithms for Drone Mission Planning: Mission planning often involves optimising the use of ISR (Intelligence, Surveillance and Reconnaissance) assets in order to achieve a set of mission objectives within allowed parameters subject to constraints. The missions of interest here involve routing multiple UAVs visiting multiple targets, utilising sensors to capture data relating to each target. Finding such solutions is often an NP-Hard problem and cannot be solved efficiently on classical computers. Furthermore, during the mission new constraints and objectives may arise, requiring a new solution to be computed within a short time period. To achieve this, we investigate near-term quantum algorithms that have the potential to offer speed-ups against current classical methods. We demonstrate how a large family of these problems can be formulated as a Mixed Integer Linear Program (MILP) and then converted to a Quadratic Unconstrained Binary Optimisation (QUBO). The formulation is versatile and can be adapted to many different constraints, with clear qubit scaling provided. We discuss the results of solving the QUBO formulation using commercial quantum annealers and compare the solutions to current edge classical solvers. We also analyse the results of solving the QUBO using Quantum Approximate Optimisation Algorithms (QAOA). Finally, we provide efficient methods to encode the problem into the Variational Quantum Eigensolver (VQE) formalism, where we have tailored the ansatz to the problem, making efficient use of the qubits available.<|reference_end|>
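A toy sketch of the MILP-to-QUBO step described here: a one-hot constraint (each target visited by exactly one UAV) is folded into the objective as a quadratic penalty; the sizes, costs, and penalty weight are made up for illustration:

    import numpy as np

    n_uavs, n_targets, P = 2, 3, 10.0                # P is the constraint penalty weight
    cost = np.random.default_rng(1).uniform(1, 5, size=(n_uavs, n_targets))
    idx = lambda u, t: u * n_targets + t             # x[u, t] = 1 iff UAV u visits target t
    n = n_uavs * n_targets
    Q = np.zeros((n, n))

    for u in range(n_uavs):                          # linear visit costs on the diagonal
        for t in range(n_targets):
            Q[idx(u, t), idx(u, t)] += cost[u, t]

    for t in range(n_targets):                       # penalty P * (sum_u x[u,t] - 1)^2
        for u in range(n_uavs):
            Q[idx(u, t), idx(u, t)] -= P             # since x^2 = x, the x^2 - 2x terms give -x
            for v in range(u + 1, n_uavs):
                Q[idx(u, t), idx(v, t)] += 2 * P     # pairwise cross terms
    # Dropping the constant offset P * n_targets, minimize x^T Q x over x in {0,1}^n.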
|
arxiv
|
@article{davies2024quantum,
title={Quantum Algorithms for Drone Mission Planning},
author={Ethan Davies and Pranav Kalidindi},
journal={arXiv preprint arXiv:2409.18631},
year={2024},
archivePrefix={arXiv},
eprint={2409.18631},
primaryClass={quant-ph cs.AI math.OC}
}
|
davies2024quantum
|
arxiv-662735
|
2409.18633
|
Reducing Diversity to Generate Hierarchical Archetypes
|
<|reference_start|>Reducing Diversity to Generate Hierarchical Archetypes: The Artificial Intelligence field seldom address the development of a fundamental building piece: a framework, methodology or algorithm to automatically build hierarchies of abstractions. This is a key requirement in order to build intelligent behaviour, as recent neuroscience studies clearly expose. In this paper we present a primitive-based framework to automatically generate hierarchies of constructive archetypes, as a theory of how to generate hierarchies of abstractions. We assume the existence of a primitive with very specific characteristics, and we develop our framework over it. We prove the effectiveness of our framework through mathematical definitions and proofs. Finally, we give a few insights about potential uses of our framework and the expected results.<|reference_end|>
|
arxiv
|
@article{ibias2024reducing,
title={Reducing Diversity to Generate Hierarchical Archetypes},
author={Alfredo Ibias, Hector Antona, Guillem Ramirez-Miranda, Enric
Guinovart, Eduard Alarcon},
journal={arXiv preprint arXiv:2409.18633},
year={2024},
archivePrefix={arXiv},
eprint={2409.18633},
primaryClass={cs.AI}
}
|
ibias2024reducing
|
arxiv-662736
|
2409.18634
|
Split-or-decompose: Improved FPT branching algorithms for maximum agreement forests
|
<|reference_start|>Split-or-decompose: Improved FPT branching algorithms for maximum agreement forests: Phylogenetic trees are leaf-labelled trees used to model the evolution of species. In practice it is not uncommon to obtain two topologically distinct trees for the same set of species, and this motivates the use of distance measures to quantify dissimilarity. A well-known measure is the maximum agreement forest (MAF): a minimum-size partition of the leaf labels which splits both trees into the same set of disjoint, leaf-labelled subtrees (up to isomorphism after suppressing degree-2 vertices). Computing such a MAF is NP-hard and so considerable effort has been invested in finding FPT algorithms, parameterised by $k$, the number of components of a MAF. The state of the art has been unchanged since 2015, with running times of $O^*(3^k)$ for unrooted trees and $O^*(2.3431^k)$ for rooted trees. In this work we present improved algorithms for both the unrooted and rooted cases, with runtimes $O^*(2.846^k)$ and $O^*(2.3391^k)$ respectively. The key to our improvement is a novel branching strategy in which we show that any overlapping components obtained on the way to a MAF can be `split' by a branching rule with favourable branching factor, and then the problem can be decomposed into disjoint subproblems to be solved separately. We expect that this technique may be more widely applicable to other problems in algorithmic phylogenetics.<|reference_end|>
|
arxiv
|
@article{mestel2024split-or-decompose:,
title={Split-or-decompose: Improved FPT branching algorithms for maximum
agreement forests},
author={David Mestel, Steven Chaplick, Steven Kelk, Ruben Meuwese},
journal={arXiv preprint arXiv:2409.18634},
year={2024},
archivePrefix={arXiv},
eprint={2409.18634},
primaryClass={cs.DS q-bio.PE}
}
|
mestel2024split-or-decompose:
|
arxiv-662737
|
2409.18636
|
Unsupervised Fingerphoto Presentation Attack Detection With Diffusion Models
|
<|reference_start|>Unsupervised Fingerphoto Presentation Attack Detection With Diffusion Models: Smartphone-based contactless fingerphoto authentication has become a reliable alternative to traditional contact-based fingerprint biometric systems owing to rapid advances in smartphone camera technology. Despite its convenience, fingerprint authentication through fingerphotos is more vulnerable to presentation attacks, which has motivated recent research efforts towards developing fingerphoto Presentation Attack Detection (PAD) techniques. However, prior PAD approaches utilized supervised learning methods that require labeled training data for both bona fide and attack samples. This can suffer from two key issues, namely (i) generalization: the detection of novel presentation attack instruments (PAIs) unseen in the training data, and (ii) scalability: the collection of a large dataset of attack samples using different PAIs. To address these challenges, we propose a novel unsupervised approach based on a state-of-the-art deep-learning-based diffusion model, the Denoising Diffusion Probabilistic Model (DDPM), which is trained solely on bona fide samples. The proposed approach detects Presentation Attacks (PA) by calculating the reconstruction similarity between the input and output pairs of the DDPM. We present extensive experiments across three PAI datasets to test the accuracy and generalization capability of our approach. The results show that the proposed DDPM-based PAD method achieves significantly better detection error rates on several PAI classes compared to other baseline unsupervised approaches.<|reference_end|>
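The scoring rule at the heart of this kind of approach can be sketched in a few lines; `ddpm_reconstruct` below is a hypothetical stand-in for noising an input and denoising it with a DDPM trained only on bona fide samples, and the threshold would be chosen on validation data:

    import numpy as np

    def ddpm_reconstruct(image):
        # Hypothetical stand-in for the forward-noise/denoise cycle of a DDPM
        # trained solely on bona fide fingerphotos.
        return image + np.random.default_rng(0).normal(0.0, 0.01, image.shape)

    def pad_score(image):
        recon = ddpm_reconstruct(image)
        a, b = image.ravel(), recon.ravel()
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return 1.0 - cos                  # low reconstruction similarity => likely attack

    sample = np.random.default_rng(1).random((64, 64))
    is_attack = pad_score(sample) > 0.1   # threshold tuned on bona fide data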
|
arxiv
|
@article{li2024unsupervised,
title={Unsupervised Fingerphoto Presentation Attack Detection With Diffusion
Models},
author={Hailin Li, Raghavendra Ramachandra, Mohamed Ragab, Soumik Mondal, Yong
Kiam Tan, Khin Mi Mi Aung},
journal={arXiv preprint arXiv:2409.18636},
year={2024},
archivePrefix={arXiv},
eprint={2409.18636},
primaryClass={cs.CV}
}
|
li2024unsupervised
|
arxiv-662738
|
2409.18641
|
Pseudo-kinematic trajectory control of tracked vehicles
|
<|reference_start|>Pseudo-kinematic trajectory control of tracked vehicles: Tracked vehicles are used in complex scenarios, where motion planning and navigation can be very complex. They have complex dynamics, with many parameters that are difficult to identify and that change significantly based on the operating conditions. We propose a simple pseudo-kinematic model, where the intricate dynamic effects underlying the vehicle's motion are captured in a small set of velocity-dependent parameters. This choice enables the development of a Lyapunov-based trajectory controller with guaranteed performance and small computation time. We demonstrate the correctness of our approach with both simulation and experimental data.<|reference_end|>
|
arxiv
|
@article{focchi2024pseudo-kinematic,
title={Pseudo-kinematic trajectory control of tracked vehicles},
author={Michele Focchi, Daniele Fontanelli, Luigi Palopoli},
journal={arXiv preprint arXiv:2409.18641},
year={2024},
archivePrefix={arXiv},
eprint={2409.18641},
primaryClass={cs.RO cs.SY eess.SY}
}
|
focchi2024pseudo-kinematic
|
arxiv-662739
|
2409.18642
|
Enhanced Convolution Neural Network with Optimized Pooling and Hyperparameter Tuning for Network Intrusion Detection
|
<|reference_start|>Enhanced Convolution Neural Network with Optimized Pooling and Hyperparameter Tuning for Network Intrusion Detection: Network Intrusion Detection Systems (NIDS) are essential for protecting computer networks from malicious activities, including Denial of Service (DoS), Probing, User-to-Root (U2R), and Remote-to-Local (R2L) attacks. Without effective NIDS, networks are vulnerable to significant security breaches and data loss. Machine learning techniques provide a promising approach to enhance NIDS by automating threat detection and improving accuracy. In this research, we propose an Enhanced Convolutional Neural Network (EnCNN) for NIDS and evaluate its performance using the KDDCUP'99 dataset. Our methodology includes comprehensive data preprocessing, exploratory data analysis (EDA), and feature engineering. We compare EnCNN with various machine learning algorithms, including Logistic Regression, Decision Trees, Support Vector Machines (SVM), and ensemble methods like Random Forest, AdaBoost, and Voting Ensemble. The results show that EnCNN significantly improves detection accuracy, with a notable 10% increase over state-of-the-art approaches. This demonstrates the effectiveness of EnCNN in real-time network intrusion detection, offering a robust solution for identifying and mitigating security threats, and enhancing overall network resilience.<|reference_end|>
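As a rough sketch of what a CNN over tabular connection records looks like (a generic PyTorch baseline, not the paper's EnCNN architecture or its tuned pooling and hyperparameters):

    import torch
    import torch.nn as nn

    n_features, n_classes = 41, 5            # KDDCUP'99-style features; attack categories + normal
    model = nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool1d(2),                     # the pooling choice is one of the tuned components
        nn.Conv1d(16, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveMaxPool1d(4),
        nn.Flatten(),
        nn.Linear(32 * 4, n_classes),
    )

    x = torch.randn(8, 1, n_features)        # a batch of 8 preprocessed records
    loss = nn.CrossEntropyLoss()(model(x), torch.randint(0, n_classes, (8,)))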
|
arxiv
|
@article{sharma2024enhanced,
title={Enhanced Convolution Neural Network with Optimized Pooling and
Hyperparameter Tuning for Network Intrusion Detection},
author={Ayush Kumar Sharma, Sourav Patel, Supriya Bharat Wakchaure, Abirami S},
journal={arXiv preprint arXiv:2409.18642},
year={2024},
archivePrefix={arXiv},
eprint={2409.18642},
primaryClass={cs.CR cs.AI cs.CV}
}
|
sharma2024enhanced
|
arxiv-662740
|
2409.18644
|
Incorporating Precedents for Legal Judgement Prediction on European Court of Human Rights Cases
|
<|reference_start|>Incorporating Precedents for Legal Judgement Prediction on European Court of Human Rights Cases: Inspired by the legal doctrine of stare decisis, which leverages precedents (prior cases) for informed decision-making, we explore methods to integrate them into legal judgement prediction (LJP) models. To facilitate precedent retrieval, we train a retriever with a fine-grained relevance signal based on the overlap ratio of alleged articles between cases. We investigate two strategies to integrate precedents: direct incorporation at inference via label interpolation based on case proximity and during training via a precedent fusion module using a stacked-cross attention model. We employ joint training of the retriever and LJP models to address latent space divergence between them. Our experiments on LJP tasks from the ECHR jurisdiction reveal that integrating precedents during training, coupled with joint training of the retriever and LJP model, outperforms models without precedents or with precedents incorporated only at inference, particularly benefiting sparser articles.<|reference_end|>
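The inference-time strategy (label interpolation based on case proximity) admits a compact sketch; the mixing weight and similarity scores below are illustrative, not the paper's settings:

    import numpy as np

    def interpolate_with_precedents(model_probs, precedent_labels, sims, alpha=0.3):
        # model_probs: (C,) probabilities from the LJP model
        # precedent_labels: (K, C) article labels of the K retrieved precedents
        # sims: (K,) retrieval similarities used as proximity weights
        w = np.asarray(sims, dtype=float)
        w = w / w.sum()
        precedent_vote = w @ np.asarray(precedent_labels, dtype=float)
        return (1 - alpha) * np.asarray(model_probs) + alpha * precedent_vote

    probs = interpolate_with_precedents(
        model_probs=[0.7, 0.2, 0.1],
        precedent_labels=[[1, 0, 0], [0, 1, 0]],
        sims=[0.9, 0.4],
    )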
|
arxiv
|
@article{santosh2024incorporating,
title={Incorporating Precedents for Legal Judgement Prediction on European
Court of Human Rights Cases},
author={T.Y.S.S. Santosh, Mohamed Hesham Elganayni, Stanisław Sójka,
Matthias Grabmair},
journal={arXiv preprint arXiv:2409.18644},
year={2024},
archivePrefix={arXiv},
eprint={2409.18644},
primaryClass={cs.CL}
}
|
santosh2024incorporating
|
arxiv-662741
|
2409.18645
|
The Craft of Selective Prediction: Towards Reliable Case Outcome Classification -- An Empirical Study on European Court of Human Rights Cases
|
<|reference_start|>The Craft of Selective Prediction: Towards Reliable Case Outcome Classification -- An Empirical Study on European Court of Human Rights Cases: In high-stakes decision-making tasks within legal NLP, such as Case Outcome Classification (COC), quantifying a model's predictive confidence is crucial. Confidence estimation enables humans to make more informed decisions, particularly when the model's certainty is low, or where the consequences of a mistake are significant. However, most existing COC works prioritize high task performance over model reliability. This paper conducts an empirical investigation into how various design choices including pre-training corpus, confidence estimator and fine-tuning loss affect the reliability of COC models within the framework of selective prediction. Our experiments on the multi-label COC task, focusing on European Court of Human Rights (ECtHR) cases, highlight the importance of a diverse yet domain-specific pre-training corpus for better calibration. Additionally, we demonstrate that larger models tend to exhibit overconfidence, Monte Carlo dropout methods produce reliable confidence estimates, and confident error regularization effectively mitigates overconfidence. To our knowledge, this is the first systematic exploration of selective prediction in legal NLP. Our findings underscore the need for further research on enhancing confidence measurement and improving the trustworthiness of models in the legal domain.<|reference_end|>
|
arxiv
|
@article{santosh2024the,
title={The Craft of Selective Prediction: Towards Reliable Case Outcome
Classification -- An Empirical Study on European Court of Human Rights Cases},
author={T.Y.S.S. Santosh, Irtiza Chowdhury, Shanshan Xu, Matthias Grabmair},
journal={arXiv preprint arXiv:2409.18645},
year={2024},
archivePrefix={arXiv},
eprint={2409.18645},
primaryClass={cs.CL}
}
|
santosh2024the
|
arxiv-662742
|
2409.18647
|
HiCuLR: Hierarchical Curriculum Learning for Rhetorical Role Labeling of Legal Documents
|
<|reference_start|>HiCuLR: Hierarchical Curriculum Learning for Rhetorical Role Labeling of Legal Documents: Rhetorical Role Labeling (RRL) of legal documents is pivotal for various downstream tasks such as summarization, semantic case search and argument mining. Existing approaches often overlook the varying difficulty levels inherent in legal document discourse styles and rhetorical roles. In this work, we propose HiCuLR, a hierarchical curriculum learning framework for RRL. It nests two curricula: Rhetorical Role-level Curriculum (RC) on the outer layer and Document-level Curriculum (DC) on the inner layer. DC categorizes documents based on their difficulty, utilizing metrics like deviation from a standard discourse structure and exposes the model to them in an easy-to-difficult fashion. RC progressively strengthens the model to discern coarse-to-fine-grained distinctions between rhetorical roles. Our experiments on four RRL datasets demonstrate the efficacy of HiCuLR, highlighting the complementary nature of DC and RC.<|reference_end|>
|
arxiv
|
@article{santosh2024hiculr:,
title={HiCuLR: Hierarchical Curriculum Learning for Rhetorical Role Labeling of
Legal Documents},
author={T.Y.S.S. Santosh, Apolline Isaia, Shiyu Hong, Matthias Grabmair},
journal={arXiv preprint arXiv:2409.18647},
year={2024},
archivePrefix={arXiv},
eprint={2409.18647},
primaryClass={cs.CL}
}
|
santosh2024hiculr:
|
arxiv-662743
|
2409.18649
|
Automatic Gain Tuning for Humanoid Robots Walking Architectures Using Gradient-Free Optimization Techniques
|
<|reference_start|>Automatic Gain Tuning for Humanoid Robots Walking Architectures Using Gradient-Free Optimization Techniques: Developing sophisticated control architectures has endowed robots, particularly humanoid robots, with numerous capabilities. However, tuning these architectures remains a challenging and time-consuming task that requires expert intervention. In this work, we propose a methodology to automatically tune the gains of all layers of a hierarchical control architecture for walking humanoids. We tested our methodology by employing different gradient-free optimization methods: Genetic Algorithm (GA), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), Evolution Strategy (ES), and Differential Evolution (DE). We validated the parameters found both in simulation and on the real ergoCub humanoid robot. Our results show that GA achieves the fastest convergence (10 x 10^3 function evaluations vs 25 x 10^3 needed by the other algorithms) and a 100% success rate in completing the task both in simulation and when transferred to the real robotic platform. These findings highlight the potential of our proposed method to automate the tuning process, reducing the need for manual intervention.<|reference_end|>
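A minimal genetic-algorithm loop conveys the flavor of the gradient-free search; `walking_cost` is a hypothetical stand-in for simulating the layered walking architecture with a candidate gain vector, and all hyperparameters are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    def walking_cost(gains):
        # Hypothetical stand-in: run the walking simulation with these gains
        # and return a cost (e.g., tracking error, penalties for falling).
        return float(np.sum((gains - 1.5) ** 2))

    pop = rng.uniform(0.0, 5.0, size=(20, 6))            # 20 candidates, 6 gains
    for generation in range(50):
        costs = np.array([walking_cost(g) for g in pop])
        elite = pop[np.argsort(costs)[:10]]              # selection: keep the best half
        parents = elite[rng.integers(0, 10, size=(20, 2))]
        mask = rng.random((20, 6)) < 0.5                 # uniform crossover
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        pop += rng.normal(0.0, 0.1, size=pop.shape)      # Gaussian mutation

    best = min(pop, key=walking_cost)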
|
arxiv
|
@article{sartore2024automatic,
title={Automatic Gain Tuning for Humanoid Robots Walking Architectures Using
Gradient-Free Optimization Techniques},
author={Carlotta Sartore, Marco Rando, Giulio Romualdi, Cesare Molinari,
Lorenzo Rosasco, Daniele Pucci},
journal={arXiv preprint arXiv:2409.18649},
year={2024},
archivePrefix={arXiv},
eprint={2409.18649},
primaryClass={cs.RO}
}
|
sartore2024automatic
|
arxiv-662744
|
2409.18653
|
When SAM2 Meets Video Camouflaged Object Segmentation: A Comprehensive Evaluation and Adaptation
|
<|reference_start|>When SAM2 Meets Video Camouflaged Object Segmentation: A Comprehensive Evaluation and Adaptation: This study investigates the application and performance of the Segment Anything Model 2 (SAM2) in the challenging task of video camouflaged object segmentation (VCOS). VCOS involves detecting objects in videos that blend seamlessly into their surroundings due to similar colors and textures, poor lighting conditions, etc. Compared to objects in normal scenes, camouflaged objects are much more difficult to detect. SAM2, a video foundation model, has shown potential in various tasks, but its effectiveness in dynamic camouflaged scenarios remains under-explored. We present a comprehensive study of SAM2's ability in VCOS. First, we assess SAM2's performance on camouflaged video datasets using different models and prompts (click, box, and mask). Second, we explore the integration of SAM2 with existing multimodal large language models (MLLMs) and VCOS methods. Third, we specifically adapt SAM2 by fine-tuning it on the video camouflaged dataset. Our comprehensive experiments demonstrate that SAM2 has an excellent zero-shot ability to detect camouflaged objects in videos. We also show that this ability could be further improved by specifically adjusting SAM2's parameters for VCOS. The code will be available at https://github.com/zhoustan/SAM2-VCOS<|reference_end|>
|
arxiv
|
@article{zhou2024when,
title={When SAM2 Meets Video Camouflaged Object Segmentation: A Comprehensive
Evaluation and Adaptation},
author={Yuli Zhou, Guolei Sun, Yawei Li, Luca Benini, Ender Konukoglu},
journal={arXiv preprint arXiv:2409.18653},
year={2024},
archivePrefix={arXiv},
eprint={2409.18653},
primaryClass={cs.CV cs.AI}
}
|
zhou2024when
|
arxiv-662745
|
2409.18654
|
Speech-Mamba: Long-Context Speech Recognition with Selective State Spaces Models
|
<|reference_start|>Speech-Mamba: Long-Context Speech Recognition with Selective State Spaces Models: Current automatic speech recognition systems struggle with modeling long speech sequences due to the quadratic complexity of Transformer-based models. Selective state space models such as Mamba have performed well on long-sequence modeling in natural language processing and computer vision tasks. However, their application to speech technology tasks remains under-explored. We propose Speech-Mamba, which incorporates selective state space modeling in Transformer neural architectures. The long-sequence representations from selective state space models in Speech-Mamba are complemented with lower-level representations from Transformer-based modeling. Speech-Mamba achieves better capacity to model long-range dependencies, as it scales near-linearly with sequence length.<|reference_end|>
|
arxiv
|
@article{gao2024speech-mamba:,
title={Speech-Mamba: Long-Context Speech Recognition with Selective State
Spaces Models},
author={Xiaoxue Gao, Nancy F. Chen},
journal={arXiv preprint arXiv:2409.18654},
year={2024},
archivePrefix={arXiv},
eprint={2409.18654},
primaryClass={eess.AS cs.SD}
}
|
gao2024speech-mamba:
|
arxiv-662746
|
2409.18657
|
Impact of number of elements on the directivity of planar array of monopole antenna
|
<|reference_start|>Impact of number of elements on the directivity of planar array of monopole antenna: This research investigates how the number of elements affects the directivity of a planar array of monopole antennas, taking into account the antenna's effect on the total radiated field. The monopole antennas are arranged in a planar configuration, with all elements positioned using the Hadamard matrix approach. The directivities and array factors were calculated for each matrix, and the radiation patterns were simulated with a MATLAB tool. Planar layouts ranging from 4 x 4 to 50 x 50 elements were considered in the investigation. Both the computed and simulated results show that increasing the number of elements yields a substantial improvement in directivity. Consequently, a greater number of elements raises the array's directivity and thereby shapes the overall radiated field.<|reference_end|>
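The reported trend can be reproduced numerically: the sketch below computes the uniform planar array factor and estimates directivity as $4\pi U_{max}/P_{rad}$ over the upper half-space (element pattern and mutual coupling are ignored, so the numbers are only indicative):

    import numpy as np

    def directivity(N, d=0.5):                     # N x N array, spacing d in wavelengths
        th = np.linspace(1e-6, np.pi / 2, 181)     # upper half-space (monopoles over ground)
        ph = np.linspace(0.0, 2 * np.pi, 361)
        TH, PH = np.meshgrid(th, ph, indexing="ij")
        psi_x = 2 * np.pi * d * np.sin(TH) * np.cos(PH)
        psi_y = 2 * np.pi * d * np.sin(TH) * np.sin(PH)
        m = np.arange(N)
        AF = (np.exp(1j * psi_x[..., None] * m).sum(-1)
              * np.exp(1j * psi_y[..., None] * m).sum(-1))   # separable uniform array factor
        U = np.abs(AF) ** 2                                  # radiation intensity
        P_rad = (U * np.sin(TH)).sum() * (th[1] - th[0]) * (ph[1] - ph[0])
        return 4 * np.pi * U.max() / P_rad

    for N in (4, 10, 20, 50):
        print(f"{N} x {N}: {10 * np.log10(directivity(N)):.1f} dB")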
|
arxiv
|
@article{akpo2024impact,
title={Impact of number of elements on the directivity of planar array of
monopole antenna},
author={S. E. Akpo, O. U. Omini, G. A. Tawo},
journal={arXiv preprint arXiv:2409.18657},
year={2024},
archivePrefix={arXiv},
eprint={2409.18657},
primaryClass={eess.SY cs.SY}
}
|
akpo2024impact
|
arxiv-662747
|
2409.18658
|
SEART Data Hub: Streamlining Large-Scale Source Code Mining and Pre-Processing
|
<|reference_start|>SEART Data Hub: Streamlining Large-Scale Source Code Mining and Pre-Processing: Large-scale code datasets have acquired an increasingly central role in software engineering (SE) research. This is the result of (i) the success of the mining software repositories (MSR) community, which pushed the standards of empirical studies in SE; and (ii) the recent advent of deep learning (DL) in software engineering, with models trained and tested on large source code datasets. While there exist some ready-to-use datasets in the literature, researchers often need to build and pre-process their own dataset to meet specific requirements of the study/technique they are working on. This implies a substantial cost in terms of time and computational resources. In this work we present the SEART Data Hub, a web application that makes it easy to build and pre-process large-scale datasets featuring code mined from public GitHub repositories. Through a simple web interface, researchers can specify a set of mining criteria (e.g., only collect code from repositories having more than 100 contributors and more than 1,000 commits) as well as specific pre-processing steps they want to perform (e.g., remove duplicates, test code, instances with syntax errors). After submitting the request, the user will receive an email with a download link for the required dataset within a few hours. A video showcasing the SEART Data Hub is available at https://youtu.be/lCgQaA7CYWA.<|reference_end|>
|
arxiv
|
@article{dabić2024seart,
title={SEART Data Hub: Streamlining Large-Scale Source Code Mining and
Pre-Processing},
author={Ozren Dabić, Rosalia Tufano, Gabriele Bavota},
journal={arXiv preprint arXiv:2409.18658},
year={2024},
archivePrefix={arXiv},
eprint={2409.18658},
primaryClass={cs.SE}
}
|
dabić2024seart
|
arxiv-662748
|
2409.18659
|
Explainable Enrichment-Driven GrAph Reasoner (EDGAR) for Large Knowledge Graphs with Applications in Drug Repurposing
|
<|reference_start|>Explainable Enrichment-Driven GrAph Reasoner (EDGAR) for Large Knowledge Graphs with Applications in Drug Repurposing: Knowledge graphs (KGs) represent connections and relationships between real-world entities. We propose a link prediction framework for KGs named Enrichment-Driven GrAph Reasoner (EDGAR), which infers new edges by mining entity-local rules. This approach leverages enrichment analysis, a well-established statistical method used to identify mechanisms common to sets of differentially expressed genes. EDGAR's inference results are inherently explainable and rankable, with p-values indicating the statistical significance of each enrichment-based rule. We demonstrate the framework's effectiveness on a large-scale biomedical KG, ROBOKOP, focusing on drug repurposing for Alzheimer disease (AD) as a case study. Initially, we extracted 14 known drugs from the KG and identified 20 contextual biomarkers through enrichment analysis, revealing functional pathways relevant to shared drug efficacy for AD. Subsequently, using the top 1000 enrichment results, our system identified 1246 additional drug candidates for AD treatment. The top 10 candidates were validated using evidence from medical literature. EDGAR is deployed within ROBOKOP, complete with a web user interface. This is the first study to apply enrichment analysis to large graph completion and drug repurposing.<|reference_end|>
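The statistical core here, enrichment via the hypergeometric test, is compact; the counts below are invented purely to show the computation:

    from scipy.stats import hypergeom

    M = 20000   # background: all annotated nodes in the KG
    K = 150     # background nodes carrying the candidate pathway annotation
    n = 80      # nodes linked to the seed set (e.g., the 14 known AD drugs)
    k = 12      # of those, carrying the pathway annotation

    p_value = hypergeom.sf(k - 1, M, K, n)   # P(X >= k), the enrichment p-value
    print(f"enrichment p-value: {p_value:.3e}")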
|
arxiv
|
@article{olasunkanmi2024explainable,
title={Explainable Enrichment-Driven GrAph Reasoner (EDGAR) for Large Knowledge
Graphs with Applications in Drug Repurposing},
author={Olawumi Olasunkanmi, Evan Morris, Yaphet Kebede, Harlin Lee, Stanley
Ahalt, Alexander Tropsha and Chris Bizon},
journal={arXiv preprint arXiv:2409.18659},
year={2024},
archivePrefix={arXiv},
eprint={2409.18659},
primaryClass={cs.IT cs.IR math.IT}
}
|
olasunkanmi2024explainable
|
arxiv-662749
|
2409.18660
|
Effects of AI Feedback on Learning, the Skill Gap, and Intellectual Diversity
|
<|reference_start|>Effects of AI Feedback on Learning, the Skill Gap, and Intellectual Diversity: Can human decision-makers learn from AI feedback? Using data on 52,000 decision-makers from a large online chess platform, we investigate how their AI use affects three interrelated long-term outcomes: Learning, skill gap, and diversity of decision strategies. First, we show that individuals are far more likely to seek AI feedback in situations in which they experienced success rather than failure. This AI feedback seeking strategy turns out to be detrimental to learning: Feedback on successes decreases future performance, while feedback on failures increases it. Second, higher-skilled decision-makers seek AI feedback more often and are far more likely to seek AI feedback after a failure, and benefit more from AI feedback than lower-skilled individuals. As a result, access to AI feedback increases, rather than decreases, the skill gap between high- and low-skilled individuals. Finally, we leverage 42 major platform updates as natural experiments to show that access to AI feedback causes a decrease in intellectual diversity of the population as individuals tend to specialize in the same areas. Together, those results indicate that learning from AI feedback is not automatic and using AI correctly seems to be a skill itself. Furthermore, despite its individual-level benefits, access to AI feedback can have significant population-level downsides including loss of intellectual diversity and an increasing skill gap.<|reference_end|>
|
arxiv
|
@article{riedl2024effects,
title={Effects of AI Feedback on Learning, the Skill Gap, and Intellectual
Diversity},
author={Christoph Riedl and Eric Bogert},
journal={arXiv preprint arXiv:2409.18660},
year={2024},
archivePrefix={arXiv},
eprint={2409.18660},
primaryClass={econ.GN cs.AI cs.HC q-fin.EC}
}
|
riedl2024effects
|
arxiv-662750
|
2409.18661
|
Not the Silver Bullet: LLM-enhanced Programming Error Messages are Ineffective in Practice
|
<|reference_start|>Not the Silver Bullet: LLM-enhanced Programming Error Messages are Ineffective in Practice: The sudden emergence of large language models (LLMs) such as ChatGPT has had a disruptive impact throughout the computing education community. LLMs have been shown to excel at producing correct code for CS1 and CS2 problems, and can even act as friendly assistants to students learning how to code. Recent work shows that LLMs demonstrate unequivocally superior results in being able to explain and resolve compiler error messages -- for decades, one of the most frustrating parts of learning how to code. However, LLM-generated error message explanations have only been assessed by expert programmers in artificial conditions. This work sought to understand how novice programmers resolve programming error messages (PEMs) in a more realistic scenario. We ran a within-subjects study with $n$ = 106 participants in which students were tasked to fix six buggy C programs. For each program, participants were randomly assigned to fix the problem using either a stock compiler error message, an expert-handwritten error message, or an error message explanation generated by GPT-4. Despite promising evidence on synthetic benchmarks, we found that GPT-4 generated error messages outperformed conventional compiler error messages in only 1 of the 6 tasks, measured by students' time-to-fix each problem. Handwritten explanations still outperform LLM and conventional error messages, both on objective and subjective measures.<|reference_end|>
|
arxiv
|
@article{santos2024not,
title={Not the Silver Bullet: LLM-enhanced Programming Error Messages are
Ineffective in Practice},
author={Eddie Antonio Santos and Brett A. Becker},
journal={arXiv preprint arXiv:2409.18661},
year={2024},
doi={10.1145/3689535.3689554},
archivePrefix={arXiv},
eprint={2409.18661},
primaryClass={cs.AI cs.HC}
}
|
santos2024not
|
arxiv-662751
|
2409.18664
|
How green is continual learning, really? Analyzing the energy consumption in continual training of vision foundation models
|
<|reference_start|>How green is continual learning, really? Analyzing the energy consumption in continual training of vision foundation models: With the ever-growing adoption of AI, its impact on the environment is no longer negligible. Despite the potential that continual learning could have towards Green AI, its environmental sustainability remains relatively uncharted. In this work we aim to gain a systematic understanding of the energy efficiency of continual learning algorithms. To that end, we conducted an extensive set of empirical experiments comparing the energy consumption of recent representation-, prompt-, and exemplar-based continual learning algorithms and two standard baselines (fine-tuning and joint training) when used to continually adapt a pre-trained ViT-B/16 foundation model. We performed our experiments on three standard datasets: CIFAR-100, ImageNet-R, and DomainNet. Additionally, we propose a novel metric, the Energy NetScore, which we use to measure algorithm efficiency in terms of the energy-accuracy trade-off. Through numerous evaluations varying the number and size of the incremental learning steps, our experiments demonstrate that different types of continual learning algorithms have very different impacts on energy consumption during both training and inference. Although often overlooked in the continual learning literature, we found that the energy consumed during the inference phase is crucial for evaluating the environmental sustainability of continual learning models.<|reference_end|>
|
arxiv
|
@article{trinci2024how,
title={How green is continual learning, really? Analyzing the energy
consumption in continual training of vision foundation models},
author={Tomaso Trinci, Simone Magistri, Roberto Verdecchia, Andrew D. Bagdanov},
journal={arXiv preprint arXiv:2409.18664},
year={2024},
archivePrefix={arXiv},
eprint={2409.18664},
primaryClass={cs.LG}
}
|
trinci2024how
|
arxiv-662752
|
2409.18665
|
Kaleidoscopic reorganization of network communities across different scales
|
<|reference_start|>Kaleidoscopic reorganization of network communities across different scales: The notion of structural heterogeneity is pervasive in real networks, and their community organization is no exception. Still, a vast majority of community detection methods assume neatly hierarchically organized communities of a characteristic scale for a given hierarchical level. In this work, we demonstrate that the reality of scale-dependent community reorganization is convoluted with simultaneous processes of community splitting and merging, challenging the conventional understanding of community-scale adjustment. We provide the mathematical argument on the modularity function, the results from the real-network analysis, and a simple network model for a comprehensive understanding of the nontrivial community reorganization process characterized by a local dip in the number of communities as the resolution parameter varies. This study suggests a need for a paradigm shift in the study of network communities, which emphasizes the importance of considering scale-dependent reorganization to better understand the genuine structural organization of networks.<|reference_end|>
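The phenomenon can be probed directly by sweeping the resolution parameter of modularity and counting communities; a toy sketch with networkx (any graph with planted structure will do, and the non-monotonic behavior is what the paper characterizes on real networks):

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.ring_of_cliques(8, 6)    # toy graph: 8 cliques of 6 nodes in a ring

    for gamma in (0.2, 0.5, 1.0, 2.0, 4.0):
        comms = greedy_modularity_communities(G, resolution=gamma)
        print(f"resolution {gamma}: {len(comms)} communities")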
|
arxiv
|
@article{jeong2024kaleidoscopic,
title={Kaleidoscopic reorganization of network communities across different
scales},
author={Wonhee Jeong, Daekyung Lee, Heetae Kim, Sang Hoon Lee},
journal={arXiv preprint arXiv:2409.18665},
year={2024},
archivePrefix={arXiv},
eprint={2409.18665},
primaryClass={physics.soc-ph cond-mat.stat-mech cs.SI}
}
|
jeong2024kaleidoscopic
|
arxiv-662753
|
2409.18667
|
Synchronous Team Semantics for Temporal Logics
|
<|reference_start|>Synchronous Team Semantics for Temporal Logics: We present team semantics for two of the most important linear and branching time specification languages, Linear Temporal Logic (LTL) and Computation Tree Logic (CTL). With team semantics, LTL is able to express hyperproperties, which have in the last decade been identified as a key concept in the verification of information flow properties. We study basic properties of the logic and classify the computational complexity of its satisfiability, path, and model checking problem. Further, we examine how extensions of the basic logic react to adding additional atomic operators. Finally, we compare its expressivity to the one of HyperLTL, another recently introduced logic for hyperproperties. Our results show that LTL with team semantics is a viable alternative to HyperLTL, which complements the expressivity of HyperLTL and has partially better algorithmic properties. For CTL with team semantics, we investigate the computational complexity of the satisfiability and model checking problem. The satisfiability problem is shown to be EXPTIME-complete while we show that model checking is PSPACE-complete.<|reference_end|>
|
arxiv
|
@article{krebs2024synchronous,
title={Synchronous Team Semantics for Temporal Logics},
author={Andreas Krebs, Arne Meier, Jonni Virtema, Martin Zimmermann},
journal={arXiv preprint arXiv:2409.18667},
year={2024},
archivePrefix={arXiv},
eprint={2409.18667},
primaryClass={cs.LO}
}
|
krebs2024synchronous
|
arxiv-662754
|
2409.18670
|
Beyond Decisiveness of Infinite Markov Chains
|
<|reference_start|>Beyond Decisiveness of Infinite Markov Chains: Verification of infinite-state Markov chains is still a challenge despite several fruitful numerical or statistical approaches. For decisive Markov chains, there is a simple numerical algorithm that frames the reachability probability as accurately as required (however with an unknown complexity). On the other hand, when applicable, statistical model checking is in most cases very efficient. Here we study the relation between these two approaches, showing first that decisiveness is a necessary and sufficient condition for almost-sure termination of statistical model checking. Afterwards we develop an approach, applicable to both methods, that substitutes for a non-decisive Markov chain a decisive Markov chain with the same reachability probability. This approach combines two key ingredients: abstraction and importance sampling (a technique that was formerly used for efficiency). We develop this approach on a generic formalism called layered Markov chain (LMC). Afterwards we perform an empirical study on probabilistic pushdown automata (an instance of LMC) to understand the complexity factors of the statistical and numerical algorithms. To the best of our knowledge, this prototype is the first implementation of the deterministic algorithm for decisive Markov chains and required us to solve several qualitative and numerical issues.<|reference_end|>
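A toy version of the importance-sampling ingredient: estimating a small reachability probability of a random walk by simulating a biased chain and reweighting each run by its likelihood ratio (the chain, bias, and target are illustrative, not the LMC construction of the paper):

    import numpy as np

    p, q = 0.3, 0.6                    # true vs biased (proposal) up-probability
    rng = np.random.default_rng(0)

    def one_run(target=10, horizon=1000):
        x, weight = 1, 1.0
        for _ in range(horizon):
            if x == 0:
                return 0.0             # absorbed before reaching the target
            if x == target:
                return weight          # reached: contribute the likelihood ratio
            up = rng.random() < q      # sample a step of the biased chain
            weight *= (p / q) if up else ((1 - p) / (1 - q))
            x += 1 if up else -1
        return 0.0

    estimate = np.mean([one_run() for _ in range(20000)])
    print(f"estimated reachability probability: {estimate:.2e}")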
|
arxiv
|
@article{barbot2024beyond,
title={Beyond Decisiveness of Infinite Markov Chains},
author={Benoît Barbot, Patricia Bouyer, Serge Haddad},
journal={arXiv preprint arXiv:2409.18670},
year={2024},
archivePrefix={arXiv},
eprint={2409.18670},
primaryClass={cs.LO cs.FL}
}
|
barbot2024beyond
|
arxiv-662755
|
2409.18673
|
Exploiting Motion Prior for Accurate Pose Estimation of Dashboard Cameras
|
<|reference_start|>Exploiting Motion Prior for Accurate Pose Estimation of Dashboard Cameras: Dashboard cameras (dashcams) record millions of driving videos daily, offering a valuable potential data source for various applications, including driving map production and updates. A necessary step for utilizing these dashcam data involves the estimation of camera poses. However, the low-quality images captured by dashcams, characterized by motion blur and dynamic objects, pose challenges for existing image-matching methods in accurately estimating camera poses. In this study, we propose a precise pose estimation method for dashcam images, leveraging the inherent camera motion prior. Typically, image sequences captured by dash cameras exhibit pronounced motion priors, such as forward movement or lateral turns, which serve as essential cues for correspondence estimation. Building upon this observation, we devise a pose regression module aimed at learning the camera motion prior, subsequently integrating this prior into both the correspondence and pose estimation processes. The experiments show that, on a real dashcam dataset, our method is 22% better than the baseline for pose estimation in AUC@5°, and it can estimate poses for 19% more images with less reprojection error in Structure from Motion (SfM).<|reference_end|>
|
arxiv
|
@article{lu2024exploiting,
title={Exploiting Motion Prior for Accurate Pose Estimation of Dashboard
Cameras},
author={Yipeng Lu, Yifan Zhao, Haiping Wang, Zhiwei Ruan, Yuan Liu, Zhen Dong,
Bisheng Yang},
journal={arXiv preprint arXiv:2409.18673},
year={2024},
archivePrefix={arXiv},
eprint={2409.18673},
primaryClass={cs.CV cs.AI}
}
|
lu2024exploiting
|
arxiv-662756
|
2409.18674
|
Image-guided topic modeling for interpretable privacy classification
|
<|reference_start|>Image-guided topic modeling for interpretable privacy classification: Predicting and explaining the private information contained in an image in human-understandable terms is a complex and contextual task. This task is challenging even for large language models. To facilitate the understanding of privacy decisions, we propose to predict image privacy based on a set of natural language content descriptors. These content descriptors are associated with privacy scores that reflect how people perceive image content. We generate descriptors with our novel Image-guided Topic Modeling (ITM) approach. ITM leverages, via multimodality alignment, both vision information and image textual descriptions from a vision language model. We use the ITM-generated descriptors to learn a privacy predictor, Priv$\times$ITM, whose decisions are interpretable by design. Our Priv$\times$ITM classifier outperforms the reference interpretable method by 5 percentage points in accuracy and performs comparably to the current non-interpretable state-of-the-art model.<|reference_end|>
|
arxiv
|
@article{baia2024image-guided,
title={Image-guided topic modeling for interpretable privacy classification},
author={Alina Elena Baia and Andrea Cavallaro},
journal={arXiv preprint arXiv:2409.18674},
year={2024},
archivePrefix={arXiv},
eprint={2409.18674},
primaryClass={cs.CV}
}
|
baia2024image-guided
|
arxiv-662757
|
2409.18675
|
Online and Utility-Power Efficient Task Scheduling in Homogeneous Fog Networks
|
<|reference_start|>Online and Utility-Power Efficient Task Scheduling in Homogeneous Fog Networks: Fog computing is of particular interest to Internet of Things (IoT), where inexpensive simple devices can offload their computation tasks to nearby Fog Nodes. Online scheduling in such fog networks is challenging due to stochastic network states such as task arrivals, wireless channels and location of nodes. In this paper, we focus on the problem of optimizing computation offloading management, arrival data admission control and resource scheduling, in order to improve the overall system performance in terms of throughput fairness, power efficiency, and average mean of queue backlogs. We investigate this problem for a fog network with homogeneous mobile Fog Nodes, serving multiple wireless devices, controlled by a Fog Control Node. By formulating the problem as a stochastic optimization problem, maximizing utility-power efficiency, defined as achievable utility per-unit power consumption, subject to queue backlog stability, we modify Lyapunov optimization techniques to deal with the fractional form of the utility-power efficiency function. Then we propose an online utility-power efficient task scheduling algorithm, which is asymptotically optimal. Our online task scheduling algorithm can achieve the theoretical [O(1/V), O(V)] trade-off between utility-power efficiency and the average mean of queue backlogs.<|reference_end|>
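The drift-plus-penalty machinery behind such algorithms fits in a few lines for a single queue; the action set, arrival process, and trade-off parameter V are illustrative, and this generic rule is not the paper's fractional-objective variant:

    import numpy as np

    rng = np.random.default_rng(0)
    V = 20.0                         # larger V favors the objective over backlog: [O(1/V), O(V)]
    Q = 0.0                          # queue backlog
    actions = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.4)]   # (service rate, power cost) options

    for t in range(10000):
        arrivals = rng.poisson(0.8)
        # Drift-plus-penalty: each slot, pick the action maximizing Q * service - V * power.
        mu, power = max(actions, key=lambda a: Q * a[0] - V * a[1])
        Q = max(Q + arrivals - mu, 0.0)

    print("final backlog:", Q)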
|
arxiv
|
@article{ebadi2024online,
title={Online and Utility-Power Efficient Task Scheduling in Homogeneous Fog
Networks},
author={Fatemeh Ebadi, Vahid Shah-Mansouri},
journal={arXiv preprint arXiv:2409.18675},
year={2024},
archivePrefix={arXiv},
eprint={2409.18675},
primaryClass={cs.NI}
}
|
ebadi2024online
|
arxiv-662758
|
2409.18676
|
Toward Universal and Interpretable World Models for Open-ended Learning Agents
|
<|reference_start|>Toward Universal and Interpretable World Models for Open-ended Learning Agents: We introduce a generic, compositional and interpretable class of generative world models that supports open-ended learning agents. This is a sparse class of Bayesian networks capable of approximating a broad range of stochastic processes, which provides agents with the ability to learn world models in a manner that may be both interpretable and computationally scalable. This approach, integrating Bayesian structure learning and intrinsically motivated (model-based) planning, enables agents to actively develop and refine their world models, which may lead to open-ended learning and more robust, adaptive behavior.<|reference_end|>
|
arxiv
|
@article{dacosta2024toward,
title={Toward Universal and Interpretable World Models for Open-ended Learning
Agents},
author={Lancelot Da Costa},
journal={NeurIPS 2024 Workshop on Intrinsically Motivated Open-ended
Learning (IMOL)},
year={2024},
archivePrefix={arXiv},
eprint={2409.18676},
primaryClass={cs.AI cs.MA q-bio.NC}
}
|
dacosta2024toward
|
arxiv-662759
|
2409.18677
|
Co-Trained Retriever-Generator Framework for Question Generation in Earnings Calls
|
<|reference_start|>Co-Trained Retriever-Generator Framework for Question Generation in Earnings Calls: In diverse professional environments, ranging from academic conferences to corporate earnings calls, the ability to anticipate audience questions stands paramount. Traditional methods, which rely on manual assessment of an audience's background, interests, and subject knowledge, often fall short - particularly when facing large or heterogeneous groups, leading to imprecision and inefficiency. While NLP has made strides in text-based question generation, its primary focus remains on academic settings, leaving the intricate challenges of professional domains, especially earnings call conferences, underserved. Addressing this gap, our paper pioneers the multi-question generation (MQG) task specifically designed for earnings call contexts. Our methodology involves an exhaustive collection of earnings call transcripts and a novel annotation technique to classify potential questions. Furthermore, we introduce a retriever-enhanced strategy to extract relevant information. With a core aim of generating a spectrum of potential questions that analysts might pose, we derive these directly from earnings call content. Empirical evaluations underscore our approach's edge, revealing notable excellence in the accuracy, consistency, and perplexity of the questions generated.<|reference_end|>
|
arxiv
|
@article{juan2024co-trained,
title={Co-Trained Retriever-Generator Framework for Question Generation in
Earnings Calls},
author={Yining Juan, Chung-Chi Chen, Hen-Hsen Huang, Hsin-Hsi Chen},
journal={arXiv preprint arXiv:2409.18677},
year={2024},
archivePrefix={arXiv},
eprint={2409.18677},
primaryClass={cs.CL}
}
|
juan2024co-trained
|
arxiv-662760
|
2409.18678
|
Rehearsing Answers to Probable Questions with Perspective-Taking
|
<|reference_start|>Rehearsing Answers to Probable Questions with Perspective-Taking: Question answering (QA) has been a long-standing focus in the NLP field, predominantly addressing reading comprehension and common sense QA. However, scenarios involving the preparation of answers to probable questions during professional oral presentations remain underexplored. In this paper, we pioneer the examination of this crucial yet overlooked topic by utilizing real-world QA conversation transcripts between company managers and professional analysts. We explore the proposed task using three causal knowledge graphs (KGs) and three large language models (LLMs). This work provides foundational insights into the application of LLMs in professional QA scenarios, highlighting the importance of causal KGs and perspective-taking in generating effective responses.<|reference_end|>
|
arxiv
|
@article{shih2024rehearsing,
title={Rehearsing Answers to Probable Questions with Perspective-Taking},
author={Yung-Yu Shih, Ziwei Xu, Hiroya Takamura, Yun-Nung Chen, Chung-Chi Chen},
journal={arXiv preprint arXiv:2409.18678},
year={2024},
archivePrefix={arXiv},
eprint={2409.18678},
primaryClass={cs.CL}
}
|
shih2024rehearsing
|
arxiv-662761
|
2409.18679
|
"Why" Has the Least Side Effect on Model Editing
|
<|reference_start|>"Why" Has the Least Side Effect on Model Editing: Training large language models (LLMs) from scratch is an expensive endeavor, particularly as world knowledge continually evolves. To maintain relevance and accuracy of LLMs, model editing has emerged as a pivotal research area. While these methods hold promise, they can also produce unintended side effects. Their underlying factors and causes remain largely unexplored. This paper delves into a critical factor-question type-by categorizing model editing questions. Our findings reveal that the extent of performance degradation varies significantly across different question types, providing new insights for experimental design in knowledge editing. Furthermore, we investigate whether insights from smaller models can be extrapolated to larger models. Our results indicate discrepancies in findings between models of different sizes, suggesting that insights from smaller models may not necessarily apply to larger models. Additionally, we examine the impact of batch size on side effects, discovering that increasing the batch size can mitigate performance drops.<|reference_end|>
|
arxiv
|
@article{pan2024"why",
title={"Why" Has the Least Side Effect on Model Editing},
author={Tsung-Hsuan Pan, Chung-Chi Chen, Hen-Hsen Huang, Hsin-Hsi Chen},
journal={arXiv preprint arXiv:2409.18679},
year={2024},
archivePrefix={arXiv},
eprint={2409.18679},
primaryClass={cs.CL}
}
|
pan2024"why"
|
arxiv-662762
|
2409.18680
|
Beyond Single-Audio: Advancing Multi-Audio Processing in Audio Large Language Models
|
<|reference_start|>Beyond Single-Audio: Advancing Multi-Audio Processing in Audio Large Language Models: Various audio-LLMs (ALLMs) have been explored recently for tackling different audio tasks simultaneously using a single, unified model. While existing evaluations of ALLMs primarily focus on single-audio tasks, real-world applications often involve processing multiple audio streams simultaneously. To bridge this gap, we propose the first multi-audio evaluation (MAE) benchmark that consists of 20 datasets from 11 multi-audio tasks encompassing both speech and sound scenarios. Comprehensive experiments on MAE demonstrate that the existing ALLMs, while being powerful in comprehending primary audio elements in individual audio inputs, struggle to handle multi-audio scenarios. To this end, we propose a novel multi-audio-LLM (MALLM) to capture audio context among multiple similar audios using discriminative learning on our proposed synthetic data. The results demonstrate that the proposed MALLM outperforms all baselines and achieves high data efficiency using synthetic data without requiring human annotations. The proposed MALLM opens the door for ALLMs towards the multi-audio processing era and brings us closer to replicating human auditory capabilities in machines.<|reference_end|>
|
arxiv
|
@article{chen2024beyond,
title={Beyond Single-Audio: Advancing Multi-Audio Processing in Audio Large
Language Models},
author={Yiming Chen, Xianghu Yue, Xiaoxue Gao, Chen Zhang, Luis Fernando
D'Haro, Robby T. Tan, Haizhou Li},
journal={arXiv preprint arXiv:2409.18680},
year={2024},
archivePrefix={arXiv},
eprint={2409.18680},
primaryClass={cs.SD cs.AI cs.CL cs.MM eess.AS}
}
|
chen2024beyond
|
arxiv-662763
|
2409.18681
|
Pseudometrics for scalable data-driven comparisons of nonlinear dynamical systems
|
<|reference_start|>Pseudometrics for scalable data-driven comparisons of nonlinear dynamical systems: Novel solutions for pseudometrics quantifying deviation from topological conjugacy between dynamical systems are presented. Deviation from conjugacy is quantified in a Pareto optimal sense that accounts for spectral properties of Koopman operators as well as trajectory geometry. Theoretical justification is provided for computing such pseudometrics in Koopman eigenfunction space rather than observable space. Furthermore, it is shown deriving the pseudometrics from unitary transformations is sufficient to recover a value of zero if two systems are topologically conjugate. Therefore the pseudometrics for quantifying deviation from conjugacy are based on analytical solutions for unitary transformations in Koopman eigenfunction space. Finally, geometric considerations for the Pareto optimality problem associated with deviation from conjugacy are used to develop pseudometrics that account for all possible solutions given just two Pareto points based on analytical solutions.<|reference_end|>
|
arxiv
|
@article{glaz2024efficient,
title={Efficient pseudometrics for data-driven comparisons of nonlinear
dynamical systems},
author={Bryan Glaz},
journal={arXiv preprint arXiv:2409.18681},
year={2024},
archivePrefix={arXiv},
eprint={2409.18681},
primaryClass={math.DS cs.SY eess.SY math-ph math.MP}
}
|
glaz2024efficient
|
arxiv-662764
|
2409.18682
|
Exploring DAOS Interfaces and Performance
|
<|reference_start|>Exploring DAOS Interfaces and Performance: Distributed Asynchronous Object Store (DAOS) is a novel software-defined object store leveraging Non-Volatile Memory (NVM) devices, designed for high performance. It provides a number of interfaces for applications to undertake I/O, ranging from a native object storage API to a DAOS FUSE module for seamless compatibility with existing applications using POSIX file system APIs. In this paper we discuss these interfaces and the options they provide, exercise DAOS through them with various I/O benchmarks, and analyse the observed performance. We also briefly compare the performance with a distributed file system and another object storage system deployed on the same hardware, and showcase DAOS' potential and increased flexibility to support high-performance I/O.<|reference_end|>
|
arxiv
|
@article{manubens2024exploring,
title={Exploring DAOS Interfaces and Performance},
author={Nicolau Manubens, Johann Lombardi, Simon D. Smart, Emanuele Danovaro,
Tiago Quintino, Dean Hildebrand, Adrian Jackson},
journal={arXiv preprint arXiv:2409.18682},
year={2024},
archivePrefix={arXiv},
eprint={2409.18682},
primaryClass={cs.DC}
}
|
manubens2024exploring
|
arxiv-662765
|
2409.18685
|
Understanding the Benefits of SimCLR Pre-Training in Two-Layer Convolutional Neural Networks
|
<|reference_start|>Understanding the Benefits of SimCLR Pre-Training in Two-Layer Convolutional Neural Networks: SimCLR is one of the most popular contrastive learning methods for vision tasks. It pre-trains deep neural networks based on a large amount of unlabeled data by teaching the model to distinguish between positive and negative pairs of augmented images. It is believed that SimCLR can pre-train a deep neural network to learn efficient representations that can lead to a better performance of future supervised fine-tuning. Despite its effectiveness, our theoretical understanding of the underlying mechanisms of SimCLR is still limited. In this paper, we theoretically introduce a case study of the SimCLR method. Specifically, we consider training a two-layer convolutional neural network (CNN) to learn a toy image data model. We show that, under certain conditions on the number of labeled data, SimCLR pre-training combined with supervised fine-tuning achieves almost optimal test loss. Notably, the label complexity for SimCLR pre-training is far less demanding compared to direct training on supervised data. Our analysis sheds light on the benefits of SimCLR in learning with fewer labels.<|reference_end|>
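For reference, the SimCLR pre-training objective (the NT-Xent contrastive loss) over a batch of positive pairs is short to state in PyTorch; embedding sizes and temperature are arbitrary here:

    import torch
    import torch.nn.functional as F

    def nt_xent(z1, z2, tau=0.5):
        # z1, z2: (B, D) embeddings of two augmented views of the same images
        B = z1.shape[0]
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
        sim = z @ z.t() / tau                        # scaled cosine similarities
        sim.fill_diagonal_(float("-inf"))            # exclude self-similarity
        targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
        return F.cross_entropy(sim, targets)         # the positive is the paired view

    loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))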
|
arxiv
|
@article{zhang2024understanding,
title={Understanding the Benefits of SimCLR Pre-Training in Two-Layer
Convolutional Neural Networks},
author={Han Zhang and Yuan Cao},
journal={arXiv preprint arXiv:2409.18685},
year={2024},
archivePrefix={arXiv},
eprint={2409.18685},
primaryClass={cs.LG stat.ML}
}
|
zhang2024understanding
|
arxiv-662766
|
2409.18686
|
A Novel Unified Architecture for Low-Shot Counting by Detection and Segmentation
|
<|reference_start|>A Novel Unified Architecture for Low-Shot Counting by Detection and Segmentation: Low-shot object counters estimate the number of objects in an image using few or no annotated exemplars. Objects are localized by matching them to prototypes, which are constructed by unsupervised image-wide object appearance aggregation. Due to potentially diverse object appearances, the existing approaches often lead to overgeneralization and false positive detections. Furthermore, the best-performing methods train object localization by a surrogate loss, that predicts a unit Gaussian at each object center. This loss is sensitive to annotation error, hyperparameters and does not directly optimize the detection task, leading to suboptimal counts. We introduce GeCo, a novel low-shot counter that achieves accurate object detection, segmentation, and count estimation in a unified architecture. GeCo robustly generalizes the prototypes across objects appearances through a novel dense object query formulation. In addition, a novel counting loss is proposed, that directly optimizes the detection task and avoids the issues of the standard surrogate loss. GeCo surpasses the leading few-shot detection-based counters by $\sim$25\% in the total count MAE, achieves superior detection accuracy and sets a new solid state-of-the-art result across all low-shot counting setups.<|reference_end|>
|
arxiv
|
@article{pelhan2024a,
title={A Novel Unified Architecture for Low-Shot Counting by Detection and
Segmentation},
author={Jer Pelhan, Alan Lukežič, Vitjan Zavrtanik, Matej Kristan},
journal={arXiv preprint arXiv:2409.18686},
year={2024},
archivePrefix={arXiv},
eprint={2409.18686},
primaryClass={cs.CV}
}
|
pelhan2024a
|
arxiv-662767
|
2409.18690
|
Less is More: Towards Sustainability-Aware Persuasive Explanations in Recommender Systems
|
<|reference_start|>Less is More: Towards Sustainability-Aware Persuasive Explanations in Recommender Systems: Recommender systems play an important role in supporting the achievement of the United Nations sustainable development goals (SDGs). In recommender systems, explanations can support different goals, such as increasing a user's trust in a recommendation, persuading a user to purchase specific items, or increasing the understanding of the reasons behind a recommendation. In this paper, we discuss the concept of "sustainability-aware persuasive explanations" which we regard as a major concept to support the achievement of the mentioned SDGs. Such explanations are orthogonal to most existing explanation approaches since they focus on a "less is more" principle, which per se is not included in existing e-commerce platforms. Based on a user study in three item domains, we analyze the potential impacts of sustainability-aware persuasive explanations. The study results are promising regarding user acceptance and the potential impacts of such explanations.<|reference_end|>
|
arxiv
|
@article{tran2024less,
title={Less is More: Towards Sustainability-Aware Persuasive Explanations in
Recommender Systems},
author={Thi Ngoc Trang Tran, Seda Polat Erdeniz, Alexander Felfernig,
Sebastian Lubos, Merfat El-Mansi, Viet-Man Le},
journal={arXiv preprint arXiv:2409.18690},
year={2024},
doi={10.1145/3640457.3691708},
archivePrefix={arXiv},
eprint={2409.18690},
primaryClass={cs.IR}
}
|
tran2024less
|
arxiv-662768
|
2409.18692
|
MG-Net: Learn to Customize QAOA with Circuit Depth Awareness
|
<|reference_start|>MG-Net: Learn to Customize QAOA with Circuit Depth Awareness: Quantum Approximate Optimization Algorithm (QAOA) and its variants exhibit immense potential in tackling combinatorial optimization challenges. However, their practical realization confronts a dilemma: the requisite circuit depth for satisfactory performance is problem-specific and often exceeds the maximum capability of current quantum devices. To address this dilemma, here we first analyze the convergence behavior of QAOA, uncovering the origins of this dilemma and elucidating the intricate relationship between the employed mixer Hamiltonian, the specific problem at hand, and the permissible maximum circuit depth. Harnessing this understanding, we introduce the Mixer Generator Network (MG-Net), a unified deep learning framework adept at dynamically formulating optimal mixer Hamiltonians tailored to distinct tasks and circuit depths. Systematic simulations, encompassing Ising models and weighted Max-Cut instances with up to 64 qubits, substantiate our theoretical findings, highlighting MG-Net's superior performance in terms of both approximation ratio and efficiency.<|reference_end|>
|
arxiv
|
@article{qian2024mg-net:,
title={MG-Net: Learn to Customize QAOA with Circuit Depth Awareness},
author={Yang Qian, Xinbiao Wang, Yuxuan Du, Yong Luo, Dacheng Tao},
journal={arXiv preprint arXiv:2409.18692},
year={2024},
archivePrefix={arXiv},
eprint={2409.18692},
primaryClass={quant-ph cs.AI cs.LG}
}
|
qian2024mg-net:
|
arxiv-662769
|
2409.18694
|
Learning from Pattern Completion: Self-supervised Controllable Generation
|
<|reference_start|>Learning from Pattern Completion: Self-supervised Controllable Generation: The human brain exhibits a strong ability to spontaneously associate different visual attributes of the same or similar visual scene, such as associating sketches and graffiti with real-world visual objects, usually without supervising information. In contrast, in the field of artificial intelligence, controllable generation methods like ControlNet heavily rely on annotated training datasets such as depth maps, semantic segmentation maps, and poses, which limits the method's scalability. Inspired by the neural mechanisms that may contribute to the brain's associative power, specifically the cortical modularization and hippocampal pattern completion, here we propose a self-supervised controllable generation (SCG) framework. Firstly, we introduce an equivariant constraint to promote inter-module independence and intra-module correlation in a modular autoencoder network, thereby achieving functional specialization. Subsequently, based on these specialized modules, we employ a self-supervised pattern completion approach for controllable generation training. Experimental results demonstrate that the proposed modular autoencoder effectively achieves functional specialization, including the modular processing of color, brightness, and edge detection, and exhibits brain-like features including orientation selectivity, color antagonism, and center-surround receptive fields. Through self-supervised training, associative generation capabilities spontaneously emerge in SCG, demonstrating excellent generalization ability to various tasks such as associative generation on painting, sketches, and ancient graffiti. Compared to the previous representative method ControlNet, our proposed approach not only demonstrates superior robustness in more challenging high-noise scenarios but also possesses more promising scalability potential due to its self-supervised manner.<|reference_end|>
|
arxiv
|
@article{chen2024learning,
title={Learning from Pattern Completion: Self-supervised Controllable
Generation},
author={Zhiqiang Chen, Guofan Fan, Jinying Gao, Lei Ma, Bo Lei, Tiejun Huang,
Shan Yu},
journal={arXiv preprint arXiv:2409.18694},
year={2024},
archivePrefix={arXiv},
eprint={2409.18694},
primaryClass={cs.CV cs.AI}
}
|
chen2024learning
|
arxiv-662770
|
2409.18695
|
KALE-LM: Unleash The Power Of AI For Science Via Knowledge And Logic Enhanced Large Model
|
<|reference_start|>KALE-LM: Unleash The Power Of AI For Science Via Knowledge And Logic Enhanced Large Model: Artificial intelligence is gradually demonstrating its immense potential, and increasing attention is being given to how AI can be harnessed to advance scientific research. In this vision paper, we present our perspectives on how AI can better assist scientific inquiry and explore the corresponding technical approaches. We have proposed and open-sourced a large model of our KALE-LM model series, Llama3-KALE-LM-Chem-8B, which has achieved outstanding performance in tasks related to the field of chemistry. We hope that our work serves as a strong starting point, helping to realize more intelligent AI and promoting the advancement of human science and technology, as well as societal development.<|reference_end|>
|
arxiv
|
@article{dai2024kale-lm:,
title={KALE-LM: Unleash The Power Of AI For Science Via Knowledge And Logic
Enhanced Large Model},
author={Weichen Dai, Yezeng Chen, Zijie Dai, Zhijie Huang, Yubo Liu, Yixuan
Pan, Baiyang Song, Chengli Zhong, Xinhe Li, Zeyu Wang, Zhuoying Feng, Yi Zhou},
journal={arXiv preprint arXiv:2409.18695},
year={2024},
archivePrefix={arXiv},
eprint={2409.18695},
primaryClass={cs.AI cs.CE cs.CL}
}
|
dai2024kale-lm:
|
arxiv-662771
|
2409.18696
|
Rethinking the Power of Timestamps for Robust Time Series Forecasting: A Global-Local Fusion Perspective
|
<|reference_start|>Rethinking the Power of Timestamps for Robust Time Series Forecasting: A Global-Local Fusion Perspective: Time series forecasting has played a pivotal role across various industries, including finance, transportation, energy, healthcare, and climate. Due to the abundant seasonal information they contain, timestamps possess the potential to offer robust global guidance for forecasting techniques. However, existing works primarily focus on local observations, with timestamps being treated merely as an optional supplement that remains underutilized. When data gathered from the real world is polluted, the absence of global information will damage the robust prediction capability of these algorithms. To address these problems, we propose a novel framework named GLAFF. Within this framework, the timestamps are modeled individually to capture the global dependencies. Working as a plugin, GLAFF adaptively adjusts the combined weights for global and local information, enabling seamless collaboration with any time series forecasting backbone. Extensive experiments conducted on nine real-world datasets demonstrate that GLAFF significantly enhances the average performance of widely used mainstream forecasting models by 12.5%, surpassing the previous state-of-the-art method by 5.5%.<|reference_end|>
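Since GLAFF is described as a plugin that blends a timestamp-driven global forecast with any backbone's local forecast, a hypothetical sketch of that fusion idea follows. The module names, MLP heads, and sigmoid weighting are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    def __init__(self, ts_dim, horizon, hidden=64):
        super().__init__()
        # forecast produced from timestamp features alone (global view)
        self.global_head = nn.Sequential(
            nn.Linear(ts_dim, hidden), nn.ReLU(), nn.Linear(hidden, horizon))
        # adaptive combination weight in (0, 1)
        self.weight_head = nn.Sequential(
            nn.Linear(ts_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, timestamps, local_forecast):
        g = self.global_head(timestamps)      # robust to polluted observations
        w = self.weight_head(timestamps)      # learned global/local trade-off
        return w * g + (1 - w) * local_forecast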
|
arxiv
|
@article{wang2024rethinking,
title={Rethinking the Power of Timestamps for Robust Time Series Forecasting: A
Global-Local Fusion Perspective},
author={Chengsen Wang, Qi Qi, Jingyu Wang, Haifeng Sun, Zirui Zhuang, Jinming
Wu, Jianxin Liao},
journal={arXiv preprint arXiv:2409.18696},
year={2024},
archivePrefix={arXiv},
eprint={2409.18696},
primaryClass={cs.LG}
}
|
wang2024rethinking
|
arxiv-662772
|
2409.18701
|
3DPX: Single Panoramic X-ray Analysis Guided by 3D Oral Structure Reconstruction
|
<|reference_start|>3DPX: Single Panoramic X-ray Analysis Guided by 3D Oral Structure Reconstruction: Panoramic X-ray (PX) is a prevalent modality in dentistry practice owing to its wide availability and low cost. However, as a 2D projection of a 3D structure, PX suffers from anatomical information loss and PX diagnosis is limited compared to that with 3D imaging modalities. 2D-to-3D reconstruction methods have been explored for the ability to synthesize the absent 3D anatomical information from 2D PX for use in PX image analysis. However, there are challenges in leveraging such 3D synthesized reconstructions. First, inferring 3D depth from 2D images remains a challenging task with limited accuracy. The second challenge is the joint analysis of 2D PX with its 3D synthesized counterpart, with the aim to maximize the 2D-3D synergy while minimizing the errors arising from the synthesized image. In this study, we propose a new method termed 3DPX - PX image analysis guided by 2D-to-3D reconstruction, to overcome these challenges. 3DPX consists of (i) a novel progressive reconstruction network to improve 2D-to-3D reconstruction and (ii) a contrastive-guided bidirectional multimodality alignment module for 3D-guided 2D PX classification and segmentation tasks. The reconstruction network progressively reconstructs 3D images with knowledge imposed on the intermediate reconstructions at multiple pyramid levels and incorporates Multilayer Perceptrons to improve semantic understanding. The downstream networks leverage the reconstructed images as 3D anatomical guidance to the PX analysis through feature alignment, which increases the 2D-3D synergy with bidirectional feature projection and decreases the impact of potential errors with contrastive guidance. Extensive experiments on two oral datasets involving 464 studies demonstrate that 3DPX outperforms the state-of-the-art methods in various tasks including 2D-to-3D reconstruction, PX classification and lesion segmentation.<|reference_end|>
|
arxiv
|
@article{li20243dpx:,
title={3DPX: Single Panoramic X-ray Analysis Guided by 3D Oral Structure
Reconstruction},
author={Xiaoshuang Li, Zimo Huang, Mingyuan Meng, Eduardo Delamare, Dagan
Feng, Lei Bi, Bin Sheng, Lingyong Jiang, Bo Li, Jinman Kim},
journal={arXiv preprint arXiv:2409.18701},
year={2024},
archivePrefix={arXiv},
eprint={2409.18701},
primaryClass={eess.IV cs.CV}
}
|
li20243dpx:
|
arxiv-662773
|
2409.18704
|
Semantic Model Component Implementation for Model-driven Semantic Communications
|
<|reference_start|>Semantic Model Component Implementation for Model-driven Semantic Communications: The key feature of model-driven semantic communication is the propagation of the model. The semantic model component (SMC) is designed to drive the intelligent model to transmit in the physical channel, allowing the intelligence to flow through the networks. According to the characteristics of neural networks with common and individual model parameters, this paper designs the cross-source-domain and cross-task semantic component model. Considering that the basic model is deployed on the edge node, the large server node updates the edge node by transmitting only the semantic component model, so that the edge node can handle different sources and different tasks. In addition, this paper also discusses how channel noise affects the performance of the model and proposes noise injection and regularization methods to improve the noise resistance of the model. Experiments show that SMCs use smaller model parameters to achieve cross-source, cross-task functionality while maintaining performance and improving the model's tolerance to noise. Finally, a component transfer-based unmanned vehicle tracking prototype was implemented to verify the feasibility of model components in practical applications.<|reference_end|>
|
arxiv
|
@article{liang2024semantic,
title={Semantic Model Component Implementation for Model-driven Semantic
Communications},
author={Haotai Liang, Mengran Shi, Chen Dong, Xiaodong Xu, Long Liu, Hao Chen},
journal={arXiv preprint arXiv:2409.18704},
year={2024},
archivePrefix={arXiv},
eprint={2409.18704},
primaryClass={cs.AI}
}
|
liang2024semantic
|
arxiv-662774
|
2409.18705
|
Speech Boosting: Low-Latency Live Speech Enhancement for TWS Earbuds
|
<|reference_start|>Speech Boosting: Low-Latency Live Speech Enhancement for TWS Earbuds: This paper introduces a speech enhancement solution tailored for true wireless stereo (TWS) earbuds on-device usage. The solution was specifically designed to support conversations in noisy environments, with active noise cancellation (ANC) activated. The primary challenges for speech enhancement models in this context arise from computational complexity that limits on-device usage and latency that must be less than 3 ms to preserve a live conversation. To address these issues, we evaluated several crucial design elements, including the network architecture and domain, design of loss functions, pruning method, and hardware-specific optimization. Consequently, we demonstrated substantial improvements in speech enhancement quality compared with that in baseline models, while simultaneously reducing the computational complexity and algorithmic latency.<|reference_end|>
|
arxiv
|
@article{bae2024speech,
title={Speech Boosting: Low-Latency Live Speech Enhancement for TWS Earbuds},
author={Hanbin Bae, Pavel Andreev, Azat Saginbaev, Nicholas Babaev, Won-Jun
Lee, Hosang Sung, Hoon-Young Cho},
journal={arXiv preprint arXiv:2409.18705},
year={2024},
doi={10.21437/Interspeech.2024-1444},
archivePrefix={arXiv},
eprint={2409.18705},
primaryClass={eess.AS cs.AI cs.SD eess.SP}
}
|
bae2024speech
|
arxiv-662775
|
2409.18707
|
Discrete Policy: Learning Disentangled Action Space for Multi-Task Robotic Manipulation
|
<|reference_start|>Discrete Policy: Learning Disentangled Action Space for Multi-Task Robotic Manipulation: Learning visuomotor policy for multi-task robotic manipulation has been a long-standing challenge for the robotics community. The difficulty lies in the diversity of action space: typically, a goal can be accomplished in multiple ways, resulting in a multimodal action distribution for a single task. The complexity of action distribution escalates as the number of tasks increases. In this work, we propose \textbf{Discrete Policy}, a robot learning method for training universal agents capable of multi-task manipulation skills. Discrete Policy employs vector quantization to map action sequences into a discrete latent space, facilitating the learning of task-specific codes. These codes are then reconstructed into the action space conditioned on observations and language instruction. We evaluate our method on both simulation and multiple real-world embodiments, including both single-arm and bimanual robot settings. We demonstrate that our proposed Discrete Policy outperforms a well-established Diffusion Policy baseline and many state-of-the-art approaches, including ACT, Octo, and OpenVLA. For example, in a real-world multi-task training setting with five tasks, Discrete Policy achieves an average success rate that is 26\% higher than Diffusion Policy and 15\% higher than OpenVLA. As the number of tasks increases to 12, the performance gap between Discrete Policy and Diffusion Policy widens to 32.5\%, further showcasing the advantages of our approach. Our work empirically demonstrates that learning multi-task policies within the latent space is a vital step toward achieving general-purpose agents.<|reference_end|>
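The core operation the abstract describes, mapping action latents into a discrete space, can be sketched as a generic vector-quantization step: nearest-codebook lookup with a straight-through gradient. This is a standard VQ building block under assumed shapes, not Discrete Policy's actual implementation.

import torch

def quantize(z, codebook):
    # z: B x d latent action codes; codebook: K x d learned task codes
    d = (z.pow(2).sum(1, keepdim=True)
         - 2 * z @ codebook.t()
         + codebook.pow(2).sum(1))            # squared distances, B x K
    idx = d.argmin(dim=1)                     # nearest code per latent
    z_q = codebook[idx]
    # straight-through estimator: gradients flow to z as if quantization were identity
    return z + (z_q - z).detach(), idx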
|
arxiv
|
@article{wu2024discrete,
title={Discrete Policy: Learning Disentangled Action Space for Multi-Task
Robotic Manipulation},
author={Kun Wu, Yichen Zhu, Jinming Li, Junjie Wen, Ning Liu, Zhiyuan Xu,
Qinru Qiu, and Jian Tang},
journal={arXiv preprint arXiv:2409.18707},
year={2024},
archivePrefix={arXiv},
eprint={2409.18707},
primaryClass={cs.RO}
}
|
wu2024discrete
|
arxiv-662776
|
2409.18708
|
Read Over the Lines: Attacking LLMs and Toxicity Detection Systems with ASCII Art to Mask Profanity
|
<|reference_start|>Read Over the Lines: Attacking LLMs and Toxicity Detection Systems with ASCII Art to Mask Profanity: We introduce a novel family of adversarial attacks that exploit the inability of language models to interpret ASCII art. To evaluate these attacks, we propose the ToxASCII benchmark and develop two custom ASCII art fonts: one leveraging special tokens and another using text-filled letter shapes. Our attacks achieve a perfect 1.0 Attack Success Rate across ten models, including OpenAI's o1-preview and LLaMA 3.1. Warning: this paper contains examples of toxic language used for research purposes.<|reference_end|>
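To make the attack surface concrete with a harmless word: rendering text as ASCII art yields input whose characters no longer tokenize as the original string. The snippet below uses the third-party pyfiglet package as an assumed stand-in; the paper develops its own custom fonts.

import pyfiglet

# multi-line art; a tokenizer sees slashes and underscores, not the word "hello"
print(pyfiglet.figlet_format("hello"))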
|
arxiv
|
@article{berezin2024read,
title={Read Over the Lines: Attacking LLMs and Toxicity Detection Systems with
ASCII Art to Mask Profanity},
author={Sergey Berezin, Reza Farahbakhsh, Noel Crespi},
journal={arXiv preprint arXiv:2409.18708},
year={2024},
archivePrefix={arXiv},
eprint={2409.18708},
primaryClass={cs.CL cs.AI cs.CR}
}
|
berezin2024read
|
arxiv-662777
|
2409.18709
|
Interaction Equivalence
|
<|reference_start|>Interaction Equivalence: Contextual equivalence is the de facto standard notion of program equivalence. A key theorem is that contextual equivalence is an equational theory. Making contextual equivalence more intensional, for example taking into account the time cost of the computation, seems a natural refinement. Such a change, however, does not induce an equational theory, for an apparently essential reason: cost is not invariant under reduction. In the paradigmatic case of the untyped $\lambda$-calculus, we introduce interaction equivalence. Inspired by game semantics, we observe the number of interaction steps between terms and contexts but -- crucially -- ignore their own internal steps. We prove that interaction equivalence is an equational theory and we characterize it as $B$, the well-known theory induced by Böhm tree equality. Ours is the first observational characterization of $B$ obtained without enriching the discriminating power of contexts with extra features such as non-determinism. To prove our results, we develop interaction-based refinements of the Böhm-out technique and of intersection types.<|reference_end|>
|
arxiv
|
@article{accattoli2024interaction,
title={Interaction Equivalence},
author={Beniamino Accattoli, Adrienne Lancelot, Giulio Manzonetto, Gabriele
Vanoni},
journal={arXiv preprint arXiv:2409.18709},
year={2024},
archivePrefix={arXiv},
eprint={2409.18709},
primaryClass={cs.LO cs.PL}
}
|
accattoli2024interaction
|
arxiv-662778
|
2409.18713
|
Decoding Complexity-Rate-Quality Pareto-Front for Adaptive VVC Streaming
|
<|reference_start|>Decoding Complexity-Rate-Quality Pareto-Front for Adaptive VVC Streaming: Pareto-front optimization is crucial for addressing the multi-objective challenges in video streaming, enabling the identification of optimal trade-offs between conflicting goals such as bitrate, video quality, and decoding complexity. This paper explores the construction of efficient bitrate ladders for adaptive Versatile Video Coding (VVC) streaming, focusing on optimizing these trade-offs. We investigate various ladder construction methods based on Pareto-front optimization, including exhaustive Rate-Quality and fixed ladder approaches. We propose a joint decoding time-rate-quality Pareto-front, providing a comprehensive framework to balance bitrate, decoding time, and video quality in video streaming. This allows streaming services to tailor their encoding strategies to meet specific requirements, prioritizing low decoding latency, bandwidth efficiency, or a balanced approach, thus enhancing the overall user experience. The experimental results confirm and demonstrate these opportunities for navigating the decoding time-rate-quality space to support various use cases. For example, when prioritizing low decoding latency, the proposed method achieves decoding time reduction of 14.86% while providing Bjontegaard delta rate savings of 4.65% and 0.32dB improvement in the eXtended Peak Signal-to-Noise Ratio (XPSNR)-Rate domain over the traditional fixed ladder solution.<|reference_end|>
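As an illustration of the kind of Pareto-front construction described above, here is a small sketch that filters dominated operating points over (bitrate, decoding time, negated quality) triples, all to be minimized. The tuples are made-up numbers, not results from the paper.

def pareto_front(points):
    # keep p unless some other point q is at least as good everywhere
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# (kbps, ms/frame, -XPSNR): the third encoding is dominated by the second
encodings = [(1000, 20.0, -38.1), (1500, 18.5, -39.0), (1500, 22.0, -38.5)]
print(pareto_front(encodings))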
|
arxiv
|
@article{katsenou2024decoding,
title={Decoding Complexity-Rate-Quality Pareto-Front for Adaptive VVC Streaming},
author={Angeliki Katsenou, Vignesh V Menon, Adam Wieckowski, Benjamin Bross,
and Detlev Marpe},
journal={arXiv preprint arXiv:2409.18713},
year={2024},
archivePrefix={arXiv},
eprint={2409.18713},
primaryClass={eess.IV cs.MM}
}
|
katsenou2024decoding
|
arxiv-662779
|
2409.18715
|
Multi-modal Medical Image Fusion For Non-Small Cell Lung Cancer Classification
|
<|reference_start|>Multi-modal Medical Image Fusion For Non-Small Cell Lung Cancer Classification: The early detection and nuanced subtype classification of non-small cell lung cancer (NSCLC), a predominant cause of cancer mortality worldwide, is a critical and complex issue. In this paper, we introduce an innovative integration of multi-modal data, synthesizing fused medical imaging (CT and PET scans) with clinical health records and genomic data. This unique fusion methodology leverages advanced machine learning models, notably MedClip and BEiT, for sophisticated image feature extraction, setting a new standard in computational oncology. Our research surpasses existing approaches, as evidenced by a substantial enhancement in NSCLC detection and classification precision. The results showcase notable improvements across key performance metrics, including accuracy, precision, recall, and F1-score. Specifically, our leading multi-modal classifier model records an impressive accuracy of 94.04%. We believe that our approach has the potential to transform NSCLC diagnostics, facilitating earlier detection and more effective treatment planning and, ultimately, leading to superior patient outcomes in lung cancer care.<|reference_end|>
|
arxiv
|
@article{hassan2024multi-modal,
title={Multi-modal Medical Image Fusion For Non-Small Cell Lung Cancer
Classification},
author={Salma Hassan, Hamad Al Hammadi, Ibrahim Mohammed, Muhammad Haris Khan},
journal={arXiv preprint arXiv:2409.18715},
year={2024},
doi={10.1109/ICIP51287.2024.10648275},
archivePrefix={arXiv},
eprint={2409.18715},
primaryClass={eess.IV cs.AI cs.CV}
}
|
hassan2024multi-modal
|
arxiv-662780
|
2409.18717
|
Improved Hardness Results for the Clearing Problem in Financial Networks with Credit Default Swaps
|
<|reference_start|>Improved Hardness Results for the Clearing Problem in Financial Networks with Credit Default Swaps: We study computational problems in financial networks of banks connected by debt contracts and credit default swaps (CDSs). A main problem is to determine \emph{clearing} payments, for instance right after some banks have been exposed to a financial shock. Previous works have shown the $\varepsilon$-approximate version of the problem to be $\mathrm{PPAD}$-complete and the exact problem $\mathrm{FIXP}$-complete. We show that $\mathrm{PPAD}$-hardness holds when $\varepsilon \approx 0.101$, improving the previously best bound significantly. Due to the fact that the clearing problem typically does not have a unique solution, or that it may not have a solution at all in the presence of default costs, several natural decision problems are also of great interest. We show two such problems to be $\exists\mathbb{R}$-complete, complementing previous $\mathrm{NP}$-hardness results for the approximate setting.<|reference_end|>
|
arxiv
|
@article{dohn2024improved,
title={Improved Hardness Results for the Clearing Problem in Financial Networks
with Credit Default Swaps},
author={Simon Dohn, Kristoffer Arnsfelt Hansen, Asger Klinkby},
journal={arXiv preprint arXiv:2409.18717},
year={2024},
archivePrefix={arXiv},
eprint={2409.18717},
primaryClass={cs.GT q-fin.RM}
}
|
dohn2024improved
|
arxiv-662781
|
2409.18718
|
Enhancing Spectrum Efficiency in 6G Satellite Networks: A GAIL-Powered Policy Learning via Asynchronous Federated Inverse Reinforcement Learning
|
<|reference_start|>Enhancing Spectrum Efficiency in 6G Satellite Networks: A GAIL-Powered Policy Learning via Asynchronous Federated Inverse Reinforcement Learning: In this paper, a novel generative adversarial imitation learning (GAIL)-powered policy learning approach is proposed for optimizing beamforming, spectrum allocation, and remote user equipment (RUE) association in non-terrestrial networks (NTNs). Traditional reinforcement learning (RL) methods for wireless network optimization often rely on manually designed reward functions, which can require extensive parameter tuning. To overcome these limitations, we employ inverse RL (IRL), specifically leveraging the GAIL framework, to automatically learn reward functions without manual design. We augment this framework with an asynchronous federated learning approach, enabling decentralized multi-satellite systems to collaboratively derive optimal policies. The proposed method aims to maximize spectrum efficiency (SE) while meeting minimum information rate requirements for RUEs. To address the non-convex, NP-hard nature of this problem, we combine the many-to-one matching theory with a multi-agent asynchronous federated IRL (MA-AFIRL) framework. This allows agents to learn through asynchronous environmental interactions, improving training efficiency and scalability. The expert policy is generated using the Whale optimization algorithm (WOA), providing data to train the automatic reward function within GAIL. Simulation results show that the proposed MA-AFIRL method outperforms traditional RL approaches, achieving a $14.6\%$ improvement in convergence and reward value. The GAIL-driven policy learning establishes a novel benchmark for 6G NTN optimization.<|reference_end|>
|
arxiv
|
@article{hassan2024enhancing,
title={Enhancing Spectrum Efficiency in 6G Satellite Networks: A GAIL-Powered
Policy Learning via Asynchronous Federated Inverse Reinforcement Learning},
author={Sheikh Salman Hassan, Yu Min Park, Yan Kyaw Tun, Walid Saad, Zhu Han,
Choong Seon Hong},
journal={arXiv preprint arXiv:2409.18718},
year={2024},
archivePrefix={arXiv},
eprint={2409.18718},
primaryClass={cs.NI cs.LG}
}
|
hassan2024enhancing
|
arxiv-662782
|
2409.18721
|
Scalable Cross-Entropy Loss for Sequential Recommendations with Large Item Catalogs
|
<|reference_start|>Scalable Cross-Entropy Loss for Sequential Recommendations with Large Item Catalogs: The scalability issue plays a crucial role in productionizing modern recommender systems. Even lightweight architectures may suffer from high computational overload due to intermediate calculations, limiting their practicality in real-world applications. Specifically, applying full Cross-Entropy (CE) loss often yields state-of-the-art performance in terms of recommendation quality. Still, it suffers from excessive GPU memory utilization when dealing with large item catalogs. This paper introduces a novel Scalable Cross-Entropy (SCE) loss function in the sequential learning setup. It approximates the CE loss for datasets with large-size catalogs, enhancing both time efficiency and memory usage without compromising recommendation quality. Unlike traditional negative sampling methods, our approach utilizes a selective GPU-efficient computation strategy, focusing on the most informative elements of the catalog, particularly those most likely to be false positives. This is achieved by approximating the softmax distribution over a subset of the model outputs through the maximum inner product search. Experimental results on multiple datasets demonstrate the effectiveness of SCE in reducing peak memory usage by a factor of up to 100 compared to the alternatives, retaining or even exceeding their metric values. The proposed approach also opens new perspectives for large-scale developments in different domains, such as large language models.<|reference_end|>
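A hedged sketch of the idea behind SCE: score only the target item plus a subset of "hard" items that look like false positives, instead of the full catalog. Here the hard set comes from an exact top-k over full scores for clarity; the paper instead uses a GPU-efficient approximate maximum inner product search precisely to avoid materializing the full score matrix, and its sampling procedure differs in detail.

import torch
import torch.nn.functional as F

def sce_like_loss(h, item_emb, target, n_candidates=512):
    # h: B x d user states; item_emb: V x d catalog (V >= n_candidates); target: B item ids
    scores = h @ item_emb.t()                        # illustrative only: MIPS would avoid this
    hard = scores.topk(n_candidates, dim=1).indices  # most likely false positives
    cand = torch.cat([target.unsqueeze(1), hard], dim=1)   # B x (1 + C)
    logits = torch.gather(scores, 1, cand)
    # target sits at column 0; a real implementation would mask duplicates of it
    return F.cross_entropy(logits, torch.zeros_like(target))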
|
arxiv
|
@article{mezentsev2024scalable,
title={Scalable Cross-Entropy Loss for Sequential Recommendations with Large
Item Catalogs},
author={Gleb Mezentsev, Danil Gusak, Ivan Oseledets, Evgeny Frolov},
journal={arXiv preprint arXiv:2409.18721},
year={2024},
doi={10.1145/3640457.3688140},
archivePrefix={arXiv},
eprint={2409.18721},
primaryClass={cs.IR cs.LG}
}
|
mezentsev2024scalable
|
arxiv-662783
|
2409.18724
|
Cross-Domain Keyword Extraction with Keyness Patterns
|
<|reference_start|>Cross-Domain Keyword Extraction with Keyness Patterns: Domain dependence and annotation subjectivity pose challenges for supervised keyword extraction. Based on the premises that second-order keyness patterns exist at the community level and are learnable from annotated keyword extraction datasets, this paper proposes a supervised ranking approach to keyword extraction that ranks keywords with keyness patterns consisting of independent features (such as sublanguage domain and term length) and three categories of dependent features -- heuristic features, specificity features, and representativity features. The approach uses two convolutional-neural-network based models to learn keyness patterns from keyword datasets and overcomes annotation subjectivity by training the two models with a bootstrap sampling strategy. Experiments demonstrate that the approach not only achieves state-of-the-art performance on ten keyword datasets in general supervised keyword extraction, with an average top-10 F-measure of 0.316, but also robust cross-domain performance, with an average top-10 F-measure of 0.346 on four datasets that are excluded from the training process. Such cross-domain robustness is attributed to the fact that community-level keyness patterns are limited in number and moderately independent of language domains, the distinction between independent features and dependent features, and the sampling training strategy that balances excess risk and lack of negative training data.<|reference_end|>
|
arxiv
|
@article{zhou2024cross-domain,
title={Cross-Domain Keyword Extraction with Keyness Patterns},
author={Dongmei Zhou, Xuri Tang},
journal={arXiv preprint arXiv:2409.18724},
year={2024},
archivePrefix={arXiv},
eprint={2409.18724},
primaryClass={cs.IR cs.CL cs.NE}
}
|
zhou2024cross-domain
|
arxiv-662784
|
2409.18725
|
Electro-Mechanical Contact Interactions Between Human Finger and Touchscreen Under Electroadhesion
|
<|reference_start|>Electro-Mechanical Contact Interactions Between Human Finger and Touchscreen Under Electroadhesion: Electroadhesion (EA) has potential in robotics, automation, space missions, textiles, and tactile displays, but its physics remains underexplored due to limited models and experimental data. This thesis develops an electro-mechanical model to estimate electrostatic forces between the human finger and a touchscreen under EA and compares them to experimentally measured friction forces. The model aligns well with the data, showing that the electrostatic force changes mainly due to charge leakage from the Stratum Corneum at frequencies below 250 Hz and due to its electrical properties above 250 Hz. Additionally, a novel approach using electrical impedance measurements estimates electrostatic forces by subtracting skin and touchscreen impedances from the total impedance. This method is the first to experimentally estimate the average air gap between the finger and a voltage-induced capacitive touchscreen. The effect of electrode polarization impedance, particularly at low frequencies, was also studied, revealing its role in the charge leakage phenomenon. Tactile perception via EA was investigated using DC and AC voltage signals on a touchscreen with 10 participants of varying finger moisture levels. Results showed that AC voltage detection thresholds were significantly lower than those for DC, explained by charge leakage at lower frequencies. Participants with moist fingers exhibited higher threshold levels, supported by impedance measurements. The thesis also investigated how touchscreen top coatings influence tactile perception, focusing on EA-free interactions. Psychophysical experiments and physical measurements demonstrated that coating materials significantly affect tactile perception, likely due to molecular interactions. These findings offer insights into finger-touchscreen interactions under EA and have potential applications in designing robotic systems and haptic interfaces using this technology.<|reference_end|>
|
arxiv
|
@article{aliabbasi2024electro-mechanical,
title={Electro-Mechanical Contact Interactions Between Human Finger and
Touchscreen Under Electroadhesion},
author={Easa AliAbbasi},
journal={arXiv preprint arXiv:2409.18725},
year={2024},
archivePrefix={arXiv},
eprint={2409.18725},
primaryClass={cs.HC}
}
|
aliabbasi2024electro-mechanical
|
arxiv-662785
|
2409.18730
|
Effectiveness of learning-based image codecs on fingerprint storage
|
<|reference_start|>Effectiveness of learning-based image codecs on fingerprint storage: The success of learning-based coding techniques and the development of learning-based image coding standards, such as JPEG-AI, point towards the adoption of such solutions in different fields, including the storage of biometric data, like fingerprints. However, the peculiar nature of learning-based compression artifacts poses several issues concerning their impact and effectiveness on extracting biometric features and landmarks, e.g., minutiae. This problem is further stressed by the fact that most models are trained on natural color images, whose characteristics are very different from usual biometric images, e.g., fingerprint or iris pictures. These issues therefore deserve careful investigation, as such analysis is still largely unexplored. This study represents the first investigation of the adaptability of learning-based image codecs to the storage of fingerprint images, measuring their impact on the extraction and characterization of minutiae. Experimental results show that at a fixed rate point, learned solutions considerably outperform previous fingerprint coding standards, like JPEG2000, both in terms of distortion and minutiae preservation. Indeed, experimental results prove that the peculiarities of learned compression artifacts do not prevent automatic fingerprint identification (since minutiae types and locations are not significantly altered), nor do they compromise image quality for human visual inspection (with gains of 47.8% in BD-rate and +3.97 dB in PSNR, respectively).<|reference_end|>
|
arxiv
|
@article{mari2024effectiveness,
title={Effectiveness of learning-based image codecs on fingerprint storage},
author={Daniele Mari, Saverio Cavasin, Simone Milani, and Mauro Conti},
journal={arXiv preprint arXiv:2409.18730},
year={2024},
archivePrefix={arXiv},
eprint={2409.18730},
primaryClass={eess.IV cs.CV}
}
|
mari2024effectiveness
|
arxiv-662786
|
2409.18731
|
A Generalized Tensor Formulation for Hyperspectral Image Super-Resolution Under General Spatial Blurring
|
<|reference_start|>A Generalized Tensor Formulation for Hyperspectral Image Super-Resolution Under General Spatial Blurring: Hyperspectral super-resolution is commonly accomplished by the fusing of a hyperspectral imaging of low spatial resolution with a multispectral image of high spatial resolution, and many tensor-based approaches to this task have been recently proposed. Yet, it is assumed in such tensor-based methods that the spatial-blurring operation that creates the observed hyperspectral image from the desired super-resolved image is separable into independent horizontal and vertical blurring. Recent work has argued that such separable spatial degradation is ill-equipped to model the operation of real sensors which may exhibit, for example, anisotropic blurring. To accommodate this fact, a generalized tensor formulation based on a Kronecker decomposition is proposed to handle any general spatial-degradation matrix, including those that are not separable as previously assumed. Analysis of the generalized formulation reveals conditions under which exact recovery of the desired super-resolved image is guaranteed, and a practical algorithm for such recovery, driven by a blockwise-group-sparsity regularization, is proposed. Extensive experimental results demonstrate that the proposed generalized tensor approach outperforms not only traditional matrix-based techniques but also state-of-the-art tensor-based methods; the gains with respect to the latter are especially significant in cases of anisotropic spatial blurring.<|reference_end|>
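A toy sketch of the generalized degradation model described above: a spatial blur represented as a sum of Kronecker products, which reduces to the classic separable assumption when a single term is used. Shapes and matrices are made up for illustration; the paper's algorithm and recovery conditions are not reproduced here.

import numpy as np

def apply_blur(x, factors):
    # x: vectorized band of length H*W; factors: list of (A_r, B_r) pairs,
    # where A_r acts on one spatial axis and B_r on the other (vec identity)
    return sum(np.kron(A, B) @ x for A, B in factors)

H = W = 8
x = np.random.rand(H * W)
A1, B1 = np.random.rand(4, H), np.random.rand(4, W)
A2, B2 = np.random.rand(4, H), np.random.rand(4, W)
y_sep = apply_blur(x, [(A1, B1)])             # R = 1: separable blurring
y_gen = apply_blur(x, [(A1, B1), (A2, B2)])   # R = 2: non-separable (e.g., anisotropic)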
|
arxiv
|
@article{wang2024a,
title={A Generalized Tensor Formulation for Hyperspectral Image
Super-Resolution Under General Spatial Blurring},
author={Yinjian Wang, Wei Li, Yuanyuan Gui, Qian Du and James E. Fowler},
journal={arXiv preprint arXiv:2409.18731},
year={2024},
archivePrefix={arXiv},
eprint={2409.18731},
primaryClass={eess.IV cs.CV}
}
|
wang2024a
|
arxiv-662787
|
2409.18732
|
Verification of Quantitative Temporal Properties in RealTime-DEVS
|
<|reference_start|>Verification of Quantitative Temporal Properties in RealTime-DEVS: Real-Time DEVS (RT-DEVS) can model systems with quantitative temporal requirements. Ensuring that such models verify some temporal properties requires going beyond simulation. In this work we first use the model checker Uppaal to verify a class of recurrent quantitative temporal properties appearing in RT-DEVS models. Second, by introducing mutations to quantitative temporal properties we are able to find errors in RT-DEVS models and their implementations. A case study from the railway domain is presented.<|reference_end|>
|
arxiv
|
@article{gonzález2024verification,
title={Verification of Quantitative Temporal Properties in RealTime-DEVS},
author={Ariel González, Maximiliano Cristiá, Carlos Luna},
journal={arXiv preprint arXiv:2409.18732},
year={2024},
archivePrefix={arXiv},
eprint={2409.18732},
primaryClass={cs.SE}
}
|
gonzález2024verification
|
arxiv-662788
|
2409.18733
|
Search and Detect: Training-Free Long Tail Object Detection via Web-Image Retrieval
|
<|reference_start|>Search and Detect: Training-Free Long Tail Object Detection via Web-Image Retrieval: In this paper, we introduce SearchDet, a training-free long-tail object detection framework that significantly enhances open-vocabulary object detection performance. SearchDet retrieves a set of positive and negative images of an object to ground, embeds these images, and computes an input image-weighted query which is used to detect the desired concept in the image. Our proposed method is simple and training-free, yet achieves over 48.7% mAP improvement on ODinW and 59.1% mAP improvement on LVIS compared to state-of-the-art models such as GroundingDINO. We further show that our approach of basing object detection on a set of Web-retrieved exemplars is stable with respect to variations in the exemplars, suggesting a path towards eliminating costly data annotation and training procedures.<|reference_end|>
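A minimal sketch of the retrieval-weighted query idea the abstract describes: embed Web-retrieved positive and negative exemplars, weight them by similarity to the input image, and form a single grounding query. Function names and the exact weighting are illustrative assumptions, not SearchDet's implementation.

import numpy as np

def build_query(img_feat, pos_feats, neg_feats):
    # all inputs are L2-normalized embedding vectors / row-stacked matrices
    w_pos = pos_feats @ img_feat                 # image-conditioned exemplar weights
    w_neg = neg_feats @ img_feat
    pos = (w_pos[:, None] * pos_feats).sum(0)
    neg = (w_neg[:, None] * neg_feats).sum(0)
    q = pos - neg                                # emphasize the concept, suppress confusers
    return q / np.linalg.norm(q)                 # query to match against region features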
|
arxiv
|
@article{sidhu2024search,
title={Search and Detect: Training-Free Long Tail Object Detection via
Web-Image Retrieval},
author={Mankeerat Sidhu, Hetarth Chopra, Ansel Blume, Jeonghwan Kim, Revanth
Gangi Reddy, Heng Ji},
journal={arXiv preprint arXiv:2409.18733},
year={2024},
archivePrefix={arXiv},
eprint={2409.18733},
primaryClass={cs.CV}
}
|
sidhu2024search
|
arxiv-662789
|
2409.18734
|
On Adaptive Frequency Sampling for Data-driven MOR Applied to Antenna Responses
|
<|reference_start|>On Adaptive Frequency Sampling for Data-driven MOR Applied to Antenna Responses: Frequency domain sweeps of array antennas are well-known to be time-intensive, and different surrogate models have been used to improve the performance. Data-driven model order reduction algorithms, such as the Loewner framework and vector fitting, can be integrated with these adaptive error estimates, in an iterative algorithm, to reduce the number of full-wave simulations required to accurately capture the requested frequency behavior of multiport array antennas. In this work, we propose two novel adaptive methods exploiting a block matrix function which is a key part of the Loewner framework generating system approach. The first algorithm leverages an inherent matrix parameter freedom in the block matrix function to identify frequency points with large errors, whereas the second utilizes the condition number of the block matrix function. Both methods effectively provide frequency domain error estimates, essential for improved performance. Numerical experiments on multiport array antenna S-parameters demonstrate the effectiveness of our proposed algorithms within the Loewner framework.<|reference_end|>
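For context, an illustrative construction of the Loewner matrix at the heart of the framework: partition the frequency samples into two sets and form L_ij = (H(mu_i) - H(lam_j)) / (mu_i - lam_j). The toy one-port response below is an assumption for demonstration; the paper works with multiport S-parameters and a block matrix function.

import numpy as np

def loewner(mu, H_mu, lam, H_lam):
    return (H_mu[:, None] - H_lam[None, :]) / (mu[:, None] - lam[None, :])

s = 1j * np.linspace(1.0, 10.0, 8)            # sampled frequencies
H = 1.0 / (s + 2.0)                           # toy transfer-function samples
L = loewner(s[::2], H[::2], s[1::2], H[1::2]) # 4 x 4 Loewner matrix from interleaved points
print(np.linalg.cond(L))                      # conditioning, as exploited by the second method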
|
arxiv
|
@article{åkerstedt2024on,
title={On Adaptive Frequency Sampling for Data-driven MOR Applied to Antenna
Responses},
author={Lucas {AA}kerstedt, Darwin Blanco, and B. L. G. Jonsson},
journal={arXiv preprint arXiv:2409.18734},
year={2024},
archivePrefix={arXiv},
eprint={2409.18734},
primaryClass={eess.SY cs.SY physics.comp-ph}
}
|
åkerstedt2024on
|
arxiv-662790
|
2409.18735
|
Autoregressive Policy Optimization for Constrained Allocation Tasks
|
<|reference_start|>Autoregressive Policy Optimization for Constrained Allocation Tasks: Allocation tasks represent a class of problems where a limited amount of resources must be allocated to a set of entities at each time step. Prominent examples of this task include portfolio optimization or distributing computational workloads across servers. Allocation tasks are typically bound by linear constraints describing practical requirements that have to be strictly fulfilled at all times. In portfolio optimization, for example, investors may be obligated to allocate less than 30\% of the funds into a certain industrial sector in any investment period. Such constraints restrict the action space of allowed allocations in intricate ways, which makes learning a policy that avoids constraint violations difficult. In this paper, we propose a new method for constrained allocation tasks based on an autoregressive process to sequentially sample allocations for each entity. In addition, we introduce a novel de-biasing mechanism to counter the initial bias caused by sequential sampling. We demonstrate the superior performance of our approach compared to a variety of Constrained Reinforcement Learning (CRL) methods on three distinct constrained allocation tasks: portfolio optimization, computational workload distribution, and a synthetic allocation benchmark. Our code is available at: https://github.com/niklasdbs/paspo<|reference_end|>
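A toy sketch of the sequential sampling scheme described above: each entity's fraction is drawn conditioned on what remains, so a per-entity cap (the 30% example) and the unit budget hold by construction. In the paper, a learned autoregressive policy parameterizes each conditional and a de-biasing mechanism counters the ordering effects; the uniform draw here is a placeholder.

import numpy as np

def sample_allocation(n_entities, cap=0.3, seed=0):
    rng = np.random.default_rng(seed)
    alloc, remaining = [], 1.0
    for _ in range(n_entities):
        # a policy head would output this conditional; uniform is illustrative
        a = rng.uniform(0.0, min(cap, remaining))
        alloc.append(a)
        remaining -= a
    return np.array(alloc)      # every entity respects the cap; total never exceeds 1

print(sample_allocation(5))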
|
arxiv
|
@article{winkel2024autoregressive,
title={Autoregressive Policy Optimization for Constrained Allocation Tasks},
author={David Winkel, Niklas Strauß, Maximilian Bernhard, Zongyue Li,
Thomas Seidl, Matthias Schubert},
journal={arXiv preprint arXiv:2409.18735},
year={2024},
archivePrefix={arXiv},
eprint={2409.18735},
primaryClass={cs.AI cs.LG}
}
|
winkel2024autoregressive
|
arxiv-662791
|
2409.18736
|
Adversarial Challenges in Network Intrusion Detection Systems: Research Insights and Future Prospects
|
<|reference_start|>Adversarial Challenges in Network Intrusion Detection Systems: Research Insights and Future Prospects: Machine learning has brought significant advances in cybersecurity, particularly in the development of Intrusion Detection Systems (IDS). These improvements are mainly attributed to the ability of machine learning algorithms to identify complex relationships between features and effectively generalize to unseen data. Deep neural networks, in particular, contributed to this progress by enabling the analysis of large amounts of training data, significantly enhancing detection performance. However, machine learning models remain vulnerable to adversarial attacks, where carefully crafted input data can mislead the model into making incorrect predictions. While adversarial threats in unstructured data, such as images and text, have been extensively studied, their impact on structured data like network traffic is less explored. This survey aims to address this gap by providing a comprehensive review of machine learning-based Network Intrusion Detection Systems (NIDS) and thoroughly analyzing their susceptibility to adversarial attacks. We critically examine existing research in NIDS, highlighting key trends, strengths, and limitations, while identifying areas that require further exploration. Additionally, we discuss emerging challenges in the field and offer insights for the development of more robust and resilient NIDS. In summary, this paper enhances the understanding of adversarial attacks and defenses in NIDS and guides future research in improving the robustness of machine learning models in cybersecurity applications.<|reference_end|>
|
arxiv
|
@article{ennaji2024adversarial,
title={Adversarial Challenges in Network Intrusion Detection Systems: Research
Insights and Future Prospects},
author={Sabrine Ennaji, Fabio De Gaspari, Dorjan Hitaj, Alicia Kbidi, Luigi V.
Mancini},
journal={arXiv preprint arXiv:2409.18736},
year={2024},
archivePrefix={arXiv},
eprint={2409.18736},
primaryClass={cs.CR cs.ET cs.NI}
}
|
ennaji2024adversarial
|
arxiv-662792
|
2409.18737
|
MemFusionMap: Working Memory Fusion for Online Vectorized HD Map Construction
|
<|reference_start|>MemFusionMap: Working Memory Fusion for Online Vectorized HD Map Construction: High-definition (HD) maps provide environmental information for autonomous driving systems and are essential for safe planning. While existing methods with single-frame input achieve impressive performance for online vectorized HD map construction, they still struggle with complex scenarios and occlusions. We propose MemFusionMap, a novel temporal fusion model with enhanced temporal reasoning capabilities for online HD map construction. Specifically, we contribute a working memory fusion module that improves the model's memory capacity to reason across history frames. We also design a novel temporal overlap heatmap to explicitly inform the model about the temporal overlap information and vehicle trajectory in the Bird's Eye View space. By integrating these two designs, MemFusionMap significantly outperforms existing methods while also maintaining a versatile design for scalability. We conduct extensive evaluation on open-source benchmarks and demonstrate a maximum improvement of 5.4% in mAP over state-of-the-art methods. The code for MemFusionMap will be made open-source upon publication of this paper.<|reference_end|>
|
arxiv
|
@article{song2024memfusionmap:,
title={MemFusionMap: Working Memory Fusion for Online Vectorized HD Map
Construction},
author={Jingyu Song, Xudong Chen, Liupei Lu, Jie Li, Katherine A. Skinner},
journal={arXiv preprint arXiv:2409.18737},
year={2024},
archivePrefix={arXiv},
eprint={2409.18737},
primaryClass={cs.CV cs.AI cs.LG cs.RO}
}
|
song2024memfusionmap:
|
arxiv-662793
|
2409.18741
|
Optimum Configuration for Hovering n-Quadrotors carrying a Slung Payload
|
<|reference_start|>Optimum Configuration for Hovering n-Quadrotors carrying a Slung Payload: This work proposes a strategy for organising quadrotors around a payload to enable hovering without external stimuli, together with a MATLAB software for modelling the dynamics of a quadrotor-payload system. Based on geometric concepts, the proposed design keeps the payload and the system's centre of mass aligned. Successful hovering tests confirm the method's efficiency. Moreover, the algorithm is improved to take thrust capacities and propeller distances into account, calculating the minimum number of quadrotors needed for hovering. The algorithm's effectiveness is demonstrated by numerical examples, which reveal that larger quadrotors may require fewer units while smaller ones give greater flexibility. Our code can be found at: https://github.com/Hosnooo/Swarm-Slung-Payload<|reference_end|>
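A back-of-the-envelope sketch of the sizing rule described above: the minimum number of quadrotors such that their combined spare thrust can hover the payload. All numbers and the safety margin are made-up assumptions; the paper's algorithm additionally accounts for propeller distances and geometry.

import math

def min_quadrotors(payload_mass, quad_mass, max_thrust, margin=1.2, g=9.81):
    # thrust each vehicle has left after lifting itself (with margin)
    spare = max_thrust - margin * quad_mass * g
    if spare <= 0:
        raise ValueError("quadrotor cannot hover itself with this margin")
    return math.ceil(margin * payload_mass * g / spare)

# e.g., 3 kg payload, 1.2 kg vehicles with 30 N max thrust -> 3 quadrotors
print(min_quadrotors(payload_mass=3.0, quad_mass=1.2, max_thrust=30.0))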
|
arxiv
|
@article{elshaar2024optimum,
title={Optimum Configuration for Hovering n-Quadrotors carrying a Slung Payload},
author={Mohssen E. Elshaar, Pansie A. khodary, Meral L. Badr, Mohamad A.
Sayegh, Zeyad M. Manaa, Ayman M. Abdallah},
journal={arXiv preprint arXiv:2409.18741},
year={2024},
archivePrefix={arXiv},
eprint={2409.18741},
primaryClass={cs.RO math.DS}
}
|
elshaar2024optimum
|
arxiv-662794
|
2409.18742
|
A History-Guided Regional Partitioning Evolutionary Optimization for Solving the Flexible Job Shop Problem with Limited Multi-load Automated Guided Vehicles
|
<|reference_start|>A History-Guided Regional Partitioning Evolutionary Optimization for Solving the Flexible Job Shop Problem with Limited Multi-load Automated Guided Vehicles: In a flexible job shop environment, using Automated Guided Vehicles (AGVs) to transport jobs and process materials is an important way to promote the intelligence of the workshop. Compared with single-load AGVs, multi-load AGVs can improve AGV utilization, reduce path conflicts, etc. Therefore, this study proposes a history-guided regional partitioning algorithm (HRPEO) for the flexible job shop scheduling problem with limited multi-load AGVs (FJSPMA). First, the encoding and decoding rules are designed according to the characteristics of multi-load AGVs, and then an initialization rule based on the branch and bound method is used to generate the initial population. Second, to prevent the algorithm from falling into a local optimum, the algorithm adopts a regional partitioning strategy. This strategy divides the solution space into multiple regions and measures the potential of the regions. After that, it groups the regions into multiple clusters in each iteration and selects individuals for evolutionary search based on the set of clusters. Third, a local search strategy is designed to improve the exploitation ability of the algorithm, which uses a greedy approach to optimize machine selection and the transportation sequence according to the characteristics of FJSPMA. Finally, a large number of experiments are carried out on the benchmarks to test the performance of the algorithm. Compared with multiple advanced algorithms, the results show that HRPEO has a better advantage in solving FJSPMA.<|reference_end|>
|
arxiv
|
@article{liu2024a,
title={A History-Guided Regional Partitioning Evolutionary Optimization for
Solving the Flexible Job Shop Problem with Limited Multi-load Automated
Guided Vehicles},
author={Feige Liu, Chao Lu, Xin Li},
journal={arXiv preprint arXiv:2409.18742},
year={2024},
archivePrefix={arXiv},
eprint={2409.18742},
primaryClass={eess.SY cs.NE cs.SY}
}
|
liu2024a
|
arxiv-662795
|
2409.18743
|
OpenObject-NAV: Open-Vocabulary Object-Oriented Navigation Based on Dynamic Carrier-Relationship Scene Graph
|
<|reference_start|>OpenObject-NAV: Open-Vocabulary Object-Oriented Navigation Based on Dynamic Carrier-Relationship Scene Graph: In everyday life, frequently used objects like cups often have unfixed positions and multiple instances within the same category, and their carriers frequently change as well. As a result, it becomes challenging for a robot to efficiently navigate to a specific instance. To tackle this challenge, the robot must capture and update scene changes and plans continuously. However, current object navigation approaches primarily focus on the semantic level and lack the ability to dynamically update the scene representation. This paper captures the relationships between frequently used objects and their static carriers. It constructs an open-vocabulary Carrier-Relationship Scene Graph (CRSG) and updates the carrying status during robot navigation to reflect the dynamic changes of the scene. Based on the CRSG, we further propose an instance navigation strategy that models the navigation process as a Markov Decision Process. At each step, decisions are informed by the Large Language Model's commonsense knowledge and visual-language feature similarity. We designed a series of long-sequence navigation tasks for frequently used everyday items in the Habitat simulator. The results demonstrate that by updating the CRSG, the robot can efficiently navigate to moved targets. Additionally, we deployed our algorithm on a real robot and validated its practical effectiveness.<|reference_end|>
|
arxiv
|
@article{tang2024openobject-nav:,
title={OpenObject-NAV: Open-Vocabulary Object-Oriented Navigation Based on
Dynamic Carrier-Relationship Scene Graph},
author={Yujie Tang, Meiling Wang, Yinan Deng, Zibo Zheng, Jiagui Zhong, Yufeng
Yue},
journal={arXiv preprint arXiv:2409.18743},
year={2024},
archivePrefix={arXiv},
eprint={2409.18743},
primaryClass={cs.RO cs.AI}
}
|
tang2024openobject-nav:
|
arxiv-662796
|
2409.18745
|
A study on the effects of mixed explicit and implicit communications in human-virtual-agent interactions
|
<|reference_start|>A study on the effects of mixed explicit and implicit communications in human-virtual-agent interactions: Communication between humans and robots (or virtual agents) is essential for interaction and often inspired by human communication, which uses gestures, facial expressions, gaze direction, and other explicit and implicit means. This work presents an interaction experiment where humans and virtual agents interact through explicit (gestures, manual entries using mouse and keyboard, voice, sound, and information on screen) and implicit (gaze direction, location, facial expressions, and raise of eyebrows) communication to evaluate the effect of mixed explicit-implicit communication against purely explicit communication. Results obtained using Bayesian parameter estimation show that the number of errors and task execution time did not significantly change when mixed explicit and implicit communications were used, and neither the perceived efficiency of the interaction. In contrast, acceptance, sociability, and transparency of the virtual agent increased when using mixed communication modalities (88.3%, 92%, and 92.9% of the effect size posterior distribution of each variable, respectively, were above the upper limit of the region of practical equivalence). This suggests that task-related measures, such as time, number of errors, and perceived efficiency of the interaction, have not been influenced by the communication type in our particular experiment. However, the improvement of subjective measures related to the virtual agent, such as acceptance, sociability, and transparency, suggests that humans are more receptive to mixed explicit and implicit communications.<|reference_end|>
|
arxiv
|
@article{campos2024a,
title={A study on the effects of mixed explicit and implicit communications in
human-virtual-agent interactions},
author={Ana Christina Almada Campos and Bruno Vilhena Adorno},
journal={arXiv preprint arXiv:2409.18745},
year={2024},
archivePrefix={arXiv},
eprint={2409.18745},
primaryClass={cs.RO}
}
|
campos2024a
|
arxiv-662797
|
2409.18747
|
Cottention: Linear Transformers With Cosine Attention
|
<|reference_start|>Cottention: Linear Transformers With Cosine Attention: Attention mechanisms, particularly softmax attention, have been instrumental in the success of transformer-based models such as GPT. However, the quadratic memory complexity of softmax attention with respect to sequence length poses significant challenges for processing longer sequences. We introduce Cottention, a novel attention mechanism that replaces the softmax operation with cosine similarity. By leveraging the properties of cosine similarity and rearranging the attention equation, Cottention achieves native linear memory complexity with respect to sequence length, making it inherently more memory-efficient than softmax attention. We demonstrate that Cottention can be reformulated as a recurrent neural network (RNN) with a finite hidden state, allowing for constant memory usage during inference. We evaluate Cottention on both the bidirectional BERT and causal GPT tasks, demonstrating comparable performance to softmax attention while significantly reducing memory requirements. To ensure efficient computation, we develop a custom CUDA kernel for Cottention. Our results show that Cottention is a promising alternative to softmax attention, enabling the processing of longer sequences without sacrificing performance, due to its native linear memory complexity and ability to maintain a constant memory footprint during inference.<|reference_end|>
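A compact sketch of why cosine attention can be linear in sequence length: with normalized queries and keys and no softmax, sum_j (q_i . k_j) v_j = q_i . (sum_j k_j v_j^T), so the N x N similarity matrix is never formed. This is the bidirectional case under assumed shapes; Cottention's full method (extra normalization, causal/recurrent form, custom CUDA kernel) is not reproduced here.

import torch
import torch.nn.functional as F

def cosine_attention(q, k, v):
    # q, k: N x d; v: N x d_v
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    kv = k.t() @ v        # d x d_v summary state, independent of N
    return q @ kv         # N x d_v output: O(N d d_v) time, O(d d_v) extra memory

# causal/GPT-style use would replace kv by a running prefix sum over positions,
# which is what makes constant-memory recurrent inference possible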
|
arxiv
|
@article{mongaras2024cottention:,
title={Cottention: Linear Transformers With Cosine Attention},
author={Gabriel Mongaras and Trevor Dohm and Eric C. Larson},
journal={arXiv preprint arXiv:2409.18747},
year={2024},
archivePrefix={arXiv},
eprint={2409.18747},
primaryClass={cs.LG}
}
|
mongaras2024cottention:
|
arxiv-662798
|
2409.18749
|
TensorSocket: Shared Data Loading for Deep Learning Training
|
<|reference_start|>TensorSocket: Shared Data Loading for Deep Learning Training: Training deep learning models is a repetitive and resource-intensive process. Data scientists often train several models before landing on the set of parameters (e.g., hyper-parameter tuning), model architecture (e.g., neural architecture search), and other choices that yield the highest accuracy. The computational efficiency of these training tasks depends highly on how well we can supply the training process with training data. The repetitive nature of these tasks results in the same data processing pipelines running over and over, exacerbating the need for and cost of computational resources. In this paper, we present TensorSocket to reduce the computational needs of deep learning training by enabling simultaneous training processes to share the same data loader. TensorSocket mitigates CPU-side bottlenecks in cases where the collocated training workloads have high throughput on GPU but are held back by lower data-loading throughput on CPU. TensorSocket achieves this by reducing redundant computations across collocated training processes and leveraging modern GPU-GPU interconnects. We demonstrate the hardware- and pipeline-agnostic nature of TensorSocket and evaluate it using a variety of training scenarios. Our evaluation shows that TensorSocket enables scenarios that are infeasible without data sharing, increases training throughput by up to $100\%$, and, when utilizing cloud instances, achieves cost savings of $50\%$ by reducing the hardware resource needs on the CPU side. Furthermore, TensorSocket outperforms state-of-the-art solutions for shared data loading such as CoorDL and Joader: it is easier to use, maintain, and deploy, and matches or exceeds the throughput of other solutions while requiring fewer CPU resources.<|reference_end|>
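The core idea, one shared loader feeding several collocated trainers, can be sketched with the Python standard library alone. This is not the TensorSocket API, just a minimal single-machine illustration with hypothetical names; the real system additionally exploits shared memory and GPU-GPU interconnects.

import multiprocessing as mp
import numpy as np

def data_loader(queues, num_batches=5, batch_size=4):
    """Single producer: each batch is prepared once on the CPU and then
    fanned out to every collocated training process."""
    for step in range(num_batches):
        batch = np.random.rand(batch_size, 3, 32, 32).astype(np.float32)
        for q in queues:
            q.put((step, batch))
    for q in queues:
        q.put(None)  # sentinel: no more data

def trainer(name, queue):
    """Consumer: stands in for one training process sharing the loader."""
    while (item := queue.get()) is not None:
        step, batch = item
        print(f"{name}: step {step}, batch shape {batch.shape}")

if __name__ == "__main__":
    queues = [mp.Queue() for _ in range(2)]
    procs = [mp.Process(target=trainer, args=(f"trainer-{i}", q))
             for i, q in enumerate(queues)]
    for p in procs:
        p.start()
    data_loader(queues)
    for p in procs:
        p.join()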
|
arxiv
|
@article{robroek2024tensorsocket:,
title={TensorSocket: Shared Data Loading for Deep Learning Training},
author={Ties Robroek (IT University of Copenhagen) and Neil Kim Nielsen (IT
University of Copenhagen) and P{\i}nar T{\"o}z{\"u}n (IT University of Copenhagen)},
journal={arXiv preprint arXiv:2409.18749},
year={2024},
archivePrefix={arXiv},
eprint={2409.18749},
primaryClass={cs.LG cs.DC}
}
|
robroek2024tensorsocket:
|
arxiv-662799
|
2409.18750
|
Temporal queries for dynamic temporal forests
|
<|reference_start|>Temporal queries for dynamic temporal forests: In a temporal forest each edge has an associated set of time labels that specify the time instants in which the edges are available. A temporal path from vertex $u$ to vertex $v$ in the forest is a selection of a label for each edge in the unique path from $u$ to $v$, assuming it exists, such that the labels selected for any two consecutive edges are non-decreasing. We design linear-size data structures that maintain a temporal forest of rooted trees under addition and deletion of both edge labels and singleton vertices, insertion of root-to-node edges, and removal of edges with no labels. Such data structures can answer temporal reachability, earliest arrival, and latest departure queries. All queries and updates are handled in polylogarithmic worst-case time. Our results can be adapted to deal with latencies. More precisely, all the worst-case time bounds are asymptotically unaffected when latencies are uniform. For arbitrary latencies, the update time becomes amortized in the incremental case where only label additions and edge/singleton insertions are allowed as well as in the decremental case in which only label deletions and edge/singleton removals are allowed. To the best of our knowledge, the only previously known data structure supporting temporal reachability queries is due to Brito, Albertini, Casteigts, and Traven\c{c}olo [Social Network Analysis and Mining, 2021], which can handle general temporal graphs, answers queries in logarithmic time in the worst case, but requires an amortized update time that is quadratic in the number of vertices, up to polylogarithmic factors.<|reference_end|>
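The query semantics above can be illustrated with the greedy rule underlying earliest-arrival computation along a fixed tree path: at each edge, pick the smallest available label not earlier than the time reached so far. A minimal sketch follows; this is only the query definition, whereas the paper's contribution is maintaining such answers in polylogarithmic time under updates.

import bisect

def earliest_arrival(path_labels, start_time=float("-inf")):
    """Greedy earliest arrival along a fixed u-to-v path (sketch).

    path_labels[i] is the sorted list of time labels of the i-th edge on
    the path. A temporal path must select non-decreasing labels, so at
    each edge we take the smallest label >= the time chosen so far.
    Returns None if no feasible selection exists (u cannot reach v).
    """
    t = start_time
    for labels in path_labels:
        i = bisect.bisect_left(labels, t)
        if i == len(labels):
            return None
        t = labels[i]
    return t

# Edge 1 available at times {2, 5}, edge 2 at {1, 4, 7}, edge 3 at {6}:
# the greedy selection is 2 -> 4 -> 6, so v is reached at time 6.
print(earliest_arrival([[2, 5], [1, 4, 7], [6]]))  # 6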
|
arxiv
|
@article{bilò2024temporal,
title={Temporal queries for dynamic temporal forests},
author={Davide Bil{\`o} and Luciano Gual{\`a} and Stefano Leucci and Guido Proietti and
Alessandro Straziota},
journal={arXiv preprint arXiv:2409.18750},
year={2024},
archivePrefix={arXiv},
eprint={2409.18750},
primaryClass={cs.DS}
}
|
bilò2024temporal
|
arxiv-662800
|
2409.18752
|
Royal Reveals: LiDAR Mapping of Kronborg Castle, Echoes of Hamlet's Halls
|
<|reference_start|>Royal Reveals: LiDAR Mapping of Kronborg Castle, Echoes of Hamlet's Halls: This paper presents a large-scale dataset from a meticulous 360-degree LiDAR (Light Detection and Ranging) scan conducted on Kronborg Castle, a renowned Renaissance fortress located in Elsinore (Helsing{\o}r), Denmark, famously associated with Shakespeare's "Hamlet." Utilising a vertically mounted, gimbal-stabilised, 16-channel, 360-degree Velodyne VLP-16 LiDAR scanner paired with an Intel RealSense L515 depth camera, this research offers an unparalleled digital representation of the castle's intricate architectural details and structural nuances, enabling fellow researchers to conduct experiments utilising the data for SLAM (Simultaneous Localisation and Mapping) as well as floorplan generation.<|reference_end|>
|
arxiv
|
@article{davies2024royal,
title={Royal Reveals: LiDAR Mapping of Kronborg Castle, Echoes of Hamlet's
Halls},
author={Leon Davies and Simon S{\o}lvsten},
journal={arXiv preprint arXiv:2409.18752},
year={2024},
archivePrefix={arXiv},
eprint={2409.18752},
primaryClass={cs.RO}
}
|
davies2024royal
|