corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-663501
|
2409.20073
|
Whole-Graph Representation Learning For the Classification of Signed Networks
|
<|reference_start|>Whole-Graph Representation Learning For the Classification of Signed Networks: Graphs are ubiquitous for modeling complex systems involving structured data and relationships. Consequently, graph representation learning, which aims to automatically learn low-dimensional representations of graphs, has drawn a lot of attention in recent years. The overwhelming majority of existing methods handle unsigned graphs. However, signed graphs appear in an increasing number of application domains to model systems involving two types of opposed relationships. Several authors took an interest in signed graphs and proposed methods for providing vertex-level representations, but only one exists for whole-graph representations, and it can handle only fully connected graphs. In this article, we tackle this issue by proposing two approaches to learning whole-graph representations of general signed graphs. The first is SG2V, a signed generalization of the whole-graph embedding method Graph2vec that relies on a modification of the Weisfeiler--Lehman relabelling procedure. The second is WSGCN, a whole-graph generalization of the signed vertex embedding method SGCN that relies on the introduction of master nodes into the GCN. We propose several variants of both these approaches. A bottleneck in the development of whole-graph-oriented methods is the lack of data. We constitute a benchmark composed of three collections of signed graphs with corresponding ground truths. We assess our methods on this benchmark, and our results show that the signed whole-graph methods learn better representations for this task. Overall, the baseline obtains an F-measure score of 58.57, whereas SG2V and WSGCN reach 73.01 and 81.20, respectively. Our source code and benchmark dataset are both publicly available online.<|reference_end|>
|
arxiv
|
@article{cecillon2024whole-graph,
title={Whole-Graph Representation Learning For the Classification of Signed
Networks},
author={No{\'e} Cecillon (LIA), Vincent Labatut (LIA), Richard Dufour (LS2N -
{\'e}quipe TALN), Nejat Ar{\i}n{\i}k (CRIL)},
journal={IEEE Access, 12:151303-151316, 2024},
year={2024},
doi={10.1109/ACCESS.2024.3472474},
archivePrefix={arXiv},
eprint={2409.20073},
primaryClass={cs.LG cs.NE cs.SI}
}
|
cecillon2024whole-graph
|
arxiv-663502
|
2409.20075
|
BSharedRAG: Backbone Shared Retrieval-Augmented Generation for the E-commerce Domain
|
<|reference_start|>BSharedRAG: Backbone Shared Retrieval-Augmented Generation for the E-commerce Domain: Retrieval Augmented Generation (RAG) systems are important in domains such as e-commerce, which has many long-tail entities and frequently updated information. Most existing works adopt separate modules for retrieval and generation, which may be suboptimal since the retrieval task and the generation task cannot benefit from each other to improve performance. We propose a novel Backbone Shared RAG framework (BSharedRAG). It first uses a domain-specific corpus to continually pre-train a base model as a domain-specific backbone model and then trains two plug-and-play Low-Rank Adaptation (LoRA) modules based on the shared backbone to minimize retrieval and generation losses respectively. Experimental results indicate that our proposed BSharedRAG outperforms baseline models by 5% and 13% in Hit@3 on two datasets in retrieval evaluation and by 23% in terms of BLEU-3 in generation evaluation. Our codes, models, and dataset are available at https://bsharedrag.github.io.<|reference_end|>
|
arxiv
|
@article{guan2024bsharedrag:,
title={BSharedRAG: Backbone Shared Retrieval-Augmented Generation for the
E-commerce Domain},
author={Kaisi Guan, Qian Cao, Yuchong Sun, Xiting Wang and Ruihua Song},
journal={arXiv preprint arXiv:2409.20075},
year={2024},
archivePrefix={arXiv},
eprint={2409.20075},
primaryClass={cs.CL}
}
|
guan2024bsharedrag:
|
arxiv-663503
|
2409.20078
|
Quantifying discriminability of evaluation metrics in link prediction for real networks
|
<|reference_start|>Quantifying discriminability of evaluation metrics in link prediction for real networks: Link prediction is one of the most productive branches in network science, aiming to predict links that would have existed but have not yet been observed, or links that will appear during the evolution of the network. Over nearly two decades, the field of link prediction has amassed a substantial body of research, encompassing a plethora of algorithms and diverse applications. For any algorithm, one or more evaluation metrics are required to assess its performance. Because using different evaluation metrics can provide different assessments of the algorithm performance, how to select appropriate evaluation metrics is a fundamental issue in link prediction. To address this issue, we propose a novel measure that quantifies the discriminability of any evaluation metric given a real network and an algorithm. Based on 131 real networks and 20 representative algorithms, we systematically compare the discriminabilities of eight evaluation metrics, and demonstrate that H-measure and Area Under the ROC Curve (AUC) exhibit the strongest discriminabilities, followed by Normalized Discounted Cumulative Gain (NDCG). Our finding is robust for networks in different domains and algorithms of different types. This study provides insights into the selection of evaluation metrics, which may further contribute to standardizing the evaluation process of link prediction algorithms.<|reference_end|>
|
arxiv
|
@article{wan2024quantifying,
title={Quantifying discriminability of evaluation metrics in link prediction
for real networks},
author={Shuyan Wan, Yilin Bi, Xinshan Jiao, Tao Zhou},
journal={arXiv preprint arXiv:2409.20078},
year={2024},
archivePrefix={arXiv},
eprint={2409.20078},
primaryClass={cs.SI physics.soc-ph}
}
|
wan2024quantifying
|
arxiv-663504
|
2409.20079
|
Online Influence Maximization with Semi-Bandit Feedback under Corruptions
|
<|reference_start|>Online Influence Maximization with Semi-Bandit Feedback under Corruptions: In this work, we investigate the online influence maximization in social networks. Most prior research studies on online influence maximization assume that the nodes are fully cooperative and act according to their stochastically generated influence probabilities on others. In contrast, we study the online influence maximization problem in the presence of some corrupted nodes whose damaging effects diffuse throughout the network. We propose a novel bandit algorithm, CW-IMLinUCB, which robustly learns and finds the optimal seed set in the presence of corrupted users. Theoretical analyses establish that the regret performance of our proposed algorithm is better than the state-of-the-art online influence maximization algorithms. Extensive empirical evaluations on synthetic and real-world datasets also show the superior performance of our proposed algorithm.<|reference_end|>
|
arxiv
|
@article{cheng2024online,
title={Online Influence Maximization with Semi-Bandit Feedback under
Corruptions},
author={Xiaotong Cheng, Behzad Nourani-Koliji, Setareh Maghsudi},
journal={arXiv preprint arXiv:2409.20079},
year={2024},
archivePrefix={arXiv},
eprint={2409.20079},
primaryClass={cs.SI}
}
|
cheng2024online
|
arxiv-663505
|
2409.20081
|
ProFD: Prompt-Guided Feature Disentangling for Occluded Person Re-Identification
|
<|reference_start|>ProFD: Prompt-Guided Feature Disentangling for Occluded Person Re-Identification: To address the occlusion issues in person Re-Identification (ReID) tasks, many methods have been proposed to extract part features by introducing external spatial information. However, due to missing part appearance information caused by occlusion and noisy spatial information from external models, these purely vision-based approaches fail to correctly learn the features of human body parts from limited training data and struggle to accurately locate body parts, ultimately leading to misaligned part features. To tackle these challenges, we propose a Prompt-guided Feature Disentangling method (ProFD), which leverages the rich pre-trained knowledge in the textual modality to facilitate the model in generating well-aligned part features. ProFD first designs part-specific prompts and utilizes noisy segmentation masks to preliminarily align visual and textual embeddings, enabling the textual prompts to have spatial awareness. Furthermore, to alleviate the noise from external masks, ProFD adopts a hybrid-attention decoder, ensuring spatial and semantic consistency during the decoding process to minimize noise impact. Additionally, to avoid catastrophic forgetting, we employ a self-distillation strategy, retaining pre-trained knowledge of CLIP to mitigate over-fitting. Evaluation results on the Market1501, DukeMTMC-ReID, Occluded-Duke, Occluded-ReID, and P-DukeMTMC datasets demonstrate that ProFD achieves state-of-the-art results. Our project is available at: https://github.com/Cuixxx/ProFD.<|reference_end|>
|
arxiv
|
@article{cui2024profd:,
title={ProFD: Prompt-Guided Feature Disentangling for Occluded Person
Re-Identification},
author={Can Cui, Siteng Huang, Wenxuan Song, Pengxiang Ding, Min Zhang,
Donglin Wang},
journal={arXiv preprint arXiv:2409.20081},
year={2024},
archivePrefix={arXiv},
eprint={2409.20081},
primaryClass={cs.CV cs.MM}
}
|
cui2024profd:
|
arxiv-663506
|
2409.20083
|
SurgPETL: Parameter-Efficient Image-to-Surgical-Video Transfer Learning for Surgical Phase Recognition
|
<|reference_start|>SurgPETL: Parameter-Efficient Image-to-Surgical-Video Transfer Learning for Surgical Phase Recognition: Capitalizing on image-level pre-trained models for various downstream tasks has recently shown promising performance. However, the paradigm of "image pre-training followed by video fine-tuning" for high-dimensional video data inevitably poses significant performance bottlenecks. Furthermore, in the medical domain, many surgical video tasks encounter additional challenges posed by the limited availability of video data and the necessity for comprehensive spatial-temporal modeling. Recently, Parameter-Efficient Image-to-Video Transfer Learning has emerged as an efficient and effective paradigm for video action recognition tasks, which employs image-level pre-trained models with promising feature transferability and involves cross-modality temporal modeling with minimal fine-tuning. Nevertheless, the effectiveness and generalizability of this paradigm within the intricate surgical domain remain unexplored. In this paper, we delve into the novel problem of efficiently adapting image-level pre-trained models to specialize in fine-grained surgical phase recognition, termed Parameter-Efficient Image-to-Surgical-Video Transfer Learning. Firstly, we develop a parameter-efficient transfer learning benchmark SurgPETL for surgical phase recognition, and conduct extensive experiments with three advanced methods based on ViTs of two distinct scales pre-trained on five large-scale natural and medical datasets. Then, we introduce the Spatial-Temporal Adaptation module, integrating a standard spatial adapter with a novel temporal adapter to capture detailed spatial features and establish connections across temporal sequences for robust spatial-temporal modeling. Extensive experiments on three challenging datasets spanning various surgical procedures demonstrate the effectiveness of SurgPETL with STA.<|reference_end|>
|
arxiv
|
@article{yang2024surgpetl:,
title={SurgPETL: Parameter-Efficient Image-to-Surgical-Video Transfer Learning
for Surgical Phase Recognition},
author={Shu Yang, Zhiyuan Cai, Luyang Luo, Ning Ma, Shuchang Xu, Hao Chen},
journal={arXiv preprint arXiv:2409.20083},
year={2024},
archivePrefix={arXiv},
eprint={2409.20083},
primaryClass={cs.CV}
}
|
yang2024surgpetl:
|
arxiv-663507
|
2409.20086
|
Optimising EEG decoding with refined sampling and multimodal feature integration
|
<|reference_start|>Optimising EEG decoding with refined sampling and multimodal feature integration: Electroencephalography (EEG) is a neuroimaging technique that records brain neural activity with high temporal resolution. Unlike other methods, EEG does not require prohibitively expensive equipment and can be easily set up using commercially available portable EEG caps, making it an ideal candidate for brain-computer interfaces. However, EEG signals are characterised by poor spatial resolution and high noise levels, complicating their decoding. In this study, we employ a contrastive learning framework to align encoded EEG features with pretrained CLIP features, achieving a 7% improvement over the state-of-the-art in EEG decoding of object categories. This enhancement is equally attributed to (1) a novel online sampling method that boosts the signal-to-noise ratio and (2) multimodal representations leveraging visual and language features to enhance the alignment space. Our analysis reveals a systematic interaction between the architecture and dataset of pretrained features and their alignment efficacy for EEG signal decoding. This interaction correlates with the generalisation power of the pretrained features on ImageNet-O/A datasets ($r=.5$). These findings extend beyond EEG signal alignment, offering potential for broader applications in neuroimaging decoding and generic feature alignments.<|reference_end|>
|
arxiv
|
@article{akbarinia2024optimising,
title={Optimising EEG decoding with refined sampling and multimodal feature
integration},
author={Arash Akbarinia},
journal={arXiv preprint arXiv:2409.20086},
year={2024},
archivePrefix={arXiv},
eprint={2409.20086},
primaryClass={cs.HC}
}
|
akbarinia2024optimising
|
arxiv-663508
|
2409.20087
|
Inferring Thunderstorm Occurrence from Vertical Profiles of Convection-Permitting Simulations: Physical Insights from a Physical Deep Learning Model
|
<|reference_start|>Inferring Thunderstorm Occurrence from Vertical Profiles of Convection-Permitting Simulations: Physical Insights from a Physical Deep Learning Model: Thunderstorms have significant social and economic impacts due to heavy precipitation, hail, lightning, and strong winds, necessitating reliable forecasts. Thunderstorm forecasts based on numerical weather prediction (NWP) often rely on single-level surrogate predictors, like convective available potential energy and precipitation rate, derived from vertical profiles of three-dimensional atmospheric variables. In this study, we develop SALAMA 1D, a deep neural network that directly infers the probability of thunderstorm occurrence from vertical profiles of ten atmospheric variables, bypassing single-level predictors. By training the model on convection-permitting NWP forecasts, we allow SALAMA 1D to flexibly identify convective patterns, with the goal of enhancing forecast accuracy. The model's architecture is physically motivated: sparse connections encourage interactions at similar height levels, while a shuffling mechanism prevents the model from learning non-physical patterns tied to the vertical grid. SALAMA 1D is trained over Central Europe with lightning observations as the ground truth. Comparative analysis against a baseline machine learning model that uses single-level predictors shows SALAMA 1D's superior skill across various metrics and lead times of up to at least 11 hours. Moreover, increasing the number of forecasts used to compile the training set improves skill, even when training set size is kept constant. Sensitivity analysis using saliency maps indicates that the model reconstructs environmental lapse rates and rediscovers patterns consistent with established theoretical understandings, such as positive buoyancy, convective inhibition, and ice particle formation near the tropopause, while ruling out thunderstorm occurrence based on the absence of mid-level graupel and cloud cover.<|reference_end|>
|
arxiv
|
@article{yousefnia2024inferring,
title={Inferring Thunderstorm Occurrence from Vertical Profiles of
Convection-Permitting Simulations: Physical Insights from a Physical Deep
Learning Model},
author={Kianusch Vahid Yousefnia, Tobias B{\"o}lle, Christoph Metzl},
journal={arXiv preprint arXiv:2409.20087},
year={2024},
archivePrefix={arXiv},
eprint={2409.20087},
primaryClass={physics.ao-ph cs.LG}
}
|
yousefnia2024inferring
|
arxiv-663509
|
2409.20089
|
Robust LLM safeguarding via refusal feature adversarial training
|
<|reference_start|>Robust LLM safeguarding via refusal feature adversarial training: Large language models (LLMs) are vulnerable to adversarial attacks that can elicit harmful responses. Defending against such attacks remains challenging due to the opacity of jailbreaking mechanisms and the high computational cost of training LLMs robustly. We demonstrate that adversarial attacks share a universal mechanism for circumventing LLM safeguards that works by ablating a dimension in the residual stream embedding space called the refusal feature. We further show that the operation of refusal feature ablation (RFA) approximates the worst-case perturbation of offsetting model safety. Based on these findings, we propose Refusal Feature Adversarial Training (ReFAT), a novel algorithm that efficiently performs LLM adversarial training by simulating the effect of input-level attacks via RFA. Experiment results show that ReFAT significantly improves the robustness of three popular LLMs against a wide range of adversarial attacks, with considerably less computational overhead compared to existing adversarial training methods.<|reference_end|>
|
arxiv
|
@article{yu2024robust,
title={Robust LLM safeguarding via refusal feature adversarial training},
author={Lei Yu, Virginie Do, Karen Hambardzumyan, Nicola Cancedda},
journal={arXiv preprint arXiv:2409.20089},
year={2024},
archivePrefix={arXiv},
eprint={2409.20089},
primaryClass={cs.LG cs.CL cs.CR}
}
|
yu2024robust
|
arxiv-663510
|
2409.20092
|
Continuous-Time Linear Positional Embedding for Irregular Time Series Forecasting
|
<|reference_start|>Continuous-Time Linear Positional Embedding for Irregular Time Series Forecasting: Irregularly sampled time series forecasting, characterized by non-uniform intervals, is prevalent in practical applications. However, previous research has focused on regular time series forecasting, typically relying on transformer architectures. To extend transformers to handle irregular time series, we tackle the positional embedding, which represents the temporal information of the data. We propose CTLPE, a method that learns a continuous linear function for encoding temporal information. The two challenges of irregular time series, inconsistent observation patterns and irregular time gaps, are solved by learning a continuous-time function and concise representation of position. Additionally, the linear continuous function is empirically shown to be superior to other continuous functions by learning a neural controlled differential equation-based positional embedding, and theoretically supported by properties of an ideal positional embedding. CTLPE outperforms existing techniques across various irregularly-sampled time series datasets, showcasing its enhanced efficacy.<|reference_end|>
|
arxiv
|
@article{kim2024continuous-time,
title={Continuous-Time Linear Positional Embedding for Irregular Time Series
Forecasting},
author={Byunghyun Kim and Jae-Gil Lee},
journal={arXiv preprint arXiv:2409.20092},
year={2024},
archivePrefix={arXiv},
eprint={2409.20092},
primaryClass={cs.LG cs.AI}
}
|
kim2024continuous-time
|
arxiv-663511
|
2409.20094
|
Aggressive Post-Training Compression on Extremely Large Language Models
|
<|reference_start|>Aggressive Post-Training Compression on Extremely Large Language Models: The increasing size and complexity of Large Language Models (LLMs) pose challenges for their deployment on personal computers and mobile devices. Aggressive post-training model compression is necessary to reduce the models' size, but it often results in significant accuracy loss. To address this challenge, we propose a novel network pruning technology that utilizes over 0.7 sparsity and less than 8 bits of quantization. Our approach enables the compression of prevailing LLMs within a couple of hours while maintaining a relatively small accuracy loss. In experimental evaluations, our method demonstrates effectiveness and potential for practical deployment. By making LLMs available on domestic devices, our work can facilitate a new era of natural language processing applications with wide-ranging impacts.<|reference_end|>
|
arxiv
|
@article{zhang2024aggressive,
title={Aggressive Post-Training Compression on Extremely Large Language Models},
author={Zining Zhang, Yao Chen, Bingsheng He, Zhenjie Zhang},
journal={arXiv preprint arXiv:2409.20094},
year={2024},
archivePrefix={arXiv},
eprint={2409.20094},
primaryClass={cs.CL cs.AI}
}
|
zhang2024aggressive
|
arxiv-663512
|
2409.20098
|
Learning to Discover Generalized Facial Expressions
|
<|reference_start|>Learning to Discover Generalized Facial Expressions: We introduce Facial Expression Category Discovery (FECD), a novel task in the domain of open-world facial expression recognition (O-FER). While Generalized Category Discovery (GCD) has been explored in natural image datasets, applying it to facial expressions presents unique challenges. Specifically, we identify two key biases to better understand these challenges: Theoretical Bias, arising from the introduction of new categories in unlabeled training data, and Practical Bias, stemming from the imbalanced and fine-grained nature of facial expression data. To address these challenges, we propose FER-GCD, an adversarial approach that integrates both implicit and explicit debiasing components. In the implicit debiasing process, we devise F-discrepancy, a novel metric used to estimate the upper bound of Theoretical Bias, helping the model minimize this upper bound through adversarial training. The explicit debiasing process further optimizes the feature generator and classifier to reduce Practical Bias. Extensive experiments on GCD-based FER datasets demonstrate that our FER-GCD framework significantly improves accuracy on both old and new categories, achieving an average improvement of 9.8% over the baseline and outperforming state-of-the-art methods.<|reference_end|>
|
arxiv
|
@article{luo2024learning,
title={Learning to Discover Generalized Facial Expressions},
author={Tingzhang Luo, Yichao Liu, Yuanyuan Liu, Andi Zhang, Xin Wang, Chang
Tang, Zhe Chen},
journal={arXiv preprint arXiv:2409.20098},
year={2024},
archivePrefix={arXiv},
eprint={2409.20098},
primaryClass={cs.CV}
}
|
luo2024learning
|
arxiv-663513
|
2409.20099
|
FastFlow in FPGA Stacks of Data Centers
|
<|reference_start|>FastFlow in FPGA Stacks of Data Centers: FPGA programming is more complex than programming Central Processing Units (CPUs) and Graphics Processing Units (GPUs). Coding languages that define the Register Transfer Level (RTL) abstraction in High Level Synthesis (HLS) for FPGA platforms have emerged due to the laborious complexity of Hardware Description Languages (HDL). Both HDL and HLS become complex when FPGAs are adopted for high-performance parallel programs on multicore platforms in data centers. Writing an efficient host-side parallel program to control the hardware kernels placed in stacks of FPGAs is challenging and strenuous. The unavailability of efficient high-level parallel programming tools for multicore architectures makes multicore parallel programming very unpopular for the masses. This work proposes an extension of FastFlow where data flows in hardware kernels can be executed efficiently in FPGA stacks. Here, host-side code is generated automatically from simple csv files. The programmer needs to specify four simple parameters in these csv files: FPGA IDs, source nodes, destination nodes, and hardware kernel names. The proposed tool flow uses FastFlow libraries with Vitis to develop efficient and scalable parallel programs for FPGA stacks in data centers. The evidence from the implementation shows that the integration of FastFlow with Vitis reduces coding effort by 96% (in terms of number of lines) compared to existing Vitis solutions.<|reference_end|>
|
arxiv
|
@article{paul2024fastflow,
title={FastFlow in FPGA Stacks of Data Centers},
author={Rourab Paul, Alberto Ottimo, Marco Danelutto},
journal={arXiv preprint arXiv:2409.20099},
year={2024},
archivePrefix={arXiv},
eprint={2409.20099},
primaryClass={cs.AR}
}
|
paul2024fastflow
|
arxiv-663514
|
2409.20101
|
A Flexible Velocity Boltzmann Scheme for Convection-Diffusion Equations
|
<|reference_start|>A Flexible Velocity Boltzmann Scheme for Convection-Diffusion Equations: A framework of finite-velocity model based Boltzmann equation has been developed for convection-diffusion equations. These velocities are kept flexible and adjusted to control numerical diffusion. A flux difference splitting based kinetic scheme is then introduced for solving a wide variety of nonlinear convection-diffusion equations numerically. Based on this framework, a generalized kinetic Lax-Wendroff scheme is also derived, recovering the classical Lax-Wendroff method as one of the choices. Further, a total variation diminishing version of this kinetic flux difference splitting scheme is presented, combining it with the kinetic Lax-Wendroff scheme using a limiter function. The numerical scheme has been extensively tested and the results for benchmark test cases, for 1D and 2D nonlinear convection and convection-diffusion equations, are presented.<|reference_end|>
|
arxiv
|
@article{rao2024a,
title={A Flexible Velocity Boltzmann Scheme for Convection-Diffusion Equations},
author={S.V. Raghurama Rao, K.S. Shrinath, Ankit Ruhi, Veeredhi Vasudeva Rao},
journal={arXiv preprint arXiv:2409.20101},
year={2024},
archivePrefix={arXiv},
eprint={2409.20101},
primaryClass={math.NA cs.NA}
}
|
rao2024a
|
arxiv-663515
|
2409.20108
|
Simple Realizability of Abstract Topological Graphs
|
<|reference_start|>Simple Realizability of Abstract Topological Graphs: An abstract topological graph (AT-graph) is a pair $A=(G,\mathcal{X})$, where $G=(V,E)$ is a graph and $\mathcal{X} \subseteq {E \choose 2}$ is a set of pairs of edges of $G$. A realization of $A$ is a drawing $\Gamma_A$ of $G$ in the plane such that any two edges $e_1,e_2$ of $G$ cross in $\Gamma_A$ if and only if $(e_1,e_2) \in \mathcal{X}$; $\Gamma_A$ is simple if any two edges intersect at most once (either at a common endpoint or at a proper crossing). The AT-graph Realizability (ATR) problem asks whether an input AT-graph admits a realization. The version of this problem that requires a simple realization is called Simple AT-graph Realizability (SATR). It is a classical result that both ATR and SATR are NP-complete. In this paper, we study the SATR problem from a new structural perspective. More precisely, we consider the size $\mathrm{\lambda}(A)$ of the largest connected component of the crossing graph of any realization of $A$, i.e., the graph ${\cal C}(A) = (E, \mathcal{X})$. This parameter represents a natural way to measure the level of interplay among edge crossings. First, we prove that SATR is NP-complete when $\mathrm{\lambda}(A) \geq 6$. On the positive side, we give an optimal linear-time algorithm that solves SATR when $\mathrm{\lambda}(A) \leq 3$ and returns a simple realization if one exists. Our algorithm is based on several ingredients, in particular the reduction to a new embedding problem subject to constraints that require certain pairs of edges to alternate (in the rotation system), and a sequence of transformations that exploit the interplay between alternation constraints and the SPQR-tree and PQ-tree data structures to eventually arrive at a simpler embedding problem that can be solved with standard techniques.<|reference_end|>
|
arxiv
|
@article{da lozzo2024simple,
title={Simple Realizability of Abstract Topological Graphs},
author={Giordano Da Lozzo, Walter Didimo, Fabrizio Montecchiani, Miriam
M"unch, Maurizio Patrignani, Ignaz Rutter},
journal={arXiv preprint arXiv:2409.20108},
year={2024},
archivePrefix={arXiv},
eprint={2409.20108},
primaryClass={cs.DS}
}
|
da lozzo2024simple
|
arxiv-663516
|
2409.20111
|
Robust Gaussian Splatting SLAM by Leveraging Loop Closure
|
<|reference_start|>Robust Gaussian Splatting SLAM by Leveraging Loop Closure: 3D Gaussian Splatting algorithms excel in novel view rendering applications and have been adapted to extend the capabilities of traditional SLAM systems. However, current Gaussian Splatting SLAM methods, designed mainly for hand-held RGB or RGB-D sensors, struggle with tracking drifts when used with rotating RGB-D camera setups. In this paper, we propose a robust Gaussian Splatting SLAM architecture that utilizes inputs from multiple rotating RGB-D cameras to achieve accurate localization and photorealistic rendering performance. The carefully designed Gaussian Splatting Loop Closure module effectively addresses the issue of accumulated tracking and mapping errors found in conventional Gaussian Splatting SLAM systems. First, each Gaussian is associated with an anchor frame and categorized as historical or novel based on its timestamp. By rendering different types of Gaussians at the same viewpoint, the proposed loop detection strategy considers both co-visibility relationships and distinct rendering outcomes. Furthermore, a loop closure optimization approach is proposed to remove camera pose drift and maintain the high quality of 3D Gaussian models. The approach uses a lightweight pose graph optimization algorithm to correct pose drift and updates Gaussians based on the optimized poses. Additionally, a bundle adjustment scheme further refines camera poses using photometric and geometric constraints, ultimately enhancing the global consistency of scenarios. Quantitative and qualitative evaluations on both synthetic and real-world datasets demonstrate that our method outperforms state-of-the-art methods in camera pose estimation and novel view rendering tasks. The code will be open-sourced for the community.<|reference_end|>
|
arxiv
|
@article{zhu2024robust,
title={Robust Gaussian Splatting SLAM by Leveraging Loop Closure},
author={Zunjie Zhu, Youxu Fang, Xin Li, Chengang Yan, Feng Xu, Chau Yuen,
Yanyan Li},
journal={arXiv preprint arXiv:2409.20111},
year={2024},
archivePrefix={arXiv},
eprint={2409.20111},
primaryClass={cs.RO}
}
|
zhu2024robust
|
arxiv-663517
|
2409.20113
|
CBAM-SwinT-BL: Small Rail Surface Defect Detection Method Based on Swin Transformer with Block Level CBAM Enhancement
|
<|reference_start|>CBAM-SwinT-BL: Small Rail Surface Defect Detection Method Based on Swin Transformer with Block Level CBAM Enhancement: Under high-intensity rail operations, rail tracks endure considerable stresses resulting in various defects such as corrugation and spallings. Failure to effectively detect defects and provide maintenance in time would compromise service reliability and public safety. While advanced models have been developed in recent years, efficiently identifying small-scale rail defects has not yet been studied, especially for categories such as Dirt or Squat on the rail surface. To address this challenge, this study utilizes Swin Transformer (SwinT) as the baseline and incorporates the Convolutional Block Attention Module (CBAM) for enhancement. Our proposed method integrates CBAM successively within the swin transformer blocks, resulting in significant performance improvement in rail defect detection, particularly for categories with small instance sizes. The proposed framework is named CBAM-Enhanced Swin Transformer in Block Level (CBAM-SwinT-BL). Experiments and an ablation study have proven the effectiveness of the framework. The proposed framework yields a notable improvement in the accuracy of small-size defects, such as the dirt and dent categories in the RIII dataset, with mAP-50 increasing by +23.0% and +38.3% respectively, and the squat category in the MUET dataset also reaches +13.2% higher than the original model. Compared to the original SwinT, CBAM-SwinT-BL increases overall precision by around +5% on the MUET dataset and +7% on the RIII dataset, reaching 69.1% and 88.1% respectively. Meanwhile, the additional CBAM module merely extends the model training time by an average of +0.04s/iteration, which is acceptable compared to the significant improvement in system performance.<|reference_end|>
|
arxiv
|
@article{zhao2024cbam-swint-bl:,
title={CBAM-SwinT-BL: Small Rail Surface Defect Detection Method Based on Swin
Transformer with Block Level CBAM Enhancement},
author={Jiayi Zhao, Alison Wun-lam Yeung, Ali Muhammad, Songjiang Lai, Vincent
To-Yee NG},
journal={arXiv preprint arXiv:2409.20113},
year={2024},
archivePrefix={arXiv},
eprint={2409.20113},
primaryClass={cs.CV}
}
|
zhao2024cbam-swint-bl:
|
arxiv-663518
|
2409.20116
|
REST-HANDS: Rehabilitation with Egocentric Vision Using Smartglasses for Treatment of Hands after Surviving Stroke
|
<|reference_start|>REST-HANDS: Rehabilitation with Egocentric Vision Using Smartglasses for Treatment of Hands after Surviving Stroke: Stroke represents the third cause of death and disability worldwide, and is recognised as a significant global health problem. A major challenge for stroke survivors is persistent hand dysfunction, which severely affects the ability to perform daily activities and the overall quality of life. In order to regain their functional hand ability, stroke survivors need rehabilitation therapy. However, traditional rehabilitation requires continuous medical support, creating dependency on an overburdened healthcare system. In this paper, we explore the use of egocentric recordings from commercially available smart glasses, specifically RayBan Stories, for remote hand rehabilitation. Our approach includes offline experiments to evaluate the potential of smart glasses for automatic exercise recognition, exercise form evaluation and repetition counting. We present REST-HANDS, the first dataset of egocentric hand exercise videos. Using state-of-the-art methods, we establish benchmarks with high accuracy rates for exercise recognition (98.55%), form evaluation (86.98%), and repetition counting (mean absolute error of 1.33). Our study demonstrates the feasibility of using egocentric video from smart glasses for remote rehabilitation, paving the way for further research.<|reference_end|>
|
arxiv
|
@article{mucha2024rest-hands:,
title={REST-HANDS: Rehabilitation with Egocentric Vision Using Smartglasses for
Treatment of Hands after Surviving Stroke},
author={Wiktor Mucha, Kentaro Tanaka, Martin Kampel},
journal={arXiv preprint arXiv:2409.20116},
year={2024},
archivePrefix={arXiv},
eprint={2409.20116},
primaryClass={cs.CV}
}
|
mucha2024rest-hands:
|
arxiv-663519
|
2409.20117
|
Masked Autoregressive Model for Weather Forecasting
|
<|reference_start|>Masked Autoregressive Model for Weather Forecasting: The growing impact of global climate change amplifies the need for accurate and reliable weather forecasting. Traditional autoregressive approaches, while effective for temporal modeling, suffer from error accumulation in long-term prediction tasks. The lead time embedding method has been suggested to address this issue, but it struggles to maintain crucial correlations in atmospheric events. To overcome these challenges, we propose the Masked Autoregressive Model for Weather Forecasting (MAM4WF). This model leverages masked modeling, where portions of the input data are masked during training, allowing the model to learn robust spatiotemporal relationships by reconstructing the missing information. MAM4WF combines the advantages of both autoregressive and lead time embedding methods, offering flexibility in lead time modeling while iteratively integrating predictions. We evaluate MAM4WF across weather, climate forecasting, and video frame prediction datasets, demonstrating superior performance on five test datasets.<|reference_end|>
|
arxiv
|
@article{kim2024masked,
title={Masked Autoregressive Model for Weather Forecasting},
author={Doyi Kim, Minseok Seo, Hakjin Lee, Junghoon Seo},
journal={arXiv preprint arXiv:2409.20117},
year={2024},
archivePrefix={arXiv},
eprint={2409.20117},
primaryClass={cs.CV}
}
|
kim2024masked
|
arxiv-663520
|
2409.20120
|
ACE: Abstractions for Communicating Efficiently
|
<|reference_start|>ACE: Abstractions for Communicating Efficiently: A central but unresolved aspect of problem-solving in AI is the capability to introduce and use abstractions, something humans excel at. Work in cognitive science has demonstrated that humans tend towards higher levels of abstraction when engaged in collaborative task-oriented communication, enabling gradually shorter and more information-efficient utterances. Several computational methods have attempted to replicate this phenomenon, but all make unrealistic simplifying assumptions about how abstractions are introduced and learned. Our method, Abstractions for Communicating Efficiently (ACE), overcomes these limitations through a neuro-symbolic approach. On the symbolic side, we draw on work from library learning for proposing abstractions. We combine this with neural methods for communication and reinforcement learning, via a novel use of bandit algorithms for controlling the exploration and exploitation trade-off in introducing new abstractions. ACE exhibits similar tendencies to humans on a collaborative construction task from the cognitive science literature, where one agent (the architect) instructs the other (the builder) to reconstruct a scene of block-buildings. ACE results in the emergence of an efficient language as a by-product of collaborative communication. Beyond providing mechanistic insights into human communication, our work serves as a first step to providing conversational agents with the ability for human-like communicative abstractions.<|reference_end|>
|
arxiv
|
@article{thomas2024ace:,
title={ACE: Abstractions for Communicating Efficiently},
author={Jonathan D. Thomas, Andrea Silvi, Devdatt Dubhashi, Vikas Garg, Moa
Johansson},
journal={arXiv preprint arXiv:2409.20120},
year={2024},
archivePrefix={arXiv},
eprint={2409.20120},
primaryClass={cs.CL}
}
|
thomas2024ace:
|
arxiv-663521
|
2409.20122
|
Training a Computer Vision Model for Commercial Bakeries with Primarily Synthetic Images
|
<|reference_start|>Training a Computer Vision Model for Commercial Bakeries with Primarily Synthetic Images: In the food industry, reprocessing returned product is a vital step to increase resource efficiency. [SBB23] presented an AI application that automates the tracking of returned bread buns. We extend their work by creating an expanded dataset comprising 2432 images and a wider range of baked goods. To increase model robustness, we use generative models pix2pix and CycleGAN to create synthetic images. We train the state-of-the-art object detection models YOLOv9 and YOLOv8 on our detection task. Our overall best-performing model achieved an average precision AP@0.5 of 90.3% on our test set.<|reference_end|>
|
arxiv
|
@article{schmitt2024training,
title={Training a Computer Vision Model for Commercial Bakeries with Primarily
Synthetic Images},
author={Thomas H. Schmitt, Maximilian Bundscherer, Tobias Bocklet},
journal={arXiv preprint arXiv:2409.20122},
year={2024},
archivePrefix={arXiv},
eprint={2409.20122},
primaryClass={cs.CV cs.LG}
}
|
schmitt2024training
|
arxiv-663522
|
2409.20123
|
DBNode: A Decentralized Storage System for Big Data Storage in Consortium Blockchains
|
<|reference_start|>DBNode: A Decentralized Storage System for Big Data Storage in Consortium Blockchains: Storing big data directly on a blockchain poses a substantial burden due to the need to maintain a consistent ledger across all nodes. Numerous studies in decentralized storage systems have been conducted to tackle this particular challenge. Most state-of-the-art research concentrates on developing a general storage system that can accommodate diverse blockchain categories. However, it is essential to recognize the unique attributes of a consortium blockchain, such as data privacy and access control. Beyond ensuring high performance, these specific needs are often overlooked by general storage systems. This paper proposes a decentralized storage system for Hyperledger Fabric, which is a well-known consortium blockchain. First, we employ erasure coding to partition files, subsequently organizing these chunks into a hierarchical structure that fosters efficient and dependable data storage. Second, we design a two-layer hash-slots mechanism and a mirror strategy, enabling high data availability. Third, we design an access control mechanism based on a smart contract to regulate file access.<|reference_end|>
|
arxiv
|
@article{dadkhah2024dbnode:,
title={DBNode: A Decentralized Storage System for Big Data Storage in
Consortium Blockchains},
author={Narges Dadkhah, Xuyang Ma, Katinka Wolter, Gerhard Wunder},
journal={arXiv preprint arXiv:2409.20123},
year={2024},
doi={10.1109/ICBDA61153.2024.10607167},
archivePrefix={arXiv},
eprint={2409.20123},
primaryClass={cs.CR cs.DC}
}
|
dadkhah2024dbnode:
|
arxiv-663523
|
2409.20125
|
Sliding Block (Slick) Hashing: An Implementation & Benchmarks
|
<|reference_start|>Sliding Block (Slick) Hashing: An Implementation & Benchmarks: With hash tables being one of the most used data structures, Lehmann, Sanders and Walzer propose a novel, light-weight hash table, referred to as Slick Hash. Their idea is to hit a sweet spot between space consumption and speed. Building on the theoretical ideas by the authors, an implementation and experiments are required to evaluate the practical performance of Slick Hash. This work contributes to fulfilling this requirement by providing a basic implementation of Slick Hash, an analysis of its performance, and an evaluation of the entry deletion, focusing on the impact of backyard cleaning. The findings are discussed, and a conclusion is drawn.<|reference_end|>
|
arxiv
|
@article{oberst2024sliding,
title={Sliding Block (Slick) Hashing: An Implementation & Benchmarks},
author={Jan Oberst},
journal={arXiv preprint arXiv:2409.20125},
year={2024},
archivePrefix={arXiv},
eprint={2409.20125},
primaryClass={cs.DS}
}
|
oberst2024sliding
|
arxiv-663524
|
2409.20126
|
DCAST: Diverse Class-Aware Self-Training Mitigates Selection Bias for Fairer Learning
|
<|reference_start|>DCAST: Diverse Class-Aware Self-Training Mitigates Selection Bias for Fairer Learning: Fairness in machine learning seeks to mitigate model bias against individuals based on sensitive features such as sex or age, often caused by an uneven representation of the population in the training data due to selection bias. Notably, bias unascribed to sensitive features is challenging to identify and typically goes undiagnosed, despite its prominence in complex high-dimensional data from fields like computer vision and molecular biomedicine. Strategies to mitigate unidentified bias and evaluate mitigation methods are crucially needed, yet remain underexplored. We introduce: (i) Diverse Class-Aware Self-Training (DCAST), model-agnostic mitigation aware of class-specific bias, which promotes sample diversity to counter confirmation bias of conventional self-training while leveraging unlabeled samples for an improved representation of the underlying population; (ii) hierarchy bias, multivariate and class-aware bias induction without prior knowledge. Models learned with DCAST showed improved robustness to hierarchy and other biases across eleven datasets, against conventional self-training and six prominent domain adaptation techniques. Advantage was largest on multi-class classification, emphasizing DCAST as a promising strategy for fairer learning in different contexts.<|reference_end|>
|
arxiv
|
@article{tepeli2024dcast:,
title={DCAST: Diverse Class-Aware Self-Training Mitigates Selection Bias for
Fairer Learning},
author={Yasin I. Tepeli, Joana P. Gon{\c{c}}alves},
journal={arXiv preprint arXiv:2409.20126},
year={2024},
archivePrefix={arXiv},
eprint={2409.20126},
primaryClass={cs.LG cs.CY}
}
|
tepeli2024dcast:
|
arxiv-663525
|
2409.20127
|
PuzzleBoard: A New Camera Calibration Pattern with Position Encoding
|
<|reference_start|>PuzzleBoard: A New Camera Calibration Pattern with Position Encoding: Accurate camera calibration is a well-known and widely used task in computer vision that has been researched for decades. However, the standard approach based on checkerboard calibration patterns has some drawbacks that limit its applicability. For example, the calibration pattern must be completely visible without any occlusions. Alternative solutions such as ChArUco boards allow partial occlusions, but require a higher camera resolution due to the fine details of the position encoding. We present a new calibration pattern that combines the advantages of checkerboard calibration patterns with a lightweight position coding that can be decoded at very low resolutions. The decoding algorithm includes error correction and is computationally efficient. The whole approach is backward compatible with both checkerboard calibration patterns and several checkerboard calibration algorithms. Furthermore, the method can be used not only for camera calibration but also for camera pose estimation and marker-based object localization tasks.<|reference_end|>
|
arxiv
|
@article{stelldinger2024puzzleboard:,
title={PuzzleBoard: A New Camera Calibration Pattern with Position Encoding},
author={Peer Stelldinger, Nils Sch{\"o}nherr, Justus Biermann},
journal={arXiv preprint arXiv:2409.20127},
year={2024},
archivePrefix={arXiv},
eprint={2409.20127},
primaryClass={cs.CV}
}
|
stelldinger2024puzzleboard:
|
arxiv-663526
|
2409.20130
|
Reevaluation of Inductive Link Prediction
|
<|reference_start|>Reevaluation of Inductive Link Prediction: Within this paper, we show that the evaluation protocol currently used for inductive link prediction is heavily flawed as it relies on ranking the true entity in a small set of randomly sampled negative entities. Due to the limited size of the set of negatives, a simple rule-based baseline, which simply ranks entities higher based on the validity of their type, can achieve state-of-the-art results. As a consequence of these insights, we reevaluate current approaches for inductive link prediction on several benchmarks using the link prediction protocol usually applied to the transductive setting. As some inductive methods suffer from scalability issues when evaluated in this setting, we additionally propose and apply an improved sampling protocol, which does not suffer from the problem mentioned above. The results of our evaluation differ drastically from the results reported so far.<|reference_end|>
|
arxiv
|
@article{ott2024reevaluation,
title={Reevaluation of Inductive Link Prediction},
author={Simon Ott, Christian Meilicke, Heiner Stuckenschmidt},
journal={In: Rules and Reasoning. RuleML+RR 2024. Lecture Notes in Computer
Science, vol 15183. Springer, Cham (2024)},
year={2024},
doi={10.1007/978-3-031-72407-7_7},
archivePrefix={arXiv},
eprint={2409.20130},
primaryClass={cs.AI cs.LG}
}
|
ott2024reevaluation
|
arxiv-663527
|
2409.20132
|
Machine Learning in Industrial Quality Control of Glass Bottle Prints
|
<|reference_start|>Machine Learning in Industrial Quality Control of Glass Bottle Prints: In industrial manufacturing of glass bottles, quality control of bottle prints is necessary as numerous factors can negatively affect the printing process. Even minor defects in the bottle prints must be detected despite reflections in the glass or manufacturing-related deviations. In cooperation with our medium-sized industrial partner, two ML-based approaches for quality control of these bottle prints were developed and evaluated, which can also be used in this challenging scenario. Our first approach utilized different filters to suppress reflections (e.g. Sobel or Canny) and image quality metrics for image comparison (e.g. MSE or SSIM) as features for different supervised classification models (e.g. SVM or k-Neighbors), which resulted in an accuracy of 84%. The images were aligned based on the ORB algorithm, which allowed us to estimate the rotations of the prints, which may serve as an indicator for anomalies in the manufacturing process. In our second approach, we fine-tuned different pre-trained CNN models (e.g. ResNet or VGG) for binary classification, which resulted in an accuracy of 87%. Utilizing Grad-Cam on our fine-tuned ResNet-34, we were able to localize and visualize frequently defective bottle print regions. This method allowed us to provide insights that could be used to optimize the actual manufacturing process. This paper also describes our general approach and the challenges we encountered in practice with data collection during ongoing production, unsupervised preselection, and labeling.<|reference_end|>
|
arxiv
|
@article{bundscherer2024machine,
title={Machine Learning in Industrial Quality Control of Glass Bottle Prints},
author={Maximilian Bundscherer, Thomas H. Schmitt, Tobias Bocklet},
journal={arXiv preprint arXiv:2409.20132},
year={2024},
archivePrefix={arXiv},
eprint={2409.20132},
primaryClass={cs.CV cs.LG}
}
|
bundscherer2024machine
|
arxiv-663528
|
2409.20133
|
Improving Achievability of Cache-Aided Private Variable-Length Coding with Zero Leakage
|
<|reference_start|>Improving Achievability of Cache-Aided Private Variable-Length Coding with Zero Leakage: A statistical cache-aided compression problem with a privacy constraint is studied, where a server has access to a database of $N$ files, $(Y_1,...,Y_N)$, each of size $F$ bits and is linked through a shared channel to $K$ users, where each has access to a local cache memory of size $MF$ bits. During the placement phase, the server fills the users' caches without prior knowledge of their demands, while the delivery phase takes place after the users send their demands to the server. We assume that each file $Y_i$ in the database is arbitrarily correlated with a private attribute $X$, and an adversary is assumed to have access to the shared channel. The users and the server have access to a shared key $W$. The goal is to design the cache contents and the delivered message $\mathcal{C}$ such that the average length of $\mathcal{C}$ is minimized, while satisfying: i. The response $\mathcal{C}$ does not reveal any information about $X$, i.e., $I(X;\mathcal{C})=0$; ii. User $i$ can decode its demand, $Y_{d_i}$, by using the shared key $W$, $\mathcal{C}$, and its local cache $Z_i$. In a previous work, we have proposed a variable-length coding scheme that combines privacy-aware compression with coded caching techniques. In this paper, we propose a new achievability scheme using the minimum entropy coupling concept and a greedy entropy-based algorithm. We show that the proposed scheme improves the previous results. Moreover, considering two special cases, we improve the obtained bounds using the common information concept.<|reference_end|>
|
arxiv
|
@article{zamani2024improving,
title={Improving Achievability of Cache-Aided Private Variable-Length Coding
with Zero Leakage},
author={Amirreza Zamani, Mikael Skoglund},
journal={arXiv preprint arXiv:2409.20133},
year={2024},
archivePrefix={arXiv},
eprint={2409.20133},
primaryClass={cs.IT math.IT}
}
|
zamani2024improving
|
arxiv-663529
|
2409.20134
|
DRLinSPH: An open-source platform using deep reinforcement learning and SPHinXsys for fluid-structure-interaction problems
|
<|reference_start|>DRLinSPH: An open-source platform using deep reinforcement learning and SPHinXsys for fluid-structure-interaction problems: Fluid-structure interaction (FSI) problems are characterized by strong nonlinearities arising from complex interactions between fluids and structures. These pose significant challenges for traditional control strategies in optimizing structural motion, often leading to suboptimal performance. In contrast, deep reinforcement learning (DRL), through agent interactions within numerical simulation environments and the approximation of control policies using deep neural networks (DNNs), has shown considerable promise in addressing high-dimensional FSI problems. Additionally, smoothed particle hydrodynamics (SPH) offers a flexible and efficient computational approach for modeling large deformations, fractures, and complex interface movements inherent in FSI, outperforming traditional grid-based methods. In this work, we present DRLinSPH, an open-source Python platform that integrates the SPH-based numerical environment provided by the open-source software SPHinXsys with the mature DRL platform Tianshou to enable parallel training for FSI problems. DRLinSPH has been successfully applied to four FSI scenarios: sloshing suppression using rigid and elastic baffles, optimization of wave energy capture through an oscillating wave surge converter (OWSC), and muscle-driven fish swimming in vortices. The results demonstrate the platform's accuracy, stability, and scalability, highlighting its potential to advance industrial solutions for complex FSI challenges.<|reference_end|>
|
arxiv
|
@article{ye2024drlinsph:,
title={DRLinSPH: An open-source platform using deep reinforcement learning and
SPHinXsys for fluid-structure-interaction problems},
author={Mai Ye and Hao Ma and Yaru Ren and Chi Zhang and Oskar J. Haidn and
Xiangyu Hu},
journal={arXiv preprint arXiv:2409.20134},
year={2024},
archivePrefix={arXiv},
eprint={2409.20134},
primaryClass={cs.CE}
}
|
ye2024drlinsph:
|
arxiv-663530
|
2409.20135
|
Federated Instruction Tuning of LLMs with Domain Coverage Augmentation
|
<|reference_start|>Federated Instruction Tuning of LLMs with Domain Coverage Augmentation: Federated Domain-specific Instruction Tuning (FedDIT) utilizes limited cross-client private data together with server-side public data for instruction augmentation, ultimately boosting model performance within specific domains. To date, the factors affecting FedDIT remain unclear, and existing instruction augmentation methods primarily focus on the centralized setting without considering distributed environments. Our experiments reveal that the cross-client domain coverage, rather than data heterogeneity, drives model performance in FedDIT. In response, we propose FedDCA, which optimizes domain coverage through greedy client center selection and retrieval-based augmentation. For client-side computational efficiency and system scalability, FedDCA$^*$, the variant of FedDCA, utilizes heterogeneous encoders with server-side feature alignment. Extensive experiments across four distinct domains (code, medical, financial, and mathematical) substantiate the effectiveness of both methods. Additionally, we investigate privacy preservation against memory extraction attacks utilizing various amounts of public data. Results show that there is no significant correlation between the volume of public data and the privacy-preserving capability. However, as the fine-tuning rounds increase, the risk of privacy leakage reduces or converges.<|reference_end|>
|
arxiv
|
@article{wang2024federated,
title={Federated Instruction Tuning of LLMs with Domain Coverage Augmentation},
author={Zezhou Wang, Yaxin Du, Zhuzhong Qian, Siheng Chen},
journal={arXiv preprint arXiv:2409.20135},
year={2024},
archivePrefix={arXiv},
eprint={2409.20135},
primaryClass={cs.LG cs.CL cs.DC}
}
|
wang2024federated
|
arxiv-663531
|
2409.20137
|
Segmenting Wood Rot using Computer Vision Models
|
<|reference_start|>Segmenting Wood Rot using Computer Vision Models: In the woodworking industry, a huge amount of effort has to be invested into the initial quality assessment of the raw material. In this study, we present an AI model to detect, quantify and localize defects on wooden logs. This model aims to both automate the quality control process and provide a more consistent and reliable quality assessment. For this purpose, a dataset of 1424 sample images of wood logs is created. A total of 5 annotators possessing different levels of expertise are involved in dataset creation. An inter-annotator agreement analysis is conducted to analyze the impact of expertise on the annotation task and to highlight subjective differences in annotator judgement. We explore, train and fine-tune the state-of-the-art InternImage and ONE-PEACE architectures for semantic segmentation. The best model created achieves an average IoU of 0.71, and shows detection and quantification capabilities close to the human annotators.<|reference_end|>
|
arxiv
|
@article{kammerbauer2024segmenting,
title={Segmenting Wood Rot using Computer Vision Models},
author={Roland Kammerbauer and Thomas H. Schmitt and Tobias Bocklet},
journal={arXiv preprint arXiv:2409.20137},
year={2024},
archivePrefix={arXiv},
eprint={2409.20137},
primaryClass={cs.CV}
}
|
kammerbauer2024segmenting
|
arxiv-663532
|
2409.20138
|
Constraint Guided Model Quantization of Neural Networks
|
<|reference_start|>Constraint Guided Model Quantization of Neural Networks: Deploying neural networks on the edge has become increasingly important as deep learning is being applied in a growing number of applications. Edge devices typically have limited computational resources, since larger computational resources result in higher energy consumption, which is impractical for these devices. To reduce the complexity of neural networks, a wide range of quantization methods has been proposed in recent years. This work proposes Constraint Guided Model Quantization (CGMQ), a quantization-aware training algorithm that uses an upper bound on the computational resources and reduces the bit-widths of the parameters of the neural network. Unlike prior work, CGMQ does not require hyperparameter tuning to produce a mixed-precision neural network that satisfies the predefined computational cost constraint. It is shown on MNIST that the performance of CGMQ is competitive with state-of-the-art quantization-aware training algorithms, while guaranteeing the satisfaction of the cost constraint.<|reference_end|>
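To make the notion of a computational cost constraint concrete, here is a toy greedy bit-width assignment that lowers the precision of the least sensitive layers until a budget is met. CGMQ itself enforces the constraint during quantization-aware training, so this post-hoc sketch is only an illustrative stand-in; the sensitivity proxy and all names are assumptions.

```python
def assign_bitwidths(layer_bits_cost, sensitivity, budget, bits=(8, 6, 4, 2)):
    """Toy mixed-precision assignment under a cost budget.

    layer_bits_cost[i]: cost of layer i per bit (e.g., its parameter count).
    sensitivity[i]: proxy for accuracy loss when layer i is quantized.
    """
    assign = [bits[0]] * len(layer_bits_cost)
    cost = lambda: sum(c * b for c, b in zip(layer_bits_cost, assign))
    order = sorted(range(len(assign)), key=lambda i: sensitivity[i])
    while cost() > budget:
        for i in order:  # lower the least sensitive layer that can still drop
            idx = bits.index(assign[i])
            if idx + 1 < len(bits):
                assign[i] = bits[idx + 1]
                break
        else:
            raise ValueError("budget unreachable even at the lowest bit-width")
    return assign
```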
|
arxiv
|
@article{vanbaelen2024constraint,
title={Constraint Guided Model Quantization of Neural Networks},
author={Quinten Van Baelen and Peter Karsmakers},
journal={arXiv preprint arXiv:2409.20138},
year={2024},
archivePrefix={arXiv},
eprint={2409.20138},
primaryClass={cs.LG}
}
|
vanbaelen2024constraint
|
arxiv-663533
|
2409.20139
|
Characterizing Model Robustness via Natural Input Gradients
|
<|reference_start|>Characterizing Model Robustness via Natural Input Gradients: Adversarially robust models are locally smooth around each data sample so that small perturbations cannot drastically change model outputs. In modern systems, such smoothness is usually obtained via Adversarial Training, which explicitly enforces models to perform well on perturbed examples. In this work, we show the surprising effectiveness of instead regularizing the gradient with respect to model inputs on natural examples only. Penalizing input Gradient Norm is commonly believed to be a much inferior approach. Our analyses identify that the performance of Gradient Norm regularization critically depends on the smoothness of activation functions, and that it is in fact extremely effective on modern vision transformers that adopt smooth activations over piecewise linear ones (e.g., ReLU), contrary to prior belief. On ImageNet-1k, Gradient Norm training achieves more than 90% of the performance of state-of-the-art PGD-3 Adversarial Training (52% vs. 56%), while using only 60% of its computation cost and no complex adversarial optimization. Our analyses also highlight the relationship between model robustness and properties of natural input gradients, such as asymmetric sample and channel statistics. Surprisingly, we find model robustness can be significantly improved by simply regularizing its gradients to concentrate on image edges without explicit conditioning on the gradient norm.<|reference_end|>
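The regularizer itself is compact: penalize the norm of the loss gradient with respect to natural inputs via double backpropagation. A minimal PyTorch sketch (the model, batch, and weight lam are placeholders):

```python
import torch
import torch.nn.functional as F

def grad_norm_loss(model, x, y, lam=1.0):
    """Cross-entropy plus a penalty on the input-gradient norm,
    computed on natural (unperturbed) examples only."""
    x = x.clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)
    # create_graph=True so the penalty itself is differentiable.
    (g,) = torch.autograd.grad(ce, x, create_graph=True)
    penalty = g.flatten(1).norm(dim=1).mean()  # per-sample L2 norm
    return ce + lam * penalty
```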
|
arxiv
|
@article{rodríguez-muñoz2024characterizing,
title={Characterizing Model Robustness via Natural Input Gradients},
author={Adrián Rodríguez-Muñoz and Tongzhou Wang and Antonio Torralba},
journal={arXiv preprint arXiv:2409.20139},
year={2024},
archivePrefix={arXiv},
eprint={2409.20139},
primaryClass={cs.LG cs.CV}
}
|
rodríguez-muñoz2024characterizing
|
arxiv-663534
|
2409.20140
|
RISE-SDF: a Relightable Information-Shared Signed Distance Field for Glossy Object Inverse Rendering
|
<|reference_start|>RISE-SDF: a Relightable Information-Shared Signed Distance Field for Glossy Object Inverse Rendering: In this paper, we propose a novel end-to-end relightable neural inverse rendering system that achieves high-quality reconstruction of geometry and material properties, thus enabling high-quality relighting. The cornerstone of our method is a two-stage approach for learning a better factorization of scene parameters. In the first stage, we develop a reflection-aware radiance field using a neural signed distance field (SDF) as the geometry representation and deploy an MLP (multilayer perceptron) to estimate indirect illumination. In the second stage, we introduce a novel information-sharing network structure to jointly learn the radiance field and the physically based factorization of the scene. For the physically based factorization, to reduce the noise caused by Monte Carlo sampling, we apply a split-sum approximation with a simplified Disney BRDF and cube mipmap as the environment light representation. In the relighting phase, to enhance the quality of indirect illumination, we propose a second split-sum algorithm to trace secondary rays under the split-sum rendering framework. Furthermore, there is no dataset or protocol available to quantitatively evaluate the inverse rendering performance for glossy objects. To assess the quality of material reconstruction and relighting, we have created a new dataset with ground truth BRDF parameters and relighting results. Our experiments demonstrate that our algorithm achieves state-of-the-art performance in inverse rendering and relighting, with particularly strong results in the reconstruction of highly reflective objects.<|reference_end|>
|
arxiv
|
@article{zhang2024rise-sdf:,
title={RISE-SDF: a Relightable Information-Shared Signed Distance Field for
Glossy Object Inverse Rendering},
author={Deheng Zhang and Jingyu Wang and Shaofei Wang and Marko Mihajlovic and
Sergey Prokudin and Hendrik P.A. Lensch and Siyu Tang},
journal={arXiv preprint arXiv:2409.20140},
year={2024},
archivePrefix={arXiv},
eprint={2409.20140},
primaryClass={cs.CV cs.GR}
}
|
zhang2024rise-sdf:
|
arxiv-663535
|
2409.20142
|
Signal Processing for Haptic Surface Modeling: a Review
|
<|reference_start|>Signal Processing for Haptic Surface Modeling: a Review: Haptic feedback has been integrated into Virtual and Augmented Reality, complementing acoustic and visual information and contributing to an all-round immersive experience in multiple fields, spanning from the medical domain to entertainment and gaming. Haptic technologies involve complex cross-disciplinary research that encompasses sensing, data representation, interactive rendering, perception, and quality of experience. The standard processing pipeline consists of (I) sensing physical features in the real world using a transducer, (II) modeling and storing the collected information in some digital format, (III) communicating the information, and finally, (IV) rendering the haptic information through appropriate devices, thus producing a user experience (V) perceptually close to the original physical world. Among these areas, sensing, rendering and perception have been deeply investigated and are the subject of different comprehensive surveys available in the literature. In contrast, research dealing with haptic surface modeling and data representation still lacks a comprehensive treatment. In this work, we aim to provide an overview of modeling and representation of haptic surfaces from a signal processing perspective, covering the aspects that lie in between haptic information acquisition on one side and rendering and perception on the other side. We analyze, categorize, and compare research papers that address haptic surface modeling and data representation, pointing out existing gaps and possible research directions.<|reference_end|>
|
arxiv
|
@article{stefani2024signal,
title={Signal Processing for Haptic Surface Modeling: a Review},
author={Antonio Luigi Stefani and Niccolò Bisagno and Andrea Rosani and
Nicola Conci and Francesco De Natale},
journal={arXiv preprint arXiv:2409.20142},
year={2024},
archivePrefix={arXiv},
eprint={2409.20142},
primaryClass={cs.MM}
}
|
stefani2024signal
|
arxiv-663536
|
2409.20146
|
VMAD: Visual-enhanced Multimodal Large Language Model for Zero-Shot Anomaly Detection
|
<|reference_start|>VMAD: Visual-enhanced Multimodal Large Language Model for Zero-Shot Anomaly Detection: Zero-shot anomaly detection (ZSAD) recognizes and localizes anomalies in previously unseen objects by establishing feature mapping between textual prompts and inspection images, demonstrating excellent research value in flexible industrial manufacturing. However, existing ZSAD methods are limited by closed-world settings, struggling to handle unseen defects with predefined prompts. Recently, adapting Multimodal Large Language Models (MLLMs) for Industrial Anomaly Detection (IAD) presents a viable solution. Unlike fixed-prompt methods, MLLMs exhibit a generative paradigm with open-ended text interpretation, enabling more adaptive anomaly analysis. However, this adaptation faces inherent challenges, as anomalies often manifest in fine-grained regions and exhibit minimal visual discrepancies from normal samples. To address these challenges, we propose a novel framework VMAD (Visual-enhanced MLLM Anomaly Detection) that enhances MLLM with visual-based IAD knowledge and fine-grained perception, simultaneously providing precise detection and comprehensive analysis of anomalies. Specifically, we design a Defect-Sensitive Structure Learning scheme that transfers patch-similarity cues from the visual branch to our MLLM for improved anomaly discrimination. In addition, we introduce a novel visual projector, Locality-enhanced Token Compression, which mines multi-level features in local contexts to enhance fine-grained detection. Furthermore, we introduce the Real Industrial Anomaly Detection (RIAD) dataset, a comprehensive IAD dataset with detailed anomaly descriptions and analyses, offering a valuable resource for MLLM-based IAD development. Extensive experiments on zero-shot benchmarks, including the MVTec-AD, Visa, WFDD, and RIAD datasets, demonstrate our superior performance over state-of-the-art methods. The code and dataset will be available soon.<|reference_end|>
|
arxiv
|
@article{deng2024vmad:,
title={VMAD: Visual-enhanced Multimodal Large Language Model for Zero-Shot
Anomaly Detection},
author={Huilin Deng and Hongchen Luo and Wei Zhai and Yang Cao and Yu Kang},
journal={arXiv preprint arXiv:2409.20146},
year={2024},
archivePrefix={arXiv},
eprint={2409.20146},
primaryClass={cs.CV}
}
|
deng2024vmad:
|
arxiv-663537
|
2409.20147
|
Classification of Radiological Text in Small and Imbalanced Datasets in a Non-English Language
|
<|reference_start|>Classification of Radiological Text in Small and Imbalanced Datasets in a Non-English Language: Natural language processing (NLP) in the medical domain can underperform in real-world applications involving small datasets in a non-English language with few labeled samples and imbalanced classes. There is yet no consensus on how to approach this problem. We evaluated a set of NLP models including BERT-like transformers, few-shot learning with sentence transformers (SetFit), and prompted large language models (LLM), using three datasets of radiology reports on magnetic resonance images of epilepsy patients in Danish, a low-resource language. Our results indicate that BERT-like models pretrained in the target domain of radiology reports currently offer the optimal performances for this scenario. Notably, the SetFit and LLM models underperformed compared to BERT-like models, with LLM performing the worst. Importantly, none of the models investigated was sufficiently accurate to allow for text classification without any supervision. However, they show potential for data filtering, which could reduce the amount of manual labeling required.<|reference_end|>
|
arxiv
|
@article{beliveau2024classification,
title={Classification of Radiological Text in Small and Imbalanced Datasets in
a Non-English Language},
author={Vincent Beliveau and Helene Kaas and Martin Prener and Claes N. Ladefoged and
Desmond Elliott and Gitte M. Knudsen and Lars H. Pinborg and Melanie Ganz},
journal={arXiv preprint arXiv:2409.20147},
year={2024},
archivePrefix={arXiv},
eprint={2409.20147},
primaryClass={cs.CL cs.AI}
}
|
beliveau2024classification
|
arxiv-663538
|
2409.20148
|
Pragma driven shared memory parallelism in Zig by supporting OpenMP loop directives
|
<|reference_start|>Pragma driven shared memory parallelism in Zig by supporting OpenMP loop directives: The Zig programming language, which is designed to provide performance and safety as first class concerns, has become popular in recent years. Given that Zig is built upon LLVM, and so enjoys many of the benefits provided by that ecosystem, including access to a rich set of backends, Zig has significant potential for high performance workloads. However, it has yet to gain acceptance in HPC, and one of the reasons for this is that support for pragma driven shared memory parallelism is missing. In this paper we describe enhancing the Zig compiler to add support for OpenMP loop directives, and then explore performance using NASA's NAS Parallel Benchmark (NPB) suite. We demonstrate that not only does our integration of OpenMP with Zig scale comparably to the Fortran and C reference implementations of NPB, but that Zig furthermore provides up to a 1.25 times performance increase compared to Fortran.<|reference_end|>
|
arxiv
|
@article{kacs2024pragma,
title={Pragma driven shared memory parallelism in Zig by supporting OpenMP loop
directives},
author={David Kacs and Joseph Lee and Justs Zarins and Nick Brown},
journal={arXiv preprint arXiv:2409.20148},
year={2024},
archivePrefix={arXiv},
eprint={2409.20148},
primaryClass={cs.DC cs.PL}
}
|
kacs2024pragma
|
arxiv-663539
|
2409.20149
|
1 Trillion Token (1TT) Platform: A Novel Framework for Efficient Data Sharing and Compensation in Large Language Models
|
<|reference_start|>1 Trillion Token (1TT) Platform: A Novel Framework for Efficient Data Sharing and Compensation in Large Language Models: In this paper, we propose the 1 Trillion Token Platform (1TT Platform), a novel framework designed to facilitate efficient data sharing with a transparent and equitable profit-sharing mechanism. The platform fosters collaboration between data contributors, who provide otherwise non-disclosed datasets, and a data consumer, who utilizes these datasets to enhance their own services. Data contributors are compensated in monetary terms, receiving a share of the revenue generated by the services of the data consumer. The data consumer is committed to sharing a portion of the revenue with contributors, according to predefined profit-sharing arrangements. By incorporating a transparent profit-sharing paradigm to incentivize large-scale data sharing, the 1TT Platform creates a collaborative environment to drive the advancement of NLP and LLM technologies.<|reference_end|>
|
arxiv
|
@article{park20241,
title={1 Trillion Token (1TT) Platform: A Novel Framework for Efficient Data
Sharing and Compensation in Large Language Models},
author={Chanjun Park and Hyunsoo Ha and Jihoo Kim and Yungi Kim and Dahyun Kim and
Sukyung Lee and Seonghoon Yang},
journal={arXiv preprint arXiv:2409.20149},
year={2024},
archivePrefix={arXiv},
eprint={2409.20149},
primaryClass={cs.CL cs.AI}
}
|
park20241
|
arxiv-663540
|
2409.20154
|
GravMAD: Grounded Spatial Value Maps Guided Action Diffusion for Generalized 3D Manipulation
|
<|reference_start|>GravMAD: Grounded Spatial Value Maps Guided Action Diffusion for Generalized 3D Manipulation: Robots' ability to follow language instructions and execute diverse 3D tasks is vital in robot learning. Traditional imitation learning-based methods perform well on seen tasks but struggle with novel, unseen ones due to variability. Recent approaches leverage large foundation models to assist in understanding novel tasks, thereby mitigating this issue. However, these methods lack a task-specific learning process, which is essential for an accurate understanding of 3D environments, often leading to execution failures. In this paper, we introduce GravMAD, a sub-goal-driven, language-conditioned action diffusion framework that combines the strengths of imitation learning and foundation models. Our approach breaks tasks into sub-goals based on language instructions, allowing auxiliary guidance during both training and inference. During training, we introduce Sub-goal Keypose Discovery to identify key sub-goals from demonstrations. Inference differs from training, as there are no demonstrations available, so we use pre-trained foundation models to bridge the gap and identify sub-goals for the current task. In both phases, GravMaps are generated from sub-goals, providing flexible 3D spatial guidance compared to fixed 3D positions. Empirical evaluations on RLBench show that GravMAD significantly outperforms state-of-the-art methods, with a 28.63% improvement on novel tasks and a 13.36% gain on tasks encountered during training. These results demonstrate GravMAD's strong multi-task learning and generalization in 3D manipulation. Video demonstrations are available at: https://gravmad.github.io.<|reference_end|>
|
arxiv
|
@article{chen2024gravmad:,
title={GravMAD: Grounded Spatial Value Maps Guided Action Diffusion for
Generalized 3D Manipulation},
author={Yangtao Chen and Zixuan Chen and Junhui Yin and Jing Huo and Pinzhuo Tian and
Jieqi Shi and Yang Gao},
journal={arXiv preprint arXiv:2409.20154},
year={2024},
archivePrefix={arXiv},
eprint={2409.20154},
primaryClass={cs.RO}
}
|
chen2024gravmad:
|
arxiv-663541
|
2409.20156
|
ASTRA: Accurate and Scalable ANNS-based Training of Extreme Classifiers
|
<|reference_start|>ASTRA: Accurate and Scalable ANNS-based Training of Extreme Classifiers: "Extreme Classification" (or XC) is the task of annotating data points (queries) with relevant labels (documents), from an extremely large set of $L$ possible labels, arising in search and recommendations. The most successful deep learning paradigm that has emerged over the last decade or so for XC is to embed the queries (and labels) using a deep encoder (e.g. DistilBERT), and use linear classifiers on top of the query embeddings. This architecture is of appeal because it enables millisecond-time inference using approximate nearest neighbor search (ANNS). The key question is how to design training algorithms that are accurate as well as scale to $O(100M)$ labels on a limited number of GPUs. State-of-the-art XC techniques that demonstrate high accuracies (e.g., DEXML, Renée, DEXA) on standard datasets have per-epoch training time that scales as $O(L)$ or employ expensive negative sampling strategies, which are prohibitive in XC scenarios. In this work, we develop an accurate and scalable XC algorithm ASTRA with two key observations: (a) building an ANNS index on the classifier vectors and retrieving hard negatives using the classifiers aligns the negative sampling strategy to the loss function optimized; (b) keeping the ANNS indices current as the classifiers change through the epochs is prohibitively expensive, while using stale negatives (refreshed periodically) results in poor accuracy; to remedy this, we propose a negative sampling strategy that uses a mixture of importance sampling and uniform sampling. By extensive evaluation on standard XC as well as proprietary datasets with 120M labels, we demonstrate that ASTRA achieves SOTA precision, while reducing training time by 4x-15x relative to the second best.<|reference_end|>
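Observation (b)'s mixture sampler is easy to sketch: take some negatives from an ANNS index over the classifier vectors (importance sampling) and the rest uniformly. The ann_index.query interface and all parameters below are assumptions for illustration, not ASTRA's implementation.

```python
import numpy as np

def sample_negatives(query_emb, n_labels, ann_index, n_neg, p_hard=0.5, rng=None):
    """Mixture of importance sampling (hard negatives from an ANNS index
    built over classifier vectors) and uniform sampling over labels."""
    rng = rng or np.random.default_rng()
    n_hard = int(rng.binomial(n_neg, p_hard))
    hard = (np.asarray(ann_index.query(query_emb, k=n_hard), dtype=int)
            if n_hard else np.empty(0, dtype=int))  # hypothetical ANNS call
    uniform = rng.integers(0, n_labels, size=n_neg - n_hard)
    return np.concatenate([hard, uniform])
```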
|
arxiv
|
@article{mehta2024astra:,
title={ASTRA: Accurate and Scalable ANNS-based Training of Extreme Classifiers},
author={Sonu Mehta and Jayashree Mohan and Nagarajan Natarajan and
Ramachandran Ramjee and Manik Varma},
journal={arXiv preprint arXiv:2409.20156},
year={2024},
archivePrefix={arXiv},
eprint={2409.20156},
primaryClass={cs.LG cs.IR}
}
|
mehta2024astra:
|
arxiv-663542
|
2409.20157
|
RSVP: Beyond Weisfeiler Lehman Graph Isomorphism Test
|
<|reference_start|>RSVP: Beyond Weisfeiler Lehman Graph Isomorphism Test: Graph isomorphism, a classical algorithmic problem, determines whether two input graphs are structurally identical or not. Interestingly, it is one of the few problems that is not yet known to belong to either the P or NP-complete complexity classes. As such, intelligent search-space pruning based strategies were proposed for developing isomorphism testing solvers like nauty and bliss, which are still, unfortunately, exponential in the worst-case scenario. Thus, the polynomial-time Weisfeiler-Lehman (WL) isomorphism testing heuristic, based on colour refinement, has been widely adopted in the literature. However, WL fails for multiple classes of non-isomorphic graph instances such as strongly regular graphs, block structures, and switched edges, among others. In this paper, we propose a novel polynomial-time graph isomorphism testing heuristic, RSVP, and depict its enhanced discriminative power compared to the Weisfeiler-Lehman approach for several challenging classes of graphs. Bounded by a run-time complexity of O(m^2+mn^2+n^3) (where n and m are the number of vertices and edges respectively), we show that RSVP can identify non-isomorphism in several 'hard' graph instance classes including Miyazaki, Paulus, cubic hypohamiltonian, strongly regular, Latin series and Steiner triple system graphs, where the 3-WL test fails. Similar to the WL test, our proposed algorithm is prone to only one-sided errors, where isomorphic graphs will never be determined to be non-isomorphic, although the reverse can happen.<|reference_end|>
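For context, the 1-WL colour refinement that RSVP is measured against fits in a few lines. A sketch only (note that hash() is process-local, so compare histograms computed in the same run):

```python
from collections import Counter

def wl_histogram(adj, rounds=3):
    """1-WL colour refinement: repeatedly hash each vertex's colour with
    the sorted multiset of its neighbours' colours. Differing final
    histograms certify non-isomorphism; equal ones are inconclusive.
    adj: dict mapping each vertex to an iterable of neighbours."""
    colour = {v: 0 for v in adj}
    for _ in range(rounds):
        colour = {
            v: hash((colour[v],
                     tuple(sorted(Counter(colour[u] for u in adj[v]).items()))))
            for v in adj
        }
    return Counter(colour.values())
```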
|
arxiv
|
@article{dutta2024rsvp:,
title={RSVP: Beyond Weisfeiler Lehman Graph Isomorphism Test},
author={Sourav Dutta and Arnab Bhattacharya},
journal={arXiv preprint arXiv:2409.20157},
year={2024},
archivePrefix={arXiv},
eprint={2409.20157},
primaryClass={cs.DS}
}
|
dutta2024rsvp:
|
arxiv-663543
|
2409.20158
|
Professor X: Manipulating EEG BCI with Invisible and Robust Backdoor Attack
|
<|reference_start|>Professor X: Manipulating EEG BCI with Invisible and Robust Backdoor Attack: While electroencephalogram (EEG) based brain-computer interface (BCI) has been widely used for medical diagnosis, health care, and device control, the safety of EEG BCI has long been neglected. In this paper, we propose Professor X, an invisible and robust "mind-controller" that can arbitrarily manipulate the outputs of EEG BCI through backdoor attack, to alert the EEG community of the potential hazard. However, existing EEG attacks mainly focus on single-target class attacks, and they either require engaging the training stage of the target BCI, or fail to maintain high stealthiness. Addressing these limitations, Professor X exploits a three-stage clean label poisoning attack: 1) selecting one trigger for each class; 2) learning optimal injecting EEG electrodes and frequencies strategy with reinforcement learning for each trigger; 3) generating poisoned samples by injecting the corresponding trigger's frequencies into poisoned data for each class by linearly interpolating the spectral amplitude of both data according to previously learned strategies. Experiments on datasets of three common EEG tasks demonstrate the effectiveness and robustness of Professor X, which also easily bypasses existing backdoor defenses.<|reference_end|>
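The third stage's frequency-domain injection can be pictured with a small FFT sketch: interpolate the spectral amplitude of a trigger into chosen channels within a band while keeping the original phase. The channel set, band, and blend ratio here are placeholders; the paper learns them with reinforcement learning.

```python
import numpy as np

def inject_spectral_trigger(eeg, trigger, channels, band_hz, alpha=0.3, fs=250):
    """Blend the trigger's spectral amplitude into selected channels of an
    EEG trial inside a frequency band, preserving the original phase.
    eeg, trigger: (n_channels, n_samples) arrays; fs: sampling rate in Hz."""
    poisoned = eeg.copy()
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    for ch in channels:
        spec = np.fft.rfft(eeg[ch])
        trig = np.fft.rfft(trigger[ch])
        amp = np.abs(spec)
        # Linear interpolation of amplitudes inside the band only.
        amp[band] = (1 - alpha) * amp[band] + alpha * np.abs(trig)[band]
        poisoned[ch] = np.fft.irfft(amp * np.exp(1j * np.angle(spec)),
                                    n=eeg.shape[1])
    return poisoned
```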
|
arxiv
|
@article{liu2024professor,
title={Professor X: Manipulating EEG BCI with Invisible and Robust Backdoor
Attack},
author={Xuan-Hao Liu and Xinhao Song and Dexuan He and Bao-Liang Lu and Wei-Long Zheng},
journal={arXiv preprint arXiv:2409.20158},
year={2024},
archivePrefix={arXiv},
eprint={2409.20158},
primaryClass={cs.CR cs.HC}
}
|
liu2024professor
|
arxiv-663544
|
2409.20163
|
MemSim: A Bayesian Simulator for Evaluating Memory of LLM-based Personal Assistants
|
<|reference_start|>MemSim: A Bayesian Simulator for Evaluating Memory of LLM-based Personal Assistants: LLM-based agents have been widely applied as personal assistants, capable of memorizing information from user messages and responding to personal queries. However, an objective and automatic evaluation of their memory capability is still lacking, largely due to the challenges in constructing reliable questions and answers (QAs) according to user messages. In this paper, we propose MemSim, a Bayesian simulator designed to automatically construct reliable QAs from generated user messages, simultaneously keeping their diversity and scalability. Specifically, we introduce the Bayesian Relation Network (BRNet) and a causal generation mechanism to mitigate the impact of LLM hallucinations on factual information, facilitating the automatic creation of an evaluation dataset. Based on MemSim, we generate a dataset in the daily-life scenario, named MemDaily, and conduct extensive experiments to assess the effectiveness of our approach. We also provide a benchmark for evaluating different memory mechanisms in LLM-based agents with the MemDaily dataset. To benefit the research community, we have released our project at https://github.com/nuster1128/MemSim.<|reference_end|>
|
arxiv
|
@article{zhang2024memsim:,
title={MemSim: A Bayesian Simulator for Evaluating Memory of LLM-based Personal
Assistants},
author={Zeyu Zhang and Quanyu Dai and Luyu Chen and Zeren Jiang and Rui Li and
Jieming Zhu and Xu Chen and Yi Xie and Zhenhua Dong and Ji-Rong Wen},
journal={arXiv preprint arXiv:2409.20163},
year={2024},
archivePrefix={arXiv},
eprint={2409.20163},
primaryClass={cs.AI cs.CL}
}
|
zhang2024memsim:
|
arxiv-663545
|
2409.20164
|
Erase, then Redraw: A Novel Data Augmentation Approach for Free Space Detection Using Diffusion Model
|
<|reference_start|>Erase, then Redraw: A Novel Data Augmentation Approach for Free Space Detection Using Diffusion Model: Data augmentation is one of the most common tools in deep learning, underpinning many recent advances including tasks such as classification, detection, and semantic segmentation. The standard approach to data augmentation involves simple transformations like rotation and flipping to generate new images. However, these new images often lack diversity along the main semantic dimensions within the data. Traditional data augmentation methods cannot alter high-level semantic attributes such as the presence of vehicles, trees, and buildings in a scene to enhance data diversity. In recent years, the rapid development of generative models has injected new vitality into the field of data augmentation. In this paper, we address the lack of diversity in data augmentation for the road detection task by using a pre-trained text-to-image diffusion model to parameterize image-to-image transformations. Our method involves editing images using these diffusion models to change their semantics. In essence, we achieve this goal by erasing instances of real objects from the original dataset and generating new instances with similar semantics in the erased regions using the diffusion model, thereby expanding the original dataset. We evaluate our approach on the KITTI road dataset and achieve the best results compared to other data augmentation methods, which demonstrates the effectiveness of our proposed method.<|reference_end|>
|
arxiv
|
@article{ma2024erase,
title={Erase, then Redraw: A Novel Data Augmentation Approach for Free Space
Detection Using Diffusion Model},
author={Fulong Ma and Weiqing Qi and Guoyang Zhao and Ming Liu and Jun Ma},
journal={arXiv preprint arXiv:2409.20164},
year={2024},
archivePrefix={arXiv},
eprint={2409.20164},
primaryClass={cs.CV}
}
|
ma2024erase
|
arxiv-663546
|
2409.20165
|
How Entangled is Factuality and Deception in German?
|
<|reference_start|>How Entangled is Factuality and Deception in German?: The statement "The earth is flat" is factually inaccurate, but if someone truly believes and argues in its favor, it is not deceptive. Research on deception detection and fact checking often conflates factual accuracy with the truthfulness of statements. This assumption makes it difficult to (a) study subtle distinctions and interactions between the two and (b) gauge their effects on downstream tasks. The belief-based deception framework disentangles these properties by defining texts as deceptive when there is a mismatch between what people say and what they truly believe. In this study, we assess whether presumed patterns of deception generalize to German language texts. We test the effectiveness of computational models in detecting deception using an established corpus of belief-based argumentation. Finally, we gauge the impact of deception on the downstream task of fact checking and explore whether this property confounds verification models. Surprisingly, our analysis finds no correlation with established cues of deception. Previous work claimed that computational models can outperform humans in deception detection accuracy; however, our experiments show that both traditional and state-of-the-art models struggle with the task, performing no better than random guessing. For fact checking, we find that Natural Language Inference-based verification performs worse on non-factual and deceptive content, while prompting Large Language Models for the same task is less sensitive to these properties.<|reference_end|>
|
arxiv
|
@article{velutharambath2024how,
title={How Entangled is Factuality and Deception in German?},
author={Aswathy Velutharambath and Amelie Wührl and Roman Klinger},
journal={arXiv preprint arXiv:2409.20165},
year={2024},
archivePrefix={arXiv},
eprint={2409.20165},
primaryClass={cs.CL}
}
|
velutharambath2024how
|
arxiv-663547
|
2409.20166
|
Task-Oriented Pre-Training for Drivable Area Detection
|
<|reference_start|>Task-Oriented Pre-Training for Drivable Area Detection: Pre-training techniques play a crucial role in deep learning, enhancing models' performance across a variety of tasks. By initially training on large datasets and subsequently fine-tuning on task-specific data, pre-training provides a solid foundation for models, improving generalization abilities and accelerating convergence rates. This approach has seen significant success in the fields of natural language processing and computer vision. However, traditional pre-training methods necessitate large datasets and substantial computational resources, and they can only learn shared features through prolonged training and struggle to capture deeper, task-specific features. In this paper, we propose a task-oriented pre-training method that begins with generating redundant segmentation proposals using the Segment Anything (SAM) model. We then introduce a Specific Category Enhancement Fine-tuning (SCEF) strategy for fine-tuning the Contrastive Language-Image Pre-training (CLIP) model to select the proposals most closely related to the drivable area from those generated by SAM. This approach can generate a large amount of coarse training data for pre-training models, which are further fine-tuned using manually annotated data, thereby improving the model's performance. Comprehensive experiments conducted on the KITTI road dataset demonstrate that our task-oriented pre-training method achieves an all-around performance improvement compared to models without pre-training. Moreover, our pre-training method not only surpasses the traditional pre-training approach but also achieves the best performance compared to state-of-the-art self-training methods.<|reference_end|>
|
arxiv
|
@article{ma2024task-oriented,
title={Task-Oriented Pre-Training for Drivable Area Detection},
author={Fulong Ma and Guoyang Zhao and Weiqing Qi and Ming Liu and Jun Ma},
journal={arXiv preprint arXiv:2409.20166},
year={2024},
archivePrefix={arXiv},
eprint={2409.20166},
primaryClass={cs.CV}
}
|
ma2024task-oriented
|
arxiv-663548
|
2409.20167
|
Using Large Multimodal Models to Extract Knowledge Components for Knowledge Tracing from Multimedia Question Information
|
<|reference_start|>Using Large Multimodal Models to Extract Knowledge Components for Knowledge Tracing from Multimedia Question Information: Knowledge tracing models have enabled a range of intelligent tutoring systems to provide feedback to students. However, existing methods for knowledge tracing in learning sciences are predominantly reliant on statistical data and instructor-defined knowledge components, making it challenging to integrate AI-generated educational content with traditional established methods. We propose a method for automatically extracting knowledge components from educational content using instruction-tuned large multimodal models. We validate this approach by comprehensively evaluating it against knowledge tracing benchmarks in five domains. Our results indicate that the automatically extracted knowledge components can effectively replace human-tagged labels, offering a promising direction for enhancing intelligent tutoring systems in limited-data scenarios, achieving more explainable assessments in educational settings, and laying the groundwork for automated assessment.<|reference_end|>
|
arxiv
|
@article{moon2024using,
title={Using Large Multimodal Models to Extract Knowledge Components for
Knowledge Tracing from Multimedia Question Information},
author={Hyeongdon Moon and Richard Davis and Seyed Parsa Neshaei and Pierre Dillenbourg},
journal={arXiv preprint arXiv:2409.20167},
year={2024},
archivePrefix={arXiv},
eprint={2409.20167},
primaryClass={cs.CL}
}
|
moon2024using
|
arxiv-663549
|
2409.20171
|
Annotation-Free Curb Detection Leveraging Altitude Difference Image
|
<|reference_start|>Annotation-Free Curb Detection Leveraging Altitude Difference Image: Road curbs are considered as one of the crucial and ubiquitous traffic features, which are essential for ensuring the safety of autonomous vehicles. Current methods for detecting curbs primarily rely on camera imagery or LiDAR point clouds. Image-based methods are vulnerable to fluctuations in lighting conditions and exhibit poor robustness, while methods based on point clouds circumvent the issues associated with lighting variations. However, significant processing delays are typically encountered due to the voluminous amount of 3D points contained in each frame of the point cloud data. Furthermore, the inherently unstructured characteristics of point clouds pose challenges for integrating the latest deep learning advancements into point cloud data applications. To address these issues, this work proposes an annotation-free curb detection method leveraging the Altitude Difference Image (ADI), which effectively mitigates the aforementioned challenges. Given that methods based on deep learning generally demand extensive, manually annotated datasets, which are both expensive and labor-intensive to create, we present an Automatic Curb Annotator (ACA) module. This module utilizes a deterministic curb detection algorithm to automatically generate a vast quantity of training data. Consequently, it facilitates the training of the curb detection model without necessitating any manual annotation of data. Finally, by incorporating a post-processing module, we manage to achieve state-of-the-art results on the KITTI 3D curb dataset with considerably reduced processing delays compared to existing methods, which underscores the effectiveness of our approach in curb detection tasks.<|reference_end|>
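One plausible construction of an Altitude Difference Image: rasterize the LiDAR points into a bird's-eye-view grid and record each cell's elevation spread, so curbs appear as thin ridges of small but nonzero altitude steps. The grid extents and resolution below are arbitrary placeholders, not the paper's settings.

```python
import numpy as np

def altitude_difference_image(points, res=0.1, x_rng=(0.0, 40.0), y_rng=(-20.0, 20.0)):
    """Per-cell max(z) - min(z) over a BEV grid. points: (N, 3) x, y, z."""
    w = int((x_rng[1] - x_rng[0]) / res)
    h = int((y_rng[1] - y_rng[0]) / res)
    xi = ((points[:, 0] - x_rng[0]) / res).astype(int)
    yi = ((points[:, 1] - y_rng[0]) / res).astype(int)
    ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    xi, yi, z = xi[ok], yi[ok], points[ok, 2]
    zmin = np.full((h, w), np.inf)
    zmax = np.full((h, w), -np.inf)
    np.minimum.at(zmin, (yi, xi), z)  # unbuffered per-cell reduction
    np.maximum.at(zmax, (yi, xi), z)
    return np.where(np.isfinite(zmin), zmax - zmin, 0.0)
```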
|
arxiv
|
@article{ma2024annotation-free,
title={Annotation-Free Curb Detection Leveraging Altitude Difference Image},
author={Fulong Ma and Peng Hou and Yuxuan Liu and Ming Liu and Jun Ma},
journal={arXiv preprint arXiv:2409.20171},
year={2024},
archivePrefix={arXiv},
eprint={2409.20171},
primaryClass={cs.CV}
}
|
ma2024annotation-free
|
arxiv-663550
|
2409.20172
|
Efficient Approximation of Fractional Hypertree Width
|
<|reference_start|>Efficient Approximation of Fractional Hypertree Width: We give two new approximation algorithms to compute the fractional hypertree width of an input hypergraph. The first algorithm takes as input $n$-vertex $m$-edge hypergraph $H$ of fractional hypertree width at most $\omega$, runs in polynomial time and produces a tree decomposition of $H$ of fractional hypertree width $O(\omega \log n \log \omega)$. As an immediate corollary this yields polynomial time $O(\log^2 n \log \omega)$-approximation algorithms for (generalized) hypertree width as well. To the best of our knowledge our algorithm is the first non-trivial polynomial-time approximation algorithm for fractional hypertree width and (generalized) hypertree width, as opposed to algorithms that run in polynomial time only when $\omega$ is considered a constant. For hypergraphs with the bounded intersection property we get better bounds, comparable with that recent algorithm of Lanzinger and Razgon [STACS 2024]. The second algorithm runs in time $n^{\omega}m^{O(1)}$ and produces a tree decomposition of $H$ of fractional hypertree width $O(\omega \log^2 \omega)$. This significantly improves over the $(n+m)^{O(\omega^3)}$ time algorithm of Marx [ACM TALG 2010], which produces a tree decomposition of fractional hypertree width $O(\omega^3)$, both in terms of running time and the approximation ratio. Our main technical contribution, and the key insight behind both algorithms, is a variant of the classic Menger's Theorem for clique separators in graphs: For every graph $G$, vertex sets $A$ and $B$, family ${\cal F}$ of cliques in $G$, and positive rational $f$, either there exists a sub-family of $O(f \cdot \log^2 n)$ cliques in ${\cal F}$ whose union separates $A$ from $B$, or there exist $f \cdot \log |{\cal F}|$ paths from $A$ to $B$ such that no clique in ${\cal F}$ intersects more than $\log |{\cal F}|$ paths.<|reference_end|>
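For reference, the quantity being approximated has a compact standard definition (the usual textbook formulation, stated here for convenience rather than taken from the paper):

```latex
% Fractional edge cover number of S \subseteq V(H) via a linear program:
\rho^*_H(S) \;=\; \min\Big\{ \textstyle\sum_{e \in E(H)} x_e \;:\;
  \sum_{e \ni v} x_e \ge 1 \ \ \forall v \in S, \;\; x_e \ge 0 \Big\}.
% Fractional hypertree width: over all tree decompositions (T, \{B_t\})
% of H, minimise the largest bag's fractional edge cover number:
\mathrm{fhw}(H) \;=\; \min_{(T,\{B_t\})} \; \max_{t \in V(T)} \; \rho^*_H(B_t).
```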
|
arxiv
|
@article{korchemna2024efficient,
title={Efficient Approximation of Fractional Hypertree Width},
author={Viktoriia Korchemna and Daniel Lokshtanov and Saket Saurabh and
Vaishali Surianarayanan and Jie Xue},
journal={arXiv preprint arXiv:2409.20172},
year={2024},
archivePrefix={arXiv},
eprint={2409.20172},
primaryClass={cs.DS cs.DB cs.DM}
}
|
korchemna2024efficient
|
arxiv-663551
|
2409.20173
|
ILeSiA: Interactive Learning of Situational Awareness from Camera Input
|
<|reference_start|>ILeSiA: Interactive Learning of Situational Awareness from Camera Input: Learning from demonstration is a promising way of teaching robots new skills. However, a central problem when executing acquired skills is to recognize risks and failures. This is essential since the demonstrations usually cover only a few mostly successful cases. Inevitable errors during execution require specific reactions that were not apparent in the demonstrations. In this paper, we focus on teaching the robot situational awareness from an initial skill demonstration via kinesthetic teaching and sparse labeling of autonomous skill executions as safe or risky. At runtime, our system, called ILeSiA, detects risks based on the perceived camera images by encoding the images into a low-dimensional latent space representation and training a classifier based on the encoding and the provided labels. In this way, ILeSiA boosts the confidence and safety with which robotic skills can be executed. Our experiments demonstrate that classifiers, trained with only a small amount of user-provided data, can successfully detect numerous risks. The system is flexible because the risk cases are defined by labeling data. This also means that labels can be added as soon as risks are identified by a human supervisor. We provide all code and data required to reproduce our experiments at imitrob.ciirc.cvut.cz/publications/ilesia.<|reference_end|>
|
arxiv
|
@article{vanc2024ilesia:,
title={ILeSiA: Interactive Learning of Situational Awareness from Camera Input},
author={Petr Vanc and Giovanni Franzese and Jan Kristof Behrens and
Cosimo Della Santina and Karla Stepanova and Jens Kober},
journal={arXiv preprint arXiv:2409.20173},
year={2024},
archivePrefix={arXiv},
eprint={2409.20173},
primaryClass={cs.RO cs.CV cs.LG}
}
|
vanc2024ilesia:
|
arxiv-663552
|
2409.20174
|
Modelando procesos cognitivos de la lectura natural con GPT-2
|
<|reference_start|>Modelando procesos cognitivos de la lectura natural con GPT-2: The advancement of the Natural Language Processing field has enabled the development of language models with a great capacity for generating text. In recent years, Neuroscience has been using these models to better understand cognitive processes. In previous studies, we found that models like Ngrams and LSTM networks can partially model Predictability when used as a covariate to explain readers' eye movements. In the present work, we further this line of research by using GPT-2 based models. The results show that this architecture achieves better outcomes than its predecessors.<|reference_end|>
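Predictability scores of this kind are usually derived from a language model's next-token probabilities (surprisal). A minimal sketch with Hugging Face transformers; the gpt2 checkpoint is a placeholder — the study concerns Spanish reading, so a Spanish GPT-2 would be substituted, and sub-word surprisals must be summed per word before regressing on eye movements.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def token_surprisal(sentence):
    """Per-token surprisal in bits: -log2 p(token | preceding context)."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    tgt = ids[0, 1:]
    s = -logp[torch.arange(tgt.numel()), tgt] / math.log(2.0)
    return list(zip(tok.convert_ids_to_tokens(tgt), s.tolist()))
```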
|
arxiv
|
@article{bianchi2024modelando,
title={Modelando procesos cognitivos de la lectura natural con GPT-2},
author={Bruno Bianchi and Alfredo Umfurer and Juan Esteban Kamienkowski},
journal={arXiv preprint arXiv:2409.20174},
year={2024},
archivePrefix={arXiv},
eprint={2409.20174},
primaryClass={q-bio.NC cs.AI}
}
|
bianchi2024modelando
|
arxiv-663553
|
2409.20175
|
Ensemble Kalman Diffusion Guidance: A Derivative-free Method for Inverse Problems
|
<|reference_start|>Ensemble Kalman Diffusion Guidance: A Derivative-free Method for Inverse Problems: When solving inverse problems, it is increasingly popular to use pre-trained diffusion models as plug-and-play priors. This framework can accommodate different forward models without re-training while preserving the generative capability of diffusion models. Despite their success in many imaging inverse problems, most existing methods rely on privileged information such as derivative, pseudo-inverse, or full knowledge about the forward model. This reliance poses a substantial limitation that restricts their use in a wide range of problems where such information is unavailable, such as in many scientific applications. To address this issue, we propose Ensemble Kalman Diffusion Guidance (EnKG) for diffusion models, a derivative-free approach that can solve inverse problems by only accessing forward model evaluations and a pre-trained diffusion model prior. We study the empirical effectiveness of our method across various inverse problems, including scientific settings such as inferring fluid flows and astronomical objects, which are highly non-linear inverse problems that often only permit black-box access to the forward model.<|reference_end|>
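The derivative-free ingredient is an ensemble Kalman update, which needs only forward-model evaluations. Below is a generic perturbed-observation step for intuition; EnKG couples such ensemble statistics with diffusion sampling, which this sketch does not attempt to reproduce.

```python
import numpy as np

def enkf_step(ensemble, forward, y_obs, noise_std, rng=None):
    """One ensemble Kalman update: move particles toward the data using
    only black-box forward evaluations (no derivatives of `forward`).
    ensemble: (J, d) states; forward: (d,) -> (m,); y_obs: (m,) data."""
    rng = rng or np.random.default_rng()
    G = np.stack([forward(x) for x in ensemble])      # (J, m) predictions
    dx = ensemble - ensemble.mean(axis=0)
    dg = G - G.mean(axis=0)
    C_xg = dx.T @ dg / (len(ensemble) - 1)            # cross-covariance
    C_gg = dg.T @ dg / (len(ensemble) - 1) + noise_std**2 * np.eye(G.shape[1])
    K = C_xg @ np.linalg.inv(C_gg)                    # Kalman gain
    y_pert = y_obs + noise_std * rng.standard_normal(G.shape)
    return ensemble + (y_pert - G) @ K.T
```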
|
arxiv
|
@article{zheng2024ensemble,
title={Ensemble Kalman Diffusion Guidance: A Derivative-free Method for Inverse
Problems},
author={Hongkai Zheng and Wenda Chu and Austin Wang and Nikola Kovachki and
Ricardo Baptista and Yisong Yue},
journal={arXiv preprint arXiv:2409.20175},
year={2024},
archivePrefix={arXiv},
eprint={2409.20175},
primaryClass={cs.LG stat.ML}
}
|
zheng2024ensemble
|
arxiv-663554
|
2409.20179
|
Survival Prediction in Lung Cancer through Multi-Modal Representation Learning
|
<|reference_start|>Survival Prediction in Lung Cancer through Multi-Modal Representation Learning: Survival prediction is a crucial task associated with cancer diagnosis and treatment planning. This paper presents a novel approach to survival prediction by harnessing comprehensive information from CT and PET scans, along with associated Genomic data. Current methods rely on either a single modality or the integration of multiple modalities for prediction without adequately addressing associations across patients or modalities. We aim to develop a robust predictive model for survival outcomes by integrating multi-modal imaging data with genetic information while accounting for associations across patients and modalities. We learn representations for each modality via a self-supervised module and harness the semantic similarities across the patients to ensure the embeddings are aligned closely. However, optimizing solely for global relevance is inadequate, as many pairs sharing similar high-level semantics, such as tumor type, are inadvertently pushed apart in the embedding space. To address this issue, we use a cross-patient module (CPM) designed to harness inter-subject correspondences. The CPM module aims to bring together embeddings from patients with similar disease characteristics. Our experimental evaluation on the dataset of Non-Small Cell Lung Cancer (NSCLC) patients demonstrates the effectiveness of our approach in predicting survival outcomes, outperforming state-of-the-art methods.<|reference_end|>
|
arxiv
|
@article{farooq2024survival,
title={Survival Prediction in Lung Cancer through Multi-Modal Representation
Learning},
author={Aiman Farooq and Deepak Mishra and Santanu Chaudhury},
journal={arXiv preprint arXiv:2409.20179},
year={2024},
archivePrefix={arXiv},
eprint={2409.20179},
primaryClass={eess.IV cs.CV}
}
|
farooq2024survival
|
arxiv-663555
|
2409.20181
|
Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models
|
<|reference_start|>Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models: Large language models (LLMs) have rapidly advanced and demonstrated impressive capabilities. In-Context Learning (ICL) and Parameter-Efficient Fine-Tuning (PEFT) are currently two mainstream methods for augmenting LLMs to downstream tasks. ICL typically constructs a few-shot learning scenario, either manually or by setting up a Retrieval-Augmented Generation (RAG) system, helping models quickly grasp domain knowledge or question-answering patterns without changing model parameters. However, this approach involves trade-offs, such as slower inference speed and increased space occupancy. PEFT assists the model in adapting to tasks through minimal parameter modifications, but the training process still demands high hardware requirements, even with a small number of parameters involved. To address these challenges, we propose Reference Trustable Decoding (RTD), a paradigm that allows models to quickly adapt to new tasks without fine-tuning, maintaining low inference costs. RTD constructs a reference datastore from the provided training examples and optimizes the LLM's final vocabulary distribution by flexibly selecting suitable references based on the input, resulting in more trustable responses and enabling the model to adapt to downstream tasks at a low cost. Experimental evaluations on various LLMs using different benchmarks demonstrate that RTD establishes a new paradigm for augmenting models to downstream tasks. Furthermore, our method exhibits strong orthogonality with traditional methods, allowing for concurrent usage.<|reference_end|>
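The datastore-adjusted decoding is reminiscent of kNN-LM-style interpolation over the output vocabulary. A brute-force sketch of that flavour (the exact RTD weighting may differ; every name here is illustrative):

```python
import numpy as np

def datastore_adjusted_probs(p_lm, hidden, keys, next_ids, vocab_size,
                             k=16, lam=0.3, temp=1.0):
    """Mix the LM's next-token distribution with a distribution built from
    the tokens that followed the k nearest reference hidden states."""
    d = np.linalg.norm(keys - hidden, axis=1)      # brute-force retrieval
    nn = np.argsort(d)[:k]
    w = np.exp(-d[nn] / temp)
    w /= w.sum()
    p_ref = np.zeros(vocab_size)
    np.add.at(p_ref, next_ids[nn], w)              # mass on observed continuations
    return (1 - lam) * p_lm + lam * p_ref
```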
|
arxiv
|
@article{shi2024reference,
title={Reference Trustable Decoding: A Training-Free Augmentation Paradigm for
Large Language Models},
author={Luohe Shi and Yao Yao and Zuchao Li and Lefei Zhang and Hai Zhao},
journal={arXiv preprint arXiv:2409.20181},
year={2024},
archivePrefix={arXiv},
eprint={2409.20181},
primaryClass={cs.CL}
}
|
shi2024reference
|
arxiv-663556
|
2409.20182
|
Quantum Fast Implementation of Private Information Retrieval and Functional Bootstrapping
|
<|reference_start|>Quantum Fast Implementation of Private Information Retrieval and Functional Bootstrapping: Quantum computation can bring greater efficiency and security to various fields. We show that, in a near-term hybrid cloud computing scenario with only a single quantum server and an entirely classical client, critical bottlenecks in privacy-preserving computation can be addressed. First, we propose an efficient quantum functional bootstrapping algorithm with a runtime polynomial in the plaintext size, providing an exponential quantum speedup over classical algorithms. Second, we present a secure and fast quantum private information retrieval protocol with logarithmic query time. The security relies on the learning with errors (LWE) problem with polynomial modulus, greatly improving the security of the classical fast PIR protocol based on ring-LWE with super-polynomial modulus. Technically, we extend an important classical homomorphic operation, known as blind rotation, to the quantum case via an encrypted conditional rotation technique. This technique holds promise for broader applications in quantum cryptography.<|reference_end|>
|
arxiv
|
@article{ma2024quantum,
title={Quantum Fast Implementation of Private Information Retrieval and
Functional Bootstrapping},
author={Guangsheng Ma and Hongbo Li},
journal={arXiv preprint arXiv:2409.20182},
year={2024},
archivePrefix={arXiv},
eprint={2409.20182},
primaryClass={quant-ph cs.CC cs.CR}
}
|
ma2024quantum
|
arxiv-663557
|
2409.20183
|
Local equivalence of stabilizer states: a graphical characterisation
|
<|reference_start|>Local equivalence of stabilizer states: a graphical characterisation: Stabilizer states form a ubiquitous family of quantum states that can be graphically represented through the graph state formalism. A fundamental property of graph states is that applying a local complementation - a well-known and extensively studied graph transformation - results in a graph that represents the same entanglement as the original. In other words, the corresponding graph states are LU-equivalent. This property served as the cornerstone for capturing non-trivial quantum properties in a simple graphical manner, in the study of quantum entanglement, but also for developing protocols and models based on graph states and stabilizer states, such as measurement-based quantum computing, secret sharing, error correction, entanglement distribution... However, local complementation falls short of fully characterising entanglement: there exist pairs of graph states that are LU-equivalent but cannot be transformed one into the other using local complementations. Little is known about the equivalence of graph states beyond local complementation. We introduce a generalization of local complementation which graphically characterises the LU-equivalence of graph states. We use this characterisation to show the existence of a strict infinite hierarchy of equivalences of graph states. Our approach is based on minimal local sets, which are subsets of vertices that are known to cover any graph, and to be invariant under local complementation and even LU-equivalence. We use these structures to provide a type to each vertex of a graph, leading to a natural standard form in which the LU-equivalence can be exhibited and captured by means of generalised local complementation.<|reference_end|>
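The operation being generalised is simple to state in code: local complementation at v toggles every edge inside the neighbourhood of v. A small networkx sketch of the basic (non-generalised) operation:

```python
import networkx as nx
from itertools import combinations

def local_complement(g: nx.Graph, v) -> nx.Graph:
    """Return a copy of g with the subgraph induced by N(v) complemented.
    Graph states related by this operation are LU-equivalent."""
    h = g.copy()
    for a, b in combinations(list(g.neighbors(v)), 2):
        if h.has_edge(a, b):
            h.remove_edge(a, b)
        else:
            h.add_edge(a, b)
    return h
```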
|
arxiv
|
@article{claudet2024local,
title={Local equivalence of stabilizer states: a graphical characterisation},
author={Nathan Claudet and Simon Perdrix},
journal={arXiv preprint arXiv:2409.20183},
year={2024},
archivePrefix={arXiv},
eprint={2409.20183},
primaryClass={quant-ph cs.DM}
}
|
claudet2024local
|
arxiv-663558
|
2409.20184
|
Boosting Safe Human-Robot Collaboration Through Adaptive Collision Sensitivity
|
<|reference_start|>Boosting Safe Human-Robot Collaboration Through Adaptive Collision Sensitivity: What is considered safe for a robot operator during physical human-robot collaboration (HRC) is specified in corresponding HRC standards (e.g., the European ISO/TS 15066). The regime that allows collisions between the moving robot and the operator, called Power and Force Limiting (PFL), restricts the permissible contact forces. Using the same fixed contact thresholds on the entire robot surface results in significant and unnecessary productivity losses, as the robot needs to stop even when impact forces are within limits. Here we present a framework for setting the protective skin thresholds individually for different parts of the robot body and dynamically on the fly, based on the effective mass of each robot link and the link velocity. We perform experiments on a 6-axis collaborative robot arm (UR10e) completely covered with a sensitive skin (AIRSKIN) consisting of eleven individual pads. On a mock pick-and-place scenario with both transient and quasi-static collisions, we demonstrate how skin sensitivity influences the task performance and exerted force. We show an increase in productivity of almost 50% from the most conservative setting of collision thresholds to the most adaptive setting, while ensuring safety for human operators. The method is applicable to any robot for which the effective mass can be calculated.<|reference_end|>
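A simplified picture of how a per-link, velocity-dependent force limit could be derived (a two-body contact model in the spirit of ISO/TS 15066; the constants and the formula's use here are placeholders, not values or procedures from the paper or the standard):

```python
import numpy as np

def estimated_peak_force(m_eff_link, v_link, m_body=5.6, k_body=75e3):
    """Peak contact force under a simplified spring contact model:
    reduced mass -> transferred energy -> peak force. All defaults are
    illustrative placeholders."""
    mu = 1.0 / (1.0 / m_eff_link + 1.0 / m_body)  # reduced two-body mass
    energy = 0.5 * mu * v_link**2                 # energy at impact
    return float(np.sqrt(2.0 * k_body * energy))  # F = sqrt(2 k E)

# A pad's trip threshold could then be lowered whenever the estimated
# peak force for the link's current speed approaches the permissible limit.
```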
|
arxiv
|
@article{rustler2024boosting,
title={Boosting Safe Human-Robot Collaboration Through Adaptive Collision
Sensitivity},
author={Lukas Rustler and Matej Misar and Matej Hoffmann},
journal={arXiv preprint arXiv:2409.20184},
year={2024},
archivePrefix={arXiv},
eprint={2409.20184},
primaryClass={cs.RO}
}
|
rustler2024boosting
|
arxiv-663559
|
2409.20187
|
Choosing DAG Models Using Markov and Minimal Edge Count in the Absence of Ground Truth
|
<|reference_start|>Choosing DAG Models Using Markov and Minimal Edge Count in the Absence of Ground Truth: We give a novel nonparametric pointwise consistent statistical test (the Markov Checker) of the Markov condition for directed acyclic graph (DAG) or completed partially directed acyclic graph (CPDAG) models given a dataset. We also introduce the Cross-Algorithm Frugality Search (CAFS) for rejecting DAG models that either do not pass the Markov Checker test or that are not edge minimal. Edge minimality has been used previously by Raskutti and Uhler as a nonparametric simplicity criterion, though CAFS readily generalizes to other simplicity conditions. Reference to the ground truth is not necessary for CAFS, so it is useful for finding causal structure learning algorithms and tuning parameter settings that output causal models that are approximately true from a given data set. We provide a software tool for this analysis that is suitable for even quite large or dense models, provided a suitably fast pointwise consistent test of conditional independence is available. In addition, we show in simulation that the CAFS procedure can pick approximately correct models without knowing the ground truth.<|reference_end|>
|
arxiv
|
@article{ramsey2024choosing,
title={Choosing DAG Models Using Markov and Minimal Edge Count in the Absence
of Ground Truth},
author={Joseph D. Ramsey and Bryan Andrews and Peter Spirtes},
journal={arXiv preprint arXiv:2409.20187},
year={2024},
archivePrefix={arXiv},
eprint={2409.20187},
primaryClass={cs.LG cs.AI stat.ME stat.ML}
}
|
ramsey2024choosing
|
arxiv-663560
|
2409.20188
|
Active Listener: Continuous Generation of Listener's Head Motion Response in Dyadic Interactions
|
<|reference_start|>Active Listener: Continuous Generation of Listener's Head Motion Response in Dyadic Interactions: A key component of dyadic spoken interactions is the contextually relevant non-verbal gestures, such as head movements that reflect a listener's response to the interlocutor's speech. Although significant progress has been made in the context of generating co-speech gestures, generating listener's response has remained a challenge. We introduce the task of generating continuous head motion response of a listener in response to the speaker's speech in real time. To this end, we propose a graph-based end-to-end crossmodal model that takes interlocutor's speech audio as input and directly generates head pose angles (roll, pitch, yaw) of the listener in real time. Different from previous work, our approach is completely data-driven, does not require manual annotations or oversimplify head motion to merely nods and shakes. Extensive evaluation on the dyadic interaction sessions on the IEMOCAP dataset shows that our model produces a low overall error (4.5 degrees) and a high frame rate, thereby indicating its deployability in real-world human-robot interaction systems. Our code is available at - https://github.com/bigzen/Active-Listener<|reference_end|>
|
arxiv
|
@article{ghosh2024active,
title={Active Listener: Continuous Generation of Listener's Head Motion
Response in Dyadic Interactions},
author={Bishal Ghosh and Emma Li and Tanaya Guha},
journal={arXiv preprint arXiv:2409.20188},
year={2024},
archivePrefix={arXiv},
eprint={2409.20188},
primaryClass={cs.RO cs.SD eess.AS}
}
|
ghosh2024active
|
arxiv-663561
|
2409.20189
|
TaskComplexity: A Dataset for Task Complexity Classification with In-Context Learning, FLAN-T5 and GPT-4o Benchmarks
|
<|reference_start|>TaskComplexity: A Dataset for Task Complexity Classification with In-Context Learning, FLAN-T5 and GPT-4o Benchmarks: This paper addresses the challenge of classifying and assigning programming tasks to experts, a process that typically requires significant effort, time, and cost. To tackle this issue, a novel dataset containing a total of 4,112 programming tasks was created by extracting tasks from various websites. Web scraping techniques were employed to collect this dataset of programming problems systematically. Specific HTML tags were tracked to extract key elements of each issue, including the title, problem description, input-output, examples, problem class, and complexity score. Examples from the dataset are provided in the appendix to illustrate the variety and complexity of tasks included. The dataset's effectiveness has been evaluated and benchmarked using two approaches; the first approach involved fine-tuning the FLAN-T5 small model on the dataset, while the second approach used in-context learning (ICL) with the GPT-4o mini. The performance was assessed using standard metrics: accuracy, recall, precision, and F1-score. The results indicated that in-context learning with GPT-4o-mini outperformed the FLAN-T5 model.<|reference_end|>
|
arxiv
|
@article{rasheed2024taskcomplexity:,
title={TaskComplexity: A Dataset for Task Complexity Classification with
In-Context Learning, FLAN-T5 and GPT-4o Benchmarks},
author={Areeg Fahad Rasheed, M. Zarkoosh, Safa F. Abbas, Sana Sabah Al-Azzawi},
journal={arXiv preprint arXiv:2409.20189},
year={2024},
archivePrefix={arXiv},
eprint={2409.20189},
primaryClass={cs.CL}
}
|
rasheed2024taskcomplexity:
|
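The tag-tracking extraction described above can be sketched with standard scraping tools. The URL, CSS selectors, and field names below are placeholders rather than the selectors actually used for the source sites; real pages would need per-site handling and error checks.

```python
# Sketch of HTML-tag-based task extraction (placeholder URL and selectors;
# each real site needs its own selectors and missing-element handling).
import requests
from bs4 import BeautifulSoup

def scrape_task(url: str) -> dict:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return {
        "title": soup.select_one("h1").get_text(strip=True),
        "statement": soup.select_one("div.problem-statement").get_text(" ", strip=True),
        "examples": [pre.get_text("\n") for pre in soup.select("pre.sample")],
    }

# task = scrape_task("https://example.com/problems/123")  # placeholder URL
```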
arxiv-663562
|
2409.20192
|
Factory Operators' Perspectives on Cognitive Assistants for Knowledge Sharing: Challenges, Risks, and Impact on Work
|
<|reference_start|>Factory Operators' Perspectives on Cognitive Assistants for Knowledge Sharing: Challenges, Risks, and Impact on Work: In the shift towards human-centered manufacturing, our two-year longitudinal study investigates the real-world impact of deploying Cognitive Assistants (CAs) in factories. The CAs were designed to facilitate knowledge sharing among factory operators. Our investigation focused on smartphone-based voice assistants and LLM-powered chatbots, examining their usability and utility in a real-world factory setting. Based on the qualitative feedback we collected during the deployments of CAs at the factories, we conducted a thematic analysis to investigate the perceptions, challenges, and overall impact on workflow and knowledge sharing. Our results indicate that while CAs have the potential to significantly improve efficiency through knowledge sharing and quicker resolution of production issues, they also introduce concerns around workplace surveillance, the types of knowledge that can be shared, and shortcomings compared to human-to-human knowledge sharing. Additionally, our findings stress the importance of addressing privacy, knowledge contribution burdens, and tensions between factory operators and their managers.<|reference_end|>
|
arxiv
|
@article{freire2024factory,
title={Factory Operators' Perspectives on Cognitive Assistants for Knowledge
Sharing: Challenges, Risks, and Impact on Work},
author={Samuel Kernan Freire, Tianhao He, Chaofan Wang, Evangelos Niforatos,
Alessandro Bozzon},
journal={arXiv preprint arXiv:2409.20192},
year={2024},
archivePrefix={arXiv},
eprint={2409.20192},
primaryClass={cs.HC cs.AI}
}
|
freire2024factory
|
arxiv-663563
|
2409.20195
|
Forecasting Disease Progression with Parallel Hyperplanes in Longitudinal Retinal OCT
|
<|reference_start|>Forecasting Disease Progression with Parallel Hyperplanes in Longitudinal Retinal OCT: Predicting future disease progression risk from medical images is challenging due to patient heterogeneity and subtle or unknown imaging biomarkers. Moreover, deep learning (DL) methods for survival analysis are susceptible to image domain shifts across scanners. We tackle these issues in the task of predicting late dry Age-related Macular Degeneration (dAMD) onset from retinal OCT scans. We propose a novel DL method for survival prediction to jointly predict from the current scan a risk score, inversely related to time-to-conversion, and the probability of conversion within a time interval $t$. It uses a family of parallel hyperplanes generated by parameterizing the bias term as a function of $t$. In addition, we develop unsupervised losses based on intra-subject image pairs to ensure that risk scores increase over time and that future conversion predictions are consistent with AMD stage prediction using actual scans of future visits. Such losses enable data-efficient fine-tuning of the trained model on new unlabeled datasets acquired with a different scanner. Extensive evaluation on two large datasets acquired with different scanners resulted in mean AUROCs of 0.82 for Dataset-1 and 0.83 for Dataset-2, across prediction intervals of 6, 12, and 24 months.<|reference_end|>
|
arxiv
|
@article{chakravarty2024forecasting,
title={Forecasting Disease Progression with Parallel Hyperplanes in
Longitudinal Retinal OCT},
author={Arunava Chakravarty, Taha Emre, Dmitrii Lachinov, Antoine Rivail,
Hendrik Scholl, Lars Fritsche, Sobha Sivaprasad, Daniel Rueckert, Andrew
Lotery, Ursula Schmidt-Erfurth, Hrvoje Bogunović},
journal={arXiv preprint arXiv:2409.20195},
year={2024},
archivePrefix={arXiv},
eprint={2409.20195},
primaryClass={cs.CV cs.AI cs.LG}
}
|
chakravarty2024forecasting
|
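The parallel-hyperplane construction can be sketched directly: a single shared risk direction produces a time-independent risk score, and a learned bias term b(t) shifts the same hyperplane for each prediction interval, giving P(conversion within t) = sigmoid(w.x + b(t)). This is a minimal sketch under assumed shapes, not the authors' full architecture or their unsupervised consistency losses; `ParallelHyperplanes` and `bias_net` are illustrative names.

```python
# Parallel-hyperplanes survival head sketch (assumed architecture).
import torch
import torch.nn as nn

class ParallelHyperplanes(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 32):
        super().__init__()
        self.w = nn.Linear(feat_dim, 1, bias=False)   # shared normal vector -> risk score
        self.bias_net = nn.Sequential(                # b(t): bias as a function of the interval t
            nn.Linear(1, hidden), nn.Softplus(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor):
        risk = self.w(x).squeeze(-1)                       # time-independent risk score
        bias = self.bias_net(t.unsqueeze(-1)).squeeze(-1)  # interval-dependent offset
        return risk, torch.sigmoid(risk + bias)            # risk, P(conversion within t)

# Usage: scan features x; t in months, e.g. 6, 12, 24.
model = ParallelHyperplanes(feat_dim=128)
risk, prob = model(torch.randn(4, 128), torch.tensor([6.0, 12.0, 24.0, 6.0]))
```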
arxiv-663564
|
2409.20196
|
Melody Is All You Need For Music Generation
|
<|reference_start|>Melody Is All You Need For Music Generation: We present the Melody Guided Music Generation (MMGen) model, the first novel approach using melody to guide the music generation that, despite a pretty simple method and extremely limited resources, achieves excellent performance. Specifically, we first align the melody with audio waveforms and their associated descriptions using the multimodal alignment module. Subsequently, we condition the diffusion module on the learned melody representations. This allows MMGen to generate music that matches the style of the provided audio while also producing music that reflects the content of the given text description. To address the scarcity of high-quality data, we construct a multi-modal dataset, MusicSet, which includes melody, text, and audio, and will be made publicly available. We conduct extensive experiments which demonstrate the superiority of the proposed model both in terms of experimental metrics and actual performance quality.<|reference_end|>
|
arxiv
|
@article{wei2024melody,
title={Melody Is All You Need For Music Generation},
author={Shaopeng Wei, Manzhen Wei, Haoyu Wang, Yu Zhao, Gang Kou},
journal={arXiv preprint arXiv:2409.20196},
year={2024},
archivePrefix={arXiv},
eprint={2409.20196},
primaryClass={cs.SD cs.AI eess.AS}
}
|
wei2024melody
|
arxiv-663565
|
2409.20197
|
UIR-LoRA: Achieving Universal Image Restoration through Multiple Low-Rank Adaptation
|
<|reference_start|>UIR-LoRA: Achieving Universal Image Restoration through Multiple Low-Rank Adaptation: Existing unified methods typically treat multi-degradation image restoration as a multi-task learning problem. Despite performing effectively compared to single degradation restoration methods, they overlook the utilization of commonalities and specificities within multi-task restoration, thereby impeding the model's performance. Inspired by the success of deep generative models and fine-tuning techniques, we proposed a universal image restoration framework based on multiple low-rank adapters (LoRA) from multi-domain transfer learning. Our framework leverages the pre-trained generative model as the shared component for multi-degradation restoration and transfers it to specific degradation image restoration tasks using low-rank adaptation. Additionally, we introduce a LoRA composing strategy based on the degradation similarity, which adaptively combines trained LoRAs and enables our model to be applicable for mixed degradation restoration. Extensive experiments on multiple and mixed degradations demonstrate that the proposed universal image restoration method not only achieves higher fidelity and perceptual image quality but also has better generalization ability than other unified image restoration models. Our code is available at https://github.com/Justones/UIR-LoRA.<|reference_end|>
|
arxiv
|
@article{zhang2024uir-lora:,
title={UIR-LoRA: Achieving Universal Image Restoration through Multiple
Low-Rank Adaptation},
author={Cheng Zhang, Dong Gong, Jiumei He, Yu Zhu, Jinqiu Sun, Yanning Zhang},
journal={arXiv preprint arXiv:2409.20197},
year={2024},
archivePrefix={arXiv},
eprint={2409.20197},
primaryClass={cs.CV}
}
|
zhang2024uir-lora:
|
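The degradation-similarity-based LoRA composition can be sketched as a softmax-weighted sum of low-rank updates added to a frozen base weight. This is a plausible reading of the abstract, not the released implementation; `compose_lora` and the temperature parameter are illustrative.

```python
# Sketch of similarity-weighted LoRA composition (illustrative names).
import torch

def compose_lora(W0, loras, sims, temperature=1.0):
    """W0: frozen base weight (out, in); loras: list of (B, A) with
    B: (out, r), A: (r, in); sims: per-task degradation similarities."""
    weights = torch.softmax(torch.as_tensor(sims, dtype=W0.dtype) / temperature, dim=0)
    delta = sum(w * (B @ A) for w, (B, A) in zip(weights, loras))
    return W0 + delta  # adapted weight for mixed-degradation restoration
```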
arxiv-663566
|
2409.20201
|
AfriHuBERT: A self-supervised speech representation model for African languages
|
<|reference_start|>AfriHuBERT: A self-supervised speech representation model for African languages: In this work, we present AfriHuBERT, an extension of mHuBERT-147, a state-of-the-art (SOTA) and compact self-supervised learning (SSL) model, originally pretrained on 147 languages. While mHuBERT-147 was pretrained on 16 African languages, we expand this to cover 39 African languages through continued pretraining on 6,500+ hours of speech data aggregated from diverse sources, including 23 newly added languages. We evaluate AfriHuBERT on two key speech tasks: Language Identification (LID) and Automatic Speech Recognition (ASR) using the FLEURS dataset. Our results show a +4% F1 score improvement on average for LID and a -1.2% average Word Error Rate (WER) reduction for ASR. Further analysis shows that ASR models trained on AfriHuBERT exhibit improved cross-corpus generalization. Additionally, the analysis indicates that FLEURS has data quality limitations that may affect its suitability for evaluating low-resource African languages, suggesting the need for better evaluation benchmarks for these languages.<|reference_end|>
|
arxiv
|
@article{alabi2024afrihubert:,
title={AfriHuBERT: A self-supervised speech representation model for African
languages},
author={Jesujoba O. Alabi, Xuechen Liu, Dietrich Klakow, Junichi Yamagishi},
journal={arXiv preprint arXiv:2409.20201},
year={2024},
archivePrefix={arXiv},
eprint={2409.20201},
primaryClass={cs.CL cs.SD eess.AS}
}
|
alabi2024afrihubert:
|
arxiv-663567
|
2409.20204
|
Divided by discipline? A systematic literature review on the quantification of online sexism and misogyny using a semi-automated approach
|
<|reference_start|>Divided by discipline? A systematic literature review on the quantification of online sexism and misogyny using a semi-automated approach: In recent years, several computational tools have been developed to detect and identify sexism, misogyny, and gender-based hate speech, especially on online platforms. Though these tools intend to draw on knowledge from both social science and computer science, little is known about the current state of research in quantifying online sexism or misogyny. Given the growing concern over the discrimination of women in online spaces and the rise in interdisciplinary research on capturing the online manifestation of sexism and misogyny, a systematic literature review on the research practices and their measures is the need of the hour. We make three main contributions: (i) we present a semi-automated way to narrow down the search results in the different phases of selection stage in the PRISMA flowchart; (ii) we perform a systematic literature review of research papers that focus on the quantification and measurement of online gender-based hate speech, examining literature from computer science and the social sciences from 2012 to 2022; and (iii) we identify the opportunities and challenges for measuring gender-based online hate speech. Our findings from topic analysis suggest a disciplinary divide between the themes of research on sexism/misogyny. With evidence-based review, we summarise the different approaches used by the studies who have explored interdisciplinary approaches to bridge the knowledge gap. Coupled with both the existing literature on social science theories and computational modeling, we provide an analysis of the benefits and shortcomings of the methodologies used. Lastly, we discuss the challenges and opportunities for future research dedicated to measuring online sexism and misogyny.<|reference_end|>
|
arxiv
|
@article{dutta2024divided,
title={Divided by discipline? A systematic literature review on the
quantification of online sexism and misogyny using a semi-automated approach},
author={Aditi Dutta, Susan Banducci and Chico Q. Camargo},
journal={arXiv preprint arXiv:2409.20204},
year={2024},
archivePrefix={arXiv},
eprint={2409.20204},
primaryClass={cs.CL cs.CY}
}
|
dutta2024divided
|
arxiv-663568
|
2409.20206
|
SetPINNs: Set-based Physics-informed Neural Networks
|
<|reference_start|>SetPINNs: Set-based Physics-informed Neural Networks: Physics-Informed Neural Networks (PINNs) have emerged as a promising method for approximating solutions to partial differential equations (PDEs) using deep learning. However, PINNs, based on multilayer perceptrons (MLP), often employ point-wise predictions, overlooking the implicit dependencies within the physical system such as temporal or spatial dependencies. These dependencies can be captured using more complex network architectures, for example CNNs or Transformers. However, these architectures conventionally do not allow for incorporating physical constraints, as advancements in integrating such constraints within these frameworks are still lacking. Relying on point-wise predictions often results in trivial solutions. To address this limitation, we propose SetPINNs, a novel approach inspired by Finite Element Methods from the field of Numerical Analysis. SetPINNs allow for incorporating the dependencies inherent in the physical system while at the same time incorporating the physical constraints. They accurately approximate PDE solutions of a region, thereby modeling the inherent dependencies between multiple neighboring points in that region. Our experiments show that SetPINNs demonstrate superior generalization performance and accuracy across diverse physical systems, showing that they mitigate failure modes and converge faster in comparison to existing approaches. Furthermore, we demonstrate the utility of SetPINNs on two real-world physical systems.<|reference_end|>
|
arxiv
|
@article{nagda2024setpinns:,
title={SetPINNs: Set-based Physics-informed Neural Networks},
author={Mayank Nagda, Phil Ostheimer, Thomas Specht, Frank Rhein, Fabian
Jirasek, Marius Kloft, Sophie Fellenz},
journal={arXiv preprint arXiv:2409.20206},
year={2024},
archivePrefix={arXiv},
eprint={2409.20206},
primaryClass={cs.LG}
}
|
nagda2024setpinns:
|
arxiv-663569
|
2409.20208
|
Constraining Anomaly Detection with Anomaly-Free Regions
|
<|reference_start|>Constraining Anomaly Detection with Anomaly-Free Regions: We propose the novel concept of anomaly-free regions (AFR) to improve anomaly detection. An AFR is a region in the data space for which it is known that there are no anomalies inside it, e.g., via domain knowledge. This region can contain any number of normal data points and can be anywhere in the data space. AFRs have the key advantage that they constrain the estimation of the distribution of non-anomalies: The estimated probability mass inside the AFR must be consistent with the number of normal data points inside the AFR. Based on this insight, we provide a solid theoretical foundation and a reference implementation of anomaly detection using AFRs. Our empirical results confirm that anomaly detection constrained via AFRs improves upon unconstrained anomaly detection. Specifically, we show that, when equipped with an estimated AFR, an efficient algorithm based on random guessing becomes a strong baseline that several widely-used methods struggle to overcome. On a dataset with a ground-truth AFR available, the current state of the art is outperformed.<|reference_end|>
|
arxiv
|
@article{toller2024constraining,
title={Constraining Anomaly Detection with Anomaly-Free Regions},
author={Maximilian Toller and Hussain Hussain and Roman Kern and Bernhard C.
Geiger},
journal={arXiv preprint arXiv:2409.20208},
year={2024},
archivePrefix={arXiv},
eprint={2409.20208},
primaryClass={cs.LG}
}
|
toller2024constraining
|
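The random-guessing baseline equipped with an AFR can be sketched in a few lines: score points at random, except that anything inside the anomaly-free region is forced to the lowest anomaly score. The axis-aligned box AFR below is an assumption for illustration only; the concept allows arbitrary regions.

```python
# AFR-constrained random baseline sketch (box-shaped AFR is an assumption).
import numpy as np

def afr_random_scores(X, box_lo, box_hi, rng=None):
    rng = rng or np.random.default_rng(0)
    scores = rng.random(len(X))                       # unconstrained random guesses
    inside = np.all((X >= box_lo) & (X <= box_hi), axis=1)
    scores[inside] = 0.0                              # AFR guarantee: no anomalies here
    return scores                                     # higher score = more anomalous
```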
arxiv-663570
|
2409.20212
|
Graph matching based on similarities in structure and attributes
|
<|reference_start|>Graph matching based on similarities in structure and attributes: Finding vertex-to-vertex correspondences in real-world graphs is a challenging task with applications in a wide variety of domains. Structural matching based on graph connectivities has attracted considerable attention, while the integration of all the other information stemming from vertex and edge attributes has been mostly left aside. Here we present the Graph Attributes and Structure Matching (GASM) algorithm, which provides high-quality solutions by integrating all the available information in a unified framework. Parameters quantifying the reliability of the attributes can tune how much the solutions should rely on the structure or on the attributes. We further show that even without attributes GASM consistently finds as-good-as or better solutions than state-of-the-art algorithms, with similar processing times.<|reference_end|>
|
arxiv
|
@article{candelier2024graph,
title={Graph matching based on similarities in structure and attributes},
author={Rapha"el Candelier},
journal={arXiv preprint arXiv:2409.20212},
year={2024},
archivePrefix={arXiv},
eprint={2409.20212},
primaryClass={cs.DS}
}
|
candelier2024graph
|
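One generic way to combine structure and attributes into a single vertex-similarity matrix, traded off by a reliability parameter, is an IsoRank-style structural iteration mixed with attribute cosine similarity. This sketch is not GASM itself, only an illustration of the kind of integration the abstract describes; `combined_similarity` and `alpha` are illustrative.

```python
# Generic structure+attribute similarity sketch (not the GASM algorithm).
import numpy as np

def combined_similarity(A1, A2, F1, F2, alpha=0.5, iters=20):
    """A1, A2: adjacency matrices; F1, F2: (n_i, d) vertex attributes;
    alpha tunes reliance on structure vs. attributes."""
    attr = F1 @ F2.T / (np.linalg.norm(F1, axis=1)[:, None]
                        * np.linalg.norm(F2, axis=1)[None, :] + 1e-9)
    S = np.ones((A1.shape[0], A2.shape[0]))
    for _ in range(iters):                       # structural similarity iteration
        S = A1 @ S @ A2.T
        S /= np.linalg.norm(S) + 1e-9
    return alpha * S + (1 - alpha) * attr
```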
arxiv-663571
|
2409.20213
|
Mind the GAP: Glimpse-based Active Perception improves generalization and sample efficiency of visual reasoning
|
<|reference_start|>Mind the GAP: Glimpse-based Active Perception improves generalization and sample efficiency of visual reasoning: Human capabilities in understanding visual relations are far superior to those of AI systems, especially for previously unseen objects. For example, while AI systems struggle to determine whether two such objects are visually the same or different, humans can do so with ease. Active vision theories postulate that the learning of visual relations is grounded in actions that we take to fixate objects and their parts by moving our eyes. In particular, the low-dimensional spatial information about the corresponding eye movements is hypothesized to facilitate the representation of relations between different image parts. Inspired by these theories, we develop a system equipped with a novel Glimpse-based Active Perception (GAP) that sequentially glimpses at the most salient regions of the input image and processes them at high resolution. Importantly, our system leverages the locations stemming from the glimpsing actions, along with the visual content around them, to represent relations between different parts of the image. The results suggest that the GAP is essential for extracting visual relations that go beyond the immediate visual content. Our approach reaches state-of-the-art performance on several visual reasoning tasks being more sample-efficient, and generalizing better to out-of-distribution visual inputs than prior models.<|reference_end|>
|
arxiv
|
@article{kolner2024mind,
title={Mind the GAP: Glimpse-based Active Perception improves generalization
and sample efficiency of visual reasoning},
author={Oleh Kolner, Thomas Ortner, Stanisław Woźniak and Angeliki
Pantazi},
journal={arXiv preprint arXiv:2409.20213},
year={2024},
archivePrefix={arXiv},
eprint={2409.20213},
primaryClass={cs.CV}
}
|
kolner2024mind
|
arxiv-663572
|
2409.20218
|
Co-Movement and Trust Development in Human-Robot Teams
|
<|reference_start|>Co-Movement and Trust Development in Human-Robot Teams: For humans and robots to form an effective human-robot team (HRT) there must be sufficient trust between team members throughout a mission. We analyze data from an HRT experiment focused on trust dynamics in teams of one human and two robots, where trust was manipulated by robots becoming temporarily unresponsive. Whole-body movement tracking was achieved using ultrasound beacons, alongside communications and performance logs from a human-robot interface. We find evidence that synchronization between time series of human-robot movement, within a certain spatial proximity, is correlated with changes in self-reported trust. This suggests that the interplay of proxemics and kinesics, i.e. moving together through space, where implicit communication via coordination can occur, could play a role in building and maintaining trust in human-robot teams. Thus, quantitative indicators of coordination dynamics between team members could be used to predict trust over time and also provide early warning signals of the need for timely trust repair if trust is damaged. Hence, we aim to develop the metrology of trust in mobile human-robot teams.<|reference_end|>
|
arxiv
|
@article{webb2024co-movement,
title={Co-Movement and Trust Development in Human-Robot Teams},
author={Nicola Webb, Sanja Milivojevic, Mehdi Sobhani, Zachary R. Madin, James
C. Ward, Sagir Yusuf, Chris Baber, Edmund R. Hunt},
journal={arXiv preprint arXiv:2409.20218},
year={2024},
archivePrefix={arXiv},
eprint={2409.20218},
primaryClass={cs.RO cs.HC}
}
|
webb2024co-movement
|
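One plausible reading of "synchronization within a certain spatial proximity" is a windowed correlation of human and robot speeds, computed only over windows where the pair stays close. The sketch below uses made-up window and distance thresholds; it is not the study's actual metric.

```python
# Proximity-gated movement-synchronization sketch (thresholds are invented).
import numpy as np

def proximity_gated_sync(pos_h, pos_r, window=50, max_dist=2.0):
    """pos_h, pos_r: (T, 2) position tracks; returns per-window speed correlations."""
    speed_h = np.linalg.norm(np.diff(pos_h, axis=0), axis=1)
    speed_r = np.linalg.norm(np.diff(pos_r, axis=0), axis=1)
    dist = np.linalg.norm(pos_h[1:] - pos_r[1:], axis=1)
    corrs = []
    for s in range(0, len(speed_h) - window, window):
        sl = slice(s, s + window)
        if dist[sl].mean() <= max_dist:               # only co-located windows count
            corrs.append(np.corrcoef(speed_h[sl], speed_r[sl])[0, 1])
    return np.array(corrs)
```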
arxiv-663573
|
2409.20219
|
Advanced Resilience Planning for Distribution Systems
|
<|reference_start|>Advanced Resilience Planning for Distribution Systems: Climate change has led to an increase in the frequency and severity of extreme weather events, posing significant challenges for power distribution systems. In response, this work presents a planning approach in order to enhance the resilience of distribution systems against climatic hazards. The framework systematically addresses uncertainties during extreme events, including weather variability and line damage. Key strategies include line hardening, backup diesel generators, and sectionalizers to strengthen resilience. We model spatio-temporal dynamics and costs through a hybrid model integrating stochastic processes with deterministic elements. A two-stage stochastic mixed-integer linear approach is developed to optimize resilience investments against load loss, generator operations, and repairs. Case studies on the IEEE 15-bus benchmark system and a realistic distribution grid model in Riyadh, Saudi Arabia demonstrate enhanced system robustness as well as cost efficiency of 10% and 15%, respectively.<|reference_end|>
|
arxiv
|
@article{afzal2024advanced,
title={Advanced Resilience Planning for Distribution Systems},
author={Ahmad Bin Afzal, Nabil Mohammed, Shehab Ahmed, Charalambos
Konstantinou},
journal={arXiv preprint arXiv:2409.20219},
year={2024},
archivePrefix={arXiv},
eprint={2409.20219},
primaryClass={eess.SY cs.SY}
}
|
afzal2024advanced
|
arxiv-663574
|
2409.20222
|
Beyond Prompts: Dynamic Conversational Benchmarking of Large Language Models
|
<|reference_start|>Beyond Prompts: Dynamic Conversational Benchmarking of Large Language Models: We introduce a dynamic benchmarking system for conversational agents that evaluates their performance through a single, simulated, and lengthy user$\leftrightarrow$agent interaction. The interaction is a conversation between the user and agent, where multiple tasks are introduced and then undertaken concurrently. We context switch regularly to interleave the tasks, which constructs a realistic testing scenario in which we assess the Long-Term Memory, Continual Learning, and Information Integration capabilities of the agents. Results from both proprietary and open-source Large-Language Models show that LLMs in general perform well on single-task interactions, but they struggle on the same tasks when they are interleaved. Notably, short-context LLMs supplemented with an LTM system perform as well as or better than those with larger contexts. Our benchmark suggests that there are other challenges for LLMs responding to more natural interactions that contemporary benchmarks have heretofore not been able to capture.<|reference_end|>
|
arxiv
|
@article{castillo-bolado2024beyond,
title={Beyond Prompts: Dynamic Conversational Benchmarking of Large Language
Models},
author={David Castillo-Bolado, Joseph Davidson, Finlay Gray, Marek Rosa},
journal={arXiv preprint arXiv:2409.20222},
year={2024},
archivePrefix={arXiv},
eprint={2409.20222},
primaryClass={cs.CL cs.AI}
}
|
castillo-bolado2024beyond
|
arxiv-663575
|
2409.20223
|
GTransPDM: A Graph-embedded Transformer with Positional Decoupling for Pedestrian Crossing Intention Prediction
|
<|reference_start|>GTransPDM: A Graph-embedded Transformer with Positional Decoupling for Pedestrian Crossing Intention Prediction: Understanding and predicting pedestrian crossing behavioral intention is crucial for autonomous vehicle driving safety. Nonetheless, challenges emerge when using promising images or environmental context masks to extract various factors for time-series network modeling, causing pre-processing errors or a loss in efficiency. Typically, pedestrian positions captured by onboard cameras are often distorted and do not accurately reflect their actual movements. To address these issues, GTransPDM -- a Graph-embedded Transformer with a Position Decoupling Module -- was developed for pedestrian crossing intention prediction by leveraging multi-modal features. First, a positional decoupling module was proposed to decompose the pedestrian lateral movement and simulate depth variations in the image view. Then, a graph-embedded Transformer was designed to capture the spatial-temporal dynamics of human pose skeletons, integrating essential factors such as position, skeleton, and ego-vehicle motion. Experimental results indicate that the proposed method achieves 92% accuracy on the PIE dataset and 87% accuracy on the JAAD dataset, with a processing time of 0.05 ms. It outperforms the state of the art.<|reference_end|>
|
arxiv
|
@article{xie2024gtranspdm:,
title={GTransPDM: A Graph-embedded Transformer with Positional Decoupling for
Pedestrian Crossing Intention Prediction},
author={Chen Xie, Ciyun Lin, Xiaoyu Zheng, Bowen Gong, Dayong Wu, Antonio M.
López},
journal={arXiv preprint arXiv:2409.20223},
year={2024},
archivePrefix={arXiv},
eprint={2409.20223},
primaryClass={cs.CV}
}
|
xie2024gtranspdm:
|
arxiv-663576
|
2409.20224
|
Trapped in Transformative Agreements? A Multifaceted Analysis of >1,000 Contracts
|
<|reference_start|>Trapped in Transformative Agreements? A Multifaceted Analysis of >1,000 Contracts: Transformative agreements between academic publishers and research institutions are ubiquitous. The 'Efficiency and Standards for Article Charges' (ESAC) Initiative lists more than 1,000 contracts in its database. We make use of this unique dataset by web-scraping the details of every contract to substantially expand the overview spreadsheet provided by the ESAC Initiative. Based on that hitherto unused data source, we combine qualitative and quantitative methods to conduct an in-depth analysis of the contract characteristics and the TA landscape. Our analysis demonstrates that research institutions seem to be 'trapped' in transformative agreements. Instead of being a bridge towards a fully Open Access world, academia is stuck in the hybrid system. This endows the legacy (non-Open Access) publishing houses with substantial market power. It raises entry barriers, lowers competition, and increases costs for libraries and universities.<|reference_end|>
|
arxiv
|
@article{rothfritz2024trapped,
title={Trapped in Transformative Agreements? A Multifaceted Analysis of >1,000
Contracts},
author={Laura Rothfritz, W. Benedikt Schmal, Ulrich Herb},
journal={arXiv preprint arXiv:2409.20224},
year={2024},
archivePrefix={arXiv},
eprint={2409.20224},
primaryClass={cs.DL}
}
|
rothfritz2024trapped
|
arxiv-663577
|
2409.20227
|
Assessing interaction recovery of predicted protein-ligand poses
|
<|reference_start|>Assessing interaction recovery of predicted protein-ligand poses: The field of protein-ligand pose prediction has seen significant advances in recent years, with machine learning-based methods now being commonly used in lieu of classical docking methods or even to predict all-atom protein-ligand complex structures. Most contemporary studies focus on the accuracy and physical plausibility of ligand placement to determine pose quality, often neglecting a direct assessment of the interactions observed with the protein. In this work, we demonstrate that ignoring protein-ligand interaction fingerprints can lead to overestimation of model performance, most notably in recent protein-ligand cofolding models which often fail to recapitulate key interactions.<|reference_end|>
|
arxiv
|
@article{errington2024assessing,
title={Assessing interaction recovery of predicted protein-ligand poses},
author={David Errington, Constantin Schneider, Cédric Bouysset, Frédéric
A. Dreyer},
journal={arXiv preprint arXiv:2409.20227},
year={2024},
archivePrefix={arXiv},
eprint={2409.20227},
primaryClass={q-bio.BM cs.LG}
}
|
errington2024assessing
|
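A crude stand-in for an interaction-fingerprint recovery check: mark protein-ligand atom contacts by a distance cutoff in the reference and predicted poses, then score the fraction of reference contacts the prediction keeps. Real fingerprints use typed interactions (hydrogen bonds, pi-stacking, and so on); this distance-only version is an illustration.

```python
# Distance-based contact "fingerprint" recovery sketch (not a typed
# interaction fingerprint as used in real analyses).
import numpy as np

def contact_set(prot_xyz, lig_xyz, cutoff=4.0):
    d = np.linalg.norm(prot_xyz[:, None, :] - lig_xyz[None, :, :], axis=-1)
    return set(zip(*np.where(d < cutoff)))

def interaction_recovery(prot_xyz, lig_ref, lig_pred, cutoff=4.0):
    ref = contact_set(prot_xyz, lig_ref, cutoff)
    pred = contact_set(prot_xyz, lig_pred, cutoff)
    return len(ref & pred) / max(len(ref), 1)   # fraction of reference contacts kept
```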
arxiv-663578
|
2409.20235
|
A general machine learning model of aluminosilicate melt viscosity and its application to the surface properties of dry lava planets
|
<|reference_start|>A general machine learning model of aluminosilicate melt viscosity and its application to the surface properties of dry lava planets: Ultra-short-period exoplanets like K2-141 b likely have magma oceans on their dayside, which play a critical role in redistributing heat within the planet. This could lead to a warm nightside surface, measurable by the James Webb Space Telescope, offering insights into the planet's structure. Accurate models of properties like viscosity, which can vary by orders of magnitude, are essential for such studies. We present a new model for predicting molten magma viscosity, applicable in diverse scenarios, including magma oceans on lava planets. Using a database of 28,898 viscosity measurements on phospho-alumino-silicate melts, spanning superliquidus to undercooled temperatures and pressures up to 30 GPa, we trained a greybox artificial neural network, refined by a Gaussian process. This model achieves high predictive accuracy (RMSE $\approx 0.4 \log_{10}$ Pa$\cdot$s) and can handle compositions from SiO$_2$ to multicomponent magmatic and industrial glasses, accounting for pressure effects up to 30 GPa for compositions such as peridotite. Applying this model, we calculated the viscosity of K2-141 b's magma ocean under different compositions. Phase diagram calculations suggest that the dayside is fully molten, with extreme temperatures primarily controlling viscosity. A tenuous atmosphere (0.1 bar) might exist around a 40° radius from the substellar point. At higher longitudes, atmospheric pressure drops, and by 90°, magma viscosity rapidly increases as solidification occurs. The nightside surface is likely solid, but previously estimated surface temperatures above 400 K imply a partly molten mantle, feeding geothermal flux through vertical convection.<|reference_end|>
|
arxiv
|
@article{losq2024a,
title={A general machine learning model of aluminosilicate melt viscosity and
its application to the surface properties of dry lava planets},
author={Charles Le Losq and Clément Ferraina and Paolo A. Sossi and
Charles-Édouard Boukaré},
journal={arXiv preprint arXiv:2409.20235},
year={2024},
archivePrefix={arXiv},
eprint={2409.20235},
primaryClass={astro-ph.EP cond-mat.mtrl-sci cs.LG}
}
|
losq2024a
|
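The "greybox artificial neural network, refined by a Gaussian process" pattern can be sketched as an ANN fit to log-viscosity plus a GP fit to the ANN's residuals. The data, features, and hyperparameters below are synthetic placeholders, not the paper's database or model.

```python
# Greybox ANN + GP-on-residuals sketch with synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))                  # e.g. composition + T + P features
y = 2.0 + 3.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(500)

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, y)     # trend model
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                              normalize_y=True).fit(X, y - ann.predict(X))

def predict_log_viscosity(Xnew):
    return ann.predict(Xnew) + gp.predict(Xnew)  # greybox = ANN trend + GP correction
```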
arxiv-663579
|
2409.20237
|
Classroom-Inspired Multi-Mentor Distillation with Adaptive Learning Strategies
|
<|reference_start|>Classroom-Inspired Multi-Mentor Distillation with Adaptive Learning Strategies: We propose ClassroomKD, a novel multi-mentor knowledge distillation framework inspired by classroom environments to enhance knowledge transfer between student and multiple mentors. Unlike traditional methods that rely on fixed mentor-student relationships, our framework dynamically selects and adapts the teaching strategies of diverse mentors based on their effectiveness for each data sample. ClassroomKD comprises two main modules: the Knowledge Filtering (KF) Module and the Mentoring Module. The KF Module dynamically ranks mentors based on their performance for each input, activating only high-quality mentors to minimize error accumulation and prevent information loss. The Mentoring Module adjusts the distillation strategy by tuning each mentor's influence according to the performance gap between the student and mentors, effectively modulating the learning pace. Extensive experiments on image classification (CIFAR-100 and ImageNet) and 2D human pose estimation (COCO Keypoints and MPII Human Pose) demonstrate that ClassroomKD significantly outperforms existing knowledge distillation methods. Our results highlight that a dynamic and adaptive approach to mentor selection and guidance leads to more effective knowledge transfer, paving the way for enhanced model performance through distillation.<|reference_end|>
|
arxiv
|
@article{sarode2024classroom-inspired,
title={Classroom-Inspired Multi-Mentor Distillation with Adaptive Learning
Strategies},
author={Shalini Sarode, Muhammad Saif Ullah Khan, Tahira Shehzadi, Didier
Stricker, Muhammad Zeshan Afzal},
journal={arXiv preprint arXiv:2409.20237},
year={2024},
archivePrefix={arXiv},
eprint={2409.20237},
primaryClass={cs.CV}
}
|
sarode2024classroom-inspired
|
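A plausible rendering of the two modules described above: per-sample knowledge filtering keeps only mentors that currently beat the student, and the mentoring weight scales each kept mentor's distillation term by the student-mentor performance gap. The exact ranking and pacing rules are the paper's; this loss is only a sketch.

```python
# Sketch of filtering + gap-weighted multi-mentor distillation (a plausible
# reading of the abstract, not the released ClassroomKD implementation).
import torch
import torch.nn.functional as F

def classroom_kd_loss(student_logits, mentor_logits_list, target, T=4.0):
    s_loss = F.cross_entropy(student_logits, target, reduction="none")  # (B,)
    total = s_loss.mean()
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    for m_logits in mentor_logits_list:
        m_loss = F.cross_entropy(m_logits, target, reduction="none")
        keep = (m_loss < s_loss).float()              # knowledge filtering per sample
        gap = torch.clamp(s_loss - m_loss, min=0.0)   # mentoring weight from the gap
        kl = F.kl_div(log_p_s, F.softmax(m_logits / T, dim=-1),
                      reduction="none").sum(-1)       # per-sample KL to mentor
        total = total + (keep * gap * kl).mean() * (T * T)
    return total
```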
arxiv-663580
|
2409.20238
|
Investigating Creation Perspectives and Icon Placement Preferences for On-Body Menus in Virtual Reality
|
<|reference_start|>Investigating Creation Perspectives and Icon Placement Preferences for On-Body Menus in Virtual Reality: On-body menus present a novel interaction paradigm within Virtual Reality (VR) environments by embedding virtual interfaces directly onto the user's body. Unlike traditional screen-based interfaces, on-body menus enable users to interact with virtual options or icons visually attached to their physical form. In this paper, we investigated the impact of the creation process on the effectiveness of on-body menus, comparing first-person, third-person, and mirror perspectives. Our first study ($N$ = 12) revealed that the mirror perspective led to faster creation times and more accurate recall compared to the other two perspectives. To further explore user preferences, we conducted a second study ($N$ = 18) utilizing a VR system with integrated body tracking. By combining distributions of icons from both studies ($N$ = 30), we confirmed significant preferences in on-body menu placement based on icon category (e.g., Social Media icons were consistently placed on forearms). We also discovered associations between categories, such as Leisure and Social Media icons frequently co-occurring. Our findings highlight the importance of the creation process, uncover user preferences for on-body menu organization, and provide insights to guide the development of intuitive and effective on-body interactions within virtual environments.<|reference_end|>
|
arxiv
|
@article{li2024investigating,
title={Investigating Creation Perspectives and Icon Placement Preferences for
On-Body Menus in Virtual Reality},
author={Xiang Li, Wei He, Shan Jin, Jan Gugenheimer, Pan Hui, Hai-Ning Liang,
Per Ola Kristensson},
journal={arXiv preprint arXiv:2409.20238},
year={2024},
doi={10.1145/3698136},
archivePrefix={arXiv},
eprint={2409.20238},
primaryClass={cs.HC}
}
|
li2024investigating
|
arxiv-663581
|
2409.20242
|
Design and validation of a fuzzy logic controller for multi-section continuum robots
|
<|reference_start|>Design and validation of a fuzzy logic controller for multi-section continuum robots: The rise of multi-section continuum robots (CRs) has captivated researchers and practitioners across diverse industries and medical fields. Accurate modeling of these dexterous manipulators continues to be a significant challenge. This complexity stems primarily from many nonlinearities that plague their behavior, including hysteresis and cable elongation. Researchers have devised a spectrum of model-based and learning-based strategies to navigate this intricate landscape, aiming to conquer the modeling problem and elevate control performance. Despite the advancements in these approaches, they encounter challenges stemming from their complex design and intricate learning processes, impairing versatility and hindering robust closed-loop control. This paper introduces a simple-structured, model-less fuzzy logic controller for the closed-loop control of continuum robots. Unlike traditional methods relying on complex models and numerous sensors, this controller boasts a built-in shape reconstruction algorithm. This algorithm allows it to achieve robust control using only the feedback of end position and orientation, significantly reducing sensor dependence. It efficiently adapts to various nonlinearities like hysteresis, cable elongation, and unexpected external disturbances. The experimental results conclusively demonstrate the accuracy and robustness of the proposed fuzzy controller. On a three-section, six-degree-of-freedom continuum robot, it achieved a minuscule trajectory tracking Root Mean Square Error (RMSE) of 0.28 to 0.54 mm, representing just 0.17 to 0.32% of the robot's length. Additionally, the controller demonstrates robustness by successfully handling an unexpected external disturbance of 100 g during the trajectory tracking.<|reference_end|>
|
arxiv
|
@article{liu2024design,
title={Design and validation of a fuzzy logic controller for multi-section
continuum robots},
author={Jing Liu, Tianyi Zeng, Abdelkhalick Mohammad, Xin Dong, Dragos Axinte},
journal={arXiv preprint arXiv:2409.20242},
year={2024},
archivePrefix={arXiv},
eprint={2409.20242},
primaryClass={eess.SY cs.SY}
}
|
liu2024design
|
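A toy single-input fuzzy controller illustrates the general recipe implied above: triangular membership functions over a tip-position error, a small rule base, and centroid defuzzification producing an actuation command. Breakpoints and rule consequents are invented for illustration, not the paper's controller.

```python
# Toy one-input fuzzy controller sketch (invented membership breakpoints).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_step(error):
    # Rule base: Negative error -> retract, Zero -> hold, Positive -> extend.
    mu = {"neg": tri(error, -10, -5, 0), "zero": tri(error, -5, 0, 5),
          "pos": tri(error, 0, 5, 10)}
    out = {"neg": -1.0, "zero": 0.0, "pos": 1.0}      # crisp rule consequents
    num = sum(mu[k] * out[k] for k in mu)
    return num / (sum(mu.values()) + 1e-9)            # centroid defuzzification

print(fuzzy_step(3.2))   # small positive correction toward the target
```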
arxiv-663582
|
2409.20243
|
PsyGUARD: An Automated System for Suicide Detection and Risk Assessment in Psychological Counseling
|
<|reference_start|>PsyGUARD: An Automated System for Suicide Detection and Risk Assessment in Psychological Counseling: As awareness of mental health issues grows, online counseling support services are becoming increasingly prevalent worldwide. Detecting whether users express suicidal ideation in text-based counseling services is crucial for identifying and prioritizing at-risk individuals. However, the lack of domain-specific systems to facilitate fine-grained suicide detection and corresponding risk assessment in online counseling poses a significant challenge for automated crisis intervention aimed at suicide prevention. In this paper, we propose PsyGUARD, an automated system for detecting suicide ideation and assessing risk in psychological counseling. To achieve this, we first develop a detailed taxonomy for detecting suicide ideation based on foundational theories. We then curate a large-scale, high-quality dataset called PsySUICIDE for suicide detection. To evaluate the capabilities of automated systems in fine-grained suicide detection, we establish a range of baselines. Subsequently, to assist automated services in providing safe, helpful, and tailored responses for further assessment, we propose to build a suite of risk assessment frameworks. Our study not only provides an insightful analysis of the effectiveness of automated risk assessment systems based on fine-grained suicide detection but also highlights their potential to improve mental health services on online counseling platforms. Code, data, and models are available at https://github.com/qiuhuachuan/PsyGUARD.<|reference_end|>
|
arxiv
|
@article{qiu2024psyguard:,
title={PsyGUARD: An Automated System for Suicide Detection and Risk Assessment
in Psychological Counseling},
author={Huachuan Qiu, Lizhi Ma, Zhenzhong Lan},
journal={arXiv preprint arXiv:2409.20243},
year={2024},
archivePrefix={arXiv},
eprint={2409.20243},
primaryClass={cs.CL}
}
|
qiu2024psyguard:
|
arxiv-663583
|
2409.20246
|
Analysing Zero-Shot Readability-Controlled Sentence Simplification
|
<|reference_start|>Analysing Zero-Shot Readability-Controlled Sentence Simplification: Readability-controlled text simplification (RCTS) rewrites texts to lower readability levels while preserving their meaning. RCTS models often depend on parallel corpora with readability annotations on both source and target sides. Such datasets are scarce and difficult to curate, especially at the sentence level. To reduce reliance on parallel data, we explore using instruction-tuned large language models for zero-shot RCTS. Through automatic and manual evaluations, we examine: (1) how different types of contextual information affect a model's ability to generate sentences with the desired readability, and (2) the trade-off between achieving target readability and preserving meaning. Results show that all tested models struggle to simplify sentences (especially to the lowest levels) due to models' limitations and characteristics of the source sentences that impede adequate rewriting. Our experiments also highlight the need for better automatic evaluation metrics tailored to RCTS, as standard ones often misinterpret common simplification operations, and inaccurately assess readability and meaning preservation.<|reference_end|>
|
arxiv
|
@article{barayan2024analysing,
title={Analysing Zero-Shot Readability-Controlled Sentence Simplification},
author={Abdullah Barayan, Jose Camacho-Collados and Fernando Alva-Manchego},
journal={arXiv preprint arXiv:2409.20246},
year={2024},
archivePrefix={arXiv},
eprint={2409.20246},
primaryClass={cs.CL}
}
|
barayan2024analysing
|
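A zero-shot RCTS request reduces to a single instruction of roughly the following shape; the wording and the CEFR target are illustrative, not the exact prompts or contextual variants evaluated in the paper.

```python
# Illustrative zero-shot readability-controlled simplification prompt.
def rcts_prompt(sentence: str, level: str = "CEFR A2") -> str:
    return (
        f"Rewrite the following sentence so that it is readable at {level} "
        f"level while preserving its meaning. Output only the rewritten "
        f"sentence.\n\nSentence: {sentence}"
    )
```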
arxiv-663584
|
2409.20247
|
Resource Allocation for Stable LLM Training in Mobile Edge Computing
|
<|reference_start|>Resource Allocation for Stable LLM Training in Mobile Edge Computing: As mobile devices increasingly become focal points for advanced applications, edge computing presents a viable solution to their inherent computational limitations, particularly in deploying large language models (LLMs). However, despite the advancements in edge computing, significant challenges remain in efficient training and deploying LLMs due to the computational demands and data privacy concerns associated with these models. This paper explores a collaborative training framework that integrates mobile users with edge servers to optimize resource allocation, thereby enhancing both performance and efficiency. Our approach leverages parameter-efficient fine-tuning (PEFT) methods, allowing mobile users to adjust the initial layers of the LLM while edge servers handle the more demanding latter layers. Specifically, we formulate a multi-objective optimization problem to minimize the total energy consumption and delay during training. We also address the common issue of instability in model performance by incorporating stability enhancements into our objective function. Through novel fractional programming technique, we achieve a stationary point for the formulated problem. Simulations demonstrate that our method reduces the energy consumption as well as the latency, and increases the reliability of LLMs across various mobile settings.<|reference_end|>
|
arxiv
|
@article{liu2024resource,
title={Resource Allocation for Stable LLM Training in Mobile Edge Computing},
author={Chang Liu and Jun Zhao},
journal={arXiv preprint arXiv:2409.20247},
year={2024},
archivePrefix={arXiv},
eprint={2409.20247},
primaryClass={cs.DC cs.AI cs.IT cs.SY eess.SY math.IT math.OC}
}
|
liu2024resource
|
arxiv-663585
|
2409.20248
|
Feature Extractor or Decision Maker: Rethinking the Role of Visual Encoders in Visuomotor Policies
|
<|reference_start|>Feature Extractor or Decision Maker: Rethinking the Role of Visual Encoders in Visuomotor Policies: An end-to-end (E2E) visuomotor policy is typically treated as a unified whole, but recent approaches using out-of-domain (OOD) data to pretrain the visual encoder have cleanly separated the visual encoder from the network, with the remainder referred to as the policy. We propose Visual Alignment Testing, an experimental framework designed to evaluate the validity of this functional separation. Our results indicate that in E2E-trained models, visual encoders actively contribute to decision-making resulting from motor data supervision, contradicting the assumed functional separation. In contrast, OOD-pretrained models, where encoders lack this capability, experience an average performance drop of 42% in our benchmark results, compared to the state-of-the-art performance achieved by E2E policies. We believe this initial exploration of visual encoders' role can provide a first step towards guiding future pretraining methods to address their decision-making ability, such as developing task-conditioned or context-aware encoders.<|reference_end|>
|
arxiv
|
@article{wang2024feature,
title={Feature Extractor or Decision Maker: Rethinking the Role of Visual
Encoders in Visuomotor Policies},
author={Ruiyu Wang, Zheyu Zhuang, Shutong Jin, Nils Ingelhag, Danica Kragic
and Florian T. Pokorny},
journal={arXiv preprint arXiv:2409.20248},
year={2024},
archivePrefix={arXiv},
eprint={2409.20248},
primaryClass={cs.RO}
}
|
wang2024feature
|
arxiv-663586
|
2409.20250
|
Random Features Outperform Linear Models: Effect of Strong Input-Label Correlation in Spiked Covariance Data
|
<|reference_start|>Random Features Outperform Linear Models: Effect of Strong Input-Label Correlation in Spiked Covariance Data: Random Feature Model (RFM) with a nonlinear activation function is instrumental in understanding training and generalization performance in high-dimensional learning. While existing research has established an asymptotic equivalence in performance between the RFM and noisy linear models under isotropic data assumptions, empirical observations indicate that the RFM frequently surpasses linear models in practical applications. To address this gap, we ask, "When and how does the RFM outperform linear models?" In practice, inputs often have additional structures that significantly influence learning. Therefore, we explore the RFM under anisotropic input data characterized by spiked covariance in the proportional asymptotic limit, where dimensions diverge jointly while maintaining finite ratios. Our analysis reveals that a high correlation between inputs and labels is a critical factor enabling the RFM to outperform linear models. Moreover, we show that the RFM performs equivalent to noisy polynomial models, where the polynomial degree depends on the strength of the correlation between inputs and labels. Our numerical simulations validate these theoretical insights, confirming the performance-wise superiority of RFM in scenarios characterized by strong input-label correlation.<|reference_end|>
|
arxiv
|
@article{demir2024random,
title={Random Features Outperform Linear Models: Effect of Strong Input-Label
Correlation in Spiked Covariance Data},
author={Samet Demir, Zafer Dogan},
journal={arXiv preprint arXiv:2409.20250},
year={2024},
archivePrefix={arXiv},
eprint={2409.20250},
primaryClass={stat.ML cs.LG}
}
|
demir2024random
|
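A toy experiment makes the setting concrete: draw inputs with a spiked covariance, let labels depend nonlinearly on the spike direction (strong input-label correlation), and compare ridge regression on raw inputs against ridge on random ReLU features. This is an empirical illustration of the regime the paper analyzes, not its asymptotic theory.

```python
# Spiked-covariance toy comparison: linear ridge vs. random-feature ridge.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, d, p, spike = 2000, 100, 400, 5.0
u = rng.standard_normal(d); u /= np.linalg.norm(u)
X = rng.standard_normal((n, d)) + spike * rng.standard_normal((n, 1)) * u
y = np.tanh(X @ u) + 0.1 * rng.standard_normal(n)        # labels follow the spike
W = rng.standard_normal((d, p)) / np.sqrt(d)              # random feature weights
Xte = rng.standard_normal((n, d)) + spike * rng.standard_normal((n, 1)) * u
yte = np.tanh(Xte @ u) + 0.1 * rng.standard_normal(n)
for name, (tr, te) in {"linear": (X, Xte),
                       "RFM": (np.maximum(X @ W, 0.0), np.maximum(Xte @ W, 0.0))}.items():
    model = Ridge(alpha=1.0).fit(tr, y)
    print(name, "test MSE:", np.mean((model.predict(te) - yte) ** 2))
```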
arxiv-663587
|
2409.20251
|
Controlling sharpness, SNR and SAR for 3D FSE at 7T by end-to-end learning
|
<|reference_start|>Controlling sharpness, SNR and SAR for 3D FSE at 7T by end-to-end learning: Purpose: To non-heuristically identify dedicated variable flip angle (VFA) schemes optimized for the point-spread function (PSF) and signal-to-noise ratio (SNR) of multiple tissues in 3D FSE sequences with very long echo trains at 7T. Methods: The proposed optimization considers predefined SAR constraints and target contrast using an end-to-end learning framework. The cost function integrates components for contrast fidelity (SNR) and a penalty term to minimize image blurring (PSF) for multiple tissues. By adjusting the weights of PSF/SNR cost-function components, PSF- and SNR-optimized VFAs were derived and tested in vivo using both the open-source Pulseq standard on two volunteers as well as vendor protocols on a 7T MRI system with parallel transmit extension on three volunteers. Results: PSF-optimized VFAs resulted in significantly reduced image blurring compared to standard VFAs for T2w while maintaining contrast fidelity. Small white and gray matter structures, as well as blood vessels, are more visible with PSF-optimized VFAs. Quantitative analysis shows that the optimized VFA yields 50% less deviation from a sinc-like reference PSF than the standard VFA. The SNR-optimized VFAs yielded images with significantly improved SNR in a white and gray matter region relative to standard ($81.2 \pm 18.4$ vs. $41.2 \pm 11.5$, respectively) as a trade-off for elevated image blurring. Conclusion: This study demonstrates the potential of end-to-end learning frameworks to optimize VFA schemes in very long echo trains for 3D FSE acquisition at 7T in terms of PSF and SNR. It paves the way for fast and flexible adjustment of the trade-off between PSF and SNR for 3D FSE.<|reference_end|>
|
arxiv
|
@article{dawood2024controlling,
title={Controlling sharpness, SNR and SAR for 3D FSE at 7T by end-to-end
learning},
author={Peter Dawood, Martin Blaimer, Jürgen Herrler, Patrick Liebig, Simon
Weinmüller, Shaihan Malik, Peter M. Jakob, Moritz Zaiss},
journal={arXiv preprint arXiv:2409.20251},
year={2024},
archivePrefix={arXiv},
eprint={2409.20251},
primaryClass={physics.med-ph cs.LG cs.SY eess.IV eess.SY}
}
|
dawood2024controlling
|
arxiv-663588
|
2409.20252
|
What is the Role of Large Language Models in the Evolution of Astronomy Research?
|
<|reference_start|>What is the Role of Large Language Models in the Evolution of Astronomy Research?: ChatGPT and other state-of-the-art large language models (LLMs) are rapidly transforming multiple fields, offering powerful tools for a wide range of applications. These models, commonly trained on vast datasets, exhibit human-like text generation capabilities, making them useful for research tasks such as ideation, literature review, coding, drafting, and outreach. We conducted a study involving 13 astronomers at different career stages and research fields to explore LLM applications across diverse tasks over several months and to evaluate their performance in research-related activities. This work was accompanied by an anonymous survey assessing participants' experiences and attitudes towards LLMs. We provide a detailed analysis of the tasks attempted and the survey answers, along with specific output examples. Our findings highlight both the potential and limitations of LLMs in supporting research while also addressing general and research-specific ethical considerations. We conclude with a series of recommendations, emphasizing the need for researchers to complement LLMs with critical thinking and domain expertise, ensuring these tools serve as aids rather than substitutes for rigorous scientific inquiry.<|reference_end|>
|
arxiv
|
@article{fouesneau2024what,
title={What is the Role of Large Language Models in the Evolution of Astronomy
Research?},
author={Morgan Fouesneau and Ivelina G. Momcheva and Urmila Chadayammuri and
Mariia Demianenko and Antoine Dumont and Raphael E. Hviding and K. Angelique
Kahle and Nadiia Pulatova and Bhavesh Rajpoot and Marten B. Scheuck and Rhys
Seeburger and Dmitry Semenov and Jaime I. Villaseñor},
journal={arXiv preprint arXiv:2409.20252},
year={2024},
archivePrefix={arXiv},
eprint={2409.20252},
primaryClass={astro-ph.IM cs.AI}
}
|
fouesneau2024what
|
arxiv-663589
|
2409.20253
|
Medical Image Segmentation with SAM-generated Annotations
|
<|reference_start|>Medical Image Segmentation with SAM-generated Annotations: The field of medical image segmentation is hindered by the scarcity of large, publicly available annotated datasets. Not all datasets are made public for privacy reasons, and creating annotations for a large dataset is time-consuming and expensive, as it requires specialized expertise to accurately identify regions of interest (ROIs) within the images. To address these challenges, we evaluate the performance of the Segment Anything Model (SAM) as an annotation tool for medical data by using it to produce so-called "pseudo labels" on the Medical Segmentation Decathlon (MSD) computed tomography (CT) tasks. The pseudo labels are then used in place of ground truth labels to train a UNet model in a weakly-supervised manner. We experiment with different prompt types on SAM and find that the bounding box prompt is a simple yet effective method for generating pseudo labels. This method allows us to develop a weakly-supervised model that performs comparably to a fully supervised model.<|reference_end|>
|
arxiv
|
@article{häkkinen2024medical,
title={Medical Image Segmentation with SAM-generated Annotations},
author={Iira Häkkinen, Iaroslav Melekhov, Erik Englesson, Hossein Azizpour,
Juho Kannala},
journal={arXiv preprint arXiv:2409.20253},
year={2024},
archivePrefix={arXiv},
eprint={2409.20253},
primaryClass={cs.CV}
}
|
häkkinen2024medical
|
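Pseudo-label generation with a bounding-box prompt follows the standard segment-anything API; the checkpoint path, model size, and the box itself are placeholders. The returned mask would then stand in for ground truth when training a UNet in a weakly supervised fashion.

```python
# SAM bounding-box pseudo-labeling sketch (placeholder checkpoint and box).
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
predictor = SamPredictor(sam)

def pseudo_label(image_rgb: np.ndarray, roi_box: np.ndarray) -> np.ndarray:
    """image_rgb: (H, W, 3) uint8; roi_box: [x0, y0, x1, y1] around the ROI."""
    predictor.set_image(image_rgb)
    masks, scores, _ = predictor.predict(box=roi_box, multimask_output=False)
    return masks[0]  # boolean (H, W) mask used as the training label
```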
arxiv-663590
|
2409.20254
|
MNT Elliptic Curves with Non-Prime Order
|
<|reference_start|>MNT Elliptic Curves with Non-Prime Order: Miyaji, Nakabayashi, and Takano proposed the algorithm for the construction of prime order pairing-friendly elliptic curves with embedding degrees $k=3,4,6$. We present a method for generating generalized MNT curves. The order of such pairing-friendly curves is the product of two prime numbers.<|reference_end|>
|
arxiv
|
@article{grześkowiak2024mnt,
title={MNT Elliptic Curves with Non-Prime Order},
author={Maciej Grześkowiak},
journal={arXiv preprint arXiv:2409.20254},
year={2024},
archivePrefix={arXiv},
eprint={2409.20254},
primaryClass={cs.CR math.NT}
}
|
grześkowiak2024mnt
|
arxiv-663591
|
2409.20255
|
PerCo (SD): Open Perceptual Compression
|
<|reference_start|>PerCo (SD): Open Perceptual Compression: We introduce PerCo (SD), a perceptual image compression method based on Stable Diffusion v2.1, targeting the ultra-low bit range. PerCo (SD) serves as an open and competitive alternative to the state-of-the-art method PerCo, which relies on a proprietary variant of GLIDE and remains closed to the public. In this work, we review the theoretical foundations, discuss key engineering decisions in adapting PerCo to the Stable Diffusion ecosystem, and provide a comprehensive comparison, both quantitatively and qualitatively. On the MSCOCO-30k dataset, PerCo (SD) demonstrates improved perceptual characteristics at the cost of higher distortion. We partly attribute this gap to the different model capacities being used (866M vs. 1.4B). We hope our work contributes to a deeper understanding of the underlying mechanisms and paves the way for future advancements in the field. Code and trained models will be released at https://github.com/Nikolai10/PerCo.<|reference_end|>
|
arxiv
|
@article{körber2024perco,
title={PerCo (SD): Open Perceptual Compression},
author={Nikolai Körber and Eduard Kromer and Andreas Siebert and Sascha
Hauke and Daniel Mueller-Gritschneder and Björn Schuller},
journal={arXiv preprint arXiv:2409.20255},
year={2024},
archivePrefix={arXiv},
eprint={2409.20255},
primaryClass={cs.CV}
}
|
körber2024perco
|
arxiv-663592
|
2409.20257
|
A hybrid finite element/finite difference method for reconstruction of dielectric properties of conductive objects
|
<|reference_start|>A hybrid finite element/finite difference method for reconstruction of dielectric properties of conductive objects: The aim of this article is to present a hybrid finite element/finite difference method which is used for reconstructions of electromagnetic properties within a realistic breast phantom. This is done by studying the mentioned properties' (electric permittivity and conductivity in this case) representing coefficients in a constellation of Maxwell's equations. This information is valuable since these coefficients can reveal types of tissues within the breast, and in applications could be used to detect shapes and locations of tumours. Because of the ill-posed nature of this coefficient inverse problem, we approach it as an optimization problem by introducing the corresponding Tikhonov functional and in turn Lagrangian. These are then minimized by utilizing an interplay between finite element and finite difference methods for solutions of direct and adjoint problems, and thereafter by applying a conjugate gradient method to an adaptively refined mesh.<|reference_end|>
|
arxiv
|
@article{lindström2024a,
title={A hybrid finite element/finite difference method for reconstruction of
dielectric properties of conductive objects},
author={Eric Lindström, Larisa Beilina},
journal={arXiv preprint arXiv:2409.20257},
year={2024},
archivePrefix={arXiv},
eprint={2409.20257},
primaryClass={math.NA cs.NA}
}
|
lindström2024a
|
arxiv-663593
|
2409.20258
|
Inferring Preferences from Demonstrations in Multi-objective Reinforcement Learning
|
<|reference_start|>Inferring Preferences from Demonstrations in Multi-objective Reinforcement Learning: Many decision-making problems feature multiple objectives where it is not always possible to know the preferences of a human or agent decision-maker for different objectives. However, demonstrated behaviors from the decision-maker are often available. This research proposes a dynamic weight-based preference inference (DWPI) algorithm that can infer the preferences of agents acting in multi-objective decision-making problems from demonstrations. The proposed algorithm is evaluated on three multi-objective Markov decision processes: Deep Sea Treasure, Traffic, and Item Gathering, and is compared to two existing preference inference algorithms. Empirical results demonstrate significant improvements compared to the baseline algorithms, in terms of both time efficiency and inference accuracy. The DWPI algorithm maintains its performance when inferring preferences for sub-optimal demonstrations. Moreover, the DWPI algorithm does not necessitate any interactions with the user during inference - only demonstrations are required. We provide a correctness proof and complexity analysis of the algorithm and statistically evaluate the performance under different representations of demonstrations.<|reference_end|>
|
arxiv
|
@article{lu2024inferring,
title={Inferring Preferences from Demonstrations in Multi-objective
Reinforcement Learning},
author={Junlin Lu and Patrick Mannion and Karl Mason},
journal={arXiv preprint arXiv:2409.20258},
year={2024},
doi={10.1007/s00521-024-10412-x},
archivePrefix={arXiv},
eprint={2409.20258},
primaryClass={cs.AI}
}
|
lu2024inferring
|
arxiv-663594
|
2409.20259
|
Learning to Ground Existentially Quantified Goals
|
<|reference_start|>Learning to Ground Existentially Quantified Goals: Goal instructions for autonomous AI agents cannot assume that objects have unique names. Instead, objects in goals must be referred to by providing suitable descriptions. However, this raises problems in both classical planning and generalized planning. The standard approach to handling existentially quantified goals in classical planning involves compiling them into a DNF formula that encodes all possible variable bindings and adding dummy actions to map each DNF term into the new, dummy goal. This preprocessing is exponential in the number of variables. In generalized planning, the problem is different: even if general policies can deal with any initial situation and goal, executing a general policy requires the goal to be grounded to define a value for the policy features. The problem of grounding goals, namely finding the objects to bind the goal variables, is subtle: it is a generalization of classical planning, which is a special case when there are no goal variables to bind, and constraint reasoning, which is a special case when there are no actions. In this work, we address the goal grounding problem with a novel supervised learning approach. A GNN architecture, trained to predict the cost of partially quantified goals over small domain instances is tested on larger instances involving more objects and different quantified goals. The proposed architecture is evaluated experimentally over several planning domains where generalization is tested along several dimensions including the number of goal variables and objects that can bind such variables. The scope of the approach is also discussed in light of the known relationship between GNNs and C2 logics.<|reference_end|>
|
arxiv
|
@article{funkquist2024learning,
title={Learning to Ground Existentially Quantified Goals},
author={Martin Funkquist and Simon St{\aa}hlberg and Hector Geffner},
journal={arXiv preprint arXiv:2409.20259},
year={2024},
archivePrefix={arXiv},
eprint={2409.20259},
primaryClass={cs.AI}
}
|
funkquist2024learning
|
arxiv-663595
|
2409.20260
|
Computer-mediated therapies for stroke rehabilitation: a systematic review and meta-Analysis
|
<|reference_start|>Computer-mediated therapies for stroke rehabilitation: a systematic review and meta-Analysis: OBJECTIVE: To evaluate the efficacy of different forms of virtual reality (VR) treatments as either immersive virtual reality (IVR) or non-immersive virtual reality (NIVR) in comparison to conventional therapy (CT) in improving physical and psychological status among stroke patients. METHODS: The literature search was conducted on seven databases: ACM Digital Library, Medline (via PubMed), Cochrane, IEEE Xplore, Web of Science, and Scopus. The effect sizes of the main outcomes were calculated using Cohen's d. Pooled results were used to present an overall estimate of the treatment effect using a random-effects model. RESULTS: A total of 22 randomized controlled trials were evaluated. Three trials demonstrated that immersive virtual reality improved upper limb activity, function, and activities of daily living in a way comparable to CT. Eighteen trials showed that NIVR had similar benefits to CT for upper limb activity and function, balance and mobility, activities of daily living and participation. A comparison between the different forms of VR showed that IVR may be more beneficial than NIVR for upper limb training and activities of daily living. CONCLUSIONS: This study found that IVR therapies may be more effective than NIVR, but not than CT, in improving upper limb activity, function, and activities of daily living. However, there is no evidence of the durability of IVR treatment. More research involving studies with larger samples is needed to assess the long-term effects and promising benefits of immersive virtual reality technology.<|reference_end|>
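A hedged sketch of the statistical pipeline named in METHODS (Cohen's d effect sizes pooled under a DerSimonian-Laird random-effects model); the trial numbers below are invented and nothing is taken from the review's data:

```python
# Illustrative sketch (not the review's code): pooling standardized mean
# differences (Cohen's d) with a DerSimonian-Laird random-effects model.
import numpy as np

def cohens_d(m1, s1, n1, m2, s2, n2):
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))  # approx. variance
    return d, var

def random_effects_pool(ds, vs):
    ds, vs = np.asarray(ds), np.asarray(vs)
    w = 1.0 / vs                                  # fixed-effect weights
    d_fixed = np.sum(w * ds) / np.sum(w)
    q = np.sum(w * (ds - d_fixed) ** 2)           # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(ds) - 1)) / c)      # between-study variance
    w_star = 1.0 / (vs + tau2)
    pooled = np.sum(w_star * ds) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical trials: (mean, sd, n) for a VR arm vs a conventional-therapy arm.
d1, v1 = cohens_d(24.0, 6.0, 30, 20.0, 6.5, 30)
d2, v2 = cohens_d(18.0, 5.0, 25, 17.0, 5.5, 25)
print(random_effects_pool([d1, d2], [v1, v2]))    # pooled d with 95% CI
```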
|
arxiv
|
@article{zoppi2024computer-mediated,
title={Computer-mediated therapies for stroke rehabilitation: a systematic
review and meta-Analysis},
author={Stanley Mugisha and Mirko Job and Matteo Zoppi and Marco Testa and Rezia Molfino},
journal={arXiv preprint arXiv:2409.20260},
year={2024},
doi={10.1016/j.jstrokecerebrovasdis.2022.106454},
archivePrefix={arXiv},
eprint={2409.20260},
primaryClass={physics.med-ph cs.AI cs.HC cs.MM}
}
|
zoppi2024computer-mediated
|
arxiv-663596
|
2409.20261
|
Bi-stable thin soft robot for in-plane locomotion in narrow space
|
<|reference_start|>Bi-stable thin soft robot for in-plane locomotion in narrow space: Dielectric elastomer actuators (DEAs), also known as artificial muscles, have been widely developed for soft locomotion robots. With their compliant skeletons and miniaturized dimensions, they are well suited for narrow-space inspection. In this work, we propose a novel low-profile (1.1 mm) and lightweight (1.8 g) bi-stable in-plane DEA (Bi-DEA) constructed by mounting a dielectric elastomer on a flat bi-stable mechanism. It has an amplified displacement and output force compared with the in-plane DEA (I-DEA) without the bi-stable mechanism. Then, the Bi-DEA is applied to a thin soft robot, using three electrostatic adhesive pads (EA-Pads) as anchoring elements. This robot is capable of crawling and climbing to access millimetre-scale narrow gaps. Theoretical models of the bi-stable mechanism and the DEA are presented. The enhanced performance of the Bi-DEA induced by the mechanism is experimentally validated. The EA-Pads provide adhesion between the actuator and the locomotion substrate, allowing crawling and climbing on various surfaces, e.g., paper and acrylic. The thin soft robot has been demonstrated to be capable of crawling through a 4 mm narrow gap at a speed of up to 3.3 mm/s (0.07 body lengths per second and 2.78 body thicknesses per second).<|reference_end|>
|
arxiv
|
@article{wang2024bi-stable,
title={Bi-stable thin soft robot for in-plane locomotion in narrow space},
author={Xi Wang and Jung-che Chang and Feiran Wang and Dragos Axinte and Xin Dong},
journal={arXiv preprint arXiv:2409.20261},
year={2024},
archivePrefix={arXiv},
eprint={2409.20261},
primaryClass={cs.RO physics.class-ph}
}
|
wang2024bi-stable
|
arxiv-663597
|
2409.20264
|
First Order System Least Squares Neural Networks
|
<|reference_start|>First Order System Least Squares Neural Networks: We introduce a conceptual framework for numerically solving linear elliptic, parabolic, and hyperbolic PDEs on bounded, polytopal domains in Euclidean spaces by deep neural networks. The PDEs are recast as minimization of a least-squares (LSQ for short) residual of an equivalent, well-posed first-order system, over parametric families of deep neural networks. The associated LSQ residual a) is equal or proportional to a weak residual of the PDE, b) is additive in terms of contributions from localized subnetworks, indicating where the neural network is locally ``out of equilibrium'' with respect to the PDE residual, c) serves as the numerical loss function for neural network training, and d) constitutes, even with incomplete training, a computable, (quasi-)optimal numerical error estimator in the context of adaptive LSQ finite element methods. In addition, an adaptive neural network growth strategy is proposed which, assuming exact numerical minimization of the LSQ loss functional, yields sequences of neural networks with realizations that converge rate-optimally to the exact solution of the first order system LSQ formulation.<|reference_end|>
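A hedged, minimal sketch of the idea for a toy case (my own example, not the authors' architecture or analysis): the 1D Poisson problem -u'' = f on (0,1) with u(0)=u(1)=0 is recast as the first-order system sigma = u', -sigma' = f, and the LSQ residual of that system is used directly as the training loss:

```python
# Illustrative FOSLS-style training sketch (not the paper's method).
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2)
)  # outputs (u, sigma)

def f(x):                        # manufactured right-hand side
    return (torch.pi**2) * torch.sin(torch.pi * x)

def fosls_loss(n=128):
    x = torch.rand(n, 1, requires_grad=True)
    u, sigma = net(x).split(1, dim=1)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    dsigma = torch.autograd.grad(sigma.sum(), x, create_graph=True)[0]
    # residuals of  sigma - u' = 0  and  -sigma' = f
    res = ((sigma - du) ** 2).mean() + ((-dsigma - f(x)) ** 2).mean()
    xb = torch.tensor([[0.0], [1.0]])
    ub = net(xb)[:, :1]          # enforce u(0) = u(1) = 0 by penalty
    return res + (ub**2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = fosls_loss()
    loss.backward()
    opt.step()
```

The point of the formulation is visible even in this toy: the loss is a sum of local residual contributions, so it doubles as a computable error indicator.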
|
arxiv
|
@article{opschoor2024first,
title={First Order System Least Squares Neural Networks},
author={Joost A. A. Opschoor and Philipp C. Petersen and Christoph Schwab},
journal={arXiv preprint arXiv:2409.20264},
year={2024},
archivePrefix={arXiv},
eprint={2409.20264},
primaryClass={math.NA cs.LG cs.NA}
}
|
opschoor2024first
|
arxiv-663598
|
2409.20266
|
Self-Assessment and Correction of Sensor Synchronization
|
<|reference_start|>Self-Assessment and Correction of Sensor Synchronization: We propose an approach to assess the synchronization of rigidly mounted sensors based on their rotational motion. Using function similarity measures combined with a sliding-window approach, our method is capable of estimating time-varying time offsets. Further, the estimated offset allows the correction of erroneously assigned time stamps on measurements. This mitigates the effect of synchronization issues on subsequent modules in autonomous software stacks, such as tracking systems that heavily rely on accurate measurement time stamps. Additionally, a self-assessment based on an uncertainty measure is derived, and correction strategies are described. Our approach is evaluated with Monte Carlo experiments containing different error patterns. The results show that our approach accurately estimates time offsets and, thus, is able to detect and assess synchronization issues. To further underline the importance of our approach for autonomous systems, we investigate the effect of synchronization inconsistencies in tracking systems in more detail and demonstrate the benefit of our proposed offset correction.<|reference_end|>
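A hedged toy sketch of the core mechanism (aligning rotational-rate signals inside a window; the correlation-based similarity and all names are my assumptions, not necessarily the paper's exact measure):

```python
# Illustrative sketch (not the paper's algorithm): estimate the time offset
# between two rigidly mounted sensors by cross-correlating their yaw rates.
import numpy as np

def estimate_offset(rate_a, rate_b, dt, max_lag):
    """Return the lag (in seconds) by which rate_b trails rate_a."""
    a = rate_a - rate_a.mean()
    b = rate_b - rate_b.mean()
    lags = range(-max_lag, max_lag + 1)
    scores = [np.dot(a, np.roll(b, -lag)) for lag in lags]  # shift b back
    return dt * lags[int(np.argmax(scores))]

# Synthetic check: the second signal is the first delayed by 5 samples.
t = np.arange(0.0, 10.0, 0.01)
rate = np.sin(2.0 * np.pi * 0.5 * t)
print(estimate_offset(rate, np.roll(rate, 5), dt=0.01, max_lag=20))  # ~0.05
```

Applying this estimator inside a sliding window, as the abstract describes, is what yields time-varying offsets rather than a single constant.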
|
arxiv
|
@article{wodtko2024self-assessment,
title={Self-Assessment and Correction of Sensor Synchronization},
author={Thomas Wodtko, Alexander Scheible and Michael Buchholz},
journal={arXiv preprint arXiv:2409.20266},
year={2024},
archivePrefix={arXiv},
eprint={2409.20266},
primaryClass={cs.RO}
}
|
wodtko2024self-assessment
|
arxiv-663599
|
2409.20270
|
Loose Social-Interaction Recognition in Real-world Therapy Scenarios
|
<|reference_start|>Loose Social-Interaction Recognition in Real-world Therapy Scenarios: The computer vision community has explored dyadic interactions for atomic actions such as pushing and carrying objects. However, with the advancement in deep learning models, there is a need to explore more complex dyadic situations such as loose interactions. These are interactions where two people perform certain atomic activities to complete a global action irrespective of temporal synchronisation and physical engagement, such as cooking together. Analysing these types of dyadic interactions has several useful applications in the medical domain for social-skills development and mental health diagnosis. To achieve this, we propose a novel dual-path architecture to capture the loose interaction between two individuals. Our model learns global abstract features from each stream via a CNN backbone and fuses them using a new Global-Layer-Attention module based on a cross-attention strategy. We evaluate our model on real-world autism-diagnosis data, namely our Loose-Interaction dataset and the publicly available Autism dataset for loose interactions. Our network establishes baseline results on the Loose-Interaction dataset and SOTA results on the Autism dataset. Moreover, we study different social interactions by experimenting on a publicly available dataset, i.e., NTU-RGB+D (interactive classes from both NTU-60 and NTU-120). We find that different interactions require different network designs. We also compare a slightly different version of our method that incorporates time information to address tight interactions, achieving SOTA results.<|reference_end|>
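A hedged sketch of a generic cross-attention fusion of two stream features in plain PyTorch (this is not the paper's Global-Layer-Attention module, whose exact design is not specified here; shapes and dimensions are invented):

```python
# Illustrative two-stream fusion via cross-attention (not the paper's module).
import torch

dim = 256
attn = torch.nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

feats_a = torch.randn(8, 16, dim)   # stream A: (batch, tokens, channels)
feats_b = torch.randn(8, 16, dim)   # stream B from the second individual

# Stream A queries stream B; the attended features summarize the interaction.
fused, _ = attn(query=feats_a, key=feats_b, value=feats_b)
joint = torch.cat([feats_a.mean(1), fused.mean(1)], dim=-1)  # (8, 512)
```

The design choice illustrated here is that neither stream needs to be temporally synchronized with the other: attention picks out whichever tokens of the partner stream are relevant, which is what makes cross-attention a natural fit for loose interactions.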
|
arxiv
|
@article{ali2024loose,
title={Loose Social-Interaction Recognition in Real-world Therapy Scenarios},
author={Abid Ali and Rui Dai and Ashish Marisetty and Guillaume Astruc and
Monique Thonnat and Jean-Marc Odobez and Susanne Th\"ummler and Francois Bremond},
journal={IEEE/CVF Winter Conference on Applications of Computer Vision 2025},
year={2024},
archivePrefix={arXiv},
eprint={2409.20270},
primaryClass={cs.CV}
}
|
ali2024loose
|
arxiv-663600
|
2409.20274
|
Probabilistic Answer Set Programming with Discrete and Continuous Random Variables
|
<|reference_start|>Probabilistic Answer Set Programming with Discrete and Continuous Random Variables: Probabilistic Answer Set Programming under the credal semantics (PASP) extends Answer Set Programming with probabilistic facts that represent uncertain information. The probabilistic facts are discrete with Bernoulli distributions. However, several real-world scenarios require a combination of both discrete and continuous random variables. In this paper, we extend the PASP framework to support continuous random variables and propose Hybrid Probabilistic Answer Set Programming (HPASP). Moreover, we discuss, implement, and assess the performance of two exact algorithms based on projected answer set enumeration and knowledge compilation and two approximate algorithms based on sampling. Empirical results, also in line with known theoretical results, show that exact inference is feasible only for small instances, but knowledge compilation has a huge positive impact on the performance. Sampling allows handling larger instances, but sometimes requires an increasing amount of memory. Under consideration in Theory and Practice of Logic Programming (TPLP).<|reference_end|>
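A hedged toy illustration of what the "hybrid" in HPASP amounts to (one Bernoulli fact plus one Gaussian variable, estimated by plain Monte Carlo sampling; this is my simplification, not the paper's algorithms, and it ignores the credal lower/upper probability distinction):

```python
# Illustrative sketch (not the paper's implementation): approximate inference
# for a toy hybrid program by sampling. Discrete fact: 0.6::broken_sensor.
# Continuous fact: temperature ~ Normal(mean 20, std 4). Query: alarm holds
# when the sensor is broken or the temperature exceeds 25.
import random

def sample_world(rng):
    broken = rng.random() < 0.6            # Bernoulli probabilistic fact
    temperature = rng.gauss(20.0, 4.0)     # continuous random variable
    return broken or temperature > 25.0    # rules deriving the query atom

rng = random.Random(0)
n = 100_000
p_alarm = sum(sample_world(rng) for _ in range(n)) / n
print(f"P(alarm) ~ {p_alarm:.3f}")         # ~ 0.6 + 0.4 * P(N(20,4) > 25)
```

In the full framework each sampled world is an answer set program to be solved, which is why the paper's exact algorithms rely on projected answer set enumeration and knowledge compilation rather than this naive loop.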
|
arxiv
|
@article{azzolini2024probabilistic,
title={Probabilistic Answer Set Programming with Discrete and Continuous Random
Variables},
author={Damiano Azzolini and Fabrizio Riguzzi},
journal={arXiv preprint arXiv:2409.20274},
year={2024},
archivePrefix={arXiv},
eprint={2409.20274},
primaryClass={cs.AI}
}
|
azzolini2024probabilistic
|