corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-662601
|
2409.18387
|
Simpler Gradient Methods for Blind Super-Resolution with Lower Iteration Complexity
|
<|reference_start|>Simpler Gradient Methods for Blind Super-Resolution with Lower Iteration Complexity: We study the problem of blind super-resolution, which can be formulated as a low-rank matrix recovery problem via vectorized Hankel lift (VHL). The previous gradient descent method based on VHL named PGD-VHL relies on additional regularization such as the projection and balancing penalty, exhibiting a suboptimal iteration complexity. In this paper, we propose a simpler unconstrained optimization problem without the above two types of regularization and develop two new and provable gradient methods named VGD-VHL and ScalGD-VHL. A novel and sharp analysis is provided for the theoretical guarantees of our algorithms, which demonstrates that our methods offer lower iteration complexity than PGD-VHL. In addition, ScalGD-VHL has the lowest iteration complexity while being independent of the condition number. Furthermore, our novel analysis reveals that the blind super-resolution problem is less incoherence-demanding, thereby eliminating the necessity for incoherent projections to achieve linear convergence. Empirical results illustrate that our methods exhibit superior computational efficiency while achieving comparable recovery performance to prior arts.<|reference_end|>
|
arxiv
|
@article{li2024simpler,
title={Simpler Gradient Methods for Blind Super-Resolution with Lower Iteration
Complexity},
author={Jinsheng Li and Wei Cui and Xu Zhang},
journal={arXiv preprint arXiv:2409.18387},
year={2024},
archivePrefix={arXiv},
eprint={2409.18387},
primaryClass={cs.IT math.IT}
}
|
li2024simpler
|
arxiv-662602
|
2409.18388
|
Scale Free Projections Arise from Bipartite Random Networks
|
<|reference_start|>Scale Free Projections Arise from Bipartite Random Networks: The degree distribution of a real world network -- the number of links per node -- often follows a power law, with some hubs having many more links than traditional graph generation methods predict. For years, preferential attachment and growth have been the proposed mechanisms that lead to these scale free networks. However, the two sides of bipartite graphs like collaboration networks are usually not scale free, and are therefore not well-explained by these processes. Here we develop a bipartite extension to the Randomly Stopped Linking Model and show that mixtures of geometric distributions lead to power laws according to a Central Limit Theorem for distributions with high variance. The two halves of the actor-movie network are not scale free and can be represented by just 5 geometric distributions, but they combine to form a scale free actor-actor unipartite projection without preferential attachment or growth. This result supports our claim that scale free networks are the natural result of many Bernoulli trials with high variance of which preferential attachment and growth are only one example.<|reference_end|>
|
arxiv
|
@article{johnston2024scale,
title={Scale Free Projections Arise from Bipartite Random Networks},
author={Josh Johnston and Tim Andersen},
journal={arXiv preprint arXiv:2409.18388},
year={2024},
archivePrefix={arXiv},
eprint={2409.18388},
primaryClass={cs.SI cs.DM physics.soc-ph}
}
|
johnston2024scale
|
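The mechanism in the entry above — a mixture of geometric distributions producing a power-law-like tail — is easy to probe numerically. Below is a minimal sketch (an editorial illustration, not the authors' code; the five success probabilities and mixing weights are invented stand-ins for the paper's fitted components):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-component geometric mixture, loosely echoing the paper's
# 5-component fit to the actor-movie network (parameters are made up).
probs = np.array([0.5, 0.2, 0.05, 0.01, 0.002])  # per-trial success probabilities
weights = np.array([0.4, 0.3, 0.2, 0.08, 0.02])  # mixing weights, sum to 1

n = 100_000
components = rng.choice(len(probs), size=n, p=weights)
degrees = rng.geometric(probs[components])

# Complementary CDF: a roughly straight line on log-log axes signals a
# power-law-like tail, even though every single component is geometric.
for k in [1, 10, 100, 1000]:
    print(f"P(degree >= {k:4d}) = {(degrees >= k).mean():.5f}")
```

Plotting log P(degree ≥ k) against log k makes the near-linear tail visible; each geometric component alone would curve sharply downward.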
arxiv-662603
|
2409.18390
|
Speech to Reality: On-Demand Production using Natural Language, 3D Generative AI, and Discrete Robotic Assembly
|
<|reference_start|>Speech to Reality: On-Demand Production using Natural Language, 3D Generative AI, and Discrete Robotic Assembly: We present a system that transforms speech into physical objects by combining 3D generative Artificial Intelligence with robotic assembly. The system leverages natural language input to make design and manufacturing more accessible, enabling individuals without expertise in 3D modeling or robotic programming to create physical objects. We propose utilizing discrete robotic assembly of lattice-based voxel components to address the challenges of using generative AI outputs in physical production, such as design variability, fabrication speed, structural integrity, and material waste. The system interprets speech to generate 3D objects, discretizes them into voxel components, computes an optimized assembly sequence, and generates a robotic toolpath. The results are demonstrated through the assembly of various objects, ranging from chairs to shelves, which are prompted via speech and realized within 5 minutes using a 6-axis robotic arm.<|reference_end|>
|
arxiv
|
@article{kyaw2024speech,
title={Speech to Reality: On-Demand Production using Natural Language, 3D
Generative AI, and Discrete Robotic Assembly},
author={Alexander Htet Kyaw and Se Hwan Jeon and Miana Smith and Neil Gershenfeld},
journal={arXiv preprint arXiv:2409.18390},
year={2024},
archivePrefix={arXiv},
eprint={2409.18390},
primaryClass={cs.RO cs.AI cs.HC}
}
|
kyaw2024speech
|
arxiv-662604
|
2409.18391
|
Crank-Nicolson-type iterative decoupled algorithms for Biot's consolidation model using total pressure
|
<|reference_start|>Crank-Nicolson-type iterative decoupled algorithms for Biot's consolidation model using total pressure: In this work, we develop Crank-Nicolson-type iterative decoupled algorithms for a three-field formulation of Biot's consolidation model using total pressure. We begin by constructing an equivalent fully implicit coupled algorithm using the standard Crank-Nicolson method for the three-field formulation of Biot's model. Employing an iterative decoupled scheme to decompose the resulting coupled system, we derive two distinctive forms of Crank-Nicolson-type iterative decoupled algorithms based on the order of temporal computation and iteration: a time-stepping iterative decoupled algorithm and a global-in-time iterative decoupled algorithm. Notably, the proposed global-in-time algorithm supports a partially parallel-in-time feature. Capitalizing on the convergence properties of the iterative decoupled scheme, both algorithms exhibit second-order time accuracy and unconditional stability. Through numerical experiments, we validate theoretical predictions and demonstrate the effectiveness and efficiency of these novel approaches.<|reference_end|>
|
arxiv
|
@article{gu2024crank-nicolson-type,
title={Crank-Nicolson-type iterative decoupled algorithms for Biot's
consolidation model using total pressure},
author={Huipeng Gu and Mingchao Cai and Jingzhi Li},
journal={arXiv preprint arXiv:2409.18391},
year={2024},
archivePrefix={arXiv},
eprint={2409.18391},
primaryClass={math.NA cs.NA}
}
|
gu2024crank-nicolson-type
|
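For readers who want the time discretization behind this entry made concrete, here is the Crank-Nicolson update on a scalar test equation — a toy stand-in, not the Biot solver; for linear $f$ the implicit stage solves in closed form:

```python
import numpy as np

# Crank-Nicolson for y' = lam * y, a scalar stand-in for the coupled system.
# The implicit stage is solved in closed form for linear f:
#   y_{n+1} = y_n + (dt/2) * lam * (y_n + y_{n+1})
#   => y_{n+1} = (1 + dt*lam/2) / (1 - dt*lam/2) * y_n
lam, dt, T = -2.0, 0.1, 1.0
y = 1.0
for _ in range(int(T / dt)):
    y = (1 + dt * lam / 2) / (1 - dt * lam / 2) * y

print(f"Crank-Nicolson: {y:.6f}   exact: {np.exp(lam * T):.6f}")
# Halving dt cuts the error by ~4x, reflecting the second-order accuracy
# that the paper's decoupled algorithms inherit from this scheme.
```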
arxiv-662605
|
2409.18393
|
Social media algorithms can curb misinformation, but do they?
|
<|reference_start|>Social media algorithms can curb misinformation, but do they?: A recent article in $\textit{Science}$ by Guess et al. estimated the effect of Facebook's news feed algorithm on exposure to misinformation and political information among Facebook users. However, its reporting and conclusions did not account for a series of temporary emergency changes to Facebook's news feed algorithm in the wake of the 2020 U.S. presidential election that were designed to diminish the spread of voter-fraud misinformation. Here, we demonstrate that these emergency measures systematically reduced the amount of misinformation in the control group of the study, which was using the news feed algorithm. This issue may have led readers to misinterpret the results of the study and to conclude that the Facebook news feed algorithm used outside of the study period mitigates political misinformation as compared to reverse chronological feed.<|reference_end|>
|
arxiv
|
@article{bagchi2024social,
title={Social media algorithms can curb misinformation, but do they?},
author={Chhandak Bagchi and Filippo Menczer and Jennifer Lundquist and
Monideepa Tarafdar and Anthony Paik and Przemyslaw A. Grabowicz},
journal={arXiv preprint arXiv:2409.18393},
year={2024},
doi={10.5281/zenodo.13787981},
archivePrefix={arXiv},
eprint={2409.18393},
primaryClass={cs.SI cs.CY}
}
|
bagchi2024social
|
arxiv-662606
|
2409.18394
|
An Augmented Reality Interface for Teleoperating Robot Manipulators: Reducing Demonstrator Task Load through Digital Twin Control
|
<|reference_start|>An Augmented Reality Interface for Teleoperating Robot Manipulators: Reducing Demonstrator Task Load through Digital Twin Control: Acquiring high-quality demonstration data is essential for the success of data-driven methods, such as imitation learning. Existing platforms for providing demonstrations for manipulation tasks often impose significant physical and mental demands on the demonstrator, require additional hardware systems, or necessitate specialized domain knowledge. In this work, we present a novel augmented reality (AR) interface for teleoperating robotic manipulators, emphasizing the demonstrator's experience, particularly in the context of performing complex tasks that require precision and accuracy. This interface, designed for the Microsoft HoloLens 2, leverages the adaptable nature of mixed reality (MR), enabling users to control a physical robot through digital twin surrogates. We assess the effectiveness of our approach across three complex manipulation tasks and compare its performance against OPEN TEACH, a recent virtual reality (VR) teleoperation system, as well as two traditional control methods: kinesthetic teaching and a 3D SpaceMouse for end-effector control. Our findings show that our method performs comparably to the VR approach and demonstrates the potential for AR in data collection. Additionally, we conduct a pilot study to evaluate the usability and task load associated with each method. Results indicate that our AR-based system achieves higher usability scores than the VR benchmark and significantly reduces mental demand, physical effort, and frustration experienced by users. An accompanying video can be found at https://youtu.be/w-M58ohPgrA.<|reference_end|>
|
arxiv
|
@article{smith2024an,
title={An Augmented Reality Interface for Teleoperating Robot Manipulators:
Reducing Demonstrator Task Load through Digital Twin Control},
author={Aliyah Smith and Monroe Kennedy III},
journal={arXiv preprint arXiv:2409.18394},
year={2024},
archivePrefix={arXiv},
eprint={2409.18394},
primaryClass={cs.RO}
}
|
smith2024an
|
arxiv-662607
|
2409.18395
|
Code Vulnerability Repair with Large Language Model using Context-Aware Prompt Tuning
|
<|reference_start|>Code Vulnerability Repair with Large Language Model using Context-Aware Prompt Tuning: Large Language Models (LLMs) have shown significant challenges in detecting and repairing vulnerable code, particularly when dealing with vulnerabilities involving multiple aspects, such as variables, code flows, and code structures. In this study, we utilize GitHub Copilot as the LLM and focus on buffer overflow vulnerabilities. Our experiments reveal a notable gap in Copilot's abilities when dealing with buffer overflow vulnerabilities, with a 76% vulnerability detection rate but only a 15% vulnerability repair rate. To address this issue, we propose context-aware prompt tuning techniques designed to enhance LLM performance in repairing buffer overflow. By injecting a sequence of domain knowledge about the vulnerability, including various security and code contexts, we demonstrate that Copilot's successful repair rate increases to 63%, representing more than four times the improvement compared to repairs without domain knowledge.<|reference_end|>
|
arxiv
|
@article{khan2024code,
title={Code Vulnerability Repair with Large Language Model using Context-Aware
Prompt Tuning},
author={Arshiya Khan and Guannan Liu and Xing Gao},
journal={arXiv preprint arXiv:2409.18395},
year={2024},
archivePrefix={arXiv},
eprint={2409.18395},
primaryClass={cs.CR cs.AI}
}
|
khan2024code
|
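The "context-aware prompt tuning" this entry describes boils down to prepending structured domain knowledge to the repair request. A hypothetical sketch of such a prompt builder follows — the field names and wording are the editor's, not the paper's template:

```python
# Hypothetical prompt assembly for context-aware vulnerability repair.
# Field names and phrasing are illustrative, not the paper's exact template.
def build_repair_prompt(code: str, vuln_type: str,
                        security_context: str, code_context: str) -> str:
    return (
        f"The following C function contains a {vuln_type} vulnerability.\n"
        f"Security context: {security_context}\n"
        f"Code context: {code_context}\n"
        f"Vulnerable code:\n{code}\n"
        "Rewrite the function so the vulnerability is fixed."
    )

prompt = build_repair_prompt(
    code='void copy(char *src) { char buf[8]; strcpy(buf, src); }',
    vuln_type="buffer overflow",
    security_context="strcpy does not bound-check the destination buffer",
    code_context="buf holds at most 7 characters plus a NUL terminator",
)
print(prompt)
```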
arxiv-662608
|
2409.18396
|
Heterogeneous quantization regularizes spiking neural network activity
|
<|reference_start|>Heterogeneous quantization regularizes spiking neural network activity: The learning and recognition of object features from unregulated input has been a longstanding challenge for artificial intelligence systems. Brains are adept at learning stable representations given small samples of noisy observations; across sensory modalities, this capacity is aided by a cascade of signal conditioning steps informed by domain knowledge. The olfactory system, in particular, solves a source separation and denoising problem compounded by concentration variability, environmental interference, and unpredictably correlated sensor affinities. To function optimally, its plastic network requires statistically well-behaved input. We present a data-blind neuromorphic signal conditioning strategy whereby analog data are normalized and quantized into spike phase representations. Input is delivered to a column of duplicated spiking principal neurons via heterogeneous synaptic weights; this regularizes layer utilization, yoking total activity to the network's operating range and rendering internal representations robust to uncontrolled open-set stimulus variance. We extend this mechanism by adding a data-aware calibration step whereby the range and density of the quantization weights adapt to accumulated input statistics, optimizing resource utilization by balancing activity regularization and information retention.<|reference_end|>
|
arxiv
|
@article{moyal2024heterogeneous,
title={Heterogeneous quantization regularizes spiking neural network activity},
author={Roy Moyal and Kyrus R. Mama and Matthew Einhorn and Ayon Borthakur
and Thomas A. Cleland},
journal={arXiv preprint arXiv:2409.18396},
year={2024},
archivePrefix={arXiv},
eprint={2409.18396},
primaryClass={q-bio.NC cs.NE}
}
|
moyal2024heterogeneous
|
arxiv-662609
|
2409.18397
|
Scientific Machine Learning Seismology
|
<|reference_start|>Scientific Machine Learning Seismology: Scientific machine learning (SciML) is an interdisciplinary research field that integrates machine learning, particularly deep learning, with physics theory to understand and predict complex natural phenomena. By incorporating physical knowledge, SciML reduces the dependency on observational data, which is often limited in the natural sciences. In this article, the fundamental concepts of SciML, its applications in seismology, and prospects are described. Specifically, two popular methods are mainly discussed: physics-informed neural networks (PINNs) and neural operators (NOs). PINNs can address both forward and inverse problems by incorporating governing laws into the loss functions. The use of PINNs is expanding into areas such as simultaneous solutions of differential equations, inference in underdetermined systems, and regularization based on physics. These research directions would broaden the scope of deep learning in natural sciences. NOs are models designed for operator learning, which deals with relationships between infinite-dimensional spaces. NOs show promise in modeling the time evolution of complex systems based on observational or simulation data. Since large amounts of data are often required, combining NOs with physics-informed learning holds significant potential. Finally, SciML is considered from a broader perspective beyond deep learning: statistical (or mathematical) frameworks that integrate observational data with physical principles to model natural phenomena. In seismology, mathematically rigorous Bayesian statistics has been developed over the past decades, whereas more flexible and scalable deep learning has only emerged recently. Both approaches can be considered as part of SciML in a broad sense. Theoretical and practical insights in both directions would advance SciML methodologies and thereby deepen our understanding of earthquake phenomena.<|reference_end|>
|
arxiv
|
@article{okazaki2024scientific,
title={Scientific Machine Learning Seismology},
author={Tomohisa Okazaki},
journal={arXiv preprint arXiv:2409.18397},
year={2024},
archivePrefix={arXiv},
eprint={2409.18397},
primaryClass={physics.geo-ph cs.LG physics.comp-ph}
}
|
okazaki2024scientific
|
arxiv-662610
|
2409.18399
|
Multimodal Trajectory Prediction for Autonomous Driving on Unstructured Roads using Deep Convolutional Network
|
<|reference_start|>Multimodal Trajectory Prediction for Autonomous Driving on Unstructured Roads using Deep Convolutional Network: Recently, the application of autonomous driving in open-pit mining has garnered increasing attention for achieving safe and efficient mineral transportation. Compared to urban structured roads, unstructured roads in mining sites have uneven boundaries and lack clearly defined lane markings. This leads to a lack of sufficient constraint information for predicting the trajectories of other human-driven vehicles, resulting in higher uncertainty in trajectory prediction problems. A method is proposed to predict multiple possible trajectories of the target vehicle and their probabilities. The surrounding environment and historical trajectories of the target vehicle are encoded as a rasterized image, which is used as input to our deep convolutional network to predict the target vehicle's multiple possible trajectories. The method underwent offline testing on a dataset specifically designed for autonomous driving scenarios in open-pit mining and was compared and evaluated against a physics-based method. The open-source code and data are available at https://github.com/LLsxyc/mine_motion_prediction.git<|reference_end|>
|
arxiv
|
@article{li2024multimodal,
title={Multimodal Trajectory Prediction for Autonomous Driving on Unstructured
Roads using Deep Convolutional Network},
author={Lei Li and Zhifa Chen and Jian Wang and Bin Zhou and Guizhen Yu and
Xiaoxuan Chen},
journal={arXiv preprint arXiv:2409.18399},
year={2024},
archivePrefix={arXiv},
eprint={2409.18399},
primaryClass={cs.AI}
}
|
li2024multimodal
|
arxiv-662611
|
2409.18401
|
GenesisTex2: Stable, Consistent and High-Quality Text-to-Texture Generation
|
<|reference_start|>GenesisTex2: Stable, Consistent and High-Quality Text-to-Texture Generation: Large-scale text-guided image diffusion models have shown astonishing results in text-to-image (T2I) generation. However, applying these models to synthesize textures for 3D geometries remains challenging due to the domain gap between 2D images and textures on a 3D surface. Early works that used a projecting-and-inpainting approach managed to preserve generation diversity but often resulted in noticeable artifacts and style inconsistencies. While recent methods have attempted to address these inconsistencies, they often introduce other issues, such as blurring, over-saturation, or over-smoothing. To overcome these challenges, we propose a novel text-to-texture synthesis framework that leverages pretrained diffusion models. We first introduce a local attention reweighing mechanism in the self-attention layers to guide the model in concentrating on spatial-correlated patches across different views, thereby enhancing local details while preserving cross-view consistency. Additionally, we propose a novel latent space merge pipeline, which further ensures consistency across different viewpoints without sacrificing too much diversity. Our method significantly outperforms existing state-of-the-art techniques regarding texture consistency and visual quality, while delivering results much faster than distillation-based methods. Importantly, our framework does not require additional training or fine-tuning, making it highly adaptable to a wide range of models available on public platforms.<|reference_end|>
|
arxiv
|
@article{lu2024genesistex2:,
title={GenesisTex2: Stable, Consistent and High-Quality Text-to-Texture
Generation},
author={Jiawei Lu and Yingpeng Zhang and Zengjun Zhao and He Wang and Kun Zhou
and Tianjia Shao},
journal={arXiv preprint arXiv:2409.18401},
year={2024},
archivePrefix={arXiv},
eprint={2409.18401},
primaryClass={cs.CV cs.AI}
}
|
lu2024genesistex2:
|
arxiv-662612
|
2409.18402
|
Embed and Emulate: Contrastive representations for simulation-based inference
|
<|reference_start|>Embed and Emulate: Contrastive representations for simulation-based inference: Scientific modeling and engineering applications rely heavily on parameter estimation methods to fit physical models and calibrate numerical simulations using real-world measurements. In the absence of analytic statistical models with tractable likelihoods, modern simulation-based inference (SBI) methods first use a numerical simulator to generate a dataset of parameters and simulated outputs. This dataset is then used to approximate the likelihood and estimate the system parameters given observation data. Several SBI methods employ machine learning emulators to accelerate data generation and parameter estimation. However, applying these approaches to high-dimensional physical systems remains challenging due to the cost and complexity of training high-dimensional emulators. This paper introduces Embed and Emulate (E&E): a new SBI method based on contrastive learning that efficiently handles high-dimensional data and complex, multimodal parameter posteriors. E&E learns a low-dimensional latent embedding of the data (i.e., a summary statistic) and a corresponding fast emulator in the latent space, eliminating the need to run expensive simulations or a high dimensional emulator during inference. We illustrate the theoretical properties of the learned latent space through a synthetic experiment and demonstrate superior performance over existing methods in a realistic, non-identifiable parameter estimation task using the high-dimensional, chaotic Lorenz 96 system.<|reference_end|>
|
arxiv
|
@article{jiang2024embed,
title={Embed and Emulate: Contrastive representations for simulation-based
inference},
author={Ruoxi Jiang and Peter Y. Lu and Rebecca Willett},
journal={arXiv preprint arXiv:2409.18402},
year={2024},
archivePrefix={arXiv},
eprint={2409.18402},
primaryClass={cs.LG stat.ML}
}
|
jiang2024embed
|
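The contrastive representation learning in Embed and Emulate can be illustrated with a generic InfoNCE loss, where each simulated output is pulled toward the parameters that generated it and pushed away from in-batch negatives. This is a sketch of the general technique, not the paper's exact objective or architecture:

```python
import torch
import torch.nn.functional as F

def info_nce(z_data: torch.Tensor, z_param: torch.Tensor, tau: float = 0.1):
    """InfoNCE loss: row i of z_data should match row i of z_param;
    every other row in the batch acts as a negative.
    z_data, z_param: (batch, dim) embeddings of simulated outputs and of
    the parameters that generated them."""
    z_data = F.normalize(z_data, dim=1)
    z_param = F.normalize(z_param, dim=1)
    logits = z_data @ z_param.T / tau          # (batch, batch) similarity matrix
    labels = torch.arange(z_data.size(0))      # the diagonal is the positive pair
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings standing in for learned encoders.
loss = info_nce(torch.randn(8, 32), torch.randn(8, 32))
print(loss.item())
```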
arxiv-662613
|
2409.18403
|
SpecCFA: Enhancing Control Flow Attestation/Auditing via Application-Aware Sub-Path Speculation
|
<|reference_start|>SpecCFA: Enhancing Control Flow Attestation/Auditing via Application-Aware Sub-Path Speculation: At the edge of modern cyber-physical systems, Micro-Controller Units (MCUs) are responsible for safety-critical sensing/actuation. However, MCU cost constraints rule out the usual security mechanisms of general-purpose computers. Thus, various low-cost security architectures have been proposed to remotely verify MCU software integrity. Control Flow Attestation (CFA) enables a Verifier (Vrf) to remotely assess the run-time behavior of a prover MCU (Prv), generating an authenticated trace of all of Prv control flow transfers (CFLog). Further, Control Flow Auditing architectures augment CFA by guaranteeing the delivery of evidence to Vrf. Unfortunately, a limitation of existing CFA lies in the cost to store and transmit CFLog, as even simple MCU software may generate large traces. Given these issues, prior work has proposed static (context-insensitive) optimizations. However, they do not support configurable program-specific optimizations. In this work, we note that programs may produce unique predictable control flow sub-paths and argue that program-specific predictability can be leveraged to dynamically optimize CFA while retaining all security guarantees. Therefore, we propose SpecCFA: an approach for dynamic sub-path speculation in CFA. SpecCFA allows Vrf to securely speculate on likely control flow sub-paths for each attested program. At run-time, when a sub-path in CFLog matches a pre-defined speculation, the entire sub-path is replaced by a reserved symbol. SpecCFA can speculate on multiple variable-length control flow sub-paths simultaneously. We implement SpecCFA atop two open-source control flow auditing architectures: one based on a custom hardware design and one based on a commodity Trusted Execution Environment (ARM TrustZone-M). In both cases, SpecCFA significantly lowers storage/performance costs that are critical to resource-constrained MCUs.<|reference_end|>
|
arxiv
|
@article{caulfield2024speccfa:,
title={SpecCFA: Enhancing Control Flow Attestation/Auditing via
Application-Aware Sub-Path Speculation},
author={Adam Caulfield and Liam Tyler and Ivan De Oliveira Nunes},
journal={arXiv preprint arXiv:2409.18403},
year={2024},
archivePrefix={arXiv},
eprint={2409.18403},
primaryClass={cs.CR}
}
|
caulfield2024speccfa:
|
arxiv-662614
|
2409.18405
|
Word2Wave: Language Driven Mission Programming for Efficient Subsea Deployments of Marine Robots
|
<|reference_start|>Word2Wave: Language Driven Mission Programming for Efficient Subsea Deployments of Marine Robots: This paper explores the design and development of a language-based interface for dynamic mission programming of autonomous underwater vehicles (AUVs). The proposed 'Word2Wave' (W2W) framework enables interactive programming and parameter configuration of AUVs for remote subsea missions. The W2W framework includes: (i) a set of novel language rules and command structures for efficient language-to-mission mapping; (ii) a GPT-based prompt engineering module for training data generation; (iii) a small language model (SLM)-based sequence-to-sequence learning pipeline for mission command generation from human speech or text; and (iv) a novel user interface for 2D mission map visualization and human-machine interfacing. The proposed learning pipeline adapts an SLM named T5-Small that can learn language-to-mission mapping from processed language data effectively, providing robust and efficient performance. In addition to a benchmark evaluation with state-of-the-art, we conduct a user interaction study to demonstrate the effectiveness of W2W over commercial AUV programming interfaces. Across participants, W2W-based programming required less than 10% time for mission programming compared to traditional interfaces; it is deemed to be a simpler and more natural paradigm for subsea mission programming with a usability score of 76.25. W2W opens up promising future research opportunities on hands-free AUV mission programming for efficient subsea deployments.<|reference_end|>
|
arxiv
|
@article{chen2024word2wave:,
title={Word2Wave: Language Driven Mission Programming for Efficient Subsea
Deployments of Marine Robots},
author={Ruo Chen and David Blow and Adnan Abdullah and Md Jahidul Islam},
journal={arXiv preprint arXiv:2409.18405},
year={2024},
archivePrefix={arXiv},
eprint={2409.18405},
primaryClass={cs.RO}
}
|
chen2024word2wave:
|
arxiv-662615
|
2409.18408
|
Query matching for spatio-temporal action detection with query-based object detector
|
<|reference_start|>Query matching for spatio-temporal action detection with query-based object detector: In this paper, we propose a method that extends the query-based object detection model, DETR, to spatio-temporal action detection, which requires maintaining temporal consistency in videos. Our proposed method applies DETR to each frame and uses feature shift to incorporate temporal information. However, DETR's object queries in each frame may correspond to different objects, making a simple feature shift ineffective. To overcome this issue, we propose query matching across different frames, ensuring that queries for the same object are matched and used for the feature shift. Experimental results show that performance on the JHMDB21 dataset improves significantly when query features are shifted using the proposed query matching.<|reference_end|>
|
arxiv
|
@article{hori2024query,
title={Query matching for spatio-temporal action detection with query-based
object detector},
author={Shimon Hori and Kazuki Omi and Toru Tamaki},
journal={arXiv preprint arXiv:2409.18408},
year={2024},
archivePrefix={arXiv},
eprint={2409.18408},
primaryClass={cs.CV}
}
|
hori2024query
|
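The query-matching step in this entry is, at its core, an assignment problem between the query embeddings of consecutive frames. A minimal sketch using Hungarian matching on cosine similarity (illustrative only; no DETR model is loaded here):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_queries(feat_t: np.ndarray, feat_t1: np.ndarray) -> np.ndarray:
    """Match object queries of frame t to frame t+1 by cosine similarity.

    feat_t, feat_t1: (num_queries, dim) query features from two frames.
    Returns perm such that feat_t1[perm[i]] corresponds to feat_t[i],
    so features can be shifted between matched queries.
    """
    a = feat_t / np.linalg.norm(feat_t, axis=1, keepdims=True)
    b = feat_t1 / np.linalg.norm(feat_t1, axis=1, keepdims=True)
    cost = -(a @ b.T)                       # maximize similarity = minimize cost
    rows, cols = linear_sum_assignment(cost)
    perm = np.empty(len(rows), dtype=int)
    perm[rows] = cols
    return perm

# Toy check: frame t+1 is frame t with its queries shuffled.
rng = np.random.default_rng(0)
q_t = rng.normal(size=(5, 16))
shuffle = rng.permutation(5)
q_t1 = q_t[shuffle]
print(match_queries(q_t, q_t1))  # equals the inverse permutation of `shuffle`
print(np.argsort(shuffle))       # i.e., this
```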
arxiv-662616
|
2409.18409
|
Generative Retrieval Meets Multi-Graded Relevance
|
<|reference_start|>Generative Retrieval Meets Multi-Graded Relevance: Generative retrieval represents a novel approach to information retrieval. It uses an encoder-decoder architecture to directly produce relevant document identifiers (docids) for queries. While this method offers benefits, current approaches are limited to scenarios with binary relevance data, overlooking the potential for documents to have multi-graded relevance. Extending generative retrieval to accommodate multi-graded relevance poses challenges, including the need to reconcile likelihood probabilities for docid pairs and the possibility of multiple relevant documents sharing the same identifier. To address these challenges, we introduce a framework called GRaded Generative Retrieval (GR$^2$). GR$^2$ focuses on two key components: ensuring relevant and distinct identifiers, and implementing multi-graded constrained contrastive training. First, we create identifiers that are both semantically relevant and sufficiently distinct to represent individual documents effectively. This is achieved by jointly optimizing the relevance and distinctness of docids through a combination of docid generation and autoencoder models. Second, we incorporate information about the relationship between relevance grades to guide the training process. We use a constrained contrastive training strategy to bring the representations of queries and the identifiers of their relevant documents closer together, based on their respective relevance grades. Extensive experiments on datasets with both multi-graded and binary relevance demonstrate the effectiveness of GR$^2$.<|reference_end|>
|
arxiv
|
@article{tang2024generative,
title={Generative Retrieval Meets Multi-Graded Relevance},
author={Yubao Tang and Ruqing Zhang and Jiafeng Guo and Maarten de Rijke and
Wei Chen and Xueqi Cheng},
journal={arXiv preprint arXiv:2409.18409},
year={2024},
archivePrefix={arXiv},
eprint={2409.18409},
primaryClass={cs.IR}
}
|
tang2024generative
|
arxiv-662617
|
2409.18411
|
BoT-Drive: Hierarchical Behavior and Trajectory Planning for Autonomous Driving using POMDPs
|
<|reference_start|>BoT-Drive: Hierarchical Behavior and Trajectory Planning for Autonomous Driving using POMDPs: Uncertainties in dynamic road environments pose significant challenges for behavior and trajectory planning in autonomous driving. This paper introduces BoT-Drive, a planning algorithm that addresses uncertainties at both behavior and trajectory levels within a Partially Observable Markov Decision Process (POMDP) framework. BoT-Drive employs driver models to characterize unknown behavioral intentions and utilizes their model parameters to infer hidden driving styles. By also treating driver models as decision-making actions for the autonomous vehicle, BoT-Drive effectively tackles the exponential complexity inherent in POMDPs. To enhance safety and robustness, the planner further applies importance sampling to refine the driving trajectory conditioned on the planned high-level behavior. Evaluation on real-world data shows that BoT-Drive consistently outperforms both existing planning methods and learning-based methods in regular and complex urban driving scenes, demonstrating significant improvements in driving safety and reliability.<|reference_end|>
|
arxiv
|
@article{jin2024bot-drive:,
title={BoT-Drive: Hierarchical Behavior and Trajectory Planning for Autonomous
Driving using POMDPs},
author={Xuanjin Jin and Chendong Zeng and Shengfa Zhu and Chunxiao Liu and
Panpan Cai},
journal={arXiv preprint arXiv:2409.18411},
year={2024},
archivePrefix={arXiv},
eprint={2409.18411},
primaryClass={cs.RO cs.AI}
}
|
jin2024bot-drive:
|
arxiv-662618
|
2409.18412
|
SciDFM: A Large Language Model with Mixture-of-Experts for Science
|
<|reference_start|>SciDFM: A Large Language Model with Mixture-of-Experts for Science: Recently, there has been a significant upsurge of interest in leveraging large language models (LLMs) to assist scientific discovery. However, most LLMs only focus on general science, while they lack domain-specific knowledge, such as chemical molecules and amino acid sequences. To bridge these gaps, we introduce SciDFM, a mixture-of-experts LLM, which is trained from scratch and is able to conduct college-level scientific reasoning and understand molecules and amino acid sequences. We collect a large-scale training corpus containing numerous scientific papers and books from different disciplines as well as data from domain-specific databases. We further fine-tune the pre-trained model on lots of instruction data to improve performances on downstream benchmarks. From experiment results, we show that SciDFM achieves strong performance on general scientific benchmarks such as SciEval and SciQ, and it reaches a SOTA performance on domain-specific benchmarks among models of similar size. We further analyze the expert layers and show that the results of expert selection vary with data from different disciplines. To benefit the broader research community, we open-source SciDFM at https://huggingface.co/OpenDFM/SciDFM-MoE-A5.6B-v1.0.<|reference_end|>
|
arxiv
|
@article{sun2024scidfm:,
title={SciDFM: A Large Language Model with Mixture-of-Experts for Science},
author={Liangtai Sun and Danyu Luo and Da Ma and Zihan Zhao and Baocai Chen
and Zhennan Shen and Su Zhu and Lu Chen and Xin Chen and Kai Yu},
journal={arXiv preprint arXiv:2409.18412},
year={2024},
archivePrefix={arXiv},
eprint={2409.18412},
primaryClass={cs.CL cs.AI}
}
|
sun2024scidfm:
|
arxiv-662619
|
2409.18417
|
VickreyFeedback: Cost-efficient Data Construction for Reinforcement Learning from Human Feedback
|
<|reference_start|>VickreyFeedback: Cost-efficient Data Construction for Reinforcement Learning from Human Feedback: This paper addresses the cost-efficiency aspect of Reinforcement Learning from Human Feedback (RLHF). RLHF leverages datasets of human preferences over outputs of large language models (LLM) to instill human expectations into LLMs. While preference annotation comes with a monetized cost, the economic utility of a preference dataset has not been considered so far. What exacerbates this situation is that given complex intransitive or cyclic relationships in preference datasets, existing algorithms for fine-tuning LLMs are still far from capturing comprehensive preferences. This raises severe cost-efficiency concerns in production environments, where preference data accumulate over time. In this paper, we see the fine-tuning of LLMs as a monetized economy and introduce an auction mechanism to improve the efficiency of the preference data collection in dollar terms. We show that introducing an auction mechanism can play an essential role in enhancing the cost-efficiency of RLHF while maintaining satisfactory model performance. Experimental results demonstrate that our proposed auction-based protocol is cost-efficient for fine-tuning LLMs by concentrating on high-quality feedback.<|reference_end|>
|
arxiv
|
@article{zhang2024vickreyfeedback:,
title={VickreyFeedback: Cost-efficient Data Construction for Reinforcement
Learning from Human Feedback},
author={Guoxi Zhang and Jiuding Duan},
journal={arXiv preprint arXiv:2409.18417},
year={2024},
archivePrefix={arXiv},
eprint={2409.18417},
primaryClass={cs.LG cs.AI cs.CL cs.GT econ.GN q-fin.EC}
}
|
zhang2024vickreyfeedback:
|
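For context on the title: in a Vickrey (second-price) auction the highest bidder wins but pays only the second-highest bid, which makes truthful bidding a dominant strategy. A minimal generic sketch (the paper's actual protocol for pricing preference annotations is more involved, and the numbers below are invented):

```python
def vickrey_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Second-price (Vickrey) auction: the highest bidder wins but pays
    only the second-highest bid, so truthful bidding is dominant."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Annotators "bid" cost-adjusted quality scores for a labeling task.
print(vickrey_auction({"annotator_a": 9.0, "annotator_b": 7.5, "annotator_c": 4.0}))
# -> ('annotator_a', 7.5)
```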
arxiv-662620
|
2409.18418
|
A3: Active Adversarial Alignment for Source-Free Domain Adaptation
|
<|reference_start|>A3: Active Adversarial Alignment for Source-Free Domain Adaptation: Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Recent works have focused on source-free UDA, where only target data is available. This is challenging as models rely on noisy pseudo-labels and struggle with distribution shifts. We propose Active Adversarial Alignment (A3), a novel framework combining self-supervised learning, adversarial training, and active learning for robust source-free UDA. A3 actively samples informative and diverse data using an acquisition function for training. It adapts models via adversarial losses and consistency regularization, aligning distributions without source data access. A3 advances source-free UDA through its synergistic integration of active and adversarial learning for effective domain alignment and noise reduction.<|reference_end|>
|
arxiv
|
@article{eze2024a3:,
title={A3: Active Adversarial Alignment for Source-Free Domain Adaptation},
author={Chrisantus Eze and Christopher Crick},
journal={arXiv preprint arXiv:2409.18418},
year={2024},
archivePrefix={arXiv},
eprint={2409.18418},
primaryClass={cs.LG cs.AI cs.CV}
}
|
eze2024a3:
|
arxiv-662621
|
2409.18419
|
Robust Network Learning via Inverse Scale Variational Sparsification
|
<|reference_start|>Robust Network Learning via Inverse Scale Variational Sparsification: While neural networks have made significant strides in many AI tasks, they remain vulnerable to a range of noise types, including natural corruptions, adversarial noise, and low-resolution artifacts. Many existing approaches focus on enhancing robustness against specific noise types, limiting their adaptability to others. Previous studies have addressed general robustness by adopting a spectral perspective, which tends to blur crucial features like texture and object contours. Our proposed solution, however, introduces an inverse scale variational sparsification framework within a time-continuous inverse scale space formulation. This framework progressively learns finer-scale features by discerning variational differences between pixels, ultimately preserving only large-scale features in the smoothed image. Unlike frequency-based methods, our approach not only removes noise by smoothing small-scale features where corruptions often occur but also retains high-contrast details such as textures and object contours. Moreover, our framework offers simplicity and efficiency in implementation. By integrating this algorithm into neural network training, we guide the model to prioritize learning large-scale features. We show the efficacy of our approach through enhanced robustness against various noise types.<|reference_end|>
|
arxiv
|
@article{zhou2024robust,
title={Robust Network Learning via Inverse Scale Variational Sparsification},
author={Zhiling Zhou and Zirui Liu and Chengming Xu and Yanwei Fu and Xinwei Sun},
journal={arXiv preprint arXiv:2409.18419},
year={2024},
archivePrefix={arXiv},
eprint={2409.18419},
primaryClass={cs.CV cs.LG}
}
|
zhou2024robust
|
arxiv-662622
|
2409.18423
|
A physics-driven sensor placement optimization methodology for temperature field reconstruction
|
<|reference_start|>A physics-driven sensor placement optimization methodology for temperature field reconstruction: Perceiving the global field from sparse sensors has been a grand challenge in the monitoring, analysis, and design of physical systems. In this context, sensor placement optimization is a crucial issue. Most existing works require large and sufficient data to construct data-based criteria, which are intractable in data-free scenarios without numerical and experimental data. To this end, we propose a novel physics-driven sensor placement optimization (PSPO) method for temperature field reconstruction using a physics-based criterion to optimize sensor locations. In our methodological framework, we firstly derive the theoretical upper and lower bounds of the reconstruction error under noise scenarios by analyzing the optimal solution, proving that error bounds correlate with the condition number determined by sensor locations. Furthermore, the condition number, as the physics-based criterion, is used to optimize sensor locations by the genetic algorithm. Finally, the best sensors are validated by reconstruction models, including non-invasive end-to-end models, non-invasive reduced-order models, and physics-informed models. Experimental results, both on a numerical and an application case, demonstrate that the PSPO method significantly outperforms random and uniform selection methods, improving the reconstruction accuracy by nearly an order of magnitude. Moreover, the PSPO method can achieve comparable reconstruction accuracy to the existing data-driven placement optimization methods.<|reference_end|>
|
arxiv
|
@article{liu2024a,
title={A physics-driven sensor placement optimization methodology for
temperature field reconstruction},
author={Xu Liu and Wen Yao and Wei Peng and Zhuojia Fu and Zixue Xiang and
Xiaoqian Chen},
journal={Applied Thermal Engineering (2024)},
year={2024},
archivePrefix={arXiv},
eprint={2409.18423},
primaryClass={cs.LG}
}
|
liu2024a
|
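The physics-based criterion in this entry is the condition number of the measurement operator restricted to the chosen sensor rows, so candidate layouts can be scored with no training data at all. A minimal sketch of that scoring step (the basis and sizes are illustrative random stand-ins, and the genetic-algorithm search loop is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reduced basis mapping m modal coefficients to n field points.
# In the paper this comes from the physics model, not random numbers.
n_points, n_modes = 200, 10
basis = rng.normal(size=(n_points, n_modes))

def placement_score(sensor_idx: np.ndarray) -> float:
    """Condition number of the basis restricted to the sensor rows.
    Lower is better: it bounds how much sensor noise is amplified in the
    reconstructed temperature field."""
    return np.linalg.cond(basis[sensor_idx])

random_pick = rng.choice(n_points, size=15, replace=False)
print(f"cond(random placement) = {placement_score(random_pick):.2f}")
```

A genetic algorithm, as used in the paper, would simply minimize `placement_score` over index sets.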
arxiv-662623
|
2409.18426
|
Dual Cone Gradient Descent for Training Physics-Informed Neural Networks
|
<|reference_start|>Dual Cone Gradient Descent for Training Physics-Informed Neural Networks: Physics-informed neural networks (PINNs) have emerged as a prominent approach for solving partial differential equations (PDEs) by minimizing a combined loss function that incorporates both boundary loss and PDE residual loss. Despite their remarkable empirical performance in various scientific computing tasks, PINNs often fail to generate reasonable solutions, and such pathological behaviors remain difficult to explain and resolve. In this paper, we identify that PINNs can be adversely trained when gradients of each loss function exhibit a significant imbalance in their magnitudes and present a negative inner product value. To address these issues, we propose a novel optimization framework, Dual Cone Gradient Descent (DCGD), which adjusts the direction of the updated gradient to ensure it falls within a dual cone region. This region is defined as a set of vectors where the inner products with both the gradients of the PDE residual loss and the boundary loss are non-negative. Theoretically, we analyze the convergence properties of DCGD algorithms in a non-convex setting. On a variety of benchmark equations, we demonstrate that DCGD outperforms other optimization algorithms in terms of various evaluation metrics. In particular, DCGD achieves superior predictive accuracy and enhances the stability of training for failure modes of PINNs and complex PDEs, compared to existing optimally tuned models. Moreover, DCGD can be further improved by combining it with popular strategies for PINNs, including learning rate annealing and the Neural Tangent Kernel (NTK).<|reference_end|>
|
arxiv
|
@article{hwang2024dual,
title={Dual Cone Gradient Descent for Training Physics-Informed Neural Networks},
author={Youngsik Hwang and Dong-Young Lim},
journal={arXiv preprint arXiv:2409.18426},
year={2024},
archivePrefix={arXiv},
eprint={2409.18426},
primaryClass={cs.LG cs.NA math.AP math.NA math.OC stat.ML}
}
|
hwang2024dual
|
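The dual-cone condition in this entry is concrete: the update g must have non-negative inner product with both the PDE-residual gradient g_r and the boundary-loss gradient g_b. One simple way to enforce this when the gradients conflict — a sketch in the spirit of DCGD; the paper's exact construction may differ — is to strip from each gradient its component along the other:

```python
import numpy as np

def dual_cone_update(g_r: np.ndarray, g_b: np.ndarray) -> np.ndarray:
    """Return an update direction whose inner product with both the PDE
    residual gradient g_r and the boundary gradient g_b is non-negative.
    A simple projection in the spirit of DCGD, not the paper's exact rule."""
    if g_r @ g_b >= 0:
        return g_r + g_b  # no conflict: the plain sum already lies in the cone
    # Strip from each gradient its (negative) component along the other.
    g_r_proj = g_r - (g_r @ g_b) / (g_b @ g_b) * g_b
    g_b_proj = g_b - (g_b @ g_r) / (g_r @ g_r) * g_r
    return g_r_proj + g_b_proj

g_r = np.array([1.0, 0.0])   # toy conflicting gradients: <g_r, g_b> < 0
g_b = np.array([-0.5, 1.0])
g = dual_cone_update(g_r, g_b)
print(g, g @ g_r >= 0, g @ g_b >= 0)  # both inner products are non-negative
```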
arxiv-662624
|
2409.18427
|
Neural Collaborative Filtering to Detect Anomalies in Human Semantic Trajectories
|
<|reference_start|>Neural Collaborative Filtering to Detect Anomalies in Human Semantic Trajectories: Human trajectory anomaly detection has become increasingly important across a wide range of applications, including security surveillance and public health. However, existing trajectory anomaly detection methods are primarily focused on vehicle-level traffic, while human-level trajectory anomaly detection remains under-explored. Since human trajectory data is often very sparse, machine learning methods have become the preferred approach for identifying complex patterns. However, concerns regarding potential biases and the robustness of these models have intensified the demand for more transparent and explainable alternatives. In response to these challenges, our research focuses on developing a lightweight anomaly detection model specifically designed to detect anomalies in human trajectories. We propose a Neural Collaborative Filtering approach to model and predict normal mobility. Our method is designed to model users' daily patterns of life without requiring prior knowledge, thereby enhancing performance in scenarios where data is sparse or incomplete, such as in cold start situations. Our algorithm consists of two main modules. The first is the collaborative filtering module, which applies collaborative filtering to model normal mobility of individual humans to places of interest. The second is the neural module, responsible for interpreting the complex spatio-temporal relationships inherent in human trajectory data. To validate our approach, we conducted extensive experiments using simulated and real-world datasets comparing to numerous state-of-the-art trajectory anomaly detection approaches.<|reference_end|>
|
arxiv
|
@article{liu2024neural,
title={Neural Collaborative Filtering to Detect Anomalies in Human Semantic
Trajectories},
author={Yueyang Liu and Lance Kennedy and Hossein Amiri and Andreas Z{\"u}fle},
journal={arXiv preprint arXiv:2409.18427},
year={2024},
doi={10.1145/3681765.3698463},
archivePrefix={arXiv},
eprint={2409.18427},
primaryClass={cs.LG cs.AI cs.IR cs.SI}
}
|
liu2024neural
|
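The collaborative-filtering module above scores how typical a (user, place) visit is; a low predicted affinity flags an anomalous visit. A minimal PyTorch sketch of a neural collaborative filtering scorer (dimensions, layer sizes, and thresholding are illustrative, not the paper's configuration):

```python
import torch
import torch.nn as nn

class NCFScorer(nn.Module):
    """Neural collaborative filtering: embed users and places of interest,
    then score a visit with an MLP on the concatenated embeddings."""
    def __init__(self, n_users: int, n_places: int, dim: int = 16):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.place_emb = nn.Embedding(n_places, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, users: torch.Tensor, places: torch.Tensor) -> torch.Tensor:
        x = torch.cat([self.user_emb(users), self.place_emb(places)], dim=-1)
        return self.mlp(x).squeeze(-1)   # high = typical visit, low = anomalous

model = NCFScorer(n_users=100, n_places=50)
score = model(torch.tensor([3]), torch.tensor([7]))
print(score.item())  # train with BCE on observed visits vs. sampled negatives
```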
arxiv-662625
|
2409.18428
|
Improving Multilingual ASR in the Wild Using Simple N-best Re-ranking
|
<|reference_start|>Improving Multilingual ASR in the Wild Using Simple N-best Re-ranking: Multilingual Automatic Speech Recognition (ASR) models are typically evaluated in a setting where the ground-truth language of the speech utterance is known; however, this is often not the case in most practical settings. Automatic Spoken Language Identification (SLID) models are not perfect and misclassifications have a substantial impact on the final ASR accuracy. In this paper, we present a simple and effective N-best re-ranking approach to improve multilingual ASR accuracy for several prominent acoustic models by employing external features such as language models and text-based language identification models. Our results on FLEURS using the MMS and Whisper models show spoken language identification accuracy improvements of 8.7% and 6.1%, respectively, and word error rates which are 3.3% and 2.0% lower on these benchmarks.<|reference_end|>
|
arxiv
|
@article{yan2024improving,
title={Improving Multilingual ASR in the Wild Using Simple N-best Re-ranking},
author={Brian Yan and Vineel Pratap and Shinji Watanabe and Michael Auli},
journal={arXiv preprint arXiv:2409.18428},
year={2024},
archivePrefix={arXiv},
eprint={2409.18428},
primaryClass={cs.CL cs.SD eess.AS}
}
|
yan2024improving
|
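The re-ranking itself is a log-linear combination of the acoustic model score with external feature scores. A minimal sketch — the feature set and weights are illustrative; the paper tunes its combination on held-out data:

```python
# Hypothetical N-best re-ranking: combine the ASR model score with external
# language-model and text-based language-ID scores. Weights are illustrative.
def rerank(nbest, w_asr=1.0, w_lm=0.3, w_lid=0.5):
    """nbest: list of dicts with 'text', 'asr_score', 'lm_score', 'lid_score'
    (all log-probabilities). Returns hypotheses sorted best-first."""
    def total(h):
        return (w_asr * h["asr_score"] + w_lm * h["lm_score"]
                + w_lid * h["lid_score"])
    return sorted(nbest, key=total, reverse=True)

hyps = [
    {"text": "hola mundo", "asr_score": -4.1, "lm_score": -2.0, "lid_score": -0.1},
    {"text": "ola mundo",  "asr_score": -3.9, "lm_score": -3.5, "lid_score": -2.3},
]
print(rerank(hyps)[0]["text"])  # the external features flip the 1-best
```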
arxiv-662626
|
2409.18429
|
Joint Optimization of Data- and Model-Driven Probing Beams and Beam Predictor
|
<|reference_start|>Joint Optimization of Data- and Model-Driven Probing Beams and Beam Predictor: Hierarchical search in millimeter-wave (mmWave) communications incurs significant beam training overhead and delay, especially in a dynamic environment. Deep learning-enabled beam prediction is promising to significantly mitigate the overhead and delay, efficiently utilizing the site-specific channel prior. In this work, we propose to jointly optimize a data- and model-driven probe beam module and a cascaded data-driven beam predictor, with the limitation that the probe and communication beams are restricted to the manifold space of a uniform planar array and the quantization of the phase modulator. First, the probe beam module senses the mmWave channel with a complex-valued neural network and outputs the counterpart RSRPs of probe beams. Second, the beam predictor estimates the RSRPs in the entire beamspace to minimize the prediction cross entropy and selects the optimal beam with the maximum RSRP value for data transmission. Additionally, we propose to add noise to the phase variables in the probe beam module to guard against quantization error. Simulation results show the effectiveness of our proposed scheme.<|reference_end|>
|
arxiv
|
@article{lu2024joint,
title={Joint Optimization of Data- and Model-Driven Probing Beams and Beam
Predictor},
author={Tianheng Lu and Fan Meng and Zhilei Zhang and Yongming Huang and
Cheng Zhang and Xiaoyu Bai},
journal={arXiv preprint arXiv:2409.18429},
year={2024},
archivePrefix={arXiv},
eprint={2409.18429},
primaryClass={cs.IT eess.SP math.IT}
}
|
lu2024joint
|
arxiv-662627
|
2409.18431
|
Search3D: Hierarchical Open-Vocabulary 3D Segmentation
|
<|reference_start|>Search3D: Hierarchical Open-Vocabulary 3D Segmentation: Open-vocabulary 3D segmentation enables the exploration of 3D spaces using free-form text descriptions. Existing methods for open-vocabulary 3D instance segmentation primarily focus on identifying object-level instances in a scene. However, they face challenges when it comes to understanding more fine-grained scene entities such as object parts, or regions described by generic attributes. In this work, we introduce Search3D, an approach that builds a hierarchical open-vocabulary 3D scene representation, enabling the search for entities at varying levels of granularity: fine-grained object parts, entire objects, or regions described by attributes like materials. Our method aims to expand the capabilities of open vocabulary instance-level 3D segmentation by shifting towards a more flexible open-vocabulary 3D search setting less anchored to explicit object-centric queries, compared to prior work. To ensure a systematic evaluation, we also contribute a scene-scale open-vocabulary 3D part segmentation benchmark based on MultiScan, along with a set of open-vocabulary fine-grained part annotations on ScanNet++. We verify the effectiveness of Search3D across several tasks, demonstrating that our approach outperforms baselines in scene-scale open-vocabulary 3D part segmentation, while maintaining strong performance in segmenting 3D objects and materials.<|reference_end|>
|
arxiv
|
@article{takmaz2024search3d:,
title={Search3D: Hierarchical Open-Vocabulary 3D Segmentation},
author={Ayca Takmaz and Alexandros Delitzas and Robert W. Sumner and Francis
Engelmann and Johanna Wald and Federico Tombari},
journal={arXiv preprint arXiv:2409.18431},
year={2024},
archivePrefix={arXiv},
eprint={2409.18431},
primaryClass={cs.CV}
}
|
takmaz2024search3d:
|
arxiv-662628
|
2409.18433
|
Easy2Hard-Bench: Standardized Difficulty Labels for Profiling LLM Performance and Generalization
|
<|reference_start|>Easy2Hard-Bench: Standardized Difficulty Labels for Profiling LLM Performance and Generalization: While generalization over tasks from easy to hard is crucial for profiling large language models (LLMs), datasets with fine-grained difficulty annotations for each problem across a broad range of complexity are still lacking. Aiming to address this limitation, we present Easy2Hard-Bench, a consistently formatted collection of 6 benchmark datasets spanning various domains, such as mathematics and programming problems, chess puzzles, and reasoning questions. Each problem within these datasets is annotated with numerical difficulty scores. To systematically estimate problem difficulties, we collect abundant performance data on attempts to each problem by humans in the real world or by LLMs on prominent leaderboards. Leveraging the rich performance data, we apply well-established difficulty ranking systems, such as Item Response Theory (IRT) and Glicko-2 models, to uniformly assign numerical difficulty scores to problems. Moreover, datasets in Easy2Hard-Bench distinguish themselves from previous collections by a higher proportion of challenging problems. Through extensive experiments with six state-of-the-art LLMs, we provide a comprehensive analysis of their performance and generalization capabilities across varying levels of difficulty, with the aim of inspiring future research in LLM generalization. The datasets are available at https://huggingface.co/datasets/furonghuang-lab/Easy2Hard-Bench.<|reference_end|>
|
arxiv
|
@article{ding2024easy2hard-bench:,
title={Easy2Hard-Bench: Standardized Difficulty Labels for Profiling LLM
Performance and Generalization},
author={Mucong Ding and Chenghao Deng and Jocelyn Choo and Zichu Wu and
Aakriti Agrawal and Avi Schwarzschild and Tianyi Zhou and Tom Goldstein and
John Langford and Anima Anandkumar and Furong Huang},
journal={arXiv preprint arXiv:2409.18433},
year={2024},
archivePrefix={arXiv},
eprint={2409.18433},
primaryClass={cs.LG cs.AI cs.CL}
}
|
ding2024easy2hard-bench:
|
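Under the Rasch (one-parameter IRT) model that the entry's ranking systems build on, a solver of ability θ answers a problem of difficulty b correctly with probability σ(θ − b), and difficulties can be fit from response data by maximum likelihood. A toy sketch, simplified to 1PL with abilities held fixed (the paper also uses Glicko-2 and richer IRT variants):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy data: 200 solvers with known abilities attempt 3 problems whose
# latent difficulties we want to recover.
theta = rng.normal(size=200)                  # solver abilities
b_true = np.array([-0.5, 0.8, 1.5])           # latent problem difficulties
p_correct = sigmoid(theta[:, None] - b_true[None, :])
responses = rng.random((200, 3)) < p_correct  # binary outcome matrix

# Fit difficulties by gradient ascent on the Bernoulli log-likelihood,
# holding abilities fixed for simplicity.
b = np.zeros(3)
for _ in range(500):
    p = sigmoid(theta[:, None] - b[None, :])
    b -= 0.5 * (responses - p).mean(axis=0)   # dlogL/db_j = -sum_i (y - p)
print("true:", b_true)
print("fit: ", np.round(b, 2))  # rough recovery; real fits use far more data
```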
arxiv-662629
|
2409.18434
|
Get It For Free: Radar Segmentation without Expert Labels and Its Application in Odometry and Localization
|
<|reference_start|>Get It For Free: Radar Segmentation without Expert Labels and Its Application in Odometry and Localization: This paper presents a novel weakly supervised semantic segmentation method for radar segmentation, where the existing LiDAR semantic segmentation models are employed to generate semantic labels, which then serve as supervision signals for training a radar semantic segmentation model. The obtained radar semantic segmentation model outperforms LiDAR-based models, providing more consistent and robust segmentation under all-weather conditions, particularly in snow, rain, and fog. To mitigate potential errors in LiDAR semantic labels, we design a dedicated refinement scheme that corrects erroneous labels based on structural features and distribution patterns. The semantic information generated by our radar segmentation model is used in two downstream tasks, achieving significant performance improvements. In large-scale radar-based localization using OpenStreetMap, it leads to localization error reduction by 20.55\% over prior methods. For the odometry task, it improves translation accuracy by 16.4\% compared to the second-best method, securing the first place in the radar odometry competition at the Radar in Robotics workshop of ICRA 2024, Japan.<|reference_end|>
|
arxiv
|
@article{li2024get,
title={Get It For Free: Radar Segmentation without Expert Labels and Its
Application in Odometry and Localization},
author={Siru Li and Ziyang Hong and Yushuai Chen and Liang Hu and Jiahu Qin},
journal={arXiv preprint arXiv:2409.18434},
year={2024},
archivePrefix={arXiv},
eprint={2409.18434},
primaryClass={cs.RO}
}
|
li2024get
|
arxiv-662630
|
2409.18435
|
Multi-agent Reinforcement Learning for Dynamic Dispatching in Material Handling Systems
|
<|reference_start|>Multi-agent Reinforcement Learning for Dynamic Dispatching in Material Handling Systems: This paper proposes a multi-agent reinforcement learning (MARL) approach to learn dynamic dispatching strategies, which is crucial for optimizing throughput in material handling systems across diverse industries. To benchmark our method, we developed a material handling environment that reflects the complexities of an actual system, such as various activities at different locations, physical constraints, and inherent uncertainties. To enhance exploration during learning, we propose a method to integrate domain knowledge in the form of existing dynamic dispatching heuristics. Our experimental results show that our method can outperform heuristics by up to 7.4 percent in terms of median throughput. Additionally, we analyze the effect of different architectures on MARL performance when training multiple agents with different functions. We also demonstrate that the MARL agents' performance can be further improved by using the first iteration of MARL agents as heuristics to train a second iteration of MARL agents. This work demonstrates the potential of applying MARL to learn effective dynamic dispatching strategies that may be deployed in real-world systems to improve business outcomes.<|reference_end|>
|
arxiv
|
@article{lee2024multi-agent,
title={Multi-agent Reinforcement Learning for Dynamic Dispatching in Material
Handling Systems},
author={Xian Yeow Lee and Haiyan Wang and Daisuke Katsumata and Takaharu
Matsui and Chetan Gupta},
journal={arXiv preprint arXiv:2409.18435},
year={2024},
archivePrefix={arXiv},
eprint={2409.18435},
primaryClass={cs.LG cs.AI cs.MA}
}
|
lee2024multi-agent
|
arxiv-662631
|
2409.18438
|
Physics Augmented Tuple Transformer for Autism Severity Level Detection
|
<|reference_start|>Physics Augmented Tuple Transformer for Autism Severity Level Detection: Early diagnosis of Autism Spectrum Disorder (ASD) is an effective and favorable step towards enhancing the health and well-being of children with ASD. Manual ASD diagnosis testing is labor-intensive, complex, and prone to human error due to several factors contaminating the results. This paper proposes a novel framework that exploits the laws of physics for ASD severity recognition. The proposed physics-informed neural network architecture encodes the behaviour of the subject extracted by observing a part of the skeleton-based motion trajectory in a higher dimensional latent space. Two decoders, namely a physics-based and a non-physics-based decoder, use this latent embedding and predict the future motion patterns. The physics branch leverages the laws of physics that apply to a skeleton sequence in the prediction process, while the non-physics-based branch is optimised to minimise the difference between the predicted and actual motion of the subject. A classifier also leverages the same latent space embeddings to recognise the ASD severity. This dual generative objective explicitly forces the network to compare the actual behaviour of the subject with the general normal behaviour of children that are governed by the laws of physics, aiding the ASD recognition task. The proposed method attains state-of-the-art performance on multiple ASD diagnosis benchmarks. To illustrate the utility of the proposed framework beyond the task of ASD diagnosis, we conduct a third experiment using a publicly available benchmark for the task of fall prediction and demonstrate the superiority of our model.<|reference_end|>
|
arxiv
|
@article{ranasingha2024physics,
title={Physics Augmented Tuple Transformer for Autism Severity Level Detection},
author={Chinthaka Ranasingha, Harshala Gammulle, Tharindu Fernando, Sridha
Sridharan, Clinton Fookes},
journal={arXiv preprint arXiv:2409.18438},
year={2024},
archivePrefix={arXiv},
eprint={2409.18438},
primaryClass={cs.AI}
}
|
ranasingha2024physics
|
arxiv-662632
|
2409.18439
|
State-free Reinforcement Learning
|
<|reference_start|>State-free Reinforcement Learning: In this work, we study the \textit{state-free RL} problem, where the algorithm has no information about the states before interacting with the environment. Specifically, denoting the reachable state set by $S^\Pi := \{ s \mid \max_{\pi\in \Pi} q^{P, \pi}(s) > 0 \}$, we design an algorithm which requires no information on the state space $S$ while having a regret that is completely independent of $S$ and depends only on $S^\Pi$. We view this as a concrete first step towards \textit{parameter-free RL}, with the goal of designing RL algorithms that require no hyper-parameter tuning.<|reference_end|>
|
arxiv
|
@article{chen2024state-free,
title={State-free Reinforcement Learning},
author={Mingyu Chen, Aldo Pacchiano, Xuezhou Zhang},
journal={arXiv preprint arXiv:2409.18439},
year={2024},
archivePrefix={arXiv},
eprint={2409.18439},
primaryClass={cs.LG cs.AI}
}
|
chen2024state-free
|
arxiv-662633
|
2409.18442
|
Gradient-free Decoder Inversion in Latent Diffusion Models
|
<|reference_start|>Gradient-free Decoder Inversion in Latent Diffusion Models: In latent diffusion models (LDMs), the denoising diffusion process takes place efficiently in a latent space whose dimension is lower than that of the pixel space. A decoder is typically used to transform representations from the latent space to the pixel space. While the decoder is assumed to have the encoder as an accurate inverse, an exact encoder-decoder pair rarely exists in practice, even though applications often require precise inversion of the decoder. Prior works on decoder inversion in LDMs employed gradient descent inspired by inversions of generative adversarial networks. However, gradient-based methods require larger GPU memory and longer computation time for larger latent spaces. For example, recent video LDMs can generate more than 16 frames, but GPUs with 24 GB memory can only perform gradient-based decoder inversion for 4 frames. Here, we propose an efficient gradient-free decoder inversion for LDMs, which can be applied to diverse latent models. The theoretical convergence property of our proposed inversion has been investigated not only for the forward step method, but also for the inertial Krasnoselskii-Mann (KM) iterations, under a mild cocoercivity assumption that is satisfied by recent LDMs. Our proposed gradient-free method with the Adam optimizer and learning rate scheduling significantly reduced computation time and memory usage over prior gradient-based methods and enabled efficient computation in applications such as noise-space watermarking while achieving comparable error levels.<|reference_end|>
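As a concrete illustration of the iteration named above, here is a minimal sketch of gradient-free inversion via inertial Krasnoselskii-Mann (KM) steps on a toy linear decoder; the operators `D`, `E`, the fixed-point map `T`, and all step sizes are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))            # toy linear "decoder" matrix
D = lambda z: A @ z                        # decoder: latent -> pixel space
E = lambda x: A.T @ x / 32.0               # crude approximate "encoder"
T = lambda z, x: z + E(x) - E(D(z))        # forward-step fixed-point map

x = D(rng.standard_normal(4))              # target image with an exact preimage
z_prev = z = np.zeros(4)
alpha, beta = 0.5, 0.3                     # KM averaging and inertia weights
for _ in range(300):
    u = z + beta * (z - z_prev)            # inertial extrapolation
    z_prev, z = z, (1 - alpha) * u + alpha * T(u, x)  # KM averaging step
print(np.linalg.norm(D(z) - x))            # reconstruction error shrinks
```

No gradient of `D` is ever taken; only forward evaluations of `D` and `E` are used, which is what keeps the memory footprint free of backpropagation graphs.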
|
arxiv
|
@article{hong2024gradient-free,
title={Gradient-free Decoder Inversion in Latent Diffusion Models},
author={Seongmin Hong, Suh Yoon Jeon, Kyeonghyun Lee, Ernest K. Ryu, Se Young
Chun},
journal={arXiv preprint arXiv:2409.18442},
year={2024},
archivePrefix={arXiv},
eprint={2409.18442},
primaryClass={cs.LG cs.CV}
}
|
hong2024gradient-free
|
arxiv-662634
|
2409.18444
|
Cost-Aware Dynamic Cloud Workflow Scheduling using Self-Attention and Evolutionary Reinforcement Learning
|
<|reference_start|>Cost-Aware Dynamic Cloud Workflow Scheduling using Self-Attention and Evolutionary Reinforcement Learning: The Cost-aware Dynamic Multi-Workflow Scheduling (CDMWS) problem in the cloud is a kind of cloud workflow management problem, which aims to assign virtual machine (VM) instances to execute tasks in workflows so as to minimize the total costs, including both the penalties for violating Service Level Agreements (SLAs) and the VM rental fees. Powered by deep neural networks, Reinforcement Learning (RL) methods can construct effective scheduling policies for solving CDMWS problems. Traditional policy networks in RL often use basic feedforward architectures to separately determine the suitability of assigning each VM instance, without considering all VMs simultaneously to learn their global information. This paper proposes a novel self-attention policy network for cloud workflow scheduling (SPN-CWS) that captures global information from all VMs. We also develop an Evolution Strategy-based RL (ERL) system to train SPN-CWS reliably and effectively. The trained SPN-CWS can effectively process all candidate VM instances simultaneously to identify the most suitable VM instance to execute every workflow task. Comprehensive experiments show that our method can noticeably outperform several state-of-the-art algorithms on multiple benchmark CDMWS problems.<|reference_end|>
|
arxiv
|
@article{shen2024cost-aware,
title={Cost-Aware Dynamic Cloud Workflow Scheduling using Self-Attention and
Evolutionary Reinforcement Learning},
author={Ya Shen, Gang Chen, Hui Ma, and Mengjie Zhang},
journal={arXiv preprint arXiv:2409.18444},
year={2024},
archivePrefix={arXiv},
eprint={2409.18444},
primaryClass={cs.AI}
}
|
shen2024cost-aware
|
arxiv-662635
|
2409.18446
|
Exploring Language Model Generalization in Low-Resource Extractive QA
|
<|reference_start|>Exploring Language Model Generalization in Low-Resource Extractive QA: In this paper, we investigate Extractive Question Answering (EQA) with Large Language Models (LLMs) under domain drift, i.e., can LLMs generalize well to closed domains that require specific knowledge such as medicine and law in a zero-shot fashion without additional in-domain training? To this end, we devise a series of experiments to empirically explain the performance gap. Our findings suggest that: a) LLMs struggle with the dataset demands of closed domains, such as retrieving long answer spans; b) Certain LLMs, despite showing strong overall performance, display weaknesses in meeting basic requirements such as discriminating between domain-specific senses of words, which we link to pre-processing decisions; c) Scaling model parameters is not always effective for cross-domain generalization; and d) Closed-domain datasets are quantitatively much different from open-domain EQA datasets and current LLMs struggle to deal with them. Our findings point out important directions for improving existing LLMs.<|reference_end|>
|
arxiv
|
@article{sengupta2024exploring,
title={Exploring Language Model Generalization in Low-Resource Extractive QA},
author={Saptarshi Sengupta, Wenpeng Yin, Preslav Nakov, Shreya Ghosh, Suhang
Wang},
journal={arXiv preprint arXiv:2409.18446},
year={2024},
archivePrefix={arXiv},
eprint={2409.18446},
primaryClass={cs.CL}
}
|
sengupta2024exploring
|
arxiv-662636
|
2409.18448
|
Hierarchical Federated Learning with Multi-Timescale Gradient Correction
|
<|reference_start|>Hierarchical Federated Learning with Multi-Timescale Gradient Correction: While traditional federated learning (FL) typically focuses on a star topology where clients are directly connected to a central server, real-world distributed systems often exhibit hierarchical architectures. Hierarchical FL (HFL) has emerged as a promising solution to bridge this gap, leveraging aggregation points at multiple levels of the system. However, existing algorithms for HFL encounter challenges in dealing with multi-timescale model drift, i.e., model drift occurring across hierarchical levels of data heterogeneity. In this paper, we propose a multi-timescale gradient correction (MTGC) methodology to resolve this issue. Our key idea is to introduce distinct control variables to (i) correct the client gradient towards the group gradient, i.e., to reduce client model drift caused by local updates based on individual datasets, and (ii) correct the group gradient towards the global gradient, i.e., to reduce group model drift caused by FL over clients within the group. We analytically characterize the convergence behavior of MTGC under general non-convex settings, overcoming challenges associated with couplings between correction terms. We show that our convergence bound is immune to the extent of data heterogeneity, confirming the stability of the proposed algorithm against multi-level non-i.i.d. data. Through extensive experiments on various datasets and models, we validate the effectiveness of MTGC in diverse HFL settings. The code for this project is available at \href{https://github.com/wenzhifang/MTGC}{https://github.com/wenzhifang/MTGC}.<|reference_end|>
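A SCAFFOLD-style sketch of the corrected local step this describes; the variable names (`y` for the client-to-group control, `z` for the group-to-global control) and the update schedule are our own reading of the idea, not the paper's exact MTGC recursion:

```python
import numpy as np

def client_local_updates(w, grad_fn, lr, steps, y, z):
    # y pulls local updates toward the group direction; z pulls group
    # updates toward the global direction. Both controls are assumed to
    # be refreshed at their own timescales (y each round, z each group
    # aggregation) outside this function.
    for _ in range(steps):
        w = w - lr * (grad_fn(w) + y + z)  # local gradient plus corrections
    return w

# Toy usage on a quadratic client objective f(w) = 0.5 * ||w - 1||^2.
w = client_local_updates(np.zeros(3), lambda w: w - 1.0, lr=0.1, steps=50,
                         y=np.zeros(3), z=np.zeros(3))
print(w)  # with zero corrections this drifts to the local optimum [1, 1, 1]
```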
|
arxiv
|
@article{fang2024hierarchical,
title={Hierarchical Federated Learning with Multi-Timescale Gradient Correction},
author={Wenzhi Fang, Dong-Jun Han, Evan Chen, Shiqiang Wang, and Christopher
G. Brinton},
journal={arXiv preprint arXiv:2409.18448},
year={2024},
archivePrefix={arXiv},
eprint={2409.18448},
primaryClass={cs.LG}
}
|
fang2024hierarchical
|
arxiv-662637
|
2409.18449
|
Towards Personal Data Sharing Autonomy: A Task-driven Data Capsule Sharing System
|
<|reference_start|>Towards Personal Data Sharing Autonomy: A Task-driven Data Capsule Sharing System: Personal data custodian services enable data owners to share their data with data consumers in a convenient manner, anytime and anywhere. However, with data hosted in these services being beyond the control of the data owners, it raises significant concerns about privacy in personal data sharing. Many schemes have been proposed to realize fine-grained access control and privacy protection in data sharing. However, they fail to protect the rights of data owners to their data under the law, since their designs focus on the management of system administrators rather than enhancing the data owners' privacy. In this paper, we introduce a novel task-driven personal data sharing system based on the data capsule paradigm, realizing personal data sharing autonomy. It enables data owners in our system to fully control their data and share it autonomously. Specifically, we present a tamper-resistant data capsule encapsulation method, where the data capsule is the minimal unit for independent and secure personal data storage and sharing. Additionally, to realize selective sharing and informed-consent-based authorization, we propose a task-driven data sharing mechanism that is resistant to collusion and EDoS attacks. Furthermore, by updating parts of the data capsules, the permissions granted to data consumers can be immediately revoked. Finally, we conduct a security and performance analysis, proving that our scheme is correct, sound, and secure, as well as revealing more advantageous features in practicality, compared with the state-of-the-art schemes.<|reference_end|>
|
arxiv
|
@article{lyu2024towards,
title={Towards Personal Data Sharing Autonomy: A Task-driven Data Capsule
Sharing System},
author={Qiuyun Lyu, Yilong Zhou, Yizhi Ren, Zhen Wang, and Yunchuan Guo},
journal={arXiv preprint arXiv:2409.18449},
year={2024},
archivePrefix={arXiv},
eprint={2409.18449},
primaryClass={cs.CR}
}
|
lyu2024towards
|
arxiv-662638
|
2409.18452
|
Exploiting Physical Human-Robot Interaction to Provide a Unique Rolling Experience with a Riding Ballbot
|
<|reference_start|>Exploiting Physical Human-Robot Interaction to Provide a Unique Rolling Experience with a Riding Ballbot: This study introduces the development of hands-free control schemes for a riding ballbot, designed to allow riders including manual wheelchair users to control its movement through torso leaning and twisting. The hardware platform, Personal Unique Rolling Experience (PURE), utilizes a ballbot drivetrain, a dynamically stable mobile robot that uses a ball as its wheel to provide omnidirectional maneuverability. To accommodate users with varying torso motion functions, the hands-free control scheme should be adjustable based on the rider's torso function and personal preferences. Therefore, concepts of (a) impedance control and (b) admittance control were integrated into the control scheme. A duo-agent optimization framework was utilized to assess the efficiency of this rider-ballbot system for a safety-critical task: braking from 1.4 m/s. The candidate control schemes were further implemented in the physical robot hardware and validated with two experienced users, demonstrating the efficiency and robustness of the hands-free admittance control scheme (HACS). This interface, which utilized physical human-robot interaction (pHRI) as the input, resulted in lower braking effort and shorter braking distance and time. Subsequently, 12 novice participants (six able-bodied users and six manual wheelchair users) with different levels of torso motion capability were recruited to benchmark the braking performance with HACS. The indoor navigation capability of PURE was further demonstrated with these participants in courses simulating narrow hallways, tight turns, and navigation through static and dynamic obstacles. By exploiting pHRI, the proposed admittance-style control scheme provided effective control of the ballbot via torso motions. This interface enables PURE to provide a personal unique rolling experience to manual wheelchair users for safe and agile indoor navigation.<|reference_end|>
|
arxiv
|
@article{xiao2024exploiting,
title={Exploiting Physical Human-Robot Interaction to Provide a Unique Rolling
Experience with a Riding Ballbot},
author={Chenzhang Xiao, Seung Yun Song, Yu Chen, Mahshid Mansouri, João
Ramos, Adam W. Bleakney, William R. Norris, and Elizabeth T. Hsiao-Wecksler},
journal={arXiv preprint arXiv:2409.18452},
year={2024},
archivePrefix={arXiv},
eprint={2409.18452},
primaryClass={cs.RO}
}
|
xiao2024exploiting
|
arxiv-662639
|
2409.18454
|
Leveraging Long-Context Large Language Models for Multi-Document Understanding and Summarization in Enterprise Applications
|
<|reference_start|>Leveraging Long-Context Large Language Models for Multi-Document Understanding and Summarization in Enterprise Applications: The rapid increase in unstructured data across various fields has made multi-document comprehension and summarization a critical task. Traditional approaches often fail to capture relevant context, maintain logical consistency, and extract essential information from lengthy documents. This paper explores the use of Long-context Large Language Models (LLMs) for multi-document summarization, demonstrating their exceptional capacity to grasp extensive connections, provide cohesive summaries, adapt to various industry domains, and integrate with enterprise applications and systems. The paper discusses the workflow of multi-document summarization for effectively deploying long-context LLMs, supported by case studies in legal applications, enterprise functions such as HR, finance, and sourcing, as well as in the medical and news domains. These case studies show notable enhancements in both efficiency and accuracy. Technical obstacles, such as dataset diversity, model scalability, and ethical considerations like bias mitigation and factual accuracy, are carefully analyzed. Prospective research avenues are suggested to augment the functionalities and applications of long-context LLMs, establishing them as pivotal tools for transforming information processing across diverse sectors and enterprise applications.<|reference_end|>
|
arxiv
|
@article{godbole2024leveraging,
title={Leveraging Long-Context Large Language Models for Multi-Document
Understanding and Summarization in Enterprise Applications},
author={Aditi Godbole, Jabin Geevarghese George, Smita Shandilya},
journal={arXiv preprint arXiv:2409.18454},
year={2024},
archivePrefix={arXiv},
eprint={2409.18454},
primaryClass={cs.CL cs.AI}
}
|
godbole2024leveraging
|
arxiv-662640
|
2409.18455
|
Review of Digital Asset Development with Graph Neural Network Unlearning
|
<|reference_start|>Review of Digital Asset Development with Graph Neural Network Unlearning: In the rapidly evolving landscape of digital assets, the imperative for robust data privacy and compliance with regulatory frameworks has intensified. This paper investigates the critical role of Graph Neural Networks (GNNs) in the management of digital assets and introduces innovative unlearning techniques specifically tailored to GNN architectures. We categorize unlearning strategies into two primary classes: data-driven approximation, which manipulates the graph structure to isolate and remove the influence of specific nodes, and model-driven approximation, which modifies the internal parameters and architecture of the GNN itself. By examining recent advancements in these unlearning methodologies, we highlight their applicability in various use cases, including fraud detection, risk assessment, token relationship prediction, and decentralized governance. We discuss the challenges inherent in balancing model performance with the requirements for data unlearning, particularly in the context of real-time financial applications. Furthermore, we propose a hybrid approach that combines the strengths of both unlearning strategies to enhance the efficiency and effectiveness of GNNs in digital asset ecosystems. Ultimately, this paper aims to provide a comprehensive framework for understanding and implementing GNN unlearning techniques, paving the way for secure and compliant deployment of machine learning in the digital asset domain.<|reference_end|>
|
arxiv
|
@article{lisbon2024review,
title={Review of Digital Asset Development with Graph Neural Network Unlearning},
author={Zara Lisbon},
journal={arXiv preprint arXiv:2409.18455},
year={2024},
archivePrefix={arXiv},
eprint={2409.18455},
primaryClass={cs.LG cs.AI}
}
|
lisbon2024review
|
arxiv-662641
|
2409.18457
|
DynaWeightPnP: Toward global real-time 3D-2D solver in PnP without correspondences
|
<|reference_start|>DynaWeightPnP: Toward global real-time 3D-2D solver in PnP without correspondences: This paper addresses a special Perspective-n-Point (PnP) problem: estimating the optimal pose to align 3D and 2D shapes in real-time without correspondences, termed correspondence-free PnP. While several studies have focused on 3D and 2D shape registration, achieving both real-time and accurate performance remains challenging. This study specifically targets 3D-2D geometric shape registration tasks, applying a recently developed Reproducing Kernel Hilbert Space (RKHS) formulation to address the "big-to-small" issue. An iterative reweighted least squares method is employed to solve the RKHS-based formulation efficiently. Moreover, our work identifies a unique and interesting observability issue in correspondence-free PnP: the numerical ambiguity between rotation and translation. To address this, we propose DynaWeightPnP, introducing a dynamic weighting sub-problem and an alternative searching algorithm designed to enhance pose estimation and alignment accuracy. Experiments were conducted on a typical case, that is, a 3D-2D vascular centerline registration task within Endovascular Image-Guided Interventions (EIGIs). Results demonstrated that the proposed algorithm achieves registration processing rates of 60 Hz (without post-refinement) and 31 Hz (with post-refinement) on modern single-core CPUs, with competitive accuracy comparable to existing methods. These results underscore the suitability of DynaWeightPnP for future robot navigation tasks like EIGIs.<|reference_end|>
|
arxiv
|
@article{song2024dynaweightpnp:,
title={DynaWeightPnP: Toward global real-time 3D-2D solver in PnP without
correspondences},
author={Jingwei Song and Maani Ghaffari},
journal={arXiv preprint arXiv:2409.18457},
year={2024},
archivePrefix={arXiv},
eprint={2409.18457},
primaryClass={cs.CV cs.RO}
}
|
song2024dynaweightpnp:
|
arxiv-662642
|
2409.18458
|
Enhancing Crime Scene Investigations through Virtual Reality and Deep Learning Techniques
|
<|reference_start|>Enhancing Crime Scene Investigations through Virtual Reality and Deep Learning Techniques: The analysis of a crime scene is a pivotal activity in forensic investigations. Crime Scene Investigators and forensic science practitioners rely on best practices, standard operating procedures, and critical thinking, to produce rigorous scientific reports to document the scenes of interest and meet the quality standards expected in the courts. However, crime scene examination is a complex and multifaceted task often performed in environments susceptible to deterioration, contamination, and alteration, despite the use of contact-free and non-destructive methods of analysis. In this context, the documentation of the sites, and the identification and isolation of traces of evidential value remain challenging endeavours. In this paper, we propose a photogrammetric reconstruction of the crime scene for inspection in virtual reality (VR) and focus on fully automatic object recognition with deep learning (DL) algorithms through a client-server architecture. A pre-trained Faster-RCNN model was chosen as the method that best categorizes relevant objects at the scene, selected by experts in the VR environment. These operations can considerably improve and accelerate crime scene analysis and help the forensic expert extract measurements and analyse the objects of interest in detail. Experimental results on a simulated crime scene have shown that the proposed method can be effective in finding and recognizing objects with potential evidentiary value, enabling timely analyses of crime scenes, particularly those with health and safety risks (e.g. fires, explosions, chemicals, etc.), while minimizing subjective bias and contamination of the scene.<|reference_end|>
|
arxiv
|
@article{zappalà2024enhancing,
title={Enhancing Crime Scene Investigations through Virtual Reality and Deep
Learning Techniques},
author={Antonino Zappalà (1), Luca Guarnera (1), Vincenzo Rinaldi (2),
Salvatore Livatino (3), Sebastiano Battiato (1) ((1) University of Catania,
(2) University of Dundee, (3) University of Hertfordshire)},
journal={arXiv preprint arXiv:2409.18458},
year={2024},
archivePrefix={arXiv},
eprint={2409.18458},
primaryClass={cs.CV}
}
|
zappalà2024enhancing
|
arxiv-662643
|
2409.18459
|
FoodMLLM-JP: Leveraging Multimodal Large Language Models for Japanese Recipe Generation
|
<|reference_start|>FoodMLLM-JP: Leveraging Multimodal Large Language Models for Japanese Recipe Generation: Research on food image understanding using recipe data has been a long-standing focus due to the diversity and complexity of the data. Moreover, food is inextricably linked to people's lives, making it a vital research area for practical applications such as dietary management. Recent advancements in Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities, not only in their vast knowledge but also in their ability to handle languages naturally. While English is predominantly used, they can also support multiple languages including Japanese. This suggests that MLLMs are expected to significantly improve performance in food image understanding tasks. We fine-tuned the open MLLMs LLaVA-1.5 and Phi-3 Vision on a Japanese recipe dataset and benchmarked their performance against the closed model GPT-4o. We then evaluated the content of generated recipes, including ingredients and cooking procedures, using 5,000 evaluation samples that comprehensively cover Japanese food culture. Our evaluation demonstrates that the open models trained on recipe data outperform GPT-4o, the current state-of-the-art model, in ingredient generation. Our model achieved an F1 score of 0.531, surpassing GPT-4o's F1 score of 0.481, indicating a higher level of accuracy. Furthermore, our model exhibited comparable performance to GPT-4o in generating cooking procedure text.<|reference_end|>
|
arxiv
|
@article{imajuku2024foodmllm-jp:,
title={FoodMLLM-JP: Leveraging Multimodal Large Language Models for Japanese
Recipe Generation},
author={Yuki Imajuku and Yoko Yamakata and Kiyoharu Aizawa},
journal={arXiv preprint arXiv:2409.18459},
year={2024},
archivePrefix={arXiv},
eprint={2409.18459},
primaryClass={cs.CV cs.MM}
}
|
imajuku2024foodmllm-jp:
|
arxiv-662644
|
2409.18461
|
Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration
|
<|reference_start|>Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration: Federated Learning has emerged as a promising paradigm for collaborative machine learning, while preserving user data privacy. Despite its potential, standard FL lacks support for diverse heterogeneous device prototypes, which vary significantly in model and dataset sizes -- from small IoT devices to large workstations. This limitation is only partially addressed by existing knowledge distillation techniques, which often fail to transfer knowledge effectively across a broad spectrum of device prototypes with varied capabilities. This failure primarily stems from two issues: the dilution of informative logits from more capable devices by those from less capable ones, and the use of a single set of integrated logits as the distillation target across all devices, which neglects the individual learning capacities and unique contributions of each. To address these challenges, we introduce TAKFL, a novel KD-based framework that treats the knowledge transfer from each device prototype's ensemble as a separate task, independently distilling each to preserve its unique contributions and avoid dilution. TAKFL also incorporates a KD-based self-regularization technique to mitigate the issues related to the noisy and unsupervised ensemble distillation process. To integrate the separately distilled knowledge, we introduce an adaptive task arithmetic knowledge integration process, allowing each student model to customize the knowledge integration for optimal performance. Additionally, we present theoretical results demonstrating the effectiveness of task arithmetic in transferring knowledge across heterogeneous devices with varying capacities. Comprehensive evaluations of our method across both CV and NLP tasks demonstrate that TAKFL achieves SOTA results in a variety of datasets and settings, significantly outperforming existing KD-based methods. Code is released at https://github.com/MMorafah/TAKFL<|reference_end|>
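The integration step rests on task arithmetic; a toy sketch of that operation is below, where each prototype's distilled weights minus the student's weights form a "task vector" and the student adds a weighted sum of them. The adaptive choice of the coefficients is the paper's contribution; the fixed values here are placeholders:

```python
import numpy as np

def integrate(student, distilled_list, lams):
    # Task arithmetic: add each distilled "task vector" scaled by its weight.
    return student + sum(l * (d - student) for l, d in zip(lams, distilled_list))

student = np.zeros(10)
distilled = [np.ones(10), 2 * np.ones(10)]            # one per device prototype
print(integrate(student, distilled, [0.5, 0.25])[0])  # 0.5*1 + 0.25*2 = 1.0
```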
|
arxiv
|
@article{morafah2024towards,
title={Towards Diverse Device Heterogeneous Federated Learning via Task
Arithmetic Knowledge Integration},
author={Mahdi Morafah, Vyacheslav Kungurtsev, Hojin Chang, Chen Chen, Bill Lin},
journal={38th Conference on Neural Information Processing Systems (NeurIPS
2024)},
year={2024},
archivePrefix={arXiv},
eprint={2409.18461},
primaryClass={cs.LG cs.AI cs.CV cs.DC}
}
|
morafah2024towards
|
arxiv-662645
|
2409.18462
|
Latent Representation Learning for Multimodal Brain Activity Translation
|
<|reference_start|>Latent Representation Learning for Multimodal Brain Activity Translation: Neuroscience employs diverse neuroimaging techniques, each offering distinct insights into brain activity, from electrophysiological recordings such as EEG, which have high temporal resolution, to hemodynamic modalities such as fMRI, which have increased spatial precision. However, integrating these heterogeneous data sources remains a challenge, which limits a comprehensive understanding of brain function. We present the Spatiotemporal Alignment of Multimodal Brain Activity (SAMBA) framework, which bridges the spatial and temporal resolution gaps across modalities by learning a unified latent space free of modality-specific biases. SAMBA introduces a novel attention-based wavelet decomposition for spectral filtering of electrophysiological recordings, graph attention networks to model functional connectivity between functional brain units, and recurrent layers to capture temporal autocorrelations in brain signals. We show that the training of SAMBA, aside from achieving translation, also learns a rich representation of brain information processing. We showcase this by classifying the external stimuli driving brain activity from the representations learned in SAMBA's hidden layers, paving the way for broad downstream applications in neuroscience research and clinical contexts.<|reference_end|>
|
arxiv
|
@article{afrasiyabi2024latent,
title={Latent Representation Learning for Multimodal Brain Activity Translation},
author={Arman Afrasiyabi, Dhananjay Bhaskar, Erica L. Busch, Laurent Caplette,
Rahul Singh, Guillaume Lajoie, Nicholas B. Turk-Browne, Smita Krishnaswamy},
journal={arXiv preprint arXiv:2409.18462},
year={2024},
archivePrefix={arXiv},
eprint={2409.18462},
primaryClass={cs.LG q-bio.NC}
}
|
afrasiyabi2024latent
|
arxiv-662646
|
2409.18465
|
RIS-Enabled Cellular Systems Operated by Different Service Providers
|
<|reference_start|>RIS-Enabled Cellular Systems Operated by Different Service Providers: In realistic cellular communication systems, multiple service providers will operate within different frequency ranges. Each serving cell, which is managed by a distinct service provider, is designed individually due to the orthogonal frequencies. However, when a reconfigurable intelligent surface (RIS) is deployed for a certain cell, the RIS still introduces reflective channels for the overall system, since the RIS reflects signals across all frequency ranges. This may cause severe undesired performance degradation for the other cells unless the reflection coefficients are properly designed. To tackle this issue, by utilizing the Riemannian manifold optimization method, an RIS reflection coefficient design is proposed in this paper to maximize the performance improvement of the cell that deploys the RIS while simultaneously minimizing the undesired performance degradation for the other cells. Numerical results demonstrate that the proposed design can effectively balance the two objectives in practical scenarios.<|reference_end|>
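For reference, a sketch of one Riemannian gradient step on the complex-circle manifold of unit-modulus reflection coefficients, the standard machinery behind the manifold optimization mentioned above; the objective, and hence `egrad`, is the paper's design and is left abstract here:

```python
import numpy as np

def riemannian_step(theta, egrad, step):
    # Project the Euclidean gradient onto the tangent space of the
    # complex-circle manifold {theta : |theta_i| = 1}, take a step,
    # then retract back to unit modulus.
    rgrad = egrad - np.real(egrad * np.conj(theta)) * theta
    theta = theta - step * rgrad
    return theta / np.abs(theta)

theta = np.exp(1j * np.random.default_rng(1).uniform(0, 2 * np.pi, 16))
theta = riemannian_step(theta, egrad=np.ones(16, dtype=complex), step=0.1)
print(np.allclose(np.abs(theta), 1.0))  # stays on the manifold -> True
```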
|
arxiv
|
@article{lee2024ris-enabled,
title={RIS-Enabled Cellular Systems Operated by Different Service Providers},
author={Hyeongtaek Lee and Junil Choi},
journal={arXiv preprint arXiv:2409.18465},
year={2024},
archivePrefix={arXiv},
eprint={2409.18465},
primaryClass={cs.IT eess.SP math.IT}
}
|
lee2024ris-enabled
|
arxiv-662647
|
2409.18467
|
A TextGCN-Based Decoding Approach for Improving Remote Sensing Image Captioning
|
<|reference_start|>A TextGCN-Based Decoding Approach for Improving Remote Sensing Image Captioning: Remote sensing images are highly valued for their ability to address complex real-world issues such as risk management, security, and meteorology. However, manually captioning these images is challenging and requires specialized knowledge across various domains. This letter presents an approach for automatically describing (captioning) remote sensing images. We propose a novel encoder-decoder setup that deploys a Text Graph Convolutional Network (TextGCN) and multi-layer LSTMs. The embeddings generated by TextGCN enhance the decoder's understanding by capturing the semantic relationships among words at both the sentence and corpus levels. Furthermore, we advance our approach with a comparison-based beam search method to ensure fairness in the search strategy for generating the final caption. We present an extensive evaluation of our approach against various other state-of-the-art encoder-decoder frameworks. We evaluated our method across three datasets using seven metrics: BLEU-1 to BLEU-4, METEOR, ROUGE-L, and CIDEr. The results demonstrate that our approach significantly outperforms other state-of-the-art encoder-decoder methods.<|reference_end|>
|
arxiv
|
@article{das2024a,
title={A TextGCN-Based Decoding Approach for Improving Remote Sensing Image
Captioning},
author={Swadhin Das and Raksha Sharma},
journal={arXiv preprint arXiv:2409.18467},
year={2024},
archivePrefix={arXiv},
eprint={2409.18467},
primaryClass={cs.LG}
}
|
das2024a
|
arxiv-662648
|
2409.18468
|
SmartReco: Detecting Read-Only Reentrancy via Fine-Grained Cross-DApp Analysis
|
<|reference_start|>SmartReco: Detecting Read-Only Reentrancy via Fine-Grained Cross-DApp Analysis: Despite the increasing popularity of Decentralized Applications (DApps), they are suffering from various vulnerabilities that can be exploited by adversaries for profits. Among such vulnerabilities, Read-Only Reentrancy (called ROR in this paper) is an emerging type of vulnerability that arises from the complex interactions between DApps. In the past three years, attack incidents of ROR have already caused around 30M USD losses to the DApp ecosystem. Existing techniques for vulnerability detection in smart contracts can hardly detect Read-Only Reentrancy attacks, due to the lack of tracking and analyzing the complex interactions between multiple DApps. In this paper, we propose SmartReco, a new framework for detecting Read-Only Reentrancy vulnerability in DApps through a novel combination of static and dynamic analysis (i.e., fuzzing) over smart contracts. The key design behind SmartReco is threefold: (1) SmartReco identifies the boundary between different DApps from the heavily-coupled cross-contract interactions. (2) SmartReco performs fine-grained static analysis to locate points of interest (i.e., entry functions) that may lead to ROR. (3) SmartReco utilizes the on-chain transaction data and performs multi-function fuzzing (i.e., the entry function and victim function) across different DApps to verify the existence of ROR. Our evaluation of a manually labeled dataset with 45 RORs shows that SmartReco achieves a precision of 88.63% and a recall of 86.36%. In addition, SmartReco successfully detects 43 new RORs from 123 popular DApps. The total assets affected by such RORs reach around 520,000 USD.<|reference_end|>
|
arxiv
|
@article{zhang2024smartreco:,
title={SmartReco: Detecting Read-Only Reentrancy via Fine-Grained Cross-DApp
Analysis},
author={Jingwen Zhang, Zibin Zheng, Yuhong Nan, Mingxi Ye, Kaiwen Ning, Yu
Zhang and Weizhe Zhang},
journal={arXiv preprint arXiv:2409.18468},
year={2024},
archivePrefix={arXiv},
eprint={2409.18468},
primaryClass={cs.SE}
}
|
zhang2024smartreco:
|
arxiv-662649
|
2409.18469
|
Deciding Reachability in a Directed Graph given its Path Decomposition
|
<|reference_start|>Deciding Reachability in a Directed Graph given its Path Decomposition: Deciding if there exists a path from one vertex to another in a graph is known as the s-t connectivity or the reachability problem. Reachability can be solved using graph traversal algorithms like Depth First Search (DFS) or Breadth First Search (BFS) in linear time, but these algorithms also take linear space. On the other hand, Savitch's algorithm solves the same problem using O(log^2 n) space but takes quasipolynomial time. A path decomposition P of a directed graph G is a collection of simple directed paths such that every edge of G lies on exactly one path in P. A minimal path decomposition of G is a path decomposition of G having the smallest number of paths possible, and the number of paths in a minimal path decomposition of G is called the path number of G. We show that if a path decomposition P of a directed graph G consisting of k directed paths is provided, then reachability in G can be decided simultaneously in O(k log n) space and polynomial time. In fact, our result holds even when a walk decomposition is provided (instead of a path decomposition), where the graph is decomposed into k directed walks (instead of paths) and the walks are not necessarily edge-disjoint. We further show that a minimal path decomposition can be computed in logspace for directed acyclic graphs. This leads to the conclusion that reachability in directed acyclic graphs having bounded path number is logspace computable.<|reference_end|>
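One plausible way to exploit a path decomposition along these lines (our reading, not necessarily the paper's exact procedure): because every edge lies on some path, the set reachable from s restricted to each directed path is a union of suffixes, so k suffix indices summarize the whole reachable set. The sketch below iterates those indices to a fixpoint; the O(k log n) space bound would additionally require recomputing memberships instead of materializing sets as done here:

```python
def reachable(paths, s, t):
    # paths: list of directed paths (lists of vertices) covering every edge.
    # front[i] = smallest index on path i known to be reachable from s.
    INF = float("inf")
    front = [min((i for i, v in enumerate(p) if v == s), default=INF)
             for p in paths]
    changed = True
    while changed:
        changed = False
        reached = {v for f, p in zip(front, paths) if f != INF for v in p[f:]}
        reached.add(s)
        for j, q in enumerate(paths):
            for idx, v in enumerate(q):
                if v in reached and idx < front[j]:
                    front[j], changed = idx, True
    return t == s or any(f != INF and t in p[f:]
                         for f, p in zip(front, paths))

# Edges (a->b, b->c) and (d->b): c is reachable from d via b.
print(reachable([["a", "b", "c"], ["d", "b"]], "d", "c"))  # True
```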
|
arxiv
|
@article{bhadra2024deciding,
title={Deciding Reachability in a Directed Graph given its Path Decomposition},
author={Ronak Bhadra, Raghunath Tewari},
journal={arXiv preprint arXiv:2409.18469},
year={2024},
archivePrefix={arXiv},
eprint={2409.18469},
primaryClass={cs.CC}
}
|
bhadra2024deciding
|
arxiv-662650
|
2409.18470
|
Fairness without Sensitive Attributes via Knowledge Sharing
|
<|reference_start|>Fairness without Sensitive Attributes via Knowledge Sharing: While model fairness improvement has been explored previously, existing methods invariably rely on adjusting explicit sensitive attribute values in order to improve model fairness in downstream tasks. However, we observe a trend in which sensitive demographic information becomes inaccessible as public concerns around data privacy grow. In this paper, we propose a confidence-based hierarchical classifier structure called "Reckoner" for reliable fair model learning under the assumption of missing sensitive attributes. We first present results showing that if the dataset contains biased labels or other hidden biases, classifiers significantly increase the bias gap across different demographic groups in the subset with higher prediction confidence. Inspired by these findings, we devised a dual-model system in which a version of the model initialised with a high-confidence data subset learns from a version of the model initialised with a low-confidence data subset, enabling it to avoid biased predictions. Our experimental results show that Reckoner consistently outperforms state-of-the-art baselines on the COMPAS and New Adult datasets, considering both accuracy and fairness metrics.<|reference_end|>
|
arxiv
|
@article{ni2024fairness,
title={Fairness without Sensitive Attributes via Knowledge Sharing},
author={Hongliang Ni, Lei Han, Tong Chen, Shazia Sadiq, Gianluca Demartini},
journal={arXiv preprint arXiv:2409.18470},
year={2024},
archivePrefix={arXiv},
eprint={2409.18470},
primaryClass={cs.LG}
}
|
ni2024fairness
|
arxiv-662651
|
2409.18471
|
Unveiling Hidden Vulnerabilities in Quantum Systems by Expanding Attack Vectors through Heisenberg's Uncertainty Principle
|
<|reference_start|>Unveiling Hidden Vulnerabilities in Quantum Systems by Expanding Attack Vectors through Heisenberg's Uncertainty Principle: This study uncovers novel vulnerabilities within Quantum Key Distribution (QKD) protocols that extend beyond traditional implementation flaws, such as loopholes. These newly identified vulnerabilities arise from the complex interaction between Bell Inequalities (BIs) and Hidden Variable Theories (HVTs), further exacerbated by the Heisenberg Uncertainty Principle (HUP). Through a combination of theoretical analysis, simulations, and quantum experiments, we reveal critical security weaknesses that challenge the core assumptions of today's quantum cryptography. While these vulnerabilities differ from known loopholes, when considered alongside them and traditional cyberattacks, they present a significant threat to the robustness of QKD and quantum integrity systems. These results provide a new perspective to rethink current quantum security frameworks to ensure the robustness of future quantum cryptographic and quantum integrity protocols.<|reference_end|>
|
arxiv
|
@article{rosas-bustos2024unveiling,
title={Unveiling Hidden Vulnerabilities in Quantum Systems by Expanding Attack
Vectors through Heisenberg's Uncertainty Principle},
author={Jose R. Rosas-Bustos, Jesse Van Griensven Thé, Roydon Andrew Fraser},
journal={arXiv preprint arXiv:2409.18471},
year={2024},
doi={10.21203/rs.3.rs-4979824/v1},
archivePrefix={arXiv},
eprint={2409.18471},
primaryClass={quant-ph cs.ET}
}
|
rosas-bustos2024unveiling
|
arxiv-662652
|
2409.18472
|
URIEL+: Enhancing Linguistic Inclusion and Usability in a Typological and Multilingual Knowledge Base
|
<|reference_start|>URIEL+: Enhancing Linguistic Inclusion and Usability in a Typological and Multilingual Knowledge Base: URIEL is a knowledge base offering geographical, phylogenetic, and typological vector representations for 7970 languages. It includes distance measures between these vectors for 4005 languages, which are accessible via the lang2vec tool. Despite being frequently cited, URIEL is limited in terms of linguistic inclusion and overall usability. To tackle these challenges, we introduce URIEL+, an enhanced version of URIEL and lang2vec addressing these limitations. In addition to expanding typological feature coverage for 2898 languages, URIEL+ improves user experience with robust, customizable distance calculations to better suit the needs of the users. These upgrades also offer competitive performance on downstream tasks and provide distances that better align with linguistic distance studies.<|reference_end|>
|
arxiv
|
@article{khan2024uriel+:,
title={URIEL+: Enhancing Linguistic Inclusion and Usability in a Typological
and Multilingual Knowledge Base},
author={Aditya Khan, Mason Shipton, David Anugraha, Kaiyao Duan, Phuong H.
Hoang, Eric Khiu, A. Seza Doğruöz, En-Shiun Annie Lee},
journal={arXiv preprint arXiv:2409.18472},
year={2024},
archivePrefix={arXiv},
eprint={2409.18472},
primaryClass={cs.CL cs.LG}
}
|
khan2024uriel+:
|
arxiv-662653
|
2409.18473
|
Efficient Top-k s-Biplexes Search over Large Bipartite Graphs
|
<|reference_start|>Efficient Top-k s-Biplexes Search over Large Bipartite Graphs: In a bipartite graph, a subgraph is an $s$-biplex if each vertex of the subgraph is adjacent to all but at most $s$ vertices on the opposite set. The enumeration of $s$-biplexes from a given graph is a fundamental problem in bipartite graph analysis. However, in real-world data engineering, finding all $s$-biplexes is neither necessary nor computationally affordable. A more realistic problem is to identify some of the largest $s$-biplexes from the large input graph. We formulate the problem as the {\em top-$k$ $s$-biplex search (TBS) problem}, which aims to find the top-$k$ maximal $s$-biplexes with the most vertices, where $k$ is an input parameter. We prove that the TBS problem is NP-hard for any fixed $k\ge 1$. Then, we propose a branching algorithm, named MVBP, that improves upon the trivial $2^n$ enumeration algorithm. Furthermore, from a practical perspective, we investigate three techniques to improve the performance of MVBP: 2-hop decomposition, single-side bounds, and progressive search. Complexity analysis shows that the improved algorithm, named FastMVBP, has a running time $O^*(\gamma_s^{d_2})$, where $\gamma_s<2$, and $d_2$ is a parameter much smaller than the number of vertices in sparse real-world graphs, e.g., $d_2$ is only $67$ in the AmazonRatings dataset, which has more than $3$ million vertices. Finally, we conducted extensive experiments on eight real-world and synthetic datasets to demonstrate the empirical efficiency of the proposed algorithms. In particular, FastMVBP outperforms the benchmark algorithms by up to three orders of magnitude in several instances.<|reference_end|>
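A direct check of the definition above, useful for validating candidate subgraphs; `adj`, the set names, and the toy graph are illustrative:

```python
def is_s_biplex(adj, L, R, s):
    # (L, R) induces an s-biplex iff every vertex misses at most s
    # vertices on the opposite side. adj maps vertex -> neighbour set.
    return (all(len(R - adj[u]) <= s for u in L)
            and all(len(L - adj[v]) <= s for v in R))

adj = {"a": {"x", "y"}, "b": {"y"}, "x": {"a"}, "y": {"a", "b"}}
print(is_s_biplex(adj, {"a", "b"}, {"x", "y"}, s=1))  # True: each misses <= 1
```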
|
arxiv
|
@article{xu2024efficient,
title={Efficient Top-k s-Biplexes Search over Large Bipartite Graphs},
author={Zhenxiang Xu, Yiping Liu, Yi Zhou, Yimin Hao, Zhengren Wang},
journal={arXiv preprint arXiv:2409.18473},
year={2024},
archivePrefix={arXiv},
eprint={2409.18473},
primaryClass={cs.IR cs.DS}
}
|
xu2024efficient
|
arxiv-662654
|
2409.18475
|
Data Analysis in the Era of Generative AI
|
<|reference_start|>Data Analysis in the Era of Generative AI: This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges. We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow by translating high-level user intentions into executable code, charts, and insights. We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps. Finally, we discuss the research challenges that impede the development of these AI-based systems such as enhancing model capabilities, evaluating and benchmarking, and understanding end-user needs.<|reference_end|>
|
arxiv
|
@article{inala2024data,
title={Data Analysis in the Era of Generative AI},
author={Jeevana Priya Inala, Chenglong Wang, Steven Drucker, Gonzalo Ramos,
Victor Dibia, Nathalie Riche, Dave Brown, Dan Marshall, Jianfeng Gao},
journal={arXiv preprint arXiv:2409.18475},
year={2024},
archivePrefix={arXiv},
eprint={2409.18475},
primaryClass={cs.AI cs.HC}
}
|
inala2024data
|
arxiv-662655
|
2409.18476
|
Underwater Image Enhancement with Physical-based Denoising Diffusion Implicit Models
|
<|reference_start|>Underwater Image Enhancement with Physical-based Denoising Diffusion Implicit Models: Underwater vision is crucial for autonomous underwater vehicles (AUVs), and enhancing degraded underwater images in real time on a resource-constrained AUV is a key challenge due to factors like light absorption and scattering, and the substantial model complexity required to resolve such factors. Traditional image enhancement techniques lack adaptability to varying underwater conditions, while learning-based methods, particularly those using convolutional neural networks (CNNs) and generative adversarial networks (GANs), offer more robust solutions but face limitations such as inadequate enhancement, unstable training, or mode collapse. Denoising diffusion probabilistic models (DDPMs) have emerged as a state-of-the-art approach in image-to-image tasks but require intensive computation to achieve the desired underwater image enhancement (UIE), as in the recent UW-DDPM solution. To address these challenges, this paper introduces UW-DiffPhys, a novel physical-based and diffusion-based UIE approach. UW-DiffPhys combines lightweight physical-based UIE network components with a denoising U-Net to replace the computationally intensive distribution transformation U-Net in the existing UW-DDPM framework, reducing complexity while maintaining performance. Additionally, the Denoising Diffusion Implicit Model (DDIM) is employed to accelerate the inference process through non-Markovian sampling. Experimental results demonstrate that UW-DiffPhys achieved a substantial reduction in computational complexity and inference time compared to UW-DDPM, with competitive performance in key metrics such as PSNR, SSIM, UCIQE, and an improvement in the overall underwater image quality UIQM metric. The implementation code can be found at the following repository: https://github.com/bachzz/UW-DiffPhys<|reference_end|>
|
arxiv
|
@article{bach2024underwater,
title={Underwater Image Enhancement with Physical-based Denoising Diffusion
Implicit Models},
author={Nguyen Gia Bach, Chanh Minh Tran, Eiji Kamioka, Phan Xuan Tan},
journal={arXiv preprint arXiv:2409.18476},
year={2024},
archivePrefix={arXiv},
eprint={2409.18476},
primaryClass={cs.CV}
}
|
bach2024underwater
|
arxiv-662656
|
2409.18478
|
Temporal2Seq: A Unified Framework for Temporal Video Understanding Tasks
|
<|reference_start|>Temporal2Seq: A Unified Framework for Temporal Video Understanding Tasks: With the development of video understanding, there is a proliferation of tasks for clip-level temporal video analysis, including temporal action detection (TAD), temporal action segmentation (TAS), and generic event boundary detection (GEBD). While task-specific video understanding models have exhibited outstanding performance on each task, there remains no unified framework capable of simultaneously addressing multiple tasks, which is a promising direction for the next generation of AI. To this end, in this paper, we propose a single unified framework, coined Temporal2Seq, that formulates the output of these temporal video understanding tasks as a sequence of discrete tokens. With this unified token representation, Temporal2Seq can train a generalist model within a single architecture on different video understanding tasks. In the absence of multi-task learning (MTL) benchmarks, we compile a comprehensive co-training dataset by borrowing the datasets from the TAD, TAS, and GEBD tasks. We evaluate our Temporal2Seq generalist model on the corresponding test sets of the three tasks, demonstrating that Temporal2Seq can produce reasonable results on various tasks and achieve advantages compared with single-task training on this framework. We also investigate the generalization performance of our generalist model on new datasets from different tasks, where it yields performance superior to task-specific models.<|reference_end|>
|
arxiv
|
@article{yang2024temporal2seq:,
title={Temporal2Seq: A Unified Framework for Temporal Video Understanding Tasks},
author={Min Yang, Zichen Zhang, Limin Wang},
journal={arXiv preprint arXiv:2409.18478},
year={2024},
archivePrefix={arXiv},
eprint={2409.18478},
primaryClass={cs.CV}
}
|
yang2024temporal2seq:
|
arxiv-662657
|
2409.18479
|
CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns
|
<|reference_start|>CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns: The stable periodic patterns present in time series data serve as the foundation for conducting long-horizon forecasts. In this paper, we pioneer the exploration of explicitly modeling this periodicity to enhance the performance of models in long-term time series forecasting (LTSF) tasks. Specifically, we introduce the Residual Cycle Forecasting (RCF) technique, which utilizes learnable recurrent cycles to model the inherent periodic patterns within sequences, and then performs predictions on the residual components of the modeled cycles. Combining RCF with a Linear layer or a shallow MLP forms the simple yet powerful method proposed in this paper, called CycleNet. CycleNet achieves state-of-the-art prediction accuracy in multiple domains including electricity, weather, and energy, while offering significant efficiency advantages by reducing the required parameter count by over 90%. Furthermore, as a novel plug-and-play technique, the RCF can also significantly improve the prediction accuracy of existing models, including PatchTST and iTransformer. The source code is available at: https://github.com/ACAT-SCUT/CycleNet.<|reference_end|>
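A toy rendering of the RCF idea with a fixed (rather than learnable) cycle estimate and an ordinary least-squares stand-in for the linear layer; the function and variable names are ours, not the authors' CycleNet code:

```python
import numpy as np

def rcf_forecast(y, W, lookback, horizon):
    # Model the recurrent cycle as the mean value at each phase of period W
    # (assumes len(y) >= W and a known, stable period), then forecast the
    # residual component with a linear map and add the cycle back.
    phases = np.arange(len(y)) % W
    cycle = np.array([y[phases == p].mean() for p in range(W)])
    resid = y - cycle[phases]
    X = np.array([resid[t - lookback:t]
                  for t in range(lookback, len(resid) - horizon + 1)])
    Y = np.array([resid[t:t + horizon]
                  for t in range(lookback, len(resid) - horizon + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    future = np.arange(len(y), len(y) + horizon) % W
    return resid[-lookback:] @ coef + cycle[future]

t = np.arange(240, dtype=float)
series = np.sin(2 * np.pi * t / 24) + 0.05 * t   # daily cycle plus trend
print(rcf_forecast(series, W=24, lookback=48, horizon=12))
```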
|
arxiv
|
@article{lin2024cyclenet:,
title={CycleNet: Enhancing Time Series Forecasting through Modeling Periodic
Patterns},
author={Shengsheng Lin, Weiwei Lin, Xinyi Hu, Wentai Wu, Ruichao Mo, Haocheng
Zhong},
journal={arXiv preprint arXiv:2409.18479},
year={2024},
archivePrefix={arXiv},
eprint={2409.18479},
primaryClass={cs.LG}
}
|
lin2024cyclenet:
|
arxiv-662658
|
2409.18481
|
Deep Heterogeneous Contrastive Hyper-Graph Learning for In-the-Wild Context-Aware Human Activity Recognition
|
<|reference_start|>Deep Heterogeneous Contrastive Hyper-Graph Learning for In-the-Wild Context-Aware Human Activity Recognition: Human Activity Recognition (HAR) is a challenging, multi-label classification problem as activities may co-occur and sensor signals corresponding to the same activity may vary in different contexts (e.g., different device placements). This paper proposes a Deep Heterogeneous Contrastive Hyper-Graph Learning (DHC-HGL) framework that captures heterogeneous Context-Aware HAR (CA-HAR) hypergraph properties in a message-passing and neighborhood-aggregation fashion. Prior work only explored homogeneous or shallow-node-heterogeneous graphs. DHC-HGL handles heterogeneous CA-HAR data by innovatively 1) Constructing three different types of sub-hypergraphs that are each passed through different custom HyperGraph Convolution (HGC) layers designed to handle edge-heterogeneity and 2) Adopting a contrastive loss function to ensure node-heterogeneity. In rigorous evaluation on two CA-HAR datasets, DHC-HGL significantly outperformed state-of-the-art baselines by 5.8% to 16.7% on Matthews Correlation Coefficient (MCC) and 3.0% to 8.4% on Macro F1 scores. UMAP visualizations of learned CA-HAR node embeddings are also presented to enhance model explainability.<|reference_end|>
|
arxiv
|
@article{ge2024deep,
title={Deep Heterogeneous Contrastive Hyper-Graph Learning for In-the-Wild
Context-Aware Human Activity Recognition},
author={Wen Ge, Guanyi Mou, Emmanuel O. Agu, Kyumin Lee},
journal={IMWUT 2023},
year={2024},
doi={10.1145/3631444},
archivePrefix={arXiv},
eprint={2409.18481},
primaryClass={cs.LG}
}
|
ge2024deep
|
arxiv-662659
|
2409.18482
|
HSTFL: A Heterogeneous Federated Learning Framework for Misaligned Spatiotemporal Forecasting
|
<|reference_start|>HSTFL: A Heterogeneous Federated Learning Framework for Misaligned Spatiotemporal Forecasting: Spatiotemporal forecasting has emerged as an indispensable building block of diverse smart city applications, such as intelligent transportation and smart energy management. Recent advancements have uncovered that the performance of spatiotemporal forecasting can be significantly improved by integrating knowledge in geo-distributed time series data from different domains, e.g., enhancing real-estate appraisal with human mobility data, or joint taxi and bike demand predictions. While effective, existing approaches assume a centralized data collection and exploitation environment, overlooking the privacy and commercial interest concerns associated with data owned by different parties. In this paper, we investigate multi-party collaborative spatiotemporal forecasting without direct access to multi-source private data. However, this task is challenging due to 1) cross-domain feature heterogeneity and 2) cross-client geographical heterogeneity, where standard horizontal or vertical federated learning is inapplicable. To this end, we propose a Heterogeneous SpatioTemporal Federated Learning (HSTFL) framework to enable multiple clients to collaboratively harness geo-distributed time series data from different domains while preserving privacy. Specifically, we first devise vertical federated spatiotemporal representation learning to locally preserve spatiotemporal dependencies among individual participants and generate effective representations for heterogeneous data. Then we propose a cross-client virtual node alignment block to incorporate cross-client spatiotemporal dependencies via a multi-level knowledge fusion scheme. Extensive privacy analysis and experimental evaluations demonstrate that HSTFL not only effectively resists inference attacks but also provides a significant improvement against various baselines.<|reference_end|>
|
arxiv
|
@article{cai2024hstfl:,
title={HSTFL: A Heterogeneous Federated Learning Framework for Misaligned
Spatiotemporal Forecasting},
author={Shuowei Cai and Hao Liu},
journal={arXiv preprint arXiv:2409.18482},
year={2024},
archivePrefix={arXiv},
eprint={2409.18482},
primaryClass={cs.LG}
}
|
cai2024hstfl:
|
arxiv-662660
|
2409.18486
|
Evaluation of OpenAI o1: Opportunities and Challenges of AGI
|
<|reference_start|>Evaluation of OpenAI o1: Opportunities and Challenges of AGI: This comprehensive study evaluates the performance of OpenAI's o1-preview large language model across a diverse array of complex reasoning tasks, spanning multiple domains, including computer science, mathematics, natural sciences, medicine, linguistics, and social sciences. Through rigorous testing, o1-preview demonstrated remarkable capabilities, often achieving human-level or superior performance in areas ranging from coding challenges to scientific reasoning and from language processing to creative problem-solving. Key findings include: (1) an 83.3% success rate in solving complex competitive programming problems, surpassing many human experts; (2) superior ability in generating coherent and accurate radiology reports, outperforming other evaluated models; (3) 100% accuracy in high-school-level mathematical reasoning tasks, providing detailed step-by-step solutions; (4) advanced natural language inference capabilities across general and specialized domains like medicine; (5) impressive performance in chip design tasks, outperforming specialized models in areas such as EDA script generation and bug analysis; (6) remarkable proficiency in anthropology and geology, demonstrating deep understanding and reasoning in these specialized fields; (7) strong capabilities in quantitative investing, with comprehensive financial knowledge and statistical modeling skills; and (8) effective performance in social media analysis, including sentiment analysis and emotion recognition. The model excelled particularly in tasks requiring intricate reasoning and knowledge integration across various fields. While some limitations were observed, including occasional errors on simpler problems and challenges with certain highly specialized concepts, the overall results indicate significant progress towards artificial general intelligence.<|reference_end|>
|
arxiv
|
@article{zhong2024evaluation,
title={Evaluation of OpenAI o1: Opportunities and Challenges of AGI},
author={Tianyang Zhong, Zhengliang Liu, Yi Pan, Yutong Zhang, Yifan Zhou,
Shizhe Liang, Zihao Wu, Yanjun Lyu, Peng Shu, Xiaowei Yu, Chao Cao, Hanqi
Jiang, Hanxu Chen, Yiwei Li, Junhao Chen, Huawen Hu, Yihen Liu, Huaqin Zhao,
Shaochen Xu, Haixing Dai, Lin Zhao, Ruidong Zhang, Wei Zhao, Zhenyuan Yang,
Jingyuan Chen, Peilong Wang, Wei Ruan, Hui Wang, Huan Zhao, Jing Zhang,
Yiming Ren, Shihuan Qin, Tong Chen, Jiaxi Li, Arif Hassan Zidan, Afrar Jahin,
Minheng Chen, Sichen Xia, Jason Holmes, Yan Zhuang, Jiaqi Wang, Bochen Xu,
Weiran Xia, Jichao Yu, Kaibo Tang, Yaxuan Yang, Bolun Sun, Tao Yang, Guoyu
Lu, Xianqiao Wang, Lilong Chai, He Li, Jin Lu, Lichao Sun, Xin Zhang, Bao Ge,
Xintao Hu, Lian Zhang, Hua Zhou, Lu Zhang, Shu Zhang, Ninghao Liu, Bei Jiang,
Linglong Kong, Zhen Xiang, Yudan Ren, Jun Liu, Xi Jiang, Yu Bao, Wei Zhang,
Xiang Li, Gang Li, Wei Liu, Dinggang Shen, Andrea Sikora, Xiaoming Zhai,
Dajiang Zhu, Tianming Liu},
journal={arXiv preprint arXiv:2409.18486},
year={2024},
archivePrefix={arXiv},
eprint={2409.18486},
primaryClass={cs.CL}
}
|
zhong2024evaluation
|
arxiv-662661
|
2409.18487
|
An accelerated frequency-independent solver for oscillatory differential equations
|
<|reference_start|>An accelerated frequency-independent solver for oscillatory differential equations: Oscillatory second order linear ordinary differential equations arise in many scientific calculations. Because the running times of standard solvers increase linearly with frequency when they are applied to such problems, a variety of specialized methods, most of them quite complicated, have been proposed. Here, we point out that one of the simplest approaches not only works, but yields a scheme for solving oscillatory second order linear ordinary differential equations which is significantly faster than current state-of-the-art techniques. Our method, which operates by constructing a slowly varying phase function representing a basis of solutions of the differential equation, runs in time independent of the frequency and can be applied to second order equations whose solutions are oscillatory in some regions and slowly varying in others. In the high-frequency regime, our algorithm discretizes the nonlinear Riccati equation satisfied by the derivative of the phase function via a Chebyshev spectral collocation method and applies the Newton-Kantorovich method to the resulting system of nonlinear algebraic equations. We prove that the iterates converge quadratically to a nonoscillatory solution of the Riccati equation. The quadratic convergence of the Newton-Kantorovich method and the simple form of the linearized equations ensure that this procedure is extremely efficient. Our algorithm then extends the slowly varying phase function calculated in the high-frequency regime throughout the solution domain by solving a certain third order linear ordinary differential equation related to the Riccati equation. We describe the results of numerical experiments showing that our algorithm is orders of magnitude faster than existing schemes, including the modified Magnus method [18], the current state-of-the-art approach [7] and the recently introduced ARDC method [1].<|reference_end|>
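A one-line derivation may help clarify where the Riccati equation mentioned in this abstract comes from. For $y'' + q(t)\,y = 0$, the exponential-phase ansatz gives the following (standard textbook material, not reproduced from the paper):

```latex
y(t) = \exp\!\left(\int^{t} r(s)\,\mathrm{d}s\right)
\;\Longrightarrow\;
y'' = \left(r' + r^{2}\right) y
\;\Longrightarrow\;
r'(t) + r(t)^{2} + q(t) = 0 .
```

A slowly varying solution $r$ of this first-order nonlinear equation represents a basis of oscillatory solutions without itself oscillating, which is why discretizing it (via Chebyshev collocation and Newton-Kantorovich, as described above) can run in time independent of the frequency.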
|
arxiv
|
@article{stojimirovic2024an,
title={An accelerated frequency-independent solver for oscillatory differential
equations},
author={Tara Stojimirovic and James Bremer},
journal={arXiv preprint arXiv:2409.18487},
year={2024},
archivePrefix={arXiv},
eprint={2409.18487},
primaryClass={math.NA cs.NA}
}
|
stojimirovic2024an
|
arxiv-662662
|
2409.18488
|
An Error-Code Perspective on Metzner--Kapturowski-like Decoders
|
<|reference_start|>An Error-Code Perspective on Metzner--Kapturowski-like Decoders: In this paper we consider a Metzner-Kapturowski-like decoding algorithm for high-order interleaved sum-rank-metric codes, offering a novel perspective on the decoding process through the concept of an error code. The error code, defined as the linear code spanned by the vectors forming the error matrix, provides a more intuitive understanding of the decoder's functionality and new insights. The proposed algorithm can correct errors of sum-rank weight up to $d-2$, where $d$ is the minimum distance of the constituent code, given a sufficiently large interleaving order. The decoder's versatility is highlighted by its applicability to any linear constituent code, including unstructured or random codes. The computational complexity is $O(\max\{n^3, n^2 s\})$ operations over $\mathbb{F}_{q^m}$, where $n$ is the code length and $s$ is the interleaving order. We further explore the success probability of the decoder for random errors, providing an efficient algorithm to compute an upper bound on this probability. Additionally, we derive bounds and approximations for the success probability when the error weight exceeds the unique decoding radius, showing that the decoder maintains a high success probability in this regime. Our findings suggest that this decoder could be a valuable tool for the design and security analysis of code-based cryptosystems using interleaved sum-rank-metric codes. The new insights into the decoding process and the high success probability of the algorithm even beyond the unique decoding radius underscore its potential to contribute to various coding-related applications.<|reference_end|>
|
arxiv
|
@article{jerkovits2024an,
title={An Error-Code Perspective on Metzner--Kapturowski-like Decoders},
author={Thomas Jerkovits, Felicitas H\"ormann, Hannes Bartz},
journal={arXiv preprint arXiv:2409.18488},
year={2024},
archivePrefix={arXiv},
eprint={2409.18488},
primaryClass={cs.IT math.IT}
}
|
jerkovits2024an
|
arxiv-662663
|
2409.18490
|
Numerical method for the zero dispersion limit of the fractional Korteweg-de Vries equation
|
<|reference_start|>Numerical method for the zero dispersion limit of the fractional Korteweg-de Vries equation: We present a fully discrete Crank-Nicolson Fourier-spectral-Galerkin (FSG) scheme for approximating solutions of the fractional Korteweg-de Vries (KdV) equation, which involves a fractional Laplacian with exponent $\alpha \in [1,2]$ and a small dispersion coefficient of order $\varepsilon^2$. The solution in the limit as $\varepsilon \to 0$ is known as the zero dispersion limit. We demonstrate that the semi-discrete FSG scheme conserves the first three integral invariants, thereby structure preserving, and that the fully discrete FSG scheme is $L^2$-conservative, ensuring stability. Using a compactness argument, we constructively prove the convergence of the approximate solution to the unique solution of the fractional KdV equation in $C([0,T]; H_p^{1+\alpha}(\mathbb{R}))$ for the periodic initial data in $H_p^{1+\alpha}(\mathbb{R})$. The devised scheme achieves spectral accuracy for the initial data in $H_p^r,$ $r \geq 1+\alpha$ and exponential accuracy for the analytic initial data. Additionally, we establish that the approximation of the zero dispersion limit obtained from the fully discrete FSG scheme converges to the solution of the Hopf equation in $L^2$ as $\varepsilon \to 0$, up to the gradient catastrophe time $t_c$. Beyond $t_c$, numerical investigations reveal that the approximation converges to the asymptotic solution, which is weakly described by the Whitham's averaged equation within the oscillatory zone for $\alpha = 2$. Numerical results are provided to demonstrate the convergence of the scheme and to validate the theoretical findings.<|reference_end|>
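As a rough illustration of the spectral machinery involved, and not the paper's code: on a periodic interval the fractional Laplacian acts diagonally in Fourier space with symbol $|k|^\alpha$, which is what makes Fourier-spectral-Galerkin discretizations natural here. A minimal numpy sketch:

```python
import numpy as np

def fractional_laplacian(u, alpha, L=2 * np.pi):
    """Apply (-Laplacian)^(alpha/2) to periodic samples u on [0, L)."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
    return np.fft.ifft(np.abs(k) ** alpha * np.fft.fft(u)).real

# sanity check: for alpha = 2 this reproduces -u''; here -(sin x)'' = sin x
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
print(np.allclose(fractional_laplacian(np.sin(x), 2.0), np.sin(x)))
```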
|
arxiv
|
@article{dwivedi2024numerical,
title={Numerical method for the zero dispersion limit of the fractional
Korteweg-de Vries equation},
author={Mukul Dwivedi and Tanmay Sarkar},
journal={arXiv preprint arXiv:2409.18490},
year={2024},
archivePrefix={arXiv},
eprint={2409.18490},
primaryClass={math.NA cs.NA}
}
|
dwivedi2024numerical
|
arxiv-662664
|
2409.18491
|
Treating Brain-inspired Memories as Priors for Diffusion Model to Forecast Multivariate Time Series
|
<|reference_start|>Treating Brain-inspired Memories as Priors for Diffusion Model to Forecast Multivariate Time Series: Forecasting Multivariate Time Series (MTS) involves significant challenges in various application domains. One immediate challenge is modeling temporal patterns with the finite length of the input. These temporal patterns usually involve periodic and sudden events that recur across different channels. To better capture temporal patterns, we get inspiration from humans' memory mechanisms and propose a channel-shared, brain-inspired memory module for MTS. Specifically, brain-inspired memory comprises semantic and episodic memory, where the former is used to capture general patterns, such as periodic events, and the latter is employed to capture special patterns, such as sudden events, respectively. Meanwhile, we design corresponding recall and update mechanisms to better utilize these patterns. Furthermore, acknowledging the capacity of diffusion models to leverage memory as a prior, we present a brain-inspired memory-augmented diffusion model. This innovative model retrieves relevant memories for different channels, utilizing them as distinct priors for MTS predictions. This incorporation significantly enhances the accuracy and robustness of predictions. Experimental results on eight datasets consistently validate the superiority of our approach in capturing and leveraging diverse recurrent temporal patterns across different channels.<|reference_end|>
|
arxiv
|
@article{wang2024treating,
title={Treating Brain-inspired Memories as Priors for Diffusion Model to
Forecast Multivariate Time Series},
author={Muyao Wang and Wenchao Chen and Zhibin Duan and Bo Chen},
journal={arXiv preprint arXiv:2409.18491},
year={2024},
archivePrefix={arXiv},
eprint={2409.18491},
primaryClass={cs.LG}
}
|
wang2024treating
|
arxiv-662665
|
2409.18497
|
Neural Video Representation for Redundancy Reduction and Consistency Preservation
|
<|reference_start|>Neural Video Representation for Redundancy Reduction and Consistency Preservation: Implicit neural representations (INRs) embed various signals into networks. They have gained attention in recent years because of their versatility in handling diverse signal types. For videos, INRs achieve video compression by embedding video signals into networks and compressing them. Conventional methods use an index that expresses the time of the frame or the features extracted from the frame as inputs to the network. The latter method provides greater expressive capability as the input is specific to each video. However, the features extracted from frames often contain redundancy, which contradicts the purpose of video compression. Moreover, since frame time information is not explicitly provided to the network, learning the relationships between frames is challenging. To address these issues, we aim to reduce feature redundancy by extracting features based on the high-frequency components of the frames. In addition, we use feature differences between adjacent frames in order for the network to learn frame relationships smoothly. We propose a video representation method that uses the high-frequency components of frames and the differences in features between adjacent frames. The experimental results show that our method outperforms the existing HNeRV method in 90 percent of the videos.<|reference_end|>
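One simple way to picture the two ingredients above, high-frequency frame content and adjacent-frame feature differences, is sketched below; the Gaussian high-pass filter and the list-based pipeline are illustrative assumptions, not the paper's architecture.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_frequency(frame, sigma=2.0):
    """Keep the detail a Gaussian low-pass removes (unsharp-mask-style high-pass)."""
    return frame - gaussian_filter(frame, sigma=sigma)

def feature_diffs(features):
    """Differences between adjacent frames' features, fed to the network."""
    return [nxt - cur for cur, nxt in zip(features, features[1:])]

frames = [np.random.rand(64, 64) for _ in range(4)]  # stand-in video
feats = [high_frequency(f) for f in frames]          # per-frame high-frequency content
print(len(feature_diffs(feats)))                     # 3 adjacent-frame differences
```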
|
arxiv
|
@article{hayami2024neural,
title={Neural Video Representation for Redundancy Reduction and Consistency
Preservation},
author={Taiga Hayami, Takahiro Shindo, Shunsuke Akamatsu, Hiroshi Watanabe},
journal={arXiv preprint arXiv:2409.18497},
year={2024},
archivePrefix={arXiv},
eprint={2409.18497},
primaryClass={cs.CV}
}
|
hayami2024neural
|
arxiv-662666
|
2409.18498
|
Improved Approximation Algorithms for Relational Clustering
|
<|reference_start|>Improved Approximation Algorithms for Relational Clustering: Clustering plays a crucial role in computer science, facilitating data analysis and problem-solving across numerous fields. By partitioning large datasets into meaningful groups, clustering reveals hidden structures and relationships within the data, aiding tasks such as unsupervised learning, classification, anomaly detection, and recommendation systems. Particularly in relational databases, where data is distributed across multiple tables, efficient clustering is essential yet challenging due to the computational complexity of joining tables. This paper addresses this challenge by introducing efficient algorithms for $k$-median and $k$-means clustering on relational data without the need for pre-computing the join query results. For the relational $k$-median clustering, we propose the first efficient relative approximation algorithm. For the relational $k$-means clustering, our algorithm significantly improves both the approximation factor and the running time of the known relational $k$-means clustering algorithms, which suffer either from large constant approximation factors, or expensive running time. Given a join query $Q$ and a database instance $D$ of $O(N)$ tuples, for both $k$-median and $k$-means clustering on the results of $Q$ on $D$, we propose randomized $(1+\varepsilon)\gamma$-approximation algorithms that run in roughly $O(k^2N^{\mathsf{fhw}})+T_\gamma(k^2)$ time, where $\varepsilon\in (0,1)$ is a constant parameter decided by the user, $\mathsf{fhw}$ is the fractional hyper-tree width of $Q$, while $\gamma$ and $T_\gamma(x)$ are respectively the approximation factor and the running time of a traditional clustering algorithm in the standard computational setting over $x$ points.<|reference_end|>
|
arxiv
|
@article{esmailpour2024improved,
title={Improved Approximation Algorithms for Relational Clustering},
author={Aryan Esmailpour, Stavros Sintos},
journal={arXiv preprint arXiv:2409.18498},
year={2024},
archivePrefix={arXiv},
eprint={2409.18498},
primaryClass={cs.DB cs.DS}
}
|
esmailpour2024improved
|
arxiv-662667
|
2409.18499
|
Fairness-aware Multiobjective Evolutionary Learning
|
<|reference_start|>Fairness-aware Multiobjective Evolutionary Learning: Multiobjective evolutionary learning (MOEL) has demonstrated its advantages of training fairer machine learning models considering a predefined set of conflicting objectives, including accuracy and different fairness measures. Recent works propose to construct a representative subset of fairness measures as optimisation objectives of MOEL throughout model training. However, the determination of a representative measure set relies on dataset, prior knowledge and requires substantial computational costs. What's more, those representative measures may differ across different model training processes. Instead of using a static predefined set determined before model training, this paper proposes to dynamically and adaptively determine a representative measure set online during model training. The dynamically determined representative set is then used as optimising objectives of the MOEL framework and can vary with time. Extensive experimental results on 12 well-known benchmark datasets demonstrate that our proposed framework achieves outstanding performance compared to state-of-the-art approaches for mitigating unfairness in terms of accuracy as well as 25 fairness measures although only a few of them were dynamically selected and used as optimisation objectives. The results indicate the importance of setting optimisation objectives dynamically during training.<|reference_end|>
|
arxiv
|
@article{zhang2024fairness-aware,
title={Fairness-aware Multiobjective Evolutionary Learning},
author={Qingquan Zhang and Jialin Liu and Xin Yao},
journal={IEEE Transactions on Evolutionary Computation (2024)},
year={2024},
doi={10.1109/TEVC.2024.3430824},
archivePrefix={arXiv},
eprint={2409.18499},
primaryClass={cs.LG cs.AI}
}
|
zhang2024fairness-aware
|
arxiv-662668
|
2409.18504
|
WHOMP: Optimizing Randomized Controlled Trials via Wasserstein Homogeneity
|
<|reference_start|>WHOMP: Optimizing Randomized Controlled Trials via Wasserstein Homogeneity: We investigate methods for partitioning datasets into subgroups that maximize diversity within each subgroup while minimizing dissimilarity across subgroups. We introduce a novel partitioning method called the $\textit{Wasserstein Homogeneity Partition}$ (WHOMP), which optimally minimizes type I and type II errors that often result from imbalanced group splitting or partitioning, commonly referred to as accidental bias, in comparative and controlled trials. We conduct an analytical comparison of WHOMP against existing partitioning methods, such as random subsampling, covariate-adaptive randomization, rerandomization, and anti-clustering, demonstrating its advantages. Moreover, we characterize the optimal solutions to the WHOMP problem and reveal an inherent trade-off between the stability of subgroup means and variances among these solutions. Based on our theoretical insights, we design algorithms that not only obtain these optimal solutions but also equip practitioners with tools to select the desired trade-off. Finally, we validate the effectiveness of WHOMP through numerical experiments, highlighting its superiority over traditional methods.<|reference_end|>
|
arxiv
|
@article{xu2024whomp:,
title={WHOMP: Optimizing Randomized Controlled Trials via Wasserstein
Homogeneity},
author={Shizhou Xu, Thomas Strohmer},
journal={arXiv preprint arXiv:2409.18504},
year={2024},
archivePrefix={arXiv},
eprint={2409.18504},
primaryClass={stat.ML cs.LG math.PR math.ST stat.TH}
}
|
xu2024whomp:
|
arxiv-662669
|
2409.18506
|
Med-IC: Fusing a Single Layer Involution with Convolutions for Enhanced Medical Image Classification and Segmentation
|
<|reference_start|>Med-IC: Fusing a Single Layer Involution with Convolutions for Enhanced Medical Image Classification and Segmentation: The majority of medical images, especially those that resemble cells, have similar characteristics. These images, which occur in a variety of shapes, often show abnormalities in the organ or cell region. The convolution operation possesses a restricted capability to extract visual patterns across several spatial regions of an image. The involution process, which is the inverse operation of convolution, complements this inherent lack of spatial information extraction present in convolutions. In this study, we investigate how applying a single layer of involution prior to a convolutional neural network (CNN) architecture can significantly improve classification and segmentation performance, with a comparatively negligible amount of weight parameters. The study additionally shows how excessive use of involution layers might result in inaccurate predictions in a particular type of medical image. According to our findings from experiments, the strategy of adding only a single involution layer before a CNN-based model outperforms most of the previous works.<|reference_end|>
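For readers unfamiliar with the operation, a minimal single involution layer in PyTorch is sketched below; it follows the published involution operator (Li et al., CVPR 2021) rather than this paper's exact configuration, and the kernel size, groups, and reduction ratio are assumed defaults.

```python
import torch
import torch.nn as nn

class Involution2d(nn.Module):
    """Minimal involution: the spatial kernel is generated per-pixel from the input."""
    def __init__(self, channels, kernel_size=3, groups=1, reduction=4):
        super().__init__()
        self.k, self.g = kernel_size, groups
        self.reduce = nn.Conv2d(channels, channels // reduction, 1)
        self.span = nn.Conv2d(channels // reduction, groups * kernel_size ** 2, 1)
        self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, h, w = x.shape
        # one K*K kernel per group and spatial location, generated from x itself
        kernel = self.span(self.reduce(x)).view(b, self.g, 1, self.k ** 2, h, w)
        # K*K neighbourhood of every pixel, split into channel groups
        patches = self.unfold(x).view(b, self.g, c // self.g, self.k ** 2, h, w)
        # weight each neighbourhood by its generated kernel and sum over the window
        return (kernel * patches).sum(dim=3).view(b, c, h, w)

print(Involution2d(16)(torch.randn(2, 16, 32, 32)).shape)  # torch.Size([2, 16, 32, 32])
```

Placing one such layer before a stack of convolutions, as the abstract describes, adds location-adaptive spatial mixing at a near-negligible parameter cost (two 1x1 convolutions).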
|
arxiv
|
@article{islam2024med-ic:,
title={Med-IC: Fusing a Single Layer Involution with Convolutions for Enhanced
Medical Image Classification and Segmentation},
author={Md. Farhadul Islam, Sarah Zabeen, Meem Arafat Manab, Mohammad Rakibul
Hasan Mahin, Joyanta Jyoti Mondal, Md. Tanzim Reza, Md Zahidul Hasan, Munima
Haque, Farig Sadeque, Jannatun Noor},
journal={arXiv preprint arXiv:2409.18506},
year={2024},
archivePrefix={arXiv},
eprint={2409.18506},
primaryClass={eess.IV cs.CV cs.LG}
}
|
islam2024med-ic:
|
arxiv-662670
|
2409.18511
|
Do We Need Domain-Specific Embedding Models? An Empirical Investigation
|
<|reference_start|>Do We Need Domain-Specific Embedding Models? An Empirical Investigation: Embedding models play a crucial role in representing and retrieving information across various NLP applications. Recent advancements in Large Language Models (LLMs) have further enhanced the performance of embedding models, which are trained on massive amounts of text covering almost every domain. These models are often benchmarked on general-purpose datasets like Massive Text Embedding Benchmark (MTEB), where they demonstrate superior performance. However, a critical question arises: Is the development of domain-specific embedding models necessary when general-purpose models are trained on vast corpora that already include specialized domain texts? In this paper, we empirically investigate this question, choosing the finance domain as an example. We introduce the Finance Massive Text Embedding Benchmark (FinMTEB), a counterpart to MTEB that consists of financial domain-specific text datasets. We evaluate the performance of seven state-of-the-art embedding models on FinMTEB and observe a significant performance drop compared to their performance on MTEB. To account for the possibility that this drop is driven by FinMTEB's higher complexity, we propose four measures to quantify dataset complexity and control for this factor in our analysis. Our analysis provides compelling evidence that state-of-the-art embedding models struggle to capture domain-specific linguistic and semantic patterns. Moreover, we find that the performance of general-purpose embedding models on MTEB is not correlated with their performance on FinMTEB, indicating the need for domain-specific embedding benchmarks for domain-specific embedding models. This study sheds light on developing domain-specific embedding models in the LLM era. FinMTEB comes with open-source code at https://github.com/yixuantt/FinMTEB<|reference_end|>
|
arxiv
|
@article{tang2024do,
title={Do We Need Domain-Specific Embedding Models? An Empirical Investigation},
author={Yixuan Tang and Yi Yang},
journal={arXiv preprint arXiv:2409.18511},
year={2024},
archivePrefix={arXiv},
eprint={2409.18511},
primaryClass={cs.CL cs.IR}
}
|
tang2024do
|
arxiv-662671
|
2409.18512
|
EmoPro: A Prompt Selection Strategy for Emotional Expression in LM-based Speech Synthesis
|
<|reference_start|>EmoPro: A Prompt Selection Strategy for Emotional Expression in LM-based Speech Synthesis: Recent advancements in speech synthesis models, trained on extensive datasets, have demonstrated remarkable zero-shot capabilities. These models can control content, timbre, and emotion in generated speech based on prompt inputs. Despite these advancements, the choice of prompts significantly impacts the output quality, yet most existing selection schemes do not adequately address the control of emotional intensity. To address this question, this paper proposes a two-stage prompt selection strategy EmoPro, which is specifically designed for emotionally controllable speech synthesis. This strategy focuses on selecting highly expressive and high-quality prompts by evaluating them from four perspectives: emotional expression strength, speech quality, text-emotion consistency, and model generation performance. Experimental results show that prompts selected using the proposed method result in more emotionally expressive and engaging synthesized speech compared to those obtained through baseline. Audio samples and codes will be available at https://whyrrrrun.github.io/EmoPro/.<|reference_end|>
|
arxiv
|
@article{wang2024emopro:,
title={EmoPro: A Prompt Selection Strategy for Emotional Expression in LM-based
Speech Synthesis},
author={Haoyu Wang, Chunyu Qiang, Tianrui Wang, Cheng Gong, Qiuyu Liu, Yu
Jiang, Xiaobao Wang, Chenyang Wang, Chen Zhang},
journal={arXiv preprint arXiv:2409.18512},
year={2024},
archivePrefix={arXiv},
eprint={2409.18512},
primaryClass={cs.SD cs.AI cs.CL eess.AS}
}
|
wang2024emopro:
|
arxiv-662672
|
2409.18522
|
Decomposing the Jaccard Distance and the Jaccard Index in ABCDE
|
<|reference_start|>Decomposing the Jaccard Distance and the Jaccard Index in ABCDE: ABCDE is a sophisticated technique for evaluating differences between very large clusterings. Its main metric that characterizes the magnitude of the difference between two clusterings is the JaccardDistance, which is a true distance metric in the space of all clusterings of a fixed set of (weighted) items. The JaccardIndex is the complementary metric that characterizes the similarity of two clusterings. Its relationship with the JaccardDistance is simple: JaccardDistance + JaccardIndex = 1. This paper decomposes the JaccardDistance and the JaccardIndex further. In each case, the decomposition yields Impact and Quality metrics. The Impact metrics measure aspects of the magnitude of the clustering diff, while Quality metrics use human judgements to measure how much the clustering diff improves the quality of the clustering. The decompositions of this paper offer more and deeper insight into a clustering change. They also unlock new techniques for debugging and exploring the nature of the clustering diff. The new metrics are mathematically well-behaved and they are interrelated via simple equations. While the work can be seen as an alternative formal framework for ABCDE, we prefer to view it as complementary. It certainly offers a different perspective on the magnitude and the quality of a clustering change, and users can use whatever they want from each approach to gain more insight into a change.<|reference_end|>
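For intuition about the quantities being decomposed: one common way to define a Jaccard similarity between two clusterings of the same weighted items is over co-clustered item pairs. The sketch below is that generic pairwise reading, offered only as an illustration; it is not necessarily ABCDE's exact formulation.

```python
from itertools import combinations

def jaccard_index(assign_a, assign_b, weight):
    """Weighted pairwise Jaccard similarity of two clusterings (dicts: item -> cluster)."""
    both = either = 0.0
    for i, j in combinations(assign_a, 2):
        w = weight[i] * weight[j]
        same_a = assign_a[i] == assign_a[j]
        same_b = assign_b[i] == assign_b[j]
        both += w * (same_a and same_b)    # pair co-clustered in both
        either += w * (same_a or same_b)   # pair co-clustered in at least one
    return both / either if either else 1.0

a = {"x": 1, "y": 1, "z": 2}
b = {"x": 1, "y": 2, "z": 2}
idx = jaccard_index(a, b, {"x": 1.0, "y": 1.0, "z": 1.0})
print(idx, 1.0 - idx)  # JaccardIndex and JaccardDistance sum to 1
```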
|
arxiv
|
@article{vanstaden2024decomposing,
title={Decomposing the Jaccard Distance and the Jaccard Index in ABCDE},
author={Stephan van Staden},
journal={arXiv preprint arXiv:2409.18522},
year={2024},
archivePrefix={arXiv},
eprint={2409.18522},
primaryClass={cs.IR}
}
|
vanstaden2024decomposing
|
arxiv-662673
|
2409.18523
|
Token Caching for Diffusion Transformer Acceleration
|
<|reference_start|>Token Caching for Diffusion Transformer Acceleration: Diffusion transformers have gained substantial interest in diffusion generative modeling due to their outstanding performance. However, their high computational cost, arising from the quadratic computational complexity of attention mechanisms and multi-step inference, presents a significant bottleneck. To address this challenge, we propose TokenCache, a novel post-training acceleration method that leverages the token-based multi-block architecture of transformers to reduce redundant computations among tokens across inference steps. TokenCache specifically addresses three critical questions in the context of diffusion transformers: (1) which tokens should be pruned to eliminate redundancy, (2) which blocks should be targeted for efficient pruning, and (3) at which time steps caching should be applied to balance speed and quality. In response to these challenges, TokenCache introduces a Cache Predictor that assigns importance scores to tokens, enabling selective pruning without compromising model performance. Furthermore, we propose an adaptive block selection strategy to focus on blocks with minimal impact on the network's output, along with a Two-Phase Round-Robin (TPRR) scheduling policy to optimize caching intervals throughout the denoising process. Experimental results across various models demonstrate that TokenCache achieves an effective trade-off between generation quality and inference speed for diffusion transformers. Our code will be publicly available.<|reference_end|>
|
arxiv
|
@article{lou2024token,
title={Token Caching for Diffusion Transformer Acceleration},
author={Jinming Lou, Wenyang Luo, Yufan Liu, Bing Li, Xinmiao Ding, Weiming
Hu, Jiajiong Cao, Yuming Li, Chenguang Ma},
journal={arXiv preprint arXiv:2409.18523},
year={2024},
archivePrefix={arXiv},
eprint={2409.18523},
primaryClass={cs.LG cs.CV}
}
|
lou2024token
|
arxiv-662674
|
2409.18524
|
Adaptive Knowledge-based Multi-Objective Evolutionary Algorithm for Hybrid Flow Shop Scheduling Problems with Multiple Parallel Batch Processing Stages
|
<|reference_start|>Adaptive Knowledge-based Multi-Objective Evolutionary Algorithm for Hybrid Flow Shop Scheduling Problems with Multiple Parallel Batch Processing Stages: Parallel batch processing machines have extensive applications in the semiconductor manufacturing process. However, the problem models in previous studies regard parallel batch processing as a fixed processing stage in the machining process. This study generalizes the problem model, in which users can arbitrarily set certain stages as parallel batch processing stages according to their needs. A Hybrid Flow Shop Scheduling Problem with Parallel Batch Processing Machines (PBHFSP) is solved in this paper. Furthermore, an Adaptive Knowledge-based Multi-Objective Evolutionary Algorithm (AMOEA/D) is designed to simultaneously optimize both makespan and Total Energy Consumption (TEC). Firstly, a hybrid initialization strategy with heuristic rules based on knowledge of the PBHFSP is proposed to generate promising solutions. Secondly, a disjunctive graph model is established based on this knowledge to find the critical path of the PBHFSP. Then, a critical-path-based neighborhood search is proposed to enhance the exploitation ability of AMOEA/D. Moreover, the search time is adaptively adjusted based on learning experience from Q-learning and Decay Law. Afterward, to enhance the exploration capability of the algorithm, AMOEA/D employs an improved population updating strategy with a weight vector updating strategy. These strategies rematch individuals with weight vectors, thereby maintaining the diversity of the population. Finally, the proposed algorithm is compared with state-of-the-art algorithms. The experimental results show that AMOEA/D is superior to the comparison algorithms in solving the PBHFSP.<|reference_end|>
|
arxiv
|
@article{liu2024adaptive,
title={Adaptive Knowledge-based Multi-Objective Evolutionary Algorithm for
Hybrid Flow Shop Scheduling Problems with Multiple Parallel Batch Processing
Stages},
author={Feige Liu, Xin Li, Chao Lu, Wenying Gong},
journal={arXiv preprint arXiv:2409.18524},
year={2024},
archivePrefix={arXiv},
eprint={2409.18524},
primaryClass={cs.NE cs.SY eess.SY}
}
|
liu2024adaptive
|
arxiv-662675
|
2409.18528
|
Security Analysis of Top-Ranked mHealth Fitness Apps: An Empirical Study
|
<|reference_start|>Security Analysis of Top-Ranked mHealth Fitness Apps: An Empirical Study: Mobile health applications (mHealth apps), particularly in the health and fitness category, have experienced an increase in popularity due to their convenience and availability. However, this widespread adoption raises concerns regarding the security of the user's data. In this study, we investigate the security vulnerabilities of ten top-ranked Android health and fitness apps, a set that accounts for 237 million downloads. We performed several static and dynamic security analyses using tools such as the Mobile Security Framework (MobSF) and Android emulators. We also checked the server's security levels with Qualys SSL, which allowed us to gain insights into the security posture of the servers communicating with the mHealth fitness apps. Our findings revealed many vulnerabilities, such as insecure coding, hardcoded sensitive information, over-privileged permissions, misconfiguration, and excessive communication with third-party domains. For instance, some apps store their database API key directly in the code while also exposing their database URL. We found insecure encryption methods in six apps, such as using AES with ECB mode. Two apps communicated with an alarming number of approximately 230 domains each, and a third app with over 100 domains, exacerbating privacy linkability threats. The study underscores the importance of continuous security assessments of top-ranked mHealth fitness apps to better understand the threat landscape and inform app developers.<|reference_end|>
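To make the "AES with ECB mode" finding concrete, the snippet below shows the kind of pattern such analyses flag, next to a commonly recommended AEAD alternative. It uses the pyca/cryptography package and is purely illustrative; it is not code from the audited apps.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = os.urandom(32)

# Flagged pattern: ECB encrypts equal plaintext blocks to equal ciphertext
# blocks, leaking structure, and provides no integrity protection.
enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
leaky = enc.update(b"A" * 16 + b"A" * 16) + enc.finalize()
print(leaky[:16] == leaky[16:])  # True: repetition is visible in the ciphertext

# Recommended direction: an AEAD mode such as AES-GCM with a fresh nonce.
nonce = os.urandom(12)
sealed = AESGCM(key).encrypt(nonce, b"A" * 32, None)
print(len(sealed))  # ciphertext plus a 16-byte authentication tag
```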
|
arxiv
|
@article{forsberg2024security,
title={Security Analysis of Top-Ranked mHealth Fitness Apps: An Empirical Study},
author={Albin Forsberg and Leonardo Horn Iwaya},
journal={arXiv preprint arXiv:2409.18528},
year={2024},
archivePrefix={arXiv},
eprint={2409.18528},
primaryClass={cs.CR}
}
|
forsberg2024security
|
arxiv-662676
|
2409.18529
|
Robustness of AI-based weather forecasts in a changing climate
|
<|reference_start|>Robustness of AI-based weather forecasts in a changing climate: Data-driven machine learning models for weather forecasting have made transformational progress in the last 1-2 years, with state-of-the-art ones now outperforming the best physics-based models for a wide range of skill scores. Given the strong links between weather and climate modelling, this raises the question whether machine learning models could also revolutionize climate science, for example by informing mitigation and adaptation to climate change or to generate larger ensembles for more robust uncertainty estimates. Here, we show that current state-of-the-art machine learning models trained for weather forecasting in present-day climate produce skillful forecasts across different climate states corresponding to pre-industrial, present-day, and future 2.9K warmer climates. This indicates that the dynamics shaping the weather on short timescales may not differ fundamentally in a changing climate. It also demonstrates out-of-distribution generalization capabilities of the machine learning models that are a critical prerequisite for climate applications. Nonetheless, two of the models show a global-mean cold bias in the forecasts for the future warmer climate state, i.e. they drift towards the colder present-day climate they have been trained for. A similar result is obtained for the pre-industrial case where two out of three models show a warming. We discuss possible remedies for these biases and analyze their spatial distribution, revealing complex warming and cooling patterns that are partly related to missing ocean-sea ice and land surface information in the training data. Despite these current limitations, our results suggest that data-driven machine learning models will provide powerful tools for climate science and transform established approaches by complementing conventional physics-based models.<|reference_end|>
|
arxiv
|
@article{rackow2024robustness,
title={Robustness of AI-based weather forecasts in a changing climate},
author={Thomas Rackow, Nikolay Koldunov, Christian Lessig, Irina Sandu, Mihai
Alexe, Matthew Chantry, Mariana Clare, Jesper Dramsch, Florian Pappenberger,
Xabier Pedruzo-Bagazgoitia, Steffen Tietsche, and Thomas Jung},
journal={arXiv preprint arXiv:2409.18529},
year={2024},
archivePrefix={arXiv},
eprint={2409.18529},
primaryClass={physics.ao-ph cs.LG physics.comp-ph}
}
|
rackow2024robustness
|
arxiv-662677
|
2409.18530
|
A Static Analysis of Popular C Packages in Linux
|
<|reference_start|>A Static Analysis of Popular C Packages in Linux: Static analysis is a classical technique for improving software security and software quality in general. Fairly recently, a new static analyzer was implemented in the GNU Compiler Collection (GCC). The present paper uses the GCC's analyzer to empirically examine popular Linux packages. The dataset used is based on those packages in the Gentoo Linux distribution that are either written in C or contain C code. In total, $3,538$ such packages are covered. According to the results, uninitialized variables and NULL pointer dereference issues are the most common problems according to the analyzer. Classical memory management issues are relatively rare. The warnings also follow a long-tailed probability distribution across the packages; a few packages are highly warning-prone, whereas no warnings are present for as much as 89% of the packages. Furthermore, the warnings do not vary across different application domains. With these results, the paper contributes to the domain of large-scale empirical research on software quality and security. In addition, a discussion is presented about practical implications of the results.<|reference_end|>
|
arxiv
|
@article{ruohonen2024a,
title={A Static Analysis of Popular C Packages in Linux},
author={Jukka Ruohonen and Mubashrah Saddiqa and Krzysztof Sierszecki},
journal={arXiv preprint arXiv:2409.18530},
year={2024},
archivePrefix={arXiv},
eprint={2409.18530},
primaryClass={cs.SE cs.CR}
}
|
ruohonen2024a
|
arxiv-662678
|
2409.18533
|
Prompt-Driven Temporal Domain Adaptation for Nighttime UAV Tracking
|
<|reference_start|>Prompt-Driven Temporal Domain Adaptation for Nighttime UAV Tracking: Nighttime UAV tracking under low-illuminated scenarios has achieved great progress by domain adaptation (DA). However, previous DA training-based works are deficient in narrowing the discrepancy of temporal contexts for UAV trackers. To address the issue, this work proposes a prompt-driven temporal domain adaptation training framework to fully utilize temporal contexts for challenging nighttime UAV tracking, i.e., TDA. Specifically, the proposed framework aligns the distribution of temporal contexts from daytime and nighttime domains by training the temporal feature generator against the discriminator. The temporal-consistent discriminator progressively extracts shared domain-specific features to generate coherent domain discrimination results in the time series. Additionally, to obtain high-quality training samples, a prompt-driven object miner is employed to precisely locate objects in unannotated nighttime videos. Moreover, a new benchmark for long-term nighttime UAV tracking is constructed. Exhaustive evaluations on both public and self-constructed nighttime benchmarks demonstrate the remarkable performance of the tracker trained in TDA framework, i.e., TDA-Track. Real-world tests at nighttime also show its practicality. The code and demo videos are available at https://github.com/vision4robotics/TDA-Track.<|reference_end|>
|
arxiv
|
@article{fu2024prompt-driven,
title={Prompt-Driven Temporal Domain Adaptation for Nighttime UAV Tracking},
author={Changhong Fu, Yiheng Wang, Liangliang Yao, Guangze Zheng, Haobo Zuo,
and Jia Pan},
journal={arXiv preprint arXiv:2409.18533},
year={2024},
archivePrefix={arXiv},
eprint={2409.18533},
primaryClass={cs.CV}
}
|
fu2024prompt-driven
|
arxiv-662679
|
2409.18534
|
Transformation of the discrete logarithm problem over $\mathbb F_{2^n}$ to the QUBO problem using normal bases
|
<|reference_start|>Transformation of the discrete logarithm problem over $\mathbb F_{2^n}$ to the QUBO problem using normal bases: Quantum computation is a very important branch of modern cryptology. Given the number of working physical qubits available in general-purpose quantum computers and in quantum annealers, it is no coincidence that quantum annealers currently allow larger problems to be solved. In this paper we focus on solving the discrete logarithm problem (DLP) over binary fields using quantum annealing. It is worth noting that, although solving the DLP over prime fields using quantum annealing has been considered before, no author, until now, has considered the DLP over binary fields using quantum annealing. Therefore, in this paper, we aim to bridge this gap. We present a polynomial transformation of the discrete logarithm problem over binary fields to the Quadratic Unconstrained Binary Optimization (QUBO) problem, using approximately $3n^2$ logical variables for the binary field $\mathbb{F}_{2^n}$. In our estimations, we assume the existence of a type-II optimal normal basis in the given fields. Such a QUBO instance can then be solved using quantum annealing.<|reference_end|>
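For readers unfamiliar with the target form: a QUBO instance asks for a binary vector minimizing a quadratic form, and equality constraints produced by such transformations are typically folded in as squared penalty terms. Schematically (a generic statement, not the paper's specific $3n^2$-variable construction):

```latex
\min_{x \in \{0,1\}^m} \; x^{\mathsf T} Q x,
\qquad
f(x) = 0 \ \text{encoded as} \
\min_{x \in \{0,1\}^m} \; x^{\mathsf T} Q x + \lambda\, f(x)^2,
\quad \lambda > 0 \ \text{sufficiently large.}
```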
|
arxiv
|
@article{wronski2024transformation,
title={Transformation of the discrete logarithm problem over $\mathbb F_{2^n}$
to the QUBO problem using normal bases},
author={Micha{\l} Wro\'nski, Mateusz Le\'sniak},
journal={arXiv preprint arXiv:2409.18534},
year={2024},
archivePrefix={arXiv},
eprint={2409.18534},
primaryClass={cs.CR}
}
|
wronski2024transformation
|
arxiv-662680
|
2409.18536
|
How Effective is Pre-training of Large Masked Autoencoders for Downstream Earth Observation Tasks?
|
<|reference_start|>How Effective is Pre-training of Large Masked Autoencoders for Downstream Earth Observation Tasks?: Self-supervised pre-training has proven highly effective for many computer vision tasks, particularly when labelled data are scarce. In the context of Earth Observation (EO), foundation models and various other Vision Transformer (ViT)-based approaches have been successfully applied for transfer learning to downstream tasks. However, it remains unclear under which conditions pre-trained models offer significant advantages over training from scratch. In this study, we investigate the effectiveness of pre-training ViT-based Masked Autoencoders (MAE) for downstream EO tasks, focusing on reconstruction, segmentation, and classification. We consider two large ViT-based MAE pre-trained models: a foundation model (Prithvi) and SatMAE. We evaluate Prithvi on reconstruction and segmentation-based downstream tasks, and for SatMAE we assess its performance on a classification downstream task. Our findings suggest that pre-training is particularly beneficial when the fine-tuning task closely resembles the pre-training task, e.g. reconstruction. In contrast, for tasks such as segmentation or classification, training from scratch with specific hyperparameter adjustments proved to be equally or more effective.<|reference_end|>
|
arxiv
|
@article{sosa2024how,
title={How Effective is Pre-training of Large Masked Autoencoders for
Downstream Earth Observation Tasks?},
author={Jose Sosa, Mohamed Aloulou, Danila Rukhovich, Rim Sleimi, Boonyarit
Changaival, Anis Kacem, and Djamila Aouada},
journal={arXiv preprint arXiv:2409.18536},
year={2024},
archivePrefix={arXiv},
eprint={2409.18536},
primaryClass={cs.CV}
}
|
sosa2024how
|
arxiv-662681
|
2409.18538
|
A Survey on Complex Tasks for Goal-Directed Interactive Agents
|
<|reference_start|>A Survey on Complex Tasks for Goal-Directed Interactive Agents: Goal-directed interactive agents, which autonomously complete tasks through interactions with their environment, can assist humans in various domains of their daily lives. Recent advances in large language models (LLMs) led to a surge of new, more and more challenging tasks to evaluate such agents. To properly contextualize performance across these tasks, it is imperative to understand the different challenges they pose to agents. To this end, this survey compiles relevant tasks and environments for evaluating goal-directed interactive agents, structuring them along dimensions relevant for understanding current obstacles. An up-to-date compilation of relevant resources can be found on our project website: https://coli-saar.github.io/interactive-agents.<|reference_end|>
|
arxiv
|
@article{hartmann2024a,
title={A Survey on Complex Tasks for Goal-Directed Interactive Agents},
author={Mareike Hartmann and Alexander Koller},
journal={arXiv preprint arXiv:2409.18538},
year={2024},
archivePrefix={arXiv},
eprint={2409.18538},
primaryClass={cs.CL}
}
|
hartmann2024a
|
arxiv-662682
|
2409.18541
|
Align$^2$LLaVA: Cascaded Human and Large Language Model Preference Alignment for Multi-modal Instruction Curation
|
<|reference_start|>Align$^2$LLaVA: Cascaded Human and Large Language Model Preference Alignment for Multi-modal Instruction Curation: Recent advances in Multi-modal Large Language Models (MLLMs), such as LLaVA-series models, are driven by massive machine-generated instruction-following data tuning. Such automatic instruction collection pipelines, however, inadvertently introduce significant variability in data quality. This paper introduces a novel instruction curation algorithm, derived from two unique perspectives, human and LLM preference alignment, to compress this vast corpus of machine-generated multimodal instructions to a compact and high-quality form: (i) For human preference alignment, we have collected a machine-generated multimodal instruction dataset and established a comprehensive set of both subjective and objective criteria to guide the data quality assessment critically from human experts. By doing so, a reward model was trained on the annotated dataset to internalize the nuanced human understanding of instruction alignment. (ii) For LLM preference alignment, given the instruction selected by the reward model, we propose leveraging the inner LLM used in MLLM to align the writing style of visual instructions with that of the inner LLM itself, resulting in LLM-aligned instruction improvement. Extensive experiments demonstrate that we can maintain or even improve model performance by compressing synthetic multimodal instructions by up to 90%. Impressively, by aggressively reducing the total training sample size from 158k to 14k (9$\times$ smaller), our model consistently outperforms its full-size dataset counterpart across various MLLM benchmarks. Our project is available at https://github.com/DCDmllm/Align2LLaVA.<|reference_end|>
|
arxiv
|
@article{huang2024align2llava:,
title={Align$^2$LLaVA: Cascaded Human and Large Language Model Preference
Alignment for Multi-modal Instruction Curation},
author={Hongzhe Huang, Zhewen Yu, Jiang Liu, Li Cai, Dian Jiao, Wenqiao Zhang,
Siliang Tang, Juncheng Li, Hao Jiang, Haoyuan Li, Yueting Zhuang},
journal={arXiv preprint arXiv:2409.18541},
year={2024},
archivePrefix={arXiv},
eprint={2409.18541},
primaryClass={cs.AI}
}
|
huang2024align2llava:
|
arxiv-662683
|
2409.18542
|
MIMII-Gen: Generative Modeling Approach for Simulated Evaluation of Anomalous Sound Detection System
|
<|reference_start|>MIMII-Gen: Generative Modeling Approach for Simulated Evaluation of Anomalous Sound Detection System: Insufficient recordings and the scarcity of anomalies present significant challenges in developing and validating robust anomaly detection systems for machine sounds. To address these limitations, we propose a novel approach for generating diverse anomalies in machine sound using a latent diffusion-based model that integrates an encoder-decoder framework. Our method utilizes the Flan-T5 model to encode captions derived from audio file metadata, enabling conditional generation through a carefully designed U-Net architecture. This approach aids our model in generating audio signals within the EnCodec latent space, ensuring high contextual relevance and quality. We objectively evaluated the quality of our generated sounds using the Fr\'echet Audio Distance (FAD) score and other metrics, demonstrating that our approach surpasses existing models in generating reliable machine audio that closely resembles actual abnormal conditions. The evaluation of the anomaly detection system using our generated data revealed a strong correlation, with the area under the curve (AUC) score differing by 4.8\% from the original, validating the effectiveness of our generated data. These results demonstrate the potential of our approach to enhance the evaluation and robustness of anomaly detection systems across varied and previously unseen conditions. Audio samples can be found at \url{https://hpworkhub.github.io/MIMII-Gen.github.io/}.<|reference_end|>
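The Fréchet Audio Distance used above is the Fréchet (2-Wasserstein) distance between Gaussians fitted to embeddings of real and generated audio. A minimal sketch of that computation follows; the embedding model that produces the inputs (e.g. VGGish) is assumed external, and this is not the authors' evaluation code.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(emb_real, emb_gen):
    """FAD-style score from (num_clips, dim) arrays of audio embeddings."""
    mu_r, mu_g = emb_real.mean(0), emb_gen.mean(0)
    cov_r = np.cov(emb_real, rowvar=False)
    cov_g = np.cov(emb_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):  # discard tiny imaginary numerical residue
        covmean = covmean.real
    diff = mu_r - mu_g
    return diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean)

rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(200, 8)), rng.normal(1.0, 1.0, (200, 8))))
```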
|
arxiv
|
@article{purohit2024mimii-gen:,
title={MIMII-Gen: Generative Modeling Approach for Simulated Evaluation of
Anomalous Sound Detection System},
author={Harsh Purohit, Tomoya Nishida, Kota Dohi, Takashi Endo, and Yohei
Kawaguchi},
journal={arXiv preprint arXiv:2409.18542},
year={2024},
archivePrefix={arXiv},
eprint={2409.18542},
primaryClass={eess.AS cs.AI cs.SD}
}
|
purohit2024mimii-gen:
|
arxiv-662684
|
2409.18543
|
Reducing Semantic Ambiguity In Domain Adaptive Semantic Segmentation Via Probabilistic Prototypical Pixel Contrast
|
<|reference_start|>Reducing Semantic Ambiguity In Domain Adaptive Semantic Segmentation Via Probabilistic Prototypical Pixel Contrast: Domain adaptation aims to reduce the model degradation on the target domain caused by the domain shift between the source and target domains. Although encouraging performance has been achieved by combining cognitive learning with the self-training paradigm, such methods suffer from ambiguous scenarios caused by scale, illumination, or overlapping when deploying deterministic embeddings. To address these issues, we propose probabilistic prototypical pixel contrast (PPPC), a universal adaptation framework that models each pixel embedding as a probability distribution via a multivariate Gaussian, fully exploiting the uncertainty within it and ultimately improving the representation quality of the model. In addition, we derive prototypes from posterior probability estimation, which helps to push the decision boundary away from ambiguous points. Moreover, we employ an efficient method to compute similarity between distributions, eliminating the need for sampling and reparameterization, thereby significantly reducing computational overhead. Further, we dynamically select the ambiguous crops at the image level to enlarge the number of boundary points involved in contrastive learning, which benefits the establishment of precise distributions for each category. Extensive experimentation demonstrates that PPPC not only helps to address ambiguity at the pixel level, yielding discriminative representations, but also achieves significant improvements in both synthetic-to-real and day-to-night adaptation tasks. It surpasses the previous state-of-the-art (SOTA) by +5.2% mIoU in the most challenging daytime-to-nighttime adaptation scenario, exhibiting stronger generalization on other unseen datasets. The code and models are available at https://github.com/DarlingInTheSV/Probabilistic-Prototypical-Pixel-Contrast.<|reference_end|>
|
arxiv
|
@article{hao2024reducing,
title={Reducing Semantic Ambiguity In Domain Adaptive Semantic Segmentation Via
Probabilistic Prototypical Pixel Contrast},
author={Xiaoke Hao, Shiyu Liu, Chuanbo Feng, Ye Zhu},
journal={arXiv preprint arXiv:2409.18543},
year={2024},
doi={10.1016/j.neunet.2024.106806},
archivePrefix={arXiv},
eprint={2409.18543},
primaryClass={cs.CV}
}
|
hao2024reducing
|
arxiv-662685
|
2409.18544
|
Wasserstein Distance-Weighted Adversarial Network for Cross-Domain Credit Risk Assessment
|
<|reference_start|>Wasserstein Distance-Weighted Adversarial Network for Cross-Domain Credit Risk Assessment: This paper delves into the application of adversarial domain adaptation (ADA) for enhancing credit risk assessment in financial institutions. It addresses two critical challenges: the cold start problem, where historical lending data is scarce, and the data imbalance issue, where high-risk transactions are underrepresented. The paper introduces an improved ADA framework, the Wasserstein Distance Weighted Adversarial Domain Adaptation Network (WD-WADA), which leverages the Wasserstein distance to align source and target domains effectively. The proposed method includes an innovative weighted strategy to tackle data imbalance, adjusting for both the class distribution and the difficulty level of predictions. The paper demonstrates that WD-WADA not only mitigates the cold start problem but also provides a more accurate measure of domain differences, leading to improved cross-domain credit risk assessment. Extensive experiments on real-world credit datasets validate the model's effectiveness, showcasing superior performance in cross-domain learning, classification accuracy, and model stability compared to traditional methods.<|reference_end|>
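The Wasserstein-distance alignment referred to here follows the familiar adversarial recipe in which a critic estimates the distance between source and target feature distributions. Below is a generic WGAN-style critic objective with gradient penalty as one might write it in PyTorch; it is an illustrative sketch, and the paper's weighting strategy for class imbalance and prediction difficulty is deliberately omitted.

```python
import torch

def critic_loss(critic, feats_src, feats_tgt, gp_weight=10.0):
    """Kantorovich-Rubinstein estimate of W1 between feature batches, plus a gradient penalty."""
    # the critic is trained to maximise E[f(source)] - E[f(target)]
    w_est = critic(feats_src).mean() - critic(feats_tgt).mean()
    # gradient penalty on random interpolates keeps the critic near 1-Lipschitz
    eps = torch.rand(feats_src.size(0), 1, device=feats_src.device)
    mix = (eps * feats_src.detach() + (1 - eps) * feats_tgt.detach()).requires_grad_(True)
    grad = torch.autograd.grad(critic(mix).sum(), mix, create_graph=True)[0]
    gp = ((grad.norm(2, dim=1) - 1.0) ** 2).mean()
    return -w_est + gp_weight * gp  # minimised by the critic's optimiser
```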
|
arxiv
|
@article{jiang2024wasserstein,
title={Wasserstein Distance-Weighted Adversarial Network for Cross-Domain
Credit Risk Assessment},
author={Mohan Jiang, Jiating Lin, Hongju Ouyang, Jingming Pan, Siyuan Han,
Bingyao Liu},
journal={arXiv preprint arXiv:2409.18544},
year={2024},
archivePrefix={arXiv},
eprint={2409.18544},
primaryClass={cs.LG}
}
|
jiang2024wasserstein
|
arxiv-662686
|
2409.18545
|
An Epistemic Human-Aware Task Planner which Anticipates Human Beliefs and Decisions
|
<|reference_start|>An Epistemic Human-Aware Task Planner which Anticipates Human Beliefs and Decisions: We present a substantial extension of our Human-Aware Task Planning framework, tailored for scenarios with intermittent shared execution experiences and significant belief divergence between humans and robots, particularly due to the uncontrollable nature of humans. Our objective is to build a robot policy that accounts for uncontrollable human behaviors, thus enabling the anticipation of possible advancements achieved by the robot when the execution is not shared, e.g. when humans are briefly absent from the shared environment to complete a subtask. But, this anticipation is considered from the perspective of humans who have access to an estimated model for the robot. To this end, we propose a novel planning framework and build a solver based on AND-OR search, which integrates knowledge reasoning, including situation assessment by perspective taking. Our approach dynamically models and manages the expansion and contraction of potential advances while precisely keeping track of when (and when not) agents share the task execution experience. The planner systematically assesses the situation and ignores worlds that it has reason to think are impossible for humans. Overall, our new solver can estimate the distinct beliefs of the human and the robot along potential courses of action, enabling the synthesis of plans where the robot selects the right moment for communication, i.e. informing, or replying to an inquiry, or defers ontic actions until the execution experiences can be shared. Preliminary experiments in two domains, one novel and one adapted, demonstrate the effectiveness of the framework.<|reference_end|>
|
arxiv
|
@article{shekhar2024an,
title={An Epistemic Human-Aware Task Planner which Anticipates Human Beliefs
and Decisions},
author={Shashank Shekhar, Anthony Favier and Rachid Alami},
journal={arXiv preprint arXiv:2409.18545},
year={2024},
archivePrefix={arXiv},
eprint={2409.18545},
primaryClass={cs.RO cs.AI cs.HC}
}
|
shekhar2024an
|
arxiv-662687
|
2409.18548
|
Research on Predicting Public Opinion Event Heat Levels Based on Large Language Models
|
<|reference_start|>Research on Predicting Public Opinion Event Heat Levels Based on Large Language Models: In recent years, with the rapid development of large language models, several models such as GPT-4o have demonstrated extraordinary capabilities, surpassing human performance in various language tasks. As a result, many researchers have begun exploring their potential applications in the field of public opinion analysis. This study proposes a novel large-language-model-based method for public opinion event heat level prediction. First, we preprocessed and classified 62,836 Chinese hot events collected between July 2022 and December 2023. Then, based on each event's online dissemination heat index, we used the MiniBatchKMeans algorithm to automatically cluster the events and categorize them into four heat levels (ranging from low heat to very high heat). Next, we randomly selected 250 events from each heat level, totalling 1,000 events, to build the evaluation dataset. During the evaluation process, we employed various large language models to assess their accuracy in predicting event heat levels in two scenarios: without reference cases and with similar case references. The results showed that GPT-4o and DeepseekV2 performed the best in the latter case, achieving prediction accuracies of 41.4% and 41.5%, respectively. Although the overall prediction accuracy remains relatively low, it is worth noting that for low-heat (Level 1) events, the prediction accuracies of these two models reached 73.6% and 70.4%, respectively. Additionally, the prediction accuracy showed a downward trend from Level 1 to Level 4, which correlates with the uneven distribution of data across the heat levels in the actual dataset. This suggests that with a more robust dataset, public opinion event heat level prediction based on large language models will have significant research potential in the future.<|reference_end|>
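The clustering step described above is straightforward to reproduce in outline with scikit-learn. In the sketch below the heat-index values are random stand-ins, and mapping clusters to ordered levels by sorting their centers is an assumption about the intended procedure.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# stand-in for each event's online dissemination heat index
heat_index = np.random.default_rng(0).lognormal(2.0, 1.0, 5000).reshape(-1, 1)

km = MiniBatchKMeans(n_clusters=4, random_state=0).fit(heat_index)

# relabel clusters so level 1 = lowest-heat centre, ..., level 4 = highest
order = np.argsort(km.cluster_centers_.ravel())
level_of = {cluster: rank + 1 for rank, cluster in enumerate(order)}
levels = np.array([level_of[c] for c in km.labels_])
print(np.bincount(levels)[1:])  # number of events per heat level
```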
|
arxiv
|
@article{ren2024research,
title={Research on Predicting Public Opinion Event Heat Levels Based on Large
Language Models},
author={Yi Ren, Tianyi Zhang, Weibin Li, DuoMu Zhou, Chenhao Qin, FangCheng
Dong},
journal={arXiv preprint arXiv:2409.18548},
year={2024},
archivePrefix={arXiv},
eprint={2409.18548},
primaryClass={cs.CL cs.AI}
}
|
ren2024research
|
arxiv-662688
|
2409.18549
|
Ca$\Sigma$oS: A nonlinear sum-of-squares optimization suite
|
<|reference_start|>Ca$\Sigma$oS: A nonlinear sum-of-squares optimization suite: We present Ca$\Sigma$oS, the first MATLAB software specifically designed for nonlinear sum-of-squares optimization. A symbolic polynomial algebra system allows users to formulate parametrized sum-of-squares optimization problems and facilitates their fast, repeated evaluations. To that end, we make use of CasADi's symbolic framework and realize concepts of monomial sparsity, linear operators (including duals), and functions between polynomials. Ca$\Sigma$oS currently provides interfaces to the conic solvers SeDuMi, Mosek, and SCS as well as methods to solve quasiconvex optimization problems (via bisection) and nonconvex optimization problems (via sequential convexification). Numerical examples for benchmark problems including region-of-attraction and reachable set estimation for nonlinear dynamic systems demonstrate significant improvements in computation time compared to existing toolboxes. Ca$\Sigma$oS is available open-source at https://github.com/ifr-acso/casos.<|reference_end|>
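As background for the suite's core primitive: certifying that a polynomial is a sum of squares is equivalent to a semidefinite feasibility problem over a Gram matrix, which is why conic solvers such as SeDuMi, Mosek, and SCS appear as backends. Schematically (standard SOS background, not notation from the paper):

```latex
p(x) \ \text{is SOS}
\iff
\exists\, Q \succeq 0 \ \text{such that} \ p(x) = z(x)^{\mathsf T} Q\, z(x),
```

where $z(x)$ is a vector of monomials up to half the degree of $p$.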
|
arxiv
|
@article{cunis2024casigmaos,
title={Ca{\Sigma}oS: A nonlinear sum-of-squares optimization suite},
author={Torbj{\o}rn Cunis and Jan Olucak},
journal={arXiv preprint arXiv:2409.18549},
year={2024},
archivePrefix={arXiv},
eprint={2409.18549},
primaryClass={math.OC cs.SY eess.SY}
}
|
cunis2024casigmaos
|
arxiv-662689
|
2409.18553
|
Efficient Noise Mitigation for Enhancing Inference Accuracy in DNNs on Mixed-Signal Accelerators
|
<|reference_start|>Efficient Noise Mitigation for Enhancing Inference Accuracy in DNNs on Mixed-Signal Accelerators: In this paper, we propose a framework to enhance the robustness of neural models by mitigating the effects of process-induced and aging-related variations of analog computing components on the accuracy of analog neural networks. We model these variations as noise affecting the precision of the activations and introduce a denoising block inserted between selected layers of a pre-trained model. We demonstrate that training the denoising block significantly increases the model's robustness against various noise levels. To minimize the overhead associated with adding these blocks, we present an exploration algorithm to identify optimal insertion points for the denoising blocks. Additionally, we propose a specialized architecture to efficiently execute the denoising blocks, which can be integrated into mixed-signal accelerators. We evaluate the effectiveness of our approach using Deep Neural Network (DNN) models trained on the ImageNet and CIFAR-10 datasets. The results show that, on average, by accepting a 2.03% parameter-count overhead, the accuracy drop due to the variations is reduced from 31.7% to 1.15%.<|reference_end|>
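A minimal PyTorch sketch of the idea (our illustrative rendering under assumptions, not the paper's architecture; the residual structure and the additive-Gaussian variation model are stand-ins):

```python
import torch
import torch.nn as nn

class DenoisingBlock(nn.Module):
    """Small block inserted between layers to undo activation noise."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual correction of the noisy activation

def train_step(block: DenoisingBlock, clean_act: torch.Tensor,
               noise_std: float = 0.1) -> torch.Tensor:
    # Additive Gaussian noise as a stand-in model of analog variations.
    noisy = clean_act + noise_std * torch.randn_like(clean_act)
    return nn.functional.mse_loss(block(noisy), clean_act)
```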
|
arxiv
|
@article{azizi2024efficient,
title={Efficient Noise Mitigation for Enhancing Inference Accuracy in DNNs on
Mixed-Signal Accelerators},
author={Seyedarmin Azizi and Mohammad Erfan Sadeghi and Mehdi Kamal and Massoud Pedram},
journal={arXiv preprint arXiv:2409.18553},
year={2024},
archivePrefix={arXiv},
eprint={2409.18553},
primaryClass={cs.LG cs.AI cs.CV}
}
|
azizi2024efficient
|
arxiv-662690
|
2409.18556
|
CodeSCAN: ScreenCast ANalysis for Video Programming Tutorials
|
<|reference_start|>CodeSCAN: ScreenCast ANalysis for Video Programming Tutorials: Programming tutorials in the form of coding screencasts play a crucial role in programming education, serving both novices and experienced developers. However, the video format of these tutorials presents a challenge due to the difficulty of searching for and within videos. Addressing the absence of large-scale and diverse datasets for screencast analysis, we introduce the CodeSCAN dataset. It comprises 12,000 screenshots captured from the Visual Studio Code environment during development, featuring 24 programming languages, 25 fonts, and over 90 distinct themes, in addition to diverse layout changes and realistic user interactions. Moreover, we conduct detailed quantitative and qualitative evaluations to benchmark the performance of Integrated Development Environment (IDE) element detection, color-to-black-and-white conversion, and Optical Character Recognition (OCR). We hope that our contributions facilitate more research in coding screencast analysis, and we make the source code for creating the dataset and the benchmark publicly available on this website.<|reference_end|>
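As a hedged example of how the OCR dimension of such a benchmark can be scored (a generic metric, not the dataset's official evaluation script), the character error rate is the edit distance between predicted and ground-truth text, normalized by the ground-truth length:

```python
def cer(pred: str, truth: str) -> float:
    """Character error rate via single-row Levenshtein distance."""
    m, n = len(pred), len(truth)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            prev, d[j] = d[j], min(d[j] + 1,        # deletion
                                   d[j - 1] + 1,    # insertion
                                   prev + (pred[i - 1] != truth[j - 1]))
    return d[n] / max(n, 1)

print(cer("pritn(x)", "print(x)"))  # 0.25 (two substitutions over 8 chars)
```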
|
arxiv
|
@article{naumann2024codescan:,
title={CodeSCAN: ScreenCast ANalysis for Video Programming Tutorials},
author={Alexander Naumann and Felix Hertlein and Jacqueline H{\"o}llig and
Lucas Cazzonelli and Steffen Thoma},
journal={arXiv preprint arXiv:2409.18556},
year={2024},
archivePrefix={arXiv},
eprint={2409.18556},
primaryClass={cs.LG cs.CV}
}
|
naumann2024codescan:
|
arxiv-662691
|
2409.18557
|
Balanced Splitting: A Framework for Achieving Zero-wait in the Multiserver-job Model
|
<|reference_start|>Balanced Splitting: A Framework for Achieving Zero-wait in the Multiserver-job Model: We present a new framework for designing nonpreemptive and job-size oblivious scheduling policies in the multiserver-job queueing model. The main requirement is to identify a static and balanced sub-partition of the server set and ensure that the servers in each set of that sub-partition can only handle jobs of a given class and in a first-come first-served order. A job class is determined by the number of servers to which it has exclusive access during its entire execution and the probability distribution of its service time. This approach aims to reduce delays by preventing small jobs from being blocked by larger ones that arrived first, and it is particularly beneficial when the job-size variability within classes is small and the variability across classes is large. In this setting, we propose a new scheduling policy, Balanced-Splitting. We provide a sufficient condition for the stability of Balanced-Splitting and show that the resulting queueing probability, i.e., the probability that an arriving job needs to wait for processing upon arrival, vanishes in both the subcritical (the load is kept fixed to a constant less than one) and critical (the load approaches one from below) many-server limiting regimes. Crucial to our analysis is a connection with the M/GI/s/s queue and Erlang's loss formula, which allows our analysis to rely on fundamental results from queueing theory. Numerical simulations show that the proposed policy performs better than several preemptive/nonpreemptive size-aware/oblivious policies in various practical scenarios. This is also confirmed by simulations running on real traces from High Performance Computing (HPC) workloads. The delays induced by Balanced-Splitting are also competitive with those induced by state-of-the-art policies such as First-Fit-SRPT and ServerFilling-SRPT, though our approach has the advantage of not requiring preemption, nor the knowledge of job sizes.<|reference_end|>
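Erlang's loss formula mentioned above is easy to evaluate; this minimal sketch (the standard recursion, not code from the paper) gives the blocking probability of an M/GI/s/s system with s servers and offered load a:

```python
def erlang_b(s: int, a: float) -> float:
    """Erlang B: probability an arrival finds all s servers busy.
    Uses the numerically stable recursion B(k) = a*B(k-1)/(k + a*B(k-1))."""
    b = 1.0
    for k in range(1, s + 1):
        b = (a * b) / (k + a * b)
    return b

# Example: with s = 10 servers and offered load a = 8, about 12% of
# arrivals are blocked in the loss system.
print(erlang_b(10, 8.0))  # ~0.122
```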
|
arxiv
|
@article{anselmi2024balanced,
title={Balanced Splitting: A Framework for Achieving Zero-wait in the
Multiserver-job Model},
author={Jonatha Anselmi and Josu Doncel},
journal={arXiv preprint arXiv:2409.18557},
year={2024},
archivePrefix={arXiv},
eprint={2409.18557},
primaryClass={cs.PF}
}
|
anselmi2024balanced
|
arxiv-662692
|
2409.18558
|
XWSB: A Blend System Utilizing XLS-R and WavLM with SLS Classifier detection system for SVDD 2024 Challenge
|
<|reference_start|>XWSB: A Blend System Utilizing XLS-R and WavLM with SLS Classifier detection system for SVDD 2024 Challenge: This paper introduces the model structure used in the SVDD 2024 Challenge, which was held this year for the first time. Singing voice deepfake detection (SVDD) faces complexities due to informal speech intonations and varying speech rates. In this paper, we propose the XWSB system, which achieved SOTA performance in the SVDD challenge. XWSB stands for XLS-R, WavLM, and SLS Blend, representing the integration of these technologies for the purpose of SVDD. Specifically, we used the best-performing model structure XLS-R&SLS from the ASVspoof DF dataset, and applied SLS to WavLM to form the WavLM&SLS structure. Finally, we integrated the two models to form the XWSB system. Experimental results show that our system demonstrates advanced recognition capabilities in the SVDD challenge, specifically achieving an EER of 2.32% in the CtrSVDD track. The code and data can be found at https://github.com/QiShanZhang/XWSB_for_SVDD2024.<|reference_end|>
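A hedged sketch of the blending step (the fusion rule and weight here are our assumptions for illustration; the abstract only states that the two subsystems are integrated into one system):

```python
import numpy as np

def blend_scores(scores_xlsr_sls: np.ndarray,
                 scores_wavlm_sls: np.ndarray,
                 w: float = 0.5) -> np.ndarray:
    """Fuse per-utterance detection scores from the two subsystems
    by a weighted average; w would be tuned on a development set."""
    return w * scores_xlsr_sls + (1.0 - w) * scores_wavlm_sls
```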
|
arxiv
|
@article{zhang2024xwsb:,
title={XWSB: A Blend System Utilizing XLS-R and WavLM with SLS Classifier
detection system for SVDD 2024 Challenge},
author={Qishan Zhang and Shuangbing Wen and Fangke Yan and Tao Hu and Jun Li},
journal={IEEE Spoken Language Technology Workshop 2024},
year={2024},
archivePrefix={arXiv},
eprint={2409.18558},
primaryClass={cs.SD eess.AS}
}
|
zhang2024xwsb:
|
arxiv-662693
|
2409.18561
|
AL-GTD: Deep Active Learning for Gaze Target Detection
|
<|reference_start|>AL-GTD: Deep Active Learning for Gaze Target Detection: Gaze target detection aims at determining the image location where a person is looking. While existing studies have made significant progress in this area by regressing accurate gaze heatmaps, these achievements have largely relied on access to extensive labeled datasets, which demands substantial human labor. In this paper, our goal is to reduce the reliance on the size of labeled training data for gaze target detection. To achieve this, we propose AL-GTD, an innovative approach that integrates supervised and self-supervised losses within a novel sample acquisition function to perform active learning (AL). Additionally, it utilizes pseudo-labeling to mitigate distribution shifts during the training phase. AL-GTD achieves the best AUC results of all methods while utilizing only 40-50% of the training data, in contrast to state-of-the-art (SOTA) gaze target detectors, which require the entire training dataset to achieve the same performance. Importantly, AL-GTD quickly reaches satisfactory performance with 10-20% of the training data, showing the effectiveness of our acquisition function, which is able to acquire the most informative samples. We provide a comprehensive experimental analysis by adapting several AL methods for the task. AL-GTD outperforms AL competitors, simultaneously exhibiting superior performance compared to SOTA gaze target detectors when all are trained within a low-data regime. Code is available at https://github.com/francescotonini/al-gtd.<|reference_end|>
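An illustrative skeleton of the acquisition step (an assumed form for exposition, not the released AL-GTD code): score each unlabeled sample by mixing a supervised and a self-supervised loss term, then send the top-k samples for labeling:

```python
import numpy as np

def select_batch(sup_loss: np.ndarray, ssl_loss: np.ndarray,
                 k: int, alpha: float = 0.5) -> np.ndarray:
    """sup_loss, ssl_loss: per-sample loss estimates on the unlabeled pool.
    Returns indices of the k samples with the highest acquisition score."""
    acquisition = alpha * sup_loss + (1 - alpha) * ssl_loss
    return np.argsort(acquisition)[-k:]

pool_sup = np.random.rand(1000)   # stand-in per-sample supervised losses
pool_ssl = np.random.rand(1000)   # stand-in self-supervised losses
to_label = select_batch(pool_sup, pool_ssl, k=100)
```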
|
arxiv
|
@article{tonini2024al-gtd:,
title={AL-GTD: Deep Active Learning for Gaze Target Detection},
author={Francesco Tonini and Nicola Dall'Asen and Lorenzo Vaquero and Cigdem
Beyan and Elisa Ricci},
journal={arXiv preprint arXiv:2409.18561},
year={2024},
doi={10.1145/3664647.3680952},
archivePrefix={arXiv},
eprint={2409.18561},
primaryClass={cs.CV}
}
|
tonini2024al-gtd:
|
arxiv-662694
|
2409.18563
|
Revisiting Weighted Information Extraction: A Simpler and Faster Algorithm for Ranked Enumeration
|
<|reference_start|>Revisiting Weighted Information Extraction: A Simpler and Faster Algorithm for Ranked Enumeration: Information extraction from textual data, where the query is represented by a finite transducer and the task is to enumerate all results without repetition, and its extension to the weighted case, where each output element has a weight and the output elements are to be enumerated sorted by their weights, are important and well studied problems in database theory. On the one hand, the first framework already covers the well-known case of regular document spanners, while the latter setting covers several practically relevant tasks that cannot be described in the unweighted setting. It is known that in the unweighted case this problem can be solved with linear time preprocessing $O(|\mathsf{D}|)$ and output-linear delay $O(|s|)$ in data complexity, where $\mathsf{D}$ is the input data and $s$ is the current output element. For the weighted case, Bourhis, Grez, Jachiet, and Riveros [ICDT 2021] recently designed an algorithm with linear time preprocessing, but the delay of $O(|s| \cdot \log|\mathsf{D}|)$ depends on the size of the data. We first show how to leverage the existing results on enumerating shortest paths to obtain a simple alternative algorithm with linear preprocessing and a delay of $O(|s_i| + \min\{ \log i, \log|\mathsf{D}|\})$ for the $i^{\text{th}}$ output element $s_i$ (in data complexity); thus, substantially improving the previous algorithm. Next, we develop a technically involved rounding technique that allows us to devise an algorithm with linear time preprocessing and output-linear delay $O(|s|)$ with high probability. To this end, we combine tools from algebra, high-dimensional geometry, and linear programming.<|reference_end|>
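To fix intuition only (a toy best-first enumeration over an acyclic weighted automaton; it achieves none of the paper's preprocessing or delay guarantees), outputs can be produced in order of increasing weight with a heap:

```python
import heapq

def enumerate_ranked(transitions, start, final, limit=10):
    """transitions: dict state -> list of (weight, symbol, next_state);
    assumes an acyclic automaton with nonnegative weights."""
    heap = [(0.0, "", start)]
    while heap and limit > 0:
        w, out, q = heapq.heappop(heap)  # lightest partial run first
        if q in final:
            yield w, out
            limit -= 1
        for dw, sym, nxt in transitions.get(q, []):
            heapq.heappush(heap, (w + dw, out + sym, nxt))

# Tiny example: two outputs, yielded lightest-first.
t = {0: [(1.0, "a", 1), (2.0, "b", 1)]}
print(list(enumerate_ranked(t, 0, {1})))  # [(1.0, 'a'), (2.0, 'b')]
```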
|
arxiv
|
@article{gawrychowski2024revisiting,
title={Revisiting Weighted Information Extraction: A Simpler and Faster
Algorithm for Ranked Enumeration},
author={Pawel Gawrychowski and Florin Manea and Markus L. Schmid},
journal={arXiv preprint arXiv:2409.18563},
year={2024},
archivePrefix={arXiv},
eprint={2409.18563},
primaryClass={cs.DS cs.DB cs.FL}
}
|
gawrychowski2024revisiting
|
arxiv-662695
|
2409.18564
|
The IEEE-IS2 2024 Music Packet Loss Concealment Challenge
|
<|reference_start|>The IEEE-IS2 2024 Music Packet Loss Concealment Challenge: We present the IEEE-IS2 2024 Music Packet Loss Concealment Challenge. We begin by detailing the challenge rules, followed by an overview of the provided baseline system, the blind test set, and the evaluation methodology used to determine the final ranking. This inaugural edition aimed to foster collaboration between researchers and practitioners from the fields of signal processing, machine learning, and networked music performance, while also laying the groundwork for future advancements in packet loss concealment for music signals.<|reference_end|>
|
arxiv
|
@article{mezza2024the,
title={The IEEE-IS2 2024 Music Packet Loss Concealment Challenge},
author={Alessandro Ilic Mezza and Alberto Bernardini},
journal={arXiv preprint arXiv:2409.18564},
year={2024},
archivePrefix={arXiv},
eprint={2409.18564},
primaryClass={eess.AS cs.SD}
}
|
mezza2024the
|
arxiv-662696
|
2409.18565
|
Harmonizing knowledge Transfer in Neural Network with Unified Distillation
|
<|reference_start|>Harmonizing knowledge Transfer in Neural Network with Unified Distillation: Knowledge distillation (KD), known for its ability to transfer knowledge from a cumbersome network (teacher) to a lightweight one (student) without altering the architecture, has been garnering increasing attention. Two primary categories emerge within KD methods: feature-based, focusing on intermediate layers' features, and logits-based, targeting the final layer's logits. This paper introduces a novel perspective by leveraging diverse knowledge sources within a unified KD framework. Specifically, we aggregate features from intermediate layers into a comprehensive representation, effectively gathering semantic information from different stages and scales. Subsequently, we predict the distribution parameters from this representation. These steps transform knowledge from the intermediate layers into corresponding distributive forms, thereby allowing for knowledge distillation through a unified distribution constraint at different stages of the network, ensuring the comprehensiveness and coherence of knowledge transfer. Numerous experiments were conducted to validate the effectiveness of the proposed method.<|reference_end|>
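A minimal sketch of the unified scheme under assumptions (a Gaussian distributional form and mean-pooled stage features; the paper does not publish this exact code):

```python
import torch
import torch.nn as nn

class DistributionHead(nn.Module):
    """Aggregate features from several intermediate stages into one
    representation, then predict diagonal-Gaussian parameters from it."""
    def __init__(self, stage_dims, hidden=256):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, hidden) for d in stage_dims])
        self.mu = nn.Linear(hidden, hidden)
        self.logvar = nn.Linear(hidden, hidden)

    def forward(self, feats):
        # feats: list of (B, C_i, H_i, W_i) stage outputs -> pooled and summed
        z = sum(p(f.mean(dim=(2, 3))) for p, f in zip(self.proj, feats))
        return self.mu(z), self.logvar(z)

def kd_loss(mu_s, logvar_s, mu_t, logvar_t):
    """KL(student || teacher) between diagonal Gaussians, batch-averaged."""
    var_s, var_t = logvar_s.exp(), logvar_t.exp()
    kl = 0.5 * (logvar_t - logvar_s + (var_s + (mu_s - mu_t) ** 2) / var_t - 1)
    return kl.sum(dim=1).mean()
```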
|
arxiv
|
@article{huang2024harmonizing,
title={Harmonizing knowledge Transfer in Neural Network with Unified
Distillation},
author={Yaomin Huang and Zaomin Yan and Chaomin Shen and Faming Fang and Guixu Zhang},
journal={arXiv preprint arXiv:2409.18565},
year={2024},
archivePrefix={arXiv},
eprint={2409.18565},
primaryClass={cs.CV}
}
|
huang2024harmonizing
|
arxiv-662697
|
2409.18566
|
Optimizing DNN Inference on Multi-Accelerator SoCs at Training-time
|
<|reference_start|>Optimizing DNN Inference on Multi-Accelerator SoCs at Training-time: The demand for executing Deep Neural Networks (DNNs) with low latency and minimal power consumption at the edge has led to the development of advanced heterogeneous Systems-on-Chips (SoCs) that incorporate multiple specialized computing units (CUs), such as accelerators. Offloading DNN computations to a specific CU from the available set often exposes accuracy vs efficiency trade-offs, due to differences in their supported operations (e.g., standard vs. depthwise convolution) or data representations (e.g., more/less aggressively quantized). A challenging yet unresolved issue is how to map a DNN onto these multi-CU systems to maximally exploit the parallelization possibilities while taking accuracy into account. To address this problem, we present ODiMO, a hardware-aware tool that efficiently explores fine-grain mapping of DNNs among various on-chip CUs, during the training phase. ODiMO strategically splits individual layers of the neural network and executes them in parallel on the multiple available CUs, aiming to balance the total inference energy consumption or latency with the resulting accuracy, impacted by the unique features of the different hardware units. We test our approach on CIFAR-10, CIFAR-100, and ImageNet, targeting two open-source heterogeneous SoCs, i.e., DIANA and Darkside. We obtain a rich collection of Pareto-optimal networks in the accuracy vs. energy or latency space. We show that ODiMO reduces the latency of a DNN executed on the Darkside SoC by up to 8x at iso-accuracy, compared to manual heuristic mappings. When targeting energy, on the same SoC, ODiMO produced up to 50.8x more efficient mappings, with minimal accuracy drop (< 0.3%).<|reference_end|>
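A conceptual sketch of the kind of fine-grain, training-time split that ODiMO searches over (our simplification; the per-branch precisions are hypothetical, and the real tool's mapping search is hardware-aware):

```python
import torch
import torch.nn as nn

class SplitConv(nn.Module):
    """Split one conv layer's output channels across two compute units
    (e.g., a digital CU and an analog CU), then concatenate the results."""
    def __init__(self, in_ch: int, out_ch: int, split: float = 0.5):
        super().__init__()
        c0 = int(out_ch * split)
        self.cu0 = nn.Conv2d(in_ch, c0, 3, padding=1)           # e.g. 8-bit CU
        self.cu1 = nn.Conv2d(in_ch, out_ch - c0, 3, padding=1)  # e.g. analog CU

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The two branches could run in parallel on their respective CUs.
        return torch.cat([self.cu0(x), self.cu1(x)], dim=1)
```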
|
arxiv
|
@article{risso2024optimizing,
title={Optimizing DNN Inference on Multi-Accelerator SoCs at Training-time},
author={Matteo Risso and Alessio Burrello and Daniele Jahier Pagliari},
journal={arXiv preprint arXiv:2409.18566},
year={2024},
archivePrefix={arXiv},
eprint={2409.18566},
primaryClass={cs.LG}
}
|
risso2024optimizing
|
arxiv-662698
|
2409.18568
|
Experimental Evaluation of Machine Learning Models for Goal-oriented Customer Service Chatbot with Pipeline Architecture
|
<|reference_start|>Experimental Evaluation of Machine Learning Models for Goal-oriented Customer Service Chatbot with Pipeline Architecture: Integrating machine learning (ML) into customer service chatbots enhances their ability to understand and respond to user queries, ultimately improving service performance. However, they may appear artificial to some users, affecting the customer experience. Hence, meticulous evaluation of ML models for each pipeline component is crucial for optimizing performance, though differences in functionalities can lead to unfair comparisons. In this paper, we present a tailored experimental evaluation approach for goal-oriented customer service chatbots with pipeline architecture, focusing on three key components: Natural Language Understanding (NLU), dialogue management (DM), and Natural Language Generation (NLG). Our methodology emphasizes individual assessment to determine optimal ML models. Specifically, we focus on optimizing hyperparameters and evaluating candidate models for NLU (utilizing BERT and LSTM), DM (employing DQN and DDQN), and NLG (leveraging GPT-2 and DialoGPT). The results show that for the NLU component, BERT excelled in intent detection whereas LSTM was superior for slot filling. For the DM component, the DDQN model outperformed DQN by achieving fewer turns, higher rewards, as well as greater success rates. For NLG, the large language model GPT-2 surpassed DialoGPT in BLEU, METEOR, and ROUGE metrics. These findings aim to provide a benchmark for future research in developing and optimizing customer service chatbots, offering valuable insights into model performance and optimal hyperparameters.<|reference_end|>
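The pipeline wiring itself is simple; a stub-level sketch (interfaces here are hypothetical, while the component choices follow the paper's candidates):

```python
class ChatbotPipeline:
    """Goal-oriented pipeline: NLU -> dialogue management -> NLG."""
    def __init__(self, nlu, dm, nlg):
        # e.g. nlu backed by BERT, dm by a DDQN policy, nlg by GPT-2
        self.nlu, self.dm, self.nlg = nlu, dm, nlg

    def respond(self, utterance: str, state: dict) -> str:
        intent, slots = self.nlu(utterance)            # intent + slot filling
        action, state = self.dm(intent, slots, state)  # dialogue policy step
        return self.nlg(action, state)                 # surface realization
```

Evaluating each stage in isolation, as the paper argues, amounts to swapping one of the three callables while holding the other two fixed.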
|
arxiv
|
@article{isa2024experimental,
title={Experimental Evaluation of Machine Learning Models for Goal-oriented
Customer Service Chatbot with Pipeline Architecture},
author={Nurul Ain Nabilah Mohd Isa and Siti Nuraishah Agos Jawaddi and Azlan
Ismail},
journal={arXiv preprint arXiv:2409.18568},
year={2024},
archivePrefix={arXiv},
eprint={2409.18568},
primaryClass={cs.AI cs.LG cs.NE}
}
|
isa2024experimental
|
arxiv-662699
|
2409.18569
|
Cross-video Identity Correlating for Person Re-identification Pre-training
|
<|reference_start|>Cross-video Identity Correlating for Person Re-identification Pre-training: Recent research has proven that pre-training on large-scale person images extracted from internet videos is an effective way of learning better representations for person re-identification. However, these studies are mostly confined to pre-training at the instance-level or single-video tracklet-level. They ignore the identity-invariance in images of the same person across different videos, which is a key focus in person re-identification. To address this issue, we propose a Cross-video Identity-cOrrelating pre-traiNing (CION) framework. Defining a noise concept that comprehensively considers both intra-identity consistency and inter-identity discrimination, CION seeks the identity correlation from cross-video images by modeling it as a progressive multi-level denoising problem. Furthermore, an identity-guided self-distillation loss is proposed to implement better large-scale pre-training by mining the identity-invariance within person images. We conduct extensive experiments to verify the superiority of our CION in terms of efficiency and performance. CION achieves significantly leading performance with even fewer training samples. For example, compared with the previous state-of-the-art~\cite{ISR}, CION with the same ResNet50-IBN achieves higher mAP of 93.3\% and 74.3\% on Market1501 and MSMT17, while only utilizing 8\% training samples. Finally, with CION demonstrating superior model-agnostic ability, we contribute a model zoo named ReIDZoo to meet diverse research and application needs in this field. It contains a series of CION pre-trained models spanning a range of structures and parameters, totaling 32 models with 10 different structures, including GhostNet, ConvNext, RepViT, FastViT and so on. The code and models will be made publicly available at https://github.com/Zplusdragon/CION_ReIDZoo.<|reference_end|>
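A hedged reading of the identity-guided self-distillation term (our paraphrase for illustration, not the released CION code): pull a student embedding toward the teacher embedding of a different image of the same identity, so the invariance being mined is the cross-video one:

```python
import torch
import torch.nn.functional as F

def id_self_distill_loss(student_emb: torch.Tensor,
                         teacher_emb: torch.Tensor,
                         same_id_index: torch.Tensor) -> torch.Tensor:
    """teacher_emb[same_id_index[i]] is assumed to embed a cross-video
    image of the same identity as student_emb[i]."""
    target = teacher_emb[same_id_index].detach()  # teacher is not updated here
    return (1 - F.cosine_similarity(student_emb, target, dim=1)).mean()
```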
|
arxiv
|
@article{zuo2024cross-video,
title={Cross-video Identity Correlating for Person Re-identification
Pre-training},
author={Jialong Zuo and Ying Nie and Hanyu Zhou and Huaxin Zhang and Haoyu
Wang and Tianyu Guo and Nong Sang and Changxin Gao},
journal={arXiv preprint arXiv:2409.18569},
year={2024},
archivePrefix={arXiv},
eprint={2409.18569},
primaryClass={cs.CV}
}
|
zuo2024cross-video
|
arxiv-662700
|
2409.18572
|
Towards an active-learning approach to resource allocation for population-based damage prognosis
|
<|reference_start|>Towards an active-learning approach to resource allocation for population-based damage prognosis: Damage prognosis is, arguably, one of the most difficult tasks of structural health monitoring (SHM). To address common problems of damage prognosis, a population-based SHM (PBSHM) approach is adopted in the current work. In this approach the prognosis problem is considered as an information-sharing problem where data from past structures are exploited to make more accurate inferences regarding currently-degrading structures. For a given population, there may exist restrictions on the resources available to conduct monitoring; thus, the current work studies the problem of allocating such resources within a population of degrading structures with a view to maximising the damage-prognosis accuracy. The challenges of the current framework are mainly associated with the inference of outliers on the level of damage evolution, given partial data from the damage-evolution phenomenon. The current approach considers an initial population of structures for which damage evolution is extensively observed. Subsequently, a second population of structures with evolving damage is considered for which two monitoring systems are available, a low-availability and high-fidelity (low-uncertainty) one, and a widely-available and low-fidelity (high-uncertainty) one. The task of the current work is to follow an active-learning approach to identify the structures to which the high-fidelity system should be assigned in order to enhance the predictive capabilities of the machine-learning model throughout the population.<|reference_end|>
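A minimal sketch of the allocation rule implied above (a Gaussian-process regressor stands in for "the machine-learning model"; all names are hypothetical): assign the scarce high-fidelity systems to the structures with the largest predictive uncertainty:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def allocate_high_fidelity(model: GaussianProcessRegressor,
                           candidate_inputs: np.ndarray,
                           n_systems: int) -> np.ndarray:
    """Return indices of the structures whose damage-evolution
    predictions are most uncertain."""
    _, std = model.predict(candidate_inputs, return_std=True)
    return np.argsort(std)[-n_systems:]

# Stand-in usage: fit on the extensively observed initial population,
# then rank the second population by predictive uncertainty.
rng = np.random.default_rng(0)
X_old, y_old = rng.random((50, 3)), rng.random(50)
gp = GaussianProcessRegressor().fit(X_old, y_old)
chosen = allocate_high_fidelity(gp, rng.random((20, 3)), n_systems=5)
```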
|
arxiv
|
@article{tsialiamanis2024towards,
title={Towards an active-learning approach to resource allocation for
population-based damage prognosis},
author={George Tsialiamanis and Keith Worden and Nikolaos Dervilis and Aidan J Hughes},
journal={arXiv preprint arXiv:2409.18572},
year={2024},
archivePrefix={arXiv},
eprint={2409.18572},
primaryClass={cs.LG}
}
|
tsialiamanis2024towards
|