corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-661101
|
2409.15594
|
Beyond Turn-Based Interfaces: Synchronous LLMs as Full-Duplex Dialogue Agents
|
<|reference_start|>Beyond Turn-Based Interfaces: Synchronous LLMs as Full-Duplex Dialogue Agents: Despite broad interest in modeling spoken dialogue agents, most approaches are inherently "half-duplex" -- restricted to turn-based interaction with responses requiring explicit prompting by the user or implicit tracking of interruption or silence events. Human dialogue, by contrast, is "full-duplex" allowing for rich synchronicity in the form of quick and dynamic turn-taking, overlapping speech, and backchanneling. Technically, the challenge of achieving full-duplex dialogue with LLMs lies in modeling synchrony as pre-trained LLMs do not have a sense of "time". To bridge this gap, we propose Synchronous LLMs for full-duplex spoken dialogue modeling. We design a novel mechanism to integrate time information into Llama3-8b so that it runs synchronously with the real-world clock. We also introduce a training recipe that uses 212k hours of synthetic spoken dialogue data generated from text dialogue data to create a model that generates meaningful and natural spoken dialogue, with just 2k hours of real-world spoken dialogue data. Synchronous LLMs outperform state-of-the-art in dialogue meaningfulness while maintaining naturalness. Finally, we demonstrate the model's ability to participate in full-duplex dialogue by simulating interaction between two agents trained on different datasets, while considering Internet-scale latencies of up to 240 ms. Webpage: https://syncllm.cs.washington.edu/.<|reference_end|>
|
arxiv
|
@article{veluri2024beyond,
title={Beyond Turn-Based Interfaces: Synchronous LLMs as Full-Duplex Dialogue
Agents},
author={Bandhav Veluri and Benjamin N Peloquin and Bokai Yu and Hongyu Gong and Shyamnath Gollakota},
journal={arXiv preprint arXiv:2409.15594},
year={2024},
archivePrefix={arXiv},
eprint={2409.15594},
primaryClass={cs.CL cs.LG cs.SD eess.AS}
}
|
veluri2024beyond
|
arxiv-661102
|
2409.15595
|
Physics Enhanced Residual Policy Learning (PERPL) for safety cruising in mixed traffic platooning under actuator and communication delay
|
<|reference_start|>Physics Enhanced Residual Policy Learning (PERPL) for safety cruising in mixed traffic platooning under actuator and communication delay: Linear control models have gained extensive application in vehicle control due to their simplicity, ease of use, and support for stability analysis. However, these models lack adaptability to the changing environment and multi-objective settings. Reinforcement learning (RL) models, on the other hand, offer adaptability but suffer from a lack of interpretability and generalization capabilities. This paper aims to develop a family of RL-based controllers enhanced by physics-informed policies, leveraging the advantages of both physics-based models (data-efficient and interpretable) and RL methods (flexible to multiple objectives and fast computing). We propose the Physics-Enhanced Residual Policy Learning (PERPL) framework, where the physics component provides model interpretability and stability. The learning-based Residual Policy adjusts the physics-based policy to adapt to the changing environment, thereby refining the decisions of the physics model. We apply our proposed model to the decentralized control of a mixed traffic platoon of Connected and Automated Vehicles (CAVs) and Human-driven Vehicles (HVs) using a constant time gap (CTG) strategy for cruising and incorporating actuator and communication delays. Experimental results demonstrate that our method achieves smaller headway errors and better oscillation dampening than linear models and RL alone in scenarios with artificially extreme conditions and real preceding vehicle trajectories. At the macroscopic level, overall traffic oscillations are also reduced as the penetration rate of CAVs employing the PERPL scheme increases.<|reference_end|>
|
arxiv
|
@article{long2024physics,
title={Physics Enhanced Residual Policy Learning (PERPL) for safety cruising in
mixed traffic platooning under actuator and communication delay},
author={Keke Long and Haotian Shi and Yang Zhou and Xiaopeng Li},
journal={arXiv preprint arXiv:2409.15595},
year={2024},
archivePrefix={arXiv},
eprint={2409.15595},
primaryClass={cs.AI eess.SP}
}
|
long2024physics
|
arxiv-661103
|
2409.15600
|
Polyatomic Complexes: A topologically-informed learning representation for atomistic systems
|
<|reference_start|>Polyatomic Complexes: A topologically-informed learning representation for atomistic systems: Developing robust representations of chemical structures that enable models to learn topological inductive biases is challenging. In this manuscript, we present a representation of atomistic systems. We begin by proving that our representation satisfies all structural, geometric, efficiency, and generalizability constraints. Afterward, we provide a general algorithm to encode any atomistic system. Finally, we report performance comparable to state-of-the-art methods on numerous tasks. We open-source all code and datasets. The code and data are available at https://github.com/rahulkhorana/PolyatomicComplexes.<|reference_end|>
|
arxiv
|
@article{khorana2024polyatomic,
title={Polyatomic Complexes: A topologically-informed learning representation
for atomistic systems},
author={Rahul Khorana and Marcus Noack and Jin Qian},
journal={arXiv preprint arXiv:2409.15600},
year={2024},
archivePrefix={arXiv},
eprint={2409.15600},
primaryClass={cs.LG physics.comp-ph}
}
|
khorana2024polyatomic
|
arxiv-661104
|
2409.15602
|
Assessment of Submillimeter Precision via Structure from Motion Technique in Close-Range Capture Environments
|
<|reference_start|>Assessment of Submillimeter Precision via Structure from Motion Technique in Close-Range Capture Environments: Creating 3D models through the Structure from Motion technique is a recognized, efficient, cost-effective structural monitoring strategy. This technique is applied in several engineering fields, particularly for creating models of large structures from photographs taken a few tens of meters away. However, discussions about its usability and the procedures for conducting laboratory analysis, such as structural tests, are rarely addressed. This study investigates the potential of the SfM method to create submillimeter-quality models for structural tests, with short-distance captures. A series of experiments was carried out, with photographic captures at a 1-meter distance, using different quality settings: camera calibration model, Scale Bars dispersion, overlapping rates, and the use of vertical and oblique images. Employing a calibration model with images taken over a test board and a set of Scale Bars (SB) appropriately distributed over the test area, an overlap rate of 80 percent, and the integration of vertical and oblique images, RMSE values of approximately 0.1 mm were obtained. This result indicates the potential application of the technique for 3D modeling with submillimeter positional quality, as required for structural tests in laboratory environments.<|reference_end|>
|
arxiv
|
@article{demoraes2024assessment,
title={Assessment of Submillimeter Precision via Structure from Motion
Technique in Close-Range Capture Environments},
author={Francisco Roza de Moraes and Irineu da Silva},
journal={arXiv preprint arXiv:2409.15602},
year={2024},
archivePrefix={arXiv},
eprint={2409.15602},
primaryClass={cs.CV}
}
|
demoraes2024assessment
|
arxiv-661105
|
2409.15603
|
A revisit on well-posedness of a boundary value problem of a stationary advection equation without the separation condition
|
<|reference_start|>A revisit on well-posedness of a boundary value problem of a stationary advection equation without the separation condition: We consider a boundary value problem of a stationary advection equation in a bounded domain with Lipschitz boundary. It is known to be well-posed in $L^p$-based function spaces for $1 < p < \infty$ under the separation condition of the inflow and the outflow boundaries. In this article, we provide another sufficient condition for the well-posedness with $1 \leq p \leq \infty$.<|reference_end|>
|
arxiv
|
@article{imagawa2024a,
title={A revisit on well-posedness of a boundary value problem of a stationary
advection equation without the separation condition},
author={Masaki Imagawa and Daisuke Kawagoe},
journal={arXiv preprint arXiv:2409.15603},
year={2024},
archivePrefix={arXiv},
eprint={2409.15603},
primaryClass={math.AP cs.NA math.NA}
}
|
imagawa2024a
|
arxiv-661106
|
2409.15604
|
Persona-L has Entered the Chat: Leveraging LLM and Ability-based Framework for Personas of People with Complex Needs
|
<|reference_start|>Persona-L has Entered the Chat: Leveraging LLM and Ability-based Framework for Personas of People with Complex Needs: We present Persona-L, a novel approach for creating personas using Large Language Models (LLMs) and an ability-based framework, specifically designed to improve the representation of users with complex needs. Traditional methods of persona creation often fall short of accurately depicting the dynamic and diverse nature of complex needs, resulting in oversimplified or stereotypical profiles. Persona-L enables users to create and interact with personas through a chat interface. Persona-L was evaluated through interviews with UX designers (N=6), where we examined its effectiveness in reflecting the complexities of lived experiences of people with complex needs. We report our findings that indicate the potential of Persona-L to increase empathy and understanding of complex needs while also revealing the need for transparency of data used in persona creation, the role of the language and tone, and the need to provide a more balanced presentation of abilities with constraints.<|reference_end|>
|
arxiv
|
@article{sun2024persona-l,
title={Persona-L has Entered the Chat: Leveraging LLM and Ability-based
Framework for Personas of People with Complex Needs},
author={Lipeipei Sun and Tianzi Qin and Anran Hu and Jiale Zhang and Shuojia Lin and Jianyan Chen and Mona Ali and Mirjana Prpa},
journal={arXiv preprint arXiv:2409.15604},
year={2024},
archivePrefix={arXiv},
eprint={2409.15604},
primaryClass={cs.HC}
}
|
sun2024persona-l
|
arxiv-661107
|
2409.15608
|
Deep Learning Approach for Knee Point Detection on Noisy Data
|
<|reference_start|>Deep Learning Approach for Knee Point Detection on Noisy Data: A knee point on a curve is the one where the curve levels off after an increase. In a computer system, it marks the point at which the system's performance is no longer improving significantly despite adding extra resources. Thus a knee point often represents an optimal point for decision. However, identifying knee points in noisy data is a challenging task. All previous works defined knee points based on the data in the original scale. However, in this work, we define knee points based on normalized data and provide a mathematical definition of curvature for normalized discrete data points, based on the mathematical definition of curvature for continuous functions. The impact of normalization exerted on curvature and the location of knee points are also discussed. Nevertheless, assessing the effectiveness of methods is difficult in the absence of ground truth data and benchmark datasets, which makes comparing existing methods challenging. In view of this, we create synthetic data that simulate real-world scenarios. We achieve this by selecting a set of functions that possess the required characteristics in this research and then introducing noise that satisfies the underlying distribution. In addition, we present a deep-learning approach and employ a Convolutional Neural Network (CNN) with a U-Net-like architecture, to accurately detect the knee point(s) of the underlying true distribution. The proposed model is evaluated against state-of-the-art methods. Experiments show that our network outperforms existing methods in all synthetic datasets, regardless of whether the samples have single or multiple knee points. In fact, our model achieves the best $F_{1}$ scores among all existing methods in all the test sets.<|reference_end|>
|
arxiv
|
@article{fok2024deep,
title={Deep Learning Approach for Knee Point Detection on Noisy Data},
author={Ting Yan Fok and Nong Ye},
journal={arXiv preprint arXiv:2409.15608},
year={2024},
archivePrefix={arXiv},
eprint={2409.15608},
primaryClass={cs.LG}
}
|
fok2024deep
|
arxiv-661108
|
2409.15610
|
Full-Order Sampling-Based MPC for Torque-Level Locomotion Control via Diffusion-Style Annealing
|
<|reference_start|>Full-Order Sampling-Based MPC for Torque-Level Locomotion Control via Diffusion-Style Annealing: Due to high dimensionality and non-convexity, real-time optimal control using full-order dynamics models for legged robots is challenging. Therefore, Nonlinear Model Predictive Control (NMPC) approaches are often limited to reduced-order models. Sampling-based MPC has shown potential in nonconvex even discontinuous problems, but often yields suboptimal solutions with high variance, which limits its applications in high-dimensional locomotion. This work introduces DIAL-MPC (Diffusion-Inspired Annealing for Legged MPC), a sampling-based MPC framework with a novel diffusion-style annealing process. Such an annealing process is supported by the theoretical landscape analysis of Model Predictive Path Integral Control (MPPI) and the connection between MPPI and single-step diffusion. Algorithmically, DIAL-MPC iteratively refines solutions online and achieves both global coverage and local convergence. In quadrupedal torque-level control tasks, DIAL-MPC reduces the tracking error of standard MPPI by $13.4$ times and outperforms reinforcement learning (RL) policies by $50\%$ in challenging climbing tasks without any training. In particular, DIAL-MPC enables precise real-world quadrupedal jumping with payload. To the best of our knowledge, DIAL-MPC is the first training-free method that optimizes over full-order quadruped dynamics in real-time.<|reference_end|>
|
arxiv
|
@article{xue2024full-order,
title={Full-Order Sampling-Based MPC for Torque-Level Locomotion Control via
Diffusion-Style Annealing},
author={Haoru Xue and Chaoyi Pan and Zeji Yi and Guannan Qu and Guanya Shi},
journal={arXiv preprint arXiv:2409.15610},
year={2024},
archivePrefix={arXiv},
eprint={2409.15610},
primaryClass={cs.RO}
}
|
xue2024full-order
|
arxiv-661109
|
2409.15612
|
Revolutionizing Biomarker Discovery: Leveraging Generative AI for Bio-Knowledge-Embedded Continuous Space Exploration
|
<|reference_start|>Revolutionizing Biomarker Discovery: Leveraging Generative AI for Bio-Knowledge-Embedded Continuous Space Exploration: Biomarker discovery is vital in advancing personalized medicine, offering insights into disease diagnosis, prognosis, and therapeutic efficacy. Traditionally, the identification and validation of biomarkers heavily depend on extensive experiments and statistical analyses. These approaches are time-consuming, demand extensive domain expertise, and are constrained by the complexity of biological systems. These limitations motivate us to ask: Can we automatically identify the effective biomarker subset without substantial human efforts? Inspired by the success of generative AI, we think that the intricate knowledge of biomarker identification can be compressed into a continuous embedding space, thus enhancing the search for better biomarkers. Thus, we propose a new biomarker identification framework with two important modules: 1) training data preparation and 2) embedding-optimization-generation. The first module uses a multi-agent system to automatically collect pairs of biomarker subsets and their corresponding prediction accuracy as training data. These data establish a strong knowledge base for biomarker identification. The second module employs an encoder-evaluator-decoder learning paradigm to compress the knowledge of the collected data into a continuous space. Then, it utilizes gradient-based search techniques and autoregressive-based reconstruction to efficiently identify the optimal subset of biomarkers. Finally, we conduct extensive experiments on three real-world datasets to show the efficiency, robustness, and effectiveness of our method.<|reference_end|>
|
arxiv
|
@article{ying2024revolutionizing,
title={Revolutionizing Biomarker Discovery: Leveraging Generative AI for
Bio-Knowledge-Embedded Continuous Space Exploration},
author={Wangyang Ying and Dongjie Wang and Xuanming Hu and Ji Qiu and Jin Park and Yanjie Fu},
journal={arXiv preprint arXiv:2409.15612},
year={2024},
archivePrefix={arXiv},
eprint={2409.15612},
primaryClass={cs.LG cs.AI}
}
|
ying2024revolutionizing
|
arxiv-661110
|
2409.15615
|
KISS-Matcher: Fast and Robust Point Cloud Registration Revisited
|
<|reference_start|>KISS-Matcher: Fast and Robust Point Cloud Registration Revisited: While global point cloud registration systems have advanced significantly in all aspects, many studies have focused on specific components, such as feature extraction, graph-theoretic pruning, or pose solvers. In this paper, we take a holistic view on the registration problem and develop an open-source and versatile C++ library for point cloud registration, called \textit{KISS-Matcher}. KISS-Matcher combines a novel feature detector, \textit{Faster-PFH}, that improves over the classical fast point feature histogram (FPFH). Moreover, it adopts a $k$-core-based graph-theoretic pruning to reduce the time complexity of rejecting outlier correspondences. Finally, it combines these modules in a complete, user-friendly, and ready-to-use pipeline. As verified by extensive experiments, KISS-Matcher has superior scalability and broad applicability, achieving a substantial speed-up compared to state-of-the-art outlier-robust registration pipelines while preserving accuracy. Our code will be available at \href{https://github.com/MIT-SPARK/KISS-Matcher}{\texttt{https://github.com/MIT-SPARK/KISS-Matcher}}.<|reference_end|>
|
arxiv
|
@article{lim2024kiss-matcher,
title={KISS-Matcher: Fast and Robust Point Cloud Registration Revisited},
author={Hyungtae Lim and Daebeom Kim and Gunhee Shin and Jingnan Shi and
Ignacio Vizzo and Hyun Myung and Jaesik Park and Luca Carlone},
journal={arXiv preprint arXiv:2409.15615},
year={2024},
archivePrefix={arXiv},
eprint={2409.15615},
primaryClass={cs.CV cs.RO}
}
|
lim2024kiss-matcher
|
arxiv-661111
|
2409.15616
|
Reinforcement Feature Transformation for Polymer Property Performance Prediction
|
<|reference_start|>Reinforcement Feature Transformation for Polymer Property Performance Prediction: Polymer property performance prediction aims to forecast specific features or attributes of polymers, which has become an efficient approach to measuring their performance. However, existing machine learning models face challenges in effectively learning polymer representations due to low-quality polymer datasets, which consequently impact their overall performance. This study focuses on improving polymer property performance prediction tasks by reconstructing an optimal and explainable descriptor representation space. Nevertheless, prior research such as feature engineering and representation learning can only partially solve this task since they are either labor-intensive or unexplainable. This raises two issues: 1) automatic transformation and 2) explainable enhancement. To tackle these issues, we propose our unique Traceable Group-wise Reinforcement Generation Perspective. Specifically, we redefine the reconstruction of the representation space into an interactive process, combining nested generation and selection. Generation creates meaningful descriptors, and selection eliminates redundancies to control descriptor sizes. Our approach employs cascading reinforcement learning with three Markov Decision Processes, automating descriptor and operation selection, and descriptor crossing. We utilize a group-wise generation strategy to explore and enhance reward signals for cascading agents. Ultimately, we conduct experiments to indicate the effectiveness of our proposed framework.<|reference_end|>
|
arxiv
|
@article{hu2024reinforcement,
title={Reinforcement Feature Transformation for Polymer Property Performance
Prediction},
author={Xuanming Hu and Dongjie Wang and Wangyang Ying and Yanjie Fu},
journal={arXiv preprint arXiv:2409.15616},
year={2024},
archivePrefix={arXiv},
eprint={2409.15616},
primaryClass={cs.LG}
}
|
hu2024reinforcement
|
arxiv-661112
|
2409.15618
|
A Fully Parallelizable Loosely Coupled Scheme for Fluid-Poroelastic Structure Interaction Problems
|
<|reference_start|>A Fully Parallelizable Loosely Coupled Scheme for Fluid-Poroelastic Structure Interaction Problems: We investigate the fluid-poroelastic structure interaction problem in a moving domain, governed by the Navier-Stokes-Biot (NSBiot) system. First, we propose a fully parallelizable, loosely coupled scheme to solve the coupled system. At each time step, the solution from the previous time step is used to approximate the coupling conditions at the interface, allowing the original coupled problem to be fully decoupled into separate fluid and structure subproblems, which are solved in parallel. Since our approach utilizes a loosely coupled scheme, no sub-iterations are required at each time step. Next, we conduct the energy estimates of this splitting method for the linearized problem (Stokes-Biot system), which demonstrates that the scheme is unconditionally stable without any restriction of the time step size from the physical parameters. Furthermore, we illustrate the first-order accuracy in time through two benchmark problems. Finally, to demonstrate that the proposed method maintains its excellent stability properties also for the nonlinear NSBiot system, we present numerical results for both $2D$ and $3D$ NSBiot problems related to real-world physical applications.<|reference_end|>
|
arxiv
|
@article{guo2024a,
title={A Fully Parallelizable Loosely Coupled Scheme for Fluid-Poroelastic
Structure Interaction Problems},
author={Shihan Guo and Yizhong Sun and Yifan Wang and Xiaohe Yue and Haibiao Zheng},
journal={arXiv preprint arXiv:2409.15618},
year={2024},
archivePrefix={arXiv},
eprint={2409.15618},
primaryClass={math.NA cs.NA}
}
|
guo2024a
|
arxiv-661113
|
2409.15621
|
Three-dimensional large deformation frictional contact treatment using varying-order NURBS discretization in IGA
|
<|reference_start|>Three-dimensional large deformation frictional contact treatment using varying-order NURBS discretization in IGA: We introduce a varying-order (VO) NURBS discretization method to enhance the performance of the IGA technique for three-dimensional large deformation frictional contact problems. Based on the promising results obtained with the previous work on the 2D isogeometric contact analysis, the present work extends the capability of the method for tri-variate NURBS discretization. The proposed method enables independent employment of the user-defined higher-order NURBS for the discretization of the contact surface and the minimum order of NURBS for the remaining solid volume. Such a method provides the possibility to refine a NURBS solid with the controllable order elevation-based approach while preserving its volume parametrization at a fixed mesh. The advantages of the method are twofold. First, the higher-order NURBS for the evaluation of contact integral enhances the accuracy of the contact responses at a fixed mesh, hence fully exploiting the advantage of higher-order NURBS specifically for contact computations. Second, the minimum order of NURBS for the computations in the remaining volume considerably reduces the computational cost associated with the uniform order NURBS-based isogeometric contact analyses. The capabilities of the proposed method are demonstrated using various contact problems with or without considering friction between deformable solids. The results with the standard uniform order of NURBS-based discretization are also included to provide a comparative assessment. We show that to attain similar accuracy results, the VO NURBS discretization uses a much coarser mesh resolution than the standard NURBS-based discretization, leading to a major gain in computational efficiency for isogeometric contact analysis. The convergence study demonstrates the consistent performance of the method for efficient IGA of three-dimensional (3D) frictional contact problems.<|reference_end|>
|
arxiv
|
@article{agrawal2024three-dimensional,
title={Three-dimensional large deformation frictional contact treatment using
varying-order NURBS discretization in IGA},
author={Vishal Agrawal},
journal={arXiv preprint arXiv:2409.15621},
year={2024},
archivePrefix={arXiv},
eprint={2409.15621},
primaryClass={math.NA cs.NA}
}
|
agrawal2024three-dimensional
|
arxiv-661114
|
2409.15623
|
Safe Guard: an LLM-agent for Real-time Voice-based Hate Speech Detection in Social Virtual Reality
|
<|reference_start|>Safe Guard: an LLM-agent for Real-time Voice-based Hate Speech Detection in Social Virtual Reality: In this paper, we present Safe Guard, an LLM-agent for the detection of hate speech in voice-based interactions in social VR (VRChat). Our system leverages OpenAI GPT and audio feature extraction for real-time voice interactions. We contribute a system design and evaluation of the system that demonstrates the capability of our approach in detecting hate speech, and reducing false positives compared to currently available approaches. Our results indicate the potential of LLM-based agents in creating safer virtual environments and set the groundwork for further advancements in LLM-driven moderation approaches.<|reference_end|>
|
arxiv
|
@article{xu2024safe,
title={Safe Guard: an LLM-agent for Real-time Voice-based Hate Speech Detection
in Social Virtual Reality},
author={Yiwen Xu and Qinyang Hou and Hongyu Wan and Mirjana Prpa},
journal={arXiv preprint arXiv:2409.15623},
year={2024},
archivePrefix={arXiv},
eprint={2409.15623},
primaryClass={eess.AS cs.AI cs.SD}
}
|
xu2024safe
|
arxiv-661115
|
2409.15626
|
Qualitative Insights Tool (QualIT): LLM Enhanced Topic Modeling
|
<|reference_start|>Qualitative Insights Tool (QualIT): LLM Enhanced Topic Modeling: Topic modeling is a widely used technique for uncovering thematic structures from large text corpora. However, most topic modeling approaches e.g. Latent Dirichlet Allocation (LDA) struggle to capture nuanced semantics and contextual understanding required to accurately model complex narratives. Recent advancements in this area include methods like BERTopic, which have demonstrated significantly improved topic coherence and thus established a new standard for benchmarking. In this paper, we present a novel approach, the Qualitative Insights Tool (QualIT) that integrates large language models (LLMs) with existing clustering-based topic modeling approaches. Our method leverages the deep contextual understanding and powerful language generation capabilities of LLMs to enrich the topic modeling process using clustering. We evaluate our approach on a large corpus of news articles and demonstrate substantial improvements in topic coherence and topic diversity compared to baseline topic modeling techniques. On the 20 ground-truth topics, our method shows 70% topic coherence (vs 65% & 57% benchmarks) and 95.5% topic diversity (vs 85% & 72% benchmarks). Our findings suggest that the integration of LLMs can unlock new opportunities for topic modeling of dynamic and complex text data, as is common in talent management research contexts.<|reference_end|>
|
arxiv
|
@article{kapoor2024qualitative,
title={Qualitative Insights Tool (QualIT): LLM Enhanced Topic Modeling},
author={Satya Kapoor and Alex Gil and Sreyoshi Bhaduri and Anshul Mittal and Rutu Mulkar},
journal={arXiv preprint arXiv:2409.15626},
year={2024},
archivePrefix={arXiv},
eprint={2409.15626},
primaryClass={cs.IR cs.CL}
}
|
kapoor2024qualitative
|
arxiv-661116
|
2409.15627
|
ModCube: Modular, Self-Assembling Cubic Underwater Robot
|
<|reference_start|>ModCube: Modular, Self-Assembling Cubic Underwater Robot: This paper presents a low-cost, centralized modular underwater robot platform, ModCube, which can be used to study swarm coordination for a wide range of tasks in underwater environments. A ModCube structure consists of multiple ModCube robots. Each robot can move in six DoF with eight thrusters and can be rigidly connected to other ModCube robots with an electromagnet controlled by an onboard computer. In this paper, we present a novel method for characterizing and visualizing dynamic behavior, along with four benchmarks to evaluate the morphological performance of the robot. Analysis shows that our ModCube design is desirable for omnidirectional tasks, compared with the configurations widely used by commercial underwater robots. We run real robot experiments in two water tanks to demonstrate the robust control and self-assembly of the proposed system. We also open-source the design and code to facilitate future research.<|reference_end|>
|
arxiv
|
@article{zheng2024modcube,
title={ModCube: Modular, Self-Assembling Cubic Underwater Robot},
author={Jiaxi Zheng and Guangmin Dai and Botao He and Zhaoyang Mu and Zhaochen Meng and
Tianyi Zhang and Weiming Zhi and Dixia Fan},
journal={arXiv preprint arXiv:2409.15627},
year={2024},
archivePrefix={arXiv},
eprint={2409.15627},
primaryClass={cs.RO}
}
|
zheng2024modcube
|
arxiv-661117
|
2409.15629
|
Dynamic Game-Theoretical Decision-Making Framework for Vehicle-Pedestrian Interaction with Human Bounded Rationality
|
<|reference_start|>Dynamic Game-Theoretical Decision-Making Framework for Vehicle-Pedestrian Interaction with Human Bounded Rationality: Human-involved interactive environments pose significant challenges for autonomous vehicle decision-making processes due to the complexity and uncertainty of human behavior. It is crucial to develop an explainable and trustworthy decision-making system for autonomous vehicles interacting with pedestrians. Previous studies often used traditional game theory to describe interactions for its interpretability. However, it assumes complete human rationality and unlimited reasoning abilities, which is unrealistic. To solve this limitation and improve model accuracy, this paper proposes a novel framework that integrates the partially observable Markov decision process with behavioral game theory to dynamically model AV-pedestrian interactions at the unsignalized intersection. Both the AV and the pedestrian are modeled as dynamic-belief-induced quantal cognitive hierarchy (DB-QCH) models, considering human reasoning limitations and bounded rationality in the decision-making process. In addition, a dynamic belief updating mechanism allows the AV to update its understanding of the opponent's rationality degree in real-time based on observed behaviors and adapt its strategies accordingly. The analysis results indicate that our models effectively simulate vehicle-pedestrian interactions and our proposed AV decision-making approach performs well in safety, efficiency, and smoothness. It closely resembles real-world driving behavior and even achieves more comfortable driving navigation compared to our previous virtual reality experimental data.<|reference_end|>
|
arxiv
|
@article{dang2024dynamic,
title={Dynamic Game-Theoretical Decision-Making Framework for
Vehicle-Pedestrian Interaction with Human Bounded Rationality},
author={Meiting Dang and Dezong Zhao and Yafei Wang and Chongfeng Wei},
journal={arXiv preprint arXiv:2409.15629},
year={2024},
archivePrefix={arXiv},
eprint={2409.15629},
primaryClass={cs.RO}
}
|
dang2024dynamic
|
arxiv-661118
|
2409.15631
|
Data Augmentation for Sparse Multidimensional Learning Performance Data Using Generative AI
|
<|reference_start|>Data Augmentation for Sparse Multidimensional Learning Performance Data Using Generative AI: Learning performance data describe correct and incorrect answers or problem-solving attempts in adaptive learning, such as in intelligent tutoring systems (ITSs). Learning performance data tend to be highly sparse (80\%\(\sim\)90\% missing observations) in most real-world applications due to adaptive item selection. This data sparsity presents challenges to using learner models to effectively predict future performance and explore new hypotheses about learning. This article proposes a systematic framework for augmenting learner data to address data sparsity in learning performance data. First, learning performance is represented as a three-dimensional tensor of learners' questions, answers, and attempts, capturing longitudinal knowledge states during learning. Second, a tensor factorization method is used to impute missing values in sparse tensors of collected learner data, thereby grounding the imputation on knowledge tracing tasks that predict missing performance values based on real observations. Third, a module for generating patterns of learning is used. This study contrasts two forms of generative Artificial Intelligence (AI), including Generative Adversarial Networks (GANs) and Generative Pre-Trained Transformers (GPT) to generate data associated with different clusters of learner data. We tested this approach on an adult literacy dataset from AutoTutor lessons developed for Adult Reading Comprehension (ARC). We found that: (1) tensor factorization improved the performance in tracing and predicting knowledge mastery compared with other knowledge tracing techniques without data augmentation, showing higher relative fidelity for this imputation method, and (2) the GAN-based simulation showed greater overall stability and less statistical bias based on a divergence evaluation with varying simulation sample sizes compared to GPT.<|reference_end|>
|
arxiv
|
@article{zhang2024data,
title={Data Augmentation for Sparse Multidimensional Learning Performance Data
Using Generative AI},
author={Liang Zhang and Jionghao Lin and John Sabatini and Conrad Borchers
and Daniel Weitekamp and Meng Cao and John Hollander and Xiangen Hu and
Arthur C. Graesser},
journal={arXiv preprint arXiv:2409.15631},
year={2024},
archivePrefix={arXiv},
eprint={2409.15631},
primaryClass={cs.LG cs.AI}
}
|
zhang2024data
|
arxiv-661119
|
2409.15633
|
Intent Prediction-Driven Model Predictive Control for UAV Planning and Navigation in Dynamic Environments
|
<|reference_start|>Intent Prediction-Driven Model Predictive Control for UAV Planning and Navigation in Dynamic Environments: The emergence of indoor aerial robots holds significant potential for enhancing construction site workers' productivity by autonomously performing inspection and mapping tasks. The key challenge to this application is ensuring navigation safety with human workers. While navigation in static environments has been extensively studied, navigating dynamic environments remains open due to challenges in perception and planning. Payload limitations of unmanned aerial vehicles limit them to using cameras with limited fields of view, resulting in unreliable perception and tracking during collision avoidance. Moreover, the unpredictable nature of the dynamic environments can quickly make the generated optimal trajectory outdated. To address these challenges, this paper presents a comprehensive navigation framework that incorporates both perception and planning, introducing the concept of dynamic obstacle intent prediction. Our perception module detects and tracks dynamic obstacles efficiently and handles tracking loss and occlusion during collision avoidance. The proposed intent prediction module employs a Markov Decision Process (MDP) to forecast potential actions of dynamic obstacles with the possible future trajectories. Finally, a novel intent-based planning algorithm, leveraging model predictive control (MPC), is applied to generate safe navigation trajectories. Simulation and physical experiments demonstrate that our method enables safe navigation in dynamic environments and achieves the fewest collisions compared to benchmarks.<|reference_end|>
|
arxiv
|
@article{xu2024intent,
title={Intent Prediction-Driven Model Predictive Control for UAV Planning and
Navigation in Dynamic Environments},
author={Zhefan Xu and Hanyu Jin and Xinming Han and Haoyu Shen and Kenji
Shimada},
journal={arXiv preprint arXiv:2409.15633},
year={2024},
archivePrefix={arXiv},
eprint={2409.15633},
primaryClass={cs.RO}
}
|
xu2024intent
|
arxiv-661120
|
2409.15634
|
NavRL: Learning Safe Flight in Dynamic Environments
|
<|reference_start|>NavRL: Learning Safe Flight in Dynamic Environments: Safe flight in dynamic environments requires autonomous unmanned aerial vehicles (UAVs) to make effective decisions when navigating cluttered spaces with moving obstacles. Traditional approaches often decompose decision-making into hierarchical modules for prediction and planning. Although these handcrafted systems can perform well in specific settings, they might fail if environmental conditions change and often require careful parameter tuning. Additionally, their solutions could be suboptimal due to the use of inaccurate mathematical model assumptions and simplifications aimed at achieving computational efficiency. To overcome these limitations, this paper introduces the NavRL framework, a deep reinforcement learning-based navigation method built on the Proximal Policy Optimization (PPO) algorithm. NavRL utilizes our carefully designed state and action representations, allowing the learned policy to make safe decisions in the presence of both static and dynamic obstacles, with zero-shot transfer from simulation to real-world flight. Furthermore, the proposed method adopts a simple but effective safety shield for the trained policy, inspired by the concept of velocity obstacles, to mitigate potential failures associated with the black-box nature of neural networks. To accelerate the convergence, we implement the training pipeline using NVIDIA Isaac Sim, enabling parallel training with thousands of quadcopters. Simulation and physical experiments show that our method ensures safe navigation in dynamic environments and results in the fewest collisions compared to benchmarks in scenarios with dynamic obstacles.<|reference_end|>
|
arxiv
|
@article{xu2024navrl:,
title={NavRL: Learning Safe Flight in Dynamic Environments},
author={Zhefan Xu and Xinming Han and Haoyu Shen and Hanyu Jin and Kenji
Shimada},
journal={arXiv preprint arXiv:2409.15634},
year={2024},
archivePrefix={arXiv},
eprint={2409.15634},
primaryClass={cs.RO}
}
|
xu2024navrl:
|
arxiv-661121
|
2409.15635
|
Dynamic Cloth Manipulation Considering Variable Stiffness and Material Change Using Deep Predictive Model with Parametric Bias
|
<|reference_start|>Dynamic Cloth Manipulation Considering Variable Stiffness and Material Change Using Deep Predictive Model with Parametric Bias: Dynamic manipulation of flexible objects such as fabric, which is difficult to modelize, is one of the major challenges in robotics. With the development of deep learning, we are beginning to see results in simulations and in some actual robots, but there are still many problems that have not yet been tackled. Humans can move their arms at high speed using their flexible bodies skillfully, and even when the material to be manipulated changes, they can manipulate the material after moving it several times and understanding its characteristics. Therefore, in this research, we focus on the following two points: (1) body control using a variable stiffness mechanism for more dynamic manipulation, and (2) response to changes in the material of the manipulated object using parametric bias. By incorporating these two approaches into a deep predictive model, we show through simulation and actual robot experiments that Musashi-W, a musculoskeletal humanoid with variable stiffness mechanism, can dynamically manipulate cloth while detecting changes in the physical properties of the manipulated object.<|reference_end|>
|
arxiv
|
@article{kawaharazuka2024dynamic,
title={Dynamic Cloth Manipulation Considering Variable Stiffness and Material
Change Using Deep Predictive Model with Parametric Bias},
author={Kento Kawaharazuka and Akihiro Miki and Masahiro Bando and Kei Okada
and Masayuki Inaba},
journal={arXiv preprint arXiv:2409.15635},
year={2024},
doi={10.3389/fnbot.2022.890695},
archivePrefix={arXiv},
eprint={2409.15635},
primaryClass={cs.RO}
}
|
kawaharazuka2024dynamic
|
arxiv-661122
|
2409.15636
|
Personalized Federated Learning via Backbone Self-Distillation
|
<|reference_start|>Personalized Federated Learning via Backbone Self-Distillation: In practical scenarios, federated learning frequently necessitates training personalized models for each client using heterogeneous data. This paper proposes a backbone self-distillation approach to facilitate personalized federated learning. In this approach, each client trains its local model and only sends the backbone weights to the server. These weights are then aggregated to create a global backbone, which is returned to each client for updating. However, the client's local backbone lacks personalization because of the common representation. To solve this problem, each client further performs backbone self-distillation by using the global backbone as a teacher and transferring knowledge to update the local backbone. This process involves learning two components: the shared backbone for common representation and the private head for local personalization, which enables effective global knowledge transfer. Extensive experiments and comparisons with 12 state-of-the-art approaches demonstrate the effectiveness of our approach.<|reference_end|>
|
arxiv
|
@article{wang2024personalized,
title={Personalized Federated Learning via Backbone Self-Distillation},
author={Pengju Wang and Bochao Liu and Dan Zeng and Chenggang Yan and Shiming
Ge},
journal={arXiv preprint arXiv:2409.15636},
year={2024},
archivePrefix={arXiv},
eprint={2409.15636},
primaryClass={cs.LG cs.AI cs.CR cs.CV}
}
|
wang2024personalized
|
arxiv-661123
|
2409.15637
|
Synatra: Turning Indirect Knowledge into Direct Demonstrations for Digital Agents at Scale
|
<|reference_start|>Synatra: Turning Indirect Knowledge into Direct Demonstrations for Digital Agents at Scale: LLMs can now act as autonomous agents that interact with digital environments and complete specific objectives (e.g., arranging an online meeting). However, accuracy is still far from satisfactory, partly due to a lack of large-scale, direct demonstrations for digital tasks. Obtaining supervised data from humans is costly, and automatic data collection through exploration or reinforcement learning relies on complex environmental and content setup, resulting in datasets that lack comprehensive coverage of various scenarios. On the other hand, there is abundant knowledge that may indirectly assist task completion, such as online tutorials that were created for human consumption. In this work, we present Synatra, an approach that effectively transforms this indirect knowledge into direct supervision at scale. We define different types of indirect knowledge, and carefully study the available sources to obtain it, methods to encode the structure of direct demonstrations, and finally methods to transform indirect knowledge into direct demonstrations. We use 100k such synthetically-created demonstrations to finetune a 7B CodeLlama, and demonstrate that the resulting agent surpasses all comparably sized models on three web-based task benchmarks Mind2Web, MiniWoB++ and WebArena, as well as surpassing GPT-3.5 on WebArena and Mind2Web. In addition, while synthetic demonstrations prove to be only 3% the cost of human demonstrations (at $0.031 each), we show that the synthetic demonstrations can be more effective than an identical number of human demonstrations collected from limited domains.<|reference_end|>
|
arxiv
|
@article{ou2024synatra:,
title={Synatra: Turning Indirect Knowledge into Direct Demonstrations for
Digital Agents at Scale},
author={Tianyue Ou and Frank F. Xu and Aman Madaan and Jiarui Liu and Robert
Lo and Abishek Sridhar and Sudipta Sengupta and Dan Roth and Graham Neubig
and Shuyan Zhou},
journal={arXiv preprint arXiv:2409.15637},
year={2024},
archivePrefix={arXiv},
eprint={2409.15637},
primaryClass={cs.AI}
}
|
ou2024synatra:
|
arxiv-661124
|
2409.15642
|
Generative AI-Enhanced Multi-Modal Semantic Communication in Internet of Vehicles: System Design and Methodologies
|
<|reference_start|>Generative AI-Enhanced Multi-Modal Semantic Communication in Internet of Vehicles: System Design and Methodologies: Vehicle-to-everything (V2X) communication supports numerous tasks, from driving safety to entertainment services. To achieve a holistic view, vehicles are typically equipped with multiple sensors to compensate for undetectable blind spots. However, processing large volumes of multi-modal data increases transmission load, while the dynamic nature of vehicular networks adds to transmission instability. To address these challenges, we propose a novel framework, Generative Artificial intelligence (GAI)-enhanced multi-modal semantic communication (SemCom), referred to as G-MSC, designed to handle various vehicular network tasks by employing suitable analog or digital transmission. GAI presents a promising opportunity to transform the SemCom framework by significantly enhancing semantic encoding to facilitate the optimized integration of multi-modal information, enhancing channel robustness, and fortifying semantic decoding against noise interference. To validate the effectiveness of the G-MSC framework, we conduct a case study showcasing its performance in vehicular communication networks for predictive tasks. The experimental results show that the design achieves reliable and efficient communication in V2X networks. In the end, we present future research directions on G-MSC.<|reference_end|>
|
arxiv
|
@article{lu2024generative,
title={Generative AI-Enhanced Multi-Modal Semantic Communication in Internet of
Vehicles: System Design and Methodologies},
author={Jiayi Lu and Wanting Yang and Zehui Xiong and Chengwen Xing and
Rahim Tafazolli and Tony Q.S. Quek and Merouane Debbah},
journal={arXiv preprint arXiv:2409.15642},
year={2024},
archivePrefix={arXiv},
eprint={2409.15642},
primaryClass={cs.NI}
}
|
lu2024generative
|
arxiv-661125
|
2409.15644
|
PolicyCraft: Supporting Collaborative and Participatory Policy Design through Case-Grounded Deliberation
|
<|reference_start|>PolicyCraft: Supporting Collaborative and Participatory Policy Design through Case-Grounded Deliberation: Community and organizational policies are typically designed in a top-down, centralized fashion, with limited input from impacted stakeholders. This can result in policies that are misaligned with community needs or perceived as illegitimate. How can we support more collaborative, participatory approaches to policy design? In this paper, we present PolicyCraft, a system that structures collaborative policy design through case-grounded deliberation. Building on past research that highlights the value of concrete cases in establishing common ground, PolicyCraft supports users in collaboratively proposing, critiquing, and revising policies through discussion and voting on cases. A field study across two university courses showed that students using PolicyCraft reached greater consensus and developed better-supported course policies, compared with those using a baseline system that did not scaffold their use of concrete cases. Reflecting on our findings, we discuss opportunities for future HCI systems to help groups more effectively bridge between abstract policies and concrete cases.<|reference_end|>
|
arxiv
|
@article{kuo2024policycraft:,
title={PolicyCraft: Supporting Collaborative and Participatory Policy Design
through Case-Grounded Deliberation},
author={Tzu-Sheng Kuo and Quan Ze Chen and Amy X. Zhang and Jane Hsieh and
Haiyi Zhu and Kenneth Holstein},
journal={arXiv preprint arXiv:2409.15644},
year={2024},
archivePrefix={arXiv},
eprint={2409.15644},
primaryClass={cs.HC}
}
|
kuo2024policycraft:
|
arxiv-661126
|
2409.15647
|
Looped Transformers for Length Generalization
|
<|reference_start|>Looped Transformers for Length Generalization: Recent work has shown that Transformers trained from scratch can successfully solve various arithmetic and algorithmic tasks, such as adding numbers and computing parity. While these Transformers generalize well on unseen inputs of the same length, they struggle with length generalization, i.e., handling inputs of unseen lengths. In this work, we demonstrate that looped Transformers with an adaptive number of steps significantly improve length generalization. We focus on tasks with a known iterative solution, involving multiple iterations of a RASP-L operation - a length-generalizable operation that can be expressed by a finite-sized Transformer. We train looped Transformers using our proposed learning algorithm and observe that they learn highly length-generalizable solutions for various tasks.<|reference_end|>
|
arxiv
|
@article{fan2024looped,
title={Looped Transformers for Length Generalization},
author={Ying Fan and Yilun Du and Kannan Ramchandran and Kangwook Lee},
journal={arXiv preprint arXiv:2409.15647},
year={2024},
archivePrefix={arXiv},
eprint={2409.15647},
primaryClass={cs.LG}
}
|
fan2024looped
|
arxiv-661127
|
2409.15650
|
ImPoster: Text and Frequency Guidance for Subject Driven Action Personalization using Diffusion Models
|
<|reference_start|>ImPoster: Text and Frequency Guidance for Subject Driven Action Personalization using Diffusion Models: We present ImPoster, a novel algorithm for generating a target image of a 'source' subject performing a 'driving' action. The inputs to our algorithm are a single pair of a source image with the subject that we wish to edit and a driving image with a subject of an arbitrary class performing the driving action, along with the text descriptions of the two images. Our approach is completely unsupervised and does not require any access to additional annotations like keypoints or pose. Our approach builds on a pretrained text-to-image latent diffusion model and learns the characteristics of the source and the driving image by finetuning the diffusion model for a small number of iterations. At inference time, ImPoster performs step-wise text prompting i.e. it denoises by first moving in the direction of the image manifold corresponding to the driving image followed by the direction of the image manifold corresponding to the text description of the desired target image. We propose a novel diffusion guidance formulation, image frequency guidance, to steer the generation towards the manifold of the source subject and the driving action at every step of the inference denoising. Our frequency guidance formulations are derived from the frequency domain properties of images. We extensively evaluate ImPoster on a diverse set of source-driving image pairs to demonstrate improvements over baselines. To the best of our knowledge, ImPoster is the first approach towards achieving both subject-driven as well as action-driven image personalization. Code and data is available at https://github.com/divyakraman/ImPosterDiffusion2024.<|reference_end|>
|
arxiv
|
@article{kothandaraman2024imposter:,
title={ImPoster: Text and Frequency Guidance for Subject Driven Action
Personalization using Diffusion Models},
author={Divya Kothandaraman and Kuldeep Kulkarni and Sumit Shekhar and
Balaji Vasan Srinivasan and Dinesh Manocha},
journal={arXiv preprint arXiv:2409.15650},
year={2024},
archivePrefix={arXiv},
eprint={2409.15650},
primaryClass={cs.CV}
}
|
kothandaraman2024imposter:
|
arxiv-661128
|
2409.15651
|
SurgIRL: Towards Life-Long Learning for Surgical Automation by Incremental Reinforcement Learning
|
<|reference_start|>SurgIRL: Towards Life-Long Learning for Surgical Automation by Incremental Reinforcement Learning: Surgical automation holds immense potential to improve the outcome and accessibility of surgery. Recent studies use reinforcement learning to learn policies that automate different surgical tasks. However, these policies are developed independently and are limited in their reusability when the task changes, making it more time-consuming when robots learn to solve multiple tasks. Inspired by how human surgeons build their expertise, we train surgical automation policies through Surgical Incremental Reinforcement Learning (SurgIRL). SurgIRL aims to (1) acquire new skills by referring to external policies (knowledge) and (2) accumulate and reuse these skills to solve multiple unseen tasks incrementally (incremental learning). Our SurgIRL framework includes three major components. We first define an expandable knowledge set containing heterogeneous policies that can be helpful for surgical tasks. Then, we propose Knowledge Inclusive Attention Network with mAximum Coverage Exploration (KIAN-ACE), which improves learning efficiency by maximizing the coverage of the knowledge set during the exploration process. Finally, we develop incremental learning pipelines based on KIAN-ACE to accumulate and reuse learned knowledge and solve multiple surgical tasks sequentially. Our simulation experiments show that KIAN-ACE efficiently learns to automate ten surgical tasks separately or incrementally. We also evaluate our learned policies on the da Vinci Research Kit (dVRK) and demonstrate successful sim-to-real transfers.<|reference_end|>
|
arxiv
|
@article{ho2024surgirl:,
title={SurgIRL: Towards Life-Long Learning for Surgical Automation by
Incremental Reinforcement Learning},
author={Yun-Jie Ho and Zih-Yun Chiu and Yuheng Zhi and Michael C. Yip},
journal={arXiv preprint arXiv:2409.15651},
year={2024},
archivePrefix={arXiv},
eprint={2409.15651},
primaryClass={cs.RO cs.LG}
}
|
ho2024surgirl:
|
arxiv-661129
|
2409.15652
|
English offensive text detection using CNN based Bi-GRU model
|
<|reference_start|>English offensive text detection using CNN based Bi-GRU model: Over the years, the number of users of social media has increased drastically. People frequently share their thoughts through social platforms, and this leads to an increase in hate content. In this virtual community, individuals share their views, express their feelings, and post photos, videos, blogs, and more. Social networking sites like Facebook and Twitter provide platforms to share vast amounts of content with a single click. However, these platforms do not impose restrictions on the uploaded content, which may include abusive language and explicit images unsuitable for social media. To resolve this issue, a new idea must be implemented to divide the inappropriate content. Numerous studies have been done to automate the process. In this paper, we propose a new Bi-GRU-CNN model to classify whether the text is offensive or not. The combination of the Bi-GRU and CNN models outperforms the existing model.<|reference_end|>
|
arxiv
|
@article{roy2024english,
title={English offensive text detection using CNN based Bi-GRU model},
author={Tonmoy Roy and Md Robiul Islam and Asif Ahammad Miazee and Anika
Antara and Al Amin and Sunjim Hossain},
journal={arXiv preprint arXiv:2409.15652},
year={2024},
archivePrefix={arXiv},
eprint={2409.15652},
primaryClass={cs.CL cs.LG cs.SI}
}
|
roy2024english
|
arxiv-661130
|
2409.15654
|
Cambricon-LLM: A Chiplet-Based Hybrid Architecture for On-Device Inference of 70B LLM
|
<|reference_start|>Cambricon-LLM: A Chiplet-Based Hybrid Architecture for On-Device Inference of 70B LLM: Deploying advanced large language models on edge devices, such as smartphones and robotics, is a growing trend that enhances user data privacy and network connectivity resilience while preserving intelligent capabilities. However, such a task exhibits single-batch computing with incredibly low arithmetic intensity, which poses the significant challenges of huge memory footprint and bandwidth demands on limited edge resources. To address these issues, we introduce Cambricon-LLM, a chiplet-based hybrid architecture with NPU and a dedicated NAND flash chip to enable efficient on-device inference of 70B LLMs. Such a hybrid architecture utilizes both the high computing capability of NPU and the data capacity of the NAND flash chip, with the proposed hardware-tiling strategy that minimizes the data movement overhead between NPU and NAND flash chip. Specifically, the NAND flash chip, enhanced by our innovative in-flash computing and on-die ECC techniques, excels at performing precise lightweight on-die processing. Simultaneously, the NPU collaborates with the flash chip for matrix operations and handles special function computations beyond the flash's on-die processing capabilities. Overall, Cambricon-LLM enables the on-device inference of 70B LLMs at a speed of 3.44 token/s, and 7B LLMs at a speed of 36.34 token/s, which is over 22X to 45X faster than existing flash-offloading technologies, showing the potentiality of deploying powerful LLMs in edge devices.<|reference_end|>
|
arxiv
|
@article{yu2024cambricon-llm:,
title={Cambricon-LLM: A Chiplet-Based Hybrid Architecture for On-Device
Inference of 70B LLM},
author={Zhongkai Yu and Shengwen Liang and Tianyun Ma and Yunke Cai and
Ziyuan Nan and Di Huang and Xinkai Song and Yifan Hao and Jie Zhang and Tian
Zhi and Yongwei Zhao and Zidong Du and Xing Hu and Qi Guo and Tianshi Chen},
journal={MICRO 2024},
year={2024},
archivePrefix={arXiv},
eprint={2409.15654},
primaryClass={cs.AR}
}
|
yu2024cambricon-llm:
|
arxiv-661131
|
2409.15656
|
Identified-and-Targeted: The First Early Evidence of the Privacy-Invasive Use of Browser Fingerprinting for Online Tracking
|
<|reference_start|>Identified-and-Targeted: The First Early Evidence of the Privacy-Invasive Use of Browser Fingerprinting for Online Tracking: While advertising has become commonplace in today's online interactions, there is a notable dearth of research investigating the extent to which browser fingerprinting is harnessed for user tracking and targeted advertising. Prior studies only measured whether fingerprinting-related scripts are being run on the websites but that in itself does not necessarily mean that fingerprinting is being used for the privacy-invasive purpose of online tracking because fingerprinting might be deployed for the defensive purposes of bot/fraud detection and user authentication. It is imperative to address the mounting concerns regarding the utilization of browser fingerprinting in the realm of online advertising. To understand the privacy-invasive use of fingerprinting for user tracking, this paper introduces a new framework ``FPTrace'' (fingerprinting-based tracking assessment and comprehensive evaluation framework) designed to identify alterations in advertisements resulting from adjustments in browser fingerprinting settings. Our approach involves emulating genuine user interactions, capturing advertiser bid data, and closely monitoring HTTP information. Using FPTrace we conduct a large-scale measurement study to identify whether browser fingerprinting is being used for the purpose of user tracking and ad targeting. The results we have obtained provide robust evidence supporting the utilization of browser fingerprinting for the purposes of advertisement tracking and targeting. This is substantiated by significant disparities in bid values and a reduction in HTTP records subsequent to changes in fingerprinting. In conclusion, our research unveils the widespread employment of browser fingerprinting in online advertising, prompting critical considerations regarding user privacy and data security within the digital advertising landscape.<|reference_end|>
|
arxiv
|
@article{liu2024identified-and-targeted:,
title={Identified-and-Targeted: The First Early Evidence of the
Privacy-Invasive Use of Browser Fingerprinting for Online Tracking},
author={Zengrui Liu and Jimmy Dani and Shujiang Wu and Yinzhi Cao and Nitesh
Saxena},
journal={arXiv preprint arXiv:2409.15656},
year={2024},
archivePrefix={arXiv},
eprint={2409.15656},
primaryClass={cs.CR}
}
|
liu2024identified-and-targeted:
|
arxiv-661132
|
2409.15657
|
M$^2$PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning
|
<|reference_start|>M$^2$PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning: Multimodal Large Language Models (MLLMs) demonstrate remarkable performance across a wide range of domains, with increasing emphasis on enhancing their zero-shot generalization capabilities for unseen tasks across various modalities. Instruction tuning has emerged as an effective strategy for achieving zero-shot generalization by finetuning pretrained models on diverse multimodal tasks. As the scale of MLLMs continues to grow, parameter-efficient finetuning becomes increasingly critical. However, most existing parameter-efficient approaches focus only on single modalities and often overlook the multimodal characteristics during finetuning. In this work, we introduce a novel Multimodal Prompt Tuning (M$^2$PT) approach for efficient instruction tuning of MLLMs. M$^2$PT effectively integrates visual and textual prompts into the vision encoder and language processor respectively during finetuning, facilitating the extraction and alignment of features across modalities. Empirical results on various multimodal evaluation datasets demonstrate the superior performance of our approach compared to several state-of-the-art baselines. A comprehensive set of ablation studies validates the effectiveness of our prompt design and the efficiency of our approach.<|reference_end|>
|
arxiv
|
@article{wang2024m$^2$pt:,
title={M$^2$PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning},
author={Taowen Wang and Yiyang Liu and James Chenhao Liang and Junhan Zhao
and Yiming Cui and Yuning Mao and Shaoliang Nie and Jiahao Liu and Fuli Feng
and Zenglin Xu and Cheng Han and Lifu Huang and Qifan Wang and Dongfang Liu},
journal={arXiv preprint arXiv:2409.15657},
year={2024},
archivePrefix={arXiv},
eprint={2409.15657},
primaryClass={cs.AI cs.CL cs.LG}
}
|
wang2024m$^2$pt:
|
arxiv-661133
|
2409.15658
|
ReLEP: A Novel Framework for Real-world Long-horizon Embodied Planning
|
<|reference_start|>ReLEP: A Novel Framework for Real-world Long-horizon Embodied Planning: Real-world long-horizon embodied planning underpins embodied AI. To accomplish long-horizon tasks, agents need to decompose abstract instructions into detailed steps. Prior works mostly rely on GPT-4V for task decomposition into predefined actions, which limits task diversity due to GPT-4V's finite understanding of larger skillsets. Therefore, we present ReLEP, a groundbreaking framework for Real world Long-horizon Embodied Planning, which can accomplish a wide range of daily tasks. At its core lies a fine-tuned large vision language model that formulates plans as sequences of skill functions according to input instruction and scene image. These functions are selected from a carefully designed skill library. ReLEP is also equipped with a Memory module for plan and status recall, and a Robot Configuration module for versatility across robot types. In addition, we propose a semi-automatic data generation pipeline to tackle dataset scarcity. Real-world off-line experiments across eight daily embodied tasks demonstrate that ReLEP is able to accomplish long-horizon embodied tasks and outperforms other state-of-the-art baseline methods.<|reference_end|>
|
arxiv
|
@article{liu2024relep:,
title={ReLEP: A Novel Framework for Real-world Long-horizon Embodied Planning},
author={Siyuan Liu and Jiawei Du and Sicheng Xiang and Zibo Wang and
Dingsheng Luo},
journal={arXiv preprint arXiv:2409.15658},
year={2024},
archivePrefix={arXiv},
eprint={2409.15658},
primaryClass={cs.RO cs.AI}
}
|
liu2024relep:
|
arxiv-661134
|
2409.15662
|
Double-Path Adaptive-correlation Spatial-Temporal Inverted Transformer for Stock Time Series Forecasting
|
<|reference_start|>Double-Path Adaptive-correlation Spatial-Temporal Inverted Transformer for Stock Time Series Forecasting: Spatial-temporal graph neural networks (STGNNs) have achieved significant success in various time series forecasting tasks. However, due to the lack of explicit and fixed spatial relationships in stock prediction tasks, many STGNNs fail to perform effectively in this domain. While some STGNNs learn spatial relationships from time series, they often lack comprehensiveness. Research indicates that modeling time series using feature changes as tokens reveals entirely different information compared to using time steps as tokens. To more comprehensively extract dynamic spatial information from stock data, we propose a Double-Path Adaptive-correlation Spatial-Temporal Inverted Transformer (DPA-STIFormer). DPA-STIFormer models each node via continuous changes in features as tokens and introduces a Double Direction Self-adaptation Fusion mechanism. This mechanism decomposes node encoding into temporal and feature representations, simultaneously extracting different spatial correlations from a double path approach, and proposes a Double-path gating mechanism to fuse these two types of correlation information. Experiments conducted on four stock market datasets demonstrate state-of-the-art results, validating the model's superior capability in uncovering latent temporal-correlation patterns.<|reference_end|>
|
arxiv
|
@article{yan2024double-path,
title={Double-Path Adaptive-correlation Spatial-Temporal Inverted Transformer
for Stock Time Series Forecasting},
author={Wenbo Yan and Ying Tan},
journal={arXiv preprint arXiv:2409.15662},
year={2024},
archivePrefix={arXiv},
eprint={2409.15662},
primaryClass={cs.LG cs.AI}
}
|
yan2024double-path
|
arxiv-661135
|
2409.15664
|
Mitigating Semantic Leakage in Cross-lingual Embeddings via Orthogonality Constraint
|
<|reference_start|>Mitigating Semantic Leakage in Cross-lingual Embeddings via Orthogonality Constraint: Accurately aligning contextual representations in cross-lingual sentence embeddings is key for effective parallel data mining. A common strategy for achieving this alignment involves disentangling semantics and language in sentence embeddings derived from multilingual pre-trained models. However, we discover that current disentangled representation learning methods suffer from semantic leakage - a term we introduce to describe when a substantial amount of language-specific information is unintentionally leaked into semantic representations. This hinders the effective disentanglement of semantic and language representations, making it difficult to retrieve embeddings that distinctively represent the meaning of the sentence. To address this challenge, we propose a novel training objective, ORthogonAlity Constraint LEarning (ORACLE), tailored to enforce orthogonality between semantic and language embeddings. ORACLE builds upon two components: intra-class clustering and inter-class separation. Through experiments on cross-lingual retrieval and semantic textual similarity tasks, we demonstrate that training with the ORACLE objective effectively reduces semantic leakage and enhances semantic alignment within the embedding space.<|reference_end|>
|
arxiv
|
@article{ki2024mitigating,
title={Mitigating Semantic Leakage in Cross-lingual Embeddings via
Orthogonality Constraint},
author={Dayeon Ki and Cheonbok Park and Hyunjoong Kim},
journal={ACL 2024 RepL4NLP Workshop},
year={2024},
archivePrefix={arXiv},
eprint={2409.15664},
primaryClass={cs.CL cs.AI}
}
|
ki2024mitigating
|
arxiv-661136
|
2409.15670
|
Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks
|
<|reference_start|>Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks: Spiking Neural Networks (SNNs), the third generation neural networks, are known for their low energy consumption and high robustness. SNNs are developing rapidly and can compete with Artificial Neural Networks (ANNs) in many fields. To ensure that the widespread use of SNNs does not cause serious security incidents, much research has been conducted to explore the robustness of SNNs under adversarial sample attacks. However, many other unassessed security threats exist, such as highly stealthy backdoor attacks. Therefore, to fill the research gap in this and further explore the security vulnerabilities of SNNs, this paper explores the robustness performance of SNNs trained by supervised learning rules under backdoor attacks. Specifically, the work herein includes: i) We propose a generic backdoor attack framework that can be launched against the training process of existing supervised learning rules and covers all learnable dataset types of SNNs. ii) We analyze the robustness differences between different learning rules and between SNN and ANN, which suggests that SNN no longer has inherent robustness under backdoor attacks. iii) We reveal the vulnerability of conversion-dependent learning rules caused by backdoor migration and further analyze the migration ability during the conversion process, finding that the backdoor migration rate can even exceed 99%. iv) Finally, we discuss potential countermeasures against this kind of backdoor attack and its technical challenges and point out several promising research directions.<|reference_end|>
|
arxiv
|
@article{jin2024data,
title={Data Poisoning-based Backdoor Attack Framework against Supervised
Learning Rules of Spiking Neural Networks},
author={Lingxin Jin and Meiyu Lin and Wei Jiang and Jinyu Zhan},
journal={arXiv preprint arXiv:2409.15670},
year={2024},
archivePrefix={arXiv},
eprint={2409.15670},
primaryClass={cs.CR cs.NE}
}
|
jin2024data
|
arxiv-661137
|
2409.15671
|
Autonomous Hiking Trail Navigation via Semantic Segmentation and Geometric Analysis
|
<|reference_start|>Autonomous Hiking Trail Navigation via Semantic Segmentation and Geometric Analysis: Natural environments pose significant challenges for autonomous robot navigation, particularly due to their unstructured and ever-changing nature. Hiking trails, with their dynamic conditions influenced by weather, vegetation, and human traffic, represent one such challenge. This work introduces a novel approach to autonomous hiking trail navigation that balances trail adherence with the flexibility to adapt to off-trail routes when necessary. The solution is a Traversability Analysis module that integrates semantic data from camera images with geometric information from LiDAR to create a comprehensive understanding of the surrounding terrain. A planner uses this traversability map to navigate safely, adhering to trails while allowing off-trail movement when necessary to avoid on-trail hazards or for safe off-trail shortcuts. The method is evaluated through simulation to determine the balance between semantic and geometric information in traversability estimation. These simulations tested various weights to assess their impact on navigation performance across different trail scenarios. Weights were then validated through field tests at the West Virginia University Core Arboretum, demonstrating the method's effectiveness in a real-world environment.<|reference_end|>
|
arxiv
|
@article{reed2024autonomous,
title={Autonomous Hiking Trail Navigation via Semantic Segmentation and
Geometric Analysis},
author={Camndon Reed and Christopher Tatsch and Jason N. Gross and Yu Gu},
journal={arXiv preprint arXiv:2409.15671},
year={2024},
archivePrefix={arXiv},
eprint={2409.15671},
primaryClass={cs.RO cs.CV eess.IV}
}
|
reed2024autonomous
|
arxiv-661138
|
2409.15672
|
Language-based Audio Moment Retrieval
|
<|reference_start|>Language-based Audio Moment Retrieval: In this paper, we propose and design a new task called audio moment retrieval (AMR). Unlike conventional language-based audio retrieval tasks that search for short audio clips from an audio database, AMR aims to predict relevant moments in untrimmed long audio based on a text query. Given the lack of prior work in AMR, we first build a dedicated dataset, Clotho-Moment, consisting of large-scale simulated audio recordings with moment annotations. We then propose a DETR-based model, named Audio Moment DETR (AM-DETR), as a fundamental framework for AMR tasks. This model captures temporal dependencies within audio features, inspired by similar video moment retrieval tasks, thus surpassing conventional clip-level audio retrieval methods. Additionally, we provide manually annotated datasets to properly measure the effectiveness and robustness of our methods on real data. Experimental results show that AM-DETR, trained with Clotho-Moment, outperforms a baseline model that applies a clip-level audio retrieval method with a sliding window on all metrics, particularly improving [email protected] by 9.00 points. Our datasets and code are publicly available in https://h-munakata.github.io/Language-based-Audio-Moment-Retrieval.<|reference_end|>
|
arxiv
|
@article{munakata2024language-based,
title={Language-based Audio Moment Retrieval},
author={Hokuto Munakata and Taichi Nishimura and Shota Nakada and Tatsuya
Komatsu},
journal={arXiv preprint arXiv:2409.15672},
year={2024},
archivePrefix={arXiv},
eprint={2409.15672},
primaryClass={eess.AS cs.CL cs.SD}
}
|
munakata2024language-based
|
arxiv-661139
|
2409.15674
|
Developer Reactions to Protestware in Open Source Software: The cases of color.js and es5.ext
|
<|reference_start|>Developer Reactions to Protestware in Open Source Software: The cases of colorjs and es5ext: There is growing concern about maintainers self-sabotaging their work in order to take political or economic stances, a practice referred to as "protestware". Our objective is to understand the discourse around discussions on such an attack, how it is received by the community, and whether developers respond to the attack in a timely manner. We study two notable protestware cases i.e., colors.js and es5-ext. Results indicate that protestware discussions spread rapidly, though not as quickly as security vulnerabilities, with a lower speed when compared to ua-parser and log4j. By establishing a taxonomy of protestware discussions, we identify posts such as expressing stances and providing technical mitigation instructions. A thematic analysis identified five major themes during the discussions: i. disseminate and response, ii. stance, iii. reputation, iv. communicative styles, v. rights and ethics. This work sheds light on the nuanced landscape of protestware discussions, offering insights for both researchers and developers into maintaining a healthy balance between the political or social actions of developers and the collective well-being of the open-source community.<|reference_end|>
|
arxiv
|
@article{fan2024developer,
title={Developer Reactions to Protestware in Open Source Software: The cases of
color.js and es5.ext},
author={Youmei Fan and Dong Wang and Supatsara Wattanakriengkrai and Hathaichanok
Damrongsiri and Christoph Treude and Hideaki Hata and Raula Gaikovina Kula},
journal={arXiv preprint arXiv:2409.15674},
year={2024},
archivePrefix={arXiv},
eprint={2409.15674},
primaryClass={cs.SE}
}
|
fan2024developer
|
arxiv-661140
|
2409.15675
|
Northeast Materials Database (NEMAD): Enabling Discovery of High Transition Temperature Magnetic Compounds
|
<|reference_start|>Northeast Materials Database (NEMAD): Enabling Discovery of High Transition Temperature Magnetic Compounds: The discovery of novel magnetic materials with greater operating temperature ranges and optimized performance is essential for advanced applications. Current data-driven approaches are challenging and limited due to the lack of accurate, comprehensive, and feature-rich databases. This study aims to address this challenge by introducing a new approach that uses Large Language Models (LLMs) to create a comprehensive, experiment-based, magnetic materials database named the Northeast Materials Database (NEMAD), which consists of 26,706 magnetic materials (www.nemad.org). The database incorporates chemical composition, magnetic phase transition temperatures, structural details, and magnetic properties. Enabled by NEMAD, machine learning models were developed to classify materials and predict transition temperatures. Our classification model achieved an accuracy of 90% in categorizing materials as ferromagnetic (FM), antiferromagnetic (AFM), and non-magnetic (NM). The regression models predict Curie (Néel) temperature with a coefficient of determination (R2) of 0.86 (0.85) and a mean absolute error (MAE) of 62K (32K). These models identified 62 (19) FM (AFM) candidates with a predicted Curie (Néel) temperature above 500K (100K) from the Materials Project. This work shows the feasibility of combining LLMs for automated data extraction and machine learning models in accelerating the discovery of magnetic materials.<|reference_end|>
|
arxiv
|
@article{itani2024northeast,
title={Northeast Materials Database (NEMAD): Enabling Discovery of High
Transition Temperature Magnetic Compounds},
author={Suman Itani and Yibo Zhang and Jiadong Zang},
journal={arXiv preprint arXiv:2409.15675},
year={2024},
archivePrefix={arXiv},
eprint={2409.15675},
primaryClass={cond-mat.mtrl-sci cs.LG physics.comp-ph}
}
|
itani2024northeast
|
arxiv-661141
|
2409.15679
|
PDT: Uav Target Detection Dataset for Pests and Diseases Tree
|
<|reference_start|>PDT: Uav Target Detection Dataset for Pests and Diseases Tree: UAVs emerge as the optimal carriers for visual weed identification and integrated pest and disease management in crops. However, the absence of specialized datasets impedes the advancement of model development in this domain. To address this, we have developed the Pests and Diseases Tree dataset (PDT dataset). PDT dataset represents the first high-precision UAV-based dataset for targeted detection of tree pests and diseases, which is collected in real-world operational environments and aims to fill the gap in available datasets for this field. Moreover, by aggregating public datasets and network data, we further introduced the Common Weed and Crop dataset (CWC dataset) to address the challenge of inadequate classification capabilities of test models within datasets for this field. Finally, we propose the YOLO-Dense Pest (YOLO-DP) model for high-precision object detection of weed, pest, and disease crop images. We re-evaluate the state-of-the-art detection models with our proposed PDT dataset and CWC dataset, showing the completeness of the dataset and the effectiveness of the YOLO-DP. The proposed PDT dataset, CWC dataset, and YOLO-DP model are presented at https://github.com/RuiXing123/PDT_CWC_YOLO-DP.<|reference_end|>
|
arxiv
|
@article{zhou2024pdt:,
title={PDT: Uav Target Detection Dataset for Pests and Diseases Tree},
author={Mingle Zhou and Rui Xing and Delong Han and Zhiyong Qi and Gang Li},
journal={arXiv preprint arXiv:2409.15679},
year={2024},
archivePrefix={arXiv},
eprint={2409.15679},
primaryClass={cs.CV}
}
|
zhou2024pdt:
|
arxiv-661142
|
2409.15680
|
Distributed Online Bandit Nonconvex Optimization with One-Point Residual Feedback via Dynamic Regret
|
<|reference_start|>Distributed Online Bandit Nonconvex Optimization with One-Point Residual Feedback via Dynamic Regret: This paper considers the distributed online bandit optimization problem with nonconvex loss functions over a time-varying digraph. This problem can be viewed as a repeated game between a group of online players and an adversary. At each round, each player selects a decision from the constraint set, and then the adversary assigns an arbitrary, possibly nonconvex, loss function to this player. Only the loss value at the current round, rather than the entire loss function or any other information (e.g. gradient), is privately revealed to the player. Players aim to minimize a sequence of global loss functions, which are the sum of local losses. We observe that traditional multi-point bandit algorithms are unsuitable for online optimization, where the data for the loss function are not all a priori, while the one-point bandit algorithms suffer from poor regret guarantees. To address these issues, we propose a novel one-point residual feedback distributed online algorithm. This algorithm estimates the gradient using residuals from two points, effectively reducing the regret bound while maintaining $\mathcal{O}(1)$ sampling complexity per iteration. We employ a rigorous metric, dynamic regret, to evaluate the algorithm's performance. By appropriately selecting the step size and smoothing parameters, we demonstrate that the expected dynamic regret of our algorithm is comparable to existing algorithms that use two-point feedback, provided the deviation in the objective function sequence and the path length of the minimization grows sublinearly. Finally, we validate the effectiveness of the proposed algorithm through numerical simulations.<|reference_end|>
|
arxiv
|
@article{hua2024distributed,
title={Distributed Online Bandit Nonconvex Optimization with One-Point Residual
Feedback via Dynamic Regret},
author={Youqing Hua and Shuai Liu and Yiguang Hong and Karl Henrik Johansson and
Guangchen Wang},
journal={arXiv preprint arXiv:2409.15680},
year={2024},
archivePrefix={arXiv},
eprint={2409.15680},
primaryClass={cs.LG math.OC}
}
|
hua2024distributed
|
arxiv-661143
|
2409.15682
|
Linear Contextual Bandits with Interference
|
<|reference_start|>Linear Contextual Bandits with Interference: Interference, a key concept in causal inference, extends the reward modeling process by accounting for the impact of one unit's actions on the rewards of others. In contextual bandit (CB) settings, where multiple units are present in the same round, potential interference can significantly affect the estimation of expected rewards for different arms, thereby influencing the decision-making process. Although some prior work has explored multi-agent and adversarial bandits in interference-aware settings, the effect of interference in CB, as well as the underlying theory, remains significantly underexplored. In this paper, we introduce a systematic framework to address interference in Linear CB (LinCB), bridging the gap between causal inference and online decision-making. We propose a series of algorithms that explicitly quantify the interference effect in the reward modeling process and provide comprehensive theoretical guarantees, including sublinear regret bounds, finite sample upper bounds, and asymptotic properties. The effectiveness of our approach is demonstrated through simulations and a synthetic data generated based on MovieLens data.<|reference_end|>
|
arxiv
|
@article{xu2024linear,
title={Linear Contextual Bandits with Interference},
author={Yang Xu and Wenbin Lu and Rui Song},
journal={arXiv preprint arXiv:2409.15682},
year={2024},
archivePrefix={arXiv},
eprint={2409.15682},
primaryClass={cs.LG stat.ME}
}
|
xu2024linear
|
arxiv-661144
|
2409.15684
|
SYNERGAI: Perception Alignment for Human-Robot Collaboration
|
<|reference_start|>SYNERGAI: Perception Alignment for Human-Robot Collaboration: Recently, large language models (LLMs) have shown strong potential in facilitating human-robotic interaction and collaboration. However, existing LLM-based systems often overlook the misalignment between human and robot perceptions, which hinders their effective communication and real-world robot deployment. To address this issue, we introduce SYNERGAI, a unified system designed to achieve both perceptual alignment and human-robot collaboration. At its core, SYNERGAI employs 3D Scene Graph (3DSG) as its explicit and innate representation. This enables the system to leverage LLM to break down complex tasks and allocate appropriate tools in intermediate steps to extract relevant information from the 3DSG, modify its structure, or generate responses. Importantly, SYNERGAI incorporates an automatic mechanism that enables perceptual misalignment correction with users by updating its 3DSG with online interaction. SYNERGAI achieves comparable performance with the data-driven models in ScanQA in a zero-shot manner. Through comprehensive experiments across 10 real-world scenes, SYNERGAI demonstrates its effectiveness in establishing common ground with humans, realizing a success rate of 61.9% in alignment tasks. It also significantly improves the success rate from 3.7% to 45.68% on novel tasks by transferring the knowledge acquired during alignment.<|reference_end|>
|
arxiv
|
@article{chen2024synergai:,
title={SYNERGAI: Perception Alignment for Human-Robot Collaboration},
author={Yixin Chen and Guoxi Zhang and Yaowei Zhang and Hongming Xu and Peiyuan
Zhi and Qing Li and Siyuan Huang},
journal={arXiv preprint arXiv:2409.15684},
year={2024},
archivePrefix={arXiv},
eprint={2409.15684},
primaryClass={cs.RO}
}
|
chen2024synergai:
|
arxiv-661145
|
2409.15687
|
A Comprehensive Evaluation of Large Language Models on Mental Illnesses
|
<|reference_start|>A Comprehensive Evaluation of Large Language Models on Mental Illnesses: Large language models have shown promise in various domains, including healthcare. In this study, we conduct a comprehensive evaluation of LLMs in the context of mental health tasks using social media data. We explore the zero-shot (ZS) and few-shot (FS) capabilities of various LLMs, including GPT-4, Llama 3, Gemini, and others, on tasks such as binary disorder detection, disorder severity evaluation, and psychiatric knowledge assessment. Our evaluation involved 33 models testing 9 main prompt templates across the tasks. Key findings revealed that models like GPT-4 and Llama 3 exhibited superior performance in binary disorder detection, with accuracies reaching up to 85% on certain datasets. Moreover, prompt engineering played a crucial role in enhancing model performance. Notably, the Mixtral 8x22b model showed an improvement of over 20%, while Gemma 7b experienced a similar boost in performance. In the task of disorder severity evaluation, we observed that FS learning significantly improved the model's accuracy, highlighting the importance of contextual examples in complex assessments. Notably, the Phi-3-mini model exhibited a substantial increase in performance, with balanced accuracy improving by over 6.80% and mean average error dropping by nearly 1.3 when moving from ZS to FS learning. In the psychiatric knowledge task, recent models generally outperformed older, larger counterparts, with the Llama 3.1 405b achieving an accuracy of 91.2%. Despite promising results, our analysis identified several challenges, including variability in performance across datasets and the need for careful prompt engineering. Furthermore, the ethical guards imposed by many LLM providers hamper the ability to accurately evaluate their performance, due to tendency to not respond to potentially sensitive queries.<|reference_end|>
|
arxiv
|
@article{hanafi2024a,
title={A Comprehensive Evaluation of Large Language Models on Mental Illnesses},
author={Abdelrahman Hanafi and Mohammed Saad and Noureldin Zahran and Radwa J.
Hanafy and Mohammed E. Fouda},
journal={arXiv preprint arXiv:2409.15687},
year={2024},
archivePrefix={arXiv},
eprint={2409.15687},
primaryClass={cs.AI}
}
|
hanafi2024a
|
arxiv-661146
|
2409.15688
|
Safe Navigation for Robotic Digestive Endoscopy via Human Intervention-based Reinforcement Learning
|
<|reference_start|>Safe Navigation for Robotic Digestive Endoscopy via Human Intervention-based Reinforcement Learning: With the increasing application of automated robotic digestive endoscopy (RDE), ensuring safe and efficient navigation in the unstructured and narrow digestive tract has become a critical challenge. Existing automated reinforcement learning navigation algorithms often result in potentially risky collisions due to the absence of essential human intervention, which significantly limits the safety and effectiveness of RDE in actual clinical practice. To address this limitation, we propose a Human Intervention (HI)-based Proximal Policy Optimization (PPO) framework, dubbed HI-PPO, which incorporates expert knowledge to enhance RDE's safety. Specifically, we introduce an Enhanced Exploration Mechanism (EEM) to address the low exploration efficiency of the standard PPO. Additionally, a reward-penalty adjustment (RPA) is implemented to penalize unsafe actions during initial interventions. Furthermore, Behavior Cloning Similarity (BCS) is included as an auxiliary objective to ensure the agent emulates expert actions. Comparative experiments conducted in a simulated platform across various anatomical colon segments demonstrate that our model effectively and safely guides RDE.<|reference_end|>
|
arxiv
|
@article{tan2024safe,
title={Safe Navigation for Robotic Digestive Endoscopy via Human
Intervention-based Reinforcement Learning},
author={Min Tan and Yushun Tao and Boyun Zheng and GaoSheng Xie and Lijuan Feng
and Zeyang Xia and Jing Xiong},
journal={arXiv preprint arXiv:2409.15688},
year={2024},
archivePrefix={arXiv},
eprint={2409.15688},
primaryClass={cs.RO cs.AI}
}
|
tan2024safe
|
arxiv-661147
|
2409.15689
|
Plenoptic PNG: Real-Time Neural Radiance Fields in 150 KB
|
<|reference_start|>Plenoptic PNG: Real-Time Neural Radiance Fields in 150 KB: The goal of this paper is to encode a 3D scene into an extremely compact representation from 2D images and to enable its transmittance, decoding and rendering in real-time across various platforms. Despite the progress in NeRFs and Gaussian Splats, their large model size and specialized renderers make it challenging to distribute free-viewpoint 3D content as easily as images. To address this, we have designed a novel 3D representation that encodes the plenoptic function into sinusoidal function indexed dense volumes. This approach facilitates feature sharing across different locations, improving compactness over traditional spatial voxels. The memory footprint of the dense 3D feature grid can be further reduced using spatial decomposition techniques. This design combines the strengths of spatial hashing functions and voxel decomposition, resulting in a model size as small as 150 KB for each 3D scene. Moreover, PPNG features a lightweight rendering pipeline with only 300 lines of code that decodes its representation into standard GL textures and fragment shaders. This enables real-time rendering using the traditional GL pipeline, ensuring universal compatibility and efficiency across various platforms without additional dependencies.<|reference_end|>
|
arxiv
|
@article{lee2024plenoptic,
title={Plenoptic PNG: Real-Time Neural Radiance Fields in 150 KB},
author={Jae Yong Lee and Yuqun Wu and Chuhang Zou and Derek Hoiem and Shenlong Wang},
journal={arXiv preprint arXiv:2409.15689},
year={2024},
archivePrefix={arXiv},
eprint={2409.15689},
primaryClass={cs.CV}
}
|
lee2024plenoptic
|
arxiv-661148
|
2409.15690
|
A Survey of Stance Detection on Social Media: New Directions and Perspectives
|
<|reference_start|>A Survey of Stance Detection on Social Media: New Directions and Perspectives: In modern digital environments, users frequently express opinions on contentious topics, providing a wealth of information on prevailing attitudes. The systematic analysis of these opinions offers valuable insights for decision-making in various sectors, including marketing and politics. As a result, stance detection has emerged as a crucial subfield within affective computing, enabling the automatic detection of user stances in social media conversations and providing a nuanced understanding of public sentiment on complex issues. Recent years have seen a surge of research interest in developing effective stance detection methods, with contributions from multiple communities, including natural language processing, web science, and social computing. This paper provides a comprehensive survey of stance detection techniques on social media, covering task definitions, datasets, approaches, and future works. We review traditional stance detection models, as well as state-of-the-art methods based on large language models, and discuss their strengths and limitations. Our survey highlights the importance of stance detection in understanding public opinion and sentiment, and identifies gaps in current research. We conclude by outlining potential future directions for stance detection on social media, including the need for more robust and generalizable models, and the importance of addressing emerging challenges such as multi-modal stance detection and stance detection in low-resource languages.<|reference_end|>
|
arxiv
|
@article{zhang2024a,
title={A Survey of Stance Detection on Social Media: New Directions and
Perspectives},
author={Bowen Zhang and Genan Dai and Fuqiang Niu and Nan Yin and Xiaomao Fan and Hu Huang},
journal={arXiv preprint arXiv:2409.15690},
year={2024},
archivePrefix={arXiv},
eprint={2409.15690},
primaryClass={cs.CL cs.IR cs.SI}
}
|
zhang2024a
|
arxiv-661149
|
2409.15692
|
Walking with Terrain Reconstruction: Learning to Traverse Risky Sparse Footholds
|
<|reference_start|>Walking with Terrain Reconstruction: Learning to Traverse Risky Sparse Footholds: Traversing risky terrains with sparse footholds presents significant challenges for legged robots, requiring precise foot placement in safe areas. Current learning-based methods often rely on implicit feature representations without supervising physically significant estimation targets. This limits the policy's ability to fully understand complex terrain structures, which is critical for generating accurate actions. In this paper, we utilize end-to-end reinforcement learning to traverse risky terrains with high sparsity and randomness. Our approach integrates proprioception with single-view depth images to reconstruct robot's local terrain, enabling a more comprehensive representation of terrain information. Meanwhile, by incorporating implicit and explicit estimations of the robot's state and its surroundings, we improve policy's environmental understanding, leading to more precise actions. We deploy the proposed framework on a low-cost quadrupedal robot, achieving agile and adaptive locomotion across various challenging terrains and demonstrating outstanding performance in real-world scenarios. Video at: http://youtu.be/ReQAR4D6tuc.<|reference_end|>
|
arxiv
|
@article{yu2024walking,
title={Walking with Terrain Reconstruction: Learning to Traverse Risky Sparse
Footholds},
author={Ruiqi Yu and Qianshi Wang and Yizhen Wang and Zhicheng Wang and Jun Wu and Qiuguo Zhu},
journal={arXiv preprint arXiv:2409.15692},
year={2024},
archivePrefix={arXiv},
eprint={2409.15692},
primaryClass={cs.RO}
}
|
yu2024walking
|
arxiv-661150
|
2409.15695
|
Toward Mixture-of-Experts Enabled Trustworthy Semantic Communication for 6G Networks
|
<|reference_start|>Toward Mixture-of-Experts Enabled Trustworthy Semantic Communication for 6G Networks: Semantic Communication (SemCom) plays a pivotal role in 6G networks, offering a viable solution for future efficient communication. Deep Learning (DL)-based semantic codecs further enhance this efficiency. However, the vulnerability of DL models to security threats, such as adversarial attacks, poses significant challenges for practical applications of SemCom systems. These vulnerabilities enable attackers to tamper with messages and eavesdrop on private information, especially in wireless communication scenarios. Although existing defenses attempt to address specific threats, they often fail to simultaneously handle multiple heterogeneous attacks. To overcome this limitation, we introduce a novel Mixture-of-Experts (MoE)-based SemCom system. This system comprises a gating network and multiple experts, each specializing in different security challenges. The gating network adaptively selects suitable experts to counter heterogeneous attacks based on user-defined security requirements. Multiple experts collaborate to accomplish semantic communication tasks while meeting the security requirements of users. A case study in vehicular networks demonstrates the efficacy of the MoE-based SemCom system. Simulation results show that the proposed MoE-based SemCom system effectively mitigates concurrent heterogeneous attacks, with minimal impact on downstream task accuracy.<|reference_end|>
|
arxiv
|
@article{he2024toward,
title={Toward Mixture-of-Experts Enabled Trustworthy Semantic Communication for
6G Networks},
author={Jiayi He and Xiaofeng Luo and Jiawen Kang and Hongyang Du and Zehui
Xiong and Ci Chen and Dusit Niyato and Xuemin Shen},
journal={arXiv preprint arXiv:2409.15695},
year={2024},
archivePrefix={arXiv},
eprint={2409.15695},
primaryClass={cs.NI cs.AI cs.CR}
}
|
he2024toward
|
arxiv-661151
|
2409.15697
|
dnaGrinder: a lightweight and high-capacity genomic foundation model
|
<|reference_start|>dnaGrinder: a lightweight and high-capacity genomic foundation model: The task of understanding and interpreting the complex information encoded within genomic sequences remains a grand challenge in biological research and clinical applications. In this context, recent advancements in large language model research have led to the development of both encoder-only and decoder-only foundation models designed to decode intricate information in DNA sequences. However, several issues persist, particularly regarding the efficient management of long-range dependencies inherent in genomic sequences, the effective representation of nucleotide variations, and the considerable computational costs associated with large model architectures and extensive pretraining datasets. Current genomic foundation models often face a critical tradeoff: smaller models with mediocre performance versus large models with improved performance. To address these challenges, we introduce dnaGrinder, a unique and efficient genomic foundation model. dnaGrinder excels at managing long-range dependencies within genomic sequences while minimizing computational costs without compromising performance. It achieves results that are not just comparable but often superior to leading DNA models such as Nucleotide Transformer and DNABERT-2. Furthermore, dnaGrinder is designed for easy fine-tuning on workstation-grade GPUs, accommodating input lengths exceeding 17,000 tokens. On a single high-performance GPU, it supports sequences longer than 140,000 tokens, making it a highly efficient and accessible tool for both basic biological research and clinical applications.<|reference_end|>
|
arxiv
|
@article{zhao2024dnagrinder:,
title={dnaGrinder: a lightweight and high-capacity genomic foundation model},
author={Qihang Zhao and Chi Zhang and Weixiong Zhang},
journal={arXiv preprint arXiv:2409.15697},
year={2024},
archivePrefix={arXiv},
eprint={2409.15697},
primaryClass={q-bio.GN cs.AI cs.CE cs.CL}
}
|
zhao2024dnagrinder:
|
arxiv-661152
|
2409.15698
|
GraphGI:A GNN Explanation Method using Game Interaction
|
<|reference_start|>GraphGI:A GNN Explanation Method using Game Interaction: Graph Neural Networks (GNNs) have garnered significant attention and have been extensively utilized across various domains. However, similar to other deep learning models, GNNs are often viewed as black-box models, making it challenging to interpret their prediction mechanisms. Current graph explanation techniques focus on identifying key nodes or edges, attributing the critical data features that drive model predictions. Nevertheless, these features do not independently influence the model's outcomes; rather, they interact with one another to collectively affect predictions. In this work, we propose a novel explanatory method GraphGI, which identifies the coalition with the highest interaction strength and presents it as an explanatory subgraph. Given a trained model and an input graph, our method explains predictions by gradually incorporating significant edges into the selected subgraph. We utilize game-theoretic interaction values to assess the interaction strength after edge additions, ensuring that the newly added edges confer maximum interaction strength to the explanatory subgraph. To enhance computational efficiency, we adopt effective approximation techniques for calculating Shapley values and game-theoretic interaction values. Empirical evaluations demonstrate that our method achieves superior fidelity and sparsity, maintaining the interpretability of the results at a comprehensible level.<|reference_end|>
|
arxiv
|
@article{xian2024graphgi:a,
title={GraphGI:A GNN Explanation Method using Game Interaction},
author={Xingping Xian and Jianlu Liu and Tao Wu and Lin Yuan and Chao Wang and Baiyun Chen},
journal={arXiv preprint arXiv:2409.15698},
year={2024},
archivePrefix={arXiv},
eprint={2409.15698},
primaryClass={cs.LG cs.SI}
}
|
xian2024graphgi:a
|
arxiv-661153
|
2409.15699
|
Lighter And Better: Towards Flexible Context Adaptation For Retrieval Augmented Generation
|
<|reference_start|>Lighter And Better: Towards Flexible Context Adaptation For Retrieval Augmented Generation: The existing Retrieval-Augmented Generation (RAG) systems face significant challenges in terms of cost and effectiveness. On one hand, they need to encode the lengthy retrieved contexts before responding to the input tasks, which imposes substantial computational overhead. On the other hand, directly using generic Large Language Models (LLMs) often leads to sub-optimal answers, while task-specific fine-tuning may compromise the LLMs' general capabilities. To address these challenges, we introduce a novel approach called FlexRAG (Flexible Context Adaptation for RAG). In this approach, the retrieved contexts are compressed into compact embeddings before being encoded by the LLMs. Simultaneously, these compressed embeddings are optimized to enhance downstream RAG performance. A key feature of FlexRAG is its flexibility, which enables effective support for diverse compression ratios and selective preservation of important contexts. Thanks to these technical designs, FlexRAG achieves superior generation quality while significantly reducing running costs. Comprehensive experiments on various question-answering datasets validate our approach as a cost-effective and flexible solution for RAG systems.<|reference_end|>
|
arxiv
|
@article{liu2024lighter,
title={Lighter And Better: Towards Flexible Context Adaptation For Retrieval
Augmented Generation},
author={Zheng Liu and Chenyuan Wu and Ninglu Shao and Shitao Xiao and
Chaozhuo Li and Defu Lian},
journal={arXiv preprint arXiv:2409.15699},
year={2024},
archivePrefix={arXiv},
eprint={2409.15699},
primaryClass={cs.CL}
}
|
liu2024lighter
|
arxiv-661154
|
2409.15700
|
Making Text Embedders Few-Shot Learners
|
<|reference_start|>Making Text Embedders Few-Shot Learners: Large language models (LLMs) with decoder-only architectures demonstrate remarkable in-context learning (ICL) capabilities. This feature enables them to effectively handle both familiar and novel tasks by utilizing examples provided within their input context. Recognizing the potential of this capability, we propose leveraging the ICL feature in LLMs to enhance the process of text embedding generation. To this end, we introduce a novel model bge-en-icl, which employs few-shot examples to produce high-quality text embeddings. Our approach integrates task-related examples directly into the query side, resulting in significant improvements across various tasks. Additionally, we have investigated how to effectively utilize LLMs as embedding models, including various attention mechanisms, pooling methods, etc. Our findings suggest that retaining the original framework often yields the best results, underscoring that simplicity is best. Experimental results on the MTEB and AIR-Bench benchmarks demonstrate that our approach sets new state-of-the-art (SOTA) performance. Our model, code and dataset are freely available at https://github.com/FlagOpen/FlagEmbedding .<|reference_end|>
|
arxiv
|
@article{li2024making,
title={Making Text Embedders Few-Shot Learners},
author={Chaofan Li and MingHao Qin and Shitao Xiao and Jianlyu Chen and Kun
Luo and Yingxia Shao and Defu Lian and Zheng Liu},
journal={arXiv preprint arXiv:2409.15700},
year={2024},
archivePrefix={arXiv},
eprint={2409.15700},
primaryClass={cs.IR cs.CL}
}
|
li2024making
|
arxiv-661155
|
2409.15703
|
Agent-state based policies in POMDPs: Beyond belief-state MDPs
|
<|reference_start|>Agent-state based policies in POMDPs: Beyond belief-state MDPs: The traditional approach to POMDPs is to convert them into fully observed MDPs by considering a belief state as an information state. However, a belief-state based approach requires perfect knowledge of the system dynamics and is therefore not applicable in the learning setting where the system model is unknown. Various approaches to circumvent this limitation have been proposed in the literature. We present a unified treatment of some of these approaches by viewing them as models where the agent maintains a local recursively updateable agent state and chooses actions based on the agent state. We highlight the different classes of agent-state based policies and the various approaches that have been proposed in the literature to find good policies within each class. These include the designer's approach to find optimal non-stationary agent-state based policies, policy search approaches to find locally optimal stationary agent-state based policies, and the approximate information state approach to find approximately optimal stationary agent-state based policies. We then present how ideas from the approximate information state approach have been used to improve Q-learning and actor-critic algorithms for learning in POMDPs.<|reference_end|>
|
arxiv
|
@article{sinha2024agent-state,
title={Agent-state based policies in POMDPs: Beyond belief-state MDPs},
author={Amit Sinha and Aditya Mahajan},
journal={arXiv preprint arXiv:2409.15703},
year={2024},
archivePrefix={arXiv},
eprint={2409.15703},
primaryClass={eess.SY cs.LG cs.SY}
}
|
sinha2024agent-state
|
arxiv-661156
|
2409.15704
|
Assessing FIFO and Round Robin Scheduling:Effects on Data Pipeline Performance and Energy Usage
|
<|reference_start|>Assessing FIFO and Round Robin Scheduling:Effects on Data Pipeline Performance and Energy Usage: In the case of compute-intensive machine learning, efficient operating system scheduling is crucial for performance and energy efficiency. This paper conducts a comparative study of FIFO (First-In-First-Out) and RR (Round-Robin) scheduling policies applied to real-time machine learning training processes and data pipelines on Ubuntu-based systems. By examining patterns of CPU usage and energy consumption, we identify which policy (the exclusive or the shared) provides higher performance and/or lower energy consumption for typical modern workloads. The results of this study can help in building better operating system schedulers for modern systems like Ubuntu, improving performance and reducing energy consumption in compute-intensive workloads.<|reference_end|>
|
arxiv
|
@article{choudhury2024assessing,
title={Assessing FIFO and Round Robin Scheduling:Effects on Data Pipeline
Performance and Energy Usage},
author={Malobika Roy Choudhury and Akshat Mehrotra},
journal={arXiv preprint arXiv:2409.15704},
year={2024},
archivePrefix={arXiv},
eprint={2409.15704},
primaryClass={cs.OS}
}
|
choudhury2024assessing
|
arxiv-661157
|
2409.15705
|
Toward Conceptual Modeling for Propositional Logic: Propositions as Events
|
<|reference_start|>Toward Conceptual Modeling for Propositional Logic: Propositions as Events: Applying logic in the area of conceptual modeling has been investigated widely, yet there has been limited uptake of logic-based conceptual modeling in industry. According to some researchers, another formalization of such tools as EER or UML class diagrams in logic may only marginally contribute to the body of knowledge. This paper reflects on applying propositional logic language to a high-level diagrammatic representation called the thinging machines (TM) model. We explore the relationship between conceptual modeling and logic, including such issues as: What logical constructs model? How does truth fit into the picture produced in conceptual modeling as a representation of some piece of the world it is about? The ultimate research objective is a quest for a thorough semantic alignment of TM modeling and propositional logic into a single structure. Examples that involve the application of propositional logic in certain areas of reality are TM remodeled, where propositions are viewed as TM regions or events. As it turned out, TM seems to shed light on the semantics of propositions. In such a conceptual framework, logical truth is a matter of how things are in actuality and how falsehood is in subsistence. The results show that propositional logic enriches the rigorousness of conceptual descriptions and that the TM semantic apparatus complements propositional logic by providing a background to the given set of propositions. Semantics matters are applied to propositional constructs such as negative propositions, disjunctions, and conjunctions with negative terms.<|reference_end|>
|
arxiv
|
@article{al-fedaghi2024toward,
title={Toward Conceptual Modeling for Propositional Logic: Propositions as
Events},
author={Sabah Al-Fedaghi},
journal={arXiv preprint arXiv:2409.15705},
year={2024},
archivePrefix={arXiv},
eprint={2409.15705},
primaryClass={cs.SE}
}
|
al-fedaghi2024toward
|
arxiv-661158
|
2409.15706
|
Improving Emotional Support Delivery in Text-Based Community Safety Reporting Using Large Language Models
|
<|reference_start|>Improving Emotional Support Delivery in Text-Based Community Safety Reporting Using Large Language Models: Emotional support is a crucial aspect of communication between community members and police dispatchers during incident reporting. However, there is a lack of understanding about how emotional support is delivered through text-based systems, especially in various non-emergency contexts. In this study, we analyzed two years of chat logs comprising 57,114 messages across 8,239 incidents from 130 higher education institutions. Our empirical findings revealed significant variations in emotional support provided by dispatchers, influenced by the type of incident, service time, and a noticeable decline in support over time across multiple organizations. To improve the consistency and quality of emotional support, we developed and implemented a fine-tuned Large Language Model (LLM), named dispatcherLLM. We evaluated dispatcherLLM by comparing its generated responses to those of human dispatchers and other off-the-shelf models using real chat messages. Additionally, we conducted a human evaluation to assess the perceived effectiveness of the support provided by dispatcherLLM. This study not only contributes new empirical understandings of emotional support in text-based dispatch systems but also demonstrates the significant potential of generative AI in improving service delivery.<|reference_end|>
|
arxiv
|
@article{liu2024improving,
title={Improving Emotional Support Delivery in Text-Based Community Safety
Reporting Using Large Language Models},
author={Yiren Liu and Yerong Li and Ryan Mayfield and Yun Huang},
journal={arXiv preprint arXiv:2409.15706},
year={2024},
archivePrefix={arXiv},
eprint={2409.15706},
primaryClass={cs.HC cs.AI}
}
|
liu2024improving
|
arxiv-661159
|
2409.15708
|
Open-/Closed-loop Active Learning for Data-driven Predictive Control
|
<|reference_start|>Open-/Closed-loop Active Learning for Data-driven Predictive Control: An important question in data-driven control is how to obtain an informative dataset. In this work, we consider the problem of effective data acquisition of an unknown linear system with bounded disturbance for both open-loop and closed-loop stages. The learning objective is to minimize the volume of the set of admissible systems. First, a performance measure based on historical data and the input sequence is introduced to characterize the upper bound of the volume of the set of admissible systems. On the basis of this performance measure, an open-loop active learning strategy is proposed to minimize the volume by actively designing inputs during the open-loop stage. For the closed-loop stage, a closed-loop active learning strategy is designed to select and learn from informative closed-loop data. The efficiency of the proposed closed-loop active learning strategy is proved by showing that the unselected data cannot benefit the learning performance. Furthermore, an adaptive predictive controller is designed in accordance with the proposed data acquisition approach. The recursive feasibility and the stability of the controller are proved by analyzing the effect of the closed-loop active learning strategy. Finally, numerical examples and comparisons illustrate the effectiveness of the proposed data acquisition strategy.<|reference_end|>
|
arxiv
|
@article{feng2024open-/closed-loop,
title={Open-/Closed-loop Active Learning for Data-driven Predictive Control},
author={Shilun Feng and Dawei Shi and Yang Shi and Kaikai Zheng},
journal={arXiv preprint arXiv:2409.15708},
year={2024},
archivePrefix={arXiv},
eprint={2409.15708},
primaryClass={eess.SY cs.SY}
}
|
feng2024open-/closed-loop
|
arxiv-661160
|
2409.15710
|
Autotuning Bipedal Locomotion MPC with GRFM-Net for Efficient Sim-to-Real Transfer
|
<|reference_start|>Autotuning Bipedal Locomotion MPC with GRFM-Net for Efficient Sim-to-Real Transfer: Bipedal locomotion control is essential for humanoid robots to navigate complex, human-centric environments. While optimization-based control designs are popular for integrating sophisticated models of humanoid robots, they often require labor-intensive manual tuning. In this work, we address the challenges of parameter selection in bipedal locomotion control using DiffTune, a model-based autotuning method that leverages differential programming for efficient parameter learning. A major difficulty lies in balancing model fidelity with differentiability. We address this difficulty using a low-fidelity model for differentiability, enhanced by a Ground Reaction Force-and-Moment Network (GRFM-Net) to capture discrepancies between MPC commands and actual control effects. We validate the parameters learned by DiffTune with GRFM-Net in hardware experiments, which demonstrates the parameters' optimality in a multi-objective setting compared with baseline parameters, reducing the total loss by up to 40.5$\%$ compared with the expert-tuned parameters. The results confirm the GRFM-Net's effectiveness in mitigating the sim-to-real gap, improving the transferability of simulation-learned parameters to real hardware.<|reference_end|>
|
arxiv
|
@article{chen2024autotuning,
title={Autotuning Bipedal Locomotion MPC with GRFM-Net for Efficient
Sim-to-Real Transfer},
author={Qianzhong Chen and Junheng Li and Sheng Cheng and Naira Hovakimyan and Quan Nguyen},
journal={arXiv preprint arXiv:2409.15710},
year={2024},
archivePrefix={arXiv},
eprint={2409.15710},
primaryClass={cs.RO cs.AI cs.SY eess.SY}
}
|
chen2024autotuning
|
arxiv-661161
|
2409.15711
|
Adversarial Federated Consensus Learning for Surface Defect Classification Under Data Heterogeneity in IIoT
|
<|reference_start|>Adversarial Federated Consensus Learning for Surface Defect Classification Under Data Heterogeneity in IIoT: The challenge of data scarcity hinders the application of deep learning in industrial surface defect classification (SDC), as it's difficult to collect and centralize sufficient training data from various entities in Industrial Internet of Things (IIoT) due to privacy concerns. Federated learning (FL) provides a solution by enabling collaborative global model training across clients while maintaining privacy. However, performance may suffer due to data heterogeneity--discrepancies in data distributions among clients. In this paper, we propose a novel personalized FL (PFL) approach, named Adversarial Federated Consensus Learning (AFedCL), for the challenge of data heterogeneity across different clients in SDC. First, we develop a dynamic consensus construction strategy to mitigate the performance degradation caused by data heterogeneity. Through adversarial training, local models from different clients utilize the global model as a bridge to achieve distribution alignment, alleviating the problem of global knowledge forgetting. Complementing this strategy, we propose a consensus-aware aggregation mechanism. It assigns aggregation weights to different clients based on their efficacy in global knowledge learning, thereby enhancing the global model's generalization capabilities. Finally, we design an adaptive feature fusion module to further enhance global knowledge utilization efficiency. Personalized fusion weights are gradually adjusted for each client to optimally balance global and local features, tailored to their individual global knowledge learning efficacy. Compared with state-of-the-art FL methods like FedALA, the proposed AFedCL method achieves an accuracy increase of up to 5.67% on three SDC datasets.<|reference_end|>
|
arxiv
|
@article{cui2024adversarial,
title={Adversarial Federated Consensus Learning for Surface Defect
Classification Under Data Heterogeneity in IIoT},
author={Jixuan Cui and Jun Li and Zhen Mei and Yiyang Ni and Wen Chen and Zengxiang Li},
journal={arXiv preprint arXiv:2409.15711},
year={2024},
archivePrefix={arXiv},
eprint={2409.15711},
primaryClass={cs.LG cs.AI eess.SP}
}
|
cui2024adversarial
|
arxiv-661162
|
2409.15713
|
Hardness of Approximate Sperner and Applications to Envy-Free Cake Cutting
|
<|reference_start|>Hardness of Approximate Sperner and Applications to Envy-Free Cake Cutting: Given a so called ''Sperner coloring'' of a triangulation of the $D$-dimensional simplex, Sperner's lemma guarantees the existence of a rainbow simplex, i.e. a simplex colored by all $D+1$ colors. However, finding a rainbow simplex was the first problem to be proven $\mathsf{PPAD}$-complete in Papadimitriou's classical paper introducing the class $\mathsf{PPAD}$ (1994). In this paper, we prove that the problem does not become easier if we relax ''all $D+1$ colors'' to allow some fraction of missing colors: in fact, for any constant $D$, finding even a simplex with just three colors remains $\mathsf{PPAD}$-complete! Our result has an interesting application for the envy-free cake cutting from fair division. It is known that if agents value pieces of cake using general continuous functions satisfying a simple boundary condition (''a non-empty piece is better than an empty piece of cake''), there exists an envy-free allocation with connected pieces. We show that for any constant number of agents it is $\mathsf{PPAD}$-complete to find an allocation -- even using any constant number of possibly disconnected pieces -- that makes just three agents envy-free. Our results extend to super-constant dimension, number of agents, and number of pieces, as long as they are asymptotically bounded by any $\log^{1-\Omega(1)}(\epsilon)$, where $\epsilon$ is the precision parameter (side length for Sperner and approximate envy-free for cake cutting).<|reference_end|>
|
arxiv
|
@article{gao2024hardness,
title={Hardness of Approximate Sperner and Applications to Envy-Free Cake
Cutting},
author={Ruiquan Gao and Mohammad Roghani and Aviad Rubinstein and Amin Saberi},
journal={arXiv preprint arXiv:2409.15713},
year={2024},
archivePrefix={arXiv},
eprint={2409.15713},
primaryClass={cs.CC cs.GT}
}
|
gao2024hardness
|
arxiv-661163
|
2409.15715
|
Disentangled Generation and Aggregation for Robust Radiance Fields
|
<|reference_start|>Disentangled Generation and Aggregation for Robust Radiance Fields: The utilization of the triplane-based radiance fields has gained attention in recent years due to its ability to effectively disentangle 3D scenes with a high-quality representation and low computation cost. A key requirement of this method is the precise input of camera poses. However, due to the local update property of the triplane, a similar joint estimation as previous joint pose-NeRF optimization works easily results in local minima. To this end, we propose the Disentangled Triplane Generation module to introduce global feature context and smoothness into triplane learning, which mitigates errors caused by local updating. Then, we propose the Disentangled Plane Aggregation to mitigate the entanglement caused by the common triplane feature aggregation during camera pose updating. In addition, we introduce a two-stage warm-start training strategy to reduce the implicit constraints caused by the triplane generator. Quantitative and qualitative results demonstrate that our proposed method achieves state-of-the-art performance in novel view synthesis with noisy or unknown camera poses, as well as efficient convergence of optimization. Project page: https://gaohchen.github.io/DiGARR/.<|reference_end|>
|
arxiv
|
@article{shen2024disentangled,
title={Disentangled Generation and Aggregation for Robust Radiance Fields},
author={Shihe Shen and Huachen Gao and Wangze Xu and Rui Peng and Luyang
Tang and Kaiqiang Xiong and Jianbo Jiao and Ronggang Wang},
journal={arXiv preprint arXiv:2409.15715},
year={2024},
archivePrefix={arXiv},
eprint={2409.15715},
primaryClass={cs.CV cs.GR}
}
|
shen2024disentangled
|
arxiv-661164
|
2409.15717
|
Autonomous Wheel Loader Navigation Using Goal-Conditioned Actor-Critic MPC
|
<|reference_start|>Autonomous Wheel Loader Navigation Using Goal-Conditioned Actor-Critic MPC: This paper proposes a novel control method for an autonomous wheel loader, enabling time-efficient navigation to an arbitrary goal pose. Unlike prior works that combine high-level trajectory planners with Model Predictive Control (MPC), we directly enhance the planning capabilities of MPC by integrating a cost function derived from Actor-Critic Reinforcement Learning (RL). Specifically, we train an RL agent to solve the pose reaching task in simulation, then incorporate the trained neural network critic as both the stage and terminal cost of an MPC. We show through comprehensive simulations that the resulting MPC inherits the time-efficient behavior of the RL agent, generating trajectories that compare favorably against those found using trajectory optimization. We also deploy our method on a real wheel loader, where we successfully navigate to various goal poses. In contrast, the RL actor risked damaging the machine and was unsuitable for real-world use.<|reference_end|>
|
arxiv
|
@article{mäki-penttilä2024autonomous,
title={Autonomous Wheel Loader Navigation Using Goal-Conditioned Actor-Critic
MPC},
author={Aleksi M{\"a}ki-Penttil{\"a} and Naeim Ebrahimi Toulkani and Reza Ghabcheloo},
journal={arXiv preprint arXiv:2409.15717},
year={2024},
archivePrefix={arXiv},
eprint={2409.15717},
primaryClass={cs.RO cs.SY eess.SY}
}
|
mäki-penttilä2024autonomous
|
arxiv-661165
|
2409.15720
|
Optimization of partially isolated quantum harmonic oscillator memory systems by mean square decoherence time criteria
|
<|reference_start|>Optimization of partially isolated quantum harmonic oscillator memory systems by mean square decoherence time criteria: This paper is concerned with open quantum harmonic oscillators with position-momentum system variables, whose internal dynamics and interaction with the environment are governed by linear quantum stochastic differential equations. A recently proposed approach to such systems as Heisenberg picture quantum memories exploits their ability to approximately retain initial conditions over a decoherence horizon. Using the quantum memory decoherence time defined previously in terms of a fidelity threshold on a weighted mean-square deviation of the system variables from their initial values, we apply this approach to a partially isolated subsystem of the oscillator, which is not directly affected by the external fields. The partial isolation leads to an appropriate system decomposition and a qualitatively different short-horizon asymptotic behaviour of the deviation, which yields a longer decoherence time in the high-fidelity limit. The resulting approximate decoherence time maximization over the energy parameters for improving the quantum memory performance is discussed for a coherent feedback interconnection of such systems.<|reference_end|>
|
arxiv
|
@article{vladimirov2024optimization,
title={Optimization of partially isolated quantum harmonic oscillator memory
systems by mean square decoherence time criteria},
author={Igor G. Vladimirov and Ian R. Petersen},
journal={arXiv preprint arXiv:2409.15720},
year={2024},
archivePrefix={arXiv},
eprint={2409.15720},
primaryClass={quant-ph cs.SY eess.SY math.OC}
}
|
vladimirov2024optimization
|
arxiv-661166
|
2409.15721
|
Applying Incremental Learning in Binary-Addition-Tree Algorithm for Dynamic Binary-State Network Reliability
|
<|reference_start|>Applying Incremental Learning in Binary-Addition-Tree Algorithm for Dynamic Binary-State Network Reliability: This paper presents a novel approach to enhance the Binary-Addition-Tree algorithm (BAT) by integrating incremental learning techniques. BAT, known for its simplicity in development, implementation, and application, is a powerful implicit enumeration method for solving network reliability and optimization problems. However, it traditionally struggles with dynamic and large-scale networks due to its static nature. By introducing incremental learning, we enable the BAT to adapt and improve its performance iteratively as it encounters new data or network changes. This integration allows for more efficient computation, reduced redundancy without searching minimal paths and cuts, and improves overall performance in dynamic environments. Experimental results demonstrate the effectiveness of the proposed method, showing significant improvements in both computational efficiency and solution quality compared to the traditional BAT and indirect algorithms, such as MP-based algorithms and MC-based algorithms.<|reference_end|>
|
arxiv
|
@article{yeh2024applying,
title={Applying Incremental Learning in Binary-Addition-Tree Algorithm for
Dynamic Binary-State Network Reliability},
author={Wei-Chang Yeh},
journal={arXiv preprint arXiv:2409.15721},
year={2024},
archivePrefix={arXiv},
eprint={2409.15721},
primaryClass={cs.LG}
}
|
yeh2024applying
|
arxiv-661167
|
2409.15723
|
Federated Large Language Models: Current Progress and Future Directions
|
<|reference_start|>Federated Large Language Models: Current Progress and Future Directions: Large language models (LLMs) are rapidly gaining popularity and have been widely adopted in real-world applications. While the quality of training data is essential, privacy concerns arise during data collection. Federated learning (FL) offers a solution by allowing multiple clients to collaboratively train LLMs without sharing local data. However, FL introduces new challenges, such as model convergence issues due to heterogeneous data and high communication costs. A comprehensive study is required to address these challenges and guide future research. This paper surveys Federated learning for LLMs (FedLLM), highlighting recent advances and future directions. We focus on two key aspects: fine-tuning and prompt learning in a federated setting, discussing existing work and associated research challenges. We finally propose potential research directions for federated LLMs, including pre-training and how LLMs can further enhance federated learning.<|reference_end|>
|
arxiv
|
@article{yao2024federated,
title={Federated Large Language Models: Current Progress and Future Directions},
author={Yuhang Yao and Jianyi Zhang and Junda Wu and Chengkai Huang and Yu
Xia and Tong Yu and Ruiyi Zhang and Sungchul Kim and Ryan Rossi and Ang Li
and Lina Yao and Julian McAuley and Yiran Chen and Carlee Joe-Wong},
journal={arXiv preprint arXiv:2409.15723},
year={2024},
archivePrefix={arXiv},
eprint={2409.15723},
primaryClass={cs.LG cs.CL}
}
|
yao2024federated
|
arxiv-661168
|
2409.15724
|
LLM-Cure: LLM-based Competitor User Review Analysis for Feature Enhancement
|
<|reference_start|>LLM-Cure: LLM-based Competitor User Review Analysis for Feature Enhancement: The exponential growth of the mobile app market underscores the importance of constant innovation and rapid response to user demands. As user satisfaction is paramount to the success of a mobile application (app), developers typically rely on user reviews, which represent user feedback that includes ratings and comments to identify areas for improvement. However, the sheer volume of user reviews poses challenges in manual analysis, necessitating automated approaches. Existing automated approaches either analyze only the target app's reviews, neglecting the comparison of similar features to competitors, or fail to provide suggestions for feature enhancement. To address these gaps, we propose a Large Language Model (LLM)-based Competitor User Review Analysis for Feature Enhancement (LLM-Cure), an approach powered by LLMs to automatically generate suggestions for mobile app feature improvements. More specifically, LLM-Cure identifies and categorizes features within reviews by applying LLMs. When provided with a complaint in a user review, LLM-Cure curates highly rated (4 and 5 stars) reviews in competing apps related to the complaint and proposes potential improvements tailored to the target application. We evaluate LLM-Cure on 1,056,739 reviews of 70 popular Android apps. Our evaluation demonstrates that LLM-Cure significantly outperforms the state-of-the-art approaches in assigning features to reviews by up to 13% in F1-score, up to 16% in recall and up to 11% in precision. Additionally, LLM-Cure demonstrates its capability to provide suggestions for resolving user complaints. We verify the suggestions using the release notes that reflect the changes of features in the target mobile app. LLM-Cure achieves a promising average implementation rate of 73% for its provided suggestions.<|reference_end|>
|
arxiv
|
@article{assi2024llm-cure:,
title={LLM-Cure: LLM-based Competitor User Review Analysis for Feature
Enhancement},
author={Maram Assi and Safwat Hassan and Ying Zou},
journal={arXiv preprint arXiv:2409.15724},
year={2024},
archivePrefix={arXiv},
eprint={2409.15724},
primaryClass={cs.SE cs.AI cs.IR}
}
|
assi2024llm-cure:
|
arxiv-661169
|
2409.15727
|
LaPose: Laplacian Mixture Shape Modeling for RGB-Based Category-Level Object Pose Estimation
|
<|reference_start|>LaPose: Laplacian Mixture Shape Modeling for RGB-Based Category-Level Object Pose Estimation: While RGBD-based methods for category-level object pose estimation hold promise, their reliance on depth data limits their applicability in diverse scenarios. In response, recent efforts have turned to RGB-based methods; however, they face significant challenges stemming from the absence of depth information. On one hand, the lack of depth exacerbates the difficulty in handling intra-class shape variation, resulting in increased uncertainty in shape predictions. On the other hand, RGB-only inputs introduce inherent scale ambiguity, rendering the estimation of object size and translation an ill-posed problem. To tackle these challenges, we propose LaPose, a novel framework that models the object shape as the Laplacian mixture model for Pose estimation. By representing each point as a probabilistic distribution, we explicitly quantify the shape uncertainty. LaPose leverages both a generalized 3D information stream and a specialized feature stream to independently predict the Laplacian distribution for each point, capturing different aspects of object geometry. These two distributions are then integrated as a Laplacian mixture model to establish the 2D-3D correspondences, which are utilized to solve the pose via the PnP module. In order to mitigate scale ambiguity, we introduce a scale-agnostic representation for object size and translation, enhancing training efficiency and overall robustness. Extensive experiments on the NOCS datasets validate the effectiveness of LaPose, yielding state-of-the-art performance in RGB-based category-level object pose estimation. Codes are released at https://github.com/lolrudy/LaPose<|reference_end|>
|
arxiv
|
@article{zhang2024lapose:,
title={LaPose: Laplacian Mixture Shape Modeling for RGB-Based Category-Level
Object Pose Estimation},
author={Ruida Zhang and Ziqin Huang and Gu Wang and Chenyangguang Zhang and
Yan Di and Xingxing Zuo and Jiwen Tang and Xiangyang Ji},
journal={arXiv preprint arXiv:2409.15727},
year={2024},
archivePrefix={arXiv},
eprint={2409.15727},
primaryClass={cs.CV}
}
|
zhang2024lapose:
|
arxiv-661170
|
2409.15729
|
Sequential Learning in the Dense Associative Memory
|
<|reference_start|>Sequential Learning in the Dense Associative Memory: Sequential learning involves learning tasks in a sequence, and proves challenging for most neural networks. Biological neural networks regularly conquer the sequential learning challenge and are even capable of transferring knowledge both forward and backwards between tasks. Artificial neural networks often totally fail to transfer performance between tasks, and regularly suffer from degraded performance or catastrophic forgetting on previous tasks. Models of associative memory have been used to investigate the discrepancy between biological and artificial neural networks due to their biological ties and inspirations, of which the Hopfield network is perhaps the most studied model. The Dense Associative Memory, or modern Hopfield network, generalizes the Hopfield network, allowing for greater capacities and prototype learning behaviors, while still retaining the associative memory structure. We investigate the performance of the Dense Associative Memory in sequential learning problems, and benchmark various sequential learning techniques in the network. We give a substantial review of the sequential learning space with particular respect to the Hopfield network and associative memories, as well as describe the techniques we implement in detail. We also draw parallels between the classical and Dense Associative Memory in the context of sequential learning, and discuss the departures from biological inspiration that may influence the utility of the Dense Associative Memory as a tool for studying biological neural networks. We present our findings, and show that existing sequential learning methods can be applied to the Dense Associative Memory to improve sequential learning performance.<|reference_end|>
|
arxiv
|
@article{mcalister2024sequential,
title={Sequential Learning in the Dense Associative Memory},
author={Hayden McAlister and Anthony Robins and Lech Szymanski},
journal={arXiv preprint arXiv:2409.15729},
year={2024},
archivePrefix={arXiv},
eprint={2409.15729},
primaryClass={cs.NE cs.AI}
}
|
mcalister2024sequential
|
arxiv-661171
|
2409.15730
|
Learning Multiple Probabilistic Decisions from Latent World Model in Autonomous Driving
|
<|reference_start|>Learning Multiple Probabilistic Decisions from Latent World Model in Autonomous Driving: The autoregressive world model exhibits robust generalization capabilities in vectorized scene understanding but encounters difficulties in deriving actions due to insufficient uncertainty modeling and self-delusion. In this paper, we explore the feasibility of deriving decisions from an autoregressive world model by addressing these challenges through the formulation of multiple probabilistic hypotheses. We propose LatentDriver, a framework that models the environment's next states and the ego vehicle's possible actions as a mixture distribution, from which a deterministic control signal is then derived. By incorporating mixture modeling, the stochastic nature of decision-making is captured. Additionally, the self-delusion problem is mitigated by providing intermediate actions sampled from a distribution to the world model. Experimental results on the recently released close-loop benchmark Waymax demonstrate that LatentDriver surpasses state-of-the-art reinforcement learning and imitation learning methods, achieving expert-level performance. The code and models will be made available at https://github.com/Sephirex-X/LatentDriver.<|reference_end|>
|
arxiv
|
@article{xiao2024learning,
title={Learning Multiple Probabilistic Decisions from Latent World Model in
Autonomous Driving},
author={Lingyu Xiao and Jiang-Jiang Liu and Sen Yang and Xiaofan Li and
Xiaoqing Ye and Wankou Yang and Jingdong Wang},
journal={arXiv preprint arXiv:2409.15730},
year={2024},
archivePrefix={arXiv},
eprint={2409.15730},
primaryClass={cs.RO cs.AI}
}
|
xiao2024learning
|
arxiv-661172
|
2409.15732
|
Hypothesis Clustering and Merging: Novel MultiTalker Speech Recognition with Speaker Tokens
|
<|reference_start|>Hypothesis Clustering and Merging: Novel MultiTalker Speech Recognition with Speaker Tokens: In many real-world scenarios, such as meetings, multiple speakers are present with an unknown number of participants, and their utterances often overlap. We address these multi-speaker challenges by a novel attention-based encoder-decoder method augmented with special speaker class tokens obtained by speaker clustering. During inference, we select multiple recognition hypotheses conditioned on predicted speaker cluster tokens, and these hypotheses are merged by agglomerative hierarchical clustering (AHC) based on the normalized edit distance. The clustered hypotheses result in the multi-speaker transcriptions with the appropriate number of speakers determined by AHC. Our experiments on the LibriMix dataset demonstrate that our proposed method was particularly effective in complex 3-mix environments, achieving a 55% relative error reduction on clean data and a 36% relative error reduction on noisy data compared with conventional serialized output training.<|reference_end|>
|
arxiv
|
@article{kashiwagi2024hypothesis,
title={Hypothesis Clustering and Merging: Novel MultiTalker Speech Recognition
with Speaker Tokens},
author={Yosuke Kashiwagi and Hayato Futami and Emiru Tsunoo and Siddhant
Arora and Shinji Watanabe},
journal={arXiv preprint arXiv:2409.15732},
year={2024},
archivePrefix={arXiv},
eprint={2409.15732},
primaryClass={cs.CL cs.SD eess.AS}
}
|
kashiwagi2024hypothesis
|
arxiv-661173
|
2409.15733
|
EvoFA: Evolvable Fast Adaptation for EEG Emotion Recognition
|
<|reference_start|>EvoFA: Evolvable Fast Adaptation for EEG Emotion Recognition: Electroencephalography (EEG)-based emotion recognition has gained significant traction due to its accuracy and objectivity. However, the non-stationary nature of EEG signals leads to distribution drift over time, causing severe performance degradation when the model is reused. While numerous domain adaptation (DA) approaches have been proposed in recent years to address this issue, their reliance on large amounts of target data for calibration restricts them to offline scenarios, rendering them unsuitable for real-time applications. To address this challenge, this paper proposes Evolvable Fast Adaptation (EvoFA), an online adaptive framework tailored for EEG data. EvoFA organically integrates the rapid adaptation of Few-Shot Learning (FSL) and the distribution matching of Domain Adaptation (DA) through a two-stage generalization process. During the training phase, a robust base meta-learning model is constructed for strong generalization. In the testing phase, a designed evolvable meta-adaptation module iteratively aligns the marginal distribution of target (testing) data with the evolving source (training) data within a model-agnostic meta-learning framework, enabling the model to learn the evolving trends of testing data relative to training data and improving online testing performance. Experimental results demonstrate that EvoFA achieves significant improvements compared to the basic FSL method and previous online methods. The introduction of EvoFA paves the way for broader adoption of EEG-based emotion recognition in real-world applications. Our code will be released upon publication.<|reference_end|>
|
arxiv
|
@article{jin2024evofa:,
title={EvoFA: Evolvable Fast Adaptation for EEG Emotion Recognition},
author={Ming Jin and Danni Zhang and Gangming Zhao and Changde Du and Jinpeng Li},
journal={arXiv preprint arXiv:2409.15733},
year={2024},
archivePrefix={arXiv},
eprint={2409.15733},
primaryClass={cs.LG cs.AI}
}
|
jin2024evofa:
|
arxiv-661174
|
2409.15734
|
Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models
|
<|reference_start|>Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models: In this work, we consider solving optimization problems with a stochastic objective and deterministic equality constraints. We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points. Our method utilizes a random model to represent the objective function, which is constructed from stochastic observations of the objective and is designed to satisfy proper adaptive accuracy conditions with a high but fixed probability. To converge to first-order stationary points, our method computes a gradient step in each iteration defined by minimizing a quadratic approximation of the objective subject to a (relaxed) linear approximation of the problem constraints and a trust-region constraint. To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix, as well as a second-order correction step to address the potential Maratos effect, which arises due to the nonlinearity of the problem constraints. Such an effect may impede the method from moving away from saddle points. Both gradient and eigen step computations leverage a novel parameter-free decomposition of the step and the trust-region radius, accounting for the proportions among the feasibility residual, optimality residual, and negative curvature. We establish global almost sure first- and second-order convergence guarantees for our method, and present computational results on CUTEst problems, regression problems, and saddle-point problems to demonstrate its superiority over existing line-search-based stochastic methods.<|reference_end|>
|
arxiv
|
@article{fang2024trust-region,
title={Trust-Region Sequential Quadratic Programming for Stochastic
Optimization with Random Models},
author={Yuchen Fang and Sen Na and Michael W. Mahoney and Mladen Kolar},
journal={arXiv preprint arXiv:2409.15734},
year={2024},
archivePrefix={arXiv},
eprint={2409.15734},
primaryClass={math.OC cs.LG cs.NA math.NA stat.CO stat.ML}
}
|
fang2024trust-region
|
arxiv-661175
|
2409.15735
|
LSAST -- Enhancing Cybersecurity through LLM-supported Static Application Security Testing
|
<|reference_start|>LSAST -- Enhancing Cybersecurity through LLM-supported Static Application Security Testing: In the fast-evolving landscape of cybersecurity, Large Language Models (LLMs) play a pivotal role, continually improving their ability to analyze software code. This paper introduces a novel approach to vulnerability scanning by integrating conservative SAST (Static Application Security Testing) scanners with LLM capabilities, resulting in the creation of LSAST (LLM-supported Static Application Security Testing). Our approach significantly enhances the performance of LLMs in vulnerability scanning, establishing a new standard in this field. We benchmark LSAST's efficiency and compare its results with a state-of-the-art LLM. Additionally, we address the inherent drawbacks of LLMs in vulnerability scanning: their reliance on static training datasets, which leads to the exclusion of the latest vulnerabilities, and the privacy concerns associated with sending code to third-party LLM providers. To mitigate these issues, we utilize an open-source LLM to ensure privacy and employ a novel approach to gather relevant vulnerability information, thereby equipping the LLM with up-to-date knowledge.<|reference_end|>
|
arxiv
|
@article{keltek2024lsast,
title={LSAST -- Enhancing Cybersecurity through LLM-supported Static
Application Security Testing},
author={Mete Keltek and Rong Hu and Mohammadreza Fani Sani and Ziyue Li},
journal={arXiv preprint arXiv:2409.15735},
year={2024},
archivePrefix={arXiv},
eprint={2409.15735},
primaryClass={cs.CR}
}
|
keltek2024lsast
|
arxiv-661176
|
2409.15736
|
SoMaSLAM: 2D Graph SLAM for Sparse Range Sensing with Soft Manhattan World Constraints
|
<|reference_start|>SoMaSLAM: 2D Graph SLAM for Sparse Range Sensing with Soft Manhattan World Constraints: We propose a graph SLAM algorithm for sparse range sensing that incorporates a soft Manhattan world utilizing landmark-landmark constraints. Sparse range sensing is necessary for tiny robots that do not have the luxury of using heavy and expensive sensors. Existing SLAM methods dealing with sparse range sensing lack accuracy and accumulate drift error over time due to limited access to data points. Algorithms that cover this flaw using structural regularities, such as the Manhattan world (MW), have shortcomings when mapping real-world environments that do not coincide with the rules. We propose SoMaSLAM, a 2D graph SLAM designed for tiny robots with sparse range sensing. Our approach effectively maps sparse range data without enforcing strict structural regularities and maintains an adaptive graph. We implement the MW assumption as soft constraints, which we refer to as a soft Manhattan world. We propose novel soft landmark-landmark constraints to incorporate the soft MW into graph SLAM. Through extensive evaluation, we demonstrate that our proposed SoMaSLAM method improves localization accuracy on diverse datasets and is flexible enough to be used in the real world. We release our source code and sparse range datasets at https://SoMaSLAM.github.io/.<|reference_end|>
|
arxiv
|
@article{han2024somaslam:,
title={SoMaSLAM: 2D Graph SLAM for Sparse Range Sensing with Soft Manhattan
World Constraints},
author={Jeahn Han and Zichao Hu and Seonmo Yang and Minji Kim and Pyojin Kim},
journal={arXiv preprint arXiv:2409.15736},
year={2024},
archivePrefix={arXiv},
eprint={2409.15736},
primaryClass={cs.RO}
}
|
han2024somaslam:
|
arxiv-661177
|
2409.15737
|
Reinforcement Learning for Infinite-Dimensional Systems
|
<|reference_start|>Reinforcement Learning for Infinite-Dimensional Systems: Interest in reinforcement learning (RL) for massive-scale systems consisting of large populations of intelligent agents interacting with heterogeneous environments has witnessed a significant surge in recent years across diverse scientific domains. However, due to the large-scale nature of the system, the majority of state-of-the-art RL techniques either encounter high computational cost or exhibit compromised performance. To mitigate these challenges, we propose a novel RL architecture along with the derivation of effective algorithms to learn optimal policies for any arbitrarily large system of agents. Specifically, we model such a system as a parameterized control system defined on an infinite-dimensional function space. We then develop a moment kernel transform to map the parameterized system and the value function of an RL problem into a reproducing kernel Hilbert space. This transformation subsequently generates a finite-dimensional moment representation for this RL problem. Leveraging this representation, we develop a hierarchical algorithm for learning optimal policies for the infinite-dimensional parameterized system. We further enhance the efficiency of the algorithm by exploiting early stopping at each hierarchy, by which we show the fast convergence property of the algorithm through constructing a convergent spectral sequence. The performance and efficiency of the proposed algorithm are validated using practical examples.<|reference_end|>
|
arxiv
|
@article{zhang2024reinforcement,
title={Reinforcement Learning for Infinite-Dimensional Systems},
author={Wei Zhang and Jr-Shin Li},
journal={arXiv preprint arXiv:2409.15737},
year={2024},
archivePrefix={arXiv},
eprint={2409.15737},
primaryClass={eess.SY cs.SY math.OC}
}
|
zhang2024reinforcement
|
arxiv-661178
|
2409.15739
|
Teaching Tailored to Talent: Adverse Weather Restoration via Prompt Pool and Depth-Anything Constraint
|
<|reference_start|>Teaching Tailored to Talent: Adverse Weather Restoration via Prompt Pool and Depth-Anything Constraint: Recent advancements in adverse weather restoration have shown potential, yet the unpredictable and varied combinations of weather degradations in the real world pose significant challenges. Previous methods typically struggle with dynamically handling intricate degradation combinations and carrying out background reconstruction precisely, leading to performance and generalization limitations. Drawing inspiration from prompt learning and the "Teaching Tailored to Talent" concept, we introduce a novel pipeline, T3-DiffWeather. Specifically, we employ a prompt pool that allows the network to autonomously combine sub-prompts to construct weather-prompts, harnessing the necessary attributes to adaptively tackle unforeseen weather input. Moreover, from a scene modeling perspective, we incorporate general prompts constrained by the Depth-Anything feature to provide the scene-specific condition for the diffusion process. Furthermore, by incorporating a contrastive prompt loss, we ensure distinctive representations for both types of prompts via a mutual pushing strategy. Experimental results demonstrate that our method achieves state-of-the-art performance across various synthetic and real-world datasets, markedly outperforming existing diffusion techniques in terms of computational efficiency.<|reference_end|>
|
arxiv
|
@article{chen2024teaching,
title={Teaching Tailored to Talent: Adverse Weather Restoration via Prompt Pool
and Depth-Anything Constraint},
author={Sixiang Chen and Tian Ye and Kai Zhang and Zhaohu Xing and Yunlong
Lin and Lei Zhu},
journal={arXiv preprint arXiv:2409.15739},
year={2024},
archivePrefix={arXiv},
eprint={2409.15739},
primaryClass={cs.CV}
}
|
chen2024teaching
|
arxiv-661179
|
2409.15740
|
Real-Time Pedestrian Detection on IoT Edge Devices: A Lightweight Deep Learning Approach
|
<|reference_start|>Real-Time Pedestrian Detection on IoT Edge Devices: A Lightweight Deep Learning Approach: Artificial intelligence (AI) has become integral to our everyday lives. Computer vision has advanced to the point where it can play the safety critical role of detecting pedestrians at road intersections in intelligent transportation systems and alert vehicular traffic as to potential collisions. Centralized computing analyzes camera feeds and generates alerts for nearby vehicles. However, real-time applications face challenges such as latency, limited data transfer speeds, and the risk of life loss. Edge servers offer a potential solution for real-time applications, providing localized computing and storage resources and lower response times. Unfortunately, edge servers have limited processing power. Lightweight deep learning (DL) techniques enable edge servers to utilize compressed deep neural network (DNN) models. The research explores implementing a lightweight DL model on Artificial Intelligence of Things (AIoT) edge devices. An optimized You Only Look Once (YOLO) based DL model is deployed for real-time pedestrian detection, with detection events transmitted to the edge server using the Message Queuing Telemetry Transport (MQTT) protocol. The simulation results demonstrate that the optimized YOLO model can achieve real-time pedestrian detection, with a fast inference speed of 147 milliseconds, a frame rate of 2.3 frames per second, and an accuracy of 78%, representing significant improvements over baseline models.<|reference_end|>
|
arxiv
|
@article{alfikri2024real-time,
title={Real-Time Pedestrian Detection on IoT Edge Devices: A Lightweight Deep
Learning Approach},
author={Muhammad Dany Alfikri and Rafael Kaliski},
journal={arXiv preprint arXiv:2409.15740},
year={2024},
archivePrefix={arXiv},
eprint={2409.15740},
primaryClass={cs.AI cs.CV cs.NI}
}
|
alfikri2024real-time
|
arxiv-661180
|
2409.15741
|
StyleFusion TTS: Multimodal Style-control and Enhanced Feature Fusion for Zero-shot Text-to-speech Synthesis
|
<|reference_start|>StyleFusion TTS: Multimodal Style-control and Enhanced Feature Fusion for Zero-shot Text-to-speech Synthesis: We introduce StyleFusion-TTS, a prompt and/or audio referenced, style and speaker-controllable, zero-shot text-to-speech (TTS) synthesis system designed to enhance the editability and naturalness of current research literature. We propose a general front-end encoder as a compact and effective module to utilize multimodal inputs including text prompts, audio references, and speaker timbre references in a fully zero-shot manner and produce disentangled style and speaker control embeddings. Our novel approach also leverages a hierarchical conformer structure for the fusion of style and speaker control embeddings, aiming to achieve optimal feature fusion within the current advanced TTS architecture. StyleFusion-TTS is evaluated through multiple metrics, both subjectively and objectively. The system shows promising performance across our evaluations, suggesting its potential to contribute to the advancement of the field of zero-shot text-to-speech synthesis.<|reference_end|>
|
arxiv
|
@article{chen2024stylefusion,
title={StyleFusion TTS: Multimodal Style-control and Enhanced Feature Fusion
for Zero-shot Text-to-speech Synthesis},
author={Zhiyong Chen and Xinnuo Li and Zhiqi Ai and Shugong Xu},
journal={The 7th Chinese Conference on Pattern Recognition and Computer
Vision (PRCV 2024)},
year={2024},
archivePrefix={arXiv},
eprint={2409.15741},
primaryClass={eess.AS cs.SD}
}
|
chen2024stylefusion
|
arxiv-661181
|
2409.15742
|
Enhancing Open-Set Speaker Identification through Rapid Tuning with Speaker Reciprocal Points and Negative Sample
|
<|reference_start|>Enhancing Open-Set Speaker Identification through Rapid Tuning with Speaker Reciprocal Points and Negative Sample: This paper introduces a novel framework for open-set speaker identification in household environments, playing a crucial role in facilitating seamless human-computer interactions. Addressing the limitations of current speaker models and classification approaches, our work integrates a pretrained WavLM frontend with a few-shot rapid tuning neural network (NN) backend for enrollment, employing task-optimized Speaker Reciprocal Points Learning (SRPL) to enhance discrimination across multiple target speakers. Furthermore, we propose an enhanced version of SRPL (SRPL+), which incorporates negative sample learning with both speech-synthesized and real negative samples to significantly improve open-set SID accuracy. Our approach is thoroughly evaluated across various multi-language text-dependent speaker recognition datasets, demonstrating its effectiveness in achieving high usability for complex household multi-speaker recognition scenarios. The proposed system enhances open-set performance by up to 27\% over the direct use of the efficient WavLM base+ model.<|reference_end|>
|
arxiv
|
@article{chen2024enhancing,
title={Enhancing Open-Set Speaker Identification through Rapid Tuning with
Speaker Reciprocal Points and Negative Sample},
author={Zhiyong Chen and Zhiqi Ai and Xinnuo Li and Shugong Xu},
journal={IEEE Spoken Language Technology Workshop 2024},
year={2024},
archivePrefix={arXiv},
eprint={2409.15742},
primaryClass={eess.AS cs.SD}
}
|
chen2024enhancing
|
arxiv-661182
|
2409.15744
|
ViKL: A Mammography Interpretation Framework via Multimodal Aggregation of Visual-knowledge-linguistic Features
|
<|reference_start|>ViKL: A Mammography Interpretation Framework via Multimodal Aggregation of Visual-knowledge-linguistic Features: Mammography is the primary imaging tool for breast cancer diagnosis. Despite significant strides in applying deep learning to interpret mammography images, efforts that focus predominantly on visual features often struggle with generalization across datasets. We hypothesize that integrating additional modalities in the radiology practice, notably the linguistic features of reports and manifestation features embodying radiological insights, offers a more powerful, interpretable and generalizable representation. In this paper, we announce MVKL, the first multimodal mammography dataset encompassing multi-view images, detailed manifestations and reports. Based on this dataset, we focus on the challenging task of unsupervised pretraining and propose ViKL, an innovative framework that synergizes Visual, Knowledge, and Linguistic features. This framework relies solely on pairing information without the necessity for pathology labels, which are often challenging to acquire. ViKL employs a triple contrastive learning approach to merge linguistic and knowledge-based insights with visual data, enabling both inter-modality and intra-modality feature enhancement. Our research yields significant findings: 1) Integrating reports and manifestations with unsupervised visual pretraining, ViKL substantially enhances the pathological classification and fosters multimodal interactions. 2) Manifestations can introduce a novel hard negative sample selection mechanism. 3) The multimodal features demonstrate transferability across different datasets. 4) The multimodal pretraining approach curbs miscalibrations and crafts a high-quality representation space. The MVKL dataset and ViKL code are publicly available at https://github.com/wxwxwwxxx/ViKL to support a broad spectrum of future research.<|reference_end|>
|
arxiv
|
@article{wei2024vikl:,
title={ViKL: A Mammography Interpretation Framework via Multimodal Aggregation
of Visual-knowledge-linguistic Features},
author={Xin Wei and Yaling Tao and Changde Du and Gangming Zhao and Yizhou Yu and Jinpeng Li},
journal={arXiv preprint arXiv:2409.15744},
year={2024},
archivePrefix={arXiv},
eprint={2409.15744},
primaryClass={eess.IV cs.CV}
}
|
wei2024vikl:
|
arxiv-661183
|
2409.15745
|
ManiNeg: Manifestation-guided Multimodal Pretraining for Mammography Classification
|
<|reference_start|>ManiNeg: Manifestation-guided Multimodal Pretraining for Mammography Classification: Breast cancer is a significant threat to human health. Contrastive learning has emerged as an effective method to extract critical lesion features from mammograms, thereby offering a potent tool for breast cancer screening and analysis. A crucial aspect of contrastive learning involves negative sampling, where the selection of appropriate hard negative samples is essential for driving representations to retain detailed information about lesions. In contrastive learning, it is often assumed that features can sufficiently capture semantic content, and that each minibatch inherently includes ideal hard negative samples. However, the characteristics of breast lumps challenge these assumptions. In response, we introduce ManiNeg, a novel approach that leverages manifestations as proxies to mine hard negative samples. Manifestations, which refer to the observable symptoms or signs of a disease, provide a knowledge-driven and robust basis for choosing hard negative samples. This approach benefits from its invariance to model optimization, facilitating efficient sampling. To support ManiNeg and future research endeavors, we developed the MVKL dataset, which includes multi-view mammograms, corresponding reports, meticulously annotated manifestations, and pathologically confirmed benign-malignant outcomes. We evaluate ManiNeg on the benign and malignant classification task. Our results demonstrate that ManiNeg not only improves representation in both unimodal and multimodal contexts but also shows generalization across datasets. The MVKL dataset and our codes are publicly available at https://github.com/wxwxwwxxx/ManiNeg.<|reference_end|>
|
arxiv
|
@article{li2024manineg:,
title={ManiNeg: Manifestation-guided Multimodal Pretraining for Mammography
Classification},
author={Xujun Li and Xin Wei and Jing Jiang and Danxiang Chen and Wei Zhang and Jinpeng Li},
journal={arXiv preprint arXiv:2409.15745},
year={2024},
archivePrefix={arXiv},
eprint={2409.15745},
primaryClass={eess.IV cs.CV}
}
|
li2024manineg:
|
arxiv-661184
|
2409.15746
|
A Differentiable Material Point Method Framework for Shape Morphing
|
<|reference_start|>A Differentiable Material Point Method Framework for Shape Morphing: We present a novel, physically-based morphing technique for elastic shapes, leveraging the differentiable material point method (MPM) with space-time control through per-particle deformation gradients to accommodate complex topology changes. This approach, grounded in MPM's natural handling of dynamic topologies, is enhanced by a chained iterative optimization technique, allowing for the creation of both succinct and extended morphing sequences that maintain coherence over time. Demonstrated across various challenging scenarios, our method is able to produce detailed elastic deformation and topology transitions, all grounded within our physics-based simulation framework.<|reference_end|>
|
arxiv
|
@article{xu2024a,
title={A Differentiable Material Point Method Framework for Shape Morphing},
author={Michael Xu and Chang-Yong Song and David I. W. Levin and David Hyde},
journal={arXiv preprint arXiv:2409.15746},
year={2024},
archivePrefix={arXiv},
eprint={2409.15746},
primaryClass={cs.GR}
}
|
xu2024a
|
arxiv-661185
|
2409.15747
|
Training Neural Networks for Modularity aids Interpretability
|
<|reference_start|>Training Neural Networks for Modularity aids Interpretability: An approach to improve network interpretability is via clusterability, i.e., splitting a model into disjoint clusters that can be studied independently. We find pretrained models to be highly unclusterable and thus train models to be more modular using an ``enmeshment loss'' function that encourages the formation of non-interacting clusters. Using automated interpretability measures, we show that our method finds clusters that learn different, disjoint, and smaller circuits for CIFAR-10 labels. Our approach provides a promising direction for making neural networks easier to interpret.<|reference_end|>
|
arxiv
|
@article{golechha2024training,
title={Training Neural Networks for Modularity aids Interpretability},
author={Satvik Golechha and Dylan Cope and Nandi Schoots},
journal={arXiv preprint arXiv:2409.15747},
year={2024},
archivePrefix={arXiv},
eprint={2409.15747},
primaryClass={cs.LG cs.AI}
}
|
golechha2024training
|
arxiv-661186
|
2409.15749
|
Automated Assessment of Multimodal Answer Sheets in the STEM domain
|
<|reference_start|>Automated Assessment of Multimodal Answer Sheets in the STEM domain: In the domain of education, the integration of technology has led to a transformative era, reshaping traditional learning paradigms. Central to this evolution is the automation of grading processes, particularly within the STEM domain encompassing Science, Technology, Engineering, and Mathematics. While efforts to automate grading have been made in subjects like Literature, the multifaceted nature of STEM assessments presents unique challenges, ranging from quantitative analysis to the interpretation of handwritten diagrams. To address these challenges, this research endeavors to develop efficient and reliable grading methods through the implementation of automated assessment techniques using Artificial Intelligence (AI). Our contributions lie in two key areas: firstly, the development of a robust system for evaluating textual answers in STEM, leveraging sample answers for precise comparison and grading, enabled by advanced algorithms and natural language processing techniques. Secondly, a focus on enhancing diagram evaluation, particularly flowcharts, within the STEM context, by transforming diagrams into textual representations for nuanced assessment using a Large Language Model (LLM). By bridging the gap between visual representation and semantic meaning, our approach ensures accurate evaluation while minimizing manual intervention. Through the integration of models such as CRAFT for text extraction and YoloV5 for object detection, coupled with LLMs like Mistral-7B for textual evaluation, our methodology facilitates comprehensive assessment of multimodal answer sheets. This paper provides a detailed account of our methodology, challenges encountered, results, and implications, emphasizing the potential of AI-driven approaches in revolutionizing grading practices in STEM education.<|reference_end|>
|
arxiv
|
@article{patil2024automated,
title={Automated Assessment of Multimodal Answer Sheets in the STEM domain},
  author={Rajlaxmi Patil and Aditya Ashutosh Kulkarni and Ruturaj Ghatage and
Sharvi Endait and Geetanjali Kale and Raviraj Joshi},
journal={arXiv preprint arXiv:2409.15749},
year={2024},
archivePrefix={arXiv},
eprint={2409.15749},
primaryClass={cs.AI}
}
|
patil2024automated
|
arxiv-661187
|
2409.15750
|
The Roles of Generative Artificial Intelligence in Internet of Electric Vehicles
|
<|reference_start|>The Roles of Generative Artificial Intelligence in Internet of Electric Vehicles: With the advancement of generative artificial intelligence (GenAI) models, their capability to generate content is seeing significant enhancement, leading to widespread applications in the field of data generation and forecasting. Furthermore, GenAI has strong capabilities in data modeling and analysis, which enhances Internet of electric vehicles (IoEV) applications in various aspects. In this paper, we investigate and survey applications of GenAI in the IoEV. Specifically, we categorize GenAI for IoEV into four different layers namely, EV's battery layer, individual electric vehicle (EV) layer, smart grid with EV layer, and security layer. We first introduce various GenAI techniques used in each layer of IoEV applications. Subsequently, public datasets available for training the GenAI models are summarized. Finally, we provide recommendations for future directions. This survey not only categorizes the applications of GenAI in IoEV across different layers but also serves as a valuable resource for researchers and practitioners by highlighting the design and implementation challenges within each layer. Furthermore, it provides a roadmap for future research directions, enabling the development of more robust and efficient IoEV systems through the integration of advanced GenAI techniques.<|reference_end|>
|
arxiv
|
@article{zhang2024the,
title={The Roles of Generative Artificial Intelligence in Internet of Electric
Vehicles},
  author={Hanwen Zhang and Dusit Niyato and Wei Zhang and Changyuan Zhao and
Hongyang Du and Abbas Jamalipour and Sumei Sun and Yiyang Pei},
journal={arXiv preprint arXiv:2409.15750},
year={2024},
archivePrefix={arXiv},
eprint={2409.15750},
primaryClass={cs.LG cs.AI cs.ET}
}
|
zhang2024the
|
arxiv-661188
|
2409.15753
|
Development and Validation of Heparin Dosing Policies Using an Offline Reinforcement Learning Algorithm
|
<|reference_start|>Development and Validation of Heparin Dosing Policies Using an Offline Reinforcement Learning Algorithm: Appropriate medication dosages in the intensive care unit (ICU) are critical for patient survival. Heparin, used to treat thrombosis and inhibit blood clotting in the ICU, requires careful administration due to its complexity and sensitivity to various factors, including patient clinical characteristics, underlying medical conditions, and potential drug interactions. Incorrect dosing can lead to severe complications such as strokes or excessive bleeding. To address these challenges, this study proposes a reinforcement learning (RL)-based personalized optimal heparin dosing policy that guides dosing decisions reliably within the therapeutic range based on individual patient conditions. A batch-constrained policy was implemented to minimize out-of-distribution errors in an offline RL environment and effectively integrate RL with existing clinician policies. The policy's effectiveness was evaluated using weighted importance sampling, an off-policy evaluation method, and the relationship between state representations and Q-values was explored using t-SNE. Both quantitative and qualitative analyses were conducted using the Medical Information Mart for Intensive Care III (MIMIC-III) database, demonstrating the efficacy of the proposed RL-based medication policy. Leveraging advanced machine learning techniques and extensive clinical data, this research enhances heparin administration practices and establishes a precedent for the development of sophisticated decision-support tools in medicine.<|reference_end|>
|
arxiv
|
@article{lim2024development,
title={Development and Validation of Heparin Dosing Policies Using an Offline
Reinforcement Learning Algorithm},
  author={Yooseok Lim and Inbeom Park and Sujee Lee},
journal={arXiv preprint arXiv:2409.15753},
year={2024},
archivePrefix={arXiv},
eprint={2409.15753},
primaryClass={cs.LG cs.AI}
}
|
lim2024development
|
arxiv-661189
|
2409.15754
|
NFTracer: Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics
|
<|reference_start|>NFTracer: Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics: Impact dynamics are crucial for estimating the growth patterns of NFT projects by tracking the diffusion and decay of their relative appeal among stakeholders. Machine learning methods for impact dynamics analysis are incomprehensible and rigid in terms of their interpretability and transparency, whilst stakeholders require interactive tools for informed decision-making. Nevertheless, developing such a tool is challenging due to the substantial, heterogeneous NFT transaction data and the requirements for flexible, customized interactions. To this end, we integrate intuitive visualizations to unveil the impact dynamics of NFT projects. We first conduct a formative study and summarize analysis criteria, including substitution mechanisms, impact attributes, and design requirements from stakeholders. Next, we propose the Minimal Substitution Model to simulate substitutive systems of NFT projects that can be feasibly represented as node-link graphs. Particularly, we utilize attribute-aware techniques to embed the project status and stakeholder behaviors in the layout design. Accordingly, we develop a multi-view visual analytics system, namely NFTracer, allowing interactive analysis of impact dynamics in NFT transactions. We demonstrate the informativeness, effectiveness, and usability of NFTracer by performing two case studies with domain experts and one user study with stakeholders. The studies suggest that NFT projects featuring a higher degree of similarity are more likely to substitute each other. The impact of NFT projects within substitutive systems is contingent upon the degree of stakeholders' influx and projects' freshness.<|reference_end|>
|
arxiv
|
@article{cao2024nftracer:,
title={NFTracer: Tracing NFT Impact Dynamics in Transaction-flow Substitutive
Systems with Visual Analytics},
  author={Yifan Cao and Qing Shi and Lue Shen and Kani Chen and Yang Wang and
Wei Zeng and Huamin Qu},
journal={arXiv preprint arXiv:2409.15754},
year={2024},
doi={10.1109/TVCG.2024.3402834},
archivePrefix={arXiv},
eprint={2409.15754},
primaryClass={cs.CE cs.SI}
}
|
cao2024nftracer:
|
arxiv-661190
|
2409.15755
|
Stage-Wise Reward Shaping for Acrobatic Robots: A Constrained Multi-Objective Reinforcement Learning Approach
|
<|reference_start|>Stage-Wise Reward Shaping for Acrobatic Robots: A Constrained Multi-Objective Reinforcement Learning Approach: As the complexity of tasks addressed through reinforcement learning (RL) increases, the definition of reward functions also has become highly complicated. We introduce an RL method aimed at simplifying the reward-shaping process through intuitive strategies. Initially, instead of a single reward function composed of various terms, we define multiple reward and cost functions within a constrained multi-objective RL (CMORL) framework. For tasks involving sequential complex movements, we segment the task into distinct stages and define multiple rewards and costs for each stage. Finally, we introduce a practical CMORL algorithm that maximizes objectives based on these rewards while satisfying constraints defined by the costs. The proposed method has been successfully demonstrated across a variety of acrobatic tasks in both simulation and real-world environments. Additionally, it has been shown to successfully perform tasks compared to existing RL and constrained RL algorithms. Our code is available at https://github.com/rllab-snu/Stage-Wise-CMORL.<|reference_end|>
|
arxiv
|
@article{kim2024stage-wise,
title={Stage-Wise Reward Shaping for Acrobatic Robots: A Constrained
Multi-Objective Reinforcement Learning Approach},
  author={Dohyeong Kim and Hyeokjin Kwon and Junseok Kim and Gunmin Lee and
Songhwai Oh},
journal={arXiv preprint arXiv:2409.15755},
year={2024},
archivePrefix={arXiv},
eprint={2409.15755},
primaryClass={cs.RO cs.AI}
}
|
kim2024stage-wise
|
arxiv-661191
|
2409.15757
|
Smart Grid Security: A Verified Deep Reinforcement Learning Framework to Counter Cyber-Physical Attacks
|
<|reference_start|>Smart Grid Security: A Verified Deep Reinforcement Learning Framework to Counter Cyber-Physical Attacks: The distributed nature of smart grids, combined with sophisticated sensors, control algorithms, and data collection facilities at Supervisory Control and Data Acquisition (SCADA) centers, makes them vulnerable to strategically crafted cyber-physical attacks. These malicious attacks can manipulate power demands using high-wattage Internet of Things (IoT) botnet devices, such as refrigerators and air conditioners, or introduce false values into transmission line power flow sensor readings. Consequently, grids experience blackouts and high power flow oscillations. Existing grid protection mechanisms, originally designed to tackle natural faults in transmission lines and generator outages, are ineffective against such intelligently crafted attacks. This is because grid operators overlook potential scenarios of cyber-physical attacks during their design phase. In this work, we propose a safe Deep Reinforcement Learning (DRL)-based framework for mitigating attacks on smart grids. The DRL agent effectively neutralizes cyber-physical attacks on grid surfaces by triggering appropriate sequences of existing protection schemes. The safety of the DRL agent is formally verified through a reachability analysis method. Additionally, our framework is designed for deployment on CUDA-enabled GPU systems, which enables faster execution of these protection sequences and their real-time validation. Our framework establishes a new set of protection rules for grid models, successfully thwarting existing cyber-physical attacks.<|reference_end|>
|
arxiv
|
@article{maiti2024smart,
title={Smart Grid Security: A Verified Deep Reinforcement Learning Framework to
Counter Cyber-Physical Attacks},
  author={Suman Maiti and Soumyajit Dey},
journal={arXiv preprint arXiv:2409.15757},
year={2024},
archivePrefix={arXiv},
eprint={2409.15757},
primaryClass={cs.CR}
}
|
maiti2024smart
|
arxiv-661192
|
2409.15759
|
VoiceGuider: Enhancing Out-of-Domain Performance in Parameter-Efficient Speaker-Adaptive Text-to-Speech via Autoguidance
|
<|reference_start|>VoiceGuider: Enhancing Out-of-Domain Performance in Parameter-Efficient Speaker-Adaptive Text-to-Speech via Autoguidance: When applying parameter-efficient finetuning via LoRA onto speaker adaptive text-to-speech models, adaptation performance may decline compared to full-finetuned counterparts, especially for out-of-domain speakers. Here, we propose VoiceGuider, a parameter-efficient speaker adaptive text-to-speech system reinforced with autoguidance to enhance the speaker adaptation performance, reducing the gap against full-finetuned models. We carefully explore various ways of strengthening autoguidance, ultimately finding the optimal strategy. VoiceGuider as a result shows robust adaptation performance especially on extreme out-of-domain speech data. We provide audible samples in our demo page.<|reference_end|>
|
arxiv
|
@article{yeom2024voiceguider:,
title={VoiceGuider: Enhancing Out-of-Domain Performance in Parameter-Efficient
Speaker-Adaptive Text-to-Speech via Autoguidance},
  author={Jiheum Yeom and Heeseung Kim and Jooyoung Choi and Che Hyun Lee and
Nohil Park and Sungroh Yoon},
journal={arXiv preprint arXiv:2409.15759},
year={2024},
archivePrefix={arXiv},
eprint={2409.15759},
primaryClass={cs.SD eess.AS}
}
|
yeom2024voiceguider:
|
arxiv-661193
|
2409.15760
|
NanoVoice: Efficient Speaker-Adaptive Text-to-Speech for Multiple Speakers
|
<|reference_start|>NanoVoice: Efficient Speaker-Adaptive Text-to-Speech for Multiple Speakers: We present NanoVoice, a personalized text-to-speech model that efficiently constructs voice adapters for multiple speakers simultaneously. NanoVoice introduces a batch-wise speaker adaptation technique capable of fine-tuning multiple references in parallel, significantly reducing training time. Beyond building separate adapters for each speaker, we also propose a parameter sharing technique that reduces the number of parameters used for speaker adaptation. By incorporating a novel trainable scale matrix, NanoVoice mitigates potential performance degradation during parameter sharing. NanoVoice achieves performance comparable to the baselines, while training 4 times faster and using 45 percent fewer parameters for speaker adaptation with 40 reference voices. Extensive ablation studies and analysis further validate the efficiency of our model.<|reference_end|>
|
arxiv
|
@article{park2024nanovoice:,
title={NanoVoice: Efficient Speaker-Adaptive Text-to-Speech for Multiple
Speakers},
  author={Nohil Park and Heeseung Kim and Che Hyun Lee and Jooyoung Choi and
Jiheum Yeom and Sungroh Yoon},
journal={arXiv preprint arXiv:2409.15760},
year={2024},
archivePrefix={arXiv},
eprint={2409.15760},
primaryClass={cs.SD eess.AS}
}
|
park2024nanovoice:
|
arxiv-661194
|
2409.15761
|
TFG: Unified Training-Free Guidance for Diffusion Models
|
<|reference_start|>TFG: Unified Training-Free Guidance for Diffusion Models: Given an unconditional diffusion model and a predictor for a target property of interest (e.g., a classifier), the goal of training-free guidance is to generate samples with desirable target properties without additional training. Existing methods, though effective in various individual applications, often lack theoretical grounding and rigorous testing on extensive benchmarks. As a result, they could even fail on simple tasks, and applying them to a new problem becomes unavoidably difficult. This paper introduces a novel algorithmic framework encompassing existing methods as special cases, unifying the study of training-free guidance into the analysis of an algorithm-agnostic design space. Via theoretical and empirical investigation, we propose an efficient and effective hyper-parameter searching strategy that can be readily applied to any downstream task. We systematically benchmark across 7 diffusion models on 16 tasks with 40 targets, and improve performance by 8.5% on average. Our framework and benchmark offer a solid foundation for conditional generation in a training-free manner.<|reference_end|>
|
arxiv
|
@article{ye2024tfg:,
title={TFG: Unified Training-Free Guidance for Diffusion Models},
  author={Haotian Ye and Haowei Lin and Jiaqi Han and Minkai Xu and Sheng Liu
and Yitao Liang and Jianzhu Ma and James Zou and Stefano Ermon},
journal={arXiv preprint arXiv:2409.15761},
year={2024},
archivePrefix={arXiv},
eprint={2409.15761},
primaryClass={cs.LG cs.AI}
}
|
ye2024tfg:
|
arxiv-661195
|
2409.15762
|
XTRUST: On the Multilingual Trustworthiness of Large Language Models
|
<|reference_start|>XTRUST: On the Multilingual Trustworthiness of Large Language Models: Large language models (LLMs) have demonstrated remarkable capabilities across a range of natural language processing (NLP) tasks, capturing the attention of both practitioners and the broader public. A key question that now preoccupies the AI community concerns the capabilities and limitations of these models, with trustworthiness emerging as a central issue, particularly as LLMs are increasingly applied in sensitive fields like healthcare and finance, where errors can have serious consequences. However, most previous studies on the trustworthiness of LLMs have been limited to a single language, typically the predominant one in the dataset, such as English. In response to the growing global deployment of LLMs, we introduce XTRUST, the first comprehensive multilingual trustworthiness benchmark. XTRUST encompasses a diverse range of topics, including illegal activities, hallucination, out-of-distribution (OOD) robustness, physical and mental health, toxicity, fairness, misinformation, privacy, and machine ethics, across 10 different languages. Using XTRUST, we conduct an empirical evaluation of the multilingual trustworthiness of five widely used LLMs, offering an in-depth analysis of their performance across languages and tasks. Our results indicate that many LLMs struggle with certain low-resource languages, such as Arabic and Russian, highlighting the considerable room for improvement in the multilingual trustworthiness of current language models. The code is available at https://github.com/LluckyYH/XTRUST.<|reference_end|>
|
arxiv
|
@article{li2024xtrust:,
title={XTRUST: On the Multilingual Trustworthiness of Large Language Models},
  author={Yahan Li and Yi Wang and Yi Chang and Yuan Wu},
journal={arXiv preprint arXiv:2409.15762},
year={2024},
archivePrefix={arXiv},
eprint={2409.15762},
primaryClass={cs.CL}
}
|
li2024xtrust:
|
arxiv-661196
|
2409.15763
|
IRSC: A Zero-shot Evaluation Benchmark for Information Retrieval through Semantic Comprehension in Retrieval-Augmented Generation Scenarios
|
<|reference_start|>IRSC: A Zero-shot Evaluation Benchmark for Information Retrieval through Semantic Comprehension in Retrieval-Augmented Generation Scenarios: In Retrieval-Augmented Generation (RAG) tasks using Large Language Models (LLMs), the quality of retrieved information is critical to the final output. This paper introduces the IRSC benchmark for evaluating the performance of embedding models in multilingual RAG tasks. The benchmark encompasses five retrieval tasks: query retrieval, title retrieval, part-of-paragraph retrieval, keyword retrieval, and summary retrieval. Our research addresses the current lack of comprehensive testing and effective comparison methods for embedding models in RAG scenarios. We introduced new metrics: the Similarity of Semantic Comprehension Index (SSCI) and the Retrieval Capability Contest Index (RCCI), and evaluated models such as Snowflake-Arctic, BGE, GTE, and M3E. Our contributions include: 1) the IRSC benchmark, 2) the SSCI and RCCI metrics, and 3) insights into the cross-lingual limitations of embedding models. The IRSC benchmark aims to enhance the understanding and development of accurate retrieval systems in RAG tasks. All code and datasets are available at: https://github.com/Jasaxion/IRSC_Benchmark<|reference_end|>
|
arxiv
|
@article{lin2024irsc:,
title={IRSC: A Zero-shot Evaluation Benchmark for Information Retrieval through
Semantic Comprehension in Retrieval-Augmented Generation Scenarios},
  author={Hai Lin and Shaoxiong Zhan and Junyou Su and Haitao Zheng and Hui
Wang},
journal={arXiv preprint arXiv:2409.15763},
year={2024},
archivePrefix={arXiv},
eprint={2409.15763},
primaryClass={cs.IR cs.AI}
}
|
lin2024irsc:
|
arxiv-661197
|
2409.15764
|
Spatial-Temporal Mixture-of-Graph-Experts for Multi-Type Crime Prediction
|
<|reference_start|>Spatial-Temporal Mixture-of-Graph-Experts for Multi-Type Crime Prediction: As various types of crime continue to threaten public safety and economic development, predicting the occurrence of multiple types of crimes becomes increasingly vital for effective prevention measures. Although extensive efforts have been made, most of them overlook the heterogeneity of different crime categories and fail to address the issue of imbalanced spatial distribution. In this work, we propose a Spatial-Temporal Mixture-of-Graph-Experts (ST-MoGE) framework for collective multiple-type crime prediction. To enhance the model's ability to identify diverse spatial-temporal dependencies and mitigate potential conflicts caused by spatial-temporal heterogeneity of different crime categories, we introduce an attentive-gated Mixture-of-Graph-Experts (MGEs) module to capture the distinctive and shared crime patterns of each crime category. Then, we propose Cross-Expert Contrastive Learning(CECL) to update the MGEs and force each expert to focus on specific pattern modeling, thereby reducing blending and redundancy. Furthermore, to address the issue of imbalanced spatial distribution, we propose a Hierarchical Adaptive Loss Re-weighting (HALR) approach to eliminate biases and insufficient learning of data-scarce regions. To evaluate the effectiveness of our methods, we conduct comprehensive experiments on two real-world crime datasets and compare our results with twelve advanced baselines. The experimental results demonstrate the superiority of our methods.<|reference_end|>
|
arxiv
|
@article{wu2024spatial-temporal,
title={Spatial-Temporal Mixture-of-Graph-Experts for Multi-Type Crime
Prediction},
  author={Ziyang Wu and Fan Liu and Jindong Han and Yuxuan Liang and Hao Liu},
journal={arXiv preprint arXiv:2409.15764},
year={2024},
archivePrefix={arXiv},
eprint={2409.15764},
primaryClass={cs.LG cs.AI}
}
|
wu2024spatial-temporal
|
arxiv-661198
|
2409.15765
|
User-Centric Cell-Free Massive MIMO With RIS-Integrated Antenna Arrays
|
<|reference_start|>User-Centric Cell-Free Massive MIMO With RIS-Integrated Antenna Arrays: Cell-free massive MIMO (multiple-input multiple-output) is a promising network architecture for beyond 5G systems, which can particularly offer more uniform data rates across the coverage area. Recent works have shown how reconfigurable intelligent surfaces (RISs) can be used as relays in cell-free massive MIMO networks to improve data rates further. In this paper, we analyze an alternative architecture where an RIS is integrated into the antenna array at each access point and acts as an intelligent transmitting surface to expand the aperture area. This approach alleviates the multiplicative fading effect that normally makes RIS-aided systems inefficient and offers a cost-effective alternative to building large antenna arrays. We use a small number of antennas and a larger number of controllable RIS elements to match the performance of an antenna array whose size matches that of the RIS. In this paper, we explore this innovative transceiver architecture in the uplink of a cell-free massive MIMO system for the first time, demonstrating its potential benefits through analytic and numerical contributions. The simulation results validate the effectiveness of our proposed phase-shift configuration and highlight scenarios where the proposed architecture significantly enhances data rates.<|reference_end|>
|
arxiv
|
@article{demir2024user-centric,
title={User-Centric Cell-Free Massive MIMO With RIS-Integrated Antenna Arrays},
  author={{\"O}zlem Tu{\u{g}}fe Demir and Emil Bj{\"o}rnson},
journal={arXiv preprint arXiv:2409.15765},
year={2024},
archivePrefix={arXiv},
eprint={2409.15765},
primaryClass={eess.SP cs.IT math.IT}
}
|
demir2024user-centric
|
arxiv-661199
|
2409.15766
|
CHBench: A Chinese Dataset for Evaluating Health in Large Language Models
|
<|reference_start|>CHBench: A Chinese Dataset for Evaluating Health in Large Language Models: With the rapid development of large language models (LLMs), assessing their performance on health-related inquiries has become increasingly essential. It is critical that these models provide accurate and trustworthy health information, as their application in real-world contexts--where misinformation can have serious consequences for individuals seeking medical advice and support--depends on their reliability. In this work, we present CHBench, the first comprehensive Chinese Health-related Benchmark designed to evaluate LLMs' capabilities in understanding physical and mental health across diverse scenarios. CHBench includes 6,493 entries related to mental health and 2,999 entries focused on physical health, covering a broad spectrum of topics. This dataset serves as a foundation for evaluating Chinese LLMs' capacity to comprehend and generate accurate health-related information. Our extensive evaluations of four popular Chinese LLMs demonstrate that there remains considerable room for improvement in their understanding of health-related information. The code is available at https://github.com/TracyGuo2001/CHBench.<|reference_end|>
|
arxiv
|
@article{guo2024chbench:,
title={CHBench: A Chinese Dataset for Evaluating Health in Large Language
Models},
  author={Chenlu Guo and Nuo Xu and Yi Chang and Yuan Wu},
journal={arXiv preprint arXiv:2409.15766},
year={2024},
archivePrefix={arXiv},
eprint={2409.15766},
primaryClass={cs.CL}
}
|
guo2024chbench:
|
arxiv-661200
|
2409.15767
|
Representation Loss Minimization with Randomized Selection Strategy for Efficient Environmental Fake Audio Detection
|
<|reference_start|>Representation Loss Minimization with Randomized Selection Strategy for Efficient Environmental Fake Audio Detection: The adaptation of foundation models has significantly advanced environmental audio deepfake detection (EADD), a rapidly growing area of research. These models are typically fine-tuned or utilized in their frozen states for downstream tasks. However, the dimensionality of their representations can substantially lead to a high parameter count of downstream models, leading to higher computational demands. So, a general way is to compress these representations by leveraging state-of-the-art (SOTA) unsupervised dimensionality reduction techniques (PCA, SVD, KPCA, GRP) for efficient EADD. However, with the application of such techniques, we observe a drop in performance. So in this paper, we show that representation vectors contain redundant information, and randomly selecting 40-50% of representation values and building downstream models on it preserves or sometimes even improves performance. We show that such random selection preserves more performance than the SOTA dimensionality reduction techniques while reducing model parameters and inference time by almost over half.<|reference_end|>
|
arxiv
|
@article{phukan2024representation,
title={Representation Loss Minimization with Randomized Selection Strategy for
Efficient Environmental Fake Audio Detection},
  author={Orchid Chetia Phukan and Girish and Mohd Mujtaba Akhtar and Swarup
Ranjan Behera and Nitin Choudhury and Arun Balaji Buduru and Rajesh Sharma
and S. R. Mahadeva Prasanna},
journal={arXiv preprint arXiv:2409.15767},
year={2024},
archivePrefix={arXiv},
eprint={2409.15767},
primaryClass={eess.AS cs.SD}
}
|
phukan2024representation
|