corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-661201
|
2409.15769
|
In-Situ Mode: Generative AI-Driven Characters Transforming Art Engagement Through Anthropomorphic Narratives
|
Art appreciation serves as a crucial medium for emotional communication and sociocultural dialogue. In the digital era, fostering deep user engagement on online art appreciation platforms remains a challenge. Leveraging generative AI technologies, we present EyeSee, a system designed to engage users through anthropomorphic characters. We implemented and evaluated three modes (Narrator, Artist, and In-Situ) acting as a third-person narrator, a first-person creator, and first-person created objects, respectively, across two sessions: Narrative and Recommendation. We conducted a within-subject study with 24 participants. In the Narrative session, we found that the In-Situ and Artist modes had higher aesthetic appeal than the Narrator mode, although the Artist mode showed lower perceived usability. Additionally, from the Narrative to Recommendation session, we found that user-perceived relatability and believability within each interaction mode were sustained, but the user-perceived consistency and stereotypicality changed. Our findings suggest novel implications for applying anthropomorphic in-situ narratives to other educational settings.
|
arxiv
|
@article{li2024in-situ,
title={In-Situ Mode: Generative AI-Driven Characters Transforming Art
Engagement Through Anthropomorphic Narratives},
author={Yongming Li and Hangyue Zhang and Andrea Yaoyun Cui and Zisong Ma and Yunpeng Song and Zhongmin Cai and Yun Huang},
journal={arXiv preprint arXiv:2409.15769},
year={2024},
archivePrefix={arXiv},
eprint={2409.15769},
primaryClass={cs.HC}
}
|
li2024in-situ
|
arxiv-661202
|
2409.15770
|
Optimal preconditioners for nonsymmetric multilevel Toeplitz systems with application to solving non-local evolutionary partial differential equations
|
Preconditioning for multilevel Toeplitz systems has long been a focal point of research in numerical linear algebra. In this work, we develop a novel preconditioning method for a class of nonsymmetric multilevel Toeplitz systems, which includes the all-at-once systems that arise from evolutionary partial differential equations. These systems have recently garnered considerable attention in the literature. To further illustrate our proposed preconditioning strategy, we specifically consider the application of solving a wide range of non-local, time-dependent partial differential equations in a parallel-in-time manner. For these equations, we propose a symmetric positive definite multilevel Tau preconditioner that is not only efficient to implement but can also be adapted as an optimal preconditioner. In this context, the proposed preconditioner is optimal in the sense that it enables mesh-independent convergence when using the preconditioned generalized minimal residual method. Numerical examples are provided to critically analyze the results and underscore the effectiveness of our preconditioning strategy.
|
arxiv
|
@article{huang2024optimal,
title={Optimal preconditioners for nonsymmetric multilevel Toeplitz systems
with application to solving non-local evolutionary partial differential
equations},
author={Yuan-Yuan Huang and Sean Y. Hon and Lot-Kei Chou and Siu-Long Lei},
journal={arXiv preprint arXiv:2409.15770},
year={2024},
archivePrefix={arXiv},
eprint={2409.15770},
primaryClass={math.NA cs.NA}
}
|
huang2024optimal
|
arxiv-661203
|
2409.15771
|
Zero-shot forecasting of chaotic systems
|
Time-series forecasting is a challenging task that traditionally requires specialized models custom-trained for the specific task at hand. Recently, inspired by the success of large language models, foundation models pre-trained on vast amounts of time-series data from diverse domains have emerged as a promising candidate for general-purpose time-series forecasting. The defining characteristic of these foundation models is their ability to perform zero-shot learning, that is, forecasting a new system from limited context data without explicit re-training or fine-tuning. Here, we evaluate whether the zero-shot learning paradigm extends to the challenging task of forecasting chaotic systems. Across 135 distinct chaotic dynamical systems and $10^8$ timepoints, we find that foundation models produce competitive forecasts compared to custom-trained models (including NBEATS, TiDE, etc.), particularly when training data is limited. Interestingly, even after point forecasts fail, foundation models preserve the geometric and statistical properties of the chaotic attractors, demonstrating a surprisingly strong ability to capture the long-term behavior of chaotic dynamical systems. Our results highlight the promises and pitfalls of foundation models in making zero-shot forecasts of chaotic systems.
|
arxiv
|
@article{zhang2024zero-shot,
title={Zero-shot forecasting of chaotic systems},
author={Yuanzhao Zhang and William Gilpin},
journal={arXiv preprint arXiv:2409.15771},
year={2024},
archivePrefix={arXiv},
eprint={2409.15771},
primaryClass={cs.LG nlin.CD physics.comp-ph}
}
|
zhang2024zero-shot
|
arxiv-661204
|
2409.15773
|
Evolving Topics in Federated Learning: Trends, and Emerging Directions for IS
|
Federated learning (FL) is a popular approach that enables organizations to train machine learning models without compromising data privacy and security. As the field of FL continues to grow, it is crucial to have a thorough understanding of the topic, current trends, and future research directions for information systems (IS) researchers. Consequently, this paper conducts a comprehensive computational literature review on FL and presents the research landscape. By utilizing advanced data analytics and leveraging the topic modeling approach, we identified and analyzed the most prominent 15 topics and areas that have influenced the research on FL. We also proposed guiding research questions to stimulate further research directions for IS scholars. Our work is valuable for scholars, practitioners, and policymakers since it offers a comprehensive overview of state-of-the-art research on FL.
|
arxiv
|
@article{uddin2024evolving,
title={Evolving Topics in Federated Learning: Trends, and Emerging Directions
for IS},
author={Md Raihan Uddin and Gauri Shankar and Saddam Hossain Mukta and Prabhat Kumar and Najmul Islam},
journal={arXiv preprint arXiv:2409.15773},
year={2024},
archivePrefix={arXiv},
eprint={2409.15773},
primaryClass={cs.DC}
}
|
uddin2024evolving
|
arxiv-661205
|
2409.15774
|
Bi-Level Belief Space Search for Compliant Part Mating Under Uncertainty
|
The problem of mating two parts with low clearance remains difficult for autonomous robots. We present bi-level belief assembly (BILBA), a model-based planner that computes a sequence of compliant motions which can leverage contact with the environment to reduce uncertainty and perform challenging assembly tasks with low clearance. Our approach is based on first deriving candidate contact schedules from the structure of the configuration space obstacle of the parts and then finding compliant motions that achieve the desired contacts. We demonstrate that BILBA can efficiently compute robust plans on multiple simulated tasks as well as a real robot rectangular peg-in-hole insertion task.
|
arxiv
|
@article{chintalapudi2024bi-level,
title={Bi-Level Belief Space Search for Compliant Part Mating Under Uncertainty},
author={Sahit Chintalapudi and Leslie Kaelbling and Tomas Lozano-Perez},
journal={arXiv preprint arXiv:2409.15774},
year={2024},
archivePrefix={arXiv},
eprint={2409.15774},
primaryClass={cs.RO}
}
|
chintalapudi2024bi-level
|
arxiv-661206
|
2409.15779
|
A Robust, Task-Agnostic and Fully-Scalable Voxel Mapping System for Large Scale Environments
|
Perception remains a challenging problem for autonomous navigation in unknown environments, especially for aerial vehicles. Most mapping algorithms for autonomous navigation are designed specifically for their intended task, which hinders extended usage or cooperative tasks. In this paper, we propose a voxel mapping system that can build an adaptable map for multiple tasks. The system employs a hash table-based map structure and manages each voxel with spatial and temporal priorities, without an explicit map boundary. We also introduce an efficient map-sharing feature with minimal bandwidth to enable multi-agent applications. We tested the system in real-world and simulation environments by applying it to various tasks, including local mapping, global mapping, cooperative multi-agent navigation, and high-speed navigation. Our system proved its capability to build a customizable map with high resolution, wide coverage, and real-time performance regardless of sensor and environment. The system can build a full-resolution map using the map-sharing feature, with over 95% bandwidth reduction relative to raw sensor data.
|
arxiv
|
@article{la2024a,
title={A Robust, Task-Agnostic and Fully-Scalable Voxel Mapping System for
Large Scale Environments},
author={Jinche La and Jun-Gill Kang and Dasol Lee},
journal={arXiv preprint arXiv:2409.15779},
year={2024},
archivePrefix={arXiv},
eprint={2409.15779},
primaryClass={cs.RO}
}
|
la2024a
|
arxiv-661207
|
2409.15780
|
A Learning Framework for Diverse Legged Robot Locomotion Using Barrier-Based Style Rewards
|
This work introduces a model-free reinforcement learning framework that enables various modes of motion (quadruped, tripod, or biped) and diverse tasks for legged robot locomotion. We employ a motion-style reward based on a relaxed logarithmic barrier function as a soft constraint, to bias the learning process toward the desired motion style, such as gait, foot clearance, joint position, or body height. The predefined gait cycle is encoded in a flexible manner, facilitating gait adjustments throughout the learning process. Extensive experiments demonstrate that KAIST HOUND, a 45 kg robotic system, can achieve biped, tripod, and quadruped locomotion using the proposed framework; quadrupedal capabilities include traversing uneven terrain, galloping at 4.67 m/s, and overcoming obstacles up to 58 cm (67 cm for HOUND2); bipedal capabilities include running at 3.6 m/s, carrying a 7.5 kg object, and ascending stairs, all performed without exteroceptive input.
|
arxiv
|
@article{kim2024a,
title={A Learning Framework for Diverse Legged Robot Locomotion Using
Barrier-Based Style Rewards},
author={Gijeong Kim and Yong-Hoon Lee and Hae-Won Park},
journal={arXiv preprint arXiv:2409.15780},
year={2024},
archivePrefix={arXiv},
eprint={2409.15780},
primaryClass={cs.RO}
}
|
kim2024a
|
arxiv-661208
|
2409.15781
|
Training Data Attribution: Was Your Model Secretly Trained On Data Created By Mine?
|
The emergence of text-to-image models has recently sparked significant interest, but it brings with it a looming shadow of potential infringement through violations of user terms. Specifically, an adversary may exploit data created by a commercial model to train their own without proper authorization. To address this risk, it is crucial to investigate the attribution of a suspicious model's training data by determining whether its training data originates, wholly or partially, from a specific source model. To trace the generated data, existing methods require applying extra watermarks during either the training or inference phases of the source model. However, these methods are impractical for pre-trained models that have already been released, especially when model owners lack security expertise. To tackle this challenge, we propose an injection-free training data attribution method for text-to-image models. It can identify whether a suspicious model's training data stems from a source model, without additional modifications to the source model. The crux of our method lies in the inherent memorization characteristic of text-to-image models. Our core insight is that the memorization of the training dataset is passed down through the data generated by the source model to the model trained on that data, making the source model and the infringing model exhibit consistent behaviors on specific samples. Therefore, our approach involves developing algorithms to uncover these distinct samples and using them as inherent watermarks to verify if a suspicious model originates from the source model. Our experiments demonstrate that our method achieves an accuracy of over 80% in identifying the source of a suspicious model's training data, without interfering with the original training or generation process of the source model.
|
arxiv
|
@article{zhang2024training,
title={Training Data Attribution: Was Your Model Secretly Trained On Data
Created By Mine?},
author={Likun Zhang and Hao Wu and Lingcui Zhang and Fengyuan Xu and Jin Cao and Fenghua Li and Ben Niu},
journal={arXiv preprint arXiv:2409.15781},
year={2024},
archivePrefix={arXiv},
eprint={2409.15781},
primaryClass={cs.CV}
}
|
zhang2024training
|
arxiv-661209
|
2409.15782
|
M-Vec: Matryoshka Speaker Embeddings with Flexible Dimensions
|
Fixed-dimensional speaker embeddings have become the dominant approach in speaker modeling, typically spanning hundreds to thousands of dimensions. These dimensions are hyperparameters that are not specifically picked, nor are they hierarchically ordered in terms of importance. In large-scale speaker representation databases, reducing the dimensionality of embeddings can significantly lower storage and computational costs. However, directly training low-dimensional representations often yields suboptimal performance. In this paper, we introduce the Matryoshka speaker embedding, a method that allows dynamic extraction of sub-dimensions from the embedding while maintaining performance. Our approach is validated on the VoxCeleb dataset, demonstrating that it can achieve extremely low-dimensional embeddings, such as 8 dimensions, while preserving high speaker verification performance.
|
arxiv
|
@article{wang2024m-vec,
title={M-Vec: Matryoshka Speaker Embeddings with Flexible Dimensions},
author={Shuai Wang and Pengcheng Zhu and Haizhou Li},
journal={arXiv preprint arXiv:2409.15782},
year={2024},
archivePrefix={arXiv},
eprint={2409.15782},
primaryClass={eess.AS cs.SD}
}
|
wang2024m-vec
|
arxiv-661210
|
2409.15783
|
AnyCar to Anywhere: Learning Universal Dynamics Model for Agile and Adaptive Mobility
|
Recent works in the robot learning community have successfully introduced generalist models capable of controlling various robot embodiments across a wide range of tasks, such as navigation and locomotion. However, achieving agile control, which pushes the limits of robotic performance, still relies on specialist models that require extensive parameter tuning. To leverage generalist-model adaptability and flexibility while achieving specialist-level agility, we propose AnyCar, a transformer-based generalist dynamics model designed for agile control of various wheeled robots. To collect training data, we unify multiple simulators and leverage different physics backends to simulate vehicles with diverse sizes, scales, and physical properties across various terrains. With robust training and real-world fine-tuning, our model enables precise adaptation to different vehicles, even in the wild and under large state estimation errors. In real-world experiments, AnyCar shows both few-shot and zero-shot generalization across a wide range of vehicles and environments, where our model, combined with a sampling-based MPC, outperforms specialist models by up to 54%. These results represent a key step toward building a foundation model for agile wheeled robot control. We will also open-source our framework to support further research.
|
arxiv
|
@article{xiao2024anycar,
title={AnyCar to Anywhere: Learning Universal Dynamics Model for Agile and
Adaptive Mobility},
author={Wenli Xiao and Haoru Xue and Tony Tao and Dvij Kalaria and John M. Dolan and Guanya Shi},
journal={arXiv preprint arXiv:2409.15783},
year={2024},
archivePrefix={arXiv},
eprint={2409.15783},
primaryClass={cs.RO}
}
|
xiao2024anycar
|
arxiv-661211
|
2409.15784
|
Deep-learning real-time phase retrieval of imperfect diffraction patterns from X-ray free-electron lasers
|
Machine learning is attracting surging interest across nearly all scientific areas by enabling the analysis of large datasets and the extraction of scientific information from incomplete data. Data-driven science is rapidly growing, especially in X-ray methodologies, where advanced light sources and detection technologies accumulate vast amounts of data that exceed meticulous human inspection capabilities. Despite the increasing demands, the full application of machine learning has been hindered by the need for data-specific optimizations. In this study, we introduce a new deep-learning-based phase retrieval method for imperfect diffraction data. This method provides robust phase retrieval for simulated data and performs well on weak-signal single-pulse diffraction data from X-ray free-electron lasers. Moreover, the method significantly reduces data processing time, facilitating real-time image reconstructions that are crucial for high-repetition-rate data acquisition. Thus, this approach offers a reliable solution to the phase problem and is expected to be widely adopted across various research areas.
|
arxiv
|
@article{lee2024deep-learning,
title={Deep-learning real-time phase retrieval of imperfect diffraction
patterns from X-ray free-electron lasers},
author={Sung Yun Lee and Do Hyung Cho and Chulho Jung and Daeho Sung and Daewoong Nam and Sangsoo Kim and Changyong Song},
journal={arXiv preprint arXiv:2409.15784},
year={2024},
archivePrefix={arXiv},
eprint={2409.15784},
primaryClass={physics.app-ph cond-mat.mtrl-sci cs.LG physics.optics}
}
|
lee2024deep-learning
|
arxiv-661212
|
2409.15786
|
Improving behavior profile discovery for vehicles
|
Multiple approaches have already been proposed to mimic real driver behaviors in simulation. This article proposes a new one, based solely on undisturbed observations of intersections, from which the behavior profiles for each macro-maneuver are discovered. Using the macro-maneuvers already identified in previous works, a comparison method between trajectories of different lengths using an Extended Kalman Filter (EKF) is proposed, which, combined with an Expectation-Maximization (EM) inspired method, defines the clusters that represent the observed behaviors. This is paired with a Kullback-Leibler (KL) divergence criterion to decide when clusters need to be split or merged. Finally, the behaviors for each macro-maneuver are determined by each discovered cluster, without using any map information about the environment and while remaining dynamically consistent with vehicle motion. Observation makes clear that the two main factors in a driver's behavior are assertiveness and interaction with other road users.
|
arxiv
|
@article{demoura2024improving,
title={Improving behavior profile discovery for vehicles},
author={Nelson de Moura (ASTRA) and Fawzi Nashashibi (ASTRA) and Fernando Garrido},
journal={arXiv preprint arXiv:2409.15786},
year={2024},
archivePrefix={arXiv},
eprint={2409.15786},
primaryClass={cs.RO}
}
|
demoura2024improving
|
arxiv-661213
|
2409.15790
|
Small Language Models: Survey, Measurements, and Insights
|
Small language models (SLMs), despite their widespread adoption in modern smart devices, have received significantly less academic attention compared to their large language model (LLM) counterparts, which are predominantly deployed in data centers and cloud environments. While researchers continue to improve the capabilities of LLMs in the pursuit of artificial general intelligence, SLM research aims to make machine intelligence more accessible, affordable, and efficient for everyday tasks. Focusing on transformer-based, decoder-only language models with 100M-5B parameters, we survey 59 state-of-the-art open-source SLMs, analyzing their technical innovations across three axes: architectures, training datasets, and training algorithms. In addition, we evaluate their capabilities in various domains, including commonsense reasoning, in-context learning, mathematics, and coding. To gain further insight into their on-device runtime costs, we benchmark their inference latency and memory footprints. Through in-depth analysis of our benchmarking data, we offer valuable insights to advance research in this field.
|
arxiv
|
@article{lu2024small,
title={Small Language Models: Survey, Measurements, and Insights},
author={Zhenyan Lu and Xiang Li and Dongqi Cai and Rongjie Yi and Fangming Liu and Xiwen Zhang and Nicholas D. Lane and Mengwei Xu},
journal={arXiv preprint arXiv:2409.15790},
year={2024},
archivePrefix={arXiv},
eprint={2409.15790},
primaryClass={cs.CL cs.AI cs.LG}
}
|
lu2024small
|
arxiv-661214
|
2409.15791
|
Development of Bidirectional Series Elastic Actuator with Torsion Coil Spring and Implementation to the Legged Robot
|
Many studies have been conducted on Series Elastic Actuators (SEA) for robot joints because they are effective in terms of flexibility, safety, and energy efficiency. The ability of SEA to robustly handle unexpected disturbances has raised expectations for practical applications in environments where robots interact with humans. On the other hand, the development and commercialization of small robots for indoor entertainment applications is also actively underway, and it is thought that by using SEA in these robots, dynamic movements such as jumping and running can be realized. In this work, we developed a small and lightweight SEA using coil springs as elastic elements. By devising a method for fixing the coil spring, it is possible to absorb shock and perform highly accurate force measurement in both rotational directions with a simple structure. In addition, to verify the effectiveness of the developed SEA, we created a small single-legged robot with SEA implemented in the three joints of the hip, knee, and ankle, and we conducted a drop test. By adjusting the initial posture and control gain of each joint, we confirmed that flexible landing and continuous hopping are possible with simple PD position control. The measurement results showed that SEA is effective in terms of shock absorption and energy reuse. This work was performed for research purposes only.
|
arxiv
|
@article{koda2024development,
title={Development of Bidirectional Series Elastic Actuator with Torsion Coil
Spring and Implementation to the Legged Robot},
author={Yuta Koda and Hiroshi Osawa and Norio Nagatsuka and Shinichi Kariya and Taeko Inagawa and Kensaku Ishizuka},
journal={arXiv preprint arXiv:2409.15791},
year={2024},
archivePrefix={arXiv},
eprint={2409.15791},
primaryClass={cs.RO}
}
|
koda2024development
|
arxiv-661215
|
2409.15792
|
Regional stability conditions for recurrent neural network-based control systems
|
In this paper we propose novel global and regional stability analysis conditions, based on linear matrix inequalities, for a general class of recurrent neural networks. These conditions can also be used for state-feedback control design, and a suitable optimization problem enforcing H2-norm minimization properties is defined. The theoretical results are corroborated by numerical simulations, showing the advantages and limitations of the methods presented herein.
|
arxiv
|
@article{labella2024regional,
title={Regional stability conditions for recurrent neural network-based control
systems},
author={Alessio La Bella and Marcello Farina and William D'Amico and Luca
Zaccarian},
journal={arXiv preprint arXiv:2409.15792},
year={2024},
archivePrefix={arXiv},
eprint={2409.15792},
primaryClass={eess.SY cs.SY}
}
|
labella2024regional
|
arxiv-661216
|
2409.15793
|
Listing spanning trees of outerplanar graphs by pivot exchanges
|
We prove that the spanning trees of any outerplanar triangulation $G$ can be listed so that any two consecutive spanning trees differ in an exchange of two edges that share an end vertex. For outerplanar graphs $G$ with faces of arbitrary lengths (not necessarily 3) we establish a similar result, with the condition that the two exchanged edges share an end vertex or lie on a common face. These listings of spanning trees are obtained from a simple greedy algorithm that can be implemented efficiently, i.e., in time $\mathcal{O}(n \log n)$ per generated spanning tree, where $n$ is the number of vertices of $G$. Furthermore, the listings correspond to Hamilton paths on the 0/1-polytope that is obtained as the convex hull of the characteristic vectors of all spanning trees of $G$.
|
arxiv
|
@article{behrooznia2024listing,
title={Listing spanning trees of outerplanar graphs by pivot exchanges},
author={Nastaran Behrooznia and Torsten M{\"u}tze},
journal={arXiv preprint arXiv:2409.15793},
year={2024},
archivePrefix={arXiv},
eprint={2409.15793},
primaryClass={cs.DM math.CO}
}
|
behrooznia2024listing
|
arxiv-661217
|
2409.15794
|
Towards Universal Large-Scale Foundational Model for Natural Gas Demand Forecasting
|
In the context of global energy strategy, accurate natural gas demand forecasting is crucial for ensuring efficient resource allocation and operational planning. Traditional forecasting methods struggle to cope with the growing complexity and variability of gas consumption patterns across diverse industries and commercial sectors. To address these challenges, we propose the first foundation model specifically tailored for natural gas demand forecasting. Foundation models, known for their ability to generalize across tasks and datasets, offer a robust solution to the limitations of traditional methods, such as the need for separate models for different customer segments and their limited generalization capabilities. Our approach leverages contrastive learning to improve prediction accuracy in real-world scenarios, particularly by tackling issues such as noise in historical consumption data and the potential misclassification of similar data samples, which can degrade the quality of the learned representation and thus the accuracy of downstream forecasting tasks. By integrating advanced noise filtering techniques within the contrastive learning framework, our model enhances the quality of learned representations, leading to more accurate predictions. Furthermore, the model undergoes industry-specific fine-tuning during pretraining, enabling it to better capture the unique characteristics of gas consumption across various sectors. We conducted extensive experiments using a large-scale dataset from ENN Group, which includes data from over 10,000 industrial, commercial, and welfare-related customers across multiple regions. Our model outperformed existing state-of-the-art methods, demonstrating a relative improvement in MSE of 3.68% and in MASE of 6.15% compared to the best available model.
|
arxiv
|
@article{zhou2024towards,
title={Towards Universal Large-Scale Foundational Model for Natural Gas Demand
Forecasting},
author={Xinxing Zhou and Jiaqi Ye and Shubao Zhao and Ming Jin and Zhaoxiang Hou and Chengyi Yang and Zengxiang Li and Yanlong Wen and Xiaojie Yuan},
journal={arXiv preprint arXiv:2409.15794},
year={2024},
archivePrefix={arXiv},
eprint={2409.15794},
primaryClass={cs.LG cs.AI}
}
|
zhou2024towards
|
arxiv-661218
|
2409.15795
|
Development and Evaluation Study of Intelligent Cockpit in the Age of Large Models
|
The development of Artificial Intelligence (AI) large models has a great impact on application development for the automotive intelligent cockpit. The fused development of the intelligent cockpit and large models has become a new growth point for user experience in the industry, which also creates problems for scholars, practitioners, and users in understanding and evaluating the user experience and capability characteristics of Intelligent Cockpit Large Models (ICLM). This paper analyses the current state of the intelligent cockpit, large models, and AI agents; identifies the key application research focuses for integrating the intelligent cockpit with large models; and puts forward the limitations that must be addressed in subsequently developing an evaluation system for the capability of automotive ICLMs and user experience. The proposed evaluation system, P-CAFE, takes five dimensions of perception, cognition, action, feedback, and evolution as first-level indicators, drawn from the domains of cognitive architecture, user experience, and large-model capability characteristics, and selects many second-level indicators that reflect the current application status and research focuses. After expert evaluation, the weights of the indicators were determined and the indicator system of P-CAFE was established. Finally, a complete evaluation method was constructed based on Fuzzy Hierarchical Analysis. This lays a solid foundation for the application and evaluation of automotive ICLMs, and provides a reference for their future development and improvement.
|
arxiv
|
@article{ma2024development,
title={Development and Evaluation Study of Intelligent Cockpit in the Age of
Large Models},
author={Jun Ma and Meng Wang and Jinhui Pang and Haofen Wang and Xuejing Feng and Zhipeng Hu and Zhenyu Yang and Mingyang Guo and Zhenming Liu and Junwei Wang and Siyi Lu and Zhiming Gou},
journal={arXiv preprint arXiv:2409.15795},
year={2024},
archivePrefix={arXiv},
eprint={2409.15795},
primaryClass={cs.HC}
}
|
ma2024development
|
arxiv-661219
|
2409.15796
|
Finite-Difference Approximations and Local Algorithm for the Poisson and Poisson-Boltzmann Electrostatics
|
We study finite-difference approximations of both Poisson and Poisson-Boltzmann (PB) electrostatic energy functionals for periodic structures constrained by Gauss' law, and a class of local algorithms for minimizing the finite-difference discretization of such functionals. The variable of the Poisson energy is the vector field of electric displacement, and that of the PB energy consists of an electric displacement and ionic concentrations. The displacement is discretized at midpoints of edges of grid boxes, while the concentrations are discretized at grid points. The local algorithm is an iteration over all the grid boxes that locally minimizes the energy on each grid box, keeping Gauss' law satisfied. We prove that the energy functionals admit unique minimizers that are solutions to the corresponding Poisson's and charge-conserved PB equation, respectively. Local equilibrium conditions are identified to characterize the finite-difference minimizers of the discretized energy functionals. These conditions are the curl-free condition for the Poisson case and the discrete Boltzmann distributions for the PB case, respectively. Next, we obtain a uniform bound with respect to the grid size h and O(h^2)-error estimates in maximum norm for the finite-difference minimizers. The local algorithms are detailed, and a new local algorithm with shift is proposed to treat the general case of a variable coefficient for the Poisson energy. We prove the convergence of all these local algorithms, using the characterization of the finite-difference minimizers. Finally, we present numerical tests to demonstrate the results of our analysis.
|
arxiv
|
@article{li2024finite-difference,
title={Finite-Difference Approximations and Local Algorithm for the Poisson and
Poisson-Boltzmann Electrostatics},
author={Bo Li and Qian Yin and Shenggao Zhou},
journal={arXiv preprint arXiv:2409.15796},
year={2024},
archivePrefix={arXiv},
eprint={2409.15796},
primaryClass={math.NA cs.NA}
}
|
li2024finite-difference
|
arxiv-661220
|
2409.15798
|
Positioning Error Compensation by Channel Knowledge Map in UAV Communication Missions
|
<|reference_start|>Positioning Error Compensation by Channel Knowledge Map in UAV Communication Missions: When Unmanned Aerial Vehicles (UAVs) perform high-precision communication tasks, such as searching for users and providing emergency coverage, positioning errors between base stations and users make it challenging to deploy trajectory planning algorithms. To address the challenges caused by position errors, a framework is proposed to compensate for them using a Channel Knowledge Map (CKM), which stores channel state information (CSI). Taking the erroneous positions as input, the generated CKM predicts the signal attenuation close to that at the true positions. Based on this, the predictions are utilized to calculate the received power, and a PPO-based algorithm is applied to optimize the compensation. After training, the framework is able to find a strategy that minimizes the flight time under communication constraints and positioning errors. Besides, the confidence interval is calculated to assist the allocation of power, and the update of the CKM is studied to adapt to dynamic environments. Simulation results show the robustness of the CKM to positioning errors and environmental changes, and the superiority of the CKM-assisted UAV communication design.<|reference_end|>
|
arxiv
|
@article{zhang2024positioning,
title={Positioning Error Compensation by Channel Knowledge Map in UAV
Communication Missions},
author={Chiya Zhang, Ting Wang, Chunlong He},
journal={arXiv preprint arXiv:2409.15798},
year={2024},
archivePrefix={arXiv},
eprint={2409.15798},
primaryClass={eess.SP cs.NI}
}
|
zhang2024positioning
|
arxiv-661221
|
2409.15799
|
WeSep: A Scalable and Flexible Toolkit Towards Generalizable Target Speaker Extraction
|
<|reference_start|>WeSep: A Scalable and Flexible Toolkit Towards Generalizable Target Speaker Extraction: Target speaker extraction (TSE) focuses on isolating the speech of a specific target speaker from overlapped multi-talker speech, which is a typical setup in the cocktail party problem. In recent years, TSE has drawn increasing attention due to its potential for various applications such as user-customized interfaces and hearing aids, or as a crucial front-end processing technology for subsequent tasks such as speech recognition and speaker recognition. However, there are currently few open-source toolkits or available pre-trained models for off-the-shelf usage. In this work, we introduce WeSep, a toolkit designed for research and practical applications in TSE. WeSep features flexible target speaker modeling, scalable data management, effective on-the-fly data simulation, structured recipes, and deployment support. The toolkit is publicly available at \url{https://github.com/wenet-e2e/WeSep}.<|reference_end|>
|
arxiv
|
@article{wang2024wesep:,
title={WeSep: A Scalable and Flexible Toolkit Towards Generalizable Target
Speaker Extraction},
author={Shuai Wang, Ke Zhang, Shaoxiong Lin, Junjie Li, Xuefei Wang, Meng Ge,
Jianwei Yu, Yanmin Qian, Haizhou Li},
journal={arXiv preprint arXiv:2409.15799},
year={2024},
archivePrefix={arXiv},
eprint={2409.15799},
primaryClass={eess.AS cs.SD}
}
|
wang2024wesep:
|
arxiv-661222
|
2409.15801
|
DIAL: Dense Image-text ALignment for Weakly Supervised Semantic Segmentation
|
<|reference_start|>DIAL: Dense Image-text ALignment for Weakly Supervised Semantic Segmentation: Weakly supervised semantic segmentation (WSSS) approaches typically rely on class activation maps (CAMs) for initial seed generation, which often fail to capture global context due to limited supervision from image-level labels. To address this issue, we introduce DALNet, Dense Alignment Learning Network that leverages text embeddings to enhance the comprehensive understanding and precise localization of objects across different levels of granularity. Our key insight is to employ a dual-level alignment strategy: (1) Global Implicit Alignment (GIA) to capture global semantics by maximizing the similarity between the class token and the corresponding text embeddings while minimizing the similarity with background embeddings, and (2) Local Explicit Alignment (LEA) to improve object localization by utilizing spatial information from patch tokens. Moreover, we propose a cross-contrastive learning approach that aligns foreground features between image and text modalities while separating them from the background, encouraging activation in missing regions and suppressing distractions. Through extensive experiments on the PASCAL VOC and MS COCO datasets, we demonstrate that DALNet significantly outperforms state-of-the-art WSSS methods. Our approach, in particular, allows for more efficient end-to-end process as a single-stage method.<|reference_end|>
|
arxiv
|
@article{jang2024dial:,
title={DIAL: Dense Image-text ALignment for Weakly Supervised Semantic
Segmentation},
author={Soojin Jang, Jungmin Yun, Junehyoung Kwon, Eunju Lee, and Youngbin Kim},
journal={arXiv preprint arXiv:2409.15801},
year={2024},
archivePrefix={arXiv},
eprint={2409.15801},
primaryClass={cs.CV}
}
|
jang2024dial:
|
arxiv-661223
|
2409.15802
|
A Multi-Level Approach for Class Imbalance Problem in Federated Learning for Remote Industry 4.0 Applications
|
<|reference_start|>A Multi-Level Approach for Class Imbalance Problem in Federated Learning for Remote Industry 4.0 Applications: Deep neural network (DNN) models are effective solutions for Industry 4.0 applications (e.g., oil spill detection, fire detection, anomaly detection). However, training a DNN model needs a considerable amount of data collected from various sources and transferred to a central cloud server, which can be expensive and privacy-sensitive. For instance, in a remote offshore oil field where network connectivity is vulnerable, a federated fog environment can be a potential computing platform, making it feasible to perform the computation within the federation. On the other hand, performing DNN model training on fog systems poses a security issue that the federated learning (FL) technique can resolve. In this case, the new challenge is the class imbalance problem, which can be inherent in local data sets and can degrade the performance of the global model. Therefore, FL training needs to be performed considering the class imbalance problem locally. In addition, an efficient technique to select relevant worker models needs to be adopted at the global level to increase the robustness of the global model. Accordingly, we utilize a suitable loss function addressing the class imbalance in workers at the local level. In addition, we employ a dynamic threshold mechanism with user-defined worker weights to efficiently select workers for aggregation, which improves the global model's robustness. Finally, we perform an extensive empirical evaluation to explore the benefits of our solution and find up to 3-5% performance improvement over baseline federated learning methods.<|reference_end|>
|
arxiv
|
@article{hussain2024a,
title={A Multi-Level Approach for Class Imbalance Problem in Federated Learning
for Remote Industry 4.0 Applications},
author={Razin Farhan Hussain, Mohsen Amini Salehi},
journal={arXiv preprint arXiv:2409.15802},
year={2024},
archivePrefix={arXiv},
eprint={2409.15802},
primaryClass={cs.DC cs.LG cs.SY eess.SY}
}
|
hussain2024a
|
arxiv-661224
|
2409.15803
|
3D-JEPA: A Joint Embedding Predictive Architecture for 3D Self-Supervised Representation Learning
|
<|reference_start|>3D-JEPA: A Joint Embedding Predictive Architecture for 3D Self-Supervised Representation Learning: Invariance-based and generative methods have shown conspicuous performance for 3D self-supervised representation learning (SSRL). However, the former relies on hand-crafted data augmentations that introduce bias not universally applicable to all downstream tasks, and the latter indiscriminately reconstructs masked regions, resulting in irrelevant details being saved in the representation space. To solve these problems, we introduce 3D-JEPA, a novel non-generative 3D SSRL framework. Specifically, we propose a multi-block sampling strategy that produces a sufficiently informative context block and several representative target blocks. We present the context-aware decoder to enhance the reconstruction of the target blocks. Concretely, the context information is fed to the decoder continuously, facilitating the encoder in learning semantic modeling rather than memorizing the context information related to target blocks. Overall, 3D-JEPA predicts the representation of target blocks from a context block using the encoder and context-aware decoder architecture. Various downstream tasks on different datasets demonstrate 3D-JEPA's effectiveness and efficiency, achieving higher accuracy with fewer pretraining epochs, e.g., 88.65% accuracy on PB_T50_RS with 150 pretraining epochs.<|reference_end|>
|
arxiv
|
@article{hu20243d-jepa:,
title={3D-JEPA: A Joint Embedding Predictive Architecture for 3D
Self-Supervised Representation Learning},
author={Naiwen Hu, Haozhe Cheng, Yifan Xie, Shiqi Li and Jihua Zhu},
journal={arXiv preprint arXiv:2409.15803},
year={2024},
archivePrefix={arXiv},
eprint={2409.15803},
primaryClass={cs.CV}
}
|
hu20243d-jepa:
|
arxiv-661225
|
2409.15804
|
NER-Luxury: Named entity recognition for the fashion and luxury domain
|
<|reference_start|>NER-Luxury: Named entity recognition for the fashion and luxury domain: In this study, we address multiple challenges of developing a named-entity recognition model in English for the fashion and luxury industry, namely entity disambiguation, French technical jargon across multiple sub-sectors, scarcity of the ESG methodology, and the disparate company structures of the sector, ranging from small and medium-sized luxury houses to large conglomerates leveraging economies of scale. In this work, we introduce a taxonomy of 36+ entity types with a luxury-oriented annotation scheme, and create a dataset of more than 40K sentences respecting a clear hierarchical classification. We also present five supervised fine-tuned NER-Luxury models for fashion, beauty, watches, jewelry, fragrances, cosmetics, and overall luxury, focusing equally on the aesthetic side and the quantitative side. In an additional experiment, we quantitatively compare the NER performance of our models against state-of-the-art open-source large language models; the results are promising and highlight the benefits of incorporating a bespoke NER model into existing machine learning pipelines.<|reference_end|>
|
arxiv
|
@article{mousterou2024ner-luxury:,
title={NER-Luxury: Named entity recognition for the fashion and luxury domain},
author={Akim Mousterou},
journal={arXiv preprint arXiv:2409.15804},
year={2024},
archivePrefix={arXiv},
eprint={2409.15804},
primaryClass={cs.CL}
}
|
mousterou2024ner-luxury:
|
arxiv-661226
|
2409.15805
|
Bound Preserving Lax-Wendroff Flux Reconstruction Method for Special Relativistic Hydrodynamics
|
<|reference_start|>Bound Preserving Lax-Wendroff Flux Reconstruction Method for Special Relativistic Hydrodynamics: Lax-Wendroff flux reconstruction (LWFR) schemes have high order of accuracy in both space and time despite having a single internal time step. Here, we design a Jacobian-free LWFR type scheme to solve the special relativistic hydrodynamics equations. We then blend the scheme with a first-order finite volume scheme to control the oscillations near discontinuities. We also use a scaling limiter to preserve the physical admissibility of the solution after ensuring the scheme is admissible in means. A particular focus is given to designing a discontinuity indicator model to detect the local non-smoothness in the solution of the highly non-linear relativistic hydrodynamics equations. Finally, we present the numerical results of a wide range of test cases with fourth and fifth-order schemes to show their robustness and efficiency.<|reference_end|>
|
arxiv
|
@article{basak2024bound,
title={Bound Preserving Lax-Wendroff Flux Reconstruction Method for Special
Relativistic Hydrodynamics},
author={Sujoy Basak, Arpit Babbar, Harish Kumar, Praveen Chandrashekar},
journal={arXiv preprint arXiv:2409.15805},
year={2024},
archivePrefix={arXiv},
eprint={2409.15805},
primaryClass={math.NA cs.NA}
}
|
basak2024bound
|
arxiv-661227
|
2409.15806
|
CLSP: High-Fidelity Contrastive Language-State Pre-training for Agent State Representation
|
<|reference_start|>CLSP: High-Fidelity Contrastive Language-State Pre-training for Agent State Representation: With the rapid development of artificial intelligence, multimodal learning has become an important research area. For intelligent agents, the state is a crucial modality to convey precise information alongside common modalities like images, videos, and language. This becomes especially clear with the broad adoption of reinforcement learning and multimodal large language models. Nevertheless, the representation of state modality still lags in development. To this end, we propose a High-Fidelity Contrastive Language-State Pre-training (CLSP) method, which can accurately encode state information into general representations for both reinforcement learning and multimodal large language models. Specifically, we first design a pre-training task based on the classification to train an encoder with coarse-grained information. Next, we construct data pairs of states and language descriptions, utilizing the pre-trained encoder to initialize the CLSP encoder. Then, we deploy contrastive learning to train the CLSP encoder to effectively represent precise state information. Additionally, we enhance the representation of numerical information using the Random Fourier Features (RFF) method for high-fidelity mapping. Extensive experiments demonstrate the superior precision and generalization capabilities of our representation, achieving outstanding results in text-state retrieval, reinforcement learning navigation tasks, and multimodal large language model understanding.<|reference_end|>
|
arxiv
|
@article{huang2024clsp:,
title={CLSP: High-Fidelity Contrastive Language-State Pre-training for Agent
State Representation},
author={Fuxian Huang, Qi Zhang, Shaopeng Zhai, Jie Wang, Tianyi Zhang, Haoran
Zhang, Ming Zhou, Yu Liu, Yu Qiao},
journal={arXiv preprint arXiv:2409.15806},
year={2024},
archivePrefix={arXiv},
eprint={2409.15806},
primaryClass={cs.AI}
}
|
huang2024clsp:
|
arxiv-661228
|
2409.15808
|
Blockprint Accuracy Study
|
<|reference_start|>Blockprint Accuracy Study: Blockprint, a tool for assessing client diversity on the Ethereum beacon chain, is essential for analyzing decentralization. This paper details experiments conducted at MigaLabs to enhance Blockprint's accuracy, evaluating various configurations for the K-Nearest Neighbors (KNN) classifier and exploring the Multi-Layer Perceptron (MLP) classifier as a proposed alternative. Findings suggest that the MLP classifier generally achieves superior accuracy with a smaller training dataset. The study revealed that clients running in different modes, especially those subscribed to all subnets, impact attestation inclusion differently, leading to proposed methods for mitigating the decline in model accuracy. Consequently, the recommendation is to employ an MLP model trained with a combined dataset of slots from both default and subscribed-to-all-subnets client configurations.<|reference_end|>
|
arxiv
|
@article{somoza2024blockprint,
title={Blockprint Accuracy Study},
author={Santiago Somoza, Tarun Mohandas-Daryanani, Leonardo Bautista-Gomez},
journal={arXiv preprint arXiv:2409.15808},
year={2024},
archivePrefix={arXiv},
eprint={2409.15808},
primaryClass={cs.CR}
}
|
somoza2024blockprint
|
arxiv-661229
|
2409.15809
|
A Computer Vision Approach for Autonomous Cars to Drive Safe at Construction Zone
|
<|reference_start|>A Computer Vision Approach for Autonomous Cars to Drive Safe at Construction Zone: To build a smarter and safer city, a secure, efficient, and sustainable transportation system is a key requirement. The autonomous driving system (ADS) plays an important role in the development of smart transportation and is considered one of the major challenges facing the automotive sector in recent decades. A car equipped with an ADS comes with various cutting-edge functionalities such as adaptive cruise control, collision alerts, automated parking, and more. A primary area of research within ADAS involves identifying road obstacles in construction zones regardless of the driving environment. This paper presents an innovative and highly accurate road obstacle detection model utilizing computer vision technology that can be activated in construction zones and functions under diverse drift conditions, ultimately contributing to building a safer road transportation system. The model developed with the YOLO framework achieved a mean average precision exceeding 94\% and demonstrated an inference time of 1.6 milliseconds on the validation dataset, underscoring the robustness of the methodology applied to mitigate hazards and risks for autonomous vehicles.<|reference_end|>
|
arxiv
|
@article{ahammed2024a,
title={A Computer Vision Approach for Autonomous Cars to Drive Safe at
Construction Zone},
author={Abu Shad Ahammed, Md Shahi Amran Hossain, Roman Obermaisser},
journal={arXiv preprint arXiv:2409.15809},
year={2024},
archivePrefix={arXiv},
eprint={2409.15809},
primaryClass={cs.CV cs.RO}
}
|
ahammed2024a
|
arxiv-661230
|
2409.15810
|
Hyperbolic Image-and-Pointcloud Contrastive Learning for 3D Classification
|
<|reference_start|>Hyperbolic Image-and-Pointcloud Contrastive Learning for 3D Classification: 3D contrastive representation learning has exhibited remarkable efficacy across various downstream tasks. However, existing contrastive learning paradigms based on cosine similarity fail to deeply explore the potential intra-modal hierarchical and cross-modal semantic correlations about multi-modal data in Euclidean space. In response, we seek solutions in hyperbolic space and propose a hyperbolic image-and-pointcloud contrastive learning method (HyperIPC). For the intra-modal branch, we rely on the intrinsic geometric structure to explore the hyperbolic embedding representation of point cloud to capture invariant features. For the cross-modal branch, we leverage images to guide the point cloud in establishing strong semantic hierarchical correlations. Empirical experiments underscore the outstanding classification performance of HyperIPC. Notably, HyperIPC enhances object classification results by 2.8% and few-shot classification outcomes by 5.9% on ScanObjectNN compared to the baseline. Furthermore, ablation studies and confirmatory testing validate the rationality of HyperIPC's parameter settings and the effectiveness of its submodules.<|reference_end|>
|
arxiv
|
@article{hu2024hyperbolic,
title={Hyperbolic Image-and-Pointcloud Contrastive Learning for 3D
Classification},
author={Naiwen Hu, Haozhe Cheng, Yifan Xie, Pengcheng Shi and Jihua Zhu},
journal={arXiv preprint arXiv:2409.15810},
year={2024},
archivePrefix={arXiv},
eprint={2409.15810},
primaryClass={cs.CV}
}
|
hu2024hyperbolic
|
arxiv-661231
|
2409.15812
|
Aided design of bridge aesthetics based on Stable Diffusion fine-tuning
|
<|reference_start|>Aided design of bridge aesthetics based on Stable Diffusion fine-tuning: The Stable Diffusion fine-tuning technique is explored to assist bridge-type innovation. A dataset of real bridge photos is built, and Stable Diffusion is fine-tuned using four methods: Textual Inversion, Dreambooth, Hypernetwork, and Lora. All of them can capture the main characteristics of the dataset images and realize personalized customization of Stable Diffusion. Through fine-tuning, Stable Diffusion is not only a drawing tool but also gains the designer's innovative thinking ability. The fine-tuned model can generate a large number of innovative new bridge types, which can provide rich inspiration for human designers. The results show that this technology can be used as an engine of creativity and a power multiplier for human designers.<|reference_end|>
|
arxiv
|
@article{zhang2024aided,
title={Aided design of bridge aesthetics based on Stable Diffusion fine-tuning},
author={Leye Zhang, Xiangxiang Tian, Chengli Zhang, Hongjun Zhang},
journal={arXiv preprint arXiv:2409.15812},
year={2024},
archivePrefix={arXiv},
eprint={2409.15812},
primaryClass={cs.LG cs.CV}
}
|
zhang2024aided
|
arxiv-661232
|
2409.15813
|
Layer-wise Model Merging for Unsupervised Domain Adaptation in Segmentation Tasks
|
<|reference_start|>Layer-wise Model Merging for Unsupervised Domain Adaptation in Segmentation Tasks: Merging parameters of multiple models has resurfaced as an effective strategy to enhance task performance and robustness, but prior work is limited by the high costs of ensemble creation and inference. In this paper, we leverage the abundance of freely accessible trained models to introduce a cost-free approach to model merging. It focuses on a layer-wise integration of merged models, aiming to maintain the distinctiveness of the task-specific final layers while unifying the initial layers, which are primarily associated with feature extraction. This approach ensures parameter consistency across all layers, essential for boosting performance. Moreover, it facilitates seamless integration of knowledge, enabling effective merging of models from different datasets and tasks. Specifically, we investigate its applicability in Unsupervised Domain Adaptation (UDA), an unexplored area for model merging, for Semantic and Panoptic Segmentation. Experimental results demonstrate substantial UDA improvements without additional costs for merging same-architecture models from distinct datasets ($\uparrow 2.6\%$ mIoU) and different-architecture models with a shared backbone ($\uparrow 6.8\%$ mIoU). Furthermore, merging Semantic and Panoptic Segmentation models increases mPQ by $\uparrow 7\%$. These findings are validated across a wide variety of UDA strategies, architectures, and datasets.<|reference_end|>
|
arxiv
|
@article{alcover-couso2024layer-wise,
title={Layer-wise Model Merging for Unsupervised Domain Adaptation in
Segmentation Tasks},
author={Roberto Alcover-Couso, Juan C. SanMiguel, Marcos Escudero-Vi{\~n}olo,
Jose M. Mart{\'i}nez},
journal={arXiv preprint arXiv:2409.15813},
year={2024},
archivePrefix={arXiv},
eprint={2409.15813},
primaryClass={cs.CV cs.AI cs.MM}
}
|
alcover-couso2024layer-wise
|
arxiv-661233
|
2409.15814
|
Interactive Example-based Explanations to Improve Health Professionals' Onboarding with AI for Human-AI Collaborative Decision Making
|
<|reference_start|>Interactive Example-based Explanations to Improve Health Professionals' Onboarding with AI for Human-AI Collaborative Decision Making: A growing body of research explores the use of AI explanations during users' decision phases for human-AI collaborative decision-making. However, previous studies found issues of overreliance on `wrong' AI outputs. In this paper, we propose interactive example-based explanations to improve health professionals' onboarding with AI, for better reliance on AI during AI-assisted decision-making. We implemented an AI-based decision support system that utilizes a neural network to assess the quality of post-stroke survivors' exercises, and interactive example-based explanations that systematically surface the nearest neighborhoods of a test/task sample from the training set of the AI model to assist users' onboarding with the AI model. To investigate the effect of interactive example-based explanations, we conducted a study with domain experts (health professionals) to evaluate their performance and reliance on AI. Our interactive example-based explanations during onboarding helped health professionals rely on AI more appropriately, achieving a higher ratio of `right' decisions and a lower ratio of `wrong' decisions than providing only feature-based explanations during the decision-support phase. Our study discusses new challenges of assisting users' onboarding with AI for human-AI collaborative decision-making.<|reference_end|>
|
arxiv
|
@article{lee2024interactive,
title={Interactive Example-based Explanations to Improve Health Professionals'
Onboarding with AI for Human-AI Collaborative Decision Making},
author={Min Hun Lee, Renee Bao Xuan Ng, Silvana Xinyi Choo, and Shamala
Thilarajah},
journal={arXiv preprint arXiv:2409.15814},
year={2024},
archivePrefix={arXiv},
eprint={2409.15814},
primaryClass={cs.HC cs.AI cs.LG}
}
|
lee2024interactive
|
arxiv-661234
|
2409.15815
|
AsthmaBot: Multi-modal, Multi-Lingual Retrieval Augmented Generation For Asthma Patient Support
|
<|reference_start|>AsthmaBot: Multi-modal, Multi-Lingual Retrieval Augmented Generation For Asthma Patient Support: Asthma rates have risen globally, driven by environmental and lifestyle factors. Access to immediate medical care is limited, particularly in developing countries, necessitating automated support systems. Large Language Models like ChatGPT (Chat Generative Pre-trained Transformer) and Gemini have advanced natural language processing in general and question answering in particular, however, they are prone to producing factually incorrect responses (i.e. hallucinations). Retrieval-augmented generation systems, integrating curated documents, can improve large language models' performance and reduce the incidence of hallucination. We introduce AsthmaBot, a multi-lingual, multi-modal retrieval-augmented generation system for asthma support. Evaluation of an asthma-related frequently asked questions dataset shows AsthmaBot's efficacy. AsthmaBot has an added interactive and intuitive interface that integrates different data modalities (text, images, videos) to make it accessible to the larger public. AsthmaBot is available online via \url{asthmabot.datanets.org}.<|reference_end|>
|
arxiv
|
@article{bahaj2024asthmabot:,
title={AsthmaBot: Multi-modal, Multi-Lingual Retrieval Augmented Generation For
Asthma Patient Support},
author={Adil Bahaj and Mounir Ghogho},
journal={arXiv preprint arXiv:2409.15815},
year={2024},
archivePrefix={arXiv},
eprint={2409.15815},
primaryClass={cs.AI cs.CL}
}
|
bahaj2024asthmabot:
|
arxiv-661235
|
2409.15816
|
Diffusion Models for Intelligent Transportation Systems: A Survey
|
<|reference_start|>Diffusion Models for Intelligent Transportation Systems: A Survey: Intelligent Transportation Systems (ITS) are vital in modern traffic management and optimization, significantly enhancing traffic efficiency and safety. Recently, diffusion models have emerged as transformative tools for addressing complex challenges within ITS. In this paper, we present a comprehensive survey of diffusion models for ITS, covering both theoretical and practical aspects. First, we introduce the theoretical foundations of diffusion models and their key variants, including conditional diffusion models and latent diffusion models, highlighting their suitability for modeling complex, multi-modal traffic data and enabling controllable generation. Second, we outline the primary challenges in ITS and the corresponding advantages of diffusion models, providing readers with a deeper understanding of the intersection between ITS and diffusion models. Third, we offer a multi-perspective investigation of current applications of diffusion models in ITS domains, including autonomous driving, traffic simulation, trajectory prediction, and traffic safety. Finally, we discuss state-of-the-art diffusion model techniques and highlight key ITS research directions that warrant further investigation. Through this structured overview, we aim to provide researchers with a comprehensive understanding of diffusion models for ITS, thereby advancing their future applications in the transportation domain.<|reference_end|>
|
arxiv
|
@article{peng2024diffusion,
title={Diffusion Models for Intelligent Transportation Systems: A Survey},
author={Mingxing Peng, Kehua Chen, Xusen Guo, Qiming Zhang, Hongliang Lu, Hui
Zhong, Di Chen, Meixin Zhu, and Hai Yang},
journal={arXiv preprint arXiv:2409.15816},
year={2024},
archivePrefix={arXiv},
eprint={2409.15816},
primaryClass={eess.SY cs.SY}
}
|
peng2024diffusion
|
arxiv-661236
|
2409.15817
|
SwiftDossier: Tailored Automatic Dossier for Drug Discovery with LLMs and Agents
|
<|reference_start|>SwiftDossier: Tailored Automatic Dossier for Drug Discovery with LLMs and Agents: The advancement of artificial intelligence algorithms has expanded their application to several fields such as the biomedical domain. Artificial intelligence systems, including Large Language Models (LLMs), can be particularly advantageous in drug discovery, which is a very long and expensive process. However, LLMs by themselves lack in-depth knowledge about specific domains and can generate factually incorrect information. Moreover, they are not able to perform more complex actions that imply the usage of external tools. Our work is focused on these two issues. Firstly, we show how the implementation of an advanced RAG system can help the LLM to generate more accurate answers to drug-discovery-related questions. The results show that the answers generated by the LLM with the RAG system surpass in quality the answers produced by the model without RAG. Secondly, we show how to create an automatic target dossier using LLMs and incorporating them with external tools that they can use to execute more intricate tasks to gather data such as accessing databases and executing code. The result is a production-ready target dossier containing the acquired information summarized into a PDF and a PowerPoint presentation.<|reference_end|>
|
arxiv
|
@article{fossi2024swiftdossier:,
title={SwiftDossier: Tailored Automatic Dossier for Drug Discovery with LLMs
and Agents},
author={Gabriele Fossi, Youssef Boulaimen, Leila Outemzabet, Nathalie Jeanray,
Stephane Gerart, Sebastien Vachenc, Joanna Giemza, Salvatore Raieli},
journal={arXiv preprint arXiv:2409.15817},
year={2024},
archivePrefix={arXiv},
eprint={2409.15817},
primaryClass={cs.AI}
}
|
fossi2024swiftdossier:
|
arxiv-661237
|
2409.15818
|
High-precision randomized iterative methods for the random feature method
|
<|reference_start|>High-precision randomized iterative methods for the random feature method: This paper focuses on solving large-scale, ill-conditioned, and overdetermined sparse least squares problems that arise from numerical partial differential equations (PDEs), mainly from the random feature method. To address these difficulties, we introduce (1) a count sketch technique to sketch the original matrix to a smaller matrix; (2) a QR factorization or a singular value decomposition for the smaller matrix to obtain the preconditioner, which is multiplied onto the original matrix from the right; (3) least squares iterative solvers to solve the preconditioned least squares system. Therefore, the methods we develop are termed CSQRP-LSQR and CSSVDP-LSQR. Under mild assumptions, we prove that the preconditioned problem has a condition number whose upper bound is independent of the condition number of the original matrix, and provide error estimates for both methods. Ample numerical experiments, including least squares problems arising from two-dimensional and three-dimensional PDEs and the Florida Sparse Matrix Collection, are conducted. Both methods are comparable to or even better than direct methods in accuracy and are computationally more efficient for large-scale problems. This opens up the applicability of the random feature method for PDEs over complicated geometries with high-complexity solutions.<|reference_end|>
|
arxiv
|
@article{chen2024high-precision,
title={High-precision randomized iterative methods for the random feature
method},
author={Jingrun Chen, Longze Tan},
journal={arXiv preprint arXiv:2409.15818},
year={2024},
archivePrefix={arXiv},
eprint={2409.15818},
primaryClass={math.NA cs.NA}
}
|
chen2024high-precision
|
arxiv-661238
|
2409.15820
|
Supervised Fine-Tuning: An Activation Pattern Optimization Process for Attention Heads
|
<|reference_start|>Supervised Fine-Tuning: An Activation Pattern Optimization Process for Attention Heads: Though demonstrating promising potential, LLMs' performance on complex tasks, such as advanced mathematics and complex disease diagnosis, is still unsatisfactory. A key issue is that present LLMs learn in a data-driven schema, while instruction datasets for these complex tasks are both scarce and hard to collect or construct. In contrast, a prominent phenomenon is that LLMs can learn rather quickly on simpler tasks with adequate prior knowledge captured during the pretraining stage. Thus, if the prerequisites and mechanism of such rapid generalization could be elucidated, it could be highly beneficial for enhancing the efficiency and effectiveness of an LLM's ability to learn complex tasks. In this paper, we therefore employ a gradient-based method to dissect how the SFT process adapts LLMs to downstream tasks from the perspective of attention patterns. We find that: (1) LLMs selectively activate task-specific attention heads during SFT; (2) activation patterns for complex tasks are combinations of basic task patterns; and (3) changes in a few parameters can significantly impact activation patterns after SFT on a small number of samples. Based on these insights, we conduct experiments to examine whether these conclusions could effectively enhance the efficiency and effectiveness of SFT, particularly in handling complex tasks and when instructional resources are scarce. Our research not only uncovers the underlying reasons behind LLMs' rapid learning and generalization mechanisms but also provides practical solutions for addressing data challenges in complex and specialized tasks.<|reference_end|>
|
arxiv
|
@article{zhao2024supervised,
title={Supervised Fine-Tuning Achieve Rapid Task Adaption Via Alternating
Attention Head Activation Patterns},
author={Yang Zhao, Li Du, Xiao Ding, Kai Xiong, Ting Liu and Bing Qin},
journal={arXiv preprint arXiv:2409.15820},
year={2024},
archivePrefix={arXiv},
eprint={2409.15820},
primaryClass={cs.LG cs.CL}
}
|
zhao2024supervised
|
arxiv-661239
|
2409.15821
|
Intention-based and Risk-Aware Trajectory Prediction for Autonomous Driving in Complex Traffic Scenarios
|
<|reference_start|>Intention-based and Risk-Aware Trajectory Prediction for Autonomous Driving in Complex Traffic Scenarios: Accurately predicting the trajectory of surrounding vehicles is a critical challenge for autonomous vehicles. In complex traffic scenarios, there are two significant issues with the current autonomous driving system: the cognitive uncertainty of prediction and the lack of risk awareness, which limit the further development of autonomous driving. To address this challenge, we introduce a novel trajectory prediction model that incorporates insights and principles from driving behavior, ethical decision-making, and risk assessment. Based on joint prediction, our model consists of interaction, intention, and risk assessment modules. The dynamic variation of interaction between vehicles can be comprehensively captured at each timestamp in the interaction module. Based on interaction information, our model considers primary intentions for vehicles to enhance the diversity of trajectory generation. The optimization of predicted trajectories follows the advanced risk-aware decision-making principles. Experimental results are evaluated on the DeepAccident dataset; our approach shows its remarkable prediction performance on normal and accident scenarios and outperforms the state-of-the-art algorithms by at least 28.9\% and 26.5\%, respectively. The proposed model improves the proficiency and adaptability of trajectory prediction in complex traffic scenarios. The code for the proposed model is available at https://sites.google.com/view/ir-prediction.<|reference_end|>
|
arxiv
|
@article{wei2024intention-based,
title={Intention-based and Risk-Aware Trajectory Prediction for Autonomous
Driving in Complex Traffic Scenarios},
author={Wen Wei, Jiankun Wang},
journal={arXiv preprint arXiv:2409.15821},
year={2024},
archivePrefix={arXiv},
eprint={2409.15821},
primaryClass={cs.RO}
}
|
wei2024intention-based
|
arxiv-661240
|
2409.15822
|
A Ducted Fan UAV for Safe Aerial Grabbing and Transfer of Multiple Loads Using Electromagnets
|
<|reference_start|>A Ducted Fan UAV for Safe Aerial Grabbing and Transfer of Multiple Loads Using Electromagnets: In recent years, research on aerial grasping, manipulation, and transportation of objects has garnered significant attention. These tasks often require UAVs to operate safely close to environments or objects and to efficiently grasp payloads. However, current widely adopted flying platforms pose safety hazards: unprotected high-speed rotating propellers can cause harm to the surroundings. Additionally, the space for carrying payloads on the fuselage is limited, and the restricted position of the payload also hinders efficient grasping. To address these issues, this paper presents a coaxial ducted fan UAV which is equipped with electromagnets mounted externally on the fuselage, enabling safe grasping and transfer of multiple loads in midair without complex additional actuators. It also has the capability to achieve direct human-UAV cargo transfer in the air. The forces acting on the loads during magnetic attachment and their influencing factors were analyzed. An ADRC controller is utilized to counteract disturbances during grasping and achieve attitude control. Finally, flight tests are conducted to verify the UAV's ability to directly grasp multiple loads from human hands in flight while maintaining attitude tracking.<|reference_end|>
|
arxiv
|
@article{yin2024a,
title={A Ducted Fan UAV for Safe Aerial Grabbing and Transfer of Multiple Loads
Using Electromagnets},
author={Zhong Yin and Hailong Pei},
journal={arXiv preprint arXiv:2409.15822},
year={2024},
archivePrefix={arXiv},
eprint={2409.15822},
primaryClass={cs.RO}
}
|
yin2024a
|
arxiv-661241
|
2409.15825
|
Empirical Insights on Fine-Tuning Large Language Models for Question-Answering
|
<|reference_start|>Empirical Insights on Fine-Tuning Large Language Models for Question-Answering: Large language models (LLMs) encode extensive world knowledge through pre-training on massive datasets, which can then be fine-tuned for the question-answering (QA) task. However, effective strategies for fine-tuning LLMs for the QA task remain largely unexplored. To address this gap, we categorize supervised fine-tuning (SFT) data based on the extent of knowledge memorized by the pretrained LLMs and conduct a series of empirical analyses. Our experiments, involving four LLMs from three different model families, focus on three key factors: the amount of data required for SFT, the impact of different SFT datasets on model performance, and how data requirements vary across LLMs. The results show that as few as 60 data points during the SFT stage can activate the knowledge encoded during pre-training, enabling LLMs to perform the QA task. Additionally, SFT with data of varying memory levels has a significant impact on LLM performance, with the optimal dataset differing based on the specific model being fine-tuned. Future research will delve deeper into the mechanisms underlying these phenomena.<|reference_end|>
|
arxiv
|
@article{ye2024empirical,
title={Empirical Insights on Fine-Tuning Large Language Models for
Question-Answering},
author={Junjie Ye, Yuming Yang, Qi Zhang, Tao Gui, Xuanjing Huang, Peng Wang,
Zhongchao Shi, Jianping Fan},
journal={arXiv preprint arXiv:2409.15825},
year={2024},
archivePrefix={arXiv},
eprint={2409.15825},
primaryClass={cs.CL cs.AI}
}
|
ye2024empirical
|
arxiv-661242
|
2409.15827
|
Unveiling Language Competence Neurons: A Psycholinguistic Approach to Model Interpretability
|
<|reference_start|>Unveiling Language Competence Neurons: A Psycholinguistic Approach to Model Interpretability: As large language models (LLMs) become more advanced in their linguistic capacity, understanding how they capture aspects of language competence remains a significant challenge. This study therefore employs psycholinguistic paradigms, which are well-suited for probing deeper cognitive aspects of language processing, to explore neuron-level representations in language models across three tasks: sound-shape association, sound-gender association, and implicit causality. Our findings indicate that while GPT-2-XL struggles with the sound-shape task, it demonstrates human-like abilities in both sound-gender association and implicit causality. Targeted neuron ablation and activation manipulation reveal a crucial relationship: when GPT-2-XL displays a linguistic ability, specific neurons correspond to that competence; conversely, the absence of such an ability indicates a lack of specialized neurons. This study is the first to utilize psycholinguistic experiments to investigate deep language competence at the neuron level, providing a new level of granularity in model interpretability and insights into the internal mechanisms driving language ability in transformer-based LLMs.<|reference_end|>
|
arxiv
|
@article{duan2024unveiling,
title={Unveiling Language Competence Neurons: A Psycholinguistic Approach to
Model Interpretability},
author={Xufeng Duan, Xinyu Zhou, Bei Xiao, Zhenguang G. Cai},
journal={arXiv preprint arXiv:2409.15827},
year={2024},
archivePrefix={arXiv},
eprint={2409.15827},
primaryClass={cs.CL}
}
|
duan2024unveiling
|
arxiv-661243
|
2409.15828
|
Mitigating Digital Discrimination in Dating Apps -- The Dutch Breeze case
|
<|reference_start|>Mitigating Digital Discrimination in Dating Apps -- The Dutch Breeze case: In September 2023, the Netherlands Institute for Human Rights, the Dutch non-discrimination authority, decided that Breeze, a Dutch dating app, was justified in suspecting that its algorithm discriminated against non-white people. Consequently, the Institute decided that Breeze must prevent this discrimination based on ethnicity. This paper explores two questions. (i) Is the discrimination based on ethnicity in Breeze's matching algorithm illegal? (ii) How can dating apps mitigate or stop discrimination in their matching algorithms? We illustrate the legal and technical difficulties dating apps face in tackling discrimination and highlight promising solutions. We analyse the Breeze decision in-depth, combining insights from computer science and law. We discuss the implications of this judgment for scholarship and practice in the field of fair and non-discriminatory machine learning.<|reference_end|>
|
arxiv
|
@article{dejonge2024mitigating,
title={Mitigating Digital Discrimination in Dating Apps -- The Dutch Breeze
case},
author={Tim de Jonge, Frederik Zuiderveen Borgesius},
journal={arXiv preprint arXiv:2409.15828},
year={2024},
archivePrefix={arXiv},
eprint={2409.15828},
primaryClass={cs.CY cs.IR}
}
|
dejonge2024mitigating
|
arxiv-661244
|
2409.15831
|
Introducing Anisotropic Fields for Enhanced Diversity in Crowd Simulation
|
<|reference_start|>Introducing Anisotropic Fields for Enhanced Diversity in Crowd Simulation: Large crowds exhibit intricate behaviors and significant emergent properties, yet existing crowd simulation systems often lack behavioral diversity, resulting in homogeneous simulation outcomes. To address this limitation, we propose incorporating anisotropic fields (AFs) as a fundamental structure for depicting the uncertainty in crowd movement. By leveraging AFs, our method can rapidly generate crowd simulations with intricate behavioral patterns that better reflect the inherent complexity of real crowds. The AFs are generated either through intuitive sketching or extracted from real crowd videos, enabling flexible and efficient crowd simulation systems. We demonstrate the effectiveness of our approach through several representative scenarios, showcasing a significant improvement in behavioral diversity compared to classical methods. Our findings indicate that by incorporating AFs, crowd simulation systems can achieve a much higher similarity to real-world crowd systems. Our code is publicly available at https://github.com/tomblack2014/AF_Generation.<|reference_end|>
|
arxiv
|
@article{li2024introducing,
title={Introducing Anisotropic Fields for Enhanced Diversity in Crowd
Simulation},
author={Yihao Li, Junyu Liu, Xiaoyu Guan, Hanming Hou, Tianyu Huang},
journal={arXiv preprint arXiv:2409.15831},
year={2024},
archivePrefix={arXiv},
eprint={2409.15831},
primaryClass={cs.MA}
}
|
li2024introducing
|
arxiv-661245
|
2409.15832
|
PseudoNeg-MAE: Self-Supervised Point Cloud Learning using Conditional Pseudo-Negative Embeddings
|
<|reference_start|>PseudoNeg-MAE: Self-Supervised Point Cloud Learning using Conditional Pseudo-Negative Embeddings: We propose PseudoNeg-MAE, a novel self-supervised learning framework that enhances global feature representation of point cloud mask autoencoder by making them both discriminative and sensitive to transformations. Traditional contrastive learning methods focus on achieving invariance, which can lead to the loss of valuable transformation-related information. In contrast, PseudoNeg-MAE explicitly models the relationship between original and transformed data points using a parametric network COPE, which learns the localized displacements caused by transformations within the latent space. However, jointly training COPE with the MAE leads to undesirable trivial solutions where COPE outputs collapse to an identity. To address this, we introduce a novel loss function incorporating pseudo-negatives, which effectively penalizes these trivial invariant solutions and promotes transformation sensitivity in the embeddings. We validate PseudoNeg-MAE on shape classification and relative pose estimation tasks, where PseudoNeg-MAE achieves state-of-the-art performance on the ModelNet40 and ScanObjectNN datasets under challenging evaluation protocols and demonstrates superior accuracy in estimating relative poses. These results show the effectiveness of PseudoNeg-MAE in learning discriminative and transformation-sensitive representations.<|reference_end|>
|
arxiv
|
@article{mahendren2024pseudoneg-mae:,
title={PseudoNeg-MAE: Self-Supervised Point Cloud Learning using Conditional
Pseudo-Negative Embeddings},
author={Sutharsan Mahendren, Saimunur Rahman, Piotr Koniusz, Tharindu
Fernando, Sridha Sridharan, Clinton Fookes, Peyman Moghadam},
journal={arXiv preprint arXiv:2409.15832},
year={2024},
archivePrefix={arXiv},
eprint={2409.15832},
primaryClass={cs.CV}
}
|
mahendren2024pseudoneg-mae:
|
arxiv-661246
|
2409.15834
|
Deep Learning Techniques for Automatic Lateral X-ray Cephalometric Landmark Detection: Is the Problem Solved?
|
<|reference_start|>Deep Learning Techniques for Automatic Lateral X-ray Cephalometric Landmark Detection: Is the Problem Solved?: Localization of the craniofacial landmarks from lateral cephalograms is a fundamental task in cephalometric analysis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the "Cephalometric Landmark Detection (CL-Detection)" dataset, which is the largest publicly available and comprehensive dataset for cephalometric landmark detection. This multi-center and multi-vendor dataset includes 600 lateral X-ray images with 38 landmarks acquired with different equipment from three medical centers. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go for cephalometric landmark detection. Following the 2023 MICCAI CL-Detection Challenge, we report the results of the top ten research groups using deep learning methods. Results show that the best methods closely approximate the expert analysis, achieving a mean detection rate of 75.719% and a mean radial error of 1.518 mm. While there is room for improvement, these findings undeniably open the door to highly accurate and fully automatic location of craniofacial landmarks. We also identify scenarios for which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, while the platform will remain open for the community to benchmark future algorithm developments at https://cl-detection2023.grand-challenge.org/.<|reference_end|>
|
arxiv
|
@article{zhang2024deep,
title={Deep Learning Techniques for Automatic Lateral X-ray Cephalometric
Landmark Detection: Is the Problem Solved?},
author={Hongyuan Zhang, Ching-Wei Wang, Hikam Muzakky, Juan Dai, Xuguang Li,
Chenglong Ma, Qian Wu, Xianan Cui, Kunlun Xu, Pengfei He, Dongqian Guo,
Xianlong Wang, Hyunseok Lee, Zhangnan Zhong, Zhu Zhu and Bingsheng Huang},
journal={arXiv preprint arXiv:2409.15834},
year={2024},
archivePrefix={arXiv},
eprint={2409.15834},
primaryClass={cs.CV}
}
|
zhang2024deep
|
arxiv-661247
|
2409.15838
|
TiltXter: CNN-based Electro-tactile Rendering of Tilt Angle for Telemanipulation of Pasteur Pipettes
|
<|reference_start|>TiltXter: CNN-based Electro-tactile Rendering of Tilt Angle for Telemanipulation of Pasteur Pipettes: The shape of deformable objects can change drastically during grasping by robotic grippers, causing an ambiguous perception of their alignment and hence resulting in errors in robot positioning and telemanipulation. Rendering clear tactile patterns is fundamental to increasing users' precision and dexterity through tactile haptic feedback during telemanipulation. Therefore, different methods have to be studied to decode the sensors' data into haptic stimuli. This work presents a telemanipulation system for plastic pipettes that consists of a Force Dimension Omega.7 haptic interface endowed with two electro-stimulation arrays and two tactile sensor arrays embedded in the 2-finger Robotiq gripper. We propose a novel approach based on convolutional neural networks (CNN) to detect the tilt of deformable objects. The CNN generates a tactile pattern based on recognized tilt data to render further electro-tactile stimuli provided to the user during the telemanipulation. The study has shown that using the CNN algorithm, tilt recognition by users increased from 23.13% with the downsized data to 57.9%, and the success rate during teleoperation increased from 53.12% using the downsized data to 92.18% using the tactile patterns generated by the CNN.<|reference_end|>
|
arxiv
|
@article{cabrera2024tiltxter:,
title={TiltXter: CNN-based Electro-tactile Rendering of Tilt Angle for
Telemanipulation of Pasteur Pipettes},
author={Miguel Altamirano Cabrera, Jonathan Tirado, Aleksey Fedoseev, Oleg
Sautenkov, Vladimir Poliakov, Pavel Kopanev, and Dzmitry Tsetserukou},
journal={arXiv preprint arXiv:2409.15838},
year={2024},
archivePrefix={arXiv},
eprint={2409.15838},
primaryClass={cs.RO}
}
|
cabrera2024tiltxter:
|
arxiv-661248
|
2409.15840
|
Distance-based Multiple Non-cooperative Ground Target Encirclement for Complex Environments
|
<|reference_start|>Distance-based Multiple Non-cooperative Ground Target Encirclement for Complex Environments: This paper proposes a comprehensive strategy for complex multi-target-multi-drone encirclement in an obstacle-rich and GPS-denied environment, motivated by practical scenarios such as pursuing vehicles or humans in urban canyons. The drones have omnidirectional range sensors that can robustly detect ground targets and obtain noisy relative distances. After each drone task is assigned, a novel distance-based target state estimator (DTSE) is proposed by estimating the measurement output noise variance and utilizing the Kalman filter. By integrating anti-synchronization techniques and pseudo-force functions, an acceleration controller enables two tasking drones to cooperatively encircle a target from opposing positions while navigating obstacles. The algorithm's effectiveness for the discrete-time double-integrator system is established theoretically, particularly regarding observability. Moreover, the versatility of the algorithm is showcased in aerial-to-ground scenarios, supported by compelling simulation results. Experimental validation demonstrates the effectiveness of the proposed approach.<|reference_end|>
|
arxiv
|
@article{liu2024distance-based,
title={Distance-based Multiple Non-cooperative Ground Target Encirclement for
Complex Environments},
author={Fen Liu and Shenghai Yuan and Kun Cao and Wei Meng and Lihua Xie},
journal={arXiv preprint arXiv:2409.15840},
year={2024},
archivePrefix={arXiv},
eprint={2409.15840},
primaryClass={cs.RO}
}
|
liu2024distance-based
|
arxiv-661249
|
2409.15841
|
FSF-Net: Enhance 4D Occupancy Forecasting with Coarse BEV Scene Flow for Autonomous Driving
|
<|reference_start|>FSF-Net: Enhance 4D Occupancy Forecasting with Coarse BEV Scene Flow for Autonomous Driving: 4D occupancy forecasting is one of the important techniques for autonomous driving, as it can avoid potential risks in complex traffic scenes. Scene flow is a crucial element for describing the tendency of a 4D occupancy map. However, an accurate scene flow is difficult to predict in real scenes. In this paper, we find that BEV scene flow can approximately represent 3D scene flow in most traffic scenes, and coarse BEV scene flow is easy to generate. Based on this insight, we propose the 4D occupancy forecasting method FSF-Net based on coarse BEV scene flow. First, we develop a general occupancy forecasting architecture based on coarse BEV scene flow. Then, to further enhance the 4D occupancy feature representation ability, we propose a vector-quantization-based Mamba (VQ-Mamba) network to mine spatial-temporal structural scene features. After that, to effectively fuse the coarse occupancy maps forecasted from BEV scene flow and latent features, we design a U-Net based quality fusion (UQF) network to generate the fine-grained forecasting result. Extensive experiments are conducted on the public Occ3D dataset. FSF-Net achieves IoU and mIoU 9.56% and 10.87% higher than the state-of-the-art method. Hence, we believe the proposed FSF-Net benefits the safety of autonomous driving.<|reference_end|>
|
arxiv
|
@article{guo2024fsf-net:,
title={FSF-Net: Enhance 4D Occupancy Forecasting with Coarse BEV Scene Flow for
Autonomous Driving},
author={Erxin Guo, Pei An, You Yang, Qiong Liu, and An-An Liu},
journal={arXiv preprint arXiv:2409.15841},
year={2024},
archivePrefix={arXiv},
eprint={2409.15841},
primaryClass={cs.CV}
}
|
guo2024fsf-net:
|
arxiv-661250
|
2409.15843
|
From Passive Watching to Active Learning: Empowering Proactive Participation in Digital Classrooms with AI Video Assistant
|
<|reference_start|>From Passive Watching to Active Learning: Empowering Proactive Participation in Digital Classrooms with AI Video Assistant: In online education, innovative tools are crucial for enhancing learning outcomes. SAM (Study with AI Mentor) is an advanced platform that integrates educational videos with a context-aware chat interface powered by large language models. SAM encourages students to ask questions and explore unclear concepts in real-time, offering personalized, context-specific assistance, including explanations of formulas, slides, and images. In a crowdsourced user study involving 140 participants, SAM was evaluated through pre- and post-knowledge tests, comparing a group using SAM with a control group. The results demonstrated that SAM users achieved greater knowledge gains, with a 96.8% answer accuracy. Participants also provided positive feedback on SAM's usability and effectiveness. SAM's proactive approach to learning not only enhances learning outcomes but also empowers students to take full ownership of their educational experience, representing a promising future direction for online learning tools.<|reference_end|>
|
arxiv
|
@article{bodonhelyi2024from,
title={From Passive Watching to Active Learning: Empowering Proactive
Participation in Digital Classrooms with AI Video Assistant},
author={Anna Bodonhelyi, Enkeleda Thaqi, S\"uleyman \"Ozdel, Efe Bozkir,
Enkelejda Kasneci},
journal={arXiv preprint arXiv:2409.15843},
year={2024},
archivePrefix={arXiv},
eprint={2409.15843},
primaryClass={cs.AI}
}
|
bodonhelyi2024from
|
arxiv-661251
|
2409.15844
|
Adaptive Learn-then-Test: Statistically Valid and Efficient Hyperparameter Selection
|
<|reference_start|>Adaptive Learn-then-Test: Statistically Valid and Efficient Hyperparameter Selection: We introduce adaptive learn-then-test (aLTT), an efficient hyperparameter selection procedure that provides finite-sample statistical guarantees on the population risk of AI models. Unlike the existing learn-then-test (LTT) technique, which relies on conventional p-value-based multiple hypothesis testing (MHT), aLTT implements sequential data-dependent MHT with early termination by leveraging e-processes. As a result, aLTT can reduce the number of testing rounds, making it particularly well-suited for scenarios in which testing is costly or presents safety risks. Apart from maintaining statistical validity, in applications such as online policy selection for offline reinforcement learning and hyperparameter tuning for engineering systems, aLTT is shown to achieve the same performance as LTT while requiring only a fraction of the testing rounds.<|reference_end|>
|
arxiv
|
@article{zecchin2024adaptive,
title={Adaptive Learn-then-Test: Statistically Valid and Efficient
Hyperparameter Selection},
author={Matteo Zecchin, Osvaldo Simeone},
journal={arXiv preprint arXiv:2409.15844},
year={2024},
archivePrefix={arXiv},
eprint={2409.15844},
primaryClass={stat.ML cs.AI cs.IT cs.LG math.IT stat.ME}
}
|
zecchin2024adaptive
|
arxiv-661252
|
2409.15846
|
Potential Field as Scene Affordance for Behavior Change-Based Visual Risk Object Identification
|
<|reference_start|>Potential Field as Scene Affordance for Behavior Change-Based Visual Risk Object Identification: We study behavior change-based visual risk object identification (Visual-ROI), a critical framework designed to detect potential hazards for intelligent driving systems. Existing methods often show significant limitations in spatial accuracy and temporal consistency, stemming from an incomplete understanding of scene affordance. For example, these methods frequently misidentify vehicles that do not impact the ego vehicle as risk objects. Furthermore, existing behavior change-based methods are inefficient because they implement causal inference in the perspective image space. We propose a new framework with a Bird's Eye View (BEV) representation to overcome the above challenges. Specifically, we utilize potential fields as scene affordance, involving repulsive forces derived from road infrastructure and traffic participants, along with attractive forces sourced from target destinations. In this work, we compute potential fields by assigning different energy levels according to the semantic labels obtained from BEV semantic segmentation. We conduct thorough experiments and ablation studies, comparing the proposed method with various state-of-the-art algorithms on both synthetic and real-world datasets. Our results show a notable increase in spatial and temporal consistency, with enhancements of 20.3% and 11.6% on the RiskBench dataset, respectively. Additionally, we can improve computational efficiency by 88%. We achieve improvements of 5.4% in spatial accuracy and 7.2% in temporal consistency on the nuScenes dataset.<|reference_end|>
|
arxiv
|
@article{pao2024potential,
title={Potential Field as Scene Affordance for Behavior Change-Based Visual
Risk Object Identification},
author={Pang-Yuan Pao, Shu-Wei Lu, Ze-Yan Lu, Yi-Ting Chen},
journal={arXiv preprint arXiv:2409.15846},
year={2024},
archivePrefix={arXiv},
eprint={2409.15846},
primaryClass={cs.CV}
}
|
pao2024potential
|
arxiv-661253
|
2409.15848
|
iGAiVA: Integrated Generative AI and Visual Analytics in a Machine Learning Workflow for Text Classification
|
<|reference_start|>iGAiVA: Integrated Generative AI and Visual Analytics in a Machine Learning Workflow for Text Classification: In developing machine learning (ML) models for text classification, one common challenge is that the collected data is often not ideally distributed, especially when new classes are introduced in response to changes of data and tasks. In this paper, we present a solution for using visual analytics (VA) to guide the generation of synthetic data using large language models. As VA enables model developers to identify data-related deficiency, data synthesis can be targeted to address such deficiency. We discuss different types of data deficiency, describe different VA techniques for supporting their identification, and demonstrate the effectiveness of targeted data synthesis in improving model accuracy. In addition, we present a software tool, iGAiVA, which maps four groups of ML tasks into four VA views, integrating generative AI and VA into an ML workflow for developing and improving text classification models.<|reference_end|>
|
arxiv
|
@article{jin2024igaiva:,
title={iGAiVA: Integrated Generative AI and Visual Analytics in a Machine
Learning Workflow for Text Classification},
author={Yuanzhe Jin, Adrian Carrasco-Revilla, and Min Chen},
journal={arXiv preprint arXiv:2409.15848},
year={2024},
archivePrefix={arXiv},
eprint={2409.15848},
primaryClass={cs.LG cs.CL}
}
|
jin2024igaiva:
|
arxiv-661254
|
2409.15849
|
Twin Network Augmentation: A Novel Training Strategy for Improved Spiking Neural Networks and Efficient Weight Quantization
|
<|reference_start|>Twin Network Augmentation: A Novel Training Strategy for Improved Spiking Neural Networks and Efficient Weight Quantization: The proliferation of Artificial Neural Networks (ANNs) has led to increased energy consumption, raising concerns about their sustainability. Spiking Neural Networks (SNNs), which are inspired by biological neural systems and operate using sparse, event-driven spikes to communicate information between neurons, offer a potential solution due to their lower energy requirements. An alternative technique for reducing a neural network's footprint is quantization, which compresses weight representations to decrease memory usage and energy consumption. In this study, we present Twin Network Augmentation (TNA), a novel training framework aimed at improving the performance of SNNs while also facilitating enhanced compression through low-precision quantization of weights. TNA involves co-training an SNN with a twin network, optimizing both networks to minimize their cross-entropy losses and the mean squared error between their output logits. We demonstrate that TNA significantly enhances classification performance across various vision datasets and is particularly effective when reducing SNNs to ternary weight precision. Notably, during inference, only the ternary SNN is retained, significantly reducing the network's number of neurons, connectivity, and weight representation size. Our results show that TNA outperforms traditional knowledge distillation methods and achieves state-of-the-art performance for the evaluated network architecture on benchmark datasets, including CIFAR-10, CIFAR-100, and CIFAR-10-DVS. This paper underscores the effectiveness of TNA in bridging the performance gap between SNNs and ANNs and suggests further exploration into the application of TNA in different network architectures and datasets.<|reference_end|>
|
arxiv
|
@article{deckers2024twin,
title={Twin Network Augmentation: A Novel Training Strategy for Improved
Spiking Neural Networks and Efficient Weight Quantization},
  author={Lucas Deckers, Benjamin Vandersmissen, Ing Jyh Tsang, Werner Van
Leekwijck and Steven Latr\'e},
journal={arXiv preprint arXiv:2409.15849},
year={2024},
archivePrefix={arXiv},
eprint={2409.15849},
primaryClass={cs.NE}
}
|
deckers2024twin
|
arxiv-661255
|
2409.15857
|
Ducho meets Elliot: Large-scale Benchmarks for Multimodal Recommendation
|
<|reference_start|>Ducho meets Elliot: Large-scale Benchmarks for Multimodal Recommendation: In specific domains like fashion, music, and movie recommendation, the multi-faceted features characterizing products and services may influence each customer on online selling platforms differently, paving the way to novel multimodal recommendation models that can learn from such multimodal content. According to the literature, the common multimodal recommendation pipeline involves (i) extracting multimodal features, (ii) refining their high-level representations to suit the recommendation task, (iii) optionally fusing all multimodal features, and (iv) predicting the user-item score. While great effort has been put into designing optimal solutions for (ii-iv), to the best of our knowledge, very little attention has been devoted to exploring procedures for (i). In this respect, the existing literature outlines the large availability of multimodal datasets and the ever-growing number of large models accounting for multimodal-aware tasks, but (at the same time) an unjustified adoption of limited standardized solutions. This motivates us to explore more extensive techniques for the (i) stage of the pipeline. To this end, this paper settles as the first attempt to offer a large-scale benchmarking for multimodal recommender systems, with a specific focus on multimodal extractors. Specifically, we take advantage of two popular and recent frameworks for multimodal feature extraction and reproducibility in recommendation, Ducho and Elliot, to offer a unified and ready-to-use experimental environment able to run extensive benchmarking analyses leveraging novel multimodal feature extractors. Results, largely validated under different hyper-parameter settings for the chosen extractors, provide important insights on how to train and tune the next generation of multimodal recommendation algorithms.<|reference_end|>
|
arxiv
|
@article{attimonelli2024ducho,
title={Ducho meets Elliot: Large-scale Benchmarks for Multimodal Recommendation},
author={Matteo Attimonelli, Danilo Danese, Angela Di Fazio, Daniele Malitesta,
Claudio Pomo, Tommaso Di Noia},
journal={arXiv preprint arXiv:2409.15857},
year={2024},
archivePrefix={arXiv},
eprint={2409.15857},
primaryClass={cs.IR}
}
|
attimonelli2024ducho
|
arxiv-661256
|
2409.15858
|
Identification For Control Based on Neural Networks: Approximately Linearizable Models
|
<|reference_start|>Identification For Control Based on Neural Networks: Approximately Linearizable Models: This work presents a control-oriented identification scheme for efficient control design and stability analysis of nonlinear systems. Neural networks are used to identify a discrete-time nonlinear state-space model to approximate time-domain input-output behavior of a nonlinear system. The network is constructed such that the identified model is approximately linearizable by feedback, ensuring that the control law trivially follows from the learning stage. After the identification and quasi-linearization procedures, linear control theory comes at hand to design robust controllers and study stability of the closed-loop system. The effectiveness and interest of the methodology are illustrated throughout the paper on popular benchmarks for system identification.<|reference_end|>
|
arxiv
|
@article{thieffry2024identification,
title={Identification For Control Based on Neural Networks: Approximately
Linearizable Models},
author={Maxime Thieffry, Alexandre Hache, Mohamed Yagoubi, Philippe Chevrel},
journal={arXiv preprint arXiv:2409.15858},
year={2024},
archivePrefix={arXiv},
eprint={2409.15858},
primaryClass={eess.SY cs.AI cs.SY}
}
|
thieffry2024identification
|
arxiv-661257
|
2409.15859
|
Performance and scaling of the LFRic weather and climate model on different generations of HPE Cray EX supercomputers
|
<|reference_start|>Performance and scaling of the LFRic weather and climate model on different generations of HPE Cray EX supercomputers: This study presents scaling results and a performance analysis across different supercomputers and compilers for the Met Office weather and climate model, LFRic. The model is shown to scale to large numbers of nodes which meets the design criteria, that of exploitation of parallelism to achieve good scaling. The model is written in a Domain-Specific Language, embedded in modern Fortran and uses a Domain-Specific Compiler, PSyclone, to generate the parallel code. The performance analysis shows the effect of choice of algorithm, such as redundant computation and scaling with OpenMP threads. The analysis can be used to motivate a discussion of future work to improve the OpenMP performance of other parts of the code. Finally, an analysis of the performance tuning of the I/O server, XIOS is presented.<|reference_end|>
|
arxiv
|
@article{bull2024performance,
title={Performance and scaling of the LFRic weather and climate model on
different generations of HPE Cray EX supercomputers},
  author={J. Mark Bull (1), Andrew Coughtrie (2), Deva Deeptimahanti (3), Mark
Hedley (2), Caoimh\'in Laoide-Kemp (1), Christopher Maynard (2), Harry
Shepherd (2), Sebastiaan van de Bund (1), Mich\`ele Weiland (2), Benjamin
Went (2) ((1) EPCC University of Edinburgh, (2) Met Office, (3) Pawsey
Supercomputing Research Centre)},
journal={arXiv preprint arXiv:2409.15859},
year={2024},
archivePrefix={arXiv},
eprint={2409.15859},
primaryClass={cs.DC cs.PF}
}
|
bull2024performance
|
arxiv-661258
|
2409.15861
|
A Zero-Shot Open-Vocabulary Pipeline for Dialogue Understanding
|
<|reference_start|>A Zero-Shot Open-Vocabulary Pipeline for Dialogue Understanding: Dialogue State Tracking (DST) is crucial for understanding user needs and executing appropriate system actions in task-oriented dialogues. Majority of existing DST methods are designed to work within predefined ontologies and assume the availability of gold domain labels, struggling with adapting to new slots values. While Large Language Models (LLMs)-based systems show promising zero-shot DST performance, they either require extensive computational resources or they underperform existing fully-trained systems, limiting their practicality. To address these limitations, we propose a zero-shot, open-vocabulary system that integrates domain classification and DST in a single pipeline. Our approach includes reformulating DST as a question-answering task for less capable models and employing self-refining prompts for more adaptable ones. Our system does not rely on fixed slot values defined in the ontology allowing the system to adapt dynamically. We compare our approach with existing SOTA, and show that it provides up to 20% better Joint Goal Accuracy (JGA) over previous methods on datasets like Multi-WOZ 2.1, with up to 90% fewer requests to the LLM API.<|reference_end|>
|
arxiv
|
@article{safa2024a,
title={A Zero-Shot Open-Vocabulary Pipeline for Dialogue Understanding},
  author={Abdulfattah Safa, G\"ozde G\"ul \c{S}ahin},
journal={arXiv preprint arXiv:2409.15861},
year={2024},
archivePrefix={arXiv},
eprint={2409.15861},
primaryClass={cs.CL cs.AI}
}
|
safa2024a
|
arxiv-661259
|
2409.15863
|
A discrete trace theory for non-conforming polytopal hybrid discretisation methods
|
<|reference_start|>A discrete trace theory for non-conforming polytopal hybrid discretisation methods: In this work we develop a discrete trace theory that spans non-conforming hybrid discretization methods and holds on polytopal meshes. A notion of a discrete trace seminorm is defined, and trace and lifting results with respect to a discrete $H^1$-seminorm on the hybrid fully discrete space are proven. Building on these results we also prove a truncation estimate for piecewise polynomials in the discrete trace seminorm. Finally, we conduct two numerical tests in which we compute the proposed discrete operators and investigate their spectrum to verify the theoretical analysis. The development of this theory is motivated by the design and analysis of preconditioners for hybrid methods, e.g., of substructuring domain decomposition type.<|reference_end|>
|
arxiv
|
@article{badia2024a,
title={A discrete trace theory for non-conforming polytopal hybrid
discretisation methods},
author={Santiago Badia, Jerome Droniou, and Jai Tushar},
journal={arXiv preprint arXiv:2409.15863},
year={2024},
archivePrefix={arXiv},
eprint={2409.15863},
primaryClass={math.NA cs.NA}
}
|
badia2024a
|
arxiv-661260
|
2409.15865
|
BeSimulator: A Large Language Model Powered Text-based Behavior Simulator
|
<|reference_start|>BeSimulator: A Large Language Model Powered Text-based Behavior Simulator: Traditional robot simulators focus on physical process modeling and realistic rendering, often suffering from high computational costs, inefficiencies, and limited adaptability. To handle this issue, we propose Behavior Simulation in robotics to emphasize checking the behavior logic of robots and achieving sufficient alignment between the outcome of robot actions and real scenarios. In this paper, we introduce BeSimulator, a modular and novel LLM-powered framework, as an attempt towards behavior simulation in the context of text-based environments. By constructing text-based virtual environments and performing semantic-level simulation, BeSimulator can generalize across scenarios and achieve long-horizon complex simulation. Inspired by human cognition processes, it employs a "consider-decide-capture-transfer" methodology, termed Chain of Behavior Simulation, which excels at analyzing action feasibility and state transitions. Additionally, BeSimulator incorporates code-driven reasoning to enable arithmetic operations and enhance reliability, as well as integrates reflective feedback to refine simulation. Based on our manually constructed behavior-tree-based simulation benchmark BTSIMBENCH, our experiments show a significant performance improvement in behavior simulation compared to baselines, ranging from 14.7% to 26.6%.<|reference_end|>
|
arxiv
|
@article{wang2024besimulator:,
title={BeSimulator: A Large Language Model Powered Text-based Behavior
Simulator},
author={Jianan Wang, Bin Li, Xueying Wang, Fu Li, Yunlong Wu, Juan Chen,
Xiaodong Yi},
journal={arXiv preprint arXiv:2409.15865},
year={2024},
archivePrefix={arXiv},
eprint={2409.15865},
primaryClass={cs.RO cs.AI cs.CL}
}
|
wang2024besimulator:
|
arxiv-661261
|
2409.15866
|
Multi-UAV Pursuit-Evasion with Online Planning in Unknown Environments by Deep Reinforcement Learning
|
<|reference_start|>Multi-UAV Pursuit-Evasion with Online Planning in Unknown Environments by Deep Reinforcement Learning: Multi-UAV pursuit-evasion, where pursuers aim to capture evaders, poses a key challenge for UAV swarm intelligence. Multi-agent reinforcement learning (MARL) has demonstrated potential in modeling cooperative behaviors, but most RL-based approaches remain constrained to simplified simulations with limited dynamics or fixed scenarios. Previous attempts to deploy RL policy to real-world pursuit-evasion are largely restricted to two-dimensional scenarios, such as ground vehicles or UAVs at fixed altitudes. In this paper, we address multi-UAV pursuit-evasion by considering UAV dynamics and physical constraints. We introduce an evader prediction-enhanced network to tackle partial observability in cooperative strategy learning. Additionally, we propose an adaptive environment generator within MARL training, enabling higher exploration efficiency and better policy generalization across diverse scenarios. Simulations show our method significantly outperforms all baselines in challenging scenarios, generalizing to unseen scenarios with a 100% capture rate. Finally, we derive a feasible policy via a two-stage reward refinement and deploy the policy on real quadrotors in a zero-shot manner. To our knowledge, this is the first work to derive and deploy an RL-based policy using collective thrust and body rates control commands for multi-UAV pursuit-evasion in unknown environments. The open-source code and videos are available at https://sites.google.com/view/pursuit-evasion-rl.<|reference_end|>
|
arxiv
|
@article{chen2024multi-uav,
title={Multi-UAV Pursuit-Evasion with Online Planning in Unknown Environments
by Deep Reinforcement Learning},
author={Jiayu Chen, Chao Yu, Guosheng Li, Wenhao Tang, Xinyi Yang, Botian Xu,
Huazhong Yang, Yu Wang},
journal={arXiv preprint arXiv:2409.15866},
year={2024},
archivePrefix={arXiv},
eprint={2409.15866},
primaryClass={cs.RO cs.LG}
}
|
chen2024multi-uav
|
arxiv-661262
|
2409.15867
|
In-Context Ensemble Improves Video-Language Models for Low-Level Workflow Understanding from Human Demonstrations
|
<|reference_start|>In-Context Ensemble Improves Video-Language Models for Low-Level Workflow Understanding from Human Demonstrations: A Standard Operating Procedure (SOP) defines a low-level, step-by-step written guide for a business software workflow based on a video demonstration. SOPs are a crucial step toward automating end-to-end software workflows. Manually creating SOPs can be time-consuming. Recent advancements in large video-language models offer the potential for automating SOP generation by analyzing recordings of human demonstrations. However, current large video-language models face challenges with zero-shot SOP generation. We explore in-context learning with video-language models for SOP generation. We report that in-context learning sometimes helps video-language models at SOP generation. We then propose an in-context ensemble learning to further enhance the capabilities of the models in SOP generation.<|reference_end|>
|
arxiv
|
@article{xu2024in-context,
title={In-Context Ensemble Learning from Pseudo Labels Improves Video-Language
Models for Low-Level Workflow Understanding},
author={Moucheng Xu and Evangelos Chatzaroulas and Luc McCutcheon and Abdul
Ahad and Hamzah Azeem and Janusz Marecki and Ammar Anwar},
journal={arXiv preprint arXiv:2409.15867},
year={2024},
archivePrefix={arXiv},
eprint={2409.15867},
primaryClass={cs.AI}
}
|
xu2024in-context
|
arxiv-661263
|
2409.15868
|
Privacy Evaluation Benchmarks for NLP Models
|
<|reference_start|>Privacy Evaluation Benchmarks for NLP Models: By inducing privacy attacks on NLP models, attackers can obtain sensitive information such as training data and model parameters, etc. Although researchers have studied, in-depth, several kinds of attacks in NLP models, they are non-systematic analyses. It lacks a comprehensive understanding of the impact caused by the attacks. For example, we must consider which scenarios can apply to which attacks, what the common factors are that affect the performance of different attacks, the nature of the relationships between different attacks, and the influence of various datasets and models on the effectiveness of the attacks, etc. Therefore, we need a benchmark to holistically assess the privacy risks faced by NLP models. In this paper, we present a privacy attack and defense evaluation benchmark in the field of NLP, which includes the conventional/small models and large language models (LLMs). This benchmark supports a variety of models, datasets, and protocols, along with standardized modules for comprehensive evaluation of attacks and defense strategies. Based on the above framework, we present a study on the association between auxiliary data from different domains and the strength of privacy attacks. And we provide an improved attack method in this scenario with the help of Knowledge Distillation (KD). Furthermore, we propose a chained framework for privacy attacks. Allowing a practitioner to chain multiple attacks to achieve a higher-level attack objective. Based on this, we provide some defense and enhanced attack strategies. The code for reproducing the results can be found at https://github.com/user2311717757/nlp_doctor.<|reference_end|>
|
arxiv
|
@article{huang2024privacy,
title={Privacy Evaluation Benchmarks for NLP Models},
author={Wei Huang, Yinggui Wang, Cen Chen},
journal={arXiv preprint arXiv:2409.15868},
year={2024},
archivePrefix={arXiv},
eprint={2409.15868},
primaryClass={cs.CL cs.LG}
}
|
huang2024privacy
|
arxiv-661264
|
2409.15869
|
Whisper in Medusa's Ear: Multi-head Efficient Decoding for Transformer-based ASR
|
<|reference_start|>Whisper in Medusa's Ear: Multi-head Efficient Decoding for Transformer-based ASR: Large transformer-based models have significant potential for speech transcription and translation. Their self-attention mechanisms and parallel processing enable them to capture complex patterns and dependencies in audio sequences. However, this potential comes with challenges, as these large and computationally intensive models lead to slow inference speeds. Various optimization strategies have been proposed to improve performance, including efficient hardware utilization and algorithmic enhancements. In this paper, we introduce Whisper-Medusa, a novel approach designed to enhance processing speed with minimal impact on Word Error Rate (WER). The proposed model extends the OpenAI's Whisper architecture by predicting multiple tokens per iteration, resulting in a 50% reduction in latency. We showcase the effectiveness of Whisper-Medusa across different learning setups and datasets.<|reference_end|>
|
arxiv
|
@article{segal-feldman2024whisper,
title={Whisper in Medusa's Ear: Multi-head Efficient Decoding for
Transformer-based ASR},
author={Yael Segal-Feldman, Aviv Shamsian, Aviv Navon, Gill Hetz, Joseph
Keshet},
journal={arXiv preprint arXiv:2409.15869},
year={2024},
archivePrefix={arXiv},
eprint={2409.15869},
primaryClass={eess.AS cs.AI cs.LG cs.SD}
}
|
segal-feldman2024whisper
|
arxiv-661265
|
2409.15872
|
Physics-informed neural networks for Timoshenko system with Thermoelasticity
|
<|reference_start|>Physics-informed neural networks for Timoshenko system with Thermoelasticity: The main focus of this paper is to analyze the behavior of a numerical solution of the Timoshenko system coupled with Thermoelasticity and incorporating second sound effects. In order to address this target, we employ the Physics-Informed Neural Networks (PINNs) framework to derive an approximate solution for the system. Our investigation delves into the extent to which this approximate solution can accurately capture the asymptotic behavior of the discrete energy, contingent upon the stability number $\chi$. Interestingly, the PINNs overcome the major difficulties encountered while using the standard numerical methods.<|reference_end|>
|
arxiv
|
@article{chebbi2024physics-informed,
title={Physics-informed neural networks for Timoshenko system with
Thermoelasticity},
author={Sabrine Chebbi, Joseph Muthui Wacira, Makram Hamouda, and Bubacarr Bah},
journal={arXiv preprint arXiv:2409.15872},
year={2024},
archivePrefix={arXiv},
eprint={2409.15872},
primaryClass={math.NA cs.NA math.AP}
}
|
chebbi2024physics-informed
|
arxiv-661266
|
2409.15875
|
Zero-Shot Detection of AI-Generated Images
|
<|reference_start|>Zero-Shot Detection of AI-Generated Images: Detecting AI-generated images has become an extraordinarily difficult challenge as new generative architectures emerge on a daily basis with more and more capabilities and unprecedented realism. New versions of many commercial tools, such as DALLE, Midjourney, and Stable Diffusion, have been released recently, and it is impractical to continually update and retrain supervised forensic detectors to handle such a large variety of models. To address this challenge, we propose a zero-shot entropy-based detector (ZED) that neither needs AI-generated training data nor relies on knowledge of generative architectures to artificially synthesize their artifacts. Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images. To this end, we rely on a lossless image encoder that estimates the probability distribution of each pixel given its context. To ensure computational efficiency, the encoder has a multi-resolution architecture and contexts comprise mostly pixels of the lower-resolution version of the image.Since only real images are needed to learn the model, the detector is independent of generator architectures and synthetic training data. Using a single discriminative feature, the proposed detector achieves state-of-the-art performance. On a wide variety of generative models it achieves an average improvement of more than 3% over the SoTA in terms of accuracy. Code is available at https://grip-unina.github.io/ZED/.<|reference_end|>
|
arxiv
|
@article{cozzolino2024zero-shot,
title={Zero-Shot Detection of AI-Generated Images},
  author={Davide Cozzolino and Giovanni Poggi and Matthias Nie{\ss}ner and Luisa
Verdoliva},
journal={arXiv preprint arXiv:2409.15875},
year={2024},
archivePrefix={arXiv},
eprint={2409.15875},
primaryClass={cs.CV}
}
|
cozzolino2024zero-shot
|
arxiv-661267
|
2409.15879
|
Machine Translation Advancements of Low-Resource Indian Languages by Transfer Learning
|
<|reference_start|>Machine Translation Advancements of Low-Resource Indian Languages by Transfer Learning: This paper introduces the submission by Huawei Translation Center (HW-TSC) to the WMT24 Indian Languages Machine Translation (MT) Shared Task. To develop a reliable machine translation system for low-resource Indian languages, we employed two distinct knowledge transfer strategies, taking into account the characteristics of the language scripts and the support available from existing open-source models for Indian languages. For Assamese(as) and Manipuri(mn), we fine-tuned the existing IndicTrans2 open-source model to enable bidirectional translation between English and these languages. For Khasi (kh) and Mizo (mz), We trained a multilingual model as a baseline using bilingual data from these four language pairs, along with an additional about 8kw English-Bengali bilingual data, all of which share certain linguistic features. This was followed by fine-tuning to achieve bidirectional translation between English and Khasi, as well as English and Mizo. Our transfer learning experiments produced impressive results: 23.5 BLEU for en-as, 31.8 BLEU for en-mn, 36.2 BLEU for as-en, and 47.9 BLEU for mn-en on their respective test sets. Similarly, the multilingual model transfer learning experiments yielded impressive outcomes, achieving 19.7 BLEU for en-kh, 32.8 BLEU for en-mz, 16.1 BLEU for kh-en, and 33.9 BLEU for mz-en on their respective test sets. These results not only highlight the effectiveness of transfer learning techniques for low-resource languages but also contribute to advancing machine translation capabilities for low-resource Indian languages.<|reference_end|>
|
arxiv
|
@article{wei2024machine,
title={Machine Translation Advancements of Low-Resource Indian Languages by
Transfer Learning},
author={Bin Wei, Jiawei Zhen, Zongyao Li, Zhanglin Wu, Daimeng Wei, Jiaxin
Guo, Zhiqiang Rao, Shaojun Li, Yuanchang Luo, Hengchao Shang, Jinlong Yang,
Yuhao Xie, Hao Yang},
journal={arXiv preprint arXiv:2409.15879},
year={2024},
archivePrefix={arXiv},
eprint={2409.15879},
primaryClass={cs.CL cs.AI}
}
|
wei2024machine
|
arxiv-661268
|
2409.15880
|
Aperiodic monotiles: from geometry to groups
|
<|reference_start|>Aperiodic monotiles: from geometry to groups: In 2023, two striking, nearly simultaneous, mathematical discoveries have excited their respective communities, one by Greenfeld and Tao, the other (the Hat tile) by Smith, Myers, Kaplan and Goodman-Strauss, which can both be summed up as the following: there exists a single tile that tiles, but not periodically (sometimes dubbed the einstein problem). The two settings and the tools are quite different (as emphasized by their almost disjoint bibliographies): one in euclidean geometry, the other in group theory. Both are highly nontrivial: in the first case, one allows complex shapes; in the second one, also the space to tile may be complex. We propose here a framework that embeds both of these problems. We illustrate our setting by transforming the Hat tile into a new aperiodic group monotile, and we describe its symmetries.<|reference_end|>
|
arxiv
|
@article{coulbois2024aperiodic,
title={Aperiodic monotiles: from geometry to groups},
  author={Thierry Coulbois (I2M), Anah\'i Gajardo (UdeC), Pierre Guillon (I2M),
Victor Lutfalla (I2M)},
journal={arXiv preprint arXiv:2409.15880},
year={2024},
archivePrefix={arXiv},
eprint={2409.15880},
primaryClass={cs.DM math.CO}
}
|
coulbois2024aperiodic
|
arxiv-661269
|
2409.15881
|
Automatic Bottom-Up Taxonomy Construction: A Software Application Domain Study
|
<|reference_start|>Automatic Bottom-Up Taxonomy Construction: A Software Application Domain Study: Previous research in software application domain classification has faced challenges due to the lack of a proper taxonomy that explicitly models relations between classes. As a result, current solutions are less effective for real-world usage. This study aims to develop a comprehensive software application domain taxonomy by integrating multiple datasources and leveraging ensemble methods. The goal is to overcome the limitations of individual sources and configurations by creating a more robust, accurate, and reproducible taxonomy. This study employs a quantitative research design involving three different datasources: an existing Computer Science Ontology (CSO), Wikidata, and LLMs. The study utilises a combination of automated and human evaluations to assess the quality of a taxonomy. The outcome measures include the number of unlinked terms, self-loops, and overall connectivity of the taxonomy. The results indicate that individual datasources have advantages and drawbacks: the CSO datasource showed minimal variance across different configurations, but a notable issue of missing technical terms and a high number of self-loops. The Wikipedia datasource required significant filtering during construction to improve metric performance. LLM-generated taxonomies demonstrated better performance when using context-rich prompts. An ensemble approach showed the most promise, successfully reducing the number of unlinked terms and self-loops, thus creating a more connected and comprehensive taxonomy. The study addresses the construction of a software application domain taxonomy relying on pre-existing resources. Our results indicate that an ensemble approach to taxonomy construction can effectively address the limitations of individual datasources. Future work should focus on refining the ensemble techniques and exploring additional datasources to enhance the taxonomy's accuracy and completeness.<|reference_end|>
|
arxiv
|
@article{sas2024automatic,
title={Automatic Bottom-Up Taxonomy Construction: A Software Application Domain
Study},
author={Cezar Sas and Andrea Capiluppi},
journal={arXiv preprint arXiv:2409.15881},
year={2024},
archivePrefix={arXiv},
eprint={2409.15881},
primaryClass={cs.SE}
}
|
sas2024automatic
|
arxiv-661270
|
2409.15882
|
Exploring VQ-VAE with Prosody Parameters for Speaker Anonymization
|
<|reference_start|>Exploring VQ-VAE with Prosody Parameters for Speaker Anonymization: Human speech conveys prosody, linguistic content, and speaker identity. This article investigates a novel speaker anonymization approach using an end-to-end network based on a Vector-Quantized Variational Auto-Encoder (VQ-VAE) to deal with these speech components. This approach is designed to disentangle these components to specifically target and modify the speaker identity while preserving the linguistic and emotionalcontent. To do so, three separate branches compute embeddings for content, prosody, and speaker identity respectively. During synthesis, taking these embeddings, the decoder of the proposed architecture is conditioned on both speaker and prosody information, allowing for capturing more nuanced emotional states and precise adjustments to speaker identification. Findings indicate that this method outperforms most baseline techniques in preserving emotional information. However, it exhibits more limited performance on other voice privacy tasks, emphasizing the need for further improvements.<|reference_end|>
|
arxiv
|
@article{leang2024exploring,
title={Exploring VQ-VAE with Prosody Parameters for Speaker Anonymization},
  author={Sotheara Leang (CADT, M-PSI), Anderson Augusma (M-PSI, SVH), Eric
Castelli (M-PSI), Fr\'ed\'erique Letu\'e (SAM), Sethserey Sam (CADT),
Dominique Vaufreydaz (M-PSI)},
journal={Voice Privacy Challenge 2024 at INTERSPEECH 2024, Sep 2024, KOS
Island, Greece},
year={2024},
archivePrefix={arXiv},
eprint={2409.15882},
primaryClass={cs.CV eess.SP}
}
|
leang2024exploring
|
arxiv-661271
|
2409.15883
|
Unsupervised dMRI Artifact Detection via Angular Resolution Enhancement and Cycle Consistency Learning
|
<|reference_start|>Unsupervised dMRI Artifact Detection via Angular Resolution Enhancement and Cycle Consistency Learning: Diffusion magnetic resonance imaging (dMRI) is a crucial technique in neuroimaging studies, allowing for the non-invasive probing of the underlying structures of brain tissues. Clinical dMRI data is susceptible to various artifacts during acquisition, which can lead to unreliable subsequent analyses. Therefore, dMRI preprocessing is essential for improving image quality, and manual inspection is often required to ensure that the preprocessed data is sufficiently corrected. However, manual inspection requires expertise and is time-consuming, especially with large-scale dMRI datasets. Given these challenges, an automated dMRI artifact detection tool is necessary to increase the productivity and reliability of dMRI data analysis. To this end, we propose a novel unsupervised deep learning framework called $\textbf{U}$nsupervised $\textbf{d}$MRI $\textbf{A}$rtifact $\textbf{D}$etection via $\textbf{A}$ngular Resolution Enhancement and $\textbf{C}$ycle Consistency Learning (UdAD-AC). UdAD-AC leverages dMRI angular resolution enhancement and cycle consistency learning to capture the effective representation of artifact-free dMRI data during training, and it identifies data containing artifacts using designed confidence score during inference. To assess the capability of UdAD-AC, several commonly reported dMRI artifacts, including bias field, susceptibility distortion, and corrupted volume, were added to the testing data. Experimental results demonstrate that UdAD-AC achieves the best performance compared to competitive methods in unsupervised dMRI artifact detection.<|reference_end|>
|
arxiv
|
@article{chen2024unsupervised,
title={Unsupervised dMRI Artifact Detection via Angular Resolution Enhancement
and Cycle Consistency Learning},
author={Sheng Chen, Zihao Tang, Xinyi Wang, Chenyu Wang, Weidong Cai},
journal={arXiv preprint arXiv:2409.15883},
year={2024},
archivePrefix={arXiv},
eprint={2409.15883},
primaryClass={eess.IV cs.CV}
}
|
chen2024unsupervised
|
arxiv-661272
|
2409.15884
|
Interpolation filter design for sample rate independent audio effect RNNs
|
<|reference_start|>Interpolation filter design for sample rate independent audio effect RNNs: Recurrent neural networks (RNNs) are effective at emulating the non-linear, stateful behavior of analog guitar amplifiers and distortion effects. Unlike the case of direct circuit simulation, RNNs have a fixed sample rate encoded in their model weights, making the sample rate non-adjustable during inference. Recent work has proposed increasing the sample rate of RNNs at inference (oversampling) by increasing the feedback delay length in samples, using a fractional delay filter for non-integer conversions. Here, we investigate the task of lowering the sample rate at inference (undersampling), and propose using an extrapolation filter to approximate the required fractional signal advance. We consider two filter design methods and analyze the impact of filter order on audio quality. Our results show that the correct choice of filter can give high quality results for both oversampling and undersampling; however, in some cases the sample rate adjustment leads to unwanted artefacts in the output signal. We analyse these failure cases through linearised stability analysis, showing that they result from instability around a fixed point. This approach enables an informed prediction of suitable interpolation filters for a given RNN model before runtime.<|reference_end|>
|
arxiv
|
@article{carson2024interpolation,
title={Interpolation filter design for sample rate independent audio effect
RNNs},
author={Alistair Carson, Alec Wright, Stefan Bilbao},
journal={arXiv preprint arXiv:2409.15884},
year={2024},
archivePrefix={arXiv},
eprint={2409.15884},
primaryClass={eess.AS cs.SD eess.SP}
}
|
carson2024interpolation
|
arxiv-661273
|
2409.15885
|
On the calibration of powerset speaker diarization models
|
<|reference_start|>On the calibration of powerset speaker diarization models: End-to-end neural diarization models have usually relied on a multilabel-classification formulation of the speaker diarization problem. Recently, we proposed a powerset multiclass formulation that has beaten the state-of-the-art on multiple datasets. In this paper, we propose to study the calibration of a powerset speaker diarization model, and explore some of its uses. We study the calibration in-domain, as well as out-of-domain, and explore the data in low-confidence regions. The reliability of model confidence is then tested in practice: we use the confidence of the pretrained model to selectively create training and validation subsets out of unannotated data, and compare this to random selection. We find that top-label confidence can be used to reliably predict high-error regions. Moreover, training on low-confidence regions provides a better calibrated model, and validating on low-confidence regions can be more annotation-efficient than random regions.<|reference_end|>
|
arxiv
|
@article{plaquet2024on,
title={On the calibration of powerset speaker diarization models},
author={Alexis Plaquet (IRIT-SAMoVA), Herv\'e Bredin (IRIT-SAMoVA, CNRS)},
journal={Interspeech 2024, Sep 2024, Kos, Greece. pp.3764-3768},
year={2024},
doi={10.21437/Interspeech.2024-1060},
archivePrefix={arXiv},
eprint={2409.15885},
primaryClass={cs.SD cs.LG eess.AS}
}
|
plaquet2024on
|
arxiv-661274
|
2409.15887
|
Self-Supervised Graph Embedding Clustering
|
<|reference_start|>Self-Supervised Graph Embedding Clustering: The K-means one-step dimensionality reduction clustering method has made some progress in addressing the curse of dimensionality in clustering tasks. However, it combines the K-means clustering and dimensionality reduction processes for optimization, leading to limitations in the clustering effect due to the introduced hyperparameters and the initialization of clustering centers. Moreover, maintaining class balance during clustering remains challenging. To overcome these issues, we propose a unified framework that integrates manifold learning with K-means, resulting in the self-supervised graph embedding framework. Specifically, we establish a connection between K-means and the manifold structure, allowing us to perform K-means without explicitly defining centroids. Additionally, we use this centroid-free K-means to generate labels in low-dimensional space and subsequently utilize the label information to determine the similarity between samples. This approach ensures consistency between the manifold structure and the labels. Our model effectively achieves one-step clustering without the need for redundant balancing hyperparameters. Notably, we have discovered that maximizing the $\ell_{2,1}$-norm naturally maintains class balance during clustering, a result that we have theoretically proven. Finally, experiments on multiple datasets demonstrate that the clustering results of Our-LPP and Our-MFA exhibit excellent and reliable performance.<|reference_end|>
|
arxiv
|
@article{li2024self-supervised,
title={Self-Supervised Graph Embedding Clustering},
author={Fangfang Li, Quanxue Gao, Cheng Deng, Wei Xia},
journal={arXiv preprint arXiv:2409.15887},
year={2024},
archivePrefix={arXiv},
eprint={2409.15887},
primaryClass={cs.LG}
}
|
li2024self-supervised
|
arxiv-661275
|
2409.15888
|
Investigating Gender Bias in Lymph-node Segmentation with Anatomical Priors
|
<|reference_start|>Investigating Gender Bias in Lymph-node Segmentation with Anatomical Priors: Radiotherapy requires precise segmentation of organs at risk (OARs) and of the Clinical Target Volume (CTV) to maximize treatment efficacy and minimize toxicity. While deep learning (DL) has significantly advanced automatic contouring, complex targets like CTVs remain challenging. This study explores the use of simpler, well-segmented structures (e.g., OARs) as Anatomical Prior (AP) information to improve CTV segmentation. We investigate gender bias in segmentation models and the mitigation effect of the prior information. Findings indicate that incorporating prior knowledge with the discussed strategies enhances segmentation quality in female patients and reduces gender bias, particularly in the abdomen region. This research provides a comparative analysis of new encoding strategies and highlights the potential of using AP to achieve fairer segmentation outcomes.<|reference_end|>
|
arxiv
|
@article{brioso2024investigating,
title={Investigating Gender Bias in Lymph-node Segmentation with Anatomical
Priors},
author={Ricardo Coimbra Brioso, Damiano Dei, Nicola Lambri, Pietro Mancosu,
Marta Scorsetti, and Daniele Loiacono},
journal={arXiv preprint arXiv:2409.15888},
year={2024},
archivePrefix={arXiv},
eprint={2409.15888},
primaryClass={eess.IV cs.CV}
}
|
brioso2024investigating
|
arxiv-661276
|
2409.15889
|
CAD: Memory Efficient Convolutional Adapter for Segment Anything
|
<|reference_start|>CAD: Memory Efficient Convolutional Adapter for Segment Anything: The Foundation model for image segmentation, Segment Anything (SAM), has been actively researched in various fields since its proposal. Various approaches have been proposed to adapt SAM to specific domains, with one notable approach involving the addition and training of lightweight adapter modules. While adapter-based fine-tuning approaches have reported parameter efficiency and significant performance improvements, they face an often overlooked issue: the excessive consumption of GPU memory relative to the number of trainable parameters. Addressing this issue, this paper proposes a memory-efficient parallel convolutional adapter architecture. This architecture connects in parallel with SAM's image encoder, eliminating the need to store activations and gradients of the image encoder during model training. Our proposed architecture demonstrated competitive experimental results while using less than half the GPU memory compared to SAM Adapter, indicating its value as an alternative to simple decoder fine-tuning when hardware limitations preclude adapter-based learning. Our code implementation is available at our github.<|reference_end|>
|
arxiv
|
@article{kim2024cad:,
title={CAD: Memory Efficient Convolutional Adapter for Segment Anything},
author={Joohyeok Kim, Joonhyeon Song, Seohwan Yun, Seongho Yoon, Sangmin Lee},
journal={arXiv preprint arXiv:2409.15889},
year={2024},
archivePrefix={arXiv},
eprint={2409.15889},
primaryClass={cs.CV}
}
|
kim2024cad:
|
arxiv-661277
|
2409.15890
|
HLB: Benchmarking LLMs' Humanlikeness in Language Use
|
<|reference_start|>HLB: Benchmarking LLMs' Humanlikeness in Language Use: As synthetic data becomes increasingly prevalent in training language models, particularly through generated dialogue, concerns have emerged that these models may deviate from authentic human language patterns, potentially losing the richness and creativity inherent in human communication. This highlights the critical need to assess the humanlikeness of language models in real-world language use. In this paper, we present a comprehensive humanlikeness benchmark (HLB) evaluating 20 large language models (LLMs) using 10 psycholinguistic experiments designed to probe core linguistic aspects, including sound, word, syntax, semantics, and discourse (see https://huggingface.co/spaces/XufengDuan/HumanLikeness). To anchor these comparisons, we collected responses from over 2,000 human participants and compared them to outputs from the LLMs in these experiments. For rigorous evaluation, we developed a coding algorithm that accurately identified language use patterns, enabling the extraction of response distributions for each task. By comparing the response distributions between human participants and LLMs, we quantified humanlikeness through distributional similarity. Our results reveal fine-grained differences in how well LLMs replicate human responses across various linguistic levels. Importantly, we found that improvements in other performance metrics did not necessarily lead to greater humanlikeness, and in some cases, even resulted in a decline. By introducing psycholinguistic methods to model evaluation, this benchmark offers the first framework for systematically assessing the humanlikeness of LLMs in language use.<|reference_end|>
|
arxiv
|
@article{duan2024hlb:,
title={HLB: Benchmarking LLMs' Humanlikeness in Language Use},
author={Xufeng Duan, Bei Xiao, Xuemei Tang, Zhenguang G. Cai},
journal={arXiv preprint arXiv:2409.15890},
year={2024},
archivePrefix={arXiv},
eprint={2409.15890},
primaryClass={cs.CL}
}
|
duan2024hlb:
|
arxiv-661278
|
2409.15892
|
Symmetries and Expressive Requirements for Learning General Policies
|
<|reference_start|>Symmetries and Expressive Requirements for Learning General Policies: State symmetries play an important role in planning and generalized planning. In the first case, state symmetries can be used to reduce the size of the search; in the second, to reduce the size of the training set. In the case of general planning, however, it is also critical to distinguish non-symmetric states, i.e., states that represent non-isomorphic relational structures. However, while the language of first-order logic distinguishes non-symmetric states, the languages and architectures used to represent and learn general policies do not. In particular, recent approaches for learning general policies use state features derived from description logics or learned via graph neural networks (GNNs) that are known to be limited by the expressive power of C_2, first-order logic with two variables and counting. In this work, we address the problem of detecting symmetries in planning and generalized planning and use the results to assess the expressive requirements for learning general policies over various planning domains. For this, we map planning states to plain graphs, run off-the-shelf algorithms to determine whether two states are isomorphic with respect to the goal, and run coloring algorithms to determine if C_2 features computed logically or via GNNs distinguish non-isomorphic states. Symmetry detection results in more effective learning, while the failure to detect non-symmetries prevents general policies from being learned at all in certain domains.<|reference_end|>
|
arxiv
|
@article{drexler2024symmetries,
title={Symmetries and Expressive Requirements for Learning General Policies},
author={Dominik Drexler and Simon St{\aa}hlberg and Blai Bonet and Hector
Geffner},
journal={arXiv preprint arXiv:2409.15892},
year={2024},
archivePrefix={arXiv},
eprint={2409.15892},
primaryClass={cs.AI}
}
|
drexler2024symmetries
|
arxiv-661279
|
2409.15893
|
Unsupervised Attention Regularization Based Domain Adaptation for Oracle Character Recognition
|
<|reference_start|>Unsupervised Attention Regularization Based Domain Adaptation for Oracle Character Recognition: The study of oracle characters plays an important role in Chinese archaeology and philology. However, the difficulty of collecting and annotating real-world scanned oracle characters hinders the development of oracle character recognition. In this paper, we develop a novel unsupervised domain adaptation (UDA) method, i.e., unsupervised attention regularization network (UARN), to transfer recognition knowledge from labeled handprinted oracle characters to unlabeled scanned data. First, we experimentally prove that existing UDA methods are not always consistent with human priors and cannot achieve optimal performance on the target domain. For these oracle characters with flip-insensitivity and high inter-class similarity, model interpretations are not flip-consistent and class-separable. To tackle this challenge, we take into consideration visual perceptual plausibility when adapting. Specifically, our method enforces attention consistency between the original and flipped images to achieve the model robustness to flipping. Simultaneously, we constrain attention separability between the pseudo class and the most confusing class to improve the model discriminability. Extensive experiments demonstrate that UARN shows better interpretability and achieves state-of-the-art performance on Oracle-241 dataset, substantially outperforming the previously structure-texture separation network by 8.5%.<|reference_end|>
|
arxiv
|
@article{wang2024unsupervised,
title={Unsupervised Attention Regularization Based Domain Adaptation for Oracle
Character Recognition},
author={Mei Wang, Weihong Deng, Jiani Hu, Sen Su},
journal={arXiv preprint arXiv:2409.15893},
year={2024},
archivePrefix={arXiv},
eprint={2409.15893},
primaryClass={cs.CV}
}
|
wang2024unsupervised
|
arxiv-661280
|
2409.15895
|
Preference-Guided Refactored Tuning for Retrieval Augmented Code Generation
|
<|reference_start|>Preference-Guided Refactored Tuning for Retrieval Augmented Code Generation: Retrieval-augmented code generation utilizes Large Language Models as the generator and significantly expands their code generation capabilities by providing relevant code, documentation, and more via the retriever. The current approach suffers from two primary limitations: 1) information redundancy. The indiscriminate inclusion of redundant information can result in resource wastage and may misguide generators, affecting their effectiveness and efficiency. 2) preference gap. Due to different optimization objectives, the retriever strives to procure code with higher ground truth similarity, yet this effort does not substantially benefit the generator. The retriever and the generator may prefer different golden code, and this gap in preference results in a suboptimal design. Additionally, differences in parameterization knowledge acquired during pre-training result in varying preferences among different generators. To address these limitations, in this paper, we propose RRG (Retrieve, Refactor, Generate), a novel framework for effective and efficient code generation. This framework introduces a code refactorer module between the retriever and the generator to bridge them. The refactoring process transforms the raw retrieved code into a more concise, efficient, and model-friendly version. It eliminates redundant information and noise, reducing the input length. Consequently, the generator receives higher-quality context, enabling it to produce more accurate results with lower inference costs. We conducted comprehensive experiments on multiple datasets. In the experiments, we confirmed the existence of a preference gap between the retriever and the generator, and RRG effectively bridges this gap. Specifically, RRG achieved significant performance improvements, with increases of up to 28% on EM, 13% on BLEU, and 6.8% on CodeBLEU.<|reference_end|>
|
arxiv
|
@article{gao2024preference-guided,
title={Preference-Guided Refactored Tuning for Retrieval Augmented Code
Generation},
author={Xinyu Gao, Yun Xiong, Deze Wang, Zhenhan Guan, Zejian Shi, Haofen
Wang, Shanshan Li},
journal={arXiv preprint arXiv:2409.15895},
year={2024},
archivePrefix={arXiv},
eprint={2409.15895},
primaryClass={cs.SE}
}
|
gao2024preference-guided
|
arxiv-661281
|
2409.15897
|
ESPnet-Codec: Comprehensive Training and Evaluation of Neural Codecs for Audio, Music, and Speech
|
<|reference_start|>ESPnet-Codec: Comprehensive Training and Evaluation of Neural Codecs for Audio, Music, and Speech: Neural codecs have become crucial to recent speech and audio generation research. In addition to signal compression capabilities, discrete codecs have also been found to enhance downstream training efficiency and compatibility with autoregressive language models. However, as extensive downstream applications are investigated, challenges have arisen in ensuring fair comparisons across diverse applications. To address these issues, we present a new open-source platform ESPnet-Codec, which is built on ESPnet and focuses on neural codec training and evaluation. ESPnet-Codec offers various recipes in audio, music, and speech for training and evaluation using several widely adopted codec models. Together with ESPnet-Codec, we present VERSA, a standalone evaluation toolkit, which provides a comprehensive evaluation of codec performance over 20 audio evaluation metrics. Notably, we demonstrate that ESPnet-Codec can be integrated into six ESPnet tasks, supporting diverse applications.<|reference_end|>
|
arxiv
|
@article{shi2024espnet-codec:,
title={ESPnet-Codec: Comprehensive Training and Evaluation of Neural Codecs for
Audio, Music, and Speech},
author={Jiatong Shi, Jinchuan Tian, Yihan Wu, Jee-weon Jung, Jia Qi Yip,
Yoshiki Masuyama, William Chen, Yuning Wu, Yuxun Tang, Massa Baali, Dareen
Alharhi, Dong Zhang, Ruifan Deng, Tejes Srivastava, Haibin Wu, Alexander H.
Liu, Bhiksha Raj, Qin Jin, Ruihua Song, Shinji Watanabe},
journal={arXiv preprint arXiv:2409.15897},
year={2024},
archivePrefix={arXiv},
eprint={2409.15897},
primaryClass={eess.AS cs.SD}
}
|
shi2024espnet-codec:
|
arxiv-661282
|
2409.15898
|
FedRepOpt: Gradient Re-parametrized Optimizers in Federated Learning
|
<|reference_start|>FedRepOpt: Gradient Re-parametrized Optimizers in Federated Learning: Federated Learning (FL) has emerged as a privacy-preserving method for training machine learning models in a distributed manner on edge devices. However, on-device models face inherent computational power and memory limitations, potentially resulting in constrained gradient updates. As the model's size increases, the frequency of gradient updates on edge devices decreases, ultimately leading to suboptimal training outcomes during any particular FL round. This limits the feasibility of deploying advanced and large-scale models on edge devices, hindering the potential for performance enhancements. To address this issue, we propose FedRepOpt, a gradient re-parameterized optimizer for FL. The gradient re-parameterized method allows training a simple local model with a similar performance as a complex model by modifying the optimizer's gradients according to a set of model-specific hyperparameters obtained from the complex models. In this work, we focus on VGG-style and Ghost-style models in the FL environment. Extensive experiments demonstrate that models using FedRepOpt obtain a significant boost in performance of 16.7% and 11.4% compared to the RepGhost-style and RepVGG-style networks, while also demonstrating a faster convergence time of 11.7% and 57.4% compared to their complex structure.<|reference_end|>
|
arxiv
|
@article{lau2024fedrepopt:,
title={FedRepOpt: Gradient Re-parametrized Optimizers in Federated Learning},
author={Kin Wai Lau, Yasar Abbas Ur Rehman, Pedro Porto Buarque de Gusm\~ao,
Lai-Man Po, Lan Ma, Yuyang Xie},
journal={arXiv preprint arXiv:2409.15898},
year={2024},
archivePrefix={arXiv},
eprint={2409.15898},
primaryClass={cs.LG cs.CV cs.DC}
}
|
lau2024fedrepopt:
|
arxiv-661283
|
2409.15902
|
Konstruktor: A Strong Baseline for Simple Knowledge Graph Question Answering
|
<|reference_start|>Konstruktor: A Strong Baseline for Simple Knowledge Graph Question Answering: While being one of the most popular question types, simple questions such as "Who is the author of Cinderella?", are still not completely solved. Surprisingly, even the most powerful modern Large Language Models are prone to errors when dealing with such questions, especially when dealing with rare entities. At the same time, as an answer may be one hop away from the question entity, one can try to develop a method that uses structured knowledge graphs (KGs) to answer such questions. In this paper, we introduce Konstruktor - an efficient and robust approach that breaks down the problem into three steps: (i) entity extraction and entity linking, (ii) relation prediction, and (iii) querying the knowledge graph. Our approach integrates language models and knowledge graphs, exploiting the power of the former and the interpretability of the latter. We experiment with two named entity recognition and entity linking methods and several relation detection techniques. We show that for relation detection, the most challenging step of the workflow, a combination of relation classification/generation and ranking outperforms other methods. We report Konstruktor's strong results on four datasets.<|reference_end|>
|
arxiv
|
@article{lysyuk2024konstruktor:,
title={Konstruktor: A Strong Baseline for Simple Knowledge Graph Question
Answering},
author={Maria Lysyuk, Mikhail Salnikov, Pavel Braslavski, Alexander Panchenko},
journal={International Conference on Applications of Natural Language to
Information Systems, pages: 107-118, year: 2024, organization: Springer},
year={2024},
doi={10.1007/978-3-031-70242-6_11},
archivePrefix={arXiv},
eprint={2409.15902},
primaryClass={cs.CL}
}
|
lysyuk2024konstruktor:
|
arxiv-661284
|
2409.15903
|
Five questions and answers about artificial intelligence
|
<|reference_start|>Five questions and answers about artificial intelligence: Rapid advances in Artificial Intelligence (AI) are generating much controversy in society, often without scientific basis. As occurred with the development of other emerging technologies, such as the introduction of electricity in the early 20th century, AI causes both fascination and fear. Following the advice of the philosopher R.W. Emerson that knowledge is the antidote to fear, this paper seeks to contribute to the dissemination of knowledge about AI. To this end, it reflects on the following questions: the origins of AI, its possible future evolution, its ability to show feelings, the associated threats and dangers, and the concept of AI singularity.<|reference_end|>
|
arxiv
|
@article{prieto2024five,
title={Five questions and answers about artificial intelligence},
author={Alberto Prieto, Beatriz Prieto},
journal={arXiv preprint arXiv:2409.15903},
year={2024},
archivePrefix={arXiv},
eprint={2409.15903},
primaryClass={cs.AI}
}
|
prieto2024five
|
arxiv-661285
|
2409.15904
|
Unimotion: Unifying 3D Human Motion Synthesis and Understanding
|
<|reference_start|>Unimotion: Unifying 3D Human Motion Synthesis and Understanding: We introduce Unimotion, the first unified multi-task human motion model capable of both flexible motion control and frame-level motion understanding. While existing works control avatar motion with global text conditioning, or with fine-grained per frame scripts, none can do both at once. In addition, none of the existing works can output frame-level text paired with the generated poses. In contrast, Unimotion allows to control motion with global text, or local frame-level text, or both at once, providing more flexible control for users. Importantly, Unimotion is the first model which by design outputs local text paired with the generated poses, allowing users to know what motion happens and when, which is necessary for a wide range of applications. We show Unimotion opens up new applications: 1.) Hierarchical control, allowing users to specify motion at different levels of detail, 2.) Obtaining motion text descriptions for existing MoCap data or YouTube videos 3.) Allowing for editability, generating motion from text, and editing the motion via text edits. Moreover, Unimotion attains state-of-the-art results for the frame-level text-to-motion task on the established HumanML3D dataset. The pre-trained model and code are available on our project page at https://coral79.github.io/uni-motion/.<|reference_end|>
|
arxiv
|
@article{li2024unimotion:,
title={Unimotion: Unifying 3D Human Motion Synthesis and Understanding},
author={Chuqiao Li, Julian Chibane, Yannan He, Naama Pearl, Andreas Geiger,
Gerard Pons-Moll},
journal={arXiv preprint arXiv:2409.15904},
year={2024},
archivePrefix={arXiv},
eprint={2409.15904},
primaryClass={cs.CV}
}
|
li2024unimotion:
|
arxiv-661286
|
2409.15905
|
Boosting Code-Switching ASR with Mixture of Experts Enhanced Speech-Conditioned LLM
|
<|reference_start|>Boosting Code-Switching ASR with Mixture of Experts Enhanced Speech-Conditioned LLM: In this paper, we introduce a speech-conditioned Large Language Model (LLM) integrated with a Mixture of Experts (MoE) based connector to address the challenge of Code-Switching (CS) in Automatic Speech Recognition (ASR). Specifically, we propose an Insertion and Deletion of Interruption Token (IDIT) mechanism for better transfer text generation ability of LLM to speech recognition task. We also present a connecter with MoE architecture that manages multiple languages efficiently. To further enhance the collaboration of multiple experts and leverage the understanding capabilities of LLM, we propose a two-stage progressive training strategy: 1) The connector is unfrozen and trained with language-specialized experts to map speech representations to the text space. 2) The connector and LLM LoRA adaptor are trained with the proposed IDIT mechanism and all experts are activated to learn general representations. Experimental results demonstrate that our method significantly outperforms state-of-the-art models, including end-to-end and large-scale audio-language models.<|reference_end|>
|
arxiv
|
@article{zhang2024boosting,
title={Boosting Code-Switching ASR with Mixture of Experts Enhanced
Speech-Conditioned LLM},
author={Fengrun Zhang, Wang Geng, Hukai Huang, Yahui Shan, Cheng Yi, He Qu},
journal={arXiv preprint arXiv:2409.15905},
year={2024},
archivePrefix={arXiv},
eprint={2409.15905},
primaryClass={cs.SD cs.AI eess.AS}
}
|
zhang2024boosting
|
arxiv-661287
|
2409.15906
|
Preserving positivity of Gauss-Newton Hessian through random sampling
|
<|reference_start|>Preserving positivity of Gauss-Newton Hessian through random sampling: Numerically the reconstructability of unknown parameters in inverse problems heavily relies on the chosen data. Therefore, it is crucial to design an experiment that yields data that is sensitive to the parameters. We approach this problem from the perspective of a least squares optimization, and examine the positivity of the Gauss-Newton Hessian at the global minimum point of the objective function. We propose a general framework that provides an efficient down-sampling strategy that can select data that preserves the strict positivity of the Hessian. Matrix sketching techniques from randomized linear algebra is heavily leaned on to achieve this goal. The method requires drawing samples from a certain distribution, and gradient free sampling methods are integrated to execute the data selection. Numerical experiments demonstrate the effectiveness of this method in selecting sensor locations for Schr\"odinger potential reconstruction.<|reference_end|>
|
arxiv
|
@article{hellmuth2024preserving,
title={Preserving positivity of Gauss-Newton Hessian through random sampling},
author={Kathrin Hellmuth, Christian Klingenberg, Qin Li},
journal={arXiv preprint arXiv:2409.15906},
year={2024},
archivePrefix={arXiv},
eprint={2409.15906},
primaryClass={math.NA cs.NA math.OC}
}
|
hellmuth2024preserving
|
arxiv-661288
|
2409.15907
|
Enhancing Text-to-SQL Capabilities of Large Language Models via Domain Database Knowledge Injection
|
<|reference_start|>Enhancing Text-to-SQL Capabilities of Large Language Models via Domain Database Knowledge Injection: Text-to-SQL is a subtask in semantic parsing that has seen rapid progress with the evolution of Large Language Models (LLMs). However, LLMs face challenges due to hallucination issues and a lack of domain-specific database knowledge (such as table schema and cell values). As a result, they can make errors in generating table names, columns, and matching values to the correct columns in SQL statements. This paper introduces a method of knowledge injection to enhance LLMs' ability to understand schema contents by incorporating prior knowledge. This approach improves their performance in Text-to-SQL tasks. Experimental results show that pre-training LLMs on domain-specific database knowledge and fine-tuning them on downstream Text-to-SQL tasks significantly improves the Execution Match (EX) and Exact Match (EM) metrics across various models. This effectively reduces errors in generating column names and matching values to the columns. Furthermore, the knowledge-injected models can be applied to many downstream Text-to-SQL tasks, demonstrating the generalizability of the approach presented in this paper.<|reference_end|>
|
arxiv
|
@article{ma2024enhancing,
title={Enhancing Text-to-SQL Capabilities of Large Language Models via Domain
Database Knowledge Injection},
author={Xingyu Ma, Xin Tian, Lingxiang Wu, Xuepeng Wang, Xueming Tang, Jinqiao
Wang},
journal={arXiv preprint arXiv:2409.15907},
year={2024},
archivePrefix={arXiv},
eprint={2409.15907},
primaryClass={cs.CL cs.AI}
}
|
ma2024enhancing
|
arxiv-661289
|
2409.15910
|
Enhancing IoT based Plant Health Monitoring through Advanced Human Plant Interaction using Large Language Models and Mobile Applications
|
<|reference_start|>Enhancing IoT based Plant Health Monitoring through Advanced Human Plant Interaction using Large Language Models and Mobile Applications: This paper presents the development of a novel plant communication application that allows plants to "talk" to humans using real-time sensor data and AI-powered language models. Utilizing soil sensors that track moisture, temperature, and nutrient levels, the system feeds this data into the Gemini API, where it is processed and transformed into natural language insights about the plant's health and "mood." Developed using Flutter, Firebase, and ThingSpeak, the app offers a seamless user experience with real-time interaction capabilities. By fostering human-plant connectivity, this system enhances plant care practices, promotes sustainability, and introduces innovative applications for AI and IoT technologies in both personal and agricultural contexts. The paper explores the technical architecture, system integration, and broader implications of AI-driven plant communication.<|reference_end|>
|
arxiv
|
@article{agarwal2024enhancing,
title={Enhancing IoT based Plant Health Monitoring through Advanced Human Plant
Interaction using Large Language Models and Mobile Applications},
author={Kriti Agarwal, Samhruth Ananthanarayanan, Srinitish Srinivasan and
Abirami S},
journal={arXiv preprint arXiv:2409.15910},
year={2024},
archivePrefix={arXiv},
eprint={2409.15910},
primaryClass={cs.AI}
}
|
agarwal2024enhancing
|
arxiv-661290
|
2409.15911
|
A Modular-based Strategy for Mitigating Gradient Conflicts in Simultaneous Speech Translation
|
<|reference_start|>A Modular-based Strategy for Mitigating Gradient Conflicts in Simultaneous Speech Translation: Simultaneous Speech Translation (SimulST) involves generating target language text while continuously processing streaming speech input, presenting significant real-time challenges. Multi-task learning is often employed to enhance SimulST performance but introduces optimization conflicts between primary and auxiliary tasks, potentially compromising overall efficiency. The existing model-level conflict resolution methods are not well-suited for this task which exacerbates inefficiencies and leads to high GPU memory consumption. To address these challenges, we propose a Modular Gradient Conflict Mitigation (MGCM) strategy that detects conflicts at a finer-grained modular level and resolves them utilizing gradient projection. Experimental results demonstrate that MGCM significantly improves SimulST performance, particularly under medium and high latency conditions, achieving a 0.68 BLEU score gain in offline tasks. Additionally, MGCM reduces GPU memory consumption by over 95\% compared to other conflict mitigation methods, establishing it as a robust solution for SimulST tasks.<|reference_end|>
|
arxiv
|
@article{liu2024a,
title={A Modular-based Strategy for Mitigating Gradient Conflicts in
Simultaneous Speech Translation},
author={Xiaoqian Liu and Yangfan Du and Jianjin Wang and Yuan Ge and Chen Xu
and Tong Xiao and Guocheng Chen and Jingbo Zhu},
journal={arXiv preprint arXiv:2409.15911},
year={2024},
archivePrefix={arXiv},
eprint={2409.15911},
primaryClass={cs.CL cs.SD eess.AS}
}
|
liu2024a
|
arxiv-661291
|
2409.15912
|
Explaining word embeddings with perfect fidelity: Case study in research impact prediction
|
<|reference_start|>Explaining word embeddings with perfect fidelity: Case study in research impact prediction: Best performing approaches for scholarly document quality prediction are based on embedding models, which do not allow direct explanation of classifiers as distinct words no longer correspond to the input features for model training. Although model-agnostic explanation methods such as Local interpretable model-agnostic explanations (LIME) can be applied, these produce results with questionable correspondence to the ML model. We introduce a new feature importance method, Self-model Rated Entities (SMER), for logistic regression-based classification models trained on word embeddings. We show that SMER has theoretically perfect fidelity with the explained model, as its prediction corresponds exactly to the average of predictions for individual words in the text. SMER allows us to reliably determine which words or entities positively contribute to predicting impactful articles. Quantitative and qualitative evaluation is performed through five diverse experiments conducted on 50,000 research papers from the CORD-19 corpus. Through an AOPC curve analysis, we experimentally demonstrate that SMER produces better explanations than LIME for logistic regression.<|reference_end|>
|
arxiv
|
@article{dvorackova2024explaining,
title={Explaining word embeddings with perfect fidelity: Case study in research
impact prediction},
author={Lucie Dvorackova and Marcin P. Joachimiak and Michal Cerny and
Adriana Kubecova and Vilem Sklenak and Tomas Kliegr},
journal={arXiv preprint arXiv:2409.15912},
year={2024},
archivePrefix={arXiv},
eprint={2409.15912},
primaryClass={cs.CL}
}
|
dvorackova2024explaining
|
arxiv-661292
|
2409.15914
|
Exploring the potential of collaborative UAV 3D mapping in Kenyan savanna for wildlife research
|
<|reference_start|>Exploring the potential of collaborative UAV 3D mapping in Kenyan savanna for wildlife research: UAV-based biodiversity conservation applications have exhibited many data acquisition advantages for researchers. UAV platforms with embedded data processing hardware can support conservation challenges through 3D habitat mapping, surveillance and monitoring solutions. High-quality real-time scene reconstruction as well as real-time UAV localization can optimize the exploration vs exploitation balance of single or collaborative mission. In this work, we explore the potential of two collaborative frameworks - Visual Simultaneous Localization and Mapping (V-SLAM) and Structure-from-Motion (SfM) for 3D mapping purposes and compare results with standard offline approaches.<|reference_end|>
|
arxiv
|
@article{shukla2024exploring,
title={Exploring the potential of collaborative UAV 3D mapping in Kenyan
savanna for wildlife research},
author={Vandita Shukla and Luca Morelli and Pawel Trybala and Fabio Remondino
and Wentian Gan and Yifei Yu and Xin Wang},
journal={arXiv preprint arXiv:2409.15914},
year={2024},
archivePrefix={arXiv},
eprint={2409.15914},
primaryClass={cs.CV}
}
|
shukla2024exploring
|
arxiv-661293
|
2409.15915
|
Planning in the Dark: LLM-Symbolic Planning Pipeline without Experts
|
<|reference_start|>Planning in the Dark: LLM-Symbolic Planning Pipeline without Experts: Large Language Models (LLMs) have shown promise in solving natural language-described planning tasks, but their direct use often leads to inconsistent reasoning and hallucination. While hybrid LLM-symbolic planning pipelines have emerged as a more robust alternative, they typically require extensive expert intervention to refine and validate generated action schemas. It not only limits scalability but also introduces a potential for biased interpretation, as a single expert's interpretation of ambiguous natural language descriptions might not align with the user's actual intent. To address this, we propose a novel approach that constructs an action schema library to generate multiple candidates, accounting for the diverse possible interpretations of natural language descriptions. We further introduce a semantic validation and ranking module that automatically filter and rank the generated schemas and plans without expert-in-the-loop. The experiments showed our pipeline maintains superiority in planning over the direct LLM planning approach. These findings demonstrate the feasibility of a fully automated end-to-end LLM-symbolic planner that requires no expert intervention, opening up the possibility for a broader audience to engage with AI planning with less prerequisite of domain expertise.<|reference_end|>
|
arxiv
|
@article{huang2024planning,
title={Planning in the Dark: LLM-Symbolic Planning Pipeline without Experts},
author={Sukai Huang and Nir Lipovetzky and Trevor Cohn},
journal={arXiv preprint arXiv:2409.15915},
year={2024},
archivePrefix={arXiv},
eprint={2409.15915},
primaryClass={cs.AI}
}
|
huang2024planning
|
arxiv-661294
|
2409.15916
|
Deep convolutional framelets for dose reconstruction in BNCT with Compton camera detector
|
<|reference_start|>Deep convolutional framelets for dose reconstruction in BNCT with Compton camera detector: Boron Neutron Capture Therapy (BNCT) is an innovative binary form of radiation therapy with high selectivity towards cancer tissue based on the neutron capture reaction 10B(n,$\alpha$)7Li, consisting in the exposition of patients to neutron beams after administration of a boron compound with preferential accumulation in cancer cells. The high linear energy transfer products of the ensuing reaction deposit their energy at cell level, sparing normal tissue. Although progress in accelerator-based BNCT has led to renewed interest in this cancer treatment modality, in vivo dose monitoring during treatment still remains not feasible and several approaches are under investigation. While Compton imaging presents various advantages over other imaging methods, it typically requires long reconstruction times, comparable with BNCT treatment duration. This study aims to develop deep neural network models to estimate the dose distribution by using a simulated dataset of BNCT Compton camera images. The models pursue the avoidance of the iteration time associated with the maximum-likelihood expectation-maximization algorithm (MLEM), enabling a prompt dose reconstruction during the treatment. The U-Net architecture and two variants based on the deep convolutional framelets framework have been used for noise and artifacts reduction in few-iterations reconstructed images, leading to promising results in terms of reconstruction accuracy and processing time.<|reference_end|>
|
arxiv
|
@article{didonna2024deep,
title={Deep convolutional framelets for dose reconstruction in BNCT with
Compton camera detector},
author={Angelo Didonna and Dayron Ramos Lopez and Giuseppe Iaselli and Nicola
Amoroso and Nicola Ferrara and Gabriella Maria Incoronata Pugliese},
journal={arXiv preprint arXiv:2409.15916},
year={2024},
archivePrefix={arXiv},
eprint={2409.15916},
primaryClass={physics.med-ph cs.LG}
}
|
didonna2024deep
|
arxiv-661295
|
2409.15917
|
The lowest-order Neural Approximated Virtual Element Method on polygonal elements
|
<|reference_start|>The lowest-order Neural Approximated Virtual Element Method on polygonal elements: The lowest-order Neural Approximated Virtual Element Method on polygonal elements is proposed here. This method employs a neural network to locally approximate the Virtual Element basis functions, thereby eliminating issues concerning stabilization and projection operators, which are the key components of the standard Virtual Element Method. We propose different training strategies for the neural network training, each correlated by the theoretical justification and with a different level of accuracy. Several numerical experiments are proposed to validate our procedure on general polygonal meshes and demonstrate the advantages of the proposed method across different problem formulations, particularly in cases where the heavy usage of projection and stabilization terms may represent challenges for the standard version of the method. Particular attention is reserved to triangular meshes with hanging nodes which assume a central role in many virtual element applications.<|reference_end|>
|
arxiv
|
@article{berrone2024the,
title={The lowest-order Neural Approximated Virtual Element Method on polygonal
elements},
author={Stefano Berrone and Moreno Pintore and Gioana Teora},
journal={arXiv preprint arXiv:2409.15917},
year={2024},
archivePrefix={arXiv},
eprint={2409.15917},
primaryClass={math.NA cs.NA}
}
|
berrone2024the
|
arxiv-661296
|
2409.15919
|
Learning Compact Channel Correlation Representation for LiDAR Place Recognition
|
<|reference_start|>Learning Compact Channel Correlation Representation for LiDAR Place Recognition: This paper presents a novel approach to learn compact channel correlation representation for LiDAR place recognition, called C3R, aimed at reducing the computational burden and dimensionality associated with traditional covariance pooling methods for place recognition tasks. Our method partitions the feature matrix into smaller groups, computes group-wise covariance matrices, and aggregates them via a learnable aggregation strategy. Matrix power normalization is applied to ensure stability. Theoretical analyses are also given to demonstrate the effectiveness of the proposed method, including its ability to preserve permutation invariance and maintain high mutual information between the original features and the aggregated representation. We conduct extensive experiments on four large-scale, public LiDAR place recognition datasets including Oxford RobotCar, In-house, MulRan, and WildPlaces datasets to validate our approach's superiority in accuracy, and robustness. Furthermore, we provide the quantitative results of our approach for a deeper understanding. The code will be released upon acceptance.<|reference_end|>
|
arxiv
|
@article{rahman2024learning,
title={Learning Compact Channel Correlation Representation for LiDAR Place
Recognition},
author={Saimunur Rahman and Peyman Moghadam},
journal={arXiv preprint arXiv:2409.15919},
year={2024},
archivePrefix={arXiv},
eprint={2409.15919},
primaryClass={cs.CV}
}
|
rahman2024learning
|
arxiv-661297
|
2409.15920
|
An adequacy theorem between mixed powerdomains and probabilistic concurrency
|
<|reference_start|>An adequacy theorem between mixed powerdomains and probabilistic concurrency: We present an adequacy theorem for a concurrent extension of probabilistic GCL. The underlying denotational semantics is based on the so-called mixed powerdomains which combine non-determinism with stochasticity. The theorem itself is formulated via M. Smyth's idea of treating observable properties as open sets of a topological space. One application of our theorem is that it entails semi-decidability w.r.t. whether a concurrent program satisfies an observable property (written in a certain form). This is intimately connected to M. Escard\'o's conjecture about semi-decidability w.r.t. may and must probabilistic testing.<|reference_end|>
|
arxiv
|
@article{neves2024an,
title={An adequacy theorem between mixed powerdomains and probabilistic
concurrency},
author={Renato Neves},
journal={arXiv preprint arXiv:2409.15920},
year={2024},
archivePrefix={arXiv},
eprint={2409.15920},
primaryClass={cs.LO}
}
|
neves2024an
|
arxiv-661298
|
2409.15922
|
Overcoming Reward Model Noise in Instruction-Guided Reinforcement Learning
|
<|reference_start|>Overcoming Reward Model Noise in Instruction-Guided Reinforcement Learning: Vision-language models (VLMs) have gained traction as auxiliary reward models to provide more informative reward signals in sparse reward environments. However, our work reveals a critical vulnerability of this method: a small amount of noise in the reward signal can severely degrade agent performance. In challenging environments with sparse rewards, we show that reinforcement learning agents using VLM-based reward models without proper noise handling perform worse than agents relying solely on exploration-driven methods. We hypothesize that false positive rewards -- where the reward model incorrectly assigns rewards to trajectories that do not fulfill the given instruction -- are more detrimental to learning than false negatives. Our analysis confirms this hypothesis, revealing that the widely used cosine similarity metric, when applied to comparing agent trajectories and language instructions, is prone to generating false positive reward signals. To address this, we introduce BiMI (Binary Mutual Information), a novel noise-resilient reward function. Our experiments demonstrate that, BiMI significantly boosts the agent performance, with an average improvement ratio of 44.5\% across diverse environments with learned, non-oracle VLMs, thereby making VLM-based reward models practical for real-world applications.<|reference_end|>
|
arxiv
|
@article{huang2024the,
title={The Dark Side of Rich Rewards: Understanding and Mitigating Noise in VLM
Rewards},
author={Sukai Huang and Nir Lipovetzky and Trevor Cohn},
journal={arXiv preprint arXiv:2409.15922},
year={2024},
archivePrefix={arXiv},
eprint={2409.15922},
primaryClass={cs.LG cs.RO}
}
|
huang2024the
|
arxiv-661299
|
2409.15924
|
Multilingual Transfer and Domain Adaptation for Low-Resource Languages of Spain
|
<|reference_start|>Multilingual Transfer and Domain Adaptation for Low-Resource Languages of Spain: This article introduces the submission status of the Translation into Low-Resource Languages of Spain task at (WMT 2024) by Huawei Translation Service Center (HW-TSC). We participated in three translation tasks: spanish to aragonese (es-arg), spanish to aranese (es-arn), and spanish to asturian (es-ast). For these three translation tasks, we use training strategies such as multilingual transfer, regularized dropout, forward translation and back translation, labse denoising, transduction ensemble learning and other strategies to neural machine translation (NMT) model based on training deep transformer-big architecture. By using these enhancement strategies, our submission achieved a competitive result in the final evaluation.<|reference_end|>
|
arxiv
|
@article{luo2024multilingual,
title={Multilingual Transfer and Domain Adaptation for Low-Resource Languages
of Spain},
author={Yuanchang Luo and Zhanglin Wu and Daimeng Wei and Hengchao Shang and
Zongyao Li and Jiaxin Guo and Zhiqiang Rao and Shaojun Li and Jinlong Yang
and Yuhao Xie and Jiawei Zheng and Bin Wei and Hao Yang},
journal={arXiv preprint arXiv:2409.15924},
year={2024},
archivePrefix={arXiv},
eprint={2409.15924},
primaryClass={cs.CL cs.AI}
}
|
luo2024multilingual
|
arxiv-661300
|
2409.15925
|
Identifying early tumour states in a Cahn-Hilliard-reaction-diffusion model
|
<|reference_start|>Identifying early tumour states in a Cahn-Hilliard-reaction-diffusion model: In this paper, we tackle the problem of reconstructing earlier tumour configurations starting from a single spatial measurement at a later time. We describe the tumour evolution through a diffuse interface model coupling a Cahn-Hilliard-type equation for the tumour phase field to a reaction-diffusion equation for a key nutrient proportion, also accounting for chemotaxis effects. We stress that the ability to reconstruct earlier tumour states is crucial for calibrating the model used to predict the tumour dynamics and also to identify the areas where the tumour initially began to develop. However, backward-in-time inverse problems are well-known to be severely ill-posed, even for linear parabolic equations. Moreover, we also face additional challenges due to the complexity of a non-linear fourth-order parabolic system. Nonetheless, we can establish uniqueness by using logarithmic convexity methods under suitable a priori assumptions. To further address the ill-posedness of the inverse problem, we propose a Tikhonov regularisation approach that approximates the solution through a family of constrained minimisation problems. For such problems, we analytically derive the first-order necessary optimality conditions. Finally, we develop a computationally efficient numerical approximation of the optimisation problems by employing standard $C^0$-conforming first-order finite elements. We conduct numerical experiments on several pertinent test cases and observe that the proposed algorithm consistently meets expectations, delivering accurate reconstructions of the original ground truth.<|reference_end|>
|
arxiv
|
@article{agosti2024identifying,
title={Identifying early tumour states in a Cahn-Hilliard-reaction-diffusion
model},
author={Abramo Agosti and Elena Beretta and Cecilia Cavaterra and Matteo
Fornoni and Elisabetta Rocca},
journal={arXiv preprint arXiv:2409.15925},
year={2024},
archivePrefix={arXiv},
eprint={2409.15925},
primaryClass={math.AP cs.NA math.NA math.OC}
}
|
agosti2024identifying
|