corpus_id stringlengths 7-12 | paper_id stringlengths 9-16 | title stringlengths 1-261 | abstract stringlengths 70-4.02k | source stringclasses 1 value | bibtex stringlengths 208-20.9k | citation_key stringlengths 6-100 |
---|---|---|---|---|---|---|
arxiv-660501
|
2409.14525
|
Snakes can be fooled into thinking they live in a tree
|
<|reference_start|>Snakes can be fooled into thinking they live in a tree: We construct a finitely generated group which is not virtually free, yet has decidable snake tiling problem. This shows that either a long-standing conjecture by Ballier and Stein (the characterization of groups with decidable domino problem as those virtually free ones) is false, or a question by Aubrun and Bitar has a positive answer (there exists a group for which the domino and snake problems are of different difficulty).<|reference_end|>
|
arxiv
|
@article{bartholdi2024snakes,
title={Snakes can be fooled into thinking they live in a tree},
author={Laurent Bartholdi and Ville Salo},
journal={arXiv preprint arXiv:2409.14525},
year={2024},
archivePrefix={arXiv},
eprint={2409.14525},
primaryClass={math.GR cs.LO math.DS math.LO}
}
|
bartholdi2024snakes
|
arxiv-660502
|
2409.14526
|
What Are They Doing? Joint Audio-Speech Co-Reasoning
|
<|reference_start|>What Are They Doing? Joint Audio-Speech Co-Reasoning: In audio and speech processing, tasks usually focus on either the audio or speech modality, even when both sounds and human speech are present in the same audio clip. Recent Auditory Large Language Models (ALLMs) have made it possible to process audio and speech simultaneously within a single model, leading to further considerations of joint audio-speech tasks. In this paper, we investigate how well ALLMs can perform joint audio-speech processing. Specifically, we introduce Joint Audio-Speech Co-Reasoning (JASCO), a novel task that unifies audio and speech processing, strictly requiring co-reasoning across both modalities. We release a scene-reasoning dataset called "What Are They Doing" and establish a joint audio-speech benchmark to evaluate the joint reasoning capability of popular ALLMs. Additionally, we provide deeper insights into the models' behaviors by analyzing their dependence on each modality.<|reference_end|>
|
arxiv
|
@article{wang2024what,
title={What Are They Doing? Joint Audio-Speech Co-Reasoning},
author={Yingzhi Wang and Pooneh Mousavi and Artem Ploujnikov and Mirco Ravanelli},
journal={arXiv preprint arXiv:2409.14526},
year={2024},
archivePrefix={arXiv},
eprint={2409.14526},
primaryClass={cs.SD cs.CL eess.AS}
}
|
wang2024what
|
arxiv-660503
|
2409.14527
|
Is 3D chip technology the next growth engine for performance improvement?
|
<|reference_start|>Is 3D chip technology the next growth engine for performance improvement?: The semiconductor industry is reaching a fascinating confluence in several evolutionary trends that will likely lead to a number of revolutionary changes in how computer systems are designed, implemented, scaled, and used. Since Moore's Law, which has driven the evolution in systems for the last several decades, is imminently approaching real and severe limitations, the ability to create three-dimensional (3D) device stacks appears promising as a way to continue to integrate more devices into a chip. While on the one hand, this nascent ability to make 3D technology can be interpreted as merely an extension of Moore's Law, on the other hand, the fact that systems can now be integrated across multiple planes poses some novel opportunities, as well as serious challenges and questions. In this paper, we explore these various challenges and opportunities and discuss structures and systems that are likely to be facilitated by 3D technology. We also describe the ways in which these systems are likely to change. Since 3D technology offers some different value propositions, we expect that some of the most important ways in which 3D technology will likely impact our approach to future systems design, implementation, and usage are not yet obvious to most system designers, and we outline several of them.<|reference_end|>
|
arxiv
|
@article{emma2024is,
title={Is 3D chip technology the next growth engine for performance
improvement?},
author={Philip Emma and Eren Kurshan},
journal={IBM Journal of Research and Development, Volume 52, Pages 541-552,
2008/11},
year={2024},
archivePrefix={arXiv},
eprint={2409.14527},
primaryClass={cs.ET}
}
|
emma2024is
|
arxiv-660504
|
2409.14530
|
An Integrated Blockchain and IPFS Solution for Secure and Efficient Source Code Repository Hosting using Middleman Approach
|
<|reference_start|>An Integrated Blockchain and IPFS Solution for Secure and Efficient Source Code Repository Hosting using Middleman Approach: Version control systems (VCS) are essential for software development, yet centralized VCS present risks such as data loss, security breaches, and ownership disputes. While blockchain-based approaches to decentralized source code repository hosting have been explored, many existing solutions struggle with challenges related to security, scalability, efficiency, and real-time collaboration. This study seeks to enhance these efforts by proposing a novel decentralized solution that leverages the Ethereum blockchain and IPFS for secure, efficient, and resilient code repository hosting and governance. Our approach introduces a hybrid architecture that combines the immutable and decentralized nature of blockchain with the efficiency of IPFS for off-chain storage. To facilitate real-time collaboration, we integrate a temporary centralized Middleman IPFS that manages transaction processing and enhances operational efficiency without compromising long-term security. This Middleman IPFS acts as an intermediary, balancing the speed of centralized systems with the resilience of decentralized architectures. Our system uses smart contracts to maintain access control and key management by dynamically verifying access rights, ensuring that only authorized users can retrieve and decrypt data stored on IPFS. This integration allows for secure, real-time collaboration in environments where multiple collaborators need concurrent access to shared resources. Our system employs a hybrid encryption scheme that combines symmetric and asymmetric cryptography. The encrypted keys are stored on the blockchain, while IPFS handles the efficient storage of the codebase itself, with a Middleman IPFS maintaining concurrent collaboration, providing a robust and scalable solution for managing large-scale, collaborative coding projects.<|reference_end|>
|
arxiv
|
@article{haque2024an,
title={An Integrated Blockchain and IPFS Solution for Secure and Efficient
Source Code Repository Hosting using Middleman Approach},
author={Md. Rafid Haque and Sakibul Islam Munna and Sabbir Ahmed and Md. Tahmid Islam and
Md Mehedi Hassan Onik and A.B.M. Ashikur Rahman},
journal={arXiv preprint arXiv:2409.14530},
year={2024},
archivePrefix={arXiv},
eprint={2409.14530},
primaryClass={cs.CR cs.NI}
}
|
haque2024an
|
arxiv-660505
|
2409.14532
|
Distributed Primal-Dual Interior Point Framework for Analyzing Infeasible Combined Transmission and Distribution Grid Networks
|
<|reference_start|>Distributed Primal-Dual Interior Point Framework for Analyzing Infeasible Combined Transmission and Distribution Grid Networks: The proliferation of distributed energy resources has heightened the interactions between transmission and distribution (T&D) systems, necessitating novel analyses for the reliable operation and planning of interconnected T&D networks. A critical gap is an analysis approach that identifies and localizes the weak spots in the combined T&D networks, providing valuable information to system planners and operators. The research goal is to efficiently model and simulate infeasible (i.e. unsolvable in general settings) combined positive sequence transmission and three-phase distribution networks with a unified solution algorithm. We model the combined T&D network with the equivalent circuit formulation. To solve the overall T&D network, we build a Gauss-Jacobi-Newton (GJN) based distributed primal dual interior point optimization algorithm capable of isolating weak nodes. We validate the approach on large combined T&D networks with 70k+ T and 15k+ D nodes and demonstrate performance improvement over the alternating direction method of multipliers (ADMM) method.<|reference_end|>
|
arxiv
|
@article{ali2024distributed,
title={Distributed Primal-Dual Interior Point Framework for Analyzing
Infeasible Combined Transmission and Distribution Grid Networks},
author={Muhammad Hamza Ali and Amritanshu Pandey},
journal={arXiv preprint arXiv:2409.14532},
year={2024},
archivePrefix={arXiv},
eprint={2409.14532},
primaryClass={eess.SY cs.SY}
}
|
ali2024distributed
|
arxiv-660506
|
2409.14533
|
Oscillating Magnetic Effect in BiFeO$_3$
|
<|reference_start|>Oscillating Magnetic Effect in BiFeO$_3$: The development of electric vehicles has led to a growing need for more efficient and environmentally friendly batteries. As a result, there is significant interest in researching new materials and techniques to enhance battery efficiency. One such material being explored is bismuth ferrite (BiFeO$_3$ or BFO), a perovskite with versatile properties. Researchers are particularly intrigued by the potential to control its antiferromagnetic magnetization using magnetic or electric fields. Here, a comprehensive analysis of BFO was conducted, with a focus on its behavior when subjected to oscillating magnetic fields. The research revealed that BFO is sensitive to the frequency and shape of these magnetic fields, leading to the discovery of a new effect related to the transmission of electromagnetic signals on its surface. This effect resulted in a significant increase in the power of the electromagnetic signal, representing a major technological breakthrough. According to the findings, this gain in power has not been observed in any system of this kind before. The study also demonstrated that BFO has the ability to detect magnetic fields through electrical output signals and vice versa, which is crucial for assessing the state and efficiency of batteries, thus contributing to significant advancements in energy storage technology.<|reference_end|>
|
arxiv
|
@article{ferro2024oscillating,
title={Oscillating Magnetic Effect in BiFeO$_3$},
author={Thiago Ferro and Adrielson Dias and Maria Clara and Luana Hildever and Jos{\'e}
Holanda},
journal={arXiv preprint arXiv:2409.14533},
year={2024},
archivePrefix={arXiv},
eprint={2409.14533},
primaryClass={cond-mat.mtrl-sci cond-mat.mes-hall cs.CE physics.ins-det}
}
|
ferro2024oscillating
|
arxiv-660507
|
2409.14534
|
Goal-Oriented Communications for Interplanetary and Non-Terrestrial Networks
|
<|reference_start|>Goal-Oriented Communications for Interplanetary and Non-Terrestrial Networks: The accelerated pace of space exploration and satellite connectivity calls for scalable communication network architectures that can effectively cater for increasing numbers of bursty flows, such as those occurring in remote monitoring and actuation. Communications in Space face unique challenges including highly variable delays and disruptions that sometimes preclude real-time signaling and end-to-end acknowledgements. In this paper we provide a vision for tackling these fundamental challenges by exploiting recent progress in goal-oriented communication. Our vision for Goal-Oriented Networking in Space is built on three pillars: (1) principles and decision metrics for goal-oriented sampling and multi-user scheduling, that can handle highly variable delay processes that contain memory, (2) grant-free access policies for massive machine-type communications that replace exogenous arrivals with goal-oriented traffic shaping, and (3) flow control mechanisms that exploit the cross-layer operability at application and link layers of Delay/Disruption Tolerant Networking (DTN) protocols.<|reference_end|>
|
arxiv
|
@article{uysal2024goal-oriented,
title={Goal-Oriented Communications for Interplanetary and Non-Terrestrial
Networks},
author={Elif Uysal},
journal={arXiv preprint arXiv:2409.14534},
year={2024},
archivePrefix={arXiv},
eprint={2409.14534},
primaryClass={cs.NI cs.SY eess.SY}
}
|
uysal2024goal-oriented
|
arxiv-660508
|
2409.14535
|
Hyper-parameter Optimization for Wireless Network Traffic Prediction Models with A Novel Meta-Learning Framework
|
<|reference_start|>Hyper-parameter Optimization for Wireless Network Traffic Prediction Models with A Novel Meta-Learning Framework: In this paper, we propose a novel meta-learning based hyper-parameter optimization framework for wireless network traffic prediction models. An attention-based deep neural network (ADNN) is adopted as the prediction model, i.e., base-learner, for each wireless network traffic prediction task, namely base-task, and a meta-learner is employed to automatically generate the optimal hyper-parameters for a given base-learner according to the corresponding base-task's intrinsic characteristics or properties, i.e., meta-features. Based on our observation from real-world traffic records that base-tasks possessing similar meta-features tend to favour similar hyper-parameters for their base-learners, the meta-learner exploits a K-nearest neighbor (KNN) learning method to obtain a set of candidate hyper-parameter selection strategies for a new base-learner, which are then utilized by an advanced genetic algorithm with intelligent chromosome screening to finally acquire the best hyper-parameter selection strategy. Extensive experiments demonstrate that base-learners in the proposed framework have high potential prediction ability for wireless network traffic prediction task, and the meta-learner can enormously elevate the base-learners' performance by providing them the optimal hyper-parameters.<|reference_end|>
|
arxiv
|
@article{wang2024hyper-parameter,
title={Hyper-parameter Optimization for Wireless Network Traffic Prediction
Models with A Novel Meta-Learning Framework},
author={Liangzhi Wang and Jie Zhang and Yuan Gao and Jiliang Zhang and Guiyi Wei and Haibo
Zhou and Bin Zhuge and Zitian Zhang},
journal={arXiv preprint arXiv:2409.14535},
year={2024},
archivePrefix={arXiv},
eprint={2409.14535},
primaryClass={cs.NI}
}
|
wang2024hyper-parameter
|
arxiv-660509
|
2409.14538
|
Towards Model-Agnostic Dataset Condensation by Heterogeneous Models
|
<|reference_start|>Towards Model-Agnostic Dataset Condensation by Heterogeneous Models: The advancement of deep learning has coincided with the proliferation of both models and available data. The surge in dataset sizes and the subsequent surge in computational requirements have led to the development of Dataset Condensation (DC). While prior studies have delved into generating synthetic images through methods like distribution alignment and training trajectory tracking for more efficient model training, a significant challenge arises when employing these condensed images practically. Notably, these condensed images tend to be specific to particular models, constraining their versatility and practicality. In response to this limitation, we introduce a novel method, Heterogeneous Model Dataset Condensation (HMDC), designed to produce universally applicable condensed images through cross-model interactions. To address the issues of gradient magnitude difference and semantic distance in models when utilizing heterogeneous models, we propose the Gradient Balance Module (GBM) and Mutual Distillation (MD) with the Spatial-Semantic Decomposition method. By balancing the contribution of each model and maintaining their semantic meaning closely, our approach overcomes the limitations associated with model-specific condensed images and enhances the broader utility. The source code is available at https://github.com/KHU-AGI/HMDC.<|reference_end|>
|
arxiv
|
@article{moon2024towards,
title={Towards Model-Agnostic Dataset Condensation by Heterogeneous Models},
author={Jun-Yeong Moon and Jung Uk Kim and Gyeong-Moon Park},
journal={arXiv preprint arXiv:2409.14538},
year={2024},
archivePrefix={arXiv},
eprint={2409.14538},
primaryClass={cs.CV}
}
|
moon2024towards
|
arxiv-660510
|
2409.14541
|
Tumbling Down the Rabbit Hole: How do Assisting Exploration Strategies Facilitate Grey-box Fuzzing?
|
<|reference_start|>Tumbling Down the Rabbit Hole: How do Assisting Exploration Strategies Facilitate Grey-box Fuzzing?: Many assisting exploration strategies have been proposed to assist grey-box fuzzers in exploring program states guarded by tight and complex branch conditions such as equality constraints. Although they have shown promising results in their original papers, their evaluations seldom follow equivalent protocols, e.g., they are rarely evaluated on identical benchmarks. Moreover, there is a lack of sufficient investigations on the specifics of the program states explored by these strategies which can obfuscate the future application and development of such strategies. Consequently, there is a pressing need for a comprehensive study of assisting exploration strategies on their effectiveness, versatility, and limitations to enlighten their future development. To this end, we perform the first comprehensive study about the assisting exploration strategies for grey-box fuzzers. Specifically, we first collect nine recent fuzzers representing the mainstream assisting exploration strategies as our studied subjects and 21 real-world projects to form our benchmark suite. After evaluating the subjects on the benchmark suite, we then surprisingly find that the dictionary strategy is the most promising since it not only achieves similar or even slightly better performance over the other studied assisting exploration strategies in terms of exploring program states but also is more practical to be enhanced. Accordingly, we propose CDFUZZ, which generates a customized dictionary for each seed upon the baseline fuzzer AFL to improve over the original dictionary strategy. The evaluation results demonstrate that CDFUZZ increases the edge coverage by 16.1% on average for all benchmark projects over the best performer in our study (i.e., AFL++ with the dictionary strategy). CDFUZZ also successfully exposed 37 previously unknown bugs, with nine confirmed and seven fixed by the corresponding developers.<|reference_end|>
|
arxiv
|
@article{wu2024tumbling,
title={Tumbling Down the Rabbit Hole: How do Assisting Exploration Strategies
Facilitate Grey-box Fuzzing?},
author={Mingyuan Wu and Jiahong Xiang and Kunqiu Chen and Peng DI and Shin Hwei Tan and
Heming Cui and Yuqun Zhang},
journal={arXiv preprint arXiv:2409.14541},
year={2024},
archivePrefix={arXiv},
eprint={2409.14541},
primaryClass={cs.SE}
}
|
wu2024tumbling
|
arxiv-660511
|
2409.14542
|
Distributionally Robust Inverse Reinforcement Learning for Identifying Multi-Agent Coordinated Sensing
|
<|reference_start|>Distributionally Robust Inverse Reinforcement Learning for Identifying Multi-Agent Coordinated Sensing: We derive a minimax distributionally robust inverse reinforcement learning (IRL) algorithm to reconstruct the utility functions of a multi-agent sensing system. Specifically, we construct utility estimators which minimize the worst-case prediction error over a Wasserstein ambiguity set centered at noisy signal observations. We prove the equivalence between this robust estimation and a semi-infinite optimization reformulation, and we propose a consistent algorithm to compute solutions. We illustrate the efficacy of this robust IRL scheme in numerical studies to reconstruct the utility functions of a cognitive radar network from observed tracking signals.<|reference_end|>
|
arxiv
|
@article{snow2024distributionally,
title={Distributionally Robust Inverse Reinforcement Learning for Identifying
Multi-Agent Coordinated Sensing},
author={Luke Snow and Vikram Krishnamurthy},
journal={arXiv preprint arXiv:2409.14542},
year={2024},
archivePrefix={arXiv},
eprint={2409.14542},
primaryClass={cs.LG cs.MA eess.SP}
}
|
snow2024distributionally
|
arxiv-660512
|
2409.14543
|
TrackNetV4: Enhancing Fast Sports Object Tracking with Motion Attention Maps
|
<|reference_start|>TrackNetV4: Enhancing Fast Sports Object Tracking with Motion Attention Maps: Accurately detecting and tracking high-speed, small objects, such as balls in sports videos, is challenging due to factors like motion blur and occlusion. Although recent deep learning frameworks like TrackNetV1, V2, and V3 have advanced tennis ball and shuttlecock tracking, they often struggle in scenarios with partial occlusion or low visibility. This is primarily because these models rely heavily on visual features without explicitly incorporating motion information, which is crucial for precise tracking and trajectory prediction. In this paper, we introduce an enhancement to the TrackNet family by fusing high-level visual features with learnable motion attention maps through a motion-aware fusion mechanism, effectively emphasizing the moving ball's location and improving tracking performance. Our approach leverages frame differencing maps, modulated by a motion prompt layer, to highlight key motion regions over time. Experimental results on the tennis ball and shuttlecock datasets show that our method enhances the tracking performance of both TrackNetV2 and V3. We refer to our lightweight, plug-and-play solution, built on top of the existing TrackNet, as TrackNetV4.<|reference_end|>
|
arxiv
|
@article{raj2024tracknetv4:,
title={TrackNetV4: Enhancing Fast Sports Object Tracking with Motion Attention
Maps},
author={Arjun Raj and Lei Wang and Tom Gedeon},
journal={arXiv preprint arXiv:2409.14543},
year={2024},
archivePrefix={arXiv},
eprint={2409.14543},
primaryClass={cs.CV cs.AI cs.LG}
}
|
raj2024tracknetv4:
|
arxiv-660513
|
2409.14545
|
Why Is Anything Conscious?
|
<|reference_start|>Why Is Anything Conscious?: We tackle the hard problem of consciousness taking the naturally-selected, self-organising, embodied organism as our starting point. We provide a mathematical formalism describing how biological systems self-organise to hierarchically interpret unlabelled sensory information according to valence and specific needs. Such interpretations imply behavioural policies which can only be differentiated from each other by the qualitative aspect of information processing. Selection pressures favour systems that can intervene in the world to achieve homeostatic and reproductive goals. Quality is a property arising in such systems to link cause to affect to motivate real world interventions. This produces a range of qualitative classifiers (interoceptive and exteroceptive) that motivate specific actions and determine priorities and preferences. Building upon the seminal distinction between access and phenomenal consciousness, our radical claim here is that phenomenal consciousness without access consciousness is likely very common, but the reverse is implausible. To put it provocatively: Nature does not like zombies. We formally describe the multilayered architecture of self-organisation from rocks to Einstein, illustrating how our argument applies in the real world. We claim that access consciousness at the human level is impossible without the ability to hierarchically model i) the self, ii) the world/others and iii) the self as modelled by others. Phenomenal consciousness is therefore required for human-level functionality. Our proposal lays the foundations of a formal science of consciousness, deeply connected with natural selection rather than abstract thinking, closer to human fact than zombie fiction.<|reference_end|>
|
arxiv
|
@article{bennett2024why,
title={Why Is Anything Conscious?},
author={Michael Timothy Bennett and Sean Welsh and Anna Ciaunica},
journal={arXiv preprint arXiv:2409.14545},
year={2024},
archivePrefix={arXiv},
eprint={2409.14545},
primaryClass={cs.AI}
}
|
bennett2024why
|
arxiv-660514
|
2409.14547
|
A maximin based, linear programming approach to worst-case scenario control
|
<|reference_start|>A maximin based, linear programming approach to worst-case scenario control: For non-zero-sum games, playing maximin or minimax strategies is not optimal in general, and thus we usually do not calculate them. However, under some conditions, such as incomplete information, trembling hands, or an irrational opponent, it can be reasonable to use the maximin expected utility preferences instead. A particular goal when there is uncertainty about an opponent's behaviour -- especially when we cannot be certain of their rationality -- is for a player to avoid an unaffordable worst-case payoff. And such a worst-case payoff a player is willing to accept in practice need not be the exact maximin value. Here, we first introduce a motivating example and give the analytical description of an algorithm to control the worst-case scenario given a specified worst-case allowance for two-by-two games. Then we extend this method to n-by-m games using linear programming. We analyze two practical applications from the maximin angle, and also show that the equilibria when facing a malicious opponent coincide with a subset of the Nash equilibria of a transformation of the game. Lastly, we make some comments about problems when trying to analytically compute the subset of a strategy space with a specified worst-case allowance.<|reference_end|>
|
arxiv
|
@article{zhang2024a,
title={A maximin based, linear programming approach to worst-case scenario
control},
author={Zhuoer Zhang and Bryce Morsky},
journal={arXiv preprint arXiv:2409.14547},
year={2024},
archivePrefix={arXiv},
eprint={2409.14547},
primaryClass={cs.GT}
}
|
zhang2024a
|
arxiv-660515
|
2409.14549
|
Adaptive Feedforward Gradient Estimation in Neural ODEs
|
<|reference_start|>Adaptive Feedforward Gradient Estimation in Neural ODEs: Neural Ordinary Differential Equations (Neural ODEs) represent a significant breakthrough in deep learning, promising to bridge the gap between machine learning and the rich theoretical frameworks developed in various mathematical fields over centuries. In this work, we propose a novel approach that leverages adaptive feedforward gradient estimation to improve the efficiency, consistency, and interpretability of Neural ODEs. Our method eliminates the need for backpropagation and the adjoint method, reducing computational overhead and memory usage while maintaining accuracy. The proposed approach has been validated through practical applications and showed good performance relative to state-of-the-art Neural ODE methods.<|reference_end|>
|
arxiv
|
@article{dabounou2024adaptive,
title={Adaptive Feedforward Gradient Estimation in Neural ODEs},
author={Jaouad Dabounou},
journal={arXiv preprint arXiv:2409.14549},
year={2024},
archivePrefix={arXiv},
eprint={2409.14549},
primaryClass={cs.LG}
}
|
dabounou2024adaptive
|
arxiv-660516
|
2409.14550
|
Interpretable Nonroutine Network Traffic Prediction with a Case Study
|
<|reference_start|>Interpretable Nonroutine Network Traffic Prediction with a Case Study: This paper pioneers a nonroutine network traffic prediction (NNTP) method to prospectively provide a theoretical basis for avoiding large-scale network disruption by accurately predicting bursty traffic. Certain events that impact user behavior subsequently trigger nonroutine traffic, which significantly constrains the performance of network traffic prediction (NTP) models. By analyzing nonroutine traffic and the corresponding events, the NNTP method is pioneered to construct an interpretable NTP model. Based on real-world traffic data, the network traffic generated during soccer games serves as a case study to validate the performance of the NNTP method. The numerical results indicate that our prediction closely fits the traffic pattern. In comparison to existing research, the NNTP method is at the forefront of finding a balance among interpretability, accuracy, and computational complexity.<|reference_end|>
|
arxiv
|
@article{wang2024interpretable,
title={Interpretable Nonroutine Network Traffic Prediction with a Case Study},
author={Liangzhi Wang and Haoyuan Zhu and Jiliang Zhang and Zitian Zhang and Jie Zhang},
journal={arXiv preprint arXiv:2409.14550},
year={2024},
archivePrefix={arXiv},
eprint={2409.14550},
primaryClass={cs.NI}
}
|
wang2024interpretable
|
arxiv-660517
|
2409.14551
|
Unconditional energy stable IEQ-FEMs for the Cahn-Hilliard-Navier-Stokes equations
|
<|reference_start|>Unconditional energy stable IEQ-FEMs for the Cahn-Hilliard-Navier-Stokes equations: We propose several unconditionally energy stable invariant energy quadratization (IEQ) finite element methods (FEMs) to solve the Cahn-Hilliard-Navier-Stokes (CHNS) equations. The time discretization of these IEQ-FEMs is based on the first- and second-order backward differentiation methods. The intermediate function introduced by the IEQ approach is positioned in different function spaces: the continuous function space, and a combination of the continuous function and finite element spaces. These methods offer distinct advantages. Consequently, we propose a new hybrid IEQ-FEM that combines the strengths of both schemes, offering computational efficiency and unconditional energy stability in the finite element space. We provide rigorous proofs of mass conservation and energy dissipation for the proposed IEQ-FEMs. Several numerical experiments are presented to validate the accuracy, efficiency, and solution properties of the proposed IEQ-FEMs.<|reference_end|>
|
arxiv
|
@article{chen2024unconditional,
title={Unconditional energy stable IEQ-FEMs for the Cahn-Hilliard-Navier-Stokes
equations},
author={Yaoyao Chen and Dongqian Li and Yin Yang and Peimeng Yin},
journal={arXiv preprint arXiv:2409.14551},
year={2024},
archivePrefix={arXiv},
eprint={2409.14551},
primaryClass={math.NA cs.NA}
}
|
chen2024unconditional
|
arxiv-660518
|
2409.14552
|
Unleashing the Power of Emojis in Texts via Self-supervised Graph Pre-Training
|
<|reference_start|>Unleashing the Power of Emojis in Texts via Self-supervised Graph Pre-Training: Emojis have gained immense popularity on social platforms, serving as a common means to supplement or replace text. However, existing data mining approaches generally either completely ignore or simply treat emojis as ordinary Unicode characters, which may limit the model's ability to grasp the rich semantic information in emojis and the interaction between emojis and texts. Thus, it is necessary to release the emoji's power in social media data mining. To this end, we first construct a heterogeneous graph consisting of three types of nodes, i.e. post, word and emoji nodes to improve the representation of different elements in posts. The edges are also well-defined to model how these three elements interact with each other. To facilitate the sharing of information among post, word and emoji nodes, we propose a graph pre-train framework for text and emoji co-modeling, which contains two graph pre-training tasks: node-level graph contrastive learning and edge-level link reconstruction learning. Extensive experiments on the Xiaohongshu and Twitter datasets with two types of downstream tasks demonstrate that our approach achieves significant improvement over previous strong baseline methods.<|reference_end|>
|
arxiv
|
@article{zhang2024unleashing,
title={Unleashing the Power of Emojis in Texts via Self-supervised Graph
Pre-Training},
author={Zhou Zhang and Dongzeng Tan and Jiaan Wang and Yilong Chen and Jiarong Xu},
journal={arXiv preprint arXiv:2409.14552},
year={2024},
archivePrefix={arXiv},
eprint={2409.14552},
primaryClass={cs.CL cs.AI}
}
|
zhang2024unleashing
|
arxiv-660519
|
2409.14553
|
GlamTry: Advancing Virtual Try-On for High-End Accessories
|
<|reference_start|>GlamTry: Advancing Virtual Try-On for High-End Accessories: The paper aims to address the lack of photorealistic virtual try-on models for accessories such as jewelry and watches, which are particularly relevant for online retail applications. While existing virtual try-on models focus primarily on clothing items, there is a gap in the market for accessories. This research explores the application of techniques from 2D virtual try-on models for clothing, such as VITON-HD, and integrates them with other computer vision models, notably MediaPipe Hand Landmarker. Drawing on existing literature, the study customizes and retrains a unique model using accessory-specific data and network architecture modifications to assess the feasibility of extending virtual try-on technology to accessories. Results demonstrate improved location prediction compared to the original model for clothes, even with a small dataset. This underscores the model's potential with larger datasets exceeding 10,000 images, paving the way for future research in virtual accessory try-on applications.<|reference_end|>
|
arxiv
|
@article{chang2024glamtry:,
title={GlamTry: Advancing Virtual Try-On for High-End Accessories},
  author={Ting-Yu Chang and Seretsi Khabane Lekena},
journal={arXiv preprint arXiv:2409.14553},
year={2024},
archivePrefix={arXiv},
eprint={2409.14553},
primaryClass={cs.CV}
}
|
chang2024glamtry:
|
arxiv-660520
|
2409.14554
|
Robust Audio-Visual Speech Enhancement: Correcting Misassignments in Complex Environments with Advanced Post-Processing
|
<|reference_start|>Robust Audio-Visual Speech Enhancement: Correcting Misassignments in Complex Environments with Advanced Post-Processing: This paper addresses the prevalent issue of incorrect speech output in audio-visual speech enhancement (AVSE) systems, which is often caused by poor video quality and mismatched training and test data. We introduce a post-processing classifier (PPC) to rectify these erroneous outputs, ensuring that the enhanced speech corresponds accurately to the intended speaker. We also adopt a mixup strategy in PPC training to improve its robustness. Experimental results on the AVSE-challenge dataset show that integrating PPC into the AVSE model can significantly improve AVSE performance, and combining PPC with the AVSE model trained with permutation invariant training (PIT) yields the best performance. The proposed method substantially outperforms the baseline model by a large margin. This work highlights the potential for broader applications across various modalities and architectures, providing a promising direction for future research in this field.<|reference_end|>
|
arxiv
|
@article{ren2024robust,
title={Robust Audio-Visual Speech Enhancement: Correcting Misassignments in
Complex Environments with Advanced Post-Processing},
  author={Wenze Ren and Kuo-Hsuan Hung and Rong Chao and YouJin Li and
Hsin-Min Wang and Yu Tsao},
journal={arXiv preprint arXiv:2409.14554},
year={2024},
archivePrefix={arXiv},
eprint={2409.14554},
primaryClass={eess.AS cs.SD}
}
|
ren2024robust
|
arxiv-660521
|
2409.14556
|
RACOON: An LLM-based Framework for Retrieval-Augmented Column Type Annotation with a Knowledge Graph
|
<|reference_start|>RACOON: An LLM-based Framework for Retrieval-Augmented Column Type Annotation with a Knowledge Graph: As an important component of data exploration and integration, Column Type Annotation (CTA) aims to label columns of a table with one or more semantic types. With the recent development of Large Language Models (LLMs), researchers have started to explore the possibility of using LLMs for CTA, leveraging their strong zero-shot capabilities. In this paper, we build on this promising work and improve on LLM-based methods for CTA by showing how to use a Knowledge Graph (KG) to augment the context information provided to the LLM. Our approach, called RACOON, combines both pre-trained parametric and non-parametric knowledge during generation to improve LLMs' performance on CTA. Our experiments show that RACOON achieves up to a 0.21 micro F-1 improvement compared against vanilla LLM inference.<|reference_end|>
|
arxiv
|
@article{wei2024racoon:,
title={RACOON: An LLM-based Framework for Retrieval-Augmented Column Type
Annotation with a Knowledge Graph},
  author={Lindsey Linxi Wei and Guorui Xiao and Magdalena Balazinska},
journal={arXiv preprint arXiv:2409.14556},
year={2024},
archivePrefix={arXiv},
eprint={2409.14556},
primaryClass={cs.DB cs.AI}
}
|
wei2024racoon:
|
arxiv-660522
|
2409.14557
|
Exploiting Exogenous Structure for Sample-Efficient Reinforcement Learning
|
<|reference_start|>Exploiting Exogenous Structure for Sample-Efficient Reinforcement Learning: We study a class of structured Markov Decision Processes (MDPs) known as Exo-MDPs, characterized by a partition of the state space into two components. The exogenous states evolve stochastically in a manner not affected by the agent's actions, whereas the endogenous states are affected by the actions, and evolve in a deterministic and known way conditional on the exogenous states. Exo-MDPs are a natural model for various applications including inventory control, finance, power systems, ride sharing, among others. Despite seeming restrictive, this work establishes that any discrete MDP can be represented as an Exo-MDP. Further, Exo-MDPs induce a natural representation of the transition and reward dynamics as linear functions of the exogenous state distribution. This linear representation leads to near-optimal algorithms with regret guarantees scaling only with the (effective) size of the exogenous state space $d$, independent of the sizes of the endogenous state and action spaces. Specifically, when the exogenous state is fully observed, a simple plug-in approach achieves a regret upper bound of $O(H^{3/2}\sqrt{dK})$, where $H$ denotes the horizon and $K$ denotes the total number of episodes. When the exogenous state is unobserved, the linear representation leads to a regret upper bound of $O(H^{3/2}d\sqrt{K})$. We also establish a nearly matching regret lower bound of $\Omega(Hd\sqrt{K})$ for the no observation regime. We complement our theoretical findings with an experimental study on inventory control problems.<|reference_end|>
|
arxiv
|
@article{wan2024exploiting,
title={Exploiting Exogenous Structure for Sample-Efficient Reinforcement
Learning},
  author={Jia Wan and Sean R. Sinclair and Devavrat Shah and Martin J. Wainwright},
journal={arXiv preprint arXiv:2409.14557},
year={2024},
archivePrefix={arXiv},
eprint={2409.14557},
primaryClass={stat.ML cs.LG math.OC}
}
|
wan2024exploiting
|
arxiv-660523
|
2409.14559
|
Computing String Covers in Sublinear Time
|
<|reference_start|>Computing String Covers in Sublinear Time: Let $T$ be a string of length $n$ over an integer alphabet of size $\sigma$. In the word RAM model, $T$ can be represented in $O(n /\log_\sigma n)$ space. We show that a representation of all covers of $T$ can be computed in the optimal $O(n/\log_\sigma n)$ time; in particular, the shortest cover can be computed within this time. We also design an $O(n(\log\sigma + \log \log n)/\log n)$-sized data structure that computes in $O(1)$ time any element of the so-called (shortest) cover array of $T$, that is, the length of the shortest cover of any given prefix of $T$. As a by-product, we describe the structure of cover arrays of Fibonacci strings. On the negative side, we show that the shortest cover of a length-$n$ string cannot be computed using $o(n/\log n)$ operations in the PILLAR model of Charalampopoulos, Kociumaka, and Wellnitz (FOCS 2020).<|reference_end|>
|
arxiv
|
@article{radoszewski2024computing,
title={Computing String Covers in Sublinear Time},
author={Jakub Radoszewski and Wiktor Zuba},
journal={String Processing and Information Retrieval. SPIRE 2024. Lecture
Notes in Computer Science, vol. 14899. Springer, Cham},
year={2024},
doi={10.1007/978-3-031-72200-4},
archivePrefix={arXiv},
eprint={2409.14559},
primaryClass={cs.DS}
}
|
radoszewski2024computing
|
arxiv-660524
|
2409.14560
|
Exact mean and variance of the squared Hellinger distance for random density matrices
|
<|reference_start|>Exact mean and variance of the squared Hellinger distance for random density matrices: The Hellinger distance between quantum states is a significant measure in quantum information theory, known for its Riemannian and monotonic properties. It is also easier to compute than the Bures distance, another measure that shares these properties. In this work, we derive the mean and variance of the Hellinger distance between pairs of density matrices, where one or both matrices are random. Along the way, we also obtain exact results for the mean affinity and mean square affinity. The first two cumulants of the Hellinger distance allow us to propose an approximation for the corresponding probability density function based on the gamma distribution. Our analytical results are corroborated through Monte Carlo simulations, showing excellent agreement.<|reference_end|>
|
arxiv
|
@article{kumar2024exact,
title={Exact mean and variance of the squared Hellinger distance for random
density matrices},
  author={Vinay Kumar and Kaushik Vasan and Santosh Kumar},
journal={arXiv preprint arXiv:2409.14560},
year={2024},
archivePrefix={arXiv},
eprint={2409.14560},
primaryClass={quant-ph cs.IT math-ph math.IT math.MP nlin.CD}
}
|
kumar2024exact
|
arxiv-660525
|
2409.14561
|
Cloud and IoT based Smart Agent-driven Simulation of Human Gait for Detecting Muscles Disorder
|
<|reference_start|>Cloud and IoT based Smart Agent-driven Simulation of Human Gait for Detecting Muscles Disorder: Motion disorders pose a significant global health concern and are often managed with pharmacological treatments that may lead to undesirable long-term effects. Current therapeutic strategies lack differentiation between healthy and unhealthy muscles in a patient, necessitating a targeted approach to distinguish between musculature. There is still no motion analyzer application for this purpose. Additionally, there is a deep gap in motion analysis software as some studies prioritize simulation, neglecting software needs, while others concentrate on computational aspects, disregarding simulation nuances. We introduce a comprehensive five-phase methodology to analyze the neuromuscular system of the lower body during gait. The first phase employs an innovative IoT-based method for motion signal capture. The second and third phases involve an agent-driven biomechanical model of the lower body skeleton and a model of human voluntary muscle. Thus, using an agent-driven approach, motion-captured signals can be converted to neural stimuli. The simulation results are then analyzed by our proposed ensemble neural network framework in the fourth step in order to detect abnormal motion in each joint. Finally, the results are shown by a user-friendly graphical interface which promotes the usability of the method. Utilizing the developed application, we simulate the neuromusculoskeletal system of some patients during the gait cycle, enabling the classification of healthy and pathological muscle activity through joint-based analysis. This study leverages cloud computing to create an infrastructure-independent application which is globally accessible. The proposed application enables experts to differentiate between healthy and unhealthy muscles in a patient by simulating their gait.<|reference_end|>
|
arxiv
|
@article{saadati2024cloud,
title={Cloud and IoT based Smart Agent-driven Simulation of Human Gait for
Detecting Muscles Disorder},
  author={Sina Saadati and Mohammadreza Razzazi},
journal={arXiv preprint arXiv:2409.14561},
year={2024},
archivePrefix={arXiv},
eprint={2409.14561},
primaryClass={cs.HC cs.MA}
}
|
saadati2024cloud
|
arxiv-660526
|
2409.14562
|
DROP: Dexterous Reorientation via Online Planning
|
<|reference_start|>DROP: Dexterous Reorientation via Online Planning: Achieving human-like dexterity is a longstanding challenge in robotics, in part due to the complexity of planning and control for contact-rich systems. In reinforcement learning (RL), one popular approach has been to use massively-parallelized, domain-randomized simulations to learn a policy offline over a vast array of contact conditions, allowing robust sim-to-real transfer. Inspired by recent advances in real-time parallel simulation, this work considers instead the viability of online planning methods for contact-rich manipulation by studying the well-known in-hand cube reorientation task. We propose a simple architecture that employs a sampling-based predictive controller and vision-based pose estimator to search for contact-rich control actions online. We conduct thorough experiments to assess the real-world performance of our method, architectural design choices, and key factors for robustness, demonstrating that our simple sampling-based approach achieves performance comparable to prior RL-based works. Supplemental material: https://caltech-amber.github.io/drop.<|reference_end|>
|
arxiv
|
@article{li2024drop:,
title={DROP: Dexterous Reorientation via Online Planning},
  author={Albert H. Li and Preston Culbertson and Vince Kurtz and Aaron D. Ames},
journal={arXiv preprint arXiv:2409.14562},
year={2024},
archivePrefix={arXiv},
eprint={2409.14562},
primaryClass={cs.RO}
}
|
li2024drop:
|
arxiv-660527
|
2409.14563
|
Optimizing Feature Selection with Genetic Algorithms: A Review of Methods and Applications
|
<|reference_start|>Optimizing Feature Selection with Genetic Algorithms: A Review of Methods and Applications: Analyzing large datasets to select optimal features is one of the most important research areas in machine learning and data mining. This feature selection procedure involves dimensionality reduction which is crucial in enhancing the performance of the model, making it less complex. Recently, several types of attribute selection methods have been proposed that use different approaches to obtain representative subsets of the attributes. However, population-based evolutionary algorithms like Genetic Algorithms (GAs) have been proposed to provide remedies for these drawbacks by avoiding local optima and improving the selection process itself. This manuscript presents a sweeping review on GA-based feature selection techniques in applications and their effectiveness across different domains. This review was conducted using the PRISMA methodology; hence, the systematic identification, screening, and analysis of relevant literature were performed. Thus, our results hint that the field's hybrid GA methodologies including, but not limited to, GA-Wrapper feature selector and HGA-neural networks, have substantially improved their potential through the resolution of problems such as exploration of unnecessary search space, accuracy performance problems, and complexity. The conclusions of this paper would result in discussing the potential that GAs bear in feature selection and future research directions for their enhancement in applicability and performance.<|reference_end|>
|
arxiv
|
@article{taha2024optimizing,
title={Optimizing Feature Selection with Genetic Algorithms: A Review of
Methods and Applications},
  author={Zhila Yaseen Taha and Abdulhady Abas Abdullah and Tarik A. Rashid},
journal={arXiv preprint arXiv:2409.14563},
year={2024},
archivePrefix={arXiv},
eprint={2409.14563},
primaryClass={cs.NE cs.LG}
}
|
taha2024optimizing
|
arxiv-660528
|
2409.14564
|
Event-ECC: Asynchronous Tracking of Events with Continuous Optimization
|
<|reference_start|>Event-ECC: Asynchronous Tracking of Events with Continuous Optimization: In this paper, an event-based tracker is presented. Inspired by recent advances in asynchronous processing of individual events, we develop a direct matching scheme that aligns spatial distributions of events at different times. More specifically, we adopt the Enhanced Correlation Coefficient (ECC) criterion and propose a tracking algorithm that computes a 2D motion warp per single event, called event-ECC (eECC). The complete tracking of a feature along time is cast as a \emph{single} iterative continuous optimization problem, whereby every single iteration is executed per event. The computational burden of event-wise processing is alleviated through a lightweight version that benefits from incremental processing and updating scheme. We test the proposed algorithm on publicly available datasets and we report improvements in tracking accuracy and feature age over state-of-the-art event-based asynchronous trackers.<|reference_end|>
|
arxiv
|
@article{zafeiri2024event-ecc:,
title={Event-ECC: Asynchronous Tracking of Events with Continuous Optimization},
  author={Maria Zafeiri and Georgios Evangelidis and Emmanouil Psarakis},
journal={arXiv preprint arXiv:2409.14564},
year={2024},
archivePrefix={arXiv},
eprint={2409.14564},
primaryClass={cs.CV}
}
|
zafeiri2024event-ecc:
|
arxiv-660529
|
2409.14565
|
Combating Spatial Disorientation in a Dynamic Self-Stabilization Task Using AI Assistants
|
<|reference_start|>Combating Spatial Disorientation in a Dynamic Self-Stabilization Task Using AI Assistants: Spatial disorientation is a leading cause of fatal aircraft accidents. This paper explores the potential of AI agents to aid pilots in maintaining balance and preventing unrecoverable losses of control by offering cues and corrective measures that ameliorate spatial disorientation. A multi-axis rotation system (MARS) was used to gather data from human subjects self-balancing in a spaceflight analog condition. We trained models over this data to create "digital twins" that exemplified performance characteristics of humans with different proficiency levels. We then trained various reinforcement learning and deep learning models to offer corrective cues if loss of control is predicted. Digital twins and assistant models then co-performed a virtual inverted pendulum (VIP) programmed with identical physics. From these simulations, we picked the 5 best-performing assistants based on task metrics such as crash frequency and mean distance from the direction of balance. These were used in a co-performance study with 20 new human subjects performing a version of the VIP task with degraded spatial information. We show that certain AI assistants were able to improve human performance and that reinforcement-learning based assistants were objectively more effective but rated as less trusted and preferable by humans.<|reference_end|>
|
arxiv
|
@article{mannan2024combating,
title={Combating Spatial Disorientation in a Dynamic Self-Stabilization Task
Using AI Assistants},
  author={Sheikh Mannan and Paige Hansen and Vivekanand Pandey Vimal and
Hannah N. Davies and Paul DiZio and Nikhil Krishnaswamy},
journal={arXiv preprint arXiv:2409.14565},
year={2024},
doi={10.1145/3687272.3688329},
archivePrefix={arXiv},
eprint={2409.14565},
primaryClass={cs.HC cs.AI cs.LG cs.MA cs.RO}
}
|
mannan2024combating
|
arxiv-660530
|
2409.14567
|
Modeling and In-flight Torso Attitude Stabilization of a Jumping Quadruped
|
<|reference_start|>Modeling and In-flight Torso Attitude Stabilization of a Jumping Quadruped: This paper addresses the modeling and attitude control of jumping quadrupeds in low-gravity environments. First, a convex decomposition procedure is presented to generate high-accuracy and low-cost collision geometries for quadrupeds performing agile maneuvers. A hierarchical control architecture is then investigated, separating torso orientation tracking from the generation of suitable, collision-free, corresponding leg motions. Nonlinear Model Predictive Controllers (NMPCs) are utilized in both layers of the controller. To compute the necessary leg motions, a torque allocation strategy is employed that leverages the symmetries of the system to avoid self-collisions and simplify the respective NMPC. To plan periodic trajectories online, a Finite State Machine (FSM)-based weight switching strategy is also used. The proposed controller is first evaluated in simulation, where 90 degree rotations in roll, pitch, and yaw are stabilized in 6.3, 2.4, and 5.5 seconds, respectively. The performance of the controller is further experimentally demonstrated by stabilizing constant and changing orientation references. Overall, this work provides a framework for the development of advanced model-based attitude controllers for jumping legged systems.<|reference_end|>
|
arxiv
|
@article{papadakis2024modeling,
title={Modeling and In-flight Torso Attitude Stabilization of a Jumping
Quadruped},
  author={Michail Papadakis and J{\o}rgen Anker Olsen and Ioannis Poulakakis
and Kostas Alexis},
journal={arXiv preprint arXiv:2409.14567},
year={2024},
archivePrefix={arXiv},
eprint={2409.14567},
primaryClass={cs.RO}
}
|
papadakis2024modeling
|
arxiv-660531
|
2409.14570
|
Generative artificial intelligence usage by researchers at work: Effects of gender, career stage, type of workplace, and perceived barriers
|
<|reference_start|>Generative artificial intelligence usage by researchers at work: Effects of gender, career stage, type of workplace, and perceived barriers: The integration of generative artificial intelligence technology into research environments has become increasingly common in recent years, representing a significant shift in the way researchers approach their work. This paper seeks to explore the factors underlying the frequency of use of generative AI amongst researchers in their professional environments. As survey data may be influenced by a bias towards scientists interested in AI, potentially skewing the results towards the perspectives of these researchers, this study uses a regression model to isolate the impact of specific factors such as gender, career stage, type of workplace, and perceived barriers to using AI technology on the frequency of use of generative AI. It also controls for other relevant variables such as direct involvement in AI research or development, collaboration with AI companies, geographic location, and scientific discipline. Our results show that researchers who face barriers to AI adoption experience an 11% increase in tool use, while those who cite insufficient training resources experience an 8% decrease. Female researchers experience a 7% decrease in AI tool usage compared to men, while advanced career researchers experience a significant 19% decrease. Researchers associated with government advisory groups are 45% more likely to use AI tools frequently than those in government roles. Researchers in for-profit companies show an increase of 19%, while those in medical research institutions and hospitals show an increase of 16% and 15%, respectively. This paper contributes to a deeper understanding of the mechanisms driving the use of generative AI tools amongst researchers, with valuable implications for both academia and industry.<|reference_end|>
|
arxiv
|
@article{dorta-gonzález2024generative,
title={Generative artificial intelligence usage by researchers at work: Effects
of gender, career stage, type of workplace, and perceived barriers},
  author={Pablo Dorta-Gonz\'alez and Alexis Jorge L\'opez-Puig and
Mar\'ia Isabel Dorta-Gonz\'alez and Sara M. Gonz\'alez-Betancor},
journal={arXiv preprint arXiv:2409.14570},
year={2024},
archivePrefix={arXiv},
eprint={2409.14570},
primaryClass={cs.CY cs.HC stat.AP}
}
|
dorta-gonzález2024generative
|
arxiv-660532
|
2409.14571
|
Encoder with the Empirical Mode Decomposition (EMD) to remove muscle artefacts from EEG signal
|
<|reference_start|>Encoder with the Empirical Mode Decomposition (EMD) to remove muscle artefacts from EEG signal: This paper introduces a novel method for effectively removing artifacts from EEG signals by combining the Empirical Mode Decomposition (EMD) method with a machine learning architecture. The proposed method addresses the limitations of existing artifact removal techniques by enhancing the EMD method through interpolation of the upper and lower envelopes. For conventional artifact removal methods, the EMD technique is commonly employed. However, the challenge lies in accurately interpolating the missing components of the signal while preserving its inherent frequency components. To overcome this limitation, we incorporated a machine learning technique, which enables us to carefully handle the interpolation process without directly manipulating the data. The key advantage of our approach lies in the preservation of the natural characteristics of the EEG signal during artifact removal. By utilizing machine learning for interpolation, we ensure that the average component obtained through the EMD method retains the crucial frequency components of the original signal. This preservation is essential for maintaining the integrity and fidelity of the EEG data, allowing for accurate analysis and interpretation. The results obtained from our evaluation serve to validate the effectiveness of our approach and pave the way for further advancements in EEG signal processing and analysis.<|reference_end|>
|
arxiv
|
@article{rakhmatulin2024encoder,
title={Encoder with the Empirical Mode Decomposition (EMD) to remove muscle
artefacts from EEG signal},
author={Ildar Rakhmatulin},
journal={arXiv preprint arXiv:2409.14571},
year={2024},
archivePrefix={arXiv},
eprint={2409.14571},
primaryClass={cs.AI}
}
|
rakhmatulin2024encoder
|
arxiv-660533
|
2409.14572
|
Evaluating the Performance and Robustness of LLMs in Materials Science Q&A and Property Predictions
|
<|reference_start|>Evaluating the Performance and Robustness of LLMs in Materials Science Q&A and Property Predictions: Large Language Models (LLMs) have the potential to revolutionize scientific research, yet their robustness and reliability in domain-specific applications remain insufficiently explored. This study conducts a comprehensive evaluation and robustness analysis of LLMs within the field of materials science, focusing on domain-specific question answering and materials property prediction. Three distinct datasets are used in this study: 1) a set of multiple-choice questions from undergraduate-level materials science courses, 2) a dataset including various steel compositions and yield strengths, and 3) a band gap dataset, containing textual descriptions of material crystal structures and band gap values. The performance of LLMs is assessed using various prompting strategies, including zero-shot chain-of-thought, expert prompting, and few-shot in-context learning. The robustness of these models is tested against various forms of 'noise', ranging from realistic disturbances to intentionally adversarial manipulations, to evaluate their resilience and reliability under real-world conditions. Additionally, the study uncovers unique phenomena of LLMs during predictive tasks, such as mode collapse behavior when the proximity of prompt examples is altered and performance enhancement from train/test mismatch. The findings aim to provide informed skepticism for the broad use of LLMs in materials science and to inspire advancements that enhance their robustness and reliability for practical applications.<|reference_end|>
|
arxiv
|
@article{wang2024evaluating,
title={Evaluating the Performance and Robustness of LLMs in Materials Science
Q&A and Property Predictions},
  author={Hongchen Wang and Kangming Li and Scott Ramsay and Yao Fehlis and
Edward Kim and Jason Hattrick-Simpers},
journal={arXiv preprint arXiv:2409.14572},
year={2024},
archivePrefix={arXiv},
eprint={2409.14572},
primaryClass={cs.CL cond-mat.mtrl-sci cs.AI cs.LG}
}
|
wang2024evaluating
|
arxiv-660534
|
2409.14575
|
Domain knowledge-guided machine learning framework for state of health estimation in Lithium-ion batteries
|
<|reference_start|>Domain knowledge-guided machine learning framework for state of health estimation in Lithium-ion batteries: Accurate estimation of battery state of health is crucial for effective electric vehicle battery management. Here, we propose five health indicators that can be extracted online from real-world electric vehicle operation and develop a machine learning-based method to estimate the battery state of health. The proposed indicators provide physical insights into the energy and power fade of the battery and enable accurate capacity estimation even with partially missing data. Moreover, they can be computed for portions of the charging profile and real-world driving discharging conditions, facilitating real-time battery degradation estimation. The indicators are computed using experimental data from five cells aged under electric vehicle conditions, and a linear regression model is used to estimate the state of health. The results show that models trained with power autocorrelation and energy-based features achieve capacity estimation with maximum absolute percentage error within 1.5% to 2.5% .<|reference_end|>
|
arxiv
|
@article{lanubile2024domain,
title={Domain knowledge-guided machine learning framework for state of health
estimation in Lithium-ion batteries},
author={Andrea Lanubile and Pietro Bosoni and Gabriele Pozzato and Anirudh
Allam and Matteo Acquarone and Simona Onori},
journal={arXiv preprint arXiv:2409.14575},
year={2024},
archivePrefix={arXiv},
eprint={2409.14575},
primaryClass={cs.LG cs.SY eess.SY}
}
|
lanubile2024domain
|
arxiv-660535
|
2409.14577
|
AR Overlay: Training Image Pose Estimation on Curved Surface in a Synthetic Way
|
<|reference_start|>AR Overlay: Training Image Pose Estimation on Curved Surface in a Synthetic Way: In the field of spatial computing, one of the most essential tasks is the pose estimation of 3D objects. While rigid transformations of arbitrary 3D objects are relatively hard to detect due to varying environment introducing factors like insufficient lighting or even occlusion, objects with pre-defined shapes are often easy to track, leveraging geometric constraints. Curved images, with flexible dimensions but a confined shape, are essential shapes often targeted in 3D tracking. Traditionally, proprietary algorithms often require specific curvature measures as the input along with the original flattened images to enable pose estimation for a single image target. In this paper, we propose a pipeline that can detect several logo images simultaneously and only requires the original images as the input, unlocking more effects in downstream fields such as Augmented Reality (AR).<|reference_end|>
|
arxiv
|
@article{huang2024ar,
title={AR Overlay: Training Image Pose Estimation on Curved Surface in a
Synthetic Way},
  author={Sining Huang and Yukun Song and Yixiao Kang and Chang Yu},
journal={arXiv preprint arXiv:2409.14577},
year={2024},
archivePrefix={arXiv},
eprint={2409.14577},
primaryClass={cs.CV}
}
|
huang2024ar
|
arxiv-660536
|
2409.14579
|
Medical Concept Normalization in a Low-Resource Setting
|
<|reference_start|>Medical Concept Normalization in a Low-Resource Setting: In the field of biomedical natural language processing, medical concept normalization is a crucial task for accurately mapping mentions of concepts to a large knowledge base. However, this task becomes even more challenging in low-resource settings, where limited data and resources are available. In this thesis, I explore the challenges of medical concept normalization in a low-resource setting. Specifically, I investigate the shortcomings of current medical concept normalization methods applied to German lay texts. Since there is no suitable dataset available, a dataset consisting of posts from a German medical online forum is annotated with concepts from the Unified Medical Language System. The experiments demonstrate that multilingual Transformer-based models are able to outperform string similarity methods. The use of contextual information to improve the normalization of lay mentions is also examined, but led to inferior results. Based on the results of the best performing model, I present a systematic error analysis and lay out potential improvements to mitigate frequent errors.<|reference_end|>
|
arxiv
|
@article{patzelt2024medical,
title={Medical Concept Normalization in a Low-Resource Setting},
author={Tim Patzelt},
journal={arXiv preprint arXiv:2409.14579},
year={2024},
archivePrefix={arXiv},
eprint={2409.14579},
primaryClass={cs.CL cs.LG}
}
|
patzelt2024medical
|
arxiv-660537
|
2409.14580
|
Updating Robot Safety Representations Online from Natural Language Feedback
|
<|reference_start|>Updating Robot Safety Representations Online from Natural Language Feedback: Robots must operate safely when deployed in novel and human-centered environments, like homes. Current safe control approaches typically assume that the safety constraints are known a priori, and thus, the robot can pre-compute a corresponding safety controller. While this may make sense for some safety constraints (e.g., avoiding collision with walls by analyzing a floor plan), other constraints are more complex (e.g., spills), inherently personal, context-dependent, and can only be identified at deployment time when the robot is interacting in a specific environment and with a specific person (e.g., fragile objects, expensive rugs). Here, language provides a flexible mechanism to communicate these evolving safety constraints to the robot. In this work, we use vision language models (VLMs) to interpret language feedback and the robot's image observations to continuously update the robot's representation of safety constraints. With these inferred constraints, we update a Hamilton-Jacobi reachability safety controller online via efficient warm-starting techniques. Through simulation and hardware experiments, we demonstrate the robot's ability to infer and respect language-based safety constraints with the proposed approach.<|reference_end|>
|
arxiv
|
@article{santos2024updating,
title={Updating Robot Safety Representations Online from Natural Language
Feedback},
author={Leonardo Santos, Zirui Li, Lasse Peters, Somil Bansal, Andrea Bajcsy},
journal={arXiv preprint arXiv:2409.14580},
year={2024},
archivePrefix={arXiv},
eprint={2409.14580},
primaryClass={cs.RO}
}
|
santos2024updating
|
arxiv-660538
|
2409.14583
|
Evaluating Gender, Racial, and Age Biases in Large Language Models: A Comparative Analysis of Occupational and Crime Scenarios
|
<|reference_start|>Evaluating Gender, Racial, and Age Biases in Large Language Models: A Comparative Analysis of Occupational and Crime Scenarios: Recent advancements in Large Language Models (LLMs) have been notable, yet widespread enterprise adoption remains limited due to various constraints. This paper examines bias in LLMs, a crucial issue affecting their usability, reliability, and fairness. Researchers are developing strategies to mitigate bias, including debiasing layers, specialized reference datasets like Winogender and Winobias, and reinforcement learning with human feedback (RLHF). These techniques have been integrated into the latest LLMs. Our study evaluates gender bias in occupational scenarios and gender, age, and racial bias in crime scenarios across four leading LLMs released in 2024: Gemini 1.5 Pro, Llama 3 70B, Claude 3 Opus, and GPT-4o. Findings reveal that LLMs often depict female characters more frequently than male ones in various occupations, showing a 37% deviation from US BLS data. In crime scenarios, deviations from US FBI data are 54% for gender, 28% for race, and 17% for age. We observe that efforts to reduce gender and racial bias often lead to outcomes that may over-index one sub-class, potentially exacerbating the issue. These results highlight the limitations of current bias mitigation techniques and underscore the need for more effective approaches.<|reference_end|>
|
arxiv
|
@article{mirza2024evaluating,
title={Evaluating Gender, Racial, and Age Biases in Large Language Models: A
Comparative Analysis of Occupational and Crime Scenarios},
author={Vishal Mirza, Rahul Kulkarni, Aakanksha Jadhav},
journal={arXiv preprint arXiv:2409.14583},
year={2024},
archivePrefix={arXiv},
eprint={2409.14583},
primaryClass={cs.AI}
}
|
mirza2024evaluating
|
arxiv-660539
|
2409.14584
|
The X Types -- Mapping the Semantics of the Twitter Sphere
|
<|reference_start|>The X Types -- Mapping the Semantics of the Twitter Sphere: Social networks form a valuable source of world knowledge, where influential entities correspond to popular accounts. Unlike factual knowledge bases (KBs), which maintain a semantic ontology, structured semantic information is not available on social media. In this work, we consider a social KB of roughly 200K popular Twitter accounts, which denotes entities of interest. We elicit semantic information about those entities. In particular, we associate them with a fine-grained set of 136 semantic types, e.g., determine whether a given entity account belongs to a politician or a musical artist. In the absence of explicit type information on Twitter, we obtain semantic labels for a subset of the accounts via alignment with the KBs of DBpedia and Wikidata. Given the labeled dataset, we finetune a transformer-based text encoder to generate semantic embeddings of the entities based on the contents of their accounts. We then exploit this evidence alongside network-based embeddings to predict the entities' semantic types. In our experiments, we show high type prediction performance on the labeled dataset. Consequently, we apply our type classification model to all of the entity accounts in the social KB. Our analysis of the results offers insights into the global semantics of the Twitter sphere. We discuss downstream applications that should benefit from semantic type information and the semantic embeddings of social entities generated in this work. In particular, we demonstrate enhanced performance on the key task of entity similarity assessment using this information.<|reference_end|>
|
arxiv
|
@article{drukerman2024the,
title={The X Types -- Mapping the Semantics of the Twitter Sphere},
author={Ogen Schlachet Drukerman and Einat Minkov},
journal={arXiv preprint arXiv:2409.14584},
year={2024},
archivePrefix={arXiv},
eprint={2409.14584},
primaryClass={cs.CL}
}
|
drukerman2024the
|
arxiv-660540
|
2409.14585
|
A convergent scheme for the Bayesian filtering problem based on the Fokker--Planck equation and deep splitting
|
<|reference_start|>A convergent scheme for the Bayesian filtering problem based on the Fokker--Planck equation and deep splitting: A numerical scheme for approximating the nonlinear filtering density is introduced and its convergence rate is established, theoretically under a parabolic H\"{o}rmander condition, and empirically for two examples. For the prediction step, between the noisy and partial measurements at discrete times, the scheme approximates the Fokker--Planck equation with a deep splitting scheme, and performs an exact update through Bayes' formula. This results in a classical prediction-update filtering algorithm that operates online for new observation sequences post-training. The algorithm employs a sampling-based Feynman--Kac approach, designed to mitigate the curse of dimensionality. Our convergence proof relies on the Malliavin integration-by-parts formula. As a corollary we obtain the convergence rate for the approximation of the Fokker--Planck equation alone, disconnected from the filtering problem.<|reference_end|>
|
arxiv
|
@article{bågmark2024a,
title={A convergent scheme for the Bayesian filtering problem based on the
Fokker--Planck equation and deep splitting},
author={Kasper B{\aa}gmark, Adam Andersson, Stig Larsson, Filip Rydin},
journal={arXiv preprint arXiv:2409.14585},
year={2024},
archivePrefix={arXiv},
eprint={2409.14585},
primaryClass={math.NA cs.NA math.PR stat.CO stat.ML}
}
|
bågmark2024a
|
arxiv-660541
|
2409.14586
|
Backtracking Improves Generation Safety
|
<|reference_start|>Backtracking Improves Generation Safety: Text generation has a fundamental limitation almost by definition: there is no taking back tokens that have been generated, even when they are clearly problematic. In the context of language model safety, when a partial unsafe generation is produced, language models by their nature tend to happily keep on generating similarly unsafe additional text. This is in fact how safety alignment of frontier models gets circumvented in the wild, despite great efforts in improving their safety. Deviating from the paradigm of approaching safety alignment as prevention (decreasing the probability of harmful responses), we propose backtracking, a technique that allows language models to "undo" and recover from their own unsafe generation through the introduction of a special [RESET] token. Our method can be incorporated into either SFT or DPO training to optimize helpfulness and harmlessness. We show that models trained to backtrack are consistently safer than baseline models: backtracking Llama-3-8B is four times safer than the baseline model (6.1\% $\to$ 1.5\%) in our evaluations without regression in helpfulness. Our method additionally provides protection against four adversarial attacks including an adaptive attack, despite not being trained to do so.<|reference_end|>
|
arxiv
|
@article{zhang2024backtracking,
title={Backtracking Improves Generation Safety},
author={Yiming Zhang, Jianfeng Chi, Hailey Nguyen, Kartikeya Upasani, Daniel
M. Bikel, Jason Weston, Eric Michael Smith},
journal={arXiv preprint arXiv:2409.14586},
year={2024},
archivePrefix={arXiv},
eprint={2409.14586},
primaryClass={cs.LG cs.AI cs.CL}
}
|
zhang2024backtracking
|
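A minimal sketch of the backtracking idea described in the abstract above: a decoding loop that discards the partial generation whenever the special [RESET] token appears. The toy `step_fn` model and its token strings are illustrative assumptions; in the paper, the model itself is trained via SFT/DPO to emit [RESET] and then regenerate, rather than following a fixed script:

```python
RESET = "[RESET]"

def decode_with_backtracking(step_fn, max_tokens=32):
    """Greedy decoding loop that discards the partial generation whenever
    the model emits the special [RESET] token, then continues fresh.
    `step_fn(prefix)` stands in for the language model's next-token call."""
    out = []
    for _ in range(max_tokens):
        tok = step_fn(out)
        if tok == RESET:
            out = []  # undo: throw away the unsafe partial generation
            continue
        out.append(tok)
        if tok == "<eos>":
            break
    return out

# Toy model: starts an unsafe continuation, resets, then answers safely.
script = iter(["Sure,", "here's", RESET, "I", "can't", "help", "<eos>"])
tokens = decode_with_backtracking(lambda prefix: next(script))
```

The point of the sketch is that the reset logic lives entirely in the decoding loop; the hard part, which the paper addresses, is training the model to emit [RESET] at the right moments.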
arxiv-660542
|
2409.14587
|
Deep Learning Techniques for Atmospheric Turbulence Removal: A Review
|
<|reference_start|>Deep Learning Techniques for Atmospheric Turbulence Removal: A Review: The influence of atmospheric turbulence on acquired imagery makes image interpretation and scene analysis extremely difficult and reduces the effectiveness of conventional approaches for classifying and tracking objects of interest in the scene. Restoring a scene distorted by atmospheric turbulence is also a challenging problem. The effect, which is caused by random, spatially varying perturbations, makes conventional model-based approaches difficult and, in most cases, impractical due to complexity and memory requirements. Deep learning approaches offer faster operation and are capable of implementation on small devices. This paper reviews the characteristics of atmospheric turbulence and its impact on acquired imagery. It compares the performance of various state-of-the-art deep neural networks, including Transformers, SWIN and Mamba, when used to mitigate spatio-temporal image distortions.<|reference_end|>
|
arxiv
|
@article{hill2024deep,
title={Deep Learning Techniques for Atmospheric Turbulence Removal: A Review},
author={Paul Hill, Nantheera Anantrasirichai, Alin Achim and David Bull},
journal={arXiv preprint arXiv:2409.14587},
year={2024},
archivePrefix={arXiv},
eprint={2409.14587},
primaryClass={cs.CV astro-ph.IM}
}
|
hill2024deep
|
arxiv-660543
|
2409.14588
|
Space evaluation based on pitch control using drone video in Ultimate
|
<|reference_start|>Space evaluation based on pitch control using drone video in Ultimate: Ultimate is a sport in which teams of seven players compete for points by passing a disc into the end zone. A distinctive aspect of Ultimate is that the player holding the disc is unable to move, underscoring the significance of creating space to receive passes. Despite extensive research into space evaluation in sports such as football and basketball, there is a paucity of information available for Ultimate. This study focuses on the 3-on-3 format, which is widely practiced in Ultimate, and evaluates space during offensive play. The data collection process entailed the use of drones for filming and the subsequent correction of the angles for the purpose of obtaining positional data. The model is derived from the pitch control model of soccer and adapted to the rules of Ultimate, where the player holding the disc is stationary. The integration of position and distance weights with pitch control values enables the derivation of space evaluation metrics. The findings of this study indicate that movement to create space and accurate passing into that space are both significant factors in scoring. The code is available at https://github.com/shunsuke-iwashita/USO.<|reference_end|>
|
arxiv
|
@article{iwashita2024space,
title={Space evaluation based on pitch control using drone video in Ultimate},
author={Shunsuke Iwashita, Atom Scott, Rikuhei Umemoto, Ning Ding, Keisuke
Fujii},
journal={arXiv preprint arXiv:2409.14588},
year={2024},
archivePrefix={arXiv},
eprint={2409.14588},
primaryClass={cs.CV}
}
|
iwashita2024space
|
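The soccer-style pitch control model referenced in the abstract above can be illustrated with a simplified arrival-time version: each location is controlled by whichever team can reach it first, smoothed by a logistic function. The player speed, the logistic form, and the coordinates below are illustrative assumptions, not the paper's fitted model:

```python
import math

def arrival_time(player, point, speed=7.0):
    """Time for a player at (x, y) to reach a point, assuming constant speed."""
    return math.dist(player, point) / speed

def pitch_control(offense, defense, point):
    """Share of control the offense has at a point: a logistic function of
    the difference in earliest arrival times between the two teams."""
    t_off = min(arrival_time(p, point) for p in offense)
    t_def = min(arrival_time(p, point) for p in defense)
    return 1.0 / (1.0 + math.exp(t_off - t_def))

offense = [(0.0, 0.0), (10.0, 5.0)]
defense = [(20.0, 0.0), (15.0, 10.0)]
value = pitch_control(offense, defense, (5.0, 2.0))  # point near the offense
```

The paper's adaptation additionally fixes the disc holder in place (per Ultimate's rules) and weights control values by position and distance to score space.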
arxiv-660544
|
2409.14589
|
URSimulator: Human-Perception-Driven Prompt Tuning for Enhanced Virtual Urban Renewal via Diffusion Models
|
<|reference_start|>URSimulator: Human-Perception-Driven Prompt Tuning for Enhanced Virtual Urban Renewal via Diffusion Models: Tackling Urban Physical Disorder (e.g., abandoned buildings, litter, messy vegetation, graffiti) is essential, as it negatively impacts the safety, well-being, and psychological state of communities. Urban Renewal is the process of revitalizing these neglected and decayed areas within a city to improve the physical environment and quality of life for residents. Effective urban renewal efforts can transform these environments, enhancing their appeal and livability. However, current research lacks simulation tools that can quantitatively assess and visualize the impacts of renewal efforts, often relying on subjective judgments. Such tools are crucial for planning and implementing effective strategies by providing a clear visualization of potential changes and their impacts. This paper presents a novel framework addressing this gap by using human perception feedback to simulate street environment enhancement. We develop a prompt tuning approach that integrates text-driven Stable Diffusion with human perception feedback, iteratively editing local areas of street view images to better align with perceptions of beauty, liveliness, and safety. Our experiments show that this framework significantly improves perceptions of urban environments, with increases of 17.60% in safety, 31.15% in beauty, and 28.82% in liveliness. In contrast, advanced methods like DiffEdit achieve only 2.31%, 11.87%, and 15.84% improvements, respectively. We applied this framework across various virtual scenarios, including neighborhood improvement, building redevelopment, green space expansion, and community garden creation. The results demonstrate its effectiveness in simulating urban renewal, offering valuable insights for urban planning and policy-making.<|reference_end|>
|
arxiv
|
@article{hu2024ursimulator:,
title={URSimulator: Human-Perception-Driven Prompt Tuning for Enhanced Virtual
Urban Renewal via Diffusion Models},
author={Chuanbo Hu, Shan Jia, Xin Li},
journal={arXiv preprint arXiv:2409.14589},
year={2024},
archivePrefix={arXiv},
eprint={2409.14589},
primaryClass={cs.CV}
}
|
hu2024ursimulator:
|
arxiv-660545
|
2409.14590
|
Explainable AI needs formal notions of explanation correctness
|
<|reference_start|>Explainable AI needs formal notions of explanation correctness: The use of machine learning (ML) in critical domains such as medicine poses risks and requires regulation. One requirement is that decisions of ML systems in high-risk applications should be human-understandable. The field of "explainable artificial intelligence" (XAI) seemingly addresses this need. However, in its current form, XAI is unfit to provide quality control for ML; it itself needs scrutiny. Popular XAI methods cannot reliably answer important questions about ML models, their training data, or a given test input. We recapitulate results demonstrating that popular XAI methods systematically attribute importance to input features that are independent of the prediction target. This limits their utility for purposes such as model and data (in)validation, model improvement, and scientific discovery. We argue that the fundamental reason for this limitation is that current XAI methods do not address well-defined problems and are not evaluated against objective criteria of explanation correctness. Researchers should formally define the problems they intend to solve first and then design methods accordingly. This will lead to notions of explanation correctness that can be theoretically verified and objective metrics of explanation performance that can be assessed using ground-truth data.<|reference_end|>
|
arxiv
|
@article{haufe2024explainable,
title={Explainable AI needs formal notions of explanation correctness},
author={Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov,
Danny Panknin, Ahc{\`e}ne Boubekki},
journal={arXiv preprint arXiv:2409.14590},
year={2024},
archivePrefix={arXiv},
eprint={2409.14590},
primaryClass={cs.LG cs.AI stat.ML}
}
|
haufe2024explainable
|
arxiv-660546
|
2409.14591
|
Non-Cartesian Guarded Recursion with Daggers
|
<|reference_start|>Non-Cartesian Guarded Recursion with Daggers: Guarded recursion is a framework allowing for a formalisation of streams in classical programming languages. The latter take their semantics in cartesian closed categories. However, some programming paradigms do not take their semantics in a cartesian setting; this is the case for concurrency, reversible and quantum programming for example. In this paper, we focus on reversible programming through the prism of dagger categories, which are categories that contain an involutive operator on morphisms. After presenting classical guarded recursion, we show how to introduce this framework into dagger categories. Given a dagger category, we build categories shown to be suitable for guarded recursion in multiple ways, via enrichment and fixed point theorems. Finally, we show that our construction is suitable as a model of reversible programming languages, such as symmetric pattern-matching.<|reference_end|>
|
arxiv
|
@article{lemonnier2024non-cartesian,
title={Non-Cartesian Guarded Recursion with Daggers},
author={Louis Lemonnier},
journal={arXiv preprint arXiv:2409.14591},
year={2024},
archivePrefix={arXiv},
eprint={2409.14591},
primaryClass={cs.LO cs.PL math.CT}
}
|
lemonnier2024non-cartesian
|
arxiv-660547
|
2409.14592
|
Tactile Functasets: Neural Implicit Representations of Tactile Datasets
|
<|reference_start|>Tactile Functasets: Neural Implicit Representations of Tactile Datasets: Modern incarnations of tactile sensors produce high-dimensional raw sensory feedback such as images, making it challenging to efficiently store, process, and generalize across sensors. To address these concerns, we introduce a novel implicit function representation for tactile sensor feedback. Rather than directly using raw tactile images, we propose neural implicit functions trained to reconstruct the tactile dataset, producing compact representations that capture the underlying structure of the sensory inputs. These representations offer several advantages over their raw counterparts: they are compact, enable probabilistically interpretable inference, and facilitate generalization across different sensors. We demonstrate the efficacy of this representation on the downstream task of in-hand object pose estimation, achieving improved performance over image-based methods while simplifying downstream models. We release code, demos and datasets at https://www.mmintlab.com/tactile-functasets.<|reference_end|>
|
arxiv
|
@article{li2024tactile,
title={Tactile Functasets: Neural Implicit Representations of Tactile Datasets},
author={Sikai Li, Samanta Rodriguez, Yiming Dou, Andrew Owens, Nima Fazeli},
journal={arXiv preprint arXiv:2409.14592},
year={2024},
archivePrefix={arXiv},
eprint={2409.14592},
primaryClass={cs.RO}
}
|
li2024tactile
|
arxiv-660548
|
2409.14593
|
Testing Causal Models with Hidden Variables in Polynomial Delay via Conditional Independencies
|
<|reference_start|>Testing Causal Models with Hidden Variables in Polynomial Delay via Conditional Independencies: Testing a hypothesized causal model against observational data is a key prerequisite for many causal inference tasks. A natural approach is to test whether the conditional independence relations (CIs) assumed in the model hold in the data. While a model can assume exponentially many CIs (with respect to the number of variables), testing all of them is both impractical and unnecessary. Causal graphs, which encode these CIs in polynomial space, give rise to local Markov properties that enable model testing with a significantly smaller subset of CIs. Model testing based on local properties requires an algorithm to list the relevant CIs. However, existing algorithms for realistic settings with hidden variables and non-parametric distributions can take exponential time to produce even a single CI constraint. In this paper, we introduce the c-component local Markov property (C-LMP) for causal graphs with hidden variables. Since C-LMP can still invoke an exponential number of CIs, we develop a polynomial delay algorithm to list these CIs in poly-time intervals. To our knowledge, this is the first algorithm that enables poly-delay testing of CIs in causal graphs with hidden variables against arbitrary data distributions. Experiments on real-world and synthetic data demonstrate the practicality of our algorithm.<|reference_end|>
|
arxiv
|
@article{jeong2024testing,
title={Testing Causal Models with Hidden Variables in Polynomial Delay via
Conditional Independencies},
author={Hyunchai Jeong, Adiba Ejaz, Jin Tian, Elias Bareinboim},
journal={arXiv preprint arXiv:2409.14593},
year={2024},
archivePrefix={arXiv},
eprint={2409.14593},
primaryClass={cs.LG cs.AI stat.ME stat.ML}
}
|
jeong2024testing
|
arxiv-660549
|
2409.14595
|
EchoAtt: Attend, Copy, then Adjust for More Efficient Large Language Models
|
<|reference_start|>EchoAtt: Attend, Copy, then Adjust for More Efficient Large Language Models: Large Language Models (LLMs), with their increasing depth and number of parameters, have demonstrated outstanding performance across a variety of natural language processing tasks. However, this growth in scale leads to increased computational demands, particularly during inference and fine-tuning. To address these challenges, we introduce EchoAtt, a novel framework aimed at optimizing transformer-based models by analyzing and leveraging the similarity of attention patterns across layers. Our analysis reveals that many inner layers in LLMs, especially larger ones, exhibit highly similar attention matrices. By exploiting this similarity, EchoAtt enables the sharing of attention matrices in less critical layers, significantly reducing computational requirements without compromising performance. We incorporate this approach within a knowledge distillation setup, where a pre-trained teacher model guides the training of a smaller student model. The student model selectively shares attention matrices in layers with high similarity while inheriting key parameters from the teacher. Our best results with TinyLLaMA-1.1B demonstrate that EchoAtt improves inference speed by 15\%, training speed by 25\%, and reduces the number of parameters by approximately 4\%, all while improving zero-shot performance. These findings highlight the potential of attention matrix sharing to enhance the efficiency of LLMs, making them more practical for real-time and resource-limited applications.<|reference_end|>
|
arxiv
|
@article{rajabzadeh2024echoatt:,
title={EchoAtt: Attend, Copy, then Adjust for More Efficient Large Language
Models},
author={Hossein Rajabzadeh, Aref Jafari, Aman Sharma, Benyamin Jami, Hyock Ju
Kwon, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh},
journal={arXiv preprint arXiv:2409.14595},
year={2024},
archivePrefix={arXiv},
eprint={2409.14595},
primaryClass={cs.CL cs.LG}
}
|
rajabzadeh2024echoatt:
|
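The layer-similarity analysis described in the abstract above can be sketched as follows: flatten each layer's attention matrix, compute cosine similarity with the previous layer, and mark sufficiently similar layers as candidates for sharing. The threshold and the toy attention maps are illustrative assumptions:

```python
import numpy as np

def layers_to_share(attn_maps, threshold=0.95):
    """Return indices of layers whose attention matrix is similar enough to
    the previous layer's (by cosine similarity) to be shared rather than
    recomputed -- a simplified version of the analysis in the abstract."""
    share = []
    for i in range(1, len(attn_maps)):
        a = attn_maps[i - 1].ravel()
        b = attn_maps[i].ravel()
        cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        if cos >= threshold:
            share.append(i)
    return share

rng = np.random.default_rng(0)
base = rng.random((4, 4))
# Layer 1 is a slight perturbation of layer 0; layer 2 is very different.
maps = [base, base + 1e-3 * rng.random((4, 4)), np.eye(4)]
shared = layers_to_share(maps)
```

In EchoAtt the shared layers then reuse a single attention computation inside a distillation setup, which is where the reported speedups come from.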
arxiv-660550
|
2409.14596
|
DarkGram: Exploring and Mitigating Cybercriminal content shared in Telegram channels
|
<|reference_start|>DarkGram: Exploring and Mitigating Cybercriminal content shared in Telegram channels: We present the first large scale analysis of 339 cybercriminal activity channels (CACs) on Telegram from February to May 2024. Collectively followed by over 23.8 million users, these channels shared a wide array of illicit content, including compromised credentials, pirated software and media, tools for blackhat hacking resources such as malware, social engineering scams, and exploit kits. We developed DarkGram, a BERT based framework that identifies malicious posts from the CACs with an accuracy of 96%, using which we conducted a quantitative analysis of 53,605 posts from these channels, revealing key characteristics of shared content. While much of this content is distributed for free, channel administrators frequently employ promotions and giveaways to engage users and boost the sales of premium cybercriminal content. These channels also pose significant risks to their own subscribers. Notably, 28.1% of shared links contained phishing attacks, and 38% of executable files were bundled with malware. Moreover, our qualitative analysis of replies in CACs shows how subscribers cultivate a dangerous sense of community through requests for illegal content, illicit knowledge sharing, and collaborative hacking efforts, while their reactions to posts, including emoji responses, further underscore their appreciation for such content. We also find that the CACs can evade scrutiny by quickly migrating to new channels with minimal subscriber loss, highlighting the resilience of this ecosystem. To counteract this, we further utilized DarkGram to detect new channels, reporting malicious content to Telegram and the affected organizations which resulted in the takedown of 196 such channels over three months. To aid further collaborative efforts in taking down these channels, we open source our dataset and the DarkGram framework.<|reference_end|>
|
arxiv
|
@article{roy2024darkgram:,
title={DarkGram: Exploring and Mitigating Cybercriminal content shared in
Telegram channels},
author={Sayak Saha Roy, Elham Pourabbas Vafa, Kobra Khanmohammadi, Shirin
Nilizadeh},
journal={arXiv preprint arXiv:2409.14596},
year={2024},
archivePrefix={arXiv},
eprint={2409.14596},
primaryClass={cs.CR cs.CY cs.SI}
}
|
roy2024darkgram:
|
arxiv-660551
|
2409.14599
|
Implicit Dynamical Flow Fusion (IDFF) for Generative Modeling
|
<|reference_start|>Implicit Dynamical Flow Fusion (IDFF) for Generative Modeling: Conditional Flow Matching (CFM) models can generate high-quality samples from a non-informative prior, but they can be slow, often needing hundreds of network evaluations (NFE). To address this, we propose Implicit Dynamical Flow Fusion (IDFF), which learns a new vector field with an additional momentum term that enables taking longer steps during sample generation while maintaining the fidelity of the generated distribution. Consequently, IDFF reduces the NFE by a factor of ten (relative to CFMs) without sacrificing sample quality, enabling rapid sampling and efficient handling of image and time-series data generation tasks. We evaluate IDFF on standard benchmarks such as CIFAR-10 and CelebA for image generation, achieving likelihood and quality performance comparable to CFMs and diffusion-based models with fewer NFEs. IDFF also shows superior performance on time-series modeling tasks, including molecular simulation and sea surface temperature (SST) datasets, highlighting its versatility and effectiveness across different domains.<|reference_end|>
|
arxiv
|
@article{rezaei2024implicit,
title={Implicit Dynamical Flow Fusion (IDFF) for Generative Modeling},
author={Mohammad R. Rezaei, Rahul G. Krishnan, Milos R. Popovic, Milad
Lankarany},
journal={arXiv preprint arXiv:2409.14599},
year={2024},
archivePrefix={arXiv},
eprint={2409.14599},
primaryClass={cs.LG}
}
|
rezaei2024implicit
|
arxiv-660552
|
2409.14600
|
Rent Division with Picky Roommates
|
<|reference_start|>Rent Division with Picky Roommates: How can one assign roommates and rooms when tenants have preferences for both where and with whom they live? In this setting, the usual notions of envy-freeness and maximizing social welfare may not hold; the roommate rent-division problem is assumed to be NP-hard, and even when welfare is maximized, an envy-free price vector may not exist. We first construct a novel greedy algorithm with bipartite matching before exploiting the connection between social welfare maximization and the maximum weighted independent set (MWIS) problem to construct a polynomial-time algorithm that gives a $\frac{3}{4}+\varepsilon$-approximation of maximum social welfare. Further, we present an integer program to find a room envy-free price vector that minimizes envy between any two tenants. We show empirically that a MWIS algorithm returns the optimal allocation in polynomial time and conjecture that this problem, at the forefront of computer science research, may have an exact polynomial algorithm solution. This study not only advances the theoretical framework for roommate rent division but also offers practical algorithmic solutions that can be implemented in real-world applications.<|reference_end|>
|
arxiv
|
@article{huang2024rent,
title={Rent Division with Picky Roommates},
author={Yanqing Huang, Madeline Kitch, Natalie Melas-Kyriazi},
journal={arXiv preprint arXiv:2409.14600},
year={2024},
archivePrefix={arXiv},
eprint={2409.14600},
primaryClass={cs.GT}
}
|
huang2024rent
|
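The welfare-maximization problem described in the abstract above can be illustrated by exhaustive search on a toy instance with four tenants and two rooms of two. The utility function below is hypothetical; the paper's contribution is precisely avoiding this exponential enumeration via its MWIS-based approximation:

```python
from itertools import permutations

def max_welfare(tenants, rooms, utility):
    """Brute-force the welfare-maximizing pairing of tenants into rooms of two.
    `utility(tenant, room, roommate)` is a hypothetical valuation function."""
    best, best_assign = float("-inf"), None
    for order in permutations(tenants):
        welfare, assign = 0.0, {}
        for room, i in zip(rooms, range(0, len(order), 2)):
            a, b = order[i], order[i + 1]
            welfare += utility(a, room, b) + utility(b, room, a)
            assign[room] = (a, b)
        if welfare > best:
            best, best_assign = welfare, assign
    return best, best_assign

# Toy instance: Alice and Bob want to live together; room 1 is nicer.
def utility(tenant, room, roommate):
    score = 2.0 if room == 1 else 1.0
    if {tenant, roommate} == {"alice", "bob"}:
        score += 3.0
    return score

welfare, assignment = max_welfare(["alice", "bob", "cam", "dee"], [1, 2], utility)
```

Even this tiny instance shows why preferences over roommates break the classical rent-division machinery: welfare depends jointly on who shares a room, not just on who gets which room.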
arxiv-660553
|
2409.14602
|
Can pre-trained language models generate titles for research papers?
|
<|reference_start|>Can pre-trained language models generate titles for research papers?: The title of a research paper communicates in a succinct style the main theme and, sometimes, the findings of the paper. Coming up with the right title is often an arduous task, and therefore, it would be beneficial to authors if title generation can be automated. In this paper, we fine-tune pre-trained and large language models to generate titles of papers from their abstracts. We also use ChatGPT in a zero-shot setting to generate paper titles. The performance of the models is measured with ROUGE, METEOR, MoverScore, BERTScore and SciBERTScore metrics.<|reference_end|>
|
arxiv
|
@article{rehman2024can,
title={Can pre-trained language models generate titles for research papers?},
author={Tohida Rehman, Debarshi Kumar Sanyal, Samiran Chattopadhyay},
journal={arXiv preprint arXiv:2409.14602},
year={2024},
archivePrefix={arXiv},
eprint={2409.14602},
primaryClass={cs.CL cs.AI}
}
|
rehman2024can
|
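Of the evaluation metrics listed in the abstract above, ROUGE-1 F1 (unigram overlap between a generated and a reference title) is simple enough to sketch directly:

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum((c & r).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f("generating titles with language models",
                 "can language models generate titles")
```

The other metrics named in the abstract (METEOR, MoverScore, BERTScore, SciBERTScore) go beyond surface overlap and require stemming tables or pretrained embeddings.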
arxiv-660554
|
2409.14603
|
Brain Surgery: Ensuring GDPR Compliance in Large Language Models via Concept Erasure
|
<|reference_start|>Brain Surgery: Ensuring GDPR Compliance in Large Language Models via Concept Erasure: As large-scale AI systems proliferate, ensuring compliance with data privacy laws such as the General Data Protection Regulation (GDPR) has become critical. This paper introduces Brain Surgery, a transformative methodology for making every local AI model GDPR-ready by enabling real-time privacy management and targeted unlearning. Building on advanced techniques such as Embedding-Corrupted Prompts (ECO Prompts), blockchain-based privacy management, and privacy-aware continual learning, Brain Surgery provides a modular solution that can be deployed across various AI architectures. This tool not only ensures compliance with privacy regulations but also empowers users to define their own privacy limits, creating a new paradigm in AI ethics and governance.<|reference_end|>
|
arxiv
|
@article{laurelli2024brain,
title={Brain Surgery: Ensuring GDPR Compliance in Large Language Models via
Concept Erasure},
author={Michele Laurelli},
journal={arXiv preprint arXiv:2409.14603},
year={2024},
archivePrefix={arXiv},
eprint={2409.14603},
primaryClass={cs.AI}
}
|
laurelli2024brain
|
arxiv-660555
|
2409.14605
|
First Field Trial of LLM-Powered AI Agent for Lifecycle Management of Autonomous Driving Optical Networks
|
<|reference_start|>First Field Trial of LLM-Powered AI Agent for Lifecycle Management of Autonomous Driving Optical Networks: We design and demonstrate the first field trial of LLM-powered AI Agent for ADON. Three operation modes of the Agent are proposed for network lifecycle management. The Agent efficiently processes wavelength add/drop and soft/hard failures, and achieves comparable performance to human-designed algorithms for power optimization.<|reference_end|>
|
arxiv
|
@article{liu2024first,
title={First Field Trial of LLM-Powered AI Agent for Lifecycle Management of
Autonomous Driving Optical Networks},
author={Xiaomin Liu, Qizhi Qiu, Yihao Zhang, Yuming Cheng, Lilin Yi, Weisheng
Hu, Qunbi Zhuge},
journal={arXiv preprint arXiv:2409.14605},
year={2024},
archivePrefix={arXiv},
eprint={2409.14605},
primaryClass={eess.SY cs.SY}
}
|
liu2024first
|
arxiv-660556
|
2409.14607
|
Patch Ranking: Efficient CLIP by Learning to Rank Local Patches
|
<|reference_start|>Patch Ranking: Efficient CLIP by Learning to Rank Local Patches: Contrastive image-text pre-trained models such as CLIP have shown remarkable adaptability to downstream tasks. However, they face challenges due to the high computational requirements of the Vision Transformer (ViT) backbone. Current strategies to boost ViT efficiency focus on pruning patch tokens but fall short in addressing the multimodal nature of CLIP and identifying the optimal subset of tokens for maximum performance. To address this, we propose greedy search methods to establish a "Golden Ranking" and introduce a lightweight predictor specifically trained to approximate this Ranking. To compensate for any performance degradation resulting from token pruning, we incorporate learnable visual tokens that aid in restoring and potentially enhancing the model's performance. Our work presents a comprehensive and systematic investigation of pruning tokens within the ViT backbone of CLIP models. Through our framework, we successfully reduced 40% of patch tokens in CLIP's ViT while only suffering a minimal average accuracy loss of 0.3 across seven datasets. Our study lays the groundwork for building more computationally efficient multimodal models without sacrificing their performance, addressing a key challenge in the application of advanced vision-language models.<|reference_end|>
|
arxiv
|
@article{wu2024patch,
title={Patch Ranking: Efficient CLIP by Learning to Rank Local Patches},
author={Cheng-En Wu and Jinhong Lin and Yu Hen Hu and Pedro Morgado},
journal={arXiv preprint arXiv:2409.14607},
year={2024},
archivePrefix={arXiv},
eprint={2409.14607},
primaryClass={cs.CV cs.LG}
}
|
wu2024patch
|
arxiv-660557
|
2409.14608
|
Visual-auditory Extrinsic Contact Estimation
|
<|reference_start|>Visual-auditory Extrinsic Contact Estimation: Estimating contact locations between a grasped object and the environment is important for robust manipulation. In this paper, we present a visual-auditory method for extrinsic contact estimation, featuring a real-to-sim approach for auditory signals. Our method equips a robotic manipulator with contact microphones and speakers on its fingers, along with an externally mounted static camera providing a visual feed of the scene. As the robot manipulates objects, it detects contact events with surrounding surfaces using auditory feedback from the fingertips and visual feedback from the camera. A key feature of our approach is the transfer of auditory feedback into a simulated environment, where we learn a multimodal representation that is then applied to real world scenes without additional training. This zero-shot transfer is accurate and robust in estimating contact location and size, as demonstrated in our simulated and real world experiments in various cluttered environments.<|reference_end|>
|
arxiv
|
@article{yi2024visual-auditory,
title={Visual-auditory Extrinsic Contact Estimation},
author={Xili Yi and Jayjun Lee and Nima Fazeli},
journal={arXiv preprint arXiv:2409.14608},
year={2024},
archivePrefix={arXiv},
eprint={2409.14608},
primaryClass={cs.RO}
}
|
yi2024visual-auditory
|
arxiv-660558
|
2409.14609
|
Nirjas: An open source framework for extracting metadata from the source code
|
<|reference_start|>Nirjas: An open source framework for extracting metadata from the source code: Metadata and comments are critical elements of any software development process. In this paper, we explain how metadata and comments in source code can play an essential role in comprehending software. We introduce a Python-based open-source framework, Nirjas, which helps in extracting this metadata in a structured manner. Various syntaxes, types, and widely accepted conventions exist for adding comments in source files of different programming languages. Edge cases can create noise in extraction, for which we use Regex to accurately retrieve metadata. Non-Regex methods can give results but often miss accuracy and noise separation. Nirjas also separates different types of comments, source code, and provides details about those comments, such as line number, file name, language used, total SLOC, etc. Nirjas is a standalone Python framework/library and can be easily installed via source or pip (the Python package installer). Nirjas was initially created as part of a Google Summer of Code project and is currently developed and maintained under the FOSSology organization.<|reference_end|>
|
arxiv
|
@article{bhardwaj2024nirjas:,
title={Nirjas: An open source framework for extracting metadata from the source
code},
author={Ayush Bhardwaj and Sahil and Kaushlendra Pratap and Gaurav Mishra},
journal={arXiv preprint arXiv:2409.14609},
year={2024},
doi={10.1109/Confluence52989.2022.9734222},
archivePrefix={arXiv},
eprint={2409.14609},
primaryClass={cs.SE cs.IR}
}
|
bhardwaj2024nirjas:
|
arxiv-660559
|
2409.14610
|
An Empirical Study of Refactoring Engine Bugs
|
<|reference_start|>An Empirical Study of Refactoring Engine Bugs: Refactoring is a critical process in software development, aiming at improving the internal structure of code while preserving its external behavior. Refactoring engines are integral components of modern Integrated Development Environments (IDEs) and can automate or semi-automate this process to enhance code readability, reduce complexity, and improve the maintainability of software products. Like traditional software systems, refactoring engines can generate incorrect refactored programs, resulting in unexpected behaviors or even crashes. In this paper, we present the first systematic study of refactoring engine bugs by analyzing bugs arising in three popular refactoring engines (i.e., Eclipse, IntelliJ IDEA, and Netbeans). We analyzed these bugs according to their refactoring types, symptoms, root causes, and triggering conditions. We obtained 12 findings and provided a series of valuable guidelines for future work on refactoring bug detection and debugging. Furthermore, our transferability study revealed 130 new bugs in the latest version of those refactoring engines. Among the 21 bugs we submitted, 10 have been confirmed by their developers, and seven have already been fixed.<|reference_end|>
|
arxiv
|
@article{wang2024an,
title={An Empirical Study of Refactoring Engine Bugs},
author={Haibo Wang and Zhuolin Xu and Huaien Zhang and Nikolaos Tsantalis and Shin Hwei Tan},
journal={arXiv preprint arXiv:2409.14610},
year={2024},
archivePrefix={arXiv},
eprint={2409.14610},
primaryClass={cs.SE}
}
|
wang2024an
|
arxiv-660560
|
2409.14611
|
Secrets of Edge-Informed Contrast Maximization for Event-Based Vision
|
<|reference_start|>Secrets of Edge-Informed Contrast Maximization for Event-Based Vision: Event cameras capture the motion of intensity gradients (edges) in the image plane in the form of rapid asynchronous events. When accumulated in 2D histograms, these events depict overlays of the edges in motion, consequently obscuring the spatial structure of the generating edges. Contrast maximization (CM) is an optimization framework that can reverse this effect and produce sharp spatial structures that resemble the moving intensity gradients by estimating the motion trajectories of the events. Nonetheless, CM is still an underexplored area of research with avenues for improvement. In this paper, we propose a novel hybrid approach that extends CM from uni-modal (events only) to bi-modal (events and edges). We leverage the underpinning concept that, given a reference time, optimally warped events produce sharp gradients consistent with the moving edge at that time. Specifically, we formalize a correlation-based objective to aid CM and provide key insights into the incorporation of multiscale and multireference techniques. Moreover, our edge-informed CM method yields superior sharpness scores and establishes new state-of-the-art event optical flow benchmarks on the MVSEC, DSEC, and ECD datasets.<|reference_end|>
|
arxiv
|
@article{karmokar2024secrets,
title={Secrets of Edge-Informed Contrast Maximization for Event-Based Vision},
author={Pritam P. Karmokar and Quan H. Nguyen and William J. Beksi},
journal={arXiv preprint arXiv:2409.14611},
year={2024},
archivePrefix={arXiv},
eprint={2409.14611},
primaryClass={cs.CV eess.IV}
}
|
karmokar2024secrets
|
arxiv-660561
|
2409.14614
|
Faster Mixing of Higher-Dimensional Random Reversible Circuits
|
<|reference_start|>Faster Mixing of Higher-Dimensional Random Reversible Circuits: We continue the study of the approximate $k$-wise independence of random reversible circuits as permutations of $\{\pm1\}^n$. Our main result is the first construction of a natural class of random reversible circuits with a sublinear-in-$n$ dependence on depth. Our construction is motivated by considerations in practical cryptography and is somewhat inspired by the design of practical block ciphers, such as DES and AES. Previous constructions of He and O'Donnell [HO24], which were built with gate architectures on one-dimensional lattices, suffered from an inherent linear-in-$n$ dependence on depth. The main novelty of our circuit model is a gate architecture built on higher-dimensional lattices.<|reference_end|>
|
arxiv
|
@article{gay2024faster,
title={Faster Mixing of Higher-Dimensional Random Reversible Circuits},
author={William Gay and William He and Nicholas Kocurek},
journal={arXiv preprint arXiv:2409.14614},
year={2024},
archivePrefix={arXiv},
eprint={2409.14614},
primaryClass={cs.CC cs.CR}
}
|
gay2024faster
|
arxiv-660562
|
2409.14615
|
A Comparative Study on State-Action Spaces for Learning Viewpoint Selection and Manipulation with Diffusion Policy
|
<|reference_start|>A Comparative Study on State-Action Spaces for Learning Viewpoint Selection and Manipulation with Diffusion Policy: Robotic manipulation tasks often rely on static cameras for perception, which can limit flexibility, particularly in scenarios like robotic surgery and cluttered environments where mounting static cameras is impractical. Ideally, robots could jointly learn a policy for dynamic viewpoint and manipulation. However, it remains unclear which state-action space is most suitable for this complex learning process. To enable manipulation with dynamic viewpoints and to better understand impacts from different state-action spaces on this policy learning process, we conduct a comparative study on the state-action spaces for policy learning and their impacts on the performance of visuomotor policies that integrate viewpoint selection with manipulation. Specifically, we examine the configuration space of the robotic system, the end-effector space with a dual-arm Inverse Kinematics (IK) solver, and the reduced end-effector space with a look-at IK solver to optimize rotation for viewpoint selection. We also assess variants with different rotation representations. Our results demonstrate that state-action spaces utilizing Euler angles with the look-at IK achieve superior task success rates compared to other spaces. Further analysis suggests that these performance differences are driven by inherent variations in the high-frequency components across different state-action spaces and rotation representations.<|reference_end|>
|
arxiv
|
@article{sun2024a,
title={A Comparative Study on State-Action Spaces for Learning Viewpoint
Selection and Manipulation with Diffusion Policy},
author={Xiatao Sun and Francis Fan and Yinxing Chen and Daniel Rakita},
journal={arXiv preprint arXiv:2409.14615},
year={2024},
archivePrefix={arXiv},
eprint={2409.14615},
primaryClass={cs.RO}
}
|
sun2024a
|
arxiv-660563
|
2409.14616
|
Learning to Refine Input Constrained Control Barrier Functions via Uncertainty-Aware Online Parameter Adaptation
|
<|reference_start|>Learning to Refine Input Constrained Control Barrier Functions via Uncertainty-Aware Online Parameter Adaptation: Control Barrier Functions (CBFs) have become powerful tools for ensuring safety in nonlinear systems. However, finding valid CBFs that guarantee persistent safety and feasibility remains an open challenge, especially in systems with input constraints. Traditional approaches often rely on manually tuning the parameters of the class K functions of the CBF conditions a priori. The performance of CBF-based controllers is highly sensitive to these fixed parameters, potentially leading to overly conservative behavior or safety violations. To overcome these issues, this paper introduces a learning-based optimal control framework for online adaptation of Input Constrained CBF (ICCBF) parameters in discrete-time nonlinear systems. Our method employs a probabilistic ensemble neural network to predict the performance and risk metrics, as defined in this work, for candidate parameters, accounting for both epistemic and aleatoric uncertainties. We propose a two-step verification process using Jensen-Renyi Divergence and distributionally-robust Conditional Value at Risk to identify valid parameters. This enables dynamic refinement of ICCBF parameters based on current state and nearby environments, optimizing performance while ensuring safety within the verified parameter set. Experimental results demonstrate that our method outperforms both fixed-parameter and existing adaptive methods in robot navigation scenarios across safety and performance metrics.<|reference_end|>
|
arxiv
|
@article{kim2024learning,
title={Learning to Refine Input Constrained Control Barrier Functions via
Uncertainty-Aware Online Parameter Adaptation},
author={Taekyung Kim and Robin Inho Kee and Dimitra Panagou},
journal={arXiv preprint arXiv:2409.14616},
year={2024},
archivePrefix={arXiv},
eprint={2409.14616},
primaryClass={cs.RO cs.SY eess.SY}
}
|
kim2024learning
|
arxiv-660564
|
2409.14617
|
Protein-Mamba: Biological Mamba Models for Protein Function Prediction
|
<|reference_start|>Protein-Mamba: Biological Mamba Models for Protein Function Prediction: Protein function prediction is a pivotal task in drug discovery, significantly impacting the development of effective and safe therapeutics. Traditional machine learning models often struggle with the complexity and variability inherent in predicting protein functions, necessitating more sophisticated approaches. In this work, we introduce Protein-Mamba, a novel two-stage model that leverages both self-supervised learning and fine-tuning to improve protein function prediction. The pre-training stage allows the model to capture general chemical structures and relationships from large, unlabeled datasets, while the fine-tuning stage refines these insights using specific labeled datasets, resulting in superior prediction performance. Our extensive experiments demonstrate that Protein-Mamba achieves competitive performance, compared with a couple of state-of-the-art methods across a range of protein function datasets. This model's ability to effectively utilize both unlabeled and labeled data highlights the potential of self-supervised learning in advancing protein function prediction and offers a promising direction for future research in drug discovery.<|reference_end|>
|
arxiv
|
@article{xu2024protein-mamba:,
title={Protein-Mamba: Biological Mamba Models for Protein Function Prediction},
author={Bohao Xu and Yingzhou Lu and Yoshitaka Inoue and Namkyeong Lee and Tianfan Fu and Jintai Chen},
journal={arXiv preprint arXiv:2409.14617},
year={2024},
archivePrefix={arXiv},
eprint={2409.14617},
primaryClass={cs.LG q-bio.BM q-bio.QM}
}
|
xu2024protein-mamba:
|
arxiv-660565
|
2409.14619
|
SongTrans: An unified song transcription and alignment method for lyrics and notes
|
<|reference_start|>SongTrans: An unified song transcription and alignment method for lyrics and notes: The quantity of processed data is crucial for advancing the field of singing voice synthesis. While there are tools available for lyric or note transcription tasks, they all need pre-processed data which is relatively time-consuming (e.g., vocal and accompaniment separation). Besides, most of these tools are designed to address a single task and struggle with aligning lyrics and notes (i.e., identifying the corresponding notes of each word in lyrics). To address those challenges, we first design a pipeline by optimizing existing tools and annotating numerous lyric-note pairs of songs. Then, based on the annotated data, we train a unified SongTrans model that can directly transcribe lyrics and notes while aligning them simultaneously, without requiring pre-processing songs. Our SongTrans model consists of two modules: (1) the \textbf{Autoregressive module} predicts the lyrics, along with the duration and note number corresponding to each word in a lyric. (2) the \textbf{Non-autoregressive module} predicts the pitch and duration of the notes. Our experiments demonstrate that SongTrans achieves state-of-the-art (SOTA) results in both lyric and note transcription tasks. Furthermore, it is the first model capable of aligning lyrics with notes. Experimental results demonstrate that the SongTrans model can effectively adapt to different types of songs (e.g., songs with accompaniment), showcasing its versatility for real-world applications.<|reference_end|>
|
arxiv
|
@article{wu2024songtrans:,
title={SongTrans: An unified song transcription and alignment method for lyrics
and notes},
author={Siwei Wu and Jinzheng He and Ruibin Yuan and Haojie Wei and Xipin Wei and Chenghua Lin and Jin Xu and Junyang Lin},
journal={arXiv preprint arXiv:2409.14619},
year={2024},
archivePrefix={arXiv},
eprint={2409.14619},
primaryClass={cs.SD eess.AS}
}
|
wu2024songtrans:
|
arxiv-660566
|
2409.14622
|
LatentQGAN: A Hybrid QGAN with Classical Convolutional Autoencoder
|
<|reference_start|>LatentQGAN: A Hybrid QGAN with Classical Convolutional Autoencoder: Quantum machine learning consists in taking advantage of quantum computations to generate classical data. A potential application of quantum machine learning is to harness the power of quantum computers for generating classical data, a process essential to a multitude of applications such as enriching training datasets, anomaly detection, and risk management in finance. Given the success of Generative Adversarial Networks in classical image generation, the development of its quantum versions has been actively conducted. However, existing implementations on quantum computers often face significant challenges, such as scalability and training convergence issues. To address these issues, we propose LatentQGAN, a novel quantum model that uses a hybrid quantum-classical GAN coupled with an autoencoder. Although it was initially designed for image generation, the LatentQGAN approach holds potential for broader application across various practical data generation tasks. Experimental outcomes on both classical simulators and noisy intermediate scale quantum computers have demonstrated significant performance enhancements over existing quantum methods, alongside a significant reduction in quantum resources overhead.<|reference_end|>
|
arxiv
|
@article{vieloszynski2024latentqgan:,
title={LatentQGAN: A Hybrid QGAN with Classical Convolutional Autoencoder},
author={Alexis Vieloszynski and Soumaya Cherkaoui and Jean-Fr{\'e}d{\'e}ric Laprade and Oliver Nahman-L{\'e}vesque and Abdallah Aaraba and Shengrui Wang},
journal={arXiv preprint arXiv:2409.14622},
year={2024},
archivePrefix={arXiv},
eprint={2409.14622},
primaryClass={quant-ph cs.AI cs.LG}
}
|
vieloszynski2024latentqgan:
|
arxiv-660567
|
2409.14623
|
From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks
|
<|reference_start|>From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks: Biological and artificial neural networks develop internal representations that enable them to perform complex tasks. In artificial networks, the effectiveness of these models relies on their ability to build task-specific representations, a process influenced by interactions among datasets, architectures, initialization strategies, and optimization algorithms. Prior studies highlight that different initializations can place networks in either a lazy regime, where representations remain static, or a rich/feature learning regime, where representations evolve dynamically. Here, we examine how initialization influences learning dynamics in deep linear neural networks, deriving exact solutions for lambda-balanced initializations, defined by the relative scale of weights across layers. These solutions capture the evolution of representations and the Neural Tangent Kernel across the spectrum from the rich to the lazy regimes. Our findings deepen the theoretical understanding of the impact of weight initialization on learning regimes, with implications for continual learning, reversal learning, and transfer learning, relevant to both neuroscience and practical applications.<|reference_end|>
|
arxiv
|
@article{dominé2024from,
title={From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks},
author={Cl{\'e}mentine C. J. Domin{\'e} and Nicolas Anguita and Alexandra M. Proca and Lukas Braun and Daniel Kunin and Pedro A. M. Mediano and Andrew M. Saxe},
journal={arXiv preprint arXiv:2409.14623},
year={2024},
archivePrefix={arXiv},
eprint={2409.14623},
primaryClass={cs.LG}
}
|
dominé2024from
|
arxiv-660568
|
2409.14627
|
SOS: Segment Object System for Open-World Instance Segmentation With Object Priors
|
<|reference_start|>SOS: Segment Object System for Open-World Instance Segmentation With Object Priors: We propose an approach for Open-World Instance Segmentation (OWIS), a task that aims to segment arbitrary unknown objects in images by generalizing from a limited set of annotated object classes during training. Our Segment Object System (SOS) explicitly addresses the generalization ability and the low precision of state-of-the-art systems, which often generate background detections. To this end, we generate high-quality pseudo annotations based on the foundation model SAM. We thoroughly study various object priors to generate prompts for SAM, explicitly focusing the foundation model on objects. The strongest object priors were obtained by self-attention maps from self-supervised Vision Transformers, which we utilize for prompting SAM. Finally, the post-processed segments from SAM are used as pseudo annotations to train a standard instance segmentation system. Our approach shows strong generalization capabilities on COCO, LVIS, and ADE20k datasets and improves on the precision by up to 81.6% compared to the state-of-the-art. Source code is available at: https://github.com/chwilms/SOS<|reference_end|>
|
arxiv
|
@article{wilms2024sos:,
title={SOS: Segment Object System for Open-World Instance Segmentation With
Object Priors},
author={Christian Wilms and Tim Rolff and Maris Hillemann and Robert Johanson and Simone Frintrop},
journal={arXiv preprint arXiv:2409.14627},
year={2024},
archivePrefix={arXiv},
eprint={2409.14627},
primaryClass={cs.CV}
}
|
wilms2024sos:
|
arxiv-660569
|
2409.14628
|
Can a Neural Model Guide Fieldwork? A Case Study on Morphological Inflection
|
<|reference_start|>Can a Neural Model Guide Fieldwork? A Case Study on Morphological Inflection: Linguistic fieldwork is an important component in language documentation and preservation. However, it is a long, exhaustive, and time-consuming process. This paper presents a novel model that guides a linguist during the fieldwork and accounts for the dynamics of linguist-speaker interactions. We introduce a novel framework that evaluates the efficiency of various sampling strategies for obtaining morphological data and assesses the effectiveness of state-of-the-art neural models in generalising morphological structures. Our experiments highlight two key strategies for improving the efficiency: (1) increasing the diversity of annotated data by uniform sampling among the cells of the paradigm tables, and (2) using model confidence as a guide to enhance positive interaction by providing reliable predictions during annotation.<|reference_end|>
|
arxiv
|
@article{mahmudi2024can,
title={Can a Neural Model Guide Fieldwork? A Case Study on Morphological
Inflection},
author={Aso Mahmudi and Borja Herce and Demian Inostroza Amestica and Andreas Scherbakov and Eduard Hovy and Ekaterina Vylomova},
journal={arXiv preprint arXiv:2409.14628},
year={2024},
archivePrefix={arXiv},
eprint={2409.14628},
primaryClass={cs.CL}
}
|
mahmudi2024can
|
arxiv-660570
|
2409.14629
|
Gate Optimization of NEQR Quantum Circuits via PPRM Transformation
|
<|reference_start|>Gate Optimization of NEQR Quantum Circuits via PPRM Transformation: Quantum image representation (QIR) is a key challenge in quantum image processing (QIP) due to the large number of pixels in images, which increases the need for quantum gates and qubits. However, current quantum systems face limitations in run-time complexity and available qubits. This work aims to compress the quantum circuits of the Novel Enhanced Quantum Representation (NEQR) scheme by transforming their Exclusive-Or Sum-of-Products (ESOP) expressions into Positive Polarity Reed-Muller (PPRM) equivalents without adding ancillary qubits. Two cases of run-time complexity, exponential and linear, are considered for NEQR circuits with m controlling qubits ($m \rightarrow \infty$), depending on the decomposition of multi-controlled NOT gates. Using nonlinear regression, the proposed transformation is estimated to reduce the exponential complexity from $O(2^m)$ to $O(1.5^m)$, with a compression ratio approaching 100%. For linear complexity, the transformation is estimated to halve the run-time, with a compression ratio approaching 52%. Tests on six 256$\times$256 images show average reductions of 105.5 times for exponential cases and 2.4 times for linear cases, with average compression ratios of 99.05% and 58.91%, respectively.<|reference_end|>
|
arxiv
|
@article{iranmanesh2024gate,
title={Gate Optimization of NEQR Quantum Circuits via PPRM Transformation},
author={Shahab Iranmanesh and Hossein Aghababa and Kazim Fouladi},
journal={arXiv preprint arXiv:2409.14629},
year={2024},
archivePrefix={arXiv},
eprint={2409.14629},
primaryClass={quant-ph cs.ET}
}
|
iranmanesh2024gate
|
arxiv-660571
|
2409.14630
|
EQ-CBM: A Probabilistic Concept Bottleneck with Energy-based Models and Quantized Vectors
|
<|reference_start|>EQ-CBM: A Probabilistic Concept Bottleneck with Energy-based Models and Quantized Vectors: The demand for reliable AI systems has intensified the need for interpretable deep neural networks. Concept bottleneck models (CBMs) have gained attention as an effective approach by leveraging human-understandable concepts to enhance interpretability. However, existing CBMs face challenges due to deterministic concept encoding and reliance on inconsistent concepts, leading to inaccuracies. We propose EQ-CBM, a novel framework that enhances CBMs through probabilistic concept encoding using energy-based models (EBMs) with quantized concept activation vectors (qCAVs). EQ-CBM effectively captures uncertainties, thereby improving prediction reliability and accuracy. By employing qCAVs, our method selects homogeneous vectors during concept encoding, enabling more decisive task performance and facilitating higher levels of human intervention. Empirical results using benchmark datasets demonstrate that our approach outperforms the state-of-the-art in both concept and task accuracy.<|reference_end|>
|
arxiv
|
@article{kim2024eq-cbm:,
title={EQ-CBM: A Probabilistic Concept Bottleneck with Energy-based Models and
Quantized Vectors},
author={Sangwon Kim and Dasom Ahn and Byoung Chul Ko and In-su Jang and Kwang-Ju Kim},
journal={arXiv preprint arXiv:2409.14630},
year={2024},
archivePrefix={arXiv},
eprint={2409.14630},
primaryClass={cs.CV cs.AI}
}
|
kim2024eq-cbm:
|
arxiv-660572
|
2409.14633
|
Hierarchical end-to-end autonomous navigation through few-shot waypoint detection
|
<|reference_start|>Hierarchical end-to-end autonomous navigation through few-shot waypoint detection: Human navigation is facilitated through the association of actions with landmarks, tapping into our ability to recognize salient features in our environment. Consequently, navigational instructions for humans can be extremely concise, such as short verbal descriptions, indicating a small memory requirement and no reliance on complex and overly accurate navigation tools. Conversely, current autonomous navigation schemes rely on accurate positioning devices and algorithms as well as extensive streams of sensory data collected from the environment. Inspired by this human capability and motivated by the associated technological gap, in this work we propose a hierarchical end-to-end meta-learning scheme that enables a mobile robot to navigate in a previously unknown environment upon presentation of only a few sample images of a set of landmarks along with their corresponding high-level navigation actions. This dramatically simplifies the wayfinding process and enables easy adoption to new environments. For few-shot waypoint detection, we implement a metric-based few-shot learning technique through distribution embedding. Waypoint detection triggers the multi-task low-level maneuver controller module to execute the corresponding high-level navigation action. We demonstrate the effectiveness of the scheme using a small-scale autonomous vehicle on novel indoor navigation tasks in several previously unseen environments.<|reference_end|>
|
arxiv
|
@article{ghafourian2024hierarchical,
title={Hierarchical end-to-end autonomous navigation through few-shot waypoint
detection},
author={Amin Ghafourian and Zhongying CuiZhu and Debo Shi and Ian Chuang and Francois Charette and Rithik Sachdeva and Iman Soltani},
journal={IEEE Robotics and Automation Letters, vol. 9, no. 4, pp. 3211-3218, April 2024},
year={2024},
doi={10.1109/LRA.2024.3365294},
archivePrefix={arXiv},
eprint={2409.14633},
primaryClass={cs.RO cs.AI cs.LG}
}
|
ghafourian2024hierarchical
|
arxiv-660573
|
2409.14634
|
Scideator: Human-LLM Scientific Idea Generation Grounded in Research-Paper Facet Recombination
|
<|reference_start|>Scideator: Human-LLM Scientific Idea Generation Grounded in Research-Paper Facet Recombination: The scientific ideation process often involves blending salient aspects of existing papers to create new ideas. To see if large language models (LLMs) can assist this process, we contribute Scideator, a novel mixed-initiative tool for scientific ideation. Starting from a user-provided set of papers, Scideator extracts key facets (purposes, mechanisms, and evaluations) from these and relevant papers, allowing users to explore the idea space by interactively recombining facets to synthesize inventive ideas. Scideator also helps users to gauge idea novelty by searching the literature for potential overlaps and showing automated novelty assessments and explanations. To support these tasks, Scideator introduces four LLM-powered retrieval-augmented generation (RAG) modules: Analogous Paper Facet Finder, Faceted Idea Generator, Idea Novelty Checker, and Idea Novelty Iterator. In a within-subjects user study, 19 computer-science researchers identified significantly more interesting ideas using Scideator compared to a strong baseline combining a scientific search engine with LLM interaction.<|reference_end|>
|
arxiv
|
@article{radensky2024scideator:,
title={Scideator: Human-LLM Scientific Idea Generation Grounded in
Research-Paper Facet Recombination},
author={Marissa Radensky and Simra Shahid and Raymond Fok and Pao Siangliulue and Tom Hope and Daniel S. Weld},
journal={arXiv preprint arXiv:2409.14634},
year={2024},
archivePrefix={arXiv},
eprint={2409.14634},
primaryClass={cs.HC cs.AI}
}
|
radensky2024scideator:
|
arxiv-660574
|
2409.14635
|
Completeness of coalition logics with seriality, independence of agents, or determinism
|
<|reference_start|>Completeness of coalition logics with seriality, independence of agents, or determinism: Coalition Logic is a central logic in logical research on strategic reasoning. In a recent paper, Li and Ju argued that generally, models of Coalition Logic, concurrent game models, have three too strong assumptions: seriality, independence of agents, and determinism. They presented a Minimal Coalition Logic based on general concurrent game models, which do not have the three assumptions. However, when constructing coalition logics about strategic reasoning in special kinds of situations, we may want to keep some of the assumptions. Thus, studying coalition logics with some of these assumptions makes good sense. In this paper, we show the completeness of these coalition logics by a uniform approach.<|reference_end|>
|
arxiv
|
@article{li2024completeness,
title={Completeness of coalition logics with seriality, independence of agents,
or determinism},
author={Yinfeng Li and Fengkui Ju},
journal={arXiv preprint arXiv:2409.14635},
year={2024},
archivePrefix={arXiv},
eprint={2409.14635},
primaryClass={cs.GT math.LO}
}
|
li2024completeness
|
arxiv-660575
|
2409.14637
|
Not Only the Last-Layer Features for Spurious Correlations: All Layer Deep Feature Reweighting
|
<|reference_start|>Not Only the Last-Layer Features for Spurious Correlations: All Layer Deep Feature Reweighting: Spurious correlations are a major source of errors for machine learning models, in particular when aiming for group-level fairness. It has been recently shown that a powerful approach to combat spurious correlations is to re-train the last layer on a balanced validation dataset, isolating robust features for the predictor. However, key attributes can sometimes be discarded by neural networks towards the last layer. In this work, we thus consider retraining a classifier on a set of features derived from all layers. We utilize a recently proposed feature selection strategy to select unbiased features from all the layers. We observe this approach gives significant improvements in worst-group accuracy on several standard benchmarks.<|reference_end|>
|
arxiv
|
@article{hameed2024not,
title={Not Only the Last-Layer Features for Spurious Correlations: All Layer
Deep Feature Reweighting},
author={Humza Wajid Hameed and Geraldin Nanfack and Eugene Belilovsky},
journal={arXiv preprint arXiv:2409.14637},
year={2024},
archivePrefix={arXiv},
eprint={2409.14637},
primaryClass={cs.LG cs.AI}
}
|
hameed2024not
|
arxiv-660576
|
2409.14638
|
Harmonising the Clinical Melody: Tuning Large Language Models for Hospital Course Summarisation in Clinical Coding
|
<|reference_start|>Harmonising the Clinical Melody: Tuning Large Language Models for Hospital Course Summarisation in Clinical Coding: The increasing volume and complexity of clinical documentation in Electronic Medical Records systems pose significant challenges for clinical coders, who must mentally process and summarise vast amounts of clinical text to extract essential information needed for coding tasks. While large language models have been successfully applied to shorter summarisation tasks in recent years, the challenge of summarising a hospital course remains an open area for further research and development. In this study, we adapted three pre-trained LLMs, Llama 3, BioMistral, and Mistral Instruct v0.1, for the hospital course summarisation task, using Quantized Low Rank Adaptation fine-tuning. We created a free-text clinical dataset from MIMIC III data by concatenating various clinical notes as the input clinical text, paired with ground-truth Brief Hospital Course sections extracted from the discharge summaries for model training. The fine-tuned models were evaluated using BERTScore and ROUGE metrics to assess the effectiveness of clinical domain fine-tuning. Additionally, we validated their practical utility using a novel hospital course summary assessment metric specifically tailored for clinical coding. Our findings indicate that fine-tuning pre-trained LLMs for the clinical domain can significantly enhance their performance in hospital course summarisation and suggest their potential as assistive tools for clinical coding. Future work should focus on refining data curation methods to create higher-quality clinical datasets tailored for hospital course summary tasks and adapting more advanced open-source LLMs comparable to proprietary models to further advance this research.<|reference_end|>
|
arxiv
|
@article{bi2024harmonising,
title={Harmonising the Clinical Melody: Tuning Large Language Models for
Hospital Course Summarisation in Clinical Coding},
author={Bokang Bi and Leibo Liu and Sanja Lujic and Louisa Jorm and Oscar Perez-Concha},
journal={arXiv preprint arXiv:2409.14638},
year={2024},
archivePrefix={arXiv},
eprint={2409.14638},
primaryClass={cs.CL cs.LG}
}
|
bi2024harmonising
|
arxiv-660577
|
2409.14639
|
Impedance Control for Manipulators Handling Heavy Payloads
|
<|reference_start|>Impedance Control for Manipulators Handling Heavy Payloads: Attaching a heavy payload to the wrist force/moment (F/M) sensor of a manipulator can cause conventional impedance controllers to fail in establishing the desired impedance due to the presence of non-contact forces; namely, the inertial and gravitational forces of the payload. This paper presents an impedance control scheme designed to accurately shape the force-response of such a manipulator without requiring acceleration measurements. As a result, neither wrist accelerometers nor dynamic estimators for compensating inertial load forces are necessary. The proposed controller employs an inner-outer loop feedback structure, which not only addresses uncertainties in the robot's dynamics but also enables the specification of a general target impedance model, including nonlinear models. Stability and convergence of the controller are analytically proven, with results showing that the control input remains bounded as long as the desired inertia differs from the payload inertia. Experimental results confirm that the proposed impedance controller effectively shapes the impedance of a manipulator carrying a heavy load according to the desired impedance model.<|reference_end|>
|
arxiv
|
@article{aghili2024impedance,
title={Impedance Control for Manipulators Handling Heavy Payloads},
author={Farhad Aghili},
journal={ASME Journal of Dynamic Systems, Measurement, and Control, 2010},
year={2024},
doi={10.1115/1.4001898},
archivePrefix={arXiv},
eprint={2409.14639},
primaryClass={cs.RO cs.SY eess.SY}
}
|
aghili2024impedance
|
arxiv-660578
|
2409.14640
|
MERCURY: Practical Cross-Chain Exchange via Trusted Hardware
|
<|reference_start|>MERCURY: Practical Cross-Chain Exchange via Trusted Hardware: The proliferation of blockchain-backed cryptocurrencies has sparked the need for cross-chain exchanges of diverse digital assets. Unfortunately, current exchanges suffer from high on-chain verification costs, weak threat models of central trusted parties, or synchronous requirements, making them impractical for currency trading applications. In this paper, we present MERCURY, a practical cryptocurrency exchange that is trust-minimized and efficient without online-client requirements. MERCURY leverages Trusted Execution Environments (TEEs) to shield participants from malicious behaviors, eliminating the reliance on trusted participants and making on-chain verification efficient. Despite the simple idea, building a practical TEE-assisted cross-chain exchange is challenging due to the security and unavailability issues of TEEs. MERCURY tackles the unavailability problem of TEEs by implementing an efficient challenge-response mechanism executed on smart contracts. Furthermore, MERCURY utilizes a lightweight transaction verification mechanism and adopts multiple optimizations to reduce on-chain costs. Comparative evaluations with XClaim, ZK-bridge, and Tesseract demonstrate that MERCURY significantly reduces on-chain costs by approximately 67.87%, 45.01%, and 47.70%, respectively.<|reference_end|>
|
arxiv
|
@article{wen2024mecury:,
title={MERCURY: Practical Cross-Chain Exchange via Trusted Hardware},
author={Xiaoqing Wen and Quanbi Feng and Jianyu Niu and Yinqian Zhang and Chen Feng},
journal={arXiv preprint arXiv:2409.14640},
year={2024},
archivePrefix={arXiv},
eprint={2409.14640},
primaryClass={cs.CR cs.DC}
}
|
wen2024mecury:
|
arxiv-660579
|
2409.14644
|
zsLLMCode: An Effective Approach for Functional Code Embedding via LLM with Zero-Shot Learning
|
<|reference_start|>zsLLMCode: An Effective Approach for Functional Code Embedding via LLM with Zero-Shot Learning: Regarding software engineering (SE) tasks, large language models (LLMs) have the capability of zero-shot learning, which does not require training or fine-tuning, unlike pre-trained models (PTMs). However, LLMs are primarily designed for natural language output, and cannot directly produce intermediate embeddings from source code. They also face some challenges, for example, the restricted context length may prevent them from handling larger inputs, limiting their applicability to many SE tasks; while hallucinations may occur when LLMs are applied to complex downstream tasks. Motivated by the above facts, we propose zsLLMCode, a novel approach that generates functional code embeddings using LLMs. Our approach utilizes LLMs to convert source code into concise summaries through zero-shot learning, which are then transformed into functional code embeddings using specialized embedding models. This unsupervised approach eliminates the need for training and addresses the issue of hallucinations encountered with LLMs. To the best of our knowledge, this is the first approach that combines LLMs and embedding models to generate code embeddings. We conducted experiments to evaluate the performance of our approach. The results demonstrate the effectiveness and superiority of our approach over state-of-the-art unsupervised methods.<|reference_end|>
|
arxiv
|
@article{xian2024zsllmcode:,
title={zsLLMCode: An Effective Approach for Functional Code Embedding via LLM
with Zero-Shot Learning},
author={Zixiang Xian and Chenhui Cui and Rubing Huang and Chunrong Fang and Zhenyu Chen},
journal={arXiv preprint arXiv:2409.14644},
year={2024},
archivePrefix={arXiv},
eprint={2409.14644},
primaryClass={cs.SE cs.AI}
}
|
xian2024zsllmcode:
|
arxiv-660580
|
2409.14645
|
Demystifying Trajectory Recovery From Ash: An Open-Source Evaluation and Enhancement
|
<|reference_start|>Demystifying Trajectory Recovery From Ash: An Open-Source Evaluation and Enhancement: Once analysed, location trajectories can provide valuable insights beneficial to various applications. However, such data is also highly sensitive, rendering them susceptible to privacy risks in the event of mismanagement, for example, revealing an individual's identity, home address, or political affiliations. Hence, ensuring that privacy is preserved for this data is a priority. One commonly taken measure to mitigate this concern is aggregation. Previous work by Xu et al. shows that trajectories are still recoverable from anonymised and aggregated datasets. However, the study lacks implementation details, obfuscating the mechanisms of the attack. Additionally, the attack was evaluated on commercial non-public datasets, rendering the results and subsequent claims unverifiable. This study reimplements the trajectory recovery attack from scratch and evaluates it on two open-source datasets, detailing the preprocessing steps and implementation. Results confirm that privacy leakage still exists despite common anonymisation and aggregation methods but also indicate that the initial accuracy claims may have been overly ambitious. We release all code as open-source to ensure the results are entirely reproducible and, therefore, verifiable. Moreover, we propose a stronger attack by designing a series of enhancements to the baseline attack. These enhancements yield higher accuracies by up to 16%, providing an improved benchmark for future research in trajectory recovery methods. Our improvements also enable online execution of the attack, allowing partial attacks on larger datasets previously considered unprocessable, thereby furthering the extent of privacy leakage. The findings emphasise the importance of using strong privacy-preserving mechanisms when releasing aggregated mobility data and not solely relying on aggregation as a means of anonymisation.<|reference_end|>
|
arxiv
|
@article{d'silva2024demystifying,
title={Demystifying Trajectory Recovery From Ash: An Open-Source Evaluation and
Enhancement},
author={Nicholas D'Silva and Toran Shahi and {\O}yvind Timian Dokk Husveg and
Adith Sanjeeve and Erik Buchholz and Salil S. Kanhere},
journal={arXiv preprint arXiv:2409.14645},
year={2024},
archivePrefix={arXiv},
eprint={2409.14645},
primaryClass={cs.CR cs.LG}
}
|
d'silva2024demystifying
|
arxiv-660581
|
2409.14647
|
TeeRollup: Efficient Rollup Design Using Heterogeneous TEE
|
<|reference_start|>TeeRollup: Efficient Rollup Design Using Heterogeneous TEE: Rollups have emerged as a promising approach to improving blockchains' scalability by offloading transaction execution off-chain. Existing rollup solutions either leverage complex zero-knowledge proofs or optimistically assume execution correctness unless challenged. However, these solutions have practical issues such as high gas costs and significant withdrawal delays, hindering their adoption in decentralized applications. This paper introduces TeeRollup, an efficient rollup design with low gas costs and short withdrawal delays. TeeRollup employs Trusted Execution Environments (TEEs)-supported sequencers to execute transactions, requiring the blockchain to verify only the TEEs' signatures. TeeRollup is designed under a realistic threat model in which the integrity and availability of sequencers' TEEs may be compromised. To address these issues, we first introduce a distributed system of sequencers with heterogeneous TEEs, ensuring system security even if a minority of TEEs are compromised. Second, we propose a challenge mechanism to solve the redeemability issue caused by TEE unavailability. Furthermore, TeeRollup incorporates Data Availability Providers (DAPs) to reduce on-chain storage overhead and uses a laziness penalty game to regulate DAP behavior. We implement a prototype of TeeRollup in Golang, using the Ethereum test network, Sepolia. Our experimental results indicate that TeeRollup outperforms zero-knowledge rollups (zk-rollups), reducing on-chain verification costs by approximately 86% and withdrawal delays to a few minutes.<|reference_end|>
|
arxiv
|
@article{wen2024teerollup:,
title={TeeRollup: Efficient Rollup Design Using Heterogeneous TEE},
author={Xiaoqing Wen and Quanbi Feng and Jianyu Niu and Yinqian Zhang and Chen Feng},
journal={arXiv preprint arXiv:2409.14647},
year={2024},
archivePrefix={arXiv},
eprint={2409.14647},
primaryClass={cs.CR}
}
|
wen2024teerollup:
|
arxiv-660582
|
2409.14649
|
Substring Compression Variations and LZ78-Derivates
|
<|reference_start|>Substring Compression Variations and LZ78-Derivates: We propose algorithms computing the semi-greedy Lempel-Ziv 78 (LZ78), the Lempel-Ziv Double (LZD), and the Lempel-Ziv-Miller-Wegman (LZMW) factorizations in linear time for integer alphabets. For LZD and LZMW, we additionally propose data structures that can be constructed in linear time, which can solve the substring compression problems for these factorizations in time linear in the output size. For substring compression, we give results for lexparse and closed factorizations.<|reference_end|>
|
arxiv
|
@article{köppl2024substring,
title={Substring Compression Variations and LZ78-Derivates},
author={Dominik K{\"o}ppl},
journal={full version of conference paper published at DCC 2024},
year={2024},
doi={10.1109/DCC58796.2024.00021},
archivePrefix={arXiv},
eprint={2409.14649},
primaryClass={cs.DS}
}
|
köppl2024substring
|
arxiv-660583
|
2409.14652
|
AEANet: Affinity Enhanced Attentional Networks for Arbitrary Style Transfer
|
<|reference_start|>AEANet: Affinity Enhanced Attentional Networks for Arbitrary Style Transfer: Arbitrary artistic style transfer is a research area that combines rational academic study with emotive artistic creation. It aims to create a new image from a content image according to a target artistic style, maintaining the content's textural structural information while incorporating the artistic characteristics of the style image. However, existing style transfer methods often significantly damage the texture lines of the content image during the style transformation. To address these issues, we propose the affinity-enhanced attentional network, which includes the content affinity-enhanced attention (CAEA) module, the style affinity-enhanced attention (SAEA) module, and the hybrid attention (HA) module. The CAEA and SAEA modules first use attention to enhance content and style representations, followed by a detail-enhanced (DE) module to reinforce detail features. The hybrid attention module adjusts the style feature distribution based on the content feature distribution. We also introduce the local dissimilarity loss based on affinity attention, which better preserves the affinity with content and style images. Experiments demonstrate that our work achieves better results in arbitrary style transfer than other state-of-the-art methods.<|reference_end|>
|
arxiv
|
@article{li2024aeanet:,
title={AEANet: Affinity Enhanced Attentional Networks for Arbitrary Style
Transfer},
author={Gen Li and Xianqiu Zheng and Yujian Li},
journal={arXiv preprint arXiv:2409.14652},
year={2024},
archivePrefix={arXiv},
eprint={2409.14652},
primaryClass={cs.CV}
}
|
li2024aeanet:
|
arxiv-660584
|
2409.14653
|
Data-driven Viscosity Solver for Fluid Simulation
|
<|reference_start|>Data-driven Viscosity Solver for Fluid Simulation: We propose a data-driven viscosity solver based on a U-shaped convolutional neural network to predict velocity changes due to viscosity. Our solver takes velocity derivatives, fluid volume, and solid indicator quantities as input. The traditional marker-and-cell (MAC) grid stores velocities at the edges of the grid, causing the dimensions of the velocity field to vary from axis to axis. In our work, we suggest a symmetric MAC grid that maintains consistent dimensions across axes without interpolation or symmetry breaking. The proposed grid effectively transfers spatial fluid quantities such as partial derivatives of velocity, enabling networks to generate accurate predictions. Additionally, we introduce a physics-based loss inspired by the variational formulation of viscosity to enhance the network's generalization for a wide range of viscosity coefficients. We demonstrate various fluid simulation results, including 2D and 3D fluid-rigid body scenes and a scene exhibiting the buckling effect. Our code is available at \url{https://github.com/SSTDV-Project/python-fluid-simulation.}<|reference_end|>
|
arxiv
|
@article{park2024data-driven,
title={Data-driven Viscosity Solver for Fluid Simulation},
author={Wonjung Park and Hyunsoo Kim and Jinah Park},
journal={arXiv preprint arXiv:2409.14653},
year={2024},
doi={10.5220/0012397300003660},
archivePrefix={arXiv},
eprint={2409.14653},
primaryClass={cs.GR}
}
|
park2024data-driven
|
arxiv-660585
|
2409.14654
|
Fast and Small Subsampled R-indexes
|
<|reference_start|>Fast and Small Subsampled R-indexes: The $r$-index represented a breakthrough in compressed indexing of repetitive text collections, outperforming its alternatives by orders of magnitude in query time. Its space usage, $O(r)$ where $r$ is the number of runs in the Burrows--Wheeler Transform of the text, is however higher than Lempel--Ziv (LZ) and grammar-based indexes, and makes it uninteresting in various real-life scenarios of milder repetitiveness. We introduce the $sr$-index, a variant that limits the space to $O(\min(r,n/s))$ for a text of length $n$ and a given parameter $s$, at the expense of multiplying by $s$ the time per occurrence reported. The $sr$-index is obtained subsampling the text positions indexed by the $r$-index, being still able to support pattern matching with guaranteed performance. Our experiments show that the theoretical analysis falls short in describing the practical advantages of the $sr$-index, because it performs much better on real texts than on synthetic ones: the $sr$-index retains the performance of the $r$-index while using 1.5--4.0 times less space, sharply outperforming {\em virtually every other} compressed index on repetitive texts in both time and space. Only a particular LZ-based index uses less space than the $sr$-index, but it is an order of magnitude slower. Our second contribution are the $r$-csa and $sr$-csa indexes. Just like the $r$-index adapts the well-known FM-Index to repetitive texts, the $r$-csa adapts Sadakane's Compressed Suffix Array (CSA) to this case. We show that the principles used on the $r$-index turn out to fit naturally and efficiently in the CSA framework. The $sr$-csa is the corresponding subsampled version of the $r$-csa. While the CSA performs better than the FM-Index on classic texts with alphabets larger than DNA, we show that the $sr$-csa outperforms the $sr$-index on repetitive texts over those larger alphabets and some DNA texts as well.<|reference_end|>
|
arxiv
|
@article{cobas2024fast,
title={Fast and Small Subsampled R-indexes},
author={Dustin Cobas and Travis Gagie and Gonzalo Navarro},
journal={arXiv preprint arXiv:2409.14654},
year={2024},
archivePrefix={arXiv},
eprint={2409.14654},
primaryClass={cs.DS}
}
|
cobas2024fast
|
arxiv-660586
|
2409.14655
|
Federated Graph Learning with Adaptive Importance-based Sampling
|
<|reference_start|>Federated Graph Learning with Adaptive Importance-based Sampling: For privacy-preserving graph learning tasks involving distributed graph datasets, federated learning (FL)-based GCN (FedGCN) training is required. A key challenge for FedGCN is scaling to large-scale graphs, which typically incurs high computation and communication costs when dealing with the explosively increasing number of neighbors. Existing graph sampling-enhanced FedGCN training approaches ignore graph structural information or dynamics of optimization, resulting in high variance and inaccurate node embeddings. To address this limitation, we propose the Federated Adaptive Importance-based Sampling (FedAIS) approach. It achieves substantial computational cost saving by focusing the limited resources on training important nodes, while reducing communication overhead via adaptive historical embedding synchronization. The proposed adaptive importance-based sampling method jointly considers the graph structural heterogeneity and the optimization dynamics to achieve an optimal trade-off between efficiency and accuracy. Extensive evaluations against five state-of-the-art baselines on five real-world graph datasets show that FedAIS achieves comparable or up to 3.23% higher test accuracy, while reducing communication and computation costs by 91.77% and 85.59%.<|reference_end|>
|
arxiv
|
@article{li2024federated,
title={Federated Graph Learning with Adaptive Importance-based Sampling},
author={Anran Li and Yuanyuan Chen and Chao Ren and Wenhan Wang and Ming Hu and
Tianlin Li and Han Yu and Qingyu Chen},
journal={arXiv preprint arXiv:2409.14655},
year={2024},
archivePrefix={arXiv},
eprint={2409.14655},
primaryClass={cs.DC cs.CR cs.LG}
}
|
li2024federated
|
arxiv-660587
|
2409.14657
|
Building Tamil Treebanks
|
<|reference_start|>Building Tamil Treebanks: Treebanks are important linguistic resources, which are structured and annotated corpora with rich linguistic annotations. These resources are used in Natural Language Processing (NLP) applications, supporting linguistic analyses, and are essential for training and evaluating various computational models. This paper discusses the creation of Tamil treebanks using three distinct approaches: manual annotation, computational grammars, and machine learning techniques. Manual annotation, though time-consuming and requiring linguistic expertise, ensures high-quality and rich syntactic and semantic information. Computational deep grammars, such as Lexical Functional Grammar (LFG), offer deep linguistic analyses but necessitate significant knowledge of the formalism. Machine learning approaches, utilising off-the-shelf frameworks and tools like Stanza, UDpipe, and UUParser, facilitate the automated annotation of large datasets but depend on the availability of quality annotated data, cross-linguistic training resources, and computational power. The paper discusses the challenges encountered in building Tamil treebanks, including issues with Internet data, the need for comprehensive linguistic analysis, and the difficulty of finding skilled annotators. Despite these challenges, the development of Tamil treebanks is essential for advancing linguistic research and improving NLP tools for Tamil.<|reference_end|>
|
arxiv
|
@article{sarveswaran2024building,
title={Building Tamil Treebanks},
author={Kengatharaiyer Sarveswaran},
journal={Sarveswaran, K. (2024). Building Tamil Treebanks. In Proceedings
of the International Conference on Tamil Computing and Information Technology
(ICTCIT 2024)/23rd Tamil Internet Conference (pp. 22-32). INFITT. ISSN:
2313-4887},
year={2024},
archivePrefix={arXiv},
eprint={2409.14657},
primaryClass={cs.CL}
}
|
sarveswaran2024building
|
arxiv-660588
|
2409.14659
|
Image memorability enhances social media virality
|
<|reference_start|>Image memorability enhances social media virality: Certain social media contents can achieve widespread virality. Prior research has identified that emotion and morality may play a role in this phenomenon. Yet, due to the variability in subjective perception of these factors, they may not consistently predict virality. Recent work in vision and memory has identified a property intrinsic to images - memorability - that can automatically drive human memory. Here, we present evidence that memorability can enhance social media virality by analyzing a naturalistic dataset from Reddit, a widely used social media platform. Specifically, we discover that more memorable images (as judged automatically by neural network ResMem) cause more comments and higher upvotes, and this effect replicates across three different timepoints. To uncover the mechanism of this effect, we employ natural language processing techniques finding that memorable images tend to evoke abstract and less emotional comments. Leveraging an object recognition neural network, we discover that memorable images result in comments directed to information external to the image, which causes them to be more abstract. Further analysis quantifying the representations within the ResMem neural network reveals that images with more semantically distinct features are more likely to be memorable, and consequently, more likely to go viral. These findings reveal that images that are easier to remember become more viral, offering new future directions such as the creation of predictive models of content virality or the application of these insights to enhance the design of impactful visual content.<|reference_end|>
|
arxiv
|
@article{peng2024image,
title={Image memorability enhances social media virality},
author={Shikang Peng (1,2,3) and Wilma A. Bainbridge (3,4) ((1) Department of
Psychology, University of Toronto, (2) Rotman Research Institute, Baycrest
Health Sciences, (3) Department of Psychology, University of Chicago, (4)
Neuroscience Institute, University of Chicago)},
journal={arXiv preprint arXiv:2409.14659},
year={2024},
archivePrefix={arXiv},
eprint={2409.14659},
primaryClass={cs.HC cs.CE cs.SI}
}
|
peng2024image
|
arxiv-660589
|
2409.14660
|
Fourier neural operators for spatiotemporal dynamics in two-dimensional turbulence
|
<|reference_start|>Fourier neural operators for spatiotemporal dynamics in two-dimensional turbulence: High-fidelity direct numerical simulation of turbulent flows for most real-world applications remains an outstanding computational challenge. Several machine learning approaches have recently been proposed to alleviate the computational cost even though they become unstable or unphysical for long time predictions. We identify that Fourier neural operator (FNO)-based models combined with a partial differential equation (PDE) solver can accelerate fluid dynamic simulations and thus address the computational expense of large-scale turbulence simulations. We treat the FNO model on the same footing as a PDE solver and answer important questions about the volume and temporal resolution of data required to build pre-trained models for turbulence. We also discuss the pitfalls of purely data-driven approaches that need to be avoided by the machine learning models to become viable and competitive tools for long time simulations of turbulence.<|reference_end|>
|
arxiv
|
@article{atif2024fourier,
title={Fourier neural operators for spatiotemporal dynamics in two-dimensional
turbulence},
author={Mohammad Atif and Pulkit Dubey and Pratik P. Aghor and Vanessa
Lopez-Marrero and Tao Zhang and Abdullah Sharfuddin and Kwangmin Yu and Fan
Yang and Foluso Ladeinde and Yangang Liu and Meifeng Lin and Lingda Li},
journal={arXiv preprint arXiv:2409.14660},
year={2024},
archivePrefix={arXiv},
eprint={2409.14660},
primaryClass={physics.flu-dyn cs.LG nlin.CD}
}
|
atif2024fourier
|
arxiv-660590
|
2409.14664
|
Direct Judgement Preference Optimization
|
<|reference_start|>Direct Judgement Preference Optimization: Auto-evaluation is crucial for assessing response quality and offering feedback for model development. Recent studies have explored training large language models (LLMs) as generative judges to evaluate and critique other models' outputs. In this work, we investigate the idea of learning from both positive and negative data with preference optimization to enhance the evaluation capabilities of LLM judges across an array of different use cases. We achieve this by employing three approaches to collect the preference pairs for different use cases, each aimed at improving our generative judge from a different perspective. Our comprehensive study over a wide range of benchmarks demonstrates the effectiveness of our method. In particular, our generative judge achieves the best performance on 10 out of 13 benchmarks, outperforming strong baselines like GPT-4o and specialized judge models. Further analysis shows that our judge model robustly counters inherent biases such as position and length bias, flexibly adapts to any evaluation protocol specified by practitioners, and provides helpful language feedback for improving downstream generator models.<|reference_end|>
|
arxiv
|
@article{wang2024direct,
title={Direct Judgement Preference Optimization},
author={Peifeng Wang and Austin Xu and Yilun Zhou and Caiming Xiong and Shafiq Joty},
journal={arXiv preprint arXiv:2409.14664},
year={2024},
archivePrefix={arXiv},
eprint={2409.14664},
primaryClass={cs.CL}
}
|
wang2024direct
|
arxiv-660591
|
2409.14666
|
Semi-supervised Learning For Robust Speech Evaluation
|
<|reference_start|>Semi-supervised Learning For Robust Speech Evaluation: Speech evaluation measures a learner's oral proficiency using automatic models. Corpora for training such models often pose sparsity challenges given that there is often limited scored data from teachers, in addition to the score distribution across proficiency levels often being imbalanced among student cohorts. Automatic scoring is thus not robust when faced with under-represented samples or out-of-distribution samples, which inevitably exist in real-world deployment scenarios. This paper proposes to address such challenges by exploiting semi-supervised pre-training and objective regularization to approximate subjective evaluation criteria. In particular, normalized mutual information is used to quantify the speech characteristics from the learner and the reference. An anchor model is trained using pseudo labels to predict the correctness of pronunciation. An interpolated loss function is proposed to minimize not only the prediction error with respect to ground-truth scores but also the divergence between two probability distributions estimated by the speech evaluation model and the anchor model. Compared to other state-of-the-art methods on a public data-set, this approach not only achieves high performance while evaluating the entire test-set as a whole, but also brings the most evenly distributed prediction error across distinct proficiency levels. Furthermore, empirical results show the model accuracy on out-of-distribution data also compares favorably with competitive baselines.<|reference_end|>
|
arxiv
|
@article{zhang2024semi-supervised,
title={Semi-supervised Learning For Robust Speech Evaluation},
author={Huayun Zhang and Jeremy H.M. Wong and Geyu Lin and Nancy F. Chen},
journal={arXiv preprint arXiv:2409.14666},
year={2024},
archivePrefix={arXiv},
eprint={2409.14666},
primaryClass={cs.AI}
}
|
zhang2024semi-supervised
|
arxiv-660592
|
2409.14670
|
BDF schemes for accelerated gradient flows in projection-free approximation of nonconvex constrained variational minimization
|
<|reference_start|>BDF schemes for accelerated gradient flows in projection-free approximation of nonconvex constrained variational minimization: We present a set of novel accelerated gradient flow methods for solving quadratic energy minimization problems with nonconvex constraints. Our algorithms are built on novel evolutionary equations that combine projection-free approximations for nonconvex constraints with first- and higher-order backward differentiation formulas (BDFs) for artificial temporal derivatives. We focus on examining the asymptotic consistency of constraints achieved by the accelerated gradient flow using the BDF schemes. Both unconditional and conditional high-order estimates for constraint violations in these schemes are established. Numerical results not only validate our theoretical findings but also demonstrate that the proposed methods outperform existing gradient flow approaches in terms of both efficiency and accuracy.<|reference_end|>
|
arxiv
|
@article{dong2024bdf,
title={BDF schemes for accelerated gradient flows in projection-free
approximation of nonconvex constrained variational minimization},
  author={Guozhi Dong and Zikang Gong and Ziqing Xie and Shuo Yang},
journal={arXiv preprint arXiv:2409.14670},
year={2024},
archivePrefix={arXiv},
eprint={2409.14670},
primaryClass={math.NA cs.NA math.OC}
}
|
dong2024bdf
|
arxiv-660593
|
2409.14671
|
FedGCA: Global Consistent Augmentation Based Single-Source Federated Domain Generalization
|
<|reference_start|>FedGCA: Global Consistent Augmentation Based Single-Source Federated Domain Generalization: Federated Domain Generalization (FedDG) aims to train the global model for generalization ability to unseen domains with multi-domain training samples. However, clients in federated learning networks are often confined to a single, non-IID domain due to inherent sampling and temporal limitations. The lack of cross-domain interaction and the in-domain divergence impede the learning of domain-common features and limit the effectiveness of existing FedDG, referred to as the single-source FedDG (sFedDG) problem. To address this, we introduce the Federated Global Consistent Augmentation (FedGCA) method, which incorporates a style-complement module to augment data samples with diverse domain styles. To ensure the effective integration of augmented samples, FedGCA employs both global guided semantic consistency and class consistency, mitigating inconsistencies from local semantics within individual clients and classes across multiple clients. The conducted extensive experiments demonstrate the superiority of FedGCA.<|reference_end|>
|
arxiv
|
@article{liu2024fedgca:,
title={FedGCA: Global Consistent Augmentation Based Single-Source Federated
Domain Generalization},
  author={Yuan Liu and Shu Wang and Zhe Qu and Xingyu Li and Shichao Kan and Jianxin Wang},
journal={arXiv preprint arXiv:2409.14671},
year={2024},
archivePrefix={arXiv},
eprint={2409.14671},
primaryClass={cs.AI cs.CV}
}
|
liu2024fedgca:
|
arxiv-660594
|
2409.14672
|
Speechworthy Instruction-tuned Language Models
|
<|reference_start|>Speechworthy Instruction-tuned Language Models: Current instruction-tuned language models are exclusively trained with textual preference data and thus are often not aligned with the unique requirements of other modalities, such as speech. To better align language models with the speech domain, we explore (i) prompting strategies grounded in radio-industry best practices and (ii) preference learning using a novel speech-based preference dataset of 20K samples, generated with a wide spectrum of prompts that induce varying dimensions of speech-suitability and labeled by annotators who listen to response pairs. Both human and automatic evaluation show that both prompting and preference learning increase the speech-suitability of popular instruction-tuned LLMs. Interestingly, we find that prompting and preference learning can be additive; combining them achieves the best win rates in head-to-head comparison, resulting in responses that are preferred or tied to the base model in 76.2% of comparisons on average. Lastly, we share lexical, syntactical, and qualitative analyses to showcase how each method contributes to improving the speech-suitability of generated responses.<|reference_end|>
|
arxiv
|
@article{cho2024speechworthy,
title={Speechworthy Instruction-tuned Language Models},
  author={Hyundong Cho and Nicolaas Jedema and Leonardo F.R. Ribeiro and Karishma
Sharma and Pedro Szekely and Alessandro Moschitti and Ruben Janssen and Jonathan May},
journal={arXiv preprint arXiv:2409.14672},
year={2024},
archivePrefix={arXiv},
eprint={2409.14672},
primaryClass={cs.AI}
}
|
cho2024speechworthy
|
arxiv-660595
|
2409.14673
|
Instruction Tuning Vs In-Context Learning: Revisiting Large Language Models in Few-Shot Computational Social Science
|
<|reference_start|>Instruction Tuning Vs In-Context Learning: Revisiting Large Language Models in Few-Shot Computational Social Science: Real-world applications of large language models (LLMs) in computational social science (CSS) tasks primarily depend on the effectiveness of instruction tuning (IT) or in-context learning (ICL). While IT has shown highly effective at fine-tuning LLMs for various tasks, ICL offers a rapid alternative for task adaptation by learning from examples without explicit gradient updates. In this paper, we evaluate the classification performance of LLMs using IT versus ICL in few-shot CSS tasks. The experimental results indicate that ICL consistently outperforms IT in most CSS tasks. Additionally, we investigate the relationship between the increasing number of training samples and LLM performance. Our findings show that simply increasing the number of samples without considering their quality does not consistently enhance the performance of LLMs with either ICL or IT and can sometimes even result in a performance decline. Finally, we compare three prompting strategies, demonstrating that ICL is more effective than zero-shot and Chain-of-Thought (CoT). Our research highlights the significant advantages of ICL in handling CSS tasks in few-shot settings and emphasizes the importance of optimizing sample quality and prompting strategies to improve LLM classification performance. The code will be made available.<|reference_end|>
|
arxiv
|
@article{wang2024instruction,
title={Instruction Tuning Vs. In-Context Learning: Revisiting Large Language
Models in Few-Shot Computational Social Science},
  author={Taihang Wang and Xiaoman Xu and Yimin Wang and Ye Jiang},
journal={arXiv preprint arXiv:2409.14673},
year={2024},
archivePrefix={arXiv},
eprint={2409.14673},
primaryClass={cs.CL cs.AI}
}
|
wang2024instruction
|
arxiv-660596
|
2409.14674
|
RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning
|
<|reference_start|>RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning: Developing robust and correctable visuomotor policies for robotic manipulation is challenging due to the lack of self-recovery mechanisms from failures and the limitations of simple language instructions in guiding robot actions. To address these issues, we propose a scalable data generation pipeline that automatically augments expert demonstrations with failure recovery trajectories and fine-grained language annotations for training. We then introduce Rich languAge-guided failure reCovERy (RACER), a supervisor-actor framework, which combines failure recovery data with rich language descriptions to enhance robot control. RACER features a vision-language model (VLM) that acts as an online supervisor, providing detailed language guidance for error correction and task execution, and a language-conditioned visuomotor policy as an actor to predict the next actions. Our experimental results show that RACER outperforms the state-of-the-art Robotic View Transformer (RVT) on RLbench across various evaluation settings, including standard long-horizon tasks, dynamic goal-change tasks and zero-shot unseen tasks, achieving superior performance in both simulated and real world environments. Videos and code are available at: https://rich-language-failure-recovery.github.io.<|reference_end|>
|
arxiv
|
@article{dai2024racer:,
title={RACER: Rich Language-Guided Failure Recovery Policies for Imitation
Learning},
  author={Yinpei Dai and Jayjun Lee and Nima Fazeli and Joyce Chai},
journal={arXiv preprint arXiv:2409.14674},
year={2024},
archivePrefix={arXiv},
eprint={2409.14674},
primaryClass={cs.RO cs.CL cs.CV}
}
|
dai2024racer:
|
arxiv-660597
|
2409.14675
|
Maintaining Strong $r$-Robustness in Reconfigurable Multi-Robot Networks using Control Barrier Functions
|
<|reference_start|>Maintaining Strong $r$-Robustness in Reconfigurable Multi-Robot Networks using Control Barrier Functions: In leader-follower consensus, strong $r$-robustness of the communication graph provides a sufficient condition for followers to achieve consensus in the presence of misbehaving agents. Previous studies have assumed that robots can form and/or switch between predetermined network topologies with known robustness properties. However, robots with distance-based communication models may not be able to achieve these topologies while moving through spatially constrained environments, such as narrow corridors, to complete their objectives. This paper introduces a Control Barrier Function (CBF) that ensures robots maintain strong $r$-robustness of their communication graph above a certain threshold without maintaining any fixed topologies. Our CBF directly addresses robustness, allowing robots to have flexible reconfigurable network structure while navigating to achieve their objectives. The efficacy of our method is tested through various simulation and hardware experiments.<|reference_end|>
|
arxiv
|
@article{lee2024maintaining,
title={Maintaining Strong $r$-Robustness in Reconfigurable Multi-Robot Networks
using Control Barrier Functions},
author={Haejoon Lee and Dimitra Panagou},
journal={arXiv preprint arXiv:2409.14675},
year={2024},
archivePrefix={arXiv},
eprint={2409.14675},
primaryClass={cs.RO cs.SY eess.SY}
}
|
lee2024maintaining
|
arxiv-660598
|
2409.14676
|
TransUKAN:Computing-Efficient Hybrid KAN-Transformer for Enhanced Medical Image Segmentation
|
<|reference_start|>TransUKAN:Computing-Efficient Hybrid KAN-Transformer for Enhanced Medical Image Segmentation: U-Net is currently the most widely used architecture for medical image segmentation. Benefiting from its unique encoder-decoder architecture and skip connections, it can effectively extract features from input images to segment target regions. The commonly used U-Net is typically based on convolutional operations or Transformers, modeling the dependencies between local or global information to accomplish medical image analysis tasks. However, convolutional layers, fully connected layers, and attention mechanisms used in this process introduce a significant number of parameters, often requiring the stacking of network layers to model complex nonlinear relationships, which can impact the training process. To address these issues, we propose TransUKAN. Specifically, we have improved the KAN to reduce memory usage and computational load. On this basis, we explored an effective combination of KAN, Transformer, and U-Net structures. This approach enhances the model's capability to capture nonlinear relationships by introducing only a small number of additional parameters and compensates for the Transformer structure's deficiency in local information extraction. We validated TransUKAN on multiple medical image segmentation tasks. Experimental results demonstrate that TransUKAN achieves excellent performance with significantly reduced parameters. The code will be available at https://github.com/wuyanlin-wyl/TransUKAN.<|reference_end|>
|
arxiv
|
@article{wu2024transukan:computing-efficient,
title={TransUKAN:Computing-Efficient Hybrid KAN-Transformer for Enhanced
Medical Image Segmentation},
  author={Yanlin Wu and Tao Li and Zhihong Wang and Hong Kang and Along He},
journal={arXiv preprint arXiv:2409.14676},
year={2024},
archivePrefix={arXiv},
eprint={2409.14676},
primaryClass={eess.IV cs.CV}
}
|
wu2024transukan:computing-efficient
|
arxiv-660599
|
2409.14677
|
Reflecting Reality: Enabling Diffusion Models to Produce Faithful Mirror Reflections
|
<|reference_start|>Reflecting Reality: Enabling Diffusion Models to Produce Faithful Mirror Reflections: We tackle the problem of generating highly realistic and plausible mirror reflections using diffusion-based generative models. We formulate this problem as an image inpainting task, allowing for more user control over the placement of mirrors during the generation process. To enable this, we create SynMirror, a large-scale dataset of diverse synthetic scenes with objects placed in front of mirrors. SynMirror contains around 198K samples rendered from 66K unique 3D objects, along with their associated depth maps, normal maps and instance-wise segmentation masks, to capture relevant geometric properties of the scene. Using this dataset, we propose a novel depth-conditioned inpainting method called MirrorFusion, which generates high-quality geometrically consistent and photo-realistic mirror reflections given an input image and a mask depicting the mirror region. MirrorFusion outperforms state-of-the-art methods on SynMirror, as demonstrated by extensive quantitative and qualitative analysis. To the best of our knowledge, we are the first to successfully tackle the challenging problem of generating controlled and faithful mirror reflections of an object in a scene using diffusion based models. SynMirror and MirrorFusion open up new avenues for image editing and augmented reality applications for practitioners and researchers alike.<|reference_end|>
|
arxiv
|
@article{dhiman2024reflecting,
title={Reflecting Reality: Enabling Diffusion Models to Produce Faithful Mirror
Reflections},
  author={Ankit Dhiman and Manan Shah and Rishubh Parihar and Yash Bhalgat and
Lokesh R Boregowda and R Venkatesh Babu},
journal={arXiv preprint arXiv:2409.14677},
year={2024},
archivePrefix={arXiv},
eprint={2409.14677},
primaryClass={cs.CV}
}
|
dhiman2024reflecting
|
arxiv-660600
|
2409.14679
|
Quantifying Context Bias in Domain Adaptation for Object Detection
|
<|reference_start|>Quantifying Context Bias in Domain Adaptation for Object Detection: Domain adaptation for object detection (DAOD) aims to transfer a trained model from a source to a target domain. Various DAOD methods exist, some of which minimize context bias between foreground-background associations in various domains. However, no prior work has studied context bias in DAOD by analyzing changes in background features during adaptation and how context bias is represented in different domains. Our research experiment highlights the potential usability of context bias in DAOD. We address the problem by varying activation values over different layers of trained models and by masking the background, both of which impact the number and quality of detections. We then use one synthetic dataset from CARLA and two different versions of real open-source data, Cityscapes and Cityscapes foggy, as separate domains to represent and quantify context bias. We utilize different metrics such as Maximum Mean Discrepancy (MMD) and Maximum Variance Discrepancy (MVD) to find the layer-specific conditional probability estimates of foreground given manipulated background regions for separate domains. We demonstrate through detailed analysis that understanding of the context bias can affect DAOD approach and foc<|reference_end|>
|
arxiv
|
@article{son2024quantifying,
title={Quantifying Context Bias in Domain Adaptation for Object Detection},
author={Hojun Son and Arpan Kusari},
journal={arXiv preprint arXiv:2409.14679},
year={2024},
archivePrefix={arXiv},
eprint={2409.14679},
primaryClass={cs.CV cs.AI cs.RO}
}
|
son2024quantifying
|