corpus_id (string, 7-12 chars) | paper_id (string, 9-16 chars) | title (string, 1-261 chars) | abstract (string, 70-4.02k chars) | source (string, 1 value) | bibtex (string, 208-20.9k chars) | citation_key (string, 6-100 chars)
---|---|---|---|---|---|---
arxiv-663901 | 2410.00322 | Strategic information disclosure with communication constraints and private preferences | <|reference_start|>Strategic information disclosure with communication constraints and private preferences: Social-media platforms are one of the most prevalent communication media today. In such systems, a large amount of content is generated and available to the platform. However, not all content can be transmitted to every possible user at all times. At the other end are the users, who have their own preferences about which content they enjoy, which is often unknown ex ante to the platform. We model the interaction between the platform and the users as a signaling game with asymmetric information, where each user optimizes its preference disclosure policy, and the platform optimizes its information disclosure policy. We provide structural results as well as the existence of policies that constitute Bayesian Nash Equilibria, and necessary optimality conditions used to explicitly compute the optimal policies.<|reference_end|> | arxiv | @article{vasconcelos2024strategic,
title={Strategic information disclosure with communication constraints and
private preferences},
author={Marcos M. Vasconcelos and Odilon C\^amara},
journal={arXiv preprint arXiv:2410.00322},
year={2024},
archivePrefix={arXiv},
eprint={2410.00322},
primaryClass={cs.GT cs.SY eess.SY}
} | vasconcelos2024strategic |
arxiv-663902 | 2410.00323 | Energetic Resilience of Linear Driftless Systems | <|reference_start|>Energetic Resilience of Linear Driftless Systems: When a malfunction causes a control system to lose authority over a subset of its actuators, achieving a task may require spending additional energy in order to compensate for the effect of uncontrolled inputs. To understand this increase in energy, we introduce energetic resilience metrics that quantify the maximal additional energy required to achieve finite-time regulation in linear driftless systems that lose authority over some of their actuators. Using a technical lemma based on the calculus of variations, we first derive optimal control signals and minimum energies to achieve this task in both the nominal and malfunctioning systems. We then obtain a bound on the worst-case energy used by the malfunctioning system, and its exact expression in the special case of loss of authority over one actuator. Further considering this special case, we derive bounds on additive and multiplicative metrics for energetic resilience. A simulation example on a model of an underwater robot demonstrates that these bounds are useful in quantifying the increased energy used by a system suffering a partial loss of control authority.<|reference_end|> | arxiv | @article{padmanabhan2024energetic,
title={Energetic Resilience of Linear Driftless Systems},
author={Ram Padmanabhan, Melkior Ornik},
journal={arXiv preprint arXiv:2410.00323},
year={2024},
archivePrefix={arXiv},
eprint={2410.00323},
primaryClass={math.OC cs.SY eess.SY}
} | padmanabhan2024energetic |
arxiv-663903 | 2410.00324 | Vision Language Models See What You Want but not What You See | <|reference_start|>Vision Language Models See What You Want but not What You See: Knowing others' intentions and taking others' perspectives are two core components of human intelligence that are typically considered to be instantiations of theory-of-mind. Infiltrating machines with these abilities is an important step towards building human-level artificial intelligence. Recently, Li et al. built CogDevelop2K, a data-intensive cognitive experiment benchmark to assess the developmental trajectory of machine intelligence. Here, to investigate intentionality understanding and perspective-taking in Vision Language Models, we leverage the IntentBench and PerspectBench of CogDevelop2K, which contain over 300 cognitive experiments grounded in real-world scenarios and classic cognitive tasks, respectively. Surprisingly, we find VLMs achieving high performance on intentionality understanding but lower performance on perspective-taking. This challenges the common belief in the cognitive science literature that perspective-taking at the corresponding modality is necessary for intentionality understanding.<|reference_end|> | arxiv | @article{gao2024vision,
title={Vision Language Models See What You Want but not What You See},
author={Qingying Gao, Yijiang Li, Haiyun Lyu, Haoran Sun, Dezhi Luo, Hokin
Deng},
journal={arXiv preprint arXiv:2410.00324},
year={2024},
archivePrefix={arXiv},
eprint={2410.00324},
primaryClass={cs.AI}
} | gao2024vision |
arxiv-663904 | 2410.00327 | EnzymeFlow: Generating Reaction-specific Enzyme Catalytic Pockets through Flow Matching and Co-Evolutionary Dynamics | <|reference_start|>EnzymeFlow: Generating Reaction-specific Enzyme Catalytic Pockets through Flow Matching and Co-Evolutionary Dynamics: Enzyme design is a critical area in biotechnology, with applications ranging from drug development to synthetic biology. Traditional methods for enzyme function prediction or protein binding pocket design often fall short in capturing the dynamic and complex nature of enzyme-substrate interactions, particularly in catalytic processes. To address the challenges, we introduce EnzymeFlow, a generative model that employs flow matching with hierarchical pre-training and enzyme-reaction co-evolution to generate catalytic pockets for specific substrates and catalytic reactions. Additionally, we introduce a large-scale, curated, and validated dataset of enzyme-reaction pairs, specifically designed for the catalytic pocket generation task, comprising a total of $328,192$ pairs. By incorporating evolutionary dynamics and reaction-specific adaptations, EnzymeFlow becomes a powerful model for designing enzyme pockets, which is capable of catalyzing a wide range of biochemical reactions. Experiments on the new dataset demonstrate the model's effectiveness in designing high-quality, functional enzyme catalytic pockets, paving the way for advancements in enzyme engineering and synthetic biology. We provide EnzymeFlow code at https://github.com/WillHua127/EnzymeFlow with notebook demonstration at https://github.com/WillHua127/EnzymeFlow/blob/main/enzymeflow_demo.ipynb.<|reference_end|> | arxiv | @article{hua2024enzymeflow:,
title={EnzymeFlow: Generating Reaction-specific Enzyme Catalytic Pockets
through Flow Matching and Co-Evolutionary Dynamics},
author={Chenqing Hua, Yong Liu, Dinghuai Zhang, Odin Zhang, Sitao Luan, Kevin
K. Yang, Guy Wolf, Doina Precup, Shuangjia Zheng},
journal={arXiv preprint arXiv:2410.00327},
year={2024},
archivePrefix={arXiv},
eprint={2410.00327},
primaryClass={cs.LG cs.AI cs.CE q-bio.QM}
} | hua2024enzymeflow: |
arxiv-663905 | 2410.00328 | Tuning Fast Memory Size based on Modeling of Page Migration for Tiered Memory | <|reference_start|>Tuning Fast Memory Size based on Modeling of Page Migration for Tiered Memory: Tiered memory, built upon a combination of fast memory and slow memory, provides a cost-effective solution to meet ever-increasing requirements from emerging applications for large memory capacity. Reducing the size of fast memory is valuable to improve memory utilization in production and reduce production costs because fast memory tends to be expensive. However, deciding the fast memory size is challenging because there is a complex interplay between application characterization and the overhead of page migration used to mitigate the impact of limited fast memory capacity. In this paper, we introduce a system, Tuna, to decide fast memory size based on modeling of page migration. Tuna uses micro-benchmarking to model the impact of page migration on application performance using three metrics. Tuna decides the fast memory size based on offline modeling results and limited information on workload telemetry. Evaluating with common big-memory applications and using 5% as the performance loss target, we show that Tuna in combination with a page management system (TPP) saves fast memory by 8.5% on average (up to 16%). This is in contrast to the 5% saving in fast memory reported by Microsoft Pond for the same workloads (BFS and SSSP) and the same performance loss target.<|reference_end|> | arxiv | @article{chen2024tuning,
title={Tuning Fast Memory Size based on Modeling of Page Migration for Tiered
Memory},
author={Shangye Chen, Jin Huang, Shuangyan Yang, Jie Liu, Huaicheng Li,
Dimitrios Nikolopoulos, Junhee Ryu, Jinho Baek, Kwangsik Shin, Dong Li},
journal={arXiv preprint arXiv:2410.00328},
year={2024},
archivePrefix={arXiv},
eprint={2410.00328},
primaryClass={cs.PF}
} | chen2024tuning |
arxiv-663906 | 2410.00332 | Vision Language Models Know Law of Conservation without Understanding More-or-Less | <|reference_start|>Vision Language Models Know Law of Conservation without Understanding More-or-Less: Conservation is a critical milestone of cognitive development considered to be supported by both the understanding of quantitative concepts and the reversibility of mental operations. To assess whether this critical component of human intelligence has emerged in Vision Language Models, we leverage the ConserveBench from CogDevelop2K, a data-intensive cognitive experiment benchmark for assaying the developmental trajectory of machine intelligence. The battery includes over 350 questions across four dimensions of physical quantities: volume, solid quantity, length, and number. The former two involve only transformational tasks, whereas the latter two also involve non-transformational tasks assessing the understanding of quantitative concepts alone. Surprisingly, we find that while VLMs are generally capable of conserving, they tend to fail at non-transformational tasks whose success is typically considered to be entailed by the ability to conserve. This implies that the law of conservation, at least in concrete domains, may exist without a corresponding conceptual understanding of quantity.<|reference_end|> | arxiv | @article{luo2024vision,
title={Vision Language Models Know Law of Conservation without Understanding
More-or-Less},
author={Dezhi Luo, Haiyun Lyu, Qingying Gao, Haoran Sun, Yijiang Li, Hokin
Deng},
journal={arXiv preprint arXiv:2410.00332},
year={2024},
archivePrefix={arXiv},
eprint={2410.00332},
primaryClass={cs.AI q-bio.NC}
} | luo2024vision |
arxiv-663907 | 2410.00334 | Preserving Generalization of Language models in Few-shot Continual Relation Extraction | <|reference_start|>Preserving Generalization of Language models in Few-shot Continual Relation Extraction: Few-shot Continual Relation Extraction (FCRE) is an emerging and dynamic area of study where models can sequentially integrate knowledge from new relations with limited labeled data while circumventing catastrophic forgetting and preserving prior knowledge from pre-trained backbones. In this work, we introduce a novel method that leverages often-discarded language model heads. By employing these components via a mutual information maximization strategy, our approach helps maintain prior knowledge from the pre-trained backbone and strategically aligns the primary classification head, thereby enhancing model performance. Furthermore, we explore the potential of Large Language Models (LLMs), renowned for their wealth of knowledge, in addressing FCRE challenges. Our comprehensive experimental results underscore the efficacy of the proposed method and offer valuable insights for future work.<|reference_end|> | arxiv | @article{tran2024preserving,
title={Preserving Generalization of Language models in Few-shot Continual
Relation Extraction},
author={Quyen Tran, Nguyen Xuan Thanh, Nguyen Hoang Anh, Nam Le Hai, Trung Le,
Linh Van Ngo, Thien Huu Nguyen},
journal={arXiv preprint arXiv:2410.00334},
year={2024},
archivePrefix={arXiv},
eprint={2410.00334},
primaryClass={cs.CL cs.AI}
} | tran2024preserving |
arxiv-663908 | 2410.00337 | SyntheOcc: Synthesize Geometric-Controlled Street View Images through 3D Semantic MPIs | <|reference_start|>SyntheOcc: Synthesize Geometric-Controlled Street View Images through 3D Semantic MPIs: The advancement of autonomous driving is increasingly reliant on high-quality annotated datasets, especially in the task of 3D occupancy prediction, where the occupancy labels require dense 3D annotation with significant human effort. In this paper, we propose SyntheOcc, which denotes a diffusion model that Synthesizes photorealistic and geometric-controlled images by conditioning on Occupancy labels in driving scenarios. This yields an unlimited amount of diverse, annotated, and controllable datasets for applications like training perception models and simulation. SyntheOcc addresses the critical challenge of how to efficiently encode 3D geometric information as conditional input to a 2D diffusion model. Our approach innovatively incorporates 3D semantic multi-plane images (MPIs) to provide comprehensive and spatially aligned 3D scene descriptions for conditioning. As a result, SyntheOcc can generate photorealistic multi-view images and videos that faithfully align with the given geometric labels (semantics in 3D voxel space). Extensive qualitative and quantitative evaluations of SyntheOcc on the nuScenes dataset prove its effectiveness in generating controllable occupancy datasets that serve as effective data augmentation for perception models.<|reference_end|> | arxiv | @article{li2024syntheocc:,
title={SyntheOcc: Synthesize Geometric-Controlled Street View Images through 3D
Semantic MPIs},
author={Leheng Li, Weichao Qiu, Yingjie Cai, Xu Yan, Qing Lian, Bingbing Liu,
Ying-Cong Chen},
journal={arXiv preprint arXiv:2410.00337},
year={2024},
archivePrefix={arXiv},
eprint={2410.00337},
primaryClass={cs.CV}
} | li2024syntheocc: |
arxiv-663909 | 2410.00340 | Sparse Attention Decomposition Applied to Circuit Tracing | <|reference_start|>Sparse Attention Decomposition Applied to Circuit Tracing: Many papers have shown that attention heads work in conjunction with each other to perform complex tasks. It's frequently assumed that communication between attention heads is via the addition of specific features to token residuals. In this work we seek to isolate and identify the features used to effect communication and coordination among attention heads in GPT-2 small. Our key leverage on the problem is to show that these features are very often sparsely coded in the singular vectors of attention head matrices. We characterize the dimensionality and occurrence of these signals across the attention heads in GPT-2 small when used for the Indirect Object Identification (IOI) task. The sparse encoding of signals, as provided by attention head singular vectors, allows for efficient separation of signals from the residual background and straightforward identification of communication paths between attention heads. We explore the effectiveness of this approach by tracing portions of the circuits used in the IOI task. Our traces reveal considerable detail not present in previous studies, shedding light on the nature of redundant paths present in GPT-2. And our traces go beyond previous work by identifying features used to communicate between attention heads when performing IOI.<|reference_end|> | arxiv | @article{franco2024sparse,
title={Sparse Attention Decomposition Applied to Circuit Tracing},
author={Gabriel Franco, Mark Crovella},
journal={arXiv preprint arXiv:2410.00340},
year={2024},
archivePrefix={arXiv},
eprint={2410.00340},
primaryClass={cs.LG cs.AI cs.CL}
} | franco2024sparse |
arxiv-663910 | 2410.00343 | RRT-CBF Based Motion Planning | <|reference_start|>RRT-CBF Based Motion Planning: Control barrier functions (CBF) have recently been widely explored to enforce safety-critical constraints on nonlinear systems. Many researchers have incorporated control barrier functions into path planning algorithms to find a safe path, but these methods involve huge computational complexity or unidirectional randomness, resulting in increased run-time. When safety constraints are satisfied, search efficiency and search space are sacrificed. This paper combines a novel motion planning approach using the rapidly exploring random trees (RRT) algorithm with model predictive control (MPC) to enforce the CBF with dynamically updated constraints, yielding safety-critical trajectories that enable robots to avoid collisions with both static and dynamic circular obstacles as well as other moving robots, while accounting for model uncertainty in the process. In addition, this paper presents the first application of CBF-RRT to a robot arm model for a nonlinear system.<|reference_end|> | arxiv | @article{liu2024rrt-cbf,
title={RRT-CBF Based Motion Planning},
author={Leonas Liu, Yingfan Zhang, Larry Zhang and Mehbi Kermanshabi},
journal={arXiv preprint arXiv:2410.00343},
year={2024},
archivePrefix={arXiv},
eprint={2410.00343},
primaryClass={cs.RO cs.SY eess.SY}
} | liu2024rrt-cbf |
arxiv-663911 | 2410.00344 | Integrating Text-to-Music Models with Language Models: Composing Long Structured Music Pieces | <|reference_start|>Integrating Text-to-Music Models with Language Models: Composing Long Structured Music Pieces: Recent music generation methods based on transformers have a context window of up to a minute. The music generated by these methods is largely unstructured beyond the context window. With a longer context window, learning long-scale structures from musical data is a prohibitively challenging problem. This paper proposes integrating a text-to-music model with a large language model to generate music with form. The paper discusses solutions to the challenges of such integration. The experimental results show that the proposed method can generate 2.5-minute-long music that is highly structured, strongly organized, and cohesive.<|reference_end|> | arxiv | @article{atassi2024integrating,
title={Integrating Text-to-Music Models with Language Models: Composing Long
Structured Music Pieces},
author={Lilac Atassi},
journal={arXiv preprint arXiv:2410.00344},
year={2024},
archivePrefix={arXiv},
eprint={2410.00344},
primaryClass={cs.SD cs.LG eess.AS}
} | atassi2024integrating |
arxiv-663912 | 2410.00345 | A Taxonomy of Loss Functions for Stochastic Optimal Control | <|reference_start|>A Taxonomy of Loss Functions for Stochastic Optimal Control: Stochastic optimal control (SOC) aims to direct the behavior of noisy systems and has widespread applications in science, engineering, and artificial intelligence. In particular, reward fine-tuning of diffusion and flow matching models and sampling from unnormalized densities can be recast as SOC problems. A recent work has introduced Adjoint Matching (Domingo-Enrich et al., 2024), a loss function for SOC problems that vastly outperforms existing loss functions in the reward fine-tuning setup. The goal of this work is to clarify the connections between all the existing (and some new) SOC loss functions. Namely, we show that SOC loss functions can be grouped into classes that share the same gradient in expectation, which means that their optimization landscape is the same; they only differ in their gradient variance. We perform simple SOC experiments to understand the strengths and weaknesses of different loss functions.<|reference_end|> | arxiv | @article{domingo-enrich2024a,
title={A Taxonomy of Loss Functions for Stochastic Optimal Control},
author={Carles Domingo-Enrich},
journal={arXiv preprint arXiv:2410.00345},
year={2024},
archivePrefix={arXiv},
eprint={2410.00345},
primaryClass={cs.LG math.OC stat.ML}
} | domingo-enrich2024a |
arxiv-663913 | 2410.00346 | Augmenting team diversity and performance by enabling agency and fairness criteria in recommendation algorithms | <|reference_start|>Augmenting team diversity and performance by enabling agency and fairness criteria in recommendation algorithms: In this study, we examined the impact of recommendation systems' algorithms on individuals' collaborator choices when forming teams. Different algorithmic designs can lead individuals to select one collaborator over another, thereby shaping their teams' composition, dynamics, and performance. To test this hypothesis, we conducted a 2 x 2 between-subject laboratory experiment with 332 participants who assembled teams using a recommendation system. We tested four algorithms that controlled the participants' agency to choose collaborators and the inclusion of fairness criteria. Our results show that participants assigned by an algorithm to work in highly diverse teams struggled to work with different and unfamiliar individuals, while participants enabled by an algorithm to choose collaborators without fairness criteria formed homogenous teams without the necessary skills. In contrast, combining users' agency and fairness criteria in an algorithm enhanced teams' performance and composition. This study breaks new ground by providing insights into how algorithms can augment team formation.<|reference_end|> | arxiv | @article{gomez-zara2024augmenting,
title={Augmenting team diversity and performance by enabling agency and
fairness criteria in recommendation algorithms},
author={Diego Gomez-Zara, Victoria Kam, Charles Chiang, Leslie DeChurch,
Noshir Contractor},
journal={arXiv preprint arXiv:2410.00346},
year={2024},
archivePrefix={arXiv},
eprint={2410.00346},
primaryClass={cs.HC}
} | gomez-zara2024augmenting |
arxiv-663914 | 2410.00348 | Revisiting the Role of Texture in 3D Person Re-identification | <|reference_start|>Revisiting the Role of Texture in 3D Person Re-identification: This study introduces a new framework for 3D person re-identification (re-ID) that leverages readily available high-resolution texture data in 3D reconstruction to improve the performance and explainability of the person re-ID task. We propose a method to emphasize texture in 3D person re-ID models by incorporating UVTexture mapping, which better differentiates human subjects. Our approach uniquely combines UVTexture and its heatmaps with 3D models to visualize and explain the person re-ID process. In particular, the visualization and explanation are achieved through activation maps and attribute-based attention maps, which highlight the important regions and features contributing to the person re-ID decision. Our contributions include: (1) a novel technique for emphasizing texture in 3D models using UVTexture processing, (2) an innovative method for explicating person re-ID matches through a combination of 3D models and UVTexture mapping, and (3) achieving state-of-the-art performance in 3D person re-ID. We ensure the reproducibility of our results by making all data, codes, and models publicly available.<|reference_end|> | arxiv | @article{nguyen2024revisiting,
title={Revisiting the Role of Texture in 3D Person Re-identification},
author={Huy Nguyen, Kien Nguyen, Akila Pemasiri, Sridha Sridharan and Clinton
Fookes},
journal={arXiv preprint arXiv:2410.00348},
year={2024},
archivePrefix={arXiv},
eprint={2410.00348},
primaryClass={cs.CV}
} | nguyen2024revisiting |
arxiv-663915 | 2410.00349 | Data Augmentation for 3DMM-based Arousal-Valence Prediction for HRI | <|reference_start|>Data Augmentation for 3DMM-based Arousal-Valence Prediction for HRI: Humans use multiple communication channels to interact with each other. For instance, body gestures or facial expressions are commonly used to convey an intent. The use of such non-verbal cues has motivated the development of prediction models. One such approach is predicting arousal and valence (AV) from facial expressions. However, making these models accurate for human-robot interaction (HRI) settings is challenging as it requires handling multiple subjects, challenging conditions, and a wide range of facial expressions. In this paper, we propose a data augmentation (DA) technique to improve the performance of AV predictors using 3D morphable models (3DMM). We then utilize this approach in an HRI setting with a mediator robot and a group of three humans. Our augmentation method creates synthetic sequences for underrepresented values in the AV space of the SEWA dataset, which is the most comprehensive dataset with continuous AV labels. Results show that using our DA method improves the accuracy and robustness of AV prediction in real-time applications. The accuracy of our models on the SEWA dataset is 0.793 for arousal and valence.<|reference_end|> | arxiv | @article{cruz2024data,
title={Data Augmentation for 3DMM-based Arousal-Valence Prediction for HRI},
author={Christian Arzate Cruz, Yotam Sechayk, Takeo Igarashi and Randy Gomez},
journal={arXiv preprint arXiv:2410.00349},
year={2024},
archivePrefix={arXiv},
eprint={2410.00349},
primaryClass={cs.RO}
} | cruz2024data |
arxiv-663916 | 2410.00350 | Efficient Training of Large Vision Models via Advanced Automated Progressive Learning | <|reference_start|>Efficient Training of Large Vision Models via Advanced Automated Progressive Learning: The rapid advancements in Large Vision Models (LVMs), such as Vision Transformers (ViTs) and diffusion models, have led to an increasing demand for computational resources, resulting in substantial financial and environmental costs. This growing challenge highlights the necessity of developing efficient training methods for LVMs. Progressive learning, a training strategy in which model capacity gradually increases during training, has shown potential in addressing these challenges. In this paper, we present an advanced automated progressive learning (AutoProg) framework for efficient training of LVMs. We begin by focusing on the pre-training of LVMs, using ViTs as a case study, and propose AutoProg-One, an AutoProg scheme featuring momentum growth (MoGrow) and a one-shot growth schedule search. Beyond pre-training, we extend our approach to tackle transfer learning and fine-tuning of LVMs. We expand the scope of AutoProg to cover a wider range of LVMs, including diffusion models. First, we introduce AutoProg-Zero, by enhancing the AutoProg framework with a novel zero-shot unfreezing schedule search, eliminating the need for one-shot supernet training. Second, we introduce a novel Unique Stage Identifier (SID) scheme to bridge the gap during network growth. These innovations, integrated with the core principles of AutoProg, offer a comprehensive solution for efficient training across various LVM scenarios. Extensive experiments show that AutoProg accelerates ViT pre-training by up to 1.85x on ImageNet and accelerates fine-tuning of diffusion models by up to 2.86x, with comparable or even higher performance. This work provides a robust and scalable approach to efficient training of LVMs, with potential applications in a wide range of vision tasks. Code: https://github.com/changlin31/AutoProg-Zero<|reference_end|> | arxiv | @article{li2024efficient,
title={Efficient Training of Large Vision Models via Advanced Automated
Progressive Learning},
author={Changlin Li, Jiawei Zhang, Sihao Lin, Zongxin Yang, Junwei Liang,
Xiaodan Liang, Xiaojun Chang},
journal={arXiv preprint arXiv:2410.00350},
year={2024},
archivePrefix={arXiv},
eprint={2410.00350},
primaryClass={cs.CV cs.AI}
} | li2024efficient |
arxiv-663917 | 2410.00352 | Interleaved One-Shot SPS Performance under Smart DoS Attacks in C-V2X Networks | <|reference_start|>Interleaved One-Shot SPS Performance under Smart DoS Attacks in C-V2X Networks: This paper evaluates the performance of the one-shot Semi-Persistent Scheduling (SPS) mechanism in Cellular Vehicle-to-Everything (C-V2X) networks under Denial-of-Service (DoS) smart attack scenarios. The study focuses on the impact of these attacks on key performance metrics, including Packet Delivery Ratio (PDR), Inter-Packet Gap (IPG), and Age of Information (AoI). Through extensive Monte Carlo simulations, we demonstrate that the one-shot mechanism significantly enhances network resilience by mitigating the adverse effects of smart DoS attacks. The findings reveal that while the one-shot mechanism improves the PDR and reduces the IPG and AoI tail values, its effectiveness diminishes slightly in high-density vehicular environments. Nevertheless, the one-shot mechanism proves to be a robust solution for maintaining the stability and reliability of C-V2X communications under adversarial conditions.<|reference_end|> | arxiv | @article{sun2024interleaved,
title={Interleaved One-Shot SPS Performance under Smart DoS Attacks in C-V2X
Networks},
author={Zepei Sun, Randall Berry},
journal={arXiv preprint arXiv:2410.00352},
year={2024},
archivePrefix={arXiv},
eprint={2410.00352},
primaryClass={eess.SY cs.SY}
} | sun2024interleaved |
arxiv-663918 | 2410.00354 | Hierarchical Organization Simulacra in the Investment Sector | <|reference_start|>Hierarchical Organization Simulacra in the Investment Sector: This paper explores designing artificial organizations with professional behavior in investments using a multi-agent simulation. The method mimics hierarchical decision-making in investment firms, using news articles to inform decisions. A large-scale study analyzing over 115,000 news articles of 300 companies across 15 years compared this approach against professional traders' decisions. Results show that hierarchical simulations align closely with professional choices, both in frequency and profitability. However, the study also reveals biases in decision-making, where changes in prompt wording and perceived agent seniority significantly influence outcomes. This highlights both the potential and limitations of large language models in replicating professional financial decision-making.<|reference_end|> | arxiv | @article{chen2024hierarchical,
title={Hierarchical Organization Simulacra in the Investment Sector},
author={Chung-Chi Chen, Hiroya Takamura, Ichiro Kobayashi, Yusuke Miyao},
journal={arXiv preprint arXiv:2410.00354},
year={2024},
archivePrefix={arXiv},
eprint={2410.00354},
primaryClass={cs.CL}
} | chen2024hierarchical |
arxiv-663919 | 2410.00355 | Hammerstein equations for sparse random matrices | <|reference_start|>Hammerstein equations for sparse random matrices: Finding eigenvalue distributions for a number of sparse random matrix ensembles can be reduced to solving nonlinear integral equations of the Hammerstein type. While a systematic mathematical theory of such equations exists, it has not been previously applied to sparse matrix problems. We close this gap in the literature by showing how one can employ numerical solutions of Hammerstein equations to accurately recover the spectra of adjacency matrices and Laplacians of random graphs. While our treatment focuses on random graphs for concreteness, the methodology has broad applications to more general sparse random matrices.<|reference_end|> | arxiv | @article{akara-pipattana2024hammerstein,
title={Hammerstein equations for sparse random matrices},
author={Pawat Akara-pipattana, Oleg Evnin},
journal={arXiv preprint arXiv:2410.00355},
year={2024},
archivePrefix={arXiv},
eprint={2410.00355},
primaryClass={cond-mat.dis-nn cs.NA math-ph math.FA math.MP math.NA math.PR}
} | akara-pipattana2024hammerstein |
arxiv-663920 | 2410.00356 | A Digital Twin Framework for Physical-Virtual Integration in V2X-Enabled Connected Vehicle Corridors | <|reference_start|>A Digital Twin Framework for Physical-Virtual Integration in V2X-Enabled Connected Vehicle Corridors: Transportation Cyber-Physical Systems (T-CPS) are critical in improving traffic safety, reliability, and sustainability by integrating computing, communication, and control in transportation systems. The connected vehicle corridor is at the forefront of this transformation, where Cellular Vehicle-to-Everything (C-V2X) technology facilitates real-time data exchange between infrastructure, vehicles, and road users. However, challenges remain in processing and synchronizing the vast V2X data from vehicles and roadside units, particularly when ensuring scalability, data integrity, and operational resilience. This paper presents a digital twin framework for T-CPS, developed from a real-world connected vehicle corridor to address these challenges. By leveraging C-V2X technology and real-time data from infrastructure, vehicles, and road users, the digital twin accurately replicates vehicle behaviors, signal phases, and traffic patterns within the CARLA simulation environment. This framework demonstrates high fidelity between physical and digital systems and ensures robust synchronization of vehicle trajectories and signal phases through extensive experiments. Moreover, the digital twin's scalable and redundant architecture enhances data integrity, making it capable of supporting future large-scale C-V2X deployments. The digital twin is a vital tool in T-CPS, enabling real-time traffic monitoring, prediction, and optimization to enhance the reliability and safety of transportation systems.<|reference_end|> | arxiv | @article{wu2024a,
title={A Digital Twin Framework for Physical-Virtual Integration in V2X-Enabled
Connected Vehicle Corridors},
author={Keshu Wu, Pei Li, Yang Cheng, Steven T. Parker, Bin Ran, David A.
Noyce, Xinyue Ye},
journal={arXiv preprint arXiv:2410.00356},
year={2024},
archivePrefix={arXiv},
eprint={2410.00356},
primaryClass={cs.RO cs.ET cs.SY eess.SY}
} | wu2024a |
arxiv-663921 | 2410.00357 | Neural Scaling Laws of Deep ReLU and Deep Operator Network: A Theoretical Study | <|reference_start|>Neural Scaling Laws of Deep ReLU and Deep Operator Network: A Theoretical Study: Neural scaling laws play a pivotal role in the performance of deep neural networks and have been observed in a wide range of tasks. However, a complete theoretical framework for understanding these scaling laws remains underdeveloped. In this paper, we explore the neural scaling laws for deep operator networks, which involve learning mappings between function spaces, with a focus on the Chen and Chen style architecture. These approaches, which include the popular Deep Operator Network (DeepONet), approximate the output functions using a linear combination of learnable basis functions and coefficients that depend on the input functions. We establish a theoretical framework to quantify the neural scaling laws by analyzing its approximation and generalization errors. We articulate the relationship between the approximation and generalization errors of deep operator networks and key factors such as network model size and training data size. Moreover, we address cases where input functions exhibit low-dimensional structures, allowing us to derive tighter error bounds. These results also hold for deep ReLU networks and other similar structures. Our results offer a partial explanation of the neural scaling laws in operator learning and provide a theoretical foundation for their applications.<|reference_end|> | arxiv | @article{liu2024neural,
title={Neural Scaling Laws of Deep ReLU and Deep Operator Network: A
Theoretical Study},
author={Hao Liu, Zecheng Zhang, Wenjing Liao, Hayden Schaeffer},
journal={arXiv preprint arXiv:2410.00357},
year={2024},
archivePrefix={arXiv},
eprint={2410.00357},
primaryClass={cs.LG stat.ML}
} | liu2024neural |
arxiv-663922 | 2410.00358 | AARK: An Open Toolkit for Autonomous Racing Research | <|reference_start|>AARK: An Open Toolkit for Autonomous Racing Research: Autonomous racing demands safe control of vehicles at their physical limits for extended periods of time, providing insights into advanced vehicle safety systems which increasingly rely on intervention provided by vehicle autonomy. Participation in this field carries with it a high barrier to entry. Physical platforms and their associated sensor suites require large capital outlays before any demonstrable progress can be made. Simulators allow researchers to develop soft autonomous systems without purchasing a platform. However, currently available simulators lack visual and dynamic fidelity, can still be expensive to buy, lack customisation, and are difficult to use. AARK provides three packages, ACI, ACDG, and ACMPC. These packages enable research into autonomous control systems in the demanding environment of racing to bring more people into the field and improve reproducibility: ACI provides researchers with a computer vision-friendly interface to Assetto Corsa for convenient comparison and evaluation of autonomous control solutions; ACDG enables generation of depth, normal and semantic segmentation data for training computer vision models to use in perception systems; and ACMPC gives newcomers to the field a modular full-stack autonomous control solution, capable of controlling vehicles to build from. AARK aims to unify and democratise research into a field critical to providing safer roads and trusted autonomous systems.<|reference_end|> | arxiv | @article{bockman2024aark:,
title={AARK: An Open Toolkit for Autonomous Racing Research},
author={James Bockman, Matthew Howe, Adrian Orenstein and Feras Dayoub},
journal={arXiv preprint arXiv:2410.00358},
year={2024},
archivePrefix={arXiv},
eprint={2410.00358},
primaryClass={cs.RO cs.LG cs.SY eess.SY}
} | bockman2024aark: |
arxiv-663923 | 2410.00359 | Self-controller: Controlling LLMs with Multi-round Step-by-step Self-awareness | <|reference_start|>Self-controller: Controlling LLMs with Multi-round Step-by-step Self-awareness: The applications of large language models (LLMs) have been widely spread across all domains. However, the basic abilities such as the controllability of LLMs are still limited. To address this, we propose "Self-controller", a novel agentic framework bringing self-awareness into LLMs' reasoning logic. The core idea of this work is to maintain states based on the LLM's response, letting the LLM become self-aware of current status and think step by step in a multi-round chain-of-thought paradigm. Our experiment on the state of textual length has shown the controllability and effectiveness of the Self-controller. We further implement a binary search algorithm to accelerate the generation process based on the linearity and monotonicity of the textual length state. Another advantage of the Self-controller comes with DeepSeek's Context Caching technology, which significantly saves computational token consumption when a cluster of conversations shares the same prefix of context. Theoretically, we prove that in this scenario the extra time complexity is $O(c \log n)$. Results of the back-of-the-envelope estimation suggest that the token consumption of our method is no more than twice as much as that of the trivial single-round generation. Furthermore, our ablation study on word constraints demonstrates the Self-controller's consistent controllability across all foundation models.<|reference_end|> | arxiv | @article{peng2024self-controller:,
title={Self-controller: Controlling LLMs with Multi-round Step-by-step
Self-awareness},
author={Xiao Peng, Xufan Geng},
journal={arXiv preprint arXiv:2410.00359},
year={2024},
archivePrefix={arXiv},
eprint={2410.00359},
primaryClass={cs.CL cs.AI}
} | peng2024self-controller: |
arxiv-663924 | 2410.00360 | TFCT-I2P: Three stream fusion network with color aware transformer for image-to-point cloud registration | <|reference_start|>TFCT-I2P: Three stream fusion network with color aware transformer for image-to-point cloud registration: Along with the advancements in artificial intelligence technologies, image-to-point-cloud registration (I2P) techniques have made significant strides. Nevertheless, the dimensional differences between the features of point clouds (three-dimensional) and images (two-dimensional) continue to pose considerable challenges to their development. The primary challenge resides in the inability to leverage the features of one modality to augment those of another, thereby complicating the alignment of features within the latent space. To address this challenge, we propose an image-to-point-cloud method named TFCT-I2P. Initially, we introduce a Three-Stream Fusion Network (TFN), which integrates color information from images with structural information from point clouds, facilitating the alignment of features from both modalities. Subsequently, to effectively mitigate patch-level misalignments introduced by the inclusion of color information, we design a Color-Aware Transformer (CAT). Finally, we conduct extensive experiments on 7Scenes, RGB-D Scenes V2, ScanNet V2, and a self-collected dataset. The results demonstrate that TFCT-I2P surpasses state-of-the-art methods by 1.5% in Inlier Ratio, 0.4% in Feature Matching Recall, and 5.4% in Registration Recall. Therefore, we believe that the proposed TFCT-I2P contributes to the advancement of I2P registration.<|reference_end|> | arxiv | @article{peng2024tfct-i2p:,
title={TFCT-I2P: Three stream fusion network with color aware transformer for
image-to-point cloud registration},
author={Muyao Peng and Pei An and Zichen Wan and You Yang and Qiong Liu},
journal={arXiv preprint arXiv:2410.00360},
year={2024},
archivePrefix={arXiv},
eprint={2410.00360},
primaryClass={cs.CV}
} | peng2024tfct-i2p: |
arxiv-663925 | 2410.00361 | PclGPT: A Large Language Model for Patronizing and Condescending Language Detection | <|reference_start|>PclGPT: A Large Language Model for Patronizing and Condescending Language Detection: Disclaimer: Samples in this paper may be harmful and cause discomfort! Patronizing and condescending language (PCL) is a form of speech directed at vulnerable groups. As an essential branch of toxic language, this type of language exacerbates conflicts and confrontations among Internet communities and detrimentally impacts disadvantaged groups. Traditional pre-trained language models (PLMs) perform poorly in detecting PCL due to its implicit toxicity traits like hypocrisy and false sympathy. With the rise of large language models (LLMs), we can harness their rich emotional semantics to establish a paradigm for exploring implicit toxicity. In this paper, we introduce PclGPT, a comprehensive LLM benchmark designed specifically for PCL. We collect, annotate, and integrate the Pcl-PT/SFT dataset, and then develop a bilingual PclGPT-EN/CN model group through a comprehensive pre-training and supervised fine-tuning staircase process to facilitate implicit toxic detection. Group detection results and fine-grained detection from PclGPT and other models reveal significant variations in the degree of bias in PCL towards different vulnerable groups, necessitating increased societal attention to protect them.<|reference_end|> | arxiv | @article{wang2024pclgpt:,
title={PclGPT: A Large Language Model for Patronizing and Condescending
Language Detection},
author={Hongbo Wang and Mingda Li and Junyu Lu and Hebin Xia and Liang Yang
and Bo Xu and Ruizhu Liu and Hongfei Lin},
journal={arXiv preprint arXiv:2410.00361},
year={2024},
archivePrefix={arXiv},
eprint={2410.00361},
primaryClass={cs.CL}
} | wang2024pclgpt: |
arxiv-663926 | 2410.00362 | FedPT: Federated Proxy-Tuning of Large Language Models on Resource-Constrained Edge Devices | <|reference_start|>FedPT: Federated Proxy-Tuning of Large Language Models on Resource-Constrained Edge Devices: Despite demonstrating superior performance across a variety of linguistic tasks, pre-trained large language models (LMs) often require fine-tuning on specific datasets to effectively address different downstream tasks. However, fine-tuning these LMs for downstream tasks necessitates collecting data from individuals, which raises significant privacy concerns. Federated learning (FL) has emerged as the de facto solution, enabling collaborative model training without sharing raw data. While promising, federated fine-tuning of large LMs faces significant challenges, including restricted access to model parameters and high computation, communication, and memory overhead. To address these challenges, this paper introduces \textbf{Fed}erated \textbf{P}roxy-\textbf{T}uning (FedPT), a novel framework for federated fine-tuning of black-box large LMs, requiring access only to their predictions over the output vocabulary instead of their parameters. Specifically, devices in FedPT first collaboratively tune a smaller LM, and then the server combines the knowledge learned by the tuned small LM with the knowledge learned by the larger pre-trained LM to construct a large proxy-tuned LM that can reach the performance of directly tuned large LMs. The experimental results demonstrate that FedPT can significantly reduce computation, communication, and memory overhead while maintaining competitive performance compared to directly federated fine-tuning of large LMs. FedPT offers a promising solution for efficient, privacy-preserving fine-tuning of large LMs on resource-constrained devices, broadening the accessibility and applicability of state-of-the-art large LMs.<|reference_end|> | arxiv | @article{gao2024fedpt:,
title={FedPT: Federated Proxy-Tuning of Large Language Models on
Resource-Constrained Edge Devices},
author={Zhidong Gao, Yu Zhang, Zhenxiao Zhang, Yanmin Gong, Yuanxiong Guo},
journal={arXiv preprint arXiv:2410.00362},
year={2024},
archivePrefix={arXiv},
eprint={2410.00362},
primaryClass={cs.CL cs.AI}
} | gao2024fedpt: |
arxiv-663927 | 2410.00363 | Unleashing the Potentials of Likelihood Composition for Multi-modal Language Models | <|reference_start|>Unleashing the Potentials of Likelihood Composition for Multi-modal Language Models: Model fusing has always been an important topic, especially in an era where large language models (LLM) and multi-modal language models (MLM) with different architectures, parameter sizes and training pipelines, are being created all the time. In this work, we propose a post-hoc framework, aiming at fusing heterogeneous models off-the-shelf, which we call \textit{likelihood composition}, and the basic idea is to compose multiple models' likelihood distribution when doing a multi-choice visual-question-answering task. Here the core concept, \textit{likelihood}, is actually the log-probability of the candidate answer. In \textit{likelihood composition}, we introduce some basic operations: \textit{debias}, \textit{highlight}, \textit{majority-vote} and \textit{ensemble}. By combining (composing) these basic elements, we get the mixed composition methods: \textit{mix-composition}. Through conducting comprehensive experiments on 9 VQA datasets and 10 MLMs, we prove the effectiveness of \textit{mix-composition} compared with simple \textit{ensemble} or \textit{majority-vote} methods. In this framework, people can propose new basic composition methods and combine them to get the new mixed composition methods. We hope our proposed \textit{likelihood composition} can provide a new perspective of fusing heterogeneous models and inspire the exploration under this framework.<|reference_end|> | arxiv | @article{zhao2024unleashing,
title={Unleashing the Potentials of Likelihood Composition for Multi-modal
Language Models},
author={Shitian Zhao, Renrui Zhang, Xu Luo, Yan Wang, Shanghang Zhang, Peng
Gao},
journal={arXiv preprint arXiv:2410.00363},
year={2024},
archivePrefix={arXiv},
eprint={2410.00363},
primaryClass={cs.CL}
} | zhao2024unleashing |
arxiv-663928 | 2410.00365 | Guided Statistical Workflows with Interactive Explanations and Assumption Checking | <|reference_start|>Guided Statistical Workflows with Interactive Explanations and Assumption Checking: Statistical practices such as building regression models or running hypothesis tests rely on following rigorous procedures of steps and verifying assumptions on data to produce valid results. However, common statistical tools do not verify users' decision choices and provide low-level statistical functions without instructions on the whole analysis practice. Users can easily misuse analysis methods, potentially decreasing the validity of results. To address this problem, we introduce GuidedStats, an interactive interface within computational notebooks that encapsulates guidance, models, visualization, and exportable results into interactive workflows. It breaks down typical analysis processes, such as linear regression and two-sample T-tests, into interactive steps supplemented with automatic visualizations and explanations for step-wise evaluation. Users can iterate on input choices to refine their models, while recommended actions and exports allow the user to continue their analysis in code. Case studies show how GuidedStats offers valuable instructions for conducting fluid statistical analyses while finding possible assumption violations in the underlying data, supporting flexible and accurate statistical analyses.<|reference_end|> | arxiv | @article{zhang2024guided,
title={Guided Statistical Workflows with Interactive Explanations and
Assumption Checking},
author={Yuqi Zhang, Adam Perer, Will Epperson},
journal={arXiv preprint arXiv:2410.00365},
year={2024},
archivePrefix={arXiv},
eprint={2410.00365},
primaryClass={cs.HC}
} | zhang2024guided |
arxiv-663929 | 2410.00366 | Easydiagnos: a framework for accurate feature selection for automatic diagnosis in smart healthcare | <|reference_start|>Easydiagnos: a framework for accurate feature selection for automatic diagnosis in smart healthcare: The rapid advancements in artificial intelligence (AI) have revolutionized smart healthcare, driving innovations in wearable technologies, continuous monitoring devices, and intelligent diagnostic systems. However, security, explainability, robustness, and performance optimization challenges remain critical barriers to widespread adoption in clinical environments. This research presents an innovative algorithmic method using the Adaptive Feature Evaluator (AFE) algorithm to improve feature selection in healthcare datasets and overcome these challenges. By integrating Genetic Algorithms (GA), Explainable Artificial Intelligence (XAI), and Permutation Combination Techniques (PCT), the AFE algorithm optimizes Clinical Decision Support Systems (CDSS), thereby enhancing predictive accuracy and interpretability. The proposed method is validated across three diverse healthcare datasets using six distinct machine learning algorithms, demonstrating its robustness and superiority over conventional feature selection techniques. The results underscore the transformative potential of AFE in smart healthcare, enabling personalized and transparent patient care. Notably, the AFE algorithm, when combined with a Multi-layer Perceptron (MLP), achieved an accuracy of up to 98.5%, highlighting its capability to improve clinical decision-making processes in real-world healthcare applications.<|reference_end|> | arxiv | @article{maji2024easydiagnos:,
title={Easydiagnos: a framework for accurate feature selection for automatic
diagnosis in smart healthcare},
author={Prasenjit Maji, Amit Kumar Mondal, Hemanta Kumar Mondal, Saraju P.
Mohanty},
journal={arXiv preprint arXiv:2410.00366},
year={2024},
archivePrefix={arXiv},
eprint={2410.00366},
primaryClass={cs.LG cs.AI}
} | maji2024easydiagnos: |
arxiv-663930 | 2410.00367 | ROK Defense M&S in the Age of Hyperscale AI: Concepts, Challenges, and Future Directions | <|reference_start|>ROK Defense M&S in the Age of Hyperscale AI: Concepts, Challenges, and Future Directions: Integrating hyperscale AI into national defense modeling and simulation (M&S) is crucial for enhancing strategic and operational capabilities. We explore how hyperscale AI can revolutionize defense M&S by providing unprecedented accuracy, speed, and the ability to simulate complex scenarios. Countries such as the United States and China are at the forefront of adopting these technologies and are experiencing varying degrees of success. Maximizing the potential of hyperscale AI necessitates addressing critical challenges, such as closed networks, long-tail data, complex decision-making, and a shortage of experts. Future directions emphasize the adoption of domestic foundation models, the investment in various GPUs / NPUs, the utilization of big tech services, and the use of open source software. These initiatives will enhance national security, maintain competitive advantages, and promote broader technological and economic progress. With this blueprint, the Republic of Korea can strengthen its defense capabilities and stay ahead of the emerging threats of modern warfare.<|reference_end|> | arxiv | @article{lee2024rok,
title={ROK Defense M&S in the Age of Hyperscale AI: Concepts, Challenges, and
Future Directions},
author={Youngjoon Lee, Taehyun Park, Yeongjoon Kang, Jonghoe Kim, Joonhyuk
Kang},
journal={arXiv preprint arXiv:2410.00367},
year={2024},
archivePrefix={arXiv},
eprint={2410.00367},
primaryClass={eess.SP cs.LG}
} | lee2024rok |
arxiv-663931 | 2410.00368 | Descriptor: Face Detection Dataset for Programmable Threshold-Based Sparse-Vision | <|reference_start|>Descriptor: Face Detection Dataset for Programmable Threshold-Based Sparse-Vision: Smart focal-plane and in-chip image processing has emerged as a crucial technology for vision-enabled embedded systems with energy efficiency and privacy. However, the lack of special datasets providing examples of the data that these neuromorphic sensors compute to convey visual information has hindered the adoption of these promising technologies. Neuromorphic imager variants, including event-based sensors, produce various representations such as streams of pixel addresses representing time and locations of intensity changes in the focal plane, temporal-difference data, data sifted/thresholded by temporal differences, image data after applying spatial transformations, optical flow data, and/or statistical representations. To address the critical barrier to entry, we provide an annotated, temporal-threshold-based vision dataset specifically designed for face detection tasks derived from the same videos used for Aff-Wild2. By offering multiple threshold levels (e.g., 4, 8, 12, and 16), this dataset allows for comprehensive evaluation and optimization of state-of-the-art neural architectures under varying conditions and settings compared to traditional methods. The accompanying tool flow for generating event data from raw videos further enhances accessibility and usability. We anticipate that this resource will significantly support the development of robust vision systems based on smart sensors that can process based on temporal-difference thresholds, enabling more accurate and efficient object detection and localization and ultimately promoting the broader adoption of low-power, neuromorphic imaging technologies. To support further research, we publicly released the dataset at \url{https://dx.doi.org/10.21227/bw2e-dj78}.<|reference_end|> | arxiv | @article{islam2024descriptor:,
title={Descriptor: Face Detection Dataset for Programmable Threshold-Based
Sparse-Vision},
author={Riadul Islam, Sri Ranga Sai Krishna Tummala, Joey Mul\'e, Rohith
Kankipati, Suraj Jalapally, Dhandeep Challagundla, Chad Howard, and Ryan
Robucci},
journal={arXiv preprint arXiv:2410.00368},
year={2024},
archivePrefix={arXiv},
eprint={2410.00368},
primaryClass={cs.CV eess.IV}
} | islam2024descriptor: |
arxiv-663932 | 2410.00371 | AHA: A Vision-Language-Model for Detecting and Reasoning Over Failures in Robotic Manipulation | <|reference_start|>AHA: A Vision-Language-Model for Detecting and Reasoning Over Failures in Robotic Manipulation: Robotic manipulation in open-world settings requires not only task execution but also the ability to detect and learn from failures. While recent advances in vision-language models (VLMs) and large language models (LLMs) have improved robots' spatial reasoning and problem-solving abilities, they still struggle with failure recognition, limiting their real-world applicability. We introduce AHA, an open-source VLM designed to detect and reason about failures in robotic manipulation using natural language. By framing failure detection as a free-form reasoning task, AHA identifies failures and provides detailed, adaptable explanations across different robots, tasks, and environments. We fine-tuned AHA using FailGen, a scalable framework that generates the first large-scale dataset of robotic failure trajectories, the AHA dataset. FailGen achieves this by procedurally perturbing successful demonstrations from simulation. Despite being trained solely on the AHA dataset, AHA generalizes effectively to real-world failure datasets, robotic systems, and unseen tasks. It surpasses the second-best model (GPT-4o in-context learning) by 10.3% and exceeds the average performance of six compared models including five state-of-the-art VLMs by 35.3% across multiple metrics and datasets. We integrate AHA into three manipulation frameworks that utilize LLMs/VLMs for reinforcement learning, task and motion planning, and zero-shot trajectory generation. AHA's failure feedback enhances these policies' performances by refining dense reward functions, optimizing task planning, and improving sub-task verification, boosting task success rates by an average of 21.4% across all three tasks compared to GPT-4 models.<|reference_end|> | arxiv | @article{duan2024aha:,
title={AHA: A Vision-Language-Model for Detecting and Reasoning Over Failures
in Robotic Manipulation},
author={Jiafei Duan, Wilbert Pumacay, Nishanth Kumar, Yi Ru Wang, Shulin Tian,
Wentao Yuan, Ranjay Krishna, Dieter Fox, Ajay Mandlekar, Yijie Guo},
journal={arXiv preprint arXiv:2410.00371},
year={2024},
archivePrefix={arXiv},
eprint={2410.00371},
primaryClass={cs.RO}
} | duan2024aha: |
arxiv-663933 | 2410.00373 | Robust Traffic Forecasting against Spatial Shift over Years | <|reference_start|>Robust Traffic Forecasting against Spatial Shift over Years: Recent advancements in Spatiotemporal Graph Neural Networks (ST-GNNs) and Transformers have demonstrated promising potential for traffic forecasting by effectively capturing both temporal and spatial correlations. The generalization ability of spatiotemporal models has received considerable attention in recent scholarly discourse. However, no substantive datasets specifically addressing traffic out-of-distribution (OOD) scenarios have been proposed. Existing ST-OOD methods are either constrained to testing on extant data or necessitate manual modifications to the dataset. Consequently, the generalization capacity of current spatiotemporal models in OOD scenarios remains largely underexplored. In this paper, we investigate state-of-the-art models using newly proposed traffic OOD benchmarks and, surprisingly, find that these models experience a significant decline in performance. Through meticulous analysis, we attribute this decline to the models' inability to adapt to previously unobserved spatial relationships. To address this challenge, we propose a novel Mixture of Experts (MoE) framework, which learns a set of graph generators (i.e., graphons) during training and adaptively combines them to generate new graphs based on novel environmental conditions to handle spatial distribution shifts during testing. We further extend this concept to the Transformer architecture, achieving substantial improvements. Our method is both parsimonious and efficacious, and can be seamlessly integrated into any spatiotemporal model, outperforming current state-of-the-art approaches in addressing spatial dynamics.<|reference_end|> | arxiv | @article{wang2024robust,
title={Robust Traffic Forecasting against Spatial Shift over Years},
author={Hongjun Wang, Jiyuan Chen, Tong Pan, Zheng Dong, Lingyu Zhang, Renhe
Jiang, and Xuan Song},
journal={arXiv preprint arXiv:2410.00373},
year={2024},
archivePrefix={arXiv},
eprint={2410.00373},
primaryClass={cs.LG cs.AI cs.DB stat.ML}
} | wang2024robust |
arxiv-663934 | 2410.00376 | Frequency Diverse Array-enabled RIS-aided Integrated Sensing and Communication | <|reference_start|>Frequency Diverse Array-enabled RIS-aided Integrated Sensing and Communication: Integrated sensing and communication (ISAC) has been envisioned as a prospective technology to enable ubiquitous sensing and communications in next-generation wireless networks. In contrast to existing works on reconfigurable intelligent surface (RIS) aided ISAC systems using conventional phased arrays (PAs), this paper investigates a frequency diverse array (FDA)-enabled RIS-aided ISAC system, where the FDA aims to provide a distance-angle-dependent beampattern to effectively suppress the clutter, and RIS is employed to establish high-quality links between the BS and users/target. We aim to maximize sum rate by jointly optimizing the BS transmit beamforming vectors, the covariance matrix of the dedicated radar signal, the RIS phase shift matrix, the FDA frequency offsets and the radar receive equalizer, while guaranteeing the required signal-to-clutter-plus-noise ratio (SCNR) of the radar echo signal. To tackle this challenging problem, we first theoretically prove that the dedicated radar signal is unnecessary for enhancing target sensing performance, based on which the original problem is much simplified. Then, we turn our attention to the single-user single-target (SUST) scenario to demonstrate that the FDA-RIS-aided ISAC system always achieves a higher SCNR than its PA-RIS-aided counterpart. Moreover, it is revealed that the SCNR increment exhibits linear growth with the BS transmit power and the number of BS receive antennas. In order to effectively solve this simplified problem, we leverage the fractional programming (FP) theory and subsequently develop an efficient alternating optimization (AO) algorithm based on symmetric alternating direction method of multipliers (SADMM) and successive convex approximation (SCA) techniques. Numerical results demonstrate the superior performance of our proposed algorithm in terms of sum rate and radar SCNR.<|reference_end|> | arxiv | @article{yang2024frequency,
title={Frequency Diverse Array-enabled RIS-aided Integrated Sensing and
Communication},
author={Hanyu Yang, Shiqi Gong, Heng Liu, Chengwen Xing, Nan Zhao and Dusit
Niyato},
journal={arXiv preprint arXiv:2410.00376},
year={2024},
archivePrefix={arXiv},
eprint={2410.00376},
primaryClass={cs.IT eess.SP math.IT}
} | yang2024frequency |
arxiv-663935 | 2410.00379 | CXPMRG-Bench: Pre-training and Benchmarking for X-ray Medical Report Generation on CheXpert Plus Dataset | <|reference_start|>CXPMRG-Bench: Pre-training and Benchmarking for X-ray Medical Report Generation on CheXpert Plus Dataset: X-ray image-based medical report generation (MRG) is a pivotal area in artificial intelligence which can significantly reduce diagnostic burdens and patient wait times. Despite significant progress, we believe that the task has reached a bottleneck due to the limited benchmark datasets and the existing large models' insufficient capability enhancements in this specialized domain. Specifically, the recently released CheXpert Plus dataset lacks comparative evaluation algorithms and their results, providing only the dataset itself. This situation makes the training, evaluation, and comparison of subsequent algorithms challenging. Thus, we conduct a comprehensive benchmarking of existing mainstream X-ray report generation models and large language models (LLMs) on the CheXpert Plus dataset. We believe that the proposed benchmark can provide a solid comparative basis for subsequent algorithms and serve as a guide for researchers to quickly grasp the state-of-the-art models in this field. More importantly, we propose a large model for X-ray image report generation using a multi-stage pre-training strategy, including self-supervised autoregressive generation and X-ray-report contrastive learning, and supervised fine-tuning. Extensive experimental results indicate that the autoregressive pre-training based on Mamba effectively encodes X-ray images, and the image-text contrastive pre-training further aligns the feature spaces, achieving better experimental results. Source code can be found at \url{https://github.com/Event-AHU/Medical_Image_Analysis}.<|reference_end|> | arxiv | @article{wang2024cxpmrg-bench:,
title={CXPMRG-Bench: Pre-training and Benchmarking for X-ray Medical Report
Generation on CheXpert Plus Dataset},
author={Xiao Wang, Fuling Wang, Yuehang Li, Qingchuan Ma, Shiao Wang, Bo
Jiang, Chuanfu Li, Jin Tang},
journal={arXiv preprint arXiv:2410.00379},
year={2024},
archivePrefix={arXiv},
eprint={2410.00379},
primaryClass={cs.CV cs.AI cs.LG}
} | wang2024cxpmrg-bench: |
arxiv-663936 | 2410.00380 | GLMHA A Guided Low-rank Multi-Head Self-Attention for Efficient Image Restoration and Spectral Reconstruction | <|reference_start|>GLMHA A Guided Low-rank Multi-Head Self-Attention for Efficient Image Restoration and Spectral Reconstruction: Image restoration and spectral reconstruction are longstanding computer vision tasks. Currently, CNN-transformer hybrid models provide state-of-the-art performance for these tasks. The key common ingredient in the architectural designs of these models is Channel-wise Self-Attention (CSA). We first show that CSA is an overall low-rank operation. Then, we propose an instance-Guided Low-rank Multi-Head self-attention (GLMHA) to replace the CSA for a considerable computational gain while closely retaining the original model performance. Unique to the proposed GLMHA is its ability to provide computational gain for both short and long input sequences. In particular, the gain is in terms of both Floating Point Operations (FLOPs) and parameter count reduction. This is in contrast to the existing popular computational complexity reduction techniques, e.g., Linformer, Performer, and Reformer, for which FLOPs overpower the efficient design tricks for shorter input sequences. Moreover, parameter reduction remains unaccounted for in the existing methods. We perform an extensive evaluation for the tasks of spectral reconstruction from RGB images, spectral reconstruction from snapshot compressive imaging, motion deblurring, and image deraining by enhancing the best-performing models with our GLMHA. Our results show up to a 7.7 Giga FLOPs reduction with 370K fewer parameters required to closely retain the original performance of the best-performing models that employ CSA.<|reference_end|> | arxiv | @article{ilyas2024glmha,
title={GLMHA A Guided Low-rank Multi-Head Self-Attention for Efficient Image
Restoration and Spectral Reconstruction},
author={Zaid Ilyas, Naveed Akhtar, David Suter, Syed Zulqarnain Gilani},
journal={arXiv preprint arXiv:2410.00380},
year={2024},
archivePrefix={arXiv},
eprint={2410.00380},
primaryClass={cs.CV}
} | ilyas2024glmha |
arxiv-663937 | 2410.00381 | Generative Precipitation Downscaling using Score-based Diffusion with Wasserstein Regularization | <|reference_start|>Generative Precipitation Downscaling using Score-based Diffusion with Wasserstein Regularization: Understanding local risks from extreme rainfall, such as flooding, requires both long records (to sample rare events) and high-resolution products (to assess localized hazards). Unfortunately, there is a dearth of long-record and high-resolution products that can be used to understand local risk and precipitation science. In this paper, we present a novel generative diffusion model that downscales (super-resolves) globally available Climate Prediction Center (CPC) gauge-based precipitation products and ERA5 reanalysis data to generate kilometer-scale precipitation estimates. Downscaling gauge-based precipitation from 55 km to 1 km while recovering extreme rainfall signals poses significant challenges. To ensure that our model (named WassDiff) produces well-calibrated precipitation intensity values, we introduce a Wasserstein Distance Regularization (WDR) term for the score-matching training objective in the diffusion denoising process. We show that WDR greatly enhances the model's ability to capture extreme values compared to diffusion without WDR. Extensive evaluation shows that WassDiff has better reconstruction accuracy and bias scores than conventional score-based diffusion models. Case studies of extreme weather phenomena, like tropical storms and cold fronts, demonstrate WassDiff's ability to produce appropriate spatial patterns while capturing extremes. Such downscaling capability enables the generation of extensive km-scale precipitation datasets from existing historical global gauge records and current gauge measurements in areas without high-resolution radar.<|reference_end|> | arxiv | @article{liu2024generative,
title={Generative Precipitation Downscaling using Score-based Diffusion with
Wasserstein Regularization},
author={Yuhao Liu, James Doss-Gollin, Guha Balakrishnan, Ashok Veeraraghavan},
journal={arXiv preprint arXiv:2410.00381},
year={2024},
archivePrefix={arXiv},
eprint={2410.00381},
primaryClass={cs.LG cs.AI}
} | liu2024generative |
arxiv-663938 | 2410.00382 | Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning | <|reference_start|>Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning: As large language models (LLMs) are applied across diverse domains, the ability to selectively unlearn specific information has become increasingly essential. For instance, LLMs are expected to provide confidential information to authorized internal users, such as employees or trusted partners, while withholding it from external users, including the general public and unauthorized entities. In response to this challenge, we propose a novel method termed ``in-context knowledge unlearning'', which enables the model to selectively forget information at test time based on the context of the query. Our method fine-tunes pre-trained LLMs to enable prompt unlearning of target knowledge within the context, while preserving other knowledge. Experiments on the TOFU and AGE datasets using Llama2-7B/13B and Mistral-7B models show our method achieves up to 95% forgetting accuracy while retaining 80% of unrelated knowledge, significantly outperforming baselines in both in-domain and out-of-domain scenarios. Further investigation into the model's internal behavior revealed that while fine-tuned LLMs generate correct predictions in the middle layers and maintain them up to the final layer, they make the decision to forget at the last layer, i.e., ``LLMs pretend to forget''. Our findings offer valuable insights into enhancing the robustness of unlearning mechanisms in LLMs, setting a foundation for future research in the field.<|reference_end|> | arxiv | @article{takashiro2024answer,
title={Answer When Needed, Forget When Not: Language Models Pretend to Forget
via In-Context Knowledge Unlearning},
author={Shota Takashiro, Takeshi Kojima, Andrew Gambardella, Qi Cao, Yusuke
Iwasawa and Yutaka Matsuo},
journal={arXiv preprint arXiv:2410.00382},
year={2024},
archivePrefix={arXiv},
eprint={2410.00382},
primaryClass={cs.CL}
} | takashiro2024answer |
arxiv-663939 | 2410.00385 | STGformer: Efficient Spatiotemporal Graph Transformer for Traffic Forecasting | <|reference_start|>STGformer: Efficient Spatiotemporal Graph Transformer for Traffic Forecasting: Traffic forecasting is a cornerstone of smart city management, enabling efficient resource allocation and transportation planning. Deep learning, with its ability to capture complex nonlinear patterns in spatiotemporal (ST) data, has emerged as a powerful tool for traffic forecasting. While graph convolutional networks (GCNs) and transformer-based models have shown promise, their computational demands often hinder their application to real-world road networks, particularly those with large-scale spatiotemporal interactions. To address these challenges, we propose a novel spatiotemporal graph transformer (STGformer) architecture. STGformer effectively balances the strengths of GCNs and Transformers, enabling efficient modeling of both global and local traffic patterns while maintaining a manageable computational footprint. Unlike traditional approaches that require multiple attention layers, the STG attention block captures high-order spatiotemporal interactions in a single layer, significantly reducing computational cost. In particular, STGformer achieves a 100x speedup and a 99.8\% reduction in GPU memory usage compared to STAEformer during batch inference on a California road graph with 8,600 sensors. We evaluate STGformer on the LargeST benchmark and demonstrate its superiority over state-of-the-art Transformer-based methods such as PDFormer and STAEformer, underscoring STGformer's potential to revolutionize traffic forecasting by overcoming the computational and memory limitations of existing approaches, making it a promising foundation for future spatiotemporal modeling tasks.<|reference_end|> | arxiv | @article{wang2024stgformer:,
title={STGformer: Efficient Spatiotemporal Graph Transformer for Traffic
Forecasting},
author={Hongjun Wang, Jiyuan Chen, Tong Pan, Zheng Dong, Lingyu Zhang, Renhe
Jiang, and Xuan Song},
journal={arXiv preprint arXiv:2410.00385},
year={2024},
archivePrefix={arXiv},
eprint={2410.00385},
primaryClass={cs.LG cs.AI cs.DB}
} | wang2024stgformer: |
arxiv-663940 | 2410.00386 | Seamless Augmented Reality Integration in Arthroscopy: A Pipeline for Articular Reconstruction and Guidance | <|reference_start|>Seamless Augmented Reality Integration in Arthroscopy: A Pipeline for Articular Reconstruction and Guidance: Arthroscopy is a minimally invasive surgical procedure used to diagnose and treat joint problems. The clinical workflow of arthroscopy typically involves inserting an arthroscope into the joint through a small incision, during which surgeons navigate and operate largely by relying on their visual assessment through the arthroscope. However, the arthroscope's restricted field of view and lack of depth perception pose challenges in navigating complex articular structures and achieving surgical precision during procedures. Aiming at enhancing intraoperative awareness, we present a robust pipeline that incorporates simultaneous localization and mapping, depth estimation, and 3D Gaussian splatting to realistically reconstruct intra-articular structures solely based on monocular arthroscope video. Extending 3D reconstruction to Augmented Reality (AR) applications, our solution offers AR assistance for articular notch measurement and annotation anchoring in a human-in-the-loop manner. Compared to traditional Structure-from-Motion and Neural Radiance Field-based methods, our pipeline achieves dense 3D reconstruction and competitive rendering fidelity with explicit 3D representation in 7 minutes on average. When evaluated on four phantom datasets, our method achieves RMSE = 2.21mm reconstruction error, PSNR = 32.86 and SSIM = 0.89 on average. Because our pipeline enables AR reconstruction and guidance directly from monocular arthroscopy without any additional data and/or hardware, our solution may hold the potential for enhancing intraoperative awareness and facilitating surgical precision in arthroscopy. Our AR measurement tool achieves accuracy within 1.59 +/- 1.81mm and the AR annotation tool achieves a mIoU of 0.721.<|reference_end|> | arxiv | @article{shu2024seamless,
title={Seamless Augmented Reality Integration in Arthroscopy: A Pipeline for
Articular Reconstruction and Guidance},
author={Hongchao Shu, Mingxu Liu, Lalithkumar Seenivasan, Suxi Gu, Ping-Cheng
Ku, Jonathan Knopf, Russell Taylor, Mathias Unberath},
journal={arXiv preprint arXiv:2410.00386},
year={2024},
archivePrefix={arXiv},
eprint={2410.00386},
primaryClass={cs.CV cs.LG}
} | shu2024seamless |
arxiv-663941 | 2410.00387 | Boosting the Capabilities of Compact Models in Low-Data Contexts with Large Language Models and Retrieval-Augmented Generation | <|reference_start|>Boosting the Capabilities of Compact Models in Low-Data Contexts with Large Language Models and Retrieval-Augmented Generation: The data and compute requirements of current language modeling technology pose challenges for the processing and analysis of low-resource languages. Declarative linguistic knowledge has the potential to partially bridge this data scarcity gap by providing models with useful inductive bias in the form of language-specific rules. In this paper, we propose a retrieval augmented generation (RAG) framework backed by a large language model (LLM) to correct the output of a smaller model for the linguistic task of morphological glossing. We leverage linguistic information to make up for the lack of data and trainable parameters, while allowing for inputs from written descriptive grammars interpreted and distilled through an LLM. The results demonstrate that significant leaps in performance and efficiency are possible with the right combination of: a) linguistic inputs in the form of grammars, b) the interpretive power of LLMs, and c) the trainability of smaller token classification networks. We show that a compact, RAG-supported model is highly effective in data-scarce settings, achieving a new state-of-the-art for this task and our target languages. Our work also offers documentary linguists a more reliable and more usable tool for morphological glossing by providing well-reasoned explanations and confidence scores for each output.<|reference_end|> | arxiv | @article{shandilya2024boosting,
title={Boosting the Capabilities of Compact Models in Low-Data Contexts with
Large Language Models and Retrieval-Augmented Generation},
author={Bhargav Shandilya and Alexis Palmer},
journal={arXiv preprint arXiv:2410.00387},
year={2024},
archivePrefix={arXiv},
eprint={2410.00387},
primaryClass={cs.CL cs.AI}
} | shandilya2024boosting |
arxiv-663942 | 2410.00388 | Find Everything: A General Vision Language Model Approach to Multi-Object Search | <|reference_start|>Find Everything: A General Vision Language Model Approach to Multi-Object Search: The Multi-Object Search (MOS) problem involves navigating to a sequence of locations to maximize the likelihood of finding target objects while minimizing travel costs. In this paper, we introduce a novel approach to the MOS problem, called Finder, which leverages vision language models (VLMs) to locate multiple objects across diverse environments. Specifically, our approach introduces multi-channel score maps to track and reason about multiple objects simultaneously during navigation, along with a score fusion technique that combines scene-level and object-level semantic correlations. Experiments in both simulated and real-world settings showed that Finder outperforms existing methods using deep reinforcement learning and VLMs. Ablation and scalability studies further validated our design choices and robustness with increasing numbers of target objects, respectively. Website: https://find-all-my-things.github.io/<|reference_end|> | arxiv | @article{choi2024find,
title={Find Everything: A General Vision Language Model Approach to
Multi-Object Search},
author={Daniel Choi, Angus Fung, Haitong Wang, Aaron Hao Tan},
journal={arXiv preprint arXiv:2410.00388},
year={2024},
archivePrefix={arXiv},
eprint={2410.00388},
primaryClass={cs.RO}
} | choi2024find |
arxiv-663943 | 2410.00392 | MERIT: Multimodal Wearable Vital Sign Waveform Monitoring | <|reference_start|>MERIT: Multimodal Wearable Vital Sign Waveform Monitoring: Cardiovascular disease (CVD) is the leading cause of death and premature mortality worldwide, with occupational environments significantly influencing CVD risk, underscoring the need for effective cardiac monitoring and early warning systems. Existing methods of monitoring vital signs require subjects to remain stationary, which is impractical for daily monitoring as individuals are often in motion. To address this limitation, we propose MERIT, a multimodality-based wearable system designed for precise ECG waveform monitoring without movement restrictions. Daily activities, involving frequent arm movements, can significantly affect sensor data and complicate the reconstruction of accurate ECG signals. To mitigate motion impact and enhance ECG signal reconstruction, we introduce a deep independent component analysis (Deep-ICA) module and a multimodal fusion module. We conducted experiments with 15 subjects. Our results, compared with commercial wearable devices and existing methods, demonstrate that MERIT accurately reconstructs ECG waveforms during various office activities, offering a reliable solution for fine-grained cardiac monitoring in dynamic environments.<|reference_end|> | arxiv | @article{tang2024merit:,
title={MERIT: Multimodal Wearable Vital Sign Waveform Monitoring},
author={Yongyang Tang, Zhe Chen, Ang Li, Tianyue Zheng, Zheng Lin, Jia Xu, Pin
Lv, Zhe Sun, Yue Gao},
journal={arXiv preprint arXiv:2410.00392},
year={2024},
archivePrefix={arXiv},
eprint={2410.00392},
primaryClass={eess.SY cs.AR cs.SY}
} | tang2024merit: |
arxiv-663944 | 2410.00393 | Revisiting Essential and Nonessential Settings of Evidential Deep Learning | <|reference_start|>Revisiting Essential and Nonessential Settings of Evidential Deep Learning: Evidential Deep Learning (EDL) is an emerging method for uncertainty estimation that provides reliable predictive uncertainty in a single forward pass, attracting significant attention. Grounded in subjective logic, EDL derives Dirichlet concentration parameters from neural networks to construct a Dirichlet probability density function (PDF), modeling the distribution of class probabilities. Despite its success, EDL incorporates several nonessential settings: In model construction, (1) a commonly ignored prior weight parameter is fixed to the number of classes, while its value actually impacts the balance between the proportion of evidence and its magnitude in deriving predictive scores. In model optimization, (2) the empirical risk features a variance-minimizing optimization term that biases the PDF towards a Dirac delta function, potentially exacerbating overconfidence. (3) Additionally, the structural risk typically includes a KL-divergence-minimizing regularization, whose optimization direction extends beyond the intended purpose and contradicts common sense, diminishing the information carried by the evidence magnitude. Therefore, we propose Re-EDL, a simplified yet more effective variant of EDL, by relaxing the nonessential settings and retaining the essential one, namely, the adoption of projected probability from subjective logic. Specifically, Re-EDL treats the prior weight as an adjustable hyperparameter rather than a fixed scalar, and directly optimizes the expectation of the provided Dirichlet PDF, deprecating both the variance-minimizing optimization term and the divergence regularization term. Extensive experiments and state-of-the-art performance validate the effectiveness of our method. The source code is available at https://github.com/MengyuanChen21/Re-EDL.<|reference_end|> | arxiv | @article{chen2024revisiting,
title={Revisiting Essential and Nonessential Settings of Evidential Deep
Learning},
author={Mengyuan Chen, Junyu Gao, Changsheng Xu},
journal={arXiv preprint arXiv:2410.00393},
year={2024},
archivePrefix={arXiv},
eprint={2410.00393},
primaryClass={cs.LG cs.AI}
} | chen2024revisiting |
arxiv-663945 | 2410.00394 | Analyzing School Shootings in the US with Statistical Learning | <|reference_start|>Analyzing School Shootings in the US with Statistical Learning: Active shooter incidents in schools draw widespread attention across the nation. Students, faculty, and staff on campuses could be involved in these shootings as victims, perpetrators, etc. [1]. These gun-related crimes jeopardize school safety. From 1999 to 2024, there have been approximately 43 mass school shootings, with over 500 school shootings altogether. A mass shooting is defined as any event where four or more people are shot with a gun, not counting the perpetrator. By studying school shooting cases, we concluded that most of the time, the shootings occur inside classrooms. Existing research that includes statistical analysis usually focuses on public mass shootings or shooting incidents in general that have occurred in the past, and there are hardly any articles focusing on school mass shootings. This leaves schools more vulnerable to mass shootings in the future. In this research, we have gathered school shooting data from various sources and analyzed the results. Interpreting these data and conducting various statistical analyses will ultimately help law enforcement better prepare for future school shootings.<|reference_end|> | arxiv | @article{dai2024analyzing,
title={Analyzing School Shootings in the US with Statistical Learning},
author={Wei Dai, Diya Kafle, Brian Miller},
journal={arXiv preprint arXiv:2410.00394},
year={2024},
archivePrefix={arXiv},
eprint={2410.00394},
primaryClass={cs.CY}
} | dai2024analyzing |
arxiv-663946 | 2410.00395 | Performance Improvement of IaaS Type of Cloud Computing Using Virtualization Technique | <|reference_start|>Performance Improvement of IaaS Type of Cloud Computing Using Virtualization Technique: Cloud computing has transformed the way organizations manage and scale their IT infrastructure by offering flexible, scalable, and cost-effective solutions. However, the Infrastructure as a Service (IaaS) model faces performance challenges primarily due to the limitations imposed by virtualization technology. This paper focuses on designing an effective virtualization technique for IaaS, aiming to improve infrastructure-level performance. Through a systematic literature review and a design, development, and evaluation approach, various virtualization techniques such as full virtualization, paravirtualization, and hardware-assisted virtualization are explored. The study also considers the role of hypervisors like Xen, KVM, and VMware ESXi in improving performance. The proposed solution seeks to optimize resource utilization, minimize latency, and enhance overall throughput in IaaS environments. Finally, the research discusses the potential application of this virtualization technique for public cloud computing solutions tailored for Ethiopian Small and Medium Enterprises (ESMEs) using platforms like Amazon EC2.<|reference_end|> | arxiv | @article{admassu2024performance,
title={Performance Improvement of IaaS Type of Cloud Computing Using
Virtualization Technique},
author={Dawit Zeleke Admassu},
journal={arXiv preprint arXiv:2410.00395},
year={2024},
archivePrefix={arXiv},
eprint={2410.00395},
primaryClass={cs.CE}
} | admassu2024performance |
arxiv-663947 | 2410.00396 | Dynamic neurons: A statistical physics approach for analyzing deep neural networks | <|reference_start|>Dynamic neurons: A statistical physics approach for analyzing deep neural networks: Deep neural network architectures often consist of repetitive structural elements. We introduce a new approach that reveals these patterns and can be broadly applied to the study of deep learning. Similar to how a power strip helps untangle and organize complex cable connections, this approach treats neurons as additional degrees of freedom in interactions, simplifying the structure and enhancing the intuitive understanding of interactions within deep neural networks. Furthermore, it reveals the translational symmetry of deep neural networks, which simplifies the application of the renormalization group transformation - a method that effectively analyzes the scaling behavior of the system. By utilizing translational symmetry and renormalization group transformations, we can analyze critical phenomena. This approach may open new avenues for studying deep neural networks using statistical physics.<|reference_end|> | arxiv | @article{lee2024dynamic,
title={Dynamic neurons: A statistical physics approach for analyzing deep
neural networks},
author={Donghee Lee, Hye-Sung Lee, Jaeok Yi},
journal={arXiv preprint arXiv:2410.00396},
year={2024},
archivePrefix={arXiv},
eprint={2410.00396},
primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.LG}
} | lee2024dynamic |
arxiv-663948 | 2410.00397 | A Generalized Mean Approach for Distributed-PCA | <|reference_start|>A Generalized Mean Approach for Distributed-PCA: Principal component analysis (PCA) is a widely used technique for dimension reduction. As datasets continue to grow in size, distributed-PCA (DPCA) has become an active research area. A key challenge in DPCA lies in efficiently aggregating results across multiple machines or computing nodes due to computational overhead. Fan et al. (2019) introduced a pioneering DPCA method to estimate the leading rank-$r$ eigenspace, aggregating local rank-$r$ projection matrices by averaging. However, their method does not utilize eigenvalue information. In this article, we propose a novel DPCA method that incorporates eigenvalue information to aggregate local results via the matrix $\beta$-mean, which we call $\beta$-DPCA. The matrix $\beta$-mean offers a flexible and robust aggregation method through the adjustable choice of $\beta$ values. Notably, for $\beta=1$, it corresponds to the arithmetic mean; for $\beta=-1$, the harmonic mean; and as $\beta \to 0$, the geometric mean. Moreover, the matrix $\beta$-mean is shown to be associated with the matrix $\beta$-divergence, a subclass of the Bregman matrix divergence, to support the robustness of $\beta$-DPCA. We also study the stability of eigenvector ordering under eigenvalue perturbation for $\beta$-DPCA. The performance of our proposal is evaluated through numerical studies.<|reference_end|> | arxiv | @article{jou2024a,
title={A Generalized Mean Approach for Distributed-PCA},
author={Zhi-Yu Jou, Su-Yun Huang, Hung Hung, Shinto Eguchi},
journal={arXiv preprint arXiv:2410.00397},
year={2024},
archivePrefix={arXiv},
eprint={2410.00397},
primaryClass={stat.ML cs.LG}
} | jou2024a |
arxiv-663949 | 2410.00398 | CusConcept: Customized Visual Concept Decomposition with Diffusion Models | <|reference_start|>CusConcept: Customized Visual Concept Decomposition with Diffusion Models: Enabling generative models to decompose visual concepts from a single image is a complex and challenging problem. In this paper, we study a new and challenging task, customized concept decomposition, wherein the objective is to leverage diffusion models to decompose a single image and generate visual concepts from various perspectives. To address this challenge, we propose a two-stage framework, CusConcept (short for Customized Visual Concept Decomposition), to extract customized visual concept embedding vectors that can be embedded into prompts for text-to-image generation. In the first stage, CusConcept employs a vocabulary-guided concept decomposition mechanism to build vocabularies along human-specified conceptual axes. The decomposed concepts are obtained by retrieving corresponding vocabularies and learning anchor weights. In the second stage, joint concept refinement is performed to enhance the fidelity and quality of generated images. We further curate an evaluation benchmark for assessing the performance of the open-world concept decomposition task. Our approach can effectively generate high-quality images of the decomposed concepts and produce related lexical predictions as secondary outcomes. Extensive qualitative and quantitative experiments demonstrate the effectiveness of CusConcept.<|reference_end|> | arxiv | @article{xu2024cusconcept:,
title={CusConcept: Customized Visual Concept Decomposition with Diffusion
Models},
author={Zhi Xu, Shaozhe Hao, Kai Han},
journal={arXiv preprint arXiv:2410.00398},
year={2024},
archivePrefix={arXiv},
eprint={2410.00398},
primaryClass={cs.CV}
} | xu2024cusconcept: |
arxiv-663950 | 2410.00400 | DynEx: Dynamic Code Synthesis with Structured Design Exploration for Accelerated Exploratory Programming | <|reference_start|>DynEx: Dynamic Code Synthesis with Structured Design Exploration for Accelerated Exploratory Programming: Recent advancements in large language models have significantly expedited the process of generating front-end code. This allows users to rapidly prototype user interfaces and ideate through code, a process known as exploratory programming. However, existing LLM code-generation tools focus more on technical implementation details rather than finding the right design given a particular problem. We present DynEx, an LLM-based method for design exploration in accelerated exploratory programming. DynEx uses LLMs to guide users through a structured Design Matrix to explore the design space before dynamic iterative implementation. It also introduces a technique to self-invoke generative AI, enabling the creation of a diverse suite of applications. A user study of 10 experts found that DynEx increased design exploration and enabled the creation of more complex and varied prototypes compared to a Claude Artifact baseline. We conclude with a discussion of the implications of design exploration for exploratory programming.<|reference_end|> | arxiv | @article{ma2024dynex:,
title={DynEx: Dynamic Code Synthesis with Structured Design Exploration for
Accelerated Exploratory Programming},
author={Jenny Ma, Karthik Sreedhar, Vivian Liu, Sitong Wang, Pedro Alejandro
Perez, Riya Sahni, Lydia B. Chilton},
journal={arXiv preprint arXiv:2410.00400},
year={2024},
archivePrefix={arXiv},
eprint={2410.00400},
primaryClass={cs.HC}
} | ma2024dynex: |
arxiv-663951 | 2410.00403 | TikGuard: A Deep Learning Transformer-Based Solution for Detecting Unsuitable TikTok Content for Kids | <|reference_start|>TikGuard: A Deep Learning Transformer-Based Solution for Detecting Unsuitable TikTok Content for Kids: The rise of short-form videos on platforms like TikTok has brought new challenges in safeguarding young viewers from inappropriate content. Traditional moderation methods often fall short in handling the vast and rapidly changing landscape of user-generated videos, increasing the risk of children encountering harmful material. This paper introduces TikGuard, a transformer-based deep learning approach aimed at detecting and flagging content unsuitable for children on TikTok. By using a specially curated dataset, TikHarm, and leveraging advanced video classification techniques, TikGuard achieves an accuracy of 86.7%, showing a notable improvement over existing methods in similar contexts. While direct comparisons are limited by the uniqueness of the TikHarm dataset, TikGuard's performance highlights its potential in enhancing content moderation, contributing to a safer online experience for minors. This study underscores the effectiveness of transformer models in video classification and sets a foundation for future research in this area.<|reference_end|> | arxiv | @article{balat2024tikguard:,
title={TikGuard: A Deep Learning Transformer-Based Solution for Detecting
Unsuitable TikTok Content for Kids},
author={Mazen Balat, Mahmoud Essam Gabr, Hend Bakr, Ahmed B. Zaky},
journal={arXiv preprint arXiv:2410.00403},
year={2024},
archivePrefix={arXiv},
eprint={2410.00403},
primaryClass={cs.CV cs.AI}
} | balat2024tikguard: |
arxiv-663952 | 2410.00404 | 3DGR-CAR: Coronary artery reconstruction from ultra-sparse 2D X-ray views with a 3D Gaussians representation | <|reference_start|>3DGR-CAR: Coronary artery reconstruction from ultra-sparse 2D X-ray views with a 3D Gaussians representation: Reconstructing 3D coronary arteries is important for coronary artery disease diagnosis, treatment planning and operation navigation. Traditional reconstruction techniques often require many projections, while reconstruction from sparse-view X-ray projections is a potential way of reducing radiation dose. However, the extreme sparsity of coronary arteries in a 3D volume and the ultra-limited number of projections pose significant challenges for efficient and accurate 3D reconstruction. To this end, we propose 3DGR-CAR, a 3D Gaussian Representation for Coronary Artery Reconstruction from ultra-sparse X-ray projections. We leverage 3D Gaussian representation to avoid the inefficiency caused by the extreme sparsity of coronary artery data and propose a Gaussian center predictor to overcome the noisy Gaussian initialization from ultra-sparse view projections. The proposed scheme enables fast and accurate 3D coronary artery reconstruction with only 2 views. Experimental results on two datasets indicate that the proposed approach significantly outperforms other methods in terms of voxel accuracy and visual quality of coronary arteries. The code will be available at https://github.com/windrise/3DGR-CAR.<|reference_end|> | arxiv | @article{fu20243dgr-car:,
title={3DGR-CAR: Coronary artery reconstruction from ultra-sparse 2D X-ray
views with a 3D Gaussians representation},
author={Xueming Fu, Yingtai Li, Fenghe Tang, Jun Li, Mingyue Zhao, Gao-Jun
Teng, S. Kevin Zhou},
journal={arXiv preprint arXiv:2410.00404},
year={2024},
archivePrefix={arXiv},
eprint={2410.00404},
primaryClass={eess.IV cs.CV}
} | fu20243dgr-car: |
arxiv-663953 | 2410.00407 | Intelligent Repetition Counting for Unseen Exercises: A Few-Shot Learning Approach with Sensor Signals | <|reference_start|>Intelligent Repetition Counting for Unseen Exercises: A Few-Shot Learning Approach with Sensor Signals: Sensing technology has significantly advanced in automating systems that reflect human movement, particularly in robotics and healthcare, where it is used to automatically detect target movements. This study develops a method to automatically count exercise repetitions by analyzing IMU signals, with a focus on a universal exercise repetition counting task that counts all types of exercise movements, including novel exercises not seen during training, using a single model. Since peak patterns can vary significantly between different exercises as well as between individuals performing the same exercise, the model needs to learn a complex embedding space of sensor data to generalize effectively. To address this challenge, we propose a repetition counting technique utilizing a deep metric-based few-shot learning approach, designed to handle both existing and novel exercises. By redefining the counting task as a few-shot classification problem, the method is capable of detecting peak repetition patterns in exercises not seen during training. The approach employs a Siamese network with triplet loss, optimizing the embedding space to distinguish between peak and non-peak frames. Evaluation results demonstrate the effectiveness of the proposed approach, showing an 86.8% probability of accurately counting ten or more repetitions within a single set across 28 different exercises. This performance highlights the model's ability to generalize across various exercise types, including those not present in the training data. Such robustness and adaptability make the system a strong candidate for real-time implementation in fitness and healthcare applications.<|reference_end|> | arxiv | @article{lim2024intelligent,
title={Intelligent Repetition Counting for Unseen Exercises: A Few-Shot
Learning Approach with Sensor Signals},
author={Yooseok Lim, Sujee Lee},
journal={arXiv preprint arXiv:2410.00407},
year={2024},
archivePrefix={arXiv},
eprint={2410.00407},
primaryClass={cs.LG}
} | lim2024intelligent |
arxiv-663954 | 2410.00408 | ECORS: An Ensembled Clustering Approach to Eradicate The Local And Global Outlier In Collaborative Filtering Recommender System | <|reference_start|>ECORS: An Ensembled Clustering Approach to Eradicate The Local And Global Outlier In Collaborative Filtering Recommender System: Recommender systems are designed to suggest items based on user preferences, helping users navigate the vast amount of information available on the internet. Given the overwhelming content, outlier detection has emerged as a key research area in recommender systems. It involves identifying unusual or suspicious patterns in user behavior. However, existing studies in this field face several challenges, including the limited universality of algorithms, difficulties in selecting users, and a lack of optimization. In this paper, we propose an approach that addresses these challenges by employing various clustering algorithms. Specifically, we utilize a user-user matrix-based clustering technique to detect outliers. By constructing a user-user matrix, we can identify suspicious users in the system. Both local and global outliers are detected to ensure comprehensive analysis. Our experimental results demonstrate that this approach significantly improves the accuracy of outlier detection in recommender systems.<|reference_end|> | arxiv | @article{hasan2024ecors:,
title={ECORS: An Ensembled Clustering Approach to Eradicate The Local And
Global Outlier In Collaborative Filtering Recommender System},
author={Mahamudul Hasan},
journal={arXiv preprint arXiv:2410.00408},
year={2024},
archivePrefix={arXiv},
eprint={2410.00408},
primaryClass={cs.IR cs.HC cs.LG}
} | hasan2024ecors: |
arxiv-663955 | 2410.00409 | AlignSum: Data Pyramid Hierarchical Fine-tuning for Aligning with Human Summarization Preference | <|reference_start|>AlignSum: Data Pyramid Hierarchical Fine-tuning for Aligning with Human Summarization Preference: Text summarization tasks commonly employ Pre-trained Language Models (PLMs) to fit diverse standard datasets. While these PLMs excel in automatic evaluations, they frequently underperform in human evaluations, indicating a deviation between their generated summaries and human summarization preferences. This discrepancy is likely due to the low quality of fine-tuning datasets and the limited availability of high-quality human-annotated data that reflect true human preference. To address this challenge, we introduce a novel human summarization preference alignment framework AlignSum. This framework consists of three parts: Firstly, we construct a Data Pyramid with extractive, abstractive, and human-annotated summary data. Secondly, we conduct Gaussian Resampling to remove summaries with extreme lengths. Finally, we implement the two-stage hierarchical fine-tuning with the Data Pyramid after Gaussian Resampling. We apply AlignSum to PLMs on the human-annotated CNN/DailyMail and BBC XSum datasets. Experiments show that with AlignSum, PLMs like BART-Large surpass 175B GPT-3 in both automatic and human evaluations. This demonstrates that AlignSum significantly enhances the alignment of language models with human summarization preferences.<|reference_end|> | arxiv | @article{han2024alignsum:,
title={AlignSum: Data Pyramid Hierarchical Fine-tuning for Aligning with Human
Summarization Preference},
author={Yang Han, Yiming Wang, Rui Wang, Lu Chen, Kai Yu},
journal={arXiv preprint arXiv:2410.00409},
year={2024},
archivePrefix={arXiv},
eprint={2410.00409},
primaryClass={cs.CL}
} | han2024alignsum: |
arxiv-663956 | 2410.00410 | Domain Aware Multi-Task Pretraining of 3D Swin Transformer for T1-weighted Brain MRI | <|reference_start|>Domain Aware Multi-Task Pretraining of 3D Swin Transformer for T1-weighted Brain MRI: The scarcity of annotated medical images is a major bottleneck in developing learning models for medical image analysis. Hence, recent studies have focused on pretrained models with fewer annotation requirements that can be fine-tuned for various downstream tasks. However, existing approaches are mainly 3D adaptations of 2D approaches that are ill-suited for 3D medical imaging data. Motivated by this gap, we propose novel domain-aware multi-task learning tasks to pretrain a 3D Swin Transformer for brain magnetic resonance imaging (MRI). Our method considers the domain knowledge in brain MRI by incorporating brain anatomy and morphology as well as standard pretext tasks adapted for 3D imaging in a contrastive learning setting. We pretrain our model using large-scale brain MRI data of 13,687 samples spanning several large-scale databases. Our method outperforms existing supervised and self-supervised methods in three downstream tasks of Alzheimer's disease classification, Parkinson's disease classification, and age prediction. The ablation study of the proposed pretext tasks shows the effectiveness of our pretext tasks.<|reference_end|> | arxiv | @article{kim2024domain,
title={Domain Aware Multi-Task Pretraining of 3D Swin Transformer for
T1-weighted Brain MRI},
author={Jonghun Kim, Mansu Kim, Hyunjin Park},
journal={arXiv preprint arXiv:2410.00410},
year={2024},
archivePrefix={arXiv},
eprint={2410.00410},
primaryClass={eess.IV cs.CV}
} | kim2024domain |
arxiv-663957 | 2410.00412 | TPN: Transferable Proto-Learning Network towards Few-shot Document-Level Relation Extraction | <|reference_start|>TPN: Transferable Proto-Learning Network towards Few-shot Document-Level Relation Extraction: Few-shot document-level relation extraction suffers from poor performance due to the challenging cross-domain transferability of NOTA (none-of-the-above) relation representation. In this paper, we introduce a Transferable Proto-Learning Network (TPN) to address the challenging issue. It comprises three core components: Hybrid Encoder hierarchically encodes semantic content of input text combined with attention information to enhance the relation representations. As a plug-and-play module for Out-of-Domain (OOD) Detection, Transferable Proto-Learner computes NOTA prototype through an adaptive learnable block, effectively mitigating NOTA bias across various domains. Dynamic Weighting Calibrator detects relation-specific classification confidence, serving as dynamic weights to calibrate the NOTA-dominant loss function. Finally, to bolster the model's cross-domain performance, we complement it with virtual adversarial training (VAT). We conduct extensive experimental analyses on FREDo and ReFREDo, demonstrating the superiority of TPN. Compared to state-of-the-art methods, our approach achieves competitive performance with approximately half the parameter size. Data and code are available at https://github.com/EchoDreamer/TPN.<|reference_end|> | arxiv | @article{zhang2024tpn:,
title={TPN: Transferable Proto-Learning Network towards Few-shot Document-Level
Relation Extraction},
author={Yu Zhang and Zhao Kang},
journal={arXiv preprint arXiv:2410.00412},
year={2024},
archivePrefix={arXiv},
eprint={2410.00412},
primaryClass={cs.CL cs.IR}
} | zhang2024tpn: |
arxiv-663958 | 2410.00414 | Semantic Parsing with Candidate Expressions for Knowledge Base Question Answering | <|reference_start|>Semantic Parsing with Candidate Expressions for Knowledge Base Question Answering: Semantic parsers convert natural language to logical forms, which can be evaluated on knowledge bases (KBs) to produce denotations. Recent semantic parsers have been developed with sequence-to-sequence (seq2seq) pre-trained language models (PLMs) or large language models, where the models treat logical forms as sequences of tokens. For syntactic and semantic validity, the semantic parsers use grammars that enable constrained decoding. However, the grammars lack the ability to utilize the rich information in KBs, although logical forms contain representations of KB elements, such as entities or relations. In this work, we propose a grammar augmented with candidate expressions for semantic parsing on a large KB with a seq2seq PLM. The grammar defines actions as production rules, and our semantic parser predicts actions during inference under the constraints by types and candidate expressions. We apply the grammar to knowledge base question answering, where the constraints by candidate expressions assist a semantic parser to generate valid KB elements. In experiments on two benchmarks, KQA Pro and Overnight, the constraints by candidate expressions increased the accuracy of our semantic parser, whether it was trained with strong supervision or weak supervision. Our semantic parser achieved state-of-the-art accuracies on KQA Pro and Overnight.<|reference_end|> | arxiv | @article{nam2024semantic,
title={Semantic Parsing with Candidate Expressions for Knowledge Base Question
Answering},
author={Daehwan Nam, Gary Geunbae Lee},
journal={arXiv preprint arXiv:2410.00414},
year={2024},
archivePrefix={arXiv},
eprint={2410.00414},
primaryClass={cs.CL}
} | nam2024semantic |
arxiv-663959 | 2410.00418 | Posterior-Mean Rectified Flow: Towards Minimum MSE Photo-Realistic Image Restoration | <|reference_start|>Posterior-Mean Rectified Flow: Towards Minimum MSE Photo-Realistic Image Restoration: Photo-realistic image restoration algorithms are typically evaluated by distortion measures (e.g., PSNR, SSIM) and by perceptual quality measures (e.g., FID, NIQE), where the desire is to attain the lowest possible distortion without compromising on perceptual quality. To achieve this goal, current methods typically attempt to sample from the posterior distribution, or to optimize a weighted sum of a distortion loss (e.g., MSE) and a perceptual quality loss (e.g., GAN). Unlike previous works, this paper is concerned specifically with the optimal estimator that minimizes the MSE under a constraint of perfect perceptual index, namely where the distribution of the reconstructed images is equal to that of the ground-truth ones. A recent theoretical result shows that such an estimator can be constructed by optimally transporting the posterior mean prediction (MMSE estimate) to the distribution of the ground-truth images. Inspired by this result, we introduce Posterior-Mean Rectified Flow (PMRF), a simple yet highly effective algorithm that approximates this optimal estimator. In particular, PMRF first predicts the posterior mean, and then transports the result to a high-quality image using a rectified flow model that approximates the desired optimal transport map. We investigate the theoretical utility of PMRF and demonstrate that it consistently outperforms previous methods on a variety of image restoration tasks.<|reference_end|> | arxiv | @article{ohayon2024posterior-mean,
title={Posterior-Mean Rectified Flow: Towards Minimum MSE Photo-Realistic Image
Restoration},
author={Guy Ohayon, Tomer Michaeli, Michael Elad},
journal={arXiv preprint arXiv:2410.00418},
year={2024},
archivePrefix={arXiv},
eprint={2410.00418},
primaryClass={eess.IV cs.AI cs.CV eess.SP}
} | ohayon2024posterior-mean |
arxiv-663960 | 2410.00419 | KANOP: A Data-Efficient Option Pricing Model using Kolmogorov-Arnold Networks | <|reference_start|>KANOP: A Data-Efficient Option Pricing Model using Kolmogorov-Arnold Networks: Inspired by the recently proposed Kolmogorov-Arnold Networks (KANs), we introduce the KAN-based Option Pricing (KANOP) model to value American-style options, building on the conventional Least Square Monte Carlo (LSMC) algorithm. KANs, which are based on the Kolmogorov-Arnold representation theorem, offer a data-efficient alternative to traditional Multi-Layer Perceptrons, requiring fewer hidden layers to achieve a higher level of performance. By leveraging the flexibility of KANs, KANOP provides a learnable alternative to the conventional set of basis functions used in the LSMC model, allowing the model to adapt to the pricing task and effectively estimate the expected continuation value. Using examples of standard American and Asian-American options, we demonstrate that KANOP produces more reliable option value estimates, both for single-dimensional cases and in more complex scenarios involving multiple input variables. The delta estimated by the KANOP model is also more accurate than that obtained using conventional basis functions, which is crucial for effective option hedging. Graphical illustrations further validate KANOP's ability to accurately model the expected continuation value for American-style options.<|reference_end|> | arxiv | @article{handal2024kanop:,
title={KANOP: A Data-Efficient Option Pricing Model using Kolmogorov-Arnold
Networks},
author={Rushikesh Handal, Kazuki Matoya, Yunzhuo Wang, Masanori Hirano},
journal={arXiv preprint arXiv:2410.00419},
year={2024},
archivePrefix={arXiv},
eprint={2410.00419},
primaryClass={q-fin.CP cs.CE q-fin.MF q-fin.PR}
} | handal2024kanop: |
arxiv-663961 | 2410.00422 | Exploring Physics-Informed Neural Networks: From Fundamentals to Applications in Complex Systems | <|reference_start|>Exploring Physics-Informed Neural Networks: From Fundamentals to Applications in Complex Systems: Physics-informed neural networks (PINNs) have emerged as a versatile and widely applicable concept across various science and engineering domains over the past decade. This article offers a comprehensive overview of the fundamentals of PINNs, tracing their evolution, modifications, and various variants. It explores the impact of different parameters on PINNs and the optimization algorithms involved. The review also delves into the theoretical advancements related to the convergence, consistency, and stability of numerical solutions using PINNs, while highlighting the current state of the art. Given their ability to address equations involving complex physics, the article discusses various applications of PINNs, with a particular focus on their utility in computational fluid dynamics problems. Additionally, it identifies current gaps in the research and outlines future directions for the continued development of PINNs.<|reference_end|> | arxiv | @article{ganga2024exploring,
title={Exploring Physics-Informed Neural Networks: From Fundamentals to
Applications in Complex Systems},
author={Sai Ganga, Ziya Uddin},
journal={arXiv preprint arXiv:2410.00422},
year={2024},
archivePrefix={arXiv},
eprint={2410.00422},
primaryClass={cs.CE}
} | ganga2024exploring |
arxiv-663962 | 2410.00423 | Are LLMs Aware that Some Questions are not Open-ended? | <|reference_start|>Are LLMs Aware that Some Questions are not Open-ended?: Large Language Models (LLMs) have shown the impressive capability of answering questions in a wide range of scenarios. However, when LLMs face different types of questions, it is worth exploring whether LLMs are aware that some questions have limited answers and require more deterministic responses, while others do not. We refer to this as the question awareness of LLMs. The lack of question awareness in LLMs leads to two phenomena: LLMs are (1) too casual when answering non-open-ended questions or (2) too boring when answering open-ended questions. In this paper, we first evaluate the question awareness of LLMs. The experimental results show that LLMs lack question awareness in certain domains, e.g., factual knowledge, resulting in hallucinations during generation. To mitigate these issues, we propose a method called Question Awareness Temperature Sampling (QuATS). This method enhances the question awareness of LLMs by adaptively adjusting the output distributions based on question features. The automatic adjustment in QuATS eliminates the need for manual temperature tuning in text generation and consistently improves model performance in various benchmarks.<|reference_end|> | arxiv | @article{yang2024are,
title={Are LLMs Aware that Some Questions are not Open-ended?},
author={Dongjie Yang and Hai Zhao},
journal={arXiv preprint arXiv:2410.00423},
year={2024},
archivePrefix={arXiv},
eprint={2410.00423},
primaryClass={cs.CL}
} | yang2024are |
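The abstract above specifies QuATS only as adaptive adjustment of output distributions based on question features. Below is a hedged sketch of that style of mechanism, plain temperature scaling of logits driven by a hypothetical awareness score; `awareness_score` and all constants are invented stand-ins, not the paper's actual method.

```python
# Sketch of question-adaptive temperature scaling in the spirit of QuATS.
# The mapping `awareness_score` from question features to temperature is a
# hypothetical stand-in; the paper's feature extraction is not shown here.
import numpy as np

def awareness_score(question: str) -> float:
    """Toy proxy: treat wh-style questions as open-ended (high score)."""
    open_markers = ("why", "how", "describe", "discuss")
    return 1.0 if question.lower().startswith(open_markers) else 0.0

def sample_token(logits: np.ndarray, question: str, rng) -> int:
    # Low temperature -> near-deterministic for closed questions,
    # higher temperature -> more diverse for open-ended ones.
    temperature = 0.3 + 0.9 * awareness_score(question)
    z = logits / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return rng.choice(len(logits), p=p)

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.5, 0.2, -1.0])
print(sample_token(logits, "What is the capital of France?", rng))
print(sample_token(logits, "Why do people enjoy music?", rng))
```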
arxiv-663963 | 2410.00425 | ManiSkill3: GPU Parallelized Robotics Simulation and Rendering for Generalizable Embodied AI | <|reference_start|>ManiSkill3: GPU Parallelized Robotics Simulation and Rendering for Generalizable Embodied AI: Simulation has enabled unprecedented compute-scalable approaches to robot learning. However, many existing simulation frameworks typically support a narrow range of scenes/tasks and lack features critical for scaling generalizable robotics and sim2real. We introduce and open source ManiSkill3, the fastest state-visual GPU parallelized robotics simulator with contact-rich physics targeting generalizable manipulation. ManiSkill3 supports GPU parallelization of many aspects including simulation+rendering, heterogeneous simulation, point cloud/voxel visual input, and more. Simulation with rendering on ManiSkill3 can run 10-1000x faster with 2-3x less GPU memory usage than other platforms, achieving up to 30,000+ FPS in benchmarked environments due to minimal python/pytorch overhead in the system, simulation on the GPU, and the use of the SAPIEN parallel rendering system. Tasks that used to take hours to train can now take minutes. We further provide the most comprehensive range of GPU parallelized environments/tasks spanning 12 distinct domains including but not limited to mobile manipulation for tasks such as drawing, humanoids, and dexterous manipulation in realistic scenes designed by artists or real-world digital twins. In addition, millions of demonstration frames are provided from motion planning, RL, and teleoperation. ManiSkill3 also provides a comprehensive set of baselines that span popular RL and learning-from-demonstrations algorithms.<|reference_end|> | arxiv | @article{tao2024maniskill3:,
title={ManiSkill3: GPU Parallelized Robotics Simulation and Rendering for
Generalizable Embodied AI},
author={Stone Tao, Fanbo Xiang, Arth Shukla, Yuzhe Qin, Xander Hinrichsen,
Xiaodi Yuan, Chen Bao, Xinsong Lin, Yulin Liu, Tse-kai Chan, Yuan Gao,
Xuanlin Li, Tongzhou Mu, Nan Xiao, Arnav Gurha, Zhiao Huang, Roberto
Calandra, Rui Chen, Shan Luo, Hao Su},
journal={arXiv preprint arXiv:2410.00425},
year={2024},
archivePrefix={arXiv},
eprint={2410.00425},
primaryClass={cs.RO cs.AI}
} | tao2024maniskill3: |
arxiv-663964 | 2410.00427 | Conversational Exploratory Search of Scholarly Publications Using Knowledge Graphs | <|reference_start|>Conversational Exploratory Search of Scholarly Publications Using Knowledge Graphs: Traditional search methods primarily depend on string matches, while semantic search targets concept-based matches by recognizing underlying intents and contextual meanings of search terms. Semantic search is particularly beneficial for discovering scholarly publications where differences in vocabulary between users' search terms and document content are common, often yielding irrelevant search results. Many scholarly search engines have adopted knowledge graphs to represent semantic relations between authors, publications, and research concepts. However, users may face challenges when navigating these graphical search interfaces due to the complexity and volume of data, which impedes their ability to discover publications effectively. To address this problem, we developed a conversational search system for exploring scholarly publications using a knowledge graph. We outline the methodical approach for designing and implementing the proposed system, detailing its architecture and functional components. To assess the system's effectiveness, we employed various performance metrics and conducted a human evaluation with 40 participants, demonstrating how the conversational interface compares against a graphical interface with traditional text search. The findings from our evaluation provide practical insights for advancing the design of conversational search systems.<|reference_end|> | arxiv | @article{schneider2024conversational,
title={Conversational Exploratory Search of Scholarly Publications Using
Knowledge Graphs},
author={Phillip Schneider and Florian Matthes},
journal={arXiv preprint arXiv:2410.00427},
year={2024},
archivePrefix={arXiv},
eprint={2410.00427},
primaryClass={cs.CL cs.IR}
} | schneider2024conversational |
arxiv-663965 | 2410.00428 | LayerKV: Optimizing Large Language Model Serving with Layer-wise KV Cache Management | <|reference_start|>LayerKV: Optimizing Large Language Model Serving with Layer-wise KV Cache Management: The expanding context windows in large language models (LLMs) have greatly enhanced their capabilities in various applications, but they also introduce significant challenges in maintaining low latency, particularly in Time to First Token (TTFT). This paper identifies that the sharp rise in TTFT as context length increases is predominantly driven by queuing delays, which are caused by the growing demands for GPU Key-Value (KV) cache allocation clashing with the limited availability of KV cache blocks. To address this issue, we propose LayerKV, a simple yet effective plug-in method that effectively reduces TTFT without requiring additional hardware or compromising output performance, while seamlessly integrating with existing parallelism strategies and scheduling techniques. Specifically, LayerKV introduces layer-wise KV block allocation, management, and offloading for fine-grained control over system memory, coupled with an SLO-aware scheduler to optimize overall Service Level Objectives (SLOs). Comprehensive evaluations on representative models, ranging from 7B to 70B parameters, across various GPU configurations, demonstrate that LayerKV improves TTFT latency up to 69x and reduces SLO violation rates by 28.7%, significantly enhancing the user experience.<|reference_end|> | arxiv | @article{xiong2024layerkv:,
title={LayerKV: Optimizing Large Language Model Serving with Layer-wise KV
Cache Management},
author={Yi Xiong, Hao Wu, Changxu Shao, Ziqing Wang, Rui Zhang, Yuhong Guo,
Junping Zhao, Ke Zhang, Zhenxuan Pan},
journal={arXiv preprint arXiv:2410.00428},
year={2024},
archivePrefix={arXiv},
eprint={2410.00428},
primaryClass={cs.DC cs.AI cs.LG}
} | xiong2024layerkv: |
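The abstract above describes layer-wise KV block allocation and offloading without implementation detail. The toy manager below illustrates the per-layer accounting idea only; the class API, capacities, and eviction policy are assumptions for illustration, not LayerKV's actual system.

```python
# Toy sketch of layer-wise KV cache allocation with offloading, in the
# spirit of LayerKV. Block sizes, thresholds, and the manager API are
# hypothetical illustrations, not the paper's implementation.
from collections import defaultdict

class LayerKVManager:
    def __init__(self, gpu_blocks_per_layer: int):
        self.capacity = gpu_blocks_per_layer
        self.gpu = defaultdict(list)   # layer -> block ids resident on GPU
        self.cpu = defaultdict(list)   # layer -> block ids offloaded to host

    def allocate(self, layer: int, block_id: int):
        # Per-layer accounting lets early layers free or offload blocks
        # independently of later layers, shrinking queueing delay.
        if len(self.gpu[layer]) < self.capacity:
            self.gpu[layer].append(block_id)
        else:
            self.offload(layer)
            self.gpu[layer].append(block_id)

    def offload(self, layer: int):
        # Evict the oldest block of this layer to host memory.
        victim = self.gpu[layer].pop(0)
        self.cpu[layer].append(victim)

mgr = LayerKVManager(gpu_blocks_per_layer=2)
for blk in range(4):
    mgr.allocate(layer=0, block_id=blk)
print(mgr.gpu[0], mgr.cpu[0])  # [2, 3] [0, 1]
```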
arxiv-663966 | 2410.00432 | Scalable Multi-Task Transfer Learning for Molecular Property Prediction | <|reference_start|>Scalable Multi-Task Transfer Learning for Molecular Property Prediction: Molecules have a number of distinct properties whose importance and application vary. In practice, labels for some properties are hard to obtain despite their practical importance. A common solution to such data scarcity is transfer learning with models that generalize well, which requires domain experts to design source and target tasks that share features. However, this approach has limitations: (i) accurately designing source-target task pairs is difficult given the large number of tasks; (ii) verifying transfer learning designs through many trials and errors carries a corresponding computational burden; and thereby (iii) the potential of foundation modeling for multi-task molecular property prediction is constrained. We address the limitations of manually designed transfer learning via data-driven bi-level optimization. The proposed method enables scalable multi-task transfer learning for molecular property prediction by automatically obtaining the optimal transfer ratios. Empirically, the proposed method improved the prediction performance of 40 molecular properties and accelerated training convergence.<|reference_end|> | arxiv | @article{lee2024scalable,
title={Scalable Multi-Task Transfer Learning for Molecular Property Prediction},
author={Chanhui Lee, Dae-Woong Jeong, Sung Moon Ko, Sumin Lee, Hyunseung Kim,
Soorin Yim, Sehui Han, Sungwoong Kim, Sungbin Lim},
journal={ICML2024-AI4Science Poster},
year={2024},
archivePrefix={arXiv},
eprint={2410.00432},
primaryClass={cs.LG cs.AI}
} | lee2024scalable |
arxiv-663967 | 2410.00433 | PrivTuner with Homomorphic Encryption and LoRA: A P3EFT Scheme for Privacy-Preserving Parameter-Efficient Fine-Tuning of AI Foundation Models | <|reference_start|>PrivTuner with Homomorphic Encryption and LoRA: A P3EFT Scheme for Privacy-Preserving Parameter-Efficient Fine-Tuning of AI Foundation Models: AI foundation models have recently demonstrated impressive capabilities across a wide range of tasks. Fine-tuning (FT) is a method of customizing a pre-trained AI foundation model by further training it on a smaller, targeted dataset. In this paper, we initiate the study of the Privacy-Preserving Parameter-Efficient FT (P3EFT) framework, which can be viewed as the intersection of Parameter-Efficient FT (PEFT) and Privacy-Preserving FT (PPFT). PEFT modifies only a small subset of the model's parameters to achieve FT (i.e., adapting a pre-trained model to a specific dataset), while PPFT uses privacy-preserving technologies to protect the confidentiality of the model during the FT process. There have been many studies on PEFT or PPFT but very few on their fusion, which motivates our work on P3EFT to achieve both parameter efficiency and model privacy. To exemplify our P3EFT, we present the PrivTuner scheme, which incorporates Fully Homomorphic Encryption (FHE) enabled privacy protection into LoRA (short for ``Low-Rank Adapter''). Intuitively speaking, PrivTuner allows the model owner and the external data owners to collaboratively implement PEFT with encrypted data. After describing PrivTuner in detail, we further investigate its energy consumption and privacy protection. Then, we consider a PrivTuner system over wireless communications and formulate a joint optimization problem to adaptively minimize energy while maximizing privacy protection, with the optimization variables including FDMA bandwidth allocation, wireless transmission power, computational resource allocation, and privacy protection. A resource allocation algorithm is devised to solve the problem. Experiments demonstrate that our algorithm can significantly reduce energy consumption while adapting to different privacy requirements.<|reference_end|> | arxiv | @article{li2024privtuner,
title={PrivTuner with Homomorphic Encryption and LoRA: A P3EFT Scheme for
Privacy-Preserving Parameter-Efficient Fine-Tuning of AI Foundation Models},
author={Yang Li, Wenhan Yu, Jun Zhao},
journal={arXiv preprint arXiv:2410.00433},
year={2024},
archivePrefix={arXiv},
eprint={2410.00433},
primaryClass={cs.CR}
} | li2024privtuner |
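PrivTuner builds on LoRA, whose reparameterization is standard and easy to show in plaintext. The sketch below gives the usual low-rank update; the fully homomorphic encryption layer that PrivTuner adds on top requires a dedicated FHE library and is omitted here, and all shapes and values are illustrative.

```python
# LoRA reparameterization sketch (plaintext only): W_eff = W + (alpha/r) * B @ A.
# PrivTuner runs this style of update under fully homomorphic encryption;
# the FHE layer is omitted here. Shapes and scaling follow the standard
# LoRA formulation; the values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 4, 8

W = rng.standard_normal((d_out, d_in))       # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01    # trainable low-rank factor
B = np.zeros((d_out, r))                     # zero-init so W_eff starts at W

def forward(x: np.ndarray) -> np.ndarray:
    # Only A and B are updated during fine-tuning; W stays frozen,
    # so the trainable parameter count is r * (d_in + d_out).
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((2, d_in))
print(forward(x).shape)  # (2, 64)
```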
arxiv-663968 | 2410.00434 | Deceptive Risks in LLM-enhanced Robots | <|reference_start|>Deceptive Risks in LLM-enhanced Robots: This case study investigates a critical glitch in the integration of Large Language Models (LLMs) into social robots. LLMs, including ChatGPT, were found to falsely claim to have reminder functionalities, such as setting notifications for medication intake. We tested commercially available care software, which integrated ChatGPT, running on the Pepper robot and consistently reproduced this deceptive pattern. Not only did the system falsely claim the ability to set reminders, but it also proactively suggested managing medication schedules. The persistence of this issue presents a significant risk in healthcare settings, where system reliability is paramount. This case highlights the ethical and safety concerns surrounding the deployment of LLM-integrated robots in healthcare, emphasizing the urgent need for regulatory oversight to prevent potentially harmful consequences for vulnerable populations.<|reference_end|> | arxiv | @article{ranisch2024deceptive,
title={Deceptive Risks in LLM-enhanced Robots},
author={Robert Ranisch and Joschka Haltaufderheide},
journal={arXiv preprint arXiv:2410.00434},
year={2024},
archivePrefix={arXiv},
eprint={2410.00434},
primaryClass={cs.CY cs.RO}
} | ranisch2024deceptive |
arxiv-663969 | 2410.00435 | EKAN: Equivariant Kolmogorov-Arnold Networks | <|reference_start|>EKAN: Equivariant Kolmogorov-Arnold Networks: Kolmogorov-Arnold Networks (KANs) have seen great success in scientific domains thanks to spline activation functions, becoming an alternative to Multi-Layer Perceptrons (MLPs). However, spline functions may not respect symmetry in tasks, which is crucial prior knowledge in machine learning. Previously, equivariant networks embedded symmetry into their architectures, achieving better performance in specific applications. Among these, Equivariant Multi-Layer Perceptrons (EMLP) introduce arbitrary matrix group equivariance into MLPs, providing a general framework for constructing equivariant networks layer by layer. In this paper, we propose Equivariant Kolmogorov-Arnold Networks (EKAN), a method for incorporating matrix group equivariance into KANs, aiming to broaden their applicability to more fields. First, we construct gated spline basis functions, which form the EKAN layer together with equivariant linear weights. We then define a lift layer to align the input space of EKAN with the feature space of the dataset, thereby building the entire EKAN architecture. Compared with baseline models, EKAN achieves higher accuracy with smaller datasets or fewer parameters on symmetry-related tasks, such as particle scattering and the three-body problem, often reducing test MSE by several orders of magnitude. Even in non-symbolic formula scenarios, such as top quark tagging with three jet constituents, EKAN achieves comparable results with EMLP using only $26\%$ of the parameters, while KANs do not outperform MLPs as expected.<|reference_end|> | arxiv | @article{hu2024ekan:,
title={EKAN: Equivariant Kolmogorov-Arnold Networks},
author={Lexiang Hu, Yisen Wang, Zhouchen Lin},
journal={arXiv preprint arXiv:2410.00435},
year={2024},
archivePrefix={arXiv},
eprint={2410.00435},
primaryClass={cs.LG}
} | hu2024ekan: |
arxiv-663970 | 2410.00436 | Task Success Prediction for Open-Vocabulary Manipulation Based on Multi-Level Aligned Representations | <|reference_start|>Task Success Prediction for Open-Vocabulary Manipulation Based on Multi-Level Aligned Representations: In this study, we consider the problem of predicting task success for open-vocabulary manipulation by a manipulator, based on instruction sentences and egocentric images before and after manipulation. Conventional approaches, including multimodal large language models (MLLMs), often fail to appropriately understand detailed characteristics of objects and/or subtle changes in the position of objects. We propose Contrastive $\lambda$-Repformer, which predicts task success for table-top manipulation tasks by aligning images with instruction sentences. Our method integrates the following three key types of features into a multi-level aligned representation: features that preserve local image information; features aligned with natural language; and features structured through natural language. This allows the model to focus on important changes by looking at the differences in the representation between two images. We evaluate Contrastive $\lambda$-Repformer on a dataset based on a large-scale standard dataset, the RT-1 dataset, and on a physical robot platform. The results show that our approach outperformed existing approaches including MLLMs. Our best model achieved an improvement of 8.66 points in accuracy compared to the representative MLLM-based model.<|reference_end|> | arxiv | @article{goko2024task,
title={Task Success Prediction for Open-Vocabulary Manipulation Based on
Multi-Level Aligned Representations},
author={Miyu Goko, Motonari Kambara, Daichi Saito, Seitaro Otsuki, Komei
Sugiura},
journal={arXiv preprint arXiv:2410.00436},
year={2024},
archivePrefix={arXiv},
eprint={2410.00436},
primaryClass={cs.RO cs.CV}
} | goko2024task |
arxiv-663971 | 2410.00438 | A structure-preserving parametric finite element method for solid-state dewetting on curved substrates | <|reference_start|>A structure-preserving parametric finite element method for solid-state dewetting on curved substrates: We consider a two-dimensional sharp-interface model for solid-state dewetting of thin films with anisotropic surface energies on curved substrates, where the film/vapor interface and substrate surface are represented by an evolving and a static curve, respectively. The model is governed by the anisotropic surface diffusion for the evolving curve, with appropriate boundary conditions at the contact points where the two curves meet. The continuum model obeys an energy decay law and preserves the enclosed area between the two curves. We introduce an arclength parameterization for the substrate curve, which plays a crucial role in a structure-preserving approximation as it straightens the curved substrate and tracks length changes between contact points. Based on this insight, we introduce a symmetrized weak formulation which leads to an unconditional energy stable parametric approximation in terms of the discrete energy. We also provide an error estimate of the enclosed area, which depends on the substrate profile and can be zero in the case of a flat substrate. Furthermore, we introduce a correction to the discrete normals to enable an exact area preservation for general curved substrates. The resulting nonlinear system is efficiently solved using a hybrid iterative algorithm which combines both Picard and Newton's methods. Numerical results are presented to show the robustness and good properties of the introduced method for simulating solid-state dewetting on various curved substrates.<|reference_end|> | arxiv | @article{bao2024a,
title={A structure-preserving parametric finite element method for solid-state
dewetting on curved substrates},
author={Weizhu Bao, Yifei Li and Quan Zhao},
journal={arXiv preprint arXiv:2410.00438},
year={2024},
archivePrefix={arXiv},
eprint={2410.00438},
primaryClass={math.NA cs.NA physics.comp-ph}
} | bao2024a |
arxiv-663972 | 2410.00441 | ReXplain: Translating Radiology into Patient-Friendly Video Reports | <|reference_start|>ReXplain: Translating Radiology into Patient-Friendly Video Reports: Radiology reports often remain incomprehensible to patients, undermining patient-centered care. We present ReXplain (Radiology eXplanation), an innovative AI-driven system that generates patient-friendly video reports for radiology findings. ReXplain uniquely integrates a large language model for text simplification, an image segmentation model for anatomical region identification, and an avatar generation tool, producing comprehensive explanations with plain language, highlighted imagery, and 3D organ renderings. Our proof-of-concept study with five board-certified radiologists indicates that ReXplain could accurately deliver radiological information and effectively simulate one-on-one consultations. This work demonstrates a new paradigm in AI-assisted medical communication, potentially improving patient engagement and satisfaction in radiology care, and opens new avenues for research in multimodal medical communication.<|reference_end|> | arxiv | @article{luo2024rexplain:,
title={ReXplain: Translating Radiology into Patient-Friendly Video Reports},
author={Luyang Luo, Jenanan Vairavamurthy, Xiaoman Zhang, Abhinav Kumar, Ramon
R. Ter-Oganesyan, Stuart T. Schroff, Dan Shilo, Rydhwana Hossain, Mike
Moritz, Pranav Rajpurkar},
journal={arXiv preprint arXiv:2410.00441},
year={2024},
archivePrefix={arXiv},
eprint={2410.00441},
primaryClass={cs.AI eess.IV}
} | luo2024rexplain: |
arxiv-663973 | 2410.00447 | Scene Graph Disentanglement and Composition for Generalizable Complex Image Generation | <|reference_start|>Scene Graph Disentanglement and Composition for Generalizable Complex Image Generation: There has been exciting progress in generating images from natural language or layout conditions. However, these methods struggle to faithfully reproduce complex scenes due to the insufficient modeling of multiple objects and their relationships. To address this issue, we leverage the scene graph, a powerful structured representation, for complex image generation. Different from the previous works that directly use scene graphs for generation, we employ the generative capabilities of variational autoencoders and diffusion models in a generalizable manner, compositing diverse disentangled visual clues from scene graphs. Specifically, we first propose a Semantics-Layout Variational AutoEncoder (SL-VAE) to jointly derive (layouts, semantics) from the input scene graph, which allows a more diverse and reasonable generation in a one-to-many mapping. We then develop a Compositional Masked Attention (CMA) integrated with a diffusion model, incorporating (layouts, semantics) with fine-grained attributes as generation guidance. To further achieve graph manipulation while keeping the visual content consistent, we introduce a Multi-Layered Sampler (MLS) for an "isolated" image editing effect. Extensive experiments demonstrate that our method outperforms recent competitors based on text, layout, or scene graph, in terms of generation rationality and controllability.<|reference_end|> | arxiv | @article{wang2024scene,
title={Scene Graph Disentanglement and Composition for Generalizable Complex
Image Generation},
author={Yunnan Wang, Ziqiang Li, Zequn Zhang, Wenyao Zhang, Baao Xie, Xihui
Liu, Wenjun Zeng, Xin Jin},
journal={arXiv preprint arXiv:2410.00447},
year={2024},
archivePrefix={arXiv},
eprint={2410.00447},
primaryClass={cs.CV}
} | wang2024scene |
arxiv-663974 | 2410.00448 | Advancing Medical Radiograph Representation Learning: A Hybrid Pre-training Paradigm with Multilevel Semantic Granularity | <|reference_start|>Advancing Medical Radiograph Representation Learning: A Hybrid Pre-training Paradigm with Multilevel Semantic Granularity: This paper introduces an innovative approach to Medical Vision-Language Pre-training (Med-VLP) area in the specialized context of radiograph representation learning. While conventional methods frequently merge textual annotations into unified reports, we acknowledge the intrinsic hierarchical relationship between the findings and impression section in radiograph datasets. To establish a targeted correspondence between images and texts, we propose a novel HybridMED framework to align global-level visual representations with impression and token-level visual representations with findings. Moreover, our framework incorporates a generation decoder that employs two proxy tasks, responsible for generating the impression from (1) images, via a captioning branch, and (2) findings, through a summarization branch. Additionally, knowledge distillation is leveraged to facilitate the training process. Experiments on the MIMIC-CXR dataset reveal that our summarization branch effectively distills knowledge to the captioning branch, enhancing model performance without significantly increasing parameter requirements due to the shared self-attention and feed-forward architecture.<|reference_end|> | arxiv | @article{jiang2024advancing,
title={Advancing Medical Radiograph Representation Learning: A Hybrid
Pre-training Paradigm with Multilevel Semantic Granularity},
author={Hanqi Jiang, Xixuan Hao, Yuzhou Huang, Chong Ma, Jiaxun Zhang, Yi Pan,
and Ruimao Zhang},
journal={ECCV 2024 Workshop},
year={2024},
archivePrefix={arXiv},
eprint={2410.00448},
primaryClass={cs.CV}
} | jiang2024advancing |
arxiv-663975 | 2410.00449 | Examining Input Modalities and Visual Feedback Designs in Mobile Expressive Writing | <|reference_start|>Examining Input Modalities and Visual Feedback Designs in Mobile Expressive Writing: Expressive writing is an established approach for stress management, and recent practices include information technology. Although mobile interfaces have the potential to support daily stress management practices, interface designs for such mobile expressive writing and their effects on stress relief still lack empirical understanding. To fill the gap, we examined the interface design of mobile expressive writing by investigating the influence of input modalities and visual feedback designs on usability and perceived cathartic effects through in-the-wild studies. While our studies confirmed the stress relief effects of mobile expressive writing, our results offer important insights into interface design. We found keyboard-based text entry to be more user-friendly and preferred over voice messages due to its privacy friendliness and reflection process. Participants expressed different reasons for preferring different post-writing visual feedback depending on the cause and type of stress. This paper also discusses future research opportunities in interface designs for mobile expressive writing.<|reference_end|> | arxiv | @article{norihama2024examining,
title={Examining Input Modalities and Visual Feedback Designs in Mobile
Expressive Writing},
author={Shunpei Norihama, Shixian Geng, Kakeru Miyazaki, Arissa J. Sato, Mari
Hirano, Simo Hosio, Koji Yatani},
journal={arXiv preprint arXiv:2410.00449},
year={2024},
archivePrefix={arXiv},
eprint={2410.00449},
primaryClass={cs.HC}
} | norihama2024examining |
arxiv-663976 | 2410.00451 | Adversarial Suffixes May Be Features Too! | <|reference_start|>Adversarial Suffixes May Be Features Too!: Despite significant ongoing efforts in safety alignment, large language models (LLMs) such as GPT-4 and LLaMA 3 remain vulnerable to jailbreak attacks that can induce harmful behaviors, including those triggered by adversarial suffixes. Building on prior research, we hypothesize that these adversarial suffixes are not mere bugs but may represent features that can dominate the LLM's behavior. To evaluate this hypothesis, we conduct several experiments. First, we demonstrate that benign features can be effectively made to function as adversarial suffixes, i.e., we develop a feature extraction method to extract sample-agnostic features from benign dataset in the form of suffixes and show that these suffixes may effectively compromise safety alignment. Second, we show that adversarial suffixes generated from jailbreak attacks may contain meaningful features, i.e., appending the same suffix to different prompts results in responses exhibiting specific characteristics. Third, we show that such benign-yet-safety-compromising features can be easily introduced through fine-tuning using only benign datasets, i.e., even in the absence of harmful content. This highlights the critical risk posed by dominating benign features in the training data and calls for further research to reinforce LLM safety alignment. Our code and data is available at \url{https://github.com/suffix-maybe-feature/adver-suffix-maybe-features}.<|reference_end|> | arxiv | @article{zhao2024adversarial,
title={Adversarial Suffixes May Be Features Too!},
author={Wei Zhao, Zhe Li, Yige Li, Jun Sun},
journal={arXiv preprint arXiv:2410.00451},
year={2024},
archivePrefix={arXiv},
eprint={2410.00451},
primaryClass={cs.CR cs.AI cs.CL}
} | zhao2024adversarial |
arxiv-663977 | 2410.00452 | A Scheduling-Aware Defense Against Prefetching-Based Side-Channel Attacks | <|reference_start|>A Scheduling-Aware Defense Against Prefetching-Based Side-Channel Attacks: Modern computer processors use microarchitectural optimization mechanisms to improve performance. As a downside, such optimizations are prone to introducing side-channel vulnerabilities. Speculative loading of memory, called prefetching, is common in real-world CPUs and may cause such side-channel vulnerabilities: Prior work has shown that it can be exploited to bypass process isolation and leak secrets, such as keys used in RSA, AES, and ECDH implementations. However, to this date, no effective and efficient countermeasure has been presented that secures software on systems with affected prefetchers. In this work, we answer the question: How can a process defend against prefetch-based side channels? We first systematize prefetching-based side-channel vulnerabilities presented in academic literature so far. Next, we design and implement PreFence, a scheduling-aware defense against these side channels that allows processes to disable the prefetcher temporarily during security-critical operations. We implement our countermeasure for an x86_64 and an ARM processor; it can be adapted to any platform that allows to disable the prefetcher. We evaluate our defense and find that our solution reliably stops prefetch leakage. Our countermeasure causes negligible performance impact while no security-relevant code is executed, and its worst case performance is comparable to completely turning off the prefetcher. The expected average performance impact depends on the security-relevant code in the application and can be negligible as we demonstrate with a simple web server application. We expect our countermeasure could widely be integrated in commodity OS, and even be extended to signal generally security-relevant code to the kernel to allow coordinated application of countermeasures.<|reference_end|> | arxiv | @article{schlüter2024a,
title={A Scheduling-Aware Defense Against Prefetching-Based Side-Channel
Attacks},
author={Till Schlüter, Nils Ole Tippenhauer},
journal={arXiv preprint arXiv:2410.00452},
year={2024},
archivePrefix={arXiv},
eprint={2410.00452},
primaryClass={cs.CR}
} | schlüter2024a |
arxiv-663978 | 2410.00453 | The NetMob2024 Dataset: Population Density and OD Matrices from Four LMIC Countries | <|reference_start|>The NetMob2024 Dataset: Population Density and OD Matrices from Four LMIC Countries: The NetMob24 dataset offers a unique opportunity for researchers from a range of academic fields to access comprehensive spatiotemporal data sets spanning four countries (India, Mexico, Indonesia, and Colombia) over the course of two years (2019 and 2020). This dataset, developed in collaboration with Cuebiq (also referred to as Spectus), comprises privacy-preserving aggregated data sets derived from mobile application (app) data collected from users who have voluntarily consented to anonymous data collection for research purposes. It is our hope that this reference dataset will foster the production of new research methods and the reproducibility of research outcomes.<|reference_end|> | arxiv | @article{zhang2024the,
title={The NetMob2024 Dataset: Population Density and OD Matrices from Four
LMIC Countries},
author={Wenlan Zhang and Miguel Nunez del Prado and Vincent Gauthier and Sveta
Milusheva},
journal={arXiv preprint arXiv:2410.00453},
year={2024},
archivePrefix={arXiv},
eprint={2410.00453},
primaryClass={cs.NI cs.CY cs.SI}
} | zhang2024the |
arxiv-663979 | 2410.00454 | UniAdapt: A Universal Adapter for Knowledge Calibration | <|reference_start|>UniAdapt: A Universal Adapter for Knowledge Calibration: Large Language Models (LLMs) require frequent updates to correct errors and keep pace with continuously evolving knowledge in a timely and effective manner. Recent research in model editing has highlighted the challenges in balancing generalization and locality, especially in the context of lifelong model editing. We discover that inserting knowledge directly into the model often causes conflicts and potentially disrupts other unrelated pre-trained knowledge. To address this problem, we introduce UniAdapt, a universal adapter for knowledge calibration. Inspired by the Mixture of Experts architecture and Retrieval-Augmented Generation, UniAdapt is designed with a vector-assisted router that is responsible for routing inputs to appropriate experts. The router maintains a vector store, including multiple shards, to construct routing vectors based on semantic similarity search results. UniAdapt is fully model-agnostic and designed for seamless plug-and-play integration. Experimental results show that UniAdapt outperforms existing lifelong model editors and achieves exceptional results in most metrics.<|reference_end|> | arxiv | @article{nguyen2024uniadapt:,
title={UniAdapt: A Universal Adapter for Knowledge Calibration},
author={Tai D. Nguyen, Long H. Pham, Jun Sun},
journal={arXiv preprint arXiv:2410.00454},
year={2024},
archivePrefix={arXiv},
eprint={2410.00454},
primaryClass={cs.LG}
} | nguyen2024uniadapt: |
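The vector-assisted router is described above only at a high level. The following sketch routes a query embedding to the expert whose stored key vectors score highest under cosine similarity; the shard layout, similarity metric, and routing rule are assumptions for illustration, not the paper's design.

```python
# Sketch of a vector-assisted router in the spirit of UniAdapt: route an
# input embedding to the expert whose stored key vectors are most similar.
# The shard layout, similarity metric, and expert API are assumptions.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return b @ a

class VectorRouter:
    def __init__(self, num_experts: int, dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # One shard of key vectors per expert (filled during editing).
        self.shards = [rng.standard_normal((8, dim)) for _ in range(num_experts)]

    def route(self, query: np.ndarray) -> int:
        # Score each expert by its best-matching stored key.
        scores = [cosine(query, shard).max() for shard in self.shards]
        return int(np.argmax(scores))

router = VectorRouter(num_experts=4, dim=16)
q = np.random.default_rng(1).standard_normal(16)
print(router.route(q))
```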
arxiv-663980 | 2410.00455 | Fine-Grained Vectorized Merge Sorting on RISC-V: From Register to Cache | <|reference_start|>Fine-Grained Vectorized Merge Sorting on RISC-V: From Register to Cache: Merge sort as a divide-sort-merge paradigm has been widely applied in computer science fields. As modern reduced instruction set computing architectures like the fifth-generation RISC-V regard multiple registers as a vector register group for wide instruction parallelism, optimizing merge sort with this vectorized property is becoming increasingly common. In this paper, we overhaul the divide-sort-merge paradigm, from its register-level sort to the cache-aware merge, to develop a fine-grained RISC-V vectorized merge sort (RVMS). From the register-level view, the inline vectorized transpose instruction is missing in RISC-V, so implementing it efficiently is non-trivial. Besides, vectorized comparisons do not always work well in merging networks. Both issues primarily stem from the expensive data shuffle instruction. To bypass it, RVMS uses register data as a proxy for data shuffles to accelerate the transpose operation, and replaces vectorized comparisons with their scalar counterparts for lighter real-value swaps. On the other hand, as cache-aware merging handles larger data in the cache, most merge schemes have two drawbacks: the in-cache merge usually has low cache utilization, while the out-of-cache merging network retains an ineffectively symmetric structure. To this end, we propose a half-merge scheme that employs the auxiliary space of in-place merge to halve the footprint of naive merge sort, and copies one sequence into this space to avoid the aforementioned data exchange. Furthermore, an asymmetric merging network is developed to adapt to two different input sizes.<|reference_end|> | arxiv | @article{zhang2024fine-grained,
title={Fine-Grained Vectorized Merge Sorting on RISC-V: From Register to Cache},
author={Jin Zhang, Jincheng Zhou, Xiang Zhang, Di Ma, Chunye Gong},
journal={arXiv preprint arXiv:2410.00455},
year={2024},
archivePrefix={arXiv},
eprint={2410.00455},
primaryClass={cs.DC}
} | zhang2024fine-grained |
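The half-merge scheme from the abstract can be illustrated in scalar form: copy only the left run into an auxiliary buffer (n/2 extra space rather than n) and merge it with the right run back into the array in place. The sketch below shows this generic idea; the paper's RISC-V version additionally vectorizes it over vector register groups.

```python
# Half-memory merge sketch in the spirit of RVMS's half-merge scheme:
# copy only the left run into an auxiliary buffer, then merge buffer
# and right run back into the array. Generic scalar illustration only.
def half_merge(a, lo, mid, hi):
    buf = a[lo:mid]            # auxiliary space: only the left run
    i, j, k = 0, mid, lo
    while i < len(buf) and j < hi:
        if buf[i] <= a[j]:
            a[k] = buf[i]
            i += 1
        else:
            a[k] = a[j]
            j += 1
        k += 1
    # Remaining right-run elements are already in place; flush the buffer.
    a[k:k + len(buf) - i] = buf[i:]

def merge_sort(a, lo=0, hi=None):
    hi = len(a) if hi is None else hi
    if hi - lo > 1:
        mid = (lo + hi) // 2
        merge_sort(a, lo, mid)
        merge_sort(a, mid, hi)
        half_merge(a, lo, mid, hi)

xs = [5, 2, 9, 1, 7, 3, 8, 4]
merge_sort(xs)
print(xs)  # [1, 2, 3, 4, 5, 7, 8, 9]
```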
arxiv-663981 | 2410.00456 | Absolute centrality in a signed Friedkin-Johnsen based model: a graphical characterisation of influence | <|reference_start|>Absolute centrality in a signed Friedkin-Johnsen based model: a graphical characterisation of influence: This paper studies the evolution of opinions governed by a Friedkin-Johnsen (FJ) based model in arbitrary network structures with signed interactions. The agents contributing to the opinion formation are characterised as being influential. Initially, the agents are classified as opinion leaders and followers based on network connectivity and the nature of interactions. However, the addition of stubbornness leads to interesting behaviours wherein a non influential agent can now become influential and vice versa. Thereafter, a signal flow graph (SFG) based method is proposed to quantify the influence of an influential agents' opinions. Additionally, it helps illustrate the role played by network topology in shaping the final opinions of the agents. Based on this analysis, the absolute centrality measure is proposed to determine the overall influence of all the agents in the network. Unlike most of the existing measures, it is applicable to any network structure and considers the effect of stubbornness and antagonism. Examples are presented throughout the paper to illustrate and validate these results.<|reference_end|> | arxiv | @article{shrinate2024absolute,
title={Absolute centrality in a signed Friedkin-Johnsen based model: a
graphical characterisation of influence},
author={Aashi Shrinate and Twinkle Tripathy},
journal={arXiv preprint arXiv:2410.00456},
year={2024},
archivePrefix={arXiv},
eprint={2410.00456},
primaryClass={eess.SY cs.SY}
} | shrinate2024absolute |
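The Friedkin-Johnsen update underlying the model above is standard: x(t+1) = Λ W x(t) + (I - Λ) x(0), where Λ encodes susceptibility (so I - Λ is stubbornness) and W here carries signed influence weights. The matrices below are illustrative only; the paper's signed variant and centrality measure add structure on top of this baseline.

```python
# Friedkin-Johnsen opinion dynamics sketch: x(t+1) = Lam @ W @ x(t) + (I - Lam) @ x0,
# with signed interactions in W and per-agent susceptibility on the
# diagonal of Lam (1 - susceptibility is the agent's stubbornness).
# All numbers are illustrative; susceptibilities are kept below 1 here
# so that the iteration provably converges (spectral radius < 1).
import numpy as np

W = np.array([[0.0,  0.6, -0.4],
              [0.5,  0.0,  0.5],
              [-0.3, 0.7,  0.0]])        # signed influence matrix
lam = np.diag([0.8, 0.9, 0.6])           # susceptibility per agent
x0 = np.array([1.0, -0.5, 0.2])          # initial (prejudice) opinions

x = x0.copy()
for _ in range(200):
    x = lam @ W @ x + (np.eye(3) - lam) @ x0
print(np.round(x, 4))                     # steady-state opinions
```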
arxiv-663982 | 2410.00461 | Enhancing Solution Efficiency in Reinforcement Learning: Leveraging Sub-GFlowNet and Entropy Integration | <|reference_start|>Enhancing Solution Efficiency in Reinforcement Learning: Leveraging Sub-GFlowNet and Entropy Integration: Traditional reinforcement learning often struggles to generate diverse, high-reward solutions, especially in domains like drug design and black-box function optimization. Markov Chain Monte Carlo (MCMC) methods provide an alternative to RL for candidate selection but suffer from high computational costs and limited candidate diversity exploration capabilities. In response, GFlowNet, a novel neural network architecture, was introduced to model complex system dynamics and generate diverse high-reward trajectories. To further enhance this approach, this paper proposes improvements to GFlowNet by introducing a new loss function and refining the training objective associated with sub-GFlowNet. These enhancements aim to integrate entropy and leverage network structure characteristics, improving both candidate diversity and computational efficiency. We demonstrated the superiority of the refined GFlowNet over traditional methods through empirical results from hypergrid experiments and molecule synthesis tasks. The findings underscore the effectiveness of incorporating entropy and exploiting network structure properties in solution generation in molecule synthesis as well as diverse experimental designs.<|reference_end|> | arxiv | @article{he2024enhancing,
title={Enhancing Solution Efficiency in Reinforcement Learning: Leveraging
Sub-GFlowNet and Entropy Integration},
author={Siyi He},
journal={arXiv preprint arXiv:2410.00461},
year={2024},
archivePrefix={arXiv},
eprint={2410.00461},
primaryClass={cs.LG}
} | he2024enhancing |
arxiv-663983 | 2410.00462 | Fast Hip Joint Moment Estimation with A General Moment Feature Generation Method | <|reference_start|>Fast Hip Joint Moment Estimation with A General Moment Feature Generation Method: The hip joint moment during walking is a crucial basis for hip exoskeleton control. Compared to generating assistive torque profiles based on gait estimation, estimating hip joint moment directly using hip joint angles offers advantages such as simplified sensing and adaptability to variable walking speeds. Existing methods that directly estimate moment from hip joint angles are mainly used for offline biomechanical estimation. However, they suffer from long computation time and lack of personalization, rendering them unsuitable for personalized control of hip exoskeletons. To address these challenges, this paper proposes a fast hip joint moment estimation method based on generalized moment features (GMF). The method first employs a GMF generator to learn a feature representation of joint moment, namely the proposed GMF, which is independent of individual differences. Subsequently, a GRU-based neural network with fast computational performance is trained to learn the mapping from the joint kinematics to the GMF. Finally, the predicted GMF is decoded into the joint moment with a GMF decoder. The joint estimation model is trained and tested on a dataset comprising 20 subjects under 28 walking speed conditions. Results show that the proposed method achieves a root mean square error of 0.1180 $\pm$ 0.0021 Nm/kg for subjects in test dataset, and the computation time per estimation using the employed GRU-based estimator is 1.3420 $\pm$ 0.0031 ms, significantly faster than mainstream neural network architectures, while maintaining comparable network accuracy. These promising results demonstrate that the proposed method enhances the accuracy and computational speed of joint moment estimation neural networks, with potential for guiding exoskeleton control.<|reference_end|> | arxiv | @article{zhang2024fast,
title={Fast Hip Joint Moment Estimation with A General Moment Feature
Generation Method},
author={Yuanwen Zhang, Jingfeng Xiong, Haolan Xian, Chuheng Chen, Xinxing
Chen, Chenglong Fu, and Yuquan Leng},
journal={arXiv preprint arXiv:2410.00462},
year={2024},
archivePrefix={arXiv},
eprint={2410.00462},
primaryClass={cs.RO}
} | zhang2024fast |
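A hedged sketch of the estimator pipeline described above (kinematics into a GRU, then a generalized moment feature, then a decoded joint moment) is given below; the layer sizes, GMF dimension, and input/output shapes are assumptions rather than the paper's exact design.

```python
# Sketch of a GRU-based joint-moment estimator in the spirit of the
# GMF pipeline: kinematics -> GRU -> generalized moment feature (GMF)
# -> linear decoder -> joint moment. All sizes are assumptions.
import torch

class MomentEstimator(torch.nn.Module):
    def __init__(self, n_kin: int = 2, gmf_dim: int = 16, hidden: int = 64):
        super().__init__()
        self.gru = torch.nn.GRU(n_kin, hidden, batch_first=True)
        self.to_gmf = torch.nn.Linear(hidden, gmf_dim)   # kinematics -> GMF
        self.decode = torch.nn.Linear(gmf_dim, 1)        # GMF -> moment (Nm/kg)

    def forward(self, kin: torch.Tensor) -> torch.Tensor:
        h, _ = self.gru(kin)             # (batch, time, hidden)
        gmf = self.to_gmf(h)             # subject-independent feature
        return self.decode(gmf).squeeze(-1)

model = MomentEstimator()
angles = torch.randn(8, 100, 2)          # e.g. hip flexion angle + velocity
print(model(angles).shape)               # torch.Size([8, 100])
```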
arxiv-663984 | 2410.00464 | Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation | <|reference_start|>Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation: Current co-speech motion generation approaches usually focus on upper body gestures following speech contents only, while lacking support for elaborate control of synergistic full-body motion based on text prompts, such as talking while walking. The major challenges lie in 1) the existing speech-to-motion datasets only involve highly limited full-body motions, making a wide range of common human activities out of training distribution; 2) these datasets also lack annotated user prompts. To address these challenges, we propose SynTalker, which utilizes the off-the-shelf text-to-motion dataset as an auxiliary for supplementing the missing full-body motion and prompts. The core technical contributions are two-fold. One is the multi-stage training process which obtains an aligned embedding space of motion, speech, and prompts despite the significant distributional mismatch in motion between speech-to-motion and text-to-motion datasets. Another is the diffusion-based conditional inference process, which utilizes the separate-then-combine strategy to realize fine-grained control of local body parts. Extensive experiments are conducted to verify that our approach supports precise and flexible control of synergistic full-body motion generation based on both speeches and user prompts, which is beyond the ability of existing approaches.<|reference_end|> | arxiv | @article{chen2024enabling,
title={Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion
Generation},
author={Bohong Chen, Yumeng Li, Yao-Xiang Ding, Tianjia Shao, Kun Zhou},
journal={arXiv preprint arXiv:2410.00464},
year={2024},
archivePrefix={arXiv},
eprint={2410.00464},
primaryClass={cs.CV}
} | chen2024enabling |
arxiv-663985 | 2410.00465 | Distributed Monitoring of Timed Properties | <|reference_start|>Distributed Monitoring of Timed Properties: In formal verification, runtime monitoring consists of observing the execution of a system in order to decide as quickly as possible whether or not it satisfies a given property. We consider monitoring in a distributed setting, for properties given as reachability timed automata. In such a setting, the system is made of several components, each equipped with its own local clock and monitor. The monitors observe events occurring on their associated component, and receive timestamped events from other monitors through FIFO channels. Since clocks are local, they cannot be perfectly synchronized, resulting in imprecise timestamps. Consequently, they must be seen as intervals, leading monitors to consider possible reorderings of events. In this context, each monitor aims to provide, as early as possible, a verdict on the property it is monitoring, based on its potentially incomplete and imprecise knowledge of the current execution. In this paper, we propose an on-line monitoring algorithm for timed properties, robust to time imprecision and partial information from distant components. We first identify the date at which a monitor can safely compute a verdict based on received events. We then propose a monitoring algorithm that updates this date when new information arrives, maintains the current set of states in which the property can reside, and updates its verdict accordingly.<|reference_end|> | arxiv | @article{henry2024distributed,
title={Distributed Monitoring of Timed Properties},
author={Léo Henry (UCL), Thierry Jéron (UR), Nicolas Markey (IRISA, UR),
Victor Roussanaly (UL)},
journal={RV2024: Runtime Verification 2024, Doğan Ulus, Oct 2024,
Istanbul, Turkey. pp.260},
year={2024},
archivePrefix={arXiv},
eprint={2410.00465},
primaryClass={cs.SE}
} | henry2024distributed |
arxiv-663986 | 2410.00467 | Dynamic Planning for LLM-based Graphical User Interface Automation | <|reference_start|>Dynamic Planning for LLM-based Graphical User Interface Automation: The advent of large language models (LLMs) has spurred considerable interest in advancing autonomous LLMs-based agents, particularly in intriguing applications within smartphone graphical user interfaces (GUIs). When presented with a task goal, these agents typically emulate human actions within a GUI environment until the task is completed. However, a key challenge lies in devising effective plans to guide action prediction in GUI tasks, though planning have been widely recognized as effective for decomposing complex tasks into a series of steps. Specifically, given the dynamic nature of environmental GUIs following action execution, it is crucial to dynamically adapt plans based on environmental feedback and action history.We show that the widely-used ReAct approach fails due to the excessively long historical dialogues. To address this challenge, we propose a novel approach called Dynamic Planning of Thoughts (D-PoT) for LLM-based GUI agents.D-PoT involves the dynamic adjustment of planning based on the environmental feedback and execution history. Experimental results reveal that the proposed D-PoT significantly surpassed the strong GPT-4V baseline by +12.7% (34.66% $\rightarrow$ 47.36%) in accuracy. The analysis highlights the generality of dynamic planning in different backbone LLMs, as well as the benefits in mitigating hallucinations and adapting to unseen tasks. Code is available at https://github.com/sqzhang-lazy/D-PoT.<|reference_end|> | arxiv | @article{zhang2024dynamic,
title={Dynamic Planning for LLM-based Graphical User Interface Automation},
author={Shaoqing Zhang, Zhuosheng Zhang, Kehai Chen, Xinbei Ma, Muyun Yang,
Tiejun Zhao, Min Zhang},
journal={arXiv preprint arXiv:2410.00467},
year={2024},
archivePrefix={arXiv},
eprint={2410.00467},
primaryClass={cs.AI cs.HC}
} | zhang2024dynamic |
arxiv-663987 | 2410.00469 | Deep Multimodal Fusion for Semantic Segmentation of Remote Sensing Earth Observation Data | <|reference_start|>Deep Multimodal Fusion for Semantic Segmentation of Remote Sensing Earth Observation Data: Accurate semantic segmentation of remote sensing imagery is critical for various Earth observation applications, such as land cover mapping, urban planning, and environmental monitoring. However, individual data sources often present limitations for this task. Very High Resolution (VHR) aerial imagery provides rich spatial details but cannot capture temporal information about land cover changes. Conversely, Satellite Image Time Series (SITS) capture temporal dynamics, such as seasonal variations in vegetation, but with limited spatial resolution, making it difficult to distinguish fine-scale objects. This paper proposes a late fusion deep learning model (LF-DLM) for semantic segmentation that leverages the complementary strengths of both VHR aerial imagery and SITS. The proposed model consists of two independent deep learning branches. One branch integrates detailed textures from aerial imagery captured by UNetFormer with a Multi-Axis Vision Transformer (MaxViT) backbone. The other branch captures complex spatio-temporal dynamics from the Sentinel-2 satellite image time series using a U-Net with Temporal Attention Encoder (U-TAE). This approach leads to state-of-the-art results on the FLAIR dataset, a large-scale benchmark for land cover segmentation using multi-source optical imagery. The findings highlight the importance of multi-modality fusion in improving the accuracy and robustness of semantic segmentation in remote sensing applications.<|reference_end|> | arxiv | @article{dimitrovski2024deep,
title={Deep Multimodal Fusion for Semantic Segmentation of Remote Sensing Earth
Observation Data},
author={Ivica Dimitrovski, Vlatko Spasev, Ivan Kitanovski},
journal={arXiv preprint arXiv:2410.00469},
year={2024},
archivePrefix={arXiv},
eprint={2410.00469},
primaryClass={cs.CV}
} | dimitrovski2024deep |
arxiv-663988 | 2410.00470 | Order Reduction of Exponential Runge--Kutta Methods: Non-Commuting Operators | <|reference_start|>Order Reduction of Exponential Runge--Kutta Methods: Non-Commuting Operators: Nonlinear parabolic equations are central to numerous applications in science and engineering, posing significant challenges for analytical solutions and necessitating efficient numerical methods. Exponential integrators have recently gained attention for handling stiff differential equations. This paper explores exponential Runge--Kutta methods for solving such equations, focusing on the simplified form $u^{\prime}(t)+A u(t)=B u(t)$, where $A$ generates an analytic semigroup and $B$ is relatively bounded with respect to $A$. By treating $A$ exactly and $B$ explicitly, we derive error bounds for exponential Runge--Kutta methods up to third order. Our analysis shows that these methods maintain their order under mild regularity conditions on the initial data $u_0$, while also addressing the phenomenon of order reduction in higher-order methods. Through a careful convergence analysis and numerical investigations, this study provides a comprehensive understanding of the applicability and limitations of exponential Runge--Kutta methods in solving linear parabolic equations involving two unbounded and non-commuting operators.<|reference_end|> | arxiv | @article{hoang2024order,
title={Order Reduction of Exponential Runge--Kutta Methods: Non-Commuting
Operators},
author={Trung Hau Hoang},
journal={arXiv preprint arXiv:2410.00470},
year={2024},
archivePrefix={arXiv},
eprint={2410.00470},
primaryClass={math.NA cs.NA}
} | hoang2024order |
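The first-order member of the scheme family analyzed above is exponential Euler: for u'(t) + A u(t) = B u(t), one step reads u_{n+1} = e^{-hA} u_n + h φ1(-hA) B u_n with φ1(z) = (e^z - 1)/z, treating A exactly and B explicitly. The sketch below implements this on a small illustrative system; the matrices are stand-ins, not drawn from the paper.

```python
# Exponential Euler sketch for u'(t) + A u(t) = B u(t): one step is
#   u_{n+1} = exp(-h A) u_n + h * phi1(-h A) (B u_n),
# i.e. A is treated exactly and B explicitly. First-order member of the
# exponential Runge-Kutta family; the matrices below are illustrative.
import numpy as np
from scipy.linalg import expm, solve

def phi1(M: np.ndarray) -> np.ndarray:
    # phi1(M) = M^{-1} (exp(M) - I); fine for small well-conditioned M.
    return solve(M, expm(M) - np.eye(M.shape[0]))

A = np.array([[2.0, 0.0], [0.0, 5.0]])     # stiff part, treated exactly
B = np.array([[0.0, 0.3], [0.2, 0.0]])     # relatively bounded, explicit
u = np.array([1.0, 1.0])
h, T = 0.05, 1.0

E, P = expm(-h * A), phi1(-h * A)           # precompute step operators
for _ in range(int(T / h)):
    u = E @ u + h * (P @ (B @ u))
print(u)
```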
arxiv-663989 | 2410.00473 | Uncertainty-aware t-distributed Stochastic Neighbor Embedding for Single-cell RNA-seq Data | <|reference_start|>Uncertainty-aware t-distributed Stochastic Neighbor Embedding for Single-cell RNA-seq Data: Nonlinear data visualization using t-distributed stochastic neighbor embedding (t-SNE) enables the representation of complex single-cell transcriptomic landscapes in two or three dimensions to depict biological populations accurately. However, t-SNE often fails to account for uncertainties in the original dataset, leading to misleading visualizations where cell subsets with noise appear indistinguishable. To address these challenges, we introduce uncertainty-aware t-SNE (Ut-SNE), a noise-defending visualization tool tailored for uncertain single-cell RNA-seq data. By creating a probabilistic representation for each sample, our Ut-SNE accurately incorporates noise about transcriptomic variability into the visual interpretation of single-cell RNA sequencing data, revealing significant uncertainties in transcriptomic variability. Through various examples, we showcase the practical value of Ut-SNE and underscore the significance of incorporating uncertainty awareness into data visualization practices. This versatile uncertainty-aware visualization tool can be easily adapted to other scientific domains beyond single-cell RNA sequencing, making it a valuable resource for high-dimensional data analysis.<|reference_end|> | arxiv | @article{ma2024uncertainty-aware,
title={Uncertainty-aware t-distributed Stochastic Neighbor Embedding for
Single-cell RNA-seq Data},
author={Hui Ma and Kai Chen},
journal={arXiv preprint arXiv:2410.00473},
year={2024},
archivePrefix={arXiv},
eprint={2410.00473},
primaryClass={q-bio.GN cs.LG}
} | ma2024uncertainty-aware |
arxiv-663990 | 2410.00475 | Probabilistic Analysis of Copyright Disputes and Generative AI Safety | <|reference_start|>Probabilistic Analysis of Copyright Disputes and Generative AI Safety: This paper presents a probabilistic approach to analyzing copyright infringement disputes by formalizing relevant judicial principles within a coherent framework based on the random-worlds method. The approach provides a structured analysis of key evidentiary principles, with particular emphasis on the "inverse ratio rule"--a controversial doctrine adopted by some courts. Although this rule has faced significant criticism, a formal proof demonstrates its validity, provided it is properly defined. Additionally, the paper examines the heightened copyright risks posed by generative AI, highlighting how extensive access to copyrighted material by generative models increases the risk of infringement. Utilizing the probabilistic approach, the Near Access-Free (NAF) condition, previously proposed as a potential mitigation strategy, is evaluated. The analysis reveals that while the NAF condition mitigates some infringement risks, its justifiability and efficacy are questionable in certain contexts. These findings demonstrate how a rigorous probabilistic approach can advance our understanding of copyright jurisprudence and its interaction with emerging technologies.<|reference_end|> | arxiv | @article{chiba-okabe2024probabilistic,
title={Probabilistic Analysis of Copyright Disputes and Generative AI Safety},
author={Hiroaki Chiba-Okabe},
journal={arXiv preprint arXiv:2410.00475},
year={2024},
archivePrefix={arXiv},
eprint={2410.00475},
primaryClass={cs.CY cs.AI}
} | chiba-okabe2024probabilistic |
arxiv-663991 | 2410.00477 | ViDAS: Vision-based Danger Assessment and Scoring | <|reference_start|>ViDAS: Vision-based Danger Assessment and Scoring: We present a novel dataset aimed at advancing danger analysis and assessment by addressing the challenge of quantifying danger in video content and identifying how human-like a Large Language Model (LLM) evaluator is at the same task. This is achieved by compiling a collection of 100 YouTube videos featuring various events. Each video is annotated by human participants who provided danger ratings on a scale from 0 (no danger to humans) to 10 (life-threatening), with precise timestamps indicating moments of heightened danger. Additionally, we leverage LLMs to independently assess the danger levels in these videos using video summaries. We introduce Mean Squared Error (MSE) scores for multimodal meta-evaluation of the alignment between human and LLM danger assessments. Our dataset not only contributes a new resource for danger assessment in video content but also demonstrates the potential of LLMs in achieving human-like evaluations.<|reference_end|> | arxiv | @article{gupta2024vidas:,
title={ViDAS: Vision-based Danger Assessment and Scoring},
author={Pranav Gupta, Advith Krishnan, Naman Nanda, Ananth Eswar, Deeksha
Agarwal, Pratham Gohil, Pratyush Goel},
journal={arXiv preprint arXiv:2410.00477},
year={2024},
archivePrefix={arXiv},
eprint={2410.00477},
primaryClass={cs.CV}
} | gupta2024vidas: |
arxiv-663992 | 2410.00479 | Precise Workcell Sketching from Point Clouds Using an AR Toolbox | <|reference_start|>Precise Workcell Sketching from Point Clouds Using an AR Toolbox: Capturing real-world 3D spaces as point clouds is efficient and descriptive, but it comes with sensor errors and lacks object parametrization. These limitations render point clouds unsuitable for various real-world applications, such as robot programming, without extensive post-processing (e.g., outlier removal, semantic segmentation). On the other hand, CAD modeling provides high-quality, parametric representations of 3D space with embedded semantic data, but requires manual component creation that is time-consuming and costly. To address these challenges, we propose a novel solution that combines the strengths of both approaches. Our method for 3D workcell sketching from point clouds allows users to refine raw point clouds using an Augmented Reality (AR) interface that leverages their knowledge and the real-world 3D environment. By utilizing a toolbox and an AR-enabled pointing device, users can enhance point cloud accuracy based on the device's position in 3D space. We validate our approach by comparing it with ground truth models, demonstrating that it achieves a mean error within 1 cm, a significant improvement over standard LiDAR scanner apps.<|reference_end|> | arxiv | @article{zieliński2024precise,
title={Precise Workcell Sketching from Point Clouds Using an AR Toolbox},
author={Krzysztof Zieliński, Bruce Blumberg, Mikkel Baun Kjærgaard},
journal={arXiv preprint arXiv:2410.00479},
year={2024},
archivePrefix={arXiv},
eprint={2410.00479},
primaryClass={cs.HC cs.CV}
} | zieliński2024precise |
arxiv-663993 | 2410.00480 | Stability analysis of chaotic systems in latent spaces | <|reference_start|>Stability analysis of chaotic systems in latent spaces: Partial differential equations, and their chaotic solutions, are pervasive in the modelling of complex systems in engineering, science, and beyond. Data-driven methods can find solutions to partial differential equations with a divide-and-conquer strategy: The solution is sought in a latent space, on which the temporal dynamics are inferred (``latent-space'' approach). This is achieved by, first, compressing the data with an autoencoder, and, second, inferring the temporal dynamics with recurrent neural networks. The overarching goal of this paper is to show that a latent-space approach can not only infer the solution of a chaotic partial differential equation, but it can also predict the stability properties of the physical system. First, we employ the convolutional autoencoder echo state network (CAE-ESN) on the chaotic Kuramoto-Sivashinsky equation for various chaotic regimes. We show that the CAE-ESN (i) finds a low-dimensional latent-space representation of the observations and (ii) accurately infers the Lyapunov exponents and covariant Lyapunov vectors (CLVs) in this low-dimensional manifold for different attractors. Second, we extend the CAE-ESN to a turbulent flow, comparing the Lyapunov spectrum to estimates obtained from Jacobian-free methods. A latent-space approach based on the CAE-ESN effectively produces a latent space that preserves the key properties of the chaotic system, such as Lyapunov exponents and CLVs, thus retaining the geometric structure of the attractor. The latent-space approach based on the CAE-ESN is a reduced-order model that accurately predicts the dynamics of the chaotic system, or, alternatively, it can be used to infer stability properties of chaotic systems from data.<|reference_end|> | arxiv | @article{özalp2024stability,
title={Stability analysis of chaotic systems in latent spaces},
author={Elise \"Ozalp and Luca Magri},
journal={arXiv preprint arXiv:2410.00480},
year={2024},
archivePrefix={arXiv},
eprint={2410.00480},
primaryClass={nlin.CD cs.LG}
} | özalp2024stability |
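
The entry above claims that latent dynamics can recover stability properties such as Lyapunov exponents. The sketch below shows a Jacobian-free, two-trajectory (Benettin-style) estimate of the largest Lyapunov exponent; the logistic map is a stand-in for the learned latent update, and all names are illustrative assumptions.

```python
# Sketch: Jacobian-free estimate of the largest Lyapunov exponent,
# the kind of stability quantity the CAE-ESN is shown to recover.
import numpy as np

def step(x: float) -> float:
    """Chaotic logistic map at r = 4; its true Lyapunov exponent is ln 2."""
    return 4.0 * x * (1.0 - x)

def largest_lyapunov(x0: float = 0.4, eps: float = 1e-8,
                     n_steps: int = 100_000) -> float:
    """Track a perturbed twin trajectory and renormalize every step."""
    x, x_pert = x0, x0 + eps
    log_growth = 0.0
    for _ in range(n_steps):
        x, x_pert = step(x), step(x_pert)
        d = max(abs(x_pert - x), 1e-300)              # guard against log(0)
        log_growth += np.log(d / eps)
        x_pert = x + eps * (1.0 if x_pert >= x else -1.0)  # reset separation
    return log_growth / n_steps

print(largest_lyapunov(), np.log(2.0))  # both approximately 0.693
```

For the CAE-ESN itself, `step` would be replaced by the trained reservoir update acting on the latent state.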
arxiv-663994 | 2410.00483 | MCGM: Mask Conditional Text-to-Image Generative Model | <|reference_start|>MCGM: Mask Conditional Text-to-Image Generative Model: Recent advancements in generative models have revolutionized the field of artificial intelligence, enabling the creation of highly realistic and detailed images. In this study, we propose a novel Mask Conditional Text-to-Image Generative Model (MCGM) that leverages the power of conditional diffusion models to generate pictures with specific poses. Our model builds upon the success of the Break-a-scene [1] model in generating new scenes using a single image with multiple subjects and incorporates a mask embedding injection that allows the conditioning of the generation process. By introducing this additional level of control, MCGM offers a flexible and intuitive approach for generating specific poses for one or more subjects learned from a single image, empowering users to influence the output based on their requirements. Through extensive experimentation and evaluation, we demonstrate the effectiveness of our proposed model in generating high-quality images that meet predefined mask conditions and in improving on the current Break-a-scene generative model.<|reference_end|> | arxiv | @article{skaik2024mcgm:,
title={MCGM: Mask Conditional Text-to-Image Generative Model},
author={Rami Skaik, Leonardo Rossi, Tomaso Fontanini, and Andrea Prati},
journal={arXiv preprint arXiv:2410.00483},
year={2024},
archivePrefix={arXiv},
eprint={2410.00483},
primaryClass={cs.CV cs.AI}
} | skaik2024mcgm: |
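
As a minimal sketch of what a "mask embedding injection" could look like, the code below encodes a binary mask with a small CNN and sums it into the diffusion model's conditioning vector. This illustrates the general conditioning mechanism only; the module, its dimensions, and the injection point are assumptions, not MCGM's actual architecture.

```python
# Sketch: inject a mask embedding into a diffusion conditioning vector.
import torch
import torch.nn as nn

class MaskEmbedder(nn.Module):
    def __init__(self, emb_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, emb_dim),
        )

    def forward(self, mask: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        # mask: (B, 1, H, W) binary pose mask; t_emb: (B, emb_dim) timestep embedding
        return t_emb + self.encoder(mask)  # add mask information to the conditioning

mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
t_emb = torch.randn(2, 256)
cond = MaskEmbedder()(mask, t_emb)         # (2, 256) conditioned embedding
print(cond.shape)
```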
arxiv-663995 | 2410.00484 | RobotGraffiti: An AR tool for semi-automated construction of workcell models to optimize robot deployment | <|reference_start|>RobotGraffiti: An AR tool for semi-automated construction of workcell models to optimize robot deployment: Improving robot deployment is a central step towards speeding up robot-based automation in manufacturing. A main challenge in robot deployment is how to best place the robot within the workcell. To tackle this challenge, we combine two knowledge sources: robotic knowledge of the system and workcell context awareness of the user, and intersect them with an Augmented Reality interface. RobotGraffiti is a unique tool that empowers the user in robot deployment tasks. One simply takes a 3D scan of the workcell with their mobile device, adds contextual data points that otherwise would be difficult to infer from the system, and receives a robot base position that satisfies the automation task. The proposed approach is an alternative to expensive and time-consuming digital twins, with a fast and easy-to-use tool that focuses on selected workcell features needed to run the placement optimization algorithm. The main contributions of this paper are the novel user interface for robot base placement data collection and a study comparing the traditional offline simulation with our proposed method. We showcase the method with a robot base placement solution and obtain up to a 16-fold reduction in time.<|reference_end|> | arxiv | @article{zieliński2024robotgraffiti:,
title={RobotGraffiti: An AR tool for semi-automated construction of workcell
models to optimize robot deployment},
author={Krzysztof Zieli\'nski, Ryan Penning, Bruce Blumberg, Christian
Schlette, Mikkel Baun Kj{\ae}rgaard},
journal={arXiv preprint arXiv:2410.00484},
year={2024},
archivePrefix={arXiv},
eprint={2410.00484},
primaryClass={cs.RO cs.HC}
} | zieliński2024robotgraffiti: |
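
The placement optimization mentioned in the entry above can be illustrated with a deliberately simple brute-force search: candidate base positions are scored by how many task points fall inside a spherical reach band. The reach model, the grid, and all names are assumptions for illustration, not RobotGraffiti's actual optimizer.

```python
# Sketch: brute-force robot base placement over a scanned workcell.
import numpy as np

def best_base(task_pts: np.ndarray, candidates: np.ndarray,
              r_min: float = 0.2, r_max: float = 0.85) -> np.ndarray:
    """Return the candidate base (x, y, z) that reaches the most task points."""
    scores = []
    for base in candidates:
        d = np.linalg.norm(task_pts - base, axis=1)    # distance to each task point
        scores.append(np.sum((d >= r_min) & (d <= r_max)))
    return candidates[int(np.argmax(scores))]

task_pts = np.array([[0.5, 0.2, 0.3], [0.6, -0.1, 0.4], [0.3, 0.4, 0.2]])
xs, ys = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
candidates = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
print(best_base(task_pts, candidates))
```

A real placement score would also account for collisions with the scanned cell and for manipulability at each task pose, which is where the user-supplied contextual data points come in.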
arxiv-663996 | 2410.00485 | A Hitchhikers Guide to Fine-Grained Face Forgery Detection Using Common Sense Reasoning | <|reference_start|>A Hitchhikers Guide to Fine-Grained Face Forgery Detection Using Common Sense Reasoning: Explainability in artificial intelligence is crucial for restoring trust, particularly in areas like face forgery detection, where viewers often struggle to distinguish between real and fabricated content. Vision and Large Language Models (VLLM) bridge computer vision and natural language, offering numerous applications driven by strong common-sense reasoning. Despite their success in various tasks, the potential of vision and language remains underexplored in face forgery detection, where they hold promise for enhancing explainability by leveraging the intrinsic reasoning capabilities of language to analyse fine-grained manipulation areas. As such, there is a need for a methodology that converts face forgery detection to a Visual Question Answering (VQA) task to systematically and fairly evaluate these capabilities. Previous efforts for unified benchmarks in deepfake detection have focused on the simpler binary task, overlooking evaluation protocols for fine-grained detection and text-generative models. We propose a multi-staged approach that diverges from the traditional binary decision paradigm to address this gap. In the first stage, we assess the models' performance on the binary task and their sensitivity to given instructions using several prompts. In the second stage, we delve deeper into fine-grained detection by identifying areas of manipulation in a multiple-choice VQA setting. In the third stage, we convert the fine-grained detection to an open-ended question and compare several matching strategies for the multi-label classification task. Finally, we qualitatively evaluate the fine-grained responses of the VLLMs included in the benchmark. We apply our benchmark to several popular models, providing a detailed comparison of binary, multiple-choice, and open-ended VQA evaluation across seven datasets. \url{https://nickyfot.github.io/hitchhickersguide.github.io/}<|reference_end|> | arxiv | @article{foteinopoulou2024a,
title={A Hitchhikers Guide to Fine-Grained Face Forgery Detection Using Common
Sense Reasoning},
author={Niki Maria Foteinopoulou, Enjie Ghorbel, Djamila Aouada},
journal={arXiv preprint arXiv:2410.00485},
year={2024},
archivePrefix={arXiv},
eprint={2410.00485},
primaryClass={cs.CV}
} | foteinopoulou2024a |
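
The third evaluation stage above scores open-ended answers against a multi-label ground truth. Below is one plausible matching strategy, simple substring matching; the paper compares several such strategies, and the label set and names here are illustrative assumptions.

```python
# Sketch: match a free-form VQA answer to a fixed set of manipulation labels.
LABELS = ["eyes", "mouth", "nose", "hair", "background"]

def match_labels(answer: str, labels=LABELS) -> set[str]:
    """Return the labels a free-form model answer names, via substring match."""
    text = answer.lower()
    return {lab for lab in labels if lab in text}

pred = match_labels("The mouth and the hair region look manipulated.")
gold = {"mouth", "hair", "eyes"}
precision = len(pred & gold) / len(pred) if pred else 0.0
recall = len(pred & gold) / len(gold)
print(pred, f"P={precision:.2f} R={recall:.2f}")
```

Substring matching is brittle to synonyms ("lips" vs. "mouth"), which is precisely why comparing matching strategies matters for a fair open-ended benchmark.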
arxiv-663997 | 2410.00486 | CaRtGS: Computational Alignment for Real-Time Gaussian Splatting SLAM | <|reference_start|>CaRtGS: Computational Alignment for Real-Time Gaussian Splatting SLAM: Simultaneous Localization and Mapping (SLAM) is pivotal in robotics, with photorealistic scene reconstruction emerging as a key challenge. To address this, we introduce Computational Alignment for Real-Time Gaussian Splatting SLAM (CaRtGS), a novel method enhancing the efficiency and quality of photorealistic scene reconstruction in real-time environments. Leveraging 3D Gaussian Splatting (3DGS), CaRtGS achieves superior rendering quality and processing speed, which is crucial for photorealistic scene reconstruction. Our approach tackles computational misalignment in Gaussian Splatting SLAM (GS-SLAM) through an adaptive strategy that optimizes training, addresses long-tail optimization, and refines densification. Experiments on Replica and TUM-RGBD datasets demonstrate CaRtGS's effectiveness in achieving high-fidelity rendering with fewer Gaussian primitives. This work propels SLAM towards real-time, photorealistic dense rendering, significantly advancing photorealistic scene representation. For the benefit of the research community, we release the code on our project website: https://dapengfeng.github.io/cartgs.<|reference_end|> | arxiv | @article{feng2024cartgs:,
title={CaRtGS: Computational Alignment for Real-Time Gaussian Splatting SLAM},
author={Dapeng Feng, Zhiqiang Chen, Yizhen Yin, Shipeng Zhong, Yuhua Qi,
Hongbo Chen},
journal={arXiv preprint arXiv:2410.00486},
year={2024},
archivePrefix={arXiv},
eprint={2410.00486},
primaryClass={cs.CV cs.RO}
} | feng2024cartgs: |
arxiv-663998 | 2410.00487 | Self-Updatable Large Language Models with Parameter Integration | <|reference_start|>Self-Updatable Large Language Models with Parameter Integration: Despite significant advancements in large language models (LLMs), the rapid and frequent integration of small-scale experiences, such as interactions with surrounding objects, remains a substantial challenge. Two critical factors in assimilating these experiences are (1) Efficacy: the ability to accurately remember recent events; (2) Retention: the capacity to recall long-past experiences. Current methods either embed experiences within model parameters using continual learning, model editing, or knowledge distillation techniques, which often struggle with rapid updates and complex interactions, or rely on external storage to achieve long-term retention, thereby increasing storage requirements. In this paper, we propose SELF-PARAM (Self-Updatable Large Language Models with Parameter Integration). SELF-PARAM requires no extra parameters while ensuring near-optimal efficacy and long-term retention. Our method employs a training objective that minimizes the Kullback-Leibler (KL) divergence between the predictions of an original model (with access to contextual information) and a target model (without such access). By generating diverse question-answer pairs related to the knowledge and minimizing the KL divergence across this dataset, we update the target model to internalize the knowledge seamlessly within its parameters. Evaluations on question-answering and conversational recommendation tasks demonstrate that SELF-PARAM significantly outperforms existing methods, even when accounting for non-zero storage requirements. This advancement paves the way for more efficient and scalable integration of experiences in large language models by embedding knowledge directly into model parameters.<|reference_end|> | arxiv | @article{wang2024self-updatable,
title={Self-Updatable Large Language Models with Parameter Integration},
author={Yu Wang, Xinshuang Liu, Xiusi Chen, Sean O'Brien, Junda Wu, Julian
McAuley},
journal={arXiv preprint arXiv:2410.00487},
year={2024},
archivePrefix={arXiv},
eprint={2410.00487},
primaryClass={cs.CL}
} | wang2024self-updatable |
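
The training objective described above, minimizing the KL divergence between a context-conditioned model and a context-free target model, can be sketched in a few lines of PyTorch. Random tensors stand in for the logits, and the function name is an assumption; in practice both logit tensors would come from the same base LLM, run once with the experience in its prompt and once without, over the generated question-answer pairs.

```python
# Sketch: KL distillation of contextual knowledge into a context-free model.
import torch
import torch.nn.functional as F

def self_param_loss(teacher_logits: torch.Tensor,
                    student_logits: torch.Tensor) -> torch.Tensor:
    """Mean per-token KL(teacher || student) over a batch of sequences."""
    vocab = teacher_logits.size(-1)
    teacher_probs = F.softmax(teacher_logits, dim=-1).reshape(-1, vocab)   # p (with context)
    student_logp = F.log_softmax(student_logits, dim=-1).reshape(-1, vocab)  # log q (no context)
    return F.kl_div(student_logp, teacher_probs, reduction="batchmean")

teacher = torch.randn(4, 32, 1000)  # (batch, seq, vocab), context-conditioned
student = torch.randn(4, 32, 1000, requires_grad=True)  # context-free target
loss = self_param_loss(teacher, student)
loss.backward()                     # gradients flow into the target model
print(loss.item())
```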
arxiv-663999 | 2410.00490 | Learning Adaptive Hydrodynamic Models Using Neural ODEs in Complex Conditions | <|reference_start|>Learning Adaptive Hydrodynamic Models Using Neural ODEs in Complex Conditions: Reinforcement learning-based quadruped robots excel across various terrains but still lack the ability to swim in water due to the complex underwater environment. This paper presents the development and evaluation of a data-driven hydrodynamic model for amphibious quadruped robots, aiming to enhance their adaptive capabilities in complex and dynamic underwater environments. The proposed model leverages Neural Ordinary Differential Equations (ODEs) combined with attention mechanisms to accurately process and interpret real-time sensor data. The model enables the quadruped robots to understand and predict complex environmental patterns, facilitating robust decision-making strategies. We harness real-time sensor data, capturing various environmental and internal state parameters to train and evaluate our model. A significant focus of our evaluation involves testing the quadruped robot's performance across different hydrodynamic conditions and assessing its capabilities at varying speeds and fluid dynamic conditions. The outcomes suggest that the model can effectively learn and adapt to varying conditions, enabling the prediction of force states and enhancing autonomous robotic behaviors in various practical scenarios.<|reference_end|> | arxiv | @article{wang2024learning,
title={Learning Adaptive Hydrodynamic Models Using Neural ODEs in Complex
Conditions},
author={Cong Wang, Aoming Liang, Fei Han, Xinyu Zeng, Zhibin Li, Dixia Fan,
and Jens Kober},
journal={arXiv preprint arXiv:2410.00490},
year={2024},
archivePrefix={arXiv},
eprint={2410.00490},
primaryClass={cs.RO cs.AI}
} | wang2024learning |
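
As a sketch of the Neural ODE backbone named in the entry above, the code below integrates a learned vector field with a hand-rolled RK4 step so no extra dependency is needed. The MLP field and 12-dimensional state are assumptions; the paper's attention mechanism over sensor streams is omitted for brevity.

```python
# Sketch: a minimal neural ODE over a robot state vector.
import torch
import torch.nn as nn

class VectorField(nn.Module):
    def __init__(self, state_dim: int = 12, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # dx/dt predicted from the current state

def rk4_step(f: nn.Module, x: torch.Tensor, dt: float) -> torch.Tensor:
    """One classical Runge-Kutta 4 integration step of dx/dt = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

f = VectorField()
x = torch.randn(8, 12)              # batch of sensor-derived states
for _ in range(50):                 # roll the dynamics forward 50 steps
    x = rk4_step(f, x, dt=0.01)
print(x.shape)
```

Training would fit `f` so that rolled-out states match logged sensor data, with force states read off the predicted trajectory.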
arxiv-664000 | 2410.00492 | Design and construction of a wireless robot that simulates head movements in cone beam computed tomography imaging | <|reference_start|>Design and construction of a wireless robot that simulates head movements in cone beam computed tomography imaging: One of the major challenges in maxillofacial radiology imaging is the variety of artifacts created in images taken by cone beam computed tomography (CBCT) imaging systems. Among these artifacts, motion artifacts, created by patient movement, have adverse effects on image quality. In this paper, given the conditions and limitations of the CBCT imaging room, the goal is to design and develop a cable-driven parallel robot that creates repeatable movements of a dry skull inside a CBCT scanner for studying motion artifacts and building up reference datasets with motion artifacts. The proposed robot allows a dry skull to execute motions, selected on the basis of clinical evidence, with 3 degrees of freedom during imaging, synchronously with the radiation beam. The kinematic model of the robot is presented to investigate and describe the correlation between the amount of motion and the pulse width applied to the DC motors. The robot can be controlled wirelessly by the user through a smartphone or laptop via a Wi-Fi connection. Using wireless communication protects the user from harmful radiation during robot operation. The results show that the designed robot has a reproducibility above 95% in performing various movements.<|reference_end|> | arxiv | @article{baghbani2024design,
title={Design and construction of a wireless robot that simulates head
movements in cone beam computed tomography imaging},
author={R. Baghbani, M. Ashoorirad, F. Salemi, Med Amine Laribi (COBRA), M.
Mostafapoor},
journal={Robotica, 2022, pp.1-14},
year={2024},
doi={10.1017/S0263574722001072},
archivePrefix={arXiv},
eprint={2410.00492},
primaryClass={physics.med-ph cs.RO}
} | baghbani2024design |
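
The kinematic model mentioned in the entry above relates commanded skull motion to actuator commands. A standard building block for any cable-driven parallel robot is the inverse kinematics below, mapping a platform pose to cable lengths; the anchor geometry and names are illustrative, and the paper's further mapping from length change to DC-motor pulse width is not reproduced here.

```python
# Sketch: inverse kinematics of a cable-driven parallel robot,
# l_i = ||a_i - (p + R b_i)|| for frame anchors a_i and platform points b_i.
import numpy as np

def cable_lengths(p: np.ndarray, R: np.ndarray,
                  anchors: np.ndarray, attach: np.ndarray) -> np.ndarray:
    """Cable length for each anchor/attachment pair at platform pose (p, R)."""
    world_attach = p + attach @ R.T    # attachment points in the world frame
    return np.linalg.norm(anchors - world_attach, axis=1)

anchors = np.array([[0.5, 0.5, 1.0], [-0.5, 0.5, 1.0],
                    [-0.5, -0.5, 1.0], [0.5, -0.5, 1.0]])  # frame corners (m)
attach = 0.05 * np.array([[1, 1, 0], [-1, 1, 0],
                          [-1, -1, 0], [1, -1, 0]])        # skull mount points
p = np.array([0.0, 0.0, 0.4])                              # skull position
theta = np.deg2rad(5)                                      # small yaw motion
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
print(cable_lengths(p, R, anchors, attach))
```

Differentiating these lengths along a commanded trajectory gives the per-motor winding speeds, which is where a calibrated pulse-width-to-velocity relation would enter.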