corpus_id
stringlengths
7
12
paper_id
stringlengths
9
16
title
stringlengths
1
261
abstract
stringlengths
70
4.02k
source
stringclasses
1 value
bibtex
stringlengths
208
20.9k
citation_key
stringlengths
6
100
arxiv-668101
2410.07933
Offline Hierarchical Reinforcement Learning via Inverse Optimization
<|reference_start|>Offline Hierarchical Reinforcement Learning via Inverse Optimization: Hierarchical policies enable strong performance in many sequential decision-making problems, such as those with high-dimensional action spaces, those requiring long-horizon planning, and settings with sparse rewards. However, learning hierarchical policies from static offline datasets presents a significant challenge. Crucially, actions taken by higher-level policies may not be directly observable within hierarchical controllers, and the offline dataset might have been generated using a different policy structure, hindering the use of standard offline learning algorithms. In this work, we propose OHIO: a framework for offline reinforcement learning (RL) of hierarchical policies. Our framework leverages knowledge of the policy structure to solve the inverse problem, recovering the unobservable high-level actions that likely generated the observed data under our hierarchical policy. This approach constructs a dataset suitable for off-the-shelf offline training. We demonstrate our framework on robotic and network optimization problems and show that it substantially outperforms end-to-end RL methods and improves robustness. We investigate a variety of instantiations of our framework, both in direct deployment of policies trained offline and when online fine-tuning is performed.<|reference_end|>
arxiv
@article{schmidt2024offline, title={Offline Hierarchical Reinforcement Learning via Inverse Optimization}, author={Carolin Schmidt, Daniele Gammelli, James Harrison, Marco Pavone, Filipe Rodrigues}, journal={arXiv preprint arXiv:2410.07933}, year={2024}, archivePrefix={arXiv}, eprint={2410.07933}, primaryClass={cs.LG cs.SY eess.SY math.OC} }
schmidt2024offline
arxiv-668102
2410.07939
Distributed Source Coding, Multiple Description Coding, and Source Coding with Side Information at Decoders Using Constrained-Random Number Generators
<|reference_start|>Distributed Source Coding, Multiple Description Coding, and Source Coding with Side Information at Decoders Using Constrained-Random Number Generators: This paper investigates a unification of distributed source coding, multiple description coding, and source coding with side information at decoders. The equivalence between the multiple-decoder extension of distributed source coding with decoder side information and the multiple-source extension of multiple description coding with decoder side information is clarified. Their multi-letter rate-distortion region for arbitrary general correlated sources is characterized in terms of entropy functions. We construct a code based on constrained-random number generators and show its achievability.<|reference_end|>
arxiv
@article{muramatsu2024distributed, title={Distributed Source Coding, Multiple Description Coding, and Source Coding with Side Information at Decoders Using Constrained-Random Number Generators}, author={Jun Muramatsu}, journal={arXiv preprint arXiv:2410.07939}, year={2024}, archivePrefix={arXiv}, eprint={2410.07939}, primaryClass={cs.IT math.IT} }
muramatsu2024distributed
arxiv-668103
2410.07940
AI Surrogate Model for Distributed Computing Workloads
<|reference_start|>AI Surrogate Model for Distributed Computing Workloads: Large-scale international scientific collaborations, such as ATLAS, Belle II, CMS, and DUNE, generate vast volumes of data. These experiments necessitate substantial computational power for varied tasks, including structured data processing, Monte Carlo simulations, and end-user analysis. Centralized workflow and data management systems are employed to handle these demands, but current decision-making processes for data placement and payload allocation are often heuristic and disjointed. This optimization challenge potentially could be addressed using contemporary machine learning methods, such as reinforcement learning, which, in turn, require access to extensive data and an interactive environment. Instead, we propose a generative surrogate modeling approach to address the lack of training data and concerns about privacy preservation. We have collected and processed real-world job submission records, totaling more than two million jobs through 150 days, and applied four generative models for tabular data -- TVAE, CTAGGAN+, SMOTE, and TabDDPM -- to these datasets, thoroughly evaluating their performance. Along with measuring the discrepancy among feature-wise distributions separately, we also evaluate pair-wise feature correlations, distance to closest record, and responses to pre-trained models. Our experiments indicate that SMOTE and TabDDPM can generate similar tabular data, almost indistinguishable from the ground truth. Yet, as a non-learning method, SMOTE ranks the lowest in privacy preservation. As a result, we conclude that the probabilistic-diffusion-model-based TabDDPM is the most suitable generative model for managing job record data.<|reference_end|>
arxiv
@article{park2024ai, title={AI Surrogate Model for Distributed Computing Workloads}, author={David K. Park, Yihui Ren, Ozgur O. Kilic, Tatiana Korchuganova, Sairam Sri Vatsavai, Joseph Boudreau, Tasnuva Chowdhury, Shengyu Feng, Raees Khan, Jaehyung Kim, Scott Klasky, Tadashi Maeno, Paul Nilsson, Verena Ingrid Martinez Outschoorn, Norbert Podhorszki, Frederic Suter, Wei Yang, Yiming Yang, Shinjae Yoo, Alexei Klimentov, Adolfy Hoisie}, journal={arXiv preprint arXiv:2410.07940}, year={2024}, archivePrefix={arXiv}, eprint={2410.07940}, primaryClass={cs.DC} }
park2024ai
arxiv-668104
2410.07947
Exploring the core-periphery and community structure in the financial networks through random matrix theory
<|reference_start|>Exploring the core-periphery and community structure in the financial networks through random matrix theory: In finance, Random Matrix Theory (RMT) is an important tool for filtering out noise from large datasets, revealing true correlations among stocks, enhancing risk management and portfolio optimization. In this study, we use RMT to filter out noise from the full cross-correlation matrix of stock price returns for the NIFTY 200 and NIFTY 500 indices on the National Stock Exchange of India. In addition, we applied network theory tools to analyze market and sector modes as filtered correlation structures to study local interactions within financial networks. This allows us to study the very fundamental properties of networks, such as the core-periphery and the community structure of constructed networks over these filtered modes, and compare the results with the network constructed over the full cross-correlation matrix. The results suggest that the core-periphery structure is contained in the market mode, while the community structure is in the sector mode. Thus, both modes outperform the full cross-correlation in terms of capturing the essential respective structure of the network. Furthermore, we used these insights to build portfolios based on communities of the networks corresponding to the sector mode and the network corresponding to the full cross-correlation matrix. The results suggest that the portfolio constructed on the complete cross-correlation-based matrix performs better than the sector mode. These insights provide a greater understanding of RMT application in the financial market.<|reference_end|>
arxiv
@article{pawanesh2024exploring, title={Exploring the core-periphery and community structure in the financial networks through random matrix theory}, author={Pawanesh, Imran Ansari, and Niteesh Sahni}, journal={arXiv preprint arXiv:2410.07947}, year={2024}, archivePrefix={arXiv}, eprint={2410.07947}, primaryClass={cs.SI physics.soc-ph} }
pawanesh2024exploring
arxiv-668105
2410.07951
Disease Entity Recognition and Normalization is Improved with Large Language Model Derived Synthetic Normalized Mentions
<|reference_start|>Disease Entity Recognition and Normalization is Improved with Large Language Model Derived Synthetic Normalized Mentions: Background: Machine learning methods for clinical named entity recognition and entity normalization systems can utilize both labeled corpora and Knowledge Graphs (KGs) for learning. However, infrequently occurring concepts may have few mentions in training corpora and lack detailed descriptions or synonyms, even in large KGs. For Disease Entity Recognition (DER) and Disease Entity Normalization (DEN), this can result in fewer high quality training examples relative to the number of known diseases. Large Language Model (LLM) generation of synthetic training examples could improve performance in these information extraction tasks. Methods: We fine-tuned a LLaMa-2 13B Chat LLM to generate a synthetic corpus containing normalized mentions of concepts from the Unified Medical Language System (UMLS) Disease Semantic Group. We measured overall and Out of Distribution (OOD) performance for DER and DEN, with and without synthetic data augmentation. We evaluated performance on 3 different disease corpora using 4 different data augmentation strategies, assessed using BioBERT for DER and SapBERT and KrissBERT for DEN. Results: Our synthetic data yielded a substantial improvement for DEN, in all 3 training corpora the top 1 accuracy of both SapBERT and KrissBERT improved by 3-9 points in overall performance and by 20-55 points in OOD data. A small improvement (1-2 points) was also seen for DER in overall performance, but only one dataset showed OOD improvement. Conclusion: LLM generation of normalized disease mentions can improve DEN relative to normalization approaches that do not utilize LLMs to augment data with synthetic mentions. Ablation studies indicate that performance gains for DEN were only partially attributable to improvements in OOD performance. The same approach has only a limited ability to improve DER. We make our software and dataset publicly available.<|reference_end|>
arxiv
@article{sasse2024disease, title={Disease Entity Recognition and Normalization is Improved with Large Language Model Derived Synthetic Normalized Mentions}, author={Kuleen Sasse, Shinjitha Vadlakonda, Richard E. Kennedy and John D. Osborne}, journal={arXiv preprint arXiv:2410.07951}, year={2024}, archivePrefix={arXiv}, eprint={2410.07951}, primaryClass={cs.CL cs.LG} }
sasse2024disease
arxiv-668106
2410.07952
Eco-driving Incentive Mechanisms for Mitigating Emissions in Urban Transportation
<|reference_start|>Eco-driving Incentive Mechanisms for Mitigating Emissions in Urban Transportation: This paper proposes incentive mechanisms that promote eco-driving in transportation networks with the over-arching objective of minimizing emissions. The transportation system operator provides the drivers with energy-efficient driving guidance throughout their trips, and their eco-driving levels are measured by how closely they follow this guidance via vehicle telematics. Drivers choose their eco-driving levels to optimize a combination of their travel times and their emissions. To obtain optimal budget allocation and recommendations for the incentive mechanism, the system operator gathers drivers' preferences, or types, to assess each driver's trip urgency and natural willingness to eco-drive. In a setting where drivers truthfully report their types, we introduce the first-best incentive mechanism and show that the obedience condition holds (i.e., drivers find it optimal to comply with the system operator's recommendations) when the recommended eco-driving profile constitutes a Nash equilibrium. Moreover, in a setting where drivers can strategically report their types, we introduce the second-best incentive mechanism and show that the proposed mechanism is incentive-compatible (i.e., drivers find it optimal to be truthful). Under this mechanism, we also show that all equilibrium outcomes are at least as good as the recommended eco-driving profile in terms of the system operator's objective. Overall, this work offers a framework for designing eco-driving incentive mechanisms while considering both the strategic behavior of individual drivers and the network effects of collective decision-making.<|reference_end|>
arxiv
@article{niazi2024eco-driving, title={Eco-driving Incentive Mechanisms for Mitigating Emissions in Urban Transportation}, author={M. Umar B. Niazi, Jung-Hoon Cho, Munther A. Dahleh, Roy Dong, Cathy Wu}, journal={arXiv preprint arXiv:2410.07952}, year={2024}, archivePrefix={arXiv}, eprint={2410.07952}, primaryClass={cs.GT cs.SY eess.SY math.OC} }
niazi2024eco-driving
arxiv-668107
2410.07954
Dynamic Programming based Local Search approaches for Multi-Agent Path Finding problems on Directed Graphs
<|reference_start|>Dynamic Programming based Local Search approaches for Multi-Agent Path Finding problems on Directed Graphs: Among sub-optimal Multi-Agent Path Finding (MAPF) solvers, rule-based algorithms are particularly appealing since they are complete. Even in crowded scenarios, they allow finding a feasible solution that brings each agent to its target, preventing deadlock situations. However, generally, rule-based algorithms provide much longer solutions than the shortest one. The main contribution of this paper is introducing a new local search procedure for improving a known feasible solution. We start from a feasible sub-optimal solution, and perform a local search in a neighborhood of this solution. If we are able to find a shorter solution, we repeat this procedure until the solution cannot be shortened anymore. At the end, we obtain a solution that is still sub-optimal, but generally of much better quality than the initial one. We propose two different local search policies. In the first, we explore all paths in which the agents positions remain in a neighborhood of the corresponding positions of the reference solution. In the second, we set an upper limit to the number of agents that can change their path with respect to the reference solution. These two different policies can also be alternated. We explore the neighborhoods by dynamic programming. The fact that our search is local is fundamental in terms of time complexity. Indeed, if the dynamic programming approach is applied to the full MAPF problem, the number of explored states grows exponentially with the number of agents. Instead, the introduction of a locality constraint allows exploring the neghborhoods in a time that grows polynomially with respect to the number of agents.<|reference_end|>
arxiv
@article{saccani2024dynamic, title={Dynamic Programming based Local Search approaches for Multi-Agent Path Finding problems on Directed Graphs}, author={Irene Saccani, Stefano Ardizzoni, Luca Consolini, Marco Locatelli}, journal={arXiv preprint arXiv:2410.07954}, year={2024}, archivePrefix={arXiv}, eprint={2410.07954}, primaryClass={cs.MA} }
saccani2024dynamic
arxiv-668108
2410.07955
Iterative Optimization Annotation Pipeline and ALSS-YOLO-Seg for Efficient Banana Plantation Segmentation in UAV Imagery
<|reference_start|>Iterative Optimization Annotation Pipeline and ALSS-YOLO-Seg for Efficient Banana Plantation Segmentation in UAV Imagery: Precise segmentation of Unmanned Aerial Vehicle (UAV)-captured images plays a vital role in tasks such as crop yield estimation and plant health assessment in banana plantations. By identifying and classifying planted areas, crop area can be calculated, which is indispensable for accurate yield predictions. However, segmenting banana plantation scenes requires a substantial amount of annotated data, and manual labeling of these images is both time-consuming and labor-intensive, limiting the development of large-scale datasets. Furthermore, challenges such as changing target sizes, complex ground backgrounds, limited computational resources, and correct identification of crop categories make segmentation even more difficult. To address these issues, we proposed a comprehensive solution. Firstly, we designed an iterative optimization annotation pipeline leveraging SAM2's zero-shot capabilities to generate high-quality segmentation annotations, thereby reducing the cost and time associated with data annotation significantly. Secondly, we developed ALSS-YOLO-Seg, an efficient lightweight segmentation model optimized for UAV imagery. The model's backbone includes an Adaptive Lightweight Channel Splitting and Shuffling (ALSS) module to improve information exchange between channels and optimize feature extraction, aiding accurate crop identification. Additionally, a Multi-Scale Channel Attention (MSCA) module combines multi-scale feature extraction with channel attention to tackle challenges of varying target sizes and complex ground backgrounds.<|reference_end|>
arxiv
@article{he2024iterative, title={Iterative Optimization Annotation Pipeline and ALSS-YOLO-Seg for Efficient Banana Plantation Segmentation in UAV Imagery}, author={Ang He, Ximei Wu, Xing Xu, Jing Chen, Xiaobin Guo and Sheng Xu}, journal={arXiv preprint arXiv:2410.07955}, year={2024}, archivePrefix={arXiv}, eprint={2410.07955}, primaryClass={cs.CV} }
he2024iterative
arxiv-668109
2410.07959
COMPL-AI Framework: A Technical Interpretation and LLM Benchmarking Suite for the EU Artificial Intelligence Act
<|reference_start|>COMPL-AI Framework: A Technical Interpretation and LLM Benchmarking Suite for the EU Artificial Intelligence Act: The EU's Artificial Intelligence Act (AI Act) is a significant step towards responsible AI development, but lacks clear technical interpretation, making it difficult to assess models' compliance. This work presents COMPL-AI, a comprehensive framework consisting of (i) the first technical interpretation of the EU AI Act, translating its broad regulatory requirements into measurable technical requirements, with the focus on large language models (LLMs), and (ii) an open-source Act-centered benchmarking suite, based on thorough surveying and implementation of state-of-the-art LLM benchmarks. By evaluating 12 prominent LLMs in the context of COMPL-AI, we reveal shortcomings in existing models and benchmarks, particularly in areas like robustness, safety, diversity, and fairness. This work highlights the need for a shift in focus towards these aspects, encouraging balanced development of LLMs and more comprehensive regulation-aligned benchmarks. Simultaneously, COMPL-AI for the first time demonstrates the possibilities and difficulties of bringing the Act's obligations to a more concrete, technical level. As such, our work can serve as a useful first step towards having actionable recommendations for model providers, and contributes to ongoing efforts of the EU to enable application of the Act, such as the drafting of the GPAI Code of Practice.<|reference_end|>
arxiv
@article{guldimann2024compl-ai, title={COMPL-AI Framework: A Technical Interpretation and LLM Benchmarking Suite for the EU Artificial Intelligence Act}, author={Philipp Guldimann, Alexander Spiridonov, Robin Staab, Nikola Jovanovi'c, Mark Vero, Velko Vechev, Anna Gueorguieva, Mislav Balunovi'c, Nikola Konstantinov, Pavol Bielik, Petar Tsankov, Martin Vechev}, journal={arXiv preprint arXiv:2410.07959}, year={2024}, archivePrefix={arXiv}, eprint={2410.07959}, primaryClass={cs.CL cs.AI cs.CY cs.LG} }
guldimann2024compl-ai
arxiv-668110
2410.07961
QCircuitNet: A Large-Scale Hierarchical Dataset for Quantum Algorithm Design
<|reference_start|>QCircuitNet: A Large-Scale Hierarchical Dataset for Quantum Algorithm Design: Quantum computing is an emerging field recognized for the significant speedup it offers over classical computing through quantum algorithms. However, designing and implementing quantum algorithms pose challenges due to the complex nature of quantum mechanics and the necessity for precise control over quantum states. Despite the significant advancements in AI, there has been a lack of datasets specifically tailored for this purpose. In this work, we introduce QCircuitNet, the first benchmark and test dataset designed to evaluate AI's capability in designing and implementing quantum algorithms in the form of quantum circuit codes. Unlike using AI for writing traditional codes, this task is fundamentally different and significantly more complicated due to highly flexible design space and intricate manipulation of qubits. Our key contributions include: 1. A general framework which formulates the key features of quantum algorithm design task for Large Language Models. 2. Implementation for a wide range of quantum algorithms from basic primitives to advanced applications, with easy extension to more quantum algorithms. 3. Automatic validation and verification functions, allowing for iterative evaluation and interactive reasoning without human inspection. 4. Promising potential as a training dataset through primitive fine-tuning results. We observed several interesting experimental phenomena: fine-tuning does not always outperform few-shot learning, and LLMs tend to exhibit consistent error patterns. QCircuitNet provides a comprehensive benchmark for AI-driven quantum algorithm design, offering advantages in model evaluation and improvement, while also revealing some limitations of LLMs in this domain.<|reference_end|>
arxiv
@article{yang2024qcircuitnet:, title={QCircuitNet: A Large-Scale Hierarchical Dataset for Quantum Algorithm Design}, author={Rui Yang, Yuntian Gu, Ziruo Wang, Yitao Liang, Tongyang Li}, journal={arXiv preprint arXiv:2410.07961}, year={2024}, archivePrefix={arXiv}, eprint={2410.07961}, primaryClass={quant-ph cs.DS cs.LG} }
yang2024qcircuitnet:
arxiv-668111
2410.07962
Towards Assurance of LLM Adversarial Robustness using Ontology-Driven Argumentation
<|reference_start|>Towards Assurance of LLM Adversarial Robustness using Ontology-Driven Argumentation: Despite the impressive adaptability of large language models (LLMs), challenges remain in ensuring their security, transparency, and interpretability. Given their susceptibility to adversarial attacks, LLMs need to be defended with an evolving combination of adversarial training and guardrails. However, managing the implicit and heterogeneous knowledge for continuously assuring robustness is difficult. We introduce a novel approach for assurance of the adversarial robustness of LLMs based on formal argumentation. Using ontologies for formalization, we structure state-of-the-art attacks and defenses, facilitating the creation of a human-readable assurance case, and a machine-readable representation. We demonstrate its application with examples in English language and code translation tasks, and provide implications for theory and practice, by targeting engineers, data scientists, users, and auditors.<|reference_end|>
arxiv
@article{momcilovic2024towards, title={Towards Assurance of LLM Adversarial Robustness using Ontology-Driven Argumentation}, author={Tomas Bueno Momcilovic, Beat Buesser, Giulio Zizzo, Mark Purcell, Dian Balta}, journal={arXiv preprint arXiv:2410.07962}, year={2024}, archivePrefix={arXiv}, eprint={2410.07962}, primaryClass={cs.AI} }
momcilovic2024towards
arxiv-668112
2410.07963
Fron CAD to URDF: Co-Design of a Jet-Powered Humanoid Robot Including CAD Geometry
<|reference_start|>Fron CAD to URDF: Co-Design of a Jet-Powered Humanoid Robot Including CAD Geometry: Co-design optimization strategies usually rely on simplified robot models extracted from CAD. While these models are useful for optimizing geometrical and inertial parameters for robot control, they might overlook important details essential for prototyping the optimized mechanical design. For instance, they may not account for mechanical stresses exerted on the optimized geometries and the complexity of assembly-level design. In this paper, we introduce a co-design framework aimed at improving both the control performance and mechanical design of our robot. Specifically, we identify the robot links that significantly influence control performance. The geometric characteristics of these links are parameterized and optimized using a multi-objective evolutionary algorithm to achieve optimal control performance. Additionally, an automated Finite Element Method (FEM) analysis is integrated into the framework to filter solutions not satisfying the required structural safety margin. We validate the framework by applying it to enhance the mechanical design for flight performance of the jet-powered humanoid robot iRonCub.<|reference_end|>
arxiv
@article{vanteddu2024from, title={From CAD to URDF: Co-Design of a Jet-Powered Humanoid Robot Including CAD Geometry}, author={Punith Reddy Vanteddu, Gabriele Nava, Fabio Bergonti, Giuseppe L'Erario, Antonello Paolino, Daniele Pucci}, journal={arXiv preprint arXiv:2410.07963}, year={2024}, archivePrefix={arXiv}, eprint={2410.07963}, primaryClass={cs.RO} }
vanteddu2024from
arxiv-668113
2410.07966
Neural Reasoning Networks: Efficient Interpretable Neural Networks With Automatic Textual Explanations
<|reference_start|>Neural Reasoning Networks: Efficient Interpretable Neural Networks With Automatic Textual Explanations: Recent advances in machine learning have led to a surge in adoption of neural networks for various tasks, but lack of interpretability remains an issue for many others in which an understanding of the features influencing the prediction is necessary to ensure fairness, safety, and legal compliance. In this paper we consider one class of such tasks, tabular dataset classification, and propose a novel neuro-symbolic architecture, Neural Reasoning Networks (NRN), that is scalable and generates logically sound textual explanations for its predictions. NRNs are connected layers of logical neurons which implement a form of real valued logic. A training algorithm (R-NRN) learns the weights of the network as usual using gradient descent optimization with backprop, but also learns the network structure itself using a bandit-based optimization. Both are implemented in an extension to PyTorch (https://github.com/IBM/torchlogic) that takes full advantage of GPU scaling and batched training. Evaluation on a diverse set of 22 open-source datasets for tabular classification demonstrates performance (measured by ROC AUC) which improves over multi-layer perceptron (MLP) and is statistically similar to other state-of-the-art approaches such as Random Forest, XGBoost and Gradient Boosted Trees, while offering 43% faster training and a more than 2 orders of magnitude reduction in the number of parameters required, on average. Furthermore, R-NRN explanations are shorter than the compared approaches while producing more accurate feature importance scores.<|reference_end|>
arxiv
@article{carrow2024neural, title={Neural Reasoning Networks: Efficient Interpretable Neural Networks With Automatic Textual Explanations}, author={Stephen Carrow, Kyle Harper Erwin, Olga Vilenskaia, Parikshit Ram, Tim Klinger, Naweed Aghmad Khan, Ndivhuwo Makondo, Alexander Gray}, journal={arXiv preprint arXiv:2410.07966}, year={2024}, archivePrefix={arXiv}, eprint={2410.07966}, primaryClass={cs.LG cs.AI} }
carrow2024neural
arxiv-668114
2410.07968
Octopus Inspired Optimization Algorithm: Multi-Level Structures and Parallel Computing Strategies
<|reference_start|>Octopus Inspired Optimization Algorithm: Multi-Level Structures and Parallel Computing Strategies: This paper introduces a novel bionic intelligent optimisation algorithm, Octopus Inspired Optimization (OIO) algorithm, which is inspired by the neural structure of octopus, especially its hierarchical and decentralised interaction properties. By simulating the sensory, decision-making, and executive abilities of octopuses, the OIO algorithm adopts a multi-level hierarchical strategy, including tentacles, suckers, individuals and groups, to achieve an effective combination of global and local search. This hierarchical design not only enhances the flexibility and efficiency of the algorithm, but also significantly improves its search efficiency and adaptability. In performance evaluations, including comparisons with existing mainstream intelligent optimisation algorithms, OIO shows faster convergence and higher accuracy, especially when dealing with multimodal functions and high-dimensional optimisation problems. This advantage is even more pronounced as the required minimum accuracy is higher, with the OIO algorithm showing an average speedup of 2.27 times that of conventional particle swarm optimisation (PSO) and 9.63 times that of differential evolution (DE) on multimodal functions. In particular, when dealing with high-dimensional optimisation problems, OIO achieves an average speed of 10.39 times that of DE, demonstrating its superior computational efficiency. In addition, the OIO algorithm also shows a reduction of about $5\%$ in CPU usage efficiency compared to PSO, which is reflected in the efficiency of CPU resource usage also shows its efficiency. These features make the OIO algorithm show great potential in complex optimisation problems, and it is especially suitable for application scenarios that require fast, efficient and robust optimisation methods, such as robot path planning, supply chain management optimisation, and energy system management.<|reference_end|>
arxiv
@article{wang2024octopus, title={Octopus Inspired Optimization Algorithm: Multi-Level Structures and Parallel Computing Strategies}, author={Xu Wang, Longji Xu, Yiquan Wang, Yuhua Dong, Xiang Li, Jia Deng, Rui He}, journal={arXiv preprint arXiv:2410.07968}, year={2024}, archivePrefix={arXiv}, eprint={2410.07968}, primaryClass={cs.NE} }
wang2024octopus
arxiv-668115
2410.07969
PubMed knowledge graph 20: Connecting papers, patents, and clinical trials in biomedical science
<|reference_start|>PubMed knowledge graph 20: Connecting papers, patents, and clinical trials in biomedical science: Papers, patents, and clinical trials are indispensable types of scientific literature in biomedicine, crucial for knowledge sharing and dissemination. However, these documents are often stored in disparate databases with varying management standards and data formats, making it challenging to form systematic, fine-grained connections among them. To address this issue, we introduce PKG2.0, a comprehensive knowledge graph dataset encompassing over 36 million papers, 1.3 million patents, and 0.48 million clinical trials in the biomedical field. PKG2.0 integrates these previously dispersed resources through various links, including biomedical entities, author networks, citation relationships, and research projects. Fine-grained biomedical entity extraction, high-performance author name disambiguation, and multi-source citation integration have played a crucial role in the construction of the PKG dataset. Additionally, project data from the NIH Exporter enriches the dataset with metadata of NIH-funded projects and their scholarly outputs. Data validation demonstrates that PKG2.0 excels in key tasks such as author disambiguation and biomedical entity recognition. This dataset provides valuable resources for biomedical researchers, bibliometric scholars, and those engaged in literature mining.<|reference_end|>
arxiv
@article{xu2024pubmed, title={PubMed knowledge graph 2.0: Connecting papers, patents, and clinical trials in biomedical science}, author={Jian Xu, Chao Yu, Jiawei Xu, Ying Ding, Vetle I. Torvik, Jaewoo Kang, Mujeen Sung, Min Song}, journal={arXiv preprint arXiv:2410.07969}, year={2024}, archivePrefix={arXiv}, eprint={2410.07969}, primaryClass={cs.DL} }
xu2024pubmed
arxiv-668116
2410.07970
Mapping Hong Kong's Financial Ecosystem: A Network Analysis of the SFC's Licensed Professionals and Institutions
<|reference_start|>Mapping Hong Kong's Financial Ecosystem: A Network Analysis of the SFC's Licensed Professionals and Institutions: We present the first study of the Public Register of Licensed Persons and Registered Institutions maintained by the Hong Kong Securities and Futures Commission (SFC) through the lens of complex network analysis. This dataset, spanning 21 years with daily granularity, provides a unique view of the evolving social network between licensed professionals and their affiliated firms in Hong Kong's financial sector. Leveraging large language models, we classify firms (e.g., asset managers, banks) and infer the likely nationality and gender of employees based on their names. This application enhances the dataset by adding rich demographic and organizational context, enabling more precise network analysis. Our preliminary findings reveal key structural features, offering new insights into the dynamics of Hong Kong's financial landscape. We release the structured dataset to enable further research, establishing a foundation for future studies that may inform recruitment strategies, policy-making, and risk management in the financial industry.<|reference_end|>
arxiv
@article{alketbi2024mapping, title={Mapping Hong Kong's Financial Ecosystem: A Network Analysis of the SFC's Licensed Professionals and Institutions}, author={Abdulla AlKetbi, Gautier Marti, Khaled AlNuaimi, Raed Jaradat, Andreas Henschel}, journal={arXiv preprint arXiv:2410.07970}, year={2024}, archivePrefix={arXiv}, eprint={2410.07970}, primaryClass={stat.AP cs.CE} }
alketbi2024mapping
arxiv-668117
2410.07971
Generalizable and Animatable Gaussian Head Avatar
<|reference_start|>Generalizable and Animatable Gaussian Head Avatar: In this paper, we propose Generalizable and Animatable Gaussian head Avatar (GAGAvatar) for one-shot animatable head avatar reconstruction. Existing methods rely on neural radiance fields, leading to heavy rendering consumption and low reenactment speeds. To address these limitations, we generate the parameters of 3D Gaussians from a single image in a single forward pass. The key innovation of our work is the proposed dual-lifting method, which produces high-fidelity 3D Gaussians that capture identity and facial details. Additionally, we leverage global image features and the 3D morphable model to construct 3D Gaussians for controlling expressions. After training, our model can reconstruct unseen identities without specific optimizations and perform reenactment rendering at real-time speeds. Experiments show that our method exhibits superior performance compared to previous methods in terms of reconstruction quality and expression accuracy. We believe our method can establish new benchmarks for future research and advance applications of digital avatars. Code and demos are available https://github.com/xg-chu/GAGAvatar.<|reference_end|>
arxiv
@article{chu2024generalizable, title={Generalizable and Animatable Gaussian Head Avatar}, author={Xuangeng Chu, Tatsuya Harada}, journal={arXiv preprint arXiv:2410.07971}, year={2024}, archivePrefix={arXiv}, eprint={2410.07971}, primaryClass={cs.CV cs.GR} }
chu2024generalizable
arxiv-668118
2410.07972
Learning Equivariant Non-Local Electron Density Functionals
<|reference_start|>Learning Equivariant Non-Local Electron Density Functionals: The accuracy of density functional theory hinges on the approximation of non-local contributions to the exchange-correlation (XC) functional. To date, machine-learned and human-designed approximations suffer from insufficient accuracy, limited scalability, or dependence on costly reference data. To address these issues, we introduce Equivariant Graph Exchange Correlation (EG-XC), a novel non-local XC functional based on equivariant graph neural networks. EG-XC combines semi-local functionals with a non-local feature density parametrized by an equivariant nuclei-centered point cloud representation of the electron density to capture long-range interactions. By differentiating through a self-consistent field solver, we train EG-XC requiring only energy targets. In our empirical evaluation, we find EG-XC to accurately reconstruct `gold-standard' CCSD(T) energies on MD17. On out-of-distribution conformations of 3BPA, EG-XC reduces the relative MAE by 35% to 50%. Remarkably, EG-XC excels in data efficiency and molecular size extrapolation on QM9, matching force fields trained on 5 times more and larger molecules. On identical training sets, EG-XC yields on average 51% lower MAEs.<|reference_end|>
arxiv
@article{gao2024learning, title={Learning Equivariant Non-Local Electron Density Functionals}, author={Nicholas Gao, Eike Eberhard, Stephan G"unnemann}, journal={arXiv preprint arXiv:2410.07972}, year={2024}, archivePrefix={arXiv}, eprint={2410.07972}, primaryClass={cs.LG physics.chem-ph physics.comp-ph} }
gao2024learning
arxiv-668119
2410.07973
A four-bodies motorcycle dynamic model for observer design
<|reference_start|>A four-bodies motorcycle dynamic model for observer design: Motivated by the need to predict dangerous scenarios, this article introduces a non-linear dynamic model for motorcycles consisting of four rigid bodies. Using Jourdain's principle, the model incorporates both longitudinal and lateral dynamics, targeting a balance between numerical complexity and accuracy of representation. The paper further employs the model to design a Luenberger observer based on linear quadratic regulator theory, for estimating physical states based on sensor measurements. In turn, the state estimates are useful for predicting dangerous scenarios (lowside, highside, fall). The relevance of the approach is demonstrated through simulations of various rectilinear trajectories and a lane-changing scenario using BikeSim simulator.<|reference_end|>
arxiv
@article{kabwangala2024a, title={A four-bodies motorcycle dynamic model for observer design}, author={Tychique Nzalalemba Kabwangala, Ziad Alkhoury, Jawwad Ahmed, Mihaly Petreczky, Laurentiu Hetel and Lotfi Belkoura}, journal={arXiv preprint arXiv:2410.07973}, year={2024}, archivePrefix={arXiv}, eprint={2410.07973}, primaryClass={eess.SY cs.SY} }
kabwangala2024a
arxiv-668120
2410.07974
Doob's Lagrangian: A Sample-Efficient Variational Approach to Transition Path Sampling
<|reference_start|>Doob's Lagrangian: A Sample-Efficient Variational Approach to Transition Path Sampling: Rare event sampling in dynamical systems is a fundamental problem arising in the natural sciences, which poses significant computational challenges due to an exponentially large space of trajectories. For settings where the dynamical system of interest follows a Brownian motion with known drift, the question of conditioning the process to reach a given endpoint or desired rare event is definitively answered by Doob's h-transform. However, the naive estimation of this transform is infeasible, as it requires simulating sufficiently many forward trajectories to estimate rare event probabilities. In this work, we propose a variational formulation of Doob's $h$-transform as an optimization problem over trajectories between a given initial point and the desired ending point. To solve this optimization, we propose a simulation-free training objective with a model parameterization that imposes the desired boundary conditions by design. Our approach significantly reduces the search space over trajectories and avoids expensive trajectory simulation and inefficient importance sampling estimators which are required in existing methods. We demonstrate the ability of our method to find feasible transition paths on real-world molecular simulation and protein folding tasks.<|reference_end|>
arxiv
@article{du2024doob's, title={Doob's Lagrangian: A Sample-Efficient Variational Approach to Transition Path Sampling}, author={Yuanqi Du, Michael Plainer, Rob Brekelmans, Chenru Duan, Frank No'e, Carla P. Gomes, Al'an Aspuru-Guzik, Kirill Neklyudov}, journal={arXiv preprint arXiv:2410.07974}, year={2024}, archivePrefix={arXiv}, eprint={2410.07974}, primaryClass={cs.LG cs.AI physics.bio-ph physics.chem-ph} }
du2024doob's
arxiv-668121
2410.07976
Variational Inequality Methods for Multi-Agent Reinforcement Learning: Performance and Stability Gains
<|reference_start|>Variational Inequality Methods for Multi-Agent Reinforcement Learning: Performance and Stability Gains: Multi-agent reinforcement learning (MARL) presents unique challenges as agents learn strategies through experiences. Gradient-based methods are often sensitive to hyperparameter selection and initial random seed variations. Concurrently, significant advances have been made in solving Variational Inequalities (VIs) which include equilibrium-finding problems particularly in addressing the non-converging rotational dynamics that impede convergence of traditional gradient based optimization methods. This paper explores the potential of leveraging VI-based techniques to improve MARL training. Specifically, we study the performance of VI method namely, Nested-Lookahead VI (nLA-VI) and Extragradient (EG) in enhancing the multi-agent deep deterministic policy gradient (MADDPG) algorithm. We present a VI reformulation of the actor-critic algorithm for both single- and multi-agent settings. We introduce three algorithms that use nLA-VI, EG, and a combination of both, named LA-MADDPG, EG-MADDPG, and LA-EG-MADDPG, respectively. Our empirical results demonstrate that these VI-based approaches yield significant performance improvements in benchmark environments, such as the zero-sum games: rock-paper-scissors and matching pennies, where equilibrium strategies can be quantitatively assessed, and the Multi-Agent Particle Environment: Predator prey benchmark, where VI-based methods also yield balanced participation of agents from the same team.<|reference_end|>
arxiv
@article{sidahmed2024variational, title={Variational Inequality Methods for Multi-Agent Reinforcement Learning: Performance and Stability Gains}, author={Baraah A. M. Sidahmed, Tatjana Chavdarova}, journal={arXiv preprint arXiv:2410.07976}, year={2024}, archivePrefix={arXiv}, eprint={2410.07976}, primaryClass={stat.ML cs.LG} }
sidahmed2024variational
arxiv-668122
2410.07978
Sound Zone Control Robust To Sound Speed Change
<|reference_start|>Sound Zone Control Robust To Sound Speed Change: Sound zone control (SZC) implemented using static optimal filters is significantly affected by various perturbations in the acoustic environment, an important one being the fluctuation in the speed of sound, which is in turn influenced by changes in temperature and humidity (TH). This issue arises because control algorithms typically use pre-recorded, static impulse responses (IRs) to design the optimal control filters. The IRs, however, may change with time due to TH changes, which renders the derived control filters to become non-optimal. To address this challenge, we propose a straightforward model called sinc interpolation-compression/expansion-resampling (SICER), which adjusts the IRs to account for both sound speed reduction and increase. Using the proposed technique, IRs measured at a certain TH can be corrected for any TH change and control filters can be re-derived without the need of re-measuring the new IRs (which is impractical when SZC is deployed). We integrate the proposed SICER IR correction method with the recently introduced variable span trade-off (VAST) framework for SZC, and propose a SICER-corrected VAST method that is resilient to sound speed variations. Simulation studies show that the proposed SICER-corrected VAST approach significantly improves acoustic contrast and reduces signal distortion in the presence of sound speed changes.<|reference_end|>
arxiv
@article{bhattacharjee2024sound, title={Sound Zone Control Robust To Sound Speed Change}, author={Sankha Subhra Bhattacharjee, Jesper Rindom Jensen, Mads Gr{ae}sb{o}ll Christensen}, journal={arXiv preprint arXiv:2410.07978}, year={2024}, archivePrefix={arXiv}, eprint={2410.07978}, primaryClass={eess.AS cs.SD} }
bhattacharjee2024sound
arxiv-668123
2410.07980
D-Wave's Nonlinear-Program Hybrid Solver: Description and Performance Analysis
<|reference_start|>D-Wave's Nonlinear-Program Hybrid Solver: Description and Performance Analysis: The development of advanced quantum-classical algorithms is among the most prominent strategies in quantum computing. Numerous hybrid solvers have been introduced recently. Many of these methods are created ad hoc to address specific use cases. However, several well-established schemes are frequently utilized to address optimization problems. In this context, D-Wave launched the Hybrid Solver Service in 2020, offering a portfolio of methods designed to accelerate time-to-solution for users aiming to optimize performance and operational processes. Recently, a new technique has been added to this portfolio: the Nonlinear-Program Hybrid Solver. This paper describes this solver and evaluates its performance through a benchmark of 45 instances across three combinatorial optimization problems: the Traveling Salesman Problem, the Knapsack Problem, and the Maximum Cut Problem. To facilitate the use of this relatively unexplored solver, we provide details of the implementation used to solve these three optimization problems.<|reference_end|>
arxiv
@article{osaba2024d-wave's, title={D-Wave's Nonlinear-Program Hybrid Solver: Description and Performance Analysis}, author={Eneko Osaba and Pablo Miranda-Rodriguez}, journal={arXiv preprint arXiv:2410.07980}, year={2024}, archivePrefix={arXiv}, eprint={2410.07980}, primaryClass={cs.ET cs.AI quant-ph} }
osaba2024d-wave's
arxiv-668124
2410.07981
MolMix: A Simple Yet Effective Baseline for Multimodal Molecular Representation Learning
<|reference_start|>MolMix: A Simple Yet Effective Baseline for Multimodal Molecular Representation Learning: In this work, we propose a simple transformer-based baseline for multimodal molecular representation learning, integrating three distinct modalities: SMILES strings, 2D graph representations, and 3D conformers of molecules. A key aspect of our approach is the aggregation of 3D conformers, allowing the model to account for the fact that molecules can adopt multiple conformations-an important factor for accurate molecular representation. The tokens for each modality are extracted using modality-specific encoders: a transformer for SMILES strings, a message-passing neural network for 2D graphs, and an equivariant neural network for 3D conformers. The flexibility and modularity of this framework enable easy adaptation and replacement of these encoders, making the model highly versatile for different molecular tasks. The extracted tokens are then combined into a unified multimodal sequence, which is processed by a downstream transformer for prediction tasks. To efficiently scale our model for large multimodal datasets, we utilize Flash Attention 2 and bfloat16 precision. Despite its simplicity, our approach achieves state-of-the-art results across multiple datasets, demonstrating its effectiveness as a strong baseline for multimodal molecular representation learning.<|reference_end|>
arxiv
@article{manolache2024molmix:, title={MolMix: A Simple Yet Effective Baseline for Multimodal Molecular Representation Learning}, author={Andrei Manolache, Dragos Tantaru, Mathias Niepert}, journal={arXiv preprint arXiv:2410.07981}, year={2024}, archivePrefix={arXiv}, eprint={2410.07981}, primaryClass={cs.LG cs.AI} }
manolache2024molmix:
arxiv-668125
2410.07984
Large Deviation Analysis for the Reverse Shannon Theorem
<|reference_start|>Large Deviation Analysis for the Reverse Shannon Theorem: Channel simulation is to simulate a noisy channel using noiseless channels with unlimited shared randomness. This can be interpreted as the reverse problem to Shannon's noisy coding theorem. In contrast to previous works, our approach employs R\'enyi divergence (with the parameter $\alpha\in(0,\infty)$) to measure the level of approximation. Specifically, we obtain the reverse Shannon theorem under the R\'enyi divergence, which characterizes the R\'enyi simulation rate, the minimum communication cost rate required for the R\'enyi divergence vanishing asymptotically. We also investigate the behaviors of the R\'enyi divergence when the communication cost rate is above or below the R\'enyi simulation rate. When the communication cost rate is above the R\'enyi simulation rate, we provide a complete characterization of the convergence exponent, called the reliability function. When the communication cost rate is below the R\'enyi simulation rate, we determine the linear increasing rate for the R\'enyi divergence with parameter $\alpha\in(0,\infty]$, which implies the strong converse exponent for the $\alpha$-order fidelity.<|reference_end|>
arxiv
@article{li2024large, title={Large Deviation Analysis for the Reverse Shannon Theorem}, author={Shi-Bing Li, Ke Li, Lei Yu}, journal={arXiv preprint arXiv:2410.07984}, year={2024}, archivePrefix={arXiv}, eprint={2410.07984}, primaryClass={cs.IT math.IT} }
li2024large
arxiv-668126
2410.07985
Omni-MATH: A Universal Olympiad Level Mathematic Benchmark For Large Language Models
<|reference_start|>Omni-MATH: A Universal Olympiad Level Mathematic Benchmark For Large Language Models: Recent advancements in large language models (LLMs) have led to significant breakthroughs in mathematical reasoning capabilities. However, existing benchmarks like GSM8K or MATH are now being solved with high accuracy (e.g., OpenAI o1 achieves 94.8% on MATH dataset), indicating their inadequacy for truly challenging these models. To bridge this gap, we propose a comprehensive and challenging benchmark specifically designed to assess LLMs' mathematical reasoning at the Olympiad level. Unlike existing Olympiad-related benchmarks, our dataset focuses exclusively on mathematics and comprises a vast collection of 4428 competition-level problems with rigorous human annotation. These problems are meticulously categorized into over 33 sub-domains and span more than 10 distinct difficulty levels, enabling a holistic assessment of model performance in Olympiad-mathematical reasoning. Furthermore, we conducted an in-depth analysis based on this benchmark. Our experimental results show that even the most advanced models, OpenAI o1-mini and OpenAI o1-preview, struggle with highly challenging Olympiad-level problems, with 60.54% and 52.55% accuracy, highlighting significant challenges in Olympiad-level mathematical reasoning.<|reference_end|>
arxiv
@article{gao2024omni-math:, title={Omni-MATH: A Universal Olympiad Level Mathematic Benchmark For Large Language Models}, author={Bofei Gao, Feifan Song, Zhe Yang, Zefan Cai, Yibo Miao, Qingxiu Dong, Lei Li, Chenghao Ma, Liang Chen, Runxin Xu, Zhengyang Tang, Benyou Wang, Daoguang Zan, Shanghaoran Quan, Ge Zhang, Lei Sha, Yichang Zhang, Xuancheng Ren, Tianyu Liu, Baobao Chang}, journal={arXiv preprint arXiv:2410.07985}, year={2024}, archivePrefix={arXiv}, eprint={2410.07985}, primaryClass={cs.CL} }
gao2024omni-math:
arxiv-668127
2410.07986
Single-copy stabilizer testing
<|reference_start|>Single-copy stabilizer testing: We consider the problem of testing whether an unknown $n$-qubit quantum state $|\psi\rangle$ is a stabilizer state, with only single-copy access. We give an algorithm solving this problem using $O(n)$ copies, and conversely prove that $\Omega(\sqrt{n})$ copies are required for any algorithm. The main observation behind our algorithm is that when repeatedly measuring in a randomly chosen stabilizer basis, stabilizer states are the most likely among the set of all pure states to exhibit linear dependencies in measurement outcomes. Our algorithm is designed to probe deviations from this extremal behavior. For the lower bound, we first reduce stabilizer testing to the task of distinguishing random stabilizer states from the maximally mixed state. We then argue that, without loss of generality, it is sufficient to consider measurement strategies that a) lie in the commutant of the tensor action of the Clifford group and b) satisfy a Positive Partial Transpose (PPT) condition. By leveraging these constraints, together with novel results on the partial transposes of the generators of the Clifford commutant, we derive the lower bound on the sample complexity.<|reference_end|>
arxiv
@article{hinsche2024single-copy, title={Single-copy stabilizer testing}, author={Marcel Hinsche, Jonas Helsen}, journal={arXiv preprint arXiv:2410.07986}, year={2024}, archivePrefix={arXiv}, eprint={2410.07986}, primaryClass={quant-ph cs.CC cs.DS} }
hinsche2024single-copy
arxiv-668128
2410.07987
A transition towards virtual representations of visual scenes
<|reference_start|>A transition towards virtual representations of visual scenes: Visual scene understanding is a fundamental task in computer vision that aims to extract meaningful information from visual data. It traditionally involves disjoint and specialized algorithms for different tasks that are tailored for specific application scenarios. This can be cumbersome when designing complex systems that include processing of visual and semantic data extracted from visual scenes, which is even more noticeable nowadays with the influx of applications for virtual or augmented reality. When designing a system that employs automatic visual scene understanding to enable a precise and semantically coherent description of the underlying scene, which can be used to fuel a visualization component with 3D virtual synthesis, the lack of flexibility and unified frameworks become more prominent. To alleviate this issue and its inherent problems, we propose an architecture that addresses the challenges of visual scene understanding and description towards a 3D virtual synthesis that enables an adaptable, unified and coherent solution. Furthermore, we expose how our proposition can be of use into multiple application areas. Additionally, we also present a proof of concept system that employs our architecture to further prove its usability in practice.<|reference_end|>
arxiv
@article{pereira2024a, title={A transition towards virtual representations of visual scenes}, author={Am'erico Pereira, Pedro Carvalho, Lu'is C^orte-Real}, journal={arXiv preprint arXiv:2410.07987}, year={2024}, archivePrefix={arXiv}, eprint={2410.07987}, primaryClass={cs.CV} }
pereira2024a
arxiv-668129
2410.07988
LADIMO: Face Morph Generation through Biometric Template Inversion with Latent Diffusion
<|reference_start|>LADIMO: Face Morph Generation through Biometric Template Inversion with Latent Diffusion: Face morphing attacks pose a severe security threat to face recognition systems, enabling the morphed face image to be verified against multiple identities. To detect such manipulated images, the development of new face morphing methods becomes essential to increase the diversity of training datasets used for face morph detection. In this study, we present a representation-level face morphing approach, namely LADIMO, that performs morphing on two face recognition embeddings. Specifically, we train a Latent Diffusion Model to invert a biometric template - thus reconstructing the face image from an FRS latent representation. Our subsequent vulnerability analysis demonstrates the high morph attack potential in comparison to MIPGAN-II, an established GAN-based face morphing approach. Finally, we exploit the stochastic LADMIO model design in combination with our identity conditioning mechanism to create unlimited morphing attacks from a single face morph image pair. We show that each face morph variant has an individual attack success rate, enabling us to maximize the morph attack potential by applying a simple re-sampling strategy. Code and pre-trained models available here: https://github.com/dasec/LADIMO<|reference_end|>
arxiv
@article{grimmer2024ladimo:, title={LADIMO: Face Morph Generation through Biometric Template Inversion with Latent Diffusion}, author={Marcel Grimmer, Christoph Busch}, journal={arXiv preprint arXiv:2410.07988}, year={2024}, archivePrefix={arXiv}, eprint={2410.07988}, primaryClass={cs.CV} }
grimmer2024ladimo:
arxiv-668130
2410.07989
Machine Learning-based feasibility estimation of digital blocks in BCD technology
<|reference_start|>Machine Learning-based feasibility estimation of digital blocks in BCD technology: Analog-on-Top Mixed Signal (AMS) Integrated Circuit (IC) design is a time-consuming process predominantly carried out by hand. Within this flow, usually, some area is reserved by the top-level integrator for the placement of digital blocks. Specific features of the area, such as size and shape, have a relevant impact on the possibility of implementing the digital logic with the required functionality. We present a Machine Learning (ML)-based evaluation methodology for predicting the feasibility of digital implementation using a set of high-level features. This approach aims to avoid time-consuming Place-and-Route trials, enabling rapid feedback between Digital and Analog Back-End designers during top-level placement.<|reference_end|>
arxiv
@article{faraone2024machine, title={Machine Learning-based feasibility estimation of digital blocks in BCD technology}, author={Gabriele Faraone, Francesco Daghero, Eugenio Serianni, Dario Licastro, Nicola Di Carolo, Michelangelo Grosso, Giovanna Antonella Franchino, Daniele Jahier Pagliari}, journal={arXiv preprint arXiv:2410.07989}, year={2024}, archivePrefix={arXiv}, eprint={2410.07989}, primaryClass={cs.LG} }
faraone2024machine
arxiv-668131
2410.07991
Human and LLM Biases in Hate Speech Annotations: A Socio-Demographic Analysis of Annotators and Targets
<|reference_start|>Human and LLM Biases in Hate Speech Annotations: A Socio-Demographic Analysis of Annotators and Targets: The rise of online platforms exacerbated the spread of hate speech, demanding scalable and effective detection. However, the accuracy of hate speech detection systems heavily relies on human-labeled data, which is inherently susceptible to biases. While previous work has examined the issue, the interplay between the characteristics of the annotator and those of the target of the hate are still unexplored. We fill this gap by leveraging an extensive dataset with rich socio-demographic information of both annotators and targets, uncovering how human biases manifest in relation to the target's attributes. Our analysis surfaces the presence of widespread biases, which we quantitatively describe and characterize based on their intensity and prevalence, revealing marked differences. Furthermore, we compare human biases with those exhibited by persona-based LLMs. Our findings indicate that while persona-based LLMs do exhibit biases, these differ significantly from those of human annotators. Overall, our work offers new and nuanced results on human biases in hate speech annotations, as well as fresh insights into the design of AI-driven hate speech detection systems.<|reference_end|>
arxiv
@article{giorgi2024human, title={Human and LLM Biases in Hate Speech Annotations: A Socio-Demographic Analysis of Annotators and Targets}, author={Tommaso Giorgi, Lorenzo Cima, Tiziano Fagni, Marco Avvenuti, Stefano Cresci}, journal={arXiv preprint arXiv:2410.07991}, year={2024}, archivePrefix={arXiv}, eprint={2410.07991}, primaryClass={cs.CL cs.AI cs.HC} }
giorgi2024human
arxiv-668132
2410.07992
Subsequence Matching and Analysis Problems for Formal Languages
<|reference_start|>Subsequence Matching and Analysis Problems for Formal Languages: In this paper, we study a series of algorithmic problems related to the subsequences occurring in the strings of a given language, under the assumption that this language is succinctly represented by a grammar generating it, or an automaton accepting it. In particular, we focus on the following problems: Given a string $w$ and a language $L$, does there exist a word of $L$ which has $w$ as subsequence? Do all words of $L$ have $w$ as a subsequence? Given an integer $k$ alongside $L$, does there exist a word of $L$ which has all strings of length $k$, over the alphabet of $L$, as subsequences? Do all words of $L$ have all strings of length $k$ as subsequences? For the last two problems, efficient algorithms were already presented in [Adamson et al., ISAAC 2023] for the case when $L$ is a regular language, and efficient solutions can be easily obtained for the first two problems. We extend that work as follows: we give sufficient conditions on the class of input-languages, under which these problems are decidable; we provide efficient algorithms for all these problems in the case when the input language is context-free; we show that all problems are undecidable for context-sensitive languages. Finally, we provide a series of initial results related to a class of languages that strictly includes the regular languages and is strictly included in the class of context-sensitive languages, but is incomparable to the of class context-free languages; these results deviate significantly from those reported for language-classes from the Chomsky hierarchy.<|reference_end|>
arxiv
@article{fazekas2024subsequence, title={Subsequence Matching and Analysis Problems for Formal Languages}, author={Szil'ard Zsolt Fazekas, Tore Ko{ss}, Florin Manea, Robert Mercac{s}, Timo Specht}, journal={arXiv preprint arXiv:2410.07992}, year={2024}, archivePrefix={arXiv}, eprint={2410.07992}, primaryClass={cs.FL cs.DS} }
fazekas2024subsequence
arxiv-668133
2410.07994
Neuroplastic Expansion in Deep Reinforcement Learning
<|reference_start|>Neuroplastic Expansion in Deep Reinforcement Learning: The loss of plasticity in learning agents, analogous to the solidification of neural pathways in biological brains, significantly impedes learning and adaptation in reinforcement learning due to its non-stationary nature. To address this fundamental challenge, we propose a novel approach, Neuroplastic Expansion (NE), inspired by cortical expansion in cognitive science. NE maintains learnability and adaptability throughout the entire training process by dynamically growing the network from a smaller initial size to its full dimension. Our method is designed with three key components: (1) elastic neuron generation based on potential gradients, (2) dormant neuron pruning to optimize network expressivity, and (3) neuron consolidation via experience review to strike a balance in the plasticity-stability dilemma. Extensive experiments demonstrate that NE effectively mitigates plasticity loss and outperforms state-of-the-art methods across various tasks in MuJoCo and DeepMind Control Suite environments. NE enables more adaptive learning in complex, dynamic environments, which represents a crucial step towards transitioning deep reinforcement learning from static, one-time training paradigms to more flexible, continually adapting models.<|reference_end|>
arxiv
@article{liu2024neuroplastic, title={Neuroplastic Expansion in Deep Reinforcement Learning}, author={Jiashun Liu and Johan Obando-Ceron and Aaron Courville and Ling Pan}, journal={arXiv preprint arXiv:2410.07994}, year={2024}, archivePrefix={arXiv}, eprint={2410.07994}, primaryClass={cs.LG} }
liu2024neuroplastic
arxiv-668134
2410.07995
RegionGrasp: A Novel Task for Contact Region Controllable Hand Grasp Generation
<|reference_start|>RegionGrasp: A Novel Task for Contact Region Controllable Hand Grasp Generation: Can machine automatically generate multiple distinct and natural hand grasps, given specific contact region of an object in 3D? This motivates us to consider a novel task of \textit{Region Controllable Hand Grasp Generation (RegionGrasp)}, as follows: given as input a 3D object, together with its specific surface area selected as the intended contact region, to generate a diverse set of plausible hand grasps of the object, where the thumb finger tip touches the object surface on the contact region. To address this task, RegionGrasp-CVAE is proposed, which consists of two main parts. First, to enable contact region-awareness, we propose ConditionNet as the condition encoder that includes in it a transformer-backboned object encoder, O-Enc; a pretraining strategy is adopted by O-Enc, where the point patches of object surface are randomly masked off and subsequently restored, to further capture surface geometric information of the object. Second, to realize interaction awareness, HOINet is introduced to encode hand-object interaction features by entangling high-level hand features with embedded object features through geometric-aware multi-head cross attention. Empirical evaluations demonstrate the effectiveness of our approach qualitatively and quantitatively where it is shown to compare favorably with respect to the state of the art methods.<|reference_end|>
arxiv
@article{wang2024regiongrasp:, title={RegionGrasp: A Novel Task for Contact Region Controllable Hand Grasp Generation}, author={Yilin Wang, Chuan Guo, Li Cheng, Hai Jiang}, journal={arXiv preprint arXiv:2410.07995}, year={2024}, archivePrefix={arXiv}, eprint={2410.07995}, primaryClass={cs.CV} }
wang2024regiongrasp:
arxiv-668135
2410.07997
APOLLO: A GPT-based tool to detect phishing emails and generate explanations that warn users
<|reference_start|>APOLLO: A GPT-based tool to detect phishing emails and generate explanations that warn users: Phishing is one of the most prolific cybercriminal activities, with attacks becoming increasingly sophisticated. It is, therefore, imperative to explore novel technologies to improve user protection across both technical and human dimensions. Large Language Models (LLMs) offer significant promise for text processing in various domains, but their use for defense against phishing attacks still remains scarcely explored. In this paper, we present APOLLO, a tool based on OpenAI's GPT-4o to detect phishing emails and generate explanation messages to users about why a specific email is dangerous, thus improving their decision-making capabilities. We have evaluated the performance of APOLLO in classifying phishing emails; the results show that the LLM models have exemplary capabilities in classifying phishing emails (97 percent accuracy in the case of GPT-4o) and that this performance can be further improved by integrating data from third-party services, resulting in a near-perfect classification rate (99 percent accuracy). To assess the perception of the explanations generated by this tool, we also conducted a study with 20 participants, comparing four different explanations presented as phishing warnings. We compared the LLM-generated explanations to four baselines: a manually crafted warning, and warnings from Chrome, Firefox, and Edge browsers. The results show that not only the LLM-generated explanations were perceived as high quality, but also that they can be more understandable, interesting, and trustworthy than the baselines. These findings suggest that using LLMs as a defense against phishing is a very promising approach, with APOLLO representing a proof of concept in this research direction.<|reference_end|>
arxiv
@article{desolda2024apollo:, title={APOLLO: A GPT-based tool to detect phishing emails and generate explanations that warn users}, author={Giuseppe Desolda, Francesco Greco, and Luca Vigan`o}, journal={arXiv preprint arXiv:2410.07997}, year={2024}, archivePrefix={arXiv}, eprint={2410.07997}, primaryClass={cs.HC cs.CR} }
desolda2024apollo:
arxiv-668136
2410.07998
A Graphical Correlation-Based Method for Counting the Number of Global 8-Cycles on the SCRAM Three-Layer Tanner Graph
<|reference_start|>A Graphical Correlation-Based Method for Counting the Number of Global 8-Cycles on the SCRAM Three-Layer Tanner Graph: This paper presents a novel graphical approach that counts the number of global 8-cycles on the SCRAM three-layer Tanner graph. SCRAM, which stands for Slotted Coded Random Access Multiplexing, is a joint decoder that is meets challenging requirements of 6G. At the transmitter side, the data of the accommodated users is encoded by Low Density Parity Check (LDPC) codes, and the codewords are transmitted over the shared channel by means of Slotted ALOHA. Unlike the state-of-the-art sequential decoders, the SCRAM decoder jointly resolves collisions and decodes the LDPC codewords, in a similar analogy to Belief Propagation on a three-layer Tanner graph. By leveraging the analogy between the two-layer Tanner graph of conventional LDPC codes and the three-layer Tanner graph of SCRAM, the well-developed analysis tools of classical LDPC codes could be utilized to enhance the performance of SCRAM. In essence, the contribution of this paper is three-fold; First it proposes the methodology to utilize these tools to assess the performance of SCRAM. Second, it derives a lower bound on the shortest cycle length of an arbitrary SCRAM Tanner graph. Finally, the paper presents a novel graphical method that counts the number of cycles of length that corresponds to the girth.<|reference_end|>
arxiv
@article{nafie2024a, title={A Graphical Correlation-Based Method for Counting the Number of Global 8-Cycles on the SCRAM Three-Layer Tanner Graph}, author={Sally Nafie, Joerg Robert, Albert Heuberger}, journal={arXiv preprint arXiv:2410.07998}, year={2024}, archivePrefix={arXiv}, eprint={2410.07998}, primaryClass={cs.IT math.IT} }
nafie2024a
arxiv-668137
2410.08000
AHA: Human-Assisted Out-of-Distribution Generalization and Detection
<|reference_start|>AHA: Human-Assisted Out-of-Distribution Generalization and Detection: Modern machine learning models deployed often encounter distribution shifts in real-world applications, manifesting as covariate or semantic out-of-distribution (OOD) shifts. These shifts give rise to challenges in OOD generalization and OOD detection. This paper introduces a novel, integrated approach AHA (Adaptive Human-Assisted OOD learning) to simultaneously address both OOD generalization and detection through a human-assisted framework by labeling data in the wild. Our approach strategically labels examples within a novel maximum disambiguation region, where the number of semantic and covariate OOD data roughly equalizes. By labeling within this region, we can maximally disambiguate the two types of OOD data, thereby maximizing the utility of the fixed labeling budget. Our algorithm first utilizes a noisy binary search algorithm that identifies the maximal disambiguation region with high probability. The algorithm then continues with annotating inside the identified labeling region, reaping the full benefit of human feedback. Extensive experiments validate the efficacy of our framework. We observed that with only a few hundred human annotations, our method significantly outperforms existing state-of-the-art methods that do not involve human assistance, in both OOD generalization and OOD detection. Code is publicly available at \url{https://github.com/HaoyueBaiZJU/aha}.<|reference_end|>
arxiv
@article{bai2024aha:, title={AHA: Human-Assisted Out-of-Distribution Generalization and Detection}, author={Haoyue Bai, Jifan Zhang, Robert Nowak}, journal={arXiv preprint arXiv:2410.08000}, year={2024}, archivePrefix={arXiv}, eprint={2410.08000}, primaryClass={cs.LG} }
bai2024aha:
arxiv-668138
2410.08001
Towards Synergistic, Generalized, and Efficient Dual-System for Robotic Manipulation
<|reference_start|>Towards Synergistic, Generalized, and Efficient Dual-System for Robotic Manipulation: The increasing demand for versatile robotic systems to operate in diverse and dynamic environments has emphasized the importance of a generalist policy, which leverages a large cross-embodiment data corpus to facilitate broad adaptability and high-level reasoning. However, the generalist would struggle with inefficient inference and cost-expensive training. The specialist policy, instead, is curated for specific domain data and excels at task-level precision with efficiency. Yet, it lacks the generalization capacity for a wide range of applications. Inspired by these observations, we introduce RoboDual, a synergistic dual-system that supplements the merits of both generalist and specialist policy. A diffusion transformer-based specialist is devised for multi-step action rollouts, exquisitely conditioned on the high-level task understanding and discretized action output of a vision-language-action (VLA) based generalist. Compared to OpenVLA, RoboDual achieves 26.7% improvement in real-world setting and 12% gain on CALVIN by introducing a specialist policy with merely 20M trainable parameters. It maintains strong performance with 5% of demonstration data only, and enables a 3.8 times higher control frequency in real-world deployment. Code would be made publicly available. Our project page is hosted at: https://opendrivelab.com/RoboDual/<|reference_end|>
arxiv
@article{bu2024towards, title={Towards Synergistic, Generalized, and Efficient Dual-System for Robotic Manipulation}, author={Qingwen Bu, Hongyang Li, Li Chen, Jisong Cai, Jia Zeng, Heming Cui, Maoqing Yao, Yu Qiao}, journal={arXiv preprint arXiv:2410.08001}, year={2024}, archivePrefix={arXiv}, eprint={2410.08001}, primaryClass={cs.RO cs.AI} }
bu2024towards
arxiv-668139
2410.08003
More Experts Than Galaxies: Conditionally-overlapping Experts With Biologically-Inspired Fixed Routing
<|reference_start|>More Experts Than Galaxies: Conditionally-overlapping Experts With Biologically-Inspired Fixed Routing: The evolution of biological neural systems has led to both modularity and sparse coding, which enables efficiency in energy usage, and robustness across the diversity of tasks in the lifespan. In contrast, standard neural networks rely on dense, non-specialized architectures, where all model parameters are simultaneously updated to learn multiple tasks, leading to representation interference. Current sparse neural network approaches aim to alleviate this issue, but are often hindered by limitations such as 1) trainable gating functions that cause representation collapse; 2) non-overlapping experts that result in redundant computation and slow learning; and 3) reliance on explicit input or task IDs that impose significant constraints on flexibility and scalability. In this paper we propose Conditionally Overlapping Mixture of ExperTs (COMET), a general deep learning method that addresses these challenges by inducing a modular, sparse architecture with an exponential number of overlapping experts. COMET replaces the trainable gating function used in Sparse Mixture of Experts with a fixed, biologically inspired random projection applied to individual input representations. This design causes the degree of expert overlap to depend on input similarity, so that similar inputs tend to share more parameters. This facilitates positive knowledge transfer, resulting in faster learning and improved generalization. We demonstrate the effectiveness of COMET on a range of tasks, including image classification, language modeling, and regression, using several popular deep learning architectures.<|reference_end|>
arxiv
@article{shaier2024more, title={More Experts Than Galaxies: Conditionally-overlapping Experts With Biologically-Inspired Fixed Routing}, author={Sagi Shaier, Francisco Pereira, Katharina von der Wense, Lawrence E Hunter, Matt Jones}, journal={arXiv preprint arXiv:2410.08003}, year={2024}, archivePrefix={arXiv}, eprint={2410.08003}, primaryClass={cs.LG} }
shaier2024more
arxiv-668140
2410.08005
NLP-Guided Synthesis: Transitioning from Sequential Programs to Distributed Programs
<|reference_start|>NLP-Guided Synthesis: Transitioning from Sequential Programs to Distributed Programs: As the need for large-scale data processing grows, distributed programming frameworks like PySpark have become increasingly popular. However, the task of converting traditional, sequential code to distributed code remains a significant hurdle, often requiring specialized knowledge and substantial time investment. While existing tools have made strides in automating this conversion, they often fall short in terms of speed, flexibility, and overall applicability. In this paper, we introduce ROOP, a groundbreaking tool designed to address these challenges. Utilizing a BERT-based Natural Language Processing (NLP) model, ROOP automates the translation of Python code to its PySpark equivalent, offering a streamlined solution for leveraging distributed computing resources. We evaluated ROOP using a diverse set of 14 Python programs comprising 26 loop fragments. Our results are promising: ROOP achieved a near-perfect translation accuracy rate, successfully converting 25 out of the 26 loop fragments. Notably, for simpler operations, ROOP demonstrated remarkable efficiency, completing translations in as little as 44 seconds. Moreover, ROOP incorporates a built-in testing mechanism to ensure the functional equivalence of the original and translated code, adding an extra layer of reliability. This research opens up new avenues for automating the transition from sequential to distributed programming, making the process more accessible and efficient for developers.<|reference_end|>
arxiv
@article{sanjel2024nlp-guided, title={NLP-Guided Synthesis: Transitioning from Sequential Programs to Distributed Programs}, author={Arun Sanjel, Bikram Khanal, Greg Speegle, Pablo Rivas}, journal={arXiv preprint arXiv:2410.08005}, year={2024}, archivePrefix={arXiv}, eprint={2410.08005}, primaryClass={cs.DC} }
sanjel2024nlp-guided
arxiv-668141
2410.08007
Time Can Invalidate Algorithmic Recourse
<|reference_start|>Time Can Invalidate Algorithmic Recourse: Algorithmic Recourse (AR) aims to provide users with actionable steps to overturn unfavourable decisions made by machine learning predictors. However, these actions often take time to implement (e.g., getting a degree can take years), and their effects may vary as the world evolves. Thus, it is natural to ask for recourse that remains valid in a dynamic environment. In this paper, we study the robustness of algorithmic recourse over time by casting the problem through the lens of causality. We demonstrate theoretically and empirically that (even robust) causal AR methods can fail over time except in the - unlikely - case that the world is stationary. Even more critically, unless the world is fully deterministic, counterfactual AR cannot be solved optimally. To account for this, we propose a simple yet effective algorithm for temporal AR that explicitly accounts for time. Our simulations on synthetic and realistic datasets show how considering time produces more resilient solutions to potential trends in the data distribution.<|reference_end|>
arxiv
@article{de toni2024time, title={Time Can Invalidate Algorithmic Recourse}, author={Giovanni De Toni, Stefano Teso, Bruno Lepri, Andrea Passerini}, journal={arXiv preprint arXiv:2410.08007}, year={2024}, archivePrefix={arXiv}, eprint={2410.08007}, primaryClass={cs.LG cs.CY} }
de toni2024time
arxiv-668142
2410.08010
Study of Attacks on the HHL Quantum Algorithm
<|reference_start|>Study of Attacks on the HHL Quantum Algorithm: As the quantum research community continues to grow and new algorithms are designed, developed, and implemented, it is crucial to start thinking about security aspects and potential threats that could result in misuse of the algorithms, or jeopardize the information processed with these quantum algorithms. This work focuses on exploration of two types of potential attacks that could be deployed on a cloud-based quantum computer by an attacker circuit trying to interfere with victim circuit. The two attacks, called Improper Initialization Attack (IIA) and Higher Energy Attack (HEA), are for the first time applied to a well-known and widely used quantum algorithm: HHL. The HHL algorithm is used in the field of machine learning and big data for solving systems of linear equations. This work evaluates the effect of the attacks on different qubits within the HHL algorithm: ancilla qubit, clock qubit, and b qubit. This work demonstrates that the two attacks are able to cause incorrect results, even when only one of the qubits in the victim algorithm is attacked. Having discovered the vulnerabilities, the work motivates the need for future work to develop defense strategies for each of these attack scenarios.<|reference_end|>
arxiv
@article{tan2024study, title={Study of Attacks on the HHL Quantum Algorithm}, author={Yizhuo Tan, Hrvoje Kukina and Jakub Szefer}, journal={arXiv preprint arXiv:2410.08010}, year={2024}, archivePrefix={arXiv}, eprint={2410.08010}, primaryClass={cs.CR quant-ph} }
tan2024study
arxiv-668143
2410.08014
LLM Cascade with Multi-Objective Optimal Consideration
<|reference_start|>LLM Cascade with Multi-Objective Optimal Consideration: Large Language Models (LLMs) have demonstrated exceptional capabilities in understanding and generating natural language. However, their high deployment costs often pose a barrier to practical applications, especially. Cascading local and server models offers a promising solution to this challenge. While existing studies on LLM cascades have primarily focused on the performance-cost trade-off, real-world scenarios often involve more complex requirements. This paper introduces a novel LLM Cascade strategy with Multi-Objective Optimization, enabling LLM cascades to consider additional objectives (e.g., privacy) and better align with the specific demands of real-world applications while maintaining their original cascading abilities. Extensive experiments on three benchmarks validate the effectiveness and superiority of our approach.<|reference_end|>
arxiv
@article{zhang2024llm, title={LLM Cascade with Multi-Objective Optimal Consideration}, author={Kai Zhang, Liqian Peng, Congchao Wang, Alec Go, Xiaozhong Liu}, journal={arXiv preprint arXiv:2410.08014}, year={2024}, archivePrefix={arXiv}, eprint={2410.08014}, primaryClass={cs.CL} }
zhang2024llm
arxiv-668144
2410.08015
Non-transferable Pruning
<|reference_start|>Non-transferable Pruning: Pretrained Deep Neural Networks (DNNs), developed from extensive datasets to integrate multifaceted knowledge, are increasingly recognized as valuable intellectual property (IP). To safeguard these models against IP infringement, strategies for ownership verification and usage authorization have emerged. Unlike most existing IP protection strategies that concentrate on restricting direct access to the model, our study addresses an extended DNN IP issue: applicability authorization, aiming to prevent the misuse of learned knowledge, particularly in unauthorized transfer learning scenarios. We propose Non-Transferable Pruning (NTP), a novel IP protection method that leverages model pruning to control a pretrained DNN's transferability to unauthorized data domains. Selective pruning can deliberately diminish a model's suitability on unauthorized domains, even with full fine-tuning. Specifically, our framework employs the alternating direction method of multipliers (ADMM) for optimizing both the model sparsity and an innovative non-transferable learning loss, augmented with Fisher space discriminative regularization, to constrain the model's generalizability to the target dataset. We also propose a novel effective metric to measure the model non-transferability: Area Under the Sample-wise Learning Curve (SLC-AUC). This metric facilitates consideration of full fine-tuning across various sample sizes. Experimental results demonstrate that NTP significantly surpasses the state-of-the-art non-transferable learning methods, with an average SLC-AUC at $-0.54$ across diverse pairs of source and target domains, indicating that models trained with NTP do not suit for transfer learning to unauthorized target domains. The efficacy of NTP is validated in both supervised and self-supervised learning contexts, confirming its applicability in real-world scenarios.<|reference_end|>
arxiv
@article{ding2024non-transferable, title={Non-transferable Pruning}, author={Ruyi Ding, Lili Su, Aidong Adam Ding, Yunsi Fei}, journal={arXiv preprint arXiv:2410.08015}, year={2024}, archivePrefix={arXiv}, eprint={2410.08015}, primaryClass={cs.LG} }
ding2024non-transferable
arxiv-668145
2410.08017
Fast Feedforward 3D Gaussian Splatting Compression
<|reference_start|>Fast Feedforward 3D Gaussian Splatting Compression: With 3D Gaussian Splatting (3DGS) advancing real-time and high-fidelity rendering for novel view synthesis, storage requirements pose challenges for their widespread adoption. Although various compression techniques have been proposed, previous art suffers from a common limitation: for any existing 3DGS, per-scene optimization is needed to achieve compression, making the compression sluggish and slow. To address this issue, we introduce Fast Compression of 3D Gaussian Splatting (FCGS), an optimization-free model that can compress 3DGS representations rapidly in a single feed-forward pass, which significantly reduces compression time from minutes to seconds. To enhance compression efficiency, we propose a multi-path entropy module that assigns Gaussian attributes to different entropy constraint paths for balance between size and fidelity. We also carefully design both inter- and intra-Gaussian context models to remove redundancies among the unstructured Gaussian blobs. Overall, FCGS achieves a compression ratio of over 20X while maintaining fidelity, surpassing most per-scene SOTA optimization-based methods. Our code is available at: https://github.com/YihangChen-ee/FCGS.<|reference_end|>
arxiv
@article{chen2024fast, title={Fast Feedforward 3D Gaussian Splatting Compression}, author={Yihang Chen, Qianyi Wu, Mengyao Li, Weiyao Lin, Mehrtash Harandi, Jianfei Cai}, journal={arXiv preprint arXiv:2410.08017}, year={2024}, archivePrefix={arXiv}, eprint={2410.08017}, primaryClass={cs.CV} }
chen2024fast
arxiv-668146
2410.08020
Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs
<|reference_start|>Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs: Recent efforts in fine-tuning language models often rely on automatic data selection, commonly using Nearest Neighbors retrieval from large datasets. However, we theoretically show that this approach tends to select redundant data, limiting its effectiveness or even hurting performance. To address this, we introduce SIFT, a data selection algorithm designed to reduce uncertainty about the model's response given a prompt, which unifies ideas from retrieval and active learning. Whereas Nearest Neighbor retrieval typically fails in the presence of information duplication, SIFT accounts for information duplication and optimizes the overall information gain of the selected examples. We focus our evaluations on fine-tuning at test-time for prompt-specific language modeling on the Pile dataset, and show that SIFT consistently outperforms Nearest Neighbor retrieval, with minimal computational overhead. Moreover, we show that our uncertainty estimates can predict the performance gain of test-time fine-tuning, and use this to develop an adaptive algorithm that invests test-time compute proportional to realized performance gains. We provide the $\texttt{activeft}$ (Active Fine-Tuning) library which can be used as a drop-in replacement for Nearest Neighbor retrieval.<|reference_end|>
arxiv
@article{hübotter2024efficiently, title={Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs}, author={Jonas H"ubotter, Sascha Bongni, Ido Hakimi, Andreas Krause}, journal={arXiv preprint arXiv:2410.08020}, year={2024}, archivePrefix={arXiv}, eprint={2410.08020}, primaryClass={cs.LG cs.AI} }
hübotter2024efficiently
arxiv-668147
2410.08021
OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling
<|reference_start|>OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling: Constrained by the separate encoding of vision and language, existing grounding and referring segmentation works heavily rely on bulky Transformer-based fusion en-/decoders and a variety of early-stage interaction technologies. Simultaneously, the current mask visual language modeling (MVLM) fails to capture the nuanced referential relationship between image-text in referring tasks. In this paper, we propose OneRef, a minimalist referring framework built on the modality-shared one-tower transformer that unifies the visual and linguistic feature spaces. To modeling the referential relationship, we introduce a novel MVLM paradigm called Mask Referring Modeling (MRefM), which encompasses both referring-aware mask image modeling and referring-aware mask language modeling. Both modules not only reconstruct modality-related content but also cross-modal referring content. Within MRefM, we propose a referring-aware dynamic image masking strategy that is aware of the referred region rather than relying on fixed ratios or generic random masking schemes. By leveraging the unified visual language feature space and incorporating MRefM's ability to model the referential relations, our approach enables direct regression of the referring results without resorting to various complex techniques. Our method consistently surpasses existing approaches and achieves SoTA performance on both grounding and segmentation tasks, providing valuable insights for future research. Our code and models are available at https://github.com/linhuixiao/OneRef.<|reference_end|>
arxiv
@article{xiao2024oneref:, title={OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling}, author={Linhui Xiao, Xiaoshan Yang, Fang Peng, Yaowei Wang, Changsheng Xu}, journal={arXiv preprint arXiv:2410.08021}, year={2024}, archivePrefix={arXiv}, eprint={2410.08021}, primaryClass={cs.CV} }
xiao2024oneref:
arxiv-668148
2410.08022
Probabilistic Satisfaction of Temporal Logic Constraints in Reinforcement Learning via Adaptive Policy-Switching
<|reference_start|>Probabilistic Satisfaction of Temporal Logic Constraints in Reinforcement Learning via Adaptive Policy-Switching: Constrained Reinforcement Learning (CRL) is a subset of machine learning that introduces constraints into the traditional reinforcement learning (RL) framework. Unlike conventional RL which aims solely to maximize cumulative rewards, CRL incorporates additional constraints that represent specific mission requirements or limitations that the agent must comply with during the learning process. In this paper, we address a type of CRL problem where an agent aims to learn the optimal policy to maximize reward while ensuring a desired level of temporal logic constraint satisfaction throughout the learning process. We propose a novel framework that relies on switching between pure learning (reward maximization) and constraint satisfaction. This framework estimates the probability of constraint satisfaction based on earlier trials and properly adjusts the probability of switching between learning and constraint satisfaction policies. We theoretically validate the correctness of the proposed algorithm and demonstrate its performance and scalability through comprehensive simulations.<|reference_end|>
arxiv
@article{lin2024probabilistic, title={Probabilistic Satisfaction of Temporal Logic Constraints in Reinforcement Learning via Adaptive Policy-Switching}, author={Xiaoshan Lin, Sad{i}k Bera Y"uksel, Yasin Yaz{i}c{i}ou{g}lu, and Derya Aksaray}, journal={arXiv preprint arXiv:2410.08022}, year={2024}, archivePrefix={arXiv}, eprint={2410.08022}, primaryClass={cs.AI cs.RO cs.SY eess.SY} }
lin2024probabilistic
arxiv-668149
2410.08023
GrabDAE: An Innovative Framework for Unsupervised Domain Adaptation Utilizing Grab-Mask and Denoise Auto-Encoder
<|reference_start|>GrabDAE: An Innovative Framework for Unsupervised Domain Adaptation Utilizing Grab-Mask and Denoise Auto-Encoder: Unsupervised Domain Adaptation (UDA) aims to adapt a model trained on a labeled source domain to an unlabeled target domain by addressing the domain shift. Existing Unsupervised Domain Adaptation (UDA) methods often fall short in fully leveraging contextual information from the target domain, leading to suboptimal decision boundary separation during source and target domain alignment. To address this, we introduce GrabDAE, an innovative UDA framework designed to tackle domain shift in visual classification tasks. GrabDAE incorporates two key innovations: the Grab-Mask module, which blurs background information in target domain images, enabling the model to focus on essential, domain-relevant features through contrastive learning; and the Denoising Auto-Encoder (DAE), which enhances feature alignment by reconstructing features and filtering noise, ensuring a more robust adaptation to the target domain. These components empower GrabDAE to effectively handle unlabeled target domain data, significantly improving both classification accuracy and robustness. Extensive experiments on benchmark datasets, including VisDA-2017, Office-Home, and Office31, demonstrate that GrabDAE consistently surpasses state-of-the-art UDA methods, setting new performance benchmarks. By tackling UDA's critical challenges with its novel feature masking and denoising approach, GrabDAE offers both significant theoretical and practical advancements in domain adaptation.<|reference_end|>
arxiv
@article{chen2024grabdae:, title={GrabDAE: An Innovative Framework for Unsupervised Domain Adaptation Utilizing Grab-Mask and Denoise Auto-Encoder}, author={Junzhou Chen, Xuan Wen, Ronghui Zhang, Bingtao Ren, Di Wu, Zhigang Xu, Danwei Wang}, journal={arXiv preprint arXiv:2410.08023}, year={2024}, archivePrefix={arXiv}, eprint={2410.08023}, primaryClass={cs.CV cs.AI} }
chen2024grabdae:
arxiv-668150
2410.08024
Pretraining Graph Transformers with Atom-in-a-Molecule Quantum Properties for Improved ADMET Modeling
<|reference_start|>Pretraining Graph Transformers with Atom-in-a-Molecule Quantum Properties for Improved ADMET Modeling: We evaluate the impact of pretraining Graph Transformer architectures on atom-level quantum-mechanical features for the modeling of absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties of drug-like compounds. We compare this pretraining strategy with two others: one based on molecular quantum properties (specifically the HOMO-LUMO gap) and one using a self-supervised atom masking technique. After fine-tuning on Therapeutic Data Commons ADMET datasets, we evaluate the performance improvement in the different models observing that models pretrained with atomic quantum mechanical properties produce in general better results. We then analyse the latent representations and observe that the supervised strategies preserve the pretraining information after finetuning and that different pretrainings produce different trends in latent expressivity across layers. Furthermore, we find that models pretrained on atomic quantum mechanical properties capture more low-frequency laplacian eigenmodes of the input graph via the attention weights and produce better representations of atomic environments within the molecule. Application of the analysis to a much larger non-public dataset for microsomal clearance illustrates generalizability of the studied indicators. In this case the performances of the models are in accordance with the representation analysis and highlight, especially for the case of masking pretraining and atom-level quantum property pretraining, how model types with similar performance on public benchmarks can have different performances on large scale pharmaceutical data.<|reference_end|>
arxiv
@article{fallani2024pretraining, title={Pretraining Graph Transformers with Atom-in-a-Molecule Quantum Properties for Improved ADMET Modeling}, author={Alessio Fallani, Ramil Nugmanov, Jose Arjona-Medina, J"org Kurt Wegner, Alexandre Tkatchenko, Kostiantyn Chernichenko}, journal={arXiv preprint arXiv:2410.08024}, year={2024}, archivePrefix={arXiv}, eprint={2410.08024}, primaryClass={cs.LG cs.AI} }
fallani2024pretraining
arxiv-668151
2410.08025
The Computational Complexity of Circuit Discovery for Inner Interpretability
<|reference_start|>The Computational Complexity of Circuit Discovery for Inner Interpretability: Many proposed applications of neural networks in machine learning, cognitive/brain science, and society hinge on the feasibility of inner interpretability via circuit discovery. This calls for empirical and theoretical explorations of viable algorithmic options. Despite advances in the design and testing of heuristics, there are concerns about their scalability and faithfulness at a time when we lack understanding of the complexity properties of the problems they are deployed to solve. To address this, we study circuit discovery with classical and parameterized computational complexity theory: (1) we describe a conceptual scaffolding to reason about circuit finding queries in terms of affordances for description, explanation, prediction and control; (2) we formalize a comprehensive set of queries that capture mechanistic explanation, and propose a formal framework for their analysis; (3) we use it to settle the complexity of many query variants and relaxations of practical interest on multi-layer perceptrons (part of, e.g., transformers). Our findings reveal a challenging complexity landscape. Many queries are intractable (NP-hard, $\Sigma^p_2$-hard), remain fixed-parameter intractable (W[1]-hard) when constraining model/circuit features (e.g., depth), and are inapproximable under additive, multiplicative, and probabilistic approximation schemes. To navigate this landscape, we prove there exist transformations to tackle some of these hard problems (NP- vs. $\Sigma^p_2$-complete) with better-understood heuristics, and prove the tractability (PTIME) or fixed-parameter tractability (FPT) of more modest queries which retain useful affordances. This framework allows us to understand the scope and limits of interpretability queries, explore viable options, and compare their resource demands among existing and future architectures.<|reference_end|>
arxiv
@article{adolfi2024the, title={The Computational Complexity of Circuit Discovery for Inner Interpretability}, author={Federico Adolfi, Martina G. Vilas, Todd Wareham}, journal={arXiv preprint arXiv:2410.08025}, year={2024}, archivePrefix={arXiv}, eprint={2410.08025}, primaryClass={cs.AI cs.CC q-bio.NC} }
adolfi2024the
arxiv-668152
2410.08026
Generalization Bounds and Model Complexity for Kolmogorov-Arnold Networks
<|reference_start|>Generalization Bounds and Model Complexity for Kolmogorov-Arnold Networks: Kolmogorov-Arnold Network (KAN) is a network structure recently proposed by Liu et al. (2024) that offers improved interpretability and a more parsimonious design in many science-oriented tasks compared to multi-layer perceptrons. This work provides a rigorous theoretical analysis of KAN by establishing generalization bounds for KAN equipped with activation functions that are either represented by linear combinations of basis functions or lying in a low-rank Reproducing Kernel Hilbert Space (RKHS). In the first case, the generalization bound accommodates various choices of basis functions in forming the activation functions in each layer of KAN and is adapted to different operator norms at each layer. For a particular choice of operator norms, the bound scales with the $l_1$ norm of the coefficient matrices and the Lipschitz constants for the activation functions, and it has no dependence on combinatorial parameters (e.g., number of nodes) outside of logarithmic factors. Moreover, our result does not require the boundedness assumption on the loss function and, hence, is applicable to a general class of regression-type loss functions. In the low-rank case, the generalization bound scales polynomially with the underlying ranks as well as the Lipschitz constants of the activation functions in each layer. These bounds are empirically investigated for KANs trained with stochastic gradient descent on simulated and real data sets. The numerical results demonstrate the practical relevance of these bounds.<|reference_end|>
arxiv
@article{zhang2024generalization, title={Generalization Bounds and Model Complexity for Kolmogorov-Arnold Networks}, author={Xianyang Zhang and Huijuan Zhou}, journal={arXiv preprint arXiv:2410.08026}, year={2024}, archivePrefix={arXiv}, eprint={2410.08026}, primaryClass={cs.LG cs.NE stat.ML} }
zhang2024generalization
arxiv-668153
2410.08027
Private Language Models via Truncated Laplacian Mechanism
<|reference_start|>Private Language Models via Truncated Laplacian Mechanism: Deep learning models for NLP tasks are prone to variants of privacy attacks. To prevent privacy leakage, researchers have investigated word-level perturbations, relying on the formal guarantees of differential privacy (DP) in the embedding space. However, many existing approaches either achieve unsatisfactory performance in the high privacy regime when using the Laplacian or Gaussian mechanism, or resort to weaker relaxations of DP that are inferior to the canonical DP in terms of privacy strength. This raises the question of whether a new method for private word embedding can be designed to overcome these limitations. In this paper, we propose a novel private embedding method called the high dimensional truncated Laplacian mechanism. Specifically, we introduce a non-trivial extension of the truncated Laplacian mechanism, which was previously only investigated in one-dimensional space cases. Theoretically, we show that our method has a lower variance compared to the previous private word embedding methods. To further validate its effectiveness, we conduct comprehensive experiments on private embedding and downstream tasks using three datasets. Remarkably, even in the high privacy regime, our approach only incurs a slight decrease in utility compared to the non-private scenario.<|reference_end|>
arxiv
@article{huang2024private, title={Private Language Models via Truncated Laplacian Mechanism}, author={Tianhao Huang, Tao Yang, Ivan Habernal, Lijie Hu, and Di Wang}, journal={arXiv preprint arXiv:2410.08027}, year={2024}, archivePrefix={arXiv}, eprint={2410.08027}, primaryClass={cs.CL cs.AI cs.LG} }
huang2024private
arxiv-668154
2410.08031
The Complexity of Symmetric Bimatrix Games with Common Payoffs
<|reference_start|>The Complexity of Symmetric Bimatrix Games with Common Payoffs: We study symmetric bimatrix games that also have the common-payoff property, i.e., the two players receive the same payoff at any outcome of the game. Due to the symmetry property, these games are guaranteed to have symmetric Nash equilibria, where the two players play the same (mixed) strategy. While the problem of computing such symmetric equilibria in general symmetric bimatrix games is known to be intractable, namely PPAD-complete, this result does not extend to our setting. Indeed, due to the common-payoff property, the problem lies in the lower class CLS, ruling out PPAD-hardness. In this paper, we show that the problem remains intractable, namely it is CLS-complete. On the way to proving this result, as our main technical contribution, we show that computing a Karush-Kuhn-Tucker (KKT) point of a quadratic program remains CLS-hard, even when the feasible domain is a simplex.<|reference_end|>
arxiv
@article{ghosh2024the, title={The Complexity of Symmetric Bimatrix Games with Common Payoffs}, author={Abheek Ghosh and Alexandros Hollender}, journal={arXiv preprint arXiv:2410.08031}, year={2024}, archivePrefix={arXiv}, eprint={2410.08031}, primaryClass={cs.GT cs.CC} }
ghosh2024the
arxiv-668155
2410.08032
Strategic Classification With Externalities
<|reference_start|>Strategic Classification With Externalities: We propose a new variant of the strategic classification problem: a principal reveals a classifier, and $n$ agents report their (possibly manipulated) features to be classified. Motivated by real-world applications, our model crucially allows the manipulation of one agent to affect another; that is, it explicitly captures inter-agent externalities. The principal-agent interactions are formally modeled as a Stackelberg game, with the resulting agent manipulation dynamics captured as a simultaneous game. We show that under certain assumptions, the pure Nash Equilibrium of this agent manipulation game is unique and can be efficiently computed. Leveraging this result, PAC learning guarantees are established for the learner: informally, we show that it is possible to learn classifiers that minimize loss on the distribution, even when a random number of agents are manipulating their way to a pure Nash Equilibrium. We also comment on the optimization of such classifiers through gradient-based approaches. This work sets the theoretical foundations for a more realistic analysis of classifiers that are robust against multiple strategic actors interacting in a common environment.<|reference_end|>
arxiv
@article{chen2024strategic, title={Strategic Classification With Externalities}, author={Yiling Chen, Safwan Hossain, Evi Micha, Ariel Procaccia}, journal={arXiv preprint arXiv:2410.08032}, year={2024}, archivePrefix={arXiv}, eprint={2410.08032}, primaryClass={cs.GT cs.AI cs.LG cs.MA} }
chen2024strategic
arxiv-668156
2410.08033
Second-Order Optimization via Quiescence
<|reference_start|>Second-Order Optimization via Quiescence: Second-order optimization methods exhibit fast convergence to critical points, however, in nonconvex optimization, these methods often require restrictive step-sizes to ensure a monotonically decreasing objective function. In the presence of highly nonlinear objective functions with large Lipschitz constants, increasingly small step-sizes become a bottleneck to fast convergence. We propose a second-order optimization method that utilizes a dynamic system model to represent the trajectory of optimization variables as an ODE. We then follow the quasi-steady state trajectory by forcing variables with the fastest rise time into a state known as quiescence. This optimization via quiescence allows us to adaptively select large step-sizes that sequentially follow each optimization variable to a quasi-steady state until all state variables reach the actual steady state, coinciding with the optimum. The result is a second-order method that utilizes large step-sizes and does not require a monotonically decreasing objective function to reach a critical point. Experimentally, we demonstrate the fast convergence of this approach for optimizing nonconvex problems in power systems and compare them to existing state-of-the-art second-order methods, including damped Newton-Raphson, BFGS, and SR1.<|reference_end|>
arxiv
@article{agarwal2024second-order, title={Second-Order Optimization via Quiescence}, author={Aayushya Agarwal, Larry Pileggi, Ronald Rohrer}, journal={arXiv preprint arXiv:2410.08033}, year={2024}, archivePrefix={arXiv}, eprint={2410.08033}, primaryClass={math.OC cs.SY eess.SY} }
agarwal2024second-order
arxiv-668157
2410.08035
IntrinsicVoice: Empowering LLMs with Intrinsic Real-time Voice Interaction Abilities
<|reference_start|>IntrinsicVoice: Empowering LLMs with Intrinsic Real-time Voice Interaction Abilities: Current methods of building LLMs with voice interaction capabilities rely heavily on explicit text autoregressive generation before or during speech response generation to maintain content quality, which unfortunately brings computational overhead and increases latency in multi-turn interactions. To address this, we introduce IntrinsicVoic,e an LLM designed with intrinsic real-time voice interaction capabilities. IntrinsicVoice aims to facilitate the transfer of textual capabilities of pre-trained LLMs to the speech modality by mitigating the modality gap between text and speech. Our novelty architecture, GroupFormer, can reduce speech sequences to lengths comparable to text sequences while generating high-quality audio, significantly reducing the length difference between speech and text, speeding up inference, and alleviating long-text modeling issues. Additionally, we construct a multi-turn speech-to-speech dialogue dataset named \method-500k which includes nearly 500k turns of speech-to-speech dialogues, and a cross-modality training strategy to enhance the semantic alignment between speech and text. Experimental results demonstrate that IntrinsicVoice can generate high-quality speech response with latency lower than 100ms in multi-turn dialogue scenarios. Demos are available at https://instrinsicvoice.github.io/.<|reference_end|>
arxiv
@article{zhang2024intrinsicvoice:, title={IntrinsicVoice: Empowering LLMs with Intrinsic Real-time Voice Interaction Abilities}, author={Xin Zhang, Xiang Lyu, Zhihao Du, Qian Chen, Dong Zhang, Hangrui Hu, Chaohong Tan, Tianyu Zhao, Yuxuan Wang, Bin Zhang, Heng Lu, Yaqian Zhou and Xipeng Qiu}, journal={arXiv preprint arXiv:2410.08035}, year={2024}, archivePrefix={arXiv}, eprint={2410.08035}, primaryClass={cs.SD cs.AI} }
zhang2024intrinsicvoice:
arxiv-668158
2410.08037
Composite Learning Units: Generalized Learning Beyond Parameter Updates to Transform LLMs into Adaptive Reasoners
<|reference_start|>Composite Learning Units: Generalized Learning Beyond Parameter Updates to Transform LLMs into Adaptive Reasoners: Human learning thrives on the ability to learn from mistakes, adapt through feedback, and refine understanding-processes often missing in static machine learning models. In this work, we introduce Composite Learning Units (CLUs) designed to transform reasoners, such as Large Language Models (LLMs), into learners capable of generalized, continuous learning without conventional parameter updates while enhancing their reasoning abilities through continual interaction and feedback. CLUs are built on an architecture that allows a reasoning model to maintain and evolve a dynamic knowledge repository: a General Knowledge Space for broad, reusable insights and a Prompt-Specific Knowledge Space for task-specific learning. Through goal-driven interactions, CLUs iteratively refine these knowledge spaces, enabling the system to adapt dynamically to complex tasks, extract nuanced insights, and build upon past experiences autonomously. We demonstrate CLUs' effectiveness through a cryptographic reasoning task, where they continuously evolve their understanding through feedback to uncover hidden transformation rules. While conventional models struggle to grasp underlying logic, CLUs excel by engaging in an iterative, goal-oriented process. Specialized components-handling knowledge retrieval, prompt generation, and feedback analysis-work together within a reinforcing feedback loop. This approach allows CLUs to retain the memory of past failures and successes, adapt autonomously, and apply sophisticated reasoning effectively, continually learning from mistakes while also building on breakthroughs.<|reference_end|>
arxiv
@article{radha2024composite, title={Composite Learning Units: Generalized Learning Beyond Parameter Updates to Transform LLMs into Adaptive Reasoners}, author={Santosh Kumar Radha, Oktay Goktas}, journal={arXiv preprint arXiv:2410.08037}, year={2024}, archivePrefix={arXiv}, eprint={2410.08037}, primaryClass={cs.LG cs.AI cs.CL cs.MA} }
radha2024composite
arxiv-668159
2410.08041
On the Convergence of (Stochastic) Gradient Descent for Kolmogorov--Arnold Networks
<|reference_start|>On the Convergence of (Stochastic) Gradient Descent for Kolmogorov--Arnold Networks: Kolmogorov--Arnold Networks (KANs), a recently proposed neural network architecture, have gained significant attention in the deep learning community, due to their potential as a viable alternative to multi-layer perceptrons (MLPs) and their broad applicability to various scientific tasks. Empirical investigations demonstrate that KANs optimized via stochastic gradient descent (SGD) are capable of achieving near-zero training loss in various machine learning (e.g., regression, classification, and time series forecasting, etc.) and scientific tasks (e.g., solving partial differential equations). In this paper, we provide a theoretical explanation for the empirical success by conducting a rigorous convergence analysis of gradient descent (GD) and SGD for two-layer KANs in solving both regression and physics-informed tasks. For regression problems, we establish using the neural tangent kernel perspective that GD achieves global linear convergence of the objective function when the hidden dimension of KANs is sufficiently large. We further extend these results to SGD, demonstrating a similar global convergence in expectation. Additionally, we analyze the global convergence of GD and SGD for physics-informed KANs, which unveils additional challenges due to the more complex loss structure. This is the first work establishing the global convergence guarantees for GD and SGD applied to optimize KANs and physics-informed KANs.<|reference_end|>
arxiv
@article{gao2024on, title={On the Convergence of (Stochastic) Gradient Descent for Kolmogorov--Arnold Networks}, author={Yihang Gao, Vincent Y. F. Tan}, journal={arXiv preprint arXiv:2410.08041}, year={2024}, archivePrefix={arXiv}, eprint={2410.08041}, primaryClass={cs.LG cs.AI math.OC} }
gao2024on
arxiv-668160
2410.08042
\varphi-FD : A well-conditioned finite difference method inspired by \varphi-FEM for general geometries on elliptic PDEs
<|reference_start|>\varphi-FD : A well-conditioned finite difference method inspired by \varphi-FEM for general geometries on elliptic PDEs: This paper presents a new finite difference method, called {\varphi}-FD, inspired by the {\phi}-FEM approach for solving elliptic partial differential equations (PDEs) on general geometries. The proposed method uses Cartesian grids, ensuring simplicity in implementation. Moreover, contrary to the previous finite difference scheme on non-rectangular domain, the associated matrix is well-conditioned. The use of a level-set function for the geometry description makes this approach relatively flexible. We prove the quasi-optimal convergence rates in several norms and the fact that the matrix is well-conditioned. Additionally, the paper explores the use of multigrid techniques to further accelerate the computation. Finally, numerical experiments in both 2D and 3D validate the performance of the {\varphi}-FD method compared to standard finite element methods and the Shortley-Weller approach.<|reference_end|>
arxiv
@article{duprez2024{\varphi}-fd, title={{\varphi}-FD : A well-conditioned finite difference method inspired by {\varphi}-FEM for general geometries on elliptic PDEs}, author={Michel Duprez, Vanessa Lleras, Alexei Lozinski, Vincent Vigon and Killian Vuillemot}, journal={arXiv preprint arXiv:2410.08042}, year={2024}, archivePrefix={arXiv}, eprint={2410.08042}, primaryClass={math.NA cs.NA} }
duprez2024{\varphi}-fd
arxiv-668161
2410.08043
Harmonic Oscillator based Particle Swarm Optimization
<|reference_start|>Harmonic Oscillator based Particle Swarm Optimization: Numerical optimization techniques are widely used in a broad area of science and technology, from finding the minimal energy of systems in Physics or Chemistry to finding optimal routes in logistics or optimal strategies for high speed trading. In general, a set of parameters (parameter space) is tuned to find the lowest value of a function depending on these parameters (cost function). In most cases the parameter space is too big to be completely searched and the most efficient techniques combine stochastic elements (randomness included in the starting setting and decision making during the optimization process) with well designed deterministic process. Thus there is nothing like a universal best optimization method; rather than that, different methods and their settings are more or less efficient in different contexts. Here we present a method that integrates Particle Swarm Optimization (PSO), a highly effective and successful algorithm inspired by the collective behavior of a flock of birds searching for food, with the principles of Harmonic Oscillators. This physics-based approach introduces the concept of energy, enabling a smoother and a more controlled convergence throughout the optimization process. We test our method on a standard set of test functions and show that in most cases it can outperform its natural competitors including the original PSO as well as the broadly used COBYLA and Differential Evolution optimization methods.<|reference_end|>
arxiv
@article{chernyak2024harmonic, title={Harmonic Oscillator based Particle Swarm Optimization}, author={Yury Chernyak, Ijaz Ahamed Mohammad, Nikolas Masnicak, Matej Pivoluska and Martin Plesch}, journal={arXiv preprint arXiv:2410.08043}, year={2024}, archivePrefix={arXiv}, eprint={2410.08043}, primaryClass={cs.NE} }
chernyak2024harmonic
arxiv-668162
2410.08044
The Rise of AI-Generated Content in Wikipedia
<|reference_start|>The Rise of AI-Generated Content in Wikipedia: The rise of AI-generated content in popular information sources raises significant concerns about accountability, accuracy, and bias amplification. Beyond directly impacting consumers, the widespread presence of this content poses questions for the long-term viability of training language models on vast internet sweeps. We use GPTZero, a proprietary AI detector, and Binoculars, an open-source alternative, to establish lower bounds on the presence of AI-generated content in recently created Wikipedia pages. Both detectors reveal a marked increase in AI-generated content in recent pages compared to those from before the release of GPT-3.5. With thresholds calibrated to achieve a 1% false positive rate on pre-GPT-3.5 articles, detectors flag over 5% of newly created English Wikipedia articles as AI-generated, with lower percentages for German, French, and Italian articles. Flagged Wikipedia articles are typically of lower quality and are often self-promotional or partial towards a specific viewpoint on controversial topics.<|reference_end|>
arxiv
@article{brooks2024the, title={The Rise of AI-Generated Content in Wikipedia}, author={Creston Brooks, Samuel Eggert, Denis Peskoff}, journal={arXiv preprint arXiv:2410.08044}, year={2024}, archivePrefix={arXiv}, eprint={2410.08044}, primaryClass={cs.CL} }
brooks2024the
arxiv-668163
2410.08045
Timely NextG Communications with Decoy Assistance against Deep Learning-based Jamming
<|reference_start|>Timely NextG Communications with Decoy Assistance against Deep Learning-based Jamming: We consider the transfer of time-sensitive information in next-generation (NextG) communication systems in the presence of a deep learning based eavesdropper capable of jamming detected transmissions, subject to an average power budget. A decoy-based anti-jamming strategy is presented to confuse a jammer, causing it to waste power when disrupting decoy messages instead of real messages. We investigate the effectiveness of the anti-jamming strategy to guarantee timeliness of NextG communications in addition to reliability objectives, analyzing the Age of Information subject to jamming and channel effects. We assess the effect of power control, which determines the success of a transmission but also affects the accuracy of the adversary's detection, making it more likely for the jammer to successfully identify and jam the communication. The results demonstrate the feasibility of mitigating eavesdropping and jamming attacks in NextG communications with information freshness objectives using a decoy to guarantee timely information transfer.<|reference_end|>
arxiv
@article{costa2024timely, title={Timely NextG Communications with Decoy Assistance against Deep Learning-based Jamming}, author={Maice Costa and Yalin E. Sagduyu}, journal={Proc. 2024 IEEE International Conference on Communications Workshops, pp.554-559. %\thanks{Peer-reviewed version in Proc. 2024 IEEE International Conference on Communications Workshops (ICC Workshops), Denver, CO, USA, 2024, pp. 554-559}, year={2024}, doi={10.1109/ICCWorkshops59551.2024.10615460}, archivePrefix={arXiv}, eprint={2410.08045}, primaryClass={cs.IT math.IT} }
costa2024timely
arxiv-668164
2410.08047
Divide and Translate: Compositional First-Order Logic Translation and Verification for Complex Logical Reasoning
<|reference_start|>Divide and Translate: Compositional First-Order Logic Translation and Verification for Complex Logical Reasoning: Complex logical reasoning tasks require a long sequence of reasoning, which a large language model (LLM) with chain-of-thought prompting still falls short. To alleviate this issue, neurosymbolic approaches incorporate a symbolic solver. Specifically, an LLM only translates a natural language problem into a satisfiability (SAT) problem that consists of first-order logic formulas, and a sound symbolic solver returns a mathematically correct solution. However, we discover that LLMs have difficulties to capture complex logical semantics hidden in the natural language during translation. To resolve this limitation, we propose a Compositional First-Order Logic Translation. An LLM first parses a natural language sentence into newly defined logical dependency structures that consist of an atomic subsentence and its dependents, then sequentially translate the parsed subsentences. Since multiple logical dependency structures and sequential translations are possible for a single sentence, we also introduce two Verification algorithms to ensure more reliable results. We utilize an SAT solver to rigorously compare semantics of generated first-order logic formulas and select the most probable one. We evaluate the proposed method, dubbed CLOVER, on seven logical reasoning benchmarks and show that it outperforms the previous neurosymbolic approaches and achieves new state-of-the-art results.<|reference_end|>
arxiv
@article{ryu2024divide, title={Divide and Translate: Compositional First-Order Logic Translation and Verification for Complex Logical Reasoning}, author={Hyun Ryu, Gyeongman Kim, Hyemin S. Lee, Eunho Yang}, journal={arXiv preprint arXiv:2410.08047}, year={2024}, archivePrefix={arXiv}, eprint={2410.08047}, primaryClass={cs.CL} }
ryu2024divide
arxiv-668165
2410.08048
VerifierQ: Enhancing LLM Test Time Compute with Q-Learning-based Verifiers
<|reference_start|>VerifierQ: Enhancing LLM Test Time Compute with Q-Learning-based Verifiers: Recent advancements in test time compute, particularly through the use of verifier models, have significantly enhanced the reasoning capabilities of Large Language Models (LLMs). This generator-verifier approach closely resembles the actor-critic framework in reinforcement learning (RL). However, current verifier models in LLMs often rely on supervised fine-tuning without temporal difference learning such as Q-learning. This paper introduces VerifierQ, a novel approach that integrates Offline Q-learning into LLM verifier models. We address three key challenges in applying Q-learning to LLMs: (1) handling utterance-level Markov Decision Processes (MDPs), (2) managing large action spaces, and (3) mitigating overestimation bias. VerifierQ introduces a modified Bellman update for bounded Q-values, incorporates Implicit Q-learning (IQL) for efficient action space management, and integrates a novel Conservative Q-learning (CQL) formulation for balanced Q-value estimation. Our method enables parallel Q-value computation and improving training efficiency. While recent work has explored RL techniques like MCTS for generators, VerifierQ is among the first to investigate the verifier (critic) aspect in LLMs through Q-learning. This integration of RL principles into verifier models complements existing advancements in generator techniques, potentially enabling more robust and adaptive reasoning in LLMs. Experimental results on mathematical reasoning tasks demonstrate VerifierQ's superior performance compared to traditional supervised fine-tuning approaches, with improvements in efficiency, accuracy and robustness. By enhancing the synergy between generation and evaluation capabilities, VerifierQ contributes to the ongoing evolution of AI systems in addressing complex cognitive tasks across various domains.<|reference_end|>
arxiv
@article{qi2024verifierq:, title={VerifierQ: Enhancing LLM Test Time Compute with Q-Learning-based Verifiers}, author={Jianing Qi, Hao Tang, Zhigang Zhu}, journal={arXiv preprint arXiv:2410.08048}, year={2024}, archivePrefix={arXiv}, eprint={2410.08048}, primaryClass={cs.LG cs.CL} }
qi2024verifierq:
arxiv-668166
2410.08049
Scaling Up Your Kernels: Large Kernel Design in ConvNets towards Universal Representations
<|reference_start|>Scaling Up Your Kernels: Large Kernel Design in ConvNets towards Universal Representations: This paper proposes the paradigm of large convolutional kernels in designing modern Convolutional Neural Networks (ConvNets). We establish that employing a few large kernels, instead of stacking multiple smaller ones, can be a superior design strategy. Our work introduces a set of architecture design guidelines for large-kernel ConvNets that optimize their efficiency and performance. We propose the UniRepLKNet architecture, which offers systematical architecture design principles specifically crafted for large-kernel ConvNets, emphasizing their unique ability to capture extensive spatial information without deep layer stacking. This results in a model that not only surpasses its predecessors with an ImageNet accuracy of 88.0%, an ADE20K mIoU of 55.6%, and a COCO box AP of 56.4% but also demonstrates impressive scalability and performance on various modalities such as time-series forecasting, audio, point cloud, and video recognition. These results indicate the universal modeling abilities of large-kernel ConvNets with faster inference speed compared with vision transformers. Our findings reveal that large-kernel ConvNets possess larger effective receptive fields and a higher shape bias, moving away from the texture bias typical of smaller-kernel CNNs. All codes and models are publicly available at https://github.com/AILab-CVC/UniRepLKNet promoting further research and development in the community.<|reference_end|>
arxiv
@article{zhang2024scaling, title={Scaling Up Your Kernels: Large Kernel Design in ConvNets towards Universal Representations}, author={Yiyuan Zhang, Xiaohan Ding, Xiangyu Yue}, journal={arXiv preprint arXiv:2410.08049}, year={2024}, archivePrefix={arXiv}, eprint={2410.08049}, primaryClass={cs.CV cs.AI cs.LG} }
zhang2024scaling
arxiv-668167
2410.08050
Agent-based modeling for realistic reproduction of human mobility and contact behavior to evaluate test and isolation strategies in epidemic infectious disease spread
<|reference_start|>Agent-based modeling for realistic reproduction of human mobility and contact behavior to evaluate test and isolation strategies in epidemic infectious disease spread: Agent-based models have proven to be useful tools in supporting decision-making processes in different application domains. The advent of modern computers and supercomputers has enabled these bottom-up approaches to realistically model human mobility and contact behavior. The COVID-19 pandemic showcased the urgent need for detailed and informative models that can answer research questions on transmission dynamics. We present a sophisticated agent-based model to simulate the spread of respiratory diseases. The model is highly modularized and can be used on various scales, from a small collection of buildings up to cities or countries. Although not being the focus of this paper, the model has undergone performance engineering on a single core and provides an efficient intra- and inter-simulation parallelization for time-critical decision-making processes. In order to allow answering research questions on individual level resolution, nonpharmaceutical intervention strategies such as face masks or venue closures can be implemented for particular locations or agents. In particular, we allow for sophisticated testing and isolation strategies to study the effects of minimal-invasive infectious disease mitigation. With realistic human mobility patterns for the region of Brunswick, Germany, we study the effects of different interventions between March 1st and May 30, 2021 in the SARS-CoV-2 pandemic. Our analyses suggest that symptom-independent testing has limited impact on the mitigation of disease dynamics if the dark figure in symptomatic cases is high. Furthermore, we found that quarantine length is more important than quarantine efficiency but that, with sufficient symptomatic control, also short quarantines can have a substantial effect.<|reference_end|>
arxiv
@article{kerkmann2024agent-based, title={Agent-based modeling for realistic reproduction of human mobility and contact behavior to evaluate test and isolation strategies in epidemic infectious disease spread}, author={David Kerkmann, Sascha Korf, Khoa Nguyen, Daniel Abele, Alain Schengen, Carlotta Gerstein, Jens Henrik G"obbert, Achim Basermann, Martin J. K"uhn, Michael Meyer-Hermann}, journal={arXiv preprint arXiv:2410.08050}, year={2024}, archivePrefix={arXiv}, eprint={2410.08050}, primaryClass={cs.MA cs.DC physics.soc-ph} }
kerkmann2024agent-based
arxiv-668168
2410.08051
The Space Just Above One Clean Qubit
<|reference_start|>The Space Just Above One Clean Qubit: Consider the model of computation where we start with two halves of a $2n$-qubit maximally entangled state. We get to apply a universal quantum computation on one half, measure both halves at the end, and perform classical postprocessing. This model, which we call $\frac12$BQP, was defined in STOC 2017 [ABKM17] to capture the power of permutational computations on special input states. As observed in [ABKM17], this model can be viewed as a natural generalization of the one-clean-qubit model (DQC1) where we learn the content of a high entropy input state only after the computation is completed. An interesting open question is to characterize the power of this model, which seems to sit nontrivially between DQC1 and BQP. In this paper, we show that despite its limitations, this model can carry out many well-known quantum computations that are candidates for exponential speed-up over classical computations (and possibly DQC1). In particular, $\frac12$BQP can simulate Instantaneous Quantum Polynomial Time (IQP) and solve the Deutsch-Jozsa problem, Bernstein-Vazirani problem, Simon's problem, and period finding. As a consequence, $\frac12$BQP also solves Order Finding and Factoring outside of the oracle setting. Furthermore, $\frac12$BQP can solve Forrelation and the corresponding oracle problem given by Raz and Tal [RT22] to separate BQP and PH. We also study limitations of $\frac12$BQP and show that similarly to DQC1, $\frac12$BQP cannot distinguish between unitaries which are close in trace distance, then give an oracle separating $\frac12$BQP and BQP. Due to this limitation, $\frac12$BQP cannot obtain the quadratic speedup for unstructured search given by Grover's algorithm [Gro96]. We conjecture that $\frac12$BQP cannot solve $3$-Forrelation.<|reference_end|>
arxiv
@article{jacobs2024the, title={The Space Just Above One Clean Qubit}, author={Dale Jacobs and Saeed Mehraban}, journal={arXiv preprint arXiv:2410.08051}, year={2024}, archivePrefix={arXiv}, eprint={2410.08051}, primaryClass={quant-ph cs.CC} }
jacobs2024the
arxiv-668169
2410.08053
A Target-Aware Analysis of Data Augmentation for Hate Speech Detection
<|reference_start|>A Target-Aware Analysis of Data Augmentation for Hate Speech Detection: Hate speech is one of the main threats posed by the widespread use of social networks, despite efforts to limit it. Although attention has been devoted to this issue, the lack of datasets and case studies centered around scarcely represented phenomena, such as ableism or ageism, can lead to hate speech detection systems that do not perform well on underrepresented identity groups. Given the unpreceded capabilities of LLMs in producing high-quality data, we investigate the possibility of augmenting existing data with generative language models, reducing target imbalance. We experiment with augmenting 1,000 posts from the Measuring Hate Speech corpus, an English dataset annotated with target identity information, adding around 30,000 synthetic examples using both simple data augmentation methods and different types of generative models, comparing autoregressive and sequence-to-sequence approaches. We find traditional DA methods to often be preferable to generative models, but the combination of the two tends to lead to the best results. Indeed, for some hate categories such as origin, religion, and disability, hate speech classification using augmented data for training improves by more than 10% F1 over the no augmentation baseline. This work contributes to the development of systems for hate speech detection that are not only better performing but also fairer and more inclusive towards targets that have been neglected so far.<|reference_end|>
arxiv
@article{casula2024a, title={A Target-Aware Analysis of Data Augmentation for Hate Speech Detection}, author={Camilla Casula, Sara Tonelli}, journal={arXiv preprint arXiv:2410.08053}, year={2024}, archivePrefix={arXiv}, eprint={2410.08053}, primaryClass={cs.CL} }
casula2024a
arxiv-668170
2410.08058
Closing the Loop: Learning to Generate Writing Feedback via Language Model Simulated Student Revisions
<|reference_start|>Closing the Loop: Learning to Generate Writing Feedback via Language Model Simulated Student Revisions: Providing feedback is widely recognized as crucial for refining students' writing skills. Recent advances in language models (LMs) have made it possible to automatically generate feedback that is actionable and well-aligned with human-specified attributes. However, it remains unclear whether the feedback generated by these models is truly effective in enhancing the quality of student revisions. Moreover, prompting LMs with a precise set of instructions to generate feedback is nontrivial due to the lack of consensus regarding the specific attributes that can lead to improved revising performance. To address these challenges, we propose PROF that PROduces Feedback via learning from LM simulated student revisions. PROF aims to iteratively optimize the feedback generator by directly maximizing the effectiveness of students' overall revising performance as simulated by LMs. Focusing on an economic essay assignment, we empirically test the efficacy of PROF and observe that our approach not only surpasses a variety of baseline methods in effectiveness of improving students' writing but also demonstrates enhanced pedagogical values, even though it was not explicitly trained for this aspect.<|reference_end|>
arxiv
@article{nair2024closing, title={Closing the Loop: Learning to Generate Writing Feedback via Language Model Simulated Student Revisions}, author={Inderjeet Nair, Jiaye Tan, Xiaotian Su, Anne Gere, Xu Wang, Lu Wang}, journal={arXiv preprint arXiv:2410.08058}, year={2024}, archivePrefix={arXiv}, eprint={2410.08058}, primaryClass={cs.CL cs.AI cs.LG} }
nair2024closing
arxiv-668171
2410.08059
A framework for compressing unstructured scientific data via serialization
<|reference_start|>A framework for compressing unstructured scientific data via serialization: We present a general framework for compressing unstructured scientific data with known local connectivity. A common application is simulation data defined on arbitrary finite element meshes. The framework employs a greedy topology preserving reordering of original nodes which allows for seamless integration into existing data processing pipelines. This reordering process depends solely on mesh connectivity and can be performed offline for optimal efficiency. However, the algorithm's greedy nature also supports on-the-fly implementation. The proposed method is compatible with any compression algorithm that leverages spatial correlations within the data. The effectiveness of this approach is demonstrated on a large-scale real dataset using several compression methods, including MGARD, SZ, and ZFP.<|reference_end|>
arxiv
@article{reshniak2024a, title={A framework for compressing unstructured scientific data via serialization}, author={Viktor Reshniak, Qian Gong, Rick Archibald, Scott Klasky, Norbert Podhorszki}, journal={arXiv preprint arXiv:2410.08059}, year={2024}, archivePrefix={arXiv}, eprint={2410.08059}, primaryClass={cs.CV} }
reshniak2024a
arxiv-668172
2410.08060
Optimal Transportation by Orthogonal Coupling Dynamics
<|reference_start|>Optimal Transportation by Orthogonal Coupling Dynamics: Many numerical algorithms and learning tasks rest on solution of the Monge-Kantorovich problem and corresponding Wasserstein distances. While the natural approach is to treat the problem as an infinite-dimensional linear programming, such a methodology severely limits the computational performance due to the polynomial scaling with respect to the sample size along with intensive memory requirements. We propose a novel alternative framework to address the Monge-Kantorovich problem based on a projection type gradient descent scheme. The micro-dynamics is built on the notion of the conditional expectation, where the connection with the opinion dynamics is explored and leveraged to build compact numerical schemes. We demonstrate that the devised dynamics recovers random maps with favourable computational performance. Along with the theoretical insight, the provided dynamics paves the way for innovative approaches to construct numerical schemes for computing optimal transport maps as well as Wasserstein distances.<|reference_end|>
arxiv
@article{sadr2024optimal, title={Optimal Transportation by Orthogonal Coupling Dynamics}, author={Mohsen Sadr, Peyman Mohajerin Esfehani, Hossein Gorji}, journal={arXiv preprint arXiv:2410.08060}, year={2024}, archivePrefix={arXiv}, eprint={2410.08060}, primaryClass={math.OC cs.AI} }
sadr2024optimal
arxiv-668173
2410.08063
Reversible Decoupling Network for Single Image Reflection Removal
<|reference_start|>Reversible Decoupling Network for Single Image Reflection Removal: Recent deep-learning-based approaches to single-image reflection removal have shown promising advances, primarily for two reasons: 1) the utilization of recognition-pretrained features as inputs, and 2) the design of dual-stream interaction networks. However, according to the Information Bottleneck principle, high-level semantic clues tend to be compressed or discarded during layer-by-layer propagation. Additionally, interactions in dual-stream networks follow a fixed pattern across different layers, limiting overall performance. To address these limitations, we propose a novel architecture called Reversible Decoupling Network (RDNet), which employs a reversible encoder to secure valuable information while flexibly decoupling transmission- and reflection-relevant features during the forward pass. Furthermore, we customize a transmission-rate-aware prompt generator to dynamically calibrate features, further boosting performance. Extensive experiments demonstrate the superiority of RDNet over existing SOTA methods on five widely-adopted benchmark datasets. Our code will be made publicly available.<|reference_end|>
arxiv
@article{zhao2024reversible, title={Reversible Decoupling Network for Single Image Reflection Removal}, author={Hao Zhao, Mingjia Li, Qiming Hu, Xiaojie Guo}, journal={arXiv preprint arXiv:2410.08063}, year={2024}, archivePrefix={arXiv}, eprint={2410.08063}, primaryClass={cs.CV} }
zhao2024reversible
arxiv-668174
2410.08065
Dynamic Object Catching with Quadruped Robot Front Legs
<|reference_start|>Dynamic Object Catching with Quadruped Robot Front Legs: This paper presents a framework for dynamic object catching using a quadruped robot's front legs while it stands on its rear legs. The system integrates computer vision, trajectory prediction, and leg control to enable the quadruped to visually detect, track, and successfully catch a thrown object using an onboard camera. Leveraging a fine-tuned YOLOv8 model for object detection and a regression-based trajectory prediction module, the quadruped adapts its front leg positions iteratively to anticipate and intercept the object. The catching maneuver involves identifying the optimal catching position, controlling the front legs with Cartesian PD control, and closing the legs together at the right moment. We propose and validate three different methods for selecting the optimal catching position: 1) intersecting the predicted trajectory with a vertical plane, 2) selecting the point on the predicted trajectory with the minimal distance to the center of the robot's legs in their nominal position, and 3) selecting the point on the predicted trajectory with the highest likelihood on a Gaussian Mixture Model (GMM) modelling the robot's reachable space. Experimental results demonstrate robust catching capabilities across various scenarios, with the GMM method achieving the best performance, leading to an 80% catching success rate. A video demonstration of the system in action can be found at https://youtu.be/sm7RdxRfIYg .<|reference_end|>
arxiv
@article{schakkal2024dynamic, title={Dynamic Object Catching with Quadruped Robot Front Legs}, author={Andr'e Schakkal, Guillaume Bellegarda, Auke Ijspeert}, journal={arXiv preprint arXiv:2410.08065}, year={2024}, archivePrefix={arXiv}, eprint={2410.08065}, primaryClass={cs.RO} }
schakkal2024dynamic
arxiv-668175
2410.08067
Reward-Augmented Data Enhances Direct Preference Alignment of LLMs
<|reference_start|>Reward-Augmented Data Enhances Direct Preference Alignment of LLMs: Preference alignment in Large Language Models (LLMs) has significantly improved their ability to adhere to human instructions and intentions. However, existing direct alignment algorithms primarily focus on relative preferences and often overlook the qualitative aspects of responses. Striving to maximize the implicit reward gap between the chosen and the slightly inferior rejected responses can cause overfitting and unnecessary unlearning of the high-quality rejected responses. The unawareness of the reward scores also drives the LLM to indiscriminately favor the low-quality chosen responses and fail to generalize to responses with the highest rewards, which are sparse in data. To overcome these shortcomings, our study introduces reward-conditioned LLM policies that discern and learn from the entire spectrum of response quality within the dataset, helping extrapolate to more optimal regions. We propose an effective yet simple data relabeling method that conditions the preference pairs on quality scores to construct a reward-augmented dataset. This dataset is easily integrated with existing direct alignment algorithms and is applicable to any preference dataset. The experimental results across instruction-following benchmarks including AlpacaEval, MT-Bench, and Arena-Hard-Auto demonstrate that our approach consistently boosts the performance of DPO by a considerable margin across diverse models. Additionally, our method improves the average accuracy on various academic benchmarks. When applying our method to on-policy data, the resulting DPO model achieves SOTA results on AlpacaEval. Through ablation studies, we demonstrate that our method not only maximizes the utility of preference data but also mitigates the issue of unlearning, demonstrating its broad effectiveness beyond mere dataset expansion. Our code is available at https://github.com/shenao-zhang/reward-augmented-preference.<|reference_end|>
arxiv
@article{zhang2024reward-augmented, title={Reward-Augmented Data Enhances Direct Preference Alignment of LLMs}, author={Shenao Zhang, Zhihan Liu, Boyi Liu, Yufeng Zhang, Yingxiang Yang, Yongfei Liu, Liyu Chen, Tao Sun, Zhaoran Wang}, journal={arXiv preprint arXiv:2410.08067}, year={2024}, archivePrefix={arXiv}, eprint={2410.08067}, primaryClass={cs.LG cs.AI} }
zhang2024reward-augmented
arxiv-668176
2410.08068
Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models
<|reference_start|>Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models: Large Language Models (LLMs) exhibit impressive performance across various domains but still struggle with arithmetic reasoning tasks. Recent work shows the effectiveness of prompt design methods in enhancing reasoning capabilities. However, these approaches overlook crucial requirements for prior knowledge of specific concepts, theorems, and tricks to tackle most arithmetic reasoning problems successfully. To address this issue, we propose a novel and effective Teaching-Inspired Integrated Framework, which emulates the instructional process of a teacher guiding students. This method equips LLMs with essential concepts, relevant theorems, and similar problems with analogous solution approaches, facilitating the enhancement of reasoning abilities. Additionally, we introduce two new Chinese datasets, MathMC and MathToF, both with detailed explanations and answers. Experiments are conducted on nine benchmarks which demonstrates that our approach improves the reasoning accuracy of LLMs. With GPT-4 and our framework, we achieve new state-of-the-art performance on four math benchmarks (AddSub, SVAMP, Math23K and AQuA) with accuracies of 98.2% (+3.3%), 93.9% (+0.2%), 94.3% (+7.2%) and 81.1% (+1.2%). Our data and code are available at https://github.com/SallyTan13/Teaching-Inspired-Prompting.<|reference_end|>
arxiv
@article{tan2024teaching-inspired, title={Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models}, author={Wenting Tan, Dongxiao Chen, Jieting Xue, Zihao Wang, Taijie Chen}, journal={arXiv preprint arXiv:2410.08068}, year={2024}, archivePrefix={arXiv}, eprint={2410.08068}, primaryClass={cs.CL cs.AI} }
tan2024teaching-inspired
arxiv-668177
2410.08069
Unlearning-based Neural Interpretations
<|reference_start|>Unlearning-based Neural Interpretations: Gradient-based interpretations often require an anchor point of comparison to avoid saturation in computing feature importance. We show that current baselines defined using static functions--constant mapping, averaging or blurring--inject harmful colour, texture or frequency assumptions that deviate from model behaviour. This leads to accumulation of irregular gradients, resulting in attribution maps that are biased, fragile and manipulable. Departing from the static approach, we propose UNI to compute an (un)learnable, debiased and adaptive baseline by perturbing the input towards an unlearning direction of steepest ascent. Our method discovers reliable baselines and succeeds in erasing salient features, which in turn locally smooths the high-curvature decision boundaries. Our analyses point to unlearning as a promising avenue for generating faithful, efficient and robust interpretations.<|reference_end|>
arxiv
@article{choi2024unlearning-based, title={Unlearning-based Neural Interpretations}, author={Ching Lam Choi, Alexandre Duplessis, Serge Belongie}, journal={arXiv preprint arXiv:2410.08069}, year={2024}, archivePrefix={arXiv}, eprint={2410.08069}, primaryClass={cs.LG cs.AI cs.CV} }
choi2024unlearning-based
arxiv-668178
2410.08071
Gaussian Process Thompson Sampling via Rootfinding
<|reference_start|>Gaussian Process Thompson Sampling via Rootfinding: Thompson sampling (TS) is a simple, effective stochastic policy in Bayesian decision making. It samples the posterior belief about the reward profile and optimizes the sample to obtain a candidate decision. In continuous optimization, the posterior of the objective function is often a Gaussian process (GP), whose sample paths have numerous local optima, making their global optimization challenging. In this work, we introduce an efficient global optimization strategy for GP-TS that carefully selects starting points for gradient-based multi-start optimizers. It identifies all local optima of the prior sample via univariate global rootfinding, and optimizes the posterior sample using a differentiable, decoupled representation. We demonstrate remarkable improvement in the global optimization of GP posterior samples, especially in high dimensions. This leads to dramatic improvements in the overall performance of Bayesian optimization using GP-TS acquisition functions, surprisingly outperforming alternatives like GP-UCB and EI.<|reference_end|>
arxiv
@article{adebiyi2024gaussian, title={Gaussian Process Thompson Sampling via Rootfinding}, author={Taiwo A. Adebiyi and Bach Do and Ruda Zhang}, journal={arXiv preprint arXiv:2410.08071}, year={2024}, archivePrefix={arXiv}, eprint={2410.08071}, primaryClass={cs.LG math.OC stat.ML} }
adebiyi2024gaussian
arxiv-668179
2410.08073
Efficient Quantum Pseudorandomness from Hamiltonian Phase States
<|reference_start|>Efficient Quantum Pseudorandomness from Hamiltonian Phase States: Quantum pseudorandomness has found applications in many areas of quantum information, ranging from entanglement theory, to models of scrambling phenomena in chaotic quantum systems, and, more recently, in the foundations of quantum cryptography. Kretschmer (TQC '21) showed that both pseudorandom states and pseudorandom unitaries exist even in a world without classical one-way functions. To this day, however, all known constructions require classical cryptographic building blocks which are themselves synonymous with the existence of one-way functions, and which are also challenging to realize on realistic quantum hardware. In this work, we seek to make progress on both of these fronts simultaneously -- by decoupling quantum pseudorandomness from classical cryptography altogether. We introduce a quantum hardness assumption called the Hamiltonian Phase State (HPS) problem, which is the task of decoding output states of a random instantaneous quantum polynomial-time (IQP) circuit. Hamiltonian phase states can be generated very efficiently using only Hadamard gates, single-qubit Z-rotations and CNOT circuits. We show that the hardness of our problem reduces to a worst-case version of the problem, and we provide evidence that our assumption is plausibly fully quantum; meaning, it cannot be used to construct one-way functions. We also show information-theoretic hardness when only few copies of HPS are available by proving an approximate $t$-design property of our ensemble. Finally, we show that our HPS assumption and its variants allow us to efficiently construct many pseudorandom quantum primitives, ranging from pseudorandom states, to quantum pseudoentanglement, to pseudorandom unitaries, and even primitives such as public-key encryption with quantum keys.<|reference_end|>
arxiv
@article{bostanci2024efficient, title={Efficient Quantum Pseudorandomness from Hamiltonian Phase States}, author={John Bostanci and Jonas Haferkamp and Dominik Hangleiter and Alexander Poremba}, journal={arXiv preprint arXiv:2410.08073}, year={2024}, archivePrefix={arXiv}, eprint={2410.08073}, primaryClass={quant-ph cs.CR} }
bostanci2024efficient
arxiv-668180
2410.08074
Unstable Unlearning: The Hidden Risk of Concept Resurgence in Diffusion Models
<|reference_start|>Unstable Unlearning: The Hidden Risk of Concept Resurgence in Diffusion Models: Text-to-image diffusion models rely on massive, web-scale datasets. Training them from scratch is computationally expensive, and as a result, developers often prefer to make incremental updates to existing models. These updates often compose fine-tuning steps (to learn new concepts or improve model performance) with "unlearning" steps (to "forget" existing concepts, such as copyrighted works or explicit content). In this work, we demonstrate a critical and previously unknown vulnerability that arises in this paradigm: even under benign, non-adversarial conditions, fine-tuning a text-to-image diffusion model on seemingly unrelated images can cause it to "relearn" concepts that were previously "unlearned." We comprehensively investigate the causes and scope of this phenomenon, which we term concept resurgence, by performing a series of experiments which compose "mass concept erasure" (the current state of the art for unlearning in text-to-image diffusion models (Lu et al., 2024)) with subsequent fine-tuning of Stable Diffusion v1.4. Our findings underscore the fragility of composing incremental model updates, and raise serious new concerns about current approaches to ensuring the safety and alignment of text-to-image diffusion models.<|reference_end|>
arxiv
@article{suriyakumar2024unstable, title={Unstable Unlearning: The Hidden Risk of Concept Resurgence in Diffusion Models}, author={Vinith M. Suriyakumar, Rohan Alur, Ayush Sekhari, Manish Raghavan, Ashia C. Wilson}, journal={arXiv preprint arXiv:2410.08074}, year={2024}, archivePrefix={arXiv}, eprint={2410.08074}, primaryClass={cs.LG cs.CR cs.CV} }
suriyakumar2024unstable
arxiv-668181
2410.08081
Packing Analysis: Packing Is More Appropriate for Large Models or Datasets in Supervised Fine-tuning
<|reference_start|>Packing Analysis: Packing Is More Appropriate for Large Models or Datasets in Supervised Fine-tuning: Packing, initially utilized in the pre-training phase, is an optimization technique designed to maximize hardware resource efficiency by combining different training sequences to fit the model's maximum input length. Although it has demonstrated effectiveness during pre-training, there remains a lack of comprehensive analysis for the supervised fine-tuning (SFT) stage on the following points: (1) whether packing can effectively enhance training efficiency while maintaining performance, (2) the suitable size of the model and dataset for fine-tuning with the packing method, and (3) whether packing unrelated or related training samples might cause the model to either excessively disregard or over-rely on the context. In this paper, we perform extensive comparisons between SFT methods using padding and packing, covering SFT datasets ranging from 69K to 1.2M and models from 8B to 70B. This provides the first comprehensive analysis of the advantages and limitations of packing versus padding, as well as practical considerations for implementing packing in various training scenarios. Our analysis covers various benchmarks, including knowledge, reasoning, and coding, as well as GPT-based evaluations, time efficiency, and other fine-tuning parameters. We also open-source our code for fine-tuning and evaluation and provide checkpoints fine-tuned on datasets of different sizes, aiming to advance future research on packing methods. Code is available at: https://github.com/ShuheWang1998/Packing-Analysis?tab=readme-ov-file.<|reference_end|>
arxiv
@article{wang2024packing, title={Packing Analysis: Packing Is More Appropriate for Large Models or Datasets in Supervised Fine-tuning}, author={Shuhe Wang, Guoyin Wang, Yizhong Wang, Jiwei Li, Eduard Hovy, Chen Guo}, journal={arXiv preprint arXiv:2410.08081}, year={2024}, archivePrefix={arXiv}, eprint={2410.08081}, primaryClass={cs.LG cs.AI cs.CL} }
wang2024packing
arxiv-668182
2410.08082
ToMiE: Towards Modular Growth in Enhanced SMPL Skeleton for 3D Human with Animatable Garments
<|reference_start|>ToMiE: Towards Modular Growth in Enhanced SMPL Skeleton for 3D Human with Animatable Garments: In this paper, we highlight a critical yet often overlooked factor in most 3D human tasks, namely modeling humans with complex garments. It is known that the parameterized formulation of SMPL is able to fit human skin; while complex garments, e.g., hand-held objects and loose-fitting garments, are difficult to get modeled within the unified framework, since their movements are usually decoupled with the human body. To enhance the capability of SMPL skeleton in response to this situation, we propose a modular growth strategy that enables the joint tree of the skeleton to expand adaptively. Specifically, our method, called ToMiE, consists of parent joints localization and external joints optimization. For parent joints localization, we employ a gradient-based approach guided by both LBS blending weights and motion kernels. Once the external joints are obtained, we proceed to optimize their transformations in SE(3) across different frames, enabling rendering and explicit animation. ToMiE manages to outperform other methods across various cases with garments, not only in rendering quality but also by offering free animation of grown joints, thereby enhancing the expressive ability of SMPL skeleton for a broader range of applications.<|reference_end|>
arxiv
@article{zhan2024tomie:, title={ToMiE: Towards Modular Growth in Enhanced SMPL Skeleton for 3D Human with Animatable Garments}, author={Yifan Zhan, Qingtian Zhu, Muyao Niu, Mingze Ma, Jiancheng Zhao, Zhihang Zhong, Xiao Sun, Yu Qiao, Yinqiang Zheng}, journal={arXiv preprint arXiv:2410.08082}, year={2024}, archivePrefix={arXiv}, eprint={2410.08082}, primaryClass={cs.CV} }
zhan2024tomie:
arxiv-668183
2410.08085
Can Knowledge Graphs Make Large Language Models More Trustworthy? An Empirical Study over Open-ended Question Answering
<|reference_start|>Can Knowledge Graphs Make Large Language Models More Trustworthy? An Empirical Study over Open-ended Question Answering: Recent works integrating Knowledge Graphs (KGs) have led to promising improvements in enhancing reasoning accuracy of Large Language Models (LLMs). However, current benchmarks mainly focus on closed tasks, leaving a gap in the assessment of more complex, real-world scenarios. This gap has also obscured the evaluation of KGs' potential to mitigate the problem of hallucination in LLMs. To fill the gap, we introduce OKGQA, a new benchmark specifically designed to assess LLMs enhanced with KGs under open-ended, real-world question answering scenarios. OKGQA is designed to closely reflect the complexities of practical applications using questions from different types, and incorporates specific metrics to measure both the reduction in hallucinations and the enhancement in reasoning capabilities. To consider the scenario in which KGs may have varying levels of mistakes, we further propose another experiment setting OKGQA-P to assess model performance when the semantics and structure of KGs are deliberately perturbed and contaminated. OKGQA aims to (1) explore whether KGs can make LLMs more trustworthy in an open-ended setting, and (2) conduct a comparative analysis to shed light on methods and future directions for leveraging KGs to reduce LLMs' hallucination. We believe that this study can facilitate a more complete performance comparison and encourage continuous improvement in integrating KGs with LLMs.<|reference_end|>
arxiv
@article{sui2024can, title={Can Knowledge Graphs Make Large Language Models More Trustworthy? An Empirical Study over Open-ended Question Answering}, author={Yuan Sui, Bryan Hooi}, journal={arXiv preprint arXiv:2410.08085}, year={2024}, archivePrefix={arXiv}, eprint={2410.08085}, primaryClass={cs.CL cs.AI} }
sui2024can
arxiv-668184
2410.08087
Noether's razor: Learning Conserved Quantities
<|reference_start|>Noether's razor: Learning Conserved Quantities: Symmetries have proven useful in machine learning models, improving generalisation and overall performance. At the same time, recent advancements in learning dynamical systems rely on modelling the underlying Hamiltonian to guarantee the conservation of energy. These approaches can be connected via a seminal result in mathematical physics: Noether's theorem, which states that symmetries in a dynamical system correspond to conserved quantities. This work uses Noether's theorem to parameterise symmetries as learnable conserved quantities. We then allow conserved quantities and associated symmetries to be learned directly from train data through approximate Bayesian model selection, jointly with the regular training procedure. As training objective, we derive a variational lower bound to the marginal likelihood. The objective automatically embodies an Occam's Razor effect that avoids collapse of conservation laws to the trivial constant, without the need to manually add and tune additional regularisers. We demonstrate a proof-of-principle on $n$-harmonic oscillators and $n$-body systems. We find that our method correctly identifies the correct conserved quantities and U($n$) and SE($n$) symmetry groups, improving overall performance and predictive accuracy on test data.<|reference_end|>
arxiv
@article{van der ouderaa2024noether's, title={Noether's razor: Learning Conserved Quantities}, author={Tycho F. A. van der Ouderaa, Mark van der Wilk, Pim de Haan}, journal={arXiv preprint arXiv:2410.08087}, year={2024}, archivePrefix={arXiv}, eprint={2410.08087}, primaryClass={cs.LG stat.ML} }
van der ouderaa2024noether's
arxiv-668185
2410.08090
Crossing Margins: Intersectional Users' Ethical Concerns about Software
<|reference_start|>Crossing Margins: Intersectional Users' Ethical Concerns about Software: Many modern software applications present numerous ethical concerns due to conflicts between users' values and companies' priorities. Intersectional communities, those with multiple marginalized identities, are disproportionately affected by these ethical issues, leading to legal, financial, and reputational issues for software companies, as well as real-world harm for intersectional users. Historically, the voices of intersectional communities have been systematically marginalized and excluded from contributing their unique perspectives to software design, perpetuating software-related ethical concerns. This work aims to fill the gap in research on intersectional users' software-related perspectives and provide software practitioners with a starting point to address their ethical concerns. We aggregated and analyzed the intersectional users' ethical concerns over time and developed a prioritization method to identify critical concerns. To achieve this, we collected posts from over 700 intersectional subreddits discussing software applications, utilized deep learning to identify ethical concerns in these posts, and employed state-of-the-art techniques to analyze their content in relation to time and priority. Our findings revealed that intersectional communities report \textit{critical} complaints related to cyberbullying, inappropriate content, and discrimination, highlighting significant flaws in modern software, particularly for intersectional users. Based on these findings, we discuss how to better address the ethical concerns of intersectional users in software development.<|reference_end|>
arxiv
@article{olson2024crossing, title={Crossing Margins: Intersectional Users' Ethical Concerns about Software}, author={Lauren Olson, Tom P. Humbert, Ricarda Anna-Lena Fischer, Bob Westerveld, Florian Kunneman, Emitz'a Guzm'an}, journal={arXiv preprint arXiv:2410.08090}, year={2024}, archivePrefix={arXiv}, eprint={2410.08090}, primaryClass={cs.SE cs.HC} }
olson2024crossing
arxiv-668186
2410.08091
Distribution Guidance Network for Weakly Supervised Point Cloud Semantic Segmentation
<|reference_start|>Distribution Guidance Network for Weakly Supervised Point Cloud Semantic Segmentation: Despite alleviating the dependence on dense annotations inherent to fully supervised methods, weakly supervised point cloud semantic segmentation suffers from inadequate supervision signals. In response to this challenge, we introduce a novel perspective that imparts auxiliary constraints by regulating the feature space under weak supervision. Our initial investigation identifies which distributions accurately characterize the feature space, subsequently leveraging this priori to guide the alignment of the weakly supervised embeddings. Specifically, we analyze the superiority of the mixture of von Mises-Fisher distributions (moVMF) among several common distribution candidates. Accordingly, we develop a Distribution Guidance Network (DGNet), which comprises a weakly supervised learning branch and a distribution alignment branch. Leveraging reliable clustering initialization derived from the weakly supervised learning branch, the distribution alignment branch alternately updates the parameters of the moVMF and the network, ensuring alignment with the moVMF-defined latent space. Extensive experiments validate the rationality and effectiveness of our distribution choice and network design. Consequently, DGNet achieves state-of-the-art performance under multiple datasets and various weakly supervised settings.<|reference_end|>
arxiv
@article{pan2024distribution, title={Distribution Guidance Network for Weakly Supervised Point Cloud Semantic Segmentation}, author={Zhiyi Pan and Wei Gao and Shan Liu and Ge Li}, journal={arXiv preprint arXiv:2410.08091}, year={2024}, archivePrefix={arXiv}, eprint={2410.08091}, primaryClass={cs.CV} }
pan2024distribution
arxiv-668187
2410.08092
UW-SDF: Exploiting Hybrid Geometric Priors for Neural SDF Reconstruction from Underwater Multi-view Monocular Images
<|reference_start|>UW-SDF: Exploiting Hybrid Geometric Priors for Neural SDF Reconstruction from Underwater Multi-view Monocular Images: Due to the unique characteristics of underwater environments, accurate 3D reconstruction of underwater objects poses a challenging problem in tasks such as underwater exploration and mapping. Traditional methods that rely on multiple sensor data for 3D reconstruction are time-consuming and face challenges in data acquisition in underwater scenarios. We propose UW-SDF, a framework for reconstructing target objects from multi-view underwater images based on neural SDF. We introduce hybrid geometric priors to optimize the reconstruction process, markedly enhancing the quality and efficiency of neural SDF reconstruction. Additionally, to address the challenge of segmentation consistency in multi-view images, we propose a novel few-shot multi-view target segmentation strategy using the general-purpose segmentation model (SAM), enabling rapid automatic segmentation of unseen objects. Through extensive qualitative and quantitative experiments on diverse datasets, we demonstrate that our proposed method outperforms the traditional underwater 3D reconstruction method and other neural rendering approaches in the field of underwater 3D reconstruction.<|reference_end|>
arxiv
@article{chen2024uw-sdf:, title={UW-SDF: Exploiting Hybrid Geometric Priors for Neural SDF Reconstruction from Underwater Multi-view Monocular Images}, author={Zeyu Chen, Jingyi Tang, Gu Wang, Shengquan Li, Xinghui Li, Xiangyang Ji, and Xiu Li}, journal={arXiv preprint arXiv:2410.08092}, year={2024}, archivePrefix={arXiv}, eprint={2410.08092}, primaryClass={cs.CV cs.RO} }
chen2024uw-sdf:
arxiv-668188
2410.08094
SAKA: An Intelligent Platform for Semi-automated Knowledge Graph Construction and Application
<|reference_start|>SAKA: An Intelligent Platform for Semi-automated Knowledge Graph Construction and Application: Knowledge graph (KG) technology is extensively utilized in many areas, and many companies offer applications based on KG. Nonetheless, the majority of KG platforms necessitate expertise and tremendous time and effort of users to construct KG records manually, which poses great difficulties for ordinary people to use. Additionally, audio data is abundant and holds valuable information, but it is challenging to transform it into a KG. What's more, the platforms usually do not leverage the full potential of the KGs constructed by users. In this paper, we propose an intelligent and user-friendly platform for Semi-automated KG Construction and Application (SAKA) to address the problems aforementioned. Primarily, users can semi-automatically construct KGs from structured data of numerous areas by interacting with the platform, based on which multi-versions of KG can be stored, viewed, managed, and updated. Moreover, we propose an Audio-based KG Information Extraction (AGIE) method to establish KGs from audio data. Lastly, the platform creates a semantic parsing-based knowledge base question answering (KBQA) system based on the user-created KGs. We prove the feasibility of the semi-automatic KG construction method on the SAKA platform.<|reference_end|>
arxiv
@article{zhang2024saka:, title={SAKA: An Intelligent Platform for Semi-automated Knowledge Graph Construction and Application}, author={Hanrong Zhang, Xinyue Wang, Jiabao Pan, Hongwei Wang}, journal={arXiv preprint arXiv:2410.08094}, year={2024}, archivePrefix={arXiv}, eprint={2410.08094}, primaryClass={cs.AI} }
zhang2024saka:
arxiv-668189
2410.08096
Sensor-Based Safety-Critical Control using an Incremental Control Barrier Function Formulation via Reduced-Order Approximate Models
<|reference_start|>Sensor-Based Safety-Critical Control using an Incremental Control Barrier Function Formulation via Reduced-Order Approximate Models: The existing control barrier function literature generally relies on precise mathematical models to guarantee system safety, limiting their applicability in scenarios with parametric uncertainties. While incremental control techniques have shown promise in addressing model uncertainties in flight control applications, translating these approaches to safety-critical control presents significant challenges. This paper bridges this gap by introducing measurement robust incremental control barrier functions (MRICBFs), which leverage sensor-based reduced-order models to provide formal safety guarantees for uncertain systems. By carefully addressing the challenges of sensor accuracy and approximation errors in the incremental formulation, our approach enables substituting specific model components with real-time sensor measurements while maintaining rigorous safety guarantees. This formulation overcomes the limitations of traditional adaptive control methods that adjust system parameters over time, enabling immediate and reliable safety measures for a particular class of model uncertainties. The efficacy of MRICBFs is demonstrated in two simulation case studies: a simple first-order system with time-varying sensor biases and a more complex overactuated hypersonic glide vehicle with multiple state constraints.<|reference_end|>
arxiv
@article{autenrieb2024sensor-based, title={Sensor-Based Safety-Critical Control using an Incremental Control Barrier Function Formulation via Reduced-Order Approximate Models}, author={Johannes Autenrieb, Hyo-Sang Shin}, journal={arXiv preprint arXiv:2410.08096}, year={2024}, archivePrefix={arXiv}, eprint={2410.08096}, primaryClass={eess.SY cs.SY} }
autenrieb2024sensor-based
arxiv-668190
2410.08097
LiPO: LiDAR Inertial Odometry for ICP Comparison
<|reference_start|>LiPO: LiDAR Inertial Odometry for ICP Comparison: We introduce a LiDAR inertial odometry (LIO) framework, called LiPO, that enables direct comparisons of different iterative closest point (ICP) point cloud registration methods. The two common ICP methods we compare are point-to-point (P2P) and point-to-feature (P2F). In our experience, within the context of LIO, P2F-ICP results in less drift and improved mapping accuracy when robots move aggressively through challenging environments when compared to P2P-ICP. However, P2F-ICP methods require more hand-tuned hyper-parameters that make P2F-ICP less general across all environments and motions. In real-world field robotics applications where robots are used across different environments, more general P2P-ICP methods may be preferred despite increased drift. In this paper, we seek to better quantify the trade-off between P2P-ICP and P2F-ICP to help inform when each method should be used. To explore this trade-off, we use LiPO to directly compare ICP methods and test on relevant benchmark datasets as well as on our custom unpiloted ground vehicle (UGV). We find that overall, P2F-ICP has reduced drift and improved mapping accuracy, but, P2P-ICP is more consistent across all environments and motions with minimal drift increase.<|reference_end|>
arxiv
@article{mick2024lipo:, title={LiPO: LiDAR Inertial Odometry for ICP Comparison}, author={Darwin Mick, Taylor Pool, Madankumar Sathenahally Nagaraju, Michael Kaess, Howie Choset, and Matt Travers}, journal={arXiv preprint arXiv:2410.08097}, year={2024}, archivePrefix={arXiv}, eprint={2410.08097}, primaryClass={cs.RO} }
mick2024lipo:
arxiv-668191
2410.08098
A Generative AI Technique for Synthesizing a Digital Twin for US Residential Solar Adoption and Generation
<|reference_start|>A Generative AI Technique for Synthesizing a Digital Twin for US Residential Solar Adoption and Generation: Residential rooftop solar adoption is considered crucial for reducing carbon emissions. The lack of photovoltaic (PV) data at a finer resolution (e.g., household, hourly levels) poses a significant roadblock to informed decision-making. We discuss a novel methodology to generate a highly granular, residential-scale realistic dataset for rooftop solar adoption across the contiguous United States. The data-driven methodology consists of: (i) integrated machine learning models to identify PV adopters, (ii) methods to augment the data using explainable AI techniques to glean insights about key features and their interactions, and (iii) methods to generate household-level hourly solar energy output using an analytical model. The resulting synthetic datasets are validated using real-world data and can serve as a digital twin for modeling downstream tasks. Finally, a policy-based case study utilizing the digital twin for Virginia demonstrated increased rooftop solar adoption with the 30\% Federal Solar Investment Tax Credit, especially in Low-to-Moderate-Income communities.<|reference_end|>
arxiv
@article{kishore2024a, title={A Generative AI Technique for Synthesizing a Digital Twin for U.S. Residential Solar Adoption and Generation}, author={Aparna Kishore, Swapna Thorve, Madhav Marathe}, journal={arXiv preprint arXiv:2410.08098}, year={2024}, archivePrefix={arXiv}, eprint={2410.08098}, primaryClass={cs.AI} }
kishore2024a
arxiv-668192
2410.08100
CrackSegDiff: Diffusion Probability Model-based Multi-modal Crack Segmentation
<|reference_start|>CrackSegDiff: Diffusion Probability Model-based Multi-modal Crack Segmentation: Integrating grayscale and depth data in road inspection robots could enhance the accuracy, reliability, and comprehensiveness of road condition assessments, leading to improved maintenance strategies and safer infrastructure. However, these data sources are often compromised by significant background noise from the pavement. Recent advancements in Diffusion Probabilistic Models (DPM) have demonstrated remarkable success in image segmentation tasks, showcasing potent denoising capabilities, as evidenced in studies like SegDiff \cite{amit2021segdiff}. Despite these advancements, current DPM-based segmentors do not fully capitalize on the potential of original image data. In this paper, we propose a novel DPM-based approach for crack segmentation, named CrackSegDiff, which uniquely fuses grayscale and range/depth images. This method enhances the reverse diffusion process by intensifying the interaction between local feature extraction via DPM and global feature extraction. Unlike traditional methods that utilize Transformers for global features, our approach employs Vm-unet \cite{ruan2024vm} to efficiently capture long-range information of the original data. The integration of features is further refined through two innovative modules: the Channel Fusion Module (CFM) and the Shallow Feature Compensation Module (SFCM). Our experimental evaluation on the three-class crack image segmentation tasks within the FIND dataset demonstrates that CrackSegDiff outperforms state-of-the-art methods, particularly excelling in the detection of shallow cracks. Code is available at https://github.com/sky-visionX/CrackSegDiff.<|reference_end|>
arxiv
@article{jiang2024cracksegdiff:, title={CrackSegDiff: Diffusion Probability Model-based Multi-modal Crack Segmentation}, author={Xiaoyan Jiang, Licheng Jiang, Anjie Wang, Kaiying Zhu, Yongbin Gao}, journal={arXiv preprint arXiv:2410.08100}, year={2024}, archivePrefix={arXiv}, eprint={2410.08100}, primaryClass={cs.CV} }
jiang2024cracksegdiff:
arxiv-668193
2410.08102
Multi-Agent Collaborative Data Selection for Efficient LLM Pretraining
<|reference_start|>Multi-Agent Collaborative Data Selection for Efficient LLM Pretraining: Efficient data selection is crucial to accelerate the pretraining of large language models (LLMs). While various methods have been proposed to enhance data efficiency, limited research has addressed the inherent conflicts between these approaches to achieve optimal data selection for LLM pretraining. To tackle this problem, we propose a novel multi-agent collaborative data selection mechanism. In this framework, each data selection method serves as an independent agent, and an agent console is designed to dynamically integrate the information from all agents throughout the LLM training process. We conduct extensive empirical studies to evaluate our multi-agent framework. The experimental results demonstrate that our approach significantly improves data efficiency, accelerates convergence in LLM training, and achieves an average performance gain of 10.5% across multiple language model benchmarks compared to the state-of-the-art methods.<|reference_end|>
arxiv
@article{bai2024multi-agent, title={Multi-Agent Collaborative Data Selection for Efficient LLM Pretraining}, author={Tianyi Bai, Ling Yang, Zhen Hao Wong, Jiahui Peng, Xinlin Zhuang, Chi Zhang, Lijun Wu, Jiantao Qiu, Wentao Zhang, Binhang Yuan, Conghui He}, journal={arXiv preprint arXiv:2410.08102}, year={2024}, archivePrefix={arXiv}, eprint={2410.08102}, primaryClass={cs.CL} }
bai2024multi-agent
arxiv-668194
2410.08105
What Makes Large Language Models Reason in (Multi-Turn) Code Generation?
<|reference_start|>What Makes Large Language Models Reason in (Multi-Turn) Code Generation?: Prompting techniques such as chain-of-thought have established themselves as a popular vehicle for improving the outputs of large language models (LLMs). For code generation, however, their exact mechanics and efficacy are under-explored. We thus investigate the effects of a wide range of prompting strategies with a focus on automatic re-prompting over multiple turns and computational requirements. After systematically decomposing reasoning, instruction, and execution feedback prompts, we conduct an extensive grid search on the competitive programming benchmarks CodeContests and TACO for multiple LLM families and sizes (Llama 3.0 and 3.1, 8B, 70B, 405B, and GPT-4o). Our study reveals strategies that consistently improve performance across all models with small and large sampling budgets. We then show how finetuning with such an optimal configuration allows models to internalize the induced reasoning process and obtain improvements in performance and scalability for multi-turn code generation.<|reference_end|>
arxiv
@article{zheng2024what, title={What Makes Large Language Models Reason in (Multi-Turn) Code Generation?}, author={Kunhao Zheng, Juliette Decugis, Jonas Gehring, Taco Cohen, Benjamin Negrevergne, Gabriel Synnaeve}, journal={arXiv preprint arXiv:2410.08105}, year={2024}, archivePrefix={arXiv}, eprint={2410.08105}, primaryClass={cs.CL} }
zheng2024what
arxiv-668195
2410.08107
IncEventGS: Pose-Free Gaussian Splatting from a Single Event Camera
<|reference_start|>IncEventGS: Pose-Free Gaussian Splatting from a Single Event Camera: Implicit neural representation and explicit 3D Gaussian Splatting (3D-GS) for novel view synthesis have achieved remarkable progress with frame-based camera (e.g. RGB and RGB-D cameras) recently. Compared to frame-based camera, a novel type of bio-inspired visual sensor, i.e. event camera, has demonstrated advantages in high temporal resolution, high dynamic range, low power consumption and low latency. Due to its unique asynchronous and irregular data capturing process, limited work has been proposed to apply neural representation or 3D Gaussian splatting for an event camera. In this work, we present IncEventGS, an incremental 3D Gaussian Splatting reconstruction algorithm with a single event camera. To recover the 3D scene representation incrementally, we exploit the tracking and mapping paradigm of conventional SLAM pipelines for IncEventGS. Given the incoming event stream, the tracker firstly estimates an initial camera motion based on prior reconstructed 3D-GS scene representation. The mapper then jointly refines both the 3D scene representation and camera motion based on the previously estimated motion trajectory from the tracker. The experimental results demonstrate that IncEventGS delivers superior performance compared to prior NeRF-based methods and other related baselines, even we do not have the ground-truth camera poses. Furthermore, our method can also deliver better performance compared to state-of-the-art event visual odometry methods in terms of camera motion estimation. Code is publicly available at: https://github.com/wu-cvgl/IncEventGS.<|reference_end|>
arxiv
@article{huang2024inceventgs:, title={IncEventGS: Pose-Free Gaussian Splatting from a Single Event Camera}, author={Jian Huang, Chengrui Dong, Peidong Liu}, journal={arXiv preprint arXiv:2410.08107}, year={2024}, archivePrefix={arXiv}, eprint={2410.08107}, primaryClass={cs.CV} }
huang2024inceventgs:
arxiv-668196
2410.08109
A Closer Look at Machine Unlearning for Large Language Models
<|reference_start|>A Closer Look at Machine Unlearning for Large Language Models: Large language models (LLMs) may memorize sensitive or copyrighted content, raising privacy and legal concerns. Due to the high cost of retraining from scratch, researchers attempt to employ machine unlearning to remove specific content from LLMs while preserving the overall performance. In this paper, we discuss several issues in machine unlearning for LLMs and provide our insights on possible approaches. To address the issue of inadequate evaluation of model outputs after unlearning, we introduce three additional metrics to evaluate token diversity, sentence semantics, and factual correctness. We then categorize unlearning methods into untargeted and targeted, and discuss their issues respectively. Specifically, the behavior that untargeted unlearning attempts to approximate is unpredictable and may involve hallucinations, and existing regularization is insufficient for targeted unlearning. To alleviate these issues, we propose using the objective of maximizing entropy (ME) for untargeted unlearning and incorporate answer preservation (AP) loss as regularization for targeted unlearning. Experimental results across three scenarios, i.e., fictitious unlearning, continual unlearning, and real-world unlearning, demonstrate the effectiveness of our approaches. The code is available at https://github.com/sail-sg/closer-look-LLM-unlearning.<|reference_end|>
arxiv
@article{yuan2024a, title={A Closer Look at Machine Unlearning for Large Language Models}, author={Xiaojian Yuan, Tianyu Pang, Chao Du, Kejiang Chen, Weiming Zhang, Min Lin}, journal={arXiv preprint arXiv:2410.08109}, year={2024}, archivePrefix={arXiv}, eprint={2410.08109}, primaryClass={cs.CL cs.AI cs.LG} }
yuan2024a
arxiv-668197
2410.08110
On the Second-Order Achievabilities of Indirect Quadratic Lossy Source Coding
<|reference_start|>On the Second-Order Achievabilities of Indirect Quadratic Lossy Source Coding: This paper studies the second-order achievabilities of indirect quadratic lossy source coding for a specific class of source models, where the term "quadratic" denotes that the reconstruction fidelity of the hidden source is quantified by a squared error distortion measure. Specifically, it is assumed that the hidden source $S$ can be expressed as $S = \varphi(X) + W$, where $X$ is the observable source with alphabet $\mathcal{X}$, $\varphi(\cdot)$ is a deterministic function, and $W$ is a random variable independent of $X$, satisfying $\mathbb{E}[W] = 0$, $\mathbb{E}[W^2] > 0$, $\mathbb{E}[W^3] = 0$, and $\mathbb{E}[W^6] < \infty$. Additionally, both the set $\{\varphi(x):\ x \in \mathcal{X} \}$ and the reconstruction alphabet for $S$ are assumed to be bounded. Under the above settings, a second-order achievability bound is established using techniques based on distortion-tilted information. This result is then generalized to the case of indirect quadratic lossy source coding with observed source reconstruction, where reconstruction is required for both the hidden source $S$ and the observable source $X$, and the distortion measure for $X$ is not necessarily quadratic. These obtained bounds are consistent in form with their finite-alphabet counterparts, which have been proven to be second-order tight.<|reference_end|>
arxiv
@article{yang2024on, title={On the Second-Order Achievabilities of Indirect Quadratic Lossy Source Coding}, author={Huiyuan Yang and Xiaojun Yuan}, journal={arXiv preprint arXiv:2410.08110}, year={2024}, archivePrefix={arXiv}, eprint={2410.08110}, primaryClass={cs.IT math.IT} }
yang2024on
arxiv-668198
2410.08111
Active Fourier Auditor for Estimating Distributional Properties of ML Models
<|reference_start|>Active Fourier Auditor for Estimating Distributional Properties of ML Models: With the pervasive deployment of Machine Learning (ML) models in real-world applications, verifying and auditing properties of ML models have become a central concern. In this work, we focus on three properties: robustness, individual fairness, and group fairness. We discuss two approaches for auditing ML model properties: estimation with and without reconstruction of the target model under audit. Though the first approach is studied in the literature, the second approach remains unexplored. For this purpose, we develop a new framework that quantifies different properties in terms of the Fourier coefficients of the ML model under audit but does not parametrically reconstruct it. We propose the Active Fourier Auditor (AFA), which queries sample points according to the Fourier coefficients of the ML model, and further estimates the properties. We derive high probability error bounds on AFA's estimates, along with the worst-case lower bounds on the sample complexity to audit them. Numerically we demonstrate on multiple datasets and models that AFA is more accurate and sample-efficient to estimate the properties of interest than the baselines.<|reference_end|>
arxiv
@article{ajarra2024active, title={Active Fourier Auditor for Estimating Distributional Properties of ML Models}, author={Ayoub Ajarra, Bishwamittra Ghosh, Debabrota Basu}, journal={arXiv preprint arXiv:2410.08111}, year={2024}, archivePrefix={arXiv}, eprint={2410.08111}, primaryClass={cs.LG cs.AI cs.CY stat.ML} }
ajarra2024active
arxiv-668199
2410.08113
Robust AI-Generated Text Detection by Restricted Embeddings
<|reference_start|>Robust AI-Generated Text Detection by Restricted Embeddings: Growing amount and quality of AI-generated texts makes detecting such content more difficult. In most real-world scenarios, the domain (style and topic) of generated data and the generator model are not known in advance. In this work, we focus on the robustness of classifier-based detectors of AI-generated text, namely their ability to transfer to unseen generators or semantic domains. We investigate the geometry of the embedding space of Transformer-based text encoders and show that clearing out harmful linear subspaces helps to train a robust classifier, ignoring domain-specific spurious features. We investigate several subspace decomposition and feature selection strategies and achieve significant improvements over state of the art methods in cross-domain and cross-generator transfer. Our best approaches for head-wise and coordinate-based subspace removal increase the mean out-of-distribution (OOD) classification score by up to 9% and 14% in particular setups for RoBERTa and BERT embeddings respectively. We release our code and data: https://github.com/SilverSolver/RobustATD<|reference_end|>
arxiv
@article{kuznetsov2024robust, title={Robust AI-Generated Text Detection by Restricted Embeddings}, author={Kristian Kuznetsov, Eduard Tulchinskii, Laida Kushnareva, German Magai, Serguei Barannikov, Sergey Nikolenko, Irina Piontkovskaya}, journal={arXiv preprint arXiv:2410.08113}, year={2024}, archivePrefix={arXiv}, eprint={2410.08113}, primaryClass={cs.CL cs.AI} }
kuznetsov2024robust
arxiv-668200
2410.08114
Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning
<|reference_start|>Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning: Recently, leveraging pre-training techniques to enhance point cloud models has become a hot research topic. However, existing approaches typically require full fine-tuning of pre-trained models to achieve satisfied performance on downstream tasks, accompanying storage-intensive and computationally demanding. To address this issue, we propose a novel Parameter-Efficient Fine-Tuning (PEFT) method for point cloud, called PointGST (Point cloud Graph Spectral Tuning). PointGST freezes the pre-trained model and introduces a lightweight, trainable Point Cloud Spectral Adapter (PCSA) to fine-tune parameters in the spectral domain. The core idea is built on two observations: 1) The inner tokens from frozen models might present confusion in the spatial domain; 2) Task-specific intrinsic information is important for transferring the general knowledge to the downstream task. Specifically, PointGST transfers the point tokens from the spatial domain to the spectral domain, effectively de-correlating confusion among tokens via using orthogonal components for separating. Moreover, the generated spectral basis involves intrinsic information about the downstream point clouds, enabling more targeted tuning. As a result, PointGST facilitates the efficient transfer of general knowledge to downstream tasks while significantly reducing training costs. Extensive experiments on challenging point cloud datasets across various tasks demonstrate that PointGST not only outperforms its fully fine-tuning counterpart but also significantly reduces trainable parameters, making it a promising solution for efficient point cloud learning. It improves upon a solid baseline by +2.28%, 1.16%, and 2.78%, resulting in 99.48%, 97.76%, and 96.18% on the ScanObjNN OBJ BG, OBJ OBLY, and PB T50 RS datasets, respectively. This advancement establishes a new state-of-the-art, using only 0.67% of the trainable parameters.<|reference_end|>
arxiv
@article{liang2024parameter-efficient, title={Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning}, author={Dingkang Liang, Tianrui Feng, Xin Zhou, Yumeng Zhang, Zhikang Zou, Xiang Bai}, journal={arXiv preprint arXiv:2410.08114}, year={2024}, archivePrefix={arXiv}, eprint={2410.08114}, primaryClass={cs.CV} }
liang2024parameter-efficient