Columns: corpus_id, paper_id, title, abstract, source, bibtex, citation_key
arxiv-663301
2409.19715
Coffee-Gym: An Environment for Evaluating and Improving Natural Language Feedback on Erroneous Code
This paper presents Coffee-Gym, a comprehensive RL environment for training models that provide feedback on code editing. Coffee-Gym includes two major components: (1) Coffee, a dataset containing humans' code edit traces for coding questions and machine-written feedback for editing erroneous code; (2) CoffeeEval, a reward function that faithfully reflects the helpfulness of feedback by assessing the performance of the revised code in unit tests. With them, Coffee-Gym addresses the unavailability of high-quality datasets for training feedback models with RL, and provides more accurate rewards than the SOTA reward model (i.e., GPT-4). By applying Coffee-Gym, we elicit feedback models that outperform baselines in enhancing open-source code LLMs' code editing, making them comparable with closed-source LLMs. We make the dataset and the model checkpoint publicly available.
arxiv
@article{chae2024coffee-gym, title={Coffee-Gym: An Environment for Evaluating and Improving Natural Language Feedback on Erroneous Code}, author={Hyungjoo Chae, Taeyoon Kwon, Seungjun Moon, Yongho Song, Dongjin Kang, Kai Tzu-iunn Ong, Beong-woo Kwak, Seonghyeon Bae, Seung-won Hwang, Jinyoung Yeo}, journal={arXiv preprint arXiv:2409.19715}, year={2024}, archivePrefix={arXiv}, eprint={2409.19715}, primaryClass={cs.CL} }
chae2024coffee-gym
arxiv-663302
2409.19716
Constrained Reinforcement Learning for Safe Heat Pump Control
Constrained Reinforcement Learning (RL) has emerged as a significant research area within RL, where integrating constraints with rewards is crucial for enhancing safety and performance across diverse control tasks. In the context of heating systems in the buildings, optimizing the energy efficiency while maintaining the residents' thermal comfort can be intuitively formulated as a constrained optimization problem. However, to solve it with RL may require large amount of data. Therefore, an accurate and versatile simulator is favored. In this paper, we propose a novel building simulator I4B which provides interfaces for different usages and apply a model-free constrained RL algorithm named constrained Soft Actor-Critic with Linear Smoothed Log Barrier function (CSAC-LB) to the heating optimization problem. Benchmarking against baseline algorithms demonstrates CSAC-LB's efficiency in data exploration, constraint satisfaction and performance.
arxiv
@article{zhang2024constrained, title={Constrained Reinforcement Learning for Safe Heat Pump Control}, author={Baohe Zhang, Lilli Frison, Thomas Brox, Joschka B{\"o}decker}, journal={arXiv preprint arXiv:2409.19716}, year={2024}, archivePrefix={arXiv}, eprint={2409.19716}, primaryClass={cs.LG cs.AI cs.SY eess.SY} }
zhang2024constrained
arxiv-663303
2409.19718
Evolving Multi-Scale Normalization for Time Series Forecasting under Distribution Shifts
Complex distribution shifts are the main obstacle to achieving accurate long-term time series forecasting. Several efforts have been conducted to capture the distribution characteristics and propose adaptive normalization techniques to alleviate the influence of distribution shifts. However, these methods neglect the intricate distribution dynamics observed from various scales and the evolving functions of distribution dynamics and normalized mapping relationships. To this end, we propose a novel model-agnostic Evolving Multi-Scale Normalization (EvoMSN) framework to tackle the distribution shift problem. Flexible normalization and denormalization are proposed based on the multi-scale statistics prediction module and adaptive ensembling. An evolving optimization strategy is designed to update the forecasting model and statistics prediction module collaboratively to track the shifting distributions. We evaluate the effectiveness of EvoMSN in improving the performance of five mainstream forecasting methods on benchmark datasets and also show its superiority compared to existing advanced normalization and online learning approaches. The code is publicly available at https://github.com/qindalin/EvoMSN.
arxiv
@article{qin2024evolving, title={Evolving Multi-Scale Normalization for Time Series Forecasting under Distribution Shifts}, author={Dalin Qin, Yehui Li, Weiqi Chen, Zhaoyang Zhu, Qingsong Wen, Liang Sun, Pierre Pinson, Yi Wang}, journal={arXiv preprint arXiv:2409.19718}, year={2024}, archivePrefix={arXiv}, eprint={2409.19718}, primaryClass={cs.LG stat.ML} }
qin2024evolving
arxiv-663304
2409.19720
FAST: A Dual-tier Few-Shot Learning Paradigm for Whole Slide Image Classification
The expensive fine-grained annotation and data scarcity have become the primary obstacles for the widespread adoption of deep learning-based Whole Slide Images (WSI) classification algorithms in clinical practice. Unlike few-shot learning methods in natural images that can leverage the labels of each image, existing few-shot WSI classification methods only utilize a small number of fine-grained labels or weakly supervised slide labels for training in order to avoid expensive fine-grained annotation. They lack sufficient mining of available WSIs, severely limiting WSI classification performance. To address the above issues, we propose a novel and efficient dual-tier few-shot learning paradigm for WSI classification, named FAST. FAST consists of a dual-level annotation strategy and a dual-branch classification framework. Firstly, to avoid expensive fine-grained annotation, we collect a very small number of WSIs at the slide level, and annotate an extremely small number of patches. Then, to fully mining the available WSIs, we use all the patches and available patch labels to build a cache branch, which utilizes the labeled patches to learn the labels of unlabeled patches and through knowledge retrieval for patch classification. In addition to the cache branch, we also construct a prior branch that includes learnable prompt vectors, using the text encoder of visual-language models for patch classification. Finally, we integrate the results from both branches to achieve WSI classification. Extensive experiments on binary and multi-class datasets demonstrate that our proposed method significantly surpasses existing few-shot classification methods and approaches the accuracy of fully supervised methods with only 0.22$\%$ annotation costs. All codes and models will be publicly available on https://github.com/fukexue/FAST.
arxiv
@article{fu2024fast, title={FAST: A Dual-tier Few-Shot Learning Paradigm for Whole Slide Image Classification}, author={Kexue Fu, Xiaoyuan Luo, Linhao Qu, Shuo Wang, Ying Xiong, Ilias Maglogiannis, Longxiang Gao, Manning Wang}, journal={arXiv preprint arXiv:2409.19720}, year={2024}, archivePrefix={arXiv}, eprint={2409.19720}, primaryClass={cs.CV} }
fu2024fast
arxiv-663305
2409.19722
The Vanilla Sequent Calculus is Call-by-Value
Existing Curry-Howard interpretations of call-by-value evaluation for the $\lambda$-calculus involve classical logic or linear logic, despite the fact that call-by-value was introduced in an intuitionistic setting without linear features. This paper shows that the most basic sequent calculus for minimal intuitionistic logic -- dubbed here vanilla -- can naturally be seen as a logical interpretation of call-by-value evaluation. This is obtained by establishing mutual simulations with a well-known formalism for call-by-value evaluation.
arxiv
@article{accattoli2024the, title={The Vanilla Sequent Calculus is Call-by-Value (Fresh Perspective)}, author={Beniamino Accattoli}, journal={arXiv preprint arXiv:2409.19722}, year={2024}, archivePrefix={arXiv}, eprint={2409.19722}, primaryClass={cs.LO cs.PL} }
accattoli2024the
arxiv-663306
2409.19723
Revealing Personality Traits: A New Benchmark Dataset for Explainable Personality Recognition on Dialogues
Personality recognition aims to identify the personality traits implied in user data such as dialogues and social media posts. Current research predominantly treats personality recognition as a classification task, failing to reveal the supporting evidence for the recognized personality. In this paper, we propose a novel task named Explainable Personality Recognition, aiming to reveal the reasoning process as supporting evidence of the personality trait. Inspired by personality theories, personality traits are made up of stable patterns of personality state, where the states are short-term characteristic patterns of thoughts, feelings, and behaviors in a concrete situation at a specific moment in time. We propose an explainable personality recognition framework called Chain-of-Personality-Evidence (CoPE), which involves a reasoning process from specific contexts to short-term personality states to long-term personality traits. Furthermore, based on the CoPE framework, we construct an explainable personality recognition dataset from dialogues, PersonalityEvd. We introduce two explainable personality state recognition and explainable personality trait recognition tasks, which require models to recognize the personality state and trait labels and their corresponding support evidence. Our extensive experiments based on Large Language Models on the two tasks show that revealing personality traits is very challenging and we present some insights for future research. Our data and code are available at https://github.com/Lei-Sun-RUC/PersonalityEvd.
arxiv
@article{sun2024revealing, title={Revealing Personality Traits: A New Benchmark Dataset for Explainable Personality Recognition on Dialogues}, author={Lei Sun, Jinming Zhao, Qin Jin}, journal={arXiv preprint arXiv:2409.19723}, year={2024}, archivePrefix={arXiv}, eprint={2409.19723}, primaryClass={cs.CL} }
sun2024revealing
arxiv-663307
2409.19724
DataDRILL: Formation Pressure Prediction and Kick Detection for Drilling Rigs
Accurate real-time prediction of formation pressure and kick detection is crucial for drilling operations, as it can significantly improve decision-making and the cost-effectiveness of the process. Data-driven models have gained popularity for automating drilling operations by predicting formation pressure and detecting kicks. However, the current literature does not make supporting datasets publicly available to advance research in the field of drilling rigs, thus impeding technological progress in this domain. This paper introduces two new datasets to support researchers in developing intelligent algorithms to enhance oil/gas well drilling research. The datasets include data samples for formation pressure prediction and kick detection with 28 drilling variables and more than 2000 data samples. Principal component regression is employed to forecast formation pressure, while principal component analysis is utilized to identify kicks for the dataset's technical validation. Notably, the R2 and Residual Predictive Deviation scores for principal component regression are 0.78 and 0.922, respectively.
arxiv
@article{arifeen2024datadrill, title={DataDRILL: Formation Pressure Prediction and Kick Detection for Drilling Rigs}, author={Murshedul Arifeen, Andrei Petrovski, Md Junayed Hasan, Igor Kotenko, Maksim Sletov, Phil Hassard}, journal={arXiv preprint arXiv:2409.19724}, year={2024}, archivePrefix={arXiv}, eprint={2409.19724}, primaryClass={cs.LG} }
arifeen2024datadrill
arxiv-663308
2409.19727
Investigating the Effect of Network Pruning on Performance and Interpretability
Deep Neural Networks (DNNs) are often over-parameterized for their tasks and can be compressed quite drastically by removing weights, a process called pruning. We investigate the impact of different pruning techniques on the classification performance and interpretability of GoogLeNet. We systematically apply unstructured and structured pruning, as well as connection sparsity (pruning of input weights) methods to the network and analyze the outcomes regarding the network's performance on the validation set of ImageNet. We also compare different retraining strategies, such as iterative pruning and one-shot pruning. We find that with sufficient retraining epochs, the performance of the networks can approximate the performance of the default GoogLeNet - and even surpass it in some cases. To assess interpretability, we employ the Mechanistic Interpretability Score (MIS) developed by Zimmermann et al. Our experiments reveal that there is no significant relationship between interpretability and pruning rate when using MIS as a measure. Additionally, we observe that networks with extremely low accuracy can still achieve high MIS scores, suggesting that the MIS may not always align with intuitive notions of interpretability, such as understanding the basis of correct decisions.
arxiv
@article{vonrad2024investigating, title={Investigating the Effect of Network Pruning on Performance and Interpretability}, author={Jonathan von Rad, Florian Seuffert}, journal={arXiv preprint arXiv:2409.19727}, year={2024}, archivePrefix={arXiv}, eprint={2409.19727}, primaryClass={cs.LG cs.CV} }
vonrad2024investigating
arxiv-663309
2409.19732
Unified Gradient-Based Machine Unlearning with Remain Geometry Enhancement
Machine unlearning (MU) has emerged to enhance the privacy and trustworthiness of deep neural networks. Approximate MU is a practical method for large-scale models. Our investigation into approximate MU starts with identifying the steepest descent direction, minimizing the output Kullback-Leibler divergence to exact MU inside a parameters' neighborhood. This probed direction decomposes into three components: weighted forgetting gradient ascent, fine-tuning retaining gradient descent, and a weight saliency matrix. Such decomposition derived from Euclidean metric encompasses most existing gradient-based MU methods. Nevertheless, adhering to Euclidean space may result in sub-optimal iterative trajectories due to the overlooked geometric structure of the output probability space. We suggest embedding the unlearning update into a manifold rendered by the remaining geometry, incorporating second-order Hessian from the remaining data. It helps prevent effective unlearning from interfering with the retained performance. However, computing the second-order Hessian for large-scale models is intractable. To efficiently leverage the benefits of Hessian modulation, we propose a fast-slow parameter update strategy to implicitly approximate the up-to-date salient unlearning direction. Free from specific modal constraints, our approach is adaptable across computer vision unlearning tasks, including classification and generation. Extensive experiments validate our efficacy and efficiency. Notably, our method successfully performs class-forgetting on ImageNet using DiT and forgets a class on CIFAR-10 using DDPM in just 50 steps, compared to thousands of steps required by previous methods.
arxiv
@article{huang2024unified, title={Unified Gradient-Based Machine Unlearning with Remain Geometry Enhancement}, author={Zhehao Huang, Xinwen Cheng, JingHao Zheng, Haoran Wang, Zhengbao He, Tao Li, Xiaolin Huang}, journal={arXiv preprint arXiv:2409.19732}, year={2024}, archivePrefix={arXiv}, eprint={2409.19732}, primaryClass={cs.LG cs.AI} }
huang2024unified
arxiv-663310
2409.19733
Pear: Pruning and Sharing Adapters in Visual Parameter-Efficient Fine-Tuning
Adapters have been widely explored to alleviate computational and storage costs when fine-tuning pretrained foundation models. However, the adapter itself can exhibit redundancy, leading to unnecessary storage overhead and inferior performance. In this paper, we propose Prune and Share (Pear), a novel adapter-pruning framework for efficient fine-tuning of pretrained visual foundation models. Specifically, we prune certain adapters and share the more important unpruned ones with positions where adapters are pruned, allowing continual adaptation at these positions after pruning. Additionally, a knowledge checkpoint strategy is introduced, which preserves the information of the pruned adapters and further boosts performance. Experimental results on visual adaptation benchmark validate the effectiveness and efficiency of the proposed Pear comparing to other competitive methods. Code is in https://github.com/yibozhong/pear.
arxiv
@article{zhong2024pear, title={Pear: Pruning and Sharing Adapters in Visual Parameter-Efficient Fine-Tuning}, author={Yibo Zhong, Yao Zhou}, journal={arXiv preprint arXiv:2409.19733}, year={2024}, archivePrefix={arXiv}, eprint={2409.19733}, primaryClass={cs.CV} }
zhong2024pear
arxiv-663311
2409.19734
T2Vs Meet VLMs: A Scalable Multimodal Dataset for Visual Harmfulness Recognition
To address the risks of encountering inappropriate or harmful content, researchers managed to incorporate several harmful contents datasets with machine learning methods to detect harmful concepts. However, existing harmful datasets are curated by the presence of a narrow range of harmful objects, and only cover real harmful content sources. This hinders the generalizability of methods based on such datasets, potentially leading to misjudgments. Therefore, we propose a comprehensive harmful dataset, Visual Harmful Dataset 11K (VHD11K), consisting of 10,000 images and 1,000 videos, crawled from the Internet and generated by 4 generative models, across a total of 10 harmful categories covering a full spectrum of harmful concepts with nontrivial definition. We also propose a novel annotation framework by formulating the annotation process as a multi-agent Visual Question Answering (VQA) task, having 3 different VLMs "debate" about whether the given image/video is harmful, and incorporating the in-context learning strategy in the debating process. Therefore, we can ensure that the VLMs consider the context of the given image/video and both sides of the arguments thoroughly before making decisions, further reducing the likelihood of misjudgments in edge cases. Evaluation and experimental results demonstrate that (1) the great alignment between the annotation from our novel annotation framework and those from human, ensuring the reliability of VHD11K; (2) our full-spectrum harmful dataset successfully identifies the inability of existing harmful content detection methods to detect extensive harmful contents and improves the performance of existing harmfulness recognition methods; (3) VHD11K outperforms the baseline dataset, SMID, as evidenced by the superior improvement in harmfulness recognition methods. The complete dataset and code can be found at https://github.com/nctu-eva-lab/VHD11K.
arxiv
@article{yeh2024t2vs, title={T2Vs Meet VLMs: A Scalable Multimodal Dataset for Visual Harmfulness Recognition}, author={Chen Yeh, You-Ming Chang, Wei-Chen Chiu, Ning Yu}, journal={arXiv preprint arXiv:2409.19734}, year={2024}, archivePrefix={arXiv}, eprint={2409.19734}, primaryClass={cs.CV} }
yeh2024t2vs
arxiv-663312
2409.19735
Scrambled text: training Language Models to correct OCR errors using synthetic data
OCR errors are common in digitised historical archives significantly affecting their usability and value. Generative Language Models (LMs) have shown potential for correcting these errors using the context provided by the corrupted text and the broader socio-cultural context, a process called Context Leveraging OCR Correction (CLOCR-C). However, getting sufficient training data for fine-tuning such models can prove challenging. This paper shows that fine-tuning a language model on synthetic data using an LM and using a character level Markov corruption process can significantly improve the ability to correct OCR errors. Models trained on synthetic data reduce the character error rate by 55% and word error rate by 32% over the base LM and outperform models trained on real data. Key findings include; training on under-corrupted data is better than over-corrupted data; non-uniform character level corruption is better than uniform corruption; More tokens-per-observation outperforms more observations for a fixed token budget. The outputs for this paper are a set of 8 heuristics for training effective CLOCR-C models, a dataset of 11,000 synthetic 19th century newspaper articles and scrambledtext a python library for creating synthetic corrupted data.
arxiv
@article{bourne2024scrambled, title={Scrambled text: training Language Models to correct OCR errors using synthetic data}, author={Jonathan Bourne}, journal={arXiv preprint arXiv:2409.19735}, year={2024}, archivePrefix={arXiv}, eprint={2409.19735}, primaryClass={cs.CL} }
bourne2024scrambled
arxiv-663313
2409.19737
A Systematic Review of NLP for Dementia- Tasks, Datasets and Opportunities
The close link between cognitive decline and language has fostered long-standing collaboration between the NLP and medical communities in dementia research. To examine this, we reviewed over 200 papers applying NLP to dementia related efforts, drawing from medical, technological, and NLP-focused literature. We identify key research areas, including dementia detection, linguistic biomarker extraction, caregiver support, and patient assistance, showing that half of all papers focus solely on dementia detection using clinical data. However, many directions remain unexplored: artificially degraded language models, synthetic data, digital twins, and more. We highlight gaps and opportunities around trust, scientific rigor, applicability, and cross-community collaboration, and showcase the diverse datasets encountered throughout our review: recorded, written, structured, spontaneous, synthetic, clinical, social media based, and more. This review aims to inspire more creative approaches to dementia research within the medical and NLP communities.
arxiv
@article{peled-cohen2024a, title={A Systematic Review of NLP for Dementia- Tasks, Datasets and Opportunities}, author={Lotem Peled-Cohen, Roi Reichart}, journal={arXiv preprint arXiv:2409.19737}, year={2024}, archivePrefix={arXiv}, eprint={2409.19737}, primaryClass={cs.CL} }
peled-cohen2024a
arxiv-663314
2409.19738
The Future of HCI-Policy Collaboration
Policies significantly shape computation's societal impact, a crucial HCI concern. However, challenges persist when HCI professionals attempt to integrate policy into their work or affect policy outcomes. Prior research considered these challenges at the ``border'' of HCI and policy. This paper asks: What if HCI considers policy integral to its intellectual concerns, placing system-people-policy interaction not at the border but nearer the center of HCI research, practice, and education? What if HCI fosters a mosaic of methods and knowledge contributions that blend system, human, and policy expertise in various ways, just like HCI has done with blending system and human expertise? We present this re-imagined HCI-policy relationship as a provocation and highlight its usefulness: It spotlights previously overlooked system-people-policy interaction work in HCI. It unveils new opportunities for HCI's futuring, empirical, and design projects. It allows HCI to coordinate its diverse policy engagements, enhancing its collective impact on policy outcomes.
arxiv
@article{yang2024the, title={The Future of HCI-Policy Collaboration}, author={Qian Yang, Richmond Y Wong, Steven J Jackson, Sabine Junginger, Margaret D Hagan, Thomas Gilbert, John Zimmerman}, journal={arXiv preprint arXiv:2409.19738}, year={2024}, doi={10.1145/3613904.3642771}, archivePrefix={arXiv}, eprint={2409.19738}, primaryClass={cs.HC} }
yang2024the
arxiv-663315
2409.19740
When Molecular GAN Meets Byte-Pair Encoding
Deep generative models, such as generative adversarial networks (GANs), are pivotal in discovering novel drug-like candidates via de novo molecular generation. However, traditional character-wise tokenizers often struggle with identifying novel and complex sub-structures in molecular data. In contrast, alternative tokenization methods have demonstrated superior performance. This study introduces a molecular GAN that integrates a byte level byte-pair encoding tokenizer and employs reinforcement learning to enhance de novo molecular generation. Specifically, the generator functions as an actor, producing SMILES strings, while the discriminator acts as a critic, evaluating their quality. Our molecular GAN also integrates innovative reward mechanisms aimed at improving computational efficiency. Experimental results assessing validity, uniqueness, novelty, and diversity, complemented by detailed visualization analysis, robustly demonstrate the effectiveness of our GAN.
arxiv
@article{tang2024when, title={When Molecular GAN Meets Byte-Pair Encoding}, author={Huidong Tang, Chen Li, Yasuhiko Morimoto}, journal={arXiv preprint arXiv:2409.19740}, year={2024}, archivePrefix={arXiv}, eprint={2409.19740}, primaryClass={cs.LG q-bio.QM} }
tang2024when
arxiv-663316
2409.19741
Tailored Federated Learning: Leveraging Direction Regulation & Knowledge Distillation
Federated learning (FL) has emerged as a transformative training paradigm, particularly invaluable in privacy-sensitive domains like healthcare. However, client heterogeneity in data, computing power, and tasks poses a significant challenge. To address such a challenge, we propose an FL optimization algorithm that integrates model delta regularization, personalized models, federated knowledge distillation, and mix-pooling. Model delta regularization optimizes model updates centrally on the server, efficiently updating clients with minimal communication costs. Personalized models and federated knowledge distillation strategies are employed to tackle task heterogeneity effectively. Additionally, mix-pooling is introduced to accommodate variations in the sensitivity of readout operations. Experimental results demonstrate the remarkable accuracy and rapid convergence achieved by model delta regularization. Additionally, the federated knowledge distillation algorithm notably improves FL performance, especially in scenarios with diverse data. Moreover, mix-pooling readout operations provide tangible benefits for clients, showing the effectiveness of our proposed methods.
arxiv
@article{tang2024tailored, title={Tailored Federated Learning: Leveraging Direction Regulation & Knowledge Distillation}, author={Huidong Tang, Chen Li, Huachong Yu, Sayaka Kamei, Yasuhiko Morimoto}, journal={arXiv preprint arXiv:2409.19741}, year={2024}, archivePrefix={arXiv}, eprint={2409.19741}, primaryClass={cs.LG} }
tang2024tailored
arxiv-663317
2409.19745
PEAR: Position-Embedding-Agnostic Attention Re-weighting Enhances Retrieval-Augmented Generation with Zero Inference Overhead
Large language models (LLMs) enhanced with retrieval-augmented generation (RAG) have introduced a new paradigm for web search. However, the limited context awareness of LLMs degrades their performance on RAG tasks. Existing methods to enhance context awareness are often inefficient, incurring time or memory overhead during inference, and many are tailored to specific position embeddings. In this paper, we propose Position-Embedding-Agnostic attention Re-weighting (PEAR), which enhances the context awareness of LLMs with zero inference overhead. Specifically, on a proxy task focused on context copying, we first detect heads which suppress the models' context awareness thereby diminishing RAG performance. To weaken the impact of these heads, we re-weight their outputs with learnable coefficients. The LLM (with frozen parameters) is optimized by adjusting these coefficients to minimize loss on the proxy task. As a result, the coefficients are optimized to values less than one, thereby reducing their tendency to suppress RAG performance. During inference, the optimized coefficients are fixed to re-weight these heads, regardless of the specific task at hand. Our proposed PEAR offers two major advantages over previous approaches: (1) It introduces zero additional inference overhead in terms of memory usage or inference time, while outperforming competitive baselines in accuracy and efficiency across various RAG tasks. (2) It is independent of position embedding algorithms, ensuring broader applicability.
arxiv
@article{tan2024pear, title={PEAR: Position-Embedding-Agnostic Attention Re-weighting Enhances Retrieval-Augmented Generation with Zero Inference Overhead}, author={Tao Tan, Yining Qian, Ang Lv, Hongzhan Lin, Songhao Wu, Yongbo Wang, Feng Wang, Jingtong Wu, Xin Lu, Rui Yan}, journal={arXiv preprint arXiv:2409.19745}, year={2024}, archivePrefix={arXiv}, eprint={2409.19745}, primaryClass={cs.CL cs.AI} }
tan2024pear
arxiv-663318
2409.19746
Learning Robust Policies via Interpretable Hamilton-Jacobi Reachability-Guided Disturbances
Deep Reinforcement Learning (RL) has shown remarkable success in robotics with complex and heterogeneous dynamics. However, its vulnerability to unknown disturbances and adversarial attacks remains a significant challenge. In this paper, we propose a robust policy training framework that integrates model-based control principles with adversarial RL training to improve robustness without the need for external black-box adversaries. Our approach introduces a novel Hamilton-Jacobi reachability-guided disturbance for adversarial RL training, where we use interpretable worst-case or near-worst-case disturbances as adversaries against the robust policy. We evaluated its effectiveness across three distinct tasks: a reach-avoid game in both simulation and real-world settings, and a highly dynamic quadrotor stabilization task in simulation. We validate that our learned critic network is consistent with the ground-truth HJ value function, while the policy network shows comparable performance with other learning-based methods.
arxiv
@article{hu2024learning, title={Learning Robust Policies via Interpretable Hamilton-Jacobi Reachability-Guided Disturbances}, author={Hanyang Hu, Xilun Zhang, Xubo Lyu, Mo Chen}, journal={arXiv preprint arXiv:2409.19746}, year={2024}, archivePrefix={arXiv}, eprint={2409.19746}, primaryClass={cs.RO} }
hu2024learning
arxiv-663319
2409.19747
Natural Language Generation for Visualizations: State of the Art, Challenges and Future Directions
<|reference_start|>Natural Language Generation for Visualizations: State of the Art, Challenges and Future Directions: Natural language and visualization are two complementary modalities of human communication that play a crucial role in conveying information effectively. While visualizations help people discover trends, patterns, and anomalies in data, natural language descriptions help explain these insights. Thus, combining text with visualizations is a prevalent technique for effectively delivering the core message of the data. Given the rise of natural language generation (NLG), there is a growing interest in automatically creating natural language descriptions for visualizations, which can be used for chart captioning, answering questions about charts, or telling data-driven stories. In this survey, we systematically review the state of the art on NLG for visualizations and introduce a taxonomy of the problem. The NLG tasks fall within the domain of Natural Language Interfaces (NLI) for visualization, an area that has garnered significant attention from both the research community and industry. To narrow down the scope of the survey, we primarily concentrate on the research works that focus on text generation for visualizations. To characterize the NLG problem and the design space of proposed solutions, we pose five Wh-questions, why and how NLG tasks are performed for visualizations, what the task inputs and outputs are, as well as where and when the generated texts are integrated with visualizations. We categorize the solutions used in the surveyed papers based on these "five Wh-questions." Finally, we discuss the key challenges and potential avenues for future research in this domain.<|reference_end|>
arxiv
@article{hoque2024natural, title={Natural Language Generation for Visualizations: State of the Art, Challenges and Future Directions}, author={Enamul Hoque, Mohammed Saidul Islam}, journal={arXiv preprint arXiv:2409.19747}, year={2024}, archivePrefix={arXiv}, eprint={2409.19747}, primaryClass={cs.CL} }
hoque2024natural
arxiv-663320
2409.19749
NeuroMax: Enhancing Neural Topic Modeling via Maximizing Mutual Information and Group Topic Regularization
<|reference_start|>NeuroMax: Enhancing Neural Topic Modeling via Maximizing Mutual Information and Group Topic Regularization: Recent advances in neural topic models have concentrated on two primary directions: the integration of the inference network (encoder) with a pre-trained language model (PLM) and the modeling of the relationship between words and topics in the generative model (decoder). However, the use of large PLMs significantly increases inference costs, making them less practical for situations requiring low inference times. Furthermore, it is crucial to simultaneously model the relationships between topics and words as well as the interrelationships among topics themselves. In this work, we propose a novel framework called NeuroMax (Neural Topic Model with Maximizing Mutual Information with Pretrained Language Model and Group Topic Regularization) to address these challenges. NeuroMax maximizes the mutual information between the topic representation obtained from the encoder in neural topic models and the representation derived from the PLM. Additionally, NeuroMax employs optimal transport to learn the relationships between topics by analyzing how information is transported among them. Experimental results indicate that NeuroMax reduces inference time, generates more coherent topics and topic groups, and produces more representative document embeddings, thereby enhancing performance on downstream tasks.<|reference_end|>
arxiv
@article{pham2024neuromax:, title={NeuroMax: Enhancing Neural Topic Modeling via Maximizing Mutual Information and Group Topic Regularization}, author={Duy-Tung Pham, Thien Trang Nguyen Vu, Tung Nguyen, Linh Ngo Van, Duc Anh Nguyen, Thien Huu Nguyen}, journal={arXiv preprint arXiv:2409.19749}, year={2024}, archivePrefix={arXiv}, eprint={2409.19749}, primaryClass={cs.CL} }
pham2024neuromax:
arxiv-663321
2409.19750
AstroMLab 2: AstroLLaMA-2-70B Model and Benchmarking Specialised LLMs for Astronomy
<|reference_start|>AstroMLab 2: AstroLLaMA-2-70B Model and Benchmarking Specialised LLMs for Astronomy: Continual pretraining of large language models on domain-specific data has been proposed to enhance performance on downstream tasks. In astronomy, the previous absence of astronomy-focused benchmarks has hindered objective evaluation of these specialized LLM models. Leveraging a recent initiative to curate high-quality astronomical MCQs, this study aims to quantitatively assess specialized LLMs in astronomy. We find that the previously released AstroLLaMA series, based on LLaMA-2-7B, underperforms compared to the base model. We demonstrate that this performance degradation can be partially mitigated by utilizing high-quality data for continual pretraining, such as summarized text from arXiv. Despite the observed catastrophic forgetting in smaller models, our results indicate that continual pretraining on the 70B model can yield significant improvements. However, the current supervised fine-tuning dataset still constrains the performance of instruct models. In conjunction with this study, we introduce a new set of models, AstroLLaMA-3-8B and AstroLLaMA-2-70B, building upon the previous AstroLLaMA series.<|reference_end|>
arxiv
@article{pan2024astromlab, title={AstroMLab 2: AstroLLaMA-2-70B Model and Benchmarking Specialised LLMs for Astronomy}, author={Rui Pan, Tuan Dung Nguyen, Hardik Arora, Alberto Accomazzi, Tirthankar Ghosal and Yuan-Sen Ting}, journal={arXiv preprint arXiv:2409.19750}, year={2024}, archivePrefix={arXiv}, eprint={2409.19750}, primaryClass={astro-ph.IM cs.CL} }
pan2024astromlab
arxiv-663322
2409.19751
Balancing the Scales: A Comprehensive Study on Tackling Class Imbalance in Binary Classification
<|reference_start|>Balancing the Scales: A Comprehensive Study on Tackling Class Imbalance in Binary Classification: Class imbalance in binary classification tasks remains a significant challenge in machine learning, often resulting in poor performance on minority classes. This study comprehensively evaluates three widely-used strategies for handling class imbalance: Synthetic Minority Over-sampling Technique (SMOTE), Class Weights tuning, and Decision Threshold Calibration. We compare these methods against a baseline scenario of no-intervention across 15 diverse machine learning models and 30 datasets from various domains, conducting a total of 9,000 experiments. Performance was primarily assessed using the F1-score, although our study also tracked results on 9 additional metrics, including F2-score, precision, recall, Brier-score, PR-AUC, and AUC. Our results indicate that all three strategies generally outperform the baseline, with Decision Threshold Calibration emerging as the most consistently effective technique. However, we observed substantial variability in the best-performing method across datasets, highlighting the importance of testing multiple approaches for specific problems. This study provides valuable insights for practitioners dealing with imbalanced datasets and emphasizes the need for dataset-specific analysis in evaluating class imbalance handling techniques.<|reference_end|>
arxiv
@article{abdelhamid2024balancing, title={Balancing the Scales: A Comprehensive Study on Tackling Class Imbalance in Binary Classification}, author={Mohamed Abdelhamid and Abhyuday Desai}, journal={arXiv preprint arXiv:2409.19751}, year={2024}, archivePrefix={arXiv}, eprint={2409.19751}, primaryClass={cs.LG cs.AI stat.ML} }
abdelhamid2024balancing
arxiv-663323
2409.19753
CoTKR: Chain-of-Thought Enhanced Knowledge Rewriting for Complex Knowledge Graph Question Answering
<|reference_start|>CoTKR: Chain-of-Thought Enhanced Knowledge Rewriting for Complex Knowledge Graph Question Answering: Recent studies have explored the use of Large Language Models (LLMs) with Retrieval Augmented Generation (RAG) for Knowledge Graph Question Answering (KGQA). They typically require rewriting retrieved subgraphs into natural language formats comprehensible to LLMs. However, when tackling complex questions, the knowledge rewritten by existing methods may include irrelevant information, omit crucial details, or fail to align with the question's semantics. To address them, we propose a novel rewriting method CoTKR, Chain-of-Thought Enhanced Knowledge Rewriting, for generating reasoning traces and corresponding knowledge in an interleaved manner, thereby mitigating the limitations of single-step knowledge rewriting. Additionally, to bridge the preference gap between the knowledge rewriter and the question answering (QA) model, we propose a training strategy PAQAF, Preference Alignment from Question Answering Feedback, for leveraging feedback from the QA model to further optimize the knowledge rewriter. We conduct experiments using various LLMs across several KGQA benchmarks. Experimental results demonstrate that, compared with previous knowledge rewriting methods, CoTKR generates the most beneficial knowledge representation for QA models, which significantly improves the performance of LLMs in KGQA.<|reference_end|>
arxiv
@article{wu2024cotkr:, title={CoTKR: Chain-of-Thought Enhanced Knowledge Rewriting for Complex Knowledge Graph Question Answering}, author={Yike Wu, Yi Huang, Nan Hu, Yuncheng Hua, Guilin Qi, Jiaoyan Chen, Jeff Z. Pan}, journal={arXiv preprint arXiv:2409.19753}, year={2024}, archivePrefix={arXiv}, eprint={2409.19753}, primaryClass={cs.CL} }
wu2024cotkr:
arxiv-663324
2409.19754
Offline Signature Verification Based on Feature Disentangling Aided Variational Autoencoder
<|reference_start|>Offline Signature Verification Based on Feature Disentangling Aided Variational Autoencoder: Offline handwritten signature verification systems are used to verify the identity of individuals, through recognizing their handwritten signature image as genuine signatures or forgeries. The main tasks of signature verification systems include extracting features from signature images and training a classifier for classification. The challenges of these tasks are twofold. First, genuine signatures and skilled forgeries are highly similar in their appearances, resulting in a small inter-class distance. Second, the instances of skilled forgeries are often unavailable, when signature verification models are being trained. To tackle these problems, this paper proposes a new signature verification method. It is the first model that employs a variational autoencoder (VAE) to extract features directly from signature images. To make the features more discriminative, it improves the traditional VAEs by introducing a new loss function for feature disentangling. In addition, it relies on SVM (Support Vector Machine) for classification according to the extracted features. Extensive experiments are conducted on two public datasets: MCYT-75 and GPDS-synthetic where the proposed method significantly outperformed $13$ representative offline signature verification methods. The achieved improvements across distinct datasets indicate the robustness and great potential of the developed system in real applications.<|reference_end|>
arxiv
@article{zhang2024offline, title={Offline Signature Verification Based on Feature Disentangling Aided Variational Autoencoder}, author={Hansong Zhang, Jiangjian Guo, Kun Li, Yang Zhang, Yimei Zhao}, journal={arXiv preprint arXiv:2409.19754}, year={2024}, archivePrefix={arXiv}, eprint={2409.19754}, primaryClass={cs.CV} }
zhang2024offline
arxiv-663325
2409.19756
Advances in Privacy Preserving Federated Learning to Realize a Truly Learning Healthcare System
<|reference_start|>Advances in Privacy Preserving Federated Learning to Realize a Truly Learning Healthcare System: The concept of a learning healthcare system (LHS) envisions a self-improving network where multimodal data from patient care are continuously analyzed to enhance future healthcare outcomes. However, realizing this vision faces significant challenges in data sharing and privacy protection. Privacy-Preserving Federated Learning (PPFL) is a transformative and promising approach that has the potential to address these challenges by enabling collaborative learning from decentralized data while safeguarding patient privacy. This paper proposes a vision for integrating PPFL into the healthcare ecosystem to achieve a truly LHS as defined by the Institute of Medicine (IOM) Roundtable.<|reference_end|>
arxiv
@article{madduri2024advances, title={Advances in Privacy Preserving Federated Learning to Realize a Truly Learning Healthcare System}, author={Ravi Madduri, Zilinghan Li, Tarak Nandi, Kibaek Kim, Minseok Ryu, Alex Rodriguez}, journal={arXiv preprint arXiv:2409.19756}, year={2024}, archivePrefix={arXiv}, eprint={2409.19756}, primaryClass={cs.CR cs.DC} }
madduri2024advances
arxiv-663326
2409.19757
Efficient Long-Form Speech Recognition for General Speech In-Context Learning
<|reference_start|>Efficient Long-Form Speech Recognition for General Speech In-Context Learning: We propose a novel approach to end-to-end automatic speech recognition (ASR) to achieve efficient speech in-context learning (SICL) for (i) long-form speech decoding, (ii) test-time speaker adaptation, and (iii) test-time contextual biasing. Specifically, we introduce an attention-based encoder-decoder (AED) model with SICL capability (referred to as SICL-AED), where the decoder utilizes an utterance-level cross-attention to integrate information from the encoder's output efficiently, and a document-level self-attention to learn contextual information. Evaluated on the benchmark TEDLIUM3 dataset, SICL-AED achieves an 8.64% relative word error rate (WER) reduction compared to a baseline utterance-level AED model by leveraging previously decoded outputs as in-context examples. It also demonstrates comparable performance to conventional long-form AED systems with significantly reduced runtime and memory complexity. Additionally, we introduce an in-context fine-tuning (ICFT) technique that further enhances SICL effectiveness during inference. Experiments on speaker adaptation and contextual biasing highlight the general speech in-context learning capabilities of our system, achieving effective results with provided contexts. Without specific fine-tuning, SICL-AED matches the performance of supervised AED baselines for speaker adaptation and improves entity recall by 64% for contextual biasing task.<|reference_end|>
arxiv
@article{yen2024efficient, title={Efficient Long-Form Speech Recognition for General Speech In-Context Learning}, author={Hao Yen and Shaoshi Ling and Guoli Ye}, journal={arXiv preprint arXiv:2409.19757}, year={2024}, archivePrefix={arXiv}, eprint={2409.19757}, primaryClass={eess.AS cs.SD} }
yen2024efficient
arxiv-663327
2409.19759
Balancing Cost and Effectiveness of Synthetic Data Generation Strategies for LLMs
<|reference_start|>Balancing Cost and Effectiveness of Synthetic Data Generation Strategies for LLMs: As large language models (LLMs) are applied to more use cases, creating high quality, task-specific datasets for fine-tuning becomes a bottleneck for model improvement. Using high quality human data has been the most common approach to unlock model performance, but is prohibitively expensive in many scenarios. Several alternative methods have also emerged, such as generating synthetic or hybrid data, but the effectiveness of these approaches remains unclear, especially in resource-constrained scenarios and tasks that are not easily verified. To investigate this, we group various synthetic data generation strategies into three representative categories -- Answer Augmentation, Question Rephrase and New Question -- and study the performance of student LLMs trained under various constraints, namely seed instruction set size and query budget. We demonstrate that these strategies are not equally effective across settings. Notably, the optimal data generation strategy depends strongly on the ratio between the available teacher query budget and the size of the seed instruction set. When this ratio is low, generating new answers to existing questions proves most effective, but as this ratio increases, generating new questions becomes optimal. Across all tasks, we find that the choice of augmentation method and other design choices matter substantially more in low to mid data regimes than in high data regimes. We provide a practical framework for selecting the appropriate augmentation method across settings, taking into account additional factors such as the scalability of each method, the importance of verifying synthetic data, and the use of different LLMs for synthetic data generation.<|reference_end|>
arxiv
@article{chan2024balancing, title={Balancing Cost and Effectiveness of Synthetic Data Generation Strategies for LLMs}, author={Yung-Chieh Chan, George Pu, Apaar Shanker, Parth Suresh, Penn Jenks, John Heyer, Sam Denton}, journal={arXiv preprint arXiv:2409.19759}, year={2024}, archivePrefix={arXiv}, eprint={2409.19759}, primaryClass={cs.CL cs.LG} }
chan2024balancing
arxiv-663328
2409.19764
Spiking Transformer with Spatial-Temporal Attention
<|reference_start|>Spiking Transformer with Spatial-Temporal Attention: Spiking Neural Networks (SNNs) present a compelling and energy-efficient alternative to traditional Artificial Neural Networks (ANNs) due to their sparse binary activation. Leveraging the success of the transformer architecture, the spiking transformer architecture is explored to scale up dataset size and performance. However, existing works only consider the spatial self-attention in spiking transformer, neglecting the inherent temporal context across the timesteps. In this work, we introduce Spiking Transformer with Spatial-Temporal Attention (STAtten), a simple and straightforward architecture designed to integrate spatial and temporal information in self-attention with negligible additional computational load. The STAtten divides the temporal or token index and calculates the self-attention in a cross-manner to effectively incorporate spatial-temporal information. We first verify our spatial-temporal attention mechanism's ability to capture long-term temporal dependencies using sequential datasets. Moreover, we validate our approach through extensive experiments on varied datasets, including CIFAR10/100, ImageNet, CIFAR10-DVS, and N-Caltech101. Notably, our cross-attention mechanism achieves an accuracy of 78.39% on the ImageNet dataset.<|reference_end|>
arxiv
@article{lee2024spiking, title={Spiking Transformer with Spatial-Temporal Attention}, author={Donghyun Lee, Yuhang Li, Youngeun Kim, Shiting Xiao, Priyadarshini Panda}, journal={arXiv preprint arXiv:2409.19764}, year={2024}, archivePrefix={arXiv}, eprint={2409.19764}, primaryClass={cs.NE} }
lee2024spiking
arxiv-663329
2409.19765
Parameter Estimation in Optimal Tolling for Traffic Networks Under the Markovian Traffic Equilibrium
<|reference_start|>Parameter Estimation in Optimal Tolling for Traffic Networks Under the Markovian Traffic Equilibrium: Tolling, or congestion pricing, has emerged as an effective tool for preventing gridlock in traffic systems. However, tolls are currently mostly designed on route-based traffic assignment models (TAM), which may be unrealistic and computationally expensive. Existing approaches also impractically assume that the central tolling authority can access latency function parameters that characterize the time required to traverse each network arc (edge), as well as the entropy parameter $\beta$ that characterizes commuters' stochastic arc-selection decisions on the network. To address these issues, this work formulates an online learning algorithm that simultaneously refines estimates of linear arc latency functions and entropy parameters in an arc-based TAM, while implementing tolls on each arc to induce equilibrium flows that minimize overall congestion on the network. We prove that our algorithm incurs regret upper bounded by $O(\sqrt{T} \ln(T) |I| \max\{|V| \ln(|I|/|V|), B \})$, where $T$ denotes the total iteration count, $|I|$ and $|V|$ denote the total number of arcs and nodes in the network, respectively, and $B$ describes the number of arcs required to construct an estimate of $\beta$ (usually $\ll |I|$). Finally, we present numerical results on simulated traffic networks that validate our theoretical contributions.<|reference_end|>
arxiv
@article{chiu2024parameter, title={Parameter Estimation in Optimal Tolling for Traffic Networks Under the Markovian Traffic Equilibrium}, author={Chih-Yuan Chiu, Shankar Sastry}, journal={arXiv preprint arXiv:2409.19765}, year={2024}, archivePrefix={arXiv}, eprint={2409.19765}, primaryClass={eess.SY cs.SY} }
chiu2024parameter
arxiv-663330
2409.19766
Towards Robust Extractive Question Answering Models: Rethinking the Training Methodology
<|reference_start|>Towards Robust Extractive Question Answering Models: Rethinking the Training Methodology: This paper proposes a novel training method to improve the robustness of Extractive Question Answering (EQA) models. Previous research has shown that existing models, when trained on EQA datasets that include unanswerable questions, demonstrate a significant lack of robustness against distribution shifts and adversarial attacks. Despite this, the inclusion of unanswerable questions in EQA training datasets is essential for ensuring real-world reliability. Our proposed training method includes a novel loss function for the EQA problem and challenges an implicit assumption present in numerous EQA datasets. Models trained with our method maintain in-domain performance while achieving a notable improvement on out-of-domain datasets. This results in an overall F1 score improvement of 5.7 across all testing sets. Furthermore, our models exhibit significantly enhanced robustness against two types of adversarial attacks, with a performance decrease of only about a third compared to the default models.<|reference_end|>
arxiv
@article{tran2024towards, title={Towards Robust Extractive Question Answering Models: Rethinking the Training Methodology}, author={Son Quoc Tran, Matt Kretchmar}, journal={arXiv preprint arXiv:2409.19766}, year={2024}, archivePrefix={arXiv}, eprint={2409.19766}, primaryClass={cs.CL cs.AI} }
tran2024towards
arxiv-663331
2409.19769
Adaptive Event-triggered Reinforcement Learning Control for Complex Nonlinear Systems
<|reference_start|>Adaptive Event-triggered Reinforcement Learning Control for Complex Nonlinear Systems: In this paper, we propose an adaptive event-triggered reinforcement learning control for continuous-time nonlinear systems, subject to bounded uncertainties, characterized by complex interactions. Specifically, the proposed method is capable of jointly learning both the control policy and the communication policy, thereby reducing the number of parameters and the computational overhead compared with learning them separately or learning only one of them. By augmenting the state space with accrued rewards that represent the performance over the entire trajectory, we show that accurate and efficient determination of triggering conditions is possible without the need to explicitly learn triggering conditions, thereby leading to an adaptive non-stationary policy. Finally, we provide several numerical examples to demonstrate the effectiveness of the proposed approach.<|reference_end|>
arxiv
@article{siddique2024adaptive, title={Adaptive Event-triggered Reinforcement Learning Control for Complex Nonlinear Systems}, author={Umer Siddique, Abhinav Sinha, and Yongcan Cao}, journal={arXiv preprint arXiv:2409.19769}, year={2024}, archivePrefix={arXiv}, eprint={2409.19769}, primaryClass={cs.LG cs.AI cs.SY eess.SY} }
siddique2024adaptive
arxiv-663332
2409.19770
GelSlim 4.0: Focusing on Touch and Reproducibility
<|reference_start|>GelSlim 4.0: Focusing on Touch and Reproducibility: Tactile sensing provides robots with rich feedback during manipulation, enabling a host of perception and control capabilities. Here, we present a new open-source, vision-based tactile sensor designed to promote reproducibility and accessibility across research and hobbyist communities. Building upon the GelSlim 3.0 sensor, our design features two key improvements: a simplified, modifiable finger structure and easily manufacturable lenses. To complement the hardware, we provide an open-source perception library that includes depth and shear field estimation algorithms to enable in-hand pose estimation, slip detection, and other manipulation tasks. Our sensor is accompanied by comprehensive manufacturing documentation, ensuring the design can be readily produced by users with varying levels of expertise. We validate the sensor's reproducibility through extensive human usability testing. For documentation, code, and data, please visit the project website: https://www.mmintlab.com/research/gelslim-4-0/<|reference_end|>
arxiv
@article{sipos2024gelslim, title={GelSlim 4.0: Focusing on Touch and Reproducibility}, author={Andrea Sipos, William van den Bogert, Nima Fazeli}, journal={arXiv preprint arXiv:2409.19770}, year={2024}, archivePrefix={arXiv}, eprint={2409.19770}, primaryClass={cs.RO} }
sipos2024gelslim
arxiv-663333
2409.19771
Learning Wheelchair Tennis Navigation from Broadcast Videos with Domain Knowledge Transfer and Diffusion Motion Planning
<|reference_start|>Learning Wheelchair Tennis Navigation from Broadcast Videos with Domain Knowledge Transfer and Diffusion Motion Planning: In this paper, we propose a novel and generalizable zero-shot knowledge transfer framework that distills expert sports navigation strategies from web videos into robotic systems with adversarial constraints and out-of-distribution image trajectories. Our pipeline enables diffusion-based imitation learning by reconstructing the full 3D task space from multiple partial views, warping it into 2D image space, closing the planning loop within this 2D space, and transferring constrained motions of interest back to the task space. Additionally, we demonstrate that the learned policy can serve as a local planner in conjunction with position control. We apply this framework to the wheelchair tennis navigation problem to guide the wheelchair into the ball-hitting region. Our pipeline achieves a navigation success rate of 97.67% in reaching real-world recorded tennis ball trajectories with a physical robot wheelchair, and achieves a success rate of 68.49% in a real-world, real-time experiment on a full-sized tennis court.<|reference_end|>
arxiv
@article{wu2024learning, title={Learning Wheelchair Tennis Navigation from Broadcast Videos with Domain Knowledge Transfer and Diffusion Motion Planning}, author={Zixuan Wu, Zulfiqar Zaidi, Adithya Patil, Qingyu Xiao and Matthew Gombolay}, journal={arXiv preprint arXiv:2409.19771}, year={2024}, archivePrefix={arXiv}, eprint={2409.19771}, primaryClass={cs.RO} }
wu2024learning
arxiv-663334
2409.19772
PPLNs: Parametric Piecewise Linear Networks for Event-Based Temporal Modeling and Beyond
<|reference_start|>PPLNs: Parametric Piecewise Linear Networks for Event-Based Temporal Modeling and Beyond: We present Parametric Piecewise Linear Networks (PPLNs) for temporal vision inference. Motivated by the neuromorphic principles that regulate biological neural behaviors, PPLNs are ideal for processing data captured by event cameras, which are built to simulate neural activities in the human retina. We discuss how to represent the membrane potential of an artificial neuron by a parametric piecewise linear function with learnable coefficients. This design echoes the idea of building deep models from learnable parametric functions recently popularized by Kolmogorov-Arnold Networks (KANs). Experiments demonstrate the state-of-the-art performance of PPLNs in event-based and image-based vision applications, including steering prediction, human pose estimation, and motion deblurring. The source code of our implementation is available at https://github.com/chensong1995/PPLN.<|reference_end|>
arxiv
@article{song2024pplns:, title={PPLNs: Parametric Piecewise Linear Networks for Event-Based Temporal Modeling and Beyond}, author={Chen Song, Zhenxiao Liang, Bo Sun, Qixing Huang}, journal={arXiv preprint arXiv:2409.19772}, year={2024}, archivePrefix={arXiv}, eprint={2409.19772}, primaryClass={cs.CV} }
song2024pplns:
arxiv-663335
2409.19773
The problem of computing a $2$-T-connected spanning subgraph with minimum number of edges in directed graphs
<|reference_start|>The problem of computing a $2$-T-connected spanning subgraph with minimum number of edges in directed graphs: Let $G=(V,E)$ be a strongly connected graph with $|V|\geq 3$. For $T\subseteq V$, the strongly connected graph $G$ is $2$-T-connected if $G$ is $2$-edge-connected and for each vertex $w$ in $T$, $w$ is not a strong articulation point. This concept generalizes the concept of $2$-vertex connectivity when $T$ contains all the vertices in $G$. This concept also generalizes the concept of $2$-edge connectivity when $|T|=0$. The concept of $2$-T-connectivity was introduced by Durand de Gevigney and Szigeti in $2018$. In this paper, we prove that there is a polynomial-time 4-approximation algorithm for the following problem: given a $2$-T-connected graph $G=(V,E)$, identify a subset $E^{2T} \subseteq E$ of minimum cardinality such that $(V,E^{2T})$ is $2$-T-connected.<|reference_end|>
arxiv
@article{jaberi2024the, title={The problem of computing a $2$-T-connected spanning subgraph with minimum number of edges in directed graphs}, author={Raed Jaberi, Reham Mansour}, journal={arXiv preprint arXiv:2409.19773}, year={2024}, archivePrefix={arXiv}, eprint={2409.19773}, primaryClass={cs.DS} }
jaberi2024the
arxiv-663336
2409.19774
Crafting Distribution Shifts for Validation and Training in Single Source Domain Generalization
<|reference_start|>Crafting Distribution Shifts for Validation and Training in Single Source Domain Generalization: Single-source domain generalization attempts to learn a model on a source domain and deploy it to unseen target domains. Limiting access only to source domain data imposes two key challenges - how to train a model that can generalize and how to verify that it does. The standard practice of validation on the training distribution does not accurately reflect the model's generalization ability, while validation on the test distribution is a malpractice to avoid. In this work, we construct an independent validation set by transforming source domain images with a comprehensive list of augmentations, covering a broad spectrum of potential distribution shifts in target domains. We demonstrate a high correlation between validation and test performance for multiple methods and across various datasets. The proposed validation achieves a relative accuracy improvement over the standard validation equal to 15.4% or 1.6% when used for method selection or learning rate tuning, respectively. Furthermore, we introduce a novel family of methods that increase the shape bias through enhanced edge maps. To benefit from the augmentations during training and preserve the independence of the validation set, a k-fold validation process is designed to separate the augmentation types used in training and validation. The method that achieves the best performance on the augmented validation is selected from the proposed family. It achieves state-of-the-art performance on various standard benchmarks. Code at: https://github.com/NikosEfth/crafting-shifts<|reference_end|>
arxiv
@article{efthymiadis2024crafting, title={Crafting Distribution Shifts for Validation and Training in Single Source Domain Generalization}, author={Nikos Efthymiadis, Giorgos Tolias, Ond\v{r}ej Chum}, journal={arXiv preprint arXiv:2409.19774}, year={2024}, archivePrefix={arXiv}, eprint={2409.19774}, primaryClass={cs.CV} }
efthymiadis2024crafting
arxiv-663337
2409.19777
Automatic debiasing of neural networks via moment-constrained learning
<|reference_start|>Automatic debiasing of neural networks via moment-constrained learning: Causal and nonparametric estimands in economics and biostatistics can often be viewed as the mean of a linear functional applied to an unknown outcome regression function. Naively learning the regression function and taking a sample mean of the target functional results in biased estimators, and a rich debiasing literature has developed where one additionally learns the so-called Riesz representer (RR) of the target estimand (targeted learning, double ML, automatic debiasing etc.). Learning the RR via its derived functional form can be challenging, e.g. due to extreme inverse probability weights or the need to learn conditional density functions. Such challenges have motivated recent advances in automatic debiasing (AD), where the RR is learned directly via minimization of a bespoke loss. We propose moment-constrained learning as a new RR learning approach that addresses some shortcomings in AD, constraining the predicted moments and improving the robustness of RR estimates to optimization hyperparameters. Though our approach is not tied to a particular class of learner, we illustrate it using neural networks, and evaluate on the problems of average treatment/derivative effect estimation using semi-synthetic data. Our numerical experiments show improved performance versus state of the art benchmarks.<|reference_end|>
arxiv
@article{hines2024automatic, title={Automatic debiasing of neural networks via moment-constrained learning}, author={Christian L. Hines, Oliver J. Hines}, journal={arXiv preprint arXiv:2409.19777}, year={2024}, archivePrefix={arXiv}, eprint={2409.19777}, primaryClass={stat.ML cs.LG stat.ME} }
hines2024automatic
arxiv-663338
2409.19778
Lessons Learned from Developing a Human-Centered Guide Dog Robot for Mobility Assistance
<|reference_start|>Lessons Learned from Developing a Human-Centered Guide Dog Robot for Mobility Assistance: While guide dogs offer essential mobility assistance, their high cost, limited availability, and care requirements make them inaccessible to most blind or low vision (BLV) individuals. Recent advances in quadruped robots provide a scalable solution for mobility assistance, but many current designs fail to meet real-world needs due to a lack of understanding of handler and guide dog interactions. In this paper, we share lessons learned from developing a human-centered guide dog robot, addressing challenges such as optimal hardware design, robust navigation, and informative scene description for user adoption. By conducting semi-structured interviews and human experiments with BLV individuals, guide-dog handlers, and trainers, we identified key design principles to improve safety, trust, and usability in robotic mobility aids. Our findings lay the building blocks for future development of guide dog robots, ultimately enhancing independence and quality of life for BLV individuals.<|reference_end|>
arxiv
@article{hwang2024lessons, title={Lessons Learned from Developing a Human-Centered Guide Dog Robot for Mobility Assistance}, author={Hochul Hwang, Ken Suzuki, Nicholas A Giudice, Joydeep Biswas, Sunghoon Ivan Lee, Donghyun Kim}, journal={arXiv preprint arXiv:2409.19778}, year={2024}, archivePrefix={arXiv}, eprint={2409.19778}, primaryClass={cs.RO cs.HC} }
hwang2024lessons
arxiv-663339
2409.19782
Guitar Pickups I: Analysis of the Effect of Winding and Wire Gauge on Single Coil Electric Guitar Pickups
<|reference_start|>Guitar Pickups I: Analysis of the Effect of Winding and Wire Gauge on Single Coil Electric Guitar Pickups: Guitar Pickups have been in production for nearly 100 years, and the question of how exactly one pickup is tonally superior to another is still subject to a high level of debate. This paper is the first in a set demystifying the production of guitar pickups and introducing a level of scientific procedure to the conversation. Previous studies have analysed commercial off-the-shelf pickups, but these differ from each other in multiple ways. The novelty of this study is that dedicated experimental pickups were created, which vary only one parameter at a time in order to allow scientific study. The most fundamental qualities of a single-coil pickup are investigated: in this paper, number of turns and gauge of wire. A set of single-coil stratocaster-style pickups were created, with the number of turns of wire varied across the commercially available range (5000-12000 turns), and this was done for two widely used wire gauges (42 and 44 AWG). A frequency response analyser was used to obtain impedance across a frequency range. It is shown that resonant frequency decreases exponentially with number of turns, while the magnitude of the resonant peak increases linearly with number of turns. The wire gauge used has a significant impact on both parameters, with the thicker wire giving higher resonant frequencies and higher magnitudes than the thinner wire for the same number of turns. These impact the sound associated with the pickup: the resonant frequency is linked to the perceived tone of the pickup, and the magnitude to the output amplitude and hence 'gain.' Increasing the number of turns will give a higher output pickup with a darker tone, and thicker wire gives louder outputs and brighter tones - consistent with what can be observed in commercial pickups.<|reference_end|>
arxiv
@article{batchelor2024guitar, title={Guitar Pickups I: Analysis of the Effect of Winding and Wire Gauge on Single Coil Electric Guitar Pickups}, author={Charles Batchelor, Jack Gooding, William Marriott, Nikola Chalashkanov, Nick Tucker, Rebecca Margetts}, journal={arXiv preprint arXiv:2409.19782}, year={2024}, archivePrefix={arXiv}, eprint={2409.19782}, primaryClass={eess.AS cs.SD} }
batchelor2024guitar
arxiv-663340
2409.19784
Making Quickhull More Like Quicksort: A Simple Randomized Output-Sensitive Convex Hull Algorithm
<|reference_start|>Making Quickhull More Like Quicksort: A Simple Randomized Output-Sensitive Convex Hull Algorithm: In this paper, we present Ray-shooting Quickhull, which is a simple, randomized, output-sensitive version of the Quickhull algorithm for constructing the convex hull of a set of n points in the plane. We show that the randomized Ray-shooting Quickhull algorithm runs in O(n log h) expected time, where h is the number of points on the boundary of the convex hull. Keeping with the spirit of the original Quickhull algorithm, our algorithm is quite simple and is, in fact, closer in spirit to the well-known randomized Quicksort algorithm. Unlike the original Quickhull algorithm, however, which can run in $\Theta(n^2)$ time for some input distributions, the expected performance bounds for the randomized Ray-shooting Quickhull algorithm match or improve the performance bounds of more complicated algorithms. Importantly, the expectation in our output-sensitive performance bound does not depend on assumptions about the distribution of input points. Still, we show that, like the deterministic Quickhull algorithm, our randomized Ray-shooting Quickhull algorithm runs in O(n) expected time for n points chosen uniformly at random from a bounded convex region. We also provide experimental evidence that the randomized Ray-shooting Quickhull algorithm is on par or faster than deterministic Quickhull in practice, depending on the input distribution.<|reference_end|>
arxiv
@article{goodrich2024making, title={Making Quickhull More Like Quicksort: A Simple Randomized Output-Sensitive Convex Hull Algorithm}, author={Michael T. Goodrich and Ryuto Kitagawa}, journal={arXiv preprint arXiv:2409.19784}, year={2024}, archivePrefix={arXiv}, eprint={2409.19784}, primaryClass={cs.CG cs.CC cs.DS} }
goodrich2024making
arxiv-663341
2409.19786
4D Metric-Semantic Mapping for Persistent Orchard Monitoring: Method and Dataset
<|reference_start|>4D Metric-Semantic Mapping for Persistent Orchard Monitoring: Method and Dataset: Automated persistent and fine-grained monitoring of orchards at the individual tree or fruit level helps maximize crop yield and optimize resources such as water, fertilizers, and pesticides while preventing agricultural waste. Towards this goal, we present a 4D spatio-temporal metric-semantic mapping method that fuses data from multiple sensors, including LiDAR, RGB camera, and IMU, to monitor the fruits in an orchard across their growth season. A LiDAR-RGB fusion module is designed for 3D fruit tracking and localization, which first segments fruits using a deep neural network and then tracks them using the Hungarian Assignment algorithm. Additionally, the 4D data association module aligns data from different growth stages into a common reference frame and tracks fruits spatio-temporally, providing information such as fruit counts, sizes, and positions. We demonstrate our method's accuracy in 4D metric-semantic mapping using data collected from a real orchard under natural, uncontrolled conditions with seasonal variations. We achieve a 3.1 percent error in total fruit count estimation for over 1790 fruits across 60 apple trees, along with accurate size estimation results with a mean error of 1.1 cm. The datasets, consisting of LiDAR, RGB, and IMU data of five fruit species captured across their growth seasons, along with corresponding ground truth data, will be made publicly available at: https://4d-metric-semantic-mapping.org/<|reference_end|>
arxiv
@article{lei20244d, title={4D Metric-Semantic Mapping for Persistent Orchard Monitoring: Method and Dataset}, author={Jiuzhou Lei, Ankit Prabhu, Xu Liu, Fernando Cladera, Mehrad Mortazavi, Reza Ehsani, Pratik Chaudhari, Vijay Kumar}, journal={arXiv preprint arXiv:2409.19786}, year={2024}, archivePrefix={arXiv}, eprint={2409.19786}, primaryClass={cs.RO} }
lei20244d
arxiv-663342
2409.19788
Adversarial Examples for DNA Classification
<|reference_start|>Adversarial Examples for DNA Classification: Pre-trained language models such as DNABERT2 and Nucleotide Transformer, which are trained on DNA sequences, have shown promising performance in DNA sequence classification tasks. The classification ability of these models stems from language models trained on vast amounts of DNA sequence samples, followed by fine-tuning with relatively smaller classification datasets. However, these text-based systems are not robust enough and can be vulnerable to adversarial examples. While adversarial attacks have been widely studied in text classification, there is limited research in DNA sequence classification. In this paper, we adapt commonly used attack algorithms in text classification for DNA sequence classification. We evaluated the impact of various attack methods on DNA sequence classification at the character, word, and sentence levels. Our findings indicate that actual DNA language model sequence classifiers are vulnerable to these attacks.<|reference_end|>
arxiv
@article{yoo2024adversarial, title={Adversarial Examples for DNA Classification}, author={Hyunwoo Yoo}, journal={arXiv preprint arXiv:2409.19788}, year={2024}, archivePrefix={arXiv}, eprint={2409.19788}, primaryClass={cs.CL} }
yoo2024adversarial
arxiv-663343
2409.19790
Analysis on Riemann Hypothesis with Cross Entropy Optimization and Reasoning
<|reference_start|>Analysis on Riemann Hypothesis with Cross Entropy Optimization and Reasoning: In this paper, we present a novel framework for the analysis of Riemann Hypothesis [27], which is composed of three key components: a) probabilistic modeling with cross entropy optimization and reasoning; b) the application of the law of large numbers; c) the application of mathematical inductions. The analysis is mainly conducted by virtue of probabilistic modeling of cross entropy optimization and reasoning with rare event simulation techniques. The application of the law of large numbers [2, 3, 6] and the application of mathematical inductions make the analysis of Riemann Hypothesis self-contained and complete to make sure that the whole complex plane is covered as conjectured in Riemann Hypothesis. We also discuss the method of enhanced top-p sampling with large language models (LLMs) for reasoning, where next token prediction is not just based on the estimated probabilities of each possible token in the current round but also based on accumulated path probabilities among multiple top-k chain of thoughts (CoTs) paths. The probabilistic modeling of cross entropy optimization and reasoning may suit well with the analysis of Riemann Hypothesis as Riemann Zeta functions are inherently dealing with the sums of infinite components of a complex number series. We hope that our analysis in this paper could shed some light on some of the insights of Riemann Hypothesis. The framework and techniques presented in this paper, coupled with recent developments with chain of thought (CoT) or diagram of thought (DoT) reasoning in large language models (LLMs) with reinforcement learning (RL) [1, 7, 18, 21, 24, 34, 39-41], could pave the way for eventual proof of Riemann Hypothesis [27].<|reference_end|>
arxiv
@article{li2024analysis, title={Analysis on Riemann Hypothesis with Cross Entropy Optimization and Reasoning}, author={Kevin Li, Fulu Li}, journal={arXiv preprint arXiv:2409.19790}, year={2024}, archivePrefix={arXiv}, eprint={2409.19790}, primaryClass={cs.AI cs.CE} }
li2024analysis
arxiv-663344
2409.19791
Gradient descent with adaptive stepsize converges (nearly) linearly under fourth-order growth
<|reference_start|>Gradient descent with adaptive stepsize converges (nearly) linearly under fourth-order growth: A prevalent belief among optimization specialists is that linear convergence of gradient descent is contingent on the function growing quadratically away from its minimizers. In this work, we argue that this belief is inaccurate. We show that gradient descent with an adaptive stepsize converges at a local (nearly) linear rate on any smooth function that merely exhibits fourth-order growth away from its minimizer. The adaptive stepsize we propose arises from an intriguing decomposition theorem: any such function admits a smooth manifold around the optimal solution -- which we call the ravine -- so that the function grows at least quadratically away from the ravine and has constant order growth along it. The ravine allows one to interlace many short gradient steps with a single long Polyak gradient step, which together ensure rapid convergence to the minimizer. We illustrate the theory and algorithm on the problems of matrix sensing and factorization and learning a single neuron in the overparameterized regime.<|reference_end|>
arxiv
@article{davis2024gradient, title={Gradient descent with adaptive stepsize converges (nearly) linearly under fourth-order growth}, author={Damek Davis, Dmitriy Drusvyatskiy, Liwei Jiang}, journal={arXiv preprint arXiv:2409.19791}, year={2024}, archivePrefix={arXiv}, eprint={2409.19791}, primaryClass={math.OC cs.LG} }
davis2024gradient
arxiv-663345
2409.19792
CyclicSim: Comprehensive Evaluation of Cyclic Shapers in Time-Sensitive Networking
<|reference_start|>CyclicSim: Comprehensive Evaluation of Cyclic Shapers in Time-Sensitive Networking: Cyclic Queuing and Forwarding (CQF) is a key Time-Sensitive Networking (TSN) shaping mechanism that ensures bounded latency using a simple gate control list (GCL). Recently, variants of CQF, including Cycle Specific Queuing and Forwarding (CSQF) and Multi Cyclic Queuing and Forwarding (MCQF), have emerged. While popular TSN mechanisms such as the Time-Aware Shaper (TAS), Asynchronous Traffic Shaper (ATS), Credit-Based Shaper (CBS), and Strict Priority (SP) have been extensively studied, cyclic shapers have not been thoroughly evaluated. This paper presents a comprehensive analysis of CQF, CSQF, and MCQF, providing insights into their performance. We quantify delays through simulations and quantitative analysis on both synthetic and realistic networks. For the first time, we introduce an open-source OMNeT++ and INET4.4 based framework capable of modeling all three cyclic shaper variants. Our tool facilitates the validation of new algorithms and serves as a benchmark for cyclic shapers. Our evaluations reveal that MCQF supports diverse timing requirements, whereas CSQF, with its additional queue, often results in larger delays and jitter for some TT flows compared to CQF. Additionally, CSQF does not demonstrate significant advantages in TSN networks where propagation delays are less critical than in wide-area networks (WANs).<|reference_end|>
arxiv
@article{debnath2024cyclicsim:, title={CyclicSim: Comprehensive Evaluation of Cyclic Shapers in Time-Sensitive Networking}, author={Rubi Debnath, Luxi Zhao, Mohammadreza Barzegaran, and Sebastian Steinhorst}, journal={arXiv preprint arXiv:2409.19792}, year={2024}, archivePrefix={arXiv}, eprint={2409.19792}, primaryClass={cs.NI} }
debnath2024cyclicsim:
arxiv-663346
2409.19795
The Duke Humanoid: Design and Control For Energy Efficient Bipedal Locomotion Using Passive Dynamics
<|reference_start|>The Duke Humanoid: Design and Control For Energy Efficient Bipedal Locomotion Using Passive Dynamics: We present the Duke Humanoid, an open-source 10-degrees-of-freedom humanoid, as an extensible platform for locomotion research. The design mimics human physiology, with minimized leg distances and symmetrical body alignment in the frontal plane to maintain static balance with straight knees. We develop a reinforcement learning policy that can be deployed zero-shot on the hardware for velocity-tracking walking tasks. Additionally, to enhance energy efficiency in locomotion, we propose an end-to-end reinforcement learning algorithm that encourages the robot to leverage passive dynamics. Our experiment results show that our passive policy reduces the cost of transport by up to $50\%$ in simulation and $31\%$ in real-world testing. Our website is http://generalroboticslab.com/DukeHumanoidv1/ .<|reference_end|>
arxiv
@article{xia2024the, title={The Duke Humanoid: Design and Control For Energy Efficient Bipedal Locomotion Using Passive Dynamics}, author={Boxi Xia, Bokuan Li, Jacob Lee, Michael Scutari, Boyuan Chen}, journal={arXiv preprint arXiv:2409.19795}, year={2024}, archivePrefix={arXiv}, eprint={2409.19795}, primaryClass={cs.RO} }
xia2024the
arxiv-663347
2409.19796
Black-Box Segmentation of Electronic Medical Records
<|reference_start|>Black-Box Segmentation of Electronic Medical Records: Electronic medical records (EMRs) contain the majority of patients' healthcare details. It is an abundant resource for developing an automatic healthcare system. Most of the natural language processing (NLP) studies on EMR processing, such as concept extraction, are adversely affected by the inaccurate segmentation of EMR sections. At the same time, not enough attention has been given to the accurate sectioning of EMRs. The information that may occur in section structures is unvalued. This work focuses on the segmentation of EMRs and proposes a black-box segmentation method using a simple sentence embedding model and neural network, along with a proper training method. To achieve universal adaptivity, we train our model on the dataset with different section headings formats. We compare several advanced deep learning-based NLP methods, and our method achieves the best segmentation accuracies (above 98%) on various test data with a proper training corpus.<|reference_end|>
arxiv
@article{yuan2024black-box, title={Black-Box Segmentation of Electronic Medical Records}, author={Hongyi Yuan, Sheng Yu}, journal={arXiv preprint arXiv:2409.19796}, year={2024}, archivePrefix={arXiv}, eprint={2409.19796}, primaryClass={cs.CL} }
yuan2024black-box
arxiv-663348
2409.19798
Membership Inference Attacks Cannot Prove that a Model Was Trained On Your Data
<|reference_start|>Membership Inference Attacks Cannot Prove that a Model Was Trained On Your Data: We consider the problem of a training data proof, where a data creator or owner wants to demonstrate to a third party that some machine learning model was trained on their data. Training data proofs play a key role in recent lawsuits against foundation models trained on web-scale data. Many prior works suggest to instantiate training data proofs using membership inference attacks. We argue that this approach is fundamentally unsound: to provide convincing evidence, the data creator needs to demonstrate that their attack has a low false positive rate, i.e., that the attack's output is unlikely under the null hypothesis that the model was not trained on the target data. Yet, sampling from this null hypothesis is impossible, as we do not know the exact contents of the training set, nor can we (efficiently) retrain a large foundation model. We conclude by offering two paths forward, by showing that data extraction attacks and membership inference on special canary data can be used to create sound training data proofs.<|reference_end|>
arxiv
@article{zhang2024membership, title={Membership Inference Attacks Cannot Prove that a Model Was Trained On Your Data}, author={Jie Zhang, Debeshee Das, Gautam Kamath, Florian Tram\`er}, journal={arXiv preprint arXiv:2409.19798}, year={2024}, archivePrefix={arXiv}, eprint={2409.19798}, primaryClass={cs.LG cs.CR} }
zhang2024membership
arxiv-663349
2409.19800
Differentially Private Bilevel Optimization
<|reference_start|>Differentially Private Bilevel Optimization: We present differentially private (DP) algorithms for bilevel optimization, a problem class that received significant attention lately in various machine learning applications. These are the first DP algorithms for this task that are able to provide any desired privacy, while also avoiding Hessian computations which are prohibitive in large-scale settings. Under the well-studied setting in which the upper-level is not necessarily convex and the lower-level problem is strongly-convex, our proposed gradient-based $(\epsilon,\delta)$-DP algorithm returns a point with hypergradient norm at most $\widetilde{\mathcal{O}}\left((\sqrt{d_\mathrm{up}}/\epsilon n)^{1/2}+(\sqrt{d_\mathrm{low}}/\epsilon n)^{1/3}\right)$ where $n$ is the dataset size, and $d_\mathrm{up}/d_\mathrm{low}$ are the upper/lower level dimensions. Our analysis covers constrained and unconstrained problems alike, accounts for mini-batch gradients, and applies to both empirical and population losses.<|reference_end|>
arxiv
@article{kornowski2024differentially, title={Differentially Private Bilevel Optimization}, author={Guy Kornowski}, journal={arXiv preprint arXiv:2409.19800}, year={2024}, archivePrefix={arXiv}, eprint={2409.19800}, primaryClass={cs.LG cs.CR math.OC} }
kornowski2024differentially
arxiv-663350
2409.19801
CRScore: Grounding Automated Evaluation of Code Review Comments in Code Claims and Smells
<|reference_start|>CRScore: Grounding Automated Evaluation of Code Review Comments in Code Claims and Smells: The task of automated code review has recently gained a lot of attention from the machine learning community. However, current review comment evaluation metrics rely on comparisons with a human-written reference for a given code change (also called a diff), even though code review is a one-to-many problem like generation and summarization with many "valid reviews" for a diff. To tackle these issues we develop a CRScore - a reference-free metric to measure dimensions of review quality like conciseness, comprehensiveness, and relevance. We design CRScore to evaluate reviews in a way that is grounded in claims and potential issues detected in the code by LLMs and static analyzers. We demonstrate that CRScore can produce valid, fine-grained scores of review quality that have the greatest alignment with human judgment (0.54 Spearman correlation) and are more sensitive than reference-based metrics. We also release a corpus of 2.6k human-annotated review quality scores for machine-generated and GitHub review comments to support the development of automated metrics.<|reference_end|>
arxiv
@article{naik2024crscore:, title={CRScore: Grounding Automated Evaluation of Code Review Comments in Code Claims and Smells}, author={Atharva Naik, Marcus Alenius, Daniel Fried, Carolyn Rose}, journal={arXiv preprint arXiv:2409.19801}, year={2024}, archivePrefix={arXiv}, eprint={2409.19801}, primaryClass={cs.SE cs.AI cs.CL} }
naik2024crscore:
arxiv-663351
2409.19804
Does RAG Introduce Unfairness in LLMs? Evaluating Fairness in Retrieval-Augmented Generation Systems
<|reference_start|>Does RAG Introduce Unfairness in LLMs? Evaluating Fairness in Retrieval-Augmented Generation Systems: RAG (Retrieval-Augmented Generation) methods have recently gained significant attention for their enhanced ability to integrate external knowledge sources in open-domain question answering (QA) tasks. However, it remains unclear how these models address fairness concerns, particularly with respect to sensitive attributes such as gender, geographic location, and other demographic factors. First, as language models evolve to prioritize utility, like improving exact match accuracy, fairness may have been largely overlooked. Second, RAG methods are complex pipelines, making it hard to identify and address biases, as each component is optimized for different goals. In this paper, we aim to empirically evaluate fairness in several RAG methods. We propose a fairness evaluation framework tailored to RAG methods, using scenario-based questions and analyzing disparities across demographic attributes. The experimental results indicate that, despite recent advances in utility-driven optimization, fairness issues persist in both the retrieval and generation stages, highlighting the need for more targeted fairness interventions within RAG pipelines. We will release our dataset and code upon acceptance of the paper.<|reference_end|>
arxiv
@article{wu2024does, title={Does RAG Introduce Unfairness in LLMs? Evaluating Fairness in Retrieval-Augmented Generation Systems}, author={Xuyang Wu, Shuowei Li, Hsin-Tai Wu, Zhiqiang Tao and Yi Fang}, journal={arXiv preprint arXiv:2409.19804}, year={2024}, archivePrefix={arXiv}, eprint={2409.19804}, primaryClass={cs.CL} }
wu2024does
arxiv-663352
2409.19806
PALM: Few-Shot Prompt Learning for Audio Language Models
<|reference_start|>PALM: Few-Shot Prompt Learning for Audio Language Models: Audio-Language Models (ALMs) have recently achieved remarkable success in zero-shot audio recognition tasks, which match features of audio waveforms with class-specific text prompt features, inspired by advancements in Vision-Language Models (VLMs). Given the sensitivity of zero-shot performance to the choice of hand-crafted text prompts, many prompt learning techniques have been developed for VLMs. We explore the efficacy of these approaches in ALMs and propose a novel method, Prompt Learning in Audio Language Models (PALM), which optimizes the feature space of the text encoder branch. Unlike existing methods that work in the input space, our approach results in greater training efficiency. We demonstrate the effectiveness of our approach on 11 audio recognition datasets, encompassing a variety of speech-processing tasks, and compare the results with three baselines in a few-shot learning setup. Our method is either on par with or outperforms other approaches while being computationally less demanding. Code is available at https://asif-hanif.github.io/palm/<|reference_end|>
arxiv
@article{hanif2024palm:, title={PALM: Few-Shot Prompt Learning for Audio Language Models}, author={Asif Hanif, Maha Tufail Agro, Mohammad Areeb Qazi, Hanan Aldarmaki}, journal={arXiv preprint arXiv:2409.19806}, year={2024}, archivePrefix={arXiv}, eprint={2409.19806}, primaryClass={cs.SD cs.AI eess.AS} }
hanif2024palm:
arxiv-663353
2409.19807
Energy Saving and Traffic Steering Use Case and Testing by O-RAN RIC xApp/rApp Multi-vendor Interoperability
<|reference_start|>Energy Saving and Traffic Steering Use Case and Testing by O-RAN RIC xApp/rApp Multi-vendor Interoperability: This paper discusses the use case of energy saving and traffic steering in O-RAN, the mechanism of multi-vendor interoperability to make it work and depict its test methodology.<|reference_end|>
arxiv
@article{akman2024energy, title={Energy Saving and Traffic Steering Use Case and Testing by O-RAN RIC xApp/rApp Multi-vendor Interoperability}, author={Arda Akman, Peyman Tehrani, Pablo Oliver, Marcin Hoffmann, Michael Jones, Jia Li}, journal={arXiv preprint arXiv:2409.19807}, year={2024}, archivePrefix={arXiv}, eprint={2409.19807}, primaryClass={eess.SY cs.SY} }
akman2024energy
arxiv-663354
2409.19808
Can Models Learn Skill Composition from Examples?
<|reference_start|>Can Models Learn Skill Composition from Examples?: As large language models (LLMs) become increasingly advanced, their ability to exhibit compositional generalization -- the capacity to combine learned skills in novel ways not encountered during training -- has garnered significant attention. This type of generalization, particularly in scenarios beyond training data, is also of great interest in the study of AI safety and alignment. A recent study introduced the SKILL-MIX evaluation, where models are tasked with composing a short paragraph demonstrating the use of a specified $k$-tuple of language skills. While small models struggled with composing even with $k=3$, larger models like GPT-4 performed reasonably well with $k=5$ and $6$. In this paper, we employ a setup akin to SKILL-MIX to evaluate the capacity of smaller models to learn compositional generalization from examples. Utilizing a diverse set of language skills -- including rhetorical, literary, reasoning, theory of mind, and common sense -- GPT-4 was used to generate text samples that exhibit random subsets of $k$ skills. Subsequent fine-tuning of 7B and 13B parameter models on these combined skill texts, for increasing values of $k$, revealed the following findings: (1) Training on combinations of $k=2$ and $3$ skills results in noticeable improvements in the ability to compose texts with $k=4$ and $5$ skills, despite models never having seen such examples during training. (2) When skill categories are split into training and held-out groups, models significantly improve at composing texts with held-out skills during testing despite having only seen training skills during fine-tuning, illustrating the efficacy of the training approach even with previously unseen skills. This study also suggests that incorporating skill-rich (potentially synthetic) text into training can substantially enhance the compositional capabilities of models.<|reference_end|>
arxiv
@article{zhao2024can, title={Can Models Learn Skill Composition from Examples?}, author={Haoyu Zhao, Simran Kaur, Dingli Yu, Anirudh Goyal, Sanjeev Arora}, journal={arXiv preprint arXiv:2409.19808}, year={2024}, archivePrefix={arXiv}, eprint={2409.19808}, primaryClass={cs.CL cs.AI cs.LG} }
zhao2024can
arxiv-663355
2409.19811
Robust Incremental Structure-from-Motion with Hybrid Features
<|reference_start|>Robust Incremental Structure-from-Motion with Hybrid Features: Structure-from-Motion (SfM) has become a ubiquitous tool for camera calibration and scene reconstruction with many downstream applications in computer vision and beyond. While the state-of-the-art SfM pipelines have reached a high level of maturity in well-textured and well-configured scenes over the last decades, they still fall short of robustly solving the SfM problem in challenging scenarios. In particular, weakly textured scenes and poorly constrained configurations oftentimes cause catastrophic failures or large errors for the primarily keypoint-based pipelines. In these scenarios, line segments are often abundant and can offer complementary geometric constraints. Their large spatial extent and typically structured configurations lead to stronger geometric constraints as compared to traditional keypoint-based methods. In this work, we introduce an incremental SfM system that, in addition to points, leverages lines and their structured geometric relations. Our technical contributions span the entire pipeline (mapping, triangulation, registration) and we integrate these into a comprehensive end-to-end SfM system that we share as an open-source software with the community. We also present the first analytical method to propagate uncertainties for 3D optimized lines via sensitivity analysis. Experiments show that our system is consistently more robust and accurate compared to the widely used point-based state of the art in SfM -- achieving richer maps and more precise camera registrations, especially under challenging conditions. In addition, our uncertainty-aware localization module alone is able to consistently improve over the state of the art under both point-alone and hybrid setups.<|reference_end|>
arxiv
@article{liu2024robust, title={Robust Incremental Structure-from-Motion with Hybrid Features}, author={Shaohui Liu and Yidan Gao and Tianyi Zhang and R\'emi Pautrat and Johannes L. Sch\"onberger and Viktor Larsson and Marc Pollefeys}, journal={arXiv preprint arXiv:2409.19811}, year={2024}, archivePrefix={arXiv}, eprint={2409.19811}, primaryClass={cs.CV} }
liu2024robust
arxiv-663356
2409.19813
Transforming Hidden States into Binary Semantic Features
<|reference_start|>Transforming Hidden States into Binary Semantic Features: Large language models follow a lineage of many NLP applications that were directly inspired by distributional semantics, but do not seem to be closely related to it anymore. In this paper, we propose to employ the distributional theory of meaning once again. Using Independent Component Analysis to overcome some of its challenging aspects, we show that large language models represent semantic features in their hidden states.<|reference_end|>
arxiv
@article{musil2024transforming, title={Transforming Hidden States into Binary Semantic Features}, author={Tom\'a\v{s} Musil and David Mare\v{c}ek}, journal={arXiv preprint arXiv:2409.19813}, year={2024}, archivePrefix={arXiv}, eprint={2409.19813}, primaryClass={cs.CL} }
musil2024transforming
arxiv-663357
2409.19816
Grounded Curriculum Learning
<|reference_start|>Grounded Curriculum Learning: The high cost of real-world data for robotics Reinforcement Learning (RL) leads to the wide usage of simulators. Despite extensive work on building better dynamics models for simulators to match with the real world, there is another, often-overlooked mismatch between simulations and the real world, namely the distribution of available training tasks. Such a mismatch is further exacerbated by existing curriculum learning techniques, which automatically vary the simulation task distribution without considering its relevance to the real world. Considering these challenges, we posit that curriculum learning for robotics RL needs to be grounded in real-world task distributions. To this end, we propose Grounded Curriculum Learning (GCL), which aligns the simulated task distribution in the curriculum with the real world, as well as explicitly considers what tasks have been given to the robot and how the robot has performed in the past. We validate GCL using the BARN dataset on complex navigation tasks, achieving a 6.8% and 6.5% higher success rate compared to a state-of-the-art CL method and a curriculum designed by human experts, respectively. These results show that GCL can enhance learning efficiency and navigation performance by grounding the simulation task distribution in the real world within an adaptive curriculum.<|reference_end|>
arxiv
@article{wang2024grounded, title={Grounded Curriculum Learning}, author={Linji Wang and Zifan Xu and Peter Stone and Xuesu Xiao}, journal={arXiv preprint arXiv:2409.19816}, year={2024}, archivePrefix={arXiv}, eprint={2409.19816}, primaryClass={cs.RO cs.AI} }
wang2024grounded
arxiv-663358
2409.19817
Calibrating Language Models with Adaptive Temperature Scaling
<|reference_start|>Calibrating Language Models with Adaptive Temperature Scaling: The effectiveness of large language models (LLMs) is not only measured by their ability to generate accurate outputs but also by their calibration-how well their confidence scores reflect the probability of their outputs being correct. While unsupervised pre-training has been shown to yield LLMs with well-calibrated conditional probabilities, recent studies have shown that after fine-tuning with reinforcement learning from human feedback (RLHF), the calibration of these models degrades significantly. In this work, we introduce Adaptive Temperature Scaling (ATS), a post-hoc calibration method that predicts a temperature scaling parameter for each token prediction. The predicted temperature values adapt based on token-level features and are fit over a standard supervised fine-tuning (SFT) dataset. The adaptive nature of ATS addresses the varying degrees of calibration shift that can occur after RLHF fine-tuning. ATS improves calibration by over 10-50% across three downstream natural language evaluation benchmarks compared to prior calibration methods and does not impede performance improvements from RLHF.<|reference_end|>
arxiv
@article{xie2024calibrating, title={Calibrating Language Models with Adaptive Temperature Scaling}, author={Johnathan Xie and Annie S. Chen and Yoonho Lee and Eric Mitchell and Chelsea Finn}, journal={arXiv preprint arXiv:2409.19817}, year={2024}, archivePrefix={arXiv}, eprint={2409.19817}, primaryClass={cs.LG cs.AI cs.CL} }
xie2024calibrating
arxiv-663359
2409.19818
Fine-Tuning Automatic Speech Recognition for People with Parkinson's: An Effective Strategy for Enhancing Speech Technology Accessibility
<|reference_start|>Fine-Tuning Automatic Speech Recognition for People with Parkinson's: An Effective Strategy for Enhancing Speech Technology Accessibility: This paper enhances dysarthric and dysphonic speech recognition by fine-tuning pretrained automatic speech recognition (ASR) models on the 2023-10-05 data package of the Speech Accessibility Project (SAP), which contains the speech of 253 people with Parkinson's disease. Experiments tested methods that have been effective for Cerebral Palsy, including the use of speaker clustering and severity-dependent models, weighted fine-tuning, and multi-task learning. Best results were obtained using a multi-task learning model, in which the ASR is trained to produce an estimate of the speaker's impairment severity as an auxiliary output. The resulting word error rates are considerably improved relative to a baseline model fine-tuned using only Librispeech data, with word error rate improvements of 37.62% and 26.97% compared to fine-tuning on 100h and 960h of LibriSpeech data, respectively.<|reference_end|>
arxiv
@article{zheng2024fine-tuning, title={Fine-Tuning Automatic Speech Recognition for People with Parkinson's: An Effective Strategy for Enhancing Speech Technology Accessibility}, author={Xiuwen Zheng and Bornali Phukon and Mark Hasegawa-Johnson}, journal={Proceedings of Interspeech 2024}, year={2024}, doi={10.21437/Interspeech.2024-1969}, archivePrefix={arXiv}, eprint={2409.19818}, primaryClass={eess.AS cs.SD} }
zheng2024fine-tuning
arxiv-663360
2409.19820
Qompose: A Technique to Select Optimal Algorithm-Specific Layout for Neutral Atom Quantum Architectures
<|reference_start|>Qompose: A Technique to Select Optimal Algorithm- Specific Layout for Neutral Atom Quantum Architectures: As quantum computing architecture matures, it is important to investigate new technologies that lend unique advantages. In this work, we propose, Qompose, a neutral atom quantum computing framework for efficiently composing quantum circuits on 2-D topologies of neutral atoms. Qompose selects an efficient topology for any given circuit in order to optimize for length of execution through efficient parallelism and for overall fidelity. our extensive evaluation demonstrates the Qompose is effective for a large collection of randomly-generated quantum circuits and a range of real-world benchmarks including VQE, ISING, and QAOA.<|reference_end|>
arxiv
@article{silver2024qompose:, title={Qompose: A Technique to Select Optimal Algorithm-Specific Layout for Neutral Atom Quantum Architectures}, author={Daniel Silver and Tirthak Patel and Devesh Tiwari}, journal={arXiv preprint arXiv:2409.19820}, year={2024}, archivePrefix={arXiv}, eprint={2409.19820}, primaryClass={quant-ph cs.AI} }
silver2024qompose:
arxiv-663361
2409.19821
Tracking Everything in Robotic-Assisted Surgery
<|reference_start|>Tracking Everything in Robotic-Assisted Surgery: Accurate tracking of tissues and instruments in videos is crucial for Robotic-Assisted Minimally Invasive Surgery (RAMIS), as it enables the robot to comprehend the surgical scene with precise locations and interactions of tissues and tools. Traditional keypoint-based sparse tracking is limited by featured points, while flow-based dense two-view matching suffers from long-term drifts. Recently, the Tracking Any Point (TAP) algorithm was proposed to overcome these limitations and achieve dense accurate long-term tracking. However, its efficacy in surgical scenarios remains untested, largely due to the lack of a comprehensive surgical tracking dataset for evaluation. To address this gap, we introduce a new annotated surgical tracking dataset for benchmarking tracking methods for surgical scenarios, comprising real-world surgical videos with complex tissue and instrument motions. We extensively evaluate state-of-the-art (SOTA) TAP-based algorithms on this dataset and reveal their limitations in challenging surgical scenarios, including fast instrument motion, severe occlusions, and motion blur, etc. Furthermore, we propose a new tracking method, namely SurgMotion, to solve the challenges and further improve the tracking performance. Our proposed method outperforms most TAP-based algorithms in surgical instruments tracking, and especially demonstrates significant improvements over baselines in challenging medical videos.<|reference_end|>
arxiv
@article{zhan2024tracking, title={Tracking Everything in Robotic-Assisted Surgery}, author={Bohan Zhan and Wang Zhao and Yi Fang and Bo Du and Francisco Vasconcelos and Danail Stoyanov and Daniel S. Elson and Baoru Huang}, journal={arXiv preprint arXiv:2409.19821}, year={2024}, archivePrefix={arXiv}, eprint={2409.19821}, primaryClass={cs.CV} }
zhan2024tracking
arxiv-663362
2409.19823
OrganiQ: Mitigating Classical Resource Bottlenecks of Quantum Generative Adversarial Networks on NISQ-Era Machines
<|reference_start|>OrganiQ: Mitigating Classical Resource Bottlenecks of Quantum Generative Adversarial Networks on NISQ-Era Machines: Driven by swift progress in hardware capabilities, quantum machine learning has emerged as a research area of interest. Recently, quantum image generation has produced promising results. However, prior quantum image generation techniques rely on classical neural networks, limiting their quantum potential and image quality. To overcome this, we introduce OrganiQ, the first quantum GAN capable of producing high-quality images without using classical neural networks.<|reference_end|>
arxiv
@article{silver2024organiq:, title={OrganiQ: Mitigating Classical Resource Bottlenecks of Quantum Generative Adversarial Networks on NISQ-Era Machines}, author={Daniel Silver and Tirthak Patel and Aditya Ranjan and William Cutler and Devesh Tiwari}, journal={arXiv preprint arXiv:2409.19823}, year={2024}, archivePrefix={arXiv}, eprint={2409.19823}, primaryClass={quant-ph cs.AI} }
silver2024organiq:
arxiv-663363
2409.19824
Counterfactual Evaluation of Ads Ranking Models through Domain Adaptation
<|reference_start|>Counterfactual Evaluation of Ads Ranking Models through Domain Adaptation: We propose a domain-adapted reward model that works alongside an Offline A/B testing system for evaluating ranking models. This approach effectively measures reward for ranking model changes in large-scale Ads recommender systems, where model-free methods like IPS are not feasible. Our experiments demonstrate that the proposed technique outperforms both the vanilla IPS method and approaches using non-generalized reward models.<|reference_end|>
arxiv
@article{radwan2024counterfactual, title={Counterfactual Evaluation of Ads Ranking Models through Domain Adaptation}, author={Mohamed A. Radwan and Himaghna Bhattacharjee and Quinn Lanners and Jiasheng Zhang and Serkan Karakulak and Houssam Nassif and Murat Ali Bayir}, journal={arXiv preprint arXiv:2409.19824}, year={2024}, archivePrefix={arXiv}, eprint={2409.19824}, primaryClass={cs.IR cs.AI} }
radwan2024counterfactual
arxiv-663364
2409.19825
PhishGuard: A Multi-Layered Ensemble Model for Optimal Phishing Website Detection
<|reference_start|>PhishGuard: A Multi-Layered Ensemble Model for Optimal Phishing Website Detection: Phishing attacks are a growing cybersecurity threat, leveraging deceptive techniques to steal sensitive information through malicious websites. To combat these attacks, this paper introduces PhishGuard, an optimal custom ensemble model designed to improve phishing site detection. The model combines multiple machine learning classifiers, including Random Forest, Gradient Boosting, CatBoost, and XGBoost, to enhance detection accuracy. Through advanced feature selection methods such as SelectKBest and RFECV, and optimizations like hyperparameter tuning and data balancing, the model was trained and evaluated on four publicly available datasets. PhishGuard outperformed state-of-the-art models, achieving a detection accuracy of 99.05% on one of the datasets, with similarly high results across other datasets. This research demonstrates that optimization methods in conjunction with ensemble learning greatly improve phishing detection performance.<|reference_end|>
arxiv
@article{ovi2024phishguard:, title={PhishGuard: A Multi-Layered Ensemble Model for Optimal Phishing Website Detection}, author={Md Sultanul Islam Ovi and Md. Hasibur Rahman and Mohammad Arif Hossain}, journal={arXiv preprint arXiv:2409.19825}, year={2024}, archivePrefix={arXiv}, eprint={2409.19825}, primaryClass={cs.CR} }
ovi2024phishguard:
arxiv-663365
2409.19828
Blockchain-enhanced Integrity Verification in Educational Content Assessment Platform: A Lightweight and Cost-Efficient Approach
<|reference_start|>Blockchain-enhanced Integrity Verification in Educational Content Assessment Platform: A Lightweight and Cost-Efficient Approach: The growing digitization of education presents significant challenges in maintaining the integrity and trustworthiness of educational content. Traditional systems often fail to ensure data authenticity and prevent unauthorized alterations, particularly in the evaluation of teachers' professional activities, where demand for transparent and secure assessment mechanisms is increasing. In this context, Blockchain technology offers a novel solution to address these issues. This paper introduces a Blockchain-enhanced framework for the Electronic Platform for Expertise of Content (EPEC), a platform used for reviewing and assessing educational materials. Our approach integrates the Polygon network, a Layer-2 solution for Ethereum, to securely store and retrieve encrypted reviews, ensuring both privacy and accountability. By leveraging Python, Flask, and Web3.py, we interact with a Solidity-based smart contract to securely link each review to a unique identifier (UID) that connects on-chain data with real-world databases. The system, containerized using Docker, facilitates easy deployment and integration through API endpoints. Our implementation demonstrates significant cost savings, with a 98\% reduction in gas fees compared to Ethereum, making it a scalable and cost-effective solution. This research contributes to the ongoing effort to implement Blockchain in educational content verification, offering a practical and secure framework that enhances trust and transparency in the digital education landscape.<|reference_end|>
arxiv
@article{bayan2024blockchain-enhanced, title={Blockchain-enhanced Integrity Verification in Educational Content Assessment Platform: A Lightweight and Cost-Efficient Approach}, author={Talgar Bayan and Richard Banach and Askar Nurbekov and Makhmud Mustafabek Galy and Adi Sabyrbayev and Zhanat Nurbekova}, journal={arXiv preprint arXiv:2409.19828}, year={2024}, archivePrefix={arXiv}, eprint={2409.19828}, primaryClass={cs.CR cs.SE} }
bayan2024blockchain-enhanced
arxiv-663366
2409.19829
Generalizability of Graph Neural Networks for Decentralized Unlabeled Motion Planning
<|reference_start|>Generalizability of Graph Neural Networks for Decentralized Unlabeled Motion Planning: Unlabeled motion planning involves assigning a set of robots to target locations while ensuring collision avoidance, aiming to minimize the total distance traveled. The problem forms an essential building block for multi-robot systems in applications such as exploration, surveillance, and transportation. We address this problem in a decentralized setting where each robot knows only the positions of its $k$-nearest robots and $k$-nearest targets. This scenario combines elements of combinatorial assignment and continuous-space motion planning, posing significant scalability challenges for traditional centralized approaches. To overcome these challenges, we propose a decentralized policy learned via a Graph Neural Network (GNN). The GNN enables robots to determine (1) what information to communicate to neighbors and (2) how to integrate received information with local observations for decision-making. We train the GNN using imitation learning with the centralized Hungarian algorithm as the expert policy, and further fine-tune it using reinforcement learning to avoid collisions and enhance performance. Extensive empirical evaluations demonstrate the scalability and effectiveness of our approach. The GNN policy trained on 100 robots generalizes to scenarios with up to 500 robots, outperforming state-of-the-art solutions by 8.6\% on average and significantly surpassing greedy decentralized methods. This work lays the foundation for solving multi-robot coordination problems in settings where scalability is important.<|reference_end|>
arxiv
@article{muthusamy2024generalizability, title={Generalizability of Graph Neural Networks for Decentralized Unlabeled Motion Planning}, author={Shreyas Muthusamy and Damian Owerko and Charilaos I. Kanatsoulis and Saurav Agarwal and Alejandro Ribeiro}, journal={arXiv preprint arXiv:2409.19829}, year={2024}, archivePrefix={arXiv}, eprint={2409.19829}, primaryClass={cs.RO cs.AI cs.SY eess.SY} }
muthusamy2024generalizability
arxiv-663367
2409.19830
GameLabel-10K: Collecting Image Preference Data Through Mobile Game Crowdsourcing
<|reference_start|>GameLabel-10K: Collecting Image Preference Data Through Mobile Game Crowdsourcing: The rise of multi-billion parameter models has sparked an intense hunger for data across deep learning. This study explores the possibility of replacing paid annotators with video game players who are rewarded with in-game currency for good performance. We collaborate with the developers of a mobile historical strategy game, Armchair Commander, to test this idea. More specifically, the current study tests this idea using pairwise image preference data, typically used to fine-tune diffusion models. Using this method, we create GameLabel-10K, a dataset with slightly under 10 thousand labels and 7000 unique prompts. In addition to these results, we analyze some limitations of this dataset and publicly release it under an open-source license.<|reference_end|>
arxiv
@article{zhou2024gamelabel-10k:, title={GameLabel-10K: Collecting Image Preference Data Through Mobile Game Crowdsourcing}, author={Jonathan Zhou}, journal={arXiv preprint arXiv:2409.19830}, year={2024}, archivePrefix={arXiv}, eprint={2409.19830}, primaryClass={cs.CV} }
zhou2024gamelabel-10k:
arxiv-663368
2409.19831
Enabling Multi-Robot Collaboration from Single-Human Guidance
<|reference_start|>Enabling Multi-Robot Collaboration from Single-Human Guidance: Learning collaborative behaviors is essential for multi-agent systems. Traditionally, multi-agent reinforcement learning solves this implicitly through a joint reward and centralized observations, assuming collaborative behavior will emerge. Other studies propose to learn from demonstrations of a group of collaborative experts. Instead, we propose an efficient and explicit way of learning collaborative behaviors in multi-agent systems by leveraging expertise from only a single human. Our insight is that humans can naturally take on various roles in a team. We show that agents can effectively learn to collaborate by allowing a human operator to dynamically switch between controlling agents for a short period and incorporating a human-like theory-of-mind model of teammates. Our experiments showed that our method improves the success rate of a challenging collaborative hide-and-seek task by up to 58$% with only 40 minutes of human guidance. We further demonstrate our findings transfer to the real world by conducting multi-robot experiments.<|reference_end|>
arxiv
@article{ji2024enabling, title={Enabling Multi-Robot Collaboration from Single-Human Guidance}, author={Zhengran Ji and Lingyu Zhang and Paul Sajda and Boyuan Chen}, journal={arXiv preprint arXiv:2409.19831}, year={2024}, archivePrefix={arXiv}, eprint={2409.19831}, primaryClass={cs.RO cs.HC cs.LG cs.MA} }
ji2024enabling
arxiv-663369
2409.19833
HazyDet: Open-source Benchmark for Drone-view Object Detection with Depth-cues in Hazy Scenes
<|reference_start|>HazyDet: Open-source Benchmark for Drone-view Object Detection with Depth-cues in Hazy Scenes: Drone-based object detection in adverse weather conditions is crucial for enhancing drones' environmental perception, yet it remains largely unexplored due to the lack of relevant benchmarks. To bridge this gap, we introduce HazyDet, a large-scale dataset tailored for drone-based object detection in hazy scenes. It encompasses 383,000 real-world instances, collected from both naturally hazy environments and normal scenes with synthetically imposed haze effects to simulate adverse weather conditions. By observing the significant variations in object scale and clarity under different depth and haze conditions, we designed a Depth Conditioned Detector (DeCoDet) to incorporate this prior knowledge. DeCoDet features a Multi-scale Depth-aware Detection Head that seamlessly integrates depth perception, with the resulting depth cues harnessed by a dynamic Depth Condition Kernel module. Furthermore, we propose a Scale Invariant Refurbishment Loss to facilitate the learning of robust depth cues from pseudo-labels. Extensive evaluations on the HazyDet dataset demonstrate the flexibility and effectiveness of our method, yielding significant performance improvements. Our dataset and toolkit are available at https://github.com/GrokCV/HazyDet.<|reference_end|>
arxiv
@article{feng2024hazydet:, title={HazyDet: Open-source Benchmark for Drone-view Object Detection with Depth-cues in Hazy Scenes}, author={Changfeng Feng and Zhenyuan Chen and Renke Kou and Guangwei Gao and Chunping Wang and Xiang Li and Xiangbo Shu and Yimian Dai and Qiang Fu and Jian Yang}, journal={arXiv preprint arXiv:2409.19833}, year={2024}, archivePrefix={arXiv}, eprint={2409.19833}, primaryClass={cs.CV} }
feng2024hazydet:
arxiv-663370
2409.19834
Utilizing Priors in Sampling-based Cost Minimization
<|reference_start|>Utilizing Priors in Sampling-based Cost Minimization: We consider an autonomous vehicle (AV) agent performing a long-term cost-minimization problem in the elapsed time $T$ over sequences of states $s_{1:T}$ and actions $a_{1:T}$ for some fixed, known (though potentially learned) cost function $C(s_t,a_t)$, approximate system dynamics $P$, and distribution over initial states $d_0$. The goal is to minimize the expected cost-to-go of the driving trajectory $\tau = s_1, a_1, ..., s_T, a_T$ from the initial state.<|reference_end|>
arxiv
@article{lou2024utilizing, title={Utilizing Priors in Sampling-based Cost Minimization}, author={Yuan-Yao Lou and Jonathan Spencer and Kwang Taik Kim and Mung Chiang}, journal={arXiv preprint arXiv:2409.19834}, year={2024}, archivePrefix={arXiv}, eprint={2409.19834}, primaryClass={eess.SY cs.SY} }
lou2024utilizing
arxiv-663371
2409.19835
GrokLST: Towards High-Resolution Benchmark and Toolkit for Land Surface Temperature Downscaling
<|reference_start|>GrokLST: Towards High-Resolution Benchmark and Toolkit for Land Surface Temperature Downscaling: Land Surface Temperature (LST) is a critical parameter for environmental studies, but obtaining high-resolution LST data remains challenging due to the spatio-temporal trade-off in satellite remote sensing. Guided LST downscaling has emerged as a solution, but current methods often neglect spatial non-stationarity and lack a open-source ecosystem for deep learning methods. To address these limitations, we propose the Modality-Conditional Large Selective Kernel (MoCoLSK) Networks, a novel architecture that dynamically fuses multi-modal data through modality-conditioned projections. MoCoLSK re-engineers our previous LSKNet to achieve a confluence of dynamic receptive field adjustment and multi-modal feature integration, leading to enhanced LST prediction accuracy. Furthermore, we establish the GrokLST project, a comprehensive open-source ecosystem featuring the GrokLST dataset, a high-resolution benchmark, and the GrokLST toolkit, an open-source PyTorch-based toolkit encapsulating MoCoLSK alongside 40+ state-of-the-art approaches. Extensive experimental results validate MoCoLSK's effectiveness in capturing complex dependencies and subtle variations within multispectral data, outperforming existing methods in LST downscaling. Our code, dataset, and toolkit are available at https://github.com/GrokCV/GrokLST.<|reference_end|>
arxiv
@article{dai2024groklst:, title={GrokLST: Towards High-Resolution Benchmark and Toolkit for Land Surface Temperature Downscaling}, author={Qun Dai and Chunyang Yuan and Yimian Dai and Yuxuan Li and Xiang Li and Kang Ni and Jianhui Xu and Xiangbo Shu and Jian Yang}, journal={arXiv preprint arXiv:2409.19835}, year={2024}, archivePrefix={arXiv}, eprint={2409.19835}, primaryClass={cs.CV eess.IV} }
dai2024groklst:
arxiv-663372
2409.19838
geom2vec: pretrained GNNs as geometric featurizers for conformational dynamics
<|reference_start|>geom2vec: pretrained GNNs as geometric featurizers for conformational dynamics: Identifying informative low-dimensional features that characterize dynamics in molecular simulations remains a challenge, often requiring extensive hand-tuning and system-specific knowledge. Here, we introduce geom2vec, in which pretrained graph neural networks (GNNs) are used as universal geometric featurizers. By pretraining equivariant GNNs on a large dataset of molecular conformations with a self-supervised denoising objective, we learn transferable structural representations that capture molecular geometric patterns without further fine-tuning. We show that the learned representations can be directly used to analyze trajectory data, thus eliminating the need for manual feature selection and improving robustness of the simulation analysis workflows. Importantly, by decoupling GNN training from training for downstream tasks, we enable analysis of larger molecular graphs with limited computational resources.<|reference_end|>
arxiv
@article{pengmei2024geom2vec:, title={geom2vec: pretrained GNNs as geometric featurizers for conformational dynamics}, author={Zihan Pengmei and Chatipat Lorpaiboon and Spencer C. Guo and Jonathan Weare and Aaron R. Dinner}, journal={arXiv preprint arXiv:2409.19838}, year={2024}, archivePrefix={arXiv}, eprint={2409.19838}, primaryClass={cs.LG physics.chem-ph physics.comp-ph q-bio.QM} }
pengmei2024geom2vec:
arxiv-663373
2409.19839
ForecastBench: A Dynamic Benchmark of AI Forecasting Capabilities
<|reference_start|>ForecastBench: A Dynamic Benchmark of AI Forecasting Capabilities: Forecasts of future events are essential inputs into informed decision-making. Machine learning (ML) systems have the potential to deliver forecasts at scale, but there is no framework for evaluating the accuracy of ML systems on a standardized set of forecasting questions. To address this gap, we introduce ForecastBench: a dynamic benchmark that evaluates the accuracy of ML systems on an automatically generated and regularly updated set of 1,000 forecasting questions. To avoid any possibility of data leakage, ForecastBench is comprised solely of questions about future events that have no known answer at the time of submission. We quantify the ability of current ML systems by collecting forecasts from expert (human) forecasters, the general public, and LLMs on a random subset of questions from the benchmark (N = 200). While LLMs have achieved super-human performance on many benchmarks, they perform less well here: expert forecasters outperform the top-performing LLM (p-values <= 0.01). We display system and human scores in a public leaderboard at www.forecastbench.org.<|reference_end|>
arxiv
@article{karger2024forecastbench:, title={ForecastBench: A Dynamic Benchmark of AI Forecasting Capabilities}, author={Ezra Karger and Houtan Bastani and Chen Yueh-Han and Zachary Jacobs and Danny Halawi and Fred Zhang and Philip E. Tetlock}, journal={arXiv preprint arXiv:2409.19839}, year={2024}, archivePrefix={arXiv}, eprint={2409.19839}, primaryClass={cs.LG cs.AI cs.CL} }
karger2024forecastbench:
arxiv-663374
2409.19840
Textual Training for the Hassle-Free Removal of Unwanted Visual Data
<|reference_start|>Textual Training for the Hassle-Free Removal of Unwanted Visual Data: In our study, we explore methods for detecting unwanted content lurking in visual datasets. We provide a theoretical analysis demonstrating that a model capable of successfully partitioning visual data can be obtained using only textual data. Based on the analysis, we propose Hassle-Free Textual Training (HFTT), a streamlined method capable of acquiring detectors for unwanted visual content, using only synthetic textual data in conjunction with pre-trained vision-language models. HFTT features an innovative objective function that significantly reduces the necessity for human involvement in data annotation. Furthermore, HFTT employs a clever textual data synthesis method, effectively emulating the integration of unknown visual data distribution into the training process at no extra cost. The unique characteristics of HFTT extend its utility beyond traditional out-of-distribution detection, making it applicable to tasks that address more abstract concepts. We complement our analyses with experiments in out-of-distribution detection and hateful image detection. Our codes are available at https://github.com/Saehyung-Lee/HFTT<|reference_end|>
arxiv
@article{lee2024textual, title={Textual Training for the Hassle-Free Removal of Unwanted Visual Data: Case Studies on OOD and Hateful Image Detection}, author={Saehyung Lee and Jisoo Mok and Sangha Park and Yongho Shin and Dahuin Jung and Sungroh Yoon}, journal={arXiv preprint arXiv:2409.19840}, year={2024}, archivePrefix={arXiv}, eprint={2409.19840}, primaryClass={cs.CV} }
lee2024textual
arxiv-663375
2409.19841
Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning
<|reference_start|>Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning: Despite its widespread use in neural networks, error backpropagation has faced criticism for its lack of biological plausibility, suffering from issues such as the backward locking problem and the weight transport problem. These limitations have motivated researchers to explore more biologically plausible learning algorithms that could potentially shed light on how biological neural systems adapt and learn. Inspired by the counter-current exchange mechanisms observed in biological systems, we propose counter-current learning (CCL), a biologically plausible framework for credit assignment in neural networks. This framework employs a feedforward network to process input data and a feedback network to process targets, with each network enhancing the other through anti-parallel signal propagation. By leveraging the more informative signals from the bottom layer of the feedback network to guide the updates of the top layer of the feedforward network and vice versa, CCL enables the simultaneous transformation of source inputs to target outputs and the dynamic mutual influence of these transformations. Experimental results on MNIST, FashionMNIST, CIFAR10, and CIFAR100 datasets using multi-layer perceptrons and convolutional neural networks demonstrate that CCL achieves comparable performance to other biologically plausible algorithms while offering a more biologically realistic learning mechanism. Furthermore, we showcase the applicability of our approach to an autoencoder task, underscoring its potential for unsupervised representation learning. Our work presents a direction for biologically inspired and plausible learning algorithms, offering alternative mechanisms of learning and adaptation in neural networks.<|reference_end|>
arxiv
@article{kao2024counter-current, title={Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning}, author={Chia-Hsiang Kao, Bharath Hariharan}, journal={arXiv preprint arXiv:2409.19841}, year={2024}, archivePrefix={arXiv}, eprint={2409.19841}, primaryClass={cs.LG cs.AI cs.NE} }
kao2024counter-current
arxiv-663376
2409.19846
Towards Open-Vocabulary Semantic Segmentation Without Semantic Labels
<|reference_start|>Towards Open-Vocabulary Semantic Segmentation Without Semantic Labels: Large-scale vision-language models like CLIP have demonstrated impressive open-vocabulary capabilities for image-level tasks, excelling in recognizing what objects are present. However, they struggle with pixel-level recognition tasks like semantic segmentation, which additionally require understanding where the objects are located. In this work, we propose a novel method, PixelCLIP, to adapt the CLIP image encoder for pixel-level understanding by guiding the model on where, which is achieved using unlabeled images and masks generated from vision foundation models such as SAM and DINO. To address the challenges of leveraging masks without semantic labels, we devise an online clustering algorithm using learnable class names to acquire general semantic concepts. PixelCLIP shows significant performance improvements over CLIP and competitive results compared to caption-supervised methods in open-vocabulary semantic segmentation. Project page is available at https://cvlab-kaist.github.io/PixelCLIP<|reference_end|>
arxiv
@article{shin2024towards, title={Towards Open-Vocabulary Semantic Segmentation Without Semantic Labels}, author={Heeseong Shin, Chaehyun Kim, Sunghwan Hong, Seokju Cho, Anurag Arnab, Paul Hongsuck Seo, and Seungryong Kim}, journal={arXiv preprint arXiv:2409.19846}, year={2024}, archivePrefix={arXiv}, eprint={2409.19846}, primaryClass={cs.CV} }
shin2024towards
arxiv-663377
2409.19849
An Investigation into Protestware
<|reference_start|>An Investigation into Protestware: Protests are public expressions of personal or collective discontent with the current state of affairs. Although traditional protests involve in-person events, the ubiquity of computers and software opened up a new avenue for activism: protestware. The roots of protestware date back to the early days of computing. However, recent events in the Russo-Ukrainian war have sparked a new wave of protestware. While news and media are heavily reporting on individual protestware as they are discovered, the understanding of such software as a whole is severely limited. In particular, we do not have a detailed understanding of their characteristics and their impact on the community. To address this gap, we first collect 32 samples of protestware. Then, with these samples, we formulate characteristics of protestware using inductive analysis. In addition, we analyze the aftermath of the protestware, which has the potential to affect the software supply chain in terms of community sentiment and usage. We report that: (1) protestware has three notable characteristics, namely, i) the "nature of inducing protest" is diverse, ii) the "nature of targeting users" is discriminatory, and iii) the "nature of transparency" is not always respected; (2) disruptive protestware may cause substantial adverse impact on downstream users; (3) developers of protestware may not shift their beliefs even with pushback; (4) the usage of protestware from JavaScript libraries has been seen to generally increase over time.<|reference_end|>
arxiv
@article{finken2024an, title={An Investigation into Protestware}, author={Tanner Finken, Jesse Chen, Sazzadur Rahaman}, journal={arXiv preprint arXiv:2409.19849}, year={2024}, archivePrefix={arXiv}, eprint={2409.19849}, primaryClass={cs.SE} }
finken2024an
arxiv-663378
2409.19850
SATA: Spatial Autocorrelation Token Analysis for Enhancing the Robustness of Vision Transformers
<|reference_start|>SATA: Spatial Autocorrelation Token Analysis for Enhancing the Robustness of Vision Transformers: Over the past few years, vision transformers (ViTs) have consistently demonstrated remarkable performance across various visual recognition tasks. However, attempts to enhance their robustness have yielded limited success, mainly focusing on different training strategies, input patch augmentation, or network structural enhancements. These approaches often involve extensive training and fine-tuning, which are time-consuming and resource-intensive. To tackle these obstacles, we introduce a novel approach named Spatial Autocorrelation Token Analysis (SATA). By harnessing spatial relationships between token features, SATA enhances both the representational capacity and robustness of ViT models. This is achieved through the analysis and grouping of tokens according to their spatial autocorrelation scores prior to their input into the Feed-Forward Network (FFN) block of the self-attention mechanism. Importantly, SATA seamlessly integrates into existing pre-trained ViT baselines without requiring retraining or additional fine-tuning, while concurrently improving efficiency by reducing the computational load of the FFN units. Experimental results show that the baseline ViTs enhanced with SATA not only achieve a new state-of-the-art top-1 accuracy on ImageNet-1K image classification (94.9%) but also establish new state-of-the-art performance across multiple robustness benchmarks, including ImageNet-A (top-1=63.6%), ImageNet-R (top-1=79.2%), and ImageNet-C (mCE=13.6%), all without requiring additional training or fine-tuning of baseline models.<|reference_end|>
arxiv
@article{nikzad2024sata:, title={SATA: Spatial Autocorrelation Token Analysis for Enhancing the Robustness of Vision Transformers}, author={Nick Nikzad, Yi Liao, Yongsheng Gao, Jun Zhou}, journal={arXiv preprint arXiv:2409.19850}, year={2024}, archivePrefix={arXiv}, eprint={2409.19850}, primaryClass={cs.CV cs.LG} }
nikzad2024sata:
arxiv-663379
2409.19854
The Construction of Instruction-tuned LLMs for Finance without Instruction Data Using Continual Pretraining and Model Merging
<|reference_start|>The Construction of Instruction-tuned LLMs for Finance without Instruction Data Using Continual Pretraining and Model Merging: This paper proposes a novel method for constructing instruction-tuned large language models (LLMs) for finance without instruction data. Traditionally, developing such domain-specific LLMs has been resource-intensive, requiring a large dataset and significant computational power for continual pretraining and instruction tuning. Our study proposes a simpler approach that combines domain-specific continual pretraining with model merging. Given that general-purpose pretrained LLMs and their instruction-tuned LLMs are often publicly available, they can be leveraged to obtain the necessary instruction task vector. By merging this with a domain-specific pretrained vector, we can effectively create instruction-tuned LLMs for finance without additional instruction data. Our process involves two steps: first, we perform continual pretraining on financial data; second, we merge the instruction-tuned vector with the domain-specific pretrained vector. Our experiments demonstrate the successful construction of instruction-tuned LLMs for finance. One major advantage of our method is that the instruction-tuned and domain-specific pretrained vectors are nearly independent. This independence makes our approach highly effective. The Japanese financial instruction-tuned LLMs we developed in this study are available at https://huggingface.co/pfnet/nekomata-14b-pfn-qfin-inst-merge.<|reference_end|>
arxiv
@article{hirano2024the, title={The Construction of Instruction-tuned LLMs for Finance without Instruction Data Using Continual Pretraining and Model Merging}, author={Masanori Hirano, Kentaro Imajo}, journal={arXiv preprint arXiv:2409.19854}, year={2024}, archivePrefix={arXiv}, eprint={2409.19854}, primaryClass={cs.CL econ.GN q-fin.CP q-fin.EC} }
hirano2024the
arxiv-663380
2409.19855
Local Randomized Neural Networks with Discontinuous Galerkin Methods for KdV-type and Burgers Equations
<|reference_start|>Local Randomized Neural Networks with Discontinuous Galerkin Methods for KdV-type and Burgers Equations: The Local Randomized Neural Networks with Discontinuous Galerkin (LRNN-DG) methods, introduced in [42], were originally designed for solving linear partial differential equations. In this paper, we extend the LRNN-DG methods to solve nonlinear PDEs, specifically the Korteweg-de Vries (KdV) equation and the Burgers equation, utilizing a space-time approach. Additionally, we introduce adaptive domain decomposition and a characteristic direction approach to enhance the efficiency of the proposed methods. Numerical experiments demonstrate that the proposed methods achieve high accuracy with fewer degrees of freedom, additionally, adaptive domain decomposition and a characteristic direction approach significantly improve computational efficiency.<|reference_end|>
arxiv
@article{sun2024local, title={Local Randomized Neural Networks with Discontinuous Galerkin Methods for KdV-type and Burgers Equations}, author={Jingbo Sun and Fei Wang}, journal={arXiv preprint arXiv:2409.19855}, year={2024}, archivePrefix={arXiv}, eprint={2409.19855}, primaryClass={math.NA cs.NA} }
sun2024local
arxiv-663381
2409.19856
Benchmarking Adaptive Intelligence and Computer Vision on Human-Robot Collaboration
<|reference_start|>Benchmarking Adaptive Intelligence and Computer Vision on Human-Robot Collaboration: Human-Robot Collaboration (HRC) is vital in Industry 4.0, using sensors, digital twins, collaborative robots (cobots), and intention-recognition models to have efficient manufacturing processes. However, Concept Drift is a significant challenge, where robots struggle to adapt to new environments. We address concept drift by integrating Adaptive Intelligence and self-labeling (SLB) to improve the resilience of intention-recognition in an HRC system. Our methodology begins with data collection using cameras and weight sensors, which is followed by annotation of intentions and state changes. Then we train various deep learning models with different preprocessing techniques for recognizing and predicting the intentions. Additionally, we developed a custom state detection algorithm for enhancing the accuracy of SLB, offering precise state-change definitions and timestamps to label intentions. Our results show that the MViT2 model with skeletal posture preprocessing achieves an accuracy of 83% on our data environment, compared to the 79% accuracy of MViT2 without skeleton posture extraction. Additionally, our SLB mechanism achieves a labeling accuracy of 91%, reducing a significant amount of time that would've been spent on manual annotation. Lastly, we observe swift scaling of model performance that combats concept drift by fine tuning on different increments of self-labeled data in a shifted domain that has key differences from the original training environment. This study demonstrates the potential for rapid deployment of intelligent cobots in manufacturing through the steps shown in our methodology, paving the way for more adaptive and efficient HRC systems.<|reference_end|>
arxiv
@article{saraj2024benchmarking, title={Benchmarking Adaptive Intelligence and Computer Vision on Human-Robot Collaboration}, author={Salaar Saraj, Gregory Shklovski, Kristopher Irizarry, Jonathan Vet, Yutian Ren (California Institute for Telecommunications and Information Technology)}, journal={arXiv preprint arXiv:2409.19856}, year={2024}, archivePrefix={arXiv}, eprint={2409.19856}, primaryClass={cs.RO cs.CV cs.HC cs.LG} }
saraj2024benchmarking
arxiv-663382
2409.19860
Discrete Distributionally Robust Optimal Control with Explicitly Constrained Optimization
<|reference_start|>Discrete Distributionally Robust Optimal Control with Explicitly Constrained Optimization: Distributionally robust optimal control (DROC) is gaining interest. This study presents a reformulation method for discrete DROC (DDROC) problems to design optimal control policies under a worst-case distributional uncertainty. The reformulation of DDROC problems impacts both the utility of tractable improvements in continuous DROC problems and the inherent discretization modeling of DROC problems. DROC is believed to have tractability issues; namely, infinite inequalities emerge over the distribution space. Therefore, investigating tractable reformulation methods for these DROC problems is crucial. One such method utilizes the strong dualities of the worst-case expectations. However, previous studies demonstrated that certain non-trivial inequalities remain after the reformulation. To enhance the tractability of DDROC, the proposed method reformulates DDROC problems into one-layer smooth convex programming with only a few trivial inequalities. The proposed method is applied to a DDROC version of a patrol-agent design problem.<|reference_end|>
arxiv
@article{shida2024discrete, title={Discrete Distributionally Robust Optimal Control with Explicitly Constrained Optimization}, author={Yuma Shida, Yuji Ito}, journal={arXiv preprint arXiv:2409.19860}, year={2024}, archivePrefix={arXiv}, eprint={2409.19860}, primaryClass={math.OC cs.SY eess.SY} }
shida2024discrete
arxiv-663383
2409.19861
A Distributed Malicious Agent Detection Scheme for Resilient Power Apportioning in Microgrids
<|reference_start|>A Distributed Malicious Agent Detection Scheme for Resilient Power Apportioning in Microgrids: We consider the framework of distributed aggregation of Distributed Energy Resources (DERs) in power networks to provide ancillary services to the power grid. Existing aggregation schemes work under the assumption of trust and honest behavior of the DERs and can suffer when that is not the case. In this article, we develop a distributed detection scheme that allows the DERs to detect and isolate the maliciously behaving DERs. We propose a model for the maliciously behaving DERs and show that the proposed distributed scheme leads to the detection of the malicious DERs. Further, augmented with the distributed power apportioning algorithm, the proposed scheme provides a framework for resilient distributed power apportioning for ancillary service dispatch in power networks. A controller-hardware-in-the-loop (CHIL) experimental setup is developed to evaluate the performance of the proposed resilient distributed power apportioning scheme on an 8-commercial building distribution network (Central Core) connected to a 55 bus distribution network (External Power Network) based on the University of Minnesota Campus. A diversity of DERs and loads are included in the network to generalize the applicability of the framework. The experimental results corroborate the efficacy of the proposed resilient distributed power apportioning for ancillary service dispatch in power networks.<|reference_end|>
arxiv
@article{khatana2024a, title={A Distributed Malicious Agent Detection Scheme for Resilient Power Apportioning in Microgrids}, author={Vivek Khatana, Soham Chakraborty, Govind Saraswat, Sourav Patel, and Murti V. Salapaka}, journal={arXiv preprint arXiv:2409.19861}, year={2024}, archivePrefix={arXiv}, eprint={2409.19861}, primaryClass={eess.SY cs.SY} }
khatana2024a
arxiv-663384
2409.19862
Learning Multimodal Latent Generative Models with Energy-Based Prior
<|reference_start|>Learning Multimodal Latent Generative Models with Energy-Based Prior: Multimodal generative models have recently gained significant attention for their ability to learn representations across various modalities, enhancing joint and cross-generation coherence. However, most existing works use standard Gaussian or Laplacian distributions as priors, which may struggle to capture the diverse information inherent in multiple data types due to their unimodal and less informative nature. Energy-based models (EBMs), known for their expressiveness and flexibility across various tasks, have yet to be thoroughly explored in the context of multimodal generative models. In this paper, we propose a novel framework that integrates the multimodal latent generative model with the EBM. Both models can be trained jointly through a variational scheme. This approach results in a more expressive and informative prior, better capturing information across multiple modalities. Our experiments validate the proposed model, demonstrating its superior generation coherence.<|reference_end|>
arxiv
@article{yuan2024learning, title={Learning Multimodal Latent Generative Models with Energy-Based Prior}, author={Shiyu Yuan, Jiali Cui, Hanao Li, and Tian Han}, journal={arXiv preprint arXiv:2409.19862}, year={2024}, archivePrefix={arXiv}, eprint={2409.19862}, primaryClass={cs.LG cs.CV} }
yuan2024learning
arxiv-663385
2409.19865
TokenBinder: Text-Video Retrieval with One-to-Many Alignment Paradigm
<|reference_start|>TokenBinder: Text-Video Retrieval with One-to-Many Alignment Paradigm: Text-Video Retrieval (TVR) methods typically match query-candidate pairs by aligning text and video features in coarse-grained, fine-grained, or combined (coarse-to-fine) manners. However, these frameworks predominantly employ a one(query)-to-one(candidate) alignment paradigm, which struggles to discern nuanced differences among candidates, leading to frequent mismatches. Inspired by Comparative Judgement in human cognitive science, where decisions are made by directly comparing items rather than evaluating them independently, we propose TokenBinder. This innovative two-stage TVR framework introduces a novel one-to-many coarse-to-fine alignment paradigm, imitating the human cognitive process of identifying specific items within a large collection. Our method employs a Focused-view Fusion Network with a sophisticated cross-attention mechanism, dynamically aligning and comparing features across multiple videos to capture finer nuances and contextual variations. Extensive experiments on six benchmark datasets confirm that TokenBinder substantially outperforms existing state-of-the-art methods. These results demonstrate its robustness and the effectiveness of its fine-grained alignment in bridging intra- and inter-modality information gaps in TVR tasks.<|reference_end|>
arxiv
@article{zhang2024tokenbinder:, title={TokenBinder: Text-Video Retrieval with One-to-Many Alignment Paradigm}, author={Bingqing Zhang, Zhuo Cao, Heming Du, Xin Yu, Xue Li, Jiajun Liu and Sen Wang}, journal={arXiv preprint arXiv:2409.19865}, year={2024}, archivePrefix={arXiv}, eprint={2409.19865}, primaryClass={cs.CV} }
zhang2024tokenbinder:
arxiv-663386
2409.19866
A Plug and Play Distributed Secondary Controller for Microgrids with Grid-Forming Inverters
<|reference_start|>A Plug and Play Distributed Secondary Controller for Microgrids with Grid-Forming Inverters: A distributed controller for secondary control problems in microgrids with grid-forming (GFM) inverter-based resources (IBRs) is developed. The controller is based on distributed optimization and is synthesized and implemented distributively enabling each GFM IBR to utilize decentralized measurements and the neighborhood information in the communication network. We present a convergence analysis establishing voltage regulation and reactive power sharing properties. A controller-hardware-in-the-loop experiment is conducted to evaluate the performance of the proposed controller. The experimental results corroborate the efficacy of the proposed distributed controller for secondary control.<|reference_end|>
arxiv
@article{khatana2024a, title={A Plug and Play Distributed Secondary Controller for Microgrids with Grid-Forming Inverters}, author={Vivek Khatana, Soham Chakraborty, and Murti V. Salapaka}, journal={arXiv preprint arXiv:2409.19866}, year={2024}, archivePrefix={arXiv}, eprint={2409.19866}, primaryClass={eess.SY cs.SY} }
khatana2024a
arxiv-663387
2409.19867
Balancing Generalization and Specialization: Offline Metalearning for Bandwidth Estimation
<|reference_start|>Balancing Generalization and Specialization: Offline Metalearning for Bandwidth Estimation: User experience in real-time video applications requires continuously adjusting video encoding bitrates to match available network capacity, which hinges on accurate bandwidth estimation (BWE). However, network heterogeneity prevents a one-size-fits-all solution to BWE, motivating the demand for personalized approaches. Although personalizing BWE algorithms offers benefits such as improved adaptability to individual network conditions, it faces the challenge of data drift -- where estimators degrade over time due to evolving network environments. To address this, we introduce Ivy, a novel method for BWE that leverages offline metalearning to tackle data drift and maximize end-user Quality of Experience (QoE). Our key insight is that dynamically selecting the most suitable BWE algorithm for current network conditions allows for more effective adaptation to changing environments. Ivy is trained entirely offline using Implicit Q-learning, enabling it to learn from individual network conditions without a single, live videoconferencing interaction, thereby reducing deployment complexity and making Ivy more practical for real-world personalization. We implemented our method in a popular videoconferencing application and demonstrated that Ivy can enhance QoE by 5.9% to 11.2% over individual BWE algorithms and by 6.3% to 11.4% compared to existing online meta heuristics.<|reference_end|>
arxiv
@article{gottipati2024balancing, title={Balancing Generalization and Specialization: Offline Metalearning for Bandwidth Estimation}, author={Aashish Gottipati, Sami Khairy, Yasaman Hosseinkashi, Gabriel Mittag, Vishak Gopal, Francis Y. Yan, Ross Cutler}, journal={arXiv preprint arXiv:2409.19867}, year={2024}, archivePrefix={arXiv}, eprint={2409.19867}, primaryClass={cs.NI} }
gottipati2024balancing
arxiv-663388
2409.19868
The Unique Taste of LLMs for Papers: Potential issues in Using LLMs for Digital Library Document Recommendation Tasks
<|reference_start|>The Unique Taste of LLMs for Papers: Potential issues in Using LLMs for Digital Library Document Recommendation Tasks: This paper investigates the performance of several representative large models in the field of literature recommendation and explores potential biases. The results indicate that while some large models' recommendations can be somewhat satisfactory after simple manual screening, overall, the accuracy of these models in specific literature recommendation tasks is generally moderate. Additionally, the models tend to recommend literature that is timely, collaborative, and expands or deepens the field. In scholar recommendation tasks, there is no evidence to suggest that LLMs exacerbate inequalities related to gender, race, or the level of development of countries.<|reference_end|>
arxiv
@article{tian2024the, title={The Unique Taste of LLMs for Papers: Potential issues in Using LLMs for Digital Library Document Recommendation Tasks}, author={Yifan Tian, Yixin Liu, Yi Bu}, journal={arXiv preprint arXiv:2409.19868}, year={2024}, archivePrefix={arXiv}, eprint={2409.19868}, primaryClass={cs.DL} }
tian2024the
arxiv-663389
2409.19869
Edge Intelligence in Satellite-Terrestrial Networks with Hybrid Quantum Computing
<|reference_start|>Edge Intelligence in Satellite-Terrestrial Networks with Hybrid Quantum Computing: This paper exploits the potential of edge intelligence empowered satellite-terrestrial networks, where users' computation tasks are offloaded to the satellites or terrestrial base stations. The computation task offloading in such networks involves the edge cloud selection and bandwidth allocations for the access and backhaul links, which aims to minimize the energy consumption under the delay and satellites' energy constraints. To address it, an alternating direction method of multipliers (ADMM)-inspired algorithm is proposed to decompose the joint optimization problem into small-scale subproblems. Moreover, we develop a hybrid quantum double deep Q-learning (DDQN) approach to optimize the edge cloud selection. This novel deep reinforcement learning architecture enables classical and quantum neural networks to process information in parallel. Simulation results confirm the efficiency of the proposed algorithm, and indicate that the duality gap is tiny and a larger reward can be generated from a few data points compared to the classical DDQN.<|reference_end|>
arxiv
@article{huang2024edge, title={Edge Intelligence in Satellite-Terrestrial Networks with Hybrid Quantum Computing}, author={Siyue Huang, Lifeng Wang, Xin Wang, Bo Tan, Wei Ni and Kai-Kit Wong}, journal={arXiv preprint arXiv:2409.19869}, year={2024}, archivePrefix={arXiv}, eprint={2409.19869}, primaryClass={cs.DC} }
huang2024edge
arxiv-663390
2409.19871
TSI: A Multi-View Representation Learning Approach for Time Series Forecasting
<|reference_start|>TSI: A Multi-View Representation Learning Approach for Time Series Forecasting: With the growing demand for long-sequence time-series forecasting in real-world applications, such as electricity consumption planning, time series forecasting becomes increasingly crucial across various domains. This is highlighted by recent advancements in representation learning within the field. This study introduces a novel multi-view approach for time series forecasting that innovatively integrates trend and seasonal representations with an Independent Component Analysis (ICA)-based representation. Recognizing the limitations of existing methods in representing complex and high-dimensional time series data, this research addresses the challenge by combining TS (trend and seasonality) and ICA (independent components) perspectives. This approach offers a holistic understanding of time series data, going beyond traditional models that often miss nuanced, nonlinear relationships. The efficacy of the TSI model is demonstrated through comprehensive testing on various benchmark datasets, where it shows superior performance over current state-of-the-art models, particularly in multivariate forecasting. This method not only enhances the accuracy of forecasting but also contributes significantly to the field by providing a more in-depth understanding of time series data. This research, which uses ICA as one view, lays the groundwork for further exploration and methodological advancements in time series forecasting, opening new avenues for research and practical applications.<|reference_end|>
arxiv
@article{gao2024tsi:, title={TSI: A Multi-View Representation Learning Approach for Time Series Forecasting}, author={Wentao Gao, Ziqi Xu, Jiuyong Li, Lin Liu, Jixue Liu, Thuc Duy Le, Debo Cheng, Yanchang Zhao, Yun Chen}, journal={arXiv preprint arXiv:2409.19871}, year={2024}, archivePrefix={arXiv}, eprint={2409.19871}, primaryClass={cs.LG cs.AI} }
gao2024tsi:
arxiv-663391
2409.19872
Towards Unified Multimodal Editing with Enhanced Knowledge Collaboration
<|reference_start|>Towards Unified Multimodal Editing with Enhanced Knowledge Collaboration: The swift advancement in Multimodal LLMs (MLLMs) also presents significant challenges for effective knowledge editing. Current methods, including intrinsic knowledge editing and external knowledge resorting, each possess strengths and weaknesses, struggling to balance the desired properties of reliability, generality, and locality when applied to MLLMs. In this paper, we propose UniKE, a novel multimodal editing method that establishes a unified perspective and paradigm for intrinsic knowledge editing and external knowledge resorting. Both types of knowledge are conceptualized as vectorized key-value memories, with the corresponding editing processes resembling the assimilation and accommodation phases of human cognition, conducted at the same semantic levels. Within such a unified framework, we further promote knowledge collaboration by disentangling the knowledge representations into the semantic and truthfulness spaces. Extensive experiments validate the effectiveness of our method, which ensures that the post-edit MLLM simultaneously maintains excellent reliability, generality, and locality. The code for UniKE will be available at \url{https://github.com/beepkh/UniKE}.<|reference_end|>
arxiv
@article{pan2024towards, title={Towards Unified Multimodal Editing with Enhanced Knowledge Collaboration}, author={Kaihang Pan, Zhaoyu Fan, Juncheng Li, Qifan Yu, Hao Fei, Siliang Tang, Richang Hong, Hanwang Zhang, Qianru Sun}, journal={arXiv preprint arXiv:2409.19872}, year={2024}, archivePrefix={arXiv}, eprint={2409.19872}, primaryClass={cs.CV} }
pan2024towards
arxiv-663392
2409.19877
Contrastive Token Learning with Similarity Decay for Repetition Suppression in Machine Translation
<|reference_start|>Contrastive Token Learning with Similarity Decay for Repetition Suppression in Machine Translation: For crosslingual conversation and trade, Neural Machine Translation (NMT) is pivotal yet faces persistent challenges with monotony and repetition in generated content. Traditional solutions that rely on penalizing text redundancy or token reoccurrence have shown limited efficacy, particularly for lengthy articles and e-commerce descriptions with inherent redundancy, even with the advent of Large Language Models (LLMs). This paper investigates the underlying causes of textual repetition through the lens of information entropy, attributing the phenomenon to the elevated uncertainty within the input text. To address this, a novel algorithm named Contrastive Token Learning with Similarity Decay (CTSD) is introduced, which modulates the suppression of tokens dynamically, informed by varying attention weights and inter-token distances. Furthermore, an e-commerce dataset comprising the title texts of real online items, which are susceptible to hallucinated translations, is compiled and released to benchmark the algorithm. Extensive evaluations demonstrate that CTSD significantly outperforms existing approaches in precision and generalizability. Additional online A/B testing underscores its practical value, showing marked improvements in user engagement and conversion. Notably, this method has been implemented with full traffic on eight multilingual sites of alibaba.com, the largest B2B e-commerce platform in the world.<|reference_end|>
arxiv
@article{dai2024contrastive, title={Contrastive Token Learning with Similarity Decay for Repetition Suppression in Machine Translation}, author={Huangyu Dai, Ben Chen, Kaidi Chen, Ying Han, Zihan Liang and Wen Jiang}, journal={arXiv preprint arXiv:2409.19877}, year={2024}, archivePrefix={arXiv}, eprint={2409.19877}, primaryClass={cs.CL cs.AI} }
dai2024contrastive
arxiv-663393
2409.19878
HDMoLE: Mixture of LoRA Experts with Hierarchical Routing and Dynamic Thresholds for Fine-Tuning LLM-based ASR Models
<|reference_start|>HDMoLE: Mixture of LoRA Experts with Hierarchical Routing and Dynamic Thresholds for Fine-Tuning LLM-based ASR Models: Recent advancements in integrating Large Language Models (LLM) with automatic speech recognition (ASR) have performed remarkably in general domains. While supervised fine-tuning (SFT) of all model parameters is often employed to adapt pre-trained LLM-based ASR models to specific domains, it imposes high computational costs and notably reduces their performance in general domains. In this paper, we propose a novel parameter-efficient multi-domain fine-tuning method for adapting pre-trained LLM-based ASR models to multi-accent domains without catastrophic forgetting, named \textit{HDMoLE}, which leverages hierarchical routing and dynamic thresholds based on combining low-rank adaptation (LoRA) with the mixture of experts (MoE) and can be generalized to any linear layer. Hierarchical routing establishes a clear correspondence between LoRA experts and accent domains, improving cross-domain collaboration among the LoRA experts. Unlike the static Top-K strategy for activating LoRA experts, dynamic thresholds can adaptively activate varying numbers of LoRA experts at each MoE layer. Experiments on the multi-accent and standard Mandarin datasets demonstrate the efficacy of HDMoLE. Applying HDMoLE to an LLM-based ASR model projector module achieves similar performance to full fine-tuning in the target multi-accent domains while using only 9.6% of the trainable parameters required for full fine-tuning and minimal degradation in the source general domain.<|reference_end|>
arxiv
@article{mu2024hdmole:, title={HDMoLE: Mixture of LoRA Experts with Hierarchical Routing and Dynamic Thresholds for Fine-Tuning LLM-based ASR Models}, author={Bingshen Mu, Kun Wei, Qijie Shao, Yong Xu, Lei Xie}, journal={arXiv preprint arXiv:2409.19878}, year={2024}, archivePrefix={arXiv}, eprint={2409.19878}, primaryClass={cs.SD eess.AS} }
mu2024hdmole:
arxiv-663394
2409.19881
Estimation of Constraint Admissible Invariant Set with Neural Lyapunov Function
<|reference_start|>Estimation of Constraint Admissible Invariant Set with Neural Lyapunov Function: Constraint admissible positively invariant (CAPI) sets play a pivotal role in ensuring safety in control and planning applications, such as the recursive feasibility guarantee of explicit reference governor and model predictive control. However, existing methods for finding CAPI sets for nonlinear systems are often limited to single equilibria or specific system dynamics. This limitation underscores the necessity for a method to construct a CAPI set for general reference tracking control and a broader range of systems. In this work, we leverage recent advancements in learning-based methods to derive Lyapunov functions, particularly focusing on those with piecewise-affine activation functions. Previous attempts to find an invariant set with the piecewise-affine neural Lyapunov function have focused on the estimation of the region of attraction with mixed integer programs. We propose a methodology to determine the maximal CAPI set for any reference with the neural Lyapunov function by transforming the problem into multiple linear programs. Additionally, to enhance applicability in real-time control scenarios, we introduce a learning-based approach to train the estimator, which infers the CAPI set from a given reference. The proposed approach is validated with multiple simulations to show that it can generate a valid CAPI set with the given neural Lyapunov functions for any reference. We also employ the proposed CAPI set estimation method in the explicit reference governor and demonstrate its effectiveness for constrained control.<|reference_end|>
arxiv
@article{kim2024estimation, title={Estimation of Constraint Admissible Invariant Set with Neural Lyapunov Function}, author={Dabin Kim and H. Jin Kim}, journal={arXiv preprint arXiv:2409.19881}, year={2024}, archivePrefix={arXiv}, eprint={2409.19881}, primaryClass={eess.SY cs.SY} }
kim2024estimation
arxiv-663395
2409.19882
Tannenbaum's gain-margin optimization meets Polyak's heavy-ball algorithm
<|reference_start|>Tannenbaum's gain-margin optimization meets Polyak's heavy-ball algorithm: The paper highlights a relatively unknown link between algorithm design in optimization and control synthesis in robust control. Specifically, quadratic optimization can be recast as a regulation problem within the framework of $\mathcal{H}_\infty$ control. From this vantage point, the optimality of Polyak's fastest heavy-ball algorithm can be ascertained as a solution to a gain margin optimization problem. The approach is independent of Polyak's original and brilliant argument, yet simpler, and relies on the foundational work by Tannenbaum that introduced and solved the gain margin optimization via Nevanlinna--Pick interpolation theory. The link between first-order optimization methods and robust control theory sheds new light into limits of algorithmic performance for such methods, and suggests a new framework where similar computational problems can be systematically studied and algorithms optimized. In particular, it raises the question as to whether periodically scheduled algorithms can achieve faster rates for quadratic optimization, in a manner analogous to periodic control that extends gain margin beyond that of time-invariant control. This turns out not to be the case, due to the analytic obstruction of a transmission zero that is inherent in causal optimization algorithms. Interestingly, this obstruction can be removed with implicit algorithms, cast in a similar manner as feedback regulation problems with causal, but not strictly causal dynamics, thereby devoid of the transmission zero at infinity and able to achieve superior convergence rates. The confluence of the fields of optimization algorithms and control provides a frame to tackle questions pertaining to speed, accuracy, distributed computation, and so forth, and to delineate respective limits to performance and tradeoffs in a systematic manner, utilizing the formalism of robust control.<|reference_end|>
arxiv
@article{wu2024tannenbaum's, title={Tannenbaum's gain-margin optimization meets Polyak's heavy-ball algorithm}, author={Wuwei Wu, Jie Chen, Mihailo R. Jovanović, and Tryphon T. Georgiou}, journal={arXiv preprint arXiv:2409.19882}, year={2024}, archivePrefix={arXiv}, eprint={2409.19882}, primaryClass={eess.SY cs.NA cs.SY math.NA math.OC} }
wu2024tannenbaum's
arxiv-663396
2409.19883
Optimal RANDAO Manipulation in Ethereum
<|reference_start|>Optimal RANDAO Manipulation in Ethereum: It is well-known that RANDAO manipulation is possible in Ethereum if an adversary controls the proposers assigned to the last slots in an epoch. We provide a methodology to compute, for any fraction $\alpha$ of stake owned by an adversary, the maximum fraction $f(\alpha)$ of rounds that a strategic adversary can propose. We further implement our methodology and compute $f(\cdot)$ for all $\alpha$. For example, we conclude that an optimal strategic participant with $5\%$ of the stake can propose a $5.048\%$ fraction of rounds, $10\%$ of the stake can propose a $10.19\%$ fraction of rounds, and $20\%$ of the stake can propose a $20.68\%$ fraction of rounds.<|reference_end|>
arxiv
@article{alpturer2024optimal, title={Optimal RANDAO Manipulation in Ethereum}, author={Kaya Alpturer, S. Matthew Weinberg}, journal={arXiv preprint arXiv:2409.19883}, year={2024}, archivePrefix={arXiv}, eprint={2409.19883}, primaryClass={cs.GT cs.CR} }
alpturer2024optimal
arxiv-663397
2409.19884
SWIM: Short-Window CNN Integrated with Mamba for EEG-Based Auditory Spatial Attention Decoding
<|reference_start|>SWIM: Short-Window CNN Integrated with Mamba for EEG-Based Auditory Spatial Attention Decoding: In complex auditory environments, the human auditory system possesses the remarkable ability to focus on a specific speaker while disregarding others. In this study, a new model named SWIM, a short-window convolution neural network (CNN) integrated with Mamba, is proposed for identifying the locus of auditory attention (left or right) from electroencephalography (EEG) signals without relying on speech envelopes. SWIM consists of two parts. The first is a short-window CNN (SW$_\text{CNN}$), which acts as a short-term EEG feature extractor and achieves a final accuracy of 84.9% in the leave-one-speaker-out setup on the widely used KUL dataset. This improvement is due to the use of an improved CNN structure, data augmentation, multitask training, and model combination. The second part, Mamba, is a sequence model first applied to auditory spatial attention decoding to leverage the long-term dependency from previous SW$_\text{CNN}$ time steps. By jointly training SW$_\text{CNN}$ and Mamba, the proposed SWIM structure uses both short-term and long-term information and achieves an accuracy of 86.2%, which reduces the classification errors by a relative 31.0% compared to the previous state-of-the-art result. The source code is available at https://github.com/windowso/SWIM-ASAD.<|reference_end|>
arxiv
@article{zhang2024swim:, title={SWIM: Short-Window CNN Integrated with Mamba for EEG-Based Auditory Spatial Attention Decoding}, author={Ziyang Zhang, Andrew Thwaites, Alexandra Woolgar, Brian Moore, Chao Zhang}, journal={arXiv preprint arXiv:2409.19884}, year={2024}, archivePrefix={arXiv}, eprint={2409.19884}, primaryClass={eess.AS cs.AI cs.SD eess.SP} }
zhang2024swim:
arxiv-663398
2409.19886
RouterDC: Query-Based Router by Dual Contrastive Learning for Assembling Large Language Models
<|reference_start|>RouterDC: Query-Based Router by Dual Contrastive Learning for Assembling Large Language Models: Recent works show that assembling multiple off-the-shelf large language models (LLMs) can harness their complementary abilities. To achieve this, routing is a promising method, which learns a router to select the most suitable LLM for each query. However, existing routing models are ineffective when multiple LLMs perform well for a query. To address this problem, in this paper, we propose a method called query-based Router by Dual Contrastive learning (RouterDC). The RouterDC model consists of an encoder and LLM embeddings, and we propose two contrastive learning losses to train the RouterDC model. Experimental results show that RouterDC is effective in assembling LLMs and largely outperforms individual top-performing LLMs as well as existing routing methods on both in-distribution (+2.76\%) and out-of-distribution (+1.90\%) tasks. Source code is available at https://github.com/shuhao02/RouterDC.<|reference_end|>
arxiv
@article{chen2024routerdc:, title={RouterDC: Query-Based Router by Dual Contrastive Learning for Assembling Large Language Models}, author={Shuhao Chen, Weisen Jiang, Baijiong Lin, James T. Kwok, and Yu Zhang}, journal={arXiv preprint arXiv:2409.19886}, year={2024}, archivePrefix={arXiv}, eprint={2409.19886}, primaryClass={cs.LG cs.AI cs.CL} }
chen2024routerdc:
arxiv-663399
2409.19890
Universal Medical Image Representation Learning with Compositional Decoders
<|reference_start|>Universal Medical Image Representation Learning with Compositional Decoders: Visual-language models have advanced the development of universal models, yet their application in medical imaging remains constrained by specific functional requirements and limited data. Current general-purpose models are typically designed with task-specific branches and heads, which restricts the shared feature space and the flexibility of the model. To address these challenges, we have developed a decomposed-composed universal medical imaging paradigm (UniMed) that supports tasks at all levels. To this end, we first propose a decomposed decoder that can predict two types of outputs -- pixel and semantic, based on a defined input queue. Additionally, we introduce a composed decoder that unifies the input and output spaces and standardizes task annotations across different levels into a discrete token format. The coupled design of these two components enables the model to flexibly combine tasks and mutual benefits. Moreover, our joint representation learning strategy skilfully leverages large amounts of unlabeled data and unsupervised loss, achieving efficient one-stage pretraining for more robust performance. Experimental results show that UniMed achieves state-of-the-art performance on eight datasets across all three tasks and exhibits strong zero-shot and 100-shot transferability. We will release the code and trained models upon the paper's acceptance.<|reference_end|>
arxiv
@article{wang2024universal, title={Universal Medical Image Representation Learning with Compositional Decoders}, author={Kaini Wang, Ling Yang, Siping Zhou, Guangquan Zhou, Wentao Zhang, Bin Cui, Shuo Li}, journal={arXiv preprint arXiv:2409.19890}, year={2024}, archivePrefix={arXiv}, eprint={2409.19890}, primaryClass={cs.CV} }
wang2024universal
arxiv-663400
2409.19891
Opt-in Camera: Person Identification in Video via UWB Localization and Its Application to Opt-in Systems
<|reference_start|>Opt-in Camera: Person Identification in Video via UWB Localization and Its Application to Opt-in Systems: This paper presents opt-in camera, a concept of privacy-preserving camera systems capable of recording only specific individuals in a crowd who explicitly consent to be recorded. Our system utilizes a mobile wireless communication tag attached to personal belongings as proof of opt-in and as a means of localizing tag carriers in video footage. Specifically, the on-ground positions of the wireless tag are first tracked over time using the unscented Kalman filter (UKF). The tag trajectory is then matched against visual tracking results for pedestrians found in videos to identify the tag carrier. Technically, we devise a dedicated trajectory matching technique based on constrained linear optimization, as well as a novel calibration technique that handles wireless tag-camera calibration and hyperparameter tuning for the UKF, which mitigates the non-line-of-sight (NLoS) issue in wireless localization. We realize the proposed opt-in camera system using ultra-wideband (UWB) devices and an off-the-shelf webcam installed in the environment. Experimental results demonstrate that our system can perform opt-in recording of individuals in near real-time at 10 fps, with reliable identification accuracy for a crowd of 8-23 people in a confined space.<|reference_end|>
arxiv
@article{ishige2024opt-in, title={Opt-in Camera: Person Identification in Video via UWB Localization and Its Application to Opt-in Systems}, author={Matthew Ishige, Yasuhiro Yoshimura, and Ryo Yonetani}, journal={arXiv preprint arXiv:2409.19891}, year={2024}, archivePrefix={arXiv}, eprint={2409.19891}, primaryClass={cs.RO} }
ishige2024opt-in