corpus_id (string, 7-12 chars) | paper_id (string, 9-16 chars) | title (string, 1-261 chars) | abstract (string, 70-4.02k chars) | source (1 class: "arxiv") | bibtex (string, 208-20.9k chars) | citation_key (string, 6-100 chars) |
---|---|---|---|---|---|---|
arxiv-667701 | 2410.07177 | MM-Ego: Towards Building Egocentric Multimodal LLMs | <|reference_start|>MM-Ego: Towards Building Egocentric Multimodal LLMs: This research aims to comprehensively explore building a multimodal foundation model for egocentric video understanding. To achieve this goal, we work on three fronts. First, as there is a lack of QA data for egocentric video understanding, we develop a data engine that efficiently generates 7M high-quality QA samples for egocentric videos ranging from 30 seconds to one hour long, based on human-annotated data. This is currently the largest egocentric QA dataset. Second, we contribute a challenging egocentric QA benchmark with 629 videos and 7,026 questions to evaluate the models' ability in recognizing and memorizing visual details across videos of varying lengths. We introduce a new de-biasing evaluation method to help mitigate the unavoidable language bias present in the models being evaluated. Third, we propose a specialized multimodal architecture featuring a novel "Memory Pointer Prompting" mechanism. This design includes a global glimpse step to gain an overarching understanding of the entire video and identify key visual information, followed by a fallback step that utilizes the key visual information to generate responses. This enables the model to more effectively comprehend extended video content. With the data, benchmark, and model, we successfully build MM-Ego, an egocentric multimodal LLM that shows powerful performance on egocentric video understanding.<|reference_end|> | arxiv | @article{ye2024mm-ego:,
title={MM-Ego: Towards Building Egocentric Multimodal LLMs},
author={Hanrong Ye, Haotian Zhang, Erik Daxberger, Lin Chen, Zongyu Lin,
Yanghao Li, Bowen Zhang, Haoxuan You, Dan Xu, Zhe Gan, Jiasen Lu, Yinfei Yang},
journal={arXiv preprint arXiv:2410.07177},
year={2024},
archivePrefix={arXiv},
eprint={2410.07177},
primaryClass={cs.CV cs.AI cs.LG}
} | ye2024mm-ego: |
arxiv-667702 | 2410.07182 | The trade-off between data minimization and fairness in collaborative filtering | <|reference_start|>The trade-off between data minimization and fairness in collaborative filtering: General Data Protection Regulations (GDPR) aim to safeguard individuals' personal information from harm. While full compliance is mandatory in the European Union and the California Privacy Rights Act (CPRA), it is not in other places. GDPR requires simultaneous compliance with all the principles such as fairness, accuracy, and data minimization. However, it overlooks the potential contradictions within its principles. This matter gets even more complex when compliance is required from decision-making systems. Therefore, it is essential to investigate the feasibility of simultaneously achieving the goals of GDPR and machine learning, and the potential tradeoffs that might be forced upon us. This paper studies the relationship between the principles of data minimization and fairness in recommender systems. We operationalize data minimization via active learning (AL) because, unlike many other methods, it can preserve a high accuracy while allowing for strategic data collection, hence minimizing the amount of data collection. We have implemented several active learning strategies (personalized and non-personalized) and conducted a comparative analysis focusing on accuracy and fairness on two publicly available datasets. The results demonstrate that different AL strategies may have different impacts on the accuracy of recommender systems with nearly all strategies negatively impacting fairness. There has been no to very limited work on the trade-off between data minimization and fairness, the pros and cons of active learning methods as tools for implementing data minimization, and the potential impacts of AL on fairness. By exploring these critical aspects, we offer valuable insights for developing recommender systems that are GDPR compliant.<|reference_end|> | arxiv | @article{sonboli2024the,
title={The trade-off between data minimization and fairness in collaborative
filtering},
author={Nasim Sonboli, Sipei Li, Mehdi Elahi, Asia Biega},
journal={arXiv preprint arXiv:2410.07182},
year={2024},
archivePrefix={arXiv},
eprint={2410.07182},
primaryClass={cs.IR cs.CY cs.LG}
} | sonboli2024the |
arxiv-667703 | 2410.07185 | Margin-bounded Confidence Scores for Out-of-Distribution Detection | <|reference_start|>Margin-bounded Confidence Scores for Out-of-Distribution Detection: In many critical Machine Learning applications, such as autonomous driving and medical image diagnosis, the detection of out-of-distribution (OOD) samples is as crucial as accurately classifying in-distribution (ID) inputs. Recently Outlier Exposure (OE) based methods have shown promising results in detecting OOD inputs via model fine-tuning with auxiliary outlier data. However, most of the previous OE-based approaches emphasize more on synthesizing extra outlier samples or introducing regularization to diversify OOD sample space, which is rather unquantifiable in practice. In this work, we propose a novel and straightforward method called Margin bounded Confidence Scores (MaCS) to address the nontrivial OOD detection problem by enlarging the disparity between ID and OOD scores, which in turn makes the decision boundary more compact facilitating effective segregation with a simple threshold. Specifically, we augment the learning objective of an OE regularized classifier with a supplementary constraint, which penalizes high confidence scores for OOD inputs compared to that of ID and significantly enhances the OOD detection performance while maintaining the ID classification accuracy. Extensive experiments on various benchmark datasets for image classification tasks demonstrate the effectiveness of the proposed method by significantly outperforming state-of-the-art (S.O.T.A) methods on various benchmarking metrics. The code is publicly available at https://github.com/lakpa-tamang9/margin_ood<|reference_end|> | arxiv | @article{tamang2024margin-bounded,
title={Margin-bounded Confidence Scores for Out-of-Distribution Detection},
author={Lakpa D. Tamang, Mohamed Reda Bouadjenek, Richard Dazeley, and Sunil
Aryal},
journal={arXiv preprint arXiv:2410.07185},
year={2024},
archivePrefix={arXiv},
eprint={2410.07185},
primaryClass={cs.CV}
} | tamang2024margin-bounded |
arxiv-667704 | 2410.07189 | Dual Stream Graph Transformer Fusion Networks for Enhanced Brain Decoding | <|reference_start|>Dual Stream Graph Transformer Fusion Networks for Enhanced Brain Decoding: This paper presents the novel Dual Stream Graph-Transformer Fusion (DS-GTF) architecture designed specifically for classifying task-based Magnetoencephalography (MEG) data. In the spatial stream, inputs are initially represented as graphs, which are then passed through graph attention networks (GAT) to extract spatial patterns. Two methods, TopK and Thresholded Adjacency are introduced for initializing the adjacency matrix used in the GAT. In the temporal stream, the Transformer Encoder receives concatenated windowed input MEG data and learns new temporal representations. The learned temporal and spatial representations from both streams are fused before reaching the output layer. Experimental results demonstrate an enhancement in classification performance and a reduction in standard deviation across multiple test subjects compared to other examined models.<|reference_end|> | arxiv | @article{goene2024dual,
title={Dual Stream Graph Transformer Fusion Networks for Enhanced Brain
Decoding},
author={Lucas Goene and Siamak Mehrkanoon},
journal={arXiv preprint arXiv:2410.07189},
year={2024},
doi={10.14428/esann/2024.ES2024-23},
archivePrefix={arXiv},
eprint={2410.07189},
primaryClass={eess.SP cs.LG q-bio.NC}
} | goene2024dual |
arxiv-667705 | 2410.07190 | Designing Pre-training Datasets from Unlabeled Data for EEG Classification with Transformers | <|reference_start|>Designing Pre-training Datasets from Unlabeled Data for EEG Classification with Transformers: Transformer neural networks require a large amount of labeled data to train effectively. Such data is often scarce in electroencephalography, as annotations made by medical experts are costly. This is why self-supervised training, using unlabeled data, has to be performed beforehand. In this paper, we present a way to design several labeled datasets from unlabeled electroencephalogram (EEG) data. These can then be used to pre-train transformers to learn representations of EEG signals. We tested this method on an epileptic seizure forecasting task on the Temple University Seizure Detection Corpus using a Multi-channel Vision Transformer. Our results suggest that 1) Models pre-trained using our approach demonstrate significantly faster training times, reducing fine-tuning duration by more than 50% for the specific task, and 2) Pre-trained models exhibit improved accuracy, with an increase from 90.93% to 92.16%, as well as a higher AUC, rising from 0.9648 to 0.9702 when compared to non-pre-trained models.<|reference_end|> | arxiv | @article{bary2024designing,
title={Designing Pre-training Datasets from Unlabeled Data for EEG
Classification with Transformers},
author={Tim Bary, Benoit Macq},
journal={arXiv preprint arXiv:2410.07190},
year={2024},
doi={10.1109/MELECON56669.2024.10608657},
archivePrefix={arXiv},
eprint={2410.07190},
primaryClass={eess.SP cs.LG}
} | bary2024designing |
arxiv-667706 | 2410.07191 | Curb Your Attention: Causal Attention Gating for Robust Trajectory Prediction in Autonomous Driving | <|reference_start|>Curb Your Attention: Causal Attention Gating for Robust Trajectory Prediction in Autonomous Driving: Trajectory prediction models in autonomous driving are vulnerable to perturbations from non-causal agents whose actions should not affect the ego-agent's behavior. Such perturbations can lead to incorrect predictions of other agents' trajectories, potentially compromising the safety and efficiency of the ego-vehicle's decision-making process. Motivated by this challenge, we propose $\textit{Causal tRajecTory predICtion}$ $\textbf{(CRiTIC)}$, a novel model that utilizes a $\textit{Causal Discovery Network}$ to identify inter-agent causal relations over a window of past time steps. To incorporate discovered causal relationships, we propose a novel $\textit{Causal Attention Gating}$ mechanism to selectively filter information in the proposed Transformer-based architecture. We conduct extensive experiments on two autonomous driving benchmark datasets to evaluate the robustness of our model against non-causal perturbations and its generalization capacity. Our results indicate that the robustness of predictions can be improved by up to $\textbf{54%}$ without a significant detriment to prediction accuracy. Lastly, we demonstrate the superior domain generalizability of the proposed model, which achieves up to $\textbf{29%}$ improvement in cross-domain performance. These results underscore the potential of our model to enhance both robustness and generalization capacity for trajectory prediction in diverse autonomous driving domains. Further details can be found on our project page: https://critic-model.github.io/.<|reference_end|> | arxiv | @article{ahmadi2024curb,
title={Curb Your Attention: Causal Attention Gating for Robust Trajectory
Prediction in Autonomous Driving},
author={Ehsan Ahmadi, Ray Mercurius, Soheil Alizadeh, Kasra Rezaee, Amir
Rasouli},
journal={arXiv preprint arXiv:2410.07191},
year={2024},
archivePrefix={arXiv},
eprint={2410.07191},
primaryClass={cs.RO cs.LG stat.ME}
} | ahmadi2024curb |
arxiv-667707 | 2410.07192 | PipeFill: Using GPUs During Bubbles in Pipeline-parallel LLM Training | <|reference_start|>PipeFill: Using GPUs During Bubbles in Pipeline-parallel LLM Training: Training Deep Neural Networks (DNNs) with billions of parameters generally involves pipeline-parallel (PP) execution. Unfortunately, PP model training can use GPUs inefficiently, especially at large scale, due to idle GPU time caused by pipeline bubbles, which are often 15-30% and can exceed 60% of the training job's GPU allocation. To improve the GPU utilization of PP model training, this paper describes PipeFill, which fills pipeline bubbles with execution of other pending jobs. By leveraging bubble GPU time, PipeFill reduces the GPU utilization sacrifice associated with scaling-up of large-model training. To context-switch between fill jobs and the main training job with minimal overhead to the main job, and maximize fill job efficiency, PipeFill carefully fits fill job work to measured bubble durations and GPU memory availability, introduces explicit pipeline-bubble instructions, and orchestrates placement and execution of fill jobs in pipeline bubbles. Experiments show that PipeFill can increase overall utilization by up to 63% for GPUs used in large-scale LLM training, with <2% slowdown of the training job, and 5-15% even for low-scale LLM training. For large-scale LLM training on 8K GPUs, the 63% increase translates to up to 2.6K additional GPUs worth of work completed.<|reference_end|> | arxiv | @article{arfeen2024pipefill:,
title={PipeFill: Using GPUs During Bubbles in Pipeline-parallel LLM Training},
author={Daiyaan Arfeen, Zhen Zhang, Xinwei Fu, Gregory R. Ganger, Yida Wang},
journal={arXiv preprint arXiv:2410.07192},
year={2024},
archivePrefix={arXiv},
eprint={2410.07192},
primaryClass={cs.DC cs.LG}
} | arfeen2024pipefill: |
arxiv-667708 | 2410.07194 | Technical Report: Competition Solution For Modelscope-Sora | <|reference_start|>Technical Report: Competition Solution For Modelscope-Sora: This report presents the approach adopted in the Modelscope-Sora challenge, which focuses on fine-tuning data for video generation models. The challenge evaluates participants' ability to analyze, clean, and generate high-quality datasets for video-based text-to-video tasks under specific computational constraints. The provided methodology involves data processing techniques such as video description generation, filtering, and acceleration. This report outlines the procedures and tools utilized to enhance the quality of training data, ensuring improved performance in text-to-video generation models.<|reference_end|> | arxiv | @article{chen2024technical,
title={Technical Report: Competition Solution For Modelscope-Sora},
author={Shengfu Chen and Hailong Liu and Wenzhao Wei},
journal={arXiv preprint arXiv:2410.07194},
year={2024},
archivePrefix={arXiv},
eprint={2410.07194},
primaryClass={cs.CV cs.AI}
} | chen2024technical |
arxiv-667709 | 2410.07196 | EEGUnity: Open-Source Tool in Facilitating Unified EEG Datasets Towards Large-Scale EEG Model | <|reference_start|>EEGUnity: Open-Source Tool in Facilitating Unified EEG Datasets Towards Large-Scale EEG Model: The increasing number of dispersed EEG dataset publications and the advancement of large-scale Electroencephalogram (EEG) models have increased the demand for practical tools to manage diverse EEG datasets. However, the inherent complexity of EEG data, characterized by variability in content data, metadata, and data formats, poses challenges for integrating multiple datasets and conducting large-scale EEG model research. To tackle the challenges, this paper introduces EEGUnity, an open-source tool that incorporates modules of 'EEG Parser', 'Correction', 'Batch Processing', and 'Large Language Model Boost'. Leveraging the functionality of such modules, EEGUnity facilitates the efficient management of multiple EEG datasets, such as intelligent data structure inference, data cleaning, and data unification. In addition, the capabilities of EEGUnity ensure high data quality and consistency, providing a reliable foundation for large-scale EEG data research. EEGUnity is evaluated across 25 EEG datasets from different sources, offering several typical batch processing workflows. The results demonstrate the high performance and flexibility of EEGUnity in parsing and data processing. The project code is publicly available at github.com/Baizhige/EEGUnity.<|reference_end|> | arxiv | @article{qin2024eegunity:,
title={EEGUnity: Open-Source Tool in Facilitating Unified EEG Datasets Towards
Large-Scale EEG Model},
author={Chengxuan Qin and Rui Yang and Wenlong You and Zhige Chen and
Longsheng Zhu and Mengjie Huang and Zidong Wang},
journal={arXiv preprint arXiv:2410.07196},
year={2024},
archivePrefix={arXiv},
eprint={2410.07196},
primaryClass={eess.SP cs.LG}
} | qin2024eegunity: |
arxiv-667710 | 2410.07199 | Towards Explainable Graph Neural Networks for Neurological Evaluation on EEG Signals | <|reference_start|>Towards Explainable Graph Neural Networks for Neurological Evaluation on EEG Signals: After an acute stroke, accurately estimating stroke severity is crucial for healthcare professionals to effectively manage patient's treatment. Graph theory methods have shown that brain connectivity undergoes frequency-dependent reorganization post-stroke, adapting to new conditions. Traditional methods often rely on handcrafted features that may not capture the complexities of clinical phenomena. In this study, we propose a novel approach using Graph Neural Networks (GNNs) to predict stroke severity, as measured by the NIH Stroke Scale (NIHSS). We analyzed electroencephalography (EEG) recordings from 71 patients at the time of hospitalization. For each patient, we generated five graphs weighted by Lagged Linear Coherence (LLC) between signals from distinct Brodmann Areas, covering $\delta$ (2-4 Hz), $\theta$ (4-8 Hz), $\alpha_1$ (8-10.5 Hz), $\alpha_2$ (10.5-13 Hz), and $\beta_1$ (13-20 Hz) frequency bands. To emphasize key neurological connections and maintain sparsity, we applied a sparsification process based on structural and functional brain network properties. We then trained a graph attention model to predict the NIHSS. By examining its attention coefficients, our model reveals insights into brain reconfiguration, providing clinicians with a valuable tool for diagnosis, personalized treatment, and early intervention in neurorehabilitation.<|reference_end|> | arxiv | @article{protani2024towards,
title={Towards Explainable Graph Neural Networks for Neurological Evaluation on
EEG Signals},
author={Andrea Protani, Lorenzo Giusti, Chiara Iacovelli, Albert Sund Aillet,
Diogo Reis Santos, Giuseppe Reale, Aurelia Zauli, Marco Moci, Marta
Garbuglia, Pierpaolo Brutti, Pietro Caliandro, Luigi Serio},
journal={arXiv preprint arXiv:2410.07199},
year={2024},
archivePrefix={arXiv},
eprint={2410.07199},
primaryClass={eess.SP cs.LG q-bio.NC}
} | protani2024towards |
arxiv-667711 | 2410.07200 | A Realistic Model Reference Computed Torque Control Strategy for Human Lower Limb Exoskeletons | <|reference_start|>A Realistic Model Reference Computed Torque Control Strategy for Human Lower Limb Exoskeletons: Exoskeleton robots have become a promising tool in neurorehabilitation, offering effective physical therapy and recovery monitoring. The success of these therapies relies on precise motion control systems. Although computed torque control based on inverse dynamics provides a robust theoretical foundation, its practical application in rehabilitation is limited by its sensitivity to model accuracy, making it less effective when dealing with unpredictable payloads. To overcome these limitations, this study introduces a novel model reference computed torque controller that accounts for parametric uncertainties while optimizing computational efficiency. A dynamic model of a seven-degree-of-freedom human lower limb exoskeleton is developed, incorporating a realistic joint friction model to accurately reflect the physical behavior of the robot. To reduce computational demands, the control system is split into two loops: a slower loop that predicts joint torque requirements based on input trajectories and robot dynamics, and a faster PID loop that corrects trajectory tracking errors. Coriolis and centrifugal forces are excluded from the model due to their minimal impact on system dynamics relative to their computational cost. Experimental results show high accuracy in trajectory tracking, and statistical analyses confirm the controller's robustness and effectiveness in handling parametric uncertainties. This approach presents a promising advancement for improving the stability and performance of exoskeleton-based neurorehabilitation.<|reference_end|> | arxiv | @article{hasan2024a,
title={A Realistic Model Reference Computed Torque Control Strategy for Human
Lower Limb Exoskeletons},
author={SK Hasan},
journal={arXiv preprint arXiv:2410.07200},
year={2024},
archivePrefix={arXiv},
eprint={2410.07200},
primaryClass={cs.RO cs.SY eess.SY}
} | hasan2024a |
arxiv-667712 | 2410.07201 | SpaRG: Sparsely Reconstructed Graphs for Generalizable fMRI Analysis | <|reference_start|>SpaRG: Sparsely Reconstructed Graphs for Generalizable fMRI Analysis: Deep learning can help uncover patterns in resting-state functional Magnetic Resonance Imaging (rs-fMRI) associated with psychiatric disorders and personal traits. Yet the problem of interpreting deep learning findings is rarely more evident than in fMRI analyses, as the data is sensitive to scanning effects and inherently difficult to visualize. We propose a simple approach to mitigate these challenges grounded on sparsification and self-supervision. Instead of extracting post-hoc feature attributions to uncover functional connections that are important to the target task, we identify a small subset of highly informative connections during training and occlude the rest. To this end, we jointly train a (1) sparse input mask, (2) variational autoencoder (VAE), and (3) downstream classifier in an end-to-end fashion. While we need a portion of labeled samples to train the classifier, we optimize the sparse mask and VAE with unlabeled data from additional acquisition sites, retaining only the input features that generalize well. We evaluate our method - Sparsely Reconstructed Graphs (SpaRG) - on the public ABIDE dataset for the task of sex classification, training with labeled cases from 18 sites and adapting the model to two additional out-of-distribution sites with a portion of unlabeled samples. For a relatively coarse parcellation (64 regions), SpaRG utilizes only 1% of the original connections while improving the classification accuracy across domains. Our code can be found at github.com/yanismiraoui/SpaRG.<|reference_end|> | arxiv | @article{gonzález2024sparg:,
title={SpaRG: Sparsely Reconstructed Graphs for Generalizable fMRI Analysis},
author={Camila Gonz\'alez, Yanis Miraoui, Yiran Fan, Ehsan Adeli and Kilian M.
Pohl},
journal={arXiv preprint arXiv:2410.07201},
year={2024},
archivePrefix={arXiv},
eprint={2410.07201},
primaryClass={cs.CV cs.LG}
} | gonzález2024sparg: |
arxiv-667713 | 2410.07202 | Approxify: Automating Energy-Accuracy Trade-offs in Batteryless IoT Devices | <|reference_start|>Approxify: Automating Energy-Accuracy Trade-offs in Batteryless IoT Devices: Batteryless IoT devices, powered by energy harvesting, face significant challenges in maintaining operational efficiency and reliability due to intermittent power availability. Traditional checkpointing mechanisms, while essential for preserving computational state, introduce considerable energy and time overheads. This paper introduces Approxify, an automated framework that significantly enhances the sustainability and performance of batteryless IoT networks by reducing energy consumption by approximately 40% through intelligent approximation techniques. \tool balances energy efficiency with computational accuracy, ensuring reliable operation without compromising essential functionalities. Our evaluation of applications, SUSAN and Link Quality Indicator (LQI), demonstrates significant reductions in checkpoint frequency and energy usage while maintaining acceptable error bounds.<|reference_end|> | arxiv | @article{soomro2024approxify:,
title={Approxify: Automating Energy-Accuracy Trade-offs in Batteryless IoT
Devices},
author={Muhammad Abdullah Soomro, Naveed Anwar Bhatti, Muhammad Hamad Alizai},
journal={arXiv preprint arXiv:2410.07202},
year={2024},
archivePrefix={arXiv},
eprint={2410.07202},
primaryClass={eess.SP cs.SY eess.SY}
} | soomro2024approxify: |
arxiv-667714 | 2410.07205 | Parametric probabilistic approach for cumulative fatigue damage using double linear damage rule considering limited data | <|reference_start|>Parametric probabilistic approach for cumulative fatigue damage using double linear damage rule considering limited data: This work proposes a parametric probabilistic approach to model damage accumulation using the double linear damage rule (DLDR) considering the existence of limited experimental fatigue data. A probabilistic version of DLDR is developed in which the joint distribution of the knee-point coordinates is obtained as a function of the joint distribution of the DLDR model input parameters. Considering information extracted from experiments containing a limited number of data points, an uncertainty quantification framework based on the Maximum Entropy Principle and Monte Carlo simulations is proposed to determine the distribution of fatigue life. The proposed approach is validated using fatigue life experiments available in the literature.<|reference_end|> | arxiv | @article{dias2024parametric,
title={Parametric probabilistic approach for cumulative fatigue damage using
double linear damage rule considering limited data},
author={Jo\~ao Paulo Dias, Stephen Ekwaro-Osire, Americo Cunha Jr, Shweta
Dabetwar, Abraham Nispel, Fisseha M. Alemayehu, Haileyesus B. Endeshaw},
journal={International Journal of Fatigue, vol. 127, pp. 246-258, 2019},
year={2024},
doi={10.1016/j.ijfatigue.2019.06.011},
archivePrefix={arXiv},
eprint={2410.07205},
primaryClass={cs.CE}
} | dias2024parametric |
arxiv-667715 | 2410.07208 | An Analysis of Minimum Error Entropy Loss Functions in Wireless Communications | <|reference_start|>An Analysis of Minimum Error Entropy Loss Functions in Wireless Communications: This paper introduces the minimum error entropy (MEE) criterion as an advanced information-theoretic loss function tailored for deep learning applications in wireless communications. The MEE criterion leverages higher-order statistical properties, offering robustness in noisy scenarios like Rayleigh fading and impulsive interference. In addition, we propose a less computationally complex version of the MEE function to enhance practical usability in wireless communications. The method is evaluated through simulations on two critical applications: over-the-air regression and indoor localization. Results indicate that the MEE criterion outperforms conventional loss functions, such as mean squared error (MSE) and mean absolute error (MAE), achieving significant performance improvements in terms of accuracy, over $20 \%$ gain over traditional methods, and convergence speed across various channel conditions. This work establishes MEE as a promising alternative for wireless communication tasks in deep learning models, enabling better resilience and adaptability.<|reference_end|> | arxiv | @article{pallewela2024an,
title={An Analysis of Minimum Error Entropy Loss Functions in Wireless
Communications},
author={Rumeshika Pallewela, Eslam Eldeeb and Hirley Alves},
journal={arXiv preprint arXiv:2410.07208},
year={2024},
archivePrefix={arXiv},
eprint={2410.07208},
primaryClass={cs.IT cs.LG eess.SP math.IT}
} | pallewela2024an |
arxiv-667716 | 2410.07209 | Behavior Cloning for Mini Autonomous Car Path Following | <|reference_start|>Behavior Cloning for Mini Autonomous Car Path Following: This article presents the implementation and evaluation of a behavior cloning approach for route following with autonomous cars. Behavior cloning is a machine-learning technique in which a neural network is trained to mimic the driving behavior of a human operator. Using camera data that captures the environment and the vehicle's movement, the neural network learns to predict the control actions necessary to follow a predetermined route. Mini-autonomous cars, which provide a good benchmark for use, are employed as a testing platform. This approach simplifies the control system by directly mapping the driver's movements to the control outputs, avoiding the need for complex algorithms. We performed an evaluation in a 13-meter sizer route, where our vehicle was evaluated. The results show that behavior cloning allows for a smooth and precise route, allowing it to be a full-sized vehicle and enabling an effective transition from small-scale experiments to real-world implementations.<|reference_end|> | arxiv | @article{moraes2024behavior,
title={Behavior Cloning for Mini Autonomous Car Path Following},
author={Pablo Moraes, Christopher Peters, Hiago Sodre, William Moraes,
Sebastian Barcelona, Juan Deniz, Victor Castelli, Bruna Guterres, Ricardo
Grando},
journal={arXiv preprint arXiv:2410.07209},
year={2024},
archivePrefix={arXiv},
eprint={2410.07209},
primaryClass={cs.RO}
} | moraes2024behavior |
arxiv-667717 | 2410.07211 | Neural Contrast: Leveraging Generative Editing for Graphic Design Recommendations | <|reference_start|>Neural Contrast: Leveraging Generative Editing for Graphic Design Recommendations: Creating visually appealing composites requires optimizing both text and background for compatibility. Previous methods have focused on simple design strategies, such as changing text color or adding background shapes for contrast. These approaches are often destructive, altering text color or partially obstructing the background image. Another method involves placing design elements in non-salient and contrasting regions, but this isn't always effective, especially with patterned backgrounds. To address these challenges, we propose a generative approach using a diffusion model. This method ensures the altered regions beneath design assets exhibit low saliency while enhancing contrast, thereby improving the visibility of the design asset.<|reference_end|> | arxiv | @article{lupascu2024neural,
title={Neural Contrast: Leveraging Generative Editing for Graphic Design
Recommendations},
author={Marian Lupascu, Ionut Mironica, Mihai-Sorin Stupariu},
journal={arXiv preprint arXiv:2410.07211},
year={2024},
archivePrefix={arXiv},
eprint={2410.07211},
primaryClass={cs.CV cs.GR cs.HC cs.LG}
} | lupascu2024neural |
arxiv-667718 | 2410.07214 | Similarity Learning with neural networks | <|reference_start|>Similarity Learning with neural networks: In this work, we introduce a neural network algorithm designed to automatically identify similarity relations from data. By uncovering these similarity relations, our network approximates the underlying physical laws that relate dimensionless quantities to their dimensionless variables and coefficients. Additionally, we develop a linear algebra framework, accompanied by code, to derive the symmetry groups associated with these similarity relations. While our approach is general, we illustrate its application through examples in fluid mechanics, including laminar Newtonian and non-Newtonian flows in smooth pipes, as well as turbulent flows in both smooth and rough pipes. Such examples are chosen to highlight the framework's capability to handle both simple and intricate cases, and further validates its effectiveness in discovering underlying physical laws from data.<|reference_end|> | arxiv | @article{sanfins2024similarity,
title={Similarity Learning with neural networks},
author={Gabriel Sanfins, Fabio Ramos and Danilo Naiff},
journal={arXiv preprint arXiv:2410.07214},
year={2024},
archivePrefix={arXiv},
eprint={2410.07214},
primaryClass={cs.LG physics.data-an physics.flu-dyn}
} | sanfins2024similarity |
arxiv-667719 | 2410.07215 | Analysis and Optimization of Seismic Monitoring Networks with Bayesian Optimal Experiment Design | <|reference_start|>Analysis and Optimization of Seismic Monitoring Networks with Bayesian Optimal Experiment Design: Monitoring networks increasingly aim to assimilate data from a large number of diverse sensors covering many sensing modalities. Bayesian optimal experimental design (OED) seeks to identify data, sensor configurations, or experiments which can optimally reduce uncertainty and hence increase the performance of a monitoring network. Information theory guides OED by formulating the choice of experiment or sensor placement as an optimization problem that maximizes the expected information gain (EIG) about quantities of interest given prior knowledge and models of expected observation data. Therefore, within the context of seismo-acoustic monitoring, we can use Bayesian OED to configure sensor networks by choosing sensor locations, types, and fidelity in order to improve our ability to identify and locate seismic sources. In this work, we develop the framework necessary to use Bayesian OED to optimize a sensor network's ability to locate seismic events from arrival time data of detected seismic phases at the regional-scale. Bayesian OED requires four elements: 1) A likelihood function that describes the distribution of detection and travel time data from the sensor network, 2) A Bayesian solver that uses a prior and likelihood to identify the posterior distribution of seismic events given the data, 3) An algorithm to compute EIG about seismic events over a dataset of hypothetical prior events, 4) An optimizer that finds a sensor network which maximizes EIG. Once we have developed this framework, we explore many relevant questions to monitoring such as: how to trade off sensor fidelity and earth model uncertainty; how sensor types, number, and locations influence uncertainty; and how prior models and constraints influence sensor placement.<|reference_end|> | arxiv | @article{callahan2024analysis,
title={Analysis and Optimization of Seismic Monitoring Networks with Bayesian
Optimal Experiment Design},
author={Jake Callahan and Kevin Monogue and Ruben Villarreal and Tommie
Catanach},
journal={arXiv preprint arXiv:2410.07215},
year={2024},
archivePrefix={arXiv},
eprint={2410.07215},
primaryClass={stat.AP cs.LG physics.geo-ph stat.ML}
} | callahan2024analysis |
arxiv-667720 | 2410.07216 | Evaluating Financial Relational Graphs: Interpretation Before Prediction | <|reference_start|>Evaluating Financial Relational Graphs: Interpretation Before Prediction: Accurate and robust stock trend forecasting has been a crucial and challenging task, as stock price changes are influenced by multiple factors. Graph neural network-based methods have recently achieved remarkable success in this domain by constructing stock relationship graphs that reflect internal factors and relationships between stocks. However, most of these methods rely on predefined factors to construct static stock relationship graphs due to the lack of suitable datasets, failing to capture the dynamic changes in stock relationships. Moreover, the evaluation of relationship graphs in these methods is often tied to the performance of neural network models on downstream tasks, leading to confusion and imprecision. To address these issues, we introduce the SPNews dataset, collected based on S\&P 500 Index stocks, to facilitate the construction of dynamic relationship graphs. Furthermore, we propose a novel set of financial relationship graph evaluation methods that are independent of downstream tasks. By using the relationship graph to explain historical financial phenomena, we assess its validity before constructing a graph neural network, ensuring the graph's effectiveness in capturing relevant financial relationships. Experimental results demonstrate that our evaluation methods can effectively differentiate between various financial relationship graphs, yielding more interpretable results compared to traditional approaches. We make our source code publicly available on GitHub to promote reproducibility and further research in this area.<|reference_end|> | arxiv | @article{niu2024evaluating,
title={Evaluating Financial Relational Graphs: Interpretation Before Prediction},
author={Yingjie Niu, Lanxin Lu, Rian Dolphin, Valerio Poti, Ruihai Dong},
journal={arXiv preprint arXiv:2410.07216},
year={2024},
archivePrefix={arXiv},
eprint={2410.07216},
primaryClass={q-fin.ST cs.AI cs.LG}
} | niu2024evaluating |
arxiv-667721 | 2410.07217 | Hull's Parameters of Projective Reed-Muller Code | <|reference_start|>Hull's Parameters of Projective Reed-Muller Code: Projective Reed-Muller codes(PRM codes) are constructed from the family of projective hypersurfaces of a fixed degree over a finite field $\F_q$. In this paper, we completely determine the minimal distance of the hull of any Projective Reed-Muller codes. Motivated by Nathan Kaplan and Jon-Lark Kim \cite{kaplankim},we extend their results and calculate the hulls' dimension of Projective Reed-Muller Codes in a larger range. We also analyse two special classes of PRM codes apart from self-dual,self-orthgonal and LCD cases, which Kaplan and Kim \cite[section 3]{kaplankim} didn't consider.<|reference_end|> | arxiv | @article{song2024hull's,
title={Hull's Parameters of Projective Reed-Muller Code},
author={Yufeng Song and Jinquan Luo},
journal={arXiv preprint arXiv:2410.07217},
year={2024},
archivePrefix={arXiv},
eprint={2410.07217},
primaryClass={cs.IT math.IT}
} | song2024hull's |
arxiv-667722 | 2410.07219 | CKMImageNet: A Comprehensive Dataset to Enable Channel Knowledge Map Construction via Computer Vision | <|reference_start|>CKMImageNet: A Comprehensive Dataset to Enable Channel Knowledge Map Construction via Computer Vision: Environment-aware communication and sensing is one of the promising paradigm shifts towards 6G, which fully leverages prior information of the local wireless environment to optimize network performance. One of the key enablers for environment-aware communication and sensing is channel knowledge map (CKM), which provides location-specific channel knowledge that is crucial for channel state information (CSI) acquisition. To support the efficient construction of CKM, large-scale location-specific channel data is essential. However, most existing channel datasets do not have the location information nor visual representations of channel data, making them inadequate for exploring the intrinsic relationship between the channel knowledge and the local environment, nor for applying advanced artificial intelligence (AI) algorithms such as computer vision (CV) for CKM construction. To address such issues, in this paper, a large-scale dataset named CKMImageNet is established, which can provide both location-tagged numerical channel data and visual images, providing a holistic view of the channel and environment. Built using commercial ray tracing software, CKMImageNet captures electromagnetic wave propagation in different scenarios, revealing the relationships between location, environment and channel knowledge. By integrating detailed channel data and the corresponding image, CKMImageNet not only supports the verification of various communication and sensing algorithms, but also enables CKM construction with CV algorithms.<|reference_end|> | arxiv | @article{wu2024ckmimagenet:,
title={CKMImageNet: A Comprehensive Dataset to Enable Channel Knowledge Map
Construction via Computer Vision},
author={Di Wu, Zijian Wu, Yuelong Qiu, Shen Fu and Yong Zeng},
journal={arXiv preprint arXiv:2410.07219},
year={2024},
archivePrefix={arXiv},
eprint={2410.07219},
primaryClass={cs.IT math.IT}
} | wu2024ckmimagenet: |
arxiv-667723 | 2410.07220 | Stock Price Prediction and Traditional Models: An Approach to Achieve Short-, Medium- and Long-Term Goals | <|reference_start|>Stock Price Prediction and Traditional Models: An Approach to Achieve Short-, Medium- and Long-Term Goals: A comparative analysis of deep learning models and traditional statistical methods for stock price prediction uses data from the Nigerian stock exchange. Historical data, including daily prices and trading volumes, are employed to implement models such as Long Short Term Memory (LSTM) networks, Gated Recurrent Units (GRUs), Autoregressive Integrated Moving Average (ARIMA), and Autoregressive Moving Average (ARMA). These models are assessed over three-time horizons: short-term (1 year), medium-term (2.5 years), and long-term (5 years), with performance measured by Mean Squared Error (MSE) and Mean Absolute Error (MAE). The stability of the time series is tested using the Augmented Dickey-Fuller (ADF) test. Results reveal that deep learning models, particularly LSTM, outperform traditional methods by capturing complex, nonlinear patterns in the data, resulting in more accurate predictions. However, these models require greater computational resources and offer less interpretability than traditional approaches. The findings highlight the potential of deep learning for improving financial forecasting and investment strategies. Future research could incorporate external factors such as social media sentiment and economic indicators, refine model architectures, and explore real-time applications to enhance prediction accuracy and scalability.<|reference_end|> | arxiv | @article{alamu2024stock,
title={Stock Price Prediction and Traditional Models: An Approach to Achieve
Short-, Medium- and Long-Term Goals},
author={Opeyemi Sheu Alamu, Md Kamrul Siam},
journal={Journal of Intelligent Learning Systems and Applications, Vol.16,
No.4, 2024},
year={2024},
doi={10.4236/jilsa.2024.164018},
archivePrefix={arXiv},
eprint={2410.07220},
primaryClass={q-fin.ST cs.LG q-fin.CP}
} | alamu2024stock |
arxiv-667724 | 2410.07222 | Computing Systemic Risk Measures with Graph Neural Networks | <|reference_start|>Computing Systemic Risk Measures with Graph Neural Networks: This paper investigates systemic risk measures for stochastic financial networks of explicitly modelled bilateral liabilities. We extend the notion of systemic risk measures from Biagini, Fouque, Fritelli and Meyer-Brandis (2019) to graph structured data. In particular, we focus on an aggregation function that is derived from a market clearing algorithm proposed by Eisenberg and Noe (2001). In this setting, we show the existence of an optimal random allocation that distributes the overall minimal bailout capital and secures the network. We study numerical methods for the approximation of systemic risk and optimal random allocations. We propose to use permutation equivariant architectures of neural networks like graph neural networks (GNNs) and a class that we name (extended) permutation equivariant neural networks ((X)PENNs). We compare their performance to several benchmark allocations. The main feature of GNNs and (X)PENNs is that they are permutation equivariant with respect to the underlying graph data. In numerical experiments we find evidence that these permutation equivariant methods are superior to other approaches.<|reference_end|> | arxiv | @article{gonon2024computing,
title={Computing Systemic Risk Measures with Graph Neural Networks},
author={Lukas Gonon, Thilo Meyer-Brandis, Niklas Weber},
journal={arXiv preprint arXiv:2410.07222},
year={2024},
archivePrefix={arXiv},
eprint={2410.07222},
primaryClass={q-fin.CP cs.LG q-fin.MF}
} | gonon2024computing |
arxiv-667725 | 2410.07225 | Distilling Analysis from Generative Models for Investment Decisions | <|reference_start|>Distilling Analysis from Generative Models for Investment Decisions: Professionals' decisions are the focus of every field. For example, politicians' decisions will influence the future of the country, and stock analysts' decisions will impact the market. Recognizing the influential role of professionals' perspectives, inclinations, and actions in shaping decision-making processes and future trends across multiple fields, we propose three tasks for modeling these decisions in the financial market. To facilitate this, we introduce a novel dataset, A3, designed to simulate professionals' decision-making processes. While we find current models present challenges in forecasting professionals' behaviors, particularly in making trading decisions, the proposed Chain-of-Decision approach demonstrates promising improvements. It integrates an opinion-generator-in-the-loop to provide subjective analysis based on each news item, further enhancing the proposed tasks' performance.<|reference_end|> | arxiv | @article{chen2024distilling,
title={Distilling Analysis from Generative Models for Investment Decisions},
author={Chung-Chi Chen, Hiroya Takamura, Ichiro Kobayashi, Yusuke Miyao},
journal={arXiv preprint arXiv:2410.07225},
year={2024},
archivePrefix={arXiv},
eprint={2410.07225},
primaryClass={q-fin.ST cs.CL cs.LG}
} | chen2024distilling |
arxiv-667726 | 2410.07230 | RFBoost: Understanding and Boosting Deep WiFi Sensing via Physical Data Augmentation | <|reference_start|>RFBoost: Understanding and Boosting Deep WiFi Sensing via Physical Data Augmentation: Deep learning shows promising performance in wireless sensing. However, deep wireless sensing (DWS) heavily relies on large datasets. Unfortunately, building comprehensive datasets for DWS is difficult and costly, because wireless data depends on environmental factors and cannot be labeled offline. Despite recent advances in few-shot/cross-domain learning, DWS is still facing data scarcity issues. In this paper, we investigate a distinct perspective of radio data augmentation (RDA) for WiFi sensing and present a data-space solution. Our key insight is that wireless signals inherently exhibit data diversity, contributing more information to be extracted for DWS. We present RFBoost, a simple and effective RDA framework encompassing novel physical data augmentation techniques. We implement RFBoost as a plug-and-play module integrated with existing deep models and evaluate it on multiple datasets. Experimental results demonstrate that RFBoost achieves remarkable average accuracy improvements of 5.4% on existing models without additional data collection or model modifications, and the best-boosted performance outperforms 11 state-of-the-art baseline models without RDA. RFBoost pioneers the study of RDA, an important yet currently underexplored building block for DWS, which we expect to become a standard DWS component of WiFi sensing and beyond. RFBoost is released at https://github.com/aiot-lab/RFBoost.<|reference_end|> | arxiv | @article{hou2024rfboost:,
title={RFBoost: Understanding and Boosting Deep WiFi Sensing via Physical Data
Augmentation},
author={Weiying Hou and Chenshu Wu},
journal={Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 8, 2,
Article 58 (June 2024), 26 pages},
year={2024},
doi={10.1145/3659620},
archivePrefix={arXiv},
eprint={2410.07230},
primaryClass={eess.SP cs.HC cs.LG}
} | hou2024rfboost: |
arxiv-667727 | 2410.07234 | A Dynamic Approach to Stock Price Prediction: Comparing RNN and Mixture of Experts Models Across Different Volatility Profiles | <|reference_start|>A Dynamic Approach to Stock Price Prediction: Comparing RNN and Mixture of Experts Models Across Different Volatility Profiles: This study evaluates the effectiveness of a Mixture of Experts (MoE) model for stock price prediction by comparing it to a Recurrent Neural Network (RNN) and a linear regression model. The MoE framework combines an RNN for volatile stocks and a linear model for stable stocks, dynamically adjusting the weight of each model through a gating network. Results indicate that the MoE approach significantly improves predictive accuracy across different volatility profiles. The RNN effectively captures non-linear patterns for volatile companies but tends to overfit stable data, whereas the linear model performs well for predictable trends. The MoE model's adaptability allows it to outperform each individual model, reducing errors such as Mean Squared Error (MSE) and Mean Absolute Error (MAE). Future work should focus on enhancing the gating mechanism and validating the model with real-world datasets to optimize its practical applicability.<|reference_end|> | arxiv | @article{vallarino2024a,
title={A Dynamic Approach to Stock Price Prediction: Comparing RNN and Mixture
of Experts Models Across Different Volatility Profiles},
author={Diego Vallarino},
journal={arXiv preprint arXiv:2410.07234},
year={2024},
archivePrefix={arXiv},
eprint={2410.07234},
primaryClass={q-fin.CP cs.LG econ.EM}
} | vallarino2024a |
arxiv-667728 | 2410.07238 | vail\'a: Versatile Anarcho Integrated Liberation \'Analysis in Multimodal Toolbox | <|reference_start|>vail\'a: Versatile Anarcho Integrated Liberation \'Analysis in Multimodal Toolbox: Human movement analysis is crucial in health and sports biomechanics for understanding physical performance, guiding rehabilitation, and preventing injuries. However, existing tools are often proprietary, expensive, and function as "black boxes", limiting user control and customization. This paper introduces vail\'a-Versatile Anarcho Integrated Liberation \'Analysis in Multimodal Toolbox-an open-source, Python-based platform designed to enhance human movement analysis by integrating data from multiple biomechanical systems. vail\'a supports data from diverse sources, including retroreflective motion capture systems, inertial measurement units (IMUs), markerless video capture technology, electromyography (EMG), force plates, and GPS or GNSS systems, enabling comprehensive analysis of movement patterns. Developed entirely in Python 3.11.9, which offers improved efficiency and long-term support, and featuring a straightforward installation process, vail\'a is accessible to users without extensive programming experience. In this paper, we also present several workflow examples that demonstrate how vail\'a allows the rapid processing of large batches of data, independent of the type of collection method. This flexibility is especially valuable in research scenarios where unexpected data collection challenges arise, ensuring no valuable data point is lost. We demonstrate the application of vail\'a in analyzing sit-to-stand movements in pediatric disability, showcasing its capability to provide deeper insights even with unexpected movement patterns. By fostering a collaborative and open environment, vail\'a encourages users to innovate, customize, and freely explore their analysis needs, potentially contributing to the advancement of rehabilitation strategies and performance optimization.<|reference_end|> | arxiv | @article{santiago2024vail\'a:,
title={vail\'a: Versatile Anarcho Integrated Liberation \'Analysis in
Multimodal Toolbox},
author={Paulo Roberto Pereira Santiago, Abel Gon\c{c}alves Chinaglia, Kira
Flanagan, Bruno L. S. Bedo, Ligia Yumi Mochida, Juan Aceros, Aline Bononi,
Guilherme Manna Cesar},
journal={arXiv preprint arXiv:2410.07238},
year={2024},
archivePrefix={arXiv},
eprint={2410.07238},
primaryClass={cs.HC}
} | santiago2024vail\'a: |
arxiv-667729 | 2410.07239 | Locally Measuring Cross-lingual Lexical Alignment: A Domain and Word Level Perspective | <|reference_start|>Locally Measuring Cross-lingual Lexical Alignment: A Domain and Word Level Perspective: NLP research on aligning lexical representation spaces to one another has so far focused on aligning language spaces in their entirety. However, cognitive science has long focused on a local perspective, investigating whether translation equivalents truly share the same meaning or the extent that cultural and regional influences result in meaning variations. With recent technological advances and the increasing amounts of available data, the longstanding question of cross-lingual lexical alignment can now be approached in a more data-driven manner. However, developing metrics for the task requires some methodology for comparing metric efficacy. We address this gap and present a methodology for analyzing both synthetic validations and a novel naturalistic validation using lexical gaps in the kinship domain. We further propose new metrics, hitherto unexplored on this task, based on contextualized embeddings. Our analysis spans 16 diverse languages, demonstrating that there is substantial room for improvement with the use of newer language models. Our research paves the way for more accurate and nuanced cross-lingual lexical alignment methodologies and evaluation.<|reference_end|> | arxiv | @article{karidi2024locally,
title={Locally Measuring Cross-lingual Lexical Alignment: A Domain and Word
Level Perspective},
author={Taelin Karidi, Eitan Grossman, Omri Abend},
journal={arXiv preprint arXiv:2410.07239},
year={2024},
archivePrefix={arXiv},
eprint={2410.07239},
primaryClass={cs.CL}
} | karidi2024locally |
arxiv-667730 | 2410.07240 | Evaluating internal and external dissonance of belief dynamics in social systems | <|reference_start|>Evaluating internal and external dissonance of belief dynamics in social systems: Belief dynamics are fundamental to human behavior and social coordination. Individuals rely on accurate beliefs to make decisions, and shared beliefs form the basis of successful cooperation. Traditional studies often examined beliefs in isolation, but recent perspectives suggest beliefs operate as interconnected systems, both within individuals and across social networks. To better understand belief dynamics, we propose an extension of Galesic et al.'s model, which allows individuals to weigh internal and social dissonance based on belief certainty. Our model suggests that belief convergence occurs in two patterns: internal alignment, where beliefs become ideologically consistent but socially disagreeable, or social alignment, where beliefs become socially consistent but internally varied. These results highlight a competition between internal and social belief networks, with one network often dominating. Our findings suggest that belief dynamics tend to settle at extremes, indicating a need for future models to incorporate negative feedback to reflect more nuanced societal belief changes.<|reference_end|> | arxiv | @article{hewson2024evaluating,
title={Evaluating internal and external dissonance of belief dynamics in social
systems},
author={Joshua T. S. Hewson, Ke Fang},
journal={arXiv preprint arXiv:2410.07240},
year={2024},
archivePrefix={arXiv},
eprint={2410.07240},
primaryClass={physics.soc-ph cs.SY eess.SY}
} | hewson2024evaluating |
arxiv-667731 | 2410.07245 | AAAI Workshop on AI Planning for Cyber-Physical Systems -- CAIPI24 | <|reference_start|>AAAI Workshop on AI Planning for Cyber-Physical Systems -- CAIPI24: The workshop 'AI-based Planning for Cyber-Physical Systems', which took place on February 26, 2024, as part of the 38th Annual AAAI Conference on Artificial Intelligence in Vancouver, Canada, brought together researchers to discuss recent advances in AI planning methods for Cyber-Physical Systems (CPS). CPS pose a major challenge due to their complexity and data-intensive nature, which often exceeds the capabilities of traditional planning algorithms. The workshop highlighted new approaches such as neuro-symbolic architectures, large language models (LLMs), deep reinforcement learning and advances in symbolic planning. These techniques are promising when it comes to managing the complexity of CPS and have potential for real-world applications.<|reference_end|> | arxiv | @article{niggemann2024aaai,
title={AAAI Workshop on AI Planning for Cyber-Physical Systems -- CAIPI24},
author={Oliver Niggemann, Gautam Biswas, Alexander Diedrich, Jonas Ehrhardt,
Ren\'e Heesch, Niklas Widulle},
journal={arXiv preprint arXiv:2410.07245},
year={2024},
archivePrefix={arXiv},
eprint={2410.07245},
primaryClass={cs.AI}
} | niggemann2024aaai |
arxiv-667732 | 2410.07250 | Reconstruction of Particle Flow Energy Distribution Using Deep Learning Algorithms | <|reference_start|>Reconstruction of Particle Flow Energy Distribution Using Deep Learning Algorithms: In high-energy particle physics, extracting information from complex detector signals is crucial for energy reconstruction. Recent advancements involve using deep learning to process calorimeter images from various sub-detectors in experiments like the Large Hadron Collider (LHC) for energy map reconstruction. This paper compares classical algorithms\-MLP, CNN, U-Net, and RNN\-with variants that include self-attention and 3D convolution modules to evaluate their effectiveness in reconstructing the initial energy distribution. Additionally, a test dataset of jet events is utilized to analyze and compare models' performance in handling anomalous high-energy events. The analysis highlights the effectiveness of deep learning techniques for energy image reconstruction and explores their potential in this area.<|reference_end|> | arxiv | @article{zhang2024reconstruction,
title={Reconstruction of Particle Flow Energy Distribution Using Deep Learning
Algorithms},
author={Han Zhang (1), Shengxiang Lin (2), Xingyi Zhang (3), Yu Wang (4),
Yangguang Zhang (5) ((1) College of Artificial Intelligence and Automation,
Hohai University, (2) Faculty of Electronic and Information Engineering,
Xi'an Jiaotong University, (3) School of Mechanical Engineering, Shanghai
Jiao Tong University, (4) School of Control and Computer Engineering, North
China Electric Power University, (5) School of Automation and Electrical
Engineering, University of Science and Technology Beijing)},
journal={arXiv preprint arXiv:2410.07250},
year={2024},
archivePrefix={arXiv},
eprint={2410.07250},
primaryClass={physics.ins-det cs.AI}
} | zhang2024reconstruction |
arxiv-667733 | 2410.07254 | Uniform accuracy of implicit-explicit Runge-Kutta methods for linear hyperbolic relaxation systems | <|reference_start|>Uniform accuracy of implicit-explicit Runge-Kutta methods for linear hyperbolic relaxation systems: In this paper, we study the uniform accuracy of implicit-explicit (IMEX) Runge-Kutta (RK) schemes for general linear hyperbolic relaxation systems satisfying the structural stability condition proposed in \cite{yong_singular_1999}. We establish the uniform stability and accuracy of a class of IMEX-RK schemes with spatial discretization using a Fourier spectral method. Our results demonstrate that the accuracy of the fully discretized schemes is independent of the relaxation time across all regimes. Numerical experiments on applications in traffic flows and kinetic theory verify our theoretical analysis.<|reference_end|> | arxiv | @article{ma2024uniform,
title={Uniform accuracy of implicit-explicit Runge-Kutta methods for linear
hyperbolic relaxation systems},
author={Zhiting Ma, Juntao Huang},
journal={arXiv preprint arXiv:2410.07254},
year={2024},
archivePrefix={arXiv},
eprint={2410.07254},
primaryClass={math.NA cs.NA physics.comp-ph}
} | ma2024uniform |
arxiv-667734 | 2410.07258 | BlockMEDC: Blockchain Smart Contracts for Securing Moroccan Higher Education Digital Certificates | <|reference_start|>BlockMEDC: Blockchain Smart Contracts for Securing Moroccan Higher Education Digital Certificates: Morocco's Vision 2030, known as Maroc Digital 2030, aims to position the country as a regional leader in digital technology by boosting digital infrastructure, fostering innovation, and advancing digital skills. Complementing this initiative, the Pacte ESRI 2030 strategy, launched in 2023, seeks to transform the higher education, research, and innovation sectors by integrating state-of-the-art digital technologies. In alignment with these national strategies, this paper introduces BlockMEDC, a blockchain-based system for securing and managing Moroccan educational digital certificates. Leveraging Ethereum smart contracts and the InterPlanetary File System, BlockMEDC automates the issuance, management, and verification of academic credentials across Moroccan universities. The proposed system addresses key issues such as document authenticity, manual verification, and lack of interoperability, delivering a secure, transparent, and cost-effective solution that aligns with Morocco's digital transformation goals for the education sector.<|reference_end|> | arxiv | @article{fartitchou2024blockmedc:,
title={BlockMEDC: Blockchain Smart Contracts for Securing Moroccan Higher
Education Digital Certificates},
author={Mohamed Fartitchou, Ismail Lamaakal, Khalid El Makkaoui, Zakaria El
Allali, Yassine Maleh},
journal={arXiv preprint arXiv:2410.07258},
year={2024},
archivePrefix={arXiv},
eprint={2410.07258},
primaryClass={cs.CR}
} | fartitchou2024blockmedc: |
arxiv-667735 | 2410.07260 | Precision Cancer Classification and Biomarker Identification from mRNA Gene Expression via Dimensionality Reduction and Explainable AI | <|reference_start|>Precision Cancer Classification and Biomarker Identification from mRNA Gene Expression via Dimensionality Reduction and Explainable AI: Gene expression analysis is a critical method for cancer classification, enabling precise diagnoses through the identification of unique molecular signatures associated with various tumors. Identifying cancer-specific genes from gene expression values enables a more tailored and personalized treatment approach. However, the high dimensionality of mRNA gene expression data poses challenges for analysis and data extraction. This research presents a comprehensive pipeline designed to accurately identify 33 distinct cancer types and their corresponding gene sets. It incorporates a combination of normalization and feature selection techniques to reduce dataset dimensionality effectively while ensuring high performance. Notably, our pipeline successfully identifies a substantial number of cancer-specific genes using a reduced feature set of just 500, in contrast to using the full dataset comprising 19,238 features. By employing an ensemble approach that combines three top-performing classifiers, a classification accuracy of 96.61% was achieved. Furthermore, we leverage Explainable AI to elucidate the biological significance of the identified cancer-specific genes, employing Differential Gene Expression (DGE) analysis.<|reference_end|> | arxiv | @article{tabassum2024precision,
title={Precision Cancer Classification and Biomarker Identification from mRNA
Gene Expression via Dimensionality Reduction and Explainable AI},
author={Farzana Tabassum, Sabrina Islam, Siana Rizwan, Masrur Sobhan, Tasnim
Ahmed, Sabbir Ahmed, and Tareque Mohmud Chowdhury},
journal={arXiv preprint arXiv:2410.07260},
year={2024},
archivePrefix={arXiv},
eprint={2410.07260},
primaryClass={q-bio.QM cs.LG}
} | tabassum2024precision |
arxiv-667736 | 2410.07263 | Memory-augmented Transformers can implement Linear First-Order Optimization Methods | <|reference_start|>Memory-augmented Transformers can implement Linear First-Order Optimization Methods: We show that memory-augmented Transformers (Memformers) can implement linear first-order optimization methods such as conjugate gradient descent, momentum methods, and more generally, methods that linearly combine past gradients. Building on prior work that demonstrates how Transformers can simulate preconditioned gradient descent, we provide theoretical and empirical evidence that Memformers can learn more advanced optimization algorithms. Specifically, we analyze how memory registers in Memformers store suitable intermediate attention values allowing them to implement algorithms such as conjugate gradient. Our results show that Memformers can efficiently learn these methods by training on random linear regression tasks, even learning methods that outperform conjugate gradient. This work extends our knowledge about the algorithmic capabilities of Transformers, showing how they can learn complex optimization methods.<|reference_end|> | arxiv | @article{dutta2024memory-augmented,
title={Memory-augmented Transformers can implement Linear First-Order
Optimization Methods},
author={Sanchayan Dutta (UC Davis), Suvrit Sra (TU Munich)},
journal={arXiv preprint arXiv:2410.07263},
year={2024},
archivePrefix={arXiv},
eprint={2410.07263},
primaryClass={cs.LG math.OC}
} | dutta2024memory-augmented |
arxiv-667737 | 2410.07265 | A Survey: Collaborative Hardware and Software Design in the Era of Large Language Models | <|reference_start|>A Survey: Collaborative Hardware and Software Design in the Era of Large Language Models: The rapid development of large language models (LLMs) has significantly transformed the field of artificial intelligence, demonstrating remarkable capabilities in natural language processing and moving towards multi-modal functionality. These models are increasingly integrated into diverse applications, impacting both research and industry. However, their development and deployment present substantial challenges, including the need for extensive computational resources, high energy consumption, and complex software optimizations. Unlike traditional deep learning systems, LLMs require unique optimization strategies for training and inference, focusing on system-level efficiency. This paper surveys hardware and software co-design approaches specifically tailored to address the unique characteristics and constraints of large language models. This survey analyzes the challenges and impacts of LLMs on hardware and algorithm research, exploring algorithm optimization, hardware design, and system-level innovations. It aims to provide a comprehensive understanding of the trade-offs and considerations in LLM-centric computing systems, guiding future advancements in AI. Finally, we summarize the existing efforts in this space and outline future directions toward realizing production-grade co-design methodologies for the next generation of large language models and AI systems.<|reference_end|> | arxiv | @article{guo2024a,
title={A Survey: Collaborative Hardware and Software Design in the Era of Large
Language Models},
author={Cong Guo, Feng Cheng, Zhixu Du, James Kiessling, Jonathan Ku, Shiyu
Li, Ziru Li, Mingyuan Ma, Tergel Molom-Ochir, Benjamin Morris, Haoxuan Shan,
Jingwei Sun, Yitu Wang, Chiyue Wei, Xueying Wu, Yuhao Wu, Hao Frank Yang,
Jingyang Zhang, Junyao Zhang, Qilin Zheng, Guanglei Zhou, Hai (Helen) Li,
Yiran Chen},
journal={arXiv preprint arXiv:2410.07265},
year={2024},
archivePrefix={arXiv},
eprint={2410.07265},
primaryClass={cs.AR cs.AI cs.LG cs.SE}
} | guo2024a |
arxiv-667738 | 2410.07266 | Spiking GS: Towards High-Accuracy and Low-Cost Surface Reconstruction via Spiking Neuron-based Gaussian Splatting | <|reference_start|>Spiking GS: Towards High-Accuracy and Low-Cost Surface Reconstruction via Spiking Neuron-based Gaussian Splatting: 3D Gaussian Splatting is capable of reconstructing 3D scenes in minutes. Despite recent advances in improving surface reconstruction accuracy, the reconstructed results still exhibit bias and suffer from inefficiency in storage and training. This paper provides a different observation on the cause of the inefficiency and the reconstruction bias, which is attributed to the integration of the low-opacity parts (LOPs) of the generated Gaussians. We show that LOPs consist of Gaussians with overall low-opacity (LOGs) and the low-opacity tails (LOTs) of Gaussians. We propose Spiking GS to reduce these two types of LOPs by integrating spiking neurons into the Gaussian Splatting pipeline. Specifically, we introduce global and local full-precision integrate-and-fire spiking neurons to the opacity and representation function of flattened 3D Gaussians, respectively. Furthermore, we enhance the density control strategy with spiking neurons' thresholds and a new criterion on the scale of Gaussians. Our method represents reconstructed surfaces more accurately at a lower cost. The code is available at \url{https://github.com/shippoT/Spiking_GS}.<|reference_end|> | arxiv | @article{zhang2024spiking,
title={Spiking GS: Towards High-Accuracy and Low-Cost Surface Reconstruction
via Spiking Neuron-based Gaussian Splatting},
author={Weixing Zhang, Zongrui Li, De Ma, Huajin Tang, Xudong Jiang, Qian
Zheng, Gang Pan},
journal={arXiv preprint arXiv:2410.07266},
year={2024},
archivePrefix={arXiv},
eprint={2410.07266},
primaryClass={cs.CV}
} | zhang2024spiking |
arxiv-667739 | 2410.07267 | Efficient representation learning of scintillation signal characteristics with spectrum-inspired temporal neural networks | <|reference_start|>Efficient representation learning of scintillation signal characteristics with spectrum-inspired temporal neural networks: Nuclear radiation detectors based on scintillators are widely used in particle and high energy physics experiments, nuclear medicine imaging, industrial and environmental detection, etc. Precisely extracting scintillation signal characteristics at the event level is important for these applications, not only for understanding the scintillator itself, but also for determining the kinds and physical properties of incident particles. Recent research demonstrates that data-driven neural networks are superior to traditional statistical methods, especially when the analytical form of signals is hard to obtain, or noise is significant. However, most densely connected or convolution-based networks fail to fully exploit the spectral and temporal structure of scintillation signals, leaving substantial room for performance improvement. In this paper, we propose a network architecture specially tailored for scintillation signal characterization based on previous works on time series analysis. By directly applying the Fast Fourier Transform to original signals without data embedding, including the zero-frequency component, adjusting the convolution scheme for low-frequency components, and unbiasedly re-weighting features from different frequencies, the proposed network architecture can serve as a lightweight and enhanced representation learning backbone. We validate our idea on simulation data generated with the setting of the LUX dark matter detector, and on experimental electrical signals with fast electronics to emulate scintillation variations. The proposed model achieves significantly better results than the reference model in the literature and densely connected models without representation learning.<|reference_end|> | arxiv | @article{ai2024efficient,
title={Efficient representation learning of scintillation signal
characteristics with spectrum-inspired temporal neural networks},
author={Pengcheng Ai, Xiangming Sun, Zhi Deng, Xinchi Ran},
journal={arXiv preprint arXiv:2410.07267},
year={2024},
archivePrefix={arXiv},
eprint={2410.07267},
primaryClass={physics.ins-det cs.LG physics.data-an}
} | ai2024efficient |
arxiv-667740 | 2410.07268 | Learning Content-Aware Multi-Modal Joint Input Pruning via Bird's-Eye-View Representation | <|reference_start|>Learning Content-Aware Multi-Modal Joint Input Pruning via Bird's-Eye-View Representation: In the landscape of autonomous driving, Bird's-Eye-View (BEV) representation has recently garnered substantial academic attention, serving as a transformative framework for the fusion of multi-modal sensor inputs. This BEV paradigm effectively shifts the sensor fusion challenge from a rule-based methodology to a data-centric approach, thereby facilitating more nuanced feature extraction from an array of heterogeneous sensors. Notwithstanding its evident merits, the computational overhead associated with BEV-based techniques often mandates high-capacity hardware infrastructures, thus posing challenges for practical, real-world implementations. To mitigate this limitation, we introduce a novel content-aware multi-modal joint input pruning technique. Our method leverages BEV as a shared anchor to algorithmically identify and eliminate non-essential sensor regions prior to their introduction into the perception model's backbone. We validate the efficacy of our approach through extensive experiments on the NuScenes dataset, demonstrating substantial computational efficiency without sacrificing perception accuracy. To the best of our knowledge, this work represents the first attempt to alleviate the computational burden from the perspective of input pruning.<|reference_end|> | arxiv | @article{li2024learning,
title={Learning Content-Aware Multi-Modal Joint Input Pruning via
Bird's-Eye-View Representation},
author={Yuxin Li, Yiheng Li, Xulei Yang, Mengying Yu, Zihang Huang, Xiaojun
Wu, Chai Kiat Yeo},
journal={arXiv preprint arXiv:2410.07268},
year={2024},
archivePrefix={arXiv},
eprint={2410.07268},
primaryClass={cs.CV cs.AI}
} | li2024learning |
arxiv-667741 | 2410.07269 | Deep Learning for Surgical Instrument Recognition and Segmentation in Robotic-Assisted Surgeries: A Systematic Review | <|reference_start|>Deep Learning for Surgical Instrument Recognition and Segmentation in Robotic-Assisted Surgeries: A Systematic Review: Applying deep learning (DL) for annotating surgical instruments in robot-assisted minimally invasive surgeries (MIS) represents a significant advancement in surgical technology. This systematic review examines 48 studies that apply advanced DL methods and architectures. These sophisticated DL models have shown notable improvements in the precision and efficiency of detecting and segmenting surgical tools. The enhanced capabilities of these models support various clinical applications, including real-time intraoperative guidance, comprehensive postoperative evaluations, and objective assessments of surgical skills. By accurately identifying and segmenting surgical instruments in video data, DL models provide detailed feedback to surgeons, thereby improving surgical outcomes and reducing complication risks. Furthermore, the application of DL in surgical education is transformative. The review underscores the significant impact of DL on improving the accuracy of skill assessments and the overall quality of surgical training programs. However, implementing DL in surgical tool detection and segmentation faces challenges, such as the need for large, accurately annotated datasets to train these models effectively. The manual annotation process is labor-intensive and time-consuming, posing a significant bottleneck. Future research should focus on automating the detection and segmentation process and enhancing the robustness of DL models against environmental variations. Expanding the application of DL models across various surgical specialties will be essential to fully realize this technology's potential. Integrating DL with other emerging technologies, such as augmented reality (AR), also offers promising opportunities to further enhance the precision and efficacy of surgical procedures.<|reference_end|> | arxiv | @article{ahmed2024deep,
title={Deep Learning for Surgical Instrument Recognition and Segmentation in
Robotic-Assisted Surgeries: A Systematic Review},
author={Fatimaelzahraa Ali Ahmed, Mahmoud Yousef, Mariam Ali Ahmed, Hasan Omar
Ali, Anns Mahboob, Hazrat Ali, Zubair Shah, Omar Aboumarzouk, Abdulla Al
Ansari, Shidin Balakrishnan},
journal={arXiv preprint arXiv:2410.07269},
year={2024},
doi={10.1007/s10462-024-10979-w},
archivePrefix={arXiv},
eprint={2410.07269},
primaryClass={eess.IV cs.AI cs.CV}
} | ahmed2024deep |
arxiv-667742 | 2410.07271 | Multi-Task Program Error Repair and Explanatory Diagnosis | <|reference_start|>Multi-Task Program Error Repair and Explanatory Diagnosis: Program errors can occur in any type of programming, and can manifest in a variety of ways, such as unexpected output, crashes, or performance issues. Moreover, program error diagnosis can often be too abstract or technical for developers to understand, especially for beginners. The goal of this paper is to present a novel machine-learning approach for Multi-task Program Error Repair and Explanatory Diagnosis (mPRED). A pre-trained language model is used to encode the source code, and a downstream model is specifically designed to identify and repair errors. Programs and test cases are augmented and optimized from several perspectives. Additionally, our approach incorporates a "chain of thought" method, which enables the models to produce intermediate reasoning explanations before providing the final correction. To aid in visualizing and analyzing the program structure, we use a graph neural network for program structure visualization. Overall, our approach offers a promising solution for repairing program errors across different programming languages and providing helpful explanations to programmers.<|reference_end|> | arxiv | @article{xu2024multi-task,
title={Multi-Task Program Error Repair and Explanatory Diagnosis},
author={Zhenyu Xu and Victor S. Sheng},
journal={arXiv preprint arXiv:2410.07271},
year={2024},
archivePrefix={arXiv},
eprint={2410.07271},
primaryClass={cs.SE cs.AI}
} | xu2024multi-task |
arxiv-667743 | 2410.07272 | Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration | <|reference_start|>Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration: Decentralized Federated Learning has emerged as an alternative to centralized architectures due to its faster training, privacy preservation, and reduced communication overhead. In decentralized communication, the server aggregation phase in Centralized Federated Learning shifts to the client side, which means that clients connect with each other in a peer-to-peer manner. However, compared to the centralized mode, data heterogeneity in Decentralized Federated Learning will cause larger variances between aggregated models, which leads to slow convergence in training and poor generalization performance in tests. To address these issues, we introduce Catalyst Acceleration and propose an acceleration Decentralized Federated Learning algorithm called DFedCata. It consists of two main components: the Moreau envelope function, which primarily addresses parameter inconsistencies among clients caused by data heterogeneity, and Nesterov's extrapolation step, which accelerates the aggregation phase. Theoretically, we prove the optimization error bound and generalization error bound of the algorithm, providing a further understanding of the nature of the algorithm and the theoretical perspectives on the hyperparameter choice. Empirically, we demonstrate the advantages of the proposed algorithm in both convergence speed and generalization performance on CIFAR10/100 with various non-iid data distributions. Furthermore, we also experimentally verify the theoretical properties of DFedCata.<|reference_end|> | arxiv | @article{li2024boosting,
title={Boosting the Performance of Decentralized Federated Learning via
Catalyst Acceleration},
author={Qinglun Li, Miao Zhang, Yingqi Liu, Quanjun Yin, Li Shen, Xiaochun Cao},
journal={arXiv preprint arXiv:2410.07272},
year={2024},
archivePrefix={arXiv},
eprint={2410.07272},
primaryClass={cs.LG}
} | li2024boosting |
arxiv-667744 | 2410.07273 | BELM: Bidirectional Explicit Linear Multi-step Sampler for Exact Inversion in Diffusion Models | <|reference_start|>BELM: Bidirectional Explicit Linear Multi-step Sampler for Exact Inversion in Diffusion Models: The inversion of diffusion model sampling, which aims to find the corresponding initial noise of a sample, plays a critical role in various tasks. Recently, several heuristic exact inversion samplers have been proposed to address the inexact inversion issue in a training-free manner. However, the theoretical properties of these heuristic samplers remain unknown and they often exhibit mediocre sampling quality. In this paper, we introduce a generic formulation, \emph{Bidirectional Explicit Linear Multi-step} (BELM) samplers, of the exact inversion samplers, which includes all previously proposed heuristic exact inversion samplers as special cases. The BELM formulation is derived from the variable-stepsize-variable-formula linear multi-step method via integrating a bidirectional explicit constraint. We highlight this bidirectional explicit constraint is the key of mathematically exact inversion. We systematically investigate the Local Truncation Error (LTE) within the BELM framework and show that the existing heuristic designs of exact inversion samplers yield sub-optimal LTE. Consequently, we propose the Optimal BELM (O-BELM) sampler through the LTE minimization approach. We conduct additional analysis to substantiate the theoretical stability and global convergence property of the proposed optimal sampler. Comprehensive experiments demonstrate our O-BELM sampler establishes the exact inversion property while achieving high-quality sampling. Additional experiments in image editing and image interpolation highlight the extensive potential of applying O-BELM in varying applications.<|reference_end|> | arxiv | @article{wang2024belm:,
title={BELM: Bidirectional Explicit Linear Multi-step Sampler for Exact
Inversion in Diffusion Models},
author={Fangyikang Wang, Hubery Yin, Yuejiang Dong, Huminhao Zhu, Chao Zhang,
Hanbin Zhao, Hui Qian, Chen Li},
journal={arXiv preprint arXiv:2410.07273},
year={2024},
archivePrefix={arXiv},
eprint={2410.07273},
primaryClass={cs.CV cs.LG}
} | wang2024belm: |
arxiv-667745 | 2410.07274 | Mitigation of gender bias in automatic facial non-verbal behaviors generation | <|reference_start|>Mitigation of gender bias in automatic facial non-verbal behaviors generation: Research on non-verbal behavior generation for social interactive agents focuses mainly on the believability and synchronization of non-verbal cues with speech. However, existing models, predominantly based on deep learning architectures, often perpetuate biases inherent in the training data. This raises ethical concerns, depending on the intended application of these agents. This paper addresses these issues by first examining the influence of gender on facial non-verbal behaviors. We concentrate on gaze, head movements, and facial expressions. We introduce a classifier capable of discerning the gender of a speaker from their non-verbal cues. This classifier achieves high accuracy on both real behavior data, extracted using state-of-the-art tools, and synthetic data, generated from a model developed in previous work. Building upon this work, we present a new model, FairGenderGen, which integrates a gender discriminator and a gradient reversal layer into our previous behavior generation model. This new model generates facial non-verbal behaviors from speech features, mitigating gender sensitivity in the generated behaviors. Our experiments demonstrate that the classifier, developed in the initial phase, is no longer effective in distinguishing the gender of the speaker from the generated non-verbal behaviors.<|reference_end|> | arxiv | @article{delbosc2024mitigation,
title={Mitigation of gender bias in automatic facial non-verbal behaviors
generation},
author={Alice Delbosc (TALEP, LIS, AMU), Magalie Ochs (LIS, AMU, R2I), Nicolas
Sabouret (CPU, LISN), Brian Ravenet (CPU, LISN), Stephane Ayache (AMU, LIS,
QARMA)},
journal={arXiv preprint arXiv:2410.07274},
year={2024},
archivePrefix={arXiv},
eprint={2410.07274},
primaryClass={cs.CV cs.AI cs.HC cs.LG cs.NE}
} | delbosc2024mitigation |
arxiv-667746 | 2410.07277 | Swin-BERT: A Feature Fusion System designed for Speech-based Alzheimer's Dementia Detection | <|reference_start|>Swin-BERT: A Feature Fusion System designed for Speech-based Alzheimer's Dementia Detection: Speech is usually used for constructing an automatic Alzheimer's dementia (AD) detection system, as the acoustic and linguistic abilities show a decline in people living with AD at the early stages. However, speech includes not only AD-related local and global information but also other information unrelated to cognitive status, such as age and gender. In this paper, we propose a speech-based system named Swin-BERT for automatic dementia detection. For the acoustic part, the shifted-window multi-head attention, originally proposed to extract local and global information from images, is used to design our acoustic-based system. To decouple the effect of age and gender on acoustic feature extraction, they are used as an extra input to the designed acoustic system. For the linguistic part, the rhythm-related information, which varies significantly between people living with and without AD, is removed while transcribing the audio recordings into transcripts. To compensate for the removed rhythm-related information, the character-level transcripts are proposed to be used as the extra input of a word-level BERT-style system. Finally, the Swin-BERT combines the acoustic features learned from our proposed acoustic-based system with our linguistic-based system. The experiments are based on the two datasets provided by the international dementia detection challenges: the ADReSS and ADReSSo. The results show that both the proposed acoustic and linguistic systems perform better than or comparably to previous research on the two datasets. Superior results are achieved by the proposed Swin-BERT system on the ADReSS and ADReSSo datasets, with F-scores of 85.58\% and 87.32\%, respectively.<|reference_end|> | arxiv | @article{pan2024swin-bert:,
title={Swin-BERT: A Feature Fusion System designed for Speech-based Alzheimer's
Dementia Detection},
author={Yilin Pan, Yanpei Shi, Yijia Zhang, Mingyu Lu},
journal={arXiv preprint arXiv:2410.07277},
year={2024},
archivePrefix={arXiv},
eprint={2410.07277},
primaryClass={eess.AS cs.AI cs.CL cs.SD}
} | pan2024swin-bert: |
arxiv-667747 | 2410.07278 | Retrieval Replace Reduction: An effective visual token reduction method via semantic match | <|reference_start|>Retrieval Replace Reduction: An effective visual token reduction method via semantic match: Multimodal large language models (MLLMs) have demonstrated strong performance across various tasks without requiring training from scratch. However, they face significant computational and memory constraints, particularly when processing multimodal inputs that exceed context length, limiting their scalability. In this paper, we introduce a new approach, \textbf{TRSM} (\textbf{T}oken \textbf{R}eduction via \textbf{S}emantic \textbf{M}atch), which effectively reduces the number of visual tokens without compromising MLLM performance. Inspired by how humans process multimodal tasks, TRSM leverages semantic information from one modality to match relevant semantics in another, reducing the number of visual tokens. Specifically, to retain task-relevant visual tokens, we use the text prompt as a query vector to retrieve the most similar vectors from the visual prompt and merge them with the text tokens. Based on experimental results, when applied to LLaVA-1.5\cite{liu2023}, our approach compresses the visual tokens by 20\%, achieving comparable performance across diverse visual question-answering and reasoning tasks.<|reference_end|> | arxiv | @article{liu2024retrieval,
title={Retrieval Replace Reduction: An effective visual token reduction method
via semantic match},
author={Yingen Liu, Fan Wu, Ruihui Li, Zhuo Tang, Kenli Li},
journal={arXiv preprint arXiv:2410.07278},
year={2024},
archivePrefix={arXiv},
eprint={2410.07278},
primaryClass={cs.CV cs.AI}
} | liu2024retrieval |
arxiv-667748 | 2410.07282 | A Utility-Mining-Driven Active Learning Approach for Analyzing Clickstream Sequences | <|reference_start|>A Utility-Mining-Driven Active Learning Approach for Analyzing Clickstream Sequences: In the rapidly evolving e-commerce industry, the capability of selecting high-quality data for model training is essential. This study introduces the High-Utility Sequential Pattern Mining using SHAP values (HUSPM-SHAP) model, a utility mining-based active learning strategy to tackle this challenge. We found that the parameter settings for positive and negative SHAP values impact the model's mining outcomes, introducing a key consideration into the active learning framework. Through extensive experiments aimed at predicting whether behaviors lead to purchases, the designed HUSPM-SHAP model demonstrates its superiority across diverse scenarios. The model's ability to mitigate labeling needs while maintaining high predictive performance is highlighted. Our findings demonstrate the model's capability to refine e-commerce data processing, steering towards more streamlined, cost-effective prediction modeling.<|reference_end|> | arxiv | @article{wang2024a,
title={A Utility-Mining-Driven Active Learning Approach for Analyzing
Clickstream Sequences},
author={Danny Y. C. Wang, Lars Arne Jordanger, Jerry Chun-Wei Lin},
journal={arXiv preprint arXiv:2410.07282},
year={2024},
archivePrefix={arXiv},
eprint={2410.07282},
primaryClass={cs.LG}
} | wang2024a |
arxiv-667749 | 2410.07283 | Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems | <|reference_start|>Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems: As Large Language Models (LLMs) grow increasingly powerful, multi-agent systems are becoming more prevalent in modern AI applications. Most safety research, however, has focused on vulnerabilities in single-agent LLMs. These include prompt injection attacks, where malicious prompts embedded in external content trick the LLM into executing unintended or harmful actions, compromising the victim's application. In this paper, we reveal a more dangerous vector: LLM-to-LLM prompt injection within multi-agent systems. We introduce Prompt Infection, a novel attack where malicious prompts self-replicate across interconnected agents, behaving much like a computer virus. This attack poses severe threats, including data theft, scams, misinformation, and system-wide disruption, all while propagating silently through the system. Our extensive experiments demonstrate that multi-agent systems are highly susceptible, even when agents do not publicly share all communications. To address this, we propose LLM Tagging, a defense mechanism that, when combined with existing safeguards, significantly mitigates infection spread. This work underscores the urgent need for advanced security measures as multi-agent LLM systems become more widely adopted.<|reference_end|> | arxiv | @article{lee2024prompt,
title={Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems},
author={Donghyun Lee, Mo Tiwari},
journal={arXiv preprint arXiv:2410.07283},
year={2024},
archivePrefix={arXiv},
eprint={2410.07283},
primaryClass={cs.MA cs.AI cs.CR}
} | lee2024prompt |
arxiv-667750 | 2410.07286 | Benchmarking Data Heterogeneity Evaluation Approaches for Personalized Federated Learning | <|reference_start|>Benchmarking Data Heterogeneity Evaluation Approaches for Personalized Federated Learning: There is growing research interest in measuring the statistical heterogeneity of clients' local datasets. Such measurements are used to estimate the suitability for collaborative training of personalized federated learning (PFL) models. Currently, these research endeavors are taking place in silos and there is a lack of a unified benchmark to provide a fair and convenient comparison among various approaches in common settings. We aim to bridge this important gap in this paper. The proposed benchmarking framework currently includes six representative approaches. Extensive experiments have been conducted to compare these approaches under five standard non-IID FL settings, providing much needed insights into which approaches are advantageous under which settings. The proposed framework offers useful guidance on the suitability of various data divergence measures in FL systems. It is beneficial for keeping related research activities on the right track in terms of: (1) designing PFL schemes, (2) selecting appropriate data heterogeneity evaluation approaches for specific FL application scenarios, and (3) addressing fairness issues in collaborative model training. The code is available at https://github.com/Xiaoni-61/DH-Benchmark.<|reference_end|> | arxiv | @article{li2024benchmarking,
title={Benchmarking Data Heterogeneity Evaluation Approaches for Personalized
Federated Learning},
author={Zhilong Li, Xiaohu Wu, Xiaoli Tang, Tiantian He, Yew-Soon Ong,
Mengmeng Chen, Qiqi Liu, Qicheng Lao, Han Yu},
journal={arXiv preprint arXiv:2410.07286},
year={2024},
archivePrefix={arXiv},
eprint={2410.07286},
primaryClass={cs.LG cs.AI}
} | li2024benchmarking |
arxiv-667751 | 2410.07287 | Crafting desirable climate trajectories with RL explored socio-environmental simulations | <|reference_start|>Crafting desirable climate trajectories with RL explored socio-environmental simulations: Climate change poses an existential threat, necessitating effective climate policies to enact impactful change. Decisions in this domain are incredibly complex, involving conflicting entities and evidence. In recent decades, policymakers have increasingly used simulations and computational methods to guide some of their decisions. Integrated Assessment Models (IAMs) are one such method, which combine social, economic, and environmental simulations to forecast potential policy effects. For example, the UN uses outputs of IAMs for their recent Intergovernmental Panel on Climate Change (IPCC) reports. Traditionally these have been solved using recursive equation solvers, but such solvers have several shortcomings, e.g. struggling with decision making under uncertainty. Recent preliminary work using Reinforcement Learning (RL) to replace the traditional solvers shows promising results in decision making in uncertain and noisy scenarios. We build on this work by introducing multiple interacting RL agents as a preliminary analysis on modelling the complex interplay of socio-interactions between various stakeholders or nations that drives much of the current climate crisis. Our findings show that cooperative agents in this framework can consistently chart pathways towards more desirable futures in terms of reduced carbon emissions and improved economy. However, upon introducing competition between agents, for instance by using opposing reward functions, desirable climate futures are rarely reached. Modelling competition is key to increased realism in these simulations; as such, we employ policy interpretation by visualising what states lead to more uncertain behaviour, to understand algorithm failure. Finally, we highlight the current limitations and avenues for further work to ensure future technology uptake for policy derivation.<|reference_end|> | arxiv | @article{rudd-jones2024crafting,
title={Crafting desirable climate trajectories with RL explored
socio-environmental simulations},
author={James Rudd-Jones, Fiona Thendean, María Pérez-Ortiz},
journal={arXiv preprint arXiv:2410.07287},
year={2024},
archivePrefix={arXiv},
eprint={2410.07287},
primaryClass={physics.soc-ph cs.AI}
} | rudd-jones2024crafting |
arxiv-667752 | 2410.07289 | Principal Orthogonal Latent Components Analysis (POLCA Net) | <|reference_start|>Principal Orthogonal Latent Components Analysis (POLCA Net): Representation learning is a pivotal area in the field of machine learning, focusing on the development of methods to automatically discover the representations or features needed for a given task from raw data. Unlike traditional feature engineering, which requires manual crafting of features, representation learning aims to learn features that are more useful and relevant for tasks such as classification, prediction, and clustering. We introduce the Principal Orthogonal Latent Components Analysis Network (POLCA Net), an approach to mimic and extend PCA and LDA capabilities to non-linear domains. POLCA Net combines an autoencoder framework with a set of specialized loss functions to achieve effective dimensionality reduction, orthogonality, variance-based feature sorting, high-fidelity reconstructions, and additionally, when used with classification labels, a latent representation well suited for linear classifiers and low-dimensional visualization of class distributions.<|reference_end|> | arxiv | @article{h.2024principal,
title={Principal Orthogonal Latent Components Analysis (POLCA Net)},
author={Jose Antonio Martin H. and Freddy Perozo and Manuel Lopez},
journal={arXiv preprint arXiv:2410.07289},
year={2024},
archivePrefix={arXiv},
eprint={2410.07289},
primaryClass={cs.LG cs.AI}
} | h.2024principal |
arxiv-667753 | 2410.07295 | IterGen: Iterative Structured LLM Generation | <|reference_start|>IterGen: Iterative Structured LLM Generation: Large Language Models (LLMs) are widely used for tasks such as natural language and code generation. Still, their outputs often suffer from issues like privacy violations and semantically inaccurate code generation. Current libraries for LLM generation rely on left-to-right decoding without systematic support for backtracking, limiting the ability to correct or refine outputs mid-generation. To address this issue, we introduce IterGen, an intuitive framework for iterative, grammar-guided LLM generation that enables users to move both forward and backward within the generated output based on grammar symbols. By leveraging a symbol-to-position mapping, IterGen ensures efficient and structured generation while allowing for corrections during the process. We demonstrate IterGen's effectiveness in two important applications: reducing privacy leakage in LLM outputs and improving the accuracy of LLM-generated SQL queries. Our code is available at https://github.com/uiuc-arc/itergen<|reference_end|> | arxiv | @article{ugare2024itergen:,
title={IterGen: Iterative Structured LLM Generation},
author={Shubham Ugare, Rohan Gumaste, Tarun Suresh, Gagandeep Singh, Sasa
Misailovic},
journal={arXiv preprint arXiv:2410.07295},
year={2024},
archivePrefix={arXiv},
eprint={2410.07295},
primaryClass={cs.SE cs.LG cs.PL}
} | ugare2024itergen: |
arxiv-667754 | 2410.07296 | ReinDiffuse: Crafting Physically Plausible Motions with Reinforced Diffusion Model | <|reference_start|>ReinDiffuse: Crafting Physically Plausible Motions with Reinforced Diffusion Model: Generating human motion from textual descriptions is a challenging task. Existing methods either struggle with physical credibility or are limited by the complexities of physics simulations. In this paper, we present \emph{ReinDiffuse}, which combines reinforcement learning with a motion diffusion model to generate physically credible human motions that align with textual descriptions. Our method adapts the Motion Diffusion Model to output a parameterized distribution of actions, making it compatible with reinforcement learning paradigms. We employ reinforcement learning with the objective of maximizing physically plausible rewards to optimize motion generation for physical fidelity. Our approach outperforms existing state-of-the-art models on two major datasets, HumanML3D and KIT-ML, achieving significant improvements in physical plausibility and motion quality. Project: \url{https://reindiffuse.github.io/}<|reference_end|> | arxiv | @article{han2024reindiffuse:,
title={ReinDiffuse: Crafting Physically Plausible Motions with Reinforced
Diffusion Model},
author={Gaoge Han, Mingjiang Liang, Jinglei Tang, Yongkang Cheng, Wei Liu,
Shaoli Huang},
journal={arXiv preprint arXiv:2410.07296},
year={2024},
archivePrefix={arXiv},
eprint={2410.07296},
primaryClass={cs.CV}
} | han2024reindiffuse: |
arxiv-667755 | 2410.07297 | Autonomous Navigation and Collision Avoidance for Mobile Robots: Classification and Review | <|reference_start|>Autonomous Navigation and Collision Avoidance for Mobile Robots: Classification and Review: This paper introduces a novel classification for Autonomous Mobile Robots (AMRs) into three phases and five steps, focusing on autonomous collision-free navigation. Additionally, it presents the main methods and widely accepted technologies for each phase of the proposed classification. The purpose of this classification is to facilitate understanding and establish connections between the independent input variables of the system (hardware, software) and autonomous navigation. By analyzing well-established technologies in terms of sensors and methods used for autonomous navigation, this paper aims to provide a foundation of knowledge that can be applied in future mobile robot projects.<|reference_end|> | arxiv | @article{de carvalho2024autonomous,
title={Autonomous Navigation and Collision Avoidance for Mobile Robots:
Classification and Review},
author={Marcus Vinicius Leal de Carvalho, Roberto Simoni, and Leopoldo
Yoshioka},
journal={arXiv preprint arXiv:2410.07297},
year={2024},
doi={10.5281/zenodo.13909140},
archivePrefix={arXiv},
eprint={2410.07297},
primaryClass={cs.RO cs.SE}
} | de carvalho2024autonomous |
arxiv-667756 | 2410.07298 | Enhancing Performance of Point Cloud Completion Networks with Consistency Loss | <|reference_start|>Enhancing Performance of Point Cloud Completion Networks with Consistency Loss: Point cloud completion networks are conventionally trained to minimize the disparities between the completed point cloud and the ground-truth counterpart. However, an incomplete object-level point cloud can have multiple valid completion solutions when it is examined in isolation. This one-to-many mapping issue can cause contradictory supervision signals to the network because the loss function may produce different values for identical input-output pairs of the network. In many cases, this issue could adversely affect the network optimization process. In this work, we propose to enhance the conventional learning objective using a novel completion consistency loss to mitigate the one-to-many mapping problem. Specifically, the proposed consistency loss ensures that a point cloud completion network generates a coherent completion solution for incomplete objects originating from the same source point cloud. Experimental results across multiple well-established datasets and benchmarks demonstrate that the proposed completion consistency loss has an excellent capability to enhance the completion performance of various existing networks without any modification to the design of the networks. The proposed consistency loss enhances the performance of the point completion network without affecting the inference speed, thereby increasing the accuracy of point cloud completion. Notably, a state-of-the-art point completion network trained with the proposed consistency loss can achieve state-of-the-art accuracy on the challenging new MVP dataset. The code and results of experiments with various point completion models using the proposed consistency loss will be available at: https://github.com/kaist-avelab/ConsistencyLoss .<|reference_end|> | arxiv | @article{goenawan2024enhancing,
title={Enhancing Performance of Point Cloud Completion Networks with
Consistency Loss},
author={Christofel Rio Goenawan, Kevin Tirta Wijaya, Seung-Hyun Kong},
journal={arXiv preprint arXiv:2410.07298},
year={2024},
archivePrefix={arXiv},
eprint={2410.07298},
primaryClass={cs.CV cs.AI}
} | goenawan2024enhancing |
arxiv-667757 | 2410.07299 | Towards Generalisable Time Series Understanding Across Domains | <|reference_start|>Towards Generalisable Time Series Understanding Across Domains: In natural language processing and computer vision, self-supervised pre-training on large datasets unlocks foundational model capabilities across domains and tasks. However, this potential has not yet been realised in time series analysis, where existing methods disregard the heterogeneous nature of time series characteristics. Time series are prevalent in many domains, including medicine, engineering, natural sciences, and finance, but their characteristics vary significantly in terms of variate count, inter-variate relationships, temporal dynamics, and sampling frequency. This inherent heterogeneity across domains prevents effective pre-training on large time series corpora. To address this issue, we introduce OTiS, an open model for general time series analysis, that has been specifically designed to handle multi-domain heterogeneity. We propose a novel pre-training paradigm including a tokeniser with learnable domain-specific signatures, a dual masking strategy to capture temporal causality, and a normalised cross-correlation loss to model long-range dependencies. Our model is pre-trained on a large corpus of 640,187 samples and 11 billion time points spanning 8 distinct domains, enabling it to analyse time series from any (unseen) domain. In comprehensive experiments across 15 diverse applications - including classification, regression, and forecasting - OTiS showcases its ability to accurately capture domain-specific data characteristics and demonstrates its competitiveness against state-of-the-art baselines. Our code and pre-trained weights are publicly available at https://github.com/oetu/otis.<|reference_end|> | arxiv | @article{turgut2024towards,
title={Towards Generalisable Time Series Understanding Across Domains},
author={Özgün Turgut, Philip Müller, Martin J. Menten, Daniel Rueckert},
journal={arXiv preprint arXiv:2410.07299},
year={2024},
archivePrefix={arXiv},
eprint={2410.07299},
primaryClass={cs.LG cs.AI cs.CV}
} | turgut2024towards |
arxiv-667758 | 2410.07302 | Examining the Prevalence and Dynamics of AI-Generated Media in Art Subreddits | <|reference_start|>Examining the Prevalence and Dynamics of AI-Generated Media in Art Subreddits: Broadly accessible generative AI models like Dall-E have made it possible for anyone to create compelling visual art. In online communities, the introduction of AI-generated content (AIGC) may impact community dynamics by shifting the kinds of content being posted or the responses to content suspected of being generated by AI. We take steps towards examining the potential impact of AIGC on art-related communities on Reddit. We distinguish between communities that disallow AI content and those without a direct policy. We look at image-based posts made to these communities that are transparently created by AI, or comments in these communities that suspect authors of using generative AI. We find that AI posts (and accusations) have played a very small part in these communities through the end of 2023, accounting for fewer than 0.2% of the image-based posts. Even as the absolute number of author-labelled AI posts dwindles over time, accusations of AI use remain more persistent. We show that AI content is more readily used by newcomers and may help increase participation if it aligns with community rules. However, the tone of comments suspecting AI use by others has become more negative over time, especially in communities that do not have explicit rules about AI. Overall, the results show the changing norms and interactions around AIGC in online communities designated for creativity.<|reference_end|> | arxiv | @article{matatov2024examining,
title={Examining the Prevalence and Dynamics of AI-Generated Media in Art
Subreddits},
author={Hana Matatov, Marianne Aubin Le Quéré, Ofra Amir, Mor Naaman},
journal={arXiv preprint arXiv:2410.07302},
year={2024},
archivePrefix={arXiv},
eprint={2410.07302},
primaryClass={cs.AI cs.CY cs.SI}
} | matatov2024examining |
arxiv-667759 | 2410.07303 | Rectified Diffusion: Straightness Is Not Your Need in Rectified Flow | <|reference_start|>Rectified Diffusion: Straightness Is Not Your Need in Rectified Flow: Diffusion models have greatly improved visual generation but are hindered by slow generation speed due to the computationally intensive nature of solving generative ODEs. Rectified flow, a widely recognized solution, improves generation speed by straightening the ODE path. Its key components include: 1) using the diffusion form of flow-matching, 2) employing $\boldsymbol v$-prediction, and 3) performing rectification (a.k.a. reflow). In this paper, we argue that the success of rectification primarily lies in using a pretrained diffusion model to obtain matched pairs of noise and samples, followed by retraining with these matched noise-sample pairs. Based on this, components 1) and 2) are unnecessary. Furthermore, we highlight that straightness is not an essential training target for rectification; rather, it is a specific case of flow-matching models. The more critical training target is to achieve a first-order approximate ODE path, which is inherently curved for models like DDPM and Sub-VP. Building on this insight, we propose Rectified Diffusion, which generalizes the design space and application scope of rectification to encompass the broader category of diffusion models, rather than being restricted to flow-matching models. We validate our method on Stable Diffusion v1-5 and Stable Diffusion XL. Our method not only greatly simplifies the training procedure of rectified flow-based previous works (e.g., InstaFlow) but also achieves superior performance with even lower training cost. Our code is available at https://github.com/G-U-N/Rectified-Diffusion.<|reference_end|> | arxiv | @article{wang2024rectified,
title={Rectified Diffusion: Straightness Is Not Your Need in Rectified Flow},
author={Fu-Yun Wang, Ling Yang, Zhaoyang Huang, Mengdi Wang, Hongsheng Li},
journal={arXiv preprint arXiv:2410.07303},
year={2024},
archivePrefix={arXiv},
eprint={2410.07303},
primaryClass={cs.CV}
} | wang2024rectified |
arxiv-667760 | 2410.07304 | The Moral Turing Test: Evaluating Human-LLM Alignment in Moral Decision-Making | <|reference_start|>The Moral Turing Test: Evaluating Human-LLM Alignment in Moral Decision-Making: As large language models (LLMs) become increasingly integrated into society, their alignment with human morals is crucial. To better understand this alignment, we created a large corpus of human- and LLM-generated responses to various moral scenarios. We found a misalignment between human and LLM moral assessments; although both LLMs and humans tended to reject morally complex utilitarian dilemmas, LLMs were more sensitive to personal framing. We then conducted a quantitative user study involving 230 participants (N=230), who evaluated these responses by determining whether they were AI-generated and assessed their agreement with the responses. Human evaluators preferred LLMs' assessments in moral scenarios, though a systematic anti-AI bias was observed: participants were less likely to agree with judgments they believed to be machine-generated. Statistical and NLP-based analyses revealed subtle linguistic differences in responses, influencing detection and agreement. Overall, our findings highlight the complexities of human-AI perception in morally charged decision-making.<|reference_end|> | arxiv | @article{garcia2024the,
title={The Moral Turing Test: Evaluating Human-LLM Alignment in Moral
Decision-Making},
author={Basile Garcia, Crystal Qian, Stefano Palminteri},
journal={arXiv preprint arXiv:2410.07304},
year={2024},
archivePrefix={arXiv},
eprint={2410.07304},
primaryClass={cs.HC cs.AI}
} | garcia2024the |
arxiv-667761 | 2410.07305 | A Blockchain and Artificial Intelligence based System for Halal Food Traceability | <|reference_start|>A Blockchain and Artificial Intelligence based System for Halal Food Traceability: The demand for halal food products is increasing rapidly around the world. Halal food products are consumed not only by Muslims but also by non-Muslims, due to their purity. However, halal food consumers face several challenges, which raise doubts about whether a product is authentically halal. Therefore, a solution is needed that can address these issues and establish trust between consumers and producers. Blockchain technology can provide a distributed ledger with an immutable record of information, and artificial intelligence supports the development of solutions for pattern identification. The proposed research utilizes a blockchain- and artificial intelligence-based system to ensure the authenticity of halal food products by providing traceability across all supply chain operations and processes, including the sourcing of raw materials. The proposed system has been tested with a local supermarket. The results of the developed solution were promising, and the testers expressed interest in a real-world implementation of the proposed system.<|reference_end|> | arxiv | @article{alourani2024a,
title={A Blockchain and Artificial Intelligence based System for Halal Food
Traceability},
author={Abdulla Alourani, Shahnawaz Khan},
journal={arXiv preprint arXiv:2410.07305},
year={2024},
archivePrefix={arXiv},
eprint={2410.07305},
primaryClass={cs.DC cs.AI}
} | alourani2024a |
arxiv-667762 | 2410.07331 | DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models | <|reference_start|>DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models: We introduce DA-Code, a code generation benchmark specifically designed to assess LLMs on agent-based data science tasks. This benchmark features three core elements: First, the tasks within DA-Code are inherently challenging, setting them apart from traditional code generation tasks and demanding advanced coding skills in grounding and planning. Second, examples in DA-Code are all based on real and diverse data, covering a wide range of complex data wrangling and analytics tasks. Third, to solve the tasks, the models must utilize complex data science programming languages, to perform intricate data processing and derive the answers. We set up the benchmark in a controllable and executable environment that aligns with real-world data analysis scenarios and is scalable. The annotators meticulously design the evaluation suite to ensure the accuracy and robustness of the evaluation. We develop the DA-Agent baseline. Experiments show that although the baseline performs better than other existing frameworks, using the current best LLMs achieves only 30.5% accuracy, leaving ample room for improvement. We release our benchmark at [https://da-code-bench.github.io](https://da-code-bench.github.io).<|reference_end|> | arxiv | @article{huang2024da-code:,
title={DA-Code: Agent Data Science Code Generation Benchmark for Large Language
Models},
author={Yiming Huang, Jianwen Luo, Yan Yu, Yitong Zhang, Fangyu Lei, Yifan
Wei, Shizhu He, Lifu Huang, Xiao Liu, Jun Zhao, Kang Liu},
journal={arXiv preprint arXiv:2410.07331},
year={2024},
archivePrefix={arXiv},
eprint={2410.07331},
primaryClass={cs.CL cs.AI}
} | huang2024da-code: |
arxiv-667763 | 2410.07336 | Positive-Augmented Contrastive Learning for Vision-and-Language Evaluation and Training | <|reference_start|>Positive-Augmented Contrastive Learning for Vision-and-Language Evaluation and Training: Despite significant advancements in caption generation, existing evaluation metrics often fail to capture the full quality or fine-grained details of captions. This is mainly due to their reliance on non-specific human-written references or noisy pre-training data. Still, finding an effective metric is crucial not only for captions evaluation but also for the generation phase. Metrics can indeed play a key role in the fine-tuning stage of captioning models, ultimately enhancing the quality of the generated captions. In this paper, we propose PAC-S++, a learnable metric that leverages the CLIP model, pre-trained on both web-collected and cleaned data and regularized through additional pairs of generated visual and textual positive samples. Exploiting this stronger and curated pre-training, we also apply PAC-S++ as a reward in the Self-Critical Sequence Training (SCST) stage typically employed to fine-tune captioning models. Extensive experiments on different image and video datasets highlight the effectiveness of PAC-S++ compared to popular metrics for the task, including its sensitivity to object hallucinations. Furthermore, we show that integrating PAC-S++ into the fine-tuning stage of a captioning model results in semantically richer captions with fewer repetitions and grammatical errors. Evaluations on out-of-domain benchmarks further demonstrate the efficacy of our fine-tuning approach in enhancing model capabilities. Source code and trained models are publicly available at: https://github.com/aimagelab/pacscore.<|reference_end|> | arxiv | @article{sarto2024positive-augmented,
title={Positive-Augmented Contrastive Learning for Vision-and-Language
Evaluation and Training},
author={Sara Sarto, Nicholas Moratelli, Marcella Cornia, Lorenzo Baraldi, Rita
Cucchiara},
journal={arXiv preprint arXiv:2410.07336},
year={2024},
archivePrefix={arXiv},
eprint={2410.07336},
primaryClass={cs.CV cs.AI cs.CL cs.MM}
} | sarto2024positive-augmented |
arxiv-667764 | 2410.07348 | MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts | <|reference_start|>MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts: In this work, we aim to simultaneously enhance the effectiveness and efficiency of Mixture-of-Experts (MoE) methods. To achieve this, we propose MoE++, a general and heterogeneous MoE framework that integrates both Feed-Forward Network~(FFN) and zero-computation experts. Specifically, we introduce three types of zero-computation experts: the zero expert, copy expert, and constant expert, which correspond to discard, skip, and replace operations, respectively. This design offers three key advantages: (i) Low Computing Overhead: Unlike the uniform mixing mechanism for all tokens within vanilla MoE, MoE++ allows each token to engage with a dynamic number of FFNs, be adjusted by constant vectors, or even skip the MoE layer entirely. (ii) High Performance: By enabling simple tokens to utilize fewer FFN experts, MoE++ allows more experts to focus on challenging tokens, thereby unlocking greater performance potential than vanilla MoE. (iii) Deployment Friendly: Given that zero-computation experts have negligible parameters, we can deploy all zero-computation experts on each GPU, eliminating the significant communication overhead and expert load imbalance associated with FFN experts distributed across different GPUs. Moreover, we leverage gating residuals, enabling each token to consider the pathway taken in the previous layer when selecting the appropriate experts. Extensive experimental results demonstrate that MoE++ achieves better performance while delivering 1.1-2.1x expert forward throughput compared to a vanilla MoE model of the same size, which lays a solid foundation for developing advanced and efficient MoE-related models.<|reference_end|> | arxiv | @article{jin2024moe++:,
title={MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation
Experts},
author={Peng Jin, Bo Zhu, Li Yuan, Shuicheng Yan},
journal={arXiv preprint arXiv:2410.07348},
year={2024},
archivePrefix={arXiv},
eprint={2410.07348},
primaryClass={cs.LG cs.AI}
} | jin2024moe++: |
arxiv-667765 | 2410.07352 | Generating Origin-Destination Matrices in Neural Spatial Interaction Models | <|reference_start|>Generating Origin-Destination Matrices in Neural Spatial Interaction Models: Agent-based models (ABMs) are proliferating as decision-making tools across policy areas in transportation, economics, and epidemiology. In these models, a central object of interest is the discrete origin-destination matrix which captures spatial interactions and agent trip counts between locations. Existing approaches resort to continuous approximations of this matrix and subsequent ad-hoc discretisations in order to perform ABM simulation and calibration. This impedes conditioning on partially observed summary statistics, fails to explore the multimodal matrix distribution over a discrete combinatorial support, and incurs discretisation errors. To address these challenges, we introduce a computationally efficient framework that scales linearly with the number of origin-destination pairs, operates directly on the discrete combinatorial space, and learns the agents' trip intensity through a neural differential equation that embeds spatial interactions. Our approach outperforms the prior art in terms of reconstruction error and ground truth matrix coverage, at a fraction of the computational cost. We demonstrate these benefits in large-scale spatial mobility ABMs in Cambridge, UK and Washington, DC, USA.<|reference_end|> | arxiv | @article{zachos2024generating,
title={Generating Origin-Destination Matrices in Neural Spatial Interaction
Models},
author={Ioannis Zachos, Mark Girolami, Theodoros Damoulas},
journal={arXiv preprint arXiv:2410.07352},
year={2024},
archivePrefix={arXiv},
eprint={2410.07352},
primaryClass={cs.LG stat.ML}
} | zachos2024generating |
arxiv-667766 | 2410.07353 | Fabrication-Aware Inverse Design For Shape Optimization | <|reference_start|>Fabrication-Aware Inverse Design For Shape Optimization: Inverse design (ID) is a computational method that systematically explores a design space to find optimal device geometries based on specific performance criteria. In silicon photonics, ID often leads to devices with design features that degrade significantly due to the fabrication process, limiting the applicability of these devices in scalable silicon photonic fabrication. We demonstrate a solution to this performance degradation through fabrication-aware inverse design (FAID), integrating lithography models for deep-ultraviolet (DUV) lithography and electron beam lithography (EBL) into the shape optimization approach of ID. A Y-branch and an SWG-to-strip converter were generated and fabricated with this new approach. Simulated and measured results verify that the FAID yields devices with up to 0.6 dB lower insertion loss per device. The modified workflow enables designers to use ID to generate devices that adjust for process bias predicted by lithography models.<|reference_end|> | arxiv | @article{khan2024fabrication-aware,
title={Fabrication-Aware Inverse Design For Shape Optimization},
author={Shaheer Khan, Mustafa Hammood, Nicolas A. F. Jaeger, Lukas Chrostowski},
journal={arXiv preprint arXiv:2410.07353},
year={2024},
archivePrefix={arXiv},
eprint={2410.07353},
primaryClass={eess.SY cs.SY}
} | khan2024fabrication-aware |
arxiv-667767 | 2410.07356 | Optimizing High-Level Synthesis Designs with Retrieval-Augmented Large Language Models | <|reference_start|>Optimizing High-Level Synthesis Designs with Retrieval-Augmented Large Language Models: High-level synthesis (HLS) allows hardware designers to create hardware designs with high-level programming languages like C/C++/OpenCL, which greatly improves hardware design productivity. However, existing HLS flows require programmers' hardware design expertise and rely on programmers' manual code transformations and directive annotations to guide compiler optimizations. Optimizing HLS designs requires non-trivial HLS expertise and tedious iterative process in HLS code optimization. Automating HLS code optimizations has become a burning need. Recently, large language models (LLMs) trained on massive code and programming tasks have demonstrated remarkable proficiency in comprehending code, showing the ability to handle domain-specific programming queries directly without labor-intensive fine-tuning. In this work, we propose a novel retrieval-augmented LLM-based approach to effectively optimize high-level synthesis (HLS) programs. Our proposed method leverages few-shot learning, enabling large language models to adopt domain-specific knowledge through natural language prompts. We propose a unique framework, Retrieve Augmented Large Language Model Aided Design (RALAD), designed to enhance LLMs' performance in HLS code optimization tasks. RALAD employs advanced embedding techniques and top-\emph{k} search algorithms to dynamically source relevant knowledge from extensive databases, thereby providing contextually appropriate responses to complex programming queries. Our implementation of RALAD on two specialized domains, utilizing comparatively smaller language models, achieves an impressive 80\% success rate in compilation tasks and outperforms general LLMs by 3.7 -- 19$\times$ in latency improvement.<|reference_end|> | arxiv | @article{xu2024optimizing,
title={Optimizing High-Level Synthesis Designs with Retrieval-Augmented Large
Language Models},
author={Haocheng Xu, Haotian Hu, Sitao Huang},
journal={arXiv preprint arXiv:2410.07356},
year={2024},
archivePrefix={arXiv},
eprint={2410.07356},
primaryClass={cs.AR cs.PL}
} | xu2024optimizing |
arxiv-667768 | 2410.07358 | Improving the portability of predicting students performance models by using ontologies | <|reference_start|>Improving the portability of predicting students performance models by using ontologies: One of the main current challenges in Educational Data Mining and Learning Analytics is the portability or transferability of predictive models obtained for a particular course so that they can be applied to other different courses. To handle this challenge, one of the foremost problems is the models' excessive dependence on the low-level attributes used to train them, which reduces the models' portability. To solve this issue, the use of high level attributes with more semantic meaning, such as ontologies, may be very useful. Along this line, we propose the utilization of an ontology that uses a taxonomy of actions that summarises students' interactions with the Moodle learning management system. We compare the results of this proposed approach against our previous results when we used low-level raw attributes obtained directly from Moodle logs. The results indicate that the use of the proposed ontology improves the portability of the models in terms of predictive accuracy. The main contribution of this paper is to show that the ontological models obtained in one source course can be applied to other different target courses with similar usage levels without losing prediction accuracy.<|reference_end|> | arxiv | @article{zambrano2024improving,
title={Improving the portability of predicting students performance models by
using ontologies},
author={Javier Lopez Zambrano, Juan A. Lara, Cristobal Romero},
journal={arXiv preprint arXiv:2410.07358},
year={2024},
doi={10.1007/s12528-021-09273-3},
archivePrefix={arXiv},
eprint={2410.07358},
primaryClass={cs.AI}
} | zambrano2024improving |
arxiv-667769 | 2410.07359 | Learning-Based Shielding for Safe Autonomy under Unknown Dynamics | <|reference_start|>Learning-Based Shielding for Safe Autonomy under Unknown Dynamics: Shielding is a common method used to guarantee the safety of a system under a black-box controller, such as a neural network controller from deep reinforcement learning (DRL), with simpler, verified controllers. Existing shielding methods rely on formal verification through Markov Decision Processes (MDPs), assuming either known or finite-state models, which limits their applicability to DRL settings with unknown, continuous-state systems. This paper addresses these limitations by proposing a data-driven shielding methodology that guarantees safety for unknown systems under black-box controllers. The approach leverages Deep Kernel Learning to model the systems' one-step evolution with uncertainty quantification and constructs a finite-state abstraction as an Interval MDP (IMDP). By focusing on safety properties expressed in safe linear temporal logic (safe LTL), we develop an algorithm that computes the maximally permissive set of safe policies on the IMDP, ensuring avoidance of unsafe states. The algorithm's soundness and computational complexity are demonstrated through theoretical proofs and experiments on nonlinear systems, including a high-dimensional autonomous spacecraft scenario.<|reference_end|> | arxiv | @article{reed2024learning-based,
title={Learning-Based Shielding for Safe Autonomy under Unknown Dynamics},
author={Robert Reed, Morteza Lahijanian},
journal={arXiv preprint arXiv:2410.07359},
year={2024},
archivePrefix={arXiv},
eprint={2410.07359},
primaryClass={eess.SY cs.LG cs.SY}
} | reed2024learning-based |
arxiv-667770 | 2410.07362 | Large Language Models in Qualitative Research: Can We Do the Data Justice? | <|reference_start|>Large Language Models in Qualitative Research: Can We Do the Data Justice?: Qualitative researchers use tools to collect, sort, and analyze their data. Should qualitative researchers use large language models (LLMs) as part of their practice? LLMs could augment qualitative research, but it is unclear if their use is appropriate, ethical, or aligned with qualitative researchers' goals and values. We interviewed twenty qualitative researchers to investigate these tensions. Many participants see LLMs as promising interlocutors with attractive use cases across the stages of research, but wrestle with their performance and appropriateness. Participants surface concerns regarding the use of LLMs while protecting participant interests, and call attention to an urgent lack of norms and tooling to guide the ethical use of LLMs in research. Given the importance of qualitative methods to human-computer interaction, we use the tensions surfaced by our participants to outline guidelines for researchers considering using LLMs in qualitative research and design principles for LLM-assisted qualitative data analysis tools.<|reference_end|> | arxiv | @article{schroeder2024large,
title={Large Language Models in Qualitative Research: Can We Do the Data
Justice?},
author={Hope Schroeder, Marianne Aubin Le Qu\'er\'e, Casey Randazzo, David
Mimno, Sarita Schoenebeck},
journal={arXiv preprint arXiv:2410.07362},
year={2024},
archivePrefix={arXiv},
eprint={2410.07362},
primaryClass={cs.HC}
} | schroeder2024large |
arxiv-667771 | 2410.07364 | Unlocking Real-Time Fluorescence Lifetime Imaging: Multi-Pixel Parallelism for FPGA-Accelerated Processing | <|reference_start|>Unlocking Real-Time Fluorescence Lifetime Imaging: Multi-Pixel Parallelism for FPGA-Accelerated Processing: Fluorescence lifetime imaging (FLI) is a widely used technique in the biomedical field for measuring the decay times of fluorescent molecules, providing insights into metabolic states, protein interactions, and ligand-receptor bindings. However, its broader application in fast biological processes, such as dynamic activity monitoring, and clinical use, such as in guided surgery, is limited by long data acquisition times and computationally demanding data processing. While deep learning has reduced post-processing times, time-resolved data acquisition remains a bottleneck for real-time applications. To address this, we propose a method to achieve real-time FLI using an FPGA-based hardware accelerator. Specifically, we implemented a GRU-based sequence-to-sequence (Seq2Seq) model on an FPGA board compatible with time-resolved cameras. The GRU model balances accurate processing with the resource constraints of FPGAs, which have limited DSP units and BRAM. The limited memory and computational resources on the FPGA require efficient scheduling of operations and memory allocation to deploy deep learning models for low-latency applications. We address these challenges by using STOMP, a queue-based discrete-event simulator that automates and optimizes task scheduling and memory management on hardware. By integrating a GRU-based Seq2Seq model and its compressed version, called Seq2SeqLite, generated through knowledge distillation, we were able to process multiple pixels in parallel, reducing latency compared to sequential processing. We explore various levels of parallelism to achieve an optimal balance between performance and resource utilization. Our results indicate that the proposed techniques achieved a 17.7x and 52.0x speedup over manual scheduling for the Seq2Seq model and the Seq2SeqLite model, respectively.<|reference_end|> | arxiv | @article{erbas2024unlocking,
title={Unlocking Real-Time Fluorescence Lifetime Imaging: Multi-Pixel
Parallelism for FPGA-Accelerated Processing},
author={Ismail Erbas, Aporva Amarnath, Vikas Pandey, Karthik Swaminathan,
Naigang Wang, Xavier Intes},
journal={arXiv preprint arXiv:2410.07364},
year={2024},
archivePrefix={arXiv},
eprint={2410.07364},
primaryClass={physics.optics cs.AI cs.DC cs.LG}
} | erbas2024unlocking |
arxiv-667772 | 2410.07368 | Learning to learn ecosystems from limited data -- a meta-learning approach | <|reference_start|>Learning to learn ecosystems from limited data -- a meta-learning approach: A fundamental challenge in developing data-driven approaches to ecological systems for tasks such as state estimation and prediction is the paucity of the observational or measurement data. For example, modern machine-learning techniques such as deep learning or reservoir computing typically require a large quantity of data. Leveraging synthetic data from paradigmatic nonlinear but non-ecological dynamical systems, we develop a meta-learning framework with time-delayed feedforward neural networks to predict the long-term behaviors of ecological systems as characterized by their attractors. We show that the framework is capable of accurately reconstructing the ``dynamical climate'' of the ecological system with limited data. Three benchmark population models in ecology, namely the Hastings-Powell model, a three-species food chain, and the Lotka-Volterra system, are used to demonstrate the performance of the meta-learning based prediction framework. In all cases, enhanced accuracy and robustness are achieved using five to seven times less training data as compared with the corresponding machine-learning method trained solely from the ecosystem data. A number of issues affecting the prediction performance are addressed.<|reference_end|> | arxiv | @article{zhai2024learning,
title={Learning to learn ecosystems from limited data -- a meta-learning
approach},
author={Zheng-Meng Zhai, Bryan Glaz, Mulugeta Haile, and Ying-Cheng Lai},
journal={arXiv preprint arXiv:2410.07368},
year={2024},
archivePrefix={arXiv},
eprint={2410.07368},
primaryClass={q-bio.QM cs.LG nlin.CD}
} | zhai2024learning |
arxiv-667773 | 2410.07369 | An undetectable watermark for generative image models | <|reference_start|>An undetectable watermark for generative image models: We present the first undetectable watermarking scheme for generative image models. Undetectability ensures that no efficient adversary can distinguish between watermarked and un-watermarked images, even after making many adaptive queries. In particular, an undetectable watermark does not degrade image quality under any efficiently computable metric. Our scheme works by selecting the initial latents of a diffusion model using a pseudorandom error-correcting code (Christ and Gunn, 2024), a strategy which guarantees undetectability and robustness. We experimentally demonstrate that our watermarks are quality-preserving and robust using Stable Diffusion 2.1. Our experiments verify that, in contrast to every prior scheme we tested, our watermark does not degrade image quality. Our experiments also demonstrate robustness: existing watermark removal attacks fail to remove our watermark from images without significantly degrading the quality of the images. Finally, we find that we can robustly encode 512 bits in our watermark, and up to 2500 bits when the images are not subjected to watermark removal attacks. Our code is available at https://github.com/XuandongZhao/PRC-Watermark.<|reference_end|> | arxiv | @article{gunn2024an,
title={An undetectable watermark for generative image models},
author={Sam Gunn, Xuandong Zhao, Dawn Song},
journal={arXiv preprint arXiv:2410.07369},
year={2024},
archivePrefix={arXiv},
eprint={2410.07369},
primaryClass={cs.CR cs.AI cs.LG cs.MM}
} | gunn2024an |
arxiv-667774 | 2410.07370 | Recommending and Release Planning of User-Driven Functionality Deletion for Mobile Apps | <|reference_start|>Recommending and Release Planning of User-Driven Functionality Deletion for Mobile Apps: Evolving software with an increasing number of features poses challenges in terms of comprehensibility and usability. Traditional software release planning has predominantly focused on orchestrating the addition of features, contributing to the growing complexity and maintenance demands of larger software systems. In mobile apps, an excess of functionality can significantly impact usability, maintainability, and resource consumption, necessitating a nuanced understanding of the applicability of the law of continuous growth to mobile apps. Previous work showed that the deletion of functionality is common and sometimes driven by user reviews. For most users, the removal of features is associated with negative sentiments, prompts changes in usage patterns, and may even result in user churn. Motivated by these preliminary results, we propose Radiation to input user reviews and recommend whether any functionality should be deleted from an app's User Interface (UI). We evaluate Radiation using historical data and survey developers' opinions. From the analysis of 190,062 reviews from 115 randomly selected apps, we show that Radiation can recommend functionality deletion with an average F-Score of 74% when sufficiently many negative user reviews suggest so. We conducted a survey involving 141 software developers to gain insights into the decision-making process and the level of planning for feature deletions. Our findings indicate that 77.3% of the participants often or always plan for such deletions. This underscores the importance of incorporating feature deletion planning into the overall release decision-making process.<|reference_end|> | arxiv | @article{nayebi2024recommending,
title={Recommending and Release Planning of User-Driven Functionality Deletion
for Mobile Apps},
author={Maleknaz Nayebi, Konstantin Kuznetsov, Andreas Zeller, Guenther Ruhe},
journal={arXiv preprint arXiv:2410.07370},
year={2024},
archivePrefix={arXiv},
eprint={2410.07370},
primaryClass={cs.SE}
} | nayebi2024recommending |
arxiv-667775 | 2410.07375 | Boundary-value problems of functional differential equations with state-dependent delays | <|reference_start|>Boundary-value problems of functional differential equations with state-dependent delays: We prove convergence of piecewise polynomial collocation methods applied to periodic boundary value problems for functional differential equations with state-dependent delays. The state dependence of the delays leads to nonlinearities that are not locally Lipschitz continuous preventing the direct application of general abstract discretization theoretic frameworks. We employ a weaker form of differentiability, which we call mild differentiability, to prove that a locally unique solution of the functional differential equation is approximated by the solution of the discretized problem with the expected order. An additional difficulty is that linearizations required for solving the discretized nonlinear problem with Newton iterations are not well defined. We show that Newton iterations still converge if one uses the linearization in regularized solutions. The Newton iterations' asymptotic convergence ratio is limited by the numerical discretization error. Thus, Newton iterations should show better convergence for approximations on finer meshes.<|reference_end|> | arxiv | @article{ando'2024boundary-value,
title={Boundary-value problems of functional differential equations with
state-dependent delays},
author={Alessia And\`o, Jan Sieber},
journal={arXiv preprint arXiv:2410.07375},
year={2024},
archivePrefix={arXiv},
eprint={2410.07375},
primaryClass={math.NA cs.NA}
} | ando'2024boundary-value |
arxiv-667776 | 2410.07376 | Optimal Attitude Control of Large Flexible Space Structures with Distributed Momentum Actuators | <|reference_start|>Optimal Attitude Control of Large Flexible Space Structures with Distributed Momentum Actuators: Recent spacecraft mission concepts propose larger payloads that have lighter, less rigid structures. For large lightweight structures, the natural frequencies of their vibration modes may fall within the attitude controller bandwidth, threatening the stability and settling time of the controller and compromising performance. This work tackles this issue by proposing an attitude control design paradigm of distributing momentum actuators throughout the structure to have more control authority over vibration modes. The issue of jitter disturbances introduced by these actuators is addressed by expanding the bandwidth of the attitude controller to suppress excess vibrations. Numerical simulation results show that, at the expense of more control action, a distributed configuration can achieve lower settling times and reduce structural deformation compared to a more standard centralized configuration.<|reference_end|> | arxiv | @article{cachim2024optimal,
title={Optimal Attitude Control of Large Flexible Space Structures with
Distributed Momentum Actuators},
author={Pedro Cachim, Will Kraus, Zachary Manchester, Pedro Lourenco, Rodrigo
Ventura},
journal={arXiv preprint arXiv:2410.07376},
year={2024},
archivePrefix={arXiv},
eprint={2410.07376},
primaryClass={astro-ph.IM cs.SY eess.SY}
} | cachim2024optimal |
arxiv-667777 | 2410.07378 | Static Pricing for Online Selection Problem and its Variants | <|reference_start|>Static Pricing for Online Selection Problem and its Variants: This paper studies an online selection problem, where a seller seeks to sequentially sell multiple copies of an item to arriving buyers. We consider an adversarial setting, making no modeling assumptions about buyers' valuations for the items except acknowledging a finite support. In this paper, we focus on a class of static pricing algorithms that sample a price from a pre-determined distribution and sell items to buyers whose valuations exceed the sampled price. Such algorithms are of practical interests due to their advantageous properties, such as ease of implementation and non-discrimination over prices. Our work shows that the simple static pricing strategy can achieve strong guarantees comparable to the best known dynamic pricing algorithms. Particularly, we design the optimal static pricing algorithms for the adversarial online selection problem and its two important variants: the online assignment problem and the online selection with convex cost. The static pricing algorithms can even attain the optimal competitive ratios among all online algorithms for the online selection problem and the online assignment problem. To achieve these results, we propose an economics-based approach in the competitive analysis of static pricing algorithms, and develop a novel representative function-based approach to derive the lower bounds. We expect these approaches will be useful in related problems such as online matching.<|reference_end|> | arxiv | @article{sun2024static,
title={Static Pricing for Online Selection Problem and its Variants},
author={Bo Sun, Hossein Nekouyan Jazi, Xiaoqi Tan, Raouf Boutaba},
journal={arXiv preprint arXiv:2410.07378},
year={2024},
archivePrefix={arXiv},
eprint={2410.07378},
primaryClass={cs.GT cs.DS}
} | sun2024static |
arxiv-667778 | 2410.07379 | Learn from Real: Reality Defender's Submission to ASVspoof5 Challenge | <|reference_start|>Learn from Real: Reality Defender's Submission to ASVspoof5 Challenge: Audio deepfake detection is crucial to combat the malicious use of AI-synthesized speech. Among many efforts undertaken by the community, the ASVspoof challenge has become one of the benchmarks to evaluate the generalizability and robustness of detection models. In this paper, we present Reality Defender's submission to the ASVspoof5 challenge, highlighting a novel pretraining strategy which significantly improves generalizability while maintaining low computational cost during training. Our system SLIM learns the style-linguistics dependency embeddings from various types of bonafide speech using self-supervised contrastive learning. The learned embeddings help to discriminate spoof from bonafide speech by focusing on the relationship between the style and linguistics aspects. We evaluated our system on ASVspoof5, ASV2019, and In-the-wild. Our submission achieved minDCF of 0.1499 and EER of 5.5% on ASVspoof5 Track 1, and EER of 7.4% and 10.8% on ASV2019 and In-the-wild respectively.<|reference_end|> | arxiv | @article{zhu2024learn,
title={Learn from Real: Reality Defender's Submission to ASVspoof5 Challenge},
author={Yi Zhu, Chirag Goel, Surya Koppisetti, Trang Tran, Ankur Kumar, Gaurav
Bharaj},
journal={arXiv preprint arXiv:2410.07379},
year={2024},
archivePrefix={arXiv},
eprint={2410.07379},
primaryClass={eess.AS cs.AI cs.CL}
} | zhu2024learn |
arxiv-667779 | 2410.07381 | Tally: Non-Intrusive Performance Isolation for Concurrent Deep Learning Workloads | <|reference_start|>Tally: Non-Intrusive Performance Isolation for Concurrent Deep Learning Workloads: GPU underutilization is a significant concern in many production deep learning clusters, leading to prolonged job queues and increased operational expenses. A promising solution to this inefficiency is GPU sharing, which improves resource utilization by allowing multiple workloads to execute concurrently on a single GPU. However, the practical deployment of GPU sharing in production settings faces critical obstacles due to the limitations of existing mechanisms, such as high integration costs, inadequate performance isolation, and limited application compatibility. To address these issues, we introduce \emph{Tally}, a non-intrusive GPU sharing mechanism that provides robust performance isolation and comprehensive workload compatibility. Tally operates as a virtualization layer between applications and GPUs, transparently orchestrating the device execution of concurrent workloads. The key to Tally's robust performance isolation capability lies in its fine-grained thread-block level GPU kernel scheduling strategy, which allows the system to effectively mitigate interference caused by workload co-execution. Our evaluation, conducted on a diverse set of workload combinations, demonstrates that Tally on average incurs a mere $7.2\%$ overhead on the $99^{th}$-percentile latency of high-priority inference tasks when executed concurrently with best-effort training workloads compared to $188.9\%$ overhead exhibited by the state-of-the-art GPU sharing systems like TGS, while achieving over $80\%$ of TGS's system throughput.<|reference_end|> | arxiv | @article{zhao2024tally:,
title={Tally: Non-Intrusive Performance Isolation for Concurrent Deep Learning
Workloads},
author={Wei Zhao, Anand Jayarajan, Gennady Pekhimenko},
journal={arXiv preprint arXiv:2410.07381},
year={2024},
archivePrefix={arXiv},
eprint={2410.07381},
primaryClass={cs.DC}
} | zhao2024tally: |
arxiv-667780 | 2410.07382 | Optimal-Length Labeling Schemes for Fast Deterministic Communication in Radio Networks | <|reference_start|>Optimal-Length Labeling Schemes for Fast Deterministic Communication in Radio Networks: We consider two fundamental communication tasks in arbitrary radio networks: broadcasting (information from one source has to reach all nodes) and gossiping (every node has a message and all messages have to reach all nodes). Nodes are assigned labels that are (not necessarily different) binary strings. Each node knows its own label and can use it as a parameter in the same deterministic algorithm. The length of a labeling scheme is the largest length of a label. The goal is to find labeling schemes of asymptotically optimal length for the above tasks, and to design fast deterministic distributed algorithms for each of them, using labels of optimal length. Our main result concerns broadcasting. We show the existence of a labeling scheme of constant length that supports broadcasting in time $O(D+\log^2 n)$, where $D$ is the diameter of the network and $n$ is the number of nodes. This broadcasting time is an improvement over the best currently known $O(D\log n + \log^2 n)$ time of broadcasting with constant-length labels, due to Ellen and Gilbert (SPAA 2020). It also matches the optimal broadcasting time in radio networks of known topology. Hence, we show that appropriately chosen node labels of constant length permit to achieve, in a distributed way, the optimal centralized broadcasting time. This is, perhaps, the most surprising finding of this paper. We are able to obtain our result thanks to a novel methodological tool of propagating information in radio networks, that we call a 2-height respecting tree. Next, we apply our broadcasting algorithm to solve the gossiping problem. We get a gossiping algorithm working in time $O(D + \Delta\log n + \log^2 n)$, using a labeling scheme of optimal length $O(\log \Delta)$, where $\Delta$ is the maximum degree. Our time is the same as the best known gossiping time in radio networks of known topology.<|reference_end|> | arxiv | @article{gańczorz2024optimal-length,
title={Optimal-Length Labeling Schemes for Fast Deterministic Communication in
Radio Networks},
author={Adam Ga\'nczorz (1), Tomasz Jurdzi\'nski (1), Andrzej Pelc (2) ((1)
Institute of Computer Science, University of Wroc{\l}aw, (2) D\'epartement
d'informatique, Universit\'e du Qu\'ebec en Outaouais)},
journal={arXiv preprint arXiv:2410.07382},
year={2024},
archivePrefix={arXiv},
eprint={2410.07382},
primaryClass={cs.DC}
} | gańczorz2024optimal-length |
arxiv-667781 | 2410.07383 | SparseGrad: A Selective Method for Efficient Fine-tuning of MLP Layers | <|reference_start|>SparseGrad: A Selective Method for Efficient Fine-tuning of MLP Layers: The performance of Transformer models has been enhanced by increasing the number of parameters and the length of the processed text. Consequently, fine-tuning the entire model becomes a memory-intensive process. High-performance methods for parameter-efficient fine-tuning (PEFT) typically work with Attention blocks and often overlook MLP blocks, which contain about half of the model parameters. We propose a new selective PEFT method, namely SparseGrad, that performs well on MLP blocks. We transfer layer gradients to a space where only about 1\% of the layer's elements remain significant. By converting gradients into a sparse structure, we reduce the number of updated parameters. We apply SparseGrad to fine-tune BERT and RoBERTa for the NLU task and LLaMa-2 for the Question-Answering task. In these experiments, with identical memory requirements, our method outperforms LoRA and MeProp, robust popular state-of-the-art PEFT approaches.<|reference_end|> | arxiv | @article{chekalina2024sparsegrad:,
title={SparseGrad: A Selective Method for Efficient Fine-tuning of MLP Layers},
author={Viktoriia Chekalina, Anna Rudenko, Gleb Mezentsev, Alexander Mikhalev,
Alexander Panchenko, Ivan Oseledets},
journal={arXiv preprint arXiv:2410.07383},
year={2024},
archivePrefix={arXiv},
eprint={2410.07383},
primaryClass={cs.CL cs.AI}
} | chekalina2024sparsegrad: |
arxiv-667782 | 2410.07385 | En masse scanning and automated surfacing of small objects using Micro-CT | <|reference_start|>En masse scanning and automated surfacing of small objects using Micro-CT: Modern archaeological methods increasingly utilize 3D virtual representations of objects, computationally intensive analyses, high resolution scanning, large datasets, and machine learning. With higher resolution scans, challenges surrounding computational power, memory, and file storage quickly arise. Processing and analyzing high resolution scans often requires memory-intensive workflows, which are infeasible for most computers and increasingly necessitate the use of super-computers or innovative methods for processing on standard computers. Here we introduce a novel protocol for en-masse micro-CT scanning of small objects with a {\em mostly-automated} processing workflow that functions in memory-limited settings. We scanned 1,112 animal bone fragments using just 10 micro-CT scans, which were post-processed into individual PLY files. Notably, our methods can be applied to any object (with discernible density from the packaging material) making this method applicable to a variety of inquiries and fields including paleontology, geology, electrical engineering, and materials science. Further, our methods may immediately be adopted by scanning institutes to pool customer orders together and offer more affordable scanning. The work presented herein is part of a larger program facilitated by the international and multi-disciplinary research consortium known as Anthropological and Mathematical Analysis of Archaeological and Zooarchaeological Evidence (AMAAZE). AMAAZE unites experts in anthropology, mathematics, and computer science to develop new methods for mass-scale virtual archaeological research. Overall, our new scanning method and processing workflows lay the groundwork and set the standard for future mass-scale, high resolution scanning studies.<|reference_end|> | arxiv | @article{o'neill2024en,
title={En masse scanning and automated surfacing of small objects using
Micro-CT},
author={Riley C. W. O'Neill, Katrina Yezzi-Woodley, Jeff Calder, Peter J.
Olver},
journal={arXiv preprint arXiv:2410.07385},
year={2024},
archivePrefix={arXiv},
eprint={2410.07385},
primaryClass={cs.CV eess.IV}
} | o'neill2024en |
arxiv-667783 | 2410.07387 | Siamese networks for Poincar\'e embeddings and the reconstruction of evolutionary trees | <|reference_start|>Siamese networks for Poincar\'e embeddings and the reconstruction of evolutionary trees: We present a method for reconstructing evolutionary trees from high-dimensional data, with a specific application to bird song spectrograms. We address the challenge of inferring phylogenetic relationships from phenotypic traits, like vocalizations, without predefined acoustic properties. Our approach combines two main components: Poincar\'e embeddings for dimensionality reduction and distance computation, and the neighbor joining algorithm for tree reconstruction. Unlike previous work, we employ Siamese networks to learn embeddings from only leaf node samples of the latent tree. We demonstrate our method's effectiveness on both synthetic data and spectrograms from six species of finches.<|reference_end|> | arxiv | @article{carvallo2024siamese,
title={Siamese networks for Poincar\'e embeddings and the reconstruction of
evolutionary trees},
author={Ciro Carvallo, Hern\'an Bocaccio, Gabriel B. Mindlin and Pablo
Groisman},
journal={arXiv preprint arXiv:2410.07387},
year={2024},
archivePrefix={arXiv},
eprint={2410.07387},
primaryClass={q-bio.PE cs.LG}
} | carvallo2024siamese |
arxiv-667784 | 2410.07388 | On Densest $k$-Subgraph Mining and Diagonal Loading | <|reference_start|>On Densest $k$-Subgraph Mining and Diagonal Loading: The Densest $k$-Subgraph (D$k$S) problem aims to find a subgraph comprising $k$ vertices with the maximum number of edges between them. A continuous reformulation of the binary quadratic D$k$S problem is considered, which incorporates a diagonal loading term. It is shown that this non-convex, continuous relaxation is tight for a range of diagonal loading parameters, and the impact of the diagonal loading parameter on the optimization landscape is studied. On the algorithmic side, two projection-free algorithms are proposed to tackle the relaxed problem, based on Frank-Wolfe and explicit constraint parametrization, respectively. Experiments suggest that both algorithms have merits relative to the state of the art, while the Frank-Wolfe-based algorithm stands out in terms of subgraph density, computational complexity, and ability to scale up to very large datasets.<|reference_end|> | arxiv | @article{lu2024on,
title={On Densest $k$-Subgraph Mining and Diagonal Loading},
author={Qiheng Lu, Nicholas D. Sidiropoulos, Aritra Konar},
journal={arXiv preprint arXiv:2410.07388},
year={2024},
archivePrefix={arXiv},
eprint={2410.07388},
primaryClass={cs.SI cs.DS}
} | lu2024on |
arxiv-667785 | 2410.07389 | MIMO MAC Empowered by Reconfigurable Intelligent Surfaces: Capacity Region and Large System Analysis | <|reference_start|>MIMO MAC Empowered by Reconfigurable Intelligent Surfaces: Capacity Region and Large System Analysis: Smart wireless environments enabled by multiple distributed Reconfigurable Intelligent Surfaces (RISs) have recently attracted significant research interest as a wireless connectivity paradigm for sixth Generation (6G) networks. In this paper, using random matrix theory methods, we calculate the mean of the sum Mutual Information (MI) for the correlated Multiple-Input Multiple-Output (MIMO) Multiple Access Channel (MAC) in the presence of multiple RISs, in the large-antenna number limit. We thus obtain the capacity region boundaries, after optimizing over the tunable RISs' phase configurations. Furthermore, we obtain a closed-form expression for the variance of the sum-MI metric, which together with the mean provides a tight Gaussian approximation for the outage probability. The derived results become relevant in the presence of fast-fading, when channel estimation is extremely challenging. Our numerical investigations showcased that, when the angle-spread in the neighborhood of each RIS is small, which is expected for higher carrier frequencies, the communication link strongly improves from optimizing the ergodic MI of the multiple RISs. We also found that increasing the number of transmitting users in such MIMO-MAC-RIS systems results in rapidly diminishing sum-MI gains, hence providing limits on the number of users that can be efficiently served by a given RIS.<|reference_end|> | arxiv | @article{moustakas2024mimo,
title={MIMO MAC Empowered by Reconfigurable Intelligent Surfaces: Capacity
Region and Large System Analysis},
author={Aris L. Moustakas and George C. Alexandropoulos},
journal={arXiv preprint arXiv:2410.07389},
year={2024},
archivePrefix={arXiv},
eprint={2410.07389},
primaryClass={cs.IT cs.ET math.IT}
} | moustakas2024mimo |
arxiv-667786 | 2410.07391 | The Cognitive Capabilities of Generative AI: A Comparative Analysis with Human Benchmarks | <|reference_start|>The Cognitive Capabilities of Generative AI: A Comparative Analysis with Human Benchmarks: There is increasing interest in tracking the capabilities of general intelligence foundation models. This study benchmarks leading large language models and vision language models against human performance on the Wechsler Adult Intelligence Scale (WAIS-IV), a comprehensive, population-normed assessment of underlying human cognition and intellectual abilities, with a focus on the domains of Verbal Comprehension (VCI), Working Memory (WMI), and Perceptual Reasoning (PRI). Most models demonstrated exceptional capabilities in the storage, retrieval, and manipulation of tokens such as arbitrary sequences of letters and numbers, with performance on the Working Memory Index (WMI) greater than or equal to the 99.5th percentile when compared to human population normative ability. Performance on the Verbal Comprehension Index (VCI), which measures retrieval of acquired information and linguistic understanding of the meaning of words and their relationships to each other, was also consistently at or above the 98th percentile. Despite these broad strengths, we observed consistently poor performance on the Perceptual Reasoning Index (PRI; range 0.1-10th percentile) from multimodal models, indicating a profound inability to interpret and reason about visual information. Smaller and older model versions consistently performed worse, indicating that training data, parameter count and advances in tuning are resulting in significant advances in cognitive ability.<|reference_end|> | arxiv | @article{galatzer-levy2024the,
title={The Cognitive Capabilities of Generative AI: A Comparative Analysis with
Human Benchmarks},
author={Isaac R. Galatzer-Levy, David Munday, Jed McGiffin, Xin Liu, Danny
Karmon, Ilia Labzovsky, Rivka Moroshko, Amir Zait, Daniel McDuff},
journal={arXiv preprint arXiv:2410.07391},
year={2024},
archivePrefix={arXiv},
eprint={2410.07391},
primaryClass={cs.AI}
} | galatzer-levy2024the |
arxiv-667787 | 2410.07393 | How Much Power Must We Extract From a Receiver Antenna to Effect Communications? | <|reference_start|>How Much Power Must We Extract From a Receiver Antenna to Effect Communications?: Subject to the laws of classical physics - the science that governs the design of today's wireless communication systems - there is no need to extract power from a receiver antenna in order to effect communications. If we dispense with a transmission line and, instead, make the front-end electronics colocated with the antenna, then a high input-impedance preamplifier can measure the open-circuit voltage directly on the antenna port without drawing either current or power. Neither Friis' concept of noise figure, nor Shannon information theory, nor electronics technology dictates that we must extract power from an antenna.<|reference_end|> | arxiv | @article{marzetta2024how,
title={How Much Power Must We Extract From a Receiver Antenna to Effect
Communications?},
author={Thomas L. Marzetta and Brian McMinn and Amritpal Singh and Thorkild B.
Hansen},
journal={arXiv preprint arXiv:2410.07393},
year={2024},
archivePrefix={arXiv},
eprint={2410.07393},
primaryClass={eess.SP cs.IT math.IT}
} | marzetta2024how |
arxiv-667788 | 2410.07394 | Structured Spatial Reasoning with Open Vocabulary Object Detectors | <|reference_start|>Structured Spatial Reasoning with Open Vocabulary Object Detectors: Reasoning about spatial relationships between objects is essential for many real-world robotic tasks, such as fetch-and-delivery, object rearrangement, and object search. The ability to detect and disambiguate different objects and identify their location is key to successful completion of these tasks. Several recent works have used powerful Vision and Language Models (VLMs) to unlock this capability in robotic agents. In this paper we introduce a structured probabilistic approach that integrates rich 3D geometric features with state-of-the-art open-vocabulary object detectors to enhance spatial reasoning for robotic perception. The approach is evaluated and compared against zero-shot performance of the state-of-the-art Vision and Language Models (VLMs) on spatial reasoning tasks. To enable this comparison, we annotate spatial clauses in real-world RGB-D Active Vision Dataset [1] and conduct experiments on this and the synthetic Semantic Abstraction [2] dataset. Results demonstrate the effectiveness of the proposed method, showing superior performance of grounding spatial relations over state of the art open-source VLMs by more than 20%.<|reference_end|> | arxiv | @article{nejatishahidin2024structured,
title={Structured Spatial Reasoning with Open Vocabulary Object Detectors},
author={Negar Nejatishahidin, Madhukar Reddy Vongala, Jana Kosecka},
journal={arXiv preprint arXiv:2410.07394},
year={2024},
archivePrefix={arXiv},
eprint={2410.07394},
primaryClass={cs.CV}
} | nejatishahidin2024structured |
arxiv-667789 | 2410.07395 | LLM Embeddings Improve Test-time Adaptation to Tabular $Y|X$-Shifts | <|reference_start|>LLM Embeddings Improve Test-time Adaptation to Tabular $Y|X$-Shifts: For tabular datasets, the change in the relationship between the label and covariates ($Y|X$-shifts) is common due to missing variables (a.k.a. confounders). Since it is impossible to generalize to a completely new and unknown domain, we study models that are easy to adapt to the target domain even with few labeled examples. We focus on building more informative representations of tabular data that can mitigate $Y|X$-shifts, and propose to leverage the prior world knowledge in LLMs by serializing (write down) the tabular data to encode it. We find LLM embeddings alone provide inconsistent improvements in robustness, but models trained on them can be well adapted/finetuned to the target domain even using 32 labeled observations. Our finding is based on a comprehensive and systematic study consisting of 7650 source-target pairs and benchmark against 261,000 model configurations trained by 22 algorithms. Our observation holds when ablating the size of accessible target data and different adaptation strategies. The code is available at https://github.com/namkoong-lab/LLM-Tabular-Shifts.<|reference_end|> | arxiv | @article{zeng2024llm,
title={LLM Embeddings Improve Test-time Adaptation to Tabular $Y|X$-Shifts},
author={Yibo Zeng, Jiashuo Liu, Henry Lam, Hongseok Namkoong},
journal={arXiv preprint arXiv:2410.07395},
year={2024},
archivePrefix={arXiv},
eprint={2410.07395},
primaryClass={cs.LG cs.AI math.OC stat.ML}
} | zeng2024llm |
arxiv-667790 | 2410.07397 | Aligning AI-driven discovery with human intuition | <|reference_start|>Aligning AI-driven discovery with human intuition: As data-driven modeling of physical dynamical systems becomes more prevalent, a new challenge is emerging: making these models more compatible and aligned with existing human knowledge. AI-driven scientific modeling processes typically begin with identifying hidden state variables, then deriving governing equations, followed by predicting and analyzing future behaviors. The critical initial step of identification of an appropriate set of state variables remains challenging for two reasons. First, finding a compact set of meaningfully predictive variables is mathematically difficult and under-defined. A second reason is that variables found often lack physical significance, and are therefore difficult for human scientists to interpret. We propose a new general principle for distilling representations that are naturally more aligned with human intuition, without relying on prior physical knowledge. We demonstrate our approach on a number of experimental and simulated systems where the variables generated by the AI closely resemble those chosen independently by human scientists. We suggest that this principle can help make human-AI collaboration more fruitful, as well as shed light on how humans make scientific modeling choices.<|reference_end|> | arxiv | @article{zhang2024aligning,
title={Aligning AI-driven discovery with human intuition},
author={Kevin Zhang, Hod Lipson},
journal={arXiv preprint arXiv:2410.07397},
year={2024},
archivePrefix={arXiv},
eprint={2410.07397},
primaryClass={cs.LG}
} | zhang2024aligning |
arxiv-667791 | 2410.07400 | Advocating Character Error Rate for Multilingual ASR Evaluation | <|reference_start|>Advocating Character Error Rate for Multilingual ASR Evaluation: Automatic speech recognition (ASR) systems have traditionally been evaluated using English datasets, with the word error rate (WER) serving as the predominant metric. WER's simplicity and ease of interpretation have contributed to its widespread adoption, particularly for English. However, as ASR systems expand to multilingual contexts, WER fails in various ways, particularly with morphologically complex languages or those without clear word boundaries. Our work documents the limitations of WER as an evaluation metric and advocates for the character error rate (CER) as the primary metric in multilingual ASR evaluation. We show that CER avoids many of the challenges WER faces and exhibits greater consistency across writing systems. We support our proposition by conducting human evaluations of ASR transcriptions in three languages: Malayalam, English, and Arabic, which exhibit distinct morphological characteristics. We show that CER correlates more closely with human judgments than WER, even for English. To facilitate further research, we release our human evaluation dataset for future benchmarking of ASR metrics. Our findings suggest that CER should be prioritized, or at least supplemented, in multilingual ASR evaluations to account for the varying linguistic characteristics of different languages.<|reference_end|> | arxiv | @article{k2024advocating,
title={Advocating Character Error Rate for Multilingual ASR Evaluation},
author={Thennal D K, Jesin James, Deepa P Gopinath, Muhammed Ashraf K},
journal={arXiv preprint arXiv:2410.07400},
year={2024},
archivePrefix={arXiv},
eprint={2410.07400},
primaryClass={cs.CL cs.SD eess.AS}
} | k2024advocating |
arxiv-667792 | 2410.07401 | Enhancing Soccer Camera Calibration Through Keypoint Exploitation | <|reference_start|>Enhancing Soccer Camera Calibration Through Keypoint Exploitation: Accurate camera calibration is essential for transforming 2D images from camera sensors into 3D world coordinates, enabling precise scene geometry interpretation and supporting sports analytics tasks such as player tracking, offside detection, and performance analysis. However, obtaining a sufficient number of high-quality point pairs remains a significant challenge for both traditional and deep learning-based calibration methods. This paper introduces a multi-stage pipeline that addresses this challenge by leveraging the structural features of the football pitch. Our approach significantly increases the number of usable points for calibration by exploiting line-line and line-conic intersections, points on the conics, and other geometric features. To mitigate the impact of imperfect annotations, we employ data fitting techniques. Our pipeline utilizes deep learning for keypoint and line detection and incorporates geometric constraints based on real-world pitch dimensions. A voter algorithm iteratively selects the most reliable keypoints, further enhancing calibration accuracy. We evaluated our approach on the largest football broadcast camera calibration dataset available, and secured the top position in the SoccerNet Camera Calibration Challenge 2023 [arXiv:2309.06006], which demonstrates the effectiveness of our method in real-world scenarios. The project code is available at https://github.com/NikolasEnt/soccernet-calibration-sportlight .<|reference_end|> | arxiv | @article{falaleev2024enhancing,
title={Enhancing Soccer Camera Calibration Through Keypoint Exploitation},
author={Nikolay S. Falaleev and Ruilong Chen},
journal={In Proceedings of the 7th ACM International Workshop on Multimedia
Content Analysis in Sports (MMSports '24). Association for Computing
Machinery, New York, NY, USA (2024) 65-73},
year={2024},
doi={10.1145/3689061.3689074},
archivePrefix={arXiv},
eprint={2410.07401},
primaryClass={cs.CV}
} | falaleev2024enhancing |
arxiv-667793 | 2410.07403 | On the Feasibility of A Mixed-Method Approach for Solving Long Horizon Task-Oriented Dexterous Manipulation | <|reference_start|>On the Feasibility of A Mixed-Method Approach for Solving Long Horizon Task-Oriented Dexterous Manipulation: In-hand manipulation of tools using dexterous hands in the real world is an underexplored problem in the literature. In addition to more complex geometry and larger size of the tools compared to more commonly used objects like cubes or cylinders, task-oriented in-hand tool manipulation involves many sub-tasks to be performed sequentially. This may involve reaching for the tool, picking it up, reorienting it in hand with or without regrasping to reach a desired final grasp appropriate for the tool usage, and carrying the tool to the desired pose. Research on long-horizon manipulation using dexterous hands is rather limited and the existing work focuses on learning the individual sub-tasks using a method like reinforcement learning (RL) and combining the policies for different subtasks to perform a long horizon task. However, in general a single method may not be the best for all the sub-tasks, and this can be more pronounced when dealing with multi-fingered hands manipulating objects with complex geometry like tools. In this paper, we investigate the use of a mixed-method approach to solve for the long-horizon task of tool usage and we use imitation learning, reinforcement learning and model-based control. We also discuss a new RL-based teacher-student framework that combines real-world data into offline training. We show that our proposed approach for each subtask outperforms the commonly adopted reinforcement learning approach across different subtasks and in performing the long horizon task in simulation. Finally, we show the successful transferability to the real world.<|reference_end|> | arxiv | @article{mehta2024on,
title={On the Feasibility of A Mixed-Method Approach for Solving Long Horizon
Task-Oriented Dexterous Manipulation},
author={Shaunak A. Mehta and Rana Soltani Zarrin},
journal={arXiv preprint arXiv:2410.07403},
year={2024},
archivePrefix={arXiv},
eprint={2410.07403},
primaryClass={cs.RO}
} | mehta2024on |
arxiv-667794 | 2410.07404 | Fostering Intrinsic Motivation in Reinforcement Learning with Pretrained Foundation Models | <|reference_start|>Fostering Intrinsic Motivation in Reinforcement Learning with Pretrained Foundation Models: Exploration remains a significant challenge in reinforcement learning, especially in environments where extrinsic rewards are sparse or non-existent. The recent rise of foundation models, such as CLIP, offers an opportunity to leverage pretrained, semantically rich embeddings that encapsulate broad and reusable knowledge. In this work, we explore the potential of these foundation models to drive exploration, and we analyze the critical role of the episodic novelty term in enhancing the agent's exploration effectiveness. We also investigate whether providing the intrinsic module with complete state information -- rather than just partial observations -- can improve exploration, despite the difficulties in handling small variations within large state spaces. Our experiments in the MiniGrid domain reveal that intrinsic modules can effectively utilize full state information, significantly increasing sample efficiency while learning an optimal policy. Moreover, we show that the embeddings provided by foundation models are sometimes even better than those constructed by the agent during training, further accelerating the learning process, especially when coupled with the episodic novelty term to enhance exploration.<|reference_end|> | arxiv | @article{andres2024fostering,
title={Fostering Intrinsic Motivation in Reinforcement Learning with Pretrained
Foundation Models},
author={Alain Andres and Javier Del Ser},
journal={arXiv preprint arXiv:2410.07404},
year={2024},
archivePrefix={arXiv},
eprint={2410.07404},
primaryClass={cs.AI cs.LG}
} | andres2024fostering |
arxiv-667795 | 2410.07405 | Exploring Efficient Foundational Multi-modal Models for Video Summarization | <|reference_start|>Exploring Efficient Foundational Multi-modal Models for Video Summarization: Foundational models are able to generate text outputs given prompt instructions and text, audio, or image inputs. Recently these models have been combined to perform tasks on video, such as video summarization. Such video foundation models perform pre-training by aligning outputs from each modality-specific model into the same embedding space. Then the embeddings from each model are used within a language model, which is fine-tuned on a desired instruction set. Aligning each modality during pre-training is computationally expensive and prevents rapid testing of different base modality models. During fine-tuning, evaluation is carried out within in-domain videos where it is hard to understand the generalizability and data efficiency of these methods. To alleviate these issues we propose a plug-and-play video language model. It directly uses the texts generated from each input modality into the language model, avoiding pre-training alignment overhead. Instead of fine-tuning we leverage few-shot instruction adaptation strategies. We compare the performance versus the computational costs for our plug-and-play style method and baseline tuning methods. Finally, we explore the generalizability of each method during domain shift and present insights on what data is useful when training data is limited. Through this analysis, we present practical insights on how to leverage multi-modal foundational models for effective results given realistic compute and data limitations.<|reference_end|> | arxiv | @article{samel2024exploring,
title={Exploring Efficient Foundational Multi-modal Models for Video
Summarization},
author={Karan Samel, Apoorva Beedu, Nitish Sontakke, Irfan Essa},
journal={arXiv preprint arXiv:2410.07405},
year={2024},
archivePrefix={arXiv},
eprint={2410.07405},
primaryClass={cs.CV cs.AI}
} | samel2024exploring |
arxiv-667796 | 2410.07407 | Optimized Spatial Architecture Mapping Flow for Transformer Accelerators | <|reference_start|>Optimized Spatial Architecture Mapping Flow for Transformer Accelerators: Recent innovations in Transformer-based large language models have significantly advanced the field of general-purpose neural language understanding and generation. With billions of trainable parameters, deployment of these large models relies on high-performance hardware accelerators to efficiently deliver the required computation. Spatial architectures, such as TPUs, offer a promising solution to accelerating computation-intensive workloads. However, the design process for existing spatial architectures is predominantly manual, and it often involves time-consuming redesigns for new applications and new problem dimensions, which greatly limits the development of optimally designed accelerators for Transformer models. To address these challenges, we propose SAMT (Spatial Architecture Mapping for Transformers), a comprehensive framework designed to optimize the dataflow mapping of Transformer inference workloads onto spatial accelerators. We demonstrate the effectiveness of SAMT in improving the performance of spatial accelerators for Transformer models. We propose and leverage the dynamic operator fusion schemes for the Transformer models and co-search the optimal dataflow mapping strategies for spatial accelerators. SAMT significantly reduces inference latency by 12% to 91% and energy consumption by 3% to 23% for evaluated Transformer models compared to traditional spatial accelerator designs among edge, mobile and cloud settings.<|reference_end|> | arxiv | @article{xu2024optimized,
title={Optimized Spatial Architecture Mapping Flow for Transformer Accelerators},
author={Haocheng Xu, Faraz Tahmasebi, Ye Qiao, Hongzheng Tian, Hyoukjun Kwon,
Sitao Huang},
journal={arXiv preprint arXiv:2410.07407},
year={2024},
archivePrefix={arXiv},
eprint={2410.07407},
primaryClass={cs.AR}
} | xu2024optimized |
arxiv-667797 | 2410.07408 | ACDC: Automated Creation of Digital Cousins for Robust Policy Learning | <|reference_start|>ACDC: Automated Creation of Digital Cousins for Robust Policy Learning: Training robot policies in the real world can be unsafe, costly, and difficult to scale. Simulation serves as an inexpensive and potentially limitless source of training data, but suffers from the semantics and physics disparity between simulated and real-world environments. These discrepancies can be minimized by training in digital twins, which serve as virtual replicas of a real scene but are expensive to generate and cannot produce cross-domain generalization. To address these limitations, we propose the concept of digital cousins, a virtual asset or scene that, unlike a digital twin, does not explicitly model a real-world counterpart but still exhibits similar geometric and semantic affordances. As a result, digital cousins simultaneously reduce the cost of generating an analogous virtual environment while also facilitating better robustness during sim-to-real domain transfer by providing a distribution of similar training scenes. Leveraging digital cousins, we introduce a novel method for the Automatic Creation of Digital Cousins (ACDC), and propose a fully automated real-to-sim-to-real pipeline for generating fully interactive scenes and training robot policies that can be deployed zero-shot in the original scene. We find that ACDC can produce digital cousin scenes that preserve geometric and semantic affordances, and can be used to train policies that outperform policies trained on digital twins, achieving 90% vs. 25% under zero-shot sim-to-real transfer. Additional details are available at https://digital-cousins.github.io/.<|reference_end|> | arxiv | @article{dai2024automated,
title={Automated Creation of Digital Cousins for Robust Policy Learning},
author={Tianyuan Dai, Josiah Wong, Yunfan Jiang, Chen Wang, Cem Gokmen, Ruohan
Zhang, Jiajun Wu, Li Fei-Fei},
journal={arXiv preprint arXiv:2410.07408},
year={2024},
archivePrefix={arXiv},
eprint={2410.07408},
primaryClass={cs.RO}
} | dai2024automated |
arxiv-667798 | 2410.07409 | Learning responsibility allocations for multi-agent interactions: A differentiable optimization approach with control barrier functions | <|reference_start|>Learning responsibility allocations for multi-agent interactions: A differentiable optimization approach with control barrier functions: From autonomous driving to package delivery, ensuring safe yet efficient multi-agent interaction is challenging as the interaction dynamics are influenced by hard-to-model factors such as social norms and contextual cues. Understanding these influences can aid in the design and evaluation of socially-aware autonomous agents whose behaviors are aligned with human values. In this work, we seek to codify factors governing safe multi-agent interactions via the lens of responsibility, i.e., an agent's willingness to deviate from their desired control to accommodate safe interaction with others. Specifically, we propose a data-driven modeling approach based on control barrier functions and differentiable optimization that efficiently learns agents' responsibility allocation from data. We demonstrate on synthetic and real-world datasets that we can obtain an interpretable and quantitative understanding of how much agents adjust their behavior to ensure the safety of others given their current environment.<|reference_end|> | arxiv | @article{remy2024learning,
title={Learning responsibility allocations for multi-agent interactions: A
differentiable optimization approach with control barrier functions},
author={Isaac Remy, David Fridovich-Keil, Karen Leung},
journal={arXiv preprint arXiv:2410.07409},
year={2024},
archivePrefix={arXiv},
eprint={2410.07409},
primaryClass={eess.SY cs.LG cs.MA cs.RO cs.SY}
} | remy2024learning |
arxiv-667799 | 2410.07410 | Aligning Motion-Blurred Images Using Contrastive Learning on Overcomplete Pixels | <|reference_start|>Aligning Motion-Blurred Images Using Contrastive Learning on Overcomplete Pixels: We propose a new contrastive objective for learning overcomplete pixel-level features that are invariant to motion blur. Other invariances (e.g., pose, illumination, or weather) can be learned by applying the corresponding transformations on unlabeled images during self-supervised training. We showcase that a simple U-Net trained with our objective can produce local features useful for aligning the frames of an unseen video captured with a moving camera under realistic and challenging conditions. Using a carefully designed toy example, we also show that the overcomplete pixels can encode the identity of objects in an image and the pixel coordinates relative to these objects.<|reference_end|> | arxiv | @article{pogorelyuk2024aligning,
title={Aligning Motion-Blurred Images Using Contrastive Learning on
Overcomplete Pixels},
author={Leonid Pogorelyuk, Stefan T. Radev},
journal={arXiv preprint arXiv:2410.07410},
year={2024},
archivePrefix={arXiv},
eprint={2410.07410},
primaryClass={cs.CV}
} | pogorelyuk2024aligning |
arxiv-667800 | 2410.07413 | A Rapid Trajectory Optimization and Control Framework for Resource-Constrained Applications | <|reference_start|>A Rapid Trajectory Optimization and Control Framework for Resource-Constrained Applications: This paper presents a computationally efficient model predictive control formulation that uses an integral Chebyshev collocation method to enable rapid operations of autonomous agents. By posing the finite-horizon optimal control problem and recursively re-evaluating the optimal trajectories, minimization of the L2 norms of the state and control errors is transcribed into a quadratic program. Control and state variable constraints are parameterized using Chebyshev polynomials and are accommodated in the optimal trajectory generation programs to incorporate the actuator limits and keepout constraints. Differentiable collision detection of polytopes is leveraged for optimal collision avoidance. Results obtained from the collocation methods are benchmarked against existing approaches on an edge computer to outline the performance improvements. Finally, collaborative control scenarios involving multi-agent space systems are considered to demonstrate the technical merits of the proposed work.<|reference_end|> | arxiv | @article{parikh2024a,
title={A Rapid Trajectory Optimization and Control Framework for
Resource-Constrained Applications},
author={Deep Parikh, Thomas L. Ahrens, Manoranjan Majji},
journal={arXiv preprint arXiv:2410.07413},
year={2024},
archivePrefix={arXiv},
eprint={2410.07413},
primaryClass={cs.RO cs.SY eess.SY}
} | parikh2024a |