corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-661501
|
2409.16320
|
Developing a Thailand solar irradiance map using Himawari-8 satellite imageries and deep learning models
|
<|reference_start|>Developing a Thailand solar irradiance map using Himawari-8 satellite imageries and deep learning models: This paper presents an online platform that shows Thailand's solar irradiance map every 30 minutes. It is available at https://www.cusolarforecast.com. The methodology for estimating global horizontal irradiance (GHI) across Thailand relies on a cloud index extracted from Himawari-8 satellite imagery, the Ineichen clear-sky model with locally-tuned Linke turbidity, and machine learning models. The methods take clear-sky irradiance, cloud index, re-analyzed GHI and temperature data from the MERRA-2 database, and date-time as inputs for GHI estimation models, including LightGBM, LSTM, Informer, and Transformer. These are benchmarked against estimates from the SolCast service using 15-minute ground GHI data from 53 ground stations over 1.5 years during 2022-2023. The results show that the four models have competitive performance and outperform the SolCast service. The best model is LightGBM, with an MAE of 78.58 W/sqm and RMSE of 118.97 W/sqm. Obtaining re-analyzed MERRA-2 data for Thailand is not economically feasible for deployment. When these features are removed, the Informer model performs best, with an MAE of 78.67 W/sqm. The obtained performance aligns with the existing literature once the climate zone and the time granularity of the data are taken into consideration. As the map shows GHI estimates over 93,000 grid cells with frequent updates, the paper also describes a computational framework for displaying the entire map. It tests the runtime performance of deep learning models in the GHI estimation process.<|reference_end|>
|
arxiv
|
@article{suwanwimolkul2024developing,
title={Developing a Thailand solar irradiance map using Himawari-8 satellite
imageries and deep learning models},
author={Suwichaya Suwanwimolkul, Natanon Tongamrak, Nuttamon Thungka, Naebboon
Hoonchareon, Jitkomut Songsiri},
journal={arXiv preprint arXiv:2409.16320},
year={2024},
archivePrefix={arXiv},
eprint={2409.16320},
primaryClass={physics.ao-ph cs.AI cs.CV cs.LG}
}
|
suwanwimolkul2024developing
|
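A minimal sketch of the clear-sky step behind the solar irradiance entry above (arXiv:2409.16320), using pvlib's Ineichen model. The location, the fixed Linke turbidity, and the linear cloud-index relation are illustrative assumptions; the paper instead feeds these quantities to ML estimators (LightGBM, LSTM, Informer, Transformer).

```python
# Sketch: Ineichen clear-sky GHI plus a simple cloud-index adjustment via pvlib.
# The linear relation below is a common Heliosat-style heuristic, not the
# paper's ML models; location and turbidity are illustrative values.
import pandas as pd
from pvlib.location import Location

bangkok = Location(latitude=13.74, longitude=100.52, tz="Asia/Bangkok")
times = pd.date_range("2023-03-01 06:00", "2023-03-01 18:00",
                      freq="30min", tz=bangkok.tz)

# Clear-sky GHI from the Ineichen model with a fixed (hypothetical) Linke turbidity.
clearsky = bangkok.get_clearsky(times, model="ineichen", linke_turbidity=3.5)

# Cloud index n in [0, 1] would come from Himawari-8 imagery; here it is mocked.
cloud_index = pd.Series(0.3, index=times)

# Heliosat-style estimate: GHI ~= clear-sky GHI * (1 - n).
ghi_estimate = clearsky["ghi"] * (1.0 - cloud_index)
print(ghi_estimate.head())
```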
arxiv-661502
|
2409.16321
|
WeatherFormer: Empowering Global Numerical Weather Forecasting with Space-Time Transformer
|
<|reference_start|>WeatherFormer: Empowering Global Numerical Weather Forecasting with Space-Time Transformer: The Numerical Weather Prediction (NWP) system is an infrastructure that exerts considerable impact on modern society. Traditional NWP systems, however, solve complex partial differential equations on huge computing clusters, resulting in substantial carbon emissions. Exploring efficient and eco-friendly solutions for NWP has attracted interest from the Artificial Intelligence (AI) and earth science communities. To narrow the performance gap between AI-based methods and physics-based predictors, this work proposes a new transformer-based NWP framework, termed WeatherFormer, to model the complex spatio-temporal atmosphere dynamics and empower data-driven NWP. WeatherFormer introduces space-time factorized transformer blocks to decrease the parameter count and memory consumption, in which a Position-aware Adaptive Fourier Neural Operator (PAFNO) is proposed for location-sensitive token mixing. Besides, two data augmentation strategies are utilized to boost performance and decrease training cost. Extensive experiments on the WeatherBench dataset show that WeatherFormer achieves superior performance over existing deep learning methods and further approaches the most advanced physical models.<|reference_end|>
|
arxiv
|
@article{gong2024weatherformer:,
title={WeatherFormer: Empowering Global Numerical Weather Forecasting with
Space-Time Transformer},
author={Junchao Gong, Tao Han, Kang Chen, Lei Bai},
journal={arXiv preprint arXiv:2409.16321},
year={2024},
archivePrefix={arXiv},
eprint={2409.16321},
primaryClass={cs.AI cs.LG physics.ao-ph}
}
|
gong2024weatherformer:
|
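A sketch of the generic space-time factorized attention idea named in the WeatherFormer abstract above (arXiv:2409.16321): attention over the spatial axis and the temporal axis separately, instead of joint space-time attention. The paper's PAFNO token mixer is not reproduced; dimensions and module layout are assumptions.

```python
# Sketch of a space-time factorized transformer block: attention is applied
# over space and time separately, which is cheaper than joint attention.
import torch
import torch.nn as nn

class FactorizedSpaceTimeBlock(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(dim) for _ in range(3))
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, tokens, dim); tokens index grid points.
        b, t, n, d = x.shape
        xs = self.norm1(x).reshape(b * t, n, d)                  # attend over space
        x = x + self.spatial_attn(xs, xs, xs)[0].reshape(b, t, n, d)
        xt = self.norm2(x).permute(0, 2, 1, 3).reshape(b * n, t, d)  # over time
        xt = self.temporal_attn(xt, xt, xt)[0].reshape(b, n, t, d).permute(0, 2, 1, 3)
        x = x + xt
        return x + self.mlp(self.norm3(x))

block = FactorizedSpaceTimeBlock(dim=64)
out = block(torch.randn(2, 6, 32, 64))  # 6 time steps, 32 spatial tokens
print(out.shape)  # torch.Size([2, 6, 32, 64])
```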
arxiv-661503
|
2409.16322
|
Towards Within-Class Variation in Alzheimer's Disease Detection from Spontaneous Speech
|
<|reference_start|>Towards Within-Class Variation in Alzheimer's Disease Detection from Spontaneous Speech: Alzheimer's Disease (AD) detection has emerged as a promising research area that employs machine learning classification models to distinguish between individuals with AD and those without. Unlike conventional classification tasks, we identify within-class variation as a critical challenge in AD detection: individuals with AD exhibit a spectrum of cognitive impairments. Given that many AD detection tasks lack fine-grained labels, simplistic binary classification may overlook two crucial aspects: within-class differences and instance-level imbalance. The former compels the model to map AD samples with varying degrees of impairment to a single diagnostic label, disregarding certain changes in cognitive function, while the latter biases the model towards overrepresented severity levels. This work presents early efforts to address these challenges. We propose two novel methods: Soft Target Distillation (SoTD) and Instance-level Re-balancing (InRe), targeting the two problems respectively. Experiments on the ADReSS and ADReSSo datasets demonstrate that the proposed methods significantly improve detection accuracy. Further analysis reveals that SoTD effectively harnesses the strengths of multiple component models, while InRe substantially alleviates model over-fitting. These findings provide insights for developing more robust and reliable AD detection models.<|reference_end|>
|
arxiv
|
@article{kang2024towards,
title={Towards Within-Class Variation in Alzheimer's Disease Detection from
Spontaneous Speech},
author={Jiawen Kang, Dongrui Han, Lingwei Meng, Jingyan Zhou, Jinchao Li,
Xixin Wu, Helen Meng},
journal={arXiv preprint arXiv:2409.16322},
year={2024},
archivePrefix={arXiv},
eprint={2409.16322},
primaryClass={eess.AS cs.AI cs.CL cs.LG cs.SD q-bio.NC}
}
|
kang2024towards
|
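A hedged sketch of the soft-target idea behind SoTD in the entry above (arXiv:2409.16322): component models supply averaged soft labels for the detector to match. This is the standard distillation-style loss, not the paper's exact formulation; all tensors are hypothetical.

```python
# Sketch of a soft-target loss in the spirit of SoTD: an ensemble of component
# models provides averaged soft labels that the detector is trained to match.
import torch
import torch.nn.functional as F

def soft_target_loss(student_logits, component_probs):
    """student_logits: (batch, classes); component_probs: list of (batch, classes)."""
    soft_targets = torch.stack(component_probs).mean(dim=0)  # ensemble average
    log_p = F.log_softmax(student_logits, dim=-1)
    return F.kl_div(log_p, soft_targets, reduction="batchmean")

logits = torch.randn(8, 2)                                 # AD vs. non-AD
teachers = [torch.softmax(torch.randn(8, 2), -1) for _ in range(3)]
print(soft_target_loss(logits, teachers))
```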
arxiv-661504
|
2409.16326
|
Automated Spatio-Temporal Weather Modeling for Load Forecasting
|
<|reference_start|>Automated Spatio-Temporal Weather Modeling for Load Forecasting: Electricity is difficult to store, except at prohibitive cost, and therefore the balance between generation and load must be maintained at all times. Electricity is traditionally managed by anticipating demand and intermittent production (wind, solar) and matching flexible production (hydro, nuclear, coal and gas). Accurate forecasting of electricity load and renewable production is therefore essential to ensure grid performance and stability. Both are highly dependent on meteorological variables (temperature, wind, sunshine). These dependencies are complex and difficult to model. On the one hand, spatial variations do not have a uniform impact because population, industry, and wind and solar farms are not evenly distributed across the territory. On the other hand, temporal variations can have delayed effects on load (due to the thermal inertia of buildings). With access to observations from different weather stations and simulated data from meteorological models, we believe that both phenomena can be modeled together. In today's state-of-the-art load forecasting models, the spatio-temporal modeling of the weather is fixed. In this work, we aim to take advantage of the automated representation and spatio-temporal feature extraction capabilities of deep neural networks to improve spatio-temporal weather modeling for load forecasting. We compare our deep learning-based methodology with the state-of-the-art on French national load. This methodology could also be fully adapted to forecasting renewable energy production.<|reference_end|>
|
arxiv
|
@article{keisler2024automated,
title={Automated Spatio-Temporal Weather Modeling for Load Forecasting},
author={Julie Keisler (CRIStAL, EDF R&D OSIRIS, EDF R&D), Margaux Bregere
(EDF R&D, EDF R&D OSIRIS, LPSM (UMR_8001))},
journal={International Ruhr Energy Conference, Aug 2024, Essen, University
Duisburg-Essen, Germany},
year={2024},
archivePrefix={arXiv},
eprint={2409.16326},
primaryClass={cs.LG cs.AI stat.ML}
}
|
keisler2024automated
|
arxiv-661505
|
2409.16327
|
GATher: Graph Attention Based Predictions of Gene-Disease Links
|
<|reference_start|>GATher: Graph Attention Based Predictions of Gene-Disease Links: Target selection is crucial in pharmaceutical drug discovery, directly influencing clinical trial success. Despite its importance, drug development remains resource-intensive, often taking over a decade with significant financial costs. High failure rates highlight the need for better early-stage target selection. We present GATher, a graph attention network designed to predict therapeutic gene-disease links by integrating data from diverse biomedical sources into a graph with over 4.4 million edges. GATher incorporates GATv3, a novel graph attention convolution layer, and GATv3HeteroConv, which aggregates transformations for each edge type, enhancing its ability to manage complex interactions within this extensive dataset. Utilizing hard negative sampling and multi-task pre-training, GATher addresses topological imbalances and improves specificity. Trained on data up to 2018 and evaluated through 2024, our results show GATher predicts clinical trial outcomes with a ROC AUC of 0.69 for unmet efficacy failures and 0.79 for positive efficacy. Feature attribution methods, using Captum, highlight key nodes and relationships, enhancing model interpretability. By 2024, GATher improved precision in prioritizing the top 200 clinical trial targets to 14.1%, an absolute increase of over 3.5% compared to other methods. GATher outperforms existing models like GAT, GATv2, and HGT in predicting clinical trial outcomes, demonstrating its potential in enhancing target validation and predicting clinical efficacy and safety.<|reference_end|>
|
arxiv
|
@article{narganes-carlon2024gather:,
title={GATher: Graph Attention Based Predictions of Gene-Disease Links},
author={David Narganes-Carlon, Anniek Myatt, Mani Mudaliar, and Daniel J.
Crowther},
journal={arXiv preprint arXiv:2409.16327},
year={2024},
archivePrefix={arXiv},
eprint={2409.16327},
primaryClass={q-bio.QM cs.LG}
}
|
narganes-carlon2024gather:
|
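A sketch of per-edge-type graph attention in the spirit of GATher (arXiv:2409.16327), using PyTorch Geometric's stock GATConv inside HeteroConv; the paper's GATv3/GATv3HeteroConv layers and its 4.4-million-edge graph are not reproduced, and the node and edge types below are illustrative.

```python
# Sketch of per-edge-type graph attention with PyTorch Geometric: HeteroConv
# aggregates the transformation of each edge type per target node type.
import torch
from torch_geometric.data import HeteroData
from torch_geometric.nn import GATConv, HeteroConv

data = HeteroData()
data["gene"].x = torch.randn(100, 32)
data["disease"].x = torch.randn(40, 32)
src = torch.randint(0, 100, (500,))   # gene indices
dst = torch.randint(0, 40, (500,))    # disease indices
data["gene", "associated_with", "disease"].edge_index = torch.stack([src, dst])

conv = HeteroConv({
    ("gene", "associated_with", "disease"):
        GATConv((-1, -1), 64, add_self_loops=False),
}, aggr="sum")  # sums messages across edge types per target node type

out = conv(data.x_dict, data.edge_index_dict)
print(out["disease"].shape)  # torch.Size([40, 64])
```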
arxiv-661506
|
2409.16329
|
MRI Radiomics for IDH Genotype Prediction in Glioblastoma Diagnosis
|
<|reference_start|>MRI Radiomics for IDH Genotype Prediction in Glioblastoma Diagnosis: Radiomics is a relatively new field which utilises automatically identified features from radiological scans. It has found widespread application, particularly in oncology, because many of the important oncological biomarkers are not visible to the naked eye. The recent advent of big data, including in medical imaging, and the development of new ML techniques have brought the possibility of faster and more accurate oncological diagnosis. Furthermore, standardised mathematical feature extraction based on radiomics helps to eliminate possible radiologist bias. This paper reviews recent developments in the oncological use of MRI radiomic features. It focuses on the identification of the isocitrate dehydrogenase (IDH) mutation status, which is an important biomarker for the diagnosis of glioblastoma and grade IV astrocytoma.<|reference_end|>
|
arxiv
|
@article{kozák2024mri,
title={MRI Radiomics for IDH Genotype Prediction in Glioblastoma Diagnosis},
author={Stanislav Koz\'ak},
journal={arXiv preprint arXiv:2409.16329},
year={2024},
archivePrefix={arXiv},
eprint={2409.16329},
primaryClass={q-bio.QM cs.AI cs.CV cs.LG}
}
|
kozák2024mri
|
arxiv-661507
|
2409.16331
|
Exploring the traditional NMT model and Large Language Model for chat translation
|
<|reference_start|>Exploring the traditional NMT model and Large Language Model for chat translation: This paper describes the submissions of Huawei Translation Services Center (HW-TSC) to the WMT24 chat translation shared task on the English$\leftrightarrow$German (en-de) bidirectional track. The experiments involved fine-tuning models using chat data and exploring various strategies, including Minimum Bayesian Risk (MBR) decoding and self-training. The results show significant performance improvements in certain directions, with the MBR self-training method achieving the best results. The paper also discusses the challenges and potential avenues for further research in the field of chat translation.<|reference_end|>
|
arxiv
|
@article{yang2024exploring,
title={Exploring the traditional NMT model and Large Language Model for chat
translation},
author={Jinlong Yang, Hengchao Shang, Daimeng Wei, Jiaxin Guo, Zongyao Li,
Zhanglin Wu, Zhiqiang Rao, Shaojun Li, Yuhao Xie, Yuanchang Luo, Jiawei
Zheng, Bin Wei, Hao Yang},
journal={arXiv preprint arXiv:2409.16331},
year={2024},
archivePrefix={arXiv},
eprint={2409.16331},
primaryClass={cs.CL cs.AI}
}
|
yang2024exploring
|
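A sketch of Minimum Bayes Risk (MBR) decoding as named in the chat-translation entry above (arXiv:2409.16331): choose the candidate with the highest average utility against the rest of the pool. Sentence-BLEU stands in for the utility function; the submission's actual utility metric and candidate-generation setup may differ.

```python
# Sketch of MBR decoding over a candidate pool: pick the hypothesis with the
# highest average utility, using the other candidates as pseudo-references.
from sacrebleu.metrics import BLEU

bleu = BLEU(effective_order=True)

def mbr_select(candidates):
    best, best_score = None, float("-inf")
    for hyp in candidates:
        refs = [c for c in candidates if c is not hyp]
        score = sum(bleu.sentence_score(hyp, [r]).score for r in refs) / len(refs)
        if score > best_score:
            best, best_score = hyp, score
    return best

pool = ["Das ist ein Test .", "Dies ist ein Test .", "Das war ein Test ."]
print(mbr_select(pool))
```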
arxiv-661508
|
2409.16333
|
Predicting Distance matrix with large language models
|
<|reference_start|>Predicting Distance matrix with large language models: Structural prediction has long been considered critical in RNA research, especially following the success of AlphaFold2 in protein studies, which has drawn significant attention to the field. While recent advances in machine learning and data accumulation have effectively addressed many biological tasks, particularly in protein-related research, RNA structure prediction remains a significant challenge due to data limitations. Obtaining RNA structural data is difficult because traditional methods such as nuclear magnetic resonance spectroscopy, X-ray crystallography, and electron microscopy are expensive and time-consuming. Although several RNA 3D structure prediction methods have been proposed, their accuracy is still limited. Predicting RNA structural information at another level, such as distance maps, remains highly valuable. Distance maps provide a simplified representation of spatial constraints between nucleotides, capturing essential relationships without requiring a full 3D model. This intermediate level of structural information can guide more accurate 3D modeling and is computationally less intensive, making it a useful tool for improving structural predictions. In this work, we demonstrate that using only primary sequence information, we can accurately infer the distances between RNA bases by utilizing a large pretrained RNA language model coupled with a well-trained downstream transformer.<|reference_end|>
|
arxiv
|
@article{yang2024predicting,
title={Predicting Distance matrix with large language models},
author={Jiaxing Yang},
journal={arXiv preprint arXiv:2409.16333},
year={2024},
archivePrefix={arXiv},
eprint={2409.16333},
primaryClass={q-bio.BM cs.CV cs.LG q-fin.CP}
}
|
yang2024predicting
|
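A sketch of a pairwise distance head of the kind the RNA entry above (arXiv:2409.16333) describes: per-nucleotide embeddings are combined by outer concatenation and mapped to a symmetric distance map. The pretrained RNA language model is mocked with random embeddings, and the head architecture is an assumption.

```python
# Sketch of a distance-map head on per-nucleotide embeddings: pair features via
# outer concatenation, an MLP per pair, and symmetrization of the output.
import torch
import torch.nn as nn

class DistanceHead(nn.Module):
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, length, dim) per-nucleotide embeddings
        b, L, d = h.shape
        pair = torch.cat([h.unsqueeze(2).expand(b, L, L, d),
                          h.unsqueeze(1).expand(b, L, L, d)], dim=-1)
        dist = self.mlp(pair).squeeze(-1)
        return 0.5 * (dist + dist.transpose(1, 2))  # enforce symmetry

h = torch.randn(1, 40, 128)          # stand-in for language-model embeddings
print(DistanceHead(128)(h).shape)    # torch.Size([1, 40, 40])
```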
arxiv-661509
|
2409.16336
|
Refereeing the Referees: Evaluating Two-Sample Tests for Validating Generators in Precision Sciences
|
<|reference_start|>Refereeing the Referees: Evaluating Two-Sample Tests for Validating Generators in Precision Sciences: We propose a robust methodology to evaluate the performance and computational efficiency of non-parametric two-sample tests, specifically designed for high-dimensional generative models in scientific applications such as in particle physics. The study focuses on tests built from univariate integral probability measures: the sliced Wasserstein distance and the mean of the Kolmogorov-Smirnov statistics, already discussed in the literature, and the novel sliced Kolmogorov-Smirnov statistic. These metrics can be evaluated in parallel, allowing for fast and reliable estimates of their distribution under the null hypothesis. We also compare these metrics with the recently proposed unbiased Fr\'echet Gaussian Distance and the unbiased quadratic Maximum Mean Discrepancy, computed with a quartic polynomial kernel. We evaluate the proposed tests on various distributions, focusing on their sensitivity to deformations parameterized by a single parameter $\epsilon$. Our experiments include correlated Gaussians and mixtures of Gaussians in 5, 20, and 100 dimensions, and a particle physics dataset of gluon jets from the JetNet dataset, considering both jet- and particle-level features. Our results demonstrate that one-dimensional-based tests provide a level of sensitivity comparable to other multivariate metrics, but with significantly lower computational cost, making them ideal for evaluating generative models in high-dimensional settings. This methodology offers an efficient, standardized tool for model comparison and can serve as a benchmark for more advanced tests, including machine-learning-based approaches.<|reference_end|>
|
arxiv
|
@article{grossi2024refereeing,
title={Refereeing the Referees: Evaluating Two-Sample Tests for Validating
Generators in Precision Sciences},
author={Samuele Grossi, Marco Letizia and Riccardo Torre},
journal={arXiv preprint arXiv:2409.16336},
year={2024},
archivePrefix={arXiv},
eprint={2409.16336},
primaryClass={stat.ML cs.LG hep-ph stat.AP}
}
|
grossi2024refereeing
|
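A sketch of the sliced Kolmogorov-Smirnov statistic proposed in the entry above (arXiv:2409.16336): project both samples onto random unit directions, compute the univariate KS statistic per direction, and aggregate. The projection count and mean aggregation are illustrative choices; the paper's null-distribution estimation is not shown.

```python
# Sketch of a sliced KS two-sample statistic: random 1-D projections, then the
# scipy univariate KS statistic per direction, averaged.
import numpy as np
from scipy.stats import ks_2samp

def sliced_ks(x, y, n_projections=128, seed=0):
    """x, y: arrays of shape (n_samples, dim)."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_projections, x.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit vectors
    stats = [ks_2samp(x @ v, y @ v).statistic for v in dirs]
    return float(np.mean(stats))

rng = np.random.default_rng(1)
x = rng.normal(size=(2000, 20))
y = rng.normal(loc=0.05, size=(2000, 20))  # small deformation, like the paper's eps
print(sliced_ks(x, y))
```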
arxiv-661510
|
2409.16339
|
Large-scale digital phenotyping: identifying depression and anxiety indicators in a general UK population with over 10,000 participants
|
<|reference_start|>Large-scale digital phenotyping: identifying depression and anxiety indicators in a general UK population with over 10,000 participants: Digital phenotyping offers a novel and cost-efficient approach for managing depression and anxiety. Previous studies, often limited to small-to-medium or specific populations, may lack generalizability. We conducted a cross-sectional analysis of data from 10,129 participants recruited from a UK-based general population between June 2020 and August 2022. Participants shared wearable (Fitbit) data and self-reported questionnaires on depression (PHQ-8), anxiety (GAD-7), and mood via a study app. We first examined the correlations between PHQ-8/GAD-7 scores and wearable-derived features, demographics, health data, and mood assessments. Subsequently, unsupervised clustering was used to identify behavioural patterns associated with depression or anxiety. Finally, we employed separate XGBoost models to predict depression and anxiety and compared the results using different subsets of features. We observed significant associations between the severity of depression and anxiety with several factors, including mood, age, gender, BMI, sleep patterns, physical activity, and heart rate. Clustering analysis revealed that participants simultaneously exhibiting lower physical activity levels and higher heart rates reported more severe symptoms. Prediction models incorporating all types of variables achieved the best performance ($R^2$=0.41, MAE=3.42 for depression; $R^2$=0.31, MAE=3.50 for anxiety) compared to those using subsets of variables. This study identified potential indicators for depression and anxiety, highlighting the utility of digital phenotyping and machine learning technologies for rapid screening of mental disorders in general populations. These findings provide robust real-world insights for future healthcare applications.<|reference_end|>
|
arxiv
|
@article{zhang2024large-scale,
title={Large-scale digital phenotyping: identifying depression and anxiety
indicators in a general UK population with over 10,000 participants},
author={Yuezhou Zhang, Callum Stewart, Yatharth Ranjan, Pauline Conde, Heet
Sankesara, Zulqarnain Rashid, Shaoxiong Sun, Richard J B Dobson, Amos A
Folarin},
journal={arXiv preprint arXiv:2409.16339},
year={2024},
archivePrefix={arXiv},
eprint={2409.16339},
primaryClass={q-bio.QM cs.LG}
}
|
zhang2024large-scale
|
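A sketch of the prediction step in the digital-phenotyping entry above (arXiv:2409.16339): gradient-boosted trees regressing PHQ-8 scores from wearable-derived features. Data, feature names, and hyperparameters are synthetic stand-ins for the study's setup.

```python
# Sketch: XGBoost regression of PHQ-8 from (synthetic) wearable features.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(7, 1.2, n),      # sleep duration (h)
    rng.normal(70, 8, n),       # resting heart rate (bpm)
    rng.normal(8000, 3000, n),  # daily steps
    rng.normal(25, 4, n),       # BMI
])
y = np.clip(10 - 0.4 * X[:, 0] + 0.08 * X[:, 1] - 0.0004 * X[:, 2]
            + rng.normal(0, 2.5, n), 0, 24)  # pseudo PHQ-8 in [0, 24]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"MAE={mean_absolute_error(y_te, pred):.2f}  R2={r2_score(y_te, pred):.2f}")
```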
arxiv-661511
|
2409.16340
|
Future-Proofing Medical Imaging with Privacy-Preserving Federated Learning and Uncertainty Quantification: A Review
|
<|reference_start|>Future-Proofing Medical Imaging with Privacy-Preserving Federated Learning and Uncertainty Quantification: A Review: Artificial Intelligence (AI) has demonstrated significant potential in automating various medical imaging tasks, which could soon become routine in clinical practice for disease diagnosis, prognosis, treatment planning, and post-treatment surveillance. However, the privacy concerns surrounding patient data present a major barrier to the widespread adoption of AI in medical imaging, as large, diverse training datasets are essential for developing accurate, generalizable, and robust AI models. Federated Learning (FL) offers a solution that enables organizations to train AI models collaboratively without sharing sensitive data. Federated learning exchanges model training information, such as gradients, between the participating sites. Despite its promise, federated learning is still in its developmental stages and faces several challenges. Notably, sensitive information can still be inferred from the gradients shared during model training. Quantifying AI models' uncertainty is vital due to potential data distribution shifts post-deployment, which can affect model performance. Uncertainty quantification (UQ) in FL is particularly challenging due to data heterogeneity across participating sites. This review provides a comprehensive examination of FL, privacy-preserving FL (PPFL), and UQ in FL. We identify key gaps in current FL methodologies and propose future research directions to enhance data privacy and trustworthiness in medical imaging applications.<|reference_end|>
|
arxiv
|
@article{koutsoubis2024future-proofing,
title={Future-Proofing Medical Imaging with Privacy-Preserving Federated
Learning and Uncertainty Quantification: A Review},
author={Nikolas Koutsoubis, Asim Waqas, Yasin Yilmaz, Ravi P. Ramachandran,
Matthew Schabath, and Ghulam Rasool},
journal={arXiv preprint arXiv:2409.16340},
year={2024},
archivePrefix={arXiv},
eprint={2409.16340},
primaryClass={eess.IV cs.AI cs.CV}
}
|
koutsoubis2024future-proofing
|
arxiv-661512
|
2409.16341
|
Quality Matters: Evaluating Synthetic Data for Tool-Using LLMs
|
<|reference_start|>Quality Matters: Evaluating Synthetic Data for Tool-Using LLMs: Training large language models (LLMs) for external tool usage is a rapidly expanding field, with recent research focusing on generating synthetic data to address the shortage of available data. However, the absence of systematic data quality checks poses complications for properly training and testing models. To that end, we propose two approaches for assessing the reliability of data for training LLMs to use external tools. The first approach uses intuitive, human-defined correctness criteria. The second approach uses a model-driven assessment with in-context evaluation. We conduct a thorough evaluation of data quality on two popular benchmarks, followed by an extrinsic evaluation that showcases the impact of data quality on model performance. Our results demonstrate that models trained on high-quality data outperform those trained on unvalidated data, even when trained with a smaller quantity of data. These findings empirically support the significance of assessing and ensuring the reliability of training data for tool-using LLMs.<|reference_end|>
|
arxiv
|
@article{iskander2024quality,
title={Quality Matters: Evaluating Synthetic Data for Tool-Using LLMs},
author={Shadi Iskander, Nachshon Cohen, Zohar Karnin, Ori Shapira, Sofia
Tolmach},
journal={arXiv preprint arXiv:2409.16341},
year={2024},
archivePrefix={arXiv},
eprint={2409.16341},
primaryClass={cs.LG cs.CL cs.SE}
}
|
iskander2024quality
|
arxiv-661513
|
2409.16342
|
Transformer based time series prediction of the maximum power point for solar photovoltaic cells
|
<|reference_start|>Transformer based time series prediction of the maximum power point for solar photovoltaic cells: This paper proposes an improved deep learning based maximum power point tracking (MPPT) in solar photovoltaic cells considering various time series based environmental inputs. Generally, artificial neural network based MPPT algorithms use basic neural network architectures and inputs which do not represent the ambient conditions in a comprehensive manner. In this article, the ambient conditions of a location are represented through a comprehensive set of environmental features. Furthermore, the inclusion of time based features in the input data is considered to model cyclic patterns temporally within the atmospheric conditions leading to robust modeling of the MPPT algorithm. A transformer based deep learning architecture is trained as a time series prediction model using multidimensional time series input features. The model is trained on a dataset containing typical meteorological year data points of ambient weather conditions from 50 locations. The attention mechanism in the transformer modules allows the model to learn temporal patterns in the data efficiently. The proposed model achieves a 0.47% mean average percentage error of prediction on nonzero operating voltage points in a test dataset consisting of data collected over a period of 200 consecutive hours, resulting in an average power efficiency of 99.54% and a peak power efficiency of 99.98%. The proposed model is validated through real-time simulations. The proposed model performs power point tracking in a robust, dynamic, and non-latent manner over a wide range of atmospheric conditions.<|reference_end|>
|
arxiv
|
@article{agrawal2024transformer,
title={Transformer based time series prediction of the maximum power point for
solar photovoltaic cells},
author={Palaash Agrawal, Hari Om Bansal, Aditya R. Gautam, Om Prakash Mahela,
Baseem Khan},
journal={Energy Sci Eng. 2022; 10: 3397-3410},
year={2024},
doi={10.1002/ese3.1226},
archivePrefix={arXiv},
eprint={2409.16342},
primaryClass={eess.SY cs.LG cs.SY}
}
|
agrawal2024transformer
|
arxiv-661514
|
2409.16346
|
Scalable quantum dynamics compilation via quantum machine learning
|
<|reference_start|>Scalable quantum dynamics compilation via quantum machine learning: Quantum dynamics compilation is an important task for improving quantum simulation efficiency: It aims to synthesize multi-qubit target dynamics into a circuit consisting of as few elementary gates as possible. Compared to deterministic methods such as Trotterization, variational quantum compilation (VQC) methods employ variational optimization to reduce gate costs while maintaining high accuracy. In this work, we explore the potential of a VQC scheme by making use of out-of-distribution generalization results in quantum machine learning (QML): By learning the action of a given many-body dynamics on a small data set of product states, we can obtain a unitary circuit that generalizes to highly entangled states such as the Haar random states. The efficiency in training allows us to use tensor network methods to compress such time-evolved product states by exploiting their low entanglement features. Our approach exceeds state-of-the-art compilation results in both system size and accuracy in one dimension ($1$D). For the first time, we extend VQC to systems on two-dimensional (2D) strips with a quasi-1D treatment, demonstrating a significant resource advantage over standard Trotterization methods, highlighting the method's promise for advancing quantum simulation tasks on near-term quantum processors.<|reference_end|>
|
arxiv
|
@article{zhang2024scalable,
title={Scalable quantum dynamics compilation via quantum machine learning},
author={Yuxuan Zhang, Roeland Wiersema, Juan Carrasquilla, Lukasz Cincio, and
Yong Baek Kim},
journal={arXiv preprint arXiv:2409.16346},
year={2024},
number={LA-UR-24-30133},
archivePrefix={arXiv},
eprint={2409.16346},
primaryClass={quant-ph cond-mat.str-el cs.LG}
}
|
zhang2024scalable
|
arxiv-661515
|
2409.16371
|
Do the Right Thing, Just Debias! Multi-Category Bias Mitigation Using LLMs
|
<|reference_start|>Do the Right Thing, Just Debias! Multi-Category Bias Mitigation Using LLMs: This paper tackles the challenge of building robust and generalizable bias mitigation models for language. Recognizing the limitations of existing datasets, we introduce ANUBIS, a novel dataset with 1507 carefully curated sentence pairs encompassing nine social bias categories. We evaluate state-of-the-art models like T5, utilizing Supervised Fine-Tuning (SFT), Reinforcement Learning (PPO, DPO), and In-Context Learning (ICL) for effective bias mitigation. Our analysis focuses on multi-class social bias reduction, cross-dataset generalizability, and environmental impact of the trained models. ANUBIS and our findings offer valuable resources for building more equitable AI systems and contribute to the development of responsible and unbiased technologies with broad societal impact.<|reference_end|>
|
arxiv
|
@article{roy2024do,
title={Do the Right Thing, Just Debias! Multi-Category Bias Mitigation Using
LLMs},
author={Amartya Roy, Danush Khanna, Devanshu Mahapatra, Vasanthakumar, Avirup
Das, Kripabandhu Ghosh},
journal={arXiv preprint arXiv:2409.16371},
year={2024},
archivePrefix={arXiv},
eprint={2409.16371},
primaryClass={cs.CL}
}
|
roy2024do
|
arxiv-661516
|
2409.16376
|
Beyond Text-to-Text: An Overview of Multimodal and Generative Artificial Intelligence for Education Using Topic Modeling
|
<|reference_start|>Beyond Text-to-Text: An Overview of Multimodal and Generative Artificial Intelligence for Education Using Topic Modeling: Generative artificial intelligence (GenAI) can reshape education and learning. While large language models (LLMs) like ChatGPT dominate current educational research, multimodal capabilities, such as text-to-speech and text-to-image, are less explored. This study uses topic modeling to map the research landscape of multimodal and generative AI in education. An extensive literature search using Dimensions.ai yielded 4175 articles. Employing a topic modeling approach, latent topics were extracted, resulting in 38 interpretable topics organized into 14 thematic areas. Findings indicate a predominant focus on text-to-text models in educational contexts, with other modalities underexplored, overlooking the broader potential of multimodal approaches. The results suggest a research gap, stressing the importance of more balanced attention across different AI modalities and educational levels. In summary, this research provides an overview of current trends in generative AI for education, underlining opportunities for future exploration of multimodal technologies to fully realize the transformative potential of artificial intelligence in education.<|reference_end|>
|
arxiv
|
@article{heilala2024beyond,
title={Beyond Text-to-Text: An Overview of Multimodal and Generative Artificial
Intelligence for Education Using Topic Modeling},
author={Ville Heilala, Roberto Araya, Raija H\"am\"al\"ainen},
journal={arXiv preprint arXiv:2409.16376},
year={2024},
archivePrefix={arXiv},
eprint={2409.16376},
primaryClass={cs.AI cs.HC}
}
|
heilala2024beyond
|
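A sketch of the topic-modeling pipeline the education entry above (arXiv:2409.16376) relies on: bag-of-words vectorization followed by LDA. The study's 4175-article corpus and 38-topic solution are not reproduced; the documents and topic count here are toy values.

```python
# Sketch: bag-of-words vectorization of abstracts followed by LDA topic
# extraction, printing the top terms per topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "chatgpt large language models in classroom writing feedback",
    "text to image generation for visual learning materials",
    "speech synthesis text to speech tools for accessibility in education",
    "multimodal generative ai tutoring systems",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```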
arxiv-661517
|
2409.16380
|
Development and Application of a Sentinel-2 Satellite Imagery Dataset for Deep-Learning Driven Forest Wildfire Detection
|
<|reference_start|>Development and Application of a Sentinel-2 Satellite Imagery Dataset for Deep-Learning Driven Forest Wildfire Detection: Forest loss due to natural events, such as wildfires, represents an increasing global challenge that demands advanced analytical methods for effective detection and mitigation. To this end, the integration of satellite imagery with deep learning (DL) methods has become essential. Nevertheless, this approach requires substantial amounts of labeled data to produce accurate results. In this study, we use bi-temporal Sentinel-2 satellite imagery sourced from Google Earth Engine (GEE) to build the California Wildfire GeoImaging Dataset (CWGID), a high-resolution labeled satellite imagery dataset with over 100,000 labeled before-and-after forest wildfire image pairs for wildfire detection through DL. Our methods include data acquisition from authoritative sources, data processing, and an initial dataset analysis using three pre-trained Convolutional Neural Network (CNN) architectures. Our results show that the EF EfficientNet-B0 model achieves the highest accuracy of over 92% in detecting forest wildfires. The CWGID and the methodology used to build it prove to be a valuable resource for training and testing DL architectures for forest wildfire detection.<|reference_end|>
|
arxiv
|
@article{martin2024development,
title={Development and Application of a Sentinel-2 Satellite Imagery Dataset
for Deep-Learning Driven Forest Wildfire Detection},
author={Valeria Martin, K.Brent Venable, Derek Morgan},
journal={arXiv preprint arXiv:2409.16380},
year={2024},
archivePrefix={arXiv},
eprint={2409.16380},
primaryClass={cs.CV cs.LG}
}
|
martin2024development
|
arxiv-661518
|
2409.16381
|
Instance Segmentation of Reinforced Concrete Bridges with Synthetic Point Clouds
|
<|reference_start|>Instance Segmentation of Reinforced Concrete Bridges with Synthetic Point Clouds: The National Bridge Inspection Standards require detailed element-level bridge inspections. Traditionally, inspectors manually assign condition ratings by rating structural components based on damage, but this process is labor-intensive and time-consuming. Automating the element-level bridge inspection process can facilitate more comprehensive condition documentation to improve overall bridge management. While semantic segmentation of bridge point clouds has been studied, research on instance segmentation of bridge elements is limited, partly due to the lack of annotated datasets and the difficulty in generalizing trained models. To address this, we propose a novel approach for generating synthetic data using three distinct methods. Our framework leverages the Mask3D transformer model, optimized with hyperparameter tuning and a novel occlusion technique. The model achieves state-of-the-art performance on both real LiDAR and photogrammetry bridge point clouds, demonstrating the potential of the framework for automating element-level bridge inspections.<|reference_end|>
|
arxiv
|
@article{rahman2024instance,
title={Instance Segmentation of Reinforced Concrete Bridges with Synthetic
Point Clouds},
author={Asad Ur Rahman, Vedhus Hoskere},
journal={arXiv preprint arXiv:2409.16381},
year={2024},
archivePrefix={arXiv},
eprint={2409.16381},
primaryClass={cs.CV eess.IV}
}
|
rahman2024instance
|
arxiv-661519
|
2409.16382
|
Towards Synthetic Data Generation for Improved Pain Recognition in Videos under Patient Constraints
|
<|reference_start|>Towards Synthetic Data Generation for Improved Pain Recognition in Videos under Patient Constraints: Recognizing pain in video is crucial for improving patient-computer interaction systems, yet traditional data collection in this domain raises significant ethical and logistical challenges. This study introduces a novel approach that leverages synthetic data to enhance video-based pain recognition models, providing an ethical and scalable alternative. We present a pipeline that synthesizes realistic 3D facial models by capturing nuanced facial movements from a small participant pool, and mapping these onto diverse synthetic avatars. This process generates 8,600 synthetic faces, accurately reflecting genuine pain expressions from varied angles and perspectives. Utilizing advanced facial capture techniques, and leveraging public datasets like CelebV-HQ and FFHQ-UV for demographic diversity, our new synthetic dataset significantly enhances model training while ensuring privacy by anonymizing identities through facial replacements. Experimental results demonstrate that models trained on combinations of synthetic data paired with a small amount of real participants achieve superior performance in pain recognition, effectively bridging the gap between synthetic simulations and real-world applications. Our approach addresses data scarcity and ethical concerns, offering a new solution for pain detection and opening new avenues for research in privacy-preserving dataset generation. All resources are publicly available to encourage further innovation in this field.<|reference_end|>
|
arxiv
|
@article{nasimzada2024towards,
title={Towards Synthetic Data Generation for Improved Pain Recognition in
Videos under Patient Constraints},
author={Jonas Nasimzada, Jens Kleesiek, Ken Herrmann, Alina Roitberg and
Constantin Seibold},
journal={arXiv preprint arXiv:2409.16382},
year={2024},
archivePrefix={arXiv},
eprint={2409.16382},
primaryClass={cs.CV}
}
|
nasimzada2024towards
|
arxiv-661520
|
2409.16383
|
RISCORE: Enhancing In-Context Riddle Solving in Language Models through Context-Reconstructed Example Augmentation
|
<|reference_start|>RISCORE: Enhancing In-Context Riddle Solving in Language Models through Context-Reconstructed Example Augmentation: Riddle-solving requires advanced reasoning skills, pushing LLMs to engage in abstract thinking and creative problem-solving, often revealing limitations in their cognitive abilities. In this paper, we examine the riddle-solving capabilities of LLMs using a multiple-choice format, exploring how different prompting techniques impact performance on riddles that demand diverse reasoning skills. To enhance results, we introduce RISCORE (RIddle Solving with COntext REconstruction), a novel fully automated prompting method that generates and utilizes contextually reconstructed sentence-based puzzles in conjunction with the original examples to create few-shot exemplars. Our experiments demonstrate that RISCORE significantly improves the performance of language models in both vertical and lateral thinking tasks, surpassing traditional exemplar selection strategies across a variety of few-shot settings.<|reference_end|>
|
arxiv
|
@article{panagiotopoulos2024riscore:,
title={RISCORE: Enhancing In-Context Riddle Solving in Language Models through
Context-Reconstructed Example Augmentation},
author={Ioannis Panagiotopoulos, Giorgos Filandrianos, Maria Lymperaiou,
Giorgos Stamou},
journal={arXiv preprint arXiv:2409.16383},
year={2024},
archivePrefix={arXiv},
eprint={2409.16383},
primaryClass={cs.CL}
}
|
panagiotopoulos2024riscore:
|
arxiv-661521
|
2409.16385
|
Embedded IPC: Fast and Intersection-free Simulation in Reduced Subspace for Robot Manipulation
|
<|reference_start|>Embedded IPC: Fast and Intersection-free Simulation in Reduced Subspace for Robot Manipulation: Physics-based simulation is essential for developing and evaluating robot manipulation policies, particularly in scenarios involving deformable objects and complex contact interactions. However, existing simulators often struggle to balance computational efficiency with numerical accuracy, especially when modeling deformable materials with frictional contact constraints. We introduce an efficient subspace representation for the Incremental Potential Contact (IPC) method, leveraging model reduction to decrease the number of degrees of freedom. Our approach decouples simulation complexity from the resolution of the input model by representing elasticity in a low-resolution subspace while maintaining collision constraints on an embedded high-resolution surface. Our barrier formulation ensures intersection-free trajectories and configurations regardless of material stiffness, time step size, or contact severity. We validate our simulator through quantitative experiments with a soft bubble gripper grasping and qualitative demonstrations of placing a plate on a dish rack. The results demonstrate our simulator's efficiency, physical accuracy, computational stability, and robust handling of frictional contact, making it well-suited for generating demonstration data and evaluating downstream robot training applications.<|reference_end|>
|
arxiv
|
@article{du2024embedded,
title={Embedded IPC: Fast and Intersection-free Simulation in Reduced Subspace
for Robot Manipulation},
author={Wenxin Du, Chang Yu, Siyu Ma, Ying Jiang, Zeshun Zong, Yin Yang, Joe
Masterjohn, Alejandro Castro, Xuchen Han, Chenfanfu Jiang},
journal={arXiv preprint arXiv:2409.16385},
year={2024},
archivePrefix={arXiv},
eprint={2409.16385},
primaryClass={cs.RO}
}
|
du2024embedded
|
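A sketch of the log-barrier that IPC-style simulators, such as the one in the entry above (arXiv:2409.16385), place on distances between surface primitives: zero beyond an activation distance and unbounded as the distance approaches zero, which is what keeps trajectories intersection-free. This is the commonly cited IPC barrier form; the paper's reduced-subspace machinery is not shown.

```python
# Sketch of the smoothly clamped log-barrier used by IPC-style methods on the
# distance d between surface primitives: zero for d >= dhat, infinite as d -> 0.
import math

def ipc_barrier(d: float, dhat: float) -> float:
    """Barrier energy b(d) = -(d - dhat)^2 * ln(d / dhat) for 0 < d < dhat."""
    if d >= dhat:
        return 0.0
    if d <= 0.0:
        return math.inf  # contact constraint violated
    return -((d - dhat) ** 2) * math.log(d / dhat)

for d in (1e-4, 1e-3, 5e-3, 1e-2):
    print(f"d={d:.0e}  b={ipc_barrier(d, dhat=1e-2):.3e}")
```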
arxiv-661522
|
2409.16386
|
Camera Calibration and Stereo via a Single Image of a Spherical Mirror
|
<|reference_start|>Camera Calibration and Stereo via a Single Image of a Spherical Mirror: This paper presents a novel technique for camera calibration using a single view that incorporates a spherical mirror. Leveraging the distinct characteristics of the sphere's contour visible in the image and its reflections, we showcase the effectiveness of our method in achieving precise calibration. Furthermore, the reflection from the mirrored surface provides additional information about the surrounding scene beyond the image frame. Our method paves the way for the development of simple catadioptric stereo systems. We explore the challenges and opportunities associated with employing a single mirrored sphere, highlighting the potential applications of this setup in practical scenarios. The paper delves into the intricacies of the geometry and calibration procedures involved in catadioptric stereo utilizing a spherical mirror. Experimental results, encompassing both synthetic and real-world data, are presented to illustrate the feasibility and accuracy of our approach.<|reference_end|>
|
arxiv
|
@article{barzilay2024camera,
title={Camera Calibration and Stereo via a Single Image of a Spherical Mirror},
author={Nissim Barzilay, Ofek Narinsky, Michael Werman},
journal={arXiv preprint arXiv:2409.16386},
year={2024},
archivePrefix={arXiv},
eprint={2409.16386},
primaryClass={cs.CV}
}
|
barzilay2024camera
|
arxiv-661523
|
2409.16388
|
Self-Elicitation of Requirements with Automated GUI Prototyping
|
<|reference_start|>Self-Elicitation of Requirements with Automated GUI Prototyping: Requirements Elicitation (RE) is a crucial activity especially in the early stages of software development. GUI prototyping has widely been adopted as one of the most effective RE techniques for user-facing software systems. However, GUI prototyping (i) requires the availability of experienced requirements analysts, (ii) typically necessitates conducting multiple joint sessions with customers and (iii) creates considerable manual effort. In this work, we propose SERGUI, a novel approach enabling the Self-Elicitation of Requirements (SER) based on an automated GUI prototyping assistant. SERGUI exploits the vast prototyping knowledge embodied in a large-scale GUI repository through Natural Language Requirements (NLR) based GUI retrieval and facilitates fast feedback through GUI prototypes. The GUI retrieval approach is closely integrated with a Large Language Model (LLM) driving the prompting-based recommendation of GUI features for the current GUI prototyping context and thus stimulating the elicitation of additional requirements. We envision SERGUI to be employed in the initial RE phase, creating an initial GUI prototype specification to be used by the analyst as a means for communicating the requirements. To measure the effectiveness of our approach, we conducted a preliminary evaluation. Video presentation of SERGUI at: https://youtu.be/pzAAB9Uht80<|reference_end|>
|
arxiv
|
@article{kolthoff2024self-elicitation,
title={Self-Elicitation of Requirements with Automated GUI Prototyping},
author={Kristian Kolthoff, Christian Bartelt, Simone Paolo Ponzetto, Kurt
Schneider},
journal={arXiv preprint arXiv:2409.16388},
year={2024},
doi={10.1145/3691620.3695350},
archivePrefix={arXiv},
eprint={2409.16388},
primaryClass={cs.SE}
}
|
kolthoff2024self-elicitation
|
arxiv-661524
|
2409.16389
|
Willems' Fundamental Lemma for Nonlinear Systems with Koopman Linear Embedding
|
<|reference_start|>Willems' Fundamental Lemma for Nonlinear Systems with Koopman Linear Embedding: Koopman operator theory and Willems' fundamental lemma both can provide (approximated) data-driven linear representation for nonlinear systems. However, choosing lifting functions for the Koopman operator is challenging, and the quality of the data-driven model from Willems' fundamental lemma has no guarantee for general nonlinear systems. In this paper, we extend Willems' fundamental lemma for a class of nonlinear systems that admit a Koopman linear embedding. We first characterize the relationship between the trajectory space of a nonlinear system and that of its Koopman linear embedding. We then prove that the trajectory space of Koopman linear embedding can be formed by a linear combination of rich-enough trajectories from the nonlinear system. Combining these two results leads to a data-driven representation of the nonlinear system, which bypasses the need for the lifting functions and thus eliminates the associated bias errors. Our results illustrate that both the width (more trajectories) and depth (longer trajectories) of the trajectory library are important to ensure the accuracy of the data-driven model.<|reference_end|>
|
arxiv
|
@article{shang2024willems',
title={Willems' Fundamental Lemma for Nonlinear Systems with Koopman Linear
Embedding},
author={Xu Shang, Jorge Cort\'es, Yang Zheng},
journal={arXiv preprint arXiv:2409.16389},
year={2024},
archivePrefix={arXiv},
eprint={2409.16389},
primaryClass={math.OC cs.SY eess.SY}
}
|
shang2024willems'
|
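A sketch of the Hankel-matrix construction behind Willems' fundamental lemma, which the entry above (arXiv:2409.16389) extends to Koopman linear embeddings: for an LTI system with persistently exciting input data, every sufficiently short trajectory lies in the column span of a block Hankel matrix built from one recorded trajectory. The toy scalar system below is an assumption for illustration.

```python
# Sketch: build a block Hankel matrix from one recorded trajectory and verify
# that a fresh trajectory of the same LTI system lies in its column span.
import numpy as np

def hankel(w: np.ndarray, L: int) -> np.ndarray:
    """w: (T, q) trajectory -> (q*L, T-L+1) block Hankel matrix."""
    T = w.shape[0]
    return np.stack([w[i:i + L].reshape(-1) for i in range(T - L + 1)], axis=1)

# Toy LTI data: x+ = 0.9x + u, with a random (persistently exciting) input.
rng = np.random.default_rng(0)
T, L = 60, 8
u = rng.normal(size=T)
x = np.zeros(T + 1)
for t in range(T):
    x[t + 1] = 0.9 * x[t] + u[t]
H = hankel(np.column_stack([u, x[:T]]), L)

# A fresh length-L trajectory of the same system...
u2 = rng.normal(size=L)
x2 = np.zeros(L + 1)
for t in range(L):
    x2[t + 1] = 0.9 * x2[t] + u2[t]
w2 = np.column_stack([u2, x2[:L]]).reshape(-1)

# ...is a linear combination of the Hankel columns:
g, *_ = np.linalg.lstsq(H, w2, rcond=None)
print(np.linalg.norm(H @ g - w2))  # ~0 up to numerical error
```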
arxiv-661525
|
2409.16391
|
Patch-Based Contrastive Learning and Memory Consolidation for Online Unsupervised Continual Learning
|
<|reference_start|>Patch-Based Contrastive Learning and Memory Consolidation for Online Unsupervised Continual Learning: We focus on a relatively unexplored learning paradigm known as {\em Online Unsupervised Continual Learning} (O-UCL), where an agent receives a non-stationary, unlabeled data stream and progressively learns to identify an increasing number of classes. This paradigm is designed to model real-world applications where encountering novelty is the norm, such as exploring a terrain with several unknown and time-varying entities. Unlike prior work in unsupervised, continual, or online learning, O-UCL combines all three areas into a single challenging and realistic learning paradigm. In this setting, agents are frequently evaluated and must aim to maintain the best possible representation at any point of the data stream, rather than at the end of pre-specified offline tasks. The proposed approach, called \textbf{P}atch-based \textbf{C}ontrastive learning and \textbf{M}emory \textbf{C}onsolidation (PCMC), builds a compositional understanding of data by identifying and clustering patch-level features. Embeddings for these patch-level features are extracted with an encoder trained via patch-based contrastive learning. PCMC incorporates new data into its distribution while avoiding catastrophic forgetting, and it consolidates memory examples during ``sleep" periods. We evaluate PCMC's performance on streams created from the ImageNet and Places365 datasets. Additionally, we explore various versions of the PCMC algorithm and compare its performance against several existing methods and simple baselines.<|reference_end|>
|
arxiv
|
@article{taylor2024patch-based,
title={Patch-Based Contrastive Learning and Memory Consolidation for Online
Unsupervised Continual Learning},
author={Cameron Taylor, Vassilis Vassiliades, Constantine Dovrolis},
journal={arXiv preprint arXiv:2409.16391},
year={2024},
archivePrefix={arXiv},
eprint={2409.16391},
primaryClass={cs.LG cs.CV}
}
|
taylor2024patch-based
|
arxiv-661526
|
2409.16392
|
Rao-Blackwellized POMDP Planning
|
<|reference_start|>Rao-Blackwellized POMDP Planning: Partially Observable Markov Decision Processes (POMDPs) provide a structured framework for decision-making under uncertainty, but their application requires efficient belief updates. Sequential Importance Resampling Particle Filters (SIRPF), also known as Bootstrap Particle Filters, are commonly used as belief updaters in large approximate POMDP solvers, but they face challenges such as particle deprivation and high computational costs as the system's state dimension grows. To address these issues, this study introduces Rao-Blackwellized POMDP (RB-POMDP) approximate solvers and outlines generic methods to apply Rao-Blackwellization in both belief updates and online planning. We compare the performance of SIRPF and Rao-Blackwellized Particle Filters (RBPF) in a simulated localization problem where an agent navigates toward a target in a GPS-denied environment using POMCPOW and RB-POMCPOW planners. Our results not only confirm that RBPFs maintain accurate belief approximations over time with fewer particles, but, more surprisingly, RBPFs combined with quadrature-based integration improve planning quality significantly compared to SIRPF-based planning under the same computational limits.<|reference_end|>
|
arxiv
|
@article{lee2024rao-blackwellized,
title={Rao-Blackwellized POMDP Planning},
author={Jiho Lee, Nisar R. Ahmed, Kyle H. Wray, Zachary N. Sunberg},
journal={arXiv preprint arXiv:2409.16392},
year={2024},
archivePrefix={arXiv},
eprint={2409.16392},
primaryClass={cs.AI cs.LG cs.RO}
}
|
lee2024rao-blackwellized
|
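A sketch of the baseline belief updater discussed in the RB-POMDP entry above (arXiv:2409.16392): one bootstrap (SIR) particle-filter step with systematic resampling. Rao-Blackwellization would sample only the nonlinear substate and track the remainder analytically (for example, one Kalman filter per particle); that machinery is not shown, and the toy models are assumptions.

```python
# Sketch of one bootstrap (SIR) particle-filter step: propagate, weight by the
# observation likelihood, then systematically resample.
import numpy as np

def systematic_resample(weights: np.ndarray, rng) -> np.ndarray:
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

def sir_step(particles, control, observation, transition, likelihood, rng):
    particles = transition(particles, control, rng)   # propagate
    w = likelihood(observation, particles)            # weight
    w /= w.sum()
    return particles[systematic_resample(w, rng)]     # resample

# Toy 1-D localization: random-walk motion, Gaussian range observation.
rng = np.random.default_rng(0)
particles = rng.normal(0.0, 5.0, size=1000)
transition = lambda p, u, rng: p + u + rng.normal(0, 0.3, size=p.shape)
likelihood = lambda z, p: np.exp(-0.5 * ((z - p) / 0.5) ** 2)
particles = sir_step(particles, control=1.0, observation=1.2,
                     transition=transition, likelihood=likelihood, rng=rng)
print(particles.mean(), particles.std())
```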
arxiv-661527
|
2409.16395
|
Design and Evaluation of a CDSS for Drug Allergy Management Using LLMs and Pharmaceutical Data Integration
|
<|reference_start|>Design and Evaluation of a CDSS for Drug Allergy Management Using LLMs and Pharmaceutical Data Integration: Medication errors significantly threaten patient safety, leading to adverse drug events and substantial economic burdens on healthcare systems. Clinical Decision Support Systems (CDSSs) aimed at mitigating these errors often face limitations, including reliance on static databases and rule-based algorithms, which can result in high false alert rates and alert fatigue among clinicians. This paper introduces HELIOT, an innovative CDSS for drug allergy management, integrating Large Language Models (LLMs) with a comprehensive pharmaceutical data repository. HELIOT leverages advanced natural language processing capabilities to interpret complex medical texts and synthesize unstructured data, overcoming the limitations of traditional CDSSs. An empirical evaluation using a synthetic patient dataset and expert-verified ground truth demonstrates HELIOT's high accuracy, precision, recall, and F1 score, uniformly reaching 100\% across multiple experimental runs. The results underscore HELIOT's potential to enhance decision support in clinical settings, offering a scalable, efficient, and reliable solution for managing drug allergies.<|reference_end|>
|
arxiv
|
@article{de vito2024design,
title={Design and Evaluation of a CDSS for Drug Allergy Management Using LLMs
and Pharmaceutical Data Integration},
author={Gabriele De Vito, Filomena Ferrucci, Athanasios Angelakis},
journal={arXiv preprint arXiv:2409.16395},
year={2024},
archivePrefix={arXiv},
eprint={2409.16395},
primaryClass={cs.AI}
}
|
de vito2024design
|
arxiv-661528
|
2409.16399
|
Revisiting Acoustic Features for Robust ASR
|
<|reference_start|>Revisiting Acoustic Features for Robust ASR: Automatic Speech Recognition (ASR) systems must be robust to the myriad types of noises present in real-world environments, including environmental noise, room impulse response, special effects, as well as attacks by malicious actors (adversarial attacks). Recent works seek to improve accuracy and robustness by developing novel Deep Neural Networks (DNNs) and curating diverse training datasets for them, while using relatively simple acoustic features. While this approach improves robustness to the types of noise present in the training data, it confers limited robustness against unseen noises and negligible robustness to adversarial attacks. In this paper, we revisit the approach of earlier works that developed acoustic features inspired by biological auditory perception that could be used to perform accurate and robust ASR. Specifically, we evaluate the ASR accuracy and robustness of several biologically inspired acoustic features. In addition to several features from prior works, such as gammatone filterbank features (GammSpec), we also propose two new acoustic features called frequency masked spectrogram (FreqMask) and difference of gammatones spectrogram (DoGSpec) to simulate the neuro-psychological phenomena of frequency masking and lateral suppression. Experiments on diverse models and datasets show that (1) DoGSpec achieves significantly better robustness than the highly popular log mel spectrogram (LogMelSpec) with minimal accuracy degradation, and (2) GammSpec achieves better accuracy and robustness to non-adversarial noises from the Speech Robust Bench benchmark, but it is outperformed by DoGSpec against adversarial attacks.<|reference_end|>
|
arxiv
|
@article{shah2024revisiting,
title={Revisiting Acoustic Features for Robust ASR},
author={Muhammad A. Shah, Bhiksha Raj},
journal={arXiv preprint arXiv:2409.16399},
year={2024},
archivePrefix={arXiv},
eprint={2409.16399},
primaryClass={cs.SD cs.CL eess.AS}
}
|
shah2024revisiting
|
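A sketch of the standard log-mel front end (LogMelSpec) that the acoustic-features entry above (arXiv:2409.16399) compares against, via torchaudio. The paper's proposed FreqMask and DoGSpec features and the gammatone filterbank are not part of torchaudio and are not reproduced here.

```python
# Sketch of the log-mel spectrogram baseline: mel filterbank energies on a
# short-time Fourier transform, converted to decibels.
import torch
import torchaudio.transforms as T

sample_rate = 16000
waveform = torch.randn(1, sample_rate)  # 1 s of dummy audio

logmel = torch.nn.Sequential(
    T.MelSpectrogram(sample_rate=sample_rate, n_fft=400,
                     hop_length=160, n_mels=80),
    T.AmplitudeToDB(),
)
features = logmel(waveform)
print(features.shape)  # (1, 80, ~101 frames)
```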
arxiv-661529
|
2409.16400
|
Chasing the Shadows: TTPs in Action to Attribute Advanced Persistent Threats
|
<|reference_start|>Chasing the Shadows: TTPs in Action to Attribute Advanced Persistent Threats: The current state of Advanced Persistent Threat (APT) attribution primarily relies on time-consuming manual processes. These include mapping incident artifacts onto threat attribution frameworks and employing expert reasoning to uncover the most likely responsible APT groups. This research aims to assist the threat analyst in the attribution process by presenting an attribution method named CAPTAIN (Comprehensive Advanced Persistent Threat AttrIbutioN). This novel APT attribution approach leverages the Tactics, Techniques, and Procedures (TTPs) employed by various APT groups in past attacks. CAPTAIN follows two significant development steps: baseline establishment and similarity measure for attack-pattern matching. The method starts by maintaining a TTP database of APTs seen in past attacks as the baseline behaviour of threat groups. The attribution process leverages the contextual information added by TTP sequences, which reflect the sequence of behaviours threat actors demonstrated during the attack across different kill-chain stages. It then compares the provided TTPs with the established baseline to identify the most closely matching threat group. CAPTAIN introduces a novel similarity measure for APT group attack-pattern matching that calculates the similarity between TTP sequences. The proposed approach outperforms traditional similarity measures like Cosine, Euclidean, and Longest Common Subsequence (LCS) in performing attribution. Overall, CAPTAIN performs attribution with a precision of 61.36% (top-1) and 69.98% (top-2), surpassing existing state-of-the-art attribution methods.<|reference_end|>
|
arxiv
|
@article{rani2024chasing,
title={Chasing the Shadows: TTPs in Action to Attribute Advanced Persistent
Threats},
author={Nanda Rani, Bikash Saha, Vikas Maurya, Sandeep Kumar Shukla},
journal={arXiv preprint arXiv:2409.16400},
year={2024},
archivePrefix={arXiv},
eprint={2409.16400},
primaryClass={cs.CR}
}
|
rani2024chasing
|
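A sketch of the Longest Common Subsequence (LCS) similarity that CAPTAIN (arXiv:2409.16400) is benchmarked against, applied to TTP sequences; the MITRE ATT&CK technique IDs below are illustrative, and CAPTAIN's own context-aware measure is not reproduced.

```python
# Sketch of the LCS baseline for TTP-sequence matching: dynamic-programming LCS
# length, normalized by the longer sequence's length.
def lcs_length(a: list[str], b: list[str]) -> int:
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1],
                                                               dp[i + 1][j])
    return dp[-1][-1]

def lcs_similarity(a: list[str], b: list[str]) -> float:
    return lcs_length(a, b) / max(len(a), len(b))

incident = ["T1566", "T1204", "T1059", "T1055", "T1041"]    # observed TTPs
apt_profile = ["T1566", "T1059", "T1055", "T1071", "T1041"] # baseline profile
print(f"LCS similarity: {lcs_similarity(incident, apt_profile):.2f}")  # 0.80
```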
arxiv-661530
|
2409.16404
|
FastTalker: Jointly Generating Speech and Conversational Gestures from Text
|
<|reference_start|>FastTalker: Jointly Generating Speech and Conversational Gestures from Text: Generating 3D human gestures and speech from a text script is critical for creating realistic talking avatars. One solution is to leverage separate pipelines for text-to-speech (TTS) and speech-to-gesture (STG), but this approach suffers from poor alignment of speech and gestures and slow inference times. In this paper, we introduce FastTalker, an efficient and effective framework that simultaneously generates high-quality speech audio and 3D human gestures at high inference speeds. Our key insight is reusing the intermediate features from speech synthesis for gesture generation, as these features contain more precise rhythmic information than features re-extracted from generated speech. Specifically, 1) we propose an end-to-end framework that concurrently generates speech waveforms and full-body gestures, using intermediate speech features such as pitch, onset, energy, and duration directly for gesture decoding; 2) we redesign the causal network architecture to eliminate dependencies on future inputs for real applications; 3) we employ Reinforcement Learning-based Neural Architecture Search (NAS) to enhance both performance and inference speed by optimizing our network architecture. Experimental results on the BEAT2 dataset demonstrate that FastTalker achieves state-of-the-art performance in both speech synthesis and gesture generation, processing speech and gestures in 0.17 seconds per second on an NVIDIA 3090.<|reference_end|>
|
arxiv
|
@article{guo2024fasttalker:,
title={FastTalker: Jointly Generating Speech and Conversational Gestures from
Text},
author={Zixin Guo, Jian Zhang},
journal={arXiv preprint arXiv:2409.16404},
year={2024},
archivePrefix={arXiv},
eprint={2409.16404},
primaryClass={cs.MM cs.SD eess.AS}
}
|
guo2024fasttalker:
|
arxiv-661531
|
2409.16405
|
Design of a Reformed Array Logic Binary Multiplier for High-Speed Computations
|
<|reference_start|>Design of a Reformed Array Logic Binary Multiplier for High-Speed Computations: Binary multipliers have long been a staple component in digital circuitry, serving crucial roles in microprocessor design, digital signal processing units, and many more applications. This work presents a unique design for a multiplier that utilizes a reformed-array-logic approach to compute the product of two unsigned binary numbers. We employed a multiplexer and a barrel shifter to compute partial products in a single clock cycle to speed up the traditional array logic. In addition, we used a combination of Carry Save Adders (CSA) and Ripple Carry Adders (RCA) to accumulate the partial products, instead of standalone RCAs, to speed up the multiplication process further. Finally, we demonstrated our design by multiplying two 16-bit unsigned binary numbers in Cadence Virtuoso. Our design is modular and can be scaled up or down to accommodate the multiplication of any n-bit unsigned numbers.<|reference_end|>
|
arxiv
|
@article{mohammad2024design,
title={Design of a Reformed Array Logic Binary Multiplier for High-Speed
Computations},
author={Sakib Mohammad and Themistoklis Haniotakis},
journal={arXiv preprint arXiv:2409.16405},
year={2024},
archivePrefix={arXiv},
eprint={2409.16405},
primaryClass={cs.AR}
}
|
mohammad2024design
|
arxiv-661532
|
2409.16407
|
Towards Representation Learning for Weighting Problems in Design-Based Causal Inference
|
<|reference_start|>Towards Representation Learning for Weighting Problems in Design-Based Causal Inference: Reweighting a distribution to minimize a distance to a target distribution is a powerful and flexible strategy for estimating a wide range of causal effects, but can be challenging in practice because optimal weights typically depend on knowledge of the underlying data-generating process. In this paper, we focus on design-based weights, which do not incorporate outcome information; prominent examples include prospective cohort studies, survey weighting, and the weighting portion of augmented weighting estimators. In such applications, we explore the central role of representation learning in finding desirable weights in practice. Unlike the common approach of assuming a well-specified representation, we highlight the error due to the choice of a representation and outline a general framework for finding suitable representations that minimize this error. Building on recent work that combines balancing weights and neural networks, we propose an end-to-end estimation procedure that learns a flexible representation, while retaining promising theoretical properties. We show that this approach is competitive in a range of common causal inference tasks.<|reference_end|>
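To make the weighting objective concrete, here is a minimal NumPy sketch (not the paper's representation-learning estimator) that learns design-based weights matching the covariate means of a source sample to a target sample; the data, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

# A toy design-based weighting problem: choose simplex weights w on the
# source sample so its weighted covariate mean matches the target mean.
rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(200, 3))
X_tgt = rng.normal(0.5, 1.0, size=(300, 3))      # shifted target distribution
m_tgt = X_tgt.mean(axis=0)

theta = np.zeros(200)                            # log-weights, one per source unit
for _ in range(2000):
    w = np.exp(theta); w /= w.sum()              # softmax keeps w on the simplex
    gap = X_src.T @ w - m_tgt                    # mean discrepancy to target
    g = X_src @ gap                              # d(0.5*||gap||^2)/dw
    theta -= 5.0 * w * (g - w @ g)               # chain rule through the softmax

w = np.exp(theta); w /= w.sum()
print(np.linalg.norm(X_src.T @ w - m_tgt))       # small residual after balancing
```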
|
arxiv
|
@article{clivio2024towards,
title={Towards Representation Learning for Weighting Problems in Design-Based
Causal Inference},
author={Oscar Clivio, Avi Feller, Chris Holmes},
journal={arXiv preprint arXiv:2409.16407},
year={2024},
archivePrefix={arXiv},
eprint={2409.16407},
primaryClass={stat.ML cs.LG stat.ME}
}
|
clivio2024towards
|
arxiv-661533
|
2409.16408
|
Modern Hopfield Networks meet Encoded Neural Representations -- Addressing Practical Considerations
|
<|reference_start|>Modern Hopfield Networks meet Encoded Neural Representations -- Addressing Practical Considerations: Content-addressable memories such as Modern Hopfield Networks (MHN) have been studied as mathematical models of auto-association and storage/retrieval in the human declarative memory, yet their practical use for large-scale content storage faces challenges. Chief among them is the occurrence of meta-stable states, particularly when handling large amounts of high-dimensional content. This paper introduces Hopfield Encoding Networks (HEN), a framework that integrates encoded neural representations into MHNs to improve pattern separability and reduce meta-stable states. We show that HEN can also be used for retrieval in the context of hetero-association of images with natural language queries, thus removing the limitation of requiring access to partial content in the same domain. Experimental results demonstrate a substantial reduction in meta-stable states and increased storage capacity while still enabling perfect recall of a significantly larger number of inputs, advancing the practical utility of associative memory networks for real-world tasks.<|reference_end|>
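For readers unfamiliar with the retrieval mechanics, here is a minimal NumPy sketch of the modern-Hopfield update that MHNs use; HEN's contribution, per the abstract, is to store encoded representations in place of raw inputs, which this toy example omits.

```python
import numpy as np

# A minimal sketch of modern-Hopfield retrieval: stored patterns are rows
# of X; a query is repeatedly mapped to the softmax-weighted combination
# of the stored patterns. HEN would populate X with encoded features.

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def retrieve(X, q, beta=8.0, steps=5):
    for _ in range(steps):
        q = X.T @ softmax(beta * (X @ q))   # one Hopfield update
    return q

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 64))               # 10 stored patterns, dimension 64
q = X[3] + 0.3 * rng.normal(size=64)        # noisy partial cue of pattern 3
out = retrieve(X, q)
print(int(np.argmax(X @ out)))              # -> 3: the stored pattern is recalled
```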
|
arxiv
|
@article{kashyap2024modern,
title={Modern Hopfield Networks meet Encoded Neural Representations --
Addressing Practical Considerations},
author={Satyananda Kashyap, Niharika S. D'Souza, Luyao Shi, Ken C. L. Wong,
Hongzhi Wang, Tanveer Syeda-Mahmood},
journal={arXiv preprint arXiv:2409.16408},
year={2024},
archivePrefix={arXiv},
eprint={2409.16408},
primaryClass={cs.LG cs.AI cs.CV cs.IR cs.NE}
}
|
kashyap2024modern
|
arxiv-661534
|
2409.16410
|
Evaluating Blocking Biases in Entity Matching
|
<|reference_start|>Evaluating Blocking Biases in Entity Matching: Entity Matching (EM) is crucial for identifying equivalent data entities across different sources, a task that becomes increasingly challenging with the growth and heterogeneity of data. Blocking techniques, which reduce the computational complexity of EM, play a vital role in making this process scalable. Despite advancements in blocking methods, the issue of fairness, where blocking may inadvertently favor certain demographic groups, has been largely overlooked. This study extends traditional blocking metrics to incorporate fairness, providing a framework for assessing bias in blocking techniques. Through experimental analysis, we evaluate the effectiveness and fairness of various blocking methods, offering insights into their potential biases. Our findings highlight the importance of considering fairness in EM, particularly in the blocking phase, to ensure equitable outcomes in data integration tasks.<|reference_end|>
|
arxiv
|
@article{moslemi2024evaluating,
title={Evaluating Blocking Biases in Entity Matching},
author={Mohammad Hossein Moslemi, Harini Balamurugan, Mostafa Milani},
journal={arXiv preprint arXiv:2409.16410},
year={2024},
archivePrefix={arXiv},
eprint={2409.16410},
primaryClass={cs.LG cs.DB}
}
|
moslemi2024evaluating
|
arxiv-661535
|
2409.16412
|
Vision-based Xylem Wetness Classification in Stem Water Potential Determination
|
<|reference_start|>Vision-based Xylem Wetness Classification in Stem Water Potential Determination: Water is often overused in irrigation, making efficient management of it crucial. Precision Agriculture emphasizes tools like stem water potential (SWP) analysis for better plant status determination. However, such tools often require labor-intensive in-situ sampling. Automation and machine learning can streamline this process and enhance outcomes. This work focused on automating stem detection and xylem wetness classification using the Scholander Pressure Chamber, a widely used but demanding method for SWP measurement. The aim was to refine stem detection and develop computer-vision-based methods to better classify water emergence at the xylem. To this end, we collected and manually annotated video data, applying vision- and learning-based methods for detection and classification. Additionally, we explored data augmentation and fine-tuned parameters to identify the most effective models. The identified best-performing models for stem detection and xylem wetness classification were evaluated end-to-end over 20 SWP measurements. Learning-based stem detection via YOLOv8n combined with ResNet50-based classification achieved a Top-1 accuracy of 80.98%, making it the best-performing approach for xylem wetness classification.<|reference_end|>
|
arxiv
|
@article{peiris2024vision-based,
title={Vision-based Xylem Wetness Classification in Stem Water Potential
Determination},
author={Pamodya Peiris, Aritra Samanta, Caio Mucchiani, Cody Simons, Amit
Roy-Chowdhury, and Konstantinos Karydis},
journal={arXiv preprint arXiv:2409.16412},
year={2024},
archivePrefix={arXiv},
eprint={2409.16412},
primaryClass={cs.RO cs.CV}
}
|
peiris2024vision-based
|
arxiv-661536
|
2409.16415
|
Improving Intersession Reproducibility for Forearm Ultrasound based Hand Gesture Classification through an Incremental Learning Approach
|
<|reference_start|>Improving Intersession Reproducibility for Forearm Ultrasound based Hand Gesture Classification through an Incremental Learning Approach: Ultrasound images of the forearm can be used to classify hand gestures towards developing human machine interfaces. In our previous work, we demonstrated gesture classification using ultrasound on a single subject without removing the probe before evaluation. This limits practical usage: once the probe is removed and replaced, accuracy declines because classifier performance is sensitive to the probe location on the arm. In this paper, we propose training a model on multiple data collection sessions to create a generalized model, utilizing incremental learning through fine-tuning. Ultrasound data was acquired for 5 hand gestures within a session (without removing and replacing the probe) and across sessions. A convolutional neural network (CNN) with 5 cascaded convolution layers was used for this study. A pre-trained CNN was fine-tuned, with the convolution blocks acting as a feature extractor and the parameters of the remaining layers updated incrementally. Fine-tuning was done using different splits within a session and between multiple sessions. We found that incremental fine-tuning enhances classification accuracy as more fine-tuning sessions are added. After 2 fine-tuning sessions for each experiment, we found an approximate 10% increase in classification accuracy. This work demonstrates that incremental learning through fine-tuning improves the accuracy of ultrasound-based hand gesture classification while saving storage, processing power, and time. It can be extended to generalize across multiple subjects and toward developing personalized wearable devices.<|reference_end|>
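A minimal PyTorch sketch of the frozen-feature-extractor fine-tuning step described above; the layer widths, input size, and learning rate are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Sketch: freeze the 5 cascaded conv blocks of a (nominally pre-trained)
# CNN so they act as a fixed feature extractor, and update only the
# classifier head with data from a new probe-placement session.

class UltrasoundCNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        blocks, ch = [], 1
        for out_ch in (8, 16, 32, 64, 128):           # 5 cascaded conv layers
            blocks += [nn.Conv2d(ch, out_ch, 3, padding=1), nn.ReLU(),
                       nn.MaxPool2d(2)]
            ch = out_ch
        self.features = nn.Sequential(*blocks)
        self.head = nn.Linear(128 * 2 * 2, n_classes)  # 64x64 input -> 2x2 maps

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = UltrasoundCNN()                                # pre-trained in practice
for p in model.features.parameters():
    p.requires_grad = False                            # freeze feature extractor
opt = torch.optim.Adam(model.head.parameters(), lr=1e-4)

x = torch.randn(4, 1, 64, 64)                          # dummy new-session batch
y = torch.randint(0, 5, (4,))
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()           # one incremental update
```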
|
arxiv
|
@article{bimbraw2024improving,
title={Improving Intersession Reproducibility for Forearm Ultrasound based Hand
Gesture Classification through an Incremental Learning Approach},
author={Keshav Bimbraw, Jack Rothenberg and Haichong K. Zhang},
journal={arXiv preprint arXiv:2409.16415},
year={2024},
archivePrefix={arXiv},
eprint={2409.16415},
primaryClass={cs.CV cs.RO}
}
|
bimbraw2024improving
|
arxiv-661537
|
2409.16416
|
Selection of Prompt Engineering Techniques for Code Generation through Predicting Code Complexity
|
<|reference_start|>Selection of Prompt Engineering Techniques for Code Generation through Predicting Code Complexity: Large Language Models (LLMs) have demonstrated impressive performance in software engineering tasks. However, improving their accuracy in generating correct and reliable code remains challenging. Numerous prompt engineering techniques (PETs) have been developed to address this, but no single approach is universally optimal. Selecting the right PET for each query is difficult for two primary reasons: (1) interactive prompting techniques may not consistently deliver the expected benefits, especially for simpler queries, and (2) current automated prompt engineering methods lack adaptability and fail to fully utilize multi-stage responses. To overcome these challenges, we propose PET-Select, a PET-agnostic selection model that uses code complexity as a proxy to classify queries and select the most appropriate PET. By incorporating contrastive learning, PET-Select effectively distinguishes between simple and complex problems, allowing it to choose PETs that are best suited for each query's complexity level. Our evaluations on the MBPP and HumanEval benchmarks using GPT-3.5 Turbo and GPT-4o show up to a 1.9% improvement in pass@1 accuracy, along with a 74.8% reduction in token usage. Additionally, we provide both quantitative and qualitative results to demonstrate how PET-Select effectively selects the most appropriate techniques for each code generation query, further showcasing its efficiency in optimizing PET selection.<|reference_end|>
|
arxiv
|
@article{wang2024selection,
title={Selection of Prompt Engineering Techniques for Code Generation through
Predicting Code Complexity},
author={Chung-Yu Wang, Alireza DaghighFarsoodeh, Hung Viet Pham},
journal={arXiv preprint arXiv:2409.16416},
year={2024},
archivePrefix={arXiv},
eprint={2409.16416},
primaryClass={cs.SE cs.AI}
}
|
wang2024selection
|
arxiv-661538
|
2409.16418
|
Task-oriented Prompt Enhancement via Script Generation
|
<|reference_start|>Task-oriented Prompt Enhancement via Script Generation: Large Language Models (LLMs) have demonstrated remarkable abilities across various tasks, leveraging advanced reasoning. Yet, they struggle with task-oriented prompts due to a lack of specific prior knowledge of the task answers. The current state-of-the-art approach, PAL, utilizes code generation to address this issue. However, PAL depends on manually crafted prompt templates and examples while still producing inaccurate results. In this work, we present TITAN, a novel strategy designed to enhance LLMs' performance on task-oriented prompts. TITAN achieves this by generating scripts using a universal approach and zero-shot learning. Unlike existing methods, TITAN eliminates the need for detailed task-specific instructions and extensive manual efforts. TITAN enhances LLMs' performance on various tasks by utilizing their analytical and code-generation capabilities in a streamlined process. TITAN employs two key techniques: (1) step-back prompting to extract the task's input specifications and (2) chain-of-thought prompting to identify required procedural steps. This information is used to improve the LLMs' code-generation process. TITAN further refines the generated script through post-processing, and the script is executed to retrieve the final answer. Our comprehensive evaluation demonstrates TITAN's effectiveness on a diverse set of tasks. On average, TITAN outperforms the state-of-the-art zero-shot approach by 7.6% and 3.9% when paired with GPT-3.5 and GPT-4, respectively. Overall, without human annotation, TITAN achieves state-of-the-art performance in 8 out of 11 cases, while losing only narrowly, on three occasions, to few-shot approaches that required human intervention. This work represents a significant advancement in addressing task-oriented prompts, offering a novel solution for effectively utilizing LLMs in everyday tasks.<|reference_end|>
|
arxiv
|
@article{wang2024task-oriented,
title={Task-oriented Prompt Enhancement via Script Generation},
author={Chung-Yu Wang, Alireza DaghighFarsoodeh, Hung Viet Pham},
journal={arXiv preprint arXiv:2409.16418},
year={2024},
archivePrefix={arXiv},
eprint={2409.16418},
primaryClass={cs.SE cs.AI}
}
|
wang2024task-oriented
|
arxiv-661539
|
2409.16420
|
Deep Learning Model-Based Channel Estimation for THz Band Massive MIMO with RF Impairments
|
<|reference_start|>Deep Learning Model-Based Channel Estimation for THz Band Massive MIMO with RF Impairments: THz-band-enabled large-scale massive MIMO (M-MIMO) is considered a key enabler for 6G technology, given its enormous bandwidth and low-latency connectivity. In the large-scale M-MIMO configuration, the enlarged array aperture and the small wavelengths of THz result in an amalgamation of both far-field and near-field paths, which makes channel estimation for THz M-MIMO highly challenging. Moreover, at the THz transceiver, radio frequency (RF) impairments such as phase noise (PN) of the analog devices also lead to degraded channel estimation performance. Classical estimators as well as traditional deep learning (DL) based algorithms struggle to maintain robustness when applied to large-scale antenna arrays, i.e., M-MIMO, and when RF impairments are considered for practical usage. To effectively address this issue, it is crucial to utilize a neural network (NN) that has the ability to study the behaviors of the channel and RF impairment correlations, such as a recurrent neural network (RNN). The RF impairments act as sequential noise that is subsequently incorporated with the channel data, leading us to choose a specific type of RNN known as bidirectional long short-term memory (BiLSTM), followed by gated recurrent units (GRU), to process the sequential data. Simulation results demonstrate that our proposed model outperforms other benchmark approaches at various signal-to-noise ratio (SNR) levels.<|reference_end|>
|
arxiv
|
@article{tarafder2024deep,
title={Deep Learning Model-Based Channel Estimation for THz Band Massive MIMO
with RF Impairments},
author={Pulok Tarafder, Imtiaz Ahmed, Danda B. Rawat, Ramesh Annavajjala,
Kumar Vijay Mishra},
journal={arXiv preprint arXiv:2409.16420},
year={2024},
archivePrefix={arXiv},
eprint={2409.16420},
primaryClass={cs.IT eess.SP math.IT}
}
|
tarafder2024deep
|
arxiv-661540
|
2409.16421
|
A minimizing movement approach for crystalline eikonal-curvature flows of spirals
|
<|reference_start|>A minimizing movement approach for crystalline eikonal-curvature flows of spirals: We propose an algorithm for evolving spiral curves on a planar domain by normal velocities depending on the so-called crystalline curvatures. The algorithm uses a minimizing movement approach and relies on a special level set method for embedding the spirals. We present numerical simulations and comparisons demonstrating the efficacy of the proposed numerical algorithm.<|reference_end|>
|
arxiv
|
@article{ohtsuka2024a,
title={A minimizing movement approach for crystalline eikonal-curvature flows
of spirals},
author={Takeshi Ohtsuka, and Yen-Hsi Richard Tsai},
journal={arXiv preprint arXiv:2409.16421},
year={2024},
archivePrefix={arXiv},
eprint={2409.16421},
primaryClass={math.NA cs.NA}
}
|
ohtsuka2024a
|
arxiv-661541
|
2409.16422
|
Is All Learning (Natural) Gradient Descent?
|
<|reference_start|>Is All Learning (Natural) Gradient Descent?: This paper shows that a wide class of effective learning rules -- those that improve a scalar performance measure over a given time window -- can be rewritten as natural gradient descent with respect to a suitably defined loss function and metric. Specifically, we show that parameter updates within this class of learning rules can be expressed as the product of a symmetric positive definite matrix (i.e., a metric) and the negative gradient of a loss function. We also demonstrate that these metrics have a canonical form and identify several optimal ones, including the metric that achieves the minimum possible condition number. The proofs of the main results are straightforward, relying only on elementary linear algebra and calculus, and are applicable to continuous-time, discrete-time, stochastic, and higher-order learning rules, as well as loss functions that explicitly depend on time.<|reference_end|>
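A small numerical illustration of the claim, using one simple SPD construction among the many the paper characterizes; the vectors are random stand-ins, not derived from any particular learning rule.

```python
import numpy as np

# Given a gradient g and an update dtheta with dtheta @ g < 0 (i.e., the
# update improves the loss to first order), build a symmetric positive
# (semi)definite metric M with dtheta = -M @ g, exhibiting the update as
# natural gradient descent with respect to M.
rng = np.random.default_rng(0)
g = rng.normal(size=4)                       # loss gradient
dtheta = rng.normal(size=4)
if dtheta @ g > 0:
    dtheta = -dtheta                         # ensure an effective update

c = -(dtheta @ g)                            # > 0 by construction
M = np.outer(dtheta, dtheta) / c             # rank-one part gives -M g = dtheta
P = np.eye(4) - np.outer(g, g) / (g @ g)     # projector onto g's orthogonal
M = M + P                                    # complement; since P g = 0, the
                                             # identity -M g = dtheta is preserved
print(np.allclose(-M @ g, dtheta))           # True: update = -(metric)(gradient)
print(np.linalg.eigvalsh(M).min() > -1e-10)  # True: M is PSD (PD generically)
```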
|
arxiv
|
@article{shoji2024is,
title={Is All Learning (Natural) Gradient Descent?},
author={Lucas Shoji, Kenta Suzuki, Leo Kozachkov},
journal={arXiv preprint arXiv:2409.16422},
year={2024},
archivePrefix={arXiv},
eprint={2409.16422},
primaryClass={cs.LG math.DS q-bio.NC}
}
|
shoji2024is
|
arxiv-661542
|
2409.16425
|
Lessons for Editors of AI Incidents from the AI Incident Database
|
<|reference_start|>Lessons for Editors of AI Incidents from the AI Incident Database: As artificial intelligence (AI) systems become increasingly deployed across the world, they are also increasingly implicated in AI incidents - harm events to individuals and society. As a result, industry, civil society, and governments worldwide are developing best practices and regulations for monitoring and analyzing AI incidents. The AI Incident Database (AIID) is a project that catalogs AI incidents and supports further research by providing a platform to classify incidents for different operational and research-oriented goals. This study reviews the AIID's dataset of 750+ AI incidents and two independent taxonomies applied to these incidents to identify common challenges to indexing and analyzing AI incidents. We find that certain patterns of AI incidents present structural ambiguities that challenge incident databasing and explore how epistemic uncertainty in AI incident reporting is unavoidable. We therefore report mitigations to make incident processes more robust to uncertainty related to cause, extent of harm, severity, or technical details of implicated systems. With these findings, we discuss how to develop future AI incident reporting practices.<|reference_end|>
|
arxiv
|
@article{paeth2024lessons,
title={Lessons for Editors of AI Incidents from the AI Incident Database},
author={Kevin Paeth, Daniel Atherton, Nikiforos Pittaras, Heather Frase, Sean
McGregor},
journal={arXiv preprint arXiv:2409.16425},
year={2024},
archivePrefix={arXiv},
eprint={2409.16425},
primaryClass={cs.CY cs.AI cs.LG}
}
|
paeth2024lessons
|
arxiv-661543
|
2409.16426
|
Statistical tuning of artificial neural network
|
<|reference_start|>Statistical tuning of artificial neural network: Neural networks are often regarded as "black boxes" due to their complex functions and numerous parameters, which poses significant challenges for interpretability. This study addresses these challenges by introducing methods to enhance the understanding of neural networks, focusing specifically on models with a single hidden layer. We establish a theoretical framework by demonstrating that the neural network estimator can be interpreted as a nonparametric regression model. Building on this foundation, we propose statistical tests to assess the significance of input neurons and introduce algorithms for dimensionality reduction, including clustering and principal component analysis (PCA), to simplify the network and improve its interpretability and accuracy. The key contributions of this study include the development of a bootstrapping technique for evaluating artificial neural network (ANN) performance, applying statistical tests and logistic regression to analyze hidden neurons, and assessing neuron efficiency. We also investigate the behavior of individual hidden neurons in relation to output neurons and apply these methodologies to the IDC and Iris datasets to validate their practical utility. This research advances the field of Explainable Artificial Intelligence by presenting robust statistical frameworks for interpreting neural networks, thereby facilitating a clearer understanding of the relationships between inputs, outputs, and individual network components.<|reference_end|>
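Since the abstract mentions a bootstrapping technique for evaluating ANN performance, here is a minimal sketch of that general idea; the synthetic predictions stand in for a trained network, and the paper's actual procedure is more involved.

```python
import numpy as np

# Bootstrap evaluation of a classifier: resample (prediction, label) pairs
# with replacement to get a confidence interval on accuracy rather than a
# single point estimate.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=150)                  # Iris-sized test set
y_pred = np.where(rng.random(150) < 0.9, y_true,       # roughly 90%-accurate
                  rng.integers(0, 3, size=150))        # synthetic "model"

B, n = 2000, len(y_true)
accs = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)                   # bootstrap resample
    accs[b] = np.mean(y_pred[idx] == y_true[idx])

lo, hi = np.percentile(accs, [2.5, 97.5])
print(f"accuracy = {np.mean(y_pred == y_true):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```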
|
arxiv
|
@article{mohamad2024statistical,
title={Statistical tuning of artificial neural network},
author={Mohamad Yamen AL Mohamad, Hossein Bevrani and Ali Akbar Haydari},
journal={arXiv preprint arXiv:2409.16426},
year={2024},
archivePrefix={arXiv},
eprint={2409.16426},
primaryClass={stat.ML cs.LG stat.AP}
}
|
mohamad2024statistical
|
arxiv-661544
|
2409.16427
|
HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions
|
<|reference_start|>HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions: AI agents are increasingly autonomous in their interactions with human users and tools, leading to increased interactional safety risks. We present HAICOSYSTEM, a framework examining AI agent safety within diverse and complex social interactions. HAICOSYSTEM features a modular sandbox environment that simulates multi-turn interactions between human users and AI agents, where the AI agents are equipped with a variety of tools (e.g., patient management platforms) to navigate diverse scenarios (e.g., a user attempting to access other patients' profiles). To examine the safety of AI agents in these interactions, we develop a comprehensive multi-dimensional evaluation framework that uses metrics covering operational, content-related, societal, and legal risks. Through running 1840 simulations based on 92 scenarios across seven domains (e.g., healthcare, finance, education), we demonstrate that HAICOSYSTEM can emulate realistic user-AI interactions and complex tool use by AI agents. Our experiments show that state-of-the-art LLMs, both proprietary and open-sourced, exhibit safety risks in over 50% of cases, with models generally showing higher risks when interacting with simulated malicious users. Our findings highlight the ongoing challenge of building agents that can safely navigate complex interactions, particularly when faced with malicious users. To foster the AI agent safety ecosystem, we release a code platform that allows practitioners to create custom scenarios, simulate interactions, and evaluate the safety and performance of their agents.<|reference_end|>
|
arxiv
|
@article{zhou2024haicosystem:,
title={HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI
Interactions},
author={Xuhui Zhou, Hyunwoo Kim, Faeze Brahman, Liwei Jiang, Hao Zhu, Ximing
Lu, Frank Xu, Bill Yuchen Lin, Yejin Choi, Niloofar Mireshghallah, Ronan Le
Bras, and Maarten Sap},
journal={arXiv preprint arXiv:2409.16427},
year={2024},
archivePrefix={arXiv},
eprint={2409.16427},
primaryClass={cs.AI}
}
|
zhou2024haicosystem:
|
arxiv-661545
|
2409.16429
|
Leveraging Local Structure for Improving Model Explanations: An Information Propagation Approach
|
<|reference_start|>Leveraging Local Structure for Improving Model Explanations: An Information Propagation Approach: Numerous explanation methods have been recently developed to interpret the decisions made by deep neural network (DNN) models. For image classifiers, these methods typically provide an attribution score to each pixel in the image to quantify its contribution to the prediction. However, most of these explanation methods assign attribution scores to pixels independently, even though both humans and DNNs make decisions by analyzing a set of closely related pixels simultaneously. Hence, the attribution score of a pixel should be evaluated jointly by considering itself and its structurally similar pixels. We propose a method called IProp, which models each pixel's individual attribution score as a source of explanatory information and explains the image prediction through the dynamic propagation of information across all pixels. To formulate the information propagation, IProp adopts the Markov Reward Process, which guarantees convergence, and the final state yields the desired attribution scores for the pixels. Furthermore, IProp is compatible with any existing attribution-based explanation method. Extensive experiments on various explanation methods and DNN models verify that IProp significantly improves them on a variety of interpretability metrics.<|reference_end|>
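A minimal NumPy sketch of damped information propagation in the spirit of IProp; the similarity kernel, damping factor, and toy "pixels" are illustrative assumptions rather than the paper's construction, but the damped iteration is a contraction and therefore converges to a unique fixed point, matching the Markov Reward Process framing.

```python
import numpy as np

# Diffuse base attributions a0 through a row-stochastic similarity matrix P,
# so each pixel's score reflects its structurally similar neighbors.
rng = np.random.default_rng(0)
n = 16                                   # pixels (a flattened 4x4 patch)
feats = rng.normal(size=(n, 3))          # toy per-pixel features
S = np.exp(-np.square(feats[:, None] - feats[None]).sum(-1))
P = S / S.sum(axis=1, keepdims=True)     # row-stochastic transitions

a0 = rng.random(n)                       # base attributions from any method
gamma = 0.8                              # damping: how far information travels
a = a0.copy()
for _ in range(200):
    a_new = (1 - gamma) * a0 + gamma * P @ a   # damped propagation step
    if np.max(np.abs(a_new - a)) < 1e-10:      # contraction -> converges
        break
    a = a_new
print(a.round(3))                        # smoothed, structure-aware attributions
```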
|
arxiv
|
@article{yang2024leveraging,
title={Leveraging Local Structure for Improving Model Explanations: An
Information Propagation Approach},
author={Ruo Yang, Binghui Wang, Mustafa Bilgic},
journal={arXiv preprint arXiv:2409.16429},
year={2024},
doi={10.1145/3627673.3679575},
archivePrefix={arXiv},
eprint={2409.16429},
primaryClass={cs.CV cs.AI cs.LG}
}
|
yang2024leveraging
|
arxiv-661546
|
2409.16430
|
A Comprehensive Survey of Bias in LLMs: Current Landscape and Future Directions
|
<|reference_start|>A Comprehensive Survey of Bias in LLMs: Current Landscape and Future Directions: Large Language Models(LLMs) have revolutionized various applications in natural language processing (NLP) by providing unprecedented text generation, translation, and comprehension capabilities. However, their widespread deployment has brought to light significant concerns regarding biases embedded within these models. This paper presents a comprehensive survey of biases in LLMs, aiming to provide an extensive review of the types, sources, impacts, and mitigation strategies related to these biases. We systematically categorize biases into several dimensions. Our survey synthesizes current research findings and discusses the implications of biases in real-world applications. Additionally, we critically assess existing bias mitigation techniques and propose future research directions to enhance fairness and equity in LLMs. This survey serves as a foundational resource for researchers, practitioners, and policymakers concerned with addressing and understanding biases in LLMs.<|reference_end|>
|
arxiv
|
@article{ranjan2024a,
title={A Comprehensive Survey of Bias in LLMs: Current Landscape and Future
Directions},
author={Rajesh Ranjan, Shailja Gupta, Surya Narayan Singh},
journal={arXiv preprint arXiv:2409.16430},
year={2024},
archivePrefix={arXiv},
eprint={2409.16430},
primaryClass={cs.CL cs.AI cs.CY cs.HC}
}
|
ranjan2024a
|
arxiv-661547
|
2409.16431
|
Hand Gesture Classification Based on Forearm Ultrasound Video Snippets Using 3D Convolutional Neural Networks
|
<|reference_start|>Hand Gesture Classification Based on Forearm Ultrasound Video Snippets Using 3D Convolutional Neural Networks: Ultrasound-based hand movement estimation is a crucial area of research with applications in human-machine interaction. Forearm ultrasound offers detailed information about muscle morphology changes during hand movement, which can be used to estimate hand gestures. Previous work has focused on analyzing 2-Dimensional (2D) ultrasound image frames using techniques such as convolutional neural networks (CNNs). However, such 2D techniques do not capture temporal features from segments of ultrasound data corresponding to continuous hand movements. This study uses 3D CNN based techniques to capture spatio-temporal patterns within ultrasound video segments for gesture recognition. We compared the performance of a 2D convolution-based network with (2+1)D convolution-based and 3D convolution-based networks, as well as our proposed network. Our methodology improved gesture classification accuracy to 98.8 +/- 0.9%, from 96.5 +/- 2.3% for a network trained with 2D convolution layers. These results demonstrate the advantages of using ultrasound video snippets for improving hand gesture classification performance.<|reference_end|>
|
arxiv
|
@article{bimbraw2024hand,
title={Hand Gesture Classification Based on Forearm Ultrasound Video Snippets
Using 3D Convolutional Neural Networks},
author={Keshav Bimbraw, Ankit Talele and Haichong K. Zhang},
journal={arXiv preprint arXiv:2409.16431},
year={2024},
archivePrefix={arXiv},
eprint={2409.16431},
primaryClass={cs.CV cs.RO eess.IV}
}
|
bimbraw2024hand
|
arxiv-661548
|
2409.16434
|
Lessons Learned from a Unifying Empirical Study of Parameter-Efficient Transfer Learning (PETL) in Visual Recognition
|
<|reference_start|>Lessons Learned from a Unifying Empirical Study of Parameter-Efficient Transfer Learning (PETL) in Visual Recognition: Parameter-efficient transfer learning (PETL) has attracted significant attention lately, due to the increasing size of pre-trained models and the need to fine-tune (FT) them for superior downstream performance. This community-wide enthusiasm has sparked a plethora of approaches. Nevertheless, a systematic study to understand their performance and suitable application scenarios is lacking, leaving questions like when to apply PETL and which approach to use largely unanswered. In this paper, we conduct a unifying empirical study of representative PETL methods in the context of Vision Transformers. We systematically tune their hyper-parameters to fairly compare their accuracy on downstream tasks. Our study not only offers a valuable user guide but also unveils several new insights. First, if tuned carefully, different PETL methods can obtain similar accuracy in the low-shot benchmark VTAB-1K. This includes simple methods, such as fine-tuning only the bias terms, that had previously been reported as inferior. Second, though with similar accuracy, we find that PETL methods make different mistakes and high-confidence predictions, likely due to their different inductive biases. Such an inconsistency (or complementariness) opens up the opportunity for ensemble methods, and we make preliminary attempts at this. Third, going beyond the commonly used low-shot tasks, we find that PETL is also useful in many-shot regimes -- it achieves comparable and sometimes better accuracy than full FT, using much fewer learnable parameters. Last but not least, we investigate PETL's ability to preserve a pre-trained model's robustness to distribution shifts (e.g., a CLIP backbone). Perhaps not surprisingly, PETL methods outperform full FT alone. However, with weight-space ensembles, the fully fine-tuned model can better balance target (i.e., downstream) distribution and distribution shift performance, suggesting a future research direction for PETL.<|reference_end|>
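For the weight-space ensemble mentioned at the end, here is a minimal PyTorch sketch in the style of WiSE-FT; the tiny module is a stand-in for a real backbone, and alpha trades off downstream accuracy against robustness to distribution shift.

```python
import torch
import torch.nn as nn

# Weight-space ensemble: linearly interpolate the pre-trained and fully
# fine-tuned weights of the same architecture, parameter by parameter.

def weight_space_ensemble(pretrained: nn.Module, finetuned: nn.Module,
                          alpha: float) -> nn.Module:
    merged = type(finetuned)()                   # fresh copy of the architecture
    sd_pre, sd_ft = pretrained.state_dict(), finetuned.state_dict()
    merged.load_state_dict({k: (1 - alpha) * sd_pre[k] + alpha * sd_ft[k]
                            for k in sd_ft})
    return merged

class Tiny(nn.Module):                           # stand-in for a CLIP backbone
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 2)

pre, ft = Tiny(), Tiny()                         # pre-trained / fine-tuned weights
model = weight_space_ensemble(pre, ft, alpha=0.5)
print(torch.allclose(model.fc.weight,
                     0.5 * pre.fc.weight + 0.5 * ft.fc.weight))  # True
```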
|
arxiv
|
@article{mai2024lessons,
title={Lessons Learned from a Unifying Empirical Study of Parameter-Efficient
Transfer Learning (PETL) in Visual Recognition},
author={Zheda Mai, Ping Zhang, Cheng-Hao Tu, Hong-You Chen, Li Zhang, Wei-Lun
Chao},
journal={arXiv preprint arXiv:2409.16434},
year={2024},
archivePrefix={arXiv},
eprint={2409.16434},
primaryClass={cs.LG cs.AI cs.CV}
}
|
mai2024lessons
|
arxiv-661549
|
2409.16438
|
Glitch in Time: Exploiting Temporal Misalignment of IMU For Eavesdropping
|
<|reference_start|>Glitch in Time: Exploiting Temporal Misalignment of IMU For Eavesdropping: The increasing use of voice assistants and related applications has raised significant concerns about the security of Inertial Measurement Units (IMUs) in smartphones. These devices are vulnerable to acoustic eavesdropping attacks, jeopardizing user privacy. In response, Google imposed a rate limit of 200 Hz on permission-free access to IMUs, aiming to neutralize such side-channel attacks. Our research introduces a novel exploit, STAG, which circumvents these protections. It induces a temporal misalignment between the gyroscope and accelerometer, cleverly combining their data to resample at higher rates and reviving the potential for eavesdropping attacks previously curtailed by Google's security enhancements. Compared to prior methods, STAG achieves an 83.4% reduction in word error rate, highlighting its effectiveness in exploiting IMU data under restricted access and emphasizing the persistent security risks associated with these sensors.<|reference_end|>
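A minimal NumPy sketch of the resampling idea behind STAG, under the simplifying assumption that both sensors observe the same acoustic vibration; the rates, offset, and test tone are illustrative.

```python
import numpy as np

# Two 200 Hz streams observing the same vibration, offset by half a sample
# period, interleave into one effective 400 Hz stream.
fs = 200.0                                    # permission-free IMU rate (Hz)
t = np.arange(0, 0.1, 1 / fs)

def vib(t):                                   # 150 Hz speech-band tone: aliased
    return np.sin(2 * np.pi * 150 * t)        # at 200 Hz, recoverable at 400 Hz

gyro = vib(t)                                 # sensor A samples at t
accel = vib(t + 0.5 / fs)                     # sensor B misaligned by T/2

merged = np.empty(2 * len(t))                 # interleave the two streams
merged[0::2], merged[1::2] = gyro, accel
t400 = np.arange(len(merged)) * (0.5 / fs)
print(len(merged), "samples at", 1.0 / (t400[1] - t400[0]), "Hz")
```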
|
arxiv
|
@article{najeeb2024glitch,
title={Glitch in Time: Exploiting Temporal Misalignment of IMU For
Eavesdropping},
author={Ahmed Najeeb, Abdul Rafay, Naveed Anwar Bhatti, Muhammad Hamad Alizai},
journal={arXiv preprint arXiv:2409.16438},
year={2024},
archivePrefix={arXiv},
eprint={2409.16438},
primaryClass={cs.CR}
}
|
najeeb2024glitch
|
arxiv-661550
|
2409.16439
|
Active Perception with Initial-State Uncertainty: A Policy Gradient Method
|
<|reference_start|>Active Perception with Initial-State Uncertainty: A Policy Gradient Method: This paper studies the synthesis of an active perception policy that maximizes the information leakage of the initial state in a stochastic system modeled as a hidden Markov model (HMM). Specifically, the emission function of the HMM is controllable with a set of perception or sensor query actions. Given the goal is to infer the initial state from partial observations in the HMM, we use Shannon conditional entropy as the planning objective and develop a novel policy gradient method with convergence guarantees. By leveraging a variant of observable operators in HMMs, we prove several important properties of the gradient of the conditional entropy with respect to the policy parameters, which allow efficient computation of the policy gradient and stable and fast convergence. We demonstrate the effectiveness of our solution by applying it to an inference problem in a stochastic grid world environment.<|reference_end|>
|
arxiv
|
@article{shi2024active,
title={Active Perception with Initial-State Uncertainty: A Policy Gradient
Method},
author={Chongyang Shi, Shuo Han, Michael Dorothy, and Jie Fu},
journal={arXiv preprint arXiv:2409.16439},
year={2024},
archivePrefix={arXiv},
eprint={2409.16439},
primaryClass={eess.SY cs.SY}
}
|
shi2024active
|
arxiv-661551
|
2409.16441
|
A novel open-source ultrasound dataset with deep learning benchmarks for spinal cord injury localization and anatomical segmentation
|
<|reference_start|>A novel open-source ultrasound dataset with deep learning benchmarks for spinal cord injury localization and anatomical segmentation: While deep learning has catalyzed breakthroughs across numerous domains, its broader adoption in clinical settings is inhibited by the costly and time-intensive nature of data acquisition and annotation. To further facilitate medical machine learning, we present an ultrasound dataset of 10,223 Brightness-mode (B-mode) images consisting of sagittal slices of porcine spinal cords (N=25) before and after a contusion injury. We additionally benchmark the performance metrics of several state-of-the-art object detection algorithms to localize the site of injury and semantic segmentation models to label the anatomy for comparison and creation of task-specific architectures. Finally, we evaluate the zero-shot generalization capabilities of the segmentation models on human ultrasound spinal cord images to determine whether training on our porcine dataset is sufficient for accurately interpreting human data. Our results show that the YOLOv8 detection model outperforms all evaluated models for injury localization, achieving a mean Average Precision (mAP50-95) score of 0.606. Segmentation metrics indicate that the DeepLabv3 segmentation model achieves the highest accuracy on unseen porcine anatomy, with a Mean Dice score of 0.587, while SAMed achieves the highest Mean Dice score generalizing to human anatomy (0.445). To the best of our knowledge, this is the largest annotated dataset of spinal cord ultrasound images made publicly available to researchers and medical professionals, as well as the first public report of object detection and segmentation architectures to assess anatomical markers in the spinal cord for methodology development and clinical applications.<|reference_end|>
|
arxiv
|
@article{kumar2024a,
title={A novel open-source ultrasound dataset with deep learning benchmarks for
spinal cord injury localization and anatomical segmentation},
author={Avisha Kumar, Kunal Kotkar, Kelly Jiang, Meghana Bhimreddy, Daniel
Davidar, Carly Weber-Levine, Siddharth Krishnan, Max J. Kerensky, Ruixing
Liang, Kelley Kempski Leadingham, Denis Routkevitch, Andrew M. Hersh,
Kimberly Ashayeri, Betty Tyler, Ian Suk, Jennifer Son, Nicholas Theodore,
Nitish Thakor, and Amir Manbachi},
journal={arXiv preprint arXiv:2409.16441},
year={2024},
archivePrefix={arXiv},
eprint={2409.16441},
primaryClass={eess.IV cs.CV cs.LG}
}
|
kumar2024a
|
arxiv-661552
|
2409.16444
|
Artificial Intelligence for Secured Information Systems in Smart Cities: Collaborative IoT Computing with Deep Reinforcement Learning and Blockchain
|
<|reference_start|>Artificial Intelligence for Secured Information Systems in Smart Cities: Collaborative IoT Computing with Deep Reinforcement Learning and Blockchain: The accelerated expansion of the Internet of Things (IoT) has raised critical challenges associated with privacy, security, and data integrity, specifically in infrastructures such as smart cities or smart manufacturing. Blockchain technology provides immutable, scalable, and decentralized solutions to address these challenges, and integrating deep reinforcement learning (DRL) into the IoT environment offers enhanced adaptability and decision-making. This paper investigates the integration of blockchain and DRL to optimize mobile transmission and secure data exchange in IoT-assisted smart cities. Through the clustering and categorization of IoT application systems, the combination of DRL and blockchain is shown to enhance the performance of IoT networks by maintaining privacy and security. Based on the review of papers published between 2015 and 2024, we have classified the presented approaches and offered practical taxonomies, which provide researchers with critical perspectives and highlight potential areas for future exploration and research. Our investigation shows how combining blockchain's decentralized framework with DRL can address privacy and security issues, improve mobile transmission efficiency, and guarantee robust, privacy-preserving IoT systems. Additionally, we explore blockchain integration for DRL and outline the notable applications of DRL technology. By addressing the challenges of machine learning and blockchain integration, this study proposes novel perspectives for researchers and serves as a foundational exploration from an interdisciplinary standpoint.<|reference_end|>
|
arxiv
|
@article{far2024artificial,
title={Artificial Intelligence for Secured Information Systems in Smart Cities:
Collaborative IoT Computing with Deep Reinforcement Learning and Blockchain},
author={Amin Zakaie Far, Mohammad Zakaie Far, Sonia Gharibzadeh, Shiva
Zangeneh, Leila Amini, Morteza Rahimi},
journal={arXiv preprint arXiv:2409.16444},
year={2024},
archivePrefix={arXiv},
eprint={2409.16444},
primaryClass={cs.AI cs.CR}
}
|
far2024artificial
|
arxiv-661553
|
2409.16446
|
Underground Mapping and Localization Based on Ground-Penetrating Radar
|
<|reference_start|>Underground Mapping and Localization Based on Ground-Penetrating Radar: 3D object reconstruction based on deep neural networks has gained increasing attention in recent years. However, 3D reconstruction of underground objects to generate point cloud maps remains a challenge. Ground Penetrating Radar (GPR) is one of the most powerful and extensively used tools for detecting and locating underground objects such as plant root systems and pipelines, owing to its cost-effectiveness and continuously evolving technology. This paper introduces a parabolic signal detection network based on deep convolutional neural networks, utilizing B-scan images from GPR sensors. The detected keypoints can aid in accurately fitting parabolic curves used to interpret the original GPR B-scan images as cross-sections of the object model. Additionally, a multi-task point cloud network was designed to perform both point cloud segmentation and completion simultaneously, filling in sparse point cloud maps. For unknown locations, GPR A-scan data can be used to match corresponding A-scan data in the constructed map, pinpointing the position to verify the accuracy of the map construction by the model. Experimental results demonstrate the effectiveness of our method.<|reference_end|>
|
arxiv
|
@article{zhang2024underground,
title={Underground Mapping and Localization Based on Ground-Penetrating Radar},
author={Jinchang Zhang, Guoyu Lu},
journal={arXiv preprint arXiv:2409.16446},
year={2024},
archivePrefix={arXiv},
eprint={2409.16446},
primaryClass={cs.CV}
}
|
zhang2024underground
|
arxiv-661554
|
2409.16450
|
A Multi-Agent Multi-Environment Mixed Q-Learning for Partially Decentralized Wireless Network Optimization
|
<|reference_start|>A Multi-Agent Multi-Environment Mixed Q-Learning for Partially Decentralized Wireless Network Optimization: Q-learning is a powerful tool for network control and policy optimization in wireless networks, but it struggles with large state spaces. Recent advancements, like multi-environment mixed Q-learning (MEMQ), improve performance and reduce complexity by integrating multiple Q-learning algorithms across multiple related environments, so-called digital cousins. However, MEMQ is designed for centralized single-agent networks and is not suitable for decentralized or multi-agent networks. To address this challenge, we propose a novel multi-agent MEMQ algorithm for partially decentralized wireless networks with multiple mobile transmitters (TXs) and base stations (BSs), where TXs do not have access to each other's states and actions. In uncoordinated states, TXs act independently to minimize their individual costs. In coordinated states, TXs use a Bayesian approach to estimate the joint state based on local observations and share limited information with a leader TX to minimize the joint cost. The cost of information sharing scales linearly with the number of TXs and is independent of the joint state-action space size. The proposed scheme is 50% faster than centralized MEMQ with only a 20% increase in average policy error (APE) and is 25% faster than several advanced decentralized Q-learning algorithms with 40% less APE. The convergence of the algorithm is also demonstrated.<|reference_end|>
|
arxiv
|
@article{bozkus2024a,
title={A Multi-Agent Multi-Environment Mixed Q-Learning for Partially
Decentralized Wireless Network Optimization},
author={Talha Bozkus, Urbashi Mitra},
journal={arXiv preprint arXiv:2409.16450},
year={2024},
archivePrefix={arXiv},
eprint={2409.16450},
primaryClass={eess.SP cs.LG}
}
|
bozkus2024a
|
arxiv-661555
|
2409.16451
|
Hierarchical Hybrid Learning for Long-Horizon Contact-Rich Robotic Assembly
|
<|reference_start|>Hierarchical Hybrid Learning for Long-Horizon Contact-Rich Robotic Assembly: Generalizable long-horizon robotic assembly requires reasoning at multiple levels of abstraction. End-to-end imitation learning (IL) has been proven a promising approach, but it requires a large amount of demonstration data for training and often fails to meet the high-precision requirement of assembly tasks. Reinforcement Learning (RL) approaches have succeeded in high-precision assembly tasks, but suffer from sample inefficiency and hence, are less competent at long-horizon tasks. To address these challenges, we propose a hierarchical modular approach, named ARCH (Adaptive Robotic Composition Hierarchy), which enables long-horizon high-precision assembly in contact-rich settings. ARCH employs a hierarchical planning framework, including a low-level primitive library of continuously parameterized skills and a high-level policy. The low-level primitive library includes essential skills for assembly tasks, such as grasping and inserting. These primitives consist of both RL and model-based controllers. The high-level policy, learned via imitation learning from a handful of demonstrations, selects the appropriate primitive skills and instantiates them with continuous input parameters. We extensively evaluate our approach on a real robot manipulation platform. We show that while trained on a single task, ARCH generalizes well to unseen tasks and outperforms baseline methods in terms of success rate and data efficiency. Videos can be found at https://long-horizon-assembly.github.io.<|reference_end|>
|
arxiv
|
@article{sun2024hierarchical,
title={Hierarchical Hybrid Learning for Long-Horizon Contact-Rich Robotic
Assembly},
author={Jiankai Sun, Aidan Curtis, Yang You, Yan Xu, Michael Koehle, Leonidas
Guibas, Sachin Chitta, Mac Schwager, Hui Li},
journal={arXiv preprint arXiv:2409.16451},
year={2024},
archivePrefix={arXiv},
eprint={2409.16451},
primaryClass={cs.RO}
}
|
sun2024hierarchical
|
arxiv-661556
|
2409.16452
|
FMDLlama: Financial Misinformation Detection based on Large Language Models
|
<|reference_start|>FMDLlama: Financial Misinformation Detection based on Large Language Models: The emergence of social media has made the spread of misinformation easier. In the financial domain, the accuracy of information is crucial for various aspects of financial markets, which has made financial misinformation detection (FMD) an urgent problem that needs to be addressed. Large language models (LLMs) have demonstrated outstanding performance in various fields. However, current studies mostly rely on traditional methods and have not explored the application of LLMs in the field of FMD. The main reason is the lack of FMD instruction tuning datasets and evaluation benchmarks. In this paper, we propose FMDLlama, the first open-sourced instruction-following LLMs for the FMD task, based on fine-tuning Llama3.1 with instruction data; the first multi-task FMD instruction dataset (FMDID) to support LLM instruction tuning; and a comprehensive FMD evaluation benchmark (FMD-B) with classification and explanation generation tasks to test the FMD ability of LLMs. We compare our models with a variety of LLMs on FMD-B, where our model outperforms all other open-sourced LLMs as well as ChatGPT.<|reference_end|>
|
arxiv
|
@article{liu2024fmdllama:,
title={FMDLlama: Financial Misinformation Detection based on Large Language
Models},
author={Zhiwei Liu, Xin Zhang, Kailai Yang, Qianqian Xie, Jimin Huang, Sophia
Ananiadou},
journal={arXiv preprint arXiv:2409.16452},
year={2024},
archivePrefix={arXiv},
eprint={2409.16452},
primaryClass={cs.CL}
}
|
liu2024fmdllama:
|
arxiv-661557
|
2409.16453
|
Extending Mercer's expansion to indefinite and asymmetric kernels
|
<|reference_start|>Extending Mercer's expansion to indefinite and asymmetric kernels: Mercer's expansion and Mercer's theorem are cornerstone results in kernel theory. While the classical Mercer's theorem only considers continuous symmetric positive definite kernels, analogous expansions are effective in practice for indefinite and asymmetric kernels. In this paper we extend Mercer's expansion to continuous kernels, providing a rigorous theoretical underpinning for indefinite and asymmetric kernels. We begin by demonstrating that Mercer's expansion may not be pointwise convergent for continuous indefinite kernels, before proving that the expansion of continuous kernels whose variation is uniformly bounded in each variable separately converges pointwise almost everywhere, almost uniformly, and unconditionally almost everywhere. We also describe an algorithm for computing Mercer's expansion for general kernels and give new decay bounds on its terms.<|reference_end|>
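For reference, the expansion in question can be written as follows; this is a sketch of the general singular-system form, which for symmetric positive definite kernels reduces to the classical Mercer series.

```latex
% Mercer-type expansion of a continuous, possibly asymmetric kernel K on
% [a,b]^2, in terms of the singular system of its integral operator
% (\mathcal{K}f)(x) = \int_a^b K(x,y) f(y)\,dy:
\[
  K(x, y) \;=\; \sum_{n=1}^{\infty} \sigma_n\, \phi_n(x)\, \psi_n(y),
\]
% with singular values \sigma_1 \ge \sigma_2 \ge \cdots \ge 0 and orthonormal
% left/right singular functions \{\phi_n\}, \{\psi_n\}. For a continuous
% symmetric positive definite kernel, \psi_n = \phi_n and the \sigma_n are the
% eigenvalues, recovering the classical Mercer expansion.
```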
|
arxiv
|
@article{jeong2024extending,
title={Extending Mercer's expansion to indefinite and asymmetric kernels},
author={Sungwoo Jeong, Alex Townsend},
journal={arXiv preprint arXiv:2409.16453},
year={2024},
archivePrefix={arXiv},
eprint={2409.16453},
primaryClass={math.NA cs.NA math.FA}
}
|
jeong2024extending
|
arxiv-661558
|
2409.16455
|
MultiTalk: Introspective and Extrospective Dialogue for Human-Environment-LLM Alignment
|
<|reference_start|>MultiTalk: Introspective and Extrospective Dialogue for Human-Environment-LLM Alignment: LLMs have shown promising results in task planning due to their strong natural language understanding and reasoning capabilities. However, issues such as hallucinations, ambiguities in human instructions, environmental constraints, and limitations in the executing agent's capabilities often lead to flawed or incomplete plans. This paper proposes MultiTalk, an LLM-based task planning methodology that addresses these issues through a framework of introspective and extrospective dialogue loops. This approach helps ground generated plans in the context of the environment and the agent's capabilities, while also resolving uncertainties and ambiguities in the given task. These loops are enabled by specialized systems designed to extract and predict task-specific states, and flag mismatches or misalignments among the human user, the LLM agent, and the environment. Effective feedback pathways between these systems and the LLM planner foster meaningful dialogue. The efficacy of this methodology is demonstrated through its application to robotic manipulation tasks. Experiments and ablations highlight the robustness and reliability of our method, and comparisons with baselines further illustrate the superiority of MultiTalk in task planning for embodied agents.<|reference_end|>
|
arxiv
|
@article{devarakonda2024multitalk:,
title={MultiTalk: Introspective and Extrospective Dialogue for
Human-Environment-LLM Alignment},
author={Venkata Naren Devarakonda, Ali Umut Kaypak, Shuaihang Yuan, Prashanth
Krishnamurthy, Yi Fang, Farshad Khorrami},
journal={arXiv preprint arXiv:2409.16455},
year={2024},
archivePrefix={arXiv},
eprint={2409.16455},
primaryClass={cs.RO}
}
|
devarakonda2024multitalk:
|
arxiv-661559
|
2409.16456
|
Communication and Energy Efficient Federated Learning using Zero-Order Optimization Technique
|
<|reference_start|>Communication and Energy Efficient Federated Learning using Zero-Order Optimization Technique: Federated learning (FL) is a popular machine learning technique that enables multiple users to collaboratively train a model while maintaining the user data privacy. A significant challenge in FL is the communication bottleneck in the upload direction, and thus the corresponding energy consumption of the devices, attributed to the increasing size of the model/gradient. In this paper, we address this issue by proposing a zero-order (ZO) optimization method that requires the upload of a quantized single scalar per iteration by each device instead of the whole gradient vector. We prove its theoretical convergence and find an upper bound on its convergence rate in the non-convex setting, and we discuss its implementation in practical scenarios. Our FL method and the corresponding convergence analysis take into account the impact of quantization and packet dropping due to wireless errors. We also show the superiority of our method, in terms of communication overhead and energy consumption, compared to standard gradient-based FL methods.<|reference_end|>
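A minimal NumPy sketch of the single-scalar upload at the heart of such a scheme; quantization, wireless errors, and multiple devices are omitted, and the loss, seed-sharing mechanism, and step size are illustrative assumptions.

```python
import numpy as np

# Two-point zero-order estimate: the device uploads one scalar directional
# derivative per iteration; the server reconstructs the update because both
# sides draw the random probe u from a shared seed.

def f(theta):                                  # local loss on the device
    return 0.5 * np.sum((theta - 3.0) ** 2)

theta, mu, lr = np.zeros(5), 1e-4, 0.1
for k in range(300):
    rng = np.random.default_rng(k)             # shared seed: server knows u too
    u = rng.normal(size=5)
    scalar = (f(theta + mu * u) - f(theta - mu * u)) / (2 * mu)  # uploaded value
    theta = theta - lr * scalar * u            # server-side update, g = scalar*u
print(np.round(theta, 2))                      # -> approx. [3. 3. 3. 3. 3.]
```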
|
arxiv
|
@article{mhanna2024communication,
title={Communication and Energy Efficient Federated Learning using Zero-Order
Optimization Technique},
author={Elissa Mhanna and Mohamad Assaad},
journal={arXiv preprint arXiv:2409.16456},
year={2024},
archivePrefix={arXiv},
eprint={2409.16456},
primaryClass={cs.LG cs.DC}
}
|
mhanna2024communication
|
arxiv-661560
|
2409.16458
|
Parameter Estimation for the Reduced Fracture Model via a Direct Filter Method
|
<|reference_start|>Parameter Estimation for the Reduced Fracture Model via a Direct Filter Method: In this work, we present a numerical method that provides accurate real-time detection of fracture widths in a fractured porous medium, based on observational data on porous medium fluid mass and velocity. To achieve this task, an inverse problem is formulated by first constructing a forward formulation based on the reduced fracture model of the diffusion equation. A parameter estimation problem is then solved online by utilizing a direct filter method. Numerical experiments are carried out to demonstrate the accuracy of our method in approximating the target parameters.<|reference_end|>
|
arxiv
|
@article{huynh2024parameter,
title={Parameter Estimation for the Reduced Fracture Model via a Direct Filter
Method},
author={Phuoc Toan Huynh and Feng Bao and Thi-Thao-Phuong Hoang},
journal={arXiv preprint arXiv:2409.16458},
year={2024},
archivePrefix={arXiv},
eprint={2409.16458},
primaryClass={math.NA cs.NA math.PR}
}
|
huynh2024parameter
|
arxiv-661561
|
2409.16460
|
MBC: Multi-Brain Collaborative Control for Quadruped Robots
|
<|reference_start|>MBC: Multi-Brain Collaborative Control for Quadruped Robots: In the field of locomotion tasks for quadruped robots, the Blind Policy and the Perceptive Policy each have their own advantages and limitations. The Blind Policy relies on preset sensor information and algorithms, suitable for known and structured environments, but it lacks adaptability in complex or unknown environments. The Perceptive Policy uses visual sensors to obtain detailed environmental information, allowing it to adapt to complex terrains, but its effectiveness is limited under occluded conditions, especially when perception fails, making it less robust than the Blind Policy in such cases. To address these challenges, we propose MBC, a Multi-Brain Collaborative system that incorporates concepts from Multi-Agent Reinforcement Learning and introduces collaboration between the Blind Policy and the Perceptive Policy. By applying this multi-policy collaborative model to a quadruped robot, the robot can maintain stable locomotion even when the perceptual system is impaired or observational data is incomplete. Our simulations and real-world experiments demonstrate that this system significantly improves the robot's passability and robustness against perception failures in complex environments, validating the effectiveness of multi-policy collaboration in enhancing robotic motion performance.<|reference_end|>
|
arxiv
|
@article{liu2024mbc:,
title={MBC: Multi-Brain Collaborative Control for Quadruped Robots},
author={Hang Liu, Yi Cheng, Rankun Li, Xiaowen Hu, Linqi Ye, Houde Liu},
journal={arXiv preprint arXiv:2409.16460},
year={2024},
archivePrefix={arXiv},
eprint={2409.16460},
primaryClass={cs.RO cs.SY eess.SY}
}
|
liu2024mbc:
|
arxiv-661562
|
2409.16461
|
Strategies for Improving NL-to-FOL Translation with LLMs: Data Generation, Incremental Fine-Tuning, and Verification
|
<|reference_start|>Strategies for Improving NL-to-FOL Translation with LLMs: Data Generation, Incremental Fine-Tuning, and Verification: Logical reasoning is a fundamental task in natural language processing that presents significant challenges to Large Language Models (LLMs). The inherent characteristics of logical reasoning make it well-suited for symbolic representations such as first-order logic (FOL). Research in symbolic logical reasoning has explored FOL generation using state-of-the-art LLMs (i.e., GPT-4) to produce FOL translations of natural language (NL) statements, but errors in translation are usually not the focus. We address this by categorizing the translation errors in FOL statements generated by LLMs. To make progress towards improving the quality of FOL translations for smaller language models such as LLaMA-2 13B and Mistral 7B, we create ProofFOL, a high-quality FOL-annotated subset of the ProofWriter dataset using GPT-4o. The models fine-tuned on this silver-standard data achieve a significant gain in performance when compared to larger language models such as LLaMA-2 70B. In addition to improving the model using large data, we also tackle the issue of data scarcity and introduce an incremental framework encompassing data augmentation and verification steps. In the augmentation process, a single pair of (premises, conclusion) is split into multiple new instances based on the predicates and FOLs. This data is used for fine-tuning, and inference with the resulting model generates FOLs with fewer errors than the model trained on the original data. Our investigation of the translation errors leads to the generation of a perturbation dataset, which is used to train a verifier that corrects potential syntactic and semantic FOL translation errors. We demonstrate an efficient method for making the most of a limited existing human-annotated dataset. Our results show state-of-the-art performance for the ProofWriter and ProntoQA datasets using ProofFOL on LLaMA-2 and Mistral models.<|reference_end|>
|
arxiv
|
@article{thatikonda2024strategies,
title={Strategies for Improving NL-to-FOL Translation with LLMs: Data
Generation, Incremental Fine-Tuning, and Verification},
author={Ramya Keerthy Thatikonda, Jiuzhou Han, Wray Buntine, Ehsan Shareghi},
journal={arXiv preprint arXiv:2409.16461},
year={2024},
archivePrefix={arXiv},
eprint={2409.16461},
primaryClass={cs.CL}
}
|
thatikonda2024strategies
|
arxiv-661563
|
2409.16465
|
Initialization of Monocular Visual Navigation for Autonomous Agents Using Modified Structure from Small Motion
|
<|reference_start|>Initialization of Monocular Visual Navigation for Autonomous Agents Using Modified Structure from Small Motion: We propose a standalone monocular visual Simultaneous Localization and Mapping (vSLAM) initialization pipeline for autonomous space robots. Our method, a state-of-the-art factor graph optimization pipeline, extends Structure from Small Motion (SfSM) to robustly initialize a monocular agent in spacecraft inspection trajectories, addressing visual estimation challenges such as weak-perspective projection and center-pointing motion, which exacerbate the bas-relief ambiguity; dominant planar geometry, which causes motion estimation degeneracies in classical Structure from Motion; and dynamic illumination conditions, which reduce the survivability of visual information. We validate our approach on realistic, simulated satellite inspection image sequences with a tumbling spacecraft and demonstrate the method's effectiveness over existing monocular initialization procedures.<|reference_end|>
|
arxiv
|
@article{florez2024initialization,
title={Initialization of Monocular Visual Navigation for Autonomous Agents
Using Modified Structure from Small Motion},
author={Juan-Diego Florez, Mehregan Dor, Panagiotis Tsiotras},
journal={arXiv preprint arXiv:2409.16465},
year={2024},
archivePrefix={arXiv},
eprint={2409.16465},
primaryClass={cs.RO cs.CV}
}
|
florez2024initialization
|
arxiv-661564
|
2409.16467
|
Learning Dynamics of a Ball with Differentiable Factor Graph and Roto-Translational Invariant Representations
|
<|reference_start|>Learning Dynamics of a Ball with Differentiable Factor Graph and Roto-Translational Invariant Representations: Robots in dynamic environments need fast, accurate models of how objects move in their environments to support agile planning. In sports such as ping pong, analytical models often struggle to accurately predict ball trajectories with spins due to complex aerodynamics, elastic behaviors, and the challenges of modeling sliding and rolling friction. On the other hand, despite the promise of data-driven methods, machine learning struggles to make accurate, consistent predictions without precise input. In this paper, we propose an end-to-end learning framework that can jointly train a dynamics model and a factor graph estimator. Our approach leverages a Gram-Schmidt (GS) process to extract roto-translational invariant representations to improve the model performance, which can further reduce the validation error compared to a data augmentation method. Additionally, we propose a network architecture that enhances nonlinearity by using self-multiplicative bypasses in the layer connections. By leveraging these novel methods, our proposed approach predicts the ball's position with an RMSE of 37.2 mm of the paddle radius at the apex after the first bounce, and 71.5 mm after the second bounce.<|reference_end|>
|
arxiv
|
@article{xiao2024learning,
title={Learning Dynamics of a Ball with Differentiable Factor Graph and
Roto-Translational Invariant Representations},
author={Qingyu Xiao, Zixuan Wu and Matthew Gombolay},
journal={arXiv preprint arXiv:2409.16467},
year={2024},
archivePrefix={arXiv},
eprint={2409.16467},
primaryClass={cs.RO}
}
|
xiao2024learning
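The Gram-Schmidt construction of roto-translational invariant representations can be illustrated directly. In this sketch the two input vectors are taken to be the ball's velocity and spin (an assumption; the paper's feature layout may differ), and the invariance check covers proper rotations.

```python
# Minimal sketch: Gram-Schmidt (GS) frame from two 3D vectors, and
# rotation-invariant coordinates of the state in that frame.
import numpy as np

def gram_schmidt_frame(v, w):
    """Orthonormal right-handed frame built from v and w via GS."""
    e1 = v / np.linalg.norm(v)
    u2 = w - (w @ e1) * e1            # strip the component along e1
    e2 = u2 / np.linalg.norm(u2)
    e3 = np.cross(e1, e2)             # right-handed completion
    return np.stack([e1, e2, e3])     # rows form a rotation matrix

def invariant_features(v, w):
    """Coordinates of (v, w) in their own GS frame: rotation-invariant."""
    R = gram_schmidt_frame(v, w)
    return np.concatenate([R @ v, R @ w])

# Check: rotating both vectors by the same proper rotation leaves the
# features unchanged.
rng = np.random.default_rng(0)
v, w = rng.standard_normal(3), rng.standard_normal(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:              # ensure a proper rotation (det = +1)
    Q[:, 0] *= -1
assert np.allclose(invariant_features(v, w), invariant_features(Q @ v, Q @ w))
```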
|
arxiv-661565
|
2409.16469
|
Spelling Correction through Rewriting of Non-Autoregressive ASR Lattices
|
<|reference_start|>Spelling Correction through Rewriting of Non-Autoregressive ASR Lattices: For end-to-end Automatic Speech Recognition (ASR) models, recognizing personal or rare phrases can be hard. A promising way to improve accuracy is through spelling correction (or rewriting) of the ASR lattice, where potentially misrecognized phrases are replaced with acoustically similar and contextually relevant alternatives. However, rewriting is challenging for ASR models trained with connectionist temporal classification (CTC) due to noisy hypotheses produced by a non-autoregressive, context-independent beam search. We present a finite-state transducer (FST) technique for rewriting wordpiece lattices generated by Transformer-based CTC models. Our algorithm performs grapheme-to-phoneme (G2P) conversion directly from wordpieces into phonemes, avoiding explicit word representations and exploiting the richness of the CTC lattice. Our approach requires no retraining or modification of the ASR model. We achieved up to a 15.2% relative reduction in sentence error rate (SER) on a test set with contextually relevant entities.<|reference_end|>
|
arxiv
|
@article{velikovich2024spelling,
title={Spelling Correction through Rewriting of Non-Autoregressive ASR Lattices},
author={Leonid Velikovich, Christopher Li, Diamantino Caseiro, Shankar Kumar,
Pat Rondon, Kandarp Joshi, Xavier Velez},
journal={arXiv preprint arXiv:2409.16469},
year={2024},
archivePrefix={arXiv},
eprint={2409.16469},
primaryClass={cs.CL cs.SD eess.AS}
}
|
velikovich2024spelling
|
arxiv-661566
|
2409.16470
|
Frequency-based View Selection in Gaussian Splatting Reconstruction
|
<|reference_start|>Frequency-based View Selection in Gaussian Splatting Reconstruction: Three-dimensional reconstruction is a fundamental problem in robotics perception. We examine the problem of active view selection to perform 3D Gaussian Splatting reconstructions with as few input images as possible. Although 3D Gaussian Splatting has made significant progress in image rendering and 3D reconstruction, the quality of the reconstruction is strongly impacted by the selection of 2D images and the estimation of camera poses through Structure-from-Motion (SfM) algorithms. Current view selection methods that rely directly on uncertainties from occlusions, depth ambiguities, or neural network predictions are insufficient to handle the issue and struggle to generalize to new scenes. By ranking the potential views in the frequency domain, we are able to effectively estimate the potential information gain of new viewpoints without ground truth data. By overcoming current constraints on model architecture and efficacy, our method achieves state-of-the-art results in view selection, demonstrating its potential for efficient image-based 3D reconstruction.<|reference_end|>
|
arxiv
|
@article{li2024frequency-based,
title={Frequency-based View Selection in Gaussian Splatting Reconstruction},
author={Monica M.Q. Li, Pierre-Yves Lajoie, and Giovanni Beltrame},
journal={arXiv preprint arXiv:2409.16470},
year={2024},
archivePrefix={arXiv},
eprint={2409.16470},
primaryClass={cs.CV cs.RO}
}
|
li2024frequency-based
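A frequency-domain view-ranking step like the one described can be sketched in a few lines. The specific score (log-magnitude of high-frequency FFT coefficients above a radial cutoff) is an illustrative assumption; the paper's information-gain estimate is likely more elaborate.

```python
# Minimal sketch of ranking candidate views by high-frequency spectral
# content. Scoring rule and cutoff are assumptions, not the paper's.
import numpy as np

def highfreq_score(img, cutoff=0.25):
    """Score a grayscale view by its high-frequency spectral energy."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    mask = radius > cutoff                    # keep only high frequencies
    return np.log1p(np.abs(F[mask])).mean()

def select_next_view(candidate_views):
    """Pick the candidate whose rendering carries the most detail."""
    scores = [highfreq_score(v) for v in candidate_views]
    return int(np.argmax(scores))

# Usage: a detail-rich view outranks a smooth one.
rng = np.random.default_rng(0)
smooth = np.ones((64, 64))
detailed = rng.standard_normal((64, 64))
print(select_next_view([smooth, detailed]))   # -> 1
```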
|
arxiv-661567
|
2409.16471
|
Score-based Neural Ordinary Differential Equations for Computing Mean Field Control Problems
|
<|reference_start|>Score-based Neural Ordinary Differential Equations for Computing Mean Field Control Problems: Classical neural ordinary differential equations (ODEs) are powerful tools for approximating the log-density functions in high-dimensional spaces along trajectories, where neural networks parameterize the velocity fields. This paper proposes a system of neural differential equations representing first- and second-order score functions along trajectories based on deep neural networks. We reformulate the mean field control (MFC) problem with individual noises into an unconstrained optimization problem framed by the proposed neural ODE system. Additionally, we introduce a novel regularization term to enforce characteristics of viscous Hamilton--Jacobi--Bellman (HJB) equations to be satisfied based on the evolution of the second-order score function. Examples include regularized Wasserstein proximal operators (RWPOs), probability flow matching of Fokker--Planck (FP) equations, and linear quadratic (LQ) MFC problems, which demonstrate the effectiveness and accuracy of the proposed method.<|reference_end|>
|
arxiv
|
@article{zhou2024score-based,
title={Score-based Neural Ordinary Differential Equations for Computing Mean
Field Control Problems},
author={Mo Zhou, Stanley Osher, Wuchen Li},
journal={arXiv preprint arXiv:2409.16471},
year={2024},
archivePrefix={arXiv},
eprint={2409.16471},
primaryClass={math.OC cs.LG}
}
|
zhou2024score-based
|
arxiv-661568
|
2409.16472
|
Sub-Nyquist USF Spectral Estimation: $K$ Frequencies with $6K + 4$ Modulo Samples
|
<|reference_start|>Sub-Nyquist USF Spectral Estimation: $K$ Frequencies with $6K + 4$ Modulo Samples: Digital acquisition of high bandwidth signals is particularly challenging when Nyquist rate sampling is impractical. This has led to extensive research in sub-Nyquist sampling methods, primarily for spectral and sinusoidal frequency estimation. However, these methods struggle with high-dynamic-range (HDR) signals that can saturate analog-to-digital converters (ADCs). Addressing this, we introduce a novel sub-Nyquist spectral estimation method, driven by the Unlimited Sensing Framework (USF), utilizing a multi-channel system. The sub-Nyquist USF method aliases samples in both amplitude and frequency domains, rendering the inverse problem particularly challenging. Towards this goal, our exact recovery theorem establishes that $K$ sinusoids of arbitrary amplitudes and frequencies can be recovered from $6K + 4$ modulo samples, remarkably, independent of the sampling rate or folding threshold. In the true spirit of sub-Nyquist sampling, via modulo ADC hardware experiments, we demonstrate successful spectrum estimation of HDR signals in the kHz range using Hz range sampling rates (0.078\% Nyquist rate). Our experiments also reveal up to a 33-fold improvement in frequency estimation accuracy using one less bit compared to conventional ADCs. These findings open new avenues in spectral estimation applications, e.g., radars, direction-of-arrival (DoA) estimation, and cognitive radio, showcasing the potential of USF.<|reference_end|>
|
arxiv
|
@article{guo2024sub-nyquist,
title={Sub-Nyquist USF Spectral Estimation: $K$ Frequencies with $6K + 4$
Modulo Samples},
author={Ruiming Guo, Yuliang Zhu and Ayush Bhandari},
journal={arXiv preprint arXiv:2409.16472},
year={2024},
archivePrefix={arXiv},
eprint={2409.16472},
primaryClass={cs.IT eess.SP math.IT}
}
|
guo2024sub-nyquist
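The folding nonlinearity that underlies the Unlimited Sensing Framework is simple to write down, and worth seeing next to the abstract. The sketch below only implements the centered modulo operation; the recovery of K frequencies from 6K + 4 folded samples is the paper's contribution and is not reproduced.

```python
# Minimal sketch of the modulo (folding) nonlinearity in Unlimited Sensing:
# samples are folded into [-lam, lam) so HDR sinusoids never saturate.
import numpy as np

def modulo_fold(x, lam=1.0):
    """Centered modulo: fold x into [-lam, lam)."""
    return np.mod(x + lam, 2 * lam) - lam

# Usage: a K = 2 sinusoidal mixture with amplitude far above the threshold.
t = np.arange(64) / 64.0
x = 7.3 * np.sin(2 * np.pi * 3 * t) + 4.1 * np.sin(2 * np.pi * 8 * t)
y = modulo_fold(x, lam=1.0)          # all folded samples lie in [-1, 1)
assert np.all((y >= -1.0) & (y < 1.0))
```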
|
arxiv-661569
|
2409.16473
|
KinScene: Model-Based Mobile Manipulation of Articulated Scenes
|
<|reference_start|>KinScene: Model-Based Mobile Manipulation of Articulated Scenes: Sequentially interacting with articulated objects is crucial for a mobile manipulator to operate effectively in everyday environments. To enable long-horizon tasks involving articulated objects, this study explores building scene-level articulation models for indoor scenes through autonomous exploration. While previous research has studied mobile manipulation with articulated objects by considering object kinematic constraints, it primarily focuses on individual-object scenarios and lacks extension to a scene-level context for task-level planning. To manipulate multiple object parts sequentially, the robot needs to reason about the resultant motion of each part and anticipate its impact on future actions. We introduce KinScene, a full-stack approach for long-horizon manipulation tasks with articulated objects. The robot maps the scene, detects and physically interacts with articulated objects, collects observations, and infers the articulation properties. For sequential tasks, the robot plans a feasible series of object interactions based on the inferred articulation model. We demonstrate that our approach repeatably constructs accurate scene-level kinematic and geometric models, enabling long-horizon mobile manipulation in a real-world scene. Code and additional results are available at https://chengchunhsu.github.io/KinScene/<|reference_end|>
|
arxiv
|
@article{hsu2024kinscene:,
title={KinScene: Model-Based Mobile Manipulation of Articulated Scenes},
author={Cheng-Chun Hsu, Ben Abbatematteo, Zhenyu Jiang, Yuke Zhu, Roberto
Martín-Martín, Joydeep Biswas},
journal={arXiv preprint arXiv:2409.16473},
year={2024},
archivePrefix={arXiv},
eprint={2409.16473},
primaryClass={cs.RO}
}
|
hsu2024kinscene:
|
arxiv-661570
|
2409.16475
|
Interaction Techniques for User-friendly Interfaces for Gate-based Quantum Computing
|
<|reference_start|>Interaction Techniques for User-friendly Interfaces for Gate-based Quantum Computing: Quantum computers offer promising approaches to various fields. To use current noisy quantum computers, developers need to examine the compilation of a logical circuit, the status of available hardware, and noises in results. As those tasks are less common in classical computing, quantum developers may not be familiar with performing them. Therefore, easier and more intuitive interfaces are necessary to make quantum computers more approachable. While existing notebook-based toolkits like Qiskit offer application programming interfaces and visualization techniques, it is still difficult to navigate the vast space of quantum program design and hardware status. Inspired by human-computer interaction (HCI) work in data science and visualization, our work introduces four user interaction techniques that can augment existing notebook-based toolkits for gate-based quantum computing: (1) a circuit writer that lets users provide high-level information about a circuit and generates a code snippet to build it; (2) a machine explorer that provides detailed properties and configurations of a hardware with a code to load selected information; (3) a circuit viewer that allows for comparing logical circuit, compiled circuit, and hardware configurations; and (4) a visualization for adjusting measurement outcomes with hardware error rates.<|reference_end|>
|
arxiv
|
@article{kim2024interaction,
title={Interaction Techniques for User-friendly Interfaces for Gate-based
Quantum Computing},
author={Hyeok Kim, Kaitlin N. Smith},
journal={arXiv preprint arXiv:2409.16475},
year={2024},
archivePrefix={arXiv},
eprint={2409.16475},
primaryClass={cs.HC cs.SY eess.SY}
}
|
kim2024interaction
|
arxiv-661571
|
2409.16477
|
Convergence analysis of iterative solution with inexact block preconditioning for weak Galerkin finite element approximation of Stokes flow
|
<|reference_start|>Convergence analysis of iterative solution with inexact block preconditioning for weak Galerkin finite element approximation of Stokes flow: This work is concerned with the convergence of the iterative solution for the Stokes flow, discretized with the weak Galerkin finite element method and preconditioned using inexact block Schur complement preconditioning. The resulting saddle point linear system is singular and the pressure solution is not unique. The system is regularized with a commonly used strategy by specifying the pressure value at a specific location. It is analytically shown that the regularized system is nonsingular but has an eigenvalue approaching zero as the fluid kinematic viscosity tends to zero. Inexact block diagonal and triangular Schur complement preconditioners are considered with the minimal residual method (MINRES) and the generalized minimal residual method (GMRES), respectively. For both cases, the bounds are obtained for the eigenvalues of the preconditioned systems and for the residual of MINRES/GMRES. These bounds show that the convergence factor of MINRES/GMRES is almost independent of the viscosity parameter and mesh size while the number of MINRES/GMRES iterations required to reach convergence depends on the parameters only logarithmically. The theoretical findings and effectiveness of the preconditioners are verified with two- and three-dimensional numerical examples.<|reference_end|>
|
arxiv
|
@article{huang2024convergence,
title={Convergence analysis of iterative solution with inexact block
preconditioning for weak Galerkin finite element approximation of Stokes flow},
author={Weizhang Huang and Zhuoran Wang},
journal={arXiv preprint arXiv:2409.16477},
year={2024},
archivePrefix={arXiv},
eprint={2409.16477},
primaryClass={math.NA cs.NA}
}
|
huang2024convergence
|
arxiv-661572
|
2409.16478
|
Algorithmic Drift: A Simulation Framework to Study the Effects of Recommender Systems on User Preferences
|
<|reference_start|>Algorithmic Drift: A Simulation Framework to Study the Effects of Recommender Systems on User Preferences: Digital platforms such as social media and e-commerce websites adopt Recommender Systems to provide value to the user. However, the social consequences deriving from their adoption are still unclear. Many scholars argue that recommenders may lead to detrimental effects, such as bias-amplification deriving from the feedback loop between algorithmic suggestions and users' choices. Nonetheless, the extent to which recommenders influence changes in users' leanings remains uncertain. In this context, it is important to provide a controlled environment for evaluating the recommendation algorithm before deployment. To address this, we propose a stochastic simulation framework that mimics user-recommender system interactions in a long-term scenario. In particular, we simulate the user choices by formalizing a user model, which comprises behavioral aspects, such as the user resistance towards the recommendation algorithm and their inertia in relying on the received suggestions. Additionally, we introduce two novel metrics for quantifying the algorithm's impact on user preferences, specifically in terms of drift over time. We conduct an extensive evaluation on multiple synthetic datasets, aiming at testing the robustness of our framework when considering different scenarios and hyper-parameter settings. The experimental results prove that the proposed methodology is effective in detecting and quantifying the drift over the users' preferences by means of the simulation. All the code and data used to perform the experiments are publicly available.<|reference_end|>
|
arxiv
|
@article{coppolillo2024algorithmic,
title={Algorithmic Drift: A Simulation Framework to Study the Effects of
Recommender Systems on User Preferences},
author={Erica Coppolillo, Simone Mungari, Ettore Ritacco, Francesco Fabbri,
Marco Minici, Francesco Bonchi and Giuseppe Manco},
journal={arXiv preprint arXiv:2409.16478},
year={2024},
archivePrefix={arXiv},
eprint={2409.16478},
primaryClass={cs.IR cs.AI cs.SI}
}
|
coppolillo2024algorithmic
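The user model in the abstract (resistance to the recommender, inertia in accepting suggestions) suggests a compact simulation loop. The update rule and the L1 drift metric below are illustrative assumptions; the paper defines its own two drift metrics.

```python
# Minimal sketch of a user-recommender interaction loop with resistance
# and inertia, measuring preference drift. Update rule is an assumption.
import numpy as np

def simulate_drift(n_items=50, steps=2000, resistance=0.3, inertia=0.9,
                   lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    pref = rng.dirichlet(np.ones(n_items))     # user preference distribution
    pref0 = pref.copy()
    popularity = rng.dirichlet(np.ones(n_items))
    for _ in range(steps):
        rec = rng.choice(n_items, p=popularity)        # recommender suggestion
        if rng.random() < resistance or rng.random() > inertia:
            choice = rng.choice(n_items, p=pref)       # user ignores the system
        else:
            choice = rec                               # user follows suggestion
        pref = (1 - lr) * pref                         # preferences shift toward
        pref[choice] += lr                             # whatever was consumed
    return np.linalg.norm(pref - pref0, ord=1)         # drift over time

print("preference drift (L1):", simulate_drift())
```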
|
arxiv-661573
|
2409.16480
|
Exploring Performance Trade-offs in JHipster
|
<|reference_start|>Exploring Performance Trade-offs in JHipster: The performance of software systems remains a persistent concern in the field of software engineering. While traditional metrics like binary size and execution time have long been focal points for developers, the power consumption concern has gained significant attention, adding a layer of complexity to performance evaluation. Configurable software systems, with their potential for numerous configurations, further complicate this evaluation process. In this experience paper, we examine the impact of configurations on performance, specifically focusing on the web stack generator JHipster. Our goal is to understand how configuration choices within JHipster influence the performance of the generated system. We undertake an exhaustive analysis of JHipster by examining its configurations and their effects on system performance. Additionally, we explore individual configuration options to gauge their specific influence on performance. Through this process, we develop a comprehensive performance model for JHipster, enabling us to automate the identification of configurations that optimize specific performance metrics. In particular, we identify configurations that demonstrate near-optimal performance across multiple indicators and report on significant correlations between configuration choices within JHipster and the performance of generated systems.<|reference_end|>
|
arxiv
|
@article{guégain2024exploring,
title={Exploring Performance Trade-offs in JHipster},
author={Edouard Guégain, Alexandre Bonvoisin, Clément Quinton, Mathieu
Acher and Romain Rouvoy},
journal={arXiv preprint arXiv:2409.16480},
year={2024},
archivePrefix={arXiv},
eprint={2409.16480},
primaryClass={cs.SE}
}
|
guégain2024exploring
|
arxiv-661574
|
2409.16482
|
Generative AI-driven forecasting of oil production
|
<|reference_start|>Generative AI-driven forecasting of oil production: Forecasting oil production from oilfields with multiple wells is an important problem in petroleum and geothermal energy extraction, as well as energy storage technologies. The accuracy of oil forecasts is a critical determinant of economic projections, hydrocarbon reserves estimation, construction of fluid processing facilities, and energy price fluctuations. Leveraging generative AI techniques, we model time series forecasting of oil and water productions across four multi-well sites spanning four decades. Our goal is to effectively model uncertainties and make precise forecasts to inform decision-making processes at the field scale. We utilize an autoregressive model known as TimeGrad and a variant of a transformer architecture named Informer, tailored specifically for forecasting long sequence time series data. Predictions from both TimeGrad and Informer closely align with the ground truth data. The overall performance of the Informer stands out, demonstrating greater efficiency compared to TimeGrad in forecasting oil production rates across all sites.<|reference_end|>
|
arxiv
|
@article{gandhi2024generative,
title={Generative AI-driven forecasting of oil production},
author={Yash Gandhi, Kexin Zheng, Birendra Jha, Ken-ichi Nomura, Aiichiro
Nakano, Priya Vashishta, Rajiv K. Kalia},
journal={arXiv preprint arXiv:2409.16482},
year={2024},
archivePrefix={arXiv},
eprint={2409.16482},
primaryClass={cs.LG}
}
|
gandhi2024generative
|
arxiv-661575
|
2409.16484
|
BehAV: Behavioral Rule Guided Autonomy Using VLMs for Robot Navigation in Outdoor Scenes
|
<|reference_start|>BehAV: Behavioral Rule Guided Autonomy Using VLMs for Robot Navigation in Outdoor Scenes: We present BehAV, a novel approach for autonomous robot navigation in outdoor scenes guided by human instructions and leveraging Vision Language Models (VLMs). Our method interprets human commands using a Large Language Model (LLM) and categorizes the instructions into navigation and behavioral guidelines. Navigation guidelines consist of directional commands (e.g., "move forward until") and associated landmarks (e.g., "the building with blue windows"), while behavioral guidelines encompass regulatory actions (e.g., "stay on") and their corresponding objects (e.g., "pavements"). We use VLMs for their zero-shot scene understanding capabilities to estimate landmark locations from RGB images for robot navigation. Further, we introduce a novel scene representation that utilizes VLMs to ground behavioral rules into a behavioral cost map. This cost map encodes the presence of behavioral objects within the scene and assigns costs based on their regulatory actions. The behavioral cost map is integrated with a LiDAR-based occupancy map for navigation. To navigate outdoor scenes while adhering to the instructed behaviors, we present an unconstrained Model Predictive Control (MPC)-based planner that prioritizes both reaching landmarks and following behavioral guidelines. We evaluate the performance of BehAV on a quadruped robot across diverse real-world scenarios, demonstrating a 22.49% improvement in alignment with human-teleoperated actions, as measured by Frechet distance, and achieving a 40% higher navigation success rate compared to state-of-the-art methods.<|reference_end|>
|
arxiv
|
@article{weerakoon2024behav:,
title={BehAV: Behavioral Rule Guided Autonomy Using VLMs for Robot Navigation
in Outdoor Scenes},
author={Kasun Weerakoon, Mohamed Elnoor, Gershom Seneviratne, Vignesh
Rajagopal, Senthil Hariharan Arul, Jing Liang, Mohamed Khalid M Jaffar,
Dinesh Manocha},
journal={arXiv preprint arXiv:2409.16484},
year={2024},
archivePrefix={arXiv},
eprint={2409.16484},
primaryClass={cs.RO}
}
|
weerakoon2024behav:
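The grounding of behavioral rules into a cost map is the most code-like step in the abstract. The sketch below assumes a hypothetical label set, rule costs, and a binary occupancy grid; BehAV's VLM-derived scores and MPC planner are not modeled.

```python
# Minimal sketch of a behavioral cost map: semantic labels per grid cell
# are mapped to traversal costs via regulatory rules, then fused with a
# LiDAR occupancy map. Labels and cost values are illustrative assumptions.
import numpy as np

# Hypothetical rule set parsed from "stay on pavements, avoid grass".
RULE_COSTS = {"pavement": 0.0, "grass": 5.0, "road": 10.0, "unknown": 2.0}

def behavioral_cost_map(semantic_grid):
    """Map a 2D grid of semantic labels to per-cell traversal costs."""
    cost = np.zeros(semantic_grid.shape, dtype=float)
    for label, c in RULE_COSTS.items():
        cost[semantic_grid == label] = c
    return cost

def fuse_with_occupancy(behavior_cost, occupancy, obstacle_cost=1e6):
    """Combine behavioral costs with a LiDAR occupancy map (1 = occupied)."""
    return np.where(occupancy == 1, obstacle_cost, behavior_cost)

# Usage on a tiny 2x3 scene.
grid = np.array([["pavement", "grass", "road"],
                 ["pavement", "pavement", "grass"]])
occ = np.array([[0, 0, 1], [0, 0, 0]])
print(fuse_with_occupancy(behavioral_cost_map(grid), occ))
```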
|
arxiv-661576
|
2409.16486
|
To Explore the Potential Inhibitors against Multitarget Proteins of COVID 19 using In Silico Study
|
<|reference_start|>To Explore the Potential Inhibitors against Multitarget Proteins of COVID 19 using In Silico Study: The global pandemic caused by the emergence of COVID 19 has created an unrivaled public health crisis, with a morbidity rate not seen in recent decades. Researchers have made many efforts to find an optimal solution to this pandemic. Drug repurposing has progressively emerged as a powerful strategy that saves cost, time, and labor. The lack of identified repurposed drug candidates against COVID 19 demands more effort to explore potential inhibitors for an effective cure. In this study, we used a combination of molecular docking and machine learning regression approaches to explore potential inhibitors for the treatment of COVID 19. We calculated the binding affinities of these drugs to multitarget proteins using a molecular docking process. We performed QSAR modeling by employing various machine learning regression approaches to identify potential inhibitors against COVID 19. Our findings, with the best R2 and RMSE scores, demonstrated that our proposed Decision Tree Regression (DTR) model is the most appropriate model for exploring potential inhibitors. We proposed five novel promising inhibitors with their respective Zinc IDs ZINC (3873365, 85432544, 8214470, 85536956, and 261494640) within the range of -19.7 kcal/mol to -12.6 kcal/mol. We further analyzed the physiochemical and pharmacokinetic properties of these most potent inhibitors to examine their behavior. The analysis of these properties is a key factor in promoting an effective cure for public health. Our work constructs an efficient structure with which to probe potential inhibitors against COVID-19 by combining molecular docking with machine learning regression approaches.<|reference_end|>
|
arxiv
|
@article{aqeel2024to,
title={To Explore the Potential Inhibitors against Multitarget Proteins of
COVID 19 using In Silico Study},
author={Imra Aqeel},
journal={arXiv preprint arXiv:2409.16486},
year={2024},
archivePrefix={arXiv},
eprint={2409.16486},
primaryClass={q-bio.QM cs.AI}
}
|
aqeel2024to
|
arxiv-661577
|
2409.16488
|
Diffusion Models to Enhance the Resolution of Microscopy Images: A Tutorial
|
<|reference_start|>Diffusion Models to Enhance the Resolution of Microscopy Images: A Tutorial: Diffusion models have emerged as a prominent technique in generative modeling with neural networks, making their mark in tasks like text-to-image translation and super-resolution. In this tutorial, we provide a comprehensive guide to build denoising diffusion probabilistic models (DDPMs) from scratch, with a specific focus on transforming low-resolution microscopy images into their corresponding high-resolution versions. We provide the theoretical background, mathematical derivations, and a detailed Python code implementation using PyTorch, along with techniques to enhance model performance.<|reference_end|>
|
arxiv
|
@article{bachimanchi2024diffusion,
title={Diffusion Models to Enhance the Resolution of Microscopy Images: A
Tutorial},
author={Harshith Bachimanchi and Giovanni Volpe},
journal={arXiv preprint arXiv:2409.16488},
year={2024},
archivePrefix={arXiv},
eprint={2409.16488},
primaryClass={eess.IV cs.CV cs.LG q-bio.OT}
}
|
bachimanchi2024diffusion
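Since the tutorial builds DDPMs from scratch, the closed-form forward (noising) step is the natural one-screen illustration. Written here in NumPy for self-containment (the tutorial itself uses PyTorch); the linear schedule endpoints are the common defaults, assumed rather than quoted.

```python
# Minimal sketch of the DDPM forward (noising) process:
# x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)            # linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)          # cumulative product alpha_bar_t

def q_sample(x0, t, rng=np.random.default_rng(0)):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps, eps

# Usage: noise a toy "low-resolution image" halfway through the schedule.
x0 = np.zeros((32, 32))
x_t, eps = q_sample(x0, t=T // 2)
print(x_t.std())   # close to sqrt(1 - alpha_bar at t = 500)
```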
|
arxiv-661578
|
2409.16490
|
Exploring Knowledge Tracing in Tutor-Student Dialogues
|
<|reference_start|>Exploring Knowledge Tracing in Tutor-Student Dialogues: Recent advances in large language models (LLMs) have led to the development of artificial intelligence (AI)-powered tutoring chatbots, showing promise in providing broad access to high-quality personalized education. Existing works have primarily studied how to make LLMs follow tutoring principles but not how to model student behavior in dialogues. However, analyzing student dialogue turns can serve as a formative assessment, since open-ended student discourse may indicate their knowledge levels and reveal specific misconceptions. In this work, we present a first attempt at performing knowledge tracing (KT) in tutor-student dialogues. We propose LLM prompting methods to identify the knowledge components/skills involved in each dialogue turn and diagnose whether the student responds correctly to the tutor, and verify the LLM's effectiveness via an expert human evaluation. We then apply a range of KT methods on the resulting labeled data to track student knowledge levels over an entire dialogue. We conduct experiments on two tutoring dialogue datasets, and show that a novel yet simple LLM-based method, LLMKT, significantly outperforms existing KT methods in predicting student response correctness in dialogues. We perform extensive qualitative analyses to highlight the challenges in dialogue KT and outline multiple avenues for future work.<|reference_end|>
|
arxiv
|
@article{scarlatos2024exploring,
title={Exploring Knowledge Tracing in Tutor-Student Dialogues},
author={Alexander Scarlatos and Andrew Lan},
journal={arXiv preprint arXiv:2409.16490},
year={2024},
archivePrefix={arXiv},
eprint={2409.16490},
primaryClass={cs.CL cs.CY cs.LG}
}
|
scarlatos2024exploring
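Among the "range of KT methods" the abstract applies after LLM labeling, Bayesian Knowledge Tracing is the easiest to sketch; LLMKT itself is not reproduced here, and the parameter values are illustrative assumptions.

```python
# Minimal sketch of Bayesian Knowledge Tracing (BKT) over LLM-labeled
# dialogue turns for one skill. Parameter values are assumptions.
def bkt_update(p_know, correct, p_learn=0.1, p_slip=0.1, p_guess=0.2):
    """Posterior over 'student knows the skill' after one labeled turn."""
    if correct:
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    posterior = num / den
    return posterior + (1 - posterior) * p_learn   # learning transition

# Usage: LLM-labeled correctness of a student's turns for one skill.
p = 0.3
for turn_correct in [False, True, True, True]:
    p = bkt_update(p, turn_correct)
    print(round(p, 3))
```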
|
arxiv-661579
|
2409.16491
|
Proactive Schemes: A Survey of Adversarial Attacks for Social Good
|
<|reference_start|>Proactive Schemes: A Survey of Adversarial Attacks for Social Good: Adversarial attacks in computer vision exploit the vulnerabilities of machine learning models by introducing subtle perturbations to input data, often leading to incorrect predictions or classifications. These attacks have evolved in sophistication with the advent of deep learning, presenting significant challenges in critical applications, which can be harmful to society. However, there is also a rich line of research from a transformative perspective that leverages adversarial techniques for social good. Specifically, we examine the rise of proactive schemes: methods that encrypt input data using additional signals termed templates, to enhance the performance of deep learning models. By embedding these imperceptible templates into digital media, proactive schemes are applied across various applications, from simple image enhancements to complicated deep learning frameworks that aid performance, in contrast to passive schemes, which do not alter the input data distribution for their framework. The survey delves into the methodologies behind these proactive schemes, the encryption and learning processes, and their application to modern computer vision and natural language processing applications. Additionally, it discusses the challenges, potential vulnerabilities, and future directions for proactive schemes, ultimately highlighting their potential to foster the responsible and secure advancement of deep learning technologies.<|reference_end|>
|
arxiv
|
@article{asnani2024proactive,
title={Proactive Schemes: A Survey of Adversarial Attacks for Social Good},
author={Vishal Asnani, Xi Yin, Xiaoming Liu},
journal={arXiv preprint arXiv:2409.16491},
year={2024},
archivePrefix={arXiv},
eprint={2409.16491},
primaryClass={cs.CV}
}
|
asnani2024proactive
|
arxiv-661580
|
2409.16493
|
NoTeeline: Supporting Real-Time Notetaking from Keypoints with Large Language Models
|
<|reference_start|>NoTeeline: Supporting Real-Time Notetaking from Keypoints with Large Language Models: Video has become a popular media form for information sharing and consumption. However, taking notes while watching a video requires significant time and effort. To address this, we propose a novel interactive system, NoTeeline, for taking real-time, personalized notes. NoTeeline lets users quickly jot down keypoints (micronotes), which are automatically expanded into full-fledged notes that capture the content of the user's micronotes and are consistent with the user's writing style. In a within-subjects study (N=12), we found that NoTeeline helps users create high-quality notes that capture the essence of their micronotes with a higher factual correctness (93.2%) while accurately reflecting their writing style. While using NoTeeline, participants experienced significantly reduced mental effort, captured satisfactory notes while writing 47% less text, and completed notetaking with 43.9% less time compared to a manual notetaking baseline.<|reference_end|>
|
arxiv
|
@article{huq2024noteeline:,
title={NoTeeline: Supporting Real-Time, Personalized Notetaking with
LLM-Enhanced Micronotes},
author={Faria Huq, Abdus Samee, David Chuan-en Lin, Xiaodi Alice Tang, Jeffrey
P. Bigham},
journal={arXiv preprint arXiv:2409.16493},
year={2024},
archivePrefix={arXiv},
eprint={2409.16493},
primaryClass={cs.HC}
}
|
huq2024noteeline:
|
arxiv-661581
|
2409.16494
|
A Unified Hallucination Mitigation Framework for Large Vision-Language Models
|
<|reference_start|>A Unified Hallucination Mitigation Framework for Large Vision-Language Models: Hallucination is a common problem for Large Vision-Language Models (LVLMs) with long generations which is difficult to eradicate. The generation with hallucinations is partially inconsistent with the image content. To mitigate hallucination, current studies either focus on the process of model inference or the results of model generation, but the solutions they design sometimes do not deal appropriately with various types of queries and the hallucinations of the generations about these queries. To accurately deal with various hallucinations, we present a unified framework, Dentist, for hallucination mitigation. The core step is to first classify the queries, then perform different processes of hallucination mitigation based on the classification result, just like a dentist first observes the teeth and then makes a plan. In a simple deployment, Dentist can classify queries as perception or reasoning and easily mitigate potential hallucinations in answers which has been demonstrated in our experiments. On MMbench, we achieve a 13.44%/10.2%/15.8% improvement in accuracy on Image Quality, a Coarse Perception visual question answering (VQA) task, over the baseline InstructBLIP/LLaVA/VisualGLM.<|reference_end|>
|
arxiv
|
@article{chang2024a,
title={A Unified Hallucination Mitigation Framework for Large Vision-Language
Models},
author={Yue Chang and Liqiang Jing and Xiaopeng Zhang and Yue Zhang},
journal={arXiv preprint arXiv:2409.16494},
year={2024},
archivePrefix={arXiv},
eprint={2409.16494},
primaryClass={cs.CV cs.CL}
}
|
chang2024a
|
arxiv-661582
|
2409.16495
|
Flight: A FaaS-Based Framework for Complex and Hierarchical Federated Learning
|
<|reference_start|>Flight: A FaaS-Based Framework for Complex and Hierarchical Federated Learning: Federated Learning (FL) is a decentralized machine learning paradigm where models are trained on distributed devices and are aggregated at a central server. Existing FL frameworks assume simple two-tier network topologies where end devices are directly connected to the aggregation server. While this is a practical mental model, it does not exploit the inherent topology of real-world distributed systems like the Internet-of-Things. We present Flight, a novel FL framework that supports complex hierarchical multi-tier topologies, asynchronous aggregation, and decouples the control plane from the data plane. We compare the performance of Flight against Flower, a state-of-the-art FL framework. Our results show that Flight scales beyond Flower, supporting up to 2048 simultaneous devices, and reduces FL makespan across several models. Finally, we show that Flight's hierarchical FL model can reduce communication overheads by more than 60%.<|reference_end|>
|
arxiv
|
@article{hudson2024flight:,
title={Flight: A FaaS-Based Framework for Complex and Hierarchical Federated
Learning},
author={Nathaniel Hudson, Valerie Hayot-Sasson, Yadu Babuji, Matt Baughman, J.
Gregory Pauloski, Ryan Chard, Ian Foster, Kyle Chard},
journal={arXiv preprint arXiv:2409.16495},
year={2024},
archivePrefix={arXiv},
eprint={2409.16495},
primaryClass={cs.LG cs.DC}
}
|
hudson2024flight:
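The hierarchical multi-tier aggregation that distinguishes Flight from two-tier FL can be sketched as a recursion over a topology tree. The tree layout, node names, and plain unweighted averaging are illustrative assumptions; Flight's FaaS control plane and asynchronous aggregation are not modeled.

```python
# Minimal sketch of hierarchical FL aggregation: leaf updates are averaged
# at intermediate aggregators, whose results are averaged again at the root.
import numpy as np

def aggregate(node, local_updates):
    """Recursively average model updates up a topology tree."""
    if not node["children"]:                       # leaf: a device's update
        return local_updates[node["name"]]
    child_models = [aggregate(c, local_updates) for c in node["children"]]
    return np.mean(child_models, axis=0)           # tier-level FedAvg

# Usage: two aggregators, each fronting two devices (hypothetical names).
leaf = lambda n: {"name": n, "children": []}
topology = {"name": "root", "children": [
    {"name": "agg0", "children": [leaf("d0"), leaf("d1")]},
    {"name": "agg1", "children": [leaf("d2"), leaf("d3")]},
]}
updates = {f"d{i}": np.full(4, float(i)) for i in range(4)}
print(aggregate(topology, updates))                # -> [1.5 1.5 1.5 1.5]
```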
|
arxiv-661583
|
2409.16496
|
Real-Time Detection of Electronic Components in Waste Printed Circuit Boards: A Transformer-Based Approach
|
<|reference_start|>Real-Time Detection of Electronic Components in Waste Printed Circuit Boards: A Transformer-Based Approach: Critical Raw Materials (CRMs) such as copper, manganese, gallium, and various rare earths have great importance for the electronic industry. To increase the concentration of individual CRMs and thus make their extraction from Waste Printed Circuit Boards (WPCBs) convenient, we have proposed a practical approach that involves selective disassembling of the different types of electronic components from WPCBs using mechatronic systems guided by artificial vision techniques. In this paper, we evaluate the real-time accuracy of electronic component detection and localization with the Real-Time DEtection TRansformer model architecture. Transformers have recently become very popular for the extraordinary results obtained in natural language processing and machine translation. In this case as well, the transformer model achieves very good performance, often superior to that of the latest state-of-the-art object detection and localization models, YOLOv8 and YOLOv9.<|reference_end|>
|
arxiv
|
@article{mohsin2024real-time,
title={Real-Time Detection of Electronic Components in Waste Printed Circuit
Boards: A Transformer-Based Approach},
author={Muhammad Mohsin, Stefano Rovetta, Francesco Masulli, Alberto Cabri},
journal={arXiv preprint arXiv:2409.16496},
year={2024},
archivePrefix={arXiv},
eprint={2409.16496},
primaryClass={cs.CV}
}
|
mohsin2024real-time
|
arxiv-661584
|
2409.16497
|
Unsupervised Text Representation Learning via Instruction-Tuning for Zero-Shot Dense Retrieval
|
<|reference_start|>Unsupervised Text Representation Learning via Instruction-Tuning for Zero-Shot Dense Retrieval: Dense retrieval systems are commonly used for information retrieval (IR). They rely on learning text representations through an encoder and usually require supervised modeling via labelled data which can be costly to obtain or simply unavailable. In this study, we introduce a novel unsupervised text representation learning technique via instruction-tuning the pre-trained encoder-decoder large language models (LLM) under the dual-encoder retrieval framework. We demonstrate that the corpus representation can be augmented by the representations of relevant synthetic queries generated by the instruction-tuned LLM, founded on the Rao-Blackwell theorem. Furthermore, we effectively align the query and corpus text representation with self-instructed tuning. Specifically, we first prompt an open-box pre-trained LLM to follow defined instructions (i.e. question generation and keyword summarization) to generate synthetic queries. Next, we fine-tune the pre-trained LLM with the defined instructions and the generated queries that passed quality check. Finally, we generate synthetic queries with the instruction-tuned LLM for each corpus entry and represent each entry by weighted averaging of the synthetic-query and original corpus embeddings. We evaluate our proposed method under low-resource settings on three English and one German retrieval datasets, measuring NDCG@10, MRR@100, and Recall@100. We significantly improve the average zero-shot retrieval performance on all metrics, increasing open-box FLAN-T5 model variations by [3.34%, 3.50%] in absolute and exceeding three competitive dense retrievers (i.e. mDPR, T-Systems, mBART-Large), with models at least 38% smaller, by 1.96%, 4.62%, 9.52% absolute on NDCG@10.<|reference_end|>
|
arxiv
|
@article{zeng2024unsupervised,
title={Unsupervised Text Representation Learning via Instruction-Tuning for
Zero-Shot Dense Retrieval},
author={Qiuhai Zeng, Zimeng Qiu, Dae Yon Hwang, Xin He, William M. Campbell},
journal={arXiv preprint arXiv:2409.16497},
year={2024},
archivePrefix={arXiv},
eprint={2409.16497},
primaryClass={cs.AI}
}
|
zeng2024unsupervised
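The final representation step in the abstract, weighted averaging of a document's embedding with those of its synthetic queries, is easy to sketch. Embeddings below are random stand-ins and the weight is an assumed hyperparameter; the instruction-tuned encoder is not reproduced.

```python
# Minimal sketch: fuse a document embedding with the mean embedding of its
# synthetic queries. Weight and embeddings are illustrative assumptions.
import numpy as np

def fuse_representation(doc_emb, query_embs, doc_weight=0.5):
    """Weighted average of document and synthetic-query embeddings."""
    q_mean = np.mean(query_embs, axis=0)
    fused = doc_weight * doc_emb + (1 - doc_weight) * q_mean
    return fused / np.linalg.norm(fused)           # unit norm for dot-product IR

# Usage with toy 8-dim embeddings for one document and three synthetic queries.
rng = np.random.default_rng(0)
doc = rng.standard_normal(8)
queries = rng.standard_normal((3, 8))
corpus_vec = fuse_representation(doc, queries)
score = corpus_vec @ (queries[0] / np.linalg.norm(queries[0]))  # retrieval score
print(round(float(score), 3))
```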
|
arxiv-661585
|
2409.16499
|
Learning Linear Dynamics from Bilinear Observations
|
<|reference_start|>Learning Linear Dynamics from Bilinear Observations: We consider the problem of learning a realization of a partially observed dynamical system with linear state transitions and bilinear observations. Under very mild assumptions on the process and measurement noises, we provide a finite time analysis for learning the unknown dynamics matrices (up to a similarity transform). Our analysis involves a regression problem with heavy-tailed and dependent data. Moreover, each row of our design matrix contains a Kronecker product of current input with a history of inputs, making it difficult to guarantee persistence of excitation. We overcome these challenges, first providing a data-dependent high probability error bound for arbitrary but fixed inputs. Then, we derive a data-independent error bound for inputs chosen according to a simple random design. Our main results provide an upper bound on the statistical error rates and sample complexity of learning the unknown dynamics matrices from a single finite trajectory of bilinear observations.<|reference_end|>
|
arxiv
|
@article{sattar2024learning,
title={Learning Linear Dynamics from Bilinear Observations},
author={Yahya Sattar, Yassir Jedra, Sarah Dean},
journal={arXiv preprint arXiv:2409.16499},
year={2024},
archivePrefix={arXiv},
eprint={2409.16499},
primaryClass={cs.LG cs.SY eess.SY math.OC stat.ML}
}
|
sattar2024learning
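The design-matrix structure the abstract highlights, each row a Kronecker product of the current input with a history of inputs, can be reproduced on synthetic data. The history length, noise level, and plain least-squares recovery are illustrative assumptions; the paper's contribution is the finite-time analysis, not this estimator per se.

```python
# Minimal sketch of regression with Kronecker-structured design rows:
# phi_t = kron(current input, stacked input history). Sizes are assumed.
import numpy as np

rng = np.random.default_rng(0)
du, hist, T = 3, 2, 400
theta = rng.standard_normal(du * du * hist)        # unknown parameters

U = rng.standard_normal((T, du))                   # random input design
rows, ys = [], []
for t in range(hist, T):
    history = U[t - hist:t].ravel()                # stacked past inputs
    phi = np.kron(U[t], history)                   # Kronecker design row
    rows.append(phi)
    ys.append(phi @ theta + 0.01 * rng.standard_normal())
Phi, y = np.stack(rows), np.array(ys)

theta_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("estimation error:", np.linalg.norm(theta_hat - theta))
```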
|
arxiv-661586
|
2409.16500
|
Random ensembles of symplectic and unitary states are indistinguishable
|
<|reference_start|>Random ensembles of symplectic and unitary states are indistinguishable: A unitary state $t$-design is an ensemble of pure quantum states whose moments match up to the $t$-th order those of states uniformly sampled from a $d$-dimensional Hilbert space. Typically, unitary state $t$-designs are obtained by evolving some reference pure state with unitaries from an ensemble that forms a design over the unitary group $\mathbb{U}(d)$, as unitary designs induce state designs. However, in this work we study whether Haar random symplectic states -- i.e., states obtained by evolving some reference state with unitaries sampled according to the Haar measure over $\mathbb{SP}(d/2)$ -- form unitary state $t$-designs. Importantly, we recall that random symplectic unitaries fail to be unitary designs for $t>1$, and that, while it is known that symplectic unitaries are universal, this does not imply that their Haar measure leads to a state design. Notably, our main result states that Haar random symplectic states form unitary $t$-designs for all $t$, meaning that their distribution is unconditionally indistinguishable from that of unitary Haar random states, even with tests that use infinite copies of each state. As such, our work showcases the intriguing possibility of creating state $t$-designs using ensembles of unitaries which do not constitute designs over $\mathbb{U}(d)$ themselves, such as ensembles that form $t$-designs over $\mathbb{SP}(d/2)$.<|reference_end|>
|
arxiv
|
@article{west2024random,
title={Random ensembles of symplectic and unitary states are indistinguishable},
author={Maxwell West, Antonio Anna Mele, Martin Larocca, M. Cerezo},
journal={arXiv preprint arXiv:2409.16500},
year={2024},
number={LA-UR-24-30011},
archivePrefix={arXiv},
eprint={2409.16500},
primaryClass={quant-ph cs.CC cs.IT math.IT}
}
|
west2024random
|
arxiv-661587
|
2409.16501
|
Clarke Transform -- A Fundamental Tool for Continuum Robotics
|
<|reference_start|>Clarke Transform -- A Fundamental Tool for Continuum Robotics: This article introduces the Clarke transform and Clarke coordinates, which present a solution to the disengagement of an arbitrary number of coupled displacement actuation of continuum and soft robots. The Clarke transform utilizes the generalized Clarke transformation and its inverse to reduce any number of joint values to a two-dimensional space without sacrificing any significant information. This space is the manifold of the joint space and is described by two orthogonal Clarke coordinates. Application to kinematics, sampling, and control are presented. By deriving the solution to the previously unknown forward robot-dependent mapping for an arbitrary number of joints, the forward and inverse kinematics formulations are branchless, closed-form, and singular-free. Sampling is used as a proxy for gauging the performance implications for various methods and frameworks, leading to a branchless, closed-form, and vectorizable sampling method with a 100 percent success rate and the possibility to shape desired distributions. Due to the utilization of the manifold, the fairly simple constraint-informed, two-dimensional, and linear controller always provides feasible control outputs. On top of that, the relations to improved representations in continuum and soft robotics are established, where the Clarke coordinates are their generalizations. The Clarke transform offers valuable geometric insights and paves the way for developing approaches directly on the two-dimensional manifold within the high-dimensional joint space, ensuring compliance with the constraint. While being an easy-to-construct linear map, the proposed Clarke transform is mathematically consistent, physically meaningful, as well as interpretable and contributes to the unification of frameworks across continuum and soft robots.<|reference_end|>
|
arxiv
|
@article{grassmann2024clarke,
title={Clarke Transform -- A Fundamental Tool for Continuum Robotics},
author={Reinhard Grassmann and Anastasiia Senyk and Jessica Burgner-Kahrs},
journal={arXiv preprint arXiv:2409.16501},
year={2024},
archivePrefix={arXiv},
eprint={2409.16501},
primaryClass={cs.RO}
}
|
grassmann2024clarke
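The generalized Clarke transformation at the core of the article admits a short numerical sketch. The 2 x n matrix with equally spaced angles and 2/n scaling follows the standard generalized Clarke transform; the paper's exact conventions (e.g., scaling and joint-space constraint) may differ.

```python
# Minimal sketch of a generalized Clarke transform: n equally distributed
# joint values are mapped to two Clarke coordinates and back.
import numpy as np

def clarke_matrix(n):
    """2 x n generalized Clarke transformation matrix."""
    psi = 2.0 * np.pi * np.arange(n) / n
    return (2.0 / n) * np.stack([np.cos(psi), np.sin(psi)])

def to_clarke(rho):
    """Joint values -> two Clarke coordinates."""
    return clarke_matrix(len(rho)) @ rho

def from_clarke(coords, n):
    """Clarke coordinates -> joint values (inverse map)."""
    psi = 2.0 * np.pi * np.arange(n) / n
    return coords[0] * np.cos(psi) + coords[1] * np.sin(psi)

# Round trip for a joint vector that lies on the constraint manifold.
n = 4
rho = from_clarke(np.array([0.7, -0.2]), n)    # a feasible joint vector
assert np.allclose(to_clarke(rho), [0.7, -0.2])
```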
|
arxiv-661588
|
2409.16502
|
GSplatLoc: Grounding Keypoint Descriptors into 3D Gaussian Splatting for Improved Visual Localization
|
<|reference_start|>GSplatLoc: Grounding Keypoint Descriptors into 3D Gaussian Splatting for Improved Visual Localization: Although various visual localization approaches exist, such as scene coordinate and pose regression, these methods often struggle with high memory consumption or extensive optimization requirements. To address these challenges, we utilize recent advancements in novel view synthesis, particularly 3D Gaussian Splatting (3DGS), to enhance localization. 3DGS allows for the compact encoding of both 3D geometry and scene appearance with its spatial features. Our method leverages the dense description maps produced by XFeat's lightweight keypoint detection and description model. We propose distilling these dense keypoint descriptors into 3DGS to improve the model's spatial understanding, leading to more accurate camera pose predictions through 2D-3D correspondences. After estimating an initial pose, we refine it using a photometric warping loss. Benchmarking on popular indoor and outdoor datasets shows that our approach surpasses state-of-the-art Neural Render Pose (NRP) methods, including NeRFMatch and PNeRFLoc.<|reference_end|>
|
arxiv
|
@article{sidorov2024gsplatloc:,
title={GSplatLoc: Grounding Keypoint Descriptors into 3D Gaussian Splatting for
Improved Visual Localization},
author={Gennady Sidorov, Malik Mohrat, Ksenia Lebedeva, Ruslan Rakhimov,
Sergey Kolyubin},
journal={arXiv preprint arXiv:2409.16502},
year={2024},
archivePrefix={arXiv},
eprint={2409.16502},
primaryClass={cs.CV cs.AI cs.LG cs.RO}
}
|
sidorov2024gsplatloc:
|
arxiv-661589
|
2409.16504
|
Low Latency Point Cloud Rendering with Learned Splatting
|
<|reference_start|>Low Latency Point Cloud Rendering with Learned Splatting: Point cloud is a critical 3D representation with many emerging applications. Because of the point sparsity and irregularity, high-quality rendering of point clouds is challenging and often requires complex computations to recover the continuous surface representation. On the other hand, to avoid visual discomfort, the motion-to-photon latency has to be very short, under 10 ms. Existing rendering solutions fall short in either quality or speed. To tackle these challenges, we present a framework that unlocks interactive, free-viewing and high-fidelity point cloud rendering. We train a generic neural network to estimate 3D elliptical Gaussians from arbitrary point clouds and use differentiable surface splatting to render smooth texture and surface normal for arbitrary views. Our approach does not require per-scene optimization and enables real-time rendering of dynamic point clouds. Experimental results demonstrate the proposed solution enjoys superior visual quality and speed, as well as generalizability to different scene content and robustness to compression artifacts. The code is available at https://github.com/huzi96/gaussian-pcloud-render .<|reference_end|>
|
arxiv
|
@article{hu2024low,
title={Low Latency Point Cloud Rendering with Learned Splatting},
author={Yueyu Hu, Ran Gong, Qi Sun, Yao Wang},
journal={arXiv preprint arXiv:2409.16504},
year={2024},
archivePrefix={arXiv},
eprint={2409.16504},
primaryClass={cs.CV}
}
|
hu2024low
|
arxiv-661590
|
2409.16507
|
Center-fixing of tropical cyclones using uncertainty-aware deep learning applied to high-temporal-resolution geostationary satellite imagery
|
<|reference_start|>Center-fixing of tropical cyclones using uncertainty-aware deep learning applied to high-temporal-resolution geostationary satellite imagery: Determining the location of a tropical cyclone's (TC) surface circulation center -- "center-fixing" -- is a critical first step in the TC-forecasting process, affecting current and future estimates of track, intensity, and structure. Despite a recent increase in the number of automated center-fixing methods, only one such method (ARCHER-2) is operational, and its best performance is achieved when using microwave or scatterometer data, which are not available at every forecast cycle. We develop a deep-learning algorithm called GeoCenter; it relies only on geostationary IR satellite imagery, which is available for all TC basins at high frequency (10-15 min) and low latency (< 10 min) during both day and night. GeoCenter ingests an animation (time series) of IR images, including 10 channels at lag times up to 3 hours. The animation is centered at a "first guess" location, offset from the true TC-center location by 48 km on average and sometimes > 100 km; GeoCenter is tasked with correcting this offset. On an independent testing dataset, GeoCenter achieves a mean/median/RMS (root mean square) error of 26.9/23.3/32.0 km for all systems, 25.7/22.3/30.5 km for tropical systems, and 15.7/13.6/18.6 km for category-2--5 hurricanes. These values are similar to ARCHER-2 errors when microwave or scatterometer data are available, and better than ARCHER-2 errors when only IR data are available. GeoCenter also performs skillful uncertainty quantification (UQ), producing a well calibrated ensemble of 200 TC-center locations. Furthermore, all predictors used by GeoCenter are available in real time, which would make GeoCenter easy to implement operationally every 10-15 min.<|reference_end|>
|
arxiv
|
@article{lagerquist2024center-fixing,
title={Center-fixing of tropical cyclones using uncertainty-aware deep learning
applied to high-temporal-resolution geostationary satellite imagery},
author={Ryan Lagerquist, Galina Chirokova, Robert DeMaria, Mark DeMaria, Imme
Ebert-Uphoff},
journal={arXiv preprint arXiv:2409.16507},
year={2024},
archivePrefix={arXiv},
eprint={2409.16507},
primaryClass={physics.ao-ph cs.AI}
}
|
lagerquist2024center-fixing
|
arxiv-661591
|
2409.16510
|
Distributed Channel Estimation for 6D Movable Antenna: Unveiling Directional Sparsity
|
<|reference_start|>Distributed Channel Estimation for 6D Movable Antenna: Unveiling Directional Sparsity: Six-dimensional movable antenna (6DMA) is an innovative technology to improve wireless network capacity by adjusting 3D positions and 3D rotations of antenna surfaces based on channel spatial distribution. However, the existing works on 6DMA have assumed a central processing unit (CPU) to jointly process the signals of all 6DMA surfaces to execute various tasks. This inevitably incurs prohibitively high processing cost for channel estimation. Therefore, we propose a distributed 6DMA processing architecture to reduce processing complexity of CPU by equipping each 6DMA surface with a local processing unit (LPU). In particular, we unveil for the first time a new \textbf{\textit{directional sparsity}} property of 6DMA channels, where each user has significant channel gains only for a (small) subset of 6DMA position-rotation pairs, which can receive direct/reflected signals from users. In addition, we propose a practical three-stage protocol for the 6DMA-equipped base station (BS) to conduct statistical CSI acquisition for all 6DMA candidate positions/rotations, 6DMA position/rotation optimization, and instantaneous channel estimation for user data transmission with optimized 6DMA positions/rotations. Specifically, the directional sparsity is leveraged to develop distributed algorithms for joint sparsity detection and channel power estimation, as well as for directional sparsity-aided instantaneous channel estimation. Using the estimated channel power, we develop a channel power-based optimization algorithm to maximize the ergodic sum rate of the users by optimizing the antenna positions/rotations. Simulation results show that our channel estimation algorithms are more accurate than benchmarks with lower pilot overhead, and our optimization outperforms fluid/movable antennas optimized only in two dimensions (2D), even when the latter have perfect instantaneous CSI.<|reference_end|>
|
arxiv
|
@article{shao2024distributed,
title={Distributed Channel Estimation for 6D Movable Antenna: Unveiling
Directional Sparsity},
author={Xiaodan Shao, Rui Zhang, Qijun Jiang, Jihong Park, Tony Q. S. Quek,
Robert Schober},
journal={arXiv preprint arXiv:2409.16510},
year={2024},
archivePrefix={arXiv},
eprint={2409.16510},
primaryClass={eess.SP cs.IT math.IT}
}
|
shao2024distributed
|
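The following toy sketch demonstrates the "directional sparsity" property described above: each user has significant channel power for only a small subset of position-rotation pairs, so per-pair power estimates can be thresholded to recover that subset. This is not the paper's algorithm; the dimensions, noise level, and threshold rule are invented for illustration.

```python
# Illustrative sketch of directional sparsity detection (not the paper's
# distributed algorithm): estimate per-pair channel power from snapshots,
# then threshold to recover each user's support set.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_users, n_snapshots = 64, 4, 200

# Ground truth: each user is "visible" to ~5 of the 64 position-rotation pairs.
support = np.zeros((n_pairs, n_users), dtype=bool)
for u in range(n_users):
    support[rng.choice(n_pairs, size=5, replace=False), u] = True

power = np.where(support, 1.0, 0.0)
h = (rng.normal(size=(n_snapshots, n_pairs, n_users))
     + 1j * rng.normal(size=(n_snapshots, n_pairs, n_users)))
h *= np.sqrt(power / 2)
noise = 0.05 * (rng.normal(size=h.shape) + 1j * rng.normal(size=h.shape))
y = h + noise

# Estimate per-pair channel power across snapshots, then threshold.
p_hat = np.mean(np.abs(y) ** 2, axis=0)
detected = p_hat > 0.5 * p_hat.max(axis=0)   # crude relative threshold
print("support recovery rate:", (detected == support).mean())
```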
arxiv-661592
|
2409.16517
|
SynChart: Synthesizing Charts from Language Models
|
<|reference_start|>SynChart: Synthesizing Charts from Language Models: With the release of GPT-4V(O), its use in generating pseudo labels for multi-modality tasks has gained significant popularity. However, it remains unclear how to build such advanced models from their base large language models (LLMs). This work explores the potential of using LLMs alone for data generation and develops competitive multi-modality models focusing on chart understanding. We construct a large-scale chart dataset, SynChart, which contains approximately 4 million diverse chart images with over 75 million dense annotations, including data tables, code, descriptions, and question-answer sets. We trained a 4.2B chart-expert model using this dataset and achieved near-GPT-4O performance on the ChartQA task, surpassing GPT-4V.<|reference_end|>
|
arxiv
|
@article{liu2024synchart:,
title={SynChart: Synthesizing Charts from Language Models},
author={Mengchen Liu, Qixiu Li, Dongdong Chen, Dong Chen, Jianmin Bao,
Yunsheng Li},
journal={arXiv preprint arXiv:2409.16517},
year={2024},
archivePrefix={arXiv},
eprint={2409.16517},
primaryClass={cs.AI}
}
|
liu2024synchart:
|
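To make the dataset structure above concrete, here is a minimal sketch of what one SynChart-style sample could look like: a chart image rendered from a small data table, saved alongside dense annotations (table, description, QA pair). This is our illustration of the data format, not the paper's generation pipeline; the table values and file names are invented.

```python
# Minimal sketch of one SynChart-style sample (illustrative, not the paper's
# pipeline): render a chart image from a data table and emit dense
# annotations alongside it.
import json
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

table = {"quarter": ["Q1", "Q2", "Q3", "Q4"], "revenue": [12, 15, 11, 18]}

fig, ax = plt.subplots()
ax.bar(table["quarter"], table["revenue"])
ax.set_title("Revenue by quarter")
ax.set_ylabel("Revenue (M$)")
fig.savefig("chart.png")

annotation = {
    "data_table": table,
    "description": "Bar chart of quarterly revenue; Q4 is the highest at 18M$.",
    "qa": [{"q": "Which quarter had the highest revenue?", "a": "Q4"}],
}
with open("chart.json", "w") as f:
    json.dump(annotation, f, indent=2)
```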
arxiv-661593
|
2409.16519
|
Feynman-Kac Formula for Nonlinear Schr\"odinger Equations with Applications in Numerical Approximations
|
<|reference_start|>Feynman-Kac Formula for Nonlinear Schr\"odinger Equations with Applications in Numerical Approximations: This paper is devoted to a Feynman-Kac formula for general nonlinear time-dependent Schr\"odinger equations with applications in numerical approximations. Our formulation integrates both the Fisk-Stratonovich and It\^o integrals within the framework of backward stochastic differential equations. Utilizing this Feynman-Kac representation, we propose a deep-learning-based approach for numerical approximation. Numerical experiments are performed to validate the accuracy and efficiency of our method, and a convergence analysis is provided to support the results.<|reference_end|>
|
arxiv
|
@article{cheung2024feynman-kac,
title={Feynman-Kac Formula for Nonlinear Schr\"odinger Equations with
Applications in Numerical Approximations},
author={Hang Cheung, Jinniao Qiu, Yang Yang},
journal={arXiv preprint arXiv:2409.16519},
year={2024},
archivePrefix={arXiv},
eprint={2409.16519},
primaryClass={math.AP cs.NA math.NA math.PR}
}
|
cheung2024feynman-kac
|
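For orientation, the following is the standard textbook form of a nonlinear Feynman-Kac representation via a backward stochastic differential equation (BSDE); the paper's setting for Schrödinger equations, mixing Fisk-Stratonovich and Itô integrals, is more general, so this should be read as background notation rather than the paper's exact formulation.

```latex
% Generic nonlinear Feynman-Kac representation via a BSDE; a standard
% textbook form, not necessarily the paper's exact setup.
\begin{align*}
  &\text{PDE:}\quad \partial_t u + \mathcal{L} u
    + f\big(t, X_t, u, \sigma^\top \nabla u\big) = 0,
    \qquad u(T, x) = g(x),\\
  &\text{SDE:}\quad dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dW_t,\\
  &\text{BSDE:}\quad Y_t = g(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,ds
    - \int_t^T Z_s\,dW_s,
\end{align*}
% so that $Y_t = u(t, X_t)$ and $Z_t = \sigma^\top(t,X_t)\,\nabla u(t, X_t)$;
% a neural network can parametrize $u$ (or $Z$) and be trained to enforce
% the terminal condition, which is the deep-learning angle in the abstract.
```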
arxiv-661594
|
2409.16521
|
Understanding the Cognitive Complexity in Language Elicited by Product Images
|
<|reference_start|>Understanding the Cognitive Complexity in Language Elicited by Product Images: Product images (e.g., a phone) can be used to elicit a diverse set of consumer-reported features expressed through language, including surface-level perceptual attributes (e.g., "white") and more complex ones, like perceived utility (e.g., "battery"). The cognitive complexity of elicited language reveals the nature of cognitive processes and the context required to understand them; cognitive complexity also predicts consumers' subsequent choices. This work offers an approach for measuring and validating the cognitive complexity of human language elicited by product images, providing a tool for understanding the cognitive processes of human as well as virtual respondents simulated by Large Language Models (LLMs). We also introduce a large dataset that includes diverse descriptive labels for product images, including human-rated complexity. We demonstrate that human-rated cognitive complexity can be approximated using a set of natural language models that, combined, roughly capture the complexity construct. Moreover, this approach is minimally supervised and scalable, even in use cases with limited human assessment of complexity.<|reference_end|>
|
arxiv
|
@article{chen2024understanding,
title={Understanding the Cognitive Complexity in Language Elicited by Product
Images},
author={Yan-Ying Chen, Shabnam Hakimi, Monica Van, Francine Chen, Matthew
Hong, Matt Klenk, Charlene Wu},
journal={Published by ICML 2024 Workshop on LLMs and Cognition},
year={2024},
archivePrefix={arXiv},
eprint={2409.16521},
primaryClass={cs.CL}
}
|
chen2024understanding
|
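The sketch below illustrates the modeling idea in the abstract: combine several language-model-derived signals into a single regressor that approximates human-rated complexity. Everything here is a synthetic stand-in; the features, data, and ridge regression choice are our assumptions, not the paper's models or dataset.

```python
# Hypothetical sketch: approximate human-rated cognitive complexity by
# combining several per-label features (e.g., length, rarity, LM surprisal)
# in a simple regressor. Data and features are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 3))                      # stand-in feature columns
human_complexity = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.2, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, human_complexity, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```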
arxiv-661595
|
2409.16526
|
APILOT: Navigating Large Language Models to Generate Secure Code by Sidestepping Outdated API Pitfalls
|
<|reference_start|>APILOT: Navigating Large Language Models to Generate Secure Code by Sidestepping Outdated API Pitfalls: With the rapid development of large language models (LLMs), their applications have expanded into diverse fields, such as code assistance. However, the substantial size of LLMs makes their training highly resource- and time-intensive, rendering frequent retraining or updates impractical. Consequently, time-sensitive data can become outdated, potentially misleading LLMs in time-aware tasks. For example, new vulnerabilities are discovered in various programs every day. Without updating their knowledge, LLMs may inadvertently generate code that includes these newly discovered vulnerabilities. Current strategies, such as prompt engineering and fine-tuning, do not effectively address this issue. We therefore propose a solution, named APILOT, which maintains a real-time, quickly updatable dataset of outdated APIs. Additionally, APILOT utilizes an augmented generation method that leverages this dataset to navigate LLMs in generating secure, version-aware code. We conducted a comprehensive evaluation to measure the effectiveness of APILOT in reducing the incidence of outdated API recommendations across seven different state-of-the-art LLMs. The evaluation results indicate that APILOT can reduce outdated code recommendations by 89.42% on average with limited performance overhead. Interestingly, while enhancing security, APILOT also improves the usability of the code generated by LLMs, showing an average increase of 27.54% in usability. This underscores APILOT's dual capability to enhance both the safety and practical utility of code suggestions in contemporary software development environments.<|reference_end|>
|
arxiv
|
@article{bai2024apilot:,
title={APILOT: Navigating Large Language Models to Generate Secure Code by
Sidestepping Outdated API Pitfalls},
author={Weiheng Bai, Keyang Xuan, Pengxiang Huang, Qiushi Wu, Jianing Wen,
Jingjing Wu and Kangjie Lu},
journal={arXiv preprint arXiv:2409.16526},
year={2024},
archivePrefix={arXiv},
eprint={2409.16526},
primaryClass={cs.CR}
}
|
bai2024apilot:
|
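A toy illustration of the core idea, as we read it from the abstract: keep a quickly updatable table of outdated or vulnerable APIs and screen LLM-generated code against it before surfacing. This is not the released APILOT tool; the blocklist entries below are just well-known examples, and the AST scan is a deliberately simple stand-in for the paper's augmented generation method.

```python
# Toy outdated-API screen (illustrative, not the APILOT implementation):
# scan generated code for calls that appear in an updatable blocklist.
import ast

OUTDATED_APIS = {
    "ssl.wrap_socket": "removed in Python 3.12; use SSLContext.wrap_socket",
    "yaml.load": "unsafe without Loader=; use yaml.safe_load",
}

def find_outdated_calls(code: str):
    hits = []
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Call):
            name = ast.unparse(node.func)
            if name in OUTDATED_APIS:
                hits.append((name, OUTDATED_APIS[name]))
    return hits

generated = "import yaml\ncfg = yaml.load(open('c.yml'))\n"
for api, advice in find_outdated_calls(generated):
    print(f"flagged {api}: {advice}")
```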
arxiv-661596
|
2409.16530
|
T2Pair++: Secure and Usable IoT Pairing with Zero Information Loss
|
<|reference_start|>T2Pair++: Secure and Usable IoT Pairing with Zero Information Loss: Secure pairing is crucial for ensuring the trustworthy deployment and operation of Internet of Things (IoT) devices. However, traditional pairing methods are often unsuitable for IoT devices due to their lack of conventional user interfaces, such as keyboards. Proximity-based pairing approaches are usable but vulnerable to exploitation by co-located malicious devices. While methods based on a user's physical operations (such as shaking) on IoT devices offer greater security, they typically rely on inertial sensors to sense the operations, which most IoT devices lack. We introduce a novel technique called Universal Operation Sensing, enabling IoT devices to sense the user's physical operations without the need for inertial sensors. With this technique, users can complete the pairing process within seconds using simple actions such as pressing a button or twisting a knob, whether they are holding a smartphone or wearing a smartwatch. Moreover, we reveal an inaccuracy issue in the fuzzy commitment protocol, which is frequently used for pairing. To address it, we propose an accurate pairing protocol, which does not use fuzzy commitment and incurs zero information loss. The comprehensive evaluation shows that it is secure, usable and efficient.<|reference_end|>
|
arxiv
|
@article{wu2024t2pair++:,
title={T2Pair++: Secure and Usable IoT Pairing with Zero Information Loss},
author={Chuxiong Wu, Xiaopeng Li, Lannan Luo, and Qiang Zeng},
journal={arXiv preprint arXiv:2409.16530},
year={2024},
archivePrefix={arXiv},
eprint={2409.16530},
primaryClass={cs.CR}
}
|
wu2024t2pair++:
|
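The sketch below captures the pairing intuition as we understand it from the abstract, not the T2Pair++ protocol itself: two devices that both observe the same user operations should see strongly correlated event timings, while a device that only guesses should not. Signals, jitter values, and the correlation test are all synthetic assumptions.

```python
# Rough sketch of the operation-sensing pairing intuition (not the T2Pair++
# protocol): compare event-timing signals observed by two devices.
import numpy as np

rng = np.random.default_rng(2)
true_events = np.sort(rng.uniform(0, 10, size=8))      # user's button presses (s)

def timing_signal(events, jitter, n=1000, horizon=10.0):
    t = np.linspace(0, horizon, n)
    sig = np.zeros(n)
    for e in events + rng.normal(scale=jitter, size=len(events)):
        sig += np.exp(-((t - e) ** 2) / (2 * 0.05 ** 2))  # smooth each event
    return sig

device = timing_signal(true_events, jitter=0.02)        # IoT device's view
wearable = timing_signal(true_events, jitter=0.02)      # smartwatch's view
attacker = timing_signal(np.sort(rng.uniform(0, 10, 8)), jitter=0.02)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

print("legitimate pair:", round(corr(device, wearable), 3))
print("attacker:       ", round(corr(device, attacker), 3))
```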
arxiv-661597
|
2409.16532
|
Graph Pruning Based Spatial and Temporal Graph Convolutional Network with Transfer Learning for Traffic Prediction
|
<|reference_start|>Graph Pruning Based Spatial and Temporal Graph Convolutional Network with Transfer Learning for Traffic Prediction: With the process of urbanization and the rapid growth of population, the issue of traffic congestion has become an increasingly critical concern. Intelligent transportation systems heavily rely on real-time and precise prediction algorithms to address this problem. While Recurrent Neural Network (RNN) and Graph Convolutional Network (GCN) methods in deep learning have demonstrated high accuracy in predicting road conditions when sufficient data is available, forecasting in road networks with limited data remains a challenging task. This study proposes a novel spatial-temporal graph convolutional network based on a graph pruning and transfer learning framework (TL-GPSTGN) to tackle this issue. Firstly, the essential structure and information of the graph are extracted by analyzing the correlation and information entropy of the road network structure and feature data. Graph pruning techniques are then applied to the adjacency matrix and the input feature data, resulting in a significant improvement in the model's migration performance. Subsequently, the well-characterized data are input into the spatial-temporal graph convolutional network to capture the spatial-temporal relationships and make predictions regarding the road conditions. Furthermore, this study conducts comprehensive testing and validation of the TL-GPSTGN method on real datasets, comparing its prediction performance against other commonly used models under identical conditions. The results demonstrate the exceptional predictive accuracy of TL-GPSTGN on a single dataset, as well as its robust migration performance across different datasets.<|reference_end|>
|
arxiv
|
@article{jing2024graph,
title={Graph Pruning Based Spatial and Temporal Graph Convolutional Network
with Transfer Learning for Traffic Prediction},
author={Zihao Jing},
journal={arXiv preprint arXiv:2409.16532},
year={2024},
archivePrefix={arXiv},
eprint={2409.16532},
primaryClass={cs.AI}
}
|
jing2024graph
|
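The following simplified sketch mirrors the pruning step described in the abstract: keep edges whose endpoint time series are sufficiently correlated, and keep node features with enough information entropy. The thresholds, data, and entropy estimator are invented for illustration, not taken from TL-GPSTGN.

```python
# Simplified graph-pruning sketch (thresholds and data are invented):
# correlation-based edge pruning plus entropy-based feature screening.
import numpy as np

rng = np.random.default_rng(3)
n_nodes, T = 6, 288
speeds = rng.normal(size=(n_nodes, T)).cumsum(axis=1)   # fake sensor series
adj = (rng.uniform(size=(n_nodes, n_nodes)) < 0.5).astype(float)
np.fill_diagonal(adj, 0)

corr = np.corrcoef(speeds)                               # node-node correlation
pruned_adj = adj * (np.abs(corr) > 0.3)                  # drop weak edges

def entropy(x, bins=16):
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

keep = [i for i in range(n_nodes) if entropy(speeds[i]) > 2.0]
print("edges before/after:", int(adj.sum()), int(pruned_adj.sum()))
print("informative nodes:", keep)
```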
arxiv-661598
|
2409.16535
|
Prompt Sliders for Fine-Grained Control, Editing and Erasing of Concepts in Diffusion Models
|
<|reference_start|>Prompt Sliders for Fine-Grained Control, Editing and Erasing of Concepts in Diffusion Models: Diffusion models have recently surpassed GANs in image synthesis and editing, offering superior image quality and diversity. However, achieving precise control over attributes in generated images remains a challenge. Concept Sliders introduced a method for fine-grained image control and editing by learning concepts (attributes/objects). However, this approach adds parameters and increases inference time due to the loading and unloading of Low-Rank Adapters (LoRAs) used for learning concepts. These adapters are model-specific and require retraining for different architectures, such as Stable Diffusion (SD) v1.5 and SD-XL. In this paper, we propose a straightforward textual inversion method to learn concepts through text embeddings, which are generalizable across models that share the same text encoder, including different versions of the SD model. We refer to our method as Prompt Sliders. Besides learning new concepts, we also show that Prompt Sliders can be used to erase undesirable concepts such as artistic styles or mature content. Our method is 30% faster than using LoRAs because it eliminates the need to load and unload adapters and introduces no additional parameters aside from the target concept text embedding. Each concept embedding only requires 3KB of storage compared to the 8922KB or more required for each LoRA adapter, making our approach more computationally efficient. Project Page: https://deepaksridhar.github.io/promptsliders.github.io/<|reference_end|>
|
arxiv
|
@article{sridhar2024prompt,
title={Prompt Sliders for Fine-Grained Control, Editing and Erasing of Concepts
in Diffusion Models},
author={Deepak Sridhar, Nuno Vasconcelos},
journal={arXiv preprint arXiv:2409.16535},
year={2024},
archivePrefix={arXiv},
eprint={2409.16535},
primaryClass={cs.CV}
}
|
sridhar2024prompt
|
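Below is a toy textual-inversion loop in plain PyTorch that conveys the mechanism in the abstract: freeze the text encoder and optimize only one new concept embedding. The stand-in encoder, target direction, and cosine loss are all our assumptions; this is not the Prompt Sliders code and omits the diffusion model entirely.

```python
# Toy textual inversion (illustrative, not the Prompt Sliders release):
# freeze a stand-in "text encoder" and train only one concept embedding.
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 32
encoder = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
for p in encoder.parameters():
    p.requires_grad_(False)                  # only the embedding is trained

concept = nn.Parameter(torch.randn(dim) * 0.01)   # the learned concept token
target = torch.randn(dim)                         # stand-in attribute direction
opt = torch.optim.Adam([concept], lr=1e-2)

for step in range(200):
    loss = 1 - torch.cosine_similarity(encoder(concept), target, dim=0)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final cosine similarity:",
      float(torch.cosine_similarity(encoder(concept), target, dim=0)))
```

At inference, scaling such an embedding by a weight would act as the "slider" that strengthens or weakens the learned concept, which is why no extra adapter parameters are needed.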
arxiv-661599
|
2409.16536
|
Time Constant: Actuator Fingerprinting using Transient Response of Device and Process in ICS
|
<|reference_start|>Time Constant: Actuator Fingerprinting using Transient Response of Device and Process in ICS: Command injection and replay attacks are key threats in Cyber Physical Systems (CPS). We develop a novel actuator fingerprinting technique named Time Constant. Time Constant captures the transient dynamics of an actuator and physical process. The transient behavior is device-specific. We combine process and device transient characteristics to develop a copy-resistant actuator fingerprint that resists command injection and replay attacks in the face of insider adversaries. We validated the proposed scheme on data from a real water treatment testbed, as well as through real-time attack detection in the live plant. Our results show that we can uniquely distinguish between process states and actuators based on their Time Constant.<|reference_end|>
|
arxiv
|
@article{ahmed2024time,
title={Time Constant: Actuator Fingerprinting using Transient Response of
Device and Process in ICS},
author={Chuadhry Mujeeb Ahmed, Matthew Calder, Sean Gunawan, Jay Prakash,
Shishir Nagaraja and Jianying Zhou},
journal={arXiv preprint arXiv:2409.16536},
year={2024},
archivePrefix={arXiv},
eprint={2409.16536},
primaryClass={cs.CR}
}
|
ahmed2024time
|
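In the spirit of the abstract, the sketch below estimates a first-order time constant from a noisy step response by fitting an exponential; a fingerprint check could then compare the fitted constant against an enrolled value. The plant gain, true time constant, and noise level are synthetic; this is not the paper's detection pipeline.

```python
# Illustrative time-constant estimation from a transient (step) response;
# plant values are synthetic, not from the water-treatment testbed.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 400)
tau_true = 1.7                                     # device-specific constant
y = 1.0 * (1 - np.exp(-t / tau_true)) + rng.normal(scale=0.01, size=t.size)

def step_response(t, gain, tau):
    return gain * (1 - np.exp(-t / tau))

(gain_hat, tau_hat), _ = curve_fit(step_response, t, y, p0=(1.0, 1.0))
print(f"estimated tau = {tau_hat:.3f} (true {tau_true})")
# A fingerprint check could flag commands whose transient does not match
# the enrolled time constant of the claimed actuator.
```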
arxiv-661600
|
2409.16537
|
A QoE-Aware Split Inference Accelerating Algorithm for NOMA-based Edge Intelligence
|
<|reference_start|>A QoE-Aware Split Inference Accelerating Algorithm for NOMA-based Edge Intelligence: Although AI has been widely adopted and has significantly changed our lives, deploying large AI models directly on resource-limited edge devices is impractical. Model split inference has therefore been proposed to improve the performance of edge intelligence (EI): the AI model is divided into sub-models, and the resource-intensive sub-model is offloaded wirelessly to an edge server, reducing resource requirements and inference latency. However, previous works concentrate mainly on improving and optimizing system QoS and ignore QoE, another critical metric for users besides QoS. Although QoE has been widely studied in edge computing (EC), given the differences between task offloading in EC and split inference in EI, and the QoE-specific issues that remain unaddressed in both EC and EI, existing algorithms cannot work effectively in edge split inference scenarios. This paper therefore proposes an effective resource allocation algorithm, abbreviated as ERA, for accelerating split inference in EI and achieving a tradeoff between inference delay, QoE, and resource consumption. Specifically, ERA takes resource consumption, QoE, and inference latency into account to find the optimal model split strategy and resource allocation strategy. Since minimum inference delay, minimum resource consumption, and maximum QoE cannot be attained simultaneously, a gradient descent (GD) based algorithm is adopted to find the optimal tradeoff among them. Moreover, a loop-iteration GD approach is developed to reduce the complexity of the GD algorithm caused by parameter discretization. Additionally, the properties of the proposed algorithms are investigated, including convergence, complexity, and approximation error. Experimental results demonstrate that ERA performs much better than previous approaches.<|reference_end|>
|
arxiv
|
@article{yuan2024a,
title={A QoE-Aware Split Inference Accelerating Algorithm for NOMA-based Edge
Intelligence},
author={Xin Yuan, Ning Li, Quan Chen, Wenchao Xu, Zhaoxin Zhang, Song Guo},
journal={arXiv preprint arXiv:2409.16537},
year={2024},
archivePrefix={arXiv},
eprint={2409.16537},
primaryClass={cs.LG}
}
|
yuan2024a
|
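The toy below conveys the tradeoff structure in the abstract, not the ERA algorithm itself: relax the discrete split point to a continuous variable, run gradient descent on a weighted sum of delay, resource use, and negative QoE, then round. The three component functions and weights are invented for illustration.

```python
# Toy tradeoff optimization (illustrative, not the ERA algorithm): gradient
# descent on a weighted objective over a relaxed split point s, then rounding.
import numpy as np

def delay(s):    return 2.0 / (s + 0.5) + 0.3 * s       # device vs. uplink cost
def resource(s): return 0.8 * s                         # edge-side consumption
def qoe(s):      return np.log1p(3.0 * s)               # diminishing returns

w_d, w_r, w_q = 1.0, 0.5, 1.2
obj = lambda s: w_d * delay(s) + w_r * resource(s) - w_q * qoe(s)

s, lr, eps = 2.0, 0.05, 1e-4
for _ in range(500):
    grad = (obj(s + eps) - obj(s - eps)) / (2 * eps)    # numerical gradient
    s = np.clip(s - lr * grad, 0.0, 10.0)

print(f"relaxed optimum s* = {s:.3f}, rounded split layer = {round(s)}")
```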