corpus_id (string, 7-12 chars) | paper_id (string, 9-16 chars) | title (string, 1-261 chars) | abstract (string, 70-4.02k chars) | source (1 class) | bibtex (string, 208-20.9k chars) | citation_key (string, 6-100 chars) |
---|---|---|---|---|---|---|
arxiv-662401
|
2409.17986
|
Supra-Laplacian Encoding for Transformer on Dynamic Graphs
|
<|reference_start|>Supra-Laplacian Encoding for Transformer on Dynamic Graphs: Fully connected Graph Transformers (GT) have rapidly become prominent in the static graph community as an alternative to Message-Passing models, which suffer from a lack of expressivity, oversquashing, and under-reaching. However, in a dynamic context, by interconnecting all nodes at multiple snapshots with self-attention, GT loses both structural and temporal information. In this work, we introduce Supra-LAplacian encoding for spatio-temporal TransformErs (SLATE), a new spatio-temporal encoding to leverage the GT architecture while keeping spatio-temporal information. Specifically, we transform Discrete Time Dynamic Graphs into multi-layer graphs and take advantage of the spectral properties of their associated supra-Laplacian matrix. Our second contribution explicitly models nodes' pairwise relationships with a cross-attention mechanism, providing an accurate edge representation for dynamic link prediction. SLATE outperforms numerous state-of-the-art methods based on Message-Passing Graph Neural Networks combined with recurrent models (e.g., LSTM), and Dynamic Graph Transformers, on 9 datasets. Code and instructions to reproduce our results will be open-sourced.<|reference_end|>
|
arxiv
|
@article{karmim2024supra-laplacian,
title={Supra-Laplacian Encoding for Transformer on Dynamic Graphs},
author={Yannis Karmim, Marc Lafon, Rapha\"el Fournier S\'niehotta, Nicolas
Thome},
journal={arXiv preprint arXiv:2409.17986},
year={2024},
archivePrefix={arXiv},
eprint={2409.17986},
primaryClass={cs.LG}
}
|
karmim2024supra-laplacian
|
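The supra-Laplacian construction mentioned in the SLATE abstract above (arXiv:2409.17986) can be illustrated in a few lines: stack the per-snapshot adjacency matrices of a discrete-time dynamic graph into a block-diagonal supra-adjacency matrix, couple each node to its own copy in the neighboring snapshot, and use the leading Laplacian eigenvectors as a spatio-temporal positional encoding. The sketch below follows that recipe under simple assumptions (uniform inter-layer coupling, unnormalized Laplacian); it is illustrative only and not the authors' released code.

```python
import numpy as np

def supra_laplacian_encoding(snapshots, k=8, coupling=1.0):
    """Toy spatio-temporal encoding from a discrete-time dynamic graph.

    snapshots : list of (n, n) adjacency matrices, one per time step.
    k         : number of Laplacian eigenvectors kept per node copy.
    coupling  : weight of the inter-layer edges linking a node to its
                own copy in the next snapshot.
    """
    T, n = len(snapshots), snapshots[0].shape[0]
    N = T * n
    supra_adj = np.zeros((N, N))
    # Intra-layer blocks: each snapshot sits on the diagonal.
    for t, A in enumerate(snapshots):
        supra_adj[t * n:(t + 1) * n, t * n:(t + 1) * n] = A
    # Inter-layer coupling: node i at time t <-> node i at time t+1.
    for t in range(T - 1):
        idx_t = np.arange(t * n, (t + 1) * n)
        idx_next = idx_t + n
        supra_adj[idx_t, idx_next] = coupling
        supra_adj[idx_next, idx_t] = coupling
    # Supra-Laplacian and its spectral decomposition.
    degree = np.diag(supra_adj.sum(axis=1))
    supra_lap = degree - supra_adj
    _, eigvecs = np.linalg.eigh(supra_lap)
    # Skip the trivial constant eigenvector, keep the next k as encodings.
    return eigvecs[:, 1:k + 1]  # shape (T * n, k)

# Example: three random undirected snapshots of a 10-node graph.
rng = np.random.default_rng(0)
snaps = [(rng.random((10, 10)) < 0.2).astype(float) for _ in range(3)]
snaps = [np.triu(A, 1) + np.triu(A, 1).T for A in snaps]  # symmetrize
print(supra_laplacian_encoding(snaps).shape)  # (30, 8)
```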
arxiv-662402
|
2409.17987
|
LLM4Brain: Training a Large Language Model for Brain Video Understanding
|
<|reference_start|>LLM4Brain: Training a Large Language Model for Brain Video Understanding: Decoding visual-semantic information from brain signals, such as functional MRI (fMRI), across different subjects poses significant challenges, including low signal-to-noise ratio, limited data availability, and cross-subject variability. Recent advancements in large language models (LLMs) show remarkable effectiveness in processing multimodal information. In this study, we introduce an LLM-based approach for reconstructing visual-semantic information from fMRI signals elicited by video stimuli. Specifically, we employ fine-tuning techniques on an fMRI encoder equipped with adaptors to transform brain responses into latent representations aligned with the video stimuli. Subsequently, these representations are mapped to the textual modality by the LLM. In particular, we integrate self-supervised domain adaptation methods to enhance the alignment between visual-semantic information and brain responses. Our proposed method achieves good results on various quantitative semantic metrics, while yielding similarity with the ground-truth information.<|reference_end|>
|
arxiv
|
@article{zheng2024llm4brain:,
title={LLM4Brain: Training a Large Language Model for Brain Video Understanding},
author={Ruizhe Zheng and Lichao Sun},
journal={arXiv preprint arXiv:2409.17987},
year={2024},
archivePrefix={arXiv},
eprint={2409.17987},
primaryClass={cs.CV cs.HC}
}
|
zheng2024llm4brain:
|
arxiv-662403
|
2409.17988
|
Deblur e-NeRF: NeRF from Motion-Blurred Events under High-speed or Low-light Conditions
|
<|reference_start|>Deblur e-NeRF: NeRF from Motion-Blurred Events under High-speed or Low-light Conditions: The stark contrast in the design philosophy of an event camera makes it particularly ideal for operating under high-speed, high dynamic range and low-light conditions, where standard cameras underperform. Nonetheless, event cameras still suffer from some amount of motion blur, especially under these challenging conditions, contrary to what most think. This is attributed to the limited bandwidth of the event sensor pixel, which is mostly proportional to the light intensity. Thus, to ensure that event cameras can truly excel in such conditions where they have an edge over standard cameras, it is crucial to account for event motion blur in downstream applications, especially reconstruction. However, none of the recent works on reconstructing Neural Radiance Fields (NeRFs) from events, nor event simulators, have considered the full effects of event motion blur. To this end, we propose Deblur e-NeRF, a novel method to directly and effectively reconstruct blur-minimal NeRFs from motion-blurred events generated under high-speed motion or low-light conditions. The core component of this work is a physically-accurate pixel bandwidth model proposed to account for event motion blur under arbitrary speed and lighting conditions. We also introduce a novel threshold-normalized total variation loss to improve the regularization of large textureless patches. Experiments on real and novel realistically simulated sequences verify the effectiveness of our method. Our code, event simulator and synthetic event dataset will be open-sourced.<|reference_end|>
|
arxiv
|
@article{low2024deblur,
title={Deblur e-NeRF: NeRF from Motion-Blurred Events under High-speed or
Low-light Conditions},
author={Weng Fei Low, Gim Hee Lee},
journal={arXiv preprint arXiv:2409.17988},
year={2024},
archivePrefix={arXiv},
eprint={2409.17988},
primaryClass={cs.CV cs.GR cs.RO cs.SY eess.SY}
}
|
low2024deblur
|
arxiv-662404
|
2409.17990
|
Extracting Affect Aggregates from Longitudinal Social Media Data with Temporal Adapters for Large Language Models
|
<|reference_start|>Extracting Affect Aggregates from Longitudinal Social Media Data with Temporal Adapters for Large Language Models: This paper proposes temporally aligned Large Language Models (LLMs) as a tool for longitudinal analysis of social media data. We fine-tune Temporal Adapters for Llama 3 8B on full timelines from a panel of British Twitter users, and extract longitudinal aggregates of emotions and attitudes with established questionnaires. We validate our estimates against representative British survey data and find strong positive, significant correlations for several collective emotions. The obtained estimates are robust across multiple training seeds and prompt formulations, and in line with collective emotions extracted using a traditional classification model trained on labeled data. To the best of our knowledge, this is the first work to extend the analysis of affect in LLMs to a longitudinal setting through Temporal Adapters. Our work enables new approaches towards the longitudinal analysis of social media data.<|reference_end|>
|
arxiv
|
@article{ahnert2024extracting,
title={Extracting Affect Aggregates from Longitudinal Social Media Data with
Temporal Adapters for Large Language Models},
author={Georg Ahnert, Max Pellert, David Garcia, Markus Strohmaier},
journal={arXiv preprint arXiv:2409.17990},
year={2024},
archivePrefix={arXiv},
eprint={2409.17990},
primaryClass={cs.CY cs.CL}
}
|
ahnert2024extracting
|
arxiv-662405
|
2409.17991
|
Dimension-independent learning rates for high-dimensional classification problems
|
<|reference_start|>Dimension-independent learning rates for high-dimensional classification problems: We study the problem of approximating and estimating classification functions that have their decision boundary in the $RBV^2$ space. Functions of $RBV^2$ type arise naturally as solutions of regularized neural network learning problems and neural networks can approximate these functions without the curse of dimensionality. We modify existing results to show that every $RBV^2$ function can be approximated by a neural network with bounded weights. Thereafter, we prove the existence of a neural network with bounded weights approximating a classification function. And we leverage these bounds to quantify the estimation rates. Finally, we present a numerical study that analyzes the effect of different regularity conditions on the decision boundaries.<|reference_end|>
|
arxiv
|
@article{lerma-pineda2024dimension-independent,
title={Dimension-independent learning rates for high-dimensional classification
problems},
author={Andres Felipe Lerma-Pineda, Philipp Petersen, Simon Frieder, Thomas
Lukasiewicz},
journal={arXiv preprint arXiv:2409.17991},
year={2024},
archivePrefix={arXiv},
eprint={2409.17991},
primaryClass={cs.LG cs.NA math.NA stat.ML}
}
|
lerma-pineda2024dimension-independent
|
arxiv-662406
|
2409.17992
|
LoopSR: Looping Sim-and-Real for Lifelong Policy Adaptation of Legged Robots
|
<|reference_start|>LoopSR: Looping Sim-and-Real for Lifelong Policy Adaptation of Legged Robots: Reinforcement Learning (RL) has shown its remarkable and generalizable capability in legged locomotion through sim-to-real transfer. However, while adaptive methods like domain randomization are expected to make policy more robust to diverse environments, such comprehensiveness potentially detracts from the policy's performance in any specific environment according to the No Free Lunch theorem, leading to a suboptimal solution once deployed in the real world. To address this issue, we propose a lifelong policy adaptation framework named LoopSR, which utilizes a transformer-based encoder to project real-world trajectories into a latent space, and accordingly reconstruct the real-world environments back in simulation for further improvement. Autoencoder architecture and contrastive learning methods are adopted to better extract the characteristics of real-world dynamics. The simulation parameters for continual training are derived by combining predicted parameters from the decoder with retrieved parameters from the simulation trajectory dataset. By leveraging the continual training, LoopSR achieves superior data efficiency compared with strong baselines, with only a limited amount of data to yield eminent performance in both sim-to-sim and sim-to-real experiments.<|reference_end|>
|
arxiv
|
@article{wu2024loopsr:,
title={LoopSR: Looping Sim-and-Real for Lifelong Policy Adaptation of Legged
Robots},
author={Peilin Wu, Weiji Xie, Jiahang Cao, Hang Lai, Weinan Zhang},
journal={arXiv preprint arXiv:2409.17992},
year={2024},
archivePrefix={arXiv},
eprint={2409.17992},
primaryClass={cs.RO cs.LG}
}
|
wu2024loopsr:
|
arxiv-662407
|
2409.17993
|
InterNet: Unsupervised Cross-modal Homography Estimation Based on Interleaved Modality Transfer and Self-supervised Homography Prediction
|
<|reference_start|>InterNet: Unsupervised Cross-modal Homography Estimation Based on Interleaved Modality Transfer and Self-supervised Homography Prediction: We propose a novel unsupervised cross-modal homography estimation framework, based on interleaved modality transfer and self-supervised homography prediction, named InterNet. InterNet integrates modality transfer and self-supervised homography estimation, introducing an innovative interleaved optimization framework to alternately promote both components. The modality transfer gradually narrows the modality gaps, facilitating the self-supervised homography estimation to fully leverage the synthetic intra-modal data. The self-supervised homography estimation progressively achieves reliable predictions, thereby providing robust cross-modal supervision for the modality transfer. To further boost the estimation accuracy, we also formulate a fine-grained homography feature loss to improve the connection between two components. Furthermore, we employ a simple yet effective distillation training technique to reduce model parameters and improve cross-domain generalization ability while maintaining comparable performance. Experiments reveal that InterNet achieves the state-of-the-art (SOTA) performance among unsupervised methods, and even outperforms many supervised methods such as MHN and LocalTrans.<|reference_end|>
|
arxiv
|
@article{yu2024internet:,
title={InterNet: Unsupervised Cross-modal Homography Estimation Based on
Interleaved Modality Transfer and Self-supervised Homography Prediction},
author={Junchen Yu, Si-Yuan Cao, Runmin Zhang, Chenghao Zhang, Jianxin Hu, Zhu
Yu, Beinan Yu, Hui-liang Shen},
journal={arXiv preprint arXiv:2409.17993},
year={2024},
archivePrefix={arXiv},
eprint={2409.17993},
primaryClass={cs.CV}
}
|
yu2024internet:
|
arxiv-662408
|
2409.17994
|
CRoP: Context-wise Robust Static Human-Sensing Personalization
|
<|reference_start|>CRoP: Context-wise Robust Static Human-Sensing Personalization: The advancements in deep learning and the internet-of-things have led to diverse human sensing applications. However, distinct patterns in human sensing, influenced by various factors or contexts, challenge generic neural network models' performance due to natural distribution shifts. To address this, personalization tailors models to individual users. Yet most personalization studies overlook intra-user heterogeneity across contexts in sensory data, limiting intra-user generalizability. This limitation is especially critical in clinical applications, where limited data availability hampers both generalizability and personalization. Notably, intra-user sensing attributes are expected to change due to external factors such as treatment progression, further complicating the challenges. This work introduces CRoP, a novel static personalization approach using an off-the-shelf pre-trained model and pruning to optimize personalization and generalization. CRoP shows superior personalization effectiveness and intra-user robustness across four human-sensing datasets, including two from real-world health domains, highlighting its practical and social impact. Additionally, to support CRoP's generalization ability and design choices, we provide empirical justification through gradient inner product analysis, ablation studies, and comparisons against state-of-the-art baselines.<|reference_end|>
|
arxiv
|
@article{kaur2024crop:,
title={CRoP: Context-wise Robust Static Human-Sensing Personalization},
author={Sawinder Kaur, Avery Gump, Jingyu Xin, Yi Xiao, Harshit Sharma, Nina R
Benway, Jonathan L Preston, Asif Salekin},
journal={arXiv preprint arXiv:2409.17994},
year={2024},
archivePrefix={arXiv},
eprint={2409.17994},
primaryClass={cs.AI}
}
|
kaur2024crop:
|
arxiv-662409
|
2409.17995
|
Joint Localization and Planning using Diffusion
|
<|reference_start|>Joint Localization and Planning using Diffusion: Diffusion models have been successfully applied to robotics problems such as manipulation and vehicle path planning. In this work, we explore their application to end-to-end navigation -- including both perception and planning -- by considering the problem of jointly performing global localization and path planning in known but arbitrary 2D environments. In particular, we introduce a diffusion model which produces collision-free paths in a global reference frame given an egocentric LIDAR scan, an arbitrary map, and a desired goal position. To this end, we implement diffusion in the space of paths in SE(2), and describe how to condition the denoising process on both obstacles and sensor observations. In our evaluation, we show that the proposed conditioning techniques enable generalization to realistic maps of considerably different appearance than the training environment, demonstrate our model's ability to accurately describe ambiguous solutions, and run extensive simulation experiments showcasing our model's use as a real-time, end-to-end localization and planning stack.<|reference_end|>
|
arxiv
|
@article{beyer2024joint,
title={Joint Localization and Planning using Diffusion},
author={L. Lao Beyer, S. Karaman},
journal={arXiv preprint arXiv:2409.17995},
year={2024},
archivePrefix={arXiv},
eprint={2409.17995},
primaryClass={cs.RO cs.AI cs.LG}
}
|
beyer2024joint
|
arxiv-662410
|
2409.17996
|
PhoCoLens: Photorealistic and Consistent Reconstruction in Lensless Imaging
|
<|reference_start|>PhoCoLens: Photorealistic and Consistent Reconstruction in Lensless Imaging: Lensless cameras offer significant advantages in size, weight, and cost compared to traditional lens-based systems. Without a focusing lens, lensless cameras rely on computational algorithms to recover the scenes from multiplexed measurements. However, current algorithms struggle with inaccurate forward imaging models and insufficient priors to reconstruct high-quality images. To overcome these limitations, we introduce a novel two-stage approach for consistent and photorealistic lensless image reconstruction. The first stage of our approach ensures data consistency by focusing on accurately reconstructing the low-frequency content with a spatially varying deconvolution method that adjusts to changes in the Point Spread Function (PSF) across the camera's field of view. The second stage enhances photorealism by incorporating a generative prior from pre-trained diffusion models. By conditioning on the low-frequency content retrieved in the first stage, the diffusion model effectively reconstructs the high-frequency details that are typically lost in the lensless imaging process, while also maintaining image fidelity. Our method achieves a superior balance between data fidelity and visual quality compared to existing methods, as demonstrated with two popular lensless systems, PhlatCam and DiffuserCam. Project website: https://phocolens.github.io/.<|reference_end|>
|
arxiv
|
@article{cai2024phocolens:,
title={PhoCoLens: Photorealistic and Consistent Reconstruction in Lensless
Imaging},
author={Xin Cai, Zhiyuan You, Hailong Zhang, Wentao Liu, Jinwei Gu, Tianfan
Xue},
journal={arXiv preprint arXiv:2409.17996},
year={2024},
archivePrefix={arXiv},
eprint={2409.17996},
primaryClass={eess.IV cs.CV cs.LG}
}
|
cai2024phocolens:
|
arxiv-662411
|
2409.17997
|
Distributed Invariant Unscented Kalman Filter based on Inverse Covariance Intersection with Intermittent Measurements
|
<|reference_start|>Distributed Invariant Unscented Kalman Filter based on Inverse Covariance Intersection with Intermittent Measurements: This paper studies the problem of distributed state estimation (DSE) over sensor networks on matrix Lie groups, which is crucial for applications where system states evolve on Lie groups rather than vector spaces. We propose a diffusion-based distributed invariant Unscented Kalman Filter using the inverse covariance intersection (DIUKF-ICI) method to address target tracking in 3D environments. Unlike existing distributed UKFs confined to vector spaces, our approach extends the distributed UKF framework to Lie groups, enabling local estimates to be fused with intermediate information from neighboring agents on Lie groups. To handle the unknown correlations across local estimates, we extend the ICI fusion strategy to matrix Lie groups for the first time and integrate it into the diffusion algorithm. We demonstrate that the estimation error of the proposed method is bounded. Additionally, the algorithm is fully distributed, robust against intermittent measurements, and adaptable to time-varying communication topologies. The effectiveness of the proposed method is validated through extensive Monte-Carlo simulations.<|reference_end|>
|
arxiv
|
@article{ruan2024distributed,
title={Distributed Invariant Unscented Kalman Filter based on Inverse
Covariance Intersection with Intermittent Measurements},
author={Zhian Ruan, Yizhi Zhou},
journal={arXiv preprint arXiv:2409.17997},
year={2024},
archivePrefix={arXiv},
eprint={2409.17997},
primaryClass={eess.SY cs.SY}
}
|
ruan2024distributed
|
arxiv-662412
|
2409.18000
|
Safe Time-Varying Optimization based on Gaussian Processes with Spatio-Temporal Kernel
|
<|reference_start|>Safe Time-Varying Optimization based on Gaussian Processes with Spatio-Temporal Kernel: Ensuring safety is a key aspect in sequential decision making problems, such as robotics or process control. The complexity of the underlying systems often makes finding the optimal decision challenging, especially when the safety-critical system is time-varying. Overcoming the problem of optimizing an unknown time-varying reward subject to unknown time-varying safety constraints, we propose TVSafeOpt, a new algorithm built on Bayesian optimization with a spatio-temporal kernel. The algorithm is capable of safely tracking a time-varying safe region without the need for explicit change detection. Optimality guarantees are also provided for the algorithm when the optimization problem becomes stationary. We show that TVSafeOpt compares favorably against SafeOpt on synthetic data, both regarding safety and optimality. Evaluation on a realistic case study with gas compressors confirms that TVSafeOpt ensures safety when solving time-varying optimization problems with unknown reward and safety functions.<|reference_end|>
|
arxiv
|
@article{li2024safe,
title={Safe Time-Varying Optimization based on Gaussian Processes with
Spatio-Temporal Kernel},
author={Jialin Li and Marta Zagorowska and Giulia De Pasquale and Alisa
Rupenyan and John Lygeros},
journal={arXiv preprint arXiv:2409.18000},
year={2024},
archivePrefix={arXiv},
eprint={2409.18000},
primaryClass={cs.LG math.OC}
}
|
li2024safe
|
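The spatio-temporal kernel idea in the TVSafeOpt abstract above (arXiv:2409.18000) is commonly realized as a product of a spatial kernel and a temporal kernel, so that correlation decays with both distance in the decision space and elapsed time, letting stale observations fade from the Gaussian-process posterior. The sketch below uses squared-exponential factors with made-up lengthscales; the paper's actual kernel and hyperparameters may differ.

```python
import numpy as np

def rbf(d2, lengthscale):
    """Squared-exponential kernel value for a squared distance d2."""
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def spatio_temporal_kernel(x1, t1, x2, t2, ls_x=1.0, ls_t=5.0):
    """Product kernel k((x1,t1),(x2,t2)) = k_space(x1,x2) * k_time(t1,t2).

    Correlation decays with spatial distance and with elapsed time, so
    old observations gradually lose influence on the GP posterior that a
    safe Bayesian optimizer conditions on.
    """
    d2_space = np.sum((np.asarray(x1) - np.asarray(x2)) ** 2)
    d2_time = (t1 - t2) ** 2
    return rbf(d2_space, ls_x) * rbf(d2_time, ls_t)

# Two evaluations at the same point, 10 time steps apart, are less
# correlated than two taken at the same time.
print(spatio_temporal_kernel([0.3, 0.7], 0, [0.3, 0.7], 0))   # 1.0
print(spatio_temporal_kernel([0.3, 0.7], 0, [0.3, 0.7], 10))  # ~0.135
```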
arxiv-662413
|
2409.18003
|
Enhancing Tourism Recommender Systems for Sustainable City Trips Using Retrieval-Augmented Generation
|
<|reference_start|>Enhancing Tourism Recommender Systems for Sustainable City Trips Using Retrieval-Augmented Generation: Tourism Recommender Systems (TRS) have traditionally focused on providing personalized travel suggestions, often prioritizing user preferences without considering broader sustainability goals. Integrating sustainability into TRS has become essential with the increasing need to balance environmental impact, local community interests, and visitor satisfaction. This paper proposes a novel approach to enhancing TRS for sustainable city trips using Large Language Models (LLMs) and a modified Retrieval-Augmented Generation (RAG) pipeline. We enhance the traditional RAG system by incorporating a sustainability metric based on a city's popularity and seasonal demand during the prompt augmentation phase. This modification, called Sustainability Augmented Reranking (SAR), ensures the system's recommendations align with sustainability goals. Evaluations using popular open-source LLMs, such as Llama-3.1-Instruct-8B and Mistral-Instruct-7B, demonstrate that the SAR-enhanced approach consistently matches or outperforms the baseline (without SAR) across most metrics, highlighting the benefits of incorporating sustainability into TRS.<|reference_end|>
|
arxiv
|
@article{banerjee2024enhancing,
title={Enhancing Tourism Recommender Systems for Sustainable City Trips Using
Retrieval-Augmented Generation},
author={Ashmi Banerjee, Adithi Satish, Wolfgang W\"orndl},
journal={arXiv preprint arXiv:2409.18003},
year={2024},
archivePrefix={arXiv},
eprint={2409.18003},
primaryClass={cs.IR}
}
|
banerjee2024enhancing
|
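Sustainability Augmented Reranking (SAR), as summarized in the abstract above (arXiv:2409.18003), reorders retrieved destinations by trading off retrieval relevance against a sustainability signal derived from popularity and seasonal demand before the prompt is augmented. The scoring formula, weights, and field names in the sketch below are assumptions for illustration, not the paper's definition.

```python
from dataclasses import dataclass

@dataclass
class City:
    name: str
    relevance: float        # retrieval score from the RAG retriever, in [0, 1]
    popularity: float       # normalized visitor pressure, in [0, 1]
    seasonal_demand: float  # normalized demand for the queried month, in [0, 1]

def sustainability_score(city: City) -> float:
    # Less popular, less seasonally crowded destinations score higher.
    return 1.0 - 0.5 * (city.popularity + city.seasonal_demand)

def sar_rerank(candidates: list[City], weight: float = 0.3) -> list[City]:
    """Re-rank retrieved cities by a convex mix of relevance and sustainability."""
    def combined(city: City) -> float:
        return (1 - weight) * city.relevance + weight * sustainability_score(city)
    return sorted(candidates, key=combined, reverse=True)

cities = [
    City("Venice", relevance=0.95, popularity=0.98, seasonal_demand=0.9),
    City("Ljubljana", relevance=0.85, popularity=0.35, seasonal_demand=0.4),
]
print([c.name for c in sar_rerank(cities)])  # the less crowded city is promoted
```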
arxiv-662414
|
2409.18006
|
Multilingual Evaluation of Long Context Retrieval and Reasoning
|
<|reference_start|>Multilingual Evaluation of Long Context Retrieval and Reasoning: Recent large language models (LLMs) demonstrate impressive capabilities in handling long contexts, some exhibiting near-perfect recall on synthetic retrieval tasks. However, these evaluations have mainly focused on English text and involved a single target sentence within lengthy contexts. Our work investigates how LLM performance generalizes to multilingual settings with multiple hidden target sentences. We comprehensively evaluate several long-context LLMs on retrieval and reasoning tasks across five languages: English, Vietnamese, Indonesian, Swahili, and Somali. These languages share the Latin script but belong to distinct language families and resource levels. Our analysis reveals a significant performance gap between languages. The best-performing models, such as Gemini-1.5 and GPT-4o, achieve around 96% accuracy in English but only around 36% in Somali with a single target sentence. However, this accuracy drops to 40% in English and 0% in Somali when dealing with three target sentences. Our findings highlight the challenges long-context LLMs face when processing longer contexts, an increase in the number of target sentences, or languages of lower resource levels.<|reference_end|>
|
arxiv
|
@article{agrawal2024evaluating,
title={Evaluating Multilingual Long-Context Models for Retrieval and Reasoning},
author={Ameeta Agrawal, Andy Dang, Sina Bagheri Nezhad, Rhitabrat Pokharel,
Russell Scheinberg},
journal={arXiv preprint arXiv:2409.18006},
year={2024},
archivePrefix={arXiv},
eprint={2409.18006},
primaryClass={cs.CL}
}
|
agrawal2024evaluating
|
arxiv-662415
|
2409.18009
|
Control Industrial Automation System with Large Language Models
|
<|reference_start|>Control Industrial Automation System with Large Language Models: Traditional industrial automation systems require specialized expertise to operate and complex reprogramming to adapt to new processes. Large language models offer the intelligence to make them more flexible and easier to use. However, LLMs' application in industrial settings is underexplored. This paper introduces a framework for integrating LLMs to achieve end-to-end control of industrial automation systems. At the core of the framework are an agent system designed for industrial tasks, a structured prompting method, and an event-driven information modeling mechanism that provides real-time data for LLM inference. The framework supplies LLMs with real-time events on different context semantic levels, allowing them to interpret the information, generate production plans, and control operations on the automation system. It also supports structured dataset creation for fine-tuning on this downstream application of LLMs. Our contribution includes a formal system design, proof-of-concept implementation, and a method for generating task-specific datasets for LLM fine-tuning and testing. This approach enables a more adaptive automation system that can respond to spontaneous events, while allowing easier operation and configuration through natural language for more intuitive human-machine interaction. We provide demo videos and detailed data on GitHub: https://github.com/YuchenXia/LLM4IAS<|reference_end|>
|
arxiv
|
@article{xia2024control,
title={Control Industrial Automation System with Large Language Models},
author={Yuchen Xia, Nasser Jazdi, Jize Zhang, Chaitanya Shah, Michael Weyrich},
journal={arXiv preprint arXiv:2409.18009},
year={2024},
archivePrefix={arXiv},
eprint={2409.18009},
primaryClass={eess.SY cs.AI cs.HC cs.MA cs.RO cs.SY}
}
|
xia2024control
|
arxiv-662416
|
2409.18010
|
End-to-end guarantees for indirect data-driven control of bilinear systems with finite stochastic data
|
<|reference_start|>End-to-end guarantees for indirect data-driven control of bilinear systems with finite stochastic data: In this paper we propose an end-to-end algorithm for indirect data-driven control for bilinear systems with stability guarantees. We consider the case where the collected i.i.d. data is affected by probabilistic noise with possibly unbounded support and leverage tools from statistical learning theory to derive finite sample identification error bounds. To this end, we solve the bilinear identification problem by solving a set of linear and affine identification problems, by a particular choice of a control input during the data collection phase. We provide a priori as well as data-dependent finite sample identification error bounds on the individual matrices as well as ellipsoidal bounds, both of which are structurally suitable for control. Further, we integrate the structure of the derived identification error bounds in a robust controller design to obtain an exponentially stable closed-loop. By means of an extensive numerical study we showcase the interplay between the controller design and the derived identification error bounds. Moreover, we note appealing connections of our results to indirect data-driven control of general nonlinear systems through Koopman operator theory and discuss how our results may be applied in this setup.<|reference_end|>
|
arxiv
|
@article{chatzikiriakos2024end-to-end,
title={End-to-end guarantees for indirect data-driven control of bilinear
systems with finite stochastic data},
author={Nicolas Chatzikiriakos, Robin Str\"asser, Frank Allg\"ower, Andrea
Iannelli},
journal={arXiv preprint arXiv:2409.18010},
year={2024},
archivePrefix={arXiv},
eprint={2409.18010},
primaryClass={eess.SY cs.SY math.OC stat.ML}
}
|
chatzikiriakos2024end-to-end
|
arxiv-662417
|
2409.18013
|
Spatiotemporal Learning on Cell-embedded Graphs
|
<|reference_start|>Spatiotemporal Learning on Cell-embedded Graphs: Data-driven simulation of physical systems has recently attracted significant attention, where many neural models have been developed. In particular, mesh-based graph neural networks (GNNs) have demonstrated significant potential in predicting spatiotemporal dynamics across arbitrary geometric domains. However, the existing node-edge message passing mechanism in GNNs limits the model's representation learning ability. In this paper, we propose a cell-embedded GNN model (aka CeGNN) to learn spatiotemporal dynamics with improved performance. Specifically, we introduce a learnable cell attribution to the node-edge message passing process, which better captures the spatial dependency of regional features. Such a strategy essentially upgrades the local aggregation scheme from the first order (e.g., from edge to node) to a higher order (e.g., from volume to edge and then to node), which takes advantage of volumetric information in message passing. Meanwhile, a novel feature-enhanced block is designed to further improve the performance of CeGNN and relieve the over-smoothing problem, via treating the latent features as basis functions. The extensive experiments on various PDE systems and one real-world dataset demonstrate that CeGNN achieves superior performance compared with other baseline models, particularly reducing the prediction error by up to one order of magnitude on several PDE systems.<|reference_end|>
|
arxiv
|
@article{mi2024spatiotemporal,
title={Spatiotemporal Learning on Cell-embedded Graphs},
author={Yuan Mi, Hao Sun},
journal={arXiv preprint arXiv:2409.18013},
year={2024},
archivePrefix={arXiv},
eprint={2409.18013},
primaryClass={cs.LG}
}
|
mi2024spatiotemporal
|
arxiv-662418
|
2409.18014
|
Role-RL: Online Long-Context Processing with Role Reinforcement Learning for Distinct LLMs in Their Optimal Roles
|
<|reference_start|>Role-RL: Online Long-Context Processing with Role Reinforcement Learning for Distinct LLMs in Their Optimal Roles: Long-context processing with large language models (LLMs) remains challenging because of implementation complexity, training efficiency, and data sparsity. To address this issue, we propose a new paradigm named Online Long-context Processing (OLP) for processing documents of unlimited length, which typically arise in the information reception and organization of diverse streaming media such as automated news reporting, live e-commerce, and viral short videos. Moreover, a dilemma often arises when trying to select the most suitable LLM from a rapidly growing pool of models while aiming for outstanding performance, affordable prices, and short response delays. In view of this, we also develop Role Reinforcement Learning (Role-RL) to automatically deploy different LLMs in their respective roles within the OLP pipeline according to their actual performance. Extensive experiments are conducted on our OLP-MINI dataset, where OLP with the Role-RL framework achieves an average recall rate of 93.2% on the OLP benchmark while reducing LLM cost by 79.4%. The code and dataset are publicly available at: https://anonymous.4open.science/r/Role-RL.<|reference_end|>
|
arxiv
|
@article{he2024role-rl:,
title={Role-RL: Online Long-Context Processing with Role Reinforcement Learning
for Distinct LLMs in Their Optimal Roles},
author={Lewei He, Tianyu Shi, Pengran Huang, Bingzhi Chen, Qianglong Chen,
Jiahui Pan},
journal={arXiv preprint arXiv:2409.18014},
year={2024},
archivePrefix={arXiv},
eprint={2409.18014},
primaryClass={cs.AI}
}
|
he2024role-rl:
|
arxiv-662419
|
2409.18016
|
Relating Superconducting Optoelectronic Networks to Classical Neurodynamics
|
<|reference_start|>Relating Superconducting Optoelectronic Networks to Classical Neurodynamics: The circuits comprising superconducting optoelectronic synapses, dendrites, and neurons are described by numerically cumbersome and formally opaque coupled differential equations. Reference 1 showed that a phenomenological model of superconducting loop neurons eliminates the need to solve the Josephson circuit equations that describe synapses and dendrites. The initial goal of the model was to decrease the time required for simulations, yet an additional benefit of the model was increased transparency of the underlying neural circuit operations and conceptual clarity regarding the connection of loop neurons to other physical systems. Whereas the original model simplified the treatment of the Josephson-junction dynamics, essentially by only considering low-pass versions of the dendritic outputs, the model resorted to an awkward treatment of spikes generated by semiconductor transmitter circuits that required explicitly checking for threshold crossings and distinct treatment of time steps wherein somatic threshold is reached. Here we extend that model to simplify the treatment of spikes coming from somas, again making use of the fact that in neural systems the downstream recipients of spike events almost always perform low-pass filtering. We provide comparisons between the first and second phenomenological models, quantifying the accuracy of the additional approximations. We identify regions of circuit parameter space in which the extended model works well and regions where it works poorly. For some circuit parameters it is possible to represent the downstream dendritic response to a single spike as well as coincidences or sequences of spikes, indicating the model is not simply a reduction to rate coding. The governing equations are shown to be nearly identical to those ubiquitous in the neuroscience literature for modeling leaky-integrator dendrites and neurons.<|reference_end|>
|
arxiv
|
@article{shainline2024relating,
title={Relating Superconducting Optoelectronic Networks to Classical
Neurodynamics},
author={Jeffrey M. Shainline, Bryce A. Primavera, and Ryan O'Loughlin},
journal={arXiv preprint arXiv:2409.18016},
year={2024},
archivePrefix={arXiv},
eprint={2409.18016},
primaryClass={cs.NE}
}
|
shainline2024relating
|
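The 'leaky-integrator' dendrite and neuron equations that the abstract above (arXiv:2409.18016) says its phenomenological model reduces to are, in their textbook neuroscience form, rate equations of the following shape (shown here as a generic illustration, not the paper's exact circuit model):

```latex
\tau_i \,\frac{\mathrm{d} s_i(t)}{\mathrm{d} t}
  \;=\; -\,s_i(t) \;+\; \sum_{j} w_{ij}\,\phi\bigl(s_j(t)\bigr) \;+\; I_i(t)
```

where $s_i$ is the state of unit $i$, $\tau_i$ its leak time constant, $w_{ij}$ the connection weights, $\phi$ a saturating transfer function, and $I_i(t)$ an external drive.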
arxiv-662420
|
2409.18017
|
Transferring disentangled representations: bridging the gap between synthetic and real images
|
<|reference_start|>Transferring disentangled representations: bridging the gap between synthetic and real images: Developing meaningful and efficient representations that separate the fundamental structure of the data generation mechanism is crucial in representation learning. However, Disentangled Representation Learning has not fully shown its potential on real images, because of correlated generative factors, their resolution and limited access to ground truth labels. Specifically on the latter, we investigate the possibility of leveraging synthetic data to learn general-purpose disentangled representations applicable to real data, discussing the effect of fine-tuning and what properties of disentanglement are preserved after the transfer. We provide an extensive empirical study to address these issues. In addition, we propose a new interpretable intervention-based metric, to measure the quality of factors encoding in the representation. Our results indicate that some level of disentanglement, transferring a representation from synthetic to real data, is possible and effective.<|reference_end|>
|
arxiv
|
@article{dapueto2024transferring,
title={Transferring disentangled representations: bridging the gap between
synthetic and real images},
author={Jacopo Dapueto, Nicoletta Noceti, Francesca Odone},
journal={arXiv preprint arXiv:2409.18017},
year={2024},
archivePrefix={arXiv},
eprint={2409.18017},
primaryClass={cs.CV cs.AI}
}
|
dapueto2024transferring
|
arxiv-662421
|
2409.18023
|
DARE: Diverse Visual Question Answering with Robustness Evaluation
|
<|reference_start|>DARE: Diverse Visual Question Answering with Robustness Evaluation: Vision Language Models (VLMs) extend remarkable capabilities of text-only large language models and vision-only models, and are able to learn from and process multi-modal vision-text input. While modern VLMs perform well on a number of standard image classification and image-text matching tasks, they still struggle with a number of crucial vision-language (VL) reasoning abilities such as counting and spatial reasoning. Moreover, while they might be very brittle to small variations in instructions and/or evaluation protocols, existing benchmarks fail to evaluate their robustness (or rather the lack of it). In order to couple challenging VL scenarios with comprehensive robustness evaluation, we introduce DARE, Diverse Visual Question Answering with Robustness Evaluation, a carefully created and curated multiple-choice VQA benchmark. DARE evaluates VLM performance on five diverse categories and includes four robustness-oriented evaluations based on the variations of: prompts, the subsets of answer options, the output format and the number of correct answers. Among a spectrum of other findings, we report that state-of-the-art VLMs still struggle with questions in most categories and are unable to consistently deliver their peak performance across the tested robustness evaluations. The worst case performance across the subsets of options is up to 34% below the performance in the standard case. The robustness of the open-source VLMs such as LLaVA 1.6 and Idefics2 cannot match the closed-source models such as GPT-4 and Gemini, but even the latter remain very brittle to different variations.<|reference_end|>
|
arxiv
|
@article{sterz2024dare:,
title={DARE: Diverse Visual Question Answering with Robustness Evaluation},
author={Hannah Sterz, Jonas Pfeiffer, Ivan Vuli\'c},
journal={arXiv preprint arXiv:2409.18023},
year={2024},
archivePrefix={arXiv},
eprint={2409.18023},
primaryClass={cs.CL}
}
|
sterz2024dare:
|
arxiv-662422
|
2409.18024
|
Report on the Workshop on Simulations for Information Access (Sim4IA 2024) at SIGIR 2024
|
<|reference_start|>Report on the Workshop on Simulations for Information Access (Sim4IA 2024) at SIGIR 2024: This paper is a report of the Workshop on Simulations for Information Access (Sim4IA) workshop at SIGIR 2024. The workshop had two keynotes, a panel discussion, nine lightning talks, and two breakout sessions. Key takeaways were user simulation's importance in academia and industry, the possible bridging of online and offline evaluation, and the issues of organizing a companion shared task around user simulations for information access. We report on how we organized the workshop, provide a brief overview of what happened at the workshop, and summarize the main topics and findings of the workshop and future work.<|reference_end|>
|
arxiv
|
@article{breuer2024report,
title={Report on the Workshop on Simulations for Information Access (Sim4IA
2024) at SIGIR 2024},
author={Timo Breuer, Christin Katharina Kreutz, Norbert Fuhr, Krisztian Balog,
Philipp Schaer, Nolwenn Bernard, Ingo Frommholz, Marcel Gohsen, Kaixin Ji,
Gareth J. F. Jones, J\"uri Keller, Jiqun Liu, Martin Mladenov, Gabriella
Pasi, Johanne Trippas, Xi Wang, Saber Zerhoudi, ChengXiang Zhai},
journal={arXiv preprint arXiv:2409.18024},
year={2024},
archivePrefix={arXiv},
eprint={2409.18024},
primaryClass={cs.IR}
}
|
breuer2024report
|
arxiv-662423
|
2409.18025
|
An Adversarial Perspective on Machine Unlearning for AI Safety
|
<|reference_start|>An Adversarial Perspective on Machine Unlearning for AI Safety: Large language models are finetuned to refuse questions about hazardous knowledge, but these protections can often be bypassed. Unlearning methods aim at completely removing hazardous capabilities from models and make them inaccessible to adversaries. This work challenges the fundamental differences between unlearning and traditional safety post-training from an adversarial perspective. We demonstrate that existing jailbreak methods, previously reported as ineffective against unlearning, can be successful when applied carefully. Furthermore, we develop a variety of adaptive methods that recover most supposedly unlearned capabilities. For instance, we show that finetuning on 10 unrelated examples or removing specific directions in the activation space can recover most hazardous capabilities for models edited with RMU, a state-of-the-art unlearning method. Our findings challenge the robustness of current unlearning approaches and question their advantages over safety training.<|reference_end|>
|
arxiv
|
@article{łucki2024an,
title={An Adversarial Perspective on Machine Unlearning for AI Safety},
author={Jakub {\L}ucki, Boyi Wei, Yangsibo Huang, Peter Henderson, Florian
Tram\`er, Javier Rando},
journal={arXiv preprint arXiv:2409.18025},
year={2024},
archivePrefix={arXiv},
eprint={2409.18025},
primaryClass={cs.LG cs.AI cs.CL cs.CR}
}
|
łucki2024an
|
arxiv-662424
|
2409.18026
|
ReliOcc: Towards Reliable Semantic Occupancy Prediction via Uncertainty Learning
|
<|reference_start|>ReliOcc: Towards Reliable Semantic Occupancy Prediction via Uncertainty Learning: Vision-centric semantic occupancy prediction plays a crucial role in autonomous driving, which requires accurate and reliable predictions from low-cost sensors. Although camera-based methods have notably narrowed the accuracy gap with LiDAR, there has been little research effort to explore the reliability of predicting semantic occupancy from cameras. In this paper, we conduct a comprehensive evaluation of existing semantic occupancy prediction models from a reliability perspective for the first time. Despite the gradual alignment of camera-based models with LiDAR in terms of accuracy, a significant reliability gap persists. To address this concern, we propose ReliOcc, a method designed to enhance the reliability of camera-based occupancy networks. ReliOcc provides a plug-and-play scheme for existing models, which integrates hybrid uncertainty from individual voxels with sampling-based noise and relative voxels through mix-up learning. Besides, an uncertainty-aware calibration strategy is devised to further enhance model reliability in offline mode. Extensive experiments under various settings demonstrate that ReliOcc significantly enhances model reliability while maintaining the accuracy of both geometric and semantic predictions. Importantly, our proposed approach exhibits robustness to sensor failures and out-of-domain noise during inference.<|reference_end|>
|
arxiv
|
@article{wang2024reliocc:,
title={ReliOcc: Towards Reliable Semantic Occupancy Prediction via Uncertainty
Learning},
author={Song Wang, Zhongdao Wang, Jiawei Yu, Wentong Li, Bailan Feng, Junbo
Chen, Jianke Zhu},
journal={arXiv preprint arXiv:2409.18026},
year={2024},
archivePrefix={arXiv},
eprint={2409.18026},
primaryClass={cs.CV cs.RO}
}
|
wang2024reliocc:
|
arxiv-662425
|
2409.18028
|
Compositional Hardness of Code in Large Language Models -- A Probabilistic Perspective
|
<|reference_start|>Compositional Hardness of Code in Large Language Models -- A Probabilistic Perspective: A common practice in large language model (LLM) usage for complex analytical tasks such as code generation is to sample a solution for the entire task within the model's context window. Previous works have shown that subtask decomposition within the model's context (chain of thought) is beneficial for solving such tasks. In this work, we point out a limitation of LLMs' ability to perform several sub-tasks within the same context window - an in-context hardness of composition, pointing to an advantage for distributing a decomposed problem in a multi-agent system of LLMs. The hardness of composition is quantified by a generation complexity metric, i.e., the number of LLM generations required to sample at least one correct solution. We find a gap between the generation complexity of solving a compositional problem within the same context relative to distributing it among multiple agents, which increases exponentially with the solution's length. We prove our results theoretically and demonstrate them empirically.<|reference_end|>
|
arxiv
|
@article{wolf2024compositional,
title={Compositional Hardness of Code in Large Language Models -- A
Probabilistic Perspective},
author={Yotam Wolf, Binyamin Rothberg, Dorin Shteyman, Amnon Shashua},
journal={arXiv preprint arXiv:2409.18028},
year={2024},
archivePrefix={arXiv},
eprint={2409.18028},
primaryClass={cs.AI cs.CL}
}
|
wolf2024compositional
|
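A back-of-the-envelope version of the generation-complexity gap described in the abstract above (arXiv:2409.18028), under the simplifying assumption that each of $k$ sub-tasks succeeds independently with probability $p$ per generation (an illustration of the intuition, not the paper's formal statement):

```latex
\mathbb{E}\bigl[N_{\text{single context}}\bigr] \;=\; \frac{1}{p^{k}}
\qquad \text{vs.} \qquad
\mathbb{E}\bigl[N_{\text{multi-agent}}\bigr] \;=\; \frac{k}{p}
```

Under this toy model the expected number of generations in a single context grows exponentially in $k$, while distributing the sub-tasks across agents grows only linearly, consistent with the exponential-in-solution-length gap the authors report.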
arxiv-662426
|
2409.18030
|
Certifying rings of integers in number fields
|
<|reference_start|>Certifying rings of integers in number fields: Number fields and their rings of integers, which generalize the rational numbers and the integers, are foundational objects in number theory. There are several computer algebra systems and databases concerned with the computational aspects of these. In particular, computing the ring of integers of a given number field is one of the main tasks of computational algebraic number theory. In this paper, we describe a formalization in Lean 4 for certifying such computations. In order to accomplish this, we developed several data types amenable to computation. Moreover, many other underlying mathematical concepts and results had to be formalized, most of which are also of independent interest. These include resultants and discriminants, as well as methods for proving irreducibility of univariate polynomials over finite fields and over the rational numbers. To illustrate the feasibility of our strategy, we formally verified entries from the $\textit{Number fields}$ section of the $\textit{L-functions and modular forms database}$ (LMFDB). These concern, for several number fields, the explicitly given $\textit{integral basis}$ of the ring of integers and the $\textit{discriminant}$. To accomplish this, we wrote SageMath code that computes the corresponding certificates and outputs a Lean proof of the statement to be verified.<|reference_end|>
|
arxiv
|
@article{baanen2024certifying,
title={Certifying rings of integers in number fields},
author={Anne Baanen, Alain Chavarri Villarello, Sander R. Dahmen},
journal={arXiv preprint arXiv:2409.18030},
year={2024},
archivePrefix={arXiv},
eprint={2409.18030},
primaryClass={cs.LO math.NT}
}
|
baanen2024certifying
|
arxiv-662427
|
2409.18031
|
Reasoning Multi-Agent Behavioral Topology for Interactive Autonomous Driving
|
<|reference_start|>Reasoning Multi-Agent Behavioral Topology for Interactive Autonomous Driving: Autonomous driving systems aim for safe and socially consistent driving through the behavioral integration among interactive agents. However, challenges remain due to multi-agent scene uncertainty and heterogeneous interaction. Current dense and sparse behavioral representations struggle with inefficiency and inconsistency in multi-agent modeling, leading to instability of collective behavioral patterns when integrating prediction and planning (IPP). To address this, we initiate a topological formation that serves as a compliant behavioral foreground to guide downstream trajectory generations. Specifically, we introduce Behavioral Topology (BeTop), a pivotal topological formulation that explicitly represents the consensual behavioral pattern among multi-agent futures. BeTop is derived from braid theory to distill compliant interactive topology from multi-agent future trajectories. A synergistic learning framework (BeTopNet) supervised by BeTop facilitates the consistency of behavior prediction and planning within the predicted topology priors. Through imitative contingency learning, BeTop also effectively manages behavioral uncertainty for prediction and planning. Extensive verification on large-scale real-world datasets, including nuPlan and WOMD, demonstrates that BeTop achieves state-of-the-art performance in both prediction and planning tasks. Further validations on the proposed interactive scenario benchmark showcase planning compliance in interactive cases.<|reference_end|>
|
arxiv
|
@article{liu2024reasoning,
title={Reasoning Multi-Agent Behavioral Topology for Interactive Autonomous
Driving},
author={Haochen Liu, Li Chen, Yu Qiao, Chen Lv and Hongyang Li},
journal={arXiv preprint arXiv:2409.18031},
year={2024},
archivePrefix={arXiv},
eprint={2409.18031},
primaryClass={cs.RO}
}
|
liu2024reasoning
|
arxiv-662428
|
2409.18032
|
FlowBench: A Large Scale Benchmark for Flow Simulation over Complex Geometries
|
<|reference_start|>FlowBench: A Large Scale Benchmark for Flow Simulation over Complex Geometries: Simulating fluid flow around arbitrary shapes is key to solving various engineering problems. However, simulating flow physics across complex geometries remains numerically challenging and computationally resource-intensive, particularly when using conventional PDE solvers. Machine learning methods offer attractive opportunities to create fast and adaptable PDE solvers. However, benchmark datasets to measure the performance of such methods are scarce, especially for flow physics across complex geometries. We introduce FlowBench, a dataset for neural simulators with over 10K samples, which is currently larger than any publicly available flow physics dataset. FlowBench contains flow simulation data across complex geometries (\textit{parametric vs. non-parametric}), spanning a range of flow conditions (\textit{Reynolds number and Grashof number}), capturing a diverse array of flow phenomena (\textit{steady vs. transient; forced vs. free convection}), and for both 2D and 3D. FlowBench contains over 10K data samples, with each sample the outcome of a fully resolved, direct numerical simulation using a well-validated simulator framework designed for modeling transport phenomena in complex geometries. For each sample, we include velocity, pressure, and temperature field data at 3 different resolutions and several summary statistics features of engineering relevance (such as coefficients of lift and drag, and Nusselt numbers). Additionally, we include masks and signed distance fields for each shape. We envision that FlowBench will enable evaluating the interplay between complex geometry, coupled flow phenomena, and data sufficiency on the performance of current, and future, neural PDE solvers. We enumerate several evaluation metrics to help rank order the performance of neural PDE solvers. We benchmark the performance of several baseline methods including FNO, CNO, WNO, and DeepONet.<|reference_end|>
|
arxiv
|
@article{tali2024flowbench:,
title={FlowBench: A Large Scale Benchmark for Flow Simulation over Complex
Geometries},
author={Ronak Tali, Ali Rabeh, Cheng-Hau Yang, Mehdi Shadkhah, Samundra Karki,
Abhisek Upadhyaya, Suriya Dhakshinamoorthy, Marjan Saadati, Soumik Sarkar,
Adarsh Krishnamurthy, Chinmay Hegde, Aditya Balu, Baskar Ganapathysubramanian},
journal={arXiv preprint arXiv:2409.18032},
year={2024},
archivePrefix={arXiv},
eprint={2409.18032},
primaryClass={physics.flu-dyn cs.LG cs.NE}
}
|
tali2024flowbench:
|
arxiv-662429
|
2409.18033
|
Automated Detection and Analysis of Power Words in Persuasive Text Using Natural Language Processing
|
<|reference_start|>Automated Detection and Analysis of Power Words in Persuasive Text Using Natural Language Processing: Power words are terms that evoke strong emotional responses and significantly influence readers' behavior, playing a crucial role in fields like marketing, politics, and motivational writing. This study proposes a methodology for the automated detection and analysis of power words in persuasive text using a custom lexicon created from a comprehensive dataset scraped from online sources. A specialized Python package, The Text Monger, is created and employed to identify the presence and frequency of power words within a given text. By analyzing diverse datasets, including fictional excerpts, speeches, and marketing materials, the aim is to classify and assess the impact of power words on sentiment and reader engagement. The findings provide valuable insights into the effectiveness of power words across various domains, offering practical applications for content creators, advertisers, and policymakers looking to enhance their messaging and engagement strategies.<|reference_end|>
|
arxiv
|
@article{garje2024automated,
title={Automated Detection and Analysis of Power Words in Persuasive Text Using
Natural Language Processing},
author={Sahil Garje},
journal={arXiv preprint arXiv:2409.18033},
year={2024},
archivePrefix={arXiv},
eprint={2409.18033},
primaryClass={cs.CL}
}
|
garje2024automated
|
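The detection step described in the abstract above (arXiv:2409.18033) boils down to matching tokens against a power-word lexicon and reporting presence and frequency. The sketch below uses a tiny made-up lexicon; The Text Monger package and its scraped lexicon are not reproduced here.

```python
import re
from collections import Counter

POWER_LEXICON = {"guaranteed", "exclusive", "proven"}  # illustrative entries only

def power_word_stats(text: str) -> dict:
    """Count lexicon hits and report their share of all tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = Counter(t for t in tokens if t in POWER_LEXICON)
    total = len(tokens)
    return {
        "counts": dict(hits),
        "power_word_ratio": sum(hits.values()) / total if total else 0.0,
    }

sample = "Our proven, exclusive method is guaranteed to work. Proven results!"
print(power_word_stats(sample))
# {'counts': {'proven': 2, 'exclusive': 1, 'guaranteed': 1}, 'power_word_ratio': 0.4}
```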
arxiv-662430
|
2409.18036
|
Optimal Dynamic Parameterized Subset Sampling
|
<|reference_start|>Optimal Dynamic Parameterized Subset Sampling: In this paper, we study the Dynamic Parameterized Subset Sampling (DPSS) problem in the Word RAM model. In DPSS, the input is a set,~$S$, of~$n$ items, where each item,~$x$, has a non-negative integer weight,~$w(x)$. Given a pair of query parameters, $(\alpha, \beta)$, each of which is a non-negative rational number, a parameterized subset sampling query on~$S$ seeks to return a subset $T \subseteq S$ such that each item $x \in S$ is selected in~$T$, independently, with probability $p_x(\alpha, \beta) = \min \left\{\frac{w(x)}{\alpha \sum_{x\in S} w(x)+\beta}, 1 \right\}$. More specifically, the DPSS problem is defined in a dynamic setting, where the item set,~$S$, can be updated with insertions of new items or deletions of existing items. Our first main result is an optimal algorithm for solving the DPSS problem, which achieves~$O(n)$ pre-processing time, $O(1+\mu_S(\alpha,\beta))$ expected time for each query parameterized by $(\alpha, \beta)$, given on-the-fly, and $O(1)$ time for each update; here, $\mu_S(\alpha,\beta)$ is the expected size of the query result. At all times, the worst-case space consumption of our algorithm is linear in the current number of items in~$S$. Our second main contribution is a hardness result for the DPSS problem when the item weights are~$O(1)$-word float numbers, rather than integers. Specifically, we reduce Integer Sorting to the deletion-only DPSS problem with float item weights. Our reduction implies that an optimal algorithm for deletion-only DPSS with float item weights (achieving all the same bounds as aforementioned) implies an optimal algorithm for Integer Sorting. The latter remains an important open problem. Last but not least, a key technical ingredient for our first main result is an efficient algorithm for generating Truncated Geometric random variates in $O(1)$ expected time in the Word RAM model.<|reference_end|>
|
arxiv
|
@article{gan2024optimal,
title={Optimal Dynamic Parameterized Subset Sampling},
author={Junhao Gan, Seeun William Umboh, Hanzhi Wang, Anthony Wirth and Zhuo
Zhang},
journal={arXiv preprint arXiv:2409.18036},
year={2024},
archivePrefix={arXiv},
eprint={2409.18036},
primaryClass={cs.DS cs.DB}
}
|
gan2024optimal
|
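The query semantics in the DPSS abstract above (arXiv:2409.18036) can be realized naively in $O(n)$ time per query: include each item independently with probability $\min\{w(x)/(\alpha W + \beta), 1\}$, where $W$ is the total weight. The sketch below implements exactly this baseline; the paper's contribution is a dynamic structure answering the same query in expected time proportional to the output size, which this sketch does not attempt.

```python
import random

def parameterized_subset_sample(items, alpha, beta, rng=random.random):
    """Naive O(n) parameterized subset sampling.

    items : list of (x, w) pairs with non-negative weights w.
    Each item x is kept independently with probability
        p_x = min(w / (alpha * W + beta), 1),   W = sum of all weights.
    """
    total_w = sum(w for _, w in items)
    denom = alpha * total_w + beta
    chosen = []
    for x, w in items:
        if denom > 0:
            p = min(w / denom, 1.0)
        else:                       # degenerate alpha = beta = 0 case
            p = 1.0 if w > 0 else 0.0
        if rng() < p:
            chosen.append(x)
    return chosen

items = [("a", 5), ("b", 1), ("c", 10)]
print(parameterized_subset_sample(items, alpha=0.1, beta=2.0))
```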
arxiv-662431
|
2409.18037
|
HARMONIC: A Framework for Explanatory Cognitive Robots
|
<|reference_start|>HARMONIC: A Framework for Explanatory Cognitive Robots: We present HARMONIC, a framework for implementing cognitive robots that transforms general-purpose robots into trusted teammates capable of complex decision-making, natural communication and human-level explanation. The framework supports interoperability between a strategic (cognitive) layer for high-level decision-making and a tactical (robot) layer for low-level control and execution. We describe the core features of the framework and our initial implementation, in which HARMONIC was deployed on a simulated UGV and drone involved in a multi-robot search and retrieval task.<|reference_end|>
|
arxiv
|
@article{oruganti2024harmonic:,
title={HARMONIC: A Framework for Explanatory Cognitive Robots},
author={Sanjay Oruganti, Sergei Nirenburg, Marjorie McShane, Jesse English,
Michael K. Roberts, and Christian Arndt},
journal={arXiv preprint arXiv:2409.18037},
year={2024},
archivePrefix={arXiv},
eprint={2409.18037},
primaryClass={cs.RO cs.AI cs.HC cs.MA}
}
|
oruganti2024harmonic:
|
arxiv-662432
|
2409.18038
|
MMDVS-LF: A Multi-Modal Dynamic-Vision-Sensor Line Following Dataset
|
<|reference_start|>MMDVS-LF: A Multi-Modal Dynamic-Vision-Sensor Line Following Dataset: Dynamic Vision Sensors (DVS) offer a unique advantage in control applications due to their high temporal resolution and asynchronous, event-based data. Still, their adoption in machine learning algorithms remains limited. To address this gap and promote the development of models that leverage the specific characteristics of DVS data, we introduce the Multi-Modal Dynamic-Vision-Sensor Line Following dataset (MMDVS-LF). This comprehensive dataset is the first to integrate multiple sensor modalities, including DVS recordings, RGB video, odometry, and Inertial Measurement Unit (IMU) data, from a small-scale standardized vehicle. Additionally, the dataset includes eye-tracking and demographic data of drivers performing a Line Following task on a track. With its diverse range of data, MMDVS-LF opens new opportunities for developing deep learning algorithms and conducting data science projects across various domains, supporting innovation in autonomous systems and control applications.<|reference_end|>
|
arxiv
|
@article{resch2024mmdvs-lf:,
title={MMDVS-LF: A Multi-Modal Dynamic-Vision-Sensor Line Following Dataset},
author={Felix Resch, Mónika Farsang, Radu Grosu},
journal={arXiv preprint arXiv:2409.18038},
year={2024},
archivePrefix={arXiv},
eprint={2409.18038},
primaryClass={cs.RO}
}
|
resch2024mmdvs-lf:
|
arxiv-662433
|
2409.18039
|
Ecosystem-Agnostic Standardization of Quantum Runtime Architecture: Accelerating Utility in Quantum Computing
|
<|reference_start|>Ecosystem-Agnostic Standardization of Quantum Runtime Architecture: Accelerating Utility in Quantum Computing: Fault tolerance is a long-term objective driving many companies and research organizations to compete in making current, imperfect quantum computers useful - Quantum Utility (QU). It looks promising to achieve this by leveraging software optimization approaches primarily driven by AI techniques. This aggressive research covers all layers of Quantum Computing Optimization Middleware (QCOM) and requires execution on real quantum hardware (QH). Due to the nascent nature of the technology domain and the proprietary strategies of both large and small players, popular runtimes for executing quantum workloads lack flexibility in programming models, scheduling, and hardware access patterns, including queuing, which creates roadblocks for researchers and slows innovation. These problems are further exacerbated by emerging hybrid operating models that place Graphical Processing Unit (GPU) supercomputing and Quantum Intermediate Representation (QIR) at the heart of real-time computations across quantum and distributed resources. There is a need for a widely adopted runtime platform (RP) driven by the open-source community that can be easily deployed to work in a distributed manner between Quantum Processing Unit (QPU), GPU, control hardware, external compute resources and provide required flexibility in terms of programming & configuration models.<|reference_end|>
|
arxiv
|
@article{tsymbalista2024ecosystem-agnostic,
title={Ecosystem-Agnostic Standardization of Quantum Runtime Architecture:
Accelerating Utility in Quantum Computing},
author={Markiian Tsymbalista, Ihor Katernyak},
journal={arXiv preprint arXiv:2409.18039},
year={2024},
archivePrefix={arXiv},
eprint={2409.18039},
primaryClass={quant-ph cs.ET}
}
|
tsymbalista2024ecosystem-agnostic
|
arxiv-662434
|
2409.18042
|
EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions
|
<|reference_start|>EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions: GPT-4o, an omni-modal model that enables vocal conversations with diverse emotions and tones, marks a milestone for omni-modal foundation models. However, empowering Large Language Models to perceive and generate images, texts, and speeches end-to-end with publicly available data remains challenging in the open-source community. Existing vision-language models rely on external tools for speech processing, while speech-language models still suffer from limited or even absent vision-understanding abilities. To address this gap, we propose EMOVA (EMotionally Omni-present Voice Assistant) to enable Large Language Models with end-to-end speech capabilities while maintaining the leading vision-language performance. With a semantic-acoustic disentangled speech tokenizer, we notice surprisingly that omni-modal alignment can further enhance vision-language and speech abilities compared with the corresponding bi-modal aligned counterparts. Moreover, a lightweight style module is proposed for flexible speech style controls (e.g., emotions and pitches). For the first time, EMOVA achieves state-of-the-art performance on both the vision-language and speech benchmarks, while also supporting omni-modal spoken dialogue with vivid emotions.<|reference_end|>
|
arxiv
|
@article{chen2024emova:,
title={EMOVA: Empowering Language Models to See, Hear and Speak with Vivid
Emotions},
author={Kai Chen, Yunhao Gou, Runhui Huang, Zhili Liu, Daxin Tan, Jing Xu,
Chunwei Wang, Yi Zhu, Yihan Zeng, Kuo Yang, Dingdong Wang, Kun Xiang, Haoyuan
Li, Haoli Bai, Jianhua Han, Xiaohui Li, Weike Jin, Nian Xie, Yu Zhang, James
T. Kwok, Hengshuang Zhao, Xiaodan Liang, Dit-Yan Yeung, Xiao Chen, Zhenguo
Li, Wei Zhang, Qun Liu, Jun Yao, Lanqing Hong, Lu Hou, Hang Xu},
journal={arXiv preprint arXiv:2409.18042},
year={2024},
archivePrefix={arXiv},
eprint={2409.18042},
primaryClass={cs.CV cs.CL}
}
|
chen2024emova:
|
arxiv-662435
|
2409.18043
|
MARS: Multi-radio Architecture with Radio Selection using Decision Trees for emerging mesoscale CPS/IoT applications
|
<|reference_start|>MARS: Multi-radio Architecture with Radio Selection using Decision Trees for emerging mesoscale CPS/IoT applications: IoT is rapidly growing from small-scale apps to large-scale apps. Small-scale apps employ short-range radios like Zigbee and BLE, while large-scale apps employ long-range radios like LoRa and NB-IoT. Another emerging category of apps, such as P2P energy trading in smart homes, is termed mesoscale IoT apps. There are no specialized radios for these apps; they use either short- or long-range radios. To close this gap, we explored mesoscale apps using the COTS IoT radios available. Our qualitative analysis identifies Zigbee and LoRa as potential candidates. Our quantitative analysis on single and multi-hop topologies showed that Zigbee and LoRa achieve competitive throughput at a distance of 500-1200m from the gateway. A fundamental finding of these analyses is that a multi-radio system that can efficiently switch between Zigbee and LoRa performs better than the single-radio systems. However, instantaneously selecting and switching to a high-throughput radio during transmission is not trivial because of the erratic link quality dynamics. To address this issue, we developed MARS, which uses path quality metrics to instantaneously select the high-throughput radio during transmission. However, realizing MARS on resource-constrained end devices entails the challenge of obtaining instantaneous path-quality metrics. Traditional path quality estimation is not instantaneous due to propagation and queuing delays. We overcome this challenge by showing that collecting local path metrics as input to our decision trees provides sufficient information to instantaneously identify the high-throughput radio. The radio selector of MARS is powered by TAO-CART trees. The evaluation of MARS on a large-scale mesh topology at two different locations shows that MARS can efficiently identify and switch to the high-throughput radio during transmission, leading to an average throughput gain of 48.2% and 49.79% over its competitors.<|reference_end|>
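As a rough illustration of the decision-tree radio selection idea described above, the sketch below trains a plain CART classifier (scikit-learn) on made-up local path metrics labeled with the higher-throughput radio. MARS itself uses TAO-optimized CART trees and its own feature set; the feature names and data here are hypothetical.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical local path metrics per transmission opportunity:
# [link RSSI (dBm), hop count to gateway, local queue occupancy].
X = np.array([[-60, 2, 0.1], [-95, 5, 0.7], [-72, 3, 0.4], [-88, 6, 0.9]])
y = np.array(["zigbee", "lora", "zigbee", "lora"])  # radio with higher observed throughput

selector = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(selector.predict([[-80, 4, 0.5]]))  # -> one of 'zigbee' or 'lora'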
|
arxiv
|
@article{sundaram2024mars:,
title={MARS: Multi-radio Architecture with Radio Selection using Decision Trees
for emerging mesoscale CPS/IoT applications},
author={Jothi Prasanna Shanmuga Sundaram, Arman Zharmagambetov, Magzhan
Gabidolla, Miguel A. Carreira-Perpinan, Alberto Cerpa},
journal={arXiv preprint arXiv:2409.18043},
year={2024},
archivePrefix={arXiv},
eprint={2409.18043},
primaryClass={cs.NI}
}
|
sundaram2024mars:
|
arxiv-662436
|
2409.18044
|
Unveiling the Role of Pretraining in Direct Speech Translation
|
<|reference_start|>Unveiling the Role of Pretraining in Direct Speech Translation: Direct speech-to-text translation systems encounter an important drawback in data scarcity. A common solution consists on pretraining the encoder on automatic speech recognition, hence losing efficiency in the training process. In this study, we compare the training dynamics of a system using a pretrained encoder, the conventional approach, and one trained from scratch. We observe that, throughout the training, the randomly initialized model struggles to incorporate information from the speech inputs for its predictions. Hence, we hypothesize that this issue stems from the difficulty of effectively training an encoder for direct speech translation. While a model trained from scratch needs to learn acoustic and semantic modeling simultaneously, a pretrained one can just focus on the latter. Based on these findings, we propose a subtle change in the decoder cross-attention to integrate source information from earlier steps in training. We show that with this change, the model trained from scratch can achieve comparable performance to the pretrained one, while reducing the training time.<|reference_end|>
|
arxiv
|
@article{alastruey2024unveiling,
title={Unveiling the Role of Pretraining in Direct Speech Translation},
author={Belen Alastruey, Gerard I. Gállego, Marta R. Costa-jussà},
journal={arXiv preprint arXiv:2409.18044},
year={2024},
archivePrefix={arXiv},
eprint={2409.18044},
primaryClass={cs.CL}
}
|
alastruey2024unveiling
|
arxiv-662437
|
2409.18046
|
IFCap: Image-like Retrieval and Frequency-based Entity Filtering for Zero-shot Captioning
|
<|reference_start|>IFCap: Image-like Retrieval and Frequency-based Entity Filtering for Zero-shot Captioning: Recent advancements in image captioning have explored text-only training methods to overcome the limitations of paired image-text data. However, existing text-only training methods often overlook the modality gap between using text data during training and employing images during inference. To address this issue, we propose a novel approach called Image-like Retrieval, which aligns text features with visually relevant features to mitigate the modality gap. Our method further enhances the accuracy of generated captions by designing a Fusion Module that integrates retrieved captions with input features. Additionally, we introduce a Frequency-based Entity Filtering technique that significantly improves caption quality. We integrate these methods into a unified framework, which we refer to as IFCap ($\textbf{I}$mage-like Retrieval and $\textbf{F}$requency-based Entity Filtering for Zero-shot $\textbf{Cap}$tioning). Through extensive experimentation, our straightforward yet powerful approach has demonstrated its efficacy, outperforming the state-of-the-art methods by a significant margin in both image captioning and video captioning compared to zero-shot captioning based on text-only training.<|reference_end|>
|
arxiv
|
@article{lee2024ifcap:,
title={IFCap: Image-like Retrieval and Frequency-based Entity Filtering for
Zero-shot Captioning},
author={Soeun Lee, Si-Woo Kim, Taewhan Kim, Dong-Jin Kim},
journal={arXiv preprint arXiv:2409.18046},
year={2024},
archivePrefix={arXiv},
eprint={2409.18046},
primaryClass={cs.CV cs.AI cs.CL cs.LG}
}
|
lee2024ifcap:
|
arxiv-662438
|
2409.18047
|
HARMONIC: Cognitive and Control Collaboration in Human-Robotic Teams
|
<|reference_start|>HARMONIC: Cognitive and Control Collaboration in Human-Robotic Teams: This paper presents a novel approach to multi-robot planning and collaboration. We demonstrate a cognitive strategy for robots in human-robot teams that incorporates metacognition, natural language communication, and explainability. The system is embodied using the HARMONIC architecture that flexibly integrates cognitive and control capabilities across the team. We evaluate our approach through simulation experiments involving a joint search task by a team of heterogeneous robots (a UGV and a drone) and a human. We detail the system's handling of complex, real-world scenarios, effective action coordination between robots with different capabilities, and natural human-robot communication. This work demonstrates that the robots' ability to reason about plans, goals, and attitudes, and to provide explanations for actions and decisions are essential prerequisites for realistic human-robot teaming.<|reference_end|>
|
arxiv
|
@article{oruganti2024harmonic:,
title={HARMONIC: Cognitive and Control Collaboration in Human-Robotic Teams},
author={Sanjay Oruganti, Sergei Nirenburg, Marjorie McShane, Jesse English,
Michael K. Roberts and Christian Arndt},
journal={arXiv preprint arXiv:2409.18047},
year={2024},
archivePrefix={arXiv},
eprint={2409.18047},
primaryClass={cs.RO cs.AI cs.MA}
}
|
oruganti2024harmonic:
|
arxiv-662439
|
2409.18048
|
Next-Gen Software Engineering: AI-Assisted Big Models
|
<|reference_start|>Next-Gen Software Engineering: AI-Assisted Big Models: The effectiveness of model-driven software engineering (MDSE) has been demonstrated in the context of complex software; however, it has not been widely adopted due to the requisite efforts associated with model development and maintenance, as well as the specific modelling competencies required for MDSE. Concurrently, artificial intelligence (AI) methods, particularly machine learning (ML) methods, have demonstrated considerable abilities when applied to the huge code bases accessible on open-source coding platforms. The so-called big code provides the basis for significant advances in empirical software engineering, as well as in the automation of coding processes and improvements in software quality with the use of AI. The objective of this paper is to facilitate a synthesis between these two significant domains of software engineering (SE), namely models and AI in SE. The paper provides an overview of the current status of AI-assisted software engineering. In light of the aforementioned considerations, a vision of AI-assisted Big Models in SE is put forth, with the aim of capitalising on the advantages inherent to both approaches in the context of software development. Finally, the new paradigm of pair modelling in MDSE is proposed.<|reference_end|>
|
arxiv
|
@article{schieferdecker2024next-gen,
title={Next-Gen Software Engineering: AI-Assisted Big Models},
author={Ina K. Schieferdecker},
journal={arXiv preprint arXiv:2409.18048},
year={2024},
archivePrefix={arXiv},
eprint={2409.18048},
primaryClass={cs.SE cs.ET}
}
|
schieferdecker2024next-gen
|
arxiv-662440
|
2409.18049
|
Revisit Anything: Visual Place Recognition via Image Segment Retrieval
|
<|reference_start|>Revisit Anything: Visual Place Recognition via Image Segment Retrieval: Accurately recognizing a revisited place is crucial for embodied agents to localize and navigate. This requires visual representations to be distinct, despite strong variations in camera viewpoint and scene appearance. Existing visual place recognition pipelines encode the "whole" image and search for matches. This poses a fundamental challenge in matching two images of the same place captured from different camera viewpoints: "the similarity of what overlaps can be dominated by the dissimilarity of what does not overlap". We address this by encoding and searching for "image segments" instead of the whole images. We propose to use open-set image segmentation to decompose an image into `meaningful' entities (i.e., things and stuff). This enables us to create a novel image representation as a collection of multiple overlapping subgraphs connecting a segment with its neighboring segments, dubbed SuperSegment. Furthermore, to efficiently encode these SuperSegments into compact vector representations, we propose a novel factorized representation of feature aggregation. We show that retrieving these partial representations leads to significantly higher recognition recall than the typical whole image based retrieval. Our segments-based approach, dubbed SegVLAD, sets a new state-of-the-art in place recognition on a diverse selection of benchmark datasets, while being applicable to both generic and task-specialized image encoders. Finally, we demonstrate the potential of our method to ``revisit anything'' by evaluating our method on an object instance retrieval task, which bridges the two disparate areas of research: visual place recognition and object-goal navigation, through their common aim of recognizing goal objects specific to a place. Source code: https://github.com/AnyLoc/Revisit-Anything.<|reference_end|>
|
arxiv
|
@article{garg2024revisit,
title={Revisit Anything: Visual Place Recognition via Image Segment Retrieval},
author={Kartik Garg, Sai Shubodh Puligilla, Shishir Kolathaya, Madhava
Krishna, Sourav Garg},
journal={arXiv preprint arXiv:2409.18049},
year={2024},
archivePrefix={arXiv},
eprint={2409.18049},
primaryClass={cs.CV cs.AI cs.IR cs.LG cs.RO}
}
|
garg2024revisit
|
arxiv-662441
|
2409.18051
|
Inverse Reinforcement Learning with Multiple Planning Horizons
|
<|reference_start|>Inverse Reinforcement Learning with Multiple Planning Horizons: In this work, we study an inverse reinforcement learning (IRL) problem where the experts are planning under a shared reward function but with different, unknown planning horizons. Without the knowledge of discount factors, the reward function has a larger feasible solution set, which makes it harder for existing IRL approaches to identify a reward function. To overcome this challenge, we develop algorithms that can learn a global multi-agent reward function with agent-specific discount factors that reconstruct the expert policies. We characterize the feasible solution space of the reward function and discount factors for both algorithms and demonstrate the generalizability of the learned reward function across multiple domains.<|reference_end|>
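One plausible way to formalize the setting described above (notation assumed here, not necessarily the paper's): each expert i behaves optimally for a shared reward R under its own unknown discount factor, and the IRL problem is to recover the reward together with the agent-specific discount factors consistent with all expert policies.

\[
  \pi_i^{*} \in \arg\max_{\pi}\;
  \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma_i^{\,t}\, R(s_t, a_t)\right],
  \qquad i = 1,\dots,N,
\]
with the shared reward $R$ and the agent-specific discount factors $\gamma_i \in [0,1)$ as the unknowns to be identified.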
|
arxiv
|
@article{yao2024inverse,
title={Inverse Reinforcement Learning with Multiple Planning Horizons},
author={Jiayu Yao, Weiwei Pan, Finale Doshi-Velez, Barbara E Engelhardt},
journal={Reinforcement Learning Journal 3 (2024) 1138-1167},
year={2024},
archivePrefix={arXiv},
eprint={2409.18051},
primaryClass={cs.LG}
}
|
yao2024inverse
|
arxiv-662442
|
2409.18052
|
Explaining Explaining
|
<|reference_start|>Explaining Explaining: Explanation is key to people having confidence in high-stakes AI systems. However, machine-learning-based systems -- which account for almost all current AI -- can't explain because they are usually black boxes. The explainable AI (XAI) movement hedges this problem by redefining "explanation". The human-centered explainable AI (HCXAI) movement identifies the explanation-oriented needs of users but can't fulfill them because of its commitment to machine learning. In order to achieve the kinds of explanations needed by real people operating in critical domains, we must rethink how to approach AI. We describe a hybrid approach to developing cognitive agents that uses a knowledge-based infrastructure supplemented by data obtained through machine learning when applicable. These agents will serve as assistants to humans who will bear ultimate responsibility for the decisions and actions of the human-robot team. We illustrate the explanatory potential of such agents using the under-the-hood panels of a demonstration system in which a team of simulated robots collaborate on a search task assigned by a human.<|reference_end|>
|
arxiv
|
@article{nirenburg2024explaining,
title={Explaining Explaining},
author={Sergei Nirenburg, Marjorie McShane, Kenneth W. Goodman and Sanjay
Oruganti},
journal={arXiv preprint arXiv:2409.18052},
year={2024},
archivePrefix={arXiv},
eprint={2409.18052},
primaryClass={cs.AI cs.MA cs.RO}
}
|
nirenburg2024explaining
|
arxiv-662443
|
2409.18053
|
DualAD: Dual-Layer Planning for Reasoning in Autonomous Driving
|
<|reference_start|>DualAD: Dual-Layer Planning for Reasoning in Autonomous Driving: We present a novel autonomous driving framework, DualAD, designed to imitate human reasoning during driving. DualAD comprises two layers: a rule-based motion planner at the bottom layer that handles routine driving tasks requiring minimal reasoning, and an upper layer featuring a rule-based text encoder that converts driving scenarios from absolute states into text description. This text is then processed by a large language model (LLM) to make driving decisions. The upper layer intervenes in the bottom layer's decisions when potential danger is detected, mimicking human reasoning in critical situations. Closed-loop experiments demonstrate that DualAD, using a zero-shot pre-trained model, significantly outperforms rule-based motion planners that lack reasoning abilities. Our experiments also highlight the effectiveness of the text encoder, which considerably enhances the model's scenario understanding. Additionally, the integrated DualAD model improves with stronger LLMs, indicating the framework's potential for further enhancement. We make code and benchmarks publicly available.<|reference_end|>
|
arxiv
|
@article{wang2024dualad:,
title={DualAD: Dual-Layer Planning for Reasoning in Autonomous Driving},
author={Dingrui Wang, Marc Kaufeld, Johannes Betz},
journal={arXiv preprint arXiv:2409.18053},
year={2024},
archivePrefix={arXiv},
eprint={2409.18053},
primaryClass={cs.RO cs.AI}
}
|
wang2024dualad:
|
arxiv-662444
|
2409.18055
|
Visual Data Diagnosis and Debiasing with Concept Graphs
|
<|reference_start|>Visual Data Diagnosis and Debiasing with Concept Graphs: The widespread success of deep learning models today is owed to the curation of extensive datasets significant in size and complexity. However, such models frequently pick up inherent biases in the data during the training process, leading to unreliable predictions. Diagnosing and debiasing datasets is thus a necessity to ensure reliable model performance. In this paper, we present CONBIAS, a novel framework for diagnosing and mitigating Concept co-occurrence Biases in visual datasets. CONBIAS represents visual datasets as knowledge graphs of concepts, enabling meticulous analysis of spurious concept co-occurrences to uncover concept imbalances across the whole dataset. Moreover, we show that by employing a novel clique-based concept balancing strategy, we can mitigate these imbalances, leading to enhanced performance on downstream tasks. Extensive experiments show that data augmentation based on a balanced concept distribution augmented by CONBIAS improves generalization performance across multiple datasets compared to state-of-the-art methods. We will make our code and data publicly available.<|reference_end|>
|
arxiv
|
@article{chakraborty2024visual,
title={Visual Data Diagnosis and Debiasing with Concept Graphs},
author={Rwiddhi Chakraborty, Yinong Wang, Jialu Gao, Runkai Zheng, Cheng
Zhang, Fernando De la Torre},
journal={arXiv preprint arXiv:2409.18055},
year={2024},
archivePrefix={arXiv},
eprint={2409.18055},
primaryClass={cs.CV cs.AI}
}
|
chakraborty2024visual
|
arxiv-662445
|
2409.18057
|
LightAvatar: Efficient Head Avatar as Dynamic Neural Light Field
|
<|reference_start|>LightAvatar: Efficient Head Avatar as Dynamic Neural Light Field: Recent works have shown that neural radiance fields (NeRFs) on top of parametric models have reached SOTA quality to build photorealistic head avatars from a monocular video. However, one major limitation of the NeRF-based avatars is the slow rendering speed due to the dense point sampling of NeRF, preventing them from broader utility on resource-constrained devices. We introduce LightAvatar, the first head avatar model based on neural light fields (NeLFs). LightAvatar renders an image from 3DMM parameters and a camera pose via a single network forward pass, without using mesh or volume rendering. The proposed approach, while being conceptually appealing, poses a significant challenge towards real-time efficiency and training stability. To resolve them, we introduce dedicated network designs to obtain proper representations for the NeLF model and maintain a low FLOPs budget. Meanwhile, we tap into a distillation-based training strategy that uses a pretrained avatar model as teacher to synthesize abundant pseudo data for training. A warping field network is introduced to correct the fitting error in the real data so that the model can learn better. Extensive experiments suggest that our method can achieve new SOTA image quality quantitatively or qualitatively, while being significantly faster than the counterparts, reporting 174.1 FPS (512x512 resolution) on a consumer-grade GPU (RTX3090) with no customized optimization.<|reference_end|>
|
arxiv
|
@article{wang2024lightavatar:,
title={LightAvatar: Efficient Head Avatar as Dynamic Neural Light Field},
author={Huan Wang and Feitong Tan and Ziqian Bai and Yinda Zhang and Shichen
Liu and Qiangeng Xu and Menglei Chai and Anish Prabhu and Rohit Pandey and
Sean Fanello and Zeng Huang and Yun Fu},
journal={arXiv preprint arXiv:2409.18057},
year={2024},
archivePrefix={arXiv},
eprint={2409.18057},
primaryClass={cs.CV}
}
|
wang2024lightavatar:
|
arxiv-662446
|
2409.18060
|
Inferring Alt-text For UI Icons With Large Language Models During App Development
|
<|reference_start|>Inferring Alt-text For UI Icons With Large Language Models During App Development: Ensuring accessibility in mobile applications remains a significant challenge, particularly for visually impaired users who rely on screen readers. User interface icons are essential for navigation and interaction and often lack meaningful alt-text, creating barriers to effective use. Traditional deep learning approaches for generating alt-text require extensive datasets and struggle with the diversity and imbalance of icon types. More recent Vision Language Models (VLMs) require complete UI screens, which can be impractical during the iterative phases of app development. To address these issues, we introduce a novel method using Large Language Models (LLMs) to autonomously generate informative alt-text for mobile UI icons with partial UI data. By incorporating icon context, that include class, resource ID, bounds, OCR-detected text, and contextual information from parent and sibling nodes, we fine-tune an off-the-shelf LLM on a small dataset of approximately 1.4k icons, yielding IconDesc. In an empirical evaluation and a user study IconDesc demonstrates significant improvements in generating relevant alt-text. This ability makes IconDesc an invaluable tool for developers, aiding in the rapid iteration and enhancement of UI accessibility.<|reference_end|>
|
arxiv
|
@article{haque2024inferring,
title={Inferring Alt-text For UI Icons With Large Language Models During App
Development},
author={Sabrina Haque and Christoph Csallner},
journal={arXiv preprint arXiv:2409.18060},
year={2024},
archivePrefix={arXiv},
eprint={2409.18060},
primaryClass={cs.HC cs.SE}
}
|
haque2024inferring
|
arxiv-662447
|
2409.18061
|
Optimal Protocols for Continual Learning via Statistical Physics and Control Theory
|
<|reference_start|>Optimal Protocols for Continual Learning via Statistical Physics and Control Theory: Artificial neural networks often struggle with catastrophic forgetting when learning multiple tasks sequentially, as training on new tasks degrades the performance on previously learned ones. Recent theoretical work has addressed this issue by analysing learning curves in synthetic frameworks under predefined training protocols. However, these protocols relied on heuristics and lacked a solid theoretical foundation assessing their optimality. In this paper, we fill this gap combining exact equations for training dynamics, derived using statistical physics techniques, with optimal control methods. We apply this approach to teacher-student models for continual learning and multi-task problems, obtaining a theory for task-selection protocols maximising performance while minimising forgetting. Our theoretical analysis offers non-trivial yet interpretable strategies for mitigating catastrophic forgetting, shedding light on how optimal learning protocols can modulate established effects, such as the influence of task similarity on forgetting. Finally, we validate our theoretical findings on real-world data.<|reference_end|>
|
arxiv
|
@article{mori2024optimal,
title={Optimal Protocols for Continual Learning via Statistical Physics and
Control Theory},
author={Francesco Mori, Stefano Sarao Mannelli, Francesca Mignacco},
journal={arXiv preprint arXiv:2409.18061},
year={2024},
archivePrefix={arXiv},
eprint={2409.18061},
primaryClass={cs.LG cond-mat.dis-nn cond-mat.stat-mech}
}
|
mori2024optimal
|
arxiv-662448
|
2409.18062
|
Efficient Approximation of Centrality Measures in Uncertain Graphs
|
<|reference_start|>Efficient Approximation of Centrality Measures in Uncertain Graphs: In this thesis I propose an algorithm to heuristically calculate different distance measures on uncertain graphs (i.e. graphs where edges only exist with a certain probability) and apply this to the heuristic calculation of harmonic closeness centrality. This approach is mainly based on previous work on the calculation of distance measures by Potamias et al. and on a heuristic algorithm for betweenness centrality by Chenxu Wang and Ziyuan Lin. I extend their research by using the concept of possible shortest paths, applying them to the aforementioned distances. To the best of my knowledge, this algorithmic approach has never been studied before. I will compare my heuristic results for harmonic closeness against the Monte Carlo method both in runtime and accuracy. Similarly, I will conduct new experiments on the betweenness centrality heuristic proposed by Chenxu Wang and Ziyuan Lin to test its efficacy on a larger variety of instances. Finally, I will test both of these algorithms on large-scale graphs to evaluate the scalability of their runtime.<|reference_end|>
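For context, a minimal Monte Carlo baseline (the comparison method named above) for harmonic closeness on an uncertain graph: sample possible worlds by keeping each edge with its probability, run a BFS from the source, and average the sum of inverse distances. The sketch below assumes networkx and an illustrative toy graph; it is not the thesis's heuristic.

import random
import networkx as nx

def mc_harmonic_closeness(nodes, prob_edges, source, samples=1000):
    # Average of sum_{v != source} 1/d(source, v) over sampled possible worlds;
    # unreachable nodes contribute 0 because BFS only returns reachable distances.
    total = 0.0
    for _ in range(samples):
        g = nx.Graph()
        g.add_nodes_from(nodes)
        g.add_edges_from(e for e, p in prob_edges.items() if random.random() < p)
        dist = nx.single_source_shortest_path_length(g, source)
        total += sum(1.0 / d for v, d in dist.items() if v != source)
    return total / samples

nodes = [0, 1, 2, 3]
prob_edges = {(0, 1): 0.9, (1, 2): 0.5, (2, 3): 0.7, (0, 3): 0.2}
print(mc_harmonic_closeness(nodes, prob_edges, source=0))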
|
arxiv
|
@article{ketels2024efficient,
title={Efficient Approximation of Centrality Measures in Uncertain Graphs},
author={Daniel Ketels},
journal={arXiv preprint arXiv:2409.18062},
year={2024},
archivePrefix={arXiv},
eprint={2409.18062},
primaryClass={cs.DM cs.DC}
}
|
ketels2024efficient
|
arxiv-662449
|
2409.18063
|
Breaking the Mold: Nonlinear Ranking Function Synthesis Without Templates
|
<|reference_start|>Breaking the Mold: Nonlinear Ranking Function Synthesis Without Templates: This paper studies the problem of synthesizing (lexicographic) polynomial ranking functions for loops that can be described in polynomial arithmetic over integers and reals. While the analogous ranking function synthesis problem for linear arithmetic is decidable, even checking whether a given function ranks an integer loop is undecidable in the nonlinear setting. We side-step the decidability barrier by working within the theory of linear integer/real rings (LIRR) rather than the standard model of arithmetic. We develop a termination analysis that is guaranteed to succeed if a loop (expressed as a formula) admits a (lexicographic) polynomial ranking function. In contrast to template-based ranking function synthesis in real arithmetic, our completeness result holds for lexicographic ranking functions of unbounded dimension and degree, and effectively subsumes linear lexicographic ranking function synthesis for linear integer loops.<|reference_end|>
|
arxiv
|
@article{zhu2024breaking,
title={Breaking the Mold: Nonlinear Ranking Function Synthesis Without
Templates},
author={Shaowei Zhu and Zachary Kincaid},
journal={CAV 2024. Lecture Notes in Computer Science, vol 14681. Springer,
Cham},
year={2024},
doi={10.1007/978-3-031-65627-9_21},
archivePrefix={arXiv},
eprint={2409.18063},
primaryClass={cs.PL cs.LO}
}
|
zhu2024breaking
|
arxiv-662450
|
2409.18071
|
FreeEdit: Mask-free Reference-based Image Editing with Multi-modal Instruction
|
<|reference_start|>FreeEdit: Mask-free Reference-based Image Editing with Multi-modal Instruction: Introducing user-specified visual concepts in image editing is highly practical as these concepts convey the user's intent more precisely than text-based descriptions. We propose FreeEdit, a novel approach for achieving such reference-based image editing, which can accurately reproduce the visual concept from the reference image based on user-friendly language instructions. Our approach leverages the multi-modal instruction encoder to encode language instructions to guide the editing process. This implicit way of locating the editing area eliminates the need for manual editing masks. To enhance the reconstruction of reference details, we introduce the Decoupled Residual ReferAttention (DRRA) module. This module is designed to integrate fine-grained reference features extracted by a detail extractor into the image editing process in a residual way without interfering with the original self-attention. Given that existing datasets are unsuitable for reference-based image editing tasks, particularly due to the difficulty in constructing image triplets that include a reference image, we curate a high-quality dataset, FreeBench, using a newly developed twice-repainting scheme. FreeBench comprises the images before and after editing, detailed editing instructions, as well as a reference image that maintains the identity of the edited object, encompassing tasks such as object addition, replacement, and deletion. By conducting phased training on FreeBench followed by quality tuning, FreeEdit achieves high-quality zero-shot editing through convenient language instructions. We conduct extensive experiments to evaluate the effectiveness of FreeEdit across multiple task types, demonstrating its superiority over existing methods. The code will be available at: https://freeedit.github.io/.<|reference_end|>
|
arxiv
|
@article{he2024freeedit:,
title={FreeEdit: Mask-free Reference-based Image Editing with Multi-modal
Instruction},
author={Runze He, Kai Ma, Linjiang Huang, Shaofei Huang, Jialin Gao, Xiaoming
Wei, Jiao Dai, Jizhong Han, Si Liu},
journal={arXiv preprint arXiv:2409.18071},
year={2024},
archivePrefix={arXiv},
eprint={2409.18071},
primaryClass={cs.CV cs.AI}
}
|
he2024freeedit:
|
arxiv-662451
|
2409.18073
|
Infer Human's Intentions Before Following Natural Language Instructions
|
<|reference_start|>Infer Human's Intentions Before Following Natural Language Instructions: For AI agents to be helpful to humans, they should be able to follow natural language instructions to complete everyday cooperative tasks in human environments. However, real human instructions inherently possess ambiguity, because the human speakers assume sufficient prior knowledge about their hidden goals and intentions. Standard language grounding and planning methods fail to address such ambiguities because they do not model human internal goals as additional partially observable factors in the environment. We propose a new framework, Follow Instructions with Social and Embodied Reasoning (FISER), aiming for better natural language instruction following in collaborative embodied tasks. Our framework makes explicit inferences about human goals and intentions as intermediate reasoning steps. We implement a set of Transformer-based models and evaluate them over a challenging benchmark, HandMeThat. We empirically demonstrate that using social reasoning to explicitly infer human intentions before making action plans surpasses purely end-to-end approaches. We also compare our implementation with strong baselines, including Chain of Thought prompting on the largest available pre-trained language models, and find that FISER provides better performance on the embodied social reasoning tasks under investigation, reaching the state-of-the-art on HandMeThat.<|reference_end|>
|
arxiv
|
@article{wan2024infer,
title={Infer Human's Intentions Before Following Natural Language Instructions},
author={Yanming Wan, Yue Wu, Yiping Wang, Jiayuan Mao, Natasha Jaques},
journal={arXiv preprint arXiv:2409.18073},
year={2024},
archivePrefix={arXiv},
eprint={2409.18073},
primaryClass={cs.AI cs.CL cs.LG}
}
|
wan2024infer
|
arxiv-662452
|
2409.18082
|
SKT: Integrating State-Aware Keypoint Trajectories with Vision-Language Models for Robotic Garment Manipulation
|
<|reference_start|>SKT: Integrating State-Aware Keypoint Trajectories with Vision-Language Models for Robotic Garment Manipulation: Automating garment manipulation poses a significant challenge for assistive robotics due to the diverse and deformable nature of garments. Traditional approaches typically require separate models for each garment type, which limits scalability and adaptability. In contrast, this paper presents a unified approach using vision-language models (VLMs) to improve keypoint prediction across various garment categories. By interpreting both visual and semantic information, our model enables robots to manage different garment states with a single model. We created a large-scale synthetic dataset using advanced simulation techniques, allowing scalable training without extensive real-world data. Experimental results indicate that the VLM-based method significantly enhances keypoint detection accuracy and task success rates, providing a more flexible and general solution for robotic garment manipulation. In addition, this research underscores the potential of VLMs to unify various garment manipulation tasks within a single framework, paving the way for broader applications in home automation and assistive robotics in the future.<|reference_end|>
|
arxiv
|
@article{li2024skt:,
title={SKT: Integrating State-Aware Keypoint Trajectories with Vision-Language
Models for Robotic Garment Manipulation},
author={Xin Li, Siyuan Huang, Qiaojun Yu, Zhengkai Jiang, Ce Hao, Yimeng Zhu,
Hongsheng Li, Peng Gao, Cewu Lu},
journal={arXiv preprint arXiv:2409.18082},
year={2024},
archivePrefix={arXiv},
eprint={2409.18082},
primaryClass={cs.RO cs.AI cs.CV}
}
|
li2024skt:
|
arxiv-662453
|
2409.18083
|
Stable Video Portraits
|
<|reference_start|>Stable Video Portraits: Rapid advances in the field of generative AI and text-to-image methods in particular have transformed the way we interact with and perceive computer-generated imagery today. In parallel, much progress has been made in 3D face reconstruction, using 3D Morphable Models (3DMM). In this paper, we present SVP, a novel hybrid 2D/3D generation method that outputs photorealistic videos of talking faces leveraging a large pre-trained text-to-image prior (2D), controlled via a 3DMM (3D). Specifically, we introduce a person-specific fine-tuning of a general 2D stable diffusion model which we lift to a video model by providing temporal 3DMM sequences as conditioning and by introducing a temporal denoising procedure. As an output, this model generates temporally smooth imagery of a person with 3DMM-based controls, i.e., a person-specific avatar. The facial appearance of this person-specific avatar can be edited and morphed to text-defined celebrities, without any fine-tuning at test time. The method is analyzed quantitatively and qualitatively, and we show that our method outperforms state-of-the-art monocular head avatar methods.<|reference_end|>
|
arxiv
|
@article{ostrek2024stable,
title={Stable Video Portraits},
author={Mirela Ostrek, Justus Thies},
journal={arXiv preprint arXiv:2409.18083},
year={2024},
archivePrefix={arXiv},
eprint={2409.18083},
primaryClass={cs.CV}
}
|
ostrek2024stable
|
arxiv-662454
|
2409.18084
|
GSON: A Group-based Social Navigation Framework with Large Multimodal Model
|
<|reference_start|>GSON: A Group-based Social Navigation Framework with Large Multimodal Model: As the number of service robots and autonomous vehicles in human-centered environments grows, their requirements go beyond simply navigating to a destination. They must also take into account dynamic social contexts and ensure respect and comfort for others in shared spaces, which poses significant challenges for perception and planning. In this paper, we present a group-based social navigation framework GSON to enable mobile robots to perceive and exploit the social group of their surroundings by leveraging the visual reasoning capability of the Large Multimodal Model (LMM). For perception, we apply visual prompting techniques to zero-shot extract the social relationship among pedestrians and combine the result with a robust pedestrian detection and tracking pipeline to alleviate the problem of low inference speed of the LMM. Given the perception result, the planning system is designed to avoid disrupting the current social structure. We adopt a social structure-based mid-level planner as a bridge between global path planning and local motion planning to preserve the global context and reactive response. The proposed method is validated on real-world mobile robot navigation tasks involving complex social structure understanding and reasoning. Experimental results demonstrate the effectiveness of the system in these scenarios compared with several baselines.<|reference_end|>
|
arxiv
|
@article{luo2024gson:,
title={GSON: A Group-based Social Navigation Framework with Large Multimodal
Model},
author={Shangyi Luo, Ji Zhu, Peng Sun, Yuhong Deng, Cunjun Yu, Anxing Xiao,
Xueqian Wang},
journal={arXiv preprint arXiv:2409.18084},
year={2024},
archivePrefix={arXiv},
eprint={2409.18084},
primaryClass={cs.RO cs.AI}
}
|
luo2024gson:
|
arxiv-662455
|
2409.18085
|
Explicit Local Time-Stepping for the Inhomogeneous Wave Equation with Optimal Convergence
|
<|reference_start|>Explicit Local Time-Stepping for the Inhomogeneous Wave Equation with Optimal Convergence: Adaptivity and local mesh refinement are crucial for the efficient numerical simulation of wave phenomena in complex geometry. Local mesh refinement, however, can impose a tiny time-step across the entire computational domain when using explicit time integration. By taking smaller time-steps yet only inside locally refined regions, local time-stepping methods overcome the stringent CFL stability restriction imposed on the global time-step by a small fraction of the elements without sacrificing explicitness. In [21], a leapfrog based local time-stepping method was proposed for the inhomogeneous wave equation, which applies standard leapfrog time-marching with a smaller time-step inside the refined region. Here, to remove potential instability at certain time-steps, a stabilized version is proposed which leads to optimal L2-error estimates under a CFL condition independent of the coarse-to-fine mesh ratio. Moreover, a weighted transition is introduced to restore optimal H1-convergence when the source is nonzero across the coarse-to-fine mesh interface. Numerical experiments corroborate the theoretical error estimates and illustrate the usefulness of these improvements.<|reference_end|>
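For reference, the standard global-step leapfrog scheme that the local time-stepping method builds on, written for a semi-discrete inhomogeneous wave equation with a discrete Laplacian A (the notation here is assumed, not taken from the paper); the paper's contribution is the stabilized local-step variant of this update inside refined regions.

\[
  \frac{u^{n+1} - 2u^{n} + u^{n-1}}{\Delta t^{2}} + c^{2} A\, u^{n} = f^{n}
  \;\;\Longleftrightarrow\;\;
  u^{n+1} = 2u^{n} - u^{n-1} + \Delta t^{2}\bigl(f^{n} - c^{2} A u^{n}\bigr).
\]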
|
arxiv
|
@article{grote2024explicit,
title={Explicit Local Time-Stepping for the Inhomogeneous Wave Equation with
Optimal Convergence},
author={Marcus J. Grote, Simon R. J. Michel, Stefan A. Sauter},
journal={arXiv preprint arXiv:2409.18085},
year={2024},
archivePrefix={arXiv},
eprint={2409.18085},
primaryClass={math.NA cs.NA}
}
|
grote2024explicit
|
arxiv-662456
|
2409.18092
|
DiffSSC: Semantic LiDAR Scan Completion using Denoising Diffusion Probabilistic Models
|
<|reference_start|>DiffSSC: Semantic LiDAR Scan Completion using Denoising Diffusion Probabilistic Models: Perception systems play a crucial role in autonomous driving, incorporating multiple sensors and corresponding computer vision algorithms. 3D LiDAR sensors are widely used to capture sparse point clouds of the vehicle's surroundings. However, such systems struggle to perceive occluded areas and gaps in the scene due to the sparsity of these point clouds and their lack of semantics. To address these challenges, Semantic Scene Completion (SSC) jointly predicts unobserved geometry and semantics in the scene given raw LiDAR measurements, aiming for a more complete scene representation. Building on promising results of diffusion models in image generation and super-resolution tasks, we propose their extension to SSC by implementing the noising and denoising diffusion processes in the point and semantic spaces individually. To control the generation, we employ semantic LiDAR point clouds as conditional input and design local and global regularization losses to stabilize the denoising process. We evaluate our approach on autonomous driving datasets and our approach outperforms the state-of-the-art for SSC.<|reference_end|>
|
arxiv
|
@article{cao2024diffssc:,
title={DiffSSC: Semantic LiDAR Scan Completion using Denoising Diffusion
Probabilistic Models},
author={Helin Cao and Sven Behnke},
journal={arXiv preprint arXiv:2409.18092},
year={2024},
archivePrefix={arXiv},
eprint={2409.18092},
primaryClass={cs.CV cs.AI cs.RO}
}
|
cao2024diffssc:
|
arxiv-662457
|
2409.18094
|
Mobility in Age-Based Gossip Networks
|
<|reference_start|>Mobility in Age-Based Gossip Networks: We consider a gossiping network where a source forwards updates to a set of $n$ gossiping nodes that are placed in an arbitrary graph structure and gossip with their neighbors. In this paper, we analyze how mobility of nodes affects the freshness of nodes in the gossiping network. To model mobility, we let nodes randomly exchange positions with other nodes in the network. The position of the node determines how the node interacts with the rest of the network. In order to quantify information freshness, we use the version age of information metric. We use the stochastic hybrid system (SHS) framework to derive recursive equations to find the version age for a set of positions in the network in terms of the version ages of sets of positions that are one larger or of the same size. We use these recursive equations to find an upper bound for the average version age of a node in two example networks. We show that mobility can decrease the version age of nodes in a disconnected network from linear scaling in $n$ to at most square root scaling and even to constant scaling in some cases. We perform numerical simulations to analyze how mobility affects the version age of different positions in the network and also show that the upper bounds obtained for the example networks are tight.<|reference_end|>
|
arxiv
|
@article{srivastava2024mobility,
title={Mobility in Age-Based Gossip Networks},
author={Arunabh Srivastava and Sennur Ulukus},
journal={arXiv preprint arXiv:2409.18094},
year={2024},
archivePrefix={arXiv},
eprint={2409.18094},
primaryClass={cs.IT cs.SI eess.SP math.IT}
}
|
srivastava2024mobility
|
arxiv-662458
|
2409.18097
|
A Sim-to-Real Vision-based Lane Keeping System for a 1:10-scale Autonomous Vehicle
|
<|reference_start|>A Sim-to-Real Vision-based Lane Keeping System for a 1:10-scale Autonomous Vehicle: In recent years, several competitions have highlighted the need to investigate vision-based solutions to address scenarios with functional insufficiencies in perception, world modeling and localization. This article presents the Vision-based Lane Keeping System (VbLKS) developed by the DEI-Unipd Team within the context of the Bosch Future Mobility Challenge 2022. The main contribution lies in a Simulation-to-Reality (Sim2Real) GPS-denied VbLKS for a 1:10-scale autonomous vehicle. In this VbLKS, the input to a tailored Pure Pursuit (PP) based control strategy, namely the Lookahead Heading Error (LHE), is estimated at a constant lookahead distance employing a Convolutional Neural Network (CNN). A training strategy for a compact CNN is proposed, emphasizing data generation and augmentation on simulated camera images from a 3D Gazebo simulator, and enabling real-time operation on low-level hardware. A tailored PP-based lateral controller equipped with a derivative action and a PP-based velocity reference generation are implemented. Tuning ranges are established through a systematic time-delay stability analysis. Validation in a representative controlled laboratory setting is provided.<|reference_end|>
|
arxiv
|
@article{gallina2024a,
title={A Sim-to-Real Vision-based Lane Keeping System for a 1:10-scale
Autonomous Vehicle},
author={Antonio Gallina, Matteo Grandin, Angelo Cenedese and Mattia Bruschetta},
journal={arXiv preprint arXiv:2409.18097},
year={2024},
archivePrefix={arXiv},
eprint={2409.18097},
primaryClass={cs.RO cs.SY eess.SY}
}
|
gallina2024a
|
arxiv-662459
|
2409.18098
|
StackGen: Generating Stable Structures from Silhouettes via Diffusion
|
<|reference_start|>StackGen: Generating Stable Structures from Silhouettes via Diffusion: Humans naturally obtain intuition about the interactions between and the stability of rigid objects by observing and interacting with the world. It is this intuition that governs the way in which we regularly configure objects in our environment, allowing us to build complex structures from simple, everyday objects. Robotic agents, on the other hand, traditionally require an explicit model of the world that includes the detailed geometry of each object and an analytical model of the environment dynamics, which are difficult to scale and preclude generalization. Instead, robots would benefit from an awareness of intuitive physics that enables them to similarly reason over the stable interaction of objects in their environment. Towards that goal, we propose StackGen, a diffusion model that generates diverse stable configurations of building blocks matching a target silhouette. To demonstrate the capability of the method, we evaluate it in a simulated environment and deploy it in the real setting using a robotic arm to assemble structures generated by the model.<|reference_end|>
|
arxiv
|
@article{sun2024stackgen:,
title={StackGen: Generating Stable Structures from Silhouettes via Diffusion},
author={Luzhe Sun, Takuma Yoneda, Samuel W. Wheeler, Tianchong Jiang, Matthew
R. Walter},
journal={arXiv preprint arXiv:2409.18098},
year={2024},
archivePrefix={arXiv},
eprint={2409.18098},
primaryClass={cs.RO}
}
|
sun2024stackgen:
|
arxiv-662460
|
2409.18099
|
EfficientCrackNet: A Lightweight Model for Crack Segmentation
|
<|reference_start|>EfficientCrackNet: A Lightweight Model for Crack Segmentation: Crack detection, particularly from pavement images, presents a formidable challenge in the domain of computer vision due to several inherent complexities such as intensity inhomogeneity, intricate topologies, low contrast, and noisy backgrounds. Automated crack detection is crucial for maintaining the structural integrity of essential infrastructures, including buildings, pavements, and bridges. Existing lightweight methods often face challenges including computational inefficiency, complex crack patterns, and difficult backgrounds, leading to inaccurate detection and impracticality for real-world applications. To address these limitations, we propose EfficientCrackNet, a lightweight hybrid model combining Convolutional Neural Networks (CNNs) and transformers for precise crack segmentation. EfficientCrackNet integrates depthwise separable convolution (DSC) layers and a MobileViT block to capture both global and local features. The model employs an Edge Extraction Method (EEM) for efficient crack edge detection without pretraining, and an Ultra-Lightweight Subspace Attention Module (ULSAM) to enhance feature extraction. Extensive experiments on three benchmark datasets (Crack500, DeepCrack, and GAPs384) demonstrate that EfficientCrackNet achieves superior performance compared to existing lightweight models, while requiring only 0.26M parameters and 0.483 GFLOPs. The proposed model offers an optimal balance between accuracy and computational efficiency, outperforming state-of-the-art lightweight models, and providing a robust and adaptable solution for real-world crack segmentation.<|reference_end|>
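As a pointer to one of the building blocks named above, here is a generic depthwise separable convolution (a depthwise conv followed by a 1x1 pointwise conv) in PyTorch; this is the standard DSC layer, not EfficientCrackNet's exact configuration.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch), then pointwise 1x1 mixing.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 16, 64, 64)                    # (batch, channels, H, W)
print(DepthwiseSeparableConv(16, 32)(x).shape)    # torch.Size([1, 32, 64, 64])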
|
arxiv
|
@article{zim2024efficientcracknet:,
title={EfficientCrackNet: A Lightweight Model for Crack Segmentation},
author={Abid Hasan Zim, Aquib Iqbal, Zaid Al-Huda, Asad Malik, Minoru
Kuribayash},
journal={arXiv preprint arXiv:2409.18099},
year={2024},
archivePrefix={arXiv},
eprint={2409.18099},
primaryClass={cs.CV cs.AI}
}
|
zim2024efficientcracknet:
|
arxiv-662461
|
2409.18100
|
Self-supervised Pretraining for Cardiovascular Magnetic Resonance Cine Segmentation
|
<|reference_start|>Self-supervised Pretraining for Cardiovascular Magnetic Resonance Cine Segmentation: Self-supervised pretraining (SSP) has shown promising results in learning from large unlabeled datasets and, thus, could be useful for automated cardiovascular magnetic resonance (CMR) short-axis cine segmentation. However, inconsistent reports of the benefits of SSP for segmentation have made it difficult to apply SSP to CMR. Therefore, this study aimed to evaluate SSP methods for CMR cine segmentation. To this end, short-axis cine stacks of 296 subjects (90618 2D slices) were used for unlabeled pretraining with four SSP methods; SimCLR, positional contrastive learning, DINO, and masked image modeling (MIM). Subsets of varying numbers of subjects were used for supervised fine-tuning of 2D models for each SSP method, as well as to train a 2D baseline model from scratch. The fine-tuned models were compared to the baseline using the 3D Dice similarity coefficient (DSC) in a test dataset of 140 subjects. The SSP methods showed no performance gains with the largest supervised fine-tuning subset compared to the baseline (DSC = 0.89). When only 10 subjects (231 2D slices) are available for supervised training, SSP using MIM (DSC = 0.86) improves over training from scratch (DSC = 0.82). This study found that SSP is valuable for CMR cine segmentation when labeled training data is scarce, but does not aid state-of-the-art deep learning methods when ample labeled data is available. Moreover, the choice of SSP method is important. The code is publicly available at: https://github.com/q-cardIA/ssp-cmr-cine-segmentation<|reference_end|>
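The evaluation metric referred to above, sketched minimally: a 3D Dice similarity coefficient between two binary segmentation volumes in NumPy (thresholding, per-structure labels, and the rest of the study's pipeline are omitted).

import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # DSC = 2 * |pred AND target| / (|pred| + |target|), computed over the whole volume.
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((8, 8, 8)); a[2:6, 2:6, 2:6] = 1
b = np.zeros((8, 8, 8)); b[3:7, 3:7, 3:7] = 1
print(round(dice_coefficient(a, b), 3))   # overlap of two shifted cubes, ~0.42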
|
arxiv
|
@article{de mooij2024self-supervised,
title={Self-supervised Pretraining for Cardiovascular Magnetic Resonance Cine
Segmentation},
author={Rob A. J. de Mooij, Josien P. W. Pluim, Cian M. Scannell},
journal={arXiv preprint arXiv:2409.18100},
year={2024},
archivePrefix={arXiv},
eprint={2409.18100},
primaryClass={cs.CV cs.LG}
}
|
de mooij2024self-supervised
|
arxiv-662462
|
2409.18101
|
AI-Powered Augmented Reality for Satellite Assembly, Integration and Test
|
<|reference_start|>AI-Powered Augmented Reality for Satellite Assembly, Integration and Test: The integration of Artificial Intelligence (AI) and Augmented Reality (AR) is set to transform satellite Assembly, Integration, and Testing (AIT) processes by enhancing precision, minimizing human error, and improving operational efficiency in cleanroom environments. This paper presents a technical description of the European Space Agency's (ESA) project "AI for AR in Satellite AIT," which combines real-time computer vision and AR systems to assist technicians during satellite assembly. Leveraging Microsoft HoloLens 2 as the AR interface, the system delivers context-aware instructions and real-time feedback, tackling the complexities of object recognition and 6D pose estimation in AIT workflows. All AI models demonstrated over 70% accuracy, with the detection model exceeding 95% accuracy, indicating a high level of performance and reliability. A key contribution of this work lies in the effective use of synthetic data for training AI models in AR applications, addressing the significant challenges of obtaining real-world datasets in highly dynamic satellite environments, as well as the creation of the Segmented Anything Model for Automatic Labelling (SAMAL), which facilitates the automatic annotation of real data, achieving speeds up to 20 times faster than manual human annotation. The findings demonstrate the efficacy of AI-driven AR systems in automating critical satellite assembly tasks, setting a foundation for future innovations in the space industry.<|reference_end|>
|
arxiv
|
@article{patricio2024ai-powered,
title={AI-Powered Augmented Reality for Satellite Assembly, Integration and
Test},
author={Alvaro Patricio, Joao Valente, Atabak Dehban, Ines Cadilha, Daniel
Reis, Rodrigo Ventura},
journal={arXiv preprint arXiv:2409.18101},
year={2024},
archivePrefix={arXiv},
eprint={2409.18101},
primaryClass={cs.CV cs.AI}
}
|
patricio2024ai-powered
|
arxiv-662463
|
2409.18102
|
MALPOLON: A Framework for Deep Species Distribution Modeling
|
<|reference_start|>MALPOLON: A Framework for Deep Species Distribution Modeling: This paper describes a deep-SDM framework, MALPOLON. Written in Python and built upon the PyTorch library, this framework aims to facilitate training and inferences of deep species distribution models (deep-SDM) and sharing for users with only general Python language skills (e.g., modeling ecologists) who are interested in testing deep learning approaches to build new SDMs. More advanced users can also benefit from the framework's modularity to run more specific experiments by overriding existing classes while taking advantage of press-button examples to train neural networks on multiple classification tasks using custom or provided raw and pre-processed datasets. The framework is open-sourced on GitHub and PyPi along with extensive documentation and examples of use in various scenarios. MALPOLON offers straightforward installation, YAML-based configuration, parallel computing, multi-GPU utilization, baseline and foundational models for benchmarking, and extensive tutorials/documentation, aiming to enhance accessibility and performance scalability for ecologists and researchers.<|reference_end|>
|
arxiv
|
@article{larcher2024malpolon:,
title={MALPOLON: A Framework for Deep Species Distribution Modeling},
author={Theo Larcher, Lukas Picek, Benjamin Deneu, Titouan Lorieul, Maximilien
Servajean, Alexis Joly},
journal={arXiv preprint arXiv:2409.18102},
year={2024},
archivePrefix={arXiv},
eprint={2409.18102},
primaryClass={cs.LG cs.CV}
}
|
larcher2024malpolon:
|
arxiv-662464
|
2409.18104
|
Find Rhinos without Finding Rhinos: Active Learning with Multimodal Imagery of South African Rhino Habitats
|
<|reference_start|>Find Rhinos without Finding Rhinos: Active Learning with Multimodal Imagery of South African Rhino Habitats: Much of Earth's charismatic megafauna is endangered by human activities, particularly the rhino, which is at risk of extinction due to the poaching crisis in Africa. Monitoring rhinos' movement is crucial to their protection but has unfortunately proven difficult because rhinos are elusive. Therefore, instead of tracking rhinos, we propose the novel approach of mapping communal defecation sites, called middens, which give information about rhinos' spatial behavior valuable to anti-poaching, management, and reintroduction efforts. This paper provides the first-ever mapping of rhino midden locations by building classifiers to detect them using remotely sensed thermal, RGB, and LiDAR imagery in passive and active learning settings. As existing active learning methods perform poorly due to the extreme class imbalance in our dataset, we design MultimodAL, an active learning system employing a ranking technique and multimodality to achieve competitive performance with passive learning models with 94% fewer labels. Our methods could therefore save over 76 hours in labeling time when used on a similarly-sized dataset. Unexpectedly, our midden map reveals that rhino middens are not randomly distributed throughout the landscape; rather, they are clustered. Consequently, rangers should be targeted at areas with high midden densities to strengthen anti-poaching efforts, in line with UN Target 15.7.<|reference_end|>
|
arxiv
|
@article{gordon2024find,
title={Find Rhinos without Finding Rhinos: Active Learning with Multimodal
Imagery of South African Rhino Habitats},
author={Lucia Gordon, Nikhil Behari, Samuel Collier, Elizabeth Bondi-Kelly,
Jackson A. Killian, Catherine Ressijac, Peter Boucher, Andrew Davies, Milind
Tambe},
journal={Proceedings of the Thirty-Second International Joint Conference on
Artificial Intelligence. AI for Good. Pages 5977-5985. 2023},
year={2024},
doi={10.24963/ijcai.2023/663},
archivePrefix={arXiv},
eprint={2409.18104},
primaryClass={cs.CV cs.AI cs.LG}
}
|
gordon2024find
|
arxiv-662465
|
2409.18105
|
Effect of electric vehicles, heat pumps, and solar panels on low-voltage feeders: Evidence from smart meter profiles
|
<|reference_start|>Effect of electric vehicles, heat pumps, and solar panels on low-voltage feeders: Evidence from smart meter profiles: Electric vehicles (EVs), heat pumps (HPs) and solar panels are low-carbon technologies (LCTs) that are being connected to the low-voltage grid (LVG) at a rapid pace. One of the main hurdles to understand their impact on the LVG is the lack of recent, large electricity consumption datasets, measured in real-world conditions. We investigated the contribution of LCTs to the size and timing of peaks on LV feeders by using a large dataset of 42,089 smart meter profiles of residential LVG customers. These profiles were measured in 2022 by Fluvius, the distribution system operator (DSO) of Flanders, Belgium. The dataset contains customers that proactively requested higher-resolution smart metering data, and hence is biased towards energy-interested people. LV feeders of different sizes were statistically modelled with a profile sampling approach. For feeders with 40 connections, we found a contribution to the feeder peak of 1.2 kW for a HP, 1.4 kW for an EV and 2.0 kW for an EV charging faster than 6.5 kW. A visual analysis of the feeder-level loads shows that the classical duck curve is replaced by a night-camel curve for feeders with only HPs and a night-dromedary curve for feeders with only EVs charging faster than 6.5 kW. Consumption patterns will continue to change as the energy transition is carried out, because of e.g. dynamic electricity tariffs or increased battery capacities. Our introduced methods are simple to implement, making it a useful tool for DSOs that have access to smart meter data to monitor changing consumption patterns.<|reference_end|>
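The feeder modelling in the abstract above is described only at a high level; the following is a rough, hedged sketch of the profile-sampling idea, with entirely synthetic profiles and hypothetical parameters (nothing here reproduces the Fluvius data or the paper's exact procedure): sample feeders of a given size, aggregate customer loads per time step, and compare peak loads between pools with and without a low-carbon technology.

```python
import numpy as np

# Illustrative only: all profiles and parameters below are synthetic, not the Fluvius data.
rng = np.random.default_rng(0)

def simulate_feeder_peaks(profiles: np.ndarray, n_connections: int,
                          n_draws: int = 500) -> np.ndarray:
    """Sample feeders of `n_connections` customers and return each feeder's peak load (kW)."""
    peaks = np.empty(n_draws)
    for i in range(n_draws):
        idx = rng.choice(len(profiles), size=n_connections, replace=False)
        feeder_load = profiles[idx].sum(axis=0)      # aggregate load per 15-minute step
        peaks[i] = feeder_load.max()
    return peaks

base = rng.gamma(2.0, 0.4, size=(5000, 96))                       # customers without an LCT
with_ev = base + np.where(rng.random((5000, 96)) < 0.05, 7.0, 0)  # occasional 7 kW charging

peak_base = simulate_feeder_peaks(base, n_connections=40).mean()
peak_ev = simulate_feeder_peaks(with_ev, n_connections=40).mean()
print(f"mean feeder peak: {peak_base:.1f} kW without EVs, {peak_ev:.1f} kW with EVs")
```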
|
arxiv
|
@article{becker2024effect,
title={Effect of electric vehicles, heat pumps, and solar panels on low-voltage
feeders: Evidence from smart meter profiles},
author={T. Becker, R. Smet, B. Macharis, K. Vanthournout},
journal={arXiv preprint arXiv:2409.18105},
year={2024},
archivePrefix={arXiv},
eprint={2409.18105},
primaryClass={eess.SY cs.CY cs.SY stat.AP}
}
|
becker2024effect
|
arxiv-662466
|
2409.18108
|
Language-Embedded Gaussian Splats (LEGS): Incrementally Building Room-Scale Representations with a Mobile Robot
|
<|reference_start|>Language-Embedded Gaussian Splats (LEGS): Incrementally Building Room-Scale Representations with a Mobile Robot: Building semantic 3D maps is valuable for searching for objects of interest in offices, warehouses, stores, and homes. We present a mapping system that incrementally builds a Language-Embedded Gaussian Splat (LEGS): a detailed 3D scene representation that encodes both appearance and semantics in a unified representation. LEGS is trained online as a robot traverses its environment to enable localization of open-vocabulary object queries. We evaluate LEGS on 4 room-scale scenes where we query for objects in the scene to assess how LEGS can capture semantic meaning. We compare LEGS to LERF and find that while both systems have comparable object query success rates, LEGS trains over 3.5x faster than LERF. Results suggest that a multi-camera setup and incremental bundle adjustment can boost visual reconstruction quality in constrained robot trajectories, and suggest LEGS can localize open-vocabulary and long-tail object queries with up to 66% accuracy.<|reference_end|>
|
arxiv
|
@article{yu2024language-embedded,
title={Language-Embedded Gaussian Splats (LEGS): Incrementally Building
Room-Scale Representations with a Mobile Robot},
author={Justin Yu, Kush Hari, Kishore Srinivas, Karim El-Refai, Adam Rashid,
Chung Min Kim, Justin Kerr, Richard Cheng, Muhammad Zubair Irshad, Ashwin
Balakrishna, Thomas Kollar, Ken Goldberg},
journal={arXiv preprint arXiv:2409.18108},
year={2024},
archivePrefix={arXiv},
eprint={2409.18108},
primaryClass={cs.RO}
}
|
yu2024language-embedded
|
arxiv-662467
|
2409.18109
|
Canonical labelling of sparse random graphs
|
<|reference_start|>Canonical labelling of sparse random graphs: We show that if $p=O(1/n)$, then the Erd\H{o}s-R\'{e}nyi random graph $G(n,p)$ with high probability admits a canonical labeling computable in time $O(n\log n)$. Combined with the previous results on the canonization of random graphs, this implies that $G(n,p)$ with high probability admits a polynomial-time canonical labeling whatever the edge probability function $p$. Our algorithm combines the standard color refinement routine with simple post-processing based on the classical linear-time tree canonization. Noteworthy, our analysis of how well color refinement performs in this setting allows us to complete the description of the automorphism group of the 2-core of $G(n,p)$.<|reference_end|>
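The color refinement routine mentioned in the abstract above is the classical 1-WL procedure; a minimal sketch is given below (this is only the generic refinement loop, not the paper's O(n log n) canonical-labelling algorithm or its tree-canonization post-processing):

```python
# Illustrative only: a generic color refinement (1-WL) loop on an adjacency-list graph.
def color_refinement(adj: dict[int, list[int]]) -> dict[int, int]:
    """Iteratively refine vertex colors until the partition stabilises."""
    colors = {v: 0 for v in adj}                     # start from a uniform coloring
    while True:
        # New signature: own color plus the multiset of neighbor colors.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Relabel signatures with small integers, deterministically.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: palette[signatures[v]] for v in adj}
        if new_colors == colors:
            return colors
        colors = new_colors

# Path on 4 vertices: endpoints and inner vertices end up with distinct colors.
g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(color_refinement(g))
```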
|
arxiv
|
@article{verbitsky2024canonical,
title={Canonical labelling of sparse random graphs},
author={Oleg Verbitsky and Maksim Zhukovskii},
journal={arXiv preprint arXiv:2409.18109},
year={2024},
archivePrefix={arXiv},
eprint={2409.18109},
primaryClass={cs.DM math.CO}
}
|
verbitsky2024canonical
|
arxiv-662468
|
2409.18110
|
Open-World Evaluation for Retrieving Diverse Perspectives
|
<|reference_start|>Open-World Evaluation for Retrieving Diverse Perspectives: We study retrieving a set of documents that covers various perspectives on a complex and contentious question (e.g., will ChatGPT do more harm than good?). We curate a Benchmark for Retrieval Diversity for Subjective questions (BERDS), where each example consists of a question and diverse perspectives associated with the question, sourced from survey questions and debate websites. On this data, retrievers paired with a corpus are evaluated to surface a document set that contains diverse perspectives. Our framing diverges from most retrieval tasks in that document relevancy cannot be decided by simple string matches to references. Instead, we build a language model based automatic evaluator that decides whether each retrieved document contains a perspective. This allows us to evaluate the performance of three different types of corpus (Wikipedia, web snapshot, and corpus constructed on the fly with retrieved pages from the search engine) paired with retrievers. Retrieving diverse documents remains challenging, with the outputs from existing retrievers covering all perspectives on only 33.74% of the examples. We further study the impact of query expansion and diversity-focused reranking approaches and analyze retriever sycophancy. Together, we lay the foundation for future studies in retrieval diversity handling complex queries.<|reference_end|>
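The evaluation described above hinges on a coverage-style metric: an automatic judge decides which perspectives each retrieved document contains, and an example counts as covered if the union of those judgments includes every reference perspective. A minimal sketch of that aggregation step follows (the field names and toy judgments are hypothetical, and the LM-based judging itself is not shown):

```python
# Illustrative only: field names and the toy judgments below are hypothetical.
def covers_all(perspectives: list[str], doc_judgments: list[set[str]]) -> bool:
    """True if the perspectives found across the retrieved documents cover the reference set."""
    found = set().union(*doc_judgments) if doc_judgments else set()
    return set(perspectives).issubset(found)

def coverage_rate(examples: list[dict]) -> float:
    """Fraction of examples whose retrieved set covers every reference perspective."""
    hits = sum(covers_all(ex["perspectives"], ex["doc_judgments"]) for ex in examples)
    return hits / len(examples)

examples = [
    {"perspectives": ["pro", "con"], "doc_judgments": [{"pro"}, {"con", "pro"}]},
    {"perspectives": ["pro", "con"], "doc_judgments": [{"pro"}, {"pro"}]},
]
print(coverage_rate(examples))  # 0.5
```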
|
arxiv
|
@article{chen2024open-world,
title={Open-World Evaluation for Retrieving Diverse Perspectives},
author={Hung-Ting Chen, Eunsol Choi},
journal={arXiv preprint arXiv:2409.18110},
year={2024},
archivePrefix={arXiv},
eprint={2409.18110},
primaryClass={cs.CL cs.IR}
}
|
chen2024open-world
|
arxiv-662469
|
2409.18111
|
E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding
|
<|reference_start|>ET Bench: Towards Open-Ended Event-Level Video-Language Understanding: Recent advances in Video Large Language Models (Video-LLMs) have demonstrated their great potential in general-purpose video understanding. To verify the significance of these models, a number of benchmarks have been proposed to diagnose their capabilities in different scenarios. However, existing benchmarks merely evaluate models through video-level question-answering, lacking fine-grained event-level assessment and task diversity. To fill this gap, we introduce E.T. Bench (Event-Level & Time-Sensitive Video Understanding Benchmark), a large-scale and high-quality benchmark for open-ended event-level video understanding. Categorized within a 3-level task taxonomy, E.T. Bench encompasses 7.3K samples under 12 tasks with 7K videos (251.4h total length) under 8 domains, providing comprehensive evaluations. We extensively evaluated 8 Image-LLMs and 12 Video-LLMs on our benchmark, and the results reveal that state-of-the-art models for coarse-level (video-level) understanding struggle to solve our fine-grained tasks, e.g., grounding event-of-interests within videos, largely due to the short video context length, improper time representations, and lack of multi-event training data. Focusing on these issues, we further propose a strong baseline model, E.T. Chat, together with an instruction-tuning dataset E.T. Instruct 164K tailored for fine-grained event-level understanding. Our simple but effective solution demonstrates superior performance in multiple scenarios.<|reference_end|>
|
arxiv
|
@article{liu2024e.t.,
title={E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding},
author={Ye Liu, Zongyang Ma, Zhongang Qi, Yang Wu, Ying Shan, Chang Wen Chen},
journal={arXiv preprint arXiv:2409.18111},
year={2024},
archivePrefix={arXiv},
eprint={2409.18111},
primaryClass={cs.CV}
}
|
liu2024e.t.
|
arxiv-662470
|
2409.18114
|
EdgeRunner: Auto-regressive Auto-encoder for Artistic Mesh Generation
|
<|reference_start|>EdgeRunner: Auto-regressive Auto-encoder for Artistic Mesh Generation: Current auto-regressive mesh generation methods suffer from issues such as incompleteness, insufficient detail, and poor generalization. In this paper, we propose an Auto-regressive Auto-encoder (ArAE) model capable of generating high-quality 3D meshes with up to 4,000 faces at a spatial resolution of $512^3$. We introduce a novel mesh tokenization algorithm that efficiently compresses triangular meshes into 1D token sequences, significantly enhancing training efficiency. Furthermore, our model compresses variable-length triangular meshes into a fixed-length latent space, enabling training latent diffusion models for better generalization. Extensive experiments demonstrate the superior quality, diversity, and generalization capabilities of our model in both point cloud and image-conditioned mesh generation tasks.<|reference_end|>
|
arxiv
|
@article{tang2024edgerunner:,
title={EdgeRunner: Auto-regressive Auto-encoder for Artistic Mesh Generation},
author={Jiaxiang Tang, Zhaoshuo Li, Zekun Hao, Xian Liu, Gang Zeng, Ming-Yu
Liu, Qinsheng Zhang},
journal={arXiv preprint arXiv:2409.18114},
year={2024},
archivePrefix={arXiv},
eprint={2409.18114},
primaryClass={cs.CV}
}
|
tang2024edgerunner:
|
arxiv-662471
|
2409.18118
|
Slowly Scaling Per-Record Differential Privacy
|
<|reference_start|>Slowly Scaling Per-Record Differential Privacy: We develop formal privacy mechanisms for releasing statistics from data with many outlying values, such as income data. These mechanisms ensure that a per-record differential privacy guarantee degrades slowly in the protected records' influence on the statistics being released. Formal privacy mechanisms generally add randomness, or "noise," to published statistics. If a noisy statistic's distribution changes little with the addition or deletion of a single record in the underlying dataset, an attacker looking at this statistic will find it plausible that any particular record was present or absent, preserving the records' privacy. More influential records -- those whose addition or deletion would change the statistics' distribution more -- typically suffer greater privacy loss. The per-record differential privacy framework quantifies these record-specific privacy guarantees, but existing mechanisms let these guarantees degrade rapidly (linearly or quadratically) with influence. While this may be acceptable in cases with some moderately influential records, it results in unacceptably high privacy losses when records' influence varies widely, as is common in economic data. We develop mechanisms with privacy guarantees that instead degrade as slowly as logarithmically with influence. These mechanisms allow for the accurate, unbiased release of statistics, while providing meaningful protection for highly influential records. As an example, we consider the private release of sums of unbounded establishment data such as payroll, where our mechanisms extend meaningful privacy protection even to very large establishments. We evaluate these mechanisms empirically and demonstrate their utility.<|reference_end|>
|
arxiv
|
@article{finley2024slowly,
title={Slowly Scaling Per-Record Differential Privacy},
author={Brian Finley, Anthony M Caruso, Justin C Doty, Ashwin Machanavajjhala,
Mikaela R Meyer, David Pujol, William Sexton, Zachary Terner},
journal={arXiv preprint arXiv:2409.18118},
year={2024},
archivePrefix={arXiv},
eprint={2409.18118},
primaryClass={cs.CR stat.ME}
}
|
finley2024slowly
|
arxiv-662472
|
2409.18119
|
Multi-View and Multi-Scale Alignment for Contrastive Language-Image Pre-training in Mammography
|
<|reference_start|>Multi-View and Multi-Scale Alignment for Contrastive Language-Image Pre-training in Mammography: Contrastive Language-Image Pre-training (CLIP) shows promise in medical image analysis but requires substantial data and computational resources. Due to these restrictions, existing CLIP applications in medical imaging focus mainly on modalities like chest X-rays that have abundant image-report data available, leaving many other important modalities under-explored. Here, we propose the first adaptation of the full CLIP model to mammography, which presents significant challenges due to labeled data scarcity, high-resolution images with small regions of interest, and data imbalance. We first develop a specialized supervision framework for mammography that leverages its multi-view nature. Furthermore, we design a symmetric local alignment module to better focus on detailed features in high-resolution images. Lastly, we incorporate a parameter-efficient fine-tuning approach for large language models pre-trained with medical knowledge to address data limitations. Our multi-view and multi-scale alignment (MaMA) method outperforms state-of-the-art baselines for three different tasks on two large real-world mammography datasets, EMBED and RSNA-Mammo, with only 52% model size compared with the largest baseline.<|reference_end|>
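The CLIP-style pre-training that the abstract above builds on optimizes a symmetric image-text contrastive objective. The sketch below shows that generic objective in numpy (it is not the paper's MaMA method and omits the multi-view supervision and local alignment modules):

```python
import numpy as np

# Illustrative only: the generic symmetric InfoNCE objective, not the paper's MaMA method.
def logsumexp(x: np.ndarray, axis: int) -> np.ndarray:
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def clip_symmetric_loss(img_emb: np.ndarray, txt_emb: np.ndarray,
                        temperature: float = 0.07) -> float:
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature            # (B, B) cosine similarities
    idx = np.arange(len(img))                     # matching pairs sit on the diagonal
    loss_i2t = -(logits - logsumexp(logits, axis=1))[idx, idx].mean()
    loss_t2i = -(logits - logsumexp(logits, axis=0))[idx, idx].mean()
    return 0.5 * (loss_i2t + loss_t2i)

rng = np.random.default_rng(0)
img, txt = rng.normal(size=(8, 128)), rng.normal(size=(8, 128))
print(f"contrastive loss = {clip_symmetric_loss(img, txt):.3f}")
```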
|
arxiv
|
@article{du2024multi-view,
title={Multi-View and Multi-Scale Alignment for Contrastive Language-Image
Pre-training in Mammography},
author={Yuexi Du, John Onofrey, Nicha C. Dvornek},
journal={arXiv preprint arXiv:2409.18119},
year={2024},
archivePrefix={arXiv},
eprint={2409.18119},
primaryClass={cs.CV cs.AI cs.LG}
}
|
du2024multi-view
|
arxiv-662473
|
2409.18120
|
EvMAPPER: High Altitude Orthomapping with Event Cameras
|
<|reference_start|>EvMAPPER: High Altitude Orthomapping with Event Cameras: Traditionally, unmanned aerial vehicles (UAVs) rely on CMOS-based cameras to collect images about the world below. One of the most successful applications of UAVs is to generate orthomosaics or orthomaps, in which a series of images are integrated together to develop a larger map. However, the use of CMOS-based cameras with global or rolling shutters means that orthomaps are vulnerable to challenging light conditions, motion blur, and high-speed motion of independently moving objects under the camera. Event cameras are less sensitive to these issues, as their pixels are able to trigger asynchronously on brightness changes. This work introduces the first orthomosaic approach using event cameras. In contrast to existing methods relying only on CMOS cameras, our approach enables map generation even in challenging light conditions, including direct sunlight and after sunset.<|reference_end|>
|
arxiv
|
@article{cladera2024evmapper:,
title={EvMAPPER: High Altitude Orthomapping with Event Cameras},
author={Fernando Cladera, Kenneth Chaney, M. Ani Hsieh, Camillo J. Taylor,
Vijay Kumar},
journal={arXiv preprint arXiv:2409.18120},
year={2024},
archivePrefix={arXiv},
eprint={2409.18120},
primaryClass={cs.RO cs.CV}
}
|
cladera2024evmapper:
|
arxiv-662474
|
2409.18121
|
Robot See Robot Do: Imitating Articulated Object Manipulation with Monocular 4D Reconstruction
|
<|reference_start|>Robot See Robot Do: Imitating Articulated Object Manipulation with Monocular 4D Reconstruction: Humans can learn to manipulate new objects by simply watching others; providing robots with the ability to learn from such demonstrations would enable a natural interface specifying new behaviors. This work develops Robot See Robot Do (RSRD), a method for imitating articulated object manipulation from a single monocular RGB human demonstration given a single static multi-view object scan. We first propose 4D Differentiable Part Models (4D-DPM), a method for recovering 3D part motion from a monocular video with differentiable rendering. This analysis-by-synthesis approach uses part-centric feature fields in an iterative optimization which enables the use of geometric regularizers to recover 3D motions from only a single video. Given this 4D reconstruction, the robot replicates object trajectories by planning bimanual arm motions that induce the demonstrated object part motion. By representing demonstrations as part-centric trajectories, RSRD focuses on replicating the demonstration's intended behavior while considering the robot's own morphological limits, rather than attempting to reproduce the hand's motion. We evaluate 4D-DPM's 3D tracking accuracy on ground truth annotated 3D part trajectories and RSRD's physical execution performance on 9 objects across 10 trials each on a bimanual YuMi robot. Each phase of RSRD achieves an average of 87% success rate, for a total end-to-end success rate of 60% across 90 trials. Notably, this is accomplished using only feature fields distilled from large pretrained vision models -- without any task-specific training, fine-tuning, dataset collection, or annotation. Project page: https://robot-see-robot-do.github.io<|reference_end|>
|
arxiv
|
@article{kerr2024robot,
title={Robot See Robot Do: Imitating Articulated Object Manipulation with
Monocular 4D Reconstruction},
author={Justin Kerr, Chung Min Kim, Mingxuan Wu, Brent Yi, Qianqian Wang, Ken
Goldberg, Angjoo Kanazawa},
journal={arXiv preprint arXiv:2409.18121},
year={2024},
archivePrefix={arXiv},
eprint={2409.18121},
primaryClass={cs.RO cs.CV}
}
|
kerr2024robot
|
arxiv-662475
|
2409.18122
|
RT-GuIDE: Real-Time Gaussian splatting for Information-Driven Exploration
|
<|reference_start|>RT-GuIDE: Real-Time Gaussian splatting for Information-Driven Exploration: We propose a framework for active mapping and exploration that leverages Gaussian splatting for constructing information-rich maps. Further, we develop a parallelized motion planning algorithm that can exploit the Gaussian map for real-time navigation. The Gaussian map constructed onboard the robot is optimized for both photometric and geometric quality while enabling real-time situational awareness for autonomy. We show through simulation experiments that our method is competitive with approaches that use alternate information gain metrics, while being orders of magnitude faster to compute. In real-world experiments, our algorithm achieves better map quality (10% higher Peak Signal-to-Noise Ratio (PSNR) and 30% higher geometric reconstruction accuracy) than Gaussian maps constructed by traditional exploration baselines. Experiment videos and more details can be found on our project page: https://tyuezhan.github.io/RT_GuIDE/<|reference_end|>
|
arxiv
|
@article{tao2024rt-guide:,
title={RT-GuIDE: Real-Time Gaussian splatting for Information-Driven
Exploration},
author={Yuezhan Tao, Dexter Ong, Varun Murali, Igor Spasojevic, Pratik
Chaudhari and Vijay Kumar},
journal={arXiv preprint arXiv:2409.18122},
year={2024},
archivePrefix={arXiv},
eprint={2409.18122},
primaryClass={cs.RO}
}
|
tao2024rt-guide:
|
arxiv-662476
|
2409.18124
|
Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction
|
<|reference_start|>Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction: Leveraging the visual priors of pre-trained text-to-image diffusion models offers a promising solution to enhance zero-shot generalization in dense prediction tasks. However, existing methods often uncritically use the original diffusion formulation, which may not be optimal due to the fundamental differences between dense prediction and image generation. In this paper, we provide a systemic analysis of the diffusion formulation for the dense prediction, focusing on both quality and efficiency. And we find that the original parameterization type for image generation, which learns to predict noise, is harmful for dense prediction; the multi-step noising/denoising diffusion process is also unnecessary and challenging to optimize. Based on these insights, we introduce Lotus, a diffusion-based visual foundation model with a simple yet effective adaptation protocol for dense prediction. Specifically, Lotus is trained to directly predict annotations instead of noise, thereby avoiding harmful variance. We also reformulate the diffusion process into a single-step procedure, simplifying optimization and significantly boosting inference speed. Additionally, we introduce a novel tuning strategy called detail preserver, which achieves more accurate and fine-grained predictions. Without scaling up the training data or model capacity, Lotus achieves SoTA performance in zero-shot depth and normal estimation across various datasets. It also enhances efficiency, being significantly faster than most existing diffusion-based methods. Lotus' superior quality and efficiency also enable a wide range of practical applications, such as joint estimation, single/multi-view 3D reconstruction, etc. Project page: https://lotus3d.github.io/.<|reference_end|>
|
arxiv
|
@article{he2024lotus:,
title={Lotus: Diffusion-based Visual Foundation Model for High-quality Dense
Prediction},
author={Jing He, Haodong Li, Wei Yin, Yixun Liang, Leheng Li, Kaiqiang Zhou,
Hongbo Zhang, Bingbing Liu, Ying-Cong Chen},
journal={arXiv preprint arXiv:2409.18124},
year={2024},
archivePrefix={arXiv},
eprint={2409.18124},
primaryClass={cs.CV}
}
|
he2024lotus:
|
arxiv-662477
|
2409.18125
|
LLaVA-3D: A Simple yet Effective Pathway to Empowering LMMs with 3D-awareness
|
<|reference_start|>LLaVA-3D: A Simple yet Effective Pathway to Empowering LMMs with 3D-awareness: Recent advancements in Large Multimodal Models (LMMs) have greatly enhanced their proficiency in 2D visual understanding tasks, enabling them to effectively process and understand images and videos. However, the development of LMMs with 3D-awareness for 3D scene understanding has been hindered by the lack of large-scale 3D vision-language datasets and powerful 3D encoders. In this paper, we introduce a simple yet effective framework called LLaVA-3D. Leveraging the strong 2D understanding priors from LLaVA, our LLaVA-3D efficiently adapts LLaVA for 3D scene understanding without compromising 2D understanding capabilities. To achieve this, we employ a simple yet effective representation, 3D Patch, which connects 2D CLIP patch features with their corresponding positions in 3D space. By integrating the 3D Patches into 2D LMMs and employing joint 2D and 3D vision-language instruction tuning, we establish a unified architecture for both 2D image understanding and 3D scene understanding. Experimental results show that LLaVA-3D converges 3.5x faster than existing 3D LMMs when trained on 3D vision-language datasets. Moreover, LLaVA-3D not only achieves state-of-the-art performance across various 3D tasks but also maintains comparable 2D image understanding and vision-language conversation capabilities with LLaVA.<|reference_end|>
|
arxiv
|
@article{zhu2024llava-3d:,
title={LLaVA-3D: A Simple yet Effective Pathway to Empowering LMMs with
3D-awareness},
author={Chenming Zhu, Tai Wang, Wenwei Zhang, Jiangmiao Pang, Xihui Liu},
journal={arXiv preprint arXiv:2409.18125},
year={2024},
archivePrefix={arXiv},
eprint={2409.18125},
primaryClass={cs.CV}
}
|
zhu2024llava-3d:
|
arxiv-662478
|
2409.18127
|
EgoLM: Multi-Modal Language Model of Egocentric Motions
|
<|reference_start|>EgoLM: Multi-Modal Language Model of Egocentric Motions: With the growing prevalence of wearable devices, learning egocentric motions becomes essential to develop contextual AI. In this work, we present EgoLM, a versatile framework that tracks and understands egocentric motions from multi-modal inputs, e.g., egocentric videos and motion sensors. EgoLM exploits rich contexts for the disambiguation of egomotion tracking and understanding, which are ill-posed under single modality conditions. To facilitate the versatile and multi-modal framework, our key insight is to model the joint distribution of egocentric motions and natural languages using large language models (LLMs). Multi-modal sensor inputs are encoded and projected to the joint latent space of language models, and used to prompt motion generation or text generation for egomotion tracking or understanding, respectively. Extensive experiments on a large-scale multi-modal human motion dataset validate the effectiveness of EgoLM as a generalist model for universal egocentric learning.<|reference_end|>
|
arxiv
|
@article{hong2024egolm:,
title={EgoLM: Multi-Modal Language Model of Egocentric Motions},
author={Fangzhou Hong, Vladimir Guzov, Hyo Jin Kim, Yuting Ye, Richard
Newcombe, Ziwei Liu, Lingni Ma},
journal={arXiv preprint arXiv:2409.18127},
year={2024},
archivePrefix={arXiv},
eprint={2409.18127},
primaryClass={cs.CV}
}
|
hong2024egolm:
|
arxiv-662479
|
2409.18128
|
FlowTurbo: Towards Real-time Flow-Based Image Generation with Velocity Refiner
|
<|reference_start|>FlowTurbo: Towards Real-time Flow-Based Image Generation with Velocity Refiner: Building on the success of diffusion models in visual generation, flow-based models reemerge as another prominent family of generative models that have achieved competitive or better performance in terms of both visual quality and inference speed. By learning the velocity field through flow-matching, flow-based models tend to produce a straighter sampling trajectory, which is advantageous during the sampling process. However, unlike diffusion models for which fast samplers are well-developed, efficient sampling of flow-based generative models has been rarely explored. In this paper, we propose a framework called FlowTurbo to accelerate the sampling of flow-based models while still enhancing the sampling quality. Our primary observation is that the velocity predictor's outputs in the flow-based models will become stable during the sampling, enabling the estimation of velocity via a lightweight velocity refiner. Additionally, we introduce several techniques including a pseudo corrector and sample-aware compilation to further reduce inference time. Since FlowTurbo does not change the multi-step sampling paradigm, it can be effectively applied for various tasks such as image editing, inpainting, etc. By integrating FlowTurbo into different flow-based models, we obtain an acceleration ratio of 53.1%$\sim$58.3% on class-conditional generation and 29.8%$\sim$38.5% on text-to-image generation. Notably, FlowTurbo reaches an FID of 2.12 on ImageNet with 100 (ms / img) and FID of 3.93 with 38 (ms / img), achieving the real-time image generation and establishing the new state-of-the-art. Code is available at https://github.com/shiml20/FlowTurbo.<|reference_end|>
|
arxiv
|
@article{zhao2024flowturbo:,
title={FlowTurbo: Towards Real-time Flow-Based Image Generation with Velocity
Refiner},
author={Wenliang Zhao, Minglei Shi, Xumin Yu, Jie Zhou, Jiwen Lu},
journal={arXiv preprint arXiv:2409.18128},
year={2024},
archivePrefix={arXiv},
eprint={2409.18128},
primaryClass={cs.CV}
}
|
zhao2024flowturbo:
|
arxiv-662480
|
2409.18132
|
Decomposition of one-layer neural networks via the infinite sum of reproducing kernel Banach spaces
|
<|reference_start|>Decomposition of one-layer neural networks via the infinite sum of reproducing kernel Banach spaces: In this paper, we define the sum of RKBSs using the characterization theorem of RKBSs and show that the sum of RKBSs is compatible with the direct sum of feature spaces. Moreover, we decompose the integral RKBS into the sum of $p$-norm RKBSs. Finally, we provide applications for the structural understanding of the integral RKBS class.<|reference_end|>
|
arxiv
|
@article{shin2024decomposition,
title={Decomposition of one-layer neural networks via the infinite sum of
reproducing kernel Banach spaces},
author={Seungcheol Shin and Myungjoo Kang},
journal={arXiv preprint arXiv:2409.18132},
year={2024},
archivePrefix={arXiv},
eprint={2409.18132},
primaryClass={math.FA cs.AI}
}
|
shin2024decomposition
|
arxiv-662481
|
2409.18139
|
Robust optimization and uncertainty quantification in the nonlinear mechanics of an elevator brake system
|
<|reference_start|>Robust optimization and uncertainty quantification in the nonlinear mechanics of an elevator brake system: This paper deals with nonlinear mechanics of an elevator brake system subjected to uncertainties. A deterministic model that relates the braking force with uncertain parameters is deduced from mechanical equilibrium conditions. In order to take into account parameters variabilities, a parametric probabilistic approach is employed. In this stochastic formalism, the uncertain parameters are modeled as random variables, with distributions specified by the maximum entropy principle. The uncertainties are propagated by the Monte Carlo method, which provides a detailed statistical characterization of the response. This work still considers the optimum design of the brake system, formulating and solving nonlinear optimization problems, with and without the uncertainties effects.<|reference_end|>
|
arxiv
|
@article{wolszczak2024robust,
title={Robust optimization and uncertainty quantification in the nonlinear
mechanics of an elevator brake system},
author={Piotr Wolszczak, Pawel Lonkwic, Americo Cunha Jr, Grzegorz Litak,
Szymon Molski},
journal={Meccanica, vol. 54, pp. 1057-1069, 2019},
year={2024},
doi={10.1007/s11012-019-00992-7},
archivePrefix={arXiv},
eprint={2409.18139},
primaryClass={cs.CE math.OC physics.class-ph stat.AP}
}
|
wolszczak2024robust
|
arxiv-662482
|
2409.18142
|
A Survey on Multimodal Benchmarks: In the Era of Large AI Models
|
<|reference_start|>A Survey on Multimodal Benchmarks: In the Era of Large AI Models: The rapid evolution of Multimodal Large Language Models (MLLMs) has brought substantial advancements in artificial intelligence, significantly enhancing the capability to understand and generate multimodal content. While prior studies have largely concentrated on model architectures and training methodologies, a thorough analysis of the benchmarks used for evaluating these models remains underexplored. This survey addresses this gap by systematically reviewing 211 benchmarks that assess MLLMs across four core domains: understanding, reasoning, generation, and application. We provide a detailed analysis of task designs, evaluation metrics, and dataset constructions, across diverse modalities. We hope that this survey will contribute to the ongoing advancement of MLLM research by offering a comprehensive overview of benchmarking practices and identifying promising directions for future work. An associated GitHub repository collecting the latest papers is available.<|reference_end|>
|
arxiv
|
@article{li2024a,
title={A Survey on Multimodal Benchmarks: In the Era of Large AI Models},
author={Lin Li and Guikun Chen and Hanrong Shi and Jun Xiao and Long Chen},
journal={arXiv preprint arXiv:2409.18142},
year={2024},
archivePrefix={arXiv},
eprint={2409.18142},
primaryClass={cs.AI cs.MM}
}
|
li2024a
|
arxiv-662483
|
2409.18147
|
SSP-RACL: Classification of Noisy Fundus Images with Self-Supervised Pretraining and Robust Adaptive Credal Loss
|
<|reference_start|>SSP-RACL: Classification of Noisy Fundus Images with Self-Supervised Pretraining and Robust Adaptive Credal Loss: Fundus image classification is crucial in computer-aided diagnosis tasks, but label noise significantly impairs the performance of deep neural networks. To address this challenge, we propose a robust framework, Self-Supervised Pre-training with Robust Adaptive Credal Loss (SSP-RACL), for handling label noise in fundus image datasets. First, we use Masked Autoencoders (MAE) for pre-training to extract features, unaffected by label noise. Subsequently, RACL employs a superset learning framework, setting confidence thresholds and an adaptive label relaxation parameter to construct possibility distributions and provide more reliable ground-truth estimates, thus effectively suppressing the memorization effect. Additionally, we introduce clinical knowledge-based asymmetric noise generation to simulate real-world noisy fundus image datasets. Experimental results demonstrate that our proposed method outperforms existing approaches in handling label noise, showing superior performance.<|reference_end|>
|
arxiv
|
@article{ye2024ssp-racl:,
title={SSP-RACL: Classification of Noisy Fundus Images with Self-Supervised
Pretraining and Robust Adaptive Credal Loss},
author={Mengwen Ye, Yingzi Huangfu, You Li, Zekuan Yu},
journal={arXiv preprint arXiv:2409.18147},
year={2024},
archivePrefix={arXiv},
eprint={2409.18147},
primaryClass={cs.CV}
}
|
ye2024ssp-racl:
|
arxiv-662484
|
2409.18152
|
Reinforcement Learning for Finite Space Mean-Field Type Games
|
<|reference_start|>Reinforcement Learning for Finite Space Mean-Field Type Games: Mean field type games (MFTGs) describe Nash equilibria between large coalitions: each coalition consists of a continuum of cooperative agents who maximize the average reward of their coalition while interacting non-cooperatively with a finite number of other coalitions. Although the theory has been extensively developed, we are still lacking efficient and scalable computational methods. Here, we develop reinforcement learning methods for such games in a finite space setting with general dynamics and reward functions. We start by proving that the MFTG solution yields approximate Nash equilibria in finite-size coalition games. We then propose two algorithms. The first is based on quantization of the mean-field spaces and Nash Q-learning. We provide convergence and stability analysis. We then propose a deep reinforcement learning algorithm, which can scale to larger spaces. Numerical examples on 5 environments show the scalability and the efficiency of the proposed method.<|reference_end|>
|
arxiv
|
@article{shao2024reinforcement,
title={Reinforcement Learning for Finite Space Mean-Field Type Games},
author={Kai Shao, Jiacheng Shen, Chijie An, Mathieu Lauri`ere},
journal={arXiv preprint arXiv:2409.18152},
year={2024},
archivePrefix={arXiv},
eprint={2409.18152},
primaryClass={cs.GT cs.LG math.OC}
}
|
shao2024reinforcement
|
arxiv-662485
|
2409.18153
|
Most Influential Subset Selection: Challenges, Promises, and Beyond
|
<|reference_start|>Most Influential Subset Selection: Challenges, Promises, and Beyond: How can we attribute the behaviors of machine learning models to their training data? While the classic influence function sheds light on the impact of individual samples, it often fails to capture the more complex and pronounced collective influence of a set of samples. To tackle this challenge, we study the Most Influential Subset Selection (MISS) problem, which aims to identify a subset of training samples with the greatest collective influence. We conduct a comprehensive analysis of the prevailing approaches in MISS, elucidating their strengths and weaknesses. Our findings reveal that influence-based greedy heuristics, a dominant class of algorithms in MISS, can provably fail even in linear regression. We delineate the failure modes, including the errors of influence function and the non-additive structure of the collective influence. Conversely, we demonstrate that an adaptive version of these heuristics which applies them iteratively, can effectively capture the interactions among samples and thus partially address the issues. Experiments on real-world datasets corroborate these theoretical findings, and further demonstrate that the merit of adaptivity can extend to more complex scenarios such as classification tasks and non-linear neural networks. We conclude our analysis by emphasizing the inherent trade-off between performance and computational efficiency, questioning the use of additive metrics such as the linear datamodeling score, and offering a range of discussions.<|reference_end|>
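The adaptive variant discussed above re-scores samples after every removal instead of ranking them once. A minimal sketch of that idea for linear regression follows, using exact leave-one-out refitting in place of the influence-function approximation (function names and the synthetic data are hypothetical, and this is not the paper's implementation):

```python
import numpy as np

# Illustrative only: a generic adaptive greedy loop with exact refitting, not the paper's method.
def fit(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Ordinary least squares via the pseudo-inverse."""
    return np.linalg.pinv(X) @ y

def adaptive_greedy_removal(X, y, x_test, k):
    """Iteratively remove the training point whose deletion changes the prediction at x_test most."""
    remaining, removed = list(range(len(X))), []
    for _ in range(k):
        base_pred = x_test @ fit(X[remaining], y[remaining])
        scores = []
        for i in remaining:                               # re-scored every round (adaptivity)
            keep = [j for j in remaining if j != i]
            scores.append(abs(x_test @ fit(X[keep], y[keep]) - base_pred))
        worst = remaining[int(np.argmax(scores))]
        removed.append(worst)
        remaining.remove(worst)
    return removed

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
print(adaptive_greedy_removal(X, y, x_test=np.array([1.0, 1.0, 1.0]), k=5))
```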
|
arxiv
|
@article{hu2024most,
title={Most Influential Subset Selection: Challenges, Promises, and Beyond},
author={Yuzheng Hu, Pingbang Hu, Han Zhao, Jiaqi W. Ma},
journal={arXiv preprint arXiv:2409.18153},
year={2024},
archivePrefix={arXiv},
eprint={2409.18153},
primaryClass={cs.LG stat.ML}
}
|
hu2024most
|
arxiv-662486
|
2409.18155
|
Non-cooperative rational synthesis problem for probabilistic strategies
|
<|reference_start|>Non-cooperative rational synthesis problem for probabilistic strategies: We study the decidability and complexity of non-cooperative rational synthesis problem (abbreviated as NCRSP) for some classes of probabilistic strategies. We show that NCRSP for stationary strategies and Muller objectives is in 3-EXPTIME, and if we restrict the strategies of environment players to be positional, NCRSP becomes NEXPSPACE solvable. On the other hand, NCRSP_>, which is a variant of NCRSP, is shown to be undecidable even for pure finite-state strategies and terminal reachability objectives. Finally, we show that NCRSP becomes EXPTIME solvable if we restrict the memory of a strategy to be the most recently visited t vertices where t is linear in the size of the game.<|reference_end|>
|
arxiv
|
@article{koide2024non-cooperative,
title={Non-cooperative rational synthesis problem for probabilistic strategies},
author={So Koide, Yoshiaki Takata and Hiroyuki Seki},
journal={arXiv preprint arXiv:2409.18155},
year={2024},
archivePrefix={arXiv},
eprint={2409.18155},
primaryClass={cs.GT cs.FL}
}
|
koide2024non-cooperative
|
arxiv-662487
|
2409.18156
|
A novel application of Shapley values for large multidimensional time-series data: Applying explainable AI to a DNA profile classification neural network
|
<|reference_start|>A novel application of Shapley values for large multidimensional time-series data: Applying explainable AI to a DNA profile classification neural network: The application of Shapley values to high-dimensional, time-series-like data is computationally challenging - and sometimes impossible. For $N$ inputs the problem is $2^N$ hard. In image processing, clusters of pixels, referred to as superpixels, are used to streamline computations. This research presents an efficient solution for time-series-like data that adapts the idea of superpixels for Shapley value computation. Motivated by a forensic DNA classification example, the method is applied to multivariate time-series-like data whose features have been classified by a convolutional neural network (CNN). In DNA processing, it is important to identify alleles from the background noise created by DNA extraction and processing. A single DNA profile has $31,200$ scan points to classify, and the classification decisions must be defensible in a court of law. This means that classification is routinely performed by human readers - a monumental and time-consuming process. The application of a CNN with fast computation of meaningful Shapley values provides a potential alternative to the classification. This research demonstrates the realistic, accurate and fast computation of Shapley values for this massive task.<|reference_end|>
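Carrying the superpixel idea over to 1D signals, one can group scan points into contiguous segments and estimate Shapley values per segment by Monte Carlo sampling of segment orderings, which replaces the 2^N enumeration over individual points. A hedged sketch of that generic estimator follows (the toy model and segment counts are hypothetical, not the paper's CNN or its exact grouping):

```python
import numpy as np

# Illustrative only: generic per-segment Monte Carlo Shapley estimation on a toy 1D signal.
def segment_shapley(x, baseline, model, n_segments, n_perms=200, rng=None):
    """Monte Carlo Shapley values for contiguous segments of a 1D signal."""
    rng = rng or np.random.default_rng(0)
    segments = np.array_split(np.arange(len(x)), n_segments)
    phi = np.zeros(n_segments)
    for _ in range(n_perms):
        order = rng.permutation(n_segments)
        z = baseline.copy()
        prev = model(z)
        for s in order:
            z[segments[s]] = x[segments[s]]        # reveal one segment at a time
            cur = model(z)
            phi[s] += cur - prev                   # marginal contribution of segment s
            prev = cur
    return phi / n_perms

# Toy "classifier": mean of the middle third of the signal.
model = lambda z: z[len(z) // 3: 2 * len(z) // 3].mean()
x = np.sin(np.linspace(0, 6.28, 300)) + 1.0
phi = segment_shapley(x, baseline=np.zeros_like(x), model=model, n_segments=6)
print(np.round(phi, 3))
```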
|
arxiv
|
@article{elborough2024a,
title={A novel application of Shapley values for large multidimensional
time-series data: Applying explainable AI to a DNA profile classification
neural network},
author={Lauren Elborough, Duncan Taylor, Melissa Humphries},
journal={arXiv preprint arXiv:2409.18156},
year={2024},
archivePrefix={arXiv},
eprint={2409.18156},
primaryClass={q-bio.QM cs.LG q-bio.GN stat.ML}
}
|
elborough2024a
|
arxiv-662488
|
2409.18157
|
Recombination vs Stochasticity: A Comparative Study on the Maximum Clique Problem
|
<|reference_start|>Recombination vs Stochasticity: A Comparative Study on the Maximum Clique Problem: The maximum clique problem (MCP) is a fundamental problem in graph theory and in computational complexity. Given a graph G, the problem is that of finding the largest clique (complete subgraph) in G. The MCP has many important applications in different domains and has been much studied. The problem has been shown to be NP-Hard and the corresponding decision problem to be NP-Complete. All exact (optimal) algorithms discovered so far run in exponential time. Various meta-heuristics have been used to approximate the MCP. These include genetic and memetic algorithms, ant colony optimization, greedy algorithms, Tabu algorithms, and simulated annealing. This study presents a critical examination of the effectiveness of applying genetic algorithms (GAs) to the MCP compared to a purely stochastic approach. Our results indicate that Monte Carlo algorithms, which employ random searches to generate and then refine sub-graphs into cliques, often surpass genetic algorithms in both speed and capability, particularly in less dense graphs. This observation challenges the conventional reliance on genetic algorithms, suggesting a reevaluation of the roles of the crossover and mutation operators in exploring the solution space. We observe that, in some of the denser graphs, the recombination strategy of genetic algorithms shows unexpected efficacy, hinting at the untapped potential of genetic methods under specific conditions. This work not only questions established paradigms but also opens avenues for exploring algorithmic efficiency in solving the MCP and other NP-Hard problems, inviting further research into the conditions that favor purely stochastic methods over genetic recombination and vice versa.<|reference_end|>
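The purely stochastic approach described above repeatedly refines random vertex orderings into cliques and keeps the best one found. A minimal sketch of such a Monte Carlo clique search is given below (a generic restart-based greedy, not necessarily the authors' exact algorithm):

```python
import random

# Illustrative only: a generic random-restart greedy clique search.
def random_greedy_clique(adj: dict, n_restarts: int = 10_000, seed: int = 0) -> set:
    """Monte Carlo clique search: grow a clique along many random vertex orderings."""
    rng = random.Random(seed)
    vertices = list(adj)
    best: set = set()
    for _ in range(n_restarts):
        order = vertices[:]
        rng.shuffle(order)
        clique: set = set()
        for v in order:
            if all(v in adj[u] for u in clique):   # v is adjacent to every clique member
                clique.add(v)
        if len(clique) > len(best):
            best = clique
    return best

# Small example: a triangle {0, 1, 2} plus a pendant vertex 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(random_greedy_clique(adj, n_restarts=100))   # -> {0, 1, 2}
```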
|
arxiv
|
@article{vella2024recombination,
title={Recombination vs Stochasticity: A Comparative Study on the Maximum
Clique Problem},
author={Michael Vella, John Abela, Kristian Guillaumier},
journal={arXiv preprint arXiv:2409.18157},
year={2024},
archivePrefix={arXiv},
eprint={2409.18157},
primaryClass={cs.NE}
}
|
vella2024recombination
|
arxiv-662489
|
2409.18158
|
Decomposable Transformer Point Processes
|
<|reference_start|>Decomposable Transformer Point Processes: The standard paradigm of modeling marked point processes is by parameterizing the intensity function using an attention-based (Transformer-style) architecture. Despite the flexibility of these methods, their inference is based on the computationally intensive thinning algorithm. In this work, we propose a framework where the advantages of the attention-based architecture are maintained and the limitation of the thinning algorithm is circumvented. The framework depends on modeling the conditional distribution of inter-event times with a mixture of log-normals satisfying a Markov property and the conditional probability mass function for the marks with a Transformer-based architecture. The proposed method attains state-of-the-art performance in predicting the next event of a sequence given its history. The experiments also reveal the efficacy of the methods that do not rely on the thinning algorithm during inference over the ones that do. Finally, we test our method on the challenging long-horizon prediction task and find that it outperforms a baseline developed specifically for tackling this task; importantly, inference requires just a fraction of the time compared to the thinning-based baseline.<|reference_end|>
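The inter-event-time component described above is a mixture of log-normals, which can be evaluated and sampled in closed form, avoiding thinning. The sketch below shows only that generic ingredient (parameter values are hypothetical, and the Transformer conditioning and mark model are omitted):

```python
import numpy as np

# Illustrative only: a stand-alone log-normal mixture for inter-event times.
def lognormal_mixture_nll(tau, weights, mus, sigmas):
    """Negative log-likelihood of inter-event times under a log-normal mixture."""
    tau = np.asarray(tau)[:, None]                    # (N, 1)
    log_comp = (
        np.log(weights)
        - np.log(tau) - np.log(sigmas) - 0.5 * np.log(2 * np.pi)
        - 0.5 * ((np.log(tau) - mus) / sigmas) ** 2
    )                                                  # (N, K) component log-densities
    m = log_comp.max(axis=1, keepdims=True)
    log_mix = m.squeeze(1) + np.log(np.exp(log_comp - m).sum(axis=1))
    return -log_mix.mean()

def sample_lognormal_mixture(n, weights, mus, sigmas, rng=None):
    """Draw inter-event times directly (no thinning needed)."""
    rng = rng or np.random.default_rng(0)
    k = rng.choice(len(weights), size=n, p=weights)
    return np.exp(mus[k] + sigmas[k] * rng.standard_normal(n))

w, mu, sig = np.array([0.7, 0.3]), np.array([-1.0, 0.5]), np.array([0.4, 0.8])
tau = sample_lognormal_mixture(1000, w, mu, sig)
print(f"NLL = {lognormal_mixture_nll(tau, w, mu, sig):.3f}")
```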
|
arxiv
|
@article{panos2024decomposable,
title={Decomposable Transformer Point Processes},
author={Aristeidis Panos},
journal={arXiv preprint arXiv:2409.18158},
year={2024},
archivePrefix={arXiv},
eprint={2409.18158},
primaryClass={stat.ML cs.LG}
}
|
panos2024decomposable
|
arxiv-662490
|
2409.18162
|
The Nexus of AR/VR, Large Language Models, UI/UX, and Robotics Technologies in Enhancing Learning and Social Interaction for Children: A Systematic Review
|
<|reference_start|>The Nexus of AR/VR, Large Language Models, UI/UX, and Robotics Technologies in Enhancing Learning and Social Interaction for Children: A Systematic Review: The combination of large language models (LLMs), augmented reality (AR), and user interface/user experience (UI/UX) design in therapies for children, especially with disorders like autism spectrum disorder (ASD), is examined in this review study. 150 publications were found by a thorough literature search throughout PubMed, ACM, IEEE Xplore, Elsevier, and Google Scholar; 42 of them were chosen for in-depth study due to their methodological rigor and relevance. Three primary areas are covered in this review: how AR can improve social and learning results; how LLMs can help with communication; and how UI/UX design affects how effective these technologies are. Results reveal that while LLMs can provide individualized learning and communication support, AR has demonstrated promise in enhancing social skills, motivation, and attention. For children with ASD, accessible and interesting interventions depend heavily on effective UI/UX design. To optimize the benefits of these technologies in ASD therapies, the study emphasizes the need for additional research to address difficulties related to customization, accessibility, and integration.<|reference_end|>
|
arxiv
|
@article{paneru2024the,
title={The Nexus of AR/VR, Large Language Models, UI/UX, and Robotics
Technologies in Enhancing Learning and Social Interaction for Children: A
Systematic Review},
author={Biplov Paneru, Bishwash Paneru},
journal={arXiv preprint arXiv:2409.18162},
year={2024},
archivePrefix={arXiv},
eprint={2409.18162},
primaryClass={cs.HC cs.AI cs.SI}
}
|
paneru2024the
|
arxiv-662491
|
2409.18163
|
A Survey on Neural Architecture Search Based on Reinforcement Learning
|
<|reference_start|>A Survey on Neural Architecture Search Based on Reinforcement Learning: The automation of feature extraction in machine learning has been successfully realized through the explosive development of deep learning. However, the structures and hyperparameters of deep neural network architectures also make a huge difference in performance across different tasks. Exploring optimal structures and hyperparameters often involves a great deal of tedious human intervention. A natural question, therefore, is whether the search for optimal network structures and hyperparameters can itself be automated. Automating the exploration of optimal hyperparameters is the task of Hyperparameter Optimization, while Neural Architecture Search aims to automatically find the best network structure for a given task. In this paper, we first introduce the overall development of Neural Architecture Search and then focus on providing a comprehensive and accessible survey of Neural Architecture Search works based on reinforcement learning, including improvements and variants aimed at handling more complex structures and resource-constrained environments.<|reference_end|>
|
arxiv
|
@article{shao2024a,
title={A Survey on Neural Architecture Search Based on Reinforcement Learning},
author={Wenzhu Shao},
journal={arXiv preprint arXiv:2409.18163},
year={2024},
archivePrefix={arXiv},
eprint={2409.18163},
primaryClass={cs.LG cs.AI}
}
|
shao2024a
|
arxiv-662492
|
2409.18164
|
Data-Prep-Kit: getting your data ready for LLM application development
|
<|reference_start|>Data-Prep-Kit: getting your data ready for LLM application development: Data preparation is the first and a very important step towards any Large Language Model (LLM) development. This paper introduces an easy-to-use, extensible, and scale-flexible open-source data preparation toolkit called Data Prep Kit (DPK). DPK is architected and designed to enable users to scale their data preparation to their needs. With DPK they can prepare data on a local machine or effortlessly scale to run on a cluster with thousands of CPU Cores. DPK comes with a highly scalable, yet extensible set of modules that transform natural language and code data. If the user needs additional transforms, they can be easily developed using extensive DPK support for transform creation. These modules can be used independently or pipelined to perform a series of operations. In this paper, we describe DPK architecture and show its performance from a small scale to a very large number of CPUs. The modules from DPK have been used for the preparation of Granite Models [1] [2]. We believe DPK is a valuable contribution to the AI community to easily prepare data to enhance the performance of their LLM models or to fine-tune models with Retrieval-Augmented Generation (RAG).<|reference_end|>
|
arxiv
|
@article{wood2024data-prep-kit:,
title={Data-Prep-Kit: getting your data ready for LLM application development},
author={David Wood, Boris Lublinsky, Alexy Roytman, Shivdeep Singh, Constantin
Adam, Abdulhamid Adebayo, Sungeun An, Yuan Chi Chang, Xuan-Hong Dang, Nirmit
Desai, Michele Dolfi, Hajar Emami-Gohari, Revital Eres, Takuya Goto, Dhiraj
Joshi, Yan Koyfman, Mohammad Nassar, Hima Patel, Paramesvaran Selvam, Yousaf
Shah, Saptha Surendran, Daiki Tsuzuku, Petros Zerfos and Shahrokh Daijavad},
journal={arXiv preprint arXiv:2409.18164},
year={2024},
archivePrefix={arXiv},
eprint={2409.18164},
primaryClass={cs.AI cs.CL cs.LG}
}
|
wood2024data-prep-kit:
|
arxiv-662493
|
2409.18166
|
Describing Deferred Acceptance and Strategyproofness to Participants: Experimental Analysis
|
<|reference_start|>Describing Deferred Acceptance and Strategyproofness to Participants: Experimental Analysis: We conduct an incentivized lab experiment to test participants' ability to understand the DA matching mechanism and the strategyproofness property, conveyed in different ways. We find that while many participants can (using a novel GUI) learn DA's mechanics and calculate its outcomes, such understanding does not imply understanding of strategyproofness (as measured by specially designed tests). However, a novel menu description of strategyproofness conveys this property significantly better than other treatments. While behavioral effects are small on average, participants with levels of strategyproofness understanding above a certain threshold play the classical dominant strategy at very high rates.<|reference_end|>
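For reference, the mechanism participants are asked to understand above is Deferred Acceptance; a minimal one-to-one, proposer-proposing sketch follows (the toy preference lists are hypothetical, and the experiment's GUI and school-capacity details are not modelled):

```python
# Illustrative only: one-to-one, proposer-proposing Deferred Acceptance (Gale-Shapley).
def deferred_acceptance(proposer_prefs: dict, receiver_prefs: dict) -> dict:
    """Returns the stable matching as a receiver -> proposer mapping."""
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in receiver_prefs.items()}
    next_idx = {p: 0 for p in proposer_prefs}     # next receiver each proposer will propose to
    matched = {}                                  # tentative matches: receiver -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        if next_idx[p] >= len(proposer_prefs[p]):
            continue                              # exhausted every option; stays unmatched
        r = proposer_prefs[p][next_idx[p]]
        next_idx[p] += 1
        if r not in matched:
            matched[r] = p                        # tentatively accepted
        elif rank[r][p] < rank[r][matched[r]]:
            free.append(matched[r])               # displaces the current tentative match
            matched[r] = p
        else:
            free.append(p)                        # rejected; will propose to the next choice
    return matched

students = {"s1": ["a", "b", "c"], "s2": ["a", "c", "b"], "s3": ["b", "a", "c"]}
schools = {"a": ["s2", "s1", "s3"], "b": ["s1", "s3", "s2"], "c": ["s3", "s1", "s2"]}
print(deferred_acceptance(students, schools))     # {'b': 's1', 'a': 's2', 'c': 's3'}
```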
|
arxiv
|
@article{gonczarowski2024describing,
title={Describing Deferred Acceptance and Strategyproofness to Participants:
Experimental Analysis},
author={Yannai A. Gonczarowski, Ori Heffetz, Guy Ishai, Clayton Thomas},
journal={arXiv preprint arXiv:2409.18166},
year={2024},
archivePrefix={arXiv},
eprint={2409.18166},
primaryClass={econ.GN cs.GT q-fin.EC}
}
|
gonczarowski2024describing
|
arxiv-662494
|
2409.18168
|
Jump Diffusion-Informed Neural Networks with Transfer Learning for Accurate American Option Pricing under Data Scarcity
|
<|reference_start|>Jump Diffusion-Informed Neural Networks with Transfer Learning for Accurate American Option Pricing under Data Scarcity: Option pricing models, essential in financial mathematics and risk management, have been extensively studied and recently advanced by AI methodologies. However, American option pricing remains challenging due to the complexity of determining optimal exercise times and modeling non-linear payoffs resulting from stochastic paths. Moreover, the prevalent use of the Black-Scholes formula in hybrid models fails to accurately capture the discontinuity in the price process, limiting model performance, especially under scarce data conditions. To address these issues, this study presents a comprehensive framework for American option pricing consisting of six interrelated modules, which combine nonlinear optimization algorithms, analytical and numerical models, and neural networks to improve pricing performance. Additionally, to handle the scarce data challenge, this framework integrates the transfer learning through numerical data augmentation and a physically constrained, jump diffusion process-informed neural network to capture the leptokurtosis of the log return distribution. To increase training efficiency, a warm-up period using Bayesian optimization is designed to provide optimal data loss and physical loss coefficients. Experimental results of six case studies demonstrate the accuracy, convergence, physical effectiveness, and generalization of the framework. Moreover, the proposed model shows superior performance in pricing deep out-of-the-money options.<|reference_end|>
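The jump-diffusion process that informs the network above is not fully specified in the abstract; as a hedged illustration, the sketch below simulates Merton-style jump-diffusion paths (geometric Brownian motion plus compound-Poisson log-normal jumps), which produce the leptokurtic log returns mentioned. All parameter values are hypothetical.

```python
import numpy as np

# Illustrative only: Merton jump-diffusion paths with hypothetical parameters.
def merton_paths(s0, mu, sigma, lam, jump_mu, jump_sigma, T, n_steps, n_paths, seed=0):
    """Simulate price paths: GBM drift/diffusion plus compound-Poisson log-normal jumps."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    kappa = np.exp(jump_mu + 0.5 * jump_sigma**2) - 1.0   # compensator for the jump drift
    log_s = np.full(n_paths, np.log(s0))
    paths = [np.exp(log_s)]
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        n_jumps = rng.poisson(lam * dt, n_paths)
        jumps = rng.normal(jump_mu * n_jumps, jump_sigma * np.sqrt(n_jumps))
        log_s = log_s + (mu - 0.5 * sigma**2 - lam * kappa) * dt + sigma * np.sqrt(dt) * z + jumps
        paths.append(np.exp(log_s))
    return np.array(paths)                                # shape (n_steps + 1, n_paths)

p = merton_paths(s0=100, mu=0.05, sigma=0.2, lam=0.5, jump_mu=-0.1, jump_sigma=0.15,
                 T=1.0, n_steps=252, n_paths=10_000)
print(f"mean terminal price: {p[-1].mean():.2f}")
```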
|
arxiv
|
@article{sun2024jump,
title={Jump Diffusion-Informed Neural Networks with Transfer Learning for
Accurate American Option Pricing under Data Scarcity},
author={Qiguo Sun, Hanyue Huang, XiBei Yang, Yuwei Zhang},
journal={arXiv preprint arXiv:2409.18168},
year={2024},
archivePrefix={arXiv},
eprint={2409.18168},
primaryClass={cs.LG}
}
|
sun2024jump
|
arxiv-662495
|
2409.18169
|
Harmful Fine-tuning Attacks and Defenses for Large Language Models: A Survey
|
<|reference_start|>Harmful Fine-tuning Attacks and Defenses for Large Language Models: A Survey: Recent research demonstrates that the nascent fine-tuning-as-a-service business model exposes serious safety concerns -- fine-tuning over a few harmful data uploaded by the users can compromise the safety alignment of the model. The attack, known as harmful fine-tuning, has raised a broad research interest among the community. However, as the attack is still new, \textbf{we observe from our miserable submission experience that there are general misunderstandings within the research community.} We in this paper aim to clear some common concerns for the attack setting, and formally establish the research problem. Specifically, we first present the threat model of the problem, and introduce the harmful fine-tuning attack and its variants. Then we systematically survey the existing literature on attacks/defenses/mechanical analysis of the problem. Finally, we outline future research directions that might contribute to the development of the field. Additionally, we present a list of questions of interest, which might be useful to refer to when reviewers in the peer review process question the realism of the experiment/attack/defense setting. A curated list of relevant papers is maintained and made accessible at: \url{https://github.com/git-disl/awesome_LLM-harmful-fine-tuning-papers}.<|reference_end|>
|
arxiv
|
@article{huang2024harmful,
title={Harmful Fine-tuning Attacks and Defenses for Large Language Models: A
Survey},
author={Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Ling Liu},
journal={arXiv preprint arXiv:2409.18169},
year={2024},
archivePrefix={arXiv},
eprint={2409.18169},
primaryClass={cs.CR cs.AI cs.LG}
}
|
huang2024harmful
|
arxiv-662496
|
2409.18170
|
Evaluation of Large Language Models for Summarization Tasks in the Medical Domain: A Narrative Review
|
<|reference_start|>Evaluation of Large Language Models for Summarization Tasks in the Medical Domain: A Narrative Review: Large Language Models have advanced clinical Natural Language Generation, creating opportunities to manage the volume of medical text. However, the high-stakes nature of medicine requires reliable evaluation, which remains a challenge. In this narrative review, we assess the current evaluation state for clinical summarization tasks and propose future directions to address the resource constraints of expert human evaluation.<|reference_end|>
|
arxiv
|
@article{croxford2024evaluation,
title={Evaluation of Large Language Models for Summarization Tasks in the
Medical Domain: A Narrative Review},
author={Emma Croxford, Yanjun Gao, Nicholas Pellegrino, Karen K. Wong, Graham
Wills, Elliot First, Frank J. Liao, Cherodeep Goswami, Brian Patterson, Majid
Afshar},
journal={arXiv preprint arXiv:2409.18170},
year={2024},
archivePrefix={arXiv},
eprint={2409.18170},
primaryClass={cs.CL cs.AI}
}
|
croxford2024evaluation
|
arxiv-662497
|
2409.18193
|
LowREm: A Repository of Word Embeddings for 87 Low-Resource Languages Enhanced with Multilingual Graph Knowledge
|
<|reference_start|>LowREm: A Repository of Word Embeddings for 87 Low-Resource Languages Enhanced with Multilingual Graph Knowledge: Contextualized embeddings based on large language models (LLMs) are available for various languages, but their coverage is often limited for lower resourced languages. Training LLMs for such languages is often difficult due to insufficient data and high computational cost. Especially for very low resource languages, static word embeddings thus still offer a viable alternative. There is, however, a notable lack of comprehensive repositories with such embeddings for diverse languages. To address this, we present LowREm, a centralized repository of static embeddings for 87 low-resource languages. We also propose a novel method to enhance GloVe-based embeddings by integrating multilingual graph knowledge, utilizing another source of knowledge. We demonstrate the superior performance of our enhanced embeddings as compared to contextualized embeddings extracted from XLM-R on sentiment analysis. Our code and data are publicly available under https://huggingface.co/DFKI.<|reference_end|>
|
arxiv
|
@article{gurgurov2024lowrem:,
title={LowREm: A Repository of Word Embeddings for 87 Low-Resource Languages
Enhanced with Multilingual Graph Knowledge},
author={Daniil Gurgurov, Rishu Kumar, Simon Ostermann},
journal={arXiv preprint arXiv:2409.18193},
year={2024},
archivePrefix={arXiv},
eprint={2409.18193},
primaryClass={cs.CL}
}
|
gurgurov2024lowrem:
|
arxiv-662498
|
2409.18197
|
Autonomous Network Defence using Reinforcement Learning
|
<|reference_start|>Autonomous Network Defence using Reinforcement Learning: In the network security arms race, the defender is significantly disadvantaged as they need to successfully detect and counter every malicious attack. In contrast, the attacker needs to succeed only once. To level the playing field, we investigate the effectiveness of autonomous agents in a realistic network defence scenario. We first outline the problem, provide the background on reinforcement learning and detail our proposed agent design. Using a network environment simulation, with 13 hosts spanning 3 subnets, we train a novel reinforcement learning agent and show that it can reliably defend continual attacks by two advanced persistent threat (APT) red agents: one with complete knowledge of the network layout and another which must discover resources through exploration but is more general.<|reference_end|>
|
arxiv
|
@article{foley2024autonomous,
title={Autonomous Network Defence using Reinforcement Learning},
author={Myles Foley, Chris Hicks, Kate Highnam, Vasilios Mavroudis},
journal={ASIA CCS '22: Proceedings of the 2022 ACM on Asia Conference on
Computer and Communications Security},
year={2024},
doi={10.1145/3488932.3527286},
archivePrefix={arXiv},
eprint={2409.18197},
primaryClass={cs.AI cs.CR cs.LG}
}
|
foley2024autonomous
|
arxiv-662499
|
2409.18199
|
LangSAMP: Language-Script Aware Multilingual Pretraining
|
<|reference_start|>LangSAMP: Language-Script Aware Multilingual Pretraining: Recent multilingual pretrained language models (mPLMs) often avoid using language embeddings -- learnable vectors assigned to different languages. These embeddings are discarded for two main reasons: (1) mPLMs are expected to have a single, unified parameter set across all languages, and (2) they need to function seamlessly as universal text encoders without requiring language IDs as input. However, this removal increases the burden on token embeddings to encode all language-specific information, which may hinder the model's ability to produce more language-neutral representations. To address this challenge, we propose Language-Script Aware Multilingual Pretraining (LangSAMP), a method that incorporates both language and script embeddings to enhance representation learning while maintaining a simple architecture. Specifically, we integrate these embeddings into the output of the transformer blocks before passing the final representations to the language modeling head for prediction. We apply LangSAMP to the continual pretraining of XLM-R on a highly multilingual corpus covering more than 500 languages. The resulting model consistently outperforms the baseline. Extensive analysis further shows that language/script embeddings encode language/script-specific information, which improves the selection of source languages for crosslingual transfer. We make our code and models publicly available at \url{https://github.com/cisnlp/LangSAMP}.<|reference_end|>
|
arxiv
|
@article{liu2024langsamp:,
title={LangSAMP: Language-Script Aware Multilingual Pretraining},
author={Yihong Liu, Haotian Ye, Chunlan Ma, Mingyang Wang, Hinrich Sch\"utze},
journal={arXiv preprint arXiv:2409.18199},
year={2024},
archivePrefix={arXiv},
eprint={2409.18199},
primaryClass={cs.CL}
}
|
liu2024langsamp:
|
arxiv-662500
|
2409.18201
|
Loop-Diffusion: an equivariant diffusion model for designing and scoring protein loops
|
<|reference_start|>Loop-Diffusion: an equivariant diffusion model for designing and scoring protein loops: Predicting protein functional characteristics from structure remains a central problem in protein science, with broad implications from understanding the mechanisms of disease to designing novel therapeutics. Unfortunately, current machine learning methods are limited by scarce and biased experimental data, and physics-based methods are either too slow to be useful, or too simplified to be accurate. In this work, we present Loop-Diffusion, an energy based diffusion model which leverages a dataset of general protein loops from the entire protein universe to learn an energy function that generalizes to functional prediction tasks. We evaluate Loop-Diffusion's performance on scoring TCR-pMHC interfaces and demonstrate state-of-the-art results in recognizing binding-enhancing mutations.<|reference_end|>
|
arxiv
|
@article{borisiak2024loop-diffusion:,
title={Loop-Diffusion: an equivariant diffusion model for designing and scoring
protein loops},
author={Kevin Borisiak, Gian Marco Visani, Armita Nourmohammad},
journal={arXiv preprint arXiv:2409.18201},
year={2024},
archivePrefix={arXiv},
eprint={2409.18201},
primaryClass={physics.bio-ph cs.LG q-bio.QM}
}
|
borisiak2024loop-diffusion:
|