corpus_id (stringlengths 7-12) | paper_id (stringlengths 9-16) | title (stringlengths 1-261) | abstract (stringlengths 70-4.02k) | source (stringclasses, 1 value) | bibtex (stringlengths 208-20.9k) | citation_key (stringlengths 6-100)
---|---|---|---|---|---|---|
arxiv-661001
|
2409.15375
|
DS2TA: Denoising Spiking Transformer with Attenuated Spatiotemporal Attention
|
<|reference_start|>DS2TA: Denoising Spiking Transformer with Attenuated Spatiotemporal Attention: Vision Transformers (ViT) are the current high-performance models of choice for various vision applications. Recent developments have given rise to biologically inspired spiking transformers that thrive in ultra-low power operations on neuromorphic hardware; however, they do so without fully unlocking the potential of spiking neural networks. We introduce DS2TA, a Denoising Spiking transformer with attenuated SpatioTemporal Attention, designed specifically for vision applications. DS2TA introduces a new spiking attenuated spatiotemporal attention mechanism that considers input firing correlations occurring in both time and space, thereby fully harnessing the computational power of spiking neurons at the core of the transformer architecture. Importantly, DS2TA facilitates parameter-efficient spatiotemporal attention computation without introducing extra weights. DS2TA employs efficient hashmap-based nonlinear spiking attention denoisers to enhance the robustness and expressive power of spiking attention maps. DS2TA demonstrates state-of-the-art performance on several widely adopted static image and dynamic neuromorphic datasets. Operated over 4 time steps, DS2TA achieves 94.92% top-1 accuracy on CIFAR10 and 77.47% top-1 accuracy on CIFAR100, as well as 79.1% and 94.44% on CIFAR10-DVS and DVS-Gesture using 10 time steps.<|reference_end|>
|
arxiv
|
@article{xu2024ds2ta:,
title={DS2TA: Denoising Spiking Transformer with Attenuated Spatiotemporal
Attention},
author={Boxun Xu, Hejia Geng, Yuxuan Yin, Peng Li},
journal={arXiv preprint arXiv:2409.15375},
year={2024},
archivePrefix={arXiv},
eprint={2409.15375},
primaryClass={cs.NE cs.AI cs.CV cs.LG}
}
|
xu2024ds2ta:
|
arxiv-661002
|
2409.15376
|
ControlMath: Controllable Data Generation Promotes Math Generalist Models
|
<|reference_start|>ControlMath: Controllable Data Generation Promotes Math Generalist Models: Utilizing large language models (LLMs) for data augmentation has yielded encouraging results in mathematical reasoning. However, these approaches face constraints in problem diversity, potentially restricting them to in-domain/distribution data generation. To this end, we propose ControlMath, an iterative method involving an equation-generator module and two LLM-based agents. The module creates diverse equations, which the Problem-Crafter agent then transforms into math word problems. The Reverse-Agent filters and selects high-quality data, adhering to the "less is more" principle, achieving better results with fewer data points. This approach enables the generation of diverse math problems, not limited to specific domains or distributions. As a result, we collect ControlMathQA, which contains 190k math word problems. Extensive results prove that combining our dataset with in-domain datasets like GSM8K can help improve the model's mathematical ability to generalize, leading to improved performance both within and beyond specific domains.<|reference_end|>
|
arxiv
|
@article{chen2024controlmath:,
title={ControlMath: Controllable Data Generation Promotes Math Generalist
Models},
author={Nuo Chen, Ning Wu, Jianhui Chang, Jia Li},
journal={arXiv preprint arXiv:2409.15376},
year={2024},
number={EMNLP 2024 Main},
archivePrefix={arXiv},
eprint={2409.15376},
primaryClass={cs.LG cs.AI cs.CL}
}
|
chen2024controlmath:
|
arxiv-661003
|
2409.15377
|
Prompting Large Language Models for Supporting the Differential Diagnosis of Anemia
|
<|reference_start|>Prompting Large Language Models for Supporting the Differential Diagnosis of Anemia: In practice, clinicians achieve a diagnosis by following a sequence of steps, such as laboratory exams, observations, or imaging. The pathways to reach diagnosis decisions are documented by guidelines authored by expert organizations, which guide clinicians to reach a correct diagnosis through these sequences of steps. While these guidelines are beneficial for following medical reasoning and consolidating medical knowledge, they have some drawbacks. They often fail to address patients with uncommon conditions due to their focus on the majority population, and are slow and costly to update, making them unsuitable for rapidly emerging diseases or new practices. Inspired by clinical guidelines, our study aimed to develop pathways similar to those that can be obtained in clinical guidelines. We tested three Large Language Models (LLMs) - Generative Pretrained Transformer 4 (GPT-4), Large Language Model Meta AI (LLaMA), and Mistral - on a synthetic yet realistic dataset to differentially diagnose anemia and its subtypes. By using advanced prompting techniques to enhance the decision-making process, we generated diagnostic pathways using these models. Experimental results indicate that LLMs hold huge potential in clinical pathway discovery from patient data, with GPT-4 exhibiting the best performance in all conducted experiments.<|reference_end|>
|
arxiv
|
@article{castagnari2024prompting,
title={Prompting Large Language Models for Supporting the Differential
Diagnosis of Anemia},
author={Elisa Castagnari (HeKA), Lillian Muyama (HeKA), Adrien Coulet (HeKA)},
journal={LLMs4MI 2024 @FLLM 2024, IEEE, Nov 2024, Dubai, United Arab
Emirates},
year={2024},
archivePrefix={arXiv},
eprint={2409.15377},
primaryClass={cs.CL cs.AI}
}
|
castagnari2024prompting
|
arxiv-661004
|
2409.15378
|
Toward Automated Clinical Transcriptions
|
<|reference_start|>Toward Automated Clinical Transcriptions: Administrative documentation is a major driver of rising healthcare costs and is linked to adverse outcomes, including physician burnout and diminished quality of care. This paper introduces a secure system that applies recent advancements in speech-to-text transcription and speaker-labeling (diarization) to patient-provider conversations. This system is optimized to produce accurate transcriptions and highlight potential errors to promote rapid human verification, further reducing the necessary manual effort. Applied to over 40 hours of simulated conversations, this system offers a promising foundation for automating clinical transcriptions.<|reference_end|>
|
arxiv
|
@article{klusty2024toward,
title={Toward Automated Clinical Transcriptions},
author={Mitchell A. Klusty, W. Vaiden Logan, Samuel E. Armstrong, Aaron D.
Mullen, Caroline N. Leach, Jeff Talbert, V. K. Cody Bumgardner},
journal={arXiv preprint arXiv:2409.15378},
year={2024},
archivePrefix={arXiv},
eprint={2409.15378},
primaryClass={eess.AS cs.AI cs.CL cs.SD}
}
|
klusty2024toward
|
arxiv-661005
|
2409.15380
|
Kalahi: A handcrafted, grassroots cultural LLM evaluation suite for Filipino
|
<|reference_start|>Kalahi: A handcrafted, grassroots cultural LLM evaluation suite for Filipino: Multilingual large language models (LLMs) today may not necessarily provide culturally appropriate and relevant responses to their Filipino users. We introduce Kalahi, a cultural LLM evaluation suite collaboratively created by native Filipino speakers. It is composed of 150 high-quality, handcrafted and nuanced prompts that test LLMs for generations that are relevant to shared Filipino cultural knowledge and values. Strong LLM performance in Kalahi indicates a model's ability to generate responses similar to what an average Filipino would say or do in a given situation. We conducted experiments on LLMs with multilingual and Filipino language support. Results show that Kalahi, while trivial for Filipinos, is challenging for LLMs, with the best model answering only 46.0% of the questions correctly compared to native Filipino performance of 89.10%. Thus, Kalahi can be used to accurately and reliably evaluate Filipino cultural representation in LLMs.<|reference_end|>
|
arxiv
|
@article{montalan2024kalahi:,
title={Kalahi: A handcrafted, grassroots cultural LLM evaluation suite for
Filipino},
author={Jann Railey Montalan, Jian Gang Ngui, Wei Qi Leong, Yosephine Susanto,
Hamsawardhini Rengarajan, William Chandra Tjhi, Alham Fikri Aji},
journal={arXiv preprint arXiv:2409.15380},
year={2024},
archivePrefix={arXiv},
eprint={2409.15380},
primaryClass={cs.CL cs.AI}
}
|
montalan2024kalahi:
|
arxiv-661006
|
2409.15381
|
Adversarial Attacks on Parts of Speech: An Empirical Study in Text-to-Image Generation
|
<|reference_start|>Adversarial Attacks on Parts of Speech: An Empirical Study in Text-to-Image Generation: Recent studies show that text-to-image (T2I) models are vulnerable to adversarial attacks, especially with noun perturbations in text prompts. In this study, we investigate the impact of adversarial attacks on different POS tags within text prompts on the images generated by T2I models. We create a high-quality dataset for realistic POS tag token swapping and perform gradient-based attacks to find adversarial suffixes that mislead T2I models into generating images with altered tokens. Our empirical results show that the attack success rate (ASR) varies significantly among different POS tag categories, with nouns, proper nouns, and adjectives being the easiest to attack. We explore the mechanism behind the steering effect of adversarial suffixes, finding that the number of critical tokens and content fusion vary among POS tags, while features like suffix transferability are consistent across categories. We have made our implementation publicly available at https://github.com/shahariar-shibli/Adversarial-Attack-on-POS-Tags.<|reference_end|>
|
arxiv
|
@article{shahariar2024adversarial,
title={Adversarial Attacks on Parts of Speech: An Empirical Study in
Text-to-Image Generation},
author={G M Shahariar, Jia Chen, Jiachen Li, Yue Dong},
journal={arXiv preprint arXiv:2409.15381},
year={2024},
archivePrefix={arXiv},
eprint={2409.15381},
primaryClass={cs.CL cs.AI cs.CR cs.LG}
}
|
shahariar2024adversarial
|
arxiv-661007
|
2409.15383
|
Generalization in birdsong classification: impact of transfer learning methods and dataset characteristics
|
<|reference_start|>Generalization in birdsong classification: impact of transfer learning methods and dataset characteristics: Animal sounds can be recognised automatically by machine learning, and this has an important role to play in biodiversity monitoring. Yet despite increasingly impressive capabilities, bioacoustic species classifiers still exhibit imbalanced performance across species and habitats, especially in complex soundscapes. In this study, we explore the effectiveness of transfer learning in large-scale bird sound classification across various conditions, including single- and multi-label scenarios, and across different model architectures such as CNNs and Transformers. Our experiments demonstrate that both fine-tuning and knowledge distillation yield strong performance, with cross-distillation proving particularly effective in improving in-domain performance on Xeno-canto data. However, when generalizing to soundscapes, shallow fine-tuning exhibits superior performance compared to knowledge distillation, highlighting its robustness and constrained nature. Our study further investigates how to use multi-species labels, in cases where these are present but incomplete. We advocate for more comprehensive labeling practices within the animal sound community, including annotating background species and providing temporal details, to enhance the training of robust bird sound classifiers. These findings provide insights into the optimal reuse of pretrained models for advancing automatic bioacoustic recognition.<|reference_end|>
|
arxiv
|
@article{ghani2024generalization,
title={Generalization in birdsong classification: impact of transfer learning
methods and dataset characteristics},
author={Burooj Ghani, Vincent J. Kalkman, Bob Planqu\'e, Willem-Pier Vellinga,
Lisa Gill, Dan Stowell},
journal={arXiv preprint arXiv:2409.15383},
year={2024},
archivePrefix={arXiv},
eprint={2409.15383},
primaryClass={cs.SD cs.LG eess.AS}
}
|
ghani2024generalization
|
arxiv-661008
|
2409.15384
|
BurstM: Deep Burst Multi-scale SR using Fourier Space with Optical Flow
|
<|reference_start|>BurstM: Deep Burst Multi-scale SR using Fourier Space with Optical Flow: Multi-frame super-resolution (MFSR) achieves higher performance than single image super-resolution (SISR), because MFSR leverages abundant information from multiple frames. Recent MFSR approaches adapt the deformable convolution network (DCN) to align the frames. However, existing MFSR suffers from misalignments between the reference and source frames due to the limitations of DCN, such as small receptive fields and the predefined number of kernels. Because of these problems, existing MFSR approaches struggle to represent high-frequency information. To this end, we propose Deep Burst Multi-scale SR using Fourier Space with Optical Flow (BurstM). The proposed method estimates the optical flow offset for accurate alignment and predicts the continuous Fourier coefficients of each frame to represent high-frequency textures. In addition, we have enhanced the network's flexibility by supporting various super-resolution (SR) scale factors with a single unified model. We demonstrate that our method achieves higher performance and greater flexibility than existing MFSR methods. Our source code is available at https://github.com/Egkang-Luis/burstm<|reference_end|>
|
arxiv
|
@article{kang2024burstm:,
title={BurstM: Deep Burst Multi-scale SR using Fourier Space with Optical Flow},
author={EungGu Kang and Byeonghun Lee and Sunghoon Im and Kyong Hwan Jin},
journal={arXiv preprint arXiv:2409.15384},
year={2024},
archivePrefix={arXiv},
eprint={2409.15384},
primaryClass={eess.IV cs.CV cs.LG}
}
|
kang2024burstm:
|
arxiv-661009
|
2409.15386
|
Coverage and Bias of Street View Imagery in Mapping the Urban Environment
|
<|reference_start|>Coverage and Bias of Street View Imagery in Mapping the Urban Environment: Street View Imagery (SVI) has emerged as a valuable data form in urban studies, enabling new ways to map and sense urban environments. However, fundamental concerns regarding the representativeness, quality, and reliability of SVI remain underexplored, e.g.\ to what extent can cities be captured by such data and do data gaps result in bias. This research, positioned at the intersection of spatial data quality and urban analytics, addresses these concerns by proposing a novel workflow to estimate SVI's feature-level coverage on urban environment. The workflow integrates the positional relationships between SVI and target features, as well as the impact of environmental obstructions. Expanding the domain of data quality to SVI, we introduce an indicator system that evaluates the extent of coverage, focusing on the completeness and frequency dimensions. Using London as a case study, three experiments are conducted to identify potential biases in SVI's ability to cover and represent urban features, with a focus on building facades. The research highlights the limitations of traditional spatial data quality metrics in assessing SVI, and variability of SVI coverage under different data acquisition practices. Tailored approaches that consider the unique metadata and horizontal perspective of SVI are also underscored. The findings suggest that while SVI offers valuable insights, it is no panacea -- its application in urban research requires careful consideration of data coverage and feature-level representativeness to ensure reliable results.<|reference_end|>
|
arxiv
|
@article{fan2024coverage,
title={Coverage and Bias of Street View Imagery in Mapping the Urban
Environment},
author={Zicheng Fan, Chen-Chieh Feng, Filip Biljecki},
journal={arXiv preprint arXiv:2409.15386},
year={2024},
archivePrefix={arXiv},
eprint={2409.15386},
primaryClass={cs.LG}
}
|
fan2024coverage
|
arxiv-661010
|
2409.15388
|
Two hardness results for the maximum 2-edge-colorable subgraph problem in bipartite graphs
|
<|reference_start|>Two hardness results for the maximum 2-edge-colorable subgraph problem in bipartite graphs: In this paper, we consider the maximum $k$-edge-colorable subgraph problem. In this problem, we are given a graph $G$ and a positive integer $k$; the goal is to find $k$ matchings of $G$ such that their union contains the maximum number of edges. This problem is NP-hard in cubic graphs, and polynomial-time solvable in bipartite graphs, as we observe in our paper. We present two NP-hardness results for two versions of this problem in which we have weights on edges or color constraints on vertices. In fact, we show that these versions are NP-hard already in bipartite graphs of maximum degree three. In order to achieve these results, we establish a connection between our problems and the problem of constructing special maximum matchings considered in the author's Master's thesis, defended back in 2003.<|reference_end|>
|
arxiv
|
@article{mkrtchyan2024two,
title={Two hardness results for the maximum 2-edge-colorable subgraph problem
in bipartite graphs},
author={Vahan Mkrtchyan},
journal={arXiv preprint arXiv:2409.15388},
year={2024},
archivePrefix={arXiv},
eprint={2409.15388},
primaryClass={math.CO cs.DM}
}
|
mkrtchyan2024two
|
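The bipartite tractability noted in the record above follows from Koenig's edge-coloring theorem: a bipartite graph with maximum degree at most $k$ decomposes into $k$ matchings, so the problem reduces to finding a maximum subgraph with all degrees at most $k$, which is a textbook max-flow computation. A minimal sketch of that reduction (toy graph and helper names are illustrative, not from the paper):

```python
# Maximum k-edge-colorable subgraph in a bipartite graph via max-flow.
import networkx as nx

def max_k_edge_colorable_bipartite(left, right, edges, k=2):
    # Pick as many edges as possible with every vertex incident to at most
    # k chosen edges; by Koenig's theorem the result splits into k matchings.
    F = nx.DiGraph()
    for u in left:
        F.add_edge("s", ("L", u), capacity=k)
    for v in right:
        F.add_edge(("R", v), "t", capacity=k)
    for u, v in edges:
        F.add_edge(("L", u), ("R", v), capacity=1)
    _, flow = nx.maximum_flow(F, "s", "t")
    return [(u, v) for u, v in edges if flow[("L", u)][("R", v)] > 0]

# Toy instance: a 6-cycle; every vertex has degree 2, so for k=2 the whole
# edge set is 2-edge-colorable and all 6 edges should be selected.
left, right = [0, 1, 2], [3, 4, 5]
edges = [(0, 3), (1, 3), (1, 4), (2, 4), (2, 5), (0, 5)]
print(len(max_k_edge_colorable_bipartite(left, right, edges)))  # -> 6
```

The paper's point is that the weighted and vertex-constrained variants lose this tractability even in bipartite graphs of maximum degree three.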
arxiv-661011
|
2409.15389
|
Custodial and Non-Custodial Wallets
|
<|reference_start|>Custodial and Non-Custodial Wallets: Non-custodial wallets are a type of cryptocurrency wallet wherein the owner has full control over the private keys and is solely responsible for managing and securing the digital assets that it contains. Unlike custodial wallets, which are managed by third parties, such as exchanges, non-custodial wallets ensure that funds are controlled exclusively by the end user. We characterise the difference between custodial and non-custodial wallets and examine their key features and related risks.<|reference_end|>
|
arxiv
|
@article{seymour2024custodial,
title={Custodial and Non-Custodial Wallets},
author={Tony Seymour and Geoff Goodell},
journal={arXiv preprint arXiv:2409.15389},
year={2024},
archivePrefix={arXiv},
eprint={2409.15389},
primaryClass={cs.CR cs.CY}
}
|
seymour2024custodial
|
arxiv-661012
|
2409.15391
|
Supply Risk-Aware Alloy Discovery and Design
|
<|reference_start|>Supply Risk-Aware Alloy Discovery and Design: Materials design is a critical driver of innovation, yet overlooking the technological, economic, and environmental risks inherent in materials and their supply chains can lead to unsustainable and risk-prone solutions. To address this, we present a novel risk-aware design approach that integrates Supply-Chain Aware Design Strategies into the materials development process. This approach leverages existing language models and text analysis to develop a specialized model for predicting materials feedstock supply risk indices. To efficiently navigate the multi-objective, multi-constraint design space, we employ Batch Bayesian Optimization (BBO), enabling the identification of Pareto-optimal high entropy alloys (HEAs) that balance performance objectives with minimized supply risk. A case study using the MoNbTiVW system demonstrates the efficacy of our approach in four scenarios, highlighting the significant impact of incorporating supply risk into the design process. By optimizing for both performance and supply risk, we ensure that the developed alloys are not only high-performing but also sustainable and economically viable. This integrated approach represents a critical step towards a future where materials discovery and design seamlessly consider sustainability, supply chain dynamics, and comprehensive life cycle analysis.<|reference_end|>
|
arxiv
|
@article{mulukutla2024supply,
title={Supply Risk-Aware Alloy Discovery and Design},
author={Mrinalini Mulukutla (1), Robert Robinson (1), Danial Khatamsaz (1),
Brent Vela (1), Nhu Vu (1), Raymundo Arr\'oyave (1 and 2) ((1) Texas A\&M
University Materials Science and Engineering Department, (2) Texas A\&M
University Mechanical Engineering Department)},
journal={arXiv preprint arXiv:2409.15391},
year={2024},
archivePrefix={arXiv},
eprint={2409.15391},
primaryClass={cs.LG cond-mat.mtrl-sci}
}
|
mulukutla2024supply
|
arxiv-661013
|
2409.15393
|
Approximated Orthogonal Projection Unit: Stabilizing Regression Network Training Using Natural Gradient
|
<|reference_start|>Approximated Orthogonal Projection Unit: Stabilizing Regression Network Training Using Natural Gradient: Neural networks (NN) are extensively studied in cutting-edge soft sensor models due to their feature extraction and function approximation capabilities. Current research into network-based methods primarily focuses on models' offline accuracy. Notably, in the industrial soft sensor context, online optimization stability and interpretability are prioritized, followed by accuracy. This requires a clearer understanding of the network's training process. To bridge this gap, we propose a novel NN named the Approximated Orthogonal Projection Unit (AOPU), which has a solid mathematical basis and presents superior training stability. AOPU truncates the gradient backpropagation at dual parameters, optimizes the updates of the trackable parameters, and enhances the robustness of training. We further prove that AOPU attains minimum variance estimation (MVE) in NN, wherein the truncated gradient approximates the natural gradient (NG). Empirical results on two chemical process datasets clearly show that AOPU outperforms other models in achieving stable convergence, marking a significant advancement in the soft sensor field.<|reference_end|>
|
arxiv
|
@article{wang2024approximated,
title={Approximated Orthogonal Projection Unit: Stabilizing Regression Network
Training Using Natural Gradient},
author={Shaoqi Wang and Chunjie Yang and Siwei Lou},
journal={arXiv preprint arXiv:2409.15393},
year={2024},
archivePrefix={arXiv},
eprint={2409.15393},
primaryClass={cs.LG cs.AI stat.ML}
}
|
wang2024approximated
|
arxiv-661014
|
2409.15394
|
Neural Control Variates with Automatic Integration
|
<|reference_start|>Neural Control Variates with Automatic Integration: This paper presents a method to leverage arbitrary neural network architecture for control variates. Control variates are crucial in reducing the variance of Monte Carlo integration, but they hinge on finding a function that both correlates with the integrand and has a known analytical integral. Traditional approaches rely on heuristics to choose this function, which might not be expressive enough to correlate well with the integrand. Recent research alleviates this issue by modeling the integrands with a learnable parametric model, such as a neural network. However, the challenge remains in creating an expressive parametric model with a known analytical integral. This paper proposes a novel approach to construct learnable parametric control variates functions from arbitrary neural network architectures. Instead of using a network to approximate the integrand directly, we employ the network to approximate the anti-derivative of the integrand. This allows us to use automatic differentiation to create a function whose integration can be constructed by the antiderivative network. We apply our method to solve partial differential equations using the Walk-on-sphere algorithm. Our results indicate that this approach is unbiased and uses various network architectures to achieve lower variance than other control variate methods.<|reference_end|>
|
arxiv
|
@article{li2024neural,
title={Neural Control Variates with Automatic Integration},
author={Zilu Li, Guandao Yang, Qingqing Zhao, Xi Deng, Leonidas Guibas,
Bharath Hariharan, Gordon Wetzstein},
journal={SIGGRAPH Conference Papers 2024},
year={2024},
doi={10.1145/3641519.3657395},
archivePrefix={arXiv},
eprint={2409.15394},
primaryClass={cs.LG cs.AI cs.GR cs.NA math.NA}
}
|
li2024neural
|
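The antiderivative construction described in the record above is compact enough to sketch. Below is a hedged toy, not the authors' implementation: a small network models the antiderivative $G$, its derivative $g = G'$ is the control variate, and the definite integral of $g$ comes for free as $G(b) - G(a)$. The paper derives $g$ from arbitrary architectures via automatic differentiation; this sketch uses a one-hidden-layer tanh network whose derivative is written out by hand, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
w1, b1, w2 = rng.normal(size=16), rng.normal(size=16), rng.normal(size=16) * 0.1

def G(x):
    # Antiderivative network: scalar in, scalar out.
    return np.dot(w2, np.tanh(w1 * x + b1))

def g(x):
    # g = dG/dx in closed form; automatic differentiation supplies this
    # for arbitrary architectures in the paper's construction.
    return np.dot(w2, w1 * (1.0 - np.tanh(w1 * x + b1) ** 2))

def cv_estimate(f, a, b, n=20_000):
    # Monte Carlo estimate of \int_a^b f with g as a control variate:
    # (b - a) * E[f(X) - g(X)] + (G(b) - G(a)), X ~ Uniform(a, b).
    xs = rng.uniform(a, b, size=n)
    resid = np.mean([f(x) - g(x) for x in xs])
    return (b - a) * resid + (G(b) - G(a))

print(cv_estimate(np.sin, 0.0, np.pi))  # near 2.0: unbiased for any params
```

Training $G$ so that $g$ correlates with the integrand is what actually reduces variance; with the random parameters above the estimate is merely unbiased.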
arxiv-661015
|
2409.15395
|
Parse Trees Guided LLM Prompt Compression
|
<|reference_start|>Parse Trees Guided LLM Prompt Compression: Offering rich contexts to Large Language Models (LLMs) has been shown to boost performance in various tasks, but the resulting longer prompt would increase the computational cost and might exceed the input limit of LLMs. Recently, some prompt compression methods have been suggested to shorten the length of prompts by using language models to generate shorter prompts or by developing computational models to select important parts of the original prompt. The generative compression methods suffer from issues like hallucination, while the selective compression methods have not involved linguistic rules and overlook the global structure of the prompt. To this end, we propose a novel selective compression method called PartPrompt. It first obtains a parse tree for each sentence based on linguistic rules, and calculates the local information entropy for each node in a parse tree. These local parse trees are then organized into a global tree according to the hierarchical structure, such as the dependency of sentences, paragraphs, and sections. After that, root-ward propagation and leaf-ward propagation are proposed to adjust node values over the global tree. Finally, a recursive algorithm is developed to prune the global tree based on the adjusted node values. The experiments show that PartPrompt achieves state-of-the-art performance across various datasets, metrics, compression ratios, and target LLMs for inference. In-depth ablation studies confirm the effectiveness of the designs in PartPrompt, and additional experiments also demonstrate its superiority in terms of the coherence of compressed prompts and in the extremely long prompt scenario.<|reference_end|>
|
arxiv
|
@article{mao2024parse,
title={Parse Trees Guided LLM Prompt Compression},
author={Wenhao Mao, Chengbin Hou, Tianyu Zhang, Xinyu Lin, Ke Tang, Hairong Lv},
journal={arXiv preprint arXiv:2409.15395},
year={2024},
archivePrefix={arXiv},
eprint={2409.15395},
primaryClass={cs.CL cs.AI}
}
|
mao2024parse
|
arxiv-661016
|
2409.15397
|
The ParlaSpeech Collection of Automatically Generated Speech and Text Datasets from Parliamentary Proceedings
|
<|reference_start|>The ParlaSpeech Collection of Automatically Generated Speech and Text Datasets from Parliamentary Proceedings: Recent significant improvements in speech and language technologies come both from self-supervised approaches over raw language data as well as various types of explicit supervision. To ensure high-quality processing of spoken data, the most useful type of explicit supervision is still the alignment between the speech signal and its corresponding text transcript, which is a data type that is not available for many languages. In this paper, we present our approach to building large and open speech-and-text-aligned datasets of less-resourced languages based on transcripts of parliamentary proceedings and their recordings. Our starting point is the ParlaMint collection of comparable corpora of transcripts of parliamentary proceedings of 26 national European parliaments. In the pilot run on expanding the ParlaMint corpora with aligned publicly available recordings, we focus on three Slavic languages, namely Croatian, Polish, and Serbian. The main challenge of our approach is the lack of any global alignment between the ParlaMint texts and the available recordings, as well as the sometimes varying data order in each of the modalities, which requires a novel approach to aligning long sequences of text and audio in a large search space. The results of this pilot run are three high-quality datasets that span more than 5,000 hours of speech and accompanying text transcripts. Although these datasets already make a huge difference in the availability of spoken and textual data for the three languages, we want to emphasize the potential of the presented approach in building similar datasets for many more languages.<|reference_end|>
|
arxiv
|
@article{ljubešić2024the,
title={The ParlaSpeech Collection of Automatically Generated Speech and Text
Datasets from Parliamentary Proceedings},
author={Nikola Ljube\v{s}i\'c, Peter Rupnik and Danijel Kor\v{z}inek},
journal={arXiv preprint arXiv:2409.15397},
year={2024},
archivePrefix={arXiv},
eprint={2409.15397},
primaryClass={eess.AS cs.CL cs.LG cs.SD}
}
|
ljubešić2024the
|
arxiv-661017
|
2409.15398
|
Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI
|
<|reference_start|>Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI: As generative AI, particularly large language models (LLMs), become increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge and put a focus on adversarial threats in natural language and multi-modal systems. Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks. Despite growing academic interest in adversarial risks for generative AI, there is limited guidance tailored for practitioners to assess and mitigate these challenges in real-world environments. To address this, our contributions include: (1) a practical examination of red- and blue-teaming strategies for securing generative AI, (2) identification of key challenges and open questions in defense development and evaluation, and (3) the Attack Atlas, an intuitive framework that brings a practical approach to analyzing single-turn input attacks, placing it at the forefront for practitioners. This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.<|reference_end|>
|
arxiv
|
@article{rawat2024attack,
title={Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in
Red Teaming GenAI},
author={Ambrish Rawat, Stefan Schoepf, Giulio Zizzo, Giandomenico Cornacchia,
Muhammad Zaid Hameed, Kieran Fraser, Erik Miehling, Beat Buesser, Elizabeth
M. Daly, Mark Purcell, Prasanna Sattigeri, Pin-Yu Chen, Kush R. Varshney},
journal={arXiv preprint arXiv:2409.15398},
year={2024},
archivePrefix={arXiv},
eprint={2409.15398},
primaryClass={cs.CR cs.AI cs.LG}
}
|
rawat2024attack
|
arxiv-661018
|
2409.15400
|
Parallel Graph Drawing Algorithm for Bipartite Planar Graphs
|
<|reference_start|>Parallel Graph Drawing Algorithm for Bipartite Planar Graphs: We give a parallel $O(\log(n))$-time algorithm on a CRCW PRAM to assign vertical and horizontal segments to the vertices of any planar bipartite graph $G$ in the following manner: i) Two segments cannot share an interior point ii) Two segments intersect if and only if the corresponding vertices are adjacent, which uses a polynomial number of processors. In other words, represent vertices of a planar bipartite graph as parallel segments, and edges as intersection points between these segments. Note that two segments are not allowed to cross. Our method is based on a parallel algorithm for st-numbering which uses an ear decomposition search.<|reference_end|>
|
arxiv
|
@article{jain2024parallel,
title={Parallel Graph Drawing Algorithm for Bipartite Planar Graphs},
author={Naman Jain},
journal={arXiv preprint arXiv:2409.15400},
year={2024},
archivePrefix={arXiv},
eprint={2409.15400},
primaryClass={cs.CG}
}
|
jain2024parallel
|
arxiv-661019
|
2409.15402
|
Uncovering Coordinated Cross-Platform Information Operations Threatening the Integrity of the 2024 US Presidential Election Online Discussion
|
<|reference_start|>Uncovering Coordinated Cross-Platform Information Operations Threatening the Integrity of the 2024 US Presidential Election Online Discussion: Information Operations (IOs) pose a significant threat to the integrity of democratic processes, with the potential to influence election-related online discourse. In anticipation of the 2024 U.S. presidential election, we present a study aimed at uncovering the digital traces of coordinated IOs on $\mathbb{X}$ (formerly Twitter). Using our machine learning framework for detecting online coordination, we analyze a dataset comprising election-related conversations on $\mathbb{X}$ from May 2024. This reveals a network of coordinated inauthentic actors, displaying notable similarities in their link-sharing behaviors. Our analysis shows concerted efforts by these accounts to disseminate misleading, redundant, and biased information across the Web through a coordinated cross-platform information operation: The links shared by this network frequently direct users to other social media platforms or suspicious websites featuring low-quality political content and, in turn, promoting the same $\mathbb{X}$ and YouTube accounts. Members of this network also shared deceptive images generated by AI, accompanied by language attacking political figures and symbolic imagery intended to convey power and dominance. While $\mathbb{X}$ has suspended a subset of these accounts, more than 75% of the coordinated network remains active. Our findings underscore the critical role of developing computational models to scale up the detection of threats on large social media platforms, and emphasize the broader implications of these techniques to detect IOs across the wider Web.<|reference_end|>
|
arxiv
|
@article{minici2024uncovering,
title={Uncovering Coordinated Cross-Platform Information Operations Threatening
the Integrity of the 2024 U.S. Presidential Election Online Discussion},
author={Marco Minici, Luca Luceri, Federico Cinus, Emilio Ferrara},
journal={arXiv preprint arXiv:2409.15402},
year={2024},
number={The 2024 Election Integrity Initiative: HUMANS Lab - Working Paper
No. 2024.4 - University of Southern California},
archivePrefix={arXiv},
eprint={2409.15402},
primaryClass={cs.SI cs.CY}
}
|
minici2024uncovering
|
arxiv-661020
|
2409.15404
|
Renaming in distributed certification
|
<|reference_start|>Renaming in distributed certification: Local certification is the area of distributed network computing asking the following question: How to certify to the nodes of a network that a global property holds, if they are limited to a local verification? In this area, it is often essential to have identifiers, that is, unique integers assigned to the nodes. In this short paper, we show how to reduce the range of the identifiers, in three different settings. More precisely, we show how to rename identifiers in the classical local certification setting, when we can (resp. cannot) choose the new identifiers, and we show how a global certificate can help to encode very compactly a new identifier assignment that is not injective in general, but still useful. We conclude with a number of applications of these three results.<|reference_end|>
|
arxiv
|
@article{bousquet2024renaming,
title={Renaming in distributed certification},
author={Nicolas Bousquet, Louis Esperet, Laurent Feuilloley, S\'ebastien
Zeitoun},
journal={arXiv preprint arXiv:2409.15404},
year={2024},
archivePrefix={arXiv},
eprint={2409.15404},
primaryClass={cs.DC cs.DM cs.DS}
}
|
bousquet2024renaming
|
arxiv-661021
|
2409.15436
|
GenAI Advertising: Risks of Personalizing Ads with LLMs
|
<|reference_start|>GenAI Advertising: Risks of Personalizing Ads with LLMs: Recent advances in large language models have enabled the creation of highly effective chatbots, which may serve as a platform for targeted advertising. This paper investigates the risks of personalizing advertising in chatbots to their users. We developed a chatbot that embeds personalized product advertisements within LLM responses, inspired by similar forays by AI companies. Our benchmarks show that ad injection impacted certain LLM attributes, particularly response desirability. We conducted a between-subjects experiment with 179 participants using chatbots with no ads, unlabeled targeted ads, and labeled targeted ads. Results revealed that participants struggled to detect chatbot ads and rated unlabeled advertising chatbot responses higher. Yet, once the ads were disclosed, participants found the use of ads embedded in LLM responses to be manipulative, less trustworthy, and intrusive. Participants tried changing their privacy settings via the chat interface rather than through the disclosure. Our findings highlight ethical issues with integrating advertising into chatbot responses.<|reference_end|>
|
arxiv
|
@article{tang2024genai,
title={GenAI Advertising: Risks of Personalizing Ads with LLMs},
author={Brian Jay Tang, Kaiwen Sun, Noah T. Curran, Florian Schaub, Kang G.
Shin},
journal={arXiv preprint arXiv:2409.15436},
year={2024},
archivePrefix={arXiv},
eprint={2409.15436},
primaryClass={cs.HC}
}
|
tang2024genai
|
arxiv-661022
|
2409.15441
|
Steward: Natural Language Web Automation
|
<|reference_start|>Steward: Natural Language Web Automation: Recently, large language models (LLMs) have demonstrated exceptional capabilities in serving as the foundation for AI assistants. One emerging application of LLMs, navigating through websites and interacting with UI elements across various web pages, remains somewhat underexplored. We introduce Steward, a novel LLM-powered web automation tool designed to serve as a cost-effective, scalable, end-to-end solution for automating web interactions. Traditional browser automation frameworks like Selenium, Puppeteer, and Playwright are not scalable for extensive web interaction tasks, such as studying recommendation algorithms on platforms like YouTube and Twitter. These frameworks require manual coding of interactions, limiting their utility in large-scale or dynamic contexts. Steward addresses these limitations by integrating LLM capabilities with browser automation, allowing for natural language-driven interaction with websites. Steward operates by receiving natural language instructions and reactively planning and executing a sequence of actions on websites, looping until completion, making it a practical tool for developers and researchers to use. It achieves high efficiency, completing actions in 8.52 to 10.14 seconds at a cost of $0.028 per action or an average of $0.18 per task, which is further reduced to 4.8 seconds and $0.022 through a caching mechanism. It runs tasks on real websites with a 40% completion success rate. We discuss various design and implementation challenges, including state representation, action sequence selection, system responsiveness, detecting task completion, and caching implementation.<|reference_end|>
|
arxiv
|
@article{tang2024steward:,
title={Steward: Natural Language Web Automation},
author={Brian Tang, Kang G. Shin},
journal={arXiv preprint arXiv:2409.15441},
year={2024},
archivePrefix={arXiv},
eprint={2409.15441},
primaryClass={cs.AI}
}
|
tang2024steward:
|
arxiv-661023
|
2409.15443
|
Revealing an Unattractivity Bias in Mental Reconstruction of Occluded Faces using Generative Image Models
|
<|reference_start|>Revealing an Unattractivity Bias in Mental Reconstruction of Occluded Faces using Generative Image Models: Previous studies have shown that faces are rated as more attractive when they are partially occluded. The cause of this observation remains unclear. One explanation is a mental reconstruction of the occluded face parts which is biased towards a more attractive percept as shown in face-attractiveness rating tasks. We aimed to test for this hypothesis by using a delayed matching-to-sample task, which directly requires mental reconstruction. In two online experiments, we presented observers with unattractive, neutral or attractive synthetic reconstructions of the occluded face parts using a state-of-the-art diffusion-based image generator. Our experiments do not support the initial hypothesis and reveal an unattractiveness bias for occluded faces instead. This suggests that facial attractiveness rating tasks do not prompt reconstructions. Rather, the attractivity bias may arise from global image features, and faces may actually be reconstructed with unattractive properties when mental reconstruction is applied.<|reference_end|>
|
arxiv
|
@article{riedmann2024revealing,
title={Revealing an Unattractivity Bias in Mental Reconstruction of Occluded
Faces using Generative Image Models},
author={Frederik Riedmann, Bernhard Egger, Tim Rohe},
journal={arXiv preprint arXiv:2409.15443},
year={2024},
archivePrefix={arXiv},
eprint={2409.15443},
primaryClass={cs.CV}
}
|
riedmann2024revealing
|
arxiv-661024
|
2409.15447
|
Topological and geometric characterization of synthetic aperture sonar collections
|
<|reference_start|>Topological and geometric characterization of synthetic aperture sonar collections: This article explores the theoretical underpinnings of -- and practical results for -- topological and geometric features of data collected by synthetic aperture sonar systems. We prove a strong theoretical guarantee about the structure of the space of echos (the signature space) that is relevant for classification, and that this structure is measurable in laboratory conditions. The guarantee makes minimal assumptions about the sonar platform's trajectory, and establishes that the signature space divides neatly into topological features based upon the number of prominent echos and their placement, and geometric features that capture their corresponding sonar cross section.<|reference_end|>
|
arxiv
|
@article{robinson2024topological,
title={Topological and geometric characterization of synthetic aperture sonar
collections},
author={Michael Robinson and Zander Memon and Maxwell Gualtieri},
journal={arXiv preprint arXiv:2409.15447},
year={2024},
archivePrefix={arXiv},
eprint={2409.15447},
primaryClass={cs.CE}
}
|
robinson2024topological
|
arxiv-661025
|
2409.15448
|
Optimization-based Verification of Discrete-time Control Barrier Functions: A Branch-and-Bound Approach
|
<|reference_start|>Optimization-based Verification of Discrete-time Control Barrier Functions: A Branch-and-Bound Approach: Discrete-time Control Barrier Functions (DTCBFs) form a powerful control theoretic tool to guarantee safety and synthesize safe controllers for discrete-time dynamical systems. In this paper, we provide an optimization-based algorithm, inspired by the $\alpha$BB algorithm, for the verification of a candidate DTCBF, i.e., either verifying a given candidate function as a valid DTCBF or falsifying it by providing a counterexample for a general nonlinear discrete-time system with input constraints. This method is applicable whether a corresponding control policy is known or unknown. We apply our method to a numerical case study to illustrate its efficacy.<|reference_end|>
|
arxiv
|
@article{shakhesi2024optimization-based,
title={Optimization-based Verification of Discrete-time Control Barrier
Functions: A Branch-and-Bound Approach},
author={Erfan Shakhesi, W.P.M.H. Heemels, Alexander Katriniok},
journal={arXiv preprint arXiv:2409.15448},
year={2024},
archivePrefix={arXiv},
eprint={2409.15448},
primaryClass={math.OC cs.SY eess.SY}
}
|
shakhesi2024optimization-based
|
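The verify-or-falsify loop in the record above has a generic branch-and-bound skeleton that is easy to sketch. The toy below certifies $c(x) \ge 0$ over a box, or returns a counterexample, using a crude Lipschitz lower bound in place of the paper's tighter $\alpha$BB-style relaxations; the conditions, constants, and names are illustrative assumptions, not the paper's DTCBF machinery.

```python
import heapq
import numpy as np

def verify_nonneg(c, lo, hi, lipschitz, tol=1e-6):
    """Certify c(x) >= -tol on the box [lo, hi], or return a counterexample."""
    heap = [(-float(np.linalg.norm(hi - lo)), tuple(lo), tuple(hi))]
    while heap:
        _, l, h = heapq.heappop(heap)
        l, h = np.asarray(l), np.asarray(h)
        mid = (l + h) / 2.0
        if c(mid) < 0.0:
            return False, mid                     # falsified: counterexample
        radius = float(np.linalg.norm(h - l)) / 2.0
        if c(mid) - lipschitz * radius >= -tol:   # lower-bounds c on the box
            continue                              # box certified, prune it
        axis = int(np.argmax(h - l))              # bisect the widest axis
        h1, l2 = h.copy(), l.copy()
        h1[axis], l2[axis] = mid[axis], mid[axis]
        for bl, bh in ((l, h1), (l2, h)):
            heapq.heappush(heap, (-float(np.linalg.norm(bh - bl)),
                                  tuple(bl), tuple(bh)))
    return True, None                             # every box certified

box_lo, box_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
# Margin 0.5 on the box; the gradient norm is at most 2*sqrt(2) < 3.
print(verify_nonneg(lambda x: 2.5 - x[0]**2 - x[1]**2, box_lo, box_hi, 3.0))
print(verify_nonneg(lambda x: 0.2 - x[0]**2, box_lo, box_hi, 2.0))
```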
arxiv-661026
|
2409.15451
|
Tag Map: A Text-Based Map for Spatial Reasoning and Navigation with Large Language Models
|
<|reference_start|>Tag Map: A Text-Based Map for Spatial Reasoning and Navigation with Large Language Models: Large Language Models (LLM) have emerged as a tool for robots to generate task plans using common sense reasoning. For the LLM to generate actionable plans, scene context must be provided, often through a map. Recent works have shifted from explicit maps with fixed semantic classes to implicit open vocabulary maps based on queryable embeddings capable of representing any semantic class. However, embeddings cannot directly report the scene context as they are implicit, requiring further processing for LLM integration. To address this, we propose an explicit text-based map that can represent thousands of semantic classes by building upon large-scale image recognition models, while integrating easily with LLMs due to its text-based nature. We study how entities in our map can be localized and show through evaluations that our text-based map localizations perform comparably to those from open vocabulary maps while using two to four orders of magnitude less memory. Real-robot experiments demonstrate the grounding of an LLM with the text-based map to solve user tasks.<|reference_end|>
|
arxiv
|
@article{zhang2024tag,
title={Tag Map: A Text-Based Map for Spatial Reasoning and Navigation with
Large Language Models},
author={Mike Zhang, Kaixian Qu, Vaishakh Patil, Cesar Cadena, Marco Hutter},
journal={arXiv preprint arXiv:2409.15451},
year={2024},
archivePrefix={arXiv},
eprint={2409.15451},
primaryClass={cs.RO cs.AI cs.CV}
}
|
zhang2024tag
|
arxiv-661027
|
2409.15452
|
CUTE: Measuring LLMs' Understanding of Their Tokens
|
<|reference_start|>CUTE: Measuring LLMs' Understanding of Their Tokens: Large Language Models (LLMs) show remarkable performance on a wide variety of tasks. Most LLMs split text into multi-character tokens and process them as atomic units without direct access to individual characters. This raises the question: To what extent can LLMs learn orthographic information? To answer this, we propose a new benchmark, CUTE, which features a collection of tasks designed to test the orthographic knowledge of LLMs. We evaluate popular LLMs on CUTE, finding that most of them seem to know the spelling of their tokens, yet fail to use this information effectively to manipulate text, calling into question how much of this knowledge is generalizable.<|reference_end|>
|
arxiv
|
@article{edman2024cute:,
title={CUTE: Measuring LLMs' Understanding of Their Tokens},
author={Lukas Edman, Helmut Schmid, Alexander Fraser},
journal={arXiv preprint arXiv:2409.15452},
year={2024},
archivePrefix={arXiv},
eprint={2409.15452},
primaryClass={cs.CL}
}
|
edman2024cute:
|
arxiv-661028
|
2409.15454
|
In-Context Learning May Not Elicit Trustworthy Reasoning: A-Not-B Errors in Pretrained Language Models
|
<|reference_start|>In-Context Learning May Not Elicit Trustworthy Reasoning: A-Not-B Errors in Pretrained Language Models: Recent advancements in artificial intelligence have led to the creation of highly capable large language models (LLMs) that can perform tasks in a human-like manner. However, LLMs exhibit only infant-level cognitive abilities in certain areas. One such area is the A-Not-B error, a phenomenon seen in infants where they repeat a previously rewarded behavior despite well-observed changed conditions. This highlights their lack of inhibitory control -- the ability to stop a habitual or impulsive response. In our work, we design a text-based multi-choice QA scenario similar to the A-Not-B experimental settings to systematically test the inhibitory control abilities of LLMs. We found that state-of-the-art LLMs (like Llama3-8b) perform consistently well with in-context learning (ICL) but make errors and show a significant drop of as much as 83.3% in reasoning tasks when the context changes trivially. This suggests that LLMs only have inhibitory control abilities on par with human infants in this regard, often failing to suppress the previously established response pattern during ICL.<|reference_end|>
|
arxiv
|
@article{han2024in-context,
title={In-Context Learning May Not Elicit Trustworthy Reasoning: A-Not-B Errors
in Pretrained Language Models},
author={Pengrui Han, Peiyang Song, Haofei Yu, Jiaxuan You},
journal={arXiv preprint arXiv:2409.15454},
year={2024},
archivePrefix={arXiv},
eprint={2409.15454},
primaryClass={cs.CL cs.AI}
}
|
han2024in-context
|
arxiv-661029
|
2409.15458
|
Simplifying Triangle Meshes in the Wild
|
<|reference_start|>Simplifying Triangle Meshes in the Wild: This paper introduces a fast and robust method for simplifying surface triangle meshes in the wild while maintaining high visual quality. While previous methods achieve excellent results on manifold meshes by using the quadric error metric, they struggle to produce high-quality outputs for user-created meshes, which often contain non-manifold elements and multiple connected components. In this work, we begin by outlining the pitfalls of existing mesh simplification techniques and highlighting the discrepancy between their formulations and existing mesh data. We then propose a method for simplifying these (non-manifold) triangle meshes, while maintaining quality comparable to the existing methods for manifold inputs. Our key idea is to reformulate mesh simplification as a problem of decimating simplicial 2-complexes. This involves a novel construction to turn a triangle soup into a simplicial 2-complex, followed by iteratively collapsing 1-simplices (vertex pairs) with our modified quadric error metric tailored for topology changes. In addition, we tackle textured mesh simplification. Instead of following existing strategies to preserve mesh UVs, we propose a novel perspective that only focuses on preserving texture colors defined on the surface, regardless of the layout in the texture UV space. This leads to a more robust method for textured mesh simplification that is free from the texture bleeding artifact. Our mesh simplification enables level-of-detail algorithms to operate on arbitrary triangle meshes in the wild. We demonstrate improvements over prior techniques through extensive qualitative and quantitative evaluations, along with user studies.<|reference_end|>
|
arxiv
|
@article{liu2024simplifying,
title={Simplifying Triangle Meshes in the Wild},
author={Hsueh-Ti Derek Liu, Xiaoting Zhang, Cem Yuksel},
journal={arXiv preprint arXiv:2409.15458},
year={2024},
archivePrefix={arXiv},
eprint={2409.15458},
primaryClass={cs.GR}
}
|
liu2024simplifying
|
arxiv-661030
|
2409.15459
|
Position-building in competition with real-world constraints
|
<|reference_start|>Position-building in competition with real-world constraints: This paper extends the optimal-trading framework developed in arXiv:2409.03586v1 to compute optimal strategies with real-world constraints. The aim of the current paper, as with the previous, is to study trading in the context of multi-player non-cooperative games. While the former paper relies on methods from the calculus of variations and optimal strategies arise as the solution of partial differential equations, the current paper demonstrates that the entire framework may be re-framed as a quadratic programming problem and cast in this light constraints are readily incorporated into the calculation of optimal strategies. An added benefit is that two-trader equilibria may be calculated as the end-points of a dynamic process of traders forming repeated adjustments to each other's strategy.<|reference_end|>
|
arxiv
|
@article{chriss2024position-building,
title={Position-building in competition with real-world constraints},
author={Neil A. Chriss},
journal={arXiv preprint arXiv:2409.15459},
year={2024},
archivePrefix={arXiv},
eprint={2409.15459},
primaryClass={q-fin.TR cs.GT}
}
|
chriss2024position-building
|
arxiv-661031
|
2409.15461
|
RAM2C: A Liberal Arts Educational Chatbot based on Retrieval-augmented Multi-role Multi-expert Collaboration
|
<|reference_start|>RAM2C: A Liberal Arts Educational Chatbot based on Retrieval-augmented Multi-role Multi-expert Collaboration: Recently, many studies have focused on applying large language models (LLMs) to educational dialogues. Especially within liberal arts dialogues, educators must balance \textbf{H}umanized communication, \textbf{T}eaching expertise, and \textbf{S}afety-ethics (\textbf{HTS}), besides the subject knowledge itself. However, because collecting massive amounts of HTS-compliant teaching dialogues from the real world as a training corpus is expensive, the outputs of existing LLMs in teaching dialogues fall short of human standards. To address this, we design a Retrieval-augmented Multi-role Multi-expert Collaboration (RAM2C) framework to automatically generate such dialogue data. Specifically, we first establish HTS-guided knowledge bases, encompassing three domains of knowledge in teaching skills, psychology, and safety ethics. Then, RAM2C organizes LLMs, which are retrieval-augmented by the above different knowledge bases, into multi-expert groups with distinct roles to generate the HTS-compliant educational dialogues dataset. We then fine-tuned the LLMs using this dataset. Empirical evaluations indicate that RAM2C-empowered LLMs excel in Chinese reading teaching, offering more personalized and ethically safe teaching responses, demonstrating RAM2C's practicality and high quality. We release the experiments at \hyperlink{https://github.com/ram2c/ram2c}{https://github.com/ram2c/ram2c}.<|reference_end|>
|
arxiv
|
@article{huang2024ram2c:,
title={RAM2C: A Liberal Arts Educational Chatbot based on Retrieval-augmented
Multi-role Multi-expert Collaboration},
author={Haoyu Huang, Tong Niu, Rui Yang, Luping Shi},
journal={arXiv preprint arXiv:2409.15461},
year={2024},
archivePrefix={arXiv},
eprint={2409.15461},
primaryClass={cs.AI cs.CL}
}
|
huang2024ram2c:
|
arxiv-661032
|
2409.15463
|
Preventing Rowhammer Exploits via Low-Cost Domain-Aware Memory Allocation
|
<|reference_start|>Preventing Rowhammer Exploits via Low-Cost Domain-Aware Memory Allocation: Rowhammer is a hardware security vulnerability at the heart of every system with modern DRAM-based memory. Despite its discovery a decade ago, comprehensive defenses remain elusive, while the probability of successful attacks grows with DRAM density. Hardware-based defenses have been ineffective, due to considerable cost, delays in commercial adoption, and attackers' repeated ability to circumvent them. Meanwhile, more flexible software-based solutions either incur substantial performance and memory capacity overheads, or offer limited forms of protection. Citadel is a new memory allocator design that prevents Rowhammer-initiated security exploits by addressing the vulnerability's root cause: physical adjacency of DRAM rows. Citadel enables creation of flexible security domains and isolates different domains in physically disjoint memory regions, guaranteeing security by design. On a server system, Citadel supports thousands of security domains at a modest 7.4% average memory overhead and no performance loss. In contrast, recent domain isolation schemes fail to support many workload scenarios due to excessive overheads, and incur 4--6x higher overheads for supported scenarios. As a software solution, Citadel offers readily deployable Rowhammer-aware isolation on legacy, current, and future systems.<|reference_end|>
|
arxiv
|
@article{saxena2024preventing,
title={Preventing Rowhammer Exploits via Low-Cost Domain-Aware Memory
Allocation},
author={Anish Saxena, Walter Wang, Alexandros Daglis},
journal={arXiv preprint arXiv:2409.15463},
year={2024},
archivePrefix={arXiv},
eprint={2409.15463},
primaryClass={cs.CR}
}
|
saxena2024preventing
|
arxiv-661033
|
2409.15464
|
Toward a Predictive eXtended Reality Teleoperation System with Duo-Virtual Spaces
|
<|reference_start|>Toward a Predictive eXtended Reality Teleoperation System with Duo-Virtual Spaces: Extended Reality (XR) provides a more intuitive interaction method for teleoperating robots compared to traditional 2D controls. Recent studies have laid the groundwork for usable teleoperation with XR, but it fails in tasks requiring rapid motion and precise manipulations due to the large delay between user motion and agent feedback. In this work, we profile the end-to-end latency in a state-of-the-art XR teleoperation system and propose our idea to optimize the latency by implementing a duo-virtual spaces design and localizing the agent and objects in the user-side virtual space, while calibrating with periodic ground-truth poses from the agent-side virtual space.<|reference_end|>
|
arxiv
|
@article{zhang2024toward,
title={Toward a Predictive eXtended Reality Teleoperation System with
Duo-Virtual Spaces},
author={Ziliang Zhang, Cong Liu, Hyoseung Kim},
journal={arXiv preprint arXiv:2409.15464},
year={2024},
archivePrefix={arXiv},
eprint={2409.15464},
primaryClass={cs.RO cs.HC}
}
|
zhang2024toward
|
arxiv-661034
|
2409.15465
|
In the Wild Ungraspable Object Picking with Bimanual Nonprehensile Manipulation
|
<|reference_start|>In the Wild Ungraspable Object Picking with Bimanual Nonprehensile Manipulation: Picking diverse objects in the real world is a fundamental robotics skill. However, many objects in such settings are bulky, heavy, or irregularly shaped, making them ungraspable by conventional end effectors like suction grippers and parallel jaw grippers (PJGs). In this paper, we expand the range of pickable items without hardware modifications using bimanual nonprehensile manipulation. We focus on a grocery shopping scenario, where a bimanual mobile manipulator equipped with a suction gripper and a PJG is tasked with retrieving ungraspable items from tightly packed grocery shelves. From visual observations, our method first identifies optimal grasp points based on force closure and friction constraints. If the grasp points are occluded, a series of nonprehensile nudging motions are performed to clear the obstruction. A bimanual grasp utilizing contacts on the side of the end effectors is then executed to grasp the target item. In our replica grocery store, we achieved a 90% success rate over 102 trials in uncluttered scenes, and a 67% success rate over 45 trials in cluttered scenes. We also deployed our system to a real-world grocery store and successfully picked previously unseen items. Our results highlight the potential of bimanual nonprehensile manipulation for in-the-wild robotic picking tasks. A video summarizing this work can be found at youtu.be/g0hOrDuK8jM<|reference_end|>
|
arxiv
|
@article{wu2024in,
title={In the Wild Ungraspable Object Picking with Bimanual Nonprehensile
Manipulation},
author={Albert Wu, Dan Kruse},
journal={arXiv preprint arXiv:2409.15465},
year={2024},
archivePrefix={arXiv},
eprint={2409.15465},
primaryClass={cs.RO}
}
|
wu2024in
|
arxiv-661035
|
2409.15466
|
Mat\'ern Kernels for Tunable Implicit Surface Reconstruction
|
<|reference_start|>Mat\'ern Kernels for Tunable Implicit Surface Reconstruction: We propose to use the family of Mat\'ern kernels for tunable implicit surface reconstruction, building upon the recent success of kernel methods for 3D reconstruction of oriented point clouds. As we show from both a theoretical and a practical perspective, Mat\'ern kernels have some appealing properties which make them particularly well suited for surface reconstruction -- outperforming state-of-the-art methods based on the arc-cosine kernel while being significantly easier to implement, faster to compute, and scalable. Being stationary, we demonstrate that the Mat\'ern kernels' spectrum can be tuned in the same fashion as Fourier feature mappings help coordinate-based MLPs to overcome spectral bias. Moreover, we theoretically analyze the Mat\'ern kernel's connection to SIREN networks as well as its relation to previously employed arc-cosine kernels. Finally, based on recently introduced Neural Kernel Fields, we present data-dependent Mat\'ern kernels and conclude that especially the Laplace kernel (being part of the Mat\'ern family) is extremely competitive, performing almost on par with state-of-the-art methods in the noise-free case while having a more than five times shorter training time.<|reference_end|>
|
arxiv
|
@article{weiherer2024matern,
title={Mat\'ern Kernels for Tunable Implicit Surface Reconstruction},
author={Maximilian Weiherer, Bernhard Egger},
journal={arXiv preprint arXiv:2409.15466},
year={2024},
archivePrefix={arXiv},
eprint={2409.15466},
primaryClass={cs.CV cs.LG}
}
|
weiherer2024matern
|
arxiv-661036
|
2409.15468
|
FRSZ2 for In-Register Block Compression Inside GMRES on GPUs
|
<|reference_start|>FRSZ2 for In-Register Block Compression Inside GMRES on GPUs: The performance of the GMRES iterative solver on GPUs is limited by the GPU main memory bandwidth. Compressed Basis GMRES outperforms GMRES by storing the Krylov basis in low precision, thereby reducing the memory access. An open question is whether compression techniques that are more sophisticated than casting to low precision can enable large runtime savings while preserving the accuracy of the final results. This paper presents the lightweight in-register compressor FRSZ2 that can decompress at the bandwidth speed of a modern NVIDIA H100 GPU. In an experimental evaluation, we demonstrate using FRSZ2 instead of low precision for compression of the Krylov basis can bring larger runtime benefits without impacting final accuracy.<|reference_end|>
|
arxiv
|
@article{grützmacher2024frsz2,
title={FRSZ2 for In-Register Block Compression Inside GMRES on GPUs},
author={Thomas Gr"utzmacher, Robert Underwood, Sheng Di, Franck Cappello,
Hartwig Anzt},
journal={arXiv preprint arXiv:2409.15468},
year={2024},
archivePrefix={arXiv},
eprint={2409.15468},
primaryClass={cs.PF cs.DS}
}
|
grützmacher2024frsz2
|
arxiv-661037
|
2409.15470
|
A Near-Optimal Low-Energy Deterministic Distributed SSSP with Ramifications on Congestion and APSP
|
<|reference_start|>A Near-Optimal Low-Energy Deterministic Distributed SSSP with Ramifications on Congestion and APSP: We present a low-energy deterministic distributed algorithm that computes exact Single-Source Shortest Paths (SSSP) in near-optimal time: it runs in $\tilde{O}(n)$ rounds and each node is awake during only $poly(\log n)$ rounds. When a node is not awake, it performs no computations or communications and spends no energy. The general approach we take along the way to this result can be viewed as a novel adaptation of Dijkstra's classic approach to SSSP, which makes it suitable for the distributed setting. Notice that Dijkstra's algorithm itself is not efficient in the distributed setting due to its need for repeatedly computing the minimum-distance unvisited node in the entire network. Our adapted approach has other implications, as we outline next. As a step toward the above end-result, we obtain a simple deterministic algorithm for exact SSSP with near-optimal time and message complexities of $\tilde{O}(n)$ and $\tilde{O}(m)$, in which each edge communicates only $poly(\log n)$ messages. Therefore, one can simultaneously run $n$ instances of it for $n$ sources, using a simple random delay scheduling. That computes All Pairs Shortest Paths (APSP) in the near-optimal time complexity of $\tilde{O}(n)$. This algorithm matches the complexity of the recent APSP algorithm of Bernstein and Nanongkai [STOC 2019] using a completely different method (and one that is more modular, in the sense that the SSSPs are solved independently). It also takes a step toward resolving the open problem on a deterministic $\tilde{O}(n)$-time APSP, as the only randomness used now is in the scheduling.<|reference_end|>
|
arxiv
|
@article{ghaffari2024a,
title={A Near-Optimal Low-Energy Deterministic Distributed SSSP with
Ramifications on Congestion and APSP},
author={Mohsen Ghaffari, Anton Trygub},
journal={arXiv preprint arXiv:2409.15470},
year={2024},
archivePrefix={arXiv},
eprint={2409.15470},
primaryClass={cs.DS cs.DC}
}
|
ghaffari2024a
|
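For context on the SSSP abstract above: Dijkstra's classic sequential algorithm is the approach the authors adapt, and its bottleneck in a distributed setting is the repeated extraction of the global minimum-distance node. A minimal sequential version follows (the centralized baseline being adapted, not the distributed algorithm itself):

```python
import heapq

def dijkstra(adj, source):
    """Classic sequential Dijkstra. In a distributed network, repeatedly
    extracting the global minimum-distance node is the expensive step the
    paper's adaptation avoids. adj: {u: [(v, w), ...]}, nonnegative w."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {0: [(1, 2), (2, 5)], 1: [(2, 1)], 2: []}
print(dijkstra(graph, 0))  # {0: 0, 1: 2, 2: 3}
```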
arxiv-661038
|
2409.15471
|
EvAlignUX: Advancing UX Research through LLM-Supported Exploration of Evaluation Metrics
|
<|reference_start|>EvAlignUX: Advancing UX Research through LLM-Supported Exploration of Evaluation Metrics: Evaluating UX in the context of AI's complexity, unpredictability, and generative nature presents unique challenges. HCI scholars lack sufficient tool support to build knowledge around diverse evaluation metrics and develop comprehensive UX evaluation plans. In this paper, we introduce EvAlignUX, an innovative system grounded in scientific literature and powered by large language models (LLMs), designed to help HCI scholars explore evaluation metrics and their relationship to potential research outcomes. A user study involving 19 HCI scholars revealed that EvAlignUX significantly improved the perceived clarity, specificity, feasibility, and overall quality of their evaluation proposals. The use of EvAlignUX enhanced participants' thought processes, resulting in the creation of a Question Bank that can be used to guide UX Evaluation Development. Additionally, the influence of researchers' backgrounds on their perceived inspiration and concerns about over-reliance on AI highlights future research directions for AI's role in fostering critical thinking.<|reference_end|>
|
arxiv
|
@article{zheng2024evalignux:,
title={EvAlignUX: Advancing UX Research through LLM-Supported Exploration of
Evaluation Metrics},
author={Qingxiao Zheng, Minrui Chen, Pranav Sharma, Yiliu Tang, Mehul Oswal,
Yiren Liu, Yun Huang},
journal={arXiv preprint arXiv:2409.15471},
year={2024},
archivePrefix={arXiv},
eprint={2409.15471},
primaryClass={cs.HC}
}
|
zheng2024evalignux:
|
arxiv-661039
|
2409.15473
|
Exploring Requirements Elicitation from App Store User Reviews Using Large Language Models
|
<|reference_start|>Exploring Requirements Elicitation from App Store User Reviews Using Large Language Models: Mobile applications have become indispensable companions in our daily lives. Spanning categories from communication and entertainment to healthcare and finance, these applications influence nearly every aspect of daily life. Despite their omnipresence, developing apps that meet user needs and expectations still remains a challenge. Traditional requirements elicitation methods like user interviews can be time-consuming and suffer from limited scope and subjectivity. This research introduces an approach leveraging the power of Large Language Models (LLMs) to analyze user reviews for automated requirements elicitation. We fine-tuned three well-established LLMs, BERT, DistilBERT, and GEMMA, on a dataset of app reviews labeled for usefulness. Our evaluation revealed BERT's superior performance, achieving an accuracy of 92.40% and an F1-score of 92.39%, demonstrating its effectiveness in accurately classifying useful reviews. While GEMMA displayed a lower overall performance, it excelled in recall (93.39%), indicating its potential for capturing a comprehensive set of valuable user insights. These findings suggest that LLMs offer a promising avenue for streamlining requirements elicitation in mobile app development, leading to the creation of more user-centric and successful applications.<|reference_end|>
|
arxiv
|
@article{ghosh2024exploring,
title={Exploring Requirements Elicitation from App Store User Reviews Using
Large Language Models},
author={Tanmai Kumar Ghosh, Atharva Pargaonkar, Nasir U. Eisty},
journal={arXiv preprint arXiv:2409.15473},
year={2024},
archivePrefix={arXiv},
eprint={2409.15473},
primaryClass={cs.SE}
}
|
ghosh2024exploring
|
arxiv-661040
|
2409.15475
|
Framework for Robust Localization of UUVs and Mapping of Net Pens
|
<|reference_start|>Framework for Robust Localization of UUVs and Mapping of Net Pens: This paper presents a general framework integrating vision and acoustic sensor data to enhance localization and mapping in highly dynamic and complex underwater environments, with a particular focus on fish farming. The proposed pipeline is suited to obtain both the net-relative pose estimates of an Unmanned Underwater Vehicle (UUV) and the depth map of the net pen purely based on vision data. Furthermore, this paper presents a method to estimate the global pose of an UUV fusing the net-relative pose estimates with acoustic data. The pipeline proposed in this paper showcases results on datasets obtained from industrial-scale fish farms and successfully demonstrates that the vision-based TRU-Depth model, when provided with sparse depth priors from the FFT method and combined with the Wavemap method, can estimate both net-relative and global position of the UUV in real time and generate detailed 3D maps suitable for autonomous navigation and inspection purposes.<|reference_end|>
|
arxiv
|
@article{botta2024framework,
title={Framework for Robust Localization of UUVs and Mapping of Net Pens},
author={David Botta, Luca Ebner, Andrej Studer, Victor Reijgwart, Roland
Siegwart, Eleni Kelasidi},
journal={arXiv preprint arXiv:2409.15475},
year={2024},
archivePrefix={arXiv},
eprint={2409.15475},
primaryClass={cs.RO}
}
|
botta2024framework
|
arxiv-661041
|
2409.15476
|
Parallel Dynamic Maximal Matching
|
<|reference_start|>Parallel Dynamic Maximal Matching: We present the first (randomized) parallel dynamic algorithm for maximal matching, which can process an arbitrary number of updates simultaneously. Given a batch of edge deletion or insertion updates to the graph, our parallel algorithm adjusts the maximal matching to these updates in $poly(\log n)$ depth and using $poly(\log n)$ amortized work per update. That is, the amortized work for processing a batch of $k$ updates is $kpoly(\log n)$, while all this work is done in $poly(\log n)$ depth, with high probability. This can be seen as a parallel counterpart of the sequential dynamic algorithms for constant-approximate and maximal matching [Onak and Rubinfeld STOC'10; Baswana, Gupta, and Sen FOCS'11; and Solomon FOCS'16]. Our algorithm readily generalizes to maximal matching in hypergraphs of rank $r$ -- where each hyperedge has at most $r$ endpoints -- with a $poly(r)$ increase in work, while retaining the $poly(\log n)$ depth.<|reference_end|>
|
arxiv
|
@article{ghaffari2024parallel,
title={Parallel Dynamic Maximal Matching},
author={Mohsen Ghaffari, Anton Trygub},
journal={arXiv preprint arXiv:2409.15476},
year={2024},
archivePrefix={arXiv},
eprint={2409.15476},
primaryClass={cs.DS cs.DC}
}
|
ghaffari2024parallel
|
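To make the invariant in the abstract above concrete, here is a minimal sequential sketch of dynamic maximal matching that processes one update at a time. The paper's contribution is handling whole batches of updates in polylog depth; this toy only shows the invariant being maintained, and all names are invented for illustration.

```python
# Sequential toy: keep a maximal matching under edge insertions/deletions.
# Invariant: no edge has both endpoints unmatched.

class DynamicMaximalMatching:
    def __init__(self):
        self.adj = {}     # vertex -> set of neighbors
        self.mate = {}    # vertex -> matched partner, if any

    def _try_match(self, u):
        if u in self.mate:
            return
        for v in self.adj.get(u, ()):
            if v not in self.mate:
                self.mate[u] = v
                self.mate[v] = u
                return

    def insert(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)
        if u not in self.mate and v not in self.mate:
            self.mate[u] = v
            self.mate[v] = u

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        if self.mate.get(u) == v:   # a matched edge vanished:
            del self.mate[u], self.mate[v]
            self._try_match(u)      # greedily rematch both endpoints
            self._try_match(v)

m = DynamicMaximalMatching()
m.insert(1, 2); m.insert(2, 3); m.insert(3, 4)
m.delete(1, 2)
print(m.mate)  # {3: 4, 4: 3}: still a maximal matching of the remaining edges
```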
arxiv-661042
|
2409.15477
|
MediConfusion: Can you trust your AI radiologist? Probing the reliability of multimodal medical foundation models
|
<|reference_start|>MediConfusion: Can you trust your AI radiologist? Probing the reliability of multimodal medical foundation models: Multimodal Large Language Models (MLLMs) have tremendous potential to improve the accuracy, availability, and cost-effectiveness of healthcare by providing automated solutions or serving as aids to medical professionals. Despite promising first steps in developing medical MLLMs in the past few years, their capabilities and limitations are not well-understood. Recently, many benchmark datasets have been proposed that test the general medical knowledge of such models across a variety of medical areas. However, the systematic failure modes and vulnerabilities of such models are severely underexplored with most medical benchmarks failing to expose the shortcomings of existing models in this safety-critical domain. In this paper, we introduce MediConfusion, a challenging medical Visual Question Answering (VQA) benchmark dataset, that probes the failure modes of medical MLLMs from a vision perspective. We reveal that state-of-the-art models are easily confused by image pairs that are otherwise visually dissimilar and clearly distinct for medical experts. Strikingly, all available models (open-source or proprietary) achieve performance below random guessing on MediConfusion, raising serious concerns about the reliability of existing medical MLLMs for healthcare deployment. We also extract common patterns of model failure that may help the design of a new generation of more trustworthy and reliable MLLMs in healthcare.<|reference_end|>
|
arxiv
|
@article{sepehri2024mediconfusion:,
title={MediConfusion: Can you trust your AI radiologist? Probing the
reliability of multimodal medical foundation models},
author={Mohammad Shahab Sepehri, Zalan Fabian, Maryam Soltanolkotabi, Mahdi
Soltanolkotabi},
journal={arXiv preprint arXiv:2409.15477},
year={2024},
archivePrefix={arXiv},
eprint={2409.15477},
primaryClass={cs.CV}
}
|
sepehri2024mediconfusion:
|
arxiv-661043
|
2409.15481
|
Adapting Segment Anything Model for Unseen Object Instance Segmentation
|
<|reference_start|>Adapting Segment Anything Model for Unseen Object Instance Segmentation: Unseen Object Instance Segmentation (UOIS) is crucial for autonomous robots operating in unstructured environments. Previous approaches require full supervision on large-scale tabletop datasets for effective pretraining. In this paper, we propose UOIS-SAM, a data-efficient solution for the UOIS task that leverages SAM's high accuracy and strong generalization capabilities. UOIS-SAM integrates two key components: (i) a Heatmap-based Prompt Generator (HPG) to generate class-agnostic point prompts with precise foreground prediction, and (ii) a Hierarchical Discrimination Network (HDNet) that adapts SAM's mask decoder, mitigating issues introduced by the SAM baseline, such as background confusion and over-segmentation, especially in scenarios involving occlusion and texture-rich objects. Extensive experimental results on OCID, OSD, and additional photometrically challenging datasets including PhoCAL and HouseCat6D, demonstrate that, even using only 10% of the training samples compared to previous methods, UOIS-SAM achieves state-of-the-art performance in unseen object segmentation, highlighting its effectiveness and robustness in various tabletop scenes.<|reference_end|>
|
arxiv
|
@article{cao2024adapting,
title={Adapting Segment Anything Model for Unseen Object Instance Segmentation},
author={Rui Cao, Chuanxin Song, Biqi Yang, Jiangliu Wang, Pheng-Ann Heng,
Yun-Hui Liu},
journal={arXiv preprint arXiv:2409.15481},
year={2024},
archivePrefix={arXiv},
eprint={2409.15481},
primaryClass={cs.RO cs.CV}
}
|
cao2024adapting
|
arxiv-661044
|
2409.15484
|
Blind Localization of Early Room Reflections with Arbitrary Microphone Array
|
<|reference_start|>Blind Localization of Early Room Reflections with Arbitrary Microphone Array: Blindly estimating the direction of arrival (DoA) of early room reflections without prior knowledge of the room impulse response or source signal is highly valuable in audio signal processing applications. The FF-PHALCOR (Frequency Focusing PHase ALigned CORrelation) method was recently developed for this purpose, extending the original PHALCOR method to work with arbitrary arrays rather than just spherical ones. Previous studies have provided only initial insights into its performance. This study offers a comprehensive analysis of the method's performance and limitations, examining how reflection characteristics such as delay, amplitude, and spatial density affect its effectiveness. The research also proposes improvements to overcome these limitations, enhancing detection quality and reducing false alarms. Additionally, the study examined how spatial perception is affected by generating room impulse responses using estimated reflection information. The findings suggest a perceptual advantage of the proposed approach over the baseline, with particularly high perceptual quality when using the spherical array with 32 microphones. However, the quality is somewhat reduced when using a semi-circular array with only 6 microphones.<|reference_end|>
|
arxiv
|
@article{hadadi2024blind,
title={Blind Localization of Early Room Reflections with Arbitrary Microphone
Array},
author={Yogev Hadadi, Vladimir Tourbabin, Zamir Ben-Hur, David Lou Alon, Boaz
Rafaely},
journal={arXiv preprint arXiv:2409.15484},
year={2024},
archivePrefix={arXiv},
eprint={2409.15484},
primaryClass={eess.AS cs.SD}
}
|
hadadi2024blind
|
arxiv-661045
|
2409.15486
|
VLMine: Long-Tail Data Mining with Vision Language Models
|
<|reference_start|>VLMine: Long-Tail Data Mining with Vision Language Models: Ensuring robust performance on long-tail examples is an important problem for many real-world applications of machine learning, such as autonomous driving. This work focuses on the problem of identifying rare examples within a corpus of unlabeled data. We propose a simple and scalable data mining approach that leverages the knowledge contained within a large vision language model (VLM). Our approach utilizes a VLM to summarize the content of an image into a set of keywords, and we identify rare examples based on keyword frequency. We find that the VLM offers a distinct signal for identifying long-tail examples when compared to conventional methods based on model uncertainty. Therefore, we propose a simple and general approach for integrating signals from multiple mining algorithms. We evaluate the proposed method on two diverse tasks: 2D image classification, in which inter-class variation is the primary source of data diversity, and on 3D object detection, where intra-class variation is the main concern. Furthermore, through the detection task, we demonstrate that the knowledge extracted from 2D images is transferable to the 3D domain. Our experiments consistently show large improvements (between 10\% and 50\%) over the baseline techniques on several representative benchmarks: ImageNet-LT, Places-LT, and the Waymo Open Dataset.<|reference_end|>
|
arxiv
|
@article{ye2024vlmine:,
title={VLMine: Long-Tail Data Mining with Vision Language Models},
author={Mao Ye, Gregory P. Meyer, Zaiwei Zhang, Dennis Park, Siva Karthik
Mustikovela, Yuning Chai, Eric M Wolff},
journal={arXiv preprint arXiv:2409.15486},
year={2024},
archivePrefix={arXiv},
eprint={2409.15486},
primaryClass={cs.CV cs.AI}
}
|
ye2024vlmine:
|
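The keyword-frequency mining step described in the VLMine abstract reduces to a few lines once keywords exist. Below is a sketch under the assumption that VLM-generated keywords are already available per image; the `mine_long_tail` helper, the rarity score, and the toy corpus are invented for illustration, not taken from the paper.

```python
from collections import Counter

def mine_long_tail(image_keywords, k=2):
    """Score each example by the rarity of its keywords and return the k
    rarest. `image_keywords` maps example id -> keyword list (in the paper
    these come from a VLM summary of the image; here they are given).
    Rarity of an example = sum over its keywords of 1 / corpus frequency."""
    freq = Counter(kw for kws in image_keywords.values() for kw in kws)
    rarity = lambda kws: sum(1.0 / freq[kw] for kw in kws)
    return sorted(image_keywords, key=lambda i: rarity(image_keywords[i]),
                  reverse=True)[:k]

corpus = {
    "img0": ["car", "road", "sky"],
    "img1": ["car", "road", "pedestrian"],
    "img2": ["overturned truck", "debris", "road"],  # the long-tail example
    "img3": ["car", "sky", "road"],
}
print(mine_long_tail(corpus, k=1))  # ['img2']
```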
arxiv-661046
|
2409.15487
|
AgriNeRF: Neural Radiance Fields for Agriculture in Challenging Lighting Conditions
|
<|reference_start|>AgriNeRF: Neural Radiance Fields for Agriculture in Challenging Lighting Conditions: Neural Radiance Fields (NeRFs) have shown significant promise in 3D scene reconstruction and novel view synthesis. In agricultural settings, NeRFs can serve as digital twins, providing critical information about fruit detection for yield estimation and other important metrics for farmers. However, traditional NeRFs are not robust to challenging lighting conditions, such as low-light, extreme bright light and varying lighting. To address these issues, this work leverages three different sensors: an RGB camera, an event camera and a thermal camera. Our RGB scene reconstruction shows an improvement in PSNR and SSIM by +2.06 dB and +8.3% respectively. Our cross-spectral scene reconstruction enhances downstream fruit detection by +43.0% in mAP50 and +61.1% increase in mAP50-95. The integration of additional sensors leads to a more robust and informative NeRF. We demonstrate that our multi-modal system yields high quality photo-realistic reconstructions under various tree canopy covers and at different times of the day. This work results in the development of a resilient NeRF, capable of performing well in visibly degraded scenarios, as well as a learnt cross-spectral representation, that is used for automated fruit detection.<|reference_end|>
|
arxiv
|
@article{chopra2024agrinerf:,
title={AgriNeRF: Neural Radiance Fields for Agriculture in Challenging Lighting
Conditions},
author={Samarth Chopra, Fernando Cladera, Varun Murali, Vijay Kumar},
journal={arXiv preprint arXiv:2409.15487},
year={2024},
archivePrefix={arXiv},
eprint={2409.15487},
primaryClass={cs.RO}
}
|
chopra2024agrinerf:
|
arxiv-661047
|
2409.15488
|
Voice Assistants for Health Self-Management: Designing for and with Older Adults
|
<|reference_start|>Voice Assistants for Health Self-Management: Designing for and with Older Adults: Supporting older adults in health self-management is crucial for promoting independent aging, particularly given the growing strain on healthcare systems. While voice assistants (VAs) hold the potential to support aging in place, they often lack tailored assistance and present usability challenges. We addressed these issues through a five-stage design process with older adults to develop a personal health assistant. Starting with in-home interviews (N=17), we identified two primary challenges in older adult's health self-management: health awareness and medical adherence. To address these challenges, we developed a high-fidelity LLM-powered VA prototype to debrief doctor's visit notes and generate tailored medication reminders. We refined our prototype with feedback from co-design workshops (N=10) and validated its usability through in-home studies (N=5). Our work highlights key design features for personal health assistants and provides broader insights into desirable VA characteristics, including personalization, adapting to user context, and respect for user autonomy.<|reference_end|>
|
arxiv
|
@article{mahmood2024voice,
title={Voice Assistants for Health Self-Management: Designing for and with
Older Adults},
author={Amama Mahmood, Shiye Cao, Maia Stiber, Victor Nikhil Antony and
Chien-Ming Huang},
journal={arXiv preprint arXiv:2409.15488},
year={2024},
archivePrefix={arXiv},
eprint={2409.15488},
primaryClass={cs.HC}
}
|
mahmood2024voice
|
arxiv-661048
|
2409.15491
|
Computational Pathology for Accurate Prediction of Breast Cancer Recurrence: Development and Validation of a Deep Learning-based Tool
|
<|reference_start|>Computational Pathology for Accurate Prediction of Breast Cancer Recurrence: Development and Validation of a Deep Learning-based Tool: Accurate recurrence risk stratification is crucial for optimizing treatment plans for breast cancer patients. Current prognostic tools like Oncotype DX (ODX) offer valuable genomic insights for HR+/HER2- patients but are limited by cost and accessibility, particularly in underserved populations. In this study, we present Deep-BCR-Auto, a deep learning-based computational pathology approach that predicts breast cancer recurrence risk from routine H&E-stained whole slide images (WSIs). Our methodology was validated on two independent cohorts: the TCGA-BRCA dataset and an in-house dataset from The Ohio State University (OSU). Deep-BCR-Auto demonstrated robust performance in stratifying patients into low- and high-recurrence risk categories. On the TCGA-BRCA dataset, the model achieved an area under the receiver operating characteristic curve (AUROC) of 0.827, significantly outperforming existing weakly supervised models (p=0.041). In the independent OSU dataset, Deep-BCR-Auto maintained strong generalizability, achieving an AUROC of 0.832, along with 82.0% accuracy, 85.0% specificity, and 67.7% sensitivity. These findings highlight the potential of computational pathology as a cost-effective alternative for recurrence risk assessment, broadening access to personalized treatment strategies. This study underscores the clinical utility of integrating deep learning-based computational pathology into routine pathological assessment for breast cancer prognosis across diverse clinical settings.<|reference_end|>
|
arxiv
|
@article{su2024computational,
title={Computational Pathology for Accurate Prediction of Breast Cancer
Recurrence: Development and Validation of a Deep Learning-based Tool},
author={Ziyu Su, Yongxin Guo, Robert Wesolowski, Gary Tozbikian, Nathaniel S.
O'Connell, M. Khalid Khan Niazi, Metin N. Gurcan},
journal={arXiv preprint arXiv:2409.15491},
year={2024},
archivePrefix={arXiv},
eprint={2409.15491},
primaryClass={eess.IV cs.AI q-bio.QM}
}
|
su2024computational
|
arxiv-661049
|
2409.15493
|
Autonomous Exploration and Semantic Updating of Large-Scale Indoor Environments with Mobile Robots
|
<|reference_start|>Autonomous Exploration and Semantic Updating of Large-Scale Indoor Environments with Mobile Robots: We introduce a new robotic system that enables a mobile robot to autonomously explore an unknown environment, build a semantic map of the environment, and subsequently update the semantic map to reflect environment changes, such as location changes of objects. Our system leverages a LiDAR scanner for 2D occupancy grid mapping and an RGB-D camera for object perception. We introduce a semantic map representation that combines a 2D occupancy grid map for geometry, with a topological map for object semantics. This map representation enables us to effectively update the semantics by deleting or adding nodes to the topological map. Our system has been tested on a Fetch robot. The robot can semantically map a 93m x 90m floor and update the semantic map once objects are moved in the environment.<|reference_end|>
|
arxiv
|
@article{allu2024autonomous,
title={Autonomous Exploration and Semantic Updating of Large-Scale Indoor
Environments with Mobile Robots},
author={Sai Haneesh Allu, Itay Kadosh, Tyler Summers, Yu Xiang},
journal={arXiv preprint arXiv:2409.15493},
year={2024},
archivePrefix={arXiv},
eprint={2409.15493},
primaryClass={cs.RO cs.CV}
}
|
allu2024autonomous
|
arxiv-661050
|
2409.15495
|
From Our Lab to Their Homes: Learnings from Longitudinal Field Research with Older Adults
|
<|reference_start|>From Our Lab to Their Homes: Learnings from Longitudinal Field Research with Older Adults: Conducting research with older adults in their home environments presents unique opportunities and challenges that differ significantly from traditional lab-based studies. In this paper, we share our experiences from year-long research activities aiming to design and evaluate conversational voice assistants for older adults through longitudinal deployment, interviews, co-design workshops, and evaluation studies. We discuss the benefits of bringing the lab to their home, including producing realistic and contextual interactions, creating stronger researcher-participant bonds, and enabling participant growth with the research over time. We also detail the difficulties encountered in various aspects of the research process, including recruitment, scheduling, logistics, following study protocols, and study closure. These learnings highlight the complex, yet rewarding, nature of longitudinal home-based research with older adults, offering lessons for future studies aiming to achieve real-world applicability.<|reference_end|>
|
arxiv
|
@article{mahmood2024from,
title={From Our Lab to Their Homes: Learnings from Longitudinal Field Research
with Older Adults},
author={Amama Mahmood and Chien-Ming Huang},
journal={arXiv preprint arXiv:2409.15495},
year={2024},
archivePrefix={arXiv},
eprint={2409.15495},
primaryClass={cs.HC}
}
|
mahmood2024from
|
arxiv-661051
|
2409.15498
|
Analysis of Human Perception in Distinguishing Real and AI-Generated Faces: An Eye-Tracking Based Study
|
<|reference_start|>Analysis of Human Perception in Distinguishing Real and AI-Generated Faces: An Eye-Tracking Based Study: Recent advancements in Artificial Intelligence have led to remarkable improvements in generating realistic human faces. While these advancements demonstrate significant progress in generative models, they also raise concerns about the potential misuse of these generated images. In this study, we investigate how humans perceive and distinguish between real and fake images. We designed a perceptual experiment using eye-tracking technology to analyze how individuals differentiate real faces from those generated by AI. Our analysis of StyleGAN-3 generated images reveals that participants can distinguish real from fake faces with an average accuracy of 76.80%. Additionally, we found that participants scrutinize images more closely when they suspect an image to be fake. We believe this study offers valuable insights into human perception of AI-generated media.<|reference_end|>
|
arxiv
|
@article{huang2024analysis,
title={Analysis of Human Perception in Distinguishing Real and AI-Generated
Faces: An Eye-Tracking Based Study},
author={Jin Huang, Subhadra Gopalakrishnan, Trisha Mittal, Jake Zuena, Jaclyn
Pytlarz},
journal={arXiv preprint arXiv:2409.15498},
year={2024},
archivePrefix={arXiv},
eprint={2409.15498},
primaryClass={cs.CV}
}
|
huang2024analysis
|
arxiv-661052
|
2409.15500
|
Sticky coupling as a control variate for sensitivity analysis
|
<|reference_start|>Sticky coupling as a control variate for sensitivity analysis: We present and analyze a control variate strategy based on couplings to reduce the variance of finite difference estimators of sensitivity coefficients, called transport coefficients in the physics literature. We study the bias and variance of a sticky-coupling and a synchronous-coupling based estimator as the finite difference parameter $\eta$ goes to zero. For diffusions with elliptic additive noise, we show that when the drift is contractive outside a compact the bias of a sticky-coupling based estimator is bounded as $\eta \to 0$ and its variance behaves like $\eta^{-1}$, compared to the standard estimator whose bias and variance behave like $\eta^{-1}$ and $\eta^{-2}$, respectively. Under the stronger assumption that the drift is contractive everywhere, we additionally show that the bias and variance of the synchronous-coupling based estimator are both bounded as $\eta \to 0$. Our hypotheses include overdamped Langevin dynamics with many physically relevant non-convex potentials. We illustrate our theoretical results with numerical examples, including overdamped Langevin dynamics with a highly non-convex Lennard-Jones potential to demonstrate both failure of synchronous coupling and the effectiveness of sticky coupling in the not globally contractive setting.<|reference_end|>
|
arxiv
|
@article{darshan2024sticky,
title={Sticky coupling as a control variate for sensitivity analysis},
author={Shiva Darshan, Andreas Eberle, Gabriel Stoltz},
journal={arXiv preprint arXiv:2409.15500},
year={2024},
archivePrefix={arXiv},
eprint={2409.15500},
primaryClass={math.PR cs.NA math.NA math.ST stat.CO stat.TH}
}
|
darshan2024sticky
|
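The variance-reduction effect described in the abstract above can be seen in one dimension. The following sketch assumes a globally contractive drift, where synchronous coupling (reusing the same Brownian increments for both values of the perturbation $\eta$) applies; the paper's sticky coupling, needed for drifts that are only contractive outside a compact, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_average(eta, noise, dt=0.01, beta=1.0):
    """Time average of f(x) = x along Euler-Maruyama steps of
    dX = -(X + eta) dt + sqrt(2/beta) dW, driven by the given noise."""
    x, total = 0.0, 0.0
    for dw in noise:
        x += -(x + eta) * dt + np.sqrt(2 * dt / beta) * dw
        total += x
    return total / len(noise)

eta, n_steps = 1e-2, 20_000
shared = rng.standard_normal(n_steps)     # same Brownian increments
coupled = (time_average(eta, shared) - time_average(0.0, shared)) / eta
uncoupled = (time_average(eta, rng.standard_normal(n_steps))
             - time_average(0.0, rng.standard_normal(n_steps))) / eta
print(f"coupled estimate:   {coupled:+.2f}  (true sensitivity is -1)")
print(f"uncoupled estimate: {uncoupled:+.2f}  (variance blows up as eta -> 0)")
```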
arxiv-661053
|
2409.15501
|
Adenocarcinoma Segmentation Using Pre-trained Swin-UNet with Parallel Cross-Attention for Multi-Domain Imaging
|
<|reference_start|>Adenocarcinoma Segmentation Using Pre-trained Swin-UNet with Parallel Cross-Attention for Multi-Domain Imaging: Computer-aided pathological analysis has been the gold standard for tumor diagnosis; however, domain shift is a significant problem in histopathology. Variability in anatomical structures, tissue preparation, and imaging processes challenges the robustness of segmentation models. In this work, we present a framework consisting of a pre-trained encoder with a Swin-UNet architecture enhanced by a parallel cross-attention module to tackle the problem of adenocarcinoma segmentation across different organs and scanners, considering both morphological changes and scanner-induced domain variations. Experiments conducted on the Cross-Organ and Cross-Scanner Adenocarcinoma Segmentation challenge dataset showed that our framework achieved segmentation scores of 0.7469 for the cross-organ track and 0.7597 for the cross-scanner track on the final challenge test sets, effectively navigating diverse imaging conditions and improving segmentation accuracy across varying domains.<|reference_end|>
|
arxiv
|
@article{qayyum2024adenocarcinoma,
title={Adenocarcinoma Segmentation Using Pre-trained Swin-UNet with Parallel
Cross-Attention for Multi-Domain Imaging},
author={Abdul Qayyum, Moona Mazher, Imran Razzak, and Steven A Niederer},
journal={arXiv preprint arXiv:2409.15501},
year={2024},
archivePrefix={arXiv},
eprint={2409.15501},
primaryClass={eess.IV cs.CV}
}
|
qayyum2024adenocarcinoma
|
arxiv-661054
|
2409.15503
|
From Text to Treatment Effects: A Meta-Learning Approach to Handling Text-Based Confounding
|
<|reference_start|>From Text to Treatment Effects: A Meta-Learning Approach to Handling Text-Based Confounding: One of the central goals of causal machine learning is the accurate estimation of heterogeneous treatment effects from observational data. In recent years, meta-learning has emerged as a flexible, model-agnostic paradigm for estimating conditional average treatment effects (CATE) using any supervised model. This paper examines the performance of meta-learners when the confounding variables are embedded in text. Through synthetic data experiments, we show that learners using pre-trained text representations of confounders, in addition to tabular background variables, achieve improved CATE estimates compared to those relying solely on the tabular variables, particularly when sufficient data is available. However, due to the entangled nature of the text embeddings, these models do not fully match the performance of meta-learners with perfect confounder knowledge. These findings highlight both the potential and the limitations of pre-trained text representations for causal inference and open up interesting avenues for future research.<|reference_end|>
|
arxiv
|
@article{arno2024from,
title={From Text to Treatment Effects: A Meta-Learning Approach to Handling
Text-Based Confounding},
author={Henri Arno, Paloma Rabaey, Thomas Demeester},
journal={arXiv preprint arXiv:2409.15503},
year={2024},
archivePrefix={arXiv},
eprint={2409.15503},
primaryClass={cs.AI}
}
|
arno2024from
|
arxiv-661055
|
2409.15505
|
Discovering Object Attributes by Prompting Large Language Models with Perception-Action APIs
|
<|reference_start|>Discovering Object Attributes by Prompting Large Language Models with Perception-Action APIs: There has been a lot of interest in grounding natural language to physical entities through visual context. While Vision Language Models (VLMs) can ground linguistic instructions to visual sensory information, they struggle with grounding non-visual attributes, like the weight of an object. Our key insight is that non-visual attribute detection can be effectively achieved by active perception guided by visual reasoning. To this end, we present a perception-action programming API that consists of VLMs and Large Language Models (LLMs) as backbones, together with a set of robot control functions. When prompted with this API and a natural language query, an LLM generates a program to actively identify attributes given an input image. Offline testing on the Odd-One-Out dataset demonstrates that our framework outperforms vanilla VLMs in detecting attributes like relative object location, size, and weight. Online testing in realistic household scenes on AI2-THOR and a real robot demonstration on a DJI RoboMaster EP robot highlight the efficacy of our approach.<|reference_end|>
|
arxiv
|
@article{mavrogiannis2024discovering,
title={Discovering Object Attributes by Prompting Large Language Models with
Perception-Action APIs},
author={Angelos Mavrogiannis, Dehao Yuan, Yiannis Aloimonos},
journal={arXiv preprint arXiv:2409.15505},
year={2024},
archivePrefix={arXiv},
eprint={2409.15505},
primaryClass={cs.RO}
}
|
mavrogiannis2024discovering
|
arxiv-661056
|
2409.15506
|
Spectral Graph Theoretic Methods for Enhancing Network Robustness in Robot Localization
|
<|reference_start|>Spectral Graph Theoretic Methods for Enhancing Network Robustness in Robot Localization: This paper addresses the optimization of edge-weighted networks by maximizing algebraic connectivity to enhance network robustness. Motivated by the need for precise robot position estimation in cooperative localization and pose-graph sparsification in Simultaneous Localization and Mapping (SLAM), the algebraic connectivity maximization problem is formulated as a Mixed Integer Semi-Definite Program (MISDP), which is NP-hard. Leveraging spectral graph theoretic methods, specifically Cheeger's inequality, this work introduces novel "Cheeger cuts" to strengthen and efficiently solve medium-scale MISDPs. Further, a new Mixed Integer Linear Program (MILP) is developed for efficiently computing Cheeger cuts, implemented within an outer-approximation algorithm for solving the MISDP. A greedy k-opt heuristic is also presented, producing high-quality solutions that serve as valid lower bounds for Cheeger cuts. Comprehensive numerical analyses demonstrate the efficacy of strengthened cuts via substantial improvements in run times on synthetic and realistic robot localization datasets.<|reference_end|>
|
arxiv
|
@article{somisetty2024spectral,
title={Spectral Graph Theoretic Methods for Enhancing Network Robustness in
Robot Localization},
author={Neelkamal Somisetty, Harsha Nagarajan, Swaroop Darbha},
journal={arXiv preprint arXiv:2409.15506},
year={2024},
number={LA-UR-24-30124},
archivePrefix={arXiv},
eprint={2409.15506},
primaryClass={eess.SY cs.SY math.OC math.SP}
}
|
somisetty2024spectral
|
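The objective in the abstract above, algebraic connectivity, is the second-smallest eigenvalue of the weighted graph Laplacian. A small numpy sketch of evaluating it on a toy graph (the MISDP machinery, Cheeger cuts, and the k-opt heuristic are beyond this illustration):

```python
import numpy as np

def algebraic_connectivity(weights):
    """Second-smallest eigenvalue of the weighted graph Laplacian,
    the quantity the MISDP above maximizes over edge weights."""
    laplacian = np.diag(weights.sum(axis=1)) - weights
    return np.linalg.eigvalsh(laplacian)[1]   # eigenvalues sorted ascending

# Path 0-1-2-3 with unit weights, then closed into a cycle.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(round(algebraic_connectivity(W), 3))   # 0.586 for the path
W[0, 3] = W[3, 0] = 1.0                      # add the closing edge
print(round(algebraic_connectivity(W), 3))   # 2.0: the cycle is more robust
```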
arxiv-661057
|
2409.15511
|
Bayesian computation with generative diffusion models by Multilevel Monte Carlo
|
<|reference_start|>Bayesian computation with generative diffusion models by Multilevel Monte Carlo: Generative diffusion models have recently emerged as a powerful strategy to perform stochastic sampling in Bayesian inverse problems, delivering remarkably accurate solutions for a wide range of challenging applications. However, diffusion models often require a large number of neural function evaluations per sample in order to deliver accurate posterior samples. As a result, using diffusion models as stochastic samplers for Monte Carlo integration in Bayesian computation can be highly computationally expensive. This cost is especially high in large-scale inverse problems such as computational imaging, which rely on large neural networks that are expensive to evaluate. With Bayesian imaging problems in mind, this paper presents a Multilevel Monte Carlo strategy that significantly reduces the cost of Bayesian computation with diffusion models. This is achieved by exploiting cost-accuracy trade-offs inherent to diffusion models to carefully couple models of different levels of accuracy in a manner that significantly reduces the overall cost of the calculation, without reducing the final accuracy. The effectiveness of the proposed Multilevel Monte Carlo approach is demonstrated with three canonical computational imaging problems, where we observe a $4\times$-to-$8\times$ reduction in computational cost compared to conventional Monte Carlo averaging.<|reference_end|>
|
arxiv
|
@article{haji-ali2024bayesian,
title={Bayesian computation with generative diffusion models by Multilevel
Monte Carlo},
author={Abdul-Lateef Haji-Ali, Marcelo Pereyra, Luke Shaw, Konstantinos
Zygalakis},
journal={arXiv preprint arXiv:2409.15511},
year={2024},
archivePrefix={arXiv},
eprint={2409.15511},
primaryClass={stat.CO cs.CV cs.LG}
}
|
haji-ali2024bayesian
|
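The telescoping structure behind the Multilevel Monte Carlo strategy in the abstract above can be sketched generically. The `level_value` function below is a toy stand-in with a level-dependent bias, not a diffusion model; in the paper, fine and coarse diffusion samplers are coupled so the correction terms are cheap to estimate accurately.

```python
import numpy as np

# Textbook MLMC telescoping estimator on a toy model whose level-l sampler
# returns noise plus (1 - 2**-l), so the true target value is 1.0.
rng = np.random.default_rng(1)

def level_value(level, common):
    """Stand-in for a level-`level` sampler: higher levels have smaller
    bias but would cost more per sample in a real application."""
    return common + (1.0 - 2.0 ** (-level))

def mlmc_estimate(max_level, n_per_level):
    # Coarsest level: cheap, heavily sampled, but biased.
    est = level_value(0, 0.5 * rng.standard_normal(n_per_level[0])).mean()
    for lvl in range(1, max_level + 1):
        # Fine and coarse samples share randomness (the "coupling"), so
        # each correction term has tiny variance and needs few samples.
        common = 0.5 * rng.standard_normal(n_per_level[lvl])
        est += (level_value(lvl, common) - level_value(lvl - 1, common)).mean()
    return est

print(mlmc_estimate(max_level=5, n_per_level=[4096, 1024, 256, 64, 32, 16]))
# ~0.97: the true value 1.0 minus the residual bias 2**-5 of the finest level,
# with most samples drawn at the cheap coarse level
```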
arxiv-661058
|
2409.15512
|
PixelBytes: Catching Unified Embedding for Multimodal Generation
|
<|reference_start|>PixelBytes: Catching Unified Embedding for Multimodal Generation: This report introduces PixelBytes Embedding, a novel approach for unified multimodal representation learning. Our method captures diverse inputs in a single, cohesive representation, enabling emergent properties for multimodal sequence generation, particularly for text and pixelated images. Inspired by state-of-the-art sequence models such as Image Transformers, PixelCNN, and Mamba-Bytes, PixelBytes aims to address the challenges of integrating different data types. We explore various model architectures, including Recurrent Neural Networks (RNNs), State Space Models (SSMs), and Attention-based models, focusing on bidirectional processing and our innovative PxBy embedding technique. Our experiments, conducted on a specialized PixelBytes Pok{\'e}mon dataset, demonstrate that bidirectional sequence models with PxBy embedding and convolutional layers can generate coherent multimodal sequences. This work contributes to the advancement of integrated AI models capable of understanding and generating multimodal data in a unified manner.<|reference_end|>
|
arxiv
|
@article{furfaro2024pixelbytes:,
title={PixelBytes: Catching Unified Embedding for Multimodal Generation},
author={Fabien Furfaro},
journal={arXiv preprint arXiv:2409.15512},
year={2024},
archivePrefix={arXiv},
eprint={2409.15512},
primaryClass={cs.CV cs.AI}
}
|
furfaro2024pixelbytes:
|
arxiv-661059
|
2409.15514
|
SpaGBOL: Spatial-Graph-Based Orientated Localisation
|
<|reference_start|>SpaGBOL: Spatial-Graph-Based Orientated Localisation: Cross-View Geo-Localisation within urban regions is challenging in part due to the lack of geo-spatial structuring within current datasets and techniques. We propose utilising graph representations to model sequences of local observations and the connectivity of the target location. Modelling as a graph enables generating previously unseen sequences by sampling with new parameter configurations. To leverage this newly available information, we propose a GNN-based architecture, producing spatially strong embeddings and improving discriminability over isolated image embeddings. We outline SpaGBOL, introducing three novel contributions. 1) The first graph-structured dataset for Cross-View Geo-Localisation, containing multiple streetview images per node to improve generalisation. 2) Introducing GNNs to the problem, we develop the first system that exploits the correlation between node proximity and feature similarity. 3) Leveraging the unique properties of the graph representation - we demonstrate a novel retrieval filtering approach based on neighbourhood bearings. SpaGBOL achieves state-of-the-art accuracies on the unseen test graph - with relative Top-1 retrieval improvements on previous techniques of 11%, and 50% when filtering with Bearing Vector Matching on the SpaGBOL dataset.<|reference_end|>
|
arxiv
|
@article{shore2024spagbol:,
title={SpaGBOL: Spatial-Graph-Based Orientated Localisation},
author={Tavis Shore, Oscar Mendez, Simon Hadfield},
journal={arXiv preprint arXiv:2409.15514},
year={2024},
archivePrefix={arXiv},
eprint={2409.15514},
primaryClass={cs.CV}
}
|
shore2024spagbol:
|
arxiv-661060
|
2409.15515
|
Learning When to Retrieve, What to Rewrite, and How to Respond in Conversational QA
|
<|reference_start|>Learning When to Retrieve, What to Rewrite, and How to Respond in Conversational QA: Augmenting Large Language Models (LLMs) with information retrieval capabilities (i.e., Retrieval-Augmented Generation (RAG)) has proven beneficial for knowledge-intensive tasks. However, understanding users' contextual search intent when generating responses is an understudied topic for conversational question answering (QA). This conversational extension leads to additional concerns when compared to single-turn QA as it is more challenging for systems to comprehend conversational context and manage retrieved passages over multiple turns. In this work, we propose a method for enabling LLMs to decide when to retrieve in RAG settings given a conversational context. When retrieval is deemed necessary, the LLM then rewrites the conversation for passage retrieval and judges the relevance of returned passages before response generation. Operationally, we build on the single-turn SELF-RAG framework (Asai et al., 2023) and propose SELF-multi-RAG for conversational settings. SELF-multi-RAG demonstrates improved capabilities over single-turn variants with respect to retrieving relevant passages (by using summarized conversational context) and assessing the quality of generated responses. Experiments on three conversational QA datasets validate the enhanced response generation capabilities of SELF-multi-RAG, with improvements of ~13% measured by human annotation.<|reference_end|>
|
arxiv
|
@article{roy2024learning,
title={Learning When to Retrieve, What to Rewrite, and How to Respond in
Conversational QA},
author={Nirmal Roy, Leonardo F. R. Ribeiro, Rexhina Blloshmi, Kevin Small},
journal={arXiv preprint arXiv:2409.15515},
year={2024},
archivePrefix={arXiv},
eprint={2409.15515},
primaryClass={cs.CL cs.AI}
}
|
roy2024learning
|
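The adaptive loop the abstract describes (decide whether to retrieve, rewrite the conversation into a standalone query, judge passages, answer) is a control-flow pattern. Below is a runnable sketch in which the three `llm_*` helpers are hypothetical stand-ins for LLM calls and the retriever is a toy word-overlap ranker; none of these names come from the paper, only the control flow mirrors it.

```python
def llm_needs_retrieval(history, question):    # hypothetical LLM judgment
    return any(w in question.lower() for w in ("when", "who", "where"))

def llm_rewrite_query(history, question):      # hypothetical rewriter
    return " ".join(history[-2:] + [question]) # fold context into the query

def llm_answer(question, passages):            # hypothetical generator
    return f"answer({question!r}, grounded in {len(passages)} passages)"

def retrieve(query, corpus, top_k=2):
    # Toy lexical retriever: rank passages by word overlap with the query.
    words = set(query.lower().split())
    return sorted(corpus, key=lambda p: len(words & set(p.lower().split())),
                  reverse=True)[:top_k]

def conversational_rag_turn(history, question, corpus):
    if not llm_needs_retrieval(history, question):
        return llm_answer(question, [])        # answer directly, no retrieval
    query = llm_rewrite_query(history, question)
    passages = retrieve(query, corpus)         # the paper additionally has the
    return llm_answer(question, passages)      # LLM judge passage relevance

corpus = ["Ada Lovelace wrote the first program.",
          "Alan Turing was born in 1912."]
print(conversational_rag_turn(["Tell me about Turing."],
                              "When was he born?", corpus))
```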
arxiv-661061
|
2409.15517
|
MATCH POLICY: A Simple Pipeline from Point Cloud Registration to Manipulation Policies
|
<|reference_start|>MATCH POLICY: A Simple Pipeline from Point Cloud Registration to Manipulation Policies: Many manipulation tasks require the robot to rearrange objects relative to one another. Such tasks can be described as a sequence of relative poses between parts of a set of rigid bodies. In this work, we propose MATCH POLICY, a simple but novel pipeline for solving high-precision pick and place tasks. Instead of predicting actions directly, our method registers the pick and place targets to the stored demonstrations. This transfers action inference into a point cloud registration task and enables us to realize nontrivial manipulation policies without any training. MATCH POLICY is designed to solve high-precision tasks with a key-frame setting. By leveraging the geometric interaction and the symmetries of the task, it achieves extremely high sample efficiency and generalizability to unseen configurations. We demonstrate its state-of-the-art performance across various tasks on RLBench benchmark compared with several strong baselines and test it on a real robot with six tasks.<|reference_end|>
|
arxiv
|
@article{huang2024match,
title={MATCH POLICY: A Simple Pipeline from Point Cloud Registration to
Manipulation Policies},
author={Haojie Huang, Haotian Liu, Dian Wang, Robin Walters, Robert Platt},
journal={arXiv preprint arXiv:2409.15517},
year={2024},
archivePrefix={arXiv},
eprint={2409.15517},
primaryClass={cs.RO cs.CV}
}
|
huang2024match
|
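MATCH POLICY transfers actions by registering observed point clouds to stored demonstrations. The registration primitive itself can be illustrated with the Kabsch algorithm; this sketch assumes known point correspondences, whereas the real pipeline must also handle correspondence and task symmetries.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t with R @ P + t ~ Q (3xN each),
    via SVD of the cross-covariance of the centered clouds."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

demo = np.random.default_rng(0).standard_normal((3, 50))   # stored cloud
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
observed = R_true @ demo + np.array([[0.1], [0.2], [0.0]])

R, t = kabsch(demo, observed)
print(np.allclose(R, R_true, atol=1e-6))  # True: the relative pose is
# recovered, so a demonstrated grasp pose can be mapped onto the observation
```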
arxiv-661062
|
2409.15518
|
Eagle: Efficient Training-Free Router for Multi-LLM Inference
|
<|reference_start|>Eagle: Efficient Training-Free Router for Multi-LLM Inference: The proliferation of Large Language Models (LLMs) with varying capabilities and costs has created a need for efficient model selection in AI systems. LLM routers address this need by dynamically choosing the most suitable model for a given query based on task requirements and budget constraints. However, existing routers face challenges in scalability and real-time adaptation, particularly in high-volume online environments. We present Eagle, a novel LLM routing approach that combines global and local ELO ranking modules to overcome these limitations. By evaluating both general and specialized LLM abilities, Eagle provides a scalable, training-free solution that enhances model selection quality while reducing computational overhead. Our experiments across multiple datasets show Eagle consistently outperforms baseline methods, with improvements of up to 23.52 percent in Area Under Curve (AUC) scores. Moreover, Eagle demonstrates remarkable efficiency, requiring only 1/20 of baseline methods' time for initialization and 100 to 200 times faster incremental updates in online scenarios, making it well-suited for dynamic, high-volume online serving environments.<|reference_end|>
|
arxiv
|
@article{zhao2024eagle:,
title={Eagle: Efficient Training-Free Router for Multi-LLM Inference},
author={Zesen Zhao, Shuowei Jin, Z. Morley Mao},
journal={arXiv preprint arXiv:2409.15518},
year={2024},
archivePrefix={arXiv},
eprint={2409.15518},
primaryClass={cs.LG}
}
|
zhao2024eagle:
|
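The training-free ELO ranking at the heart of Eagle can be sketched in a few lines. This toy keeps a single global rating table and a budget-aware argmax; the actual system combines global and local (per-category) ELO modules, and all names below are invented for illustration.

```python
def elo_update(r_winner, r_loser, k=32.0):
    """Standard ELO update from one pairwise outcome; no training needed."""
    expected = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    return r_winner + k * (1.0 - expected), r_loser - k * (1.0 - expected)

ratings = {"model-a": 1500.0, "model-b": 1500.0, "model-c": 1500.0}

# Stream of observed pairwise judgments: (winner, loser) on past queries.
for winner, loser in [("model-a", "model-b"), ("model-a", "model-c"),
                      ("model-c", "model-b"), ("model-a", "model-c")]:
    ratings[winner], ratings[loser] = elo_update(ratings[winner],
                                                 ratings[loser])

def route(budget_ok):
    """Pick the highest-rated model among those within budget."""
    return max((m for m in ratings if budget_ok(m)), key=ratings.get)

print(ratings)
print(route(lambda m: m != "model-a"))  # best model when model-a is too costly
```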
arxiv-661063
|
2409.15520
|
Enabling Resource-Efficient On-Device Fine-Tuning of LLMs Using Only Inference Engines
|
<|reference_start|>Enabling Resource-Efficient On-Device Fine-Tuning of LLMs Using Only Inference Engines: Large Language Models (LLMs) have demonstrated exceptional performance in automating various tasks, such as text generation and summarization. Currently, LLMs are trained and fine-tuned on large cloud servers. Deploying and fine-tuning these models on resource-constrained edge devices remains a significant challenge due to their substantial memory and computational requirements. This paper introduces a resource-efficient zeroth-order optimization approach that lowers the barriers for fine-tuning LLMs in such constrained environments. Our method features a parallelized randomized gradient estimation (P-RGE) technique, which performs gradient estimation with high parallel efficiency. P-RGE leverages outer-loop and inner-loop parallelization to perform multiple function queries and forward passes in parallel, reducing the wall-clock end-to-end training time. By integrating this technique with parameter-efficient fine-tuning methods (e.g., LoRA) and on-device inference engines (e.g., ExecuTorch), we demonstrate efficient fine-tuning of LLMs on both server-side and edge devices. Experiments show that P-RGE achieves significant runtime speedups and memory savings while maintaining fine-tuning accuracy, which paves the way for more practical deployment of LLMs in real-time, on-device applications.<|reference_end|>
|
arxiv
|
@article{gao2024enabling,
title={Enabling Resource-Efficient On-Device Fine-Tuning of LLMs Using Only Inference
Engines},
author={Lei Gao, Amir Ziashahabi, Yue Niu, Salman Avestimehr, Murali Annavaram},
journal={arXiv preprint arXiv:2409.15520},
year={2024},
archivePrefix={arXiv},
eprint={2409.15520},
primaryClass={cs.LG cs.DC}
}
|
gao2024enabling
|
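The zeroth-order primitive underlying P-RGE is two-point randomized gradient estimation: gradients come from forward passes alone, so no backpropagation state is stored. Below is a minimal sketch on a quadratic toy loss; the paper's contribution is parallelizing the queries (outer and inner loop) and integrating with LoRA and ExecuTorch, none of which appears here.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):                       # stand-in for a model forward pass
    return float(np.sum((w - 1.0) ** 2))

def rge_step(w, lr=0.05, eps=1e-3, n_queries=8):
    """One zeroth-order step: average two-point directional estimates.
    The n_queries function evaluations are what P-RGE runs in parallel."""
    grad = np.zeros_like(w)
    for _ in range(n_queries):
        z = rng.standard_normal(w.shape)
        grad += (loss(w + eps * z) - loss(w - eps * z)) / (2 * eps) * z
    return w - lr * grad / n_queries

w = np.zeros(4)
for _ in range(200):
    w = rge_step(w)
print(np.round(w, 2))              # approaches the minimizer [1, 1, 1, 1]
```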
arxiv-661064
|
2409.15521
|
CANDERE-COACH: Reinforcement Learning from Noisy Feedback
|
<|reference_start|>CANDERE-COACH: Reinforcement Learning from Noisy Feedback: In recent times, reinforcement learning (RL) has been widely applied to many challenging tasks. However, in order to perform well, it requires access to a good reward function, which is often sparse or manually engineered with scope for error. Introducing human prior knowledge is often seen as a possible solution to the above-mentioned problem, through approaches such as imitation learning, learning from preferences, and inverse reinforcement learning. Learning from feedback is another framework that enables an RL agent to learn from binary evaluative signals describing the teacher's (positive or negative) evaluation of the agent's action. However, these methods often make the restrictive assumption that evaluative teacher feedback is perfect. In practice, such feedback can be noisy due to limited teacher expertise or other exacerbating factors like cognitive load, availability, distraction, etc. In this work, we propose the CANDERE-COACH algorithm, which is capable of learning from noisy feedback by a non-optimal teacher. We propose a noise-filtering mechanism to de-noise online feedback data, thereby enabling the RL agent to successfully learn with up to 40% of the teacher feedback being incorrect. Experiments on three common domains demonstrate the effectiveness of the proposed approach.<|reference_end|>
|
arxiv
|
@article{li2024candere-coach:,
title={CANDERE-COACH: Reinforcement Learning from Noisy Feedback},
author={Yuxuan Li, Srijita Das, Matthew E. Taylor},
journal={arXiv preprint arXiv:2409.15521},
year={2024},
archivePrefix={arXiv},
eprint={2409.15521},
primaryClass={cs.LG cs.AI}
}
|
li2024candere-coach:
|
arxiv-661065
|
2409.15523
|
SEAL: Suite for Evaluating API-use of LLMs
|
<|reference_start|>SEAL: Suite for Evaluating API-use of LLMs: Large language models (LLMs) have limitations in handling tasks that require real-time access to external APIs. While several benchmarks like ToolBench and APIGen have been developed to assess LLMs' API-use capabilities, they often suffer from issues such as lack of generalizability, limited multi-step reasoning coverage, and instability due to real-time API fluctuations. In this paper, we introduce SEAL, an end-to-end testbed designed to evaluate LLMs in real-world API usage. SEAL standardizes existing benchmarks, integrates an agent system for testing API retrieval and planning, and addresses the instability of real-time APIs by introducing a GPT-4-powered API simulator with caching for deterministic evaluations. Our testbed provides a comprehensive evaluation pipeline that covers API retrieval, API calls, and final responses, offering a reliable framework for structured performance comparison in diverse real-world scenarios. SEAL is publicly available, with ongoing updates for new benchmarks.<|reference_end|>
|
arxiv
|
@article{kim2024seal:,
title={SEAL: Suite for Evaluating API-use of LLMs},
author={Woojeong Kim, Ashish Jagmohan, Aditya Vempaty},
journal={arXiv preprint arXiv:2409.15523},
year={2024},
archivePrefix={arXiv},
eprint={2409.15523},
primaryClass={cs.AI}
}
|
kim2024seal:
|
arxiv-661066
|
2409.15525
|
Speech2rtMRI: Speech-Guided Diffusion Model for Real-time MRI Video of the Vocal Tract during Speech
|
<|reference_start|>Speech2rtMRI: Speech-Guided Diffusion Model for Real-time MRI Video of the Vocal Tract during Speech: Understanding speech production both visually and kinematically can inform second language learning system designs, as well as the creation of speaking characters in video games and animations. In this work, we introduce a data-driven method to visually represent articulator motion in Magnetic Resonance Imaging (MRI) videos of the human vocal tract during speech based on arbitrary audio or speech input. We leverage large pre-trained speech models, which are embedded with prior knowledge, to generalize the visual domain to unseen data using a speech-to-video diffusion model. Our findings demonstrate that the visual generation significantly benefits from the pre-trained speech representations. We also observed that evaluating phonemes in isolation is challenging but becomes more straightforward when assessed within the context of spoken words. Limitations of the current results include the presence of unsmooth tongue motion and video distortion when the tongue contacts the palate.<|reference_end|>
|
arxiv
|
@article{nguyen2024speech2rtmri:,
title={Speech2rtMRI: Speech-Guided Diffusion Model for Real-time MRI Video of
the Vocal Tract during Speech},
author={Hong Nguyen, Sean Foley, Kevin Huang, Xuan Shi, Tiantian Feng,
Shrikanth Narayanan},
journal={arXiv preprint arXiv:2409.15525},
year={2024},
archivePrefix={arXiv},
eprint={2409.15525},
primaryClass={eess.IV cs.CV cs.SD eess.AS}
}
|
nguyen2024speech2rtmri:
|
arxiv-661067
|
2409.15528
|
Learning Diverse Robot Striking Motions with Diffusion Models and Kinematically Constrained Gradient Guidance
|
<|reference_start|>Learning Diverse Robot Striking Motions with Diffusion Models and Kinematically Constrained Gradient Guidance: Advances in robot learning have enabled robots to generate skills for a variety of tasks. Yet, robot learning is typically sample inefficient, struggles to learn from data sources exhibiting varied behaviors, and does not naturally incorporate constraints. These properties are critical for fast, agile tasks such as playing table tennis. Modern techniques for learning from demonstration improve sample efficiency and scale to diverse data, but are rarely evaluated on agile tasks. In the case of reinforcement learning, achieving good performance requires training on high-fidelity simulators. To overcome these limitations, we develop a novel diffusion modeling approach that is offline, constraint-guided, and expressive of diverse agile behaviors. The key to our approach is a kinematic constraint gradient guidance (KCGG) technique that computes gradients through both the forward kinematics of the robot arm and the diffusion model to direct the sampling process. KCGG minimizes the cost of violating constraints while simultaneously keeping the sampled trajectory in-distribution of the training data. We demonstrate the effectiveness of our approach for time-critical robotic tasks by evaluating KCGG in two challenging domains: simulated air hockey and real table tennis. In simulated air hockey, we achieved a 25.4% increase in block rate, while in table tennis, we saw a 17.3% increase in success rate compared to imitation learning baselines.<|reference_end|>
|
arxiv
|
@article{lee2024learning,
title={Learning Diverse Robot Striking Motions with Diffusion Models and
Kinematically Constrained Gradient Guidance},
author={Kin Man Lee, Sean Ye, Qingyu Xiao, Zixuan Wu, Zulfiqar Zaidi, David B.
D'Ambrosio, Pannag R. Sanketi, Matthew Gombolay},
journal={arXiv preprint arXiv:2409.15528},
year={2024},
archivePrefix={arXiv},
eprint={2409.15528},
primaryClass={cs.RO cs.LG}
}
|
lee2024learning
|
arxiv-661068
|
2409.15529
|
VaLID: Verification as Late Integration of Detections for LiDAR-Camera Fusion
|
<|reference_start|>VaLID: Verification as Late Integration of Detections for LiDAR-Camera Fusion: Vehicle object detection is possible using both LiDAR and camera data. Methods using LiDAR generally outperform those using cameras only. The highest accuracy methods utilize both of these modalities through data fusion. In our study, we propose a model-independent late fusion method, VaLID, which validates whether each predicted bounding box is acceptable or not. Our method verifies the higher-performing, yet overly optimistic LiDAR model detections using camera detections that are obtained from either specially trained, general, or open-vocabulary models. VaLID uses a simple multi-layer perceptron trained with a high recall bias to reduce the false predictions made by the LiDAR detector, while still preserving the true ones. Evaluating with multiple combinations of LiDAR and camera detectors on the KITTI dataset, we reduce false positives by an average of 63.9%, thus outperforming the individual detectors on 2D average precision (2DAP). Our approach is model-agnostic and demonstrates state-of-the-art competitive performance even when using generic camera detectors that were not trained specifically for this dataset.<|reference_end|>
|
arxiv
|
@article{vats2024valid:,
title={VaLID: Verification as Late Integration of Detections for LiDAR-Camera
Fusion},
author={Vanshika Vats, Marzia Binta Nizam, James Davis},
journal={arXiv preprint arXiv:2409.15529},
year={2024},
archivePrefix={arXiv},
eprint={2409.15529},
primaryClass={cs.CV}
}
|
vats2024valid:
|
arxiv-661069
|
2409.15537
|
Quasi-Monte Carlo integration for feedback control under uncertainty
|
<|reference_start|>Quasi-Monte Carlo integration for feedback control under uncertainty: A control in feedback form is derived for linear quadratic, time-invariant optimal control problems subject to parabolic partial differential equations with coefficients depending on a countably infinite number of uncertain parameters. It is shown that the Riccati-based feedback operator depends analytically on the parameters provided that the system operator depends analytically on the parameters, as is the case, for instance, in diffusion problems when the diffusion coefficient is parameterized by a Karhunen--Lo\`eve expansion. These novel parametric regularity results allow the application of quasi-Monte Carlo (QMC) methods to efficiently compute an a-priori chosen feedback law based on the expected value. Moreover, under moderate assumptions on the input random field, QMC methods achieve superior error rates compared to ordinary Monte Carlo methods, independently of the stochastic dimension of the problem. Indeed, our paper for the first time studies Banach-space-valued integration by higher-order QMC methods.<|reference_end|>
|
arxiv
|
@article{guth2024quasi-monte,
title={Quasi-Monte Carlo integration for feedback control under uncertainty},
author={Philipp A. Guth, Peter Kritzer, Karl Kunisch},
journal={arXiv preprint arXiv:2409.15537},
year={2024},
archivePrefix={arXiv},
eprint={2409.15537},
primaryClass={math.OC cs.NA math.NA}
}
|
guth2024quasi-monte
|
arxiv-661070
|
2409.15542
|
Ditto: Elastic Confidential VMs with Secure and Dynamic CPU Scaling
|
<|reference_start|>Ditto: Elastic Confidential VMs with Secure and Dynamic CPU Scaling: Confidential Virtual Machines (CVMs) are a type of VM-based Trusted Execution Environments (TEEs) designed to enhance the security of cloud-based VMs, safeguarding them even from malicious hypervisors. Although CVMs have been widely adopted by major cloud service providers, current CVM designs face significant challenges in runtime resource management due to their fixed capacities and lack of transparency. These limitations hamper efficient cloud resource management, leading to increased operational costs and reduced agility in responding to fluctuating workloads. This paper introduces a dynamic CPU resource management approach, featuring the novel concept of "Elastic CVM." This approach allows for hypervisor-assisted runtime adjustment of CPU resources using a specialized vCPU type, termed Worker vCPU. This new approach enhances CPU resource adaptability and operational efficiency without compromising security. Additionally, we introduce a Worker vCPU Abstraction Layer to simplify Worker vCPU deployment and management. To demonstrate the effectiveness of our approach, we have designed and implemented a serverless computing prototype platform, called Ditto. We show that Ditto significantly improves performance and efficiency through finer-grain resource management. The concept of "Elastic CVM" and the Worker vCPU design not only optimize cloud resource utilization but also pave the way for more flexible and cost-effective confidential computing environments.<|reference_end|>
|
arxiv
|
@article{zhao2024ditto:,
title={Ditto: Elastic Confidential VMs with Secure and Dynamic CPU Scaling},
author={Shixuan Zhao, Mengyuan Li, Mengjia Yan, Zhiqiang Lin},
journal={arXiv preprint arXiv:2409.15542},
year={2024},
archivePrefix={arXiv},
eprint={2409.15542},
primaryClass={cs.CR}
}
|
zhao2024ditto:
|
arxiv-661071
|
2409.15543
|
Investigations of effect of temperature and strain dependent material properties on thermoelastic damping -- A generalized 3-D finite element formulation
|
<|reference_start|>Investigations of effect of temperature and strain dependent material properties on thermoelastic damping -- A generalized 3-D finite element formulation: A comprehensive 3-D finite element formulation for the coupled thermoelastic system is proposed based on the Total Lagrangian framework to study the thermoelastic damping (TED) in small-scale structures. The proposed formulation takes into account geometric nonlinearity because of large deformation and material nonlinearity where material parameters are functions of the temperature and strain fields. Using the proposed finite element formulation, the TED quality factor is obtained for a 1-D rod undergoing longitudinal vibrations using eigenvalue analysis. We first validate the accuracy of the finite element implementation against previously known theoretical and numerical results. Subsequently, we demonstrate the utility of the proposed numerical framework to study the effect of geometric nonlinearity and temperature- and strain-dependent material nonlinearity on thermoelastic damping. In addition, the effect of internal/external heating and different thermal boundary conditions on TED is discussed.<|reference_end|>
|
arxiv
|
@article{dixit2024investigations,
title={Investigations of effect of temperature and strain dependent material
properties on thermoelastic damping -- A generalized 3-D finite element
formulation},
author={Saurabh Dixit},
journal={arXiv preprint arXiv:2409.15543},
year={2024},
archivePrefix={arXiv},
eprint={2409.15543},
primaryClass={cs.CE}
}
|
dixit2024investigations
|
arxiv-661072
|
2409.15544
|
A positive meshless finite difference scheme for scalar conservation laws with adaptive artificial viscosity driven by fault detection
|
<|reference_start|>A positive meshless finite difference scheme for scalar conservation laws with adaptive artificial viscosity driven by fault detection: We present a meshless finite difference method for multivariate scalar conservation laws that generates positive schemes satisfying a local maximum principle on irregular nodes and relies on artificial viscosity for shock capturing. Coupling two different numerical differentiation formulas and adaptive selection of the sets of influence allows a local CFL condition to be met without any a priori time step restriction. The artificial viscosity term is chosen adaptively by applying it only in the vicinity of the sharp features of the solution identified by an algorithm for fault detection on scattered data. Numerical tests demonstrate robust performance of the method on irregular nodes and the advantages of adaptive artificial viscosity. The accuracy of the obtained solutions is comparable to that of standard monotone methods available only on Cartesian grids.<|reference_end|>
|
arxiv
|
@article{bracco2024a,
title={A positive meshless finite difference scheme for scalar conservation
laws with adaptive artificial viscosity driven by fault detection},
author={Cesare Bracco, Oleg Davydov, Carlotta Giannelli, Alessandra Sestini},
journal={arXiv preprint arXiv:2409.15544},
year={2024},
archivePrefix={arXiv},
eprint={2409.15544},
primaryClass={math.NA cs.NA}
}
|
bracco2024a
|
arxiv-661073
|
2409.15545
|
Rethinking Emotion Bias in Music via Frechet Audio Distance
|
<|reference_start|>Rethinking Emotion Bias in Music via Frechet Audio Distance: The subjective nature of music emotion introduces inherent bias in both recognition and generation, especially when relying on a single audio encoder, emotion classifier, or evaluation metric. In this work, we conduct a study on Music Emotion Recognition (MER) and Emotional Music Generation (EMG), employing diverse audio encoders alongside the Frechet Audio Distance (FAD), a reference-free evaluation metric. Our study begins with a benchmark evaluation of MER, highlighting the limitations associated with using a single audio encoder and the disparities observed across different measurements. We then propose assessing MER performance using FAD from multiple encoders to provide a more objective measure of music emotion. Furthermore, we introduce an enhanced EMG approach designed to improve both the variation and prominence of generated music emotion, thus enhancing realism. Additionally, we investigate the realism disparities between the emotions conveyed in real and synthetic music, comparing our EMG model against two baseline models. Experimental results underscore the emotion bias problem in both MER and EMG and demonstrate the potential of using FAD and diverse audio encoders to evaluate music emotion objectively.<|reference_end|>
|
arxiv
|
@article{li2024rethinking,
title={Rethinking Emotion Bias in Music via Frechet Audio Distance},
author={Yuanchao Li, Azalea Gui, Dimitra Emmanouilidou, Hannes Gamper},
journal={arXiv preprint arXiv:2409.15545},
year={2024},
archivePrefix={arXiv},
eprint={2409.15545},
primaryClass={eess.AS cs.CL cs.MM cs.SD}
}
|
li2024rethinking
|
arxiv-661074
|
2409.15546
|
A Novel Framework for the Automated Characterization of Gram-Stained Blood Culture Slides Using a Large-Scale Vision Transformer
|
<|reference_start|>A Novel Framework for the Automated Characterization of Gram-Stained Blood Culture Slides Using a Large-Scale Vision Transformer: This study introduces a new framework for the artificial intelligence-assisted characterization of Gram-stained whole-slide images (WSIs). As a test for the diagnosis of bloodstream infections, Gram stains provide critical early data to inform patient treatment. Rapid and reliable analysis of Gram stains has been shown to be positively associated with better clinical outcomes, underscoring the need for improved tools to automate Gram stain analysis. In this work, we developed a novel transformer-based model for Gram-stained WSI classification, which is more scalable to large datasets than previous convolutional neural network (CNN) -based methods as it does not require patch-level manual annotations. We also introduce a large Gram stain dataset from Dartmouth-Hitchcock Medical Center (Lebanon, New Hampshire, USA) to evaluate our model, exploring the classification of five major categories of Gram-stained WSIs: Gram-positive cocci in clusters, Gram-positive cocci in pairs/chains, Gram-positive rods, Gram-negative rods, and slides with no bacteria. Our model achieves a classification accuracy of 0.858 (95% CI: 0.805, 0.905) and an AUC of 0.952 (95% CI: 0.922, 0.976) using five-fold nested cross-validation on our 475-slide dataset, demonstrating the potential of large-scale transformer models for Gram stain classification. We further demonstrate the generalizability of our trained model, which achieves strong performance on external datasets without additional fine-tuning.<|reference_end|>
|
arxiv
|
@article{mcmahon2024a,
title={A Novel Framework for the Automated Characterization of Gram-Stained
Blood Culture Slides Using a Large-Scale Vision Transformer},
author={Jack McMahon, Naofumi Tomita, Elizabeth S. Tatishev, Adrienne A.
Workman, Cristina R Costales, Niaz Banaei, Isabella W. Martin, Saeed
Hassanpour},
journal={arXiv preprint arXiv:2409.15546},
year={2024},
archivePrefix={arXiv},
eprint={2409.15546},
primaryClass={eess.IV cs.CV}
}
|
mcmahon2024a
|
arxiv-661075
|
2409.15548
|
Nothing Conformal about Adaptive Conformal Inference
|
<|reference_start|>Nothing Conformal about Adaptive Conformal Inference: Conformal prediction is a widely used framework for distribution-free uncertainty quantification, which generates valid prediction sets at a user-defined significance level. However, this framework relies on the assumption that the data-generating distribution is exchangeable, a condition that is frequently violated in time-series and other structured data. In such cases, the validity guarantees of conformal prediction break down. Adaptive conformal inference (ACI) has been proposed as a solution for non-exchangeable data by dynamically adjusting the significance level to retain at least finite-sample guarantees on the marginal coverage error rate. This paper demonstrates that, despite its name, ACI does not strictly require the use of conformal predictors. Instead, it can operate effectively with the more general concept of a confidence predictor, which is often computationally simpler. The key requirement is that larger significance levels correspond to smaller prediction sets, a property known as nested prediction sets. Through experiments on synthetic and real-world data, we investigate whether ACI with conformal predictors offers advantages over confidence predictors. Our results indicate that confidence predictors can perform just as well as, and in some cases better than, conformal predictors, although further empirical studies are needed to determine when one approach may be preferable.<|reference_end|>
|
arxiv
|
@article{szabadváry2024beyond,
title={Beyond Conformal Predictors: Adaptive Conformal Inference with
Confidence Predictors},
author={Johan Hallberg Szabadváry},
journal={arXiv preprint arXiv:2409.15548},
year={2024},
archivePrefix={arXiv},
eprint={2409.15548},
primaryClass={stat.ML cs.LG}
}
|
szabadváry2024beyond
|
arxiv-661076
|
2409.15550
|
Talk, Listen, Connect: Navigating Empathy in Human-AI Interactions
|
<|reference_start|>Talk, Listen, Connect: Navigating Empathy in Human-AI Interactions: Social interactions promote well-being, yet challenges like geographic distance and mental health conditions can limit in-person engagement. Advances in AI agents are transforming communication, particularly in mental health, where AI chatbots provide accessible, non-judgmental support. However, a key challenge is how effectively these systems can express empathy, which is crucial in human-centered design. Current research highlights a gap in understanding how AI can authentically convey empathy, particularly as issues like anxiety, depression, and loneliness increase. Our research focuses on this gap by comparing empathy expression in human-human versus human-AI interactions. Using personal narratives and statistical analysis, we examine empathy levels elicited by humans and AI, including GPT-4o and fine-tuned versions of the model. This work aims to enhance the authenticity of AI-driven empathy, contributing to the future design of more reliable and effective mental health support systems that foster meaningful social interactions.<|reference_end|>
|
arxiv
|
@article{roshanaei2024talk,
title={Talk, Listen, Connect: Navigating Empathy in Human-AI Interactions},
author={Mahnaz Roshanaei, Rezvaneh Rezapour, Magy Seif El-Nasr},
journal={arXiv preprint arXiv:2409.15550},
year={2024},
archivePrefix={arXiv},
eprint={2409.15550},
primaryClass={cs.HC}
}
|
roshanaei2024talk
|
arxiv-661077
|
2409.15551
|
Revise, Reason, and Recognize: LLM-Based Emotion Recognition via Emotion-Specific Prompts and ASR Error Correction
|
<|reference_start|>Revise, Reason, and Recognize: LLM-Based Emotion Recognition via Emotion-Specific Prompts and ASR Error Correction: Annotating and recognizing speech emotion using prompt engineering has recently emerged with the advancement of Large Language Models (LLMs), yet its efficacy and reliability remain questionable. In this paper, we conduct a systematic study on this topic, beginning with the proposal of novel prompts that incorporate emotion-specific knowledge from acoustics, linguistics, and psychology. Subsequently, we examine the effectiveness of LLM-based prompting on Automatic Speech Recognition (ASR) transcription, contrasting it with ground-truth transcription. Furthermore, we propose a Revise-Reason-Recognize prompting pipeline for robust LLM-based emotion recognition from spoken language with ASR errors. Additionally, experiments on context-aware learning, in-context learning, and instruction tuning are performed to examine the usefulness of LLM training schemes in this direction. Finally, we investigate the sensitivity of LLMs to minor prompt variations. Experimental results demonstrate the efficacy of the emotion-specific prompts, ASR error correction, and LLM training schemes for LLM-based emotion recognition. Our study aims to refine the use of LLMs in emotion recognition and related domains.<|reference_end|>
|
arxiv
|
@article{li2024revise,
title={Revise, Reason, and Recognize: LLM-Based Emotion Recognition via
Emotion-Specific Prompts and ASR Error Correction},
author={Yuanchao Li, Yuan Gong, Chao-Han Huck Yang, Peter Bell, Catherine Lai},
journal={arXiv preprint arXiv:2409.15551},
year={2024},
archivePrefix={arXiv},
eprint={2409.15551},
primaryClass={eess.AS cs.AI cs.CL cs.MM cs.SD}
}
|
li2024revise
|
arxiv-661078
|
2409.15552
|
Time-Adaptive PIROCK Method with Error Control for Multi-Fluid and Single-Fluid MHD Systems
|
<|reference_start|>Time-Adaptive PIROCK Method with Error Control for Multi-Fluid and Single-Fluid MHD Systems: The solar atmosphere is a complex environment with diverse species and varying ionization states, especially in the chromosphere, where significant ionization variations occur. This region transitions from highly collisional to weakly collisional states, leading to complex plasma state transitions influenced by magnetic strengths and collisional properties. These processes introduce numerical stiffness in multi-fluid models, imposing severe timestep restrictions on standard time integration methods. New numerical methods are essential to address these computational challenges, effectively managing the diverse timescales in multi-fluid and multi-physics models. The widely used time operator splitting technique offers a straightforward approach but requires careful timestep management to avoid stability issues and errors. Despite some studies on splitting errors, their impact on solar and stellar astrophysics is often overlooked. We focus on a Multi-Fluid Multi-Species (MFMS) model, which presents significant challenges for time integration. We propose a second-order Partitioned Implicit-Explicit Runge-Kutta (PIROCK) method that combines efficient explicit and implicit integration techniques with variable time-stepping and error control. Compared to a standard third-order explicit method and a first-order Lie splitting approach, the PIROCK method shows robust advantages in accuracy, stability, and computational efficiency. Our results reveal PIROCK's capability to solve multi-fluid problems with unprecedented efficiency. Preliminary results on chemical fractionation represent a significant step toward understanding the First-Ionization-Potential (FIP) effect in the solar atmosphere.<|reference_end|>
|
arxiv
|
@article{wargnier2024time-adaptive,
title={Time-Adaptive PIROCK Method with Error Control for Multi-Fluid and
Single-Fluid MHD Systems},
author={Q. M. Wargnier and G. Vilmart and J. Martínez-Sykora and V. H.
Hansteen and B. De Pontieu},
journal={arXiv preprint arXiv:2409.15552},
year={2024},
archivePrefix={arXiv},
eprint={2409.15552},
primaryClass={astro-ph.SR cs.NA math.NA}
}
|
wargnier2024time-adaptive
|
arxiv-661079
|
2409.15553
|
SOFI: Multi-Scale Deformable Transformer for Camera Calibration with Enhanced Line Queries
|
<|reference_start|>SOFI: Multi-Scale Deformable Transformer for Camera Calibration with Enhanced Line Queries: Camera calibration consists of estimating camera parameters such as the zenith vanishing point and horizon line. Estimating the camera parameters allows other tasks like 3D rendering, artificial reality effects, and object insertion in an image. Transformer-based models have provided promising results; however, they lack cross-scale interaction. In this work, we introduce \textit{multi-Scale defOrmable transFormer for camera calibratIon with enhanced line queries}, SOFI. SOFI improves the line queries used in CTRL-C and MSCC by using both line content and line geometric features. Moreover, SOFI's line queries allow transformer models to adopt the multi-scale deformable attention mechanism to promote cross-scale interaction between the feature maps produced by the backbone. SOFI outperforms existing methods on the \textit{Google Street View}, \textit{Horizon Line in the Wild}, and \textit{Holicity} datasets while keeping a competitive inference speed.<|reference_end|>
|
arxiv
|
@article{janampa2024sofi:,
title={SOFI: Multi-Scale Deformable Transformer for Camera Calibration with
Enhanced Line Queries},
author={Sebastian Janampa and Marios Pattichis},
journal={arXiv preprint arXiv:2409.15553},
year={2024},
archivePrefix={arXiv},
eprint={2409.15553},
primaryClass={cs.CV}
}
|
janampa2024sofi:
|
arxiv-661080
|
2409.15557
|
Mixture of Efficient Diffusion Experts Through Automatic Interval and Sub-Network Selection
|
<|reference_start|>Mixture of Efficient Diffusion Experts Through Automatic Interval and Sub-Network Selection: Diffusion probabilistic models can generate high-quality samples. Yet, their sampling process requires numerous denoising steps, making it slow and computationally intensive. We propose to reduce the sampling cost by pruning a pretrained diffusion model into a mixture of efficient experts. First, we study the similarities between pairs of denoising timesteps, observing a natural clustering, even across different datasets. This suggests that rather than having a single model for all time steps, separate models can serve as ``experts'' for their respective time intervals. As such, we separately fine-tune the pretrained model on each interval, with elastic dimensions in depth and width, to obtain experts specialized in their corresponding denoising interval. To optimize the resource usage between experts, we introduce our Expert Routing Agent, which learns to select a set of proper network configurations. By doing so, our method can allocate the computing budget between the experts in an end-to-end manner without requiring manual heuristics. Finally, with a selected configuration, we fine-tune our pruned experts to obtain our mixture of efficient experts. We demonstrate the effectiveness of our method, DiffPruning, across several datasets, LSUN-Church, LSUN-Beds, FFHQ, and ImageNet, on the Latent Diffusion Model architecture.<|reference_end|>
|
arxiv
|
@article{ganjdanesh2024mixture,
title={Mixture of Efficient Diffusion Experts Through Automatic Interval and
Sub-Network Selection},
author={Alireza Ganjdanesh, Yan Kang, Yuchen Liu, Richard Zhang, Zhe Lin, Heng
Huang},
journal={arXiv preprint arXiv:2409.15557},
year={2024},
archivePrefix={arXiv},
eprint={2409.15557},
primaryClass={cs.CV}
}
|
ganjdanesh2024mixture
|
arxiv-661081
|
2409.15558
|
Stalactite: Toolbox for Fast Prototyping of Vertical Federated Learning Systems
|
<|reference_start|>Stalactite: Toolbox for Fast Prototyping of Vertical Federated Learning Systems: Machine learning (ML) models trained on datasets owned by different organizations and physically located in remote databases offer benefits in many real-world use cases. State regulations or business requirements often prevent data transfer to a central location, making it difficult to utilize standard machine learning algorithms. Federated Learning (FL) is a technique that enables models to learn from distributed datasets without revealing the original data. Vertical Federated Learning (VFL) is a type of FL where data samples are divided by features across several data owners. For instance, in a recommendation task, a user can interact with various sets of items, and the logs of these interactions are stored by different organizations. In this demo paper, we present \emph{Stalactite} - an open-source framework for VFL that provides the necessary functionality for building prototypes of VFL systems. It has several advantages over the existing frameworks. In particular, it allows researchers to focus on the algorithmic side rather than engineering and to easily deploy learning in a distributed environment. It implements several VFL algorithms and has a built-in homomorphic encryption layer. We demonstrate its use on real-world recommendation datasets.<|reference_end|>
|
arxiv
|
@article{zakharova2024stalactite:,
title={Stalactite: Toolbox for Fast Prototyping of Vertical Federated Learning
Systems},
author={Anastasiia Zakharova, Dmitriy Alexandrov, Maria Khodorchenko, Nikolay
Butakov, Alexey Vasilev, Maxim Savchenko, Alexander Grigorievskiy},
journal={arXiv preprint arXiv:2409.15558},
year={2024},
doi={10.1145/3640457.3691700},
archivePrefix={arXiv},
eprint={2409.15558},
primaryClass={cs.LG cs.DC cs.IR}
}
|
zakharova2024stalactite:
|
arxiv-661082
|
2409.15559
|
LDPC Codes in Cooperative Communication
|
<|reference_start|>LDPC Codes in Cooperative Communication: The broadcast nature of every transmitter makes it possible for other transceivers in the channel to overhear the broadcast signal. The idea proposed in cooperative communication is to use these intermediate transceivers as relays for the transmitted signal, thereby providing spatial diversity that can improve throughput and received-data reliability in the system. In this dissertation we consider some important aspects of cooperative communication in a network composed of three nodes. First, we verify the increase in reliability of the received signal by comparing the reliability of the received bits in a cooperative network with that in a non-cooperative one. Then we go a step further and use the LDPC error correction technique to improve the reliability of the received bits even more, comparing the result with a network without LDPC codes (encoder and decoder) to measure the level of improvement for different SNRs. The overall aim of this dissertation is to deploy the cooperative communication idea, examine and test its claimed benefits, and enhance its performance by using the LDPC error correction technique.<|reference_end|>
|
arxiv
|
@article{mehrban2024ldpc,
title={LDPC Codes in Cooperative Communication},
author={Ali Mehrban},
journal={arXiv preprint arXiv:2409.15559},
year={2024},
archivePrefix={arXiv},
eprint={2409.15559},
primaryClass={eess.SP cs.SY eess.SY}
}
|
mehrban2024ldpc
|
arxiv-661083
|
2409.15560
|
QUB-PHEO: A Visual-Based Dyadic Multi-View Dataset for Intention Inference in Collaborative Assembly
|
<|reference_start|>QUB-PHEO: A Visual-Based Dyadic Multi-View Dataset for Intention Inference in Collaborative Assembly: QUB-PHEO introduces a visual-based, dyadic dataset with the potential of advancing human-robot interaction (HRI) research in assembly operations and intention inference. This dataset captures rich multimodal interactions between two participants, one acting as a 'robot surrogate,' across a variety of assembly tasks that are further broken down into 36 distinct subtasks. With rich visual annotations, such as facial landmarks, gaze, hand movements, object localization, and more for 70 participants, QUB-PHEO offers two versions: full video data for 50 participants and visual cues for all 70. Designed to improve machine learning models for HRI, QUB-PHEO enables deeper analysis of subtle interaction cues and intentions, promising contributions to the field. The dataset will be available at https://github.com/exponentialR/QUB-PHEO subject to an End-User License Agreement (EULA).<|reference_end|>
|
arxiv
|
@article{adebayo2024qub-pheo:,
title={QUB-PHEO: A Visual-Based Dyadic Multi-View Dataset for Intention
Inference in Collaborative Assembly},
author={Samuel Adebayo, Seán McLoone, Joost C. Dessing},
journal={arXiv preprint arXiv:2409.15560},
year={2024},
archivePrefix={arXiv},
eprint={2409.15560},
primaryClass={cs.CV cs.HC eess.IV eess.SP}
}
|
adebayo2024qub-pheo:
|
arxiv-661084
|
2409.15561
|
Analyzing Privacy Implications of Data Collection in Android Automotive OS
|
<|reference_start|>Analyzing Privacy Implications of Data Collection in Android Automotive OS: Modern vehicles have become sophisticated computation and sensor systems, as evidenced by advanced driver assistance systems, in-car infotainment, and autonomous driving capabilities. They collect and process vast amounts of data through various embedded subsystems. One significant player in this landscape is Android Automotive OS (AAOS), which has been integrated into over 100M vehicles and has become a dominant force in the in-vehicle infotainment market. With this extensive data collection, privacy has become increasingly crucial. The volume of data gathered by these systems raises questions about how this information is stored, used, and protected, making privacy a critical issue for manufacturers and consumers. However, very little has been done on vehicle data privacy. This paper focuses on the privacy implications of AAOS, examining the exact nature and scope of data collection and the corresponding privacy policies from the original equipment manufacturers (OEMs). We develop a novel automotive privacy analysis tool called PriDrive which employs three methodological approaches: network traffic inspection and both static and dynamic analyses of Android images using rooted emulators from various OEMs. These methodologies are followed by an assessment of whether the collected data types were properly disclosed in OEMs' and third-party apps' privacy policies (to identify any discrepancies or violations). Our evaluation on three different OEM platforms reveals that vehicle speed is collected at a sampling rate of roughly 25 Hz. Other properties such as model info, climate & AC, and seat data are collected in a batch 30 seconds into vehicle startup. In addition, several vehicle property types were collected without disclosure in their respective privacy policies. For example, OEM A's policies cover only 110 vehicle properties, or 13.02% of the properties found in our static analysis.<|reference_end|>
|
arxiv
|
@article{gözübüyük2024analyzing,
title={Analyzing Privacy Implications of Data Collection in Android Automotive
OS},
author={Bulut G"oz"ub"uy"uk, Brian Tang, Kang G. Shin, Mert D. Pes'e},
journal={arXiv preprint arXiv:2409.15561},
year={2024},
archivePrefix={arXiv},
eprint={2409.15561},
primaryClass={cs.CR}
}
|
gözübüyük2024analyzing
|
arxiv-661085
|
2409.15563
|
Using Machine Teaching to Boost Novices' Robot Teaching Skill
|
<|reference_start|>Using Machine Teaching to Boost Novices' Robot Teaching Skill: Recent evidence has shown that, contrary to expectations, it is difficult for users, especially novices, to teach robots tasks through learning from demonstration (LfD). This paper introduces a framework that leverages machine teaching (MT) algorithms to train novices to become better teachers of robots, and verifies whether such teaching ability is retained beyond the period of training and generalises such that novices teach robots more effectively, even for skills for which training has not been received. A between-subjects study is reported, in which novice teachers are asked to teach simple motor skills to a robot. The results demonstrate that subjects who receive training show an average 78.83% improvement in teaching ability (as measured by the accuracy of the skill learnt by the robot), and an average 63.69% improvement in the teaching of new skills not included as part of the training.<|reference_end|>
|
arxiv
|
@article{zhu2024using,
title={Using Machine Teaching to Boost Novices' Robot Teaching Skill},
author={Yuqing Zhu, Endong Sun and Matthew Howard},
journal={arXiv preprint arXiv:2409.15563},
year={2024},
archivePrefix={arXiv},
eprint={2409.15563},
primaryClass={cs.RO}
}
|
zhu2024using
|
arxiv-661086
|
2409.15564
|
CauSkelNet: Causal Representation Learning for Human Behaviour Analysis
|
<|reference_start|>CauSkelNet: Causal Representation Learning for Human Behaviour Analysis: Traditional machine learning methods for movement recognition are constrained by a lack of model interpretability and a limited understanding of human movement. To address this, this study introduces a novel representation learning method based on causal inference to better understand human joint dynamics and complex behaviors. We propose a two-stage framework that combines the Peter-Clark (PC) algorithm and Kullback-Leibler (KL) divergence to identify and quantify causal relationships between joints. Our method effectively captures interactions and produces interpretable, robust representations. Experiments on the EmoPain dataset show that our causal GCN outperforms traditional GCNs in accuracy, F1 score, and recall, especially in detecting protective behaviors. The model is also highly invariant to data scale changes, enhancing its reliability in practical applications. Our approach advances human motion analysis and paves the way for more adaptive intelligent healthcare solutions.<|reference_end|>
|
arxiv
|
@article{gu2024causkelnet:,
title={CauSkelNet: Causal Representation Learning for Human Behaviour Analysis},
author={Xingrui Gu, Chuyi Jiang, Erte Wang, Zekun Wu, Qiang Cui, Leimin Tian,
Lianlong Wu, Siyang Song, Chuang Yu},
journal={arXiv preprint arXiv:2409.15564},
year={2024},
archivePrefix={arXiv},
eprint={2409.15564},
primaryClass={cs.LG cs.CV}
}
|
gu2024causkelnet:
|
arxiv-661087
|
2409.15565
|
Critic Loss for Image Classification
|
<|reference_start|>Critic Loss for Image Classification: Modern neural network classifiers achieve remarkable performance across a variety of tasks; however, they frequently exhibit overconfidence in their predictions due to the cross-entropy loss. Inspired by this problem, we propose the \textbf{Cr}i\textbf{t}ic Loss for Image \textbf{Cl}assification (CrtCl, pronounced Critical). CrtCl formulates image classification training in a generator-critic framework, with a base classifier acting as a generator, and a correctness critic imposing a loss on the classifier. The base classifier, acting as the generator, given images, generates the probability distribution over classes and intermediate embeddings. The critic model, given the image, intermediate embeddings, and output predictions of the base model, predicts the probability that the base model has produced the correct classification, which can then be backpropagated as a self-supervision signal. Notably, the critic does not use the label as input, meaning that the critic can train the base model on both labeled and unlabeled data in semi-supervised learning settings. CrtCl represents a learned loss method for accuracy, alleviating the negative side effects of using cross-entropy loss. Additionally, CrtCl provides a powerful way to select data to be labeled in an active learning setting, by estimating the classification ability of the base model on unlabeled data. We study the effectiveness of CrtCl in low-labeled data regimes, and in the context of active learning. In classification, we find that CrtCl, compared to recent baselines, increases classifier generalization and calibration with various amounts of labeled data. In active learning, we show our method outperforms baselines in accuracy and calibration. We observe consistent results across three image classification datasets.<|reference_end|>
|
arxiv
|
@article{rappazzo2024critic,
title={Critic Loss for Image Classification},
author={Brendan Hogan Rappazzo, Aaron Ferber, Carla Gomes},
journal={arXiv preprint arXiv:2409.15565},
year={2024},
archivePrefix={arXiv},
eprint={2409.15565},
primaryClass={cs.CV}
}
|
rappazzo2024critic
|
arxiv-661088
|
2409.15566
|
GEM-RAG: Graphical Eigen Memories For Retrieval Augmented Generation
|
<|reference_start|>GEM-RAG: Graphical Eigen Memories For Retrieval Augmented Generation: The ability to form, retrieve, and reason about memories in response to stimuli serves as the cornerstone for general intelligence - shaping entities capable of learning, adaptation, and intuitive insight. Large Language Models (LLMs) have proven their ability, given the proper memories or context, to reason and respond meaningfully to stimuli. However, they are still unable to optimally encode, store, and retrieve memories - the ability to do this would unlock their full ability to operate as AI agents, and to specialize to niche domains. To remedy this, one promising area of research is Retrieval Augmented Generation (RAG), which aims to augment LLMs by providing them with rich in-context examples and information. In question-answering (QA) applications, RAG methods embed the text of interest in chunks, and retrieve the most relevant chunks for a prompt using text embeddings. Motivated by human memory encoding and retrieval, we aim to improve over standard RAG methods by generating and encoding higher-level information and tagging the chunks by their utility to answer questions. We introduce Graphical Eigen Memories For Retrieval Augmented Generation (GEM-RAG). GEM-RAG works by tagging each chunk of text in a given text corpus with LLM-generated ``utility'' questions, connecting chunks in a graph based on the similarity of both their text and utility questions, and then using the eigendecomposition of the memory graph to build higher level summary nodes that capture the main themes of the text. We evaluate GEM-RAG, using both UnifiedQA and GPT-3.5 Turbo as the LLMs, with SBERT, and OpenAI's text encoders on two standard QA tasks, showing that GEM-RAG outperforms other state-of-the-art RAG methods on these tasks. We also discuss the implications of having a robust RAG system and future directions.<|reference_end|>
|
arxiv
|
@article{rappazzo2024gem-rag:,
title={GEM-RAG: Graphical Eigen Memories For Retrieval Augmented Generation},
author={Brendan Hogan Rappazzo, Yingheng Wang, Aaron Ferber, Carla Gomes},
journal={arXiv preprint arXiv:2409.15566},
year={2024},
archivePrefix={arXiv},
eprint={2409.15566},
primaryClass={cs.CL cs.AI}
}
|
rappazzo2024gem-rag:
|
arxiv-661089
|
2409.15567
|
Asking an AI for salary negotiation advice is a matter of concern: Controlled experimental perturbation of ChatGPT for protected and non-protected group discrimination on a contextual task with no clear ground truth answers
|
<|reference_start|>Asking an AI for salary negotiation advice is a matter of concern: Controlled experimental perturbation of ChatGPT for protected and non-protected group discrimination on a contextual task with no clear ground truth answers: We conducted controlled experimental bias audits for four versions of ChatGPT, which we asked to recommend an opening offer in salary negotiations for a new hire. We submitted 98,800 prompts to each version, systematically varying the employee's gender, university, and major, and tested prompts in the voice of each side of the negotiation: the employee versus the employer. We find ChatGPT as a multi-model platform is not robust and consistent enough to be trusted for such a task. We observed statistically significant differences in salary offers when varying gender for all four models, although with smaller gaps than for other attributes tested. The largest gaps were between different model versions and between the employee- versus employer-voiced prompts. We also observed substantial gaps when varying university and major, but many of the biases were not consistent across model versions. We tested for fictional and fraudulent universities and found wildly inconsistent results across cases and model versions. We make broader contributions to the AI/ML fairness literature. Our scenario and our experimental design differ from mainstream AI/ML auditing efforts in key ways. Bias audits typically test discrimination for protected classes like gender, which we contrast with testing non-protected classes of university and major. Asking for negotiation advice includes how aggressive one ought to be in a negotiation relative to known empirical salary distributions and scales, which is a deeply contextual and personalized task that has no objective ground truth to validate. These results raise concerns for the specific model versions we tested and for ChatGPT as a multi-model platform in continuous development. Our epistemology does not permit us to definitively certify these models as either generally biased or unbiased on the attributes we test, but our study raises matters of concern for stakeholders to further investigate.<|reference_end|>
|
arxiv
|
@article{geiger2024asking,
title={Asking an AI for salary negotiation advice is a matter of concern:
Controlled experimental perturbation of ChatGPT for protected and
non-protected group discrimination on a contextual task with no clear ground
truth answers},
author={R. Stuart Geiger, Flynn O'Sullivan, Elsie Wang, Jonathan Lo},
journal={arXiv preprint arXiv:2409.15567},
year={2024},
archivePrefix={arXiv},
eprint={2409.15567},
primaryClass={cs.CY cs.AI cs.CL cs.LG}
}
|
geiger2024asking
|
arxiv-661090
|
2409.15568
|
Cross-Domain Latent Factors Sharing via Implicit Matrix Factorization
|
<|reference_start|>Cross-Domain Latent Factors Sharing via Implicit Matrix Factorization: Data sparsity has been one of the long-standing problems for recommender systems. One of the solutions to mitigate this issue is to exploit knowledge available in other source domains. However, many cross-domain recommender systems introduce a complex architecture that makes them less scalable in practice. On the other hand, matrix factorization methods are still considered to be strong baselines for single-domain recommendations. In this paper, we introduce CDIMF, a model that extends the standard implicit matrix factorization with ALS to cross-domain scenarios. We apply the Alternating Direction Method of Multipliers to learn shared latent factors for overlapped users while factorizing the interaction matrix. In a dual-domain setting, experiments on industrial datasets demonstrate the competitive performance of CDIMF for both cold-start and warm-start. The proposed model can outperform most other recent cross-domain and single-domain models. We also provide the code to reproduce experiments on GitHub.<|reference_end|>
|
arxiv
|
@article{samra2024cross-domain,
title={Cross-Domain Latent Factors Sharing via Implicit Matrix Factorization},
author={Abdulaziz Samra, Evgeney Frolov, Alexey Vasilev, Alexander
Grigorievskiy, Anton Vakhrushev},
journal={arXiv preprint arXiv:2409.15568},
year={2024},
doi={10.1145/3640457.3688143},
archivePrefix={arXiv},
eprint={2409.15568},
primaryClass={cs.IR cs.LG}
}
|
samra2024cross-domain
|
arxiv-661091
|
2409.15574
|
Clinical-grade Multi-Organ Pathology Report Generation for Multi-scale Whole Slide Images via a Semantically Guided Medical Text Foundation Model
|
<|reference_start|>Clinical-grade Multi-Organ Pathology Report Generation for Multi-scale Whole Slide Images via a Semantically Guided Medical Text Foundation Model: Vision language models (VLM) have achieved success in both natural language comprehension and image recognition tasks. However, their use in pathology report generation for whole slide images (WSIs) is still limited due to the huge size of multi-scale WSIs and the high cost of WSI annotation. Moreover, in most of the existing research on pathology report generation, sufficient validation regarding clinical efficacy has not been conducted. Herein, we propose a novel Patient-level Multi-organ Pathology Report Generation (PMPRG) model, which utilizes the multi-scale WSI features from our proposed multi-scale regional vision transformer (MR-ViT) model and their real pathology reports to guide VLM training for accurate pathology report generation. The model then automatically generates a report based on the provided key features and the attended regional features. We assessed our model using a WSI dataset consisting of multiple organs, including the colon and kidney. Our model achieved a METEOR score of 0.68, demonstrating the effectiveness of our approach. This model allows pathologists to efficiently generate pathology reports for patients, regardless of the number of WSIs involved.<|reference_end|>
|
arxiv
|
@article{tan2024clinical-grade,
title={Clinical-grade Multi-Organ Pathology Report Generation for Multi-scale
Whole Slide Images via a Semantically Guided Medical Text Foundation Model},
author={Jing Wei Tan, SeungKyu Kim, Eunsu Kim, Sung Hak Lee, Sangjeong Ahn,
Won-Ki Jeong},
journal={arXiv preprint arXiv:2409.15574},
year={2024},
archivePrefix={arXiv},
eprint={2409.15574},
primaryClass={cs.CV}
}
|
tan2024clinical-grade
|
arxiv-661092
|
2409.15576
|
Optimizing News Text Classification with Bi-LSTM and Attention Mechanism for Efficient Data Processing
|
<|reference_start|>Optimizing News Text Classification with Bi-LSTM and Attention Mechanism for Efficient Data Processing: The development of Internet technology has led to a rapid increase in news information. Filtering out valuable content from complex information has become an urgent problem that needs to be solved. In view of the shortcomings of traditional manual classification methods that are time-consuming and inefficient, this paper proposes an automatic classification scheme for news texts based on deep learning. This solution achieves efficient classification and management of news texts by introducing advanced machine learning algorithms, especially an optimization model that combines Bi-directional Long Short-Term Memory Network (Bi-LSTM) and Attention Mechanism. Experimental results show that this solution can not only significantly improve the accuracy and timeliness of classification, but also significantly reduce the need for manual intervention. It has important practical significance for improving the information processing capabilities of the news industry and accelerating the speed of information flow. Through comparative analysis of multiple common models, the effectiveness and advancement of the proposed method are proved, laying a solid foundation for future news text classification research.<|reference_end|>
|
arxiv
|
@article{liu2024optimizing,
title={Optimizing News Text Classification with Bi-LSTM and Attention Mechanism
for Efficient Data Processing},
author={Bingyao Liu, Jiajing Chen, Rui Wang, Junming Huang, Yuanshuai Luo,
Jianjun Wei},
journal={arXiv preprint arXiv:2409.15576},
year={2024},
archivePrefix={arXiv},
eprint={2409.15576},
primaryClass={cs.CL cs.IR}
}
|
liu2024optimizing
|
arxiv-661093
|
2409.15578
|
Examining the physical and psychological effects of combining multimodal feedback with continuous control in prosthetic hands
|
<|reference_start|>Examining the physical and psychological effects of combining multimodal feedback with continuous control in prosthetic hands: Myoelectric prosthetic hands are typically controlled to move between discrete positions and do not provide sensory feedback to the user. In this work, we present and evaluate a closed-loop, continuous myoelectric prosthetic hand controller, that can continuously control the position of multiple degrees of freedom of a prosthesis while rendering proprioceptive feedback to the user via a haptic feedback armband. Twenty-eight participants without and ten participants with limb difference were recruited to holistically evaluate the physical and psychological effects of the controller via isolated control and sensory tasks, dexterity assessments, embodiment and task load questionnaires, and post-study interviews. The combination of proprioceptive feedback and continuous control enabled accurate positioning, to within 10% mean absolute motor position error, and grasp-force modulation, to within 20% mean absolute motor force error, and restored blindfolded object identification ability to open-loop discrete controller levels. Dexterity assessment and embodiment questionnaire results revealed no significant physical performance or psychological embodiment differences between control types, with the exception of perceived sensation, which was significantly higher (p < 0.001) for closed-loop controllers. Key differences between participants with and without upper limb difference were identified, including in perceived body completeness and frustration, which can inform future prosthesis development and rehabilitation.<|reference_end|>
|
arxiv
|
@article{chappell2024examining,
title={Examining the physical and psychological effects of combining multimodal
feedback with continuous control in prosthetic hands},
author={Digby Chappell, Zeyu Yang, Angus B. Clark, Alexandre Berkovic, Colin
Laganier, Weston Baxter, Fernando Bello, Petar Kormushev, Nicolas Rojas},
journal={arXiv preprint arXiv:2409.15578},
year={2024},
archivePrefix={arXiv},
eprint={2409.15578},
primaryClass={cs.RO}
}
|
chappell2024examining
|
arxiv-661094
|
2409.15581
|
Mixing Data-driven and Geometric Models for Satellite Docking Port State Estimation using an RGB or Event Camera
|
<|reference_start|>Mixing Data-driven and Geometric Models for Satellite Docking Port State Estimation using an RGB or Event Camera: In-orbit automated servicing is a promising path towards lowering the cost of satellite operations and reducing the amount of orbital debris. For this purpose, we present a pipeline for automated satellite docking port detection and state estimation using monocular vision data from standard RGB sensing or an event camera. Rather than taking snapshots of the environment, an event camera has independent pixels that asynchronously respond to light changes, offering advantages such as high dynamic range, low power consumption and latency, etc. This work focuses on satellite-agnostic operations (only a geometric knowledge of the actual port is required) using the recently released Lockheed Martin Mission Augmentation Port (LM-MAP) as the target. By leveraging shallow data-driven techniques to preprocess the incoming data to highlight the LM-MAP's reflective navigational aids and then using basic geometric models for state estimation, we present a lightweight and data-efficient pipeline that can be used independently with either RGB or event cameras. We demonstrate the soundness of the pipeline and perform a quantitative comparison of the two modalities based on data collected with a photometrically accurate test bench that includes a robotic arm to simulate the target satellite's uncontrolled motion.<|reference_end|>
|
arxiv
|
@article{gentil2024mixing,
title={Mixing Data-driven and Geometric Models for Satellite Docking Port State
Estimation using an RGB or Event Camera},
author={Cedric Le Gentil, Jack Naylor, Nuwan Munasinghe, Jasprabhjit Mehami,
Benny Dai, Mikhail Asavkin, Donald G. Dansereau, and Teresa Vidal-Calleja},
journal={arXiv preprint arXiv:2409.15581},
year={2024},
archivePrefix={arXiv},
eprint={2409.15581},
primaryClass={cs.RO cs.CV}
}
|
gentil2024mixing
|
arxiv-661095
|
2409.15582
|
Generalization vs Specialization under Concept Shift
|
<|reference_start|>Generalization vs Specialization under Concept Shift: Machine learning models are often brittle under distribution shift, i.e., when data distributions at test time differ from those during training. Understanding this failure mode is central to identifying and mitigating safety risks of mass adoption of machine learning. Here we analyze ridge regression under concept shift -- a form of distribution shift in which the input-label relationship changes at test time. We derive an exact expression for prediction risk in the high-dimensional limit. Our results reveal nontrivial effects of concept shift on generalization performance, depending on the properties of robust and nonrobust features of the input. We show that test performance can exhibit a nonmonotonic data dependence, even when double descent is absent. Finally, our experiments on MNIST and FashionMNIST suggest that this intriguing behavior is present also in classification problems.<|reference_end|>
|
arxiv
|
@article{nguyen2024generalization,
title={Generalization vs. Specialization under Concept Shift},
author={Alex Nguyen, David J. Schwab, Vudtiwat Ngampruetikorn},
journal={arXiv preprint arXiv:2409.15582},
year={2024},
archivePrefix={arXiv},
eprint={2409.15582},
primaryClass={stat.ML cond-mat.dis-nn cond-mat.stat-mech cs.LG}
}
|
nguyen2024generalization
|
arxiv-661096
|
2409.15584
|
FACET: Fast and Accurate Event-Based Eye Tracking Using Ellipse Modeling for Extended Reality
|
<|reference_start|>FACET: Fast and Accurate Event-Based Eye Tracking Using Ellipse Modeling for Extended Reality: Eye tracking is a key technology for gaze-based interactions in Extended Reality (XR), but traditional frame-based systems struggle to meet XR's demands for high accuracy, low latency, and power efficiency. Event cameras offer a promising alternative due to their high temporal resolution and low power consumption. In this paper, we present FACET (Fast and Accurate Event-based Eye Tracking), an end-to-end neural network that directly outputs pupil ellipse parameters from event data, optimized for real-time XR applications. The ellipse output can be directly used in subsequent ellipse-based pupil trackers. We enhance the EV-Eye dataset by expanding annotated data and converting original mask labels to ellipse-based annotations to train the model. In addition, a novel trigonometric loss is adopted to address angle discontinuities, and a fast causal event volume event representation method is put forward. On the enhanced EV-Eye test set, FACET achieves an average pupil center error of 0.20 pixels and an inference time of 0.53 ms, reducing pixel error and inference time by 1.6$\times$ and 1.8$\times$ compared to the prior art, EV-Eye, with 4.4$\times$ and 11.7$\times$ fewer parameters and arithmetic operations. The code is available at https://github.com/DeanJY/FACET.<|reference_end|>
|
arxiv
|
@article{ding2024facet:,
title={FACET: Fast and Accurate Event-Based Eye Tracking Using Ellipse Modeling
for Extended Reality},
author={Junyuan Ding, Ziteng Wang, Chang Gao, Min Liu, Qinyu Chen},
journal={arXiv preprint arXiv:2409.15584},
year={2024},
archivePrefix={arXiv},
eprint={2409.15584},
primaryClass={cs.CV cs.AI}
}
|
ding2024facet:
|
arxiv-661097
|
2409.15585
|
XMoP: Whole-Body Control Policy for Zero-shot Cross-Embodiment Neural Motion Planning
|
<|reference_start|>XMoP: Whole-Body Control Policy for Zero-shot Cross-Embodiment Neural Motion Planning: Classical manipulator motion planners work across different robot embodiments. However, they plan on a pre-specified static environment representation, and are not scalable to unseen dynamic environments. Neural Motion Planners (NMPs) are an appealing alternative to conventional planners as they incorporate different environmental constraints to learn motion policies directly from raw sensor observations. Contemporary state-of-the-art NMPs can successfully plan across different environments. However, none of the existing NMPs generalize across robot embodiments. In this paper we propose Cross-Embodiment Motion Policy (XMoP), a neural policy for learning to plan over a distribution of manipulators. XMoP implicitly learns to satisfy kinematic constraints for a distribution of robots and $\textit{zero-shot}$ transfers the planning behavior to unseen robotic manipulators within this distribution. We achieve this generalization by formulating a whole-body control policy that is trained on planning demonstrations from over three million procedurally sampled robotic manipulators in different simulated environments. Despite being completely trained on synthetic embodiments and environments, our policy exhibits strong sim-to-real generalization across manipulators with different kinematic variations and degrees of freedom with a single set of frozen policy parameters. We evaluate XMoP on $7$ commercial manipulators and show successful cross-embodiment motion planning, achieving an average $70\%$ success rate on baseline benchmarks. Furthermore, we demonstrate our policy sim-to-real on two unseen manipulators solving novel planning problems across three real-world domains even with dynamic obstacles.<|reference_end|>
|
arxiv
|
@article{rath2024xmop:,
title={XMoP: Whole-Body Control Policy for Zero-shot Cross-Embodiment Neural
Motion Planning},
author={Prabin Kumar Rath, Nakul Gopalan},
journal={arXiv preprint arXiv:2409.15585},
year={2024},
archivePrefix={arXiv},
eprint={2409.15585},
primaryClass={cs.RO}
}
|
rath2024xmop:
|
arxiv-661098
|
2409.15586
|
TFT-multi: simultaneous forecasting of vital sign trajectories in the ICU
|
<|reference_start|>TFT-multi: simultaneous forecasting of vital sign trajectories in the ICU: Trajectory forecasting in healthcare data has been an important area of research for precision care and the clinical integration of computational methods. In recent years, generative AI models have demonstrated promising results in capturing short- and long-range dependencies in time series data. While these models have also been applied in healthcare, most of them predict only one value at a time, which is unrealistic in a clinical setting where multiple measures are taken at once. In this work, we extend the temporal fusion transformer (TFT), a multi-horizon time series prediction tool, and propose TFT-multi, an end-to-end framework that can predict multiple vital trajectories simultaneously. We apply TFT-multi to forecast 5 vital signs recorded in the intensive care unit: blood pressure, pulse, SpO2, temperature and respiratory rate. We hypothesize that by jointly predicting these measures, which are often correlated with one another, we can make more accurate predictions, especially for variables with large missingness. We validate our model on the public MIMIC dataset and an independent institutional dataset, and demonstrate that this approach outperforms state-of-the-art univariate prediction tools, including the original TFT and Prophet, as well as vector regression modeling for multivariate prediction. Furthermore, we perform a case study analysis by applying our pipeline to forecast blood pressure changes in response to actual and hypothetical pressor administration.<|reference_end|>
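The joint multi-vital objective implied here can be sketched as a forecast over all five vitals at once, with a mask so that missing measurements do not contribute to the loss. The masked-MSE form below is an assumption chosen for brevity (TFT itself is typically trained with quantile losses), and the tensor shapes are illustrative.

```python
# Sketch of a masked multivariate training objective for jointly
# forecasting several correlated vital signs, as in TFT-multi.
# The masked-MSE formulation is an illustrative stand-in, not the
# paper's exact loss.
import torch

def masked_multivital_mse(pred, target, observed_mask):
    """pred/target: (batch, horizon, n_vitals); mask is 1 where observed."""
    sq_err = (pred - target) ** 2 * observed_mask
    return sq_err.sum() / observed_mask.sum().clamp(min=1)

batch, horizon, n_vitals = 4, 12, 5   # BP, pulse, SpO2, temp, resp rate
pred = torch.randn(batch, horizon, n_vitals)
target = torch.randn(batch, horizon, n_vitals)
mask = (torch.rand(batch, horizon, n_vitals) > 0.3).float()  # ~30% missing
print(masked_multivital_mse(pred, target, mask))
```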
|
arxiv
|
@article{he2024tft-multi:,
title={TFT-multi: simultaneous forecasting of vital sign trajectories in the
ICU},
author={Rosemary Y. He, Jeffrey N. Chiang},
journal={arXiv preprint arXiv:2409.15586},
year={2024},
archivePrefix={arXiv},
eprint={2409.15586},
primaryClass={eess.SP cs.AI}
}
|
he2024tft-multi:
|
arxiv-661099
|
2409.15589
|
Beyond Humanoid Prosthetic Hands: Modular Terminal Devices That Improve User Performance
|
<|reference_start|>Beyond Humanoid Prosthetic Hands: Modular Terminal Devices That Improve User Performance: Despite decades of research and development, myoelectric prosthetic hands lack functionality and are often rejected by users. This lack of functionality can be attributed to the field's widely accepted anthropomorphic design ideology: attempting to replicate human hand form and function despite severe limitations in control and sensing technology. Instead, prosthetic hands can be tailored to perform specific tasks without increasing complexity by shedding the constraints of anthropomorphism. In this paper, we develop and evaluate four open-source modular non-humanoid devices that provide the motion required to replicate a human flicking motion and to twist a screwdriver, and the functionality required to pick and place flat objects and to cut paper. Experimental results from these devices demonstrate that, compared with a humanoid prosthesis, non-humanoid prosthesis design dramatically improves task performance, reduces user compensatory movement, and reduces task load. Case studies with two end users demonstrate the translational benefits of this research. We found that special attention should be paid to monitoring end-user task load to ensure positive rehabilitation outcomes.<|reference_end|>
|
arxiv
|
@article{chappell2024beyond,
title={Beyond Humanoid Prosthetic Hands: Modular Terminal Devices That Improve
User Performance},
author={Digby Chappell, Barry Mulvey, Shehara Perera, Fernando Bello, Petar
Kormushev, Nicolas Rojas},
journal={arXiv preprint arXiv:2409.15589},
year={2024},
archivePrefix={arXiv},
eprint={2409.15589},
primaryClass={cs.RO}
}
|
chappell2024beyond
|
arxiv-661100
|
2409.15590
|
MapEx: Indoor Structure Exploration with Probabilistic Information Gain from Global Map Predictions
|
<|reference_start|>MapEx: Indoor Structure Exploration with Probabilistic Information Gain from Global Map Predictions: Exploration is a critical challenge in robotics, centered on understanding unknown environments. In this work, we focus on robots exploring structured indoor environments, which are often predictable and composed of repeating patterns. Most existing approaches, such as conventional frontier methods, have difficulty leveraging this predictability and instead explore with simple heuristics such as `closest first'. Recent works use deep learning techniques to predict unknown regions of the map and use these predictions for information gain calculation. However, these approaches are often sensitive to the quality of the predicted map or do not reason over sensor coverage. To overcome these issues, our key insight is to jointly reason over what the robot can observe and its uncertainty to calculate probabilistic information gain. We introduce MapEx, a new exploration framework that uses predicted maps to form a probabilistic sensor model for information gain estimation. MapEx generates multiple predicted maps based on observed information, and considers both the computed variances of the predicted maps and the estimated visible area to estimate the information gain of a given viewpoint. Experiments on the real-world KTH dataset showed an average improvement of 12.4% over representative map-prediction-based exploration and 25.4% over the nearest-frontier approach.<|reference_end|>
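The variance-plus-visibility idea in this abstract can be sketched as: sample an ensemble of predicted occupancy maps, take the per-cell predictive variance, and accumulate it only over the cells a candidate viewpoint is expected to see. The circular visibility model and the use of raw variance as the per-cell score below are simplifying assumptions, not MapEx's exact sensor model.

```python
# Sketch of MapEx-style probabilistic information gain from an
# ensemble of predicted occupancy maps. Simplified for illustration.
import numpy as np

def information_gain(pred_maps: np.ndarray, viewpoint, sensor_range: float) -> float:
    """pred_maps: (K, H, W) occupancy probabilities in [0, 1]."""
    var = pred_maps.var(axis=0)                  # per-cell predictive variance
    h, w = var.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Crude circular field-of-view stands in for the estimated visible area.
    visible = (ys - viewpoint[0]) ** 2 + (xs - viewpoint[1]) ** 2 <= sensor_range ** 2
    return float(var[visible].sum())             # gain over expected coverage

rng = np.random.default_rng(0)
ensemble = rng.random((8, 64, 64))               # 8 sampled map predictions
gains = {vp: information_gain(ensemble, vp, 10.0) for vp in [(10, 10), (32, 32)]}
print(max(gains, key=gains.get))                 # pick the best viewpoint
```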
|
arxiv
|
@article{ho2024mapex:,
title={MapEx: Indoor Structure Exploration with Probabilistic Information Gain
from Global Map Predictions},
author={Cherie Ho, Seungchan Kim, Brady Moon, Aditya Parandekar, Narek
Harutyunyan, Chen Wang, Katia Sycara, Graeme Best, Sebastian Scherer},
journal={arXiv preprint arXiv:2409.15590},
year={2024},
archivePrefix={arXiv},
eprint={2409.15590},
primaryClass={cs.RO cs.CV}
}
|
ho2024mapex:
|