corpus_id (string, lengths 7-12) | paper_id (string, lengths 9-16) | title (string, lengths 1-261) | abstract (string, lengths 70-4.02k) | source (string, 1 class: arxiv) | bibtex (string, lengths 208-20.9k) | citation_key (string, lengths 6-100)
---|---|---|---|---|---|---
arxiv-660901
|
2409.15244
|
Workspace Awareness Needs in Mixed-Presence Collaboration on Wall-Sized Displays
|
<|reference_start|>Workspace Awareness Needs in Mixed-Presence Collaboration on Wall-Sized Displays: To enhance workspace awareness for mixed-presence meetings with large displays, previous work proposes digital cues to share gestures, gaze, or entire postures. While such cues have been shown to be useful in horizontal or smaller workspaces, efforts have focused on isolated elements in controlled settings. It is unknown what needs would emerge in a more realistic setting and how they could be addressed with workspace awareness cues. In this paper, we report on the results of a focus group, centred around users' perceptions while testing a mixed-presence scenario on wall-sized displays. We analyse the gathered comments using Gutwin and Greenberg's workspace awareness framework to identify the most relevant needs. Our results lead to a refinement of the original framework for wall-sized displays and in particular to a categorization into three types of workspace awareness components: (i) the Environment, (ii) Actions and (iii) Attention.<|reference_end|>
|
arxiv
|
@article{coppens2024workspace,
title={Workspace Awareness Needs in Mixed-Presence Collaboration on Wall-Sized
Displays},
author={Adrien Coppens and Lou Schwartz and Val\'erie Maquil},
journal={arXiv preprint arXiv:2409.15244},
year={2024},
doi={10.1007/978-3-031-71315-6_3},
archivePrefix={arXiv},
eprint={2409.15244},
primaryClass={cs.HC}
}
|
coppens2024workspace
|
arxiv-660902
|
2409.15246
|
On-Air Deep Learning Integrated Semantic Inference Models for Enhanced Earth Observation Satellite Networks
|
<|reference_start|>On-Air Deep Learning Integrated Semantic Inference Models for Enhanced Earth Observation Satellite Networks: Earth Observation (EO) systems play a crucial role in achieving Sustainable Development Goals by collecting and analyzing vital global data through satellite networks. These systems are essential for tasks like mapping, disaster monitoring, and resource management, but they face challenges in processing and transmitting large volumes of EO data, especially in specialized fields such as agriculture and real-time disaster response. Domain-adapted Large Language Models (LLMs) provide a promising solution by facilitating data fusion between extensive EO data and semantic EO data. By improving integration and interpretation of diverse datasets, LLMs address the challenges of processing specialized information in agriculture and disaster response applications. This fusion enhances the accuracy and relevance of transmitted data. This paper presents a framework for semantic communication in EO satellite networks, aimed at improving data transmission efficiency and overall system performance through cognitive processing techniques. The proposed system employs Discrete Task-Oriented Joint Source-Channel Coding (DT-JSCC) and Semantic Data Augmentation (SA) to focus on relevant information while minimizing communication overhead. By integrating cognitive semantic processing and inter-satellite links, the framework enhances the analysis and transmission of multispectral satellite imagery, improving object detection, pattern recognition, and real-time decision-making. The introduction of Cognitive Semantic Augmentation (CSA) allows satellites to process and transmit semantic information, boosting adaptability to changing environments and application needs. This end-to-end architecture is tailored for next-generation satellite networks, such as those supporting 6G, and demonstrates significant improvements in efficiency and accuracy.<|reference_end|>
|
arxiv
|
@article{chou2024on-air,
title={On-Air Deep Learning Integrated Semantic Inference Models for Enhanced
Earth Observation Satellite Networks},
author={Hong-fu Chou, Vu Nguyen Ha, Prabhu Thiruvasagam, Thanh-Dung Le,
Geoffrey Eappen, Ti Ti Nguyen, Luis M. Garces-Socarras, Jorge L.
Gonzalez-Rios, Juan Carlos Merlano-Duncan, Symeon Chatzinotas},
journal={arXiv preprint arXiv:2409.15246},
year={2024},
archivePrefix={arXiv},
eprint={2409.15246},
primaryClass={cs.LG cs.CV cs.NI}
}
|
chou2024on-air
|
arxiv-660903
|
2409.15248
|
Founding Quantum Cryptography on Quantum Advantage, or, Towards Cryptography from $\mathsf{\#P}$-Hardness
|
<|reference_start|>Founding Quantum Cryptography on Quantum Advantage, or, Towards Cryptography from $\mathsf{\#P}$-Hardness: Recent oracle separations [Kretschmer, TQC'21, Kretschmer et al., STOC'23] have raised the tantalizing possibility of building quantum cryptography from sources of hardness that persist even if the polynomial hierarchy collapses. We realize this possibility by building quantum bit commitments and secure computation from unrelativized, well-studied mathematical problems that are conjectured to be hard for $\mathsf{P^{\#P}}$ -- such as approximating the permanents of complex Gaussian matrices, or approximating the output probabilities of random quantum circuits. Indeed, we show that as long as any one of the conjectures underlying sampling-based quantum advantage (e.g., BosonSampling, Random Circuit Sampling, IQP, etc.) is true, quantum cryptography can be based on the extremely mild assumption that $\mathsf{P^{\#P}} \not\subseteq \mathsf{(io)BQP/qpoly}$. We prove that the following hardness assumptions are equivalent. (1) The hardness of approximating the probability assigned to a randomly chosen string in the support of certain efficiently sampleable distributions (up to inverse polynomial multiplicative error). (2) The existence of one-way puzzles, where a quantum sampler outputs a pair of classical strings -- a puzzle and its key -- and where the hardness lies in finding the key corresponding to a random puzzle. These are known to imply quantum bit commitments [Khurana and Tomer, STOC'24]. (3) The existence of state puzzles, or one-way state synthesis, where it is hard to synthesize a secret quantum state given a public classical identifier. These capture the hardness of search problems with quantum inputs (secrets) and classical outputs (challenges). These are the first constructions of quantum cryptographic primitives (one-way puzzles, quantum bit commitments, state puzzles) from concrete, well-founded mathematical assumptions that do not imply the existence of classical cryptography.<|reference_end|>
|
arxiv
|
@article{khurana2024founding,
title={Founding Quantum Cryptography on Quantum Advantage, or, Towards
Cryptography from $\mathsf{\#P}$-Hardness},
author={Dakshita Khurana (UIUC), Kabir Tomer (UIUC)},
journal={arXiv preprint arXiv:2409.15248},
year={2024},
archivePrefix={arXiv},
eprint={2409.15248},
primaryClass={quant-ph cs.CR}
}
|
khurana2024founding
|
arxiv-660904
|
2409.15250
|
ReVLA: Reverting Visual Domain Limitation of Robotic Foundation Models
|
<|reference_start|>ReVLA: Reverting Visual Domain Limitation of Robotic Foundation Models: Recent progress in large language models and access to large-scale robotic datasets has sparked a paradigm shift in robotics, transforming models into generalists able to adapt to various tasks, scenes, and robot modalities. A large step for the community is the advent of open Vision Language Action models, which showcase strong performance in a wide variety of tasks. In this work, we study the visual generalization capabilities of three existing robotic foundation models, and propose a corresponding evaluation framework. Our study shows that the existing models do not exhibit robustness to visual out-of-domain scenarios. This is potentially caused by limited variations in the training data and/or catastrophic forgetting, leading to domain limitations in the vision foundation models. We further explore OpenVLA, which uses two pre-trained vision foundation models and is, therefore, expected to generalize to out-of-domain experiments. However, we showcase catastrophic forgetting by DINO-v2 in OpenVLA through its failure to fulfill the task of depth regression. To overcome the aforementioned issue of visual catastrophic forgetting, we propose a gradual backbone reversal approach founded on model merging. This enables OpenVLA -- which requires the adaptation of the visual backbones during initial training -- to regain its visual generalization ability. Regaining this capability enables our ReVLA model to improve over OpenVLA by a factor of 77% and 66% for grasping and lifting in visual OOD tasks.<|reference_end|>
|
arxiv
|
@article{dey2024revla:,
title={ReVLA: Reverting Visual Domain Limitation of Robotic Foundation Models},
author={Sombit Dey, Jan-Nico Zaech, Nikolay Nikolov, Luc Van Gool, Danda Pani
Paudel},
journal={arXiv preprint arXiv:2409.15250},
year={2024},
archivePrefix={arXiv},
eprint={2409.15250},
primaryClass={cs.CV cs.RO}
}
|
dey2024revla:
|
arxiv-660905
|
2409.15251
|
Machine Learning Toric Duality in Brane Tilings
|
<|reference_start|>Machine Learning Toric Duality in Brane Tilings: We apply a variety of machine learning methods to the study of Seiberg duality within 4d $\mathcal{N}=1$ quantum field theories arising on the worldvolumes of D3-branes probing toric Calabi-Yau 3-folds. Such theories admit an elegant description in terms of bipartite tessellations of the torus known as brane tilings or dimer models. An intricate network of infrared dualities interconnects the space of such theories and partitions it into universality classes, the prediction and classification of which is a problem that naturally lends itself to a machine learning investigation. In this paper, we address a preliminary set of such enquiries. We begin by training a fully connected neural network to identify classes of Seiberg dual theories realised on $\mathbb{Z}_m\times\mathbb{Z}_n$ orbifolds of the conifold and achieve $R^2=0.988$. Then, we evaluate various notions of robustness of our methods against perturbations of the space of theories under investigation, and discuss these results in terms of the nature of the neural network's learning. Finally, we employ a more sophisticated residual architecture to classify the toric phase space of the $Y^{6,0}$ theories, and to predict the individual gauged linear $\sigma$-model multiplicities in toric diagrams thereof. In spite of the non-trivial nature of this task, we achieve remarkably accurate results; namely, upon fixing a choice of Kasteleyn matrix representative, the regressor achieves a mean absolute error of $0.021$. We also discuss how the performance is affected by relaxing these assumptions.<|reference_end|>
|
arxiv
|
@article{capuozzo2024machine,
title={Machine Learning Toric Duality in Brane Tilings},
author={Pietro Capuozzo, Tancredi Schettini Gherardini and Benjamin Suzzoni},
journal={arXiv preprint arXiv:2409.15251},
year={2024},
archivePrefix={arXiv},
eprint={2409.15251},
primaryClass={hep-th cs.LG}
}
|
capuozzo2024machine
|
arxiv-660906
|
2409.15253
|
Investigating Robot Dogs for Construction Monitoring: A Comparative Analysis of Specifications and On-site Requirements
|
<|reference_start|>Investigating Robot Dogs for Construction Monitoring: A Comparative Analysis of Specifications and On-site Requirements: Robot dogs are receiving increasing attention in various fields of research. However, the number of studies investigating their potential usability on construction sites is scarce. The construction industry involves several human resource-demanding tasks such as safety monitoring, material transportation, and site inspections. Robot dogs can address some of these challenges by providing automated support and lowering manual effort. In this paper, we investigate the potential usability of currently available robot dogs on construction sites, focusing on their different specifications and on-site requirements to support data acquisition. In addition, we conducted a real-world experiment on a large-scale construction site using a quadruped robot. In conclusion, we consider robot dogs to be a valuable asset for monitoring intricate construction environments in the future, particularly as their limitations are mitigated through technical advancements.<|reference_end|>
|
arxiv
|
@article{torres2024investigating,
title={Investigating Robot Dogs for Construction Monitoring: A Comparative
Analysis of Specifications and On-site Requirements},
author={Miguel Arturo Vega Torres, Fabian Pfitzner},
journal={arXiv preprint arXiv:2409.15253},
year={2024},
doi={10.13154/294-10094},
archivePrefix={arXiv},
eprint={2409.15253},
primaryClass={cs.RO cs.AR cs.CV}
}
|
torres2024investigating
|
arxiv-660907
|
2409.15254
|
Archon: An Architecture Search Framework for Inference-Time Techniques
|
<|reference_start|>Archon: An Architecture Search Framework for Inference-Time Techniques: Inference-time techniques are emerging as highly effective tools to enhance large language model (LLM) capabilities. However, best practices for developing systems that combine these techniques remain underdeveloped due to our limited understanding of the utility of individual inference-time techniques and the interactions between them. Additionally, efficiently and automatically searching the space of model choices, inference-time techniques, and their compositions is challenging due to the large design space. To address these challenges, we introduce Archon, a modular framework for selecting, combining, and stacking layers of inference-time techniques to construct optimized LLM systems for target benchmarks. Rather than relying on a single LLM called once, we leverage a diverse set of LLMs and inference-time techniques, creating LLM systems greater than the sum of their parts. Archon defines an extensible design space, encompassing techniques such as generation ensembling, repeated sampling, ranking, fusion, critiquing, verification, and unit testing. It transforms the problem of building LLM systems into a hyperparameter optimization objective. Given the available LLMs, inference-time techniques, and compute budget, Archon utilizes hyperparameter search techniques to discover optimized architectures for target benchmark(s). We evaluate Archon architectures across a range of instruction-following, reasoning, and coding benchmarks, including MT-Bench, Arena-Hard-Auto, AlpacaEval 2.0, MixEval, MixEval Hard, MATH, and CodeContests. Archon architectures outperform frontier models, such as GPT-4o and Claude 3.5 Sonnet, on these benchmarks, achieving an average accuracy increase of 15.1 percentage points by using all available LLMs. We make our code and datasets available publicly on Github: https://github.com/ScalingIntelligence/Archon.<|reference_end|>
|
arxiv
|
@article{saad-falcon2024archon:,
title={Archon: An Architecture Search Framework for Inference-Time Techniques},
author={Jon Saad-Falcon, Adrian Gamarra Lafuente, Shlok Natarajan, Nahum Maru,
Hristo Todorov, Etash Guha, E. Kelly Buchanan, Mayee Chen, Neel Guha,
Christopher R\'e, Azalia Mirhoseini},
journal={arXiv preprint arXiv:2409.15254},
year={2024},
archivePrefix={arXiv},
eprint={2409.15254},
primaryClass={cs.LG cs.AI cs.CL}
}
|
saad-falcon2024archon:
|
arxiv-660908
|
2409.15255
|
ZeroSCD: Zero-Shot Street Scene Change Detection
|
<|reference_start|>ZeroSCD: Zero-Shot Street Scene Change Detection: Scene Change Detection is a challenging task in computer vision and robotics that aims to identify differences between two images of the same scene captured at different times. Traditional change detection methods rely on training models that take these image pairs as input and estimate the changes, which requires large amounts of annotated data, a costly and time-consuming process. To overcome this, we propose ZeroSCD, a zero-shot scene change detection framework that eliminates the need for training. ZeroSCD leverages pre-existing models for place recognition and semantic segmentation, utilizing their features and outputs to perform change detection. In this framework, features extracted from the place recognition model are used to estimate correspondences and detect changes between the two images. These are then combined with segmentation results from the semantic segmentation model to precisely delineate the boundaries of the detected changes. Extensive experiments on benchmark datasets demonstrate that ZeroSCD outperforms several state-of-the-art methods in change detection accuracy, despite not being trained on any of the benchmark datasets, proving its effectiveness and adaptability across different scenarios.<|reference_end|>
|
arxiv
|
@article{kannan2024zeroscd:,
title={ZeroSCD: Zero-Shot Street Scene Change Detection},
author={Shyam Sundar Kannan and Byung-Cheol Min},
journal={arXiv preprint arXiv:2409.15255},
year={2024},
archivePrefix={arXiv},
eprint={2409.15255},
primaryClass={cs.RO cs.CV}
}
|
kannan2024zeroscd:
|
arxiv-660909
|
2409.15256
|
Behavioral Bias of Vision-Language Models: A Behavioral Finance View
|
<|reference_start|>Behavioral Bias of Vision-Language Models: A Behavioral Finance View: Large Vision-Language Models (LVLMs) evolve rapidly as Large Language Models (LLMs) are equipped with vision modules to create more human-like models. However, we should carefully evaluate their applications in different domains, as they may possess undesired biases. Our work studies the potential behavioral biases of LVLMs from a behavioral finance perspective, an interdisciplinary subject that jointly considers finance and psychology. We propose an end-to-end framework, from data collection to new evaluation metrics, to assess LVLMs' reasoning capabilities and the dynamic behaviors manifested in two established human financial behavioral biases: recency bias and authority bias. Our evaluations find that recent open-source LVLMs such as LLaVA-NeXT, MobileVLM-V2, Mini-Gemini, MiniCPM-Llama3-V 2.5 and Phi-3-vision-128k suffer significantly from these two biases, while the proprietary model GPT-4o is negligibly impacted. Our observations highlight directions in which open-source models can improve. The code is available at https://github.com/mydcxiao/vlm_behavioral_fin.<|reference_end|>
|
arxiv
|
@article{xiao2024behavioral,
title={Behavioral Bias of Vision-Language Models: A Behavioral Finance View},
author={Yuhang Xiao, Yudi Lin, Ming-Chang Chiu},
journal={arXiv preprint arXiv:2409.15256},
year={2024},
archivePrefix={arXiv},
eprint={2409.15256},
primaryClass={cs.CL cs.AI}
}
|
xiao2024behavioral
|
arxiv-660910
|
2409.15259
|
S$^2$AG-Vid: Enhancing Multi-Motion Alignment in Video Diffusion Models via Spatial and Syntactic Attention-Based Guidance
|
<|reference_start|>S$^2$AG-Vid: Enhancing Multi-Motion Alignment in Video Diffusion Models via Spatial and Syntactic Attention-Based Guidance: Recent advancements in text-to-video (T2V) generation using diffusion models have garnered significant attention. However, existing T2V models primarily focus on simple scenes featuring a single object performing a single motion. Challenges arise in scenarios involving multiple objects with distinct motions, often leading to incorrect video-text alignment between subjects and their corresponding motions. To address this challenge, we propose \textbf{S$^2$AG-Vid}, a training-free inference-stage optimization method that improves the alignment of multiple objects with their corresponding motions in T2V models. S$^2$AG-Vid initially applies a spatial position-based, cross-attention (CA) constraint in the early stages of the denoising process, facilitating multiple nouns distinctly attending to the correct subject regions. To enhance the motion-subject binding, we implement a syntax-guided contrastive constraint in the subsequent denoising phase, aimed at improving the correlations between the CA maps of verbs and their corresponding nouns. Both qualitative and quantitative evaluations demonstrate that the proposed framework significantly outperforms baseline approaches, producing higher-quality videos with improved subject-motion consistency.<|reference_end|>
|
arxiv
|
@article{li2024s$^2$ag-vid:,
title={S$^2$AG-Vid: Enhancing Multi-Motion Alignment in Video Diffusion Models
via Spatial and Syntactic Attention-Based Guidance},
author={Yuanhang Li, Qi Mao, Lan Chen, Zhen Fang, Lei Tian, Xinyan Xiao,
Libiao Jin, Hua Wu},
journal={arXiv preprint arXiv:2409.15259},
year={2024},
archivePrefix={arXiv},
eprint={2409.15259},
primaryClass={cs.CV cs.AI}
}
|
li2024s$^2$ag-vid:
|
arxiv-660911
|
2409.15260
|
Generative AI Is Not Ready for Clinical Use in Patient Education for Lower Back Pain Patients, Even With Retrieval-Augmented Generation
|
<|reference_start|>Generative AI Is Not Ready for Clinical Use in Patient Education for Lower Back Pain Patients, Even With Retrieval-Augmented Generation: Low back pain (LBP) is a leading cause of disability globally. Following the onset of LBP and subsequent treatment, adequate patient education is crucial for improving functionality and long-term outcomes. Despite advancements in patient education strategies, significant gaps persist in delivering personalized, evidence-based information to patients with LBP. Recent advancements in large language models (LLMs) and generative artificial intelligence (GenAI) have demonstrated the potential to enhance patient education. However, their application and efficacy in delivering educational content to patients with LBP remain underexplored and warrant further investigation. In this study, we introduce a novel approach utilizing LLMs with Retrieval-Augmented Generation (RAG) and few-shot learning to generate tailored educational materials for patients with LBP. Physical therapists manually evaluated our model responses for redundancy, accuracy, and completeness using a Likert scale. In addition, the readability of the generated education materials is assessed using the Flesch Reading Ease score. The findings demonstrate that RAG-based LLMs outperform traditional LLMs, providing more accurate, complete, and readable patient education materials with less redundancy. Having said that, our analysis reveals that the generated materials are not yet ready for use in clinical practice. This study underscores the potential of AI-driven models utilizing RAG to improve patient education for LBP; however, significant challenges remain in ensuring the clinical relevance and granularity of content generated by these models.<|reference_end|>
|
arxiv
|
@article{zhao2024generative,
title={Generative AI Is Not Ready for Clinical Use in Patient Education for
Lower Back Pain Patients, Even With Retrieval-Augmented Generation},
author={Yi-Fei Zhao, Allyn Bove, David Thompson, James Hill, Yi Xu, Yufan Ren,
Andrea Hassman, Leming Zhou, Yanshan Wang},
journal={arXiv preprint arXiv:2409.15260},
year={2024},
archivePrefix={arXiv},
eprint={2409.15260},
primaryClass={cs.AI cs.IR}
}
|
zhao2024generative
|
arxiv-660912
|
2409.15261
|
Identification and Localization of Cometary Activity in Solar System Objects with Machine Learning
|
<|reference_start|>Identification and Localization of Cometary Activity in Solar System Objects with Machine Learning: In this chapter, we will discuss the use of Machine Learning methods for the identification and localization of cometary activity for Solar System objects in ground and in space-based wide-field all-sky surveys. We will begin the chapter by discussing the challenges of identifying known and unknown active, extended Solar System objects in the presence of stellar-type sources and the application of classical pre-ML identification techniques and their limitations. We will then transition to the discussion of implementing ML techniques to address the challenge of extended object identification. We will finish with prospective future methods and the application to future surveys such as the Vera C. Rubin Observatory.<|reference_end|>
|
arxiv
|
@article{bolin2024identification,
title={Identification and Localization of Cometary Activity in Solar System
Objects with Machine Learning},
author={Bryce T. Bolin, Michael W. Coughlin},
journal={arXiv preprint arXiv:2409.15261},
year={2024},
archivePrefix={arXiv},
eprint={2409.15261},
primaryClass={astro-ph.EP astro-ph.IM cs.AI cs.LG}
}
|
bolin2024identification
|
arxiv-660913
|
2409.15263
|
The Palomar twilight survey of 'Ayl\'o'chaxnim, Atiras, and comets
|
<|reference_start|>The Palomar twilight survey of 'Ayl\'o'chaxnim, Atiras, and comets: Near-sun sky twilight observations allow for the detection of asteroids interior to the orbits of Venus (Aylos) and the Earth (Atiras), as well as comets. We present the results of observations with the Palomar 48-inch telescope (P48)/Zwicky Transient Facility (ZTF) camera in 30 s r-band exposures taken during evening astronomical twilight from 2019 Sep 20 to 2022 March 7 and during morning astronomical twilight from 2019 Sep 21 to 2022 Sep 29. More than 46,000 exposures were taken in evening and morning astronomical twilight within 31 to 66 degrees from the Sun with an r-band limiting magnitude between 18.1 and 20.9. The twilight pointings show a slight seasonal dependence in limiting magnitude and ability to point closer towards the Sun, with limiting magnitude slightly improving during summer. In total, one Aylo, (594913) 'Ayl\'o'chaxnim, and 4 Atiras, 2020 OV1, 2021 BS1, 2021 PB2, and 2021 VR3, were discovered in evening and morning twilight observations. Additional twilight survey discoveries also include 6 long-period comets: C/2020 T2, C/2020 V2, C/2021 D2, C/2021 E3, C/2022 E3, and C/2022 P3, and two short-period comets: P/2021 N1 and P/2022 P2, using deep learning comet detection pipelines. The P48/ZTF twilight survey also recovered 11 known Atiras, one Aylo, three short-period comets, two long-period comets, and one interstellar object. Lastly, the Vera Rubin Observatory will conduct a twilight survey starting in its first year of operations and will cover the sky within 45 degrees of the Sun. Twilight surveys such as those by ZTF and future surveys will provide opportunities for discovering asteroids inside the orbits of Earth and Venus.<|reference_end|>
|
arxiv
|
@article{bolin2024the,
title={The Palomar twilight survey of 'Ayl\'o'chaxnim, Atiras, and comets},
author={B. T. Bolin, F. J. Masci, M. W. Coughlin, D. A. Duev, \v{Z}. Ivezi\'c,
R. L. Jones, P. Yoachim, T. Ahumada, V. Bhalerao, H. Choudhary, C. Contreras,
Y.-C. Cheng, C.M. Copperwheat, K. Deshmukh, C. Fremling, M. Granvik, K. K.
Hardegree-Ullman, A. Y. Q. Ho, R. Jedicke, M. Kasliwal, H. Kumar, Z.-Y. Lin,
A. Mahabal, A. Monson, J.D. Neill, D. Nesvorn\'y, D. A. Perley, J. N. Purdum,
R. Quimby, E. Serabyn, K. Sharma, and V. Swain},
journal={arXiv preprint arXiv:2409.15263},
year={2024},
archivePrefix={arXiv},
eprint={2409.15263},
primaryClass={astro-ph.EP astro-ph.IM cs.AI cs.LG}
}
|
bolin2024the
|
arxiv-660914
|
2409.15264
|
UDA-Bench: Revisiting Common Assumptions in Unsupervised Domain Adaptation Using a Standardized Framework
|
<|reference_start|>UDA-Bench: Revisiting Common Assumptions in Unsupervised Domain Adaptation Using a Standardized Framework: In this work, we take a deeper look into the diverse factors that influence the efficacy of modern unsupervised domain adaptation (UDA) methods using a large-scale, controlled empirical study. To facilitate our analysis, we first develop UDA-Bench, a novel PyTorch framework that standardizes training and evaluation for domain adaptation enabling fair comparisons across several UDA methods. Using UDA-Bench, our comprehensive empirical study into the impact of backbone architectures, unlabeled data quantity, and pre-training datasets reveals that: (i) the benefits of adaptation methods diminish with advanced backbones, (ii) current methods underutilize unlabeled data, and (iii) pre-training data significantly affects downstream adaptation in both supervised and self-supervised settings. In the context of unsupervised adaptation, these observations uncover several novel and surprising properties, while scientifically validating several others that were often considered empirical heuristics or practitioner intuitions in the absence of a standardized training and evaluation framework. The UDA-Bench framework and trained models are publicly available at https://github.com/ViLab-UCSD/UDABench_ECCV2024.<|reference_end|>
|
arxiv
|
@article{kalluri2024uda-bench:,
title={UDA-Bench: Revisiting Common Assumptions in Unsupervised Domain
Adaptation Using a Standardized Framework},
author={Tarun Kalluri, Sreyas Ravichandran, Manmohan Chandraker},
journal={arXiv preprint arXiv:2409.15264},
year={2024},
archivePrefix={arXiv},
eprint={2409.15264},
primaryClass={cs.LG cs.CV}
}
|
kalluri2024uda-bench:
|
arxiv-660915
|
2409.15267
|
Peer-to-Peer Learning Dynamics of Wide Neural Networks
|
<|reference_start|>Peer-to-Peer Learning Dynamics of Wide Neural Networks: Peer-to-peer learning is an increasingly popular framework that enables beyond-5G distributed edge devices to collaboratively train deep neural networks in a privacy-preserving manner without the aid of a central server. Neural network training algorithms for emerging environments, e.g., smart cities, have many design considerations that are difficult to tune in deployment settings -- such as neural network architectures and hyperparameters. This presents a critical need for characterizing the training dynamics of distributed optimization algorithms used to train highly nonconvex neural networks in peer-to-peer learning environments. In this work, we provide an explicit, non-asymptotic characterization of the learning dynamics of wide neural networks trained using popular distributed gradient descent (DGD) algorithms. Our results leverage both recent advancements in neural tangent kernel (NTK) theory and extensive previous work on distributed learning and consensus. We validate our analytical results by accurately predicting the parameter and error dynamics of wide neural networks trained for classification tasks.<|reference_end|>
|
arxiv
|
@article{chaudhari2024peer-to-peer,
title={Peer-to-Peer Learning Dynamics of Wide Neural Networks},
author={Shreyas Chaudhari, Srinivasa Pranav, Emile Anand, Jos\'e M. F. Moura},
journal={arXiv preprint arXiv:2409.15267},
year={2024},
archivePrefix={arXiv},
eprint={2409.15267},
primaryClass={cs.LG cs.SY eess.SY}
}
|
chaudhari2024peer-to-peer
|
arxiv-660916
|
2409.15268
|
Style Outweighs Substance: Failure Modes of LLM Judges in Alignment Benchmarking
|
<|reference_start|>Style Outweighs Substance: Failure Modes of LLM Judges in Alignment Benchmarking: The release of ChatGPT in November 2022 sparked an explosion of interest in post-training and an avalanche of new preference optimization (PO) methods. These methods claim superior alignment by virtue of better correspondence with human pairwise preferences, often measured by LLM-judges. In this work, we attempt to answer the following question -- do LLM-judge preferences translate to progress on other, more concrete metrics for alignment, and if not, why not? We define a concrete metric for alignment, and introduce SOS-Bench (Substance Outweighs Style Benchmark), which is to the best of our knowledge the largest standardized, reproducible LLM meta-benchmark to date. We find that (1) LLM-judge preferences do not correlate with concrete measures of safety, world knowledge, and instruction following; (2) LLM-judges have powerful implicit biases, prioritizing style over factuality and safety; and (3) the supervised fine-tuning (SFT) stage of post-training, and not the PO stage, has the greatest impact on alignment, with data scaling and prompt diversity as the driving factors. Our codebase and complete results can be found at https://github.com/penfever/sos-bench.<|reference_end|>
|
arxiv
|
@article{feuer2024style,
title={Style Outweighs Substance: Failure Modes of LLM Judges in Alignment
Benchmarking},
author={Benjamin Feuer, Micah Goldblum, Teresa Datta, Sanjana Nambiar, Raz
Besaleli, Samuel Dooley, Max Cembalest, John P. Dickerson},
journal={arXiv preprint arXiv:2409.15268},
year={2024},
archivePrefix={arXiv},
eprint={2409.15268},
primaryClass={cs.LG cs.AI}
}
|
feuer2024style
|
arxiv-660917
|
2409.15269
|
ReLoo: Reconstructing Humans Dressed in Loose Garments from Monocular Video in the Wild
|
<|reference_start|>ReLoo: Reconstructing Humans Dressed in Loose Garments from Monocular Video in the Wild: While previous years have seen great progress in the 3D reconstruction of humans from monocular videos, few of the state-of-the-art methods are able to handle loose garments that exhibit large non-rigid surface deformations during articulation. This limits the application of such methods to humans that are dressed in standard pants or T-shirts. Our method, ReLoo, overcomes this limitation and reconstructs high-quality 3D models of humans dressed in loose garments from monocular in-the-wild videos. To tackle this problem, we first establish a layered neural human representation that decomposes clothed humans into a neural inner body and outer clothing. On top of the layered neural representation, we further introduce a non-hierarchical virtual bone deformation module for the clothing layer that can freely move, which allows the accurate recovery of non-rigidly deforming loose clothing. A global optimization jointly optimizes the shape, appearance, and deformations of the human body and clothing via multi-layer differentiable volume rendering. To evaluate ReLoo, we record subjects with dynamically deforming garments in a multi-view capture studio. This evaluation, both on existing and our novel dataset, demonstrates ReLoo's clear superiority over prior art on both indoor datasets and in-the-wild videos.<|reference_end|>
|
arxiv
|
@article{guo2024reloo:,
title={ReLoo: Reconstructing Humans Dressed in Loose Garments from Monocular
Video in the Wild},
author={Chen Guo, Tianjian Jiang, Manuel Kaufmann, Chengwei Zheng, Julien
Valentin, Jie Song, Otmar Hilliges},
journal={arXiv preprint arXiv:2409.15269},
year={2024},
archivePrefix={arXiv},
eprint={2409.15269},
primaryClass={cs.CV}
}
|
guo2024reloo:
|
arxiv-660918
|
2409.15272
|
OmniBench: Towards The Future of Universal Omni-Language Models
|
<|reference_start|>OmniBench: Towards The Future of Universal Omni-Language Models: Recent advancements in multimodal large language models (MLLMs) have aimed to integrate and interpret data across diverse modalities. However, the capacity of these models to concurrently process and reason about multiple modalities remains inadequately explored, partly due to the lack of comprehensive modality-wise benchmarks. We introduce OmniBench, a novel benchmark designed to rigorously evaluate models' ability to recognize, interpret, and reason across visual, acoustic, and textual inputs simultaneously. We define models capable of such tri-modal processing as omni-language models (OLMs). OmniBench is distinguished by high-quality human annotations, ensuring that accurate responses require integrated understanding and reasoning across all three modalities. Our main findings reveal that: i) most OLMs exhibit critical limitations in instruction-following and reasoning capabilities within tri-modal contexts; and ii) most baseline models perform poorly (below 50\% accuracy) even when provided with alternative textual representations of images and/or audio. These results suggest that the ability to construct a consistent context from text, image, and audio is often overlooked in existing MLLM training paradigms. To address this gap, we curate an instruction tuning dataset of 84.5K training samples, OmniInstruct, for training OLMs to adapt to multimodal contexts. We advocate for future research to focus on developing more robust tri-modal integration techniques and training strategies to enhance OLM performance across diverse modalities. The codes and live leaderboard can be found at https://m-a-p.ai/OmniBench.<|reference_end|>
|
arxiv
|
@article{li2024omnibench:,
title={OmniBench: Towards The Future of Universal Omni-Language Models},
author={Yizhi Li, Ge Zhang, Yinghao Ma, Ruibin Yuan, Kang Zhu, Hangyu Guo,
Yiming Liang, Jiaheng Liu, Zekun Wang, Jian Yang, Siwei Wu, Xingwei Qu,
Jinjie Shi, Xinyue Zhang, Zhenzhu Yang, Xiangzhou Wang, Zhaoxiang Zhang,
Zachary Liu, Emmanouil Benetos, Wenhao Huang, Chenghua Lin},
journal={arXiv preprint arXiv:2409.15272},
year={2024},
archivePrefix={arXiv},
eprint={2409.15272},
primaryClass={cs.CL cs.AI cs.CV}
}
|
li2024omnibench:
|
arxiv-660919
|
2409.15273
|
MaterialFusion: Enhancing Inverse Rendering with Material Diffusion Priors
|
<|reference_start|>MaterialFusion: Enhancing Inverse Rendering with Material Diffusion Priors: Recent works in inverse rendering have shown promise in using multi-view images of an object to recover shape, albedo, and materials. However, the recovered components often fail to render accurately under new lighting conditions due to the intrinsic challenge of disentangling albedo and material properties from input images. To address this challenge, we introduce MaterialFusion, an enhanced conventional 3D inverse rendering pipeline that incorporates a 2D prior on texture and material properties. We present StableMaterial, a 2D diffusion model prior that refines multi-lit data to estimate the most likely albedo and material from given input appearances. This model is trained on albedo, material, and relit image data derived from a curated dataset of approximately 12K artist-designed synthetic Blender objects called BlenderVault. We incorporate this diffusion prior with an inverse rendering framework where we use score distillation sampling (SDS) to guide the optimization of the albedo and materials, improving relighting performance in comparison with previous work. We validate MaterialFusion's relighting performance on 4 datasets of synthetic and real objects under diverse illumination conditions, showing our diffusion-aided approach significantly improves the appearance of reconstructed objects under novel lighting conditions. We intend to publicly release our BlenderVault dataset to support further research in this field.<|reference_end|>
|
arxiv
|
@article{litman2024materialfusion:,
title={MaterialFusion: Enhancing Inverse Rendering with Material Diffusion
Priors},
author={Yehonathan Litman, Or Patashnik, Kangle Deng, Aviral Agrawal,
Rushikesh Zawar, Fernando De la Torre, Shubham Tulsiani},
journal={arXiv preprint arXiv:2409.15273},
year={2024},
archivePrefix={arXiv},
eprint={2409.15273},
primaryClass={cs.CV}
}
|
litman2024materialfusion:
|
arxiv-660920
|
2409.15277
|
A Preliminary Study of o1 in Medicine: Are We Closer to an AI Doctor?
|
<|reference_start|>A Preliminary Study of o1 in Medicine: Are We Closer to an AI Doctor?: Large language models (LLMs) have exhibited remarkable capabilities across various domains and tasks, pushing the boundaries of our knowledge in learning and cognition. The latest model, OpenAI's o1, stands out as the first LLM with an internalized chain-of-thought technique using reinforcement learning strategies. While it has demonstrated surprisingly strong capabilities on various general language tasks, its performance in specialized fields such as medicine remains unknown. To this end, this report provides a comprehensive exploration of o1 on different medical scenarios, examining 3 key aspects: understanding, reasoning, and multilinguality. Specifically, our evaluation encompasses 6 tasks using data from 37 medical datasets, including two newly constructed and more challenging question-answering (QA) tasks based on professional medical quizzes from the New England Journal of Medicine (NEJM) and The Lancet. These datasets offer greater clinical relevance compared to standard medical QA benchmarks such as MedQA, translating more effectively into real-world clinical utility. Our analysis of o1 suggests that the enhanced reasoning ability of LLMs may (significantly) benefit their capability to understand various medical instructions and reason through complex clinical scenarios. Notably, o1 surpasses the previous GPT-4 in accuracy by an average of 6.2% and 6.6% across 19 datasets and two newly created complex QA scenarios. But meanwhile, we identify several weaknesses in both the model capability and the existing evaluation protocols, including hallucination, inconsistent multilingual ability, and discrepant metrics for evaluation. We release our raw data and model outputs at https://ucsc-vlaa.github.io/o1_medicine/ for future research.<|reference_end|>
|
arxiv
|
@article{xie2024a,
title={A Preliminary Study of o1 in Medicine: Are We Closer to an AI Doctor?},
author={Yunfei Xie, Juncheng Wu, Haoqin Tu, Siwei Yang, Bingchen Zhao,
Yongshuo Zong, Qiao Jin, Cihang Xie, Yuyin Zhou},
journal={arXiv preprint arXiv:2409.15277},
year={2024},
archivePrefix={arXiv},
eprint={2409.15277},
primaryClass={cs.CL cs.AI}
}
|
xie2024a
|
arxiv-660921
|
2409.15278
|
PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions
|
<|reference_start|>PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions: This paper presents a versatile image-to-image visual assistant, PixWizard, designed for image generation, manipulation, and translation based on free-form language instructions. To this end, we cast a variety of vision tasks into a unified image-text-to-image generation framework and curate an Omni Pixel-to-Pixel Instruction-Tuning Dataset. By constructing detailed instruction templates in natural language, we comprehensively include a large set of diverse vision tasks such as text-to-image generation, image restoration, image grounding, dense image prediction, image editing, controllable generation, inpainting/outpainting, and more. Furthermore, we adopt Diffusion Transformers (DiT) as our foundation model and extend its capabilities with a flexible any resolution mechanism, enabling the model to dynamically process images based on the aspect ratio of the input, closely aligning with human perceptual processes. The model also incorporates structure-aware and semantic-aware guidance to facilitate effective fusion of information from the input image. Our experiments demonstrate that PixWizard not only shows impressive generative and understanding abilities for images with diverse resolutions but also exhibits promising generalization capabilities with unseen tasks and human instructions. The code and related resources are available at https://github.com/AFeng-x/PixWizard<|reference_end|>
|
arxiv
|
@article{lin2024pixwizard:,
title={PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language
Instructions},
author={Weifeng Lin, Xinyu Wei, Renrui Zhang, Le Zhuo, Shitian Zhao, Siyuan
Huang, Junlin Xie, Yu Qiao, Peng Gao, Hongsheng Li},
journal={arXiv preprint arXiv:2409.15278},
year={2024},
archivePrefix={arXiv},
eprint={2409.15278},
primaryClass={cs.CV}
}
|
lin2024pixwizard:
|
arxiv-660922
|
2409.15281
|
LAAG-RV: LLM Assisted Assertion Generation for RTL Design Verification
|
<|reference_start|>LAAG-RV: LLM Assisted Assertion Generation for RTL Design Verification: Writing SystemVerilog Assertions (SVA) is an important but complex step in verifying Register Transfer Level (RTL) designs. Conventionally, experts need to understand the design specifications and write the SVA assertions, which is time-consuming and error-prone. However, with the recent advancement of transformer models, Large Language Model (LLM)-assisted assertion generation for design verification has been gaining interest. Motivated by this, we propose a novel LLM-based framework, LAAG-RV, to generate SVA from the natural language specifications of the design. Our framework provides a one-time Verilog loop for signal synchronization in the generated SVA to improve the generated assertion quality. For our experiments, we created a custom LLM based on OpenAI GPT-4. Furthermore, we developed test cases to validate the LLM-generated assertions. Initial observations show that some generated assertions contained issues and did not pass all the test cases. However, by iteratively prompting the LLMs using carefully crafted manual prompts derived from test case failures in a simulator, the framework can generate correct SVAs. Our results on OpenTitan designs demonstrate that LLMs significantly simplify the process of generating assertions, making it efficient and less error-prone.<|reference_end|>
|
arxiv
|
@article{maddala2024laag-rv:,
title={LAAG-RV: LLM Assisted Assertion Generation for RTL Design Verification},
author={Karthik Maddala, Bhabesh Mali, Chandan Karfa},
journal={arXiv preprint arXiv:2409.15281},
year={2024},
archivePrefix={arXiv},
eprint={2409.15281},
primaryClass={cs.AR cs.ET}
}
|
maddala2024laag-rv:
|
arxiv-660923
|
2409.15282
|
Modelling Fire Incidents Response Times in {\AA}lesund
|
<|reference_start|>Modelling Fire Incidents Response Times in {\AA}lesund: In the ESGI-156 project, together with {\AA}lesund Brannvesen, we develop a model for response times to fire incidents based on publicly available data for {\AA}lesund. We investigate different scenarios and take a first step towards interactive software for illustrating the response times.<|reference_end|>
|
arxiv
|
@article{christmas2024modelling,
title={Modelling Fire Incidents Response Times in {\AA}lesund},
author={J. Christmas (1), R. Bergmann (2), A. Zhakatayev (3), J. Rebenda (3
and 4), S. Singh (2) ((1) University of Exeter, UK, (2) NTNU Trondheim,
Norway, (3) University of Agder, Norway, (4) Brno University of Technology,
Czech Republic)},
journal={arXiv preprint arXiv:2409.15282},
year={2024},
archivePrefix={arXiv},
eprint={2409.15282},
primaryClass={cs.CE cs.CY}
}
|
christmas2024modelling
|
arxiv-660924
|
2409.15283
|
Equivariance-based self-supervised learning for audio signal recovery from clipped measurements
|
<|reference_start|>Equivariance-based self-supervised learning for audio signal recovery from clipped measurements: In numerous inverse problems, state-of-the-art solving strategies involve training neural networks from ground truth and associated measurement datasets that, however, may be expensive or impossible to collect. Recently, self-supervised learning techniques have emerged, with the major advantage of no longer requiring ground truth data. Most theoretical and experimental results on self-supervised learning focus on linear inverse problems. The present work aims to study self-supervised learning for the non-linear inverse problem of recovering audio signals from clipped measurements. An equivariance-based self-supervised loss is proposed and studied. Performance is assessed on simulated clipped measurements with controlled and varied levels of clipping, and further reported on standard real music signals. We show that the performance of the proposed equivariance-based self-supervised declipping strategy compares favorably to fully supervised learning while requiring only clipped measurements for training.<|reference_end|>
|
arxiv
|
@article{sechaud2024equivariance-based,
title={Equivariance-based self-supervised learning for audio signal recovery
from clipped measurements},
author={Victor Sechaud (Phys-ENS), Laurent Jacques (ICTEAM), Patrice Abry
(Phys-ENS), Juli\'an Tachella (Phys-ENS)},
journal={EUSIPCO, Aug 2024, Lyon, France},
year={2024},
archivePrefix={arXiv},
eprint={2409.15283},
primaryClass={eess.AS cs.IR cs.LG cs.SD eess.SP}
}
|
sechaud2024equivariance-based
|
arxiv-660925
|
2409.15284
|
The NGT200 Dataset: Geometric Multi-View Isolated Sign Recognition
|
<|reference_start|>The NGT200 Dataset: Geometric Multi-View Isolated Sign Recognition: Sign Language Processing (SLP) provides a foundation for a more inclusive future in language technology; however, the field faces several significant challenges that must be addressed to achieve practical, real-world applications. This work addresses multi-view isolated sign recognition (MV-ISR), and highlights the essential role of 3D awareness and geometry in SLP systems. We introduce the NGT200 dataset, a novel spatio-temporal multi-view benchmark, establishing MV-ISR as distinct from single-view ISR (SV-ISR). We demonstrate the benefits of synthetic data and propose conditioning sign representations on spatial symmetries inherent in sign language. Leveraging an SE(2) equivariant model improves MV-ISR performance by 8%-22% over the baseline.<|reference_end|>
|
arxiv
|
@article{ranum2024the,
title={The NGT200 Dataset: Geometric Multi-View Isolated Sign Recognition},
author={Oline Ranum and David R. Wessels and Gomer Otterspeer and Erik J.
Bekkers and Floris Roelofsen and Jari I. Andersen},
journal={arXiv preprint arXiv:2409.15284},
year={2024},
archivePrefix={arXiv},
eprint={2409.15284},
primaryClass={cs.CV cs.CL}
}
|
ranum2024the
|
arxiv-660926
|
2409.15287
|
Deciphering Cardiac Destiny: Unveiling Future Risks Through Cutting-Edge Machine Learning Approaches
|
<|reference_start|>Deciphering Cardiac Destiny: Unveiling Future Risks Through Cutting-Edge Machine Learning Approaches: Cardiac arrest remains a leading cause of death worldwide, necessitating proactive measures for early detection and intervention. This project aims to develop and assess predictive models for the timely identification of cardiac arrest incidents, utilizing a comprehensive dataset of clinical parameters and patient histories. Employing machine learning (ML) algorithms like XGBoost, Gradient Boosting, and Naive Bayes, alongside a deep learning (DL) approach with Recurrent Neural Networks (RNNs), we aim to enhance early detection capabilities. Rigorous experimentation and validation revealed the superior performance of the RNN model, which effectively captures complex temporal dependencies within the data. Our findings highlight the efficacy of these models in accurately predicting cardiac arrest likelihood, emphasizing the potential for improved patient care through early risk stratification and personalized interventions. By leveraging advanced analytics, healthcare providers can proactively mitigate cardiac arrest risk, optimize resource allocation, and improve patient outcomes. This research highlights the transformative potential of machine learning and deep learning techniques in managing cardiovascular risk and advances the field of predictive healthcare analytics.<|reference_end|>
|
arxiv
|
@article{divya2024deciphering,
title={Deciphering Cardiac Destiny: Unveiling Future Risks Through Cutting-Edge
Machine Learning Approaches},
author={G.Divya, M.Naga SravanKumar, T.JayaDharani, B.Pavan, K.Praveen},
journal={arXiv preprint arXiv:2409.15287},
year={2024},
archivePrefix={arXiv},
eprint={2409.15287},
primaryClass={cs.CY}
}
|
divya2024deciphering
|
arxiv-660927
|
2409.15289
|
The Computational Mechanisms of Detached Mindfulness
|
<|reference_start|>The Computational Mechanisms of Detached Mindfulness: This paper investigates the computational mechanisms underlying a type of metacognitive monitoring known as detached mindfulness, a particularly effective therapeutic technique within cognitive psychology. While research strongly supports the capacity of detached mindfulness to reduce depression and anxiety, its cognitive and computational underpinnings remain largely unexplained. We employ a computational model of metacognitive skill to articulate the mechanisms through which a detached perception of affect reduces emotional reactivity.<|reference_end|>
|
arxiv
|
@article{conway-smith2024the,
title={The Computational Mechanisms of Detached Mindfulness},
author={Brendan Conway-Smith, Robert L. West},
journal={arXiv preprint arXiv:2409.15289},
year={2024},
archivePrefix={arXiv},
eprint={2409.15289},
primaryClass={q-bio.NC cs.AI}
}
|
conway-smith2024the
|
arxiv-660928
|
2409.15290
|
Broadening Access to Simulations for End-Users via Large Language Models: Challenges and Opportunities
|
<|reference_start|>Broadening Access to Simulations for End-Users via Large Language Models: Challenges and Opportunities: Large Language Models (LLMs) are becoming ubiquitous to create intelligent virtual assistants that assist users in interacting with a system, as exemplified in marketing. Although LLMs have been discussed in Modeling & Simulation (M&S), the community has focused on generating code or explaining results. We examine the possibility of using LLMs to broaden access to simulations, by enabling non-simulation end-users to ask what-if questions in everyday language. Specifically, we discuss the opportunities and challenges in designing such an end-to-end system, divided into three broad phases. First, assuming the general case in which several simulation models are available, textual queries are mapped to the most relevant model. Second, if a mapping cannot be found, the query can be automatically reformulated and clarifying questions can be generated. Finally, simulation results are produced and contextualized for decision-making. Our vision for such system articulates long-term research opportunities spanning M&S, LLMs, information retrieval, and ethics.<|reference_end|>
|
arxiv
|
@article{giabbanelli2024broadening,
title={Broadening Access to Simulations for End-Users via Large Language
Models: Challenges and Opportunities},
author={Philippe J. Giabbanelli, Jose J. Padilla, Ameeta Agrawal},
journal={arXiv preprint arXiv:2409.15290},
year={2024},
archivePrefix={arXiv},
eprint={2409.15290},
primaryClass={cs.HC cs.AI}
}
|
giabbanelli2024broadening
|
arxiv-660929
|
2409.15291
|
Exploring the Feasibility of Multimodal Chatbot AI as Copilot in Pathology Diagnostics: Generalist Model's Pitfall
|
<|reference_start|>Exploring the Feasibility of Multimodal Chatbot AI as Copilot in Pathology Diagnostics: Generalist Model's Pitfall: Pathology images are crucial for diagnosing and managing various diseases by visualizing cellular and tissue-level abnormalities. Recent advancements in artificial intelligence (AI), particularly multimodal models like ChatGPT, have shown promise in transforming medical image analysis through capabilities such as medical vision-language question answering. However, there remains a significant gap in integrating pathology image data with these AI models for clinical applications. This study benchmarks the performance of GPT on pathology images, assessing its diagnostic accuracy and efficiency on real-world clinical records. We observe significant deficits of GPT in bone diseases and fair performance in diseases from the other three systems. Despite offering satisfactory abnormality annotations, GPT exhibits a consistent disadvantage in terminology accuracy and multimodal integration. Specifically, we demonstrate GPT's failures in interpreting immunohistochemistry results and diagnosing metastatic cancers. This study highlights the weaknesses of the current generalist GPT model and contributes to the integration of pathology and advanced AI.<|reference_end|>
|
arxiv
|
@article{liu2024exploring,
title={Exploring the Feasibility of Multimodal Chatbot AI as Copilot in
Pathology Diagnostics: Generalist Model's Pitfall},
author={Mianxin Liu, Jianfeng Wu, Fang Yan, Hongjun Li, Wei Wang, Shaoting
Zhang, Zhe Wang},
journal={arXiv preprint arXiv:2409.15291},
year={2024},
archivePrefix={arXiv},
eprint={2409.15291},
primaryClass={cs.HC cs.CY}
}
|
liu2024exploring
|
arxiv-660930
|
2409.15292
|
SketcherX: AI-Driven Interactive Robotic drawing with Diffusion model and Vectorization Techniques
|
<|reference_start|>SketcherX: AI-Driven Interactive Robotic drawing with Diffusion model and Vectorization Techniques: We introduce SketcherX, a novel robotic system for personalized portrait drawing through interactive human-robot engagement. Unlike traditional robotic art systems that rely on analog printing techniques, SketcherX captures and processes facial images to produce vectorized drawings in a distinctive, human-like artistic style. The system comprises two 6-axis robotic arms: a face robot, which is equipped with a head-mounted camera and Large Language Model (LLM) for real-time interaction, and a drawing robot, utilizing a fine-tuned Stable Diffusion model, ControlNet, and Vision-Language models for dynamic, stylized drawing. Our contributions include the development of a custom Vector Low Rank Adaptation model (LoRA), enabling seamless adaptation to various artistic styles, and integrating a pair-wise fine-tuning approach to enhance stroke quality and stylistic accuracy. Experimental results demonstrate the system's ability to produce high-quality, personalized portraits within two minutes, highlighting its potential as a new paradigm in robotic creativity. This work advances the field of robotic art by positioning robots as active participants in the creative process, paving the way for future explorations in interactive, human-robot artistic collaboration.<|reference_end|>
|
arxiv
|
@article{song2024sketcherx:,
title={SketcherX: AI-Driven Interactive Robotic drawing with Diffusion model
and Vectorization Techniques},
author={Jookyung Song and Mookyoung Kang and Nojun Kwak},
journal={arXiv preprint arXiv:2409.15292},
year={2024},
archivePrefix={arXiv},
eprint={2409.15292},
primaryClass={cs.RO cs.AI}
}
|
song2024sketcherx:
|
arxiv-660931
|
2409.15293
|
AI and MBTI: A Synergistic Framework for Enhanced Team Dynamics
|
<|reference_start|>AI and MBTI: A Synergistic Framework for Enhanced Team Dynamics: This paper proposes a theoretical framework for understanding and leveraging the synergy between artificial intelligence (AI) and personality types as defined by the Myers-Briggs Type Indicator (MBTI) in organizational team settings. We argue that AI capabilities can complement and enhance different MBTI types, leading to improved team performance. The AI-MBTI Synergy Framework is introduced, focusing on the Intuition-Sensing and Thinking-Feeling dimensions. We present propositions about how AI can augment team dynamics across four team types: Visionary, Strategic, Supportive, and Operational. A novel implementation is proposed to create an intelligent team optimization algorithm. Implications for theory and practice are discussed, along with directions for future research.<|reference_end|>
|
arxiv
|
@article{wang2024ai,
title={AI and MBTI: A Synergistic Framework for Enhanced Team Dynamics},
author={Yue Wang},
journal={arXiv preprint arXiv:2409.15293},
year={2024},
archivePrefix={arXiv},
eprint={2409.15293},
primaryClass={cs.HC}
}
|
wang2024ai
|
arxiv-660932
|
2409.15294
|
Enhancing MBSE Education with Version Control and Automated Feedback
|
<|reference_start|>Enhancing MBSE Education with Version Control and Automated Feedback: This paper presents an innovative approach to conducting a Model-Based Systems Engineering (MBSE) course, engaging over 80 participants annually. The course is structured around collaborative group assignments, where students utilize Enterprise Architect to complete complex systems engineering tasks across six submissions. This year, we introduced several technological advancements to enhance the learning experience, including the use of LemonTree, SmartGit, and GitHub. Students collaborated on shared repositories in GitHub, received continuous feedback via automated checks through LemonTree Automation, and documented their progress with pre-rendered, continuously updating diagrams. Additionally, they managed 2-way and 3-way merges directly in SmartGit, with merge issues, updates, and model statistics readily available for each Work-in-Progress submission. The process of correcting and providing manual feedback was streamlined, thanks to accessible changelogs and renders in GitHub. An end-of-course feedback form revealed high student satisfaction.<|reference_end|>
|
arxiv
|
@article{bajczi2024enhancing,
title={Enhancing MBSE Education with Version Control and Automated Feedback},
author={Levente Bajczi and D'aniel Szekeres and Daniel Siegl and Vince Moln'ar},
journal={arXiv preprint arXiv:2409.15294},
year={2024},
archivePrefix={arXiv},
eprint={2409.15294},
primaryClass={cs.CY}
}
|
bajczi2024enhancing
|
arxiv-660933
|
2409.15295
|
Reservoir Static Property Estimation Using Nearest-Neighbor Neural Network
|
<|reference_start|>Reservoir Static Property Estimation Using Nearest-Neighbor Neural Network: This note presents an approach for estimating the spatial distribution of static properties in reservoir modeling using a nearest-neighbor neural network. The method leverages the strengths of neural networks in approximating complex, non-linear functions, particularly for tasks involving spatial interpolation. It incorporates a nearest-neighbor algorithm to capture local spatial relationships between data points and introduces randomization to quantify the uncertainty inherent in the interpolation process. This approach addresses the limitations of traditional geostatistical methods, such as Inverse Distance Weighting (IDW) and Kriging, which often fail to model the complex non-linear dependencies in reservoir data. By integrating spatial proximity and uncertainty quantification, the proposed method can improve the accuracy of static property predictions like porosity and permeability.<|reference_end|>
|
arxiv
|
@article{wang2024reservoir,
title={Reservoir Static Property Estimation Using Nearest-Neighbor Neural
Network},
author={Yuhe Wang},
journal={arXiv preprint arXiv:2409.15295},
year={2024},
archivePrefix={arXiv},
eprint={2409.15295},
primaryClass={cs.LG physics.data-an stat.AP}
}
|
wang2024reservoir
|
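The abstract above describes the core recipe concretely enough to sketch: build features from the k nearest wells, fit a neural network, and use randomization to quantify interpolation uncertainty. The sketch below is a minimal interpretation under assumed choices (feature design, network size, and a resampling ensemble are illustrative, not the paper's configuration) using scikit-learn.

```python
# Minimal sketch of nearest-neighbor-assisted spatial interpolation with an
# MLP, loosely following the idea in arXiv:2409.15295. Synthetic data only.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic well data: (x, y) locations with a measured porosity value.
X_wells = rng.uniform(0, 1000, size=(200, 2))
porosity = 0.2 + 0.05 * np.sin(X_wells[:, 0] / 150) + 0.01 * rng.standard_normal(200)

def knn_features(X_query, X_ref, v_ref, k=8, exclude_self=False):
    """Stack offsets, values, and distances of each query's k nearest wells."""
    nn = NearestNeighbors(n_neighbors=k + int(exclude_self)).fit(X_ref)
    dist, idx = nn.kneighbors(X_query)
    if exclude_self:                       # drop the zero-distance self match
        dist, idx = dist[:, 1:], idx[:, 1:]
    offsets = (X_ref[idx] - X_query[:, None, :]).reshape(len(X_query), -1)
    return np.concatenate([offsets, v_ref[idx], dist], axis=1)

# Ensemble over random resamples of the wells to quantify uncertainty.
X_grid = np.stack(np.meshgrid(np.linspace(0, 1000, 50),
                              np.linspace(0, 1000, 50)), -1).reshape(-1, 2)
preds = []
for seed in range(10):
    sub = rng.choice(len(X_wells), size=160, replace=False)
    Xr, vr = X_wells[sub], porosity[sub]
    F_train = knn_features(Xr, Xr, vr, exclude_self=True)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=seed).fit(F_train, vr)
    preds.append(model.predict(knn_features(X_grid, Xr, vr)))

mean_map = np.mean(preds, axis=0)      # interpolated porosity field
std_map = np.std(preds, axis=0)        # spread across resamples ~ uncertainty
print(mean_map.shape, std_map.mean())
```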
arxiv-660934
|
2409.15296
|
Artificial Intelligence in Education: Ethical Considerations and Insights from Ancient Greek Philosophy
|
<|reference_start|>Artificial Intelligence in Education: Ethical Considerations and Insights from Ancient Greek Philosophy: This paper explores the ethical implications of integrating Artificial Intelligence (AI) in educational settings, from primary schools to universities, while drawing insights from ancient Greek philosophy to address emerging concerns. As AI technologies increasingly influence learning environments, they offer novel opportunities for personalized learning, efficient assessment, and data-driven decision-making. However, these advancements also raise critical ethical questions regarding data privacy, algorithmic bias, student autonomy, and the changing roles of educators. This research examines specific use cases of AI in education, analyzing both their potential benefits and drawbacks. By revisiting the philosophical principles of ancient Greek thinkers such as Socrates, Aristotle, and Plato, we discuss how their writings can guide the ethical implementation of AI in modern education. The paper argues that while AI presents significant challenges, a balanced approach informed by classical philosophical thought can lead to an ethically sound transformation of education. It emphasizes the evolving role of teachers as facilitators and the importance of fostering student initiative in AI-rich environments.<|reference_end|>
|
arxiv
|
@article{karpouzis2024artificial,
title={Artificial Intelligence in Education: Ethical Considerations and
Insights from Ancient Greek Philosophy},
author={Kostas Karpouzis},
journal={arXiv preprint arXiv:2409.15296},
year={2024},
archivePrefix={arXiv},
eprint={2409.15296},
primaryClass={cs.CY}
}
|
karpouzis2024artificial
|
arxiv-660935
|
2409.15298
|
Sorbet: A Neuromorphic Hardware-Compatible Transformer-Based Spiking Language Model
|
<|reference_start|>Sorbet: A Neuromorphic Hardware-Compatible Transformer-Based Spiking Language Model: For reasons such as privacy, there are use cases for language models at the edge. This has given rise to small language models (SLMs) targeted for deployment in resource-constrained devices where energy efficiency is a significant concern. Spiking neural networks (SNNs) offer a promising solution due to their energy efficiency, and there are already works on realizing transformer-based models on SNNs. However, key operations like softmax and layer normalization (LN) are difficult to implement on neuromorphic hardware, and many of these early works sidestepped them. To address these challenges, we introduce Sorbet, a transformer-based spiking language model that is more neuromorphic hardware-compatible. Sorbet incorporates a novel shifting-based softmax called PTsoftmax and a power normalization method using bit-shifting (BSPN), both designed to replace the respective energy-intensive operations. By leveraging knowledge distillation and model quantization, Sorbet achieved a highly compressed binary weight model that maintains competitive performance while significantly reducing energy consumption. We validate Sorbet's effectiveness through extensive testing on the GLUE benchmark and a series of ablation studies, demonstrating its potential as an energy-efficient solution for language model inference.<|reference_end|>
|
arxiv
|
@article{tang2024sorbet:,
title={Sorbet: A Neuromorphic Hardware-Compatible Transformer-Based Spiking
Language Model},
author={Kaiwen Tang and Zhanglu Yan and Weng-Fai Wong},
journal={arXiv preprint arXiv:2409.15298},
year={2024},
archivePrefix={arXiv},
eprint={2409.15298},
primaryClass={cs.NE cs.CL cs.LG}
}
|
tang2024sorbet:
|
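The Sorbet abstract above does not give the exact form of PTsoftmax, so the following is only a hedged sketch of the general trick it names: replacing the exponential in softmax with powers of two so the kernel reduces to integer bit shifts. The fixed-point width and the base-2 substitution are illustrative assumptions, not the paper's operator.

```python
# A hedged sketch of a shifting-based softmax in the spirit of Sorbet's
# PTsoftmax (arXiv:2409.15298). Not the paper's exact formulation: this
# simply computes softmax2(x)_i = 2^(x_i - x_max) / sum_j 2^(x_j - x_max)
# on integer logits, realizing each power of two as a right shift.
import numpy as np

def shift_softmax(logits_int, frac_bits=12):
    """Base-2 softmax over integer logits using only shifts and adds."""
    x = np.asarray(logits_int, dtype=np.int64)
    shifts = x.max() - x                      # non-negative shift amounts
    one = 1 << frac_bits                      # fixed-point representation of 1.0
    nums = one >> np.minimum(shifts, 63)      # 2^(-shift) in fixed point
    return nums / nums.sum()                  # normalization (float here for clarity)

print(shift_softmax([5, 3, 3, 0]))            # heaviest mass on the max logit
```

A hardware kernel would keep the final normalization in fixed point as well; the float division above is only for readability.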
arxiv-660936
|
2409.15299
|
Irrelevant Alternatives Bias Large Language Model Hiring Decisions
|
<|reference_start|>Irrelevant Alternatives Bias Large Language Model Hiring Decisions: We investigate whether LLMs display a well-known human cognitive bias, the attraction effect, in hiring decisions. The attraction effect occurs when the presence of an inferior candidate makes a superior candidate more appealing, increasing the likelihood of the superior candidate being chosen over a non-dominated competitor. Our study finds consistent and significant evidence of the attraction effect in GPT-3.5 and GPT-4 when they assume the role of a recruiter. Irrelevant attributes of the decoy, such as its gender, further amplify the observed bias. GPT-4 exhibits greater bias variation than GPT-3.5. Our findings remain robust even when warnings against the decoy effect are included and the recruiter role definition is varied.<|reference_end|>
|
arxiv
|
@article{valkanova2024irrelevant,
title={Irrelevant Alternatives Bias Large Language Model Hiring Decisions},
author={Kremena Valkanova and Pencho Yordanov},
journal={arXiv preprint arXiv:2409.15299},
year={2024},
archivePrefix={arXiv},
eprint={2409.15299},
primaryClass={cs.CY cs.AI cs.HC}
}
|
valkanova2024irrelevant
|
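The decoy design described in the abstract above is straightforward to reproduce. Below is an illustrative probe: ask a model to pick between two non-dominated candidates, with and without a dominated decoy, and compare choice shares. The prompt wording, candidate attributes, and model name are placeholders rather than the paper's materials; it uses the OpenAI Python client and requires OPENAI_API_KEY.

```python
# Illustrative sketch of an attraction-effect (decoy) probe in the spirit of
# arXiv:2409.15299. All prompt text and attributes below are hypothetical.
from collections import Counter
from openai import OpenAI

client = OpenAI()

TARGET = "Candidate A: 8 years experience, 70/100 coding test"
COMPETITOR = "Candidate B: 4 years experience, 90/100 coding test"
DECOY = "Candidate C: 7 years experience, 60/100 coding test"  # dominated by A

def ask(candidates, n=20, model="gpt-4o-mini"):
    prompt = ("You are a recruiter. Choose exactly one candidate to hire "
              "and answer with the letter only.\n" + "\n".join(candidates))
    picks = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model=model, temperature=1.0,
            messages=[{"role": "user", "content": prompt}])
        picks.append(resp.choices[0].message.content.strip()[:1])
    return Counter(picks)

control = ask([TARGET, COMPETITOR])           # two-option baseline
treatment = ask([TARGET, COMPETITOR, DECOY])  # decoy present
# An attraction effect shows up as a higher share of "A" in the treatment.
print(control, treatment)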
arxiv-660937
|
2409.15300
|
Learning Task-Based Trainable Neuromorphic ADCs via Power-Aware Distillation
|
<|reference_start|>Learning Task-Based Trainable Neuromorphic ADCs via Power-Aware Distillation: The ability to process signals in digital form depends on analog-to-digital converters (ADCs). Traditionally, ADCs are designed to ensure that the digital representation closely matches the analog signal. However, recent studies have shown that significant power and memory savings can be achieved through task-based acquisition, where the acquisition process is tailored to the downstream processing task. An emerging technology for task-based acquisition involves the use of memristors, which are considered key enablers for neuromorphic computing. Memristors can implement ADCs with tunable mappings, allowing adaptation to specific system tasks or power constraints. In this work, we study task-based acquisition for a generic classification task using memristive ADCs. We consider the unique characteristics of such neuromorphic ADCs, including their power consumption and noisy read-write behavior, and propose a physically compliant model based on resistive successive approximation register ADCs integrated with memristor components, enabling the adjustment of quantization regions. To optimize performance, we introduce a data-driven algorithm that jointly tunes task-based memristive ADCs alongside both digital and analog processing. Our design addresses the inherent stochasticity of memristors through power-aware distillation, complemented by a specialized learning algorithm that adapts to their unique analog-to-digital mapping. The proposed approach is shown to enhance accuracy by up to 27% and reduce power consumption by up to 66% compared to uniform ADCs. Even under noisy conditions, our method achieves substantial gains, with accuracy improvements of up to 19% and power reductions of up to 57%. These results highlight the effectiveness of our power-aware neuromorphic ADCs in improving system performance across diverse tasks.<|reference_end|>
|
arxiv
|
@article{vol2024learning,
title={Learning Task-Based Trainable Neuromorphic ADCs via Power-Aware
Distillation},
author={Tal Vol and Loai Danial and Nir Shlezinger},
journal={arXiv preprint arXiv:2409.15300},
year={2024},
archivePrefix={arXiv},
eprint={2409.15300},
primaryClass={cs.NE}
}
|
vol2024learning
|
arxiv-660938
|
2409.15301
|
Derangetropy in Probability Distributions and Information Dynamics
|
<|reference_start|>Derangetropy in Probability Distributions and Information Dynamics: We introduce derangetropy, a novel functional measure designed to characterize the dynamics of information within probability distributions. Unlike scalar measures such as Shannon entropy, derangetropy offers a functional representation that captures the dispersion of information across the entire support of a distribution. By incorporating self-referential and periodic properties, it provides deeper insights into information dynamics governed by differential equations and equilibrium states. Through combinatorial justifications and empirical analysis, we demonstrate the utility of derangetropy in depicting distribution behavior and evolution, providing a new tool for analyzing complex and hierarchical systems in information theory.<|reference_end|>
|
arxiv
|
@article{ataei2024derangetropy,
title={Derangetropy in Probability Distributions and Information Dynamics},
author={Masoud Ataei and Xiaogang Wang},
journal={arXiv preprint arXiv:2409.15301},
year={2024},
archivePrefix={arXiv},
eprint={2409.15301},
primaryClass={cs.IT math.IT}
}
|
ataei2024derangetropy
|
arxiv-660939
|
2409.15303
|
Disruptive RIS for Enhancing Key Generation and Secret Transmission in Low-Entropy Environments
|
<|reference_start|>Disruptive RIS for Enhancing Key Generation and Secret Transmission in Low-Entropy Environments: Key generation, a pillar in physical-layer security (PLS), is the process of exchanging signals between two legitimate users (Alice and Bob) to extract a common key from the random, common channels. The drawback of extracting keys from wireless channels is the heavy dependence on the dynamicity and fluctuations of the radio channel, rendering the key vulnerable to estimation by Eve (an illegitimate user) in low-entropy environments because of insufficient randomness. Added to that, the lack of channel fluctuations lowers the secret key rate (SKR), defined as the number of bits of key generated per channel use. In this work, we aim to address this challenge by using a reconfigurable intelligent surface (RIS) to produce random phases at certain, carefully curated intervals such that it disrupts the channel in low-entropy environments. We propose an RIS-assisted key generation protocol, study its performance, and compare it with benchmarks to observe the benefit of using an RIS while considering various important metrics such as key mismatch rate and secret key throughput. Furthermore, we characterize a scaling law as a function of the rate of change of RIS phase switching for the average secret information rate under this protocol. Then, we use both the key throughput and information rate to optimize the overall secrecy rate. Simulations are made to validate our theoretical findings and the effectiveness of the proposed scheme, showing an improvement in performance when an RIS is deployed.<|reference_end|>
|
arxiv
|
@article{alwazani2024disruptive,
title={Disruptive RIS for Enhancing Key Generation and Secret Transmission in
Low-Entropy Environments},
author={Hibatallah Alwazani and Anas Chaaban},
journal={arXiv preprint arXiv:2409.15303},
year={2024},
archivePrefix={arXiv},
eprint={2409.15303},
primaryClass={cs.IT math.IT}
}
|
alwazani2024disruptive
|
arxiv-660940
|
2409.15304
|
Global Context Enhanced Anomaly Detection of Cyber Attacks via Decoupled Graph Neural Networks
|
<|reference_start|>Global Context Enhanced Anomaly Detection of Cyber Attacks via Decoupled Graph Neural Networks: Recently, there has been a substantial amount of interest in GNN-based anomaly detection. Existing efforts have focused on simultaneously mastering the node representations and the classifier necessary for identifying abnormalities with relatively shallow models to create an embedding. As a result, the existing state-of-the-art models are incapable of capturing nonlinear network information and produce suboptimal outcomes. In this thesis, we deploy decoupled GNNs to overcome this issue. Specifically, we decouple the essential node representations and the classifier for detecting anomalies. In addition, for node representation learning, we develop a GNN architecture with two modules for aggregating node feature information to produce the final node embedding. Finally, we conduct empirical experiments to verify the effectiveness of our proposed approach. The findings demonstrate that decoupled training, along with the global context enhanced representation of the nodes, is superior to the state-of-the-art models in terms of AUC and introduces a novel way of capturing the node information.<|reference_end|>
|
arxiv
|
@article{hafez2024global,
title={Global Context Enhanced Anomaly Detection of Cyber Attacks via Decoupled
Graph Neural Networks},
author={Ahmad Hafez},
journal={arXiv preprint arXiv:2409.15304},
year={2024},
archivePrefix={arXiv},
eprint={2409.15304},
primaryClass={cs.CR cs.LG}
}
|
hafez2024global
|
arxiv-660941
|
2409.15305
|
Real-time Robotics Situation Awareness for Accident Prevention in Industry
|
<|reference_start|>Real-time Robotics Situation Awareness for Accident Prevention in Industry: This study explores human-robot interaction (HRI) based on a mobile robot and YOLO to increase real-time situation awareness and prevent accidents in the workplace. Using object segmentation, we propose an approach that is capable of analyzing such situations in real time and providing useful information to avoid critical working situations. In industry, ensuring the safety of workers is paramount, and solutions based on robots and AI can provide a safer environment. To this end, we propose a methodology evaluated with two different YOLO versions (YOLOv8 and YOLOv5) alongside a LoCoBot robot for supervision and interaction with a user. We show that our proposed approach is capable of navigating a test scenario and issuing alerts via Text-to-Speech when dangerous situations are faced, such as when hardhats and safety vests are not detected. Based on the results gathered, we conclude that our system is capable of detecting and reporting risk situations such as helmet/no helmet and safety vest/no safety vest situations.<|reference_end|>
|
arxiv
|
@article{deniz2024real-time,
title={Real-time Robotics Situation Awareness for Accident Prevention in
Industry},
author={Juan M. Deniz and Andre S. Kelboucas and Ricardo Bedin Grando},
journal={arXiv preprint arXiv:2409.15305},
year={2024},
archivePrefix={arXiv},
eprint={2409.15305},
primaryClass={cs.RO}
}
|
deniz2024real-time
|
arxiv-660942
|
2409.15306
|
Open-Source Differentiable Lithography Imaging Framework
|
<|reference_start|>Open-Source Differentiable Lithography Imaging Framework: The rapid evolution of the electronics industry, driven by Moore's law and the proliferation of integrated circuits, has led to significant advancements in modern society, including the Internet, wireless communication, and artificial intelligence (AI). Central to this progress is optical lithography, a critical technology in semiconductor manufacturing that accounts for approximately 30\% to 40\% of production costs. As semiconductor nodes shrink and transistor numbers increase, optical lithography becomes increasingly vital in current integrated circuit (IC) fabrication technology. This paper introduces an open-source differentiable lithography imaging framework that leverages the principles of differentiable programming and the computational power of GPUs to enhance the precision of lithography modeling and simplify the optimization of resolution enhancement techniques (RETs). The framework models the core components of lithography as differentiable segments, allowing for the implementation of standard scalar imaging models, including the Abbe and Hopkins models, as well as their approximation models. The paper introduces a computational lithography framework that optimizes semiconductor manufacturing processes using advanced computational techniques and differentiable programming. It compares imaging models and provides tools for enhancing resolution, demonstrating improved semiconductor patterning performance. The open-sourced framework represents a significant advancement in lithography technology, facilitating collaboration in the field. The source code is available at https://github.com/TorchOPC/TorchLitho<|reference_end|>
|
arxiv
|
@article{chen2024open-source,
title={Open-Source Differentiable Lithography Imaging Framework},
author={Guojin Chen and Hao Geng and Bei Yu and David Z. Pan},
journal={arXiv preprint arXiv:2409.15306},
year={2024},
archivePrefix={arXiv},
eprint={2409.15306},
primaryClass={physics.app-ph cs.ET}
}
|
chen2024open-source
|
arxiv-660943
|
2409.15308
|
Transforming Redaction: How AI is Revolutionizing Data Protection
|
<|reference_start|>Transforming Redaction: How AI is Revolutionizing Data Protection: Document redaction is a crucial process in various sectors to safeguard sensitive information from unauthorized access and disclosure. Traditional manual redaction methods, such as those performed using Adobe Acrobat, are labor-intensive, error-prone, and time-consuming. With the burgeoning volume of digital documents, the demand for more efficient and accurate redaction techniques is intensifying. This study presents the findings from a controlled experiment that compares traditional manual redaction, a redaction tool powered by a classical machine learning algorithm, and an AI-assisted redaction tool (iDox.ai Redact). The results indicate that iDox.ai Redact significantly outperforms manual methods, achieving higher accuracy and faster completion times. Conversely, the competitor product, which relies on a classical machine learning algorithm and necessitates manual intervention for certain sensitive data types, did not exhibit a statistically significant improvement over manual redaction. These findings suggest that while advanced AI technologies like iDox.ai Redact can substantially enhance data protection practices by reducing human error and improving compliance with data protection regulations, there remains room for improvement in AI tools that do not fully automate the redaction process. Future research should aim to enhance AI capabilities and explore their applicability across various document types and professional settings.<|reference_end|>
|
arxiv
|
@article{peng2024transforming,
title={Transforming Redaction: How AI is Revolutionizing Data Protection},
author={Sida Peng and Ming-Jen Huang and Matt Wu and Jeremy Wei},
journal={arXiv preprint arXiv:2409.15308},
year={2024},
archivePrefix={arXiv},
eprint={2409.15308},
primaryClass={cs.CY}
}
|
peng2024transforming
|
arxiv-660944
|
2409.15309
|
Joint LOS Identification and Data Association for 6G-Enabled Networked Device-Free Sensing
|
<|reference_start|>Joint LOS Identification and Data Association for 6G-Enabled Networked Device-Free Sensing: This paper considers networked device-free sensing in an orthogonal frequency division multiplexing (OFDM) cellular system with multipath environment, where the passive targets reflect the downlink signals to the base stations (BSs) via non-line-of-sight (NLOS) paths and/or line-of-sight (LOS) paths, and the BSs share the sensing information extracted from their received echoes to jointly localize the targets. A two-phase localization protocol is considered. In Phase I, we design an efficient method that is able to accurately estimate the range of any path from a transmitting BS to a receiving BS via a target, even if the transmitting and receiving BSs are separated and not perfectly synchronized. In Phase II, we propose an effective method that is able to jointly identify the ranges of the LOS paths between the targets and the BSs as well as associate the ranges of LOS paths with the right targets, such that the number and the locations of the targets both can be accurately estimated. Numerical results verify that our proposed two-phase protocol can achieve high performance of networked sensing in the multipath environment.<|reference_end|>
|
arxiv
|
@article{shi2024joint,
title={Joint LOS Identification and Data Association for 6G-Enabled Networked
Device-Free Sensing},
author={Qin Shi and Liang Liu},
journal={arXiv preprint arXiv:2409.15309},
year={2024},
archivePrefix={arXiv},
eprint={2409.15309},
primaryClass={eess.SP cs.IT math.IT}
}
|
shi2024joint
|
arxiv-660945
|
2409.15310
|
Visual Prompting in Multimodal Large Language Models: A Survey
|
<|reference_start|>Visual Prompting in Multimodal Large Language Models: A Survey: Multimodal large language models (MLLMs) equip pre-trained large-language models (LLMs) with visual capabilities. While textual prompting in LLMs has been widely studied, visual prompting has emerged for more fine-grained and free-form visual instructions. This paper presents the first comprehensive survey on visual prompting methods in MLLMs, focusing on visual prompting, prompt generation, compositional reasoning, and prompt learning. We categorize existing visual prompts and discuss generative methods for automatic prompt annotations on the images. We also examine visual prompting methods that enable better alignment between visual encoders and backbone LLMs, concerning MLLM's visual grounding, object referring, and compositional reasoning abilities. In addition, we provide a summary of model training and in-context learning methods to improve MLLM's perception and understanding of visual prompts. This paper examines visual prompting methods developed in MLLMs and provides a vision of the future of these methods.<|reference_end|>
|
arxiv
|
@article{wu2024visual,
title={Visual Prompting in Multimodal Large Language Models: A Survey},
author={Junda Wu and Zhehao Zhang and Yu Xia and Xintong Li and Zhaoyang Xia
and Aaron Chang and Tong Yu and Sungchul Kim and Ryan A. Rossi and Ruiyi Zhang
and Subrata Mitra and Dimitris N. Metaxas and Lina Yao and Jingbo Shang and Julian McAuley},
journal={arXiv preprint arXiv:2409.15310},
year={2024},
archivePrefix={arXiv},
eprint={2409.15310},
primaryClass={cs.LG cs.CV}
}
|
wu2024visual
|
arxiv-660946
|
2409.15311
|
Enhancing coastal water body segmentation with Landsat Irish Coastal Segmentation (LICS) dataset
|
<|reference_start|>Enhancing coastal water body segmentation with Landsat Irish Coastal Segmentation (LICS) dataset: Ireland's coastline, a critical and dynamic resource, is facing challenges such as erosion, sedimentation, and human activities. Monitoring these changes is a complex task we approach using a combination of satellite imagery and deep learning methods. However, limited research exists in this area, particularly for Ireland. This paper presents the Landsat Irish Coastal Segmentation (LICS) dataset, which aims to facilitate the development of deep learning methods for coastal water body segmentation while addressing modelling challenges specific to Irish meteorology and coastal types. The dataset is used to evaluate various automated approaches for segmentation, with U-NET achieving the highest accuracy of 95.0% among deep learning methods. Nevertheless, the Normalised Difference Water Index (NDWI) benchmark outperformed U-NET with an average accuracy of 97.2%. The study suggests that deep learning approaches can be further improved with more accurate training data and by considering alternative measurements of erosion. The LICS dataset and code are freely available to support reproducible research and further advancements in coastal monitoring efforts.<|reference_end|>
|
arxiv
|
@article{o'sullivan2024enhancing,
title={Enhancing coastal water body segmentation with Landsat Irish Coastal
Segmentation (LICS) dataset},
author={Conor O'Sullivan and Ambrish Kashyap and Seamus Coveney and Xavier
Monteys and Soumyabrata Dev},
journal={Remote Sensing Applications: Society and Environment, Volume 36,
2024, 101276, ISSN 2352-9385},
year={2024},
doi={10.1016/j.rsase.2024.101276},
archivePrefix={arXiv},
eprint={2409.15311},
primaryClass={cs.CV cs.LG eess.IV}
}
|
o'sullivan2024enhancing
|
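The NDWI benchmark that outperformed U-NET in the abstract above is a standard spectral index and easy to reproduce. The sketch below applies it to Landsat green and near-infrared bands; the zero threshold is the common convention, not necessarily the exact value used in the paper.

```python
# Minimal sketch of the NDWI water-masking benchmark referenced in
# arXiv:2409.15311: NDWI = (Green - NIR) / (Green + NIR).
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0, eps=1e-6):
    """Pixels whose NDWI exceeds the threshold are classified as water."""
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    ndwi = (green - nir) / (green + nir + eps)  # eps guards against 0/0
    return ndwi > threshold                     # boolean water mask

# Toy example: water reflects more green than NIR, land the opposite.
green = np.array([[0.30, 0.10], [0.25, 0.05]])
nir = np.array([[0.05, 0.30], [0.04, 0.40]])
print(ndwi_water_mask(green, nir))              # [[True, False], [True, False]]
```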
arxiv-660947
|
2409.15312
|
Evolutionary Algorithms for One-Sided Bipartite Crossing Minimisation
|
<|reference_start|>Evolutionary Algorithms for One-Sided Bipartite Crossing Minimisation: Evolutionary algorithms (EAs) are universal solvers inspired by principles of natural evolution. In many applications, EAs produce astonishingly good solutions. As they are able to deal with complex optimisation problems, they show great promise for hard problems encountered in the field of graph drawing. To complement recent theoretical advances in the analysis of EAs on graph drawing, we contribute a fundamental empirical study. We consider the so-called \textsc{One-Sided Bipartite Crossing Minimisation (OBCM)}: given two layers of a bipartite graph and a fixed horizontal order of vertices on the first layer, the task is to order the vertices on the second layer to minimise the number of edge crossings. We empirically analyse the performance of simple EAs for OBCM and compare different mutation operators on the underlying permutation ordering problem: exchanging two elements (\textit{exchange}), swapping adjacent elements (\textit{swap}) and jumping an element to a new position (\textit{jump}). EAs using jumps easily outperform all deterministic algorithms in terms of solution quality after a reasonable number of generations. We also design variations of the best-performing EAs to reduce the execution time for each generation. The improved EAs can obtain the same solution quality as before and run up to 100 times faster.<|reference_end|>
|
arxiv
|
@article{baumann2024evolutionary,
title={Evolutionary Algorithms for One-Sided Bipartite Crossing Minimisation},
author={Jakob Baumann and Ignaz Rutter and Dirk Sudholt},
journal={arXiv preprint arXiv:2409.15312},
year={2024},
archivePrefix={arXiv},
eprint={2409.15312},
primaryClass={cs.NE}
}
|
baumann2024evolutionary
|
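The OBCM setup and the jump mutation from the abstract above are simple enough to sketch end to end. The following (1+1) EA on a random instance is illustrative only: the iteration budget, instance size, and tie-acceptance rule are assumed choices, not the paper's experimental protocol.

```python
# Sketch of a (1+1) EA with the jump mutation for One-Sided Bipartite
# Crossing Minimisation, following the setup studied in arXiv:2409.15312.
import random

def crossings(edges, pos):
    """Count edge pairs that cross; pos[v] is v's slot on the free layer."""
    c = 0
    for a in range(len(edges)):
        u1, v1 = edges[a]
        for b in range(a + 1, len(edges)):
            u2, v2 = edges[b]
            if (u1 - u2) * (pos[v1] - pos[v2]) < 0:  # orders disagree -> crossing
                c += 1
    return c

def jump(perm):
    """Move one element to a new position (the 'jump' mutation)."""
    p = perm[:]
    i, j = random.sample(range(len(p)), 2)
    p.insert(j, p.pop(i))
    return p

def one_plus_one_ea(edges, n_free, iters=20000):
    perm = list(range(n_free))
    random.shuffle(perm)
    best = crossings(edges, {v: i for i, v in enumerate(perm)})
    for _ in range(iters):
        cand = jump(perm)
        fit = crossings(edges, {v: i for i, v in enumerate(cand)})
        if fit <= best:                  # accept ties to drift across plateaus
            perm, best = cand, fit
    return perm, best

random.seed(1)
n_fixed, n_free = 12, 12
edges = [(random.randrange(n_fixed), random.randrange(n_free)) for _ in range(30)]
perm, best = one_plus_one_ea(edges, n_free)
print("crossings:", best)
```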
arxiv-660948
|
2409.15313
|
Deep Transfer Learning for Breast Cancer Classification
|
<|reference_start|>Deep Transfer Learning for Breast Cancer Classification: Breast cancer is a major global health issue that affects millions of women worldwide. Classifying breast cancer as early and as accurately as possible is crucial for effective treatment and enhanced patient outcomes. Deep transfer learning has emerged as a promising technique for improving breast cancer classification by utilizing pre-trained models and transferring knowledge across related tasks. In this study, we examine the use of VGG, Vision Transformer (ViT) and ResNet models to classify images for Invasive Ductal Carcinoma (IDC) cancer and make a comparative analysis of the algorithms. The results show a clear advantage for ResNet-34, with an accuracy of $90.40\%$ in classifying cancer images. However, the pretrained VGG-16 demonstrates a higher F1-score because there are fewer parameters to update. We believe that the field of breast cancer diagnosis stands to benefit greatly from the use of deep transfer learning. Transfer learning may help increase the accuracy and accessibility of breast cancer screening by allowing deep learning models to be trained with little data.<|reference_end|>
|
arxiv
|
@article{djagba2024deep,
title={Deep Transfer Learning for Breast Cancer Classification},
author={Prudence Djagba and J. K. Buwa Mbouobda},
journal={arXiv preprint arXiv:2409.15313},
year={2024},
archivePrefix={arXiv},
eprint={2409.15313},
primaryClass={cs.CV}
}
|
djagba2024deep
|
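The transfer-learning recipe in the abstract above is standard and worth a concrete sketch: load an ImageNet-pretrained ResNet-34, freeze the backbone, and retrain only a new binary head. Hyperparameters and the data pipeline below are illustrative assumptions, not the paper's setup.

```python
# A minimal transfer-learning sketch for binary IDC classification in the
# spirit of arXiv:2409.15313, using torchvision's pretrained ResNet-34.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the pretrained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new head: IDC vs. non-IDC

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One supervised step on a batch of (N, 3, 224, 224) patches."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for stained tissue patches.
loss = train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,)))
print(f"batch loss: {loss:.3f}")
```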
arxiv-660949
|
2409.15314
|
Reducing Bias in Deep Learning Optimization: The RSGDM Approach
|
<|reference_start|>Reducing Bias in Deep Learning Optimization: The RSGDM Approach: Currently, widely used first-order deep learning optimizers include non-adaptive learning rate optimizers and adaptive learning rate optimizers. The former is represented by SGDM (Stochastic Gradient Descent with Momentum), while the latter is represented by Adam. Both of these methods use exponential moving averages to estimate the overall gradient. However, estimating the overall gradient using exponential moving averages is biased and has a lag. This paper proposes an RSGDM algorithm based on differential correction. Our contributions are mainly threefold: 1) Analyze the bias and lag brought by the exponential moving average in the SGDM algorithm. 2) Use the differential estimation term to correct the bias and lag in the SGDM algorithm, proposing the RSGDM algorithm. 3) Experiments on the CIFAR datasets have proven that our RSGDM algorithm is superior to the SGDM algorithm in terms of convergence accuracy.<|reference_end|>
|
arxiv
|
@article{qin2024reducing,
title={Reducing Bias in Deep Learning Optimization: The RSGDM Approach},
author={Honglin Qin and Hongye Zheng and Bingxing Wang and Zhizhong Wu and
Bingyao Liu and Yuanfang Yang},
journal={arXiv preprint arXiv:2409.15314},
year={2024},
archivePrefix={arXiv},
eprint={2409.15314},
primaryClass={cs.LG}
}
|
qin2024reducing
|
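The RSGDM abstract above names a differential (difference-based) correction of SGDM's exponential moving average but does not give the formula, so the sketch below is only an assumed form: extrapolate the momentum with its most recent change, m_hat = m + gamma * (m - m_prev), to counter the EMA's bias and lag.

```python
# Sketch of SGDM with an assumed differential correction of the momentum
# estimate, illustrating the idea behind RSGDM (arXiv:2409.15314). The
# correction rule and gamma below are hypothetical, not the paper's.
import numpy as np

def rsgdm_like(grad_fn, w0, lr=0.1, beta=0.9, gamma=0.9, steps=200):
    w = np.asarray(w0, dtype=float)
    m = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        m_new = beta * m + (1 - beta) * g       # standard EMA of gradients (SGDM)
        m_hat = m_new + gamma * (m_new - m)     # difference term counters EMA lag
        w = w - lr * m_hat
        m = m_new
    return w

# Quadratic test problem with minimum at w = (1, -2).
grad = lambda w: 2 * (w - np.array([1.0, -2.0]))
print(rsgdm_like(grad, [5.0, 5.0]))             # converges near [1, -2]
```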
arxiv-660950
|
2409.15315
|
An Efficient Recommendation Model Based on Knowledge Graph Attention-Assisted Network (KGATAX)
|
<|reference_start|>An Efficient Recommendation Model Based on Knowledge Graph Attention-Assisted Network (KGATAX): Recommendation systems play a crucial role in helping users filter through vast amounts of information. However, traditional recommendation algorithms often overlook the integration and utilization of multi-source information, limiting system performance. Therefore, this study proposes a novel recommendation model, Knowledge Graph Attention-assisted Network (KGAT-AX). We first incorporate the knowledge graph into the recommendation model, introducing an attention mechanism to explore higher order connectivity more explicitly. By using multilayer interactive information propagation, the model aggregates information to enhance its generalization ability. Furthermore, we integrate auxiliary information into entities through holographic embeddings, aggregating the information of adjacent entities for each entity by learning their inferential relationships. This allows for better utilization of auxiliary information associated with entities. We conducted experiments on real datasets to demonstrate the rationality and effectiveness of the KGAT-AX model. Through experimental analysis, we observed the effectiveness and potential of KGAT-AX compared to other baseline models on public datasets. KGAT-AX demonstrates better knowledge information capture and relationship learning capabilities.<|reference_end|>
|
arxiv
|
@article{wu2024an,
title={An Efficient Recommendation Model Based on Knowledge Graph
Attention-Assisted Network (KGATAX)},
author={Zhizhong Wu},
journal={arXiv preprint arXiv:2409.15315},
year={2024},
archivePrefix={arXiv},
eprint={2409.15315},
primaryClass={cs.LG cs.AI cs.IR}
}
|
wu2024an
|
arxiv-660951
|
2409.15316
|
Towards Social AI: A Survey on Understanding Social Interactions
|
<|reference_start|>Towards Social AI: A Survey on Understanding Social Interactions: Social interactions form the foundation of human societies. Artificial intelligence has made significant progress in certain areas, but enabling machines to seamlessly understand social interactions remains an open challenge. It is important to address this gap by endowing machines with social capabilities. We identify three key capabilities needed for effective social understanding: 1) understanding multimodal social cues, 2) understanding multi-party dynamics, and 3) understanding beliefs. Building upon these foundations, we classify and review existing machine learning works on social understanding from the perspectives of verbal, non-verbal, and multimodal social cues. The verbal branch focuses on understanding linguistic signals such as speaker intent, dialogue sentiment, and commonsense reasoning. The non-verbal branch addresses techniques for perceiving social meaning from visual behaviors such as body gestures, gaze patterns, and facial expressions. The multimodal branch covers approaches that integrate verbal and non-verbal multimodal cues to holistically interpret social interactions such as recognizing emotions, conversational dynamics, and social situations. By reviewing the scope and limitations of current approaches and benchmarks, we aim to clarify the development trajectory and illuminate the path towards more comprehensive intelligence for social understanding. We hope this survey will spur further research interest and insights into this area.<|reference_end|>
|
arxiv
|
@article{lee2024towards,
title={Towards Social AI: A Survey on Understanding Social Interactions},
author={Sangmin Lee and Minzhi Li and Bolin Lai and Wenqi Jia and Fiona Ryan
and Xu Cao and Ozgur Kara and Bikram Boote and Weiyan Shi and Diyi Yang and
James M. Rehg},
journal={arXiv preprint arXiv:2409.15316},
year={2024},
archivePrefix={arXiv},
eprint={2409.15316},
primaryClass={cs.HC}
}
|
lee2024towards
|
arxiv-660952
|
2409.15317
|
Shared Autonomy with IDA: Interventional Diffusion Assistance
|
<|reference_start|>Shared Autonomy with IDA: Interventional Diffusion Assistance: The rapid development of artificial intelligence (AI) has unearthed the potential to assist humans in controlling advanced technologies. Shared autonomy (SA) facilitates control by combining inputs from a human pilot and an AI copilot. In prior SA studies, the copilot is constantly active in determining the action played at each time step. This limits human autonomy and may have deleterious effects on performance. In general, the amount of helpful copilot assistance can vary greatly depending on the task dynamics. We therefore hypothesize that human autonomy and SA performance improve through dynamic and selective copilot intervention. To address this, we develop a goal-agnostic intervention assistance (IA) that dynamically shares control by having the copilot intervene only when the expected value of the copilot's action exceeds that of the human's action across all possible goals. We implement IA with a diffusion copilot (termed IDA) trained on expert demonstrations with goal masking. We prove a lower bound on the performance of IA that depends on pilot and copilot performance. Experiments with simulated human pilots show that IDA achieves higher performance than pilot-only and traditional SA control in variants of the Reacher environment and Lunar Lander. We then demonstrate that IDA achieves better control in Lunar Lander with human-in-the-loop experiments. Human participants report greater autonomy with IDA and prefer IDA over pilot-only and traditional SA control. We attribute the success of IDA to preserving human autonomy while simultaneously offering assistance to prevent the human pilot from entering universally bad states.<|reference_end|>
|
arxiv
|
@article{mcmahan2024shared,
title={Shared Autonomy with IDA: Interventional Diffusion Assistance},
author={Brandon J. McMahan and Zhenghao Peng and Bolei Zhou and Jonathan C. Kao},
journal={arXiv preprint arXiv:2409.15317},
year={2024},
archivePrefix={arXiv},
eprint={2409.15317},
primaryClass={cs.HC cs.AI cs.LG cs.RO}
}
|
mcmahan2024shared
|
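The intervention rule stated in the abstract above (the copilot overrides the pilot only when its action has higher expected value for every possible goal) compresses to a few lines. In the sketch below the Q-tables are random stand-ins; in the paper the values would come from a learned critic over a diffusion copilot.

```python
# Compact sketch of the goal-agnostic intervention rule described for IDA
# (arXiv:2409.15317). Toy Q-values only; not the paper's learned models.
import numpy as np

rng = np.random.default_rng(0)
n_goals, n_actions = 4, 6
Q = rng.random((n_goals, n_actions))   # Q[g, a]: value of action a under goal g

def intervene(human_action, copilot_action, Q):
    """Return the copilot's action only if it dominates across all goals."""
    better_for_all = np.all(Q[:, copilot_action] > Q[:, human_action])
    return copilot_action if better_for_all else human_action

human, copilot = 2, 5
print("executed action:", intervene(human, copilot, Q))
```

The "for all goals" quantifier is what preserves pilot autonomy: the copilot only steps in when its action is unambiguously better regardless of the pilot's unknown intent.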
arxiv-660953
|
2409.15318
|
On the Complexity of Neural Computation in Superposition
|
<|reference_start|>On the Complexity of Neural Computation in Superposition: Recent advances in the understanding of neural networks suggest that superposition, the ability of a single neuron to represent multiple features simultaneously, is a key mechanism underlying the computational efficiency of large-scale networks. This paper explores the theoretical foundations of computing in superposition, focusing on explicit, provably correct algorithms and their efficiency. We present the first lower bounds showing that for a broad class of problems, including permutations and pairwise logical operations, a neural network computing in superposition requires at least $\Omega(m' \log m')$ parameters and $\Omega(\sqrt{m' \log m'})$ neurons, where $m'$ is the number of output features being computed. This implies that any ``lottery ticket'' sparse sub-network must have at least $\Omega(m' \log m')$ parameters no matter what the initial dense network size. Conversely, we show a nearly tight upper bound: logical operations like pairwise AND can be computed using $O(\sqrt{m'} \log m')$ neurons and $O(m' \log^2 m')$ parameters. There is thus an exponential gap between computing in superposition, the subject of this work, and representing features in superposition, which can require as little as $O(\log m')$ neurons based on the Johnson-Lindenstrauss Lemma. Our hope is that our results open a path for using complexity theoretic techniques in neural network interpretability research.<|reference_end|>
|
arxiv
|
@article{adler2024on,
title={On the Complexity of Neural Computation in Superposition},
author={Micah Adler and Nir Shavit},
journal={arXiv preprint arXiv:2409.15318},
year={2024},
archivePrefix={arXiv},
eprint={2409.15318},
primaryClass={cs.CC cs.AI cs.DS cs.NE}
}
|
adler2024on
|
arxiv-660954
|
2409.15319
|
Merging AI Incidents Research with Political Misinformation Research: Introducing the Political Deepfakes Incidents Database
|
<|reference_start|>Merging AI Incidents Research with Political Misinformation Research: Introducing the Political Deepfakes Incidents Database: This article presents the Political Deepfakes Incidents Database (PDID), a collection of politically-salient deepfakes, encompassing synthetically-created videos, images, and less-sophisticated `cheapfakes.' The project is driven by the rise of generative AI in politics, ongoing policy efforts to address harms, and the need to connect AI incidents and political communication research. The database contains political deepfake content, metadata, and researcher-coded descriptors drawn from political science, public policy, communication, and misinformation studies. It aims to help reveal the prevalence, trends, and impact of political deepfakes, such as those featuring major political figures or events. The PDID can benefit policymakers, researchers, journalists, fact-checkers, and the public by providing insights into deepfake usage, aiding in regulation, enabling in-depth analyses, supporting fact-checking and trust-building efforts, and raising awareness of political deepfakes. It is suitable for research and application on media effects, political discourse, AI ethics, technology governance, media literacy, and countermeasures.<|reference_end|>
|
arxiv
|
@article{walker2024merging,
title={Merging AI Incidents Research with Political Misinformation Research:
Introducing the Political Deepfakes Incidents Database},
author={Christina P. Walker and Daniel S. Schiff and Kaylyn Jackson Schiff},
journal={Proceedings of the AAAI Conference on Artificial Intelligence.
Vol. 38. No. 21. 2024},
year={2024},
doi={10.1609/aaai.v38i21.30349},
archivePrefix={arXiv},
eprint={2409.15319},
primaryClass={cs.CY}
}
|
walker2024merging
|
arxiv-660955
|
2409.15321
|
WaveTransfer: A Flexible End-to-end Multi-instrument Timbre Transfer with Diffusion
|
<|reference_start|>WaveTransfer: A Flexible End-to-end Multi-instrument Timbre Transfer with Diffusion: As diffusion-based deep generative models gain prevalence, researchers are actively investigating their potential applications across various domains, including music synthesis and style alteration. Within this work, we are interested in timbre transfer, a process that involves seamlessly altering the instrumental characteristics of musical pieces while preserving essential musical elements. This paper introduces WaveTransfer, an end-to-end diffusion model designed for timbre transfer. We specifically employ the bilateral denoising diffusion model (BDDM) for noise scheduling search. Our model is capable of conducting timbre transfer between audio mixtures as well as individual instruments. Notably, it exhibits versatility in that it accommodates multiple types of timbre transfer between unique instrument pairs in a single model, eliminating the need for separate model training for each pairing. Furthermore, unlike recent works limited to 16 kHz, WaveTransfer can be trained at various sampling rates, including the industry-standard 44.1 kHz, a feature of particular interest to the music community.<|reference_end|>
|
arxiv
|
@article{baoueb2024wavetransfer:,
title={WaveTransfer: A Flexible End-to-end Multi-instrument Timbre Transfer
with Diffusion},
author={Teysir Baoueb (IP Paris, LTCI, IDS, S2A) and Xiaoyu Bie (IP Paris)
and Hicham Janati (S2A, IDS) and Gael Richard (S2A, IDS)},
journal={2024 IEEE International Workshop on Machine Learning for Signal
Processing (MLSP 2024), Sep 2024, London (UK), United Kingdom},
year={2024},
archivePrefix={arXiv},
eprint={2409.15321},
primaryClass={eess.AS cs.SD}
}
|
baoueb2024wavetransfer:
|
arxiv-660956
|
2409.15322
|
AI and Machine Learning Approaches for Predicting Nanoparticles Toxicity The Critical Role of Physiochemical Properties
|
<|reference_start|>AI and Machine Learning Approaches for Predicting Nanoparticles Toxicity The Critical Role of Physiochemical Properties: This research investigates the use of artificial intelligence and machine learning techniques to predict the toxicity of nanoparticles, a pressing concern due to their pervasive use in various industries and the inherent challenges in assessing their biological interactions. Employing models such as Decision Trees, Random Forests, and XGBoost, the study focuses on analyzing physicochemical properties like size, shape, surface charge, and chemical composition to determine their influence on toxicity. Our findings highlight the significant role of oxygen atoms, particle size, surface area, dosage, and exposure duration in affecting toxicity levels. The use of machine learning allows for a nuanced understanding of the intricate patterns these properties form in biological contexts, surpassing traditional analysis methods in efficiency and predictive power. These advancements aid in developing safer nanomaterials through computational chemistry, reducing reliance on costly and time-consuming experimental methods. This approach not only enhances our understanding of nanoparticle behavior in biological systems but also streamlines the safety assessment process, marking a significant stride towards integrating computational techniques in nanotoxicology.<|reference_end|>
|
arxiv
|
@article{yousaf2024ai,
title={AI and Machine Learning Approaches for Predicting Nanoparticles Toxicity
The Critical Role of Physiochemical Properties},
author={Iqra Yousaf},
journal={arXiv preprint arXiv:2409.15322},
year={2024},
archivePrefix={arXiv},
eprint={2409.15322},
primaryClass={physics.chem-ph cs.LG}
}
|
yousaf2024ai
|
arxiv-660957
|
2409.15323
|
Introducing ELLIPS: An Ethics-Centered Approach to Research on LLM-Based Inference of Psychiatric Conditions
|
<|reference_start|>Introducing ELLIPS: An Ethics-Centered Approach to Research on LLM-Based Inference of Psychiatric Conditions: As mental health care systems worldwide struggle to meet demand, there is increasing focus on using language models to infer neuropsychiatric conditions or psychopathological traits from language production. Yet, so far, this research has only delivered solutions with limited clinical applicability, due to insufficient consideration of ethical questions crucial to ensuring the synergy between possible applications and model design. To accelerate progress towards clinically applicable models, our paper charts the ethical landscape of research on language-based inference of psychopathology and provides a practical tool for researchers to navigate it. We identify seven core ethical principles that should guide model development and deployment in this domain, translate them into ELLIPS, an ethical toolkit operationalizing these principles into questions that can guide researchers' choices with respect to data selection, architectures, evaluation, and model deployment, and provide a case study exemplifying its use. With this, we aim to facilitate the emergence of model technology with concrete potential for real-world applicability.<|reference_end|>
|
arxiv
|
@article{rocca2024introducing,
title={Introducing ELLIPS: An Ethics-Centered Approach to Research on LLM-Based
Inference of Psychiatric Conditions},
author={Roberta Rocca and Giada Pistilli and Kritika Maheshwari and Riccardo
Fusaroli},
journal={arXiv preprint arXiv:2409.15323},
year={2024},
archivePrefix={arXiv},
eprint={2409.15323},
primaryClass={cs.CY cs.AI}
}
|
rocca2024introducing
|
arxiv-660958
|
2409.15324
|
Cognitive phantoms in LLMs through the lens of latent variables
|
<|reference_start|>Cognitive phantoms in LLMs through the lens of latent variables: Large language models (LLMs) increasingly reach real-world applications, necessitating a better understanding of their behaviour. Their size and complexity complicate traditional assessment methods, causing the emergence of alternative approaches inspired by the field of psychology. Recent studies administering psychometric questionnaires to LLMs report human-like traits in LLMs, potentially influencing LLM behaviour. However, this approach suffers from a validity problem: it presupposes that these traits exist in LLMs and that they are measurable with tools designed for humans. Typical procedures rarely acknowledge the validity problem in LLMs, comparing and interpreting average LLM scores. This study investigates this problem by comparing latent structures of personality between humans and three LLMs using two validated personality questionnaires. Findings suggest that questionnaires designed for humans do not validly measure similar constructs in LLMs, and that these constructs may not exist in LLMs at all, highlighting the need for psychometric analyses of LLM responses to avoid chasing cognitive phantoms. Keywords: large language models, psychometrics, machine behaviour, latent variable modeling, validity<|reference_end|>
|
arxiv
|
@article{peereboom2024cognitive,
title={Cognitive phantoms in LLMs through the lens of latent variables},
author={Sanne Peereboom and Inga Schwabe and Bennett Kleinberg},
journal={arXiv preprint arXiv:2409.15324},
year={2024},
archivePrefix={arXiv},
eprint={2409.15324},
primaryClass={cs.AI cs.HC}
}
|
peereboom2024cognitive
|
arxiv-660959
|
2409.15326
|
Evaluating the Impact of a Specialized LLM on Physician Experience in Clinical Decision Support: A Comparison of Ask Avo and ChatGPT-4
|
<|reference_start|>Evaluating the Impact of a Specialized LLM on Physician Experience in Clinical Decision Support: A Comparison of Ask Avo and ChatGPT-4: The use of Large language models (LLMs) to augment clinical decision support systems is a topic with rapidly growing interest, but current shortcomings such as hallucinations and lack of clear source citations make them unreliable for use in the clinical environment. This study evaluates Ask Avo, an LLM-derived software by AvoMD that incorporates a proprietary Language Model Augmented Retrieval (LMAR) system, in-built visual citation cues, and prompt engineering designed for interactions with physicians, against ChatGPT-4 in end-user experience for physicians in a simulated clinical scenario environment. Eight clinical questions derived from medical guideline documents in various specialties were prompted to both models by 62 study participants, with each response rated on trustworthiness, actionability, relevancy, comprehensiveness, and friendly format from 1 to 5. Ask Avo significantly outperformed ChatGPT-4 in all criteria: trustworthiness (4.52 vs. 3.34, p<0.001), actionability (4.41 vs. 3.19, p<0.001), relevancy (4.55 vs. 3.49, p<0.001), comprehensiveness (4.50 vs. 3.37, p<0.001), and friendly format (4.52 vs. 3.60, p<0.001). Our findings suggest that specialized LLMs designed with the needs of clinicians in mind can offer substantial improvements in user experience over general-purpose LLMs. Ask Avo's evidence-based approach tailored to clinician needs shows promise in the adoption of LLM-augmented clinical decision support software.<|reference_end|>
|
arxiv
|
@article{jung2024evaluating,
title={Evaluating the Impact of a Specialized LLM on Physician Experience in
Clinical Decision Support: A Comparison of Ask Avo and ChatGPT-4},
author={Daniel Jung and Alex Butler and Joongheum Park and Yair Saperstein},
journal={arXiv preprint arXiv:2409.15326},
year={2024},
archivePrefix={arXiv},
eprint={2409.15326},
primaryClass={cs.HC cs.AI}
}
|
jung2024evaluating
|
arxiv-660960
|
2409.15327
|
Texture Discrimination via Hilbert Curve Path Based Information Quantifiers
|
<|reference_start|>Texture Discrimination via Hilbert Curve Path Based Information Quantifiers: The analysis of the spatial arrangement of colors and roughness/smoothness of figures is relevant due to its wide range of applications. This paper proposes a texture classification method that extracts data from images using the Hilbert curve. Three information theory quantifiers are then computed: permutation entropy, permutation complexity, and Fisher information measure. The proposal exhibits some important properties: (i) it allows to discriminate figures according to varying degrees of correlations (as measured by the Hurst exponent), (ii) it is invariant to rotation and symmetry transformations, (iii) it can be used either in black and white or color images. Validations have been made not only using synthetic images but also using the well-known Brodatz image database.<|reference_end|>
|
arxiv
|
@article{bariviera2024texture,
title={Texture Discrimination via Hilbert Curve Path Based Information
Quantifiers},
author={Aurelio F. Bariviera and Roberta Hansen and Ver'onica E. Pastor},
journal={arXiv preprint arXiv:2409.15327},
year={2024},
archivePrefix={arXiv},
eprint={2409.15327},
primaryClass={cs.CV physics.data-an}
}
|
bariviera2024texture
|
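The pipeline in the abstract above (read pixels along a Hilbert curve, then compute information quantifiers on the resulting series) can be sketched directly. The embedding dimension m=4 and the random texture below are illustrative; the paper's permutation complexity and Fisher information measure are omitted for brevity.

```python
# Sketch of the Hilbert-curve texture pipeline of arXiv:2409.15327:
# traverse a 2^k x 2^k image along a Hilbert curve and compute the
# normalized permutation entropy of the resulting 1-D series.
from collections import Counter
from math import factorial, log
import numpy as np

def d2xy(order, d):
    """Index d along a Hilbert curve of side 2**order -> (x, y) coordinates."""
    x = y = 0
    s = 1
    while s < (1 << order):
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:
            if rx == 1:                      # flip the quadrant,
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x                      # then transpose it
        x, y = x + s * rx, y + s * ry
        d //= 4
        s *= 2
    return x, y

def permutation_entropy(series, m=4):
    """Normalized Shannon entropy of ordinal patterns of length m."""
    patterns = Counter(tuple(np.argsort(series[i:i + m]))
                       for i in range(len(series) - m + 1))
    p = np.array(list(patterns.values()), float) / sum(patterns.values())
    return float(-(p * np.log(p)).sum() / log(factorial(m)))

rng = np.random.default_rng(0)
order = 6                                    # 64 x 64 grayscale texture
n = 1 << order
img = rng.random((n, n))                     # stand-in for a Brodatz texture
series = np.array([img[y, x] for x, y in (d2xy(order, d) for d in range(n * n))])
print("permutation entropy:", permutation_entropy(series))  # ~1 for white noise
```

Because the Hilbert curve preserves spatial locality, correlated textures produce more repetitive ordinal patterns and hence lower entropy than the white-noise case shown here.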
arxiv-660961
|
2409.15328
|
The Power of Perception in Human-AI Interaction: Investigating Psychological Factors and Cognitive Biases that Shape User Belief and Behavior
|
<|reference_start|>The Power of Perception in Human-AI Interaction: Investigating Psychological Factors and Cognitive Biases that Shape User Belief and Behavior: This thesis investigates the psychological factors that influence belief in AI predictions, comparing them to belief in astrology- and personality-based predictions, and examines the "personal validation effect" in the context of AI, particularly with Large Language Models (LLMs). Through two interconnected studies involving 238 participants, the first study explores how cognitive style, paranormal beliefs, AI attitudes, and personality traits impact perceptions of the validity, reliability, usefulness, and personalization of predictions from different sources. The study finds a positive correlation between belief in AI predictions and belief in astrology- and personality-based predictions, highlighting a "rational superstition" phenomenon where belief is more influenced by mental heuristics and intuition than by critical evaluation. Interestingly, cognitive style did not significantly affect belief in predictions, while paranormal beliefs, positive AI attitudes, and conscientiousness played significant roles. The second study reveals that positive predictions are perceived as significantly more valid, personalized, reliable, and useful than negative ones, emphasizing the strong influence of prediction valence on user perceptions. This underscores the need for AI systems to manage user expectations and foster balanced trust. The thesis concludes with a proposal for future research on how belief in AI predictions influences actual user behavior, exploring it through the lens of self-fulfilling prophecy. Overall, this thesis enhances understanding of human-AI interaction and provides insights for developing AI systems across various applications.<|reference_end|>
|
arxiv
|
@article{lee2024the,
title={The Power of Perception in Human-AI Interaction: Investigating
Psychological Factors and Cognitive Biases that Shape User Belief and
Behavior},
author={Eunhae Lee},
journal={arXiv preprint arXiv:2409.15328},
year={2024},
archivePrefix={arXiv},
eprint={2409.15328},
primaryClass={cs.HC}
}
|
lee2024the
|
arxiv-660962
|
2409.15329
|
Causality-Driven Reinforcement Learning for Joint Communication and Sensing
|
<|reference_start|>Causality-Driven Reinforcement Learning for Joint Communication and Sensing: The next-generation wireless network, 6G and beyond, envisions integrating communication and sensing to overcome interference, improve spectrum efficiency, and reduce hardware and power consumption. Massive Multiple-Input Multiple-Output (mMIMO)-based Joint Communication and Sensing (JCAS) systems realize this integration for 6G applications such as autonomous driving, as it requires accurate environmental sensing and time-critical communication with neighboring vehicles. Reinforcement Learning (RL) is used for mMIMO antenna beamforming in the existing literature. However, the huge search space for actions associated with antenna beamforming causes the learning process for the RL agent to be inefficient due to high beam training overhead. The learning process does not consider the causal relationship between the action space and the reward, and gives all actions equal importance. In this work, we explore a causally-aware RL agent which can intervene and discover causal relationships for mMIMO-based JCAS environments during the training phase. We use a state-dependent action dimension selection strategy to realize causal discovery for RL-based JCAS. Evaluation of the causally-aware RL framework in different JCAS scenarios shows the benefit of our proposed framework over baseline methods in terms of the beamforming gain.<|reference_end|>
|
arxiv
|
@article{roy2024causality-driven,
title={Causality-Driven Reinforcement Learning for Joint Communication and
Sensing},
author={Anik Roy, Serene Banerjee, Jishnu Sadasivan, Arnab Sarkar, Soumyajit
Dey},
journal={arXiv preprint arXiv:2409.15329},
year={2024},
archivePrefix={arXiv},
eprint={2409.15329},
primaryClass={cs.IT cs.AI math.IT}
}
|
roy2024causality-driven
|
arxiv-660963
|
2409.15331
|
Electrooptical Image Synthesis from SAR Imagery Using Generative Adversarial Networks
|
<|reference_start|>Electrooptical Image Synthesis from SAR Imagery Using Generative Adversarial Networks: The utility of Synthetic Aperture Radar (SAR) imagery in remote sensing and satellite image analysis is well established, offering robustness under various weather and lighting conditions. However, SAR images, characterized by their unique structural and texture characteristics, often pose interpretability challenges for analysts accustomed to electrooptical (EO) imagery. This application compares state-of-the-art Generative Adversarial Networks (GANs) including Pix2Pix, CycleGAN, S-CycleGAN, a novel dual-generator GAN utilizing partial convolutions, and a novel dual-generator architecture utilizing transformers. These models are designed to progressively refine the realism in the translated optical images, thereby enhancing the visual interpretability of SAR data. We demonstrate the efficacy of our approach through qualitative and quantitative evaluations, comparing the synthesized EO images with actual EO images in terms of visual fidelity and feature preservation. The results show significant improvements in interpretability, making SAR data more accessible for analysts familiar with EO imagery. Furthermore, we explore the potential of this technology in various applications, including environmental monitoring, urban planning, and military reconnaissance, where rapid, accurate interpretation of SAR data is crucial. Our research contributes to the field of remote sensing by bridging the gap between SAR and EO imagery, offering a novel tool for enhanced data interpretation and broader application of SAR technology in various domains.<|reference_end|>
|
arxiv
|
@article{rosario2024electrooptical,
title={Electrooptical Image Synthesis from SAR Imagery Using Generative
Adversarial Networks},
author={Grant Rosario, David Noever},
journal={arXiv preprint arXiv:2409.15331},
year={2024},
archivePrefix={arXiv},
eprint={2409.15331},
primaryClass={cs.CV cs.LG eess.IV}
}
|
rosario2024electrooptical
|
arxiv-660964
|
2409.15332
|
A Lightweight GAN-Based Image Fusion Algorithm for Visible and Infrared Images
|
<|reference_start|>A Lightweight GAN-Based Image Fusion Algorithm for Visible and Infrared Images: This paper presents a lightweight image fusion algorithm specifically designed for merging visible light and infrared images, with an emphasis on balancing performance and efficiency. The proposed method enhances the generator in a Generative Adversarial Network (GAN) by integrating the Convolutional Block Attention Module (CBAM) to improve feature focus and utilizing Depthwise Separable Convolution (DSConv) for more efficient computations. These innovations significantly reduce the model's computational cost, including the number of parameters and inference latency, while maintaining or even enhancing the quality of the fused images. Comparative experiments using the M3FD dataset demonstrate that the proposed algorithm not only outperforms similar image fusion methods in terms of fusion quality but also offers a more resource-efficient solution suitable for deployment on embedded devices. The effectiveness of the lightweight design is validated through extensive ablation studies, confirming its potential for real-time applications in complex environments.<|reference_end|>
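A depthwise separable convolution of the kind the abstract credits with the parameter savings can be sketched in PyTorch as below; the channel counts, kernel size, and activation are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of a depthwise separable convolution (DSConv) block, the
# building block the abstract credits with reducing parameters and latency.
# Channel sizes and kernel size here are illustrative assumptions.
import torch
import torch.nn as nn

class DSConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.pointwise(self.depthwise(x)))

x = torch.randn(1, 32, 64, 64)
y = DSConv(32, 64)(x)
print(y.shape)  # torch.Size([1, 64, 64, 64])
```

The saving is easy to see: a standard 3x3 convolution from 32 to 64 channels needs 32*64*9 = 18,432 weights, while the depthwise-pointwise pair needs only 32*9 + 32*64 = 2,336.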
|
arxiv
|
@article{wu2024a,
title={A Lightweight GAN-Based Image Fusion Algorithm for Visible and Infrared
Images},
author={Zhizhong Wu, Jiajing Chen, LiangHao Tan, Hao Gong, Zhou Yuru, Ge Shi},
journal={arXiv preprint arXiv:2409.15332},
year={2024},
archivePrefix={arXiv},
eprint={2409.15332},
primaryClass={eess.IV cs.CV}
}
|
wu2024a
|
arxiv-660965
|
2409.15334
|
Evaluating Large Language Models with Tests of Spanish as a Foreign Language: Pass or Fail?
|
<|reference_start|>Evaluating Large Language Models with Tests of Spanish as a Foreign Language: Pass or Fail?: Large Language Models (LLMs) have been profusely evaluated on their ability to answer questions on many topics and their performance on different natural language understanding tasks. Those tests are usually conducted in English, but most LLM users are not native English speakers. Therefore, it is of interest to analyze how LLMs understand other languages at different levels: from paragraphs to morphemes. In this paper, we evaluate the performance of state-of-the-art LLMs in TELEIA, a recently released benchmark with similar questions to those of Spanish exams for foreign students, covering topics such as reading comprehension, word formation, meaning and compositional semantics, and grammar. The results show that LLMs perform well at understanding Spanish but are still far from achieving the level of a native speaker in terms of grammatical competence.<|reference_end|>
|
arxiv
|
@article{mayor-rocher2024evaluating,
title={Evaluating Large Language Models with Tests of Spanish as a Foreign
Language: Pass or Fail?},
author={Marina Mayor-Rocher, Nina Melero, Elena Merino-G'omez, Mar'ia
Grandury, Javier Conde and Pedro Reviriego},
journal={arXiv preprint arXiv:2409.15334},
year={2024},
archivePrefix={arXiv},
eprint={2409.15334},
primaryClass={cs.CL}
}
|
mayor-rocher2024evaluating
|
arxiv-660966
|
2409.15335
|
Efficient learning-based sound propagation for virtual and real-world audio processing applications
|
<|reference_start|>Efficient learning-based sound propagation for virtual and real-world audio processing applications: Sound propagation is the process by which sound energy travels through a medium, such as air, to the surrounding environment as sound waves. The room impulse response (RIR) describes this process and is influenced by the positions of the source and listener, the room's geometry, and its materials. Physics-based acoustic simulators have been used for decades to compute accurate RIRs for specific acoustic environments. However, we have encountered limitations with existing acoustic simulators. To address these limitations, we propose three novel solutions. First, we introduce a learning-based RIR generator that is two orders of magnitude faster than an interactive ray-tracing simulator. Our approach can be trained to input both statistical and traditional parameters directly, and it can generate both monaural and binaural RIRs for both reconstructed and synthetic 3D scenes. Our generated RIRs outperform interactive ray-tracing simulators in speech-processing applications, including ASR, Speech Enhancement, and Speech Separation. Secondly, we propose estimating RIRs from reverberant speech signals and visual cues without a 3D representation of the environment. By estimating RIRs from reverberant speech, we can augment training data to match test data, improving the word error rate of the ASR system. Our estimated RIRs achieve a 6.9% improvement over previous learning-based RIR estimators in far-field ASR tasks. We demonstrate that our audio-visual RIR estimator aids tasks like visual acoustic matching, novel-view acoustic synthesis, and voice dubbing, validated through perceptual evaluation. Finally, we introduce IR-GAN to augment accurate RIRs using real RIRs. IR-GAN parametrically controls acoustic parameters learned from real RIRs to generate new RIRs that imitate different acoustic environments, outperforming Ray-tracing simulators on the far-field ASR benchmark by 8.95%.<|reference_end|>
|
arxiv
|
@article{ratnarajah2024efficient,
title={Efficient learning-based sound propagation for virtual and real-world
audio processing applications},
author={Anton Jeran Ratnarajah},
journal={arXiv preprint arXiv:2409.15335},
year={2024},
archivePrefix={arXiv},
eprint={2409.15335},
primaryClass={cs.SD eess.AS}
}
|
ratnarajah2024efficient
|
arxiv-660967
|
2409.15336
|
Socially-Minded Intelligence: How Individuals, Groups, and AI Systems Can Make Each-Other Smarter (or Not)
|
<|reference_start|>Socially-Minded Intelligence: How Individuals, Groups, and AI Systems Can Make Each-Other Smarter (or Not): A core part of human intelligence is the ability to work flexibly with others to achieve both individual and collective goals. The incorporation of artificial agents into human spaces is making increasing demands on artificial intelligence (AI) to demonstrate and facilitate this ability. However, this kind of flexibility is not well understood because existing approaches to intelligence typically focus either on the individual or the collective level of analysis. At the individual level, intelligence is seen as an individual-difference trait that exists independently of the social environment. At the collective level intelligence is conceptualized as a property of groups, but not in a way that can be used to understand how groups can make group members smarter or how group members acting as individuals might make the group itself more intelligent. In the present paper we argue that by focusing either on individual or collective intelligence without considering their interaction, existing conceptualizations of intelligence limit the potential of people and machines. To address this impasse, we identify and explore a new kind of intelligence - socially-minded intelligence - that can be applied to both individuals (in a social context) and collectives (of individual minds). From a socially-minded intelligence perspective, the potential intelligence of individuals is unlocked in groups, while the potential intelligence of groups is maximized by the flexible, context-sensitive commitment of individual group members. We propose ways in which socially-minded intelligence might be measured and cultivated within people, as well as how it might be modelled in AI systems. Finally, we discuss ways in which socially-minded intelligence might be used to improve human-AI teaming.<|reference_end|>
|
arxiv
|
@article{bingley2024socially-minded,
title={Socially-Minded Intelligence: How Individuals, Groups, and AI Systems
Can Make Each-Other Smarter (or Not)},
author={William J. Bingley, S. Alexander Haslam, Janet Wiles},
journal={arXiv preprint arXiv:2409.15336},
year={2024},
archivePrefix={arXiv},
eprint={2409.15336},
primaryClass={cs.HC cs.CY}
}
|
bingley2024socially-minded
|
arxiv-660968
|
2409.15337
|
Revisiting the Solution of Meta KDD Cup 2024: CRAG
|
<|reference_start|>Revisiting the Solution of Meta KDD Cup 2024: CRAG: This paper presents the solution of our team APEX in the Meta KDD CUP 2024: CRAG Comprehensive RAG Benchmark Challenge. The CRAG benchmark addresses the limitations of existing QA benchmarks in evaluating the diverse and dynamic challenges faced by Retrieval-Augmented Generation (RAG) systems. It provides a more comprehensive assessment of RAG performance and contributes to advancing research in this field. We propose a routing-based domain and dynamic adaptive RAG pipeline, which performs specific processing for the diverse and dynamic nature of the question in all three stages: retrieval, augmentation, and generation. Our method achieved superior performance on CRAG and ranked 2nd for Task 2&3 on the final competition leaderboard. Our implementation is available at this link: https://github.com/USTCAGI/CRAG-in-KDD-Cup2024.<|reference_end|>
|
arxiv
|
@article{ouyang2024revisiting,
title={Revisiting the Solution of Meta KDD Cup 2024: CRAG},
author={Jie Ouyang, Yucong Luo, Mingyue Cheng, Daoyu Wang, Shuo Yu, Qi Liu,
Enhong Chen},
journal={arXiv preprint arXiv:2409.15337},
year={2024},
archivePrefix={arXiv},
eprint={2409.15337},
primaryClass={cs.IR cs.AI cs.CL}
}
|
ouyang2024revisiting
|
arxiv-660969
|
2409.15338
|
Explainable AI: Definition and attributes of a good explanation for health AI
|
<|reference_start|>Explainable AI: Definition and attributes of a good explanation for health AI: Proposals of artificial intelligence (AI) solutions based on increasingly complex and accurate predictive models are becoming ubiquitous across many disciplines. As the complexity of these models grows, transparency and users' understanding often diminish. This suggests that accurate prediction alone is insufficient for making an AI-based solution truly useful. In the development of healthcare systems, this introduces new issues related to accountability and safety. Understanding how and why an AI system makes a recommendation may require complex explanations of its inner workings and reasoning processes. Although research on explainable AI (XAI) has significantly increased in recent years and there is high demand for XAI in medicine, defining what constitutes a good explanation remains ad hoc, and providing adequate explanations continues to be challenging. To fully realize the potential of AI, it is critical to address two fundamental questions about explanations for safety-critical AI applications, such as health-AI: (1) What is an explanation in health-AI? and (2) What are the attributes of a good explanation in health-AI? In this study, we examined published literature and gathered expert opinions through a two-round Delphi study. The research outputs include (1) a definition of what constitutes an explanation in health-AI and (2) a comprehensive list of attributes that characterize a good explanation in health-AI.<|reference_end|>
|
arxiv
|
@article{kyrimi2024explainable,
title={Explainable AI: Definition and attributes of a good explanation for
health AI},
author={Evangelia Kyrimi, Scott McLachlan, Jared M Wohlgemut, Zane B Perkins,
David A. Lagnado, William Marsh and the ExAIDSS Expert Group},
journal={arXiv preprint arXiv:2409.15338},
year={2024},
archivePrefix={arXiv},
eprint={2409.15338},
primaryClass={cs.CY cs.AI}
}
|
kyrimi2024explainable
|
arxiv-660970
|
2409.15340
|
WISDOM: An AI-powered framework for emerging research detection using weak signal analysis and advanced topic modeling
|
<|reference_start|>WISDOM: An AI-powered framework for emerging research detection using weak signal analysis and advanced topic modeling: The landscape of science and technology is characterized by its dynamic and evolving nature, constantly reshaped by new discoveries, innovations, and paradigm shifts. Moreover, science is undergoing a remarkable shift towards increasing interdisciplinary collaboration, where the convergence of diverse fields fosters innovative solutions to complex problems. Detecting emerging scientific topics is paramount as it enables industries, policymakers, and innovators to adapt their strategies, investments, and regulations proactively. Although bibliometric analyses are the common approach for detecting emerging technologies and are useful, they may suffer from oversimplification and/or misinterpretation of complex interdisciplinary trends. In addition, relying solely on domain experts to pinpoint emerging technologies from science and technology trends might restrict the ability to systematically analyze extensive information and introduce subjective judgments into the interpretations. To overcome these drawbacks, in this work, we present an automated artificial intelligence-enabled framework, called WISDOM, for detecting emerging research themes using advanced topic modeling and weak signal analysis. The proposed approach can assist strategic planners and domain experts in more effectively recognizing and tracking trends related to emerging topics by swiftly processing and analyzing vast volumes of data, uncovering hidden cross-disciplinary patterns, and offering unbiased insights, thereby enhancing the efficiency and objectivity of the detection process. As a case study, we assess WISDOM's performance in identifying emerging research and its trends in the field of underwater sensing technologies, using scientific papers published between 2004 and 2021.<|reference_end|>
|
arxiv
|
@article{ebadi2024wisdom:,
title={WISDOM: An AI-powered framework for emerging research detection using
weak signal analysis and advanced topic modeling},
author={Ashkan Ebadi and Alain Auger and Yvan Gauthier},
journal={arXiv preprint arXiv:2409.15340},
year={2024},
archivePrefix={arXiv},
eprint={2409.15340},
primaryClass={cs.IR cs.DL}
}
|
ebadi2024wisdom:
|
arxiv-660971
|
2409.15341
|
StructuReiser: A Structure-preserving Video Stylization Method
|
<|reference_start|>StructuReiser: A Structure-preserving Video Stylization Method: We introduce StructuReiser, a novel video-to-video translation method that transforms input videos into stylized sequences using a set of user-provided keyframes. Unlike existing approaches, StructuReiser maintains strict adherence to the structural elements of the target video, preserving the original identity while seamlessly applying the desired stylistic transformations. This enables a level of control and consistency that was previously unattainable with traditional text-driven or keyframe-based methods. Furthermore, StructuReiser supports real-time inference and custom keyframe editing, making it ideal for interactive applications and expanding the possibilities for creative expression and video manipulation.<|reference_end|>
|
arxiv
|
@article{spetlik2024structureiser:,
title={StructuReiser: A Structure-preserving Video Stylization Method},
author={Radim Spetlik, David Futschik, Daniel Sykora},
journal={arXiv preprint arXiv:2409.15341},
year={2024},
archivePrefix={arXiv},
eprint={2409.15341},
primaryClass={cs.CV cs.GR}
}
|
spetlik2024structureiser:
|
arxiv-660972
|
2409.15342
|
Recall: Empowering Multimodal Embedding for Edge Devices
|
<|reference_start|>Recall: Empowering Multimodal Embedding for Edge Devices: Human memory is inherently prone to forgetting. To address this, multimodal embedding models have been introduced, which transform diverse real-world data into a unified embedding space. These embeddings can be retrieved efficiently, aiding mobile users in recalling past information. However, as model complexity grows, so do its resource demands, leading to reduced throughput and heavy computational requirements that limit mobile device implementation. In this paper, we introduce RECALL, a novel on-device multimodal embedding system optimized for resource-limited mobile environments. RECALL achieves high-throughput, accurate retrieval by generating coarse-grained embeddings and leveraging query-based filtering for refined retrieval. Experimental results demonstrate that RECALL delivers high-quality embeddings with superior throughput, all while operating unobtrusively with minimal memory and energy consumption.<|reference_end|>
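The coarse-then-refine retrieval pattern the RECALL abstract describes can be sketched as follows; the embedding dimensions, candidate counts, and random placeholder data are assumptions for illustration, not the system's actual design.

```python
# Sketch of coarse-grained retrieval followed by query-based refinement, the
# general pattern the RECALL abstract describes. Dimensions, candidate counts,
# and both scorers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
coarse_db = rng.standard_normal((10_000, 64))   # cheap, low-dim embeddings
fine_db = rng.standard_normal((10_000, 512))    # expensive, high-dim embeddings

def normalize(m):
    return m / np.linalg.norm(m, axis=-1, keepdims=True)

coarse_db, fine_db = normalize(coarse_db), normalize(fine_db)

def retrieve(q_coarse, q_fine, k_coarse=100, k_final=5):
    # Stage 1: rank everything with the cheap coarse embeddings.
    cand = np.argsort(coarse_db @ q_coarse)[-k_coarse:]
    # Stage 2: re-score only the shortlist with the expensive embeddings.
    scores = fine_db[cand] @ q_fine
    return cand[np.argsort(scores)[-k_final:][::-1]]

print(retrieve(normalize(rng.standard_normal(64)),
               normalize(rng.standard_normal(512))))
```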
|
arxiv
|
@article{cai2024recall:,
title={Recall: Empowering Multimodal Embedding for Edge Devices},
author={Dongqi Cai, Shangguang Wang, Chen Peng, Zeling Zhang, Mengwei Xu},
journal={arXiv preprint arXiv:2409.15342},
year={2024},
archivePrefix={arXiv},
eprint={2409.15342},
primaryClass={cs.IR cs.AI cs.LG}
}
|
cai2024recall:
|
arxiv-660973
|
2409.15343
|
Advertiser Content Understanding via LLMs for Google Ads Safety
|
<|reference_start|>Advertiser Content Understanding via LLMs for Google Ads Safety: Ads Content Safety at Google requires classifying billions of ads for Google Ads content policies. Consistent and accurate policy enforcement is important for advertiser experience and user safety, and because it is a challenging problem, there is substantial value in improving it for advertisers and users. Inconsistent policy enforcement causes increased policy friction and poor experience with good advertisers, and bad advertisers exploit the inconsistency by creating multiple similar ads in the hope that some will get through our defenses. This study proposes a method to understand an advertiser's intent for content policy violations, using Large Language Models (LLMs). We focus on identifying good advertisers to reduce content over-flagging and improve advertiser experience, though the approach can easily be extended to classify bad advertisers too. We generate an advertiser's content profile based on multiple signals from their ads, domains, targeting info, etc. We then use LLMs to classify the advertiser content profile, along with relying on any knowledge the LLM has of the advertiser, their products or brand, to understand whether they are likely to violate a certain policy or not. After minimal prompt tuning, our method was able to reach 95\% accuracy on a small test set.<|reference_end|>
|
arxiv
|
@article{wallace2024advertiser,
title={Advertiser Content Understanding via LLMs for Google Ads Safety},
author={Joseph Wallace, Tushar Dogra, Wei Qiao, Yuan Wang},
journal={arXiv preprint arXiv:2409.15343},
year={2024},
archivePrefix={arXiv},
eprint={2409.15343},
primaryClass={cs.IR}
}
|
wallace2024advertiser
|
arxiv-660974
|
2409.15344
|
Video-Driven Graph Network-Based Simulators
|
<|reference_start|>Video-Driven Graph Network-Based Simulators: Lifelike visualizations in design, cinematography, and gaming rely on precise physics simulations, typically requiring extensive computational resources and detailed physical input. This paper presents a method that can infer a system's physical properties from a short video, eliminating the need for explicit parameter input, provided it is close to the training condition. The learned representation is then used within a Graph Network-based Simulator to emulate the trajectories of physical systems. We demonstrate that the video-derived encodings effectively capture the physical properties of the system and showcase a linear dependence between some of the encodings and the system's motion.<|reference_end|>
|
arxiv
|
@article{szewczyk2024video-driven,
title={Video-Driven Graph Network-Based Simulators},
author={Franciszek Szewczyk, Gilles Louppe, Matthia Sabatelli},
journal={arXiv preprint arXiv:2409.15344},
year={2024},
archivePrefix={arXiv},
eprint={2409.15344},
primaryClass={cs.CV cs.LG}
}
|
szewczyk2024video-driven
|
arxiv-660975
|
2409.15345
|
Ultrafast vision perception by neuromorphic optical flow
|
<|reference_start|>Ultrafast vision perception by neuromorphic optical flow: Optical flow is crucial for robotic visual perception, yet current methods primarily operate in a 2D format, capturing movement velocities only in horizontal and vertical dimensions. This limitation results in incomplete motion cues, such as missing regions of interest or detailed motion analysis of different regions, leading to delays in processing high-volume visual data in real-world settings. Here, we report a 3D neuromorphic optical flow method that leverages the time-domain processing capability of memristors to embed external motion features directly into hardware, thereby completing motion cues and dramatically accelerating the computation of movement velocities and subsequent task-specific algorithms. In our demonstration, this approach reduces visual data processing time by an average of 0.3 seconds while maintaining or improving the accuracy of motion prediction, object tracking, and object segmentation. Interframe visual processing is achieved for the first time in UAV scenarios. Furthermore, the neuromorphic optical flow algorithm's flexibility allows seamless integration with existing algorithms, ensuring broad applicability. These advancements open unprecedented avenues for robotic perception, without the trade-off between accuracy and efficiency.<|reference_end|>
|
arxiv
|
@article{wang2024ultrafast,
title={Ultrafast vision perception by neuromorphic optical flow},
author={Shengbo Wang, Shuo Gao, Tongming Pu, Liangbing Zhao, Arokia Nathan},
journal={arXiv preprint arXiv:2409.15345},
year={2024},
archivePrefix={arXiv},
eprint={2409.15345},
primaryClass={cs.CV cs.RO}
}
|
wang2024ultrafast
|
arxiv-660976
|
2409.15346
|
Big data searching using words
|
<|reference_start|>Big data searching using words: Big data analytics is one of the most promising areas of new research and development in computer science, enterprises, e-commerce, and defense. For many organizations, big data is regarded as one of their most important strategic assets. This explosive growth has made it necessary to develop effective techniques for examining and analyzing big data from a mathematical perspective. Among various methods of analyzing big data, topological data analysis (TDA) is now considered one of the useful tools. However, there is no fundamental concept related to topological structure in big data. In this paper, we introduce some fundamental ideas related to the neighborhood structure of words in data searching, which can be extended to form important topological structures of big data in the future. Additionally, we introduce big data primal in big data searching and discuss the application of neighborhood structures in detecting anomalies in data searching using the Jaccard similarity coefficient.<|reference_end|>
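The anomaly-detection step the abstract mentions rests on the Jaccard similarity coefficient; a minimal illustration follows, with made-up word sets standing in for the word neighborhoods the paper discusses.

```python
# Jaccard similarity between two sets of words, the coefficient the abstract
# applies to anomaly detection in data searching; the word sets are made up.
def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|; 1.0 for identical sets, 0.0 for disjoint ones."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

typical = {"weather", "forecast", "today", "rain"}
query = {"weather", "forecast", "tomorrow"}
print(jaccard(typical, query))  # 0.4 -> a low value could flag an anomaly
```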
|
arxiv
|
@article{acharjee2024big,
title={Big data searching using words},
author={Santanu Acharjee and Ripunjoy Choudhury},
journal={arXiv preprint arXiv:2409.15346},
year={2024},
archivePrefix={arXiv},
eprint={2409.15346},
primaryClass={cs.IR}
}
|
acharjee2024big
|
arxiv-660977
|
2409.15348
|
GLARE: Guided LexRank for Advanced Retrieval in Legal Analysis
|
<|reference_start|>GLARE: Guided LexRank for Advanced Retrieval in Legal Analysis: The Brazilian Constitution, known as the Citizen's Charter, provides mechanisms for citizens to petition the Judiciary, including the so-called special appeal. This specific type of appeal aims to standardize the legal interpretation of Brazilian legislation in cases where the decision contradicts federal laws. The handling of special appeals is a daily task in the Judiciary, regularly presenting significant demands in its courts. We propose a new method called GLARE, based on unsupervised machine learning, to help the legal analyst classify a special appeal on a topic from a list made available by the National Court of Brazil (STJ). As part of this method, we propose a modification of the graph-based LexRank algorithm, which we call Guided LexRank. This algorithm generates the summary of a special appeal. The degree of similarity between the generated summary and different topics is evaluated using the BM25 algorithm. As a result, the method presents a ranking of themes most appropriate to the analyzed special appeal. The proposed method does not require prior labeling of the text to be evaluated and eliminates the need for large volumes of data to train a model. We evaluate the effectiveness of the method by applying it to a special appeal corpus previously classified by human experts.<|reference_end|>
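The final ranking step the GLARE abstract describes, scoring candidate topics against a generated summary with BM25, can be sketched with a plain BM25 implementation; the summary and topic strings below are invented placeholders, not STJ data.

```python
# Sketch of the last GLARE stage: rank candidate topics by BM25 similarity to
# a Guided-LexRank summary. Plain Okapi BM25; all strings are placeholders.
import math
from collections import Counter

def bm25_rank(summary: str, topics: list[str], k1=1.5, b=0.75):
    docs = [t.lower().split() for t in topics]
    avgdl = sum(len(d) for d in docs) / len(docs)
    df = Counter(w for d in docs for w in set(d))  # document frequencies
    n = len(docs)
    query = summary.lower().split()
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for w in query:
            if w not in tf:
                continue
            idf = math.log(1 + (n - df[w] + 0.5) / (df[w] + 0.5))
            s += idf * tf[w] * (k1 + 1) / (tf[w] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return sorted(zip(scores, topics), reverse=True)

topics = ["consumer credit contracts", "tax enforcement", "public procurement"]
print(bm25_rank("appeal concerning interest in credit contracts", topics))
```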
|
arxiv
|
@article{gregório2024glare:,
title={GLARE: Guided LexRank for Advanced Retrieval in Legal Analysis},
author={Fabio Greg'orio, Rafaela Castro, Kele Belloze, Rui Pedro Lopes,
Eduardo Bezerra},
journal={arXiv preprint arXiv:2409.15348},
year={2024},
archivePrefix={arXiv},
eprint={2409.15348},
primaryClass={cs.IR cs.LG}
}
|
gregório2024glare:
|
arxiv-660978
|
2409.15349
|
Damage detection in an uncertain nonlinear beam based on stochastic Volterra series
|
<|reference_start|>Damage detection in an uncertain nonlinear beam based on stochastic Volterra series: The damage detection problem in mechanical systems, using vibration measurements, is commonly called Structural Health Monitoring (SHM). Many tools are able to detect damage through changes in the vibration pattern, mainly when damage induces nonlinear behavior. However, a more difficult problem is to detect structural variation associated with damage when the mechanical system has nonlinear behavior even in the reference condition. In these cases, more sophisticated methods are required to detect whether the changes in the response are caused by structural variation or by changes in the vibration regime, because both can generate nonlinearities. Among the many ways to solve this problem, the use of the Volterra series has several favorable points, because it is a generalization of the linear convolution, allowing the separation of linear and nonlinear contributions by input filtering through the Volterra kernels. On the other hand, the presence of uncertainties in mechanical systems, due to noise, geometric imperfections, manufacturing irregularities, environmental conditions, and others, can also change the responses, making the damage detection procedure more difficult. An approach based on a stochastic version of the Volterra series is proposed for the detection of a breathing crack in a beam vibrating in a nonlinear regime of motion, even in the reference condition (without the crack). The system uncertainties are simulated by the variation imposed on the linear stiffness and damping coefficient. The results show that the nonlinear analysis, considering the high-order Volterra kernels, allows the approach to detect the crack at an early stage of propagation and with probabilistic confidence, even in the presence of uncertainties.<|reference_end|>
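For readers unfamiliar with the method, the discrete-time Volterra expansion underlying the approach has the textbook form below; this is the standard notation, and the paper's stochastic version additionally treats parameters of the kernels as random.

```latex
% Discrete-time Volterra series truncated at third order (textbook form).
% The output y(k) separates into linear (H_1) and nonlinear (H_2, H_3)
% contributions of the input u, which is what enables damage detection
% in systems that are nonlinear even in the reference condition.
y(k) = \sum_{n_1} \mathcal{H}_1(n_1)\, u(k - n_1)
     + \sum_{n_1}\sum_{n_2} \mathcal{H}_2(n_1, n_2)\, u(k - n_1)\, u(k - n_2)
     + \sum_{n_1}\sum_{n_2}\sum_{n_3} \mathcal{H}_3(n_1, n_2, n_3)\,
       u(k - n_1)\, u(k - n_2)\, u(k - n_3)
```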
|
arxiv
|
@article{villani2024damage,
title={Damage detection in an uncertain nonlinear beam based on stochastic
Volterra series},
author={Luis Gustavo Giacon Villani, Samuel da Silva, Americo Cunha Jr},
journal={Mechanical Systems and Signal Processing, Vol. 125, pp. 288-310,
2019},
year={2024},
doi={10.1016/j.ymssp.2018.07.028},
archivePrefix={arXiv},
eprint={2409.15349},
primaryClass={cs.CE cs.CV cs.LG math.PR stat.AP}
}
|
villani2024damage
|
arxiv-660979
|
2409.15350
|
A Large Dataset of Spontaneous Speech with the Accent Spoken in S\~ao Paulo for Automatic Speech Recognition Evaluation
|
<|reference_start|>A Large Dataset of Spontaneous Speech with the Accent Spoken in S\~ao Paulo for Automatic Speech Recognition Evaluation: We present a freely available spontaneous speech corpus for the Brazilian Portuguese language and report preliminary automatic speech recognition (ASR) results, using both the Wav2Vec2-XLSR-53 and Distil-Whisper models fine-tuned and trained on our corpus. The NURC-SP Audio Corpus comprises 401 different speakers (204 females, 197 males) with a total of 239.30 hours of transcribed audio recordings. To the best of our knowledge, this is the first large Paulistano accented spontaneous speech corpus dedicated to the ASR task in Portuguese. We first present the design and development procedures of the NURC-SP Audio Corpus, and then describe four ASR experiments in detail. The experiments demonstrated promising results for the applicability of the corpus for ASR. Specifically, we fine-tuned two versions of the Wav2Vec2-XLSR-53 model, trained a Distil-Whisper model using our dataset with labels determined by the Whisper Large-V3 model, and fine-tuned this Distil-Whisper model with our corpus. Our best result was the Distil-Whisper model fine-tuned over the NURC-SP Audio Corpus, with a WER of 24.22%, followed by a fine-tuned version of the Wav2Vec2-XLSR-53 model with a WER of 33.73%, which is almost 10 percentage points worse than Distil-Whisper's. To enable experiment reproducibility, we share the NURC-SP Audio Corpus dataset, pre-trained models, and training recipes in Hugging Face and GitHub repositories.<|reference_end|>
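Word error rate (WER), the metric reported above, can be reproduced with the jiwer package; the Portuguese sentence pair below is invented for illustration.

```python
# Word error rate (WER), the evaluation metric the abstract reports, computed
# with the jiwer package on a toy pair; both sentences are invented.
import jiwer

reference = "o corpus contém fala espontânea com sotaque paulistano"
hypothesis = "o corpus contém fala espontânea com sotaque paulista"
print(f"WER: {jiwer.wer(reference, hypothesis):.2%}")  # 1 of 8 words -> 12.50%
```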
|
arxiv
|
@article{lima2024a,
title={A Large Dataset of Spontaneous Speech with the Accent Spoken in S\~ao
Paulo for Automatic Speech Recognition Evaluation},
author={Rodrigo Lima, Sidney Evaldo Leal, Arnaldo Candido Junior, Sandra Maria
Alu'isio},
journal={arXiv preprint arXiv:2409.15350},
year={2024},
archivePrefix={arXiv},
eprint={2409.15350},
primaryClass={eess.AS cs.CL}
}
|
lima2024a
|
arxiv-660980
|
2409.15351
|
Classification of Covering Spaces and Canonical Change of Basepoint
|
<|reference_start|>Classification of Covering Spaces and Canonical Change of Basepoint: Using the language of homotopy type theory (HoTT), we 1) prove a synthetic version of the classification theorem for covering spaces, and 2) explore the existence of canonical change-of-basepoint isomorphisms between homotopy groups. There is some freedom in choosing how to translate concepts from classical algebraic topology into HoTT. The final translations we ended up with are easier to work with than the ones we started with. We discuss some earlier attempts to shed light on this translation process. The proofs are mechanized using the Coq proof assistant and closely follow classical treatments like those by Hatcher.<|reference_end|>
|
arxiv
|
@article{wemmenhove2024classification,
title={Classification of Covering Spaces and Canonical Change of Basepoint},
author={Jelle Wemmenhove, Cosmin Manea, Jim Portegies},
journal={LIPIcs, Volume 303, TYPES 2023},
year={2024},
doi={10.4230/LIPIcs.TYPES.2023.1},
archivePrefix={arXiv},
eprint={2409.15351},
primaryClass={math.AT cs.LO math.LO}
}
|
wemmenhove2024classification
|
arxiv-660981
|
2409.15352
|
An Interactive Web Application for School-Based Physical Fitness Testing in California: Geospatial Analysis and Custom Mapping
|
<|reference_start|>An Interactive Web Application for School-Based Physical Fitness Testing in California: Geospatial Analysis and Custom Mapping: Physical activity is essential for children's healthy growth and development. In the US, most states, including California, adhere to physical education standards and have implemented the mandated School-based Physical Fitness Testing (SB-PFT) for over two decades. Despite extensive data collection, research utilization of SB-PFT has been limited due to the absence of accessible analytical tools. We developed a web application using GeoServer, ArcGIS, and AWS to visualize SB-PFT data. This user-friendly platform enables education administrators and policymakers to analyze trends in children's physical fitness, identify successful programs at schools and districts, and evaluate new physical education initiatives. The application also features a custom mapping tool for comparing external datasets with SB-PFT data. We conclude that this platform, by integrating advanced analytical capabilities in an informatics-based tool, significantly enhances engagement in promoting children's physical fitness.<|reference_end|>
|
arxiv
|
@article{guo2024an,
title={An Interactive Web Application for School-Based Physical Fitness Testing
in California: Geospatial Analysis and Custom Mapping},
author={Yawen Guo, Kaiyuan Hu, Di Hu, Kai Zheng, Dan Cooper},
journal={arXiv preprint arXiv:2409.15352},
year={2024},
archivePrefix={arXiv},
eprint={2409.15352},
primaryClass={cs.CY}
}
|
guo2024an
|
arxiv-660982
|
2409.15353
|
Contextualization of ASR with LLM using phonetic retrieval-based augmentation
|
<|reference_start|>Contextualization of ASR with LLM using phonetic retrieval-based augmentation: Large language models (LLMs) have shown superb capability of modeling multimodal signals including audio and text, allowing the model to generate spoken or textual response given a speech input. However, it remains a challenge for the model to recognize personal named entities, such as contacts in a phone book, when the input modality is speech. In this work, we start with a speech recognition task and propose a retrieval-based solution to contextualize the LLM: we first let the LLM detect named entities in speech without any context, then use this named entity as a query to retrieve phonetically similar named entities from a personal database and feed them to the LLM, and finally run context-aware LLM decoding. In a voice assistant task, our solution achieved up to 30.2% relative word error rate reduction and 73.6% relative named entity error rate reduction compared to a baseline system without contextualization. Notably, our solution by design avoids prompting the LLM with the full named entity database, making it highly efficient and applicable to large named entity databases.<|reference_end|>
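The retrieval step the abstract describes, finding phonetically similar entries for a detected named entity, can be sketched with edit distance over phoneme strings; the toy lexicon and the grapheme-to-phoneme shortcut are assumptions, not the paper's pipeline.

```python
# Sketch of phonetic retrieval: given a named entity the LLM detected without
# context, shortlist phonetically similar contacts by edit distance over
# phoneme strings, then feed them back as LLM context. The toy phoneme lexicon
# and the G2P stand-in are assumptions; a real system would use proper
# grapheme-to-phoneme conversion over the user's contact database.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

phone_book = {"Jon": "JH AA N", "John": "JH AA N", "Joan": "JH OW N", "Dawn": "D AO N"}

def phonetic_candidates(detected: str, k: int = 2):
    q = phone_book.get(detected, detected)  # stand-in for real G2P
    return sorted(phone_book, key=lambda name: edit_distance(q, phone_book[name]))[:k]

print(phonetic_candidates("Jon"))  # ['Jon', 'John'] -> fed back as context
```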
|
arxiv
|
@article{lei2024contextualization,
title={Contextualization of ASR with LLM using phonetic retrieval-based
augmentation},
author={Zhihong Lei, Xingyu Na, Mingbin Xu, Ernest Pusateri, Christophe Van
Gysel, Yuanyuan Zhang, Shiyi Han and Zhen Huang},
journal={arXiv preprint arXiv:2409.15353},
year={2024},
archivePrefix={arXiv},
eprint={2409.15353},
primaryClass={eess.AS cs.CL cs.LG cs.SD}
}
|
lei2024contextualization
|
arxiv-660983
|
2409.15355
|
Block-Attention for Efficient RAG
|
<|reference_start|>Block-Attention for Efficient RAG: We introduce Block-Attention, an attention mechanism designed to address the increased inference latency and cost in Retrieval-Augmented Generation (RAG) scenarios. Traditional approaches often encode the entire context. Instead, Block-Attention divides retrieved documents into discrete blocks, with each block independently calculating key-value (KV) states except for the final block. In RAG scenarios, by defining each passage as a block, Block-Attention enables us to reuse the KV states of passages that have been seen before, thereby significantly reducing the latency and the computation overhead during inference. The implementation of Block-Attention involves block segmentation, position re-encoding, and fine-tuning the LLM to adapt to the Block-Attention mechanism. Experiments on four RAG benchmarks demonstrate that after block fine-tuning, the Block-Attention model achieves performance comparable to self-attention models (68.4\% vs 67.9\% on Llama3) or even superior performance (62.8\% vs 59.6\% on Mistral). Notably, Block-Attention significantly reduces the time to first token (TTFT) and floating point operations (FLOPs) to a very low level. It only takes 45 ms to output the first token for an input sequence with a total length of 32K. Compared to the self-attention models, the time consumption and corresponding FLOPs are reduced by 98.7\% and 99.8\%, respectively.<|reference_end|>
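The cacheability argument at the heart of Block-Attention can be illustrated with a single-head toy model; shapes and projections are assumptions, and the paper's actual method additionally re-encodes positions and fine-tunes the LLM.

```python
# Conceptual sketch of the Block-Attention idea: KV states are computed per
# retrieved passage ("block") independently, so they can be cached and reused
# across queries. Single-head, toy dimensions; not the paper's implementation.
import torch

d = 16
torch.manual_seed(0)
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))

def kv_for_block(block_tokens: torch.Tensor):
    # Depends only on the block itself -> cacheable per passage.
    return block_tokens @ Wk, block_tokens @ Wv

# Two retrieved passages, encoded once and cached.
cache = [kv_for_block(torch.randn(5, d)) for _ in range(2)]

def answer(query_tokens: torch.Tensor):
    # Only the final (query) block computes fresh KV; it attends over the
    # concatenation of the cached passage KVs plus its own.
    k_q, v_q = kv_for_block(query_tokens)
    K = torch.cat([k for k, _ in cache] + [k_q])
    V = torch.cat([v for _, v in cache] + [v_q])
    attn = torch.softmax((query_tokens @ Wq) @ K.T / d**0.5, dim=-1)
    return attn @ V

print(answer(torch.randn(3, d)).shape)  # torch.Size([3, 16])
```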
|
arxiv
|
@article{sun2024block-attention,
title={Block-Attention for Efficient RAG},
author={East Sun, Yan Wang, and Lan Tian},
journal={arXiv preprint arXiv:2409.15355},
year={2024},
archivePrefix={arXiv},
eprint={2409.15355},
primaryClass={cs.LG cs.AI cs.CL}
}
|
sun2024block-attention
|
arxiv-660984
|
2409.15356
|
TCG CREST System Description for the Second DISPLACE Challenge
|
<|reference_start|>TCG CREST System Description for the Second DISPLACE Challenge: In this report, we describe the speaker diarization (SD) and language diarization (LD) systems developed by our team for the Second DISPLACE Challenge, 2024. Our contributions were dedicated to Track 1 for SD and Track 2 for LD in multilingual and multi-speaker scenarios. We investigated different speech enhancement techniques, voice activity detection (VAD) techniques, unsupervised domain categorization, and neural embedding extraction architectures. We also exploited the fusion of various embedding extraction models. We implemented our system with the open-source SpeechBrain toolkit. Our final submissions use spectral clustering for both the speaker and language diarization. We achieve about $7\%$ relative improvement over the challenge baseline in Track 1. We did not obtain improvement over the challenge baseline in Track 2.<|reference_end|>
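The spectral clustering stage of the submission can be sketched with scikit-learn; the random embeddings below stand in for the neural speaker embeddings the system actually extracts with SpeechBrain.

```python
# Minimal sketch of the final clustering step the abstract mentions: spectral
# clustering over speaker-embedding similarities. The embeddings are synthetic
# stand-ins, not SpeechBrain outputs.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# 30 speech segments, 3 underlying speakers: embeddings around 3 centers.
centers = rng.standard_normal((3, 192))
emb = np.vstack([c + 0.1 * rng.standard_normal((10, 192)) for c in centers])
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

affinity = np.clip(emb @ emb.T, 0, None)  # cosine similarities as affinities
labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(labels)  # segment -> speaker assignments
```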
|
arxiv
|
@article{raghav2024tcg,
title={TCG CREST System Description for the Second DISPLACE Challenge},
author={Nikhil Raghav, Subhajit Saha, Md Sahidullah, Swagatam Das},
journal={arXiv preprint arXiv:2409.15356},
year={2024},
archivePrefix={arXiv},
eprint={2409.15356},
primaryClass={eess.AS cs.LG cs.SD}
}
|
raghav2024tcg
|
arxiv-660985
|
2409.15357
|
A Joint Spectro-Temporal Relational Thinking Based Acoustic Modeling Framework
|
<|reference_start|>A Joint Spectro-Temporal Relational Thinking Based Acoustic Modeling Framework: Relational thinking refers to the inherent ability of humans to form mental impressions about relations between sensory signals and prior knowledge, and subsequently incorporate them into their model of their world. Despite the crucial role relational thinking plays in human understanding of speech, it has yet to be leveraged in any artificial speech recognition systems. Recently, there have been some attempts to correct this oversight, but these have been limited to coarse utterance-level models that operate exclusively in the time domain. In an attempt to narrow the gap between artificial systems and human abilities, this paper presents a novel spectro-temporal relational thinking based acoustic modeling framework. Specifically, it first generates numerous probabilistic graphs to model the relationships among speech segments across both time and frequency domains. The relational information rooted in every pair of nodes within these graphs is then aggregated and embedded into latent representations that can be utilized by downstream tasks. Models built upon this framework outperform state-of-the-art systems with a 7.82\% improvement in phoneme recognition tasks over the TIMIT dataset. In-depth analyses further reveal that our proposed relational thinking modeling mainly improves the model's ability to recognize vowels, which are the most likely to be confused by phoneme recognizers.<|reference_end|>
|
arxiv
|
@article{nan2024a,
title={A Joint Spectro-Temporal Relational Thinking Based Acoustic Modeling
Framework},
author={Zheng Nan, Ting Dang, Vidhyasaharan Sethu, Beena Ahmed},
journal={arXiv preprint arXiv:2409.15357},
year={2024},
archivePrefix={arXiv},
eprint={2409.15357},
primaryClass={eess.AS cs.CL cs.LG cs.SD}
}
|
nan2024a
|
arxiv-660986
|
2409.15359
|
Watch Your Steps: Observable and Modular Chains of Thought
|
<|reference_start|>Watch Your Steps: Observable and Modular Chains of Thought: We propose a variant of chain of thought (CoT) prompting called Program Trace Prompting that makes explanations more observable while preserving the power, generality and flexibility of CoT. In our approach, few-shot CoT demonstrations are wrapped in a formal syntax based on Python, and each prompt: identifies and names steps; defines the input/output behavior of steps; and replaces CoT explanations of in-context examples with chains of these formalized steps on the same examples. Program Trace Prompting is applicable to many tasks, achieving strong results on the 23 diverse tasks in the BIG-Bench Hard benchmark. More importantly, by instrumenting explanations in this way, we enable new types of analysis. In particular, we identify "non-local errors" (which correspond to incorrectly learning the reasoning method illustrated in the demonstrations) as an unaddressed issue in CoT learning, and we present methods for verifying the modularity of steps in a CoT explanation.<|reference_end|>
|
arxiv
|
@article{cohen2024watch,
title={Watch Your Steps: Observable and Modular Chains of Thought},
author={Cassandra A. Cohen and William W. Cohen},
journal={arXiv preprint arXiv:2409.15359},
year={2024},
archivePrefix={arXiv},
eprint={2409.15359},
primaryClass={cs.CL cs.AI cs.LG}
}
|
cohen2024watch
|
arxiv-660987
|
2409.15360
|
Reward-Robust RLHF in LLMs
|
<|reference_start|>Reward-Robust RLHF in LLMs: As Large Language Models (LLMs) continue to progress toward more advanced forms of intelligence, Reinforcement Learning from Human Feedback (RLHF) is increasingly seen as a key pathway toward achieving Artificial General Intelligence (AGI). However, the reliance on reward-model-based (RM-based) alignment methods introduces significant challenges due to the inherent instability and imperfections of Reward Models (RMs), which can lead to critical issues such as reward hacking and misalignment with human intentions. In this paper, we introduce a reward-robust RLHF framework aimed at addressing these fundamental challenges, paving the way for more reliable and resilient learning in LLMs. Our approach introduces a novel optimization objective that carefully balances performance and robustness by incorporating Bayesian Reward Model Ensembles (BRME) to model the uncertainty set of reward functions. This allows the framework to integrate both nominal performance and minimum reward signals, ensuring more stable learning even with imperfect RMs. Empirical results demonstrate that our framework consistently outperforms baselines across diverse benchmarks, showing improved accuracy and long-term stability. We also provide a theoretical analysis, demonstrating that reward-robust RLHF approaches the stability of constant reward settings, which proves to be acceptable even in a stochastic-case analysis. Together, these contributions highlight the framework's potential to enhance both the performance and stability of LLM alignment.<|reference_end|>
|
arxiv
|
@article{yan2024reward-robust,
title={Reward-Robust RLHF in LLMs},
author={Yuzi Yan, Xingzhou Lou, Jialian Li, Yiping Zhang, Jian Xie, Chao Yu,
Yu Wang, Dong Yan and Yuan Shen},
journal={arXiv preprint arXiv:2409.15360},
year={2024},
archivePrefix={arXiv},
eprint={2409.15360},
primaryClass={cs.LG cs.AI cs.CL}
}
|
yan2024reward-robust
|
arxiv-660988
|
2409.15361
|
Multitask Mayhem: Unveiling and Mitigating Safety Gaps in LLMs Fine-tuning
|
<|reference_start|>Multitask Mayhem: Unveiling and Mitigating Safety Gaps in LLMs Fine-tuning: Recent breakthroughs in Large Language Models (LLMs) have led to their adoption across a wide range of tasks, ranging from code generation to machine translation and sentiment analysis, etc. Red teaming/Safety alignment efforts show that fine-tuning models on benign (non-harmful) data could compromise safety. However, it remains unclear to what extent this phenomenon is influenced by different variables, including fine-tuning task, model calibrations, etc. This paper explores the task-wise safety degradation due to fine-tuning on downstream tasks such as summarization, code generation, translation, and classification across various calibrations. Our results reveal that: 1) Fine-tuning LLMs for code generation and translation leads to the highest degradation in safety guardrails. 2) LLMs generally have weaker guardrails for translation and classification, with 73-92% of harmful prompts answered across baseline and other calibrations, falling into one of two concern categories. 3) Current solutions, including guards and safety tuning datasets, lack cross-task robustness. To address these issues, we developed a new multitask safety dataset effectively reducing attack success rates across a range of tasks without compromising the model's overall helpfulness. Our work underscores the need for generalized alignment measures to ensure safer and more robust models.<|reference_end|>
|
arxiv
|
@article{jan2024multitask,
title={Multitask Mayhem: Unveiling and Mitigating Safety Gaps in LLMs
Fine-tuning},
author={Essa Jan, Nouar AlDahoul, Moiz Ali, Faizan Ahmad, Fareed Zaffar, Yasir
Zaki},
journal={arXiv preprint arXiv:2409.15361},
year={2024},
archivePrefix={arXiv},
eprint={2409.15361},
primaryClass={cs.CL cs.AI cs.LG}
}
|
jan2024multitask
|
arxiv-660989
|
2409.15363
|
Combustion Condition Identification using a Decision Tree based Machine Learning Algorithm Applied to a Model Can Combustor with High Shear Swirl Injector
|
<|reference_start|>Combustion Condition Identification using a Decision Tree based Machine Learning Algorithm Applied to a Model Can Combustor with High Shear Swirl Injector: Combustion is the primary process in gas turbine engines, where there is a need for efficient air-fuel mixing to enhance performance. High-shear swirl injectors are commonly used to improve fuel atomization and mixing, which are key factors in determining combustion efficiency and emissions. However, under certain conditions, combustors can experience thermoacoustic instability. In this study, a decision tree-based machine learning algorithm is used to classify combustion conditions by analyzing acoustic pressure and high-speed flame imaging from a counter-rotating high-shear swirl injector of a single can combustor fueled by methane. With a constant Reynolds number and varying equivalence ratios, the combustor exhibits both stable and unstable states. Characteristic features are extracted from the data using time series analysis, providing insight into combustion dynamics. The trained supervised machine learning model accurately classifies stable and unstable operations, demonstrating effective prediction of combustion conditions within the studied parameter range.<|reference_end|>
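The classification setup the abstract describes can be sketched as below; the two time-series features (RMS and kurtosis) and the synthetic stable/unstable signals are illustrative assumptions, not the study's feature set or data.

```python
# Sketch of the classification step: a decision tree over features extracted
# from acoustic pressure time series. Feature choices and signals are toy
# assumptions; the study extracts its own characteristic features.
import numpy as np
from scipy.stats import kurtosis
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def features(x):
    return [np.sqrt(np.mean(x**2)), kurtosis(x)]  # RMS and kurtosis

# Stable combustion ~ broadband noise; unstable ~ strong limit-cycle tone.
t = np.linspace(0, 1, 2000)
stable = [rng.standard_normal(2000) for _ in range(50)]
unstable = [3 * np.sin(2 * np.pi * 210 * t) + rng.standard_normal(2000)
            for _ in range(50)]
X = np.array([features(x) for x in stable + unstable])
y = np.array([0] * 50 + [1] * 50)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([features(3 * np.sin(2 * np.pi * 210 * t))]))  # [1] unstable
```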
|
arxiv
|
@article{archhith2024combustion,
title={Combustion Condition Identification using a Decision Tree based Machine
Learning Algorithm Applied to a Model Can Combustor with High Shear Swirl
Injector},
author={PK Archhith, SK Thirumalaikumaran, Balasundaram Mohan and Saptharshi
Basu},
journal={arXiv preprint arXiv:2409.15363},
year={2024},
archivePrefix={arXiv},
eprint={2409.15363},
primaryClass={cs.LG}
}
|
archhith2024combustion
|
arxiv-660990
|
2409.15364
|
VERA: Validation and Enhancement for Retrieval Augmented systems
|
<|reference_start|>VERA: Validation and Enhancement for Retrieval Augmented systems: Large language models (LLMs) exhibit remarkable capabilities but often produce inaccurate responses, as they rely solely on their embedded knowledge. Retrieval-Augmented Generation (RAG) enhances LLMs by incorporating an external information retrieval system, supplying additional context along with the query to mitigate inaccuracies for a particular context. However, accuracy issues still remain, as the model may rely on irrelevant documents or extrapolate incorrectly from its training knowledge. To assess and improve the performance of both the retrieval system and the LLM in a RAG framework, we propose \textbf{VERA} (\textbf{V}alidation and \textbf{E}nhancement for \textbf{R}etrieval \textbf{A}ugmented systems), a system designed to: 1) Evaluate and enhance the retrieved context before response generation, and 2) Evaluate and refine the LLM-generated response to ensure precision and minimize errors. VERA employs an evaluator-cum-enhancer LLM that first checks if external retrieval is necessary, evaluates the relevance and redundancy of the retrieved context, and refines it to eliminate non-essential information. Post-response generation, VERA splits the response into atomic statements, assesses their relevance to the query, and ensures adherence to the context. Our experiments demonstrate VERA's remarkable efficacy not only in improving the performance of smaller open-source models, but also larger state-of-the-art models. These enhancements underscore VERA's potential to produce accurate and relevant responses, advancing the state-of-the-art in retrieval-augmented language modeling. VERA's robust methodology, combining multiple evaluation and refinement steps, effectively mitigates hallucinations and improves retrieval and response processes, making it a valuable tool for applications demanding high accuracy and reliability in information generation.<|reference_end|>
|
arxiv
|
@article{birur2024vera:,
title={VERA: Validation and Enhancement for Retrieval Augmented systems},
author={Nitin Aravind Birur, Tanay Baswa, Divyanshu Kumar, Jatan Loya, Sahil
Agarwal, Prashanth Harshangi},
journal={arXiv preprint arXiv:2409.15364},
year={2024},
archivePrefix={arXiv},
eprint={2409.15364},
primaryClass={cs.CL cs.AI cs.IR}
}
|
birur2024vera:
|
arxiv-660991
|
2409.15365
|
Novel Saliency Analysis for the Forward Forward Algorithm
|
<|reference_start|>Novel Saliency Analysis for the Forward Forward Algorithm: Incorporating the Forward Forward algorithm into neural network training represents a transformative shift from traditional methods, introducing a dual forward mechanism that streamlines the learning process by bypassing the complexities of derivative propagation. This method is noted for its simplicity and efficiency and involves executing two forward passes: the first with actual data to promote positive reinforcement, and the second with synthetically generated negative data to enable discriminative learning. Our experiments confirm that the Forward Forward algorithm is not merely an experimental novelty but a viable training strategy that competes robustly with conventional multilayer perceptron (MLP) architectures. To overcome the limitations inherent in traditional saliency techniques, which predominantly rely on gradient-based methods, we developed a bespoke saliency algorithm specifically tailored for the Forward Forward framework. This innovative algorithm enhances the intuitive understanding of feature importance and network decision-making, providing clear visualizations of the data features most influential in model predictions. By leveraging this specialized saliency method, we gain deeper insights into the internal workings of the model, significantly enhancing our interpretative capabilities beyond those offered by standard approaches. Our evaluations, utilizing the MNIST and Fashion MNIST datasets, demonstrate that our method performs comparably to traditional MLP-based models.<|reference_end|>
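For context, one Forward Forward layer update in the style the abstract summarizes looks roughly as follows; the layer size, threshold, and toy data are assumptions, and the logistic loss is one common choice rather than the paper's exact formulation.

```python
# Minimal sketch of a Forward Forward layer update (Hinton-style): "goodness"
# is the sum of squared activations, pushed above a threshold for positive
# (real) data and below it for negative (synthetic) data. Sizes, threshold,
# and data are toy assumptions.
import torch

torch.manual_seed(0)
layer = torch.nn.Linear(784, 500)
opt = torch.optim.SGD(layer.parameters(), lr=0.03)
theta = 2.0  # goodness threshold

def ff_step(x_pos, x_neg):
    g_pos = layer(x_pos).relu().pow(2).sum(dim=1)
    g_neg = layer(x_neg).relu().pow(2).sum(dim=1)
    # Logistic loss: want g_pos > theta and g_neg < theta.
    loss = (torch.nn.functional.softplus(theta - g_pos)
            + torch.nn.functional.softplus(g_neg - theta)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x_pos = torch.rand(64, 784)        # stand-in for real images
x_neg = torch.rand(64, 784) * 0.2  # stand-in for synthetic negatives
for _ in range(5):
    print(ff_step(x_pos, x_neg))
```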
|
arxiv
|
@article{bakhshi2024novel,
title={Novel Saliency Analysis for the Forward Forward Algorithm},
author={Mitra Bakhshi},
journal={arXiv preprint arXiv:2409.15365},
year={2024},
archivePrefix={arXiv},
eprint={2409.15365},
primaryClass={cs.LG cs.AI}
}
|
bakhshi2024novel
|
arxiv-660992
|
2409.15366
|
Trajectory Anomaly Detection with Language Models
|
<|reference_start|>Trajectory Anomaly Detection with Language Models: This paper presents a novel approach for trajectory anomaly detection using an autoregressive causal-attention model, termed LM-TAD. This method leverages the similarities between language statements and trajectories, both of which consist of ordered elements requiring coherence through external rules and contextual variations. By treating trajectories as sequences of tokens, our model learns the probability distributions over trajectories, enabling the identification of anomalous locations with high precision. We incorporate user-specific tokens to account for individual behavior patterns, enhancing anomaly detection tailored to user context. Our experiments demonstrate the effectiveness of LM-TAD on both synthetic and real-world datasets. In particular, the model outperforms existing methods on the Pattern of Life (PoL) dataset by detecting user-contextual anomalies and achieves competitive results on the Porto taxi dataset, highlighting its adaptability and robustness. Additionally, we introduce the use of perplexity and surprisal rate metrics for detecting outliers and pinpointing specific anomalous locations within trajectories. The LM-TAD framework supports various trajectory representations, including GPS coordinates, staypoints, and activity types, proving its versatility in handling diverse trajectory data. Moreover, our approach is well-suited for online trajectory anomaly detection, significantly reducing computational latency by caching key-value states of the attention mechanism, thereby avoiding repeated computations.<|reference_end|>
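The perplexity and surprisal scoring the abstract introduces reduces to a few lines once next-token probabilities are available; the probability values below are invented, whereas in LM-TAD they would come from the trained trajectory language model.

```python
# Sketch of the scoring the abstract describes: per-token surprisal and
# sequence perplexity from an autoregressive model's next-token probabilities.
# The probabilities are invented placeholders.
import numpy as np

# p(token_i | tokens_<i) for each location token of one trajectory.
probs = np.array([0.31, 0.27, 0.40, 0.002, 0.33])

surprisal = -np.log(probs)             # high value -> surprising location
perplexity = np.exp(surprisal.mean())  # whole-trajectory anomaly score

print(np.argmax(surprisal))            # 3 -> the anomalous location
print(round(float(perplexity), 2))
```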
|
arxiv
|
@article{mbuya2024trajectory,
title={Trajectory Anomaly Detection with Language Models},
author={Jonathan Mbuya, Dieter Pfoser, Antonios Anastasopoulos},
journal={arXiv preprint arXiv:2409.15366},
year={2024},
archivePrefix={arXiv},
eprint={2409.15366},
primaryClass={cs.LG cs.AI}
}
|
mbuya2024trajectory
|
arxiv-660993
|
2409.15367
|
Fine-Tuning a Time Series Foundation Model with Wasserstein Loss
|
<|reference_start|>Fine-Tuning a Time Series Foundation Model with Wasserstein Loss: Inspired by recent advancements in large language models (LLMs) for Natural Language Processing (NLP), there has been a surge in research focused on developing foundational models for time series forecasting. One approach involves training LLM architectures on tokenized time series data using cross-entropy loss. Although this method has demonstrated promising results, cross-entropy loss is primarily designed for classification tasks and does not account for the distance between classes. To address this limitation, we propose using the Wasserstein loss for such architectures. To validate our approach, we fine-tuned a foundational time series model on $22$ zero-shot datasets, comparing the performance of cross-entropy loss with that of Wasserstein loss. Our results demonstrate that replacing cross-entropy loss with Wasserstein loss significantly improves point estimation.<|reference_end|>
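The core substitution the abstract proposes can be sketched as below: because tokenized time-series classes are ordered bins, the 1D Wasserstein distance, computable from the CDF difference, penalizes far-off bins more than nearby ones, unlike cross-entropy. The bin count and logits are toy assumptions.

```python
# Sketch of a Wasserstein-1 loss over ordered token bins; in 1D, W1 equals
# the L1 distance between CDFs. Bin count and distributions are toy values.
import torch

def wasserstein1_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """logits: (batch, n_bins); target: (batch, n_bins) one-hot or soft."""
    pred = torch.softmax(logits, dim=-1)
    cdf_diff = torch.cumsum(pred - target, dim=-1)
    return cdf_diff.abs().sum(dim=-1).mean()

n_bins = 10
target = torch.zeros(1, n_bins)
target[0, 4] = 1.0                                        # truth is bin 4
near = torch.full((1, n_bins), -5.0); near[0, 5] = 5.0    # predicts bin 5
far = torch.full((1, n_bins), -5.0); far[0, 9] = 5.0      # predicts bin 9

# Cross-entropy would punish both predictions almost equally;
# W1 penalizes the distant bin five times as hard.
print(wasserstein1_loss(near, target), wasserstein1_loss(far, target))
```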
|
arxiv
|
@article{chernov2024fine-tuning,
title={Fine-Tuning a Time Series Foundation Model with Wasserstein Loss},
author={Andrei Chernov},
journal={arXiv preprint arXiv:2409.15367},
year={2024},
archivePrefix={arXiv},
eprint={2409.15367},
primaryClass={cs.LG cs.AI cs.CL}
}
|
chernov2024fine-tuning
|
arxiv-660994
|
2409.15368
|
MedCodER: A Generative AI Assistant for Medical Coding
|
<|reference_start|>MedCodER: A Generative AI Assistant for Medical Coding: Medical coding is essential for standardizing clinical data and communication but is often time-consuming and prone to errors. Traditional Natural Language Processing (NLP) methods struggle with automating coding due to the large label space, lengthy text inputs, and the absence of supporting evidence annotations that justify code selection. Recent advancements in Generative Artificial Intelligence (AI) offer promising solutions to these challenges. In this work, we introduce MedCodER, a Generative AI framework for automatic medical coding that leverages extraction, retrieval, and re-ranking techniques as core components. MedCodER achieves a micro-F1 score of 0.60 on International Classification of Diseases (ICD) code prediction, significantly outperforming state-of-the-art methods. Additionally, we present a new dataset containing medical records annotated with disease diagnoses, ICD codes, and supporting evidence texts (https://doi.org/10.5281/zenodo.13308316). Ablation tests confirm that MedCodER's performance depends on the integration of each of its aforementioned components, as performance declines when these components are evaluated in isolation.<|reference_end|>
|
arxiv
|
@article{baksi2024medcoder:,
title={MedCodER: A Generative AI Assistant for Medical Coding},
author={Krishanu Das Baksi and Elijah Soba and John J. Higgins and Ravi Saini
and Jaden Wood and Jane Cook and Jack Scott and Nirmala Pudota and Tim Weninger
and Edward Bowen and Sanmitra Bhattacharya},
journal={arXiv preprint arXiv:2409.15368},
year={2024},
archivePrefix={arXiv},
eprint={2409.15368},
primaryClass={cs.CL cs.AI cs.ET cs.IR cs.LG}
}
|
baksi2024medcoder:
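The extraction, retrieval, and re-ranking stages can be sketched as a pipeline like the one below. Every component here is a simplified stand-in (lexical matching in place of an LLM extractor, an embedding retriever, and an LLM re-ranker), and the tiny ICD table, function names, and example note are illustrative assumptions rather than the paper's implementation.

```python
from difflib import SequenceMatcher

# Minimal sketch of an extract -> retrieve -> re-rank coding pipeline in
# the spirit of MedCodER. All components are simplified placeholders.

ICD_INDEX = {  # tiny illustrative slice of an ICD-10 code table
    "E11.9": "type 2 diabetes mellitus without complications",
    "I10": "essential (primary) hypertension",
    "J45.909": "unspecified asthma uncomplicated",
}

def extract_diagnoses(note):
    """Stand-in extractor: pull candidate diagnosis mentions.
    MedCodER uses a generative model for this step."""
    terms = ["diabetes", "hypertension", "asthma"]
    return [t for t in terms if t in note.lower()]

def retrieve(mention, k=2):
    """Stand-in retriever: lexical similarity against code descriptions.
    A real system would use embedding search over the full code set."""
    scored = [(SequenceMatcher(None, mention, desc).ratio(), code)
              for code, desc in ICD_INDEX.items()]
    return [code for _, code in sorted(scored, reverse=True)[:k]]

def rerank(mention, candidates, note):
    """Stand-in re-ranker: prefer codes whose description contains the
    mention and shares words with the note; MedCodER prompts an LLM to
    choose and to cite supporting evidence text."""
    note_words = set(note.lower().replace(".", "").split())
    def score(code):
        desc = ICD_INDEX[code]
        return (mention in desc, len(set(desc.split()) & note_words))
    return max(candidates, key=score)

note = "Patient with poorly controlled type 2 diabetes and hypertension"
for mention in extract_diagnoses(note):
    print(mention, "->", rerank(mention, retrieve(mention), note))
```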
|
arxiv-660995
|
2409.15369
|
Geometric Relational Embeddings
|
<|reference_start|>Geometric Relational Embeddings: Relational representation learning transforms relational data into continuous and low-dimensional vector representations. However, vector-based representations fall short in capturing crucial properties of relational data that are complex and symbolic. We propose geometric relational embeddings, a paradigm of relational embeddings that respect the underlying symbolic structures. Specifically, this dissertation introduces various geometric relational embedding models capable of capturing: 1) complex structured patterns like hierarchies and cycles in networks and knowledge graphs; 2) logical structures in ontologies and logical constraints applicable for constraining machine learning model outputs; and 3) high-order structures between entities and relations. Our results obtained from benchmark and real-world datasets demonstrate the efficacy of geometric relational embeddings in adeptly capturing these discrete, symbolic, and structured properties inherent in relational data.<|reference_end|>
|
arxiv
|
@article{xiong2024geometric,
title={Geometric Relational Embeddings},
author={Bo Xiong},
journal={arXiv preprint arXiv:2409.15369},
year={2024},
archivePrefix={arXiv},
eprint={2409.15369},
primaryClass={cs.LG cs.AI cs.SI}
}
|
xiong2024geometric
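One concrete instance of a geometric relational embedding is an axis-aligned box embedding, where a hierarchy edge holds when the child box sits inside the parent box. The dissertation covers several geometries; the sketch below, with hypothetical entities and hand-set boxes, only illustrates the containment idea rather than any specific model from the work.

```python
import numpy as np

# Axis-aligned box embeddings: entailment/hierarchy modeled as
# containment. Boxes here are hand-set for illustration; in practice
# the corners are learned from relational data.

def box_volume(lo, hi):
    return float(np.prod(np.clip(hi - lo, 0.0, None)))

def intersection(a_lo, a_hi, b_lo, b_hi):
    return np.maximum(a_lo, b_lo), np.minimum(a_hi, b_hi)

def containment_score(child, parent):
    """P(parent | child) ~ vol(child ∩ parent) / vol(child).
    A score of 1.0 means the child box lies entirely inside the parent
    box, i.e. the hierarchy edge child -> parent is fully satisfied."""
    lo, hi = intersection(child[0], child[1], parent[0], parent[1])
    return box_volume(lo, hi) / box_volume(child[0], child[1])

animal = (np.array([0.0, 0.0]), np.array([10.0, 10.0]))
dog    = (np.array([2.0, 2.0]), np.array([4.0, 4.0]))      # inside animal
car    = (np.array([20.0, 20.0]), np.array([22.0, 22.0]))  # disjoint

print(containment_score(dog, animal))  # 1.0 -> "dog is an animal"
print(containment_score(car, animal))  # 0.0 -> no hierarchy edge
```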
|
arxiv-660996
|
2409.15370
|
Smirk: An Atomically Complete Tokenizer for Molecular Foundation Models
|
<|reference_start|>Smirk: An Atomically Complete Tokenizer for Molecular Foundation Models: Molecular Foundation Models are emerging as powerful tools for accelerating molecular design, material science, and cheminformatics, leveraging transformer architectures to speed up the discovery of new materials and drugs while reducing the computational cost of traditional ab initio methods. However, current models are constrained by closed-vocabulary tokenizers that fail to capture the full diversity of molecular structures. In this work, we systematically evaluate thirteen chemistry-specific tokenizers for their coverage of the SMILES language, uncovering substantial gaps. Using N-gram language models, we assessed the impact of tokenizer choice on model performance and quantified the information loss of unknown tokens. We introduce two new tokenizers, smirk and smirk-gpe, which can represent the entirety of the OpenSMILES specification while avoiding the pitfalls of existing tokenizers. Our work highlights the importance of open-vocabulary modeling for molecular foundation models and the need for chemically diverse benchmarks for cheminformatics.<|reference_end|>
|
arxiv
|
@article{wadell2024smirk:,
title={Smirk: An Atomically Complete Tokenizer for Molecular Foundation Models},
author={Alexius Wadell and Anoushka Bhutani and Venkatasubramanian Viswanathan},
journal={arXiv preprint arXiv:2409.15370},
year={2024},
archivePrefix={arXiv},
eprint={2409.15370},
primaryClass={cs.LG cs.AI physics.chem-ph q-bio.BM}
}
|
wadell2024smirk:
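The coverage gap the paper measures can be reproduced in miniature: a regex tokenizer splits a SMILES string into atoms and bonds, and any token outside a fixed vocabulary collapses to [UNK], discarding chemistry. The regex below is a commonly used SMILES pattern and the vocabulary is a deliberately small assumption; neither is Smirk's own grammar.

```python
import re

# Sketch of the closed-vocabulary problem Smirk addresses. The pattern
# below is a widely used SMILES atom/bond regex, not Smirk's grammar.
SMILES_PATTERN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|[BCNOPSFI]|[bcnops]|[=#\-\+\(\)/\\@%0-9\.])"
)

def tokenize(smiles):
    return SMILES_PATTERN.findall(smiles)

# A closed vocabulary covering common organic chemistry only:
VOCAB = {"C", "c", "N", "O", "=", "(", ")", "1", "2"}

def encode(smiles):
    """Map tokens outside the vocabulary to [UNK], losing information."""
    return [t if t in VOCAB else "[UNK]" for t in tokenize(smiles)]

print(encode("c1ccccc1O"))      # benzene + OH: fully covered
print(encode("[Pt](Cl)(Cl)N"))  # cisplatin-like: [Pt] and Cl -> [UNK]
```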
|
arxiv-660997
|
2409.15371
|
Bone: Block Affine Transformation as Parameter Efficient Fine-tuning Methods for Large Language Models
|
<|reference_start|>Bone: Block Affine Transformation as Parameter Efficient Fine-tuning Methods for Large Language Models: Low-Rank Adaptation (LoRA) has achieved remarkable training results by freezing the original weights and training only low-rank matrices, establishing itself as the predominant fine-tuning method for LLMs. In pursuit of performance closer to full-parameter training, a series of LoRA variants have emerged, such as LoRA+, PISSA, Olora, and LoRA-GA. However, these improvements complicate the initial setup of model training and increase initialization time. More importantly, they overlook the internal interactions of the original weight information. To address these issues, we introduce a novel theory, ``Weight Guide'' aimed at continuously guiding trainable matrices through the original weights during training to enhance the utilization of weight information. Based on this theory, we designed a new PEFT technique called Bone (\textbf{B}l\textbf{o}ck Affi\textbf{ne}), which not only enhances the utilization of original weight information but also emphasizes the internal connections between weights, leading to faster convergence and better data fitting. Experimental comparisons across two different LLM architectures (LLaMA2, RWKV6) and various parameter scales demonstrate that the Bone structure can achieve rapid convergence and superior data fitting without the need for complex initialization. For example, when fine-tuning LLaMA2-7B on the MetaMathQA dataset and validating on GSM8k and math benchmarks, Bone achieved fine-tuning scores of 49.36 and 8.8, respectively, outperforming PISSA by 5.84\% and 1.96\%.<|reference_end|>
|
arxiv
|
@article{kang2024bone:,
title={Bone: Block Affine Transformation as Parameter Efficient Fine-tuning
Methods for Large Language Models},
author={Jiale Kang},
journal={arXiv preprint arXiv:2409.15371},
year={2024},
archivePrefix={arXiv},
eprint={2409.15371},
primaryClass={cs.CL cs.AI}
}
|
kang2024bone:
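As a hedged reading of the block-affine idea, the sketch below splits a frozen weight into blocks and lets a single small trainable matrix both multiply each block (the "Weight Guide" intuition of steering the update with the original weights) and add to it. This is one plausible parameterization written for illustration; the paper's exact Bone formulation, block layout, and initialization may differ.

```python
import torch
import torch.nn as nn

# Illustrative block-affine PEFT layer: frozen W, one small trainable
# block B applied affinely to every block of W. Not the paper's code.

class BlockAffineLinear(nn.Module):
    def __init__(self, weight, block_size):
        super().__init__()
        out_f, in_f = weight.shape
        assert out_f % block_size == 0 and in_f % block_size == 0
        self.weight = nn.Parameter(weight, requires_grad=False)  # frozen
        self.block_size = block_size
        # One trainable block shared across all blocks of W.
        self.bone = nn.Parameter(torch.zeros(block_size, block_size))

    def delta(self):
        b = self.block_size
        blocks = self.weight.unfold(0, b, b).unfold(1, b, b)  # (nr,nc,b,b)
        upd = blocks @ self.bone + self.bone  # guided by W, plus B
        nr, nc = upd.shape[:2]
        return upd.permute(0, 2, 1, 3).reshape(nr * b, nc * b)

    def forward(self, x):
        return x @ (self.weight + self.delta()).T

layer = BlockAffineLinear(torch.randn(8, 8), block_size=4)
print(layer(torch.randn(2, 8)).shape)  # torch.Size([2, 8])
# Trainable params: block_size^2 = 16, vs 64 for full fine-tuning.
```

With bone initialized to zeros, the layer starts exactly at the pretrained weights and needs no LoRA-style decomposition or special initialization, which matches the setup-simplicity claim in the abstract.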
|
arxiv-660998
|
2409.15372
|
Fuzzy Rule based Intelligent Cardiovascular Disease Prediction using Complex Event Processing
|
<|reference_start|>Fuzzy Rule based Intelligent Cardiovascular Disease Prediction using Complex Event Processing: Cardiovascular diseases (CVDs) are a rapidly rising global concern due to unhealthy diets, lack of physical activity, and other factors. According to the World Health Organization (WHO), primary risk factors include elevated blood pressure, glucose, blood lipids, and obesity. Recent research has focused on accurate and timely disease prediction to reduce risk and fatalities, often relying on predictive models trained on large datasets, which require intensive training. An intelligent system for CVD patients could greatly assist in making informed decisions by effectively analyzing health parameters. Complex Event Processing (CEP) has emerged as a valuable method for solving real-time challenges by aggregating patterns of interest and their causes and effects on end users. In this work, we propose a fuzzy rule-based system for monitoring clinical data to provide real-time decision support. We designed fuzzy rules based on clinical and WHO standards to ensure accurate predictions. Our integrated approach uses Apache Kafka and Spark for data streaming, and the Siddhi CEP engine for event processing. Additionally, we pass numerous cardiovascular disease-related parameters through CEP engines to ensure fast and reliable prediction decisions. To validate the effectiveness of our approach, we simulated real-time, unseen data to predict cardiovascular disease. Using synthetic data (1000 samples), we categorized it into "Very Low Risk, Low Risk, Medium Risk, High Risk, and Very High Risk." Validation results showed that 20% of samples were categorized as very low risk, 15-45% as low risk, 35-65% as medium risk, 55-85% as high risk, and 75% as very high risk.<|reference_end|>
|
arxiv
|
@article{kumar2024fuzzy,
title={Fuzzy Rule based Intelligent Cardiovascular Disease Prediction using
Complex Event Processing},
author={Shashi Shekhar Kumar and Anurag Harsh and Ritesh Chandra and Sonali Agarwal},
journal={arXiv preprint arXiv:2409.15372},
year={2024},
archivePrefix={arXiv},
eprint={2409.15372},
primaryClass={cs.AI cs.LG}
}
|
kumar2024fuzzy
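The fuzzy-rule layer can be illustrated with triangular membership functions and min/max rule combination, as below. All thresholds and rules here are made-up placeholders, not the paper's WHO-derived ones, and the streaming layer (Kafka, Spark, Siddhi) is omitted entirely.

```python
# Triangular memberships map a crisp clinical reading onto overlapping
# fuzzy categories; rules combine memberships (AND = min) and the
# strongest rule (max) decides the risk label. Thresholds illustrative.

def tri(x, a, b, c):
    """Triangular membership: 0 at a, peak 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def systolic_memberships(bp):
    return {
        "normal":   tri(bp, 80, 105, 130),
        "elevated": tri(bp, 115, 132, 150),
        "high":     tri(bp, 135, 165, 200),
    }

def glucose_memberships(g):  # fasting glucose, mg/dL
    return {
        "normal": tri(g, 60, 85, 110),
        "high":   tri(g, 100, 140, 250),
    }

def risk(bp, glucose):
    m_bp, m_g = systolic_memberships(bp), glucose_memberships(glucose)
    rules = {
        "Low Risk":    min(m_bp["normal"], m_g["normal"]),
        "Medium Risk": min(m_bp["elevated"], m_g["normal"]),
        "High Risk":   max(min(m_bp["high"], m_g["high"]),
                           min(m_bp["elevated"], m_g["high"])),
    }
    return max(rules, key=rules.get), rules

print(risk(bp=120, glucose=90))   # mostly Low Risk
print(risk(bp=160, glucose=150))  # High Risk fires strongly
```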
|
arxiv-660999
|
2409.15373
|
Enhancing Performance and Scalability of Large-Scale Recommendation Systems with Jagged Flash Attention
|
<|reference_start|>Enhancing Performance and Scalability of Large-Scale Recommendation Systems with Jagged Flash Attention: The integration of hardware accelerators has significantly advanced the capabilities of modern recommendation systems, enabling the exploration of complex ranking paradigms previously deemed impractical. However, the GPU-based computational costs present substantial challenges. In this paper, we demonstrate our development of an efficiency-driven approach to explore these paradigms, moving beyond traditional reliance on native PyTorch modules. We address the specific challenges posed by ranking models' dependence on categorical features, which vary in length and complicate GPU utilization. We introduce Jagged Feature Interaction Kernels, a novel method designed to extract fine-grained insights from long categorical features through efficient handling of dynamically sized tensors. We further enhance the performance of attention mechanisms by integrating Jagged tensors with Flash Attention. Our novel Jagged Flash Attention achieves up to 9x speedup and 22x memory reduction compared to dense attention. Notably, it also outperforms dense flash attention, with up to 3x speedup and 53% more memory efficiency. In production models, we observe 10% QPS improvement and 18% memory savings, enabling us to scale our recommendation systems with longer features and more complex architectures.<|reference_end|>
|
arxiv
|
@article{xu2024enhancing,
title={Enhancing Performance and Scalability of Large-Scale Recommendation
Systems with Jagged Flash Attention},
author={Rengan Xu and Junjie Yang and Yifan Xu and Hong Li and Xing Liu and
Devashish Shankar and Haoci Zhang and Meng Liu and Boyang Li and Yuxi Hu and
Mingwei Tang and Zehua Zhang and Tunhou Zhang and Dai Li and Sijia Chen and
Gian-Paolo Musumeci and Jiaqi Zhai and Bill Zhu and Hong Yan and Srihari Reddy},
journal={arXiv preprint arXiv:2409.15373},
year={2024},
doi={10.1145/3640457.3688040},
archivePrefix={arXiv},
eprint={2409.15373},
primaryClass={cs.LG cs.AI cs.IR}
}
|
xu2024enhancing
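The "jagged tensor" layout at the heart of this work stores variable-length categorical features as a flat values buffer plus offsets, avoiding padding. The short sketch below shows only this memory layout; the fused flash-attention kernels that operate on it are custom GPU code not reproduced here.

```python
import torch

# Jagged layout: one flat values tensor plus per-row offsets, instead of
# padding every row to the maximum length.

lengths = torch.tensor([2, 5, 1])              # items per user
offsets = torch.cat([torch.zeros(1, dtype=torch.long),
                     lengths.cumsum(0)])       # [0, 2, 7, 8]
values = torch.arange(8, dtype=torch.float32)  # 8 real items, no padding

def row(i):
    """Slice user i's items out of the flat buffer."""
    return values[offsets[i]:offsets[i + 1]]

for i in range(len(lengths)):
    print(i, row(i))

# Dense padding would allocate batch * max_len = 3 * 5 = 15 slots for
# 8 real items; with long-tailed feature lengths, the wasted memory and
# the attention FLOPs spent on padding grow much faster than this.
```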
|
arxiv-661000
|
2409.15374
|
Explainable AI for Autism Diagnosis: Identifying Critical Brain Regions Using fMRI Data
|
<|reference_start|>Explainable AI for Autism Diagnosis: Identifying Critical Brain Regions Using fMRI Data: Early diagnosis and intervention for Autism Spectrum Disorder (ASD) have been shown to significantly improve the quality of life of autistic individuals. However, diagnostic methods for ASD rely on assessments of clinical presentation that are prone to bias and can make early diagnosis challenging. There is a need for objective biomarkers of ASD that can help improve diagnostic accuracy. Deep learning (DL) has achieved outstanding performance in diagnosing diseases and conditions from medical imaging data. Extensive research has been conducted on creating models that classify ASD using resting-state functional Magnetic Resonance Imaging (fMRI) data. However, existing models lack interpretability. This research aims to improve the accuracy and interpretability of ASD diagnosis by creating a DL model that can not only accurately classify ASD but also provide explainable insights into its working. The dataset used is a preprocessed version of the Autism Brain Imaging Data Exchange (ABIDE) with 884 samples. Our findings show a model that can accurately classify ASD and highlight critical brain regions differing between ASD and typical controls, with potential implications for early diagnosis and understanding of the neural basis of ASD. These findings are validated by studies in the literature that use different datasets and modalities, confirming that the model actually learned characteristics of ASD and not just the dataset. This study advances the field of explainable AI in medical imaging by providing a robust and interpretable model, thereby contributing to a future with objective and reliable ASD diagnostics.<|reference_end|>
|
arxiv
|
@article{vidya2024explainable,
title={Explainable AI for Autism Diagnosis: Identifying Critical Brain Regions
Using fMRI Data},
author={Suryansh Vidya and Kush Gupta and Amir Aly and Andy Wills and
Emmanuel Ifeachor and Rohit Shankar},
journal={arXiv preprint arXiv:2409.15374},
year={2024},
archivePrefix={arXiv},
eprint={2409.15374},
primaryClass={eess.IV cs.AI cs.CV cs.LG}
}
|
vidya2024explainable
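A model-agnostic stand-in for the explanation step is permutation importance over per-region features: shuffle one region's values and measure the accuracy drop. The paper's actual model and attribution method differ; the synthetic data, linear scorer, and region indices below are assumptions purely for illustration.

```python
import numpy as np

# Permutation importance as a generic way to surface "critical brain
# regions" from any classifier over per-region fMRI features.

rng = np.random.default_rng(0)
n, regions = 200, 10
X = rng.normal(size=(n, regions))
y = (X[:, 3] - X[:, 7] > 0).astype(int)  # regions 3 and 7 carry signal

def accuracy(X, y, w):
    return ((X @ w > 0).astype(int) == y).mean()

# A trivially "trained" linear scorer standing in for the DL model.
w = np.zeros(regions)
w[3], w[7] = 1.0, -1.0
base = accuracy(X, y, w)

importance = []
for r in range(regions):
    Xp = X.copy()
    Xp[:, r] = rng.permutation(Xp[:, r])  # destroy region r's information
    importance.append(base - accuracy(Xp, y, w))

print("accuracy:", base)
print("top regions:", np.argsort(importance)[::-1][:2])  # -> 3 and 7
```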
|