Dataset columns (name: type, observed length range):
corpus_id: string, length 7 to 12
paper_id: string, length 9 to 16
title: string, length 1 to 261
abstract: string, length 70 to 4.02k
source: string, 1 distinct value ("arxiv")
bibtex: string, length 208 to 20.9k
citation_key: string, length 6 to 100
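Each record below follows this schema. As a minimal sketch (the length ranges are taken from the column list above; reading "4.02k" and "20.9k" as approximate character counts is an assumption), a value can be checked against its column like so:

```python
# Minimal sketch: the length-constrained columns above mapped to their
# (min, max) string lengths, with a helper that checks whether a value
# fits its column. "4.02k" and "20.9k" are read here as approximate
# character counts (~4020 and ~20900) -- an assumption, not dataset docs.

LENGTH_RANGES = {
    "corpus_id": (7, 12),
    "paper_id": (9, 16),
    "title": (1, 261),
    "abstract": (70, 4020),
    "bibtex": (208, 20900),
    "citation_key": (6, 100),
}

def fits_column(column: str, value: str) -> bool:
    """Return True if `value` satisfies the column's constraint.

    `source` is a single-class column (every row in this dump holds
    "arxiv"), so it is checked by equality rather than length.
    """
    if column == "source":
        return value == "arxiv"
    lo, hi = LENGTH_RANGES[column]
    return lo <= len(value) <= hi

# Values from the first record below satisfy their columns:
assert fits_column("corpus_id", "arxiv-660101")
assert fits_column("citation_key", "dhar2024local")
```
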
arxiv-660101
2409.13795
Local problems in trees across a wide range of distributed models
<|reference_start|>Local problems in trees across a wide range of distributed models: The randomized online-LOCAL model captures a number of models of computing; it is at least as strong as all of these models: - the classical LOCAL model of distributed graph algorithms, - the quantum version of the LOCAL model, - finitely dependent distributions [e.g. Holroyd 2016], - any model that does not violate physical causality [Gavoille, Kosowski, Markiewicz, DISC 2009], - the SLOCAL model [Ghaffari, Kuhn, Maus, STOC 2017], and - the dynamic-LOCAL and online-LOCAL models [Akbari et al., ICALP 2023]. In general, the online-LOCAL model can be much stronger than the LOCAL model. For example, there are locally checkable labeling problems (LCLs) that can be solved with logarithmic locality in the online-LOCAL model but that require polynomial locality in the LOCAL model. However, in this work we show that in trees, many classes of LCL problems have the same locality in deterministic LOCAL and randomized online-LOCAL (and as a corollary across all the above-mentioned models). In particular, these classes of problems do not admit any distributed quantum advantage. We present a near-complete classification for the case of rooted regular trees. We also fully classify the super-logarithmic region in unrooted regular trees. Finally, we show that in general trees (rooted or unrooted, possibly irregular, possibly with input labels) problems that are global in deterministic LOCAL remain global also in the randomized online-LOCAL model.<|reference_end|>
arxiv
@article{dhar2024local, title={Local problems in trees across a wide range of distributed models}, author={Anubhav Dhar and Eli Kujawa and Henrik Lievonen and Augusto Modanese and Mikail Muftuoglu and Jan Studen{\'y} and Jukka Suomela}, journal={arXiv preprint arXiv:2409.13795}, year={2024}, archivePrefix={arXiv}, eprint={2409.13795}, primaryClass={cs.DC} }
dhar2024local
arxiv-660102
2409.13803
Intrinsic Single-Image HDR Reconstruction
<|reference_start|>Intrinsic Single-Image HDR Reconstruction: The low dynamic range (LDR) of common cameras fails to capture the rich contrast in natural scenes, resulting in loss of color and details in saturated pixels. Reconstructing the high dynamic range (HDR) of luminance present in the scene from single LDR photographs is an important task with many applications in computational photography and realistic display of images. The HDR reconstruction task aims to infer the lost details using the context present in the scene, requiring neural networks to understand high-level geometric and illumination cues. This makes it challenging for data-driven algorithms to generate accurate and high-resolution results. In this work, we introduce a physically-inspired remodeling of the HDR reconstruction problem in the intrinsic domain. The intrinsic model allows us to train separate networks to extend the dynamic range in the shading domain and to recover lost color details in the albedo domain. We show that dividing the problem into two simpler sub-tasks improves performance in a wide variety of photographs.<|reference_end|>
arxiv
@article{dille2024intrinsic, title={Intrinsic Single-Image HDR Reconstruction}, author={Sebastian Dille and Chris Careaga and Ya{\u{g}}{\i}z Aksoy}, journal={arXiv preprint arXiv:2409.13803}, year={2024}, archivePrefix={arXiv}, eprint={2409.13803}, primaryClass={cs.CV cs.GR eess.IV} }
dille2024intrinsic
arxiv-660103
2409.13809
Classical Simulability of Quantum Circuits with Shallow Magic Depth
<|reference_start|>Classical Simulability of Quantum Circuits with Shallow Magic Depth: Quantum magic is a resource that allows quantum computation to surpass classical simulation. Previous results have linked the amount of quantum magic, characterized by the number of $T$ gates or stabilizer rank, to classical simulability. However, the effect of the distribution of quantum magic on the hardness of simulating a quantum circuit remains open. In this work, we investigate the classical simulability of quantum circuits with alternating Clifford and $T$ layers across three tasks: amplitude estimation, sampling, and evaluating Pauli observables. In the case where all $T$ gates are distributed in a single layer, performing amplitude estimation and sampling to multiplicative error are already classically intractable under reasonable assumptions, but Pauli observables are easy to evaluate. Surprisingly, with the addition of just one $T$ gate layer or merely replacing all $T$ gates with $T^{\frac{1}{2}}$, the Pauli evaluation task reveals a sharp complexity transition from P to GapP-complete. Nevertheless, when the precision requirement is relaxed to 1/poly($n$) additive error, we are able to give a polynomial-time classical algorithm to compute amplitudes, evaluate Pauli observables, and sample from $\log(n)$-sized marginal distributions for any magic-depth-one circuit that is decomposable into a product of diagonal gates. Our research provides new techniques to simulate highly magical circuits while shedding light on their complexity and their significant dependence on the magic depth.<|reference_end|>
arxiv
@article{zhang2024classical, title={Classical Simulability of Quantum Circuits with Shallow Magic Depth}, author={Yifan Zhang and Yuxuan Zhang}, journal={arXiv preprint arXiv:2409.13809}, year={2024}, archivePrefix={arXiv}, eprint={2409.13809}, primaryClass={quant-ph cs.CC} }
zhang2024classical
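The complexity transition in this abstract hinges on the algebra of the $T$ gate. As an illustrative sketch (standard single-qubit gate definitions, not code from the paper), plain complex arithmetic confirms the relation the abstract relies on: with $T = \mathrm{diag}(1, e^{i\pi/4})$, the replacement gate $T^{1/2} = \mathrm{diag}(1, e^{i\pi/8})$ squares back to $T$, while $T^2$ is the Clifford gate $S$:

```python
import cmath

# Illustrative sketch (assumption: textbook single-qubit gate
# definitions, not the paper's code). Diagonal phase gates are stored
# as pairs of diagonal entries; composing them multiplies entrywise.

def phase_gate(angle: float) -> tuple:
    """Diagonal single-qubit phase gate diag(1, e^{i*angle})."""
    return (1 + 0j, cmath.exp(1j * angle))

def compose(a: tuple, b: tuple) -> tuple:
    """Product of two diagonal gates (entrywise multiplication)."""
    return (a[0] * b[0], a[1] * b[1])

T = phase_gate(cmath.pi / 4)        # non-Clifford "magic" gate
T_half = phase_gate(cmath.pi / 8)   # the T^{1/2} replacement gate
S = phase_gate(cmath.pi / 2)        # Clifford phase gate

# (T^{1/2})^2 == T and T^2 == S, up to floating-point error.
assert all(abs(x - y) < 1e-12 for x, y in zip(compose(T_half, T_half), T))
assert all(abs(x - y) < 1e-12 for x, y in zip(compose(T, T), S))
```
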
arxiv-660104
2409.13817
Differentiable Predictive Control for Robotics: A Data-Driven Predictive Safety Filter Approach
<|reference_start|>Differentiable Predictive Control for Robotics: A Data-Driven Predictive Safety Filter Approach: Model Predictive Control (MPC) is effective at generating safe control strategies in constrained scenarios, at the cost of computational complexity. This is especially the case in robots that require high sampling rates and have limited computing resources. Differentiable Predictive Control (DPC) trains offline a neural network approximation of the parametric MPC problem, leading to computationally efficient online control laws at the cost of losing safety guarantees. DPC requires a differentiable model and performs poorly when the model is poorly conditioned. In this paper we propose a system decomposition technique based on relative degree to overcome this limitation. We also develop a novel safe set generation technique based on the DPC training dataset and a novel event-triggered predictive safety filter which promotes convergence towards the safe set. Our empirical results on a quadcopter demonstrate that the DPC control laws have comparable performance to the state-of-the-art MPC whilst reducing computation time by up to three orders of magnitude and satisfying safety requirements in a scenario that DPC was not trained on.<|reference_end|>
arxiv
@article{viljoen2024differentiable, title={Differentiable Predictive Control for Robotics: A Data-Driven Predictive Safety Filter Approach}, author={John Viljoen and Wenceslao Shaw Cortez and Jan Drgona and Sebastian East and Masayoshi Tomizuka and Draguna Vrabie}, journal={arXiv preprint arXiv:2409.13817}, year={2024}, archivePrefix={arXiv}, eprint={2409.13817}, primaryClass={cs.RO} }
viljoen2024differentiable
arxiv-660105
2409.13822
Personalization in Human-Robot Interaction through Preference-based Action Representation Learning
<|reference_start|>Personalization in Human-Robot Interaction through Preference-based Action Representation Learning: Preference-based reinforcement learning (PbRL) has shown significant promise for personalization in human-robot interaction (HRI) by explicitly integrating human preferences into the robot learning process. However, existing practices often require training a personalized robot policy from scratch, resulting in inefficient use of human feedback. In this paper, we propose preference-based action representation learning (PbARL), an efficient fine-tuning method that decouples common task structure from preference by leveraging pre-trained robot policies. Instead of directly fine-tuning the pre-trained policy with human preference, PbARL uses it as a reference for an action representation learning task that maximizes the mutual information between the pre-trained source domain and the target user preference-aligned domain. This approach allows the robot to personalize its behaviors while preserving original task performance and eliminates the need for extensive prior information from the source domain, thereby enhancing efficiency and practicality in real-world HRI scenarios. Empirical results on the Assistive Gym benchmark and a real-world user study (N=8) demonstrate the benefits of our method compared to state-of-the-art approaches.<|reference_end|>
arxiv
@article{wang2024personalization, title={Personalization in Human-Robot Interaction through Preference-based Action Representation Learning}, author={Ruiqi Wang and Dezhong Zhao and Dayoon Suh and Ziqin Yuan and Guohua Chen and Byung-Cheol Min}, journal={arXiv preprint arXiv:2409.13822}, year={2024}, archivePrefix={arXiv}, eprint={2409.13822}, primaryClass={cs.RO} }
wang2024personalization
arxiv-660106
2409.13824
Adaptive Task Allocation in Multi-Human Multi-Robot Teams under Team Heterogeneity and Dynamic Information Uncertainty
<|reference_start|>Adaptive Task Allocation in Multi-Human Multi-Robot Teams under Team Heterogeneity and Dynamic Information Uncertainty: Task allocation in multi-human multi-robot (MH-MR) teams presents significant challenges due to the inherent heterogeneity of team members, the dynamics of task execution, and the information uncertainty of operational states. Existing approaches often fail to address these challenges simultaneously, resulting in suboptimal performance. To tackle this, we propose ATA-HRL, an adaptive task allocation framework using hierarchical reinforcement learning (HRL), which incorporates initial task allocation (ITA) that leverages team heterogeneity and conditional task reallocation in response to dynamic operational states. Additionally, we introduce an auxiliary state representation learning task to manage information uncertainty and enhance task execution. Through an extensive case study in large-scale environmental monitoring tasks, we demonstrate the benefits of our approach.<|reference_end|>
arxiv
@article{yuan2024adaptive, title={Adaptive Task Allocation in Multi-Human Multi-Robot Teams under Team Heterogeneity and Dynamic Information Uncertainty}, author={Ziqin Yuan and Ruiqi Wang and Taehyeon Kim and Dezhong Zhao and Ike Obi and Byung-Cheol Min}, journal={arXiv preprint arXiv:2409.13824}, year={2024}, archivePrefix={arXiv}, eprint={2409.13824}, primaryClass={cs.RO} }
yuan2024adaptive
arxiv-660107
2409.13825
A Personalised 3D+t Mesh Generative Model for Unveiling Normal Heart Dynamics
<|reference_start|>A Personalised 3D+t Mesh Generative Model for Unveiling Normal Heart Dynamics: Understanding the structure and motion of the heart is crucial for diagnosing and managing cardiovascular diseases, the leading cause of death globally. There is wide variation in cardiac shape and motion patterns, which are influenced by demographic, anthropometric and disease factors. Unravelling the normal patterns of shape and motion, as well as understanding how each individual deviates from the norm, would facilitate accurate diagnosis and personalised treatment strategies. To this end, we developed a novel conditional generative model, MeshHeart, to learn the distribution of cardiac shape and motion patterns. MeshHeart is capable of generating 3D+t cardiac mesh sequences, taking into account clinical factors such as age, sex, weight and height. To model the high-dimensional and complex spatio-temporal mesh data, MeshHeart employs a geometric encoder to represent cardiac meshes in a latent space, followed by a temporal Transformer to model the motion dynamics of latent representations. Based on MeshHeart, we investigate the latent space of 3D+t cardiac mesh sequences and propose a novel distance metric termed latent delta, which quantifies the deviation of a real heart from its personalised normative pattern in the latent space. In experiments using a large dataset of 38,309 subjects, MeshHeart demonstrates a high performance in cardiac mesh sequence reconstruction and generation. Features defined in the latent space are highly discriminative for cardiac disease classification, whereas the latent delta exhibits strong correlation with clinical phenotypes in phenome-wide association studies. The codes and models of this study will be released to benefit further research on digital heart modelling.<|reference_end|>
arxiv
@article{qiao2024a, title={A Personalised 3D+t Mesh Generative Model for Unveiling Normal Heart Dynamics}, author={Mengyun Qiao and Kathryn A. McGurk and Shuo Wang and Paul M. Matthews and Declan P. O'Regan and Wenjia Bai}, journal={arXiv preprint arXiv:2409.13825}, year={2024}, archivePrefix={arXiv}, eprint={2409.13825}, primaryClass={cs.AI} }
qiao2024a
arxiv-660108
2409.13826
Clarke Transform and Clarke Coordinates -- A New Kid on the Block for State Representation of Continuum Robots
<|reference_start|>Clarke Transform and Clarke Coordinates -- A New Kid on the Block for State Representation of Continuum Robots: For almost all tendon-driven continuum robots, a segment is actuated by three or four tendons constrained by its mechanical design. For both cases, methods to account for the constraints are known. However, for an arbitrary number of tendons, a disentanglement method has yet to be formulated. Motivated by this unsolved general case, we explored state representations and exploited the two-dimensional manifold. We found that the Clarke transformation, a mathematical transformation used in vector control, can be generalized to address this problem. We present the Clarke transform and Clarke coordinates, which can be used to overcome the troublesome interdependency between the tendons, simplify modeling, and unify different improved state representations. A further connection to arc parameters opens the possibility of deriving more generalizable approaches applicable to a wider range of robot types.<|reference_end|>
arxiv
@article{grassmann2024clarke, title={Clarke Transform and Clarke Coordinates -- A New Kid on the Block for State Representation of Continuum Robots}, author={Reinhard M. Grassmann and Jessica Burgner-Kahrs}, journal={arXiv preprint arXiv:2409.13826}, year={2024}, archivePrefix={arXiv}, eprint={2409.13826}, primaryClass={cs.RO} }
grassmann2024clarke
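The Clarke transformation mentioned in this abstract maps n symmetrically arranged phase (here: tendon) values to a two-dimensional pair of coordinates. As a hedged sketch (the textbook amplitude-invariant vector-control form with scaling 2/n and evenly spaced tendons is an assumption, not the paper's exact formulation):

```python
import math

# Hedged sketch of a generalized Clarke transform (textbook
# vector-control form; the 2/n amplitude-invariant scaling and evenly
# spaced tendon placement are assumptions, not the paper's code).

def clarke(values: list[float]) -> tuple[float, float]:
    """Map n evenly spaced tendon values to 2D Clarke coordinates."""
    n = len(values)
    alpha = (2 / n) * sum(v * math.cos(2 * math.pi * i / n)
                          for i, v in enumerate(values))
    beta = (2 / n) * sum(v * math.sin(2 * math.pi * i / n)
                         for i, v in enumerate(values))
    return alpha, beta

# A cosine actuation pattern over 4 tendons maps to (amplitude, 0),
# while a common-mode (all-equal) pattern maps to the origin -- this is
# how the transform disentangles the interdependent tendon values.
n = 4
pattern = [math.cos(2 * math.pi * i / n) for i in range(n)]
alpha, beta = clarke(pattern)
assert abs(alpha - 1.0) < 1e-12 and abs(beta) < 1e-12
```
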
arxiv-660109
2409.13827
Asymptotic error distribution of accelerated exponential Euler method for parabolic SPDEs
<|reference_start|>Asymptotic error distribution of accelerated exponential Euler method for parabolic SPDEs: The asymptotic error distribution of numerical methods applied to stochastic ordinary differential equations has been well studied, which characterizes the evolution pattern of the error distribution in the small step-size regime. It is still open for stochastic partial differential equations whether the normalized error process of numerical methods admits a nontrivial limit distribution. We answer this question by presenting the asymptotic error distribution of the temporal accelerated exponential Euler (AEE) method when applied to parabolic stochastic partial differential equations. In order to overcome the difficulty caused by the infinite-dimensional setting, we establish a uniform approximation theorem for convergence in distribution. Based on it, we derive the limit distribution of the normalized error process of the AEE method by studying the limit distribution of its certain appropriate finite-dimensional approximation process. As applications of our main result, the asymptotic error distribution of a fully discrete AEE method for the original equation and that of the AEE method for a stochastic ordinary differential equation are also obtained.<|reference_end|>
arxiv
@article{hong2024asymptotic, title={Asymptotic error distribution of accelerated exponential Euler method for parabolic SPDEs}, author={Jialin Hong and Diancong Jin and Xu Wang and Guanlin Yang}, journal={arXiv preprint arXiv:2409.13827}, year={2024}, archivePrefix={arXiv}, eprint={2409.13827}, primaryClass={math.NA cs.NA math.PR} }
hong2024asymptotic
arxiv-660110
2409.13828
ViTGuard: Attention-aware Detection against Adversarial Examples for Vision Transformer
<|reference_start|>ViTGuard: Attention-aware Detection against Adversarial Examples for Vision Transformer: The use of transformers for vision tasks has challenged the traditional dominant role of convolutional neural networks (CNN) in computer vision (CV). For image classification tasks, Vision Transformer (ViT) effectively establishes spatial relationships between patches within images, directing attention to important areas for accurate predictions. However, similar to CNNs, ViTs are vulnerable to adversarial attacks, which mislead the image classifier into making incorrect decisions on images with carefully designed perturbations. Moreover, adversarial patch attacks, which introduce arbitrary perturbations within a small area, pose a more serious threat to ViTs. Even worse, traditional detection methods, originally designed for CNN models, are impractical or suffer significant performance degradation when applied to ViTs, and they generally overlook patch attacks. In this paper, we propose ViTGuard as a general detection method for defending ViT models against adversarial attacks, including typical attacks where perturbations spread over the entire input and patch attacks. ViTGuard uses a Masked Autoencoder (MAE) model to recover randomly masked patches from the unmasked regions, providing a flexible image reconstruction strategy. Then, threshold-based detectors leverage distinctive ViT features, including attention maps and classification (CLS) token representations, to distinguish between normal and adversarial samples. The MAE model does not involve any adversarial samples during training, ensuring the effectiveness of our detectors against unseen attacks. ViTGuard is compared with seven existing detection methods under nine attacks across three datasets. The evaluation results show the superiority of ViTGuard over existing detectors. Finally, considering potential detection evasion, we further demonstrate ViTGuard's robustness against adaptive evasion attacks.<|reference_end|>
arxiv
@article{sun2024vitguard, title={ViTGuard: Attention-aware Detection against Adversarial Examples for Vision Transformer}, author={Shihua Sun and Kenechukwu Nwodo and Shridatt Sugrim and Angelos Stavrou and Haining Wang}, journal={arXiv preprint arXiv:2409.13828}, year={2024}, archivePrefix={arXiv}, eprint={2409.13828}, primaryClass={cs.CV cs.CR} }
sun2024vitguard
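The threshold-based detection step described in this abstract can be illustrated with a toy sketch (the score definition and the false-positive-rate calibration rule below are assumptions for illustration, not the paper's pipeline):

```python
# Hedged sketch of threshold-based anomaly detection: calibrate a
# cutoff on scores from normal samples only (as the abstract notes, no
# adversarial samples are needed for training), then flag any sample
# whose score exceeds it. Score definition and calibration rule are
# illustrative assumptions, not the paper's method.

def calibrate_threshold(normal_scores: list[float], fpr: float = 0.05) -> float:
    """Pick the score that only a `fpr` fraction of normal samples exceed."""
    ranked = sorted(normal_scores)
    cut = min(int(len(ranked) * (1 - fpr)), len(ranked) - 1)
    return ranked[cut]

def is_adversarial(score: float, threshold: float) -> bool:
    """Flag a sample whose (e.g. reconstruction-based) score is too high."""
    return score > threshold

# Toy calibration on 100 "normal" scores spread over [0, 1):
normal = [i / 100 for i in range(100)]
t = calibrate_threshold(normal, fpr=0.05)
assert is_adversarial(1.8, t) and not is_adversarial(0.3, t)
```
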
arxiv-660111
2409.13831
Measuring Copyright Risks of Large Language Model via Partial Information Probing
<|reference_start|>Measuring Copyright Risks of Large Language Model via Partial Information Probing: Exploring the data sources used to train Large Language Models (LLMs) is a crucial direction in investigating potential copyright infringement by these models. While this approach can identify the possible use of copyrighted materials in training data, it does not directly measure infringing risks. Recent research has shifted towards testing whether LLMs can directly output copyrighted content. Addressing this direction, we investigate and assess LLMs' capacity to generate infringing content by providing them with partial information from copyrighted materials, and try to use iterative prompting to get LLMs to generate more infringing content. Specifically, we input a portion of a copyrighted text into LLMs, prompt them to complete it, and then analyze the overlap between the generated content and the original copyrighted material. Our findings demonstrate that LLMs can indeed generate content highly overlapping with copyrighted materials based on these partial inputs.<|reference_end|>
arxiv
@article{zhao2024measuring, title={Measuring Copyright Risks of Large Language Model via Partial Information Probing}, author={Weijie Zhao and Huajie Shao and Zhaozhuo Xu and Suzhen Duan and Denghui Zhang}, journal={arXiv preprint arXiv:2409.13831}, year={2024}, archivePrefix={arXiv}, eprint={2409.13831}, primaryClass={cs.CL cs.AI cs.CR} }
zhao2024measuring
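The overlap analysis described in this abstract can be sketched as follows (the word n-gram coverage measure below is an assumed stand-in for illustration; the paper's actual overlap metric is not specified in the abstract):

```python
# Hedged sketch: measure how much of an original text's word n-grams a
# model generation reproduces. The trigram coverage measure here is an
# illustrative assumption, not the paper's metric.

def ngrams(text: str, n: int = 3) -> set:
    """Set of word n-grams occurring in `text`."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(generated: str, original: str, n: int = 3) -> float:
    """Fraction of the original's n-grams reproduced by the generation."""
    orig = ngrams(original, n)
    if not orig:
        return 0.0
    return len(ngrams(generated, n) & orig) / len(orig)

original = "it was the best of times it was the worst of times"
verbatim = "it was the best of times it was the worst of times"
partial = "it was the best of days"
assert overlap(verbatim, original) == 1.0          # full reproduction
assert 0.0 < overlap(partial, original) < 1.0      # partial reproduction
```
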
arxiv-660112
2409.13832
GTSinger: A Global Multi-Technique Singing Corpus with Realistic Music Scores for All Singing Tasks
<|reference_start|>GTSinger: A Global Multi-Technique Singing Corpus with Realistic Music Scores for All Singing Tasks: The scarcity of high-quality and multi-task singing datasets significantly hinders the development of diverse controllable and personalized singing tasks, as existing singing datasets suffer from low quality, limited diversity of languages and singers, absence of multi-technique information and realistic music scores, and poor task suitability. To tackle these problems, we present GTSinger, a large global, multi-technique, free-to-use, high-quality singing corpus with realistic music scores, designed for all singing tasks, along with its benchmarks. Particularly, (1) we collect 80.59 hours of high-quality singing voices, forming the largest recorded singing dataset; (2) 20 professional singers across nine widely spoken languages offer diverse timbres and styles; (3) we provide controlled comparison and phoneme-level annotations of six commonly used singing techniques, helping technique modeling and control; (4) GTSinger offers realistic music scores, assisting real-world musical composition; (5) singing voices are accompanied by manual phoneme-to-audio alignments, global style labels, and 16.16 hours of paired speech for various singing tasks. Moreover, to facilitate the use of GTSinger, we conduct four benchmark experiments: technique-controllable singing voice synthesis, technique recognition, style transfer, and speech-to-singing conversion. The corpus and demos can be found at http://gtsinger.github.io. We provide the dataset and the code for processing data and conducting benchmarks at https://huggingface.co/datasets/GTSinger/GTSinger and https://github.com/GTSinger/GTSinger.<|reference_end|>
arxiv
@article{zhang2024gtsinger, title={GTSinger: A Global Multi-Technique Singing Corpus with Realistic Music Scores for All Singing Tasks}, author={Yu Zhang and Changhao Pan and Wenxiang Guo and Ruiqi Li and Zhiyuan Zhu and Jialei Wang and Wenhao Xu and Jingyu Lu and Zhiqing Hong and Chuxin Wang and LiChao Zhang and Jinzheng He and Ziyue Jiang and Yuxin Chen and Chen Yang and Jiecheng Zhou and Xinyu Cheng and Zhou Zhao}, journal={arXiv preprint arXiv:2409.13832}, year={2024}, archivePrefix={arXiv}, eprint={2409.13832}, primaryClass={eess.AS cs.CL cs.SD} }
zhang2024gtsinger
arxiv-660113
2409.13837
Adaptive Robot Perception in Construction Environments using 4D BIM
<|reference_start|>Adaptive Robot Perception in Construction Environments using 4D BIM: Human Activity Recognition (HAR) is a pivotal component of robot perception for physical Human Robot Interaction (pHRI) tasks. In construction robotics, it is vital that robots have an accurate and robust perception of worker activities. This enhanced perception is the foundation of trustworthy and safe Human-Robot Collaboration (HRC) in an industrial setting. Many developed HAR algorithms lack the robustness and adaptability to ensure seamless HRC. Recent works have employed multi-modal approaches to increase feature considerations. This paper further expands previous research to include 4D building information modeling (BIM) schedule data. We created a pipeline that transforms high-level BIM schedule activities into a set of low-level tasks in real-time. The framework then utilizes this subset as a tool to restrict the solution space that the HAR algorithm can predict activities from. By limiting this subspace through 4D BIM schedule data, the algorithm has a higher chance of predicting the true possible activities from a smaller pool of possibilities in a localized setting as compared to calculating all global possibilities at every point. Results indicate that the proposed approach achieves higher confidence predictions over the base model without leveraging the BIM data.<|reference_end|>
arxiv
@article{amani2024adaptive, title={Adaptive Robot Perception in Construction Environments using 4D BIM}, author={Mani Amani and Reza Akhavian}, journal={arXiv preprint arXiv:2409.13837}, year={2024}, archivePrefix={arXiv}, eprint={2409.13837}, primaryClass={cs.RO} }
amani2024adaptive
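The solution-space restriction described in this abstract, where the 4D BIM schedule narrows which activities the HAR model may predict, can be sketched as follows (the activity names and the masking-before-argmax step are illustrative assumptions, not the paper's pipeline):

```python
# Hedged sketch of prediction restriction via a schedule-derived task
# subset: scores for activities outside the allowed set are discarded
# before picking the most likely activity. Activity names and the
# masking rule are illustrative assumptions.

def restrict_and_predict(scores: dict, allowed: set) -> str:
    """Pick the highest-scoring activity within the allowed subset."""
    candidates = {a: s for a, s in scores.items() if a in allowed}
    return max(candidates, key=candidates.get)

# HAR scores over all global activities; the 4D BIM schedule window
# says only hammering and drilling are plausible right now.
scores = {"hammering": 0.30, "sawing": 0.35, "drilling": 0.35}
allowed = {"hammering", "drilling"}
assert restrict_and_predict(scores, allowed) == "drilling"
```

Note how the unrestricted argmax would be ambiguous between "sawing" and "drilling", while the schedule-restricted prediction is confident, mirroring the higher-confidence predictions reported above.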
arxiv-660114
2409.13838
Key-Scan-Based Mobile Robot Navigation: Integrated Mapping, Planning, and Control using Graphs of Scan Regions
<|reference_start|>Key-Scan-Based Mobile Robot Navigation: Integrated Mapping, Planning, and Control using Graphs of Scan Regions: Safe autonomous navigation in a priori unknown environments is an essential skill for mobile robots to reliably and adaptively perform diverse tasks (e.g., delivery, inspection, and interaction) in unstructured cluttered environments. Hybrid metric-topological maps, constructed as a pose graph of local submaps, offer a computationally efficient world representation for adaptive mapping, planning, and control at the regional level. In this paper, we consider a pose graph of locally sensed star-convex scan regions as a metric-topological map, with star convexity enabling simple yet effective local navigation strategies. We design a new family of safe local scan navigation policies and present a perception-driven feedback motion planning method through the sequential composition of local scan navigation policies, enabling provably correct and safe robot navigation over the union of local scan regions. We introduce a new concept of bridging and frontier scans for automated key scan selection and exploration for integrated mapping and navigation in unknown environments. We demonstrate the effectiveness of our key-scan-based navigation and mapping framework using a mobile robot equipped with a 360$^{\circ}$ laser range scanner in 2D cluttered environments through numerical ROS-Gazebo simulations and real hardware experiments.<|reference_end|>
arxiv
@article{latha2024key-scan-based, title={Key-Scan-Based Mobile Robot Navigation: Integrated Mapping, Planning, and Control using Graphs of Scan Regions}, author={Dharshan Bashkaran Latha and {\"O}m{\"u}r Arslan}, journal={arXiv preprint arXiv:2409.13838}, year={2024}, archivePrefix={arXiv}, eprint={2409.13838}, primaryClass={cs.RO cs.SY eess.SY} }
latha2024key-scan-based
arxiv-660115
2409.13843
STOP! Benchmarking Large Language Models with Sensitivity Testing on Offensive Progressions
<|reference_start|>STOP! Benchmarking Large Language Models with Sensitivity Testing on Offensive Progressions: Mitigating explicit and implicit biases in Large Language Models (LLMs) has become a critical focus in the field of natural language processing. However, many current methodologies evaluate scenarios in isolation, without considering the broader context or the spectrum of potential biases within each situation. To address this, we introduce the Sensitivity Testing on Offensive Progressions (STOP) dataset, which includes 450 offensive progressions containing 2,700 unique sentences of varying severity that progressively escalate from less to more explicitly offensive. Covering a broad spectrum of 9 demographics and 46 sub-demographics, STOP ensures inclusivity and comprehensive coverage. We evaluate several leading closed- and open-source models, including GPT-4, Mixtral, and Llama 3. Our findings reveal that even the best-performing models detect bias inconsistently, with success rates ranging from 19.3% to 69.8%. We also demonstrate how aligning models with human judgments on STOP can improve model answer rates on sensitive tasks such as BBQ, StereoSet, and CrowS-Pairs by up to 191%, while maintaining or even improving performance. STOP presents a novel framework for assessing the complex nature of biases in LLMs, which will enable more effective bias mitigation strategies and facilitate the creation of fairer language models.<|reference_end|>
arxiv
@article{morabito2024stop, title={STOP! Benchmarking Large Language Models with Sensitivity Testing on Offensive Progressions}, author={Robert Morabito and Sangmitra Madhusudan and Tyler McDonald and Ali Emami}, journal={arXiv preprint arXiv:2409.13843}, year={2024}, archivePrefix={arXiv}, eprint={2409.13843}, primaryClass={cs.CL cs.AI cs.CY} }
morabito2024stop
arxiv-660116
2409.13845
On the Impact of Bounded Rationality in Strategic Data Gathering
<|reference_start|>On the Impact of Bounded Rationality in Strategic Data Gathering: We consider the problem of estimation from survey data gathered from strategic and boundedly-rational agents with heterogeneous objectives and available information. Particularly, we consider a setting where there are three different types of survey responders with varying levels of available information, strategicness, and cognitive hierarchy: i) a non-strategic agent with an honest response, ii) a strategic agent that believes everyone else is a non-strategic agent and that the decoder also believes the same, hence assumes a naive estimator, i.e., level-1 in cognitive hierarchy, iii) a strategic agent that believes the population is Poisson distributed over the previous types, and that the decoder believes the same. We model each of these scenarios as a strategic classification of a 2-dimensional source (possibly correlated source and bias components) with quadratic distortion measures and provide a design algorithm. Finally, we provide our numerical results and the code to obtain them for research purposes at https://github.com/strategic-quantization/bounded-rationality.<|reference_end|>
arxiv
@article{anand2024on, title={On the Impact of Bounded Rationality in Strategic Data Gathering}, author={Anju Anand and Emrah Akyol}, journal={arXiv preprint arXiv:2409.13845}, year={2024}, archivePrefix={arXiv}, eprint={2409.13845}, primaryClass={cs.GT cs.SY eess.SY} }
anand2024on
arxiv-660117
2409.13846
Multi-Modality Conditioned Variational U-Net for Field-of-View Extension in Brain Diffusion MRI
<|reference_start|>Multi-Modality Conditioned Variational U-Net for Field-of-View Extension in Brain Diffusion MRI: An incomplete field-of-view (FOV) in diffusion magnetic resonance imaging (dMRI) can severely hinder the volumetric and bundle analyses of whole-brain white matter connectivity. Although existing works have investigated imputing the missing regions using deep generative models, it remains unclear how to specifically utilize additional information from paired multi-modality data and whether this can enhance the imputation quality and be useful for downstream tractography. To fill this gap, we propose a novel framework for imputing dMRI scans in the incomplete part of the FOV by integrating the learned diffusion features in the acquired part of the FOV to the complete brain anatomical structure. We hypothesize that by this design the proposed framework can enhance the imputation performance of the dMRI scans and therefore be useful for repairing whole-brain tractography in corrupted dMRI scans with incomplete FOV. We tested our framework on two cohorts from different sites with a total of 96 subjects and compared it with a baseline imputation method that treats the information from T1w and dMRI scans equally. The proposed framework achieved significant improvements in imputation performance, as demonstrated by angular correlation coefficient (p < 1E-5), and in downstream tractography accuracy, as demonstrated by Dice score (p < 0.01). Results suggest that the proposed framework improved imputation performance in dMRI scans by specifically utilizing additional information from paired multi-modality data, compared with the baseline method. The imputation achieved by the proposed framework enhances whole-brain tractography, and therefore reduces the uncertainty when analyzing bundles associated with neurodegenerative diseases.<|reference_end|>
arxiv
@article{li2024multi-modality, title={Multi-Modality Conditioned Variational U-Net for Field-of-View Extension in Brain Diffusion MRI}, author={Zhiyuan Li, Tianyuan Yao, Praitayini Kanakaraj, Chenyu Gao, Shunxing Bao, Lianrui Zuo, Michael E. Kim, Nancy R. Newlin, Gaurav Rudravaram, Nazirah M. Khairi, Yuankai Huo, Kurt G. Schilling, Walter A. Kukull, Arthur W. Toga, Derek B. Archer, Timothy J. Hohman, Bennett A. Landman}, journal={arXiv preprint arXiv:2409.13846}, year={2024}, archivePrefix={arXiv}, eprint={2409.13846}, primaryClass={cs.CV cs.LG} }
li2024multi-modality
arxiv-660118
2409.13847
Segment Discovery: Enhancing E-commerce Targeting
<|reference_start|>Segment Discovery: Enhancing E-commerce Targeting: Modern e-commerce services frequently target customers with incentives or interventions to engage them in their products such as games, shopping, video streaming, etc. This customer engagement increases acquisition of more customers and retention of existing ones, leading to more business for the company while improving customer experience. Often, customers are either randomly targeted or targeted based on the propensity of desirable behavior. However, such policies can be suboptimal as they do not target the set of customers who would benefit the most from the intervention and they may also not take account of any constraints. In this paper, we propose a policy framework based on uplift modeling and constrained optimization that identifies customers to target for a use-case specific intervention so as to maximize the value to the business, while taking account of any given constraints. We demonstrate improvement over state-of-the-art targeting approaches using two large-scale experimental studies and a production implementation.<|reference_end|>
arxiv
@article{li2024segment, title={Segment Discovery: Enhancing E-commerce Targeting}, author={Qiqi Li, Roopali Singh, Charin Polpanumas, Tanner Fiez, Namita Kumar, Shreya Chakrabarti}, journal={arXiv preprint arXiv:2409.13847}, year={2024}, archivePrefix={arXiv}, eprint={2409.13847}, primaryClass={cs.LG cs.IR} }
li2024segment
arxiv-660119
2409.13851
Learning Ordering in Crystalline Materials with Symmetry-Aware Graph Neural Networks
<|reference_start|>Learning Ordering in Crystalline Materials with Symmetry-Aware Graph Neural Networks: Graph convolutional neural networks (GCNNs) have become a machine learning workhorse for screening the chemical space of crystalline materials in fields such as catalysis and energy storage, by predicting properties from structures. Multicomponent materials, however, present a unique challenge since they can exhibit chemical (dis)order, where a given lattice structure can encompass a variety of elemental arrangements ranging from highly ordered structures to fully disordered solid solutions. Critically, properties like stability, strength, and catalytic performance depend not only on structures but also on orderings. To enable rigorous materials design, it is thus critical to ensure GCNNs are capable of distinguishing among atomic orderings. However, the ordering-aware capability of GCNNs has been poorly understood. Here, we benchmark various neural network architectures for capturing the ordering-dependent energetics of multicomponent materials in a custom-made dataset generated with high-throughput atomistic simulations. Conventional symmetry-invariant GCNNs were found unable to discern the structural difference between the diverse symmetrically inequivalent atomic orderings of the same material, while symmetry-equivariant model architectures could inherently preserve and differentiate the distinct crystallographic symmetries of various orderings.<|reference_end|>
arxiv
@article{peng2024learning, title={Learning Ordering in Crystalline Materials with Symmetry-Aware Graph Neural Networks}, author={Jiayu Peng, James Damewood, Jessica Karaguesian, Jaclyn R. Lunger, Rafael Gómez-Bombarelli}, journal={arXiv preprint arXiv:2409.13851}, year={2024}, archivePrefix={arXiv}, eprint={2409.13851}, primaryClass={cond-mat.mtrl-sci cs.LG physics.chem-ph} }
peng2024learning
arxiv-660120
2409.13852
Do language models practice what they preach? Examining language ideologies about gendered language reform encoded in LLMs
<|reference_start|>Do language models practice what they preach? Examining language ideologies about gendered language reform encoded in LLMs: We study language ideologies in text produced by LLMs through a case study on English gendered language reform (related to role nouns like congressperson/-woman/-man, and singular they). First, we find political bias: when asked to use language that is "correct" or "natural", LLMs use language most similarly to when asked to align with conservative (vs. progressive) values. This shows how LLMs' metalinguistic preferences can implicitly communicate the language ideologies of a particular political group, even in seemingly non-political contexts. Second, we find LLMs exhibit internal inconsistency: LLMs use gender-neutral variants more often when more explicit metalinguistic context is provided. This shows how the language ideologies expressed in text produced by LLMs can vary, which may be unexpected to users. We discuss the broader implications of these findings for value alignment.<|reference_end|>
arxiv
@article{watson2024do, title={Do language models practice what they preach? Examining language ideologies about gendered language reform encoded in LLMs}, author={Julia Watson, Sophia Lee, Barend Beekhuizen and Suzanne Stevenson}, journal={arXiv preprint arXiv:2409.13852}, year={2024}, archivePrefix={arXiv}, eprint={2409.13852}, primaryClass={cs.CL cs.AI} }
watson2024do
arxiv-660121
2409.13853
Unlocking Memorization in Large Language Models with Dynamic Soft Prompting
<|reference_start|>Unlocking Memorization in Large Language Models with Dynamic Soft Prompting: Pretrained large language models (LLMs) have revolutionized natural language processing (NLP) tasks such as summarization, question answering, and translation. However, LLMs pose significant security risks due to their tendency to memorize training data, leading to potential privacy breaches and copyright infringement. Accurate measurement of this memorization is essential to evaluate and mitigate these potential risks. However, previous attempts to characterize memorization are constrained by either using prefixes only or by prepending a constant soft prompt to the prefixes, which cannot react to changes in input. To address this challenge, we propose a novel method for estimating LLM memorization using dynamic, prefix-dependent soft prompts. Our approach involves training a transformer-based generator to produce soft prompts that adapt to changes in input, thereby enabling more accurate extraction of memorized data. Our method not only addresses the limitations of previous methods but also demonstrates superior performance in diverse experimental settings compared to state-of-the-art techniques. In particular, our method can achieve the maximum relative improvement of 112.75% and 32.26% over the vanilla baseline in terms of discoverable memorization rate for the text generation task and code generation task respectively.<|reference_end|>
arxiv
@article{wang2024unlocking, title={Unlocking Memorization in Large Language Models with Dynamic Soft Prompting}, author={Zhepeng Wang, Runxue Bao, Yawen Wu, Jackson Taylor, Cao Xiao, Feng Zheng, Weiwen Jiang, Shangqian Gao, Yanfu Zhang}, journal={arXiv preprint arXiv:2409.13853}, year={2024}, archivePrefix={arXiv}, eprint={2409.13853}, primaryClass={cs.CL cs.AI cs.CR cs.LG} }
wang2024unlocking
arxiv-660122
2409.13854
More Consideration for the Perceptron
<|reference_start|>More Consideration for the Perceptron: In this paper, we introduce the gated perceptron, an enhancement of the conventional perceptron, which incorporates an additional input computed as the product of the existing inputs. This allows the perceptron to capture non-linear interactions between features, significantly improving its ability to classify and regress on complex datasets. We explore its application in both linear and non-linear regression tasks using the Iris dataset, as well as binary and multi-class classification problems, including the PIMA Indian dataset and Breast Cancer Wisconsin dataset. Our results demonstrate that the gated perceptron can generate more distinct decision regions compared to traditional perceptrons, enhancing its classification capabilities, particularly in handling non-linear data. Performance comparisons show that the gated perceptron competes with state-of-the-art classifiers while maintaining a simple architecture.<|reference_end|>
arxiv
@article{larabi2024more, title={More Consideration for the Perceptron}, author={Slimane Larabi}, journal={arXiv preprint arXiv:2409.13854}, year={2024}, archivePrefix={arXiv}, eprint={2409.13854}, primaryClass={cs.LG cs.AI cs.NE} }
larabi2024more
arxiv-660123
2409.13857
Wormhole: Concept-Aware Deep Representation Learning for Co-Evolving Sequences
<|reference_start|>Wormhole: Concept-Aware Deep Representation Learning for Co-Evolving Sequences: Identifying and understanding dynamic concepts in co-evolving sequences is crucial for analyzing complex systems such as IoT applications, financial markets, and online activity logs. These concepts provide valuable insights into the underlying structures and behaviors of sequential data, enabling better decision-making and forecasting. This paper introduces Wormhole, a novel deep representation learning framework that is concept-aware and designed for co-evolving time sequences. Our model presents a self-representation layer and a temporal smoothness constraint to ensure robust identification of dynamic concepts and their transitions. Additionally, concept transitions are detected by identifying abrupt changes in the latent space, signifying a shift to new behavior - akin to passing through a wormhole. This novel mechanism accurately discerns concepts within co-evolving sequences and pinpoints the exact locations of these wormholes, enhancing the interpretability of the learned representations. Experiments demonstrate that this method can effectively segment time series data into meaningful concepts, providing a valuable tool for analyzing complex temporal patterns and advancing the detection of concept drifts.<|reference_end|>
arxiv
@article{xu2024wormhole:, title={Wormhole: Concept-Aware Deep Representation Learning for Co-Evolving Sequences}, author={Kunpeng Xu, Lifei Chen, Shengrui Wang}, journal={arXiv preprint arXiv:2409.13857}, year={2024}, archivePrefix={arXiv}, eprint={2409.13857}, primaryClass={cs.LG cs.AI} }
xu2024wormhole:
arxiv-660124
2409.13859
PanoCoach: Enhancing Tactical Coaching and Communication in Soccer with Mixed-Reality Telepresence
<|reference_start|>PanoCoach: Enhancing Tactical Coaching and Communication in Soccer with Mixed-Reality Telepresence: Soccer, as a dynamic team sport, requires seamless coordination and integration of tactical strategies across all players. Adapting to new tactical systems is a critical but often challenging aspect of soccer at all professional levels. Even the best players can struggle with this process, primarily due to the complexities of conveying and internalizing intricate tactical patterns. Traditional communication methods like whiteboards, on-field instructions, and video analysis often present significant difficulties in perceiving spatial relationships, anticipating team movements, and facilitating live conversation during training sessions. These challenges can lead to inconsistent interpretations of the coach's tactics among players, regardless of their skill level. To bridge the gap between tactical communication and physical execution, we propose a mixed-reality telepresence solution designed to support multi-view tactical explanations during practice. Our concept involves a multi-screen setup combining a tablet for coaches to annotate and demonstrate concepts in both 2D and 3D views, alongside VR to immerse athletes in a first-person perspective, allowing them to experience a sense of presence during coaching. Demo video uploaded at https://youtu.be/O7o4Wzd-7rw<|reference_end|>
arxiv
@article{kang2024panocoach:, title={PanoCoach: Enhancing Tactical Coaching and Communication in Soccer with Mixed-Reality Telepresence}, author={Andrew Kang, Hanspeter Pfister, Tica Lin}, journal={arXiv preprint arXiv:2409.13859}, year={2024}, archivePrefix={arXiv}, eprint={2409.13859}, primaryClass={cs.HC cs.GR} }
kang2024panocoach:
arxiv-660125
2409.13860
SSE: Multimodal Semantic Data Selection and Enrichment for Industrial-scale Data Assimilation
<|reference_start|>SSE: Multimodal Semantic Data Selection and Enrichment for Industrial-scale Data Assimilation: In recent years, the data collected for artificial intelligence has grown to an unmanageable amount. Particularly within industrial applications, such as autonomous vehicles, model training computation budgets are being exceeded while model performance is saturating -- and yet more data continues to pour in. To navigate the flood of data, we propose a framework to select the most semantically diverse and important dataset portion. Then, we further semantically enrich it by discovering meaningful new data from a massive unlabeled data pool. Importantly, we can provide explainability by leveraging foundation models to generate semantics for every data point. We quantitatively show that our Semantic Selection and Enrichment framework (SSE) can a) successfully maintain model performance with a smaller training dataset and b) improve model performance by enriching the smaller dataset without exceeding the original dataset size. Consequently, we demonstrate that semantic diversity is imperative for optimal data selection and model performance.<|reference_end|>
arxiv
@article{shen2024sse:, title={SSE: Multimodal Semantic Data Selection and Enrichment for Industrial-scale Data Assimilation}, author={Maying Shen, Nadine Chang, Sifei Liu, Jose M. Alvarez}, journal={arXiv preprint arXiv:2409.13860}, year={2024}, archivePrefix={arXiv}, eprint={2409.13860}, primaryClass={cs.CV} }
shen2024sse:
arxiv-660126
2409.13861
Learning to Simulate Aerosol Dynamics with Graph Neural Networks
<|reference_start|>Learning to Simulate Aerosol Dynamics with Graph Neural Networks: Aerosol effects on climate, weather, and air quality depend on characteristics of individual particles, which are tremendously diverse and change in time. Particle-resolved models are the only models able to capture this diversity in particle physiochemical properties, and these models are computationally expensive. As a strategy for accelerating particle-resolved microphysics models, we introduce Graph-based Learning of Aerosol Dynamics (GLAD) and use this model to train a surrogate of the particle-resolved model PartMC-MOSAIC. GLAD implements a Graph Network-based Simulator (GNS), a machine learning framework that has been used to simulate particle-based fluid dynamics models. In GLAD, each particle is represented as a node in a graph, and the evolution of the particle population over time is simulated through learned message passing. We demonstrate our GNS approach on a simple aerosol system that includes condensation of sulfuric acid onto particles composed of sulfate, black carbon, organic carbon, and water. A graph with particles as nodes is constructed, and a graph neural network (GNN) is then trained using the model output from PartMC-MOSAIC. The trained GNN can then be used for simulating and predicting aerosol dynamics over time. Results demonstrate the framework's ability to accurately learn chemical dynamics and generalize across different scenarios, achieving efficient training and prediction times. We evaluate the performance across three scenarios, highlighting the framework's robustness and adaptability in modeling aerosol microphysics and chemistry.<|reference_end|>
arxiv
@article{ferracina2024learning, title={Learning to Simulate Aerosol Dynamics with Graph Neural Networks}, author={Fabiana Ferracina, Payton Beeler, Mahantesh Halappanavar, Bala Krishnamoorthy, Marco Minutoli, and Laura Fierce}, journal={arXiv preprint arXiv:2409.13861}, year={2024}, archivePrefix={arXiv}, eprint={2409.13861}, primaryClass={physics.ao-ph cs.LG} }
ferracina2024learning
arxiv-660127
2409.13864
Persistent Backdoor Attacks in Continual Learning
<|reference_start|>Persistent Backdoor Attacks in Continual Learning: Backdoor attacks pose a significant threat to neural networks, enabling adversaries to manipulate model outputs on specific inputs, often with devastating consequences, especially in critical applications. While backdoor attacks have been studied in various contexts, little attention has been given to their practicality and persistence in continual learning, particularly in understanding how the continual updates to model parameters, as new data distributions are learned and integrated, impact the effectiveness of these attacks over time. To address this gap, we introduce two persistent backdoor attacks-Blind Task Backdoor and Latent Task Backdoor-each leveraging minimal adversarial influence. Our blind task backdoor subtly alters the loss computation without direct control over the training process, while the latent task backdoor influences only a single task's training, with all other tasks trained benignly. We evaluate these attacks under various configurations, demonstrating their efficacy with static, dynamic, physical, and semantic triggers. Our results show that both attacks consistently achieve high success rates across different continual learning algorithms, while effectively evading state-of-the-art defenses, such as SentiNet and I-BAU.<|reference_end|>
arxiv
@article{guo2024persistent, title={Persistent Backdoor Attacks in Continual Learning}, author={Zhen Guo, Abhinav Kumar, Reza Tourani}, journal={arXiv preprint arXiv:2409.13864}, year={2024}, archivePrefix={arXiv}, eprint={2409.13864}, primaryClass={cs.LG cs.CR} }
guo2024persistent
arxiv-660128
2409.13865
Neural Configuration Distance Function for Continuum Robot Control
<|reference_start|>Neural Configuration Distance Function for Continuum Robot Control: This paper presents a novel method for modeling the shape of a continuum robot as a Neural Configuration Euclidean Distance Function (N-CEDF). By learning separate distance fields for each link and combining them through the kinematics chain, the learned N-CEDF provides an accurate and computationally efficient representation of the robot's shape. The key advantage of a distance function representation of a continuum robot is that it enables efficient collision checking for motion planning in dynamic and cluttered environments, even with point-cloud observations. We integrate the N-CEDF into a Model Predictive Path Integral (MPPI) controller to generate safe trajectories. The proposed approach is validated for continuum robots with various links in several simulated environments with static and dynamic obstacles.<|reference_end|>
arxiv
@article{long2024neural, title={Neural Configuration Distance Function for Continuum Robot Control}, author={Kehan Long and Hardik Parwana and Georgios Fainekos and Bardh Hoxha and Hideki Okamoto and Nikolay Atanasov}, journal={arXiv preprint arXiv:2409.13865}, year={2024}, archivePrefix={arXiv}, eprint={2409.13865}, primaryClass={cs.RO cs.SY eess.SY} }
long2024neural
arxiv-660129
2409.13867
MAGICS: Adversarial RL with Minimax Actors Guided by Implicit Critic Stackelberg for Convergent Neural Synthesis of Robot Safety
<|reference_start|>MAGICS: Adversarial RL with Minimax Actors Guided by Implicit Critic Stackelberg for Convergent Neural Synthesis of Robot Safety: While robust optimal control theory provides a rigorous framework to compute robot control policies that are provably safe, it struggles to scale to high-dimensional problems, leading to increased use of deep learning for tractable synthesis of robot safety. Unfortunately, existing neural safety synthesis methods often lack convergence guarantees and solution interpretability. In this paper, we present Minimax Actors Guided by Implicit Critic Stackelberg (MAGICS), a novel adversarial reinforcement learning (RL) algorithm that guarantees local convergence to a minimax equilibrium solution. We then build on this approach to provide local convergence guarantees for a general deep RL-based robot safety synthesis algorithm. Through both simulation studies on OpenAI Gym environments and hardware experiments with a 36-dimensional quadruped robot, we show that MAGICS can yield robust control policies outperforming the state-of-the-art neural safety synthesis methods.<|reference_end|>
arxiv
@article{wang2024magics:, title={MAGICS: Adversarial RL with Minimax Actors Guided by Implicit Critic Stackelberg for Convergent Neural Synthesis of Robot Safety}, author={Justin Wang and Haimin Hu and Duy Phuong Nguyen and Jaime Fernández Fisac}, journal={arXiv preprint arXiv:2409.13867}, year={2024}, archivePrefix={arXiv}, eprint={2409.13867}, primaryClass={cs.RO cs.AI cs.LG cs.SY eess.SY} }
wang2024magics:
arxiv-660130
2409.13868
Deep Learning-Based Channel Squeeze U-Structure for Lung Nodule Detection and Segmentation
<|reference_start|>Deep Learning-Based Channel Squeeze U-Structure for Lung Nodule Detection and Segmentation: This paper introduces a novel deep-learning method for the automatic detection and segmentation of lung nodules, aimed at advancing the accuracy of early-stage lung cancer diagnosis. The proposed approach leverages a unique "Channel Squeeze U-Structure" that optimizes feature extraction and information integration across multiple semantic levels of the network. This architecture includes three key modules: shallow information processing, channel residual structure, and channel squeeze integration. These modules enhance the model's ability to detect and segment small, imperceptible, or ground-glass nodules, which are critical for early diagnosis. The method demonstrates superior performance in terms of sensitivity, Dice similarity coefficient, precision, and mean Intersection over Union (IoU). Extensive experiments were conducted on the Lung Image Database Consortium (LIDC) dataset using five-fold cross-validation, showing excellent stability and robustness. The results indicate that this approach holds significant potential for improving computer-aided diagnosis systems, providing reliable support for radiologists in clinical practice and aiding in the early detection of lung cancer, especially in resource-limited settings<|reference_end|>
arxiv
@article{sui2024deep, title={Deep Learning-Based Channel Squeeze U-Structure for Lung Nodule Detection and Segmentation}, author={Mingxiu Sui, Jiacheng Hu, Tong Zhou, Zibo Liu, Likang Wen, Junliang Du}, journal={arXiv preprint arXiv:2409.13868}, year={2024}, archivePrefix={arXiv}, eprint={2409.13868}, primaryClass={eess.IV cs.CV cs.LG} }
sui2024deep
arxiv-660131
2409.13869
Generative AI Carries Non-Democratic Biases and Stereotypes: Representation of Women, Black Individuals, Age Groups, and People with Disability in AI-Generated Images across Occupations
<|reference_start|>Generative AI Carries Non-Democratic Biases and Stereotypes: Representation of Women, Black Individuals, Age Groups, and People with Disability in AI-Generated Images across Occupations: AI governance and ethics in AI development have become critical concerns, prompting active discussions among tech companies, governments, and researchers about the potential risks AI poses to our democracies. This short essay aims to highlight one such risk: how generative AI includes or excludes equity-deserving groups in its outputs. The findings reveal that generative AI is not equitably inclusive regarding gender, race, age, and visible disability.<|reference_end|>
arxiv
@article{sadeghiani2024generative, title={Generative AI Carries Non-Democratic Biases and Stereotypes: Representation of Women, Black Individuals, Age Groups, and People with Disability in AI-Generated Images across Occupations}, author={Ayoob Sadeghiani}, journal={arXiv preprint arXiv:2409.13869}, year={2024}, archivePrefix={arXiv}, eprint={2409.13869}, primaryClass={cs.AI cs.CL cs.CY} }
sadeghiani2024generative
arxiv-660132
2409.13870
Instruct-Tuning Pretrained Causal Language Models for Ancient Greek Papyrology and Epigraphy
<|reference_start|>Instruct-Tuning Pretrained Causal Language Models for Ancient Greek Papyrology and Epigraphy: This article presents an experiment in fine-tuning a pretrained causal language model (Meta's Llama 3.1 8B Instruct) for aiding in three fundamental tasks of philological research: chronological and geographic attribution as well as text restoration in ancient Greek inscriptions and documentary papyri. Using a prompt-based instruct approach, the fine-tuned models surpass the state of the art in key metrics. For inscriptions, the models achieve a lower average character error rate (CER) of 22.5% (vs. 26.3%), while closely matching top-1 accuracy (60.9% vs. 61.8%) and top-20 accuracy (77.5% vs. 78.3%) for sequences up to 10 characters. They also provide a practical advantage by ignoring spaces during reconstruction, aligning better with the scriptio continua typically used in ancient written artifacts. In geographic attribution, the model outperforms previous benchmarks with a top-1 accuracy of 75.0% (vs. 70.8%) and a top-3 accuracy of 83.7% (vs. 82.1%). For dating, it achieves an average deviation of 26.2 years (vs. 29.3) and a median deviation of 1 year (vs. 3) from the actual date range. The models also set new baselines for documentary papyri, with a CER of 16.3%, a top-1 accuracy of 71.3%, and top-20 of 85.0% in text reconstruction; a top-1 accuracy of 66.4% and top-3 of 79.9% in geographic attribution; and, in chronological attribution, a deviation of 21.7 years from the actual termini post/ante quem, with a median deviation of 0 years.<|reference_end|>
arxiv
@article{cullhed2024instruct-tuning, title={Instruct-Tuning Pretrained Causal Language Models for Ancient Greek Papyrology and Epigraphy}, author={Eric Cullhed}, journal={arXiv preprint arXiv:2409.13870}, year={2024}, archivePrefix={arXiv}, eprint={2409.13870}, primaryClass={cs.CL cs.AI cs.LG} }
cullhed2024instruct-tuning
arxiv-660133
2409.13872
Don't Call Us, We'll Call You: Towards Mixed-Initiative Interactive Proof Assistants for Programming Language Theory
<|reference_start|>Don't Call Us, We'll Call You: Towards Mixed-Initiative Interactive Proof Assistants for Programming Language Theory: There are two kinds of systems that programming language researchers use for their work. Semantics engineering tools let them interactively explore their definitions, while proof assistants can be used to check the proofs of their properties. The disconnect between the two kinds of systems leads to errors in accepted publications and also limits the modes of interaction available when writing proofs. When constructing a proof, one typically states the property and then develops the proof manually until an automatic strategy can fill the remaining gaps. We believe that an integrated and more interactive tool that leverages the typical structure of programming language could do better. A proof assistant aware of the typical structure of programming language proofs could require less human input, assist the user in understanding their proofs, but also use insights from the exploration of executable semantics in proof construction. In the early work presented in this paper, we focus on the problem of interacting with a proof assistant and leave the semantics engineering part to the future. Rather than starting with manual proof construction and then completing the last steps automatically, we propose a way of working where the tool starts with an automatic proof search and then breaks when it requires feedback from the user. We build a small proof assistant that follows this mode of interaction and illustrates the idea using a simple proof of the commutativity of the "+" operation for Peano arithmetic. Our early experience suggests that this way of working can make proof construction easier.<|reference_end|>
arxiv
@article{verter2024don't, title={Don't Call Us, We'll Call You: Towards Mixed-Initiative Interactive Proof Assistants for Programming Language Theory}, author={Jan Liam Verter, Tomas Petricek}, journal={arXiv preprint arXiv:2409.13872}, year={2024}, archivePrefix={arXiv}, eprint={2409.13872}, primaryClass={cs.PL} }
verter2024don't
arxiv-660134
2409.13875
Data Distribution Shifts in (Industrial) Federated Learning as a Privacy Issue
<|reference_start|>Data Distribution Shifts in (Industrial) Federated Learning as a Privacy Issue: We consider industrial federated learning, a collaboration between a small number of powerful, potentially competing industrial players, mediated by a third party aspiring to improve the service it provides to its customers. We argue that this configuration harbours covert privacy risks that do not arise in e.g. cross-device settings. Companies are very protective of their intellectual property and production processes. Information about changes to their production and the timing of which is to be kept private. We study a scenario in which one of the collaborators infers changes to their competitors' production by detecting potentially subtle temporal data distribution shifts. In this framing, a data distribution shift is always problematic, even if it has no negative effect on training convergence. Thus, our goal is to find means that allow the detection of distributional shifts better than customary evaluation metrics. Based on the assumption that even minor shifts translate into the collaboratively learned machine learning model, the attacker tracks the shared models' internal state with a selection of metrics from literature in order to pick up on relevant changes. In an empirical study on benchmark datasets, we show an honest-but-curious attacker to be capable of detecting subtle distributional shifts on other clients, in some cases long before they become obvious in evaluation.<|reference_end|>
arxiv
@article{brunner2024data, title={Data Distribution Shifts in (Industrial) Federated Learning as a Privacy Issue}, author={David Brunner and Alessio Montuoro}, journal={arXiv preprint arXiv:2409.13875}, year={2024}, archivePrefix={arXiv}, eprint={2409.13875}, primaryClass={cs.LG cs.CR} }
brunner2024data
arxiv-660135
2409.13876
Physics-Informed Variational State-Space Gaussian Processes
<|reference_start|>Physics-Informed Variational State-Space Gaussian Processes: Differential equations are important mechanistic models that are integral to many scientific and engineering applications. With the abundance of available data there has been a growing interest in data-driven physics-informed models. Gaussian processes (GPs) are particularly suited to this task as they can model complex, non-linear phenomena whilst incorporating prior knowledge and quantifying uncertainty. Current approaches have found some success but are limited as they either achieve poor computational scalings or focus only on the temporal setting. This work addresses these issues by introducing a variational spatio-temporal state-space GP that handles linear and non-linear physical constraints while achieving efficient linear-in-time computation costs. We demonstrate our methods in a range of synthetic and real-world settings and outperform the current state-of-the-art in both predictive and computational performance.<|reference_end|>
arxiv
@article{hamelijnck2024physics-informed, title={Physics-Informed Variational State-Space Gaussian Processes}, author={Oliver Hamelijnck and Arno Solin and Theodoros Damoulas}, journal={arXiv preprint arXiv:2409.13876}, year={2024}, archivePrefix={arXiv}, eprint={2409.13876}, primaryClass={cs.LG stat.ML} }
hamelijnck2024physics-informed
arxiv-660136
2409.13877
Achieving Predictive Precision: Leveraging LSTM and Pseudo Labeling for Volvo's Discovery Challenge at ECML-PKDD 2024
<|reference_start|>Achieving Predictive Precision: Leveraging LSTM and Pseudo Labeling for Volvo's Discovery Challenge at ECML-PKDD 2024: This paper presents the second-place methodology in the Volvo Discovery Challenge at ECML-PKDD 2024, where we used Long Short-Term Memory networks and pseudo-labeling to predict maintenance needs for a component of Volvo trucks. We processed the training data to mirror the test set structure and applied a base LSTM model to label the test data iteratively. This approach refined our model's predictive capabilities and culminated in a macro-average F1-score of 0.879, demonstrating robust performance in predictive maintenance. This work provides valuable insights for applying machine learning techniques effectively in industrial settings.<|reference_end|>
arxiv
@article{metta2024achieving, title={Achieving Predictive Precision: Leveraging LSTM and Pseudo Labeling for Volvo's Discovery Challenge at ECML-PKDD 2024}, author={Carlo Metta, Marco Gregnanin, Andrea Papini, Silvia Giulia Galfrè, Andrea Fois, Francesco Morandin, Marco Fantozzi, Maurizio Parton}, journal={arXiv preprint arXiv:2409.13877}, year={2024}, archivePrefix={arXiv}, eprint={2409.13877}, primaryClass={cs.LG} }
metta2024achieving
arxiv-660137
2409.13878
Transfer Learning for Passive Sonar Classification using Pre-trained Audio and ImageNet Models
<|reference_start|>Transfer Learning for Passive Sonar Classification using Pre-trained Audio and ImageNet Models: Transfer learning is commonly employed to leverage large, pre-trained models and perform fine-tuning for downstream tasks. The most prevalent pre-trained models are initially trained using ImageNet. However, their ability to generalize can vary across different data modalities. This study compares pre-trained Audio Neural Networks (PANNs) and ImageNet pre-trained models within the context of underwater acoustic target recognition (UATR). It was observed that the ImageNet pre-trained models slightly outperform pre-trained audio models in passive sonar classification. We also analyzed the impact of audio sampling rates for model pre-training and fine-tuning. This study contributes to transfer learning applications of UATR, illustrating the potential of pre-trained models to address limitations caused by scarce, labeled data in the UATR domain.<|reference_end|>
arxiv
@article{mohammadi2024transfer, title={Transfer Learning for Passive Sonar Classification using Pre-trained Audio and ImageNet Models}, author={Amirmohammad Mohammadi, Tejashri Kelhe, Davelle Carreiro, Alexandra Van Dine, Joshua Peeples}, journal={arXiv preprint arXiv:2409.13878}, year={2024}, archivePrefix={arXiv}, eprint={2409.13878}, primaryClass={cs.SD cs.LG eess.AS} }
mohammadi2024transfer
arxiv-660138
2409.13879
"I Never Said That": A dataset, taxonomy and baselines on response clarity classification
<|reference_start|>"I Never Said That": A dataset, taxonomy and baselines on response clarity classification: Equivocation and ambiguity in public speech are well-studied discourse phenomena, especially in political science and analysis of political interviews. Inspired by the well-grounded theory on equivocation, we aim to resolve the closely related problem of response clarity in questions extracted from political interviews, leveraging the capabilities of Large Language Models (LLMs) and human expertise. To this end, we introduce a novel taxonomy that frames the task of detecting and classifying response clarity and a corresponding clarity classification dataset which consists of question-answer (QA) pairs drawn from political interviews and annotated accordingly. Our proposed two-level taxonomy addresses the clarity of a response in terms of the information provided for a given question (high-level) and also provides a fine-grained taxonomy of evasion techniques that relate to unclear, ambiguous responses (lower-level). We combine ChatGPT and human annotators to collect, validate and annotate discrete QA pairs from political interviews, to be used for our newly introduced response clarity task. We provide a detailed analysis and conduct several experiments with different model architectures, sizes and adaptation methods to gain insights and establish new baselines over the proposed dataset and task.<|reference_end|>
arxiv
@article{thomas2024i, title={"I Never Said That": A dataset, taxonomy and baselines on response clarity classification}, author={Konstantinos Thomas, Giorgos Filandrianos, Maria Lymperaiou, Chrysoula Zerva and Giorgos Stamou}, journal={arXiv preprint arXiv:2409.13879}, year={2024}, archivePrefix={arXiv}, eprint={2409.13879}, primaryClass={cs.CL} }
thomas2024i
arxiv-660139
2409.13881
Investigation of Time-Frequency Feature Combinations with Histogram Layer Time Delay Neural Networks
<|reference_start|>Investigation of Time-Frequency Feature Combinations with Histogram Layer Time Delay Neural Networks: While deep learning has reduced the prevalence of manual feature extraction, transformation of data via feature engineering remains essential for improving model performance, particularly for underwater acoustic signals. The methods by which audio signals are converted into time-frequency representations and the subsequent handling of these spectrograms can significantly impact performance. This work demonstrates the performance impact of using different combinations of time-frequency features in a histogram layer time delay neural network. An optimal set of features is identified with results indicating that specific feature combinations outperform single data features.<|reference_end|>
arxiv
@article{mohammadi2024investigation, title={Investigation of Time-Frequency Feature Combinations with Histogram Layer Time Delay Neural Networks}, author={Amirmohammad Mohammadi, Irené Masabarakiza, Ethan Barnes, Davelle Carreiro, Alexandra Van Dine, Joshua Peeples}, journal={arXiv preprint arXiv:2409.13881}, year={2024}, archivePrefix={arXiv}, eprint={2409.13881}, primaryClass={cs.SD cs.LG eess.AS} }
mohammadi2024investigation
arxiv-660140
2409.13882
Tabular Data Generation using Binary Diffusion
<|reference_start|>Tabular Data Generation using Binary Diffusion: Generating synthetic tabular data is critical in machine learning, especially when real data is limited or sensitive. Traditional generative models often face challenges due to the unique characteristics of tabular data, such as mixed data types and varied distributions, and require complex preprocessing or large pretrained models. In this paper, we introduce a novel, lossless binary transformation method that converts any tabular data into fixed-size binary representations, and a corresponding new generative model called Binary Diffusion, specifically designed for binary data. Binary Diffusion leverages the simplicity of XOR operations for noise addition and removal and employs binary cross-entropy loss for training. Our approach eliminates the need for extensive preprocessing, complex noise parameter tuning, and pretraining on large datasets. We evaluate our model on several popular tabular benchmark datasets, demonstrating that Binary Diffusion outperforms existing state-of-the-art models on Travel, Adult Income, and Diabetes datasets while being significantly smaller in size.<|reference_end|>
arxiv
@article{kinakh2024tabular, title={Tabular Data Generation using Binary Diffusion}, author={Vitaliy Kinakh, Slava Voloshynovskiy}, journal={arXiv preprint arXiv:2409.13882}, year={2024}, archivePrefix={arXiv}, eprint={2409.13882}, primaryClass={cs.LG cs.AI} }
kinakh2024tabular
arxiv-660141
2409.13884
A Multi-LLM Debiasing Framework
<|reference_start|>A Multi-LLM Debiasing Framework: Large Language Models (LLMs) are powerful tools with the potential to benefit society immensely, yet they have demonstrated biases that perpetuate societal inequalities. Despite significant advancements in bias mitigation techniques using data augmentation, zero-shot prompting, and model fine-tuning, biases continuously persist, including subtle biases that may elude human detection. Recent research has shown a growing interest in multi-LLM approaches, which have been demonstrated to be effective in improving the quality of reasoning and factuality in LLMs. Building on this approach, we propose a novel multi-LLM debiasing framework aimed at reducing bias in LLMs. Our work is the first to introduce and evaluate two distinct approaches within this framework for debiasing LLMs: a centralized method, where the conversation is facilitated by a single central LLM, and a decentralized method, where all models communicate directly. Our findings reveal that our multi-LLM framework significantly reduces bias in LLMs, outperforming the baseline method across several social groups.<|reference_end|>
arxiv
@article{owens2024a, title={A Multi-LLM Debiasing Framework}, author={Deonna M. Owens, Ryan A. Rossi, Sungchul Kim, Tong Yu, Franck Dernoncourt, Xiang Chen, Ruiyi Zhang, Jiuxiang Gu, Hanieh Deilamsalehy, Nedim Lipka}, journal={arXiv preprint arXiv:2409.13884}, year={2024}, archivePrefix={arXiv}, eprint={2409.13884}, primaryClass={cs.CL cs.AI cs.CY cs.LG} }
owens2024a
arxiv-660142
2409.13886
Learning to Play Video Games with Intuitive Physics Priors
<|reference_start|>Learning to Play Video Games with Intuitive Physics Priors: Video game playing is an extremely structured domain where algorithmic decision-making can be tested without adverse real-world consequences. While prevailing methods rely on image inputs to avoid the problem of hand-crafting state space representations, this approach systematically diverges from the way humans actually learn to play games. In this paper, we design object-based input representations that generalize well across a number of video games. Using these representations, we evaluate an agent's ability to learn games similar to an infant - with limited world experience, employing simple inductive biases derived from intuitive representations of physics from the real world. Using such biases, we construct an object category representation to be used by a Q-learning algorithm and assess how well it learns to play multiple games based on observed object affordances. Our results suggest that a human-like object interaction setup capably learns to play several video games, and demonstrates superior generalizability, particularly for unfamiliar objects. Further exploring such methods will allow machines to learn in a human-centric way, thus incorporating more human-like learning benefits.<|reference_end|>
arxiv
@article{jaiswal2024learning, title={Learning to Play Video Games with Intuitive Physics Priors}, author={Abhishek Jaiswal and Nisheeth Srivastava}, journal={Proceedings of the Annual Meeting of the Cognitive Science Society, 46 (2024). Retrieved from https://escholarship.org/uc/item/92f5b1hk}, year={2024}, archivePrefix={arXiv}, eprint={2409.13886}, primaryClass={cs.LG cs.AI cs.CV} }
jaiswal2024learning
arxiv-660143
2409.13887
Brain-Cognition Fingerprinting via Graph-GCCA with Contrastive Learning
<|reference_start|>Brain-Cognition Fingerprinting via Graph-GCCA with Contrastive Learning: Many longitudinal neuroimaging studies aim to improve the understanding of brain aging and diseases by studying the dynamic interactions between brain function and cognition. Doing so requires accurate encoding of their multidimensional relationship while accounting for individual variability over time. For this purpose, we propose an unsupervised learning model (called \underline{\textbf{Co}}ntrastive Learning-based \underline{\textbf{Gra}}ph Generalized \underline{\textbf{Ca}}nonical Correlation Analysis (CoGraCa)) that encodes their relationship via Graph Attention Networks and generalized Canonical Correlation Analysis. To create brain-cognition fingerprints reflecting the unique neural and cognitive phenotype of each person, the model also relies on individualized and multimodal contrastive learning. We apply CoGraCa to a longitudinal dataset of healthy individuals consisting of resting-state functional MRI and cognitive measures acquired at multiple visits for each participant. The generated fingerprints effectively capture significant individual differences and outperform current single-modal and CCA-based multimodal models in identifying sex and age. More importantly, our encoding provides interpretable interactions between those two modalities.<|reference_end|>
arxiv
@article{wang2024brain-cognition, title={Brain-Cognition Fingerprinting via Graph-GCCA with Contrastive Learning}, author={Yixin Wang, Wei Peng, Yu Zhang, Ehsan Adeli, Qingyu Zhao, Kilian M. Pohl}, journal={arXiv preprint arXiv:2409.13887}, year={2024}, archivePrefix={arXiv}, eprint={2409.13887}, primaryClass={cs.CV} }
wang2024brain-cognition
arxiv-660144
2409.13888
Causal Feature Selection Method for Contextual Multi-Armed Bandits in Recommender System
<|reference_start|>Causal Feature Selection Method for Contextual Multi-Armed Bandits in Recommender System: Features (a.k.a. context) are critical for contextual multi-armed bandit (MAB) performance. In practice, for large-scale online systems, it is important to select and implement the right features for the model: missing important features can lead to sub-optimal reward outcomes, and including irrelevant features can cause overfitting, poor model interpretability, and implementation cost. However, feature selection methods for conventional machine learning models fall short for contextual MAB use cases, as conventional methods select features correlated with the outcome variable, but not necessarily features causing heterogeneous treatment effects among arms, which are what truly matters for contextual MAB. In this paper, we introduce model-free feature selection methods designed for the contextual MAB problem, based on the heterogeneous causal effect contributed by the feature to the reward distribution. Empirical evaluation is conducted based on synthetic data as well as real data from an online experiment for optimizing the content cover image in a recommender system. The results show this feature selection method effectively selects the important features that lead to higher contextual MAB reward than unimportant features. Compared with the model-embedded method, this model-free method has the advantages of fast computation, ease of implementation, and freedom from model mis-specification issues.<|reference_end|>
arxiv
@article{zhao2024causal, title={Causal Feature Selection Method for Contextual Multi-Armed Bandits in Recommender System}, author={Zhenyu Zhao and Yexi Jiang}, journal={arXiv preprint arXiv:2409.13888}, year={2024}, archivePrefix={arXiv}, eprint={2409.13888}, primaryClass={cs.LG cs.IR stat.ML} }
zhao2024causal
arxiv-660145
2409.13890
Safe Control of Grid-Interfacing Inverters with Current Magnitude Limits
<|reference_start|>Safe Control of Grid-Interfacing Inverters with Current Magnitude Limits: Grid-interfacing inverters allow renewable resources to be connected to the electric grid and offer fast and programmable control responses. However, inverters are subject to significant physical constraints. One such constraint is a current magnitude limit required to protect semiconductor devices. While many current limiting methods are available, they can often unpredictably alter the behavior of the inverter control during overcurrent events leading to instability or poor performance. In this paper, we present a safety filter approach to limit the current magnitude of inverters controlled as voltage sources. The safety filter problem is formulated with a control barrier function constraint that encodes the current magnitude limit. To ensure feasibility of the problem, we prove the existence of a safe linear controller for a specified reference. This approach allows for the desired voltage source behavior to be minimally altered while safely limiting the current output.<|reference_end|>
arxiv
@article{joswig-jones2024safe, title={Safe Control of Grid-Interfacing Inverters with Current Magnitude Limits}, author={Trager Joswig-Jones and Baosen Zhang}, journal={arXiv preprint arXiv:2409.13890}, year={2024}, archivePrefix={arXiv}, eprint={2409.13890}, primaryClass={eess.SY cs.SY} }
joswig-jones2024safe
arxiv-660146
2409.13893
Transfer Learning with Clinical Concept Embeddings from Large Language Models
<|reference_start|>Transfer Learning with Clinical Concept Embeddings from Large Language Models: Knowledge sharing is crucial in healthcare, especially when leveraging data from multiple clinical sites to address data scarcity, reduce costs, and enable timely interventions. Transfer learning can facilitate cross-site knowledge transfer, but a major challenge is heterogeneity in clinical concepts across different sites. Large Language Models (LLMs) show significant potential for capturing the semantic meaning of clinical concepts and reducing heterogeneity. This study analyzed electronic health records from two large healthcare systems to assess the impact of semantic embeddings from LLMs on local, shared, and transfer learning models. Results indicate that domain-specific LLMs, such as Med-BERT, consistently outperform in local and direct transfer scenarios, while generic models like OpenAI embeddings require fine-tuning for optimal performance. However, excessive tuning of models with biomedical embeddings may reduce effectiveness, emphasizing the need for balance. This study highlights the importance of domain-specific embeddings and careful model tuning for effective knowledge transfer in healthcare.<|reference_end|>
arxiv
@article{gao2024transfer, title={Transfer Learning with Clinical Concept Embeddings from Large Language Models}, author={Yuhe Gao, Runxue Bao, Yuelyu Ji, Yiming Sun, Chenxi Song, Jeffrey P. Ferraro, Ye Ye}, journal={arXiv preprint arXiv:2409.13893}, year={2024}, archivePrefix={arXiv}, eprint={2409.13893}, primaryClass={cs.CL} }
gao2024transfer
arxiv-660147
2409.13894
PTQ4ADM: Post-Training Quantization for Efficient Text Conditional Audio Diffusion Models
<|reference_start|>PTQ4ADM: Post-Training Quantization for Efficient Text Conditional Audio Diffusion Models: Denoising diffusion models have emerged as state-of-the-art in generative tasks across image, audio, and video domains, producing high-quality, diverse, and contextually relevant data. However, their broader adoption is limited by high computational costs and large memory footprints. Post-training quantization (PTQ) offers a promising approach to mitigate these challenges by reducing model complexity through low-bandwidth parameters. Yet, direct application of PTQ to diffusion models can degrade synthesis quality due to accumulated quantization noise across multiple denoising steps, particularly in conditional tasks like text-to-audio synthesis. This work introduces PTQ4ADM, a novel framework for quantizing audio diffusion models (ADMs). Our key contributions include (1) a coverage-driven prompt augmentation method and (2) an activation-aware calibration set generation algorithm for text-conditional ADMs. These techniques ensure comprehensive coverage of audio aspects and modalities while preserving synthesis fidelity. We validate our approach on TANGO, Make-An-Audio, and AudioLDM models for text-conditional audio generation. Extensive experiments demonstrate PTQ4ADM's capability to reduce the model size by up to 70\% while achieving synthesis quality metrics comparable to full-precision models ($<$5\% increase in FD scores). We show that specific layers in the backbone network can be quantized to 4-bit weights and 8-bit activations without significant quality loss. This work paves the way for more efficient deployment of ADMs in resource-constrained environments.<|reference_end|>
arxiv
@article{vora2024ptq4adm:, title={PTQ4ADM: Post-Training Quantization for Efficient Text Conditional Audio Diffusion Models}, author={Jayneel Vora, Aditya Krishnan, Nader Bouacida, Prabhu RV Shankar, Prasant Mohapatra}, journal={arXiv preprint arXiv:2409.13894}, year={2024}, archivePrefix={arXiv}, eprint={2409.13894}, primaryClass={cs.SD cs.LG eess.AS} }
vora2024ptq4adm:
arxiv-660148
2409.13896
Semantic-Type-Guided Bug Finding
<|reference_start|>Semantic-Type-Guided Bug Finding: In recent years, there has been an increased interest in tools that establish \emph{incorrectness} rather than correctness of program properties. In this work we build on this approach by developing a novel methodology to prove incorrectness of \emph{semantic typing} properties of functional programs, extending the incorrectness approach to the model theory of functional program typing. We define a semantic type refuter which refutes semantic typings for a simple functional language. We prove our refuter is co-recursively enumerable, and that it is sound and complete with respect to a semantic typing notion. An initial implementation is described which uses symbolic evaluation to efficiently find type errors over a functional language with a rich type system.<|reference_end|>
arxiv
@article{qian2024semantic-type-guided, title={Semantic-Type-Guided Bug Finding}, author={Kelvin Qian and Scott Smith and Brandon Stride and Shiwei Weng and Ke Wu}, journal={arXiv preprint arXiv:2409.13896}, year={2024}, doi={10.1145/3689788}, archivePrefix={arXiv}, eprint={2409.13896}, primaryClass={cs.PL} }
qian2024semantic-type-guided
arxiv-660149
2409.13897
LLM for Everyone: Representing the Underrepresented in Large Language Models
<|reference_start|>LLM for Everyone: Representing the Underrepresented in Large Language Models: Natural language processing (NLP) has witnessed a profound impact of large language models (LLMs) that excel in a multitude of tasks. However, the limitation of LLMs in multilingual settings, particularly in underrepresented languages, remains a significant hurdle. This thesis aims to bridge the gap in NLP research and development by focusing on underrepresented languages. A comprehensive evaluation of LLMs is conducted to assess their capabilities in these languages, revealing the challenges of multilingual and multicultural generalization. Addressing the multilingual generalization gap, this thesis proposes data-and-compute-efficient methods to mitigate the disparity in LLM ability in underrepresented languages, allowing better generalization on underrepresented languages without the loss of task generalization ability. The proposed solutions cover cross-lingual continual instruction tuning, retrieval-based cross-lingual in-context learning, and in-context query alignment. Furthermore, a novel method to measure cultural values alignment between LLMs operating in different languages is proposed, ensuring cultural sensitivity and inclusivity. These contributions aim to enhance the multilingual and multicultural alignment of LLMs in underrepresented languages, ultimately advancing the NLP field toward greater equality and inclusiveness.<|reference_end|>
arxiv
@article{cahyawijaya2024llm, title={LLM for Everyone: Representing the Underrepresented in Large Language Models}, author={Samuel Cahyawijaya}, journal={arXiv preprint arXiv:2409.13897}, year={2024}, archivePrefix={arXiv}, eprint={2409.13897}, primaryClass={cs.CL cs.AI} }
cahyawijaya2024llm
arxiv-660150
2409.13900
Misty: UI Prototyping Through Interactive Conceptual Blending
<|reference_start|>Misty: UI Prototyping Through Interactive Conceptual Blending: UI prototyping often involves iterating and blending elements from examples such as screenshots and sketches, but current tools offer limited support for incorporating these examples. Inspired by the cognitive process of conceptual blending, we introduce a novel UI workflow that allows developers to rapidly incorporate diverse aspects from design examples into work-in-progress UIs. We prototyped this workflow as Misty. Through an exploratory first-use study with 14 frontend developers, we assessed Misty's effectiveness and gathered feedback on this workflow. Our findings suggest that Misty's conceptual blending workflow helps developers kickstart creative explorations, flexibly specify intent in different stages of prototyping, and inspires developers through serendipitous UI blends. Misty demonstrates the potential for tools that blur the boundaries between developers and designers.<|reference_end|>
arxiv
@article{lu2024misty:, title={Misty: UI Prototyping Through Interactive Conceptual Blending}, author={Yuwen Lu, Alan Leung, Amanda Swearngin, Jeffrey Nichols, Titus Barik}, journal={arXiv preprint arXiv:2409.13900}, year={2024}, archivePrefix={arXiv}, eprint={2409.13900}, primaryClass={cs.HC} }
lu2024misty:
arxiv-660151
2409.13902
Enhancing Large Language Models with Domain-specific Retrieval Augment Generation: A Case Study on Long-form Consumer Health Question Answering in Ophthalmology
<|reference_start|>Enhancing Large Language Models with Domain-specific Retrieval Augment Generation: A Case Study on Long-form Consumer Health Question Answering in Ophthalmology: Despite the potential of Large Language Models (LLMs) in medicine, they may generate responses lacking supporting evidence or based on hallucinated evidence. While Retrieval Augment Generation (RAG) is popular to address this issue, few studies implemented and evaluated RAG in downstream domain-specific applications. We developed a RAG pipeline with 70,000 ophthalmology-specific documents that retrieve relevant documents to augment LLMs during inference time. In a case study on long-form consumer health questions, we systematically evaluated the responses including over 500 references of LLMs with and without RAG on 100 questions with 10 healthcare professionals. The evaluation focuses on factuality of evidence, selection and ranking of evidence, attribution of evidence, and answer accuracy and completeness. LLMs without RAG provided 252 references in total. Of which, 45.3% hallucinated, 34.1% consisted of minor errors, and 20.6% were correct. In contrast, LLMs with RAG significantly improved accuracy (54.5% being correct) and reduced error rates (18.8% with minor hallucinations and 26.7% with errors). 62.5% of the top 10 documents retrieved by RAG were selected as the top references in the LLM response, with an average ranking of 4.9. The use of RAG also improved evidence attribution (increasing from 1.85 to 2.49 on a 5-point scale, P<0.001), albeit with slight decreases in accuracy (from 3.52 to 3.23, P=0.03) and completeness (from 3.47 to 3.27, P=0.17). The results demonstrate that LLMs frequently exhibited hallucinated and erroneous evidence in the responses, raising concerns for downstream applications in the medical domain. RAG substantially reduced the proportion of such evidence but encountered challenges.<|reference_end|>
arxiv
@article{gilson2024enhancing, title={Enhancing Large Language Models with Domain-specific Retrieval Augment Generation: A Case Study on Long-form Consumer Health Question Answering in Ophthalmology}, author={Aidan Gilson, Xuguang Ai, Thilaka Arunachalam, Ziyou Chen, Ki Xiong Cheong, Amisha Dave, Cameron Duic, Mercy Kibe, Annette Kaminaka, Minali Prasad, Fares Siddig, Maxwell Singer, Wendy Wong, Qiao Jin, Tiarnan D.L. Keenan, Xia Hu, Emily Y. Chew, Zhiyong Lu, Hua Xu, Ron A. Adelman, Yih-Chung Tham, Qingyu Chen}, journal={arXiv preprint arXiv:2409.13902}, year={2024}, archivePrefix={arXiv}, eprint={2409.13902}, primaryClass={cs.CL cs.AI} }
gilson2024enhancing
arxiv-660152
2409.13903
CI-Bench: Benchmarking Contextual Integrity of AI Assistants on Synthetic Data
<|reference_start|>CI-Bench: Benchmarking Contextual Integrity of AI Assistants on Synthetic Data: Advances in generative AI point towards a new era of personalized applications that perform diverse tasks on behalf of users. While general AI assistants have yet to fully emerge, their potential to share personal data raises significant privacy challenges. This paper introduces CI-Bench, a comprehensive synthetic benchmark for evaluating the ability of AI assistants to protect personal information during model inference. Leveraging the Contextual Integrity framework, our benchmark enables systematic assessment of information flow across important context dimensions, including roles, information types, and transmission principles. Unlike previous work with smaller, narrowly focused evaluations, we present a novel, scalable, multi-step data pipeline that synthetically generates natural communications, including dialogues and emails, which we use to generate 44 thousand test samples across eight domains. Additionally, we formulate and evaluate a naive AI assistant to demonstrate the need for further study and careful training towards personal assistant tasks. We envision CI-Bench as a valuable tool for guiding future language model development, deployment, system design, and dataset construction, ultimately contributing to the development of AI assistants that align with users' privacy expectations.<|reference_end|>
arxiv
@article{cheng2024ci-bench:, title={CI-Bench: Benchmarking Contextual Integrity of AI Assistants on Synthetic Data}, author={Zhao Cheng, Diane Wan, Matthew Abueg, Sahra Ghalebikesabi, Ren Yi, Eugene Bagdasarian, Borja Balle, Stefan Mellem, Shawn O'Banion}, journal={arXiv preprint arXiv:2409.13903}, year={2024}, archivePrefix={arXiv}, eprint={2409.13903}, primaryClass={cs.AI} }
cheng2024ci-bench:
arxiv-660153
2409.13904
High-dimensional learning of narrow neural networks
<|reference_start|>High-dimensional learning of narrow neural networks: Recent years have been marked by the fast-paced diversification and increasing ubiquity of machine learning applications. Yet, a firm theoretical understanding of the surprising efficiency of neural networks to learn from high-dimensional data still proves largely elusive. In this endeavour, analyses inspired by statistical physics have proven instrumental, enabling the tight asymptotic characterization of the learning of neural networks in high dimensions, for a broad class of solvable models. This manuscript reviews the tools and ideas underlying recent progress in this line of work. We introduce a generic model -- the sequence multi-index model -- which encompasses numerous previously studied models as special instances. This unified framework covers a broad class of machine learning architectures with a finite number of hidden units, including multi-layer perceptrons, autoencoders, attention mechanisms; and tasks, including (un)supervised learning, denoising, contrastive learning, in the limit of large data dimension, and comparably large number of samples. We explicate in full detail the analysis of the learning of sequence multi-index models, using statistical physics techniques such as the replica method and approximate message-passing algorithms. This manuscript thus provides a unified presentation of analyses reported in several previous works, and a detailed overview of central techniques in the field of statistical physics of machine learning. This review should be a useful primer for machine learning theoreticians curious of statistical physics approaches; it should also be of value to statistical physicists interested in the transfer of such ideas to the study of neural networks.<|reference_end|>
arxiv
@article{cui2024high-dimensional, title={High-dimensional learning of narrow neural networks}, author={Hugo Cui}, journal={arXiv preprint arXiv:2409.13904}, year={2024}, archivePrefix={arXiv}, eprint={2409.13904}, primaryClass={stat.ML cond-mat.dis-nn cs.LG} }
cui2024high-dimensional
arxiv-660154
2409.13905
Haptic Shoulder for Rendering Biomechanically Accurate Joint Limits for Human-Robot Physical Interactions
<|reference_start|>Haptic Shoulder for Rendering Biomechanically Accurate Joint Limits for Human-Robot Physical Interactions: Human-robot physical interaction (pHRI) is a rapidly evolving research field with significant implications for physical therapy, search and rescue, and telemedicine. However, a major challenge lies in accurately understanding human constraints and safety in pHRI experiments without IRB approval and physical human trials. Concerns regarding human studies include safety, repeatability, and scalability of the number and diversity of participants. This paper examines whether a physical approximation can serve as a stand-in for human subjects to enhance robot autonomy for physical assistance. It introduces the SHULDRD (Shoulder Haptic Universal Limb Dynamic Repositioning Device), an economical and anatomically similar device designed for real-time testing and deployment of pHRI planning tasks onto robots in the real world. SHULDRD replicates human shoulder motion, providing crucial force feedback and safety data. The device's open-source CAD and software facilitate easy construction and use, ensuring broad accessibility for researchers. By providing a flexible platform able to emulate an unlimited number of human subjects, ensure repeatable trials, and provide quantitative metrics to assess the effectiveness of the robotic intervention, SHULDRD aims to improve the safety and efficacy of human-robot physical interactions.<|reference_end|>
arxiv
@article{peiros2024haptic, title={Haptic Shoulder for Rendering Biomechanically Accurate Joint Limits for Human-Robot Physical Interactions}, author={Elizabeth Peiros, Calvin Joyce, Tarun Murugesan, Roger Nguyen, Isabella Fiorini, Rizzi Galibut, Michael C. Yip}, journal={arXiv preprint arXiv:2409.13905}, year={2024}, archivePrefix={arXiv}, eprint={2409.13905}, primaryClass={cs.RO} }
peiros2024haptic
arxiv-660155
2409.13906
A Change Language for Ontologies and Knowledge Graphs
<|reference_start|>A Change Language for Ontologies and Knowledge Graphs: Ontologies and knowledge graphs (KGs) are general-purpose computable representations of some domain, such as human anatomy, and are frequently a crucial part of modern information systems. Most of these structures change over time, incorporating new knowledge or information that was previously missing. Managing these changes is a challenge, both in terms of communicating changes to users, and providing mechanisms to make it easier for multiple stakeholders to contribute. To fill that need, we have created KGCL, the Knowledge Graph Change Language, a standard data model for describing changes to KGs and ontologies at a high level, and an accompanying human-readable controlled natural language. This language serves two purposes: a curator can use it to request desired changes, and it can also be used to describe changes that have already happened, corresponding to the concepts of "apply patch" and "diff" commonly used for managing changes in text documents and computer programs. Another key feature of KGCL is that descriptions are at a high enough level to be useful and understood by a variety of stakeholders--for example, ontology edits can be specified by commands like "add synonym 'arm' to 'forelimb'" or "move 'Parkinson disease' under 'neurodegenerative disease'". We have also built a suite of tools for managing ontology changes. These include an automated agent that integrates with and monitors GitHub ontology repositories and applies any requested changes, and a new component in the BioPortal ontology resource that allows users to make change requests directly from within the BioPortal user interface. Overall, the KGCL data model, its controlled natural language, and associated tooling allow for easier management and processing of changes associated with the development of ontologies and KGs.<|reference_end|>
arxiv
@article{hegde2024a, title={A Change Language for Ontologies and Knowledge Graphs}, author={Harshad Hegde, Jennifer Vendetti, Damien Goutte-Gattat, J Harry Caufield, John B Graybeal, Nomi L Harris, Naouel Karam, Christian Kindermann, Nicolas Matentzoglu, James A Overton, Mark A Musen, Christopher J Mungall}, journal={arXiv preprint arXiv:2409.13906}, year={2024}, archivePrefix={arXiv}, eprint={2409.13906}, primaryClass={cs.DB} }
hegde2024a
arxiv-660156
2409.13907
Data Visualization to Evaluate and Facilitate Targeted Data Acquisitions in Support of a Real-time Ocean Forecasting System
<|reference_start|>Data Visualization to Evaluate and Facilitate Targeted Data Acquisitions in Support of a Real-time Ocean Forecasting System: A robust evaluation toolset has been designed for the Naval Research Laboratory's Real-Time Ocean Forecasting System (RELO) with the purpose of facilitating an adaptive sampling strategy and providing more educated guidance for routing underwater gliders. The major challenges are to integrate into the existing operational system and provide a bridge between the modeling and operational environments. Visualization is the selected approach, and the developed software is divided into three packages. The first package verifies that the glider is actually following the waypoints and predicts the position of the glider for the next cycle's instructions. The second package ensures that the delivered waypoints are both useful and feasible. The third package provides the confidence levels for the suggested path. The software is implemented in Python for portability and modularity, allowing for easy addition of new visuals.<|reference_end|>
arxiv
@article{holmberg2024data, title={Data Visualization to Evaluate and Facilitate Targeted Data Acquisitions in Support of a Real-time Ocean Forecasting System}, author={Edward Holmberg}, journal={arXiv preprint arXiv:2409.13907}, year={2024}, archivePrefix={arXiv}, eprint={2409.13907}, primaryClass={cs.RO} }
holmberg2024data
arxiv-660157
2409.13908
Nonlinear Inverse Design of Mechanical Multi-Material Metamaterials Enabled by Video Denoising Diffusion and Structure Identifier
<|reference_start|>Nonlinear Inverse Design of Mechanical Multi-Material Metamaterials Enabled by Video Denoising Diffusion and Structure Identifier: Metamaterials, synthetic materials with customized properties, have emerged as a promising field due to advancements in additive manufacturing. These materials derive unique mechanical properties from their internal lattice structures, which are often composed of multiple materials that repeat geometric patterns. While traditional inverse design approaches have shown potential, they struggle to map nonlinear material behavior to multiple possible structural configurations. This paper presents a novel framework leveraging video diffusion models, a type of generative artificial Intelligence (AI), for inverse multi-material design based on nonlinear stress-strain responses. Our approach consists of two key components: (1) a fields generator using a video diffusion model to create solution fields based on target nonlinear stress-strain responses, and (2) a structure identifier employing two UNet models to determine the corresponding multi-material 2D design. By incorporating multiple materials, plasticity, and large deformation, our innovative design method allows for enhanced control over the highly nonlinear mechanical behavior of metamaterials commonly seen in real-world applications. It offers a promising solution for generating next-generation metamaterials with finely tuned mechanical characteristics.<|reference_end|>
arxiv
@article{park2024nonlinear, title={Nonlinear Inverse Design of Mechanical Multi-Material Metamaterials Enabled by Video Denoising Diffusion and Structure Identifier}, author={Jaewan Park, Shashank Kushwaha, Junyan He, Seid Koric, Qibang Liu, Iwona Jasiuk, Diab Abueidda}, journal={arXiv preprint arXiv:2409.13908}, year={2024}, archivePrefix={arXiv}, eprint={2409.13908}, primaryClass={cs.AI cs.CE} }
park2024nonlinear
arxiv-660158
2409.13910
Zero-shot Cross-lingual Voice Transfer for TTS
<|reference_start|>Zero-shot Cross-lingual Voice Transfer for TTS: In this paper, we introduce a zero-shot Voice Transfer (VT) module that can be seamlessly integrated into a multi-lingual Text-to-speech (TTS) system to transfer an individual's voice across languages. Our proposed VT module comprises a speaker-encoder that processes reference speech, a bottleneck layer, and residual adapters, connected to preexisting TTS layers. We compare the performance of various configurations of these components and report Mean Opinion Score (MOS) and Speaker Similarity across languages. Using a single English reference speech per speaker, we achieve an average voice transfer similarity score of 73% across nine target languages. Vocal characteristics contribute significantly to the construction and perception of individual identity. The loss of one's voice, due to physical or neurological conditions, can lead to a profound sense of loss, impacting one's core identity. As a case study, we demonstrate that our approach can not only transfer typical speech but also restore the voices of individuals with dysarthria, even when only atypical speech samples are available - a valuable utility for those who have never had typical speech or banked their voice. Cross-lingual typical audio samples, plus videos demonstrating voice restoration for dysarthric speakers are available here (google.github.io/tacotron/publications/zero_shot_voice_transfer).<|reference_end|>
arxiv
@article{biadsy2024zero-shot, title={Zero-shot Cross-lingual Voice Transfer for TTS}, author={Fadi Biadsy, Youzheng Chen, Isaac Elias, Kyle Kastner, Gary Wang, Andrew Rosenberg, Bhuvana Ramabhadran}, journal={arXiv preprint arXiv:2409.13910}, year={2024}, archivePrefix={arXiv}, eprint={2409.13910}, primaryClass={eess.AS cs.SD} }
biadsy2024zero-shot
arxiv-660159
2409.13912
OneBEV: Using One Panoramic Image for Bird's-Eye-View Semantic Mapping
<|reference_start|>OneBEV: Using One Panoramic Image for Bird's-Eye-View Semantic Mapping: In the field of autonomous driving, Bird's-Eye-View (BEV) perception has attracted increasing attention in the community since it provides more comprehensive information compared with pinhole front-view images and panoramas. Traditional BEV methods, which rely on multiple narrow-field cameras and complex pose estimations, often face calibration and synchronization issues. To address these challenges, in this work, we introduce OneBEV, a novel BEV semantic mapping approach using merely a single panoramic image as input, simplifying the mapping process and reducing computational complexity. A distortion-aware module termed Mamba View Transformation (MVT) is specifically designed to handle the spatial distortions in panoramas, transforming front-view features into BEV features without leveraging traditional attention mechanisms. Apart from the efficient framework, we contribute two datasets, i.e., nuScenes-360 and DeepAccident-360, tailored for the OneBEV task. Experimental results showcase that OneBEV achieves state-of-the-art performance with 51.1% and 36.1% mIoU on nuScenes-360 and DeepAccident-360, respectively. This work advances BEV semantic mapping in autonomous driving, paving the way for more advanced and reliable autonomous systems.<|reference_end|>
arxiv
@article{wei2024onebev:, title={OneBEV: Using One Panoramic Image for Bird's-Eye-View Semantic Mapping}, author={Jiale Wei, Junwei Zheng, Ruiping Liu, Jie Hu, Jiaming Zhang, and Rainer Stiefelhagen}, journal={arXiv preprint arXiv:2409.13912}, year={2024}, archivePrefix={arXiv}, eprint={2409.13912}, primaryClass={cs.CV} }
wei2024onebev:
arxiv-660160
2409.13913
Target word activity detector: An approach to obtain ASR word boundaries without lexicon
<|reference_start|>Target word activity detector: An approach to obtain ASR word boundaries without lexicon: Obtaining word timestamp information from end-to-end (E2E) ASR models remains challenging due to the lack of explicit time alignment during training. This issue is further complicated in multilingual models. Existing methods either rely on lexicons or introduce additional tokens, leading to scalability issues and increased computational costs. In this work, we propose a new approach to estimate word boundaries without relying on lexicons. Our method leverages word embeddings from sub-word token units and a pretrained ASR model, requiring only word alignment information during training. Our proposed method can scale up to any number of languages without incurring any additional cost. We validate our approach using a multilingual ASR model trained on five languages and demonstrate its effectiveness against a strong baseline.<|reference_end|>
arxiv
@article{sivasankaran2024target, title={Target word activity detector: An approach to obtain ASR word boundaries without lexicon}, author={Sunit Sivasankaran, Eric Sun, Jinyu Li, Yan Huang, Jing Pan}, journal={arXiv preprint arXiv:2409.13913}, year={2024}, archivePrefix={arXiv}, eprint={2409.13913}, primaryClass={cs.CL cs.SD eess.AS} }
sivasankaran2024target
arxiv-660161
2409.13915
Data Pruning via Separability, Integrity, and Model Uncertainty-Aware Importance Sampling
<|reference_start|>Data Pruning via Separability, Integrity, and Model Uncertainty-Aware Importance Sampling: This paper improves upon existing data pruning methods for image classification by introducing a novel pruning metric and pruning procedure based on importance sampling. The proposed pruning metric explicitly accounts for data separability, data integrity, and model uncertainty, while the sampling procedure is adaptive to the pruning ratio and considers both intra-class and inter-class separation to further enhance the effectiveness of pruning. Furthermore, the sampling method can readily be applied to other pruning metrics to improve their performance. Overall, the proposed approach scales well to high pruning ratio and generalizes better across different classification models, as demonstrated by experiments on four benchmark datasets, including the fine-grained classification scenario.<|reference_end|>
arxiv
@article{grosz2024data, title={Data Pruning via Separability, Integrity, and Model Uncertainty-Aware Importance Sampling}, author={Steven Grosz, Rui Zhao, Rajeev Ranjan, Hongcheng Wang, Manoj Aggarwal, Gerard Medioni, Anil Jain}, journal={arXiv preprint arXiv:2409.13915}, year={2024}, archivePrefix={arXiv}, eprint={2409.13915}, primaryClass={cs.CV} }
grosz2024data
arxiv-660162
2409.13917
Seeing is Believing: The Role of Scatterplots in Recommender System Trust and Decision-Making
<|reference_start|>Seeing is Believing: The Role of Scatterplots in Recommender System Trust and Decision-Making: The accuracy of recommender systems influences users' trust in them and the decisions made when using them. Providing additional information, such as visualizations, offers context that would otherwise be lacking. However, the role of visualizations in influencing trust and decisions with recommender systems is under-explored. To bridge this gap, we conducted a two-part human-subject experiment to investigate the impact of scatterplots on recommender system decisions. Our first study focuses on high-level decisions, such as selecting which recommender system to use. The second study focuses on low-level decisions, such as agreeing or disagreeing with a specific recommendation. Our results show that scatterplots accompanied by higher levels of accuracy influence decisions and that participants tended to trust the recommendations more when scatterplots were accompanied by descriptive accuracy (e.g., \textit{high}, \textit{medium}, or \textit{low}) instead of numeric accuracy (e.g., \textit{90\%}). Furthermore, we observed that scatterplots often assisted participants in validating their decisions. Based on the results, we believe that scatterplots and visualizations, in general, can aid in making informed decisions, validating decisions, and building trust in recommendation systems.<|reference_end|>
arxiv
@article{doppalapudi2024seeing, title={Seeing is Believing: The Role of Scatterplots in Recommender System Trust and Decision-Making}, author={Bhavana Doppalapudi and Md Dilshadur Rahman and Paul Rosen}, journal={arXiv preprint arXiv:2409.13917}, year={2024}, archivePrefix={arXiv}, eprint={2409.13917}, primaryClass={cs.HC} }
doppalapudi2024seeing
arxiv-660163
2409.13919
Measuring Error Alignment for Decision-Making Systems
<|reference_start|>Measuring Error Alignment for Decision-Making Systems: Given that AI systems are set to play a pivotal role in future decision-making processes, their trustworthiness and reliability are of critical concern. Due to their scale and complexity, modern AI systems resist direct interpretation, and alternative ways are needed to establish trust in those systems and determine how well they align with human values. We argue that good measures of the information processing similarities between AI and humans may be able to achieve these same ends. While Representational alignment (RA) approaches measure similarity between the internal states of two systems, the associated data can be expensive and difficult to collect for human systems. In contrast, Behavioural alignment (BA) comparisons are cheaper and easier, but questions remain as to their sensitivity and reliability. We propose two new behavioural alignment metrics: misclassification agreement, which measures the similarity between the errors of two systems on the same instances, and class-level error similarity, which measures the similarity between the error distributions of two systems. We show that our metrics correlate well with RA metrics and provide complementary information to another BA metric within a range of domains, and set the scene for a new approach to value alignment.<|reference_end|>
arxiv
@article{xu2024measuring, title={Measuring Error Alignment for Decision-Making Systems}, author={Binxia Xu, Antonis Bikakis, Daniel Onah, Andreas Vlachidis, Luke Dickens}, journal={arXiv preprint arXiv:2409.13919}, year={2024}, archivePrefix={arXiv}, eprint={2409.13919}, primaryClass={cs.AI} }
xu2024measuring
arxiv-660164
2409.13920
One Model is All You Need: ByT5-Sanskrit, a Unified Model for Sanskrit NLP Tasks
<|reference_start|>One Model is All You Need: ByT5-Sanskrit, a Unified Model for Sanskrit NLP Tasks: Morphologically rich languages are notoriously challenging to process for downstream NLP applications. This paper presents a new pretrained language model, ByT5-Sanskrit, designed for NLP applications involving the morphologically rich language Sanskrit. We evaluate ByT5-Sanskrit on established Sanskrit word segmentation tasks, where it outperforms previous data-driven approaches by a considerable margin and matches the performance of the current best lexicon-based model. It is easier to deploy and more robust to data not covered by external linguistic resources. It also achieves new state-of-the-art results in Vedic Sanskrit dependency parsing and OCR post-correction tasks. Additionally, based on the Digital Corpus of Sanskrit, we introduce a novel multitask dataset for the joint training of Sanskrit word segmentation, lemmatization, and morphosyntactic tagging tasks. We fine-tune ByT5-Sanskrit on this dataset, creating a versatile multitask model for various downstream Sanskrit applications. We have used this model in Sanskrit linguistic annotation projects, in information retrieval setups, and as a preprocessing step in a Sanskrit machine translation pipeline. We also show that our approach yields new best scores for lemmatization and dependency parsing of other morphologically rich languages. We thus demonstrate that byte-level pretrained language models can achieve excellent performance for morphologically rich languages, outperforming tokenizer-based models and presenting an important vector of exploration when constructing NLP pipelines for such languages.<|reference_end|>
arxiv
@article{nehrdich2024one, title={One Model is All You Need: ByT5-Sanskrit, a Unified Model for Sanskrit NLP Tasks}, author={Sebastian Nehrdich, Oliver Hellwig, Kurt Keutzer}, journal={arXiv preprint arXiv:2409.13920}, year={2024}, archivePrefix={arXiv}, eprint={2409.13920}, primaryClass={cs.CL cs.LG} }
nehrdich2024one
arxiv-660165
2409.13923
Tactile Neural De-rendering
<|reference_start|>Tactile Neural De-rendering: Tactile sensing has proven to be an invaluable tool for enhancing robotic perception, particularly in scenarios where visual data is limited or unavailable. However, traditional methods for pose estimation using tactile data often rely on intricate modeling of sensor mechanics or estimation of contact patches, which can be cumbersome and inherently deterministic. In this work, we introduce Tactile Neural De-rendering, a novel approach that leverages a generative model to reconstruct a local 3D representation of an object based solely on its tactile signature. By rendering the object as though perceived by a virtual camera embedded at the fingertip, our method provides a more intuitive and flexible representation of the tactile data. This 3D reconstruction not only facilitates precise pose estimation but also allows for the quantification of uncertainty, providing a robust framework for tactile-based perception in robotics.<|reference_end|>
arxiv
@article{eyzaguirre2024tactile, title={Tactile Neural De-rendering}, author={Jose A. Eyzaguirre, Miquel Oller, Nima Fazeli}, journal={arXiv preprint arXiv:2409.13923}, year={2024}, archivePrefix={arXiv}, eprint={2409.13923}, primaryClass={cs.RO} }
eyzaguirre2024tactile
arxiv-660166
2409.13926
SpaceBlender: Creating Context-Rich Collaborative Spaces Through Generative 3D Scene Blending
<|reference_start|>SpaceBlender: Creating Context-Rich Collaborative Spaces Through Generative 3D Scene Blending: There is increased interest in using generative AI to create 3D spaces for Virtual Reality (VR) applications. However, today's models produce artificial environments, falling short of supporting collaborative tasks that benefit from incorporating the user's physical context. To generate environments that support VR telepresence, we introduce SpaceBlender, a novel pipeline that utilizes generative AI techniques to blend users' physical surroundings into unified virtual spaces. This pipeline transforms user-provided 2D images into context-rich 3D environments through an iterative process consisting of depth estimation, mesh alignment, and diffusion-based space completion guided by geometric priors and adaptive text prompts. In a preliminary within-subjects study, where 20 participants performed a collaborative VR affinity diagramming task in pairs, we compared SpaceBlender with a generic virtual environment and a state-of-the-art scene generation framework, evaluating its ability to create virtual spaces suitable for collaboration. Participants appreciated the enhanced familiarity and context provided by SpaceBlender but also noted complexities in the generative environments that could detract from task focus. Drawing on participant feedback, we propose directions for improving the pipeline and discuss the value and design of blended spaces for different scenarios.<|reference_end|>
arxiv
@article{numan2024spaceblender:, title={SpaceBlender: Creating Context-Rich Collaborative Spaces Through Generative 3D Scene Blending}, author={Nels Numan, Shwetha Rajaram, Balasaravanan Thoravi Kumaravel, Nicolai Marquardt, Andrew D. Wilson}, journal={arXiv preprint arXiv:2409.13926}, year={2024}, doi={10.1145/3654777.3676361}, archivePrefix={arXiv}, eprint={2409.13926}, primaryClass={cs.AI cs.HC} }
numan2024spaceblender:
arxiv-660167
2409.13927
SiSCo: Signal Synthesis for Effective Human-Robot Communication Via Large Language Models
<|reference_start|>SiSCo: Signal Synthesis for Effective Human-Robot Communication Via Large Language Models: Effective human-robot collaboration hinges on robust communication channels, with visual signaling playing a pivotal role due to its intuitive appeal. Yet, the creation of visually intuitive cues often demands extensive resources and specialized knowledge. The emergence of Large Language Models (LLMs) offers promising avenues for enhancing human-robot interactions and revolutionizing the way we generate context-aware visual cues. To this end, we introduce SiSCo--a novel framework that combines the computational power of LLMs with mixed-reality technologies to streamline the creation of visual cues for human-robot collaboration. Our results show that SiSCo improves the efficiency of communication in human-robot teaming tasks, reducing task completion time by approximately 73% and increasing task success rates by 18% compared to baseline natural language signals. Additionally, SiSCo reduces cognitive load for participants by 46%, as measured by the NASA-TLX subscale, and receives above-average user ratings for on-the-fly signals generated for unseen objects. To encourage further development and broader community engagement, we provide full access to SiSCo's implementation and related materials on our GitHub repository.<|reference_end|>
arxiv
@article{sonawani2024sisco:, title={SiSCo: Signal Synthesis for Effective Human-Robot Communication Via Large Language Models}, author={Shubham Sonawani, Fabian Weigend and Heni Ben Amor}, journal={arXiv preprint arXiv:2409.13927}, year={2024}, archivePrefix={arXiv}, eprint={2409.13927}, primaryClass={cs.RO} }
sonawani2024sisco:
arxiv-660168
2409.13928
Eliciting Instruction-tuned Code Language Models' Capabilities to Utilize Auxiliary Function for Code Generation
<|reference_start|>Eliciting Instruction-tuned Code Language Models' Capabilities to Utilize Auxiliary Function for Code Generation: We study the code generation behavior of instruction-tuned models built on top of code pre-trained language models when they can access an auxiliary function to implement a target function. We design several ways to provide auxiliary functions to the models, by adding them to the query or providing a response prefix, to combine the ability to utilize auxiliary functions with the instruction-following capability. Our experimental results show the effectiveness of combining the base models' auxiliary function utilization ability with the instruction-following ability. In particular, the performance of adopting our approaches with open-sourced language models surpasses that of recent powerful proprietary language models, i.e., gpt-4o.<|reference_end|>
arxiv
@article{lee2024eliciting, title={Eliciting Instruction-tuned Code Language Models' Capabilities to Utilize Auxiliary Function for Code Generation}, author={Seonghyeon Lee, Suyeon Kim, Joonwon Jang, Heejae Chon, Dongha Lee, Hwanjo Yu}, journal={arXiv preprint arXiv:2409.13928}, year={2024}, archivePrefix={arXiv}, eprint={2409.13928}, primaryClass={cs.SE cs.AI cs.CL} }
lee2024eliciting
arxiv-660169
2409.13929
Failures in Perspective-taking of Multimodal AI Systems
<|reference_start|>Failures in Perspective-taking of Multimodal AI Systems: This study extends previous research on spatial representations in multimodal AI systems. Although current models demonstrate a rich understanding of spatial information from images, this information is rooted in propositional representations, which differ from the analog representations employed in human and animal spatial cognition. To further explore these limitations, we apply techniques from cognitive and developmental science to assess the perspective-taking abilities of GPT-4o. Our analysis enables a comparison between the cognitive development of the human brain and that of multimodal AI, offering guidance for future research and model development.<|reference_end|>
arxiv
@article{leonard2024failures, title={Failures in Perspective-taking of Multimodal AI Systems}, author={Bridget Leonard, Kristin Woodard, and Scott O. Murray}, journal={arXiv preprint arXiv:2409.13929}, year={2024}, archivePrefix={arXiv}, eprint={2409.13929}, primaryClass={cs.AI} }
leonard2024failures
arxiv-660170
2409.13930
RN-SDEs: Limited-Angle CT Reconstruction with Residual Null-Space Diffusion Stochastic Differential Equations
<|reference_start|>RN-SDEs: Limited-Angle CT Reconstruction with Residual Null-Space Diffusion Stochastic Differential Equations: Computed tomography is a widely used imaging modality with applications ranging from medical imaging to material analysis. One major challenge arises from the lack of scanning information at certain angles, leading to distorted CT images with artifacts. This results in an ill-posed problem known as the Limited Angle Computed Tomography (LACT) reconstruction problem. To address this problem, we propose Residual Null-Space Diffusion Stochastic Differential Equations (RN-SDEs), which are a variant of diffusion models that characterize the diffusion process with mean-reverting (MR) stochastic differential equations. To demonstrate the generalizability of RN-SDEs, our experiments are conducted on two different LACT datasets, i.e., ChromSTEM and C4KC-KiTS. Through extensive experiments, we show that by leveraging learned Mean-Reverting SDEs as a prior and emphasizing data consistency using Range-Null Space Decomposition (RNSD) based rectification, RN-SDEs can restore high-quality images from severe degradation and achieve state-of-the-art performance in most LACT tasks. Additionally, we present a quantitative comparison of computational complexity and runtime efficiency, highlighting the superior effectiveness of our proposed approach.<|reference_end|>
arxiv
@article{guo2024rn-sdes:, title={RN-SDEs: Limited-Angle CT Reconstruction with Residual Null-Space Diffusion Stochastic Differential Equations}, author={Jiaqi Guo, Santiago Lopez-Tapia, Wing Shun Li, Yunnan Wu, Marcelo Carignano, Vadim Backman, Vinayak P. Dravid, Aggelos K. Katsaggelos}, journal={arXiv preprint arXiv:2409.13930}, year={2024}, archivePrefix={arXiv}, eprint={2409.13930}, primaryClass={eess.IV cs.CV} }
guo2024rn-sdes:
arxiv-660171
2409.13931
On-Device Collaborative Language Modeling via a Mixture of Generalists and Specialists
<|reference_start|>On-Device Collaborative Language Modeling via a Mixture of Generalists and Specialists: On-device LLMs have gained increasing attention for their ability to enhance privacy and provide a personalized user experience. To facilitate learning with private and scarce local data, federated learning has become a standard approach, though it introduces challenges related to system and data heterogeneity among end users. As a solution, we propose a novel $\textbf{Co}$llaborative learning approach with a $\textbf{Mi}$xture of $\textbf{G}$eneralists and $\textbf{S}$pecialists (CoMiGS), the first to effectively address both challenges. Our approach distinguishes generalists and specialists by aggregating certain experts across end users while keeping others localized to specialize in user-specific datasets. A key innovation of our method is the bi-level optimization formulation of the Mixture-of-Experts learning objective, where the router is updated using a separate validation set that represents the target distribution. CoMiGS effectively balances collaboration and personalization, as demonstrated by its superior performance in scenarios with high data heterogeneity across multiple datasets. By design, our approach accommodates users' varying computational resources through different numbers of specialists. By decoupling resource abundance from data quantity, CoMiGS remains robust against overfitting (due to the generalists' regularizing effect) while adapting to local data through specialist expertise.<|reference_end|>
arxiv
@article{fan2024on-device, title={On-Device Collaborative Language Modeling via a Mixture of Generalists and Specialists}, author={Dongyang Fan, Bettina Messmer, Martin Jaggi}, journal={arXiv preprint arXiv:2409.13931}, year={2024}, archivePrefix={arXiv}, eprint={2409.13931}, primaryClass={cs.LG cs.CL} }
fan2024on-device
arxiv-660172
2409.13935
MirrorStories: Reflecting Diversity through Personalized Narrative Generation with Large Language Models
<|reference_start|>MirrorStories: Reflecting Diversity through Personalized Narrative Generation with Large Language Models: This study explores the effectiveness of Large Language Models (LLMs) in creating personalized "mirror stories" that reflect and resonate with individual readers' identities, addressing the significant lack of diversity in literature. We present MirrorStories, a corpus of 1,500 personalized short stories generated by integrating elements such as name, gender, age, ethnicity, reader interest, and story moral. We demonstrate that LLMs can effectively incorporate diverse identity elements into narratives, with human evaluators identifying personalized elements in the stories with high accuracy. Through a comprehensive evaluation involving 26 diverse human judges, we compare the effectiveness of MirrorStories against generic narratives. We find that personalized LLM-generated stories not only outscore generic human-written and LLM-generated ones across all metrics of engagement (with average ratings of 4.22 versus 3.37 on a 5-point scale), but also achieve higher textual diversity while preserving the intended moral. We also provide analyses that include bias assessments and a study on the potential for integrating images into personalized stories.<|reference_end|>
arxiv
@article{yunusov2024mirrorstories, title={MirrorStories: Reflecting Diversity through Personalized Narrative Generation with Large Language Models}, author={Sarfaroz Yunusov, Hamza Sidat, and Ali Emami}, journal={arXiv preprint arXiv:2409.13935}, year={2024}, archivePrefix={arXiv}, eprint={2409.13935}, primaryClass={cs.CL cs.AI cs.CY} }
yunusov2024mirrorstories
arxiv-660173
2409.13936
High-Resolution Flood Probability Mapping Using Generative Machine Learning with Large-Scale Synthetic Precipitation and Inundation Data
<|reference_start|>High-Resolution Flood Probability Mapping Using Generative Machine Learning with Large-Scale Synthetic Precipitation and Inundation Data: High-resolution flood probability maps are essential for addressing the limitations of existing flood risk assessment approaches but are often limited by the availability of historical event data. Also, producing simulated data needed for creating probabilistic flood maps using physics-based models involves significant computation and time effort inhibiting the feasibility. To address this gap, this study introduces Flood-Precip GAN (Flood-Precipitation Generative Adversarial Network), a novel methodology that leverages generative machine learning to simulate large-scale synthetic inundation data to produce probabilistic flood maps. With a focus on Harris County, Texas, Flood-Precip GAN begins with training a cell-wise depth estimator using a limited number of physics-based model-generated precipitation-flood events. This model, which emphasizes precipitation-based features, outperforms universal models. Subsequently, a Generative Adversarial Network (GAN) with constraints is employed to conditionally generate synthetic precipitation records. Strategic thresholds are established to filter these records, ensuring close alignment with true precipitation patterns. For each cell, synthetic events are smoothed using a K-nearest neighbors algorithm and processed through the depth estimator to derive synthetic depth distributions. By iterating this procedure and after generating 10,000 synthetic precipitation-flood events, we construct flood probability maps in various formats, considering different inundation depths. Validation through similarity and correlation metrics confirms the fidelity of the synthetic depth distributions relative to true data. Flood-Precip GAN provides a scalable solution for generating synthetic flood depth data needed to create high-resolution flood probability maps, significantly enhancing flood preparedness and mitigation efforts.<|reference_end|>
arxiv
@article{huang2024high-resolution, title={High-Resolution Flood Probability Mapping Using Generative Machine Learning with Large-Scale Synthetic Precipitation and Inundation Data}, author={Lipai Huang, Federico Antolini, Ali Mostafavi, Russell Blessing, Matthew Garcia, Samuel D. Brody}, journal={arXiv preprint arXiv:2409.13936}, year={2024}, archivePrefix={arXiv}, eprint={2409.13936}, primaryClass={cs.LG} }
huang2024high-resolution
arxiv-660174
2409.13937
Lightweight and Resilient Signatures for Cloud-Assisted Embedded IoT Systems
<|reference_start|>Lightweight and Resilient Signatures for Cloud-Assisted Embedded IoT Systems: Digital signatures provide scalable authentication with non-repudiation and are vital tools for the Internet of Things (IoT). Many IoT applications harbor vast quantities of resource-limited devices often used with cloud computing. However, key compromises (e.g., physical, malware) pose a significant threat to IoTs due to increased attack vectors and open operational environments. Forward security and distributed key management are critical breach-resilient countermeasures to mitigate such threats. Yet forward-secure signatures are exorbitantly costly for low-end IoTs, while cloud-assisted approaches suffer from centrality or non-colluding semi-honest servers. In this work, we create two novel digital signatures called Lightweight and Resilient Signatures with Hardware Assistance (LRSHA) and its Forward-secure version (FLRSHA). They offer a near-optimally efficient signing with small keys and signature sizes. We synergize various design strategies, such as commitment separation to eliminate costly signing operations and hardware-assisted distributed servers to enable breach-resilient verification. Our schemes achieve magnitudes of faster forward-secure signing and compact key/signature sizes without suffering from strong security assumptions (non-colluding, central servers) or a heavy burden on the verifier (extreme storage, computation). We formally prove the security of our schemes and validate their performance with full-fledged open-source implementations on both commodity hardware and 8-bit AVR microcontrollers.<|reference_end|>
arxiv
@article{nouma2024lightweight, title={Lightweight and Resilient Signatures for Cloud-Assisted Embedded IoT Systems}, author={Saif E. Nouma and Attila A. Yavuz}, journal={arXiv preprint arXiv:2409.13937}, year={2024}, archivePrefix={arXiv}, eprint={2409.13937}, primaryClass={cs.CR} }
nouma2024lightweight
arxiv-660175
2409.13939
Simple Unsupervised Knowledge Distillation With Space Similarity
<|reference_start|>Simple Unsupervised Knowledge Distillation With Space Similarity: As per recent studies, Self-supervised learning (SSL) does not readily extend to smaller architectures. One direction to mitigate this shortcoming while simultaneously training a smaller network without labels is to adopt unsupervised knowledge distillation (UKD). Existing UKD approaches handcraft preservation worthy inter/intra sample relationships between the teacher and its student. However, this may overlook/ignore other key relationships present in the mapping of a teacher. In this paper, instead of heuristically constructing preservation worthy relationships between samples, we directly motivate the student to model the teacher's embedding manifold. If the mapped manifold is similar, all inter/intra sample relationships are indirectly conserved. We first demonstrate that prior methods cannot preserve teacher's latent manifold due to their sole reliance on $L_2$ normalised embedding features. Subsequently, we propose a simple objective to capture the lost information due to normalisation. Our proposed loss component, termed \textbf{space similarity}, motivates each dimension of a student's feature space to be similar to the corresponding dimension of its teacher. We perform extensive experiments demonstrating strong performance of our proposed approach on various benchmarks.<|reference_end|>
arxiv
@article{singh2024simple, title={Simple Unsupervised Knowledge Distillation With Space Similarity}, author={Aditya Singh and Haohan Wang}, journal={arXiv preprint arXiv:2409.13939}, year={2024}, archivePrefix={arXiv}, eprint={2409.13939}, primaryClass={cs.AI cs.CV} }
singh2024simple
arxiv-660176
2409.13940
Learning Recourse Costs from Pairwise Feature Comparisons
<|reference_start|>Learning Recourse Costs from Pairwise Feature Comparisons: This paper presents a novel technique for incorporating user input when learning and inferring user preferences. When trying to provide users of black-box machine learning models with actionable recourse, we often wish to incorporate their personal preferences about the ease of modifying each individual feature. These recourse finding algorithms usually require an exhaustive set of tuples associating each feature to its cost of modification. Since it is hard to obtain such costs by directly surveying humans, in this paper, we propose the use of the Bradley-Terry model to automatically infer feature-wise costs using non-exhaustive human comparison surveys. We propose that users only provide inputs comparing entire recourses, with all candidate feature modifications, determining which recourses are easier to implement relative to others, without explicit quantification of their costs. We demonstrate the efficient learning of individual feature costs using MAP estimates, and show that these non-exhaustive human surveys, which do not necessarily contain data for each feature pair comparison, are sufficient to learn an exhaustive set of feature costs, where each feature is associated with a modification cost.<|reference_end|>
arxiv
@article{rawal2024learning, title={Learning Recourse Costs from Pairwise Feature Comparisons}, author={Kaivalya Rawal and Himabindu Lakkaraju}, journal={arXiv preprint arXiv:2409.13940}, year={2024}, archivePrefix={arXiv}, eprint={2409.13940}, primaryClass={cs.LG cs.AI cs.CY stat.ML} }
rawal2024learning
arxiv-660177
2409.13941
TalkMosaic: Interactive PhotoMosaic with Multi-modal LLM Q&A Interactions
<|reference_start|>TalkMosaic: Interactive PhotoMosaic with Multi-modal LLM Q&A Interactions: We use images of cars of a wide range of varieties to compose an image of an animal such as a bird or a lion for the theme of environmental protection to maximize the information about cars in a single composed image and to raise the awareness about environmental challenges. We present a novel way of image interaction with an artistically-composed photomosaic image, in which a simple operation of "click and display" is used to demonstrate the interactive switch between a tile image in a photomosaic image and the corresponding original car image, which will be automatically saved on the Desktop. We build a multimodal custom GPT named TalkMosaic by incorporating car images information and the related knowledge to ChatGPT. By uploading the original car image to TalkMosaic, we can ask questions about the given car image and get the corresponding answers efficiently and effectively such as where to buy the tire in the car image that satisfies high environmental standards. We give an in-depth analysis on how to speed up the inference of multimodal LLM using sparse attention and quantization techniques with presented probabilistic FlashAttention (PrFlashAttention) and Staircase Adaptive Quantization (SAQ) methods. The implemented prototype demonstrates the feasibility and effectiveness of the presented approach.<|reference_end|>
arxiv
@article{li2024talkmosaic, title={TalkMosaic: Interactive PhotoMosaic with Multi-modal LLM Q\&A Interactions}, author={Kevin Li, Fulu Li}, journal={arXiv preprint arXiv:2409.13941}, year={2024}, archivePrefix={arXiv}, eprint={2409.13941}, primaryClass={cs.CV cs.AI} }
li2024talkmosaic
arxiv-660178
2409.13943
QoS-Aware and Routing-Flexible Network Slicing for Service-Oriented Networks
<|reference_start|>QoS-Aware and Routing-Flexible Network Slicing for Service-Oriented Networks: In this paper, we consider the network slicing (NS) problem which attempts to map multiple customized virtual network requests (also called services) to a common shared network infrastructure and manage network resources to meet diverse quality of service (QoS) requirements. We propose a mixed-integer nonlinear programming (MINLP) formulation for the considered NS problem that can flexibly route the traffic flow of the services on multiple paths and provide end-to-end delay and reliability guarantees for all services. To overcome the computational difficulty due to the intrinsic nonlinearity in the MINLP formulation, we transform the MINLP formulation into an equivalent mixed-integer linear programming (MILP) formulation and further show that their continuous relaxations are equivalent. In sharp contrast to the continuous relaxation of the MINLP formulation which is a nonconvex nonlinear programming problem, the continuous relaxation of the MILP formulation is a polynomial-time solvable linear programming problem, which significantly facilitates the algorithmic design. Based on the newly proposed MILP formulation, we develop a customized column generation (cCG) algorithm for solving the NS problem. The proposed cCG algorithm is a decomposition-based algorithm and is particularly suitable for solving large-scale NS problems. Numerical results demonstrate the efficacy of the proposed formulations and the proposed cCG algorithm.<|reference_end|>
arxiv
@article{chen2024qos-aware, title={QoS-Aware and Routing-Flexible Network Slicing for Service-Oriented Networks}, author={Wei-Kun Chen, Ya-Feng Liu, Yu-Hong Dai, and Zhi-Quan Luo}, journal={arXiv preprint arXiv:2409.13943}, year={2024}, archivePrefix={arXiv}, eprint={2409.13943}, primaryClass={cs.IT eess.SP math.IT math.OC} }
chen2024qos-aware
arxiv-660179
2409.13944
Inf-Sup Stability of Parabolic TraceFEM
<|reference_start|>Inf-Sup Stability of Parabolic TraceFEM: We develop a parabolic inf-sup theory for a modified TraceFEM semi-discretization in space of the heat equation posed on a stationary surface embedded in $\mathbb{R}^n$. We consider the normal derivative volume stabilization and add an $L^2$-type stabilization to the time derivative. We assume that the representation of and the integration over the surface are exact, however, all our results are independent of how the surface cuts the bulk mesh. For any mesh for which the method is well-defined, we establish necessary and sufficient conditions for inf-sup stability of the proposed TraceFEM in terms of $H^1$-stability of a stabilized $L^2$-projection and of an inverse inequality constant that accounts for the lack of conformity of TraceFEM. Furthermore, we prove that the latter two quantities are bounded uniformly for a sequence of shape-regular and quasi-uniform bulk meshes. We derive several consequences of uniform discrete inf-sup stability, namely uniform well-posedness, discrete maximal parabolic regularity, parabolic quasi-best approximation, convergence to minimal regularity solutions, and optimal order-regularity energy and $L^2 L^2$ error estimates. We show that the additional stabilization of the time derivative restores optimal conditioning of time-discrete TraceFEM typical of fitted discretizations.<|reference_end|>
arxiv
@article{bouck2024inf-sup, title={Inf-Sup Stability of Parabolic TraceFEM}, author={Lucas Bouck, Ricardo H. Nochetto, Mansur Shakipov, Vladimir Yushutin}, journal={arXiv preprint arXiv:2409.13944}, year={2024}, archivePrefix={arXiv}, eprint={2409.13944}, primaryClass={math.NA cs.NA} }
bouck2024inf-sup
arxiv-660180
2409.13945
PureDiffusion: Using Backdoor to Counter Backdoor in Generative Diffusion Models
<|reference_start|>PureDiffusion: Using Backdoor to Counter Backdoor in Generative Diffusion Models: Diffusion models (DMs) are advanced deep learning models that achieved state-of-the-art capability on a wide range of generative tasks. However, recent studies have shown their vulnerability regarding backdoor attacks, in which backdoored DMs consistently generate a designated result (e.g., a harmful image) called backdoor target when the models' input contains a backdoor trigger. Although various backdoor techniques have been investigated to attack DMs, defense methods against these threats are still limited and underexplored, especially in inverting the backdoor trigger. In this paper, we introduce PureDiffusion, a novel backdoor defense framework that can efficiently detect backdoor attacks by inverting backdoor triggers embedded in DMs. Our extensive experiments on various trigger-target pairs show that PureDiffusion outperforms existing defense methods with a large gap in terms of fidelity (i.e., how much the inverted trigger resembles the original trigger) and backdoor success rate (i.e., the rate that the inverted trigger leads to the corresponding backdoor target). Notably, in certain cases, backdoor triggers inverted by PureDiffusion even achieve higher attack success rate than the original triggers.<|reference_end|>
arxiv
@article{truong2024purediffusion, title={PureDiffusion: Using Backdoor to Counter Backdoor in Generative Diffusion Models}, author={Vu Tuan Truong and Long Bao Le}, journal={arXiv preprint arXiv:2409.13945}, year={2024}, archivePrefix={arXiv}, eprint={2409.13945}, primaryClass={cs.AI} }
truong2024purediffusion
arxiv-660181
2409.13947
PyGRF: An improved Python Geographical Random Forest model and case studies in public health and natural disasters
<|reference_start|>PyGRF: An improved Python Geographical Random Forest model and case studies in public health and natural disasters: Geographical random forest (GRF) is a recently developed and spatially explicit machine learning model. With the ability to provide more accurate predictions and local interpretations, GRF has already been used in many studies. The current GRF model, however, has limitations in its determination of the local model weight and bandwidth hyperparameters, potentially insufficient numbers of local training samples, and sometimes high local prediction errors. Also, implemented as an R package, GRF currently does not have a Python version which limits its adoption among machine learning practitioners who prefer Python. This work addresses these limitations by introducing theory-informed hyperparameter determination, local training sample expansion, and spatially-weighted local prediction. We also develop a Python-based GRF model and package, PyGRF, to facilitate the use of the model. We evaluate the performance of PyGRF on an example dataset and further demonstrate its use in two case studies in public health and natural disasters.<|reference_end|>
arxiv
@article{sun2024pygrf, title={PyGRF: An improved Python Geographical Random Forest model and case studies in public health and natural disasters}, author={Kai Sun, Ryan Zhenqi Zhou, Jiyeon Kim, Yingjie Hu}, journal={Transactions in GIS, 2024}, year={2024}, doi={10.1111/tgis.13248}, archivePrefix={arXiv}, eprint={2409.13947}, primaryClass={cs.CY cs.LG} }
sun2024pygrf
arxiv-660182
2409.13948
Aligning Language Models Using Follow-up Likelihood as Reward Signal
<|reference_start|>Aligning Language Models Using Follow-up Likelihood as Reward Signal: In natural human-to-human conversations, participants often receive feedback signals from one another based on their follow-up reactions. These reactions can include verbal responses, facial expressions, changes in emotional state, and other non-verbal cues. Similarly, in human-machine interactions, the machine can leverage the user's follow-up utterances as feedback signals to assess whether it has appropriately addressed the user's request. Therefore, we propose using the likelihood of follow-up utterances as rewards to differentiate preferred responses from less favored ones, without relying on human or commercial LLM-based preference annotations. Our proposed reward mechanism, ``Follow-up Likelihood as Reward" (FLR), matches the performance of strong reward models trained on large-scale human or GPT-4 annotated data on 8 pairwise-preference and 4 rating-based benchmarks. Building upon the FLR mechanism, we propose to automatically mine preference data from the online generations of a base policy model. The preference data are subsequently used to boost the helpfulness of the base model through direct alignment from preference (DAP) methods, such as direct preference optimization (DPO). Lastly, we demonstrate that fine-tuning the language model that provides follow-up likelihood with natural language feedback significantly enhances FLR's performance on reward modeling benchmarks and effectiveness in aligning the base policy model's helpfulness.<|reference_end|>
arxiv
@article{zhang2024aligning, title={Aligning Language Models Using Follow-up Likelihood as Reward Signal}, author={Chen Zhang, Dading Chong, Feng Jiang, Chengguang Tang, Anningzhe Gao, Guohua Tang, Haizhou Li}, journal={arXiv preprint arXiv:2409.13948}, year={2024}, archivePrefix={arXiv}, eprint={2409.13948}, primaryClass={cs.CL} }
zhang2024aligning
arxiv-660183
2409.13949
Mufu: Multilingual Fused Learning for Low-Resource Translation with LLM
<|reference_start|>Mufu: Multilingual Fused Learning for Low-Resource Translation with LLM: Multilingual large language models (LLMs) are great translators, but this is largely limited to high-resource languages. For many LLMs, translating in and out of low-resource languages remains a challenging task. To maximize data efficiency in this low-resource setting, we introduce Mufu, which includes a selection of automatically generated multilingual candidates and an instruction to correct inaccurate translations in the prompt. Mufu prompts turn a translation task into a postediting one, and seek to harness the LLM's reasoning capability with auxiliary translation candidates, from which the model is required to assess the input quality, align the semantics cross-lingually, copy from relevant inputs and override instances that are incorrect. Our experiments on En-XX translations over the Flores-200 dataset show LLMs finetuned against Mufu-style prompts are robust to poor quality auxiliary translation candidates, achieving performance superior to NLLB 1.3B distilled model in 64% of low- and very-low-resource language pairs. We then distill these models to reduce inference cost, while maintaining on average 3.1 chrF improvement over finetune-only baseline in low-resource translations.<|reference_end|>
arxiv
@article{lim2024mufu, title={Mufu: Multilingual Fused Learning for Low-Resource Translation with LLM}, author={Zheng Wei Lim, Nitish Gupta, Honglin Yu, Trevor Cohn}, journal={arXiv preprint arXiv:2409.13949}, year={2024}, archivePrefix={arXiv}, eprint={2409.13949}, primaryClass={cs.CL} }
lim2024mufu
arxiv-660184
2409.13951
Deep learning for fast segmentation and critical dimension metrology & characterization enabling AR/VR design and fabrication
<|reference_start|>Deep learning for fast segmentation and critical dimension metrology & characterization enabling AR/VR design and fabrication: Quantitative analysis of microscopy images is essential in the design and fabrication of components used in augmented reality/virtual reality (AR/VR) modules. However, segmenting regions of interest (ROIs) from these complex images and extracting critical dimensions (CDs) requires novel techniques, such as deep learning models which are key for actionable decisions on process, material and device optimization. In this study, we report on the fine-tuning of a pre-trained Segment Anything Model (SAM) using a diverse dataset of electron microscopy images. We employed methods such as low-rank adaptation (LoRA) to reduce training time and enhance the accuracy of ROI extraction. The model's ability to generalize to unseen images facilitates zero-shot learning and supports a CD extraction model that precisely extracts CDs from the segmented ROIs. We demonstrate the accurate extraction of binary images from cross-sectional images of surface relief gratings (SRGs) and Fresnel lenses in both single and multiclass modes. Furthermore, these binary images are used to identify transition points, aiding in the extraction of relevant CDs. The combined use of the fine-tuned segmentation model and the CD extraction model offers substantial advantages to various industrial applications by enhancing analytical capabilities, time to data and insights, and optimizing manufacturing processes.<|reference_end|>
arxiv
@article{chaudhary2024deep, title={Deep learning for fast segmentation and critical dimension metrology \& characterization enabling AR/VR design and fabrication}, author={Kundan Chaudhary, Subhei Shaar, and Raja Muthinti}, journal={arXiv preprint arXiv:2409.13951}, year={2024}, archivePrefix={arXiv}, eprint={2409.13951}, primaryClass={cs.CV cs.LG eess.IV} }
chaudhary2024deep
arxiv-660185
2409.13952
Exploring Automated Keyword Mnemonics Generation with Large Language Models via Overgenerate-and-Rank
<|reference_start|>Exploring Automated Keyword Mnemonics Generation with Large Language Models via Overgenerate-and-Rank: In this paper, we study an under-explored area of language and vocabulary learning: keyword mnemonics, a technique for memorizing vocabulary through memorable associations with a target word via a verbal cue. Typically, creating verbal cues requires extensive human effort and is quite time-consuming, necessitating an automated method that is more scalable. We propose a novel overgenerate-and-rank method via prompting large language models (LLMs) to generate verbal cues and then ranking them according to psycholinguistic measures and takeaways from a pilot user study. To assess cue quality, we conduct both an automated evaluation of imageability and coherence, as well as a human evaluation involving English teachers and learners. Results show that LLM-generated mnemonics are comparable to human-generated ones in terms of imageability, coherence, and perceived usefulness, but there remains plenty of room for improvement due to the diversity in background and preference among language learners.<|reference_end|>
arxiv
@article{lee2024exploring, title={Exploring Automated Keyword Mnemonics Generation with Large Language Models via Overgenerate-and-Rank}, author={Jaewook Lee, Hunter McNichols, Andrew Lan}, journal={arXiv preprint arXiv:2409.13952}, year={2024}, archivePrefix={arXiv}, eprint={2409.13952}, primaryClass={cs.CL cs.HC} }
lee2024exploring
arxiv-660186
2409.13953
Training Large ASR Encoders with Differential Privacy
<|reference_start|>Training Large ASR Encoders with Differential Privacy: Self-supervised learning (SSL) methods for large speech models have proven to be highly effective at ASR. With the interest in public deployment of large pre-trained models, there is a rising concern for unintended memorization and leakage of sensitive data points from the training data. In this paper, we apply differentially private (DP) pre-training to a SOTA Conformer-based encoder, and study its performance on a downstream ASR task assuming the fine-tuning data is public. This paper is the first to apply DP to SSL for ASR, investigating the DP noise tolerance of the BEST-RQ pre-training method. Notably, we introduce a novel variant of model pruning called gradient-based layer freezing that provides strong improvements in privacy-utility-compute trade-offs. Our approach yields a LibriSpeech test-clean/other WER (%) of 3.78/8.41 with $(10, 10^{-9})$-DP for extrapolation towards low dataset scales, and 2.81/5.89 with $(10, 7.9 \times 10^{-11})$-DP for extrapolation towards high scales.<|reference_end|>
arxiv
@article{chauhan2024training, title={Training Large ASR Encoders with Differential Privacy}, author={Geeticka Chauhan, Steve Chien, Om Thakkar, Abhradeep Thakurta and Arun Narayanan}, journal={arXiv preprint arXiv:2409.13953}, year={2024}, archivePrefix={arXiv}, eprint={2409.13953}, primaryClass={cs.SD cs.CR cs.LG eess.AS} }
chauhan2024training
arxiv-660187
2409.13955
On the Effectiveness of Neural Operators at Zero-Shot Weather Downscaling
<|reference_start|>On the Effectiveness of Neural Operators at Zero-Shot Weather Downscaling: Machine learning (ML) methods have shown great potential for weather downscaling. These data-driven approaches provide a more efficient alternative for producing high-resolution weather datasets and forecasts compared to physics-based numerical simulations. Neural operators, which learn solution operators for a family of partial differential equations (PDEs), have shown great success in scientific ML applications involving physics-driven datasets. Neural operators are grid-resolution-invariant and are often evaluated on higher grid resolutions than they are trained on, i.e., zero-shot super-resolution. Given their promising zero-shot super-resolution performance on dynamical systems emulation, we present a critical investigation of their zero-shot weather downscaling capabilities, which is when models are tasked with producing high-resolution outputs using higher upsampling factors than are seen during training. To this end, we create two realistic downscaling experiments with challenging upsampling factors (e.g., 8x and 15x) across data from different simulations: the European Centre for Medium-Range Weather Forecasts Reanalysis version 5 (ERA5) and the Wind Integration National Dataset Toolkit (WTK). While neural operator-based downscaling models perform better than interpolation and a simple convolutional baseline, we show the surprising performance of an approach that combines a powerful transformer-based model with parameter-free interpolation at zero-shot weather downscaling. We find that this Swin-Transformer-based approach mostly outperforms models with neural operator layers, and suggest its use in future work as a strong baseline.<|reference_end|>
arxiv
@article{sinha2024on, title={On the Effectiveness of Neural Operators at Zero-Shot Weather Downscaling}, author={Saumya Sinha, Brandon Benton, Patrick Emami}, journal={arXiv preprint arXiv:2409.13955}, year={2024}, archivePrefix={arXiv}, eprint={2409.13955}, primaryClass={cs.CE} }
sinha2024on
arxiv-660188
2409.13956
Data-driven Modeling of Linearizable Power Flow for Large-scale Grid Topology Optimization
<|reference_start|>Data-driven Modeling of Linearizable Power Flow for Large-scale Grid Topology Optimization: Effective power flow (PF) modeling critically affects the solution accuracy and computation complexity of large-scale grid optimization problems. Especially for grid optimization with varying topologies for enhanced flexibility and resilience, a tractable approximation of nonlinear AC-PF is of paramount importance. This work develops a data-driven approach to obtain piecewise linear (PWL) PF models by using the ReLU activation and an innovative neural network (NN) layer design to match the generative structure of AC-PF models like nodal power balance. Accordingly, the proposed generative NN (GenNN) PF model not only maintains the consistency among the predicted power variables but also neatly includes the topology decision variables for attaining a mixed-integer linear program (MILP) based reformulation of grid topology optimization problems. We further develop an area-partitioning based sparsification method to reduce the number of GenNN weight parameters and thus the model complexity. Thanks to our sparse GenNN, the proposed PWL-PF can achieve scalability for large-scale systems and allow for efficient solutions of AC-PF based optimal transmission switching (OTS) and restoration order problems (ROP). Numerical tests on the IEEE 118-bus and the 6716-bus synthetic Texas grid systems have demonstrated performance improvements over competing alternatives in approximating the AC-PF and accelerating topology optimization solutions.<|reference_end|>
arxiv
@article{cho2024data-driven, title={Data-driven Modeling of Linearizable Power Flow for Large-scale Grid Topology Optimization}, author={Young-ho Cho and Hao Zhu}, journal={arXiv preprint arXiv:2409.13956}, year={2024}, archivePrefix={arXiv}, eprint={2409.13956}, primaryClass={eess.SY cs.SY} }
cho2024data-driven
arxiv-660189
2409.13958
Periodic micromagnetic finite element method
<|reference_start|>Periodic micromagnetic finite element method: Periodic micromagnetic finite element method (PM-FEM) is introduced to solve periodic unit cell problems using the Landau-Lifshitz-Gilbert equation. PM-FEM is applicable to general problems with 1D, 2D, and 3D periodicities. PM-FEM is based on a non-periodic FEM-based micromagnetic solver and extends it in several aspects to account for periodicities, including the computation of exchange and magnetostatic fields. For the exchange field, PM-FEM modifies the sparse matrix construction for computing the Laplace operator to include additional elements arising due to the periodicities. For the magnetostatic field, the periodic extensions include modifications in the local operators, such as gradient, divergence, and surface magnetic charges as well as the long-range superposition operator for computing the periodic scalar potential. The local operators are extended to account for the periodicities similar to handling the Laplace operator. For the long-range superposition operator, PM-FEM utilizes a periodic Green's function (PGF) and fast spatial convolutions. The PGF is computed rapidly via exponentially rapidly convergent sums. The spatial convolutions are accomplished via a modified fast Fourier transform based adaptive integral method that allows calculating spatial convolutions with non-uniform meshes in $O(N\log N)$ numerical operations. PM-FEM is implemented on CPU and GPU based computer architectures. PM-FEM allows efficiently handling cases of structures contained within the periodic unit cell touching or not touching its boundaries as well as structures that protrude beyond the unit cell boundaries. PM-FEM is demonstrated to have about the same or even higher performance than its parent non-periodic code. The demonstrated numerical examples show the efficiency of PM-FEM for highly complex structures with 1D, 2D, and 3D periodicities.<|reference_end|>
arxiv
@article{ai2024periodic, title={Periodic micromagnetic finite element method}, author={Fangzhou Ai and Jiawei Duan and Vitaliy Lomakin}, journal={arXiv preprint arXiv:2409.13958}, year={2024}, archivePrefix={arXiv}, eprint={2409.13958}, primaryClass={math.NA cs.NA physics.app-ph} }
ai2024periodic
arxiv-660190
2409.13959
One Model, Any Conjunctive Query: Graph Neural Networks for Answering Complex Queries over Knowledge Graphs
<|reference_start|>One Model, Any Conjunctive Query: Graph Neural Networks for Answering Complex Queries over Knowledge Graphs: Traditional query answering over knowledge graphs -- or broadly over relational data -- is one of the most fundamental problems in data management. Motivated by the incompleteness of modern knowledge graphs, a new setup for query answering has emerged, where the goal is to predict answers that do not necessarily appear in the knowledge graph, but are present in its completion. In this work, we propose AnyCQ, a graph neural network model that can classify answers to any conjunctive query on any knowledge graph, following training. At the core of our framework lies a graph neural network model trained using a reinforcement learning objective to answer Boolean queries. Our approach and problem setup differ from existing query answering studies in multiple dimensions. First, we focus on the problem of query answer classification: given a query and a set of possible answers, classify these proposals as true or false relative to the complete knowledge graph. Second, we study the problem of query answer retrieval: given a query, retrieve an answer to the query relative to the complete knowledge graph or decide that no correct solutions exist. Trained on simple, small instances, AnyCQ can generalize to large queries of arbitrary structure, reliably classifying and retrieving answers to samples where existing approaches fail, which is empirically validated on new and challenging benchmarks. Furthermore, we demonstrate that our AnyCQ models effectively transfer to out-of-distribution knowledge graphs, when equipped with a relevant link predictor, highlighting their potential to serve as a general engine for query answering.<|reference_end|>
arxiv
@article{olejniczak2024one, title={One Model, Any Conjunctive Query: Graph Neural Networks for Answering Complex Queries over Knowledge Graphs}, author={Krzysztof Olejniczak, Xingyue Huang, İsmail İlkan Ceylan, Mikhail Galkin}, journal={arXiv preprint arXiv:2409.13959}, year={2024}, archivePrefix={arXiv}, eprint={2409.13959}, primaryClass={cs.LG cs.AI} }
olejniczak2024one
arxiv-660191
2409.13964
Adaptive bias for dissensus in nonlinear opinion dynamics with application to evolutionary division of labor games
<|reference_start|>Adaptive bias for dissensus in nonlinear opinion dynamics with application to evolutionary division of labor games: This paper addresses the problem of adaptively controlling the bias parameter in nonlinear opinion dynamics (NOD) to allocate agents into groups of arbitrary sizes for the purpose of maximizing collective rewards. In previous work, an algorithm based on the coupling of NOD with a multi-objective behavior optimization was successfully deployed as part of a multi-robot system in an autonomous task allocation field experiment. Motivated by the field results, in this paper we propose and analyze a new task allocation model that synthesizes NOD with an evolutionary game framework. We prove sufficient conditions under which it is possible to control the opinion state in the group to a desired allocation of agents between two tasks through an adaptive bias using decentralized feedback. We then verify the theoretical results with a simulation study of a collaborative evolutionary division of labor game.<|reference_end|>
arxiv
@article{paine2024adaptive, title={Adaptive bias for dissensus in nonlinear opinion dynamics with application to evolutionary division of labor games}, author={Tyler M. Paine, Anastasia Bizyaeva, and Michael R. Benjamin}, journal={arXiv preprint arXiv:2409.13964}, year={2024}, archivePrefix={arXiv}, eprint={2409.13964}, primaryClass={eess.SY cs.MA cs.RO cs.SY} }
paine2024adaptive
arxiv-660192
2409.13966
ScissorBot: Learning Generalizable Scissor Skill for Paper Cutting via Simulation, Imitation, and Sim2Real
<|reference_start|>ScissorBot: Learning Generalizable Scissor Skill for Paper Cutting via Simulation, Imitation, and Sim2Real: This paper tackles the challenging robotic task of generalizable paper cutting using scissors. In this task, scissors attached to a robot arm are driven to accurately cut curves drawn on the paper, which is hung with the top edge fixed. Due to the frequent paper-scissor contact and consequent fracture, the paper features continual deformation and changing topology, which is difficult to model accurately. To ensure effective execution, we customize an action primitive sequence for imitation learning to constrain its action space, thus alleviating potential compounding errors. Finally, by integrating sim-to-real techniques to bridge the gap between simulation and reality, our policy can be effectively deployed on the real robot. Experimental results demonstrate that our method surpasses all baselines in both simulation and real-world benchmarks and achieves performance comparable to human operation with a single hand under the same conditions.<|reference_end|>
arxiv
@article{lyu2024scissorbot:, title={ScissorBot: Learning Generalizable Scissor Skill for Paper Cutting via Simulation, Imitation, and Sim2Real}, author={Jiangran Lyu, Yuxing Chen, Tao Du, Feng Zhu, Huiquan Liu, Yizhou Wang, He Wang}, journal={arXiv preprint arXiv:2409.13966}, year={2024}, archivePrefix={arXiv}, eprint={2409.13966}, primaryClass={cs.RO} }
lyu2024scissorbot:
arxiv-660193
2409.13968
LADICA: A Large Shared Display Interface for Generative AI Cognitive Assistance in Co-Located Team Collaboration
<|reference_start|>LADICA: A Large Shared Display Interface for Generative AI Cognitive Assistance in Co-Located Team Collaboration: Large shared displays, such as digital whiteboards, are useful for supporting co-located team collaborations by helping members perform cognitive tasks such as brainstorming, organizing ideas, and making comparisons. While recent advancement in Large Language Models (LLMs) has catalyzed AI support for these displays, most existing systems either only offer limited capabilities or diminish human control, neglecting the potential benefits of natural group dynamics. Our formative study identified cognitive challenges teams encounter, such as diverse ideation, knowledge sharing, mutual awareness, idea organization, and synchronization of live discussions with the external workspace. In response, we introduce LADICA, a large shared display interface that helps collaborative teams brainstorm, organize, and analyze ideas through multiple analytical lenses, while fostering mutual awareness of ideas and concepts. Furthermore, LADICA facilitates the real-time extraction of key information from verbal discussions and identifies relevant entities. A lab study confirmed LADICA's usability and usefulness.<|reference_end|>
arxiv
@article{zhang2024ladica:, title={LADICA: A Large Shared Display Interface for Generative AI Cognitive Assistance in Co-Located Team Collaboration}, author={Zheng Zhang, Weirui Peng, Xinyue Chen, Luke Cao, Toby Jia-Jun Li}, journal={arXiv preprint arXiv:2409.13968}, year={2024}, archivePrefix={arXiv}, eprint={2409.13968}, primaryClass={cs.HC} }
zhang2024ladica:
arxiv-660194
2409.13971
Monocular Event-Inertial Odometry with Adaptive decay-based Time Surface and Polarity-aware Tracking
<|reference_start|>Monocular Event-Inertial Odometry with Adaptive decay-based Time Surface and Polarity-aware Tracking: Event cameras have garnered considerable attention due to their advantages over traditional cameras in low power consumption, high dynamic range, and no motion blur. This paper proposes a monocular event-inertial odometry method incorporating an adaptive decay kernel-based time surface with polarity-aware tracking. We utilize an adaptive decay-based Time Surface to extract texture information from asynchronous events, which adapts to the dynamic characteristics of the event stream and enhances the representation of environmental textures. However, polarity-weighted time surfaces suffer from event polarity shifts during changes in motion direction. To mitigate their adverse effects on feature tracking, we optimize feature tracking by incorporating an additional polarity-inverted time surface to enhance robustness. Comparative analysis with visual-inertial and event-inertial odometry methods shows that our approach outperforms state-of-the-art techniques, with competitive results across various datasets.<|reference_end|>
arxiv
@article{tang2024monocular, title={Monocular Event-Inertial Odometry with Adaptive decay-based Time Surface and Polarity-aware Tracking}, author={Kai Tang, Xiaolei Lang, Yukai Ma, Yuehao Huang, Laijian Li, Yong Liu, Jiajun Lv}, journal={arXiv preprint arXiv:2409.13971}, year={2024}, archivePrefix={arXiv}, eprint={2409.13971}, primaryClass={cs.CV cs.RO} }
tang2024monocular
arxiv-660195
2409.13972
Can Language Model Understand Word Semantics as A Chatbot? An Empirical Study of Language Model Internal External Mismatch
<|reference_start|>Can Language Model Understand Word Semantics as A Chatbot? An Empirical Study of Language Model Internal External Mismatch: Common interactions with language models today occur through full inference, which may not necessarily align with the model's internal knowledge. Studies show discrepancies between prompts and internal representations, but most focus on sentence-level understanding. We study the internal-external mismatch in word-semantics understanding across Encoder-only, Decoder-only, and Encoder-Decoder pre-trained language models.<|reference_end|>
arxiv
@article{zhao2024can, title={Can Language Model Understand Word Semantics as A Chatbot? An Empirical Study of Language Model Internal External Mismatch}, author={Jinman Zhao, Xueyan Zhang, Xingyu Yue, Weizhe Chen, Zifan Qian, Ruiyu Wang}, journal={arXiv preprint arXiv:2409.13972}, year={2024}, archivePrefix={arXiv}, eprint={2409.13972}, primaryClass={cs.CL} }
zhao2024can
arxiv-660196
2409.13975
ProTEA: Programmable Transformer Encoder Acceleration on FPGA
<|reference_start|>ProTEA: Programmable Transformer Encoder Acceleration on FPGA: Transformer neural networks (TNN) have been widely utilized in a diverse range of applications, including natural language processing (NLP), machine translation, and computer vision (CV). Their widespread adoption has been primarily driven by the exceptional performance of their multi-head self-attention block used to extract key features from sequential data. The multi-head self-attention block is followed by feedforward neural networks, which play a crucial role in introducing non-linearity to assist the model in learning complex patterns. Despite the popularity of TNNs, there have been a limited number of hardware accelerators targeting these two critical blocks. Most prior works have concentrated on sparse architectures that are not flexible for popular TNN variants. This paper introduces \textit{ProTEA}, a runtime programmable accelerator tailored for the dense computations of most state-of-the-art transformer encoders. \textit{ProTEA} is designed to reduce latency by maximizing parallelism. We introduce an efficient tiling of large matrices that can distribute memory and computing resources across different hardware components within the FPGA. We provide runtime evaluations of \textit{ProTEA} on a Xilinx Alveo U55C high-performance data center accelerator card. Experimental results demonstrate that \textit{ProTEA} can host a wide range of popular transformer networks and achieve near-optimal performance with a tile size of 64 in the multi-head self-attention block and 6 in the feedforward networks block when configured with 8 parallel attention heads, 12 layers, and an embedding dimension of 768 on the U55C. Comparative results are provided showing \textit{ProTEA} is 2.5$\times$ faster than an NVIDIA Titan XP GPU. Results also show that it achieves 1.3 -- 2.8$\times$ speed up compared with current state-of-the-art custom designed FPGA accelerators.<|reference_end|>
arxiv
@article{kabir2024protea:, title={ProTEA: Programmable Transformer Encoder Acceleration on FPGA}, author={Ehsan Kabir, Jason D. Bakos, David Andrews, Miaoqing Huang}, journal={arXiv preprint arXiv:2409.13975}, year={2024}, archivePrefix={arXiv}, eprint={2409.13975}, primaryClass={cs.AR cs.AI cs.LG cs.SY eess.SY} }
kabir2024protea:
arxiv-660197
2409.13976
Detecting Inpainted Video with Frequency Domain Insights
<|reference_start|>Detecting Inpainted Video with Frequency Domain Insights: Video inpainting enables seamless content removal and replacement within frames, posing ethical and legal risks when misused. To mitigate these risks, detecting manipulated regions in inpainted videos is critical. Previous detection methods often focus solely on the characteristics derived from spatial and temporal dimensions, which limits their effectiveness by overlooking the unique frequency characteristics of different inpainting algorithms. In this paper, we propose the Frequency Domain Insights Network (FDIN), which significantly enhances detection accuracy by incorporating insights from the frequency domain. Our network features an Adaptive Band Selective Response module to discern frequency characteristics specific to various inpainting techniques and a Fast Fourier Convolution-based Attention module for identifying periodic artifacts in inpainted regions. Utilizing 3D ResBlocks for spatiotemporal analysis, FDIN progressively refines detection precision from broad assessments to detailed localization. Experimental evaluations on public datasets demonstrate that FDIN achieves state-of-the-art performance, setting a new benchmark in video inpainting detection.<|reference_end|>
arxiv
@article{tang2024detecting, title={Detecting Inpainted Video with Frequency Domain Insights}, author={Quanhui Tang and Jingtao Cao}, journal={arXiv preprint arXiv:2409.13976}, year={2024}, archivePrefix={arXiv}, eprint={2409.13976}, primaryClass={cs.CV cs.AI} }
tang2024detecting
arxiv-660198
2409.13977
Improving 3D Semi-supervised Learning by Effectively Utilizing All Unlabelled Data
<|reference_start|>Improving 3D Semi-supervised Learning by Effectively Utilizing All Unlabelled Data: Semi-supervised learning (SSL) has shown its effectiveness in learning effective 3D representations from a small amount of labelled data while utilizing large amounts of unlabelled data. Traditional semi-supervised approaches rely on the fundamental concept of predicting pseudo-labels for unlabelled data and incorporating them into the learning process. However, we identify that the existing methods do not fully utilize all the unlabelled samples and consequently limit their potential performance. To address this issue, we propose AllMatch, a novel SSL-based 3D classification framework that effectively utilizes all the unlabelled samples. AllMatch comprises three modules: (1) an adaptive hard augmentation module that applies relatively hard augmentations to the high-confident unlabelled samples with lower loss values, thereby enhancing the contribution of such samples, (2) an inverse learning module that further improves the utilization of unlabelled data by learning what not to learn, and (3) a contrastive learning module that ensures learning from all the samples in both supervised and unsupervised settings. Comprehensive experiments on two popular 3D datasets demonstrate a performance improvement of up to 11.2% with 1% labelled data, surpassing the SOTA by a significant margin. Furthermore, AllMatch exhibits its efficiency in effectively leveraging all the unlabelled data, demonstrated by the fact that only 10% of labelled data reaches nearly the same performance as fully-supervised learning with all labelled data. The code of our work is available at: https://github.com/snehaputul/AllMatch.<|reference_end|>
arxiv
@article{paul2024improving, title={Improving 3D Semi-supervised Learning by Effectively Utilizing All Unlabelled Data}, author={Sneha Paul, Zachary Patterson, Nizar Bouguila}, journal={arXiv preprint arXiv:2409.13977}, year={2024}, archivePrefix={arXiv}, eprint={2409.13977}, primaryClass={cs.CV} }
paul2024improving
arxiv-660199
2409.13978
FracGM: A Fast Fractional Programming Technique for Geman-McClure Robust Estimator
<|reference_start|>FracGM: A Fast Fractional Programming Technique for Geman-McClure Robust Estimator: Robust estimation is essential in computer vision, robotics, and navigation, aiming to minimize the impact of outlier measurements for improved accuracy. We present a fast algorithm for Geman-McClure robust estimation, FracGM, leveraging fractional programming techniques. This solver reformulates the original non-convex fractional problem into a convex dual problem and a linear equation system, iteratively solving them in an alternating optimization pattern. Compared to graduated non-convexity approaches, this strategy exhibits a faster convergence rate and better outlier rejection capability. In addition, the global optimality of the proposed solver can be guaranteed under given conditions. We demonstrate the proposed FracGM solver with Wahba's rotation problem and 3-D point-cloud registration along with relaxation pre-processing and projection post-processing. Compared to state-of-the-art algorithms, when the outlier rates increase from 20% to 80%, FracGM shows 53% and 88% lower rotation and translation increases. In real-world scenarios, FracGM achieves better results in 13 out of 18 outcomes, while having a 19.43% improvement in computation time.<|reference_end|>
arxiv
@article{chen2024fracgm:, title={FracGM: A Fast Fractional Programming Technique for Geman-McClure Robust Estimator}, author={Bang-Shien Chen, Yu-Kai Lin, Jian-Yu Chen, Chih-Wei Huang, Jann-Long Chern, Ching-Cherng Sun}, journal={arXiv preprint arXiv:2409.13978}, year={2024}, archivePrefix={arXiv}, eprint={2409.13978}, primaryClass={cs.CV cs.RO math.OC} }
chen2024fracgm:
arxiv-660200
2409.13979
Bias and Toxicity in Role-Play Reasoning
<|reference_start|>Bias and Toxicity in Role-Play Reasoning: Role-play in Large Language Models (LLMs) is a crucial technique that enables models to adopt specific perspectives, enhancing their ability to generate contextually relevant and accurate responses. By simulating different roles, this approach improves reasoning capabilities across various NLP benchmarks, making the model's output more aligned with diverse scenarios. However, in this work, we demonstrate that role-play also carries potential risks. We systematically evaluate the impact of role-play by asking the language model to adopt different roles and testing it on multiple benchmarks that contain stereotypical and harmful questions. Despite the significant fluctuations in the benchmark results in different experiments, we find that applying role-play often increases the overall likelihood of generating stereotypical and harmful outputs.<|reference_end|>
arxiv
@article{zhao2024bias, title={Bias and Toxicity in Role-Play Reasoning}, author={Jinman Zhao, Zifan Qian, Linbo Cao, Yining Wang, Yitian Ding}, journal={arXiv preprint arXiv:2409.13979}, year={2024}, archivePrefix={arXiv}, eprint={2409.13979}, primaryClass={cs.CL} }
zhao2024bias