corpus_id: string (length 7–12)
paper_id: string (length 9–16)
title: string (length 1–261)
abstract: string (length 70–4.02k)
source: string (1 class)
bibtex: string (length 208–20.9k)
citation_key: string (length 6–100)
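The field list above is the dataset's schema (field name, type, and observed string-length range). As a concrete reading of it, the sketch below builds one record from the first entry in this dump and checks it against the stated length bounds; the `record`/`bounds` names and the `check` helper are illustrative, not part of any dataset tooling.

```python
# A minimal sketch validating one record against the schema above.
# Field names and length bounds are copied from the schema header; the record
# values come from the first entry in this dump. `check` is a hypothetical
# helper, not part of any dataset library.

record = {
    "corpus_id": "arxiv-665601",
    "paper_id": "2410.03417",
    "title": ("Img2CAD: Conditioned 3D CAD Model Generation from Single Image "
              "with Structured Visual Geometry"),
    "source": "arxiv",
    "citation_key": "chen2024img2cad:",
}

# Exact integer (min, max) string-length bounds from the schema; fields whose
# bounds are abbreviated in the header ("4.02k", "20.9k") are omitted here.
bounds = {
    "corpus_id": (7, 12),
    "paper_id": (9, 16),
    "title": (1, 261),
    "citation_key": (6, 100),
}

def check(rec, bnds):
    """Return the names of fields whose string length violates the bounds."""
    return [f for f, (lo, hi) in bnds.items()
            if f in rec and not lo <= len(rec[f]) <= hi]

print(check(record, bounds))  # []  (every present field fits its range)
```

An out-of-range value surfaces by name, e.g. `check({"corpus_id": "x"}, bounds)` returns `["corpus_id"]`.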
arxiv-665601
2410.03417
Img2CAD: Conditioned 3D CAD Model Generation from Single Image with Structured Visual Geometry
In this paper, we propose Img2CAD, the first approach to our knowledge that uses 2D image inputs to generate CAD models with editable parameters. Unlike existing AI methods for 3D model generation from text or image inputs, which often rely on mesh-based representations that are incompatible with CAD tools and lack editability and fine control, Img2CAD enables seamless integration between AI-based 3D reconstruction and CAD software. We have identified an innovative intermediate representation called Structured Visual Geometry (SVG), characterized by vectorized wireframes extracted from objects. This representation significantly enhances the performance of conditioned CAD model generation. Additionally, we introduce two new datasets to further support research in this area: ABC-mono, the largest known dataset comprising over 200,000 3D CAD models with rendered images, and KOCAD, the first dataset featuring real-world captured objects alongside their ground-truth CAD models, supporting further research in conditioned CAD model generation.
arxiv
@article{chen2024img2cad:, title={Img2CAD: Conditioned 3D CAD Model Generation from Single Image with Structured Visual Geometry}, author={Tianrun Chen, Chunan Yu, Yuanqi Hu, Jing Li, Tao Xu, Runlong Cao, Lanyun Zhu, Ying Zang, Yong Zhang, Zejian Li, Linyun Sun}, journal={arXiv preprint arXiv:2410.03417}, year={2024}, archivePrefix={arXiv}, eprint={2410.03417}, primaryClass={cs.CV} }
chen2024img2cad:
arxiv-665602
2410.03420
Towards Real-time Intrahepatic Vessel Identification in Intraoperative Ultrasound-Guided Liver Surgery
While laparoscopic liver resection is less prone to complications and maintains patient outcomes compared to traditional open surgery, its complexity hinders widespread adoption due to challenges in representing the liver's internal structure. Laparoscopic intraoperative ultrasound offers efficient, cost-effective and radiation-free guidance. Our objective is to aid physicians in identifying internal liver structures using laparoscopic intraoperative ultrasound. We propose a patient-specific approach using preoperative 3D ultrasound liver volume to train a deep learning model for real-time identification of portal tree and branch structures. Our personalized AI model, validated on ex vivo swine livers, achieved superior precision (0.95) and recall (0.93) compared to surgeons, laying groundwork for precise vessel identification in ultrasound-based liver resection. Its adaptability and potential clinical impact promise to advance surgical interventions and improve patient care.
arxiv
@article{beaudet2024towards, title={Towards Real-time Intrahepatic Vessel Identification in Intraoperative Ultrasound-Guided Liver Surgery}, author={Karl-Philippe Beaudet (IHU Strasbourg, UNISTRA, MIMESIS), Alexandros Karargyris (IHU Strasbourg, UNISTRA), Sidaty El Hadramy (UNISTRA, MIMESIS), St\'ephane Cotin (UNISTRA, MIMESIS), Jean-Paul Mazellier (IHU Strasbourg, UNISTRA), Nicolas Padoy (IHU Strasbourg, UNISTRA), Juan Verde (IHU Strasbourg, UNISTRA, MIMESIS)}, journal={MICCAI 2024, Oct 2024, Marrakech, Morocco}, year={2024}, archivePrefix={arXiv}, eprint={2410.03420}, primaryClass={eess.IV cs.AI cs.CV} }
beaudet2024towards
arxiv-665603
2410.03421
One2set + Large Language Model: Best Partners for Keyphrase Generation
Keyphrase generation (KPG) aims to automatically generate a collection of phrases representing the core concepts of a given document. The dominant paradigms in KPG include one2seq and one2set. Recently, there has been increasing interest in applying large language models (LLMs) to KPG. Our preliminary experiments reveal that it is challenging for a single model to excel in both recall and precision. Further analysis shows that: 1) the one2set paradigm has the advantage of high recall, but suffers from improper assignments of supervision signals during training; 2) LLMs are powerful in keyphrase selection, but existing selection methods often make redundant selections. Given these observations, we introduce a generate-then-select framework decomposing KPG into two steps, where we adopt a one2set-based model as generator to produce candidates and then use an LLM as selector to select keyphrases from these candidates. In particular, we make two important improvements to our generator and selector: 1) we design an Optimal Transport-based assignment strategy to address the above improper assignments; 2) we model keyphrase selection as a sequence labeling task to alleviate redundant selections. Experimental results on multiple benchmark datasets show that our framework significantly surpasses state-of-the-art models, especially in absent keyphrase prediction.
arxiv
@article{shao2024one2set, title={One2set + Large Language Model: Best Partners for Keyphrase Generation}, author={Liangying Shao, Liang Zhang, Minlong Peng, Guoqi Ma, Hao Yue, Mingming Sun, Jinsong Su}, journal={arXiv preprint arXiv:2410.03421}, year={2024}, archivePrefix={arXiv}, eprint={2410.03421}, primaryClass={cs.CL cs.AI} }
shao2024one2set
arxiv-665604
2410.03423
Aircraft Radar Altimeter Interference Mitigation Through a CNN-Layer Only Denoising Autoencoder Architecture
Denoising autoencoders for signal processing applications have been shown to experience significant difficulty in learning to reconstruct radio frequency communication signals, particularly in the large sample regime. In communication systems, this challenge is primarily due to the need to reconstruct the modulated data stream, which is generally highly stochastic in nature. In this work, we take advantage of this limitation by using the denoising autoencoder to instead remove interfering radio frequency communication signals while reconstructing highly structured FMCW radar signals. More specifically, in this work we show that a CNN-layer only autoencoder architecture can be utilized to improve the accuracy of a radar altimeter's ranging estimate even in severe interference environments consisting of a multitude of interference signals. This is demonstrated through comprehensive performance analysis of an end-to-end FMCW radar altimeter simulation with and without the convolutional layer-only autoencoder. The proposed approach significantly improves interference mitigation in the presence of both narrow-band tone interference as well as wideband QPSK interference in terms of range RMS error, number of false altitude reports, and the peak-to-sidelobe ratio of the resulting range profile. FMCW radar signals of up to 40,000 IQ samples can be reliably reconstructed.
arxiv
@article{brown2024aircraft, title={Aircraft Radar Altimeter Interference Mitigation Through a CNN-Layer Only Denoising Autoencoder Architecture}, author={Samuel B. Brown, Stephen Young, Adam Wagenknecht, Daniel Jakubisin, Charles E. Thornton, Aaron Orndorff, William C. Headley}, journal={arXiv preprint arXiv:2410.03423}, year={2024}, archivePrefix={arXiv}, eprint={2410.03423}, primaryClass={eess.SP cs.LG} }
brown2024aircraft
arxiv-665605
2410.03424
Cayley Graph Propagation
In spite of the plethora of success stories with graph neural networks (GNNs) on modelling graph-structured data, they are notoriously vulnerable to over-squashing, whereby tasks necessitate the mixing of information between distant pairs of nodes. To address this problem, prior work suggests rewiring the graph structure to improve information flow. Alternatively, a significant body of research has dedicated itself to discovering and precomputing bottleneck-free graph structures to ameliorate over-squashing. One well-regarded family of bottleneck-free graphs within the mathematical community are expander graphs, with prior work, Expander Graph Propagation (EGP), proposing the use of a well-known expander graph family, the Cayley graphs of the $\mathrm{SL}(2,\mathbb{Z}_n)$ special linear group, as a computational template for GNNs. However, in EGP the computational graphs used are truncated to align with a given input graph. In this work, we show that truncation is detrimental to the coveted expansion properties. Instead, we propose CGP, a method to propagate information over a complete Cayley graph structure, thereby ensuring it is bottleneck-free to better alleviate over-squashing. Our empirical evidence across several real-world datasets not only shows that CGP recovers significant improvements as compared to EGP, but it also matches or outperforms computationally complex graph rewiring techniques.
arxiv
@article{wilson2024cayley, title={Cayley Graph Propagation}, author={JJ Wilson, Maya Bechler-Speicher, Petar Veli{\v{c}}kovi\'c}, journal={arXiv preprint arXiv:2410.03424}, year={2024}, archivePrefix={arXiv}, eprint={2410.03424}, primaryClass={cs.LG cs.AI} }
wilson2024cayley
arxiv-665606
2410.03426
Movable-Antenna Aided Secure Transmission for RIS-ISAC Systems
Integrated sensing and communication (ISAC) systems have the issue of secrecy leakage when using the ISAC waveforms for sensing, thus posing a potential risk for eavesdropping. To address this problem, we propose to employ movable antennas (MAs) and reconfigurable intelligent surface (RIS) to enhance the physical layer security (PLS) performance of ISAC systems, where an eavesdropping target potentially wiretaps the signals transmitted by the base station (BS). To evaluate the synergistic performance gain provided by MAs and RIS, we formulate an optimization problem for maximizing the sum-rate of the users by jointly optimizing the transmit/receive beamformers of the BS, the reflection coefficients of the RIS, and the positions of MAs at communication users, subject to a minimum communication rate requirement for each user, a minimum radar sensing requirement, and a maximum secrecy leakage to the eavesdropping target. To solve this non-convex problem with highly coupled variables, a two-layer penalty-based algorithm is developed by updating the penalty parameter in the outer-layer iterations to achieve a trade-off between the optimality and feasibility of the solution. In the inner-layer iterations, the auxiliary variables are first obtained with semi-closed-form solutions using Lagrange duality. Then, the receive beamformer filter at the BS is optimized by solving a Rayleigh-quotient subproblem. Subsequently, the transmit beamformer matrix is obtained by solving a convex subproblem. Finally, the majorization-minimization (MM) algorithm is employed to optimize the RIS reflection coefficients and the positions of MAs. Extensive simulation results validate the considerable benefits of the proposed MAs-aided RIS-ISAC systems in enhancing security performance compared to traditional fixed position antenna (FPA)-based systems.
arxiv
@article{ma2024movable-antenna, title={Movable-Antenna Aided Secure Transmission for RIS-ISAC Systems}, author={Yaodong Ma, Kai Liu, Yanming Liu, Lipeng Zhu, and Zhenyu Xiao}, journal={arXiv preprint arXiv:2410.03426}, year={2024}, archivePrefix={arXiv}, eprint={2410.03426}, primaryClass={cs.IT eess.SP math.IT} }
ma2024movable-antenna
arxiv-665607
2410.03427
Biodenoising: animal vocalization denoising without access to clean data
Animal vocalization denoising is a task similar to human speech enhancement, a well-studied field of research. In contrast to the latter, it is applied to a higher diversity of sound production mechanisms and recording environments, and this higher diversity is a challenge for existing models. Adding to the challenge, and in contrast to speech, we lack large and diverse datasets comprising clean vocalizations. As a solution we use as training data pseudo-clean targets, i.e. pre-denoised vocalizations, and segments of background noise without a vocalization. We propose a train set derived from bioacoustics datasets and repositories representing diverse species, acoustic environments, and geographic regions. Additionally, we introduce a non-overlapping benchmark set comprising clean vocalizations from different taxa and noise samples. We show that denoising models (demucs, CleanUNet) trained on pseudo-clean targets obtained with speech enhancement models achieve competitive results on the benchmarking set. We publish data, code, libraries, and demos at https://mariusmiron.com/research/biodenoising.
arxiv
@article{miron2024biodenoising:, title={Biodenoising: animal vocalization denoising without access to clean data}, author={Marius Miron, Sara Keen, Jen-Yu Liu, Benjamin Hoffman, Masato Hagiwara, Olivier Pietquin, Felix Effenberger, Maddie Cusimano}, journal={arXiv preprint arXiv:2410.03427}, year={2024}, archivePrefix={arXiv}, eprint={2410.03427}, primaryClass={cs.SD eess.AS} }
miron2024biodenoising:
arxiv-665608
2410.03428
Research Landscape of the novel emerging field of Cryptoeconomics
A bibliometric literature analysis was conducted to illuminate the evolving and rapidly expanding literature in the field of cryptoeconomics. This analysis presented the emerging field's intellectual, social, and conceptual structure. The intellectual structure, characterized by schools of thought, emerged through a common citation analysis. The social structure revealed collaborations among researchers, identified through a co-authorship analysis. Network analysis highlighted collaborative communities facilitating innovation and knowledge exchange within the field. The conceptual structure was revealed by analyzing common terms occurring in titles, author keywords, abstracts, and the publications themselves. This bibliometric analysis of the rapidly advancing field of cryptoeconomics serves as a foundational resource, providing insights into research productivity and emerging trends. It contributes to a deeper understanding of the field, offering valuable information on research patterns and trends. Furthermore, this analysis empowers researchers, policymakers, and industry sectors to make informed decisions, establish collaborations, and navigate the dynamic and evolving landscape of the cryptoeconomics field.
arxiv
@article{alasik2024research, title={Research Landscape of the novel emerging field of Cryptoeconomics}, author={Alican Alasik and Nihan Yildirim}, journal={arXiv preprint arXiv:2410.03428}, year={2024}, number={Chainsci/2024/36}, archivePrefix={arXiv}, eprint={2410.03428}, primaryClass={cs.SI} }
alasik2024research
arxiv-665609
2410.03429
How Hard is this Test Set? NLI Characterization by Exploiting Training Dynamics
Natural Language Inference (NLI) evaluation is crucial for assessing language understanding models; however, popular datasets suffer from systematic spurious correlations that artificially inflate measured model performance. To address this, we propose a method for the automated creation of a challenging test set without relying on the manual construction of artificial and unrealistic examples. We categorize the test set of popular NLI datasets into three difficulty levels by leveraging methods that exploit training dynamics. This categorization significantly reduces spurious correlation measures, with examples labeled as having the highest difficulty showing markedly decreased performance and encompassing more realistic and diverse linguistic phenomena. When our characterization method is applied to the training set, models trained with only a fraction of the data achieve comparable performance to those trained on the full dataset, surpassing other dataset characterization techniques. Our research addresses limitations in NLI dataset construction, providing a more authentic evaluation of model performance with implications for diverse NLU applications.
arxiv
@article{cosma2024how, title={How Hard is this Test Set? NLI Characterization by Exploiting Training Dynamics}, author={Adrian Cosma, Stefan Ruseti, Mihai Dascalu, Cornelia Caragea}, journal={arXiv preprint arXiv:2410.03429}, year={2024}, archivePrefix={arXiv}, eprint={2410.03429}, primaryClass={cs.CL} }
cosma2024how
arxiv-665610
2410.03430
Images Speak Volumes: User-Centric Assessment of Image Generation for Accessible Communication
Explanatory images play a pivotal role in accessible and easy-to-read (E2R) texts. However, the images available in online databases are not tailored toward the respective texts, and the creation of customized images is expensive. In this large-scale study, we investigated whether text-to-image generation models can close this gap by providing customizable images quickly and easily. We benchmarked seven image generation models (four open-source and three closed-source) and provide an extensive evaluation of the resulting images. In addition, we performed a user study with people from the E2R target group to examine whether the images met their requirements. We find that some of the models show remarkable performance, but none of the models are ready to be used at a larger scale without human supervision. Our research is an important step toward facilitating the creation of accessible information for E2R creators and tailoring accessible images to the target group's needs.
arxiv
@article{anschütz2024images, title={Images Speak Volumes: User-Centric Assessment of Image Generation for Accessible Communication}, author={Miriam Ansch\"utz and Tringa Sylaj and Georg Groh}, journal={arXiv preprint arXiv:2410.03430}, year={2024}, archivePrefix={arXiv}, eprint={2410.03430}, primaryClass={cs.CV cs.CL} }
anschütz2024images
arxiv-665611
2410.03431
Approaching Code Search for Python as a Translation Retrieval Problem with Dual Encoders
Code search is vital in the maintenance and extension of software systems. Past works have used separate language models for the natural language and programming language artifacts on models with multiple encoders and different loss functions. Similarly, this work approaches code search for Python as a translation retrieval problem, treating the natural language queries and the programming language as two types of languages. By using dual encoders, these two types of language sequences are projected onto a shared embedding space, in which the distance reflects the similarity between a given pair of query and code. However, in contrast to previous work, this approach uses a unified language model and a dual encoder structure with a cosine similarity loss function. A unified language model helps the model take advantage of the considerable overlap of words between the artifacts, making the learning much easier. On the other hand, the dual encoders trained with cosine similarity loss help the model learn the underlying patterns of which terms are important for predicting linked pairs of artifacts. Evaluation shows the proposed model achieves performance better than state-of-the-art code search models. In addition, this model is much less expensive in terms of time and complexity, offering a cheaper, faster, and better alternative.
arxiv
@article{khan2024approaching, title={Approaching Code Search for Python as a Translation Retrieval Problem with Dual Encoders}, author={Monoshiz Mahbub Khan, Zhe Yu}, journal={arXiv preprint arXiv:2410.03431}, year={2024}, doi={10.1007/s10664-024-10580-3}, archivePrefix={arXiv}, eprint={2410.03431}, primaryClass={cs.SE} }
khan2024approaching
arxiv-665612
2410.03432
EB-NeRD: A Large-Scale Dataset for News Recommendation
Personalized content recommendations have been pivotal to the content experience in digital media from video streaming to social networks. However, several domain-specific challenges have held back adoption of recommender systems in news publishing. To address these challenges, we introduce the Ekstra Bladet News Recommendation Dataset (EB-NeRD). The dataset encompasses data from over a million unique users and more than 37 million impression logs from Ekstra Bladet. It also includes a collection of over 125,000 Danish news articles, complete with titles, abstracts, bodies, and metadata, such as categories. EB-NeRD served as the benchmark dataset for the RecSys '24 Challenge, where it was demonstrated how the dataset can be used to address both technical and normative challenges in designing effective and responsible recommender systems for news publishing. The dataset is available at: https://recsys.eb.dk.
arxiv
@article{kruse2024eb-nerd:, title={EB-NeRD: A Large-Scale Dataset for News Recommendation}, author={Johannes Kruse, Kasper Lindskow, Saikishore Kalloori, Marco Polignano, Claudio Pomo, Abhishek Srivastava, Anshuk Uppal, Michael Riis Andersen, Jes Frellsen}, journal={arXiv preprint arXiv:2410.03432}, year={2024}, doi={10.1145/3687151.3687152}, archivePrefix={arXiv}, eprint={2410.03432}, primaryClass={cs.IR cs.AI cs.LG} }
kruse2024eb-nerd:
arxiv-665613
2410.03434
Self-supervised Spatio-Temporal Graph Mask-Passing Attention Network for Perceptual Importance Prediction of Multi-point Tactility
While visual and auditory information are prevalent in modern multimedia systems, haptic interaction, e.g., tactile and kinesthetic interaction, provides a unique form of human perception. However, multimedia technology for contact interaction is less mature than non-contact multimedia technologies and requires further development. Specialized haptic media technologies, requiring low latency and low bitrates, are essential to enable haptic interaction, necessitating haptic information compression. Existing vibrotactile signal compression methods, based on the perceptual model, do not consider the characteristics of fused tactile perception at multiple spatially distributed interaction points. In fact, differences in tactile perceptual importance are not limited to conventional frequency and time domains, but also encompass differences in the spatial locations on the skin unique to tactile perception. For the most frequently used tactile information, vibrotactile texture perception, we have developed a model to predict its perceptual importance at multiple points, based on self-supervised learning and a Spatio-Temporal Graph Neural Network. Current experimental results indicate that this model can effectively predict the perceptual importance of various points in multi-point tactile perception scenarios.
arxiv
@article{he2024self-supervised, title={Self-supervised Spatio-Temporal Graph Mask-Passing Attention Network for Perceptual Importance Prediction of Multi-point Tactility}, author={Dazhong He, Qian Liu}, journal={arXiv preprint arXiv:2410.03434}, year={2024}, archivePrefix={arXiv}, eprint={2410.03434}, primaryClass={cs.HC cs.AI} }
he2024self-supervised
arxiv-665614
2410.03435
A General Framework for Producing Interpretable Semantic Text Embeddings
Semantic text embedding is essential to many tasks in Natural Language Processing (NLP). While black-box models are capable of generating high-quality embeddings, their lack of interpretability limits their use in tasks that demand transparency. Recent approaches have improved interpretability by leveraging domain-expert-crafted or LLM-generated questions, but these methods rely heavily on expert input or careful prompt design, which restricts their generalizability and ability to generate discriminative questions across a wide range of tasks. To address these challenges, we introduce \algo{CQG-MBQA} (Contrastive Question Generation - Multi-task Binary Question Answering), a general framework for producing interpretable semantic text embeddings across diverse tasks. Our framework systematically generates highly discriminative, low-cognitive-load yes/no questions through the \algo{CQG} method and answers them efficiently with the \algo{MBQA} model, resulting in interpretable embeddings in a cost-effective manner. We validate the effectiveness and interpretability of \algo{CQG-MBQA} through extensive experiments and ablation studies, demonstrating that it delivers embedding quality comparable to many advanced black-box models while remaining inherently interpretable. Additionally, \algo{CQG-MBQA} outperforms other interpretable text embedding methods across various downstream tasks.
arxiv
@article{sun2024a, title={A General Framework for Producing Interpretable Semantic Text Embeddings}, author={Yiqun Sun, Qiang Huang, Yixuan Tang, Anthony K. H. Tung, Jun Yu}, journal={arXiv preprint arXiv:2410.03435}, year={2024}, archivePrefix={arXiv}, eprint={2410.03435}, primaryClass={cs.CL cs.AI cs.LG} }
sun2024a
arxiv-665615
2410.03437
Zebra: In-Context and Generative Pretraining for Solving Parametric PDEs
Solving time-dependent parametric partial differential equations (PDEs) is challenging, as models must adapt to variations in parameters such as coefficients, forcing terms, and boundary conditions. Data-driven neural solvers either train on data sampled from the PDE parameters distribution in the hope that the model generalizes to new instances or rely on gradient-based adaptation and meta-learning to implicitly encode the dynamics from observations. This often comes with increased inference complexity. Inspired by the in-context learning capabilities of large language models (LLMs), we introduce Zebra, a novel generative auto-regressive transformer designed to solve parametric PDEs without requiring gradient adaptation at inference. By leveraging in-context information during both pre-training and inference, Zebra dynamically adapts to new tasks by conditioning on input sequences that incorporate context trajectories or preceding states. This approach enables Zebra to flexibly handle arbitrarily sized context inputs and supports uncertainty quantification through the sampling of multiple solution trajectories. We evaluate Zebra across a variety of challenging PDE scenarios, demonstrating its adaptability, robustness, and superior performance compared to existing approaches.
arxiv
@article{serrano2024zebra:, title={Zebra: In-Context and Generative Pretraining for Solving Parametric PDEs}, author={Louis Serrano, Armand Kassa\"i Koupa\"i, Thomas X Wang, Pierre Erbacher, Patrick Gallinari}, journal={arXiv preprint arXiv:2410.03437}, year={2024}, archivePrefix={arXiv}, eprint={2410.03437}, primaryClass={cs.LG} }
serrano2024zebra:
arxiv-665616
2410.03438
Dessie: Disentanglement for Articulated 3D Horse Shape and Pose Estimation from Images
In recent years, 3D parametric animal models have been developed to aid in estimating 3D shape and pose from images and video. While progress has been made for humans, it's more challenging for animals due to limited annotated data. To address this, we introduce the first method using synthetic data generation and disentanglement to learn to regress 3D shape and pose. Focusing on horses, we use text-based texture generation and a synthetic data pipeline to create varied shapes, poses, and appearances, learning disentangled spaces. Our method, Dessie, surpasses existing 3D horse reconstruction methods and generalizes to other large animals like zebras, cows, and deer. See the project website at: \url{https://celiali.github.io/Dessie/}.
arxiv
@article{li2024dessie:, title={Dessie: Disentanglement for Articulated 3D Horse Shape and Pose Estimation from Images}, author={Ci Li, Yi Yang, Zehang Weng, Elin Hernlund, Silvia Zuffi, Hedvig Kjellstr\"om}, journal={arXiv preprint arXiv:2410.03438}, year={2024}, archivePrefix={arXiv}, eprint={2410.03438}, primaryClass={cs.CV} }
li2024dessie:
arxiv-665617
2410.03439
ToolGen: Unified Tool Retrieval and Calling via Generation
As large language models (LLMs) advance, their inability to autonomously execute tasks by directly interacting with external tools remains a critical limitation. Traditional methods rely on inputting tool descriptions as context, which is constrained by context length and requires separate, often inefficient, retrieval mechanisms. We introduce ToolGen, a paradigm shift that integrates tool knowledge directly into the LLM's parameters by representing each tool as a unique token. This enables the LLM to generate tool calls and arguments as part of its next token prediction capabilities, seamlessly blending tool invocation with language generation. Our framework allows the LLM to access and utilize a vast amount of tools with no additional retrieval step, significantly enhancing both performance and scalability. Experimental results with over 47,000 tools show that ToolGen not only achieves superior results in both tool retrieval and autonomous task completion but also sets the stage for a new era of AI agents that can adapt to tools across diverse domains. By fundamentally transforming tool retrieval into a generative process, ToolGen paves the way for more versatile, efficient, and autonomous AI systems. ToolGen enables end-to-end tool learning and opens opportunities for integration with other advanced techniques such as chain-of-thought and reinforcement learning, thereby expanding the practical capabilities of LLMs.
arxiv
@article{wang2024toolgen:, title={ToolGen: Unified Tool Retrieval and Calling via Generation}, author={Renxi Wang, Xudong Han, Lei Ji, Shu Wang, Timothy Baldwin, Haonan Li}, journal={arXiv preprint arXiv:2410.03439}, year={2024}, archivePrefix={arXiv}, eprint={2410.03439}, primaryClass={cs.CL} }
wang2024toolgen:
arxiv-665618
2410.03440
Exploring the Benefit of Activation Sparsity in Pre-training
Pre-trained Transformers inherently possess the characteristic of sparse activation, where only a small fraction of the neurons are activated for each token. While sparse activation has been explored through post-training methods, its potential in pre-training remains untapped. In this work, we first study how activation properties change during pre-training. Our examination reveals that Transformers exhibit sparse activation throughout the majority of the pre-training process while the activation correlation keeps evolving as training progresses. Leveraging this observation, we propose Switchable Sparse-Dense Learning (SSD). SSD adaptively switches between the Mixtures-of-Experts (MoE) based sparse training and the conventional dense training during the pre-training process, leveraging the efficiency of sparse training and avoiding the static activation correlation of sparse training. Compared to dense training, SSD achieves comparable performance with identical model size and reduces pre-training costs. Moreover, the models trained with SSD can be directly used as MoE models for sparse inference and achieve the same performance as dense models with up to $2\times$ faster inference speed. Code is available at https://github.com/thunlp/moefication.
arxiv
@article{zhang2024exploring, title={Exploring the Benefit of Activation Sparsity in Pre-training}, author={Zhengyan Zhang, Chaojun Xiao, Qiujieli Qin, Yankai Lin, Zhiyuan Zeng, Xu Han, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Jie Zhou}, journal={arXiv preprint arXiv:2410.03440}, year={2024}, archivePrefix={arXiv}, eprint={2410.03440}, primaryClass={cs.CL cs.AI} }
zhang2024exploring
arxiv-665619
2410.03441
CLoSD: Closing the Loop between Simulation and Diffusion for multi-task character control
<|reference_start|>CLoSD: Closing the Loop between Simulation and Diffusion for multi-task character control: Motion diffusion models and Reinforcement Learning (RL) based control for physics-based simulations have complementary strengths for human motion generation. The former is capable of generating a wide variety of motions, adhering to intuitive control such as text, while the latter offers physically plausible motion and direct interaction with the environment. In this work, we present a method that combines their respective strengths. CLoSD is a text-driven RL physics-based controller, guided by diffusion generation for various tasks. Our key insight is that motion diffusion can serve as an on-the-fly universal planner for a robust RL controller. To this end, CLoSD maintains a closed-loop interaction between two modules -- a Diffusion Planner (DiP), and a tracking controller. DiP is a fast-responding autoregressive diffusion model, controlled by textual prompts and target locations, and the controller is a simple and robust motion imitator that continuously receives motion plans from DiP and provides feedback from the environment. CLoSD is capable of seamlessly performing a sequence of different tasks, including navigation to a goal location, striking an object with a hand or foot as specified in a text prompt, sitting down, and getting up. https://guytevet.github.io/CLoSD-page/<|reference_end|>
arxiv
@article{tevet2024closd:, title={CLoSD: Closing the Loop between Simulation and Diffusion for multi-task character control}, author={Guy Tevet, Sigal Raab, Setareh Cohan, Daniele Reda, Zhengyi Luo, Xue Bin Peng, Amit H. Bermano, Michiel van de Panne}, journal={arXiv preprint arXiv:2410.03441}, year={2024}, archivePrefix={arXiv}, eprint={2410.03441}, primaryClass={cs.CV} }
tevet2024closd:
arxiv-665620
2410.03444
Factoring through monomial representations: arithmetic characterizations and ambiguity of weighted automata
<|reference_start|>Factoring through monomial representations: arithmetic characterizations and ambiguity of weighted automata: We characterize group representations that factor through monomial representations, respectively, block-triangular representations with monomial diagonal blocks, by arithmetic properties. Similar results are obtained for semigroup representations by invertible transformations. The characterizations use results on unit equations from Diophantine number theory (by Evertse, van der Poorten, and Schlickewei in characteristic zero, and by Derksen and Masser in positive characteristic). Specialized to finitely generated groups in characteristic zero, one of our main theorems recovers a slight improvement of a very recent similar characterization by Corvaja, Demeio, Rapinchuk, Ren, and Zannier that was motivated by the study of the bounded generation (BG) property. In positive characteristic, we get a characterization of linear BG groups, recovering a theorem of Ab\'ert, Lubotzky, and Pyber from 2003. Our motivation comes from weighted finite automata (WFA) over a field. For invertible WFA we show that $M$-ambiguity, finite ambiguity, and polynomial ambiguity are characterized by arithmetic properties. We discover a full correspondence between arithmetic properties and a complexity hierarchy for WFA based on ambiguity. In the invertible case, this is a far-reaching generalization of a recent result by Bell and the second author, characterizing unambiguous WFA, that resolved a 1979 conjecture of Reutenauer. As a consequence, using the computability of the (linear) Zariski closure of a finitely generated matrix semigroup, the $M$-ambiguity, finite ambiguity, and polynomial ambiguity properties are algorithmically decidable for invertible WFA.<|reference_end|>
arxiv
@article{puch2024factoring, title={Factoring through monomial representations: arithmetic characterizations and ambiguity of weighted automata}, author={Antoni Puch and Daniel Smertnig}, journal={arXiv preprint arXiv:2410.03444}, year={2024}, archivePrefix={arXiv}, eprint={2410.03444}, primaryClass={math.GR cs.FL} }
puch2024factoring
arxiv-665621
2410.03445
Attainable Force Approximation and Full-Pose Tracking Control of an Over-Actuated Thrust-Vectoring Modular Team UAV
<|reference_start|>Attainable Force Approximation and Full-Pose Tracking Control of an Over-Actuated Thrust-Vectoring Modular Team UAV: Traditional vertical take-off and landing (VTOL) aircraft can not achieve optimal efficiency for various payload weights and has limited mobility due to its under-actuation. With the thrust-vectoring mechanism, the proposed modular team UAV is fully actuated at certain attitudes. However, the attainable force space (AFS) differs according to the team configuration, which makes the controller design difficult. We propose an approximation to the AFS and a full-pose tracking controller with an attitude planner and a force projection, which guarantees the control force is feasible. The proposed approach can be applied to UAVs having multiple thrust-vectoring effectors with homogeneous agents. The simulation and experiment demonstrate a tilting motion during hovering for a 4-agent team.<|reference_end|>
arxiv
@article{chu2024attainable, title={Attainable Force Approximation and Full-Pose Tracking Control of an Over-Actuated Thrust-Vectoring Modular Team UAV}, author={Yen-Cheng Chu, Kai-Cheng Fang, and Feng-Li Lian}, journal={arXiv preprint arXiv:2410.03445}, year={2024}, archivePrefix={arXiv}, eprint={2410.03445}, primaryClass={eess.SY cs.SY} }
chu2024attainable
arxiv-665622
2410.03446
On Uncertainty In Natural Language Processing
<|reference_start|>On Uncertainty In Natural Language Processing: The last decade in deep learning has brought on increasingly capable systems that are deployed on a wide variety of applications. In natural language processing, the field has been transformed by a number of breakthroughs including large language models, which are used in increasingly many user-facing applications. In order to reap the benefits of this technology and reduce potential harms, it is important to quantify the reliability of model predictions and the uncertainties that shroud their development. This thesis studies how uncertainty in natural language processing can be characterized from a linguistic, statistical and neural perspective, and how it can be reduced and quantified through the design of the experimental pipeline. We further explore uncertainty quantification in modeling by theoretically and empirically investigating the effect of inductive model biases in text classification tasks. The corresponding experiments include data for three different languages (Danish, English and Finnish) and tasks as well as a large set of different uncertainty quantification approaches. Additionally, we propose a method for calibrated sampling in natural language generation based on non-exchangeable conformal prediction, which provides tighter token sets with better coverage of the actual continuation. Lastly, we develop an approach to quantify confidence in large black-box language models using auxiliary predictors, where the confidence is predicted from the input to and generated output text of the target model alone.<|reference_end|>
arxiv
@article{ulmer2024on, title={On Uncertainty In Natural Language Processing}, author={Dennis Ulmer}, journal={arXiv preprint arXiv:2410.03446}, year={2024}, archivePrefix={arXiv}, eprint={2410.03446}, primaryClass={cs.AI cs.CL cs.LG} }
ulmer2024on
arxiv-665623
2410.03447
How Language Models Prioritize Contextual Grammatical Cues?
<|reference_start|>How Language Models Prioritize Contextual Grammatical Cues?: Transformer-based language models have shown an excellent ability to effectively capture and utilize contextual information. Although various analysis techniques have been used to quantify and trace the contribution of single contextual cues to a target task such as subject-verb agreement or coreference resolution, scenarios in which multiple relevant cues are available in the context remain underexplored. In this paper, we investigate how language models handle gender agreement when multiple gender cue words are present, each capable of independently disambiguating a target gender pronoun. We analyze two widely used Transformer-based models: BERT, an encoder-based, and GPT-2, a decoder-based model. Our analysis employs two complementary approaches: context mixing analysis, which tracks information flow within the model, and a variant of activation patching, which measures the impact of cues on the model's prediction. We find that BERT tends to prioritize the first cue in the context to form both the target word representations and the model's prediction, while GPT-2 relies more on the final cue. Our findings reveal striking differences in how encoder-based and decoder-based models prioritize and use contextual information for their predictions.<|reference_end|>
arxiv
@article{amirzadeh2024how, title={How Language Models Prioritize Contextual Grammatical Cues?}, author={Hamidreza Amirzadeh, Afra Alishahi, Hosein Mohebbi}, journal={arXiv preprint arXiv:2410.03447}, year={2024}, archivePrefix={arXiv}, eprint={2410.03447}, primaryClass={cs.CL} }
amirzadeh2024how
arxiv-665624
2410.03448
How Toxicity Classifiers and Large Language Models Respond to Ableism
<|reference_start|>How Toxicity Classifiers and Large Language Models Respond to Ableism: People with disabilities (PwD) regularly encounter ableist hate and microaggressions online. While online platforms use machine learning models to moderate online harm, there is little research investigating how these models interact with ableism. In this paper, we curated a dataset of 100 social media comments targeted towards PwD, and recruited 160 participants to rate and explain how toxic and ableist these comments were. We then prompted state-of-the art toxicity classifiers (TCs) and large language models (LLMs) to rate and explain the harm. Our analysis revealed that TCs and LLMs rated toxicity significantly lower than PwD, but LLMs rated ableism generally on par with PwD. However, ableism explanations by LLMs overlooked emotional harm, and lacked specificity and acknowledgement of context, important facets of PwD explanations. Going forward, we discuss challenges in designing disability-aware toxicity classifiers, and advocate for the shift from ableism detection to ableism interpretation and explanation.<|reference_end|>
arxiv
@article{phutane2024how, title={How Toxicity Classifiers and Large Language Models Respond to Ableism}, author={Mahika Phutane, Ananya Seelam, Aditya Vashistha}, journal={arXiv preprint arXiv:2410.03448}, year={2024}, archivePrefix={arXiv}, eprint={2410.03448}, primaryClass={cs.HC cs.AI} }
phutane2024how
arxiv-665625
2410.03450
MLLM as Retriever: Interactively Learning Multimodal Retrieval for Embodied Agents
<|reference_start|>MLLM as Retriever: Interactively Learning Multimodal Retrieval for Embodied Agents: MLLM agents demonstrate potential for complex embodied tasks by retrieving multimodal task-relevant trajectory data. However, current retrieval methods primarily focus on surface-level similarities of textual or visual cues in trajectories, neglecting their effectiveness for the specific task at hand. To address this issue, we propose a novel method, MLLM as ReTriever (MART), which enhances the performance of embodied agents by utilizing interaction data to fine-tune an MLLM retriever based on preference learning, such that the retriever fully considers the effectiveness of trajectories and prioritize them for unseen tasks. We also introduce Trajectory Abstraction, a mechanism that leverages MLLMs' summarization capabilities to represent trajectories with fewer tokens while preserving key information, enabling agents to better comprehend milestones in the trajectory. Experimental results across various environments demonstrate our method significantly improves task success rates in unseen scenes compared to baseline methods. This work presents a new paradigm for multimodal retrieval in embodied agents, by fine-tuning a general-purpose MLLM as the retriever to assess trajectory effectiveness. All benchmark task sets and simulator code modifications for action and observation spaces will be released.<|reference_end|>
arxiv
@article{yue2024mllm, title={MLLM as Retriever: Interactively Learning Multimodal Retrieval for Embodied Agents}, author={Junpeng Yue, Xinru Xu, B\"orje F. Karlsson, and Zongqing Lu}, journal={arXiv preprint arXiv:2410.03450}, year={2024}, archivePrefix={arXiv}, eprint={2410.03450}, primaryClass={cs.LG} }
yue2024mllm
arxiv-665626
2410.03453
A New World in the Depths of Microcrypt: Separating OWSGs and Quantum Money from QEFID
<|reference_start|>A New World in the Depths of Microcrypt: Separating OWSGs and Quantum Money from QEFID: While in classical cryptography, one-way functions (OWFs) are widely regarded as the "minimal assumption," the situation in quantum cryptography is less clear. Recent works have put forward two concurrent candidates for the minimal assumption in quantum cryptography: One-way state generators (OWSGs), postulating the existence of a hard search problem with an efficient verification algorithm, and EFI pairs, postulating the existence of a hard distinguishing problem. Two recent papers [Khurana and Tomer STOC'24; Batra and Jain FOCS'24] showed that OWSGs imply EFI pairs, but the reverse direction remained open. In this work, we give strong evidence that the opposite direction does not hold: We show that there is a quantum unitary oracle relative to which EFI pairs exist, but OWSGs do not. In fact, we show a slightly stronger statement that holds also for EFI pairs that output classical bits (QEFID). As a consequence, we separate, via our oracle, QEFID, and one-way puzzles from OWSGs and several other Microcrypt primitives, including efficiently verifiable one-way puzzles and unclonable state generators. In particular, this solves a problem left open in [Chung, Goldin, and Gray Crypto'24]. Using similar techniques, we also establish a fully black-box separation (which is slightly weaker than an oracle separation) between private-key quantum money schemes and QEFID pairs. One conceptual implication of our work is that the existence of an efficient verification algorithm may lead to qualitatively stronger primitives in quantum cryptography.<|reference_end|>
arxiv
@article{behera2024a, title={A New World in the Depths of Microcrypt: Separating OWSGs and Quantum Money from QEFID}, author={Amit Behera, Giulio Malavolta, Tomoyuki Morimae, Tamer Mour, Takashi Yamakawa}, journal={arXiv preprint arXiv:2410.03453}, year={2024}, number={YITP-24-127}, archivePrefix={arXiv}, eprint={2410.03453}, primaryClass={quant-ph cs.CR} }
behera2024a
arxiv-665627
2410.03456
Dynamic Diffusion Transformer
<|reference_start|>Dynamic Diffusion Transformer: Diffusion Transformer (DiT), an emerging diffusion model for image generation, has demonstrated superior performance but suffers from substantial computational costs. Our investigations reveal that these costs stem from the static inference paradigm, which inevitably introduces redundant computation in certain diffusion timesteps and spatial regions. To address this inefficiency, we propose Dynamic Diffusion Transformer (DyDiT), an architecture that dynamically adjusts its computation along both timestep and spatial dimensions during generation. Specifically, we introduce a Timestep-wise Dynamic Width (TDW) approach that adapts model width conditioned on the generation timesteps. In addition, we design a Spatial-wise Dynamic Token (SDT) strategy to avoid redundant computation at unnecessary spatial locations. Extensive experiments on various datasets and different-sized models verify the superiority of DyDiT. Notably, with <3% additional fine-tuning iterations, our method reduces the FLOPs of DiT-XL by 51%, accelerates generation by 1.73$\times$, and achieves a competitive FID score of 2.07 on ImageNet. The code is publicly available at https://github.com/NUS-HPC-AI-Lab/Dynamic-Diffusion-Transformer.<|reference_end|>
arxiv
@article{zhao2024dynamic, title={Dynamic Diffusion Transformer}, author={Wangbo Zhao, Yizeng Han, Jiasheng Tang, Kai Wang, Yibing Song, Gao Huang, Fan Wang, Yang You}, journal={arXiv preprint arXiv:2410.03456}, year={2024}, archivePrefix={arXiv}, eprint={2410.03456}, primaryClass={cs.CV} }
zhao2024dynamic
arxiv-665628
2410.03457
CoCoLoFa: A Dataset of News Comments with Common Logical Fallacies Written by LLM-Assisted Crowds
<|reference_start|>CoCoLoFa: A Dataset of News Comments with Common Logical Fallacies Written by LLM-Assisted Crowds: Detecting logical fallacies in texts can help users spot argument flaws, but automating this detection is not easy. Manually annotating fallacies in large-scale, real-world text data to create datasets for developing and validating detection models is costly. This paper introduces CoCoLoFa, the largest known logical fallacy dataset, containing 7,706 comments for 648 news articles, with each comment labeled for fallacy presence and type. We recruited 143 crowd workers to write comments embodying specific fallacy types (e.g., slippery slope) in response to news articles. Recognizing the complexity of this writing task, we built an LLM-powered assistant into the workers' interface to aid in drafting and refining their comments. Experts rated the writing quality and labeling validity of CoCoLoFa as high and reliable. BERT-based models fine-tuned using CoCoLoFa achieved the highest fallacy detection (F1=0.86) and classification (F1=0.87) performance on its test set, outperforming the state-of-the-art LLMs. Our work shows that combining crowdsourcing and LLMs enables us to more effectively construct datasets for complex linguistic phenomena that crowd workers find challenging to produce on their own.<|reference_end|>
arxiv
@article{yeh2024cocolofa:, title={CoCoLoFa: A Dataset of News Comments with Common Logical Fallacies Written by LLM-Assisted Crowds}, author={Min-Hsuan Yeh, Ruyuan Wan, and Ting-Hao 'Kenneth' Huang}, journal={arXiv preprint arXiv:2410.03457}, year={2024}, archivePrefix={arXiv}, eprint={2410.03457}, primaryClass={cs.CL} }
yeh2024cocolofa:
arxiv-665629
2410.03458
Multi-Dialect Vietnamese: Task, Dataset, Baseline Models and Challenges
<|reference_start|>Multi-Dialect Vietnamese: Task, Dataset, Baseline Models and Challenges: Vietnamese, a low-resource language, is typically categorized into three primary dialect groups that belong to Northern, Central, and Southern Vietnam. However, each province within these regions exhibits its own distinct pronunciation variations. Despite the existence of various speech recognition datasets, none of them has provided a fine-grained classification of the 63 dialects specific to individual provinces of Vietnam. To address this gap, we introduce Vietnamese Multi-Dialect (ViMD) dataset, a novel comprehensive dataset capturing the rich diversity of 63 provincial dialects spoken across Vietnam. Our dataset comprises 102.56 hours of audio, consisting of approximately 19,000 utterances, and the associated transcripts contain over 1.2 million words. To provide benchmarks and simultaneously demonstrate the challenges of our dataset, we fine-tune state-of-the-art pre-trained models for two downstream tasks: (1) Dialect identification and (2) Speech recognition. The empirical results suggest two implications including the influence of geographical factors on dialects, and the constraints of current approaches in speech recognition tasks involving multi-dialect speech data. Our dataset is available for research purposes.<|reference_end|>
arxiv
@article{vandinh2024multi-dialect, title={Multi-Dialect Vietnamese: Task, Dataset, Baseline Models and Challenges}, author={Nguyen Van Dinh, Thanh Chi Dang, Luan Thanh Nguyen, Kiet Van Nguyen}, journal={arXiv preprint arXiv:2410.03458}, year={2024}, archivePrefix={arXiv}, eprint={2410.03458}, primaryClass={cs.CL} }
vandinh2024multi-dialect
arxiv-665630
2410.03459
Generative Semantic Communication for Text-to-Speech Synthesis
<|reference_start|>Generative Semantic Communication for Text-to-Speech Synthesis: Semantic communication is a promising technology to improve communication efficiency by transmitting only the semantic information of the source data. However, traditional semantic communication methods primarily focus on data reconstruction tasks, which may not be efficient for emerging generative tasks such as text-to-speech (TTS) synthesis. To address this limitation, this paper develops a novel generative semantic communication framework for TTS synthesis, leveraging generative artificial intelligence technologies. Firstly, we utilize a pre-trained large speech model called WavLM and the residual vector quantization method to construct two semantic knowledge bases (KBs) at the transmitter and receiver, respectively. The KB at the transmitter enables effective semantic extraction, while the KB at the receiver facilitates lifelike speech synthesis. Then, we employ a transformer encoder and a diffusion model to achieve efficient semantic coding without introducing significant communication overhead. Finally, numerical results demonstrate that our framework achieves much higher fidelity for the generated speech than four baselines, in both cases with additive white Gaussian noise channel and Rayleigh fading channel.<|reference_end|>
arxiv
@article{zheng2024generative, title={Generative Semantic Communication for Text-to-Speech Synthesis}, author={Jiahao Zheng, Jinke Ren, Peng Xu, Zhihao Yuan, Jie Xu, Fangxin Wang, Gui Gui, Shuguang Cui}, journal={arXiv preprint arXiv:2410.03459}, year={2024}, archivePrefix={arXiv}, eprint={2410.03459}, primaryClass={cs.SD cs.IT cs.LG eess.AS math.IT} }
zheng2024generative
arxiv-665631
2410.03461
Auto-GDA: Automatic Domain Adaptation for Efficient Grounding Verification in Retrieval Augmented Generation
<|reference_start|>Auto-GDA: Automatic Domain Adaptation for Efficient Grounding Verification in Retrieval Augmented Generation: While retrieval augmented generation (RAG) has been shown to enhance factuality of large language model (LLM) outputs, LLMs still suffer from hallucination, generating incorrect or irrelevant information. One common detection strategy involves prompting the LLM again to assess whether its response is grounded in the retrieved evidence, but this approach is costly. Alternatively, lightweight natural language inference (NLI) models for efficient grounding verification can be used at inference time. While existing pre-trained NLI models offer potential solutions, their performance remains subpar compared to larger models on realistic RAG inputs. RAG inputs are more complex than most datasets used for training NLI models and have characteristics specific to the underlying knowledge base, requiring adaptation of the NLI models to a specific target domain. Additionally, the lack of labeled instances in the target domain makes supervised domain adaptation, e.g., through fine-tuning, infeasible. To address these challenges, we introduce Automatic Generative Domain Adaptation (Auto-GDA). Our framework enables unsupervised domain adaptation through synthetic data generation. Unlike previous methods that rely on handcrafted filtering and augmentation strategies, Auto-GDA employs an iterative process to continuously improve the quality of generated samples using weak labels from less efficient teacher models and discrete optimization to select the most promising augmented samples. Experimental results demonstrate the effectiveness of our approach, with models fine-tuned on synthetic data using Auto-GDA often surpassing the performance of the teacher model and reaching the performance level of LLMs at 10 % of their computational cost.<|reference_end|>
arxiv
@article{leemann2024auto-gda:, title={Auto-GDA: Automatic Domain Adaptation for Efficient Grounding Verification in Retrieval Augmented Generation}, author={Tobias Leemann, Periklis Petridis, Giuseppe Vietri, Dionysis Manousakas, Aaron Roth, Sergul Aydore}, journal={arXiv preprint arXiv:2410.03461}, year={2024}, archivePrefix={arXiv}, eprint={2410.03461}, primaryClass={cs.CL cs.LG} }
leemann2024auto-gda:
arxiv-665632
2410.03462
Linear Transformer Topological Masking with Graph Random Features
<|reference_start|>Linear Transformer Topological Masking with Graph Random Features: When training transformers on graph-structured data, incorporating information about the underlying topology is crucial for good performance. Topological masking, a type of relative position encoding, achieves this by upweighting or downweighting attention depending on the relationship between the query and keys in a graph. In this paper, we propose to parameterise topological masks as a learnable function of a weighted adjacency matrix -- a novel, flexible approach which incorporates a strong structural inductive bias. By approximating this mask with graph random features (for which we prove the first known concentration bounds), we show how this can be made fully compatible with linear attention, preserving $\mathcal{O}(N)$ time and space complexity with respect to the number of input tokens. The fastest previous alternative was $\mathcal{O}(N \log N)$ and only suitable for specific graphs. Our efficient masking algorithms provide strong performance gains for tasks on image and point cloud data, including with $>30$k nodes.<|reference_end|>
arxiv
@article{reid2024linear, title={Linear Transformer Topological Masking with Graph Random Features}, author={Isaac Reid, Kumar Avinava Dubey, Deepali Jain, Will Whitney, Amr Ahmed, Joshua Ainslie, Alex Bewley, Mithun Jacob, Aranyak Mehta, David Rendleman, Connor Schenck, Richard E. Turner, Ren\'e Wagner, Adrian Weller, Krzysztof Choromanski}, journal={arXiv preprint arXiv:2410.03462}, year={2024}, archivePrefix={arXiv}, eprint={2410.03462}, primaryClass={cs.LG stat.ML} }
reid2024linear
arxiv-665633
2410.03463
Diffusion State-Guided Projected Gradient for Inverse Problems
<|reference_start|>Diffusion State-Guided Projected Gradient for Inverse Problems: Recent advancements in diffusion models have been effective in learning data priors for solving inverse problems. They leverage diffusion sampling steps for inducing a data prior while using a measurement guidance gradient at each step to impose data consistency. For general inverse problems, approximations are needed when an unconditionally trained diffusion model is used since the measurement likelihood is intractable, leading to inaccurate posterior sampling. In other words, due to their approximations, these methods fail to preserve the generation process on the data manifold defined by the diffusion prior, leading to artifacts in applications such as image restoration. To enhance the performance and robustness of diffusion models in solving inverse problems, we propose Diffusion State-Guided Projected Gradient (DiffStateGrad), which projects the measurement gradient onto a subspace that is a low-rank approximation of an intermediate state of the diffusion process. DiffStateGrad, as a module, can be added to a wide range of diffusion-based inverse solvers to improve the preservation of the diffusion process on the prior manifold and filter out artifact-inducing components. We highlight that DiffStateGrad improves the robustness of diffusion models in terms of the choice of measurement guidance step size and noise while improving the worst-case performance. Finally, we demonstrate that DiffStateGrad improves upon the state-of-the-art on linear and nonlinear image restoration inverse problems.<|reference_end|>
arxiv
@article{zirvi2024diffusion, title={Diffusion State-Guided Projected Gradient for Inverse Problems}, author={Rayhan Zirvi, Bahareh Tolooshams, Anima Anandkumar}, journal={arXiv preprint arXiv:2410.03463}, year={2024}, archivePrefix={arXiv}, eprint={2410.03463}, primaryClass={cs.LG cs.AI cs.CV} }
zirvi2024diffusion
arxiv-665634
2410.03464
S7: Selective and Simplified State Space Layers for Sequence Modeling
<|reference_start|>S7: Selective and Simplified State Space Layers for Sequence Modeling: A central challenge in sequence modeling is efficiently handling tasks with extended contexts. While recent state-space models (SSMs) have made significant progress in this area, they often lack input-dependent filtering or require substantial increases in model complexity to handle input variability. We address this gap by introducing S7, a simplified yet powerful SSM that can handle input dependence while incorporating stable reparameterization and specific design choices to dynamically adjust state transitions based on input content, maintaining efficiency and performance. We prove that this reparameterization ensures stability in long-sequence modeling by keeping state transitions well-behaved over time. Additionally, it controls the gradient norm, enabling efficient training and preventing issues like exploding or vanishing gradients. S7 significantly outperforms baselines across various sequence modeling tasks, including neuromorphic event-based datasets, Long Range Arena benchmarks, and various physical and biological time series. Overall, S7 offers a more straightforward approach to sequence modeling without relying on complex, domain-specific inductive biases, achieving significant improvements across key benchmarks.<|reference_end|>
arxiv
@article{soydan2024s7:, title={S7: Selective and Simplified State Space Layers for Sequence Modeling}, author={Taylan Soydan, Nikola Zubi\'c, Nico Messikommer, Siddhartha Mishra, Davide Scaramuzza}, journal={arXiv preprint arXiv:2410.03464}, year={2024}, archivePrefix={arXiv}, eprint={2410.03464}, primaryClass={cs.LG eess.SP math.DS} }
soydan2024s7:
arxiv-665635
2410.03465
Formalizing MLTL Formula Progression in Isabelle/HOL
<|reference_start|>Formalizing MLTL Formula Progression in Isabelle/HOL: Mission-time Linear Temporal Logic (MLTL) is rapidly increasing in popularity as a specification logic, e.g., for runtime verification, model checking, and other formal methods, driving a need for a larger tool base for analysis of this logic. To that end, we formalize formula progression for MLTL in the theorem prover Isabelle/HOL. As groundwork, we first formalize the syntax and semantics for MLTL as well as a verified library of key properties, including useful custom induction rules. We envision this library as being useful for future formalizations involving MLTL and as serving as a reference point for theoretical work using or developing MLTL. We then formalize the algorithm and correctness theorems for formula progression, following the literature. Along the way, we identify and fix several errors and gaps in the source material. A main motivation for our work is tool validation; we ensure the executability of our algorithms by using Isabelle's built-in functionality to generate a code export. This enables both a formal basis for correctly evaluating MLTL formulas and for automatically generating provably correct benchmarks for evaluating tools that reason about MLTL.<|reference_end|>
arxiv
@article{kosaian2024formalizing, title={Formalizing MLTL Formula Progression in Isabelle/HOL}, author={Katherine Kosaian, Zili Wang, Elizabeth Sloan, and Kristin Rozier}, journal={arXiv preprint arXiv:2410.03465}, year={2024}, archivePrefix={arXiv}, eprint={2410.03465}, primaryClass={cs.LO} }
kosaian2024formalizing
arxiv-665636
2410.03466
Is Safer Better? The Impact of Guardrails on the Argumentative Strength of LLMs in Hate Speech Countering
<|reference_start|>Is Safer Better? The Impact of Guardrails on the Argumentative Strength of LLMs in Hate Speech Countering: The potential effectiveness of counterspeech as a hate speech mitigation strategy is attracting increasing interest in the NLG research community, particularly towards the task of automatically producing it. However, automatically generated responses often lack the argumentative richness which characterises expert-produced counterspeech. In this work, we focus on two aspects of counterspeech generation to produce more cogent responses. First, by investigating the tension between helpfulness and harmlessness of LLMs, we test whether the presence of safety guardrails hinders the quality of the generations. Secondly, we assess whether attacking a specific component of the hate speech results in a more effective argumentative strategy to fight online hate. By conducting an extensive human and automatic evaluation, we show how the presence of safety guardrails can be detrimental also to a task that inherently aims at fostering positive social interactions. Moreover, our results show that attacking a specific component of the hate speech, and in particular its implicit negative stereotype and its hateful parts, leads to higher-quality generations.<|reference_end|>
arxiv
@article{bonaldi2024is, title={Is Safer Better? The Impact of Guardrails on the Argumentative Strength of LLMs in Hate Speech Countering}, author={Helena Bonaldi, Greta Damo, Nicol\'as Benjam\'in Ocampo, Elena Cabrio, Serena Villata, Marco Guerini}, journal={arXiv preprint arXiv:2410.03466}, year={2024}, archivePrefix={arXiv}, eprint={2410.03466}, primaryClass={cs.CL} }
bonaldi2024is
arxiv-665637
2410.03470
Vulnerability Detection via Topological Analysis of Attention Maps
<|reference_start|>Vulnerability Detection via Topological Analysis of Attention Maps: Recently, deep learning (DL) approaches to vulnerability detection have gained significant traction. These methods demonstrate promising results, often surpassing traditional static code analysis tools in effectiveness. In this study, we explore a novel approach to vulnerability detection utilizing the tools from topological data analysis (TDA) on the attention matrices of the BERT model. Our findings reveal that traditional machine learning (ML) techniques, when trained on the topological features extracted from these attention matrices, can perform competitively with pre-trained language models (LLMs) such as CodeBERTa. This suggests that TDA tools, including persistent homology, are capable of effectively capturing semantic information critical for identifying vulnerabilities.<|reference_end|>
arxiv
@article{snopov2024vulnerability, title={Vulnerability Detection via Topological Analysis of Attention Maps}, author={Pavel Snopov, Andrey Nikolaevich Golubinskiy}, journal={arXiv preprint arXiv:2410.03470}, year={2024}, archivePrefix={arXiv}, eprint={2410.03470}, primaryClass={cs.LG cs.AI math.AT} }
snopov2024vulnerability
arxiv-665638
2410.03472
Deep Reinforcement Learning for Delay-Optimized Task Offloading in Vehicular Fog Computing
<|reference_start|>Deep Reinforcement Learning for Delay-Optimized Task Offloading in Vehicular Fog Computing: The imminent rise of autonomous vehicles (AVs) is revolutionizing the future of transport. The Vehicular Fog Computing (VFC) paradigm has emerged to alleviate the load of compute-intensive and delay-sensitive AV programs via task offloading to nearby vehicles. Effective VFC requires an intelligent and dynamic offloading algorithm. As a result, this paper adapts Deep Reinforcement Learning (DRL) for VFC offloading. First, a simulation environment utilizing realistic hardware and task specifications, in addition to a novel vehicular movement model based on grid-planned cities, is created. Afterward, a DRL-based algorithm is trained and tested on the environment with the goal of minimizing global task delay. The DRL model displays impressive results, outperforming other greedy and conventional methods. The findings further demonstrate the effectiveness of the DRL model in minimizing queue congestion, especially when compared to traditional cloud computing methods that struggle to handle the demands of a large fleet of vehicles. This is corroborated by queuing theory, highlighting the self-scalability of the VFC-based DRL approach.<|reference_end|>
arxiv
@article{toopchinezhad2024deep, title={Deep Reinforcement Learning for Delay-Optimized Task Offloading in Vehicular Fog Computing}, author={Mohammad Parsa Toopchinezhad, and Mahmood Ahmadi}, journal={arXiv preprint arXiv:2410.03472}, year={2024}, archivePrefix={arXiv}, eprint={2410.03472}, primaryClass={cs.NI} }
toopchinezhad2024deep
arxiv-665639
2410.03474
Group Fairness in Peer Review
<|reference_start|>Group Fairness in Peer Review: Large conferences such as NeurIPS and AAAI serve as crossroads of various AI fields, since they attract submissions from a vast number of communities. However, in some cases, this has resulted in a poor reviewing experience for some communities, whose submissions get assigned to less qualified reviewers outside of their communities. An often-advocated solution is to break up any such large conference into smaller conferences, but this can lead to isolation of communities and harm interdisciplinary research. We tackle this challenge by introducing a notion of group fairness, called the core, which requires that every possible community (subset of researchers) be treated in a way that prevents them from unilaterally benefiting by withdrawing from a large conference. We study a simple peer review model, prove that it always admits a reviewing assignment in the core, and design an efficient algorithm to find one such assignment. We use real data from CVPR and ICLR conferences to compare our algorithm to existing reviewing assignment algorithms on a number of metrics.<|reference_end|>
arxiv
@article{aziz2024group, title={Group Fairness in Peer Review}, author={Haris Aziz, Evi Micha, Nisarg Shah}, journal={arXiv preprint arXiv:2410.03474}, year={2024}, archivePrefix={arXiv}, eprint={2410.03474}, primaryClass={cs.GT cs.AI cs.SI physics.soc-ph} }
aziz2024group
arxiv-665640
2410.03477
On the Hardness of Learning One Hidden Layer Neural Networks
<|reference_start|>On the Hardness of Learning One Hidden Layer Neural Networks: In this work, we consider the problem of learning one hidden layer ReLU neural networks with inputs from $\mathbb{R}^d$. We show that this learning problem is hard under standard cryptographic assumptions even when: (1) the size of the neural network is polynomial in $d$, (2) its input distribution is a standard Gaussian, and (3) the noise is Gaussian and polynomially small in $d$. Our hardness result is based on the hardness of the Continuous Learning with Errors (CLWE) problem, and in particular, is based on the largely believed worst-case hardness of approximately solving the shortest vector problem up to a multiplicative polynomial factor.<|reference_end|>
arxiv
@article{li2024on, title={On the Hardness of Learning One Hidden Layer Neural Networks}, author={Shuchen Li, Ilias Zadik, Manolis Zampetakis}, journal={arXiv preprint arXiv:2410.03477}, year={2024}, archivePrefix={arXiv}, eprint={2410.03477}, primaryClass={cs.LG cs.CC math.ST stat.ML stat.TH} }
li2024on
arxiv-665641
2410.03478
VEDIT: Latent Prediction Architecture For Procedural Video Representation Learning
<|reference_start|>VEDIT: Latent Prediction Architecture For Procedural Video Representation Learning: Procedural video representation learning is an active research area where the objective is to learn an agent which can anticipate and forecast the future given the present video input, typically in conjunction with textual annotations. Prior works often rely on large-scale pretraining of visual encoders and prediction models with language supervision. However, the necessity and effectiveness of extending compute intensive pretraining to learn video clip sequences with noisy text supervision have not yet been fully validated by previous works. In this work, we show that a strong off-the-shelf frozen pretrained visual encoder, along with a well designed prediction model, can achieve state-of-the-art (SoTA) performance in forecasting and procedural planning without the need for pretraining the prediction model, nor requiring additional supervision from language or ASR. Instead of learning representations from pixel space, our method utilizes the latent embedding space of publicly available vision encoders. By conditioning on frozen clip-level embeddings from observed steps to predict the actions of unseen steps, our prediction model is able to learn robust representations for forecasting through iterative denoising - leveraging the recent advances in diffusion transformers (Peebles & Xie, 2023). Empirical studies over a total of five procedural learning tasks across four datasets (NIV, CrossTask, COIN and Ego4D-v2) show that our model advances the strong baselines in long-horizon action anticipation (+2.6% in Verb ED@20, +3.1% in Noun ED@20), and significantly improves the SoTA in step forecasting (+5.0%), task classification (+3.8%), and procedure planning tasks (up to +2.28% in success rate, +3.39% in mAcc, and +0.90% in mIoU).<|reference_end|>
arxiv
@article{lin2024vedit:, title={VEDIT: Latent Prediction Architecture For Procedural Video Representation Learning}, author={Han Lin, Tushar Nagarajan, Nicolas Ballas, Mido Assran, Mojtaba Komeili, Mohit Bansal, Koustuv Sinha}, journal={arXiv preprint arXiv:2410.03478}, year={2024}, archivePrefix={arXiv}, eprint={2410.03478}, primaryClass={cs.CV cs.LG} }
lin2024vedit:
arxiv-665642
2410.03480
SeBS-Flow: Benchmarking Serverless Cloud Function Workflows
<|reference_start|>SeBS-Flow: Benchmarking Serverless Cloud Function Workflows: Serverless computing has emerged as a prominent paradigm, with a significant adoption rate among cloud customers. While this model offers advantages such as abstraction from the deployment and resource scheduling, it also poses limitations in handling complex use cases due to the restricted nature of individual functions. Serverless workflows address this limitation by orchestrating multiple functions into a cohesive application. However, existing serverless workflow platforms exhibit significant differences in their programming models and infrastructure, making fair and consistent performance evaluations difficult in practice. To address this gap, we propose the first serverless workflow benchmarking suite SeBS-Flow, providing a platform-agnostic workflow model that enables consistent benchmarking across various platforms. SeBS-Flow includes six real-world application benchmarks and four microbenchmarks representing different computational patterns. We conduct comprehensive evaluations on three major cloud platforms, assessing performance, cost, scalability, and runtime deviations. We make our benchmark suite open-source, enabling rigorous and comparable evaluations of serverless workflows over time.<|reference_end|>
arxiv
@article{schmid2024sebs-flow:, title={SeBS-Flow: Benchmarking Serverless Cloud Function Workflows}, author={Larissa Schmid, Marcin Copik, Alexandru Calotoiu, Laurin Brandner, Anne Koziolek, Torsten Hoefler}, journal={arXiv preprint arXiv:2410.03480}, year={2024}, archivePrefix={arXiv}, eprint={2410.03480}, primaryClass={cs.DC cs.SE} }
schmid2024sebs-flow:
arxiv-665643
2410.03481
A Compact, Low-cost Force and Torque Sensor for Robot Fingers with LED-based Displacement Sensing
<|reference_start|>A Compact, Low-cost Force and Torque Sensor for Robot Fingers with LED-based Displacement Sensing: Force/torque sensing is an important modality for robotic manipulation, but commodity solutions, generally developed with other applications in mind, do not generally fit the needs of robot hands. This paper introduces a novel method for six-axis force/torque sensing, using LEDs to sense the displacement between two plates connected by a transparent elastomer. Our method allows for finger-size packaging with no amplification electronics, low cost manufacturing, and easy integration into a complete hand. On test forces between 0-2 N, our prototype sensor exhibits a mean error between 0.05 and 0.07 N across the three force directions, suggesting future applicability to fine manipulation tasks.<|reference_end|>
arxiv
@article{el-azizi2024a, title={A Compact, Low-cost Force and Torque Sensor for Robot Fingers with LED-based Displacement Sensing}, author={Amr El-Azizi, Sharfin Islam, Pedro Piacenza, Ioannis Kymissis, Matei Ciocarlie}, journal={arXiv preprint arXiv:2410.03481}, year={2024}, archivePrefix={arXiv}, eprint={2410.03481}, primaryClass={cs.RO} }
el-azizi2024a
arxiv-665644
2410.03483
S2C2A: A Flexible Task Space Planning and Control Strategy for Modular Soft Robot Arms
<|reference_start|>S2C2A: A Flexible Task Space Planning and Control Strategy for Modular Soft Robot Arms: Modular soft robot arms (MSRAs) are composed of multiple independent modules connected in a sequence. Due to their modular structure and high degrees of freedom (DOFs), these modules can simultaneously bend at different angles in various directions, enabling complex deformation. This capability allows MSRAs to perform more intricate tasks than single module robots. However, the modular structure also induces challenges in accurate planning, modeling, and control. Nonlinearity, hysteresis, and gravity complicate the physical model, while the modular structure and increased DOFs further lead to accumulative errors along the sequence. To address these challenges, we propose a flexible task space planning and control strategy for MSRAs, named S2C2A (State to Configuration to Action). Our approach formulates an optimization problem, S2C (State to Configuration planning), which integrates various loss functions and a forward MSRA model to generate configuration trajectories based on target MSRA states. Given the model complexity, we leverage a biLSTM network as the forward model. Subsequently, a configuration controller C2A (Configuration to Action control) is implemented to follow the planned configuration trajectories, leveraging only inaccurate internal sensing feedback. Both a biLSTM network and a physical model are utilized for configuration control. We validated our strategy using a cable-driven MSRA, demonstrating its ability to perform diverse offline tasks such as position control, orientation control, and obstacle avoidance. Furthermore, our strategy endows MSRA with online interaction capability with targets and obstacles. Future work will focus on addressing MSRA challenges, such as developing more accurate physical models and reducing configuration estimation errors along the module sequence.<|reference_end|>
arxiv
@article{chen2024s2c2a:, title={S2C2A: A Flexible Task Space Planning and Control Strategy for Modular Soft Robot Arms}, author={Zixi Chen, Qinghua Guan, Josie Hughes, Arianna Menciassi, Cesare Stefanini}, journal={arXiv preprint arXiv:2410.03483}, year={2024}, archivePrefix={arXiv}, eprint={2410.03483}, primaryClass={cs.RO} }
chen2024s2c2a:
arxiv-665645
2410.03486
STREAMS: An Assistive Multimodal AI Framework for Empowering Biosignal Based Robotic Controls
<|reference_start|>STREAMS: An Assistive Multimodal AI Framework for Empowering Biosignal Based Robotic Controls: End-effector based assistive robots face persistent challenges in generating smooth and robust trajectories when controlled by humans' noisy and unreliable biosignals such as muscle activities and brainwaves. The produced endpoint trajectories are often jerky and too imprecise to perform complex tasks such as stable robotic grasping. We propose STREAMS (Self-Training Robotic End-to-end Adaptive Multimodal Shared autonomy), a novel framework that leverages deep reinforcement learning to tackle this challenge in biosignal based robotic control systems. STREAMS blends environmental information and synthetic user input into a Deep Q Learning Network (DQN) pipeline for an interactive end-to-end and self-training mechanism to produce smooth trajectories for the control of end-effector based robots. The proposed framework achieved a high-performance record of 98% in simulation with dynamic target estimation and acquisition without any pre-existing datasets. In a zero-shot sim-to-real user study with five participants controlling a physical robotic arm with noisy head movements, STREAMS (as an assistive mode) demonstrated significant improvements in trajectory stabilization, user satisfaction, and task performance, reported as a success rate of 83% compared to manual mode, which was 44% without any task support. STREAMS seeks to improve biosignal based assistive robotic controls by offering an interactive, end-to-end solution that stabilizes end-effector trajectories, enhancing task performance and accuracy.<|reference_end|>
arxiv
@article{rabiee2024streams:, title={STREAMS: An Assistive Multimodal AI Framework for Empowering Biosignal Based Robotic Controls}, author={Ali Rabiee, Sima Ghafoori, Xiangyu Bai, Sarah Ostadabbas, Reza Abiri}, journal={arXiv preprint arXiv:2410.03486}, year={2024}, archivePrefix={arXiv}, eprint={2410.03486}, primaryClass={cs.RO} }
rabiee2024streams:
arxiv-665646
2410.03487
A Multimodal Framework for Deepfake Detection
<|reference_start|>A Multimodal Framework for Deepfake Detection: The rapid advancement of deepfake technology poses a significant threat to digital media integrity. Deepfakes, synthetic media created using AI, can convincingly alter videos and audio to misrepresent reality. This creates risks of misinformation, fraud, and severe implications for personal privacy and security. Our research addresses the critical issue of deepfakes through an innovative multimodal approach, targeting both visual and auditory elements. This comprehensive strategy recognizes that human perception integrates multiple sensory inputs, particularly visual and auditory information, to form a complete understanding of media content. For visual analysis, a model that employs advanced feature extraction techniques was developed, extracting nine distinct facial characteristics and then applying various machine learning and deep learning models. For auditory analysis, our model leverages mel-spectrogram analysis for feature extraction and then applies various machine learning and deep learning models. To achieve a combined analysis, real and deepfake audio in the original dataset were swapped for testing purposes and ensured balanced samples. Using our proposed models for video and audio classification, i.e., Artificial Neural Network and VGG19, the overall sample is classified as deepfake if either component is identified as such. Our multimodal framework combines visual and auditory analyses, yielding an accuracy of 94%.<|reference_end|>
arxiv
@article{gandhi2024a, title={A Multimodal Framework for Deepfake Detection}, author={Kashish Gandhi, Prutha Kulkarni, Taran Shah, Piyush Chaudhari, Meera Narvekar, Kranti Ghag}, journal={arXiv preprint arXiv:2410.03487}, year={2024}, doi={10.53555/jes.v20i10s.6126}, archivePrefix={arXiv}, eprint={2410.03487}, primaryClass={cs.CV cs.AI cs.LG cs.LO} }
gandhi2024a
arxiv-665647
2410.03488
MO-DDN: A Coarse-to-Fine Attribute-based Exploration Agent for Multi-object Demand-driven Navigation
<|reference_start|>MO-DDN: A Coarse-to-Fine Attribute-based Exploration Agent for Multi-object Demand-driven Navigation: The process of satisfying daily demands is a fundamental aspect of humans' daily lives. With the advancement of embodied AI, robots are increasingly capable of satisfying human demands. Demand-driven navigation (DDN) is a task in which an agent must locate an object to satisfy a specified demand instruction, such as ``I am thirsty.'' The previous study typically assumes that each demand instruction requires only one object to be fulfilled and does not consider individual preferences. However, the realistic human demand may involve multiple objects. In this paper, we introduce the Multi-object Demand-driven Navigation (MO-DDN) benchmark, which addresses these nuanced aspects, including multi-object search and personal preferences, thus making the MO-DDN task more reflective of real-life scenarios compared to DDN. Building upon previous work, we employ the concept of ``attribute'' to tackle this new task. However, instead of solely relying on attribute features in an end-to-end manner like DDN, we propose a modular method that involves constructing a coarse-to-fine attribute-based exploration agent (C2FAgent). Our experimental results illustrate that this coarse-to-fine exploration strategy capitalizes on the advantages of attributes at various decision-making levels, resulting in superior performance compared to baseline methods. Code and video can be found at https://sites.google.com/view/moddn.<|reference_end|>
arxiv
@article{wang2024mo-ddn:, title={MO-DDN: A Coarse-to-Fine Attribute-based Exploration Agent for Multi-object Demand-driven Navigation}, author={Hongcheng Wang, Peiqi Liu, Wenzhe Cai, Mingdong Wu, Zhengyu Qian, Hao Dong}, journal={arXiv preprint arXiv:2410.03488}, year={2024}, archivePrefix={arXiv}, eprint={2410.03488}, primaryClass={cs.RO} }
wang2024mo-ddn:
arxiv-665648
2410.03489
Gradient-based Jailbreak Images for Multimodal Fusion Models
<|reference_start|>Gradient-based Jailbreak Images for Multimodal Fusion Models: Augmenting language models with image inputs may enable more effective jailbreak attacks through continuous optimization, unlike text inputs that require discrete optimization. However, new multimodal fusion models tokenize all input modalities using non-differentiable functions, which hinders straightforward attacks. In this work, we introduce the notion of a tokenizer shortcut that approximates tokenization with a continuous function and enables continuous optimization. We use tokenizer shortcuts to create the first end-to-end gradient image attacks against multimodal fusion models. We evaluate our attacks on Chameleon models and obtain jailbreak images that elicit harmful information for 72.5% of prompts. Jailbreak images outperform text jailbreaks optimized with the same objective and require 3x lower compute budget to optimize 50x more input tokens. Finally, we find that representation engineering defenses, like Circuit Breakers, trained only on text attacks can effectively transfer to adversarial image inputs.<|reference_end|>
arxiv
@article{rando2024gradient-based, title={Gradient-based Jailbreak Images for Multimodal Fusion Models}, author={Javier Rando, Hannah Korevaar, Erik Brinkman, Ivan Evtimov, Florian Tram\`er}, journal={arXiv preprint arXiv:2410.03489}, year={2024}, archivePrefix={arXiv}, eprint={2410.03489}, primaryClass={cs.CR cs.AI} }
rando2024gradient-based
arxiv-665649
2410.03490
Applying the FAIR Principles to Computational Workflows
<|reference_start|>Applying the FAIR Principles to Computational Workflows: Recent trends within computational and data sciences show an increasing recognition and adoption of computational workflows as tools for productivity, reproducibility, and democratized access to platforms and processing know-how. As digital objects to be shared, discovered, and reused, computational workflows benefit from the FAIR principles, which stand for Findable, Accessible, Interoperable, and Reusable. The Workflows Community Initiative's FAIR Workflows Working Group (WCI-FW), a global and open community of researchers and developers working with computational workflows across disciplines and domains, has systematically addressed the application of both FAIR data and software principles to computational workflows. We present our recommendations with commentary that reflects our discussions and justifies our choices and adaptations. Like the software and data principles on which they are based, these are offered to workflow users and authors, workflow management system developers, and providers of workflow services as guide rails for adoption and fodder for discussion. Workflows are becoming more prevalent as documented, automated instruments for data analysis, data collection, AI-based predictions, and simulations. The FAIR recommendations for workflows that we propose in this paper will maximize their value as research assets and facilitate their adoption by the wider community.<|reference_end|>
arxiv
@article{wilkinson2024applying, title={Applying the FAIR Principles to Computational Workflows}, author={Sean R. Wilkinson and Meznah Aloqalaa and Khalid Belhajjame and Michael R. Crusoe and Bruno de Paula Kinoshita and Luiz Gadelha and Daniel Garijo and Ove Johan Ragnar Gustafsson and Nick Juty and Sehrish Kanwal and Farah Zaib Khan and Johannes K\"oster and Karsten Peters-von Gehlen and Line Pouchard and Randy K. Rannow and Stian Soiland-Reyes and Nicola Soranzo and Shoaib Sufi and Ziheng Sun and Baiba Vilne and Merridee A. Wouters and Denis Yuen and Carole Goble}, journal={arXiv preprint arXiv:2410.03490}, year={2024}, archivePrefix={arXiv}, eprint={2410.03490}, primaryClass={cs.DL cs.SE} }
wilkinson2024applying
arxiv-665650
2410.03492
Towards Reproducible LLM Evaluation: Quantifying Uncertainty in LLM Benchmark Scores
<|reference_start|>Towards Reproducible LLM Evaluation: Quantifying Uncertainty in LLM Benchmark Scores: Large language models (LLMs) are stochastic, and not all models give deterministic answers, even when setting temperature to zero with a fixed random seed. However, few benchmark studies attempt to quantify uncertainty, partly due to the time and cost of repeated experiments. We use benchmarks designed for testing LLMs' capacity to reason about cardinal directions to explore the impact of experimental repeats on mean score and prediction interval. We suggest a simple method for cost-effectively quantifying the uncertainty of a benchmark score and make recommendations concerning reproducible LLM evaluation.<|reference_end|>
arxiv
@article{blackwell2024towards, title={Towards Reproducible LLM Evaluation: Quantifying Uncertainty in LLM Benchmark Scores}, author={Robert E. Blackwell, Jon Barry and Anthony G. Cohn}, journal={arXiv preprint arXiv:2410.03492}, year={2024}, archivePrefix={arXiv}, eprint={2410.03492}, primaryClass={cs.CL} }
blackwell2024towards
arxiv-665651
2410.03494
Generative Artificial Intelligence for Navigating Synthesizable Chemical Space
<|reference_start|>Generative Artificial Intelligence for Navigating Synthesizable Chemical Space: We introduce SynFormer, a generative modeling framework designed to efficiently explore and navigate synthesizable chemical space. Unlike traditional molecular generation approaches, we generate synthetic pathways for molecules to ensure that designs are synthetically tractable. By incorporating a scalable transformer architecture and a diffusion module for building block selection, SynFormer surpasses existing models in synthesizable molecular design. We demonstrate SynFormer's effectiveness in two key applications: (1) local chemical space exploration, where the model generates synthesizable analogs of a reference molecule, and (2) global chemical space exploration, where the model aims to identify optimal molecules according to a black-box property prediction oracle. Additionally, we demonstrate the scalability of our approach via the improvement in performance as more computational resources become available. With our code and trained models openly available, we hope that SynFormer will find use across applications in drug discovery and materials science.<|reference_end|>
arxiv
@article{gao2024generative, title={Generative Artificial Intelligence for Navigating Synthesizable Chemical Space}, author={Wenhao Gao, Shitong Luo, Connor W. Coley}, journal={arXiv preprint arXiv:2410.03494}, year={2024}, archivePrefix={arXiv}, eprint={2410.03494}, primaryClass={cs.LG cs.AI physics.chem-ph q-bio.BM} }
gao2024generative
arxiv-665652
2410.03496
Fourier PINNs: From Strong Boundary Conditions to Adaptive Fourier Bases
<|reference_start|>Fourier PINNs: From Strong Boundary Conditions to Adaptive Fourier Bases: Interest is rising in Physics-Informed Neural Networks (PINNs) as a mesh-free alternative to traditional numerical solvers for partial differential equations (PDEs). However, PINNs often struggle to learn high-frequency and multi-scale target solutions. To tackle this problem, we first study a strong Boundary Condition (BC) version of PINNs for Dirichlet BCs and observe a consistent decline in relative error compared to the standard PINNs. We then perform a theoretical analysis based on the Fourier transform and convolution theorem. We find that strong BC PINNs can better learn the amplitudes of high-frequency components of the target solutions. However, constructing the architecture for strong BC PINNs is difficult for many BCs and domain geometries. Enlightened by our theoretical analysis, we propose Fourier PINNs -- a simple, general, yet powerful method that augments PINNs with pre-specified, dense Fourier bases. Our proposed architecture likewise learns high-frequency components better but places no restrictions on the particular BCs or problem domains. We develop an adaptive learning and basis selection algorithm via alternating neural net basis optimization, Fourier and neural net basis coefficient estimation, and coefficient truncation. This scheme can flexibly identify the significant frequencies while weakening the nominal frequencies to better capture the target solution's power spectrum. We show the advantage of our approach through a set of systematic experiments.<|reference_end|>
arxiv
@article{cooley2024fourier, title={Fourier PINNs: From Strong Boundary Conditions to Adaptive Fourier Bases}, author={Madison Cooley, Varun Shankar, Robert M. Kirby, Shandian Zhe}, journal={arXiv preprint arXiv:2410.03496}, year={2024}, archivePrefix={arXiv}, eprint={2410.03496}, primaryClass={cs.LG} }
cooley2024fourier
arxiv-665653
2410.03497
Collaborative and Efficient Personalization with Mixtures of Adaptors
<|reference_start|>Collaborative and Efficient Personalization with Mixtures of Adaptors: Non-iid data is prevalent in real-world federated learning problems. Data heterogeneity can come in different types in terms of distribution shifts. In this work, we are interested in the heterogeneity that comes from concept shifts, i.e., shifts in the prediction across clients. In particular, we consider multi-task learning, where we want the model to adapt to the task of the client. We propose a parameter-efficient framework to tackle this issue, where each client learns to mix between parameter-efficient adaptors according to its task. We use Low-Rank Adaptors (LoRAs) as the backbone and extend its concept to other types of layers. We call our framework Federated Low-Rank Adaptive Learning (FLoRAL). This framework is not an algorithm but rather a model parameterization for a multi-task learning objective, so it can work on top of any algorithm that optimizes this objective, which includes many algorithms from the literature. FLoRAL is memory-efficient, and clients are personalized with small states (e.g., one number per adaptor) as the adaptors themselves are federated. Hence, personalization is--in this sense--federated as well. Even though clients can personalize more freely by training an adaptor locally, we show that collaborative and efficient training of adaptors is possible and performs better. We also show that FLoRAL can outperform an ensemble of full models with optimal cluster assignment, which demonstrates the benefits of federated personalization and the robustness of FLoRAL to overfitting. We show promising experimental results on synthetic datasets, real-world federated multi-task problems such as MNIST, CIFAR-10, and CIFAR-100. We also provide a theoretical analysis of local SGD on a relaxed objective and discuss the effects of aggregation mismatch on convergence.<|reference_end|>
arxiv
@article{almansoori2024collaborative, title={Collaborative and Efficient Personalization with Mixtures of Adaptors}, author={Abdulla Jasem Almansoori, Samuel Horv\'ath, Martin Tak\'a\v{c}}, journal={arXiv preprint arXiv:2410.03497}, year={2024}, archivePrefix={arXiv}, eprint={2410.03497}, primaryClass={cs.LG} }
almansoori2024collaborative
arxiv-665654
2410.03499
FedStein: Enhancing Multi-Domain Federated Learning Through James-Stein Estimator
<|reference_start|>FedStein: Enhancing Multi-Domain Federated Learning Through James-Stein Estimator: Federated Learning (FL) facilitates data privacy by enabling collaborative in-situ training across decentralized clients. Despite its inherent advantages, FL faces significant challenges of performance and convergence when dealing with data that is not independently and identically distributed (non-i.i.d.). While previous research has primarily addressed the issue of skewed label distribution across clients, this study focuses on the less explored challenge of multi-domain FL, where client data originates from distinct domains with varying feature distributions. We introduce a novel method designed to address these challenges, FedStein: Enhancing Multi-Domain Federated Learning Through the James-Stein Estimator. FedStein uniquely shares only the James-Stein (JS) estimates of batch normalization (BN) statistics across clients, while maintaining local BN parameters. The non-BN layer parameters are exchanged via standard FL techniques. Extensive experiments conducted across three datasets and multiple models demonstrate that FedStein surpasses existing methods such as FedAvg and FedBN, with accuracy improvements exceeding 14% in certain domains, leading to enhanced domain generalization. The code is available at https://github.com/sunnyinAI/FedStein<|reference_end|>
arxiv
@article{gupta2024fedstein:, title={FedStein: Enhancing Multi-Domain Federated Learning Through James-Stein Estimator}, author={Sunny Gupta, Nikita Jangid, Amit Sethi}, journal={arXiv preprint arXiv:2410.03499}, year={2024}, archivePrefix={arXiv}, eprint={2410.03499}, primaryClass={cs.LG cs.AI cs.CV cs.DC} }
gupta2024fedstein:
arxiv-665655
2410.03501
CSI Acquisition in Cell-Free Massive MIMO Surveillance Systems
<|reference_start|>CSI Acquisition in Cell-Free Massive MIMO Surveillance Systems: We consider a cell-free massive multiple-input multiple-output (CF-mMIMO) surveillance system, in which multiple multi-antenna monitoring nodes (MNs) are deployed in either observing or jamming mode to disrupt the communication between a multi-antenna untrusted pair. We propose a simple and effective channel state information (CSI) acquisition scheme at the MNs. Specifically, our approach leverages pilot signals in both the uplink and downlink phases of the untrusted link, coupled with minimum mean-squared error (MMSE) estimation. This enables the MNs to accurately estimate the effective channels to both the untrusted transmitter (UT) and untrusted receiver (UR), thereby yielding robust monitoring performance. We analyze the spectral efficiency (SE) performance of the untrusted links and of the monitoring system, taking into account the proposed CSI acquisition and successive MMSE cancellation schemes. The monitoring success probability (MSP) is then derived. Simulation results show that the CF-mMIMO surveillance system, relying on the proposed CSI acquisition scheme, can achieve monitoring performance close to that achieved by having perfect CSI knowledge of the untrusted link (theoretical upper bound), especially when the number of MNs is large.<|reference_end|>
arxiv
@article{dasilva2024csi, title={CSI Acquisition in Cell-Free Massive MIMO Surveillance Systems}, author={Isabella W. G. da Silva, Zahra Mobini, Hien Quoc Ngo, and Michail Matthaiou}, journal={arXiv preprint arXiv:2410.03501}, year={2024}, archivePrefix={arXiv}, eprint={2410.03501}, primaryClass={cs.IT eess.SP math.IT} }
dasilva2024csi
arxiv-665656
2410.03502
CliMedBench: A Large-Scale Chinese Benchmark for Evaluating Medical Large Language Models in Clinical Scenarios
<|reference_start|>CliMedBench: A Large-Scale Chinese Benchmark for Evaluating Medical Large Language Models in Clinical Scenarios: With the proliferation of Large Language Models (LLMs) in diverse domains, there is a particular need for unified evaluation standards in clinical medical scenarios, where models need to be examined very thoroughly. We present CliMedBench, a comprehensive benchmark with 14 expert-guided core clinical scenarios specifically designed to assess the medical ability of LLMs across 7 pivot dimensions. It comprises 33,735 questions derived from real-world medical reports of top-tier tertiary hospitals and authentic examination exercises. The reliability of this benchmark has been confirmed in several ways. Subsequent experiments with existing LLMs have led to the following findings: (i) Chinese medical LLMs underperform on this benchmark, especially where medical reasoning and factual consistency are vital, underscoring the need for advances in clinical knowledge and diagnostic accuracy. (ii) Several general-domain LLMs demonstrate substantial potential in medical clinics, while the limited input capacity of many medical LLMs hinders their practical use. These findings reveal both the strengths and limitations of LLMs in clinical scenarios and offer critical insights for medical research.<|reference_end|>
arxiv
@article{ouyang2024climedbench:, title={CliMedBench: A Large-Scale Chinese Benchmark for Evaluating Medical Large Language Models in Clinical Scenarios}, author={Zetian Ouyang, Yishuai Qiu, Linlin Wang, Gerard de Melo, Ya Zhang, Yanfeng Wang, Liang He}, journal={arXiv preprint arXiv:2410.03502}, year={2024}, archivePrefix={arXiv}, eprint={2410.03502}, primaryClass={cs.CL} }
ouyang2024climedbench:
arxiv-665657
2410.03503
Kernel Methods in the Deep Ritz framework: Theory and practice
<|reference_start|>Kernel Methods in the Deep Ritz framework: Theory and practice: In this contribution, kernel approximations are applied as ansatz functions within the Deep Ritz method. This allows the approximation of weak solutions of elliptic partial differential equations, with weak enforcement of boundary conditions using Nitsche's method. A priori error estimates are proven in different norms, leveraging both standard results for weak solutions of elliptic equations and well-established convergence results for kernel methods. This availability of a priori error estimates renders the method useful for practical purposes. The procedure is described in detail, along with practical hints and implementation details. By means of numerical examples, the performance of the proposed approach is evaluated, and the results agree with the theoretical findings.<|reference_end|>
arxiv
@article{kleikamp2024kernel, title={Kernel Methods in the Deep Ritz framework: Theory and practice}, author={Hendrik Kleikamp, Tizian Wenzel}, journal={arXiv preprint arXiv:2410.03503}, year={2024}, archivePrefix={arXiv}, eprint={2410.03503}, primaryClass={math.NA cs.NA} }
kleikamp2024kernel
arxiv-665658
2410.03504
Uncertainty-Aware Environment Simulation of Medical Devices Digital Twins
<|reference_start|>Uncertainty-Aware Environment Simulation of Medical Devices Digital Twins: Smart medical devices are an integral component of the healthcare Internet of Things (IoT), providing patients with various healthcare services through an IoT-based application. Ensuring the dependability of such applications through system and integration-level testing mandates the physical integration of numerous medical devices, which is costly and impractical. In this context, digital twins of medical devices play an essential role in facilitating testing automation. Testing with digital twins without accounting for uncertain environmental factors of medical devices leaves many functionalities of IoT-based healthcare applications untested. In addition, digital twins operating without environmental factors remain out of sync and uncalibrated with their corresponding devices functioning in the real environment. To deal with these challenges, in this paper, we propose a model-based approach (EnvDT) for modeling and simulating the environment of medical devices' digital twins under uncertainties. We empirically evaluate EnvDT using three medicine dispensers (Karie, Medido, and Pilly) connected to a real-world IoT-based healthcare application. Our evaluation analyzes the coverage of environment models and the diversity of uncertain scenarios generated for digital twins. Results show that EnvDT achieves approximately 61% coverage of environment models and generates diverse uncertain scenarios (with a near-maximum diversity value of 0.62) during multiple environmental simulations.<|reference_end|>
arxiv
@article{sartaj2024uncertainty-aware, title={Uncertainty-Aware Environment Simulation of Medical Devices Digital Twins}, author={Hassan Sartaj, Shaukat Ali, and Julie Marie Gj{\o}by}, journal={Software and Systems Modeling (2024)}, year={2024}, doi={10.1007/s10270-024-01223-8}, archivePrefix={arXiv}, eprint={2410.03504}, primaryClass={cs.SE} }
sartaj2024uncertainty-aware
arxiv-665659
2410.03505
Classification-Denoising Networks
<|reference_start|>Classification-Denoising Networks: Image classification and denoising suffer from complementary issues: a lack of robustness, and partial disregard of conditioning information. We argue that they can be alleviated by unifying both tasks through a model of the joint probability of (noisy) images and class labels. Classification is performed with a forward pass followed by conditioning. Using the Tweedie-Miyasawa formula, we evaluate the denoising function with the score, which can be computed by marginalization and back-propagation. The training objective is then a combination of cross-entropy loss and denoising score matching loss integrated over noise levels. Numerical experiments on CIFAR-10 and ImageNet show competitive classification and denoising performance compared to reference deep convolutional classifiers/denoisers, and significantly improved efficiency compared to previous joint approaches. Our model shows an increased robustness to adversarial perturbations compared to a standard discriminative classifier, and allows for a novel interpretation of adversarial gradients as a difference of denoisers.<|reference_end|>
arxiv
@article{thiry2024classification-denoising, title={Classification-Denoising Networks}, author={Louis Thiry and Florentin Guth}, journal={arXiv preprint arXiv:2410.03505}, year={2024}, archivePrefix={arXiv}, eprint={2410.03505}, primaryClass={cs.CV cs.LG} }
thiry2024classification-denoising
arxiv-665660
2410.03506
Unicast-Multicast Cell-Free Massive MIMO: Gradient-Based Resource Allocation
<|reference_start|>Unicast-Multicast Cell-Free Massive MIMO: Gradient-Based Resource Allocation: We consider a cell-free massive multiple-input multiple-output (CF-mMIMO) system with joint unicast and multi-group multicast transmissions. We derive exact closed-form expressions for the downlink achievable spectral efficiency (SE) of both unicast and multicast users. Based on these expressions, we formulate a joint optimization problem of access point (AP) selection and power control, subject to quality of service (QoS) requirements of all unicast and multicast users and a per-AP maximum transmit power constraint. The resulting challenging problem is transformed into a tractable form, and a novel accelerated projected gradient (APG)-based algorithm is developed to solve it. Simulation results show that our joint optimization strategy notably enhances the sum SE (SSE) (by up to 58%) compared to baseline schemes, while maintaining low complexity.<|reference_end|>
arxiv
@article{abbas2024unicast-multicast, title={Unicast-Multicast Cell-Free Massive MIMO: Gradient-Based Resource Allocation}, author={Mustafa S. Abbas, Zahra Mobini, Hien Quoc Ngo, and Michail Matthaiou}, journal={arXiv preprint arXiv:2410.03506}, year={2024}, archivePrefix={arXiv}, eprint={2410.03506}, primaryClass={cs.IT math.IT} }
abbas2024unicast-multicast
arxiv-665661
2410.03509
GAP-RL: Grasps As Points for RL Towards Dynamic Object Grasping
<|reference_start|>GAP-RL: Grasps As Points for RL Towards Dynamic Object Grasping: Dynamic grasping of moving objects in complex, continuous motion scenarios remains challenging. Reinforcement Learning (RL) has been applied in various robotic manipulation tasks, benefiting from its closed-loop property. However, existing RL-based methods do not fully explore the potential for enhancing visual representations. In this letter, we propose a novel framework called Grasps As Points for RL (GAP-RL) to effectively and reliably grasp moving objects. By implementing a fast region-based grasp detector, we build a Grasp Encoder by transforming 6D grasp poses into Gaussian points and extracting grasp features as a higher-level abstraction than the original object point features. Additionally, we develop a Graspable Region Explorer for real-world deployment, which searches for consistent graspable regions, enabling smoother grasp generation and stable policy execution. To assess the performance fairly, we construct a simulated dynamic grasping benchmark involving objects with various complex motions. Experimental results demonstrate that our method effectively generalizes to novel objects and unseen dynamic motions compared to other baselines. Real-world experiments further validate the framework's sim-to-real transferability.<|reference_end|>
arxiv
@article{xie2024gap-rl:, title={GAP-RL: Grasps As Points for RL Towards Dynamic Object Grasping}, author={Pengwei Xie, Siang Chen, Qianrun Chen, Wei Tang, Dingchang Hu, Yixiang Dai, Rui Chen, Guijin Wang}, journal={arXiv preprint arXiv:2410.03509}, year={2024}, archivePrefix={arXiv}, eprint={2410.03509}, primaryClass={cs.RO} }
xie2024gap-rl:
arxiv-665662
2410.03511
Authentication by Location Tracking in Underwater Acoustic Networks
<|reference_start|>Authentication by Location Tracking in Underwater Acoustic Networks: Physical layer message authentication in underwater acoustic networks (UWANs) leverages the characteristics of the underwater acoustic channel (UWAC) as a fingerprint of the transmitting device. However, as the device moves its UWAC changes, and the authentication mechanism must track such variations. In this paper, we propose a context-based authentication mechanism operating in two steps: first, we estimate the position of the underwater device, then we predict its future position based on the previously estimated ones. To check the authenticity of the transmission, we compare the estimated and the predicted position. The location is estimated using a convolutional neural network taking as input the sample covariance matrix of the estimated UWACs. The prediction uses either a Kalman filter or a recurrent neural network (RNN). The authentication check is performed on the squared error between the predicted and estimated positions. The solution based on the Kalman filter outperforms that built on the RNN when the device moves according to a correlated Gauss-Markov mobility model, which reproduces a typical underwater motion.<|reference_end|>
arxiv
@article{ventura2024authentication, title={Authentication by Location Tracking in Underwater Acoustic Networks}, author={Gianmaria Ventura, Francesco Ardizzon, Stefano Tomasin}, journal={arXiv preprint arXiv:2410.03511}, year={2024}, archivePrefix={arXiv}, eprint={2410.03511}, primaryClass={eess.SP cs.LG} }
ventura2024authentication
arxiv-665663
2410.03514
Stabilized Neural Prediction of Potential Outcomes in Continuous Time
<|reference_start|>Stabilized Neural Prediction of Potential Outcomes in Continuous Time: Patient trajectories from electronic health records are widely used to predict potential outcomes of treatments over time, which then allows to personalize care. Yet, existing neural methods for this purpose have a key limitation: while some adjust for time-varying confounding, these methods assume that the time series are recorded in discrete time. In other words, they are constrained to settings where measurements and treatments are conducted at fixed time steps, even though this is unrealistic in medical practice. In this work, we aim to predict potential outcomes in continuous time. The latter is of direct practical relevance because it allows for modeling patient trajectories where measurements and treatments take place at arbitrary, irregular timestamps. We thus propose a new method called stabilized continuous time inverse propensity network (SCIP-Net). For this, we further derive stabilized inverse propensity weights for robust prediction of the potential outcomes. To the best of our knowledge, our SCIP-Net is the first neural method that performs proper adjustments for time-varying confounding in continuous time.<|reference_end|>
arxiv
@article{hess2024stabilized, title={Stabilized Neural Prediction of Potential Outcomes in Continuous Time}, author={Konstantin Hess and Stefan Feuerriegel}, journal={arXiv preprint arXiv:2410.03514}, year={2024}, archivePrefix={arXiv}, eprint={2410.03514}, primaryClass={cs.LG} }
hess2024stabilized
arxiv-665664
2410.03517
Fine-Grained Expressive Power of Weisfeiler-Leman: A Homomorphism Counting Perspective
<|reference_start|>Fine-Grained Expressive Power of Weisfeiler-Leman: A Homomorphism Counting Perspective: The ability of graph neural networks (GNNs) to count homomorphisms has recently been proposed as a practical and fine-grained measure of their expressive power. Although several existing works have investigated the homomorphism counting power of certain GNN families, a simple and unified framework for analyzing the problem is absent. In this paper, we first propose \emph{generalized folklore Weisfeiler-Leman (GFWL)} algorithms as a flexible design basis for expressive GNNs, and then provide a theoretical framework to algorithmically determine the homomorphism counting power of an arbitrary class of GNN within the GFWL design space. As the considered design space is large enough to accommodate almost all known powerful GNNs, our result greatly extends all existing works, and may find its application in the automation of GNN model design.<|reference_end|>
arxiv
@article{zhou2024fine-grained, title={Fine-Grained Expressive Power of Weisfeiler-Leman: A Homomorphism Counting Perspective}, author={Junru Zhou, Muhan Zhang}, journal={arXiv preprint arXiv:2410.03517}, year={2024}, archivePrefix={arXiv}, eprint={2410.03517}, primaryClass={cs.LG cs.DM} }
zhou2024fine-grained
arxiv-665665
2410.03519
Improving Online Bagging for Complex Imbalanced Data Stream
<|reference_start|>Improving Online Bagging for Complex Imbalanced Data Stream: Learning classifiers from imbalanced and concept-drifting data streams is still a challenge. Most of the current proposals focus on taking into account changes in the global imbalance ratio only and ignore the local difficulty factors, such as the minority class decomposition into sub-concepts and the presence of unsafe types of examples (borderline or rare ones). As the above factors present in the stream may deteriorate the performance of popular online classifiers, we propose extensions of resampling online bagging, namely Neighbourhood Undersampling or Oversampling Online Bagging, to take better account of the presence of unsafe minority examples. Computational experiments with synthetic complex imbalanced data streams have shown their advantage over earlier variants of online bagging resampling ensembles.<|reference_end|>
arxiv
@article{przybyl2024improving, title={Improving Online Bagging for Complex Imbalanced Data Stream}, author={Bartosz Przybyl and Jerzy Stefanowski}, journal={arXiv preprint arXiv:2410.03519}, year={2024}, archivePrefix={arXiv}, eprint={2410.03519}, primaryClass={cs.LG} }
przybyl2024improving
arxiv-665666
2410.03521
LCMDC: Large-scale Chinese Medical Dialogue Corpora for Automatic Triage and Medical Consultation
<|reference_start|>LCMDC: Large-scale Chinese Medical Dialogue Corpora for Automatic Triage and Medical Consultation: The global COVID-19 pandemic underscored major deficiencies in traditional healthcare systems, hastening the advancement of online medical services, especially in medical triage and consultation. However, existing studies face two main challenges. First, the scarcity of large-scale, publicly available, domain-specific medical datasets due to privacy concerns, with current datasets being small and limited to a few diseases, which limits the effectiveness of triage methods based on Pre-trained Language Models (PLMs). Second, existing methods lack medical knowledge and struggle to accurately understand professional terms and expressions in patient-doctor consultations. To overcome these obstacles, we construct the Large-scale Chinese Medical Dialogue Corpora (LCMDC), comprising a Coarse-grained Triage dataset with 439,630 samples, a Fine-grained Diagnosis dataset with 199,600 samples, and a Medical Consultation dataset with 472,418 items, thereby addressing the data shortage in this field. Moreover, we further propose a novel triage system that combines BERT-based supervised learning with prompt learning, as well as a GPT-based medical consultation model using reinforcement learning. To enhance domain knowledge acquisition, we pre-train PLMs using our self-constructed background corpus. Experimental results on the LCMDC demonstrate the efficacy of our proposed systems.<|reference_end|>
arxiv
@article{wang2024lcmdc:, title={LCMDC: Large-scale Chinese Medical Dialogue Corpora for Automatic Triage and Medical Consultation}, author={Xinyuan Wang, Haozhou Li, Dingfang Zheng, Qinke Peng}, journal={arXiv preprint arXiv:2410.03521}, year={2024}, archivePrefix={arXiv}, eprint={2410.03521}, primaryClass={cs.CL cs.AI cs.LG} }
wang2024lcmdc:
arxiv-665667
2410.03522
HMT-Grasp: A Hybrid Mamba-Transformer Approach for Robot Grasping in Cluttered Environments
<|reference_start|>HMT-Grasp: A Hybrid Mamba-Transformer Approach for Robot Grasping in Cluttered Environments: Robot grasping, whether handling isolated objects, cluttered items, or stacked objects, plays a critical role in industrial and service applications. However, current visual grasp detection methods based on Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) struggle to adapt across various grasping scenarios due to the imbalance between local and global feature extraction. In this paper, we propose a novel hybrid Mamba-Transformer approach to address these challenges. Our method improves robotic visual grasping by effectively capturing both global and local information through the integration of Vision Mamba and parallel convolutional-transformer blocks. This hybrid architecture significantly improves adaptability, precision, and flexibility across various robotic tasks. To ensure a fair evaluation, we conducted extensive experiments on the Cornell, Jacquard, and OCID-Grasp datasets, ranging from simple to complex scenarios. Additionally, we performed both simulated and real-world robotic experiments. The results demonstrate that our method not only surpasses state-of-the-art techniques on standard grasping datasets but also delivers strong performance in both simulation and real-world robot applications.<|reference_end|>
arxiv
@article{xiong2024hmt-grasp:, title={HMT-Grasp: A Hybrid Mamba-Transformer Approach for Robot Grasping in Cluttered Environments}, author={Songsong Xiong, Hamidreza Kasaei}, journal={arXiv preprint arXiv:2410.03522}, year={2024}, archivePrefix={arXiv}, eprint={2410.03522}, primaryClass={cs.RO} }
xiong2024hmt-grasp:
arxiv-665668
2410.03523
A Probabilistic Perspective on Unlearning and Alignment for Large Language Models
<|reference_start|>A Probabilistic Perspective on Unlearning and Alignment for Large Language Models: Comprehensive evaluation of Large Language Models (LLMs) is an open research problem. Existing evaluations rely on deterministic point estimates generated via greedy decoding. However, we find that deterministic evaluations fail to capture the whole output distribution of a model, yielding inaccurate estimations of model capabilities. This is particularly problematic in critical contexts such as unlearning and alignment, where precise model evaluations are crucial. To remedy this, we introduce the first formal probabilistic evaluation framework in LLMs. Namely, we derive novel metrics with high-probability guarantees concerning the output distribution of a model. Our metrics are application-independent and allow practitioners to make more reliable estimates about model capabilities before deployment. Through a case study focused on unlearning, we reveal that deterministic evaluations falsely indicate successful unlearning, whereas our probabilistic evaluations demonstrate that most if not all of the supposedly unlearned information remains accessible in these models. Additionally, we propose a novel unlearning loss based on entropy optimization and adaptive temperature scaling, which significantly improves unlearning in probabilistic settings on recent benchmarks. Our proposed shift from point estimates to probabilistic evaluations of output distributions represents an important step toward comprehensive evaluations of LLMs. https://github.com/yascho/probabilistic-unlearning<|reference_end|>
arxiv
@article{scholten2024a, title={A Probabilistic Perspective on Unlearning and Alignment for Large Language Models}, author={Yan Scholten, Stephan G\"unnemann, Leo Schwinn}, journal={arXiv preprint arXiv:2410.03523}, year={2024}, archivePrefix={arXiv}, eprint={2410.03523}, primaryClass={cs.LG cs.AI} }
scholten2024a
arxiv-665669
2410.03524
Steering Large Language Models between Code Execution and Textual Reasoning
<|reference_start|>Steering Large Language Models between Code Execution and Textual Reasoning: While a lot of recent research focuses on enhancing the textual reasoning capabilities of Large Language Models (LLMs) by optimizing the multi-agent framework or reasoning chains, several benchmark tasks can be solved with 100% success through direct coding, which is more scalable and avoids the computational overhead associated with textual iterating and searching. Textual reasoning has inherent limitations in solving tasks with challenges in math, logic, optimization, and search, which are unlikely to be overcome by simply scaling up the model and data size. The recently released OpenAI GPT Code Interpreter and multi-agent frameworks such as AutoGen have demonstrated remarkable proficiency in integrating code generation and execution to solve complex tasks using LLMs. However, based on our experiments on 7 existing popular methods for steering code/text generation in both single- and multi-turn settings with 14 tasks and 6 types of LLMs (including the new O1-preview), currently there is no optimal method to correctly steer LLMs to write code when needed. We discover interesting patterns in when models use code vs. textual reasoning as task complexity and model size evolve, which even reveal an astonishing inverse scaling law. We also discover that results from LLM-written code are not always better than using textual reasoning, even if the task could be solved through code. To mitigate the above issues, we propose three methods to better steer LLM code/text generation and achieve a notable improvement. The costs of token lengths and runtime are thoroughly discussed for all the methods. We believe the problem of steering LLM code/text generation is critical for future research and has much space for further improvement. Project Page, Datasets, and Codes are available at https://yongchao98.github.io/CodeSteer/.<|reference_end|>
arxiv
@article{chen2024steering, title={Steering Large Language Models between Code Execution and Textual Reasoning}, author={Yongchao Chen, Harsh Jhamtani, Srinagesh Sharma, Chuchu Fan, Chi Wang}, journal={arXiv preprint arXiv:2410.03524}, year={2024}, archivePrefix={arXiv}, eprint={2410.03524}, primaryClass={cs.CL} }
chen2024steering
arxiv-665670
2410.03525
Artificial Human Lecturers: Initial Findings From Asia's First AI Lecturers in Class to Promote Innovation in Education
<|reference_start|>Artificial Human Lecturers: Initial Findings From Asia's First AI Lecturers in Class to Promote Innovation in Education: In recent years, artificial intelligence (AI) has become increasingly integrated into education, reshaping traditional learning environments. Despite this, there has been limited investigation into fully operational artificial human lecturers. To the best of our knowledge, our paper presents the world's first study examining their deployment in a real-world educational setting. Specifically, we investigate the use of "digital teachers," AI-powered virtual lecturers, in a postgraduate course at the Hong Kong University of Science and Technology (HKUST). Our study explores how features such as appearance, non-verbal cues, voice, and verbal expression impact students' learning experiences. Findings suggest that students highly value naturalness, authenticity, and interactivity in digital teachers, highlighting areas for improvement, such as increased responsiveness, personalized avatars, and integration with larger learning platforms. We conclude that digital teachers have significant potential to enhance education by providing a more flexible, engaging, personalized, and accessible learning experience for students.<|reference_end|>
arxiv
@article{pang2024artificial, title={Artificial Human Lecturers: Initial Findings From Asia's First AI Lecturers in Class to Promote Innovation in Education}, author={Ching Christie Pang, Yawei Zhao, Zhizhuo Yin, Jia Sun, Reza Hadi Mogavi, Pan Hui}, journal={arXiv preprint arXiv:2410.03525}, year={2024}, archivePrefix={arXiv}, eprint={2410.03525}, primaryClass={cs.HC} }
pang2024artificial
arxiv-665671
2410.03528
HiL Demonstration of Online Battery Capacity and Impedance Estimation with Minimal a Priori Parametrization Effort
<|reference_start|>HiL Demonstration of Online Battery Capacity and Impedance Estimation with Minimal a Priori Parametrization Effort: Uncertainty in the aging of batteries in battery electric vehicles impacts both the daily driving range as well as the expected economic lifetime. This paper presents a method to determine online the capacity and internal resistance of a battery cell based on real-world data. The method, based on a Joint Extended Kalman Filter combined with Recursive Least Squares, is computationally efficient and does not a priori require a fully characterized cell model. Offline simulation of the algorithm on data from differently aged cells shows convergence of the algorithm and indicates that capacity and resistance follow the expected trends. Furthermore, the algorithm is tested online on a Hardware-in-the-Loop setup to demonstrate real-time parameter updates in a realistic driving scenario.<|reference_end|>
arxiv
@article{beckers2024hil, title={HiL Demonstration of Online Battery Capacity and Impedance Estimation with Minimal a Priori Parametrization Effort}, author={Camiel J.J. Beckers (1), Feye S.J. Hoekstra (1), Frank Willems (1 and 2) ((1) TNO - Powertrains Dept., (2) Eindhoven University of Technology - Dept. of Electrical Engineering)}, journal={arXiv preprint arXiv:2410.03528}, year={2024}, archivePrefix={arXiv}, eprint={2410.03528}, primaryClass={eess.SY cs.SY} }
beckers2024hil
arxiv-665672
2410.03529
No Need to Talk: Asynchronous Mixture of Language Models
<|reference_start|>No Need to Talk: Asynchronous Mixture of Language Models: We introduce SmallTalk LM, an innovative method for training a mixture of language models in an almost asynchronous manner. Each model of the mixture specializes in distinct parts of the data distribution, without the need for high-bandwidth communication between the nodes training each model. At inference, a lightweight router directs a given sequence to a single expert, according to a short prefix. This inference scheme naturally uses a fraction of the parameters from the overall mixture model. Our experiments on language modeling demonstrate that SmallTalk LM achieves significantly lower perplexity than dense model baselines for the same total training FLOPs and an almost identical inference cost. Finally, in our downstream evaluations we outperform the dense baseline on $75\%$ of the tasks.<|reference_end|>
arxiv
@article{filippova2024no, title={No Need to Talk: Asynchronous Mixture of Language Models}, author={Anastasiia Filippova, Angelos Katharopoulos, David Grangier, Ronan Collobert}, journal={arXiv preprint arXiv:2410.03529}, year={2024}, archivePrefix={arXiv}, eprint={2410.03529}, primaryClass={cs.LG cs.CL} }
filippova2024no
arxiv-665673
2410.03530
PRF: Parallel Resonate and Fire Neuron for Long Sequence Learning in Spiking Neural Networks
<|reference_start|>PRF: Parallel Resonate and Fire Neuron for Long Sequence Learning in Spiking Neural Networks: Recently, there has been growing demand for effective and efficient long sequence modeling, with State Space Models (SSMs) proving to be effective for long sequence tasks. To further reduce energy consumption, SSMs can be adapted to Spiking Neural Networks (SNNs) using spiking functions. However, current spiking-formalized SSM approaches still rely on floating-point matrix-vector multiplication during inference, undermining SNNs' energy advantage. In this work, we address the efficiency and performance challenges of long sequence learning in SNNs simultaneously. First, we propose a decoupled reset method for parallel spiking neuron training, reducing the typical Leaky Integrate-and-Fire (LIF) model's training time from $O(L^2)$ to $O(L\log L)$, effectively speeding up the training by $6.57 \times$ to $16.50 \times$ on sequence lengths $1,024$ to $32,768$. To the best of our knowledge, this is the first time that parallel computation with a reset mechanism has been implemented while achieving equivalence to its sequential counterpart. Second, to capture long-range dependencies, we propose a Parallel Resonate and Fire (PRF) neuron, which leverages an oscillating membrane potential driven by a resonate mechanism from a differentiable reset function in the complex domain. The PRF enables efficient long sequence learning while maintaining parallel training. Finally, we demonstrate that the proposed spike-driven architecture using PRF achieves performance comparable to Structured SSMs (S4), with two orders of magnitude reduction in energy consumption, outperforming the Transformer on Long Range Arena tasks.<|reference_end|>
arxiv
@article{huang2024prf:, title={PRF: Parallel Resonate and Fire Neuron for Long Sequence Learning in Spiking Neural Networks}, author={Yulong Huang, Zunchang Liu, Changchun Feng, Xiaopeng Lin, Hongwei Ren, Haotian Fu, Yue Zhou, Hong Xing, Bojun Cheng}, journal={arXiv preprint arXiv:2410.03530}, year={2024}, archivePrefix={arXiv}, eprint={2410.03530}, primaryClass={cs.NE} }
huang2024prf:
arxiv-665674
2410.03531
MARE: Multi-Aspect Rationale Extractor on Unsupervised Rationale Extraction
<|reference_start|>MARE: Multi-Aspect Rationale Extractor on Unsupervised Rationale Extraction: Unsupervised rationale extraction aims to extract text snippets to support model predictions without explicit rationale annotation. Researchers have made many efforts to solve this task. Previous works often encode each aspect independently, which may limit their ability to capture meaningful internal correlations between aspects. While there has been significant work on mitigating spurious correlations, our approach focuses on leveraging the beneficial internal correlations to improve multi-aspect rationale extraction. In this paper, we propose a Multi-Aspect Rationale Extractor (MARE) to explain and predict multiple aspects simultaneously. Concretely, we propose a Multi-Aspect Multi-Head Attention (MAMHA) mechanism based on hard deletion to encode multiple text chunks simultaneously. Furthermore, multiple special tokens are prepended to the text, each corresponding to one aspect. Finally, multi-task training is deployed to reduce the training overhead. Experimental results on two unsupervised rationale extraction benchmarks show that MARE achieves state-of-the-art performance. Ablation studies further demonstrate the effectiveness of our method. Our code is available at https://github.com/CSU-NLP-Group/MARE.<|reference_end|>
arxiv
@article{jiang2024mare:, title={MARE: Multi-Aspect Rationale Extractor on Unsupervised Rationale Extraction}, author={Han Jiang, Junwen Duan, Zhe Qu, Jianxin Wang}, journal={arXiv preprint arXiv:2410.03531}, year={2024}, archivePrefix={arXiv}, eprint={2410.03531}, primaryClass={cs.CL cs.AI} }
jiang2024mare:
arxiv-665675
2410.03532
Promoting the Culture of Qinhuai River Lantern Shadow Puppetry with a Digital Archive and Immersive Experience
<|reference_start|>Promoting the Culture of Qinhuai River Lantern Shadow Puppetry with a Digital Archive and Immersive Experience: As an intangible cultural heritage, Chinese shadow puppetry is facing challenges in terms of its appeal and comprehension, especially among audiences from different cultural backgrounds. Additionally, the fragile materials of the puppets and obstacles to preservation pose further challenges. This study creates a digital archive of the Qinhuai River Lantern Festival shadow puppetry, utilizing digital technology to recreate scenes depicted in traditional Chinese poetry and painting. Moreover, this study employs a mixed-method approach, combining qualitative and quantitative methods, to evaluate the acceptance and audience experience of immersive shadow puppetry. An in-depth exploration was conducted across sensory, emotional, and cultural dimensions, and research hypotheses were tested using structural equation modeling and other methods. The results indicate that enhancing ease of use and cultural experience can improve audience appeal and comprehension, while enhancing emotional experience can increase audience participation intention. Our research holds profound significance for the preservation and transmission of shadow puppetry.<|reference_end|>
arxiv
@article{liu2024promoting, title={Promoting the Culture of Qinhuai River Lantern Shadow Puppetry with a Digital Archive and Immersive Experience}, author={Yuanfang Liu, Rua Mae Williams, Guanghong Xie, Yu Wang, Wenrui Zuo}, journal={arXiv preprint arXiv:2410.03532}, year={2024}, archivePrefix={arXiv}, eprint={2410.03532}, primaryClass={cs.CY cs.HC} }
liu2024promoting
arxiv-665676
2410.03533
Multiscale fusion enhanced spiking neural network for invasive BCI neural signal decoding
<|reference_start|>Multiscale fusion enhanced spiking neural network for invasive BCI neural signal decoding: Brain-computer interfaces (BCIs) are an advanced fusion of neuroscience and artificial intelligence, requiring stable and long-term decoding of neural signals. Spiking Neural Networks (SNNs), with their neuronal dynamics and spike-based signal processing, are inherently well-suited for this task. This paper presents a novel approach utilizing a Multiscale Fusion enhanced Spiking Neural Network (MFSNN). The MFSNN emulates the parallel processing and multiscale feature fusion seen in human visual perception to enable real-time, efficient, and energy-conserving neural signal decoding. Initially, the MFSNN employs temporal convolutional networks and channel attention mechanisms to extract spatiotemporal features from raw data. It then enhances decoding performance by integrating these features through skip connections. Additionally, the MFSNN improves generalizability and robustness in cross-day signal decoding through mini-batch supervised generalization learning. In two benchmark invasive BCI paradigms, including the single-hand grasp-and-touch and center-and-out reach tasks, the MFSNN surpasses traditional artificial neural network methods, such as MLP and GRU, in both accuracy and computational efficiency. Moreover, the MFSNN's multiscale feature fusion framework is well-suited for implementation on neuromorphic chips, offering an energy-efficient solution for online decoding of invasive BCI signals.<|reference_end|>
arxiv
@article{song2024multiscale, title={Multiscale fusion enhanced spiking neural network for invasive BCI neural signal decoding}, author={Yu Song, Liyuan Han, Bo Xu, Tielin Zhang}, journal={arXiv preprint arXiv:2410.03533}, year={2024}, archivePrefix={arXiv}, eprint={2410.03533}, primaryClass={cs.NE cs.AI q-bio.NC} }
song2024multiscale
arxiv-665677
2410.03535
NRGBoost: Energy-Based Generative Boosted Trees
<|reference_start|>NRGBoost: Energy-Based Generative Boosted Trees: Despite the rise to dominance of deep learning in unstructured data domains, tree-based methods such as Random Forests (RF) and Gradient Boosted Decision Trees (GBDT) are still the workhorses for handling discriminative tasks on tabular data. We explore generative extensions of these popular algorithms with a focus on explicitly modeling the data density (up to a normalization constant), thus enabling other applications besides sampling. As our main contribution we propose an energy-based generative boosting algorithm that is analogous to the second order boosting implemented in popular packages like XGBoost. We show that, despite producing a generative model capable of handling inference tasks over any input variable, our proposed algorithm can achieve similar discriminative performance to GBDT on a number of real world tabular datasets, outperforming alternative generative approaches. At the same time, we show that it is also competitive with neural network based models for sampling.<|reference_end|>
arxiv
@article{bravo2024nrgboost:, title={NRGBoost: Energy-Based Generative Boosted Trees}, author={Jo\~ao Bravo}, journal={arXiv preprint arXiv:2410.03535}, year={2024}, archivePrefix={arXiv}, eprint={2410.03535}, primaryClass={cs.LG} }
bravo2024nrgboost:
arxiv-665678
2410.03536
Computer Vision Intelligence Test Modeling and Generation: A Case Study on Smart OCR
<|reference_start|>Computer Vision Intelligence Test Modeling and Generation: A Case Study on Smart OCR: AI-based systems possess distinctive characteristics and introduce challenges in quality evaluation at the same time. Consequently, ensuring and validating AI software quality is of critical importance. In this paper, we present an effective AI software functional testing model to address this challenge. Specifically, we first present a comprehensive literature review of previous work, covering key facets of AI software testing processes. We then introduce a 3D classification model to systematically evaluate the image-based text extraction AI function, as well as test coverage criteria and complexity. To evaluate the performance of our proposed AI software quality test, we propose four evaluation metrics to cover different aspects. Finally, based on the proposed framework and defined metrics, a mobile Optical Character Recognition (OCR) case study is presented to demonstrate the framework's effectiveness and capability in assessing AI function quality.<|reference_end|>
arxiv
@article{shu2024computer, title={Computer Vision Intelligence Test Modeling and Generation: A Case Study on Smart OCR}, author={Jing Shu, Bing-Jiun Miu, Eugene Chang, Jerry Gao, Jun Liu}, journal={arXiv preprint arXiv:2410.03536}, year={2024}, doi={10.1109/AITest62860.2024.00011}, archivePrefix={arXiv}, eprint={2410.03536}, primaryClass={cs.SE cs.AI cs.CV} }
shu2024computer
arxiv-665679
2410.03537
Ward: Provable RAG Dataset Inference via LLM Watermarks
<|reference_start|>Ward: Provable RAG Dataset Inference via LLM Watermarks: Retrieval-Augmented Generation (RAG) improves LLMs by enabling them to incorporate external data during generation. This raises concerns for data owners regarding unauthorized use of their content in RAG systems. Despite its importance, the challenge of detecting such unauthorized usage remains underexplored, with existing datasets and methodologies from adjacent fields being ill-suited for its study. In this work, we take several steps to bridge this gap. First, we formalize this problem as (black-box) RAG Dataset Inference (RAG-DI). To facilitate research on this challenge, we further introduce a novel dataset specifically designed for benchmarking RAG-DI methods under realistic conditions, and propose a set of baseline approaches. Building on this foundation, we introduce Ward, a RAG-DI method based on LLM watermarks that enables data owners to obtain rigorous statistical guarantees regarding the usage of their dataset in a RAG system. In our experimental evaluation, we show that Ward consistently outperforms all baselines across many challenging settings, achieving higher accuracy, superior query efficiency and robustness. Our work provides a foundation for future studies of RAG-DI and highlights LLM watermarks as a promising approach to this problem.<|reference_end|>
arxiv
@article{jovanović2024ward:, title={Ward: Provable RAG Dataset Inference via LLM Watermarks}, author={Nikola Jovanovi\'c, Robin Staab, Maximilian Baader, Martin Vechev}, journal={arXiv preprint arXiv:2410.03537}, year={2024}, archivePrefix={arXiv}, eprint={2410.03537}, primaryClass={cs.LG cs.AI cs.CR} }
jovanović2024ward:
arxiv-665680
2410.03538
Dreaming User Multimodal Representation for Micro-Video Recommendation
<|reference_start|>Dreaming User Multimodal Representation for Micro-Video Recommendation: The proliferation of online micro-video platforms has underscored the necessity for advanced recommender systems to mitigate information overload and deliver tailored content. Despite advancements, accurately and promptly capturing dynamic user interests remains a formidable challenge. Inspired by the Platonic Representation Hypothesis, which posits that different data modalities converge towards a shared statistical model of reality, we introduce DreamUMM (Dreaming User Multi-Modal Representation), a novel approach leveraging user historical behaviors to create real-time user representations in a multimodal space. DreamUMM employs a closed-form solution correlating user video preferences with multimodal similarity, hypothesizing that user interests can be effectively represented in a unified multimodal space. Additionally, we propose Candidate-DreamUMM for scenarios lacking recent user behavior data, inferring interests from candidate videos alone. Extensive online A/B tests demonstrate significant improvements in user engagement metrics, including active days and play count. The successful deployment of DreamUMM in two micro-video platforms with hundreds of millions of daily active users illustrates its practical efficacy and scalability in personalized micro-video content delivery. Our work contributes to the ongoing exploration of representational convergence by providing empirical evidence supporting the potential for user interest representations to reside in a multimodal space.<|reference_end|>
arxiv
@article{lin2024dreaming, title={Dreaming User Multimodal Representation Guided by The Platonic Representation Hypothesis for Micro-Video Recommendation}, author={Chengzhi Lin, Hezheng Lin, Shuchang Liu, Cangguang Ruan, LingJing Xu, Dezhao Yang, Chuyuan Wang, Yongqi Liu}, journal={arXiv preprint arXiv:2410.03538}, year={2024}, archivePrefix={arXiv}, eprint={2410.03538}, primaryClass={cs.IR cs.AI cs.CV} }
lin2024dreaming
arxiv-665681
2410.03543
Re-examining Sexism and Misogyny Classification with Annotator Attitudes
<|reference_start|>Re-examining Sexism and Misogyny Classification with Annotator Attitudes: Gender-Based Violence (GBV) is an increasing problem online, but existing datasets fail to capture the plurality of possible annotator perspectives or ensure the representation of affected groups. We revisit two important stages in the moderation pipeline for GBV: (1) manual data labelling; and (2) automated classification. For (1), we examine two datasets to investigate the relationship between annotator identities and attitudes and the responses they give to two GBV labelling tasks. To this end, we collect demographic and attitudinal information from crowd-sourced annotators using three validated surveys from Social Psychology. We find that higher Right Wing Authoritarianism scores are associated with a higher propensity to label text as sexist, while for Social Dominance Orientation and Neosexist Attitudes, higher scores are associated with a negative tendency to do so. For (2), we conduct classification experiments using Large Language Models and five prompting strategies, including infusing prompts with annotator information. We find: (i) annotator attitudes affect the ability of classifiers to predict their labels; (ii) including attitudinal information can boost performance when we use well-structured brief annotator descriptions; and (iii) models struggle to reflect the increased complexity and imbalanced classes of the new label sets.<|reference_end|>
arxiv
@article{jiang2024re-examining, title={Re-examining Sexism and Misogyny Classification with Annotator Attitudes}, author={Aiqi Jiang, Nikolas Vitsakis, Tanvi Dinkar, Gavin Abercrombie, Ioannis Konstas}, journal={arXiv preprint arXiv:2410.03543}, year={2024}, archivePrefix={arXiv}, eprint={2410.03543}, primaryClass={cs.CL} }
jiang2024re-examining
arxiv-665682
2410.03544
First-order methods and automatic differentiation: A multi-step systems identification perspective
<|reference_start|>First-order methods and automatic differentiation: A multi-step systems identification perspective: This paper presents a tool for multi-step system identification that leverages first-order optimization and exact gradient computation. Drawing inspiration from neural network training and Automatic Differentiation (AD), the proposed method computes and analyzes the gradients with respect to the parameters to identify by propagating them through system dynamics. Thus, it defines a linear, time-varying dynamical system that models the gradient evolution. This allows to formally address the "exploding gradient" issue, by providing conditions for a reliable and efficient optimization and identification process for dynamical systems. Results indicate that the proposed method is both effective and efficient, making it a promising tool for future research and applications in nonlinear systems identification and non-convex optimization.<|reference_end|>
arxiv
@article{donati2024first-order, title={First-order methods and automatic differentiation: A multi-step systems identification perspective}, author={Cesare Donati, Martina Mammarella, Fabrizio Dabbene, Carlo Novara, Constantino Lagoa}, journal={arXiv preprint arXiv:2410.03544}, year={2024}, archivePrefix={arXiv}, eprint={2410.03544}, primaryClass={eess.SY cs.SY} }
donati2024first-order
arxiv-665683
2410.03545
Enhancing Data Quality through Simple De-duplication: Navigating Responsible Computational Social Science Research
<|reference_start|>Enhancing Data Quality through Simple De-duplication: Navigating Responsible Computational Social Science Research: Research in natural language processing (NLP) for Computational Social Science (CSS) heavily relies on data from social media platforms. This data plays a crucial role in the development of models for analysing socio-linguistic phenomena within online communities. In this work, we conduct an in-depth examination of 20 datasets extensively used in NLP for CSS to comprehensively examine data quality. Our analysis reveals that social media datasets exhibit varying levels of data duplication. Consequently, this gives rise to challenges like label inconsistencies and data leakage, compromising the reliability of models. Our findings also suggest that data duplication has an impact on the current claims of state-of-the-art performance, potentially leading to an overestimation of model effectiveness in real-world scenarios. Finally, we propose new protocols and best practices for improving dataset development from social media data and its usage.<|reference_end|>
arxiv
@article{mu2024enhancing, title={Enhancing Data Quality through Simple De-duplication: Navigating Responsible Computational Social Science Research}, author={Yida Mu, Mali Jin, Xingyi Song, Nikolaos Aletras}, journal={arXiv preprint arXiv:2410.03545}, year={2024}, archivePrefix={arXiv}, eprint={2410.03545}, primaryClass={cs.CL} }
mu2024enhancing
arxiv-665684
2410.03546
Multidimensional Human Activity Recognition With Large Language Model: A Conceptual Framework
<|reference_start|>Multidimensional Human Activity Recognition With Large Language Model: A Conceptual Framework: In high-stakes environments like emergency response or elder care, the integration of large language models (LLMs) revolutionizes risk assessment, resource allocation, and emergency responses in Human Activity Recognition (HAR) systems by leveraging data from various wearable sensors. We propose a conceptual framework that utilizes various wearable devices, each considered as a single dimension, to support a multidimensional learning approach within HAR systems. By integrating and processing data from these diverse sources, LLMs can translate complex sensor inputs into actionable insights. This integration mitigates the inherent uncertainties and complexities associated with these inputs, thus enhancing the responsiveness and effectiveness of emergency services. This paper sets the stage for exploring the transformative potential of LLMs within HAR systems in empowering emergency workers to navigate the unpredictable and risky environments they encounter in their critical roles.<|reference_end|>
arxiv
@article{hasan2024multidimensional, title={Multidimensional Human Activity Recognition With Large Language Model: A Conceptual Framework}, author={Syed Mhamudul Hasan}, journal={arXiv preprint arXiv:2410.03546}, year={2024}, archivePrefix={arXiv}, eprint={2410.03546}, primaryClass={cs.HC cs.CY cs.LG} }
hasan2024multidimensional
arxiv-665685
2410.03549
Multi-modal Atmospheric Sensing to Augment Wearable IMU-Based Hand Washing Detection
<|reference_start|>Multi-modal Atmospheric Sensing to Augment Wearable IMU-Based Hand Washing Detection: Hand washing is a crucial part of personal hygiene. Hand washing detection is a relevant topic for wearable sensing with applications in the medical and professional fields. Hand washing detection can be used to aid workers in complying with hygiene rules. Hand washing detection using body-worn IMU-based sensor systems has been shown to be a feasible approach, although, for some reported results, the specificity of the detection was low, leading to a high rate of false positives. In this work, we present a novel, open-source prototype device that additionally includes a humidity, temperature, and barometric sensor. We contribute a benchmark dataset of 10 participants and 43 hand-washing events and perform an evaluation of the sensors' benefits. In addition, we outline the usefulness of the additional sensors in both the annotation pipeline and the machine learning models. By visual inspection, we show that the humidity sensor in particular registers a strong increase in the relative humidity during a hand-washing activity. A machine learning analysis of our data shows that distinct features benefiting from such relative humidity patterns remain to be identified.<|reference_end|>
arxiv
@article{burchard2024multi-modal, title={Multi-modal Atmospheric Sensing to Augment Wearable IMU-Based Hand Washing Detection}, author={Robin Burchard and Kristof Van Laerhoven}, journal={arXiv preprint arXiv:2410.03549}, year={2024}, archivePrefix={arXiv}, eprint={2410.03549}, primaryClass={cs.HC cs.LG} }
burchard2024multi-modal
arxiv-665686
2410.03550
Loading Ceramics: Visualising Possibilities of Robotics in Ceramics
<|reference_start|>Loading Ceramics: Visualising Possibilities of Robotics in Ceramics: This article introduces an artistic research project that utilises artist-in-residency and exhibition as methods for exploring the possibilities of robotic 3D printing and ceramics. The interdisciplinary project unites artists and architects to collaborate on a proposed curatorial concept and Do-It-With-Others (DIWO) technological development. Constraints include material, specifically local clay, production technique, namely 3D printing with a robotic arm, and kiln size, as well as an exhibition concept that is further elaborated in the next chapter. The pictorial presents four projects as case studies demonstrating how the creatives integrate these constraints into their processes. This integration leads to the subsequent refinement and customization of the robotic-ceramics interface, aligning with the practitioners' requirements through software development. The project's focus extends beyond artistic outcomes, aiming also to advance the pipeline of 3D robotic printing in clay, employing a digitally controlled material press that has been developed in-house, with its functionality refined through practice.<|reference_end|>
arxiv
@article{guljajeva2024loading, title={Loading Ceramics: Visualising Possibilities of Robotics in Ceramics}, author={Varvara Guljajeva, Mar Canet Sola, Martin Melioranski, Lauri Kilusk, Kaiko Kivi}, journal={arXiv preprint arXiv:2410.03550}, year={2024}, archivePrefix={arXiv}, eprint={2410.03550}, primaryClass={cs.RO} }
guljajeva2024loading
arxiv-665687
2410.03551
Constructive Apraxia: An Unexpected Limit of Instructible Vision-Language Models and Analog for Human Cognitive Disorders
<|reference_start|>Constructive Apraxia: An Unexpected Limit of Instructible Vision-Language Models and Analog for Human Cognitive Disorders: This study reveals an unexpected parallel between instructible vision-language models (VLMs) and human cognitive disorders, specifically constructive apraxia. We tested 25 state-of-the-art VLMs, including GPT-4 Vision, DALL-E 3, and Midjourney v5, on their ability to generate images of the Ponzo illusion, a task that requires basic spatial reasoning and is often used in clinical assessments of constructive apraxia. Remarkably, 24 out of 25 models failed to correctly render two horizontal lines against a perspective background, mirroring the deficits seen in patients with parietal lobe damage. The models consistently misinterpreted spatial instructions, producing tilted or misaligned lines that followed the perspective of the background rather than remaining horizontal. This behavior is strikingly similar to how apraxia patients struggle to copy or construct simple figures despite intact visual perception and motor skills. Our findings suggest that current VLMs, despite their advanced capabilities in other domains, lack fundamental spatial reasoning abilities akin to those impaired in constructive apraxia. This limitation in AI systems provides a novel computational model for studying spatial cognition deficits and highlights a critical area for improvement in VLM architecture and training methodologies.<|reference_end|>
arxiv
@article{noever2024constructive, title={Constructive Apraxia: An Unexpected Limit of Instructible Vision-Language Models and Analog for Human Cognitive Disorders}, author={David Noever and Samantha E. Miller Noever}, journal={arXiv preprint arXiv:2410.03551}, year={2024}, archivePrefix={arXiv}, eprint={2410.03551}, primaryClass={cs.CV cs.AI cs.HC} }
noever2024constructive
arxiv-665688
2410.03552
Evaluating Investment Risks in LATAM AI Startups: Ranking of Investment Potential and Framework for Valuation
<|reference_start|>Evaluating Investment Risks in LATAM AI Startups: Ranking of Investment Potential and Framework for Valuation: The growth of the tech startup ecosystem in Latin America (LATAM) is driven by innovative entrepreneurs addressing market needs across various sectors. However, these startups encounter unique challenges and risks that require specific management approaches. This paper explores a case study with the Total Addressable Market (TAM), Serviceable Available Market (SAM), and Serviceable Obtainable Market (SOM) metrics within the context of the online food delivery industry in LATAM, serving as a model for valuing startups using the Discounted Cash Flow (DCF) method. By analyzing key emerging powers such as Argentina, Colombia, Uruguay, Costa Rica, Panama, and Ecuador, the study highlights the potential and profitability of AI-driven startups in the region through the development of a ranking of emerging powers in Latin America for tech startup investment. The paper also examines the political, economic, and competitive risks faced by startups and offers strategic insights on mitigating these risks to maximize investment returns. Furthermore, the research underscores the value of diversifying investment portfolios with startups in emerging markets, emphasizing the opportunities for substantial growth and returns despite inherent risks.<|reference_end|>
arxiv
@article{ramos-torres2024evaluating, title={Evaluating Investment Risks in LATAM AI Startups: Ranking of Investment Potential and Framework for Valuation}, author={Abraham Ramos-Torres, Laura N. Montoya}, journal={arXiv preprint arXiv:2410.03552}, year={2024}, archivePrefix={arXiv}, eprint={2410.03552}, primaryClass={q-fin.GN cs.AI q-fin.PM q-fin.PR} }
ramos-torres2024evaluating
arxiv-665689
2410.03553
Structure-Enhanced Protein Instruction Tuning: Towards General-Purpose Protein Understanding
<|reference_start|>Structure-Enhanced Protein Instruction Tuning: Towards General-Purpose Protein Understanding: Proteins, as essential biomolecules, play a central role in biological processes, including metabolic reactions and DNA replication. Accurate prediction of their properties and functions is crucial in biological applications. The recent development of protein language models (pLMs) with supervised fine-tuning provides a promising solution to this problem. However, the fine-tuned model is tailored to a particular downstream prediction task, and achieving general-purpose protein understanding remains a challenge. In this paper, we introduce the Structure-Enhanced Protein Instruction Tuning (SEPIT) framework to bridge this gap. Our approach integrates a novel structure-aware module into pLMs to provide them with structural knowledge, and then connects these enhanced pLMs to large language models (LLMs) to generate understanding of proteins. In this framework, we propose a novel two-stage instruction tuning pipeline that first establishes a basic understanding of proteins through caption-based instructions and then refines this understanding using a mixture of experts (MoEs) to learn more complex properties and functional information with the same amount of activated parameters. Moreover, we construct the largest and most comprehensive protein instruction dataset to date, which allows us to train and evaluate the general-purpose protein understanding model. Extensive experimental results on open-ended generation and closed-set answer tasks demonstrate the superior performance of SEPIT over both closed-source general LLMs and open-source LLMs trained with protein knowledge.<|reference_end|>
arxiv
@article{wu2024structure-enhanced, title={Structure-Enhanced Protein Instruction Tuning: Towards General-Purpose Protein Understanding}, author={Wei Wu, Chao Wang, Liyi Chen, Mingze Yin, Yiheng Zhu, Kun Fu, Jieping Ye, Hui Xiong, Zheng Wang}, journal={arXiv preprint arXiv:2410.03553}, year={2024}, archivePrefix={arXiv}, eprint={2410.03553}, primaryClass={cs.CL q-bio.BM} }
wu2024structure-enhanced
arxiv-665690
2410.03554
Artificial intelligence inspired freeform optics design: a review
<|reference_start|>Artificial intelligence inspired freeform optics design: a review: Integrating artificial intelligence (AI) techniques such as machine learning and deep learning into freeform optics design has significantly enhanced design efficiency, expanded the design space, and led to innovative solutions. This article reviews the latest developments in AI applications within this field, highlighting their roles in initial design generation, optimization, and performance prediction. It also addresses the benefits of AI, such as improved accuracy and performance, alongside challenges like data requirements, model interpretability, and computational complexity. Despite these challenges, the future of AI in freeform optics design looks promising, with potential advancements in hybrid design methods, interpretable AI, AI-driven manufacturing, and targeted research for specific applications. Collaboration among researchers, engineers, and designers is essential to fully harness AI's potential and drive innovation in optics.<|reference_end|>
arxiv
@article{feng2024artificial, title={Artificial intelligence inspired freeform optics design: a review}, author={Lei Feng, Jingxing Liao, Jingna Yang}, journal={arXiv preprint arXiv:2410.03554}, year={2024}, archivePrefix={arXiv}, eprint={2410.03554}, primaryClass={cs.LG physics.optics} }
feng2024artificial
arxiv-665691
2410.03555
Enhancing Autonomous Navigation by Imaging Hidden Objects using Single-Photon LiDAR
<|reference_start|>Enhancing Autonomous Navigation by Imaging Hidden Objects using Single-Photon LiDAR: Robust autonomous navigation in environments with limited visibility remains a critical challenge in robotics. We present a novel approach that leverages Non-Line-of-Sight (NLOS) sensing using single-photon LiDAR to improve visibility and enhance autonomous navigation. Our method enables mobile robots to "see around corners" by utilizing multi-bounce light information, effectively expanding their perceptual range without additional infrastructure. We propose a three-module pipeline: (1) Sensing, which captures multi-bounce histograms using SPAD-based LiDAR; (2) Perception, which estimates occupancy maps of hidden regions from these histograms using a convolutional neural network; and (3) Control, which allows a robot to follow safe paths based on the estimated occupancy. We evaluate our approach through simulations and real-world experiments on a mobile robot navigating an L-shaped corridor with hidden obstacles. Our work represents the first experimental demonstration of NLOS imaging for autonomous navigation, paving the way for safer and more efficient robotic systems operating in complex environments. We also contribute a novel dynamics-integrated transient rendering framework for simulating NLOS scenarios, facilitating future research in this domain.<|reference_end|>
arxiv
@article{young2024enhancing, title={Enhancing Autonomous Navigation by Imaging Hidden Objects using Single-Photon LiDAR}, author={Aaron Young, Nevindu M. Batagoda, Harry Zhang, Akshat Dave, Adithya Pediredla, Dan Negrut, Ramesh Raskar}, journal={arXiv preprint arXiv:2410.03555}, year={2024}, archivePrefix={arXiv}, eprint={2410.03555}, primaryClass={cs.RO cs.CV} }
young2024enhancing
arxiv-665692
2410.03556
BodyShapeGPT: SMPL Body Shape Manipulation with LLMs
<|reference_start|>BodyShapeGPT: SMPL Body Shape Manipulation with LLMs: Generative AI models provide a wide range of tools capable of performing complex tasks in a fraction of the time it would take a human. Among these, Large Language Models (LLMs) stand out for their ability to generate diverse texts, from literary narratives to specialized responses in different fields of knowledge. This paper explores the use of fine-tuned LLMs to identify physical descriptions of people, and subsequently create accurate representations of avatars using the SMPL-X model by inferring shape parameters. We demonstrate that LLMs can be trained to understand and manipulate the shape space of SMPL, allowing the control of 3D human shapes through natural language. This approach promises to improve human-machine interaction and opens new avenues for customization and simulation in virtual environments.<|reference_end|>
arxiv
@article{árbol2024bodyshapegpt:, title={BodyShapeGPT: SMPL Body Shape Manipulation with LLMs}, author={Baldomero R. {\'A}rbol and Dan Casas}, journal={arXiv preprint arXiv:2410.03556}, year={2024}, archivePrefix={arXiv}, eprint={2410.03556}, primaryClass={cs.CL cs.CV cs.LG} }
árbol2024bodyshapegpt:
arxiv-665693
2410.03558
Not All Diffusion Model Activations Have Been Evaluated as Discriminative Features
<|reference_start|>Not All Diffusion Model Activations Have Been Evaluated as Discriminative Features: Diffusion models are initially designed for image generation. Recent research shows that the internal signals within their backbones, named activations, can also serve as dense features for various discriminative tasks such as semantic segmentation. Given numerous activations, selecting a small yet effective subset poses a fundamental problem. To this end, the early study of this field performs a large-scale quantitative comparison of the discriminative ability of the activations. However, we find that many potential activations have not been evaluated, such as the queries and keys used to compute attention scores. Moreover, recent advancements in diffusion architectures bring many new activations, such as those within embedded ViT modules. Both combined, activation selection remains unresolved but overlooked. To tackle this issue, this paper takes a further step with a much broader range of activations evaluated. Considering the significant increase in activations, a full-scale quantitative comparison is no longer operational. Instead, we seek to understand the properties of these activations, such that the activations that are clearly inferior can be filtered out in advance via simple qualitative evaluation. After careful analysis, we discover three properties universal among diffusion models, enabling this study to go beyond specific models. On top of this, we present effective feature selection solutions for several popular diffusion models. Finally, the experiments across multiple discriminative tasks validate the superiority of our method over the SOTA competitors. Our code is available at https://github.com/Darkbblue/generic-diffusion-feature.<|reference_end|>
arxiv
@article{meng2024not, title={Not All Diffusion Model Activations Have Been Evaluated as Discriminative Features}, author={Benyuan Meng, Qianqian Xu, Zitai Wang, Xiaochun Cao, Qingming Huang}, journal={arXiv preprint arXiv:2410.03558}, year={2024}, archivePrefix={arXiv}, eprint={2410.03558}, primaryClass={cs.CV cs.AI} }
meng2024not
arxiv-665694
2410.03559
Optimizing food taste sensory evaluation through neural network-based taste electroencephalogram channel selection
<|reference_start|>Optimizing food taste sensory evaluation through neural network-based taste electroencephalogram channel selection: The taste electroencephalogram (EEG) evoked by taste stimulation can reflect different brain patterns and be used in applications such as sensory evaluation of food. However, considering computational cost and efficiency, EEG data with many channels faces the critical issue of channel selection. This paper proposed a channel selection method called class activation mapping with attention (CAM-Attention). The CAM-Attention method combined a convolutional neural network with channel and spatial attention (CNN-CSA) model with a gradient-weighted class activation mapping (Grad-CAM) model. The CNN-CSA model exploited key features in EEG data via an attention mechanism, and the Grad-CAM model effectively realized the visualization of feature regions. Then, channel selection was effectively implemented based on feature regions. Finally, the CAM-Attention method reduced the computational burden of taste EEG recognition and effectively distinguished the four tastes. In short, it has excellent recognition performance and provides effective technical support for taste sensory evaluation.<|reference_end|>
arxiv
@article{xia2024optimizing, title={Optimizing food taste sensory evaluation through neural network-based taste electroencephalogram channel selection}, author={Xiuxin Xia, Qun Wang, He Wang, Chenrui Liu, Pengwei Li, Yan Shi, Hong Men}, journal={arXiv preprint arXiv:2410.03559}, year={2024}, archivePrefix={arXiv}, eprint={2410.03559}, primaryClass={eess.SP cs.AI cs.LG q-bio.NC} }
xia2024optimizing
arxiv-665695
2410.03560
færdXel: An Expert System for Danish Traffic Law
<|reference_start|>f\aerdXel: An Expert System for Danish Traffic Law: We present f{\ae}rdXel, a tool for symbolic reasoning in the domain of Danish traffic law. f{\ae}rdXel combines techniques from logic programming with a novel interface that allows users to navigate through its reasoning process, thereby ensuring the system's trustworthiness. A preliminary empirical evaluation indicates that this work is seen as very promising, and has the potential to become a foundation for real-world AI tools supporting professionals in the Danish legal sector.<|reference_end|>
arxiv
@article{cruz-filipe2024f{\ae}rdxel:, title={f{\ae}rdXel: An Expert System for Danish Traffic Law}, author={Lu{\'i}s Cruz-Filipe and Jonas Vistrup}, journal={arXiv preprint arXiv:2410.03560}, year={2024}, archivePrefix={arXiv}, eprint={2410.03560}, primaryClass={cs.AI} }
cruz-filipe2024f{\ae}rdxel:
arxiv-665696
2410.03561
A Diagrammatic Algebra for Program Logics
<|reference_start|>A Diagrammatic Algebra for Program Logics: Tape diagrams provide a convenient notation for arrows of rig categories, i.e., categories equipped with two monoidal products, $\oplus$ and $\otimes$, where $\otimes$ distributes over $\oplus $. In this work, we extend tape diagrams with traces over $\oplus$ in order to deal with iteration in imperative programming languages. More precisely, we introduce Kleene-Cartesian bicategories, namely rig categories where the monoidal structure provided by $\otimes$ is a cartesian bicategory, while the one provided by $\oplus$ is what we name a Kleene bicategory. We show that the associated language of tape diagrams is expressive enough to deal with imperative programs and the corresponding laws provide a proof system that is at least as powerful as the one of Hoare logic.<|reference_end|>
arxiv
@article{bonchi2024a, title={A Diagrammatic Algebra for Program Logics}, author={Filippo Bonchi, Alessandro Di Giorgio, Elena Di Lavore}, journal={arXiv preprint arXiv:2410.03561}, year={2024}, archivePrefix={arXiv}, eprint={2410.03561}, primaryClass={cs.LO} }
bonchi2024a
arxiv-665697
2410.03562
Class of codes correcting absorptions and emissions
<|reference_start|>Class of codes correcting absorptions and emissions: We construct a general family of quantum codes that protect against all emission, absorption, dephasing, and raising/lowering errors up to an arbitrary fixed order. Such codes are known in the literature as absorption-emission (AE) codes. We derive simplified error correction conditions for a general AE code and show that any permutation-invariant code that corrects $\le t$ errors can be mapped to an AE code that corrects up to order-$t$ transitions. Carefully tuning the parameters of permutationally invariant codes, we construct several examples of efficient AE codes, hosted in systems with low total angular momentum. Our results also imply that spin codes can be mapped to AE codes, enabling us to characterize logical operators for certain subclasses of such codes.<|reference_end|>
arxiv
@article{aydin2024class, title={Class of codes correcting absorptions and emissions}, author={Arda Aydin and Alexander Barg}, journal={arXiv preprint arXiv:2410.03562}, year={2024}, archivePrefix={arXiv}, eprint={2410.03562}, primaryClass={quant-ph cs.IT math.IT} }
aydin2024class
arxiv-665698
2410.03565
Training on more Reachable Tasks for Generalisation in Reinforcement Learning
<|reference_start|>Training on more Reachable Tasks for Generalisation in Reinforcement Learning: In multi-task reinforcement learning, agents train on a fixed set of tasks and have to generalise to new ones. Recent work has shown that increased exploration improves this generalisation, but it remains unclear why exactly that is. In this paper, we introduce the concept of reachability in multi-task reinforcement learning and show that an initial exploration phase increases the number of reachable tasks the agent is trained on. This, and not the increased exploration, is responsible for the improved generalisation, even to unreachable tasks. Inspired by this, we propose a novel method Explore-Go that implements such an exploration phase at the beginning of each episode. Explore-Go only modifies the way experience is collected and can be used with most existing on-policy or off-policy reinforcement learning algorithms. We demonstrate the effectiveness of our method when combined with some popular algorithms and show an increase in generalisation performance across several environments.<|reference_end|>
arxiv
@article{weltevrede2024training, title={Training on more Reachable Tasks for Generalisation in Reinforcement Learning}, author={Max Weltevrede, Caroline Horsch, Matthijs T.J. Spaan, Wendelin B{\"o}hmer}, journal={arXiv preprint arXiv:2410.03565}, year={2024}, archivePrefix={arXiv}, eprint={2410.03565}, primaryClass={cs.LG cs.AI} }
weltevrede2024training
arxiv-665699
2410.03566
A Survey on Offensive AI Within Cybersecurity
<|reference_start|>A Survey on Offensive AI Within Cybersecurity: Artificial Intelligence (AI) has witnessed major growth and integration across various domains. As AI systems become increasingly prevalent, they also become targets for threat actors to manipulate their functionality for malicious purposes. This survey paper on offensive AI will comprehensively cover various aspects related to attacks against and using AI systems. It will delve into the impact of offensive AI practices on different domains, including consumer, enterprise, and public digital infrastructure. The paper will explore adversarial machine learning, attacks against AI models, infrastructure, and interfaces, along with offensive techniques like information gathering, social engineering, and weaponized AI. Additionally, it will discuss the consequences and implications of offensive AI, presenting case studies, insights, and avenues for further research.<|reference_end|>
arxiv
@article{girhepuje2024a, title={A Survey on Offensive AI Within Cybersecurity}, author={Sahil Girhepuje, Aviral Verma, Gaurav Raina}, journal={arXiv preprint arXiv:2410.03566}, year={2024}, archivePrefix={arXiv}, eprint={2410.03566}, primaryClass={cs.CR cs.AI} }
girhepuje2024a
arxiv-665700
2410.03568
Towards Linguistically-Aware and Language-Independent Tokenization for Large Language Models (LLMs)
<|reference_start|>Towards Linguistically-Aware and Language-Independent Tokenization for Large Language Models (LLMs): This paper presents a comprehensive study on the tokenization techniques employed by state-of-the-art large language models (LLMs) and their implications on the cost and availability of services across different languages, especially low resource languages. The analysis considers multiple LLMs, including GPT-4 (using cl100k_base embeddings), GPT-3 (with p50k_base embeddings), and DaVinci (employing r50k_base embeddings), as well as the widely used BERT base tokenizer. The study evaluates the tokenization variability observed across these models and investigates the challenges of linguistic representation in subword tokenization. The research underscores the importance of fostering linguistically-aware development practices, especially for languages that are traditionally under-resourced. Moreover, this paper introduces case studies that highlight the real-world implications of tokenization choices, particularly in the context of electronic health record (EHR) systems. This research aims to promote generalizable Internationalization (I18N) practices in the development of AI services in this domain and beyond, with a strong emphasis on inclusivity, particularly for languages traditionally underrepresented in AI applications.<|reference_end|>
arxiv
@article{rahman2024towards, title={Towards Linguistically-Aware and Language-Independent Tokenization for Large Language Models (LLMs)}, author={Abrar Rahman, Garry Bowlin, Binit Mohanty, Sean McGunigal}, journal={arXiv preprint arXiv:2410.03568}, year={2024}, archivePrefix={arXiv}, eprint={2410.03568}, primaryClass={cs.CL cs.LG} }
rahman2024towards