corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-668001 | 2410.07751 | Learning Low-Level Causal Relations using a Simulated Robotic Arm | <|reference_start|>Learning Low-Level Causal Relations using a Simulated Robotic Arm: Causal learning allows humans to predict the effect of their actions on the known environment and use this knowledge to plan the execution of more complex actions. Such knowledge also captures the behaviour of the environment and can be used for its analysis and the reasoning behind the behaviour. This type of knowledge is also crucial in the design of intelligent robotic systems with common sense. In this paper, we study causal relations by learning the forward and inverse models based on data generated by a simulated robotic arm involved in two sensorimotor tasks. As a next step, we investigate feature attribution methods for the analysis of the forward model, which reveals the low-level causal effects corresponding to individual features of the state vector related to both the arm joints and the environment features. This type of analysis provides solid ground for dimensionality reduction of the state representations, as well as for the aggregation of knowledge towards the explainability of causal effects at higher levels.<|reference_end|> | arxiv | @article{cibula2024learning,
title={Learning Low-Level Causal Relations using a Simulated Robotic Arm},
author={Miroslav Cibula and Matthias Kerzel and Igor Farka\v{s}},
journal={Artificial Neural Networks and Machine Learning -- ICANN 2024
  (pp. 285--298), Springer Nature Switzerland},
year={2024},
doi={10.1007/978-3-031-72359-9_21},
archivePrefix={arXiv},
eprint={2410.07751},
primaryClass={cs.RO cs.AI cs.LG}
} | cibula2024learning |
arxiv-668002 | 2410.07752 | TVBench: Redesigning Video-Language Evaluation | <|reference_start|>TVBench: Redesigning Video-Language Evaluation: Large language models have demonstrated impressive performance when integrated with vision models even enabling video understanding. However, evaluating these video models presents its own unique challenges, for which several benchmarks have been proposed. In this paper, we show that the currently most used video-language benchmarks can be solved without requiring much temporal reasoning. We identified three main issues in existing datasets: (i) static information from single frames is often sufficient to solve the tasks (ii) the text of the questions and candidate answers is overly informative, allowing models to answer correctly without relying on any visual input (iii) world knowledge alone can answer many of the questions, making the benchmarks a test of knowledge replication rather than visual reasoning. In addition, we found that open-ended question-answering benchmarks for video understanding suffer from similar issues while the automatic evaluation process with LLMs is unreliable, making it an unsuitable alternative. As a solution, we propose TVBench, a novel open-source video multiple-choice question-answering benchmark, and demonstrate through extensive evaluations that it requires a high level of temporal understanding. Surprisingly, we find that most recent state-of-the-art video-language models perform similarly to random performance on TVBench, with only Gemini-Pro and Tarsier clearly surpassing this baseline.<|reference_end|> | arxiv | @article{cores2024tvbench:,
title={TVBench: Redesigning Video-Language Evaluation},
author={Daniel Cores and Michael Dorkenwald and Manuel Mucientes and Cees G. M.
  Snoek and Yuki M. Asano},
journal={arXiv preprint arXiv:2410.07752},
year={2024},
archivePrefix={arXiv},
eprint={2410.07752},
primaryClass={cs.CV}
} | cores2024tvbench: |
arxiv-668003 | 2410.07753 | Synthesizing Multi-Class Surgical Datasets with Anatomy-Aware Diffusion Models | <|reference_start|>Synthesizing Multi-Class Surgical Datasets with Anatomy-Aware Diffusion Models: In computer-assisted surgery, automatically recognizing anatomical organs is crucial for understanding the surgical scene and providing intraoperative assistance. While machine learning models can identify such structures, their deployment is hindered by the need for labeled, diverse surgical datasets with anatomical annotations. Labeling multiple classes (i.e., organs) in a surgical scene is time-intensive, requiring medical experts. Although synthetically generated images can enhance segmentation performance, maintaining both organ structure and texture during generation is challenging. We introduce a multi-stage approach using diffusion models to generate multi-class surgical datasets with annotations. Our framework improves anatomy awareness by training organ specific models with an inpainting objective guided by binary segmentation masks. The organs are generated with an inference pipeline using pre-trained ControlNet to maintain the organ structure. The synthetic multi-class datasets are constructed through an image composition step, ensuring structural and textural consistency. This versatile approach allows the generation of multi-class datasets from real binary datasets and simulated surgical masks. We thoroughly evaluate the generated datasets on image quality and downstream segmentation, achieving a $15\%$ improvement in segmentation scores when combined with real images. Our codebase https://gitlab.com/nct_tso_public/muli-class-image-synthesis<|reference_end|> | arxiv | @article{venkatesh2024synthesizing,
title={Synthesizing Multi-Class Surgical Datasets with Anatomy-Aware Diffusion
Models},
author={Danush Kumar Venkatesh and Dominik Rivoir and Micha Pfeiffer and Fiona
  Kolbinger and Stefanie Speidel},
journal={arXiv preprint arXiv:2410.07753},
year={2024},
archivePrefix={arXiv},
eprint={2410.07753},
primaryClass={cs.CV cs.LG}
} | venkatesh2024synthesizing |
arxiv-668004 | 2410.07756 | Graphs with nonnegative resistance curvature | <|reference_start|>Graphs with nonnegative resistance curvature: This article introduces and studies a new class of graphs motivated by discrete curvature. We call a graph resistance nonnegative if there exists a distribution on its spanning trees such that every vertex has expected degree at most two in a random spanning tree; these are precisely the graphs that admit a metric with nonnegative resistance curvature, a discrete curvature introduced by Devriendt and Lambiotte. We show that this class of graphs lies between Hamiltonian and $1$-tough graphs and, surprisingly, that a graph is resistance nonnegative if and only if its twice-dilated matching polytope intersects the interior of its spanning tree polytope. We study further characterizations and basic properties of resistance nonnegative graphs and pose several questions for future research.<|reference_end|> | arxiv | @article{devriendt2024graphs,
title={Graphs with nonnegative resistance curvature},
author={Karel Devriendt},
journal={arXiv preprint arXiv:2410.07756},
year={2024},
archivePrefix={arXiv},
eprint={2410.07756},
primaryClass={math.CO cs.DM math.MG}
} | devriendt2024graphs |
arxiv-668005 | 2410.07757 | MMHead: Towards Fine-grained Multi-modal 3D Facial Animation | <|reference_start|>MMHead: Towards Fine-grained Multi-modal 3D Facial Animation: 3D facial animation has attracted considerable attention due to its extensive applications in the multimedia field. Audio-driven 3D facial animation has been widely explored with promising results. However, multi-modal 3D facial animation, especially text-guided 3D facial animation is rarely explored due to the lack of multi-modal 3D facial animation dataset. To fill this gap, we first construct a large-scale multi-modal 3D facial animation dataset, MMHead, which consists of 49 hours of 3D facial motion sequences, speech audios, and rich hierarchical text annotations. Each text annotation contains abstract action and emotion descriptions, fine-grained facial and head movements (i.e., expression and head pose) descriptions, and three possible scenarios that may cause such emotion. Concretely, we integrate five public 2D portrait video datasets, and propose an automatic pipeline to 1) reconstruct 3D facial motion sequences from monocular videos; and 2) obtain hierarchical text annotations with the help of AU detection and ChatGPT. Based on the MMHead dataset, we establish benchmarks for two new tasks: text-induced 3D talking head animation and text-to-3D facial motion generation. Moreover, a simple but efficient VQ-VAE-based method named MM2Face is proposed to unify the multi-modal information and generate diverse and plausible 3D facial motions, which achieves competitive results on both benchmarks. Extensive experiments and comprehensive analysis demonstrate the significant potential of our dataset and benchmarks in promoting the development of multi-modal 3D facial animation.<|reference_end|> | arxiv | @article{wu2024mmhead:,
title={MMHead: Towards Fine-grained Multi-modal 3D Facial Animation},
author={Sijing Wu and Yunhao Li and Yichao Yan and Huiyu Duan and Ziwei Liu and Guangtao Zhai},
journal={arXiv preprint arXiv:2410.07757},
year={2024},
archivePrefix={arXiv},
eprint={2410.07757},
primaryClass={cs.CV}
} | wu2024mmhead: |
arxiv-668006 | 2410.07758 | HeightFormer: A Semantic Alignment Monocular 3D Object Detection Method from Roadside Perspective | <|reference_start|>HeightFormer: A Semantic Alignment Monocular 3D Object Detection Method from Roadside Perspective: The on-board 3D object detection technology has received extensive attention as a critical technology for autonomous driving, while few studies have focused on applying roadside sensors in 3D traffic object detection. Existing studies achieve the projection of 2D image features to 3D features through height estimation based on the frustum. However, they did not consider the height alignment and the extraction efficiency of bird's-eye-view features. We propose a novel 3D object detection framework integrating Spatial Former and Voxel Pooling Former to enhance 2D-to-3D projection based on height estimation. Extensive experiments were conducted using the Rope3D and DAIR-V2X-I dataset, and the results demonstrated the outperformance of the proposed algorithm in the detection of both vehicles and cyclists. These results indicate that the algorithm is robust and generalized under various detection scenarios. Improving the accuracy of 3D object detection on the roadside is conducive to building a safe and trustworthy intelligent transportation system of vehicle-road coordination and promoting the large-scale application of autonomous driving. The code and pre-trained models will be released on https://anonymous.4open.science/r/HeightFormer.<|reference_end|> | arxiv | @article{liu2024heightformer:,
title={HeightFormer: A Semantic Alignment Monocular 3D Object Detection Method
from Roadside Perspective},
author={Pei Liu (1) and Zihao Zhang (2) and Haipeng Liu (3) and Nanfang Zheng
(4) and Meixin Zhu (1) and Ziyuan Pu (4) ((1) Intelligent Transportation
Thrust, Systems Hub, The Hong Kong University of Science and Technology
(Guangzhou), (2) School of Cyber Science and Engineering, Southeast
University, (3) Li Auto Inc, (4) School of Transportation, Southeast
University)},
journal={arXiv preprint arXiv:2410.07758},
year={2024},
archivePrefix={arXiv},
eprint={2410.07758},
primaryClass={cs.CV}
} | liu2024heightformer: |
arxiv-668007 | 2410.07761 | $\textit{Jump Your Steps}$: Optimizing Sampling Schedule of Discrete Diffusion Models | <|reference_start|>$\textit{Jump Your Steps}$: Optimizing Sampling Schedule of Discrete Diffusion Models: Diffusion models have seen notable success in continuous domains, leading to the development of discrete diffusion models (DDMs) for discrete variables. Despite recent advances, DDMs face the challenge of slow sampling speeds. While parallel sampling methods like $\tau$-leaping accelerate this process, they introduce $\textit{Compounding Decoding Error}$ (CDE), where discrepancies arise between the true distribution and the approximation from parallel token generation, leading to degraded sample quality. In this work, we present $\textit{Jump Your Steps}$ (JYS), a novel approach that optimizes the allocation of discrete sampling timesteps by minimizing CDE without extra computational cost. More precisely, we derive a practical upper bound on CDE and propose an efficient algorithm for searching for the optimal sampling schedule. Extensive experiments across image, music, and text generation show that JYS significantly improves sampling quality, establishing it as a versatile framework for enhancing DDM performance for fast sampling.<|reference_end|> | arxiv | @article{park2024jump,
title={$\textit{Jump Your Steps}$: Optimizing Sampling Schedule of Discrete
Diffusion Models},
author={Yong-Hyun Park and Chieh-Hsin Lai and Satoshi Hayakawa and Yuhta Takida
  and Yuki Mitsufuji},
journal={arXiv preprint arXiv:2410.07761},
year={2024},
archivePrefix={arXiv},
eprint={2410.07761},
primaryClass={cs.LG cs.AI cs.CL cs.CV}
} | park2024jump |
arxiv-668008 | 2410.07762 | QoS-Nets: Adaptive Approximate Neural Network Inference | <|reference_start|>QoS-Nets: Adaptive Approximate Neural Network Inference: In order to vary the arithmetic resource consumption of neural network applications at runtime, this work proposes the flexible reuse of approximate multipliers for neural network layer computations. We introduce a search algorithm that chooses an appropriate subset of approximate multipliers of a user-defined size from a larger search space and enables retraining to maximize task performance. Unlike previous work, our approach can output more than a single, static assignment of approximate multiplier instances to layers. These different operating points allow a system to gradually adapt its Quality of Service (QoS) to changing environmental conditions by increasing or decreasing its accuracy and resource consumption. QoS-Nets achieves this by reassigning the selected approximate multiplier instances to layers at runtime. To combine multiple operating points with the use of retraining, we propose a fine-tuning scheme that shares the majority of parameters between operating points, with only a small amount of additional parameters required per operating point. In our evaluation on MobileNetV2, QoS-Nets is used to select four approximate multiplier instances for three different operating points. These operating points result in power savings for multiplications between 15.3% and 42.8% at a Top-5 accuracy loss between 0.3 and 2.33 percentage points. Through our fine-tuning scheme, all three operating points only increase the model's parameter count by only 2.75%.<|reference_end|> | arxiv | @article{trommer2024qos-nets:,
title={QoS-Nets: Adaptive Approximate Neural Network Inference},
author={Elias Trommer and Bernd Waschneck and Akash Kumar},
journal={arXiv preprint arXiv:2410.07762},
year={2024},
archivePrefix={arXiv},
eprint={2410.07762},
primaryClass={cs.LG}
} | trommer2024qos-nets: |
arxiv-668009 | 2410.07763 | HARIVO: Harnessing Text-to-Image Models for Video Generation | <|reference_start|>HARIVO: Harnessing Text-to-Image Models for Video Generation: We present a method to create diffusion-based video models from pretrained Text-to-Image (T2I) models. Recently, AnimateDiff proposed freezing the T2I model while only training temporal layers. We advance this method by proposing a unique architecture, incorporating a mapping network and frame-wise tokens, tailored for video generation while maintaining the diversity and creativity of the original T2I model. Key innovations include novel loss functions for temporal smoothness and a mitigating gradient sampling technique, ensuring realistic and temporally consistent video generation despite limited public video data. We have successfully integrated video-specific inductive biases into the architecture and loss functions. Our method, built on the frozen StableDiffusion model, simplifies training processes and allows for seamless integration with off-the-shelf models like ControlNet and DreamBooth. project page: https://kwonminki.github.io/HARIVO<|reference_end|> | arxiv | @article{kwon2024harivo:,
title={HARIVO: Harnessing Text-to-Image Models for Video Generation},
author={Mingi Kwon and Seoung Wug Oh and Yang Zhou and Difan Liu and Joon-Young
  Lee and Haoran Cai and Baqiao Liu and Feng Liu and Youngjung Uh},
journal={arXiv preprint arXiv:2410.07763},
year={2024},
archivePrefix={arXiv},
eprint={2410.07763},
primaryClass={cs.CV cs.AI}
} | kwon2024harivo: |
arxiv-668010 | 2410.07764 | Explaining Hypergraph Neural Networks: From Local Explanations to Global Concepts | <|reference_start|>Explaining Hypergraph Neural Networks: From Local Explanations to Global Concepts: Hypergraph neural networks are a class of powerful models that leverage the message passing paradigm to learn over hypergraphs, a generalization of graphs well-suited to describing relational data with higher-order interactions. However, such models are not naturally interpretable, and their explainability has received very limited attention. We introduce SHypX, the first model-agnostic post-hoc explainer for hypergraph neural networks that provides both local and global explanations. At the instance-level, it performs input attribution by discretely sampling explanation subhypergraphs optimized to be faithful and concise. At the model-level, it produces global explanation subhypergraphs using unsupervised concept extraction. Extensive experiments across four real-world and four novel, synthetic hypergraph datasets demonstrate that our method finds high-quality explanations which can target a user-specified balance between faithfulness and concision, improving over baselines by 25 percent points in fidelity on average.<|reference_end|> | arxiv | @article{su2024explaining,
title={Explaining Hypergraph Neural Networks: From Local Explanations to Global
Concepts},
author={Shiye Su and Iulia Duta and Lucie Charlotte Magister and Pietro Li\`o},
journal={arXiv preprint arXiv:2410.07764},
year={2024},
archivePrefix={arXiv},
eprint={2410.07764},
primaryClass={cs.LG}
} | su2024explaining |
arxiv-668011 | 2410.07765 | GameTraversalBenchmark: Evaluating Planning Abilities Of Large Language Models Through Traversing 2D Game Maps | <|reference_start|>GameTraversalBenchmark: Evaluating Planning Abilities Of Large Language Models Through Traversing 2D Game Maps: Large language models (LLMs) have recently demonstrated great success in generating and understanding natural language. While they have also shown potential beyond the domain of natural language, it remains an open question as to what extent and in which way these LLMs can plan. We investigate their planning capabilities by proposing GameTraversalBenchmark (GTB), a benchmark consisting of diverse 2D grid-based game maps. An LLM succeeds if it can traverse through given objectives, with a minimum number of steps and a minimum number of generation errors. We evaluate a number of LLMs on GTB and found that GPT-4-Turbo achieved the highest score of 44.97% on GTB\_Score (GTBS), a composite score that combines the three above criteria. Furthermore, we preliminarily test large reasoning models, namely o1, which scores $67.84\%$ on GTBS, indicating that the benchmark remains challenging for current models. Code, data, and documentation are available at https://github.com/umair-nasir14/Game-Traversal-Benchmark.<|reference_end|> | arxiv | @article{nasir2024gametraversalbenchmark:,
title={GameTraversalBenchmark: Evaluating Planning Abilities Of Large Language
Models Through Traversing 2D Game Maps},
author={Muhammad Umair Nasir and Steven James and Julian Togelius},
journal={arXiv preprint arXiv:2410.07765},
year={2024},
archivePrefix={arXiv},
eprint={2410.07765},
primaryClass={cs.CL cs.AI}
} | nasir2024gametraversalbenchmark: |
arxiv-668012 | 2410.07768 | Dialectical Behavior Therapy Approach to LLM Prompting | <|reference_start|>Dialectical Behavior Therapy Approach to LLM Prompting: Large language models demonstrated state-of-the-art results on various reasoning tasks when applying the chain-of-thought (CoT) prompting technique. CoT prompting guides the model into breaking tasks into a few intermediate steps and provides step-by-step demonstrations. However, solving complex reasoning tasks remains a challenge. In this paper, we propose a novel prompting strategy inspired by Dialectical Behavioral Therapy (DBT). DBT, a form of cognitive-behavioral therapy, aims to help individuals cope with stress by developing a system of reasoning. We applied DBT's basic concepts of shaping dialog to construct prompts and conducted experiments on different datasets and LLMs with various numbers of parameters. Our results show that prompts crafted with DBT techniques significantly improve results on smaller models, achieving a 7% increase in accuracy on the StrategyQA, 4.8% on Aqua dataset using 8b parameters model, and a 16.2% increase on the StrategyQA, 5.3% on GSM8K dataset with 14b parameters model.<|reference_end|> | arxiv | @article{vitman2024dialectical,
title={Dialectical Behavior Therapy Approach to LLM Prompting},
author={Oxana Vitman and Nika Amaglobeli and Paul Plachinda},
journal={arXiv preprint arXiv:2410.07768},
year={2024},
archivePrefix={arXiv},
eprint={2410.07768},
primaryClass={cs.CL cs.LG}
} | vitman2024dialectical |
arxiv-668013 | 2410.07771 | Full-Rank No More: Low-Rank Weight Training for Modern Speech Recognition Models | <|reference_start|>Full-Rank No More: Low-Rank Weight Training for Modern Speech Recognition Models: This paper investigates the under-explored area of low-rank weight training for large-scale Conformer-based speech recognition models from scratch. Our study demonstrates the viability of this training paradigm for such models, yielding several notable findings. Firstly, we discover that applying a low-rank structure exclusively to the attention modules can unexpectedly enhance performance, even with a significant rank reduction of 12%. In contrast, feed-forward layers present greater challenges, as they begin to exhibit performance degradation with a moderate 50% rank reduction. Furthermore, we find that both initialization and layer-wise rank assignment play critical roles in successful low-rank training. Specifically, employing SVD initialization and linear layer-wise rank mapping significantly boosts the efficacy of low-rank weight training. Building on these insights, we introduce the Low-Rank Speech Model from Scratch (LR-SMS), an approach that achieves performance parity with full-rank training while delivering substantial reductions in parameters count (by at least 2x), and training time speedups (by 1.3x for ASR and 1.15x for AVSR).<|reference_end|> | arxiv | @article{fernandez-lopez2024full-rank,
title={Full-Rank No More: Low-Rank Weight Training for Modern Speech
Recognition Models},
author={Adriana Fernandez-Lopez and Shiwei Liu and Lu Yin and Stavros Petridis
  and Maja Pantic},
journal={arXiv preprint arXiv:2410.07771},
year={2024},
archivePrefix={arXiv},
eprint={2410.07771},
primaryClass={cs.SD cs.AI cs.CL cs.CV eess.AS}
} | fernandez-lopez2024full-rank |
arxiv-668014 | 2410.07772 | Towards Quantifying The Privacy Of Redacted Text | <|reference_start|>Towards Quantifying The Privacy Of Redacted Text: In this paper we propose use of a k-anonymity-like approach for evaluating the privacy of redacted text. Given a piece of redacted text we use a state of the art transformer-based deep learning network to reconstruct the original text. This generates multiple full texts that are consistent with the redacted text, i.e. which are grammatical, have the same non-redacted words etc, and represents each of these using an embedding vector that captures sentence similarity. In this way we can estimate the number, diversity and quality of full text consistent with the redacted text and so evaluate privacy.<|reference_end|> | arxiv | @article{gusain2024towards,
title={Towards Quantifying The Privacy Of Redacted Text},
author={Vaibhav Gusain and Douglas Leith},
journal={LNCS, volume 13981, 2023, pp. 423--429},
year={2024},
doi={10.1007/978-3-031-28238-6_32},
archivePrefix={arXiv},
eprint={2410.07772},
primaryClass={cs.LG}
} | gusain2024towards |
arxiv-668015 | 2410.07776 | Median filter method for mean curvature flow using a random Jacobi algorithm | <|reference_start|>Median filter method for mean curvature flow using a random Jacobi algorithm: We present an efficient scheme for level set mean curvature flow using a domain discretization and median filters. For this scheme, we show convergence in $L^\infty$-norm under mild assumptions on the number of points in the discretization. In addition, we strengthen the weak convergence result for the MBO thresholding scheme applied to data clustering of Lelmi and one of the authors. This is done through a strong convergence of the discretized heat flow in the optimal regime. Different boundary conditions are also discussed.<|reference_end|> | arxiv | @article{ullrich2024median,
title={Median filter method for mean curvature flow using a random Jacobi
algorithm},
author={Anton Ullrich and Tim Laux},
journal={arXiv preprint arXiv:2410.07776},
year={2024},
archivePrefix={arXiv},
eprint={2410.07776},
primaryClass={math.AP cs.NA math.NA}
} | ullrich2024median |
arxiv-668016 | 2410.07778 | On the grid-sampling limit SDE | <|reference_start|>On the grid-sampling limit SDE: In our recent work [3] we introduced the grid-sampling SDE as a proxy for modeling exploration in continuous-time reinforcement learning. In this note, we provide further motivation for the use of this SDE and discuss its wellposedness in the presence of jumps.<|reference_end|> | arxiv | @article{bender2024on,
title={On the grid-sampling limit SDE},
author={Christian Bender and Nguyen Tran Thuan},
journal={arXiv preprint arXiv:2410.07778},
year={2024},
archivePrefix={arXiv},
eprint={2410.07778},
primaryClass={stat.ML cs.LG math.PR}
} | bender2024on |
arxiv-668017 | 2410.07779 | Modeling User Preferences with Automatic Metrics: Creating a High-Quality Preference Dataset for Machine Translation | <|reference_start|>Modeling User Preferences with Automatic Metrics: Creating a High-Quality Preference Dataset for Machine Translation: Alignment with human preferences is an important step in developing accurate and safe large language models. This is no exception in machine translation (MT), where better handling of language nuances and context-specific variations leads to improved quality. However, preference data based on human feedback can be very expensive to obtain and curate at a large scale. Automatic metrics, on the other hand, can induce preferences, but they might not match human expectations perfectly. In this paper, we propose an approach that leverages the best of both worlds. We first collect sentence-level quality assessments from professional linguists on translations generated by multiple high-quality MT systems and evaluate the ability of current automatic metrics to recover these preferences. We then use this analysis to curate a new dataset, MT-Pref (metric induced translation preference) dataset, which comprises 18k instances covering 18 language directions, using texts sourced from multiple domains post-2022. We show that aligning TOWER models on MT-Pref significantly improves translation quality on WMT23 and FLORES benchmarks.<|reference_end|> | arxiv | @article{agrawal2024modeling,
title={Modeling User Preferences with Automatic Metrics: Creating a
High-Quality Preference Dataset for Machine Translation},
author={Sweta Agrawal and Jos\'e G. C. de Souza and Ricardo Rei and Ant\'onio
  Farinhas and Gon\c{c}alo Faria and Patrick Fernandes and Nuno M Guerreiro and Andre Martins},
journal={arXiv preprint arXiv:2410.07779},
year={2024},
archivePrefix={arXiv},
eprint={2410.07779},
primaryClass={cs.CL}
} | agrawal2024modeling |
arxiv-668018 | 2410.07780 | Neural Semantic Map-Learning for Autonomous Vehicles | <|reference_start|>Neural Semantic Map-Learning for Autonomous Vehicles: Autonomous vehicles demand detailed maps to maneuver reliably through traffic, which need to be kept up-to-date to ensure a safe operation. A promising way to adapt the maps to the ever-changing road-network is to use crowd-sourced data from a fleet of vehicles. In this work, we present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment including drivable area, lane markings, poles, obstacles and more as a 3D mesh. Each vehicle contributes locally reconstructed submaps as lightweight meshes, making our method applicable to a wide range of reconstruction methods and sensor modalities. Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field, which is supervised using the submap meshes to predict a fused environment representation. We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction. Our approach is evaluated on two datasets with different local mapping methods, showing improved pose alignment and reconstruction over existing methods. Additionally, we demonstrate the benefit of multi-session mapping and examine the required amount of data to enable high-fidelity map learning for autonomous vehicles.<|reference_end|> | arxiv | @article{herb2024neural,
title={Neural Semantic Map-Learning for Autonomous Vehicles},
author={Markus Herb and Nassir Navab and Federico Tombari},
journal={arXiv preprint arXiv:2410.07780},
year={2024},
archivePrefix={arXiv},
eprint={2410.07780},
primaryClass={cs.RO cs.CV}
} | herb2024neural |
arxiv-668019 | 2410.07783 | CLIP Multi-modal Hashing for Multimedia Retrieval | <|reference_start|>CLIP Multi-modal Hashing for Multimedia Retrieval: Multi-modal hashing methods are widely used in multimedia retrieval, which can fuse multi-source data to generate binary hash code. However, the individual backbone networks have limited feature expression capabilities and are not jointly pre-trained on large-scale unsupervised multi-modal data, resulting in low retrieval accuracy. To address this issue, we propose a novel CLIP Multi-modal Hashing (CLIPMH) method. Our method employs the CLIP framework to extract both text and vision features and then fuses them to generate hash code. Due to enhancement on each modal feature, our method has great improvement in the retrieval performance of multi-modal hashing methods. Compared with state-of-the-art unsupervised and supervised multi-modal hashing methods, experiments reveal that the proposed CLIPMH can significantly improve performance (a maximum increase of 8.38% in mAP).<|reference_end|> | arxiv | @article{zhu2024clip,
title={CLIP Multi-modal Hashing for Multimedia Retrieval},
author={Jian Zhu and Mingkai Sheng and Zhangmin Huang and Jingfei Chang and
  Jinling Jiang and Jian Long and Cheng Luo and Lei Liu},
journal={arXiv preprint arXiv:2410.07783},
year={2024},
archivePrefix={arXiv},
eprint={2410.07783},
primaryClass={cs.CV}
} | zhu2024clip |
arxiv-668020 | 2410.07786 | Orthogonal Nonnegative Matrix Factorization with the Kullback-Leibler divergence | <|reference_start|>Orthogonal Nonnegative Matrix Factorization with the Kullback-Leibler divergence: Orthogonal nonnegative matrix factorization (ONMF) has become a standard approach for clustering. As far as we know, most works on ONMF rely on the Frobenius norm to assess the quality of the approximation. This paper presents a new model and algorithm for ONMF that minimizes the Kullback-Leibler (KL) divergence. As opposed to the Frobenius norm which assumes Gaussian noise, the KL divergence is the maximum likelihood estimator for Poisson-distributed data, which can model better vectors of word counts in document data sets and photo counting processes in imaging. We have developed an algorithm based on alternating optimization, KL-ONMF, and show that it performs favorably with the Frobenius-norm based ONMF for document classification and hyperspectral image unmixing.<|reference_end|> | arxiv | @article{nkurunziza2024orthogonal,
title={Orthogonal Nonnegative Matrix Factorization with the Kullback-Leibler
divergence},
author={Jean Pacifique Nkurunziza, Fulgence Nahayo, Nicolas Gillis},
journal={arXiv preprint arXiv:2410.07786},
year={2024},
archivePrefix={arXiv},
eprint={2410.07786},
primaryClass={stat.ML cs.IR cs.LG eess.SP}
} | nkurunziza2024orthogonal |
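For context, the building block behind KL-ONMF is NMF under the generalized KL divergence. The sketch below shows the classical Lee-Seung multiplicative updates for plain KL-NMF; the orthogonality constraint that distinguishes KL-ONMF is deliberately left out, since the paper's alternating scheme for it is not spelled out in the abstract:

```python
import numpy as np

def kl_div(V, WH, eps=1e-12):
    """Generalized Kullback-Leibler divergence D(V || WH)."""
    return float(np.sum(V * np.log((V + eps) / (WH + eps)) - V + WH))

def kl_nmf_step(V, W, H, eps=1e-12):
    """One round of Lee-Seung multiplicative updates for KL-NMF."""
    H = H * (W.T @ (V / (W @ H + eps))) / (W.T @ np.ones_like(V) + eps)
    W = W * ((V / (W @ H + eps)) @ H.T) / (np.ones_like(V) @ H.T + eps)
    return W, H

rng = np.random.default_rng(0)
V = rng.random((20, 30)) + 0.1        # nonnegative data (e.g. word counts)
W, H = rng.random((20, 4)) + 0.1, rng.random((4, 30)) + 0.1
d0 = kl_div(V, W @ H)
for _ in range(50):
    W, H = kl_nmf_step(V, W, H)       # objective is non-increasing per step
d1 = kl_div(V, W @ H)
```

The multiplicative form keeps `W` and `H` nonnegative automatically, which is why it is the standard starting point for KL-based factorization variants.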
arxiv-668021 | 2410.07787 | Mastering Contact-rich Tasks by Combining Soft and Rigid Robotics with Imitation Learning | <|reference_start|>Mastering Contact-rich Tasks by Combining Soft and Rigid Robotics with Imitation Learning: Soft robots have the potential to revolutionize the use of robotic systems with their capability of establishing safe, robust, and adaptable interactions with their environment, but their precise control remains challenging. In contrast, traditional rigid robots offer high accuracy and repeatability but lack the flexibility of soft robots. We argue that combining these characteristics in a hybrid robotic platform can significantly enhance overall capabilities. This work presents a novel hybrid robotic platform that integrates a rigid manipulator with a fully developed soft arm. This system is equipped with the intelligence necessary to perform flexible and generalizable tasks through imitation learning autonomously. The physical softness and machine learning enable our platform to achieve highly generalizable skills, while the rigid components ensure precision and repeatability.<|reference_end|> | arxiv | @article{montero2024mastering,
title={Mastering Contact-rich Tasks by Combining Soft and Rigid Robotics with
Imitation Learning},
author={Mariano Ram\'irez Montero, Ebrahim Shahabi, Giovanni Franzese, Jens
Kober, Barbara Mazzolai, Cosimo Della Santina},
journal={arXiv preprint arXiv:2410.07787},
year={2024},
archivePrefix={arXiv},
eprint={2410.07787},
primaryClass={cs.RO cs.AI}
} | montero2024mastering |
arxiv-668022 | 2410.07790 | Enhancing Hyperspectral Image Prediction with Contrastive Learning in Low-Label Regime | <|reference_start|>Enhancing Hyperspectral Image Prediction with Contrastive Learning in Low-Label Regime: Self-supervised contrastive learning is an effective approach for addressing the challenge of limited labelled data. This study builds upon the previously established two-stage patch-level, multi-label classification method for hyperspectral remote sensing imagery. We evaluate the method's performance for both the single-label and multi-label classification tasks, particularly under scenarios of limited training data. The methodology unfolds in two stages. Initially, we focus on training an encoder and a projection network using a contrastive learning approach. This step is crucial for enhancing the ability of the encoder to discern patterns within the unlabelled data. Next, we employ the pre-trained encoder to guide the training of two distinct predictors: one for multi-label and another for single-label classification. Empirical results on four public datasets show that the predictors trained with our method perform better than those trained under fully supervised techniques. Notably, the performance is maintained even when the amount of training data is reduced by $50\%$. This advantage is consistent across both tasks. The method's effectiveness comes from its streamlined architecture. This design allows for retraining the encoder along with the predictor. As a result, the encoder becomes more adaptable to the features identified by the classifier, improving the overall classification performance. Qualitative analysis reveals the contrastive-learning-based encoder's capability to provide representations that allow separation among classes and identify location-based features despite not being explicitly trained for that. 
This observation indicates the method's potential in uncovering implicit spatial information within the data.<|reference_end|> | arxiv | @article{haidar2024enhancing,
title={Enhancing Hyperspectral Image Prediction with Contrastive Learning in
Low-Label Regime},
author={Salma Haidar and Jos\'e Oramas},
journal={arXiv preprint arXiv:2410.07790},
year={2024},
archivePrefix={arXiv},
eprint={2410.07790},
primaryClass={cs.CV}
} | haidar2024enhancing |
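The abstract does not name the stage-one contrastive objective, so as a hedged illustration, here is the widely used InfoNCE loss that such encoder/projection-network pretraining typically minimizes (matched rows of the two views are treated as positive pairs):

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Contrastive (InfoNCE) loss; matched rows of z1 and z2 are positives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))               # patch embeddings (stand-ins)
loss_aligned = info_nce(z, z)                  # identical "views": easy positives
loss_random = info_nce(z, rng.standard_normal((8, 16)))
```

Minimizing this loss pulls matched embeddings together and pushes unmatched ones apart, which is the pattern-discovery step the abstract attributes to stage one.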
arxiv-668023 | 2410.07791 | Heracles: A HfO$\mathrm{_2}$ Ferroelectric Capacitor Compact Model for Efficient Circuit Simulations | <|reference_start|>Heracles: A HfO$\mathrm{_2}$ Ferroelectric Capacitor Compact Model for Efficient Circuit Simulations: This paper presents a physics-based compact model for circuit simulations in a SPICE environment for HfO2-based ferroelectric capacitors (FeCaps). The model has been calibrated based on experimental data obtained from HfO2-based FeCaps. A thermal model with an accurate description of the device parasitics is included to derive precise device characteristics based on first principles. The model incorporates statistical data that enables Monte Carlo analysis based on realistic distributions, thereby making it particularly well-suited for design-technology co-optimization (DTCO). Furthermore, the model is demonstrated in circuit simulations using an integrated circuit with current programming, wherein partial switching of the ferroelectric polarization is observed. Finally, the model was benchmarked in an array simulation, reaching convergence in 1.8 s with an array size of 100 kb.<|reference_end|> | arxiv | @article{fehlings2024heracles:,
title={Heracles: A HfO$\mathrm{_2}$ Ferroelectric Capacitor Compact Model for
Efficient Circuit Simulations},
author={Luca Fehlings, Md Hanif Ali, Paolo Gibertini, Egidio A. Gallicchio,
Udayan Ganguly, Veeresh Deshpande, Erika Covi},
journal={arXiv preprint arXiv:2410.07791},
year={2024},
archivePrefix={arXiv},
eprint={2410.07791},
primaryClass={cs.ET}
} | fehlings2024heracles: |
arxiv-668024 | 2410.07793 | Do Current Language Models Support Code Intelligence for R Programming Language? | <|reference_start|>Do Current Language Models Support Code Intelligence for R Programming Language?: Recent advancements in developing Pre-trained Language Models for Code (Code-PLMs) have urged many areas of Software Engineering (SE) and brought breakthrough results for many SE tasks. Though these models have achieved the state-of-the-art performance for SE tasks for many popular programming languages, such as Java and Python, the Scientific Software and its related languages like R programming language have rarely benefited or even been evaluated with the Code-PLMs. Research has shown that R has many differences with other programming languages and requires specific techniques. In this study, we provide the first insights for code intelligence for R. For this purpose, we collect and open source an R dataset, and evaluate Code-PLMs for the two tasks of code summarization and method name prediction using several settings and strategies, including the differences in two R styles, Tidy-verse and Base R. Our results demonstrate that the studied models have experienced varying degrees of performance degradation when processing R programming language code, which is supported by human evaluation. Additionally, not all models show performance improvement in R-specific tasks even after multi-language fine-tuning. The dual syntax paradigms in R significantly impact the models' performance, particularly in code summarization tasks. Furthermore, the project-specific context inherent in R codebases significantly impacts the performance when attempting cross-project training.<|reference_end|> | arxiv | @article{zhao2024do,
title={Do Current Language Models Support Code Intelligence for R Programming
Language?},
author={ZiXiao Zhao, Fatemeh H. Fard},
journal={arXiv preprint arXiv:2410.07793},
year={2024},
archivePrefix={arXiv},
eprint={2410.07793},
primaryClass={cs.SE cs.AI}
} | zhao2024do |
arxiv-668025 | 2410.07795 | Optimal-State Dynamics Estimation for Physics-based Human Motion Capture from Videos | <|reference_start|>Optimal-State Dynamics Estimation for Physics-based Human Motion Capture from Videos: Human motion capture from monocular videos has made significant progress in recent years. However, modern approaches often produce temporal artifacts, e.g. in form of jittery motion and struggle to achieve smooth and physically plausible motions. Explicitly integrating physics, in form of internal forces and exterior torques, helps alleviating these artifacts. Current state-of-the-art approaches make use of an automatic PD controller to predict torques and reaction forces in order to re-simulate the input kinematics, i.e. the joint angles of a predefined skeleton. However, due to imperfect physical models, these methods often require simplifying assumptions and extensive preprocessing of the input kinematics to achieve good performance. To this end, we propose a novel method to selectively incorporate the physics models with the kinematics observations in an online setting, inspired by a neural Kalman-filtering approach. We develop a control loop as a meta-PD controller to predict internal joint torques and external reaction forces, followed by a physics-based motion simulation. A recurrent neural network is introduced to realize a Kalman filter that attentively balances the kinematics input and simulated motion, resulting in an optimal-state dynamics prediction. We show that this filtering step is crucial to provide an online supervision that helps balancing the shortcoming of the respective input motions, thus being important for not only capturing accurate global motion trajectories but also producing physically plausible human poses. The proposed approach excels in the physics-based human pose estimation task and demonstrates the physical plausibility of the predictive dynamics, compared to state of the art. 
The code is available on https://github.com/cuongle1206/OSDCap<|reference_end|> | arxiv | @article{le2024optimal-state,
title={Optimal-state Dynamics Estimation for Physics-based Human Motion Capture
from Videos},
author={Cuong Le, Viktor Johansson, Manon Kok, Bastian Wandt},
journal={arXiv preprint arXiv:2410.07795},
year={2024},
archivePrefix={arXiv},
eprint={2410.07795},
primaryClass={cs.CV}
} | le2024optimal-state |
arxiv-668026 | 2410.07796 | Reachability Analysis for Black-Box Dynamical Systems | <|reference_start|>Reachability Analysis for Black-Box Dynamical Systems: Hamilton-Jacobi (HJ) reachability analysis is a powerful framework for ensuring safety and performance in autonomous systems. However, existing methods typically rely on a white-box dynamics model of the system, limiting their applicability in many practical robotics scenarios where only a black-box model of the system is available. In this work, we propose a novel reachability method to compute reachable sets and safe controllers for black-box dynamical systems. Our approach efficiently approximates the Hamiltonian function using samples from the black-box dynamics. This Hamiltonian is then used to solve the HJ Partial Differential Equation (PDE), providing the reachable set of the system. The proposed method can be applied to general nonlinear systems and can be seamlessly integrated with existing reachability toolboxes for white-box systems to extend their use to black-box systems. Through simulation studies on a black-box slip-wheel car and a quadruped robot, we demonstrate the effectiveness of our approach in accurately obtaining the reachable sets for black-box dynamical systems.<|reference_end|> | arxiv | @article{chilakamarri2024reachability,
title={Reachability Analysis for Black-Box Dynamical Systems},
author={Vamsi Krishna Chilakamarri, Zeyuan Feng, and Somil Bansal},
journal={arXiv preprint arXiv:2410.07796},
year={2024},
archivePrefix={arXiv},
eprint={2410.07796},
primaryClass={eess.SY cs.SY}
} | chilakamarri2024reachability |
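The key step — approximating the Hamiltonian from samples of a black-box simulator — can be sketched as below. The Dubins-style dynamics, the costate value, and the control grid are illustrative stand-ins; the resulting H(x, p) would then be plugged into a standard HJ PDE solver:

```python
import numpy as np

def blackbox_dynamics(x, u):
    """Stand-in black-box system (Dubins-like car): queried, never differentiated."""
    theta = x[2]
    return np.array([np.cos(theta), np.sin(theta), u])

def sampled_hamiltonian(x, p, controls):
    """H(x, p) = min_u p . f(x, u), approximated over sampled controls."""
    return min(float(p @ blackbox_dynamics(x, u)) for u in controls)

x = np.array([0.0, 0.0, 0.0])            # state: (px, py, heading)
p = np.array([1.0, 0.0, 0.5])            # costate (value-function gradient)
controls = np.linspace(-1.0, 1.0, 11)    # sampled turn-rate grid
H = sampled_hamiltonian(x, p, controls)  # here p.f = 1 + 0.5*u, minimized at u = -1
```

Only forward evaluations of `blackbox_dynamics` are used, which is what makes the approach applicable when no analytic model is available.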
arxiv-668027 | 2410.07797 | Rewriting Conversational Utterances with Instructed Large Language Models | <|reference_start|>Rewriting Conversational Utterances with Instructed Large Language Models: Many recent studies have shown the ability of large language models (LLMs) to achieve state-of-the-art performance on many NLP tasks, such as question answering, text summarization, coding, and translation. In some cases, the results provided by LLMs are on par with those of human experts. These models' most disruptive innovation is their ability to perform tasks via zero-shot or few-shot prompting. This capability has been successfully exploited to train instructed LLMs, where reinforcement learning with human feedback is used to guide the model to follow the user's requests directly. In this paper, we investigate the ability of instructed LLMs to improve conversational search effectiveness by rewriting user questions in a conversational setting. We study which prompts provide the most informative rewritten utterances that lead to the best retrieval performance. Reproducible experiments are conducted on publicly-available TREC CAST datasets. The results show that rewriting conversational utterances with instructed LLMs achieves significant improvements of up to 25.2% in MRR, 31.7% in Precision@1, 27% in NDCG@3, and 11.5% in Recall@500 over state-of-the-art techniques.<|reference_end|> | arxiv | @article{galimzhanova2024rewriting,
title={Rewriting Conversational Utterances with Instructed Large Language
Models},
author={Elnara Galimzhanova, Cristina Ioana Muntean, Franco Maria Nardini,
Raffaele Perego, Guido Rocchietti},
journal={2023 IEEE/WIC International Conference on Web Intelligence and
Intelligent Agent Technology (WI-IAT)},
year={2024},
doi={10.1109/WI-IAT59888.2023.00014},
archivePrefix={arXiv},
eprint={2410.07797},
primaryClass={cs.CL cs.AI cs.HC cs.IR}
} | galimzhanova2024rewriting |
arxiv-668028 | 2410.07798 | vCLIC: Towards Fast Interrupt Handling in Virtualized RISC-V Mixed-criticality Systems | <|reference_start|>vCLIC: Towards Fast Interrupt Handling in Virtualized RISC-V Mixed-criticality Systems: The widespread diffusion of compute-intensive edge-AI workloads and the stringent demands of modern autonomous systems require advanced heterogeneous embedded architectures. Such architectures must support high-performance and reliable execution of parallel tasks with different levels of criticality. Hardware-assisted virtualization is crucial for isolating applications concurrently executing these tasks under real-time constraints, but interrupt virtualization poses challenges in ensuring transparency to virtual guests while maintaining real-time system features, such as interrupt vectoring, nesting, and tail-chaining. Despite its rapid advancement to address virtualization needs for mixed-criticality systems, the RISC-V ecosystem still lacks interrupt controllers with integrated virtualization and real-time features, currently relying on non-deterministic, bus-mediated message-signaled interrupts (MSIs) for virtualization. To overcome this limitation, we present the design, implementation, and in-system assessment of vCLIC, a virtualization extension to the RISC-V CLIC fast interrupt controller. Our approach achieves 20x interrupt latency speed-up over the software emulation required for handling non-virtualization-aware systems, reduces response latency by 15% compared to existing MSI-based approaches, and is free from interference from the system bus, at an area cost of just 8kGE when synthesized in an advanced 16nm FinFet technology.<|reference_end|> | arxiv | @article{zelioli2024vclic:,
title={vCLIC: Towards Fast Interrupt Handling in Virtualized RISC-V
Mixed-criticality Systems},
author={Enrico Zelioli, Alessandro Ottaviano, Robert Balas, Nils Wistoff,
Angelo Garofalo, Luca Benini},
journal={arXiv preprint arXiv:2410.07798},
year={2024},
archivePrefix={arXiv},
eprint={2410.07798},
primaryClass={cs.AR}
} | zelioli2024vclic: |
arxiv-668029 | 2410.07799 | Mind the Gap: a Spectral Analysis of Rank Collapse and Signal Propagation in Transformers | <|reference_start|>Mind the Gap: a Spectral Analysis of Rank Collapse and Signal Propagation in Transformers: Attention layers are the core component of transformers, the current state-of-the-art neural network architecture. However, softmax-based attention puts transformers' trainability at risk. Even at initialisation, the propagation of signals and gradients through the random network can be pathological, resulting in known issues such as (i) vanishing/exploding gradients and (ii) rank collapse, i.e. when all tokens converge to a single representation with depth. This paper examines signal propagation in attention-only transformers from a random matrix perspective, illuminating the origin of such issues, as well as unveiling a new phenomenon -- (iii) rank collapse in width. Modelling softmax-based attention at initialisation with Random Markov matrices, our theoretical analysis reveals that a spectral gap between the two largest singular values of the attention matrix causes (iii), which, in turn, exacerbates (i) and (ii). Building on this insight, we propose a novel, yet simple, practical solution to resolve rank collapse in width by removing the spectral gap. Moreover, we validate our findings and discuss the training benefits of the proposed fix through experiments that also motivate a revision of some of the default parameter scaling. Our attention model accurately describes the standard key-query attention in a single-layer transformer, making this work a significant first step towards a better understanding of the initialisation dynamics in the multi-layer case.<|reference_end|> | arxiv | @article{naderi2024mind,
title={Mind the Gap: a Spectral Analysis of Rank Collapse and Signal
Propagation in Transformers},
author={Alireza Naderi, Thiziri Nait Saada, Jared Tanner},
journal={arXiv preprint arXiv:2410.07799},
year={2024},
archivePrefix={arXiv},
eprint={2410.07799},
primaryClass={cs.LG stat.ML}
} | naderi2024mind |
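The spectral gap is easy to reproduce numerically: a random row-stochastic (softmaxed) matrix has a top singular value of about 1 that dominates the rest, and subtracting the rank-one mean component 11^T/n closes the gap. This sketch only illustrates the phenomenon; the paper's actual fix may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
A = np.exp(rng.standard_normal((n, n)))
A /= A.sum(axis=1, keepdims=True)         # softmax rows -> random Markov matrix

s = np.linalg.svd(A, compute_uv=False)    # s[0] ~ 1 dominates s[1] ~ O(1/sqrt(n))

A_centered = A - np.ones((n, n)) / n      # remove the rank-one 11^T/n component
s_c = np.linalg.svd(A_centered, compute_uv=False)  # gap is gone
```

Because A is row-stochastic, the all-ones vector gives `||A v|| = ||v||`, pinning the top singular value near 1 while the bulk spectrum shrinks with width — exactly the gap the paper identifies.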
arxiv-668030 | 2410.07801 | Robotic framework for autonomous manipulation of laboratory equipment with different degrees of transparency via 6D pose estimation | <|reference_start|>Robotic framework for autonomous manipulation of laboratory equipment with different degrees of transparency via 6D pose estimation: Many modern robotic systems operate autonomously, however they often lack the ability to accurately analyze the environment and adapt to changing external conditions, while teleoperation systems often require special operator skills. In the field of laboratory automation, the number of automated processes is growing, however such systems are usually developed to perform specific tasks. In addition, many of the objects used in this field are transparent, making it difficult to analyze them using visual channels. The contributions of this work include the development of a robotic framework with autonomous mode for manipulating liquid-filled objects with different degrees of transparency in complex pose combinations. The conducted experiments demonstrated the robustness of the designed visual perception system to accurately estimate object poses for autonomous manipulation, and confirmed the performance of the algorithms in dexterous operations such as liquid dispensing. The proposed robotic framework can be applied for laboratory automation, since it allows solving the problem of performing non-trivial manipulation tasks with the analysis of object poses of varying degrees of transparency and liquid levels, requiring high accuracy and repeatability.<|reference_end|> | arxiv | @article{makarova2024lucidgrasp:,
title={LucidGrasp: Robotic Framework for Autonomous Manipulation of Laboratory
Equipment with Different Degrees of Transparency via 6D Pose Estimation},
author={Maria Makarova, Daria Trinitatova, Qian Liu and Dzmitry Tsetserukou},
journal={arXiv preprint arXiv:2410.07801},
year={2024},
archivePrefix={arXiv},
eprint={2410.07801},
primaryClass={cs.RO cs.CV cs.SE cs.SY eess.SY}
} | makarova2024lucidgrasp: |
arxiv-668031 | 2410.07803 | MGMD-GAN: Generalization Improvement of Generative Adversarial Networks with Multiple Generator Multiple Discriminator Framework Against Membership Inference Attacks | <|reference_start|>MGMD-GAN: Generalization Improvement of Generative Adversarial Networks with Multiple Generator Multiple Discriminator Framework Against Membership Inference Attacks: Generative Adversarial Networks (GAN) are among the widely used Generative models in various applications. However, the original GAN architecture may memorize the distribution of the training data and, therefore, poses a threat to Membership Inference Attacks. In this work, we propose a new GAN framework that consists of Multiple Generators and Multiple Discriminators (MGMD-GAN). Disjoint partitions of the training data are used to train this model and it learns the mixture distribution of all the training data partitions. In this way, our proposed model reduces the generalization gap which makes our MGMD-GAN less vulnerable to Membership Inference Attacks. We provide an experimental analysis of our model and also a comparison with other GAN frameworks.<|reference_end|> | arxiv | @article{arefin2024mgmd-gan:,
title={MGMD-GAN: Generalization Improvement of Generative Adversarial Networks
with Multiple Generator Multiple Discriminator Framework Against Membership
Inference Attacks},
author={Nirob Arefin},
journal={arXiv preprint arXiv:2410.07803},
year={2024},
archivePrefix={arXiv},
eprint={2410.07803},
primaryClass={cs.LG}
} | arefin2024mgmd-gan: |
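The data layout the abstract describes — disjoint training partitions, one per generator-discriminator pair, with generation drawing from the resulting mixture — can be sketched as follows. The partition count and uniform mixture weights are illustrative assumptions, not details from the paper:

```python
import numpy as np

def disjoint_partitions(n_samples, n_pairs, rng):
    """Shuffle indices and split them into disjoint parts, one per G-D pair."""
    idx = rng.permutation(n_samples)
    return np.array_split(idx, n_pairs)

def sample_generator(n_pairs, rng):
    """At generation time, draw (uniformly here) which generator produces a sample."""
    return int(rng.integers(n_pairs))

rng = np.random.default_rng(0)
parts = disjoint_partitions(100, 4, rng)  # each G-D pair sees only its own shard
g = sample_generator(4, rng)              # mixture over the 4 trained generators
```

Because no single generator sees the whole training set, memorizing any one record is harder, which is the intuition behind the reduced membership-inference risk.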
arxiv-668032 | 2410.07804 | Intuitive interaction flow: A Dual-Loop Human-Machine Collaboration Task Allocation Model and an experimental study | <|reference_start|>Intuitive interaction flow: A Dual-Loop Human-Machine Collaboration Task Allocation Model and an experimental study: This study investigates the issue of task allocation in Human-Machine Collaboration (HMC) within the context of Industry 4.0. By integrating philosophical insights and cognitive science, it clearly defines two typical modes of human behavior in human-machine interaction (HMI): skill-based intuitive behavior and knowledge-based intellectual behavior. Building on this, the concept of 'intuitive interaction flow' is innovatively introduced by combining human intuition with machine humanoid intelligence, leading to the construction of a dual-loop HMC task allocation model. Through comparative experiments measuring electroencephalogram (EEG) and electromyogram (EMG) activities, distinct physiological patterns associated with these behavior modes are identified, providing a preliminary foundation for future adaptive HMC frameworks. This work offers a pathway for developing intelligent HMC systems that effectively integrate human intuition and machine intelligence in Industry 4.0.<|reference_end|> | arxiv | @article{xu2024intuitive,
title={Intuitive interaction flow: A Dual-Loop Human-Machine Collaboration Task
Allocation Model and an experimental study},
author={Jiang Xu and Qiyang Miao and Ziyuan Huang and Yilin Lu and Lingyun Sun
and Tianyang Yu and Jingru Pei and Qichao Zhao},
journal={arXiv preprint arXiv:2410.07804},
year={2024},
archivePrefix={arXiv},
eprint={2410.07804},
primaryClass={cs.HC}
} | xu2024intuitive |
arxiv-668033 | 2410.07806 | Deep and Probabilistic Solar Irradiance Forecast at the Arctic Circle | <|reference_start|>Deep and Probabilistic Solar Irradiance Forecast at the Arctic Circle: Solar irradiance forecasts can be dynamic and unreliable due to changing weather conditions. Near the Arctic circle, this also translates into a distinct set of further challenges. This work is forecasting solar irradiance with Norwegian data using variations of Long-Short-Term Memory units (LSTMs). In order to gain more trustworthiness of results, the probabilistic approaches Quantile Regression (QR) and Maximum Likelihood (MLE) are optimized on top of the LSTMs, providing measures of uncertainty for the results. MLE is further extended by using a Johnson's SU distribution, a Johnson's SB distribution, and a Weibull distribution in addition to a normal Gaussian to model parameters. Contrary to a Gaussian, Weibull, Johnson's SU and Johnson's SB can return skewed distributions, enabling it to fit the non-normal solar irradiance distribution more optimally. The LSTMs are compared against each other, a simple Multi-layer Perceptron (MLP), and a smart-persistence estimator. The proposed LSTMs are found to be more accurate than smart persistence and the MLP for a multi-horizon, day-ahead (36 hours) forecast. The deterministic LSTM showed better root mean squared error (RMSE), but worse mean absolute error (MAE) than a MLE with Johnson's SB distribution. Probabilistic uncertainty estimation is shown to fit relatively well across the distribution of observed irradiance. While QR shows better uncertainty estimation calibration, MLE with Johnson's SB, Johnson's SU, or Gaussian show better performance in the other metrics employed. Optimizing and comparing the models against each other reveals a seemingly inherent trade-off between point-prediction and uncertainty estimation calibration.<|reference_end|> | arxiv | @article{erdmann2024deep,
title={Deep and Probabilistic Solar Irradiance Forecast at the Arctic Circle},
author={Niklas Erdmann, Lars {\O}. Bentsen, Roy Stenbro, Heine N. Riise,
Narada Warakagoda, Paal Engelstad},
journal={arXiv preprint arXiv:2410.07806},
year={2024},
archivePrefix={arXiv},
eprint={2410.07806},
primaryClass={cs.LG}
} | erdmann2024deep |
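The quantile-regression heads mentioned in this abstract are conventionally trained with the pinball loss; a minimal version, independent of any LSTM specifics, is:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: asymmetric penalty that targets quantile q."""
    e = y_true - y_pred
    return float(np.mean(np.maximum(q * e, (q - 1.0) * e)))

y = np.array([1.0, 2.0, 3.0])
under = pinball_loss(y, y - 1.0, 0.9)  # under-prediction: penalized heavily at q=0.9
over = pinball_loss(y, y + 1.0, 0.9)   # over-prediction: penalized lightly
```

At q = 0.9 the loss penalizes under-prediction nine times more heavily than over-prediction, which is what pushes a trained head toward the 90th percentile of the irradiance distribution.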
arxiv-668034 | 2410.07809 | Linguistically-Informed Multilingual Instruction Tuning: Is There an Optimal Set of Languages to Tune? | <|reference_start|>Linguistically-Informed Multilingual Instruction Tuning: Is There an Optimal Set of Languages to Tune?: Multilingual language models often perform unevenly across different languages due to limited generalization capabilities for some languages. This issue is significant because of the growing interest in making universal language models that work well for all languages. Instruction tuning with multilingual instruction-response pairs has been used to improve model performance across various languages. However, this approach is challenged by high computational costs, a lack of quality tuning data for all languages, and the "curse of multilinguality" -- the performance drop per language after adding many languages. Recent studies have found that working with datasets with few languages and a smaller number of instances can be beneficial. Yet, there exists no systematic investigation into how choosing different languages affects multilingual instruction tuning. Our study proposes a method to select languages for instruction tuning in a linguistically informed way, aiming to boost model performance across languages and tasks. We use a simple algorithm to choose diverse languages and test their effectiveness on various benchmarks and open-ended questions. Our results show that this careful selection generally leads to better outcomes than choosing languages at random. We suggest a new and simple way of enhancing multilingual models by selecting diverse languages based on linguistic features that could help develop better multilingual systems and guide dataset creation efforts. 
All resources, including the code for language selection and multilingual instruction tuning, are made available in our official repository at https://github.com/GGLAB-KU/ling-informed-mit enabling reproducibility and further research in this area.<|reference_end|> | arxiv | @article{soykan2024linguistically-informed,
title={Linguistically-Informed Multilingual Instruction Tuning: Is There an
Optimal Set of Languages to Tune?},
author={G\"urkan Soykan, G\"ozde G\"ul \c{S}ahin},
journal={arXiv preprint arXiv:2410.07809},
year={2024},
archivePrefix={arXiv},
eprint={2410.07809},
primaryClass={cs.CL cs.LG}
} | soykan2024linguistically-informed |
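The abstract only says "a simple algorithm to choose diverse languages", so the farthest-point greedy selection over typological feature vectors below is one plausible instantiation, not the paper's confirmed method:

```python
import numpy as np

def farthest_point_selection(features, k):
    """Greedy max-min-distance selection of k diverse items (rows of features)."""
    chosen = [0]                                   # seed with the first candidate
    while len(chosen) < k:
        # distance from every candidate to its nearest already-chosen item
        diffs = features[:, None, :] - features[chosen][None, :, :]
        d = np.min(np.linalg.norm(diffs, axis=-1), axis=1)
        d[chosen] = -np.inf                        # never re-pick a chosen item
        chosen.append(int(np.argmax(d)))
    return chosen

rng = np.random.default_rng(0)
feats = rng.random((10, 5))   # e.g. 10 candidate languages x 5 typological features
picked = farthest_point_selection(feats, 3)
```

Each round adds the candidate farthest from everything already selected, so the chosen set spreads across the feature space rather than clustering in one language family.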
arxiv-668035 | 2410.07810 | Towards Robust IoT Defense: Comparative Statistics of Attack Detection in Resource-Constrained Scenarios | <|reference_start|>Towards Robust IoT Defense: Comparative Statistics of Attack Detection in Resource-Constrained Scenarios: Resource constraints pose a significant cybersecurity threat to IoT smart devices, making them vulnerable to various attacks, including those targeting energy and memory. This study underscores the need for innovative security measures due to resource-related incidents in smart devices. In this paper, we conduct an extensive statistical analysis of cyberattack detection algorithms under resource constraints to identify the most efficient one. Our research involves a comparative analysis of various algorithms, including those from our previous work. We specifically compare a lightweight algorithm for detecting resource-constrained cyberattacks with another designed for the same purpose. The latter employs TinyML for detection. In addition to the comprehensive evaluation of the proposed algorithms, we introduced a novel detection method for resource-constrained attacks. This method involves analyzing protocol data and categorizing the final data packet as normal or attacked. The attacked data is further analyzed in terms of the memory and energy consumption of the devices to determine whether it is an energy or memory attack or another form of malicious activity. We compare the suggested algorithm performance using four evaluation metrics: accuracy, PoD, PoFA, and PoM. The proposed dynamic techniques dynamically select the classifier with the best results for detecting attacks, ensuring optimal performance even within resource-constrained IoT environments. 
The results indicate that the proposed algorithms outperform the existing works with accuracy for algorithms with TinyML and without TinyML of 99.3\%, 98.2\%, a probability of detection of 99.4\%, 97.3\%, a probability of false alarm of 1.23\%, 1.64\%, a probability of misdetection of 1.64\%, 1.46 respectively. In contrast, the accuracy of the novel detection mechanism exceeds 99.5\% for RF and 97\% for SVM.<|reference_end|> | arxiv | @article{alwaisi2024towards,
title={Towards Robust IoT Defense: Comparative Statistics of Attack Detection
in Resource-Constrained Scenarios},
author={Zainab Alwaisi, Simone Soderi},
journal={arXiv preprint arXiv:2410.07810},
year={2024},
archivePrefix={arXiv},
eprint={2410.07810},
primaryClass={cs.CR}
} | alwaisi2024towards |
arxiv-668036 | 2410.07812 | Temporal-Difference Variational Continual Learning | <|reference_start|>Temporal-Difference Variational Continual Learning: A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks. This adaptability allows them to respond to potentially inevitable shifts in the data-generating distribution over time. However, in Continual Learning (CL) settings, models often struggle to balance learning new tasks (plasticity) with retaining previous knowledge (memory stability). Consequently, they are susceptible to Catastrophic Forgetting, which degrades performance and undermines the reliability of deployed systems. Variational Continual Learning methods tackle this challenge by employing a learning objective that recursively updates the posterior distribution and enforces it to stay close to the latest posterior estimate. Nonetheless, we argue that these methods may be ineffective due to compounding approximation errors over successive recursions. To mitigate this, we propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations, preventing individual errors from dominating future posterior updates and compounding over time. We reveal insightful connections between these objectives and Temporal-Difference methods, a popular learning mechanism in Reinforcement Learning and Neuroscience. We evaluate the proposed objectives on challenging versions of popular CL benchmarks, demonstrating that they outperform standard Variational CL methods and non-variational baselines, effectively alleviating Catastrophic Forgetting.<|reference_end|> | arxiv | @article{melo2024temporal-difference,
title={Temporal-Difference Variational Continual Learning},
author={Luckeciano C. Melo and Alessandro Abate and Yarin Gal},
journal={arXiv preprint arXiv:2410.07812},
year={2024},
archivePrefix={arXiv},
eprint={2410.07812},
primaryClass={cs.LG cs.AI}
} | melo2024temporal-difference |
arxiv-668037 | 2410.07815 | Simple ReFlow: Improved Techniques for Fast Flow Models | <|reference_start|>Simple ReFlow: Improved Techniques for Fast Flow Models: Diffusion and flow-matching models achieve remarkable generative performance but at the cost of many sampling steps; this slows inference and limits applicability to time-critical tasks. The ReFlow procedure can accelerate sampling by straightening generation trajectories. However, ReFlow is an iterative procedure, typically requiring training on simulated data, and results in reduced sample quality. To mitigate sample deterioration, we examine the design space of ReFlow and highlight potential pitfalls in prior heuristic practices. We then propose seven improvements for training dynamics, learning and inference, which are verified with thorough ablation studies on CIFAR10 $32 \times 32$, AFHQv2 $64 \times 64$, and FFHQ $64 \times 64$. Combining all our techniques, we achieve state-of-the-art FID scores (without / with guidance, resp.) for fast generation via neural ODEs: $2.23$ / $1.98$ on CIFAR10, $2.30$ / $1.91$ on AFHQv2, $2.84$ / $2.67$ on FFHQ, and $3.49$ / $1.74$ on ImageNet-64, all with merely $9$ neural function evaluations.<|reference_end|> | arxiv | @article{kim2024simple,
title={Simple ReFlow: Improved Techniques for Fast Flow Models},
author={Beomsu Kim and Yu-Guan Hsieh and Michal Klein and Marco Cuturi and
Jong Chul Ye and Bahjat Kawar and James Thornton},
journal={arXiv preprint arXiv:2410.07815},
year={2024},
archivePrefix={arXiv},
eprint={2410.07815},
primaryClass={cs.LG cs.CV}
} | kim2024simple |
arxiv-668038 | 2410.07819 | Uncovering Overfitting in Large Language Model Editing | <|reference_start|>Uncovering Overfitting in Large Language Model Editing: Knowledge editing has been proposed as an effective method for updating and correcting the internal knowledge of Large Language Models (LLMs). However, existing editing methods often struggle with complex tasks, such as multi-hop reasoning. In this paper, we identify and investigate the phenomenon of Editing Overfit, where edited models assign disproportionately high probabilities to the edit target, hindering the generalization of new knowledge in complex scenarios. We attribute this issue to the current editing paradigm, which places excessive emphasis on the direct correspondence between the input prompt and the edit target for each edit sample. To further explore this issue, we introduce a new benchmark, EVOKE (EValuation of Editing Overfit in Knowledge Editing), along with fine-grained evaluation metrics. Through comprehensive experiments and analysis, we demonstrate that Editing Overfit is prevalent in current editing methods and that common overfitting mitigation strategies are of limited effectiveness in knowledge editing. To overcome this, inspired by LLMs' knowledge recall mechanisms, we propose a new plug-and-play strategy called Learn to Inference (LTI), which introduce a Multi-stage Inference Constraint module to guide the edited models in recalling new knowledge similarly to how unedited LLMs leverage knowledge through in-context learning. Extensive experimental results across a wide range of tasks validate the effectiveness of LTI in mitigating Editing Overfit.<|reference_end|> | arxiv | @article{zhang2024uncovering,
title={Uncovering Overfitting in Large Language Model Editing},
author={Mengqi Zhang and Xiaotian Ye and Qiang Liu and Pengjie Ren and Shu Wu and Zhumin Chen},
journal={arXiv preprint arXiv:2410.07819},
year={2024},
archivePrefix={arXiv},
eprint={2410.07819},
primaryClass={cs.CL}
} | zhang2024uncovering |
arxiv-668039 | 2410.07820 | Mitigating Gender Bias in Code Large Language Models via Model Editing | <|reference_start|>Mitigating Gender Bias in Code Large Language Models via Model Editing: In recent years, with the maturation of large language model (LLM) technology and the emergence of high-quality programming code datasets, researchers have become increasingly confident in addressing the challenges of program synthesis automatically. However, since most of the training samples for LLMs are unscreened, it is inevitable that LLMs' performance may not align with real-world scenarios, leading to the presence of social bias. To evaluate and quantify the gender bias in code LLMs, we propose a dataset named CodeGenBias (Gender Bias in the Code Generation) and an evaluation metric called FB-Score (Factual Bias Score) based on the actual gender distribution of correlative professions. With the help of CodeGenBias and FB-Score, we evaluate and analyze the gender bias in eight mainstream Code LLMs. Previous work has demonstrated that model editing methods that perform well in knowledge editing have the potential to mitigate social bias in LLMs. Therefore, we develop a model editing approach named MG-Editing (Multi-Granularity model Editing), which includes the locating and editing phases. Our model editing method MG-Editing can be applied at five different levels of model parameter granularity: full parameters level, layer level, module level, row level, and neuron level. Extensive experiments not only demonstrate that our MG-Editing can effectively mitigate the gender bias in code LLMs while maintaining their general code generation capabilities, but also showcase its excellent generalization. 
At the same time, the experimental results show that, considering both the gender bias of the model and its general code generation capability, MG-Editing is most effective when applied at the row and neuron levels of granularity.<|reference_end|> | arxiv | @article{qin2024mitigating,
title={Mitigating Gender Bias in Code Large Language Models via Model Editing},
author={Zhanyue Qin and Haochuan Wang and Zecheng Wang and Deyuan Liu and
Cunhang Fan and Zhao Lv and Zhiying Tu and Dianhui Chu and Dianbo Sui},
journal={arXiv preprint arXiv:2410.07820},
year={2024},
archivePrefix={arXiv},
eprint={2410.07820},
primaryClass={cs.SE cs.AI cs.CL}
} | qin2024mitigating |
arxiv-668040 | 2410.07824 | Exploring Foundation Models in Remote Sensing Image Change Detection: A Comprehensive Survey | <|reference_start|>Exploring Foundation Models in Remote Sensing Image Change Detection: A Comprehensive Survey: Change detection, as an important and widely applied technique in the field of remote sensing, aims to analyze changes in surface areas over time and has broad applications in areas such as environmental monitoring, urban development, and land use analysis. In recent years, deep learning, especially the development of foundation models, has provided more powerful solutions for feature extraction and data fusion, effectively addressing these complexities. This paper systematically reviews the latest advancements in the field of change detection, with a focus on the application of foundation models in remote sensing tasks.<|reference_end|> | arxiv | @article{yu2024exploring,
title={Exploring Foundation Models in Remote Sensing Image Change Detection: A
Comprehensive Survey},
author={Zihan Yu and Tianxiao Li and Yuxin Zhu and Rongze Pan},
journal={arXiv preprint arXiv:2410.07824},
year={2024},
archivePrefix={arXiv},
eprint={2410.07824},
primaryClass={cs.CV}
} | yu2024exploring |
arxiv-668041 | 2410.07825 | Extracting and Transferring Abilities For Building Multi-lingual Ability-enhanced Large Language Models | <|reference_start|>Extracting and Transferring Abilities For Building Multi-lingual Ability-enhanced Large Language Models: Multi-lingual ability transfer has become increasingly important for the broad application of large language models (LLMs). Existing work highly relies on training with the multi-lingual ability-related data, which may not be available for low-resource languages. To solve it, we propose a Multi-lingual Ability Extraction and Transfer approach, named as MAET. Our key idea is to decompose and extract language-agnostic ability-related weights from LLMs, and transfer them across different languages by simple addition and subtraction operations without training. Specifically, our MAET consists of the extraction and transfer stages. In the extraction stage, we first locate key neurons that are highly related to specific abilities, and then employ them to extract the transferable ability-specific weights. In the transfer stage, we further select the ability-related parameter tensors, and design the merging strategy based on the linguistic and ability specific weights, to build the multi-lingual ability-enhanced LLM. To demonstrate the effectiveness of our proposed approach, we conduct extensive experiments on mathematical and scientific tasks in both high-resource lingual and low-resource lingual scenarios. Experiment results have shown that MAET can effectively and efficiently extract and transfer the advanced abilities, and outperform training-based baseline methods. Our code and data are available at \url{https://github.com/RUCAIBox/MAET}.<|reference_end|> | arxiv | @article{chen2024extracting,
title={Extracting and Transferring Abilities For Building Multi-lingual
Ability-enhanced Large Language Models},
author={Zhipeng Chen and Liang Song and Kun Zhou and Wayne Xin Zhao and
Bingning Wang and Weipeng Chen and Ji-Rong Wen},
journal={arXiv preprint arXiv:2410.07825},
year={2024},
archivePrefix={arXiv},
eprint={2410.07825},
primaryClass={cs.CL}
} | chen2024extracting |
arxiv-668042 | 2410.07826 | Fine-Tuning Language Models for Ethical Ambiguity: A Comparative Study of Alignment with Human Responses | <|reference_start|>Fine-Tuning Language Models for Ethical Ambiguity: A Comparative Study of Alignment with Human Responses: Language models often misinterpret human intentions due to their handling of ambiguity, a limitation well-recognized in NLP research. While morally clear scenarios are more discernible to LLMs, greater difficulty is encountered in morally ambiguous contexts. In this investigation, we explored LLM calibration to show that human and LLM judgments are poorly aligned in such scenarios. We used two curated datasets from the Scruples project for evaluation: DILEMMAS, which involves pairs of distinct moral scenarios to assess the model's ability to compare and contrast ethical situations, and ANECDOTES, which presents individual narratives to evaluate the model's skill in drawing out details, interpreting, and analyzing distinct moral scenarios. Model answer probabilities were extracted for all possible choices and compared with human annotations to benchmark the alignment of three models: Llama-3.1-8b, Zephyr-7b-beta, and Mistral-7b. Significant improvements were observed after fine-tuning, with notable enhancements in both cross-entropy and Dirichlet scores, particularly in the latter. Notably, after fine-tuning, the performance of Mistral-7B-Instruct-v0.3 was on par with GPT-4o. However, the experimental models that were examined were all still outperformed by the BERT and RoBERTa models in terms of cross-entropy scores. Our fine-tuning approach, which improves the model's understanding of text distributions in a text-to-text format, effectively enhances performance and alignment in complex decision-making contexts, underscoring the need for further research to refine ethical reasoning techniques and capture human judgment nuances.<|reference_end|> | arxiv | @article{senthilkumar2024fine-tuning,
title={Fine-Tuning Language Models for Ethical Ambiguity: A Comparative Study
of Alignment with Human Responses},
author={Pranav Senthilkumar and Visshwa Balasubramanian and Prisha Jain and
Aneesa Maity and Jonathan Lu and Kevin Zhu},
journal={arXiv preprint arXiv:2410.07826},
year={2024},
archivePrefix={arXiv},
eprint={2410.07826},
primaryClass={cs.CL}
} | senthilkumar2024fine-tuning |
arxiv-668043 | 2410.07827 | Why do objects have many names? A study on word informativeness in language use and lexical systems | <|reference_start|>Why do objects have many names? A study on word informativeness in language use and lexical systems: Human lexicons contain many different words that speakers can use to refer to the same object, e.g., "purple" or "magenta" for the same shade of color. On the one hand, studies on language use have explored how speakers adapt their referring expressions to successfully communicate in context, without focusing on properties of the lexical system. On the other hand, studies in language evolution have discussed how competing pressures for informativeness and simplicity shape lexical systems, without tackling in-context communication. We aim at bridging the gap between these traditions, and explore why a soft mapping between referents and words is a good solution for communication, by taking into account both in-context communication and the structure of the lexicon. We propose a simple measure of informativeness for words and lexical systems, grounded in a visual space, and analyze color naming data for English and Mandarin Chinese. We conclude that optimal lexical systems are those where multiple words can apply to the same referent, conveying different amounts of information. Such systems allow speakers to maximize communication accuracy and minimize the amount of information they convey when communicating about referents in contexts.<|reference_end|> | arxiv | @article{gualdoni2024why,
title={Why do objects have many names? A study on word informativeness in
language use and lexical systems},
author={Eleonora Gualdoni and Gemma Boleda},
journal={arXiv preprint arXiv:2410.07827},
year={2024},
archivePrefix={arXiv},
eprint={2410.07827},
primaryClass={cs.CL}
} | gualdoni2024why |
arxiv-668044 | 2410.07829 | A note on the VC dimension of 1-dimensional GNNs | <|reference_start|>A note on the VC dimension of 1-dimensional GNNs: Graph Neural Networks (GNNs) have become an essential tool for analyzing graph-structured data, leveraging their ability to capture complex relational information. While the expressivity of GNNs, particularly their equivalence to the Weisfeiler-Leman (1-WL) isomorphism test, has been well-documented, understanding their generalization capabilities remains critical. This paper focuses on the generalization of GNNs by investigating their Vapnik-Chervonenkis (VC) dimension. We extend previous results to demonstrate that 1-dimensional GNNs with a single parameter have an infinite VC dimension for unbounded graphs. Furthermore, we show that this also holds for GNNs using analytic non-polynomial activation functions, including the 1-dimensional GNNs that were recently shown to be as expressive as the 1-WL test. These results suggest inherent limitations in the generalization ability of even the most simple GNNs, when viewed from the VC dimension perspective.<|reference_end|> | arxiv | @article{daniëls2024a,
title={A note on the VC dimension of 1-dimensional GNNs},
author={Noah Daniëls and Floris Geerts},
journal={arXiv preprint arXiv:2410.07829},
year={2024},
archivePrefix={arXiv},
eprint={2410.07829},
primaryClass={cs.LG}
} | daniëls2024a |
arxiv-668045 | 2410.07830 | NusaMT-7B: Machine Translation for Low-Resource Indonesian Languages with Large Language Models | <|reference_start|>NusaMT-7B: Machine Translation for Low-Resource Indonesian Languages with Large Language Models: Large Language Models (LLMs) have demonstrated exceptional promise in translation tasks for high-resource languages. However, their performance in low-resource languages is limited by the scarcity of both parallel and monolingual corpora, as well as the presence of noise. Consequently, such LLMs struggle with alignment and have lagged behind State-of-The-Art (SoTA) neural machine translation (NMT) models in these settings. This paper introduces NusaMT-7B, an LLM-based machine translation model for low-resource Indonesian languages, starting with Balinese and Minangkabau. Leveraging the pretrained LLaMA2-7B, our approach integrates continued pre-training on monolingual data, Supervised Fine-Tuning (SFT), self-learning, and an LLM-based data cleaner to reduce noise in parallel sentences. In the FLORES-200 multilingual translation benchmark, NusaMT-7B outperforms SoTA models in the spBLEU metric by up to +6.69 spBLEU in translations into Balinese and Minangkabau, but underperforms by up to -3.38 spBLEU in translations into higher-resource languages. Our results show that fine-tuned LLMs can enhance translation quality for low-resource languages, aiding in linguistic preservation and cross-cultural communication.<|reference_end|> | arxiv | @article{tan2024nusamt-7b:,
title={NusaMT-7B: Machine Translation for Low-Resource Indonesian Languages
with Large Language Models},
author={William Tan and Kevin Zhu},
journal={arXiv preprint arXiv:2410.07830},
year={2024},
archivePrefix={arXiv},
eprint={2410.07830},
primaryClass={cs.CL}
} | tan2024nusamt-7b: |
arxiv-668046 | 2410.07832 | LaB-CL: Localized and Balanced Contrastive Learning for improving parking slot detection | <|reference_start|>LaB-CL: Localized and Balanced Contrastive Learning for improving parking slot detection: Parking slot detection is an essential technology in autonomous parking systems. In general, the classification problem of parking slot detection consists of two tasks: a task determining whether localized candidates are junctions of parking slots or not, and another that identifies the shape of detected junctions. Both classification tasks can easily face biased learning toward the majority class, degrading classification performance. Yet, the data imbalance issue has been overlooked in parking slot detection. We propose the first supervised contrastive learning framework for parking slot detection, Localized and Balanced Contrastive Learning for improving parking slot detection (LaB-CL). The proposed LaB-CL framework uses two main approaches. First, we propose to include class prototypes to consider representations from all classes in every mini batch, from the local perspective. Second, we propose a new hard negative sampling scheme that selects local representations with high prediction error. Experiments with the benchmark dataset demonstrate that the proposed LaB-CL framework can outperform existing parking slot detection methods.<|reference_end|> | arxiv | @article{jeong2024lab-cl:,
title={LaB-CL: Localized and Balanced Contrastive Learning for improving
parking slot detection},
author={U Jin Jeong and Sumin Roh and Il Yong Chun},
journal={arXiv preprint arXiv:2410.07832},
year={2024},
archivePrefix={arXiv},
eprint={2410.07832},
primaryClass={cs.CV cs.AI cs.RO}
} | jeong2024lab-cl: |
arxiv-668047 | 2410.07834 | Multi-Scale Deformable Transformers for Student Learning Behavior Detection in Smart Classroom | <|reference_start|>Multi-Scale Deformable Transformers for Student Learning Behavior Detection in Smart Classroom: The integration of Artificial Intelligence into the modern educational system is rapidly evolving, particularly in monitoring student behavior in classrooms, a task traditionally dependent on manual observation. This conventional method is notably inefficient, prompting a shift toward more advanced solutions like computer vision. However, existing target detection models face significant challenges such as occlusion, blurring, and scale disparity, which are exacerbated by the dynamic and complex nature of classroom settings. Furthermore, these models must adeptly handle multiple target detection. To overcome these obstacles, we introduce the Student Learning Behavior Detection with Multi-Scale Deformable Transformers (SCB-DETR), an innovative approach that utilizes large convolutional kernels for upstream feature extraction, and multi-scale feature fusion. This technique significantly improves the detection capabilities for multi-scale and occluded targets, offering a robust solution for analyzing student behavior. SCB-DETR establishes an end-to-end framework that simplifies the detection process and consistently outperforms other deep learning methods. Employing our custom Student Classroom Behavior (SCBehavior) Dataset, SCB-DETR achieves a mean Average Precision (mAP) of 0.626, which is a 1.5% improvement over the baseline model's mAP and a 6% increase in AP50. These results demonstrate SCB-DETR's superior performance in handling the uneven distribution of student behaviors and ensuring precise detection in dynamic classroom environments.<|reference_end|> | arxiv | @article{wang2024multi-scale,
title={Multi-Scale Deformable Transformers for Student Learning Behavior
Detection in Smart Classroom},
author={Zhifeng Wang and Minghui Wang and Chunyan Zeng and Longlong Li},
journal={arXiv preprint arXiv:2410.07834},
year={2024},
archivePrefix={arXiv},
eprint={2410.07834},
primaryClass={cs.CV}
} | wang2024multi-scale |
arxiv-668048 | 2410.07836 | Masked Generative Priors Improve World Models Sequence Modelling Capabilities | <|reference_start|>Masked Generative Priors Improve World Models Sequence Modelling Capabilities: Deep Reinforcement Learning (RL) has become the leading approach for creating artificial agents in complex environments. Model-based approaches, which are RL methods with world models that predict environment dynamics, are among the most promising directions for improving data efficiency, forming a critical step toward bridging the gap between research and real-world deployment. In particular, world models enhance sample efficiency by learning in imagination, which involves training a generative sequence model of the environment in a self-supervised manner. Recently, Masked Generative Modelling has emerged as a more efficient and superior inductive bias for modelling and generating token sequences. Building on the Efficient Stochastic Transformer-based World Models (STORM) architecture, we replace the traditional MLP prior with a Masked Generative Prior (e.g., MaskGIT Prior) and introduce GIT-STORM. We evaluate our model on two downstream tasks: reinforcement learning and video prediction. GIT-STORM demonstrates substantial performance gains in RL tasks on the Atari 100k benchmark. Moreover, we apply Transformer-based World Models to continuous action environments for the first time, addressing a significant gap in prior research. To achieve this, we employ a state mixer function that integrates latent state representations with actions, enabling our model to handle continuous control tasks. We validate this approach through qualitative and quantitative analyses on the DeepMind Control Suite, showcasing the effectiveness of Transformer-based World Models in this new domain. 
Our results highlight the versatility and efficacy of the MaskGIT dynamics prior, paving the way for more accurate world models and effective RL policies.<|reference_end|> | arxiv | @article{meo2024masked,
title={Masked Generative Priors Improve World Models Sequence Modelling
Capabilities},
author={Cristian Meo and Mircea Lica and Zarif Ikram and Akihiro Nakano and
Vedant Shah and Aniket Rajiv Didolkar and Dianbo Liu and Anirudh Goyal and
Justin Dauwels},
journal={arXiv preprint arXiv:2410.07836},
year={2024},
archivePrefix={arXiv},
eprint={2410.07836},
primaryClass={cs.LG cs.AI}
} | meo2024masked |
arxiv-668049 | 2410.07838 | MinorityPrompt: Text to Minority Image Generation via Prompt Optimization | <|reference_start|>MinorityPrompt: Text to Minority Image Generation via Prompt Optimization: We investigate the generation of minority samples using pretrained text-to-image (T2I) latent diffusion models. Minority instances, in the context of T2I generation, can be defined as ones living on low-density regions of text-conditional data distributions. They are valuable for various applications of modern T2I generators, such as data augmentation and creative AI. Unfortunately, existing pretrained T2I diffusion models primarily focus on high-density regions, largely due to the influence of guided samplers (like CFG) that are essential for producing high-quality generations. To address this, we present a novel framework to counter the high-density-focus of T2I diffusion models. Specifically, we first develop an online prompt optimization framework that can encourage the emergence of desired properties during inference while preserving semantic contents of user-provided prompts. We subsequently tailor this generic prompt optimizer into a specialized solver that promotes the generation of minority features by incorporating a carefully-crafted likelihood objective. Our comprehensive experiments, conducted across various types of T2I models, demonstrate that our approach significantly enhances the capability to produce high-quality minority instances compared to existing samplers.<|reference_end|> | arxiv | @article{um2024minorityprompt:,
title={MinorityPrompt: Text to Minority Image Generation via Prompt
Optimization},
author={Soobin Um and Jong Chul Ye},
journal={arXiv preprint arXiv:2410.07838},
year={2024},
archivePrefix={arXiv},
eprint={2410.07838},
primaryClass={cs.CV cs.AI cs.LG}
} | um2024minorityprompt: |
arxiv-668050 | 2410.07839 | Enhancing Language Model Reasoning via Weighted Reasoning in Self-Consistency | <|reference_start|>Enhancing Language Model Reasoning via Weighted Reasoning in Self-Consistency: While large language models (LLMs) have rapidly improved their performance on a broad number of tasks, they still often fall short on reasoning tasks. As LLMs become more integrated in diverse real-world tasks, advancing their reasoning capabilities is crucial to their effectiveness in nuanced, complex problems. Wang et al.'s self-consistency framework reveals that sampling multiple rationales before taking a majority vote reliably improves model performance across various closed-answer reasoning tasks. Standard methods based on this framework aggregate the final decisions of these rationales but fail to utilize the detailed step-by-step reasoning paths applied by these rationales. Our work enhances this approach by incorporating and analyzing both the reasoning paths of these rationales in addition to their final decisions before taking a majority vote. These methods not only improve the reliability of reasoning paths but also yield more robust performance on complex reasoning tasks.<|reference_end|> | arxiv | @article{knappe2024enhancing,
title={Enhancing Language Model Reasoning via Weighted Reasoning in
Self-Consistency},
author={Tim Knappe and Ryan Li and Ayush Chauhan and Kaylee Chhua and
Kevin Zhu and Sean O'Brien},
journal={arXiv preprint arXiv:2410.07839},
year={2024},
archivePrefix={arXiv},
eprint={2410.07839},
primaryClass={cs.CL}
} | knappe2024enhancing |
arxiv-668051 | 2410.07840 | Protect Before Generate: Error Correcting Codes within Discrete Deep Generative Models | <|reference_start|>Protect Before Generate: Error Correcting Codes within Discrete Deep Generative Models: Despite significant advancements in deep probabilistic models, learning low-dimensional discrete latent representations remains a challenging task. In this paper, we introduce a novel method that enhances variational inference in discrete latent variable models by leveraging Error Correcting Codes (ECCs) to introduce redundancy in the latent representations. This redundancy is then exploited by the variational posterior to yield more accurate estimates, thereby narrowing the variational gap. Inspired by ECCs commonly used in digital communications and data storage, we demonstrate proof-of-concept using a Discrete Variational Autoencoder (DVAE) with binary latent variables and block repetition codes. We further extend this idea to a hierarchical structure based on polar codes, where certain latent bits are more robustly protected. Our method improves generation quality, data reconstruction, and uncertainty calibration compared to the uncoded DVAE, even when trained with tighter bounds such as the Importance Weighted Autoencoder (IWAE) objective. In particular, we demonstrate superior performance on MNIST, FMNIST, CIFAR10, and Tiny ImageNet datasets. The general approach of integrating ECCs into variational inference is compatible with existing techniques to boost variational inference, such as importance sampling or Hamiltonian Monte Carlo. We also outline the key properties ECCs must have to effectively enhance discrete variational inference.<|reference_end|> | arxiv | @article{martínez-garcía2024protect,
title={Protect Before Generate: Error Correcting Codes within Discrete Deep
Generative Models},
author={María Martínez-García and Grace Villacrés and David Mitchell and
Pablo M. Olmos},
journal={arXiv preprint arXiv:2410.07840},
year={2024},
archivePrefix={arXiv},
eprint={2410.07840},
primaryClass={cs.LG}
} | martínez-garcía2024protect |
arxiv-668052 | 2410.07844 | Parks and Recreation: Color Fault-Tolerant Spanners Made Local | <|reference_start|>Parks and Recreation: Color Fault-Tolerant Spanners Made Local: We provide new algorithms for constructing spanners of arbitrarily edge- or vertex-colored graphs, that can endure up to $f$ failures of entire color classes. The failure of even a single color may cause a linear number of individual edge/vertex faults. In a recent work, Petruschka, Sapir and Tzalik [ITCS `24] gave tight bounds for the (worst-case) size $s$ of such spanners, where $s=\Theta(f n^{1+1/k})$ or $s=\Theta(f^{1-1/k} n^{1+1/k})$ for spanners with stretch $(2k-1)$ that are resilient to at most $f$ edge- or vertex-color faults, respectively. Additionally, they showed an algorithm for computing spanners of size $\tilde{O}(s)$, running in $\tilde{O}(msf)$ sequential time, based on the (FT) greedy spanner algorithm. The problem of providing faster and/or distributed algorithms was left open therein. We address this problem and provide a novel variant of the classical Baswana-Sen algorithm [RSA `07] in the spirit of Parter's algorithm for vertex fault-tolerant spanners [STOC `22]. In a nutshell, our algorithms produce color fault-tolerant spanners of size $\tilde{O}_k (s)$ (hence near-optimal for any fixed $k$), have optimal locality $O(k)$ (i.e., take $O(k)$ rounds in the LOCAL model), can be implemented in $O_k (f^{k-1})$ rounds in CONGEST, and take $\tilde{O}_k (m + sf^{k-1})$ sequential time. To handle the considerably more difficult setting of color faults, our approach differs from [BS07, Par22] by taking a novel edge-centric perspective, instead of (FT)-clustering of vertices; in fact, we demonstrate that this point of view simplifies their algorithms. Another key technical contribution is in constructing and using collections of short paths that are "colorful at all scales", which we call "parks". 
These are intimately connected with the notion of spread set-systems that found use in recent breakthroughs regarding the famous Sunflower Conjecture.<|reference_end|> | arxiv | @article{parter2024parks,
title={Parks and Recreation: Color Fault-Tolerant Spanners Made Local},
author={Merav Parter and Asaf Petruschka and Shay Sapir and Elad Tzalik},
journal={arXiv preprint arXiv:2410.07844},
year={2024},
archivePrefix={arXiv},
eprint={2410.07844},
primaryClass={cs.DS}
} | parter2024parks |
arxiv-668053 | 2410.07845 | Autonomous Vehicles Path Planning under Temporal Logic Specifications | <|reference_start|>Autonomous Vehicles Path Planning under Temporal Logic Specifications: Path planning is an essential component of autonomous driving. A global planner is responsible for the high-level planning. It basically performs a shortest-path search on a known map, thereby defining waypoints used to control the local (low-level) planner. Local planning is a runtime verification method which is repeatedly run on the vehicle itself in real-time, so as to find the optimal short-horizon path which leads to the desired waypoint in a way which is both efficient and safe. The challenge is that the local planner has to take into account repeatedly incoming updates to the available information about the environment. In addition, it performs a complex task, as it has to take into account a large variety of requirements, originating from the necessity of collision avoidance with obstacles, respecting traffic rules, sticking to regulatory requirements, and lastly reaching the next waypoint efficiently. In this paper, we describe a logic-based specification mechanism which fulfills all these requirements.<|reference_end|> | arxiv | @article{dhonthi2024autonomous,
title={Autonomous Vehicles Path Planning under Temporal Logic Specifications},
author={Akshay Dhonthi and Nicolas Schischka and Ernst Moritz Hahn and Vahid
Hashemi},
journal={arXiv preprint arXiv:2410.07845},
year={2024},
archivePrefix={arXiv},
eprint={2410.07845},
primaryClass={cs.RO cs.LO}
} | dhonthi2024autonomous |
arxiv-668054 | 2410.07848 | SwarmPath: Drone Swarm Navigation through Cluttered Environments Leveraging Artificial Potential Field and Impedance Control | <|reference_start|>SwarmPath: Drone Swarm Navigation through Cluttered Environments Leveraging Artificial Potential Field and Impedance Control: In the area of multi-drone systems, navigating through dynamic environments from start to goal while providing collision-free trajectories and efficient path planning is a significant challenge. To solve this problem, we propose a novel SwarmPath technology that integrates an Artificial Potential Field (APF) with an impedance controller. The proposed approach provides a solution based on collision-free leader-follower behaviour where drones are able to adapt themselves to the environment. Moreover, the leader is virtual while the drones are physical followers that leverage the APF path planning approach to find the shortest possible path to the target. Simultaneously, the drones dynamically adjust impedance links, allowing them to create virtual links with obstacles in order to avoid them. As compared to conventional APF, the proposed SwarmPath system not only provides smooth collision avoidance but also enables agents to efficiently pass through narrow passages, reducing the total travel time by 30% while ensuring safety in terms of drone connectivity. Lastly, the results also illustrate that the discrepancies between the simulated and real environments exhibit an average absolute percentage error (APE) of 6% for drone trajectories. This underscores the reliability of our solution in real-world scenarios.<|reference_end|> | arxiv | @article{khan2024swarmpath:,
title={SwarmPath: Drone Swarm Navigation through Cluttered Environments
Leveraging Artificial Potential Field and Impedance Control},
author={Roohan Ahmed Khan and Malaika Zafar and Amber Batool and Aleksey
Fedoseev and Dzmitry Tsetserukou},
journal={arXiv preprint arXiv:2410.07848},
year={2024},
archivePrefix={arXiv},
eprint={2410.07848},
primaryClass={cs.RO}
} | khan2024swarmpath: |
arxiv-668055 | 2410.07849 | Online DNN-driven Nonlinear MPC for Stylistic Humanoid Robot Walking with Step Adjustment | <|reference_start|>Online DNN-driven Nonlinear MPC for Stylistic Humanoid Robot Walking with Step Adjustment: This paper presents a three-layered architecture that enables stylistic locomotion with online contact location adjustment. Our method combines an autoregressive Deep Neural Network (DNN) acting as a trajectory generation layer with model-based trajectory adjustment and trajectory control layers. The DNN produces centroidal and postural references serving as an initial guess and regularizer for the other layers. Since the DNN is trained on human motion capture data, the resulting robot motion exhibits locomotion patterns resembling a human walking style. The trajectory adjustment layer utilizes non-linear optimization to ensure dynamically feasible center of mass (CoM) motion while addressing step adjustments. We compare two implementations of the trajectory adjustment layer: one as a receding horizon planner (RHP) and the other as a model predictive controller (MPC). To enhance MPC performance, we introduce a Kalman filter to reduce measurement noise. The filter parameters are automatically tuned with a Genetic Algorithm. Experimental results on the ergoCub humanoid robot demonstrate the system's ability to prevent falls, replicate human walking styles, and withstand disturbances of up to 68 Newtons. Website: https://sites.google.com/view/dnn-mpc-walking Youtube video: https://www.youtube.com/watch?v=x3tzEfxO-xQ<|reference_end|> | arxiv | @article{romualdi2024online,
title={Online DNN-driven Nonlinear MPC for Stylistic Humanoid Robot Walking
with Step Adjustment},
author={Giulio Romualdi and Paolo Maria Viceconte and Lorenzo Moretti and
Ines Sorrentino and Stefano Dafarra and Silvio Traversaro and Daniele Pucci},
journal={arXiv preprint arXiv:2410.07849},
year={2024},
archivePrefix={arXiv},
eprint={2410.07849},
primaryClass={cs.RO}
} | romualdi2024online |
arxiv-668056 | 2410.07851 | Scalable Representation Learning for Multimodal Tabular Transactions | <|reference_start|>Scalable Representation Learning for Multimodal Tabular Transactions: Large language models (LLMs) are primarily designed to understand unstructured text. When directly applied to structured formats such as tabular data, they may struggle to discern inherent relationships and overlook critical patterns. While tabular representation learning methods can address some of these limitations, existing efforts still face challenges with sparse high-cardinality fields, precise numerical reasoning, and column-heavy tables. Furthermore, how to leverage these learned representations for downstream tasks through a language-based interface is not straightforward. In this paper, we present an innovative and scalable solution to these challenges. Concretely, our approach introduces a multi-tier partitioning mechanism that utilizes power-law dynamics to handle large vocabularies, an adaptive quantization mechanism to impose priors on numerical continuity, and a distinct treatment of core columns and meta-information columns. To facilitate instruction tuning on LLMs, we propose a parameter-efficient decoder that interleaves transaction and text modalities using a series of adapter layers, thereby exploiting rich cross-task knowledge. We validate the efficacy of our solution on a large-scale dataset of synthetic payment transactions.<|reference_end|> | arxiv | @article{raman2024scalable,
title={Scalable Representation Learning for Multimodal Tabular Transactions},
author={Natraj Raman and Sumitra Ganesh and Manuela Veloso},
journal={arXiv preprint arXiv:2410.07851},
year={2024},
archivePrefix={arXiv},
eprint={2410.07851},
primaryClass={cs.LG}
} | raman2024scalable |
arxiv-668057 | 2410.07854 | HeGraphAdapter: Tuning Multi-Modal Vision-Language Models with Heterogeneous Graph Adapter | <|reference_start|>HeGraphAdapter: Tuning Multi-Modal Vision-Language Models with Heterogeneous Graph Adapter: Adapter-based tuning methods have shown significant potential in transferring knowledge from pre-trained Vision-Language Models to downstream tasks. However, after reviewing existing adapters, we find that they generally fail to fully explore the interactions between different modalities when constructing task-specific knowledge. Also, existing works usually focus only on similarity matching between positive text prompts, making it challenging to distinguish classes with highly similar visual content. To address these issues, in this paper, we propose a novel Heterogeneous Graph Adapter for tuning VLMs on downstream tasks. To be specific, we first construct a unified heterogeneous graph model, which contains i) visual nodes, positive text nodes and negative text nodes, and ii) several types of edge connections to comprehensively model the intra-modality, inter-modality and inter-class structure knowledge together. Next, we employ a specific Heterogeneous Graph Neural Network to excavate multi-modality structure knowledge for adapting both visual and textual features to downstream tasks. Finally, on top of HeGraphAdapter, we construct both text-based and visual-based classifiers simultaneously to comprehensively enhance the performance of the CLIP model. Experimental results on 11 benchmark datasets demonstrate the effectiveness and benefits of the proposed HeGraphAdapter.<|reference_end|> | arxiv | @article{zhao2024hegraphadapter:,
title={HeGraphAdapter: Tuning Multi-Modal Vision-Language Models with
Heterogeneous Graph Adapter},
author={Yumiao Zhao and Bo Jiang and Xiao Wang and Qin Xu and Jin Tang},
journal={arXiv preprint arXiv:2410.07854},
year={2024},
archivePrefix={arXiv},
eprint={2410.07854},
primaryClass={cs.CV cs.MM}
} | zhao2024hegraphadapter: |
arxiv-668058 | 2410.07857 | SNN-PAR: Energy Efficient Pedestrian Attribute Recognition via Spiking Neural Networks | <|reference_start|>SNN-PAR: Energy Efficient Pedestrian Attribute Recognition via Spiking Neural Networks: Artificial neural network based Pedestrian Attribute Recognition (PAR) has been widely studied in recent years; despite much progress, however, its energy consumption is still high. To address this issue, in this paper, we propose a Spiking Neural Network (SNN) based framework for energy-efficient attribute recognition. Specifically, we first adopt a spiking tokenizer module to transform the given pedestrian image into spiking feature representations. Then, the output is fed into the spiking Transformer backbone network for energy-efficient feature extraction. We feed the enhanced spiking features into a set of feed-forward networks for pedestrian attribute recognition. In addition to the widely used binary cross-entropy loss function, we also exploit knowledge distillation from the artificial neural network to the spiking Transformer network for more accurate attribute recognition. Extensive experiments on three widely used PAR benchmark datasets fully validate the effectiveness of our proposed SNN-PAR framework. The source code of this paper is released on \url{https://github.com/Event-AHU/OpenPAR}.<|reference_end|> | arxiv | @article{wang2024snn-par:,
title={SNN-PAR: Energy Efficient Pedestrian Attribute Recognition via Spiking
Neural Networks},
author={Haiyang Wang and Qian Zhu and Mowen She and Yabo Li and Haoyu Song
and Minghe Xu and Xiao Wang},
journal={arXiv preprint arXiv:2410.07857},
year={2024},
archivePrefix={arXiv},
eprint={2410.07857},
primaryClass={cs.CV cs.AI cs.NE}
} | wang2024snn-par: |
arxiv-668059 | 2410.07858 | From Logits to Hierarchies: Hierarchical Clustering made Simple | <|reference_start|>From Logits to Hierarchies: Hierarchical Clustering made Simple: The structure of many real-world datasets is intrinsically hierarchical, making the modeling of such hierarchies a critical objective in both unsupervised and supervised machine learning. Recently, novel approaches for hierarchical clustering with deep architectures have been proposed. In this work, we take a critical perspective on this line of research and demonstrate that many approaches exhibit major limitations when applied to realistic datasets, partly due to their high computational complexity. In particular, we show that a lightweight procedure implemented on top of pre-trained non-hierarchical clustering models outperforms models designed specifically for hierarchical clustering. Our proposed approach is computationally efficient and applicable to any pre-trained clustering model that outputs logits, without requiring any fine-tuning. To highlight the generality of our findings, we illustrate how our method can also be applied in a supervised setup, recovering meaningful hierarchies from a pre-trained ImageNet classifier.<|reference_end|> | arxiv | @article{palumbo2024from,
title={From Logits to Hierarchies: Hierarchical Clustering made Simple},
author={Emanuele Palumbo and Moritz Vandenhirtz and Alain Ryser and Imant
Daunhawer and Julia E. Vogt},
journal={arXiv preprint arXiv:2410.07858},
year={2024},
archivePrefix={arXiv},
eprint={2410.07858},
primaryClass={cs.LG cs.AI cs.CV}
} | palumbo2024from |
arxiv-668060 | 2410.07860 | BA-Net: Bridge Attention in Deep Neural Networks | <|reference_start|>BA-Net: Bridge Attention in Deep Neural Networks: Attention mechanisms, particularly channel attention, have become highly influential in numerous computer vision tasks. Despite their effectiveness, many existing methods primarily focus on optimizing performance through complex attention modules applied at individual convolutional layers, often overlooking the synergistic interactions that can occur across multiple layers. In response to this gap, we introduce bridge attention, a novel approach designed to facilitate more effective integration and information flow between different convolutional layers. Our work extends the original bridge attention model (BAv1) by introducing an adaptive selection operator, which reduces information redundancy and optimizes the overall information exchange. This enhancement results in the development of BAv2, which achieves substantial performance improvements in the ImageNet classification task, obtaining Top-1 accuracies of 80.49% and 81.75% when using ResNet50 and ResNet101 as backbone networks, respectively. These results surpass the retrained baselines by 1.61% and 0.77%, respectively. Furthermore, BAv2 outperforms other existing channel attention techniques, such as the classical SENet101, exceeding its retrained performance by 0.52%. Additionally, integrating BAv2 into advanced convolutional networks and vision transformers has led to significant gains in performance across a wide range of computer vision tasks, underscoring its broad applicability.<|reference_end|> | arxiv | @article{zhang2024ba-net:,
title={BA-Net: Bridge Attention in Deep Neural Networks},
author={Ronghui Zhang and Runzong Zou and Yue Zhao and Zirui Zhang and
Junzhou Chen and Yue Cao and Chuan Hu and Houbing Song},
journal={arXiv preprint arXiv:2410.07860},
year={2024},
archivePrefix={arXiv},
eprint={2410.07860},
primaryClass={cs.CV}
} | zhang2024ba-net: |
arxiv-668061 | 2410.07863 | Learning to Balance Altruism and Self-interest Based on Empathy in Mixed-Motive Games | <|reference_start|>Learning to Balance Altruism and Self-interest Based on Empathy in Mixed-Motive Games: Real-world multi-agent scenarios often involve mixed motives, demanding altruistic agents capable of self-protection against potential exploitation. However, existing approaches often struggle to achieve both objectives. In this paper, based on the observation that empathic responses are modulated by inferred social relationships between agents, we propose LASE (Learning to balance Altruism and Self-interest based on Empathy), a distributed multi-agent reinforcement learning algorithm that fosters altruistic cooperation through gifting while avoiding exploitation by other agents in mixed-motive games. LASE allocates a portion of its rewards to co-players as gifts, with this allocation adapting dynamically based on the social relationship -- a metric evaluating the friendliness of co-players estimated by counterfactual reasoning. In particular, the social relationship measures each co-player by comparing the estimated $Q$-function of the current joint action to a counterfactual baseline which marginalizes the co-player's action, with its action distribution inferred by a perspective-taking module. Comprehensive experiments are performed in spatially and temporally extended mixed-motive games, demonstrating LASE's ability to promote group collaboration without compromising fairness and its capacity to adapt policies to various types of interactive co-players.<|reference_end|> | arxiv | @article{kong2024learning,
title={Learning to Balance Altruism and Self-interest Based on Empathy in
Mixed-Motive Games},
author={Fanqi Kong and Yizhe Huang and Song-Chun Zhu and Siyuan Qi and Xue
Feng},
journal={arXiv preprint arXiv:2410.07863},
year={2024},
archivePrefix={arXiv},
eprint={2410.07863},
primaryClass={cs.AI}
} | kong2024learning |
arxiv-668062 | 2410.07864 | RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation | <|reference_start|>RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation: Bimanual manipulation is essential in robotics, yet developing foundation models is extremely challenging due to the inherent complexity of coordinating two robot arms (leading to multi-modal action distributions) and the scarcity of training data. In this paper, we present the Robotics Diffusion Transformer (RDT), a pioneering diffusion foundation model for bimanual manipulation. RDT builds on diffusion models to effectively represent multi-modality, with innovative designs of a scalable Transformer to deal with the heterogeneity of multi-modal inputs and to capture the nonlinearity and high frequency of robotic data. To address data scarcity, we further introduce a Physically Interpretable Unified Action Space, which can unify the action representations of various robots while preserving the physical meanings of original actions, facilitating learning transferrable physical knowledge. With these designs, we managed to pre-train RDT on the largest collection of multi-robot datasets to date and scaled it up to 1.2B parameters, which is the largest diffusion-based foundation model for robotic manipulation. We finally fine-tuned RDT on a self-created multi-task bimanual dataset with over 6K+ episodes to refine its manipulation capabilities. Experiments on real robots demonstrate that RDT significantly outperforms existing methods. It exhibits zero-shot generalization to unseen objects and scenes, understands and follows language instructions, learns new skills with just 1~5 demonstrations, and effectively handles complex, dexterous tasks. We refer to https://rdt-robotics.github.io/rdt-robotics/ for the code and videos.<|reference_end|> | arxiv | @article{liu2024rdt-1b:,
title={RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation},
author={Songming Liu and Lingxuan Wu and Bangguo Li and Hengkai Tan and
Huayu Chen and Zhengyi Wang and Ke Xu and Hang Su and Jun Zhu},
journal={arXiv preprint arXiv:2410.07864},
year={2024},
archivePrefix={arXiv},
eprint={2410.07864},
primaryClass={cs.RO cs.AI cs.CV cs.LG}
} | liu2024rdt-1b: |
arxiv-668063 | 2410.07865 | Synergizing Morphological Computation and Generative Design: Automatic Synthesis of Tendon-Driven Grippers | <|reference_start|>Synergizing Morphological Computation and Generative Design: Automatic Synthesis of Tendon-Driven Grippers: Robots' behavior and performance are determined both by hardware and software. The design process of robotic systems is a complex journey that involves multiple phases. Throughout this process, the aim is to tackle various criteria simultaneously, even though they often contradict each other. The ultimate goal is to uncover the optimal solution that resolves these conflicting factors. Generative, computational, or automatic design are paradigms aimed at accelerating the whole design process. Within this paper we propose a design methodology to generate linkage mechanisms for robots with morphological computation. We use a graph grammar and a heuristic search algorithm to create robot mechanism graphs that are converted into simulation models for testing the design output. To verify the design methodology, we have applied it to a relatively simple quasi-static problem of object grasping. We found a way to automatically design an underactuated tendon-driven gripper that can grasp a wide range of objects. This is possible because of its structure, not because of sophisticated planning or learning.<|reference_end|> | arxiv | @article{zharkov2024synergizing,
title={Synergizing Morphological Computation and Generative Design: Automatic
Synthesis of Tendon-Driven Grippers},
author={Kirill Zharkov and Mikhail Chaikovskii and Yefim Osipov and Rahaf
Alshaowa and Ivan Borisov and Sergey Kolyubin},
journal={arXiv preprint arXiv:2410.07865},
year={2024},
archivePrefix={arXiv},
eprint={2410.07865},
primaryClass={cs.RO}
} | zharkov2024synergizing |
arxiv-668064 | 2410.07866 | System-2 Reasoning via Generality and Adaptation | <|reference_start|>System-2 Reasoning via Generality and Adaptation: While significant progress has been made in task-specific applications, current models struggle with deep reasoning, generality, and adaptation -- key components of System-2 reasoning that are crucial for achieving Artificial General Intelligence (AGI). Despite the promise of approaches such as program synthesis, language models, and transformers, these methods often fail to generalize beyond their training data and to adapt to novel tasks, limiting their ability to perform human-like reasoning. This paper explores the limitations of existing approaches in achieving advanced System-2 reasoning and highlights the importance of generality and adaptation for AGI. Moreover, we propose four key research directions to address these gaps: (1) learning human intentions from action sequences, (2) combining symbolic and neural models, (3) meta-learning for unfamiliar environments, and (4) reinforcement learning to reason multi-step. Through these directions, we aim to advance the ability to generalize and adapt, bringing computational models closer to the reasoning capabilities required for AGI.<|reference_end|> | arxiv | @article{kim2024system-2,
title={System-2 Reasoning via Generality and Adaptation},
author={Sejin Kim and Sundong Kim},
journal={arXiv preprint arXiv:2410.07866},
year={2024},
archivePrefix={arXiv},
eprint={2410.07866},
primaryClass={cs.AI}
} | kim2024system-2 |
arxiv-668065 | 2410.07867 | The Sets of Power | <|reference_start|>The Sets of Power: Measures of voting power have been the subject of extensive research since the mid 1940s. More recently, similar measures of relative importance have been studied in other domains that include inconsistent knowledge bases, intensity of attacks in argumentation, different problems in the analysis of database management, and explainability. This paper demonstrates that all these examples are instantiations of computing measures of importance for a rather more general problem domain. The paper then shows that the best-known measures of importance can be computed for any reference set whenever one is given a monotonically increasing predicate that partitions the subsets of that reference set. As a consequence, the paper also proves that measures of importance can be devised in several domains, for some of which such measures have not yet been studied nor proposed. Furthermore, the paper highlights several research directions related with computing measures of importance.<|reference_end|> | arxiv | @article{marques-silva2024the,
title={The Sets of Power},
author={Joao Marques-Silva (1) and Carlos Menc\'ia (2) and Ra\'ul Menc\'ia
(2) ((1) ICREA, University of Lleida, Spain, (2) University of Oviedo,
Spain)},
journal={arXiv preprint arXiv:2410.07867},
year={2024},
archivePrefix={arXiv},
eprint={2410.07867},
primaryClass={cs.AI}
} | marques-silva2024the |
arxiv-668066 | 2410.07869 | Benchmarking Agentic Workflow Generation | <|reference_start|>Benchmarking Agentic Workflow Generation: Large Language Models (LLMs), with their exceptional ability to handle a wide range of tasks, have driven significant advancements in tackling reasoning and planning tasks, wherein decomposing complex problems into executable workflows is a crucial step in this process. Existing workflow evaluation frameworks either focus solely on holistic performance or suffer from limitations such as restricted scenario coverage, simplistic workflow structures, and lax evaluation standards. To this end, we introduce WorFBench, a unified workflow generation benchmark with multi-faceted scenarios and intricate graph workflow structures. Additionally, we present WorFEval, a systemic evaluation protocol utilizing subsequence and subgraph matching algorithms to accurately quantify the LLM agent's workflow generation capabilities. Through comprehensive evaluations across different types of LLMs, we discover distinct gaps between the sequence planning capabilities and graph planning capabilities of LLM agents, with even GPT-4 exhibiting a gap of around 15%. We also train two open-source models and evaluate their generalization abilities on held-out tasks. Furthermore, we observe that the generated workflows can enhance downstream tasks, enabling them to achieve superior performance with less time during inference. Code and dataset will be available at https://github.com/zjunlp/WorFBench.<|reference_end|> | arxiv | @article{qiao2024benchmarking,
title={Benchmarking Agentic Workflow Generation},
author={Shuofei Qiao and Runnan Fang and Zhisong Qiu and Xiaobin Wang and
Ningyu Zhang and Yong Jiang and Pengjun Xie and Fei Huang and Huajun Chen},
journal={arXiv preprint arXiv:2410.07869},
year={2024},
archivePrefix={arXiv},
eprint={2410.07869},
primaryClass={cs.CL cs.AI cs.HC cs.LG cs.MA}
} | qiao2024benchmarking |
arxiv-668067 | 2410.07872 | L-VITeX: Light-weight Visual Intuition for Terrain Exploration | <|reference_start|>L-VITeX: Light-weight Visual Intuition for Terrain Exploration: This paper presents L-VITeX, a lightweight visual intuition system for terrain exploration designed for resource-constrained robots and swarms. L-VITeX aims to provide a hint of Regions of Interest (RoIs) without computationally expensive processing. By utilizing the Faster Objects, More Objects (FOMO) tinyML architecture, the system achieves high accuracy (>99%) in RoI detection while operating on minimal hardware resources (Peak RAM usage < 50 KB) with near real-time inference (<200 ms). The paper evaluates L-VITeX's performance across various terrains, including mountainous areas, underwater shipwreck debris regions, and Martian rocky surfaces. Additionally, it demonstrates the system's application in 3D mapping using a small mobile robot run by ESP32-Cam and Gaussian Splats (GS), showcasing its potential to enhance exploration efficiency and decision-making.<|reference_end|> | arxiv | @article{mazumder2024l-vitex:,
title={L-VITeX: Light-weight Visual Intuition for Terrain Exploration},
author={Antar Mazumder and Zarin Anjum Madhiha},
journal={arXiv preprint arXiv:2410.07872},
year={2024},
archivePrefix={arXiv},
eprint={2410.07872},
primaryClass={cs.RO}
} | mazumder2024l-vitex: |
arxiv-668068 | 2410.07874 | "It's Your Turn": A Novel Channel Contention Mechanism for Improving Wi-Fi's Reliability | <|reference_start|>"It's Your Turn": A Novel Channel Contention Mechanism for Improving Wi-Fi's Reliability: The next generation of Wi-Fi, i.e., the IEEE 802.11bn (aka Wi-Fi 8), is not only expected to increase its performance and provide extended capabilities but also aims to offer a reliable service. Given that one of the main sources of unreliability in IEEE 802.11 stems from the current distributed channel access, which is based on Listen-Before-Talk (LBT), the development of novel contention schemes gains importance for Wi-Fi 8 and beyond. In this paper, we propose a new channel contention mechanism, "It's Your Turn" (IYT), that extends the existing Distributed Coordination Function (DCF) and aims at improving the reliability of distributed LBT by providing ordered device transmissions thanks to neighboring activity awareness. Using simulation results, we show that our mechanism strives to provide reliable performance by controlling the channel access delay. We prove the versatility of IYT against different topologies, coexistence with legacy devices, and increasing network densities.<|reference_end|> | arxiv | @article{wilhelmi2024"it's,
title={"It's Your Turn": A Novel Channel Contention Mechanism for Improving
Wi-Fi's Reliability},
author={Francesc Wilhelmi and Lorenzo Galati-Giordano and Gianluca
Fontanesi},
journal={arXiv preprint arXiv:2410.07874},
year={2024},
archivePrefix={arXiv},
eprint={2410.07874},
primaryClass={cs.NI}
} | wilhelmi2024"it's |
arxiv-668069 | 2410.07876 | FDDM: Frequency-Decomposed Diffusion Model for Rectum Cancer Dose Prediction in Radiotherapy | <|reference_start|>FDDM: Frequency-Decomposed Diffusion Model for Rectum Cancer Dose Prediction in Radiotherapy: Accurate dose distribution prediction is crucial in radiotherapy planning. Although previous methods based on convolutional neural networks have shown promising performance, they suffer from over-smoothing, leading to predictions without important high-frequency details. Recently, diffusion models have achieved great success in computer vision, excelling at generating images with more high-frequency details, yet suffering from time-consuming inference and extensive computational resource consumption. To alleviate these problems, we propose the Frequency-Decomposed Diffusion Model (FDDM), which refines the high-frequency subbands of the dose map. To be specific, we design a Coarse Dose Prediction Module (CDPM) to first predict a coarse dose map and then utilize the discrete wavelet transform to decompose the coarse dose map into a low-frequency subband and three high-frequency subbands. There is a notable difference between the coarse predicted results and the ground truth in the high-frequency subbands. Therefore, we design a diffusion-based module called the High-Frequency Refinement Module (HFRM) that performs the diffusion operation on the high-frequency components of the dose map instead of the original dose map. Extensive experiments on an in-house dataset verify the effectiveness of our approach.<|reference_end|> | arxiv | @article{liao2024fddm:,
title={FDDM: Frequency-Decomposed Diffusion Model for Rectum Cancer Dose
Prediction in Radiotherapy},
author={Xin Liao and Zhenghao Feng and Jianghong Xiao and Xingchen Peng and
Yan Wang},
journal={arXiv preprint arXiv:2410.07876},
year={2024},
archivePrefix={arXiv},
eprint={2410.07876},
primaryClass={eess.IV cs.CV}
} | liao2024fddm: |
arxiv-668070 | 2410.07877 | Constrained Skill Discovery: Quadruped Locomotion with Unsupervised Reinforcement Learning | <|reference_start|>Constrained Skill Discovery: Quadruped Locomotion with Unsupervised Reinforcement Learning: Representation learning and unsupervised skill discovery can allow robots to acquire diverse and reusable behaviors without the need for task-specific rewards. In this work, we use unsupervised reinforcement learning to learn a latent representation by maximizing the mutual information between skills and states subject to a distance constraint. Our method improves upon prior constrained skill discovery methods by replacing the latent transition maximization with a norm-matching objective. This not only results in much richer state space coverage compared to baseline methods, but also allows the robot to learn more stable and easily controllable locomotive behaviors. We successfully deploy the learned policy on a real ANYmal quadruped robot and demonstrate that the robot can accurately reach arbitrary points of the Cartesian state space in a zero-shot manner, using only intrinsic skill discovery and standard regularization rewards.<|reference_end|> | arxiv | @article{atanassov2024constrained,
title={Constrained Skill Discovery: Quadruped Locomotion with Unsupervised
Reinforcement Learning},
author={Vassil Atanassov and Wanming Yu and Alexander Luis Mitchell and
Mark Nicholas Finean and Ioannis Havoutis},
journal={arXiv preprint arXiv:2410.07877},
year={2024},
archivePrefix={arXiv},
eprint={2410.07877},
primaryClass={cs.RO}
} | atanassov2024constrained |
arxiv-668071 | 2410.07880 | Unsupervised Data Validation Methods for Efficient Model Training | <|reference_start|>Unsupervised Data Validation Methods for Efficient Model Training: This paper investigates the challenges and potential solutions for improving machine learning systems for low-resource languages. State-of-the-art models in natural language processing (NLP), text-to-speech (TTS), speech-to-text (STT), and vision-language models (VLM) rely heavily on large datasets, which are often unavailable for low-resource languages. This research explores key areas such as defining "quality data," developing methods for generating appropriate data and enhancing accessibility to model training. A comprehensive review of current methodologies, including data augmentation, multilingual transfer learning, synthetic data generation, and data selection techniques, highlights both advancements and limitations. Several open research questions are identified, providing a framework for future studies aimed at optimizing data utilization, reducing the required data quantity, and maintaining high-quality model performance. By addressing these challenges, the paper aims to make advanced machine learning models more accessible for low-resource languages, enhancing their utility and impact across various sectors.<|reference_end|> | arxiv | @article{paniv2024unsupervised,
title={Unsupervised Data Validation Methods for Efficient Model Training},
author={Yurii Paniv},
journal={arXiv preprint arXiv:2410.07880},
year={2024},
archivePrefix={arXiv},
eprint={2410.07880},
primaryClass={cs.CL cs.LG}
} | paniv2024unsupervised |
arxiv-668072 | 2410.07881 | A Comprehensive Survey on Joint Resource Allocation Strategies in Federated Edge Learning | <|reference_start|>A Comprehensive Survey on Joint Resource Allocation Strategies in Federated Edge Learning: Federated Edge Learning (FEL), an emerging distributed Machine Learning (ML) paradigm, enables model training in a distributed environment while ensuring user privacy by using physical separation for each user data. However, with the development of complex application scenarios such as the Internet of Things (IoT) and Smart Earth, the conventional resource allocation schemes can no longer effectively support these growing computational and communication demands. Therefore, joint resource optimization may be the key solution to the scaling problem. This paper simultaneously addresses the multifaceted challenges of computation and communication, with the growing multiple resource demands. We systematically review the joint allocation strategies for different resources (computation, data, communication, and network topology) in FEL, and summarize the advantages in improving system efficiency, reducing latency, enhancing resource utilization and enhancing robustness. In addition, we present the potential ability of joint optimization to enhance privacy preservation by reducing communication requirements, indirectly. This work not only provides theoretical support for resource management in federated learning (FL) systems, but also provides ideas for potential optimal deployment in multiple real-world scenarios. By thoroughly discussing the current challenges and future research directions, it also provides some important insights into multi-resource optimization in complex application environments.<|reference_end|> | arxiv | @article{zhang2024a,
title={A Comprehensive Survey on Joint Resource Allocation Strategies in
Federated Edge Learning},
author={Jingbo Zhang and Qiong Wu and Pingyi Fan and Qiang Fan},
journal={arXiv preprint arXiv:2410.07881},
year={2024},
archivePrefix={arXiv},
eprint={2410.07881},
primaryClass={cs.LG}
} | zhang2024a |
arxiv-668073 | 2410.07884 | Generated Bias: Auditing Internal Bias Dynamics of Text-To-Image Generative Models | <|reference_start|>Generated Bias: Auditing Internal Bias Dynamics of Text-To-Image Generative Models: Text-To-Image (TTI) Diffusion Models such as DALL-E and Stable Diffusion are capable of generating images from text prompts. However, they have been shown to perpetuate gender stereotypes. These models process data internally in multiple stages and employ several constituent models, often trained separately. In this paper, we propose two novel metrics to measure bias internally in these multistage multimodal models. Diffusion Bias was developed to detect and measures bias introduced by the diffusion stage of the models. Bias Amplification measures amplification of bias during the text-to-image conversion process. Our experiments reveal that TTI models amplify gender bias, the diffusion process itself contributes to bias and that Stable Diffusion v2 is more prone to gender bias than DALL-E 2.<|reference_end|> | arxiv | @article{mandal2024generated,
title={Generated Bias: Auditing Internal Bias Dynamics of Text-To-Image
Generative Models},
author={Abhishek Mandal, Susan Leavy, and Suzanne Little},
journal={arXiv preprint arXiv:2410.07884},
year={2024},
archivePrefix={arXiv},
eprint={2410.07884},
primaryClass={cs.CV cs.CY}
} | mandal2024generated |
arxiv-668074 | 2410.07887 | Collision Diversity SCRAM: Beyond the Sphere-Packing Bound | <|reference_start|>Collision Diversity SCRAM: Beyond the Sphere-Packing Bound: This paper presents a novel scheme dubbed Collision Diversity (CoD) SCRAM, which is provisioned to meet the challenging requirements of the future 6G, portrayed in massive connectivity, reliability, and ultra-low latency. The conventional SCRAM mechanism, which stands for Slotted Coded Random Access Multiplexing, is a hybrid decoding scheme, that jointly resolves collisions and decodes the Low Density Parity Check (LDPC) codewords, in a similar analogy to Belief Propagation (BP) decoding on a joint three-layer Tanner graph. The CoD SCRAM proposed herein tends to enhance the performance of SCRAM by adopting an information-theoretic approach that tends to maximize the attainable Spectral Efficiency. Besides, due to the analogy between the two-layer Tanner graph of classical LDPC codes, and the three-layer Tanner graph of SCRAM, the CoD SCRAM adopts the well-developed tools utilized to design powerful LDPC codes. Finally, the proposed CoD scheme tends to leverage the collisions among the users in order to induce diversity. Results show that the proposed CoD SCRAM scheme surpasses the conventional SCRAM scheme, which is superior to the state-of-the-art Non-Orthogonal Multiple Access (NOMA) schemes. Additionally, by leveraging the collisions, the CoD SCRAM tends to surpass the Sphere-Packing Bound (SPB) at the respective information block length of the underlying LDPC codes of the accommodated users.<|reference_end|> | arxiv | @article{nafie2024collision,
title={Collision Diversity SCRAM: Beyond the Sphere-Packing Bound},
author={Sally Nafie, Joerg Robert, Albert Heuberger},
journal={arXiv preprint arXiv:2410.07887},
year={2024},
archivePrefix={arXiv},
eprint={2410.07887},
primaryClass={cs.IT math.IT}
} | nafie2024collision |
arxiv-668075 | 2410.07888 | Deepfake detection in videos with multiple faces using geometric-fakeness features | <|reference_start|>Deepfake detection in videos with multiple faces using geometric-fakeness features: Due to the development of facial manipulation techniques in recent years, deepfake detection in video streams has become an important problem for face biometrics, brand monitoring or online video conferencing solutions. In the case of biometric authentication, if you replace a real datastream with a deepfake, you can bypass a liveness detection system. Using a deepfake in a video conference, you can penetrate a private meeting. Deepfakes of victims or public figures can also be used by fraudsters for blackmail, extortion and financial fraud. Therefore, the task of detecting deepfakes is relevant to ensuring privacy and security. In existing approaches to deepfake detection, performance deteriorates when multiple faces are present in a video simultaneously or when there are other objects erroneously classified as faces. In our research we propose to use geometric-fakeness features (GFF) that characterize a dynamic degree of a face presence in a video and its per-frame deepfake scores. To analyze temporal inconsistencies in GFFs between frames, we train a complex deep learning model that outputs a final deepfake prediction. We employ our approach to analyze videos with multiple faces that are simultaneously present in a video. Such videos often occur in practice, e.g., in an online video conference. In this case, real faces appearing in a frame together with a deepfake face will significantly affect deepfake detection, and our approach allows us to counter this problem. Through extensive experiments we demonstrate that our approach outperforms current state-of-the-art methods on popular benchmark datasets such as FaceForensics++, DFDC, Celeb-DF and WildDeepFake.
The proposed approach remains accurate when trained to detect multiple different deepfake generation techniques.<|reference_end|> | arxiv | @article{vyshegorodtsev2024deepfake,
title={Deepfake detection in videos with multiple faces using
geometric-fakeness features},
author={Kirill Vyshegorodtsev, Dmitry Kudiyarov, Alexander Balashov, Alexander
Kuzmin},
journal={arXiv preprint arXiv:2410.07888},
year={2024},
archivePrefix={arXiv},
eprint={2410.07888},
primaryClass={cs.CV cs.CR}
} | vyshegorodtsev2024deepfake |
arxiv-668076 | 2410.07890 | Identifying latent disease factors differently expressed in patient subgroups using group factor analysis | <|reference_start|>Identifying latent disease factors differently expressed in patient subgroups using group factor analysis: In this study, we propose a novel approach to uncover subgroup-specific and subgroup-common latent factors addressing the challenges posed by the heterogeneity of neurological and mental disorders, which hinder disease understanding, treatment development, and outcome prediction. The proposed approach, sparse Group Factor Analysis (GFA) with regularised horseshoe priors, was implemented with probabilistic programming and can uncover associations (or latent factors) among multiple data modalities differentially expressed in sample subgroups. Synthetic data experiments showed the robustness of our sparse GFA by correctly inferring latent factors and model parameters. When applied to the Genetic Frontotemporal Dementia Initiative (GENFI) dataset, which comprises patients with frontotemporal dementia (FTD) with genetically defined subgroups, the sparse GFA identified latent disease factors differentially expressed across the subgroups, distinguishing between "subgroup-specific" latent factors within homogeneous groups and "subgroup common" latent factors shared across subgroups. The latent disease factors captured associations between brain structure and non-imaging variables (i.e., questionnaires assessing behaviour and disease severity) across the different genetic subgroups, offering insights into disease profiles. Importantly, two latent factors were more pronounced in the two more homogeneous FTD patient subgroups (progranulin (GRN) and microtubule-associated protein tau (MAPT) mutation), showcasing the method's ability to reveal subgroup-specific characteristics. 
These findings underscore the potential of sparse GFA for integrating multiple data modalities and identifying interpretable latent disease factors that can improve the characterization and stratification of patients with neurological and mental health disorders.<|reference_end|> | arxiv | @article{ferreira2024identifying,
title={Identifying latent disease factors differently expressed in patient
subgroups using group factor analysis},
author={Fabio S. Ferreira, John Ashburner, Arabella Bouzigues, Chatrin
Suksasilp, Lucy L. Russell, Phoebe H. Foster, Eve Ferry-Bolder, John C. van
Swieten, Lize C. Jiskoot, Harro Seelaar, Raquel Sanchez-Valle, Robert
Laforce, Caroline Graff, Daniela Galimberti, Rik Vandenberghe, Alexandre de
Mendonca, Pietro Tiraboschi, Isabel Santana, Alexander Gerhard, Johannes
Levin, Sandro Sorbi, Markus Otto, Florence Pasquier, Simon Ducharme, Chris R.
Butler, Isabelle Le Ber, Elizabeth Finger, Maria C. Tartaglia, Mario
Masellis, James B. Rowe, Matthis Synofzik, Fermin Moreno, Barbara Borroni,
Samuel Kaski, Jonathan D. Rohrer, Janaina Mourao-Miranda},
journal={arXiv preprint arXiv:2410.07890},
year={2024},
archivePrefix={arXiv},
eprint={2410.07890},
primaryClass={stat.ML cs.LG}
} | ferreira2024identifying |
arxiv-668077 | 2410.07892 | Soothing Sensations: Enhancing Interactions with a Socially Assistive Robot through Vibrotactile Heartbeats | <|reference_start|>Soothing Sensations: Enhancing Interactions with a Socially Assistive Robot through Vibrotactile Heartbeats: Physical interactions with socially assistive robots (SARs) positively affect user wellbeing. However, haptic experiences when touching a SAR are typically limited to perceiving the robot's movements or shell texture, while other modalities that could enhance the touch experience with the robot, such as vibrotactile stimulation, are under-explored. In this exploratory qualitative study, we investigate the potential of enhancing human interaction with the PARO robot through vibrotactile heartbeats, with the goal to regulate subjective wellbeing during stressful situations. We conducted in-depth one-on-one interviews with 30 participants, who watched three horror movie clips alone, with PARO, and with a PARO that displayed a vibrotactile heartbeat. Our findings show that PARO's presence and its interactive capabilities can help users regulate emotions through attentional redeployment from a stressor toward the robot. The vibrotactile heartbeat further reinforced PARO's physical and social presence, enhancing the socio-emotional support provided by the robot and its perceived life-likeness. We discuss the impact of individual differences in user experience and implications for the future design of life-like vibrotactile stimulation for SARs.<|reference_end|> | arxiv | @article{borgstedt2024soothing,
title={Soothing Sensations: Enhancing Interactions with a Socially Assistive
Robot through Vibrotactile Heartbeats},
author={Jacqueline Borgstedt, Shaun Macdonald, Karola Marky, Frank E. Pollick,
Stephen A. Brewster},
journal={arXiv preprint arXiv:2410.07892},
year={2024},
archivePrefix={arXiv},
eprint={2410.07892},
primaryClass={cs.HC cs.RO}
} | borgstedt2024soothing |
arxiv-668078 | 2410.07893 | Ormer: A Manipulation-resistant and Gas-efficient Blockchain Pricing Oracle for DeFi | <|reference_start|>Ormer: A Manipulation-resistant and Gas-efficient Blockchain Pricing Oracle for DeFi: Blockchain oracle is a critical third-party web service for Decentralized Finance (DeFi) protocols. Oracles retrieve external information such as token prices from exchanges and feed them as trusted data sources into smart contracts, enabling core DeFi applications such as loaning protocols. Currently, arithmetic mean based time-weighted average price (TWAP) oracles are widely used in DeFi by averaging external price data with fixed time frame, which is considered reliable and gas-efficient for protocol execution. However, recent research shows that TWAP price feeds are vulnerable to price manipulation attack even with long time frame setting, which would further introduce long time delays and price errors hindering the service quality of DeFi applications. To address this issue, we propose a novel on-chain gas-efficient pricing algorithm (Ormer) that heuristically estimates the median of the current streaming asset price feed based on a piecewise-parabolic formula, while the time delay is suppressed by fusing estimations with different observation window size. Our evaluation based on Ethereum WETH/USDT swapping pair price feed shows that Ormer reduces the mean absolute price error by 15.3% and the time delay by 49.3% compared to TWAP. For gas efficiency, an optimized smart contract design and constant storage requirement regardless of the number of price observations is developed for Ormer.<|reference_end|> | arxiv | @article{bai2024ormer:,
title={Ormer: A Manipulation-resistant and Gas-efficient Blockchain Pricing
Oracle for DeFi},
author={Dongbin Bai, Jiannong Cao, Yinfeng Cao, Long Wen},
journal={arXiv preprint arXiv:2410.07893},
year={2024},
archivePrefix={arXiv},
eprint={2410.07893},
primaryClass={cs.CR}
} | bai2024ormer: |
arxiv-668079 | 2410.07895 | Grid-AR: A Grid-based Booster for Learned Cardinality Estimation and Range Joins | <|reference_start|>Grid-AR: A Grid-based Booster for Learned Cardinality Estimation and Range Joins: We propose an advancement in cardinality estimation by augmenting autoregressive models with a traditional grid structure. The novel hybrid estimator addresses the limitations of autoregressive models by creating a smaller representation of continuous columns and by incorporating a batch execution for queries with range predicates, as opposed to an iterative sampling approach. The suggested modification markedly improves the execution time of the model for both training and prediction, reduces memory consumption, and does so with minimal decline in accuracy. We further present an algorithm that enables the estimator to calculate cardinality estimates for range join queries efficiently. To validate the effectiveness of our cardinality estimator, we conduct and present a comprehensive evaluation considering state-of-the-art competitors using three benchmark datasets -- demonstrating vast improvements in execution times and resource utilization.<|reference_end|> | arxiv | @article{gjurovski2024grid-ar:,
title={Grid-AR: A Grid-based Booster for Learned Cardinality Estimation and
Range Joins},
author={Damjan Gjurovski, Angjela Davitkova, Sebastian Michel},
journal={arXiv preprint arXiv:2410.07895},
year={2024},
archivePrefix={arXiv},
eprint={2410.07895},
primaryClass={cs.DB}
} | gjurovski2024grid-ar: |
arxiv-668080 | 2410.07896 | Executing Arithmetic: Fine-Tuning Large Language Models as Turing Machines | <|reference_start|>Executing Arithmetic: Fine-Tuning Large Language Models as Turing Machines: Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of natural language processing and reasoning tasks. However, their performance in the foundational domain of arithmetic remains unsatisfactory. When dealing with arithmetic tasks, LLMs often memorize specific examples rather than learning the underlying computational logic, limiting their ability to generalize to new problems. In this paper, we propose a Composable Arithmetic Execution Framework (CAEF) that enables LLMs to learn to execute step-by-step computations by emulating Turing Machines, thereby gaining a genuine understanding of computational logic. Moreover, the proposed framework is highly scalable, allowing composing learned operators to significantly reduce the difficulty of learning complex operators. In our evaluation, CAEF achieves nearly 100% accuracy across seven common mathematical operations on the LLaMA 3.1-8B model, effectively supporting computations involving operands with up to 100 digits, a level where GPT-4o falls short noticeably in some settings.<|reference_end|> | arxiv | @article{lai2024executing,
title={Executing Arithmetic: Fine-Tuning Large Language Models as Turing
Machines},
author={Junyu Lai, Jiahe Xu, Yao Yang, Yunpeng Huang, Chun Cao, Jingwei Xu},
journal={arXiv preprint arXiv:2410.07896},
year={2024},
archivePrefix={arXiv},
eprint={2410.07896},
primaryClass={cs.AI}
} | lai2024executing |
arxiv-668081 | 2410.07897 | Minimal Trellises for non-Degenerate and Degenerate Decoding of Quantum Stabilizer Codes | <|reference_start|>Minimal Trellises for non-Degenerate and Degenerate Decoding of Quantum Stabilizer Codes: This paper presents a comprehensive guide to designing minimal trellises for both non-degenerate and degenerate decoding of quantum stabilizer codes. For non-degenerate decoding, various strategies are explored, leveraging insights from classical rectangular codes to minimize the complexity associated with the non-degenerate maximum likelihood error estimation using the Viterbi algorithm. Additionally, novel techniques for constructing minimal multi-goal trellises for degenerate decoding are introduced, including a merging algorithm, a Shannon-product approach, and the BCJR-Wolf method. The study establishes essential properties of multi-goal trellises and provides bounds on the decoding complexity using the sum-product Viterbi decoding algorithm. These advancements decrease the decoding complexity by a factor $\mathcal{O}(n)$, where $n$ is the code length. Finally, the paper applies these results to CSS codes and demonstrates a reduction in complexity by independently applying degenerate decoding to $X$ and $Z$ errors.<|reference_end|> | arxiv | @article{stylianou2024minimal,
title={Minimal Trellises for non-Degenerate and Degenerate Decoding of Quantum
Stabilizer Codes},
author={Evagoras Stylianou, Vladimir Sidorenko, Christian Deppe and Holger
Boche},
journal={arXiv preprint arXiv:2410.07897},
year={2024},
archivePrefix={arXiv},
eprint={2410.07897},
primaryClass={cs.IT math.IT}
} | stylianou2024minimal |
arxiv-668082 | 2410.07900 | CL3: A Collaborative Learning Framework for the Medical Data Ensuring Data Privacy in the Hyperconnected Environment | <|reference_start|>CL3: A Collaborative Learning Framework for the Medical Data Ensuring Data Privacy in the Hyperconnected Environment: In a hyperconnected environment, medical institutions are particularly concerned with data privacy when sharing and transmitting sensitive patient information due to the risk of data breaches, where malicious actors could intercept sensitive information. A collaborative learning framework, including transfer, federated, and incremental learning, can generate efficient, secure, and scalable models while requiring less computation, maintaining patient data privacy, and ensuring an up-to-date model. This study aims to address the detection of COVID-19 using chest X-ray images through a proposed collaborative learning framework called CL3. Initially, transfer learning is employed, leveraging knowledge from a pre-trained model as the starting global model. Local models from different medical institutes are then integrated, and a new global model is constructed to adapt to any data drift observed in the local models. Additionally, incremental learning is considered, allowing continuous adaptation to new medical data without forgetting previously learned information. Experimental results demonstrate that the CL3 framework achieved a global accuracy of 89.99\% when using Xception with a batch size of 16 after being trained for six federated communication rounds.<|reference_end|> | arxiv | @article{parvez2024cl3:,
title={CL3: A Collaborative Learning Framework for the Medical Data Ensuring
Data Privacy in the Hyperconnected Environment},
author={Mohammad Zavid Parvez, Rafiqul Islam, Md Zahidul Islam},
journal={arXiv preprint arXiv:2410.07900},
year={2024},
archivePrefix={arXiv},
eprint={2410.07900},
primaryClass={cs.LG}
} | parvez2024cl3: |
arxiv-668083 | 2410.07901 | Semi-Supervised Video Desnowing Network via Temporal Decoupling Experts and Distribution-Driven Contrastive Regularization | <|reference_start|>Semi-Supervised Video Desnowing Network via Temporal Decoupling Experts and Distribution-Driven Contrastive Regularization: Snow degradations present formidable challenges to the advancement of computer vision tasks by the undesirable corruption in outdoor scenarios. While current deep learning-based desnowing approaches achieve success on synthetic benchmark datasets, they struggle to restore out-of-distribution real-world snowy videos due to the deficiency of paired real-world training data. To address this bottleneck, we devise a new paradigm for video desnowing in a semi-supervised spirit to involve unlabeled real data for the generalizable snow removal. Specifically, we construct a real-world dataset with 85 snowy videos, and then present a Semi-supervised Video Desnowing Network (SemiVDN) equipped with a novel Distribution-driven Contrastive Regularization. The elaborated contrastive regularization mitigates the distribution gap between the synthetic and real data, and consequently maintains the desired snow-invariant background details. Furthermore, based on the atmospheric scattering model, we introduce a Prior-guided Temporal Decoupling Experts module to decompose the physical components that make up a snowy video in a frame-correlated manner. We evaluate our SemiVDN on benchmark datasets and the collected real snowy data. The experimental results demonstrate the superiority of our approach against state-of-the-art image- and video-level desnowing methods.<|reference_end|> | arxiv | @article{wu2024semi-supervised,
title={Semi-Supervised Video Desnowing Network via Temporal Decoupling Experts
and Distribution-Driven Contrastive Regularization},
author={Hongtao Wu, Yijun Yang, Angelica I Aviles-Rivero, Jingjing Ren,
Sixiang Chen, Haoyu Chen, Lei Zhu},
journal={arXiv preprint arXiv:2410.07901},
year={2024},
archivePrefix={arXiv},
eprint={2410.07901},
primaryClass={cs.CV}
} | wu2024semi-supervised |
arxiv-668084 | 2410.07908 | ONCOPILOT: A Promptable CT Foundation Model For Solid Tumor Evaluation | <|reference_start|>ONCOPILOT: A Promptable CT Foundation Model For Solid Tumor Evaluation: Carcinogenesis is a proteiform phenomenon, with tumors emerging in various locations and displaying complex, diverse shapes. At the crucial intersection of research and clinical practice, it demands precise and flexible assessment. However, current biomarkers, such as RECIST 1.1's long and short axis measurements, fall short of capturing this complexity, offering an approximate estimate of tumor burden and a simplistic representation of a more intricate process. Additionally, existing supervised AI models face challenges in addressing the variability in tumor presentations, limiting their clinical utility. These limitations arise from the scarcity of annotations and the models' focus on narrowly defined tasks. To address these challenges, we developed ONCOPILOT, an interactive radiological foundation model trained on approximately 7,500 CT scans covering the whole body, from both normal anatomy and a wide range of oncological cases. ONCOPILOT performs 3D tumor segmentation using visual prompts like point-click and bounding boxes, outperforming state-of-the-art models (e.g., nnUnet) and achieving radiologist-level accuracy in RECIST 1.1 measurements. The key advantage of this foundation model is its ability to surpass state-of-the-art performance while keeping the radiologist in the loop, a capability that previous models could not achieve. When radiologists interactively refine the segmentations, accuracy improves further. ONCOPILOT also accelerates measurement processes and reduces inter-reader variability, facilitating volumetric analysis and unlocking new biomarkers for deeper insights. 
This AI assistant is expected to enhance the precision of RECIST 1.1 measurements, unlock the potential of volumetric biomarkers, and improve patient stratification and clinical care, while seamlessly integrating into the radiological workflow.<|reference_end|> | arxiv | @article{machado2024oncopilot:,
title={ONCOPILOT: A Promptable CT Foundation Model For Solid Tumor Evaluation},
author={L'eo Machado, H'el`ene Philippe, 'Elodie Ferreres, Julien Khlaut,
Julie Dupuis, Korentin Le Floch, Denis Habip Gatenyo, Pascal Roux, Jules
Gr'egory, Maxime Ronot, Corentin Dancette, Daniel Tordjman, Pierre Manceron,
Paul H'erent},
journal={arXiv preprint arXiv:2410.07908},
year={2024},
archivePrefix={arXiv},
eprint={2410.07908},
primaryClass={eess.IV cs.AI cs.CV}
} | machado2024oncopilot: |
arxiv-668085 | 2410.07911 | Stress Detection Using PPG Signal and Combined Deep CNN-MLP Network | <|reference_start|>Stress Detection Using PPG Signal and Combined Deep CNN-MLP Network: Stress has become a fact of life. It has a significant effect on the function of body systems, and many key systems of the body, including the respiratory, cardiovascular, and even reproductive systems, are impacted by stress. It can be very helpful to detect stress episodes in the early stages of their appearance to avoid the damage they can cause to body systems. Using physiological signals can be useful for stress detection as they reflect very important information about the human body. The PPG signal, due to its advantages, is one of the most widely used signals in this field. In this research work, we take advantage of PPG signals to detect stress events. The PPG signals used in this work are collected from one of the newest publicly available datasets, named UBFC-Phys, and a model is developed using a CNN-MLP deep learning algorithm. The results obtained from the proposed model indicate that stress can be detected with an accuracy of approximately 82 percent.<|reference_end|> | arxiv | @article{hasanpoor2024stress,
title={Stress Detection Using PPG Signal and Combined Deep CNN-MLP Network},
author={Yasin Hasanpoor, Koorosh Motaman, Bahram Tarvirdizadeh, Khalil
Alipour, and Mohammad Ghamari},
journal={arXiv preprint arXiv:2410.07911},
year={2024},
doi={10.1109/ICBME57741.2022.10052957},
archivePrefix={arXiv},
eprint={2410.07911},
primaryClass={cs.LG}
} | hasanpoor2024stress |
arxiv-668086 | 2410.07912 | Understanding Spatio-Temporal Relations in Human-Object Interaction using Pyramid Graph Convolutional Network | <|reference_start|>Understanding Spatio-Temporal Relations in Human-Object Interaction using Pyramid Graph Convolutional Network: Human activity recognition is an important task for an intelligent robot; especially in the field of human-robot collaboration, it requires not only the label of sub-activities but also the temporal structure of the activity. In order to automatically recognize both the label and the temporal structure in a sequence of human-object interactions, we propose a novel Pyramid Graph Convolutional Network (PGCN), which employs a pyramidal encoder-decoder architecture consisting of an attention-based graph convolution network and a temporal pyramid pooling module for downsampling and upsampling the interaction sequence on the temporal axis, respectively. The system represents the 2D or 3D spatial relations of humans and objects from the detection results in video data as a graph. To learn the human-object relations, a new attention graph convolutional network is trained to extract condensed information from the graph representation. To segment actions into sub-actions, a novel temporal pyramid pooling module is proposed, which upsamples compressed features back to the original time scale and classifies actions per frame. We explore various attention layers, namely spatial attention, temporal attention and channel attention, and combine different upsampling decoders to test the performance on action recognition and segmentation. We evaluate our model on two challenging datasets in the field of human-object interaction recognition, i.e. the Bimanual Actions and IKEA Assembly datasets.
We demonstrate that our classifier significantly improves both framewise action recognition and segmentation, e.g., F1 micro and F1@50 scores on Bimanual Actions dataset are improved by $4.3\%$ and $8.5\%$ respectively.<|reference_end|> | arxiv | @article{xing2024understanding,
title={Understanding Spatio-Temporal Relations in Human-Object Interaction
using Pyramid Graph Convolutional Network},
author={Hao Xing and Darius Burschka},
journal={arXiv preprint arXiv:2410.07912},
year={2024},
archivePrefix={arXiv},
eprint={2410.07912},
primaryClass={cs.CV cs.RO}
} | xing2024understanding |
arxiv-668087 | 2410.07915 | A Lightweight Target-Driven Network of Stereo Matching for Inland Waterways | <|reference_start|>A Lightweight Target-Driven Network of Stereo Matching for Inland Waterways: Stereo matching for inland waterways is one of the key technologies for the autonomous navigation of Unmanned Surface Vehicles (USVs), which involves dividing the stereo images into reference images and target images for pixel-level matching. However, due to the challenges of the inland waterway environment, such as blurred textures, large spatial scales, and computational resource constraints of the USVs platform, the participation of geometric features from the target image is required for efficient target-driven matching. Based on this target-driven concept, we propose a lightweight target-driven stereo matching neural network, named LTNet. Specifically, a lightweight and efficient 4D cost volume, named the Geometry Target Volume (GTV), is designed to fully utilize the geometric information of target features by employing the shifted target features as the filtered feature volume. Subsequently, to address the substantial texture interference and object occlusions present in the waterway environment, a Left-Right Consistency Refinement (LRR) module is proposed. The LRR utilizes the pixel-level differences in left and right disparities to introduce soft constraints, thereby enhancing the accuracy of predictions during the intermediate stages of the network. Moreover, knowledge distillation is utilized to enhance the generalization capability of lightweight models on the USVInland dataset. Furthermore, a new large-scale benchmark, named Spring, is utilized to validate the applicability of LTNet across various scenarios. In experiments on the aforementioned two datasets, LTNet achieves competitive results, with only 3.7M parameters. The code is available at https://github.com/Open-YiQingZhou/LTNet .<|reference_end|> | arxiv | @article{su2024a,
title={A Lightweight Target-Driven Network of Stereo Matching for Inland
Waterways},
author={Jing Su, Yiqing Zhou, Yu Zhang, Chao Wang, Yi Wei},
journal={arXiv preprint arXiv:2410.07915},
year={2024},
archivePrefix={arXiv},
eprint={2410.07915},
primaryClass={cs.CV}
} | su2024a |
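The shifted-target cost-volume idea in the abstract above can be sketched in a few lines. This is a generic correlation volume over horizontal shifts; the function names, shapes, and the channel-mean correlation measure are our own illustrative assumptions, not the paper's actual GTV:

```python
import numpy as np

def shifted_cost_volume(ref_feat, tgt_feat, max_disp):
    """Illustrative cost volume: correlate reference features with
    horizontally shifted target features. Inputs are (C, H, W) feature
    maps; output is a (max_disp, H, W) volume of correlation costs."""
    C, H, W = ref_feat.shape
    volume = np.zeros((max_disp, H, W), dtype=ref_feat.dtype)
    for d in range(max_disp):
        if d == 0:
            shifted = tgt_feat
        else:
            # shift target features right by d pixels, zero-pad on the left
            shifted = np.zeros_like(tgt_feat)
            shifted[:, :, d:] = tgt_feat[:, :, :W - d]
        # simple correlation cost: mean over channels of the elementwise product
        volume[d] = (ref_feat * shifted).mean(axis=0)
    return volume

rng = np.random.default_rng(0)
ref = rng.standard_normal((8, 4, 6))
vol = shifted_cost_volume(ref, ref, max_disp=3)
# with identical left/right features, the zero-disparity slice dominates
print(vol.shape)
```

Matching then amounts to picking, per pixel, the disparity slice with the highest correlation; learned variants replace the fixed correlation with trainable filtering.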
arxiv-668088 | 2410.07916 | Robustness Auditing for Linear Regression: To Singularity and Beyond | <|reference_start|>Robustness Auditing for Linear Regression: To Singularity and Beyond: It has recently been discovered that the conclusions of many highly influential econometrics studies can be overturned by removing a very small fraction of their samples (often less than $0.5\%$). These conclusions are typically based on the results of one or more Ordinary Least Squares (OLS) regressions, raising the question: given a dataset, can we certify the robustness of an OLS fit on this dataset to the removal of a given number of samples? Brute-force techniques quickly break down even on small datasets. Existing approaches which go beyond brute force either can only find candidate small subsets to remove (but cannot certify their non-existence) [BGM20, KZC21], are computationally intractable beyond low dimensional settings [MR22], or require very strong assumptions on the data distribution and too many samples to give reasonable bounds in practice [BP21, FH23]. We present an efficient algorithm for certifying the robustness of linear regressions to removals of samples. We implement our algorithm and run it on several landmark econometrics datasets with hundreds of dimensions and tens of thousands of samples, giving the first non-trivial certificates of robustness to sample removal for datasets of dimension $4$ or greater. We prove that under distributional assumptions on a dataset, the bounds produced by our algorithm are tight up to a $1 + o(1)$ multiplicative factor.<|reference_end|> | arxiv | @article{rubinstein2024robustness,
title={Robustness Auditing for Linear Regression: To Singularity and Beyond},
author={Ittai Rubinstein, Samuel B. Hopkins},
journal={arXiv preprint arXiv:2410.07916},
year={2024},
archivePrefix={arXiv},
eprint={2410.07916},
primaryClass={cs.LG}
} | rubinstein2024robustness |
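The abstract notes that brute-force certification breaks down even on small datasets; for intuition, a brute-force sign-robustness check is still easy to write for tiny one-dimensional problems. The function names and toy data below are ours, and this is emphatically not the paper's efficient algorithm:

```python
from itertools import combinations
import numpy as np

def ols_slope(X, y):
    # one-dimensional OLS slope with an intercept term
    A = np.column_stack([np.ones_like(X), X])
    return np.linalg.lstsq(A, y, rcond=None)[0][1]

def sign_robust_to_removal(X, y, k):
    """Brute-force certificate (feasible only for tiny n): does the OLS
    slope keep its sign after removing ANY k samples?"""
    n = len(X)
    base_sign = np.sign(ols_slope(X, y))
    for drop in combinations(range(n), k):
        keep = [i for i in range(n) if i not in drop]
        if np.sign(ols_slope(X[keep], y[keep])) != base_sign:
            return False
    return True

X = np.array([0., 1., 2., 3., 4., 5.])
y = 2.0 * X + np.array([0.1, -0.1, 0.05, 0.0, -0.05, 0.1])
print(sign_robust_to_removal(X, y, k=1))
```

The loop visits all n-choose-k subsets, which is exactly the exponential blow-up the paper's certification algorithm avoids.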
arxiv-668089 | 2410.07917 | Understanding Human Activity with Uncertainty Measure for Novelty in Graph Convolutional Networks | <|reference_start|>Understanding Human Activity with Uncertainty Measure for Novelty in Graph Convolutional Networks: Understanding human activity is a crucial aspect of developing intelligent robots, particularly in the domain of human-robot collaboration. Nevertheless, existing systems encounter challenges such as over-segmentation, attributed to errors in the up-sampling process of the decoder. In response, we introduce a promising solution: the Temporal Fusion Graph Convolutional Network. This innovative approach aims to rectify the inadequate boundary estimation of individual actions within an activity stream and mitigate the issue of over-segmentation in the temporal dimension. Moreover, systems leveraging human activity recognition frameworks for decision-making necessitate more than just the identification of actions. They require a confidence value indicative of the certainty regarding the correspondence between observations and training examples. This is crucial to prevent overly confident responses to unforeseen scenarios that were not part of the training data and may have resulted in mismatches due to weak similarity measures within the system. To address this, we propose the incorporation of a Spectral Normalized Residual connection aimed at enhancing efficient estimation of novelty in observations. This innovative approach ensures the preservation of input distance within the feature space by imposing constraints on the maximum gradients of weight updates. By limiting these gradients, we promote a more robust handling of novel situations, thereby mitigating the risks associated with overconfidence. Our methodology involves the use of a Gaussian process to quantify the distance in feature space.<|reference_end|> | arxiv | @article{xing2024understanding,
title={Understanding Human Activity with Uncertainty Measure for Novelty in
Graph Convolutional Networks},
author={Hao Xing and Darius Burschka},
journal={arXiv preprint arXiv:2410.07917},
year={2024},
archivePrefix={arXiv},
eprint={2410.07917},
primaryClass={cs.RO cs.CV}
} | xing2024understanding |
arxiv-668090 | 2410.07918 | Accessible bridge between category theory and functional programming | <|reference_start|>Accessible bridge between category theory and functional programming: Monadic programming presents a significant challenge for many programmers. In light of category theory, we offer a new perspective on the use of monads in functional programming. This perspective is clarified through numerous examples coded in Haskell.<|reference_end|> | arxiv | @article{kadhi2024accessible,
title={Accessible bridge between category theory and functional programming},
author={Fethi Kadhi},
journal={arXiv preprint arXiv:2410.07918},
year={2024},
archivePrefix={arXiv},
eprint={2410.07918},
primaryClass={cs.PL math.CT}
} | kadhi2024accessible |
arxiv-668091 | 2410.07919 | InstructBioMol: Advancing Biomolecule Understanding and Design Following Human Instructions | <|reference_start|>InstructBioMol: Advancing Biomolecule Understanding and Design Following Human Instructions: Understanding and designing biomolecules, such as proteins and small molecules, is central to advancing drug discovery, synthetic biology, and enzyme engineering. Recent breakthroughs in Artificial Intelligence (AI) have revolutionized biomolecular research, achieving remarkable accuracy in biomolecular prediction and design. However, a critical gap remains between AI's computational power and researchers' intuition, using natural language to align molecular complexity with human intentions. Large Language Models (LLMs) have shown potential to interpret human intentions, yet their application to biomolecular research remains nascent due to challenges including specialized knowledge requirements, multimodal data integration, and semantic alignment between natural language and biomolecules. To address these limitations, we present InstructBioMol, a novel LLM designed to bridge natural language and biomolecules through a comprehensive any-to-any alignment of natural language, molecules, and proteins. This model can integrate multimodal biomolecules as input, and enable researchers to articulate design goals in natural language, providing biomolecular outputs that meet precise biological needs. Experimental results demonstrate InstructBioMol can understand and design biomolecules following human instructions. Notably, it can generate drug molecules with a 10% improvement in binding affinity and design enzymes that achieve an ESP Score of 70.4, making it the only method to surpass the enzyme-substrate interaction threshold of 60.0 recommended by the ESP developer. This highlights its potential to transform real-world biomolecular research.<|reference_end|> | arxiv | @article{zhuang2024instructbiomol:,
title={InstructBioMol: Advancing Biomolecule Understanding and Design Following
Human Instructions},
author={Xiang Zhuang, Keyan Ding, Tianwen Lyu, Yinuo Jiang, Xiaotong Li,
Zhuoyi Xiang, Zeyuan Wang, Ming Qin, Kehua Feng, Jike Wang, Qiang Zhang,
Huajun Chen},
journal={arXiv preprint arXiv:2410.07919},
year={2024},
archivePrefix={arXiv},
eprint={2410.07919},
primaryClass={cs.CL q-bio.BM}
} | zhuang2024instructbiomol: |
arxiv-668092 | 2410.07920 | Post-Training Quantization in Brain-Computer Interfaces based on Event-Related Potential Detection | <|reference_start|>Post-Training Quantization in Brain-Computer Interfaces based on Event-Related Potential Detection: Post-training quantization (PTQ) is a technique used to optimize and reduce the memory footprint and computational requirements of machine learning models. It has been used primarily for neural networks. For Brain-Computer Interfaces (BCI) that are fully portable and usable in various situations, it is necessary to provide approaches that are lightweight for storage and computation. In this paper, we propose the evaluation of post-training quantization on state-of-the-art approaches in brain-computer interfaces and assess their impact on accuracy. We evaluate the performance of the single-trial detection of event-related potentials representing one major BCI paradigm. The area under the receiver operating characteristic curve drops from 0.861 to 0.825 with PTQ when applied on both spatial filters and the classifier, while reducing the size of the model by about $\times$ 15. The results support the conclusion that PTQ can substantially reduce the memory footprint of the models while keeping roughly the same level of accuracy.<|reference_end|> | arxiv | @article{cecotti2024post-training,
title={Post-Training Quantization in Brain-Computer Interfaces based on
Event-Related Potential Detection},
author={Hubert Cecotti, Dalvir Dhaliwal, Hardip Singh, Yogesh Kumar Meena},
journal={arXiv preprint arXiv:2410.07920},
year={2024},
archivePrefix={arXiv},
eprint={2410.07920},
primaryClass={cs.HC cs.ET}
} | cecotti2024post-training |
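For readers unfamiliar with PTQ, a minimal symmetric int8 weight quantizer conveys the storage trade-off the abstract measures. The scale choice and names here are a generic sketch, not the exact pipeline of the cited BCI study (whose ~15x reduction combines quantization of both spatial filters and classifier):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization to int8:
    map the float range [-max|w|, max|w|] onto [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # recover an approximation of the original weights for inference
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal(256).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(q.dtype, float(np.abs(w - w_hat).max()))
```

Storage drops 4x versus float32 for the same tensor, and the rounding error per weight is bounded by half the scale, which is why accuracy often degrades only slightly.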
arxiv-668093 | 2410.07921 | Meta-Learning Integration in Hierarchical Reinforcement Learning for Advanced Task Complexity | <|reference_start|>Meta-Learning Integration in Hierarchical Reinforcement Learning for Advanced Task Complexity: Hierarchical Reinforcement Learning (HRL) effectively tackles complex tasks by decomposing them into structured policies. However, HRL agents often face challenges with efficient exploration and rapid adaptation. To address this, we integrate meta-learning into HRL to enhance the agent's ability to learn and adapt hierarchical policies swiftly. Our approach employs meta-learning for rapid task adaptation based on prior experience, while intrinsic motivation mechanisms encourage efficient exploration by rewarding novel state visits. Specifically, our agent uses a high-level policy to select among multiple low-level policies operating within custom grid environments. We utilize gradient-based meta-learning with differentiable inner-loop updates, enabling optimization across a curriculum of increasingly difficult tasks. Experimental results demonstrate that our meta-learned hierarchical agent significantly outperforms traditional HRL agents without meta-learning and intrinsic motivation. The agent exhibits accelerated learning, higher cumulative rewards, and improved success rates in complex grid environments. These findings suggest that integrating meta-learning with HRL, alongside curriculum learning and intrinsic motivation, substantially enhances the agent's capability to handle complex tasks.<|reference_end|> | arxiv | @article{khajooeinejad2024meta-learning,
title={Meta-Learning Integration in Hierarchical Reinforcement Learning for
Advanced Task Complexity},
author={Arash Khajooeinejad, Masoumeh Chapariniya},
journal={arXiv preprint arXiv:2410.07921},
year={2024},
archivePrefix={arXiv},
eprint={2410.07921},
primaryClass={cs.LG cs.AI}
} | khajooeinejad2024meta-learning |
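A count-based novelty bonus is one standard form of the intrinsic motivation the abstract describes (rewarding novel state visits); the constant `beta` and the inverse-square-root schedule are illustrative choices, not taken from the paper:

```python
from collections import Counter

def intrinsic_reward(visit_counts, state, beta=1.0):
    """Count-based novelty bonus: r_int = beta / sqrt(N(s)), so the
    bonus decays as a state is revisited and stays high for new states."""
    visit_counts[state] += 1
    return beta / visit_counts[state] ** 0.5

counts = Counter()
first = intrinsic_reward(counts, (0, 0))   # unseen grid cell -> full bonus
second = intrinsic_reward(counts, (0, 0))  # revisit -> decayed bonus
novel = intrinsic_reward(counts, (3, 4))   # different cell -> full bonus again
print(first, second, novel)
```

In a grid environment the bonus is simply added to the extrinsic reward, steering the high-level policy toward unexplored regions.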
arxiv-668094 | 2410.07923 | Deep Learning for Generalised Planning with Background Knowledge | <|reference_start|>Deep Learning for Generalised Planning with Background Knowledge: Automated planning is a form of declarative problem solving which has recently drawn attention from the machine learning (ML) community. ML has been applied to planning either as a way to test `reasoning capabilities' of architectures, or more pragmatically in an attempt to scale up solvers with learned domain knowledge. In practice, planning problems are easy to solve but hard to optimise. However, ML approaches still struggle to solve many problems that are often easy for both humans and classical planners. In this paper, we thus propose a new ML approach that allows users to specify background knowledge (BK) through Datalog rules to guide both the learning and planning processes in an integrated fashion. By incorporating BK, our approach bypasses the need to relearn how to solve problems from scratch and instead focuses the learning on plan quality optimisation. Experiments with BK demonstrate that our method successfully scales and learns to plan efficiently with high quality solutions from small training data generated in under 5 seconds.<|reference_end|> | arxiv | @article{chen2024deep,
title={Deep Learning for Generalised Planning with Background Knowledge},
  author={Dillon Z. Chen, Rostislav Hor\v{c}\'ik, Gustav \v{S}\'ir},
journal={arXiv preprint arXiv:2410.07923},
year={2024},
archivePrefix={arXiv},
eprint={2410.07923},
primaryClass={cs.AI}
} | chen2024deep |
arxiv-668095 | 2410.07924 | ICPR 2024 Competition on Multiple Sclerosis Lesion Segmentation -- Methods and Results | <|reference_start|>ICPR 2024 Competition on Multiple Sclerosis Lesion Segmentation -- Methods and Results: This report summarizes the outcomes of the ICPR 2024 Competition on Multiple Sclerosis Lesion Segmentation (MSLesSeg). The competition aimed to develop methods capable of automatically segmenting multiple sclerosis lesions in MRI scans. Participants were provided with a novel annotated dataset comprising a heterogeneous cohort of MS patients, featuring both baseline and follow-up MRI scans acquired at different hospitals. MSLesSeg focuses on developing algorithms that can independently segment multiple sclerosis lesions of an unexamined cohort of patients. This segmentation approach aims to overcome current benchmarks by eliminating user interaction and ensuring robust lesion detection at different timepoints, encouraging innovation and promoting methodological advances.<|reference_end|> | arxiv | @article{rondinella2024icpr,
title={ICPR 2024 Competition on Multiple Sclerosis Lesion Segmentation --
Methods and Results},
author={Alessia Rondinella, Francesco Guarnera, Elena Crispino, Giulia Russo,
Clara Di Lorenzo, Davide Maimone, Francesco Pappalardo, Sebastiano Battiato},
journal={arXiv preprint arXiv:2410.07924},
year={2024},
archivePrefix={arXiv},
eprint={2410.07924},
primaryClass={eess.IV cs.CV}
} | rondinella2024icpr |
arxiv-668096 | 2410.07926 | Multimodal Perception System for Real Open Environment | <|reference_start|>Multimodal Perception System for Real Open Environment: This paper presents a novel multimodal perception system for a real open environment. The proposed system includes an embedded computation platform, cameras, ultrasonic sensors, GPS, and IMU devices. Unlike the traditional frameworks, our system integrates multiple sensors with advanced computer vision algorithms to help users walk outside reliably. The system can efficiently complete various tasks, including navigating to specific locations, passing through obstacle regions, and crossing intersections. Specifically, we also use ultrasonic sensors and depth cameras to enhance obstacle avoidance performance. The path planning module is designed to find the locally optimal route based on various feedback and the user's current state. To evaluate the performance of the proposed system, we design several experiments under different scenarios. The results show that the system can help users walk efficiently and independently in complex situations.<|reference_end|> | arxiv | @article{sha2024multimodal,
title={Multimodal Perception System for Real Open Environment},
author={Yuyang Sha},
journal={arXiv preprint arXiv:2410.07926},
year={2024},
archivePrefix={arXiv},
eprint={2410.07926},
primaryClass={cs.RO cs.CV}
} | sha2024multimodal |
arxiv-668097 | 2410.07927 | Efficient Reinforcement Learning with Large Language Model Priors | <|reference_start|>Efficient Reinforcement Learning with Large Language Model Priors: In sequential decision-making (SDM) tasks, methods like reinforcement learning (RL) and heuristic search have made notable advances in specific cases. However, they often require extensive exploration and face challenges in generalizing across diverse environments due to their limited grasp of the underlying decision dynamics. In contrast, large language models (LLMs) have recently emerged as powerful general-purpose tools, due to their capacity to maintain vast amounts of domain-specific knowledge. To harness this rich prior knowledge for efficiently solving complex SDM tasks, we propose treating LLMs as prior action distributions and integrating them into RL frameworks through Bayesian inference methods, making use of variational inference and direct posterior sampling. The proposed approaches facilitate the seamless incorporation of fixed LLM priors into both policy-based and value-based RL frameworks. Our experiments show that incorporating LLM-based action priors significantly reduces exploration and optimization complexity, substantially improving sample efficiency compared to traditional RL techniques, e.g., using LLM priors decreases the number of required samples by over 90% in offline learning scenarios.<|reference_end|> | arxiv | @article{yan2024efficient,
title={Efficient Reinforcement Learning with Large Language Model Priors},
author={Xue Yan, Yan Song, Xidong Feng, Mengyue Yang, Haifeng Zhang, Haitham
Bou Ammar, Jun Wang},
journal={arXiv preprint arXiv:2410.07927},
year={2024},
archivePrefix={arXiv},
eprint={2410.07927},
primaryClass={cs.LG}
} | yan2024efficient |
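Treating a fixed action prior (e.g. elicited from an LLM) as p(a) and the exponentiated value exp(Q/tau) as a likelihood term yields a simple posterior over actions. This sketch only conveys the flavour of posterior-style action selection; the temperature and the softmax form are our assumptions, not the paper's variational scheme:

```python
import numpy as np

def posterior_action_probs(q_values, prior_probs, temperature=1.0):
    """Normalised product of an action prior and exp(Q/tau):
    posterior(a) ~ prior(a) * exp(Q(a) / tau)."""
    logits = q_values / temperature + np.log(prior_probs)
    logits -= logits.max()                 # numerical stability
    w = np.exp(logits)
    return w / w.sum()

q = np.array([1.0, 1.0, 1.0])              # uninformative values
prior = np.array([0.7, 0.2, 0.1])          # prior strongly favours action 0
post = posterior_action_probs(q, prior)
print(post)
```

With uninformative Q-values the posterior reduces to the prior (exploration follows the LLM), while a strong learned value signal can override it, which is the efficiency mechanism the abstract describes.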
arxiv-668098 | 2410.07928 | The Function-Representation Unification Framework | <|reference_start|>The Function-Representation Unification Framework: Cognitive Architectures are the forefront of our research into developing an artificial cognition. However, they approach the problem from a separated memory and program model of computation. This model of computation poses a fundamental problem: the knowledge retrieval heuristic. In this paper we propose to solve this problem by using a new model of computation, one where the memory and the program are united: the Function-Representation. We propose a whole framework about how to implement and use these Function-Representations, and we explore their potential through mathematical definitions and proofs. We also talk about different ways to organise multiple Function-Representations, and explore the kind of functions that these Function-Representations can implement. Finally, we also explore the limitations of our proposal.<|reference_end|> | arxiv | @article{ibias2024the,
title={The Function-Representation Model of Computation},
author={Alfredo Ibias, Hector Antona, Guillem Ramirez-Miranda, Enric
Guinovart, Eduard Alarcon},
journal={arXiv preprint arXiv:2410.07928},
year={2024},
archivePrefix={arXiv},
eprint={2410.07928},
primaryClass={cs.AI}
} | ibias2024the |
arxiv-668099 | 2410.07930 | Cost-aware Simulation-based Inference | <|reference_start|>Cost-aware Simulation-based Inference: Simulation-based inference (SBI) is the preferred framework for estimating parameters of intractable models in science and engineering. A significant challenge in this context is the large computational cost of simulating data from complex models, and the fact that this cost often depends on parameter values. We therefore propose cost-aware SBI methods which can significantly reduce the cost of existing sampling-based SBI methods, such as neural SBI and approximate Bayesian computation. This is achieved through a combination of rejection and self-normalised importance sampling, which significantly reduces the number of expensive simulations needed. Our approach is studied extensively on models from epidemiology to telecommunications engineering, where we obtain significant reductions in the overall cost of inference.<|reference_end|> | arxiv | @article{bharti2024cost-aware,
title={Cost-aware Simulation-based Inference},
  author={Ayush Bharti, Daolang Huang, Samuel Kaski, Fran\c{c}ois-Xavier Briol},
journal={arXiv preprint arXiv:2410.07930},
year={2024},
archivePrefix={arXiv},
eprint={2410.07930},
primaryClass={stat.ML cs.LG stat.CO}
} | bharti2024cost-aware |
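Self-normalised importance sampling, one ingredient the abstract names, can be sketched directly: draw from a cheap proposal, reweight by the density ratio, and normalise so unknown constants cancel. The proposal, target, and sample count below are purely illustrative:

```python
import numpy as np

def snis_estimate(samples, log_target, log_proposal, f):
    """Self-normalised importance sampling estimate of E_target[f(x)]
    from samples drawn under a cheaper proposal distribution."""
    log_w = log_target(samples) - log_proposal(samples)
    log_w -= log_w.max()                   # stabilise before exponentiating
    w = np.exp(log_w)
    w /= w.sum()                           # normalisation cancels unknown constants
    return float(np.sum(w * f(samples)))

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 2.0, size=200_000)    # proposal N(0, 2^2)
log_q = lambda x: -0.5 * (x / 2.0) ** 2    # proposal log-density, up to a constant
log_p = lambda x: -0.5 * (x - 1.0) ** 2    # target N(1, 1), up to a constant
est = snis_estimate(xs, log_p, log_q, lambda x: x)
print(est)                                 # close to the target mean of 1.0
```

In the cost-aware setting, the point of such reweighting is to spend expensive simulations where they matter and correct for the mismatch afterwards.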
arxiv-668100 | 2410.07932 | Decision-Aware Predictive Model Selection for Workforce Allocation | <|reference_start|>Decision-Aware Predictive Model Selection for Workforce Allocation: Many organizations depend on human decision-makers to make subjective decisions, especially in settings where information is scarce. Although workers are often viewed as interchangeable, the specific individual assigned to a task can significantly impact outcomes due to their unique decision-making processes and risk tolerance. In this paper, we introduce a novel framework that utilizes machine learning to predict worker behavior and employs integer optimization to strategically assign workers to tasks. Unlike traditional methods that treat machine learning predictions as static inputs for optimization, in our approach, the optimal predictive model used to represent a worker's behavior is determined by how that worker is allocated within the optimization process. We present a decision-aware optimization framework that integrates predictive model selection with worker allocation. Collaborating with an auto-insurance provider and using real-world data, we evaluate the effectiveness of our proposed method by applying three different techniques to predict worker behavior. Our findings show the proposed decision-aware framework outperforms traditional methods and offers context-sensitive and data-responsive strategies for workforce management.<|reference_end|> | arxiv | @article{stratman2024decision-aware,
title={Decision-Aware Predictive Model Selection for Workforce Allocation},
author={Eric G. Stratman, Justin J. Boutilier, Laura A. Albert},
journal={arXiv preprint arXiv:2410.07932},
year={2024},
archivePrefix={arXiv},
eprint={2410.07932},
primaryClass={math.OC cs.LG}
} | stratman2024decision-aware |