corpus_id (string, 7-12 chars) | paper_id (string, 9-16 chars) | title (string, 1-261 chars) | abstract (string, 70-4.02k chars) | source (1 class) | bibtex (string, 208-20.9k chars) | citation_key (string, 6-100 chars) |
---|---|---|---|---|---|---|
arxiv-663601 | 2409.20276 | Active Neural Mapping at Scale | <|reference_start|>Active Neural Mapping at Scale: We introduce a NeRF-based active mapping system that enables efficient and robust exploration of large-scale indoor environments. The key to our approach is the extraction of a generalized Voronoi graph (GVG) from the continually updated neural map, leading to the synergistic integration of scene geometry, appearance, topology, and uncertainty. Anchoring uncertain areas induced by the neural map to the vertices of GVG allows the exploration to undergo adaptive granularity along a safe path that traverses unknown areas efficiently. Harnessing a modern hybrid NeRF representation, the proposed system achieves competitive results in terms of reconstruction accuracy, coverage completeness, and exploration efficiency even when scaling up to large indoor environments. Extensive results at different scales validate the efficacy of the proposed system.<|reference_end|> | arxiv | @article{kuang2024active,
title={Active Neural Mapping at Scale},
author={Zijia Kuang, Zike Yan, Hao Zhao, Guyue Zhou, and Hongbin Zha},
journal={arXiv preprint arXiv:2409.20276},
year={2024},
archivePrefix={arXiv},
eprint={2409.20276},
primaryClass={cs.CV cs.RO}
} | kuang2024active |
arxiv-663602 | 2409.20277 | Solution for OOD-CV Workshop SSB Challenge 2024 (Open-Set Recognition Track) | <|reference_start|>Solution for OOD-CV Workshop SSB Challenge 2024 (Open-Set Recognition Track): This report provides a detailed description of the method we explored and proposed in the OSR Challenge at the OOD-CV Workshop during ECCV 2024. The challenge required identifying whether a test sample belonged to the semantic classes of a classifier's training set, a task known as open-set recognition (OSR). Using the Semantic Shift Benchmark (SSB) for evaluation, we focused on ImageNet1k as the in-distribution (ID) dataset and a subset of ImageNet21k as the out-of-distribution (OOD) dataset.To address this, we proposed a hybrid approach, experimenting with the fusion of various post-hoc OOD detection techniques and different Test-Time Augmentation (TTA) strategies. Additionally, we evaluated the impact of several base models on the final performance. Our best-performing method combined Test-Time Augmentation with the post-hoc OOD techniques, achieving a strong balance between AUROC and FPR95 scores. Our approach resulted in AUROC: 79.77 (ranked 5th) and FPR95: 61.44 (ranked 2nd), securing second place in the overall competition.<|reference_end|> | arxiv | @article{feng2024solution,
title={Solution for OOD-CV Workshop SSB Challenge 2024 (Open-Set Recognition
Track)},
author={Mingxu Feng, Dian Chao, Peng Zheng, Yang Yang},
journal={arXiv preprint arXiv:2409.20277},
year={2024},
archivePrefix={arXiv},
eprint={2409.20277},
primaryClass={cs.CV cs.LG}
} | feng2024solution |
arxiv-663603 | 2409.20278 | Parameterised Approximation and Complexity of Minimum Flow Decompositions | <|reference_start|>Parameterised Approximation and Complexity of Minimum Flow Decompositions: Minimum flow decomposition (MFD) is the strongly NP-hard problem of finding a smallest set of integer weighted paths in a graph $G$ whose weighted sum is equal to a given flow $f$ on $G$. Despite its many practical applications, we lack an understanding of graph structures that make MFD easy or hard. In particular, it is not known whether a good approximation algorithm exists when the weights are positive. On the positive side, the main result of this paper is that MFD can be approximated within a factor $O(\log\Vert f\Vert)$ (where $\Vert f\Vert$ is the largest flow weight of all edges) times the ratio between the parallel-width of $G$ (introduced by Deligkas and Meir, MFCS 2018) and the width of $G$ (minimum number of paths to cover all edges). In particular, when the MFD size is at least the parallel-width of $G$, this becomes the first parameterised $O(\log\Vert f\Vert)$-factor approximation algorithm for MFD over positive integers. We also show that there exist instances where the ratio between the parallel-width of $G$ and the MFD size is arbitrarily large, thus narrowing down the class of graphs whose approximation is still open. We achieve these results by introducing a new notion of flow-width of $(G,f)$, which unifies both the width and the parallel-width and may be of independent interest. On the negative side, we show that small-width graphs do not make MFD easy. This question was previously open, because width-1 graphs (i.e. paths) are trivially solvable, and the existing NP-hardness proofs use graphs of unbounded width. We close this problem by showing the tight results that MFD remains strongly NP-hard on graphs of width 3, and NP-hard on graphs of width 2 (and thus also parallel-width 2). Moreover, on width-2 graphs (and more generally, on constant parallel-width graphs), MFD is solvable in quasi-polynomial time on unary-coded flows.<|reference_end|> | arxiv | @article{grigorjew2024parameterised,
title={Parameterised Approximation and Complexity of Minimum Flow
Decompositions},
author={Andreas Grigorjew, Wanchote Jiamjitrak, Brendan Mumey, Alexandru I.
Tomescu},
journal={arXiv preprint arXiv:2409.20278},
year={2024},
archivePrefix={arXiv},
eprint={2409.20278},
primaryClass={cs.DS}
} | grigorjew2024parameterised |
arxiv-663604 | 2409.20280 | Solving Electromagnetic Scattering Problems by Isogeometric Analysis with Deep Operator Learning | <|reference_start|>Solving Electromagnetic Scattering Problems by Isogeometric Analysis with Deep Operator Learning: We present a hybrid approach combining isogeometric analysis with deep operator networks to solve electromagnetic scattering problems. The neural network takes a computer-aided design representation as input and predicts the electromagnetic field in a de Rham conforming B-spline basis such that for example the tangential continuity of the electric field is respected. The physical problem is included in the loss function during training. Our numerical results demonstrate that a trained network accurately predicts the electric field, showing convergence to the analytical solution with optimal rate. Additionally, training on a variety of geometries highlights the network's generalization capabilities, achieving small error increases when applied to new geometries not included in the training set.<|reference_end|> | arxiv | @article{backmeyer2024solving,
title={Solving Electromagnetic Scattering Problems by Isogeometric Analysis
with Deep Operator Learning},
  author={Merle Backmeyer, Stefan Kurz, Matthias M\"oller, Sebastian Sch\"ops},
journal={arXiv preprint arXiv:2409.20280},
year={2024},
archivePrefix={arXiv},
eprint={2409.20280},
primaryClass={cs.CE}
} | backmeyer2024solving |
arxiv-663605 | 2409.20283 | Match Stereo Videos via Bidirectional Alignment | <|reference_start|>Match Stereo Videos via Bidirectional Alignment: Video stereo matching is the task of estimating consistent disparity maps from rectified stereo videos. There is considerable scope for improvement in both datasets and methods within this area. Recent learning-based methods often focus on optimizing performance for independent stereo pairs, leading to temporal inconsistencies in videos. Existing video methods typically employ sliding window operation over time dimension, which can result in low-frequency oscillations corresponding to the window size. To address these challenges, we propose a bidirectional alignment mechanism for adjacent frames as a fundamental operation. Building on this, we introduce a novel video processing framework, BiDAStereo, and a plugin stabilizer network, BiDAStabilizer, compatible with general image-based methods. Regarding datasets, current synthetic object-based and indoor datasets are commonly used for training and benchmarking, with a lack of outdoor nature scenarios. To bridge this gap, we present a realistic synthetic dataset and benchmark focused on natural scenes, along with a real-world dataset captured by a stereo camera in diverse urban scenes for qualitative evaluation. Extensive experiments on in-domain, out-of-domain, and robustness evaluation demonstrate the contribution of our methods and datasets, showcasing improvements in prediction quality and achieving state-of-the-art results on various commonly used benchmarks. The project page, demos, code, and datasets are available at: \url{https://tomtomtommi.github.io/BiDAVideo/}.<|reference_end|> | arxiv | @article{jing2024match,
title={Match Stereo Videos via Bidirectional Alignment},
author={Junpeng Jing, Ye Mao, Anlan Qiu, Krystian Mikolajczyk},
journal={arXiv preprint arXiv:2409.20283},
year={2024},
archivePrefix={arXiv},
eprint={2409.20283},
primaryClass={cs.CV}
} | jing2024match |
arxiv-663606 | 2409.20286 | Self-Assessment of Evidential Grid Map Fusion for Robust Motion Planning | <|reference_start|>Self-Assessment of Evidential Grid Map Fusion for Robust Motion Planning: Conflicting sensor measurements pose a huge problem for the environment representation of an autonomous robot. Therefore, in this paper, we address the self-assessment of an evidential grid map in which data from conflicting LiDAR sensor measurements are fused, followed by methods for robust motion planning under these circumstances. First, conflicting measurements aggregated in Subjective-Logic-based evidential grid maps are classified. Then, a self-assessment framework evaluates these conflicts and estimates their severity for the overall system by calculating a degradation score. This enables the detection of calibration errors and insufficient sensor setups. In contrast to other motion planning approaches, the information gained from the evidential grid maps is further used inside our proposed path-planning algorithm. Here, the impact of conflicting measurements on the current motion plan is evaluated, and a robust and curious path-planning strategy is derived to plan paths under the influence of conflicting data. This ensures that the system integrity is maintained in severely degraded environment representations which can prevent the unnecessary abortion of planning tasks.<|reference_end|> | arxiv | @article{schumann2024self-assessment,
title={Self-Assessment of Evidential Grid Map Fusion for Robust Motion Planning},
author={Oliver Schumann, Thomas Wodtko, Michael Buchholz, Klaus Dietmayer},
journal={arXiv preprint arXiv:2409.20286},
year={2024},
archivePrefix={arXiv},
eprint={2409.20286},
primaryClass={cs.RO}
} | schumann2024self-assessment |
arxiv-663607 | 2409.20287 | Leveraging CAM Algorithms for Explaining Medical Semantic Segmentation | <|reference_start|>Leveraging CAM Algorithms for Explaining Medical Semantic Segmentation: Convolutional neural networks (CNNs) achieve prevailing results in segmentation tasks nowadays and represent the state-of-the-art for image-based analysis. However, the understanding of the accurate decision-making process of a CNN is rather unknown. The research area of explainable artificial intelligence (xAI) primarily revolves around understanding and interpreting this black-box behavior. One way of interpreting a CNN is the use of class activation maps (CAMs) that represent heatmaps to indicate the importance of image areas for the prediction of the CNN. For classification tasks, a variety of CAM algorithms exist. But for segmentation tasks, only one CAM algorithm for the interpretation of the output of a CNN exist. We propose a transfer between existing classification- and segmentation-based methods for more detailed, explainable, and consistent results which show salient pixels in semantic segmentation tasks. The resulting Seg-HiRes-Grad CAM is an extension of the segmentation-based Seg-Grad CAM with the transfer to the classification-based HiRes CAM. Our method improves the previously-mentioned existing segmentation-based method by adjusting it to recently published classification-based methods. Especially for medical image segmentation, this transfer solves existing explainability disadvantages.<|reference_end|> | arxiv | @article{rheude2024leveraging,
title={Leveraging CAM Algorithms for Explaining Medical Semantic Segmentation},
author={Tillmann Rheude, Andreas Wirtz, Arjan Kuijper, Stefan Wesarg},
journal={Machine.Learning.for.Biomedical.Imaging. 2 (2024)},
year={2024},
doi={10.59275/j.melba.2024-ebd3},
archivePrefix={arXiv},
eprint={2409.20287},
primaryClass={eess.IV cs.CV cs.LG}
} | rheude2024leveraging |
arxiv-663608 | 2409.20288 | LexEval: A Comprehensive Chinese Legal Benchmark for Evaluating Large Language Models | <|reference_start|>LexEval: A Comprehensive Chinese Legal Benchmark for Evaluating Large Language Models: Large language models (LLMs) have made significant progress in natural language processing tasks and demonstrate considerable potential in the legal domain. However, legal applications demand high standards of accuracy, reliability, and fairness. Applying existing LLMs to legal systems without careful evaluation of their potential and limitations could pose significant risks in legal practice. To this end, we introduce a standardized comprehensive Chinese legal benchmark LexEval. This benchmark is notable in the following three aspects: (1) Ability Modeling: We propose a new taxonomy of legal cognitive abilities to organize different tasks. (2) Scale: To our knowledge, LexEval is currently the largest Chinese legal evaluation dataset, comprising 23 tasks and 14,150 questions. (3) Data: we utilize formatted existing datasets, exam datasets and newly annotated datasets by legal experts to comprehensively evaluate the various capabilities of LLMs. LexEval not only focuses on the ability of LLMs to apply fundamental legal knowledge but also dedicates efforts to examining the ethical issues involved in their application. We evaluated 38 open-source and commercial LLMs and obtained some interesting findings. The experiments and findings offer valuable insights into the challenges and potential solutions for developing Chinese legal systems and LLM evaluation pipelines. The LexEval dataset and leaderboard are publicly available at \url{https://github.com/CSHaitao/LexEval} and will be continuously updated.<|reference_end|> | arxiv | @article{li2024lexeval:,
title={LexEval: A Comprehensive Chinese Legal Benchmark for Evaluating Large
Language Models},
author={Haitao Li, You Chen, Qingyao Ai, Yueyue Wu, Ruizhe Zhang, Yiqun Liu},
journal={arXiv preprint arXiv:2409.20288},
year={2024},
archivePrefix={arXiv},
eprint={2409.20288},
primaryClass={cs.CL}
} | li2024lexeval: |
arxiv-663609 | 2409.20289 | Distributed NeRF Learning for Collaborative Multi-Robot Perception | <|reference_start|>Distributed NeRF Learning for Collaborative Multi-Robot Perception: Effective environment perception is crucial for enabling downstream robotic applications. Individual robotic agents often face occlusion and limited visibility issues, whereas multi-agent systems can offer a more comprehensive mapping of the environment, quicker coverage, and increased fault tolerance. In this paper, we propose a collaborative multi-agent perception system where agents collectively learn a neural radiance field (NeRF) from posed RGB images to represent a scene. Each agent processes its local sensory data and shares only its learned NeRF model with other agents, reducing communication overhead. Given NeRF's low memory footprint, this approach is well-suited for robotic systems with limited bandwidth, where transmitting all raw data is impractical. Our distributed learning framework ensures consistency across agents' local NeRF models, enabling convergence to a unified scene representation. We show the effectiveness of our method through an extensive set of experiments on datasets containing challenging real-world scenes, achieving performance comparable to centralized mapping of the environment where data is sent to a central server for processing. Additionally, we find that multi-agent learning provides regularization benefits, improving geometric consistency in scenarios with sparse input views. We show that in such scenarios, multi-agent mapping can even outperform centralized training.<|reference_end|> | arxiv | @article{zhao2024distributed,
title={Distributed NeRF Learning for Collaborative Multi-Robot Perception},
author={Hongrui Zhao, Boris Ivanovic, Negar Mehr},
journal={arXiv preprint arXiv:2409.20289},
year={2024},
archivePrefix={arXiv},
eprint={2409.20289},
primaryClass={cs.RO cs.CV cs.LG}
} | zhao2024distributed |
arxiv-663610 | 2409.20291 | RL-GSBridge: 3D Gaussian Splatting Based Real2Sim2Real Method for Robotic Manipulation Learning | <|reference_start|>RL-GSBridge: 3D Gaussian Splatting Based Real2Sim2Real Method for Robotic Manipulation Learning: Sim-to-Real refers to the process of transferring policies learned in simulation to the real world, which is crucial for achieving practical robotics applications. However, recent Sim2real methods either rely on a large amount of augmented data or large learning models, which is inefficient for specific tasks. In recent years, radiance field-based reconstruction methods, especially the emergence of 3D Gaussian Splatting, making it possible to reproduce realistic real-world scenarios. To this end, we propose a novel real-to-sim-to-real reinforcement learning framework, RL-GSBridge, which introduces a mesh-based 3D Gaussian Splatting method to realize zero-shot sim-to-real transfer for vision-based deep reinforcement learning. We improve the mesh-based 3D GS modeling method by using soft binding constraints, enhancing the rendering quality of mesh models. We then employ a GS editing approach to synchronize rendering with the physics simulator, reflecting the interactions of the physical robot more accurately. Through a series of sim-to-real robotic arm experiments, including grasping and pick-and-place tasks, we demonstrate that RL-GSBridge maintains a satisfactory success rate in real-world task completion during sim-to-real transfer. Furthermore, a series of rendering metrics and visualization results indicate that our proposed mesh-based 3D Gaussian reduces artifacts in unstructured objects, demonstrating more realistic rendering performance.<|reference_end|> | arxiv | @article{wu2024rl-gsbridge:,
title={RL-GSBridge: 3D Gaussian Splatting Based Real2Sim2Real Method for
Robotic Manipulation Learning},
author={Yuxuan Wu, Lei Pan, Wenhua Wu, Guangming Wang, Yanzi Miao, and Hesheng
Wang},
journal={arXiv preprint arXiv:2409.20291},
year={2024},
archivePrefix={arXiv},
eprint={2409.20291},
primaryClass={cs.RO}
} | wu2024rl-gsbridge: |
arxiv-663611 | 2409.20293 | Automating MedSAM by Learning Prompts with Weak Few-Shot Supervision | <|reference_start|>Automating MedSAM by Learning Prompts with Weak Few-Shot Supervision: Foundation models such as the recently introduced Segment Anything Model (SAM) have achieved remarkable results in image segmentation tasks. However, these models typically require user interaction through handcrafted prompts such as bounding boxes, which limits their deployment to downstream tasks. Adapting these models to a specific task with fully labeled data also demands expensive prior user interaction to obtain ground-truth annotations. This work proposes to replace conditioning on input prompts with a lightweight module that directly learns a prompt embedding from the image embedding, both of which are subsequently used by the foundation model to output a segmentation mask. Our foundation models with learnable prompts can automatically segment any specific region by 1) modifying the input through a prompt embedding predicted by a simple module, and 2) using weak labels (tight bounding boxes) and few-shot supervision (10 samples). Our approach is validated on MedSAM, a version of SAM fine-tuned for medical images, with results on three medical datasets in MR and ultrasound imaging. Our code is available on https://github.com/Minimel/MedSAMWeakFewShotPromptAutomation.<|reference_end|> | arxiv | @article{gaillochet2024automating,
title={Automating MedSAM by Learning Prompts with Weak Few-Shot Supervision},
  author={M\'elanie Gaillochet, Christian Desrosiers and Herv\'e Lombaert},
journal={arXiv preprint arXiv:2409.20293},
year={2024},
archivePrefix={arXiv},
eprint={2409.20293},
primaryClass={cs.CV}
} | gaillochet2024automating |
arxiv-663612 | 2409.20296 | PersonalLLM: Tailoring LLMs to Individual Preferences | <|reference_start|>PersonalLLM: Tailoring LLMs to Individual Preferences: As LLMs become capable of complex tasks, there is growing potential for personalized interactions tailored to the subtle and idiosyncratic preferences of the user. We present a public benchmark, PersonalLLM, focusing on adapting LLMs to provide maximal benefits for a particular user. Departing from existing alignment benchmarks that implicitly assume uniform preferences, we curate open-ended prompts paired with many high-quality answers over which users would be expected to display heterogeneous latent preferences. Instead of persona-prompting LLMs based on high-level attributes (e.g., user's race or response length), which yields homogeneous preferences relative to humans, we develop a method that can simulate a large user base with diverse preferences from a set of pre-trained reward models. Our dataset and generated personalities offer an innovative testbed for developing personalization algorithms that grapple with continual data sparsity--few relevant feedback from the particular user--by leveraging historical data from other (similar) users. We explore basic in-context learning and meta-learning baselines to illustrate the utility of PersonalLLM and highlight the need for future methodological development. Our dataset is available at https://huggingface.co/datasets/namkoong-lab/PersonalLLM<|reference_end|> | arxiv | @article{zollo2024personalllm:,
title={PersonalLLM: Tailoring LLMs to Individual Preferences},
author={Thomas P. Zollo, Andrew Wei Tung Siah, Naimeng Ye, Ang Li, Hongseok
Namkoong},
journal={arXiv preprint arXiv:2409.20296},
year={2024},
archivePrefix={arXiv},
eprint={2409.20296},
primaryClass={cs.LG cs.CL}
} | zollo2024personalllm: |
arxiv-663613 | 2409.20297 | Explain in Plain Language Questions with Indic Languages: Drawbacks, Affordances, and Opportunities | <|reference_start|>Explain in Plain Language Questions with Indic Languages: Drawbacks, Affordances, and Opportunities: Background: Introductory computer science courses use ``Explain in Plain English'' (EiPE) activities to develop and assess students' code comprehension skills, but creating effective autograders for these questions is challenging and limited to English. This is a particular challenge in linguistically diverse countries like India where students may have limited proficiency in English. Methods: We evaluate the efficacy of a recently introduced approach called Code Generation Based Grading (CGBG) in enabling language agnostic ``Explain in Plain Language'' (EiPL) activities. Here students' EiPL responses generate code that is tested for functional equivalence to the original which was being described. Objectives: We initially evaluate the correctness of code generated from correct EiPL responses provided in 10 of India's most commonly spoken languages. To evaluate the effectiveness of the approach in practice, we assess student success and perceptions of EiPL questions in a NPTEL (National Programme on Technology Enhanced Learning) course. Results: We find promising results for the correctness of code generated from translations of correct EiPL responses, with most languages achieving a correctness rate of 75% or higher. However, in practice, many students preferred to respond in English due to greater familiarity with English as a technical language, difficulties writing in their native language, and perceptions of the grader being less capable of generating code from prompts in their mother tongue.<|reference_end|> | arxiv | @article{smith2024explain,
title={Explain in Plain Language Questions with Indic Languages: Drawbacks,
Affordances, and Opportunities},
author={David H. Smith IV, Viraj Kumar, Paul Denny},
journal={arXiv preprint arXiv:2409.20297},
year={2024},
archivePrefix={arXiv},
eprint={2409.20297},
primaryClass={cs.CY}
} | smith2024explain |
arxiv-663614 | 2409.20301 | Alignment-Free Training for Transducer-based Multi-Talker ASR | <|reference_start|>Alignment-Free Training for Transducer-based Multi-Talker ASR: Extending the RNN Transducer (RNNT) to recognize multi-talker speech is essential for wider automatic speech recognition (ASR) applications. Multi-talker RNNT (MT-RNNT) aims to achieve recognition without relying on costly front-end source separation. MT-RNNT is conventionally implemented using architectures with multiple encoders or decoders, or by serializing all speakers' transcriptions into a single output stream. The first approach is computationally expensive, particularly due to the need for multiple encoder processing. In contrast, the second approach involves a complex label generation process, requiring accurate timestamps of all words spoken by all speakers in the mixture, obtained from an external ASR system. In this paper, we propose a novel alignment-free training scheme for the MT-RNNT (MT-RNNT-AFT) that adopts the standard RNNT architecture. The target labels are created by appending a prompt token corresponding to each speaker at the beginning of the transcription, reflecting the order of each speaker's appearance in the mixtures. Thus, MT-RNNT-AFT can be trained without relying on accurate alignments, and it can recognize all speakers' speech with just one round of encoder processing. Experiments show that MT-RNNT-AFT achieves performance comparable to that of the state-of-the-art alternatives, while greatly simplifying the training process.<|reference_end|> | arxiv | @article{moriya2024alignment-free,
title={Alignment-Free Training for Transducer-based Multi-Talker ASR},
author={Takafumi Moriya, Shota Horiguchi, Marc Delcroix, Ryo Masumura,
Takanori Ashihara, Hiroshi Sato, Kohei Matsuura, Masato Mimura},
journal={arXiv preprint arXiv:2409.20301},
year={2024},
archivePrefix={arXiv},
eprint={2409.20301},
primaryClass={eess.AS cs.CL cs.SD}
} | moriya2024alignment-free |
arxiv-663615 | 2409.20302 | OM4OV: Leveraging Ontology Matching for Ontology Versioning | <|reference_start|>OM4OV: Leveraging Ontology Matching for Ontology Versioning: Due to the dynamic nature of the semantic web, ontology version control is required to capture time-varying information, most importantly for widely-used ontologies. Despite the long-standing recognition of ontology versioning (OV) as a crucial component for efficient ontology management, the growing size of ontologies and accumulating errors caused by manual labour overwhelm current OV approaches. In this paper, we propose yet another approach to performing OV using existing ontology matching (OM) techniques and systems. We introduce a unified OM4OV pipeline. From an OM perspective, we reconstruct a new task formulation, performance measurement, and dataset construction for OV tasks. Reusing the prior alignment(s) from OM, we also propose a cross-reference mechanism to effectively reduce the matching candidature and improve overall OV performance. We experimentally validate the OM4OV pipeline and its cross-reference mechanism using three datasets from the Alignment Evaluation Initiative (OAEI) and exploit insights on OM used for OV tasks.<|reference_end|> | arxiv | @article{qiang2024om4ov:,
title={OM4OV: Leveraging Ontology Matching for Ontology Versioning},
author={Zhangcheng Qiang, Kerry Taylor},
journal={arXiv preprint arXiv:2409.20302},
year={2024},
archivePrefix={arXiv},
eprint={2409.20302},
primaryClass={cs.AI cs.CL cs.IR}
} | qiang2024om4ov: |
arxiv-663616 | 2409.20303 | A Looming Replication Crisis in Evaluating Behavior in Language Models? Evidence and Solutions | <|reference_start|>A Looming Replication Crisis in Evaluating Behavior in Language Models? Evidence and Solutions: In an era where large language models (LLMs) are increasingly integrated into a wide range of everyday applications, research into these models' behavior has surged. However, due to the novelty of the field, clear methodological guidelines are lacking. This raises concerns about the replicability and generalizability of insights gained from research on LLM behavior. In this study, we discuss the potential risk of a replication crisis and support our concerns with a series of replication experiments focused on prompt engineering techniques purported to influence reasoning abilities in LLMs. We tested GPT-3.5, GPT-4o, Gemini 1.5 Pro, Claude 3 Opus, Llama 3-8B, and Llama 3-70B, on the chain-of-thought, EmotionPrompting, ExpertPrompting, Sandbagging, as well as Re-Reading prompt engineering techniques, using manually double-checked subsets of reasoning benchmarks including CommonsenseQA, CRT, NumGLUE, ScienceQA, and StrategyQA. Our findings reveal a general lack of statistically significant differences across nearly all techniques tested, highlighting, among others, several methodological weaknesses in previous research. We propose a forward-looking approach that includes developing robust methodologies for evaluating LLMs, establishing sound benchmarks, and designing rigorous experimental frameworks to ensure accurate and reliable assessments of model outputs.<|reference_end|> | arxiv | @article{vaugrante2024a,
title={A Looming Replication Crisis in Evaluating Behavior in Language Models?
Evidence and Solutions},
  author={Laur\`ene Vaugrante, Mathias Niepert, Thilo Hagendorff},
journal={arXiv preprint arXiv:2409.20303},
year={2024},
archivePrefix={arXiv},
eprint={2409.20303},
primaryClass={cs.CL cs.AI}
} | vaugrante2024a |
arxiv-663617 | 2409.20305 | Mixed-Precision Embeddings for Large-Scale Recommendation Models | <|reference_start|>Mixed-Precision Embeddings for Large-Scale Recommendation Models: Embedding techniques have become essential components of large databases in the deep learning era. By encoding discrete entities, such as words, items, or graph nodes, into continuous vector spaces, embeddings facilitate more efficient storage, retrieval, and processing in large databases. Especially in the domain of recommender systems, millions of categorical features are encoded as unique embedding vectors, which facilitates the modeling of similarities and interactions among features. However, numerous embedding vectors can result in significant storage overhead. In this paper, we aim to compress the embedding table through quantization techniques. Given that features vary in importance levels, we seek to identify an appropriate precision for each feature to balance model accuracy and memory usage. To this end, we propose a novel embedding compression method, termed Mixed-Precision Embeddings (MPE). Specifically, to reduce the size of the search space, we first group features by frequency and then search precision for each feature group. MPE further learns the probability distribution over precision levels for each feature group, which can be used to identify the most suitable precision with a specially designed sampling strategy. Extensive experiments on three public datasets demonstrate that MPE significantly outperforms existing embedding compression methods. Remarkably, MPE achieves about 200x compression on the Criteo dataset without comprising the prediction accuracy.<|reference_end|> | arxiv | @article{li2024mixed-precision,
title={Mixed-Precision Embeddings for Large-Scale Recommendation Models},
author={Shiwei Li, Zhuoqi Hu, Xing Tang, Haozhao Wang, Shijie Xu, Weihong Luo,
Yuhua Li, Xiuqiang He, Ruixuan Li},
journal={arXiv preprint arXiv:2409.20305},
year={2024},
archivePrefix={arXiv},
eprint={2409.20305},
primaryClass={cs.IR cs.DB}
} | li2024mixed-precision |
arxiv-663618 | 2409.20306 | Diagnosing and Repairing Distributed Routing Configurations Using Selective Symbolic Simulation | <|reference_start|>Diagnosing and Repairing Distributed Routing Configurations Using Selective Symbolic Simulation: Although substantial progress has been made in automatically verifying whether distributed routing configurations conform to certain requirements, diagnosing and repairing configuration errors remains manual and time-consuming. To fill this gap, we propose S^2Sim, a novel system for automatic routing configuration diagnosis and repair. Our key insight is that by selectively simulating variants of the given configuration in a symbolic way, we can find an intent-compliant variant, whose differences between the given configuration reveal the errors in the given configuration and suggest the patches. Building on this insight, we also design techniques to support complex scenarios (e.g., multiple protocol networks) and requirements (e.g., k-link failure tolerance). We implement a prototype of S^2Sim and evaluate its performance using networks of size O(10) ~ O(1000) with synthetic real-world configurations. Results show that S^2Sim diagnoses and repairs errors for 1) all WAN configurations within 10 s and 2) all DCN configurations within 20 minutes.<|reference_end|> | arxiv | @article{yang2024diagnosing,
title={Diagnosing and Repairing Distributed Routing Configurations Using
Selective Symbolic Simulation},
author={Rulan Yang, Hanyang Shao, Gao Han, Ziyi Wang, Xing Fang, Lizhao You,
Qiao Xiang, Linghe Kong, Ruiting Zhou, Jiwu Shu},
journal={arXiv preprint arXiv:2409.20306},
year={2024},
archivePrefix={arXiv},
eprint={2409.20306},
primaryClass={cs.NI}
} | yang2024diagnosing |
arxiv-663619 | 2409.20310 | A SSM is Polymerized from Multivariate Time Series | <|reference_start|>A SSM is Polymerized from Multivariate Time Series: For multivariate time series (MTS) tasks, previous state space models (SSMs) followed the modeling paradigm of Transformer-based methods. However, none of them explicitly model the complex dependencies of MTS: the Channel Dependency variations with Time (CDT). In view of this, we delve into the derivation of SSM, which involves approximating continuously updated functions by orthogonal function basis. We then develop Poly-Mamba, a novel method for MTS forecasting. Its core concept is to expand the original orthogonal function basis space into a multivariate orthogonal function space containing variable mixing terms, and make a projection on this space so as to explicitly describe the CDT by weighted coefficients. In Poly-Mamba, we propose the Multivariate Orthogonal Polynomial Approximation (MOPA) as a simplified implementation of this concept. For the simple linear relationship between channels, we propose Linear Channel Mixing (LCM) and generate CDT patterns adaptively for different channels through a proposed Order Combining method. Experiments on six real-world datasets demonstrate that Poly-Mamba outperforms the SOTA methods, especially when dealing with datasets having a large number of channels and complex correlations. The codes and log files will be released at: https://github.com/Joeland4/Poly-Mamba.<|reference_end|> | arxiv | @article{wu2024a,
title={A SSM is Polymerized from Multivariate Time Series},
author={Haixiang Wu},
journal={arXiv preprint arXiv:2409.20310},
year={2024},
archivePrefix={arXiv},
eprint={2409.20310},
primaryClass={cs.LG}
} | wu2024a |
arxiv-663620 | 2409.20313 | Boosting Hybrid Autoregressive Transducer-based ASR with Internal Acoustic Model Training and Dual Blank Thresholding | <|reference_start|>Boosting Hybrid Autoregressive Transducer-based ASR with Internal Acoustic Model Training and Dual Blank Thresholding: A hybrid autoregressive transducer (HAT) is a variant of neural transducer that models blank and non-blank posterior distributions separately. In this paper, we propose a novel internal acoustic model (IAM) training strategy to enhance HAT-based speech recognition. IAM consists of encoder and joint networks, which are fully shared and jointly trained with HAT. This joint training not only enhances the HAT training efficiency but also encourages IAM and HAT to emit blanks synchronously which skips the more expensive non-blank computation, resulting in more effective blank thresholding for faster decoding. Experiments demonstrate that the relative error reductions of the HAT with IAM compared to the vanilla HAT are statistically significant. Moreover, we introduce dual blank thresholding, which combines both HAT- and IAM-blank thresholding and a compatible decoding algorithm. This results in a 42-75% decoding speed-up with no major performance degradation.<|reference_end|> | arxiv | @article{moriya2024boosting,
title={Boosting Hybrid Autoregressive Transducer-based ASR with Internal
Acoustic Model Training and Dual Blank Thresholding},
author={Takafumi Moriya, Takanori Ashihara, Masato Mimura, Hiroshi Sato, Kohei
Matsuura, Ryo Masumura, Taichi Asami},
journal={arXiv preprint arXiv:2409.20313},
year={2024},
archivePrefix={arXiv},
eprint={2409.20313},
primaryClass={eess.AS cs.CL cs.SD}
} | moriya2024boosting |
arxiv-663621 | 2409.20314 | A faster algorithm for the $k$-forest problem: breaking the $O_k(n^{3/2})$ complexity barrier | <|reference_start|>A faster algorithm for the $k$-forest problem: breaking the $O_k(n^{3/2})$ complexity barrier: The $k$-forest problem asks to find $k$ forests in a graph $G$ maximizing the number of edges in their union. We show how to solve this problem in $O(k^3 \min\{kn, m\} \log^2 n + k \cdot{\rm MAXFLOW}(m, m) \log n)$ time, breaking the $O_k(n^{3/2})$ complexity barrier of previously known approaches. Our algorithm relies on three subroutines: the directed $k$-forest problem with bounded indegree condition, the $k$-pseudoforest problem, and the top clump computation.<|reference_end|> | arxiv | @article{arkhipov2024a,
title={A faster algorithm for the $k$-forest problem: breaking the
$O_k(n^{3/2})$ complexity barrier},
author={Pavel Arkhipov, Vladimir Kolmogorov},
journal={arXiv preprint arXiv:2409.20314},
year={2024},
archivePrefix={arXiv},
eprint={2409.20314},
primaryClass={cs.DS}
} | arkhipov2024a |
arxiv-663622 | 2409.20324 | HEADS-UP: Head-Mounted Egocentric Dataset for Trajectory Prediction in Blind Assistance Systems | <|reference_start|>HEADS-UP: Head-Mounted Egocentric Dataset for Trajectory Prediction in Blind Assistance Systems: In this paper, we introduce HEADS-UP, the first egocentric dataset collected from head-mounted cameras, designed specifically for trajectory prediction in blind assistance systems. With the growing population of blind and visually impaired individuals, the need for intelligent assistive tools that provide real-time warnings about potential collisions with dynamic obstacles is becoming critical. These systems rely on algorithms capable of predicting the trajectories of moving objects, such as pedestrians, to issue timely hazard alerts. However, existing datasets fail to capture the necessary information from the perspective of a blind individual. To address this gap, HEADS-UP offers a novel dataset focused on trajectory prediction in this context. Leveraging this dataset, we propose a semi-local trajectory prediction approach to assess collision risks between blind individuals and pedestrians in dynamic environments. Unlike conventional methods that separately predict the trajectories of both the blind individual (ego agent) and pedestrians, our approach operates within a semi-local coordinate system, a rotated version of the camera's coordinate system, facilitating the prediction process. We validate our method on the HEADS-UP dataset and implement the proposed solution in ROS, performing real-time tests on an NVIDIA Jetson GPU through a user study. Results from both dataset evaluations and live tests demonstrate the robustness and efficiency of our approach.<|reference_end|> | arxiv | @article{haghighi2024heads-up:,
title={HEADS-UP: Head-Mounted Egocentric Dataset for Trajectory Prediction in
Blind Assistance Systems},
author={Yasaman Haghighi, Celine Demonsant, Panagiotis Chalimourdas, Maryam
Tavasoli Naeini, Jhon Kevin Munoz, Bladimir Bacca, Silvan Suter, Matthieu
Gani and Alexandre Alahi},
journal={arXiv preprint arXiv:2409.20324},
year={2024},
archivePrefix={arXiv},
eprint={2409.20324},
primaryClass={cs.CV}
} | haghighi2024heads-up: |
arxiv-663623 | 2409.20325 | Old Optimizer, New Norm: An Anthology | <|reference_start|>Old Optimizer, New Norm: An Anthology: Deep learning optimizers are often motivated through a mix of convex and approximate second-order theory. We select three such methods -- Adam, Shampoo and Prodigy -- and argue that each method can instead be understood as a squarely first-order method without convexity assumptions. In fact, after switching off exponential moving averages, each method is equivalent to steepest descent under a particular norm. By generalizing this observation, we chart a new design space for training algorithms. Different operator norms should be assigned to different tensors based on the role that the tensor plays within the network. For example, while linear and embedding layers may have the same weight space of $\mathbb{R}^{m\times n}$, these layers play different roles and should be assigned different norms. We hope that this idea of carefully metrizing the neural architecture might lead to more stable, scalable and indeed faster training.<|reference_end|> | arxiv | @article{bernstein2024old,
title={Old Optimizer, New Norm: An Anthology},
author={Jeremy Bernstein and Laker Newhouse},
journal={arXiv preprint arXiv:2409.20325},
year={2024},
archivePrefix={arXiv},
eprint={2409.20325},
primaryClass={cs.LG math.OC}
} | bernstein2024old |
arxiv-663624 | 2409.20326 | MARLadona -- Towards Cooperative Team Play Using Multi-Agent Reinforcement Learning | <|reference_start|>MARLadona -- Towards Cooperative Team Play Using Multi-Agent Reinforcement Learning: Robot soccer, in its full complexity, poses an unsolved research challenge. Current solutions heavily rely on engineered heuristic strategies, which lack robustness and adaptability. Deep reinforcement learning has gained significant traction in various complex robotics tasks such as locomotion, manipulation, and competitive games (e.g., AlphaZero, OpenAI Five), making it a promising solution to the robot soccer problem. This paper introduces MARLadona. A decentralized multi-agent reinforcement learning (MARL) training pipeline capable of producing agents with sophisticated team play behavior, bridging the shortcomings of heuristic methods. Further, we created an open-source multi-agent soccer environment based on Isaac Gym. Utilizing our MARL framework and a modified a global entity encoder as our core architecture, our approach achieves a 66.8% win rate against HELIOS agent, which employs a state-of-the-art heuristic strategy. Furthermore, we provided an in-depth analysis of the policy behavior and interpreted the agent's intention using the critic network.<|reference_end|> | arxiv | @article{li2024marladona,
title={MARLadona - Towards Cooperative Team Play Using Multi-Agent
Reinforcement Learning},
author={Zichong Li, Filip Bjelonic, Victor Klemm, and Marco Hutter},
journal={arXiv preprint arXiv:2409.20326},
year={2024},
archivePrefix={arXiv},
eprint={2409.20326},
primaryClass={cs.MA}
} | li2024marladona |
arxiv-663625 | 2409.20329 | Fine-Tuning Personalization in Federated Learning to Mitigate Adversarial Clients | <|reference_start|>Fine-Tuning Personalization in Federated Learning to Mitigate Adversarial Clients: Federated learning (FL) is an appealing paradigm that allows a group of machines (a.k.a. clients) to learn collectively while keeping their data local. However, due to the heterogeneity between the clients' data distributions, the model obtained through the use of FL algorithms may perform poorly on some client's data. Personalization addresses this issue by enabling each client to have a different model tailored to their own data while simultaneously benefiting from the other clients' data. We consider an FL setting where some clients can be adversarial, and we derive conditions under which full collaboration fails. Specifically, we analyze the generalization performance of an interpolated personalized FL framework in the presence of adversarial clients, and we precisely characterize situations when full collaboration performs strictly worse than fine-tuned personalization. Our analysis determines how much we should scale down the level of collaboration, according to data heterogeneity and the tolerable fraction of adversarial clients. We support our findings with empirical results on mean estimation and binary classification problems, considering synthetic and benchmark image classification datasets.<|reference_end|> | arxiv | @article{allouah2024fine-tuning,
title={Fine-Tuning Personalization in Federated Learning to Mitigate
Adversarial Clients},
author={Youssef Allouah, Abdellah El Mrini, Rachid Guerraoui, Nirupam Gupta
and Rafael Pinot},
journal={arXiv preprint arXiv:2409.20329},
year={2024},
archivePrefix={arXiv},
eprint={2409.20329},
primaryClass={cs.LG cs.CR}
} | allouah2024fine-tuning |
arxiv-663626 | 2409.20331 | On the Structure of Information | <|reference_start|>On the Structure of Information: Shannon information and Shannon entropy are undoubtedly the most commonly used quantitative measures of information, cropping up in the literature across a broad variety of disciplines, often in contexts unrelated to coding theory. Here, we generalize the original idea behind Shannon entropy as the cost of encoding a sample of a random variable in terms of the required codeword length, to arbitrary loss functions by considering the optimally achievable loss given a certain level of knowledge about the random variable. By formalizing knowledge in terms of the measure-theoretic notion of sub-$\sigma$-algebras, we arrive at a general notion of uncertainty reduction that includes entropy and information as special cases: entropy is the reduction of uncertainty from no (or partial) knowledge to full knowledge about a random variable, whereas information is uncertainty reduction from no (or partial) knowledge to partial knowledge. As examples, we get Shannon information and entropy when measuring loss in terms of message length, variance for square error loss, and more generally, for the Bregman loss, we get Bregman information. Moreover, we show that appealing properties of Shannon entropy and information extend to the general case, including well-known relations involving the KL divergence, which are extended to divergences of proper scoring rules.<|reference_end|> | arxiv | @article{gottwald2024on,
title={On the Structure of Information},
author={Sebastian Gottwald, Daniel A. Braun},
journal={arXiv preprint arXiv:2409.20331},
year={2024},
archivePrefix={arXiv},
eprint={2409.20331},
primaryClass={cs.IT math.IT}
} | gottwald2024on |
arxiv-663627 | 2409.20332 | Devil is in Details: Locality-Aware 3D Abdominal CT Volume Generation for Self-Supervised Organ Segmentation | <|reference_start|>Devil is in Details: Locality-Aware 3D Abdominal CT Volume Generation for Self-Supervised Organ Segmentation: In the realm of medical image analysis, self-supervised learning (SSL) techniques have emerged to alleviate labeling demands, while still facing the challenge of training data scarcity owing to escalating resource requirements and privacy constraints. Numerous efforts employ generative models to generate high-fidelity, unlabeled 3D volumes across diverse modalities and anatomical regions. However, the intricate and indistinguishable anatomical structures within the abdomen pose a unique challenge to abdominal CT volume generation compared to other anatomical regions. To address the overlooked challenge, we introduce the Locality-Aware Diffusion (Lad), a novel method tailored for exquisite 3D abdominal CT volume generation. We design a locality loss to refine crucial anatomical regions and devise a condition extractor to integrate abdominal priori into generation, thereby enabling the generation of large quantities of high-quality abdominal CT volumes essential for SSL tasks without the need for additional data such as labels or radiology reports. Volumes generated through our method demonstrate remarkable fidelity in reproducing abdominal structures, achieving a decrease in FID score from 0.0034 to 0.0002 on AbdomenCT-1K dataset, closely mirroring authentic data and surpassing current methods. Extensive experiments demonstrate the effectiveness of our method in self-supervised organ segmentation tasks, resulting in an improvement in mean Dice scores on two abdominal datasets effectively. These results underscore the potential of synthetic data to advance self-supervised learning in medical image analysis.<|reference_end|> | arxiv | @article{wang2024devil,
title={Devil is in Details: Locality-Aware 3D Abdominal CT Volume Generation
for Self-Supervised Organ Segmentation},
author={Yuran Wang, Zhijing Wan, Yansheng Qiu and Zheng Wang},
journal={arXiv preprint arXiv:2409.20332},
year={2024},
archivePrefix={arXiv},
eprint={2409.20332},
primaryClass={eess.IV cs.CV}
} | wang2024devil |
arxiv-663628 | 2409.20339 | The linearized monotonicity method for elastic waves and the separation of material parameters | <|reference_start|>The linearized monotonicity method for elastic waves and the separation of material parameters: We derive a linearized version of the monotonicity method for shape reconstruction using time harmonic elastic waves. The linearized method provides an efficient version of the method, drastically reducing computation time. Here we show that the linearized method has some additional advantages. The linearized method can in particular be used to obtain additional information on the material parameters, and is able to partially separate and identify the supports of the Lam\'e parameters.<|reference_end|> | arxiv | @article{eberle-blick2024the,
title={The linearized monotonicity method for elastic waves and the separation
of material parameters},
author={Sarah Eberle-Blick, Valter Pohjola},
journal={arXiv preprint arXiv:2409.20339},
year={2024},
archivePrefix={arXiv},
eprint={2409.20339},
primaryClass={math.AP cs.NA math.NA}
} | eberle-blick2024the |
arxiv-663629 | 2409.20340 | Enhancing GANs with Contrastive Learning-Based Multistage Progressive Finetuning SNN and RL-Based External Optimization | <|reference_start|>Enhancing GANs with Contrastive Learning-Based Multistage Progressive Finetuning SNN and RL-Based External Optimization: The application of deep learning in cancer research, particularly in early diagnosis, case understanding, and treatment strategy design, emphasizes the need for high-quality data. Generative AI, especially Generative Adversarial Networks (GANs), has emerged as a leading solution to challenges like class imbalance, robust learning, and model training, while addressing issues stemming from patient privacy and the scarcity of real data. Despite their promise, GANs face several challenges, both inherent and specific to histopathology data. Inherent issues include training imbalance, mode collapse, linear learning from insufficient discriminator feedback, and hard boundary convergence due to stringent feedback. Histopathology data presents a unique challenge with its complex representation, high spatial resolution, and multiscale features. To address these challenges, we propose a framework consisting of two components. First, we introduce a contrastive learning-based Multistage Progressive Finetuning Siamese Neural Network (MFT-SNN) for assessing the similarity between histopathology patches. Second, we implement a Reinforcement Learning-based External Optimizer (RL-EO) within the GAN training loop, serving as a reward signal generator. The modified discriminator loss function incorporates a weighted reward, guiding the GAN to maximize this reward while minimizing loss. This approach offers an external optimization guide to the discriminator, preventing generator overfitting and ensuring smooth convergence. Our proposed solution has been benchmarked against state-of-the-art (SOTA) GANs and a Denoising Diffusion Probabilistic model, outperforming previous SOTA across various metrics, including FID score, KID score, Perceptual Path Length, and downstream classification tasks.<|reference_end|> | arxiv | @article{mustafa2024enhancing,
title={Enhancing GANs with Contrastive Learning-Based Multistage Progressive
Finetuning SNN and RL-Based External Optimization},
author={Osama Mustafa},
journal={arXiv preprint arXiv:2409.20340},
year={2024},
archivePrefix={arXiv},
eprint={2409.20340},
primaryClass={eess.IV cs.AI cs.CV cs.LG}
} | mustafa2024enhancing |
arxiv-663630 | 2409.20341 | Conway's cosmological theorem and automata theory | <|reference_start|>Conway's cosmological theorem and automata theory: John Conway proved that every audioactive sequence (a.k.a. look-and-say) decays into a compound of 94~elements, a statement he termed the cosmological theorem. The underlying audioactive process can be modeled by a finite-state machine, mapping one sequence of integers to another. Leveraging automata theory, we propose a new proof of Conway's theorem based on a few simple machines, using a computer to compose and minimize them.<|reference_end|> | arxiv | @article{lairez2024conway's,
title={Conway's cosmological theorem and automata theory},
author={Pierre Lairez, Aleksandr Storozhenko},
journal={arXiv preprint arXiv:2409.20341},
year={2024},
archivePrefix={arXiv},
eprint={2409.20341},
primaryClass={cs.FL}
} | lairez2024conway's |
arxiv-663631 | 2409.20342 | AI generated annotations for Breast, Brain, Liver, Lungs and Prostate cancer collections in National Cancer Institute Imaging Data Commons | <|reference_start|>AI generated annotations for Breast, Brain, Liver, Lungs and Prostate cancer collections in National Cancer Institute Imaging Data Commons: AI in Medical Imaging project aims to enhance the National Cancer Institute's (NCI) Image Data Commons (IDC) by developing nnU-Net models and providing AI-assisted segmentations for cancer radiology images. We created high-quality, AI-annotated imaging datasets for 11 IDC collections. These datasets include images from various modalities, such as computed tomography (CT) and magnetic resonance imaging (MRI), covering the lungs, breast, brain, kidneys, prostate, and liver. The nnU-Net models were trained using open-source datasets. A portion of the AI-generated annotations was reviewed and corrected by radiologists. Both the AI and radiologist annotations were encoded in compliance with the the Digital Imaging and Communications in Medicine (DICOM) standard, ensuring seamless integration into the IDC collections. All models, images, and annotations are publicly accessible, facilitating further research and development in cancer imaging. This work supports the advancement of imaging tools and algorithms by providing comprehensive and accurate annotated datasets.<|reference_end|> | arxiv | @article{murugesan2024ai,
title={AI generated annotations for Breast, Brain, Liver, Lungs and Prostate
cancer collections in National Cancer Institute Imaging Data Commons},
author={Gowtham Krishnan Murugesan, Diana McCrumb, Rahul Soni, Jithendra
Kumar, Leonard Nuernberg, Linmin Pei, Ulrike Wagner, Sutton Granger, Andrey
Y. Fedorov, Stephen Moore, Jeff Van Oss},
journal={arXiv preprint arXiv:2409.20342},
year={2024},
archivePrefix={arXiv},
eprint={2409.20342},
primaryClass={eess.IV cs.CV}
} | murugesan2024ai |
arxiv-663632 | 2409.20343 | Demystifying and Assessing Code Understandability in Java Decompilation | <|reference_start|>Demystifying and Assessing Code Understandability in Java Decompilation: Decompilation, the process of converting machine-level code into readable source code, plays a critical role in reverse engineering. Given that the main purpose of decompilation is to facilitate code comprehension in scenarios where the source code is unavailable, the understandability of decompiled code is of great importance. In this paper, we propose the first empirical study on the understandability of Java decompiled code and obtained the following findings: (1) Understandability of Java decompilation is considered as important as its correctness, and decompilation understandability issues are even more commonly encountered than decompilation failures. (2) A notable percentage of code snippets decompiled by Java decompilers exhibit significantly lower or higher levels of understandability in comparison to their original source code. (3) Unfortunately, Cognitive Complexity demonstrates relatively acceptable precision while low recall in recognizing these code snippets exhibiting diverse understandability during decompilation. (4) Even worse, perplexity demonstrates lower levels of precision and recall in recognizing such code snippets. Inspired by the four findings, we further proposed six code patterns and the first metric for the assessment of decompiled code understandability. This metric was extended from Cognitive Complexity, with six more rules harvested from an exhaustive manual analysis into 1287 pairs of source code snippets and corresponding decompiled code. This metric was also validated using the original and updated dataset, yielding an impressive macro F1-score of 0.88 on the original dataset, and 0.86 on the test set.<|reference_end|> | arxiv | @article{qin2024demystifying,
title={Demystifying and Assessing Code Understandability in Java Decompilation},
author={Ruixin Qin, Yifan Xiong, Yifei Lu, Minxue Pan},
journal={arXiv preprint arXiv:2409.20343},
year={2024},
archivePrefix={arXiv},
eprint={2409.20343},
primaryClass={cs.SE}
} | qin2024demystifying |
arxiv-663633 | 2409.20344 | Design, manufacturing, and inverse dynamic modeling of soft parallel robots actuated by dielectric elastomer actuators | <|reference_start|>Design, manufacturing, and inverse dynamic modeling of soft parallel robots actuated by dielectric elastomer actuators: Soft parallel robots with their manipulation safety and low commercial cost show a promising future for delicate operations and safe human-robot interactions. However, promoting the use of electroactive polymers (EAPs) is still challenging due to the under-improving quality of the product and the dynamic modelling of the collaborations between multiple actuators. This article presents the design, fabrication, modelling and control of a parallel kinematics Delta robot actuated by dielectric elastomer actuators (DEAs). The trade-off between the actuation force and stroke is retaken by an angular stroke amplification mechanism, and the weight of the robot frame is reduced by utilizing 3D puzzling strip structures. A generic way of constructing a high-stability conductive paint on a silicon-based film has been achieved by laser scanning the DE-film and then sandwiching a conductive particle-based electrode with a paint which is mixed by the particles and photosensitive resin. Compared to the widely used carbon grease, the fabricated electrode shows a higher consistency in its dynamic behaviour before and after the on-stand test. Finally, to predict the output force and inverse motion of the robot end effector, we constructed the inverse dynamic model by introducing an expanded Bergstrom-Boyce model to the constitutive behavior of the dielectric film. The experimental results show a prediction of robot output force with an RMSE of 12.4% when the end effector remains stationary, and a well-followed trajectory with an RMSE of less than 2.5%.<|reference_end|> | arxiv | @article{chang2024design,
title={Design, manufacturing, and inverse dynamic modeling of soft parallel
robots actuated by dielectric elastomer actuators},
author={Jung-Che Chang, Xi Wang, Dragos Axinte, and Xin Dong},
journal={arXiv preprint arXiv:2409.20344},
year={2024},
archivePrefix={arXiv},
eprint={2409.20344},
primaryClass={cs.RO cs.SY eess.SY}
} | chang2024design
arxiv-663634 | 2409.20353 | CableInspect-AD: An Expert-Annotated Anomaly Detection Dataset | <|reference_start|>CableInspect-AD: An Expert-Annotated Anomaly Detection Dataset: Machine learning models are increasingly being deployed in real-world contexts. However, systematic studies on their transferability to specific and critical applications are underrepresented in the research literature. An important example is visual anomaly detection (VAD) for robotic power line inspection. While existing VAD methods perform well in controlled environments, real-world scenarios present diverse and unexpected anomalies that current datasets fail to capture. To address this gap, we introduce $\textit{CableInspect-AD}$, a high-quality, publicly available dataset created and annotated by domain experts from Hydro-Qu\'ebec, a Canadian public utility. This dataset includes high-resolution images with challenging real-world anomalies, covering defects with varying severity levels. To address the challenges of collecting diverse anomalous and nominal examples for setting a detection threshold, we propose an enhancement to the celebrated PatchCore algorithm. This enhancement enables its use in scenarios with limited labeled data. We also present a comprehensive evaluation protocol based on cross-validation to assess models' performances. We evaluate our $\textit{Enhanced-PatchCore}$ for few-shot and many-shot detection, and Vision-Language Models for zero-shot detection. While promising, these models struggle to detect all anomalies, highlighting the dataset's value as a challenging benchmark for the broader research community. Project page: https://mila-iqia.github.io/cableinspect-ad/.<|reference_end|> | arxiv | @article{arodi2024cableinspect-ad:,
title={CableInspect-AD: An Expert-Annotated Anomaly Detection Dataset},
author={Akshatha Arodi, Margaux Luck, Jean-Luc Bedwani, Aldo Zaimi, Ge Li,
Nicolas Pouliot, Julien Beaudry, Ga'etan Marceau Caron},
journal={arXiv preprint arXiv:2409.20353},
year={2024},
archivePrefix={arXiv},
eprint={2409.20353},
primaryClass={cs.CV cs.LG}
} | arodi2024cableinspect-ad: |
arxiv-663635 | 2409.20356 | Satellite image classification with neural quantum kernels | <|reference_start|>Satellite image classification with neural quantum kernels: A practical application of quantum machine learning in real-world scenarios in the short term remains elusive, despite significant theoretical efforts. Image classification, a common task for classical models, has been used to benchmark quantum algorithms with simple datasets, but only a few studies have tackled complex real-data classification challenges. In this work, we address such a gap by focusing on the classification of satellite images, a task of particular interest to the earth observation (EO) industry. We first preprocess the selected intricate dataset by reducing its dimensionality. Subsequently, we employ neural quantum kernels (NQKs)- embedding quantum kernels (EQKs) constructed from trained quantum neural networks (QNNs)- to classify images which include solar panels. We explore both $1$-to-$n$ and $n$-to-$n$ NQKs. In the former, parameters from a single-qubit QNN's training construct an $n$-qubit EQK achieving a mean test accuracy over 86% with three features. In the latter, we iteratively train an $n$-qubit QNN to ensure scalability, using the resultant architecture to directly form an $n$-qubit EQK. In this case, a test accuracy over 88% is obtained for three features and 8 qubits. Additionally, we show that the results are robust against a suboptimal training of the QNN.<|reference_end|> | arxiv | @article{rodriguez-grasa2024satellite,
title={Satellite image classification with neural quantum kernels},
author={Pablo Rodriguez-Grasa, Robert Farzan-Rodriguez, Gabriele Novelli, Yue
Ban, Mikel Sanz},
journal={arXiv preprint arXiv:2409.20356},
year={2024},
archivePrefix={arXiv},
eprint={2409.20356},
primaryClass={quant-ph cs.LG}
} | rodriguez-grasa2024satellite |
arxiv-663636 | 2409.20361 | Rotated Runtime Smooth: Training-Free Activation Smoother for accurate INT4 inference | <|reference_start|>Rotated Runtime Smooth: Training-Free Activation Smoother for accurate INT4 inference: Large language models have demonstrated promising capabilities upon scaling up parameters. However, serving large language models incurs substantial computation and memory movement costs due to their large scale. Quantization methods have been employed to reduce service costs and latency. Nevertheless, outliers in activations hinder the development of INT4 weight-activation quantization. Existing approaches separate outliers and normal values into two matrices or migrate outliers from activations to weights, suffering from high latency or accuracy degradation. Based on observing activations from large language models, outliers can be classified into channel-wise and spike outliers. In this work, we propose Rotated Runtime Smooth (RRS), a plug-and-play activation smoother for quantization, consisting of Runtime Smooth and the Rotation operation. Runtime Smooth (RS) is introduced to eliminate channel-wise outliers by smoothing activations with channel-wise maximums during runtime. The rotation operation can narrow the gap between spike outliers and normal values, alleviating the effect of victims caused by channel-wise smoothing. The proposed method outperforms the state-of-the-art method in the LLaMA and Qwen families and improves WikiText-2 perplexity from 57.33 to 6.66 for INT4 inference.<|reference_end|> | arxiv | @article{yi2024rotated,
title={Rotated Runtime Smooth: Training-Free Activation Smoother for accurate
INT4 inference},
author={Ke Yi, Zengke Liu, Jianwei Zhang, Chengyuan Li, Tong Zhang, Junyang
Lin, Jingren Zhou},
journal={arXiv preprint arXiv:2409.20361},
year={2024},
archivePrefix={arXiv},
eprint={2409.20361},
primaryClass={cs.LG cs.AI}
} | yi2024rotated |
arxiv-663637 | 2409.20362 | TwinArray Sort: An Ultrarapid Conditional Non-Comparison Based Sorting Algorithm | <|reference_start|>TwinArray Sort: An Ultrarapid Conditional Non-Comparison Based Sorting Algorithm: In computer science, sorting algorithms are crucial for data processing and machine learning. Large datasets and high efficiency requirements provide challenges for comparison-based algorithms like Quicksort and Merge sort, which achieve O(n log n) time complexity. Non-comparison-based algorithms like Spreadsort and Counting Sort have memory consumption issues and a relatively high computational demand, even if they can attain linear time complexity under certain circumstances. We present TwinArray Sort, a novel conditional non-comparison-based sorting algorithm that effectively uses array indices. When it comes to worst-case time and space complexities, TwinArray Sort achieves O(n+k). The approach remains efficient under all settings and works well with datasets with randomly sorted, reverse-sorted, or nearly sorted distributions. TwinArray Sort can handle duplicates and optimize memory efficiently thanks to its two auxiliary arrays for value storage and frequency counting, as well as a conditional distinct array verifier. TwinArray Sort consistently performs better than conventional algorithms, according to experimental assessments, particularly when sorting unique arrays under all data distribution scenarios. The approach is suitable for massive data processing and machine learning dataset management due to its creative use of dual auxiliary arrays and a conditional distinct array verification, which improves memory use and duplication handling. TwinArray Sort overcomes conventional sorting algorithmic constraints by combining cutting-edge methods with non-comparison-based sorting advantages. Its reliable performance in a range of data distributions makes it an adaptable and effective answer for contemporary computing requirements.<|reference_end|> | arxiv | @article{amini2024twinarray,
title={TwinArray Sort: An Ultrarapid Conditional Non-Comparison Based Sorting
Algorithm},
author={Amin Amini},
journal={arXiv preprint arXiv:2409.20362},
year={2024},
archivePrefix={arXiv},
eprint={2409.20362},
primaryClass={cs.DS cs.CC}
} | amini2024twinarray |
arxiv-663638 | 2409.20363 | Symbol-based multilevel block $\tau$ preconditioners for multilevel block Toeplitz systems: GLT-based analysis and applications | <|reference_start|>Symbol-based multilevel block $\tau$ preconditioners for multilevel block Toeplitz systems: GLT-based analysis and applications: In recent years, there has been a renewed interest in preconditioning for multilevel Toeplitz systems, a research field that has been extensively explored over the past several decades. This work introduces novel preconditioning strategies using multilevel $\tau$ matrices for both symmetric and nonsymmetric multilevel Toeplitz systems. Our proposals constitute a general framework, as they are constructed solely based on the generating function of the multilevel Toeplitz coefficient matrix, when it can be defined. We begin with nonsymmetric systems, where we employ a symmetrization technique by permuting the coefficient matrix to produce a real symmetric multilevel Hankel structure. We propose a multilevel $\tau$ preconditioner tailored to the symmetrized system and prove that the eigenvalues of the preconditioned matrix sequence cluster at $\pm 1$, leading to rapid convergence when using the preconditioned minimal residual method. The high effectiveness of this approach is demonstrated through its application in solving space fractional diffusion equations. Next, for symmetric systems we introduce another multilevel $\tau$ preconditioner and show that the preconditioned conjugate gradient method can achieve an optimal convergence rate, namely a rate that is independent of the matrix size, when employed for a class of ill-conditioned multilevel Toeplitz systems. Numerical examples are provided to critically assess the effectiveness of our proposed preconditioners compared to several leading existing preconditioned solvers, highlighting their superior performance.<|reference_end|> | arxiv | @article{hon2024symbol-based,
title={Symbol-based multilevel block $\tau$ preconditioners for multilevel
block Toeplitz systems: GLT-based analysis and applications},
author={Sean Y. Hon, Congcong Li, Rosita L. Sormani, Rolf Krause, Stefano
Serra-Capizzano},
journal={arXiv preprint arXiv:2409.20363},
year={2024},
archivePrefix={arXiv},
eprint={2409.20363},
primaryClass={math.NA cs.NA}
} | hon2024symbol-based |
arxiv-663639 | 2409.20364 | Efficient Driving Behavior Narration and Reasoning on Edge Device Using Large Language Models | <|reference_start|>Efficient Driving Behavior Narration and Reasoning on Edge Device Using Large Language Models: Deep learning architectures with powerful reasoning capabilities have driven significant advancements in autonomous driving technology. Large language models (LLMs) applied in this field can describe driving scenes and behaviors with a level of accuracy similar to human perception, particularly in visual tasks. Meanwhile, the rapid development of edge computing, with its advantage of proximity to data sources, has made edge devices increasingly important in autonomous driving. Edge devices process data locally, reducing transmission delays and bandwidth usage, and achieving faster response times. In this work, we propose a driving behavior narration and reasoning framework that applies LLMs to edge devices. The framework consists of multiple roadside units, with LLMs deployed on each unit. These roadside units collect road data and communicate via 5G NSR/NR networks. Our experiments show that LLMs deployed on edge devices can achieve satisfactory response speeds. Additionally, we propose a prompt strategy to enhance the narration and reasoning performance of the system. This strategy integrates multi-modal information, including environmental, agent, and motion data. Experiments conducted on the OpenDV-Youtube dataset demonstrate that our approach significantly improves performance across both tasks.<|reference_end|> | arxiv | @article{huang2024efficient,
title={Efficient Driving Behavior Narration and Reasoning on Edge Device Using
Large Language Models},
author={Yizhou Huang, Yihua Cheng, Kezhi Wang},
journal={arXiv preprint arXiv:2409.20364},
year={2024},
archivePrefix={arXiv},
eprint={2409.20364},
primaryClass={cs.AI cs.CV cs.RO}
} | huang2024efficient |
arxiv-663640 | 2409.20365 | VideoINSTA: Zero-shot Long Video Understanding via Informative Spatial-Temporal Reasoning with LLMs | <|reference_start|>VideoINSTA: Zero-shot Long Video Understanding via Informative Spatial-Temporal Reasoning with LLMs: In the video-language domain, recent works in leveraging zero-shot Large Language Model-based reasoning for video understanding have become competitive challengers to previous end-to-end models. However, long video understanding presents unique challenges due to the complexity of reasoning over extended timespans, even for zero-shot LLM-based approaches. The challenge of information redundancy in long videos prompts the question of what specific information is essential for large language models (LLMs) and how to leverage them for complex spatial-temporal reasoning in long-form video analysis. We propose a framework VideoINSTA, i.e. INformative Spatial-TemporAl Reasoning for zero-shot long-form video understanding. VideoINSTA contributes (1) a zero-shot framework for long video understanding using LLMs; (2) an event-based temporal reasoning and content-based spatial reasoning approach for LLMs to reason over spatial-temporal information in videos; (3) a self-reflective information reasoning scheme balancing temporal factors based on information sufficiency and prediction confidence. Our model significantly improves the state-of-the-art on three long video question-answering benchmarks: EgoSchema, NextQA, and IntentQA, and the open question answering dataset ActivityNetQA. The code is released here: https://github.com/mayhugotong/VideoINSTA.<|reference_end|> | arxiv | @article{liao2024videoinsta:,
title={VideoINSTA: Zero-shot Long Video Understanding via Informative
Spatial-Temporal Reasoning with LLMs},
author={Ruotong Liao, Max Erler, Huiyu Wang, Guangyao Zhai, Gengyuan Zhang,
Yunpu Ma, Volker Tresp},
journal={arXiv preprint arXiv:2409.20365},
year={2024},
archivePrefix={arXiv},
eprint={2409.20365},
primaryClass={cs.CV}
} | liao2024videoinsta: |
arxiv-663641 | 2409.20366 | Disentangling Singlish Discourse Particles with Task-Driven Representation | <|reference_start|>Disentangling Singlish Discourse Particles with Task-Driven Representation: Singlish, or formally Colloquial Singapore English, is an English-based creole language originating from the SouthEast Asian country Singapore. The language contains influences from Sinitic languages such as Chinese dialects, Malay, Tamil and so forth. A fundamental task to understanding Singlish is to first understand the pragmatic functions of its discourse particles, upon which Singlish relies heavily to convey meaning. This work offers a preliminary effort to disentangle the Singlish discourse particles (lah, meh and hor) with task-driven representation learning. After disentanglement, we cluster these discourse particles to differentiate their pragmatic functions, and perform Singlish-to-English machine translation. Our work provides a computational method to understanding Singlish discourse particles, and opens avenues towards a deeper comprehension of the language and its usage.<|reference_end|> | arxiv | @article{foo2024disentangling,
title={Disentangling Singlish Discourse Particles with Task-Driven
Representation},
author={Linus Tze En Foo, Lynnette Hui Xian Ng},
journal={arXiv preprint arXiv:2409.20366},
year={2024},
archivePrefix={arXiv},
eprint={2409.20366},
primaryClass={cs.CL}
} | foo2024disentangling |
arxiv-663642 | 2409.20369 | Numerical solutions of ordinary differential equations using Spline-Integral Operator | <|reference_start|>Numerical solutions of ordinary differential equations using Spline-Integral Operator: In this work, we introduce a novel numerical method for solving initial value problems associated with a given differential equation. Our approach utilizes a spline approximation of the theoretical solution alongside the integral formulation of the analytical solution. Furthermore, we offer a rigorous proof of the method's order and provide a comprehensive stability analysis. Additionally, we showcase the effectiveness of the method through some examples, comparing it with Taylor's methods of the same order.<|reference_end|> | arxiv | @article{salgado2024numerical,
title={Numerical solutions of ordinary differential equations using
Spline-Integral Operator},
author={Gustavo H. O. Salgado and Jo~ao P. R. Romanelli},
journal={arXiv preprint arXiv:2409.20369},
year={2024},
archivePrefix={arXiv},
eprint={2409.20369},
primaryClass={math.NA cs.NA}
} | salgado2024numerical |
arxiv-663643 | 2409.20370 | The Perfect Blend: Redefining RLHF with Mixture of Judges | <|reference_start|>The Perfect Blend: Redefining RLHF with Mixture of Judges: Reinforcement learning from human feedback (RLHF) has become the leading approach for fine-tuning large language models (LLM). However, RLHF has limitations in multi-task learning (MTL) due to challenges of reward hacking and extreme multi-objective optimization (i.e., trade-off of multiple and/or sometimes conflicting objectives). Applying RLHF for MTL currently requires careful tuning of the weights for reward model and data combinations. This is often done via human intuition and does not generalize. In this work, we introduce a novel post-training paradigm which we called Constrained Generative Policy Optimization (CGPO). The core of CGPO is Mixture of Judges (MoJ) with cost-efficient constrained policy optimization with stratification, which can identify the perfect blend in RLHF in a principled manner. It shows strong empirical results with theoretical guarantees, does not require extensive hyper-parameter tuning, and is plug-and-play in common post-training pipelines. Together, this can detect and mitigate reward hacking behaviors while reaching a pareto-optimal point across an extremely large number of objectives. Our empirical evaluations demonstrate that CGPO significantly outperforms standard RLHF algorithms like PPO and DPO across various tasks including general chat, STEM questions, instruction following, and coding. Specifically, CGPO shows improvements of 7.4% in AlpacaEval-2 (general chat), 12.5% in Arena-Hard (STEM & reasoning), and consistent gains in other domains like math and coding. Notably, PPO, while commonly used, is prone to severe reward hacking in popular coding benchmarks, which CGPO successfully addresses. This breakthrough in RLHF not only tackles reward hacking and extreme multi-objective optimization challenges but also advances the state-of-the-art in aligning general-purpose LLMs for diverse applications.<|reference_end|> | arxiv | @article{xu2024the,
title={The Perfect Blend: Redefining RLHF with Mixture of Judges},
author={Tengyu Xu, Eryk Helenowski, Karthik Abinav Sankararaman, Di Jin,
Kaiyan Peng, Eric Han, Shaoliang Nie, Chen Zhu, Hejia Zhang, Wenxuan Zhou,
Zhouhao Zeng, Yun He, Karishma Mandyam, Arya Talabzadeh, Madian Khabsa,
Gabriel Cohen, Yuandong Tian, Hao Ma, Sinong Wang, Han Fang},
journal={arXiv preprint arXiv:2409.20370},
year={2024},
archivePrefix={arXiv},
eprint={2409.20370},
primaryClass={cs.LG cs.AI cs.CL}
} | xu2024the |
arxiv-663644 | 2409.20371 | Frequency Adaptive Normalization For Non-stationary Time Series Forecasting | <|reference_start|>Frequency Adaptive Normalization For Non-stationary Time Series Forecasting: Time series forecasting typically needs to address non-stationary data with evolving trend and seasonal patterns. To address the non-stationarity, reversible instance normalization has been recently proposed to alleviate impacts from the trend with certain statistical measures, e.g., mean and variance. Although they demonstrate improved predictive accuracy, they are limited to expressing basic trends and are incapable of handling seasonal patterns. To address this limitation, this paper proposes a new instance normalization solution, called frequency adaptive normalization (FAN), which extends instance normalization in handling both dynamic trend and seasonal patterns. Specifically, we employ the Fourier transform to identify instance-wise predominant frequent components that cover most non-stationary factors. Furthermore, the discrepancy of those frequency components between inputs and outputs is explicitly modeled as a prediction task with a simple MLP model. FAN is a model-agnostic method that can be applied to arbitrary predictive backbones. We instantiate FAN on four widely used forecasting models as the backbone and evaluate their prediction performance improvements on eight benchmark datasets. FAN demonstrates significant performance advancement, achieving 7.76% ~ 37.90% average improvements in MSE.<|reference_end|> | arxiv | @article{ye2024frequency,
title={Frequency Adaptive Normalization For Non-stationary Time Series
Forecasting},
author={Weiwei Ye, Songgaojun Deng, Qiaosha Zou, Ning Gui},
journal={arXiv preprint arXiv:2409.20371},
year={2024},
archivePrefix={arXiv},
eprint={2409.20371},
primaryClass={cs.LG cs.AI}
} | ye2024frequency |
arxiv-663645 | 2409.20374 | Word-wise intonation model for cross-language TTS systems | <|reference_start|>Word-wise intonation model for cross-language TTS systems: In this paper we propose a word-wise intonation model for Russian language and show how it can be generalized for other languages. The proposed model is suitable for automatic data markup and its extended application to text-to-speech systems. It can also be implemented for an intonation contour modeling by using rule-based algorithms or by predicting contours with language models. The key idea is a partial elimination of the variability connected with different placements of a stressed syllable in a word. It is achieved with simultaneous applying of pitch simplification with a dynamic time warping clustering. The proposed model could be used as a tool for intonation research or as a backbone for prosody description in text-to-speech systems. As the advantage of the model, we show its relations with the existing intonation systems as well as the possibility of using language models for prosody prediction. Finally, we demonstrate some practical evidence of the system robustness to parameter variations.<|reference_end|> | arxiv | @article{a.2024word-wise,
title={Word-wise intonation model for cross-language TTS systems},
author={Tomilov A.A., Gromova A.Y., and Svischev A.N.},
journal={arXiv preprint arXiv:2409.20374},
year={2024},
archivePrefix={arXiv},
eprint={2409.20374},
primaryClass={cs.CL cs.SD eess.AS}
} | a.2024word-wise |
arxiv-663646 | 2409.20375 | A simple controller design to achieve iso-damping robustness: Non-iterative data-driven approach based on fractional-order reference model | <|reference_start|>A simple controller design to achieve iso-damping robustness: Non-iterative data-driven approach based on fractional-order reference model: This study proposes a simple controller design approach to achieve a class of robustness, the so-called iso-damping property. The proposed approach can be executed using only one-shot input/output data. An accurate mathematical model of a controlled plant is not required. The model-reference control problem is defined to achieve the desired closed-loop specifications, including the iso-damping, and the reference model is designed on the basis of fractional-order calculus. The optimization problem for the model-reference control is formulated using the one-shot input/output data while considering the bounded-input bounded-output (BIBO) stability from a bounded reference input to a bounded output. The iso-damping robust controller is obtained by solving the optimization problem. The representative advantages of the proposed approach over the conventional methods are the simplicity, practicality, and reliability from the viewpoint of the unnecessity of the plant model and explicit consideration of the BIBO stability from a bounded reference input to a bounded output. Numerical examples demonstrate the validity of the proposed approach.<|reference_end|> | arxiv | @article{yonezawa2024simple,
title={Simple controller design to achieve iso-damping robustness:
Non-iterative data-driven approach based on fractional-order reference model},
author={Ansei Yonezawa, Heisei Yonezawa, Shuichi Yahagi, Itsuro Kajiwara},
journal={arXiv preprint arXiv:2409.20375},
year={2024},
archivePrefix={arXiv},
eprint={2409.20375},
primaryClass={eess.SY cs.SY}
} | yonezawa2024simple |
arxiv-663647 | 2409.20380 | Heterogeneous computing in a strongly-connected CPU-GPU environment: fast multiple time-evolution equation-based modeling accelerated using data-driven approach | <|reference_start|>Heterogeneous computing in a strongly-connected CPU-GPU environment: fast multiple time-evolution equation-based modeling accelerated using data-driven approach: We propose a CPU-GPU heterogeneous computing method for solving time-evolution partial differential equation problems many times with guaranteed accuracy, in short time-to-solution and low energy-to-solution. On a single-GH200 node, the proposed method improved the computation speed by 86.4 and 8.67 times compared to the conventional method run only on CPU and only on GPU, respectively. Furthermore, the energy-to-solution was reduced by 32.2-fold (from 9944 J to 309 J) and 7.01-fold (from 2163 J to 309 J) when compared to using only the CPU and GPU, respectively. Using the proposed method on the Alps supercomputer, a 51.6-fold and 6.98-fold speedup was attained when compared to using only the CPU and GPU, respectively, and a high weak scaling efficiency of 94.3% was obtained up to 1,920 compute nodes. These implementations were realized using directive-based parallel programming models while enabling portability, indicating that directives are highly effective in analyses in heterogeneous computing environments.<|reference_end|> | arxiv | @article{ichimura2024heterogeneous,
title={Heterogeneous computing in a strongly-connected CPU-GPU environment:
fast multiple time-evolution equation-based modeling accelerated using
data-driven approach},
author={Tsuyoshi Ichimura, Kohei Fujita, Muneo Hori, Lalith Maddegedara, Jack
Wells, Alan Gray, Ian Karlin, John Linford},
journal={arXiv preprint arXiv:2409.20380},
year={2024},
archivePrefix={arXiv},
eprint={2409.20380},
primaryClass={cs.CE}
} | ichimura2024heterogeneous |
arxiv-663648 | 2409.20383 | Beyond Derivative Pathology of PINNs: Variable Splitting Strategy with Convergence Analysis | <|reference_start|>Beyond Derivative Pathology of PINNs: Variable Splitting Strategy with Convergence Analysis: Physics-informed neural networks (PINNs) have recently emerged as effective methods for solving partial differential equations (PDEs) in various problems. Substantial research focuses on the failure modes of PINNs due to their frequent inaccuracies in predictions. However, most are based on the premise that minimizing the loss function to zero causes the network to converge to a solution of the governing PDE. In this study, we prove that PINNs encounter a fundamental issue that the premise is invalid. We also reveal that this issue stems from the inability to regulate the behavior of the derivatives of the predicted solution. Inspired by the \textit{derivative pathology} of PINNs, we propose a \textit{variable splitting} strategy that addresses this issue by parameterizing the gradient of the solution as an auxiliary variable. We demonstrate that using the auxiliary variable eludes derivative pathology by enabling direct monitoring and regulation of the gradient of the predicted solution. Moreover, we prove that the proposed method guarantees convergence to a generalized solution for second-order linear PDEs, indicating its applicability to various problems.<|reference_end|> | arxiv | @article{park2024beyond,
title={Beyond Derivative Pathology of PINNs: Variable Splitting Strategy with
Convergence Analysis},
author={Yesom Park, Changhoon Song, Myungjoo Kang},
journal={arXiv preprint arXiv:2409.20383},
year={2024},
archivePrefix={arXiv},
eprint={2409.20383},
primaryClass={cs.LG cs.NA math.NA}
} | park2024beyond |
arxiv-663649 | 2409.20384 | FireLite: Leveraging Transfer Learning for Efficient Fire Detection in Resource-Constrained Environments | <|reference_start|>FireLite: Leveraging Transfer Learning for Efficient Fire Detection in Resource-Constrained Environments: Fire hazards are extremely dangerous, particularly in sectors such as the transportation industry, where political unrest increases the likelihood of their occurrence. By employing IP cameras to facilitate the setup of fire detection systems on transport vehicles, losses from fire events may be prevented proactively. However, the development of lightweight fire detection models is required due to the computational constraints of the embedded systems within these cameras. We introduce FireLite, a low-parameter convolutional neural network (CNN) designed for quick fire detection in contexts with limited resources, in response to this difficulty. With an accuracy of 98.77%, our model -- which has just 34,978 trainable parameters -- achieves remarkable performance numbers. It also shows a validation loss of 8.74 and peaks at 98.77 for precision, recall, and F1-score measures. Because of its precision and efficiency, FireLite is a promising solution for fire detection in resource-constrained environments.<|reference_end|> | arxiv | @article{hasan2024firelite:,
title={FireLite: Leveraging Transfer Learning for Efficient Fire Detection in
Resource-Constrained Environments},
author={Mahamudul Hasan, Md Maruf Al Hossain Prince, Mohammad Samar Ansari,
Sabrina Jahan, Abu Saleh Musa Miah, Jungpil Shin},
journal={arXiv preprint arXiv:2409.20384},
year={2024},
archivePrefix={arXiv},
eprint={2409.20384},
primaryClass={cs.CV}
} | hasan2024firelite: |
arxiv-663650 | 2409.20385 | Wait, but Tylenol is Acetaminophen... Investigating and Improving Language Models' Ability to Resist Requests for Misinformation | <|reference_start|>Wait, but Tylenol is Acetaminophen... Investigating and Improving Language Models' Ability to Resist Requests for Misinformation: Background: Large language models (LLMs) are trained to follow directions, but this introduces a vulnerability to blindly comply with user requests even if they generate wrong information. In medicine, this could accelerate the generation of misinformation that impacts human well-being. Objectives/Methods: We analyzed compliance to requests to generate misleading content about medications in settings where models know the request is illogical. We investigated whether in-context directions and instruction-tuning of LLMs to prioritize logical reasoning over compliance reduced misinformation risk. Results: While all frontier LLMs complied with misinformation requests, both prompt-based and parameter-based approaches can improve the detection of logic flaws in requests and prevent the dissemination of medical misinformation. Conclusion: Shifting LLMs to prioritize logic over compliance could reduce risks of exploitation for medical misinformation.<|reference_end|> | arxiv | @article{chen2024wait,
title={Wait, but Tylenol is Acetaminophen... Investigating and Improving
Language Models' Ability to Resist Requests for Misinformation},
author={Shan Chen, Mingye Gao, Kuleen Sasse, Thomas Hartvigsen, Brian Anthony,
Lizhou Fan, Hugo Aerts, Jack Gallifant, Danielle Bitterman},
journal={arXiv preprint arXiv:2409.20385},
year={2024},
archivePrefix={arXiv},
eprint={2409.20385},
primaryClass={cs.CL}
} | chen2024wait
arxiv-663651 | 2409.20387 | Automation from the Worker's Perspective | <|reference_start|>Automation from the Worker's Perspective: Common narratives about automation often pit new technologies against workers. The introduction of advanced machine tools, industrial robots, and AI have all been met with concern that technological progress will mean fewer jobs. However, workers themselves offer a more optimistic, nuanced perspective. Drawing on a far-reaching 2024 survey of more than 9,000 workers across nine countries, this paper finds that more workers report potential benefits from new technologies like robots and AI for their safety and comfort at work, their pay, and their autonomy on the job than report potential costs. Workers with jobs that ask them to solve complex problems, workers who feel valued by their employers, and workers who are motivated to move up in their careers are all more likely to see new technologies as beneficial. In contrast to assumptions in previous research, more formal education is in some cases associated with more negative attitudes toward automation and its impact on work. In an experimental setting, the prospect of financial incentives for workers improve their perceptions of automation technologies, whereas the prospect of increased input about how new technologies are used does not have a significant effect on workers' attitudes toward automation.<|reference_end|> | arxiv | @article{armstrong2024automation,
title={Automation from the Worker's Perspective},
author={Ben Armstrong, Valerie K. Chen, Alex Cuellar, Alexandra Forsey-Smerek,
and Julie A. Shah},
journal={arXiv preprint arXiv:2409.20387},
year={2024},
archivePrefix={arXiv},
eprint={2409.20387},
primaryClass={cs.HC cs.RO}
} | armstrong2024automation |
arxiv-663652 | 2409.20388 | SAMIPS: A Synthesised Asynchronous Processor | <|reference_start|>SAMIPS: A Synthesised Asynchronous Processor: Miniaturisation and ever increasing clock speeds pose significant challenges to synchronous VLSI design with clock distribution becoming an increasingly costly and complicated issue and power consumption rapidly emerging as a major concern. Asynchronous logic promises to alleviate these challenges; however, its development and adoption have been hindered by the lack of mature design tools. Balsa is a response to this gap, encompassing a CSP-based asynchronous hardware description language and a framework for automatically synthesising asynchronous circuits. This paper discusses SAMIPS, an asynchronous implementation of the MIPS microprocessor and the first full scale asynchronous microprocessor to be synthesised in Balsa. The objectives of the paper are twofold: first to provide a holistic description of SAMIPS and its components, the approach that has been followed for the asynchronisation of MIPS and the innovative solutions that have been developed to address hazard challenges and a quantitative performance analysis of the system; secondly, to provide insights about the effectiveness of Balsa as a hardware description language and synthesis system.<|reference_end|> | arxiv | @article{zhang2024samips:,
title={SAMIPS: A Synthesised Asynchronous Processor},
author={Qianyi Zhang and Georgios Theodoropoulos},
journal={arXiv preprint arXiv:2409.20388},
year={2024},
archivePrefix={arXiv},
eprint={2409.20388},
primaryClass={cs.AR}
} | zhang2024samips: |
arxiv-663653 | 2409.20390 | Anti-stereotypical Predictive Text Suggestions Do Not Reliably Yield Anti-stereotypical Writing | <|reference_start|>Anti-stereotypical Predictive Text Suggestions Do Not Reliably Yield Anti-stereotypical Writing: AI-based systems such as language models can replicate and amplify social biases reflected in their training data. Among other questionable behavior, this can lead to LM-generated text--and text suggestions--that contain normatively inappropriate stereotypical associations. In this paper, we consider the question of how "debiasing" a language model impacts stories that people write using that language model in a predictive text scenario. We find that (n=414), in certain scenarios, language model suggestions that align with common social stereotypes are more likely to be accepted by human authors. Conversely, although anti-stereotypical language model suggestions sometimes lead to an increased rate of anti-stereotypical stories, this influence is far from sufficient to lead to "fully debiased" stories.<|reference_end|> | arxiv | @article{baumler2024anti-stereotypical,
title={Anti-stereotypical Predictive Text Suggestions Do Not Reliably Yield
Anti-stereotypical Writing},
author={Connor Baumler, Hal Daum'e III},
journal={arXiv preprint arXiv:2409.20390},
year={2024},
archivePrefix={arXiv},
eprint={2409.20390},
primaryClass={cs.CL}
} | baumler2024anti-stereotypical |
arxiv-663654 | 2409.20391 | Machine Learning-enabled Traffic Steering in O-RAN: A Case Study on Hierarchical Learning Approach | <|reference_start|>Machine Learning-enabled Traffic Steering in O-RAN: A Case Study on Hierarchical Learning Approach: Traffic Steering is a crucial technology for wireless networks, and multiple efforts have been put into developing efficient Machine Learning (ML)-enabled traffic steering schemes for Open Radio Access Networks (O-RAN). Given the swift emergence of novel ML techniques, conducting a timely survey that comprehensively examines the ML-based traffic steering schemes in O-RAN is critical. In this article, we provide such a survey along with a case study of hierarchical learning-enabled traffic steering in O-RAN. In particular, we first introduce the background of traffic steering in O-RAN and overview relevant state-of-the-art ML techniques and their applications. Then, we analyze the compatibility of the hierarchical learning framework in O-RAN and further propose a Hierarchical Deep-Q-Learning (h-DQN) framework for traffic steering. Compared to existing works, which focus on single-layer architecture with standalone agents, h-DQN decomposes the traffic steering problem into a bi-level architecture with hierarchical intelligence. The meta-controller makes long-term and high-level policies, while the controller executes instant traffic steering actions under high-level policies. Finally, the case study shows that the hierarchical learning approach can provide significant performance improvements over the baseline algorithms.<|reference_end|> | arxiv | @article{habib2024machine,
title={Machine Learning-enabled Traffic Steering in O-RAN: A Case Study on
Hierarchical Learning Approach},
author={Md Arafat Habib, Hao Zhou, Pedro Enrique Iturria-Rivera, Yigit Ozcan,
Medhat Elsayed, Majid Bavand, Raimundas Gaigalas, Melike Erol-Kantarci},
journal={arXiv preprint arXiv:2409.20391},
year={2024},
archivePrefix={arXiv},
eprint={2409.20391},
primaryClass={cs.NI}
} | habib2024machine |
arxiv-663655 | 2409.20396 | Facility Location Games with Competitors | <|reference_start|>Facility Location Games with Competitors: In this paper, we consider facility location games with competitors where the agents are divided into groups and the agents in the same group have competitive relationships, i.e., the cost of an agent will increase if the facility is closer to their competitors. We consider three types of misreporting: misreporting the location only, misreporting the group membership only, and misreporting both. To minimize the social cost, we propose a strategyproof mechanism that is optimal when misreporting the location only. For the other two types of manipulation, we reuse the median mechanism and achieve tight bounds of 2. To minimize the maximum cost, we design new strategyproof mechanisms for the first two types of misreporting. We reuse the leftmost mechanism for misreporting both. All bounds are almost tight.<|reference_end|> | arxiv | @article{peng2024facility,
title={Facility Location Games with Competitors},
author={Cheng Peng and Houyu Zhou},
journal={arXiv preprint arXiv:2409.20396},
year={2024},
archivePrefix={arXiv},
eprint={2409.20396},
primaryClass={cs.GT}
} | peng2024facility |
arxiv-663656 | 2409.20398 | AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation | <|reference_start|>AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation: The Area Under the ROC Curve (AUC) is a well-known metric for evaluating instance-level long-tail learning problems. In the past two decades, many AUC optimization methods have been proposed to improve model performance under long-tail distributions. In this paper, we explore AUC optimization methods in the context of pixel-level long-tail semantic segmentation, a much more complicated scenario. This task introduces two major challenges for AUC optimization techniques. On one hand, AUC optimization in a pixel-level task involves complex coupling across loss terms, with structured inner-image and pairwise inter-image dependencies, complicating theoretical analysis. On the other hand, we find that mini-batch estimation of AUC loss in this case requires a larger batch size, resulting in an unaffordable space complexity. To address these issues, we develop a pixel-level AUC loss function and conduct a dependency-graph-based theoretical analysis of the algorithm's generalization ability. Additionally, we design a Tail-Classes Memory Bank (T-Memory Bank) to manage the significant memory demand. Finally, comprehensive experiments across various benchmarks confirm the effectiveness of our proposed AUCSeg method. The code is available at https://github.com/boyuh/AUCSeg.<|reference_end|> | arxiv | @article{han2024aucseg:,
title={AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation},
author={Boyu Han, Qianqian Xu, Zhiyong Yang, Shilong Bao, Peisong Wen,
Yangbangyan Jiang, Qingming Huang},
journal={arXiv preprint arXiv:2409.20398},
year={2024},
archivePrefix={arXiv},
eprint={2409.20398},
primaryClass={cs.CV cs.AI cs.LG}
} | han2024aucseg: |
arxiv-663657 | 2409.20399 | Multi-Robot Target Monitoring and Encirclement via Triggered Distributed Feedback Optimization | <|reference_start|>Multi-Robot Target Monitoring and Encirclement via Triggered Distributed Feedback Optimization: We design a distributed feedback optimization strategy, embedded into a modular ROS 2 control architecture, which allows a team of heterogeneous robots to cooperatively monitor and encircle a target while patrolling points of interest. Relying on the aggregative feedback optimization framework, we handle multi-robot dynamics while minimizing a global performance index depending on both microscopic (e.g., the location of single robots) and macroscopic variables (e.g., the spatial distribution of the team). The proposed distributed policy allows the robots to cooperatively address the global problem by employing only local measurements and neighboring data exchanges. These exchanges are performed through an asynchronous communication protocol ruled by locally-verifiable triggering conditions. We formally prove that our strategy steers the robots to a set of configurations representing stationary points of the considered optimization problem. The effectiveness and scalability of the overall strategy are tested via Monte Carlo campaigns of realistic Webots ROS 2 virtual experiments. Finally, the applicability of our solution is shown with real experiments on ground and aerial robots.<|reference_end|> | arxiv | @article{pichierri2024multi-robot,
title={Multi-Robot Target Monitoring and Encirclement via Triggered Distributed
Feedback Optimization},
author={Lorenzo Pichierri, Guido Carnevale, Lorenzo Sforni, Giuseppe
Notarstefano},
journal={arXiv preprint arXiv:2409.20399},
year={2024},
archivePrefix={arXiv},
eprint={2409.20399},
primaryClass={cs.RO}
} | pichierri2024multi-robot |
arxiv-663658 | 2409.20403 | Accelerating PoT Quantization on Edge Devices | <|reference_start|>Accelerating PoT Quantization on Edge Devices: Non-uniform quantization, such as power-of-two (PoT) quantization, matches data distributions better than uniform quantization, which reduces the quantization error of Deep Neural Networks (DNNs). PoT quantization also allows bit-shift operations to replace multiplications, but there are limited studies on the efficiency of shift-based accelerators for PoT quantization. Furthermore, existing pipelines for accelerating PoT-quantized DNNs on edge devices are not open-source. In this paper, we first design shift-based processing elements (shift-PE) for different PoT quantization methods and evaluate their efficiency using synthetic benchmarks. Then we design a shift-based accelerator using our most efficient shift-PE and propose PoTAcc, an open-source pipeline for end-to-end acceleration of PoT-quantized DNNs on resource-constrained edge devices. Using PoTAcc, we evaluate the performance of our shift-based accelerator across three DNNs. On average, it achieves a 1.23x speedup and 1.24x energy reduction compared to a multiplier-based accelerator, and a 2.46x speedup and 1.83x energy reduction compared to CPU-only execution. Our code is available at https://github.com/gicLAB/PoTAcc<|reference_end|> | arxiv | @article{saha2024accelerating,
title={Accelerating PoT Quantization on Edge Devices},
author={Rappy Saha, Jude Haris, Jos'e Cano},
journal={arXiv preprint arXiv:2409.20403},
year={2024},
archivePrefix={arXiv},
eprint={2409.20403},
primaryClass={cs.AR cs.LG}
} | saha2024accelerating |
arxiv-663659 | 2409.20407 | Open-Source Periorbital Segmentation Dataset for Ophthalmic Applications | <|reference_start|>Open-Source Periorbital Segmentation Dataset for Ophthalmic Applications: Periorbital segmentation and distance prediction using deep learning allows for the objective quantification of disease state, treatment monitoring, and remote medicine. However, there are currently no reports of segmentation datasets for the purposes of training deep learning models with sub mm accuracy on the regions around the eyes. All images (n=2842) had the iris, sclera, lid, caruncle, and brow segmented by five trained annotators. Here, we validate this dataset through intra and intergrader reliability tests and show the utility of the data in training periorbital segmentation networks. All the annotations are publicly available for free download. Having access to segmentation datasets designed specifically for oculoplastic surgery will permit more rapid development of clinically useful segmentation networks which can be leveraged for periorbital distance prediction and disease classification. In addition to the annotations, we also provide an open-source toolkit for periorbital distance prediction from segmentation masks. The weights of all models have also been open-sourced and are publicly available for use by the community.<|reference_end|> | arxiv | @article{nahass2024open-source,
title={Open-Source Periorbital Segmentation Dataset for Ophthalmic Applications},
author={George R. Nahass, Emma Koehler, Nicholas Tomaras, Danny Lopez, Madison
Cheung, Alexander Palacios, Jefferey Peterson, Sasha Hubschman, Kelsey Green,
Chad A. Purnell, Pete Setabutr, Ann Q. Tran, Darvin Yi},
journal={arXiv preprint arXiv:2409.20407},
year={2024},
archivePrefix={arXiv},
eprint={2409.20407},
primaryClass={cs.CV q-bio.TO}
} | nahass2024open-source |
arxiv-663660 | 2409.20408 | Beacon based uplink transmission for lorawan direct to satellite internet of things | <|reference_start|>Beacon based uplink transmission for lorawan direct to satellite internet of things: Direct-to-satellite IoT (DtS-IoT) communication structure is a promising solution to provide connectivity and extend the coverage of traditional low-power and long-range technologies, especially for isolated and remote areas where deploying traditional infrastructure is impracticable. Despite their bounded visibility, the Low Earth Orbit (LEO) satellites complement the terrestrial networks, offering broader gateway coverage and terrestrial network traffic offloading. However, the dynamics of LEO and the nature of such integration come with several challenges affecting the efficacy of the network. Therefore, this paper proposes Beacon-based Uplink LoRaWAN (BU-LoRaWAN) to enhance satellite-terrestrial communication efficiency. The proposed scheme exploits the LoRaWAN class B synchronization mechanism to provide efficient uplink transmission from LoRaWAN devices placed on the ground to satellite gateways. BU-LoRaWAN proposes an uplink transmission slot approach to synchronize ground devices' uplink traffic with LEO-based orbiting gateways. It also uses a queue data structure to buffer end devices ready to send packets until the appropriate moment. BU-LoRaWAN avoids possible transmission collision by optimizing a random transmission slot for an end device within the beacon window. The proposed system is implemented and evaluated using the OMNeT network simulator and the FLoRaSat framework. The result demonstrates the feasibility of the proposed system. BU-LoRaWAN achieves better performance compared to the standard LoRaWAN, managing to deliver almost double the traffic delivered by the standard one.<|reference_end|> | arxiv | @article{mojamed2024beacon,
title={Beacon based uplink transmission for lorawan direct to satellite
internet of things},
author={Mohammad Al Mojamed},
journal={arXiv preprint arXiv:2409.20408},
year={2024},
doi={10.5121/ijcnc.2024.16503},
archivePrefix={arXiv},
eprint={2409.20408},
primaryClass={cs.NI eess.SP}
} | mojamed2024beacon |
arxiv-663661 | 2409.20409 | Physics-Regularized Multi-Modal Image Assimilation for Brain Tumor Localization | <|reference_start|>Physics-Regularized Multi-Modal Image Assimilation for Brain Tumor Localization: Physical models in the form of partial differential equations represent an important prior for many under-constrained problems. One example is tumor treatment planning, which heavily depends on accurate estimates of the spatial distribution of tumor cells in a patient's anatomy. Medical imaging scans can identify the bulk of the tumor, but they cannot reveal its full spatial distribution. Tumor cells at low concentrations remain undetectable, for example, in the most frequent type of primary brain tumors, glioblastoma. Deep-learning-based approaches fail to estimate the complete tumor cell distribution due to a lack of reliable training data. Most existing works therefore rely on physics-based simulations to match observed tumors, providing anatomically and physiologically plausible estimations. However, these approaches struggle with complex and unknown initial conditions and are limited by overly rigid physical models. In this work, we present a novel method that balances data-driven and physics-based cost functions. In particular, we propose a unique discretization scheme that quantifies the adherence of our learned spatiotemporal tumor and brain tissue distributions to their corresponding growth and elasticity equations. This quantification, serving as a regularization term rather than a hard constraint, enables greater flexibility and proficiency in assimilating patient data than existing models. We demonstrate improved coverage of tumor recurrence areas compared to existing techniques on real-world data from a cohort of patients. The method holds the potential to enhance clinical adoption of model-driven treatment planning for glioblastoma.<|reference_end|> | arxiv | @article{balcerak2024physics-regularized,
title={Physics-Regularized Multi-Modal Image Assimilation for Brain Tumor
Localization},
author={Michal Balcerak, Tamaz Amiranashvili, Andreas Wagner, Jonas Weidner,
Petr Karnakov, Johannes C. Paetzold, Ivan Ezhov, Petros Koumoutsakos,
Benedikt Wiestler, Bjoern Menze},
journal={arXiv preprint arXiv:2409.20409},
year={2024},
archivePrefix={arXiv},
eprint={2409.20409},
primaryClass={cs.CV physics.med-ph}
} | balcerak2024physics-regularized |
arxiv-663662 | 2409.20410 | Does Positive Reinforcement Work?: A Quasi-Experimental Study of the Effects of Positive Feedback on Reddit | <|reference_start|>Does Positive Reinforcement Work?: A Quasi-Experimental Study of the Effects of Positive Feedback on Reddit: Social media platform design often incorporates explicit signals of positive feedback. Some moderators provide positive feedback with the goal of positive reinforcement, but are often unsure of their ability to actually influence user behavior. Despite its widespread use and theory touting positive feedback as crucial for user motivation, its effect on recipients is relatively unknown. This paper examines how positive feedback impacts Reddit users and evaluates its differential effects to understand who benefits most from receiving positive feedback. Through a causal inference study of 11M posts across 4 months, we find that users who received positive feedback made more frequent (2% per day) and higher quality (57% higher score; 2% fewer removals per day) posts compared to a set of matched control users. Our findings highlight the need for platforms and communities to expand their perspective on moderation and complement punitive approaches with positive reinforcement strategies.<|reference_end|> | arxiv | @article{lambert2024does,
title={Does Positive Reinforcement Work?: A Quasi-Experimental Study of the
Effects of Positive Feedback on Reddit},
author={Charlotte Lambert, Koustuv Saha, Eshwar Chandrasekharan},
journal={arXiv preprint arXiv:2409.20410},
year={2024},
archivePrefix={arXiv},
eprint={2409.20410},
primaryClass={cs.HC}
} | lambert2024does |
arxiv-663663 | 2409.20412 | Conformal Prediction for Dose-Response Models with Continuous Treatments | <|reference_start|>Conformal Prediction for Dose-Response Models with Continuous Treatments: Understanding the dose-response relation between a continuous treatment and the outcome for an individual can greatly drive decision-making, particularly in areas like personalized drug dosing and personalized healthcare interventions. Point estimates are often insufficient in these high-risk environments, highlighting the need for uncertainty quantification to support informed decisions. Conformal prediction, a distribution-free and model-agnostic method for uncertainty quantification, has seen limited application in continuous treatments or dose-response models. To address this gap, we propose a novel methodology that frames the causal dose-response problem as a covariate shift, leveraging weighted conformal prediction. By incorporating propensity estimation, conformal predictive systems, and likelihood ratios, we present a practical solution for generating prediction intervals for dose-response models. Additionally, our method approximates local coverage for every treatment value by applying kernel functions as weights in weighted conformal prediction. Finally, we use a new synthetic benchmark dataset to demonstrate the significance of covariate shift assumptions in achieving robust prediction intervals for dose-response models.<|reference_end|> | arxiv | @article{verhaeghe2024conformal,
title={Conformal Prediction for Dose-Response Models with Continuous Treatments},
author={Jarne Verhaeghe, Jef Jonkers, Sofie Van Hoecke},
journal={arXiv preprint arXiv:2409.20412},
year={2024},
archivePrefix={arXiv},
eprint={2409.20412},
primaryClass={cs.LG cs.AI stat.ML}
} | verhaeghe2024conformal |
arxiv-663664 | 2409.20413 | Novel machine learning applications at the LHC | <|reference_start|>Novel machine learning applications at the LHC: Machine learning (ML) is a rapidly growing area of research in the field of particle physics, with a vast array of applications at the CERN LHC. ML has changed the way particle physicists conduct searches and measurements as a versatile tool used to improve existing approaches and enable fundamentally new ones. In these proceedings, we describe novel ML techniques and recent results for improved classification, fast simulation, unfolding, and anomaly detection in LHC experiments.<|reference_end|> | arxiv | @article{duarte2024novel,
title={Novel machine learning applications at the LHC},
author={Javier M. Duarte},
journal={arXiv preprint arXiv:2409.20413},
year={2024},
number={CMS-CR-2024-239},
archivePrefix={arXiv},
eprint={2409.20413},
primaryClass={hep-ex cs.LG}
} | duarte2024novel |
arxiv-663665 | 2409.20414 | KANDU-Net:A Dual-Channel U-Net with KAN for Medical Image Segmentation | <|reference_start|>KANDU-Net:A Dual-Channel U-Net with KAN for Medical Image Segmentation: The U-Net model has consistently demonstrated strong performance in the field of medical image segmentation, with various improvements and enhancements made since its introduction. This paper presents a novel architecture that integrates KAN networks with U-Net, leveraging the powerful nonlinear representation capabilities of KAN networks alongside the established strengths of U-Net. We introduce a KAN-convolution dual-channel structure that enables the model to more effectively capture both local and global features. We explore effective methods for fusing features extracted by KAN with those obtained through convolutional layers, utilizing an auxiliary network to facilitate this integration process. Experiments conducted across multiple datasets show that our model performs well in terms of accuracy, indicating that the KAN-convolution dual-channel approach has significant potential in medical image segmentation tasks.<|reference_end|> | arxiv | @article{fang2024kandu-net:a,
title={KANDU-Net:A Dual-Channel U-Net with KAN for Medical Image Segmentation},
author={Chenglin Fang and Kaigui Wu},
journal={arXiv preprint arXiv:2409.20414},
year={2024},
archivePrefix={arXiv},
eprint={2409.20414},
primaryClass={eess.IV cs.CV}
} | fang2024kandu-net:a |
arxiv-663666 | 2409.20419 | AI-Based Fully Automatic Analysis of Retinal Vascular Morphology in Pediatric High Myopia | <|reference_start|>AI-Based Fully Automatic Analysis of Retinal Vascular Morphology in Pediatric High Myopia: Purpose: To investigate the changes in retinal vascular structures associated with various stages of myopia by designing automated software based on an artificial intelligence model. Methods: The study involved 1324 pediatric participants from the National Children's Medical Center in China, and 2366 high-quality retinal images and corresponding refractive parameters were obtained and analyzed. Spherical equivalent refraction (SER) degree was calculated. We proposed a data analysis model based on a combination of the Convolutional Neural Networks (CNN) model and the attention module to classify images, segment vascular structures, and measure vascular parameters, such as main angle (MA), branching angle (BA), bifurcation edge angle (BEA) and bifurcation edge coefficient (BEC). One-way ANOVA compared parameter measurements between the normal fundus, low myopia, moderate myopia, and high myopia groups. Results: There were 279 (12.38%) images in the normal group and 384 (16.23%) images in the high myopia group. Compared with the normal fundus, the MA of fundus vessels in the different myopic refractive groups was significantly reduced (P = 0.006, P = 0.004, P = 0.019, respectively), and the effect in the venous system was particularly obvious (P<0.001). At the same time, the BEC decreased disproportionately (P<0.001). Further analysis of fundus vascular parameters at different degrees of myopia showed that there were also significant differences in BA and branching coefficient (BC). The arterial BA value of the fundus vessels in the high myopia group was lower than that of the other groups (P = 0.032, 95% confidence interval [CI], 0.22-4.86), while the venous BA values increased (P = 0.026). The BEC values of the high myopia group were higher than those of the low and moderate myopia groups. When the loss function of our data classification model converged to 0.09, the model accuracy reached 94.19%.<|reference_end|> | arxiv | @article{zhao2024ai-based,
title={AI-Based Fully Automatic Analysis of Retinal Vascular Morphology in
Pediatric High Myopia},
author={Yinzheng Zhao, Zhihao Zhao, Junjie Yang, Li Li, M. Ali Nasseri, Daniel
Zapp},
journal={arXiv preprint arXiv:2409.20419},
year={2024},
archivePrefix={arXiv},
eprint={2409.20419},
primaryClass={cs.CV}
} | zhao2024ai-based |
arxiv-663667 | 2409.20423 | Stream-level flow matching from a Bayesian decision theoretic perspective | <|reference_start|>Stream-level flow matching from a Bayesian decision theoretic perspective: Flow matching (FM) is a family of training algorithms for fitting continuous normalizing flows (CNFs). A standard approach to FM, called conditional flow matching (CFM), exploits the fact that the marginal vector field of a CNF can be learned by fitting least-square regression to the so-called conditional vector field specified given one or both ends of the flow path. We show that viewing CFM training from a Bayesian decision theoretic perspective on parameter estimation opens the door to generalizations of CFM algorithms. We propose one such extension by introducing a CFM algorithm based on defining conditional probability paths given what we refer to as ``streams'', instances of latent stochastic paths that connect pairs of noise and observed data. Further, we advocate the modeling of these latent streams using Gaussian processes (GPs). The unique distributional properties of GPs, and in particular the fact that the velocity of a GP is still a GP, allow drawing samples from the resulting stream-augmented conditional probability path without simulating the actual streams, and hence the ``simulation-free'' nature of CFM training is preserved. We show that this generalization of the CFM can substantially reduce the variance in the estimated marginal vector field at a moderate computational cost, thereby improving the quality of the generated samples under common metrics. Additionally, we show that adopting the GP on the streams allows for flexibly linking multiple related training data points (e.g., time series) and incorporating additional prior information. We empirically validate our claim through both simulations and applications to two hand-written image datasets.<|reference_end|> | arxiv | @article{wei2024stream-level,
title={Stream-level flow matching from a Bayesian decision theoretic
perspective},
author={Ganchao Wei, Li Ma},
journal={arXiv preprint arXiv:2409.20423},
year={2024},
archivePrefix={arXiv},
eprint={2409.20423},
primaryClass={stat.ML cs.AI cs.LG}
} | wei2024stream-level |
arxiv-663668 | 2409.20424 | World to Code: Multi-modal Data Generation via Self-Instructed Compositional Captioning and Filtering | <|reference_start|>World to Code: Multi-modal Data Generation via Self-Instructed Compositional Captioning and Filtering: Recent advances in Vision-Language Models (VLMs) and the scarcity of high-quality multi-modal alignment data have inspired numerous studies on synthetic VLM data generation. The conventional norm in VLM data construction uses a mixture of specialists in caption and OCR, or stronger VLM APIs and expensive human annotation. In this paper, we present World to Code (W2C), a meticulously curated multi-modal data construction pipeline that organizes the final generation output into a Python code format. The pipeline leverages the VLM itself to extract cross-modal information via different prompts and filter the generated outputs again via a consistency filtering strategy. Experiments have demonstrated the high quality of W2C by improving various existing visual question answering and visual grounding benchmarks across different VLMs. Further analysis also demonstrates that the new code parsing ability of VLMs presents better cross-modal equivalence than the commonly used detail caption ability. Our code is available at https://github.com/foundation-multimodal-models/World2Code.<|reference_end|> | arxiv | @article{wang2024world,
title={World to Code: Multi-modal Data Generation via Self-Instructed
Compositional Captioning and Filtering},
author={Jiacong Wang, Bohong Wu, Haiyong Jiang, Xun Zhou, Xin Xiao, Haoyuan
Guo, Jun Xiao},
journal={arXiv preprint arXiv:2409.20424},
year={2024},
archivePrefix={arXiv},
eprint={2409.20424},
primaryClass={cs.CV cs.AI}
} | wang2024world |
arxiv-663669 | 2409.20425 | Reprogrammable, in-materia matrix-vector multiplication with floppy modes | <|reference_start|>Reprogrammable, in-materia matrix-vector multiplication with floppy modes: Matrix-vector multiplications are a fundamental building block of artificial intelligence; this essential role has motivated their implementation in a variety of physical substrates, from memristor crossbar arrays to photonic integrated circuits. Yet their realization in soft-matter intelligent systems remains elusive. Here, we experimentally demonstrate a reprogrammable elastic metamaterial that computes matrix-vector multiplications using floppy modes -- deformations with near-zero stored elastic energy. Floppy modes allow us to program complex deformations without being hindered by the natural stiffness of the material; but their practical application is challenging, as their existence depends on global topological properties of the system. To overcome this challenge, we introduce a continuously parameterized unit cell design with well-defined compatibility characteristics. This unit cell is then combined to form arbitrary matrix-vector multiplications that can even be reprogrammed after fabrication. Our results demonstrate that floppy modes can act as key enablers for embodied intelligence, smart MEMS devices and in-sensor edge computing.<|reference_end|> | arxiv | @article{louvet2024reprogrammable,
title={Reprogrammable, in-materia matrix-vector multiplication with floppy
modes},
author={Theophile Louvet, Parisa Omidvar, Marc Serra-Garcia},
journal={arXiv preprint arXiv:2409.20425},
year={2024},
archivePrefix={arXiv},
eprint={2409.20425},
primaryClass={cond-mat.soft cs.ET}
} | louvet2024reprogrammable
arxiv-663670 | 2409.20426 | Navigating Threats: A Survey of Physical Adversarial Attacks on LiDAR Perception Systems in Autonomous Vehicles | <|reference_start|>Navigating Threats: A Survey of Physical Adversarial Attacks on LiDAR Perception Systems in Autonomous Vehicles: Autonomous vehicles (AVs) rely heavily on LiDAR (Light Detection and Ranging) systems for accurate perception and navigation, providing high-resolution 3D environmental data that is crucial for object detection and classification. However, LiDAR systems are vulnerable to adversarial attacks, which pose significant challenges to the safety and robustness of AVs. This survey presents a thorough review of the current research landscape on physical adversarial attacks targeting LiDAR-based perception systems, covering both single-modality and multi-modality contexts. We categorize and analyze various attack types, including spoofing and physical adversarial object attacks, detailing their methodologies, impacts, and potential real-world implications. Through detailed case studies and analyses, we identify critical challenges and highlight gaps in existing attacks for LiDAR-based systems. Additionally, we propose future research directions to enhance the security and resilience of these systems, ultimately contributing to the safer deployment of autonomous vehicles.<|reference_end|> | arxiv | @article{guesmi2024navigating,
title={Navigating Threats: A Survey of Physical Adversarial Attacks on LiDAR
Perception Systems in Autonomous Vehicles},
author={Amira Guesmi and Muhammad Shafique},
journal={arXiv preprint arXiv:2409.20426},
year={2024},
archivePrefix={arXiv},
eprint={2409.20426},
primaryClass={cs.CV}
} | guesmi2024navigating |
arxiv-663671 | 2409.20427 | Sufficient and Necessary Explanations (and What Lies in Between) | <|reference_start|>Sufficient and Necessary Explanations (and What Lies in Between): As complex machine learning models continue to find applications in high-stakes decision-making scenarios, it is crucial that we can explain and understand their predictions. Post-hoc explanation methods provide useful insights by identifying important features in an input $\mathbf{x}$ with respect to the model output $f(\mathbf{x})$. In this work, we formalize and study two precise notions of feature importance for general machine learning models: sufficiency and necessity. We demonstrate how these two types of explanations, albeit intuitive and simple, can fall short in providing a complete picture of which features a model finds important. To this end, we propose a unified notion of importance that circumvents these limitations by exploring a continuum along a necessity-sufficiency axis. Our unified notion, we show, has strong ties to other popular definitions of feature importance, like those based on conditional independence and game-theoretic quantities like Shapley values. Crucially, we demonstrate how a unified perspective allows us to detect important features that could be missed by either of the previous approaches alone.<|reference_end|> | arxiv | @article{bharti2024sufficient,
title={Sufficient and Necessary Explanations (and What Lies in Between)},
author={Beepul Bharti, Paul Yi, Jeremias Sulam},
journal={arXiv preprint arXiv:2409.20427},
year={2024},
archivePrefix={arXiv},
eprint={2409.20427},
primaryClass={stat.ML cs.AI cs.LG}
} | bharti2024sufficient |
arxiv-663672 | 2409.20428 | Decoding the Echoes of Vision from fMRI: Memory Disentangling for Past Semantic Information | <|reference_start|>Decoding the Echoes of Vision from fMRI: Memory Disentangling for Past Semantic Information: The human visual system is capable of processing continuous streams of visual information, but how the brain encodes and retrieves recent visual memories during continuous visual processing remains unexplored. This study investigates the capacity of working memory to retain past information under continuous visual stimuli. We then propose a new task, Memory Disentangling, which aims to extract and decode past information from fMRI signals. To address the issue of interference from past memory information, we design a disentangled contrastive learning method inspired by the phenomenon of proactive interference. This method separates the information between adjacent fMRI signals into current and past components and decodes them into image descriptions. Experimental results demonstrate that this method effectively disentangles the information within fMRI signals. This research could advance brain-computer interfaces and mitigate the problem of low temporal resolution in fMRI.<|reference_end|> | arxiv | @article{xia2024decoding,
title={Decoding the Echoes of Vision from fMRI: Memory Disentangling for Past
Semantic Information},
author={Runze Xia, Congchi Yin, Piji Li},
journal={arXiv preprint arXiv:2409.20428},
year={2024},
archivePrefix={arXiv},
eprint={2409.20428},
primaryClass={cs.CL}
} | xia2024decoding |
arxiv-663673 | 2409.20429 | HELPD: Mitigating Hallucination of LVLMs by Hierarchical Feedback Learning with Vision-enhanced Penalty Decoding | <|reference_start|>HELPD: Mitigating Hallucination of LVLMs by Hierarchical Feedback Learning with Vision-enhanced Penalty Decoding: Large Vision-Language Models (LVLMs) have shown remarkable performance on many visual-language tasks. However, these models still suffer from multimodal hallucination, which means the generation of objects or content that is inconsistent with the images. Many existing works detect hallucination by directly judging whether an object exists in an image, overlooking the association between the object and semantics. To address this issue, we propose Hierarchical Feedback Learning with Vision-enhanced Penalty Decoding (HELPD). This framework incorporates hallucination feedback at both object and sentence semantic levels. Remarkably, even with a marginal degree of training, this approach can alleviate over 15% of hallucination. Simultaneously, HELPD penalizes the output logits according to the image attention window to avoid being overly affected by generated text. HELPD can be seamlessly integrated with any LVLMs. Our experiments demonstrate that the proposed framework yields favorable results across multiple hallucination benchmarks. It effectively mitigates hallucination for different LVLMs and concurrently improves their text generation quality.<|reference_end|> | arxiv | @article{yuan2024helpd:,
title={HELPD: Mitigating Hallucination of LVLMs by Hierarchical Feedback
Learning with Vision-enhanced Penalty Decoding},
author={Fan Yuan, Chi Qin, Xiaogang Xu, Piji Li},
journal={arXiv preprint arXiv:2409.20429},
year={2024},
archivePrefix={arXiv},
eprint={2409.20429},
primaryClass={cs.CL cs.CV}
} | yuan2024helpd: |
arxiv-663674 | 2409.20431 | Multilevel Picard approximations and deep neural networks with ReLU, leaky ReLU, and softplus activation overcome the curse of dimensionality when approximating semilinear parabolic partial differential equations in $L^p$-sense | <|reference_start|>Multilevel Picard approximations and deep neural networks with ReLU, leaky ReLU, and softplus activation overcome the curse of dimensionality when approximating semilinear parabolic partial differential equations in $L^p$-sense: We prove that multilevel Picard approximations and deep neural networks with ReLU, leaky ReLU, and softplus activation are capable of approximating solutions of semilinear Kolmogorov PDEs in $L^\mathfrak{p}$-sense, $\mathfrak{p}\in [2,\infty)$, in the case of gradient-independent, Lipschitz-continuous nonlinearities, while the computational effort of the multilevel Picard approximations and the required number of parameters in the neural networks grow at most polynomially in both dimension $d\in \mathbb{N}$ and reciprocal of the prescribed accuracy $\epsilon$.<|reference_end|> | arxiv | @article{neufeld2024multilevel,
title={Multilevel Picard approximations and deep neural networks with ReLU,
leaky ReLU, and softplus activation overcome the curse of dimensionality when
approximating semilinear parabolic partial differential equations in
$L^p$-sense},
author={Ariel Neufeld, Tuan Anh Nguyen},
journal={arXiv preprint arXiv:2409.20431},
year={2024},
archivePrefix={arXiv},
eprint={2409.20431},
primaryClass={math.NA cs.LG cs.NA math.PR}
} | neufeld2024multilevel |
arxiv-663675 | 2409.20434 | QAEncoder: Towards Aligned Representation Learning in Question Answering System | <|reference_start|>QAEncoder: Towards Aligned Representation Learning in Question Answering System: Modern QA systems entail retrieval-augmented generation (RAG) for accurate and trustworthy responses. However, the inherent gap between user queries and relevant documents hinders precise matching. Motivated by our conical distribution hypothesis, which posits that potential queries and documents form a cone-like structure in the embedding space, we introduce QAEncoder, a training-free approach to bridge this gap. Specifically, QAEncoder estimates the expectation of potential queries in the embedding space as a robust surrogate for the document embedding, and attaches document fingerprints to effectively distinguish these embeddings. Extensive experiments on fourteen embedding models across six languages and eight datasets validate QAEncoder's alignment capability, which offers a plug-and-play solution that seamlessly integrates with existing RAG architectures and training-based methods.<|reference_end|> | arxiv | @article{wang2024qaencoder:,
title={QAEncoder: Towards Aligned Representation Learning in Question Answering
System},
author={Zhengren Wang, Qinhan Yu, Shida Wei, Zhiyu Li, Feiyu Xiong, Xiaoxing
Wang, Simin Niu, Hao Liang, Wentao Zhang},
journal={arXiv preprint arXiv:2409.20434},
year={2024},
number={v00},
archivePrefix={arXiv},
eprint={2409.20434},
primaryClass={cs.CL}
} | wang2024qaencoder: |
arxiv-663676 | 2409.20435 | ALLO: A Photorealistic Dataset and Data Generation Pipeline for Anomaly Detection During Robotic Proximity Operations in Lunar Orbit | <|reference_start|>ALLO: A Photorealistic Dataset and Data Generation Pipeline for Anomaly Detection During Robotic Proximity Operations in Lunar Orbit: NASA's forthcoming Lunar Gateway space station, which will be uncrewed most of the time, will need to operate with an unprecedented level of autonomy. Enhancing autonomy on the Gateway presents several unique challenges, one of which is to equip the Canadarm3, the Gateway's external robotic system, with the capability to perform worksite monitoring. Monitoring will involve using the arm's inspection cameras to detect any anomalies within the operating environment, a task complicated by the widely-varying lighting conditions in space. In this paper, we introduce the visual anomaly detection and localization task for space applications and establish a benchmark with our novel synthetic dataset called ALLO (for Anomaly Localization in Lunar Orbit). We develop a complete data generation pipeline to create ALLO, which we use to evaluate the performance of state-of-the-art visual anomaly detection algorithms. Given the low tolerance for risk during space operations and the lack of relevant data, we emphasize the need for novel, robust, and accurate anomaly detection methods to handle the challenging visual conditions found in lunar orbit and beyond.<|reference_end|> | arxiv | @article{leveugle2024allo:,
title={ALLO: A Photorealistic Dataset and Data Generation Pipeline for Anomaly
Detection During Robotic Proximity Operations in Lunar Orbit},
author={Selina Leveugle, Chang Won Lee, Svetlana Stolpner, Chris Langley, Paul
Grouchy, Steven Waslander, Jonathan Kelly},
journal={arXiv preprint arXiv:2409.20435},
year={2024},
archivePrefix={arXiv},
eprint={2409.20435},
primaryClass={cs.RO}
} | leveugle2024allo: |
arxiv-663677 | 2409.20440 | Optimism in the Face of Ambiguity Principle for Multi-Armed Bandits | <|reference_start|>Optimism in the Face of Ambiguity Principle for Multi-Armed Bandits: Follow-The-Regularized-Leader (FTRL) algorithms often enjoy optimal regret for adversarial as well as stochastic bandit problems and allow for a streamlined analysis. Nonetheless, FTRL algorithms require the solution of an optimization problem in every iteration and are thus computationally challenging. In contrast, Follow-The-Perturbed-Leader (FTPL) algorithms achieve computational efficiency by perturbing the estimates of the rewards of the arms, but their regret analysis is cumbersome. We propose a new FTPL algorithm that generates optimal policies for both adversarial and stochastic multi-armed bandits. Like FTRL, our algorithm admits a unified regret analysis, and similar to FTPL, it offers low computational costs. Unlike existing FTPL algorithms that rely on independent additive disturbances governed by a \textit{known} distribution, we allow for disturbances governed by an \textit{ambiguous} distribution that is only known to belong to a given set and propose a principle of optimism in the face of ambiguity. Consequently, our framework generalizes existing FTPL algorithms. It also encapsulates a broad range of FTRL methods as special cases, including several optimal ones, which appears to be impossible with current FTPL methods. Finally, we use techniques from discrete choice theory to devise an efficient bisection algorithm for computing the optimistic arm sampling probabilities. This algorithm is up to $10^4$ times faster than standard FTRL algorithms that solve an optimization problem in every iteration. Our results not only settle existing conjectures but also provide new insights into the impact of perturbations by mapping FTRL to FTPL.<|reference_end|> | arxiv | @article{li2024optimism,
title={Optimism in the Face of Ambiguity Principle for Multi-Armed Bandits},
author={Mengmeng Li, Daniel Kuhn, Bahar Taskesen},
journal={arXiv preprint arXiv:2409.20440},
year={2024},
archivePrefix={arXiv},
eprint={2409.20440},
primaryClass={cs.LG stat.ML}
} | li2024optimism |
arxiv-663678 | 2409.20441 | Instance-adaptive Zero-shot Chain-of-Thought Prompting | <|reference_start|>Instance-adaptive Zero-shot Chain-of-Thought Prompting: Zero-shot Chain-of-Thought (CoT) prompting emerges as a simple and effective strategy for enhancing the performance of large language models (LLMs) in real-world reasoning tasks. Nonetheless, the efficacy of a singular, task-level prompt uniformly applied across all instances is inherently limited since one prompt cannot be a good partner for all; a more appropriate approach should consider the interaction between the prompt and each instance meticulously. This work introduces an instance-adaptive prompting algorithm as an alternative zero-shot CoT reasoning scheme by adaptively differentiating good and bad prompts. Concretely, we first employ analysis on LLMs through the lens of information flow to detect the mechanism underlying zero-shot CoT reasoning, in which we discover that information flows from question to prompt and question to rationale jointly influence the reasoning results most. We notice that better zero-shot CoT reasoning needs the prompt to obtain semantic information from the question, and then the rationale to aggregate sufficient information from the question directly and via the prompt indirectly. On the contrary, lacking any of those would probably lead to a bad one. Stemming from that, we further propose an instance-adaptive prompting strategy (IAP) for zero-shot CoT reasoning. Experiments conducted with LLaMA-2, LLaMA-3, and Qwen on math, logic, and commonsense reasoning tasks (e.g., GSM8K, MMLU, Causal Judgement) obtain consistent improvement, demonstrating that the instance-adaptive zero-shot CoT prompting performs better than other task-level methods with some curated prompts or sophisticated procedures, showing the significance of our findings in the zero-shot CoT reasoning mechanism.<|reference_end|> | arxiv | @article{yuan2024instance-adaptive,
title={Instance-adaptive Zero-shot Chain-of-Thought Prompting},
author={Xiaosong Yuan, Chen Shen, Shaotian Yan, Xiaofeng Zhang, Liang Xie,
Wenxiao Wang, Renchu Guan, Ying Wang, Jieping Ye},
journal={arXiv preprint arXiv:2409.20441},
year={2024},
archivePrefix={arXiv},
eprint={2409.20441},
primaryClass={cs.CL}
} | yuan2024instance-adaptive |
arxiv-663679 | 2409.20445 | Robot Navigation Using Physically Grounded Vision-Language Models in Outdoor Environments | <|reference_start|>Robot Navigation Using Physically Grounded Vision-Language Models in Outdoor Environments: We present a novel autonomous robot navigation algorithm for outdoor environments that is capable of handling diverse terrain traversability conditions. Our approach, VLM-GroNav, uses vision-language models (VLMs) and integrates them with physical grounding that is used to assess intrinsic terrain properties such as deformability and slipperiness. We use proprioceptive-based sensing, which provides direct measurements of these physical properties, and enhances the overall semantic understanding of the terrains. Our formulation uses in-context learning to ground the VLM's semantic understanding with proprioceptive data to allow dynamic updates of traversability estimates based on the robot's real-time physical interactions with the environment. We use the updated traversability estimations to inform both the local and global planners for real-time trajectory replanning. We validate our method on a legged robot (Ghost Vision 60) and a wheeled robot (Clearpath Husky), in diverse real-world outdoor environments with different deformable and slippery terrains. In practice, we observe significant improvements over state-of-the-art methods by up to 50% increase in navigation success rate.<|reference_end|> | arxiv | @article{elnoor2024robot,
title={Robot Navigation Using Physically Grounded Vision-Language Models in
Outdoor Environments},
author={Mohamed Elnoor, Kasun Weerakoon, Gershom Seneviratne, Ruiqi Xian,
Tianrui Guan, Mohamed Khalid M Jaffar, Vignesh Rajagopal and Dinesh Manocha},
journal={arXiv preprint arXiv:2409.20445},
year={2024},
archivePrefix={arXiv},
eprint={2409.20445},
primaryClass={cs.RO}
} | elnoor2024robot |
arxiv-663680 | 2409.20447 | POMONAG: Pareto-Optimal Many-Objective Neural Architecture Generator | <|reference_start|>POMONAG: Pareto-Optimal Many-Objective Neural Architecture Generator: Neural Architecture Search (NAS) automates neural network design, reducing dependence on human expertise. While NAS methods are computationally intensive and dataset-specific, auxiliary predictors reduce the models needing training, decreasing search time. This strategy is used to generate architectures satisfying multiple computational constraints. Recently, Transferable NAS has emerged, generalizing the search process from dataset-dependent to task-dependent. In this field, DiffusionNAG is a state-of-the-art method. This diffusion-based approach streamlines computation, generating architectures optimized for accuracy on unseen datasets without further adaptation. However, by focusing solely on accuracy, DiffusionNAG overlooks other crucial objectives like model complexity, computational efficiency, and inference latency -- factors essential for deploying models in resource-constrained environments. This paper introduces the Pareto-Optimal Many-Objective Neural Architecture Generator (POMONAG), extending DiffusionNAG via a many-objective diffusion process. POMONAG simultaneously considers accuracy, number of parameters, multiply-accumulate operations (MACs), and inference latency. It integrates Performance Predictor models to estimate these metrics and guide diffusion gradients. POMONAG's optimization is enhanced by expanding its training Meta-Dataset, applying Pareto Front Filtering, and refining embeddings for conditional generation. These enhancements enable POMONAG to generate Pareto-optimal architectures that outperform the previous state-of-the-art in performance and efficiency. Results were validated on two search spaces -- NASBench201 and MobileNetV3 -- and evaluated across 15 image classification datasets.<|reference_end|> | arxiv | @article{lomurno2024pomonag:,
title={POMONAG: Pareto-Optimal Many-Objective Neural Architecture Generator},
author={Eugenio Lomurno, Samuele Mariani, Matteo Monti, Matteo Matteucci},
journal={arXiv preprint arXiv:2409.20447},
year={2024},
archivePrefix={arXiv},
eprint={2409.20447},
primaryClass={cs.LG cs.AI cs.CV}
} | lomurno2024pomonag: |
arxiv-663681 | 2409.20448 | On inf-sup stability and optimal convergence of the quasi-reversibility method for unique continuation subject to Poisson's equation | <|reference_start|>On inf-sup stability and optimal convergence of the quasi-reversibility method for unique continuation subject to Poisson's equation: In this paper, we develop a framework for the discretization of a mixed formulation of quasi-reversibility solutions to ill-posed problems with respect to Poisson's equations. By carefully choosing test and trial spaces, a formulation that is stable in a certain residual norm is obtained. Numerical stability and optimal convergence are established based on the conditional stability property of the problem. Tikhonov regularisation is necessary for high order polynomial approximation, but its weak consistency may be tuned to allow for optimal convergence. For low order elements, a simple numerical scheme with optimal convergence is obtained without stabilization. We also provide a guideline for feasible pairs of finite element spaces that satisfy suitable stability and consistency assumptions. Numerical experiments are provided to illustrate the theoretical results.<|reference_end|> | arxiv | @article{burman2024on,
title={On inf-sup stability and optimal convergence of the quasi-reversibility
method for unique continuation subject to Poisson's equation},
author={Erik Burman and Mingfei Lu},
journal={arXiv preprint arXiv:2409.20448},
year={2024},
archivePrefix={arXiv},
eprint={2409.20448},
primaryClass={math.NA cs.NA}
} | burman2024on |
arxiv-663682 | 2409.20449 | Linear Projections of Teacher Embeddings for Few-Class Distillation | <|reference_start|>Linear Projections of Teacher Embeddings for Few-Class Distillation: Knowledge Distillation (KD) has emerged as a promising approach for transferring knowledge from a larger, more complex teacher model to a smaller student model. Traditionally, KD involves training the student to mimic the teacher's output probabilities, while more advanced techniques have explored guiding the student to adopt the teacher's internal representations. Despite its widespread success, the performance of KD in binary classification and few-class problems has been less satisfactory. This is because the information about the teacher model's generalization patterns scales directly with the number of classes. Moreover, several sophisticated distillation methods may not be universally applicable or effective for data types beyond Computer Vision. Consequently, effective distillation techniques remain elusive for a range of key real-world applications, such as sentiment analysis, search query understanding, and advertisement-query relevance assessment. Taking these observations into account, we introduce a novel method for distilling knowledge from the teacher's model representations, which we term Learning Embedding Linear Projections (LELP). Inspired by recent findings about the structure of final-layer representations, LELP works by identifying informative linear subspaces in the teacher's embedding space, and splitting them into pseudo-subclasses. The student model is then trained to replicate these pseudo-classes. Our experimental evaluation on large-scale NLP benchmarks like Amazon Reviews and Sentiment140 demonstrates that LELP is consistently competitive with, and typically superior to, existing state-of-the-art distillation algorithms for binary and few-class problems, where most KD methods suffer.<|reference_end|> | arxiv | @article{loo2024linear,
title={Linear Projections of Teacher Embeddings for Few-Class Distillation},
author={Noel Loo, Fotis Iliopoulos, Wei Hu, Erik Vee},
journal={arXiv preprint arXiv:2409.20449},
year={2024},
archivePrefix={arXiv},
eprint={2409.20449},
primaryClass={cs.LG cs.AI}
} | loo2024linear |
arxiv-663683 | 2409.20460 | The Secretary Problem with Predicted Additive Gap | <|reference_start|>The Secretary Problem with Predicted Additive Gap: The secretary problem is one of the fundamental problems in online decision making; a tight competitive ratio for this problem of $1/\mathrm{e} \approx 0.368$ has been known since the 1960s. Much more recently, the study of algorithms with predictions was introduced: The algorithm is equipped with a (possibly erroneous) additional piece of information upfront which can be used to improve the algorithm's performance. Complementing previous work on secretary problems with prior knowledge, we tackle the following question: What is the weakest piece of information that allows us to break the $1/\mathrm{e}$ barrier? To this end, we introduce the secretary problem with predicted additive gap. As in the classical problem, weights are fixed by an adversary and elements appear in random order. In contrast to previous variants of predictions, our algorithm only has access to a much weaker piece of information: an \emph{additive gap} $c$. This gap is the difference between the highest and $k$-th highest weight in the sequence. Unlike previous pieces of advice, knowing an exact additive gap does not make the problem trivial. Our contribution is twofold. First, we show that for any index $k$ and any gap $c$, we can obtain a competitive ratio of $0.4$ when knowing the exact gap (even if we do not know $k$), hence beating the prevalent bound for the classical problem by a constant. Second, a slightly modified version of our algorithm allows to prove standard robustness-consistency properties as well as improved guarantees when knowing a range for the error of the prediction.<|reference_end|> | arxiv | @article{braun2024the,
title={The Secretary Problem with Predicted Additive Gap},
author={Alexander Braun and Sherry Sarkar},
journal={arXiv preprint arXiv:2409.20460},
year={2024},
archivePrefix={arXiv},
eprint={2409.20460},
primaryClass={cs.DS cs.GT}
} | braun2024the |
arxiv-663684 | 2409.20463 | Time Efficiency of BATS Coding on Wireless Relay Network With Overhearing | <|reference_start|>Time Efficiency of BATS Coding on Wireless Relay Network With Overhearing: A wireless relay network is a solution to extend the reach of a wireless connection by installing a relay node between the source node and the sink node. Due to the broadcast nature of wireless transmission, the sink node has a chance to receive part of the data sent by the source node. In this paper, we apply a network coding scheme called BATS codes to a wireless relay network where the relay node has a stable power supply, so that we can aim for the best decoding time instead of minimizing the number of transmissions for saving energy. We use heuristics to optimize the time efficiency, i.e., to maximize the average decoding rate per unit time, and show that it is not optimal to set the average number of recoded packets per batch at the relay node equal to the number of packets per batch sent by the source node.<|reference_end|> | arxiv | @article{yin2024time,
title={Time Efficiency of BATS Coding on Wireless Relay Network With
Overhearing},
author={Hoover H. F. Yin},
journal={arXiv preprint arXiv:2409.20463},
year={2024},
archivePrefix={arXiv},
eprint={2409.20463},
primaryClass={cs.IT math.IT}
} | yin2024time |
arxiv-663685 | 2409.20466 | Language Resources in Spanish for Automatic Text Simplification across Domains | <|reference_start|>Language Resources in Spanish for Automatic Text Simplification across Domains: This work describes the language resources and models developed for automatic simplification of Spanish texts in three domains: Finance, Medicine and History studies. We created several corpora in each domain, annotation and simplification guidelines, a lexicon of technical and simplified medical terms, datasets used in shared tasks for the financial domain, and two simplification tools. The methodology, resources and companion publications are shared publicly on the web-site: https://clara-nlp.uned.es/.<|reference_end|> | arxiv | @article{moreno-sandoval2024language,
title={Language Resources in Spanish for Automatic Text Simplification across
Domains},
author={Antonio Moreno-Sandoval, Leonardo Campillos-Llanos, Ana
Garc'ia-Serrano},
journal={arXiv preprint arXiv:2409.20466},
year={2024},
archivePrefix={arXiv},
eprint={2409.20466},
primaryClass={cs.CL}
} | moreno-sandoval2024language |
arxiv-663686 | 2409.20467 | A Weakly Supervised Data Labeling Framework for Machine Lexical Normalization in Vietnamese Social Media | <|reference_start|>A Weakly Supervised Data Labeling Framework for Machine Lexical Normalization in Vietnamese Social Media: This study introduces an innovative automatic labeling framework to address the challenges of lexical normalization in social media texts for low-resource languages like Vietnamese. Social media data is rich and diverse, but the evolving and varied language used in these contexts makes manual labeling labor-intensive and expensive. To tackle these issues, we propose a framework that integrates semi-supervised learning with weak supervision techniques. This approach enhances the quality of the training dataset and expands its size while minimizing manual labeling efforts. Our framework automatically labels raw data, converting non-standard vocabulary into standardized forms, thereby improving the accuracy and consistency of the training data. Experimental results demonstrate the effectiveness of our weak supervision framework in normalizing Vietnamese text, especially when utilizing Pre-trained Language Models. The proposed framework achieves an impressive F1-score of 82.72% and maintains vocabulary integrity with an accuracy of up to 99.22%. Additionally, it effectively handles undiacritized text under various conditions. This framework significantly enhances natural language normalization quality and improves the accuracy of various NLP tasks, leading to an average accuracy increase of 1-3%.<|reference_end|> | arxiv | @article{nguyen2024a,
title={A Weakly Supervised Data Labeling Framework for Machine Lexical
Normalization in Vietnamese Social Media},
author={Dung Ha Nguyen, Anh Thi Hoang Nguyen and Kiet Van Nguyen},
journal={arXiv preprint arXiv:2409.20467},
year={2024},
archivePrefix={arXiv},
eprint={2409.20467},
primaryClass={cs.CL cs.AI}
} | nguyen2024a |
arxiv-663687 | 2409.20469 | Continual Human Pose Estimation for Incremental Integration of Keypoints and Pose Variations | <|reference_start|>Continual Human Pose Estimation for Incremental Integration of Keypoints and Pose Variations: This paper reformulates cross-dataset human pose estimation as a continual learning task, aiming to integrate new keypoints and pose variations into existing models without losing accuracy on previously learned datasets. We benchmark this formulation against established regularization-based methods for mitigating catastrophic forgetting, including EWC, LFL, and LwF. Moreover, we propose a novel regularization method called Importance-Weighted Distillation (IWD), which enhances conventional LwF by introducing a layer-wise distillation penalty and dynamic temperature adjustment based on layer importance for previously learned knowledge. This allows for a controlled adaptation to new tasks that respects the stability-plasticity balance critical in continual learning. Through extensive experiments across three datasets, we demonstrate that our approach outperforms existing regularization-based continual learning strategies. IWD shows an average improvement of 3.60\% over the state-of-the-art LwF method. The results highlight the potential of our method to serve as a robust framework for real-world applications where models must evolve with new data without forgetting past knowledge.<|reference_end|> | arxiv | @article{khan2024continual,
title={Continual Human Pose Estimation for Incremental Integration of Keypoints
and Pose Variations},
author={Muhammad Saif Ullah Khan, Muhammad Ahmed Ullah Khan, Muhammad Zeshan
Afzal, Didier Stricker},
journal={arXiv preprint arXiv:2409.20469},
year={2024},
archivePrefix={arXiv},
eprint={2409.20469},
primaryClass={cs.CV}
} | khan2024continual |
arxiv-663688 | 2409.20473 | Impact of Tactile Sensor Quantities and Placements on Learning-based Dexterous Manipulation | <|reference_start|>Impact of Tactile Sensor Quantities and Placements on Learning-based Dexterous Manipulation: Tactile information effectively enables faster training and better task performance for learning-based in-hand manipulation. Existing approaches are validated in simulated environments with a large number of tactile sensors. However, attaching such sensors to a real robot hand is not applicable due to high cost and physical limitations. To enable real-world adoption of tactile sensors, this study investigates the impact of tactile sensors, including their varying quantities and placements on robot hands, on the dexterous manipulation task performance and analyzes the importance of each. Through empirically decreasing the sensor quantities, we successfully find an optimized set of tactile sensors (21 sensors) configuration, which keeps over 93% task performance with only 20% sensor quantities compared to the original set (92 sensors) for the block manipulation task, leading to a potential reduction of over 80% in sensor manufacturing and design costs. To transform the empirical results into a generalizable understanding, we build a task performance prediction model with a weighted linear regression algorithm and use it to forecast the task performance with different sensor configurations. To show its generalizability, we verified this model in egg and pen manipulation tasks and achieved an average prediction error of 3.12%.<|reference_end|> | arxiv | @article{guo2024impact,
title={Impact of Tactile Sensor Quantities and Placements on Learning-based
Dexterous Manipulation},
author={Haoran Guo, Haoyang Wang, Zhengxiong Li, He Bai, Lingfeng Tao},
journal={arXiv preprint arXiv:2409.20473},
year={2024},
archivePrefix={arXiv},
eprint={2409.20473},
primaryClass={cs.RO}
} | guo2024impact |
arxiv-663689 | 2409.20474 | IRFusionFormer: Enhancing Pavement Crack Segmentation with RGB-T Fusion and Topological-Based Loss | <|reference_start|>IRFusionFormer: Enhancing Pavement Crack Segmentation with RGB-T Fusion and Topological-Based Loss: Crack segmentation is crucial in civil engineering, particularly for assessing pavement integrity and ensuring the durability of infrastructure. While deep learning has advanced RGB-based segmentation, performance degrades under adverse conditions like low illumination or motion blur. Thermal imaging offers complementary information by capturing emitted radiation, improving crack detection in challenging environments. Combining RGB and thermal images (RGB-T) for crack segmentation shows promise in complex real-world conditions, such as adverse weather, yet research in this area remains limited. Current RGB-T segmentation methods often fail to fully exploit the complementary relationships between modalities at various levels of interaction. To address this, we propose IRFusionFormer, a novel model for crack segmentation that effectively integrates RGB and thermal data. Our Efficient RGB-T Cross Fusion Module captures multi-scale relationships and long-range dependencies between modalities without significant computational overhead. Additionally, we introduce the Interaction-Hybrid-Branch-Supervision framework, which enhances interaction between modalities by distributing fused features across branches with joint supervision. To maintain the topological structure of cracks, we introduce a novel topology-based loss function that preserves connectivity during training. Our method achieves state-of-the-art performance, with a Dice score of 90.01% and an IoU of 81.83%, significantly improving robustness and accuracy in varying environmental conditions. These advancements address key challenges in pavement crack segmentation, offering a more reliable and efficient solution. For access to the codes, data, and models from this study, visit https://github.com/sheauhuu/IRFusionFormer<|reference_end|> | arxiv | @article{xiao2024irfusionformer:,
title={IRFusionFormer: Enhancing Pavement Crack Segmentation with RGB-T Fusion
and Topological-Based Loss},
author={Ruiqiang Xiao, Xiaohu Chen},
journal={arXiv preprint arXiv:2409.20474},
year={2024},
archivePrefix={arXiv},
eprint={2409.20474},
primaryClass={cs.CV}
} | xiao2024irfusionformer: |
arxiv-663690 | 2409.20476 | Intel(R) SHMEM: GPU-initiated OpenSHMEM using SYCL | <|reference_start|>Intel(R) SHMEM: GPU-initiated OpenSHMEM using SYCL: Modern high-end systems are increasingly becoming heterogeneous, providing users options to use general purpose Graphics Processing Units (GPU) and other accelerators for additional performance. High Performance Computing (HPC) and Artificial Intelligence (AI) applications are often carefully arranged to overlap communications and computation for increased efficiency on such platforms. This has led to efforts to extend popular communication libraries to support GPU awareness and more recently, GPU-initiated operations. In this paper, we present Intel SHMEM, a library that enables users to write programs that are GPU aware, in that API calls support GPU memory, and also support GPU-initiated communication operations by embedding OpenSHMEM style calls within GPU kernels. We also propose thread-collaborative extensions to the OpenSHMEM standard that can enable users to better exploit the strengths of GPUs. Our implementation adapts to choose between direct load/store from GPU and the GPU copy engine based transfer to optimize performance on different configurations.<|reference_end|> | arxiv | @article{brooks2024intel(r),
title={Intel(R) SHMEM: GPU-initiated OpenSHMEM using SYCL},
author={Alex Brooks, Philip Marshall, David Ozog, Md. Wasi-ur- Rahman,
Lawrence Stewart, Rithwik Tom},
journal={arXiv preprint arXiv:2409.20476},
year={2024},
archivePrefix={arXiv},
eprint={2409.20476},
primaryClass={cs.DC}
} | brooks2024intel(r) |
arxiv-663691 | 2409.20477 | Impartial Selection Under Combinatorial Constraints | <|reference_start|>Impartial Selection Under Combinatorial Constraints: Impartial selection problems are concerned with the selection of one or more agents from a set based on mutual nominations from within the set. To avoid strategic nominations of the agents, the axiom of impartiality requires that the selection of each agent is independent of the nominations cast by that agent. This paper initiates the study of impartial selection problems where the nominations are weighted and the set of agents that can be selected is restricted by a combinatorial constraint. We call a selection mechanism $\alpha$-optimal if, for every instance, the ratio between the total sum of weighted nominations of the selected set and that of the best feasible set of agents is at least $\alpha$. We show that a natural extension of a mechanism studied for the selection of a single agent remains impartial and $\frac{1}{4}$-optimal for general independence systems, and we generalize upper bounds from the selection of multiple agents by parameterizing them by the girth of the independence system. We then focus on independence systems defined by knapsack and matroid constraints, giving impartial mechanisms that exploit a greedy order of the agents and achieve approximation ratios of $\frac{1}{3}$ and $\frac{1}{2}$, respectively, when agents cast a single nomination. For graphic matroids, we further devise an impartial and $\frac{1}{3}$-optimal mechanism for an arbitrary number of unweighted nominations.<|reference_end|> | arxiv | @article{cembrano2024impartial,
title={Impartial Selection Under Combinatorial Constraints},
author={Javier Cembrano, Max Klimm, Arturo Merino},
journal={arXiv preprint arXiv:2409.20477},
year={2024},
archivePrefix={arXiv},
eprint={2409.20477},
primaryClass={cs.GT econ.TH}
} | cembrano2024impartial |
arxiv-663692 | 2409.20483 | RecSys Challenge 2024: Balancing Accuracy and Editorial Values in News Recommendations | <|reference_start|>RecSys Challenge 2024: Balancing Accuracy and Editorial Values in News Recommendations: The RecSys Challenge 2024 aims to advance news recommendation by addressing both the technical and normative challenges inherent in designing effective and responsible recommender systems for news publishing. This paper describes the challenge, including its objectives, problem setting, and the dataset provided by the Danish news publishers Ekstra Bladet and JP/Politikens Media Group ("Ekstra Bladet"). The challenge explores the unique aspects of news recommendation, such as modeling user preferences based on behavior, accounting for the influence of the news agenda on user interests, and managing the rapid decay of news items. Additionally, the challenge embraces normative complexities, investigating the effects of recommender systems on news flow and their alignment with editorial values. We summarize the challenge setup, dataset characteristics, and evaluation metrics. Finally, we announce the winners and highlight their contributions. The dataset is available at: https://recsys.eb.dk.<|reference_end|> | arxiv | @article{kruse2024recsys,
title={RecSys Challenge 2024: Balancing Accuracy and Editorial Values in News
Recommendations},
author={Johannes Kruse, Kasper Lindskow, Saikishore Kalloori, Marco Polignano,
Claudio Pomo, Abhishek Srivastava, Anshuk Uppal, Michael Riis Andersen, Jes
Frellsen},
journal={arXiv preprint arXiv:2409.20483},
year={2024},
doi={10.1145/3640457.3687164},
archivePrefix={arXiv},
eprint={2409.20483},
primaryClass={cs.IR cs.AI cs.LG}
} | kruse2024recsys |
arxiv-663693 | 2409.20484 | "What" x "When" working memory representations using Laplace Neural Manifolds | <|reference_start|>"What" x "When" working memory representations using Laplace Neural Manifolds: Working memory $\unicode{x2013}$ the ability to remember recent events as they recede continuously into the past $\unicode{x2013}$ requires the ability to represent any stimulus at any time delay. This property requires neurons coding working memory to show mixed selectivity, with conjunctive receptive fields (RFs) for stimuli and time, forming a representation of 'what' $\times$ 'when'. We study the properties of such a working memory in simple experiments where a single stimulus must be remembered for a short time. The requirement of conjunctive receptive fields allows the covariance matrix of the network to decouple neatly, allowing an understanding of the low-dimensional dynamics of the population. Different choices of temporal basis functions lead to qualitatively different dynamics. We study a specific choice $\unicode{x2013}$ a Laplace space with exponential basis functions for time coupled to an "Inverse Laplace" space with circumscribed basis functions in time. We refer to this choice with basis functions that evenly tile log time as a Laplace Neural Manifold. Despite the fact that they are related to one another by a linear projection, the Laplace population shows a stable stimulus-specific subspace whereas the Inverse Laplace population shows rotational dynamics. The growth of the rank of the covariance matrix with time depends on the density of the temporal basis set; logarithmic tiling shows good agreement with data. We sketch a continuous attractor CANN that constructs a Laplace Neural Manifold. The attractor in the Laplace space appears as an edge; the attractor for the inverse space appears as a bump. This work provides a map for going from more abstract cognitive models of WM to circuit-level implementation using continuous attractor neural networks, and places constraints on the types of neural dynamics that support working memory.<|reference_end|> | arxiv | @article{sarkar2024"what",
title={"What" x "When" working memory representations using Laplace Neural
Manifolds},
author={Aakash Sarkar, Chenyu Wang, Shangfu Zuo, Marc W. Howard},
journal={arXiv preprint arXiv:2409.20484},
year={2024},
archivePrefix={arXiv},
eprint={2409.20484},
primaryClass={q-bio.NC cs.NE}
} | sarkar2024"what" |
arxiv-663694 | 2409.20486 | Propelling Innovation to Defeat Data-Leakage Hardware Trojans: From Theory to Practice | <|reference_start|>Propelling Innovation to Defeat Data-Leakage Hardware Trojans: From Theory to Practice: Many design companies have gone fabless and rely on external fabrication facilities to produce chips due to increasing cost of semiconductor manufacturing. However, not all of these facilities can be considered trustworthy; some may inject hardware Trojans and jeopardize the security of the system. One common objective of hardware Trojans is to establish a side channel for data leakage. While extensive literature exists on various defensive measures, almost all of them focus on preventing the establishment of side channels, and can be compromised if attackers gain access to the physical chip and can perform reverse engineering between multiple fabrication runs. In this paper, we advance (from theory to practice) RECORD: Randomized Encoding of COmbinational Logic for Resistance to Data Leakage. RECORD is a novel scheme of temporarily randomized encoding for combinational logic that, with the aid of Quilt Packaging, prevents attackers from interpreting the data.<|reference_end|> | arxiv | @article{kwiat2024propelling,
title={Propelling Innovation to Defeat Data-Leakage Hardware Trojans: From
Theory to Practice},
author={Kevin Kwiat, Jason Kulick, Paul Ratazzi},
journal={arXiv preprint arXiv:2409.20486},
year={2024},
archivePrefix={arXiv},
eprint={2409.20486},
primaryClass={cs.CR}
} | kwiat2024propelling |
arxiv-663695 | 2409.20488 | Evaluating the Impact of Convolutional Neural Network Layer Depth on the Enhancement of Inertial Navigation System Solutions | <|reference_start|>Evaluating the Impact of Convolutional Neural Network Layer Depth on the Enhancement of Inertial Navigation System Solutions: Secure navigation is pivotal for several applications including autonomous vehicles, robotics, and aviation. The inertial navigation system estimates position, velocity, and attitude through dead reckoning especially when external references like GPS are unavailable. However, the three accelerometers and three gyroscopes that compose the system are exposed to various types of errors including bias errors, scale factor errors, and noise, which can significantly degrade the accuracy of navigation constituting also a key vulnerability of this system. This work aims to adopt a supervised convolutional neural network (ConvNet) to address this vulnerability inherent in inertial navigation systems. In addition to this, this paper evaluates the impact of the ConvNet layer's depth on the accuracy of these corrections. This evaluation aims to determine the optimal layer configuration maximizing the effectiveness of error correction in INS (Inertial Navigation System) leading to precise navigation solutions.<|reference_end|> | arxiv | @article{aftatah2024evaluating,
title={Evaluating the Impact of Convolutional Neural Network Layer Depth on the
Enhancement of Inertial Navigation System Solutions},
author={Mohammed Aftatah and Khalid Zebbara},
journal={arXiv preprint arXiv:2409.20488},
year={2024},
archivePrefix={arXiv},
eprint={2409.20488},
primaryClass={cs.RO}
} | aftatah2024evaluating |
arxiv-663696 | 2409.20489 | Online Decision Deferral under Budget Constraints | <|reference_start|>Online Decision Deferral under Budget Constraints: Machine Learning (ML) models are increasingly used to support or substitute decision making. In applications where skilled experts are a limited resource, it is crucial to reduce their burden and automate decisions when the performance of an ML model is at least of equal quality. However, models are often pre-trained and fixed, while tasks arrive sequentially and their distribution may shift. In that case, the respective performance of the decision makers may change, and the deferral algorithm must remain adaptive. We propose a contextual bandit model of this online decision making problem. Our framework includes budget constraints and different types of partial feedback models. Beyond the theoretical guarantees of our algorithm, we propose efficient extensions that achieve remarkable performance on real-world datasets.<|reference_end|> | arxiv | @article{reid2024online,
title={Online Decision Deferral under Budget Constraints},
author={Mirabel Reid, Tom S\"uhr, Claire Vernade, Samira Samadi},
journal={arXiv preprint arXiv:2409.20489},
year={2024},
archivePrefix={arXiv},
eprint={2409.20489},
primaryClass={cs.LG}
} | reid2024online |
arxiv-663697 | 2409.20490 | Age of Gossip with the Push-Pull Protocol | <|reference_start|>Age of Gossip with the Push-Pull Protocol: We consider a wireless network where a source generates packets and forwards them to a network containing $n$ nodes. The nodes in the network use the asynchronous push, pull or push-pull gossip communication protocols to maintain the most recent updates from the source. We use the version age of information metric to quantify the freshness of information in the network. Prior to this work, only the push gossiping protocol has been studied for age of information analysis. In this paper, we use the stochastic hybrid systems (SHS) framework to obtain recursive equations for the expected version age of sets of nodes in the time limit. We then show that the pull and push-pull protocols can achieve constant version age, while it is already known that the push protocol can only achieve logarithmic version age. We then show that the push-pull protocol performs better than the push and the pull protocol. Finally, we carry out numerical simulations to evaluate these results.<|reference_end|> | arxiv | @article{srivastava2024age,
title={Age of Gossip with the Push-Pull Protocol},
author={Arunabh Srivastava, Thomas Jacob Maranzatto, Sennur Ulukus},
journal={arXiv preprint arXiv:2409.20490},
year={2024},
archivePrefix={arXiv},
eprint={2409.20490},
primaryClass={cs.IT cs.NI eess.SP math.IT}
} | srivastava2024age |
arxiv-663698 | 2409.20494 | An Effectively $\Omega(c)$ Language and Runtime | <|reference_start|>An Effectively $\Omega(c)$ Language and Runtime: The performance of an application/runtime is usually thought of as a continuous function where, the lower the amount of memory/time used on a given workload, then the better the compiler/runtime is. However, in practice, good performance of an application is conceptually more of a binary function -- either the application responds in under, say 100ms, and is fast enough for a user to barely notice, or it takes a noticeable amount of time, leaving the user waiting and potentially abandoning the task. Thus, performance really means how often the application is fast enough to be usable, leading industrial developers to focus on the 95th and 99th percentile latencies as heavily, or moreso, than average response time. Unfortunately, tracking and optimizing for these high percentile latencies is difficult and often requires a deep understanding of the application, runtime, GC, and OS interactions. This is further complicated by the fact that tail performance is often only seen occasionally, and is specific to a certain workload or input, making these issues uniquely painful to handle. Our vision is to create a language and runtime that is designed to be $\Omega(c)$ in its performance -- that is, it is designed to have an effectively constant time to execute all operations, there is a constant fixed memory overhead for the application footprint, and the garbage-collector performs a constant amount of work per allocation + a (small) bounded pause for all collection/release operations.<|reference_end|> | arxiv | @article{marron2024an,
title={An Effectively $\Omega(c)$ Language and Runtime},
author={Mark Marron},
journal={arXiv preprint arXiv:2409.20494},
year={2024},
archivePrefix={arXiv},
eprint={2409.20494},
primaryClass={cs.PL cs.SE}
} | marron2024an |
arxiv-663699 | 2409.20498 | Enhancing Romanian Offensive Language Detection through Knowledge Distillation, Multi-Task Learning, and Data Augmentation | <|reference_start|>Enhancing Romanian Offensive Language Detection through Knowledge Distillation, Multi-Task Learning, and Data Augmentation: This paper highlights the significance of natural language processing (NLP) within artificial intelligence, underscoring its pivotal role in comprehending and modeling human language. Recent advancements in NLP, particularly in conversational bots, have garnered substantial attention and adoption among developers. This paper explores advanced methodologies for attaining smaller and more efficient NLP models. Specifically, we employ three key approaches: (1) training a Transformer-based neural network to detect offensive language, (2) employing data augmentation and knowledge distillation techniques to increase performance, and (3) incorporating multi-task learning with knowledge distillation and teacher annealing using diverse datasets to enhance efficiency. The culmination of these methods has yielded demonstrably improved outcomes.<|reference_end|> | arxiv | @article{matei2024enhancing,
title={Enhancing Romanian Offensive Language Detection through Knowledge
Distillation, Multi-Task Learning, and Data Augmentation},
author={Vlad-Cristian Matei, Iulian-Marius T\u{a}iatu, R\u{a}zvan-Alexandru
Sm\u{a}du and Dumitru-Clementin Cercel},
journal={arXiv preprint arXiv:2409.20498},
year={2024},
doi={10.1007/978-3-031-70239-6_22},
archivePrefix={arXiv},
eprint={2409.20498},
primaryClass={cs.CL}
} | matei2024enhancing |
arxiv-663700 | 2409.20499 | Crater Projection in Linear Pushbroom Camera Images | <|reference_start|>Crater Projection in Linear Pushbroom Camera Images: Scientific imaging of the Moon, Mars, and other celestial bodies is often accomplished with pushbroom cameras. Craters with elliptical rims are common objects of interest within the images produced by such sensors. This work provides a framework to analyze the appearance of crater rims in pushbroom images. With knowledge of only common ellipse parameters describing the crater rim, explicit formulations are developed and shown to be convenient for drawing the apparent crater in pushbroom images. Implicit forms are also developed and indicate the orbital conditions under which craters form conics in images. Several numerical examples are provided which demonstrate how different forms of crater rim projections can be interpreted and used in practice.<|reference_end|> | arxiv | @article{mancini2024crater,
title={Crater Projection in Linear Pushbroom Camera Images},
author={Michela Mancini, Ava Thrasher, Carl De Vries, John Christian},
journal={arXiv preprint arXiv:2409.20499},
year={2024},
archivePrefix={arXiv},
eprint={2409.20499},
primaryClass={astro-ph.IM cs.CG eess.IV}
} | mancini2024crater |