corpus_id
stringlengths
7
12
paper_id
stringlengths
9
16
title
stringlengths
1
261
abstract
stringlengths
70
4.02k
source
stringclasses
1 value
bibtex
stringlengths
208
20.9k
citation_key
stringlengths
6
100
arxiv-666801
2410.05552
The Power of Adaptivity in Experimental Design
<|reference_start|>The Power of Adaptivity in Experimental Design: Given n experiment subjects with potentially heterogeneous covariates and two possible treatments, namely active treatment and control, this paper addresses the fundamental question of determining the optimal accuracy in estimating the treatment effect. Furthermore, we propose an experimental design that approaches this optimal accuracy, giving a (non-asymptotic) answer to this fundamental yet still open question. The methodological contribution is listed as following. First, we establish an idealized optimal estimator with minimal variance as benchmark, and then demonstrate that adaptive experiment is necessary to achieve near-optimal estimation accuracy. Secondly, by incorporating the concept of doubly robust method into sequential experimental design, we frame the optimal estimation problem as an online bandit learning problem, bridging the two fields of statistical estimation and bandit learning. Using tools and ideas from both bandit algorithm design and adaptive statistical estimation, we propose a general low switching adaptive experiment framework, which could be a generic research paradigm for a wide range of adaptive experimental design. Through information-theoretic lower bound combined with Bayes risk analysis, we demonstrate the optimality of our proposed experiment. Numerical result indicates that the estimation accuracy approaches optimal with as few as two or three policy updates.<|reference_end|>
arxiv
@article{li2024optimal, title={Optimal Adaptive Experimental Design for Estimating Treatment Effect}, author={Jiachun Li, David Simchi-Levi, Yunxiao Zhao}, journal={arXiv preprint arXiv:2410.05552}, year={2024}, archivePrefix={arXiv}, eprint={2410.05552}, primaryClass={stat.ML cs.LG} }
li2024optimal
arxiv-666802
2410.05553
On Instruction-Finetuning Neural Machine Translation Models
<|reference_start|>On Instruction-Finetuning Neural Machine Translation Models: In this work, we introduce instruction finetuning for Neural Machine Translation (NMT) models, which distills instruction following capabilities from Large Language Models (LLMs) into orders-of-magnitude smaller NMT models. Our instruction-finetuning recipe for NMT models enables customization of translations for a limited but disparate set of translation-specific tasks. We show that NMT models are capable of following multiple instructions simultaneously and demonstrate capabilities of zero-shot composition of instructions. We also show that through instruction finetuning, traditionally disparate tasks such as formality-controlled machine translation, multi-domain adaptation as well as multi-modal translations can be tackled jointly by a single instruction finetuned NMT model, at a performance level comparable to LLMs such as GPT-3.5-Turbo. To the best of our knowledge, our work is among the first to demonstrate the instruction-following capabilities of traditional NMT models, which allows for faster, cheaper and more efficient serving of customized translations.<|reference_end|>
arxiv
@article{raunak2024on, title={On Instruction-Finetuning Neural Machine Translation Models}, author={Vikas Raunak, Roman Grundkiewicz, Marcin Junczys-Dowmunt}, journal={arXiv preprint arXiv:2410.05553}, year={2024}, archivePrefix={arXiv}, eprint={2410.05553}, primaryClass={cs.CL cs.AI} }
raunak2024on
arxiv-666803
2410.05554
MultiNash-PF: A Particle Filtering Approach for Computing Multiple Local Generalized Nash Equilibria in Trajectory Games
<|reference_start|>MultiNash-PF: A Particle Filtering Approach for Computing Multiple Local Generalized Nash Equilibria in Trajectory Games: Modern-world robotics involves complex environments where multiple autonomous agents must interact with each other and other humans. This necessitates advanced interactive multi-agent motion planning techniques. Generalized Nash equilibrium(GNE), a solution concept in constrained game theory, provides a mathematical model to predict the outcome of interactive motion planning, where each agent needs to account for other agents in the environment. However, in practice, multiple local GNEs may exist. Finding a single GNE itself is complex as it requires solving coupled constrained optimal control problems. Furthermore, finding all such local GNEs requires exploring the solution space of GNEs, which is a challenging task. This work proposes the MultiNash-PF framework to efficiently compute multiple local GNEs in constrained trajectory games. Potential games are a class of games for which a local GNE of a trajectory game can be found by solving a single constrained optimal control problem. We propose MultiNash-PF that integrates the potential game approach with implicit particle filtering, a sample-efficient method for non-convex trajectory optimization. We first formulate the underlying game as a constrained potential game and then utilize the implicit particle filtering to identify the coarse estimates of multiple local minimizers of the game's potential function. MultiNash-PF then refines these estimates with optimization solvers, obtaining different local GNEs. We show through numerical simulations that MultiNash-PF reduces computation time by up to 50\% compared to a baseline approach.<|reference_end|>
arxiv
@article{bhatt2024multinash-pf:, title={MultiNash-PF: A Particle Filtering Approach for Computing Multiple Local Generalized Nash Equilibria in Trajectory Games}, author={Maulik Bhatt, Iman Askari, Yue Yu, Ufuk Topcu, Huazhen Fang, Negar Mehr}, journal={arXiv preprint arXiv:2410.05554}, year={2024}, archivePrefix={arXiv}, eprint={2410.05554}, primaryClass={cs.RO} }
bhatt2024multinash-pf:
arxiv-666804
2410.05557
Rethinking Weak-to-Strong Augmentation in Source-Free Domain Adaptive Object Detection
<|reference_start|>Rethinking Weak-to-Strong Augmentation in Source-Free Domain Adaptive Object Detection: Source-Free domain adaptive Object Detection (SFOD) aims to transfer a detector (pre-trained on source domain) to new unlabelled target domains. Current SFOD methods typically follow the Mean Teacher framework, where weak-to-strong augmentation provides diverse and sharp contrast for self-supervised learning. However, this augmentation strategy suffers from an inherent problem called crucial semantics loss: Due to random, strong disturbance, strong augmentation is prone to losing typical visual components, hindering cross-domain feature extraction. To address this thus-far ignored limitation, this paper introduces a novel Weak-to-Strong Contrastive Learning (WSCoL) approach. The core idea is to distill semantics lossless knowledge in the weak features (from the weak/teacher branch) to guide the representation learning upon the strong features (from the strong/student branch). To achieve this, we project the original features into a shared space using a mapping network, thereby reducing the bias between the weak and strong features. Meanwhile, a weak features-guided contrastive learning is performed in a weak-to-strong manner alternatively. Specifically, we first conduct an adaptation-aware prototype-guided clustering on the weak features to generate pseudo labels for corresponding strong features matched through proposals. Sequentially, we identify positive-negative samples based on the pseudo labels and perform cross-category contrastive learning on the strong features where an uncertainty estimator encourages adaptive background contrast. Extensive experiments demonstrate that WSCoL yields new state-of-the-art performance, offering a built-in mechanism mitigating crucial semantics loss for traditional Mean Teacher framework. The code and data will be released soon.<|reference_end|>
arxiv
@article{yang2024rethinking, title={Rethinking Weak-to-Strong Augmentation in Source-Free Domain Adaptive Object Detection}, author={Jiuzheng Yang, Song Tang, Yangkuiyi Zhang, Shuaifeng Li, Mao Ye, Jianwei Zhang and Xiatian Zhu}, journal={arXiv preprint arXiv:2410.05557}, year={2024}, archivePrefix={arXiv}, eprint={2410.05557}, primaryClass={cs.CV} }
yang2024rethinking
arxiv-666805
2410.05558
Narrative-of-Thought: Improving Temporal Reasoning of Large Language Models via Recounted Narratives
<|reference_start|>Narrative-of-Thought: Improving Temporal Reasoning of Large Language Models via Recounted Narratives: Reasoning about time and temporal relations is an integral aspect of human cognition, essential for perceiving the world and navigating our experiences. Though large language models (LLMs) have demonstrated impressive performance in many reasoning tasks, temporal reasoning remains challenging due to its intrinsic complexity. In this work, we first study an essential task of temporal reasoning -- temporal graph generation, to unveil LLMs' inherent, global reasoning capabilities. We show that this task presents great challenges even for the most powerful LLMs, such as GPT-3.5/4. We also notice a significant performance gap by small models (<10B) that lag behind LLMs by 50%. Next, we study how to close this gap with a budget constraint, e.g., not using model finetuning. We propose a new prompting technique tailored for temporal reasoning, Narrative-of-Thought (NoT), that first converts the events set to a Python class, then prompts a small model to generate a temporally grounded narrative, guiding the final generation of a temporal graph. Extensive experiments showcase the efficacy of NoT in improving various metrics. Notably, NoT attains the highest F1 on the Schema-11 evaluation set, while securing an overall F1 on par with GPT-3.5. NoT also achieves the best structural similarity across the board, even compared with GPT-3.5/4. Our code is available at https://github.com/launchnlp/NoT.<|reference_end|>
arxiv
@article{zhang2024narrative-of-thought:, title={Narrative-of-Thought: Improving Temporal Reasoning of Large Language Models via Recounted Narratives}, author={Xinliang Frederick Zhang, Nick Beauchamp, Lu Wang}, journal={arXiv preprint arXiv:2410.05558}, year={2024}, archivePrefix={arXiv}, eprint={2410.05558}, primaryClass={cs.CL cs.AI} }
zhang2024narrative-of-thought:
arxiv-666806
2410.05559
Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification
<|reference_start|>Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification: We propose a constraint learning schema for fine-tuning Large Language Models (LLMs) with attribute control. Given a training corpus and control criteria formulated as a sequence-level constraint on model outputs, our method fine-tunes the LLM on the training corpus while enhancing constraint satisfaction with minimal impact on its utility and generation quality. Specifically, our approach regularizes the LLM training by penalizing the KL divergence between the desired output distribution, which satisfies the constraints, and the LLM's posterior. This regularization term can be approximated by an auxiliary model trained to decompose the sequence-level constraints into token-level guidance, allowing the term to be measured by a closed-form formulation. To further improve efficiency, we design a parallel scheme for concurrently updating both the LLM and the auxiliary model. We evaluate the empirical performance of our approach by controlling the toxicity when training an LLM. We show that our approach leads to an LLM that produces fewer inappropriate responses while achieving competitive performance on benchmarks and a toxicity detection task.<|reference_end|>
arxiv
@article{meng2024attribute, title={Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification}, author={Tao Meng, Ninareh Mehrabi, Palash Goyal, Anil Ramakrishna, Aram Galstyan, Richard Zemel, Kai-Wei Chang, Rahul Gupta, Charith Peris}, journal={arXiv preprint arXiv:2410.05559}, year={2024}, archivePrefix={arXiv}, eprint={2410.05559}, primaryClass={cs.CL} }
meng2024attribute
arxiv-666807
2410.05560
Cyber Threats to Canadian Federal Election: Emerging Threats, Assessment, and Mitigation Strategies
<|reference_start|>Cyber Threats to Canadian Federal Election: Emerging Threats, Assessment, and Mitigation Strategies: As Canada prepares for the 2025 federal election, ensuring the integrity and security of the electoral process against cyber threats is crucial. Recent foreign interference in elections globally highlight the increasing sophistication of adversaries in exploiting technical and human vulnerabilities. Such vulnerabilities also exist in Canada's electoral system that relies on a complex network of IT systems, vendors, and personnel. To mitigate these vulnerabilities, a threat assessment is crucial to identify emerging threats, develop incident response capabilities, and build public trust and resilience against cyber threats. Therefore, this paper presents a comprehensive national cyber threat assessment, following the NIST Special Publication 800-30 framework, focusing on identifying and mitigating cybersecurity risks to the upcoming 2025 Canadian federal election. The research identifies three major threats: misinformation, disinformation, and malinformation (MDM) campaigns; attacks on critical infrastructure and election support systems; and espionage by malicious actors. Through detailed analysis, the assessment offers insights into the capabilities, intent, and potential impact of these threats. The paper also discusses emerging technologies and their influence on election security and proposes a multi-faceted approach to risk mitigation ahead of the election.<|reference_end|>
arxiv
@article{islam2024cyber, title={Cyber Threats to Canadian Federal Election: Emerging Threats, Assessment, and Mitigation Strategies}, author={Nazmul Islam, Soomin Kim, Mohammad Pirooz and Sasha Shvetsov}, journal={arXiv preprint arXiv:2410.05560}, year={2024}, archivePrefix={arXiv}, eprint={2410.05560}, primaryClass={cs.CR} }
islam2024cyber
arxiv-666808
2410.05561
DDES Study of Confined and Unconfined NACA Wing Sections Using Spectral Elements
<|reference_start|>DDES Study of Confined and Unconfined NACA Wing Sections Using Spectral Elements: We develop hybrid RANS-LES strategies within the spectral element code Nek5000 based on the $k-\tau$ class of turbulence models. We chose airfoil sections at small flight configurations as our target problem to comprehensively test the solver accuracy and performance. We present verification and validation results of an unconfined NACA0012 wing section in a pure RANS and in a hybrid RANS-LES setup for an angle of attack ranging from 0 to 90 degrees. The RANS results shows good corroboration with existing experimental and numerical datasets for low incoming flow angles. A small discrepancy appears at higher angle in comparison with the experiments, which is in line with our expectations from a RANS formulation. On the other hand, DDES captures both the attached and separated flow dynamics well when compared with available numerical datasets. We demonstrate that for the hybrid turbulence modeling approach a high-order spectral element discretization converges faster (i.e., with less resolution) and captures the flow dynamics more accurately than representative low-order finite-volume and finite-difference approaches. We also revise some of the guidelines on sample size requirements for statistics convergence. Furthermore, we analyze some of the observed discrepancies of our unconfined DDES at higher angles with the experiments by evaluating the side wall "blocking" effect. We carry out additional simulations in a confined 'numerical wind tunnel' and assess the observed differences as a function of Reynolds number.<|reference_end|>
arxiv
@article{kumar2024ddes, title={DDES Study of Confined and Unconfined NACA Wing Sections Using Spectral Elements}, author={Vishal Kumar, Ananias Tomboulides, Paul Fischer, Misun Min}, journal={arXiv preprint arXiv:2410.05561}, year={2024}, archivePrefix={arXiv}, eprint={2410.05561}, primaryClass={cs.CE} }
kumar2024ddes
arxiv-666809
2410.05562
FogROS2-PLR: Probabilistic Latency-Reliability For Cloud Robotics
<|reference_start|>FogROS2-PLR: Probabilistic Latency-Reliability For Cloud Robotics: Cloud robotics enables robots to offload computationally intensive tasks to cloud servers for performance, cost, and ease of management. However, the network and cloud computing infrastructure are not designed for reliable timing guarantees, due to fluctuating Quality-of-Service (QoS). In this work, we formulate an impossibility triangle theorem for: Latency reliability, Singleton server, and Commodity hardware. The LSC theorem suggests that providing replicated servers with uncorrelated failures can exponentially reduce the probability of missing a deadline. We present FogROS2-Probabilistic Latency Reliability (PLR) that uses multiple independent network interfaces to send requests to replicated cloud servers and uses the first response back. We design routing mechanisms to discover, connect, and route through non-default network interfaces on robots. FogROS2-PLR optimizes the selection of interfaces to servers to minimize the probability of missing a deadline. We conduct a cloud-connected driving experiment with two 5G service providers, demonstrating FogROS2-PLR effectively provides smooth service quality even if one of the service providers experiences low coverage and base station handover. We use 99 Percentile (P99) latency to evaluate anomalous long-tail latency behavior. In one experiment, FogROS2-PLR improves P99 latency by up to 3.7x compared to using one service provider. We deploy FogROS2-PLR on a physical Stretch 3 robot performing an indoor human-tracking task. Even in a fully covered Wi-Fi and 5G environment, FogROS2-PLR improves the responsiveness of the robot reducing mean latency by 36% and P99 latency by 33%.<|reference_end|>
arxiv
@article{chen2024fogros2-plr:, title={FogROS2-PLR: Probabilistic Latency-Reliability For Cloud Robotics}, author={Kaiyuan Chen, Nan Tian, Christian Juette, Tianshuang Qiu, Liu Ren, John Kubiatowicz, and Ken Goldberg}, journal={arXiv preprint arXiv:2410.05562}, year={2024}, archivePrefix={arXiv}, eprint={2410.05562}, primaryClass={cs.RO cs.DC cs.NI} }
chen2024fogros2-plr:
arxiv-666810
2410.05563
Rational Metareasoning for Large Language Models
<|reference_start|>Rational Metareasoning for Large Language Models: Being prompted to engage in reasoning has emerged as a core technique for using large language models (LLMs), deploying additional inference-time compute to improve task performance. However, as LLMs increase in both size and adoption, inference costs are correspondingly becoming increasingly burdensome. How, then, might we optimize reasoning's cost-performance tradeoff? This work introduces a novel approach based on computational models of metareasoning used in cognitive science, training LLMs to selectively use intermediate reasoning steps only when necessary. We first develop a reward function that incorporates the Value of Computation by penalizing unnecessary reasoning, then use this reward function with Expert Iteration to train the LLM. Compared to few-shot chain-of-thought prompting and STaR, our method significantly reduces inference costs (20-37\% fewer tokens generated across three models) while maintaining task performance across diverse datasets.<|reference_end|>
arxiv
@article{de sabbata2024rational, title={Rational Metareasoning for Large Language Models}, author={C. Nicol`o De Sabbata, Theodore R. Sumers, Thomas L. Griffiths}, journal={arXiv preprint arXiv:2410.05563}, year={2024}, archivePrefix={arXiv}, eprint={2410.05563}, primaryClass={cs.CL cs.AI cs.LG} }
de sabbata2024rational
arxiv-666811
2410.05564
Unsupervised Representation Learning from Sparse Transformation Analysis
<|reference_start|>Unsupervised Representation Learning from Sparse Transformation Analysis: There is a vast literature on representation learning based on principles such as coding efficiency, statistical independence, causality, controllability, or symmetry. In this paper we propose to learn representations from sequence data by factorizing the transformations of the latent variables into sparse components. Input data are first encoded as distributions of latent activations and subsequently transformed using a probability flow model, before being decoded to predict a future input state. The flow model is decomposed into a number of rotational (divergence-free) vector fields and a number of potential flow (curl-free) fields. Our sparsity prior encourages only a small number of these fields to be active at any instant and infers the speed with which the probability flows along these fields. Training this model is completely unsupervised using a standard variational objective and results in a new form of disentangled representations where the input is not only represented by a combination of independent factors, but also by a combination of independent transformation primitives given by the learned flow fields. When viewing the transformations as symmetries one may interpret this as learning approximately equivariant representations. Empirically we demonstrate that this model achieves state of the art in terms of both data likelihood and unsupervised approximate equivariance errors on datasets composed of sequence transformations.<|reference_end|>
arxiv
@article{song2024unsupervised, title={Unsupervised Representation Learning from Sparse Transformation Analysis}, author={Yue Song, Thomas Anderson Keller, Yisong Yue, Pietro Perona, Max Welling}, journal={arXiv preprint arXiv:2410.05564}, year={2024}, archivePrefix={arXiv}, eprint={2410.05564}, primaryClass={cs.LG cs.CV} }
song2024unsupervised
arxiv-666812
2410.05565
Chain and Causal Attention for Efficient Entity Tracking
<|reference_start|>Chain and Causal Attention for Efficient Entity Tracking: This paper investigates the limitations of transformers for entity-tracking tasks in large language models. We identify a theoretical constraint, showing that transformers require at least $\log_2 (n+1)$ layers to handle entity tracking with $n$ state changes. To address this issue, we propose an efficient and frugal enhancement to the standard attention mechanism, enabling it to manage long-term dependencies more efficiently. By considering attention as an adjacency matrix, our model can track entity states with a single layer. Empirical results demonstrate significant improvements in entity tracking datasets while keeping competitive performance on standard natural language modeling. Our modified attention allows us to achieve the same performance with drastically fewer layers. Additionally, our enhanced mechanism reveals structured internal representations of attention. Extensive experiments on both toy and complex datasets validate our approach. Our contributions include theoretical insights, an improved attention mechanism, and empirical validation.<|reference_end|>
arxiv
@article{fagnou2024chain, title={Chain and Causal Attention for Efficient Entity Tracking}, author={Erwan Fagnou, Paul Caillon, Blaise Delattre, Alexandre Allauzen}, journal={arXiv preprint arXiv:2410.05565}, year={2024}, archivePrefix={arXiv}, eprint={2410.05565}, primaryClass={cs.LG cs.CL} }
fagnou2024chain
arxiv-666813
2410.05570
Conversate: Supporting Reflective Learning in Interview Practice Through Interactive Simulation and Dialogic Feedback
<|reference_start|>Conversate: Supporting Reflective Learning in Interview Practice Through Interactive Simulation and Dialogic Feedback: Job interviews play a critical role in shaping one's career, yet practicing interview skills can be challenging, especially without access to human coaches or peers for feedback. Recent advancements in large language models (LLMs) present an opportunity to enhance the interview practice experience. Yet, little research has explored the effectiveness and user perceptions of such systems or the benefits and challenges of using LLMs for interview practice. Furthermore, while prior work and recent commercial tools have demonstrated the potential of AI to assist with interview practice, they often deliver one-way feedback, where users only receive information about their performance. By contrast, dialogic feedback, a concept developed in learning sciences, is a two-way interaction feedback process that allows users to further engage with and learn from the provided feedback through interactive dialogue. This paper introduces Conversate, a web-based application that supports reflective learning in job interview practice by leveraging large language models (LLMs) for interactive interview simulations and dialogic feedback. To start the interview session, the user provides the title of a job position (e.g., entry-level software engineer) in the system. Then, our system will initialize the LLM agent to start the interview simulation by asking the user an opening interview question and following up with questions carefully adapted to subsequent user responses. After the interview session, our back-end LLM framework will then analyze the user's responses and highlight areas for improvement. Users can then annotate the transcript by selecting specific sections and writing self-reflections. Finally, the user can interact with the system for dialogic feedback, conversing with the LLM agent to learn from and iteratively refine their answers based on the agent's guidance.<|reference_end|>
arxiv
@article{daryanto2024conversate:, title={Conversate: Supporting Reflective Learning in Interview Practice Through Interactive Simulation and Dialogic Feedback}, author={Taufiq Daryanto, Xiaohan Ding, Lance T. Wilhelm, Sophia Stil, Kirk McInnis Knutsen, and Eugenia H. Rho}, journal={arXiv preprint arXiv:2410.05570}, year={2024}, doi={10.1145/3701188}, archivePrefix={arXiv}, eprint={2410.05570}, primaryClass={cs.HC} }
daryanto2024conversate:
arxiv-666814
2410.05572
Improved deep learning of chaotic dynamical systems with multistep penalty losses
<|reference_start|>Improved deep learning of chaotic dynamical systems with multistep penalty losses: Predicting the long-term behavior of chaotic systems remains a formidable challenge due to their extreme sensitivity to initial conditions and the inherent limitations of traditional data-driven modeling approaches. This paper introduces a novel framework that addresses these challenges by leveraging the recently proposed multi-step penalty (MP) optimization technique. Our approach extends the applicability of MP optimization to a wide range of deep learning architectures, including Fourier Neural Operators and UNETs. By introducing penalized local discontinuities in the forecast trajectory, we effectively handle the non-convexity of loss landscapes commonly encountered in training neural networks for chaotic systems. We demonstrate the effectiveness of our method through its application to two challenging use-cases: the prediction of flow velocity evolution in two-dimensional turbulence and ocean dynamics using reanalysis data. Our results highlight the potential of this approach for accurate and stable long-term prediction of chaotic dynamics, paving the way for new advancements in data-driven modeling of complex natural phenomena.<|reference_end|>
arxiv
@article{chakraborty2024improved, title={Improved deep learning of chaotic dynamical systems with multistep penalty losses}, author={Dibyajyoti Chakraborty, Seung Whan Chung, Ashesh Chattopadhyay, Romit Maulik}, journal={arXiv preprint arXiv:2410.05572}, year={2024}, archivePrefix={arXiv}, eprint={2410.05572}, primaryClass={cs.LG cs.AI math.DS} }
chakraborty2024improved
arxiv-666815
2410.05573
TaeBench: Improving Quality of Toxic Adversarial Examples
<|reference_start|>TaeBench: Improving Quality of Toxic Adversarial Examples: Toxicity text detectors can be vulnerable to adversarial examples - small perturbations to input text that fool the systems into wrong detection. Existing attack algorithms are time-consuming and often produce invalid or ambiguous adversarial examples, making them less useful for evaluating or improving real-world toxicity content moderators. This paper proposes an annotation pipeline for quality control of generated toxic adversarial examples (TAE). We design model-based automated annotation and human-based quality verification to assess the quality requirements of TAE. Successful TAE should fool a target toxicity model into making benign predictions, be grammatically reasonable, appear natural like human-generated text, and exhibit semantic toxicity. When applying these requirements to more than 20 state-of-the-art (SOTA) TAE attack recipes, we find many invalid samples from a total of 940k raw TAE attack generations. We then utilize the proposed pipeline to filter and curate a high-quality TAE dataset we call TaeBench (of size 264k). Empirically, we demonstrate that TaeBench can effectively transfer-attack SOTA toxicity content moderation models and services. Our experiments also show that TaeBench with adversarial training achieve significant improvements of the robustness of two toxicity detectors.<|reference_end|>
arxiv
@article{zhu2024taebench:, title={TaeBench: Improving Quality of Toxic Adversarial Examples}, author={Xuan Zhu, Dmitriy Bespalov, Liwen You, Ninad Kulkarni, Yanjun Qi}, journal={arXiv preprint arXiv:2410.05573}, year={2024}, archivePrefix={arXiv}, eprint={2410.05573}, primaryClass={cs.CR cs.AI cs.CL cs.LG} }
zhu2024taebench:
arxiv-666816
2410.05575
ClaimBrush: A Novel Framework for Automated Patent Claim Refinement Based on Large Language Models
<|reference_start|>ClaimBrush: A Novel Framework for Automated Patent Claim Refinement Based on Large Language Models: Automatic refinement of patent claims in patent applications is crucial from the perspective of intellectual property strategy. In this paper, we propose ClaimBrush, a novel framework for automated patent claim refinement that includes a dataset and a rewriting model. We constructed a dataset for training and evaluating patent claim rewriting models by collecting a large number of actual patent claim rewriting cases from the patent examination process. Using the constructed dataset, we built an automatic patent claim rewriting model by fine-tuning a large language model. Furthermore, we enhanced the performance of the automatic patent claim rewriting model by applying preference optimization based on a prediction model of patent examiners' Office Actions. The experimental results showed that our proposed rewriting model outperformed heuristic baselines and zero-shot learning in state-of-the-art large language models. Moreover, preference optimization based on patent examiners' preferences boosted the performance of patent claim refinement.<|reference_end|>
arxiv
@article{kawano2024claimbrush:, title={ClaimBrush: A Novel Framework for Automated Patent Claim Refinement Based on Large Language Models}, author={Seiya Kawano, Hirofumi Nonaka, Koichiro Yoshino}, journal={arXiv preprint arXiv:2410.05575}, year={2024}, archivePrefix={arXiv}, eprint={2410.05575}, primaryClass={cs.CL cs.AI} }
kawano2024claimbrush:
arxiv-666817
2410.05576
Submodular Optimization for Keyframe Selection & Usage in SLAM
<|reference_start|>Submodular Optimization for Keyframe Selection & Usage in SLAM: Keyframes are LiDAR scans saved for future reference in Simultaneous Localization And Mapping (SLAM), but despite their central importance most algorithms leave choices of which scans to save and how to use them to wasteful heuristics. This work proposes two novel keyframe selection strategies for localization and map summarization, as well as a novel approach to submap generation which selects keyframes that best constrain localization. Our results show that online keyframe selection and submap generation reduce the number of saved keyframes and improve per scan computation time without compromising localization performance. We also present a map summarization feature for quickly capturing environments under strict map size constraints.<|reference_end|>
arxiv
@article{thorne2024submodular, title={Submodular Optimization for Keyframe Selection & Usage in SLAM}, author={David Thorne, Nathan Chan, Yanlong Ma, Christa S. Robison, Philip R. Osteen, Brett T. Lopez}, journal={arXiv preprint arXiv:2410.05576}, year={2024}, archivePrefix={arXiv}, eprint={2410.05576}, primaryClass={cs.RO} }
thorne2024submodular
arxiv-666818
2410.05577
Underwater Object Detection in the Era of Artificial Intelligence: Current, Challenge, and Future
<|reference_start|>Underwater Object Detection in the Era of Artificial Intelligence: Current, Challenge, and Future: Underwater object detection (UOD), aiming to identify and localise the objects in underwater images or videos, presents significant challenges due to the optical distortion, water turbidity, and changing illumination in underwater scenes. In recent years, artificial intelligence (AI) based methods, especially deep learning methods, have shown promising performance in UOD. To further facilitate future advancements, we comprehensively study AI-based UOD. In this survey, we first categorise existing algorithms into traditional machine learning-based methods and deep learning-based methods, and summarise them by considering learning strategy, experimental dataset, utilised features or frameworks, and learning stage. Next, we discuss the potential challenges and suggest possible solutions and new directions. We also perform both quantitative and qualitative evaluations of mainstream algorithms across multiple benchmark datasets by considering the diverse and biased experimental setups. Finally, we introduce two off-the-shelf detection analysis tools, Diagnosis and TIDE, which well-examine the effects of object characteristics and various types of errors on detectors. These tools help identify the strengths and weaknesses of detectors, providing insigts for further improvement. The source codes, trained models, utilised datasets, detection results, and detection analysis tools are public available at \url{https://github.com/LongChenCV/UODReview}, and will be regularly updated.<|reference_end|>
arxiv
@article{chen2024underwater, title={Underwater Object Detection in the Era of Artificial Intelligence: Current, Challenge, and Future}, author={Long Chen, Yuzhi Huang, Junyu Dong, Qi Xu, Sam Kwong, Huimin Lu, Huchuan Lu, and Chongyi Li}, journal={arXiv preprint arXiv:2410.05577}, year={2024}, archivePrefix={arXiv}, eprint={2410.05577}, primaryClass={cs.CV} }
chen2024underwater
arxiv-666819
2410.05578
Swift Sampler: Efficient Learning of Sampler by 10 Parameters
<|reference_start|>Swift Sampler: Efficient Learning of Sampler by 10 Parameters: Data selection is essential for training deep learning models. An effective data sampler assigns proper sampling probability for training data and helps the model converge to a good local minimum with high performance. Previous studies in data sampling are mainly based on heuristic rules or learning through a huge amount of time-consuming trials. In this paper, we propose an automatic \textbf{swift sampler} search algorithm, \textbf{SS}, to explore automatically learning effective samplers efficiently. In particular, \textbf{SS} utilizes a novel formulation to map a sampler to a low dimension of hyper-parameters and uses an approximated local minimum to quickly examine the quality of a sampler. Benefiting from its low computational expense, \textbf{SS} can be applied on large-scale data sets with high efficiency. Comprehensive experiments on various tasks demonstrate that \textbf{SS} powered sampling can achieve obvious improvements (e.g., 1.5\% on ImageNet) and transfer among different neural networks. Project page: https://github.com/Alexander-Yao/Swift-Sampler.<|reference_end|>
arxiv
@article{yao2024swift, title={Swift Sampler: Efficient Learning of Sampler by 10 Parameters}, author={Jiawei Yao, Chuming Li, Canran Xiao}, journal={arXiv preprint arXiv:2410.05578}, year={2024}, archivePrefix={arXiv}, eprint={2410.05578}, primaryClass={cs.LG cs.AI} }
yao2024swift
arxiv-666820
2410.05579
A Survey on Annotations in Information Visualization: Empirical Insights, Applications, and Challenges
<|reference_start|>A Survey on Annotations in Information Visualization: Empirical Insights, Applications, and Challenges: We present a comprehensive survey on the use of annotations in information visualizations, highlighting their crucial role in improving audience understanding and engagement with visual data. Our investigation encompasses empirical studies on annotations, showcasing their impact on user engagement, interaction, comprehension, and memorability across various contexts. We also study the existing tools and techniques for creating annotations and their diverse applications, enhancing the understanding of both practical and theoretical aspects of annotations in data visualization. Additionally, we identify existing research gaps and propose potential future research directions, making our survey a valuable resource for researchers, visualization designers, and practitioners by providing a thorough understanding of the application of annotations in visualization.<|reference_end|>
arxiv
@article{rahman2024a, title={A Survey on Annotations in Information Visualization: Empirical Insights, Applications, and Challenges}, author={Md Dilshadur Rahman, Bhavana Doppalapudi, Ghulam Jilani Quadri, Paul Rosen}, journal={arXiv preprint arXiv:2410.05579}, year={2024}, archivePrefix={arXiv}, eprint={2410.05579}, primaryClass={cs.HC} }
rahman2024a
arxiv-666821
2410.05580
Noncrossing Longest Paths and Cycles
<|reference_start|>Noncrossing Longest Paths and Cycles: Edge crossings in geometric graphs are sometimes undesirable as they could lead to unwanted situations such as collisions in motion planning and inconsistency in VLSI layout. Short geometric structures such as shortest perfect matchings, shortest spanning trees, shortest spanning paths, and shortest spanning cycles on a given point set are inherently noncrossing. However, the longest such structures need not be noncrossing. In fact, it is intuitive to expect many edge crossings in various geometric graphs that are longest. Recently, \'Alvarez-Rebollar, Cravioto-Lagos, Mar\'in, Sol\'e-Pi, and Urrutia (Graphs and Combinatorics, 2024) constructed a set of points for which the longest perfect matching is noncrossing. They raised several challenging questions in this direction. In particular, they asked whether the longest spanning path, on any finite set of points in the plane, must have a pair of crossing edges. They also conjectured that the longest spanning cycle must have a pair of crossing edges. In this paper, we give a negative answer to the question and also refute the conjecture. We present a framework for constructing arbitrarily large point sets for which the longest perfect matchings, the longest spanning paths, and the longest spanning cycles are noncrossing.<|reference_end|>
arxiv
@article{aloupis2024noncrossing, title={Noncrossing Longest Paths and Cycles}, author={Greg Aloupis, Ahmad Biniaz, Prosenjit Bose, Jean-Lou De Carufel, David Eppstein, Anil Maheshwari, Saeed Odak, Michiel Smid, Csaba D. T'oth, Pavel Valtr}, journal={arXiv preprint arXiv:2410.05580}, year={2024}, archivePrefix={arXiv}, eprint={2410.05580}, primaryClass={cs.CG} }
aloupis2024noncrossing
arxiv-666822
2410.05581
Adaptation Odyssey in LLMs: Why Does Additional Pretraining Sometimes Fail to Improve?
<|reference_start|>Adaptation Odyssey in LLMs: Why Does Additional Pretraining Sometimes Fail to Improve?: In the last decade, the generalization and adaptation abilities of deep learning models were typically evaluated on fixed training and test distributions. Contrary to traditional deep learning, large language models (LLMs) are (i) even more overparameterized, (ii) trained on unlabeled text corpora curated from the Internet with minimal human intervention, and (iii) trained in an online fashion. These stark contrasts prevent researchers from transferring lessons learned on model generalization and adaptation in deep learning contexts to LLMs. To this end, our short paper introduces empirical observations that aim to shed light on further training of already pretrained language models. Specifically, we demonstrate that training a model on a text domain could degrade its perplexity on the test portion of the same domain. We observe with our subsequent analysis that the performance degradation is positively correlated with the similarity between the additional and the original pretraining dataset of the LLM. Our further token-level perplexity observations reveals that the perplexity degradation is due to a handful of tokens that are not informative about the domain. We hope these findings will guide us in determining when to adapt a model vs when to rely on its foundational capabilities.<|reference_end|>
arxiv
@article{öncel2024adaptation, title={Adaptation Odyssey in LLMs: Why Does Additional Pretraining Sometimes Fail to Improve?}, author={F{i}rat "Oncel, Matthias Bethge, Beyza Ermis, Mirco Ravanelli, Cem Subakan, c{C}au{g}atay Y{i}ld{i}z}, journal={arXiv preprint arXiv:2410.05581}, year={2024}, archivePrefix={arXiv}, eprint={2410.05581}, primaryClass={cs.CL cs.AI cs.LG} }
öncel2024adaptation
arxiv-666823
2410.05582
Gen-Drive: Enhancing Diffusion Generative Driving Policies with Reward Modeling and Reinforcement Learning Fine-tuning
<|reference_start|>Gen-Drive: Enhancing Diffusion Generative Driving Policies with Reward Modeling and Reinforcement Learning Fine-tuning: Autonomous driving necessitates the ability to reason about future interactions between traffic agents and to make informed evaluations for planning. This paper introduces the \textit{Gen-Drive} framework, which shifts from the traditional prediction and deterministic planning framework to a generation-then-evaluation planning paradigm. The framework employs a behavior diffusion model as a scene generator to produce diverse possible future scenarios, thereby enhancing the capability for joint interaction reasoning. To facilitate decision-making, we propose a scene evaluator (reward) model, trained with pairwise preference data collected through VLM assistance, thereby reducing human workload and enhancing scalability. Furthermore, we utilize an RL fine-tuning framework to improve the generation quality of the diffusion model, rendering it more effective for planning tasks. We conduct training and closed-loop planning tests on the nuPlan dataset, and the results demonstrate that employing such a generation-then-evaluation strategy outperforms other learning-based approaches. Additionally, the fine-tuned generative driving policy shows significant enhancements in planning performance. We further demonstrate that utilizing our learned reward model for evaluation or RL fine-tuning leads to better planning performance compared to relying on human-designed rewards. Project website: https://mczhi.github.io/GenDrive.<|reference_end|>
arxiv
@article{huang2024gen-drive:, title={Gen-Drive: Enhancing Diffusion Generative Driving Policies with Reward Modeling and Reinforcement Learning Fine-tuning}, author={Zhiyu Huang, Xinshuo Weng, Maximilian Igl, Yuxiao Chen, Yulong Cao, Boris Ivanovic, Marco Pavone, Chen Lv}, journal={arXiv preprint arXiv:2410.05582}, year={2024}, archivePrefix={arXiv}, eprint={2410.05582}, primaryClass={cs.RO} }
huang2024gen-drive:
arxiv-666824
2410.05583
NegMerge: Consensual Weight Negation for Strong Machine Unlearning
<|reference_start|>NegMerge: Consensual Weight Negation for Strong Machine Unlearning: Machine unlearning aims to selectively remove specific knowledge from a model. Current methods, such as task arithmetic, rely on fine-tuning models on the forget set, generating a task vector, and subtracting it from the original model. However, we argue the effectiveness of this approach is highly sensitive to hyperparameter selection, necessitating careful validation to identify the best model among many fine-tuned candidates. In this paper, we propose a novel method that leverages all given fine-tuned models rather than selecting a single one. By constructing task vectors from models trained with varied hyperparameters and merging only the components of the task vectors with consistent signs, we perform unlearning by negating the merged task vector from the original model. Given that existing methods also utilize multiple fine-tuned models, our approach delivers more effective unlearning without incurring additional computational costs. We demonstrate the effectiveness of our method on both vision-language models and standard image classification models, showing improved unlearning performance with minimal degradation on the retain set, outperforming state-of-the-art techniques.<|reference_end|>
arxiv
@article{kim2024negmerge:, title={NegMerge: Consensual Weight Negation for Strong Machine Unlearning}, author={Hyoseo Kim, Dongyoon Han, Junsuk Choe}, journal={arXiv preprint arXiv:2410.05583}, year={2024}, archivePrefix={arXiv}, eprint={2410.05583}, primaryClass={cs.LG cs.AI} }
kim2024negmerge:
arxiv-666825
2410.05584
Rethinking Reward Model Evaluation: Are We Barking up the Wrong Tree?
<|reference_start|>Rethinking Reward Model Evaluation: Are We Barking up the Wrong Tree?: Reward Models (RMs) are crucial for aligning language models with human preferences. Currently, the evaluation of RMs depends on measuring accuracy against a validation set of manually annotated preference data. Although this method is straightforward and widely adopted, the relationship between RM accuracy and downstream policy performance remains under-explored. In this work, we conduct experiments in a synthetic setting to investigate how differences in RM measured by accuracy translate into gaps in optimized policy performance. Our findings reveal that while there is a weak positive correlation between accuracy and downstream performance, policies optimized towards RMs with similar accuracy can exhibit quite different performance. Moreover, we discover that the way of measuring accuracy significantly impacts its ability to predict the final policy performance. Through the lens of Regressional Goodhart's effect, we identify the existence of exogenous variables impacting the relationship between RM quality measured by accuracy and policy model capability. This underscores the inadequacy of relying solely on accuracy to reflect their impact on policy optimization.<|reference_end|>
arxiv
@article{wen2024rethinking, title={Rethinking Reward Model Evaluation: Are We Barking up the Wrong Tree?}, author={Xueru Wen, Jie Lou, Yaojie Lu, Hongyu Lin, Xing Yu, Xinyu Lu, Ben He, Xianpei Han, Debing Zhang, Le Sun}, journal={arXiv preprint arXiv:2410.05584}, year={2024}, archivePrefix={arXiv}, eprint={2410.05584}, primaryClass={cs.LG cs.AI cs.CL} }
wen2024rethinking
arxiv-666826
2410.05585
Towards Robust Spacecraft Trajectory Optimization via Transformers
<|reference_start|>Towards Robust Spacecraft Trajectory Optimization via Transformers: Future multi-spacecraft missions require robust autonomous trajectory optimization capabilities to ensure safe and efficient rendezvous operations. This capability hinges on solving non-convex optimal control problems in real time, although traditional iterative methods such as sequential convex programming impose significant computational challenges. To mitigate this burden, the Autonomous Rendezvous Transformer introduced a generative model trained to provide near-optimal initial guesses. This approach provides convergence to better local optima (e.g., fuel optimality), improves feasibility rates, and results in faster convergence speed of optimization algorithms through warm-starting. This work extends the capabilities of ART to address robust chance-constrained optimal control problems. Specifically, ART is applied to challenging rendezvous scenarios in Low Earth Orbit (LEO), ensuring fault-tolerant behavior under uncertainty. Through extensive experimentation, the proposed warm-starting strategy is shown to consistently produce high-quality reference trajectories, achieving up to 30% cost improvement and 50% reduction in infeasible cases compared to conventional methods, demonstrating robust performance across multiple state representations. Additionally, a post hoc evaluation framework is proposed to assess the quality of generated trajectories and mitigate runtime failures, marking an initial step toward the reliable deployment of AI-driven solutions in safety-critical autonomous systems such as spacecraft.<|reference_end|>
arxiv
@article{takubo2024towards, title={Towards Robust Spacecraft Trajectory Optimization via Transformers}, author={Yuji Takubo, Tommaso Guffanti, Daniele Gammelli, Marco Pavone, Simone D'Amico}, journal={arXiv preprint arXiv:2410.05585}, year={2024}, archivePrefix={arXiv}, eprint={2410.05585}, primaryClass={math.OC cs.AI cs.RO} }
takubo2024towards
arxiv-666827
2410.05586
TeaserGen: Generating Teasers for Long Documentaries
<|reference_start|>TeaserGen: Generating Teasers for Long Documentaries: Teasers are an effective tool for promoting content in entertainment, commercial and educational fields. However, creating an effective teaser for long videos is challenging for it requires long-range multimodal modeling on the input videos, while necessitating maintaining audiovisual alignments, managing scene changes and preserving factual accuracy for the output teasers. Due to the lack of a publicly-available dataset, progress along this research direction has been hindered. In this work, we present DocumentaryNet, a collection of 1,269 documentaries paired with their teasers, featuring multimodal data streams of video, speech, music, sound effects and narrations. With DocumentaryNet, we propose a new two-stage system for generating teasers from long documentaries. The proposed TeaserGen system first generates the teaser narration from the transcribed narration of the documentary using a pretrained large language model, and then selects the most relevant visual content to accompany the generated narration through language-vision models. For narration-video matching, we explore two approaches: a pretraining-based model using pretrained contrastive language-vision models and a deep sequential model that learns the mapping between the narrations and visuals. Our experimental results show that the pretraining-based approach is more effective at identifying relevant visual content than directly trained deep autoregressive models.<|reference_end|>
arxiv
@article{xu2024teasergen:, title={TeaserGen: Generating Teasers for Long Documentaries}, author={Weihan Xu, Paul Pu Liang, Haven Kim, Julian McAuley, Taylor Berg-Kirkpatrick, Hao-Wen Dong}, journal={arXiv preprint arXiv:2410.05586}, year={2024}, archivePrefix={arXiv}, eprint={2410.05586}, primaryClass={cs.CV cs.AI} }
xu2024teasergen:
arxiv-666828
2410.05587
Deep Learning-Based Decoding of Linear Block Codes for Spin-Torque Transfer Magnetic Random Access Memory (STT-MRAM)
<|reference_start|>Deep Learning-Based Decoding of Linear Block Codes for Spin-Torque Transfer Magnetic Random Access Memory (STT-MRAM): Thanks to its superior features of fast read/write speed and low power consumption, spin-torque transfer magnetic random access memory (STT-MRAM) has become a promising non-volatile memory (NVM) technology that is suitable for many applications. However, the reliability of STT-MRAM is seriously affected by the variation of the memory fabrication process and the working temperature, and the later will lead to an unknown offset of the channel. Hence, there is a pressing need to develop more effective error correction coding techniques to tackle these imperfections and improve the reliability of STT-MRAM. In this work, we propose, for the first time, the application of deep-learning (DL) based algorithms and techniques to improve the decoding performance of linear block codes with short codeword lengths for STT-MRAM. We formulate the belief propagation (BP) decoding of linear block code as a neural network (NN), and propose a novel neural normalized-offset reliability-based min-sum (NNORB-MS) decoding algorithm. We successfully apply our proposed decoding algorithm to the STT-MRAM channel through channel symmetrization to overcome the channel asymmetry. We also propose an NN-based soft information generation method (SIGM) to take into account the unknown offset of the channel. Simulation results demonstrate that our proposed NNORB-MS decoding algorithm can achieve significant performance gain over both the hard-decision decoding (HDD) and the regular reliability-based min-sum (RB-MS) decoding algorithm, for cases without and with the unknown channel offset. Moreover, the decoder structure and time complexity of the NNORB-MS algorithm remain similar to those of the regular RB-MS algorithm.<|reference_end|>
arxiv
@article{zhong2024deep, title={Deep Learning-Based Decoding of Linear Block Codes for Spin-Torque Transfer Magnetic Random Access Memory (STT-MRAM)}, author={Xingwei Zhong, Kui Cai, Zhen Mei, and Tony Q.S.Quek}, journal={arXiv preprint arXiv:2410.05587}, year={2024}, archivePrefix={arXiv}, eprint={2410.05587}, primaryClass={cs.IT eess.SP math.IT} }
zhong2024deep
arxiv-666829
2410.05589
ParallelSpec: Parallel Drafter for Efficient Speculative Decoding
<|reference_start|>ParallelSpec: Parallel Drafter for Efficient Speculative Decoding: Speculative decoding has proven to be an efficient solution to large language model (LLM) inference, where the small drafter predicts future tokens at a low cost, and the target model is leveraged to verify them in parallel. However, most existing works still draft tokens auto-regressively to maintain sequential dependency in language modeling, which we consider a huge computational burden in speculative decoding. We present ParallelSpec, an alternative to auto-regressive drafting strategies in state-of-the-art speculative decoding approaches. In contrast to auto-regressive drafting in the speculative stage, we train a parallel drafter to serve as an efficient speculative model. ParallelSpec learns to efficiently predict multiple future tokens in parallel using a single model, and it can be integrated into any speculative decoding framework that requires aligning the output distributions of the drafter and the target model with minimal training cost. Experimental results show that ParallelSpec accelerates baseline methods in latency up to 62% on text generation benchmarks from different domains, and it achieves 2.84X overall speedup on the Llama-2-13B model using third-party evaluation criteria.<|reference_end|>
arxiv
@article{xiao2024parallelspec:, title={ParallelSpec: Parallel Drafter for Efficient Speculative Decoding}, author={Zilin Xiao, Hongming Zhang, Tao Ge, Siru Ouyang, Vicente Ordonez, Dong Yu}, journal={arXiv preprint arXiv:2410.05589}, year={2024}, archivePrefix={arXiv}, eprint={2410.05589}, primaryClass={cs.CL cs.LG} }
xiao2024parallelspec:
arxiv-666830
2410.05591
TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation
<|reference_start|>TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation: Despite significant advancements in customizing text-to-image and video generation models, generating images and videos that effectively integrate multiple personalized concepts remains a challenging task. To address this, we present TweedieMix, a novel method for composing customized diffusion models during the inference phase. By analyzing the properties of reverse diffusion sampling, our approach divides the sampling process into two stages. During the initial steps, we apply a multiple object-aware sampling technique to ensure the inclusion of the desired target objects. In the later steps, we blend the appearances of the custom concepts in the de-noised image space using Tweedie's formula. Our results demonstrate that TweedieMix can generate multiple personalized concepts with higher fidelity than existing methods. Moreover, our framework can be effortlessly extended to image-to-video diffusion models, enabling the generation of videos that feature multiple personalized concepts. Results and source code are in our anonymous project page.<|reference_end|>
arxiv
@article{kwon2024tweediemix:, title={TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation}, author={Gihyun Kwon, Jong Chul Ye}, journal={arXiv preprint arXiv:2410.05591}, year={2024}, archivePrefix={arXiv}, eprint={2410.05591}, primaryClass={cs.CV} }
kwon2024tweediemix:
arxiv-666831
2410.05592
Training Stiff Neural Ordinary Differential Equations with Implicit Single-Step Methods
<|reference_start|>Training Stiff Neural Ordinary Differential Equations with Implicit Single-Step Methods: Stiff systems of ordinary differential equations (ODEs) are pervasive in many science and engineering fields, yet standard neural ODE approaches struggle to learn them. This limitation is the main barrier to the widespread adoption of neural ODEs. In this paper, we propose an approach based on single-step implicit schemes to enable neural ODEs to handle stiffness and demonstrate that our implicit neural ODE method can learn stiff dynamics. This work addresses a key limitation in current neural ODE methods, paving the way for their use in a wider range of scientific problems.<|reference_end|>
arxiv
@article{fronk2024training, title={Training Stiff Neural Ordinary Differential Equations with Implicit Single-Step Methods}, author={Colby Fronk, Linda Petzold}, journal={arXiv preprint arXiv:2410.05592}, year={2024}, archivePrefix={arXiv}, eprint={2410.05592}, primaryClass={math.NA cs.AI cs.CE cs.NA} }
fronk2024training
arxiv-666832
2410.05593
When Graph Neural Networks Meet Dynamic Mode Decomposition
<|reference_start|>When Graph Neural Networks Meet Dynamic Mode Decomposition: Graph Neural Networks (GNNs) have emerged as fundamental tools for a wide range of prediction tasks on graph-structured data. Recent studies have drawn analogies between GNN feature propagation and diffusion processes, which can be interpreted as dynamical systems. In this paper, we delve deeper into this perspective by connecting the dynamics in GNNs to modern Koopman theory and its numerical method, Dynamic Mode Decomposition (DMD). We illustrate how DMD can estimate a low-rank, finite-dimensional linear operator based on multiple states of the system, effectively approximating potential nonlinear interactions between nodes in the graph. This approach allows us to capture complex dynamics within the graph accurately and efficiently. We theoretically establish a connection between the DMD-estimated operator and the original dynamic operator between system states. Building upon this foundation, we introduce a family of DMD-GNN models that effectively leverage the low-rank eigenfunctions provided by the DMD algorithm. We further discuss the potential of enhancing our approach by incorporating domain-specific constraints such as symmetry into the DMD computation, allowing the corresponding GNN models to respect known physical properties of the underlying system. Our work paves the path for applying advanced dynamical system analysis tools via GNNs. We validate our approach through extensive experiments on various learning tasks, including directed graphs, large-scale graphs, long-range interactions, and spatial-temporal graphs. We also empirically verify that our proposed models can serve as powerful encoders for link prediction tasks. The results demonstrate that our DMD-enhanced GNNs achieve state-of-the-art performance, highlighting the effectiveness of integrating DMD into GNN frameworks.<|reference_end|>
arxiv
@article{shi2024when, title={When Graph Neural Networks Meet Dynamic Mode Decomposition}, author={Dai Shi, Lequan Lin, Andi Han, Zhiyong Wang, Yi Guo, Junbin Gao}, journal={arXiv preprint arXiv:2410.05593}, year={2024}, archivePrefix={arXiv}, eprint={2410.05593}, primaryClass={cs.LG} }
shi2024when
arxiv-666833
2410.05595
Disruption Risk Evaluation on Large-scale Production Network with Establishments and Products
<|reference_start|>Disruption Risk Evaluation on Large-scale Production Network with Establishments and Products: We constructed an establishment-level production network where each establishment inputs and outputs multiple products, using data that includes the firm-level production network and establishments covering nearly all Japanese entities. The network represents the manufacturing sector with 183,951 establishments across 157,537 firms and 919,982 inter-establishment linkages. A probabilistic model of supply chain disruptions was applied to this network. The key findings are as follows: (1) The establishment-level network exhibits greater shock propagation compared to the firm-level network. (2) Incorporating actual product information leads to a larger impact on propagation compared to using industry-level information. (3) Regional shock simulations reveal that while the firm-level network shows greater shock propagation when the shock originates in Tokyo, no such difference is observed in the establishment-level network.<|reference_end|>
arxiv
@article{inoue2024disruption, title={Disruption Risk Evaluation on Large-scale Production Network with Establishments and Products}, author={Hiroyasu Inoue and Yasuyuki Todo}, journal={arXiv preprint arXiv:2410.05595}, year={2024}, archivePrefix={arXiv}, eprint={2410.05595}, primaryClass={cs.SI} }
inoue2024disruption
arxiv-666834
2410.05596
Linear Convergence of Data-Enabled Policy Optimization for Linear Quadratic Tracking
<|reference_start|>Linear Convergence of Data-Enabled Policy Optimization for Linear Quadratic Tracking: Data-enabled policy optimization (DeePO) is a newly proposed method to attack the open problem of direct adaptive LQR. In this work, we extend the DeePO framework to the linear quadratic tracking (LQT) with offline data. By introducing a covariance parameterization of the LQT policy, we derive a direct data-driven formulation of the LQT problem. Then, we use gradient descent method to iteratively update the parameterized policy to find an optimal LQT policy. Moreover, by revealing the connection between DeePO and model-based policy optimization, we prove the linear convergence of the DeePO iteration. Finally, a numerical experiment is given to validate the convergence results. We hope our work paves the way to direct adaptive LQT with online closed-loop data.<|reference_end|>
arxiv
@article{kang2024linear, title={Linear Convergence of Data-Enabled Policy Optimization for Linear Quadratic Tracking}, author={Shubo Kang, Feiran Zhao and Keyou You}, journal={arXiv preprint arXiv:2410.05596}, year={2024}, archivePrefix={arXiv}, eprint={2410.05596}, primaryClass={eess.SY cs.SY math.OC} }
kang2024linear
arxiv-666835
2410.05597
SMART: A Flexible Approach to Regression using Spline-Based Multivariate Adaptive Regression Trees
<|reference_start|>SMART: A Flexible Approach to Regression using Spline-Based Multivariate Adaptive Regression Trees: Decision trees are powerful for predictive modeling but often suffer from high variance when modeling continuous relationships. While algorithms like Multivariate Adaptive Regression Splines (MARS) excel at capturing such continuous relationships, they perform poorly when modeling discontinuities. To address the limitations of both approaches, we introduce Spline-based Multivariate Adaptive Regression Trees (SMART), which uses a decision tree to identify subsets of data with distinct continuous relationships and then leverages MARS to fit these relationships independently. Unlike other methods that rely on the tree structure to model interaction and higher-order terms, SMART leverages MARS's native ability to handle these terms, allowing the tree to focus solely on identifying discontinuities in the relationship. We test SMART on various datasets, demonstrating its improvement over state-of-the-art methods in such cases. Additionally, we provide an open-source implementation of our method to be used by practitioners.<|reference_end|>
arxiv
@article{pattie2024smart:, title={SMART: A Flexible Approach to Regression using Spline-Based Multivariate Adaptive Regression Trees}, author={William Pattie, Arvind Krishna}, journal={arXiv preprint arXiv:2410.05597}, year={2024}, archivePrefix={arXiv}, eprint={2410.05597}, primaryClass={stat.ML cs.LG} }
pattie2024smart:
arxiv-666836
2410.05600
Bridging Modalities: Enhancing Cross-Modality Hate Speech Detection with Few-Shot In-Context Learning
<|reference_start|>Bridging Modalities: Enhancing Cross-Modality Hate Speech Detection with Few-Shot In-Context Learning: The widespread presence of hate speech on the internet, including formats such as text-based tweets and vision-language memes, poses a significant challenge to digital platform safety. Recent research has developed detection models tailored to specific modalities; however, there is a notable gap in transferring detection capabilities across different formats. This study conducts extensive experiments using few-shot in-context learning with large language models to explore the transferability of hate speech detection between modalities. Our findings demonstrate that text-based hate speech examples can significantly enhance the classification accuracy of vision-language hate speech. Moreover, text-based demonstrations outperform vision-language demonstrations in few-shot learning settings. These results highlight the effectiveness of cross-modality knowledge transfer and offer valuable insights for improving hate speech detection systems.<|reference_end|>
arxiv
@article{hee2024bridging, title={Bridging Modalities: Enhancing Cross-Modality Hate Speech Detection with Few-Shot In-Context Learning}, author={Ming Shan Hee, Aditi Kumaresan, Roy Ka-Wei Lee}, journal={arXiv preprint arXiv:2410.05600}, year={2024}, archivePrefix={arXiv}, eprint={2410.05600}, primaryClass={cs.CL} }
hee2024bridging
arxiv-666837
2410.05601
ReFIR: Grounding Large Restoration Models with Retrieval Augmentation
<|reference_start|>ReFIR: Grounding Large Restoration Models with Retrieval Augmentation: Recent advances in diffusion-based Large Restoration Models (LRMs) have significantly improved photo-realistic image restoration by leveraging the internal knowledge embedded within model weights. However, existing LRMs often suffer from the hallucination dilemma, i.e., producing incorrect contents or textures when dealing with severe degradations, due to their heavy reliance on limited internal knowledge. In this paper, we propose an orthogonal solution called the Retrieval-augmented Framework for Image Restoration (ReFIR), which incorporates retrieved images as external knowledge to extend the knowledge boundary of existing LRMs in generating details faithful to the original scene. Specifically, we first introduce the nearest neighbor lookup to retrieve content-relevant high-quality images as reference, after which we propose the cross-image injection to modify existing LRMs to utilize high-quality textures from retrieved images. Thanks to the additional external knowledge, our ReFIR can well handle the hallucination challenge and facilitate faithfully results. Extensive experiments demonstrate that ReFIR can achieve not only high-fidelity but also realistic restoration results. Importantly, our ReFIR requires no training and is adaptable to various LRMs.<|reference_end|>
arxiv
@article{guo2024refir:, title={ReFIR: Grounding Large Restoration Models with Retrieval Augmentation}, author={Hang Guo, Tao Dai, Zhihao Ouyang, Taolin Zhang, Yaohua Zha, Bin Chen, Shu-tao Xia}, journal={arXiv preprint arXiv:2410.05601}, year={2024}, archivePrefix={arXiv}, eprint={2410.05601}, primaryClass={cs.CV} }
guo2024refir:
arxiv-666838
2410.05602
Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
<|reference_start|>Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series: Many real-world datasets, such as healthcare, climate, and economics, are often collected as irregular time series, which poses challenges for accurate modeling. In this paper, we propose the Amortized Control of continuous State Space Model (ACSSM) for continuous dynamical modeling of time series for irregular and discrete observations. We first present a multi-marginal Doob's $h$-transform to construct a continuous dynamical system conditioned on these irregular observations. Following this, we introduce a variational inference algorithm with a tight evidence lower bound (ELBO), leveraging stochastic optimal control (SOC) theory to approximate the intractable Doob's $h$-transform and simulate the conditioned dynamics. To improve efficiency and scalability during both training and inference, ACSSM employs amortized inference to decouple representation learning from the latent dynamics. Additionally, it incorporates a simulation-free latent dynamics framework and a transformer-based data assimilation scheme, facilitating parallel inference of the latent states and ELBO computation. Through empirical evaluations across a variety of real-world datasets, ACSSM demonstrates superior performance in tasks such as classification, regression, interpolation, and extrapolation, while maintaining computational efficiency.<|reference_end|>
arxiv
@article{park2024amortized, title={Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series}, author={Byoungwoo Park, Hyungi Lee, Juho Lee}, journal={arXiv preprint arXiv:2410.05602}, year={2024}, archivePrefix={arXiv}, eprint={2410.05602}, primaryClass={stat.ML cs.LG} }
park2024amortized
arxiv-666839
2410.05603
Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition
<|reference_start|>Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition: Large Language Models (LLMs) have demonstrated remarkable in-context learning (ICL) capabilities. In this study, we explore a surprising phenomenon related to ICL: LLMs can perform multiple, computationally distinct ICL tasks simultaneously, during a single inference call, a capability we term "task superposition". We provide empirical evidence of this phenomenon across various LLM families and scales and show that this phenomenon emerges even if we train the model to in-context learn one task at a time. We offer theoretical explanations that this capability is well within the expressive power of transformers. We also explore how LLMs internally compose task vectors during superposition. Furthermore, we show that larger models can solve more ICL tasks in parallel, and better calibrate their output distribution. Our findings offer insights into the latent capabilities of LLMs, further substantiate the perspective of "LLMs as superposition of simulators", and raise questions about the mechanisms enabling simultaneous task execution.<|reference_end|>
arxiv
@article{xiong2024everything, title={Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition}, author={Zheyang Xiong, Ziyang Cai, John Cooper, Albert Ge, Vasilis Papageorgiou, Zack Sifakis, Angeliki Giannou, Ziqian Lin, Liu Yang, Saurabh Agarwal, Grigorios G Chrysos, Samet Oymak, Kangwook Lee, Dimitris Papailiopoulos}, journal={arXiv preprint arXiv:2410.05603}, year={2024}, archivePrefix={arXiv}, eprint={2410.05603}, primaryClass={cs.LG cs.AI cs.CL} }
xiong2024everything
arxiv-666840
2410.05604
Accelerating the discovery of low-energy structure configurations: a computational approach that integrates first-principles calculations, Monte Carlo sampling, and Machine Learning
<|reference_start|>Accelerating the discovery of low-energy structure configurations: a computational approach that integrates first-principles calculations, Monte Carlo sampling, and Machine Learning: Finding Minimum Energy Configurations (MECs) is essential in fields such as physics, chemistry, and materials science, as they represent the most stable states of the systems. In particular, identifying such MECs in multi-component alloys considered candidate PFMs is key because it determines the most stable arrangement of atoms within the alloy, directly influencing its phase stability, structural integrity, and thermo-mechanical properties. However, since the search space grows exponentially with the number of atoms considered, obtaining such MECs using computationally expensive first-principles DFT calculations often results in a cumbersome task. To escape the above compromise between physical fidelity and computational efficiency, we have developed a novel physics-based data-driven approach that combines Monte Carlo sampling, first-principles DFT calculations, and Machine Learning to accelerate the discovery of MECs in multi-component alloys. More specifically, we have leveraged well-established Cluster Expansion (CE) techniques with Local Outlier Factor models to establish strategies that enhance the reliability of the CE method. In this work, we demonstrated the capabilities of the proposed approach for the particular case of a tungsten-based quaternary high-entropy alloy. However, the method is applicable to other types of alloys and enables a wide range of applications.<|reference_end|>
arxiv
@article{musa2024accelerating, title={Accelerating the discovery of low-energy structure configurations: a computational approach that integrates first-principles calculations, Monte Carlo sampling, and Machine Learning}, author={Md Rajib Khan Musa, Yichen Qian, Jie Peng, David Cereceda}, journal={arXiv preprint arXiv:2410.05604}, year={2024}, archivePrefix={arXiv}, eprint={2410.05604}, primaryClass={cond-mat.mtrl-sci cs.LG} }
musa2024accelerating
arxiv-666841
2410.05605
CodeDPO: Aligning Code Models with Self Generated and Verified Source Code
<|reference_start|>CodeDPO: Aligning Code Models with Self Generated and Verified Source Code: Code generation models have shown significant potential for programming tasks. However, existing training methods like supervised fine-tuning face key limitations: they do not effectively teach models to prioritize correct over incorrect solutions in ambiguous situations, nor do they effectively optimize the runtime efficiency of the generated code. To address these challenges, we propose CodeDPO, a framework that integrates preference learning into code generation to improve two key code preference factors: code correctness and efficiency. CodeDPO employs a novel dataset construction method, utilizing a self-generation-and-validation mechanism that simultaneously generates and evaluates code and test cases. The underlying assumption is that test cases executable by multiple code snippets provide more reliable validation, and code that passes more tests is more likely to be correct. Through this self-validation process, our PageRank-inspired algorithm iteratively updates the ranking score of each code snippet, ultimately creating a code preference optimization dataset based on correctness and efficiency. CodeDPO is flexible and scalable, generating diverse preference optimization data without depending on external resources. Through comprehensive evaluations of five widely used benchmarks, CodeDPO demonstrates significant improvements in correctness and efficiency compared to existing methods. Our experiments prove that CodeDPO enhances the capabilities of LLMs in code generation and provides a robust foundation for conducting code preference optimization in more complex and challenging real-world scenarios.<|reference_end|>
arxiv
@article{zhang2024codedpo:, title={CodeDPO: Aligning Code Models with Self Generated and Verified Source Code}, author={Kechi Zhang, Ge Li, Yihong Dong, Jingjing Xu, Jun Zhang, Jing Su, Yongfei Liu, Zhi Jin}, journal={arXiv preprint arXiv:2410.05605}, year={2024}, archivePrefix={arXiv}, eprint={2410.05605}, primaryClass={cs.SE} }
zhang2024codedpo:
arxiv-666842
2410.05608
Multimodal Large Language Models and Tunings: Vision, Language, Sensors, Audio, and Beyond
<|reference_start|>Multimodal Large Language Models and Tunings: Vision, Language, Sensors, Audio, and Beyond: This tutorial explores recent advancements in multimodal pretrained and large models, capable of integrating and processing diverse data forms such as text, images, audio, and video. Participants will gain an understanding of the foundational concepts of multimodality, the evolution of multimodal research, and the key technical challenges addressed by these models. We will cover the latest multimodal datasets and pretrained models, including those beyond vision and language. Additionally, the tutorial will delve into the intricacies of multimodal large models and instruction tuning strategies to optimise performance for specific tasks. Hands-on laboratories will offer practical experience with state-of-the-art multimodal models, demonstrating real-world applications like visual storytelling and visual question answering. This tutorial aims to equip researchers, practitioners, and newcomers with the knowledge and skills to leverage multimodal AI. ACM Multimedia 2024 is the ideal venue for this tutorial, aligning perfectly with our goal of understanding multimodal pretrained and large language models, and their tuning mechanisms.<|reference_end|>
arxiv
@article{han2024multimodal, title={Multimodal Large Language Models and Tunings: Vision, Language, Sensors, Audio, and Beyond}, author={Soyeon Caren Han, Feiqi Cao, Josiah Poon, Roberto Navigli}, journal={arXiv preprint arXiv:2410.05608}, year={2024}, archivePrefix={arXiv}, eprint={2410.05608}, primaryClass={cs.CL} }
han2024multimodal
arxiv-666843
2410.05609
The Breakdown of Gaussian Universality in Classification of High-dimensional Mixtures
<|reference_start|>The Breakdown of Gaussian Universality in Classification of High-dimensional Mixtures: The assumption of Gaussian or Gaussian mixture data has been extensively exploited in a long series of precise performance analyses of machine learning (ML) methods, on large datasets having comparably numerous samples and features. To relax this restrictive assumption, subsequent efforts have been devoted to establish "Gaussian equivalent principles" by studying scenarios of Gaussian universality where the asymptotic performance of ML methods on non-Gaussian data remains unchanged when replaced with Gaussian data having the same mean and covariance. Beyond the realm of Gaussian universality, there are few exact results on how the data distribution affects the learning performance. In this article, we provide a precise high-dimensional characterization of empirical risk minimization, for classification under a general mixture data setting of linear factor models that extends Gaussian mixtures. The Gaussian universality is shown to break down under this setting, in the sense that the asymptotic learning performance depends on the data distribution beyond the class means and covariances. To clarify the limitations of Gaussian universality in classification of mixture data and to understand the impact of its breakdown, we specify conditions for Gaussian universality and discuss their implications for the choice of loss function.<|reference_end|>
arxiv
@article{mai2024the, title={The Breakdown of Gaussian Universality in Classification of High-dimensional Mixtures}, author={Xiaoyi Mai and Zhenyu Liao}, journal={arXiv preprint arXiv:2410.05609}, year={2024}, archivePrefix={arXiv}, eprint={2410.05609}, primaryClass={stat.ML cs.LG math.ST stat.TH} }
mai2024the
arxiv-666844
2410.05610
Chain-of-Thoughts for Molecular Understanding
<|reference_start|>Chain-of-Thoughts for Molecular Understanding: The adaptation of large language models (LLMs) to chemistry has shown promising performance in molecular understanding tasks, such as generating a text description from a molecule. However, proper reasoning based on molecular structural information remains a significant challenge, e.g., even advanced LLMs such as GPT-4o struggle to identify functional groups which are crucial for inferring the molecular property of interest. To address this limitation, we propose StructCoT, a structure-aware chain-of-thought (CoT) that enhances LLMs' understanding of molecular structures by explicitly injecting the key structural features of molecules. Moreover, we introduce two fine-tuning frameworks for adapting the existing LLMs to use our StructCoT. Our experiments demonstrate that incorporating StructCoT with our fine-tuning frameworks leads to consistent improvements in both molecular understanding tasks.<|reference_end|>
arxiv
@article{jang2024chain-of-thoughts, title={Chain-of-Thoughts for Molecular Understanding}, author={Yunhui Jang, Jaehyung Kim, Sungsoo Ahn}, journal={arXiv preprint arXiv:2410.05610}, year={2024}, archivePrefix={arXiv}, eprint={2410.05610}, primaryClass={cs.LG cs.AI} }
jang2024chain-of-thoughts
arxiv-666845
2410.05612
Leveraging free energy in pretraining model selection for improved fine-tuning
<|reference_start|>Leveraging free energy in pretraining model selection for improved fine-tuning: Recent advances in artificial intelligence have been fueled by the development of foundation models such as BERT, GPT, T5, and Vision Transformers. These models are first pretrained on vast and diverse datasets and then adapted to specific downstream tasks, often with significantly less data. However, the mechanisms behind the success of this ubiquitous pretrain-then-adapt paradigm remain underexplored, particularly the characteristics of pretraining checkpoints that lend themselves to good downstream adaptation. We introduce a Bayesian model selection criterion, called the downstream free energy, which quantifies a checkpoint's adaptability by measuring the concentration of nearby favorable parameters for the downstream task. We demonstrate that this free energy criterion can be effectively implemented without access to the downstream data or prior knowledge of the downstream task. Furthermore, we provide empirical evidence that the free energy criterion reliably correlates with improved fine-tuning performance, offering a principled approach to predicting model adaptability.<|reference_end|>
arxiv
@article{munn2024leveraging, title={Leveraging free energy in pretraining model selection for improved fine-tuning}, author={Michael Munn, Susan Wei}, journal={arXiv preprint arXiv:2410.05612}, year={2024}, archivePrefix={arXiv}, eprint={2410.05612}, primaryClass={cs.LG} }
munn2024leveraging
arxiv-666846
2410.05613
Stereotype or Personalization? User Identity Biases Chatbot Recommendations
<|reference_start|>Stereotype or Personalization? User Identity Biases Chatbot Recommendations: We demonstrate that when people use large language models (LLMs) to generate recommendations, the LLMs produce responses that reflect both what the user wants and who the user is. While personalized recommendations are often desired by users, it can be difficult in practice to distinguish cases of bias from cases of personalization: we find that models generate racially stereotypical recommendations regardless of whether the user revealed their identity intentionally through explicit indications or unintentionally through implicit cues. We argue that chatbots ought to transparently indicate when recommendations are influenced by a user's revealed identity characteristics, but observe that they currently fail to do so. Our experiments show that even though a user's revealed identity significantly influences model recommendations (p < 0.001), model responses obfuscate this fact in response to user queries. This bias and lack of transparency occurs consistently across multiple popular consumer LLMs (gpt-4o-mini, gpt-4-turbo, llama-3-70B, and claude-3.5) and for four American racial groups.<|reference_end|>
arxiv
@article{kantharuban2024stereotype, title={Stereotype or Personalization? User Identity Biases Chatbot Recommendations}, author={Anjali Kantharuban, Jeremiah Milbauer, Emma Strubell, and Graham Neubig}, journal={arXiv preprint arXiv:2410.05613}, year={2024}, archivePrefix={arXiv}, eprint={2410.05613}, primaryClass={cs.CL} }
kantharuban2024stereotype
arxiv-666847
2410.05614
Positivity-preserving truncated Euler and Milstein methods for financial SDEs with super-linear coefficients
<|reference_start|>Positivity-preserving truncated Euler and Milstein methods for financial SDEs with super-linear coefficients: In this paper, we propose two variants of the positivity-preserving schemes, namely the truncated Euler-Maruyama (EM) method and the truncated Milstein scheme, applied to stochastic differential equations (SDEs) with positive solutions and super-linear coefficients. Under some regularity and integrability assumptions we derive the optimal strong convergence rates of the two schemes. Moreover, we demonstrate flexibility of our approaches by applying the truncated methods to approximate SDEs with super-linear coefficients (3/2 and Ai{\i}t-Sahalia models) directly and also with sub-linear coefficients (CIR model) indirectly. Numerical experiments are provided to verify the effectiveness of the theoretical results.<|reference_end|>
arxiv
@article{deng2024positivity-preserving, title={Positivity-preserving truncated Euler and Milstein methods for financial SDEs with super-linear coefficients}, author={Shounian Deng and Chen Fei and Weiyin Fei and Xuerong Mao}, journal={arXiv preprint arXiv:2410.05614}, year={2024}, archivePrefix={arXiv}, eprint={2410.05614}, primaryClass={math.NA cs.NA} }
deng2024positivity-preserving
arxiv-666848
2410.05618
Deep Transfer Learning-based Detection for Flash Memory Channels
<|reference_start|>Deep Transfer Learning-based Detection for Flash Memory Channels: The NAND flash memory channel is corrupted by different types of noises, such as the data retention noise and the wear-out noise, which lead to unknown channel offset and make the flash memory channel non-stationary. In the literature, machine learning-based methods have been proposed for data detection for flash memory channels. However, these methods require a large number of training samples and labels to achieve a satisfactory performance, which is costly. Furthermore, with a large unknown channel offset, it may be impossible to obtain enough correct labels. In this paper, we reformulate the data detection for the flash memory channel as a transfer learning (TL) problem. We then propose a model-based deep TL (DTL) algorithm for flash memory channel detection. It can effectively reduce the training data size from $10^6$ samples to less than 104 samples. Moreover, we propose an unsupervised domain adaptation (UDA)-based DTL algorithm using moment alignment, which can detect data without any labels. Hence, it is suitable for scenarios where the decoding of error-correcting code fails and no labels can be obtained. Finally, a UDA-based threshold detector is proposed to eliminate the need for a neural network. Both the channel raw error rate analysis and simulation results demonstrate that the proposed DTL-based detection schemes can achieve near-optimal bit error rate (BER) performance with much less training data and/or without using any labels.<|reference_end|>
arxiv
@article{mei2024deep, title={Deep Transfer Learning-based Detection for Flash Memory Channels}, author={Zhen Mei, Kui Cai, Long Shi, Jun Li, Li Chen, and Kees A. Schouhamer Immink}, journal={arXiv preprint arXiv:2410.05618}, year={2024}, archivePrefix={arXiv}, eprint={2410.05618}, primaryClass={cs.IT eess.SP math.IT} }
mei2024deep
arxiv-666849
2410.05623
Understanding Gradient Boosting Classifier: Training, Prediction, and the Role of $\gamma_j$
<|reference_start|>Understanding Gradient Boosting Classifier: Training, Prediction, and the Role of $\gamma_j$: The Gradient Boosting Classifier (GBC) is a widely used machine learning algorithm for binary classification, which builds decision trees iteratively to minimize prediction errors. This document explains the GBC's training and prediction processes, focusing on the computation of terminal node values $\gamma_j$, which are crucial to optimizing the logistic loss function. We derive $\gamma_j$ through a Taylor series approximation and provide a step-by-step pseudocode for the algorithm's implementation. The guide explains the theory of GBC and its practical application, demonstrating its effectiveness in binary classification tasks. We provide a step-by-step example in the appendix to help readers understand.<|reference_end|>
arxiv
@article{chen2024understanding, title={Understanding Gradient Boosting Classifier: Training, Prediction, and the Role of $\gamma_j$}, author={Hung-Hsuan Chen}, journal={arXiv preprint arXiv:2410.05623}, year={2024}, archivePrefix={arXiv}, eprint={2410.05623}, primaryClass={cs.LG cs.AI} }
chen2024understanding
arxiv-666850
2410.05624
Remote Sensing Image Segmentation Using Vision Mamba and Multi-Scale Multi-Frequency Feature Fusion
<|reference_start|>Remote Sensing Image Segmentation Using Vision Mamba and Multi-Scale Multi-Frequency Feature Fusion: As remote sensing imaging technology continues to advance and evolve, processing high-resolution and diversified satellite imagery to improve segmentation accuracy and enhance interpretation efficiency emerg as a pivotal area of investigation within the realm of remote sensing. Although segmentation algorithms based on CNNs and Transformers achieve significant progress in performance, balancing segmentation accuracy and computational complexity remains challenging, limiting their wide application in practical tasks. To address this, this paper introduces state space model (SSM) and proposes a novel hybrid semantic segmentation network based on vision Mamba (CVMH-UNet). This method designs a cross-scanning visual state space block (CVSSBlock) that uses cross 2D scanning (CS2D) to fully capture global information from multiple directions, while by incorporating convolutional neural network branches to overcome the constraints of Vision Mamba (VMamba) in acquiring local information, this approach facilitates a comprehensive analysis of both global and local features. Furthermore, to address the issue of limited discriminative power and the difficulty in achieving detailed fusion with direct skip connections, a multi-frequency multi-scale feature fusion block (MFMSBlock) is designed. This module introduces multi-frequency information through 2D discrete cosine transform (2D DCT) to enhance information utilization and provides additional scale local detail information through point-wise convolution branches. Finally, it aggregates multi-scale information along the channel dimension, achieving refined feature fusion. Findings from experiments conducted on renowned datasets of remote sensing imagery demonstrate that proposed CVMH-UNet achieves superior segmentation performance while maintaining low computational complexity, outperforming surpassing current leading-edge segmentation algorithms.<|reference_end|>
arxiv
@article{cao2024remote, title={Remote Sensing Image Segmentation Using Vision Mamba and Multi-Scale Multi-Frequency Feature Fusion}, author={Yice Cao, Chenchen Liu, Zhenhua Wu, Wenxin Yao, Liu Xiong, Jie Chen, Zhixiang Huang}, journal={arXiv preprint arXiv:2410.05624}, year={2024}, archivePrefix={arXiv}, eprint={2410.05624}, primaryClass={cs.CV cs.LG} }
cao2024remote
arxiv-666851
2410.05626
On the Impacts of the Random Initialization in the Neural Tangent Kernel Theory
<|reference_start|>On the Impacts of the Random Initialization in the Neural Tangent Kernel Theory: This paper aims to discuss the impact of random initialization of neural networks in the neural tangent kernel (NTK) theory, which is ignored by most recent works in the NTK theory. It is well known that as the network's width tends to infinity, the neural network with random initialization converges to a Gaussian process $f^{\mathrm{GP}}$, which takes values in $L^{2}(\mathcal{X})$, where $\mathcal{X}$ is the domain of the data. In contrast, to adopt the traditional theory of kernel regression, most recent works introduced a special mirrored architecture and a mirrored (random) initialization to ensure the network's output is identically zero at initialization. Therefore, it remains a question whether the conventional setting and mirrored initialization would make wide neural networks exhibit different generalization capabilities. In this paper, we first show that the training dynamics of the gradient flow of neural networks with random initialization converge uniformly to that of the corresponding NTK regression with random initialization $f^{\mathrm{GP}}$. We then show that $\mathbf{P}(f^{\mathrm{GP}} \in [\mathcal{H}^{\mathrm{NT}}]^{s}) = 1$ for any $s < \frac{3}{d+1}$ and $\mathbf{P}(f^{\mathrm{GP}} \in [\mathcal{H}^{\mathrm{NT}}]^{s}) = 0$ for any $s \geq \frac{3}{d+1}$, where $[\mathcal{H}^{\mathrm{NT}}]^{s}$ is the real interpolation space of the RKHS $\mathcal{H}^{\mathrm{NT}}$ associated with the NTK. Consequently, the generalization error of the wide neural network trained by gradient descent is $\Omega(n^{-\frac{3}{d+3}})$, and it still suffers from the curse of dimensionality. On one hand, the result highlights the benefits of mirror initialization. On the other hand, it implies that NTK theory may not fully explain the superior performance of neural networks.<|reference_end|>
arxiv
@article{chen2024on, title={On the Impacts of the Random Initialization in the Neural Tangent Kernel Theory}, author={Guhan Chen, Yicheng Li, Qian Lin}, journal={arXiv preprint arXiv:2410.05626}, year={2024}, archivePrefix={arXiv}, eprint={2410.05626}, primaryClass={stat.ML cs.LG} }
chen2024on
arxiv-666852
2410.05627
CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning
<|reference_start|>CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning: Aiming to incrementally learn new classes with only few samples while preserving the knowledge of base (old) classes, few-shot class-incremental learning (FSCIL) faces several challenges, such as overfitting and catastrophic forgetting. Such a challenging problem is often tackled by fixing a feature extractor trained on base classes to reduce the adverse effects of overfitting and forgetting. Under such formulation, our primary focus is representation learning on base classes to tackle the unique challenge of FSCIL: simultaneously achieving the transferability and the discriminability of the learned representation. Building upon the recent efforts for enhancing transferability, such as promoting the spread of features, we find that trying to secure the spread of features within a more confined feature space enables the learned representation to strike a better balance between transferability and discriminability. Thus, in stark contrast to prior beliefs that the inter-class distance should be maximized, we claim that the closer different classes are, the better for FSCIL. The empirical results and analysis from the perspective of information bottleneck theory justify our simple yet seemingly counter-intuitive representation learning method, raising research questions and suggesting alternative research directions. The code is available at https://github.com/JungHunOh/CLOSER_ECCV2024.<|reference_end|>
arxiv
@article{oh2024closer:, title={CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning}, author={Junghun Oh, Sungyong Baik, Kyoung Mu Lee}, journal={arXiv preprint arXiv:2410.05627}, year={2024}, archivePrefix={arXiv}, eprint={2410.05627}, primaryClass={cs.CV cs.AI} }
oh2024closer:
arxiv-666853
2410.05628
Versatile Motion Langauge Models for Multi-Turn Interactive Agents
<|reference_start|>Versatile Motion Langauge Models for Multi-Turn Interactive Agents: Recent advancements in large language models (LLMs) have greatly enhanced their ability to generate natural and contextually relevant text, making AI interactions more human-like. However, generating and understanding interactive human-like motion, where two individuals engage in coordinated movements, remains a challenge due to the complexity of modeling these coordinated interactions. Furthermore, a versatile model is required to handle diverse interactive scenarios, such as chat systems that follow user instructions or adapt to their assigned role while adjusting interaction dynamics. To tackle this problem, we introduce VIM, short for the Versatile Interactive Motion language model, which integrates both language and motion modalities to effectively understand, generate, and control interactive motions in multi-turn conversational contexts. To address the scarcity of multi-turn interactive motion data, we introduce a synthetic dataset, INERT-MT2, where we utilize pre-trained models to create diverse instructional datasets with interactive motion. Our approach first trains a motion tokenizer that encodes interactive motions into residual discrete tokens. In the pretraining stage, the model learns to align motion and text representations with these discrete tokens. During the instruction fine-tuning stage, VIM adapts to multi-turn conversations using the INTER-MT2 dataset. We evaluate the versatility of our method across motion-related tasks, motion to text, text to motion, reaction generation, motion editing, and reasoning about motion sequences. The results highlight the versatility and effectiveness of proposed method in handling complex interactive motion synthesis.<|reference_end|>
arxiv
@article{park2024versatile, title={Versatile Motion Language Models for Multi-Turn Interactive Agents}, author={Jeongeun Park, Sungjoon Choi, Sangdoo Yun}, journal={arXiv preprint arXiv:2410.05628}, year={2024}, archivePrefix={arXiv}, eprint={2410.05628}, primaryClass={cs.AI} }
park2024versatile
arxiv-666854
2410.05629
Vector-ICL: In-context Learning with Continuous Vector Representations
<|reference_start|>Vector-ICL: In-context Learning with Continuous Vector Representations: Large language models (LLMs) have shown remarkable in-context learning (ICL) capabilities on textual data. We explore whether these capabilities can be extended to continuous vectors from diverse domains, obtained from black-box pretrained encoders. By aligning input data with an LLM's embedding space through lightweight projectors, we observe that LLMs can effectively process and learn from these projected vectors, which we term Vector-ICL. In particular, we find that pretraining projectors with general language modeling objectives enables Vector-ICL, while task-specific finetuning further enhances performance. In our experiments across various tasks and modalities, including text reconstruction, numerical function regression, text classification, summarization, molecule captioning, time-series classification, graph classification, and fMRI decoding, Vector-ICL often surpasses both few-shot ICL and domain-specific model or tuning. We further conduct analyses and case studies, indicating the potential of LLMs to process vector representations beyond traditional token-based paradigms.<|reference_end|>
arxiv
@article{zhuang2024vector-icl:, title={Vector-ICL: In-context Learning with Continuous Vector Representations}, author={Yufan Zhuang, Chandan Singh, Liyuan Liu, Jingbo Shang, Jianfeng Gao}, journal={arXiv preprint arXiv:2410.05629}, year={2024}, archivePrefix={arXiv}, eprint={2410.05629}, primaryClass={cs.CL cs.AI} }
zhuang2024vector-icl:
arxiv-666855
2410.05630
Navigating Inflation in Ghana: How Can Machine Learning Enhance Economic Stability and Growth Strategies
<|reference_start|>Navigating Inflation in Ghana: How Can Machine Learning Enhance Economic Stability and Growth Strategies: Inflation remains a persistent challenge for many African countries. This research investigates the critical role of machine learning (ML) in understanding and managing inflation in Ghana, emphasizing its significance for the country's economic stability and growth. Utilizing a comprehensive dataset spanning from 2010 to 2022, the study aims to employ advanced ML models, particularly those adept in time series forecasting, to predict future inflation trends. The methodology is designed to provide accurate and reliable inflation forecasts, offering valuable insights for policymakers and advocating for a shift towards data-driven approaches in economic decision-making. This study aims to significantly advance the academic field of economic analysis by applying machine learning (ML) and offering practical guidance for integrating advanced technological tools into economic governance, ultimately demonstrating ML's potential to enhance Ghana's economic resilience and support sustainable development through effective inflation management.<|reference_end|>
arxiv
@article{baidoo2024navigating, title={Navigating Inflation in Ghana: How Can Machine Learning Enhance Economic Stability and Growth Strategies}, author={Theophilus G. Baidoo, Ashley Obeng}, journal={arXiv preprint arXiv:2410.05630}, year={2024}, archivePrefix={arXiv}, eprint={2410.05630}, primaryClass={econ.EM cs.LG} }
baidoo2024navigating
arxiv-666856
2410.05631
Embracing Objects Over Statics: An Analysis of Method Preferences in Open Source Java Frameworks
<|reference_start|>Embracing Objects Over Statics: An Analysis of Method Preferences in Open Source Java Frameworks: In today's software development landscape, the extent to which Java applications utilize object-oriented programming paradigm remains a subject of interest. Although some researches point to the considerable overhead associated with object orientation, one might logically assume that modern Java applications would lean towards a procedural style to boost performance, favoring static over instance method calls. In order to validate this assumption, this study scrutinizes the runtime behavior of 28 open-source Java frameworks using the YourKit profiler. Contrary to expectations, our findings reveal a predominant use of instance methods and constructors over static methods. This suggests that developers still favor an object-oriented approach, despite its potential drawbacks.<|reference_end|>
arxiv
@article{zakharov2024embracing, title={Embracing Objects Over Statics: An Analysis of Method Preferences in Open Source Java Frameworks}, author={Vladimir Zakharov, Yegor Bugayenko}, journal={arXiv preprint arXiv:2410.05631}, year={2024}, archivePrefix={arXiv}, eprint={2410.05631}, primaryClass={cs.SE cs.PL} }
zakharov2024embracing
arxiv-666857
2410.05634
Identification and estimation for matrix time series CP-factor models
<|reference_start|>Identification and estimation for matrix time series CP-factor models: We investigate the identification and the estimation for matrix time series CP-factor models. Unlike the generalized eigenanalysis-based method of Chang et al. (2023) which requires the two factor loading matrices to be full-ranked, the newly proposed estimation can handle rank-deficient factor loading matrices. The estimation procedure consists of the spectral decomposition of several matrices and a matrix joint diagonalization algorithm, resulting in low computational cost. The theoretical guarantee established without the stationarity assumption shows that the proposed estimation exhibits a faster convergence rate than that of Chang et al. (2023). In fact the new estimator is free from the adverse impact of any eigen-gaps, unlike most eigenanalysis-based methods such as that of Chang et al. (2023). Furthermore, in terms of the error rates of the estimation, the proposed procedure is equivalent to handling a vector time series of dimension $\max(p,q)$ instead of $p \times q$, where $(p, q)$ are the dimensions of the matrix time series concerned. We have achieved this without assuming the "near orthogonality" of the loadings under various incoherence conditions often imposed in the CP-decomposition literature, see Han and Zhang (2022), Han et al. (2024) and the references within. Illustration with both simulated and real matrix time series data shows the usefulness of the proposed approach.<|reference_end|>
arxiv
@article{chang2024identification, title={Identification and estimation for matrix time series CP-factor models}, author={Jinyuan Chang, Yue Du, Guanglin Huang, Qiwei Yao}, journal={arXiv preprint arXiv:2410.05634}, year={2024}, archivePrefix={arXiv}, eprint={2410.05634}, primaryClass={stat.ME cs.LG econ.EM} }
chang2024identification
arxiv-666858
2410.05637
Federated Neural Nonparametric Point Processes
<|reference_start|>Federated Neural Nonparametric Point Processes: Temporal point processes (TPPs) are effective for modeling event occurrences over time, but they struggle with sparse and uncertain events in federated systems, where privacy is a major concern. To address this, we propose \textit{FedPP}, a Federated neural nonparametric Point Process model. FedPP integrates neural embeddings into Sigmoidal Gaussian Cox Processes (SGCPs) on the client side, which is a flexible and expressive class of TPPs, allowing it to generate highly flexible intensity functions that capture client-specific event dynamics and uncertainties while efficiently summarizing historical records. For global aggregation, FedPP introduces a divergence-based mechanism that communicates the distributions of SGCPs' kernel hyperparameters between the server and clients, while keeping client-specific parameters local to ensure privacy and personalization. FedPP effectively captures event uncertainty and sparsity, and extensive experiments demonstrate its superior performance in federated settings, particularly with KL divergence and Wasserstein distance-based global aggregation.<|reference_end|>
arxiv
@article{chen2024federated, title={Federated Neural Nonparametric Point Processes}, author={Hui Chen, Hengyu Liu, Yaqiong Li, Xuhui Fan, Zhilin Zhao, Feng Zhou, Christopher John Quinn and Longbing Cao}, journal={arXiv preprint arXiv:2410.05637}, year={2024}, archivePrefix={arXiv}, eprint={2410.05637}, primaryClass={cs.LG cs.AI cs.CR} }
chen2024federated
arxiv-666859
2410.05638
Time Series Classification of Supraglacial Lakes Evolution over Greenland Ice Sheet
<|reference_start|>Time Series Classification of Supraglacial Lakes Evolution over Greenland Ice Sheet: The Greenland Ice Sheet (GrIS) has emerged as a significant contributor to global sea level rise, primarily due to increased meltwater runoff. Supraglacial lakes, which form on the ice sheet surface during the summer months, can impact ice sheet dynamics and mass loss; thus, better understanding these lakes' seasonal evolution and dynamics is an important task. This study presents a computationally efficient time series classification approach that uses Gaussian Mixture Models (GMMs) of the Reconstructed Phase Spaces (RPSs) to identify supraglacial lakes based on their seasonal evolution: 1) those that refreeze at the end of the melt season, 2) those that drain during the melt season, and 3) those that become buried, remaining liquid insulated a few meters beneath the surface. Our approach uses time series data from the Sentinel-1 and Sentinel-2 satellites, which utilize microwave and visible radiation, respectively. Evaluated on a GrIS-wide dataset, the RPS-GMM model, trained on a single representative sample per class, achieves 85.46% accuracy with Sentinel-1 data alone and 89.70% with combined Sentinel-1 and Sentinel-2 data. This performance significantly surpasses existing machine learning and deep learning models which require a large training data. The results demonstrate the robustness of the RPS-GMM model in capturing the complex temporal dynamics of supraglacial lakes with minimal training data.<|reference_end|>
arxiv
@article{hossain2024time, title={Time Series Classification of Supraglacial Lakes Evolution over Greenland Ice Sheet}, author={Emam Hossain, Md Osman Gani, Devon Dunmire, Aneesh Subramanian, Hammad Younas}, journal={arXiv preprint arXiv:2410.05638}, year={2024}, archivePrefix={arXiv}, eprint={2410.05638}, primaryClass={cs.LG} }
hossain2024time
arxiv-666860
2410.05639
DecorateLM: Data Engineering through Corpus Rating, Tagging, and Editing with Language Models
<|reference_start|>DecorateLM: Data Engineering through Corpus Rating, Tagging, and Editing with Language Models: The performance of Large Language Models (LLMs) is substantially influenced by the pretraining corpus, which consists of vast quantities of unsupervised data processed by the models. Despite its critical role in model performance, ensuring the quality of this data is challenging due to its sheer volume and the absence of sample-level quality annotations and enhancements. In this paper, we introduce DecorateLM, a data engineering method designed to refine the pretraining corpus through data rating, tagging and editing. Specifically, DecorateLM rates texts against quality criteria, tags texts with hierarchical labels, and edits texts into a more formalized format. Due to the massive size of the pretraining corpus, adopting an LLM for decorating the entire corpus is less efficient. Therefore, to balance performance with efficiency, we curate a meticulously annotated training corpus for DecorateLM using a large language model and distill data engineering expertise into a compact 1.2 billion parameter small language model (SLM). We then apply DecorateLM to enhance 100 billion tokens of the training corpus, selecting 45 billion tokens that exemplify high quality and diversity for the further training of another 1.2 billion parameter LLM. Our results demonstrate that employing such high-quality data can significantly boost model performance, showcasing a powerful approach to enhance the quality of the pretraining corpus.<|reference_end|>
arxiv
@article{zhao2024decoratelm:, title={DecorateLM: Data Engineering through Corpus Rating, Tagging, and Editing with Language Models}, author={Ranchi Zhao, Zhen Leng Thai, Yifan Zhang, Shengding Hu, Yunqi Ba, Jie Zhou, Jie Cai, Zhiyuan Liu, Maosong Sun}, journal={EMNLP 2024}, year={2024}, archivePrefix={arXiv}, eprint={2410.05639}, primaryClass={cs.CL} }
zhao2024decoratelm:
arxiv-666861
2410.05641
Synthesizing Efficient and Permissive Programmatic Runtime Shields for Neural Policies
<|reference_start|>Synthesizing Efficient and Permissive Programmatic Runtime Shields for Neural Policies: With the increasing use of neural policies in control systems, ensuring their safety and reliability has become a critical software engineering task. One prevalent approach to ensuring the safety of neural policies is to deploy programmatic runtime shields alongside them to correct their unsafe commands. However, the programmatic runtime shields synthesized by existing methods are either computationally expensive or insufficiently permissive, resulting in high overhead and unnecessary interventions on the system. To address these challenges, we propose Aegis, a novel framework that synthesizes lightweight and permissive programmatic runtime shields for neural policies. Aegis achieves this by formulating the seeking of a runtime shield as a sketch-based program synthesis problem and proposing a novel method that leverages counterexample-guided inductive synthesis and Bayesian optimization to solve it. To evaluate Aegis and its synthesized shields, we use four representative control systems and compare Aegis with the current state-of-the-art. Our results show that the programmatic runtime shields synthesized by Aegis can correct all unsafe commands from neural policies, ensuring that the systems do not violate any desired safety properties at all times. Compared to the current state-of-the-art, Aegis's shields exhibit a 2.1$\times$ reduction in time overhead and a 4.4$\times$ reduction in memory usage, suggesting that they are much more lightweight. Moreover, Aegis's shields incur an average of 1.6$\times$ fewer interventions than other shields, showing better permissiveness.<|reference_end|>
arxiv
@article{shi2024synthesizing, title={Synthesizing Efficient and Permissive Programmatic Runtime Shields for Neural Policies}, author={Jieke Shi, Junda He, Zhou Yang, {DJ}or{dj}e v{Z}ikeli'c, and David Lo}, journal={arXiv preprint arXiv:2410.05641}, year={2024}, archivePrefix={arXiv}, eprint={2410.05641}, primaryClass={cs.SE} }
shi2024synthesizing
arxiv-666862
2410.05642
Minimally Intrusive Access Management to Content Delivery Networks based on Performance Models and Access Patterns
<|reference_start|>Minimally Intrusive Access Management to Content Delivery Networks based on Performance Models and Access Patterns: This paper presents an approach to managing access to Content Delivery Networks (CDNs), focusing on combating the misuse of tokens through performance analysis and statistical access patterns. In particular, we explore the impact of token sharing on the content delivery infrastructure, proposing the definition of acceptable request limits to detect and block abnormal accesses. Additionally, we introduce countermeasures against piracy, such as degrading the quality of service for pirate users to discourage them from illegal sharing, and using queuing models to quantify system performance in different piracy scenarios. Adopting these measures can improve the consistency and efficiency of CDN access and cost management, protecting the infrastructure and the legitimate user experience.<|reference_end|>
arxiv
@article{rodrigues2024minimally, title={Minimally Intrusive Access Management to Content Delivery Networks based on Performance Models and Access Patterns}, author={Lenise M. V. Rodrigues, Daniel Sadoc Menasch'e, Arthur Serra and Antonio A. de Arag~ao Rocha}, journal={arXiv preprint arXiv:2410.05642}, year={2024}, archivePrefix={arXiv}, eprint={2410.05642}, primaryClass={cs.NI cs.CR} }
rodrigues2024minimally
arxiv-666863
2410.05643
TRACE: Temporal Grounding Video LLM via Causal Event Modeling
<|reference_start|>TRACE: Temporal Grounding Video LLM via Causal Event Modeling: Video Temporal Grounding (VTG) is a crucial capability for video understanding models and plays a vital role in downstream tasks such as video browsing and editing. To effectively handle various tasks simultaneously and enable zero-shot prediction, there is a growing trend in employing video LLMs for VTG tasks. However, current video LLM-based methods rely exclusively on natural language generation, lacking the ability to model the clear structure inherent in videos, which restricts their effectiveness in tackling VTG tasks. To address this issue, this paper first formally introduces causal event modeling framework, which represents videos as sequences of events, and predict the current event using previous events, video inputs, and textural instructions. Each event consists of three components: timestamps, salient scores, and textual captions. We then propose a novel task-interleaved video LLM called TRACE to effectively implement the causal event modeling framework in practice. The TRACE processes visual frames, timestamps, salient scores, and text as distinct tasks, employing various encoders and decoding heads for each. Task tokens are arranged in an interleaved sequence according to the causal event modeling framework's formulation. Extensive experiments on various VTG tasks and datasets demonstrate the superior performance of TRACE compared to state-of-the-art video LLMs. Our model and code are available at \url{https://github.com/gyxxyg/TRACE}.<|reference_end|>
arxiv
@article{guo2024trace:, title={TRACE: Temporal Grounding Video LLM via Causal Event Modeling}, author={Yongxin Guo, Jingyu Liu, Mingda Li, Xiaoying Tang, Qingbin Liu, Xi Chen}, journal={arXiv preprint arXiv:2410.05643}, year={2024}, archivePrefix={arXiv}, eprint={2410.05643}, primaryClass={cs.CV} }
guo2024trace:
arxiv-666864
2410.05644
Sneak Path Interference-Aware Adaptive Detection and Decoding for Resistive Memory Arrays
<|reference_start|>Sneak Path Interference-Aware Adaptive Detection and Decoding for Resistive Memory Arrays: Resistive random-access memory (ReRAM) is an emerging non-volatile memory technology for high-density and high-speed data storage. However, the sneak path interference (SPI) occurred in the ReRAM crossbar array seriously affects its data recovery performance. In this letter, we first propose a quantized channel model of ReRAM, based on which we design both the one-bit and multi-bit channel quantizers by maximizing the mutual information of the channel. A key channel parameter that affects the quantizer design is the sneak path occurrence probability (SPOP) of the memory cell. We first use the average SPOP calculated statistically to design the quantizer, which leads to the same channel detector for different memory arrays. We then adopt the SPOP estimated separately for each memory array for the quantizer design, which is generated by an effective channel estimator and through an iterative detection and decoding scheme for the ReRAM channel. This results in an array-level SPI-aware adaptive detection and decoding approach. Moreover, since there is a strong correlation of the SPI that affects memory cells in the same rows/columns than that affecting cells in different rows/columns, we further derive a column-level scheme which outperforms the array-level scheme. We also propose a channel decomposition method that enables effective ways for theoretically analyzing the ReRAM channel. Simulation results show that the proposed SPI-aware adaptive detection and decoding schemes can approach the ideal performance with three quantization bits, with only one decoding iteration.<|reference_end|>
arxiv
@article{li2024sneak, title={Sneak Path Interference-Aware Adaptive Detection and Decoding for Resistive Memory Arrays}, author={Panpan Li, Kui Cai, Guanghui Song, and Zhen Mei}, journal={arXiv preprint arXiv:2410.05644}, year={2024}, archivePrefix={arXiv}, eprint={2410.05644}, primaryClass={cs.IT math.IT} }
li2024sneak
arxiv-666865
2410.05645
Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations
<|reference_start|>Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations: Custom animated visualizations of large, complex datasets are helpful across many domains, but they are hard to develop. Much of the difficulty arises from maintaining visualization state across many animated graphical elements that may change in number over time. We contribute Counterpoint, a framework for state management designed to help implement such visualizations in JavaScript. Using Counterpoint, developers can manipulate large collections of marks with reactive attributes that are easy to render in scalable APIs such as Canvas and WebGL. Counterpoint also helps orchestrate the entry and exit of graphical elements using the concept of a rendering "stage." Through a performance evaluation, we show that Counterpoint adds minimal overhead over current high-performance rendering techniques while simplifying implementation. We provide two examples of visualizations created using Counterpoint that illustrate its flexibility and compatibility with other visualization toolkits as well as considerations for users with disabilities. Counterpoint is open-source and available at https://github.com/cmudig/counterpoint.<|reference_end|>
arxiv
@article{sivaraman2024counterpoint:, title={Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations}, author={Venkatesh Sivaraman, Frank Elavsky, Dominik Moritz, Adam Perer}, journal={arXiv preprint arXiv:2410.05645}, year={2024}, archivePrefix={arXiv}, eprint={2410.05645}, primaryClass={cs.GR cs.HC} }
sivaraman2024counterpoint:
arxiv-666866
2410.05646
Score-Based Variational Inference for Inverse Problems
<|reference_start|>Score-Based Variational Inference for Inverse Problems: Existing diffusion-based methods for inverse problems sample from the posterior using score functions and accept the generated random samples as solutions. In applications that posterior mean is preferred, we have to generate multiple samples from the posterior which is time-consuming. In this work, by analyzing the probability density evolution of the conditional reverse diffusion process, we prove that the posterior mean can be achieved by tracking the mean of each reverse diffusion step. Based on that, we establish a framework termed reverse mean propagation (RMP) that targets the posterior mean directly. We show that RMP can be implemented by solving a variational inference problem, which can be further decomposed as minimizing a reverse KL divergence at each reverse step. We further develop an algorithm that optimizes the reverse KL divergence with natural gradient descent using score functions and propagates the mean at each reverse step. Experiments demonstrate the validity of the theory of our framework and show that our algorithm outperforms state-of-the-art algorithms on reconstruction performance with lower computational complexity in various inverse problems.<|reference_end|>
arxiv
@article{xue2024score-based, title={Score-Based Variational Inference for Inverse Problems}, author={Zhipeng Xue, Penghao Cai, Xiaojun Yuan, Xiqi Gao}, journal={arXiv preprint arXiv:2410.05646}, year={2024}, archivePrefix={arXiv}, eprint={2410.05646}, primaryClass={cs.LG cs.AI cs.IT math.IT} }
xue2024score-based
arxiv-666867
2410.05647
FGCL: Fine-grained Contrastive Learning For Mandarin Stuttering Event Detection
<|reference_start|>FGCL: Fine-grained Contrastive Learning For Mandarin Stuttering Event Detection: This paper presents the T031 team's approach to the StutteringSpeech Challenge in SLT2024. Mandarin Stuttering Event Detection (MSED) aims to detect instances of stuttering events in Mandarin speech. We propose a detailed acoustic analysis method to improve the accuracy of stutter detection by capturing subtle nuances that previous Stuttering Event Detection (SED) techniques have overlooked. To this end, we introduce the Fine-Grained Contrastive Learning (FGCL) framework for MSED. Specifically, we model the frame-level probabilities of stuttering events and introduce a mining algorithm to identify both easy and confusing frames. Then, we propose a stutter contrast loss to enhance the distinction between stuttered and fluent speech frames, thereby improving the discriminative capability of stuttered feature embeddings. Extensive evaluations on English and Mandarin datasets demonstrate the effectiveness of FGCL, achieving a significant increase of over 5.0% in F1 score on Mandarin data.<|reference_end|>
arxiv
@article{jiang2024fgcl:, title={FGCL: Fine-grained Contrastive Learning For Mandarin Stuttering Event Detection}, author={Han Jiang, Wenyu Wang, Yiquan Zhou, Hongwu Ding, Jiacheng Xu, Jihua Zhu}, journal={arXiv preprint arXiv:2410.05647}, year={2024}, archivePrefix={arXiv}, eprint={2410.05647}, primaryClass={cs.SD eess.AS} }
jiang2024fgcl:
arxiv-666868
2410.05648
Does RoBERTa Perform Better than BERT in Continual Learning: An Attention Sink Perspective
<|reference_start|>Does RoBERTa Perform Better than BERT in Continual Learning: An Attention Sink Perspective: Continual learning (CL) aims to train models that can sequentially learn new tasks without forgetting previous tasks' knowledge. Although previous works observed that pre-training can benefit CL, it remains unclear whether a pre-trained model with higher downstream capacity also performs better in CL. In this paper, we observe that pre-trained models may allocate high attention scores to some 'sink' tokens, such as [SEP] tokens, which are ubiquitous across various tasks. Such attention sinks may lead to models' over-smoothing in single-task learning and interference in sequential tasks' learning, which may compromise the models' CL performance despite their high pre-trained capabilities. To reduce these effects, we propose a pre-scaling mechanism that encourages attention diversity across all tokens. Specifically, it first scales the task's attention to the non-sink tokens in a probing stage, and then fine-tunes the model with scaling. Experiments show that pre-scaling yields substantial improvements in CL without experience replay, or progressively storing parameters from previous tasks.<|reference_end|>
arxiv
@article{bai2024does, title={Does RoBERTa Perform Better than BERT in Continual Learning: An Attention Sink Perspective}, author={Xueying Bai, Yifan Sun, Niranjan Balasubramanian}, journal={arXiv preprint arXiv:2410.05648}, year={2024}, archivePrefix={arXiv}, eprint={2410.05648}, primaryClass={cs.LG cs.CL} }
bai2024does
arxiv-666869
2410.05650
SIA-OVD: Shape-Invariant Adapter for Bridging the Image-Region Gap in Open-Vocabulary Detection
<|reference_start|>SIA-OVD: Shape-Invariant Adapter for Bridging the Image-Region Gap in Open-Vocabulary Detection: Open-vocabulary detection (OVD) aims to detect novel objects without instance-level annotations to achieve open-world object detection at a lower cost. Existing OVD methods mainly rely on the powerful open-vocabulary image-text alignment capability of Vision-Language Pretrained Models (VLM) such as CLIP. However, CLIP is trained on image-text pairs and lacks the perceptual ability for local regions within an image, resulting in the gap between image and region representations. Directly using CLIP for OVD causes inaccurate region classification. We find the image-region gap is primarily caused by the deformation of region feature maps during region of interest (RoI) extraction. To mitigate the inaccurate region classification in OVD, we propose a new Shape-Invariant Adapter named SIA-OVD to bridge the image-region gap in the OVD task. SIA-OVD learns a set of feature adapters for regions with different shapes and designs a new adapter allocation mechanism to select the optimal adapter for each region. The adapted region representations can align better with text representations learned by CLIP. Extensive experiments demonstrate that SIA-OVD effectively improves the classification accuracy for regions by addressing the gap between images and regions caused by shape deformation. SIA-OVD achieves substantial improvements over representative methods on the COCO-OVD benchmark. The code is available at https://github.com/PKU-ICST-MIPL/SIA-OVD_ACMMM2024.<|reference_end|>
arxiv
@article{wang2024sia-ovd:, title={SIA-OVD: Shape-Invariant Adapter for Bridging the Image-Region Gap in Open-Vocabulary Detection}, author={Zishuo Wang, Wenhao Zhou, Jinglin Xu, Yuxin Peng}, journal={arXiv preprint arXiv:2410.05650}, year={2024}, doi={10.1145/3664647.3680642}, archivePrefix={arXiv}, eprint={2410.05650}, primaryClass={cs.CV cs.MM} }
wang2024sia-ovd:
arxiv-666870
2410.05651
ViBiDSampler: Enhancing Video Interpolation Using Bidirectional Diffusion Sampler
<|reference_start|>ViBiDSampler: Enhancing Video Interpolation Using Bidirectional Diffusion Sampler: Recent progress in large-scale text-to-video (T2V) and image-to-video (I2V) diffusion models has greatly enhanced video generation, especially in terms of keyframe interpolation. However, current image-to-video diffusion models, while powerful in generating videos from a single conditioning frame, need adaptation for two-frame (start & end) conditioned generation, which is essential for effective bounded interpolation. Unfortunately, existing approaches that fuse temporally forward and backward paths in parallel often suffer from off-manifold issues, leading to artifacts or requiring multiple iterative re-noising steps. In this work, we introduce a novel, bidirectional sampling strategy to address these off-manifold issues without requiring extensive re-noising or fine-tuning. Our method employs sequential sampling along both forward and backward paths, conditioned on the start and end frames, respectively, ensuring more coherent and on-manifold generation of intermediate frames. Additionally, we incorporate advanced guidance techniques, CFG++ and DDS, to further enhance the interpolation process. By integrating these, our method achieves state-of-the-art performance, efficiently generating high-quality, smooth videos between keyframes. On a single 3090 GPU, our method can interpolate 25 frames at 1024 x 576 resolution in just 195 seconds, establishing it as a leading solution for keyframe interpolation.<|reference_end|>
arxiv
@article{yang2024vibidsampler:, title={ViBiDSampler: Enhancing Video Interpolation Using Bidirectional Diffusion Sampler}, author={Serin Yang, Taesung Kwon, Jong Chul Ye}, journal={arXiv preprint arXiv:2410.05651}, year={2024}, archivePrefix={arXiv}, eprint={2410.05651}, primaryClass={cs.CV cs.AI cs.LG} }
yang2024vibidsampler:
arxiv-666871
2410.05653
A Blockchain-Enhanced Framework for Privacy and Data Integrity in Crowdsourced Drone Services
<|reference_start|>A Blockchain-Enhanced Framework for Privacy and Data Integrity in Crowdsourced Drone Services: We present an innovative framework that integrates consumer-grade drones into bushfire management, addressing both service improvement and data privacy concerns under Australia's Privacy Act 1988. This system establishes a marketplace where bushfire management authorities, as data consumers, access critical information from drone operators, who serve as data providers. The framework employs local differential privacy to safeguard the privacy of data providers from all system entities, ensuring compliance with privacy standards. Additionally, a blockchain-based solution facilitates fair data and fee exchanges while maintaining immutable records for enhanced accountability. Validated through a proof-of-concept implementation, the framework's scalability and adaptability make it well-suited for large-scale, real-world applications in bushfire management.<|reference_end|>
arxiv
@article{akram2024a, title={A Blockchain-Enhanced Framework for Privacy and Data Integrity in Crowdsourced Drone Services}, author={Junaid Akram and Ali Anaissi}, journal={arXiv preprint arXiv:2410.05653}, year={2024}, archivePrefix={arXiv}, eprint={2410.05653}, primaryClass={cs.CR} }
akram2024a
arxiv-666872
2410.05655
Efficient Policy Evaluation with Safety Constraint for Reinforcement Learning
<|reference_start|>Efficient Policy Evaluation with Safety Constraint for Reinforcement Learning: In reinforcement learning, classic on-policy evaluation methods often suffer from high variance and require massive online data to attain the desired accuracy. Previous studies attempt to reduce evaluation variance by searching for or designing proper behavior policies to collect data. However, these approaches ignore the safety of such behavior policies -- the designed behavior policies have no safety guarantee and may lead to severe damage during online executions. In this paper, to address the challenge of reducing variance while ensuring safety simultaneously, we propose an optimal variance-minimizing behavior policy under safety constraints. Theoretically, while ensuring safety constraints, our evaluation method is unbiased and has lower variance than on-policy evaluation. Empirically, our method is the only existing method to achieve both substantial variance reduction and safety constraint satisfaction. Furthermore, we show our method is even superior to previous methods in both variance reduction and execution safety.<|reference_end|>
arxiv
@article{chen2024efficient, title={Efficient Policy Evaluation with Safety Constraint for Reinforcement Learning}, author={Claire Chen, Shuze Liu, Shangtong Zhang}, journal={arXiv preprint arXiv:2410.05655}, year={2024}, archivePrefix={arXiv}, eprint={2410.05655}, primaryClass={cs.LG} }
chen2024efficient
arxiv-666873
2410.05656
On the Modeling Capabilities of Large Language Models for Sequential Decision Making
<|reference_start|>On the Modeling Capabilities of Large Language Models for Sequential Decision Making: Large pretrained models are showing increasingly better performance in reasoning and planning tasks across different modalities, opening the possibility to leverage them for complex sequential decision making problems. In this paper, we investigate the capabilities of Large Language Models (LLMs) for reinforcement learning (RL) across a diversity of interactive domains. We evaluate their ability to produce decision-making policies, either directly, by generating actions, or indirectly, by first generating reward models to train an agent with RL. Our results show that, even without task-specific fine-tuning, LLMs excel at reward modeling. In particular, crafting rewards through artificial intelligence (AI) feedback yields the most generally applicable approach and can enhance performance by improving credit assignment and exploration. Finally, in environments with unfamiliar dynamics, we explore how fine-tuning LLMs with synthetic data can significantly improve their reward modeling capabilities while mitigating catastrophic forgetting, further broadening their utility in sequential decision-making tasks.<|reference_end|>
arxiv
@article{klissarov2024on, title={On the Modeling Capabilities of Large Language Models for Sequential Decision Making}, author={Martin Klissarov, Devon Hjelm, Alexander Toshev, Bogdan Mazoure}, journal={arXiv preprint arXiv:2410.05656}, year={2024}, archivePrefix={arXiv}, eprint={2410.05656}, primaryClass={cs.AI} }
klissarov2024on
arxiv-666874
2410.05660
Robust Transfer Learning for Active Level Set Estimation with Locally Adaptive Gaussian Process Prior
<|reference_start|>Robust Transfer Learning for Active Level Set Estimation with Locally Adaptive Gaussian Process Prior: The objective of active level set estimation for a black-box function is to precisely identify regions where the function values exceed or fall below a specified threshold by iteratively performing function evaluations to gather more information about the function. This becomes particularly important when function evaluations are costly, drastically limiting our ability to acquire large datasets. A promising way to sample-efficiently model the black-box function is by incorporating prior knowledge from a related function. However, this approach risks slowing down the estimation task if the prior knowledge is irrelevant or misleading. In this paper, we present a novel transfer learning method for active level set estimation that safely integrates a given prior knowledge while constantly adjusting it to guarantee a robust performance of a level set estimation algorithm even when the prior knowledge is irrelevant. We theoretically analyze this algorithm to show that it has a better level set convergence compared to standard transfer learning approaches that do not make any adjustment to the prior. Additionally, extensive experiments across multiple datasets confirm the effectiveness of our method when applied to various different level set estimation algorithms as well as different transfer learning scenarios.<|reference_end|>
arxiv
@article{ngo2024robust, title={Robust Transfer Learning for Active Level Set Estimation with Locally Adaptive Gaussian Process Prior}, author={Giang Ngo, Dang Nguyen, Sunil Gupta}, journal={arXiv preprint arXiv:2410.05660}, year={2024}, archivePrefix={arXiv}, eprint={2410.05660}, primaryClass={cs.LG stat.ML} }
ngo2024robust
arxiv-666875
2410.05661
Scaling Laws Across Model Architectures: A Comparative Analysis of Dense and MoE Models in Large Language Models
<|reference_start|>Scaling Laws Across Model Architectures: A Comparative Analysis of Dense and MoE Models in Large Language Models: The scaling of large language models (LLMs) is a critical research area for the efficiency and effectiveness of model training and deployment. Our work investigates the transferability and discrepancies of scaling laws between Dense Models and Mixture of Experts (MoE) models. Through a combination of theoretical analysis and extensive experiments, including consistent loss scaling, optimal batch size and learning rate scaling, and resource allocation strategies scaling, our findings reveal that the power-law scaling framework also applies to MoE Models, indicating that the fundamental principles governing the scaling behavior of these models are preserved, even though the architecture differs. Additionally, MoE Models demonstrate superior generalization, resulting in lower testing losses with the same training compute budget compared to Dense Models. These findings indicate the scaling consistency and transfer generalization capabilities of MoE Models, providing new insights for optimizing MoE Model training and deployment strategies.<|reference_end|>
arxiv
@article{wang2024scaling, title={Scaling Laws Across Model Architectures: A Comparative Analysis of Dense and MoE Models in Large Language Models}, author={Siqi Wang, Zhengyu Chen, Bei Li, Keqing He, Min Zhang, Jingang Wang}, journal={arXiv preprint arXiv:2410.05661}, year={2024}, archivePrefix={arXiv}, eprint={2410.05661}, primaryClass={cs.LG cs.AI} }
wang2024scaling
arxiv-666876
2410.05662
Federated Learning with Dynamic Client Arrival and Departure: Convergence and Rapid Adaptation via Initial Model Construction
<|reference_start|>Federated Learning with Dynamic Client Arrival and Departure: Convergence and Rapid Adaptation via Initial Model Construction: While most existing federated learning (FL) approaches assume a fixed set of clients in the system, in practice, clients can dynamically leave or join the system depending on their needs or interest in the specific task. This dynamic FL setting introduces several key challenges: (1) the objective function dynamically changes depending on the current set of clients, unlike traditional FL approaches that maintain a static optimization goal; (2) the current global model may not serve as the best initial point for the next FL rounds and could potentially lead to slow adaptation, given the possibility of clients leaving or joining the system. In this paper, we consider a dynamic optimization objective in FL that seeks the optimal model tailored to the currently active set of clients. Building on our probabilistic framework that provides direct insights into how the arrival and departure of different types of clients influence the shifts in optimal points, we establish an upper bound on the optimality gap, accounting for factors such as stochastic gradient noise, local training iterations, non-IIDness of data distribution, and deviations between optimal points caused by dynamic client pattern. We also propose an adaptive initial model construction strategy that employs weighted averaging guided by gradient similarity, prioritizing models trained on clients whose data characteristics align closely with the current one, thereby enhancing adaptability to the current clients. The proposed approach is validated on various datasets and FL algorithms, demonstrating robust performance across diverse client arrival and departure patterns, underscoring its effectiveness in dynamic FL environments.<|reference_end|>
arxiv
@article{chang2024federated, title={Federated Learning with Dynamic Client Arrival and Departure: Convergence and Rapid Adaptation via Initial Model Construction}, author={Zhan-Lun Chang, Dong-Jun Han, Rohit Parasnis, Seyyedali Hosseinalipour, Christopher G. Brinton}, journal={arXiv preprint arXiv:2410.05662}, year={2024}, archivePrefix={arXiv}, eprint={2410.05662}, primaryClass={cs.LG} }
chang2024federated
arxiv-666877
2410.05663
Abstract Hardware Grounding towards the Automated Design of Automation Systems
<|reference_start|>Abstract Hardware Grounding towards the Automated Design of Automation Systems: Crafting automation systems tailored for specific domains requires aligning the space of human experts' semantics with the space of robot executable actions, and scheduling the required resources and system layout accordingly. Regrettably, there are three major gaps, fine-grained domain-specific knowledge injection, heterogeneity between human knowledge and robot instructions, and diversity of users' preferences, resulting automation system design a case-by-case and labour-intensive effort, thus hindering the democratization of automation. We refer to this challenging alignment as the abstract hardware grounding problem, where we firstly regard the procedural operations in humans' semantics space as the abstraction of hardware requirements, then we ground such abstractions to instantiated hardware devices, subject to constraints and preferences in the real world -- optimizing this problem is essentially standardizing and automating the design of automation systems. On this basis, we develop an automated design framework in a hybrid data-driven and principle-derived fashion. Results on designing self-driving laboratories for enhancing experiment-driven scientific discovery suggest our framework's potential to produce compact systems that fully satisfy domain-specific and user-customized requirements with no redundancy.<|reference_end|>
arxiv
@article{shi2024abstract, title={Abstract Hardware Grounding towards the Automated Design of Automation Systems}, author={Yu-Zhe Shi, Qiao Xu, Fanxu Meng, Lecheng Ruan, Qining Wang}, journal={arXiv preprint arXiv:2410.05663}, year={2024}, archivePrefix={arXiv}, eprint={2410.05663}, primaryClass={cs.RO} }
shi2024abstract
arxiv-666878
2410.05664
Holistic Unlearning Benchmark: A Multi-Faceted Evaluation for Text-to-Image Diffusion Model Unlearning
<|reference_start|>Holistic Unlearning Benchmark: A Multi-Faceted Evaluation for Text-to-Image Diffusion Model Unlearning: As text-to-image diffusion models become advanced enough for commercial applications, there is also increasing concern about their potential for malicious and harmful use. Model unlearning has been proposed to mitigate the concerns by removing undesired and potentially harmful information from the pre-trained model. So far, the success of unlearning is mainly measured by whether the unlearned model can generate a target concept while maintaining image quality. However, unlearning is typically tested under limited scenarios, and the side effects of unlearning have barely been studied in the current literature. In this work, we thoroughly analyze unlearning under various scenarios with five key aspects. Our investigation reveals that every method has side effects or limitations, especially in more complex and realistic situations. By releasing our comprehensive evaluation framework with the source codes and artifacts, we hope to inspire further research in this area, leading to more reliable and effective unlearning methods.<|reference_end|>
arxiv
@article{moon2024holistic, title={Holistic Unlearning Benchmark: A Multi-Faceted Evaluation for Text-to-Image Diffusion Model Unlearning}, author={Saemi Moon, Minjong Lee, Sangdon Park, Dongwoo Kim}, journal={arXiv preprint arXiv:2410.05664}, year={2024}, archivePrefix={arXiv}, eprint={2410.05664}, primaryClass={cs.CV cs.LG} }
moon2024holistic
arxiv-666879
2410.05665
Edge-Cloud Collaborative Satellite Image Analysis for Efficient Man-Made Structure Recognition
<|reference_start|>Edge-Cloud Collaborative Satellite Image Analysis for Efficient Man-Made Structure Recognition: The increasing availability of high-resolution satellite imagery has created immense opportunities for various applications. However, processing and analyzing such vast amounts of data in a timely and accurate manner poses significant challenges. The paper presents a new satellite image processing architecture combining edge and cloud computing to better identify man-made structures against natural landscapes. By employing lightweight models at the edge, the system initially identifies potential man-made structures from satellite imagery. These identified images are then transmitted to the cloud, where a more complex model refines the classification, determining specific types of structures. The primary focus is on the trade-off between latency and accuracy, as efficient models often sacrifice accuracy. We compare this hybrid edge-cloud approach against traditional "bent-pipe" method in virtual environment experiments as well as introduce a practical model and compare its performance with existing lightweight models for edge deployment, focusing on accuracy and latency. The results demonstrate that the edge-cloud collaborative model not only reduces overall latency due to minimized data transmission but also maintains high accuracy, offering substantial improvements over traditional approaches under this scenario.<|reference_end|>
arxiv
@article{sheng2024edge-cloud, title={Edge-Cloud Collaborative Satellite Image Analysis for Efficient Man-Made Structure Recognition}, author={Kaicheng Sheng, Junxiao Xue, and Hui Zhang}, journal={arXiv preprint arXiv:2410.05665}, year={2024}, archivePrefix={arXiv}, eprint={2410.05665}, primaryClass={cs.CV} }
sheng2024edge-cloud
arxiv-666880
2410.05668
Diversity and Inclusion Index with Networks and Similarity: Analysis and its Application
<|reference_start|>Diversity and Inclusion Index with Networks and Similarity: Analysis and its Application: In recent years, the concepts of ``diversity'' and ``inclusion'' have attracted considerable attention across a range of fields, encompassing both social and biological disciplines. To fully understand these concepts, it is critical to not only examine the number of categories but also the similarities and relationships among them. In this study, I introduce a novel index for diversity and inclusion that considers similarities and network connections. I analyzed the properties of these indices and investigated their mathematical relationships using established measures of diversity and networks. Moreover, I developed a methodology for estimating similarities based on the utility of diversity. I also created a method for visualizing proportions, similarities, and network connections. Finally, I evaluated the correlation with external metrics using real-world data, confirming that both the proposed indices and our index can be effectively utilized. This study contributes to a more nuanced understanding of diversity and inclusion analysis.<|reference_end|>
arxiv
@article{kinjo2024diversity, title={Diversity and Inclusion Index with Networks and Similarity: Analysis and its Application}, author={Keita Kinjo}, journal={arXiv preprint arXiv:2410.05668}, year={2024}, archivePrefix={arXiv}, eprint={2410.05668}, primaryClass={cs.SI cs.AI stat.ME} }
kinjo2024diversity
arxiv-666881
2410.05669
ACPBench: Reasoning about Action, Change, and Planning
<|reference_start|>ACPBench: Reasoning about Action, Change, and Planning: There is an increasing body of work using Large Language Models (LLMs) as agents for orchestrating workflows and making decisions in domains that require planning and multi-step reasoning. As a result, it is imperative to evaluate LLMs on core skills required for planning. In this work, we present ACPBench, a benchmark for evaluating the reasoning tasks in the field of planning. The benchmark consists of 7 reasoning tasks over 13 planning domains. The collection is constructed from planning domains described in a formal language. This allows us to synthesize problems with provably correct solutions across many tasks and domains. Further, it allows us the luxury of scale without additional human effort, i.e., many additional problems can be created automatically. Our extensive evaluation of 22 open-sourced and frontier LLMs highlight the significant gap in the reasoning capability of the LLMs. The average accuracy of one of the best-performing frontier LLMs -- GPT-4o on these tasks can fall as low as 52.50% ACPBench collection is available at https://ibm.github.io/ACPBench.<|reference_end|>
arxiv
@article{kokel2024acpbench:, title={ACPBench: Reasoning about Action, Change, and Planning}, author={Harsha Kokel, Michael Katz, Kavitha Srinivas, Shirin Sohrabi}, journal={arXiv preprint arXiv:2410.05669}, year={2024}, archivePrefix={arXiv}, eprint={2410.05669}, primaryClass={cs.AI} }
kokel2024acpbench:
arxiv-666882
2410.05670
Improving Disease Comorbidity Prediction Based on Human Interactome with Biologically Supervised Graph Embedding
<|reference_start|>Improving Disease Comorbidity Prediction Based on Human Interactome with Biologically Supervised Graph Embedding: Comorbidity carries significant implications for disease understanding and management. The genetic causes for comorbidity often trace back to mutations occurred either in the same gene associated with two diseases or in different genes associated with different diseases respectively but coming into connection via protein-protein interactions. Therefore, human interactome has been used in more sophisticated study of disease comorbidity. Human interactome, as a large incomplete graph, presents its own challenges to extracting useful features for comorbidity prediction. In this work, we introduce a novel approach named Biologically Supervised Graph Embedding (BSE) to allow for selecting most relevant features to enhance the prediction accuracy of comorbid disease pairs. Our investigation into BSE's impact on both centered and uncentered embedding methods showcases its consistent superiority over the state-of-the-art techniques and its adeptness in selecting dimensions enriched with vital biological insights, thereby improving prediction performance significantly, up to 50% when measured by ROC for some variations. Further analysis indicates that BSE consistently and substantially improves the ratio of disease associations to gene connectivity, affirming its potential in uncovering latent biological factors affecting comorbidity. The statistically significant enhancements across diverse metrics underscore BSE's potential to introduce novel avenues for precise disease comorbidity predictions and other potential applications. The GitHub repository containing the source code can be accessed at the following link: https://github.com/xihan-qin/Biologically-Supervised-Graph-Embedding.<|reference_end|>
arxiv
@article{qin2024improving, title={Improving Disease Comorbidity Prediction Based on Human Interactome with Biologically Supervised Graph Embedding}, author={Xihan Qin, Li Liao}, journal={arXiv preprint arXiv:2410.05670}, year={2024}, archivePrefix={arXiv}, eprint={2410.05670}, primaryClass={cs.LG} }
qin2024improving
arxiv-666883
2410.05672
Embedding derivatives and derivative Area operators of Hardy spaces into Lebesgue spaces
<|reference_start|>Embedding derivatives and derivative Area operators of Hardy spaces into Lebesgue spaces: We characterize the compactness of embedding derivatives from Hardy space $H^p$ into Lebesgue space $L^q(\mu)$. We also completely characterize the boundedness and compactness of derivative area operators from $H^p$ into $L^q(\mathbb{S}_n)$, $0<p, q<\infty$. Some of the tools used in the proof of the one-dimensional case are not available in higher dimensions, such as the strong factorization of Hardy spaces. Therefore, we need the theory of tent spaces which was established by Coifman, Mayer and Stein in 1985.<|reference_end|>
arxiv
@article{liu2024embedding, title={Embedding derivatives and derivative Area operators of Hardy spaces into Lebesgue spaces}, author={Xiaosong Liu, Zengjian Lou, Zixing Yuan, Ruhan Zhao}, journal={arXiv preprint arXiv:2410.05672}, year={2024}, archivePrefix={arXiv}, eprint={2410.05672}, primaryClass={cs.IR} }
liu2024embedding
arxiv-666884
2410.05673
Learning Equilibria in Adversarial Team Markov Games: A Nonconvex-Hidden-Concave Min-Max Optimization Problem
<|reference_start|>Learning Equilibria in Adversarial Team Markov Games: A Nonconvex-Hidden-Concave Min-Max Optimization Problem: We study the problem of learning a Nash equilibrium (NE) in Markov games which is a cornerstone in multi-agent reinforcement learning (MARL). In particular, we focus on infinite-horizon adversarial team Markov games (ATMGs) in which agents that share a common reward function compete against a single opponent, the adversary. These games unify two-player zero-sum Markov games and Markov potential games, resulting in a setting that encompasses both collaboration and competition. Kalogiannis et al. (2023a) provided an efficient equilibrium computation algorithm for ATMGs which presumes knowledge of the reward and transition functions and has no sample complexity guarantees. We contribute a learning algorithm that utilizes MARL policy gradient methods with iteration and sample complexity that is polynomial in the approximation error $\epsilon$ and the natural parameters of the ATMG, resolving the main caveats of the solution by (Kalogiannis et al., 2023a). It is worth noting that previously, the existence of learning algorithms for NE was known for Markov two-player zero-sum and potential games but not for ATMGs. Seen through the lens of min-max optimization, computing a NE in these games consists a nonconvex-nonconcave saddle-point problem. Min-max optimization has received extensive study. Nevertheless, the case of nonconvex-nonconcave landscapes remains elusive: in full generality, finding saddle-points is computationally intractable (Daskalakis et al., 2021). We circumvent the aforementioned intractability by developing techniques that exploit the hidden structure of the objective function via a nonconvex-concave reformulation. However, this introduces the challenge of a feasibility set with coupled constraints. We tackle these challenges by establishing novel techniques for optimizing weakly-smooth nonconvex functions, extending the framework of (Devolder et al., 2014).<|reference_end|>
arxiv
@article{kalogiannis2024learning, title={Learning Equilibria in Adversarial Team Markov Games: A Nonconvex-Hidden-Concave Min-Max Optimization Problem}, author={Fivos Kalogiannis, Jingming Yan, Ioannis Panageas}, journal={arXiv preprint arXiv:2410.05673}, year={2024}, archivePrefix={arXiv}, eprint={2410.05673}, primaryClass={cs.GT} }
kalogiannis2024learning
arxiv-666885
2410.05674
Mobile IoT device for BPM monitoring people with heart problems
<|reference_start|>Mobile IoT device for BPM monitoring people with heart problems: The developed system using a mobile electronic device for monitoring and warnings of heart problems, when the heart rate is outside the nominal range, which ranges from 60 to 100 beats per minute. Also, a system has been developed to save and monitor in real time changes of the cardiac pulsations, through a sensor connected to a control system. The connection of the communication module for Arduino GSM/GPRS/GPS, using the GPS network to locate the user. In addition, this device connects with GSM / GPRS technology that allows text messages to be sent to the contact number configured in the device, when warnings of heart problems are issued, moreover connects to the internet to store data in the cloud.<|reference_end|>
arxiv
@article{chuquimarca2024mobile, title={Mobile IoT device for BPM monitoring people with heart problems}, author={Luis Chuquimarca, Dahyana Roca, Washington Torres, Luis Amaya, Jaime Orozco, David S'anchez}, journal={In 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE) (pp. 1-5). IEEE}, year={2024}, doi={10.1109/ICECCE49384.2020.9179293}, archivePrefix={arXiv}, eprint={2410.05674}, primaryClass={eess.SP cs.SY eess.SY} }
chuquimarca2024mobile
arxiv-666886
2410.05675
Understanding with toy surrogate models in machine learning
<|reference_start|>Understanding with toy surrogate models in machine learning: In the natural and social sciences, it is common to use toy models -- extremely simple and highly idealized representations -- to understand complex phenomena. Some of the simple surrogate models used to understand opaque machine learning (ML) models, such as rule lists and sparse decision trees, bear some resemblance to scientific toy models. They allow non-experts to understand how an opaque ML model works globally via a much simpler model that highlights the most relevant features of the input space and their effect on the output. The obvious difference is that the common target of a toy and a full-scale model in the sciences is some phenomenon in the world, while the target of a surrogate model is another model. This essential difference makes toy surrogate models (TSMs) a new object of study for theories of understanding, one that is not easily accommodated under current analyses. This paper provides an account of what it means to understand an opaque ML model globally with the aid of such simple models.<|reference_end|>
arxiv
@article{páez2024understanding, title={Understanding with toy surrogate models in machine learning}, author={Andr'es P'aez}, journal={arXiv preprint arXiv:2410.05675}, year={2024}, archivePrefix={arXiv}, eprint={2410.05675}, primaryClass={cs.LG} }
páez2024understanding
arxiv-666887
2410.05677
T2V-Turbo-v2: Enhancing Video Generation Model Post-Training through Data, Reward, and Conditional Guidance Design
<|reference_start|>T2V-Turbo-v2: Enhancing Video Generation Model Post-Training through Data, Reward, and Conditional Guidance Design: In this paper, we focus on enhancing a diffusion-based text-to-video (T2V) model during the post-training phase by distilling a highly capable consistency model from a pretrained T2V model. Our proposed method, T2V-Turbo-v2, introduces a significant advancement by integrating various supervision signals, including high-quality training data, reward model feedback, and conditional guidance, into the consistency distillation process. Through comprehensive ablation studies, we highlight the crucial importance of tailoring datasets to specific learning objectives and the effectiveness of learning from diverse reward models for enhancing both the visual quality and text-video alignment. Additionally, we highlight the vast design space of conditional guidance strategies, which centers on designing an effective energy function to augment the teacher ODE solver. We demonstrate the potential of this approach by extracting motion guidance from the training datasets and incorporating it into the ODE solver, showcasing its effectiveness in improving the motion quality of the generated videos with the improved motion-related metrics from VBench and T2V-CompBench. Empirically, our T2V-Turbo-v2 establishes a new state-of-the-art result on VBench, with a Total score of 85.13, surpassing proprietary systems such as Gen-3 and Kling.<|reference_end|>
arxiv
@article{li2024t2v-turbo-v2:, title={T2V-Turbo-v2: Enhancing Video Generation Model Post-Training through Data, Reward, and Conditional Guidance Design}, author={Jiachen Li, Qian Long, Jian Zheng, Xiaofeng Gao, Robinson Piramuthu, Wenhu Chen, William Yang Wang}, journal={arXiv preprint arXiv:2410.05677}, year={2024}, archivePrefix={arXiv}, eprint={2410.05677}, primaryClass={cs.CV cs.AI} }
li2024t2v-turbo-v2:
arxiv-666888
2410.05680
Convolutional neural networks applied to modification of images
<|reference_start|>Convolutional neural networks applied to modification of images: The reader will learn how digital images are edited using linear algebra and calculus. Starting from the concept of filter towards machine learning techniques such as convolutional neural networks.<|reference_end|>
arxiv
@article{aguirre-velez2024convolutional, title={Convolutional neural networks applied to modification of images}, author={Carlos I. Aguirre-Velez and Jose Antonio Arciniega-Nevarez and Eric Dolores-Cuenca}, journal={In: Sriraman, B. (eds) Handbook of Visual, Experimental and Computational Mathematics . Springer, Cham. (2023)}, year={2024}, doi={10.1007/978-3-030-93954-0_5-1}, archivePrefix={arXiv}, eprint={2410.05680}, primaryClass={cs.CV} }
aguirre-velez2024convolutional
arxiv-666889
2410.05681
Whole-Body Dynamic Throwing with Legged Manipulators
<|reference_start|>Whole-Body Dynamic Throwing with Legged Manipulators: Most robotic behaviours focus on either manipulation or locomotion, where tasks that require the integration of both, such as full-body throwing, remain under-explored. Throwing with a robot involves complex coordination between object manipulation and legged locomotion, which is crucial for advanced real-world interactions. This work investigates the challenge of full-body throwing in robotic systems and highlights the advantages of utilising the robot's entire body. We propose a deep reinforcement learning (RL) approach that leverages the robot's body to enhance throwing performance through a strategically designed curriculum to avoid local optima and sparse but informative reward functions to improve policy flexibility. The robot's body learns to generate additional momentum and fine-tune the projectile release velocity. Our full-body method achieves on average 47% greater throwing distance and 34% greater throwing accuracy than the arm alone, across two robot morphologies - an armed quadruped and a humanoid. We also extend our method to optimise robot stability during throws. The learned policy effectively generalises throwing to targets at any 3D point in space within a specified range, which has not previously been achieved and does so with human-level throwing accuracy. We successfully transferred this approach from simulation to a real robot using sim2real techniques, demonstrating its practical viability.<|reference_end|>
arxiv
@article{munn2024whole-body, title={Whole-Body Dynamic Throwing with Legged Manipulators}, author={Humphrey Munn, Brendan Tidd, David Howard, Marcus Gallagher}, journal={arXiv preprint arXiv:2410.05681}, year={2024}, archivePrefix={arXiv}, eprint={2410.05681}, primaryClass={cs.RO} }
munn2024whole-body
arxiv-666890
2410.05683
How Maintainable is Proficient Code? A Case Study of Three PyPI Libraries
<|reference_start|>How Maintainable is Proficient Code? A Case Study of Three PyPI Libraries: Python is very popular because it can be used for a wider audience of developers, data scientists, machine learning experts and so on. Like other programming languages, there are beginner to advanced levels of writing Python code. However, like all software, code constantly needs to be maintained as bugs and the need for new features emerge. Although the Zen of Python states that "Simple is better than complex," we hypothesize that more elegant and proficient code might be harder for the developer to maintain. To study this relationship between the understanding of code maintainability and code proficiency, we present an exploratory study into the complexity of Python code on three Python libraries. Specifically, we investigate the risk level of proficient code inside a file. As a starting point, we mined and collected the proficiency of code from three PyPI libraries totaling 3,003 files. We identified several instances of high proficient code that was also high risk, with examples being simple list comprehensions, 'enumerate' calls, generator expressions, simple dictionary comprehensions, and the 'super' function. Our early examples revealed that most code-proficient development presented a low maintainability risk, yet there are some cases where proficient code is also risky to maintenance. We envision that the study should help developers identify scenarios where and when using proficient code might be detrimental to future code maintenance activities.<|reference_end|>
arxiv
@article{febriyanti2024how, title={How Maintainable is Proficient Code? A Case Study of Three PyPI Libraries}, author={Indira Febriyanti, Youmei Fan, Kazumasa Shimari, Kenichi Matsumoto, and Raula Gaikovina Kula}, journal={arXiv preprint arXiv:2410.05683}, year={2024}, archivePrefix={arXiv}, eprint={2410.05683}, primaryClass={cs.SE} }
febriyanti2024how
arxiv-666891
2410.05684
Copiloting Diagnosis of Autism in Real Clinical Scenarios via LLMs
<|reference_start|>Copiloting Diagnosis of Autism in Real Clinical Scenarios via LLMs: Autism spectrum disorder(ASD) is a pervasive developmental disorder that significantly impacts the daily functioning and social participation of individuals. Despite the abundance of research focused on supporting the clinical diagnosis of ASD, there is still a lack of systematic and comprehensive exploration in the field of methods based on Large Language Models (LLMs), particularly regarding the real-world clinical diagnostic scenarios based on Autism Diagnostic Observation Schedule, Second Edition (ADOS-2). Therefore, we have proposed a framework called ADOS-Copilot, which strikes a balance between scoring and explanation and explored the factors that influence the performance of LLMs in this task. The experimental results indicate that our proposed framework is competitive with the diagnostic results of clinicians, with a minimum MAE of 0.4643, binary classification F1-score of 81.79\%, and ternary classification F1-score of 78.37\%. Furthermore, we have systematically elucidated the strengths and limitations of current LLMs in this task from the perspectives of ADOS-2, LLMs' capabilities, language, and model scale aiming to inspire and guide the future application of LLMs in a broader fields of mental health disorders. We hope for more research to be transferred into real clinical practice, opening a window of kindness to the world for eccentric children.<|reference_end|>
arxiv
@article{jiang2024copiloting, title={Copiloting Diagnosis of Autism in Real Clinical Scenarios via LLMs}, author={Yi Jiang, Qingyang Shen, Shuzhong Lai, Shunyu Qi, Qian Zheng, Lin Yao, Yueming Wang, Gang Pan}, journal={arXiv preprint arXiv:2410.05684}, year={2024}, archivePrefix={arXiv}, eprint={2410.05684}, primaryClass={cs.HC cs.AI cs.CL} }
jiang2024copiloting
arxiv-666892
2410.05686
Deep Learning and Machine Learning with GPGPU and CUDA: Unlocking the Power of Parallel Computing
<|reference_start|>Deep Learning and Machine Learning with GPGPU and CUDA: Unlocking the Power of Parallel Computing: This book presents a comprehensive exploration of GPGPU (General Purpose Graphics Processing Unit) and its applications in deep learning and machine learning. It focuses on how parallel computing, particularly through the use of CUDA (Compute Unified Device Architecture), can unlock unprecedented computational power for complex tasks. The book provides detailed discussions on CPU and GPU architectures, data flow in deep learning, and advanced GPU features like streams, concurrency, and dynamic parallelism. Furthermore, it delves into practical applications of GPGPU in various domains such as scientific computing, machine learning acceleration, real-time rendering, and cryptocurrency mining. The authors also emphasize the importance of selecting the right parallel architecture (e.g., GPU, FPGA, TPU, ASIC) based on specific tasks, offering insights into optimizing algorithms for these platforms. The book also provides practical examples with popular machine learning frameworks like PyTorch, TensorFlow, and XGBoost, demonstrating how to efficiently leverage GPU resources in both training and inference. This resource is valuable for both beginners and advanced readers who are looking to deepen their understanding of GPU-based parallel computing and its significant role in modern machine learning and AI applications.<|reference_end|>
arxiv
@article{li2024deep, title={Deep Learning and Machine Learning with GPGPU and CUDA: Unlocking the Power of Parallel Computing}, author={Ming Li, Ziqian Bi, Tianyang Wang, Yizhu Wen, Qian Niu, Junyu Liu, Benji Peng, Sen Zhang, Xuanhe Pan, Jiawei Xu, Jinlang Wang, Keyu Chen, Caitlyn Heqi Yin, Pohsun Feng, Ming Liu}, journal={arXiv preprint arXiv:2410.05686}, year={2024}, archivePrefix={arXiv}, eprint={2410.05686}, primaryClass={cs.DC cs.AR} }
li2024deep
arxiv-666893
2410.05687
Extreme Value Modelling of Feature Residuals for Anomaly Detection in Dynamic Graphs
<|reference_start|>Extreme Value Modelling of Feature Residuals for Anomaly Detection in Dynamic Graphs: Detecting anomalies in a temporal sequence of graphs can be applied is areas such as the detection of accidents in transport networks and cyber attacks in computer networks. Existing methods for detecting abnormal graphs can suffer from multiple limitations, such as high false positive rates as well as difficulties with handling variable-sized graphs and non-trivial temporal dynamics. To address this, we propose a technique where temporal dependencies are explicitly modelled via time series analysis of a large set of pertinent graph features, followed by using residuals to remove the dependencies. Extreme Value Theory is then used to robustly model and classify any remaining extremes, aiming to produce low false positives rates. Comparative evaluations on a multitude of graph instances show that the proposed approach obtains considerably better accuracy than TensorSplat and Laplacian Anomaly Detection.<|reference_end|>
arxiv
@article{kandanaarachchi2024extreme, title={Extreme Value Modelling of Feature Residuals for Anomaly Detection in Dynamic Graphs}, author={Sevvandi Kandanaarachchi, Conrad Sanderson, Rob J. Hyndman}, journal={arXiv preprint arXiv:2410.05687}, year={2024}, archivePrefix={arXiv}, eprint={2410.05687}, primaryClass={cs.LG} }
kandanaarachchi2024extreme
arxiv-666894
2410.05688
Fishery resources management
<|reference_start|>Fishery resources management: We consider management of the fish species Plecoglossus altivelis altivelis, a major inland fishery resource in Japan playing important roles from economic, cultural, and recreational viewpoints. We firstly summarize the collected body weight data of the fish in the Hii River, Japan since 2016. The two kinds of data are available in each year with few exceptions: the historical data during summer and autumn collected with the help of an angular and the annual distribution data at the Toami (casting net) competition where we could obtain the data from many anglers during two hours in one day. We fit deterministic and uncertain logistic growth models to the data in each year and discuss their performance. The fitted uncertain logistic growth model is applied to an optimal harvesting problem of the fish subject to a sustainability concern and model distortion. Several numerical schemes for solving the problem are examined and compared both theoretically and numerically.<|reference_end|>
arxiv
@article{yoshioka2024fishery, title={Fishery resources management}, author={Hidekazu Yoshioka, Motoh Tsujimura, Yumi Yoshioka}, journal={arXiv preprint arXiv:2410.05688}, year={2024}, archivePrefix={arXiv}, eprint={2410.05688}, primaryClass={cs.CE} }
yoshioka2024fishery
arxiv-666895
2410.05690
Long-Context Linear System Identification
<|reference_start|>Long-Context Linear System Identification: This paper addresses the problem of long-context linear system identification, where the state $x_t$ of a dynamical system at time $t$ depends linearly on previous states $x_s$ over a fixed context window of length $p$. We establish a sample complexity bound that matches the i.i.d. parametric rate up to logarithmic factors for a broad class of systems, extending previous works that considered only first-order dependencies. Our findings reveal a learning-without-mixing phenomenon, indicating that learning long-context linear autoregressive models is not hindered by slow mixing properties potentially associated with extended context windows. Additionally, we extend these results to (i) shared low-rank representations, where rank-regularized estimators improve rates with respect to dimensionality, and (ii) misspecified context lengths in strictly stable systems, where shorter contexts offer statistical advantages.<|reference_end|>
arxiv
@article{yüksel2024long-context, title={Long-Context Linear System Identification}, author={Ou{g}uz Kaan Y"uksel, Mathieu Even, Nicolas Flammarion}, journal={arXiv preprint arXiv:2410.05690}, year={2024}, archivePrefix={arXiv}, eprint={2410.05690}, primaryClass={stat.ML cs.LG cs.SY eess.SY math.ST stat.TH} }
yüksel2024long-context
arxiv-666896
2410.05694
DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing
<|reference_start|>DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing: Recent advances in diffusion models have introduced a new era of text-guided image manipulation, enabling users to create realistic edited images with simple textual prompts. However, there is significant concern about the potential misuse of these methods, especially in creating misleading or harmful content. Although recent defense strategies, which introduce imperceptible adversarial noise to induce model failure, have shown promise, they remain ineffective against more sophisticated manipulations, such as editing with a mask. In this work, we propose DiffusionGuard, a robust and effective defense method against unauthorized edits by diffusion-based image editing models, even in challenging setups. Through a detailed analysis of these models, we introduce a novel objective that generates adversarial noise targeting the early stage of the diffusion process. This approach significantly improves the efficiency and effectiveness of adversarial noises. We also introduce a mask-augmentation technique to enhance robustness against various masks during test time. Finally, we introduce a comprehensive benchmark designed to evaluate the effectiveness and robustness of methods in protecting against privacy threats in realistic scenarios. Through extensive experiments, we show that our method achieves stronger protection and improved mask robustness with lower computational costs compared to the strongest baseline. Additionally, our method exhibits superior transferability and better resilience to noise removal techniques compared to all baseline methods. Our source code is publicly available at https://github.com/choi403/DiffusionGuard.<|reference_end|>
arxiv
@article{choi2024diffusionguard:, title={DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing}, author={June Suk Choi, Kyungmin Lee, Jongheon Jeong, Saining Xie, Jinwoo Shin, Kimin Lee}, journal={arXiv preprint arXiv:2410.05694}, year={2024}, archivePrefix={arXiv}, eprint={2410.05694}, primaryClass={cs.CV} }
choi2024diffusionguard:
arxiv-666897
2410.05695
Unlocking the Boundaries of Thought: A Reasoning Granularity Framework to Quantify and Optimize Chain-of-Thought
<|reference_start|>Unlocking the Boundaries of Thought: A Reasoning Granularity Framework to Quantify and Optimize Chain-of-Thought: Chain-of-Thought (CoT) reasoning has emerged as a promising approach for enhancing the performance of large language models (LLMs) on complex reasoning tasks. Recently, a series of studies attempt to explain the mechanisms underlying CoT, aiming to deepen the understanding of its efficacy. Nevertheless, the existing research faces two major challenges: (1) a lack of quantitative metrics to assess CoT capabilities and (2) a dearth of guidance on optimizing CoT performance. Motivated by this, in this work, we introduce a novel reasoning granularity framework (RGF) to address these challenges. To solve the lack of quantification, we first define a reasoning granularity (RG) to quantify the upper bound of CoT and establish a combination law for RG, enabling a practical quantitative approach applicable to various real-world CoT tasks. To address the lack of optimization, we propose three categories of RGs. We further optimize these categories with combination laws focused on RG promotion and reasoning path optimization for CoT improvement. Through extensive experiments on 25 models and 4 tasks, the study validates the existence and rationality of the proposed framework. Furthermore, it explains the effectiveness of 10 CoT strategies and guides optimization from two perspectives. We hope this work can provide a comprehensive understanding of the boundaries and optimization strategies for reasoning in LLMs. Our code and data are available at https://github.com/LightChen233/reasoning-granularity.<|reference_end|>
arxiv
@article{chen2024unlocking, title={Unlocking the Capabilities of Thought: A Reasoning Boundary Framework to Quantify and Optimize Chain-of-Thought}, author={Qiguang Chen, Libo Qin, Jiaqi Wang, Jinxuan Zhou, Wanxiang Che}, journal={arXiv preprint arXiv:2410.05695}, year={2024}, archivePrefix={arXiv}, eprint={2410.05695}, primaryClass={cs.CL} }
chen2024unlocking
arxiv-666898
2410.05697
Diffusing to the Top: Boost Graph Neural Networks with Minimal Hyperparameter Tuning
<|reference_start|>Diffusing to the Top: Boost Graph Neural Networks with Minimal Hyperparameter Tuning: Graph Neural Networks (GNNs) are proficient in graph representation learning and achieve promising performance on versatile tasks such as node classification and link prediction. Usually, a comprehensive hyperparameter tuning is essential for fully unlocking GNN's top performance, especially for complicated tasks such as node classification on large graphs and long-range graphs. This is usually associated with high computational and time costs and careful design of appropriate search spaces. This work introduces a graph-conditioned latent diffusion framework (GNN-Diff) to generate high-performing GNNs based on the model checkpoints of sub-optimal hyperparameters selected by a light-tuning coarse search. We validate our method through 166 experiments across four graph tasks: node classification on small, large, and long-range graphs, as well as link prediction. Our experiments involve 10 classic and state-of-the-art target models and 20 publicly available datasets. The results consistently demonstrate that GNN-Diff: (1) boosts the performance of GNNs with efficient hyperparameter tuning; and (2) presents high stability and generalizability on unseen data across multiple generation runs. The code is available at https://github.com/lequanlin/GNN-Diff.<|reference_end|>
arxiv
@article{lin2024diffusing, title={Diffusing to the Top: Boost Graph Neural Networks with Minimal Hyperparameter Tuning}, author={Lequan Lin, Dai Shi, Andi Han, Zhiyong Wang, Junbin Gao}, journal={arXiv preprint arXiv:2410.05697}, year={2024}, archivePrefix={arXiv}, eprint={2410.05697}, primaryClass={cs.LG} }
lin2024diffusing
arxiv-666899
2410.05698
A Two-Step Approach for Data-Efficient French Pronunciation Learning
<|reference_start|>A Two-Step Approach for Data-Efficient French Pronunciation Learning: Recent studies have addressed intricate phonological phenomena in French, relying on either extensive linguistic knowledge or a significant amount of sentence-level pronunciation data. However, creating such resources is expensive and non-trivial. To this end, we propose a novel two-step approach that encompasses two pronunciation tasks: grapheme-to-phoneme and post-lexical processing. We then investigate the efficacy of the proposed approach with a notably limited amount of sentence-level pronunciation data. Our findings demonstrate that the proposed two-step approach effectively mitigates the lack of extensive labeled data, and serves as a feasible solution for addressing French phonological phenomena even under resource-constrained environments.<|reference_end|>
arxiv
@article{lee2024a, title={A Two-Step Approach for Data-Efficient French Pronunciation Learning}, author={Hoyeon Lee, Hyeeun Jang, Jong-Hwan Kim, Jae-Min Kim}, journal={arXiv preprint arXiv:2410.05698}, year={2024}, archivePrefix={arXiv}, eprint={2410.05698}, primaryClass={cs.CL cs.AI} }
lee2024a
arxiv-666900
2410.05700
Log-concave Sampling over a Convex Body with a Barrier: a Robust and Unified Dikin Walk
<|reference_start|>Log-concave Sampling over a Convex Body with a Barrier: a Robust and Unified Dikin Walk: We consider the problem of sampling from a $d$-dimensional log-concave distribution $\pi(\theta) \propto \exp(-f(\theta))$ for $L$-Lipschitz $f$, constrained to a convex body with an efficiently computable self-concordant barrier function, contained in a ball of radius $R$ with a $w$-warm start. We propose a \emph{robust} sampling framework that computes spectral approximations to the Hessian of the barrier functions in each iteration. We prove that for polytopes that are described by $n$ hyperplanes, sampling with the Lee-Sidford barrier function mixes within $\widetilde O((d^2+dL^2R^2)\log(w/\delta))$ steps with a per step cost of $\widetilde O(nd^{\omega-1})$, where $\omega\approx 2.37$ is the fast matrix multiplication exponent. Compared to the prior work of Mangoubi and Vishnoi, our approach gives faster mixing time as we are able to design a generalized soft-threshold Dikin walk beyond log-barrier. We further extend our result to show how to sample from a $d$-dimensional spectrahedron, the constrained set of a semidefinite program, specified by the set $\{x\in \mathbb{R}^d: \sum_{i=1}^d x_i A_i \succeq C \}$ where $A_1,\ldots,A_d, C$ are $n\times n$ real symmetric matrices. We design a walk that mixes in $\widetilde O((nd+dL^2R^2)\log(w/\delta))$ steps with a per iteration cost of $\widetilde O(n^\omega+n^2d^{3\omega-5})$. We improve the mixing time bound of prior best Dikin walk due to Narayanan and Rakhlin that mixes in $\widetilde O((n^2d^3+n^2dL^2R^2)\log(w/\delta))$ steps.<|reference_end|>
arxiv
@article{gu2024log-concave, title={Log-concave Sampling from a Convex Body with a Barrier: a Robust and Unified Dikin Walk}, author={Yuzhou Gu, Nikki Lijing Kuang, Yi-An Ma, Zhao Song, Lichen Zhang}, journal={arXiv preprint arXiv:2410.05700}, year={2024}, archivePrefix={arXiv}, eprint={2410.05700}, primaryClass={cs.DS cs.LG stat.ML} }
gu2024log-concave