corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-660401
|
2409.14324
|
Unveiling Narrative Reasoning Limits of Large Language Models with Trope in Movie Synopses
|
Large language models (LLMs) equipped with chain-of-thought (CoT) prompting have shown significant multi-step reasoning capabilities in factual content like mathematics, commonsense, and logic. However, their performance in narrative reasoning, which demands greater abstraction capabilities, remains unexplored. This study utilizes tropes in movie synopses to assess the abstract reasoning abilities of state-of-the-art LLMs and uncovers their low performance. We introduce a trope-wise querying approach to address these challenges and boost the F1 score by 11.8 points. Moreover, while prior studies suggest that CoT enhances multi-step reasoning, this study shows CoT can cause hallucinations in narrative content, reducing GPT-4's performance. We also introduce an Adversarial Injection method to embed trope-related text tokens into movie synopses without explicit tropes, revealing CoT's heightened sensitivity to such injections. Our comprehensive analysis provides insights for future research directions.
|
arxiv
|
@article{su2024unveiling,
title={Unveiling Narrative Reasoning Limits of Large Language Models with Trope
in Movie Synopses},
author={Hung-Ting Su and Ya-Ching Hsu and Xudong Lin and Xiang-Qian Shi and
Yulei Niu and Han-Yuan Hsu and Hung-yi Lee and Winston H. Hsu},
journal={arXiv preprint arXiv:2409.14324},
year={2024},
archivePrefix={arXiv},
eprint={2409.14324},
primaryClass={cs.CL cs.AI cs.LG}
}
|
su2024unveiling
|
arxiv-660402
|
2409.14325
|
Extending the Extension: Deterministic Algorithm for Non-monotone Submodular Maximization
|
Maximization of submodular functions under various constraints is a fundamental problem that has been studied extensively. A powerful technique that has emerged and has been shown to be extremely effective for such problems is the following. First, a continuous relaxation of the problem is obtained by relaxing the (discrete) set of feasible solutions to a convex body, and extending the discrete submodular function $f$ to a continuous function $F$ known as the multilinear extension. Then, two algorithmic steps are implemented. The first step approximately solves the relaxation by finding a fractional solution within the convex body that approximately maximizes $F$; and the second step rounds this fractional solution to a feasible integral solution. While this ``fractionally solve and then round'' approach has been a key technique for resolving many questions in the field, the main drawback of algorithms based on it is that evaluating the multilinear extension may require a number of value oracle queries to $f$ that is exponential in the size of $f$'s ground set. The only known way to tackle this issue is to approximate the value of $F$ via sampling, which makes all algorithms based on this approach inherently randomized and quite slow. In this work, we introduce a new tool, which we refer to as the extended multilinear extension, designed to derandomize submodular maximization algorithms that are based on the successful ``solve fractionally and then round'' approach. We demonstrate the effectiveness of this new tool on the fundamental problem of maximizing a submodular function subject to a matroid constraint, and show that it allows for a deterministic implementation of both the fractionally solving step and the rounding step of the above approach. As a bonus, we also get a randomized algorithm for the problem with an improved query complexity.
|
arxiv
|
@article{buchbinder2024extending,
title={Extending the Extension: Deterministic Algorithm for Non-monotone
Submodular Maximization},
author={Niv Buchbinder and Moran Feldman},
journal={arXiv preprint arXiv:2409.14325},
year={2024},
archivePrefix={arXiv},
eprint={2409.14325},
primaryClass={cs.DS cs.DM}
}
|
buchbinder2024extending
|
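The sampling-based evaluation of the multilinear extension that this abstract contrasts against can be sketched in a few lines: $F(x)$ is the expected value of $f$ on a random set that contains each element $i$ independently with probability $x_i$, so a Monte Carlo average over such random sets estimates it. The coverage function below is a toy stand-in for the value oracle $f$, not anything from the paper.

```python
import random

def multilinear_extension_estimate(f, x, num_samples=2000, seed=0):
    """Monte Carlo estimate of F(x) = E[f(R(x))], where R(x) contains
    element i independently with probability x[i]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        sample = {i for i, p in enumerate(x) if rng.random() < p}
        total += f(sample)
    return total / num_samples

# A toy submodular function: coverage of ground-set elements' neighborhoods.
neighborhoods = [{0, 1}, {1, 2}, {2, 3}]
def coverage(s):
    covered = set()
    for i in s:
        covered |= neighborhoods[i]
    return len(covered)

# At an integral point x, the estimate is exact: coverage({0, 2}) = 4.
print(multilinear_extension_estimate(coverage, [1.0, 0.0, 1.0]))  # prints 4.0
```

Because each sample costs one oracle query, accurate estimates at many fractional points are exactly the source of the randomness and slowdown the paper's extended multilinear extension is designed to remove.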
arxiv-660403
|
2409.14327
|
Transforming Multidimensional Time Series into Interpretable Event Sequences for Advanced Data Mining
|
This paper introduces a novel spatiotemporal feature representation model designed to address the limitations of traditional methods in multidimensional time series (MTS) analysis. The proposed approach converts MTS into one-dimensional sequences of spatially evolving events, preserving the complex coupling relationships between dimensions. By employing a variable-length tuple mining method, key spatiotemporal features are extracted, enhancing the interpretability and accuracy of time series analysis. Unlike conventional models, this unsupervised method does not rely on large training datasets, making it adaptable across different domains. Experimental results from motion sequence classification validate the model's superior performance in capturing intricate patterns within the data. The proposed framework has significant potential for applications across various fields, including backend services for monitoring and optimizing IT infrastructure, medical diagnosis through continuous patient monitoring and health trend analysis, and internet businesses for tracking user behavior and forecasting sales. This work offers a new theoretical foundation and technical support for advancing time series data mining and its practical applications in human behavior recognition and other domains.
|
arxiv
|
@article{yan2024transforming,
title={Transforming Multidimensional Time Series into Interpretable Event
Sequences for Advanced Data Mining},
author={Xu Yan and Yaoting Jiang and Wenyi Liu and Didi Yi and Jianjun Wei},
journal={arXiv preprint arXiv:2409.14327},
year={2024},
archivePrefix={arXiv},
eprint={2409.14327},
primaryClass={cs.LG cs.AI}
}
|
yan2024transforming
|
arxiv-660404
|
2409.14329
|
ISC4DGF: Enhancing Directed Grey-box Fuzzing with LLM-Driven Initial Seed Corpus Generation
|
Fuzz testing is crucial for identifying software vulnerabilities, with coverage-guided grey-box fuzzers like AFL and Angora excelling in broad detection. However, as the need for targeted detection grows, directed grey-box fuzzing (DGF) has become essential, focusing on specific vulnerabilities. The initial seed corpus, which consists of carefully selected input samples that the fuzzer uses as a starting point, is fundamental in determining the paths that the fuzzer explores. A well-designed seed corpus can guide the fuzzer more effectively towards critical areas of the code, improving the efficiency and success of the fuzzing process. Despite its importance, many works concentrate on refining guidance mechanisms while paying less attention to optimizing the initial seed corpus. In this paper, we introduce ISC4DGF, a novel approach to generating an optimized initial seed corpus for DGF using Large Language Models (LLMs). By leveraging LLMs' deep software understanding and refined user inputs, ISC4DGF creates a precise seed corpus that efficiently triggers specific vulnerabilities. Implemented on AFL and tested against state-of-the-art fuzzers like AFLGo, FairFuzz, and Entropic using the Magma benchmark, ISC4DGF achieved a 35.63x speedup and 616.10x fewer target reaches. Moreover, ISC4DGF focused on more effectively detecting target vulnerabilities, enhancing efficiency while operating with reduced code coverage.
|
arxiv
|
@article{xu2024isc4dgf:,
title={ISC4DGF: Enhancing Directed Grey-box Fuzzing with LLM-Driven Initial
Seed Corpus Generation},
author={Yijiang Xu and Hongrui Jia and Liguo Chen and Xin Wang and Zhengran
Zeng and Yidong Wang and Qing Gao and Jindong Wang and Wei Ye and Shikun
Zhang and Zhonghai Wu},
journal={arXiv preprint arXiv:2409.14329},
year={2024},
archivePrefix={arXiv},
eprint={2409.14329},
primaryClass={cs.SE}
}
|
xu2024isc4dgf:
|
arxiv-660405
|
2409.14330
|
Thinking in Granularity: Dynamic Quantization for Image Super-Resolution by Intriguing Multi-Granularity Clues
|
Dynamic quantization has attracted rising attention in image super-resolution (SR) as it expands the potential of heavy SR models onto mobile devices while preserving competitive performance. Existing methods explore layer-to-bit configuration upon varying local regions, adaptively allocating the bit to each layer and patch. Despite the benefits, they still fall short in the trade-off of SR accuracy and quantization efficiency. Apart from this, adapting the quantization level for each layer individually can disturb the original inter-layer relationships, thus diminishing the representation capability of quantized models. In this work, we propose Granular-DQ, which capitalizes on the intrinsic characteristics of images while dispensing with the previous consideration for layer sensitivity in quantization. Granular-DQ conducts a multi-granularity analysis of local patches with further exploration of their information densities, achieving a distinctive patch-wise and layer-invariant dynamic quantization paradigm. Specifically, Granular-DQ initiates by developing a granularity-bit controller (GBC) to apprehend the coarse-to-fine granular representations of different patches, matching their proportional contribution to the entire image to determine the proper bit-width allocation. On this premise, we investigate the relation between bit-width and information density, devising an entropy-to-bit (E2B) mechanism that enables further fine-grained dynamic bit adaptation of high-bit patches. Extensive experiments validate the superiority and generalization ability of Granular-DQ over recent state-of-the-art methods on various SR models. Code will be available at \url{https://github.com/MmmingS/Granular-DQ.git}.
|
arxiv
|
@article{wang2024thinking,
title={Thinking in Granularity: Dynamic Quantization for Image Super-Resolution
by Intriguing Multi-Granularity Clues},
author={Mingshen Wang and Zhao Zhang and Feng Li and Ke Xu and Kang Miao and Meng Wang},
journal={arXiv preprint arXiv:2409.14330},
year={2024},
archivePrefix={arXiv},
eprint={2409.14330},
primaryClass={eess.IV cs.CV}
}
|
wang2024thinking
|
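The entropy-to-bit idea in this abstract can be pictured with a minimal sketch: use the Shannon entropy of a patch's intensity histogram as a proxy for its information density, then map higher entropy to a wider bit allocation. This is an illustration of the general principle only, not the paper's E2B mechanism; the thresholds and bit levels below are hypothetical placeholders.

```python
import numpy as np

def patch_entropy(patch, bins=256):
    """Shannon entropy (in bits) of the patch's intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

def entropy_to_bit(entropy, thresholds=(2.0, 4.0, 6.0), bit_levels=(2, 4, 6, 8)):
    """Map entropy to a quantization bit-width: higher entropy -> more bits.
    Thresholds and bit levels are illustrative, not from the paper."""
    for t, b in zip(thresholds, bit_levels):
        if entropy < t:
            return b
    return bit_levels[-1]

flat = np.full((8, 8), 128)                 # constant patch: zero entropy
textured = np.arange(64).reshape(8, 8) * 4  # spread-out intensities
print(entropy_to_bit(patch_entropy(flat)))      # prints 2 (low information density)
print(entropy_to_bit(patch_entropy(textured)))  # prints 8 (high information density)
```

The point of the sketch is the patch-wise, layer-invariant flavor of the allocation: the bit-width depends only on the patch content, never on which layer processes it.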
arxiv-660406
|
2409.14331
|
PISR: Polarimetric Neural Implicit Surface Reconstruction for Textureless and Specular Objects
|
Neural implicit surface reconstruction has achieved remarkable progress recently. Despite resorting to complex radiance modeling, state-of-the-art methods still struggle with textureless and specular surfaces. Different from RGB images, polarization images can provide direct constraints on the azimuth angles of the surface normals. In this paper, we present PISR, a novel method that utilizes a geometrically accurate polarimetric loss to refine shape independently of appearance. In addition, PISR smooths surface normals in image space to eliminate severe shape distortions and leverages the hash-grid-based neural signed distance function to accelerate the reconstruction. Experimental results demonstrate that PISR achieves higher accuracy and robustness, with an L1 Chamfer distance of 0.5 mm and an F-score of 99.5% at 1 mm, while converging 4~30 times faster than previous polarimetric surface reconstruction methods.
|
arxiv
|
@article{chen2024pisr:,
title={PISR: Polarimetric Neural Implicit Surface Reconstruction for
Textureless and Specular Objects},
author={Guangcheng Chen and Yicheng He and Li He and Hong Zhang},
journal={arXiv preprint arXiv:2409.14331},
year={2024},
archivePrefix={arXiv},
eprint={2409.14331},
primaryClass={cs.CV}
}
|
chen2024pisr:
|
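The direct azimuth constraint that polarization images provide comes from standard polarimetry: with intensities captured behind a linear polarizer at 0°, 45°, 90°, and 135°, the Stokes parameters yield the angle of polarization per pixel. The numpy sketch below shows that generic relation, under an idealized Malus-style intensity model; it is not PISR's polarimetric loss itself, which builds on this angle up to its pi/2 ambiguity.

```python
import numpy as np

def angle_of_polarization(i0, i45, i90, i135):
    """Per-pixel angle of linear polarization from four polarizer images,
    via the standard Stokes-parameter relations."""
    s1 = i0.astype(float) - i90    # Stokes S1
    s2 = i45.astype(float) - i135  # Stokes S2
    return 0.5 * np.arctan2(s2, s1)  # radians in (-pi/2, pi/2]

# Synthetic pixel with true polarization angle phi = pi/8:
phi = np.pi / 8
angles = np.array([0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4])
intensities = 0.5 * (1 + np.cos(2 * (angles - phi)))  # idealized Malus model
i0, i45, i90, i135 = intensities
print(angle_of_polarization(np.array(i0), np.array(i45),
                            np.array(i90), np.array(i135)))  # ~0.3927 (= pi/8)
```

In a reconstruction pipeline, this recovered angle constrains the azimuth of the surface normal at each pixel independently of texture, which is why it helps exactly where RGB cues fail.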
arxiv-660407
|
2409.14335
|
MQM-APE: Toward High-Quality Error Annotation Predictors with Automatic Post-Editing in LLM Translation Evaluators
|
Large Language Models (LLMs) have shown significant potential as judges for Machine Translation (MT) quality assessment, providing both scores and fine-grained feedback. Although approaches such as GEMBA-MQM have shown SOTA performance on reference-free evaluation, the predicted errors do not align well with those annotated by humans, limiting their interpretability as feedback signals. To enhance the quality of error annotations predicted by LLM evaluators, we introduce a universal and training-free framework, $\textbf{MQM-APE}$, based on the idea of filtering out non-impactful errors by Automatically Post-Editing (APE) the original translation based on each error, leaving only those errors that contribute to quality improvement. Specifically, we prompt the LLM to act as 1) $\textit{evaluator}$ to provide error annotations, 2) $\textit{post-editor}$ to determine whether errors impact quality improvement and 3) $\textit{pairwise quality verifier}$ as the error filter. Experiments show that our approach consistently improves both the reliability and quality of error spans against GEMBA-MQM, across eight LLMs in both high- and low-resource languages. Orthogonal to trained approaches, MQM-APE complements translation-specific evaluators such as Tower, highlighting its broad applicability. Further analysis confirms the effectiveness of each module and offers valuable insights into evaluator design and LLM selection. The code will be released to facilitate the community.
|
arxiv
|
@article{lu2024mqm-ape:,
title={MQM-APE: Toward High-Quality Error Annotation Predictors with Automatic
Post-Editing in LLM Translation Evaluators},
author={Qingyu Lu and Liang Ding and Kanjian Zhang and Jinxia Zhang and Dacheng Tao},
journal={arXiv preprint arXiv:2409.14335},
year={2024},
archivePrefix={arXiv},
eprint={2409.14335},
primaryClass={cs.CL}
}
|
lu2024mqm-ape:
|
arxiv-660408
|
2409.14336
|
Zero-Shot Skeleton-based Action Recognition with Dual Visual-Text Alignment
|
Zero-shot action recognition, which addresses the issue of scalability and generalization in action recognition and allows models to adapt to new and unseen actions dynamically, is an important research topic in computer vision communities. The key to zero-shot action recognition lies in aligning visual features with semantic vectors representing action categories. Most existing methods either directly project visual features onto the semantic space of text categories or learn a shared embedding space between the two modalities. However, a direct projection cannot accurately align the two modalities, and learning a robust and discriminative embedding space between visual and text representations is often difficult. To address these issues, we introduce Dual Visual-Text Alignment (DVTA) for skeleton-based zero-shot action recognition. The DVTA consists of two alignment modules, Direct Alignment (DA) and Augmented Alignment (AA), along with a designed Semantic Description Enhancement (SDE). The DA module maps the skeleton features to the semantic space through a specially designed visual projector, followed by the SDE, which is based on cross-attention to enhance the connection between skeleton and text, thereby reducing the gap between modalities. The AA module further strengthens the learning of the embedding space by utilizing deep metric learning to learn the similarity between skeleton and text. Our approach achieves state-of-the-art performance on several popular zero-shot skeleton-based action recognition benchmarks.
|
arxiv
|
@article{kuang2024zero-shot,
title={Zero-Shot Skeleton-based Action Recognition with Dual Visual-Text
Alignment},
author={Jidong Kuang and Hongsong Wang and Chaolei Han and Jie Gui},
journal={arXiv preprint arXiv:2409.14336},
year={2024},
archivePrefix={arXiv},
eprint={2409.14336},
primaryClass={cs.CV}
}
|
kuang2024zero-shot
|
arxiv-660409
|
2409.14337
|
MobileViews: A Large-Scale Mobile GUI Dataset
|
Mobile screen assistants help smartphone users by interpreting mobile screens and responding to user requests. The excessive private information on mobile screens necessitates small, on-device models to power these assistants. However, there is a lack of a comprehensive and large-scale mobile screen dataset with high diversity to train and enhance these models. To efficiently construct such a dataset, we utilize an LLM-enhanced automatic app traversal tool to minimize human intervention. We then employ two SoC clusters to provide high-fidelity mobile environments, including more than 200 Android instances to parallelize app interactions. By utilizing the system to collect mobile screens over 81,600 device-hours, we introduce MobileViews, the largest mobile screen dataset, which includes over 600K screenshot-view hierarchy pairs from more than 20K modern Android apps. We demonstrate the effectiveness of MobileViews by training SOTA multimodal LLMs that power mobile screen assistants on it and the Rico dataset, which was introduced seven years ago. Evaluation results on mobile screen tasks show that the scale and quality of mobile screens in MobileViews demonstrate significant advantages over Rico in augmenting mobile screen assistants.
|
arxiv
|
@article{gao2024mobileviews:,
title={MobileViews: A Large-Scale Mobile GUI Dataset},
author={Longxi Gao and Li Zhang and Shihe Wang and Shangguang Wang and
Yuanchun Li and Mengwei Xu},
journal={arXiv preprint arXiv:2409.14337},
year={2024},
archivePrefix={arXiv},
eprint={2409.14337},
primaryClass={cs.HC}
}
|
gao2024mobileviews:
|
arxiv-660410
|
2409.14339
|
Increasing Information-Carrying Capacity by Exploiting Diverse Traffic Characteristics in Multi-Band Optical Networks
|
Efficient network management in optical backbone networks is crucial for handling continuous traffic growth. In this work, we address the challenges of managing dynamic traffic in C- and C+L-band optical backbone networks while exploring application flexibility, namely the compressibility and delayability metrics. We propose a strategy, named Delay-Aware and Compression-Aware (DACA) provisioning algorithm, which reduces blocking probability, thereby increasing information-carrying capacity of the network compared to baseline strategies.
|
arxiv
|
@article{kalkunte2024increasing,
title={Increasing Information-Carrying Capacity by Exploiting Diverse Traffic
Characteristics in Multi-Band Optical Networks},
author={Ramanuja Kalkunte and Forough Shirin Abkenar and Sifat Ferdousi and
Rana Kumar Jana and Anand Srivastava and Abhijit Mitra and Massimo Tornatore
and Biswanath Mukherjee},
journal={arXiv preprint arXiv:2409.14339},
year={2024},
archivePrefix={arXiv},
eprint={2409.14339},
primaryClass={cs.NI}
}
|
kalkunte2024increasing
|
arxiv-660411
|
2409.14340
|
Self-Supervised Audio-Visual Soundscape Stylization
|
Speech sounds convey a great deal of information about the scenes, resulting in a variety of effects ranging from reverberation to additional ambient sounds. In this paper, we manipulate input speech to sound as though it was recorded within a different scene, given an audio-visual conditional example recorded from that scene. Our model learns through self-supervision, taking advantage of the fact that natural video contains recurring sound events and textures. We extract an audio clip from a video and apply speech enhancement. We then train a latent diffusion model to recover the original speech, using another audio-visual clip taken from elsewhere in the video as a conditional hint. Through this process, the model learns to transfer the conditional example's sound properties to the input speech. We show that our model can be successfully trained using unlabeled, in-the-wild videos, and that an additional visual signal can improve its sound prediction abilities. Please see our project webpage for video results: https://tinglok.netlify.app/files/avsoundscape/
|
arxiv
|
@article{li2024self-supervised,
title={Self-Supervised Audio-Visual Soundscape Stylization},
author={Tingle Li and Renhao Wang and Po-Yao Huang and Andrew Owens and
Gopala Anumanchipalli},
journal={arXiv preprint arXiv:2409.14340},
year={2024},
archivePrefix={arXiv},
eprint={2409.14340},
primaryClass={cs.CV cs.LG cs.MM cs.SD eess.AS}
}
|
li2024self-supervised
|
arxiv-660412
|
2409.14341
|
VERCEL: Verification and Rectification of Configuration Errors with Least Squares
|
We present Vercel, a network verification and automatic fault rectification tool that is based on a computationally tractable, algorithmically expressive, and mathematically aesthetic domain of linear algebra. Vercel works by abstracting packet headers into standard basis vectors that are used to create a port-specific forwarding matrix $\mathcal{A}$, representing a set of packet headers/prefixes that a router forwards along a port. By equating this matrix $\mathcal{A}$ and a vector $b$ (that represents the set of all headers under consideration), we are able to apply \textit{least squares} (which produces a column rank agnostic solution) to compute which headers are reachable at the destination. Reachability now simply means evaluating if vector $b$ is in the column space of $\mathcal{A}$, which can efficiently be computed using least squares. Further, the use of vector representation and least squares opens new possibilities for understanding network behavior. For example, we are able to map rules, routing policies, and what-if scenarios to the fundamental linear algebraic form, $\mathcal{A}x=b$, as well as determine how to configure forwarding tables appropriately. We show Vercel is faster than the state-of-the-art, such as NetPlumber, Veriflow, APKeep, and AP Verifier, when measured over diverse datasets. Vercel is almost as fast as Deltanet when rules are verified in batches, and provides better scalability, expressiveness and memory efficiency. A key highlight of Vercel is that while evaluating for reachability, the tool can incorporate intents and transform these into auto-configurable table entries, implying a recommendation/correction system.
|
arxiv
|
@article{singh2024vercel:,
title={VERCEL: Verification and Rectification of Configuration Errors with
Least Squares},
author={Abhiram Singh and Sidharth Sharma and Ashwin Gumaste},
journal={arXiv preprint arXiv:2409.14341},
year={2024},
archivePrefix={arXiv},
eprint={2409.14341},
primaryClass={cs.NI}
}
|
singh2024vercel:
|
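The core reachability test this abstract describes, deciding whether $b$ lies in the column space of $\mathcal{A}$ via least squares, can be sketched with numpy. This is a generic illustration of the linear-algebraic idea, not Vercel's implementation; the tiny forwarding matrix is a made-up example.

```python
import numpy as np

def reachable(A, b, tol=1e-9):
    """Return True if b lies in the column space of A, i.e. Ax = b has a
    solution. Least squares gives a rank-agnostic answer via the residual."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return bool(np.linalg.norm(A @ x - b) < tol)

# Headers as standard basis vectors; this port forwards headers 0 and 2.
A = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])
print(reachable(A, np.array([1.0, 0.0, 1.0])))  # True: mix of forwarded headers
print(reachable(A, np.array([0.0, 1.0, 0.0])))  # False: header 1 is not forwarded
```

The residual check is what makes the test column-rank agnostic: whether or not $\mathcal{A}$ has full column rank, a near-zero residual certifies that $b$ is expressible as a combination of forwarded headers.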
arxiv-660413
|
2409.14342
|
Adapting Gait Frequency for Posture-regulating Humanoid Push-recovery via Hierarchical Model Predictive Control
|
Current humanoid push-recovery strategies often use whole-body motion, yet posture regulation is often overlooked. For instance, during manipulation tasks, the upper body may need to stay upright and have minimal recovery displacement. This paper introduces a novel approach to enhancing humanoid push-recovery performance under unknown disturbances and regulating body posture by tailoring the recovery stepping strategy. We propose a hierarchical-MPC-based scheme that analyzes and detects instability in the prediction window and quickly recovers through adapting gait frequency. Our approach integrates a high-level nonlinear MPC, a posture-aware gait frequency adaptation planner, and a low-level convex locomotion MPC. The planners predict the center of mass (CoM) state trajectories that can be assessed for precursors of potential instability and posture deviation. In simulation, we demonstrate a 131% average improvement in maximum recoverable impulse compared with baseline approaches. In hardware experiments, a 125 ms advancement in recovery stepping timing/reflex has been observed with the proposed approach. We also demonstrate improved push-recovery performance and minimized attitude change under 0.2 rad.
|
arxiv
|
@article{li2024adapting,
title={Adapting Gait Frequency for Posture-regulating Humanoid Push-recovery
via Hierarchical Model Predictive Control},
author={Junheng Li and Zhanhao Le and Junchao Ma and Quan Nguyen},
journal={arXiv preprint arXiv:2409.14342},
year={2024},
archivePrefix={arXiv},
eprint={2409.14342},
primaryClass={cs.RO cs.SY eess.SY}
}
|
li2024adapting
|
arxiv-660414
|
2409.14343
|
Memory Matching is not Enough: Jointly Improving Memory Matching and Decoding for Video Object Segmentation
|
Memory-based video object segmentation methods model multiple objects over long temporal-spatial spans by establishing a memory bank, achieving remarkable performance. However, they struggle to overcome false matching and are prone to losing critical information, resulting in confusion among different objects. In this paper, we propose an effective approach that jointly improves the matching and decoding stages to alleviate the false matching issue. For the memory matching stage, we present a cost-aware mechanism that suppresses slight errors for short-term memory and a shunted cross-scale matching for long-term memory, which establishes a wide-field matching space for various object scales. For the readout decoding stage, we implement a compensatory mechanism that aims at recovering the essential information missed at the matching stage. Our approach achieves outstanding performance on several popular benchmarks (i.e., DAVIS 2016&2017 Val (92.4%&88.1%), and DAVIS 2017 Test (83.9%)), and achieves 84.8%&84.6% on YouTubeVOS 2018&2019 Val.
|
arxiv
|
@article{zheng2024memory,
title={Memory Matching is not Enough: Jointly Improving Memory Matching and
Decoding for Video Object Segmentation},
author={Jintu Zheng and Yun Liang and Yuqing Zhang and Wanchao Su},
journal={arXiv preprint arXiv:2409.14343},
year={2024},
archivePrefix={arXiv},
eprint={2409.14343},
primaryClass={cs.CV eess.IV}
}
|
zheng2024memory
|
arxiv-660415
|
2409.14346
|
Improved direction of arrival estimations with a wearable microphone array for dynamic environments by reliability weighting
|
Direction-of-arrival estimation of multiple speakers in a room is an important task for a wide range of applications. In particular, challenging environments with moving speakers, reverberation, and noise lead to significant performance degradation for current methods. With the aim of better understanding factors affecting performance and improving current methods, in this paper multi-speaker direction-of-arrival (DOA) estimation is investigated using a modified version of the local space domain distance (LSDD) algorithm in a noisy, dynamic and reverberant environment employing a wearable microphone array. This study utilizes the recently published EasyCom speech dataset, recorded using a wearable microphone array mounted on eyeglasses. While the original LSDD algorithm demonstrates strong performance in static environments, its efficacy significantly diminishes in the dynamic settings of the EasyCom dataset. Several enhancements to the LSDD algorithm are developed following a comprehensive performance and system analysis, which enable improved DOA estimation under these challenging conditions. These improvements include incorporating a weighted reliability approach and introducing a new quality measure that reliably identifies the more accurate DOA estimates, thereby enhancing both the robustness and accuracy of the algorithm in challenging environments.
|
arxiv
|
@article{mitchell2024improved,
title={Improved direction of arrival estimations with a wearable microphone
array for dynamic environments by reliability weighting},
author={Daniel A. Mitchell and Boaz Rafaely and Anurag Kumar and Vladimir Tourbabin},
journal={arXiv preprint arXiv:2409.14346},
year={2024},
archivePrefix={arXiv},
eprint={2409.14346},
primaryClass={eess.AS cs.SD}
}
|
mitchell2024improved
|
arxiv-660416
|
2409.14348
|
A Feature Engineering Approach for Literary and Colloquial Tamil Speech Classification using 1D-CNN
|
In ideal human computer interaction (HCI), the colloquial form of a language would be preferred by most users, since it is the form used in their day-to-day conversations. However, there is also an undeniable necessity to preserve the formal literary form. By embracing the new and preserving the old, both service to the common man (practicality) and service to the language itself (conservation) can be rendered. Hence, it is ideal for computers to have the ability to accept, process, and converse in both forms of the language, as required. To address this, it is first necessary to identify the form of the input speech, which in the current work is between literary and colloquial Tamil speech. Such a front-end system must consist of a simple, effective, and lightweight classifier that is trained on a few effective features that are capable of capturing the underlying patterns of the speech signal. To accomplish this, a one-dimensional convolutional neural network (1D-CNN) that learns the envelope of features across time, is proposed. The network is trained on a select number of handcrafted features initially, and then on Mel frequency cepstral coefficients (MFCC) for comparison. The handcrafted features were selected to address various aspects of speech such as the spectral and temporal characteristics, prosody, and voice quality. The features are initially analyzed by considering ten parallel utterances and observing the trend of each feature with respect to time. The proposed 1D-CNN, trained using the handcrafted features, offers an F1 score of 0.9803, while that trained on the MFCC offers an F1 score of 0.9895. In light of this, feature ablation and feature combination are explored. When the best ranked handcrafted features, from the feature ablation study, are combined with the MFCC, they offer the best results with an F1 score of 0.9946.
|
arxiv
|
@article{nanmalar2024a,
title={A Feature Engineering Approach for Literary and Colloquial Tamil Speech
Classification using 1D-CNN},
author={M. Nanmalar and S. Johanan Joysingh and P. Vijayalakshmi and T. Nagarajan},
journal={arXiv preprint arXiv:2409.14348},
year={2024},
archivePrefix={arXiv},
eprint={2409.14348},
primaryClass={eess.AS cs.LG cs.SD}
}
|
nanmalar2024a
|
arxiv-660417
|
2409.14350
|
D2D Coded Caching from Two Classes of Optimal DPDAs using Cross Resolvable Designs
|
Coded caching in a wireless device-to-device (D2D) network was first studied by Ji \textit{et al.} in [4] (referred to as the JCM scheme). In a D2D network, a central server first places the data in the user cache memories, and all the users' demands are served by inter-user coded multicast transmissions. Low subpacketization level D2D coded caching schemes are desirable for practical implementations. Wang \textit{et al.} in [7] proposed an array called D2D placement delivery array (DPDA) which characterizes the placement phase and the delivery phase in a D2D network. A lower bound on the transmission load of a DPDA is derived, and only the JCM scheme achieves this lower bound, but it requires a subpacketization level that grows exponentially with the number of users. Low subpacketization level D2D schemes can be obtained by constructing appropriate DPDAs. In this paper, we propose two new classes of DPDA constructions that give low subpacketization level D2D schemes using cross resolvable designs. The first class of constructed DPDA achieves the known lower bound on the transmission load of DPDA while requiring a subpacketization level lower than that of the JCM scheme. We propose another lower bound on the transmission load of a DPDA and show that the second class of constructed DPDA achieves this lower bound.
|
arxiv
|
@article{t.2024d2d,
title={D2D Coded Caching from Two Classes of Optimal DPDAs using Cross
Resolvable Designs},
author={Rashid Ummer N.T. and B. Sundar Rajan},
journal={arXiv preprint arXiv:2409.14350},
year={2024},
archivePrefix={arXiv},
eprint={2409.14350},
primaryClass={cs.IT math.IT}
}
|
t.2024d2d
|
arxiv-660418
|
2409.14357
|
Using Natural Language Processing to find Indication for Burnout with Text Classification: From Online Data to Real-World Data
|
<|reference_start|>Using Natural Language Processing to find Indication for Burnout with Text Classification: From Online Data to Real-World Data: Burnout, classified as a syndrome in the ICD-11, arises from chronic workplace stress that has not been effectively managed. It is characterized by exhaustion, cynicism, and reduced professional efficacy, and estimates of its prevalence vary significantly due to inconsistent measurement methods. Recent advancements in Natural Language Processing (NLP) and machine learning offer promising tools for detecting burnout through textual data analysis, with studies demonstrating high predictive accuracy. This paper contributes to burnout detection in German texts by: (a) collecting an anonymous real-world dataset including free-text answers and Oldenburg Burnout Inventory (OLBI) responses; (b) demonstrating the limitations of a GermanBERT-based classifier trained on online data; (c) presenting two versions of a curated BurnoutExpressions dataset, which yielded models that perform well in real-world applications; and (d) providing qualitative insights from an interdisciplinary focus group on the interpretability of AI models used for burnout detection. Our findings emphasize the need for greater collaboration between AI researchers and clinical experts to refine burnout detection models. Additionally, more real-world data is essential to validate and enhance the effectiveness of current AI methods developed in NLP research, which are often based on data automatically scraped from online sources and not evaluated in a real-world context. This is essential for ensuring AI tools are well suited for practical applications.<|reference_end|>
|
arxiv
|
@article{kurpicz-briki2024using,
title={Using Natural Language Processing to find Indication for Burnout with
Text Classification: From Online Data to Real-World Data},
author={Mascha Kurpicz-Briki, Ghofrane Merhbene, Alexandre Puttick, Souhir Ben
Souissi, Jannic Bieri, Thomas J\"org M\"uller, Christoph Golz},
journal={arXiv preprint arXiv:2409.14357},
year={2024},
archivePrefix={arXiv},
eprint={2409.14357},
primaryClass={cs.CL cs.LG}
}
|
kurpicz-briki2024using
|
arxiv-660419
|
2409.14360
|
In-place Switch: Reprogramming based SLC Cache Design for Hybrid 3D SSDs
|
<|reference_start|>In-place Switch: Reprogramming based SLC Cache Design for Hybrid 3D SSDs: Recently, 3D SSDs are widely adopted in PCs, data centers, and cloud storage systems. To increase capacity, high bit-density cells, such as Triple-Level Cell (TLC), are utilized within 3D SSDs. However, due to the inferior performance of TLC, a portion of TLCs is configured to operate as Single-Level Cell (SLC) to provide high performance, with host data initially directed to the SLCs. In SLC/TLC hybrid 3D SSDs, a portion of the TLC space is designated as an SLC cache to achieve high SSD performance by writing host data at the SLC speed. Given the limited size of the SLC cache, block reclamation is necessary to free up the SLC cache during idle periods. However, our preliminary studies indicate that the SLC cache can lead to a performance cliff if filled rapidly and cause significant write amplification when data migration occurs during idle times. In this work, we propose leveraging a reprogram operation to address these challenges. Specifically, when the SLC cache is full or during idle periods, a reprogram operation is performed to switch used SLC pages to TLC pages in place (termed In-place Switch, IPS). Subsequently, other free TLC space is allocated as the new SLC cache. IPS can continuously provide sufficient SLC cache within SSDs, significantly improving write performance and reducing write amplification. Experimental results demonstrate that IPS can reduce write latency and write amplification by up to 0.75 times and 0.53 times, respectively, compared to state-of-the-art SLC cache technologies.<|reference_end|>
|
arxiv
|
@article{yang2024in-place,
title={In-place Switch: Reprogramming based SLC Cache Design for Hybrid 3D SSDs},
author={Xufeng Yang, Zhengjian Cong, Congming Gao},
journal={arXiv preprint arXiv:2409.14360},
year={2024},
archivePrefix={arXiv},
eprint={2409.14360},
primaryClass={cs.AR}
}
|
yang2024in-place
|
arxiv-660420
|
2409.14363
|
MANTA -- Model Adapter Native generations that's Affordable
|
<|reference_start|>MANTA -- Model Adapter Native generations that's Affordable: The presiding model generation algorithms rely on simple, inflexible adapter selection to provide personalized results. We propose the model-adapter composition problem as a generalization of past work, factoring in practical hardware and affordability constraints, and introduce MANTA as a new approach to the problem. Experiments on COCO 2014 validation show MANTA to be superior in image task diversity and quality at the cost of a modest drop in alignment. Our system achieves a $94\%$ win rate in task diversity and an $80\%$ task quality win rate versus the best known system, and demonstrates strong potential for direct use in synthetic data generation and the creative art domains.<|reference_end|>
|
arxiv
|
@article{chaurasia2024manta,
title={MANTA -- Model Adapter Native generations that's Affordable},
author={Ansh Chaurasia},
journal={arXiv preprint arXiv:2409.14363},
year={2024},
archivePrefix={arXiv},
eprint={2409.14363},
primaryClass={cs.AI eess.IV}
}
|
chaurasia2024manta
|
arxiv-660421
|
2409.14364
|
More Effective LLM Compressed Tokens with Uniformly Spread Position Identifiers and Compression Loss
|
<|reference_start|>More Effective LLM Compressed Tokens with Uniformly Spread Position Identifiers and Compression Loss: Compressing Transformer inputs into compressed tokens allows running LLMs with improved speed and cost efficiency. Based on the compression method ICAE, we carefully examine the position identifier choices for compressed tokens and also propose a new compression loss. We demonstrate empirically that our proposed methods achieve significantly higher compression ratios (15x compared to 4x for ICAE), while being able to attain comparable reconstruction performance.<|reference_end|>
|
arxiv
|
@article{zhao2024more,
title={More Effective LLM Compressed Tokens with Uniformly Spread Position
Identifiers and Compression Loss},
author={Runsong Zhao, Pengcheng Huang, Xinyu Liu, Chunyang Xiao, Tong Xiao,
Jingbo Zhu},
journal={arXiv preprint arXiv:2409.14364},
year={2024},
archivePrefix={arXiv},
eprint={2409.14364},
primaryClass={cs.CL}
}
|
zhao2024more
|
arxiv-660422
|
2409.14365
|
D3RoMa: Disparity Diffusion-based Depth Sensing for Material-Agnostic Robotic Manipulation
|
<|reference_start|>D3RoMa: Disparity Diffusion-based Depth Sensing for Material-Agnostic Robotic Manipulation: Depth sensing is an important problem for 3D vision-based robotics. Yet, a real-world active stereo or ToF depth camera often produces noisy and incomplete depth which bottlenecks robot performance. In this work, we propose D3RoMa, a learning-based depth estimation framework on stereo image pairs that predicts clean and accurate depth in diverse indoor scenes, even in the most challenging scenarios with translucent or specular surfaces where classical depth sensing completely fails. Key to our method is that we unify depth estimation and restoration into an image-to-image translation problem by predicting the disparity map with a denoising diffusion probabilistic model. At inference time, we further incorporate a left-right consistency constraint as classifier guidance to the diffusion process. Our framework combines recently advanced learning-based approaches and geometric constraints from traditional stereo vision. For model training, we create a large scene-level synthetic dataset with diverse transparent and specular objects to compensate for existing tabletop datasets. The trained model can be directly applied to real-world in-the-wild scenes and achieve state-of-the-art performance in multiple public depth estimation benchmarks. Further experiments in real environments show that accurate depth prediction significantly improves robotic manipulation in various scenarios.<|reference_end|>
|
arxiv
|
@article{wei2024d3roma:,
title={D3RoMa: Disparity Diffusion-based Depth Sensing for Material-Agnostic
Robotic Manipulation},
author={Songlin Wei, Haoran Geng, Jiayi Chen, Congyue Deng, Wenbo Cui,
Chengyang Zhao, Xiaomeng Fang, Leonidas Guibas, He Wang},
journal={arXiv preprint arXiv:2409.14365},
year={2024},
archivePrefix={arXiv},
eprint={2409.14365},
primaryClass={cs.RO}
}
|
wei2024d3roma:
|
arxiv-660423
|
2409.14366
|
Robust Data-Driven Tube-Based Zonotopic Predictive Control with Closed-Loop Guarantees
|
<|reference_start|>Robust Data-Driven Tube-Based Zonotopic Predictive Control with Closed-Loop Guarantees: This work proposes a robust data-driven tube-based zonotopic predictive control (TZPC) approach for discrete-time linear systems, designed to ensure stability and recursive feasibility in the presence of bounded noise. The proposed approach consists of two phases. In an initial learning phase, we provide an over-approximation of all models consistent with past input and noisy state data using zonotope properties. Subsequently, in a control phase, we formulate an optimization problem, which by integrating terminal ingredients is proven to be recursively feasible. Moreover, we prove that implementing this data-driven predictive control approach guarantees robust exponential stability of the closed-loop system. The effectiveness and competitive performance of the proposed control strategy, compared to recent data-driven predictive control methods, are illustrated through numerical simulations.<|reference_end|>
|
arxiv
|
@article{farjadnia2024robust,
title={Robust Data-Driven Tube-Based Zonotopic Predictive Control with
Closed-Loop Guarantees},
author={Mahsa Farjadnia, Angela Fontan, Amr Alanwar, Marco Molinari, and Karl
Henrik Johansson},
journal={arXiv preprint arXiv:2409.14366},
year={2024},
archivePrefix={arXiv},
eprint={2409.14366},
primaryClass={eess.SY cs.SY}
}
|
farjadnia2024robust
|
arxiv-660424
|
2409.14368
|
Evaluating the Quality of Code Comments Generated by Large Language Models for Novice Programmers
|
<|reference_start|>Evaluating the Quality of Code Comments Generated by Large Language Models for Novice Programmers: Large Language Models (LLMs) show promise in generating code comments for novice programmers, but their educational effectiveness remains under-evaluated. This study assesses the instructional quality of code comments produced by GPT-4, GPT-3.5-Turbo, and Llama2, compared to expert-developed comments, focusing on their suitability for novices. Analyzing a dataset of ``easy'' level Java solutions from LeetCode, we find that GPT-4 exhibits comparable quality to expert comments in aspects critical for beginners, such as clarity, beginner-friendliness, concept elucidation, and step-by-step guidance. GPT-4 outperforms Llama2 in discussing complexity (chi-square = 11.40, p = 0.001) and is perceived as significantly more supportive for beginners than GPT-3.5 and Llama2 (Mann-Whitney U-statistics = 300.5 and 322.5, p = 0.0017 and 0.0003). This study highlights the potential of LLMs for generating code comments tailored to novice programmers.<|reference_end|>
|
arxiv
|
@article{fan2024evaluating,
title={Evaluating the Quality of Code Comments Generated by Large Language
Models for Novice Programmers},
author={Aysa Xuemo Fan, Arun Balajiee Lekshmi Narayanan, Mohammad Hassany,
Jiaze Ke},
journal={arXiv preprint arXiv:2409.14368},
year={2024},
archivePrefix={arXiv},
eprint={2409.14368},
primaryClass={cs.SE cs.AI cs.HC}
}
|
fan2024evaluating
|
arxiv-660425
|
2409.14369
|
Few-Shot Testing of Autonomous Vehicles with Scenario Similarity Learning
|
<|reference_start|>Few-Shot Testing of Autonomous Vehicles with Scenario Similarity Learning: Testing and evaluation are critical to the development and deployment of autonomous vehicles (AVs). Given the rarity of safety-critical events such as crashes, millions of tests are typically needed to accurately assess AV safety performance. Although techniques like importance sampling can accelerate this process, it usually still requires too many tests for field testing. This severely hinders the testing and evaluation process, especially for third-party testers and governmental bodies with very limited testing budgets. The rapid development cycles of AV technology further exacerbate this challenge. To fill this research gap, this paper introduces the few-shot testing (FST) problem and proposes a methodological framework to tackle it. As the testing budget is very limited, usually smaller than 100, the FST method transforms the testing scenario generation problem from probabilistic sampling to deterministic optimization, reducing the uncertainty of testing results. To optimize the selection of testing scenarios, a cross-attention similarity mechanism is proposed to learn to extract the information of the AV's testing scenario space. This allows iterative searches for scenarios with the smallest evaluation error, ensuring precise testing within budget constraints. Experimental results in cut-in scenarios demonstrate the effectiveness of the FST method, significantly enhancing accuracy and enabling efficient, precise AV testing.<|reference_end|>
|
arxiv
|
@article{li2024few-shot,
title={Few-Shot Testing of Autonomous Vehicles with Scenario Similarity
Learning},
author={Shu Li, Honglin He, Jingxuan Yang, Jianming Hu, Yi Zhang and Shuo Feng},
journal={arXiv preprint arXiv:2409.14369},
year={2024},
archivePrefix={arXiv},
eprint={2409.14369},
primaryClass={eess.SY cs.SY}
}
|
li2024few-shot
|
arxiv-660426
|
2409.14371
|
The Ability of Large Language Models to Evaluate Constraint-satisfaction in Agent Responses to Open-ended Requests
|
<|reference_start|>The Ability of Large Language Models to Evaluate Constraint-satisfaction in Agent Responses to Open-ended Requests: Generative AI agents are often expected to respond to complex user requests that have No One Right Answer (NORA), e.g., "design a vegetarian meal plan below 1800 calories". Such requests may entail a set of constraints that the agent should adhere to. To successfully develop agents for NORA scenarios, an accurate automatic evaluation framework is essential, and specifically - one capable of validating the satisfaction of constraints in the agent's response. Recently, large language models (LLMs) have been adopted as versatile evaluators for many NORA tasks, but their ability to evaluate constraint-satisfaction in generated text remains unclear. To study this, we develop and release a novel Arithmetic Constraint-Satisfaction (ACS) benchmarking dataset. The dataset consists of complex user requests with corresponding constraints, agent responses and human labels indicating each constraint's satisfaction level in the response. A unique property of this dataset is that validating many of its constraints requires reviewing the response as a whole (in contrast to many other benchmarks that require the validation of a single independent item). Moreover, it assesses LLMs in performing reasoning, in-context data extraction, arithmetic calculations, and counting. We then benchmark both open and proprietary LLMs on evaluating constraint-satisfaction, and show that most models still have a significant headroom for improvement, and that errors primarily stem from reasoning issues. In addition, most models exhibit a skewed constraint-satisfaction prediction pattern, with higher accuracy where the ground-truth label is "satisfied". Lastly, few-shot prompting for our task proved to be rather challenging, since many of the studied models showed a degradation in performance when it was introduced.<|reference_end|>
|
arxiv
|
@article{madmoni2024the,
title={The Ability of Large Language Models to Evaluate Constraint-satisfaction
in Agent Responses to Open-ended Requests},
author={Lior Madmoni, Amir Zait, Ilia Labzovsky, Danny Karmon},
journal={arXiv preprint arXiv:2409.14371},
year={2024},
archivePrefix={arXiv},
eprint={2409.14371},
primaryClass={cs.CL}
}
|
madmoni2024the
|
arxiv-660427
|
2409.14374
|
J2N -- Nominal Adjective Identification and its Application
|
<|reference_start|>J2N -- Nominal Adjective Identification and its Application: This paper explores the challenges posed by nominal adjectives (NAs) in natural language processing (NLP) tasks, particularly in part-of-speech (POS) tagging. We propose treating NAs as a distinct POS tag, "JN," and investigate its impact on POS tagging, BIO chunking, and coreference resolution. Our study shows that reclassifying NAs can improve the accuracy of syntactic analysis and structural understanding in NLP. We present experimental results using Hidden Markov Models (HMMs), Maximum Entropy (MaxEnt) models, and Spacy, demonstrating the feasibility and potential benefits of this approach. Additionally, we trained a BERT model to identify NAs in untagged text.<|reference_end|>
|
arxiv
|
@article{qi2024j2n,
title={J2N -- Nominal Adjective Identification and its Application},
author={Lemeng Qi, Yang Han, Zhuotong Xie},
journal={arXiv preprint arXiv:2409.14374},
year={2024},
archivePrefix={arXiv},
eprint={2409.14374},
primaryClass={cs.CL}
}
|
qi2024j2n
|
arxiv-660428
|
2409.14377
|
To Err Is AI! Debugging as an Intervention to Facilitate Appropriate Reliance on AI Systems
|
<|reference_start|>To Err Is AI! Debugging as an Intervention to Facilitate Appropriate Reliance on AI Systems: Powerful predictive AI systems have demonstrated great potential in augmenting human decision making. Recent empirical work has argued that the vision for optimal human-AI collaboration requires 'appropriate reliance' of humans on AI systems. However, accurately estimating the trustworthiness of AI advice at the instance level is quite challenging, especially in the absence of performance feedback pertaining to the AI system. In practice, the performance disparity of machine learning models on out-of-distribution data makes the dataset-specific performance feedback unreliable in human-AI collaboration. Inspired by existing literature on critical thinking and a critical mindset, we propose the use of debugging an AI system as an intervention to foster appropriate reliance. In this paper, we explore whether a critical evaluation of AI performance within a debugging setting can better calibrate users' assessment of an AI system and lead to more appropriate reliance. Through a quantitative empirical study (N = 234), we found that our proposed debugging intervention does not work as expected in facilitating appropriate reliance. Instead, we observe a decrease in reliance on the AI system after the intervention -- potentially resulting from an early exposure to the AI system's weakness. We explore the dynamics of user confidence and user estimation of AI trustworthiness across groups with different performance levels to help explain how inappropriate reliance patterns occur. Our findings have important implications for designing effective interventions to facilitate appropriate reliance and better human-AI collaboration.<|reference_end|>
|
arxiv
|
@article{he2024to,
title={To Err Is AI! Debugging as an Intervention to Facilitate Appropriate
Reliance on AI Systems},
author={Gaole He, Abri Bharos, and Ujwal Gadiraju},
journal={arXiv preprint arXiv:2409.14377},
year={2024},
doi={10.1145/3648188.3675141},
archivePrefix={arXiv},
eprint={2409.14377},
primaryClass={cs.AI}
}
|
he2024to
|
arxiv-660429
|
2409.14378
|
Sparse Low-Ranked Self-Attention Transformer for Remaining Useful Lifetime Prediction of Optical Fiber Amplifiers
|
<|reference_start|>Sparse Low-Ranked Self-Attention Transformer for Remaining Useful Lifetime Prediction of Optical Fiber Amplifiers: Optical fiber amplifiers are key elements in present optical networks. Failures of these components result in high financial losses for the network operator as the communication traffic over an affected link is interrupted. Applying remaining useful lifetime (RUL) prediction in the context of Predictive Maintenance (PdM) to optical fiber amplifiers to predict upcoming system failures at an early stage, so that network outages can be minimized through planning of targeted maintenance actions, ensures reliability and safety. Optical fiber amplifiers are complex systems that work under various operating conditions, which makes correct forecasting a difficult task. Increased monitoring capabilities of systems result in datasets that facilitate the application of data-driven RUL prediction methods. Deep learning models in particular have shown good performance, but generalization based on comparatively small datasets for RUL prediction is difficult. In this paper, we propose Sparse Low-ranked self-Attention Transformer (SLAT) as a novel RUL prediction method. SLAT is based on an encoder-decoder architecture, wherein two parallel working encoders extract features for sensors and time steps. By utilizing the self-attention mechanism, long-term dependencies can be learned from long sequences. The implementation of sparsity in the attention matrix and a low-rank parametrization reduce overfitting and increase generalization. Experimental application to optical fiber amplifiers exemplified on EDFA, as well as a reference dataset from turbofan engines, shows that SLAT outperforms the state-of-the-art methods.<|reference_end|>
|
arxiv
|
@article{schneider2024sparse,
title={Sparse Low-Ranked Self-Attention Transformer for Remaining Useful
Lifetime Prediction of Optical Fiber Amplifiers},
author={Dominic Schneider, Lutz Rapp},
journal={arXiv preprint arXiv:2409.14378},
year={2024},
archivePrefix={arXiv},
eprint={2409.14378},
primaryClass={cs.LG cs.AI eess.SP}
}
|
schneider2024sparse
|
arxiv-660430
|
2409.14379
|
GroupDiff: Diffusion-based Group Portrait Editing
|
<|reference_start|>GroupDiff: Diffusion-based Group Portrait Editing: Group portrait editing is highly desirable since users constantly want to add a person, delete a person, or manipulate existing persons. It is also challenging due to the intricate dynamics of human interactions and the diverse gestures. In this work, we present GroupDiff, a pioneering effort to tackle group photo editing with three dedicated contributions: 1) Data Engine: Since there is no labeled data for group photo editing, we create a data engine to generate paired data for training. The training data engine covers the diverse needs of group portrait editing. 2) Appearance Preservation: To keep the appearance consistent after editing, we inject the images of persons from the group photo into the attention modules and employ skeletons to provide intra-person guidance. 3) Control Flexibility: Bounding boxes indicating the locations of each person are used to reweight the attention matrix so that the features of each person can be injected into the correct places. This inter-person guidance provides flexible manners for manipulation. Extensive experiments demonstrate that GroupDiff exhibits state-of-the-art performance compared to existing methods. GroupDiff offers controllability for editing and maintains the fidelity of the original photos.<|reference_end|>
|
arxiv
|
@article{jiang2024groupdiff:,
title={GroupDiff: Diffusion-based Group Portrait Editing},
author={Yuming Jiang, Nanxuan Zhao, Qing Liu, Krishna Kumar Singh, Shuai Yang,
Chen Change Loy, Ziwei Liu},
journal={arXiv preprint arXiv:2409.14379},
year={2024},
archivePrefix={arXiv},
eprint={2409.14379},
primaryClass={cs.CV}
}
|
jiang2024groupdiff:
|
arxiv-660431
|
2409.14381
|
Investigating Layer Importance in Large Language Models
|
<|reference_start|>Investigating Layer Importance in Large Language Models: Large language models (LLMs) have gained increasing attention due to their prominent ability to understand and process texts. Nevertheless, LLMs largely remain opaque. The lack of understanding of LLMs has obstructed the deployment in safety-critical scenarios and hindered the development of better models. In this study, we advance the understanding of LLM by investigating the significance of individual layers in LLMs. We propose an efficient sampling method to faithfully evaluate the importance of layers using Shapley values, a widely used explanation framework in feature attribution and data valuation. In addition, we conduct layer ablation experiments to assess the performance degradation resulting from the exclusion of specific layers. Our findings reveal the existence of cornerstone layers, wherein certain early layers can exhibit a dominant contribution over others. Removing one cornerstone layer leads to a drastic collapse of the model performance, often reducing it to random guessing. Conversely, removing non-cornerstone layers results in only marginal performance changes. This study identifies cornerstone layers in LLMs and underscores their critical role for future research.<|reference_end|>
|
arxiv
|
@article{zhang2024investigating,
title={Investigating Layer Importance in Large Language Models},
author={Yang Zhang, Yanfei Dong, Kenji Kawaguchi},
journal={arXiv preprint arXiv:2409.14381},
year={2024},
archivePrefix={arXiv},
eprint={2409.14381},
primaryClass={cs.CL cs.LG}
}
|
zhang2024investigating
|
arxiv-660432
|
2409.14385
|
Prior Knowledge Distillation Network for Face Super-Resolution
|
<|reference_start|>Prior Knowledge Distillation Network for Face Super-Resolution: The purpose of face super-resolution (FSR) is to reconstruct high-resolution (HR) face images from low-resolution (LR) inputs. With the continuous advancement of deep learning technologies, contemporary prior-guided FSR methods initially estimate facial priors and then use this information to assist in the super-resolution reconstruction process. However, ensuring the accuracy of prior estimation remains challenging, and straightforward cascading and convolutional operations often fail to fully leverage prior knowledge. Inaccurate or insufficiently utilized prior information inevitably degrades FSR performance. To address this issue, we propose a prior knowledge distillation network (PKDN) for FSR, which involves transferring prior information from the teacher network to the student network. This approach enables the network to learn priors during the training stage while relying solely on low-resolution facial images during the testing stage, thus mitigating the adverse effects of prior estimation inaccuracies. Additionally, we incorporate robust attention mechanisms to design a parsing map fusion block that effectively utilizes prior information. To prevent feature loss, we retain multi-scale features during the feature extraction stage and employ them in the subsequent super-resolution reconstruction process. Experimental results on benchmark datasets demonstrate that our PKDN approach surpasses existing FSR methods in generating high-quality face images.<|reference_end|>
|
arxiv
|
@article{yang2024prior,
title={Prior Knowledge Distillation Network for Face Super-Resolution},
author={Qiu Yang, Xiao Sun, Xin-yu Li, Feng-Qi Cui, Yu-Tong Guo, Shuang-Zhen
Hu, Ping Luo, Si-Ying Li},
journal={arXiv preprint arXiv:2409.14385},
year={2024},
archivePrefix={arXiv},
eprint={2409.14385},
primaryClass={cs.CV}
}
|
yang2024prior
|
arxiv-660433
|
2409.14388
|
Defining a new perspective: Enterprise Information Governance
|
<|reference_start|>Defining a new perspective: Enterprise Information Governance: This paper adduces a novel definition of regulatory enterprise information governance as a strategic framework that acts through control mechanisms designed to assure accountability in managing decision rights over information and data assets in organizations. This new pragmatic definition takes the perspectives of both the practitioner and the scholar. It builds upon earlier definitions to take a novel and more clearly regulatory approach and to synthesize a new definition for such governance; to build out a view of it as a scalable regulatory framework for large or complex organizations that sees governance from this new perspective as a business architecture or target operating model in this increasingly critical domain. The paper supports and enables scholarly consideration and further research. It looks at definitions of information and data; of strategy in relation to information and data; of data management; of enterprise architecture; of governance, and governance as a type of strategic endeavor, and of the nature of strategic and tactical policies and standards that form the basis for such governance.<|reference_end|>
|
arxiv
|
@article{mccullough2024defining,
title={Defining a new perspective: Enterprise Information Governance},
author={Alastair McCullough},
journal={arXiv preprint arXiv:2409.14388},
year={2024},
archivePrefix={arXiv},
eprint={2409.14388},
primaryClass={cs.DB cs.HC cs.SE}
}
|
mccullough2024defining
|
arxiv-660434
|
2409.14393
|
MaskedMimic: Unified Physics-Based Character Control Through Masked Motion Inpainting
|
<|reference_start|>MaskedMimic: Unified Physics-Based Character Control Through Masked Motion Inpainting: Crafting a single, versatile physics-based controller that can breathe life into interactive characters across a wide spectrum of scenarios represents an exciting frontier in character animation. An ideal controller should support diverse control modalities, such as sparse target keyframes, text instructions, and scene information. While previous works have proposed physically simulated, scene-aware control models, these systems have predominantly focused on developing controllers that each specializes in a narrow set of tasks and control modalities. This work presents MaskedMimic, a novel approach that formulates physics-based character control as a general motion inpainting problem. Our key insight is to train a single unified model to synthesize motions from partial (masked) motion descriptions, such as masked keyframes, objects, text descriptions, or any combination thereof. This is achieved by leveraging motion tracking data and designing a scalable training method that can effectively utilize diverse motion descriptions to produce coherent animations. Through this process, our approach learns a physics-based controller that provides an intuitive control interface without requiring tedious reward engineering for all behaviors of interest. The resulting controller supports a wide range of control modalities and enables seamless transitions between disparate tasks. By unifying character control through motion inpainting, MaskedMimic creates versatile virtual characters. These characters can dynamically adapt to complex scenes and compose diverse motions on demand, enabling more interactive and immersive experiences.<|reference_end|>
|
arxiv
|
@article{tessler2024maskedmimic:,
title={MaskedMimic: Unified Physics-Based Character Control Through Masked
Motion Inpainting},
author={Chen Tessler, Yunrong Guo, Ofir Nabati, Gal Chechik, Xue Bin Peng},
journal={arXiv preprint arXiv:2409.14393},
year={2024},
archivePrefix={arXiv},
eprint={2409.14393},
primaryClass={cs.AI cs.RO}
}
|
tessler2024maskedmimic:
|
arxiv-660435
|
2409.14394
|
Frequency-regularized Neural Representation Method for Sparse-view Tomographic Reconstruction
|
<|reference_start|>Frequency-regularized Neural Representation Method for Sparse-view Tomographic Reconstruction: Sparse-view tomographic reconstruction is a pivotal direction for reducing radiation dose and augmenting clinical applicability. While many research works have proposed the reconstruction of tomographic images from sparse 2D projections, existing models tend to excessively focus on high-frequency information while overlooking low-frequency components within the sparse input images. This bias towards high-frequency information often leads to overfitting, particularly intense at edges and boundaries in the reconstructed slices. In this paper, we introduce the Frequency Regularized Neural Attenuation/Activity Field (Freq-NAF) for self-supervised sparse-view tomographic reconstruction. Freq-NAF mitigates overfitting by incorporating frequency regularization, directly controlling the visible frequency bands in the neural network input. This approach effectively balances high-frequency and low-frequency information. We conducted numerical experiments on CBCT and SPECT datasets, and our method demonstrates state-of-the-art accuracy.<|reference_end|>
|
arxiv
|
@article{xian2024frequency-regularized,
title={Frequency-regularized Neural Representation Method for Sparse-view
Tomographic Reconstruction},
author={Jingmou Xian, Jian Zhu, Haolin Liao, Si Li},
journal={arXiv preprint arXiv:2409.14394},
year={2024},
archivePrefix={arXiv},
eprint={2409.14394},
primaryClass={eess.IV cs.CV}
}
|
xian2024frequency-regularized
|
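The frequency-regularization idea in the Freq-NAF abstract above, directly controlling the visible frequency bands in the neural network input, can be illustrated with a NeRF-style positional encoding whose high-frequency bands are masked out early in training. This is a minimal NumPy sketch; the function names and the linear reveal schedule are assumptions, not the paper's implementation.

```python
import numpy as np

def positional_encoding(x, num_freqs):
    # NeRF-style encoding: [sin(2^k pi x), cos(2^k pi x)] for k = 0..num_freqs-1
    feats = []
    for k in range(num_freqs):
        feats.append(np.sin((2.0 ** k) * np.pi * x))
        feats.append(np.cos((2.0 ** k) * np.pi * x))
    return np.concatenate(feats, axis=-1)

def frequency_mask(num_freqs, step, total_steps):
    # reveal low-frequency bands first; higher bands become visible
    # linearly as training progresses (an assumed schedule)
    visible = 1 + int((num_freqs - 1) * min(step / total_steps, 1.0))
    mask = np.zeros(num_freqs)
    mask[:visible] = 1.0
    return np.repeat(mask, 2)  # each band contributes a sin and a cos feature

x = np.array([[0.3]])
enc = positional_encoding(x, num_freqs=4)                   # shape (1, 8)
masked = enc * frequency_mask(4, step=0, total_steps=100)   # only band 0 visible
```

A training loop would feed `masked` to the network instead of `enc`, widening the mask as optimization proceeds so low-frequency structure is fit before edges.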
arxiv-660436
|
2409.14395
|
Predicting User Stances from Target-Agnostic Information using Large Language Models
|
<|reference_start|>Predicting User Stances from Target-Agnostic Information using Large Language Models: We investigate Large Language Models' (LLMs) ability to predict a user's stance on a target given a collection of his/her target-agnostic social media posts (i.e., user-level stance prediction). While we show early evidence that LLMs are capable of this task, we highlight considerable variability in the performance of the model across (i) the type of stance target, (ii) the prediction strategy and (iii) the number of target-agnostic posts supplied. Post-hoc analyses further hint at the usefulness of target-agnostic posts in providing relevant information to LLMs through the presence of both surface-level (e.g., target-relevant keywords) and user-level features (e.g., encoding users' moral values). Overall, our findings suggest that LLMs might offer a viable method for determining public stances towards new topics based on historical and target-agnostic data. At the same time, we also call for further research to better understand LLMs' strong performance on the stance prediction task and how their effectiveness varies across task contexts.<|reference_end|>
|
arxiv
|
@article{loh2024predicting,
title={Predicting User Stances from Target-Agnostic Information using Large
Language Models},
author={Siyuan Brandon Loh, Liang Ze Wong, Prasanta Bhattacharya, Joseph
Simons, Wei Gao, Hong Zhang},
journal={arXiv preprint arXiv:2409.14395},
year={2024},
archivePrefix={arXiv},
eprint={2409.14395},
primaryClass={cs.CL}
}
|
loh2024predicting
|
arxiv-660437
|
2409.14396
|
Flat-LoRA: Low-Rank Adaption over a Flat Loss Landscape
|
<|reference_start|>Flat-LoRA: Low-Rank Adaption over a Flat Loss Landscape: Fine-tuning large-scale pre-trained models is prohibitively expensive in terms of computational and memory costs. Low-Rank Adaptation (LoRA), a popular Parameter-Efficient Fine-Tuning (PEFT) method, provides an efficient way to fine-tune models by optimizing only a low-rank matrix. Despite recent progress made in improving LoRA's performance, the connection between the LoRA optimization space and the original full parameter space is often overlooked. A solution that appears flat in the LoRA space may exhibit sharp directions in the full parameter space, potentially harming generalization performance. In this paper, we propose Flat-LoRA, an efficient approach that seeks a low-rank adaptation located in a flat region of the full parameter space. Instead of relying on the well-established sharpness-aware minimization approach, which can incur significant computational and memory burdens, we utilize random weight perturbation with a Bayesian expectation loss objective to maintain training efficiency and design a refined perturbation generation strategy for improved performance. Experiments on natural language processing and image classification tasks with various architectures demonstrate the effectiveness of our approach.<|reference_end|>
|
arxiv
|
@article{li2024flat-lora:,
title={Flat-LoRA: Low-Rank Adaption over a Flat Loss Landscape},
author={Tao Li, Zhengbao He, Yujun Li, Yasheng Wang, Lifeng Shang, Xiaolin
Huang},
journal={arXiv preprint arXiv:2409.14396},
year={2024},
archivePrefix={arXiv},
eprint={2409.14396},
primaryClass={cs.LG}
}
|
li2024flat-lora:
|
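The Bayesian expectation loss with random weight perturbation described in the Flat-LoRA abstract above can be sketched on a toy linear model: the quantity to minimize is the loss averaged over random perturbations of the full weight, so flatness is judged in the full parameter space rather than the adapter subspace. A minimal NumPy sketch; all names, shapes, and the rank-1 adapter are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# frozen base weight plus a rank-1 LoRA adapter: W_eff = W + B @ A
W = rng.normal(size=(4, 4))
B = rng.normal(scale=0.1, size=(4, 1))
A = rng.normal(scale=0.1, size=(1, 4))

X = rng.normal(size=(32, 4))
Y = X @ rng.normal(size=(4, 4))

def loss(W_eff):
    return float(np.mean((X @ W_eff - Y) ** 2))

def expected_loss(W, B, A, sigma=0.01, n_samples=8):
    # Bayesian expectation loss: average the loss over random perturbations
    # of the FULL weight, so flatness is measured in the full parameter space
    vals = []
    for _ in range(n_samples):
        eps = rng.normal(scale=sigma, size=W.shape)
        vals.append(loss(W + B @ A + eps))
    return float(np.mean(vals))

sharp = loss(W + B @ A)        # loss at the current adapter
flat = expected_loss(W, B, A)  # the quantity such training would minimize w.r.t. A, B
```

Gradient descent on `flat` with respect to `A` and `B` (omitted for brevity) would prefer adapters whose surrounding full-space neighborhood also has low loss.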
arxiv-660438
|
2409.14399
|
Beyond Persuasion: Towards Conversational Recommender System with Credible Explanations
|
<|reference_start|>Beyond Persuasion: Towards Conversational Recommender System with Credible Explanations: With the aid of large language models, current conversational recommender systems (CRSs) have gained strong abilities to persuade users to accept recommended items. While these CRSs are highly persuasive, they can mislead users by incorporating incredible information in their explanations, ultimately damaging the long-term trust between users and the CRS. To address this, we propose a simple yet effective method, called PC-CRS, to enhance the credibility of CRS's explanations during persuasion. It guides the explanation generation through our proposed credibility-aware persuasive strategies and then gradually refines explanations via post-hoc self-reflection. Experimental results demonstrate the efficacy of PC-CRS in promoting persuasive and credible explanations. Further analysis reveals the reason behind current methods producing incredible explanations and the potential of credible explanations to improve recommendation accuracy.<|reference_end|>
|
arxiv
|
@article{qin2024beyond,
title={Beyond Persuasion: Towards Conversational Recommender System with
Credible Explanations},
author={Peixin Qin, Chen Huang, Yang Deng, Wenqiang Lei, Tat-Seng Chua},
journal={arXiv preprint arXiv:2409.14399},
year={2024},
archivePrefix={arXiv},
eprint={2409.14399},
primaryClass={cs.CL cs.AI}
}
|
qin2024beyond
|
arxiv-660439
|
2409.14401
|
Investigating the Impact of Hard Samples on Accuracy Reveals In-class Data Imbalance
|
<|reference_start|>Investigating the Impact of Hard Samples on Accuracy Reveals In-class Data Imbalance: In the AutoML domain, test accuracy is heralded as the quintessential metric for evaluating model efficacy, underpinning a wide array of applications from neural architecture search to hyperparameter optimization. However, the reliability of test accuracy as the primary performance metric has been called into question, notably through research highlighting how label noise can obscure the true ranking of state-of-the-art models. We venture beyond this, along another perspective in which the existence of hard samples within datasets casts further doubt on the generalization capabilities inferred from test accuracy alone. Our investigation reveals that the distribution of hard samples between training and test sets affects the difficulty levels of those sets, thereby influencing the perceived generalization capability of models. We unveil two distinct generalization pathways, toward easy and hard samples, highlighting the complexity of achieving balanced model evaluation. Finally, we propose a benchmarking procedure for comparing hard sample identification methods, facilitating the advancement of more nuanced approaches in this area. Our primary goal is not to propose a definitive solution but to highlight the limitations of relying primarily on test accuracy as an evaluation metric, even when working with balanced datasets, by introducing the in-class data imbalance problem. By doing so, we aim to stimulate a critical discussion within the research community and open new avenues for research that consider a broader spectrum of model evaluation criteria. The anonymous code is available at https://github.com/PawPuk/CurvBIM under the GPL-3.0 license.<|reference_end|>
|
arxiv
|
@article{pukowski2024investigating,
title={Investigating the Impact of Hard Samples on Accuracy Reveals In-class
Data Imbalance},
author={Pawel Pukowski and Haiping Lu},
journal={arXiv preprint arXiv:2409.14401},
year={2024},
archivePrefix={arXiv},
eprint={2409.14401},
primaryClass={cs.LG}
}
|
pukowski2024investigating
|
arxiv-660440
|
2409.14403
|
GraspMamba: A Mamba-based Language-driven Grasp Detection Framework with Hierarchical Feature Learning
|
<|reference_start|>GraspMamba: A Mamba-based Language-driven Grasp Detection Framework with Hierarchical Feature Learning: Grasp detection is a fundamental robotic task critical to the success of many industrial applications. However, current language-driven models for this task often struggle with cluttered images, lengthy textual descriptions, or slow inference speed. We introduce GraspMamba, a new language-driven grasp detection method that employs hierarchical feature fusion with Mamba vision to tackle these challenges. By leveraging rich visual features of the Mamba-based backbone alongside textual information, our approach effectively enhances the fusion of multimodal features. GraspMamba represents the first Mamba-based grasp detection model to extract vision and language features at multiple scales, delivering robust performance and rapid inference time. Intensive experiments show that GraspMamba outperforms recent methods by a clear margin. We validate our approach through real-world robotic experiments, highlighting its fast inference speed.<|reference_end|>
|
arxiv
|
@article{nguyen2024graspmamba:,
title={GraspMamba: A Mamba-based Language-driven Grasp Detection Framework with
Hierarchical Feature Learning},
author={Huy Hoang Nguyen, An Vuong, Anh Nguyen, Ian Reid, Minh Nhat Vu},
journal={arXiv preprint arXiv:2409.14403},
year={2024},
archivePrefix={arXiv},
eprint={2409.14403},
primaryClass={cs.RO cs.CV}
}
|
nguyen2024graspmamba:
|
arxiv-660441
|
2409.14408
|
A Bekenstein-type bound in QFT
|
<|reference_start|>A Bekenstein-type bound in QFT: Let B be a spacetime region of width 2R > 0, and φ a vector state localized in B. We show that the vacuum relative entropy of φ, on the local von Neumann algebra of B, is bounded by 2πR times the energy of the state φ in B. This bound is model-independent and rigorous; it follows solely from first principles in the framework of translation covariant, local Quantum Field Theory on the Minkowski spacetime.<|reference_end|>
|
arxiv
|
@article{longo2024a,
title={A Bekenstein-type bound in QFT},
author={Roberto Longo},
journal={arXiv preprint arXiv:2409.14408},
year={2024},
archivePrefix={arXiv},
eprint={2409.14408},
primaryClass={math-ph cs.IT hep-th math.IT math.MP math.OA}
}
|
longo2024a
|
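In symbols, the bound stated in the abstract above can be written as follows (the notation is an editorial gloss, not taken from the paper: Ω the vacuum state, A(B) the local von Neumann algebra of B, and E_φ(B) the energy of φ in B):

```latex
S\bigl(\varphi \,\big\|\, \Omega\bigr)_{\mathcal{A}(B)} \;\le\; 2\pi R \, E_{\varphi}(B),
\qquad B \text{ a spacetime region of width } 2R > 0 .
```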
arxiv-660442
|
2409.14411
|
Scaling Diffusion Policy in Transformer to 1 Billion Parameters for Robotic Manipulation
|
<|reference_start|>Scaling Diffusion Policy in Transformer to 1 Billion Parameters for Robotic Manipulation: Diffusion Policy is a powerful technique for learning end-to-end visuomotor robot control. It is expected that Diffusion Policy possesses scalability, a key attribute for deep neural networks, typically suggesting that increasing model size would lead to enhanced performance. However, our observations indicate that Diffusion Policy in the transformer architecture (DP-T) struggles to scale effectively; even minor additions of layers can deteriorate training outcomes. To address this issue, we introduce the Scalable Diffusion Transformer Policy for visuomotor learning. Our proposed method, namely ScaleDP, introduces two modules that improve the training dynamics of Diffusion Policy and allow the network to better handle multimodal action distributions. First, we identify that DP-T suffers from large-gradient issues, making the optimization of Diffusion Policy unstable. To resolve this issue, we factorize the feature embedding of the observation into multiple affine layers and integrate them into the transformer blocks. Additionally, we utilize non-causal attention, which allows the policy network to "see" future actions during prediction, helping to reduce compounding errors. We demonstrate that our proposed method successfully scales the Diffusion Policy from 10 million to 1 billion parameters. This new model, named ScaleDP, can effectively scale up the model size with improved performance and generalization. We benchmark ScaleDP across 50 different tasks from MetaWorld and find that our largest ScaleDP outperforms DP-T with an average improvement of 21.6%. Across 7 real-world robot tasks, our ScaleDP demonstrates an average improvement of 36.25% over DP-T on four single-arm tasks and 75% on three bimanual tasks. We believe our work paves the way for scaling up models for visuomotor learning.
The project page is available at scaling-diffusion-policy.github.io.<|reference_end|>
|
arxiv
|
@article{zhu2024scaling,
title={Scaling Diffusion Policy in Transformer to 1 Billion Parameters for
Robotic Manipulation},
author={Minjie Zhu, Yichen Zhu, Jinming Li, Junjie Wen, Zhiyuan Xu, Ning Liu,
Ran Cheng, Chaomin Shen, Yaxin Peng, Feifei Feng, Jian Tang},
journal={arXiv preprint arXiv:2409.14411},
year={2024},
archivePrefix={arXiv},
eprint={2409.14411},
primaryClass={cs.RO}
}
|
zhu2024scaling
|
arxiv-660443
|
2409.14412
|
COSBO: Conservative Offline Simulation-Based Policy Optimization
|
<|reference_start|>COSBO: Conservative Offline Simulation-Based Policy Optimization: Offline reinforcement learning allows training reinforcement learning models on data from live deployments. However, it is limited to choosing the best combination of behaviors present in the training data. In contrast, simulation environments attempting to replicate the live environment can be used instead of the live data, yet this approach is limited by the simulation-to-reality gap, resulting in a bias. In an attempt to get the best of both worlds, we propose a method that combines an imperfect simulation environment with data from the target environment, to train an offline reinforcement learning policy. Our experiments demonstrate that the proposed method outperforms state-of-the-art approaches CQL, MOPO, and COMBO, especially in scenarios with diverse and challenging dynamics, and demonstrates robust behavior across a variety of experimental conditions. The results highlight that using simulator-generated data can effectively enhance offline policy learning despite the sim-to-real gap, when direct interaction with the real-world is not possible.<|reference_end|>
|
arxiv
|
@article{kargar2024cosbo:,
title={COSBO: Conservative Offline Simulation-Based Policy Optimization},
author={Eshagh Kargar and Ville Kyrki},
journal={arXiv preprint arXiv:2409.14412},
year={2024},
archivePrefix={arXiv},
eprint={2409.14412},
primaryClass={cs.LG cs.AI cs.RO}
}
|
kargar2024cosbo:
|
arxiv-660444
|
2409.14416
|
Uncovering EDK2 Firmware Flaws: Insights from Code Audit Tools
|
<|reference_start|>Uncovering EDK2 Firmware Flaws: Insights from Code Audit Tools: Firmware serves as a foundational software layer in modern computers, initiating as the first code executed on platform hardware, similar in function to a minimal operating system. Defined as a software interface between an operating system and platform firmware, the Unified Extensible Firmware Interface (UEFI) standardizes system initialization and management. A prominent open-source implementation of UEFI, the EFI Development Kit II (EDK2), plays a crucial role in shaping firmware architecture. Despite its widespread adoption, the architecture faces challenges such as limited system resources at early stages and a lack of standard security features. Furthermore, the scarcity of open-source tools specifically designed for firmware analysis emphasizes the need for adaptable, innovative solutions. In this paper, we explore the application of general code audit tools to firmware, with a particular focus on EDK2. Although these tools were not originally designed for firmware analysis, they have proven effective in identifying critical areas for enhancement in firmware security. Our findings, derived from deploying key audit tools on EDK2, categorize these tools based on their methodologies and illustrate their capability to uncover unique firmware attributes, significantly contributing to the understanding and improvement of firmware security.<|reference_end|>
|
arxiv
|
@article{farahani2024uncovering,
title={Uncovering EDK2 Firmware Flaws: Insights from Code Audit Tools},
author={Mahsa Farahani, Ghazal Shenavar, Ali Hosseinghorban and Alireza Ejlali},
journal={arXiv preprint arXiv:2409.14416},
year={2024},
archivePrefix={arXiv},
eprint={2409.14416},
primaryClass={cs.CR cs.AR cs.SE}
}
|
farahani2024uncovering
|
arxiv-660445
|
2409.14424
|
Dormant: Defending against Pose-driven Human Image Animation
|
<|reference_start|>Dormant: Defending against Pose-driven Human Image Animation: Pose-driven human image animation has achieved tremendous progress, enabling the generation of vivid and realistic human videos from just one single photo. However, it conversely exacerbates the risk of image misuse, as attackers may use one available image to create videos involving politics, violence and other illegal content. To counter this threat, we propose Dormant, a novel protection approach tailored to defend against pose-driven human image animation techniques. Dormant applies protective perturbation to one human image, preserving the visual similarity to the original but resulting in poor-quality video generation. The protective perturbation is optimized to induce misextraction of appearance features from the image and create incoherence among the generated video frames. Our extensive evaluation across 8 animation methods and 4 datasets demonstrates the superiority of Dormant over 6 baseline protection methods, leading to misaligned identities, visual distortions, noticeable artifacts, and inconsistent frames in the generated videos. Moreover, Dormant shows effectiveness on 6 real-world commercial services, even with fully black-box access.<|reference_end|>
|
arxiv
|
@article{zhou2024dormant:,
title={Dormant: Defending against Pose-driven Human Image Animation},
author={Jiachen Zhou, Mingsi Wang, Tianlin Li, Guozhu Meng, Kai Chen},
journal={arXiv preprint arXiv:2409.14424},
year={2024},
archivePrefix={arXiv},
eprint={2409.14424},
primaryClass={cs.CR cs.AI cs.CV}
}
|
zhou2024dormant:
|
arxiv-660446
|
2409.14426
|
p and hp Spectral Element Methods for Elliptic Boundary Layer Problems
|
<|reference_start|>p and hp Spectral Element Methods for Elliptic Boundary Layer Problems: In this article, we consider p and hp least-squares spectral element methods for one-dimensional elliptic boundary layer problems. We derive stability estimates and design a numerical scheme based on minimizing the residuals in the sense of least squares in appropriate Sobolev norms. We prove parameter-robust uniform error estimates, i.e., the error in the approximation is independent of the boundary layer parameter. For the p-version we prove a robust uniform convergence rate of O(sqrt(log W)/W) in the H2-norm, where W denotes the polynomial order used in the approximation, and for the hp-version the convergence rate is shown to be O(e^(-W/log W)). Numerical results are presented for a number of model elliptic boundary layer problems, confirming the theoretical estimates and uniform convergence results for the p and hp versions.<|reference_end|>
|
arxiv
|
@article{husain2024p,
title={p and hp Spectral Element Methods for Elliptic Boundary Layer Problems},
author={Akhlaq Husain, Aliya Kazmi, Subhashree Mohapatra, Ziya Uddin},
journal={arXiv preprint arXiv:2409.14426},
year={2024},
archivePrefix={arXiv},
eprint={2409.14426},
primaryClass={math.NA cs.NA math.AP}
}
|
husain2024p
|
arxiv-660447
|
2409.14429
|
Challenging the Performance-Interpretability Trade-off: An Evaluation of Interpretable Machine Learning Models
|
<|reference_start|>Challenging the Performance-Interpretability Trade-off: An Evaluation of Interpretable Machine Learning Models: Machine learning is permeating every conceivable domain to promote data-driven decision support. The focus is often on advanced black-box models due to their assumed performance advantages, whereas interpretable models are often associated with inferior predictive qualities. More recently, however, a new generation of generalized additive models (GAMs) has been proposed that offer promising properties for capturing complex, non-linear patterns while remaining fully interpretable. To uncover the merits and limitations of these models, this study examines the predictive performance of seven different GAMs in comparison to seven commonly used machine learning models based on a collection of twenty tabular benchmark datasets. To ensure a fair and robust model comparison, an extensive hyperparameter search combined with cross-validation was performed, resulting in 68,500 model runs. In addition, this study qualitatively examines the visual output of the models to assess their level of interpretability. Based on these results, the paper dispels the misconception that only black-box models can achieve high accuracy by demonstrating that there is no strict trade-off between predictive performance and model interpretability for tabular data. Furthermore, the paper discusses the importance of GAMs as powerful interpretable models for the field of information systems and derives implications for future work from a socio-technical perspective.<|reference_end|>
|
arxiv
|
@article{kruschel2024challenging,
title={Challenging the Performance-Interpretability Trade-off: An Evaluation of
Interpretable Machine Learning Models},
author={Sven Kruschel, Nico Hambauer, Sven Weinzierl, Sandra Zilker, Mathias
Kraus, Patrick Zschech},
journal={arXiv preprint arXiv:2409.14429},
year={2024},
archivePrefix={arXiv},
eprint={2409.14429},
primaryClass={cs.LG cs.AI cs.HC cs.NE}
}
|
kruschel2024challenging
|
arxiv-660448
|
2409.14430
|
Pomo3D: 3D-Aware Portrait Accessorizing and More
|
<|reference_start|>Pomo3D: 3D-Aware Portrait Accessorizing and More: We propose Pomo3D, a 3D portrait manipulation framework that allows free accessorizing by decomposing and recomposing portraits and accessories. It enables the avatars to attain out-of-distribution (OOD) appearances of simultaneously wearing multiple accessories. Existing methods still struggle to offer such explicit and fine-grained editing; they either fail to generate additional objects on given portraits or cause alterations to portraits (e.g., identity shift) when generating accessories. This restriction presents a noteworthy obstacle as people typically seek to create charming appearances with diverse and fashionable accessories in the virtual universe. Our approach provides an effective solution to this less-addressed issue. We further introduce the Scribble2Accessories module, enabling Pomo3D to create 3D accessories from user-drawn accessory scribble maps. Moreover, we design a bias-conscious mapper to mitigate biased associations present in real-world datasets. In addition to object-level manipulation above, Pomo3D also offers extensive editing options on portraits, including global or local editing of geometry and texture and avatar stylization, elevating 3D editing of neural portraits to a more comprehensive level.<|reference_end|>
|
arxiv
|
@article{liu2024pomo3d:,
title={Pomo3D: 3D-Aware Portrait Accessorizing and More},
author={Tzu-Chieh Liu, Chih-Ting Liu, Shao-Yi Chien},
journal={arXiv preprint arXiv:2409.14430},
year={2024},
archivePrefix={arXiv},
eprint={2409.14430},
primaryClass={cs.CV cs.AI}
}
|
liu2024pomo3d:
|
arxiv-660449
|
2409.14432
|
EM-DARTS: Hierarchical Differentiable Architecture Search for Eye Movement Recognition
|
<|reference_start|>EM-DARTS: Hierarchical Differentiable Architecture Search for Eye Movement Recognition: Eye movement biometrics has received increasing attention thanks to its highly secure identification. Although deep learning (DL) models have recently been successfully applied to eye movement recognition, the DL architecture is still determined by human prior knowledge. Differentiable Neural Architecture Search (DARTS) automates the manual process of architecture design with high search efficiency. DARTS, however, usually stacks multiple copies of the same learned cells to form the final neural network for evaluation, thereby limiting the diversity of the network. Moreover, DARTS usually searches the architecture in a shallow network while evaluating it in a deeper one, which results in a large gap between the architecture depths in the search and evaluation scenarios. To address this issue, we propose EM-DARTS, a hierarchical differentiable architecture search algorithm that automatically designs the DL architecture for eye movement recognition. First, we define a supernet and propose a global and local alternating Neural Architecture Search method that searches for the optimal architecture alternately via a differentiable neural architecture search. The local search strategy aims to find an optimal architecture for the different cells, while the global search strategy is responsible for optimizing the architecture of the target network. To further reduce redundancy, transfer entropy is proposed to compute the information content of each layer, so as to further simplify the search network. Our experiments on three public databases demonstrate that the proposed EM-DARTS is capable of producing an optimal architecture that leads to state-of-the-art recognition performance.<|reference_end|>
|
arxiv
|
@article{qin2024em-darts:,
title={EM-DARTS: Hierarchical Differentiable Architecture Search for Eye
Movement Recognition},
author={Huafeng Qin, Hongyu Zhu, Xin Jin, Xin Yu, Mounim A. El-Yacoubi, and
Xinbo Gao},
journal={arXiv preprint arXiv:2409.14432},
year={2024},
archivePrefix={arXiv},
eprint={2409.14432},
primaryClass={cs.CV}
}
|
qin2024em-darts:
|
arxiv-660450
|
2409.14433
|
OStr-DARTS: Differentiable Neural Architecture Search based on Operation Strength
|
<|reference_start|>OStr-DARTS: Differentiable Neural Architecture Search based on Operation Strength: Differentiable architecture search (DARTS) has emerged as a promising technique for effective neural architecture search, and it mainly involves two steps to find a high-performance architecture: First, the DARTS supernet, which consists of mixed operations, is optimized via gradient descent. Second, the final architecture is built from the selected operations that contribute the most to the supernet. Although DARTS improves the efficiency of NAS, it suffers from the well-known degeneration issue, which can lead to deteriorating architectures. Existing works mainly attribute the degeneration issue to the failure of its supernet optimization, while little attention has been paid to the selection method. In this paper, we set aside the widely used magnitude-based selection method and propose a novel criterion based on operation strength, which estimates the importance of an operation by its effect on the final loss. We show that the degeneration issue can be effectively addressed by using the proposed criterion without any modification of supernet optimization, indicating that the magnitude-based selection method can be a critical reason for the instability of DARTS. Experiments on the NAS-Bench-201 and DARTS search spaces show the effectiveness of our method.<|reference_end|>
|
arxiv
|
@article{yang2024ostr-darts:,
title={OStr-DARTS: Differentiable Neural Architecture Search based on Operation
Strength},
author={Le Yang, Ziwei Zheng, Yizeng Han, Shiji Song, Gao Huang and Fan Li},
journal={arXiv preprint arXiv:2409.14433},
year={2024},
archivePrefix={arXiv},
eprint={2409.14433},
primaryClass={cs.AI}
}
|
yang2024ostr-darts:
|
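The operation-strength criterion described in the OStr-DARTS abstract above, estimating an operation's importance by its effect on the final loss, can be illustrated on a toy mixed edge: ablate each candidate operation in turn and measure how much the loss changes. A minimal NumPy sketch, not the paper's exact estimator; the linear candidate operations and fixed architecture weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(16, 3))
y = rng.normal(size=(16,))

# three candidate operations on one edge (here: fixed linear maps)
ops = [rng.normal(size=(3,)) for _ in range(3)]
alpha = np.array([0.2, 0.5, 0.3])   # softmax-normalised architecture weights

def edge_output(mask):
    # mixed operation with some candidates ablated (mask[i] = 0 removes op i)
    return sum(a * m * (X @ w) for a, m, w in zip(alpha, mask, ops))

def loss(mask):
    return float(np.mean((edge_output(mask) - y) ** 2))

base = loss([1, 1, 1])
# operation strength: how much the final loss changes when op i is removed
strength = [abs(loss([0 if j == i else 1 for j in range(3)]) - base)
            for i in range(3)]
selected = int(np.argmax(strength))   # keep the strongest operation
```

Note the contrast with magnitude-based selection, which would pick `argmax(alpha)` regardless of each operation's actual effect on the loss.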
arxiv-660451
|
2409.14435
|
Adaptive Compensation for Robotic Joint Failures Using Partially Observable Reinforcement Learning
|
<|reference_start|>Adaptive Compensation for Robotic Joint Failures Using Partially Observable Reinforcement Learning: Robotic manipulators are widely used in various industries for complex and repetitive tasks. However, they remain vulnerable to unexpected hardware failures. In this study, we address the challenge of enabling a robotic manipulator to complete tasks despite joint malfunctions. Specifically, we develop a reinforcement learning (RL) framework to adaptively compensate for a non-functional joint during task execution. Our experimental platform is the Franka robot with 7 degrees of freedom (DOFs). We formulate the problem as a partially observable Markov decision process (POMDP), where the robot is trained under various joint failure conditions and tested in both seen and unseen scenarios. We consider scenarios where a joint is permanently broken and where it functions intermittently. Additionally, we demonstrate the effectiveness of our approach by comparing it with traditional inverse kinematics-based control methods. The results show that the RL algorithm enables the robot to successfully complete tasks even with joint failures, achieving a high success rate with an average rate of 93.6%. This showcases its robustness and adaptability. Our findings highlight the potential of RL to enhance the resilience and reliability of robotic systems, making them better suited for unpredictable environments. All related codes and models are published online.<|reference_end|>
|
arxiv
|
@article{pham2024adaptive,
title={Adaptive Compensation for Robotic Joint Failures Using Partially
Observable Reinforcement Learning},
author={Tan-Hanh Pham, Godwyll Aikins, Tri Truong, and Kim-Doang Nguyen},
journal={arXiv preprint arXiv:2409.14435},
year={2024},
archivePrefix={arXiv},
eprint={2409.14435},
primaryClass={cs.RO}
}
|
pham2024adaptive
|
arxiv-660452
|
2409.14436
|
Automotive innovation landscaping using LLM
|
<|reference_start|>Automotive innovation landscaping using LLM: The process of landscaping automotive innovation through patent analysis is crucial for Research and Development teams. It aids in comprehending innovation trends, technological advancements, and the latest technologies from competitors. Traditionally, this process required intensive manual effort. However, with the advent of Large Language Models (LLMs), it can now be automated, leading to faster and more efficient patent categorization and state-of-the-art inventive-concept extraction. This automation can assist various R&D teams in extracting relevant information from extensive patent databases. This paper introduces a method based on prompt engineering to extract the essential information for landscaping. The information includes the problem addressed by the patent, the technology utilized, and the area of innovation within the vehicle ecosystem (such as safety, Advanced Driver Assistance Systems, and more). The result demonstrates the implementation of this method to create a landscape of fuel cell technology using open-source patent data. This approach provides a comprehensive overview of the current state of fuel cell technology, offering valuable insights for future research and development in this field.<|reference_end|>
|
arxiv
|
@article{gorain2024automotive,
title={Automotive innovation landscaping using LLM},
author={Raju Gorain and Omkar Salunke},
journal={arXiv preprint arXiv:2409.14436},
year={2024},
archivePrefix={arXiv},
eprint={2409.14436},
primaryClass={cs.CL cs.AI cs.RO}
}
|
gorain2024automotive
|
arxiv-660453
|
2409.14438
|
Deflation Techniques for Finding Multiple Local Minima of a Nonlinear Least Squares Problem
|
<|reference_start|>Deflation Techniques for Finding Multiple Local Minima of a Nonlinear Least Squares Problem: In this paper we generalize the technique of deflation to define two new methods to systematically find many local minima of a nonlinear least squares problem. The methods are based on the Gauss-Newton algorithm, and as such do not require the calculation of a Hessian matrix. They also require fewer deflations than for applying the deflated Newton method on the first order optimality conditions, as the latter finds all stationary points, not just local minima. One application of interest covered in this paper is the inverse eigenvalue problem (IEP) associated with the modelling of spectroscopic data of relevance to the physical and chemical sciences. Open source MATLAB code is provided at https://github.com/AlbanBloorRiley/DeflatedGaussNewton.<|reference_end|>
|
arxiv
|
@article{riley2024deflation,
title={Deflation Techniques for Finding Multiple Local Minima of a Nonlinear
Least Squares Problem},
author={Alban Bloor Riley, Marcus Webb, Michael L Baker},
journal={arXiv preprint arXiv:2409.14438},
year={2024},
archivePrefix={arXiv},
eprint={2409.14438},
primaryClass={math.NA cs.NA}
}
|
riley2024deflation
|
arxiv-660454
|
2409.14439
|
A Visualized Malware Detection Framework with CNN and Conditional GAN
|
<|reference_start|>A Visualized Malware Detection Framework with CNN and Conditional GAN: Malware visualization analysis incorporating Machine Learning (ML) has been proven to be a promising solution for improving security defenses on different platforms. In this work, we propose an integrated framework for addressing common problems experienced by ML utilizers in developing malware detection systems. Namely, a pictorial presentation system with extensions is designed to preserve the identities of benign/malign samples by encoding each variable into binary digits and mapping them into black and white pixels. A conditional Generative Adversarial Network based model is adopted to produce synthetic images and mitigate issues of imbalanced classes. Detection models architected by Convolutional Neural Networks are used for validating performance while training on datasets with and without artifactual samples. Results demonstrate accuracy rates of 98.51% and 97.26% for these two training scenarios.<|reference_end|>
|
arxiv
|
@article{wang2024a,
title={A Visualized Malware Detection Framework with CNN and Conditional GAN},
author={Fang Wang (Florence Wong), Hussam Al Hamadi, Ernesto Damiani},
journal={arXiv preprint arXiv:2409.14439},
year={2024},
doi={10.1109/BigData55660.2022.10020534},
archivePrefix={arXiv},
eprint={2409.14439},
primaryClass={cs.CR cs.AI cs.LG}
}
|
wang2024a
|
arxiv-660455
|
2409.14440
|
Contact Compliance Visuo-Proprioceptive Policy for Contact-Rich Manipulation with Cost-Efficient Haptic Hand-Arm Teleoperation System
|
<|reference_start|>Contact Compliance Visuo-Proprioceptive Policy for Contact-Rich Manipulation with Cost-Efficient Haptic Hand-Arm Teleoperation System: Learning robot manipulation skills in real-world environments is extremely challenging. Recent research on imitation learning and visuomotor policies has significantly enhanced the ability of robots to perform manipulation tasks. In this paper, we propose Admit Policy, a visuo-proprioceptive imitation learning framework with force compliance, designed to reduce contact force fluctuations during robot execution of contact-rich manipulation tasks. This framework also includes a hand-arm teleoperation system with vibrotactile feedback for efficient data collection. Our framework utilizes RGB images, robot joint positions, and contact forces as observations and leverages a consistency-constrained teacher-student probabilistic diffusion model to generate future trajectories for end-effector positions and contact forces. An admittance model is then employed to track these trajectories, enabling effective force-position control across various tasks. We validated our framework on five challenging contact-rich manipulation tasks. While improving success rates, our approach reduced the mean contact force required to complete the tasks by up to 53.92% and decreased the standard deviation of contact force fluctuations by 76.51% compared to imitation learning algorithms without dynamic contact force prediction and tracking.<|reference_end|>
|
arxiv
|
@article{zhou2024admittance,
title={Admittance Visuomotor Policy Learning for General-Purpose Contact-Rich
Manipulations},
author={Bo Zhou, Ruixuan Jiao, Yi Li, Xiaogang Yuan, Fang Fang, and Shihua Li},
journal={arXiv preprint arXiv:2409.14440},
year={2024},
archivePrefix={arXiv},
eprint={2409.14440},
primaryClass={cs.RO}
}
|
zhou2024admittance
|
arxiv-660456
|
2409.14444
|
Fake It till You Make It: Curricular Dynamic Forgery Augmentations towards General Deepfake Detection
|
<|reference_start|>Fake It till You Make It: Curricular Dynamic Forgery Augmentations towards General Deepfake Detection: Previous studies in deepfake detection have shown promising results when testing face forgeries from the same dataset as the training. However, the problem remains challenging when one tries to generalize the detector to forgeries from unseen datasets and created by unseen methods. In this work, we present a novel general deepfake detection method, called \textbf{C}urricular \textbf{D}ynamic \textbf{F}orgery \textbf{A}ugmentation (CDFA), which jointly trains a deepfake detector with a forgery augmentation policy network. Unlike the previous works, we propose to progressively apply forgery augmentations following a monotonic curriculum during the training. We further propose a dynamic forgery searching strategy to select one suitable forgery augmentation operation for each image varying between training stages, producing a forgery augmentation policy optimized for better generalization. In addition, we propose a novel forgery augmentation named self-shifted blending image to simply imitate the temporal inconsistency of deepfake generation. Comprehensive experiments show that CDFA can significantly improve both cross-datasets and cross-manipulations performances of various naive deepfake detectors in a plug-and-play way, and make them attain superior performances over the existing methods in several benchmark datasets.<|reference_end|>
|
arxiv
|
@article{lin2024fake,
title={Fake It till You Make It: Curricular Dynamic Forgery Augmentations
towards General Deepfake Detection},
author={Yuzhen Lin, Wentang Song, Bin Li, Yuezun Li, Jiangqun Ni, Han Chen and
Qiushi Li},
journal={arXiv preprint arXiv:2409.14444},
year={2024},
archivePrefix={arXiv},
eprint={2409.14444},
primaryClass={cs.CV}
}
|
lin2024fake
|
arxiv-660457
|
2409.14446
|
Detection of pulmonary pathologies using convolutional neural networks, Data Augmentation, ResNet50 and Vision Transformers
|
<|reference_start|>Detection of pulmonary pathologies using convolutional neural networks, Data Augmentation, ResNet50 and Vision Transformers: Pulmonary diseases are a public health problem that requires accurate and fast diagnostic techniques. In this paper, a method based on convolutional neural networks (CNN), Data Augmentation, ResNet50 and Vision Transformers (ViT) is proposed to detect lung pathologies from medical images. A dataset of X-ray images and CT scans of patients with different lung diseases, such as cancer, pneumonia, tuberculosis and fibrosis, is used. The results obtained by the proposed method are compared with those of other existing methods, using performance metrics such as accuracy, sensitivity, specificity and area under the ROC curve. The results show that the proposed method outperforms the other methods in all metrics, achieving an accuracy of 98% and an area under the ROC curve of 99%. It is concluded that the proposed method is an effective and promising tool for the diagnosis of pulmonary pathologies by medical imaging.<|reference_end|>
|
arxiv
|
@article{amador2024detection,
title={Detection of pulmonary pathologies using convolutional neural networks,
Data Augmentation, ResNet50 and Vision Transformers},
author={Pablo Ramirez Amador, Dinarle Milagro Ortega, Arnold Cesarano},
journal={arXiv preprint arXiv:2409.14446},
year={2024},
archivePrefix={arXiv},
eprint={2409.14446},
primaryClass={eess.IV cs.AI cs.CV}
}
|
amador2024detection
|
arxiv-660458
|
2409.14447
|
ParvaGPU: Efficient Spatial GPU Sharing for Large-Scale DNN Inference in Cloud Environments
|
<|reference_start|>ParvaGPU: Efficient Spatial GPU Sharing for Large-Scale DNN Inference in Cloud Environments: In cloud environments, GPU-based deep neural network (DNN) inference servers are required to meet the Service Level Objective (SLO) latency for each workload under a specified request rate, while also minimizing GPU resource consumption. However, previous studies have not fully achieved this objective. In this paper, we propose ParvaGPU, a technology that facilitates spatial GPU sharing for large-scale DNN inference in cloud computing. ParvaGPU integrates NVIDIA's Multi-Instance GPU (MIG) and Multi-Process Service (MPS) technologies to enhance GPU utilization, with the goal of meeting the diverse SLOs of each workload and reducing overall GPU usage. Specifically, ParvaGPU addresses the challenges of minimizing underutilization within allocated GPU space partitions and external fragmentation in combined MIG and MPS environments. We conducted our assessment on multiple A100 GPUs, evaluating 11 diverse DNN workloads with varying SLOs. Our evaluation revealed no SLO violations and a significant reduction in GPU usage compared to state-of-the-art frameworks.<|reference_end|>
|
arxiv
|
@article{lee2024parvagpu:,
title={ParvaGPU: Efficient Spatial GPU Sharing for Large-Scale DNN Inference in
Cloud Environments},
author={Munkyu Lee, Sihoon Seong, Minki Kang, Jihyuk Lee, Gap-Joo Na, In-Geol
Chun, Dimitrios Nikolopoulos, and Cheol-Ho Hong},
journal={arXiv preprint arXiv:2409.14447},
year={2024},
archivePrefix={arXiv},
eprint={2409.14447},
primaryClass={cs.DC}
}
|
lee2024parvagpu:
|
arxiv-660459
|
2409.14449
|
Space-time FEM-BEM couplings for parabolic transmission problems
|
<|reference_start|>Space-time FEM-BEM couplings for parabolic transmission problems: We develop couplings of a recent space-time first-order system least-squares (FOSLS) method for parabolic problems and space-time boundary element methods (BEM) for the heat equation to numerically solve a parabolic transmission problem on the full space and a finite time interval. In particular, we demonstrate coercivity of the couplings under certain restrictions and validate our theoretical findings by numerical experiments.<|reference_end|>
|
arxiv
|
@article{führer2024space-time,
title={Space-time FEM-BEM couplings for parabolic transmission problems},
author={Thomas F{\"u}hrer, Gregor Gantner, Michael Karkulik},
journal={arXiv preprint arXiv:2409.14449},
year={2024},
archivePrefix={arXiv},
eprint={2409.14449},
primaryClass={math.NA cs.NA}
}
|
führer2024space-time
|
arxiv-660460
|
2409.14454
|
A Unified Approach for Learning the Dynamics of Power System Generators and Inverter-based Resources
|
<|reference_start|>A Unified Approach for Learning the Dynamics of Power System Generators and Inverter-based Resources: The growing prevalence of inverter-based resources (IBRs) for renewable energy integration and electrification greatly challenges power system dynamic analysis. To account for both synchronous generators (SGs) and IBRs, this work presents an approach for learning the model of an individual dynamic component. The recurrent neural network (RNN) model is used to match the recursive structure in predicting the key dynamical states of a component from its terminal bus voltage and set-point input. To deal with the fast transients especially due to IBRs, we develop a Stable Integral (SI-)RNN to mimic high-order integral methods that can enhance the stability and accuracy for the dynamic learning task. We demonstrate that the proposed SI-RNN model not only can successfully predict the component's dynamic behaviors, but also offers the possibility of efficiently computing the dynamic sensitivity relative to a set-point change. These capabilities have been numerically validated based on full-order Electromagnetic Transient (EMT) simulations on a small test system with both SGs and IBRs, particularly for predicting the dynamics of grid-forming inverters.<|reference_end|>
|
arxiv
|
@article{liu2024a,
title={A Unified Approach for Learning the Dynamics of Power System Generators
and Inverter-based Resources},
author={Shaohui Liu, Weiqian Cai, Hao Zhu, Brian Johnson},
journal={arXiv preprint arXiv:2409.14454},
year={2024},
archivePrefix={arXiv},
eprint={2409.14454},
primaryClass={eess.SY cs.LG cs.SY}
}
|
liu2024a
|
arxiv-660461
|
2409.14455
|
A High-Performance External Validity Index for Clustering with a Large Number of Clusters
|
<|reference_start|>A High-Performance External Validity Index for Clustering with a Large Number of Clusters: This paper introduces the Stable Matching Based Pairing (SMBP) algorithm, a high-performance external validity index for clustering evaluation in large-scale datasets with a large number of clusters. SMBP leverages the stable matching framework to pair clusters across different clustering methods, significantly reducing computational complexity to $O(N^2)$, compared to traditional Maximum Weighted Matching (MWM) with $O(N^3)$ complexity. Through comprehensive evaluations on real-world and synthetic datasets, SMBP demonstrates comparable accuracy to MWM and superior computational efficiency. It is particularly effective for balanced, unbalanced, and large-scale datasets with a large number of clusters, making it a scalable and practical solution for modern clustering tasks. Additionally, SMBP is easily implementable within machine learning frameworks like PyTorch and TensorFlow, offering a robust tool for big data applications. The algorithm is validated through extensive experiments, showcasing its potential as a powerful alternative to existing methods such as Maximum Match Measure (MMM) and Centroid Ratio (CR).<|reference_end|>
|
arxiv
|
@article{karbasian2024a,
title={A High-Performance External Validity Index for Clustering with a Large
Number of Clusters},
author={Mohammad Yasin Karbasian, Ramin Javadi},
journal={arXiv preprint arXiv:2409.14455},
year={2024},
archivePrefix={arXiv},
eprint={2409.14455},
primaryClass={cs.DS cs.GT cs.LG}
}
|
karbasian2024a
|
arxiv-660462
|
2409.14456
|
Scoring rule nets: beyond mean target prediction in multivariate regression
|
<|reference_start|>Scoring rule nets: beyond mean target prediction in multivariate regression: Probabilistic regression models trained with maximum likelihood estimation (MLE), can sometimes overestimate variance to an unacceptable degree. This is mostly problematic in the multivariate domain. While univariate models often optimize the popular Continuous Ranked Probability Score (CRPS), in the multivariate domain, no such alternative to MLE has yet been widely accepted. The Energy Score - the most investigated alternative - notoriously lacks closed-form expressions and sensitivity to the correlation between target variables. In this paper, we propose Conditional CRPS: a multivariate strictly proper scoring rule that extends CRPS. We show that closed-form expressions exist for popular distributions and illustrate their sensitivity to correlation. We then show in a variety of experiments on both synthetic and real data, that Conditional CRPS often outperforms MLE, and produces results comparable to state-of-the-art non-parametric models, such as Distributional Random Forest (DRF).<|reference_end|>
|
arxiv
|
@article{roordink2024scoring,
title={Scoring rule nets: beyond mean target prediction in multivariate
regression},
author={Daan Roordink and Sibylle Hess},
journal={arXiv preprint arXiv:2409.14456},
year={2024},
doi={10.1007/978-3-031-43415-0_12},
archivePrefix={arXiv},
eprint={2409.14456},
primaryClass={cs.AI}
}
|
roordink2024scoring
|
arxiv-660463
|
2409.14457
|
Large Model Agents: State-of-the-Art, Cooperation Paradigms, Security and Privacy, and Future Trends
|
<|reference_start|>Large Model Agents: State-of-the-Art, Cooperation Paradigms, Security and Privacy, and Future Trends: Large Model (LM) agents, powered by large foundation models such as GPT-4 and DALL-E 2, represent a significant step towards achieving Artificial General Intelligence (AGI). LM agents exhibit key characteristics of autonomy, embodiment, and connectivity, allowing them to operate across physical, virtual, and mixed-reality environments while interacting seamlessly with humans, other agents, and their surroundings. This paper provides a comprehensive survey of the state-of-the-art in LM agents, focusing on the architecture, cooperation paradigms, security, privacy, and future prospects. Specifically, we first explore the foundational principles of LM agents, including general architecture, key components, enabling technologies, and modern applications. Then, we discuss practical collaboration paradigms from data, computation, and knowledge perspectives towards connected intelligence of LM agents. Furthermore, we systematically analyze the security vulnerabilities and privacy breaches associated with LM agents, particularly in multi-agent settings. We also explore their underlying mechanisms and review existing and potential countermeasures. Finally, we outline future research directions for building robust and secure LM agent ecosystems.<|reference_end|>
|
arxiv
|
@article{wang2024large,
title={Large Model Agents: State-of-the-Art, Cooperation Paradigms, Security
and Privacy, and Future Trends},
author={Yuntao Wang, Yanghe Pan, Quan Zhao, Yi Deng, Zhou Su, Linkang Du, and
Tom H. Luan},
journal={arXiv preprint arXiv:2409.14457},
year={2024},
archivePrefix={arXiv},
eprint={2409.14457},
primaryClass={cs.AI}
}
|
wang2024large
|
arxiv-660464
|
2409.14459
|
Exploring Multilingual Probing in Large Language Models: A Cross-Language Analysis
|
<|reference_start|>Exploring Multilingual Probing in Large Language Models: A Cross-Language Analysis: Probing techniques for large language models (LLMs) have primarily focused on English, overlooking the vast majority of the world's languages. In this paper, we extend these probing methods to a multilingual context, investigating the behaviors of LLMs across diverse languages. We conduct experiments on several open-source LLM models, analyzing probing accuracy, trends across layers, and similarities between probing vectors for multiple languages. Our key findings reveal: (1) a consistent performance gap between high-resource and low-resource languages, with high-resource languages achieving significantly higher probing accuracy; (2) divergent layer-wise accuracy trends, where high-resource languages show substantial improvement in deeper layers similar to English; and (3) higher representational similarities among high-resource languages, with low-resource languages demonstrating lower similarities both among themselves and with high-resource languages. These results highlight significant disparities in LLMs' multilingual capabilities and emphasize the need for improved modeling of low-resource languages.<|reference_end|>
|
arxiv
|
@article{li2024exploring,
title={Exploring Multilingual Probing in Large Language Models: A
Cross-Language Analysis},
author={Daoyang Li, Mingyu Jin, Qingcheng Zeng, Haiyan Zhao, Mengnan Du},
journal={arXiv preprint arXiv:2409.14459},
year={2024},
archivePrefix={arXiv},
eprint={2409.14459},
primaryClass={cs.CL cs.AI cs.LG}
}
|
li2024exploring
|
arxiv-660465
|
2409.14461
|
Low-Light Enhancement Effect on Classification and Detection: An Empirical Study
|
<|reference_start|>Low-Light Enhancement Effect on Classification and Detection: An Empirical Study: Low-light images are commonly encountered in real-world scenarios, and numerous low-light image enhancement (LLIE) methods have been proposed to improve the visibility of these images. The primary goal of LLIE is to generate clearer images that are more visually pleasing to humans. However, the impact of LLIE methods on high-level vision tasks, such as image classification and object detection, which rely on high-quality image datasets, is not well explored. To explore the impact, we comprehensively evaluate LLIE methods on these high-level vision tasks through an empirical investigation comprising image classification and object detection experiments. The evaluation reveals a dichotomy: while LLIE methods enhance human visual interpretation, their effect on computer vision tasks is inconsistent and can sometimes be harmful. Our findings suggest a disconnect between image enhancement for human visual perception and for machine analysis, indicating a need for LLIE methods tailored to support high-level vision tasks effectively. This insight is crucial for the development of LLIE techniques that align with the needs of both human and machine vision.<|reference_end|>
|
arxiv
|
@article{wu2024low-light,
title={Low-Light Enhancement Effect on Classification and Detection: An
Empirical Study},
author={Xu Wu, Zhihui Lai, Zhou Jie, Can Gao, Xianxu Hou, Ya-nan Zhang, Linlin
Shen},
journal={arXiv preprint arXiv:2409.14461},
year={2024},
archivePrefix={arXiv},
eprint={2409.14461},
primaryClass={cs.CV}
}
|
wu2024low-light
|
arxiv-660466
|
2409.14462
|
A Further Investigation on Complete Complementary Codes from $q$-ary Functions
|
<|reference_start|>A Further Investigation on Complete Complementary Codes from $q$-ary Functions: This research focuses on constructing $q$-ary functions for complete complementary codes (CCCs) with flexible parameters. Most existing work has primarily identified sufficient conditions for $q$-ary functions related to $q$-ary CCCs. To the best of the authors' knowledge, this study is the first to establish both the necessary and sufficient conditions for $q$-ary functions, encompassing most existing CCCs constructions as special cases. For $q$-ary CCCs with a length of $q^m$ and a set size of $q^{n+1}$, we begin by analyzing the necessary and sufficient conditions for $q$-ary functions defined over the domain $\mathbb{Z}_q^m$. Additionally, we construct CCCs with lengths given by $L = \prod_{i=1}^k p_i^{m_i}$, set sizes given by $K = \prod_{i=1}^k p_i^{n_i+1}$, and an alphabet size of $\nu = \prod_{i=1}^k p_i$, where $p_1 < p_2 < \cdots < p_k$. To achieve these specific parameters, we examine the necessary and sufficient conditions for $\nu$-ary functions over the domain $\mathbf{Z}_{p_1}^{m_1} \times \cdots \times \mathbf{Z}_{p_k}^{m_k}$, which is a subset of $\mathbb{Z}_{\nu}^m$ and contains $\prod_{i=1}^k p_i^{m_i}$ vectors. In this context, $\mathbf{Z}_{p_i}^{m_i} = \{0, 1, \ldots, p_i - 1\}^{m_i}$, and $m$ is the sum of $m_1, m_2, \ldots, m_k$. The $q$-ary and $\nu$-ary functions allow us to cover all possible length sequences. However, we find that the proposed $\nu$-ary functions are more suitable for generating CCCs with a length of $L = \prod_{i=1}^k p_i^{m_i}$, particularly when $m_i$ is coprime to $m_j$ for some $1 \leq i \neq j \leq k$. While the proposed $q$-ary functions can also produce CCCs of the same length $L$, the set size and alphabet size become as large as $L$, since in this case, the only choice for $q$ is $L$. 
In contrast, the proposed $\nu$-ary functions yield CCCs with a more flexible set size $K\leq L$ and an alphabet size of $\nu<L$.<|reference_end|>
|
arxiv
|
@article{sarkar2024a,
title={A Further Investigation on Complete Complementary Codes from $q$-ary
Functions},
author={Palash Sarkar, Chunlei Li, Sudhan Majhi, and Zilong Liu},
journal={arXiv preprint arXiv:2409.14462},
year={2024},
archivePrefix={arXiv},
eprint={2409.14462},
primaryClass={math.CO cs.IT math.IT}
}
|
sarkar2024a
|
arxiv-660467
|
2409.14464
|
AggregHate: An Efficient Aggregative Approach for the Detection of Hatemongers on Social Platforms
|
<|reference_start|>AggregHate: An Efficient Aggregative Approach for the Detection of Hatemongers on Social Platforms: Automatic detection of online hate speech serves as a crucial step in the detoxification of online discourse. Moreover, accurate classification can promote a better understanding of the proliferation of hate as a social phenomenon. While most prior work focuses on the detection of hateful utterances, we argue that focusing on the user level is as important, albeit challenging. In this paper we consider a multimodal aggregative approach for the detection of hate-mongers, taking into account the potentially hateful texts, user activity, and the user network. We evaluate our methods on three unique datasets: X (Twitter), Gab, and Parler, showing that processing a user's texts in their social context significantly improves the detection of hate mongers, compared to previously used text and graph-based methods. Our method can then be used to improve the classification of coded messages, dog-whistling, and racial gas-lighting, as well as to inform intervention measures. Moreover, our approach is highly efficient even for very large datasets and networks.<|reference_end|>
|
arxiv
|
@article{marzea2024aggreghate:,
title={AggregHate: An Efficient Aggregative Approach for the Detection of
Hatemongers on Social Platforms},
author={Tom Marzea, Abraham Israeli, Oren Tsur},
journal={arXiv preprint arXiv:2409.14464},
year={2024},
archivePrefix={arXiv},
eprint={2409.14464},
primaryClass={cs.CL cs.SI}
}
|
marzea2024aggreghate:
|
arxiv-660468
|
2409.14465
|
On logic and generative AI
|
<|reference_start|>On logic and generative AI: A hundred years ago, logic was almost synonymous with foundational studies. The ongoing AI revolution raises many deep foundational problems involving neuroscience, philosophy, computer science, and logic. The goal of the following dialog is to provoke young logicians with a taste for foundations to notice the foundational problems raised by the AI revolution.<|reference_end|>
|
arxiv
|
@article{gurevich2024on,
title={On logic and generative AI},
author={Yuri Gurevich and Andreas Blass},
journal={Bulletin of the EATCS 143, June 2024},
year={2024},
archivePrefix={arXiv},
eprint={2409.14465},
primaryClass={cs.AI cs.LO}
}
|
gurevich2024on
|
arxiv-660469
|
2409.14469
|
Rethinking Semantic Parsing for Large Language Models: Enhancing LLM Performance with Semantic Hints
|
<|reference_start|>Rethinking Semantic Parsing for Large Language Models: Enhancing LLM Performance with Semantic Hints: Semantic Parsing aims to capture the meaning of a sentence and convert it into a logical, structured form. Previous studies show that semantic parsing enhances the performance of smaller models (e.g., BERT) on downstream tasks. However, it remains unclear whether the improvements extend similarly to LLMs. In this paper, our empirical findings reveal that, unlike smaller models, directly adding semantic parsing results into LLMs reduces their performance. To overcome this, we propose SENSE, a novel prompting approach that embeds semantic hints within the prompt. Experiments show that SENSE consistently improves LLMs' performance across various tasks, highlighting the potential of integrating semantic information to improve LLM capabilities.<|reference_end|>
|
arxiv
|
@article{an2024rethinking,
title={Rethinking Semantic Parsing for Large Language Models: Enhancing LLM
Performance with Semantic Hints},
author={Kaikai An, Shuzheng Si, Helan Hu, Haozhe Zhao, Yuchi Wang, Qingyan
Guo, Baobao Chang},
journal={arXiv preprint arXiv:2409.14469},
year={2024},
archivePrefix={arXiv},
eprint={2409.14469},
primaryClass={cs.CL}
}
|
an2024rethinking
|
arxiv-660470
|
2409.14472
|
Blockchain Based Information Security and Privacy Protection: Challenges and Future Directions using Computational Literature Review
|
<|reference_start|>Blockchain Based Information Security and Privacy Protection: Challenges and Future Directions using Computational Literature Review: Blockchain technology is an emerging digital innovation that has gained immense popularity in enhancing individual security and privacy within Information Systems (IS). This surge in interest is reflected in the exponential increase in research articles published on blockchain technology, highlighting its growing significance in the digital landscape. However, the rapid proliferation of published research presents significant challenges for manual analysis and synthesis due to the vast volume of information. The complexity and breadth of topics, combined with the inherent limitations of human data processing capabilities, make it difficult to comprehensively analyze and draw meaningful insights from the literature. To this end, we adopted the Computational Literature Review (CLR) approach to analyze the impact of the pertinent literature and to perform topic modelling using the Latent Dirichlet Allocation (LDA) technique. We identified 10 topics related to security and privacy and provided a detailed description of each topic. From the critical analysis, we have observed several limitations, and several future directions are provided as an outcome of this review.<|reference_end|>
|
arxiv
|
@article{shankar2024blockchain,
title={Blockchain Based Information Security and Privacy Protection: Challenges
and Future Directions using Computational Literature Review},
author={Gauri Shankar, Md Raihan Uddin, Saddam Mukta, Prabhat Kumar, Shareeful
Islam and A.K.M. Najmul Islam},
journal={arXiv preprint arXiv:2409.14472},
year={2024},
archivePrefix={arXiv},
eprint={2409.14472},
primaryClass={cs.CR}
}
|
shankar2024blockchain
|
arxiv-660471
|
2409.14473
|
A Large Language Model and Denoising Diffusion Framework for Targeted Design of Microstructures with Commands in Natural Language
|
<|reference_start|>A Large Language Model and Denoising Diffusion Framework for Targeted Design of Microstructures with Commands in Natural Language: Microstructure plays a critical role in determining the macroscopic properties of materials, with applications spanning alloy design, MEMS devices, and tissue engineering, among many others. Computational frameworks have been developed to capture the complex relationship between microstructure and material behavior. However, despite these advancements, the steep learning curve associated with domain-specific knowledge and complex algorithms restricts the broader application of these tools. To lower this barrier, we propose a framework that integrates Natural Language Processing (NLP), Large Language Models (LLMs), and Denoising Diffusion Probabilistic Models (DDPMs) to enable microstructure design using intuitive natural language commands. Our framework employs contextual data augmentation, driven by a pretrained LLM, to generate and expand a diverse dataset of microstructure descriptors. A retrained NER model extracts relevant microstructure descriptors from user-provided natural language inputs, which are then used by the DDPM to generate microstructures with targeted mechanical properties and topological features. The NLP and DDPM components of the framework are modular, allowing for separate training and validation, which ensures flexibility in adapting the framework to different datasets and use cases. A surrogate model system is employed to rank and filter generated samples based on their alignment with target properties. Demonstrated on a database of nonlinear hyperelastic microstructures, this framework serves as a prototype for accessible inverse design of microstructures, starting from intuitive natural language commands.<|reference_end|>
|
arxiv
|
@article{kartashov2024a,
title={A Large Language Model and Denoising Diffusion Framework for Targeted
Design of Microstructures with Commands in Natural Language},
author={Nikita Kartashov and Nikolaos N. Vlassis},
journal={arXiv preprint arXiv:2409.14473},
year={2024},
archivePrefix={arXiv},
eprint={2409.14473},
primaryClass={cs.CE cs.CL}
}
|
kartashov2024a
|
arxiv-660472
|
2409.14474
|
SynBench: A Synthetic Benchmark for Non-rigid 3D Point Cloud Registration
|
<|reference_start|>SynBench: A Synthetic Benchmark for Non-rigid 3D Point Cloud Registration: Non-rigid point cloud registration is a crucial task in computer vision. Evaluating a non-rigid point cloud registration method requires a dataset with challenges such as large deformation levels, noise, outliers, and incompleteness. Despite the existence of several datasets for deformable point cloud registration, the absence of a comprehensive benchmark with all challenges makes it difficult to achieve fair evaluations among different methods. This paper introduces SynBench, a new non-rigid point cloud registration dataset created using SimTool, a toolset for soft body simulation in Flex and Unreal Engine. SynBench provides the ground truth of corresponding points between two point sets and encompasses key registration challenges, including varying levels of deformation, noise, outliers, and incompleteness. To the best of the authors' knowledge, compared to existing datasets, SynBench possesses three particular characteristics: (1) it is the first benchmark that provides various challenges for non-rigid point cloud registration, (2) SynBench encompasses challenges of varying difficulty levels, and (3) it includes ground truth corresponding points both before and after deformation. The authors believe that SynBench enables future non-rigid point cloud registration methods to present a fair comparison of their achievements. SynBench is publicly available at: https://doi.org/10.11588/data/R9IKCF.<|reference_end|>
|
arxiv
|
@article{monji-azad2024synbench:,
title={SynBench: A Synthetic Benchmark for Non-rigid 3D Point Cloud
Registration},
author={Sara Monji-Azad, Marvin Kinz, Claudia Scherl, David M\"annle, J\"urgen
Hesser, Nikolas L\"ow},
journal={arXiv preprint arXiv:2409.14474},
year={2024},
archivePrefix={arXiv},
eprint={2409.14474},
primaryClass={cs.CV cs.AI cs.GR cs.LG}
}
|
monji-azad2024synbench:
|
arxiv-660473
|
2409.14475
|
Lesion Segmentation in Whole-Body Multi-Tracer PET-CT Images; a Contribution to AutoPET 2024 Challenge
|
<|reference_start|>Lesion Segmentation in Whole-Body Multi-Tracer PET-CT Images; a Contribution to AutoPET 2024 Challenge: The automatic segmentation of pathological regions within whole-body PET-CT volumes has the potential to streamline various clinical applications such as diagnosis, prognosis, and treatment planning. This study aims to address this challenge by contributing to the AutoPET MICCAI 2024 challenge through a proposed workflow that incorporates image preprocessing, tracer classification, and lesion segmentation steps. The implementation of this pipeline led to a significant enhancement in the segmentation accuracy of the models. This improvement is evidenced by an average overall Dice score of 0.548 across 1611 training subjects, 0.631 and 0.559 for classified FDG and PSMA subjects of the training set, and 0.792 on the preliminary testing phase dataset.<|reference_end|>
|
arxiv
|
@article{astaraki2024lesion,
title={Lesion Segmentation in Whole-Body Multi-Tracer PET-CT Images; a
Contribution to AutoPET 2024 Challenge},
author={Mehdi Astaraki, Simone Bendazzoli},
journal={arXiv preprint arXiv:2409.14475},
year={2024},
archivePrefix={arXiv},
eprint={2409.14475},
primaryClass={eess.IV cs.CV}
}
|
astaraki2024lesion
|
arxiv-660474
|
2409.14478
|
Can Large Language Models Logically Predict Myocardial Infarction? Evaluation based on UK Biobank Cohort
|
<|reference_start|>Can Large Language Models Logically Predict Myocardial Infarction? Evaluation based on UK Biobank Cohort: Background: Large language models (LLMs) have seen extraordinary advances with applications in clinical decision support. However, high-quality evidence is urgently needed on the potential and limitation of LLMs in providing accurate clinical decisions based on real-world medical data. Objective: To evaluate quantitatively whether universal state-of-the-art LLMs (ChatGPT and GPT-4) can predict the incidence risk of myocardial infarction (MI) with logical inference, and to further make comparison between various models to assess the performance of LLMs comprehensively. Methods: In this retrospective cohort study, 482,310 participants recruited from 2006 to 2010 were initially included in UK Biobank database and later on resampled into a final cohort of 690 participants. For each participant, tabular data of the risk factors of MI were transformed into standardized textual descriptions for ChatGPT recognition. Responses were generated by asking ChatGPT to select a score ranging from 0 to 10 representing the risk. Chain of Thought (CoT) questioning was used to evaluate whether LLMs make prediction logically. The predictive performance of ChatGPT was compared with published medical indices, traditional machine learning models and other large language models. Conclusions: Current LLMs are not ready to be applied in clinical medicine fields. Future medical LLMs are suggested to be expert in medical domain knowledge to understand both natural languages and quantified medical data, and further make logical inferences.<|reference_end|>
|
arxiv
|
@article{zhi2024can,
title={Can Large Language Models Logically Predict Myocardial Infarction?
Evaluation based on UK Biobank Cohort},
author={Yuxing Zhi, Yuan Guo, Kai Yuan, Hesong Wang, Heng Xu, Haina Yao,
Albert C Yang, Guangrui Huang, Yuping Duan},
journal={arXiv preprint arXiv:2409.14478},
year={2024},
archivePrefix={arXiv},
eprint={2409.14478},
primaryClass={cs.AI}
}
|
zhi2024can
|
arxiv-660475
|
2409.14483
|
One Model for Two Tasks: Cooperatively Recognizing and Recovering Low-Resolution Scene Text Images by Iterative Mutual Guidance
|
<|reference_start|>One Model for Two Tasks: Cooperatively Recognizing and Recovering Low-Resolution Scene Text Images by Iterative Mutual Guidance: Scene text recognition (STR) from high-resolution (HR) images has been significantly successful, however text reading on low-resolution (LR) images is still challenging due to insufficient visual information. Therefore, recently many scene text image super-resolution (STISR) models have been proposed to generate super-resolution (SR) images for the LR ones, then STR is done on the SR images, which thus boosts recognition performance. Nevertheless, these methods have two major weaknesses. On the one hand, STISR approaches may generate imperfect or even erroneous SR images, which mislead the subsequent recognition of STR models. On the other hand, as the STISR and STR models are jointly optimized, to pursue high recognition accuracy, the fidelity of SR images may be spoiled. As a result, neither the recognition performance nor the fidelity of STISR models are desirable. Then, can we achieve both high recognition performance and good fidelity? To this end, in this paper we propose a novel method called IMAGE (the abbreviation of Iterative MutuAl GuidancE) to effectively recognize and recover LR scene text images simultaneously. Concretely, IMAGE consists of a specialized STR model for recognition and a tailored STISR model to recover LR images, which are optimized separately. And we develop an iterative mutual guidance mechanism, with which the STR model provides high-level semantic information as clue to the STISR model for better super-resolution, meanwhile the STISR model offers essential low-level pixel clue to the STR model for more accurate recognition. Extensive experiments on two LR datasets demonstrate the superiority of our method over the existing works on both recognition performance and super-resolution fidelity.<|reference_end|>
|
arxiv
|
@article{zhao2024one,
title={One Model for Two Tasks: Cooperatively Recognizing and Recovering
Low-Resolution Scene Text Images by Iterative Mutual Guidance},
author={Minyi Zhao, Yang Wang, Jihong Guan, Shuigeng Zhou},
journal={arXiv preprint arXiv:2409.14483},
year={2024},
archivePrefix={arXiv},
eprint={2409.14483},
primaryClass={cs.CV}
}
|
zhao2024one
|
arxiv-660476
|
2409.14484
|
Effectively Enhancing Vision Language Large Models by Prompt Augmentation and Caption Utilization
|
<|reference_start|>Effectively Enhancing Vision Language Large Models by Prompt Augmentation and Caption Utilization: Recent studies have shown that Vision Language Large Models (VLLMs) may output content not relevant to the input images. This problem, called the hallucination phenomenon, undoubtedly degrades VLLM performance. Therefore, various anti-hallucination techniques have been proposed to make model output more reasonable and accurate. Despite their successes, from extensive tests we found that augmenting the prompt (e.g. word appending, rewriting, and spelling errors, etc.) may change model output and make the output hallucinate again. To cure this drawback, we propose a new instruct-tuning framework called Prompt Augmentation and Caption Utilization (PACU) to boost VLLM's generation ability under the augmented prompt scenario. Concretely, on the one hand, PACU exploits existing LLMs to augment and evaluate diverse prompts automatically. The resulting high-quality prompts are utilized to enhance VLLM's ability to process different prompts. On the other hand, PACU exploits image captions to jointly work with image features as well as the prompts for response generation. When the visual feature is inaccurate, LLM can capture useful information from the image captions for response generation. Extensive experiments on hallucination evaluation and prompt-augmented datasets demonstrate that our PACU method can work well with existing schemes to effectively boost VLLM model performance. Code is available in https://github.com/zhaominyiz/PACU.<|reference_end|>
|
arxiv
|
@article{zhao2024effectively,
title={Effectively Enhancing Vision Language Large Models by Prompt
Augmentation and Caption Utilization},
author={Minyi Zhao, Jie Wang, Zhaoyang Li, Jiyuan Zhang, Zhenbang Sun,
Shuigeng Zhou},
journal={arXiv preprint arXiv:2409.14484},
year={2024},
archivePrefix={arXiv},
eprint={2409.14484},
primaryClass={cs.CV}
}
|
zhao2024effectively
|
arxiv-660477
|
2409.14485
|
Video-XL: Extra-Long Vision Language Model for Hour-Scale Video Understanding
|
<|reference_start|>Video-XL: Extra-Long Vision Language Model for Hour-Scale Video Understanding: Although current Multi-modal Large Language Models (MLLMs) demonstrate promising results in video understanding, processing extremely long videos remains an ongoing challenge. Typically, MLLMs struggle with handling thousands of tokens that exceed the maximum context length of LLMs, and they experience reduced visual clarity due to token aggregation. Another challenge is the high computational cost stemming from the large number of video tokens. To tackle these issues, we propose Video-XL, an extra-long vision language model designed for efficient hour-scale video understanding. Specifically, we argue that LLMs can be adapted as effective visual condensers and introduce Visual Context Latent Summarization, which condenses visual contexts into highly compact forms. Extensive experiments demonstrate that our model achieves promising results on popular long video understanding benchmarks, despite being trained on limited image data. Moreover, Video-XL strikes a promising balance between efficiency and effectiveness, processing 1024 frames on a single 80GB GPU while achieving nearly 100\% accuracy in the Needle-in-a-Haystack evaluation. We envision Video-XL becoming a valuable tool for long video applications such as video summarization, surveillance anomaly detection, and Ad placement identification.<|reference_end|>
|
arxiv
|
@article{shu2024video-xl:,
title={Video-XL: Extra-Long Vision Language Model for Hour-Scale Video
Understanding},
author={Yan Shu, Peitian Zhang, Zheng Liu, Minghao Qin, Junjie Zhou, Tiejun
Huang, Bo Zhao},
journal={arXiv preprint arXiv:2409.14485},
year={2024},
archivePrefix={arXiv},
eprint={2409.14485},
primaryClass={cs.CV}
}
|
shu2024video-xl:
|
arxiv-660478
|
2409.14486
|
Unsupervised Word Discovery: Boundary Detection with Clustering vs. Dynamic Programming
|
<|reference_start|>Unsupervised Word Discovery: Boundary Detection with Clustering vs. Dynamic Programming: We look at the long-standing problem of segmenting unlabeled speech into word-like segments and clustering these into a lexicon. Several previous methods use a scoring model coupled with dynamic programming to find an optimal segmentation. Here we propose a much simpler strategy: we predict word boundaries using the dissimilarity between adjacent self-supervised features, then we cluster the predicted segments to construct a lexicon. For a fair comparison, we update the older ES-KMeans dynamic programming method with better features and boundary constraints. On the five-language ZeroSpeech benchmarks, our simple approach gives similar state-of-the-art results compared to the new ES-KMeans+ method, while being almost five times faster.<|reference_end|>
|
arxiv
|
@article{malan2024unsupervised,
title={Unsupervised Word Discovery: Boundary Detection with Clustering vs.
Dynamic Programming},
author={Simon Malan, Benjamin van Niekerk, Herman Kamper},
journal={arXiv preprint arXiv:2409.14486},
year={2024},
archivePrefix={arXiv},
eprint={2409.14486},
primaryClass={eess.AS cs.CL cs.SD}
}
|
malan2024unsupervised
|
arxiv-660479
|
2409.14488
|
Enhancing LLM-based Autonomous Driving Agents to Mitigate Perception Attacks
|
<|reference_start|>Enhancing LLM-based Autonomous Driving Agents to Mitigate Perception Attacks: There is a growing interest in integrating Large Language Models (LLMs) with autonomous driving (AD) systems. However, AD systems are vulnerable to attacks against their object detection and tracking (ODT) functions. Unfortunately, our evaluation of four recent LLM agents against ODT attacks shows that the attacks are 63.26% successful in causing them to crash or violate traffic rules due to (1) misleading memory modules that provide past experiences for decision making, (2) limitations of prompts in identifying inconsistencies, and (3) reliance on ground truth perception data. In this paper, we introduce Hudson, a driving reasoning agent that extends prior LLM-based driving systems to enable safer decision making during perception attacks while maintaining effectiveness under benign conditions. Hudson achieves this by first instrumenting the AD software to collect real-time perception results and contextual information from the driving scene. This data is then formalized into a domain-specific language (DSL). To guide the LLM in detecting and making safe control decisions during ODT attacks, Hudson translates the DSL into natural language, along with a list of custom attack detection instructions. Following query execution, Hudson analyzes the LLM's control decision to understand its causal reasoning process. We evaluate the effectiveness of Hudson using a proprietary LLM (GPT-4) and two open-source LLMs (Llama and Gemma) in various adversarial driving scenarios. GPT-4, Llama, and Gemma achieve, on average, an attack detection accuracy of 83.3%, 63.6%, and 73.6%. Consequently, they make safe control decisions in 86.4%, 73.9%, and 80% of the attacks. Our results, following the growing interest in integrating LLMs into AD systems, highlight the strengths of LLMs and their potential to detect and mitigate ODT attacks.<|reference_end|>
|
arxiv
|
@article{song2024enhancing,
title={Enhancing LLM-based Autonomous Driving Agents to Mitigate Perception
Attacks},
author={Ruoyu Song, Muslum Ozgur Ozmen, Hyungsub Kim, Antonio Bianchi, Z.
Berkay Celik},
journal={arXiv preprint arXiv:2409.14488},
year={2024},
archivePrefix={arXiv},
eprint={2409.14488},
primaryClass={cs.CR cs.AI}
}
|
song2024enhancing
|
arxiv-660480
|
2409.14489
|
A New Twist on Low-Complexity Digital Backpropagation
|
<|reference_start|>A New Twist on Low-Complexity Digital Backpropagation: This work proposes a novel low-complexity digital backpropagation (DBP) method, with the goal of optimizing the trade-off between backpropagation accuracy and complexity. The method combines a split step Fourier method (SSFM)-like structure with a simplified logarithmic perturbation method to obtain a high accuracy with a small number of DBP steps. Subband processing and asymmetric steps with optimized splitting ratio are also employed to further reduce the number of steps. The first part of the manuscript is dedicated to the derivation of a simplified logarithmic-perturbation model for the propagation of a dual-polarization multiband signal in a fiber, which serves as a theoretical background for the development of the proposed coupled-band enhanced SSFM (CB-ESSFM). Next, the manuscript presents a digital signal processing algorithm for the implementation of DBP based on a discrete-time version of the model and an overlap-and-save processing strategy. A detailed analysis of the computational complexity of the algorithm is also presented. Finally, the performance and complexity of the proposed DBP method are investigated through numerical simulations. In a wavelength division multiplexing system over a 15 x 80km single mode fiber link, the proposed CB-ESSFM achieves a gain of about 1 dB over simple dispersion compensation with only 15 steps (corresponding to about 680 real multiplications per 2D symbol), with an improvement of 0.9 dB w.r.t. conventional SSFM and almost 0.4 dB w.r.t. our previously proposed ESSFM. Significant gains are obtained also at lower complexity. For instance, the gain reduces to a still significant value of 0.34 dB when a single DBP step is employed, requiring just 75 real multiplications per 2D symbol. A similar analysis is performed also for longer links, confirming the good performance of the proposed method w.r.t. the others.<|reference_end|>
|
arxiv
|
@article{civelli2024a,
title={A New Twist on Low-Complexity Digital Backpropagation},
author={Stella Civelli, Debi Pada Jana, Enrico Forestieri, Marco Secondini},
journal={arXiv preprint arXiv:2409.14489},
year={2024},
archivePrefix={arXiv},
eprint={2409.14489},
primaryClass={cs.IT eess.SP math.IT}
}
|
civelli2024a
|
arxiv-660481
|
2409.14491
|
Work Smarter Not Harder: Simple Imitation Learning with CS-PIBT Outperforms Large Scale Imitation Learning for MAPF
|
<|reference_start|>Work Smarter Not Harder: Simple Imitation Learning with CS-PIBT Outperforms Large Scale Imitation Learning for MAPF: Multi-Agent Path Finding (MAPF) is the problem of effectively finding efficient collision-free paths for a group of agents in a shared workspace. The MAPF community has largely focused on developing high-performance heuristic search methods. Recently, several works have applied various machine learning (ML) techniques to solve MAPF, usually involving sophisticated architectures, reinforcement learning techniques, and set-ups, but none using large amounts of high-quality supervised data. Our initial objective in this work was to show how simple large scale imitation learning of high-quality heuristic search methods can lead to state-of-the-art ML MAPF performance. However, we find that, at least with our model architecture, simple large scale (700k examples with hundreds of agents per example) imitation learning does \textit{not} produce impressive results. Instead, we find that by using prior work that post-processes MAPF model predictions to resolve 1-step collisions (CS-PIBT), we can train a simple ML MAPF model in minutes that dramatically outperforms existing ML MAPF policies. This has serious implications for all future ML MAPF policies (with local communication) which currently struggle to scale. In particular, this finding implies that future learnt policies should (1) always use smart 1-step collision shields (e.g. CS-PIBT), (2) always include the collision shield with greedy actions as a baseline (e.g. PIBT) and (3) motivates future models to focus on longer horizon / more complex planning as 1-step collisions can be efficiently resolved.<|reference_end|>
|
arxiv
|
@article{veerapaneni2024work,
title={Work Smarter Not Harder: Simple Imitation Learning with CS-PIBT
Outperforms Large Scale Imitation Learning for MAPF},
author={Rishi Veerapaneni, Arthur Jakobsson, Kevin Ren, Samuel Kim, Jiaoyang
Li, Maxim Likhachev},
journal={arXiv preprint arXiv:2409.14491},
year={2024},
archivePrefix={arXiv},
eprint={2409.14491},
primaryClass={cs.MA cs.RO}
}
|
veerapaneni2024work
|
arxiv-660482
|
2409.14494
|
CPT-Boosted Wav2vec2.0: Towards Noise Robust Speech Recognition for Classroom Environments
|
<|reference_start|>CPT-Boosted Wav2vec2.0: Towards Noise Robust Speech Recognition for Classroom Environments: Creating Automatic Speech Recognition (ASR) systems that are robust and resilient to classroom conditions is paramount to the development of AI tools to aid teachers and students. In this work, we study the efficacy of continued pretraining (CPT) in adapting Wav2vec2.0 to the classroom domain. We show that CPT is a powerful tool in that regard and reduces the Word Error Rate (WER) of Wav2vec2.0-based models by upwards of 10%. More specifically, CPT improves the model's robustness to different noises, microphones and classroom conditions.<|reference_end|>
|
arxiv
|
@article{attia2024cpt-boosted,
title={CPT-Boosted Wav2vec2.0: Towards Noise Robust Speech Recognition for
Classroom Environments},
author={Ahmed Adel Attia, Dorottya Demszky, Tolulope Ogunremi, Jing Liu, Carol
Espy-Wilson},
journal={arXiv preprint arXiv:2409.14494},
year={2024},
archivePrefix={arXiv},
eprint={2409.14494},
primaryClass={cs.CL cs.LG cs.SD eess.AS}
}
|
attia2024cpt-boosted
|
arxiv-660483
|
2409.14495
|
Thought-Path Contrastive Learning via Premise-Oriented Data Augmentation for Logical Reading Comprehension
|
<|reference_start|>Thought-Path Contrastive Learning via Premise-Oriented Data Augmentation for Logical Reading Comprehension: Logical reading comprehension is a challenging task that entails grasping the underlying semantics of text and applying reasoning to deduce the correct answer. Prior research has primarily focused on enhancing logical reasoning capabilities through Chain-of-Thought (CoT) or data augmentation. However, previous work constructing chain-of-thought rationales concentrates solely on analyzing correct options, neglecting the incorrect alternatives. Additionally, earlier efforts on data augmentation by altering contexts rely on rule-based methods, which result in generated contexts that lack diversity and coherence. To address these issues, we propose a Premise-Oriented Data Augmentation (PODA) framework. This framework can generate CoT rationales including analyses for both correct and incorrect options, while constructing diverse and high-quality counterfactual contexts from incorrect candidate options. We integrate summarizing premises and identifying premises for each option into rationales. Subsequently, we employ multi-step prompts with identified premises to construct counterfactual context. To facilitate the model's capabilities to better differentiate the reasoning process associated with each option, we introduce a novel thought-path contrastive learning method that compares reasoning paths between the original and counterfactual samples. Experimental results on three representative LLMs demonstrate that our method can improve the baselines substantially across two challenging logical reasoning benchmarks (ReClor and LogiQA 2.0). The data and code are released at https://github.com/lalalamdbf/TPReasoner.<|reference_end|>
|
arxiv
|
@article{wang2024thought-path,
title={Thought-Path Contrastive Learning via Premise-Oriented Data Augmentation
for Logical Reading Comprehension},
author={Chenxu Wang, Ping Jian, Zhen Yang},
journal={arXiv preprint arXiv:2409.14495},
year={2024},
archivePrefix={arXiv},
eprint={2409.14495},
primaryClass={cs.CL cs.AI}
}
|
wang2024thought-path
|
arxiv-660484
|
2409.14496
|
On a measure of intelligence
|
<|reference_start|>On a measure of intelligence: The Fall 2024 Logic in Computer Science column of the Bulletin of EATCS is a little discussion on intelligence, measuring intelligence, and related issues, provoked by a fascinating must-read article ``On the measure of intelligence'' by Fran\c{c}ois Chollet. The discussion includes a modicum of critique of the article.<|reference_end|>
|
arxiv
|
@article{gurevich2024on,
title={On a measure of intelligence},
author={Yuri Gurevich},
journal={arXiv preprint arXiv:2409.14496},
year={2024},
archivePrefix={arXiv},
eprint={2409.14496},
primaryClass={cs.AI}
}
|
gurevich2024on
|
arxiv-660485
|
2409.14499
|
A Review of Scalable and Privacy-Preserving Multi-Agent Frameworks for Distributed Energy Resource Control
|
<|reference_start|>A Review of Scalable and Privacy-Preserving Multi-Agent Frameworks for Distributed Energy Resource Control: Distributed energy resources (DERs) are gaining prominence due to their advantages in improving energy efficiency, reducing carbon emissions, and enhancing grid resilience. Despite the increasing deployment, the potential of DERs has yet to be fully explored and exploited. A fundamental question restrains the management of numerous DERs in large-scale power systems, "How should DER data be securely processed and DER operations be efficiently optimized?" To address this question, this paper considers two critical issues, namely privacy for processing DER data and scalability in optimizing DER operations, then surveys existing and emerging solutions from a multi-agent framework perspective. In the context of scalability, this paper reviews state-of-the-art research that relies on parallel control, optimization, and learning within distributed and/or decentralized information exchange structures, while in the context of privacy, it identifies privacy preservation measures that can be synthesized into the aforementioned scalable structures. Despite research advances in these areas, challenges remain because these highly interdisciplinary studies blend a wide variety of scalable computing architectures and privacy preservation techniques from different fields, making them difficult to adapt in practice. To mitigate this issue, this paper provides a holistic review of trending strategies that orchestrate privacy and scalability for large-scale power system operations from a multi-agent perspective, particularly for DER control problems. Furthermore, this review extrapolates new approaches for future scalable, privacy-aware, and cybersecure pathways to unlock the full potential of DERs through controlling, optimizing, and learning generic multi-agent-based cyber-physical systems.<|reference_end|>
|
arxiv
|
@article{huo2024a,
title={A Review of Scalable and Privacy-Preserving Multi-Agent Frameworks for
Distributed Energy Resource Control},
author={Xiang Huo, Hao Huang, Katherine R. Davis, H. Vincent Poor, Mingxi Liu},
journal={arXiv preprint arXiv:2409.14499},
year={2024},
archivePrefix={arXiv},
eprint={2409.14499},
primaryClass={eess.SY cs.SY math.OC}
}
|
huo2024a
|
arxiv-660486
|
2409.14500
|
TabGraphs: A Benchmark and Strong Baselines for Learning on Graphs with Tabular Node Features
|
<|reference_start|>TabGraphs: A Benchmark and Strong Baselines for Learning on Graphs with Tabular Node Features: Tabular machine learning is an important field for industry and science. In this field, table rows are usually treated as independent data samples, but additional information about relations between them is sometimes available and can be used to improve predictive performance. Such information can be naturally modeled with a graph, thus tabular machine learning may benefit from graph machine learning methods. However, graph machine learning models are typically evaluated on datasets with homogeneous node features, which have little in common with heterogeneous mixtures of numerical and categorical features present in tabular datasets. Thus, there is a critical difference between the data used in tabular and graph machine learning studies, which does not allow one to understand how successfully graph models can be transferred to tabular data. To bridge this gap, we propose a new benchmark of diverse graphs with heterogeneous tabular node features and realistic prediction tasks. We use this benchmark to evaluate a vast set of models, including simple methods previously overlooked in the literature. Our experiments show that graph neural networks (GNNs) can indeed often bring gains in predictive performance for tabular data, but standard tabular models also can be adapted to work with graph data by using simple feature preprocessing, which sometimes enables them to compete with and even outperform GNNs. Based on our empirical study, we provide insights for researchers and practitioners in both tabular and graph machine learning fields.<|reference_end|>
|
arxiv
|
@article{bazhenov2024tabgraphs:,
title={TabGraphs: A Benchmark and Strong Baselines for Learning on Graphs with
Tabular Node Features},
author={Gleb Bazhenov, Oleg Platonov, Liudmila Prokhorenkova},
journal={arXiv preprint arXiv:2409.14500},
year={2024},
archivePrefix={arXiv},
eprint={2409.14500},
primaryClass={cs.LG cs.AI}
}
|
bazhenov2024tabgraphs:
|
arxiv-660487
|
2409.14501
|
Rydberg Atomic Quantum Receivers for Classical Wireless Communication and Sensing
|
<|reference_start|>Rydberg Atomic Quantum Receivers for Classical Wireless Communication and Sensing: The Rydberg atomic quantum receiver (RAQR) is an emerging quantum precision sensing platform designed for receiving radio frequency (RF) signals. It relies on creation of Rydberg atoms from normal atoms by exciting one or more electrons to a very high energy level, which in turn makes the atom sensitive to RF signals. The RAQR realizes RF-to-optical conversion based on light-atom interaction relying on the so-called electromagnetically induced transparency (EIT) and Autler-Townes splitting (ATS), so that the desired RF signal can be read out optically. The large dipole moments of Rydberg atoms associated with rich choices of Rydberg states and various modulation schemes facilitate an ultra-high sensitivity ($\sim$ nV/cm/$\sqrt{\text{Hz}}$) and an ultra-broadband tunability (near direct-current to Terahertz). RAQRs also exhibit compelling scalability and lend themselves to the construction of innovative, compact receivers. Initial experimental studies have demonstrated their capabilities in classical wireless communications and sensing. To fully harness their potential in a wide variety of applications, we commence by outlining the underlying fundamentals of Rydberg atoms, followed by the principles, structures, and theories of RAQRs. Finally, we conceive Rydberg atomic quantum single-input single-output (RAQ-SISO) and multiple-input multiple-output (RAQ-MIMO) schemes for facilitating the integration of RAQRs with classical wireless systems, and conclude with a set of potent research directions.<|reference_end|>
|
arxiv
|
@article{gong2024rydberg,
title={Rydberg Atomic Quantum Receivers for Classical Wireless Communication
and Sensing},
author={Tierui Gong, Aveek Chandra, Chau Yuen, Yong Liang Guan, Rainer Dumke,
Chong Meng Samson See, M\'erouane Debbah, Lajos Hanzo},
journal={arXiv preprint arXiv:2409.14501},
year={2024},
archivePrefix={arXiv},
eprint={2409.14501},
primaryClass={eess.SP cs.IT math.IT quant-ph}
}
|
gong2024rydberg
|
arxiv-660488
|
2409.14506
|
InteLiPlan: Interactive Lightweight LLM-Based Planner for Domestic Robot Autonomy
|
<|reference_start|>InteLiPlan: Interactive Lightweight LLM-Based Planner for Domestic Robot Autonomy: We introduce a lightweight LLM-based framework designed to enhance the autonomy and robustness of domestic robots, targeting onboard embodied intelligence. By addressing challenges such as kinematic constraints and dynamic environments, our approach reduces reliance on large-scale data and incorporates a robot-agnostic pipeline. Our framework, InteLiPlan, ensures that the LLM model's decision-making capabilities are effectively aligned with robotic functions, enhancing operational robustness and adaptability, while our human-in-the-loop mechanism allows for real-time human intervention in the case where the system fails. We evaluate our method in both simulation and on the real Toyota HSR robot. The results show that our method achieves a 93% success rate in the 'fetch me' task completion with system failure recovery, outperforming the baseline method in a domestic environment. InteLiPlan achieves comparable performance to the state-of-the-art large-scale LLM-based robotics planner, while guaranteeing real-time onboard computing with embodied intelligence.<|reference_end|>
|
arxiv
|
@article{ly2024inteliplan:,
title={InteLiPlan: Interactive Lightweight LLM-Based Planner for Domestic Robot
Autonomy},
author={Kim Tien Ly, Kai Lu, Ioannis Havoutis},
journal={arXiv preprint arXiv:2409.14506},
year={2024},
archivePrefix={arXiv},
eprint={2409.14506},
primaryClass={cs.RO}
}
|
ly2024inteliplan:
|
arxiv-660489
|
2409.14507
|
A is for Absorption: Studying Feature Splitting and Absorption in Sparse Autoencoders
|
<|reference_start|>A is for Absorption: Studying Feature Splitting and Absorption in Sparse Autoencoders: Sparse Autoencoders (SAEs) have emerged as a promising approach to decompose the activations of Large Language Models (LLMs) into human-interpretable latents. In this paper, we pose two questions. First, to what extent do SAEs extract monosemantic and interpretable latents? Second, to what extent does varying the sparsity or the size of the SAE affect monosemanticity / interpretability? By investigating these questions in the context of a simple first-letter identification task where we have complete access to ground truth labels for all tokens in the vocabulary, we are able to provide more detail than prior investigations. Critically, we identify a problematic form of feature-splitting we call feature absorption where seemingly monosemantic latents fail to fire in cases where they clearly should. Our investigation suggests that varying SAE size or sparsity is insufficient to solve this issue, and that there are deeper conceptual issues in need of resolution.<|reference_end|>
|
arxiv
|
@article{chanin2024a,
title={A is for Absorption: Studying Feature Splitting and Absorption in Sparse
Autoencoders},
author={David Chanin and James Wilken-Smith and Tom\'a\v{s} Dulka and Hardik
Bhatnagar and Joseph Bloom},
journal={arXiv preprint arXiv:2409.14507},
year={2024},
archivePrefix={arXiv},
eprint={2409.14507},
primaryClass={cs.CL cs.AI}
}
|
chanin2024a
|
arxiv-660490
|
2409.14509
|
Can AI writing be salvaged? Mitigating Idiosyncrasies and Improving Human-AI Alignment in the Writing Process through Edits
|
<|reference_start|>Can AI writing be salvaged? Mitigating Idiosyncrasies and Improving Human-AI Alignment in the Writing Process through Edits: LLM-based applications are helping people write, and LLM-generated text is making its way into social media, journalism, and our classrooms. However, the differences between LLM-generated and human-written text remain unclear. To explore this, we hired professional writers to edit paragraphs in several creative domains. We first found that these writers agree on undesirable idiosyncrasies in LLM-generated text, which we formalize into a seven-category taxonomy (e.g., cliches, unnecessary exposition). Second, we curated the LAMP corpus: 1,057 LLM-generated paragraphs edited by professional writers according to our taxonomy. Analysis of LAMP reveals that none of the LLMs used in our study (GPT-4o, Claude-3.5-Sonnet, Llama-3.1-70b) consistently outperforms the others in writing quality, revealing common limitations across model families. Third, we explored automatic editing methods to improve LLM-generated text. A large-scale preference annotation confirms that although experts largely prefer text edited by other experts, automatic editing methods show promise in improving alignment between LLM-generated and human-written text.<|reference_end|>
|
arxiv
|
@article{chakrabarty2024can,
title={Can AI writing be salvaged? Mitigating Idiosyncrasies and Improving
Human-AI Alignment in the Writing Process through Edits},
author={Tuhin Chakrabarty and Philippe Laban and Chien-Sheng Wu},
journal={arXiv preprint arXiv:2409.14509},
year={2024},
archivePrefix={arXiv},
eprint={2409.14509},
primaryClass={cs.CL cs.CY cs.HC}
}
|
chakrabarty2024can
|
arxiv-660491
|
2409.14511
|
Evaluation of Task Specific Productivity Improvements Using a Generative Artificial Intelligence Personal Assistant Tool
|
<|reference_start|>Evaluation of Task Specific Productivity Improvements Using a Generative Artificial Intelligence Personal Assistant Tool: This study evaluates the productivity improvements achieved using a generative artificial intelligence personal assistant tool (PAT) developed by Trane Technologies. The PAT, based on OpenAI's GPT 3.5 model, was deployed on Microsoft Azure to ensure secure access and protection of intellectual property. To assess the tool's productivity effectiveness, an experiment was conducted comparing the completion times and content quality of four common office tasks: writing an email, summarizing an article, creating instructions for a simple task, and preparing a presentation outline. Sixty-three (63) participants were randomly divided into a test group using the PAT and a control group performing the tasks manually. Results indicated significant productivity enhancements, particularly for tasks involving summarization and instruction creation, with improvements ranging from 3.3% to 69%. The study further analyzed factors such as the age of users, response word counts, and quality of responses, revealing that the PAT users generated more verbose and higher-quality content. An 'LLM-as-a-judge' method employing GPT-4 was used to grade the quality of responses, which effectively distinguished between high and low-quality outputs. The findings underscore the potential of PATs in enhancing workplace productivity and highlight areas for further research and optimization.<|reference_end|>
|
arxiv
|
@article{freeman2024evaluation,
title={Evaluation of Task Specific Productivity Improvements Using a Generative
Artificial Intelligence Personal Assistant Tool},
author={Brian S. Freeman and Kendall Arriola and Dan Cottell and Emmett Lawlor
and Matt Erdman and Trevor Sutherland and Brian Wells},
journal={arXiv preprint arXiv:2409.14511},
year={2024},
archivePrefix={arXiv},
eprint={2409.14511},
primaryClass={cs.HC}
}
|
freeman2024evaluation
|
arxiv-660492
|
2409.14513
|
Order of Magnitude Speedups for LLM Membership Inference
|
<|reference_start|>Order of Magnitude Speedups for LLM Membership Inference: Large Language Models (LLMs) have the promise to revolutionize computing broadly, but their complexity and extensive training data also expose significant privacy vulnerabilities. One of the simplest privacy risks associated with LLMs is their susceptibility to membership inference attacks (MIAs), wherein an adversary aims to determine whether a specific data point was part of the model's training set. Although this is a known risk, state-of-the-art methodologies for MIAs rely on training multiple computationally costly shadow models, making risk evaluation prohibitive for large models. Here we adapt a recent line of work which uses quantile regression to mount membership inference attacks; we extend this work by proposing a low-cost MIA that leverages an ensemble of small quantile regression models to determine if a document belongs to the model's training set or not. We demonstrate the effectiveness of this approach on fine-tuned LLMs of varying families (OPT, Pythia, Llama) and across multiple datasets. Across all scenarios we obtain comparable or improved accuracy compared to state-of-the-art shadow model approaches, with as little as 6% of their computation budget. We demonstrate increased effectiveness on multi-epoch trained target models and robustness to architecture mis-specification; that is, we can mount an effective attack against a model using a different tokenizer and architecture, without requiring knowledge of the target model.<|reference_end|>
|
arxiv
|
@article{zhang2024order,
title={Order of Magnitude Speedups for LLM Membership Inference},
author={Rongting Zhang and Martin Bertran and Aaron Roth},
journal={arXiv preprint arXiv:2409.14513},
year={2024},
archivePrefix={arXiv},
eprint={2409.14513},
primaryClass={cs.LG cs.CR stat.ML}
}
|
zhang2024order
|
arxiv-660493
|
2409.14515
|
SPAQ-DL-SLAM: Towards Optimizing Deep Learning-based SLAM for Resource-Constrained Embedded Platforms
|
<|reference_start|>SPAQ-DL-SLAM: Towards Optimizing Deep Learning-based SLAM for Resource-Constrained Embedded Platforms: Optimizing Deep Learning-based Simultaneous Localization and Mapping (DL-SLAM) algorithms is essential for efficient implementation on resource-constrained embedded platforms, enabling real-time on-board computation in autonomous mobile robots. This paper presents SPAQ-DL-SLAM, a framework that strategically applies Structured Pruning and Quantization (SPAQ) to the architecture of one of the state-of-the-art DL-SLAM algorithms, DROID-SLAM, for resource and energy efficiency. Specifically, we perform structured pruning with fine-tuning based on layer-wise sensitivity analysis, followed by 8-bit post-training static quantization (PTQ) on the deep learning modules within DROID-SLAM. Our SPAQ-DROID-SLAM model, the optimized version of DROID-SLAM produced by our SPAQ-DL-SLAM framework with 20% structured pruning and 8-bit PTQ, achieves an 18.9% reduction in FLOPs and a 79.8% reduction in overall model size compared to the DROID-SLAM model. Our evaluations on the TUM-RGBD benchmark show that the SPAQ-DROID-SLAM model surpasses the DROID-SLAM model by an average of 10.5% on the absolute trajectory error (ATE) metric. Additionally, our results on the ETH3D SLAM training benchmark demonstrate enhanced generalization capabilities of the SPAQ-DROID-SLAM model, seen in a higher Area Under the Curve (AUC) score and success in 2 additional data sequences compared to the DROID-SLAM model. Despite these improvements, the model exhibits performance variance on the distinct Vicon Room sequences from the EuRoC dataset, which are captured at high angular velocities. This varying performance in some distinct scenarios suggests that designing DL-SLAM algorithms taking operating environments and tasks into consideration can achieve optimal performance and resource efficiency for deployment on resource-constrained embedded platforms.<|reference_end|>
|
arxiv
|
@article{pudasaini2024spaq-dl-slam:,
title={SPAQ-DL-SLAM: Towards Optimizing Deep Learning-based SLAM for
Resource-Constrained Embedded Platforms},
author={Niraj Pudasaini and Muhammad Abdullah Hanif and Muhammad Shafique},
journal={arXiv preprint arXiv:2409.14515},
year={2024},
archivePrefix={arXiv},
eprint={2409.14515},
primaryClass={cs.RO cs.CV cs.LG}
}
|
pudasaini2024spaq-dl-slam:
|
arxiv-660494
|
2409.14516
|
Beyond Words: Evaluating Large Language Models in Transportation Planning
|
<|reference_start|>Beyond Words: Evaluating Large Language Models in Transportation Planning: The resurgence and rapid advancement of Generative Artificial Intelligence (GenAI) in 2023 has catalyzed transformative shifts across numerous industry sectors, including urban transportation and logistics. This study investigates the evaluation of Large Language Models (LLMs), specifically GPT-4 and Phi-3-mini, to enhance transportation planning. The study assesses the performance and spatial comprehension of these models through a transportation-informed evaluation framework that includes general geospatial skills, general transportation domain skills, and real-world transportation problem-solving. Utilizing a mixed-methods approach, the research encompasses an evaluation of the LLMs' general Geographic Information System (GIS) skills and general transportation domain knowledge, as well as their ability to support human decision-making in real-world transportation planning scenarios such as congestion pricing. Results indicate that GPT-4 demonstrates superior accuracy and reliability across various GIS and transportation-specific tasks compared to Phi-3-mini, highlighting its potential as a robust tool for transportation planners. Nonetheless, Phi-3-mini exhibits competence in specific analytical scenarios, suggesting its utility in resource-constrained environments. The findings underscore the transformative potential of GenAI technologies in urban transportation planning. Future work could explore the application of newer LLMs and the impact of Retrieval-Augmented Generation (RAG) techniques on a broader set of real-world transportation planning and operations challenges, to deepen the integration of advanced AI models in transportation management practices.<|reference_end|>
|
arxiv
|
@article{ying2024beyond,
title={Beyond Words: Evaluating Large Language Models in Transportation
Planning},
author={Shaowei Ying and Zhenlong Li and Manzhu Yu},
journal={arXiv preprint arXiv:2409.14516},
year={2024},
archivePrefix={arXiv},
eprint={2409.14516},
primaryClass={cs.AI cs.CL cs.IR}
}
|
ying2024beyond
|
arxiv-660495
|
2409.14517
|
Sliding Window Training -- Utilizing Historical Recommender Systems Data for Foundation Models
|
<|reference_start|>Sliding Window Training -- Utilizing Historical Recommender Systems Data for Foundation Models: Long-lived recommender systems (RecSys) often encounter lengthy user-item interaction histories that span many years. To effectively learn long-term user preferences, large RecSys foundation models (FMs) need to encode this information in pretraining. Usually, this is done either by generating a sequence length long enough to take all history sequences as input, at the cost of a large model input dimension, or by dropping some parts of the user history to accommodate model size and latency requirements on the production serving side. In this paper, we introduce a sliding window training technique to incorporate long user history sequences during training time without increasing the model input dimension. We show the quantitative and qualitative improvements this technique brings to the RecSys FM in learning long-term user preferences. We additionally show that the average quality of items in the catalog learnt in pretraining also improves.<|reference_end|>
|
arxiv
|
@article{joshi2024sliding,
title={Sliding Window Training -- Utilizing Historical Recommender Systems Data
for Foundation Models},
author={Swanand Joshi and Yesu Feng and Ko-Jen Hsiao and Zhe Zhang and
Sudarshan Lamkhede},
journal={arXiv preprint arXiv:2409.14517},
year={2024},
doi={10.1145/3640457.3688051},
archivePrefix={arXiv},
eprint={2409.14517},
primaryClass={cs.IR cs.LG}
}
|
joshi2024sliding
|
arxiv-660496
|
2409.14518
|
RPKI: Not Perfect But Good Enough
|
<|reference_start|>RPKI: Not Perfect But Good Enough: The Resource Public Key Infrastructure (RPKI) protocol was standardized to add cryptographic security to Internet routing. With over 50% of Internet resources protected with RPKI today, the protocol already impacts significant parts of Internet traffic. In addition to its growing adoption, there is also increasing political interest in RPKI. The White House indicated in its Roadmap to Enhance Internet Routing Security, on 4 September 2024, that RPKI is a mature and readily available technology for securing inter-domain routing. The Roadmap attributes the main obstacles towards wide adoption of RPKI to a lack of understanding, lack of prioritization, and administrative barriers. This work presents the first comprehensive study of the maturity of RPKI as a viable production-grade technology. We find that current RPKI implementations still lack production-grade resilience and are plagued by software vulnerabilities, inconsistent specifications, and operational challenges, raising significant security concerns. The deployments lack experience with full-fledged strict RPKI-validation in production environments and operate in fail-open test mode. We provide recommendations to improve RPKI resilience and guide stakeholders in securing their deployments against emerging threats. The numerous issues we have discovered with the current RPKI specifications and implementations inevitably lead to the question: Is RPKI sufficiently stable to align with the expectations outlined in the White House roadmap? Certainly, it is not perfect, but is it good enough? The answer, as we will explore, varies depending on one's viewpoint.<|reference_end|>
|
arxiv
|
@article{schulmann2024rpki:,
title={RPKI: Not Perfect But Good Enough},
author={Haya Schulmann and Niklas Vogel and Michael Waidner},
journal={arXiv preprint arXiv:2409.14518},
year={2024},
archivePrefix={arXiv},
eprint={2409.14518},
primaryClass={cs.CR}
}
|
schulmann2024rpki:
|
arxiv-660497
|
2409.14519
|
RobotFingerPrint: Unified Gripper Coordinate Space for Multi-Gripper Grasp Synthesis
|
<|reference_start|>RobotFingerPrint: Unified Gripper Coordinate Space for Multi-Gripper Grasp Synthesis: We introduce a novel representation named as the unified gripper coordinate space for grasp synthesis of multiple grippers. The space is a 2D surface of a sphere in 3D using longitude and latitude as its coordinates, and it is shared for all robotic grippers. We propose a new algorithm to map the palm surface of a gripper into the unified gripper coordinate space, and design a conditional variational autoencoder to predict the unified gripper coordinates given an input object. The predicted unified gripper coordinates establish correspondences between the gripper and the object, which can be used in an optimization problem to solve the grasp pose and the finger joints for grasp synthesis. We demonstrate that using the unified gripper coordinate space improves the success rate and diversity in the grasp synthesis of multiple grippers.<|reference_end|>
|
arxiv
|
@article{khargonkar2024robotfingerprint:,
title={RobotFingerPrint: Unified Gripper Coordinate Space for Multi-Gripper
Grasp Synthesis},
author={Ninad Khargonkar and Luis Felipe Casas and Balakrishnan Prabhakaran
and Yu Xiang},
journal={arXiv preprint arXiv:2409.14519},
year={2024},
archivePrefix={arXiv},
eprint={2409.14519},
primaryClass={cs.RO cs.CV cs.LG}
}
|
khargonkar2024robotfingerprint:
|
arxiv-660498
|
2409.14521
|
UAV-Enabled Data Collection for IoT Networks via Rainbow Learning
|
<|reference_start|>UAV-Enabled Data Collection for IoT Networks via Rainbow Learning: Unmanned aerial vehicle (UAV)-assisted Internet of Things (IoT) systems have become an important part of future wireless communications. To achieve a higher communication rate, the joint design of UAV trajectory and resource allocation is crucial. This letter considers a scenario where a multi-antenna UAV is dispatched to simultaneously collect data from multiple ground IoT nodes (GNs) within a time interval. To improve the sum data collection (SDC) volume, i.e., the total data volume transmitted by the GNs, the UAV trajectory, the UAV receive beamforming, the scheduling of the GNs, and the transmit power of the GNs are jointly optimized. Since the problem is non-convex and the optimization variables are highly coupled, it is hard to solve using traditional optimization methods. To find a near-optimal solution, a double-loop structured optimization-driven deep reinforcement learning (DRL) algorithm and a fully DRL-based algorithm are proposed to solve the problem effectively. Simulation results verify that the proposed algorithms outperform two benchmarks with significant improvement in SDC volumes.<|reference_end|>
|
arxiv
|
@article{jiao2024uav-enabled,
title={UAV-Enabled Data Collection for IoT Networks via Rainbow Learning},
author={Yingchao Jiao and Xuhui Zhang and Wenchao Liu and Yinyu Wu and Jinke
Ren and Yanyan Shen and Bo Yang and Xinping Guan},
journal={arXiv preprint arXiv:2409.14521},
year={2024},
archivePrefix={arXiv},
eprint={2409.14521},
primaryClass={eess.SP cs.IT math.IT}
}
|
jiao2024uav-enabled
|
arxiv-660499
|
2409.14522
|
Modeling Pedestrian Crossing Behavior: A Reinforcement Learning Approach with Sensory Motor Constraints
|
<|reference_start|>Modeling Pedestrian Crossing Behavior: A Reinforcement Learning Approach with Sensory Motor Constraints: Understanding pedestrian behavior is crucial for the safe deployment of Autonomous Vehicles (AVs) in urban environments. Traditional pedestrian behavior models often fall into two categories: mechanistic models, which do not generalize well to complex environments, and machine-learned models, which generally overlook the sensory-motor constraints influencing human behavior and are thus prone to fail in untrained scenarios. We hypothesize that sensory-motor constraints, fundamental to how humans perceive and interact with their surroundings, are essential for realistic simulations. Thus, we introduce a constrained reinforcement learning (RL) model that simulates the crossing decision and locomotion of pedestrians. It was constrained to emulate human sensory mechanisms with noisy visual perception and looming aversion. Additionally, human motor constraints were incorporated through a bio-mechanical model of walking. We gathered data from a human-in-the-loop experiment to understand pedestrian behavior. The findings reveal several phenomena not addressed by existing pedestrian models, regarding how pedestrians adapt their walking speed to the kinematics and behavior of the approaching vehicle. Our model successfully captures these human-like walking speed patterns, enabling us to understand these patterns as a trade-off between time pressure and walking effort. Importantly, the model retains the ability to reproduce various phenomena previously captured by a simpler version of the model. Additionally, phenomena related to external human-machine interfaces and light conditions were also included. Overall, our results not only demonstrate the potential of constrained RL in modeling pedestrian behaviors but also highlight the importance of sensory-motor mechanisms in modeling pedestrian-vehicle interactions.<|reference_end|>
|
arxiv
|
@article{wang2024modeling,
title={Modeling Pedestrian Crossing Behavior: A Reinforcement Learning Approach
with Sensory Motor Constraints},
author={Yueyang Wang and Aravinda Ramakrishnan Srinivasan and Yee Mun Lee and
Gustav Markkula},
journal={arXiv preprint arXiv:2409.14522},
year={2024},
archivePrefix={arXiv},
eprint={2409.14522},
primaryClass={cs.HC}
}
|
wang2024modeling
|
arxiv-660500
|
2409.14524
|
tabulapdf: An R Package to Extract Tables from PDF Documents
|
<|reference_start|>tabulapdf: An R Package to Extract Tables from PDF Documents: tabulapdf is an R package that utilizes the Tabula Java library to import tables from PDF files directly into R. This tool can reduce time and effort in data extraction processes in fields like investigative journalism. It allows for automatic and manual table extraction, the latter facilitated through a Shiny interface, enabling manual area selection with a computer mouse for data retrieval.<|reference_end|>
|
arxiv
|
@article{sepúlveda2024tabulapdf:,
title={tabulapdf: An R Package to Extract Tables from PDF Documents},
author={Mauricio Vargas Sep\'ulveda and Thomas J. Leeper and Tom Paskhalis and
Manuel Aristar\'an and Jeremy B. Merrill and Mike Tigas},
journal={arXiv preprint arXiv:2409.14524},
year={2024},
archivePrefix={arXiv},
eprint={2409.14524},
primaryClass={cs.IR cs.DL}
}
|
sepúlveda2024tabulapdf: