Dataset schema (column name, followed by the viewer's observed value-length statistics):

corpus_id      stringlengths 7-12
paper_id       stringlengths 9-16
title          stringlengths 1-261
abstract       stringlengths 70-4.02k
source         stringclasses (1 value)
bibtex         stringlengths 208-20.9k
citation_key   stringlengths 6-100
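As a minimal sketch of how a record of this corpus can be checked against the schema above: the field names come from the schema, the numeric bounds read "4.02k" and "20.9k" as 4020 and 20900, and the sample record reuses the first row of the dump with the long `abstract`/`bibtex` values replaced by placeholder strings of plausible length. The `validate` helper itself is illustrative, not part of the dataset.

```python
# Observed length bounds per string column (schema header above).
LENGTH_RANGES = {
    "corpus_id": (7, 12),
    "paper_id": (9, 16),
    "title": (1, 261),
    "abstract": (70, 4020),     # "4.02k" read as 4020
    "bibtex": (208, 20900),     # "20.9k" read as 20900
    "citation_key": (6, 100),
}

def validate(record):
    """Return a list of schema violations; an empty list means the record conforms."""
    problems = []
    for field, (lo, hi) in LENGTH_RANGES.items():
        n = len(record.get(field, ""))
        if not lo <= n <= hi:
            problems.append(f"{field}: length {n} outside [{lo}, {hi}]")
    if record.get("source") != "arxiv":  # 'source' is a single-valued stringclass
        problems.append("source: expected 'arxiv'")
    return problems

# First record of the dump; abstract/bibtex shortened to placeholders.
record = {
    "corpus_id": "arxiv-661601",
    "paper_id": "2409.16538",
    "title": "Source-Free Domain Adaptation for YOLO Object Detection",
    "abstract": "x" * 300,
    "source": "arxiv",
    "bibtex": "@article{varailhon2024source-free, ...}" + "x" * 300,
    "citation_key": "varailhon2024source-free",
}
print(validate(record))  # [] -- the record conforms
```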
arxiv-661601
2409.16538
Source-Free Domain Adaptation for YOLO Object Detection
<|reference_start|>Source-Free Domain Adaptation for YOLO Object Detection: Source-free domain adaptation (SFDA) is a challenging problem in object detection, where a pre-trained source model is adapted to a new target domain without using any source domain data for privacy and efficiency reasons. Most state-of-the-art SFDA methods for object detection have been proposed for Faster-RCNN, a detector that is known to have high computational complexity. This paper focuses on domain adaptation techniques for real-world vision systems, particularly for the YOLO family of single-shot detectors known for their fast baselines and practical applications. Our proposed SFDA method - Source-Free YOLO (SF-YOLO) - relies on a teacher-student framework in which the student receives images with a learned, target domain-specific augmentation, allowing the model to be trained with only unlabeled target data and without requiring feature alignment. A challenge with self-training using a mean-teacher architecture in the absence of labels is the rapid decline of accuracy due to noisy or drifting pseudo-labels. To address this issue, a teacher-to-student communication mechanism is introduced to help stabilize the training and reduce the reliance on annotated target data for model selection. Despite its simplicity, our approach is competitive with state-of-the-art detectors on several challenging benchmark datasets, even sometimes outperforming methods that use source data for adaptation.<|reference_end|>
arxiv
@article{varailhon2024source-free, title={Source-Free Domain Adaptation for YOLO Object Detection}, author={Simon Varailhon and Masih Aminbeidokhti and Marco Pedersoli and Eric Granger}, journal={arXiv preprint arXiv:2409.16538}, year={2024}, archivePrefix={arXiv}, eprint={2409.16538}, primaryClass={cs.CV cs.AI cs.LG} }
varailhon2024source-free
arxiv-661602
2409.16539
Context-aware and Style-related Incremental Decoding framework for Discourse-Level Literary Translation
<|reference_start|>Context-aware and Style-related Incremental Decoding framework for Discourse-Level Literary Translation: This report outlines our approach for the WMT24 Discourse-Level Literary Translation Task, focusing on the Chinese-English language pair in the Constrained Track. Translating literary texts poses significant challenges due to the nuanced meanings, idiomatic expressions, and intricate narrative structures inherent in such works. To address these challenges, we leveraged the Chinese-Llama2 model, specifically enhanced for this task through a combination of Continual Pre-training (CPT) and Supervised Fine-Tuning (SFT). Our methodology includes a novel Incremental Decoding framework, which ensures that each sentence is translated with consideration of its broader context, maintaining coherence and consistency throughout the text. This approach allows the model to capture long-range dependencies and stylistic elements, producing translations that faithfully preserve the original literary quality. Our experiments demonstrate significant improvements in both sentence-level and document-level BLEU scores, underscoring the effectiveness of our proposed framework in addressing the complexities of document-level literary translation.<|reference_end|>
arxiv
@article{luo2024context-aware, title={Context-aware and Style-related Incremental Decoding framework for Discourse-Level Literary Translation}, author={Yuanchang Luo and Jiaxin Guo and Daimeng Wei and Hengchao Shang and Zongyao Li and Zhanglin Wu and Zhiqiang Rao and Shaojun Li and Jinlong Yang and Hao Yang}, journal={arXiv preprint arXiv:2409.16539}, year={2024}, archivePrefix={arXiv}, eprint={2409.16539}, primaryClass={cs.AI} }
luo2024context-aware
arxiv-661603
2409.16541
Monge-Kantorovich Fitting With Sobolev Budgets
<|reference_start|>Monge-Kantorovich Fitting With Sobolev Budgets: We consider the problem of finding the ``best'' approximation of an $n$-dimensional probability measure $\rho$ using a measure $\nu$ whose support is parametrized by $f : \mathbb{R}^m \to \mathbb{R}^n$ where $m < n$. We quantify the performance of the approximation with the Monge-Kantorovich $p$-cost (also called the Wasserstein $p$-cost) $\mathbb{W}_p^p(\rho, \nu)$, and constrain the complexity of the approximation by bounding the $W^{k,q}$ Sobolev norm of $f$, which acts as a ``budget.'' We may then reformulate the problem as minimizing a functional $\mathscr{J}_p(f)$ under a constraint on the Sobolev budget. We treat general $k \geq 1$ for the Sobolev differentiability order (though $q, m$ are chosen to restrict $W^{k,q}$ to the supercritical regime $k q > m$ to guarantee existence of optimizers). The problem is closely related to (but distinct from) principal curves with length constraints when $m=1, k = 1$ and smoothing splines when $k > 1$. New aspects and challenges arise from the higher order differentiability condition. We study the gradient of $\mathscr{J}_p$, which is given by a vector field along $f$ we call the barycenter field. We use it to construct improvements to a given $f$, which gives a nontrivial (almost) strict monotonicity relation between the functional $\mathscr{J}_p$ and the Sobolev budget. We also provide a natural discretization scheme and establish its consistency. We use this scheme to model a generative learning task; in particular, we demonstrate that adding a constraint like ours as a soft penalty yields substantial improvement in training a GAN to produce images of handwritten digits, with performance competitive with weight-decay.<|reference_end|>
arxiv
@article{kobayashi2024monge-kantorovich, title={Monge-Kantorovich Fitting With Sobolev Budgets}, author={Forest Kobayashi and Jonathan Hayase and Young-Heon Kim}, journal={arXiv preprint arXiv:2409.16541}, year={2024}, number={PIMS-20240923-PRN01}, archivePrefix={arXiv}, eprint={2409.16541}, primaryClass={cs.LG math.AP} }
kobayashi2024monge-kantorovich
arxiv-661604
2409.16544
First Past the Post: Evaluating Query Optimization in MongoDB
<|reference_start|>First Past the Post: Evaluating Query Optimization in MongoDB: Query optimization is crucial for every database management system (DBMS) to enable fast execution of declarative queries. Most DBMS designs include cost-based query optimization. However, MongoDB implements a different approach to choose an execution plan that we call "first past the post" (FPTP) query optimization. FPTP does not estimate costs for each execution plan, but rather partially executes the alternative plans in a round-robin race and observes the work done by each relative to the number of records returned. In this paper, we analyze the effectiveness of MongoDB's FPTP query optimizer. We see whether the optimizer chooses the best execution plan among the alternatives and measure how the chosen plan compares to the optimal plan. We also show how to visualize the effectiveness and identify situations where the MongoDB 7.0.1 query optimizer chooses suboptimal query plans. Through experiments, we conclude that FPTP has a preference bias, choosing index scans even in many cases where collection scans would run faster. We identify the reasons for the preference bias, which can lead MongoDB to choose a plan with more than twice the runtime compared to the optimal plan for the query.<|reference_end|>
arxiv
@article{tao2024first, title={First Past the Post: Evaluating Query Optimization in MongoDB}, author={Dawei Tao and Enqi Liu and Sidath Randeni Kadupitige and Michael Cahill and Alan Fekete and Uwe R{\"o}hm}, journal={arXiv preprint arXiv:2409.16544}, year={2024}, archivePrefix={arXiv}, eprint={2409.16544}, primaryClass={cs.DB} }
tao2024first
arxiv-661605
2409.16546
AlignedKV: Reducing Memory Access of KV-Cache with Precision-Aligned Quantization
<|reference_start|>AlignedKV: Reducing Memory Access of KV-Cache with Precision-Aligned Quantization: Model quantization has become a crucial technique to address the issues of large memory consumption and long inference times associated with LLMs. Mixed-precision quantization, which distinguishes between important and unimportant parameters, stands out among numerous quantization schemes as it achieves a balance between precision and compression rate. However, existing approaches can only identify important parameters through qualitative analysis and manual experiments without quantitatively analyzing how their importance is determined. We propose a new criterion, so-called 'precision alignment', to build a quantitative framework to holistically evaluate the importance of parameters in mixed-precision quantization. Our observations on floating point addition under various real-world scenarios suggest that two addends should have identical precision, otherwise the information in the higher-precision number will be wasted. Such an observation offers an essential principle to determine the precision of each parameter in matrix multiplication operation. As the first step towards applying the above discovery to large model inference, we develop a dynamic KV-Cache quantization technique to effectively reduce memory access latency. Different from existing quantization approaches that focus on memory saving, this work directly aims to accelerate LLM inference through quantizing floating-point numbers. The proposed technique attains a 25% saving of memory access and delivers up to 1.3x speedup in the computation of attention in the decoding phase of LLM, with almost no loss of precision.<|reference_end|>
arxiv
@article{tan2024alignedkv:, title={AlignedKV: Reducing Memory Access of KV-Cache with Precision-Aligned Quantization}, author={Yifan Tan and Haoze Wang and Chao Yan and Yangdong Deng}, journal={arXiv preprint arXiv:2409.16546}, year={2024}, archivePrefix={arXiv}, eprint={2409.16546}, primaryClass={cs.LG} }
tan2024alignedkv:
arxiv-661606
2409.16551
fOGA: Orthogonal Greedy Algorithm for Fractional Laplace Equations
<|reference_start|>fOGA: Orthogonal Greedy Algorithm for Fractional Laplace Equations: In this paper, we explore the finite difference approximation of the fractional Laplace operator in conjunction with a neural network method for solving it. We discretized the fractional Laplace operator using the Riemann-Liouville formula relevant to fractional equations. A shallow neural network was constructed to address the discrete fractional operator, coupled with the OGA algorithm. To validate the feasibility of our approach, we conducted numerical experiments, testing both the Laplace operator and the fractional Laplace operator, yielding favorable convergence results.<|reference_end|>
arxiv
@article{shan2024foga:, title={fOGA: Orthogonal Greedy Algorithm for Fractional Laplace Equations}, author={Ruitong Shan, Young Ju Lee, Jiwei Jia}, journal={arXiv preprint arXiv:2409.16551}, year={2024}, archivePrefix={arXiv}, eprint={2409.16551}, primaryClass={math.NA cs.NA} }
shan2024foga:
arxiv-661607
2409.16552
Device for detection of activity-dependent changes in neural spheroids at MHz and GHz frequencies
<|reference_start|>Device for detection of activity-dependent changes in neural spheroids at MHz and GHz frequencies: Intracellular processes triggered by neural activity include changes in ionic concentrations, protein release, and synaptic vesicle cycling. These processes play significant roles in neurological disorders. The beneficial effects of brain stimulation may also be mediated through intracellular changes. There is a lack of label-free techniques for monitoring activity-dependent intracellular changes. Electromagnetic (EM) waves at frequencies larger than 1x10^6 Hz (1 MHz) were previously used to probe intracellular contents of cells, as the cell membrane becomes transparent in this frequency range. EM waves interact with membranes of intracellular organelles, proteins, and water in the MHz-GHz range. In this work, we developed a device for probing the interaction between intracellular contents of active neurons and EM waves. The device used an array of grounded coplanar waveguides (GCPWs) to deliver EM waves to a three-dimensional (3D) spheroid of rat cortical neurons. Neural activity was evoked using optogenetics, with synchronous detection of propagation of EM waves. Broadband measurements were conducted in the MHz-GHz range to track changes in transmission coefficients. Neuronal activity was found to reversibly alter EM wave transmission. Pharmacological suppression of neuronal activity abolished changes in transmission. Time constants of changes in transmission were in the range of seconds to tens of seconds, suggesting the presence of relatively slow, activity-dependent intracellular processes. This study provides the first evidence that EM transmission through neuronal tissue is activity-dependent in the MHz-GHz range. The device developed in this work may find future applications in studies of the mechanisms of neurological disorders and the development of new therapies.<|reference_end|>
arxiv
@article{omidi2024device, title={Device for detection of activity-dependent changes in neural spheroids at MHz and GHz frequencies}, author={Saeed Omidi and Gianluca Fabi and Xiaopeng Wang and James C. M. Hwang and Yevgeny Berdichevsky}, journal={arXiv preprint arXiv:2409.16552}, year={2024}, archivePrefix={arXiv}, eprint={2409.16552}, primaryClass={q-bio.NC cs.SY eess.SY} }
omidi2024device
arxiv-661608
2409.16554
EMIT- Event-Based Masked Auto Encoding for Irregular Time Series
<|reference_start|>EMIT- Event-Based Masked Auto Encoding for Irregular Time Series: Irregular time series, where data points are recorded at uneven intervals, are prevalent in healthcare settings, such as emergency wards where vital signs and laboratory results are captured at varying times. This variability, which reflects critical fluctuations in patient health, is essential for informed clinical decision-making. Existing self-supervised learning research on irregular time series often relies on generic pretext tasks like forecasting, which may not fully utilise the signal provided by irregular time series. There is a significant need for specialised pretext tasks designed for the characteristics of irregular time series to enhance model performance and robustness, especially in scenarios with limited data availability. This paper proposes a novel pretraining framework, EMIT, an event-based masking for irregular time series. EMIT focuses on masking-based reconstruction in the latent space, selecting masking points based on the rate of change in the data. This method preserves the natural variability and timing of measurements while enhancing the model's ability to process irregular intervals without losing essential information. Extensive experiments on the MIMIC-III and PhysioNet Challenge datasets demonstrate the superior performance of our event-based masking strategy. The code has been released at https://github.com/hrishi-ds/EMIT .<|reference_end|>
arxiv
@article{patel2024emit-, title={EMIT- Event-Based Masked Auto Encoding for Irregular Time Series}, author={Hrishikesh Patel and Ruihong Qiu and Adam Irwin and Shazia Sadiq and Sen Wang}, journal={arXiv preprint arXiv:2409.16554}, year={2024}, archivePrefix={arXiv}, eprint={2409.16554}, primaryClass={cs.LG} }
patel2024emit-
arxiv-661609
2409.16558
Bias Reduction in Social Networks through Agent-Based Simulations
<|reference_start|>Bias Reduction in Social Networks through Agent-Based Simulations: Online social networks use recommender systems to suggest relevant information to their users in the form of personalized timelines. Studying how these systems expose people to information at scale is difficult to do as one cannot assume each user is subject to the same timeline condition and building appropriate evaluation infrastructure is costly. We show that a simple agent-based model where users have fixed preferences affords us the ability to compare different recommender systems (and thus different personalized timelines) in their ability to skew users' perception of their network. Importantly, we show that a simple greedy algorithm that constructs a feed based on network properties reduces such perception biases comparable to a random feed. This underscores the influence network structure has in determining the effectiveness of recommender systems in the social network context and offers a tool for mitigating perception biases through algorithmic feed construction.<|reference_end|>
arxiv
@article{bartley2024bias, title={Bias Reduction in Social Networks through Agent-Based Simulations}, author={Nathan Bartley and Keith Burghardt and Kristina Lerman}, journal={arXiv preprint arXiv:2409.16558}, year={2024}, archivePrefix={arXiv}, eprint={2409.16558}, primaryClass={cs.SI cs.CY} }
bartley2024bias
arxiv-661610
2409.16559
Demystifying Issues, Causes and Solutions in LLM Open-Source Projects
<|reference_start|>Demystifying Issues, Causes and Solutions in LLM Open-Source Projects: With the advancements of Large Language Models (LLMs), an increasing number of open-source software projects are using LLMs as their core functional component. Although research and practice on LLMs are capturing considerable interest, no dedicated studies explored the challenges faced by practitioners of LLM open-source projects, the causes of these challenges, and potential solutions. To fill this research gap, we conducted an empirical study to understand the issues that practitioners encounter when developing and using LLM open-source software, the possible causes of these issues, and potential solutions. We collected all closed issues from 15 LLM open-source projects and labelled issues that met our requirements. We then randomly selected 994 issues from the labelled issues as the sample for data extraction and analysis to understand the prevalent issues, their underlying causes, and potential solutions. Our study results show that (1) Model Issue is the most common issue faced by practitioners, (2) Model Problem, Configuration and Connection Problem, and Feature and Method Problem are identified as the most frequent causes of the issues, and (3) Optimize Model is the predominant solution to the issues. Based on the study results, we provide implications for practitioners and researchers of LLM open-source projects.<|reference_end|>
arxiv
@article{cai2024demystifying, title={Demystifying Issues, Causes and Solutions in LLM Open-Source Projects}, author={Yangxiao Cai and Peng Liang and Yifei Wang and Zengyang Li and Mojtaba Shahin}, journal={arXiv preprint arXiv:2409.16559}, year={2024}, archivePrefix={arXiv}, eprint={2409.16559}, primaryClass={cs.SE cs.AI} }
cai2024demystifying
arxiv-661611
2409.16560
Dynamic-Width Speculative Beam Decoding for Efficient LLM Inference
<|reference_start|>Dynamic-Width Speculative Beam Decoding for Efficient LLM Inference: Large language models (LLMs) have shown outstanding performance across numerous real-world tasks. However, the autoregressive nature of these models makes the inference process slow and costly. Speculative decoding has emerged as a promising solution, leveraging a smaller auxiliary model to draft future tokens, which are then validated simultaneously by the larger model, achieving a speed-up of 1-2x. Although speculative decoding matches the same distribution as multinomial sampling, multinomial sampling itself is prone to suboptimal outputs, whereas beam sampling is widely recognized for producing higher-quality results by maintaining multiple candidate sequences at each step. This paper explores the novel integration of speculative decoding with beam sampling. However, there are four key challenges: (1) how to generate multiple sequences from the larger model's distribution given draft sequences from the small model; (2) how to dynamically optimize the number of beams to balance efficiency and accuracy; (3) how to efficiently verify the multiple drafts in parallel; and (4) how to address the extra memory costs inherent in beam sampling. To address these challenges, we propose dynamic-width speculative beam decoding (DSBD). Specifically, we first introduce a novel draft and verification scheme that generates multiple sequences following the large model's distribution based on beam sampling trajectories from the small model. Then, we introduce an adaptive mechanism to dynamically tune the number of beams based on the context, optimizing efficiency and effectiveness. Besides, we extend tree-based parallel verification to handle multiple trees simultaneously, accelerating the verification process. Finally, we illustrate a simple modification to our algorithm to mitigate the memory overhead of beam sampling...<|reference_end|>
arxiv
@article{qin2024dynamic-width, title={Dynamic-Width Speculative Beam Decoding for Efficient LLM Inference}, author={Zongyue Qin and Zifan He and Neha Prakriya and Jason Cong and Yizhou Sun}, journal={arXiv preprint arXiv:2409.16560}, year={2024}, archivePrefix={arXiv}, eprint={2409.16560}, primaryClass={cs.AI} }
qin2024dynamic-width
arxiv-661612
2409.16561
Supporting Co-Adaptive Machine Teaching through Human Concept Learning and Cognitive Theories
<|reference_start|>Supporting Co-Adaptive Machine Teaching through Human Concept Learning and Cognitive Theories: An important challenge in interactive machine learning, particularly in subjective or ambiguous domains, is fostering bi-directional alignment between humans and models. Users teach models their concept definition through data labeling, while refining their own understandings throughout the process. To facilitate this, we introduce MOCHA, an interactive machine learning tool informed by two theories of human concept learning and cognition. First, it utilizes a neuro-symbolic pipeline to support Variation Theory-based counterfactual data generation. By asking users to annotate counterexamples that are syntactically and semantically similar to already-annotated data but predicted to have different labels, the system can learn more effectively while helping users understand the model and reflect on their own label definitions. Second, MOCHA uses Structural Alignment Theory to present groups of counterexamples, helping users comprehend alignable differences between data items and annotate them in batch. We validated MOCHA's effectiveness and usability through a lab study with 18 participants.<|reference_end|>
arxiv
@article{gebreegziabher2024supporting, title={Supporting Co-Adaptive Machine Teaching through Human Concept Learning and Cognitive Theories}, author={Simret Araya Gebreegziabher and Yukun Yang and Elena L. Glassman and Toby Jia-Jun Li}, journal={arXiv preprint arXiv:2409.16561}, year={2024}, archivePrefix={arXiv}, eprint={2409.16561}, primaryClass={cs.HC} }
gebreegziabher2024supporting
arxiv-661613
2409.16563
Enhancing disease detection in radiology reports through fine-tuning lightweight LLM on weak labels
<|reference_start|>Enhancing disease detection in radiology reports through fine-tuning lightweight LLM on weak labels: Despite significant progress in applying large language models (LLMs) to the medical domain, several limitations still prevent them from practical applications. Among these are the constraints on model size and the lack of cohort-specific labeled datasets. In this work, we investigated the potential of improving a lightweight LLM, such as Llama 3.1-8B, through fine-tuning with datasets using synthetic labels. Two tasks are jointly trained by combining their respective instruction datasets. When the quality of the task-specific synthetic labels is relatively high (e.g., generated by GPT-4o), Llama 3.1-8B achieves satisfactory performance on the open-ended disease detection task, with a micro F1 score of 0.91. Conversely, when the quality of the task-relevant synthetic labels is relatively low (e.g., from the MIMIC-CXR dataset), fine-tuned Llama 3.1-8B is able to surpass its noisy teacher labels (micro F1 score of 0.67 vs. 0.63) when calibrated against curated labels, indicating the strong inherent capability of the model. These findings demonstrate the potential of fine-tuning LLMs with synthetic labels, offering a promising direction for future research on LLM specialization in the medical domain.<|reference_end|>
arxiv
@article{wei2024enhancing, title={Enhancing disease detection in radiology reports through fine-tuning lightweight LLM on weak labels}, author={Yishu Wei and Xindi Wang and Hanley Ong and Yiliang Zhou and Adam Flanders and George Shih and Yifan Peng}, journal={arXiv preprint arXiv:2409.16563}, year={2024}, archivePrefix={arXiv}, eprint={2409.16563}, primaryClass={cs.AI} }
wei2024enhancing
arxiv-661614
2409.16565
A multi-scale probabilistic methodology to predict high-cycle fatigue lifetime for alloys with process-induced pores
<|reference_start|>A multi-scale probabilistic methodology to predict high-cycle fatigue lifetime for alloys with process-induced pores: A multi-scale methodology is developed in conjunction with a probabilistic fatigue lifetime model for structures with pores whose exact distribution, i.e. geometries and locations, is unknown. The method takes into account uncertainty in fatigue lifetimes in structures due to defects at two scales: micro-scale heterogeneity & meso-scale pores. An element-wise probabilistic strain-life model, with its criterion modified for multiaxial loading, is developed to account for the effect of micro-scale defects on the lifetime. Meso-scale pores in the structure are taken into account via statistical modelling of the expected pore populations via a finite element method, based on tomographic scans of a small region of porous material used to make the structure. A previously implemented Neuber-type plastic correction algorithm is used for fast full-field approximation of the strain-life criterion around the statistically generated pore fields. The probability of failure of a porous structure is obtained via a weakest link assumption at the level of its constituent finite elements. The fatigue model can be identified via a maximum likelihood estimate on experimental fatigue data of structures containing different types of pore populations. The proposed method is tested on an existing high-cycle fatigue data-set of an aluminium alloy with two levels of porosity. The model requires less data for identification than traditional models that consider porous media as a homogeneous material, as the same base material is considered for the two grades of porous material. Numerical studies on synthetically generated data show that the method is capable of taking into account the statistical size effect in fatigue, and demonstrate that fatigue properties of subsurface porous material are lower than those of core porous material, which makes homogenisation of the method non-trivial.<|reference_end|>
arxiv
@article{palchoudhary2024a, title={A multi-scale probabilistic methodology to predict high-cycle fatigue lifetime for alloys with process-induced pores}, author={Abhishek Palchoudhary and Cristian Ovalle and Vincent Maurel and Pierre Kerfriden}, journal={arXiv preprint arXiv:2409.16565}, year={2024}, archivePrefix={arXiv}, eprint={2409.16565}, primaryClass={cs.CE cond-mat.mtrl-sci} }
palchoudhary2024a
arxiv-661615
2409.16566
PANOS: Payload-Aware Navigation in Offroad Scenarios
<|reference_start|>PANOS: Payload-Aware Navigation in Offroad Scenarios: Nature has evolved humans to walk on different terrains by developing a detailed understanding of their physical characteristics. Similarly, legged robots need to develop their capability to walk on complex terrains with a variety of task-dependent payloads to achieve their goals. However, conventional terrain adaptation methods are susceptible to failure with varying payloads. In this work, we introduce PANOS, a weakly supervised approach that integrates proprioception and exteroception from onboard sensing to achieve a stable gait for a legged robot walking over various terrains. Our work also provides evidence of its adaptability over varying payloads. We evaluate our method on multiple terrains and payloads using a legged robot. PANOS improves stability by up to 44% without any payload and 53% with a 15 lb payload. We also notice a reduction in the vibration cost of 20% with the payload for various terrain types when compared to state-of-the-art methods.<|reference_end|>
arxiv
@article{singh2024panos:, title={PANOS: Payload-Aware Navigation in Offroad Scenarios}, author={Kartikeya Singh and Yash Turkar and Christo Aluckal and Charuvarahan Adhivarahan and Karthik Dantu}, journal={arXiv preprint arXiv:2409.16566}, year={2024}, archivePrefix={arXiv}, eprint={2409.16566}, primaryClass={cs.RO} }
singh2024panos:
arxiv-661616
2409.16570
Disentangling Questions from Query Generation for Task-Adaptive Retrieval
<|reference_start|>Disentangling Questions from Query Generation for Task-Adaptive Retrieval: This paper studies the problem of information retrieval, to adapt to unseen tasks. Existing work generates synthetic queries from domain-specific documents to jointly train the retriever. However, the conventional query generator assumes the query as a question, thus failing to accommodate general search intents. A more lenient approach incorporates task-adaptive elements, such as few-shot learning with a 137B LLM. In this paper, we challenge the trend of equating query with question, and instead conceptualize the query generation task as a "compilation" of high-level intent into a task-adaptive query. Specifically, we propose EGG, a query generator that better adapts to wide search intents expressed in the BeIR benchmark. Our method outperforms baselines and existing models on four tasks with underexplored intents, while utilizing a query generator 47 times smaller than the previous state-of-the-art. Our findings reveal that instructing the LM with explicit search intent is a key aspect of modeling an effective query generator.<|reference_end|>
arxiv
@article{lee2024disentangling, title={Disentangling Questions from Query Generation for Task-Adaptive Retrieval}, author={Yoonsang Lee and Minsoo Kim and Seung-won Hwang}, journal={arXiv preprint arXiv:2409.16570}, year={2024}, archivePrefix={arXiv}, eprint={2409.16570}, primaryClass={cs.CL} }
lee2024disentangling
arxiv-661617
2409.16572
Efficient and generalizable nested Fourier-DeepONet for three-dimensional geological carbon sequestration
<|reference_start|>Efficient and generalizable nested Fourier-DeepONet for three-dimensional geological carbon sequestration: Geological carbon sequestration (GCS) involves injecting CO$_2$ into subsurface geological formations for permanent storage. Numerical simulations could guide decisions in GCS projects by predicting CO$_2$ migration pathways and the pressure distribution in storage formation. However, these simulations are often computationally expensive due to highly coupled physics and large spatial-temporal simulation domains. Surrogate modeling with data-driven machine learning has become a promising alternative to accelerate physics-based simulations. Among these, the Fourier neural operator (FNO) has been applied to three-dimensional synthetic subsurface models. Here, to further improve performance, we have developed a nested Fourier-DeepONet by combining the expressiveness of the FNO with the modularity of a deep operator network (DeepONet). This new framework is twice as efficient as a nested FNO for training and has at least 80% lower GPU memory requirement due to its flexibility to treat temporal coordinates separately. These performance improvements are achieved without compromising prediction accuracy. In addition, the generalization and extrapolation ability of nested Fourier-DeepONet beyond the training range has been thoroughly evaluated. Nested Fourier-DeepONet outperformed the nested FNO for extrapolation in time with more than 50% reduced error. It also exhibited good extrapolation accuracy beyond the training range in terms of reservoir properties, number of wells, and injection rate.<|reference_end|>
arxiv
@article{lee2024efficient, title={Efficient and generalizable nested Fourier-DeepONet for three-dimensional geological carbon sequestration}, author={Jonathan E. Lee, Min Zhu, Ziqiao Xi, Kun Wang, Yanhua O. Yuan, Lu Lu}, journal={arXiv preprint arXiv:2409.16572}, year={2024}, archivePrefix={arXiv}, eprint={2409.16572}, primaryClass={cs.LG physics.comp-ph} }
lee2024efficient
arxiv-661618
2409.16573
Task-driven SLAM Benchmarking
<|reference_start|>Task-driven SLAM Benchmarking: For assistive robots, one critical use case of SLAM is to support localization as they navigate through an environment completing tasks. Current SLAM benchmarks do not consider task-based deployments where repeatability (precision) is more critical than accuracy. To address this gap, we propose a task-driven benchmarking framework for evaluating SLAM methods. The framework accounts for SLAM's mapping capabilities, employs precision as a key metric, and has low resource requirements to implement. Testing of state-of-the-art SLAM methods in both simulated and real-world scenarios provides insights into the performance properties of modern SLAM solutions. In particular, it shows that passive stereo SLAM operates at a level of precision comparable to LiDAR-based SLAM in typical indoor environments. The benchmarking approach offers a more relevant and accurate assessment of SLAM performance in task-driven applications.<|reference_end|>
arxiv
@article{du2024task-driven, title={Task-driven SLAM Benchmarking}, author={Yanwei Du, Shiyu Feng, Carlton G. Cort, Patricio A. Vela}, journal={arXiv preprint arXiv:2409.16573}, year={2024}, archivePrefix={arXiv}, eprint={2409.16573}, primaryClass={cs.RO} }
du2024task-driven
arxiv-661619
2409.16576
FusionANNS: An Efficient CPU/GPU Cooperative Processing Architecture for Billion-scale Approximate Nearest Neighbor Search
<|reference_start|>FusionANNS: An Efficient CPU/GPU Cooperative Processing Architecture for Billion-scale Approximate Nearest Neighbor Search: Approximate nearest neighbor search (ANNS) has emerged as a crucial component of database and AI infrastructure. Ever-increasing vector datasets pose significant challenges in terms of performance, cost, and accuracy for ANNS services. None of modern ANNS systems can address these issues simultaneously. We present FusionANNS, a high-throughput, low-latency, cost-efficient, and high-accuracy ANNS system for billion-scale datasets using SSDs and only one entry-level GPU. The key idea of FusionANNS lies in CPU/GPU collaborative filtering and re-ranking mechanisms, which significantly reduce I/O operations across CPUs, GPU, and SSDs to break through the I/O performance bottleneck. Specifically, we propose three novel designs: (1) multi-tiered indexing to avoid data swapping between CPUs and GPU, (2) heuristic re-ranking to eliminate unnecessary I/Os and computations while guaranteeing high accuracy, and (3) redundant-aware I/O deduplication to further improve I/O efficiency. We implement FusionANNS and compare it with the state-of-the-art SSD-based ANNS system -- SPANN and GPU-accelerated in-memory ANNS system -- RUMMY. Experimental results show that FusionANNS achieves 1) 9.4-13.1X higher query per second (QPS) and 5.7-8.8X higher cost efficiency compared with SPANN; 2) and 2-4.9X higher QPS and 2.3-6.8X higher cost efficiency compared with RUMMY, while guaranteeing low latency and high accuracy.<|reference_end|>
arxiv
@article{tian2024fusionanns:, title={FusionANNS: An Efficient CPU/GPU Cooperative Processing Architecture for Billion-scale Approximate Nearest Neighbor Search}, author={Bing Tian, Haikun Liu, Yuhang Tang, Shihai Xiao, Zhuohui Duan, Xiaofei Liao, Xuecang Zhang, Junhua Zhu, Yu Zhang}, journal={arXiv preprint arXiv:2409.16576}, year={2024}, archivePrefix={arXiv}, eprint={2409.16576}, primaryClass={cs.IR cs.DB cs.OS} }
tian2024fusionanns:
arxiv-661620
2409.16577
Reactive Multi-Robot Navigation in Outdoor Environments Through Uncertainty-Aware Active Learning of Human Preference Landscape
<|reference_start|>Reactive Multi-Robot Navigation in Outdoor Environments Through Uncertainty-Aware Active Learning of Human Preference Landscape: Compared with single robots, Multi-Robot Systems (MRS) can perform missions more efficiently due to the presence of multiple members with diverse capabilities. However, deploying an MRS in wide real-world environments is still challenging due to uncertain and various obstacles (e.g., building clusters and trees). With a limited understanding of environmental uncertainty on performance, an MRS cannot flexibly adjust its behaviors (e.g., teaming, load sharing, trajectory planning) to ensure both environment adaptation and task accomplishments. In this work, a novel joint preference landscape learning and behavior adjusting framework (PLBA) is designed. PLBA efficiently integrates real-time human guidance to MRS coordination and utilizes Sparse Variational Gaussian Processes with Varying Output Noise to quickly assess human preferences by leveraging spatial correlations between environment characteristics. An optimization-based behavior-adjusting method then safely adapts MRS behaviors to environments. To validate PLBA's effectiveness in MRS behavior adaption, a flood disaster search and rescue task was designed. 20 human users provided 1764 feedback based on human preferences obtained from MRS behaviors related to "task quality", "task progress", "robot safety". The prediction accuracy and adaptation speed results show the effectiveness of PLBA in preference learning and MRS behavior adaption.<|reference_end|>
arxiv
@article{huang2024reactive, title={Reactive Multi-Robot Navigation in Outdoor Environments Through Uncertainty-Aware Active Learning of Human Preference Landscape}, author={Chao Huang, Wenshuo Zang, Carlo Pinciroli, Zhi Jane Li, Taposh Banerjee, Lili Su, Rui Liu}, journal={arXiv preprint arXiv:2409.16577}, year={2024}, archivePrefix={arXiv}, eprint={2409.16577}, primaryClass={cs.RO cs.AI} }
huang2024reactive
arxiv-661621
2409.16578
FLaRe: Achieving Masterful and Adaptive Robot Policies with Large-Scale Reinforcement Learning Fine-Tuning
<|reference_start|>FLaRe: Achieving Masterful and Adaptive Robot Policies with Large-Scale Reinforcement Learning Fine-Tuning: In recent years, the Robotics field has initiated several efforts toward building generalist robot policies through large-scale multi-task Behavior Cloning. However, direct deployments of these policies have led to unsatisfactory performance, where the policy struggles with unseen states and tasks. How can we break through the performance plateau of these models and elevate their capabilities to new heights? In this paper, we propose FLaRe, a large-scale Reinforcement Learning fine-tuning framework that integrates robust pre-trained representations, large-scale training, and gradient stabilization techniques. Our method aligns pre-trained policies towards task completion, achieving state-of-the-art (SoTA) performance both on previously demonstrated and on entirely novel tasks and embodiments. Specifically, on a set of long-horizon mobile manipulation tasks, FLaRe achieves an average success rate of 79.5% in unseen environments, with absolute improvements of +23.6% in simulation and +30.7% on real robots over prior SoTA methods. By utilizing only sparse rewards, our approach can enable generalizing to new capabilities beyond the pretraining data with minimal human effort. Moreover, we demonstrate rapid adaptation to new embodiments and behaviors with less than a day of fine-tuning. Videos can be found on the project website at https://robot-flare.github.io/<|reference_end|>
arxiv
@article{hu2024flare:, title={FLaRe: Achieving Masterful and Adaptive Robot Policies with Large-Scale Reinforcement Learning Fine-Tuning}, author={Jiaheng Hu, Rose Hendrix, Ali Farhadi, Aniruddha Kembhavi, Roberto Martin-Martin, Peter Stone, Kuo-Hao Zeng, Kiana Ehsani}, journal={arXiv preprint arXiv:2409.16578}, year={2024}, archivePrefix={arXiv}, eprint={2409.16578}, primaryClass={cs.RO cs.CV cs.LG} }
hu2024flare:
arxiv-661622
2409.16579
Friend- and Enemy-oriented Hedonic Games With Strangers Full Version
<|reference_start|>Friend- and Enemy-oriented Hedonic Games With Strangers Full Version: We introduce friend- and enemy-oriented hedonic games with strangers (FOHGS and EOHGS respectively), two classes of hedonic games wherein agents are classified as friends, enemies, or strangers under the assumption that strangers will become either friends or enemies ex post facto. For several notions of stability in FOHGS and EOHGS, we characterize the hardness of verification for possible and necessary stability. We characterize the hardness of deciding whether possibly and necessarily X stable partitions exist for a given stability notion X. We prove that necessarily internally stable partitions always exist and provide sufficient conditions for necessary contractual individual stability.<|reference_end|>
arxiv
@article{schlueter2024friend-, title={Friend- and Enemy-oriented Hedonic Games With Strangers Full Version}, author={TJ Schlueter, Makoto Yokoo}, journal={arXiv preprint arXiv:2409.16579}, year={2024}, archivePrefix={arXiv}, eprint={2409.16579}, primaryClass={cs.GT} }
schlueter2024friend-
arxiv-661623
2409.16581
SelectiveKD: A semi-supervised framework for cancer detection in DBT through Knowledge Distillation and Pseudo-labeling
<|reference_start|>SelectiveKD: A semi-supervised framework for cancer detection in DBT through Knowledge Distillation and Pseudo-labeling: When developing Computer Aided Detection (CAD) systems for Digital Breast Tomosynthesis (DBT), the complexity arising from the volumetric nature of the modality poses significant technical challenges for obtaining large-scale accurate annotations. Without access to large-scale annotations, the resulting model may not generalize to different domains. Given the costly nature of obtaining DBT annotations, how to effectively increase the amount of data used for training DBT CAD systems remains an open challenge. In this paper, we present SelectiveKD, a semi-supervised learning framework for building cancer detection models for DBT, which only requires a limited number of annotated slices to reach high performance. We achieve this by utilizing unlabeled slices available in a DBT stack through a knowledge distillation framework in which the teacher model provides a supervisory signal to the student model for all slices in the DBT volume. Our framework mitigates the potential noise in the supervisory signal from a sub-optimal teacher by implementing a selective dataset expansion strategy using pseudo labels. We evaluate our approach with a large-scale real-world dataset of over 10,000 DBT exams collected from multiple device manufacturers and locations. The resulting SelectiveKD process effectively utilizes unannotated slices from a DBT stack, leading to significantly improved cancer classification performance (AUC) and generalization performance.<|reference_end|>
arxiv
@article{dillard2024selectivekd:, title={SelectiveKD: A semi-supervised framework for cancer detection in DBT through Knowledge Distillation and Pseudo-labeling}, author={Laurent Dillard, Hyeonsoo Lee, Weonsuk Lee, Tae Soo Kim, Ali Diba, Thijs Kooi}, journal={arXiv preprint arXiv:2409.16581}, year={2024}, archivePrefix={arXiv}, eprint={2409.16581}, primaryClass={cs.CV} }
dillard2024selectivekd:
arxiv-661624
2409.16583
$\mathcal{L}_1$ Adaptive Optimizer for Uncertain Time-Varying Convex Optimization
<|reference_start|>$\mathcalL_1$ Adaptive Optimizer for Uncertain Time-Varying Convex Optimization: We propose an adaptive method for uncertain time-varying (TV) convex optimization, termed as $\mathcal{L}_{1}$ adaptive optimization ($\mathcal{L}_{1}$-AO). The proposed method uses a baseline TV optimizer with a prediction model, designed for the gradient dynamics to exploit the underlying structure of the temporal correlation. Inspired by $\mathcal{L}_{1}$ adaptive control, the proposed method augments an adaptive update law to estimate and compensate for the uncertainty from the inaccurate prediction in the online implementation. The proposed method provides the performance bounds of the error in the optimization variables and cost function, allowing efficient and reliable optimization for uncertain TV problems.<|reference_end|>
arxiv
@article{kim2024l1, title={$\mathcal{L}_{1}$ Adaptive Optimizer for Uncertain Time-Varying Convex Optimization}, author={Jinrae Kim, Naira Hovakimyan}, journal={arXiv preprint arXiv:2409.16583}, year={2024}, archivePrefix={arXiv}, eprint={2409.16583}, primaryClass={math.OC cs.SY eess.SY} }
kim2024l1
arxiv-661625
2409.16585
Is speckle noise more challenging to mitigate than additive noise?
<|reference_start|>Is speckle noise more challenging to mitigate than additive noise?: We study the problem of estimating a function in the presence of both speckle and additive noises. Although additive noise has been thoroughly explored in nonparametric estimation, speckle noise, prevalent in applications such as synthetic aperture radar, ultrasound imaging, and digital holography, has not received as much attention. Consequently, there is a lack of theoretical investigations into the fundamental limits of mitigating the speckle noise. This paper is the first step in filling this gap. Our focus is on investigating the minimax estimation error for estimating a $\beta$-H\"older continuous function and determining the rate of the minimax risk. Specifically, if $n$ represents the number of data points, $f$ denotes the underlying function to be estimated, and $\hat{\nu}_n$ is an estimate of $f$, then $\inf_{\hat{\nu}_n} \sup_f \mathbb{E}_f\| \hat{\nu}_n - f \|^2_2$ decays at the rate $n^{-\frac{2\beta}{2\beta+1}}$. Interestingly, this rate is identical to the one achieved for mitigating additive noise when the noise's variance is $\Theta(1)$. To validate the accuracy of our minimax upper bounds, we implement the minimax optimal algorithms on simulated data and employ Monte Carlo simulations to characterize their exact risk. Our simulations closely mirror the expected behaviors in decay rate as per our theory.<|reference_end|>
arxiv
@article{malekian2024is, title={Is speckle noise more challenging to mitigate than additive noise?}, author={Reihaneh Malekian and Arian Maleki}, journal={arXiv preprint arXiv:2409.16585}, year={2024}, archivePrefix={arXiv}, eprint={2409.16585}, primaryClass={math.ST cs.IT eess.SP math.IT stat.TH} }
malekian2024is
arxiv-661626
2409.16586
AutoSTF: Decoupled Neural Architecture Search for Cost-Effective Automated Spatio-Temporal Forecasting
<|reference_start|>AutoSTF: Decoupled Neural Architecture Search for Cost-Effective Automated Spatio-Temporal Forecasting: Spatio-temporal forecasting is a critical component of various smart city applications, such as transportation optimization, energy management, and socio-economic analysis. Recently, several automated spatio-temporal forecasting methods have been proposed to automatically search the optimal neural network architecture for capturing complex spatio-temporal dependencies. However, the existing automated approaches suffer from expensive neural architecture search overhead, which hinders their practical use and the further exploration of diverse spatio-temporal operators in a finer granularity. In this paper, we propose AutoSTF, a decoupled automatic neural architecture search framework for cost-effective automated spatio-temporal forecasting. From the efficiency perspective, we first decouple the mixed search space into temporal space and spatial space and respectively devise representation compression and parameter-sharing schemes to mitigate the parameter explosion. The decoupled spatio-temporal search not only expedites the model optimization process but also leaves new room for more effective spatio-temporal dependency modeling. From the effectiveness perspective, we propose a multi-patch transfer module to jointly capture multi-granularity temporal dependencies and extend the spatial search space to enable finer-grained layer-wise spatial dependency search. Extensive experiments on eight datasets demonstrate the superiority of AutoSTF in terms of both accuracy and efficiency. Specifically, our proposed method achieves up to 13.48x speed-up compared to state-of-the-art automatic spatio-temporal forecasting methods while maintaining the best forecasting accuracy.<|reference_end|>
arxiv
@article{lyu2024autostf:, title={AutoSTF: Decoupled Neural Architecture Search for Cost-Effective Automated Spatio-Temporal Forecasting}, author={Tengfei Lyu, Weijia Zhang, Jinliang Deng, Hao Liu}, journal={arXiv preprint arXiv:2409.16586}, year={2024}, archivePrefix={arXiv}, eprint={2409.16586}, primaryClass={cs.LG cs.AI} }
lyu2024autostf:
arxiv-661627
2409.16590
Pre-trained Graphformer-based Ranking at Web-scale Search (Extended Abstract)
<|reference_start|>Pre-trained Graphformer-based Ranking at Web-scale Search (Extended Abstract): Both Transformer and Graph Neural Networks (GNNs) have been employed in the domain of learning to rank (LTR). However, these approaches adhere to two distinct yet complementary problem formulations: ranking score regression based on query-webpage pairs, and link prediction within query-webpage bipartite graphs, respectively. While it is possible to pre-train GNNs or Transformers on source datasets and subsequently fine-tune them on sparsely annotated LTR datasets, the distributional shifts between the pair-based and bipartite graph domains present significant challenges in integrating these heterogeneous models into a unified LTR framework at web scale. To address this, we introduce the novel MPGraf model, which leverages a modular and capsule-based pre-training strategy, aiming to cohesively integrate the regression capabilities of Transformers with the link prediction strengths of GNNs. We conduct extensive offline and online experiments to rigorously evaluate the performance of MPGraf.<|reference_end|>
arxiv
@article{li2024pre-trained, title={Pre-trained Graphformer-based Ranking at Web-scale Search (Extended Abstract)}, author={Yuchen Li, Haoyi Xiong, Linghe Kong, Zeyi Sun, Hongyang Chen, Shuaiqiang Wang, Dawei Yin}, journal={arXiv preprint arXiv:2409.16590}, year={2024}, archivePrefix={arXiv}, eprint={2409.16590}, primaryClass={cs.LG cs.IR} }
li2024pre-trained
arxiv-661628
2409.16592
MambaJSCC: Adaptive Deep Joint Source-Channel Coding with Generalized State Space Model
<|reference_start|>MambaJSCC: Adaptive Deep Joint Source-Channel Coding with Generalized State Space Model: Lightweight and efficient neural network models for deep joint source-channel coding (JSCC) are crucial for semantic communications. In this paper, we propose a novel JSCC architecture, named MambaJSCC, that achieves state-of-the-art performance with low computational and parameter overhead. MambaJSCC utilizes the visual state space model with channel adaptation (VSSM-CA) blocks as its backbone for transmitting images over wireless channels, where the VSSM-CA primarily consists of the generalized state space models (GSSM) and the zero-parameter, zero-computational channel adaptation method (CSI-ReST). We design the GSSM module, leveraging reversible matrix transformations to express generalized scan expanding operations, and theoretically prove that two GSSM modules can effectively capture global information. We discover that GSSM inherently possesses the ability to adapt to channels, a form of endogenous intelligence. Based on this, we design the CSI-ReST method, which injects channel state information (CSI) into the initial state of GSSM to utilize its native response, and into the residual state to mitigate CSI forgetting, enabling effective channel adaptation without introducing additional computational and parameter overhead. Experimental results show that MambaJSCC not only outperforms existing JSCC methods (e.g., SwinJSCC) across various scenarios but also significantly reduces parameter size, computational overhead, and inference delay.<|reference_end|>
arxiv
@article{wu2024mambajscc:, title={MambaJSCC: Adaptive Deep Joint Source-Channel Coding with Generalized State Space Model}, author={Tong Wu, Zhiyong Chen, Meixia Tao, Yaping Sun, Xiaodong Xu, Wenjun Zhang, and Ping Zhang}, journal={arXiv preprint arXiv:2409.16592}, year={2024}, archivePrefix={arXiv}, eprint={2409.16592}, primaryClass={cs.IT cs.AI cs.LG math.IT} }
wu2024mambajscc:
arxiv-661629
2409.16593
A Hybrid Quantum Neural Network for Split Learning
<|reference_start|>A Hybrid Quantum Neural Network for Split Learning: Quantum Machine Learning (QML) is an emerging field of research with potential applications to distributed collaborative learning, such as Split Learning (SL). SL allows resource-constrained clients to collaboratively train ML models with a server, reduce their computational overhead, and enable data privacy by avoiding raw data sharing. Although QML with SL has been studied, the problem remains open in resource-constrained environments where clients lack quantum computing capabilities. Additionally, data privacy leakage between client and server in SL poses risks of reconstruction attacks on the server side. To address these issues, we propose Hybrid Quantum Split Learning (HQSL), an application of Hybrid QML in SL. HQSL enables classical clients to train models with a hybrid quantum server and curtails reconstruction attacks. In addition, we introduce a novel qubit-efficient data-loading technique for designing a quantum layer in HQSL, minimizing both the number of qubits and circuit depth. Experiments on five datasets demonstrate HQSL's feasibility and ability to enhance classification performance compared to its classical models. Notably, HQSL achieves mean improvements of over 3% in both accuracy and F1-score for the Fashion-MNIST dataset, and over 1.5% in both metrics for the Speech Commands dataset. We expand these studies to include up to 100 clients, confirming HQSL's scalability. Moreover, we introduce a noise-based defense mechanism to tackle reconstruction attacks on the server side. Overall, HQSL enables classical clients to collaboratively train their models with a hybrid quantum server, leveraging quantum advantages while improving model performance and security against data privacy leakage-related reconstruction attacks.<|reference_end|>
arxiv
@article{cowlessur2024a, title={A Hybrid Quantum Neural Network for Split Learning}, author={Hevish Cowlessur and Chandra Thapa and Tansu Alpcan and Seyit Camtepe}, journal={arXiv preprint arXiv:2409.16593}, year={2024}, archivePrefix={arXiv}, eprint={2409.16593}, primaryClass={quant-ph cs.AI} }
cowlessur2024a
arxiv-661630
2409.16594
Generative Pre-trained Ranking Model with Over-parameterization at Web-Scale (Extended Abstract)
<|reference_start|>Generative Pre-trained Ranking Model with Over-parameterization at Web-Scale (Extended Abstract): Learning to rank (LTR) is widely employed in web searches to prioritize pertinent webpages from retrieved content based on input queries. However, traditional LTR models encounter two principal obstacles that lead to suboptimal performance: (1) the lack of well-annotated query-webpage pairs with ranking scores covering a diverse range of search query popularities, which hampers their ability to address queries across the popularity spectrum, and (2) inadequately trained models that fail to induce generalized representations for LTR, resulting in overfitting. To address these challenges, we propose a \emph{\uline{G}enerative \uline{S}emi-\uline{S}upervised \uline{P}re-trained} (GS2P) LTR model. We conduct extensive offline experiments on both a publicly available dataset and a real-world dataset collected from a large-scale search engine. Furthermore, we deploy GS2P in a large-scale web search engine with realistic traffic, where we observe significant improvements in the real-world application.<|reference_end|>
arxiv
@article{li2024generative, title={Generative Pre-trained Ranking Model with Over-parameterization at Web-Scale (Extended Abstract)}, author={Yuchen Li, Haoyi Xiong, Linghe Kong, Jiang Bian, Shuaiqiang Wang, Guihai Chen, Dawei Yin}, journal={arXiv preprint arXiv:2409.16594}, year={2024}, archivePrefix={arXiv}, eprint={2409.16594}, primaryClass={cs.IR cs.LG} }
li2024generative
arxiv-661631
2409.16595
Robo-Platform: A Robotic System for Recording Sensors and Controlling Robots
<|reference_start|>Robo-Platform: A Robotic System for Recording Sensors and Controlling Robots: Mobile smartphones compactly provide sensors such as cameras, IMUs, GNSS measurement units, and wireless and wired communication channels required for robotics projects. They are affordable, portable, and programmable, which makes them ideal for testing, data acquisition, controlling mobile robots, and many other robotic applications. A robotic system is proposed in this paper, consisting of an Android phone, a microcontroller board attached to the phone via USB, and a remote wireless controller station. In the data acquisition mode, the Android device can record a dataset of a diverse configuration of multiple cameras, IMUs, GNSS units, and external USB ADC channels in the rawest format used for, but not limited to, pose estimation and scene reconstruction applications. In robot control mode, the Android phone, a microcontroller board, and other peripherals constitute the mobile or stationary robotic system. This system is controlled using a remote server connected over Wi-Fi or Bluetooth. Experiments show that although the SLAM and AR applications can utilize the acquired data, the proposed system can pave the way for more advanced algorithms for processing these noisy and sporadic measurements. Moreover, the characteristics of the communication media are studied, and two example robotic projects, which involve controlling a toy car and a quadcopter, are included.<|reference_end|>
arxiv
@article{najafabadi2024robo-platform:, title={Robo-Platform: A Robotic System for Recording Sensors and Controlling Robots}, author={Masoud Dayani Najafabadi}, journal={arXiv preprint arXiv:2409.16595}, year={2024}, archivePrefix={arXiv}, eprint={2409.16595}, primaryClass={cs.RO cs.SY eess.SY} }
najafabadi2024robo-platform:
arxiv-661632
2409.16597
EventHallusion: Diagnosing Event Hallucinations in Video LLMs
<|reference_start|>EventHallusion: Diagnosing Event Hallucinations in Video LLMs: Recently, Multimodal Large Language Models (MLLMs) have made significant progress in the video comprehension field. Despite remarkable content reasoning and instruction following capabilities they demonstrated, the hallucination problem of these VideoLLMs is less explored compared with its counterpart in the image domain. To mitigate this gap, we first propose EventHallusion, a novel benchmark that focuses on assessing the VideoLLMs' hallucination phenomenon on video event comprehension. Based on the observation that existing VideoLLMs are entangled with the priors stemming from their foundation models, our EventHallusion is curated by meticulously collecting videos and annotating questions to intentionally mislead the VideoLLMs into interpreting events based on these priors rather than accurately understanding the video content. On the other hand, we also propose a simple yet effective method, called Temporal Contrastive Decoding (TCD), to tackle the hallucination problems of VideoLLMs. The proposed TCD suppresses the model's preference toward their priors by comparing the original video with a constructed counterpart, whose temporal cues are disrupted, during the autoregressive decoding stage. Through comprehensive evaluation of eight open-source and two closed-source VideoLLMs on the proposed EventHallusion benchmark, we find that the open-source models suffer significantly from hallucination problems, whereas the closed-source models perform markedly better. By further equipping open-sourced VideoLLMs with the proposed TCD approach, evident performance improvements are achieved across most metrics in the EventHallusion benchmark. Our codes and benchmark data are available at https://github.com/Stevetich/EventHallusion.<|reference_end|>
arxiv
@article{zhang2024eventhallusion:, title={EventHallusion: Diagnosing Event Hallucinations in Video LLMs}, author={Jiacheng Zhang, Yang Jiao, Shaoxiang Chen, Jingjing Chen, Yu-Gang Jiang}, journal={arXiv preprint arXiv:2409.16597}, year={2024}, archivePrefix={arXiv}, eprint={2409.16597}, primaryClass={cs.CV} }
zhang2024eventhallusion:
arxiv-661633
2409.16600
FAFA: Frequency-Aware Flow-Aided Self-Supervision for Underwater Object Pose Estimation
<|reference_start|>FAFA: Frequency-Aware Flow-Aided Self-Supervision for Underwater Object Pose Estimation: Although methods for estimating the pose of objects in indoor scenes have achieved great success, the pose estimation of underwater objects remains challenging due to difficulties brought by the complex underwater environment, such as degraded illumination, blurring, and the substantial cost of obtaining real annotations. In response, we introduce FAFA, a Frequency-Aware Flow-Aided self-supervised framework for 6D pose estimation of unmanned underwater vehicles (UUVs). Essentially, we first train a frequency-aware flow-based pose estimator on synthetic data, where an FFT-based augmentation approach is proposed to facilitate the network in capturing domain-invariant features and target domain styles from a frequency perspective. Further, we perform self-supervised training by enforcing flow-aided multi-level consistencies to adapt it to the real-world underwater environment. Our framework relies solely on the 3D model and RGB images, alleviating the need for any real pose annotations or other-modality data like depths. We evaluate the effectiveness of FAFA on common underwater object pose benchmarks and showcase significant performance improvements compared to state-of-the-art methods. Code is available at github.com/tjy0703/FAFA.<|reference_end|>
arxiv
@article{tang2024fafa:, title={FAFA: Frequency-Aware Flow-Aided Self-Supervision for Underwater Object Pose Estimation}, author={Jingyi Tang, Gu Wang, Zeyu Chen, Shengquan Li, Xiu Li and Xiangyang Ji}, journal={arXiv preprint arXiv:2409.16600}, year={2024}, archivePrefix={arXiv}, eprint={2409.16600}, primaryClass={cs.CV} }
tang2024fafa:
arxiv-661634
2409.16601
Cyber Food Swamps: Investigating the Impacts of Online-to-Offline Food Delivery Platforms on Healthy Food Choices
<|reference_start|>Cyber Food Swamps: Investigating the Impacts of Online-to-Offline Food Delivery Platforms on Healthy Food Choices: Online-to-offline (O2O) food delivery platforms have substantially enriched the food choices of urban residents by allowing them to conveniently access farther food outlets. However, concerns about the healthiness of delivered food persist, especially because the impact of O2O food delivery platforms on users' healthy food choices remains unclear. This study leverages large-scale empirical data from a leading O2O delivery platform to comprehensively analyze online food choice behaviors and how they are influenced by the online exposure to fast food restaurants, i.e., online food environment. Our analyses reveal significant discrepancy in food preferences across demographic groups and city sizes, where male, low-income, and younger users and those located in larger cities more likely to order fast food via O2O platforms. Besides, we also perform a comparative analysis on the food exposure differences in online and offline environments, confirming that the extended service ranges of O2O platforms can create larger "cyber food swamps". Furthermore, regression analysis highlights that a higher ratio of fast food orders is associated with "cyber food swamps", areas characterized by a higher share of accessible fast food restaurants. A 10% increase in this share raises the probability of ordering fast food by 22.0%. Moreover, a quasi-natural experiment substantiates the long-term causal effect of online food environment changes on healthy food choices. Our findings underscore the need for O2O food delivery platforms to address the health implications of online food choice exposure, thereby informing efforts by various stakeholders to improve residents' dietary health.<|reference_end|>
arxiv
@article{zhang2024cyber, title={Cyber Food Swamps: Investigating the Impacts of Online-to-Offline Food Delivery Platforms on Healthy Food Choices}, author={Yunke Zhang, Yiran Fan, Peijie Liu, Fengli Xu, Yong Li}, journal={arXiv preprint arXiv:2409.16601}, year={2024}, archivePrefix={arXiv}, eprint={2409.16601}, primaryClass={cs.CY} }
zhang2024cyber
arxiv-661635
2409.16603
Overview of the First Shared Task on Clinical Text Generation: RRG24 and "Discharge Me!"
<|reference_start|>Overview of the First Shared Task on Clinical Text Generation: RRG24 and "Discharge Me!": Recent developments in natural language generation have tremendous implications for healthcare. For instance, state-of-the-art systems could automate the generation of sections in clinical reports to alleviate physician workload and streamline hospital documentation. To explore these applications, we present a shared task consisting of two subtasks: (1) Radiology Report Generation (RRG24) and (2) Discharge Summary Generation ("Discharge Me!"). RRG24 involves generating the 'Findings' and 'Impression' sections of radiology reports given chest X-rays. "Discharge Me!" involves generating the 'Brief Hospital Course' and 'Discharge Instructions' sections of discharge summaries for patients admitted through the emergency department. "Discharge Me!" submissions were subsequently reviewed by a team of clinicians. Both tasks emphasize the goal of reducing clinician burnout and repetitive workloads by generating documentation. We received 201 submissions from across 8 teams for RRG24, and 211 submissions from across 16 teams for "Discharge Me!".<|reference_end|>
arxiv
@article{xu2024overview, title={Overview of the First Shared Task on Clinical Text Generation: RRG24 and "Discharge Me!"}, author={Justin Xu, Zhihong Chen, Andrew Johnston, Louis Blankemeier, Maya Varma, Jason Hom, William J. Collins, Ankit Modi, Robert Lloyd, Benjamin Hopkins, Curtis Langlotz, Jean-Benoit Delbrouck}, journal={Proceedings of the 23rd Workshop on Biomedical Natural Language Processing (2024) 85-98}, year={2024}, doi={10.18653/v1/2024.bionlp-1.7}, archivePrefix={arXiv}, eprint={2409.16603}, primaryClass={cs.CL} }
xu2024overview
arxiv-661636
2409.16604
Semi-LLIE: Semi-supervised Contrastive Learning with Mamba-based Low-light Image Enhancement
<|reference_start|>Semi-LLIE: Semi-supervised Contrastive Learning with Mamba-based Low-light Image Enhancement: Despite the impressive advancements made in recent low-light image enhancement techniques, the scarcity of paired data has emerged as a significant obstacle to further advancements. This work proposes a mean-teacher-based semi-supervised low-light enhancement (Semi-LLIE) framework that integrates the unpaired data into model training. The mean-teacher technique is a prominent semi-supervised learning method, successfully adopted for addressing high-level and low-level vision tasks. However, two primary issues hinder the naive mean-teacher method from attaining optimal performance in low-light image enhancement. Firstly, pixel-wise consistency loss is insufficient for transferring realistic illumination distribution from the teacher to the student model, which results in color cast in the enhanced images. Secondly, cutting-edge image enhancement approaches fail to effectively cooperate with the mean-teacher framework to restore detailed information in dark areas due to their tendency to overlook modeling structured information within local regions. To mitigate the above issues, we first introduce a semantic-aware contrastive loss to faithfully transfer the illumination distribution, contributing to enhancing images with natural colors. Then, we design a Mamba-based low-light image enhancement backbone to effectively enhance Mamba's local region pixel relationship representation ability with a multi-scale feature learning scheme, facilitating the generation of images with rich textural details. Further, we propose a novel perceptive loss based on the large-scale vision-language Recognize Anything Model (RAM) to help generate enhanced images with richer textural details. The experimental results indicate that our Semi-LLIE surpasses existing methods in both quantitative and qualitative metrics.<|reference_end|>
arxiv
@article{li2024semi-llie, title={Semi-LLIE: Semi-supervised Contrastive Learning with Mamba-based Low-light Image Enhancement}, author={Guanlin Li, Ke Zhang, Ting Wang, Ming Li, Bin Zhao, Xuelong Li}, journal={arXiv preprint arXiv:2409.16604}, year={2024}, archivePrefix={arXiv}, eprint={2409.16604}, primaryClass={cs.CV} }
li2024semi-llie
arxiv-661637
2409.16605
Evaluating and Enhancing Large Language Models for Novelty Assessment in Scholarly Publications
<|reference_start|>Evaluating and Enhancing Large Language Models for Novelty Assessment in Scholarly Publications: Recent studies have evaluated the creativity/novelty of large language models (LLMs) primarily from a semantic perspective, using benchmarks from cognitive science. However, assessing the novelty in scholarly publications is a largely unexplored area in evaluating LLMs. In this paper, we introduce a scholarly novelty benchmark (SchNovel) to evaluate LLMs' ability to assess novelty in scholarly papers. SchNovel consists of 15000 pairs of papers across six fields sampled from the arXiv dataset with publication dates spanning 2 to 10 years apart. In each pair, the more recently published paper is assumed to be more novel. Additionally, we propose RAG-Novelty, which simulates the review process taken by human reviewers by leveraging the retrieval of similar papers to assess novelty. Extensive experiments provide insights into the capabilities of different LLMs to assess novelty and demonstrate that RAG-Novelty outperforms recent baseline models.<|reference_end|>
arxiv
@article{lin2024evaluating, title={Evaluating and Enhancing Large Language Models for Novelty Assessment in Scholarly Publications}, author={Ethan Lin, Zhiyuan Peng, Yi Fang}, journal={arXiv preprint arXiv:2409.16605}, year={2024}, archivePrefix={arXiv}, eprint={2409.16605}, primaryClass={cs.CL cs.AI cs.IR cs.LG} }
lin2024evaluating
arxiv-661638
2409.16606
VFDelta: A Framework for Detecting Silent Vulnerability Fixes by Enhancing Code Change Learning
<|reference_start|>VFDelta: A Framework for Detecting Silent Vulnerability Fixes by Enhancing Code Change Learning: Vulnerability fixes in open source software (OSS) usually follow the coordinated vulnerability disclosure model and are silently fixed. This delay can expose OSS users to risks as malicious parties might exploit the software before fixes are publicly known. Therefore, it is important to identify vulnerability fixes early and automatically. Existing methods classify vulnerability fixes by learning code change representations from commits, typically by concatenating code changes, which does not effectively highlight nuanced differences. Additionally, previous approaches fine-tune code embedding models and classification models separately, which limits overall effectiveness. We propose VFDelta, a lightweight yet effective framework that embeds code before and after changes using independent models with surrounding code as context. By performing element-wise subtraction on these embeddings, we capture fine-grained changes. Our architecture allows joint training of embedding and classification models, optimizing overall performance. Experiments demonstrate that VFDelta achieves up to 0.33 F1 score and 0.63 CostEffort@5, improving over state-of-the-art methods by 77.4% and 7.1%, respectively. Ablation analysis confirms the importance of our code change representation in capturing small changes. We also expanded the dataset and introduced a temporal split to simulate real-world scenarios; VFDelta significantly outperforms baselines VulFixMiner and MiDas across all metrics in this setting.<|reference_end|>
arxiv
@article{yang2024vfdelta, title={VFDelta: A Framework for Detecting Silent Vulnerability Fixes by Enhancing Code Change Learning}, author={Xu Yang, Shaowei Wang, Jiayuan Zhou, Xing Hu}, journal={arXiv preprint arXiv:2409.16606}, year={2024}, archivePrefix={arXiv}, eprint={2409.16606}, primaryClass={cs.SE} }
yang2024vfdelta
arxiv-661639
2409.16608
Omni 3D: BEOL-Compatible 3D Logic with Omnipresent Power, Signal, and Clock
<|reference_start|>Omni 3D: BEOL-Compatible 3D Logic with Omnipresent Power, Signal, and Clock: This paper presents Omni 3D - a 3D-stacked device architecture that is naturally enabled by back-end-of-line (BEOL)-compatible transistors. Omni 3D arbitrarily interleaves metal layers for both signal/power with FETs in 3D (i.e., nFETs and pFETs are stacked in 3D). Thus, signal/power routing layers have fine-grained, all-sided access to the FET active regions maximizing 3D standard cell design flexibility. This is in sharp contrast to approaches such as back-side power delivery networks (BSPDNs), complementary FETs (CFETs), and stacked FETs. Importantly, the routing flexibility of Omni 3D is enabled by double-side routing and an interleaved metal (IM) layer for inter- and intra-cell routing, respectively. In this work, we explore Omni 3D variants (e.g., both with and without the IM layer) and optimize these variants using a virtual-source BEOL-FET compact model. We establish a physical design flow that efficiently utilizes the double-side routing in Omni 3D and perform a thorough design-technology-co-optimization (DTCO) of Omni 3D device architecture on several design points. From our design flow, we project 2.0x improvement in the energy-delay product and 1.5x reduction in area compared to the state-of-the-art CFETs with BSPDNs.<|reference_end|>
arxiv
@article{choi2024omni, title={Omni 3D: BEOL-Compatible 3D Logic with Omnipresent Power, Signal, and Clock}, author={Suhyeong Choi, Carlo Gilardi, Paul Gutwin, Robert M. Radway, Tathagata Srimani, Subhasish Mitra}, journal={arXiv preprint arXiv:2409.16608}, year={2024}, archivePrefix={arXiv}, eprint={2409.16608}, primaryClass={cs.ET cs.AR} }
choi2024omni
arxiv-661640
2409.16609
Random Forest Regression Feature Importance for Climate Impact Pathway Detection
<|reference_start|>Random Forest Regression Feature Importance for Climate Impact Pathway Detection: Disturbances to the climate system, both natural and anthropogenic, have far reaching impacts that are not always easy to identify or quantify using traditional climate science analyses or causal modeling techniques. In this paper, we develop a novel technique for discovering and ranking the chain of spatio-temporal downstream impacts of a climate source, referred to herein as a source-impact pathway, using Random Forest Regression (RFR) and SHapley Additive exPlanation (SHAP) feature importances. Rather than utilizing RFR for classification or regression tasks (the most common use case for RFR), we propose a fundamentally new RFR-based workflow in which we: (i) train random forest (RF) regressors on a set of spatio-temporal features of interest, (ii) calculate their pair-wise feature importances using the SHAP weights associated with those features, and (iii) translate these feature importances into a weighted pathway network (i.e., a weighted directed graph), which can be used to trace out and rank interdependencies between climate features and/or modalities. We adopt a tiered verification approach to verify our new pathway identification methodology. In this approach, we apply our method to ensembles of data generated by running two increasingly complex benchmarks: (i) a set of synthetic coupled equations, and (ii) a fully coupled simulation of the 1991 eruption of Mount Pinatubo in the Philippines performed using a modified version 2 of the U.S. Department of Energy's Energy Exascale Earth System Model (E3SMv2). We find that our RFR feature importance-based approach can accurately detect known pathways of impact for both test cases.<|reference_end|>
arxiv
@article{brown2024random, title={Random Forest Regression Feature Importance for Climate Impact Pathway Detection}, author={Meredith G. L. Brown, Matt Peterson, Irina Tezaur, Kara Peterson, Diana Bull}, journal={arXiv preprint arXiv:2409.16609}, year={2024}, archivePrefix={arXiv}, eprint={2409.16609}, primaryClass={cs.LG} }
brown2024random
arxiv-661641
2409.16611
Achieving Stable High-Speed Locomotion for Humanoid Robots with Deep Reinforcement Learning
<|reference_start|>Achieving Stable High-Speed Locomotion for Humanoid Robots with Deep Reinforcement Learning: Humanoid robots offer significant versatility for performing a wide range of tasks, yet their basic ability to walk and run, especially at high velocities, remains a challenge. This letter presents a novel method that combines deep reinforcement learning with kinodynamic priors to achieve stable locomotion control (KSLC). KSLC promotes coordinated arm movements to counteract destabilizing forces, enhancing overall stability. Compared to the baseline method, KSLC provides more accurate tracking of commanded velocities and better generalization in velocity control. In simulation tests, the KSLC-enabled humanoid robot successfully tracked a target velocity of 3.5 m/s with reduced fluctuations. Sim-to-sim validation in a high-fidelity environment further confirmed its robust performance, highlighting its potential for real-world applications.<|reference_end|>
arxiv
@article{zhang2024achieving, title={Achieving Stable High-Speed Locomotion for Humanoid Robots with Deep Reinforcement Learning}, author={Xinming Zhang, Xianghui Wang, Lerong Zhang, Guodong Guo, Xiaoyu Shen, Wei Zhang}, journal={arXiv preprint arXiv:2409.16611}, year={2024}, archivePrefix={arXiv}, eprint={2409.16611}, primaryClass={cs.RO} }
zhang2024achieving
arxiv-661642
2409.16612
ECG-Image-Database: A Dataset of ECG Images with Real-World Imaging and Scanning Artifacts; A Foundation for Computerized ECG Image Digitization and Analysis
<|reference_start|>ECG-Image-Database: A Dataset of ECG Images with Real-World Imaging and Scanning Artifacts; A Foundation for Computerized ECG Image Digitization and Analysis: We introduce the ECG-Image-Database, a large and diverse collection of electrocardiogram (ECG) images generated from ECG time-series data, with real-world scanning, imaging, and physical artifacts. We used ECG-Image-Kit, an open-source Python toolkit, to generate realistic images of 12-lead ECG printouts from raw ECG time-series. The images include realistic distortions such as noise, wrinkles, stains, and perspective shifts, generated both digitally and physically. The toolkit was applied to 977 12-lead ECG records from the PTB-XL database and 1,000 from Emory Healthcare to create high-fidelity synthetic ECG images. These unique images were subjected to both programmatic distortions using ECG-Image-Kit and physical effects like soaking, staining, and mold growth, followed by scanning and photography under various lighting conditions to create real-world artifacts. The resulting dataset includes 35,595 software-labeled ECG images with a wide range of imaging artifacts and distortions. The dataset provides ground truth time-series data alongside the images, offering a reference for developing machine and deep learning models for ECG digitization and classification. The images vary in quality, from clear scans of clean papers to noisy photographs of degraded papers, enabling the development of more generalizable digitization algorithms. ECG-Image-Database addresses a critical need for digitizing paper-based and non-digital ECGs for computerized analysis, providing a foundation for developing robust machine and deep learning models capable of converting ECG images into time-series. The dataset aims to serve as a reference for ECG digitization and computerized annotation efforts. ECG-Image-Database was used in the PhysioNet Challenge 2024 on ECG image digitization and classification.<|reference_end|>
arxiv
@article{reyna2024ecg-image-database, title={ECG-Image-Database: A Dataset of ECG Images with Real-World Imaging and Scanning Artifacts; A Foundation for Computerized ECG Image Digitization and Analysis}, author={Matthew A. Reyna and Deepanshi and James Weigle and Zuzana Koscova and Kiersten Campbell and Kshama Kodthalu Shivashankara and Soheil Saghafi and Sepideh Nikookar and Mohsen Motie-Shirazi and Yashar Kiarashi and Salman Seyedi and Gari D. Clifford and Reza Sameni}, journal={arXiv preprint arXiv:2409.16612}, year={2024}, archivePrefix={arXiv}, eprint={2409.16612}, primaryClass={q-bio.QM cs.AI eess.IV eess.SP} }
reyna2024ecg-image-database
arxiv-661643
2409.16615
DeformStream: Deformation-based Adaptive Volumetric Video Streaming
<|reference_start|>DeformStream: Deformation-based Adaptive Volumetric Video Streaming: Volumetric video streaming offers immersive 3D experiences but faces significant challenges due to high bandwidth requirements and latency issues in transmitting detailed content in real time. Traditional methods like point cloud streaming compromise visual quality when zoomed in, and neural rendering techniques are too computationally intensive for real-time use. Though mesh-based streaming stands out by preserving surface detail and connectivity, offering a more refined representation for 3D content, traditional mesh streaming methods typically transmit data on a per-frame basis, failing to take full advantage of temporal redundancies across frames. This results in inefficient bandwidth usage and poor adaptability to fluctuating network conditions. We introduce Deformation-based Adaptive Volumetric Video Streaming, a novel framework that enhances volumetric video streaming performance by leveraging the inherent deformability of mesh-based representations. DeformStream uses embedded deformation to reconstruct subsequent frames from inter-frame motion, significantly reducing bandwidth usage while ensuring visual coherence between frames. To address frame reconstruction overhead and network adaptability, we formulate a new QoE model that accounts for client-side deformation latency and design a dynamic programming algorithm to optimize the trade-off between visual quality and bandwidth consumption under varying network conditions. Our evaluation demonstrates that Deformation-based Adaptive Volumetric Video Streaming outperforms existing mesh-based streaming systems in both bandwidth efficiency and visual quality, offering a robust solution for real-time volumetric video applications.<|reference_end|>
arxiv
@article{li2024deformstream, title={DeformStream: Deformation-based Adaptive Volumetric Video Streaming}, author={Boyan Li, Yongting Chen, Dayou Zhang, Fangxin Wang}, journal={arXiv preprint arXiv:2409.16615}, year={2024}, archivePrefix={arXiv}, eprint={2409.16615}, primaryClass={cs.CV} }
li2024deformstream
arxiv-661644
2409.16618
Claim-Guided Textual Backdoor Attack for Practical Applications
<|reference_start|>Claim-Guided Textual Backdoor Attack for Practical Applications: Recent advances in natural language processing and the increased use of large language models have exposed new security vulnerabilities, such as backdoor attacks. Previous backdoor attacks require input manipulation after model distribution to activate the backdoor, posing limitations in real-world applicability. Addressing this gap, we introduce a novel Claim-Guided Backdoor Attack (CGBA), which eliminates the need for such manipulations by utilizing inherent textual claims as triggers. CGBA leverages claim extraction, clustering, and targeted training to trick models to misbehave on targeted claims without affecting their performance on clean data. CGBA demonstrates its effectiveness and stealthiness across various datasets and models, significantly enhancing the feasibility of practical backdoor attacks. Our code and data will be available at https://github.com/PaperCGBA/CGBA.<|reference_end|>
arxiv
@article{song2024claim-guided, title={Claim-Guided Textual Backdoor Attack for Practical Applications}, author={Minkyoo Song, Hanna Kim, Jaehan Kim, Youngjin Jin, Seungwon Shin}, journal={arXiv preprint arXiv:2409.16618}, year={2024}, archivePrefix={arXiv}, eprint={2409.16618}, primaryClass={cs.CL cs.AI cs.CR} }
song2024claim-guided
arxiv-661645
2409.16619
CasFT: Future Trend Modeling for Information Popularity Prediction with Dynamic Cues-Driven Diffusion Models
<|reference_start|>CasFT: Future Trend Modeling for Information Popularity Prediction with Dynamic Cues-Driven Diffusion Models: The rapid spread of diverse information on online social platforms has prompted both academia and industry to realize the importance of predicting content popularity, which could benefit a wide range of applications, such as recommendation systems and strategic decision-making. Recent works mainly focused on extracting spatiotemporal patterns inherent in the information diffusion process within a given observation period so as to predict its popularity over a future period of time. However, these works often overlook the future popularity trend, as future popularity could either increase exponentially or stagnate, introducing uncertainties to the prediction performance. Additionally, how to transfer the preceding-term dynamics learned from the observed diffusion process into future-term trends remains an unexplored challenge. Against this background, we propose CasFT, which leverages observed information Cascades and dynamic cues extracted via neural ODEs as conditions to guide the generation of Future popularity-increasing Trends through a diffusion model. These generated trends are then combined with the spatiotemporal patterns in the observed information cascade to make the final popularity prediction. Extensive experiments conducted on three real-world datasets demonstrate that CasFT significantly improves the prediction accuracy, compared to state-of-the-art approaches, yielding 2.2%-19.3% improvement across different datasets.<|reference_end|>
arxiv
@article{jing2024casft, title={CasFT: Future Trend Modeling for Information Popularity Prediction with Dynamic Cues-Driven Diffusion Models}, author={Xin Jing, Yichen Jing, Yuhuan Lu, Bangchao Deng, Xueqin Chen, Dingqi Yang}, journal={arXiv preprint arXiv:2409.16619}, year={2024}, archivePrefix={arXiv}, eprint={2409.16619}, primaryClass={cs.AI} }
jing2024casft
arxiv-661646
2409.16620
Optimized Monte Carlo Tree Search for Enhanced Decision Making in the FrozenLake Environment
<|reference_start|>Optimized Monte Carlo Tree Search for Enhanced Decision Making in the FrozenLake Environment: Monte Carlo Tree Search (MCTS) is a powerful algorithm for solving complex decision-making problems. This paper presents an optimized MCTS implementation applied to the FrozenLake environment, a classic reinforcement learning task characterized by stochastic transitions. The optimization leverages cumulative reward and visit count tables along with the Upper Confidence Bound for Trees (UCT) formula, resulting in efficient learning in a slippery grid world. We benchmark our implementation against other decision-making algorithms, including MCTS with Policy and Q-Learning, and perform a detailed comparison of their performance. The results demonstrate that our optimized approach effectively maximizes rewards and success rates while minimizing convergence time, outperforming baseline methods, especially in environments with inherent randomness.<|reference_end|>
arxiv
@article{guerra2024optimized, title={Optimized Monte Carlo Tree Search for Enhanced Decision Making in the FrozenLake Environment}, author={Esteban Aldana Guerra}, journal={arXiv preprint arXiv:2409.16620}, year={2024}, archivePrefix={arXiv}, eprint={2409.16620}, primaryClass={cs.AI} }
guerra2024optimized
arxiv-661647
2409.16621
Entailment-Driven Privacy Policy Classification with LLMs
<|reference_start|>Entailment-Driven Privacy Policy Classification with LLMs: While many online services provide privacy policies for end users to read and understand what personal data are being collected, these documents are often lengthy and complicated. As a result, the vast majority of users do not read them at all, leading to data collection under uninformed consent. Several attempts have been made to make privacy policies more user friendly by summarising them, providing automatic annotations or labels for key sections, or by offering chat interfaces to ask specific questions. With recent advances in Large Language Models (LLMs), there is an opportunity to develop more effective tools to parse privacy policies and help users make informed decisions. In this paper, we propose an entailment-driven LLM based framework to classify paragraphs of privacy policies into meaningful labels that are easily understood by users. The results demonstrate that our framework outperforms traditional LLM methods, improving the F1 score in average by 11.2%. Additionally, our framework provides inherently explainable and meaningful predictions.<|reference_end|>
arxiv
@article{silva2024entailment-driven, title={Entailment-Driven Privacy Policy Classification with LLMs}, author={Bhanuka Silva, Dishanika Denipitiyage, Suranga Seneviratne, Anirban Mahanti, Aruna Seneviratne}, journal={arXiv preprint arXiv:2409.16621}, year={2024}, archivePrefix={arXiv}, eprint={2409.16621}, primaryClass={cs.AI} }
silva2024entailment-driven
arxiv-661648
2409.16623
On Your Mark, Get Set, Predict! Modeling Continuous-Time Dynamics of Cascades for Information Popularity Prediction
<|reference_start|>On Your Mark, Get Set, Predict! Modeling Continuous-Time Dynamics of Cascades for Information Popularity Prediction: Information popularity prediction is important yet challenging in various domains, including viral marketing and news recommendations. The key to accurately predicting information popularity lies in subtly modeling the underlying temporal information diffusion process behind observed events of an information cascade, such as the retweets of a tweet. To this end, most existing methods either adopt recurrent networks to capture the temporal dynamics from the first to the last observed event or develop a statistical model based on self-exciting point processes to make predictions. However, information diffusion is intrinsically a complex continuous-time process with irregularly observed discrete events, which is oversimplified using recurrent networks as they fail to capture the irregular time intervals between events, or using self-exciting point processes as they lack flexibility to capture the complex diffusion process. Against this background, we propose ConCat, modeling the Continuous-time dynamics of Cascades for information popularity prediction. On the one hand, it leverages neural Ordinary Differential Equations (ODEs) to model irregular events of a cascade in continuous time based on the cascade graph and sequential event information. On the other hand, it considers cascade events as neural temporal point processes (TPPs) parameterized by a conditional intensity function which can also benefit the popularity prediction task. We conduct extensive experiments to evaluate ConCat on three real-world datasets. Results show that ConCat achieves superior performance compared to state-of-the-art baselines, yielding a 2.3%-33.2% improvement over the best-performing baselines across the three datasets.<|reference_end|>
arxiv
@article{jing2024on, title={On Your Mark, Get Set, Predict! Modeling Continuous-Time Dynamics of Cascades for Information Popularity Prediction}, author={Xin Jing, Yichen Jing, Yuhuan Lu, Bangchao Deng, Sikun Yang, Dingqi Yang}, journal={arXiv preprint arXiv:2409.16623}, year={2024}, archivePrefix={arXiv}, eprint={2409.16623}, primaryClass={cs.AI} }
jing2024on
arxiv-661649
2409.16626
Ascend HiFloat8 Format for Deep Learning
<|reference_start|>Ascend HiFloat8 Format for Deep Learning: This preliminary white paper proposes a novel 8-bit floating-point data format HiFloat8 (abbreviated as HiF8) for deep learning. HiF8 features tapered precision. For normal value encoding, it provides 7 exponent values with 3-bit mantissa, 8 exponent values with 2-bit mantissa, and 16 exponent values with 1-bit mantissa. For denormal value encoding, it extends the dynamic range by 7 extra powers of 2, from 31 to 38 binades (notice that FP16 covers 40 binades). Meanwhile, HiF8 encodes all the special values except that positive zero and negative zero are represented by only one bit-pattern. Thanks to the better balance between precision and dynamic range, HiF8 can be simultaneously used in both forward and backward passes of AI training. In this paper, we will describe the definition and rounding methods of HiF8, as well as the tentative training and inference solutions. To demonstrate the efficacy of HiF8, massive simulation results on various neural networks, including traditional neural networks and large language models (LLMs), will also be presented.<|reference_end|>
arxiv
@article{luo2024ascend, title={Ascend HiFloat8 Format for Deep Learning}, author={Yuanyong Luo, Zhongxing Zhang, Richard Wu, Hu Liu, Ying Jin, Kai Zheng, Minmin Wang, Zhanying He, Guipeng Hu, Luyao Chen, Tianchi Hu, Junsong Wang, Minqi Chen, Mikhaylov Dmitry, Korviakov Vladimir, Bobrin Maxim, Yuhao Hu, Guanfu Chen, Zeyi Huang}, journal={arXiv preprint arXiv:2409.16626}, year={2024}, archivePrefix={arXiv}, eprint={2409.16626}, primaryClass={cs.LG cs.AI cs.AR} }
luo2024ascend
arxiv-661650
2409.16627
Train Once, Deploy Anywhere: Matryoshka Representation Learning for Multimodal Recommendation
<|reference_start|>Train Once, Deploy Anywhere: Matryoshka Representation Learning for Multimodal Recommendation: Despite recent advancements in language and vision modeling, integrating rich multimodal knowledge into recommender systems continues to pose significant challenges. This is primarily due to the need for efficient recommendation, which requires adaptive and interactive responses. In this study, we focus on sequential recommendation and introduce a lightweight framework called full-scale Matryoshka representation learning for multimodal recommendation (fMRLRec). Our fMRLRec captures item features at different granularities, learning informative representations for efficient recommendation across multiple dimensions. To integrate item features from diverse modalities, fMRLRec employs a simple mapping to project multimodal item features into an aligned feature space. Additionally, we design an efficient linear transformation that embeds smaller features into larger ones, substantially reducing memory requirements for large-scale training on recommendation data. Combined with improved state space modeling techniques, fMRLRec scales to different dimensions and only requires one-time training to produce multiple models tailored to various granularities. We demonstrate the effectiveness and efficiency of fMRLRec on multiple benchmark datasets, which consistently achieves superior performance over state-of-the-art baseline methods. We make our code and data publicly available at https://github.com/yueqirex/fMRLRec.<|reference_end|>
arxiv
@article{wang2024train, title={Train Once, Deploy Anywhere: Matryoshka Representation Learning for Multimodal Recommendation}, author={Yueqi Wang, Zhenrui Yue, Huimin Zeng, Dong Wang, Julian McAuley}, journal={arXiv preprint arXiv:2409.16627}, year={2024}, archivePrefix={arXiv}, eprint={2409.16627}, primaryClass={cs.IR} }
wang2024train
arxiv-661651
2409.16629
Synchronize Dual Hands for Physics-Based Dexterous Guitar Playing
<|reference_start|>Synchronize Dual Hands for Physics-Based Dexterous Guitar Playing: We present a novel approach to synthesize dexterous motions for physically simulated hands in tasks that require coordination between the control of two hands with high temporal precision. Instead of directly learning a joint policy to control two hands, our approach performs bimanual control through cooperative learning where each hand is treated as an individual agent. The individual policies for each hand are first trained separately, and then synchronized through latent space manipulation in a centralized environment to serve as a joint policy for two-hand control. By doing so, we avoid directly performing policy learning in the joint state-action space of two hands with higher dimensions, greatly improving the overall training efficiency. We demonstrate the effectiveness of our proposed approach in the challenging guitar-playing task. The virtual guitarist trained by our approach can synthesize motions from unstructured reference data of general guitar-playing practice motions, and accurately play diverse rhythms with complex chord pressing and string picking patterns based on the input guitar tabs that do not exist in the references. Along with this paper, we provide the motion capture data that we collected as the reference for policy training. Code is available at: https://pei-xu.github.io/guitar.<|reference_end|>
arxiv
@article{xu2024synchronize, title={Synchronize Dual Hands for Physics-Based Dexterous Guitar Playing}, author={Pei Xu, Ruocheng Wang}, journal={arXiv preprint arXiv:2409.16629}, year={2024}, doi={10.1145/3680528.3687692}, archivePrefix={arXiv}, eprint={2409.16629}, primaryClass={cs.GR} }
xu2024synchronize
arxiv-661652
2409.16630
Stochastic Subsampling With Average Pooling
<|reference_start|>Stochastic Subsampling With Average Pooling: Regularization of deep neural networks has been an important issue to achieve higher generalization performance without overfitting problems. Although the popular method of Dropout provides a regularization effect, it causes inconsistent properties in the output, which may degrade the performance of deep neural networks. In this study, we propose a new module called stochastic average pooling, which incorporates Dropout-like stochasticity in pooling. We describe the properties of stochastic subsampling and average pooling and leverage them to design a module without any inconsistency problem. The stochastic average pooling achieves a regularization effect without any potential performance degradation due to the inconsistency issue and can easily be plugged into existing architectures of deep neural networks. Experiments demonstrate that replacing existing average pooling with stochastic average pooling yields consistent improvements across a variety of tasks, datasets, and models.<|reference_end|>
arxiv
@article{kim2024stochastic, title={Stochastic Subsampling With Average Pooling}, author={Bum Jun Kim, Sang Woo Kim}, journal={arXiv preprint arXiv:2409.16630}, year={2024}, archivePrefix={arXiv}, eprint={2409.16630}, primaryClass={cs.LG cs.AI cs.CV} }
kim2024stochastic
arxiv-661653
2409.16631
Enhancing Nighttime UAV Tracking with Light Distribution Suppression
<|reference_start|>Enhancing Nighttime UAV Tracking with Light Distribution Suppression: Visual object tracking has boosted extensive intelligent applications for unmanned aerial vehicles (UAVs). However, the state-of-the-art (SOTA) enhancers for nighttime UAV tracking always neglect the uneven light distribution in low-light images, inevitably leading to excessive enhancement in scenarios with complex illumination. To address these issues, this work proposes a novel enhancer, i.e., LDEnhancer, enhancing nighttime UAV tracking with light distribution suppression. Specifically, a novel image content refinement module is developed to decompose the light distribution information and image content information in the feature space, allowing for the targeted enhancement of the image content information. Then this work designs a new light distribution generation module to capture light distribution effectively. The features with light distribution information and image content information are fed into the different parameter estimation modules, respectively, for the parameter map prediction. Finally, leveraging two parameter maps, an innovative interweave iteration adjustment is proposed for the collaborative pixel-wise adjustment of low-light images. Additionally, a challenging nighttime UAV tracking dataset with uneven light distribution, namely NAT2024-2, is constructed to provide a comprehensive evaluation, which contains 40 challenging sequences with over 74K frames in total. Experimental results on the authoritative UAV benchmarks and the proposed NAT2024-2 demonstrate that LDEnhancer outperforms other SOTA low-light enhancers for nighttime UAV tracking. Furthermore, real-world tests on a typical UAV platform with an NVIDIA Orin NX confirm the practicality and efficiency of LDEnhancer. The code is available at https://github.com/vision4robotics/LDEnhancer.<|reference_end|>
arxiv
@article{yao2024enhancing, title={Enhancing Nighttime UAV Tracking with Light Distribution Suppression}, author={Liangliang Yao, Changhong Fu, Yiheng Wang, Haobo Zuo, Kunhan Lu}, journal={arXiv preprint arXiv:2409.16631}, year={2024}, archivePrefix={arXiv}, eprint={2409.16631}, primaryClass={cs.CV} }
yao2024enhancing
arxiv-661654
2409.16632
Functional Stochastic Gradient MCMC for Bayesian Neural Networks
<|reference_start|>Functional Stochastic Gradient MCMC for Bayesian Neural Networks: Classical parameter-space Bayesian inference for Bayesian neural networks (BNNs) suffers from several unresolved prior issues, such as knowledge encoding intractability and pathological behaviours in deep networks, which can lead to improper posterior inference. To address these issues, functional Bayesian inference has recently been proposed leveraging functional priors, such as the emerging functional variational inference. In addition to variational methods, stochastic gradient Markov Chain Monte Carlo (MCMC) is another scalable and effective inference method for BNNs to asymptotically generate samples from the true posterior by simulating continuous dynamics. However, existing MCMC methods perform solely in parameter space and inherit the unresolved prior issues, while extending these dynamics to function space is a non-trivial undertaking. In this paper, we introduce novel functional MCMC schemes, including stochastic gradient versions, based on newly designed diffusion dynamics that can incorporate more informative functional priors. Moreover, we prove that the stationary measure of these functional dynamics is the target posterior over functions. Our functional MCMC schemes demonstrate improved performance in both predictive accuracy and uncertainty quantification on several tasks compared to naive parameter-space MCMC and functional variational inference.<|reference_end|>
arxiv
@article{wu2024functional, title={Functional Stochastic Gradient MCMC for Bayesian Neural Networks}, author={Mengjing Wu, Junyu Xuan, Jie Lu}, journal={arXiv preprint arXiv:2409.16632}, year={2024}, archivePrefix={arXiv}, eprint={2409.16632}, primaryClass={cs.LG} }
wu2024functional
arxiv-661655
2409.16633
PIFS-Rec: Process-In-Fabric-Switch for Large-Scale Recommendation System Inferences
<|reference_start|>PIFS-Rec: Process-In-Fabric-Switch for Large-Scale Recommendation System Inferences: Deep Learning Recommendation Models (DLRMs) have become increasingly popular and prevalent in today's datacenters, consuming most of the AI inference cycles. The performance of DLRMs is heavily influenced by available bandwidth due to their large vector sizes in embedding tables and concurrent accesses. To achieve substantial improvements over existing solutions, novel approaches towards DLRM optimization are needed, especially, in the context of emerging interconnect technologies like CXL. This study delves into exploring CXL-enabled systems, implementing a process-in-fabric-switch (PIFS) solution to accelerate DLRMs while optimizing their memory and bandwidth scalability. We present an in-depth characterization of industry-scale DLRM workloads running on CXL-ready systems, identifying the predominant bottlenecks in existing CXL systems. We, therefore, propose PIFS-Rec, a PIFS-based scheme that implements near-data processing through downstream ports of the fabric switch. PIFS-Rec achieves a latency that is 3.89x lower than Pond, an industry-standard CXL-based system, and also outperforms BEACON, a state-of-the-art scheme, by 2.03x.<|reference_end|>
arxiv
@article{huo2024pifs-rec:, title={PIFS-Rec: Process-In-Fabric-Switch for Large-Scale Recommendation System Inferences}, author={Pingyi Huo, Anusha Devulapally, Hasan Al Maruf, Minseo Park, Krishnakumar Nair, Meena Arunachalam, Gulsum Gudukbay Akbulut, Mahmut Taylan Kandemir, Vijaykrishnan Narayanan}, journal={arXiv preprint arXiv:2409.16633}, year={2024}, archivePrefix={arXiv}, eprint={2409.16633}, primaryClass={cs.AR cs.DC cs.IR cs.LG} }
huo2024pifs-rec:
arxiv-661656
2409.16635
Judgment of Thoughts: Courtroom of the Binary Logical Reasoning in Large Language Models
<|reference_start|>Judgment of Thoughts: Courtroom of the Binary Logical Reasoning in Large Language Models: This paper proposes a novel prompt engineering technique called Judgment of Thought (JoT) that is specifically tailored for binary logical reasoning tasks. JoT employs three roles$\unicode{x2014}$lawyer, prosecutor, and judge$\unicode{x2014}$to facilitate more reliable and accurate reasoning by the model. In this framework, the judge utilizes a high$\unicode{x2010}$level model, while the lawyer and prosecutor utilize low$\unicode{x2010}$level models. This structure helps the judge better understand the responses from both the lawyer and prosecutor, enabling a more accurate judgment. Experimental results on large language model (LLM) benchmark datasets, such as BigBenchHard and Winogrande, demonstrate that JoT outperforms existing methods, including Chain of Thought (CoT) and Self$\unicode{x2010}$Consistency (SC), in binary logical reasoning tasks. Additionally, in real$\unicode{x2010}$world tasks, such as Fake News Detection and SMS Spam Detection, JoT shows comparable or improved performance compared to existing techniques. JoT significantly enhances the accuracy and reliability of models in binary reasoning tasks and shows potential for practical applicability across various domains. Future research should aim to further broaden the applicability of JoT and optimize its implementation for real$\unicode{x2010}$world problem$\unicode{x2010}$solving.<|reference_end|>
arxiv
@article{park2024judgment, title={Judgment of Thoughts: Courtroom of the Binary Logical Reasoning in Large Language Models}, author={Sungjune Park and Daeseon Choi}, journal={arXiv preprint arXiv:2409.16635}, year={2024}, archivePrefix={arXiv}, eprint={2409.16635}, primaryClass={cs.AI} }
park2024judgment
arxiv-661657
2409.16636
Training Language Models to Win Debates with Self-Play Improves Judge Accuracy
<|reference_start|>Training Language Models to Win Debates with Self-Play Improves Judge Accuracy: We test the robustness of debate as a method of scalable oversight by training models to debate with data generated via self-play. In a long-context reading comprehension task, we find that language model based evaluators answer questions more accurately when judging models optimized to win debates. By contrast, we find no such relationship for consultancy models trained to persuade a judge without an opposing debater present. In quantitative and qualitative comparisons between our debate models and novel consultancy baselines, we find evidence that debate training encourages stronger and more informative arguments, showing promise that it can help provide high-quality supervision for tasks that are difficult to directly evaluate.<|reference_end|>
arxiv
@article{arnesen2024training, title={Training Language Models to Win Debates with Self-Play Improves Judge Accuracy}, author={Samuel Arnesen, David Rein, Julian Michael}, journal={arXiv preprint arXiv:2409.16636}, year={2024}, archivePrefix={arXiv}, eprint={2409.16636}, primaryClass={cs.CL cs.AI} }
arnesen2024training
arxiv-661658
2409.16637
Deep-Learning Recognition of Scanning Transmission Electron Microscopy: Quantifying and Mitigating the Influence of Gaussian Noises
<|reference_start|>Deep-Learning Recognition of Scanning Transmission Electron Microscopy: Quantifying and Mitigating the Influence of Gaussian Noises: Scanning transmission electron microscopy (STEM) is a powerful tool to reveal the morphologies and structures of materials, thereby attracting intensive interest from the scientific and industrial communities. The outstanding spatial (atomic level) and temporal (ms level) resolutions of the STEM techniques generate fruitful amounts of high-definition data, thereby enabling the high-volume and high-speed analysis of materials. On the other hand, processing of the big dataset generated by STEM is time-consuming and beyond the capability of human-based manual work, which urgently calls for computer-based automation. In this work, we present a deep-learning mask region-based neural network (Mask R-CNN) for the recognition of nanoparticles imaged by STEM, as well as generating the associated dimensional analysis. The Mask R-CNN model was tested on simulated STEM-HAADF results with different Gaussian noises, particle shapes and particle sizes, and the results indicated that Gaussian noise has a determining influence on the accuracy of recognition. By applying Gaussian and Non-Local Means filters on the noise-containing STEM-HAADF results, the influences of noises are largely mitigated, and recognition accuracy is significantly improved. This filtering-recognition approach was further applied to experimental STEM-HAADF results, which yields satisfying accuracy compared with the traditional threshold methods. The deep-learning-based method developed in this work has great potential in the analysis of the complicated structures and large data generated by STEM-HAADF.<|reference_end|>
arxiv
@article{zhang2024deep-learning, title={Deep-Learning Recognition of Scanning Transmission Electron Microscopy: Quantifying and Mitigating the Influence of Gaussian Noises}, author={Hanlei Zhang, Jincheng Bai, Xiabo Chen, Can Li, Chuanjian Zhong, Jiye Fang, and Guangwen Zhou}, journal={arXiv preprint arXiv:2409.16637}, year={2024}, archivePrefix={arXiv}, eprint={2409.16637}, primaryClass={eess.IV cs.CV} }
zhang2024deep-learning
arxiv-661659
2409.16639
Examining the Rat in the Tunnel: Interpretable Multi-Label Classification of Tor-based Malware
<|reference_start|>Examining the Rat in the Tunnel: Interpretable Multi-Label Classification of Tor-based Malware: Despite being the most popular privacy-enhancing network, Tor is increasingly adopted by cybercriminals to obfuscate malicious traffic, hindering the identification of malware-related communications between compromised devices and Command and Control (C&C) servers. This malicious traffic can induce congestion and reduce Tor's performance, while encouraging network administrators to block Tor traffic. Recent research, however, demonstrates the potential for accurately classifying captured Tor traffic as malicious or benign. While existing efforts have addressed malware class identification, their performance remains limited, with micro-average precision and recall values around 70%. Accurately classifying specific malware classes is crucial for effective attack prevention and mitigation. Furthermore, understanding the unique patterns and attack vectors employed by different malware classes helps the development of robust and adaptable defence mechanisms. We utilise a multi-label classification technique based on Message-Passing Neural Networks, demonstrating its superiority over previous approaches such as Binary Relevance, Classifier Chains, and Label Powerset, by achieving micro-average precision (MAP) and recall (MAR) exceeding 90%. Compared to previous work, we significantly improve performance by 19.98%, 10.15%, and 59.21% in MAP, MAR, and Hamming Loss, respectively. Next, we employ Explainable Artificial Intelligence (XAI) techniques to interpret the decision-making process within these models. Finally, we assess the robustness of all techniques by crafting adversarial perturbations capable of manipulating classifier predictions and generating false positives and negatives.<|reference_end|>
arxiv
@article{karunanayake2024examining, title={Examining the Rat in the Tunnel: Interpretable Multi-Label Classification of Tor-based Malware}, author={Ishan Karunanayake, Mashael AlSabah, Nadeem Ahmed, Sanjay Jha}, journal={arXiv preprint arXiv:2409.16639}, year={2024}, archivePrefix={arXiv}, eprint={2409.16639}, primaryClass={cs.CR cs.LG} }
karunanayake2024examining
arxiv-661660
2409.16640
HURRY: Highly Utilized, Reconfigurable ReRAM-based In-situ Accelerator with Multifunctionality
<|reference_start|>HURRY: Highly Utilized, Reconfigurable ReRAM-based In-situ Accelerator with Multifunctionality: Resistive random-access memory (ReRAM) crossbar arrays are suitable for efficient inference computations in neural networks due to their analog general matrix-matrix multiplication (GEMM) capabilities. However, traditional ReRAM-based accelerators suffer from spatial and temporal underutilization. We present HURRY, a reconfigurable and multifunctional ReRAM-based in-situ accelerator. HURRY uses a block activation scheme for concurrent activation of dynamically sized ReRAM portions, enhancing spatial utilization. Additionally, it incorporates functional blocks for convolution, ReLU, max pooling, and softmax computations to improve temporal utilization. System-level scheduling and data mapping strategies further optimize performance. Consequently, HURRY achieves up to 3.35x speedup, 5.72x higher energy efficiency, and 7.91x greater area efficiency compared to current ReRAM-based accelerators.<|reference_end|>
arxiv
@article{shin2024hurry:, title={HURRY: Highly Utilized, Reconfigurable ReRAM-based In-situ Accelerator with Multifunctionality}, author={Hery Shin, Jae-Young Kim, Donghyuk Kim, and Joo-Young Kim}, journal={arXiv preprint arXiv:2409.16640}, year={2024}, archivePrefix={arXiv}, eprint={2409.16640}, primaryClass={cs.AR} }
shin2024hurry:
arxiv-661661
2409.16643
A Fast Dynamic Internal Predictive Power Scheduling Approach for Power Management in Microgrids
<|reference_start|>A Fast Dynamic Internal Predictive Power Scheduling Approach for Power Management in Microgrids: This paper presents a Dynamic Internal Predictive Power Scheduling (DIPPS) approach for optimizing power management in microgrids, particularly focusing on external power exchanges among diverse prosumers. DIPPS utilizes a dynamic objective function with a time-varying binary parameter to control the timing of power transfers to the external grid, facilitated by efficient usage of energy storage for surplus renewable power. The microgrid power scheduling problem is modeled as a mixed-integer nonlinear programming (MINLP-PS) problem and subsequently transformed into a mixed-integer linear programming (MILP-PS) optimization through McCormick's relaxation to reduce the computational complexity. A predictive window with 6 data points is solved at an average of 0.92s, a 97.6% improvement over the 38.27s required for the MINLP-PS formulation, implying the numerical feasibility of the DIPPS approach for real-time implementation. Finally, the approach is validated against a static objective using real-world load data across three case studies with different time-varying parameters, demonstrating the ability of DIPPS to optimize power exchanges and efficiently utilize distributed resources while shifting the external power transfers to specified time durations.<|reference_end|>
arxiv
@article{maya2024a, title={A Fast Dynamic Internal Predictive Power Scheduling Approach for Power Management in Microgrids}, author={Neethu Maya, Bala Kameshwar Poolla, Seshadhri Srinivasan, Narasimman Sundararajan, Suresh Sundaram}, journal={arXiv preprint arXiv:2409.16643}, year={2024}, archivePrefix={arXiv}, eprint={2409.16643}, primaryClass={eess.SY cs.SY} }
maya2024a
arxiv-661662
2409.16644
Enabling Auditory Large Language Models for Automatic Speech Quality Evaluation
<|reference_start|>Enabling Auditory Large Language Models for Automatic Speech Quality Evaluation: Speech quality assessment typically requires evaluating audio from multiple aspects, such as mean opinion score (MOS) and speaker similarity (SIM) etc., which can be challenging to cover using one small model designed for a single task. In this paper, we propose leveraging recently introduced auditory large language models (LLMs) for automatic speech quality assessment. By employing task-specific prompts, auditory LLMs are finetuned to predict MOS, SIM and A/B testing results, which are commonly used for evaluating text-to-speech systems. Additionally, the finetuned auditory LLM is able to generate natural language descriptions assessing aspects like noisiness, distortion, discontinuity, and overall quality, providing more interpretable outputs. Extensive experiments have been performed on the NISQA, BVCC, SOMOS and VoxSim speech quality datasets, using open-source auditory LLMs such as SALMONN, Qwen-Audio, and Qwen2-Audio. For the natural language descriptions task, a commercial model Google Gemini 1.5 Pro is also evaluated. The results demonstrate that auditory LLMs achieve competitive performance compared to state-of-the-art task-specific small models in predicting MOS and SIM, while also delivering promising results in A/B testing and natural language descriptions. Our data processing scripts and finetuned model checkpoints will be released upon acceptance.<|reference_end|>
arxiv
@article{wang2024enabling, title={Enabling Auditory Large Language Models for Automatic Speech Quality Evaluation}, author={Siyin Wang, Wenyi Yu, Yudong Yang, Changli Tang, Yixuan Li, Jimin Zhuang, Xianzhao Chen, Xiaohai Tian, Jun Zhang, Guangzhi Sun, Lu Lu, Chao Zhang}, journal={arXiv preprint arXiv:2409.16644}, year={2024}, archivePrefix={arXiv}, eprint={2409.16644}, primaryClass={eess.AS cs.CL cs.SD} }
wang2024enabling
arxiv-661663
2409.16645
Task Addition in Multi-Task Learning by Geometrical Alignment
<|reference_start|>Task Addition in Multi-Task Learning by Geometrical Alignment: Training deep learning models on limited data while maintaining generalization is one of the fundamental challenges in molecular property prediction. One effective solution is transferring knowledge extracted from abundant datasets to those with scarce data. Recently, a novel algorithm called Geometrically Aligned Transfer Encoder (GATE) has been introduced, which uses soft parameter sharing by aligning the geometrical shapes of task-specific latent spaces. However, GATE faces limitations in scaling to multiple tasks due to computational costs. In this study, we propose a task addition approach for GATE to improve performance on target tasks with limited data while minimizing computational complexity. This is achieved through supervised multi-task pre-training on a large dataset, followed by the addition and training of task-specific modules for each target task. Our experiments demonstrate the superior performance of the task addition strategy for GATE over conventional multi-task methods, with comparable computational costs.<|reference_end|>
arxiv
@article{yim2024task, title={Task Addition in Multi-Task Learning by Geometrical Alignment}, author={Soorin Yim, Dae-Woong Jeong, Sung Moon Ko, Sumin Lee, Hyunseung Kim, Chanhui Lee, Sehui Han}, journal={arXiv preprint arXiv:2409.16645}, year={2024}, archivePrefix={arXiv}, eprint={2409.16645}, primaryClass={cs.LG cs.AI} }
yim2024task
arxiv-661664
2409.16646
Cross-Lingual and Cross-Cultural Variation in Image Descriptions
<|reference_start|>Cross-Lingual and Cross-Cultural Variation in Image Descriptions: Do speakers of different languages talk differently about what they see? Behavioural and cognitive studies report cultural effects on perception; however, these are mostly limited in scope and hard to replicate. In this work, we conduct the first large-scale empirical study of cross-lingual variation in image descriptions. Using a multimodal dataset with 31 languages and images from diverse locations, we develop a method to accurately identify entities mentioned in captions and present in the images, then measure how they vary across languages. Our analysis reveals that pairs of languages that are geographically or genetically closer tend to mention the same entities more frequently. We also identify entity categories whose saliency is universally high (such as animate beings), low (clothing accessories) or displaying high variance across languages (landscape). In a case study, we measure the differences in a specific language pair (e.g., Japanese mentions clothing far more frequently than English). Furthermore, our method corroborates previous small-scale studies, including 1) Rosch et al. (1976)'s theory of basic-level categories, demonstrating a preference for entities that are neither too generic nor too specific, and 2) Miyamoto et al. (2006)'s hypothesis that environments afford patterns of perception, such as entity counts. Overall, our work reveals the presence of both universal and culture-specific patterns in entity mentions.<|reference_end|>
arxiv
@article{berger2024cross-lingual, title={Cross-Lingual and Cross-Cultural Variation in Image Descriptions}, author={Uri Berger and Edoardo M. Ponti}, journal={arXiv preprint arXiv:2409.16646}, year={2024}, archivePrefix={arXiv}, eprint={2409.16646}, primaryClass={cs.CL} }
berger2024cross-lingual
arxiv-661665
2409.16647
Domain-Independent Automatic Generation of Descriptive Texts for Time-Series Data
<|reference_start|>Domain-Independent Automatic Generation of Descriptive Texts for Time-Series Data: Due to scarcity of time-series data annotated with descriptive texts, training a model to generate descriptive texts for time-series data is challenging. In this study, we propose a method to systematically generate domain-independent descriptive texts from time-series data. We identify two distinct approaches for creating pairs of time-series data and descriptive texts: the forward approach and the backward approach. By implementing the novel backward approach, we create the Temporal Automated Captions for Observations (TACO) dataset. Experimental results demonstrate that a contrastive learning based model trained using the TACO dataset is capable of generating descriptive texts for time-series data in novel domains.<|reference_end|>
arxiv
@article{dohi2024domain-independent, title={Domain-Independent Automatic Generation of Descriptive Texts for Time-Series Data}, author={Kota Dohi, Aoi Ito, Harsh Purohit, Tomoya Nishida, Takashi Endo, Yohei Kawaguchi}, journal={arXiv preprint arXiv:2409.16647}, year={2024}, archivePrefix={arXiv}, eprint={2409.16647}, primaryClass={cs.CL cs.LG} }
dohi2024domain-independent
arxiv-661666
2409.16650
Succinct Data Structures for Baxter Permutation and Related Families
<|reference_start|>Succinct Data Structures for Baxter Permutation and Related Families: A permutation $\pi: [n] \rightarrow [n]$ is a Baxter permutation if and only if it does not contain either of the patterns $2-41-3$ and $3-14-2$. Baxter permutations are one of the most widely studied subclasses of general permutation due to their connections with various combinatorial objects such as plane bipolar orientations and mosaic floorplans, etc. In this paper, we introduce a novel succinct representation (i.e., using $o(n)$ additional bits from their information-theoretical lower bounds) for Baxter permutations of size $n$ that supports $\pi(i)$ and $\pi^{-1}(j)$ queries for any $i \in [n]$ in $O(f_1(n))$ and $O(f_2(n))$ time, respectively. Here, $f_1(n)$ and $f_2(n)$ are arbitrary increasing functions that satisfy the conditions $\omega(\log n)$ and $\omega(\log^2 n)$, respectively. This stands out as the first succinct representation with sub-linear worst-case query times for Baxter permutations. Additionally, we consider a subclass of Baxter permutations called \textit{separable permutations}, which do not contain either of the patterns $2-4-1-3$ and $3-1-4-2$. In this paper, we provide the first succinct representation of the separable permutation $\rho: [n] \rightarrow [n]$ of size $n$ that supports both $\rho(i)$ and $\rho^{-1}(j)$ queries in $O(1)$ time. In particular, this result circumvents Golynski's [SODA 2009] lower bound result for trade-offs between redundancy and $\rho(i)$ and $\rho^{-1}(j)$ queries. Moreover, as applications of these permutations with the queries, we also introduce the first succinct representations for mosaic/slicing floorplans, and plane bipolar orientations, which can further support specific navigational queries on them efficiently.<|reference_end|>
arxiv
@article{chakraborty2024succinct, title={Succinct Data Structures for Baxter Permutation and Related Families}, author={Sankardeep Chakraborty, Seungbum Jo, Geunho Kim, Kunihiko Sadakane}, journal={arXiv preprint arXiv:2409.16650}, year={2024}, archivePrefix={arXiv}, eprint={2409.16650}, primaryClass={cs.DS} }
chakraborty2024succinct
arxiv-661667
2409.16651
Learning Representation for Multitask learning through Self Supervised Auxiliary learning
<|reference_start|>Learning Representation for Multitask learning through Self Supervised Auxiliary learning: Multi-task learning is a popular machine learning approach that enables simultaneous learning of multiple related tasks, improving algorithmic efficiency and effectiveness. In the hard parameter sharing approach, an encoder shared through multiple tasks generates data representations passed to task-specific predictors. Therefore, it is crucial to have a shared encoder that provides decent representations for each and every task. However, despite recent advances in multi-task learning, the question of how to improve the quality of representations generated by the shared encoder remains open. To address this gap, we propose a novel approach called Dummy Gradient norm Regularization (DGR) that aims to improve the universality of the representations generated by the shared encoder. Specifically, the method decreases the norm of the gradient of the loss function with respect to dummy task-specific predictors to improve the universality of the shared encoder's representations. Through experiments on multiple multi-task learning benchmark datasets, we demonstrate that DGR effectively improves the quality of the shared representations, leading to better multi-task prediction performance. Applied to various classifiers, the shared representations generated by DGR also show superior performance compared to existing multi-task learning methods. Moreover, our approach takes advantage of computational efficiency due to its simplicity. The simplicity also allows us to seamlessly integrate DGR with the existing multi-task learning algorithms.<|reference_end|>
arxiv
@article{shin2024learning, title={Learning Representation for Multitask learning through Self Supervised Auxiliary learning}, author={Seokwon Shin, Hyungrok Do, and Youngdoo Son}, journal={arXiv preprint arXiv:2409.16651}, year={2024}, archivePrefix={arXiv}, eprint={2409.16651}, primaryClass={stat.ML cs.LG} }
shin2024learning
arxiv-661668
2409.16652
Progressive Representation Learning for Real-Time UAV Tracking
<|reference_start|>Progressive Representation Learning for Real-Time UAV Tracking: Visual object tracking has significantly promoted autonomous applications for unmanned aerial vehicles (UAVs). However, learning robust object representations for UAV tracking is especially challenging in complex dynamic environments, when confronted with aspect ratio change and occlusion. These challenges severely alter the original information of the object. To handle the above issues, this work proposes a novel progressive representation learning framework for UAV tracking, i.e., PRL-Track. Specifically, PRL-Track is divided into coarse representation learning and fine representation learning. For coarse representation learning, two innovative regulators, which rely on appearance and semantic information, are designed to mitigate appearance interference and capture semantic information. Furthermore, for fine representation learning, a new hierarchical modeling generator is developed to intertwine coarse object representations. Exhaustive experiments demonstrate that the proposed PRL-Track delivers exceptional performance on three authoritative UAV tracking benchmarks. Real-world tests indicate that the proposed PRL-Track realizes superior tracking performance with 42.6 frames per second on the typical UAV platform equipped with an edge smart camera. The code, model, and demo videos are available at \url{https://github.com/vision4robotics/PRL-Track}.<|reference_end|>
arxiv
@article{fu2024progressive, title={Progressive Representation Learning for Real-Time UAV Tracking}, author={Changhong Fu, Xiang Lei, Haobo Zuo, Liangliang Yao, Guangze Zheng, and Jia Pan}, journal={arXiv preprint arXiv:2409.16652}, year={2024}, archivePrefix={arXiv}, eprint={2409.16652}, primaryClass={cs.CV cs.AI} }
fu2024progressive
arxiv-661669
2409.16653
The Credibility Transformer
<|reference_start|>The Credibility Transformer: Inspired by the large success of Transformers in Large Language Models, these architectures are increasingly applied to tabular data. This is achieved by embedding tabular data into low-dimensional Euclidean spaces resulting in similar structures as time-series data. We introduce a novel credibility mechanism to this Transformer architecture. This credibility mechanism is based on a special token that should be seen as an encoder that consists of a credibility weighted average of prior information and observation based information. We demonstrate that this novel credibility mechanism is very beneficial to stabilize training, and our Credibility Transformer leads to predictive models that are superior to state-of-the-art deep learning models.<|reference_end|>
arxiv
@article{richman2024the, title={The Credibility Transformer}, author={Ronald Richman and Salvatore Scognamiglio and Mario V. W\"uthrich}, journal={arXiv preprint arXiv:2409.16653}, year={2024}, archivePrefix={arXiv}, eprint={2409.16653}, primaryClass={cs.LG q-fin.GN} }
richman2024the
arxiv-661670
2409.16654
Speech Recognition Rescoring with Large Speech-Text Foundation Models
<|reference_start|>Speech Recognition Rescoring with Large Speech-Text Foundation Models: Large language models (LLM) have demonstrated the ability to understand human language by leveraging large amount of text data. Automatic speech recognition (ASR) systems are often limited by available transcribed speech data and benefit from a second pass rescoring using LLM. Recently multi-modal large language models, particularly speech and text foundational models have demonstrated strong spoken language understanding. Speech-Text foundational models leverage large amounts of unlabelled and labelled data both in speech and text modalities to model human language. In this work, we propose novel techniques to use multi-modal LLM for ASR rescoring. We also explore discriminative training to further improve the foundational model rescoring performance. We demonstrate cross-modal knowledge transfer in speech-text LLM can benefit rescoring. Our experiments demonstrate up-to 20% relative improvements over Whisper large ASR and up-to 15% relative improvements over text-only LLM.<|reference_end|>
arxiv
@article{shivakumar2024speech, title={Speech Recognition Rescoring with Large Speech-Text Foundation Models}, author={Prashanth Gurunath Shivakumar and Jari Kolehmainen and Aditya Gourav and Yi Gu and Ankur Gandhe and Ariya Rastrow and Ivan Bulyko}, journal={arXiv preprint arXiv:2409.16654}, year={2024}, archivePrefix={arXiv}, eprint={2409.16654}, primaryClass={eess.AS cs.CL cs.SD} }
shivakumar2024speech
arxiv-661671
2409.16656
A Rule-Based Approach for UI Migration from Android to iOS
<|reference_start|>A Rule-Based Approach for UI Migration from Android to iOS: In the mobile development process, creating the user interface (UI) is highly resource intensive. Consequently, numerous studies have focused on automating UI development, such as generating UI from screenshots or design specifications. However, they heavily rely on computer vision techniques for image recognition. Any recognition errors can cause invalid UI element generation, compromising the effectiveness of these automated approaches. Moreover, developing an app UI from scratch remains a time consuming and labor intensive task. To address this challenge, we propose a novel approach called GUIMIGRATOR, which enables the cross platform migration of existing Android app UIs to iOS, thereby automatically generating UI to facilitate the reuse of existing UI. This approach not only avoids errors from screenshot recognition but also reduces the cost of developing UIs from scratch. GUIMIGRATOR extracts and parses Android UI layouts, views, and resources to construct a UI skeleton tree. GUIMIGRATOR generates the final UI code files utilizing target code templates, which are then compiled and validated in the iOS development platform, i.e., Xcode. We evaluate the effectiveness of GUIMIGRATOR on 31 Android open source applications across ten domains. The results show that GUIMIGRATOR achieves a UI similarity score of 78 between migration screenshots, outperforming two popular existing LLMs substantially. Additionally, GUIMIGRATOR demonstrates high efficiency, taking only 7.6 seconds to migrate the datasets. These findings indicate that GUIMIGRATOR effectively facilitates the reuse of Android UI code on iOS, leveraging the strengths of both platforms UI frameworks and making new contributions to cross platform development.<|reference_end|>
arxiv
@article{gao2024a, title={A Rule-Based Approach for UI Migration from Android to iOS}, author={Yi Gao and Xing Hu and Tongtong Xu and Xin Xia and Xiaohu Yang}, journal={arXiv preprint arXiv:2409.16656}, year={2024}, archivePrefix={arXiv}, eprint={2409.16656}, primaryClass={cs.SE} }
gao2024a
arxiv-661672
2409.16658
Pre-trained Language Models Return Distinguishable Probability Distributions to Unfaithfully Hallucinated Texts
<|reference_start|>Pre-trained Language Models Return Distinguishable Probability Distributions to Unfaithfully Hallucinated Texts: In this work, we show the pre-trained language models return distinguishable generation probability and uncertainty distribution to unfaithfully hallucinated texts, regardless of their size and structure. By examining 24 models on 6 data sets, we find out that 88-98% of cases return statistically significantly distinguishable generation probability and uncertainty distributions. Using this general phenomenon, we showcase a hallucination-reducing training algorithm. Our algorithm outperforms other baselines by achieving higher faithfulness metrics while maintaining sound general text quality measures.<|reference_end|>
arxiv
@article{cha2024pre-trained, title={Pre-trained Language Models Return Distinguishable Probability Distributions to Unfaithfully Hallucinated Texts}, author={Taehun Cha and Donghun Lee}, journal={arXiv preprint arXiv:2409.16658}, year={2024}, archivePrefix={arXiv}, eprint={2409.16658}, primaryClass={cs.CL} }
cha2024pre-trained
arxiv-661673
2409.16663
Mitigating Covariate Shift in Imitation Learning for Autonomous Vehicles Using Latent Space Generative World Models
<|reference_start|>Mitigating Covariate Shift in Imitation Learning for Autonomous Vehicles Using Latent Space Generative World Models: We propose the use of latent space generative world models to address the covariate shift problem in autonomous driving. A world model is a neural network capable of predicting an agent's next state given past states and actions. By leveraging a world model during training, the driving policy effectively mitigates covariate shift without requiring an excessive amount of training data. During end-to-end training, our policy learns how to recover from errors by aligning with states observed in human demonstrations, so that at runtime it can recover from perturbations outside the training distribution. Additionally, we introduce a novel transformer-based perception encoder that employs multi-view cross-attention and a learned scene query. We present qualitative and quantitative results, demonstrating significant improvements upon prior state of the art in closed-loop testing in the CARLA simulator, as well as showing the ability to handle perturbations in both CARLA and NVIDIA's DRIVE Sim.<|reference_end|>
arxiv
@article{popov2024mitigating, title={Mitigating Covariate Shift in Imitation Learning for Autonomous Vehicles Using Latent Space Generative World Models}, author={Alexander Popov and Alperen Degirmenci and David Wehr and Shashank Hegde and Ryan Oldja and Alexey Kamenev and Bertrand Douillard and David Nist\'er and Urs Muller and Ruchi Bhargava and Stan Birchfield and Nikolai Smolyanskiy}, journal={arXiv preprint arXiv:2409.16663}, year={2024}, archivePrefix={arXiv}, eprint={2409.16663}, primaryClass={cs.RO cs.CV cs.LG cs.SY eess.SY} }
popov2024mitigating
arxiv-661674
2409.16665
Multirotor Nonlinear Model Predictive Control based on Visual Servoing of Evolving Features
<|reference_start|>Multirotor Nonlinear Model Predictive Control based on Visual Servoing of Evolving Features: This article presents a Visual Servoing Nonlinear Model Predictive Control (NMPC) scheme for autonomously tracking a moving target using multirotor Unmanned Aerial Vehicles (UAVs). The scheme is developed for surveillance and tracking of contour-based areas with evolving features. NMPC is used to manage input and state constraints, while additional barrier functions are incorporated in order to ensure system safety and optimal performance. The proposed control scheme is designed based on the extraction and implementation of the full dynamic model of the features describing the target and the state variables. Real-time simulations and experiments using a quadrotor UAV equipped with a camera demonstrate the effectiveness of the proposed strategy.<|reference_end|>
arxiv
@article{aspragkathos2024multirotor, title={Multirotor Nonlinear Model Predictive Control based on Visual Servoing of Evolving Features}, author={Sotirios N. Aspragkathos and Panagiotis Rousseas and George C. Karras and Kostas J. Kyriakopoulos}, journal={arXiv preprint arXiv:2409.16665}, year={2024}, archivePrefix={arXiv}, eprint={2409.16665}, primaryClass={cs.RO cs.SY eess.SY} }
aspragkathos2024multirotor
arxiv-661675
2409.16666
TalkinNeRF: Animatable Neural Fields for Full-Body Talking Humans
<|reference_start|>TalkinNeRF: Animatable Neural Fields for Full-Body Talking Humans: We introduce a novel framework that learns a dynamic neural radiance field (NeRF) for full-body talking humans from monocular videos. Prior work represents only the body pose or the face. However, humans communicate with their full body, combining body pose, hand gestures, as well as facial expressions. In this work, we propose TalkinNeRF, a unified NeRF-based network that represents the holistic 4D human motion. Given a monocular video of a subject, we learn corresponding modules for the body, face, and hands, that are combined together to generate the final result. To capture complex finger articulation, we learn an additional deformation field for the hands. Our multi-identity representation enables simultaneous training for multiple subjects, as well as robust animation under completely unseen poses. It can also generalize to novel identities, given only a short video as input. We demonstrate state-of-the-art performance for animating full-body talking humans, with fine-grained hand articulation and facial expressions.<|reference_end|>
arxiv
@article{chatziagapi2024talkinnerf:, title={TalkinNeRF: Animatable Neural Fields for Full-Body Talking Humans}, author={Aggelina Chatziagapi and Bindita Chaudhuri and Amit Kumar and Rakesh Ranjan and Dimitris Samaras and Nikolaos Sarafianos}, journal={arXiv preprint arXiv:2409.16666}, year={2024}, archivePrefix={arXiv}, eprint={2409.16666}, primaryClass={cs.CV} }
chatziagapi2024talkinnerf:
arxiv-661676
2409.16667
A Character-Centric Creative Story Generation via Imagination
<|reference_start|>A Character-Centric Creative Story Generation via Imagination: Creative story generation with diverse and detailed story elements is a long-standing goal for large language models. While existing methodologies generate long and coherent stories, they fall significantly short of human capabilities in terms of diversity and character detail. To address this, we introduce a novel story generation framework called CCI (Character-centric Creative story generation via Imagination). CCI features two innovative modules for creative story generation: IG (Image-Guided Imagination) and MW (Multi-Writer model). In the IG module, we utilize DALL-E 3 to create visual representations of key story elements. The IG generates more novel and concrete characters, backgrounds, and main plots than text-only methods. The MW module uses these story elements created by IG to generate multiple description candidates for the protagonist and select the best one. This method incorporates vivid and rich character descriptions into the story. We compared the stories generated by CCI and baseline models through human evaluation and statistical analysis. The results showed significant improvements in the creativity. Furthermore, by enabling interactive multi-modal story generation with users, we have opened up possibilities for human-LLM integration in cultural development.<|reference_end|>
arxiv
@article{park2024a, title={A Character-Centric Creative Story Generation via Imagination}, author={Kyeongman Park and Minbeom Kim and Kyomin Jung}, journal={arXiv preprint arXiv:2409.16667}, year={2024}, archivePrefix={arXiv}, eprint={2409.16667}, primaryClass={cs.CL} }
park2024a
arxiv-661677
2409.16668
Topic-aware Causal Intervention for Counterfactual Detection
<|reference_start|>Topic-aware Causal Intervention for Counterfactual Detection: Counterfactual statements, which describe events that did not or cannot take place, are beneficial to numerous NLP applications. Hence, we consider the problem of counterfactual detection (CFD) and seek to enhance the CFD models. Previous models are reliant on clue phrases to predict counterfactuality, so they suffer from significant performance drop when clue phrase hints do not exist during testing. Moreover, these models tend to predict non-counterfactuals over counterfactuals. To address these issues, we propose to integrate neural topic model into the CFD model to capture the global semantics of the input statement. We continue to causally intervene the hidden representations of the CFD model to balance the effect of the class labels. Extensive experiments show that our approach outperforms previous state-of-the-art CFD and bias-resolving methods in both the CFD and other bias-sensitive tasks.<|reference_end|>
arxiv
@article{nguyen2024topic-aware, title={Topic-aware Causal Intervention for Counterfactual Detection}, author={Thong Nguyen and Truc-My Nguyen}, journal={arXiv preprint arXiv:2409.16668}, year={2024}, archivePrefix={arXiv}, eprint={2409.16668}, primaryClass={cs.CL} }
nguyen2024topic-aware
arxiv-661678
2409.16670
GraphLoRA: Structure-Aware Contrastive Low-Rank Adaptation for Cross-Graph Transfer Learning
<|reference_start|>GraphLoRA: Structure-Aware Contrastive Low-Rank Adaptation for Cross-Graph Transfer Learning: Graph Neural Networks (GNNs) have demonstrated remarkable proficiency in handling a range of graph analytical tasks across various domains, such as e-commerce and social networks. Despite their versatility, GNNs face significant challenges in transferability, limiting their utility in real-world applications. Existing research in GNN transfer learning overlooks discrepancies in distribution among various graph datasets, facing challenges when transferring across different distributions. How to effectively adopt a well-trained GNN to new graphs with varying feature and structural distributions remains an under-explored problem. Taking inspiration from the success of Low-Rank Adaptation (LoRA) in adapting large language models to various domains, we propose GraphLoRA, an effective and parameter-efficient method for transferring well-trained GNNs to diverse graph domains. Specifically, we first propose a Structure-aware Maximum Mean Discrepancy (SMMD) to align divergent node feature distributions across source and target graphs. Moreover, we introduce low-rank adaptation by injecting a small trainable GNN alongside the pre-trained one, effectively bridging structural distribution gaps while mitigating the catastrophic forgetting. Additionally, a structure-aware regularization objective is proposed to enhance the adaptability of the pre-trained GNN to target graph with scarce supervision labels. Extensive experiments on six real-world datasets demonstrate the effectiveness of GraphLoRA against eleven baselines by tuning only 20% of parameters, even across disparate graph domains. The code is available at https://anonymous.4open.science/r/GraphLoRA.<|reference_end|>
arxiv
@article{yang2024graphlora:, title={GraphLoRA: Structure-Aware Contrastive Low-Rank Adaptation for Cross-Graph Transfer Learning}, author={Zhe-Rui Yang and Jindong Han and Chang-Dong Wang and Hao Liu}, journal={arXiv preprint arXiv:2409.16670}, year={2024}, archivePrefix={arXiv}, eprint={2409.16670}, primaryClass={cs.LG cs.AI} }
yang2024graphlora:
arxiv-661679
2409.16671
Wildlife Product Trading in Online Social Networks: A Case Study on Ivory-Related Product Sales Promotion Posts
<|reference_start|>Wildlife Product Trading in Online Social Networks: A Case Study on Ivory-Related Product Sales Promotion Posts: Wildlife trafficking (WLT) has emerged as a global issue, with traffickers expanding their operations from offline to online platforms, utilizing e-commerce websites and social networks to enhance their illicit trade. This paper addresses the challenge of detecting and recognizing wildlife product sales promotion behaviors in online social networks, a crucial aspect in combating these environmentally harmful activities. To counter these environmentally damaging illegal operations, in this research, we focus on wildlife product sales promotion behaviors in online social networks. Specifically, 1) A scalable dataset related to wildlife product trading is collected using a network-based approach. This dataset is labeled through a human-in-the-loop machine learning process, distinguishing positive class samples containing wildlife product selling posts and hard-negatives representing normal posts misclassified as potential WLT posts, subsequently corrected by human annotators. 2) We benchmark the machine learning results on the proposed dataset and build a practical framework that automatically identifies suspicious wildlife selling posts and accounts, sufficiently leveraging the multi-modal nature of online social networks. 3) This research delves into an in-depth analysis of trading posts, shedding light on the systematic and organized selling behaviors prevalent in the current landscape. We provide detailed insights into the nature of these behaviors, contributing valuable information for understanding and countering illegal wildlife product trading.<|reference_end|>
arxiv
@article{mou2024wildlife, title={Wildlife Product Trading in Online Social Networks: A Case Study on Ivory-Related Product Sales Promotion Posts}, author={Guanyi Mou and Yun Yue and Kyumin Lee and Ziming Zhang}, journal={ICWSM 2024}, year={2024}, doi={10.1609/icwsm.v18i1.31375}, archivePrefix={arXiv}, eprint={2409.16671}, primaryClass={cs.SI cs.LG} }
mou2024wildlife
arxiv-661680
2409.16672
Stochastic Shortest Path Problem with Failure Probability
<|reference_start|>Stochastic Shortest Path Problem with Failure Probability: We solve a sequential decision-making problem under uncertainty that takes into account the failure probability of a task. This problem cannot be handled by the stochastic shortest path problem, which is the standard model for sequential decision-making. This problem is addressed by introducing dead-ends. Conventionally, we only consider policies that minimize the probability of task failure, so the optimal policy constructed could be overly conservative. In this paper, we address this issue by expanding the search range to a class of policies whose failure probability is less than a desired threshold. This problem can be solved by treating it as a framework of a Bayesian Markov decision process and a two-person zero-sum game. Also, it can be seen that the optimal policy is expressed in the form of a probability distribution on a set of deterministic policies. We also demonstrate the effectiveness of the proposed methods by applying them to a motion planning problem with obstacle avoidance for a moving robot.<|reference_end|>
arxiv
@article{otsubo2024stochastic, title={Stochastic Shortest Path Problem with Failure Probability}, author={Ritsusamuel Otsubo}, journal={arXiv preprint arXiv:2409.16672}, year={2024}, archivePrefix={arXiv}, eprint={2409.16672}, primaryClass={math.OC cs.SY eess.SY} }
otsubo2024stochastic
arxiv-661681
2409.16673
SWE2: SubWord Enriched and Significant Word Emphasized Framework for Hate Speech Detection
<|reference_start|>SWE2: SubWord Enriched and Significant Word Emphasized Framework for Hate Speech Detection: Hate speech detection on online social networks has become one of the emerging hot topics in recent years. With the broad spread and fast propagation speed across online social networks, hate speech makes significant impacts on society by increasing prejudice and hurting people. Therefore, there are aroused attention and concern from both industry and academia. In this paper, we address the hate speech problem and propose a novel hate speech detection framework called SWE2, which only relies on the content of messages and automatically identifies hate speech. In particular, our framework exploits both word-level semantic information and sub-word knowledge. It is intuitively persuasive and also practically performs well under a situation with/without character-level adversarial attack. Experimental results show that our proposed model achieves 0.975 accuracy and 0.953 macro F1, outperforming 7 state-of-the-art baselines under no adversarial attack. Our model robustly and significantly performed well under extreme adversarial attack (manipulation of 50% messages), achieving 0.967 accuracy and 0.934 macro F1.<|reference_end|>
arxiv
@article{mou2024swe2:, title={SWE2: SubWord Enriched and Significant Word Emphasized Framework for Hate Speech Detection}, author={Guanyi Mou and Pengyi Ye and Kyumin Lee}, journal={CIKM 2020}, year={2024}, doi={10.1145/3340531.3411990}, archivePrefix={arXiv}, eprint={2409.16673}, primaryClass={cs.CL cs.LG} }
mou2024swe2:
arxiv-661682
2409.16674
A Prompting-Based Representation Learning Method for Recommendation with Large Language Models
<|reference_start|>A Prompting-Based Representation Learning Method for Recommendation with Large Language Models: In recent years, Recommender Systems (RS) have witnessed a transformative shift with the advent of Large Language Models (LLMs) in the field of Natural Language Processing (NLP). Models such as GPT-3.5/4, Llama, have demonstrated unprecedented capabilities in understanding and generating human-like text. The extensive information pre-trained by these LLMs allows for the potential to capture a more profound semantic representation from different contextual information of users and items. While the great potential lies behind the thriving of LLMs, the challenge of leveraging user-item preferences from contextual information and its alignment with the improvement of Recommender Systems needs to be addressed. Believing that a better understanding of the user or item itself can be the key factor in improving recommendation performance, we conduct research on generating informative profiles using state-of-the-art LLMs. To boost the linguistic abilities of LLMs in Recommender Systems, we introduce the Prompting-Based Representation Learning Method for Recommendation (P4R). In our P4R framework, we utilize the LLM prompting strategy to create personalized item profiles. These profiles are then transformed into semantic representation spaces using a pre-trained BERT model for text embedding. Furthermore, we incorporate a Graph Convolution Network (GCN) for collaborative filtering representation. The P4R framework aligns these two embedding spaces in order to address the general recommendation tasks. In our evaluation, we compare P4R with state-of-the-art Recommender models and assess the quality of prompt-based profile generation.<|reference_end|>
arxiv
@article{chen2024a, title={A Prompting-Based Representation Learning Method for Recommendation with Large Language Models}, author={Junyi Chen and Toyotaro Suzumura}, journal={arXiv preprint arXiv:2409.16674}, year={2024}, archivePrefix={arXiv}, eprint={2409.16674}, primaryClass={cs.IR} }
chen2024a
arxiv-661683
2409.16675
CryptoTrain: Fast Secure Training on Encrypted Dataset
<|reference_start|>CryptoTrain: Fast Secure Training on Encrypted Dataset: Secure training, while protecting the confidentiality of both data and model weights, typically incurs significant training overhead. Traditional Fully Homomorphic Encryption (FHE)-based non-inter-active training models are heavily burdened by computationally demanding bootstrapping. To develop an efficient secure training system, we established a foundational framework, CryptoTrain-B, utilizing a hybrid cryptographic protocol that merges FHE with Oblivious Transfer (OT) for handling linear and non-linear operations, respectively. This integration eliminates the need for costly bootstrapping. Although CryptoTrain-B sets a new baseline in performance, reducing its training overhead remains essential. We found that ciphertext-ciphertext multiplication (CCMul) is a critical bottleneck in operations involving encrypted inputs and models. Our solution, the CCMul-Precompute technique, involves precomputing CCMul offline and resorting to the less resource-intensive ciphertext-plaintext multiplication (CPMul) during private training. Furthermore, conventional polynomial convolution in FHE systems tends to encode irrelevant and redundant values into polynomial slots, necessitating additional polynomials and ciphertexts for input representation and leading to extra multiplications. Addressing this, we introduce correlated polynomial convolution, which encodes only related input values into polynomials, thus drastically reducing the number of computations and overheads. By integrating CCMul-Precompute and correlated polynomial convolution into CryptoTrain-B, we facilitate a rapid and efficient secure training framework, CryptoTrain. Extensive experiments demonstrate that CryptoTrain achieves a ~5.3X training time reduction compared to prior methods.<|reference_end|>
arxiv
@article{xue2024cryptotrain:, title={CryptoTrain: Fast Secure Training on Encrypted Dataset}, author={Jiaqi Xue and Yancheng Zhang and Yanshan Wang and Xueqiang Wang and Hao Zheng and Qian Lou}, journal={arXiv preprint arXiv:2409.16675}, year={2024}, archivePrefix={arXiv}, eprint={2409.16675}, primaryClass={cs.CR cs.DB cs.LG} }
xue2024cryptotrain:
arxiv-661684
2409.16676
An Integrated Machine Learning and Deep Learning Framework for Credit Card Approval Prediction
<|reference_start|>An Integrated Machine Learning and Deep Learning Framework for Credit Card Approval Prediction: Credit scoring is vital in the financial industry, assessing the risk of lending to credit card applicants. Traditional credit scoring methods face challenges with large datasets and data imbalance between creditworthy and non-creditworthy applicants. This paper introduces an advanced machine learning and deep learning framework to improve the accuracy and reliability of credit card approval predictions. We utilized extensive datasets of user application records and credit history, implementing a comprehensive preprocessing strategy, feature engineering, and model integration. Our methodology combines neural networks with an ensemble of base models, including logistic regression, support vector machines, k-nearest neighbors, decision trees, random forests, and gradient boosting. The ensemble approach addresses data imbalance using Synthetic Minority Over-sampling Technique (SMOTE) and mitigates overfitting risks. Experimental results show that our integrated model surpasses traditional single-model approaches in precision, recall, F1-score, AUC, and Kappa, providing a robust and scalable solution for credit card approval predictions. This research underscores the potential of advanced machine learning techniques to transform credit risk assessment and financial decision-making.<|reference_end|>
arxiv
@article{tong2024an, title={An Integrated Machine Learning and Deep Learning Framework for Credit Card Approval Prediction}, author={Kejian Tong and Zonglin Han and Yanxin Shen and Yujian Long and Yijing Wei}, journal={arXiv preprint arXiv:2409.16676}, year={2024}, archivePrefix={arXiv}, eprint={2409.16676}, primaryClass={cs.CE} }
tong2024an
arxiv-661685
2409.16678
TSBP: Improving Object Detection in Histology Images via Test-time Self-guided Bounding-box Propagation
<|reference_start|>TSBP: Improving Object Detection in Histology Images via Test-time Self-guided Bounding-box Propagation: A global threshold (e.g., 0.5) is often applied to determine which bounding boxes should be included in the final results for an object detection task. A higher threshold reduces false positives but may result in missing a significant portion of true positives. A lower threshold can increase detection recall but may also result in more false positives. Because of this, using a preset global threshold (e.g., 0.5) applied to all the bounding box candidates may lead to suboptimal solutions. In this paper, we propose a Test-time Self-guided Bounding-box Propagation (TSBP) method, leveraging Earth Mover's Distance (EMD) to enhance object detection in histology images. TSBP utilizes bounding boxes with high confidence to influence those with low confidence, leveraging visual similarities between them. This propagation mechanism enables bounding boxes to be selected in a controllable, explainable, and robust manner, which surpasses the effectiveness of using simple thresholds and uncertainty calibration methods. Importantly, TSBP does not necessitate additional labeled samples for model training or parameter estimation, unlike calibration methods. We conduct experiments on gland detection and cell detection tasks in histology images. The results show that our proposed TSBP significantly improves detection outcomes when working in conjunction with state-of-the-art deep learning-based detection networks. Compared to other methods such as uncertainty calibration, TSBP yields more robust and accurate object detection predictions while using no additional labeled samples. The code is available at https://github.com/jwhgdeu/TSBP.<|reference_end|>
arxiv
@article{yang2024tsbp:, title={TSBP: Improving Object Detection in Histology Images via Test-time Self-guided Bounding-box Propagation}, author={Tingting Yang and Liang Xiao and Yizhe Zhang}, journal={arXiv preprint arXiv:2409.16678}, year={2024}, archivePrefix={arXiv}, eprint={2409.16678}, primaryClass={eess.IV cs.AI cs.CV cs.LG} }
yang2024tsbp:
arxiv-661686
2409.16680
Online 6DoF Pose Estimation in Forests using Cross-View Factor Graph Optimisation and Deep Learned Re-localisation
<|reference_start|>Online 6DoF Pose Estimation in Forests using Cross-View Factor Graph Optimisation and Deep Learned Re-localisation: This paper presents a novel approach for robust global localisation and 6DoF pose estimation of ground robots in forest environments by leveraging cross-view factor graph optimisation and deep-learned re-localisation. The proposed method addresses the challenges of aligning aerial and ground data for pose estimation, which is crucial for accurate point-to-point navigation in GPS-denied environments. By integrating information from both perspectives into a factor graph framework, our approach effectively estimates the robot's global position and orientation. We validate the performance of our method through extensive experiments in diverse forest scenarios, demonstrating its superiority over existing baselines in terms of accuracy and robustness in these challenging environments. Experimental results show that our proposed localisation system can achieve drift-free localisation with bounded positioning errors, ensuring reliable and safe robot navigation under canopies.<|reference_end|>
arxiv
@article{delima2024online, title={Online 6DoF Pose Estimation in Forests using Cross-View Factor Graph Optimisation and Deep Learned Re-localisation}, author={Lucas Carvalho de Lima and Ethan Griffiths and Maryam Haghighat and Simon Denman and Clinton Fookes and Paulo Borges and Michael Br\"unig and Milad Ramezani}, journal={arXiv preprint arXiv:2409.16680}, year={2024}, archivePrefix={arXiv}, eprint={2409.16680}, primaryClass={cs.RO} }
delima2024online
arxiv-661687
2409.16681
Emotional Dimension Control in Language Model-Based Text-to-Speech: Spanning a Broad Spectrum of Human Emotions
<|reference_start|>Emotional Dimension Control in Language Model-Based Text-to-Speech: Spanning a Broad Spectrum of Human Emotions: Current emotional text-to-speech (TTS) systems face challenges in mimicking a broad spectrum of human emotions due to the inherent complexity of emotions and limitations in emotional speech datasets and models. This paper proposes a TTS framework that facilitates control over pleasure, arousal, and dominance, and can synthesize a diversity of emotional styles without requiring any emotional speech data during TTS training. We train an emotional attribute predictor using only categorical labels from speech data, aligning with psychological research and incorporating anchored dimensionality reduction on self-supervised learning (SSL) features. The TTS framework converts text inputs into phonetic tokens via an autoregressive language model and uses pseudo-emotional dimensions to guide the parallel prediction of fine-grained acoustic details. Experiments conducted on the LibriTTS dataset demonstrate that our framework can synthesize speech with enhanced naturalness and a variety of emotional styles by effectively controlling emotional dimensions, even without the inclusion of any emotional speech during TTS training.<|reference_end|>
arxiv
@article{zhou2024emotional, title={Emotional Dimension Control in Language Model-Based Text-to-Speech: Spanning a Broad Spectrum of Human Emotions}, author={Kun Zhou, You Zhang, Shengkui Zhao, Hao Wang, Zexu Pan, Dianwen Ng, Chong Zhang, Chongjia Ni, Yukun Ma, Trung Hieu Nguyen, Jia Qi Yip, Bin Ma}, journal={arXiv preprint arXiv:2409.16681}, year={2024}, archivePrefix={arXiv}, eprint={2409.16681}, primaryClass={eess.AS cs.CL cs.SD} }
zhou2024emotional
arxiv-661688
2409.16682
SynTQA: Synergistic Table-based Question Answering via Mixture of Text-to-SQL and E2E TQA
<|reference_start|>SynTQA: Synergistic Table-based Question Answering via Mixture of Text-to-SQL and E2E TQA: Text-to-SQL parsing and end-to-end question answering (E2E TQA) are two main approaches for the Table-based Question Answering task. Despite success on multiple benchmarks, they have yet to be compared and their synergy remains unexplored. In this paper, we identify different strengths and weaknesses through evaluating state-of-the-art models on benchmark datasets: Text-to-SQL demonstrates superiority in handling questions involving arithmetic operations and long tables; E2E TQA excels in addressing ambiguous questions, non-standard table schema, and complex table contents. To combine both strengths, we propose a Synergistic Table-based Question Answering approach that integrates different models via answer selection, which is agnostic to any model types. Further experiments validate that ensembling models by either a feature-based or an LLM-based answer selector significantly improves the performance over individual models.<|reference_end|>
arxiv
@article{zhang2024syntqa, title={SynTQA: Synergistic Table-based Question Answering via Mixture of Text-to-SQL and E2E TQA}, author={Siyue Zhang, Anh Tuan Luu, Chen Zhao}, journal={arXiv preprint arXiv:2409.16682}, year={2024}, archivePrefix={arXiv}, eprint={2409.16682}, primaryClass={cs.CL} }
zhang2024syntqa
arxiv-661689
2409.16684
Erase then Rectify: A Training-Free Parameter Editing Approach for Cost-Effective Graph Unlearning
<|reference_start|>Erase then Rectify: A Training-Free Parameter Editing Approach for Cost-Effective Graph Unlearning: Graph unlearning, which aims to eliminate the influence of specific nodes, edges, or attributes from a trained Graph Neural Network (GNN), is essential in applications where privacy, bias, or data obsolescence is a concern. However, existing graph unlearning techniques often necessitate additional training on the remaining data, leading to significant computational costs, particularly with large-scale graphs. To address these challenges, we propose a two-stage training-free approach, Erase then Rectify (ETR), designed for efficient and scalable graph unlearning while preserving the model utility. Specifically, we first build a theoretical foundation showing that masking parameters critical for unlearned samples enables effective unlearning. Building on this insight, the Erase stage strategically edits model parameters to eliminate the impact of unlearned samples and their propagated influence on intercorrelated nodes. To further ensure the GNN's utility, the Rectify stage devises a gradient approximation method to estimate the model's gradient on the remaining dataset, which is then used to enhance model performance. Overall, ETR achieves graph unlearning without additional training or full training data access, significantly reducing computational overhead and preserving data privacy. Extensive experiments on seven public datasets demonstrate the consistent superiority of ETR in model utility, unlearning efficiency, and unlearning effectiveness, establishing it as a promising solution for real-world graph unlearning challenges.<|reference_end|>
arxiv
@article{yang2024erase, title={Erase then Rectify: A Training-Free Parameter Editing Approach for Cost-Effective Graph Unlearning}, author={Zhe-Rui Yang, Jindong Han, Chang-Dong Wang, Hao Liu}, journal={arXiv preprint arXiv:2409.16684}, year={2024}, archivePrefix={arXiv}, eprint={2409.16684}, primaryClass={cs.LG cs.AI} }
yang2024erase
arxiv-661690
2409.16685
Skyeyes: Ground Roaming using Aerial View Images
<|reference_start|>Skyeyes: Ground Roaming using Aerial View Images: Integrating aerial imagery-based scene generation into applications like autonomous driving and gaming enhances realism in 3D environments, but challenges remain in creating detailed content for occluded areas and ensuring real-time, consistent rendering. In this paper, we introduce Skyeyes, a novel framework that can generate photorealistic sequences of ground view images using only aerial view inputs, thereby creating a ground roaming experience. More specifically, we combine a 3D representation with a view consistent generation model, which ensures coherence between generated images. This method allows for the creation of geometrically consistent ground view images, even with large view gaps. The images maintain improved spatial-temporal coherence and realism, enhancing scene comprehension and visualization from aerial perspectives. To the best of our knowledge, there are no publicly available datasets that contain pairwise geo-aligned aerial and ground view imagery. Therefore, we build a large, synthetic, and geo-aligned dataset using Unreal Engine. Both qualitative and quantitative analyses on this synthetic dataset display superior results compared to other leading synthesis approaches. See the project page for more results: https://chaoren2357.github.io/website-skyeyes/.<|reference_end|>
arxiv
@article{gao2024skyeyes, title={Skyeyes: Ground Roaming using Aerial View Images}, author={Zhiyuan Gao, Wenbin Teng, Gonglin Chen, Jinsen Wu, Ningli Xu, Rongjun Qin, Andrew Feng, Yajie Zhao}, journal={arXiv preprint arXiv:2409.16685}, year={2024}, archivePrefix={arXiv}, eprint={2409.16685}, primaryClass={cs.CV} }
gao2024skyeyes
arxiv-661691
2409.16686
MSI-Agent: Incorporating Multi-Scale Insight into Embodied Agents for Superior Planning and Decision-Making
<|reference_start|>MSI-Agent: Incorporating Multi-Scale Insight into Embodied Agents for Superior Planning and Decision-Making: Long-term memory is significant for agents, in which insights play a crucial role. However, the emergence of irrelevant insight and the lack of general insight can greatly undermine the effectiveness of insight. To solve this problem, in this paper, we introduce Multi-Scale Insight Agent (MSI-Agent), an embodied agent designed to improve LLMs' planning and decision-making ability by summarizing and utilizing insight effectively across different scales. MSI achieves this through the experience selector, insight generator, and insight selector. Leveraging a three-part pipeline, MSI can generate task-specific and high-level insight, store it in a database, and then use relevant insight from it to aid in decision-making. Our experiments show that MSI outperforms another insight strategy when planning by GPT3.5. Moreover, we delve into the strategies for selecting seed experience and insight, aiming to provide LLM with more useful and relevant insight for better decision-making. Our observations also indicate that MSI exhibits better robustness when facing domain-shifting scenarios.<|reference_end|>
arxiv
@article{fu2024msi-agent, title={MSI-Agent: Incorporating Multi-Scale Insight into Embodied Agents for Superior Planning and Decision-Making}, author={Dayuan Fu, Biqing Qi, Yihuai Gao, Che Jiang, Guanting Dong, Bowen Zhou}, journal={EMNLP 2024 Main}, year={2024}, archivePrefix={arXiv}, eprint={2409.16686}, primaryClass={cs.AI cs.CL} }
fu2024msi-agent
arxiv-661692
2409.16688
Cycle Counting under Local Differential Privacy for Degeneracy-bounded Graphs
<|reference_start|>Cycle Counting under Local Differential Privacy for Degeneracy-bounded Graphs: We propose an algorithm for counting the number of cycles under local differential privacy for degeneracy-bounded input graphs. Numerous studies have focused on counting the number of triangles under the privacy notion, demonstrating that the expected $\ell_2$-error of these algorithms is $\Omega(n^{1.5})$, where $n$ is the number of nodes in the graph. When parameterized by the number of cycles of length four ($C_4$), the best existing triangle counting algorithm has an error of $O(n^{1.5} + \sqrt{C_4}) = O(n^2)$. In this paper, we introduce an algorithm with an expected $\ell_2$-error of $O(\delta^{1.5} n^{0.5} + \delta^{0.5} d_{\max}^{0.5} n^{0.5})$, where $\delta$ is the degeneracy and $d_{\max}$ is the maximum degree of the graph. For degeneracy-bounded graphs ($\delta \in \Theta(1)$) commonly found in practical social networks, our algorithm achieves an expected $\ell_2$-error of $O(d_{\max}^{0.5} n^{0.5}) = O(n)$. Our algorithm's core idea is a precise count of triangles following a preprocessing step that approximately sorts the degree of all nodes. This approach can be extended to approximate the number of cycles of length $k$, maintaining a similar $\ell_2$-error, namely $O(\delta^{(k-2)/2} d_{\max}^{0.5} n^{(k-2)/2} + \delta^{k/2} n^{(k-2)/2})$ or $O(d_{\max}^{0.5} n^{(k-2)/2}) = O(n^{(k-1)/2})$ for degeneracy-bounded graphs.<|reference_end|>
arxiv
@article{hillebrand2024cycle, title={Cycle Counting under Local Differential Privacy for Degeneracy-bounded Graphs}, author={Quentin Hillebrand, Vorapong Suppakitpaisarn, Tetsuo Shibuya}, journal={arXiv preprint arXiv:2409.16688}, year={2024}, archivePrefix={arXiv}, eprint={2409.16688}, primaryClass={cs.CR cs.DS} }
hillebrand2024cycle
arxiv-661693
2409.16689
Layout-Corrector: Alleviating Layout Sticking Phenomenon in Discrete Diffusion Model
<|reference_start|>Layout-Corrector: Alleviating Layout Sticking Phenomenon in Discrete Diffusion Model: Layout generation is a task to synthesize a harmonious layout with elements characterized by attributes such as category, position, and size. Human designers experiment with the placement and modification of elements to create aesthetic layouts; however, we observed that current discrete diffusion models (DDMs) struggle to correct inharmonious layouts after they have been generated. In this paper, we first provide novel insights into the layout sticking phenomenon in DDMs and then propose a simple yet effective layout-assessment module, Layout-Corrector, which works in conjunction with existing DDMs to address the layout sticking problem. We present a learning-based module capable of identifying inharmonious elements within layouts, considering overall layout harmony characterized by complex composition. During the generation process, Layout-Corrector evaluates the correctness of each token in the generated layout, reinitializing those with low scores to the ungenerated state. The DDM then uses the high-scored tokens as clues to regenerate the harmonized tokens. Layout-Corrector, tested on common benchmarks, consistently boosts layout-generation performance when used in conjunction with various state-of-the-art DDMs. Furthermore, our extensive analysis demonstrates that the Layout-Corrector (1) successfully identifies erroneous tokens, (2) facilitates control over the fidelity-diversity trade-off, and (3) significantly mitigates the performance drop associated with fast sampling.<|reference_end|>
arxiv
@article{iwai2024layout-corrector, title={Layout-Corrector: Alleviating Layout Sticking Phenomenon in Discrete Diffusion Model}, author={Shoma Iwai, Atsuki Osanai, Shunsuke Kitada, Shinichiro Omachi}, journal={arXiv preprint arXiv:2409.16689}, year={2024}, archivePrefix={arXiv}, eprint={2409.16689}, primaryClass={cs.CV cs.AI cs.GR cs.LG} }
iwai2024layout-corrector
arxiv-661694
2409.16693
CaBRNet, an open-source library for developing and evaluating Case-Based Reasoning Models
<|reference_start|>CaBRNet, an open-source library for developing and evaluating Case-Based Reasoning Models: In the field of explainable AI, a vibrant effort is dedicated to the design of self-explainable models, as a more principled alternative to post-hoc methods that attempt to explain the decisions after a model opaquely makes them. However, this productive line of research suffers from common downsides: lack of reproducibility, unfeasible comparison, diverging standards. In this paper, we propose CaBRNet, an open-source, modular, backward-compatible framework for Case-Based Reasoning Networks: https://github.com/aiser-team/cabrnet.<|reference_end|>
arxiv
@article{xu-darme2024cabrnet, title={CaBRNet, an open-source library for developing and evaluating Case-Based Reasoning Models}, author={Romain Xu-Darme (LSL), Aymeric Varasse (LSL), Alban Grastien (LSL), Julien Girard (LSL), Zakaria Chihani (LSL)}, journal={xAI 2024 - The 2nd World Conference on eXplainable Artificial Intelligence, Jul 2024, La valette, Malta. pp.TBD}, year={2024}, archivePrefix={arXiv}, eprint={2409.16693}, primaryClass={cs.AI} }
xu-darme2024cabrnet
arxiv-661695
2409.16694
A Survey of Low-bit Large Language Models: Basics, Systems, and Algorithms
<|reference_start|>A Survey of Low-bit Large Language Models: Basics, Systems, and Algorithms: Large language models (LLMs) have achieved remarkable advancements in natural language processing, showcasing exceptional performance across various tasks. However, the expensive memory and computational requirements present significant challenges for their practical deployment. Low-bit quantization has emerged as a critical approach to mitigate these challenges by reducing the bit-width of model parameters, activations, and gradients, thus decreasing memory usage and computational demands. This paper presents a comprehensive survey of low-bit quantization methods tailored for LLMs, covering the fundamental principles, system implementations, and algorithmic strategies. An overview of basic concepts and new data formats specific to low-bit LLMs is first introduced, followed by a review of frameworks and systems that facilitate low-bit LLMs across various hardware platforms. Then, we categorize and analyze techniques and toolkits for efficient low-bit training and inference of LLMs. Finally, we conclude with a discussion of future trends and potential advancements of low-bit LLMs. Our systematic overview from basic, system, and algorithm perspectives can offer valuable insights and guidelines for future works to enhance the efficiency and applicability of LLMs through low-bit quantization.<|reference_end|>
arxiv
@article{gong2024a, title={A Survey of Low-bit Large Language Models: Basics, Systems, and Algorithms}, author={Ruihao Gong, Yifu Ding, Zining Wang, Chengtao Lv, Xingyu Zheng, Jinyang Du, Haotong Qin, Jinyang Guo, Michele Magno, Xianglong Liu}, journal={arXiv preprint arXiv:2409.16694}, year={2024}, archivePrefix={arXiv}, eprint={2409.16694}, primaryClass={cs.AI cs.CL cs.LG} }
gong2024a
arxiv-661696
2409.16695
In which fields can ChatGPT detect journal article quality? An evaluation of REF2021 results
<|reference_start|>In which fields can ChatGPT detect journal article quality? An evaluation of REF2021 results: Time spent by academics on research quality assessment might be reduced if automated approaches can help. Whilst citation-based indicators have been extensively developed and evaluated for this, they have substantial limitations and Large Language Models (LLMs) like ChatGPT provide an alternative approach. This article assesses whether ChatGPT 4o-mini can be used to estimate the quality of journal articles across academia. It samples up to 200 articles from all 34 Units of Assessment (UoAs) in the UK's Research Excellence Framework (REF) 2021, comparing ChatGPT scores with departmental average scores. There was an almost universally positive Spearman correlation between ChatGPT scores and departmental averages, varying between 0.08 (Philosophy) and 0.78 (Psychology, Psychiatry and Neuroscience), except for Clinical Medicine (rho=-0.12). Although other explanations are possible, especially because REF score profiles are public, the results suggest that LLMs can provide reasonable research quality estimates in most areas of science, and particularly the physical and health sciences and engineering, even before citation data is available. Nevertheless, ChatGPT assessments seem to be more positive for most health and physical sciences than for other fields, a concern for multidisciplinary assessments, and the ChatGPT scores are only based on titles and abstracts, so cannot be research evaluations.<|reference_end|>
arxiv
@article{thelwall2024in, title={In which fields can ChatGPT detect journal article quality? An evaluation of REF2021 results}, author={Mike Thelwall, Abdallah Yaghi}, journal={arXiv preprint arXiv:2409.16695}, year={2024}, archivePrefix={arXiv}, eprint={2409.16695}, primaryClass={cs.DL} }
thelwall2024in
arxiv-661697
2409.16697
Numerical Approximation Capacity of Neural Networks with Bounded Parameters: Do Limits Exist, and How Can They Be Measured?
<|reference_start|>Numerical Approximation Capacity of Neural Networks with Bounded Parameters: Do Limits Exist, and How Can They Be Measured?: The Universal Approximation Theorem posits that neural networks can theoretically possess unlimited approximation capacity with a suitable activation function and a freely chosen or trained set of parameters. However, a more practical scenario arises when these neural parameters, especially the nonlinear weights and biases, are bounded. This leads us to question: \textbf{Does the approximation capacity of a neural network remain universal, or does it have a limit when the parameters are practically bounded? And if it has a limit, how can it be measured?} Our theoretical study indicates that while universal approximation is theoretically feasible, in practical numerical scenarios, Deep Neural Networks (DNNs) with any analytic activation functions (such as Tanh and Sigmoid) can only be approximated by a finite-dimensional vector space under a bounded nonlinear parameter space (NP space), whether in a continuous or discrete sense. Based on this study, we introduce the concepts of \textit{$\epsilon$ outer measure} and \textit{Numerical Span Dimension (NSdim)} to quantify the approximation capacity limit of a family of networks both theoretically and practically. Furthermore, drawing on our new theoretical study and adopting a fresh perspective, we strive to understand the relationship between back-propagation neural networks and random parameter networks (such as the Extreme Learning Machine (ELM)) with both finite and infinite width. We also aim to provide fresh insights into regularization, the trade-off between width and depth, parameter space, width redundancy, condensation, and other related important issues.<|reference_end|>
arxiv
@article{liu2024numerical, title={Numerical Approximation Capacity of Neural Networks with Bounded Parameters: Do Limits Exist, and How Can They Be Measured?}, author={Li Liu, Tengchao Yu, Heng Yong}, journal={arXiv preprint arXiv:2409.16697}, year={2024}, archivePrefix={arXiv}, eprint={2409.16697}, primaryClass={cs.LG} }
liu2024numerical
arxiv-661698
2409.16700
A Learning Support Method for Multi-threaded Programs Using Trace Tables
<|reference_start|>A Learning Support Method for Multi-threaded Programs Using Trace Tables: Multi-threaded programs are expected to improve responsiveness and conserve resources by dividing an application process into multiple threads for concurrent processing. However, due to scheduling and the interaction of multiple threads, their runtime behavior is more complex than that of single-threaded programs, which makes debugging difficult unless the concepts specific to multi-threaded programs and the execution order of instructions are understood. In this paper, we propose a learning tool for multi-threaded programs using trace tables.<|reference_end|>
arxiv
@article{murata2024a, title={A Learning Support Method for Multi-threaded Programs Using Trace Tables}, author={Takumi Murata and Hiroaki Hashiura}, journal={arXiv preprint arXiv:2409.16700}, year={2024}, archivePrefix={arXiv}, eprint={2409.16700}, primaryClass={cs.SE} }
murata2024a
arxiv-661699
2409.16701
Unit Test Generation for Vulnerability Exploitation in Java Third-Party Libraries
<|reference_start|>Unit Test Generation for Vulnerability Exploitation in Java Third-Party Libraries: Open-source third-party libraries are widely used in software development. These libraries offer substantial advantages in terms of time and resource savings. However, a significant concern arises due to the publicly disclosed vulnerabilities within these libraries. Existing automated vulnerability detection tools often suffer from false positives and fail to accurately assess the propagation of inputs capable of triggering vulnerabilities from client projects to vulnerable code in libraries. In this paper, we propose a novel approach called VULEUT (Vulnerability Exploit Unit Test Generation), which combines vulnerability exploitation reachability analysis and LLM-based unit test generation. VULEUT is designed to automatically verify the exploitability of vulnerabilities in third-party libraries commonly used in client software projects. VULEUT first analyzes the client projects to determine the reachability of vulnerability conditions. It then leverages a Large Language Model (LLM) to generate unit tests for vulnerability confirmation. To evaluate the effectiveness of VULEUT, we collect 32 vulnerabilities from various third-party libraries and conduct experiments on 70 real client projects. Besides, we also compare our approach with two representative tools, i.e., TRANSFER and VESTA. Our results demonstrate the effectiveness of VULEUT, with 229 out of 292 generated unit tests successfully confirming vulnerability exploitation across 70 client projects, which outperforms baselines by 24%.<|reference_end|>
arxiv
@article{gao2024unit, title={Unit Test Generation for Vulnerability Exploitation in Java Third-Party Libraries}, author={Yi Gao, Xing Hu, Zirui Chen, Xiaohu Yang and Xin Xia}, journal={arXiv preprint arXiv:2409.16701}, year={2024}, archivePrefix={arXiv}, eprint={2409.16701}, primaryClass={cs.SE} }
gao2024unit
arxiv-661700
2409.16702
3DDX: Bone Surface Reconstruction from a Single Standard-Geometry Radiograph via Dual-Face Depth Estimation
<|reference_start|>3DDX: Bone Surface Reconstruction from a Single Standard-Geometry Radiograph via Dual-Face Depth Estimation: Radiography is widely used in orthopedics for its affordability and low radiation exposure. 3D reconstruction from a single radiograph, so-called 2D-3D reconstruction, offers the possibility of various clinical applications, but achieving clinically viable accuracy and computational efficiency is still an unsolved challenge. Unlike other areas in computer vision, X-ray imaging's unique properties, such as ray penetration and fixed geometry, have not been fully exploited. We propose a novel approach that simultaneously learns multiple depth maps (front- and back-surface of multiple bones) derived from the X-ray image to computed tomography registration. The proposed method not only leverages the fixed geometry characteristic of X-ray imaging but also enhances the precision of the reconstruction of the whole surface. Our study involved 600 CT and 2651 X-ray images (4 to 5 posed X-ray images per patient), demonstrating our method's superiority over traditional approaches with a surface reconstruction error reduction from 4.78 mm to 1.96 mm. This significant accuracy improvement and enhanced computational efficiency suggest our approach's potential for clinical application.<|reference_end|>
arxiv
@article{gu20243ddx, title={3DDX: Bone Surface Reconstruction from a Single Standard-Geometry Radiograph via Dual-Face Depth Estimation}, author={Yi Gu, Yoshito Otake, Keisuke Uemura, Masaki Takao, Mazen Soufi, Seiji Okada, Nobuhiko Sugano, Hugues Talbot, Yoshinobu Sato}, journal={arXiv preprint arXiv:2409.16702}, year={2024}, archivePrefix={arXiv}, eprint={2409.16702}, primaryClass={eess.IV cs.CV} }
gu20243ddx