corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-660601
|
2409.14680
|
S2O: An Integrated Driving Decision-making Performance Evaluation Method Bridging Subjective Feeling to Objective Evaluation
|
<|reference_start|>S2O: An Integrated Driving Decision-making Performance Evaluation Method Bridging Subjective Feeling to Objective Evaluation: Autonomous driving decision-making is one of the critical modules of intelligent transportation systems, and how to evaluate driving performance comprehensively and precisely is a crucial challenge. A biased evaluation misleads and hinders decision-making modification and development. Current planning evaluation metrics include deviation from the real driver trajectory and objective driving experience indicators. The former category does not necessarily indicate good driving performance, since human drivers also make errors, and it has been proven ineffective in interactive closed-loop systems. On the other hand, existing objective driving experience models consider only limited factors and thus lack comprehensiveness, while the mechanism for integrating the various factors relies on intuitive experience and thus lacks precision. In this research, we propose S2O, a novel integrated decision-making evaluation method bridging subjective human feeling to objective evaluation. First, modified fundamental models of four kinds of driving factors (safety, time efficiency, comfort, and energy efficiency) are established to cover common driving factors. Then, based on an analysis of the regularity of the human rating distribution, a segmental linear fitting model in conjunction with a complementary SVM segment classifier is designed to express humans' subjective ratings in terms of objective driving factors. Experiments are conducted on the D2E dataset, which includes approximately 1,000 driving cases and 40,000 human rating scores. Results show that S2O achieves a mean absolute error of 4.58 against ground truth on a percentage scale. Compared with baselines, the evaluation error is reduced by 32.55%. Implementation on the SUMO platform demonstrates the real-time efficiency of online evaluation, and validation on the performance evaluation of three autonomous driving planning algorithms demonstrates its feasibility.<|reference_end|>
|
arxiv
|
@article{wang2024s2o:,
title={S2O: An Integrated Driving Decision-making Performance Evaluation Method
Bridging Subjective Feeling to Objective Evaluation},
author={Yuning Wang and Zehong Ke and Yanbo Jiang and Jinhao Li and Shaobing
Xu and John M. Dolan and Jianqiang Wang},
journal={arXiv preprint arXiv:2409.14680},
year={2024},
archivePrefix={arXiv},
eprint={2409.14680},
primaryClass={cs.RO cs.HC}
}
|
wang2024s2o:
|
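The S2O abstract above describes a segmental linear fitting model that maps objective driving factors to human ratings. As an illustrative sketch only (the breakpoint search, toy data, and function names here are invented, not the paper's method), a piecewise-linear fit with a single breakpoint chosen by exhaustive search can be written as:

```python
def linfit(xs, ys):
    """Ordinary least-squares line y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom if denom else 0.0
    return my - b * mx, b

def sse(xs, ys, a, b):
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def segmental_fit(xs, ys):
    """Try every interior breakpoint, fit a line on each side,
    and keep the split with minimum total squared error."""
    best = None
    for k in range(2, len(xs) - 1):
        left, right = linfit(xs[:k], ys[:k]), linfit(xs[k:], ys[k:])
        err = sse(xs[:k], ys[:k], *left) + sse(xs[k:], ys[k:], *right)
        if best is None or err < best[0]:
            best = (err, k, left, right)
    return best

# V-shaped toy data with a kink at x = 4
xs = list(range(10))
ys = [float(x) for x in range(5)] + [float(8 - x) for x in range(5, 10)]
err, k, left, right = segmental_fit(xs, ys)
```

The paper additionally trains an SVM to classify which segment a case belongs to; here the segments are simply split by the fitted breakpoint.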
arxiv-660602
|
2409.14682
|
Robust Training Objectives Improve Embedding-based Retrieval in Industrial Recommendation Systems
|
<|reference_start|>Robust Training Objectives Improve Embedding-based Retrieval in Industrial Recommendation Systems: Improving recommendation systems (RS) can greatly enhance the user experience across many domains, such as social media. Many RS utilize embedding-based retrieval (EBR) approaches to retrieve candidates for recommendation. In an EBR system, the embedding quality is key. According to recent literature, self-supervised multitask learning (SSMTL) has shown strong performance on academic benchmarks in embedding learning and has resulted in overall improvements on multiple downstream tasks, demonstrating greater resilience to the adverse interactions between downstream tasks and thereby increased robustness and task generalization ability through the training objective. However, whether the success of SSMTL in academia as a robust training objective translates to large-scale (i.e., hundreds of millions of users and the interactions between them) industrial RS still requires verification. Simply adopting academic setups in industrial RS might entail two issues. Firstly, many self-supervised objectives require data augmentations (e.g., embedding masking/corruption) over a large portion of users and items, which is prohibitively expensive in industrial RS. Furthermore, some self-supervised objectives might not align with the recommendation task, which might lead to redundant computational overheads or negative transfer. In light of these two challenges, we evaluate using a robust training objective, specifically SSMTL, in a large-scale friend recommendation system on a social media platform in the tech sector, identifying whether this increase in robustness can work at scale to enhance retrieval in the production setting. 
Through online A/B testing with SSMTL-based EBR, we observe statistically significant increases in key metrics in the friend recommendations, with up to 5.45% improvements in new friends made and 1.91% improvements in new friends made with cold-start users.<|reference_end|>
|
arxiv
|
@article{kolodner2024robust,
title={Robust Training Objectives Improve Embedding-based Retrieval in
Industrial Recommendation Systems},
author={Matthew Kolodner and Mingxuan Ju and Zihao Fan and Tong Zhao and
Elham Ghazizadeh and Yan Wu and Neil Shah and Yozen Liu},
journal={arXiv preprint arXiv:2409.14682},
year={2024},
archivePrefix={arXiv},
eprint={2409.14682},
primaryClass={cs.IR cs.LG}
}
|
kolodner2024robust
|
arxiv-660603
|
2409.14683
|
Reducing the Footprint of Multi-Vector Retrieval with Minimal Performance Impact via Token Pooling
|
<|reference_start|>Reducing the Footprint of Multi-Vector Retrieval with Minimal Performance Impact via Token Pooling: Over the last few years, multi-vector retrieval methods, spearheaded by ColBERT, have become an increasingly popular approach to Neural IR. By storing representations at the token level rather than at the document level, these methods have demonstrated very strong retrieval performance, especially in out-of-domain settings. However, the storage and memory requirements necessary to store the large number of associated vectors remain an important drawback, hindering practical adoption. In this paper, we introduce a simple clustering-based token pooling approach to aggressively reduce the number of vectors that need to be stored. This method can reduce the space and memory footprint of ColBERT indexes by 50% with virtually no retrieval performance degradation. It also allows for further reductions, cutting the vector count by 66% to 75%, with degradation remaining below 5% on the vast majority of datasets. Importantly, this approach requires no architectural change or query-time processing, and can be used as a simple drop-in during indexation with any ColBERT-like model.<|reference_end|>
|
arxiv
|
@article{clavié2024reducing,
title={Reducing the Footprint of Multi-Vector Retrieval with Minimal
Performance Impact via Token Pooling},
author={Benjamin Clavi\'e and Antoine Chaffin and Griffin Adams},
journal={arXiv preprint arXiv:2409.14683},
year={2024},
archivePrefix={arXiv},
eprint={2409.14683},
primaryClass={cs.IR cs.AI cs.CL}
}
|
clavié2024reducing
|
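The abstract above reduces ColBERT's vector count by clustering token vectors and pooling each cluster. A minimal sketch of that idea, assuming a greedy cosine-similarity clustering with mean pooling (the paper reportedly uses hierarchical clustering; the threshold and names here are illustrative):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def pool_tokens(vectors, threshold=0.8):
    """Greedily assign each token vector to the first cluster whose centroid
    is within `threshold` cosine similarity, then mean-pool each cluster."""
    clusters = []
    for vec in vectors:
        for members in clusters:
            centroid = [sum(c) / len(members) for c in zip(*members)]
            if cosine(vec, centroid) >= threshold:
                members.append(vec)
                break
        else:
            clusters.append([vec])
    # one mean-pooled vector per cluster
    return [[sum(c) / len(m) for c in zip(*m)] for m in clusters]

# four token vectors in two obvious directions -> two pooled vectors
tokens = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0], [0.02, 0.98]]
pooled = pool_tokens(tokens)
```

In a real ColBERT index the vectors would be high-dimensional token embeddings and the pooling factor (e.g. 2x or 3x) would control the storage reduction.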
arxiv-660604
|
2409.14685
|
Near-field Beam Focusing under Discrete Phase Shifters
|
<|reference_start|>Near-field Beam Focusing under Discrete Phase Shifters: Extremely large-scale arrays (XL-arrays) have emerged as a promising technology for enabling near-field communications in future wireless systems. However, the huge number of antennas poses demanding challenges in terms of hardware cost and energy consumption, especially when the antennas employ high-resolution phase shifters (PSs). To address this issue, in this paper, we consider discrete PSs at the XL-array, which are practically more energy efficient, and investigate the impact of PS resolution on the near-field beam-focusing effect. To this end, we propose a new Fourier series expansion method to efficiently tackle the difficulty in characterising the beam pattern properties under phase quantization. Interestingly, we analytically show, for the first time, that 1) discrete PSs introduce additional grating lobes; 2) the main lobe still exhibits the beam-focusing effect, with its beam power increasing with PS resolution; and 3) there are two types of grating lobes, featuring the beam-focusing and beam-steering effects, respectively. Finally, numerical results demonstrate that the grating lobes generally degrade communication performance. However, low-resolution 3-bit PSs can achieve beam pattern and rate performance similar to the continuous-PS counterpart, while attaining much higher energy efficiency.<|reference_end|>
|
arxiv
|
@article{zhang2024near-field,
title={Near-field Beam Focusing under Discrete Phase Shifters},
author={Haodong Zhang and Changsheng You and Cong Zhou},
journal={arXiv preprint arXiv:2409.14685},
year={2024},
archivePrefix={arXiv},
eprint={2409.14685},
primaryClass={cs.IT eess.SP math.IT}
}
|
zhang2024near-field
|
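The abstract above studies how phase-shifter resolution affects near-field beam focusing. As a toy numerical check only (not the paper's Fourier-series analysis; the quadratic phase profile and array size are made up), one can quantize an ideal phase profile to b bits and compare the resulting coherent array gain:

```python
import cmath
import math

def quantize(phase, bits):
    """Round a phase to the nearest level of a 2**bits-level phase shifter."""
    step = 2 * math.pi / (2 ** bits)
    return round(phase / step) * step

def array_gain(ideal, applied):
    """Normalized coherent gain |mean of exp(j*(applied - ideal))|."""
    n = len(ideal)
    return abs(sum(cmath.exp(1j * (a - p)) for p, a in zip(ideal, applied))) / n

# made-up quadratic (near-field-style) phase profile for a 32-element array
ideal = [2 * math.pi * (i * i) / 64 for i in range(32)]
g_cont = array_gain(ideal, ideal)                           # continuous phase shifters
g_3bit = array_gain(ideal, [quantize(p, 3) for p in ideal])
g_1bit = array_gain(ideal, [quantize(p, 1) for p in ideal])
```

This mirrors the abstract only qualitatively: 3-bit quantization stays close to the continuous case, while coarser quantization loses coherent gain.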
arxiv-660605
|
2409.14688
|
A Generalized Control Revision Method for Autonomous Driving Safety
|
<|reference_start|>A Generalized Control Revision Method for Autonomous Driving Safety: Safety is one of the most crucial challenges for autonomous driving vehicles, and one solution to guarantee safety is to employ an additional control revision module after the planning backbone. The Control Barrier Function (CBF) has been widely used because of its strong mathematical foundation for safety. However, incompatibility with heterogeneous perception data and incomplete consideration of traffic scene elements make existing systems hard to apply in dynamic and complex real-world scenarios. In this study, we introduce a generalized control revision method for autonomous driving safety, which adopts both vectorized perception and an occupancy grid map as inputs and comprehensively models multiple types of traffic scene constraints based on a newly proposed barrier function. Traffic elements are integrated into one unified framework, decoupled from specific scenario settings or rules. Experiments on the CARLA, SUMO, and OnSite simulators show that the proposed algorithm realizes safe control revision in complicated scenes, adapting to various planning backbones, road topologies, and risk types. Physical platform validation also verifies real-world application feasibility.<|reference_end|>
|
arxiv
|
@article{zhu2024a,
title={A Generalized Control Revision Method for Autonomous Driving Safety},
author={Zehang Zhu and Yuning Wang and Tianqi Ke and Zeyu Han and Shaobing Xu
and Qing Xu and John M. Dolan and Jianqiang Wang},
journal={arXiv preprint arXiv:2409.14688},
year={2024},
archivePrefix={arXiv},
eprint={2409.14688},
primaryClass={cs.RO cs.SY eess.SY}
}
|
zhu2024a
|
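The abstract above revises unsafe controls with a barrier function. A textbook one-dimensional CBF filter (not the paper's generalized barrier; the dynamics, gains, and names are illustrative) clamps a nominal speed command so that the barrier h = gap - safe_gap stays nonnegative:

```python
def cbf_revise(u_nominal, gap, v_lead, safe_gap=5.0, alpha=0.5):
    """Textbook 1-D control-barrier-function filter for a speed command u.
    Safety set: h = gap - safe_gap >= 0, with gap_dot = v_lead - u.
    Enforcing h_dot >= -alpha * h yields the bound u <= v_lead + alpha * h."""
    h = gap - safe_gap
    u_max = v_lead + alpha * h
    return min(u_nominal, u_max)

# far from the leader, the nominal command passes through unchanged
far = cbf_revise(10.0, gap=50.0, v_lead=8.0)
# close to the leader, the command is revised downward
near = cbf_revise(10.0, gap=6.0, v_lead=8.0)
```

The paper's contribution is precisely to generalize beyond such hand-crafted scalar barriers to heterogeneous perception inputs and multiple constraint types.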
arxiv-660606
|
2409.14689
|
EDGE-Rec: Efficient and Data-Guided Edge Diffusion For Recommender Systems Graphs
|
<|reference_start|>EDGE-Rec: Efficient and Data-Guided Edge Diffusion For Recommender Systems Graphs: Most recommender systems research focuses on binary historical user-item interaction encodings to predict future interactions. User features, item features, and interaction strengths remain largely under-utilized in this space, or are only indirectly utilized, despite proving largely effective in large-scale production recommendation systems. We propose a new attention mechanism, loosely based on the principles of collaborative filtering, called Row-Column Separable Attention (RCSA), to take advantage of real-valued interaction weights as well as user and item features directly. Building on this mechanism, we additionally propose a novel Graph Diffusion Transformer (GDiT) architecture which is trained to iteratively denoise the weighted interaction matrix of the user-item interaction graph directly. The weighted interaction matrix is built from the bipartite structure of the user-item interaction graph and the corresponding edge weights derived from user-item rating interactions. Inspired by recent progress in text-conditioned image generation, our method directly produces user-item rating predictions on the same scale as the original ratings by conditioning the denoising process on user and item features with a principled approach.<|reference_end|>
|
arxiv
|
@article{priyam2024edge-rec:,
title={EDGE-Rec: Efficient and Data-Guided Edge Diffusion For Recommender
Systems Graphs},
author={Utkarsh Priyam and Hemit Shah and Edoardo Botta},
journal={arXiv preprint arXiv:2409.14689},
year={2024},
archivePrefix={arXiv},
eprint={2409.14689},
primaryClass={cs.IR cs.LG}
}
|
priyam2024edge-rec:
|
arxiv-660607
|
2409.14692
|
Dynamic Realms: 4D Content Analysis, Recovery and Generation with Geometric, Topological and Physical Priors
|
<|reference_start|>Dynamic Realms: 4D Content Analysis, Recovery and Generation with Geometric, Topological and Physical Priors: My research focuses on the analysis, recovery, and generation of 4D content, where 4D includes three spatial dimensions (x, y, z) and a temporal dimension t, such as shape and motion. This focus goes beyond static objects to include dynamic changes over time, providing a comprehensive understanding of both spatial and temporal variations. These techniques are critical in applications like AR/VR, embodied AI, and robotics. My research aims to make 4D content generation more efficient, accessible, and higher in quality by incorporating geometric, topological, and physical priors. I also aim to develop effective methods for 4D content recovery and analysis using these priors.<|reference_end|>
|
arxiv
|
@article{dou2024dynamic,
title={Dynamic Realms: 4D Content Analysis, Recovery and Generation with
Geometric, Topological and Physical Priors},
author={Zhiyang Dou},
journal={arXiv preprint arXiv:2409.14692},
year={2024},
archivePrefix={arXiv},
eprint={2409.14692},
primaryClass={cs.CV cs.GR}
}
|
dou2024dynamic
|
arxiv-660608
|
2409.14693
|
A Novel Multivariate Bi-LSTM model for Short-Term Equity Price Forecasting
|
<|reference_start|>A Novel Multivariate Bi-LSTM model for Short-Term Equity Price Forecasting: Prediction models are crucial in the stock market as they aid in forecasting future prices and trends, enabling investors to make informed decisions and manage risks more effectively. In the Indian stock market, where volatility is often high, accurate predictions can provide a significant edge in capitalizing on market movements. While various models like regression and Artificial Neural Networks (ANNs) have been explored for this purpose, studies have shown that Long Short-Term Memory networks (LSTMs) are the most effective. This is because they can capture complex temporal dependencies present in financial data. This paper presents a Bidirectional Multivariate LSTM model designed to predict short-term stock prices of Indian companies in the NIFTY 100 across four major sectors. Both Univariate LSTM and Univariate Bidirectional LSTM models were evaluated based on R2 score, RMSE, MSE, MAE, and MAPE. To improve predictive accuracy, the analysis was extended to multivariate data. Additionally, 12 technical indicators having high correlation with the close price (greater than 0.99), including EMA5, SMA5, TRIMA5, KAMA10, and the Bollinger Bands, were selected as variables to further optimize the prediction models. The proposed Bidirectional Multivariate LSTM model, when applied to a dataset containing these indicators, achieved an exceptionally high average R2 score of 99.4779% across the four stocks, which is 3.9833% higher than that of the Unidirectional Multivariate LSTM without technical indicators. The proposed model has an average RMSE of 0.0103955, an average MAE of 0.007485 and an average MAPE of 1.1635%. This highlights the model's exceptional forecasting accuracy and emphasizes its potential to improve short-term trading strategies.<|reference_end|>
|
arxiv
|
@article{oak2024a,
title={A Novel Multivariate Bi-LSTM model for Short-Term Equity Price
Forecasting},
author={Omkar Oak and Rukmini Nazre and Rujuta Budke and Yogita Mahatekar},
journal={arXiv preprint arXiv:2409.14693},
year={2024},
archivePrefix={arXiv},
eprint={2409.14693},
primaryClass={cs.CE}
}
|
oak2024a
|
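The abstract above reports RMSE, MAE, and MAPE for its LSTM variants. These standard forecast-error metrics can be computed as follows (a generic sketch on made-up numbers, not the paper's code):

```python
import math

def forecast_metrics(y_true, y_pred):
    """RMSE, MAE, and MAPE (in percent) for a price forecast."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mae = sum(abs(e) for e in errs) / n
    mape = 100.0 * sum(abs(e) / abs(t) for e, t in zip(errs, y_true)) / n
    return rmse, mae, mape

rmse, mae, mape = forecast_metrics([100.0, 200.0, 400.0], [110.0, 190.0, 400.0])
```

Note that MAPE weights errors by the inverse of the true price, which is why it is popular for comparing stocks trading at different price levels.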
arxiv-660609
|
2409.14694
|
Improved Routing of Multiparty Entanglement over Quantum Networks
|
<|reference_start|>Improved Routing of Multiparty Entanglement over Quantum Networks: Effective routing of entanglements over a quantum network is a fundamental problem in quantum communication. Due to the fragility of quantum states, it is difficult to route entanglements at long distances. Graph states can be utilized for this purpose, reducing the need for long-distance entanglement routing by leveraging local operations. In this paper, we propose two graph state-based routing protocols for sharing GHZ states, achieving larger sizes than the existing works, for given network topologies. For this improvement, we consider tree structures connecting the users participating in the final GHZ states, as opposed to the linear configurations used in the earlier ones. For arbitrary network topologies, we show that if such a tree is balanced, it achieves a larger size than unbalanced trees. In particular, for grid networks, we show special constructions of the above-mentioned tree that achieve optimal results. Moreover, if the user nodes among whom the entanglement is to be routed are pre-specified, we propose a strategy to accomplish the required routing.<|reference_end|>
|
arxiv
|
@article{basak2024improved,
title={Improved Routing of Multiparty Entanglement over Quantum Networks},
author={Nirupam Basak and Goutam Paul},
journal={arXiv preprint arXiv:2409.14694},
year={2024},
archivePrefix={arXiv},
eprint={2409.14694},
primaryClass={quant-ph cs.NI}
}
|
basak2024improved
|
arxiv-660610
|
2409.14697
|
QueenV2: Future of Quantum Circuit Simulation
|
<|reference_start|>QueenV2: Future of Quantum Circuit Simulation: A state vector-based quantum circuit simulation can provide accurate results for the development and validation of quantum computing algorithms, without being affected by noise interference. However, existing quantum circuit simulators have consistently underperformed due to inadequate integration with quantum circuits and high-performance computing architectures. To tackle the challenges in quantum computing, we propose QueenV2, which builds upon the design principles of Queen and elevates performance to a new level. Experimental results on the NVIDIA RTX-4090 demonstrate that QueenV2 achieves up to a 40x improvement in gate performance and a 5x improvement in circuit performance compared to hyQuas. Furthermore, QueenV2 realizes a 137x speedup in gate benchmarks and a 14x speedup in circuit performance relative to NVIDIA cuQuantum, enabled by gate fusion via the IBM Qiskit toolkit. By eliminating reliance on third-party libraries, QueenV2 is positioned to significantly accelerate quantum circuit simulation, thus promoting the development of innovative accelerators and quantum algorithms.<|reference_end|>
|
arxiv
|
@article{wang2024queenv2:,
title={QueenV2: Future of Quantum Circuit Simulation},
author={Chuan-Chi Wang},
journal={arXiv preprint arXiv:2409.14697},
year={2024},
archivePrefix={arXiv},
eprint={2409.14697},
primaryClass={quant-ph cs.SE}
}
|
wang2024queenv2:
|
arxiv-660611
|
2409.14698
|
Bimanual In-hand Manipulation using Dual Limit Surfaces
|
<|reference_start|>Bimanual In-hand Manipulation using Dual Limit Surfaces: In-hand object manipulation is an important capability for dexterous manipulation. In this paper, we introduce a modeling and planning framework for in-hand object reconfiguration, focusing on frictional patch contacts between the robot's palms (or fingers) and the object. Our approach leverages two cooperative patch contacts on either side of the object to iteratively reposition it within the robot's grasp by alternating between sliding and sticking motions. Unlike previous methods that rely on single-point contacts or restrictive assumptions on contact dynamics, our framework models the complex interaction of dual frictional patches, allowing for greater control over object motion. We develop a planning algorithm that computes feasible motions to reorient and re-grasp objects without causing unintended slippage. We demonstrate the effectiveness of our approach in simulation and real-world experiments, showing significant improvements in object stability and pose accuracy across various object geometries.<|reference_end|>
|
arxiv
|
@article{dang2024bimanual,
title={Bimanual In-hand Manipulation using Dual Limit Surfaces},
author={An Dang and James Lorenz and Xili Yi and Nima Fazeli},
journal={arXiv preprint arXiv:2409.14698},
year={2024},
archivePrefix={arXiv},
eprint={2409.14698},
primaryClass={cs.RO}
}
|
dang2024bimanual
|
arxiv-660612
|
2409.14700
|
Adaptive and Robust Watermark for Generative Tabular Data
|
<|reference_start|>Adaptive and Robust Watermark for Generative Tabular Data: Recent developments in generative models have demonstrated their ability to create high-quality synthetic data. However, the pervasiveness of synthetic content online also brings forth growing concerns that it can be used for malicious purposes. To ensure the authenticity of the data, watermarking techniques have recently emerged as a promising solution due to their strong statistical guarantees. In this paper, we propose a flexible and robust watermarking mechanism for generative tabular data. Specifically, a data provider with knowledge of the downstream tasks can partition the feature space into pairs of $(key, value)$ columns. Within each pair, the data provider first uses elements in the $key$ column to generate a randomized set of ''green'' intervals, then encourages elements of the $value$ column to be in one of these ''green'' intervals. We show theoretically and empirically that the watermarked datasets (i) have negligible impact on the data quality and downstream utility, (ii) can be efficiently detected, and (iii) are robust against multiple attacks commonly observed in data science.<|reference_end|>
|
arxiv
|
@article{ngo2024adaptive,
title={Adaptive and Robust Watermark for Generative Tabular Data},
author={Dung Daniel Ngo and Daniel Scott and Saheed Obitayo and Vamsi K.
Potluru and Manuela Veloso},
journal={arXiv preprint arXiv:2409.14700},
year={2024},
archivePrefix={arXiv},
eprint={2409.14700},
primaryClass={cs.CR}
}
|
ngo2024adaptive
|
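The abstract above watermarks tabular data by seeding "green" intervals from a key column and nudging the paired value column into them. A heavily simplified sketch of that idea (the interval construction, binning, and names here are assumptions for illustration, not the paper's exact scheme):

```python
import hashlib
import random

def green_bins(key_val, n_bins=10, green_frac=0.5):
    """Seed a PRNG from the key element and mark a random subset of the
    [0, 1) bins as 'green' (hypothetical construction)."""
    seed = int(hashlib.sha256(str(key_val).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    bins = list(range(n_bins))
    rng.shuffle(bins)
    return set(bins[: int(n_bins * green_frac)])

def watermark(value, key_val, n_bins=10):
    """Move `value` (assumed in [0, 1)) to the centre of the nearest green
    bin for its key, if it is not already in one."""
    green = green_bins(key_val, n_bins)
    b = min(int(value * n_bins), n_bins - 1)
    if b in green:
        return value
    nearest = min(green, key=lambda g: abs(g - b))
    return (nearest + 0.5) / n_bins

def detect(pairs, n_bins=10):
    """Fraction of (key, value) rows whose value lies in a green bin."""
    hits = sum(min(int(v * n_bins), n_bins - 1) in green_bins(k, n_bins)
               for k, v in pairs)
    return hits / len(pairs)

rows = [(k / 7, random.Random(k).random()) for k in range(50)]
marked = [(k, watermark(v, k)) for k, v in rows]
rate = detect(marked)   # every watermarked value falls in a green bin
```

A real scheme would bound the perturbation to preserve data quality and replace the raw hit rate with a hypothesis test on the green-bin count.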
arxiv-660613
|
2409.14702
|
Rate-Splitting for Cell-Free Massive MIMO: Performance Analysis and Generative AI Approach
|
<|reference_start|>Rate-Splitting for Cell-Free Massive MIMO: Performance Analysis and Generative AI Approach: Cell-free (CF) massive multiple-input multiple-output (MIMO) provides ubiquitous coverage to user equipments (UEs), but it is also susceptible to interference. Rate-splitting (RS) effectively extracts data by decoding interference, yet its effectiveness is limited by the weakest UE. In this paper, we investigate an RS-based CF massive MIMO system, which combines the strengths and mitigates the weaknesses of both approaches. Considering imperfect channel state information (CSI) resulting from both pilot contamination and noise, we derive a closed-form expression for the sum spectral efficiency (SE) of the RS-based CF massive MIMO system under a spatially correlated Rician channel. Moreover, we propose low-complexity heuristic algorithms based on statistical CSI for power-splitting of common messages and power-control of private messages, and a genetic algorithm is adopted as a solution for upper-bound performance. Furthermore, we formulate a joint optimization problem aiming to maximize the sum SE of the RS-based CF massive MIMO system by optimizing the power-splitting factor and power-control coefficient. Importantly, we improve a generative AI (GAI) algorithm to address this complex and nonconvex problem by using a diffusion model to obtain solutions. Simulation results demonstrate its effectiveness and practicality in mitigating interference, especially in dynamic environments.<|reference_end|>
|
arxiv
|
@article{zheng2024rate-splitting,
title={Rate-Splitting for Cell-Free Massive MIMO: Performance Analysis and
Generative AI Approach},
author={Jiakang Zheng and Jiayi Zhang and Hongyang Du and Ruichen Zhang and
Dusit Niyato and Octavia A. Dobre and Bo Ai},
journal={arXiv preprint arXiv:2409.14702},
year={2024},
archivePrefix={arXiv},
eprint={2409.14702},
primaryClass={cs.IT eess.SP math.IT}
}
|
zheng2024rate-splitting
|
arxiv-660614
|
2409.14703
|
MemeCLIP: Leveraging CLIP Representations for Multimodal Meme Classification
|
<|reference_start|>MemeCLIP: Leveraging CLIP Representations for Multimodal Meme Classification: The complexity of text-embedded images presents a formidable challenge in machine learning given the need for multimodal understanding of the multiple aspects of expression conveyed in them. While previous research in multimodal analysis has primarily focused on singular aspects such as hate speech and its subclasses, our study expands the focus to encompass multiple aspects of linguistics: hate, target, stance, and humor detection. We introduce a novel dataset PrideMM comprising text-embedded images associated with the LGBTQ+ Pride movement, thereby addressing a serious gap in existing resources. We conduct extensive experimentation on PrideMM by using unimodal and multimodal baseline methods to establish benchmarks for each task. Additionally, we propose a novel framework MemeCLIP for efficient downstream learning while preserving the knowledge of the pre-trained CLIP model. The results of our experiments show that MemeCLIP achieves superior performance compared to previously proposed frameworks on two real-world datasets. We further compare the performance of MemeCLIP and zero-shot GPT-4 on the hate classification task. Finally, we discuss the shortcomings of our model by qualitatively analyzing misclassified samples. Our code and dataset are publicly available at: https://github.com/SiddhantBikram/MemeCLIP.<|reference_end|>
|
arxiv
|
@article{shah2024memeclip:,
title={MemeCLIP: Leveraging CLIP Representations for Multimodal Meme
Classification},
author={Siddhant Bikram Shah and Shuvam Shiwakoti and Maheep Chaudhary and
Haohan Wang},
journal={arXiv preprint arXiv:2409.14703},
year={2024},
archivePrefix={arXiv},
eprint={2409.14703},
primaryClass={cs.LG cs.CL cs.MM}
}
|
shah2024memeclip:
|
arxiv-660615
|
2409.14704
|
VLEU: a Method for Automatic Evaluation for Generalizability of Text-to-Image Models
|
<|reference_start|>VLEU: a Method for Automatic Evaluation for Generalizability of Text-to-Image Models: Progress in Text-to-Image (T2I) models has significantly improved the generation of images from textual descriptions. However, existing evaluation metrics do not adequately assess the models' ability to handle a diverse range of textual prompts, which is crucial for their generalizability. To address this, we introduce a new metric called Visual Language Evaluation Understudy (VLEU). VLEU uses large language models to sample from the visual text domain, the set of all possible input texts for T2I models, to generate a wide variety of prompts. The images generated from these prompts are evaluated based on their alignment with the input text using the CLIP model. VLEU quantifies a model's generalizability by computing the Kullback-Leibler divergence between the marginal distribution of the visual text and the conditional distribution of the images generated by the model. This metric provides a quantitative way to compare different T2I models and track improvements during model finetuning. Our experiments demonstrate the effectiveness of VLEU in evaluating the generalization capability of various T2I models, positioning it as an essential metric for future research in text-to-image synthesis.<|reference_end|>
|
arxiv
|
@article{cao2024vleu:,
title={VLEU: a Method for Automatic Evaluation for Generalizability of
Text-to-Image Models},
author={Jingtao Cao and Zheng Zhang and Hongru Wang and Kam-Fai Wong},
journal={arXiv preprint arXiv:2409.14704},
year={2024},
archivePrefix={arXiv},
eprint={2409.14704},
primaryClass={cs.CV cs.AI cs.CL}
}
|
cao2024vleu:
|
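The abstract above scores text-to-image generalizability via a KL divergence between a marginal distribution over prompts and per-image conditional distributions. A toy sketch of that computation on a made-up similarity matrix (the exact VLEU estimator may differ; the softmax normalization and names here are assumptions):

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def kl_divergence(p, q):
    """KL(P || Q) for discrete distributions of equal length."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# made-up CLIP-style similarity matrix: row i = image generated for prompt i,
# column j = similarity of that image to prompt j
sim = [[0.90, 0.10, 0.20],
       [0.20, 0.80, 0.10],
       [0.10, 0.20, 0.85]]
conditionals = [softmax(row) for row in sim]
n = len(conditionals)
marginal = [sum(c[j] for c in conditionals) / n for j in range(n)]

# higher mean KL(conditional || marginal) -> images discriminate their prompts;
# a VLEU-like score could exponentiate this mean
mean_kl = sum(kl_divergence(c, marginal) for c in conditionals) / n
score = math.exp(mean_kl)
```

A model whose images align equally with every prompt would have conditionals equal to the marginal, zero mean KL, and a score of 1.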
arxiv-660616
|
2409.14705
|
Target-Aware Language Modeling via Granular Data Sampling
|
<|reference_start|>Target-Aware Language Modeling via Granular Data Sampling: Language model pretraining generally targets a broad range of use cases and incorporates data from diverse sources. However, there are instances where we desire a model that excels in specific areas without markedly compromising performance in other areas. A cost-effective and straightforward approach is sampling with low-dimensional data features, which allows selecting large-scale pretraining data for domain-specific use cases. In this work, we revisit importance sampling with n-gram features consisting of multi-granular tokens, which strikes a good balance between sentence compression and representation capabilities. We observe that the sampled data have a high correlation with target downstream task performance while preserving effectiveness on other tasks. This leads to the proposed data sampling paradigm, in which language models can be pretrained more efficiently on selected documents. On eight benchmarks, we demonstrate that with $\sim$1% of the data, pretrained models perform on par with the full RefinedWeb data and outperform randomly selected samples for model sizes ranging from 125M to 1.5B.<|reference_end|>
|
arxiv
|
@article{chang2024target-aware,
title={Target-Aware Language Modeling via Granular Data Sampling},
author={Ernie Chang and Pin-Jie Lin and Yang Li and Changsheng Zhao and Daeil
Kim and Rastislav Rabatin and Zechun Liu and Yangyang Shi and Vikas Chandra},
journal={arXiv preprint arXiv:2409.14705},
year={2024},
archivePrefix={arXiv},
eprint={2409.14705},
primaryClass={cs.CL cs.AI}
}
|
chang2024target-aware
|
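The abstract above selects pretraining data by importance sampling with n-gram features. One common way to realize this (a generic sketch with smoothed frequency ratios; not necessarily the paper's exact scoring) is to rank documents by how target-like their n-grams are:

```python
import math
from collections import Counter

def ngrams(text, n=2):
    toks = text.lower().split()
    return [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

def importance_score(doc, target, general, smooth=1.0):
    """Mean log-ratio of smoothed target vs. general n-gram frequency."""
    grams = ngrams(doc)
    if not grams:
        return float("-inf")
    t_total, g_total = sum(target.values()), sum(general.values())
    score = 0.0
    for g in grams:
        p_t = (target[g] + smooth) / (t_total + smooth)
        p_g = (general[g] + smooth) / (g_total + smooth)
        score += math.log(p_t / p_g)
    return score / len(grams)

# tiny toy corpora standing in for target-domain and general-domain counts
target = Counter(ngrams("stock price prediction with lstm networks"))
general = Counter(ngrams("the cat sat on the mat near the door"))
docs = ["lstm networks for stock price forecasting",
        "the cat chased the dog near the mat"]
ranked = sorted(docs, key=lambda d: importance_score(d, target, general),
                reverse=True)
```

At pretraining scale the counts would come from hashed n-gram features over the target and general corpora, and the top-scoring documents (the $\sim$1% in the abstract) would be kept.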
arxiv-660617
|
2409.14707
|
Bird-inspired tendon coupling improves paddling efficiency by shortening phase transition times
|
<|reference_start|>Bird-inspired tendon coupling improves paddling efficiency by shortening phase transition times: Drag-based swimming with rowing appendages, fins, and webbed feet is a widely adopted form of locomotion in aquatic animals. To develop effective underwater and swimming vehicles, a wide range of bioinspired drag-based paddles have been proposed, often faced with a trade-off between propulsive efficiency and versatility. Webbed feet provide an effective propulsive force in the power phase, are lightweight and robust, and can even be partially folded away in the recovery phase. However, during the transition between recovery and power phase, much time is lost folding and unfolding, leading to drag and reducing efficiency. In this work, we took inspiration from the coupling tendons of aquatic birds and utilized tendon coupling mechanisms to shorten the transition time between recovery and power phase. Results from our hardware experiments show that the proposed mechanisms improve propulsive efficiency by 2.0 and 2.4 times compared to a design without extensor tendons and one based on a passive paddle, respectively. We further report that distal leg joint clutching, which has been shown to improve efficiency in terrestrial walking, did not play a major role in swimming locomotion. In sum, we describe a new principle for an efficient, drag-based leg and paddle design, with potential relevance for the swimming mechanics of aquatic birds.<|reference_end|>
|
arxiv
|
@article{lin2024bird-inspired,
title={Bird-inspired tendon coupling improves paddling efficiency by shortening
phase transition times},
author={Jianfeng Lin and Zhao Guo and Alexander Badri-Spr\"owitz},
journal={arXiv preprint arXiv:2409.14707},
year={2024},
archivePrefix={arXiv},
eprint={2409.14707},
primaryClass={cs.RO physics.bio-ph}
}
|
lin2024bird-inspired
|
arxiv-660618
|
2409.14708
|
A Multimedia Framework for Continuum Robots: Systematic, Computational, and Control Perspectives
|
<|reference_start|>A Multimedia Framework for Continuum Robots: Systematic, Computational, and Control Perspectives: Continuum robots, which often rely on interdisciplinary and multimedia collaborations, have been increasingly recognized for their potential to revolutionize the field of human-computer interaction (HCI) in varied applications due to their adaptive, responsive, and flexible characteristics. Despite their promises, the lack of an integrated framework poses a significant limitation for both users and developers, resulting in inefficiency and complexity during preliminary developments. Thus, this paper introduces a unified framework for continuum robotic systems that addresses these challenges by integrating system architecture, dynamics computation, and control strategy within a computer-aided design (CAD) platform. The proposed method allows for efficient modeling and quick preview of the robot performance, and thus facilitating iterative design and implementation, with a view to enhancing the quality of robot developments.<|reference_end|>
|
arxiv
|
@article{hsieh2024a,
title={A Multimedia Framework for Continuum Robots: Systematic, Computational,
and Control Perspectives},
  author={Po-Yu Hsieh and June-Hao Hou},
journal={arXiv preprint arXiv:2409.14708},
year={2024},
archivePrefix={arXiv},
eprint={2409.14708},
primaryClass={cs.RO cs.MM}
}
|
hsieh2024a
|
arxiv-660619
|
2409.14709
|
Video-to-Audio Generation with Fine-grained Temporal Semantics
|
<|reference_start|>Video-to-Audio Generation with Fine-grained Temporal Semantics: With recent advances of AIGC, video generation has gained a surge of research interest in both academia and industry (e.g., Sora). However, it remains a challenge to produce temporally aligned audio to synchronize the generated video, considering the complicated semantic information included in the latter. In this work, inspired by the recent success of text-to-audio (TTA) generation, we first investigate the video-to-audio (VTA) generation framework based on latent diffusion model (LDM). Similar to the latest pioneering exploration in VTA, our preliminary results also show great potential of LDM in the VTA task, but it still suffers from sub-optimal temporal alignment. To this end, we propose to enhance the temporal alignment of VTA with frame-level semantic information. With the recently popular grounding segment anything model (Grounding SAM), we can extract the fine-grained semantics in video frames to enable VTA to produce better-aligned audio signal. Extensive experiments demonstrate the effectiveness of our system on both objective and subjective evaluation metrics, which shows both better audio quality and fine-grained temporal alignment.<|reference_end|>
|
arxiv
|
@article{hu2024video-to-audio,
title={Video-to-Audio Generation with Fine-grained Temporal Semantics},
  author={Yuchen Hu and Yu Gu and Chenxing Li and Rilin Chen and Dong Yu},
journal={arXiv preprint arXiv:2409.14709},
year={2024},
archivePrefix={arXiv},
eprint={2409.14709},
primaryClass={eess.AS cs.SD}
}
|
hu2024video-to-audio
|
arxiv-660620
|
2409.14710
|
ERABAL: Enhancing Role-Playing Agents through Boundary-Aware Learning
|
<|reference_start|>ERABAL: Enhancing Role-Playing Agents through Boundary-Aware Learning: Role-playing is an emerging application in the field of Human-Computer Interaction (HCI), primarily implemented through the alignment training of a large language model (LLM) with assigned characters. Despite significant progress, role-playing agents (RPLAs) still struggle with maintaining role-consistency across conversations, particularly when confronted with boundary queries subtly related to character attributes. In this paper, we present ERABAL, a framework aimed at enhancing RPLAs' role-playing capabilities through boundary-aware learning. ERABAL encompasses a generation pipeline for role-specific dialogues and a concomitant methodology for alignment training. Through comprehensive evaluations, we demonstrate that ERABAL is both efficient and effective. By training with significantly fewer dialogues than those used in leading approaches, ERABAL achieves notable improvements across WikiRoleEval, CharacterEval, and the role-playing subset of MT-Bench compared to the generalist baseline models. Our code and datasets will be made publicly available to support further research.<|reference_end|>
|
arxiv
|
@article{tang2024erabal:,
title={ERABAL: Enhancing Role-Playing Agents through Boundary-Aware Learning},
  author={Yihong Tang and Jiao Ou and Che Liu and Fuzheng Zhang and Di Zhang and Kun Gai},
journal={arXiv preprint arXiv:2409.14710},
year={2024},
archivePrefix={arXiv},
eprint={2409.14710},
primaryClass={cs.CL cs.AI}
}
|
tang2024erabal:
|
arxiv-660621
|
2409.14712
|
Room Impulse Responses help attackers to evade Deep Fake Detection
|
<|reference_start|>Room Impulse Responses help attackers to evade Deep Fake Detection: The ASVspoof 2021 benchmark, a widely-used evaluation framework for anti-spoofing, consists of two subsets: Logical Access (LA) and Deepfake (DF), featuring samples with varied coding characteristics and compression artifacts. Notably, the current state-of-the-art (SOTA) system boasts impressive performance, achieving an Equal Error Rate (EER) of 0.87% on the LA subset and 2.58% on the DF. However, benchmark accuracy is no guarantee of robustness in real-world scenarios. This paper investigates the effectiveness of utilizing room impulse responses (RIRs) to enhance fake speech and increase their likelihood of evading fake speech detection systems. Our findings reveal that this simple approach significantly improves the evasion rate, doubling the SOTA system's EER. To counter this type of attack, we augmented training data with a large-scale synthetic/simulated RIR dataset. The results demonstrate significant improvement on both reverberated fake speech and original samples, reducing the DF task EER to 2.13%.<|reference_end|>
|
arxiv
|
@article{luong2024room,
title={Room Impulse Responses help attackers to evade Deep Fake Detection},
  author={Hieu-Thi Luong and Duc-Tuan Truong and Kong Aik Lee and Eng Siong Chng},
journal={arXiv preprint arXiv:2409.14712},
year={2024},
archivePrefix={arXiv},
eprint={2409.14712},
primaryClass={eess.AS cs.SD}
}
|
luong2024room
|
arxiv-660622
|
2409.14713
|
Phantom of Latent for Large Language and Vision Models
|
<|reference_start|>Phantom of Latent for Large Language and Vision Models: The success of visual instruction tuning has accelerated the development of large language and vision models (LLVMs). Following the scaling laws of instruction-tuned large language models (LLMs), LLVMs have further increased their sizes, reaching 26B, 34B, and even 80B parameters. While this increase in model size has yielded significant performance gains, it demands substantially more hardware resources for both training and inference. Consequently, there naturally exists a strong need for efficient LLVMs that achieve the performance of larger models while being smaller in size. To meet this need, we present a new efficient LLVM family with model sizes of 0.5B, 1.8B, 3.8B, and 7B parameters, Phantom, which significantly enhances learning capabilities within limited structures. By temporarily increasing the latent hidden dimension during multi-head self-attention (MHSA), we make LLVMs prepare to look and understand much more vision-language knowledge on the latent, without substantially increasing physical model sizes. To maximize its advantage, we introduce Phantom Optimization (PO) using both autoregressive supervised fine-tuning (SFT) and a direct preference optimization (DPO)-like concept, which effectively follows correct answers while eliminating incorrect and ambiguous ones. Phantom outperforms numerous larger open- and closed-source LLVMs, positioning itself as a leading solution in the landscape of efficient LLVMs.<|reference_end|>
|
arxiv
|
@article{lee2024phantom,
title={Phantom of Latent for Large Language and Vision Models},
  author={Byung-Kwan Lee and Sangyun Chung and Chae Won Kim and Beomchan Park
and Yong Man Ro},
journal={arXiv preprint arXiv:2409.14713},
year={2024},
archivePrefix={arXiv},
eprint={2409.14713},
primaryClass={cs.CV}
}
|
lee2024phantom
|
arxiv-660623
|
2409.14719
|
DiSPo: Diffusion-SSM based Policy Learning for Coarse-to-Fine Action Discretization
|
<|reference_start|>DiSPo: Diffusion-SSM based Policy Learning for Coarse-to-Fine Action Discretization: We aim to solve the problem of generating coarse-to-fine skills learning from demonstrations (LfD). To scale precision, traditional LfD approaches often rely on extensive fine-grained demonstrations with external interpolations or dynamics models with limited generalization capabilities. For memory-efficient learning and convenient granularity change, we propose a novel diffusion-SSM based policy (DiSPo) that learns from diverse coarse skills and produces varying control scales of actions by leveraging a state-space model, Mamba. Our evaluations show the adoption of Mamba and the proposed step-scaling method enables DiSPo to outperform in five coarse-to-fine benchmark tests while DiSPo shows decent performance in typical fine-grained motion learning and reproduction. We finally demonstrate the scalability of actions with simulation and real-world manipulation tasks.<|reference_end|>
|
arxiv
|
@article{oh2024dispo:,
title={DiSPo: Diffusion-SSM based Policy Learning for Coarse-to-Fine Action
Discretization},
  author={Nayoung Oh and Moonkyeong Jung and Daehyung Park},
journal={arXiv preprint arXiv:2409.14719},
year={2024},
archivePrefix={arXiv},
eprint={2409.14719},
primaryClass={cs.RO}
}
|
oh2024dispo:
|
arxiv-660624
|
2409.14720
|
ControlEdit: A MultiModal Local Clothing Image Editing Method
|
<|reference_start|>ControlEdit: A MultiModal Local Clothing Image Editing Method: Multimodal clothing image editing refers to the precise adjustment and modification of clothing images using data such as textual descriptions and visual images as control conditions, which effectively improves the work efficiency of designers and reduces the threshold for user design. In this paper, we propose a new image editing method ControlEdit, which transfers clothing image editing to multimodal-guided local inpainting of clothing images. We address the difficulty of collecting real image datasets by leveraging the self-supervised learning approach. Based on this learning approach, we extend the channels of the feature extraction network to ensure consistent clothing image style before and after editing, and we design an inverse latent loss function to achieve soft control over the content of non-edited areas. In addition, we adopt Blended Latent Diffusion as the sampling method to make the editing boundaries transition naturally and enforce consistency of non-edited area content. Extensive experiments demonstrate that ControlEdit surpasses baseline algorithms in both qualitative and quantitative evaluations.<|reference_end|>
|
arxiv
|
@article{cheng2024controledit:,
title={ControlEdit: A MultiModal Local Clothing Image Editing Method},
  author={Di Cheng and YingJie Shi and ShiXin Sun and JiaFu Zhang and WeiJing Wang and Yu Liu},
journal={arXiv preprint arXiv:2409.14720},
year={2024},
archivePrefix={arXiv},
eprint={2409.14720},
primaryClass={cs.CV}
}
|
cheng2024controledit:
|
arxiv-660625
|
2409.14721
|
MEVIUS: A Quadruped Robot Easily Constructed through E-Commerce with Sheet Metal Welding and Machining
|
<|reference_start|>MEVIUS: A Quadruped Robot Easily Constructed through E-Commerce with Sheet Metal Welding and Machining: Quadruped robots that individual researchers can build by themselves are crucial for expanding the scope of research due to their high scalability and customizability. These robots must be easily ordered and assembled through e-commerce or DIY methods, have a low number of components for easy maintenance, and possess durability to withstand experiments in diverse environments. Various quadruped robots have been developed so far, but most robots that can be built by research institutions are relatively small and made of plastic using 3D printers. These robots cannot withstand experiments in external environments such as mountain trails or rubble, and they will easily break with intense movements. Although there is the advantage of being able to print parts by yourself, the large number of components makes replacing broken parts and maintenance very cumbersome. Therefore, in this study, we develop a metal quadruped robot MEVIUS, that can be constructed and assembled using only materials ordered through e-commerce. We have considered the minimum set of components required for a quadruped robot, employing metal machining, sheet metal welding, and off-the-shelf components only. Also, we have achieved a simple circuit and software configuration. Considering the communication delay due to its simple configuration, we experimentally demonstrate that MEVIUS, utilizing reinforcement learning and Sim2Real, can traverse diverse rough terrains and withstand outside experiments. All hardware and software components can be obtained from https://github.com/haraduka/mevius.<|reference_end|>
|
arxiv
|
@article{kawaharazuka2024mevius:,
title={MEVIUS: A Quadruped Robot Easily Constructed through E-Commerce with
Sheet Metal Welding and Machining},
  author={Kento Kawaharazuka and Shintaro Inoue and Temma Suzuki and Sota Yuzaki
and Shogo Sawaguchi and Kei Okada and Masayuki Inaba},
journal={arXiv preprint arXiv:2409.14721},
year={2024},
archivePrefix={arXiv},
eprint={2409.14721},
primaryClass={cs.RO}
}
|
kawaharazuka2024mevius:
|
arxiv-660626
|
2409.14722
|
Neural refractive index field: Unlocking the Potential of Background-oriented Schlieren Tomography in Volumetric Flow Visualization
|
<|reference_start|>Neural refractive index field: Unlocking the Potential of Background-oriented Schlieren Tomography in Volumetric Flow Visualization: Background-oriented Schlieren tomography (BOST) is a prevalent method for visualizing intricate turbulent flows, valued for its ease of implementation and capacity to capture three-dimensional distributions of a multitude of flow parameters. However, the voxel-based meshing scheme leads to significant challenges, such as inadequate spatial resolution, substantial discretization errors, poor noise immunity, and excessive computational costs. This work presents an innovative reconstruction approach termed neural refractive index field (NeRIF) which implicitly represents the flow field with a neural network, which is trained with tailored strategies. Both numerical simulations and experimental demonstrations on turbulent Bunsen flames suggest that our approach can significantly improve the reconstruction accuracy and spatial resolution while concurrently reducing computational expenses. Although showcased in the context of background-oriented schlieren tomography here, the key idea embedded in the NeRIF can be readily adapted to various other tomographic modalities including tomographic absorption spectroscopy and tomographic particle imaging velocimetry, broadening its potential impact across different domains of flow visualization and analysis.<|reference_end|>
|
arxiv
|
@article{he2024neural,
title={Neural refractive index field: Unlocking the Potential of
Background-oriented Schlieren Tomography in Volumetric Flow Visualization},
  author={Yuanzhe He and Yutao Zheng and Shijie Xu and Chang Liu and Di Peng
and Yingzheng Liu and Weiwei Cai},
journal={arXiv preprint arXiv:2409.14722},
year={2024},
archivePrefix={arXiv},
eprint={2409.14722},
primaryClass={physics.flu-dyn cs.HC cs.LG physics.optics}
}
|
he2024neural
|
arxiv-660627
|
2409.14723
|
ERPoT: Effective and Reliable Pose Tracking for Mobile Robots Based on Lightweight and Compact Polygon Maps
|
<|reference_start|>ERPoT: Effective and Reliable Pose Tracking for Mobile Robots Based on Lightweight and Compact Polygon Maps: This paper presents an effective and reliable pose tracking solution termed ERPoT for mobile robots operating in large-scale outdoor environments, underpinned by an innovative prior polygon map. Especially, to overcome the challenge that arises as the map size grows with the expansion of the environment, the novel form of a prior map composed of multiple polygons is proposed. Benefiting from the use of polygons to concisely and accurately depict environmental occupancy, the prior polygon map achieves long-term reliable pose tracking while ensuring a compact form. More importantly, pose tracking is carried out under pure LiDAR mode, and the dense 3D point cloud is transformed into a sparse 2D scan through ground removal and obstacle selection. On this basis, a novel cost function for pose estimation through point-polygon matching is introduced, encompassing two distinct constraint forms: point-to-vertex and point-to-edge. In this study, our primary focus lies on two crucial aspects: lightweight and compact prior map construction, as well as effective and reliable robot pose tracking. Both aspects serve as the foundational pillars for future navigation across different mobile platforms equipped with different LiDAR sensors in different environments. Comparative experiments based on the publicly available datasets and our self-recorded datasets are conducted, and evaluation results show the superior performance of ERPoT on reliability, prior map size, pose estimation error, and runtime over the other five approaches. The corresponding code can be accessed at https://github.com/ghm0819/ERPoT, and the supplementary video is at https://youtu.be/cseml5FrW1Q.<|reference_end|>
|
arxiv
|
@article{gao2024erpot:,
title={ERPoT: Effective and Reliable Pose Tracking for Mobile Robots Based on
Lightweight and Compact Polygon Maps},
  author={Haiming Gao and Qibo Qiu and Hongyan Liu and Dingkun Liang and
Chaoqun Wang and Xuebo Zhang},
journal={arXiv preprint arXiv:2409.14723},
year={2024},
archivePrefix={arXiv},
eprint={2409.14723},
primaryClass={cs.RO}
}
|
gao2024erpot:
|
arxiv-660628
|
2409.14724
|
EDSNet: Efficient-DSNet for Video Summarization
|
<|reference_start|>EDSNet: Efficient-DSNet for Video Summarization: Current video summarization methods largely rely on transformer-based architectures, which, due to their quadratic complexity, require substantial computational resources. In this work, we address these inefficiencies by enhancing the Direct-to-Summarize Network (DSNet) with more resource-efficient token mixing mechanisms. We show that replacing traditional attention with alternatives like Fourier, Wavelet transforms, and Nystr\"omformer improves efficiency and performance. Furthermore, we explore various pooling strategies within the Regional Proposal Network, including ROI pooling, Fast Fourier Transform pooling, and flat pooling. Our experimental results on TVSum and SumMe datasets demonstrate that these modifications significantly reduce computational costs while maintaining competitive summarization performance. Thus, our work offers a more scalable solution for video summarization tasks.<|reference_end|>
|
arxiv
|
@article{prasad2024edsnet:,
title={EDSNet: Efficient-DSNet for Video Summarization},
  author={Ashish Prasad and Pranav Jeevan and Amit Sethi},
journal={arXiv preprint arXiv:2409.14724},
year={2024},
archivePrefix={arXiv},
eprint={2409.14724},
primaryClass={cs.CV cs.AI cs.LG}
}
|
prasad2024edsnet:
|
arxiv-660629
|
2409.14726
|
Semantic Communication Enabled 6G-NTN Framework: A Novel Denoising and Gateway Hop Integration Mechanism
|
<|reference_start|>Semantic Communication Enabled 6G-NTN Framework: A Novel Denoising and Gateway Hop Integration Mechanism: The sixth-generation (6G) non-terrestrial networks (NTNs) are crucial for real-time monitoring in critical applications like disaster relief. However, limited bandwidth, latency, rain attenuation, long propagation delays, and co-channel interference pose challenges to efficient satellite communication. Therefore, semantic communication (SC) has emerged as a promising solution to improve transmission efficiency and address these issues. In this paper, we explore the potential of SC as a bandwidth-efficient, latency-minimizing strategy specifically suited to 6G satellite communications. While existing SC methods have demonstrated efficacy in direct satellite-terrestrial transmissions, they encounter limitations in satellite networks due to distortion accumulation across gateway hop-relays. Additionally, certain ground users (GUs) experience poor signal-to-noise ratios (SNR), making direct satellite communication challenging. To address these issues, we propose a novel framework that optimizes gateway hop-relay selection for GUs with low SNR and integrates gateway-based denoising mechanisms to ensure high-quality-of-service (QoS) in satellite-based SC networks. This approach directly mitigates distortion, leading to significant improvements in satellite service performance by delivering customized services tailored to the unique signal conditions of each GU. Our findings represent a critical advancement in reliable and efficient data transmission from the Earth observation satellites, thereby enabling fast and effective responses to urgent events. Simulation results demonstrate that our proposed strategy significantly enhances overall network performance, outperforming conventional methods by offering tailored communication services based on specific GU conditions.<|reference_end|>
|
arxiv
|
@article{nguyen2024semantic,
title={Semantic Communication Enabled 6G-NTN Framework: A Novel Denoising and
Gateway Hop Integration Mechanism},
  author={Loc X. Nguyen and Sheikh Salman Hassan and Yan Kyaw Tun and Kitae Kim
and Zhu Han and Choong Seon Hong},
journal={arXiv preprint arXiv:2409.14726},
year={2024},
archivePrefix={arXiv},
eprint={2409.14726},
primaryClass={cs.ET eess.SP}
}
|
nguyen2024semantic
|
arxiv-660630
|
2409.14728
|
Homogenization principle and numerical analysis for fractional stochastic differential equations with different scales
|
<|reference_start|>Homogenization principle and numerical analysis for fractional stochastic differential equations with different scales: This work is concerned with fractional stochastic differential equations with different scales. We establish the existence and uniqueness of solutions for Caputo fractional stochastic differential systems under the non-Lipschitz condition. Based on the idea of temporal homogenization, we prove that the homogenization principle (averaging principle) holds in the sense of mean square ($L^2$ norm) convergence under a novel homogenization assumption. Furthermore, an Euler-Maruyama scheme for the non-autonomous system is constructed and its numerical error is analyzed. Finally, two numerical examples are presented to verify the theoretical results. Different from the existing literature, we demonstrate the computational advantages of the homogenized autonomous system from a numerical perspective.<|reference_end|>
|
arxiv
|
@article{wang2024homogenization,
title={Homogenization principle and numerical analysis for fractional
stochastic differential equations with different scales},
  author={Zhaoyang Wang and Ping Lin},
journal={arXiv preprint arXiv:2409.14728},
year={2024},
archivePrefix={arXiv},
eprint={2409.14728},
primaryClass={math.NA cs.NA}
}
|
wang2024homogenization
|
arxiv-660631
|
2409.14729
|
PROMPTFUZZ: Harnessing Fuzzing Techniques for Robust Testing of Prompt Injection in LLMs
|
<|reference_start|>PROMPTFUZZ: Harnessing Fuzzing Techniques for Robust Testing of Prompt Injection in LLMs: Large Language Models (LLMs) have gained widespread use in various applications due to their powerful capability to generate human-like text. However, prompt injection attacks, which involve overwriting a model's original instructions with malicious prompts to manipulate the generated text, have raised significant concerns about the security and reliability of LLMs. Ensuring that LLMs are robust against such attacks is crucial for their deployment in real-world applications, particularly in critical tasks. In this paper, we propose PROMPTFUZZ, a novel testing framework that leverages fuzzing techniques to systematically assess the robustness of LLMs against prompt injection attacks. Inspired by software fuzzing, PROMPTFUZZ selects promising seed prompts and generates a diverse set of prompt injections to evaluate the target LLM's resilience. PROMPTFUZZ operates in two stages: the prepare phase, which involves selecting promising initial seeds and collecting few-shot examples, and the focus phase, which uses the collected examples to generate diverse, high-quality prompt injections. Using PROMPTFUZZ, we can uncover more vulnerabilities in LLMs, even those with strong defense prompts. By deploying the generated attack prompts from PROMPTFUZZ in a real-world competition, we achieved the 7th ranking out of over 4000 participants (top 0.14%) within 2 hours. Additionally, we construct a dataset to fine-tune LLMs for enhanced robustness against prompt injection attacks. While the fine-tuned model shows improved robustness, PROMPTFUZZ continues to identify vulnerabilities, highlighting the importance of robust testing for LLMs. Our work emphasizes the critical need for effective testing tools and provides a practical framework for evaluating and improving the robustness of LLMs against prompt injection attacks.<|reference_end|>
|
arxiv
|
@article{yu2024promptfuzz:,
title={PROMPTFUZZ: Harnessing Fuzzing Techniques for Robust Testing of Prompt
Injection in LLMs},
  author={Jiahao Yu and Yangguang Shao and Hanwen Miao and Junzheng Shi and Xinyu Xing},
journal={arXiv preprint arXiv:2409.14729},
year={2024},
archivePrefix={arXiv},
eprint={2409.14729},
primaryClass={cs.CR cs.AI}
}
|
yu2024promptfuzz:
|
arxiv-660632
|
2409.14736
|
Learning Koopman Dynamics for Safe Legged Locomotion with Reinforcement Learning-based Controller
|
<|reference_start|>Learning Koopman Dynamics for Safe Legged Locomotion with Reinforcement Learning-based Controller: Learning-based algorithms have demonstrated impressive performance in agile locomotion of legged robots. However, learned policies are often complex and opaque due to the black-box nature of learning algorithms, which hinders predictability and precludes guarantees on performance or safety. In this work, we develop a novel safe navigation framework that combines Koopman operators and model-predictive control (MPC) frameworks. Our method adopts Koopman operator theory to learn the linear evolution of dynamics of the underlying locomotion policy, which can be effectively learned with Dynamic Mode Decomposition (DMD). Given that our learned model is linear, we can readily leverage the standard MPC algorithm. Our framework is easy to implement with less prior knowledge because it does not require access to the underlying dynamical systems or control-theoretic techniques. We demonstrate that the learned linear dynamics can better predict the trajectories of legged robots than baselines. In addition, we showcase that the proposed navigation framework can achieve better safety with fewer collisions in challenging and dense environments with narrow passages.<|reference_end|>
|
arxiv
|
@article{kim2024learning,
title={Learning Koopman Dynamics for Safe Legged Locomotion with Reinforcement
Learning-based Controller},
  author={Jeonghwan Kim and Yunhai Han and Harish Ravichandar and Sehoon Ha},
journal={arXiv preprint arXiv:2409.14736},
year={2024},
archivePrefix={arXiv},
eprint={2409.14736},
primaryClass={cs.RO}
}
|
kim2024learning
|
arxiv-660633
|
2409.14737
|
An Adverse Weather-Immune Scheme with Unfolded Regularization and Foundation Model Knowledge Distillation for Street Scene Understanding
|
<|reference_start|>An Adverse Weather-Immune Scheme with Unfolded Regularization and Foundation Model Knowledge Distillation for Street Scene Understanding: Various adverse weather conditions pose a significant challenge to autonomous driving (AD) perception. A common strategy is to minimize the disparity between images captured in clear and adverse weather conditions. However, this technique typically relies on utilizing a clear image as a reference, which is challenging to obtain in practice. Furthermore, this method typically targets a single adverse condition and performs poorly when confronting a mixture of multiple adverse weather conditions. To address these issues, we introduce a reference-free and \underline{Adv}erse weather-\underline{Immu}ne scheme (called AdvImmu) achieved by leveraging the invariance of weather conditions over short periods (seconds). Specifically, AdvImmu includes three components: Locally Sequential Mechanism (LSM), Globally Shuffled Mechanism (GSM), and Unfolded Regularizers (URs). LSM leverages temporal correlations between adjacent frames to enhance model performance. GSM is proposed to shuffle LSM segments to prevent the overfitting to temporal patterns of only using LSM. URs are the deep unfolding implementation of two proposed regularizers to penalize the model complexity to enhance across-weather generalization. In addition, to overcome the over-reliance on consecutive frame-wise annotations in the training of AdvImmu (typically unavailable in AD scenarios), we incorporate the Segment Anything Model (SAM) to annotate frames, and additionally propose a cluster algorithm (denoted as SBICAC) to surmount SAM's category-agnostic issue to generate pseudo-labels. Extensive experiments demonstrate that the proposed AdvImmu outperforms existing state-of-the-art methods by 88.56\% in mean Intersection over Union (mIoU).<|reference_end|>
|
arxiv
|
@article{kou2024an,
title={An Adverse Weather-Immune Scheme with Unfolded Regularization and
Foundation Model Knowledge Distillation for Street Scene Understanding},
  author={Wei-Bin Kou and Guangxu Zhu and Rongguang Ye and Shuai Wang and
Qingfeng Lin and Ming Tang and Yik-Chung Wu},
journal={arXiv preprint arXiv:2409.14737},
year={2024},
archivePrefix={arXiv},
eprint={2409.14737},
primaryClass={cs.RO}
}
|
kou2024an
|
arxiv-660634
|
2409.14738
|
Enabling On-Chip High-Frequency Adaptive Linear Optimal Control via Linearized Gaussian Process
|
<|reference_start|>Enabling On-Chip High-Frequency Adaptive Linear Optimal Control via Linearized Gaussian Process: Unpredictable and complex aerodynamic effects pose significant challenges to achieving precise flight control, such as the downwash effect from upper vehicles to lower ones. Conventional methods often struggle to accurately model these interactions, leading to controllers that require large safety margins between vehicles. Moreover, the controller on real drones usually requires high-frequency and has limited on-chip computation, making the adaptive control design more difficult to implement. To address these challenges, we incorporate Gaussian process (GP) to model the adaptive external aerodynamics with linear model predictive control. The GP is linearized to enable real-time high-frequency solutions. Moreover, to handle the error caused by linearization, we integrate end-to-end Bayesian optimization during sample collection stages to improve the control performance. Experimental results on both simulations and real quadrotors show that we can achieve real-time solvable computation speed with acceptable tracking errors.<|reference_end|>
|
arxiv
|
@article{gao2024enabling,
title={Enabling On-Chip High-Frequency Adaptive Linear Optimal Control via
Linearized Gaussian Process},
  author={Yuan Gao and Yinyi Lai and Jun Wang and Yini Fang},
journal={arXiv preprint arXiv:2409.14738},
year={2024},
archivePrefix={arXiv},
eprint={2409.14738},
primaryClass={cs.RO cs.SY eess.SY}
}
|
gao2024enabling
|
arxiv-660635
|
2409.14739
|
AmpAgent: An LLM-based Multi-Agent System for Multi-stage Amplifier Schematic Design from Literature for Process and Performance Porting
|
<|reference_start|>AmpAgent: An LLM-based Multi-Agent System for Multi-stage Amplifier Schematic Design from Literature for Process and Performance Porting: Multi-stage amplifiers are widely applied in analog circuits. However, their large number of components, complex transfer functions, and intricate pole-zero distributions necessitate extensive manpower for derivation and param sizing to ensure their stability. In order to achieve efficient derivation of the transfer function and simplify the difficulty of circuit design, we propose AmpAgent: a multi-agent system based on large language models (LLMs) for efficiently designing such complex amplifiers from literature with process and performance porting. AmpAgent is composed of three agents: Literature Analysis Agent, Mathematics Reasoning Agent and Device Sizing Agent. They are separately responsible for retrieving key information (e.g. formulas and transfer functions) from the literature, decompose the whole circuit's design problem by deriving the key formulas, and address the decomposed problem iteratively. AmpAgent was employed in the schematic design of seven types of multi-stage amplifiers with different compensation techniques. In terms of design efficiency, AmpAgent has reduced the number of iterations by 1.32$ \sim $4${\times}$ and execution time by 1.19$ \sim $2.99${\times}$ compared to conventional optimization algorithms, with a success rate increased by 1.03$ \sim $6.79${\times}$. In terms of circuit performance, it has improved by 1.63$ \sim $27.25${\times}$ compared to the original literature. The findings suggest that LLMs could play a crucial role in the field of complex analog circuit schematic design, as well as process and performance porting.<|reference_end|>
|
arxiv
|
@article{liu2024ampagent:,
title={AmpAgent: An LLM-based Multi-Agent System for Multi-stage Amplifier
Schematic Design from Literature for Process and Performance Porting},
author={Chengjie Liu, Weiyu Chen, Anlan Peng, Yuan Du, Li Du and Jun Yang},
journal={arXiv preprint arXiv:2409.14739},
year={2024},
archivePrefix={arXiv},
eprint={2409.14739},
primaryClass={cs.ET cs.SY eess.SY}
}
|
liu2024ampagent:
|
arxiv-660636
|
2409.14740
|
ToxiCraft: A Novel Framework for Synthetic Generation of Harmful Information
|
<|reference_start|>ToxiCraft: A Novel Framework for Synthetic Generation of Harmful Information: In different NLP tasks, detecting harmful content is crucial for online environments, especially with the growing influence of social media. However, previous research has two main issues: 1) a lack of data in low-resource settings, and 2) inconsistent definitions and criteria for judging harmful content, requiring classification models to be robust to spurious features and diverse. We propose ToxiCraft, a novel framework for synthesizing datasets of harmful information to address these weaknesses. With only a small amount of seed data, our framework can generate a wide variety of synthetic, yet remarkably realistic, examples of toxic information. Experimentation across various datasets showcases a notable enhancement in detection model robustness and adaptability, surpassing or coming close to the gold labels. We release the generated data at Github upon acceptance.<|reference_end|>
|
arxiv
|
@article{hui2024toxicraft:,
title={ToxiCraft: A Novel Framework for Synthetic Generation of Harmful
Information},
author={Zheng Hui, Zhaoxiao Guo, Hang Zhao, Juanyong Duan, Congrui Huang},
journal={arXiv preprint arXiv:2409.14740},
year={2024},
archivePrefix={arXiv},
eprint={2409.14740},
primaryClass={cs.CL cs.AI}
}
|
hui2024toxicraft:
|
arxiv-660637
|
2409.14741
|
Less yet robust: crucial region selection for scene recognition
|
<|reference_start|>Less yet robust: crucial region selection for scene recognition: Scene recognition, particularly for aerial and underwater images, often suffers from various types of degradation, such as blurring or overexposure. Previous works that focus on convolutional neural networks have been shown to be able to extract panoramic semantic features and perform well on scene recognition tasks. However, low-quality images still impede model performance due to the inappropriate use of high-level semantic features. To address these challenges, we propose an adaptive selection mechanism to identify the most important and robust regions with high-level features. Thus, the model can perform learning via these regions to avoid interference. We implement a learnable mask in the neural network, which can filter high-level features by assigning weights to different regions of the feature matrix. We also introduce a regularization term to further enhance the significance of key high-level feature regions. Different from previous methods, our learnable matrix pays extra attention to regions that are important to multiple categories but may cause misclassification and sets constraints to reduce the influence of such regions. This is a plug-and-play architecture that can be easily extended to other methods. Additionally, we construct an Underwater Geological Scene Classification dataset to assess the effectiveness of our model. Extensive experimental results demonstrate the superiority and robustness of our proposed method over state-of-the-art techniques on two datasets.<|reference_end|>
|
arxiv
|
@article{zhang2024less,
title={Less yet robust: crucial region selection for scene recognition},
author={Jianqi Zhang and Mengxuan Wang and Jingyao Wang and Lingyu Si and
Changwen Zheng and Fanjiang Xu},
journal={arXiv preprint arXiv:2409.14741},
year={2024},
archivePrefix={arXiv},
eprint={2409.14741},
primaryClass={cs.CV cs.AI}
}
|
zhang2024less
|
arxiv-660638
|
2409.14743
|
LlamaPartialSpoof: An LLM-Driven Fake Speech Dataset Simulating Disinformation Generation
|
<|reference_start|>LlamaPartialSpoof: An LLM-Driven Fake Speech Dataset Simulating Disinformation Generation: Previous fake speech datasets were constructed from a defender's perspective to develop countermeasure (CM) systems without considering diverse motivations of attackers. To better align with real-life scenarios, we created LlamaPartialSpoof, a 130-hour dataset containing both fully and partially fake speech, using a large language model (LLM) and voice cloning technologies to evaluate the robustness of CMs. By examining information valuable to both attackers and defenders, we identify several key vulnerabilities in current CM systems, which can be exploited to enhance attack success rates, including biases toward certain text-to-speech models or concatenation methods. Our experimental results indicate that current fake speech detection systems struggle to generalize to unseen scenarios, achieving a best performance of 24.44% equal error rate.<|reference_end|>
|
arxiv
|
@article{luong2024llamapartialspoof:,
title={LlamaPartialSpoof: An LLM-Driven Fake Speech Dataset Simulating
Disinformation Generation},
author={Hieu-Thi Luong, Haoyang Li, Lin Zhang, Kong Aik Lee and Eng Siong Chng},
journal={arXiv preprint arXiv:2409.14743},
year={2024},
archivePrefix={arXiv},
eprint={2409.14743},
primaryClass={eess.AS cs.SD}
}
|
luong2024llamapartialspoof:
|
arxiv-660639
|
2409.14744
|
LINKAGE: Listwise Ranking among Varied-Quality References for Non-Factoid QA Evaluation via LLMs
|
<|reference_start|>LINKAGE: Listwise Ranking among Varied-Quality References for Non-Factoid QA Evaluation via LLMs: Non-Factoid (NF) Question Answering (QA) is challenging to evaluate due to diverse potential answers and the lack of an objective criterion. Commonly used automatic evaluation metrics like ROUGE or BERTScore cannot accurately measure semantic similarities or evaluate answers from different perspectives. Recently, Large Language Models (LLMs) have been adopted for NFQA evaluation due to their compelling performance on various NLP tasks. Common approaches include pointwise scoring of each candidate answer and pairwise comparisons between answers. Inspired by the evolution from pointwise to pairwise to listwise in learning-to-rank methods, we propose a novel listwise NFQA evaluation approach that utilizes LLMs to rank candidate answers in a list of reference answers sorted by descending quality. Moreover, for NF questions that do not have multi-grade or any golden answers, we leverage LLMs to generate a reference answer list of various quality to facilitate the listwise evaluation. Extensive experimental results on three NFQA datasets, i.e., ANTIQUE, TREC-DL-NF, and WebGLM, show that our method has significantly higher correlations with human annotations compared to automatic scores and common pointwise and pairwise approaches.<|reference_end|>
|
arxiv
|
@article{yang2024linkage:,
title={LINKAGE: Listwise Ranking among Varied-Quality References for
Non-Factoid QA Evaluation via LLMs},
author={Sihui Yang, Keping Bi, Wanqing Cui, Jiafeng Guo, Xueqi Cheng},
journal={arXiv preprint arXiv:2409.14744},
year={2024},
archivePrefix={arXiv},
eprint={2409.14744},
primaryClass={cs.CL}
}
|
yang2024linkage:
|
arxiv-660640
|
2409.14745
|
Some Thoughts on Symbolic Transfer Entropy
|
<|reference_start|>Some Thoughts on Symbolic Transfer Entropy: Transfer entropy is used to establish a measure of causal relationships between two variables. Symbolic transfer entropy, as an estimation method for transfer entropy, is widely applied due to its robustness against non-stationarity. This paper investigates the embedding dimension parameter in symbolic transfer entropy and proposes optimization methods for high complexity in extreme cases with complex data. Additionally, it offers some perspectives on estimation methods for transfer entropy.<|reference_end|>
|
arxiv
|
@article{jin2024some,
title={Some Thoughts on Symbolic Transfer Entropy},
author={Dian Jin},
journal={arXiv preprint arXiv:2409.14745},
year={2024},
archivePrefix={arXiv},
eprint={2409.14745},
primaryClass={cs.CC cs.IT math.IT}
}
|
jin2024some
|
arxiv-660641
|
2409.14747
|
Distribution-Level Feature Distancing for Machine Unlearning: Towards a Better Trade-off Between Model Utility and Forgetting
|
<|reference_start|>Distribution-Level Feature Distancing for Machine Unlearning: Towards a Better Trade-off Between Model Utility and Forgetting: With the explosive growth of deep learning applications, the right to be forgotten has become increasingly in demand in various AI industries. For example, given a facial recognition system, some individuals may wish to remove images that might have been used in the training phase from the trained model. Unfortunately, modern deep neural networks sometimes unexpectedly leak personal identities. Recent studies have presented various machine unlearning algorithms to make a trained model unlearn the data to be forgotten. While these methods generally perform well in terms of forgetting scores, we have found that an unexpected model utility drop can occur. This phenomenon, which we term correlation collapse, happens when the machine unlearning algorithms reduce the useful correlation between image features and the true label. To address this challenge, we propose Distribution-Level Feature Distancing (DLFD), a novel method that efficiently forgets instances while preventing correlation collapse. Our method synthesizes data samples so that the generated data distribution is far from the distribution of samples being forgotten in the feature space, achieving effective results within a single training epoch. Through extensive experiments on facial recognition datasets, we demonstrate that our approach significantly outperforms state-of-the-art machine unlearning methods.<|reference_end|>
|
arxiv
|
@article{choi2024distribution-level,
title={Distribution-Level Feature Distancing for Machine Unlearning: Towards a
Better Trade-off Between Model Utility and Forgetting},
author={Dasol Choi, Dongbin Na},
journal={arXiv preprint arXiv:2409.14747},
year={2024},
archivePrefix={arXiv},
eprint={2409.14747},
primaryClass={cs.CV cs.AI}
}
|
choi2024distribution-level
|
arxiv-660642
|
2409.14750
|
FineCops-Ref: A new Dataset and Task for Fine-Grained Compositional Referring Expression Comprehension
|
<|reference_start|>FineCops-Ref: A new Dataset and Task for Fine-Grained Compositional Referring Expression Comprehension: Referring Expression Comprehension (REC) is a crucial cross-modal task that objectively evaluates the capabilities of language understanding, image comprehension, and language-to-image grounding. Consequently, it serves as an ideal testing ground for Multi-modal Large Language Models (MLLMs). In pursuit of this goal, we have established a new REC dataset characterized by two key features: Firstly, it is designed with controllable varying levels of difficulty, necessitating multi-level fine-grained reasoning across object categories, attributes, and multi-hop relationships. Secondly, it includes negative text and images created through fine-grained editing and generation based on existing data, thereby testing the model's ability to correctly reject scenarios where the target object is not visible in the image--an essential aspect often overlooked in existing datasets and approaches. Utilizing this high-quality dataset, we conducted comprehensive evaluations of both state-of-the-art specialist models and MLLMs. Our findings indicate that there remains a significant gap in achieving satisfactory grounding performance. We anticipate that our dataset will inspire new approaches to enhance visual reasoning and develop more advanced cross-modal interaction strategies, ultimately unlocking the full potential of MLLMs. Our code and the datasets are available at https://github.com/liujunzhuo/FineCops-Ref.<|reference_end|>
|
arxiv
|
@article{liu2024finecops-ref:,
title={FineCops-Ref: A new Dataset and Task for Fine-Grained Compositional
Referring Expression Comprehension},
author={Junzhuo Liu, Xuzheng Yang, Weiwei Li, Peng Wang},
journal={arXiv preprint arXiv:2409.14750},
year={2024},
archivePrefix={arXiv},
eprint={2409.14750},
primaryClass={cs.CV cs.CL}
}
|
liu2024finecops-ref:
|
arxiv-660643
|
2409.14751
|
UniBEVFusion: Unified Radar-Vision BEVFusion for 3D Object Detection
|
<|reference_start|>UniBEVFusion: Unified Radar-Vision BEVFusion for 3D Object Detection: 4D millimeter-wave (MMW) radar, which provides both height information and denser point cloud data than 3D MMW radar, has become increasingly popular in 3D object detection. In recent years, radar-vision fusion models have demonstrated performance close to that of LiDAR-based models, offering advantages in terms of lower hardware costs and better resilience in extreme conditions. However, many radar-vision fusion models treat radar as a sparse LiDAR, underutilizing radar-specific information. Additionally, these multi-modal networks are often sensitive to the failure of a single modality, particularly vision. To address these challenges, we propose the Radar Depth Lift-Splat-Shoot (RDL) module, which integrates radar-specific data into the depth prediction process, enhancing the quality of visual Bird-Eye View (BEV) features. We further introduce a Unified Feature Fusion (UFF) approach that extracts BEV features across different modalities using a shared module. To assess the robustness of multi-modal models, we develop a novel Failure Test (FT) ablation experiment, which simulates vision modality failure by injecting Gaussian noise. We conduct extensive experiments on the View-of-Delft (VoD) and TJ4D datasets. The results demonstrate that our proposed Unified BEVFusion (UniBEVFusion) network significantly outperforms state-of-the-art models on the TJ4D dataset, with improvements of 1.44 in 3D and 1.72 in BEV object detection accuracy.<|reference_end|>
|
arxiv
|
@article{zhao2024unibevfusion:,
title={UniBEVFusion: Unified Radar-Vision BEVFusion for 3D Object Detection},
author={Haocheng Zhao, Runwei Guan, Taoyu Wu, Ka Lok Man, Limin Yu, Yutao Yue},
journal={arXiv preprint arXiv:2409.14751},
year={2024},
archivePrefix={arXiv},
eprint={2409.14751},
primaryClass={cs.CV cs.AI}
}
|
zhao2024unibevfusion:
|
arxiv-660644
|
2409.14754
|
CushionCatch: Compliant Catching Mechanism for Mobile Manipulators via Combined Optimization and Learning
|
<|reference_start|>CushionCatch: Compliant Catching Mechanism for Mobile Manipulators via Combined Optimization and Learning: This paper presents a framework to achieve compliant catching with a cushioning mechanism (CCCM) for mobile manipulators. First, we introduce a two-level motion optimization scheme, comprising a high-level capture planner and a low-level joint planner. The low-level joint planner consists of two distinct components: the Pre-Catching (PRC) planner and the Post-Catching (POC) planner. Next, we propose a network that leverages the strengths of LSTM for temporal dependencies and positional encoding for spatial context (P-LSTM). P-LSTM is designed to effectively learn compliant control strategies from human demonstrations. To account for structural differences between humans and robots, safety constraints are incorporated into the POC planner to avoid potential collisions. We validate the CCCM framework through both simulated and real-world ball-catching scenarios, achieving a success rate of 98.70% in simulation, 92.59% in real-world tests, and a 33.2% reduction in impact torques.<|reference_end|>
|
arxiv
|
@article{chen2024cushioncatch:,
title={CushionCatch: Compliant Catching Mechanism for Mobile Manipulators via
Combined Optimization and Learning},
author={Bingjie Chen, Keyu Fan, Houde Liu, Chongkun Xia, Liang Han, Bin Liang},
journal={arXiv preprint arXiv:2409.14754},
year={2024},
archivePrefix={arXiv},
eprint={2409.14754},
primaryClass={cs.RO}
}
|
chen2024cushioncatch:
|
arxiv-660645
|
2409.14755
|
BranchPoseNet: Characterizing tree branching with a deep learning-based pose estimation approach
|
<|reference_start|>BranchPoseNet: Characterizing tree branching with a deep learning-based pose estimation approach: This paper presents an automated pipeline for detecting tree whorls in proximally laser scanning data using a pose-estimation deep learning model. Accurate whorl detection provides valuable insights into tree growth patterns, wood quality, and offers potential for use as a biometric marker to track trees throughout the forestry value chain. The workflow processes point cloud data to create sectional images, which are subsequently used to identify keypoints representing tree whorls and branches along the stem. The method was tested on a dataset of destructively sampled individual trees, where the whorls were located along the stems of felled trees. The results demonstrated strong potential, with accurate identification of tree whorls and precise calculation of key structural metrics, unlocking new insights and deeper levels of information from individual tree point clouds.<|reference_end|>
|
arxiv
|
@article{puliti2024branchposenet:,
title={BranchPoseNet: Characterizing tree branching with a deep learning-based
pose estimation approach},
author={Stefano Puliti, Carolin Fischer, Rasmus Astrup},
journal={arXiv preprint arXiv:2409.14755},
year={2024},
archivePrefix={arXiv},
eprint={2409.14755},
primaryClass={cs.CV q-bio.QM}
}
|
puliti2024branchposenet:
|
arxiv-660646
|
2409.14759
|
VLM's Eye Examination: Instruct and Inspect Visual Competency of Vision Language Models
|
<|reference_start|>VLM's Eye Examination: Instruct and Inspect Visual Competency of Vision Language Models: Vision language models (VLMs) have shown promising reasoning capabilities across various benchmarks; however, our understanding of their visual perception remains limited. In this work, we propose an eye examination process to investigate how a VLM perceives images, specifically focusing on key elements of visual recognition, from primitive color and shape to semantic levels. To this end, we introduce a dataset named LENS to guide a VLM to follow the examination and check its readiness. Once the model is ready, we conduct the examination. Through this examination, we quantify and visualize VLMs' sensitivities to color, shape, and semantic matching. Our findings reveal that VLMs have varying sensitivity to different colors while consistently showing insensitivity to green across different VLMs. We also found different shape sensitivity and semantic recognition depending on the LLM's capacity, despite using the same fixed visual encoder. Our analyses and findings have the potential to inspire the design of VLMs and the pre-processing of visual input to VLMs for improving application performance.<|reference_end|>
|
arxiv
|
@article{hyeon-woo2024vlm's,
title={VLM's Eye Examination: Instruct and Inspect Visual Competency of Vision
Language Models},
author={Nam Hyeon-Woo, Moon Ye-Bin, Wonseok Choi, Lee Hyun and Tae-Hyun Oh},
journal={arXiv preprint arXiv:2409.14759},
year={2024},
archivePrefix={arXiv},
eprint={2409.14759},
primaryClass={cs.CV cs.AI}
}
|
hyeon-woo2024vlm's
|
arxiv-660647
|
2409.14760
|
Isometric Immersion Learning with Riemannian Geometry
|
<|reference_start|>Isometric Immersion Learning with Riemannian Geometry: Manifold learning has been proven to be an effective method for capturing the implicitly intrinsic structure of non-Euclidean data, in which one of the primary challenges is how to maintain the distortion-free (isometric) nature of the data representations. In fact, no existing manifold learning method provides a theoretical guarantee of isometry. Inspired by Nash's isometric theorem, we introduce a new concept called isometric immersion learning based on Riemannian geometry principles. Following this concept, an unsupervised neural network-based model that simultaneously achieves metric and manifold learning is proposed by integrating Riemannian geometry priors. Moreover, we theoretically derive and algorithmically implement a maximum likelihood estimation-based training method for the new model. In simulation experiments, we compared the new model with state-of-the-art baselines on various 3-D geometry datasets, demonstrating that the new model exhibited significantly superior performance on multiple evaluation metrics. Moreover, we applied the Riemannian metric learned from the new model to downstream prediction tasks in real-world scenarios, and the accuracy was improved by an average of 8.8%.<|reference_end|>
|
arxiv
|
@article{chen2024isometric,
title={Isometric Immersion Learning with Riemannian Geometry},
author={Zihao Chen, Wenyong Wang, Yu Xiang},
journal={arXiv preprint arXiv:2409.14760},
year={2024},
archivePrefix={arXiv},
eprint={2409.14760},
primaryClass={cs.LG}
}
|
chen2024isometric
|
arxiv-660648
|
2409.14762
|
Do Large Language Models have Problem-Solving Capability under Incomplete Information Scenarios?
|
<|reference_start|>Do Large Language Models have Problem-Solving Capability under Incomplete Information Scenarios?: The evaluation of the problem-solving capability of Large Language Models (LLMs) under incomplete information scenarios is increasingly important, encompassing capabilities such as questioning, knowledge search, error detection, and path planning. Current research mainly focuses on LLMs' problem-solving capability in games such as ``Twenty Questions''. However, these kinds of games do not require recognizing misleading cues, which is necessary in incomplete information scenarios. Moreover, existing games such as ``Who is undercover'' are highly subjective, making them challenging to evaluate. Therefore, in this paper, we introduce a novel game named BrainKing, based on ``Who is undercover'' and ``Twenty Questions'', for evaluating LLM capabilities under incomplete information scenarios. It requires LLMs to identify target entities with limited yes-or-no questions and potential misleading answers. By setting up easy, medium, and hard difficulty modes, we comprehensively assess the performance of LLMs across various aspects. Our results reveal the capabilities and limitations of LLMs in BrainKing, providing significant insights into LLM problem-solving levels.<|reference_end|>
|
arxiv
|
@article{chen2024do,
title={Do Large Language Models have Problem-Solving Capability under
Incomplete Information Scenarios?},
author={Yuyan Chen, Tianhao Yu, Yueze Li, Songzhou Yan, Sijia Liu, Jiaqing
Liang, Yanghua Xiao},
journal={arXiv preprint arXiv:2409.14762},
year={2024},
archivePrefix={arXiv},
eprint={2409.14762},
primaryClass={cs.CL cs.AI}
}
|
chen2024do
|
arxiv-660649
|
2409.14766
|
Robust and Flexible Omnidirectional Depth Estimation with Multiple 360\deg Cameras
|
<|reference_start|>Robust and Flexible Omnidirectional Depth Estimation with Multiple 360\deg Cameras: Omnidirectional depth estimation has received much attention from researchers in recent years. However, challenges arise due to camera soiling and variations in camera layouts, affecting the robustness and flexibility of the algorithm. In this paper, we use the geometric constraints and redundant information of multiple 360-degree cameras to achieve robust and flexible multi-view omnidirectional depth estimation. We implement two algorithms, in which the two-stage algorithm obtains initial depth maps by pairwise stereo matching of multiple cameras and fuses the multiple depth maps to achieve the final depth estimation; the one-stage algorithm adopts spherical sweeping based on hypothetical depths to construct a uniform spherical matching cost of the multi-camera images and obtain the depth. Additionally, a generalized epipolar equirectangular projection is introduced to simplify the spherical epipolar constraints. To overcome panorama distortion, a spherical feature extractor is implemented. Furthermore, a synthetic 360-degree dataset consisting of 12K road scene panoramas and 3K ground truth depth maps is presented to train and evaluate 360-degree depth estimation algorithms. Our dataset takes soiled camera lenses and glare into consideration, which is more consistent with the real-world environment. Experiments show that our two algorithms achieve state-of-the-art performance, accurately predicting depth maps even when provided with soiled panorama inputs. The flexibility of the algorithms is experimentally validated in terms of camera layouts and numbers.<|reference_end|>
|
arxiv
|
@article{li2024robust,
title={Robust and Flexible Omnidirectional Depth Estimation with Multiple
360{\deg} Cameras},
author={Ming Li, Xueqian Jin, Xuejiao Hu, Jinghao Cao, Sidan Du and Yang Li},
journal={arXiv preprint arXiv:2409.14766},
year={2024},
archivePrefix={arXiv},
eprint={2409.14766},
primaryClass={cs.CV}
}
|
li2024robust
|
arxiv-660650
|
2409.14769
|
Language-Agnostic Analysis of Speech Depression Detection
|
<|reference_start|>Language-Agnostic Analysis of Speech Depression Detection: People with Major Depressive Disorder (MDD) exhibit symptoms of tonal variation in their speech compared to their healthy counterparts. However, these tonal variations are not confined to the state of MDD alone but also depend on the language, each of which has unique tonal patterns. This work analyzes automatic speech-based depression detection across two languages, English and Malayalam, which exhibit distinctive prosodic and phonemic characteristics. We propose an approach that utilizes speech data collected along with self-reported labels from participants reading sentences from the IViE corpus, in both English and Malayalam. The IViE corpus consists of five sets of sentences: simple sentences, WH-questions, questions without morphosyntactic markers, inversion questions and coordinations, which can naturally prompt speakers to speak in different tonal patterns. Convolutional Neural Networks (CNNs) are employed for detecting depression from speech. The CNN model is trained to identify acoustic features associated with depression in speech, focusing on both languages. The model's performance is evaluated on the collected dataset containing recordings from both depressed and non-depressed speakers, analyzing its effectiveness in detecting depression across the two languages. Our findings and collected data could contribute to the development of language-agnostic speech-based depression detection systems, thereby enhancing accessibility for diverse populations.<|reference_end|>
|
arxiv
|
@article{binu2024language-agnostic,
title={Language-Agnostic Analysis of Speech Depression Detection},
author={Sona Binu, Jismi Jose, Fathima Shimna K V, Alino Luke Hans, Reni K.
Cherian, Starlet Ben Alex, Priyanka Srivastava, Chiranjeevi Yarra},
journal={arXiv preprint arXiv:2409.14769},
year={2024},
archivePrefix={arXiv},
eprint={2409.14769},
primaryClass={cs.CL}
}
|
binu2024language-agnostic
|
arxiv-660651
|
2409.14771
|
OMPar: Automatic Parallelization with AI-Driven Source-to-Source Compilation
|
<|reference_start|>OMPar: Automatic Parallelization with AI-Driven Source-to-Source Compilation: Manual parallelization of code remains a significant challenge due to the complexities of modern software systems and the widespread adoption of multi-core architectures. This paper introduces OMPar, an AI-driven tool designed to automate the parallelization of C/C++ code using OpenMP pragmas. OMPar integrates Large Language Models (LLMs) through two key components: OMPify, which assesses loop parallelization potential, and MonoCoder-OMP, a new fine-tuned model which generates precise OpenMP pragmas. The evaluation of OMPar follows the same rigorous process applied to traditional tools like source-to-source AutoPar and ICPC compilers: (1) ensuring the generated code compiles and runs correctly in serial form, (2) assessing performance with the gradual addition of threads and corresponding physical cores, and (3) verifying and validating the correctness of the code's output. Benchmarks from HeCBench and ParEval are used to evaluate accuracy and performance. Experimental results demonstrate that OMPar significantly outperforms traditional methods, achieving higher accuracy in identifying parallelizable loops and generating efficient pragmas. Beyond accuracy, OMPar offers advantages such as the ability to work on partial or incomplete codebases and the capacity to continuously learn from new code patterns, enhancing its parallelization capabilities over time. These results underscore the potential of LLMs in revolutionizing automatic parallelization techniques, paving the way for more efficient and scalable parallel computing systems.<|reference_end|>
|
arxiv
|
@article{kadosh2024ompar:,
title={OMPar: Automatic Parallelization with AI-Driven Source-to-Source
Compilation},
author={Tal Kadosh, Niranjan Hasabnis, Prema Soundararajan, Vy A. Vo, Mihai
Capota, Nesreen Ahmed, Yuval Pinter, Gal Oren},
journal={arXiv preprint arXiv:2409.14771},
year={2024},
archivePrefix={arXiv},
eprint={2409.14771},
primaryClass={cs.CL}
}
|
kadosh2024ompar:
|
arxiv-660652
|
2409.14774
|
CFVNet: An End-to-End Cancelable Finger Vein Network for Recognition
|
<|reference_start|>CFVNet: An End-to-End Cancelable Finger Vein Network for Recognition: Finger vein recognition technology has become one of the primary solutions for high-security identification systems. However, it still suffers from information leakage problems, which seriously jeopardize users' privacy and anonymity and pose great security risks. In addition, no prior work considers a fully integrated secure finger vein recognition system. Therefore, unlike previous systems, we integrate preprocessing and template protection into an integrated deep learning model. We propose an end-to-end cancelable finger vein network (CFVNet), which can be used to design a secure finger vein recognition system. It includes a plug-and-play BWR-ROIAlign unit, which consists of three sub-modules: Localization, Compression and Transformation. The localization module achieves automated localization of a stable and unique finger vein ROI. The compression module losslessly removes spatial and channel redundancies. The transformation module uses the proposed BWR method to introduce unlinkability, irreversibility and revocability to the system. BWR-ROIAlign can be directly plugged into the model to introduce the above features for DCNN-based finger vein recognition systems. We perform extensive experiments on four public datasets to study the performance and cancelable biometric attributes of the CFVNet-based recognition system. The average accuracy, EERs and Dsys on the four datasets are 99.82%, 0.01% and 0.025, respectively, achieving competitive performance compared with the state-of-the-art.<|reference_end|>
|
arxiv
|
@article{wang2024cfvnet:,
title={CFVNet: An End-to-End Cancelable Finger Vein Network for Recognition},
author={Yifan Wang, Jie Gui, Yuan Yan Tang, and James Tin-Yau Kwok},
journal={in IEEE Transactions on Information Forensics and Security, vol.
19, pp. 7810-7823, 2024},
year={2024},
doi={10.1109/TIFS.2024.3436528},
archivePrefix={arXiv},
eprint={2409.14774},
primaryClass={cs.CV}
}
|
wang2024cfvnet:
|
arxiv-660653
|
2409.14775
|
Like a Martial Arts Dodge: Safe Expeditious Whole-Body Control of Mobile Manipulators for Collision Avoidance
|
<|reference_start|>Like a Martial Arts Dodge: Safe Expeditious Whole-Body Control of Mobile Manipulators for Collision Avoidance: In the control task of mobile manipulators (MMs), achieving efficient and agile obstacle avoidance in dynamic environments is challenging. In this letter, we present a safe expeditious whole-body (SEWB) control for MMs that ensures both external and internal collision avoidance. SEWB is constructed by a two-layer optimization structure. Firstly, control barrier functions (CBFs) are employed for an MM to establish initial safety constraints. Moreover, to resolve the pseudo-equilibrium problem of CBFs and improve avoidance agility, we propose a novel sub-optimization called adaptive cyclic inequality (ACI). ACI considers obstacle positions, velocities, and predefined directions to generate directional constraints. Then, we combine CBF and ACI to decompose safety constraints alongside an equality constraint for expectation control. Considering all these constraints, we formulate a quadratic program (QP) as our primary optimization. In the QP cost function, we account for the motion accuracy differences between the base and the manipulator, as well as obstacle influences, to achieve optimized motion. We validate the effectiveness of our SEWB control in avoiding collisions and reaching target points through simulations and real-world experiments, particularly in challenging scenarios that involve fast-moving obstacles. SEWB has been proven to achieve whole-body collision-free motion and improve avoidance agility, similar to a "martial arts dodge".<|reference_end|>
|
arxiv
|
@article{chen2024like,
title={Like a Martial Arts Dodge: Safe Expeditious Whole-Body Control of Mobile
Manipulators for Collision Avoidance},
author={Bingjie Chen and Houde Liu and Chongkun Xia and Liang Han and Xueqian Wang and Bin Liang},
journal={arXiv preprint arXiv:2409.14775},
year={2024},
archivePrefix={arXiv},
eprint={2409.14775},
primaryClass={cs.RO}
}
|
chen2024like
|
arxiv-660654
|
2409.14778
|
Human Hair Reconstruction with Strand-Aligned 3D Gaussians
|
<|reference_start|>Human Hair Reconstruction with Strand-Aligned 3D Gaussians: We introduce a new hair modeling method that uses a dual representation of classical hair strands and 3D Gaussians to produce accurate and realistic strand-based reconstructions from multi-view data. In contrast to recent approaches that leverage unstructured Gaussians to model human avatars, our method reconstructs the hair using 3D polylines, or strands. This fundamental difference allows the use of the resulting hairstyles out-of-the-box in modern computer graphics engines for editing, rendering, and simulation. Our 3D lifting method relies on unstructured Gaussians to generate multi-view ground truth data to supervise the fitting of hair strands. The hairstyle itself is represented in the form of the so-called strand-aligned 3D Gaussians. This representation allows us to combine strand-based hair priors, which are essential for realistic modeling of the inner structure of hairstyles, with the differentiable rendering capabilities of 3D Gaussian Splatting. Our method, named Gaussian Haircut, is evaluated on synthetic and real scenes and demonstrates state-of-the-art performance in the task of strand-based hair reconstruction.<|reference_end|>
|
arxiv
|
@article{zakharov2024human,
title={Human Hair Reconstruction with Strand-Aligned 3D Gaussians},
author={Egor Zakharov and Vanessa Sklyarova and Michael Black and Giljoo Nam and Justus Thies and Otmar Hilliges},
journal={arXiv preprint arXiv:2409.14778},
year={2024},
archivePrefix={arXiv},
eprint={2409.14778},
primaryClass={cs.CV cs.GR}
}
|
zakharov2024human
|
arxiv-660655
|
2409.14779
|
Hardware/Algorithm Co-design for Real-Time I/O Control with Improved Timing Accuracy and Robustness
|
<|reference_start|>Hardware/Algorithm Co-design for Real-Time I/O Control with Improved Timing Accuracy and Robustness: In safety-critical systems, timing accuracy is the key to achieving precise I/O control. To meet such strict timing requirements, dedicated hardware assistance has recently been investigated and developed. However, these solutions are often fragile, due to unforeseen timing defects. In this paper, we propose a robust and timing-accurate I/O co-processor, which manages I/O tasks using Execution Time Servers (ETSs) and a two-level scheduler. The ETSs limit the impact of timing defects between tasks, and the scheduler prioritises ETSs based on their importance, offering a robust and configurable scheduling infrastructure. Based on the hardware design, we present an ETS-based timing-accurate I/O schedule, with the ETS parameters configured to further enhance robustness against timing defects. Experiments show the proposed I/O control method outperforms the state-of-the-art method in terms of timing accuracy and robustness without introducing significant overhead.<|reference_end|>
|
arxiv
|
@article{jiang2024hardware/algorithm,
title={Hardware/Algorithm Co-design for Real-Time I/O Control with Improved
Timing Accuracy and Robustness},
author={Zhe Jiang and Shuai Zhao and Ran Wei and Xin Si and Gang Chen and Nan Guan},
journal={arXiv preprint arXiv:2409.14779},
year={2024},
archivePrefix={arXiv},
eprint={2409.14779},
primaryClass={cs.AR}
}
|
jiang2024hardware/algorithm
|
arxiv-660656
|
2409.14781
|
Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method
|
<|reference_start|>Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method: As the scale of training corpora for large language models (LLMs) grows, model developers become increasingly reluctant to disclose details on their data. This lack of transparency poses challenges to scientific evaluation and ethical deployment. Recently, pretraining data detection approaches, which infer whether a given text was part of an LLM's training data through black-box access, have been explored. The Min-K\% Prob method, which has achieved state-of-the-art results, assumes that a non-training example tends to contain a few outlier words with low token probabilities. However, the effectiveness may be limited as it tends to misclassify non-training texts that contain many common words with high probabilities predicted by LLMs. To address this issue, we introduce a divergence-based calibration method, inspired by the divergence-from-randomness concept, to calibrate token probabilities for pretraining data detection. We compute the cross-entropy (i.e., the divergence) between the token probability distribution and the token frequency distribution to derive a detection score. We have developed a Chinese-language benchmark, PatentMIA, to assess the performance of detection approaches for LLMs on Chinese text. Experimental results on English-language benchmarks and PatentMIA demonstrate that our proposed method significantly outperforms existing methods. Our code and PatentMIA benchmark are available at \url{https://github.com/zhang-wei-chao/DC-PDD}.<|reference_end|>
|
arxiv
|
@article{zhang2024pretraining,
title={Pretraining Data Detection for Large Language Models: A Divergence-based
Calibration Method},
author={Weichao Zhang and Ruqing Zhang and Jiafeng Guo and Maarten de Rijke and Yixing Fan and Xueqi Cheng},
journal={arXiv preprint arXiv:2409.14781},
year={2024},
archivePrefix={arXiv},
eprint={2409.14781},
primaryClass={cs.CL cs.CR}
}
|
zhang2024pretraining
|
arxiv-660657
|
2409.14783
|
Disjoint covering of bipartite graphs with $s$-clubs
|
<|reference_start|>Disjoint covering of bipartite graphs with $s$-clubs: For a positive integer $s$, an $s$-club in a graph $G$ is a set of vertices inducing a subgraph with diameter at most $s$. As generalizations of cliques, $s$-clubs offer a flexible model for real-world networks. This paper addresses the problems of partitioning and disjoint covering of vertices with $s$-clubs on bipartite graphs. First we prove that for any fixed $s \geq 3$ and fixed $k \geq 3$, determining whether the vertices of $G$ can be partitioned into at most $k$ disjoint $s$-clubs is NP-complete even for bipartite graphs. Additionally, we study the Maximum Disjoint $(t,s)$-Club Covering problem (MAX-DCC($t,s$)), which aims to find a collection of vertex-disjoint $(t,s)$-clubs (i.e. $s$-clubs with at least $t$ vertices) that covers the maximum number of vertices in $G$. We prove that it is NP-hard to achieve an approximation factor of $\frac{95}{94} $ for MAX-DCC($t,3$) for any fixed $t\geq 8$ and for MAX-DCC($t,2$) for any fixed $t\geq 5$ even for bipartite graphs. Previously, results were known only for MAX-DCC($3,2$). Finally, we provide a polynomial-time algorithm for MAX-DCC($2,2$).<|reference_end|>
|
arxiv
|
@article{monti2024disjoint,
title={Disjoint covering of bipartite graphs with $s$-clubs},
author={Angelo Monti and Blerina Sinaimeri},
journal={arXiv preprint arXiv:2409.14783},
year={2024},
archivePrefix={arXiv},
eprint={2409.14783},
primaryClass={cs.CC}
}
|
monti2024disjoint
|
arxiv-660658
|
2409.14784
|
SAMEdge: An Edge-cloud Video Analytics Architecture for the Segment Anything Model
|
<|reference_start|>SAMEdge: An Edge-cloud Video Analytics Architecture for the Segment Anything Model: As artificial intelligence continues to evolve, it is increasingly capable of handling a wide range of video analytics tasks with merely one large model. One of the key foundation technologies is the Segment Anything Model (SAM), which allows the video analytics tasks to be determined on the fly according to the input prompts from the user. However, achieving real-time response in video analytics applications is crucial for user experiences due to the limited communication and computation resources on the edge, especially with SAM, where users may continuously interact by adding or adjusting prompts. In this paper, we propose SAMEdge, a novel edge-cloud computing architecture designed to support SAM computations for edge users. SAMEdge integrates new modules on the edge and the cloud to maximize analytics accuracy under visual prompts and image prompts input with latency constraints. It addresses resource challenges associated with prompt encoding and image encoding by offering a visual prompt transformation algorithm for visual prompts and efficient workload partitioning for image encoding. SAMEdge is implemented by extending the open-source SAM project from Meta AI. We demonstrate the practical application of SAMEdge through a case study on a Visual Tour Guide application. Our evaluation indicates that SAMEdge significantly enhances the accuracy of the video analytics application under distinct network bandwidths across various prompts.<|reference_end|>
|
arxiv
|
@article{lu2024samedge:,
title={SAMEdge: An Edge-cloud Video Analytics Architecture for the Segment
Anything Model},
author={Rui Lu and Siping Shi and Yanting Liu and Dan Wang},
journal={arXiv preprint arXiv:2409.14784},
year={2024},
archivePrefix={arXiv},
eprint={2409.14784},
primaryClass={cs.AI}
}
|
lu2024samedge:
|
arxiv-660659
|
2409.14785
|
Towards Efficient and Robust VQA-NLE Data Generation with Large Vision-Language Models
|
<|reference_start|>Towards Efficient and Robust VQA-NLE Data Generation with Large Vision-Language Models: Natural Language Explanation (NLE) aims to elucidate the decision-making process by providing detailed, human-friendly explanations in natural language. It helps demystify the decision-making processes of large vision-language models (LVLMs) through the use of language models. While existing methods for creating a Vision Question-Answering with Natural Language Explanation (VQA-NLE) datasets can provide explanations, they heavily rely on human annotations that are time-consuming and costly. In this study, we propose a novel approach that leverages LVLMs to efficiently generate high-quality synthetic VQA-NLE datasets. By evaluating our synthetic data, we showcase how advanced prompting techniques can lead to the production of high-quality VQA-NLE data. Our findings indicate that this proposed method achieves up to 20x faster than human annotation, with only a minimal decrease in qualitative metrics, achieving robust quality that is nearly equivalent to human-annotated data. Furthermore, we show that incorporating visual prompts significantly enhances the relevance of text generation. Our study paves the way for a more efficient and robust automated generation of multi-modal NLE data, offering a promising solution to the problem.<|reference_end|>
|
arxiv
|
@article{irawan2024towards,
title={Towards Efficient and Robust VQA-NLE Data Generation with Large
Vision-Language Models},
author={Patrick Amadeus Irawan and Genta Indra Winata and Samuel Cahyawijaya and Ayu Purwarianti},
journal={arXiv preprint arXiv:2409.14785},
year={2024},
archivePrefix={arXiv},
eprint={2409.14785},
primaryClass={cs.CL cs.CV}
}
|
irawan2024towards
|
arxiv-660660
|
2409.14790
|
Fastest quotient iteration with variational principles for self-adjoint eigenvalue problems
|
<|reference_start|>Fastest quotient iteration with variational principles for self-adjoint eigenvalue problems: For the generalized eigenvalue problem, a quotient function is devised for estimating eigenvalues in terms of an approximate eigenvector. This gives rise to an infinite family of quotients, all entirely arguable to be used in estimation. Although the Rayleigh quotient is among them, one can suggest using it only in an auxiliary manner for choosing the quotient for near optimal results. In normal eigenvalue problems, for any approximate eigenvector, there always exists a "perfect" quotient exactly giving an eigenvalue. For practical estimates in the self-adjoint case, an approximate midpoint of the spectrum is a good choice for reformulating the eigenvalue problem yielding apparently the fastest quotient iterative method there exists. No distinction is made between estimating extreme or interior eigenvalues. Preconditioning from the left results in changing the inner-product and affects the estimates accordingly. Preconditioning from the right preserves self-adjointness and can hence be performed without any restrictions. It is used in variational methods for optimally computing approximate eigenvectors.<|reference_end|>
|
arxiv
|
@article{huhtanen2024fastest,
title={Fastest quotient iteration with variational principles for self-adjoint
eigenvalue problems},
author={Marko Huhtanen and Vesa Kotila and Pauliina Uusitalo},
journal={arXiv preprint arXiv:2409.14790},
year={2024},
archivePrefix={arXiv},
eprint={2409.14790},
primaryClass={math.NA cs.NA}
}
|
huhtanen2024fastest
|
arxiv-660661
|
2409.14791
|
Multiscale scattered data analysis in samplet coordinates
|
<|reference_start|>Multiscale scattered data analysis in samplet coordinates: We study multiscale scattered data interpolation schemes for globally supported radial basis functions, with a focus on the Mat\'ern class. The multiscale approximation is constructed through a sequence of residual corrections, where radial basis functions with different lengthscale parameters are employed to capture varying levels of detail. To apply this approach to large data sets, we suggest to represent the resulting generalized Vandermonde matrices in samplet coordinates. Samplets are localized, discrete signed measures exhibiting vanishing moments and allow for the sparse approximation of generalized Vandermonde matrices issuing from a vast class of radial basis functions. Given a quasi-uniform set of $N$ data sites, and local approximation spaces with geometrically decreasing dimension, the full multiscale system can be assembled with cost $\mathcal{O}(N \log N)$. We prove that the condition numbers of the linear systems at each level remain bounded independent of the particular level, allowing us to use an iterative solver with a bounded number of iterations for the numerical solution. Hence, the overall cost of the proposed approach is $\mathcal{O}(N \log N)$. The theoretical findings are accompanied by extensive numerical studies in two and three spatial dimensions.<|reference_end|>
|
arxiv
|
@article{avesani2024multiscale,
title={Multiscale scattered data analysis in samplet coordinates},
author={Sara Avesani and R\"udiger Kempf and Michael Multerer and Holger Wendland},
journal={arXiv preprint arXiv:2409.14791},
year={2024},
archivePrefix={arXiv},
eprint={2409.14791},
primaryClass={math.NA cs.LG cs.NA}
}
|
avesani2024multiscale
|
arxiv-660662
|
2409.14792
|
Adaptive Conformal Inference for Multi-Step Ahead Time-Series Forecasting Online
|
<|reference_start|>Adaptive Conformal Inference for Multi-Step Ahead Time-Series Forecasting Online: The aim of this paper is to propose an adaptation of the well-known adaptive conformal inference (ACI) algorithm to achieve finite-sample coverage guarantees in multi-step ahead time-series forecasting in the online setting. ACI dynamically adjusts significance levels, and comes with finite-sample guarantees on coverage, even for non-exchangeable data. Our multi-step ahead ACI procedure inherits these guarantees at each prediction step, as well as for the overall error rate. The multi-step ahead ACI algorithm can be used with different target error and learning rates at different prediction steps, which is illustrated in our numerical examples, where we employ a version of the conformalised ridge regression algorithm, adapted to multi-input multi-output forecasting. The examples serve to show how the method works in practice, illustrating the effect of variable target error and learning rates for different prediction steps, which suggests that a balance may be struck between efficiency (interval width) and coverage.<|reference_end|>
|
arxiv
|
@article{szabadváry2024adaptive,
title={Adaptive Conformal Inference for Multi-Step Ahead Time-Series
Forecasting Online},
author={Johan Hallberg Szabadv\'ary},
journal={Proceedings of Machine Learning Research, 230:250-263, 2024 (COPA 2024)},
year={2024},
archivePrefix={arXiv},
eprint={2409.14792},
primaryClass={stat.ML cs.LG}
}
|
szabadváry2024adaptive
|
arxiv-660663
|
2409.14794
|
Advancing Depression Detection on Social Media Platforms Through Fine-Tuned Large Language Models
|
<|reference_start|>Advancing Depression Detection on Social Media Platforms Through Fine-Tuned Large Language Models: This study investigates the use of Large Language Models (LLMs) for improved depression detection from users' social media data. Through the use of fine-tuned GPT 3.5 Turbo 1106 and LLaMA2-7B models and a sizable dataset from earlier studies, we were able to identify depressed content in social media posts with a high accuracy of nearly 96.0 percent. The comparative analysis of the obtained results with the relevant studies in the literature shows that the proposed fine-tuned LLMs achieved enhanced performance compared to existing state-of-the-art systems. This demonstrates the robustness of LLM-based fine-tuned systems to be used as potential depression detection systems. The study describes the approach in depth, including the parameters used and the fine-tuning procedure, and it addresses the important implications of our results for the early diagnosis of depression on several social media platforms.<|reference_end|>
|
arxiv
|
@article{shah2024advancing,
title={Advancing Depression Detection on Social Media Platforms Through
Fine-Tuned Large Language Models},
author={Shahid Munir Shah and Syeda Anshrah Gillani and Mirza Samad Ahmed Baig and Muhammad Aamer Saleem and Muhammad Hamzah Siddiqui},
journal={arXiv preprint arXiv:2409.14794},
year={2024},
archivePrefix={arXiv},
eprint={2409.14794},
primaryClass={cs.CV}
}
|
shah2024advancing
|
arxiv-660664
|
2409.14796
|
Research on Dynamic Data Flow Anomaly Detection based on Machine Learning
|
<|reference_start|>Research on Dynamic Data Flow Anomaly Detection based on Machine Learning: The sophistication and diversity of contemporary cyberattacks have rendered the use of proxies, gateways, firewalls, and encrypted tunnels as a standalone defensive strategy inadequate. Consequently, the proactive identification of data anomalies has emerged as a prominent area of research within the field of data security. The majority of extant studies concentrate on sample equilibrium data, with the consequence that the detection effect is not optimal in the context of unbalanced data. In this study, the unsupervised learning method is employed to identify anomalies in dynamic data flows. Initially, multi-dimensional features are extracted from real-time data, and a clustering algorithm is utilised to analyse the patterns of the data. This enables the potential outliers to be automatically identified. By clustering similar data, the model is able to detect data behaviour that deviates significantly from normal traffic without the need for labelled data. The results of the experiments demonstrate that the proposed method exhibits high accuracy in the detection of anomalies across a range of scenarios. Notably, it demonstrates robust and adaptable performance, particularly in the context of unbalanced data.<|reference_end|>
|
arxiv
|
@article{wang2024research,
title={Research on Dynamic Data Flow Anomaly Detection based on Machine
Learning},
author={Liyang Wang and Yu Cheng and Hao Gong and Jiacheng Hu and Xirui Tang and Iris Li},
journal={arXiv preprint arXiv:2409.14796},
year={2024},
archivePrefix={arXiv},
eprint={2409.14796},
primaryClass={cs.LG cs.AI cs.CR}
}
|
wang2024research
|
arxiv-660665
|
2409.14798
|
PrivaMatch: A Privacy-Preserving DNA Matching Scheme for Forensic Investigation
|
<|reference_start|>PrivaMatch: A Privacy-Preserving DNA Matching Scheme for Forensic Investigation: DNA fingerprinting and matching for identifying suspects has been a common practice in criminal investigation. Such proceedings involve multiple parties such as investigating agencies, suspects and forensic labs. A major challenge in such settings is to carry out the matching process between the suspects' DNA samples and the samples obtained from the crime scene without compromising the privacy of the suspects' DNA profiles. Additionally, it is necessary that sensitive details pertaining to the investigation such as the identities of the suspects and evidence obtained from the crime scene must be kept private to the investigating agency. We present a novel DNA matching scheme, termed as PrivaMatch, which addresses multiple concerns about privacy of the suspects' DNA profiles and the crime scene evidence. In the proposed scheme, the investigating agencies use oblivious transfer and zero-knowledge proofs to privately obtain the DNA profiles of the suspects from the forensic lab's database. In addition, we present a clever data obfuscation technique using homomorphic encryption and modular arithmetic for the investigating agency to privately obtain the DNA profile of the crime scene's sample, keeping the profile oblivious from the forensic lab. The DNA profile of the crime scene sample is operated on using a homomorphic cryptosystem such that neither of the parties (e.g., the investigation agency, forensic labs, DNA database owners) learns about the private data of the other parties. The proposed scheme is analysed formally and the practicality of its security strengths is verified using simulations under standard assumptions.<|reference_end|>
|
arxiv
|
@article{das2024privamatch:,
title={PrivaMatch: A Privacy-Preserving DNA Matching Scheme for Forensic
Investigation},
author={Sankha Das},
journal={arXiv preprint arXiv:2409.14798},
year={2024},
archivePrefix={arXiv},
eprint={2409.14798},
primaryClass={cs.CR}
}
|
das2024privamatch:
|
arxiv-660666
|
2409.14800
|
Choose the Final Translation from NMT and LLM hypotheses Using MBR Decoding: HW-TSC's Submission to the WMT24 General MT Shared Task
|
<|reference_start|>Choose the Final Translation from NMT and LLM hypotheses Using MBR Decoding: HW-TSC's Submission to the WMT24 General MT Shared Task: This paper presents the submission of Huawei Translate Services Center (HW-TSC) to the WMT24 general machine translation (MT) shared task, where we participate in the English to Chinese (en2zh) language pair. Similar to previous years' work, we use training strategies such as regularized dropout, bidirectional training, data diversification, forward translation, back translation, alternated training, curriculum learning, and transductive ensemble learning to train the neural machine translation (NMT) model based on the deep Transformer-big architecture. The difference is that we also use continue pre-training, supervised fine-tuning, and contrastive preference optimization to train the large language model (LLM) based MT model. By using Minimum Bayesian risk (MBR) decoding to select the final translation from multiple hypotheses for NMT and LLM-based MT models, our submission receives competitive results in the final evaluation.<|reference_end|>
|
arxiv
|
@article{wu2024choose,
title={Choose the Final Translation from NMT and LLM hypotheses Using MBR
Decoding: HW-TSC's Submission to the WMT24 General MT Shared Task},
author={Zhanglin Wu and Daimeng Wei and Zongyao Li and Hengchao Shang and Jiaxin Guo and Shaojun Li and Zhiqiang Rao and Yuanchang Luo and Ning Xie and Hao Yang},
journal={arXiv preprint arXiv:2409.14800},
year={2024},
archivePrefix={arXiv},
eprint={2409.14800},
primaryClass={cs.AI}
}
|
wu2024choose
|
arxiv-660667
|
2409.14801
|
MTP: A Dataset for Multi-Modal Turning Points in Casual Conversations
|
<|reference_start|>MTP: A Dataset for Multi-Modal Turning Points in Casual Conversations: Detecting critical moments, such as emotional outbursts or changes in decisions during conversations, is crucial for understanding shifts in human behavior and their consequences. Our work introduces a novel problem setting focusing on these moments as turning points (TPs), accompanied by a meticulously curated, high-consensus, human-annotated multi-modal dataset. We provide precise timestamps, descriptions, and visual-textual evidence highlighting changes in emotions, behaviors, perspectives, and decisions at these turning points. We also propose a framework, TPMaven, utilizing state-of-the-art vision-language models to construct a narrative from the videos and large language models to classify and detect turning points in our multi-modal dataset. Evaluation results show that TPMaven achieves an F1-score of 0.88 in classification and 0.61 in detection, with additional explanations aligning with human expectations.<|reference_end|>
|
arxiv
|
@article{ho2024mtp:,
title={MTP: A Dataset for Multi-Modal Turning Points in Casual Conversations},
author={Gia-Bao Dinh Ho and Chang Wei Tan and Zahra Zamanzadeh Darban and Mahsa Salehi and Gholamreza Haffari and Wray Buntine},
journal={arXiv preprint arXiv:2409.14801},
year={2024},
archivePrefix={arXiv},
eprint={2409.14801},
primaryClass={cs.CL}
}
|
ho2024mtp:
|
arxiv-660668
|
2409.14803
|
Benchmarking Edge AI Platforms for High-Performance ML Inference
|
<|reference_start|>Benchmarking Edge AI Platforms for High-Performance ML Inference: Edge computing's growing prominence, due to its ability to reduce communication latency and enable real-time processing, is promoting the rise of high-performance, heterogeneous System-on-Chip solutions. While current approaches often involve scaling down modern hardware, the performance characteristics of neural network workloads on these platforms can vary significantly, especially when it comes to parallel processing, which is a critical consideration for edge deployments. To address this, we conduct a comprehensive study comparing the latency and throughput of various linear algebra and neural network inference tasks across CPU-only, CPU/GPU, and CPU/NPU integrated solutions. We find that the Neural Processing Unit (NPU) excels in matrix-vector multiplication (58.6% faster) and some neural network tasks (3.2$\times$ faster for video classification and large language models). GPU outperforms in matrix multiplication (22.6% faster) and LSTM networks (2.7$\times$ faster) while CPU excels at less parallel operations like dot product. NPU-based inference offers a balance of latency and throughput at lower power consumption. GPU-based inference, though more energy-intensive, performs best with large dimensions and batch sizes. We highlight the potential of heterogeneous computing solutions for edge AI, where diverse compute units can be strategically leveraged to boost accurate and real-time inference.<|reference_end|>
|
arxiv
|
@article{jayanth2024benchmarking,
title={Benchmarking Edge AI Platforms for High-Performance ML Inference},
author={Rakshith Jayanth and Neelesh Gupta and Viktor Prasanna},
journal={arXiv preprint arXiv:2409.14803},
year={2024},
archivePrefix={arXiv},
eprint={2409.14803},
primaryClass={cs.AI}
}
|
jayanth2024benchmarking
|
arxiv-660669
|
2409.14805
|
SDBA: A Stealthy and Long-Lasting Durable Backdoor Attack in Federated Learning
|
<|reference_start|>SDBA: A Stealthy and Long-Lasting Durable Backdoor Attack in Federated Learning: Federated Learning is a promising approach for training machine learning models while preserving data privacy, but its distributed nature makes it vulnerable to backdoor attacks, particularly in NLP tasks, where related research remains limited. This paper introduces SDBA, a novel backdoor attack mechanism designed for NLP tasks in FL environments. Our systematic analysis across LSTM and GPT-2 models identifies the most vulnerable layers for backdoor injection and achieves both stealth and long-lasting durability through layer-wise gradient masking and top-k% gradient masking within these layers. Experiments on next token prediction and sentiment analysis tasks show that SDBA outperforms existing backdoors in durability and effectively bypasses representative defense mechanisms, with notable performance in LLMs such as GPT-2. These results underscore the need for robust defense strategies in NLP-based FL systems.<|reference_end|>
|
arxiv
|
@article{choe2024sdba:,
title={SDBA: A Stealthy and Long-Lasting Durable Backdoor Attack in Federated
Learning},
author={Minyeong Choe and Cheolhee Park and Changho Seo and Hyunil Kim},
journal={arXiv preprint arXiv:2409.14805},
year={2024},
archivePrefix={arXiv},
eprint={2409.14805},
primaryClass={cs.LG cs.CR}
}
|
choe2024sdba:
|
arxiv-660670
|
2409.14810
|
Pre-trained Language Model and Knowledge Distillation for Lightweight Sequential Recommendation
|
<|reference_start|>Pre-trained Language Model and Knowledge Distillation for Lightweight Sequential Recommendation: Sequential recommendation models user interests based on historical behaviors to provide personalized recommendation. Previous sequential recommendation algorithms primarily employ neural networks to extract features of user interests, achieving good performance. However, due to the sparsity of recommendation system datasets, these algorithms often employ small-scale network frameworks, resulting in weaker generalization capability. Recently, a series of sequential recommendation algorithms based on large pre-trained language models have been proposed. Nonetheless, given the real-time demands of recommendation systems, the challenge remains in applying pre-trained language models for rapid recommendations in real scenarios. To address this, we propose a sequential recommendation algorithm based on a pre-trained language model and knowledge distillation. The key of the proposed algorithm is to transfer pre-trained knowledge across domains and achieve lightweight inference by knowledge distillation. The algorithm operates in two stages: in the first stage, we fine-tune the pre-trained language model on the recommendation dataset to transfer the pre-trained knowledge to the recommendation task; in the second stage, we distill the trained language model to transfer the learned knowledge to a lightweight model. Extensive experiments on multiple public recommendation datasets show that the proposed algorithm enhances recommendation accuracy and provides timely recommendation services.<|reference_end|>
|
arxiv
|
@article{li2024pre-trained,
title={Pre-trained Language Model and Knowledge Distillation for Lightweight
Sequential Recommendation},
author={Li Li and Mingyue Cheng and Zhiding Liu and Hao Zhang and Qi Liu and Enhong Chen},
journal={arXiv preprint arXiv:2409.14810},
year={2024},
archivePrefix={arXiv},
eprint={2409.14810},
primaryClass={cs.IR cs.LG}
}
|
li2024pre-trained
|
arxiv-660671
|
2409.14815
|
Automatic Geometric Decomposition for Analytical Inverse Kinematics
|
<|reference_start|>Automatic Geometric Decomposition for Analytical Inverse Kinematics: Calculating the inverse kinematics (IK) is fundamental for motion planning in robotics. Compared to numerical or learning-based approaches, analytical IK provides higher efficiency and accuracy. However, existing analytical approaches require manual intervention, are ill-conditioned, or rely on time-consuming symbolic manipulation. In this paper, we propose a fast and stable method that enables automatic online derivation and computation of analytical inverse kinematics. Our approach is based on remodeling the kinematic chain of a manipulator to automatically decompose its IK into pre-solved geometric subproblems. We exploit intersecting and parallel joint axes to assign a given manipulator to a certain kinematic class and the corresponding subproblem decomposition. In numerical experiments, we demonstrate that our decomposition is orders of magnitudes faster in deriving the IK than existing tools that employ symbolic manipulation. Following this one-time derivation, our method matches and even surpasses baselines, such as IKFast, in terms of speed and accuracy during the online computation of explicit IK solutions. Finally, we provide a C++ toolbox with Python wrappers that, for the first time, enables plug-and-play analytical IK within less than a millisecond.<|reference_end|>
|
arxiv
|
@article{ostermeier2024automatic,
title={Automatic Geometric Decomposition for Analytical Inverse Kinematics},
author={Daniel Ostermeier and Jonathan K\"ulz and Matthias Althoff},
journal={arXiv preprint arXiv:2409.14815},
year={2024},
archivePrefix={arXiv},
eprint={2409.14815},
primaryClass={cs.RO}
}
|
ostermeier2024automatic
|
arxiv-660672
|
2409.14816
|
VARADE: a Variational-based AutoRegressive model for Anomaly Detection on the Edge
|
<|reference_start|>VARADE: a Variational-based AutoRegressive model for Anomaly Detection on the Edge: Detecting complex anomalies on massive amounts of data is a crucial task in Industry 4.0, best addressed by deep learning. However, available solutions are computationally demanding, requiring cloud architectures prone to latency and bandwidth issues. This work presents VARADE, a novel solution implementing a light autoregressive framework based on variational inference, which is best suited for real-time execution on the edge. The proposed approach was validated on a robotic arm, part of a pilot production line, and compared with several state-of-the-art algorithms, obtaining the best trade-off between anomaly detection accuracy, power consumption and inference frequency on two different edge platforms.<|reference_end|>
|
arxiv
|
@article{mascolini2024varade:,
title={VARADE: a Variational-based AutoRegressive model for Anomaly Detection
on the Edge},
author={Alessio Mascolini and Sebastiano Gaiardelli and Francesco Ponzio and
Nicola Dall'Ora and Enrico Macii and Sara Vinco and Santa Di Cataldo and
Franco Fummi},
journal={arXiv preprint arXiv:2409.14816},
year={2024},
doi={10.1145/3649329.3655691},
archivePrefix={arXiv},
eprint={2409.14816},
primaryClass={cs.LG cs.AI}
}
|
mascolini2024varade:
|
arxiv-660673
|
2409.14818
|
MobileVLM: A Vision-Language Model for Better Intra- and Inter-UI Understanding
|
<|reference_start|>MobileVLM: A Vision-Language Model for Better Intra- and Inter-UI Understanding: Recently, mobile AI agents based on VLMs have been gaining increasing attention. These works typically utilize VLM as a foundation, fine-tuning it with instruction-based mobile datasets. However, these VLMs are typically pre-trained on general-domain data, which often results in a lack of fundamental capabilities specific to the mobile domain. Therefore, they may struggle to recognize specific UI elements and understand intra-UI fine-grained information. In addition, the current fine-tuning task focuses on interacting with the most relevant element for the given instruction. These fine-tuned VLMs may still ignore the relationships between UI pages, neglect the roles of elements in page transitions and lack inter-UI understanding. To address issues, we propose a VLM called MobileVLM, which includes two additional pre-training stages to enhance both intra- and inter-UI understanding. We defined four UI-based pre-training tasks, enabling the model to better perceive fine-grained elements and capture page transition actions. To address the lack of mobile pre-training data, we built a large Chinese mobile dataset Mobile3M from scratch, which contains 3 million UI pages, and real-world transition actions, forming a directed graph structure. Experimental results show MobileVLM excels on both our test set and public mobile benchmarks, outperforming existing VLMs.<|reference_end|>
|
arxiv
|
@article{wu2024mobilevlm:,
title={MobileVLM: A Vision-Language Model for Better Intra- and Inter-UI
Understanding},
author={Qinzhuo Wu and Weikai Xu and Wei Liu and Tao Tan and Jianfeng Liu and
Ang Li and Jian Luan and Bin Wang and Shuo Shang},
journal={arXiv preprint arXiv:2409.14818},
year={2024},
archivePrefix={arXiv},
eprint={2409.14818},
primaryClass={cs.CL cs.AI}
}
|
wu2024mobilevlm:
|
arxiv-660674
|
2409.14820
|
Past Meets Present: Creating Historical Analogy with Large Language Models
|
<|reference_start|>Past Meets Present: Creating Historical Analogy with Large Language Models: Historical analogies, which compare known past events with contemporary but unfamiliar events, are important abilities that help people make decisions and understand the world. However, research in applied history suggests that people have difficulty finding appropriate analogies. And previous studies in the AI community have also overlooked historical analogies. To fill this gap, in this paper, we focus on the historical analogy acquisition task, which aims to acquire analogous historical events for a given event. We explore retrieval and generation methods for acquiring historical analogies based on different large language models (LLMs). Furthermore, we propose a self-reflection method to mitigate hallucinations and stereotypes when LLMs generate historical analogies. Through human evaluations and our specially designed automatic multi-dimensional assessment, we find that LLMs generally have a good potential for historical analogies. And the performance of the models can be further improved by using our self-reflection method.<|reference_end|>
|
arxiv
|
@article{li2024past,
title={Past Meets Present: Creating Historical Analogy with Large Language
Models},
author={Nianqi Li and Siyu Yuan and Jiangjie Chen and Jiaqing Liang and Feng
Wei and Zujie Liang and Deqing Yang and Yanghua Xiao},
journal={arXiv preprint arXiv:2409.14820},
year={2024},
archivePrefix={arXiv},
eprint={2409.14820},
primaryClass={cs.CL cs.AI}
}
|
li2024past
|
arxiv-660675
|
2409.14821
|
Towards Real-world Deployment of NILM Systems: Challenges and Practices
|
<|reference_start|>Towards Real-world Deployment of NILM Systems: Challenges and Practices: Non-intrusive load monitoring (NILM), as a key load monitoring technology, can greatly reduce the deployment cost of traditional power sensors. Previous research has largely focused on developing cloud-exclusive NILM algorithms, which often result in high computation costs and significant service delays. To address these issues, we propose a three-tier framework to enhance the real-world applicability of NILM systems through edge-cloud collaboration. Considering the computational resources available at both the edge and cloud, we implement a lightweight NILM model at the edge and a deep learning based model at the cloud, respectively. In addition to the differential model implementations, we also design a NILM-specific deployment scheme that integrates Gunicorn and NGINX to bridge the gap between theoretical algorithms and practical applications. To verify the effectiveness of the proposed framework, we apply real-world NILM scenario settings and implement the entire process of data acquisition, model training, and system deployment. The results demonstrate that our framework can achieve high decomposition accuracy while significantly reducing the cloud workload and communication overhead under practical considerations.<|reference_end|>
|
arxiv
|
@article{xue2024towards,
title={Towards Real-world Deployment of NILM Systems: Challenges and Practices},
author={Junyu Xue and Yu Zhang and Xudong Wang and Yi Wang and Guoming Tang},
journal={arXiv preprint arXiv:2409.14821},
year={2024},
archivePrefix={arXiv},
eprint={2409.14821},
primaryClass={eess.SY cs.AI cs.SY}
}
|
xue2024towards
|
arxiv-660676
|
2409.14822
|
Shannon Bounds for Quadratic Rate-Distortion Problems
|
<|reference_start|>Shannon Bounds for Quadratic Rate-Distortion Problems: The Shannon lower bound has been the subject of several important contributions by Berger. This paper surveys Shannon bounds on rate-distortion problems under mean-squared error distortion with a particular emphasis on Berger's techniques. Moreover, as a new result, the Gray-Wyner network is added to the canon of settings for which such bounds are known. In the Shannon bounding technique, elegant lower bounds are expressed in terms of the source entropy power. Moreover, there is often a complementary upper bound that involves the source variance in such a way that the bounds coincide in the special case of Gaussian statistics. Such pairs of bounds are sometimes referred to as Shannon bounds. The present paper puts Berger's work on many aspects of this problem in the context of more recent developments, encompassing indirect and remote source coding such as the CEO problem, originally proposed by Berger, as well as the Gray-Wyner network as a new contribution.<|reference_end|>
|
arxiv
|
@article{gastpar2024shannon,
title={Shannon Bounds for Quadratic Rate-Distortion Problems},
author={Michael Gastpar and Erixhen Sula},
journal={arXiv preprint arXiv:2409.14822},
year={2024},
doi={10.1109/JSAIT.2024.3465022},
archivePrefix={arXiv},
eprint={2409.14822},
primaryClass={cs.IT math.IT}
}
|
gastpar2024shannon
|
arxiv-660677
|
2409.14823
|
HiFi-Glot: Neural Formant Synthesis with Differentiable Resonant Filters
|
<|reference_start|>HiFi-Glot: Neural Formant Synthesis with Differentiable Resonant Filters: We introduce an end-to-end neural speech synthesis system that uses the source-filter model of speech production. Specifically, we apply differentiable resonant filters to a glottal waveform generated by a neural vocoder. The aim is to obtain a controllable synthesiser, similar to classic formant synthesis, but with much higher perceptual quality - filling a research gap in current neural waveform generators and responding to hitherto unmet needs in the speech sciences. Our setup generates audio from a core set of phonetically meaningful speech parameters, with the filters providing direct control over formant frequency resonances in synthesis. Direct synthesis control is a key feature for reliable stimulus creation in important speech science experiments. We show that the proposed source-filter method gives better perceptual quality than the industry standard for formant manipulation (i.e., Praat), whilst being competitive in terms of formant frequency control accuracy.<|reference_end|>
|
arxiv
|
@article{juvela2024hifi-glot:,
title={HiFi-Glot: Neural Formant Synthesis with Differentiable Resonant Filters},
author={Lauri Juvela and Pablo P\'erez Zarazaga and Gustav Eje Henter and
Zofia Malisz},
journal={arXiv preprint arXiv:2409.14823},
year={2024},
archivePrefix={arXiv},
eprint={2409.14823},
primaryClass={cs.SD eess.AS}
}
|
juvela2024hifi-glot:
|
arxiv-660678
|
2409.14826
|
ToolPlanner: A Tool Augmented LLM for Multi Granularity Instructions with Path Planning and Feedback
|
<|reference_start|>ToolPlanner: A Tool Augmented LLM for Multi Granularity Instructions with Path Planning and Feedback: Recently, tool-augmented LLMs have gained increasing attention. Given an instruction, tool-augmented LLMs can interact with various external tools in multiple rounds and provide a final answer. However, previous LLMs were trained on overly detailed instructions, which included API names or parameters, while real users would not explicitly mention these API details. This leads to a gap between trained LLMs and real-world scenarios. In addition, most works ignore whether the interaction process follows the instruction. To address these issues, we constructed a training dataset called MGToolBench, which contains statement and category-level instructions to better reflect real-world scenarios. In addition, we propose ToolPlanner, a two-stage reinforcement learning framework that utilizes path planning and two feedback mechanisms to enhance the LLM's task completion and instruction-following capabilities. Experimental results show that ToolPlanner significantly improves the Match Rate, Pass Rate and Win Rate by 26.8%, 20.2%, and 5.6% compared to the SOTA model. Human evaluation verifies that the multi-granularity instructions can better align with users' usage habits. Our data and code will be released upon acceptance.<|reference_end|>
|
arxiv
|
@article{wu2024toolplanner:,
title={ToolPlanner: A Tool Augmented LLM for Multi Granularity Instructions
with Path Planning and Feedback},
author={Qinzhuo Wu and Wei Liu and Jian Luan and Bin Wang},
journal={arXiv preprint arXiv:2409.14826},
year={2024},
archivePrefix={arXiv},
eprint={2409.14826},
primaryClass={cs.CL cs.AI}
}
|
wu2024toolplanner:
|
arxiv-660679
|
2409.14827
|
AIM 2024 Challenge on Video Saliency Prediction: Methods and Results
|
<|reference_start|>AIM 2024 Challenge on Video Saliency Prediction: Methods and Results: This paper reviews the Challenge on Video Saliency Prediction at AIM 2024. The goal of the participants was to develop a method for predicting accurate saliency maps for the provided set of video sequences. Saliency maps are widely exploited in various applications, including video compression, quality assessment, visual perception studies, the advertising industry, etc. For this competition, a previously unused large-scale audio-visual mouse saliency (AViMoS) dataset of 1500 videos with more than 70 observers per video was collected using crowdsourced mouse tracking. The dataset collection methodology has been validated using conventional eye-tracking data and has shown high consistency. Over 30 teams registered in the challenge, and there are 7 teams that submitted the results in the final phase. The final phase solutions were tested and ranked by commonly used quality metrics on a private test subset. The results of this evaluation and the descriptions of the solutions are presented in this report. All data, including the private test subset, is made publicly available on the challenge homepage - https://challenges.videoprocessing.ai/challenges/video-saliency-prediction.html.<|reference_end|>
|
arxiv
|
@article{moskalenko2024aim,
title={AIM 2024 Challenge on Video Saliency Prediction: Methods and Results},
author={Andrey Moskalenko and Alexey Bryncev and Dmitry Vatolin and Radu
Timofte and Gen Zhan and Li Yang and Yunlong Tang and Yiting Liao and
Jiongzhi Lin and Baitao Huang and Morteza Moradi and Mohammad Moradi and
Francesco Rundo and Concetto Spampinato and Ali Borji and Simone Palazzo and
Yuxin Zhu and Yinan Sun and Huiyu Duan and Yuqin Cao and Ziheng Jia and
Qiang Hu and Xiongkuo Min and Guangtao Zhai and Hao Fang and Runmin Cong and
Xiankai Lu and Xiaofei Zhou and Wei Zhang and Chunyu Zhao and Wentao Mu and
Tao Deng and Hamed R. Tavakoli},
journal={arXiv preprint arXiv:2409.14827},
year={2024},
archivePrefix={arXiv},
eprint={2409.14827},
primaryClass={cs.CV cs.HC cs.MM}
}
|
moskalenko2024aim
|
arxiv-660680
|
2409.14828
|
Two Deep Learning Solutions for Automatic Blurring of Faces in Videos
|
<|reference_start|>Two Deep Learning Solutions for Automatic Blurring of Faces in Videos: The widespread use of cameras in everyday life situations generates a vast amount of data that may contain sensitive information about the people and vehicles moving in front of them (location, license plates, physical characteristics, etc). In particular, people's faces are recorded by surveillance cameras in public spaces. In order to ensure the privacy of individuals, face blurring techniques can be applied to the collected videos. In this paper we present two deep-learning based options to tackle the problem. First, a direct approach, consisting of a classical object detector (based on the YOLO architecture) trained to detect faces, which are subsequently blurred. Second, an indirect approach, in which a Unet-like segmentation network is trained to output a version of the input image in which all the faces have been blurred.<|reference_end|>
|
arxiv
|
@article{plaud2024two,
title={Two Deep Learning Solutions for Automatic Blurring of Faces in Videos},
author={Roman Plaud and Jose-Luis Lisani},
journal={arXiv preprint arXiv:2409.14828},
year={2024},
archivePrefix={arXiv},
eprint={2409.14828},
primaryClass={cs.CV}
}
|
plaud2024two
|
arxiv-660681
|
2409.14829
|
RoWSFormer: A Robust Watermarking Framework with Swin Transformer for Enhanced Geometric Attack Resilience
|
<|reference_start|>RoWSFormer: A Robust Watermarking Framework with Swin Transformer for Enhanced Geometric Attack Resilience: In recent years, digital watermarking techniques based on deep learning have been widely studied. To achieve both imperceptibility and robustness of image watermarks, most current methods employ convolutional neural networks to build robust watermarking frameworks. However, despite the success of CNN-based watermarking models, they struggle to achieve robustness against geometric attacks due to the limitations of convolutional neural networks in capturing global and long-range relationships. To address this limitation, we propose a robust watermarking framework based on the Swin Transformer, named RoWSFormer. Specifically, we design the Locally-Channel Enhanced Swin Transformer Block as the core of both the encoder and decoder. This block utilizes the self-attention mechanism to capture global and long-range information, thereby significantly improving adaptation to geometric distortions. Additionally, we construct the Frequency-Enhanced Transformer Block to extract frequency domain information, which further strengthens the robustness of the watermarking framework. Experimental results demonstrate that our RoWSFormer surpasses existing state-of-the-art watermarking methods. For most non-geometric attacks, RoWSFormer improves the PSNR by 3 dB while maintaining the same extraction accuracy. In the case of geometric attacks (such as rotation, scaling, and affine transformations), RoWSFormer achieves over a 6 dB improvement in PSNR, with extraction accuracy exceeding 97\%.<|reference_end|>
|
arxiv
|
@article{chen2024rowsformer:,
title={RoWSFormer: A Robust Watermarking Framework with Swin Transformer for
Enhanced Geometric Attack Resilience},
author={Weitong Chen and Yuheng Li},
journal={arXiv preprint arXiv:2409.14829},
year={2024},
archivePrefix={arXiv},
eprint={2409.14829},
primaryClass={cs.MM cs.CV eess.IV}
}
|
chen2024rowsformer:
|
arxiv-660682
|
2409.14830
|
Identify As A Human Does: A Pathfinder of Next-Generation Anti-Cheat Framework for First-Person Shooter Games
|
<|reference_start|>Identify As A Human Does: A Pathfinder of Next-Generation Anti-Cheat Framework for First-Person Shooter Games: The gaming industry has experienced substantial growth, but cheating in online games poses a significant threat to the integrity of the gaming experience. Cheating, particularly in first-person shooter (FPS) games, can lead to substantial losses for the game industry. Existing anti-cheat solutions have limitations, such as client-side hardware constraints, security risks, server-side unreliable methods, and both sides suffer from a lack of comprehensive real-world datasets. To address these limitations, the paper proposes HAWK, a server-side FPS anti-cheat framework for the popular game CS:GO. HAWK utilizes machine learning techniques to mimic human experts' identification process, leverages novel multi-view features, and it is equipped with a well-defined workflow. The authors evaluate HAWK with the first large and real-world datasets containing multiple cheat types and cheating sophistication, and it exhibits promising efficiency and acceptable overheads, shorter ban times compared to the in-use anti-cheat, a significant reduction in manual labor, and the ability to capture cheaters who evaded official inspections.<|reference_end|>
|
arxiv
|
@article{zhang2024identify,
title={Identify As A Human Does: A Pathfinder of Next-Generation Anti-Cheat
Framework for First-Person Shooter Games},
author={Jiayi Zhang and Chenxin Sun and Yue Gu and Qingyu Zhang and Jiayi Lin
and Xiaojiang Du and Chenxiong Qian},
journal={arXiv preprint arXiv:2409.14830},
year={2024},
archivePrefix={arXiv},
eprint={2409.14830},
primaryClass={cs.CR cs.AI cs.HC cs.LG}
}
|
zhang2024identify
|
arxiv-660683
|
2409.14831
|
Machine Learning Methods as Robust Quantum Noise Estimators
|
<|reference_start|>Machine Learning Methods as Robust Quantum Noise Estimators: Access to quantum computing is steadily increasing each year as the speed advantage of quantum computers solidifies with the growing number of usable qubits. However, the inherent noise encountered when running these systems can lead to measurement inaccuracies, especially pronounced when dealing with large or complex circuits. Achieving a balance between the complexity of circuits and the desired degree of output accuracy is a nontrivial yet necessary task for the creation of production-ready quantum software. In this study, we demonstrate how traditional machine learning (ML) models can estimate quantum noise by analyzing circuit composition. To accomplish this, we train multiple ML models on random quantum circuits, aiming to learn to estimate the discrepancy between ideal and noisy circuit outputs. By employing various noise models from distinct IBM systems, our results illustrate how this approach can accurately predict the robustness of circuits with a low error rate. By providing metrics on the stability of circuits, these techniques can be used to assess the quality and security of quantum code, leading to more reliable quantum products.<|reference_end|>
|
arxiv
|
@article{gardeazabal-gutierrez2024machine,
title={Machine Learning Methods as Robust Quantum Noise Estimators},
author={Jon Gardeazabal-Gutierrez and Erik B. Terres-Escudero and Pablo
Garc\'ia Bringas},
journal={arXiv preprint arXiv:2409.14831},
year={2024},
archivePrefix={arXiv},
eprint={2409.14831},
primaryClass={quant-ph cs.DC}
}
|
gardeazabal-gutierrez2024machine
|
arxiv-660684
|
2409.14832
|
Energy-Aware Federated Learning in Satellite Constellations
|
<|reference_start|>Energy-Aware Federated Learning in Satellite Constellations: Federated learning in satellite constellations, where the satellites collaboratively train a machine learning model, is a promising technology towards enabling globally connected intelligence and the integration of space networks into terrestrial mobile networks. The energy required for this computationally intensive task is provided either by solar panels or by an internal battery if the satellite is in Earth's shadow. Careful management of this battery and system's available energy resources is not only necessary for reliable satellite operation, but also to avoid premature battery aging. We propose a novel energy-aware computation time scheduler for satellite FL, which aims to minimize battery usage without any impact on the convergence speed. Numerical results indicate an increase of more than 3x in battery lifetime can be achieved over energy-agnostic task scheduling.<|reference_end|>
|
arxiv
|
@article{razmi2024energy-aware,
title={Energy-Aware Federated Learning in Satellite Constellations},
author={Nasrin Razmi and Bho Matthiesen and Armin Dekorsy and Petar Popovski},
journal={arXiv preprint arXiv:2409.14832},
year={2024},
archivePrefix={arXiv},
eprint={2409.14832},
primaryClass={cs.DC cs.LG eess.SP}
}
|
razmi2024energy-aware
|
arxiv-660685
|
2409.14833
|
SymAware: A Software Development Framework for Trustworthy Multi-Agent Systems with Situational Awareness
|
<|reference_start|>SymAware: A Software Development Framework for Trustworthy Multi-Agent Systems with Situational Awareness: Developing trustworthy multi-agent systems for practical applications is challenging due to the complicated communication of situational awareness (SA) among agents. This paper showcases a novel efficient and easy-to-use software framework for multi-agent simulation, named SymAware which provides a rich set of predefined data structures to compute, store, and communicate SA for agents. It also provides an abstract interface for the agents to compute their control inputs taking into account the awareness of the situation, knowledge, and risk of surrounding agents. Besides, utilizing a cluster of specialized components, SymAware hides the heavy computation of physical rendering and communication interfacing of simulation engines behind the control threads, resulting in high implementation efficiency in bridging the gap between conceptual prototyping and practical applications. Three multi-agent case studies are used to validate the efficacy and efficiency of this software framework.<|reference_end|>
|
arxiv
|
@article{casablanca2024symaware:,
title={SymAware: A Software Development Framework for Trustworthy Multi-Agent
Systems with Situational Awareness},
author={Ernesto Casablanca and Zengjie Zhang and Gregorio Marchesini and
Sofie Haesaert and Dimos V. Dimarogonas and Sadegh Soudjani},
journal={arXiv preprint arXiv:2409.14833},
year={2024},
archivePrefix={arXiv},
eprint={2409.14833},
primaryClass={cs.RO}
}
|
casablanca2024symaware:
|
arxiv-660686
|
2409.14836
|
Orthogonal Finetuning for Direct Preference Optimization
|
<|reference_start|>Orthogonal Finetuning for Direct Preference Optimization: DPO is an effective preference optimization algorithm. However, the DPO-tuned models tend to overfit on the dispreferred samples, manifested as overly long generations lacking diversity. While recent regularization approaches have endeavored to alleviate this issue by modifying the objective function, they achieved that at the cost of alignment performance degradation. In this paper, we innovatively incorporate regularization from the perspective of weight updating to curb alignment overfitting. Through the pilot experiment, we discovered that there exists a positive correlation between overfitting and the hyperspherical energy fluctuation. Hence, we introduce orthogonal finetuning for DPO via a weight-Rotated Preference Optimization (RoPO) method, which merely conducts rotational and magnitude-stretching updates on the weight parameters to maintain the hyperspherical energy invariant, thereby preserving the knowledge encoded in the angle between neurons. Extensive experiments demonstrate that our model aligns perfectly with human preferences while retaining the original expressive capacity using only 0.0086% of the trainable parameters, suggesting an effective regularization against overfitting. Specifically, RoPO outperforms DPO by up to 10 points on MT-Bench and by up to 2.8 points on AlpacaEval 2, while enhancing the generation diversity by an average of 6 points.<|reference_end|>
|
arxiv
|
@article{yang2024orthogonal,
title={Orthogonal Finetuning for Direct Preference Optimization},
author={Chenxu Yang and Ruipeng Jia and Naibin Gu and Zheng Lin and Siyuan
Chen and Chao Pang and Weichong Yin and Yu Sun and Hua Wu and Weiping Wang},
journal={arXiv preprint arXiv:2409.14836},
year={2024},
archivePrefix={arXiv},
eprint={2409.14836},
primaryClass={cs.CL cs.AI cs.LG}
}
|
yang2024orthogonal
|
arxiv-660687
|
2409.14837
|
MESC: Re-thinking Algorithmic Priority and/or Criticality Inversions for Heterogeneous MCSs
|
<|reference_start|>MESC: Re-thinking Algorithmic Priority and/or Criticality Inversions for Heterogeneous MCSs: Modern Mixed-Criticality Systems (MCSs) rely on hardware heterogeneity to satisfy ever-increasing computational demands. However, most of the heterogeneous co-processors are designed to achieve high throughput, with their micro-architectures executing the workloads in a streaming manner. This streaming execution is often non-preemptive or limited-preemptive, preventing tasks' prioritisation based on their importance and resulting in frequent occurrences of algorithmic priority and/or criticality inversions. Such problems present a significant barrier to guaranteeing the systems' real-time predictability, especially when co-processors dominate the execution of the workloads (e.g., DNNs and transformers). In contrast to existing works that typically enable coarse-grained context switch by splitting the workloads/algorithms, we demonstrate a method that provides fine-grained context switch on a widely used open-source DNN accelerator by enabling instruction-level preemption without any workloads/algorithms modifications. As a systematic solution, we build a real system, i.e., Make Each Switch Count (MESC), from the SoC and ISA to the OS kernel. A theoretical model and analysis are also provided for timing guarantees. Experimental results reveal that, compared to conventional MCSs using non-preemptive DNN accelerators, MESC achieved a 250x and 300x speedup in resolving algorithmic priority and criticality inversions, with less than 5\% overhead. To our knowledge, this is the first work investigating algorithmic priority and criticality inversions for MCSs at the instruction level.<|reference_end|>
|
arxiv
|
@article{guan2024mesc:,
title={MESC: Re-thinking Algorithmic Priority and/or Criticality Inversions for
Heterogeneous MCSs},
author={Jiapeng Guan and Ran Wei and Dean You and Yingquan Wang and Ruizhe
Yang and Hui Wang and Zhe Jiang},
journal={arXiv preprint arXiv:2409.14837},
year={2024},
archivePrefix={arXiv},
eprint={2409.14837},
primaryClass={cs.AR}
}
|
guan2024mesc:
|
arxiv-660688
|
2409.14838
|
MICSim: A Modular Simulator for Mixed-signal Compute-in-Memory based AI Accelerator
|
<|reference_start|>MICSim: A Modular Simulator for Mixed-signal Compute-in-Memory based AI Accelerator: This work introduces MICSim, an open-source, pre-circuit simulator designed for early-stage evaluation of chip-level software performance and hardware overhead of mixed-signal compute-in-memory (CIM) accelerators. MICSim features a modular design, allowing easy multi-level co-design and design space exploration. Modularized from the state-of-the-art CIM simulator NeuroSim, MICSim provides a highly configurable simulation framework supporting multiple quantization algorithms, diverse circuit/architecture designs, and different memory devices. This modular approach also allows MICSim to be effectively extended to accommodate new designs. MICSim natively supports evaluating accelerators' software and hardware performance for CNNs and Transformers in Python, leveraging the popular PyTorch and HuggingFace Transformers frameworks. These capabilities make MICSim highly adaptive when simulating different networks and user-friendly. This work demonstrates that MICSim can easily be combined with optimization strategies to perform design space exploration and used for chip-level Transformers CIM accelerators evaluation. Also, MICSim can achieve a 9x - 32x speedup of NeuroSim through a statistic-based average mode proposed by this work.<|reference_end|>
|
arxiv
|
@article{wang2024micsim:,
title={MICSim: A Modular Simulator for Mixed-signal Compute-in-Memory based AI
Accelerator},
author={Cong Wang and Zeming Chen and Shanshi Huang},
journal={arXiv preprint arXiv:2409.14838},
year={2024},
archivePrefix={arXiv},
eprint={2409.14838},
primaryClass={cs.AI cs.AR}
}
|
wang2024micsim:
|
arxiv-660689
|
2409.14839
|
Explainable and Human-Grounded AI for Decision Support Systems: The Theory of Epistemic Quasi-Partnerships
|
<|reference_start|>Explainable and Human-Grounded AI for Decision Support Systems: The Theory of Epistemic Quasi-Partnerships: In the context of AI decision support systems (AI-DSS), we argue that meeting the demands of ethical and explainable AI (XAI) is about developing AI-DSS to provide human decision-makers with three types of human-grounded explanations: reasons, counterfactuals, and confidence, an approach we refer to as the RCC approach. We begin by reviewing current empirical XAI literature that investigates the relationship between various methods for generating model explanations (e.g., LIME, SHAP, Anchors), the perceived trustworthiness of the model, and end-user accuracy. We demonstrate how current theories about what constitutes good human-grounded reasons either do not adequately explain this evidence or do not offer sound ethical advice for development. Thus, we offer a novel theory of human-machine interaction: the theory of epistemic quasi-partnerships (EQP). Finally, we motivate adopting EQP and demonstrate how it explains the empirical evidence, offers sound ethical advice, and entails adopting the RCC approach.<|reference_end|>
|
arxiv
|
@article{dorsch2024explainable,
title={Explainable and Human-Grounded AI for Decision Support Systems: The
Theory of Epistemic Quasi-Partnerships},
author={John Dorsch and Maximilian Moll},
journal={arXiv preprint arXiv:2409.14839},
year={2024},
archivePrefix={arXiv},
eprint={2409.14839},
primaryClass={cs.AI cs.ET cs.HC}
}
|
dorsch2024explainable
|
arxiv-660690
|
2409.14842
|
HW-TSC's Submission to the CCMT 2024 Machine Translation Tasks
|
<|reference_start|>HW-TSC's Submission to the CCMT 2024 Machine Translation Tasks: This paper presents the submission of Huawei Translation Services Center (HW-TSC) to machine translation tasks of the 20th China Conference on Machine Translation (CCMT 2024). We participate in the bilingual machine translation task and multi-domain machine translation task. For these two translation tasks, we use training strategies such as regularized dropout, bidirectional training, data diversification, forward translation, back translation, alternated training, curriculum learning, and transductive ensemble learning to train neural machine translation (NMT) models based on the deep Transformer-big architecture. Furthermore, to explore whether large language model (LLM) can help improve the translation quality of NMT systems, we use supervised fine-tuning to train llama2-13b as an Automatic post-editing (APE) model to improve the translation results of the NMT model on the multi-domain machine translation task. By using these plyometric strategies, our submission achieves a competitive result in the final evaluation.<|reference_end|>
|
arxiv
|
@article{wu2024hw-tsc's,
title={HW-TSC's Submission to the CCMT 2024 Machine Translation Tasks},
  author={Zhanglin Wu and Yuanchang Luo and Daimeng Wei and Jiawei Zheng and
  Bin Wei and Zongyao Li and Hengchao Shang and Jiaxin Guo and Shaojun Li and
  Weidong Zhang and Ning Xie and Hao Yang},
journal={arXiv preprint arXiv:2409.14842},
year={2024},
archivePrefix={arXiv},
eprint={2409.14842},
primaryClass={cs.AI cs.CL}
}
|
wu2024hw-tsc's
|
arxiv-660691
|
2409.14844
|
Evaluating Robot Influence on Pedestrian Behavior Models for Crowd Simulation and Benchmarking
|
<|reference_start|>Evaluating Robot Influence on Pedestrian Behavior Models for Crowd Simulation and Benchmarking: The presence of robots amongst pedestrians affects them causing deviation to their trajectories. Existing methods suffer from the limitation of not being able to objectively measure this deviation in unseen cases. In order to solve this issue, we introduce a simulation framework that repetitively measures and benchmarks the deviation in trajectory of pedestrians due to robots driven by different navigation algorithms. We simulate the deviation behavior of the pedestrians using an enhanced Social Force Model (SFM) with a robot force component that accounts for the influence of robots on pedestrian behavior, resulting in the Social Robot Force Model (SRFM). Parameters for this model are learned using the pedestrian trajectories from the JRDB dataset. Pedestrians are then simulated using the SRFM with and without the robot force component to objectively measure the deviation to their trajectory caused by the robot in 5 different scenarios. Our work in this paper is a proof of concept that shows objectively measuring the pedestrian reaction to robot is possible. We use our simulation to train two different RL policies and evaluate them against traditional navigation models.<|reference_end|>
|
arxiv
|
@article{agrawal2024evaluating,
title={Evaluating Robot Influence on Pedestrian Behavior Models for Crowd
Simulation and Benchmarking},
  author={Subham Agrawal and Nils Dengler and Maren Bennewitz},
journal={arXiv preprint arXiv:2409.14844},
year={2024},
archivePrefix={arXiv},
eprint={2409.14844},
primaryClass={cs.RO}
}
|
agrawal2024evaluating
|
arxiv-660692
|
2409.14846
|
A-VL: Adaptive Attention for Large Vision-Language Models
|
<|reference_start|>A-VL: Adaptive Attention for Large Vision-Language Models: The Large Vision-Language Model (LVLM) integrates computer vision and natural language processing techniques, offering substantial application potential. However, these models demand extensive resources during inference. Adaptive attention techniques can dynamically reduce computational redundancy and thus improve efficiency. Although current adaptive attention methods significantly reduce the memory requirements of Transformer-based language models, they are not tailored for LVLMs. We observe that LVLMs generate responses from both remote image tokens and local text tokens, and different modalities have different attention patterns. This observation inspires us to manage the attention for each modality separately. Specifically, for visual input, we store the cache of potentially useful information but only compute the most critical parts. For language input, we care more about local information. Based on our observation and analysis of vision-language attention patterns, we develop A-VL, a plug-and-play adaptive attention tailored for LVLM inference. Extensive evaluations on three vision-language tasks and five datasets show the effectiveness of our designs. Our approach A-VL outperforms existing adaptive attention methods in reducing memory usage and computational load without compromising performance.<|reference_end|>
|
arxiv
|
@article{zhang2024a-vl:,
title={A-VL: Adaptive Attention for Large Vision-Language Models},
  author={Junyang Zhang and Mu Yuan and Ruiguang Zhong and Puhan Luo and
  Huiyou Zhan and Ningkang Zhang and Chengchen Hu and Xiangyang Li},
journal={arXiv preprint arXiv:2409.14846},
year={2024},
archivePrefix={arXiv},
eprint={2409.14846},
primaryClass={cs.AI cs.CV}
}
|
zhang2024a-vl:
|
arxiv-660693
|
2409.14847
|
Revisiting Video Quality Assessment from the Perspective of Generalization
|
<|reference_start|>Revisiting Video Quality Assessment from the Perspective of Generalization: The increasing popularity of short video platforms such as YouTube Shorts, TikTok, and Kwai has led to a surge in User-Generated Content (UGC), which presents significant challenges for the generalization performance of Video Quality Assessment (VQA) tasks. These challenges not only affect performance on test sets but also impact the ability to generalize across different datasets. While prior research has primarily focused on enhancing feature extractors, sampling methods, and network branches, it has largely overlooked the generalization capabilities of VQA tasks. In this work, we reevaluate the VQA task from a generalization standpoint. We begin by analyzing the weight loss landscape of VQA models, identifying a strong correlation between this landscape and the generalization gaps. We then investigate various techniques to regularize the weight loss landscape. Our results reveal that adversarial weight perturbations can effectively smooth this landscape, significantly improving the generalization performance, with cross-dataset generalization and fine-tuning performance enhanced by up to 1.8% and 3%, respectively. Through extensive experiments across various VQA methods and datasets, we validate the effectiveness of our approach. Furthermore, by leveraging our insights, we achieve state-of-the-art performance in Image Quality Assessment (IQA) tasks. Our code is available at https://github.com/XinliYue/VQA-Generalization.<|reference_end|>
|
arxiv
|
@article{yue2024revisiting,
title={Revisiting Video Quality Assessment from the Perspective of
Generalization},
  author={Xinli Yue and Jianhui Sun and Liangchao Yao and Fan Xia and Yuetang
  Deng and Tianyi Wang and Lei Li and Fengyun Rao and Jing Lv and Qian Wang
  and Lingchen Zhao},
journal={arXiv preprint arXiv:2409.14847},
year={2024},
archivePrefix={arXiv},
eprint={2409.14847},
primaryClass={cs.CV}
}
|
yue2024revisiting
|
arxiv-660694
|
2409.14848
|
A Bi-criterion Steiner Traveling Salesperson Problem with Time Windows for Last-Mile Electric Vehicle Logistics
|
<|reference_start|>A Bi-criterion Steiner Traveling Salesperson Problem with Time Windows for Last-Mile Electric Vehicle Logistics: This paper addresses the problem of energy-efficient and safe routing of last-mile electric freight vehicles. With the rising environmental footprint of the transportation sector and the growing popularity of E-Commerce, freight companies are likely to benefit from optimal time-window-feasible tours that minimize energy usage while reducing traffic conflicts at intersections and thereby improving safety. We formulate this problem as a Bi-criterion Steiner Traveling Salesperson Problem with Time Windows (BSTSPTW) with energy consumed and the number of left turns at intersections as the two objectives while also considering regenerative braking capabilities. We first discuss an exact mixed-integer programming model with scalarization to enumerate points on the efficiency frontier for small instances. For larger networks, we develop an efficient local search-based heuristic, which uses several operators to intensify and diversify the search process. We demonstrate the utility of the proposed methods using benchmark data and real-world instances from Amazon delivery routes in Austin, US. Comparisons with state-of-the-art solvers shows that our heuristics can generate near-optimal solutions within reasonable time budgets, effectively balancing energy efficiency and safety under practical delivery constraints.<|reference_end|>
|
arxiv
|
@article{agarwal2024a,
title={A Bi-criterion Steiner Traveling Salesperson Problem with Time Windows
for Last-Mile Electric Vehicle Logistics},
  author={Prateek Agarwal and Debojjal Bagchi and Tarun Rambha and Venktesh Pandey},
journal={arXiv preprint arXiv:2409.14848},
year={2024},
archivePrefix={arXiv},
eprint={2409.14848},
primaryClass={math.OC cs.CE cs.DM}
}
|
agarwal2024a
|
arxiv-660695
|
2409.14849
|
Gabow's Cardinality Matching Algorithm in General Graphs: Implementation and Experiments
|
<|reference_start|>Gabow's Cardinality Matching Algorithm in General Graphs: Implementation and Experiments: It is known since 1975 (\cite{HK75}) that maximum cardinality matchings in bipartite graphs with $n$ nodes and $m$ edges can be computed in time $O(\sqrt{n} m)$. Asymptotically faster algorithms were found in the last decade and maximum cardinality bipartite matchings can now be computed in near-linear time~\cite{NearlyLinearTimeBipartiteMatching, AlmostLinearTimeMaxFlow,AlmostLinearTimeMinCostFlow}. For general graphs, the problem seems harder. Algorithms with running time $O(\sqrt{n} m)$ were given in~\cite{MV80,Vazirani94,Vazirani12,Vazirani20,Vazirani23,Goldberg-Karzanov,GT91,Gabow:GeneralMatching}. Mattingly and Ritchey~\cite{Mattingly-Ritchey} and Huang and Stein~\cite{Huang-Stein} discuss implementations of the Micali-Vazirani Algorithm. We describe an implementation of Gabow's algorithm~\cite{Gabow:GeneralMatching} in C++ based on LEDA~\cite{LEDAsystem,LEDAbook} and report on running time experiments. On worst-case graphs, the asymptotic improvement pays off dramatically. On random graphs, there is no improvement with respect to algorithms that have a worst-case running time of $O(n m)$. The performance seems to be near-linear. The implementation is available open-source.<|reference_end|>
|
arxiv
|
@article{ansaripour2024gabow's,
title={Gabow's Cardinality Matching Algorithm in General Graphs: Implementation
and Experiments},
author={Matin Ansaripour and Alireza Danaei and Kurt Mehlhorn},
journal={arXiv preprint arXiv:2409.14849},
year={2024},
archivePrefix={arXiv},
eprint={2409.14849},
primaryClass={cs.DS}
}
|
ansaripour2024gabow's
|
arxiv-660696
|
2409.14850
|
GroCo: Ground Constraint for Metric Self-Supervised Monocular Depth
|
<|reference_start|>GroCo: Ground Constraint for Metric Self-Supervised Monocular Depth: Monocular depth estimation has greatly improved in the recent years but models predicting metric depth still struggle to generalize across diverse camera poses and datasets. While recent supervised methods mitigate this issue by leveraging ground prior information at inference, their adaptability to self-supervised settings is limited due to the additional challenge of scale recovery. Addressing this gap, we propose in this paper a novel constraint on ground areas designed specifically for the self-supervised paradigm. This mechanism not only allows to accurately recover the scale but also ensures coherence between the depth prediction and the ground prior. Experimental results show that our method surpasses existing scale recovery techniques on the KITTI benchmark and significantly enhances model generalization capabilities. This improvement can be observed by its more robust performance across diverse camera rotations and its adaptability in zero-shot conditions with previously unseen driving datasets such as DDAD.<|reference_end|>
|
arxiv
|
@article{cecille2024groco:,
title={GroCo: Ground Constraint for Metric Self-Supervised Monocular Depth},
  author={Aur\'elien Cecille and Stefan Duffner and Franck Davoine and
  Thibault Neveu and R\'emi Agier},
journal={arXiv preprint arXiv:2409.14850},
year={2024},
archivePrefix={arXiv},
eprint={2409.14850},
primaryClass={cs.CV cs.AI cs.LG cs.RO}
}
|
cecille2024groco:
|
arxiv-660697
|
2409.14851
|
Disentanglement with Factor Quantized Variational Autoencoders
|
<|reference_start|>Disentanglement with Factor Quantized Variational Autoencoders: Disentangled representation learning aims to represent the underlying generative factors of a dataset in a latent representation independently of one another. In our work, we propose a discrete variational autoencoder (VAE) based model where the ground truth information about the generative factors are not provided to the model. We demonstrate the advantages of learning discrete representations over learning continuous representations in facilitating disentanglement. Furthermore, we propose incorporating an inductive bias into the model to further enhance disentanglement. Precisely, we propose scalar quantization of the latent variables in a latent representation with scalar values from a global codebook, and we add a total correlation term to the optimization as an inductive bias. Our method called FactorQVAE is the first method that combines optimization based disentanglement approaches with discrete representation learning, and it outperforms the former disentanglement methods in terms of two disentanglement metrics (DCI and InfoMEC) while improving the reconstruction performance. Our code can be found at \url{https://github.com/ituvisionlab/FactorQVAE}.<|reference_end|>
|
arxiv
|
@article{baykal2024disentanglement,
title={Disentanglement with Factor Quantized Variational Autoencoders},
  author={Gulcin Baykal and Melih Kandemir and Gozde Unal},
journal={arXiv preprint arXiv:2409.14851},
year={2024},
archivePrefix={arXiv},
eprint={2409.14851},
primaryClass={cs.CV cs.LG}
}
|
baykal2024disentanglement
|
arxiv-660698
|
2409.14852
|
FUSED-Net: Enhancing Few-Shot Traffic Sign Detection with Unfrozen Parameters, Pseudo-Support Sets, Embedding Normalization, and Domain Adaptation
|
<|reference_start|>FUSED-Net: Enhancing Few-Shot Traffic Sign Detection with Unfrozen Parameters, Pseudo-Support Sets, Embedding Normalization, and Domain Adaptation: Automatic Traffic Sign Recognition is paramount in modern transportation systems, motivating several research endeavors to focus on performance improvement by utilizing large-scale datasets. As the appearance of traffic signs varies across countries, curating large-scale datasets is often impractical; and requires efficient models that can produce satisfactory performance using limited data. In this connection, we present 'FUSED-Net', built-upon Faster RCNN for traffic sign detection, enhanced by Unfrozen Parameters, Pseudo-Support Sets, Embedding Normalization, and Domain Adaptation while reducing data requirement. Unlike traditional approaches, we keep all parameters unfrozen during training, enabling FUSED-Net to learn from limited samples. The generation of a Pseudo-Support Set through data augmentation further enhances performance by compensating for the scarcity of target domain data. Additionally, Embedding Normalization is incorporated to reduce intra-class variance, standardizing feature representation. Domain Adaptation, achieved by pre-training on a diverse traffic sign dataset distinct from the target domain, improves model generalization. Evaluating FUSED-Net on the BDTSD dataset, we achieved 2.4x, 2.2x, 1.5x, and 1.3x improvements of mAP in 1-shot, 3-shot, 5-shot, and 10-shot scenarios, respectively compared to the state-of-the-art Few-Shot Object Detection (FSOD) models. Additionally, we outperform state-of-the-art works on the cross-domain FSOD benchmark under several scenarios.<|reference_end|>
|
arxiv
|
@article{rahman2024fused-net:,
title={FUSED-Net: Enhancing Few-Shot Traffic Sign Detection with Unfrozen
Parameters, Pseudo-Support Sets, Embedding Normalization, and Domain
Adaptation},
  author={Md. Atiqur Rahman and Nahian Ibn Asad and Md. Mushfiqul Haque Omi
  and Md. Bakhtiar Hasan and Sabbir Ahmed and Md. Hasanul Kabir},
journal={arXiv preprint arXiv:2409.14852},
year={2024},
archivePrefix={arXiv},
eprint={2409.14852},
primaryClass={cs.CV cs.AI}
}
|
rahman2024fused-net:
|
arxiv-660699
|
2409.14853
|
"I Feel Myself So Small!": Designing and Evaluating VR Awe Experiences Based on Theories Related to Sublime
|
<|reference_start|>"I Feel Myself So Small!": Designing and Evaluating VR Awe Experiences Based on Theories Related to Sublime: Research suggests the potential of employing VR to elicit awe experiences, thereby promoting well-being. Building upon theories related to the sublime and embodiment, we designed three VR scenes to evaluate the effectiveness of sublime and embodied design elements in invoking awe experiences. We conducted a within-subject study involving 28 young adults who experienced the three VR designs. Results demonstrated that the VR design with sublime elements significantly elicited more intense awe experiences compared to the one without, while adding embodied elements did not enhance the intensity of awe. Qualitative interviews revealed critical design elements (e.g., the obscure event should be reasonable) and their underlying mechanisms (e.g., leading to feelings of enlightenment) in invoking awe experiences. We further discuss considerations and implications for the design of effective awe-inspiring VR applications.<|reference_end|>
|
arxiv
|
@article{he2024"i,
title={"I Feel Myself So Small!": Designing and Evaluating VR Awe Experiences
Based on Theories Related to Sublime},
  author={Zhiting He and Min Fan and Xinyi Guo and Yifan Zhao and Yuqiu Wang},
journal={arXiv preprint arXiv:2409.14853},
year={2024},
archivePrefix={arXiv},
eprint={2409.14853},
primaryClass={cs.HC}
}
|
he2024"i
|
arxiv-660700
|
2409.14857
|
Embedding Knowledge Graph in Function Spaces
|
<|reference_start|>Embedding Knowledge Graph in Function Spaces: We introduce a novel embedding method diverging from conventional approaches by operating within function spaces of finite dimension rather than finite vector space, thus departing significantly from standard knowledge graph embedding techniques. Initially employing polynomial functions to compute embeddings, we progress to more intricate representations using neural networks with varying layer complexities. We argue that employing functions for embedding computation enhances expressiveness and allows for more degrees of freedom, enabling operations such as composition, derivatives and primitive of entities representation. Additionally, we meticulously outline the step-by-step construction of our approach and provide code for reproducibility, thereby facilitating further exploration and application in the field.<|reference_end|>
|
arxiv
|
@article{teyou2024embedding,
title={Embedding Knowledge Graph in Function Spaces},
author={Louis Mozart Kamdem Teyou and Caglar Demir and Axel-Cyrille Ngonga
Ngomo},
journal={arXiv preprint arXiv:2409.14857},
year={2024},
archivePrefix={arXiv},
eprint={2409.14857},
primaryClass={stat.ML cs.AI cs.LG}
}
|
teyou2024embedding
|