corpus_id (string, 7-12 chars) | paper_id (string, 9-16 chars) | title (string, 1-261 chars) | abstract (string, 70-4.02k chars) | source (1 class: arxiv) | bibtex (string, 208-20.9k chars) | citation_key (string, 6-100 chars)
---|---|---|---|---|---|---
arxiv-662201
|
2409.17605
|
Good Data Is All Imitation Learning Needs
|
<|reference_start|>Good Data Is All Imitation Learning Needs: In this paper, we address the limitations of traditional teacher-student models, imitation learning, and behaviour cloning in the context of Autonomous/Automated Driving Systems (ADS), where these methods often struggle with incomplete coverage of real-world scenarios. To enhance the robustness of such models, we introduce the use of Counterfactual Explanations (CFEs) as a novel data augmentation technique for end-to-end ADS. CFEs, by generating training samples near decision boundaries through minimal input modifications, lead to a more comprehensive representation of expert driver strategies, particularly in safety-critical scenarios. This approach can therefore help improve the model's ability to handle rare and challenging driving events, such as anticipating pedestrians darting out, ultimately leading to safer and more trustworthy decision-making for ADS. Our experiments in the CARLA simulator demonstrate that CF-Driver outperforms the current state-of-the-art method, achieving a higher driving score and lower infraction rates. Specifically, CF-Driver attains a driving score of 84.2, surpassing the previous best model by 15.02 percentage points. These results highlight the effectiveness of incorporating CFEs in training end-to-end ADS. To foster further research, the CF-Driver code is made publicly available.<|reference_end|>
|
arxiv
|
@article{samadi2024good,
title={Good Data Is All Imitation Learning Needs},
author={Amir Samadi, Konstantinos Koufos, Kurt Debattista, and Mehrdad Dianati},
journal={arXiv preprint arXiv:2409.17605},
year={2024},
archivePrefix={arXiv},
eprint={2409.17605},
primaryClass={cs.CV cs.LG}
}
|
samadi2024good
|
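The row above describes CFEs as training samples generated near decision boundaries through minimal input modifications. As a hedged illustration of that general recipe, here is a Wachter-style gradient search; it is not CF-Driver's actual procedure, and `model`, `x`, and `target` are illustrative stand-ins.

```python
# Minimal sketch: gradient-based counterfactual search (Wachter-style).
# NOT the CF-Driver implementation; names and dimensions are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))  # stand-in policy head
x = torch.randn(1, 8)               # original expert input features
target = torch.tensor([1])          # desired alternative decision

cf = x.clone().requires_grad_(True)
opt = torch.optim.Adam([cf], lr=0.05)
lam = 0.1                           # proximity weight: keep the edit minimal

for _ in range(200):
    opt.zero_grad()
    flip_loss = nn.functional.cross_entropy(model(cf), target)  # push across the boundary
    proximity = lam * (cf - x).pow(2).sum()                     # stay close to x
    (flip_loss + proximity).backward()
    opt.step()
# `cf` now sits near the decision boundary and can be added as an augmented sample.
```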
arxiv-662202
|
2409.17606
|
FlooNoC: A 645 Gbps/link 0.15 pJ/B/hop Open-Source NoC with Wide Physical Links and End-to-End AXI4 Parallel Multi-Stream Support
|
<|reference_start|>FlooNoC: A 645 Gbps/link 0.15 pJ/B/hop Open-Source NoC with Wide Physical Links and End-to-End AXI4 Parallel Multi-Stream Support: The new generation of domain-specific AI accelerators is characterized by rapidly increasing demands for bulk data transfers, as opposed to small, latency-critical cache line transfers typical of traditional cache-coherent systems. In this paper, we address this critical need by introducing the FlooNoC Network-on-Chip (NoC), featuring very wide, fully Advanced eXtensible Interface (AXI4) compliant links designed to meet the massive bandwidth needs at high energy efficiency. At the transport level, non-blocking transactions are supported for latency tolerance. Additionally, a novel end-to-end ordering approach for AXI4, enabled by a multi-stream capable Direct Memory Access (DMA) engine, simplifies network interfaces and eliminates inter-stream dependencies. Furthermore, dedicated physical links are instantiated for short, latency-critical messages. A complete end-to-end reference implementation in 12nm FinFET technology demonstrates the physical feasibility and power, performance and area (PPA) benefits of our approach. Utilizing wide links on high levels of metal, we achieve a bandwidth of 645 Gbps per link and a total aggregate bandwidth of 103 Tbps for an 8x4 mesh of processor cluster tiles, with a total of 288 RISC-V cores. The NoC imposes a minimal area overhead of only 3.5% per compute tile and achieves a leading-edge energy efficiency of 0.15 pJ/B/hop at 0.8 V. Compared to state-of-the-art NoCs, our system offers three times the energy efficiency and more than double the link bandwidth. Furthermore, compared to a traditional AXI4-based multi-layer interconnect, our NoC achieves a 30% reduction in area, corresponding to a 47% increase in double-precision GFLOPS (GFLOPS_DP) within the same floorplan.<|reference_end|>
|
arxiv
|
@article{fischer2024floonoc:,
title={FlooNoC: A 645 Gbps/link 0.15 pJ/B/hop Open-Source NoC with Wide
Physical Links and End-to-End AXI4 Parallel Multi-Stream Support},
author={Tim Fischer, Michael Rogenmoser, Thomas Benz, Frank K. G\"urkaynak,
Luca Benini},
journal={arXiv preprint arXiv:2409.17606},
year={2024},
archivePrefix={arXiv},
eprint={2409.17606},
primaryClass={cs.AR}
}
|
fischer2024floonoc:
|
arxiv-662203
|
2409.17607
|
Dirichlet-Based Coarse-to-Fine Example Selection For Open-Set Annotation
|
<|reference_start|>Dirichlet-Based Coarse-to-Fine Example Selection For Open-Set Annotation: Active learning (AL) has achieved great success by selecting the most valuable examples from unlabeled data. However, AL methods usually deteriorate in real scenarios where open-set noise gets involved, a setting studied as open-set annotation (OSA). In this paper, we attribute the deterioration to the unreliable predictions arising from softmax-based translation invariance and propose a Dirichlet-based Coarse-to-Fine Example Selection (DCFS) strategy accordingly. Our method introduces simplex-based evidential deep learning (EDL) to break translation invariance and distinguish known and unknown classes by considering evidence-based data and distribution uncertainty simultaneously. Furthermore, hard known-class examples are identified by the model discrepancy generated from two classifier heads, where we amplify and alleviate the model discrepancy for unknown and known classes, respectively. Finally, we combine the discrepancy with uncertainties to form a two-stage strategy, selecting the most informative examples from known classes. Extensive experiments on datasets with various openness ratios demonstrate that DCFS achieves state-of-the-art performance.<|reference_end|>
|
arxiv
|
@article{wang2024dirichlet-based,
title={Dirichlet-Based Coarse-to-Fine Example Selection For Open-Set Annotation},
author={Ye-Wen Wang, Chen-Chen Zong, Ming-Kun Xie, Sheng-Jun Huang},
journal={arXiv preprint arXiv:2409.17607},
year={2024},
archivePrefix={arXiv},
eprint={2409.17607},
primaryClass={cs.AI}
}
|
wang2024dirichlet-based
|
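For reference, the "simplex-based evidential deep learning" the row above relies on is usually formulated as below. This is the standard EDL parameterization (Sensoy et al., 2018), not necessarily DCFS's exact variant.

```latex
% Standard EDL quantities for K classes, from non-negative evidence e_k:
\alpha_k = e_k + 1, \qquad S = \sum_{k=1}^{K} \alpha_k, \qquad
b_k = \frac{e_k}{S}, \qquad u = \frac{K}{S}, \qquad \hat{p}_k = \frac{\alpha_k}{S}.
% Unlike a softmax, adding a constant to every evidence value changes both
% u and \hat{p}_k, which is how translation invariance is broken.
```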
arxiv-662204
|
2409.17608
|
Appearance Blur-driven AutoEncoder and Motion-guided Memory Module for Video Anomaly Detection
|
<|reference_start|>Appearance Blur-driven AutoEncoder and Motion-guided Memory Module for Video Anomaly Detection: Video anomaly detection (VAD) often learns the distribution of normal samples and detects anomalies by measuring significant deviations, but the undesired generalization may reconstruct a few anomalies, thus suppressing the deviations. Meanwhile, most VAD methods cannot cope with cross-dataset validation for new target domains, and few-shot methods must laboriously rely on model tuning from the target domain to complete domain adaptation. To address these problems, we propose a novel VAD method with a motion-guided memory module to achieve zero-shot cross-dataset validation. First, we add Gaussian blur to the raw appearance images, thereby constructing the global pseudo-anomaly, which serves as the input to the network. Then, we propose multi-scale residual channel attention to deblur the pseudo-anomaly in normal samples. Next, memory items are obtained by recording the motion features in the training phase, which are used to retrieve the motion features from the raw information in the testing phase. Lastly, our method can ignore the blurred real anomaly through attention and rely on motion memory items to increase the normality gap between normal and abnormal motion. Extensive experiments on three benchmark datasets demonstrate the effectiveness of the proposed method. Compared with cross-domain methods, our method achieves competitive performance without adaptation during testing.<|reference_end|>
|
arxiv
|
@article{lyu2024appearance,
title={Appearance Blur-driven AutoEncoder and Motion-guided Memory Module for
Video Anomaly Detection},
author={Jiahao Lyu, Minghua Zhao, Jing Hu, Xuewen Huang, Shuangli Du, Cheng
Shi, Zhiyong Lv},
journal={arXiv preprint arXiv:2409.17608},
year={2024},
archivePrefix={arXiv},
eprint={2409.17608},
primaryClass={cs.CV}
}
|
lyu2024appearance
|
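The "global pseudo-anomaly" construction in the row above is Gaussian blur applied to the raw frame; a minimal sketch follows, where the sigma and frame size are illustrative assumptions (the abstract gives no values).

```python
# Hedged sketch of the global pseudo-anomaly step: blur the raw appearance
# frame so the network must learn to deblur normal content. Sigma and the
# frame size are illustrative; the paper's settings are not in the abstract.
import numpy as np
from scipy.ndimage import gaussian_filter

def make_pseudo_anomaly(frame: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Blur each channel of an HxWxC frame to build the network input."""
    return np.stack(
        [gaussian_filter(frame[..., c], sigma=sigma) for c in range(frame.shape[-1])],
        axis=-1,
    )

frame = np.random.rand(64, 64, 3).astype(np.float32)  # stand-in raw frame
blurred = make_pseudo_anomaly(frame)                   # input to the autoencoder
```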
arxiv-662205
|
2409.17610
|
ZALM3: Zero-Shot Enhancement of Vision-Language Alignment via In-Context Information in Multi-Turn Multimodal Medical Dialogue
|
<|reference_start|>ZALM3: Zero-Shot Enhancement of Vision-Language Alignment via In-Context Information in Multi-Turn Multimodal Medical Dialogue: The rocketing prosperity of large language models (LLMs) in recent years has boosted the prevalence of vision-language models (VLMs) in the medical sector. In our online medical consultation scenario, a doctor responds to the texts and images provided by a patient in multiple rounds to diagnose her/his health condition, forming a multi-turn multimodal medical dialogue format. Unlike high-quality images captured by professional equipment in traditional medical visual question answering (Med-VQA), the images in our case are taken by patients' mobile phones. These images have poor quality control, with issues such as excessive background elements and the lesion area being significantly off-center, leading to degradation of vision-language alignment in the model training phase. In this paper, we propose ZALM3, a Zero-shot strategy to improve vision-language ALignment in Multi-turn Multimodal Medical dialogue. Since we observe that the preceding text conversations before an image can infer the regions of interest (RoIs) in the image, ZALM3 employs an LLM to summarize the keywords from the preceding context and a visual grounding model to extract the RoIs. The updated images eliminate unnecessary background noise and provide more effective vision-language alignment. To better evaluate our proposed method, we design a new subjective assessment metric for multi-turn unimodal/multimodal medical dialogue to provide a fine-grained performance comparison. Our experiments across three different clinical departments remarkably demonstrate the efficacy of ZALM3 with statistical significance.<|reference_end|>
|
arxiv
|
@article{li2024zalm3:,
title={ZALM3: Zero-Shot Enhancement of Vision-Language Alignment via In-Context
Information in Multi-Turn Multimodal Medical Dialogue},
author={Zhangpu Li, Changhong Zou, Suxue Ma, Zhicheng Yang, Chen Du, Youbao
Tang, Zhenjie Cao, Ning Zhang, Jui-Hsin Lai, Ruei-Sung Lin, Yuan Ni, Xingzhi
Sun, Jing Xiao, Jieke Hou, Kai Zhang, Mei Han},
journal={arXiv preprint arXiv:2409.17610},
year={2024},
archivePrefix={arXiv},
eprint={2409.17610},
primaryClass={cs.CL cs.CV}
}
|
li2024zalm3:
|
arxiv-662206
|
2409.17612
|
Diversity-Driven Synthesis: Enhancing Dataset Distillation through Directed Weight Adjustment
|
<|reference_start|>Diversity-Driven Synthesis: Enhancing Dataset Distillation through Directed Weight Adjustment: The sharp increase in data-related expenses has motivated research into condensing datasets while retaining the most informative features. Dataset distillation has thus recently come to the fore. This paradigm generates synthetic datasets that are representative enough to replace the original dataset in training a neural network. To avoid redundancy in these synthetic datasets, it is crucial that each element contains unique features and remains diverse from others during the synthesis stage. In this paper, we provide a thorough theoretical and empirical analysis of diversity within synthesized datasets. We argue that enhancing diversity can improve the parallelizable yet isolated synthesizing approach. Specifically, we introduce a novel method that employs dynamic and directed weight adjustment techniques to modulate the synthesis process, thereby maximizing the representativeness and diversity of each synthetic instance. Our method ensures that each batch of synthetic data mirrors the characteristics of a large, varying subset of the original dataset. Extensive experiments across multiple datasets, including CIFAR, Tiny-ImageNet, and ImageNet-1K, demonstrate the superior performance of our method, highlighting its effectiveness in producing diverse and representative synthetic datasets with minimal computational expense.<|reference_end|>
|
arxiv
|
@article{du2024diversity-driven,
title={Diversity-Driven Synthesis: Enhancing Dataset Distillation through
Directed Weight Adjustment},
author={Jiawei Du, Xin Zhang, Juncheng Hu, Wenxin Huang, Joey Tianyi Zhou},
journal={arXiv preprint arXiv:2409.17612},
year={2024},
archivePrefix={arXiv},
eprint={2409.17612},
primaryClass={cs.LG cs.CV}
}
|
du2024diversity-driven
|
arxiv-662207
|
2409.17613
|
Stereographic Projection of Probabilistic Frequency-Domain Uncertainty
|
<|reference_start|>Stereographic Projection of Probabilistic Frequency-Domain Uncertainty: This paper investigates the stereographic projection of points along the Nyquist plots of single input single output (SISO) linear time invariant (LTI) systems subject to probabilistic uncertainty. At each frequency, there corresponds a complex-valued random variable with given probability distribution in the complex plane. The chordal distance between the stereographic projections of this complex value and the corresponding value for a nominal model, as per the well-known Nu-Gap metric of Vinnicombe, is also a random quantity. The main result provides the cumulative distribution function (CDF) of the chordal distance at a given frequency. Such a stochastic distance framework opens up a fresh and fertile research direction on probabilistic robust control theory.<|reference_end|>
|
arxiv
|
@article{nystrom2024stereographic,
title={Stereographic Projection of Probabilistic Frequency-Domain Uncertainty},
author={Anton Nystrom, Venkatraman Renganathan, Michael Cantoni},
journal={arXiv preprint arXiv:2409.17613},
year={2024},
archivePrefix={arXiv},
eprint={2409.17613},
primaryClass={eess.SY cs.SY}
}
|
nystrom2024stereographic
|
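The chordal distance referenced in the row above has a standard closed form from Vinnicombe's nu-gap theory; for two SISO frequency responses it reads as follows (standard formula, included for orientation).

```latex
% Pointwise chordal distance between the stereographic projections of two
% SISO frequency responses P_1(j\omega) and P_2(j\omega):
\kappa\bigl(P_1(j\omega), P_2(j\omega)\bigr)
  = \frac{\lvert P_1(j\omega) - P_2(j\omega) \rvert}
         {\sqrt{1 + \lvert P_1(j\omega) \rvert^{2}}\,\sqrt{1 + \lvert P_2(j\omega) \rvert^{2}}}
% With P_2(j\omega) random at each frequency, \kappa is the random quantity
% whose CDF the paper derives.
```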
arxiv-662208
|
2409.17617
|
Estimating The Carbon Footprint Of Digital Agriculture Deployment: A Parametric Bottom-Up Modelling Approach
|
<|reference_start|>Estimating The Carbon Footprint Of Digital Agriculture Deployment: A Parametric Bottom-Up Modelling Approach: Digitalization appears as a lever to enhance agriculture sustainability. However, existing works on digital agriculture's own sustainability remain scarce, disregarding the environmental effects of deploying digital devices on a large scale. We propose a bottom-up method to estimate the carbon footprint of digital agriculture scenarios, considering the deployment of devices over a diversity of farm sizes. It is applied to two use cases and demonstrates that digital agriculture encompasses a diversity of devices with heterogeneous carbon footprints, and that more complex devices yield higher footprints that are not always compensated by better performance or scaling gains. By emphasizing the necessity of considering the multiplicity of devices and the territorial distribution of farm sizes when modelling digital agriculture deployments, this study highlights the need for further exploration of the first-order effects of digital technologies in agriculture.<|reference_end|>
|
arxiv
|
@article{la rocca2024estimating,
title={Estimating The Carbon Footprint Of Digital Agriculture Deployment: A
Parametric Bottom-Up Modelling Approach},
author={Pierre La Rocca, Ga\"el Guennebaud, Aur\'elie Bugeau (IUF, LaBRI, UB),
Anne-Laure Ligozat (ENSIIE, LISN, STL)},
journal={arXiv preprint arXiv:2409.17617},
year={2024},
archivePrefix={arXiv},
eprint={2409.17617},
primaryClass={cs.CY}
}
|
la rocca2024estimating
|
arxiv-662209
|
2409.17618
|
Learning Occlusion-aware Decision-making from Agent Interaction via Active Perception
|
<|reference_start|>Learning Occlusion-aware Decision-making from Agent Interaction via Active Perception: Occlusion-aware decision-making is essential in autonomous driving due to the high uncertainty of various occlusions. Recent occlusion-aware decision-making methods encounter issues such as high computational complexity, scenario scalability challenges, or reliance on limited expert data. Benefiting from automatically generating data by exploration randomization, we uncover that reinforcement learning (RL) may show promise in occlusion-aware decision-making. However, previous occlusion-aware RL faces challenges in expanding to various dynamic and static occlusion scenarios, low learning efficiency, and lack of predictive ability. To address these issues, we introduce Pad-AI, a self-reinforcing framework to learn occlusion-aware decision-making through active perception. Pad-AI utilizes vectorized representation to represent occluded environments efficiently and learns over the semantic motion primitives to focus on high-level active perception exploration. Furthermore, Pad-AI integrates prediction and RL within a unified framework to provide risk-aware learning and security guarantees. Our framework was tested in challenging scenarios under both dynamic and static occlusions and demonstrated efficient and general perception-aware exploration performance compared to other strong baselines in closed-loop evaluations.<|reference_end|>
|
arxiv
|
@article{jia2024learning,
title={Learning Occlusion-aware Decision-making from Agent Interaction via
Active Perception},
author={Jie Jia, Yiming Shu, Zhongxue Gan, Wenchao Ding},
journal={arXiv preprint arXiv:2409.17618},
year={2024},
archivePrefix={arXiv},
eprint={2409.17618},
primaryClass={cs.RO}
}
|
jia2024learning
|
arxiv-662210
|
2409.17621
|
Leveraging Semantic and Geometric Information for Zero-Shot Robot-to-Human Handover
|
<|reference_start|>Leveraging Semantic and Geometric Information for Zero-Shot Robot-to-Human Handover: Human-robot interaction (HRI) encompasses a wide range of collaborative tasks, with handover being one of the most fundamental. As robots become more integrated into human environments, the potential for service robots to assist in handing objects to humans is increasingly promising. In robot-to-human (R2H) handover, selecting the optimal grasp is crucial for success, as it requires avoiding interference with the human's preferred grasp region and minimizing intrusion into their workspace. Existing methods either inadequately consider geometric information or rely on data-driven approaches, which often struggle to generalize across diverse objects. To address these limitations, we propose a novel zero-shot system that combines semantic and geometric information to generate optimal handover grasps. Our method first identifies grasp regions using semantic knowledge from vision-language models (VLMs) and, by incorporating customized visual prompts, achieves finer granularity in region grounding. A grasp is then selected based on grasp distance and approach angle to maximize human ease and avoid interference. We validate our approach through ablation studies and real-world comparison experiments. Results demonstrate that our system improves handover success rates and provides a more user-preferred interaction experience. Videos, appendices and more are available at https://sites.google.com/view/vlm-handover/.<|reference_end|>
|
arxiv
|
@article{liu2024leveraging,
title={Leveraging Semantic and Geometric Information for Zero-Shot
Robot-to-Human Handover},
author={Jiangshan Liu, Wenlong Dong, Jiankun Wang, Max Q.-H. Meng},
journal={arXiv preprint arXiv:2409.17621},
year={2024},
archivePrefix={arXiv},
eprint={2409.17621},
primaryClass={cs.RO}
}
|
liu2024leveraging
|
arxiv-662211
|
2409.17622
|
Neural P$^3$M: A Long-Range Interaction Modeling Enhancer for Geometric GNNs
|
<|reference_start|>Neural P$^3$M: A Long-Range Interaction Modeling Enhancer for Geometric GNNs: Geometric graph neural networks (GNNs) have emerged as powerful tools for modeling molecular geometry. However, they encounter limitations in effectively capturing long-range interactions in large molecular systems. To address this challenge, we introduce Neural P$^3$M, a versatile enhancer of geometric GNNs that expands the scope of their capabilities by incorporating mesh points alongside atoms and reimagining traditional mathematical operations in a trainable manner. Neural P$^3$M exhibits flexibility across a wide range of molecular systems and demonstrates remarkable accuracy in predicting energies and forces, outperforming prior methods on benchmarks such as the MD22 dataset. It also achieves an average improvement of 22% on the OE62 dataset while integrating with various architectures.<|reference_end|>
|
arxiv
|
@article{wang2024neural,
title={Neural P$^3$M: A Long-Range Interaction Modeling Enhancer for Geometric
GNNs},
author={Yusong Wang, Chaoran Cheng, Shaoning Li, Yuxuan Ren, Bin Shao, Ge Liu,
Pheng-Ann Heng, Nanning Zheng},
journal={arXiv preprint arXiv:2409.17622},
year={2024},
archivePrefix={arXiv},
eprint={2409.17622},
primaryClass={cs.LG cs.AI}
}
|
wang2024neural
|
arxiv-662212
|
2409.17623
|
Fully Dynamic Graph Algorithms with Edge Differential Privacy
|
<|reference_start|>Fully Dynamic Graph Algorithms with Edge Differential Privacy: We study differentially private algorithms for analyzing graphs in the challenging setting of continual release with fully dynamic updates, where edges are inserted and deleted over time, and the algorithm is required to update the solution at every time step. Previous work has presented differentially private algorithms for many graph problems that can handle insertions only or deletions only (called partially dynamic algorithms) and obtained some hardness results for the fully dynamic setting. The only algorithms in the latter setting were for the edge count, given by Fichtenberger, Henzinger, and Ost (ESA 21), and for releasing the values of all graph cuts, given by Fichtenberger, Henzinger, and Upadhyay (ICML 23). We provide the first differentially private and fully dynamic graph algorithms for several other fundamental graph statistics (including the triangle count, the number of connected components, the size of the maximum matching, and the degree histogram), analyze their error and show strong lower bounds on the error for all algorithms in this setting. We study two variants of edge differential privacy for fully dynamic graph algorithms: event-level and item-level. We give upper and lower bounds on the error of both event-level and item-level fully dynamic algorithms for several fundamental graph problems. No fully dynamic algorithms that are private at the item-level (the more stringent of the two notions) were known before. In the case of item-level privacy, for several problems, our algorithms match our lower bounds.<|reference_end|>
|
arxiv
|
@article{raskhodnikova2024fully,
title={Fully Dynamic Graph Algorithms with Edge Differential Privacy},
author={Sofya Raskhodnikova and Teresa Anna Steiner},
journal={arXiv preprint arXiv:2409.17623},
year={2024},
archivePrefix={arXiv},
eprint={2409.17623},
primaryClass={cs.DS cs.CR}
}
|
raskhodnikova2024fully
|
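The private edge-count baseline that the row above builds on is typically implemented with the binary-tree mechanism for continual counting. The sketch below shows that standard building block (Chan-Shi-Song style), not the paper's own algorithms, with edge insertions and deletions encoded as +1/-1 updates.

```python
# Hedged sketch: the binary (tree) mechanism for eps-DP continual counting,
# a standard building block in this line of work, not the paper's exact
# algorithm. Insertions/deletions arrive as +1/-1 updates.
import math
import random

def laplace(scale: float) -> float:
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def binary_mechanism(updates, eps):
    """Release a noisy running count after every update."""
    T = len(updates)
    scale = math.ceil(math.log2(T + 1)) / eps   # each update touches O(log T) p-sums
    alpha, alpha_hat = {}, {}
    outputs = []
    for t, x in enumerate(updates, start=1):
        i = (t & -t).bit_length() - 1           # index of lowest set bit of t
        alpha[i] = sum(alpha.get(j, 0) for j in range(i)) + x
        for j in range(i):                      # close out finished p-sums
            alpha[j], alpha_hat[j] = 0, 0.0
        alpha_hat[i] = alpha[i] + laplace(scale)
        outputs.append(sum(alpha_hat.get(j, 0.0)
                           for j in range(T.bit_length() + 1) if (t >> j) & 1))
    return outputs

print(binary_mechanism([+1, +1, -1, +1], eps=1.0))  # noisy edge counts over time
```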
arxiv-662213
|
2409.17624
|
HGS-Planner: Hierarchical Planning Framework for Active Scene Reconstruction Using 3D Gaussian Splatting
|
<|reference_start|>HGS-Planner: Hierarchical Planning Framework for Active Scene Reconstruction Using 3D Gaussian Splatting: In complex missions such as search and rescue, robots must make intelligent decisions in unknown environments, relying on their ability to perceive and understand their surroundings. High-quality and real-time reconstruction enhances situational awareness and is crucial for intelligent robotics. Traditional methods often struggle with poor scene representation or are too slow for real-time use. Inspired by the efficacy of 3D Gaussian Splatting (3DGS), we propose a hierarchical planning framework for fast and high-fidelity active reconstruction. Our method evaluates completion and quality gain to adaptively guide reconstruction, integrating global and local planning for efficiency. Experiments in simulated and real-world environments show our approach outperforms existing real-time methods.<|reference_end|>
|
arxiv
|
@article{xu2024hgs-planner:,
title={HGS-Planner: Hierarchical Planning Framework for Active Scene
Reconstruction Using 3D Gaussian Splatting},
author={Zijun Xu, Rui Jin, Ke Wu, Yi Zhao, Zhiwei Zhang, Jieru Zhao, Fei Gao,
Zhongxue Gan and Wenchao Ding},
journal={arXiv preprint arXiv:2409.17624},
year={2024},
archivePrefix={arXiv},
eprint={2409.17624},
primaryClass={cs.RO}
}
|
xu2024hgs-planner:
|
arxiv-662214
|
2409.17625
|
Benign or Not-Benign Overfitting in Token Selection of Attention Mechanism
|
<|reference_start|>Benign or Not-Benign Overfitting in Token Selection of Attention Mechanism: Modern over-parameterized neural networks can be trained to fit the training data perfectly while still maintaining a high generalization performance. This "benign overfitting" phenomenon has been studied in a surge of recent theoretical work; however, most of these studies have been limited to linear models or two-layer neural networks. In this work, we analyze benign overfitting in the token selection mechanism of the attention architecture, which characterizes the success of transformer models. We first show the existence of a benign overfitting solution and explain its mechanism in the attention architecture. Next, we discuss whether the model converges to such a solution, raising the difficulties specific to the attention architecture. We then present benign overfitting cases and not-benign overfitting cases by conditioning different scenarios based on the behavior of attention probabilities during training. To the best of our knowledge, this is the first study to characterize benign overfitting for the attention mechanism.<|reference_end|>
|
arxiv
|
@article{sakamoto2024benign,
title={Benign or Not-Benign Overfitting in Token Selection of Attention
Mechanism},
author={Keitaro Sakamoto and Issei Sato},
journal={arXiv preprint arXiv:2409.17625},
year={2024},
archivePrefix={arXiv},
eprint={2409.17625},
primaryClass={cs.LG}
}
|
sakamoto2024benign
|
arxiv-662215
|
2409.17626
|
Recognizing Lawyers as AI Creators and Intermediaries in Contestability
|
<|reference_start|>Recognizing Lawyers as AI Creators and Intermediaries in Contestability: Laws play a key role in the complex socio-technical system impacting contestability: they create the regulations shaping the way AI systems are designed, evaluated, and used. Despite their role in the AI value chain, lawyers' impact on contestability has gone largely unrecognized in the design of AI systems. In this paper, we highlight two main roles lawyers play that impact contestability: (1) as AI Creators, because the regulations they create shape the design and evaluation of AI systems before they are deployed; and (2) as Intermediaries, because they interpret regulations when harm occurs, navigating the gap between stakeholders, institutions, and harmful outcomes. We use these two roles to illuminate new opportunities and challenges for including lawyers in the design of AI systems, contributing a significant first step in practical recommendations to amplify the power to contest systems through cross-disciplinary design.<|reference_end|>
|
arxiv
|
@article{mansi2024recognizing,
title={Recognizing Lawyers as AI Creators and Intermediaries in Contestability},
author={Gennie Mansi, Mark Riedl},
journal={arXiv preprint arXiv:2409.17626},
year={2024},
archivePrefix={arXiv},
eprint={2409.17626},
primaryClass={cs.HC}
}
|
mansi2024recognizing
|
arxiv-662216
|
2409.17627
|
Verifying Randomized Consensus Protocols with Common Coins
|
<|reference_start|>Verifying Randomized Consensus Protocols with Common Coins: Randomized fault-tolerant consensus protocols with common coins are widely used in cloud computing and blockchain platforms. Due to their fundamental role, it is vital to guarantee their correctness. Threshold automata are a formal model designed for the verification of fault-tolerant consensus protocols. They have recently been extended to probabilistic threshold automata (PTAs) to verify randomized fault-tolerant consensus protocols. Nevertheless, PTAs can only model randomized consensus protocols with local coins. In this work, we extend PTAs to verify randomized fault-tolerant consensus protocols with common coins. Our main idea is to add a process to simulate the common coin (the so-called common-coin process). Although the addition of the common-coin process destroys the symmetry and poses technical challenges, we show how PTAs can be adapted to overcome the challenges. We apply our approach to verify the agreement, validity and almost-sure termination properties of 8 randomized consensus protocols with common coins.<|reference_end|>
|
arxiv
|
@article{gao2024verifying,
title={Verifying Randomized Consensus Protocols with Common Coins},
author={Song Gao, Bohua Zhan, Zhilin Wu, Lijun Zhang},
journal={2024 54th Annual IEEE/IFIP International Conference on Dependable
Systems and Networks (DSN), Brisbane, Australia, 2024, pp. 403-415},
year={2024},
doi={10.1109/DSN58291.2024.00047},
archivePrefix={arXiv},
eprint={2409.17627},
primaryClass={cs.DC cs.FL}
}
|
gao2024verifying
|
arxiv-662217
|
2409.17628
|
Convolutional Signal Propagation: A Simple Scalable Algorithm for Hypergraphs
|
<|reference_start|>Convolutional Signal Propagation: A Simple Scalable Algorithm for Hypergraphs: The last decade has seen the emergence of numerous methods for learning on graphs, particularly Graph Neural Networks (GNNs). These methods, however, are often not directly applicable to more complex structures like bipartite graphs (equivalent to hypergraphs), which represent interactions among two entity types (e.g. a user liking a movie). This paper proposes Convolutional Signal Propagation (CSP), a simple, scalable, non-parametric method that natively operates on bipartite graphs (hypergraphs) and can be implemented with just a few lines of code. After defining CSP, we demonstrate its relationship with well-established methods like label propagation, Naive Bayes, and Hypergraph Convolutional Networks. We evaluate CSP against several reference methods on real-world datasets from multiple domains, focusing on retrieval and classification tasks. Our results show that CSP offers competitive performance while maintaining low computational complexity, making it an ideal first choice as a baseline for hypergraph node classification and retrieval. Moreover, despite operating on hypergraphs, CSP achieves good results in tasks typically not associated with hypergraphs, such as natural language processing.<|reference_end|>
|
arxiv
|
@article{procházka2024convolutional,
title={Convolutional Signal Propagation: A Simple Scalable Algorithm for
Hypergraphs},
author={Pavel Proch\'azka, Marek D\v{e}di\v{c}, Luk\'a\v{s} Bajer},
journal={arXiv preprint arXiv:2409.17628},
year={2024},
archivePrefix={arXiv},
eprint={2409.17628},
primaryClass={cs.LG}
}
|
procházka2024convolutional
|
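The row above claims CSP fits in a few lines of code. The sketch below is one plausible reading of a single propagation step (row/column mean aggregation over the incidence matrix, guessed from CSP's stated relationship to label propagation); the paper's exact normalization may differ.

```python
# Hedged sketch of one CSP-style propagation step on an incidence matrix
# H (n_nodes x n_edges). The mean-aggregation normalization is a guess
# based on the stated link to label propagation, not the paper's formula.
import numpy as np

def csp_step(H: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Propagate a node signal x through hyperedges and back to nodes."""
    edge_deg = H.sum(axis=0, keepdims=True)            # nodes per hyperedge
    node_deg = H.sum(axis=1, keepdims=True)            # hyperedges per node
    edge_signal = (H.T @ x) / edge_deg.T.clip(min=1)   # average over members
    return (H @ edge_signal) / node_deg.clip(min=1)    # average back to nodes

H = np.array([[1, 0], [1, 1], [0, 1]], dtype=float)    # 3 nodes, 2 hyperedges
x = np.array([[1.0], [0.0], [0.0]])                    # e.g. one labeled node
print(csp_step(H, x))                                  # propagated label signal
```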
arxiv-662218
|
2409.17629
|
Hand-object reconstruction via interaction-aware graph attention mechanism
|
<|reference_start|>Hand-object reconstruction via interaction-aware graph attention mechanism: Estimating the poses of both a hand and an object has become an important area of research due to the growing need for advanced vision computing. The primary challenge involves understanding and reconstructing how hands and objects interact, such as contact and physical plausibility. Existing approaches often adopt a graph neural network to incorporate spatial information of hand and object meshes. However, these approaches have not fully exploited the potential of graphs, leaving the edges within and between hand and object graphs unmodified. We propose a graph-based refinement method that incorporates an interaction-aware graph-attention mechanism to account for hand-object interactions. Using edges, we establish connections among closely correlated nodes, both within individual graphs and across different graphs. Experiments demonstrate the effectiveness of our proposed method, with notable improvements in the realm of physical plausibility.<|reference_end|>
|
arxiv
|
@article{woo2024hand-object,
title={Hand-object reconstruction via interaction-aware graph attention
mechanism},
author={Taeyun Woo, Tae-Kyun Kim, Jinah Park},
journal={arXiv preprint arXiv:2409.17629},
year={2024},
archivePrefix={arXiv},
eprint={2409.17629},
primaryClass={cs.CV cs.AI}
}
|
woo2024hand-object
|
arxiv-662219
|
2409.17630
|
System-Level Safety Monitoring and Recovery for Perception Failures in Autonomous Vehicles
|
<|reference_start|>System-Level Safety Monitoring and Recovery for Perception Failures in Autonomous Vehicles: The safety-critical nature of autonomous vehicle (AV) operation necessitates development of task-relevant algorithms that can reason about safety at the system level and not just at the component level. To reason about the impact of a perception failure on the entire system performance, such task-relevant algorithms must contend with various challenges: complexity of AV stacks, high uncertainty in the operating environments, and the need for real-time performance. To overcome these challenges, in this work, we introduce a Q-network called SPARQ (abbreviation for Safety evaluation for Perception And Recovery Q-network) that evaluates the safety of a plan generated by a planning algorithm, accounting for perception failures that the planning process may have overlooked. This Q-network can be queried during system runtime to assess whether a proposed plan is safe for execution or poses potential safety risks. If a violation is detected, the network can then recommend a corrective plan while accounting for the perceptual failure. We validate our algorithm using the NuPlan-Vegas dataset, demonstrating its ability to handle cases where a perception failure compromises a proposed plan while the corrective plan remains safe. We observe an overall accuracy and recall of 90% while sustaining a frequency of 42 Hz on the unseen testing dataset. We compare our performance to a popular reachability-based baseline and analyze some interesting properties of our approach in improving the safety properties of an AV pipeline.<|reference_end|>
|
arxiv
|
@article{chakraborty2024system-level,
title={System-Level Safety Monitoring and Recovery for Perception Failures in
Autonomous Vehicles},
author={Kaustav Chakraborty, Zeyuan Feng, Sushant Veer, Apoorva Sharma, Boris
Ivanovic, Marco Pavone, Somil Bansal},
journal={arXiv preprint arXiv:2409.17630},
year={2024},
archivePrefix={arXiv},
eprint={2409.17630},
primaryClass={cs.RO}
}
|
chakraborty2024system-level
|
arxiv-662220
|
2409.17632
|
Model-Free Stochastic Process Modeling and Optimization using Normalizing Flows
|
<|reference_start|>Model-Free Stochastic Process Modeling and Optimization using Normalizing Flows: Real-world chemical processes often exhibit stochastic dynamics with non-trivial correlations and state-dependent fluctuations. However, most process models simply add stationary noise terms to a deterministic prediction, which can lead to inaccurate predictions. This work proposes using conditional normalizing flows as discrete-time models (DTMs) to learn the stochastic dynamics of chemical processes. Normalizing flows learn an explicit expression of the system states' probability density function (PDF) given prior states and control inputs. The resulting model naturally allows for formulating stochastic and probabilistic setpoint-tracking objectives and chance constraints. In applications to a continuous reactor and a reactor cascade, the normalizing flow yields stable simulations over long time horizons and high-quality results in stochastic and probabilistic MPC formulation for open-loop control. Furthermore, a chance-constrained optimization finds reliable startup controls for the reactor cascade with stochastic reactions. In conclusion, the conditional normalizing flow presents an excellent choice for modeling nonlinear stochastic dynamics.<|reference_end|>
|
arxiv
|
@article{cramer2024model-free,
title={Model-Free Stochastic Process Modeling and Optimization using
Normalizing Flows},
author={Eike Cramer},
journal={arXiv preprint arXiv:2409.17632},
year={2024},
archivePrefix={arXiv},
eprint={2409.17632},
primaryClass={cs.LG}
}
|
cramer2024model-free
|
arxiv-662221
|
2409.17634
|
P4Q: Learning to Prompt for Quantization in Visual-language Models
|
<|reference_start|>P4Q: Learning to Prompt for Quantization in Visual-language Models: Large-scale pre-trained Vision-Language Models (VLMs) have gained prominence in various visual and multimodal tasks, yet the deployment of VLMs on downstream application platforms remains challenging due to their prohibitive requirements of training samples and computing resources. Fine-tuning and quantization of VLMs can substantially reduce the sample and computation costs, both of which are urgently needed. There are two prevailing paradigms in quantization: Quantization-Aware Training (QAT) can effectively quantize large-scale VLMs but incurs a huge training cost, while low-bit Post-Training Quantization (PTQ) suffers from a notable performance drop. We propose a method that balances fine-tuning and quantization named ``Prompt for Quantization'' (P4Q), in which we design a lightweight architecture to leverage contrastive loss supervision to enhance the recognition performance of a PTQ model. Our method can effectively reduce the gap between image features and text features caused by low-bit quantization, based on learnable prompts to reorganize textual representations and a low-bit adapter to realign the distributions of image and text features. We also introduce a distillation loss based on cosine similarity predictions to distill the quantized model using a full-precision teacher. Extensive experimental results demonstrate that our P4Q method outperforms prior arts, even achieving comparable results to its full-precision counterparts. For instance, our 8-bit P4Q can theoretically compress the CLIP-ViT/B-32 by 4$\times$ while achieving 66.94\% Top-1 accuracy, outperforming the learnable prompt fine-tuned full-precision model by 2.24\% with negligible additional parameters on the ImageNet dataset.<|reference_end|>
|
arxiv
|
@article{sun2024p4q:,
title={P4Q: Learning to Prompt for Quantization in Visual-language Models},
author={Huixin Sun, Runqi Wang, Yanjing Li, Xianbin Cao, Xiaolong Jiang, Yao
Hu, Baochang Zhang},
journal={arXiv preprint arXiv:2409.17634},
year={2024},
archivePrefix={arXiv},
eprint={2409.17634},
primaryClass={cs.CV cs.AI}
}
|
sun2024p4q:
|
arxiv-662222
|
2409.17635
|
FlowMAC: Conditional Flow Matching for Audio Coding at Low Bit Rates
|
<|reference_start|>FlowMAC: Conditional Flow Matching for Audio Coding at Low Bit Rates: This paper introduces FlowMAC, a novel neural audio codec for high-quality general audio compression at low bit rates based on conditional flow matching (CFM). FlowMAC jointly learns a mel spectrogram encoder, quantizer and decoder. At inference time the decoder integrates a continuous normalizing flow via an ODE solver to generate a high-quality mel spectrogram. This is the first time that a CFM-based approach is applied to general audio coding, enabling scalable, simple and memory-efficient training. Our subjective evaluations show that FlowMAC at 3 kbps achieves similar quality as state-of-the-art GAN-based and DDPM-based neural audio codecs at double the bit rate. Moreover, FlowMAC offers a tunable inference pipeline, which permits trading off complexity and quality. This enables real-time coding on CPU, while maintaining high perceptual quality.<|reference_end|>
|
arxiv
|
@article{pia2024flowmac:,
title={FlowMAC: Conditional Flow Matching for Audio Coding at Low Bit Rates},
author={Nicola Pia and Martin Strauss and Markus Multrus and Bernd Edler},
journal={arXiv preprint arXiv:2409.17635},
year={2024},
archivePrefix={arXiv},
eprint={2409.17635},
primaryClass={eess.AS cs.LG cs.SD}
}
|
pia2024flowmac:
|
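For orientation, the conditional flow-matching objective underlying a codec like the one in the row above is shown below in its standard linear-path form. FlowMAC's actual conditioning and network are not specified in the abstract, so `VelocityNet` and all dimensions are stand-ins.

```python
# Hedged sketch of the standard CFM training objective (linear paths).
# NOT FlowMAC's exact parameterization; the network and dims are stand-ins.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Toy stand-in for the conditional mel-spectrogram decoder."""
    def __init__(self, dim, cond_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + cond_dim + 1, 128),
                                 nn.ReLU(), nn.Linear(128, dim))
    def forward(self, x_t, t, cond):
        return self.net(torch.cat([x_t, cond, t], dim=-1))

def cfm_loss(model, x1, cond):
    x0 = torch.randn_like(x1)          # noise endpoint
    t = torch.rand(x1.size(0), 1)      # random time in [0, 1]
    x_t = (1 - t) * x0 + t * x1        # point on the linear path
    target_v = x1 - x0                 # velocity of that path
    return ((model(x_t, t, cond) - target_v) ** 2).mean()

model = VelocityNet(dim=80, cond_dim=16)  # e.g. 80 mel bins, 16-dim quantized cond.
loss = cfm_loss(model, torch.randn(4, 80), torch.randn(4, 16))
# At inference, an ODE solver integrates model(x_t, t, cond) from t=0 to t=1.
```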
arxiv-662223
|
2409.17637
|
Intervention strategies for misinformation sharing on social media: A bibliometric analysis
|
<|reference_start|>Intervention strategies for misinformation sharing on social media: A bibliometric analysis: Widely distributed misinformation shared across social media channels is a pressing issue that poses a significant threat to many aspects of society's well-being. Inaccurate shared information causes confusion, can adversely affect mental health, and can lead to mis-informed decision-making. Therefore, it is important to implement proactive measures to intervene and curb the spread of misinformation where possible. This has prompted scholars to investigate a variety of intervention strategies for misinformation sharing on social media. This study explores the typology of intervention strategies for addressing misinformation sharing on social media, identifying 4 important clusters - cognition-based, automated-based, information-based, and hybrid-based. The literature selection process utilized the PRISMA method to ensure a systematic and comprehensive analysis of relevant literature while maintaining transparency and reproducibility. A total of 139 articles published from 2013-2023 were then analyzed. Meanwhile, bibliometric analyses were conducted using performance analysis and science mapping techniques for the typology development. A comparative analysis of the typology was conducted to reveal patterns and evolution in the field. This provides valuable insights for both theory and practical applications. Overall, the study concludes that scholarly contributions to scientific research and publication help to address research gaps and expand knowledge in this field. Understanding the evolution of intervention strategies for misinformation sharing on social media can support future research that contributes to the development of more effective and sustainable solutions to this persistent problem.<|reference_end|>
|
arxiv
|
@article{zainudin2024intervention,
title={Intervention strategies for misinformation sharing on social media: A
bibliometric analysis},
author={Juanita Zainudin and Nazlena Mohamad Ali and Alan F. Smeaton and
Mohamad Taha Ijab},
journal={arXiv preprint arXiv:2409.17637},
year={2024},
doi={10.1109/ACCESS.2024.3469248},
archivePrefix={arXiv},
eprint={2409.17637},
primaryClass={cs.SI cs.CY physics.soc-ph}
}
|
zainudin2024intervention
|
arxiv-662224
|
2409.17640
|
T3: A Novel Zero-shot Transfer Learning Framework Iteratively Training on an Assistant Task for a Target Task
|
<|reference_start|>T3: A Novel Zero-shot Transfer Learning Framework Iteratively Training on an Assistant Task for a Target Task: Long text summarization, gradually becoming essential for efficiently processing large volumes of information, remains challenging for Large Language Models (LLMs) such as the GPT and LLaMA families because of insufficient open-sourced training datasets and the high demands of handling contextual detail. To address the issue, we design a novel zero-shot transfer learning framework, abbreviated as T3, that iteratively trains a baseline LLM on an assistant task for the target task, where the former should have richer data resources and share structural or semantic similarity with the latter. In practice, T3 is applied to the long text summarization task by utilizing question answering as the assistant task, and is further validated on the BBC summary, NarraSum, FairytaleQA, and NLQuAD datasets, with up to nearly 14% improvement in ROUGE, 35% improvement in BLEU, and 16% improvement in Factscore compared to three baseline LLMs, demonstrating its potential for more assistant-target task combinations.<|reference_end|>
|
arxiv
|
@article{tong2024t3:,
title={T3: A Novel Zero-shot Transfer Learning Framework Iteratively Training
on an Assistant Task for a Target Task},
author={Xindi Tong, Yujin Zhu, Shijian Fan, Liang Xu},
journal={arXiv preprint arXiv:2409.17640},
year={2024},
archivePrefix={arXiv},
eprint={2409.17640},
primaryClass={cs.CL cs.AI}
}
|
tong2024t3:
|
arxiv-662225
|
2409.17641
|
AP-VLM: Active Perception Enabled by Vision-Language Models
|
<|reference_start|>AP-VLM: Active Perception Enabled by Vision-Language Models: Active perception enables robots to dynamically gather information by adjusting their viewpoints, a crucial capability for interacting with complex, partially observable environments. In this paper, we present AP-VLM, a novel framework that combines active perception with a Vision-Language Model (VLM) to guide robotic exploration and answer semantic queries. Using a 3D virtual grid overlaid on the scene and orientation adjustments, AP-VLM allows a robotic manipulator to intelligently select optimal viewpoints and orientations to resolve challenging tasks, such as identifying objects in occluded or inclined positions. We evaluate our system on two robotic platforms: a 7-DOF Franka Panda and a 6-DOF UR5, across various scenes with differing object configurations. Our results demonstrate that AP-VLM significantly outperforms passive perception methods and baseline models, including Toward Grounded Common Sense Reasoning (TGCSR), particularly in scenarios where fixed camera views are inadequate. The adaptability of AP-VLM in real-world settings shows promise for enhancing robotic systems' understanding of complex environments, bridging the gap between high-level semantic reasoning and low-level control.<|reference_end|>
|
arxiv
|
@article{sripada2024ap-vlm:,
title={AP-VLM: Active Perception Enabled by Vision-Language Models},
author={Venkatesh Sripada, Samuel Carter, Frank Guerin, Amir Ghalamzan},
journal={arXiv preprint arXiv:2409.17641},
year={2024},
archivePrefix={arXiv},
eprint={2409.17641},
primaryClass={cs.RO}
}
|
sripada2024ap-vlm:
|
arxiv-662226
|
2409.17642
|
AI Delegates with a Dual Focus: Ensuring Privacy and Strategic Self-Disclosure
|
<|reference_start|>AI Delegates with a Dual Focus: Ensuring Privacy and Strategic Self-Disclosure: Large language model (LLM)-based AI delegates are increasingly utilized to act on behalf of users, assisting them with a wide range of tasks through conversational interfaces. Despite their advantages, concerns arise regarding the potential risk of privacy leaks, particularly in scenarios involving social interactions. While existing research has focused on protecting privacy by limiting the access of AI delegates to sensitive user information, many social scenarios require disclosing private details to achieve desired outcomes, necessitating a balance between privacy protection and disclosure. To address this challenge, we conduct a pilot study to investigate user preferences for AI delegates across various social relations and task scenarios, and then propose a novel AI delegate system that enables privacy-conscious self-disclosure. Our user study demonstrates that the proposed AI delegate strategically protects privacy, pioneering its use in diverse and dynamic social interactions.<|reference_end|>
|
arxiv
|
@article{chen2024ai,
title={AI Delegates with a Dual Focus: Ensuring Privacy and Strategic
Self-Disclosure},
author={Xi Chen, Zhiyang Zhang, Fangkai Yang, Xiaoting Qin, Chao Du, Xi Cheng,
Hangxin Liu, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang},
journal={arXiv preprint arXiv:2409.17642},
year={2024},
archivePrefix={arXiv},
eprint={2409.17642},
primaryClass={cs.AI cs.CY}
}
|
chen2024ai
|
arxiv-662227
|
2409.17643
|
Efficient Fairness-Performance Pareto Front Computation
|
<|reference_start|>Efficient Fairness-Performance Pareto Front Computation: There is a well known intrinsic trade-off between the fairness of a representation and the performance of classifiers derived from the representation. Due to the complexity of optimisation algorithms in most modern representation learning approaches, for a given method it may be non-trivial to decide whether the obtained fairness-performance curve of the method is optimal, i.e., whether it is close to the true Pareto front for these quantities for the underlying data distribution. In this paper we propose a new method to compute the optimal Pareto front, which does not require the training of complex representation models. We show that optimal fair representations possess several useful structural properties, and that these properties enable a reduction of the computation of the Pareto front to a compact discrete problem. We then also show that these compact approximating problems can be efficiently solved via off-the-shelf concave-convex programming methods. Since our approach is independent of the specific model of representations, it may be used as the benchmark to which representation learning algorithms may be compared. We experimentally evaluate the approach on a number of real-world benchmark datasets.<|reference_end|>
|
arxiv
|
@article{kozdoba2024efficient,
title={Efficient Fairness-Performance Pareto Front Computation},
author={Mark Kozdoba, Binyamin Perets and Shie Mannor},
journal={arXiv preprint arXiv:2409.17643},
year={2024},
archivePrefix={arXiv},
eprint={2409.17643},
primaryClass={stat.ML cs.LG}
}
|
kozdoba2024efficient
|
arxiv-662228
|
2409.17647
|
MECD: Unlocking Multi-Event Causal Discovery in Video Reasoning
|
<|reference_start|>MECD: Unlocking Multi-Event Causal Discovery in Video Reasoning: Video causal reasoning aims to achieve a high-level understanding of video content from a causal perspective. However, current video reasoning tasks are limited in scope, primarily executed in a question-answering paradigm and focusing on short videos containing only a single event and simple causal relationships, lacking comprehensive and structured causality analysis for videos with multiple events. To fill this gap, we introduce a new task and dataset, Multi-Event Causal Discovery (MECD). It aims to uncover the causal relationships between events distributed chronologically across long videos. Given visual segments and textual descriptions of events, MECD requires identifying the causal associations between these events to derive a comprehensive, structured event-level video causal diagram explaining why and how the final result event occurred. To address MECD, we devise a novel framework inspired by the Granger Causality method, using an efficient mask-based event prediction model to perform an Event Granger Test, which estimates causality by comparing the predicted result event when premise events are masked versus unmasked. Furthermore, we integrate causal inference techniques such as front-door adjustment and counterfactual inference to address challenges in MECD like causality confounding and illusory causality. Experiments validate the effectiveness of our framework in providing causal relationships in multi-event videos, outperforming GPT-4o and VideoLLaVA by 5.7% and 4.1%, respectively.<|reference_end|>
|
arxiv
|
@article{chen2024mecd:,
title={MECD: Unlocking Multi-Event Causal Discovery in Video Reasoning},
author={Tieyuan Chen, Huabin Liu, Tianyao He, Yihang Chen, Chaofan Gan, Xiao
Ma, Cheng Zhong, Yang Zhang, Yingxue Wang, Hui Lin, Weiyao Lin},
journal={arXiv preprint arXiv:2409.17647},
year={2024},
archivePrefix={arXiv},
eprint={2409.17647},
primaryClass={cs.CV}
}
|
chen2024mecd:
|
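The Event Granger Test in the row above compares the prediction of the result event with a premise masked versus unmasked; a minimal sketch of that comparison follows, where `predict_result` is a hypothetical stand-in for the paper's mask-based event prediction model.

```python
# Hedged sketch of the Event Granger Test idea: score each premise event by
# how much masking it degrades prediction of the result event. The predictor
# is a hypothetical stand-in, not the paper's model.
from typing import Callable, List, Sequence

def event_granger_scores(
    events: Sequence[str],
    predict_result: Callable[[List[str], str], float],
) -> List[float]:
    """Higher score means masking the premise hurts more, i.e. a stronger causal link."""
    premises, result = list(events[:-1]), events[-1]
    base = predict_result(premises, result)   # result-event confidence, all premises visible
    scores = []
    for i in range(len(premises)):
        masked = premises.copy()
        masked[i] = "[MASK]"
        scores.append(base - predict_result(masked, result))
    return scores

toy = lambda evs, res: 1.0 - 0.3 * evs.count("[MASK]")  # toy predictor for illustration
print(event_granger_scores(["e1", "e2", "e3", "result"], toy))
```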
arxiv-662229
|
2409.17648
|
Efficient In-Domain Question Answering for Resource-Constrained Environments
|
<|reference_start|>Efficient In-Domain Question Answering for Resource-Constrained Environments: Retrieval Augmented Generation (RAG) is a common method for integrating external knowledge into pretrained Large Language Models (LLMs) to enhance accuracy and relevancy in question answering (QA) tasks. However, prompt engineering and resource efficiency remain significant bottlenecks in developing optimal and robust RAG solutions for real-world QA applications. Recent studies have shown success in using fine tuning to address these problems; in particular, Retrieval Augmented Fine Tuning (RAFT) applied to smaller 7B models has demonstrated superior performance compared to RAG setups with much larger models such as GPT-3.5. The combination of RAFT with parameter-efficient fine tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), promises an even more efficient solution, yet remains an unexplored area. In this work, we combine RAFT with LoRA to reduce fine tuning and storage requirements and gain faster inference times while maintaining comparable RAG performance. This results in a more compute-efficient RAFT, or CRAFT, which is particularly useful for knowledge-intensive QA tasks in resource-constrained environments where internet access may be restricted and hardware resources limited.<|reference_end|>
|
arxiv
|
@article{chung2024efficient,
title={Efficient In-Domain Question Answering for Resource-Constrained
Environments},
author={Isaac Chung, Phat Vo, Arman C. Kizilkale, Aaron Reite},
journal={arXiv preprint arXiv:2409.17648},
year={2024},
archivePrefix={arXiv},
eprint={2409.17648},
primaryClass={cs.CL}
}
|
chung2024efficient
|
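As a rough illustration of the RAFT+LoRA combination described above, here is how LoRA adapters are typically attached with Hugging Face's `peft`; the rank, target modules, and base model are illustrative assumptions, not the paper's reported configuration.

```python
# Hedged sketch: attaching LoRA adapters for RAFT-style fine-tuning with
# Hugging Face `peft`. Hyperparameters and the model name are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
lora_cfg = LoraConfig(
    r=16,                                  # low-rank update dimension
    lora_alpha=32,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()         # tiny fraction of the 7B weights
# Fine-tune on RAFT-style (question, retrieved documents, reasoned answer)
# examples; only the small adapter weights need to be stored per domain.
```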
arxiv-662230
|
2409.17649
|
Provable Performance Guarantees of Copy Detection Patterns
|
<|reference_start|>Provable Performance Guarantees of Copy Detection Patterns: Copy Detection Patterns (CDPs) are crucial elements in modern security applications, playing a vital role in safeguarding industries such as food, pharmaceuticals, and cosmetics. Current performance evaluations of CDPs predominantly rely on empirical setups using simplistic metrics like Hamming distances or Pearson correlation. These methods are often inadequate due to their sensitivity to distortions, degradation, and their limitations to stationary statistics of printing and imaging. Additionally, machine learning-based approaches suffer from distribution biases and fail to generalize to unseen counterfeit samples. Given the critical importance of CDPs in preventing counterfeiting, including the counterfeit vaccines issue highlighted during the COVID-19 pandemic, there is an urgent need for provable performance guarantees across various criteria. This paper aims to establish a theoretical framework to derive optimal criteria for the analysis, optimization, and future development of CDP authentication technologies, ensuring their reliability and effectiveness in diverse security scenarios.<|reference_end|>
|
arxiv
|
@article{tutt2024provable,
title={Provable Performance Guarantees of Copy Detection Patterns},
author={Joakim Tutt and Slava Voloshynovskiy},
journal={arXiv preprint arXiv:2409.17649},
year={2024},
archivePrefix={arXiv},
eprint={2409.17649},
primaryClass={cs.CR cs.CV}
}
|
tutt2024provable
|
arxiv-662231
|
2409.17650
|
Digital Twin Ecosystem for Oncology Clinical Operations
|
<|reference_start|>Digital Twin Ecosystem for Oncology Clinical Operations: Artificial Intelligence (AI) and Large Language Models (LLMs) hold significant promise in revolutionizing healthcare, especially in clinical applications. Simultaneously, Digital Twin technology, which models and simulates complex systems, has gained traction in enhancing patient care. However, despite the advances in experimental clinical settings, the potential of AI and digital twins to streamline clinical operations remains largely untapped. This paper introduces a novel digital twin framework specifically designed to enhance oncology clinical operations. We propose the integration of multiple specialized digital twins, such as the Medical Necessity Twin, Care Navigator Twin, and Clinical History Twin, to enhance workflow efficiency and personalize care for each patient based on their unique data. Furthermore, by synthesizing multiple data sources and aligning them with the National Comprehensive Cancer Network (NCCN) guidelines, we create a dynamic Cancer Care Path, a continuously evolving knowledge base that enables these digital twins to provide precise, tailored clinical recommendations.<|reference_end|>
|
arxiv
|
@article{pandey2024digital,
title={Digital Twin Ecosystem for Oncology Clinical Operations},
author={Himanshu Pandey, Akhil Amod, Shivang, Kshitij Jaggi, Ruchi Garg,
Abheet Jain, Vinayak Tantia},
journal={arXiv preprint arXiv:2409.17650},
year={2024},
archivePrefix={arXiv},
eprint={2409.17650},
primaryClass={cs.AI cs.CL}
}
|
pandey2024digital
|
arxiv-662232
|
2409.17652
|
FactorSim: Generative Simulation via Factorized Representation
|
<|reference_start|>FactorSim: Generative Simulation via Factorized Representation: Generating simulations to train intelligent agents in game-playing and robotics from natural language input, from user input or task documentation, remains an open-ended challenge. Existing approaches focus on parts of this challenge, such as generating reward functions or task hyperparameters. Unlike previous work, we introduce FACTORSIM that generates full simulations in code from language input that can be used to train agents. Exploiting the structural modularity specific to coded simulations, we propose to use a factored partially observable Markov decision process representation that allows us to reduce context dependence during each step of the generation. For evaluation, we introduce a generative simulation benchmark that assesses the generated simulation code's accuracy and effectiveness in facilitating zero-shot transfers in reinforcement learning settings. We show that FACTORSIM outperforms existing methods in generating simulations regarding prompt alignment (e.g., accuracy), zero-shot transfer abilities, and human evaluation. We also demonstrate its effectiveness in generating robotic tasks.<|reference_end|>
|
arxiv
|
@article{sun2024factorsim:,
title={FactorSim: Generative Simulation via Factorized Representation},
author={Fan-Yun Sun, S. I. Harini, Angela Yi, Yihan Zhou, Alex Zook, Jonathan
Tremblay, Logan Cross, Jiajun Wu, Nick Haber},
journal={arXiv preprint arXiv:2409.17652},
year={2024},
archivePrefix={arXiv},
eprint={2409.17652},
primaryClass={cs.AI cs.RO}
}
|
sun2024factorsim:
|
arxiv-662233
|
2409.17655
|
AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environment
|
<|reference_start|>AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environment: The increasing demand for intelligent assistants in human-populated environments has motivated significant research in autonomous robotic systems. Traditional service robots and virtual assistants, however, struggle with real-world task execution due to their limited capacity for dynamic reasoning and interaction, particularly when human collaboration is required. Recent developments in Large Language Models have opened new avenues for improving these systems, enabling more sophisticated reasoning and natural interaction capabilities. In this paper, we introduce AssistantX, an LLM-powered proactive assistant designed to operate autonomously in a physical office environment. Unlike conventional service robots, AssistantX leverages a novel multi-agent architecture, PPDR4X, which provides advanced inference capabilities and comprehensive collaboration awareness. By effectively bridging the gap between virtual operations and physical interactions, AssistantX demonstrates robust performance in managing complex real-world scenarios. Our evaluation highlights the architecture's effectiveness, showing that AssistantX can respond to clear instructions, actively retrieve supplementary information from memory, and proactively seek collaboration from team members to ensure successful task completion. More details and videos can be found at https://assistantx-agent.github.io/AssistantX/.<|reference_end|>
|
arxiv
|
@article{sun2024assistantx:,
title={AssistantX: An LLM-Powered Proactive Assistant in Collaborative
Human-Populated Environment},
author={Nan Sun, Bo Mao, Yongchang Li, Lumeng Ma, Di Guo, Huaping Liu},
journal={arXiv preprint arXiv:2409.17655},
year={2024},
archivePrefix={arXiv},
eprint={2409.17655},
primaryClass={cs.RO cs.AI cs.MA}
}
|
sun2024assistantx:
|
arxiv-662234
|
2409.17656
|
Prototype based Masked Audio Model for Self-Supervised Learning of Sound Event Detection
|
<|reference_start|>Prototype based Masked Audio Model for Self-Supervised Learning of Sound Event Detection: A significant challenge in sound event detection (SED) is the effective utilization of unlabeled data, given the limited availability of labeled data due to high annotation costs. Semi-supervised algorithms rely on labeled data to learn from unlabeled data, and their performance is constrained by the quality and size of the former. In this paper, we introduce the Prototype based Masked Audio Model (PMAM) algorithm for self-supervised representation learning in SED, to better exploit unlabeled data. Specifically, semantically rich frame-level pseudo labels are constructed from a Gaussian mixture model (GMM) based prototypical distribution modeling. These pseudo labels supervise the learning of a Transformer-based masked audio model, in which binary cross-entropy loss is employed instead of the widely used InfoNCE loss, to provide independent loss contributions from different prototypes, which is important in real scenarios in which multiple labels may apply to unsupervised data frames. A final stage of fine-tuning with just a small amount of labeled data yields a very high-performing SED model. On like-for-like tests using the DESED task, our method achieves a PSDS1 score of 62.5\%, surpassing current state-of-the-art models and demonstrating the superiority of the proposed technique.<|reference_end|>
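A minimal sketch of the prototype-based pseudo-labelling step as we read it: GMM components act as prototypes over frame embeddings, and the per-frame posteriors become multi-label BCE targets. The dimensions, component count, and random stand-ins for the masked model's outputs are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

frames = np.random.randn(1000, 128).astype(np.float32)   # frame embeddings (illustrative)

gmm = GaussianMixture(n_components=16, covariance_type="diag").fit(frames)
posteriors = gmm.predict_proba(frames)                    # (1000, 16) soft frame-level pseudo labels

logits = torch.randn(1000, 16, requires_grad=True)        # stand-in for masked-audio-model outputs
targets = torch.from_numpy(posteriors.astype(np.float32))

# BCE treats each prototype independently, unlike InfoNCE, so several
# prototypes can be "on" for the same frame.
loss = F.binary_cross_entropy_with_logits(logits, targets)
loss.backward()
```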
|
arxiv
|
@article{cai2024prototype,
title={Prototype based Masked Audio Model for Self-Supervised Learning of Sound
Event Detection},
author={Pengfei Cai, Yan Song, Nan Jiang, Qing Gu, Ian McLoughlin},
journal={arXiv preprint arXiv:2409.17656},
year={2024},
archivePrefix={arXiv},
eprint={2409.17656},
primaryClass={cs.SD cs.AI eess.AS}
}
|
cai2024prototype
|
arxiv-662235
|
2409.17658
|
Powers of large matrices on GPU platforms to compute the Roman domination number of cylindrical graphs
|
<|reference_start|>Powers of large matrices on GPU platforms to compute the Roman domination number of cylindrical graphs: The Roman domination in a graph $G$ is a variant of the classical domination, defined by means of a so-called Roman domination function $f\colon V(G)\to \{0,1,2\}$ such that if $f(v)=0$ then the vertex $v$ is adjacent to at least one vertex $w$ with $f(w)=2$. The weight $f(G)$ of a Roman dominating function of $G$ is the sum of the weights of all vertices of $G$, that is, $f(G)=\sum_{u\in V(G)}f(u)$. The Roman domination number $\gamma_R(G)$ is the minimum weight of a Roman dominating function of $G$. In this paper we propose algorithms to compute this parameter that involve the $(\min,+)$ powers of large matrices with high computational requirements, and the GPU (Graphics Processing Unit) allows us to accelerate such operations. Specific routines have been developed to efficiently compute the $(\min,+)$ product on the GPU architecture, taking advantage of its computational power. These algorithms allow us to compute the Roman domination number of cylindrical graphs $P_m\Box C_n$, i.e., the Cartesian product of a path and a cycle, in the cases $m=7,8,9$, $n\geq 3$ and $m\geq 10$, $n\equiv 0\pmod 5$. Moreover, we provide a lower bound for the remaining cases $m\geq 10$, $n\not\equiv 0\pmod 5$.<|reference_end|>
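For readers unfamiliar with the tropical product the paper accelerates, a plain NumPy sketch is below; a real GPU routine would tile and parallelise the row loop, and the matrix contents (transfer matrices over vertex-labelling states) are not shown here.

```python
import numpy as np

def min_plus(A, B):
    """(min,+) product: C[i, j] = min_k (A[i, k] + B[k, j])."""
    n, m = A.shape[0], B.shape[1]
    C = np.empty((n, m))
    for i in range(n):
        # Broadcast row i of A against every column of B, then reduce.
        C[i] = np.min(A[i][:, None] + B, axis=0)
    return C

A = np.array([[0.0, 3.0], [np.inf, 1.0]])
print(min_plus(A, A))   # iterating this product yields (min,+) powers
```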
|
arxiv
|
@article{martínez2024powers,
title={Powers of large matrices on GPU platforms to compute the Roman
domination number of cylindrical graphs},
author={J.A. Mart\'inez, E.M. Garz\'on and M.L. Puertas},
journal={IEEE Access, vol. 9, pp. 29346-29355, 2021},
year={2024},
doi={10.1109/ACCESS.2021.3058738},
archivePrefix={arXiv},
eprint={2409.17658},
primaryClass={math.CO cs.DM}
}
|
martínez2024powers
|
arxiv-662236
|
2409.17659
|
Hierarchical End-to-End Autonomous Driving: Integrating BEV Perception with Deep Reinforcement Learning
|
<|reference_start|>Hierarchical End-to-End Autonomous Driving: Integrating BEV Perception with Deep Reinforcement Learning: End-to-end autonomous driving offers a streamlined alternative to the traditional modular pipeline, integrating perception, prediction, and planning within a single framework. While Deep Reinforcement Learning (DRL) has recently gained traction in this domain, existing approaches often overlook the critical connection between feature extraction of DRL and perception. In this paper, we bridge this gap by mapping the DRL feature extraction network directly to the perception phase, enabling clearer interpretation through semantic segmentation. By leveraging Bird's-Eye-View (BEV) representations, we propose a novel DRL-based end-to-end driving framework that utilizes multi-sensor inputs to construct a unified three-dimensional understanding of the environment. This BEV-based system extracts and translates critical environmental features into high-level abstract states for DRL, facilitating more informed control. Extensive experimental evaluations demonstrate that our approach not only enhances interpretability but also significantly outperforms state-of-the-art methods in autonomous driving control tasks, reducing the collision rate by 20%.<|reference_end|>
|
arxiv
|
@article{lu2024hierarchical,
title={Hierarchical End-to-End Autonomous Driving: Integrating BEV Perception
with Deep Reinforcement Learning},
author={Siyi Lu, Lei He, Shengbo Eben Li, Yugong Luo, Jianqiang Wang, Keqiang
Li},
journal={arXiv preprint arXiv:2409.17659},
year={2024},
archivePrefix={arXiv},
eprint={2409.17659},
primaryClass={cs.AI}
}
|
lu2024hierarchical
|
arxiv-662237
|
2409.17661
|
A Fuzzy-based Approach to Predict Human Interaction by Functional Near-Infrared Spectroscopy
|
<|reference_start|>A Fuzzy-based Approach to Predict Human Interaction by Functional Near-Infrared Spectroscopy: The paper introduces a Fuzzy-based Attention (Fuzzy Attention Layer) mechanism, a novel computational approach to enhance the interpretability and efficacy of neural models in psychological research. The proposed Fuzzy Attention Layer mechanism is integrated as a neural network layer within the Transformer Encoder model to facilitate the analysis of complex psychological phenomena through neural signals, such as those captured by functional Near-Infrared Spectroscopy (fNIRS). By leveraging fuzzy logic, the Fuzzy Attention Layer is capable of learning and identifying interpretable patterns of neural activity. This capability addresses a significant challenge when using Transformers: the lack of transparency in determining which specific brain activities most contribute to particular predictions. Our experimental results, demonstrated on fNIRS data from subjects engaged in social interactions involving handholding, reveal that the Fuzzy Attention Layer not only learns interpretable patterns of neural activity but also enhances model performance. Additionally, the learned patterns provide deeper insights into the neural correlates of interpersonal touch and emotional exchange. The application of our model shows promising potential in deciphering the subtle complexities of human social behaviors, thereby contributing significantly to the fields of social neuroscience and psychological AI.<|reference_end|>
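One plausible reading of such a layer, sketched in PyTorch under our own assumptions (Gaussian membership functions over time steps, rule-wise pooling): the learnable centres and widths are directly inspectable, which is where interpretability claims of this kind usually come from. This is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FuzzyAttention(nn.Module):
    """Hypothetical fuzzy attention: learnable Gaussian membership functions
    score each time step, and the memberships (normalised over time) weight
    the sequence into per-rule summaries."""
    def __init__(self, d_model: int, n_rules: int = 8):
        super().__init__()
        self.centres = nn.Parameter(torch.randn(n_rules, d_model))
        self.log_widths = nn.Parameter(torch.zeros(n_rules, d_model))

    def forward(self, x):                        # x: (batch, time, d_model)
        diff = x.unsqueeze(2) - self.centres     # (batch, time, rules, d)
        widths = self.log_widths.exp()
        member = torch.exp(-((diff / widths) ** 2).mean(-1))  # (batch, time, rules)
        weights = member.softmax(dim=1)          # normalise over time
        return torch.einsum("btr,btd->brd", weights, x)       # rule-wise summaries

summaries = FuzzyAttention(32)(torch.randn(2, 50, 32))        # (2, 8, 32)
```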
|
arxiv
|
@article{jiang2024a,
title={A Fuzzy-based Approach to Predict Human Interaction by Functional
Near-Infrared Spectroscopy},
author={Xiaowei Jiang, Liang Ou, Yanan Chen, Na Ao, Yu-Cheng Chang, Thomas Do,
Chin-Teng Lin},
journal={arXiv preprint arXiv:2409.17661},
year={2024},
archivePrefix={arXiv},
eprint={2409.17661},
primaryClass={cs.AI q-bio.NC}
}
|
jiang2024a
|
arxiv-662238
|
2409.17663
|
Explanation Bottleneck Models
|
<|reference_start|>Explanation Bottleneck Models: Recent concept-based interpretable models have succeeded in providing meaningful explanations with pre-defined concept sets. However, the dependency on pre-defined concepts restricts their application because of the limited number of concepts available for explanations. This paper proposes explanation bottleneck models (XBMs), a novel type of interpretable deep neural network. XBMs generate a text explanation from the input without pre-defined concepts and then make the final task prediction based on the generated explanation by leveraging pre-trained vision-language encoder-decoder models. To achieve both the target task performance and the explanation quality, we train XBMs through the target task loss with a regularization penalizing the explanation decoder via distillation from the frozen pre-trained decoder. Our experiments, including a comparison to state-of-the-art concept bottleneck models, confirm that XBMs provide accurate and fluent natural language explanations without pre-defined concept sets. Code will be available at https://github.com/yshinya6/xbm/.<|reference_end|>
|
arxiv
|
@article{yamaguchi2024explanation,
title={Explanation Bottleneck Models},
author={Shin'ya Yamaguchi and Kosuke Nishida},
journal={arXiv preprint arXiv:2409.17663},
year={2024},
archivePrefix={arXiv},
eprint={2409.17663},
primaryClass={cs.AI cs.CV cs.LG}
}
|
yamaguchi2024explanation
|
arxiv-662239
|
2409.17664
|
Comodule Representations of Second-Order Functionals
|
<|reference_start|>Comodule Representations of Second-Order Functionals: We develop and investigate a general theory of representations of second-order functionals, based on a notion of a right comodule for a monad on the category of containers. We show how the notion of comodule representability naturally subsumes classic representations of continuous functionals with well-founded trees. We find other kinds of representations by varying the monad, the comodule, and in some cases the underlying category of containers. Examples include uniformly continuous or finitely supported functionals, functionals querying their arguments precisely once, or at most once, functionals interacting with an ambient environment through computational effects, as well as functionals trivially representing themselves. Many of these rely on our construction of a monad on containers from a monad on shapes and a weak Mendler-style monad algebra on the universe for positions. We show that comodule representability on the category of propositional containers, which have positions valued in a universe of propositions, is closely related to instance reducibility in constructive mathematics, and through it to Weihrauch reducibility in computability theory.<|reference_end|>
|
arxiv
|
@article{ahman2024comodule,
title={Comodule Representations of Second-Order Functionals},
author={Danel Ahman and Andrej Bauer},
journal={arXiv preprint arXiv:2409.17664},
year={2024},
archivePrefix={arXiv},
eprint={2409.17664},
primaryClass={cs.LO math.CT math.LO}
}
|
ahman2024comodule
|
arxiv-662240
|
2409.17665
|
A Novel Improved Beluga Whale Optimization Algorithm for Solving Localization Problem in Swarm Robotic Systems
|
<|reference_start|>A Novel Improved Beluga Whale Optimization Algorithm for Solving Localization Problem in Swarm Robotic Systems: In Swarm Robotic Systems (SRSs), only a few robots are equipped with Global Positioning System (GPS) devices, known as anchors. A challenge lies in inferring the positions of the other, unknown robots based on the positions of anchors. Existing solutions estimate these positions using distance measurements between unknown robots and anchors. Building on existing solutions, this study proposes a novel meta-heuristic algorithm - the Improved Beluga Whale Optimization Algorithm (IBWO) - to address the localization problem of SRSs, focusing on enhancing the accuracy of localization results. Simulation results demonstrate the effectiveness of the proposed method. Specifically, we test the localization accuracy of robots under different proportions of anchors, different communication radii of robots, and different total numbers of robots. Compared to the traditional multilateration method and four other localization methods based on meta-heuristic algorithms, the localization accuracy of this method is consistently superior.<|reference_end|>
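The traditional multilateration baseline mentioned above can be written as a linearised least-squares solve; the sketch below uses made-up anchor positions and noisy distances. The IBWO meta-heuristic instead searches for positions minimising a ranging-error objective.

```python
import numpy as np

def multilaterate(anchors, dists):
    """Linearised least-squares position estimate from anchor distances.
    Subtracting the first anchor's range equation removes the quadratic
    term ||x||^2, leaving a linear system A x = b."""
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.1, 4)
print(multilaterate(anchors, dists))   # approximately [3, 4]
```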
|
arxiv
|
@article{teng2024a,
title={A Novel Improved Beluga Whale Optimization Algorithm for Solving
Localization Problem in Swarm Robotic Systems},
author={Zuhao Teng and Qian Dong},
journal={arXiv preprint arXiv:2409.17665},
year={2024},
archivePrefix={arXiv},
eprint={2409.17665},
primaryClass={cs.NI}
}
|
teng2024a
|
arxiv-662241
|
2409.17667
|
SLO-Aware Task Offloading within Collaborative Vehicle Platoons
|
<|reference_start|>SLO-Aware Task Offloading within Collaborative Vehicle Platoons: In the context of autonomous vehicles (AVs), offloading is essential for guaranteeing the execution of perception tasks, e.g., mobile mapping or object detection. While existing work focused extensively on minimizing inter-vehicle networking latency through offloading, other objectives become relevant in the case of vehicle platoons, e.g., energy efficiency or data quality for heavy-duty or public transport. Therefore, we aim to enforce these Service Level Objectives (SLOs) through intelligent task offloading within AV platoons. We present a collaborative framework for handling and offloading services in a purely Vehicle-to-Vehicle (V2V) approach based on Bayesian Networks (BNs). Each service aggregates local observations into a platoon-wide understanding of how to ensure SLOs for heterogeneous vehicle types. With the resulting models, services can proactively decide to offload if this promises to improve global SLO fulfillment. We evaluate the approach in a real-case setting, where vehicles in a platoon continuously (i.e., every 500 ms) interpret the SLOs of three actual perception services. Our probabilistic, predictive method shows promising results in handling large AV platoons; within seconds, it detects and resolves SLO violations through offloading.<|reference_end|>
|
arxiv
|
@article{sedlak2024slo-aware,
title={SLO-Aware Task Offloading within Collaborative Vehicle Platoons},
author={Boris Sedlak, Andrea Morichetta, Yuhao Wang, Yang Fei, Liang Wang,
Schahram Dustdar, and Xiaobo Qu},
journal={arXiv preprint arXiv:2409.17667},
year={2024},
archivePrefix={arXiv},
eprint={2409.17667},
primaryClass={cs.DC}
}
|
sedlak2024slo-aware
|
arxiv-662242
|
2409.17668
|
A Database Engineered System for Big Data Analytics on Tornado Climatology
|
<|reference_start|>A Database Engineered System for Big Data Analytics on Tornado Climatology: Recognizing the challenges with current tornado warning systems, we investigate alternative approaches. In particular, we present a database engineered system that integrates information from heterogeneous rich data sources, including climatology data for tornadoes and data just before a tornado warning. The system aids in predicting tornado occurrences by identifying the data points that form the basis of a tornado warning. Evaluation on US data highlights the advantages of using a classification forecasting recurrent neural network (RNN) model. The results highlight the effectiveness of our database engineered system for big data analytics on tornado climatology, especially in accurately predicting tornado lead-time, magnitude, and location, contributing to the development of sustainable cities.<|reference_end|>
|
arxiv
|
@article{bian2024a,
title={A Database Engineered System for Big Data Analytics on Tornado
Climatology},
author={Fengfan Bian, Carson K. Leung, Piers Grenier, Harry Pu, Samuel Ning,
Alfredo Cuzzocrea},
journal={arXiv preprint arXiv:2409.17668},
year={2024},
archivePrefix={arXiv},
eprint={2409.17668},
primaryClass={cs.DB}
}
|
bian2024a
|
arxiv-662243
|
2409.17669
|
Impact of opinion formation phenomena in epidemic dynamics: kinetic modeling on networks
|
<|reference_start|>Impact of opinion formation phenomena in epidemic dynamics: kinetic modeling on networks: After the recent COVID-19 outbreaks, it became increasingly evident that individuals' thoughts and beliefs can have a strong impact on disease transmission. It becomes therefore important to understand how information and opinions on protective measures evolve during epidemics. To this end, incorporating the impact of social media is essential to take into account the hierarchical structure of these platforms. In this context, we present a novel approach to take into account the interplay between infectious disease dynamics and socially-structured opinion dynamics. Our work extends a conventional compartmental framework by including behavioral attitudes in shaping public opinion and promoting the adoption of protective measures under the influence of different degrees of connectivity. The proposed approach is capable of reproducing the emergence of epidemic waves. Specifically, it provides a clear link between the social influence of highly connected individuals and the epidemic dynamics. Through a heterogeneous set of numerical tests, we show how this comprehensive framework offers a more nuanced understanding of epidemic dynamics in the context of modern information dissemination and social behavior.<|reference_end|>
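As a toy illustration of the opinion-epidemic feedback (a deliberately crude compartmental caricature, not the paper's kinetic model on networks), one can couple an SIR system to a scalar protective-opinion variable w; all rates below are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, beta0=0.3, gamma=0.1, kappa=2.0, relax=0.05):
    S, I, R, w = y
    beta = beta0 * (1.0 - w)               # protective opinion suppresses contacts
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    dw = kappa * I * (1 - w) - relax * w   # opinion reacts to prevalence, then fades
    return [dS, dI, dR, dw]

sol = solve_ivp(rhs, (0, 300), [0.99, 0.01, 0.0, 0.0], max_step=0.5)
print(f"peak prevalence: {sol.y[1].max():.3f}")
```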
|
arxiv
|
@article{albi2024impact,
title={Impact of opinion formation phenomena in epidemic dynamics: kinetic
modeling on networks},
author={Giacomo Albi and Elisa Calzola and Giacomo Dimarco and Mattia Zanella},
journal={arXiv preprint arXiv:2409.17669},
year={2024},
archivePrefix={arXiv},
eprint={2409.17669},
primaryClass={physics.soc-ph cs.NA math.NA}
}
|
albi2024impact
|
arxiv-662244
|
2409.17670
|
A Comprehensive Review of TLSNotary Protocol
|
<|reference_start|>A Comprehensive Review of TLSNotary Protocol: The Transport Layer Security (TLS) protocol is a cryptographic protocol designed to secure communication over the internet. TLS has become fundamental to secure communication, most commonly used for securing web browsing sessions. In this work, we investigate the TLSNotary protocol, which aims to enable the Client to obtain proof of provenance for data from a TLS session, while retaining as many of the TLS security properties as possible. To achieve such proofs without any Server-side adjustments or permissions, the power of secure multi-party computation (MPC) together with zero-knowledge proofs is used to extend the standard TLS protocol. To make the complicated landscape of MPC as comprehensible as possible, we first introduce the cryptographic primitives required to understand the TLSNotary protocol and walk through the standard TLS protocol. Finally, we look at the TLSNotary protocol in detail.<|reference_end|>
|
arxiv
|
@article{kalka2024a,
title={A Comprehensive Review of TLSNotary Protocol},
author={Maciej Kalka and Marek Kirejczyk},
journal={arXiv preprint arXiv:2409.17670},
year={2024},
archivePrefix={arXiv},
eprint={2409.17670},
primaryClass={cs.CR}
}
|
kalka2024a
|
arxiv-662245
|
2409.17671
|
Leveraging Anthropometric Measurements to Improve Human Mesh Estimation and Ensure Consistent Body Shapes
|
<|reference_start|>Leveraging Anthropometric Measurements to Improve Human Mesh Estimation and Ensure Consistent Body Shapes: The basic body shape of a person does not change within a single video. However, most SOTA human mesh estimation (HME) models output a slightly different body shape for each video frame, which results in inconsistent body shapes for the same person. In contrast, we leverage anthropometric measurements of the kind tailors have been obtaining from humans for centuries. We create a model called A2B that converts such anthropometric measurements to body shape parameters of human mesh models. Moreover, we find that finetuned SOTA 3D human pose estimation (HPE) models outperform HME models regarding the precision of the estimated keypoints. We show that applying inverse kinematics (IK) to the results of such a 3D HPE model and combining the resulting body pose with the A2B body shape leads to superior and consistent human meshes for challenging datasets like ASPset or fit3D, where we can lower the MPJPE by over 30 mm compared to SOTA HME models. Further, replacing HME models' estimates of the body shape parameters with A2B model results not only increases the performance of these HME models, but also leads to consistent body shapes.<|reference_end|>
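A minimal sketch of the measurements-to-shape idea: a small regressor from a fixed vector of anthropometric measurements to the shape coefficients of a parametric mesh model such as SMPL. The measurement count, layer sizes, and beta count are our assumptions, not the paper's A2B architecture.

```python
import torch
import torch.nn as nn

class A2B(nn.Module):
    """Hypothetical measurements-to-betas regressor: 36 measurements in,
    10 shape coefficients out (both counts are illustrative)."""
    def __init__(self, n_meas: int = 36, n_betas: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_meas, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_betas),
        )

    def forward(self, measurements):        # (batch, n_meas)
        return self.net(measurements)       # (batch, n_betas)

betas = A2B()(torch.randn(4, 36))           # one fixed shape per person/video
```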
|
arxiv
|
@article{ludwig2024leveraging,
title={Leveraging Anthropometric Measurements to Improve Human Mesh Estimation
and Ensure Consistent Body Shapes},
author={Katja Ludwig, Julian Lorenz, Daniel Kienzle, Tuan Bui, Rainer Lienhart},
journal={arXiv preprint arXiv:2409.17671},
year={2024},
archivePrefix={arXiv},
eprint={2409.17671},
primaryClass={cs.CV}
}
|
ludwig2024leveraging
|
arxiv-662246
|
2409.17672
|
Semantic model for the description of energy data in the Module Type Package
|
<|reference_start|>Semantic model for the description of energy data in the Module Type Package: Modular production systems that employ the Module Type Package (MTP) to describe module interfaces can, at present, only communicate energy data through proprietary solutions. Due to this limitation, users face additional effort when calculating energy KPIs for modules or determining the energy efficiency of modules. To address this issue, we present a model that facilitates energy data to be described semantically and uniformly in the MTP on the basis of an industrial standard (OPC 34100). MTPs incorporating this model can transmit semantically consistent energy data from modules to the process control system, making the data available for further applications, such as monitoring or optimization.<|reference_end|>
|
arxiv
|
@article{reiche2024semantic,
title={Semantic model for the description of energy data in the Module Type
Package},
author={Leif-Thore Reiche, Felix Gehlhoff, Alexander Fay},
journal={arXiv preprint arXiv:2409.17672},
year={2024},
archivePrefix={arXiv},
eprint={2409.17672},
primaryClass={eess.SY cs.SY}
}
|
reiche2024semantic
|
arxiv-662247
|
2409.17673
|
Cross-lingual Human-Preference Alignment for Neural Machine Translation with Direct Quality Optimization
|
<|reference_start|>Cross-lingual Human-Preference Alignment for Neural Machine Translation with Direct Quality Optimization: Reinforcement Learning from Human Feedback (RLHF) and derivative techniques like Direct Preference Optimization (DPO) are task-alignment algorithms used to repurpose general, foundational models for specific tasks. We show that applying task-alignment to neural machine translation (NMT) addresses an existing task--data mismatch in NMT, leading to improvements across all languages of a multilingual model, even when task-alignment is only applied to a subset of those languages. We do so by introducing Direct Quality Optimization (DQO), a variant of DPO leveraging a pre-trained translation quality estimation model as a proxy for human preferences, and verify the improvements with both automatic metrics and human evaluation.<|reference_end|>
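Our reading of the objective, sketched as a DPO-style loss in which the "preferred" and "rejected" translations are ranked by a quality-estimation model rather than by human annotators; the exact DQO formulation is in the paper, and beta here is an arbitrary choice.

```python
import torch
import torch.nn.functional as F

def dqo_loss(logp_pref, logp_rej, ref_logp_pref, ref_logp_rej, beta=0.1):
    """DPO-style preference loss. Inputs are summed token log-probs of each
    hypothesis under the policy and under the frozen reference model; the
    preferred/rejected split comes from a QE model's scores."""
    margin = beta * ((logp_pref - ref_logp_pref) - (logp_rej - ref_logp_rej))
    return -F.logsigmoid(margin).mean()

loss = dqo_loss(torch.tensor([-12.3]), torch.tensor([-15.9]),
                torch.tensor([-13.0]), torch.tensor([-14.8]))
```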
|
arxiv
|
@article{uhlig2024cross-lingual,
title={Cross-lingual Human-Preference Alignment for Neural Machine Translation
with Direct Quality Optimization},
author={Kaden Uhlig, Joern Wuebker, Raphael Reinauer, John DeNero},
journal={arXiv preprint arXiv:2409.17673},
year={2024},
archivePrefix={arXiv},
eprint={2409.17673},
primaryClass={cs.CL}
}
|
uhlig2024cross-lingual
|
arxiv-662248
|
2409.17674
|
Self-Supervised Learning of Deviation in Latent Representation for Co-speech Gesture Video Generation
|
<|reference_start|>Self-Supervised Learning of Deviation in Latent Representation for Co-speech Gesture Video Generation: Gestures are pivotal in enhancing co-speech communication. While recent works have mostly focused on point-level motion transformation or fully supervised motion representations through data-driven approaches, we explore the representation of gestures in co-speech, with a focus on self-supervised representation and pixel-level motion deviation, utilizing a diffusion model which incorporates latent motion features. Our approach leverages self-supervised deviation in latent representation to facilitate hand gesture generation, which is crucial for generating realistic gesture videos. Results of our first experiment demonstrate that our method enhances the quality of generated videos, with improvements of 2.7% to 4.5% for FGD, DIV, and FVD, 8.1% for PSNR, and 2.5% for SSIM over the current state-of-the-art methods.<|reference_end|>
|
arxiv
|
@article{yang2024self-supervised,
title={Self-Supervised Learning of Deviation in Latent Representation for
Co-speech Gesture Video Generation},
author={Huan Yang, Jiahui Chen, Chaofan Ding, Runhua Shi, Siyu Xiong, Qingqi
Hong, Xiaoqi Mo, Xinhan Di},
journal={arXiv preprint arXiv:2409.17674},
year={2024},
archivePrefix={arXiv},
eprint={2409.17674},
primaryClass={cs.CV}
}
|
yang2024self-supervised
|
arxiv-662249
|
2409.17675
|
EM-Net: Efficient Channel and Frequency Learning with Mamba for 3D Medical Image Segmentation
|
<|reference_start|>EM-Net: Efficient Channel and Frequency Learning with Mamba for 3D Medical Image Segmentation: Convolutional neural networks have primarily led 3D medical image segmentation but may be limited by small receptive fields. Transformer models excel in capturing global relationships through self-attention but are challenged by high computational costs at high resolutions. Recently, Mamba, a state space model, has emerged as an effective approach for sequential modeling. Inspired by its success, we introduce a novel Mamba-based 3D medical image segmentation model called EM-Net. It not only efficiently captures attentive interaction between regions by integrating and selecting channels, but also effectively utilizes frequency domain to harmonize the learning of features across varying scales, while accelerating training speed. Comprehensive experiments on two challenging multi-organ datasets with other state-of-the-art (SOTA) algorithms show that our method exhibits better segmentation accuracy while requiring nearly half the parameter size of SOTA models and 2x faster training speed.<|reference_end|>
|
arxiv
|
@article{chang2024em-net:,
title={EM-Net: Efficient Channel and Frequency Learning with Mamba for 3D
Medical Image Segmentation},
author={Ao Chang, Jiajun Zeng, Ruobing Huang, and Dong Ni},
journal={arXiv preprint arXiv:2409.17675},
year={2024},
archivePrefix={arXiv},
eprint={2409.17675},
primaryClass={cs.CV}
}
|
chang2024em-net:
|
arxiv-662250
|
2409.17677
|
Optimal Memorization Capacity of Transformers
|
<|reference_start|>Optimal Memorization Capacity of Transformers: Recent research in the field of machine learning has increasingly focused on the memorization capacity of Transformers, but how efficient they are is not yet well understood. We demonstrate that Transformers can memorize labels with $\tilde{O}(\sqrt{N})$ parameters in a next-token prediction setting for $N$ input sequences of length $n$, which is proved to be optimal up to logarithmic factors. This indicates that Transformers can efficiently perform memorization with little influence from the input length $n$ owing to the benefit of parameter sharing. We also analyze the memorization capacity in the sequence-to-sequence setting, and find that $\tilde{O}(\sqrt{nN})$ parameters are not only sufficient, but also necessary at least for Transformers with hardmax. These results suggest that while self-attention mechanisms can efficiently identify input sequences, the feed-forward network becomes a bottleneck when associating a label to each token.<|reference_end|>
|
arxiv
|
@article{kajitsuka2024optimal,
title={Optimal Memorization Capacity of Transformers},
author={Tokio Kajitsuka, Issei Sato},
journal={arXiv preprint arXiv:2409.17677},
year={2024},
archivePrefix={arXiv},
eprint={2409.17677},
primaryClass={cs.LG}
}
|
kajitsuka2024optimal
|
arxiv-662251
|
2409.17678
|
Modeling the Popularity of Events on Web by Sparsity and Mutual-Excitation Guided Graph Neural Network
|
<|reference_start|>Modeling the Popularity of Events on Web by Sparsity and Mutual-Excitation Guided Graph Neural Network: The content of a webpage that describes or posts an event in cyberspace inevitably reflects the viewpoints, values, and trends of physical society. Mapping an event on the web to a popularity score plays a pivotal role in sensing social trends from cyberspace. However, the complex semantic correspondence between texts and images, as well as the implicit text-image-popularity mapping mechanics, poses a significant challenge to this non-trivial task. In this paper, we address this problem from the viewpoint of understanding the interpretable mapping mechanics. Concretely, we organize the keywords from different events into a unified graph. The unified graph facilitates modeling the popularity of events via two-level mappings, i.e., self-excitation and mutual excitation. Self-excitation assumes that each keyword contributes to popularity on its own, while mutual excitation models how two keywords excite each other to determine the popularity of an event. Specifically, we use a Graph Neural Network (GNN) as the backbone to combine the self-excitation, the mutual excitation, and the context of images into a sparse and deep factor model. Besides, to the best of our knowledge, we release a challenging web event dataset for the popularity prediction task. The experimental results on three public datasets demonstrate that our method achieves significant improvements and outperforms the state-of-the-art methods. Dataset is publicly available at: https://github.com/pangjunbiao/Hot-events-dataset.<|reference_end|>
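A toy rendering of the two-level mapping described above, with random stand-ins for the learned excitation weights: an event's popularity is scored as the sum of per-keyword self-excitations plus pairwise mutual excitations over its keyword set.

```python
import numpy as np

n_kw = 100
self_exc = np.random.rand(n_kw)               # alpha_i: keyword on its own
mutual_exc = np.random.rand(n_kw, n_kw) * 0.1
mutual_exc = (mutual_exc + mutual_exc.T) / 2  # symmetric pairwise effect

def popularity(event_keywords):
    """Score an event from its keyword indices on the unified graph."""
    idx = np.array(event_keywords)
    pair = mutual_exc[np.ix_(idx, idx)]
    return self_exc[idx].sum() + np.triu(pair, k=1).sum()

print(popularity([3, 17, 42]))
```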
|
arxiv
|
@article{deng2024modeling,
title={Modeling the Popularity of Events on Web by Sparsity and
Mutual-Excitation Guided Graph Neural Network},
author={Jiaxin Deng, Linlin Jia, Junbiao Pang, Qingming Huang},
journal={arXiv preprint arXiv:2409.17678},
year={2024},
archivePrefix={arXiv},
eprint={2409.17678},
primaryClass={cs.MM}
}
|
deng2024modeling
|
arxiv-662252
|
2409.17680
|
Event-based Stereo Depth Estimation: A Survey
|
<|reference_start|>Event-based Stereo Depth Estimation: A Survey: Stereopsis has widespread appeal in robotics as it is the predominant way by which living beings perceive depth to navigate our 3D world. Event cameras are novel bio-inspired sensors that detect per-pixel brightness changes asynchronously, with very high temporal resolution and high dynamic range, enabling machine perception in high-speed motion and broad illumination conditions. The high temporal precision also benefits stereo matching, making disparity (depth) estimation a popular research area for event cameras ever since their inception. Over the last 30 years, the field has evolved rapidly, from low-latency, low-power circuit design to current deep learning (DL) approaches driven by the computer vision community. The bibliography is vast and difficult to navigate for non-experts due to its highly interdisciplinary nature. Past surveys have addressed distinct aspects of this topic, in the context of applications or focusing only on a specific class of techniques, but have overlooked stereo datasets. This survey provides a comprehensive overview, covering both instantaneous stereo and long-term methods suitable for simultaneous localization and mapping (SLAM), along with theoretical and empirical comparisons. It is the first to extensively review DL methods as well as stereo datasets, even providing practical suggestions for creating new benchmarks to advance the field. The main advantages and challenges faced by event-based stereo depth estimation are also discussed. Despite significant progress, challenges remain in achieving optimal performance in not only accuracy but also efficiency, a cornerstone of event-based computing. We identify several gaps and propose future research directions. We hope this survey inspires future research in this area, by serving as an accessible entry point for newcomers, as well as a practical guide for seasoned researchers in the community.<|reference_end|>
|
arxiv
|
@article{ghosh2024event-based,
title={Event-based Stereo Depth Estimation: A Survey},
author={Suman Ghosh and Guillermo Gallego},
journal={arXiv preprint arXiv:2409.17680},
year={2024},
archivePrefix={arXiv},
eprint={2409.17680},
primaryClass={cs.CV cs.RO}
}
|
ghosh2024event-based
|
arxiv-662253
|
2409.17681
|
Computation Pre-Offloading for MEC-Enabled Vehicular Networks via Trajectory Prediction
|
<|reference_start|>Computation Pre-Offloading for MEC-Enabled Vehicular Networks via Trajectory Prediction: Task offloading is of paramount importance to efficiently orchestrate vehicular wireless networks, necessitating the availability of information regarding the current network status and computational resources. However, due to the mobility of the vehicles and the limited computational resources for performing task offloading in near-real-time, such schemes may require high latency, thus, become even infeasible. To address this issue, in this paper, we present a Trajectory Prediction-based Pre-offloading Decision (TPPD) algorithm for analyzing the historical trajectories of vehicles to predict their future coordinates, thereby allowing for computational resource allocation in advance. We first utilize the Long Short-Term Memory (LSTM) network model to predict each vehicle's movement trajectory. Then, based on the task requirements and the predicted trajectories, we devise a dynamic resource allocation algorithm using a Double Deep Q-Network (DDQN) that enables the edge server to minimize task processing delay, while ensuring effective utilization of the available computational resources. Our simulation results verify the effectiveness of the proposed approach, showcasing that, as compared with traditional real-time task offloading strategies, the proposed TPPD algorithm significantly reduces task processing delay while improving resource utilization.<|reference_end|>
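A minimal sketch of the trajectory-prediction front end under our assumptions (a window of past 2-D positions in, the next position out); the DDQN resource-allocation stage that consumes these predictions is not shown, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class TrajPredictor(nn.Module):
    """LSTM that consumes a window of past (x, y) positions and regresses
    the next coordinate, which the offloading policy can then use to
    pre-allocate edge resources."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, past_xy):                 # (batch, window, 2)
        out, _ = self.lstm(past_xy)
        return self.head(out[:, -1])            # (batch, 2): next position

pred = TrajPredictor()(torch.randn(8, 20, 2))   # 8 vehicles, 20-step history
```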
|
arxiv
|
@article{zhang2024computation,
title={Computation Pre-Offloading for MEC-Enabled Vehicular Networks via
Trajectory Prediction},
author={Ting Zhang, Bo Yang, Zhiwen Yu, Xuelin Cao, George C. Alexandropoulos,
Yan Zhang, and Chau Yuen},
journal={arXiv preprint arXiv:2409.17681},
year={2024},
archivePrefix={arXiv},
eprint={2409.17681},
primaryClass={cs.NI cs.CY}
}
|
zhang2024computation
|
arxiv-662254
|
2409.17682
|
Dark Miner: Defend against unsafe generation for text-to-image diffusion models
|
<|reference_start|>Dark Miner: Defend against unsafe generation for text-to-image diffusion models: Text-to-image diffusion models have been shown to produce unsafe content, such as violent, sexual, and shocking images, due to unfiltered large-scale training data, necessitating the erasure of unsafe concepts. Most existing methods focus on modifying the generation probabilities conditioned on texts containing unsafe descriptions. However, they fail to guarantee safe generation for unseen texts in the training phase, especially for prompts from adversarial attacks. In this paper, we re-analyze the erasure task and point out that existing methods cannot guarantee the minimization of the total probability of unsafe generation. To tackle this problem, we propose Dark Miner. It entails a recurring three-stage process comprising mining, verifying, and circumventing. It greedily mines embeddings with maximum generation probabilities of unsafe concepts and reduces unsafe generation more effectively. In the experiments, we evaluate its performance on two inappropriate concepts, two objects, and two styles. Compared with 6 previous state-of-the-art methods, our method achieves better erasure and defense results in most cases, especially under 4 state-of-the-art attacks, while preserving the model's native generation capability. Our code will be available on GitHub.<|reference_end|>
|
arxiv
|
@article{meng2024dark,
title={Dark Miner: Defend against unsafe generation for text-to-image diffusion
models},
author={Zheling Meng, Bo Peng, Xiaochuan Jin, Yue Jiang, Jing Dong, Wei Wang,
Tieniu Tan},
journal={arXiv preprint arXiv:2409.17682},
year={2024},
archivePrefix={arXiv},
eprint={2409.17682},
primaryClass={cs.CV}
}
|
meng2024dark
|
arxiv-662255
|
2409.17683
|
Zero- and Few-shot Named Entity Recognition and Text Expansion in Medication Prescriptions using ChatGPT
|
<|reference_start|>Zero- and Few-shot Named Entity Recognition and Text Expansion in Medication Prescriptions using ChatGPT: Introduction: Medication prescriptions are often in free text and include a mix of two languages, local brand names, and a wide range of idiosyncratic formats and abbreviations. Large language models (LLMs) have shown promising ability to generate text in response to input prompts. We use ChatGPT 3.5 to automatically structure and expand medication statements in discharge summaries and thus make them easier to interpret for people and machines. Methods: Named-entity Recognition (NER) and Text Expansion (EX) are used in a zero- and few-shot setting with different prompt strategies. 100 medication statements were manually annotated and curated. NER performance was measured by using strict and partial matching. For the task EX, two experts interpreted the results by assessing semantic equivalence between original and expanded statements. The model performance was measured by precision, recall, and F1 score. Results: For NER, the best-performing prompt reached an average F1 score of 0.94 in the test set. For EX, the few-shot prompt showed superior performance among other prompts, with an average F1 score of 0.87. Conclusion: Our study demonstrates good performance for NER and EX tasks in free-text medication statements using ChatGPT. Compared to a zero-shot baseline, a few-shot approach prevented the system from hallucinating, which would be unacceptable when processing safety-relevant medication data.<|reference_end|>
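An illustrative few-shot prompt structure for the NER task; the wording, entity schema, and example prescriptions below are our own inventions, not the paper's curated prompts.

```python
import json

example_in = "Amoxicillin 500 mg 1x3 oral pc for 7 days"
example_out = {"drug": "Amoxicillin", "strength": "500 mg", "frequency": "1x3",
               "route": "oral", "timing": "pc", "duration": "7 days"}

messages = [
    {"role": "system",
     "content": "Extract medication entities from free-text prescriptions. "
                "Return JSON with keys: drug, strength, frequency, route, "
                "timing, duration. Use null for absent fields."},
    # One worked demonstration turns the zero-shot prompt into a few-shot one.
    {"role": "user", "content": example_in},
    {"role": "assistant", "content": json.dumps(example_out)},
    {"role": "user", "content": "Paracetamol 1 g q6h prn, max 4 g/day"},
]
# `messages` would then be sent to the chat model under test (ChatGPT 3.5
# in the paper).
```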
|
arxiv
|
@article{isaradech2024zero-,
title={Zero- and Few-shot Named Entity Recognition and Text Expansion in
Medication Prescriptions using ChatGPT},
author={Natthanaphop Isaradech, Andrea Riedel, Wachiranun Sirikul, Markus
Kreuzthaler, Stefan Schulz},
journal={arXiv preprint arXiv:2409.17683},
year={2024},
archivePrefix={arXiv},
eprint={2409.17683},
primaryClass={cs.CL cs.AI}
}
|
isaradech2024zero-
|
arxiv-662256
|
2409.17684
|
Preserving logical and functional dependencies in synthetic tabular data
|
<|reference_start|>Preserving logical and functional dependencies in synthetic tabular data: Dependencies among attributes are a common aspect of tabular data. However, whether existing tabular data generation algorithms preserve these dependencies while generating synthetic data is yet to be explored. In addition to the existing notion of functional dependencies, we introduce the notion of logical dependencies among the attributes in this article. Moreover, we provide a measure to quantify logical dependencies among attributes in tabular data. Utilizing this measure, we compare several state-of-the-art synthetic data generation algorithms and test their capability to preserve logical and functional dependencies on several publicly available datasets. We demonstrate that currently available synthetic tabular data generation algorithms do not fully preserve functional dependencies when they generate synthetic datasets. In addition, we show that some tabular synthetic data generation models can preserve inter-attribute logical dependencies. Our review and comparison of the state-of-the-art reveal research needs and opportunities to develop task-specific synthetic tabular data generation models.<|reference_end|>
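Functional dependencies are mechanical to check, which is what makes the preservation test well defined; a minimal pandas sketch with toy tables follows. The paper's logical-dependency measure is its own contribution and is not reproduced here.

```python
import pandas as pd

def holds_fd(df: pd.DataFrame, lhs: list[str], rhs: str) -> bool:
    """True iff the functional dependency lhs -> rhs holds in df, i.e. every
    lhs value combination maps to exactly one rhs value. Running this on the
    real and the synthetic table shows whether a generator preserved the FD."""
    return bool((df.groupby(lhs)[rhs].nunique(dropna=False) <= 1).all())

real = pd.DataFrame({"zip": [10, 10, 20], "city": ["A", "A", "B"]})
synth = pd.DataFrame({"zip": [10, 10, 20], "city": ["A", "B", "B"]})
print(holds_fd(real, ["zip"], "city"), holds_fd(synth, ["zip"], "city"))  # True False
```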
|
arxiv
|
@article{umesh2024preserving,
title={Preserving logical and functional dependencies in synthetic tabular data},
author={Chaithra Umesh, Kristian Schultz, Manjunath Mahendra, Saptarshi Bej,
Olaf Wolkenhauer},
journal={arXiv preprint arXiv:2409.17684},
year={2024},
archivePrefix={arXiv},
eprint={2409.17684},
primaryClass={cs.LG cs.AI}
}
|
umesh2024preserving
|
arxiv-662257
|
2409.17685
|
Artificial Data Point Generation in Clustered Latent Space for Small Medical Datasets
|
<|reference_start|>Artificial Data Point Generation in Clustered Latent Space for Small Medical Datasets: One of the growing trends in machine learning is the use of data generation techniques, since the performance of machine learning models is dependent on the quantity of the training dataset. However, in many medical applications, collecting large datasets is challenging due to resource constraints, which leads to overfitting and poor generalization. This paper introduces a novel method, Artificial Data Point Generation in Clustered Latent Space (AGCL), designed to enhance classification performance on small medical datasets through synthetic data generation. The AGCL framework involves feature extraction, K-means clustering, cluster evaluation based on a class separation metric, and the generation of synthetic data points from clusters with distinct class representations. This method was applied to Parkinson's disease screening, utilizing facial expression data, and evaluated across multiple machine learning classifiers. Experimental results demonstrate that AGCL significantly improves classification accuracy compared to baseline, GN and kNNMTD. AGCL achieved the highest overall test accuracy of 83.33% and cross-validation accuracy of 90.90% in majority voting over different emotions, confirming its effectiveness in augmenting small datasets.<|reference_end|>
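A sketch of the AGCL pipeline as the abstract describes it: cluster (latent) features, keep clusters dominated by a single class, and sample synthetic points from a Gaussian fitted to each kept cluster. The purity threshold, sample counts, and diagonal-Gaussian sampling are our illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def agcl_augment(X, y, n_clusters=6, n_new=50, purity=0.9, seed=0):
    """Generate synthetic points from class-pure clusters in feature space."""
    rng = np.random.default_rng(seed)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    X_new, y_new = [], []
    for c in range(n_clusters):
        members = km.labels_ == c
        classes, counts = np.unique(y[members], return_counts=True)
        if counts.max() / counts.sum() < purity:
            continue                             # skip mixed-class clusters
        mu = X[members].mean(axis=0)
        sigma = X[members].std(axis=0) + 1e-6
        X_new.append(rng.normal(mu, sigma, size=(n_new, X.shape[1])))
        y_new.append(np.full(n_new, classes[counts.argmax()]))
    return np.vstack(X_new), np.concatenate(y_new)
```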
|
arxiv
|
@article{haghbin2024artificial,
title={Artificial Data Point Generation in Clustered Latent Space for Small
Medical Datasets},
author={Yasaman Haghbin, Hadi Moradi, Reshad Hosseini},
journal={arXiv preprint arXiv:2409.17685},
year={2024},
archivePrefix={arXiv},
eprint={2409.17685},
primaryClass={cs.AI cs.LG}
}
|
haghbin2024artificial
|
arxiv-662258
|
2409.17686
|
MoGenTS: Motion Generation based on Spatial-Temporal Joint Modeling
|
<|reference_start|>MoGenTS: Motion Generation based on Spatial-Temporal Joint Modeling: Motion generation from discrete quantization offers many advantages over continuous regression, but at the cost of inevitable approximation errors. Previous methods usually quantize the entire body pose into one code, which not only faces the difficulty in encoding all joints within one vector but also loses the spatial relationship between different joints. Differently, in this work we quantize each individual joint into one vector, which i) simplifies the quantization process as the complexity associated with a single joint is markedly lower than that of the entire pose; ii) maintains a spatial-temporal structure that preserves both the spatial relationships among joints and the temporal movement patterns; iii) yields a 2D token map, which enables the application of various 2D operations widely used in 2D images. Grounded in the 2D motion quantization, we build a spatial-temporal modeling framework, where 2D joint VQVAE, temporal-spatial 2D masking technique, and spatial-temporal 2D attention are proposed to take advantage of spatial-temporal signals among the 2D tokens. Extensive experiments demonstrate that our method significantly outperforms previous methods across different datasets, with a $26.6\%$ decrease of FID on HumanML3D and a $29.9\%$ decrease on KIT-ML.<|reference_end|>
|
arxiv
|
@article{yuan2024mogents:,
title={MoGenTS: Motion Generation based on Spatial-Temporal Joint Modeling},
author={Weihao Yuan, Weichao Shen, Yisheng He, Yuan Dong, Xiaodong Gu, Zilong
Dong, Liefeng Bo, Qixing Huang},
journal={arXiv preprint arXiv:2409.17686},
year={2024},
archivePrefix={arXiv},
eprint={2409.17686},
primaryClass={cs.CV}
}
|
yuan2024mogents:
|
arxiv-662259
|
2409.17687
|
Graph Edit Distance with General Costs Using Neural Set Divergence
|
<|reference_start|>Graph Edit Distance with General Costs Using Neural Set Divergence: Graph Edit Distance (GED) measures the (dis-)similarity between two given graphs, in terms of the minimum-cost edit sequence that transforms one graph to the other. However, the exact computation of GED is NP-Hard, which has recently motivated the design of neural methods for GED estimation. These methods, however, do not explicitly account for edit operations with different costs. In response, we propose GRAPHEDX, a neural GED estimator that can work with general costs specified for the four edit operations, viz., edge deletion, edge addition, node deletion and node addition. We first present GED as a quadratic assignment problem (QAP) that incorporates these four costs. Then, we represent each graph as a set of node and edge embeddings and use them to design a family of neural set divergence surrogates. We replace the QAP terms corresponding to each operation with their surrogates. Computing such neural set divergences requires aligning the nodes and edges of the two graphs. We learn these alignments using a Gumbel-Sinkhorn permutation generator, additionally ensuring that the node and edge alignments are consistent with each other. Moreover, these alignments are cognizant of both the presence and absence of edges between node pairs. Experiments on several datasets, under a variety of edit cost settings, show that GRAPHEDX consistently outperforms state-of-the-art methods and heuristics in terms of prediction error.<|reference_end|>
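The Gumbel-Sinkhorn device mentioned above is standard enough to sketch: perturb matching scores with Gumbel noise, then alternate row and column normalisation in log space so the result approaches a doubly-stochastic, near-permutation matrix. The temperature and iteration count below are arbitrary.

```python
import torch

def gumbel_sinkhorn(scores, tau=1.0, n_iter=20, noise=True):
    """Soft permutation from a score matrix via Gumbel-Sinkhorn."""
    if noise:
        g = -torch.log(-torch.log(torch.rand_like(scores) + 1e-20) + 1e-20)
        scores = scores + g
    log_p = scores / tau
    for _ in range(n_iter):
        log_p = log_p - log_p.logsumexp(dim=-1, keepdim=True)  # normalise rows
        log_p = log_p - log_p.logsumexp(dim=-2, keepdim=True)  # normalise columns
    return log_p.exp()

P = gumbel_sinkhorn(torch.randn(5, 5))   # rows and columns each sum to ~1
```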
|
arxiv
|
@article{jain2024graph,
title={Graph Edit Distance with General Costs Using Neural Set Divergence},
author={Eeshaan Jain, Indradyumna Roy, Saswat Meher, Soumen Chakrabarti, Abir
De},
journal={Advances in Neural Information Processing Systems, 38 (2024)},
year={2024},
archivePrefix={arXiv},
eprint={2409.17687},
primaryClass={cs.LG cs.AI}
}
|
jain2024graph
|
arxiv-662260
|
2409.17688
|
HPC acceleration of large (min, +) matrix products to compute domination-type parameters in graphs
|
<|reference_start|>HPC acceleration of large (min, +) matrix products to compute domination-type parameters in graphs: The computation of domination-type parameters is a challenging problem in Cartesian product graphs. We present an algorithmic method to compute the $2$-domination number of the Cartesian product of a path with small order and any cycle, involving the $(\min,+)$ matrix product. We establish some theoretical results that provide the algorithms necessary to compute that parameter, and the main challenge in running such algorithms comes from the large size of the matrices used, which makes it necessary to improve the techniques used to handle these objects. We analyze the performance of the algorithms on modern multicore CPUs and on GPUs, and we show the advantages over the sequential implementation. The use of these platforms allows us to compute the $2$-domination number of cylinders whose paths have at most $12$ vertices.<|reference_end|>
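The dominant kernel here is the iterated (min,+) product; a compact NumPy sketch of the p-th (min,+) power by repeated squaring follows. The paper offloads this step to multicore CPUs and GPUs, and the actual transfer matrices are not shown.

```python
import numpy as np

def min_plus_power(A, p):
    """p-th (min,+) power of a square matrix by repeated squaring (p >= 1).
    The inner product is fully vectorised, at O(n^2 k) temporary memory."""
    def mp(X, Y):
        return np.min(X[:, :, None] + Y[None, :, :], axis=1)
    R, B = None, A
    while p:
        if p & 1:
            R = B if R is None else mp(R, B)
        B, p = mp(B, B), p >> 1
    return R
```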
|
arxiv
|
@article{garzón2024hpc,
title={HPC acceleration of large (min, +) matrix products to compute
domination-type parameters in graphs},
author={E.M. Garz\'on, J.A. Mart\'inez, J.J. Moreno and M.L. Puertas},
journal={Journal of Supercomputing 78, pp. 17826-17843, 2022},
year={2024},
doi={10.1007/s11227-022-04574-5},
archivePrefix={arXiv},
eprint={2409.17688},
primaryClass={cs.DM math.CO}
}
|
garzón2024hpc
|
arxiv-662261
|
2409.17691
|
Efficient Bias Mitigation Without Privileged Information
|
<|reference_start|>Efficient Bias Mitigation Without Privileged Information: Deep neural networks trained via empirical risk minimisation often exhibit significant performance disparities across groups, particularly when group and task labels are spuriously correlated (e.g., "grassy background" and "cows"). Existing bias mitigation methods that aim to address this issue often either rely on group labels for training or validation, or require an extensive hyperparameter search. Such data and computational requirements hinder the practical deployment of these methods, especially when datasets are too large to be group-annotated, computational resources are limited, and models are trained through already complex pipelines. In this paper, we propose Targeted Augmentations for Bias Mitigation (TAB), a simple hyperparameter-free framework that leverages the entire training history of a helper model to identify spurious samples, and generate a group-balanced training set from which a robust model can be trained. We show that TAB improves worst-group performance without any group information or model selection, outperforming existing methods while maintaining overall accuracy.<|reference_end|>
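A sketch of the TAB recipe as we read the abstract: flag samples the helper model rarely classifies correctly across its training history as likely bias-conflicting, then resample so flagged and unflagged groups are balanced within each class. The 0.5 threshold and group size are invented for illustration.

```python
import numpy as np

def tab_resample(correct_history, labels, seed=0):
    """correct_history: bool array (epochs, n_samples) of per-epoch helper
    correctness. Returns indices of a group-balanced training set."""
    rng = np.random.default_rng(seed)
    hard = correct_history.mean(axis=0) < 0.5        # rarely-correct samples
    idx = []
    for c in np.unique(labels):
        for flag in (True, False):
            group = np.where((labels == c) & (hard == flag))[0]
            if len(group):
                idx.append(rng.choice(group, size=200, replace=True))
    return np.concatenate(idx)
```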
|
arxiv
|
@article{zarlenga2024efficient,
title={Efficient Bias Mitigation Without Privileged Information},
author={Mateo Espinosa Zarlenga, Swami Sankaranarayanan, Jerone T. A. Andrews,
Zohreh Shams, Mateja Jamnik, Alice Xiang},
journal={arXiv preprint arXiv:2409.17691},
year={2024},
archivePrefix={arXiv},
eprint={2409.17691},
primaryClass={cs.LG cs.AI}
}
|
zarlenga2024efficient
|
arxiv-662262
|
2409.17692
|
MIO: A Foundation Model on Multimodal Tokens
|
<|reference_start|>MIO: A Foundation Model on Multimodal Tokens: In this paper, we introduce MIO, a novel foundation model built on multimodal tokens, capable of understanding and generating speech, text, images, and videos in an end-to-end, autoregressive manner. While the emergence of large language models (LLMs) and multimodal large language models (MM-LLMs) propels advancements in artificial general intelligence through their versatile capabilities, they still lack true any-to-any understanding and generation. Recently, the release of GPT-4o has showcased the remarkable potential of any-to-any LLMs for complex real-world tasks, enabling omnidirectional input and output across images, speech, and text. However, it is closed-source and does not support the generation of multimodal interleaved sequences. To address this gap, we present MIO, which is trained on a mixture of discrete tokens across four modalities using causal multimodal modeling. MIO undergoes a four-stage training process: (1) alignment pre-training, (2) interleaved pre-training, (3) speech-enhanced pre-training, and (4) comprehensive supervised fine-tuning on diverse textual, visual, and speech tasks. Our experimental results indicate that MIO exhibits competitive, and in some cases superior, performance compared to previous dual-modal baselines, any-to-any model baselines, and even modality-specific baselines. Moreover, MIO demonstrates advanced capabilities inherent to its any-to-any feature, such as interleaved video-text generation, chain-of-visual-thought reasoning, visual guideline generation, instructional image editing, etc.<|reference_end|>
|
arxiv
|
@article{wang2024mio:,
title={MIO: A Foundation Model on Multimodal Tokens},
author={Zekun Wang, King Zhu, Chunpu Xu, Wangchunshu Zhou, Jiaheng Liu, Yibo
Zhang, Jiashuo Wang, Ning Shi, Siyu Li, Yizhi Li, Haoran Que, Zhaoxiang
Zhang, Yuanxing Zhang, Ge Zhang, Ke Xu, Jie Fu, Wenhao Huang},
journal={arXiv preprint arXiv:2409.17692},
year={2024},
archivePrefix={arXiv},
eprint={2409.17692},
primaryClass={cs.CL cs.AI cs.LG}
}
|
wang2024mio:
|
arxiv-662263
|
2409.17693
|
Spatial embedding promotes a specific form of modularity with low entropy and heterogeneous spectral dynamics
|
<|reference_start|>Spatial embedding promotes a specific form of modularity with low entropy and heterogeneous spectral dynamics: Understanding how biological constraints shape neural computation is a central goal of computational neuroscience. Spatially embedded recurrent neural networks provide a promising avenue to study how modelled constraints shape the combined structural and functional organisation of networks over learning. Prior work has shown that spatially embedded systems like this can combine structure and function into single artificial models during learning. But it remains unclear precisely how, in general, structural constraints bound the range of attainable configurations. In this work, we show that it is possible to study these restrictions through entropic measures of the neural weights and eigenspectrum, across both rate and spiking neural networks. Spatial embedding, in contrast to baseline models, leads to networks with a highly specific low entropy modularity where connectivity is readily interpretable given the known spatial and communication constraints acting on them. Crucially, these networks also demonstrate systematically modulated spectral dynamics, revealing how they exploit heterogeneity in their function to overcome the constraints imposed on their structure. This work deepens our understanding of constrained learning in neural networks, across coding schemes and tasks, where solutions to simultaneous structural and functional objectives must be accomplished in tandem.<|reference_end|>
|
arxiv
|
@article{sheeran2024spatial,
title={Spatial embedding promotes a specific form of modularity with low
entropy and heterogeneous spectral dynamics},
author={Cornelia Sheeran, Andrew S. Ham, Duncan E. Astle, Jascha Achterberg
and Danyal Akarca},
journal={arXiv preprint arXiv:2409.17693},
year={2024},
archivePrefix={arXiv},
eprint={2409.17693},
primaryClass={cs.NE q-bio.NC}
}
|
sheeran2024spatial
|
arxiv-662264
|
2409.17698
|
The application of GPT-4 in grading design university students' assignment and providing feedback: An exploratory study
|
<|reference_start|>The application of GPT-4 in grading design university students' assignment and providing feedback: An exploratory study: This study aims to investigate whether GPT-4 can effectively grade assignments for design university students and provide useful feedback. In design education, assignments do not have a single correct answer and often involve solving an open-ended design problem. This subjective nature of design projects often leads to grading problems, as grades can vary between different raters, for instance, instructors from an engineering or an architecture background. This study employs an iterative research approach in developing a Custom GPT, with the aim of achieving more reliable results and testing whether it can provide design students with constructive feedback. The findings include: First, through several rounds of iteration, the inter-rater reliability between GPT and human raters reached a level that is generally accepted by educators. This indicates that, by providing accurate prompts to GPT and continuously iterating to build a Custom GPT, it can be used to effectively grade students' design assignments, serving as a reliable complement to human raters. Second, the intra-rater reliability of GPT's scoring at different times is between 0.65 and 0.78. This indicates that, with adequate instructions, a Custom GPT gives consistent results, which is a precondition for grading students. As consistency and comparability are the two main rules for ensuring the reliability of educational assessment, this study has looked at whether a Custom GPT can be developed that adheres to these two rules. We finish the paper by testing whether the Custom GPT can provide students with useful feedback and by reflecting on how educators can develop and iterate a Custom GPT to serve as a complementary rater.<|reference_end|>
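Inter-rater agreement of the kind reported here is typically computed with a weighted kappa; a minimal sketch with made-up letter grades follows (the paper's actual reliability statistic may differ).

```python
from sklearn.metrics import cohen_kappa_score

# Agreement between the Custom GPT and a human rater on ordinal letter
# grades; weights="quadratic" penalises B-vs-D disagreements more than
# B-vs-C. The grades below are toy data.
human = ["A", "B", "B", "C", "A", "D", "B", "C"]
gpt   = ["A", "B", "C", "C", "A", "C", "B", "C"]
print(cohen_kappa_score(human, gpt, weights="quadratic"))
```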
|
arxiv
|
@article{huang2024the,
title={The application of GPT-4 in grading design university students'
assignment and providing feedback: An exploratory study},
author={Qian Huang, Thijs Willems, King Wang Poon},
journal={arXiv preprint arXiv:2409.17698},
year={2024},
archivePrefix={arXiv},
eprint={2409.17698},
primaryClass={cs.AI}
}
|
huang2024the
|
arxiv-662265
|
2409.17699
|
MoJE: Mixture of Jailbreak Experts, Naive Tabular Classifiers as Guard for Prompt Attacks
|
<|reference_start|>MoJE: Mixture of Jailbreak Experts, Naive Tabular Classifiers as Guard for Prompt Attacks: The proliferation of Large Language Models (LLMs) in diverse applications underscores the pressing need for robust security measures to thwart potential jailbreak attacks. These attacks exploit vulnerabilities within LLMs, endangering data integrity and user privacy. Guardrails serve as crucial protective mechanisms against such threats, but existing models often fall short in terms of both detection accuracy and computational efficiency. This paper advocates for the significance of jailbreak attack prevention on LLMs, and emphasises the role of input guardrails in safeguarding these models. We introduce MoJE (Mixture of Jailbreak Expert), a novel guardrail architecture designed to surpass current limitations in existing state-of-the-art guardrails. By employing simple linguistic statistical techniques, MoJE excels in detecting jailbreak attacks while maintaining minimal computational overhead during model inference. Through rigorous experimentation, MoJE demonstrates superior performance, detecting 90% of the attacks without compromising benign prompts and enhancing LLMs' security against jailbreak attacks.<|reference_end|>
|
arxiv
|
@article{cornacchia2024moje:,
title={MoJE: Mixture of Jailbreak Experts, Naive Tabular Classifiers as Guard
for Prompt Attacks},
author={Giandomenico Cornacchia, Giulio Zizzo, Kieran Fraser, Muhammad Zaid
Hameed, Ambrish Rawat, Mark Purcell},
journal={arXiv preprint arXiv:2409.17699},
year={2024},
archivePrefix={arXiv},
eprint={2409.17699},
primaryClass={cs.CR cs.AI cs.LG}
}
|
cornacchia2024moje:
|
arxiv-662266
|
2409.17700
|
Demystifying Privacy in 5G Stand Alone Networks
|
<|reference_start|>Demystifying Privacy in 5G Stand Alone Networks: Ensuring user privacy remains critical in mobile networks, particularly with the rise of connected devices and denser 5G infrastructure. Privacy concerns have persisted across 2G, 3G, and 4G/LTE networks. Recognizing these concerns, the 3rd Generation Partnership Project (3GPP) has made privacy enhancements in 5G Release 15. However, the extent of operator adoption remains unclear, especially as most networks operate in 5G Non Stand Alone (NSA) mode, relying on 4G Core Networks. This study provides the first qualitative and experimental comparison between 5G NSA and Stand Alone (SA) in real operator networks, focusing on privacy enhancements that address the top eight pre-5G attacks identified in recent academic literature. Additionally, it evaluates the privacy levels of OpenAirInterface (OAI), a leading open-source software for 5G, against real network deployments for the same attacks. The analysis reveals two new 5G privacy vulnerabilities, underscoring the need for further research and stricter standards.<|reference_end|>
|
arxiv
|
@article{eleftherakis2024demystifying,
title={Demystifying Privacy in 5G Stand Alone Networks},
author={Stavros Eleftherakis and Timothy Otim and Giuseppe Santaromita and
Almudena Diaz Zayas and Domenico Giustiniano and Nicolas Kourtellis},
journal={arXiv preprint arXiv:2409.17700},
year={2024},
doi={10.1145/3636534.3690696},
archivePrefix={arXiv},
eprint={2409.17700},
primaryClass={cs.NI}
}
|
eleftherakis2024demystifying
|
arxiv-662267
|
2409.17702
|
Episodic Memory Verbalization using Hierarchical Representations of Life-Long Robot Experience
|
<|reference_start|>Episodic Memory Verbalization using Hierarchical Representations of Life-Long Robot Experience: Verbalization of robot experience, i.e., summarization of and question answering about a robot's past, is a crucial ability for improving human-robot interaction. Previous works applied rule-based systems or fine-tuned deep models to verbalize short (several-minute-long) streams of episodic data, limiting generalization and transferability. In our work, we apply large pretrained models to tackle this task with zero or few examples, and specifically focus on verbalizing life-long experiences. For this, we derive a tree-like data structure from episodic memory (EM), with lower levels representing raw perception and proprioception data, and higher levels abstracting events to natural language concepts. Given such a hierarchical representation built from the experience stream, we apply a large language model as an agent to interactively search the EM given a user's query, dynamically expanding (initially collapsed) tree nodes to find the relevant information. The approach keeps computational costs low even when scaling to months of robot experience data. We evaluate our method on simulated household robot data, human egocentric videos, and real-world robot recordings, demonstrating its flexibility and scalability.<|reference_end|>
|
arxiv
|
@article{bärmann2024episodic,
title={Episodic Memory Verbalization using Hierarchical Representations of
Life-Long Robot Experience},
author={Leonard B{\"a}rmann, Chad DeChant, Joana Plewnia, Fabian Peller-Konrad,
Daniel Bauer, Tamim Asfour, Alex Waibel},
journal={arXiv preprint arXiv:2409.17702},
year={2024},
archivePrefix={arXiv},
eprint={2409.17702},
primaryClass={cs.RO cs.AI}
}
|
bärmann2024episodic
|
arxiv-662268
|
2409.17703
|
PGN: The RNN's New Successor is Effective for Long-Range Time Series Forecasting
|
<|reference_start|>PGN: The RNN's New Successor is Effective for Long-Range Time Series Forecasting: Due to the recurrent structure of RNN, the long information propagation path poses limitations in capturing long-term dependencies, gradient explosion/vanishing issues, and inefficient sequential execution. Based on this, we propose a novel paradigm called Parallel Gated Network (PGN) as the new successor to RNN. PGN directly captures information from previous time steps through the designed Historical Information Extraction (HIE) layer and leverages gated mechanisms to select and fuse it with the current time step information. This reduces the information propagation path to $\mathcal{O}(1)$, effectively addressing the limitations of RNN. To enhance PGN's performance in long-range time series forecasting tasks, we propose a novel temporal modeling framework called Temporal PGN (TPGN). TPGN incorporates two branches to comprehensively capture the semantic information of time series. One branch utilizes PGN to capture long-term periodic patterns while preserving their local characteristics. The other branch employs patches to capture short-term information and aggregate the global representation of the series. TPGN achieves a theoretical complexity of $\mathcal{O}(\sqrt{L})$, ensuring efficiency in its operations. Experimental results on five benchmark datasets demonstrate the state-of-the-art (SOTA) performance and high efficiency of TPGN, further confirming the effectiveness of PGN as the new successor to RNN in long-range time series forecasting. The code is available in this repository: \url{https://github.com/Water2sea/TPGN}.<|reference_end|>
|
arxiv
|
@article{jia2024pgn:,
title={PGN: The RNN's New Successor is Effective for Long-Range Time Series
Forecasting},
author={Yuxin Jia, Youfang Lin, Jing Yu, Shuo Wang, Tianhao Liu, Huaiyu Wan},
journal={arXiv preprint arXiv:2409.17703},
year={2024},
archivePrefix={arXiv},
eprint={2409.17703},
primaryClass={cs.LG}
}
|
jia2024pgn:
|
arxiv-662269
|
2409.17704
|
Transfer Learning in $\ell_1$ Regularized Regression: Hyperparameter Selection Strategy based on Sharp Asymptotic Analysis
|
<|reference_start|>Transfer Learning in $\ell_1$ Regularized Regression: Hyperparameter Selection Strategy based on Sharp Asymptotic Analysis: Transfer learning techniques aim to leverage information from multiple related datasets to enhance prediction quality against a target dataset. Such methods have been adopted in the context of high-dimensional sparse regression, and some Lasso-based algorithms have been invented: Trans-Lasso and Pretraining Lasso are such examples. These algorithms require the statistician to select hyperparameters that control the extent and type of information transfer from related datasets. However, selection strategies for these hyperparameters, as well as the impact of these choices on the algorithm's performance, have been largely unexplored. To address this, we conduct a thorough, precise study of the algorithm in a high-dimensional setting via an asymptotic analysis using the replica method. Our approach reveals a surprisingly simple behavior of the algorithm: Ignoring one of the two types of information transferred to the fine-tuning stage has little effect on generalization performance, implying that efforts for hyperparameter selection can be significantly reduced. Our theoretical findings are also empirically supported by real-world applications on the IMDb dataset.<|reference_end|>
|
arxiv
|
@article{okajima2024transfer,
title={Transfer Learning in $\ell_1$ Regularized Regression: Hyperparameter
Selection Strategy based on Sharp Asymptotic Analysis},
author={Koki Okajima and Tomoyuki Obuchi},
journal={arXiv preprint arXiv:2409.17704},
year={2024},
archivePrefix={arXiv},
eprint={2409.17704},
primaryClass={stat.ML cond-mat.dis-nn cond-mat.stat-mech cs.LG}
}
|
okajima2024transfer
|
arxiv-662270
|
2409.17705
|
On the Output Redundancy of LTI Systems: A Geometric Approach with Application to Privacy
|
<|reference_start|>On the Output Redundancy of LTI Systems: A Geometric Approach with Application to Privacy: This paper examines the properties of output-redundant systems, that is, systems possessing a larger number of outputs than inputs, through the lens of the geometric approach of Wonham et al. We begin by formulating a simple output allocation synthesis problem, which involves ``concealing'' input information from a malicious eavesdropper having access to the system output, while still allowing for a legitimate user to reconstruct it. It is shown that the solvability of this problem requires the availability of a redundant set of outputs. This very problem is instrumental to unveiling the fundamental geometric properties of output-redundant systems, which form the basis for our subsequent constructions and results. As a direct application, we demonstrate how output allocation can be employed to effectively protect input information from certain output eavesdroppers with guaranteed results.<|reference_end|>
|
arxiv
|
@article{yang2024on,
title={On the Output Redundancy of LTI Systems: A Geometric Approach with
Application to Privacy},
author={Guitao Yang, Alexander J. Gallo, Angelo Barboni, Riccardo M.G.
Ferrari, Andrea Serrani and Thomas Parisini},
journal={arXiv preprint arXiv:2409.17705},
year={2024},
archivePrefix={arXiv},
eprint={2409.17705},
primaryClass={eess.SY cs.SY}
}
|
yang2024on
|
arxiv-662271
|
2409.17707
|
Oversampled Low Ambiguity Zone Sequences for Channel Estimation over Doubly Selective Channels
|
<|reference_start|>Oversampled Low Ambiguity Zone Sequences for Channel Estimation over Doubly Selective Channels: Pilot sequence design over doubly selective channels (DSC) is challenging due to the variations in both the time- and frequency-domains. Against this background, the contribution of this paper is twofold: Firstly, we investigate the optimal sequence design criteria for efficient channel estimation in orthogonal frequency division multiplexing systems under DSC. Secondly, to design pilot sequences that can satisfy the derived criteria, we propose a new metric called oversampled ambiguity function (O-AF), which considers both fractional and integer Doppler frequency shifts. Optimizing the sidelobes of O-AF through a modified iterative twisted approximation (ITROX) algorithm, we develop a new class of pilot sequences called ``oversampled low ambiguity zone (O-LAZ) sequences". Through numerical experiments, we evaluate the efficiency of the proposed O-LAZ sequences over the traditional low ambiguity zone (LAZ) sequences, Zadoff-Chu (ZC) sequences and m-sequences, by comparing their channel estimation performances over DSC.<|reference_end|>
|
arxiv
|
@article{gu2024oversampled,
title={Oversampled Low Ambiguity Zone Sequences for Channel Estimation over
Doubly Selective Channels},
author={Zhi Gu, Zhengchun Zhou, Pingzhi Fan, Avik Ranjan Adhikary, and Zilong
Liu},
journal={arXiv preprint arXiv:2409.17707},
year={2024},
archivePrefix={arXiv},
eprint={2409.17707},
primaryClass={cs.IT eess.SP math.IT}
}
|
gu2024oversampled
|
arxiv-662272
|
2409.17711
|
Efficient Pointwise-Pairwise Learning-to-Rank for News Recommendation
|
<|reference_start|>Efficient Pointwise-Pairwise Learning-to-Rank for News Recommendation: News recommendation is a challenging task that involves personalization based on the interaction history and preferences of each user. Recent works have leveraged the power of pretrained language models (PLMs) to directly rank news items by using inference approaches that predominantly fall into three categories: pointwise, pairwise, and listwise learning-to-rank. While pointwise methods offer linear inference complexity, they fail to capture crucial comparative information between items that is more effective for ranking tasks. Conversely, pairwise and listwise approaches excel at incorporating these comparisons but suffer from practical limitations: pairwise approaches are either computationally expensive or lack theoretical guarantees, and listwise methods often perform poorly in practice. In this paper, we propose a novel framework for PLM-based news recommendation that integrates both pointwise relevance prediction and pairwise comparisons in a scalable manner. We present a rigorous theoretical analysis of our framework, establishing conditions under which our approach guarantees improved performance. Extensive experiments show that our approach outperforms the state-of-the-art methods on the MIND and Adressa news recommendation datasets.<|reference_end|>
|
arxiv
|
@article{kannen2024efficient,
title={Efficient Pointwise-Pairwise Learning-to-Rank for News Recommendation},
author={Nithish Kannen, Yao Ma, Gerrit J.J. van den Burg, Jean Baptiste
Faddoul},
journal={arXiv preprint arXiv:2409.17711},
year={2024},
archivePrefix={arXiv},
eprint={2409.17711},
primaryClass={cs.IR cs.LG}
}
|
kannen2024efficient
|
arxiv-662273
|
2409.17714
|
From Innermost to Full Probabilistic Term Rewriting: Almost-Sure Termination, Complexity, and Modularity
|
<|reference_start|>From Innermost to Full Probabilistic Term Rewriting: Almost-Sure Termination, Complexity, and Modularity: There are many evaluation strategies for term rewrite systems, but automatically proving termination or analyzing complexity is usually easiest for innermost rewriting. Several syntactic criteria exist when innermost termination implies full termination or when runtime complexity and innermost runtime complexity coincide. We adapt these criteria to the probabilistic setting, e.g., we show when it suffices to analyze almost-sure termination w.r.t. innermost rewriting in order to prove full almost-sure termination of probabilistic term rewrite systems. These criteria can be applied for both termination and complexity analysis in the probabilistic setting. We implemented and evaluated our new contributions in the tool AProVE. Moreover, we also use our new results on innermost and full probabilistic rewriting to investigate the modularity of probabilistic termination properties.<|reference_end|>
|
arxiv
|
@article{kassing2024from,
title={From Innermost to Full Probabilistic Term Rewriting: Almost-Sure
Termination, Complexity, and Modularity},
author={Jan-Christoph Kassing and J{\"u}rgen Giesl},
journal={arXiv preprint arXiv:2409.17714},
year={2024},
archivePrefix={arXiv},
eprint={2409.17714},
primaryClass={cs.LO}
}
|
kassing2024from
|
arxiv-662274
|
2409.17715
|
Optimal Sensitivity Oracle for Steiner Mincut
|
<|reference_start|>Optimal Sensitivity Oracle for Steiner Mincut: Let $G=(V,E)$ be an undirected weighted graph on $n=|V|$ vertices and $S\subseteq V$ be a Steiner set. Steiner mincut is a well-studied concept, which provides a generalization to both (s,t)-mincut (when $|S|=2$) and global mincut (when $|S|=n$). Here, we address the problem of designing a compact data structure that can efficiently report a Steiner mincut and its capacity after the failure of any edge in $G$; such a data structure is known as a \textit{Sensitivity Oracle} for Steiner mincut. In the area of minimum cuts, many Sensitivity Oracles have been designed for unweighted graphs; in weighted graphs, however, Sensitivity Oracles exist only for (s,t)-mincut [Annals of Operations Research 1991, NETWORKS 2019, ICALP 2024], which is just a special case of Steiner mincut. Here, we generalize this result to any arbitrary set $S\subseteq V$. 1. Sensitivity Oracle: Assuming the capacity of every edge is known, a. there is an ${\mathcal O}(n)$ space data structure that can report the capacity of Steiner mincut in ${\mathcal O}(1)$ time and b. there is an ${\mathcal O}(n(n-|S|+1))$ space data structure that can report a Steiner mincut in ${\mathcal O}(n)$ time after the failure of any edge in $G$. 2. Lower Bound: We show that any data structure that, after the failure of any edge, can report a Steiner mincut or its capacity must occupy $\Omega(n^2)$ bits of space in the worst case, irrespective of the size of the Steiner set. The lower bound in (2) shows that the assumption in (1) is essential to break the $\Omega(n^2)$ lower bound on space. For $|S|=n-k$ for any constant $k\ge 0$, it occupies only ${\mathcal O}(n)$ space. So, we also present the first Sensitivity Oracle occupying ${\mathcal O}(n)$ space for global mincut.<|reference_end|>
|
arxiv
|
@article{bhanja2024optimal,
title={Optimal Sensitivity Oracle for Steiner Mincut},
author={Koustav Bhanja},
journal={arXiv preprint arXiv:2409.17715},
year={2024},
archivePrefix={arXiv},
eprint={2409.17715},
primaryClass={cs.DS}
}
|
bhanja2024optimal
|
arxiv-662275
|
2409.17716
|
QuForge: A Library for Qudits Simulation
|
<|reference_start|>QuForge: A Library for Qudits Simulation: Quantum computing with qudits, an extension of qubits to multiple levels, is a research field less mature than qubit-based quantum computing. However, qudits can offer some advantages over qubits, by representing information with fewer separated components. In this article, we present QuForge, a Python-based library designed to simulate quantum circuits with qudits. This library provides the necessary quantum gates for implementing quantum algorithms, tailored to any chosen qudit dimension. Built on top of differentiable frameworks, QuForge supports execution on accelerating devices such as GPUs and TPUs, significantly speeding up simulations. It also supports sparse operations, leading to a reduction in memory consumption compared to other libraries. Additionally, by constructing quantum circuits as differentiable graphs, QuForge facilitates the implementation of quantum machine learning algorithms, enhancing the capabilities and flexibility of quantum computing research.<|reference_end|>
|
arxiv
|
@article{farias2024quforge:,
title={QuForge: A Library for Qudits Simulation},
author={Tiago de Souza Farias, Lucas Friedrich, Jonas Maziero},
journal={arXiv preprint arXiv:2409.17716},
year={2024},
archivePrefix={arXiv},
eprint={2409.17716},
primaryClass={quant-ph cs.LG}
}
|
farias2024quforge:
|
arxiv-662276
|
2409.17717
|
Behaviour4All: in-the-wild Facial Behaviour Analysis Toolkit
|
<|reference_start|>Behaviour4All: in-the-wild Facial Behaviour Analysis Toolkit: In this paper, we introduce Behavior4All, a comprehensive, open-source toolkit for in-the-wild facial behavior analysis, integrating Face Localization, Valence-Arousal Estimation, Basic Expression Recognition and Action Unit Detection, all within a single framework. Available in both CPU-only and GPU-accelerated versions, Behavior4All leverages 12 large-scale, in-the-wild datasets consisting of over 5 million images from diverse demographic groups. It introduces a novel framework that leverages distribution matching and label co-annotation to address tasks with non-overlapping annotations, encoding prior knowledge of their relatedness. In the largest study of its kind, Behavior4All outperforms both state-of-the-art methods and existing toolkits in overall performance as well as fairness across all databases and tasks. It also demonstrates superior generalizability on unseen databases and on compound expression recognition. Finally, Behavior4All is many times faster than other toolkits.<|reference_end|>
|
arxiv
|
@article{kollias2024behaviour4all:,
title={Behaviour4All: in-the-wild Facial Behaviour Analysis Toolkit},
author={Dimitrios Kollias and Chunchang Shao and Odysseus Kaloidas and Ioannis
Patras},
journal={arXiv preprint arXiv:2409.17717},
year={2024},
archivePrefix={arXiv},
eprint={2409.17717},
primaryClass={cs.CV}
}
|
kollias2024behaviour4all:
|
arxiv-662277
|
2409.17720
|
Scene Understanding in Pick-and-Place Tasks: Analyzing Transformations Between Initial and Final Scenes
|
<|reference_start|>Scene Understanding in Pick-and-Place Tasks: Analyzing Transformations Between Initial and Final Scenes: With robots increasingly collaborating with humans in everyday tasks, it is important to take steps toward robotic systems capable of understanding the environment. This work focuses on scene understanding to detect pick and place tasks given initial and final images from the scene. To this end, a dataset is collected for object detection and pick and place task detection. A YOLOv5 network is subsequently trained to detect the objects in the initial and final scenes. Given the detected objects and their bounding boxes, two methods are proposed to detect the pick and place tasks which transform the initial scene into the final scene. A geometric method is proposed which tracks objects' movements in the two scenes and works based on the intersection of the bounding boxes which moved within scenes. In contrast, the CNN-based method utilizes a Convolutional Neural Network to classify objects with intersected bounding boxes into 5 classes, showing the spatial relationship between the involved objects. The performed pick and place tasks are then derived from analyzing the experiments with both scenes. Results show that the CNN-based method, using a VGG16 backbone, outscores the geometric method by roughly 12 percentage points in certain scenarios, with an overall success rate of 84.3%.<|reference_end|>
|
arxiv
|
@article{ghasemi2024scene,
title={Scene Understanding in Pick-and-Place Tasks: Analyzing Transformations
Between Initial and Final Scenes},
author={Seraj Ghasemi, Hamed Hosseini, MohammadHossein Koosheshi, Mehdi Tale
Masouleh, and Ahmad Kalhor},
journal={arXiv preprint arXiv:2409.17720},
year={2024},
doi={10.1109/ICEE63041.2024.10667903},
archivePrefix={arXiv},
eprint={2409.17720},
primaryClass={cs.CV cs.RO cs.SY eess.SY}
}
|
ghasemi2024scene
|
arxiv-662278
|
2409.17721
|
Improving the Vector Basis Neural Network for RANS Equations Using Separate Trainings
|
<|reference_start|>Improving the Vector Basis Neural Network for RANS Equations Using Separate Trainings: We present a new data-driven turbulence model for Reynolds-averaged Navier-Stokes equations called $\nu_t$-Vector Basis Neural Network. This new model, grounded on the already existing Vector Basis Neural Network, predicts separately the turbulent viscosity $\nu_t$ and the contribution of the Reynolds force vector that is not already accounted for in $\nu_t$. Numerical experiments on the flow in a Square Duct show the better accuracy of the new model compared to the reference one.<|reference_end|>
|
arxiv
|
@article{oberto2024improving,
title={Improving the Vector Basis Neural Network for RANS Equations Using
Separate Trainings},
author={Davide Oberto},
journal={arXiv preprint arXiv:2409.17721},
year={2024},
archivePrefix={arXiv},
eprint={2409.17721},
primaryClass={physics.flu-dyn cs.NA math.NA}
}
|
oberto2024improving
|
arxiv-662279
|
2409.17723
|
VVTEAM: A Compact Behavioral Model for Volatile Memristors
|
<|reference_start|>VVTEAM: A Compact Behavioral Model for Volatile Memristors: Volatile memristors have recently gained popularity as promising devices for neuromorphic circuits, capable of mimicking the leaky function of neurons and offering advantages over capacitor-based circuits in terms of power dissipation and area. Additionally, volatile memristors are useful as selector devices and for hardware security circuits such as physical unclonable functions. To facilitate the design and simulation of circuits, a compact behavioral model is essential. This paper proposes V-VTEAM, a compact, simple, general, and flexible behavioral model for volatile memristors, inspired by the VTEAM nonvolatile memristor model and developed in MATLAB. The validity of the model is demonstrated by fitting it to an ion drift/diffusion-based Ag/SiOx/C/W volatile memristor, achieving a relative root mean square error of 4.5%.<|reference_end|>
|
arxiv
|
@article{patni2024vvteam:,
title={VVTEAM: A Compact Behavioral Model for Volatile Memristors},
author={Tanay Patni, Rishona Daniels and Shahar Kvatinsky},
journal={arXiv preprint arXiv:2409.17723},
year={2024},
archivePrefix={arXiv},
eprint={2409.17723},
primaryClass={cs.AR cs.ET cs.NE}
}
|
patni2024vvteam:
|
arxiv-662280
|
2409.17725
|
Stable Object Placement Under Geometric Uncertainty via Differentiable Contact Dynamics
|
<|reference_start|>Stable Object Placement Under Geometric Uncertainty via Differentiable Contact Dynamics: From serving a cup of coffee to carefully rearranging delicate items, stable object placement is a crucial skill for future robots. This skill is challenging due to the required accuracy, which is difficult to achieve under geometric uncertainty. We leverage differentiable contact dynamics to develop a principled method for stable object placement under geometric uncertainty. We estimate the geometric uncertainty by minimizing the discrepancy between the force-torque sensor readings and the model predictions through gradient descent. We further keep track of a belief over multiple possible geometric parameters to mitigate the gradient-based method's sensitivity to the initialization. We verify our approach in the real world on various geometric uncertainties, including the in-hand pose uncertainty of the grasped object, the object's shape uncertainty, and the environment's shape uncertainty.<|reference_end|>
|
arxiv
|
@article{li2024stable,
title={Stable Object Placement Under Geometric Uncertainty via Differentiable
Contact Dynamics},
author={Linfeng Li, Gang Yang, Lin Shao, David Hsu},
journal={arXiv preprint arXiv:2409.17725},
year={2024},
archivePrefix={arXiv},
eprint={2409.17725},
primaryClass={cs.RO}
}
|
li2024stable
|
arxiv-662281
|
2409.17726
|
Recent advances in interpretable machine learning using structure-based protein representations
|
<|reference_start|>Recent advances in interpretable machine learning using structure-based protein representations: Recent advancements in machine learning (ML) are transforming the field of structural biology. For example, AlphaFold, a groundbreaking neural network for protein structure prediction, has been widely adopted by researchers. The availability of easy-to-use interfaces and interpretable outcomes from the neural network architecture, such as the confidence scores used to color the predicted structures, have made AlphaFold accessible even to non-ML experts. In this paper, we present various methods for representing protein 3D structures from low- to high-resolution, and show how interpretable ML methods can support tasks such as predicting protein structures, protein function, and protein-protein interactions. This survey also emphasizes the significance of interpreting and visualizing ML-based inference for structure-based protein representations that enhance interpretability and knowledge discovery. Developing such interpretable approaches promises to further accelerate fields including drug development and protein design.<|reference_end|>
|
arxiv
|
@article{vecchietti2024recent,
title={Recent advances in interpretable machine learning using structure-based
protein representations},
author={Luiz Felipe Vecchietti, Minji Lee, Begench Hangeldiyev, Hyunkyu Jung,
Hahnbeom Park, Tae-Kyun Kim, Meeyoung Cha, Ho Min Kim},
journal={arXiv preprint arXiv:2409.17726},
year={2024},
archivePrefix={arXiv},
eprint={2409.17726},
primaryClass={cs.LG}
}
|
vecchietti2024recent
|
arxiv-662282
|
2409.17727
|
Robotic-CLIP: Fine-tuning CLIP on Action Data for Robotic Applications
|
<|reference_start|>Robotic-CLIP: Fine-tuning CLIP on Action Data for Robotic Applications: Vision language models have played a key role in extracting meaningful features for various robotic applications. Among these, Contrastive Language-Image Pretraining (CLIP) is widely used in robotic tasks that require both vision and natural language understanding. However, CLIP was trained solely on static images paired with text prompts and has not yet been fully adapted for robotic tasks involving dynamic actions. In this paper, we introduce Robotic-CLIP to enhance robotic perception capabilities. We first gather and label large-scale action data, and then build our Robotic-CLIP by fine-tuning CLIP on 309,433 videos (~7.4 million frames) of action data using contrastive learning. By leveraging action data, Robotic-CLIP inherits CLIP's strong image performance while gaining the ability to understand actions in robotic contexts. Intensive experiments show that our Robotic-CLIP outperforms other CLIP-based models across various language-driven robotic tasks. Additionally, we demonstrate the practical effectiveness of Robotic-CLIP in real-world grasping applications.<|reference_end|>
|
arxiv
|
@article{nguyen2024robotic-clip:,
title={Robotic-CLIP: Fine-tuning CLIP on Action Data for Robotic Applications},
author={Nghia Nguyen, Minh Nhat Vu, Tung D. Ta, Baoru Huang, Thieu Vo, Ngan
Le, Anh Nguyen},
journal={arXiv preprint arXiv:2409.17727},
year={2024},
archivePrefix={arXiv},
eprint={2409.17727},
primaryClass={cs.RO cs.CV}
}
|
nguyen2024robotic-clip:
|
arxiv-662283
|
2409.17728
|
AlterMOMA: Fusion Redundancy Pruning for Camera-LiDAR Fusion Models with Alternative Modality Masking
|
<|reference_start|>AlterMOMA: Fusion Redundancy Pruning for Camera-LiDAR Fusion Models with Alternative Modality Masking: Camera-LiDAR fusion models significantly enhance perception performance in autonomous driving. The fusion mechanism leverages the strengths of each modality while minimizing their weaknesses. Moreover, in practice, camera-LiDAR fusion models utilize pre-trained backbones for efficient training. However, we argue that directly loading single-modal pre-trained camera and LiDAR backbones into camera-LiDAR fusion models introduces similar feature redundancy across modalities due to the nature of the fusion mechanism. Unfortunately, existing pruning methods are developed explicitly for single-modal models, and thus, they struggle to effectively identify these specific redundant parameters in camera-LiDAR fusion models. In this paper, to address the issue above on camera-LiDAR fusion models, we propose a novel pruning framework, Alternative Modality Masking Pruning (AlterMOMA), which employs alternative masking on each modality and identifies the redundant parameters. Specifically, when one modality's parameters are masked (deactivated), the absence of features from the masked backbone compels the model to reactivate previous redundant features of the other modality backbone. Therefore, these redundant features and relevant redundant parameters can be identified via the reactivation process. The redundant parameters can be pruned by our proposed importance score evaluation function, Alternative Evaluation (AlterEva), which is based on the observation of the loss changes when certain modality parameters are activated and deactivated. Extensive experiments on the nuScenes and KITTI datasets encompassing diverse tasks, baseline models, and pruning algorithms showcase that AlterMOMA outperforms existing pruning methods, attaining state-of-the-art performance.<|reference_end|>
|
arxiv
|
@article{sun2024altermoma:,
title={AlterMOMA: Fusion Redundancy Pruning for Camera-LiDAR Fusion Models with
Alternative Modality Masking},
author={Shiqi Sun, Yantao Lu, Ning Liu, Bo Jiang, JinChao Chen, Ying Zhang},
journal={arXiv preprint arXiv:2409.17728},
year={2024},
archivePrefix={arXiv},
eprint={2409.17728},
primaryClass={cs.CV cs.AI}
}
|
sun2024altermoma:
|
arxiv-662284
|
2409.17729
|
Neural Implicit Representation for Highly Dynamic LiDAR Mapping and Odometry
|
<|reference_start|>Neural Implicit Representation for Highly Dynamic LiDAR Mapping and Odometry: Recent advancements in Simultaneous Localization and Mapping (SLAM) have increasingly highlighted the robustness of LiDAR-based techniques. At the same time, Neural Radiance Fields (NeRF) have introduced new possibilities for 3D scene reconstruction, exemplified by SLAM systems. Among these, NeRF-LOAM has shown notable performance in NeRF-based SLAM applications. However, despite its strengths, these systems often encounter difficulties in dynamic outdoor environments due to their inherent static assumptions. To address these limitations, this paper proposes a novel method designed to improve reconstruction in highly dynamic outdoor scenes. Based on NeRF-LOAM, the proposed approach consists of two primary components. First, we separate the scene into static background and dynamic foreground. By identifying and excluding dynamic elements from the mapping process, this segmentation enables the creation of a dense 3D map that accurately represents the static background only. The second component extends the octree structure to support multi-resolution representation. This extension not only enhances reconstruction quality but also aids in the removal of dynamic objects identified by the first module. Additionally, Fourier feature encoding is applied to the sampled points, capturing high-frequency information and leading to more complete reconstruction results. Evaluations on various datasets demonstrate that our method achieves more competitive results compared to current state-of-the-art approaches.<|reference_end|>
|
arxiv
|
@article{zhang2024neural,
title={Neural Implicit Representation for Highly Dynamic LiDAR Mapping and
Odometry},
author={Qi Zhang, He Wang, Ru Li, Wenbin Li},
journal={arXiv preprint arXiv:2409.17729},
year={2024},
archivePrefix={arXiv},
eprint={2409.17729},
primaryClass={cs.CV}
}
|
zhang2024neural
|
arxiv-662285
|
2409.17730
|
Autoregressive Generation Strategies for Top-K Sequential Recommendations
|
<|reference_start|>Autoregressive Generation Strategies for Top-K Sequential Recommendations: The goal of modern sequential recommender systems is often formulated in terms of next-item prediction. In this paper, we explore the applicability of generative transformer-based models for the Top-K sequential recommendation task, where the goal is to predict items a user is likely to interact with in the "near future". We explore commonly used autoregressive generation strategies, including greedy decoding, beam search, and temperature sampling, to evaluate their performance for the Top-K sequential recommendation task. In addition, we propose novel Reciprocal Rank Aggregation (RRA) and Relevance Aggregation (RA) generation strategies based on multi-sequence generation with temperature sampling and subsequent aggregation. Experiments on diverse datasets give valuable insights regarding commonly used strategies' applicability and show that the suggested approaches improve performance on longer time horizons compared to the widely used Top-K prediction approach and single-sequence autoregressive generation strategies.<|reference_end|>
|
arxiv
|
@article{volodkevich2024autoregressive,
title={Autoregressive Generation Strategies for Top-K Sequential
Recommendations},
author={Anna Volodkevich, Danil Gusak, Anton Klenitskiy, Alexey Vasilev},
journal={arXiv preprint arXiv:2409.17730},
year={2024},
archivePrefix={arXiv},
eprint={2409.17730},
primaryClass={cs.IR cs.LG}
}
|
volodkevich2024autoregressive
|
arxiv-662286
|
2409.17731
|
Robust Ladder Climbing with a Quadrupedal Robot
|
<|reference_start|>Robust Ladder Climbing with a Quadrupedal Robot: Quadruped robots are proliferating in industrial environments where they carry sensor suites and serve as autonomous inspection platforms. Despite the advantages of legged robots over their wheeled counterparts on rough and uneven terrain, they are still unable to reliably negotiate a ubiquitous feature of industrial infrastructure: ladders. The inability to traverse ladders prevents quadrupeds from inspecting dangerous locations, puts humans in harm's way, and reduces industrial site productivity. In this paper, we learn quadrupedal ladder climbing via a reinforcement learning-based control policy and a complementary hooked end-effector. We evaluate the robustness in simulation across different ladder inclinations, rung geometries, and inter-rung spacings. On hardware, we demonstrate zero-shot transfer with an overall 90% success rate at ladder angles ranging from 70° to 90°, consistent climbing performance during unmodeled perturbations, and climbing speeds 232x faster than the state of the art. This work expands the scope of industrial quadruped robot applications beyond inspection on nominal terrains to challenging infrastructural features in the environment, highlighting synergies between robot morphology and control policy when performing complex skills. More information can be found at the project website: https://sites.google.com/leggedrobotics.com/climbingladders.<|reference_end|>
|
arxiv
|
@article{vogel2024robust,
title={Robust Ladder Climbing with a Quadrupedal Robot},
author={Dylan Vogel, Robert Baines, Joseph Church, Julian Lotzer, Karl Werner,
Marco Hutter},
journal={arXiv preprint arXiv:2409.17731},
year={2024},
archivePrefix={arXiv},
eprint={2409.17731},
primaryClass={cs.RO}
}
|
vogel2024robust
|
arxiv-662287
|
2409.17736
|
Efficient and stable time integration of Cahn-Hilliard equations: explicit, implicit and explicit iterative schemes
|
<|reference_start|>Efficient and stable time integration of Cahn-Hilliard equations: explicit, implicit and explicit iterative schemes: To solve the Cahn-Hilliard equation numerically, a new time integration algorithm is proposed, which is based on a combination of the Eyre splitting and the local iteration modified (LIM) scheme. The latter is employed to tackle the implicit system arising at each time integration step. The proposed method is gradient-stable and allows the use of large time steps, whereas, regarding its computational structure, it is an explicit time integration scheme. Numerical tests are presented to demonstrate the abilities of the new method and to compare it with other time integration methods for the Cahn-Hilliard equation.<|reference_end|>
|
arxiv
|
@article{botchev2024efficient,
title={Efficient and stable time integration of Cahn-Hilliard equations:
explicit, implicit and explicit iterative schemes},
author={M. A. Botchev and I. A. Fahurdinov and E. B. Savenkov},
journal={arXiv preprint arXiv:2409.17736},
year={2024},
doi={10.1134/S0965542524700945},
archivePrefix={arXiv},
eprint={2409.17736},
primaryClass={math.NA cs.CE cs.NA physics.comp-ph}
}
|
botchev2024efficient
|
arxiv-662288
|
2409.17738
|
Physically Consistent RIS: From Reradiation Mode Optimization to Practical Realization
|
<|reference_start|>Physically Consistent RIS: From Reradiation Mode Optimization to Practical Realization: We propose a practical framework for designing a physically consistent reconfigurable intelligent surface (RIS) to overcome the inefficiency of the conventional phase gradient approach. For a section of Cape Town and across three different coverage enhancement scenarios, we optimize the amplitude of the RIS reradiation modes using Sionna ray tracing and a gradient-based learning technique. We then determine the required RIS surface/sheet impedance given the desired amplitudes for the reradiation modes, design the corresponding unitcells, and validate the performance through full-wave numerical simulations using CST Microwave Studio. We further validate our approach by fabricating a RIS using the parallel plate waveguide technique and conducting experimental measurements that align with our theoretical predictions.<|reference_end|>
|
arxiv
|
@article{shabanpour2024physically,
title={Physically Consistent RIS: From Reradiation Mode Optimization to
Practical Realization},
author={Javad Shabanpour, Constantin Simovski, and Giovanni Geraci},
journal={arXiv preprint arXiv:2409.17738},
year={2024},
archivePrefix={arXiv},
eprint={2409.17738},
primaryClass={physics.app-ph cs.IT cs.NI eess.SP math.IT}
}
|
shabanpour2024physically
|
arxiv-662289
|
2409.17740
|
AnyLogo: Symbiotic Subject-Driven Diffusion System with Gemini Status
|
<|reference_start|>AnyLogo: Symbiotic Subject-Driven Diffusion System with Gemini Status: Diffusion models have made compelling progress on facilitating high-throughput daily production. Nevertheless, appealing customization requirements still suffer from instance-level finetuning for authentic fidelity. Prior zero-shot customization works achieve semantic consistency through the condensed injection of identity features, while addressing detailed low-level signatures through complex model configurations and subject-specific fabrications, which significantly break the statistical coherence within the overall system and limit the applicability across various scenarios. To facilitate the generic signature concentration with rectified efficiency, we present \textbf{AnyLogo}, a zero-shot region customizer with remarkable detail consistency, building upon the symbiotic diffusion system with eliminated cumbersome designs. Streamlined as vanilla image generation, we discern that the rigorous signature extraction and creative content generation are promisingly compatible and can be systematically recycled within a single denoising model. In place of the external configurations, the gemini status of the denoising model promotes reinforced subject transmission efficiency and a disentangled semantic-signature space with continuous signature decoration. Moreover, the sparse recycling paradigm is adopted to prevent duplication risk with a compressed transmission quota for diversified signature stimulation. Extensive experiments on constructed logo-level benchmarks demonstrate the effectiveness and practicability of our methods.<|reference_end|>
|
arxiv
|
@article{zhang2024anylogo:,
title={AnyLogo: Symbiotic Subject-Driven Diffusion System with Gemini Status},
author={Jinghao Zhang, Wen Qian, Hao Luo, Fan Wang, Feng Zhao},
journal={arXiv preprint arXiv:2409.17740},
year={2024},
archivePrefix={arXiv},
eprint={2409.17740},
primaryClass={cs.CV}
}
|
zhang2024anylogo:
|
arxiv-662290
|
2409.17742
|
TADAR: Thermal Array-based Detection and Ranging for Privacy-Preserving Human Sensing
|
<|reference_start|>TADAR: Thermal Array-based Detection and Ranging for Privacy-Preserving Human Sensing: Human sensing has gained increasing attention in various applications. Among the available technologies, visual images offer high accuracy, while sensing on the RF spectrum preserves privacy, creating a conflict between imaging resolution and privacy preservation. In this paper, we explore thermal array sensors as an emerging modality that strikes an excellent resolution-privacy balance for ubiquitous sensing. To this end, we present TADAR, the first multi-user Thermal Array-based Detection and Ranging system that estimates the inherently missing range information, extending thermal array outputs from 2D thermal pixels to 3D depths and empowering them as a promising modality for ubiquitous privacy-preserving human sensing. We prototype TADAR using a single commodity thermal array sensor and conduct extensive experiments in different indoor environments. Our results show that TADAR achieves a mean F1 score of 88.8% for multi-user detection and a mean accuracy of 32.0 cm for multi-user ranging, which further improves to 20.1 cm for targets located within 3 m. We conduct two case studies on fall detection and occupancy estimation to showcase the potential applications of TADAR. We hope TADAR will inspire the vast community to explore new directions of thermal array sensing, beyond wireless and acoustic sensing. TADAR is open-sourced on GitHub: https://github.com/aiot-lab/TADAR.<|reference_end|>
|
arxiv
|
@article{zhang2024tadar:,
title={TADAR: Thermal Array-based Detection and Ranging for Privacy-Preserving
Human Sensing},
author={Xie Zhang, Chenshu Wu},
journal={arXiv preprint arXiv:2409.17742},
year={2024},
archivePrefix={arXiv},
eprint={2409.17742},
primaryClass={cs.HC}
}
|
zhang2024tadar:
|
arxiv-662291
|
2409.17743
|
Information transmission under Markovian noise
|
<|reference_start|>Information transmission under Markovian noise: We consider an open quantum system undergoing Markovian dynamics, the latter being modelled by a discrete-time quantum Markov semigroup $\{\Phi^n\}_{n \in {\mathbb{N}}}$, resulting from the action of sequential uses of a quantum channel $\Phi$, with $n \in {\mathbb{N}}$ being the discrete time parameter. We find upper and lower bounds on the one-shot $\epsilon$-error information transmission capacities of $\Phi^n$ for a finite time $n\in \mathbb{N}$ and $\epsilon \in [0,1)$ in terms of the structure of the peripheral space of the channel $\Phi$. We consider transmission of $(i)$ classical information (both in the unassisted and entanglement-assisted settings); $(ii)$ quantum information and $(iii)$ private classical information.<|reference_end|>
|
arxiv
|
@article{singh2024information,
title={Information transmission under Markovian noise},
author={Satvik Singh and Nilanjana Datta},
journal={arXiv preprint arXiv:2409.17743},
year={2024},
archivePrefix={arXiv},
eprint={2409.17743},
primaryClass={quant-ph cs.IT math.IT}
}
|
singh2024information
|
arxiv-662292
|
2409.17744
|
Privacy for Quantum Annealing. Attack on Spin Reversal Transformations in the case of cryptanalysis
|
<|reference_start|>Privacy for Quantum Annealing. Attack on Spin Reversal Transformations in the case of cryptanalysis: This paper demonstrates that applying spin reversal transformations (SRT), commonly known as a sufficient method for privacy enhancement in problems solved using quantum annealing, does not guarantee privacy for all possible problems. We show how to recover the original problem from the Ising problem obtained using SRT when the resulting problem in Ising form represents the algebraic attack on the $E_0$ stream cipher. A small example is used to illustrate how to retrieve the original problem from the one transformed by SRT. Moreover, it is shown that our method is efficient even for full-scale problems.<|reference_end|>
|
arxiv
|
@article{leśniak2024privacy,
title={Privacy for Quantum Annealing. Attack on Spin Reversal Transformations
in the case of cryptanalysis},
author={Mateusz Le{\'s}niak and Micha{\l} Wro{\'n}ski},
journal={arXiv preprint arXiv:2409.17744},
year={2024},
archivePrefix={arXiv},
eprint={2409.17744},
primaryClass={cs.CR}
}
|
leśniak2024privacy
|
arxiv-662293
|
2409.17745
|
Few-shot Prompting for Pairwise Ranking: An Effective Non-Parametric Retrieval Model
|
<|reference_start|>Few-shot Prompting for Pairwise Ranking: An Effective Non-Parametric Retrieval Model: A supervised ranking model, despite its advantage of being effective, usually involves complex processing - typically multiple stages of task-specific pre-training and fine-tuning. This has motivated researchers to explore simpler pipelines leveraging large language models (LLMs) that are capable of working in a zero-shot manner. However, since zero-shot inference does not make use of a training set of pairs of queries and their relevant documents, its performance is mostly worse than that of supervised models, which are trained on such example pairs. Motivated by the existing findings that training examples generally improve zero-shot performance, in our work, we explore if this also applies to ranking models. More specifically, given a query and a pair of documents, the preference prediction task is improved by augmenting examples of preferences for similar queries from a training set. Our proposed pairwise few-shot ranker demonstrates consistent improvements over the zero-shot baseline on both in-domain (TREC DL) and out-domain (BEIR subset) retrieval benchmarks. Our method also achieves a close performance to that of a supervised model without requiring any complex training pipeline.<|reference_end|>
|
arxiv
|
@article{sinhababu2024few-shot,
title={Few-shot Prompting for Pairwise Ranking: An Effective Non-Parametric
Retrieval Model},
author={Nilanjan Sinhababu, Andrew Parry, Debasis Ganguly, Debasis Samanta,
Pabitra Mitra},
journal={arXiv preprint arXiv:2409.17745},
year={2024},
archivePrefix={arXiv},
eprint={2409.17745},
primaryClass={cs.IR cs.CL cs.LG}
}
|
sinhababu2024few-shot
|
arxiv-662294
|
2409.17746
|
Paraformer-v2: An improved non-autoregressive transformer for noise-robust speech recognition
|
<|reference_start|>Paraformer-v2: An improved non-autoregressive transformer for noise-robust speech recognition: Attention-based encoder-decoder models, e.g., the transformer and its variants, generate the output sequence in an autoregressive (AR) manner. Despite its superior performance, the AR model is computationally inefficient as its generation requires as many iterations as the output length. In this paper, we propose Paraformer-v2, an improved version of Paraformer, for fast, accurate, and noise-robust non-autoregressive speech recognition. In Paraformer-v2, we use a CTC module to extract the token embeddings, as the alternative to the continuous integrate-and-fire module in Paraformer. Extensive experiments demonstrate that Paraformer-v2 outperforms Paraformer on multiple datasets, especially on the English datasets (over 14% improvement on WER), and is more robust in noisy environments.<|reference_end|>
|
arxiv
|
@article{an2024paraformer-v2:,
title={Paraformer-v2: An improved non-autoregressive transformer for
noise-robust speech recognition},
author={Keyu An, Zerui Li, Zhifu Gao, Shiliang Zhang},
journal={arXiv preprint arXiv:2409.17746},
year={2024},
archivePrefix={arXiv},
eprint={2409.17746},
primaryClass={eess.AS cs.SD}
}
|
an2024paraformer-v2:
|
arxiv-662295
|
2409.17747
|
Text Image Generation for Low-Resource Languages with Dual Translation Learning
|
<|reference_start|>Text Image Generation for Low-Resource Languages with Dual Translation Learning: Scene text recognition in low-resource languages frequently faces challenges due to the limited availability of training datasets derived from real-world scenes. This study proposes a novel approach that generates text images in low-resource languages by emulating the style of real text images from high-resource languages. Our approach utilizes a diffusion model that is conditioned on binary states: ``synthetic'' and ``real.'' The training of this model involves dual translation tasks, where it transforms plain text images into either synthetic or real text images, based on the binary states. This approach not only effectively differentiates between the two domains but also facilitates the model's explicit recognition of characters in the target language. Furthermore, to enhance the accuracy and variety of generated text images, we introduce two guidance techniques: Fidelity-Diversity Balancing Guidance and Fidelity Enhancement Guidance. Our experimental results demonstrate that the text images generated by our proposed framework can significantly improve the performance of scene text recognition models for low-resource languages.<|reference_end|>
|
arxiv
|
@article{noguchi2024text,
title={Text Image Generation for Low-Resource Languages with Dual Translation
Learning},
author={Chihiro Noguchi, Shun Fukuda, Shoichiro Mihara, Masao Yamanaka},
journal={arXiv preprint arXiv:2409.17747},
year={2024},
archivePrefix={arXiv},
eprint={2409.17747},
primaryClass={cs.CV}
}
|
noguchi2024text
|
arxiv-662296
|
2409.17750
|
Are Transformers in Pre-trained LM A Good ASR Encoder? An Empirical Study
|
<|reference_start|>Are Transformers in Pre-trained LM A Good ASR Encoder? An Empirical Study: In this study, we delve into the efficacy of transformers within pre-trained language models (PLMs) when repurposed as encoders for Automatic Speech Recognition (ASR). Our underlying hypothesis posits that, despite being initially trained on text-based corpora, these transformers possess a remarkable capacity to extract effective features from the input sequence. This inherent capability, we argue, is transferrable to speech data, thereby augmenting the acoustic modeling ability of ASR. Through rigorous empirical analysis, our findings reveal a notable improvement in Character Error Rate (CER) and Word Error Rate (WER) across diverse ASR tasks when transformers from pre-trained LMs are incorporated. Particularly, they serve as an advantageous starting point for initializing ASR encoders. Furthermore, we uncover that these transformers, when integrated into a well-established ASR encoder, can significantly boost performance, especially in scenarios where profound semantic comprehension is pivotal. This underscores the potential of leveraging the semantic prowess embedded within pre-trained transformers to advance ASR systems' capabilities.<|reference_end|>
|
arxiv
|
@article{an2024are,
title={Are Transformers in Pre-trained LM A Good ASR Encoder? An Empirical
Study},
author={Keyu An, Shiliang Zhang, Zhijie Yan},
journal={arXiv preprint arXiv:2409.17750},
year={2024},
archivePrefix={arXiv},
eprint={2409.17750},
primaryClass={eess.AS cs.CL cs.SD}
}
|
an2024are
|
arxiv-662297
|
2409.17754
|
Byzantine-Robust Aggregation for Securing Decentralized Federated Learning
|
<|reference_start|>Byzantine-Robust Aggregation for Securing Decentralized Federated Learning: Federated Learning (FL) emerges as a distributed machine learning approach that addresses privacy concerns by training AI models locally on devices. Decentralized Federated Learning (DFL) extends the FL paradigm by eliminating the central server, thereby enhancing scalability and robustness through the avoidance of a single point of failure. However, DFL faces significant challenges in optimizing security, as most Byzantine-robust algorithms proposed in the literature are designed for centralized scenarios. In this paper, we present a novel Byzantine-robust aggregation algorithm to enhance the security of Decentralized Federated Learning environments, coined WFAgg. This proposal simultaneously handles the adverse conditions and strengthens the robustness of dynamic decentralized topologies by employing multiple filters to identify and mitigate Byzantine attacks. Experimental results demonstrate the effectiveness of the proposed algorithm in maintaining model accuracy and convergence in the presence of various Byzantine attack scenarios, outperforming state-of-the-art centralized Byzantine-robust aggregation schemes (such as Multi-Krum or Clustering). These algorithms are evaluated on an IID image classification problem in both centralized and decentralized scenarios.<|reference_end|>
|
arxiv
|
@article{cajaraville-aboy2024byzantine-robust,
title={Byzantine-Robust Aggregation for Securing Decentralized Federated
Learning},
author={Diego Cajaraville-Aboy, Ana Fern{\'a}ndez-Vilas, Rebeca P.
D{\'\i}az-Redondo, and Manuel Fern{\'a}ndez-Veiga},
journal={arXiv preprint arXiv:2409.17754},
year={2024},
archivePrefix={arXiv},
eprint={2409.17754},
primaryClass={cs.LG cs.AI}
}
|
cajaraville-aboy2024byzantine-robust
|
arxiv-662298
|
2409.17755
|
SECURE: Semantics-aware Embodied Conversation under Unawareness for Lifelong Robot Learning
|
<|reference_start|>SECURE: Semantics-aware Embodied Conversation under Unawareness for Lifelong Robot Learning: This paper addresses a challenging interactive task learning scenario we call rearrangement under unawareness: to manipulate a rigid-body environment in a context where the robot is unaware of a concept that's key to solving the instructed task. We propose SECURE, an interactive task learning framework designed to solve such problems by fixing a deficient domain model using embodied conversation. Through dialogue, the robot discovers and then learns to exploit unforeseen possibilities. Using SECURE, the robot not only learns from the user's corrective feedback when it makes a mistake, but it also learns to make strategic dialogue decisions for revealing useful evidence about novel concepts for solving the instructed task. Together, these abilities allow the robot to generalise to subsequent tasks using newly acquired knowledge. We demonstrate that a robot that is semantics-aware -- that is, it exploits the logical consequences of both sentence and discourse semantics in the learning and inference process -- learns to solve rearrangement under unawareness more effectively than a robot that lacks such capabilities.<|reference_end|>
|
arxiv
|
@article{rubavicius2024secure:,
title={SECURE: Semantics-aware Embodied Conversation under Unawareness for
Lifelong Robot Learning},
author={Rimvydas Rubavicius, Peter David Fagan, Alex Lascarides, Subramanian
Ramamoorthy},
journal={arXiv preprint arXiv:2409.17755},
year={2024},
archivePrefix={arXiv},
eprint={2409.17755},
primaryClass={cs.RO cs.AI cs.CL}
}
|
rubavicius2024secure:
|
arxiv-662299
|
2409.17756
|
Stackelberg Attack on Protocol Fee Governance
|
<|reference_start|>Stackelberg Attack on Protocol Fee Governance: We establish a Stackelberg attack by Liquidity Providers against Governance of an AMM, leveraging forking and commitments through a Grim Forker smart contract. We produce a dynamic, block-by-block model of AMM reserves and trading volume in the presence of competing forks, derive equilibrium conditions in the presence of protocol fees, and analyze Stackelberg equilibria with smart contract moves.<|reference_end|>
|
arxiv
|
@article{hajjar2024stackelberg,
title={Stackelberg Attack on Protocol Fee Governance},
author={Alexandre Hajjar},
journal={arXiv preprint arXiv:2409.17756},
year={2024},
archivePrefix={arXiv},
eprint={2409.17756},
primaryClass={cs.GT}
}
|
hajjar2024stackelberg
|
arxiv-662300
|
2409.17757
|
Integrating Hierarchical Semantic into Iterative Generation Model for Entailment Tree Explanation
|
<|reference_start|>Integrating Hierarchical Semantic into Iterative Generation Model for Entailment Tree Explanation: Manifestly and logically displaying the line of reasoning from evidence to answer is significant to explainable question answering (QA). The entailment tree exhibits these lines structurally, which is different from the self-explanation principle in large-scale language models. Existing methods rarely consider the semantic association of sentences between and within hierarchies within the tree structure, which is prone to apparent mistakes in combinations. In this work, we propose an architecture that integrates the Hierarchical Semantics of sentences under the framework of Controller-Generator (HiSCG) to explain answers. The HiSCG designs a hierarchical mapping between hypotheses and facts, discriminates the facts involved in tree constructions, and optimizes single-step entailments. To the best of our knowledge, we are the first to exploit the hierarchical semantics of sentences within the same layer and across adjacent layers to yield improvements. The proposed method achieves comparable performance on all three settings of the EntailmentBank dataset. The generalization results on two out-of-domain datasets also demonstrate the effectiveness of our method.<|reference_end|>
|
arxiv
|
@article{wang2024integrating,
title={Integrating Hierarchical Semantic into Iterative Generation Model for
Entailment Tree Explanation},
author={Qin Wang, Jianzhou Feng and Yiming Xu},
journal={arXiv preprint arXiv:2409.17757},
year={2024},
archivePrefix={arXiv},
eprint={2409.17757},
primaryClass={cs.CL cs.AI}
}
|
wang2024integrating
|