Dataset columns:
corpus_id: string, 7 to 12 characters
paper_id: string, 9 to 16 characters
title: string, 1 to 261 characters
abstract: string, 70 to 4.02k characters
source: string, 1 distinct value
bibtex: string, 208 to 20.9k characters
citation_key: string, 6 to 100 characters
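Each record below stores its abstract wrapped in <|reference_start|> and <|reference_end|> markers, with the title echoed before a colon. A minimal Python sketch for stripping that wrapper, assuming rows are available as plain dicts keyed by the column names above (the helper name and the sample row are illustrative, not part of the dataset):

```python
REF_START = "<|reference_start|>"
REF_END = "<|reference_end|>"

def clean_abstract(raw: str, title: str = "") -> str:
    """Strip the dataset's wrapper markers and, if the title is known, its leading echo."""
    body = raw
    if body.startswith(REF_START):
        body = body[len(REF_START):]
    if body.endswith(REF_END):
        body = body[:-len(REF_END)]
    if title and body.startswith(title + ":"):
        body = body[len(title) + 1:]
    return body.strip()

# Illustrative row shaped like the records in this dump.
row = {
    "title": "Radial Basis Operator Networks",
    "abstract": "<|reference_start|>Radial Basis Operator Networks: Operator networks are designed to ...<|reference_end|>",
}
print(clean_abstract(row["abstract"], row["title"]))
# -> "Operator networks are designed to ..."
```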
arxiv-666301
2410.04607
Forbidden induced subgraphs in iterative higher order line graphs
<|reference_start|>Forbidden induced subgraphs in iterative higher order line graphs: Let $G$ be a simple finite connected graph. The line graph $L(G)$ of a graph $G$ is the graph whose vertices are the edges of $G$, where $ef \in E(L(G))$ when $e \cap f \neq \emptyset$. The higher order line graphs are defined inductively as $L^1(G) = L(G)$ and $L^n(G) = L(L^{n-1}(G))$ for $n \geq 2$. In [Derived graphs and digraphs, Beiträge zur Graphentheorie (Teubner, Leipzig 1968), 17--33 (1968)], Beineke characterized line graphs in terms of nine forbidden subgraphs. Inspired by this result, in this paper, we characterize second order line graphs in terms of pure forbidden induced subgraphs. We also give a sufficient list of forbidden subgraphs for a graph $G$ such that $G$ is a higher order line graph. We characterize line graphs of all orders for graphs $G$ with $\Delta(G) = 3$ and $4$.<|reference_end|>
arxiv
@article{sanghi2024forbidden, title={Forbidden induced subgraphs in iterative higher order line graphs}, author={Aryan Sanghi, Devsi Bantva and Sudebkumar Prasant Pal}, journal={arXiv preprint arXiv:2410.04607}, year={2024}, archivePrefix={arXiv}, eprint={2410.04607}, primaryClass={math.CO cs.DM} }
sanghi2024forbidden
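The bibtex column in the record above is a single flattened BibTeX entry. A rough, stdlib-only sketch for pulling out the entry key and top-level fields (a dedicated BibTeX parser would be more robust; values with deeply nested braces may not be handled):

```python
import re

# Rough extraction of the entry key and top-level fields from one flattened
# @article{...} entry of the bibtex column.
FIELD_RE = re.compile(r"(\w+)\s*=\s*\{(.*?)\}\s*(?:,|\}$)", re.DOTALL)
KEY_RE = re.compile(r"@\w+\{([^,]+),")

def parse_bibtex_fields(entry: str) -> dict:
    entry = entry.strip()
    fields = {name: value.strip() for name, value in FIELD_RE.findall(entry)}
    key_match = KEY_RE.match(entry)
    if key_match:
        fields["citation_key"] = key_match.group(1)
    return fields

# Shortened version of the entry in the record above.
example = (
    "@article{sanghi2024forbidden, "
    "title={Forbidden induced subgraphs in iterative higher order line graphs}, "
    "author={Aryan Sanghi, Devsi Bantva and Sudebkumar Prasant Pal}, "
    "year={2024} }"
)
info = parse_bibtex_fields(example)
print(info["citation_key"], "|", info["title"])
# -> sanghi2024forbidden | Forbidden induced subgraphs in iterative higher order line graphs
```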
arxiv-666302
2410.04609
VISTA: A Visual and Textual Attention Dataset for Interpreting Multimodal Models
<|reference_start|>VISTA: A Visual and Textual Attention Dataset for Interpreting Multimodal Models: The recent developments in deep learning led to the integration of natural language processing (NLP) with computer vision, resulting in powerful integrated Vision and Language Models (VLMs). Despite their remarkable capabilities, these models are frequently regarded as black boxes within the machine learning research community. This raises a critical question: which parts of an image correspond to specific segments of text, and how can we decipher these associations? Understanding these connections is essential for enhancing model transparency, interpretability, and trustworthiness. To answer this question, we present an image-text aligned human visual attention dataset that maps specific associations between image regions and corresponding text segments. We then compare the internal heatmaps generated by VL models with this dataset, allowing us to analyze and better understand the model's decision-making process. This approach aims to enhance model transparency, interpretability, and trustworthiness by providing insights into how these models align visual and linguistic information. We conducted a comprehensive study on text-guided visual saliency detection in these VL models. This study aims to understand how different models prioritize and focus on specific visual elements in response to corresponding text segments, providing deeper insights into their internal mechanisms and improving our ability to interpret their outputs.<|reference_end|>
arxiv
@article{harshit2024vista:, title={VISTA: A Visual and Textual Attention Dataset for Interpreting Multimodal Models}, author={Harshit and Tolga Tasdizen}, journal={arXiv preprint arXiv:2410.04609}, year={2024}, archivePrefix={arXiv}, eprint={2410.04609}, primaryClass={cs.CV} }
harshit2024vista:
arxiv-666303
2410.04611
The $Z$-Curve as an $n$-Dimensional Hypersphere: Properties and Analysis
<|reference_start|>The $Z$-Curve as an $n$-Dimensional Hypersphere: Properties and Analysis: In this research, we introduce what seems to be a new mathematical object resulting from projecting the $n$-dimensional $Z$-curve onto an $n$-dimensional sphere. The first part presents the algorithm that enables this transition, and the second part focuses on studying the properties of the resulting object.<|reference_end|>
arxiv
@article{gonzalez2024the, title={The $Z$-Curve as an $n$-Dimensional Hypersphere: Properties and Analysis}, author={Diego Vazquez Gonzalez, Hsing-Kuo Pao}, journal={arXiv preprint arXiv:2410.04611}, year={2024}, archivePrefix={arXiv}, eprint={2410.04611}, primaryClass={cs.DM cs.DS math.GR} }
gonzalez2024the
arxiv-666304
2410.04612
Regressing the Relative Future: Efficient Policy Optimization for Multi-turn RLHF
<|reference_start|>Regressing the Relative Future: Efficient Policy Optimization for Multi-turn RLHF: Large Language Models (LLMs) have achieved remarkable success at tasks like summarization that involve a single turn of interaction. However, they can still struggle with multi-turn tasks like dialogue that require long-term planning. Previous works on multi-turn dialogue extend single-turn reinforcement learning from human feedback (RLHF) methods to the multi-turn setting by treating all prior dialogue turns as a long context. Such approaches suffer from covariate shift: the conversations in the training set have previous turns generated by some reference policy, which means that low training error may not necessarily correspond to good performance when the learner is actually in the conversation loop. In response, we introduce REgressing the RELative FUture (REFUEL), an efficient policy optimization approach designed to address multi-turn RLHF in LLMs. REFUEL employs a single model to estimate $Q$-values and trains on self-generated data, addressing the covariate shift issue. REFUEL frames the multi-turn RLHF problem as a sequence of regression tasks on iteratively collected datasets, enabling ease of implementation. Theoretically, we prove that REFUEL can match the performance of any policy covered by the training set. Empirically, we evaluate our algorithm by using Llama-3.1-70B-it to simulate a user in conversation with our model. REFUEL consistently outperforms state-of-the-art methods such as DPO and REBEL across various settings. Furthermore, despite having only 8 billion parameters, Llama-3-8B-it fine-tuned with REFUEL outperforms Llama-3.1-70B-it on long multi-turn dialogues. Implementation of REFUEL can be found at https://github.com/ZhaolinGao/REFUEL/, and models trained by REFUEL can be found at https://huggingface.co/Cornell-AGI.<|reference_end|>
arxiv
@article{gao2024regressing, title={Regressing the Relative Future: Efficient Policy Optimization for Multi-turn RLHF}, author={Zhaolin Gao, Wenhao Zhan, Jonathan D. Chang, Gokul Swamy, Kiant{\'e} Brantley, Jason D. Lee, Wen Sun}, journal={arXiv preprint arXiv:2410.04612}, year={2024}, archivePrefix={arXiv}, eprint={2410.04612}, primaryClass={cs.LG cs.AI cs.CL} }
gao2024regressing
arxiv-666305
2410.04614
Building Solidarity Amid Hostility: Experiences of Fat People in Online Communities
<|reference_start|>Building Solidarity Amid Hostility: Experiences of Fat People in Online Communities: Online communities are important spaces for members of marginalized groups to organize and support one another. To better understand the experiences of fat people -- a group whose marginalization often goes unrecognized -- in online communities, we conducted 12 semi-structured interviews with fat people. Our participants leveraged online communities to engage in consciousness raising around fat identity, learning to locate "the problem of being fat" not within themselves or their own bodies but rather in the oppressive design of the society around them. Participants were then able to use these communities to mitigate everyday experiences of anti-fatness, such as navigating hostile healthcare systems. However, to access these benefits, our participants had to navigate myriad sociotechnical harms, ranging from harassment to discriminatory algorithms. In light of these findings, we suggest that researchers and designers of online communities support selective fat visibility, consider fat people in the design of content moderation systems, and investigate algorithmic discrimination toward fat people. More broadly, we call on researchers and designers to contend with the social and material realities of fat experience, as opposed to the prevailing paradigm of treating fat people as problems to be solved in-and-of-themselves. This requires recognizing fat people as a marginalized social group and actively confronting anti-fatness as it is embedded in the design of technology.<|reference_end|>
arxiv
@article{payne2024building, title={Building Solidarity Amid Hostility: Experiences of Fat People in Online Communities}, author={Blakeley H. Payne, Jordan Taylor, Katta Spiel, Casey Fiesler}, journal={arXiv preprint arXiv:2410.04614}, year={2024}, archivePrefix={arXiv}, eprint={2410.04614}, primaryClass={cs.HC} }
payne2024building
arxiv-666306
2410.04616
LRQ-Fact: LLM-Generated Relevant Questions for Multimodal Fact-Checking
<|reference_start|>LRQ-Fact: LLM-Generated Relevant Questions for Multimodal Fact-Checking: Human fact-checkers have specialized domain knowledge that allows them to formulate precise questions to verify information accuracy. However, this expert-driven approach is labor-intensive and is not scalable, especially when dealing with complex multimodal misinformation. In this paper, we propose a fully-automated framework, LRQ-Fact, for multimodal fact-checking. Firstly, the framework leverages Vision-Language Models (VLMs) and Large Language Models (LLMs) to generate comprehensive questions and answers for probing multimodal content. Next, a rule-based decision-maker module evaluates both the original content and the generated questions and answers to assess the overall veracity. Extensive experiments on two benchmarks show that LRQ-Fact improves detection accuracy for multimodal misinformation. Moreover, we evaluate its generalizability across different model backbones, offering valuable insights for further refinement.<|reference_end|>
arxiv
@article{beigi2024lrq-fact:, title={LRQ-Fact: LLM-Generated Relevant Questions for Multimodal Fact-Checking}, author={Alimohammad Beigi, Bohan Jiang, Dawei Li, Tharindu Kumarage, Zhen Tan, Pouya Shaeri and Huan Liu}, journal={arXiv preprint arXiv:2410.04616}, year={2024}, archivePrefix={arXiv}, eprint={2410.04616}, primaryClass={cs.CL} }
beigi2024lrq-fact:
arxiv-666307
2410.04617
Evaluation of Code LLMs on Geospatial Code Generation
<|reference_start|>Evaluation of Code LLMs on Geospatial Code Generation: Software development support tools have been studied for a long time, with recent approaches using Large Language Models (LLMs) for code generation. These models can generate Python code for data science and machine learning applications. LLMs are helpful for software engineers because they increase productivity in daily work. An LLM can also serve as a "mentor" for inexperienced software developers, and be a viable learning support. High-quality code generation with LLMs can also be beneficial in geospatial data science. However, this domain poses different challenges, and code generation LLMs are typically not evaluated on geospatial tasks. Here, we show how we constructed an evaluation benchmark for code generation models, based on a selection of geospatial tasks. We categorised geospatial tasks based on their complexity and required tools. Then, we created a dataset with tasks that test model capabilities in spatial reasoning, spatial data processing, and geospatial tool usage. The dataset consists of specific coding problems that were manually created to ensure high quality. For every problem, we proposed a set of test scenarios that make it possible to automatically check the generated code for correctness. In addition, we tested a selection of existing code generation LLMs for code generation in the geospatial domain. We share our dataset and reproducible evaluation code on a public GitHub repository, arguing that this can serve as an evaluation benchmark for new LLMs in the future. Our dataset will hopefully contribute to the development of new models capable of solving geospatial coding tasks with high accuracy. These models will enable the creation of coding assistants tailored for geospatial applications.<|reference_end|>
arxiv
@article{gramacki2024evaluation, title={Evaluation of Code LLMs on Geospatial Code Generation}, author={Piotr Gramacki, Bruno Martins, Piotr Szyma{\'n}ski}, journal={arXiv preprint arXiv:2410.04617}, year={2024}, archivePrefix={arXiv}, eprint={2410.04617}, primaryClass={cs.CL cs.SE} }
gramacki2024evaluation
arxiv-666308
2410.04618
Towards Unsupervised Blind Face Restoration using Diffusion Prior
<|reference_start|>Towards Unsupervised Blind Face Restoration using Diffusion Prior: Blind face restoration methods have shown remarkable performance, particularly when trained on large-scale synthetic datasets with supervised learning. These datasets are often generated by simulating low-quality face images with a handcrafted image degradation pipeline. The models trained on such synthetic degradations, however, cannot deal with inputs of unseen degradations. In this paper, we address this issue by using only a set of input images, with unknown degradations and without ground truth targets, to fine-tune a restoration model that learns to map them to clean and contextually consistent outputs. We utilize a pre-trained diffusion model as a generative prior through which we generate high quality images from the natural image distribution while maintaining the input image content through consistency constraints. These generated images are then used as pseudo targets to fine-tune a pre-trained restoration model. Unlike many recent approaches that employ diffusion models at test time, we only do so during training and thus maintain an efficient inference-time performance. Extensive experiments show that the proposed approach can consistently improve the perceptual quality of pre-trained blind face restoration models while maintaining great consistency with the input contents. Our best model also achieves the state-of-the-art results on both synthetic and real-world datasets.<|reference_end|>
arxiv
@article{kuai2024towards, title={Towards Unsupervised Blind Face Restoration using Diffusion Prior}, author={Tianshu Kuai, Sina Honari, Igor Gilitschenski, Alex Levinshtein}, journal={arXiv preprint arXiv:2410.04618}, year={2024}, archivePrefix={arXiv}, eprint={2410.04618}, primaryClass={cs.CV} }
kuai2024towards
arxiv-666309
2410.04619
The Role of Social Support and Influencers in Social Media Communities
<|reference_start|>The Role of Social Support and Influencers in Social Media Communities: How can individual agents coordinate their actions to achieve a shared objective in distributed systems? This challenge spans economic, technical, and sociological domains, each confronting scalability, heterogeneity, and conflicts between individual and collective goals. In economic markets, a common currency facilitates coordination, raising the question of whether such mechanisms can be applied in other contexts. This paper explores this idea within social media platforms, where social support (likes, shares, comments) acts as a currency that shapes content production and sharing. We investigate two key questions: (1) Can social support serve as an effective coordination tool, and (2) What role do influencers play in content creation and dissemination? Our formal analysis shows that social support can coordinate user actions similarly to money in economic markets. Influencers serve dual roles, aggregating content and acting as information proxies, guiding content producers in large markets. While imperfections in information lead to a "price of influence" and suboptimal outcomes, this price diminishes as markets grow, improving social welfare. These insights provide a framework for understanding coordination in distributed environments, with applications in both sociological systems and multi-agent AI systems.<|reference_end|>
arxiv
@article{su2024the, title={The Role of Social Support and Influencers in Social Media Communities}, author={Junwei Su, Peter Marbach}, journal={arXiv preprint arXiv:2410.04619}, year={2024}, archivePrefix={arXiv}, eprint={2410.04619}, primaryClass={cs.SI cs.GT cs.MA} }
su2024the
arxiv-666310
2410.04620
Passage Retrieval of Polish Texts Using OKAPI BM25 and an Ensemble of Cross Encoders
<|reference_start|>Passage Retrieval of Polish Texts Using OKAPI BM25 and an Ensemble of Cross Encoders: Passage Retrieval has traditionally relied on lexical methods like TF-IDF and BM25. Recently, some neural network models have surpassed these methods in performance. However, these models face challenges, such as the need for large annotated datasets and adapting to new domains. This paper presents a winning solution to the Poleval 2023 Task 3: Passage Retrieval challenge, which involves retrieving passages of Polish texts in three domains: trivia, legal, and customer support. However, only the trivia domain was used for training and development data. The method used the OKAPI BM25 algorithm to retrieve documents and an ensemble of publicly available multilingual Cross Encoders for Reranking. Fine-tuning the reranker models slightly improved performance but only in the training domain, while it worsened in other domains.<|reference_end|>
arxiv
@article{pokrywka2024passage, title={Passage Retrieval of Polish Texts Using OKAPI BM25 and an Ensemble of Cross Encoders}, author={Jakub Pokrywka}, journal={Proceedings of the 18th Conference on Computer Science and Intelligence Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, D. \'Sl\k{e}zak (eds). ACSIS, Vol. 35, pages 1265-1269 (2023)}, year={2024}, doi={10.15439/2023F9253}, archivePrefix={arXiv}, eprint={2410.04620}, primaryClass={cs.CL cs.AI} }
pokrywka2024passage
arxiv-666311
2410.04621
Punctuation Prediction for Polish Texts using Transformers
<|reference_start|>Punctuation Prediction for Polish Texts using Transformers: Speech recognition systems typically output text lacking punctuation. However, punctuation is crucial for written text comprehension. To tackle this problem, Punctuation Prediction models are developed. This paper describes a solution for Poleval 2022 Task 1: Punctuation Prediction for Polish Texts, which scores 71.44 Weighted F1. The method utilizes a single HerBERT model finetuned to the competition data and an external dataset.<|reference_end|>
arxiv
@article{pokrywka2024punctuation, title={Punctuation Prediction for Polish Texts using Transformers}, author={Jakub Pokrywka}, journal={Proceedings of the 18th Conference on Computer Science and Intelligence Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, D. \'Sl\k{e}zak (eds). ACSIS, Vol. 35, pages 1251-1254 (2023)}, year={2024}, doi={10.15439/2023F1633}, archivePrefix={arXiv}, eprint={2410.04621}, primaryClass={cs.CL} }
pokrywka2024punctuation
arxiv-666312
2410.04628
Control Large Language Models via Divide and Conquer
<|reference_start|>Control Large Language Models via Divide and Conquer: This paper investigates controllable generation for large language models (LLMs) with prompt-based control, focusing on Lexically Constrained Generation (LCG). We systematically evaluate the performance of LLMs on satisfying lexical constraints with prompt-based control, as well as their efficacy in downstream applications. We conclude that LLMs face significant challenges in consistently satisfying lexical constraints with prompt-based control. We identify three key limitations of LLMs for LCG, including (1) position bias, where LLMs tend to satisfy constraints that appear in specific positions within the input; (2) low responsiveness to decoding parameters, which have minimal impact on the control of LLMs; and (3) difficulty handling the inherent complexity of certain constraints (e.g., compound words). To address these issues, we introduce a Divide and Conquer Generation strategy, effective for both white-box and black-box LLMs, to enhance LLM performance in LCG tasks, which demonstrates over 90% improvement in success rate on the most challenging LCG task. Our analysis provides valuable insights into the performance of LLMs in LCG with prompt-based control, and our proposed strategy offers a pathway to more sophisticated and customized text generation applications.<|reference_end|>
arxiv
@article{li2024control, title={Control Large Language Models via Divide and Conquer}, author={Bingxuan Li, Yiwei Wang, Tao Meng, Kai-Wei Chang, Nanyun Peng}, journal={arXiv preprint arXiv:2410.04628}, year={2024}, archivePrefix={arXiv}, eprint={2410.04628}, primaryClass={cs.CL} }
li2024control
arxiv-666313
2410.04631
DeepLTL: Learning to Efficiently Satisfy Complex LTL Specifications
<|reference_start|>DeepLTL: Learning to Efficiently Satisfy Complex LTL Specifications: Linear temporal logic (LTL) has recently been adopted as a powerful formalism for specifying complex, temporally extended tasks in reinforcement learning (RL). However, learning policies that efficiently satisfy arbitrary specifications not observed during training remains a challenging problem. Existing approaches suffer from several shortcomings: they are often only applicable to finite-horizon fragments of LTL, are restricted to suboptimal solutions, and do not adequately handle safety constraints. In this work, we propose a novel learning approach to address these concerns. Our method leverages the structure of B\"uchi automata, which explicitly represent the semantics of LTL specifications, to learn policies conditioned on sequences of truth assignments that lead to satisfying the desired formulae. Experiments in a variety of discrete and continuous domains demonstrate that our approach is able to zero-shot satisfy a wide range of finite- and infinite-horizon specifications, and outperforms existing methods in terms of both satisfaction probability and efficiency.<|reference_end|>
arxiv
@article{jackermeier2024deepltl:, title={DeepLTL: Learning to Efficiently Satisfy Complex LTL Specifications}, author={Mathias Jackermeier, Alessandro Abate}, journal={arXiv preprint arXiv:2410.04631}, year={2024}, archivePrefix={arXiv}, eprint={2410.04631}, primaryClass={cs.AI cs.LG} }
jackermeier2024deepltl:
arxiv-666314
2410.04633
A Cross-Lingual Meta-Learning Method Based on Domain Adaptation for Speech Emotion Recognition
<|reference_start|>A Cross-Lingual Meta-Learning Method Based on Domain Adaptation for Speech Emotion Recognition: Best-performing speech models are trained on large amounts of data in the language they are meant to work for. However, most languages have sparse data, making training models challenging. This shortage of data is even more prevalent in speech emotion recognition. Our work explores model performance with limited data, specifically for speech emotion recognition. Meta-learning specializes in improving few-shot learning. As a result, we employ meta-learning techniques on speech emotion recognition tasks, accent recognition, and person identification. To this end, we propose a series of improvements over the multistage meta-learning method. Unlike other works focusing on smaller models due to the high computational cost of meta-learning algorithms, we take a more practical approach. We incorporate a large pre-trained backbone and a prototypical network, making our methods more feasible and applicable. Our most notable contribution is an improved fine-tuning technique during meta-testing that significantly boosts the performance on out-of-distribution datasets. This result, together with incremental improvements from several other works, helped us achieve accuracy scores of 83.78% and 56.30% for Greek and Romanian speech emotion recognition datasets not included in the training or validation splits in the context of 4-way 5-shot learning.<|reference_end|>
arxiv
@article{ion2024a, title={A Cross-Lingual Meta-Learning Method Based on Domain Adaptation for Speech Emotion Recognition}, author={David-Gabriel Ion, R{\u{a}}zvan-Alexandru Sm{\u{a}}du, Dumitru-Clementin Cercel, Florin Pop, Mihaela-Claudia Cercel}, journal={arXiv preprint arXiv:2410.04633}, year={2024}, archivePrefix={arXiv}, eprint={2410.04633}, primaryClass={cs.CL} }
ion2024a
arxiv-666315
2410.04634
Is What You Ask For What You Get? Investigating Concept Associations in Text-to-Image Models
<|reference_start|>Is What You Ask For What You Get? Investigating Concept Associations in Text-to-Image Models: Text-to-image (T2I) models are increasingly used in impactful real-life applications. As such, there is a growing need to audit these models to ensure that they generate desirable, task-appropriate images. However, systematically inspecting the associations between prompts and generated content in a human-understandable way remains challenging. To address this, we propose \emph{Concept2Concept}, a framework where we characterize conditional distributions of vision language models using interpretable concepts and metrics that can be defined in terms of these concepts. This characterization allows us to use our framework to audit models and prompt-datasets. To demonstrate, we investigate several case studies of conditional distributions of prompts, such as user defined distributions or empirical, real world distributions. Lastly, we implement Concept2Concept as an open-source interactive visualization tool facilitating use by non-technical end-users. Warning: This paper contains discussions of harmful content, including CSAM and NSFW material, which may be disturbing to some readers.<|reference_end|>
arxiv
@article{magid2024is, title={Is What You Ask For What You Get? Investigating Concept Associations in Text-to-Image Models}, author={Salma Abdel Magid, Weiwei Pan, Simon Warchol, Grace Guo, Junsik Kim, Mahia Rahman, Hanspeter Pfister}, journal={arXiv preprint arXiv:2410.04634}, year={2024}, archivePrefix={arXiv}, eprint={2410.04634}, primaryClass={cs.CV} }
magid2024is
arxiv-666316
2410.04636
Multi-Tiered Self-Contrastive Learning for Medical Microwave Radiometry (MWR) Breast Cancer Detection
<|reference_start|>Multi-Tiered Self-Contrastive Learning for Medical Microwave Radiometry (MWR) Breast Cancer Detection: The pursuit of enhanced breast cancer detection and monitoring techniques is a paramount healthcare objective, driving the need for innovative imaging technologies and diagnostic approaches. This study introduces a novel multi-tiered self-contrastive model tailored for the application of microwave radiometry (MWR) breast cancer detection. Our approach encompasses three distinct models: Local-MWR (L-MWR), Regional-MWR (R-MWR), and Global-MWR (G-MWR), each engineered to analyze varying sub-regional comparisons within the breasts. These models are cohesively integrated through the Joint-MWR (J-MWR) network, which leverages the self-contrastive data generated at each analytical level to enhance detection capabilities. Employing a dataset comprising 4,932 cases of female patients, our research showcases the effectiveness of our proposed models. Notably, the J-MWR model distinguishes itself by achieving a Matthews correlation coefficient of 0.74 $\pm$ 0.018, surpassing existing MWR neural networks and contrastive methods. These results highlight the significant potential of self-contrastive learning techniques in improving both the diagnostic accuracy and generalizability of MWR-based breast cancer detection processes. Such advancements hold considerable promise for further investigative and clinical endeavors. The source code is available at: https://github.com/cgalaz01/self_contrastive_mwr<|reference_end|>
arxiv
@article{galazis2024multi-tiered, title={Multi-Tiered Self-Contrastive Learning for Medical Microwave Radiometry (MWR) Breast Cancer Detection}, author={Christoforos Galazis, Huiyi Wu, Igor Goryanin}, journal={arXiv preprint arXiv:2410.04636}, year={2024}, archivePrefix={arXiv}, eprint={2410.04636}, primaryClass={eess.IV cs.AI cs.CV} }
galazis2024multi-tiered
arxiv-666317
2410.04638
Provable Weak-to-Strong Generalization via Benign Overfitting
<|reference_start|>Provable Weak-to-Strong Generalization via Benign Overfitting: The classic teacher-student model in machine learning posits that a strong teacher supervises a weak student to improve the student's capabilities. We instead consider the inverted situation, where a weak teacher supervises a strong student with imperfect pseudolabels. This paradigm was recently brought forth by Burns et al.'23 and termed \emph{weak-to-strong generalization}. We theoretically investigate weak-to-strong generalization for binary and multilabel classification in a stylized overparameterized spiked covariance model with Gaussian covariates where the weak teacher's pseudolabels are asymptotically like random guessing. Under these assumptions, we provably identify two asymptotic phases of the strong student's generalization after weak supervision: (1) successful generalization and (2) random guessing. Our techniques should eventually extend to weak-to-strong multiclass classification. Towards doing so, we prove a tight lower tail inequality for the maximum of correlated Gaussians, which may be of independent interest. Understanding the multilabel setting reinforces the value of using logits for weak supervision when they are available.<|reference_end|>
arxiv
@article{wu2024provable, title={Provable Weak-to-Strong Generalization via Benign Overfitting}, author={David X. Wu, Anant Sahai}, journal={arXiv preprint arXiv:2410.04638}, year={2024}, archivePrefix={arXiv}, eprint={2410.04638}, primaryClass={cs.LG stat.ML} }
wu2024provable
arxiv-666318
2410.04639
Radial Basis Operator Networks
<|reference_start|>Radial Basis Operator Networks: Operator networks are designed to approximate nonlinear operators, which provide mappings between infinite-dimensional spaces such as function spaces. These networks are playing an increasingly important role in machine learning, with their most notable contributions in the field of scientific computing. Their significance stems from their ability to handle the type of data often encountered in scientific applications. For instance, in climate modeling or fluid dynamics, input data typically consists of discretized continuous fields (like temperature distributions or velocity fields). We introduce the radial basis operator network (RBON), which represents a significant advancement as the first operator network capable of learning an operator in both the time domain and frequency domain when adjusted to accept complex-valued inputs. Despite the small, single hidden-layer structure, the RBON boasts small $L^2$ relative test error for both in- and out-of-distribution data (OOD) of less than $1\times 10^{-7}$ in some benchmark cases. Moreover, the RBON maintains small error on OOD data from entirely different function classes from the training data.<|reference_end|>
arxiv
@article{kurz2024radial, title={Radial Basis Operator Networks}, author={Jason Kurz and Sean Oughton and Shitao Liu}, journal={arXiv preprint arXiv:2410.04639}, year={2024}, archivePrefix={arXiv}, eprint={2410.04639}, primaryClass={cs.LG} }
kurz2024radial
arxiv-666319
2410.04640
Unpacking Failure Modes of Generative Policies: Runtime Monitoring of Consistency and Progress
<|reference_start|>Unpacking Failure Modes of Generative Policies: Runtime Monitoring of Consistency and Progress: Robot behavior policies trained via imitation learning are prone to failure under conditions that deviate from their training data. Thus, algorithms that monitor learned policies at test time and provide early warnings of failure are necessary to facilitate scalable deployment. We propose Sentinel, a runtime monitoring framework that splits the detection of failures into two complementary categories: 1) Erratic failures, which we detect using statistical measures of temporal action consistency, and 2) task progression failures, where we use Vision Language Models (VLMs) to detect when the policy confidently and consistently takes actions that do not solve the task. Our approach has two key strengths. First, because learned policies exhibit diverse failure modes, combining complementary detectors leads to significantly higher accuracy at failure detection. Second, using a statistical temporal action consistency measure ensures that we quickly detect when multimodal, generative policies exhibit erratic behavior at negligible computational cost. In contrast, we only use VLMs to detect failure modes that are less time-sensitive. We demonstrate our approach in the context of diffusion policies trained on robotic mobile manipulation domains in both simulation and the real world. By unifying temporal consistency detection and VLM runtime monitoring, Sentinel detects 18% more failures than using either of the two detectors alone and significantly outperforms baselines, thus highlighting the importance of assigning specialized detectors to complementary categories of failure. Qualitative results are made available at https://sites.google.com/stanford.edu/sentinel.<|reference_end|>
arxiv
@article{agia2024unpacking, title={Unpacking Failure Modes of Generative Policies: Runtime Monitoring of Consistency and Progress}, author={Christopher Agia, Rohan Sinha, Jingyun Yang, Zi-ang Cao, Rika Antonova, Marco Pavone, Jeannette Bohg}, journal={arXiv preprint arXiv:2410.04640}, year={2024}, archivePrefix={arXiv}, eprint={2410.04640}, primaryClass={cs.RO cs.AI cs.LG} }
agia2024unpacking
arxiv-666320
2410.04642
The Optimization Landscape of SGD Across the Feature Learning Strength
<|reference_start|>The Optimization Landscape of SGD Across the Feature Learning Strength: We consider neural networks (NNs) where the final layer is down-scaled by a fixed hyperparameter $\gamma$. Recent work has identified $\gamma$ as controlling the strength of feature learning. As $\gamma$ increases, network evolution changes from "lazy" kernel dynamics to "rich" feature-learning dynamics, with a host of associated benefits including improved performance on common tasks. In this work, we conduct a thorough empirical investigation of the effect of scaling $\gamma$ across a variety of models and datasets in the online training setting. We first examine the interaction of $\gamma$ with the learning rate $\eta$, identifying several scaling regimes in the $\gamma$-$\eta$ plane which we explain theoretically using a simple model. We find that the optimal learning rate $\eta^*$ scales non-trivially with $\gamma$. In particular, $\eta^* \propto \gamma^2$ when $\gamma \ll 1$ and $\eta^* \propto \gamma^{2/L}$ when $\gamma \gg 1$ for a feed-forward network of depth $L$. Using this optimal learning rate scaling, we proceed with an empirical study of the under-explored "ultra-rich" $\gamma \gg 1$ regime. We find that networks in this regime display characteristic loss curves, starting with a long plateau followed by a drop-off, sometimes followed by one or more additional staircase steps. We find networks of different large $\gamma$ values optimize along similar trajectories up to a reparameterization of time. We further find that optimal online performance is often found at large $\gamma$ and could be missed if this hyperparameter is not tuned. Our findings indicate that analytical study of the large-$\gamma$ limit may yield useful insights into the dynamics of representation learning in performant models.<|reference_end|>
arxiv
@article{atanasov2024the, title={The Optimization Landscape of SGD Across the Feature Learning Strength}, author={Alexander Atanasov, Alexandru Meterez, James B. Simon, Cengiz Pehlevan}, journal={arXiv preprint arXiv:2410.04642}, year={2024}, archivePrefix={arXiv}, eprint={2410.04642}, primaryClass={cs.LG stat.ML} }
atanasov2024the
arxiv-666321
2410.04643
New Error Estimates for An Elliptic Distributed Optimal Control Problem with Pointwise Control Constraints
<|reference_start|>New Error Estimates for An Elliptic Distributed Optimal Control Problem with Pointwise Control Constraints: We derive error estimates for a linear-quadratic elliptic distributed optimal control problem with pointwise control constraints that can be applied to standard finite element methods and multiscale finite element methods.<|reference_end|>
arxiv
@article{brenner2024new, title={New Error Estimates for An Elliptic Distributed Optimal Control Problem with Pointwise Control Constraints}, author={Susanne C. Brenner and Li-yeng Sung}, journal={arXiv preprint arXiv:2410.04643}, year={2024}, archivePrefix={arXiv}, eprint={2410.04643}, primaryClass={math.OC cs.NA math.NA} }
brenner2024new
arxiv-666322
2410.04646
Mode-GS: Monocular Depth Guided Anchored 3D Gaussian Splatting for Robust Ground-View Scene Rendering
<|reference_start|>Mode-GS: Monocular Depth Guided Anchored 3D Gaussian Splatting for Robust Ground-View Scene Rendering: We present a novel-view rendering algorithm, Mode-GS, for ground-robot trajectory datasets. Our approach is based on using anchored Gaussian splats, which are designed to overcome the limitations of existing 3D Gaussian splatting algorithms. Prior neural rendering methods suffer from severe splat drift due to scene complexity and insufficient multi-view observation, and can fail to fix splats on the true geometry in ground-robot datasets. Our method integrates pixel-aligned anchors from monocular depths and generates Gaussian splats around these anchors using residual-form Gaussian decoders. To address the inherent scale ambiguity of monocular depth, we parameterize anchors with per-view depth-scales and employ scale-consistent depth loss for online scale calibration. Our method results in improved rendering performance, based on PSNR, SSIM, and LPIPS metrics, in ground scenes with free trajectory patterns, and achieves state-of-the-art rendering performance on the R3LIVE odometry dataset and the Tanks and Temples dataset.<|reference_end|>
arxiv
@article{lee2024mode-gs:, title={Mode-GS: Monocular Depth Guided Anchored 3D Gaussian Splatting for Robust Ground-View Scene Rendering}, author={Yonghan Lee, Jaehoon Choi, Dongki Jung, Jaeseong Yun, Soohyun Ryu, Dinesh Manocha, and Suyong Yeon}, journal={arXiv preprint arXiv:2410.04646}, year={2024}, archivePrefix={arXiv}, eprint={2410.04646}, primaryClass={cs.CV cs.RO} }
lee2024mode-gs:
arxiv-666323
2410.04648
AdaptDiff: Cross-Modality Domain Adaptation via Weak Conditional Semantic Diffusion for Retinal Vessel Segmentation
<|reference_start|>AdaptDiff: Cross-Modality Domain Adaptation via Weak Conditional Semantic Diffusion for Retinal Vessel Segmentation: Deep learning has shown remarkable performance in medical image segmentation. However, despite its promise, deep learning has many challenges in practice due to its inability to effectively transition to unseen domains, caused by the inherent data distribution shift and the lack of manual annotations to guide domain adaptation. To tackle this problem, we present an unsupervised domain adaptation (UDA) method named AdaptDiff that enables a retinal vessel segmentation network trained on fundus photography (FP) to produce satisfactory results on unseen modalities (e.g., OCT-A) without any manual labels. For all our target domains, we first adopt a segmentation model trained on the source domain to create pseudo-labels. With these pseudo-labels, we train a conditional semantic diffusion probabilistic model to represent the target domain distribution. Experimentally, we show that even with low quality pseudo-labels, the diffusion model can still capture the conditional semantic information. Subsequently, we sample on the target domain with binary vessel masks from the source domain to get paired data, i.e., target domain synthetic images conditioned on the binary vessel map. Finally, we fine-tune the pre-trained segmentation network using the synthetic paired data to mitigate the domain gap. We assess the effectiveness of AdaptDiff on seven publicly available datasets across three distinct modalities. Our results demonstrate a significant improvement in segmentation performance across all unseen datasets. Our code is publicly available at https://github.com/DeweiHu/AdaptDiff.<|reference_end|>
arxiv
@article{hu2024adaptdiff:, title={AdaptDiff: Cross-Modality Domain Adaptation via Weak Conditional Semantic Diffusion for Retinal Vessel Segmentation}, author={Dewei Hu, Hao Li, Han Liu, Jiacheng Wang, Xing Yao, Daiwei Lu, Ipek Oguz}, journal={arXiv preprint arXiv:2410.04648}, year={2024}, archivePrefix={arXiv}, eprint={2410.04648}, primaryClass={cs.CV} }
hu2024adaptdiff:
arxiv-666324
2410.04652
Multimodal 3D Fusion and In-Situ Learning for Spatially Aware AI
<|reference_start|>Multimodal 3D Fusion and In-Situ Learning for Spatially Aware AI: Seamless integration of virtual and physical worlds in augmented reality benefits from the system semantically "understanding" the physical environment. AR research has long focused on the potential of context awareness, demonstrating novel capabilities that leverage the semantics in the 3D environment for various object-level interactions. Meanwhile, the computer vision community has made leaps in neural vision-language understanding to enhance environment perception for autonomous tasks. In this work, we introduce a multimodal 3D object representation that unifies both semantic and linguistic knowledge with the geometric representation, enabling user-guided machine learning involving physical objects. We first present a fast multimodal 3D reconstruction pipeline that brings linguistic understanding to AR by fusing CLIP vision-language features into the environment and object models. We then propose "in-situ" machine learning, which, in conjunction with the multimodal representation, enables new tools and interfaces for users to interact with physical spaces and objects in a spatially and linguistically meaningful manner. We demonstrate the usefulness of the proposed system through two real-world AR applications on Magic Leap 2: a) spatial search in physical environments with natural language and b) an intelligent inventory system that tracks object changes over time. We also make our full implementation and demo data available at (https://github.com/cy-xu/spatially_aware_AI) to encourage further exploration and research in spatially aware AI.<|reference_end|>
arxiv
@article{xu2024multimodal, title={Multimodal 3D Fusion and In-Situ Learning for Spatially Aware AI}, author={Chengyuan Xu, Radha Kumaran, Noah Stier, Kangyou Yu, Tobias H{\"o}llerer}, journal={arXiv preprint arXiv:2410.04652}, year={2024}, archivePrefix={arXiv}, eprint={2410.04652}, primaryClass={cs.HC cs.AI cs.CV} }
xu2024multimodal
arxiv-666325
2410.04654
Study of Tomlinson-Harashima Precoders for Rate-Splitting-Based Cell-Free MIMO Networks
<|reference_start|>Study of Tomlinson-Harashima Precoders for Rate-Splitting-Based Cell-Free MIMO Networks: Cell-free (CF) systems have the potential to fulfill the increasing performance demand of future wireless applications by employing distributed access points (APs) that transmit the information over the same time-frequency resources. Due to the simultaneous transmission, multiuser interference (MUI) degrades the overall performance. To cope with the MUI in the downlink several linear precoding techniques, which rely on perfect channel state information at the transmitter (CSIT), have been studied. However, perfect CSIT is hardly obtained in practical systems. In this context, rate-splitting (RS) has arisen as a potential solution to deal with CSIT imperfections. In contrast to existing works, we explore non-linear precoding techniques along with RS-CF systems. Furthermore, the multi-branch (MB) concept is included to further enhance the overall performance of the system. Simulations show that the proposed MB-THP for RS-based CF systems outperforms the conventional linear precoders.<|reference_end|>
arxiv
@article{flores2024study, title={Study of Tomlinson-Harashima Precoders for Rate-Splitting-Based Cell-Free MIMO Networks}, author={A. Flores and R. C. de Lamare}, journal={arXiv preprint arXiv:2410.04654}, year={2024}, archivePrefix={arXiv}, eprint={2410.04654}, primaryClass={cs.IT math.IT} }
flores2024study
arxiv-666326
2410.04655
Graph Fourier Neural Kernels (G-FuNK): Learning Solutions of Nonlinear Diffusive Parametric PDEs on Multiple Domains
<|reference_start|>Graph Fourier Neural Kernels (G-FuNK): Learning Solutions of Nonlinear Diffusive Parametric PDEs on Multiple Domains: Predicting time-dependent dynamics of complex systems governed by non-linear partial differential equations (PDEs) with varying parameters and domains is a challenging task motivated by applications across various fields. We introduce a novel family of neural operators based on our Graph Fourier Neural Kernels, designed to learn solution generators for nonlinear PDEs in which the highest-order term is diffusive, across multiple domains and parameters. G-FuNK combines components that are parameter- and domain-adapted with others that are not. The domain-adapted components are constructed using a weighted graph on the discretized domain, where the graph Laplacian approximates the highest-order diffusive term, ensuring boundary condition compliance and capturing the parameter and domain-specific behavior. Meanwhile, the learned components transfer across domains and parameters using our variant Fourier Neural Operators. This approach naturally embeds geometric and directional information, improving generalization to new test domains without need for retraining the network. To handle temporal dynamics, our method incorporates an integrated ODE solver to predict the evolution of the system. Experiments show G-FuNK's capability to accurately approximate heat, reaction diffusion, and cardiac electrophysiology equations across various geometries and anisotropic diffusivity fields. G-FuNK achieves low relative errors on unseen domains and fiber fields, significantly accelerating predictions compared to traditional finite-element solvers.<|reference_end|>
arxiv
@article{loeffler2024graph, title={Graph Fourier Neural Kernels (G-FuNK): Learning Solutions of Nonlinear Diffusive Parametric PDEs on Multiple Domains}, author={Shane E. Loeffler, Zan Ahmad, Syed Yusuf Ali, Carolyna Yamamoto, Dan M. Popescu, Alana Yee, Yash Lal, Natalia Trayanova, Mauro Maggioni}, journal={arXiv preprint arXiv:2410.04655}, year={2024}, archivePrefix={arXiv}, eprint={2410.04655}, primaryClass={cs.LG cs.AI math.SP stat.ME stat.ML} }
loeffler2024graph
arxiv-666327
2410.04657
Contrastive Learning to Improve Retrieval for Real-world Fact Checking
<|reference_start|>Contrastive Learning to Improve Retrieval for Real-world Fact Checking: Recent work on fact-checking addresses a realistic setting where models incorporate evidence retrieved from the web to decide the veracity of claims. A bottleneck in this pipeline is in retrieving relevant evidence: traditional methods may surface documents directly related to a claim, but fact-checking complex claims requires more inferences. For instance, a document about how a vaccine was developed is relevant to addressing claims about what it might contain, even if it does not address them directly. We present Contrastive Fact-Checking Reranker (CFR), an improved retriever for this setting. By leveraging the AVeriTeC dataset, which annotates subquestions for claims with human written answers from evidence documents, we fine-tune Contriever with a contrastive objective based on multiple training signals, including distillation from GPT-4, evaluating subquestion answers, and gold labels in the dataset. We evaluate our model on both retrieval and end-to-end veracity judgments about claims. On the AVeriTeC dataset, we find a 6\% improvement in veracity classification accuracy. We also show our gains can be transferred to FEVER, ClaimDecomp, HotpotQA, and a synthetic dataset requiring retrievers to make inferences.<|reference_end|>
arxiv
@article{sriram2024contrastive, title={Contrastive Learning to Improve Retrieval for Real-world Fact Checking}, author={Aniruddh Sriram, Fangyuan Xu, Eunsol Choi, Greg Durrett}, journal={arXiv preprint arXiv:2410.04657}, year={2024}, archivePrefix={arXiv}, eprint={2410.04657}, primaryClass={cs.CL cs.AI cs.LG} }
sriram2024contrastive
arxiv-666328
2410.04659
ActiView: Evaluating Active Perception Ability for Multimodal Large Language Models
<|reference_start|>ActiView: Evaluating Active Perception Ability for Multimodal Large Language Models: Active perception, a crucial human capability, involves setting a goal based on the current understanding of the environment and performing actions to achieve that goal. Despite significant efforts in evaluating Multimodal Large Language Models (MLLMs), active perception has been largely overlooked. To address this gap, we propose a novel benchmark named ActiView to evaluate active perception in MLLMs. Since comprehensively assessing active perception is challenging, we focus on a specialized form of Visual Question Answering (VQA) that eases the evaluation yet remains challenging for existing MLLMs. Given an image, we restrict the perceptual field of a model, requiring it to actively zoom or shift its perceptual field based on reasoning to answer the question successfully. We conduct an extensive evaluation of 27 models, including proprietary and open-source models, and observe that the ability to read and comprehend multiple images simultaneously plays a significant role in enabling active perception. Results reveal a significant gap in the active perception capability of MLLMs, indicating that this area deserves more attention. We hope that our benchmark could help develop methods for MLLMs to understand multimodal inputs in more natural and holistic ways.<|reference_end|>
arxiv
@article{wang2024actiview:, title={ActiView: Evaluating Active Perception Ability for Multimodal Large Language Models}, author={Ziyue Wang, Chi Chen, Fuwen Luo, Yurui Dong, Yuanchi Zhang, Yuzhuang Xu, Xiaolong Wang, Peng Li, Yang Liu}, journal={arXiv preprint arXiv:2410.04659}, year={2024}, archivePrefix={arXiv}, eprint={2410.04659}, primaryClass={cs.CV} }
wang2024actiview:
arxiv-666329
2410.04660
Knowledge Graph Based Agent for Complex, Knowledge-Intensive QA in Medicine
<|reference_start|>Knowledge Graph Based Agent for Complex, Knowledge-Intensive QA in Medicine: Biomedical knowledge is uniquely complex and structured, requiring distinct reasoning strategies compared to other scientific disciplines like physics or chemistry. Biomedical scientists do not rely on a single approach to reasoning; instead, they use various strategies, including rule-based, prototype-based, and case-based reasoning. This diversity calls for flexible approaches that accommodate multiple reasoning strategies while leveraging in-domain knowledge. We introduce KGARevion, a knowledge graph (KG) based agent designed to address the complexity of knowledge-intensive medical queries. Upon receiving a query, KGARevion generates relevant triplets by using the knowledge base of the LLM. These triplets are then verified against a grounded KG to filter out erroneous information and ensure that only accurate, relevant data contribute to the final answer. Unlike RAG-based models, this multi-step process ensures robustness in reasoning while adapting to different models of medical reasoning. Evaluations on four gold-standard medical QA datasets show that KGARevion improves accuracy by over 5.2%, outperforming 15 models in handling complex medical questions. To test its capabilities, we curated three new medical QA datasets with varying levels of semantic complexity, where KGARevion achieved a 10.4% improvement in accuracy.<|reference_end|>
arxiv
@article{su2024knowledge, title={Knowledge Graph Based Agent for Complex, Knowledge-Intensive QA in Medicine}, author={Xiaorui Su, Yibo Wang, Shanghua Gao, Xiaolong Liu, Valentina Giunchiglia, Djork-Arn{\'e} Clevert, and Marinka Zitnik}, journal={arXiv preprint arXiv:2410.04660}, year={2024}, archivePrefix={arXiv}, eprint={2410.04660}, primaryClass={cs.AI} }
su2024knowledge
arxiv-666330
2410.04661
Federated Learning Nodes Can Reconstruct Peers' Image Data
<|reference_start|>Federated Learning Nodes Can Reconstruct Peers' Image Data: Federated learning (FL) is a privacy-preserving machine learning framework that enables multiple nodes to train models on their local data and periodically average weight updates to benefit from other nodes' training. Each node's goal is to collaborate with other nodes to improve the model's performance while keeping its training data private. However, this framework does not guarantee data privacy. Prior work has shown that the gradient-sharing steps in FL can be vulnerable to data reconstruction attacks from an honest-but-curious central server. In this work, we show that an honest-but-curious node/client can also launch attacks to reconstruct peers' image data in a centralized system, presenting a severe privacy risk. We demonstrate that a single client can silently reconstruct other clients' private images using diluted information available within consecutive updates. We leverage state-of-the-art diffusion models to enhance the perceptual quality and recognizability of the reconstructed images, further demonstrating the risk of information leakage at a semantic level. This highlights the need for more robust privacy-preserving mechanisms that protect against silent client-side attacks during federated training.<|reference_end|>
arxiv
@article{wilson2024federated, title={Federated Learning Nodes Can Reconstruct Peers' Image Data}, author={Ethan Wilson, Kai Yue, Chau-Wai Wong, and Huaiyu Dai}, journal={arXiv preprint arXiv:2410.04661}, year={2024}, archivePrefix={arXiv}, eprint={2410.04661}, primaryClass={cs.LG cs.CR} }
wilson2024federated
arxiv-666331
2410.04662
Path Planning and Robust Path Tracking Control of an Automated Parallel Parking Maneuver
<|reference_start|>Path Planning and Robust Path Tracking Control of an Automated Parallel Parking Maneuver: Self driving vehicles should be able to perform parallel parking or a similar maneuver successfully. With this motivation, the S shaped maneuverability test of the Ohio driver license examination is chosen here for automatic execution by a self driving vehicle with drive by wire capability and longitudinal and lateral controls. The Ohio maneuverability test requires the driver to start within an area enclosed by four pylons and the driver is asked to go to the left of the fifth pylon directly in front of the vehicle in a smooth and continuous manner while ending in a parallel direction to the initial one. The driver is then asked to go backwards to the starting location of the vehicle without stopping the vehicle or hitting the pylons. As a self driving vehicle should do a much better job repeatably than a driver, a high order polynomial path model is built along with speed profiling to start and stop smoothly at the ends of the path without large longitudinal and lateral accelerations. In contrast to the long horizon, higher speed path planning and path tracking control applications in the literature, this paper treats low speed and very short horizon path planning and path tracking control with stopping and direction reversal. The path is constructed using a segmented polynomial fit optimization routine that guarantees path curvature smoothness. A linear path tracking model is utilized as the basis of the designed control system consisting of a disturbance observer based curvature rejection filter and a speed scheduled, parameter space robust PID controller. Simulation studies indicate that it has better performance compared to other common control systems such as a standalone PID controller and combined PID and feedforward control.<|reference_end|>
arxiv
@article{cao2024path, title={Path Planning and Robust Path Tracking Control of an Automated Parallel Parking Maneuver}, author={Xincheng Cao, Levent Guvenc}, journal={arXiv preprint arXiv:2410.04662}, year={2024}, archivePrefix={arXiv}, eprint={2410.04662}, primaryClass={eess.SY cs.SY} }
cao2024path
arxiv-666332
2410.04663
Adversarial Multi-Agent Evaluation of Large Language Models through Iterative Debates
<|reference_start|>Adversarial Multi-Agent Evaluation of Large Language Models through Iterative Debates: This paper explores optimal architectures for evaluating the outputs of large language models (LLMs) using LLMs themselves. We propose a novel framework that interprets LLMs as advocates within an ensemble of interacting agents, allowing them to defend their answers and reach conclusions through a judge and jury system. This approach offers a more dynamic and comprehensive evaluation process compared to traditional human-based assessments or automated metrics. We discuss the motivation behind this framework, its key components, and comparative advantages. We also present a probabilistic model to evaluate the error reduction achieved by iterative advocate systems. Finally, we outline experiments to validate the effectiveness of multi-advocate architectures and discuss future research directions.<|reference_end|>
arxiv
@article{bandi2024adversarial, title={Adversarial Multi-Agent Evaluation of Large Language Models through Iterative Debates}, author={Chaithanya Bandi and Abir Harrasse}, journal={arXiv preprint arXiv:2410.04663}, year={2024}, archivePrefix={arXiv}, eprint={2410.04663}, primaryClass={cs.CL cs.LG cs.MA} }
bandi2024adversarial
arxiv-666333
2410.04664
A Universal Formulation for Path-Parametric Planning and Control
<|reference_start|>A Universal Formulation for Path-Parametric Planning and Control: This work presents a unified framework for path-parametric planning and control. This formulation is universal as it standardizes the entire spectrum of path-parametric techniques -- from traditional path following to more recent contouring or progress-maximizing Model Predictive Control and Reinforcement Learning -- under a single framework. The ingredients underlying this universality are twofold: First, we present a compact and efficient technique capable of computing singularity-free, smooth and differentiable moving frames. Second, we derive a spatial path parameterization of the Cartesian coordinates applicable to any arbitrary curve without prior assumptions on its parametric speed or moving frame, and that perfectly interplays with the aforementioned path parameterization method. The combination of these two ingredients leads to a planning and control framework that brings together existing path-parametric techniques in the literature. Aiming to unify all these approaches, we open source PACOR, a software library that implements the presented content, thereby providing a self-contained toolkit for the formulation of path-parametric planning and control methods.<|reference_end|>
arxiv
@article{arrizabalaga2024a, title={A Universal Formulation for Path-Parametric Planning and Control}, author={Jon Arrizabalaga, Markus Ryll}, journal={arXiv preprint arXiv:2410.04664}, year={2024}, archivePrefix={arXiv}, eprint={2410.04664}, primaryClass={cs.RO cs.SY eess.SY} }
arrizabalaga2024a
arxiv-666334
2410.04668
The role of interface boundary conditions and sampling strategies for Schwarz-based coupling of projection-based reduced order models
<|reference_start|>The role of interface boundary conditions and sampling strategies for Schwarz-based coupling of projection-based reduced order models: This paper presents and evaluates a framework for the coupling of subdomain-local projection-based reduced order models (PROMs) using the Schwarz alternating method following a domain decomposition (DD) of the spatial domain on which a given problem of interest is posed. In this approach, the solution on the full domain is obtained via an iterative process in which a sequence of subdomain-local problems are solved, with information propagating between subdomains through transmission boundary conditions (BCs). We explore several new directions involving the Schwarz alternating method aimed at maximizing the method's efficiency and flexibility, and demonstrate it on three challenging two-dimensional nonlinear hyperbolic problems: the shallow water equations, Burgers' equation, and the compressible Euler equations. We demonstrate that, for a cell-centered finite volume discretization and a non-overlapping DD, it is possible to obtain a stable and accurate coupled model utilizing Dirichlet-Dirichlet (rather than Robin-Robin or alternating Dirichlet-Neumann) transmission BCs on the subdomain boundaries. We additionally explore the impact of boundary sampling when utilizing the Schwarz alternating method to couple subdomain-local hyper-reduced PROMs. Our numerical results suggest that the proposed methodology has the potential to improve PROM accuracy by enabling the spatial localization of these models via domain decomposition, and achieve up to two orders of magnitude speedup over equivalent coupled full order model solutions and moderate speedups over analogous monolithic solutions.<|reference_end|>
arxiv
@article{wentland2024the, title={The role of interface boundary conditions and sampling strategies for Schwarz-based coupling of projection-based reduced order models}, author={Christopher R. Wentland, Francesco Rizzi, Joshua Barnett, Irina Tezaur}, journal={arXiv preprint arXiv:2410.04668}, year={2024}, archivePrefix={arXiv}, eprint={2410.04668}, primaryClass={math.NA cs.LG cs.NA math-ph math.MP} }
wentland2024the
arxiv-666335
2410.04671
CAR: Controllable Autoregressive Modeling for Visual Generation
<|reference_start|>CAR: Controllable Autoregressive Modeling for Visual Generation: Controllable generation, which enables fine-grained control over generated outputs, has emerged as a critical focus in visual generative models. Currently, there are two primary technical approaches in visual generation: diffusion models and autoregressive models. Diffusion models, as exemplified by ControlNet and T2I-Adapter, offer advanced control mechanisms, whereas autoregressive models, despite showcasing impressive generative quality and scalability, remain underexplored in terms of controllability and flexibility. In this study, we introduce Controllable AutoRegressive Modeling (CAR), a novel, plug-and-play framework that integrates conditional control into multi-scale latent variable modeling, enabling efficient control generation within a pre-trained visual autoregressive model. CAR progressively refines and captures control representations, which are injected into each autoregressive step of the pre-trained model to guide the generation process. Our approach demonstrates excellent controllability across various types of conditions and delivers higher image quality compared to previous methods. Additionally, CAR achieves robust generalization with significantly fewer training resources compared to those required for pre-training the model. To the best of our knowledge, we are the first to propose a control framework for pre-trained autoregressive visual generation models.<|reference_end|>
arxiv
@article{yao2024car:, title={CAR: Controllable Autoregressive Modeling for Visual Generation}, author={Ziyu Yao, Jialin Li, Yifeng Zhou, Yong Liu, Xi Jiang, Chengjie Wang, Feng Zheng, Yuexian Zou, Lei Li}, journal={arXiv preprint arXiv:2410.04671}, year={2024}, archivePrefix={arXiv}, eprint={2410.04671}, primaryClass={cs.CV} }
yao2024car:
arxiv-666336
2410.04678
Deciphering Refactoring Branch Dynamics in Modern Code Review: An Empirical Study on Qt
<|reference_start|>Deciphering Refactoring Branch Dynamics in Modern Code Review: An Empirical Study on Qt: Context: Modern code review is a widely employed technique in both industrial and open-source projects, serving to enhance software quality, share knowledge, and ensure compliance with coding standards and guidelines. While code review is extensively studied for its general challenges, best practices, outcomes, and socio-technical aspects, little attention has been paid to how refactoring is reviewed and what developers prioritize when reviewing refactored code in the Refactor branch. Objective: The goal is to understand the review process for refactoring changes in the Refactor branch and to identify what developers care about when reviewing code in this branch. Method: In this study, we present a quantitative and qualitative examination to understand the main criteria developers use to decide whether to accept or reject refactored code submissions and identify the challenges inherent in this process. Results: Analyzing 2,154 refactoring and non-refactoring reviews across Qt open-source projects, we find that reviews involving refactoring from the Refactor branch take significantly less time to resolve in terms of code review efforts. Additionally, documentation of developer intent is notably sparse within the Refactor branch compared to other branches. Furthermore, through thematic analysis of a substantial sample of refactoring code review discussions, we construct a comprehensive taxonomy consisting of 12 refactoring review criteria.<|reference_end|>
arxiv
@article{alomar2024deciphering, title={Deciphering Refactoring Branch Dynamics in Modern Code Review: An Empirical Study on Qt}, author={Eman Abdullah AlOmar}, journal={arXiv preprint arXiv:2410.04678}, year={2024}, archivePrefix={arXiv}, eprint={2410.04678}, primaryClass={cs.SE} }
alomar2024deciphering
arxiv-666337
2410.04680
Next Best Sense: Guiding Vision and Touch with FisherRF for 3D Gaussian Splatting
<|reference_start|>Next Best Sense: Guiding Vision and Touch with FisherRF for 3D Gaussian Splatting: We propose a framework for active next best view and touch selection for robotic manipulators using 3D Gaussian Splatting (3DGS). 3DGS is emerging as a useful explicit 3D scene representation for robotics, as it has the ability to represent scenes in a both photorealistic and geometrically accurate manner. However, in real-world, online robotic scenes where the number of views is limited given efficiency requirements, random view selection for 3DGS becomes impractical as views are often overlapping and redundant. We address this issue by proposing an end-to-end online training and active view selection pipeline, which enhances the performance of 3DGS in few-view robotics settings. We first elevate the performance of few-shot 3DGS with a novel semantic depth alignment method using Segment Anything Model 2 (SAM2) that we supplement with Pearson depth and surface normal loss to improve color and depth reconstruction of real-world scenes. We then extend FisherRF, a next-best-view selection method for 3DGS, to select views and touch poses based on depth uncertainty. We perform online view selection on a real robot system during live 3DGS training. We motivate our improvements to few-shot GS scenes, and extend depth-based FisherRF to them, where we demonstrate both qualitative and quantitative improvements on challenging robot scenes. For more information, please see our project page at https://armlabstanford.github.io/next-best-sense.<|reference_end|>
arxiv
@article{strong2024next, title={Next Best Sense: Guiding Vision and Touch with FisherRF for 3D Gaussian Splatting}, author={Matthew Strong, Boshu Lei, Aiden Swann, Wen Jiang, Kostas Daniilidis, Monroe Kennedy III}, journal={arXiv preprint arXiv:2410.04680}, year={2024}, archivePrefix={arXiv}, eprint={2410.04680}, primaryClass={cs.RO cs.CV} }
strong2024next
arxiv-666338
2410.04681
Coverage Analysis for 3D Indoor Terahertz Communication System Over Fluctuating Two-Ray Fading Channels
<|reference_start|>Coverage Analysis for 3D Indoor Terahertz Communication System Over Fluctuating Two-Ray Fading Channels: In this paper, we develop a novel analytical framework for a three-dimensional (3D) indoor terahertz (THz) communication system. Our proposed model incorporates more accurate modeling of wall blockages via Manhattan line processes and precise modeling of THz fading channels via a fluctuating two-ray (FTR) channel model. We also account for traditional unique features of THz, such as molecular absorption loss, user blockages, and 3D directional antenna beams. Moreover, we model locations of access points (APs) using a Poisson point process and adopt the nearest line-of-sight AP association strategy. Due to the high penetration loss caused by wall blockages, we consider that a user equipment (UE) and its associated AP and interfering APs are all in the same rectangular area, i.e., a room. Based on the proposed rectangular area model, we evaluate the impact of the UE's location on the distance to its associated AP. We then develop a tractable method to derive a new expression for the coverage probability by examining the interference from interfering APs and considering the FTR fading experienced by THz communications. Aided by simulation results, we validate our analysis and demonstrate that the UE's location has a pronounced impact on its coverage probability. Additionally, we find that the optimal AP density is determined by both the UE's location and the room size, which provides valuable insights for meeting the coverage requirements of future THz communication system deployment.<|reference_end|>
arxiv
@article{tang2024coverage, title={Coverage Analysis for 3D Indoor Terahertz Communication System Over Fluctuating Two-Ray Fading Channels}, author={Zhifeng Tang, Nan Yang, Salman Durrani, Xiangyun Zhou, Markku Juntti, and Josep Miquel Jornet}, journal={arXiv preprint arXiv:2410.04681}, year={2024}, archivePrefix={arXiv}, eprint={2410.04681}, primaryClass={cs.IT eess.SP math.IT} }
tang2024coverage
arxiv-666339
2410.04682
On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning
<|reference_start|>On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning: Test-time adaptation (TTA) updates the model weights during the inference stage using testing data to enhance generalization. However, this practice exposes TTA to adversarial risks. Existing studies have shown that when TTA is updated with crafted adversarial test samples, also known as test-time poisoned data, the performance on benign samples can deteriorate. Nonetheless, the perceived adversarial risk may be overstated if the poisoned data is generated under overly strong assumptions. In this work, we first review realistic assumptions for test-time data poisoning, including white-box versus grey-box attacks, access to benign data, attack budget, and more. We then propose an effective and realistic attack method that better produces poisoned samples without access to benign samples, and derive an effective in-distribution attack objective. We also design two TTA-aware attack objectives. Our benchmarks of existing attack methods reveal that the TTA methods are more robust than previously believed. In addition, we analyze effective defense strategies to help develop adversarially robust TTA methods.<|reference_end|>
arxiv
@article{su2024on, title={On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning}, author={Yongyi Su and Yushu Li and Nanqing Liu and Kui Jia and Xulei Yang and Chuan-Sheng Foo and Xun Xu}, journal={arXiv preprint arXiv:2410.04682}, year={2024}, archivePrefix={arXiv}, eprint={2410.04682}, primaryClass={cs.LG cs.CV} }
su2024on
arxiv-666340
2410.04683
Towards Measuring Goal-Directedness in AI Systems
<|reference_start|>Towards Measuring Goal-Directedness in AI Systems: Recent advances in deep learning have brought attention to the possibility of creating advanced, general AI systems that outperform humans across many tasks. However, if these systems pursue unintended goals, there could be catastrophic consequences. A key prerequisite for AI systems pursuing unintended goals is whether they will behave in a coherent and goal-directed manner in the first place, optimizing for some unknown goal; there exists significant research trying to evaluate systems for said behaviors. However, the most rigorous definitions of goal-directedness we currently have are difficult to compute in real-world settings. Drawing upon this previous literature, we explore policy goal-directedness within reinforcement learning (RL) environments. In our findings, we propose a different family of definitions of the goal-directedness of a policy that analyze whether it is well-modeled as near-optimal for many (sparse) reward functions. We operationalize this preliminary definition of goal-directedness and test it in toy Markov decision process (MDP) environments. Furthermore, we explore how goal-directedness could be measured in frontier large-language models (LLMs). Our contribution is a definition of goal-directedness that is simpler and more easily computable in order to approach the question of whether AI systems could pursue dangerous goals. We recommend further exploration of measuring coherence and goal-directedness, based on our findings.<|reference_end|>
arxiv
@article{xu2024towards, title={Towards Measuring Goal-Directedness in AI Systems}, author={Dylan Xu, Juan-Pablo Rivera}, journal={arXiv preprint arXiv:2410.04683}, year={2024}, archivePrefix={arXiv}, eprint={2410.04683}, primaryClass={cs.LG cs.AI} }
xu2024towards
arxiv-666341
2410.04684
Combining Structural and Unstructured Data: A Topic-based Finite Mixture Model for Insurance Claim Prediction
<|reference_start|>Combining Structural and Unstructured Data: A Topic-based Finite Mixture Model for Insurance Claim Prediction: Modeling insurance claim amounts and classifying claims into different risk levels are critical yet challenging tasks. Traditional predictive models for insurance claims often overlook the valuable information embedded in claim descriptions. This paper introduces a novel approach by developing a joint mixture model that integrates both claim descriptions and claim amounts. Our method establishes a probabilistic link between textual descriptions and loss amounts, enhancing the accuracy of claims clustering and prediction. In our proposed model, the latent topic/component indicator serves as a proxy for both the thematic content of the claim description and the component of loss distributions. Specifically, conditioned on the topic/component indicator, the claim description follows a multinomial distribution, while the claim amount follows a component loss distribution. We propose two methods for model calibration: an EM algorithm for maximum a posteriori estimates, and an MH-within-Gibbs sampler algorithm for the posterior distribution. The empirical study demonstrates that the proposed methods work effectively, providing interpretable claims clustering and prediction.<|reference_end|>
arxiv
@article{hou2024combining, title={Combining Structural and Unstructured Data: A Topic-based Finite Mixture Model for Insurance Claim Prediction}, author={Yanxi Hou, Xiaolan Xia, Guangyuan Gao}, journal={arXiv preprint arXiv:2410.04684}, year={2024}, archivePrefix={arXiv}, eprint={2410.04684}, primaryClass={stat.AP cs.LG} }
hou2024combining
arxiv-666342
2410.04686
Does the Infamous Pie Chart Really Hurt Decision-Making in the Real World? Assessing the Role of Visualization in High-Level Academic Decisions
<|reference_start|>Does the Infamous Pie Chart Really Hurt Decision-Making in the Real World? Assessing the Role of Visualization in High-Level Academic Decisions: Visualization design influences how people perceive data patterns, yet most research focuses on low-level analytic tasks, such as finding correlations. Existing work has criticized pie charts for their perceptual limitations. However, simpler visualizations like pie and bar charts are widely used for real-world decision-making, such as choosing schools or advisors. As a case study, we examine whether pie charts hurt high-level decisions compared to bar charts, using the website that presents academic data, CSRankings.org. By comparing the impact of pie charts versus bar charts on users' impressions of faculty productivity and projected workload, we found no significant differences in decisions among over 300 participants. Our findings challenge traditional views on visualization design, emphasizing the need for real-world use cases in evaluations.<|reference_end|>
arxiv
@article{li2024does, title={Does the Infamous Pie Chart Really Hurt Decision-Making in the Real World? Assessing the Role of Visualization in High-Level Academic Decisions}, author={Yixuan Li, Emery D. Berger, Minsuk Kahng, Cindy Xiong Bearfield}, journal={arXiv preprint arXiv:2410.04686}, year={2024}, archivePrefix={arXiv}, eprint={2410.04686}, primaryClass={cs.HC} }
li2024does
arxiv-666343
2410.04687
Uni-polarized RIS Beamforming for Improving Connectivity of Multi-RIS-Assisted D2D Networks
<|reference_start|>Uni-polarized RIS Beamforming for Improving Connectivity of Multi-RIS-Assisted D2D Networks: This paper introduces a novel method to enhance the connectivity of multi-reconfigurable intelligent surface-assisted device-to-device networks, referred to as multi-RIS-assisted D2D networks, through a unique phase shift determination. The proposed method aims to optimize the power-domain array factor (PDAF), targeting specific azimuth angles of reliable user equipments (UEs) and enhancing network connectivity. We formulate an optimization problem that jointly optimizes RIS beamforming design, RIS-aided link selection, and RIS positioning. This problem is a mixed-integer, non-binary programming problem. The optimization problem is divided into two sub-problems, which are solved individually and iteratively. The first sub-problem of RIS-aided link selection is solved using an efficient perturbation method while developing a genetic algorithm (GA) to obtain the RIS beamforming design. The GA optimizes the RIS phase shift to generate multiple RIS-aided narrowbeams that exhibit significant PDAF towards azimuth angles of interest while minimizing PDAF towards undesired azimuth angles. The second sub-problem of RIS positioning is addressed using the Adam optimizer. Numerical simulations verify the superiority of the proposed scheme in improving network connectivity compared to other schemes, including those utilizing distributed small RISs, each generating one RIS-aided link.<|reference_end|>
arxiv
@article{saif2024uni-polarized, title={Uni-polarized RIS Beamforming for Improving Connectivity of Multi-RIS-Assisted D2D Networks}, author={Mohammed Saif, Mohammad Javad-Kalbasi, Shahrokh Valaee}, journal={arXiv preprint arXiv:2410.04687}, year={2024}, archivePrefix={arXiv}, eprint={2410.04687}, primaryClass={cs.IT eess.SP math.IT} }
saif2024uni-polarized
arxiv-666344
2410.04689
Low-Rank Continual Pyramid Vision Transformer: Incrementally Segment Whole-Body Organs in CT with Light-Weighted Adaptation
<|reference_start|>Low-Rank Continual Pyramid Vision Transformer: Incrementally Segment Whole-Body Organs in CT with Light-Weighted Adaptation: Deep segmentation networks achieve high performance when trained on specific datasets. However, in clinical practice, it is often desirable that pretrained segmentation models can be dynamically extended to enable segmenting new organs without access to previous training datasets or without training from scratch. This would ensure a much more efficient model development and deployment paradigm accounting for the patient privacy and data storage issues. This clinically preferred process can be viewed as a continual semantic segmentation (CSS) problem. Previous CSS works would either experience catastrophic forgetting or lead to unaffordable memory costs as models expand. In this work, we propose a new continual whole-body organ segmentation model with light-weighted low-rank adaptation (LoRA). We first train and freeze a pyramid vision transformer (PVT) base segmentation model on the initial task, then continually add light-weighted trainable LoRA parameters to the frozen model for each new learning task. Through a holistic exploration of the architecture modification, we identify the three most important layers (i.e., patch-embedding, multi-head attention and feed forward layers) that are critical in adapting to the new segmentation tasks, while retaining the majority of the pretrained parameters fixed. Our proposed model continually segments new organs without catastrophic forgetting while maintaining a low parameter increasing rate. Continually trained and tested on four datasets covering different body parts of a total of 121 organs, results show that our model achieves high segmentation accuracy, closely reaching the PVT and nnUNet upper bounds, and significantly outperforms other regularization-based CSS methods. Compared to the leading architecture-based CSS method, our model has a substantially lower parameter increasing rate while achieving comparable performance.<|reference_end|>
arxiv
@article{zhu2024low-rank, title={Low-Rank Continual Pyramid Vision Transformer: Incrementally Segment Whole-Body Organs in CT with Light-Weighted Adaptation}, author={Vince Zhu, Zhanghexuan Ji, Dazhou Guo, Puyang Wang, Yingda Xia, Le Lu, Xianghua Ye, Wei Zhu, Dakai Jin}, journal={arXiv preprint arXiv:2410.04689}, year={2024}, archivePrefix={arXiv}, eprint={2410.04689}, primaryClass={cs.CV} }
zhu2024low-rank
arxiv-666345
2410.04690
SegINR: Segment-wise Implicit Neural Representation for Sequence Alignment in Neural Text-to-Speech
<|reference_start|>SegINR: Segment-wise Implicit Neural Representation for Sequence Alignment in Neural Text-to-Speech: We present SegINR, a novel approach to neural Text-to-Speech (TTS) that addresses sequence alignment without relying on an auxiliary duration predictor and complex autoregressive (AR) or non-autoregressive (NAR) frame-level sequence modeling. SegINR simplifies the process by converting text sequences directly into frame-level features. It leverages an optimal text encoder to extract embeddings, transforming each into a segment of frame-level features using a conditional implicit neural representation (INR). This method, named segment-wise INR (SegINR), models temporal dynamics within each segment and autonomously defines segment boundaries, reducing computational costs. We integrate SegINR into a two-stage TTS framework, using it for semantic token prediction. Our experiments in zero-shot adaptive TTS scenarios demonstrate that SegINR outperforms conventional methods in speech quality with computational efficiency.<|reference_end|>
arxiv
@article{kim2024seginr:, title={SegINR: Segment-wise Implicit Neural Representation for Sequence Alignment in Neural Text-to-Speech}, author={Minchan Kim, Myeonghun Jeong, Joun Yeop Lee, Nam Soo Kim}, journal={arXiv preprint arXiv:2410.04690}, year={2024}, archivePrefix={arXiv}, eprint={2410.04690}, primaryClass={eess.AS cs.LG} }
kim2024seginr:
arxiv-666346
2410.04691
Deeper Insights Without Updates: The Power of In-Context Learning Over Fine-Tuning
<|reference_start|>Deeper Insights Without Updates: The Power of In-Context Learning Over Fine-Tuning: Fine-tuning and in-context learning (ICL) are two prevalent methods in imbuing large language models with task-specific knowledge. It is commonly believed that fine-tuning can surpass ICL given sufficient training samples as it allows the model to adjust its internal parameters based on the data. However, this paper presents a counterintuitive finding: For tasks with implicit patterns, ICL captures these patterns significantly better than fine-tuning. We developed several datasets featuring implicit patterns, such as sequences determining answers through parity or identifying reducible terms in calculations. We then evaluated the models' understanding of these patterns under both fine-tuning and ICL across models ranging from 0.5B to 7B parameters. The results indicate that models employing ICL can quickly grasp deep patterns and significantly improve accuracy. In contrast, fine-tuning, despite utilizing thousands of times more training samples than ICL, achieved only limited improvements. We also propose a circuit shift theory from a mechanistic interpretability perspective to explain why ICL wins.<|reference_end|>
arxiv
@article{yin2024deeper, title={Deeper Insights Without Updates: The Power of In-Context Learning Over Fine-Tuning}, author={Qingyu Yin, Xuzheng He, Luoao Deng, Chak Tou Leong, Fan Wang, Yanzhao Yan, Xiaoyu Shen, Qiang Zhang}, journal={arXiv preprint arXiv:2410.04691}, year={2024}, archivePrefix={arXiv}, eprint={2410.04691}, primaryClass={cs.LG cs.CL} }
yin2024deeper
arxiv-666347
2410.04692
A Clifford Algebraic Approach to E(n)-Equivariant High-order Graph Neural Networks
<|reference_start|>A Clifford Algebraic Approach to E(n)-Equivariant High-order Graph Neural Networks: Designing neural network architectures that can handle data symmetry is crucial. This is especially important for geometric graphs, whose properties are equivariant under Euclidean transformations. Current equivariant graph neural networks (EGNNs), particularly those using message passing, have a limitation in expressive power. Recent high-order graph neural networks can overcome this limitation, yet they lack equivariance properties, representing a notable drawback in certain applications in chemistry and physical sciences. In this paper, we introduce the Clifford Group Equivariant Graph Neural Networks (CG-EGNNs), a novel EGNN that enhances high-order message passing by integrating high-order local structures in the context of Clifford algebras. As a key benefit of using Clifford algebras, CG-EGNN can learn functions that capture equivariance from positional features. By adopting the high-order message passing mechanism, CG-EGNN gains richer information from neighbors, thus improving model performance. Furthermore, we establish the universality property of the $k$-hop message passing framework, showcasing greater expressive power of CG-EGNNs with an additional $k$-hop message passing mechanism. We empirically validate that CG-EGNNs outperform previous methods on various benchmarks including n-body, CMU motion capture, and MD17, highlighting their effectiveness in geometric deep learning.<|reference_end|>
arxiv
@article{tran2024a, title={A Clifford Algebraic Approach to E(n)-Equivariant High-order Graph Neural Networks}, author={Hoang-Viet Tran, Thieu N. Vo, Tho Tran Huu, Tan Minh Nguyen}, journal={arXiv preprint arXiv:2410.04692}, year={2024}, archivePrefix={arXiv}, eprint={2410.04692}, primaryClass={cs.LG stat.ML} }
tran2024a
arxiv-666348
2410.04694
Transient-Safe and Attack-Resilient Secondary Control in AC Microgrids Under Polynomially Unbounded FDI Attacks
<|reference_start|>Transient-Safe and Attack-Resilient Secondary Control in AC Microgrids Under Polynomially Unbounded FDI Attacks: This letter proposes novel, fully distributed, transient-safe and attack-resilient secondary control strategies for AC microgrids, addressing unbounded false data injection (FDI) attacks on control input channels. Unlike existing methods that focus primarily on steady-state convergence, our approach guarantees transient safety, ensuring that system states remain within predefined safety bounds even during attack initiation, a critical aspect overlooked in prior research. Given the reduction of network inertia with the increasing penetration of inverter-based renewables, large overshooting and intense fluctuations are more likely to occur during transients caused by disturbances and cyber-attacks. To mitigate these risks, the proposed control method enhances defense capabilities against polynomially unbounded FDI attacks, maintaining safe system trajectories for both frequency and voltage throughout the transient response. Through rigorous Lyapunov-based stability analysis, we formally certify the strategies to achieve uniformly ultimately bounded (UUB) convergence in frequency and voltage regulation, and active power sharing across multi-inverter-based AC microgrids. Numerical simulation studies verify the effectiveness of the proposed control protocols, demonstrating improved system reliability, safety and resilience under adverse conditions.<|reference_end|>
arxiv
@article{rajabinezhad2024transient-safe, title={Transient-Safe and Attack-Resilient Secondary Control in AC Microgrids Under Polynomially Unbounded FDI Attacks}, author={Mohamadamin Rajabinezhad, Nesa Shams, Yichao Wang, and Shan Zuo}, journal={arXiv preprint arXiv:2410.04694}, year={2024}, archivePrefix={arXiv}, eprint={2410.04694}, primaryClass={eess.SY cs.SY} }
rajabinezhad2024transient-safe
arxiv-666349
2410.04697
Higher order numerical methods for SDEs without globally monotone coefficients
<|reference_start|>Higher order numerical methods for SDEs without globally monotone coefficients: In the present work, we delve into further study of numerical approximations of SDEs with non-globally monotone coefficients. We design and analyze a new family of stopped increment-tamed time discretization schemes of Euler, Milstein and order 1.5 type for such SDEs. By formulating a novel unified framework, the proposed methods are shown to possess the exponential integrability properties, which are crucial to recovering convergence rates in the non-global monotone setting. Armed with such exponential integrability properties and by the arguments of perturbation estimates, we successfully identify the optimal strong convergence rates of the aforementioned methods in the non-global monotone setting. Numerical experiments are finally presented to corroborate the theoretical results.<|reference_end|>
arxiv
@article{dai2024higher, title={Higher order numerical methods for SDEs without globally monotone coefficients}, author={Lei Dai, Xiaojie Wang}, journal={arXiv preprint arXiv:2410.04697}, year={2024}, archivePrefix={arXiv}, eprint={2410.04697}, primaryClass={math.NA cs.NA} }
dai2024higher
arxiv-666350
2410.04698
MathHay: An Automated Benchmark for Long-Context Mathematical Reasoning in LLMs
<|reference_start|>MathHay: An Automated Benchmark for Long-Context Mathematical Reasoning in LLMs: Recent large language models (LLMs) have demonstrated versatile capabilities in long-context scenarios. Although some recent benchmarks have been developed to evaluate the long-context capabilities of LLMs, there is a lack of benchmarks evaluating the mathematical reasoning abilities of LLMs over long contexts, which is crucial for LLMs' application in real-world scenarios. In this paper, we introduce MathHay, an automated benchmark designed to assess the long-context mathematical reasoning capabilities of LLMs. Unlike previous benchmarks like Needle in a Haystack, which focus primarily on information retrieval within long texts, MathHay demands models with both information-seeking and complex mathematical reasoning abilities. We conduct extensive experiments on MathHay to assess the long-context mathematical reasoning abilities of eight top-performing LLMs. Even the best-performing model, Gemini-1.5-Pro-002, still struggles with mathematical reasoning over long contexts, achieving only 51.26% accuracy at 128K tokens. This highlights the significant room for improvement on the MathHay benchmark.<|reference_end|>
arxiv
@article{wang2024mathhay:, title={MathHay: An Automated Benchmark for Long-Context Mathematical Reasoning in LLMs}, author={Lei Wang, Shan Dong, Yuhui Xu, Hanze Dong, Yalu Wang, Amrita Saha, Ee-Peng Lim, Caiming Xiong, Doyen Sahoo}, journal={arXiv preprint arXiv:2410.04698}, year={2024}, archivePrefix={arXiv}, eprint={2410.04698}, primaryClass={cs.CL} }
wang2024mathhay:
arxiv-666351
2410.04699
The LLM Effect: Are Humans Truly Using LLMs, or Are They Being Influenced By Them Instead?
<|reference_start|>The LLM Effect: Are Humans Truly Using LLMs, or Are They Being Influenced By Them Instead?: Large Language Models (LLMs) have shown capabilities close to human performance in various analytical tasks, leading researchers to use them for time and labor-intensive analyses. However, their capability to handle highly specialized and open-ended tasks in domains like policy studies remains in question. This paper investigates the efficiency and accuracy of LLMs in specialized tasks through a structured user study focusing on Human-LLM partnership. The study, conducted in two stages (Topic Discovery and Topic Assignment), integrates LLMs with expert annotators to observe the impact of LLM suggestions on what is usually human-only analysis. Results indicate that LLM-generated topic lists have significant overlap with human-generated topic lists, with minor hiccups in missing document-specific topics. However, LLM suggestions may significantly improve task completion speed, but at the same time introduce anchoring bias, potentially affecting the depth and nuance of the analysis, raising a critical question about the trade-off between increased efficiency and the risk of biased analysis.<|reference_end|>
arxiv
@article{choi2024the, title={The LLM Effect: Are Humans Truly Using LLMs, or Are They Being Influenced By Them Instead?}, author={Alexander S. Choi, Syeda Sabrina Akter, JP Singh, Antonios Anastasopoulos}, journal={arXiv preprint arXiv:2410.04699}, year={2024}, archivePrefix={arXiv}, eprint={2410.04699}, primaryClass={cs.CL cs.HC} }
choi2024the
arxiv-666352
2410.04702
Demo of Zero-Shot Guitar Amplifier Modelling: Enhancing Modeling with Hyper Neural Networks
<|reference_start|>Demo of Zero-Shot Guitar Amplifier Modelling: Enhancing Modeling with Hyper Neural Networks: Electric guitar tone modeling typically focuses on the non-linear transformation from clean to amplifier-rendered audio. Traditional methods rely on one-to-one mappings, incorporating device parameters into neural models to replicate specific amplifiers. However, these methods are limited by the need for specific training data. In this paper, we adapt a model based on the previous work, which leverages a tone embedding encoder and a feature-wise linear modulation (FiLM) conditioning method. In this work, we altered the conditioning method, using a hypernetwork-based gated convolutional network (GCN) to generate audio that blends clean input with the tone characteristics of reference audio. By extending the training data to cover a wider variety of amplifier tones, our model is able to capture a broader range of tones. Additionally, we developed a real-time plugin to demonstrate the system's practical application, allowing users to experience its performance interactively. Our results indicate that the proposed system achieves superior tone modeling versatility compared to traditional methods.<|reference_end|>
arxiv
@article{chen2024demo, title={Demo of Zero-Shot Guitar Amplifier Modelling: Enhancing Modeling with Hyper Neural Networks}, author={Yu-Hua Chen, Yuan-Chiao Cheng, Yen-Tung Yeh, Jui-Te Wu, Yu-Hsiang Ho, Jyh-Shing Roger Jang, Yi-Hsuan Yang}, journal={arXiv preprint arXiv:2410.04702}, year={2024}, archivePrefix={arXiv}, eprint={2410.04702}, primaryClass={cs.SD eess.AS} }
chen2024demo
arxiv-666353
2410.04703
Neural Fourier Modelling: A Highly Compact Approach to Time-Series Analysis
<|reference_start|>Neural Fourier Modelling: A Highly Compact Approach to Time-Series Analysis: Neural time-series analysis has traditionally focused on modeling data in the time domain, often with some approaches incorporating equivalent Fourier domain representations as auxiliary spectral features. In this work, we shift the main focus to frequency representations, modeling time-series data fully and directly in the Fourier domain. We introduce Neural Fourier Modelling (NFM), a compact yet powerful solution for time-series analysis. NFM is grounded in two key properties of the Fourier transform (FT): (i) the ability to model finite-length time series as functions in the Fourier domain, treating them as continuous-time elements in function space, and (ii) the capacity for data manipulation (such as resampling and timespan extension) within the Fourier domain. We reinterpret Fourier-domain data manipulation as frequency extrapolation and interpolation, incorporating this as a core learning mechanism in NFM, applicable across various tasks. To support flexible frequency extension with spectral priors and effective modulation of frequency representations, we propose two learning modules: Learnable Frequency Tokens (LFT) and Implicit Neural Fourier Filters (INFF). These modules enable compact and expressive modeling in the Fourier domain. Extensive experiments demonstrate that NFM achieves state-of-the-art performance on a wide range of tasks (forecasting, anomaly detection, and classification), including challenging time-series scenarios with previously unseen sampling rates at test time. Moreover, NFM is highly compact, requiring fewer than 40K parameters in each task, with time-series lengths ranging from 100 to 16K.<|reference_end|>
arxiv
@article{kim2024neural, title={Neural Fourier Modelling: A Highly Compact Approach to Time-Series Analysis}, author={Minjung Kim, Yusuke Hioka, Michael Witbrock}, journal={arXiv preprint arXiv:2410.04703}, year={2024}, archivePrefix={arXiv}, eprint={2410.04703}, primaryClass={cs.LG stat.ML} }
kim2024neural
arxiv-666354
2410.04704
Modeling and Estimation of Vocal Tract and Glottal Source Parameters Using ARMAX-LF Model
<|reference_start|>Modeling and Estimation of Vocal Tract and Glottal Source Parameters Using ARMAX-LF Model: Modeling and estimation of the vocal tract and glottal source parameters of vowels from raw speech can be typically done by using the Auto-Regressive with eXogenous input (ARX) model and Liljencrants-Fant (LF) model with an iteration-based estimation approach. However, the all-pole autoregressive model in the modeling of vocal tract filters cannot provide the locations of anti-formants (zeros), which increases the estimation errors in certain classes of speech sounds, such as nasal, fricative, and stop consonants. In this paper, we propose the Auto-Regressive Moving Average eXogenous with LF (ARMAX-LF) model to extend the ARX-LF model to a wider variety of speech sounds, including vowels and nasalized consonants. The LF model represents the glottal source derivative as a parametrized time-domain model, and the ARMAX model represents the vocal tract as a pole-zero filter with an additional exogenous LF excitation as input. To estimate multiple parameters with fewer errors, we first utilize the powerful nonlinear fitting ability of deep neural networks (DNNs) to build a mapping from extracted glottal source derivatives or speech waveforms to corresponding LF parameters. Then, glottal source and vocal tract parameters can be estimated with fewer estimation errors and without any iterations as in the analysis-by-synthesis strategy. Experimental results with synthesized speech using the linear source-filter model, synthesized speech using the physical model, and real speech signals showed that the proposed ARMAX-LF model with a DNN-based estimation method can estimate the parameters of both vowels and nasalized sounds with fewer errors and estimation time.<|reference_end|>
arxiv
@article{li2024modeling, title={Modeling and Estimation of Vocal Tract and Glottal Source Parameters Using ARMAX-LF Model}, author={Kai Li, Masato Akagi, Yongwei Li, Masashi Unoki}, journal={arXiv preprint arXiv:2410.04704}, year={2024}, archivePrefix={arXiv}, eprint={2410.04704}, primaryClass={cs.SD cs.CL eess.AS} }
li2024modeling
arxiv-666355
2410.04707
Learning How Hard to Think: Input-Adaptive Allocation of LM Computation
<|reference_start|>Learning How Hard to Think: Input-Adaptive Allocation of LM Computation: Computationally intensive decoding procedures--including search, reranking, and self-critique--can improve the quality of language model (LM) outputs in problems spanning code generation, numerical reasoning, and dialog. Existing work typically applies the same decoding procedure for every input to an LM. But not all inputs require the same amount of computation to process. Can we allocate decoding computation adaptively, using more resources to answer questions whose answers will be harder to compute? We present an approach that predicts the distribution of rewards given an input and computation budget, then allocates additional computation to inputs for which it is predicted to be most useful. We apply this approach in two decoding procedures: first, an adaptive best-of-k procedure that dynamically selects the number of samples to generate as input to a reranker; second, a routing procedure that dynamically responds to a query using a decoding procedure that is expensive but accurate, or one that is cheaper but less capable. Across a suite of programming, mathematics, and dialog tasks, we show that accurate computation-allocation procedures can be learned, and reduce computation by up to 50% at no cost to response quality, or improve quality by up to 10% at a fixed computational budget.<|reference_end|>
arxiv
@article{damani2024learning, title={Learning How Hard to Think: Input-Adaptive Allocation of LM Computation}, author={Mehul Damani, Idan Shenfeld, Andi Peng, Andreea Bobu and Jacob Andreas}, journal={arXiv preprint arXiv:2410.04707}, year={2024}, archivePrefix={arXiv}, eprint={2410.04707}, primaryClass={cs.LG cs.AI cs.CL} }
damani2024learning
arxiv-666356
2410.04708
Tight Stability, Convergence, and Robustness Bounds for Predictive Coding Networks
<|reference_start|>Tight Stability, Convergence, and Robustness Bounds for Predictive Coding Networks: Energy-based learning algorithms, such as predictive coding (PC), have garnered significant attention in the machine learning community due to their theoretical properties, such as local operations and biologically plausible mechanisms for error correction. In this work, we rigorously analyze the stability, robustness, and convergence of PC through the lens of dynamical systems theory. We show that, first, PC is Lyapunov stable under mild assumptions on its loss and residual energy functions, which implies intrinsic robustness to small random perturbations due to its well-defined energy-minimizing dynamics. Second, we formally establish that the PC updates approximate quasi-Newton methods by incorporating higher-order curvature information, which makes them more stable and able to converge with fewer iterations compared to models trained via backpropagation (BP). Furthermore, using this dynamical framework, we provide new theoretical bounds on the similarity between PC and other algorithms, i.e., BP and target propagation (TP), by precisely characterizing the role of higher-order derivatives. These bounds, derived through detailed analysis of the Hessian structures, show that PC is significantly closer to quasi-Newton updates than TP, providing a deeper understanding of the stability and efficiency of PC compared to conventional learning methods.<|reference_end|>
arxiv
@article{mali2024tight, title={Tight Stability, Convergence, and Robustness Bounds for Predictive Coding Networks}, author={Ankur Mali, Tommaso Salvatori, Alexander Ororbia}, journal={arXiv preprint arXiv:2410.04708}, year={2024}, archivePrefix={arXiv}, eprint={2410.04708}, primaryClass={cs.LG cs.AI cs.NE math.OC stat.ML} }
mali2024tight
arxiv-666357
2410.04715
Rule-based Data Selection for Large Language Models
<|reference_start|>Rule-based Data Selection for Large Language Models: The quality of training data significantly impacts the performance of large language models (LLMs). There are increasing studies using LLMs to rate and select data based on several human-crafted metrics (rules). However, these conventional rule-based approaches often depend too heavily on human heuristics, lack effective metrics for assessing rules, and exhibit limited adaptability to new tasks. In our study, we introduce an innovative rule-based framework that utilizes the orthogonality of score vectors associated with rules as a novel metric for rule evaluations. Our approach includes an automated pipeline that first uses LLMs to generate a diverse set of rules, encompassing various rating dimensions to evaluate data quality. Then it rates a batch of data based on these rules and uses the determinantal point process (DPP) from random matrix theory to select the most orthogonal score vectors, thereby identifying a set of independent rules. These rules are subsequently used to evaluate all data, selecting samples with the highest average scores for downstream tasks such as LLM training. We verify the effectiveness of our method through two experimental setups: 1) comparisons with ground truth ratings and 2) benchmarking LLMs trained with the chosen data. Our comprehensive experiments cover a range of scenarios, including general pre-training and domain-specific fine-tuning in areas such as IMDB, Medical, Math, and Code. The outcomes demonstrate that our DPP-based rule rating method consistently outperforms other approaches, including rule-free rating, uniform sampling, importance resampling, and QuRating, in terms of both rating precision and model performance.<|reference_end|>
arxiv
@article{li2024rule-based, title={Rule-based Data Selection for Large Language Models}, author={Xiaomin Li, Mingye Gao, Zhiwei Zhang, Chang Yue, Hong Hu}, journal={arXiv preprint arXiv:2410.04715}, year={2024}, archivePrefix={arXiv}, eprint={2410.04715}, primaryClass={cs.CL cs.AI cs.LG} }
li2024rule-based
arxiv-666358
2410.04716
H-SIREN: Improving implicit neural representations with hyperbolic periodic functions
<|reference_start|>H-SIREN: Improving implicit neural representations with hyperbolic periodic functions: Implicit neural representations (INR) have been recently adopted in various applications ranging from computer vision tasks to physics simulations by solving partial differential equations. Among existing INR-based works, multi-layer perceptrons with sinusoidal activation functions find widespread applications and are also frequently treated as a baseline for the development of better activation functions for INR applications. Recent investigations claim that the use of sinusoidal activation functions could be sub-optimal due to their limited supported frequency set as well as their tendency to generate over-smoothed solutions. We provide a simple solution to mitigate such an issue by changing the activation function at the first layer from $\sin(x)$ to $\sin(\sinh(2x))$. We demonstrate H-SIREN in various computer vision and fluid flow problems, where it surpasses the performance of several state-of-the-art INRs.<|reference_end|>
arxiv
@article{gao2024h-siren:, title={H-SIREN: Improving implicit neural representations with hyperbolic periodic functions}, author={Rui Gao, Rajeev K. Jaiman}, journal={arXiv preprint arXiv:2410.04716}, year={2024}, archivePrefix={arXiv}, eprint={2410.04716}, primaryClass={cs.CV} }
gao2024h-siren:
arxiv-666359
2410.04717
$\textbfOnly-IF$:Revealing the Decisive Effect of Instruction Diversity on Generalization
<|reference_start|>$\textbfOnly-IF$:Revealing the Decisive Effect of Instruction Diversity on Generalization: Understanding and accurately following instructions is critical for large language models (LLMs) to be effective across diverse tasks. In this work, we rigorously examine the key factors that enable models to generalize to unseen instructions, providing insights to guide the collection of data for instruction-tuning. Through controlled experiments, inspired by the Turing-complete Markov algorithm, we demonstrate that such generalization $\textbf{only emerges}$ when training data is diversified enough across semantic domains. Our findings also reveal that merely diversifying within limited domains fails to ensure robust generalization. In contrast, cross-domain data diversification, even under constrained data budgets, significantly enhances a model's adaptability. We further extend our analysis to real-world scenarios, including fine-tuning of $\textit{$\textbf{specialist}$}$ and $\textit{$\textbf{generalist}$}$ models. In both cases, we demonstrate that 1) better performance can be achieved by increasing the diversity of an established dataset while keeping the data size constant, and 2) when scaling up the data, diversifying the semantics of instructions is more effective than simply increasing the quantity of similar data. Our research provides important insights for dataset collation, particularly when optimizing model performance by expanding training data for both specialist and generalist scenarios. We show that careful consideration of data diversification is key: training specialist models with data extending beyond their core domain leads to significant performance improvements, while generalist models benefit from diverse data mixtures that enhance their overall instruction-following capabilities across a wide range of applications. Our results highlight the critical role of strategic diversification and offer clear guidelines for improving data quality.<|reference_end|>
arxiv
@article{zhang2024$\textbf{only-if}$:revealing, title={$\textbf{Only-IF}$:Revealing the Decisive Effect of Instruction Diversity on Generalization}, author={Dylan Zhang, Justin Wang, Francois Charton}, journal={arXiv preprint arXiv:2410.04717}, year={2024}, archivePrefix={arXiv}, eprint={2410.04717}, primaryClass={cs.CL cs.AI cs.LG cs.SE} }
zhang2024$\textbf{only-if}$:revealing
arxiv-666360
2410.04719
Domains as Objectives: Domain-Uncertainty-Aware Policy Optimization through Explicit Multi-Domain Convex Coverage Set Learning
<|reference_start|>Domains as Objectives: Domain-Uncertainty-Aware Policy Optimization through Explicit Multi-Domain Convex Coverage Set Learning: The problem of uncertainty is a feature of real-world robotics problems and any control framework must contend with it in order to succeed in real application tasks. Reinforcement Learning is no different, and epistemic uncertainty arising from model uncertainty or misspecification is a challenge well captured by the sim-to-real gap. A simple solution to this issue is domain randomization (DR), which unfortunately can result in conservative agents. As a remedy to this conservativeness, the use of universal policies that take additional information about the randomized domain has arisen as an alternative solution, along with recurrent neural network-based controllers. Uncertainty-aware universal policies present a particularly compelling solution able to account for system identification uncertainties during deployment. In this paper, we reveal that the challenge of efficiently optimizing uncertainty-aware policies can be fundamentally reframed as solving the convex coverage set (CCS) problem within a multi-objective reinforcement learning (MORL) context. By introducing a novel Markov decision process (MDP) framework where each domain's performance is treated as an independent objective, we unify the training of uncertainty-aware policies with MORL approaches. This connection enables the application of MORL algorithms for domain randomization (DR), allowing for more efficient policy optimization. To illustrate this, we focus on the linear utility function, which aligns with the expectation in DR formulations, and propose a series of algorithms adapted from the MORL literature to solve the CCS, demonstrating their ability to enhance the performance of uncertainty-aware policies.<|reference_end|>
arxiv
@article{ilboudo2024domains, title={Domains as Objectives: Domain-Uncertainty-Aware Policy Optimization through Explicit Multi-Domain Convex Coverage Set Learning}, author={Wendyam Eric Lionel Ilboudo, Taisuke Kobayashi and Takamitsu Matsubara}, journal={arXiv preprint arXiv:2410.04719}, year={2024}, archivePrefix={arXiv}, eprint={2410.04719}, primaryClass={cs.RO} }
ilboudo2024domains
arxiv-666361
2410.04721
ACDC: Autoregressive Coherent Multimodal Generation using Diffusion Correction
<|reference_start|>ACDC: Autoregressive Coherent Multimodal Generation using Diffusion Correction: Autoregressive models (ARMs) and diffusion models (DMs) represent two leading paradigms in generative modeling, each excelling in distinct areas: ARMs in global context modeling and long-sequence generation, and DMs in generating high-quality local contexts, especially for continuous data such as images and short videos. However, ARMs often suffer from exponential error accumulation over long sequences, leading to physically implausible results, while DMs are limited by their local context generation capabilities. In this work, we introduce Autoregressive Coherent multimodal generation with Diffusion Correction (ACDC), a zero-shot approach that combines the strengths of both ARMs and DMs at the inference stage without the need for additional fine-tuning. ACDC leverages ARMs for global context generation and memory-conditioned DMs for local correction, ensuring high-quality outputs by correcting artifacts in generated multimodal tokens. In particular, we propose a memory module based on large language models (LLMs) that dynamically adjusts the conditioning texts for the DMs, preserving crucial global context information. Our experiments on multimodal tasks, including coherent multi-frame story generation and autoregressive video generation, demonstrate that ACDC effectively mitigates the accumulation of errors and significantly enhances the quality of generated outputs, achieving superior performance while remaining agnostic to specific ARM and DM architectures. Project page: https://acdc2025.github.io/<|reference_end|>
arxiv
@article{chung2024acdc:, title={ACDC: Autoregressive Coherent Multimodal Generation using Diffusion Correction}, author={Hyungjin Chung, Dohun Lee, Jong Chul Ye}, journal={arXiv preprint arXiv:2410.04721}, year={2024}, archivePrefix={arXiv}, eprint={2410.04721}, primaryClass={cs.LG cs.CV} }
chung2024acdc:
arxiv-666362
2410.04722
A Strategy for Label Alignment in Deep Neural Networks
<|reference_start|>A Strategy for Label Alignment in Deep Neural Networks: A recent study demonstrated the successful application of the label alignment property to unsupervised domain adaptation in a linear regression setting. Instead of regularizing representation learning to be domain invariant, that work proposed to regularize the linear regression model to align with the top singular vectors of the data matrix from the target domain. In this work, we expand upon this idea and generalize it to the case of deep learning, deriving an alternative formulation of the original adaptation algorithm that exploits label alignment and is suitable for deep neural networks. We also perform experiments demonstrating that our approach achieves performance comparable to mainstream unsupervised domain adaptation methods while exhibiting more stable convergence. All experiments and implementations in our work can be found at the following codebase: \url{https://github.com/xuanrui-work/DeepLabelAlignment}.<|reference_end|>
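As a rough illustration of the label alignment idea the abstract builds on, the sketch below regularizes a linear model so that its weights stay close to the span of the top-k right singular vectors of the target-domain data matrix. This is a hedged reconstruction of the linear-regression setting only; the deep-network formulation derived in the paper is not shown, and the function names, k, lam, and learning-rate values are illustrative.

```python
import numpy as np

def label_alignment_penalty(w, X_target, k):
    """Penalize the component of the regression weights lying outside the
    span of the top-k right singular vectors of the target-domain data."""
    _, _, Vt = np.linalg.svd(X_target, full_matrices=False)
    V_k = Vt[:k].T                      # (n_features, k)
    residual = w - V_k @ (V_k.T @ w)    # projection onto the orthogonal complement
    return float(residual @ residual)

def fit_aligned_ridge(X_src, y_src, X_tgt, k=5, lam=1.0, lr=1e-2, steps=2000):
    """Gradient-descent fit of a linear model on source data with a
    label-alignment regularizer computed from target-domain data."""
    n, d = X_src.shape
    _, _, Vt = np.linalg.svd(X_tgt, full_matrices=False)
    V_k = Vt[:k].T
    P_perp = np.eye(d) - V_k @ V_k.T    # projector onto the complement subspace
    w = np.zeros(d)
    for _ in range(steps):
        grad = 2 * X_src.T @ (X_src @ w - y_src) / n + 2 * lam * (P_perp @ w)
        w -= lr * grad
    return w
```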
arxiv
@article{zeng2024a, title={A Strategy for Label Alignment in Deep Neural Networks}, author={Xuanrui Zeng}, journal={arXiv preprint arXiv:2410.04722}, year={2024}, archivePrefix={arXiv}, eprint={2410.04722}, primaryClass={cs.LG} }
zeng2024a
arxiv-666363
2410.04723
ProtoNAM: Prototypical Neural Additive Models for Interpretable Deep Tabular Learning
<|reference_start|>ProtoNAM: Prototypical Neural Additive Models for Interpretable Deep Tabular Learning: Generalized additive models (GAMs) have long been a powerful white-box tool for the intelligible analysis of tabular data, revealing the influence of each feature on the model predictions. Despite the success of neural networks (NNs) in various domains, their application as NN-based GAMs in tabular data analysis remains suboptimal compared to tree-based ones, and the opacity of encoders in NN-GAMs also prevents users from understanding how networks learn the functions. In this work, we propose a new deep tabular learning method, termed Prototypical Neural Additive Model (ProtoNAM), which introduces prototypes into neural networks in the framework of GAMs. With the introduced prototype-based feature activation, ProtoNAM can flexibly model the irregular mapping from tabular features to the outputs while maintaining the explainability of the final prediction. We also propose a gradient-boosting inspired hierarchical shape function modeling method, facilitating the discovery of complex feature patterns and bringing transparency into the learning process of each network layer. Our empirical evaluations demonstrate that ProtoNAM outperforms all existing NN-based GAMs, while providing additional insights into the shape function learned for each feature. The source code of ProtoNAM is available at \url{https://github.com/Teddy-XiongGZ/ProtoNAM}.<|reference_end|>
arxiv
@article{xiong2024protonam:, title={ProtoNAM: Prototypical Neural Additive Models for Interpretable Deep Tabular Learning}, author={Guangzhi Xiong, Sanchit Sinha, Aidong Zhang}, journal={arXiv preprint arXiv:2410.04723}, year={2024}, archivePrefix={arXiv}, eprint={2410.04723}, primaryClass={cs.LG cs.AI stat.ML} }
xiong2024protonam:
arxiv-666364
2410.04727
Forgetting Curve: A Reliable Method for Evaluating Memorization Capability for Long-context Models
<|reference_start|>Forgetting Curve: A Reliable Method for Evaluating Memorization Capability for Long-context Models: Numerous recent works aim to extend the effective context length of language models, and various methods, tasks, and benchmarks exist to measure a model's effective memorization length. However, through thorough investigation, we find limitations in currently existing evaluations of models' memorization capability. In this work, we provide an extensive survey of these limitations and propose a new method, called the forgetting curve, to measure the memorization capability of long-context models. We show that the forgetting curve has the advantages of being robust to the tested corpus and the experimental settings, of not relying on prompts, and of being applicable to any model size. We apply our forgetting curve to a large variety of models involving both transformer and RNN/SSM-based architectures. Our measurement provides empirical evidence for the effectiveness of transformer extension techniques while raising questions about the effective length of RNN/SSM-based models. We also examine the differences between our measurement and existing benchmarks as well as popular metrics for various models. Our code and results can be found at https://github.com/1azybug/ForgettingCurve.<|reference_end|>
arxiv
@article{liu2024forgetting, title={Forgetting Curve: A Reliable Method for Evaluating Memorization Capability for Long-context Models}, author={Xinyu Liu, Runsong Zhao, Pengcheng Huang, Chunyang Xiao, Bei Li, Jingang Wang, Tong Xiao, Jingbo Zhu}, journal={arXiv preprint arXiv:2410.04727}, year={2024}, archivePrefix={arXiv}, eprint={2410.04727}, primaryClass={cs.CL} }
liu2024forgetting
arxiv-666365
2410.04730
Exploring Gestural Interaction with a Cushion Interface for Smart Home Control
<|reference_start|>Exploring Gestural Interaction with a Cushion Interface for Smart Home Control: In this research, we aim to realize a cushion interface for operating a smart home. We designed user-defined gestures performed with a cushion and developed a gesture recognition system. We asked users to make gestures with cushions for operating home appliances and determined user-defined gesture sets. We developed two methods for gesture identification. In the first, we inserted sensor modules consisting of photo-reflective sensors and an acceleration sensor inside a cushion. In the second, we embedded acceleration sensor arrays in the cushion cover. The gesture recognizer was implemented using Convolutional Neural Networks (CNNs). To evaluate our method, we conducted an experiment to measure recognition accuracy. Results showed an average accuracy of 94.8% when training per user and an average accuracy of 91.3% when testing on a user who did not appear in the training data set.<|reference_end|>
arxiv
@article{suzuki2024exploring, title={Exploring Gestural Interaction with a Cushion Interface for Smart Home Control}, author={Yuri Suzuki, Kaho Kato, Naomi Furui, Daisuke Sakamoto and Yuta Sugiura}, journal={arXiv preprint arXiv:2410.04730}, year={2024}, archivePrefix={arXiv}, eprint={2410.04730}, primaryClass={cs.HC} }
suzuki2024exploring
arxiv-666366
2410.04731
Efficient transformer with reinforced position embedding for language models
<|reference_start|>Efficient transformer with reinforced position embedding for language models: In this paper, we propose an efficient transformer architecture that uses reinforced positional embedding to obtain superior performance with half the number of encoder decoder layers. We demonstrate that concatenating positional encoding with trainable token embeddings, normalizing columns in the token embedding matrix, and using the normalized token embedding matrix as the value of the attention layer improve the training and validation loss and the training time in an encoder-decoder Transformer model for a Portuguese-English translation task with 10 epochs or 12 hours of training across 10 trials. Our method, with roughly a threefold parameter reduction compared to the baseline model, yields a mean training loss of 1.21, a mean validation loss of 1.51, and an average training time of 1352.27 seconds per epoch, surpassing the baseline model with the same embedding dimension that employs addition of positional encoding and token embeddings, which achieves a mean training loss of 1.96, a validation loss of 2.18, and an average training time of 4297.79 seconds per epoch. Additionally, we evaluated our proposed architecture and the baseline across 14 diverse translation datasets from TensorFlow. The results indicate that our method consistently achieves lower or comparable training and validation losses, suggesting enhanced learning efficiency.<|reference_end|>
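A minimal PyTorch sketch of the ingredients described above: concatenating (rather than adding) a sinusoidal positional encoding to trainable token embeddings and reusing the normalized embeddings as the attention value. Whether "normalizing columns" means per-dimension or per-vector normalization is an assumption here (the sketch normalizes each embedding vector), and the single-head attention layout is illustrative rather than the paper's exact encoder-decoder architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReinforcedEmbeddingAttention(nn.Module):
    """Sketch of one attention layer that (a) concatenates a fixed sinusoidal
    positional encoding with trainable token embeddings and (b) uses the
    normalized token embeddings as the attention value."""

    def __init__(self, vocab_size, d_model, max_len=512):
        super().__init__()  # assumes even d_model
        self.tok = nn.Embedding(vocab_size, d_model)
        self.register_buffer("pe", self._sinusoidal(max_len, d_model))
        self.q_proj = nn.Linear(2 * d_model, d_model)
        self.k_proj = nn.Linear(2 * d_model, d_model)

    @staticmethod
    def _sinusoidal(max_len, d_model):
        pos = torch.arange(max_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float()
                        * (-torch.log(torch.tensor(10000.0)) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        return pe

    def forward(self, token_ids):
        B, T = token_ids.shape
        emb = F.normalize(self.tok(token_ids), dim=-1)          # normalized embeddings
        x = torch.cat([emb, self.pe[:T].expand(B, T, -1)], dim=-1)  # concat, not add
        q, k, v = self.q_proj(x), self.k_proj(x), emb           # normalized embeddings as value
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        return attn @ v

layer = ReinforcedEmbeddingAttention(vocab_size=1000, d_model=64)
print(layer(torch.randint(0, 1000, (2, 16))).shape)  # torch.Size([2, 16, 64])
```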
arxiv
@article{hsiao2024efficient, title={Efficient transformer with reinforced position embedding for language models}, author={Yen-Che Hsiao and Abhishek Dutta}, journal={arXiv preprint arXiv:2410.04731}, year={2024}, archivePrefix={arXiv}, eprint={2410.04731}, primaryClass={cs.CL} }
hsiao2024efficient
arxiv-666367
2410.04732
Guidance of the Center of Pressure Using Haptic Presentation
<|reference_start|>Guidance of the Center of Pressure Using Haptic Presentation: Accurately instructing posture and the position of the body's center of gravity is challenging. In this study, we propose a system that utilizes haptic feedback to guide movement of the Center of Pressure (CoP). The Wii Balance Board is employed to sense the CoP, and vibration motors are used for haptic feedback. For comparison, guidance was also performed using visual and auditory feedback, and the time required for guidance was measured. Additionally, a questionnaire survey was conducted after the experiments.<|reference_end|>
arxiv
@article{kawasaki2024guidance, title={Guidance of the Center of Pressure Using Haptic Presentation}, author={Yohei Kawasaki and Yuta Sugiura}, journal={arXiv preprint arXiv:2410.04732}, year={2024}, archivePrefix={arXiv}, eprint={2410.04732}, primaryClass={cs.HC} }
kawasaki2024guidance
arxiv-666368
2410.04733
PredFormer: Transformers Are Effective Spatial-Temporal Predictive Learners
<|reference_start|>PredFormer: Transformers Are Effective Spatial-Temporal Predictive Learners: Spatiotemporal predictive learning methods generally fall into two categories: recurrent-based approaches, which face challenges in parallelization and performance, and recurrent-free methods, which employ convolutional neural networks (CNNs) as encoder-decoder architectures. These methods benefit from strong inductive biases but often at the expense of scalability and generalization. This paper proposes PredFormer, a pure transformer-based framework for spatiotemporal predictive learning. Motivated by the Vision Transformers (ViT) design, PredFormer leverages carefully designed Gated Transformer blocks, following a comprehensive analysis of 3D attention mechanisms, including full-, factorized-, and interleaved- spatial-temporal attention. With its recurrent-free, transformer-based design, PredFormer is both simple and efficient, significantly outperforming previous methods by large margins. Extensive experiments on synthetic and real-world datasets demonstrate that PredFormer achieves state-of-the-art performance. On Moving MNIST, PredFormer achieves a 51.3% reduction in MSE relative to SimVP. For TaxiBJ, the model decreases MSE by 33.1% and boosts FPS from 533 to 2364. Additionally, on WeatherBench, it reduces MSE by 11.1% while enhancing FPS from 196 to 404. These performance gains in both accuracy and efficiency demonstrate PredFormer's potential for real-world applications. The source code will be released at https://github.com/yyyujintang/PredFormer.<|reference_end|>
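To illustrate the factorized variant of spatial-temporal attention mentioned above, the block below attends over spatial tokens within each frame and then over time at each spatial location. It is a generic sketch: PredFormer's gated units and other design details are omitted, and the shapes in the usage example are arbitrary.

```python
import torch
import torch.nn as nn

class FactorizedSTBlock(nn.Module):
    """Sketch of a factorized spatial-temporal transformer block: attention is
    applied over space within each frame, then over time per spatial token."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        # x: (B, T, N, C) with N spatial tokens per frame
        B, T, N, C = x.shape
        s = x.reshape(B * T, N, C)
        sn = self.norm1(s)
        s = s + self.spatial_attn(sn, sn, sn)[0]
        x = s.reshape(B, T, N, C)

        t = x.permute(0, 2, 1, 3).reshape(B * N, T, C)
        tn = self.norm2(t)
        t = t + self.temporal_attn(tn, tn, tn)[0]
        return t.reshape(B, N, T, C).permute(0, 2, 1, 3)

# Example: 8 frames of 16x16 patch tokens with 64 channels
block = FactorizedSTBlock(dim=64)
out = block(torch.randn(2, 8, 256, 64))
print(out.shape)  # torch.Size([2, 8, 256, 64])
```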
arxiv
@article{tang2024predformer:, title={PredFormer: Transformers Are Effective Spatial-Temporal Predictive Learners}, author={Yujin Tang, Lu Qi, Fei Xie, Xiangtai Li, Chao Ma, Ming-Hsuan Yang}, journal={arXiv preprint arXiv:2410.04733}, year={2024}, archivePrefix={arXiv}, eprint={2410.04733}, primaryClass={cs.CV} }
tang2024predformer:
arxiv-666369
2410.04734
TLDR: Token-Level Detective Reward Model for Large Vision Language Models
<|reference_start|>TLDR: Token-Level Detective Reward Model for Large Vision Language Models: Although reward models have been successful in improving multimodal large language models, the reward models themselves remain brutal and contain minimal information. Notably, existing reward models only mimic human annotations by assigning only one binary feedback to any text, no matter how long the text is. In the realm of multimodal language models, where models are required to process both images and texts, a naive reward model may learn implicit biases toward texts and become less grounded in images. In this paper, we propose a $\textbf{T}$oken-$\textbf{L}$evel $\textbf{D}$etective $\textbf{R}$eward Model ($\textbf{TLDR}$) to provide fine-grained annotations to each text token. We first introduce a perturbation-based method to generate synthetic hard negatives and their token-level labels to train TLDR models. Then we show the rich usefulness of TLDR models both in assisting off-the-shelf models to self-correct their generations, and in serving as a hallucination evaluation tool. Finally, we show that TLDR models can significantly speed up human annotation by 3 times to acquire a broader range of high-quality vision language data.<|reference_end|>
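One simple way to picture the perturbation-based token labels is shown below: align a perturbed hard-negative caption against the original and mark which tokens survived the perturbation. The swap-based perturbation and the diff-based labeling rule are illustrative assumptions, not the paper's exact data-generation pipeline.

```python
import difflib

def token_labels_from_perturbation(original_tokens, perturbed_tokens):
    """Align an original (correct) caption with a perturbed hard negative and
    label every perturbed token: 1 if it also appears in the aligned original,
    0 if it was introduced by the perturbation."""
    matcher = difflib.SequenceMatcher(a=original_tokens, b=perturbed_tokens)
    labels = [0] * len(perturbed_tokens)
    for block in matcher.get_matching_blocks():
        for j in range(block.b, block.b + block.size):
            labels[j] = 1
    return labels

orig = "a red car parked next to a tree".split()
pert = "a blue car parked next to a house".split()
print(token_labels_from_perturbation(orig, pert))
# [1, 0, 1, 1, 1, 1, 1, 0]
```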
arxiv
@article{fu2024tldr:, title={TLDR: Token-Level Detective Reward Model for Large Vision Language Models}, author={Deqing Fu, Tong Xiao, Rui Wang, Wang Zhu, Pengchuan Zhang, Guan Pang, Robin Jia, Lawrence Chen}, journal={arXiv preprint arXiv:2410.04734}, year={2024}, archivePrefix={arXiv}, eprint={2410.04734}, primaryClass={cs.LG cs.CL cs.CV} }
fu2024tldr:
arxiv-666370
2410.04738
Diffusion Models in 3D Vision: A Survey
<|reference_start|>Diffusion Models in 3D Vision: A Survey: In recent years, 3D vision has become a crucial field within computer vision, powering a wide range of applications such as autonomous driving, robotics, augmented reality (AR), and medical imaging. This field relies on the accurate perception, understanding, and reconstruction of 3D scenes from 2D data sources like images and videos. Diffusion models, originally designed for 2D generative tasks, offer the potential for more flexible, probabilistic approaches that can better capture the variability and uncertainty present in real-world 3D data. However, traditional methods often struggle with efficiency and scalability. In this paper, we review the state-of-the-art approaches that leverage diffusion models for 3D visual tasks, including but not limited to 3D object generation, shape completion, point cloud reconstruction, and scene understanding. We provide an in-depth discussion of the underlying mathematical principles of diffusion models, outlining their forward and reverse processes, as well as the various architectural advancements that enable these models to work with 3D datasets. We also discuss the key challenges in applying diffusion models to 3D vision, such as handling occlusions and varying point densities, and the computational demands of high-dimensional data. Finally, we discuss potential solutions, including improving computational efficiency, enhancing multimodal fusion, and exploring the use of large-scale pretraining for better generalization across 3D tasks. This paper serves as a foundation for future exploration and development in this rapidly evolving field.<|reference_end|>
arxiv
@article{wang2024diffusion, title={Diffusion Models in 3D Vision: A Survey}, author={Zhen Wang, Dongyuan Li, Renhe Jiang}, journal={arXiv preprint arXiv:2410.04738}, year={2024}, archivePrefix={arXiv}, eprint={2410.04738}, primaryClass={cs.CV} }
wang2024diffusion
arxiv-666371
2410.04739
TableRAG: Million-Token Table Understanding with Language Models
<|reference_start|>TableRAG: Million-Token Table Understanding with Language Models: Recent advancements in language models (LMs) have notably enhanced their ability to reason with tabular data, primarily through program-aided mechanisms that manipulate and analyze tables. However, these methods often require the entire table as input, leading to scalability challenges due to the positional bias or context length constraints. In response to these challenges, we introduce TableRAG, a Retrieval-Augmented Generation (RAG) framework specifically designed for LM-based table understanding. TableRAG leverages query expansion combined with schema and cell retrieval to pinpoint crucial information before providing it to the LMs. This enables more efficient data encoding and precise retrieval, significantly reducing prompt lengths and mitigating information loss. We have developed two new million-token benchmarks from the Arcade and BIRD-SQL datasets to thoroughly evaluate TableRAG's effectiveness at scale. Our results demonstrate that TableRAG's retrieval design achieves the highest retrieval quality, leading to the new state-of-the-art performance on large-scale table understanding.<|reference_end|>
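The sketch below shows the general retrieve-then-prompt pattern the abstract describes: embed expanded queries, retrieve the closest column names (schema retrieval) and cell values (cell retrieval), and place only those in the prompt. The `embed` argument stands in for any text encoder, `expanded_queries` would come from an LM, and the prompt wording and k values are assumptions rather than TableRAG's actual implementation.

```python
import numpy as np

def top_k(query_vec, corpus_vecs, corpus, k):
    """Cosine-similarity retrieval of the k closest strings in `corpus`."""
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    idx = np.argsort(-(c @ q))[:k]
    return [corpus[i] for i in idx]

def build_table_prompt(question, expanded_queries, columns, cells, embed,
                       k_schema=3, k_cell=5):
    """Schema + cell retrieval before prompting an LM: only the retrieved
    column names and cell values go into the prompt, instead of the whole
    (possibly million-token) table."""
    col_vecs = embed(columns)
    cell_vecs = embed(cells)
    hits_schema, hits_cells = [], []
    for q in expanded_queries:
        qv = embed([q])[0]
        hits_schema += top_k(qv, col_vecs, columns, k_schema)
        hits_cells += top_k(qv, cell_vecs, cells, k_cell)
    schema = sorted(set(hits_schema))
    values = sorted(set(hits_cells))
    return (
        f"Question: {question}\n"
        f"Relevant columns: {', '.join(schema)}\n"
        f"Relevant cells: {', '.join(values)}\n"
        "Answer using only the information above."
    )
```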
arxiv
@article{chen2024tablerag:, title={TableRAG: Million-Token Table Understanding with Language Models}, author={Si-An Chen, Lesly Miculicich, Julian Martin Eisenschlos, Zifeng Wang, Zilong Wang, Yanfei Chen, Yasuhisa Fujii, Hsuan-Tien Lin, Chen-Yu Lee, Tomas Pfister}, journal={arXiv preprint arXiv:2410.04739}, year={2024}, archivePrefix={arXiv}, eprint={2410.04739}, primaryClass={cs.CL cs.AI cs.IR cs.LG} }
chen2024tablerag:
arxiv-666372
2410.04740
Evaluating the Generalization Ability of Spatiotemporal Model in Urban Scenario
<|reference_start|>Evaluating the Generalization Ability of Spatiotemporal Model in Urban Scenario: Spatiotemporal neural networks have shown great promise in urban scenarios by effectively capturing temporal and spatial correlations. However, urban environments are constantly evolving, and current model evaluations are often limited to traffic scenarios and mostly use data collected only a few weeks after the training period to evaluate model performance. The generalization ability of these models remains largely unexplored. To address this, we propose a Spatiotemporal Out-of-Distribution (ST-OOD) benchmark, which comprises six urban scenarios: bike-sharing, 311 services, pedestrian counts, traffic speed, traffic flow, and ride-hailing demand, each with in-distribution (same year) and out-of-distribution (following years) settings. We extensively evaluate state-of-the-art spatiotemporal models and find that their performance degrades significantly in out-of-distribution settings, with most models performing even worse than a simple Multi-Layer Perceptron (MLP). Our findings suggest that current leading methods tend to over-rely on parameters to overfit training data, which may lead to good performance on in-distribution data but often results in poor generalization. We also investigated whether dropout could mitigate the negative effects of overfitting. Our results showed that a slight dropout rate could significantly improve generalization performance on most datasets, with minimal impact on in-distribution performance. However, balancing in-distribution and out-of-distribution performance remains a challenging problem. We hope that the proposed benchmark will encourage further research on this critical issue.<|reference_end|>
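As a hedged illustration of the kind of simple baseline and dropout mitigation discussed above, here is a plain MLP forecaster with a small dropout rate; the layer sizes, horizon lengths, and dropout value are illustrative and not the benchmark's reference configuration.

```python
import torch
import torch.nn as nn

class DropoutMLPForecaster(nn.Module):
    """Simple MLP baseline with a small dropout rate, the kind of mitigation
    the benchmark found to help out-of-distribution generalization."""
    def __init__(self, n_nodes, in_steps=12, out_steps=12, hidden=256, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(start_dim=1),                 # (B, in_steps * n_nodes)
            nn.Linear(in_steps * n_nodes, hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden, out_steps * n_nodes),
        )
        self.out_steps, self.n_nodes = out_steps, n_nodes

    def forward(self, x):                            # x: (B, in_steps, n_nodes)
        y = self.net(x)
        return y.reshape(-1, self.out_steps, self.n_nodes)

model = DropoutMLPForecaster(n_nodes=200)
pred = model(torch.randn(8, 12, 200))
print(pred.shape)  # torch.Size([8, 12, 200])
```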
arxiv
@article{wang2024evaluating, title={Evaluating the Generalization Ability of Spatiotemporal Model in Urban Scenario}, author={Hongjun Wang, Jiyuan Chen, Tong Pan, Zheng Dong, Lingyu Zhang, Renhe Jiang, and Xuan Song}, journal={arXiv preprint arXiv:2410.04740}, year={2024}, archivePrefix={arXiv}, eprint={2410.04740}, primaryClass={cs.LG cs.AI cs.CY cs.DB} }
wang2024evaluating
arxiv-666373
2410.04743
Smart energy management: process structure-based hybrid neural networks for optimal scheduling and economic predictive control in integrated systems
<|reference_start|>Smart energy management: process structure-based hybrid neural networks for optimal scheduling and economic predictive control in integrated systems: Integrated energy systems (IESs) are complex systems consisting of diverse operating units spanning multiple domains. To address their operational challenges, we propose a physics-informed hybrid time-series neural network (NN) surrogate to predict the dynamic performance of IESs across multiple time scales. This neural network-based modeling approach develops time-series multi-layer perceptrons (MLPs) for the operating units and integrates them with prior process knowledge about system structure and fundamental dynamics. This integration forms three hybrid NNs (long-term, slow, and fast MLPs) that predict the entire system dynamics across multiple time scales. Leveraging these MLPs, we design an NN-based scheduler and an NN-based economic model predictive control (NEMPC) framework to meet global operational requirements: rapid electrical power responsiveness to operators' requests, adequate cooling supply to customers, and increased system profitability, while addressing the dynamic time-scale multiplicity present in IESs. The proposed day-ahead scheduler is formulated using the ReLU network-based MLP, which effectively represents IES performance under a broad range of conditions from a long-term perspective. The scheduler is then exactly recast into a mixed-integer linear programming problem for efficient evaluation. The real-time NEMPC, based on slow and fast MLPs, comprises two sequential distributed control agents: a slow NEMPC for the cooling-dominant subsystem with slower transient responses and a fast NEMPC for the power-dominant subsystem with faster responses. Extensive simulations demonstrate that the developed scheduler and NEMPC schemes outperform their respective benchmark scheduler and controller by about 25% and 40%. Together, they enhance overall system performance by over 70% compared to benchmark approaches.<|reference_end|>
arxiv
@article{wu2024smart, title={Smart energy management: process structure-based hybrid neural networks for optimal scheduling and economic predictive control in integrated systems}, author={Long Wu and Xunyuan Yin and Lei Pan and Jinfeng Liu (University of Alberta)}, journal={arXiv preprint arXiv:2410.04743}, year={2024}, archivePrefix={arXiv}, eprint={2410.04743}, primaryClass={eess.SY cs.LG cs.SY math.OC} }
wu2024smart
arxiv-666374
2410.04746
PSA: Private Set Alignment for Secure and Collaborative Analytics on Large-Scale Data
<|reference_start|>PSA: Private Set Alignment for Secure and Collaborative Analytics on Large-Scale Data: Enforcement of privacy regulation is essential for collaborative data analytics. In this work, we address a scenario in which two companies expect to securely join their datasets with respect to their common customers to maximize data insights. Apart from the necessary protection of raw data, it becomes more challenging to protect the identities and attributes of common customers, as it requires participants to align their records associated with common customers without knowing who they are. We propose a solution, dubbed PSA, for this scenario, which is effectively applicable to real-world use cases, such as evaluating advertising conversion using data from both publishers and merchants. The contributions of this work are threefold: 1. We define the notion of PSA with two levels of privacy protection and propose novel PSA protocols based on the modified oblivious switching network, which leverages efficient symmetric key operations and offline precomputation to save online run time. 2. We implement and benchmark the proposed protocols under different network conditions by joining two datasets, each at the scale of one million records, in 35.5 sec on a single thread with a network bandwidth of 500 Mbps, resulting in a 100x improvement over existing homomorphic-encryption-based protocols. 3. We give a new proof, distinct from the existing one in the literature, for an algorithm of quasi-linear complexity that constructs an oblivious switching network achieving a target permutation.<|reference_end|>
arxiv
@article{wang2024psa:, title={PSA: Private Set Alignment for Secure and Collaborative Analytics on Large-Scale Data}, author={Jiabo Wang, Elmo Xuyun Huang, Pu Duan, Huaxiong Wang, Kwok-Yan Lam}, journal={arXiv preprint arXiv:2410.04746}, year={2024}, archivePrefix={arXiv}, eprint={2410.04746}, primaryClass={cs.CR} }
wang2024psa:
arxiv-666375
2410.04749
LLaVA Needs More Knowledge: Retrieval Augmented Natural Language Generation with Knowledge Graph for Explaining Thoracic Pathologies
<|reference_start|>LLaVA Needs More Knowledge: Retrieval Augmented Natural Language Generation with Knowledge Graph for Explaining Thoracic Pathologies: Generating Natural Language Explanations (NLEs) for model predictions on medical images, particularly those depicting thoracic pathologies, remains a critical and challenging task. Existing methodologies often struggle due to general models' insufficient domain-specific medical knowledge and privacy concerns associated with retrieval-based augmentation techniques. To address these issues, we propose a novel Vision-Language framework augmented with a Knowledge Graph (KG)-based datastore, which enhances the model's understanding by incorporating additional domain-specific medical knowledge essential for generating accurate and informative NLEs. Our framework employs a KG-based retrieval mechanism that not only improves the precision of the generated explanations but also preserves data privacy by avoiding direct data retrieval. The KG datastore is designed as a plug-and-play module, allowing for seamless integration with various model architectures. We introduce and evaluate three distinct frameworks within this paradigm: KG-LLaVA, which integrates the pre-trained LLaVA model with KG-RAG; Med-XPT, a custom framework combining MedCLIP, a transformer-based projector, and GPT-2; and Bio-LLaVA, which adapts LLaVA by incorporating the Bio-ViT-L vision model. These frameworks are validated on the MIMIC-NLE dataset, where they achieve state-of-the-art results, underscoring the effectiveness of KG augmentation in generating high-quality NLEs for thoracic pathologies.<|reference_end|>
arxiv
@article{hamza2024llava, title={LLaVA Needs More Knowledge: Retrieval Augmented Natural Language Generation with Knowledge Graph for Explaining Thoracic Pathologies}, author={Ameer Hamza, Abdullah, Yong Hyun Ahn, Sungyoung Lee, Seong Tae Kim}, journal={arXiv preprint arXiv:2410.04749}, year={2024}, archivePrefix={arXiv}, eprint={2410.04749}, primaryClass={cs.CV} }
hamza2024llava
arxiv-666376
2410.04751
Intriguing Properties of Large Language and Vision Models
<|reference_start|>Intriguing Properties of Large Language and Vision Models: Recently, large language and vision models (LLVMs) have received significant attention and development efforts due to their remarkable generalization performance across a wide range of tasks requiring perception and cognitive abilities. A key factor behind their success is their simple architecture, which consists of a vision encoder, a projector, and a large language model (LLM). Despite their achievements in advanced reasoning tasks, their performance on fundamental perception-related tasks (e.g., MMVP) remains surprisingly low. This discrepancy raises the question of how LLVMs truly perceive images and exploit the advantages of the vision encoder. To address this, we systematically investigate this question regarding several aspects: permutation invariance, robustness, math reasoning, alignment preservation and importance, by evaluating the most common LLVM families (i.e., LLaVA) across 10 evaluation benchmarks. Our extensive experiments reveal several intriguing properties of current LLVMs: (1) they internally process the image in a global manner, even when the order of visual patch sequences is randomly permuted; (2) they are sometimes able to solve math problems without fully perceiving detailed numerical information; (3) the cross-modal alignment is overfitted to complex reasoning tasks, thereby causing them to lose some of the original perceptual capabilities of their vision encoder; (4) the representation space in the lower layers (<25%) plays a crucial role in determining performance and enhancing visual understanding. Lastly, based on the above observations, we suggest potential future directions for building better LLVMs and constructing more challenging evaluation benchmarks.<|reference_end|>
arxiv
@article{lee2024intriguing, title={Intriguing Properties of Large Language and Vision Models}, author={Young-Jun Lee, Byungsoo Ko, Han-Gyu Kim, Yechan Hwang, Ho-Jin Choi}, journal={arXiv preprint arXiv:2410.04751}, year={2024}, archivePrefix={arXiv}, eprint={2410.04751}, primaryClass={cs.CV cs.CL} }
lee2024intriguing
arxiv-666377
2410.04752
Document-level Causal Relation Extraction with Knowledge-guided Binary Question Answering
<|reference_start|>Document-level Causal Relation Extraction with Knowledge-guided Binary Question Answering: As an essential task in information extraction (IE), Event-Event Causal Relation Extraction (ECRE) aims to identify and classify the causal relationships between event mentions in natural language texts. However, existing research on ECRE has highlighted two critical challenges, including the lack of document-level modeling and causal hallucinations. In this paper, we propose a Knowledge-guided binary Question Answering (KnowQA) method with event structures for ECRE, consisting of two stages: Event Structure Construction and Binary Question Answering. We conduct extensive experiments under both zero-shot and fine-tuning settings with large language models (LLMs) on the MECI and MAVEN-ERE datasets. Experimental results demonstrate the usefulness of event structures for document-level ECRE and the effectiveness of KnowQA by achieving state-of-the-art performance on the MECI dataset. We observe not only the effectiveness but also the high generalizability and low inconsistency of our method, particularly when complete event structures are available after fine-tuning the models.<|reference_end|>
arxiv
@article{wang2024document-level, title={Document-level Causal Relation Extraction with Knowledge-guided Binary Question Answering}, author={Zimu Wang, Lei Xia, Wei Wang, Xinya Du}, journal={arXiv preprint arXiv:2410.04752}, year={2024}, archivePrefix={arXiv}, eprint={2410.04752}, primaryClass={cs.CL} }
wang2024document-level
arxiv-666378
2410.04753
ImProver: Agent-Based Automated Proof Optimization
<|reference_start|>ImProver: Agent-Based Automated Proof Optimization: Large language models (LLMs) have been used to generate formal proofs of mathematical theorems in proof assistants such as Lean. However, we often want to optimize a formal proof with respect to various criteria, depending on its downstream use. For example, we may want a proof to adhere to a certain style, or to be readable, concise, or modularly structured. Having suitably optimized proofs is also important for learning tasks, especially since human-written proofs may not be optimal for that purpose. To this end, we study a new problem of automated proof optimization: rewriting a proof so that it is correct and optimizes for an arbitrary criterion, such as length or readability. As a first method for automated proof optimization, we present ImProver, a large-language-model agent that rewrites proofs to optimize arbitrary user-defined metrics in Lean. We find that naively applying LLMs to proof optimization falls short, and we incorporate various improvements into ImProver, such as the use of symbolic Lean context in a novel Chain-of-States technique, as well as error-correction and retrieval. We test ImProver on rewriting real-world undergraduate, competition, and research-level mathematics theorems, finding that ImProver is capable of rewriting proofs so that they are substantially shorter, more modular, and more readable.<|reference_end|>
arxiv
@article{ahuja2024improver:, title={ImProver: Agent-Based Automated Proof Optimization}, author={Riyaz Ahuja, Jeremy Avigad, Prasad Tetali, Sean Welleck}, journal={arXiv preprint arXiv:2410.04753}, year={2024}, archivePrefix={arXiv}, eprint={2410.04753}, primaryClass={cs.AI cs.CL cs.LG cs.LO} }
ahuja2024improver:
arxiv-666379
2410.04754
A Comprehensive Study on GDPR-Oriented Analysis of Privacy Policies: Taxonomy, Corpus and GDPR Concept Classifiers
<|reference_start|>A Comprehensive Study on GDPR-Oriented Analysis of Privacy Policies: Taxonomy, Corpus and GDPR Concept Classifiers: Machine learning-based classifiers that take a privacy policy as the input and predict relevant concepts are useful in different applications such as (semi-)automated compliance analysis against requirements of the EU GDPR. In all past studies, such classifiers produce a concept label per segment (e.g., sentence or paragraph) and their performance was evaluated by using a dataset of labeled segments without considering the privacy policy they belong to. However, such an approach could overestimate the performance in real-world settings, where all segments in a new privacy policy are supposed to be unseen. Additionally, we also observed other research gaps, including the lack of a more complete GDPR taxonomy and insufficient consideration of hierarchical information in privacy policies. To fill such research gaps, we developed a more complete GDPR taxonomy, created the first corpus of labeled privacy policies with hierarchical information, and conducted the most comprehensive performance evaluation of GDPR concept classifiers for privacy policies. Our work leads to multiple novel findings, including the confirmed inappropriateness of splitting training and test sets at the segment level, the benefits of considering hierarchical information, the limitations of the "one size fits all" approach, and the significance of testing cross-corpus generalizability.<|reference_end|>
arxiv
@article{tang2024a, title={A Comprehensive Study on GDPR-Oriented Analysis of Privacy Policies: Taxonomy, Corpus and GDPR Concept Classifiers}, author={Peng Tang, Xin Li, Yuxin Chen, Weidong Qiu, Haochen Mei, Allison Holmes, Fenghua Li, and Shujun Li}, journal={arXiv preprint arXiv:2410.04754}, year={2024}, archivePrefix={arXiv}, eprint={2410.04754}, primaryClass={cs.CR} }
tang2024a
arxiv-666380
2410.04756
Item Cluster-aware Prompt Learning for Session-based Recommendation
<|reference_start|>Item Cluster-aware Prompt Learning for Session-based Recommendation: Session-based recommendation (SBR) aims to capture dynamic user preferences by analyzing item sequences within individual sessions. However, most existing approaches focus mainly on intra-session item relationships, neglecting the connections between items across different sessions (inter-session relationships), which limits their ability to fully capture complex item interactions. While some methods incorporate inter-session information, they often suffer from high computational costs, leading to longer training times and reduced efficiency. To address these challenges, we propose the CLIP-SBR (Cluster-aware Item Prompt learning for Session-Based Recommendation) framework. CLIP-SBR is composed of two modules: 1) an item relationship mining module that builds a global graph to effectively model both intra- and inter-session relationships, and 2) an item cluster-aware prompt learning module that uses soft prompts to integrate these relationships into SBR models efficiently. We evaluate CLIP-SBR across eight SBR models and three benchmark datasets, consistently demonstrating improved recommendation performance and establishing CLIP-SBR as a robust solution for session-based recommendation tasks.<|reference_end|>
arxiv
@article{yang2024item, title={Item Cluster-aware Prompt Learning for Session-based Recommendation}, author={Wooseong Yang, Chen Wang, Zihe Song, Weizhi Zhang, Philip S. Yu}, journal={arXiv preprint arXiv:2410.04756}, year={2024}, archivePrefix={arXiv}, eprint={2410.04756}, primaryClass={cs.IR cs.AI cs.LG} }
yang2024item
arxiv-666381
2410.04759
Driving with Regulation: Interpretable Decision-Making for Autonomous Vehicles with Retrieval-Augmented Reasoning via LLM
<|reference_start|>Driving with Regulation: Interpretable Decision-Making for Autonomous Vehicles with Retrieval-Augmented Reasoning via LLM: This work presents an interpretable decision-making framework for autonomous vehicles that integrates traffic regulations, norms, and safety guidelines comprehensively and enables seamless adaptation to different regions. While traditional rule-based methods struggle to incorporate the full scope of traffic rules, we develop a Traffic Regulation Retrieval (TRR) Agent based on Retrieval-Augmented Generation (RAG) to automatically retrieve relevant traffic rules and guidelines from extensive regulation documents and relevant records based on the ego vehicle's situation. Given the semantic complexity of the retrieved rules, we also design a reasoning module powered by a Large Language Model (LLM) to interpret these rules, differentiate between mandatory rules and safety guidelines, and assess actions on legal compliance and safety. Additionally, the reasoning is designed to be interpretable, enhancing both transparency and reliability. The framework demonstrates robust performance on both hypothesized and real-world cases across diverse scenarios, along with the ability to adapt to different regions with ease.<|reference_end|>
arxiv
@article{cai2024driving, title={Driving with Regulation: Interpretable Decision-Making for Autonomous Vehicles with Retrieval-Augmented Reasoning via LLM}, author={Tianhui Cai, Yifan Liu, Zewei Zhou, Haoxuan Ma, Seth Z. Zhao, Zhiwen Wu and Jiaqi Ma}, journal={arXiv preprint arXiv:2410.04759}, year={2024}, archivePrefix={arXiv}, eprint={2410.04759}, primaryClass={cs.AI} }
cai2024driving
arxiv-666382
2410.04760
Stochastic Runge-Kutta Methods: Provable Acceleration of Diffusion Models
<|reference_start|>Stochastic Runge-Kutta Methods: Provable Acceleration of Diffusion Models: Diffusion models play a pivotal role in contemporary generative modeling, claiming state-of-the-art performance across various domains. Despite their superior sample quality, mainstream diffusion-based stochastic samplers like DDPM often require a large number of score function evaluations, incurring considerably higher computational cost compared to single-step generators like generative adversarial networks. While several acceleration methods have been proposed in practice, the theoretical foundations for accelerating diffusion models remain underexplored. In this paper, we propose and analyze a training-free acceleration algorithm for SDE-style diffusion samplers, based on the stochastic Runge-Kutta method. The proposed sampler provably attains $\varepsilon^2$ error -- measured in KL divergence -- using $\widetilde O(d^{3/2} / \varepsilon)$ score function evaluations (for sufficiently small $\varepsilon$), strengthening the state-of-the-art guarantees $\widetilde O(d^{3} / \varepsilon)$ in terms of dimensional dependency. Numerical experiments validate the efficiency of the proposed method.<|reference_end|>
arxiv
@article{wu2024stochastic, title={Stochastic Runge-Kutta Methods: Provable Acceleration of Diffusion Models}, author={Yuchen Wu, Yuxin Chen, Yuting Wei}, journal={arXiv preprint arXiv:2410.04760}, year={2024}, archivePrefix={arXiv}, eprint={2410.04760}, primaryClass={stat.ML cs.LG} }
wu2024stochastic
arxiv-666383
2410.04761
Shuffling Gradient Descent-Ascent with Variance Reduction for Nonconvex-Strongly Concave Smooth Minimax Problems
<|reference_start|>Shuffling Gradient Descent-Ascent with Variance Reduction for Nonconvex-Strongly Concave Smooth Minimax Problems: In recent years, there has been considerable interest in designing stochastic first-order algorithms to tackle finite-sum smooth minimax problems. To obtain the gradient estimates, one typically relies on the uniform sampling-with-replacement scheme or various sampling-without-replacement (also known as shuffling) schemes. While the former is easier to analyze, the latter often have better empirical performance. In this paper, we propose a novel single-loop stochastic gradient descent-ascent (GDA) algorithm that employs both shuffling schemes and variance reduction to solve nonconvex-strongly concave smooth minimax problems. We show that the proposed algorithm achieves $\epsilon$-stationarity in expectation in $\mathcal{O}(\kappa^2 \epsilon^{-2})$ iterations, where $\kappa$ is the condition number of the problem. This outperforms existing shuffling schemes and matches the complexity of the best-known sampling-with-replacement algorithms. Our proposed algorithm also achieves the same complexity as that of its deterministic counterpart, the two-timescale GDA algorithm. Our numerical experiments demonstrate the superior performance of the proposed algorithm.<|reference_end|>
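A bare-bones sketch of the shuffling idea above: each epoch draws a fresh permutation of the component indices (sampling without replacement) and performs a descent step on the min variable and an ascent step on the max variable per component. The variance-reduction correction and the step-size analysis from the paper are omitted; step sizes here are placeholders.

```python
import numpy as np

def shuffling_gda(grad_x_i, grad_y_i, x0, y0, n,
                  lr_x=1e-3, lr_y=1e-2, epochs=50, seed=0):
    """Single-loop GDA over a finite sum f(x, y) = (1/n) sum_i f_i(x, y),
    visiting the components in a fresh random order each epoch.

    grad_x_i(i, x, y) / grad_y_i(i, x, y): gradients of the i-th component.
    """
    rng = np.random.default_rng(seed)
    x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)
    for _ in range(epochs):
        for i in rng.permutation(n):       # shuffling (without-replacement) scheme
            gx = grad_x_i(i, x, y)
            gy = grad_y_i(i, x, y)
            x = x - lr_x * gx              # descent on the min player
            y = y + lr_y * gy              # ascent on the (strongly concave) max player
    return x, y
```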
arxiv
@article{jiang2024shuffling, title={Shuffling Gradient Descent-Ascent with Variance Reduction for Nonconvex-Strongly Concave Smooth Minimax Problems}, author={Xia Jiang, Linglingzhi Zhu, Anthony Man-Cho So, Shisheng Cui, Jian Sun}, journal={arXiv preprint arXiv:2410.04761}, year={2024}, archivePrefix={arXiv}, eprint={2410.04761}, primaryClass={math.OC cs.GT} }
jiang2024shuffling
arxiv-666384
2410.04762
WTCL-Dehaze: Rethinking Real-world Image Dehazing via Wavelet Transform and Contrastive Learning
<|reference_start|>WTCL-Dehaze: Rethinking Real-world Image Dehazing via Wavelet Transform and Contrastive Learning: Images captured in hazy outdoor conditions often suffer from colour distortion, low contrast, and loss of detail, which impair high-level vision tasks. Single image dehazing is essential for applications such as autonomous driving and surveillance, with the aim of restoring image clarity. In this work, we propose WTCL-Dehaze, an enhanced semi-supervised dehazing network that integrates Contrastive Loss and Discrete Wavelet Transform (DWT). We incorporate contrastive regularization to enhance feature representation by contrasting hazy and clear image pairs. Additionally, we utilize DWT for multi-scale feature extraction, effectively capturing high-frequency details and global structures. Our approach leverages both labelled and unlabelled data to mitigate the domain gap and improve generalization. The model is trained on a combination of synthetic and real-world datasets, ensuring robust performance across different scenarios. Extensive experiments demonstrate that our proposed algorithm achieves superior performance and improved robustness compared to state-of-the-art single image dehazing methods on both benchmark datasets and real-world images.<|reference_end|>
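To make the DWT-plus-contrastive idea tangible, the sketch below computes a one-level Haar wavelet transform with fixed depthwise convolutions and forms a simple contrastive term that pulls the dehazed output toward the clear image and away from the hazy input on the high-frequency sub-bands. The Haar filters, the L1 distance, and the ratio-style loss are assumptions for illustration; the paper's network, feature space, and exact loss may differ.

```python
import torch
import torch.nn.functional as F

def dwt_haar(x):
    """One-level 2D Haar DWT via fixed depthwise convolutions.
    x: (B, C, H, W) with even H and W; returns (LL, LH, HL, HH)."""
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
    lh = torch.tensor([[-0.5, -0.5], [0.5, 0.5]])
    hl = torch.tensor([[-0.5, 0.5], [-0.5, 0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    B, C, H, W = x.shape
    filt = torch.stack([ll, lh, hl, hh]).unsqueeze(1).repeat(C, 1, 1, 1).to(x)
    out = F.conv2d(x, filt, stride=2, groups=C)
    out = out.reshape(B, C, 4, H // 2, W // 2)
    return out[:, :, 0], out[:, :, 1], out[:, :, 2], out[:, :, 3]

def contrastive_dwt_loss(restored, clear, hazy, eps=1e-7):
    """Pull the dehazed output toward the clear image (positive) and away from
    the hazy input (negative), measured on the high-frequency sub-bands.
    Only available for pairs where a clear reference exists."""
    def highfreq(img):
        _, lh, hl, hh = dwt_haar(img)
        return torch.cat([lh, hl, hh], dim=1)
    pos = F.l1_loss(highfreq(restored), highfreq(clear))
    neg = F.l1_loss(highfreq(restored), highfreq(hazy))
    return pos / (neg + eps)
```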
arxiv
@article{appiah2024wtcl-dehaze:, title={WTCL-Dehaze: Rethinking Real-world Image Dehazing via Wavelet Transform and Contrastive Learning}, author={Divine Joseph Appiah, Donghai Guan, Abdul Nasser Kasule and Mingqiang Wei}, journal={arXiv preprint arXiv:2410.04762}, year={2024}, archivePrefix={arXiv}, eprint={2410.04762}, primaryClass={cs.CV} }
appiah2024wtcl-dehaze:
arxiv-666385
2410.04764
Double Oracle Neural Architecture Search for Game Theoretic Deep Learning Models
<|reference_start|>Double Oracle Neural Architecture Search for Game Theoretic Deep Learning Models: In this paper, we propose a new approach to train deep learning models using game theory concepts including Generative Adversarial Networks (GANs) and Adversarial Training (AT) where we deploy a double-oracle framework using best response oracles. GAN is essentially a two-player zero-sum game between the generator and the discriminator. The same concept can be applied to AT with attacker and classifier as players. Training these models is challenging as a pure Nash equilibrium may not exist and even finding the mixed Nash equilibrium is difficult as training algorithms for both GAN and AT have a large-scale strategy space. Extending our preliminary model DO-GAN, we propose the methods to apply the double oracle framework concept to Adversarial Neural Architecture Search (NAS for GAN) and Adversarial Training (NAS for AT) algorithms. We first generalize the players' strategies as the trained models of generator and discriminator from the best response oracles. We then compute the meta-strategies using a linear program. For scalability of the framework where multiple network models of best responses are stored in the memory, we prune the weakly-dominated players' strategies to keep the oracles from becoming intractable. Finally, we conduct experiments on MNIST, CIFAR-10 and TinyImageNet for DONAS-GAN. We also evaluate the robustness under FGSM and PGD attacks on CIFAR-10, SVHN and TinyImageNet for DONAS-AT. We show that all our variants have significant improvements in both subjective qualitative evaluation and quantitative metrics, compared with their respective base architectures.<|reference_end|>
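The meta-strategy step mentioned above (computing a mixed strategy over stored best-response oracles via a linear program) can be sketched for a zero-sum meta-game as follows; the payoff matrix here is a toy example, and details such as pruning weakly dominated strategies are omitted.

```python
import numpy as np
from scipy.optimize import linprog

def meta_strategy(payoff):
    """Solve the row player's maximin mixed strategy for a zero-sum meta-game.
    payoff[i, j]: utility of row strategy i against column strategy j
    (e.g., generator oracle i vs. discriminator oracle j)."""
    m, n = payoff.shape
    # Variables: p_1..p_m (mixture weights), v (game value). Minimize -v.
    c = np.concatenate([np.zeros(m), [-1.0]])
    # For every column j:  v - sum_i p_i * payoff[i, j] <= 0
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[m]

# Matching-pennies-style toy meta-game
p, v = meta_strategy(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(p, v)  # ~[0.5, 0.5], game value ~0
```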
arxiv
@article{aung2024double, title={Double Oracle Neural Architecture Search for Game Theoretic Deep Learning Models}, author={Aye Phyu Phyu Aung, Xinrun Wang, Ruiyu Wang, Hau Chan, Bo An, Xiaoli Li, J. Senthilnath}, journal={arXiv preprint arXiv:2410.04764}, year={2024}, archivePrefix={arXiv}, eprint={2410.04764}, primaryClass={cs.LG cs.GT} }
aung2024double
arxiv-666386
2410.04765
Molecular topological deep learning for polymer property prediction
<|reference_start|>Molecular topological deep learning for polymer property prediction: Accurate and efficient prediction of polymer properties is of key importance for polymer design. Traditional experimental tools and density functional theory (DFT)-based simulations for polymer property evaluation are both expensive and time-consuming. Recently, a large number of graph-based molecular models have emerged and demonstrated huge potential in molecular data analysis. Even with this great progress, these models tend to ignore the high-order and multiscale information within the data. In this paper, we develop molecular topological deep learning (Mol-TDL) for polymer property analysis. Our Mol-TDL incorporates both high-order interactions and multiscale properties into a topological deep learning architecture. The key idea is to represent polymer molecules as a series of simplicial complexes at different scales and build simplicial neural networks accordingly. The aggregated information from different scales provides a more accurate prediction of polymer molecular properties.<|reference_end|>
arxiv
@article{shen2024molecular, title={Molecular topological deep learning for polymer property prediction}, author={Cong Shen, Yipeng Zhang, Fei Han and Kelin Xia}, journal={arXiv preprint arXiv:2410.04765}, year={2024}, archivePrefix={arXiv}, eprint={2410.04765}, primaryClass={cond-mat.mtrl-sci cs.AI cs.LG} }
shen2024molecular
arxiv-666387
2410.04768
A Stretchable Electrostatic Tactile Surface
<|reference_start|>A Stretchable Electrostatic Tactile Surface: Tactile sensation is essential for humans to recognize objects. Various devices for tactile presentation by electrostatic force have been developed in the past; they are easy to configure, but no existing device of this kind features stretchability. Considering that the device is worn over the joints of a human body or robot, it is extremely important that the device itself be stretchable. In this study, we propose a stretchable electrostatic tactile surface comprising a stretchable transparent electrode and a stretchable insulating film that can be stretched to a maximum of 50%. This means that when attached to the human body, the surface can respond to the expansion and contraction that occur due to joint movements. The surface can also provide tactile information in response to deformation such as pushing and pulling. As a basic investigation, we measured the lower limit of voltage that can be perceived under different surface configurations and evaluated the stretched and contracted states. We also investigated and modeled the relationship between the voltage and the perceived intensity.<|reference_end|>
arxiv
@article{takayanagi2024a, title={A Stretchable Electrostatic Tactile Surface}, author={Naoto Takayanagi, Naoji Matsuhisa, Yuki Hashimoto, Yuta Sugiura}, journal={arXiv preprint arXiv:2410.04768}, year={2024}, archivePrefix={arXiv}, eprint={2410.04768}, primaryClass={cs.HC} }
takayanagi2024a
arxiv-666388
2410.04771
On the Complexity of Computing the Co-lexicographic Width of a Regular Language
<|reference_start|>On the Complexity of Computing the Co-lexicographic Width of a Regular Language: Co-lex partial orders were recently introduced in (Cotumaccio et al., SODA 2021 and JACM 2023) as a powerful tool to index finite state automata, with applications to regular expression matching. They generalize Wheeler orders (Gagie et al., Theoretical Computer Science 2017) and naturally reflect the co-lexicographic order of the strings labeling source-to-node paths in the automaton. Briefly, the co-lex width $p$ of a finite-state automaton measures how sortable its states are with respect to the co-lex order among the strings they accept. Automata of co-lex width $p$ can be compressed to $O(\log p)$ bits per edge and admit regular expression matching algorithms running in time proportional to $p^2$ per matched character. The deterministic co-lex width of a regular language $\mathcal L$ is the smallest width of such a co-lex order, among all DFAs recognizing $\mathcal L$. Since languages of small co-lex width admit efficient solutions to automata compression and pattern matching, computing the co-lex width of a language is relevant in these applications. The paper introducing co-lex orders determined that the deterministic co-lex width $p$ of a language $\mathcal L$ can be computed in time proportional to $m^{O(p)}$, given as input any DFA $\mathcal A$ for $\mathcal L$, of size (number of transitions) $m =|\mathcal A|$. In this paper, using new techniques, we show that it is possible to decide in $O(m^p)$ time if the deterministic co-lex width of the language recognized by a given minimum DFA is strictly smaller than some integer $p\ge 2$. We complement this upper bound with a matching conditional lower bound based on the Strong Exponential Time Hypothesis. The problem is known to be PSPACE-complete when the input is an NFA (D'Agostino et al., Theoretical Computer Science 2023); thus, together with that result, our paper essentially settles the complexity of the problem.<|reference_end|>
arxiv
@article{becker2024on, title={On the Complexity of Computing the Co-lexicographic Width of a Regular Language}, author={Ruben Becker, Davide Cenzato, Sung-Hwan Kim, Tomasz Kociumaka, Bojana Kodric, Alberto Policriti, Nicola Prezza}, journal={arXiv preprint arXiv:2410.04771}, year={2024}, archivePrefix={arXiv}, eprint={2410.04771}, primaryClass={cs.FL} }
becker2024on
arxiv-666389
2410.04772
From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing
<|reference_start|>From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing: Artificial intelligence (AI) is increasingly intervening in our lives, raising widespread concern about its unintended and undeclared side effects. These developments have brought attention to the problem of AI auditing: the systematic evaluation and analysis of an AI system, its development, and its behavior relative to a set of predetermined criteria. Auditing can take many forms, including pre-deployment risk assessments, ongoing monitoring, and compliance testing. It plays a critical role in providing assurances to various AI stakeholders, from developers to end users. Audits may, for instance, be used to verify that an algorithm complies with the law, is consistent with industry standards, and meets the developer's claimed specifications. However, there are many operational challenges to AI auditing that complicate its implementation. In this work, we examine a key operational issue in AI auditing: what type of access to an AI system is needed to perform a meaningful audit? Addressing this question has direct policy relevance, as it can inform AI audit guidelines and requirements. We begin by discussing the factors that auditors balance when determining the appropriate type of access, and unpack the benefits and drawbacks of four types of access. We conclude that, at minimum, black-box access -- providing query access to a model without exposing its internal implementation -- should be granted to auditors, as it balances concerns related to trade secrets, data privacy, audit standardization, and audit efficiency. We then suggest a framework for determining how much further access (in addition to black-box access) to grant auditors. We show that auditing can be cast as a natural hypothesis test, draw parallels between hypothesis testing and legal procedure, and argue that this framing provides clear and interpretable guidance on audit implementation.<|reference_end|>
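As a toy illustration of the "auditing as hypothesis testing" framing above, the snippet below treats a black-box audit as a one-sided binomial test of a claimed violation rate; the query counts, claimed rate, and significance level are made-up numbers, not figures from the paper.

```python
from scipy.stats import binomtest

def audit_via_hypothesis_test(num_queries, num_violations, claimed_rate=0.01, alpha=0.05):
    """Treat a black-box audit as a one-sided hypothesis test: H0 says the
    system's violation rate is at most `claimed_rate`; flag the system if the
    observed violations are too many to be explained by chance."""
    result = binomtest(num_violations, num_queries, claimed_rate, alternative="greater")
    return {"p_value": result.pvalue, "reject_claim": result.pvalue < alpha}

# e.g., 2,000 black-box queries, 35 observed violations against a claimed 1% rate
print(audit_via_hypothesis_test(2000, 35))
```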
arxiv
@article{cen2024from, title={From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing}, author={Sarah H. Cen and Rohan Alur}, journal={arXiv preprint arXiv:2410.04772}, year={2024}, archivePrefix={arXiv}, eprint={2410.04772}, primaryClass={cs.CY cs.LG} }
cen2024from
arxiv-666390
2410.04774
Granular Ball Twin Support Vector Machine
<|reference_start|>Granular Ball Twin Support Vector Machine: Twin support vector machine (TSVM) is an emerging machine learning model with versatile applicability in classification and regression endeavors. Nevertheless, TSVM confronts noteworthy challenges: $(i)$ the imperative demand for matrix inversions presents formidable obstacles to its efficiency and applicability on large-scale datasets; $(ii)$ the omission of the structural risk minimization (SRM) principle in its primal formulation heightens the vulnerability to overfitting risks; and $(iii)$ the TSVM exhibits a high susceptibility to noise and outliers, and also demonstrates instability when subjected to resampling. In view of the aforementioned challenges, we propose the granular ball twin support vector machine (GBTSVM). GBTSVM takes granular balls, rather than individual data points, as inputs to construct a classifier. These granular balls, characterized by their coarser granularity, exhibit robustness to resampling and reduced susceptibility to the impact of noise and outliers. We further propose a novel large-scale granular ball twin support vector machine (LS-GBTSVM). LS-GBTSVM's optimization formulation ensures two critical facets: $(i)$ it eliminates the need for matrix inversions, streamlining the LS-GBTSVM's computational efficiency, and $(ii)$ it incorporates the SRM principle through the incorporation of regularization terms, effectively addressing the issue of overfitting. The proposed LS-GBTSVM exemplifies efficiency, scalability for large datasets, and robustness against noise and outliers. We conduct a comprehensive evaluation of the GBTSVM and LS-GBTSVM models on benchmark datasets from UCI, KEEL, and NDC datasets. Our experimental findings and statistical analyses affirm the superior generalization prowess of the proposed GBTSVM and LS-GBTSVM models.<|reference_end|>
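For intuition, here is a common granular-ball construction (recursive 2-means splitting until a purity threshold is met), summarizing each ball by its center, radius, and majority label; a TSVM-style classifier would then be trained on these summaries. This is a generic sketch of the granular-ball idea, not necessarily the exact procedure or thresholds used by GBTSVM/LS-GBTSVM.

```python
import numpy as np
from sklearn.cluster import KMeans

def granular_balls(X, y, purity_threshold=0.95, min_samples=4):
    """Recursively split the data with 2-means until each ball is pure enough,
    then summarize every ball by its center, mean radius, and majority label."""
    balls, queue = [], [(X, y)]
    while queue:
        Xb, yb = queue.pop()
        labels, counts = np.unique(yb, return_counts=True)
        purity = counts.max() / len(yb)
        if purity >= purity_threshold or len(yb) <= min_samples or len(labels) == 1:
            center = Xb.mean(axis=0)
            radius = np.linalg.norm(Xb - center, axis=1).mean()
            balls.append((center, radius, labels[np.argmax(counts)]))
            continue
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Xb)
        for c in (0, 1):
            mask = km.labels_ == c
            if mask.sum() > 0:
                queue.append((Xb[mask], yb[mask]))
    return balls
```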
arxiv
@article{quadir2024granular, title={Granular Ball Twin Support Vector Machine}, author={A. Quadir, M. Sajid, M. Tanveer}, journal={IEEE Transactions on Neural Networks and Learning Systems}, year={2024}, doi={10.1109/TNNLS.2024.3476391}, archivePrefix={arXiv}, eprint={2410.04774}, primaryClass={cs.LG} }
quadir2024granular
arxiv-666391
2410.04775
OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning
<|reference_start|>OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning: Sensory earables have evolved from basic audio enhancement devices into sophisticated platforms for clinical-grade health monitoring and wellbeing management. This paper introduces OmniBuds, an advanced sensory earable platform integrating multiple biosensors and onboard computation powered by a machine learning accelerator, all within a real-time operating system (RTOS). The platform's dual-ear symmetric design, equipped with precisely positioned kinetic, acoustic, optical, and thermal sensors, enables highly accurate and real-time physiological assessments. Unlike conventional earables that rely on external data processing, OmniBuds leverage real-time onboard computation to significantly enhance system efficiency, reduce latency, and safeguard privacy by processing data locally. This capability includes executing complex machine learning models directly on the device. We provide a comprehensive analysis of OmniBuds' design, hardware and software architecture demonstrating its capacity for multi-functional applications, accurate and robust tracking of physiological parameters, and advanced human-computer interaction.<|reference_end|>
arxiv
@article{montanari2024omnibuds:, title={OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning}, author={Alessandro Montanari, Ashok Thangarajan, Khaldoon Al-Naimi, Andrea Ferlini, Yang Liu, Ananta Narayanan Balaji, Fahim Kawsar}, journal={arXiv preprint arXiv:2410.04775}, year={2024}, archivePrefix={arXiv}, eprint={2410.04775}, primaryClass={cs.ET cs.LG} }
montanari2024omnibuds:
arxiv-666392
2410.04778
MM-R$^3$: On (In-)Consistency of Multi-modal Large Language Models (MLLMs)
<|reference_start|>MM-R$^3$: On (In-)Consistency of Multi-modal Large Language Models (MLLMs): With the advent of Large Language Models (LLMs) and Multimodal (Visio-lingual) LLMs, a flurry of research has emerged, analyzing the performance of such models across a diverse array of tasks. While most studies focus on evaluating the capabilities of state-of-the-art (SoTA) MLLM models through task accuracy (e.g., Visual Question Answering, grounding) across various datasets, our work explores the related but complementary aspect of consistency - the ability of an MLLM model to produce semantically similar or identical responses to semantically similar queries. We note that consistency is a fundamental prerequisite (necessary but not sufficient condition) for robustness and trust in MLLMs. Humans, in particular, are known to be highly consistent (even if not always accurate) in their responses, and consistency is inherently expected from AI systems. Armed with this perspective, we propose the MM-R$^3$ benchmark, which analyses the performance in terms of consistency and accuracy in SoTA MLLMs with three tasks: Question Rephrasing, Image Restyling, and Context Reasoning. Our analysis reveals that consistency does not always align with accuracy, indicating that models with higher accuracy are not necessarily more consistent, and vice versa. Furthermore, we propose a simple yet effective mitigation strategy in the form of an adapter module trained to minimize inconsistency across prompts. With our proposed strategy, we are able to achieve absolute improvements of 5.7% and 12.5%, on average on widely used MLLMs such as BLIP-2 and LLaVa 1.5M in terms of consistency over their existing counterparts.<|reference_end|>
arxiv
@article{chou2024mm-r$^3$:, title={MM-R$^3$: On (In-)Consistency of Multi-modal Large Language Models (MLLMs)}, author={Shih-Han Chou, Shivam Chandhok, James J. Little, Leonid Sigal}, journal={arXiv preprint arXiv:2410.04778}, year={2024}, archivePrefix={arXiv}, eprint={2410.04778}, primaryClass={cs.CV} }
chou2024mm-r$^3$:
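The consistency notion in the MM-R$^3$ record above (semantically similar responses to semantically similar queries) can be illustrated with a tiny pairwise-similarity sketch. The TF-IDF proxy below is an assumption for illustration only; it is not the benchmark's actual metric, which would use a semantic encoder.

```python
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def consistency_score(responses):
    """Average pairwise cosine similarity between a model's responses to
    paraphrased versions of the same query. A lexical proxy only."""
    vectors = TfidfVectorizer().fit_transform(responses)
    pairs = list(combinations(range(len(responses)), 2))
    sims = [cosine_similarity(vectors[i], vectors[j])[0, 0] for i, j in pairs]
    return sum(sims) / len(sims)

# Example: three answers to rephrasings of "What color is the car?"
print(consistency_score(["The car is red.", "It is a red car.", "Blue."]))
```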
arxiv-666393
2410.04779
Fast Training of Sinusoidal Neural Fields via Scaling Initialization
<|reference_start|>Fast Training of Sinusoidal Neural Fields via Scaling Initialization: Neural fields are an emerging paradigm that represent data as continuous functions parameterized by neural networks. Despite many advantages, neural fields often have a high training cost, which prevents a broader adoption. In this paper, we focus on a popular family of neural fields, called sinusoidal neural fields (SNFs), and study how it should be initialized to maximize the training speed. We find that the standard initialization scheme for SNFs -- designed based on the signal propagation principle -- is suboptimal. In particular, we show that by simply multiplying each weight (except for the last layer) by a constant, we can accelerate SNF training by 10$\times$. This method, coined $\textit{weight scaling}$, consistently provides a significant speedup over various data domains, allowing the SNFs to train faster than more recently proposed architectures. To understand why the weight scaling works well, we conduct extensive theoretical and empirical analyses which reveal that the weight scaling not only resolves the spectral bias quite effectively but also enjoys a well-conditioned optimization trajectory.<|reference_end|>
arxiv
@article{yeom2024fast, title={Fast Training of Sinusoidal Neural Fields via Scaling Initialization}, author={Taesun Yeom, Sangyoon Lee, Jaeho Lee}, journal={arXiv preprint arXiv:2410.04779}, year={2024}, archivePrefix={arXiv}, eprint={2410.04779}, primaryClass={cs.LG cs.AI} }
yeom2024fast
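The record above reports that multiplying every weight except the last layer's by a constant at initialization accelerates sinusoidal neural field training. Below is a minimal PyTorch sketch of a SIREN-style field with such a scaling knob; the scale value of 8.0 and the layer sizes are illustrative assumptions, not the paper's recommended settings.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by sin(omega0 * x), as in SIREN-style fields."""
    def __init__(self, in_f, out_f, omega0=30.0, is_first=False):
        super().__init__()
        self.omega0 = omega0
        self.linear = nn.Linear(in_f, out_f)
        with torch.no_grad():
            if is_first:
                self.linear.weight.uniform_(-1.0 / in_f, 1.0 / in_f)
            else:
                bound = (6.0 / in_f) ** 0.5 / omega0
                self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega0 * self.linear(x))

def build_snf(hidden=256, depth=4, weight_scale=8.0):
    """Sinusoidal field whose weights (except the last layer's) are multiplied
    by a constant after standard initialization; the scale is an assumption."""
    layers = [SineLayer(2, hidden, is_first=True)]
    layers += [SineLayer(hidden, hidden) for _ in range(depth - 1)]
    head = nn.Linear(hidden, 3)  # e.g. RGB output for a 2-D image field
    model = nn.Sequential(*layers, head)
    with torch.no_grad():
        for module in list(model)[:-1]:   # scale everything but the last layer
            module.linear.weight.mul_(weight_scale)
    return model

model = build_snf()
rgb = model(torch.rand(1024, 2))  # query 1024 random (x, y) coordinates
```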
arxiv-666394
2410.04780
Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality
<|reference_start|>Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality: Multimodal Large Language Models (MLLMs) have emerged as a central focus in both industry and academia, but often suffer from biases introduced by visual and language priors, which can lead to multimodal hallucination. These biases arise from the visual encoder and the Large Language Model (LLM) backbone, affecting the attention mechanism responsible for aligning multimodal inputs. Existing decoding-based mitigation methods focus on statistical correlations and overlook the causal relationships between attention mechanisms and model output, limiting their effectiveness in addressing these biases. To tackle this issue, we propose a causal inference framework termed CausalMM that applies structural causal modeling to MLLMs, treating modality priors as a confounder between attention mechanisms and output. Specifically, by employing backdoor adjustment and counterfactual reasoning at both the visual and language attention levels, our method mitigates the negative effects of modality priors and enhances the alignment of MLLM's inputs and outputs, with a maximum score improvement of 65.3% on 6 VLind-Bench indicators and 164 points on MME Benchmark compared to conventional methods. Extensive experiments validate the effectiveness of our approach while being a plug-and-play solution. Our code is available at: https://github.com/The-Martyr/CausalMM<|reference_end|>
arxiv
@article{zhou2024mitigating, title={Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality}, author={Guanyu Zhou, Yibo Yan, Xin Zou, Kun Wang, Aiwei Liu, Xuming Hu}, journal={arXiv preprint arXiv:2410.04780}, year={2024}, archivePrefix={arXiv}, eprint={2410.04780}, primaryClass={cs.CV} }
zhou2024mitigating
arxiv-666395
2410.04783
When GDD meets GNN: A Knowledge-driven Neural Connection for Effective Entity Resolution in Property Graphs
<|reference_start|>When GDD meets GNN: A Knowledge-driven Neural Connection for Effective Entity Resolution in Property Graphs: This paper studies the entity resolution (ER) problem in property graphs. ER is the task of identifying and linking different records that refer to the same real-world entity. It is commonly used in data integration, data cleansing, and other applications where it is important to have accurate and consistent data. In general, two predominant approaches exist in the literature: rule-based and learning-based methods. On the one hand, rule-based techniques are often desired due to their explainability and ability to encode domain knowledge. Learning-based methods, on the other hand, are preferred due to their effectiveness in spite of their black-box nature. In this work, we devise a hybrid ER solution, GraphER, that leverages the strengths of both systems for property graphs. In particular, we adopt graph differential dependency (GDD) for encoding the so-called record-matching rules, and employ them to guide a graph neural network (GNN) based representation learning for the task. We conduct extensive empirical evaluation of our proposal on benchmark ER datasets including 17 graph datasets and 7 relational datasets in comparison with 10 state-of-the-art (SOTA) techniques. The results show that our approach provides a significantly better solution to addressing ER in graph data, both quantitatively and qualitatively, while attaining highly competitive results on the benchmark relational datasets w.r.t. the SOTA solutions.<|reference_end|>
arxiv
@article{hu2024when, title={When GDD meets GNN: A Knowledge-driven Neural Connection for Effective Entity Resolution in Property Graphs}, author={Junwei Hu, Michael Bewong, Selasi Kwashie, Yidi Zhang, Vincent Nofong, John Wondoh, Zaiwen Feng}, journal={arXiv preprint arXiv:2410.04783}, year={2024}, archivePrefix={arXiv}, eprint={2410.04783}, primaryClass={cs.DB} }
hu2024when
arxiv-666396
2410.04784
Formality is Favored: Unraveling the Learning Preferences of Large Language Models on Data with Conflicting Knowledge
<|reference_start|>Formality is Favored: Unraveling the Learning Preferences of Large Language Models on Data with Conflicting Knowledge: Having been trained on massive pretraining data, large language models have shown excellent performance on many knowledge-intensive tasks. However, pretraining data tends to contain misleading and even conflicting information, and it is intriguing to understand how LLMs handle these noisy data during training. In this study, we systematically analyze LLMs' learning preferences for data with conflicting knowledge. We find that pretrained LLMs establish learning preferences similar to humans, i.e., preferences towards formal texts and texts with fewer spelling errors, resulting in faster learning and more favorable treatment of knowledge in data with such features when facing conflicts. This finding is generalizable across models and languages and is more evident in larger models. An in-depth analysis reveals that LLMs tend to trust data with features that signify consistency with the majority of data, and it is possible to instill new preferences and erase old ones by manipulating the degree of consistency with the majority data.<|reference_end|>
arxiv
@article{li2024formality, title={Formality is Favored: Unraveling the Learning Preferences of Large Language Models on Data with Conflicting Knowledge}, author={Jiahuan Li, Yiqing Cao, Shujian Huang, Jiajun Chen}, journal={arXiv preprint arXiv:2410.04784}, year={2024}, archivePrefix={arXiv}, eprint={2410.04784}, primaryClass={cs.CL} }
li2024formality
arxiv-666397
2410.04785
Towards Ultra-Low-Power Neuromorphic Speech Enhancement with Spiking-FullSubNet
<|reference_start|>Towards Ultra-Low-Power Neuromorphic Speech Enhancement with Spiking-FullSubNet: Speech enhancement is critical for improving speech intelligibility and quality in various audio devices. In recent years, deep learning-based methods have significantly improved speech enhancement performance, but they often come with a high computational cost, which is prohibitive for a large number of edge devices, such as headsets and hearing aids. This work proposes an ultra-low-power speech enhancement system based on the brain-inspired spiking neural network (SNN) called Spiking-FullSubNet. Spiking-FullSubNet follows a full-band and sub-band fusioned approach to effectively capture both global and local spectral information. To enhance the efficiency of computationally expensive sub-band modeling, we introduce a frequency partitioning method inspired by the sensitivity profile of the human peripheral auditory system. Furthermore, we introduce a novel spiking neuron model that can dynamically control the input information integration and forgetting, enhancing the multi-scale temporal processing capability of SNN, which is critical for speech denoising. Experiments conducted on the recent Intel Neuromorphic Deep Noise Suppression (N-DNS) Challenge dataset show that the Spiking-FullSubNet surpasses state-of-the-art methods by large margins in terms of both speech quality and energy efficiency metrics. Notably, our system won the championship of the Intel N-DNS Challenge (Algorithmic Track), opening up a myriad of opportunities for ultra-low-power speech enhancement at the edge. Our source code and model checkpoints are publicly available at https://github.com/haoxiangsnr/spiking-fullsubnet.<|reference_end|>
arxiv
@article{hao2024towards, title={Towards Ultra-Low-Power Neuromorphic Speech Enhancement with Spiking-FullSubNet}, author={Xiang Hao, Chenxiang Ma, Qu Yang, Jibin Wu, Kay Chen Tan}, journal={arXiv preprint arXiv:2410.04785}, year={2024}, archivePrefix={arXiv}, eprint={2410.04785}, primaryClass={eess.AS cs.SD} }
hao2024towards
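The Spiking-FullSubNet record above mentions a spiking neuron that dynamically controls integration and forgetting. As a generic illustration of that idea (not the paper's exact neuron model), the sketch below gates the membrane decay of a leaky integrate-and-fire layer with an input-dependent sigmoid; the gating layer and threshold are assumptions.

```python
import torch
import torch.nn as nn

class GatedLIF(nn.Module):
    """Leaky integrate-and-fire layer whose decay is modulated by the input,
    letting the neuron trade off integration against forgetting per step.
    Generic illustration only, not the paper's neuron model."""
    def __init__(self, dim, threshold=1.0):
        super().__init__()
        self.threshold = threshold
        self.decay_gate = nn.Linear(dim, dim)  # per-channel, input-dependent decay

    def forward(self, x):                      # x: (time, batch, dim)
        mem = torch.zeros_like(x[0])
        spikes = []
        for t in range(x.shape[0]):
            decay = torch.sigmoid(self.decay_gate(x[t]))   # in (0, 1)
            mem = decay * mem + x[t]                       # forget vs. integrate
            # Hard threshold; training would replace this with a surrogate gradient.
            spike = (mem >= self.threshold).float()
            mem = mem - spike * self.threshold             # soft reset
            spikes.append(spike)
        return torch.stack(spikes)

out = GatedLIF(dim=64)(torch.randn(16, 8, 64))  # 16 time steps, batch of 8
```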
arxiv-666398
2410.04787
A Differentially Private Energy Trading Mechanism Approaching Social Optimum
<|reference_start|>A Differentially Private Energy Trading Mechanism Approaching Social Optimum: This paper proposes a differentially private energy trading mechanism for prosumers in peer-to-peer (P2P) markets, offering provable privacy guarantees while approaching the Nash equilibrium with nearly socially optimal efficiency. We first model the P2P energy trading as a (generalized) Nash game and prove the vulnerability of traditional distributed algorithms to privacy attacks through an adversarial inference model. To address this challenge, we develop a privacy-preserving Nash equilibrium seeking algorithm incorporating carefully calibrated Laplacian noise. We prove that the proposed algorithm achieves $\epsilon$-differential privacy while converging in expectation to the Nash equilibrium with a suitable stepsize. Numerical experiments are conducted to evaluate the algorithm's robustness against privacy attacks, convergence behavior, and optimality compared to the non-private solution. Results demonstrate that our mechanism effectively protects prosumers' sensitive information while maintaining near-optimal market outcomes, offering a practical approach for privacy-preserving coordination in P2P markets.<|reference_end|>
arxiv
@article{cao2024a, title={A Differentially Private Energy Trading Mechanism Approaching Social Optimum}, author={Yuji Cao, Yue Chen}, journal={arXiv preprint arXiv:2410.04787}, year={2024}, archivePrefix={arXiv}, eprint={2410.04787}, primaryClass={cs.GT math.OC} }
cao2024a
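The record above perturbs a distributed equilibrium-seeking iteration with calibrated Laplacian noise. As a generic illustration of that calibration step, the sketch below releases a bounded-sensitivity quantity via the standard Laplace mechanism (noise scale = sensitivity/epsilon) inside a projected gradient play; the update rule, toy game, and parameter values are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_private(value, sensitivity, epsilon):
    """Release `value` with Laplace noise calibrated to sensitivity/epsilon,
    the standard mechanism for epsilon-differential privacy."""
    return value + rng.laplace(scale=sensitivity / epsilon, size=np.shape(value))

def noisy_gradient_play(grad_fns, x0, lr=0.05, steps=200,
                        sensitivity=1.0, epsilon=0.5, box=(-5.0, 5.0)):
    """Each player takes a projected gradient step but only ever shares a
    noisy copy of the decision vector, so neighbours observe a differentially
    private signal. A sketch of the idea, not the paper's mechanism."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        shared = laplace_private(x, sensitivity, epsilon)   # what others observe
        grads = np.array([g(shared, i) for i, g in enumerate(grad_fns)])
        x = np.clip(x - lr * grads, *box)                   # projected update
    return x

# Toy two-player game: each player's cost couples its action with the other's.
grad_fns = [lambda s, i: 2.0 * s[i] + 0.5 * s[1 - i] - 1.0 for _ in range(2)]
print(noisy_gradient_play(grad_fns, x0=[0.0, 0.0]))
```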
arxiv-666399
2410.04789
Analysis of Hybrid Compositions in Animation Film with Weakly Supervised Learning
<|reference_start|>Analysis of Hybrid Compositions in Animation Film with Weakly Supervised Learning: We present an approach for the analysis of hybrid visual compositions in animation in the domain of ephemeral film. We combine ideas from semi-supervised and weakly supervised learning to train a model that can segment hybrid compositions without requiring pre-labeled segmentation masks. We evaluate our approach on a set of ephemeral films from 13 film archives. Results demonstrate that the proposed learning strategy yields a performance close to a fully supervised baseline. On a qualitative level the performed analysis provides interesting insights on hybrid compositions in animation film.<|reference_end|>
arxiv
@article{portos2024analysis, title={Analysis of Hybrid Compositions in Animation Film with Weakly Supervised Learning}, author={Mónica Apellaniz Portos, Roberto Labadie-Tamayo, Claudius Stemmler, Erwin Feyersinger, Andreas Babic, Franziska Bruckner, Vrääth Öhner, Matthias Zeppelzauer}, journal={arXiv preprint arXiv:2410.04789}, year={2024}, archivePrefix={arXiv}, eprint={2410.04789}, primaryClass={cs.CV cs.AI} }
portos2024analysis
arxiv-666400
2410.04790
GARLIC: LLM-Guided Dynamic Progress Control with Hierarchical Weighted Graph for Long Document QA
<|reference_start|>GARLIC: LLM-Guided Dynamic Progress Control with Hierarchical Weighted Graph for Long Document QA: In the past, Retrieval-Augmented Generation (RAG) methods split text into chunks to enable language models to handle long documents. Recent tree-based RAG methods are able to retrieve detailed information while preserving global context. However, with the advent of more powerful LLMs, such as Llama 3.1, which offer better comprehension and support for longer inputs, we found that even recent tree-based RAG methods perform worse than directly feeding the entire document into Llama 3.1, although RAG methods still hold an advantage in reducing computational costs. In this paper, we propose a new retrieval method, called LLM-Guided Dynamic Progress Control with Hierarchical Weighted Graph (GARLIC), which outperforms previous state-of-the-art baselines, including Llama 3.1, while retaining the computational efficiency of RAG methods. Our method introduces several improvements: (1) Rather than using a tree structure, we construct a Hierarchical Weighted Directed Acyclic Graph with many-to-many summarization, where the graph edges are derived from attention mechanisms, and each node focuses on a single event or very few events. (2) We introduce a novel retrieval method that leverages the attention weights of LLMs rather than dense embedding similarity. Our method allows for searching the graph along multiple paths and can terminate at any depth. (3) We use the LLM to control the retrieval process, enabling it to dynamically adjust the amount and depth of information retrieved for different queries. Experimental results show that our method outperforms previous state-of-the-art baselines, including Llama 3.1, on two single-document and two multi-document QA datasets, while maintaining similar computational complexity to traditional RAG methods.<|reference_end|>
arxiv
@article{wang2024garlic:, title={GARLIC: LLM-Guided Dynamic Progress Control with Hierarchical Weighted Graph for Long Document QA}, author={Xinyu Wang, Yanzheng Xiang, Lin Gui, Yulan He}, journal={arXiv preprint arXiv:2410.04790}, year={2024}, archivePrefix={arXiv}, eprint={2410.04790}, primaryClass={cs.CL} }
wang2024garlic: