corpus_id: stringlengths 7-12
paper_id: stringlengths 9-16
title: stringlengths 1-261
abstract: stringlengths 70-4.02k
source: stringclasses, 1 value
bibtex: stringlengths 208-20.9k
citation_key: stringlengths 6-100
arxiv-666201
2410.04419
LiteVLoc: Map-Lite Visual Localization for Image Goal Navigation
<|reference_start|>LiteVLoc: Map-Lite Visual Localization for Image Goal Navigation: This paper presents LiteVLoc, a hierarchical visual localization framework that uses a lightweight topo-metric map to represent the environment. The method consists of three sequential modules that estimate camera poses in a coarse-to-fine manner. Unlike mainstream approaches relying on detailed 3D representations, LiteVLoc reduces storage overhead by leveraging learning-based feature matching and geometric solvers for metric pose estimation. A novel dataset for the map-free relocalization task is also introduced. Extensive experiments including localization and navigation in both simulated and real-world scenarios have validated the system's performance and demonstrated its precision and efficiency for large-scale deployment. Code and data will be made publicly available.<|reference_end|>
arxiv
@article{jiao2024litevloc:, title={LiteVLoc: Map-Lite Visual Localization for Image Goal Navigation}, author={Jianhao Jiao, Jinhao He, Changkun Liu, Sebastian Aegidius, Xiangcheng Hu, Tristan Braud, Dimitrios Kanoulas}, journal={arXiv preprint arXiv:2410.04419}, year={2024}, archivePrefix={arXiv}, eprint={2410.04419}, primaryClass={cs.RO cs.CV} }
jiao2024litevloc:
arxiv-666202
2410.04420
Non-deterministic asynchronous automata games and their undecidability
<|reference_start|>Non-deterministic asynchronous automata games and their undecidability: We propose a new model of a distributed game, called an ATS game, which is played on a non-deterministic asynchronous transition system -- a natural distributed finite-state device working on Mazurkiewicz traces. This new partial-information game is played between an environment and a distributed system comprising multiple processes. A distributed strategy uses causal past to make the next move. The key algorithmic question is to solve the game, that is, to decide the existence of a distributed winning strategy. It turns out ATS games are equivalent to asynchronous games, which are known to be undecidable. We prove that ATS games are undecidable in this article.<|reference_end|>
arxiv
@article{adsul2024non-deterministic, title={Non-deterministic asynchronous automata games and their undecidability}, author={Bharat Adsul and Nehul Jain}, journal={arXiv preprint arXiv:2410.04420}, year={2024}, archivePrefix={arXiv}, eprint={2410.04420}, primaryClass={cs.FL} }
adsul2024non-deterministic
arxiv-666203
2410.04421
Disentangling Regional Primitives for Image Generation
<|reference_start|>Disentangling Regional Primitives for Image Generation: This paper presents a method to explain the internal representation structure of a neural network for image generation. Specifically, our method disentangles primitive feature components from the intermediate-layer feature of the neural network, which ensures that each feature component is exclusively used to generate a specific set of image regions. In this way, the generation of the entire image can be considered as the superposition of different pre-encoded primitive regional patterns, each being generated by a feature component. We find that the feature component can be represented as an OR relationship between the demands for generating different image regions, which is encoded by the neural network. Therefore, we extend the Harsanyi interaction to represent such an OR interaction to disentangle the feature component. Experiments show a clear correspondence between each feature component and the generation of specific image regions.<|reference_end|>
arxiv
@article{chen2024disentangling, title={Disentangling Regional Primitives for Image Generation}, author={Zhengting Chen, Lei Cheng, Lianghui Ding, Quanshi Zhang}, journal={arXiv preprint arXiv:2410.04421}, year={2024}, archivePrefix={arXiv}, eprint={2410.04421}, primaryClass={cs.CV cs.AI cs.LG} }
chen2024disentangling
arxiv-666204
2410.04422
Hyper-multi-step: The Truth Behind Difficult Long-context Tasks
<|reference_start|>Hyper-multi-step: The Truth Behind Difficult Long-context Tasks: Long-context language models (LCLM), characterized by their extensive context window, are becoming increasingly popular. Meanwhile, many long-context benchmarks present challenging tasks that even the most advanced LCLMs struggle to complete. However, the underlying sources of various challenging long-context tasks have seldom been studied. To bridge this gap, we conduct experiments showing that their difficulty stems primarily from two basic issues: "multi-matching retrieval," which requires the simultaneous retrieval of multiple items, and "logic-based retrieval," which necessitates logical judgment within retrieval criteria. These two problems, while seemingly straightforward, actually exceed the capabilities of LCLMs because they are proven to be hyper-multi-step (demanding numerous steps to solve) in nature. This finding could explain why LLMs struggle with more advanced long-context tasks, providing a more accurate perspective for rethinking solutions for them.<|reference_end|>
arxiv
@article{yu2024hyper-multi-step:, title={Hyper-multi-step: The Truth Behind Difficult Long-context Tasks}, author={Yijiong Yu, Ma Xiufa, Fang Jianwei, Zhi Xu, Su Guangyao, Wang Jiancheng, Yongfeng Huang, Zhixiao Qi, Wei Wang, Weifeng Liu, Ran Chen, Ji Pei}, journal={arXiv preprint arXiv:2410.04422}, year={2024}, archivePrefix={arXiv}, eprint={2410.04422}, primaryClass={cs.CL} }
yu2024hyper-multi-step:
arxiv-666205
2410.04424
DAdEE: Unsupervised Domain Adaptation in Early Exit PLMs
<|reference_start|>DAdEE: Unsupervised Domain Adaptation in Early Exit PLMs: Pre-trained Language Models (PLMs) exhibit good accuracy and generalization ability across various tasks using self-supervision, but their large size results in high inference latency. Early Exit (EE) strategies handle the issue by allowing the samples to exit from classifiers attached to the intermediary layers, but they do not generalize well, as exit classifiers can be sensitive to domain changes. To address this, we propose Unsupervised Domain Adaptation in EE framework (DADEE) that employs multi-level adaptation using knowledge distillation. DADEE utilizes GAN-based adversarial adaptation at each layer to achieve domain-invariant representations, reducing the domain gap between the source and target domain across all layers. The attached exits not only speed up inference but also enhance domain adaptation by reducing catastrophic forgetting and mode collapse, making it more suitable for real-world scenarios. Experiments on tasks such as sentiment analysis, entailment classification, and natural language inference demonstrate that DADEE consistently outperforms not only early exit methods but also various domain adaptation methods under domain shift scenarios. The anonymized source code is available at https://github.com/Div290/DAdEE.<|reference_end|>
arxiv
@article{bajpai2024dadee:, title={DAdEE: Unsupervised Domain Adaptation in Early Exit PLMs}, author={Divya Jyoti Bajpai and Manjesh Kumar Hanawal}, journal={arXiv preprint arXiv:2410.04424}, year={2024}, archivePrefix={arXiv}, eprint={2410.04424}, primaryClass={cs.CL cs.AI} }
bajpai2024dadee:
arxiv-666206
2410.04426
CoVLM: Leveraging Consensus from Vision-Language Models for Semi-supervised Multi-modal Fake News Detection
<|reference_start|>CoVLM: Leveraging Consensus from Vision-Language Models for Semi-supervised Multi-modal Fake News Detection: In this work, we address the real-world, challenging task of out-of-context misinformation detection, where a real image is paired with an incorrect caption for creating fake news. Existing approaches for this task assume the availability of large amounts of labeled data, which is often impractical in the real world, since it requires extensive manual intervention and domain expertise. In contrast, since obtaining a large corpus of unlabeled image-text pairs is much easier, here, we propose a semi-supervised protocol, where the model has access to a limited number of labeled image-text pairs and a large corpus of unlabeled pairs. Additionally, since fake news occurs far less often than real news, the datasets tend to be highly imbalanced, making the task even more challenging. Towards this goal, we propose a novel framework, Consensus from Vision-Language Models (CoVLM), which generates robust pseudo-labels for unlabeled pairs using thresholds derived from the labeled data. This approach can automatically determine the right threshold parameters of the model for selecting the confident pseudo-labels. Experimental results on benchmark datasets across challenging conditions and comparisons with state-of-the-art approaches demonstrate the effectiveness of our framework.<|reference_end|>
arxiv
@article{devank2024covlm:, title={CoVLM: Leveraging Consensus from Vision-Language Models for Semi-supervised Multi-modal Fake News Detection}, author={Devank, Jayateja Kalla, Soma Biswas}, journal={arXiv preprint arXiv:2410.04426}, year={2024}, archivePrefix={arXiv}, eprint={2410.04426}, primaryClass={cs.CV} }
devank2024covlm:
arxiv-666207
2410.04427
Consistent and Repeatable Testing of mMIMO O-RU across labs: A Japan-Singapore Experience
<|reference_start|>Consistent and Repeatable Testing of mMIMO O-RU across labs: A Japan-Singapore Experience: Open Radio Access Networks (RAN) aim to bring a paradigm shift to the telecommunications industry, by enabling an open, intelligent, virtualized, and multi-vendor interoperable RAN ecosystem. At the center of this movement, O-RAN ALLIANCE defines the O-RAN architecture and standards, so that companies around the globe can use these specifications to create innovative and interoperable solutions. To accelerate the adoption of O-RAN products, rigorous testing of O-RAN Radio Unit (O-RU) and other O-RAN products plays a key role. O-RAN ALLIANCE has approved around 20 Open Testing and Integration Centres (OTICs) globally. OTICs serve as vendor-neutral platforms for providing the testing and integration services, with the vision that an O-RAN product certified in any OTIC is accepted in other parts of the world. To demonstrate the viability of such a certified-once-and-use-everywhere approach, one theme in the O-RAN Global PlugFest Spring 2024 is to demonstrate consistent and repeatable testing for the open fronthaul interface across multiple labs. Towards this, Japan OTIC and Asia Pacific OTIC in Singapore have teamed up with an O-RU vendor and Keysight Technology. Our international team successfully completed all test cases defined by O-RAN ALLIANCE for O-RU conformance testing. In this paper, we share our journey in achieving this outcome, focusing on the challenges we have overcome and the lessons we have learned through this process.<|reference_end|>
arxiv
@article{nguyen2024consistent, title={Consistent and Repeatable Testing of mMIMO O-RU across labs: A Japan-Singapore Experience}, author={Thanh-Tam Nguyen, Mao V. Ngo, Binbin Chen, Mitsuhiro Kuchitsu, Serena Wai, Seitaro Kawai, Kenya Suzuki, Eng Wei Koo, and Tony Quek}, journal={arXiv preprint arXiv:2410.04427}, year={2024}, archivePrefix={arXiv}, eprint={2410.04427}, primaryClass={cs.NI} }
nguyen2024consistent
arxiv-666208
2410.04432
Total positivity and accurate computations related to $q$-Abel polynomials
<|reference_start|>Total positivity and accurate computations related to $q$-Abel polynomials: The attainment of accurate numerical solutions of ill-conditioned linear algebraic problems involving totally positive matrices has been gathering considerable attention among researchers over recent years. In parallel, interest in $q$-calculus has been steadily growing in the literature. In this work the $q$-analogue of the Abel polynomial basis is studied. The total positivity of the matrix of change of basis between monomial and $q$-Abel bases is characterized, providing its bidiagonal factorization. Moreover, well-known high relative accuracy results of Vandermonde matrices corresponding to increasing positive nodes are extended to the decreasing negative case. This further allows solving with high relative accuracy several algebraic problems concerning collocation, Wronskian and Gramian matrices of $q$-Abel polynomials. Finally, a series of numerical tests support the presented theoretical results and illustrate the effectiveness of the method where standard approaches fail to deliver accurate solutions.<|reference_end|>
arxiv
@article{khiar2024total, title={Total positivity and accurate computations related to $q$-Abel polynomials}, author={Y. Khiar, E. Mainar, E. Royo-Amondarain and B. Rubio}, journal={J Sci Comput 101, 56 (2024)}, year={2024}, doi={10.1007/s10915-024-02699-8}, archivePrefix={arXiv}, eprint={2410.04432}, primaryClass={math.NA cs.NA} }
khiar2024total
arxiv-666209
2410.04433
CAPEEN: Image Captioning with Early Exits and Knowledge Distillation
<|reference_start|>CAPEEN: Image Captioning with Early Exits and Knowledge Distillation: Deep neural networks (DNNs) have made significant progress in recognizing visual elements and generating descriptive text in image-captioning tasks. However, their improved performance comes at the cost of increased computational burden and inference latency. Early Exit (EE) strategies can be used to enhance their efficiency, but their adaptation presents challenges in image captioning as it requires varying levels of semantic information for accurate predictions. To overcome this, we introduce CAPEEN to improve the performance of EE strategies using knowledge distillation. Inference in CAPEEN is completed at intermediary layers if prediction confidence exceeds a predefined value learned from the training data. To account for real-world deployments, where target distributions could drift from that of training samples, we introduce a variant A-CAPEEN to adapt the thresholds on the fly using a multi-armed bandit framework. Experiments on the MS COCO and Flickr30k datasets show that CAPEEN gains a speedup of 1.77x while maintaining competitive performance compared to the final layer, and A-CAPEEN additionally offers robustness against distortions. The source code is available at https://github.com/Div290/CapEEN<|reference_end|>
arxiv
@article{bajpai2024capeen:, title={CAPEEN: Image Captioning with Early Exits and Knowledge Distillation}, author={Divya Jyoti Bajpai and Manjesh Kumar Hanawal}, journal={arXiv preprint arXiv:2410.04433}, year={2024}, archivePrefix={arXiv}, eprint={2410.04433}, primaryClass={cs.CV cs.AI cs.CL} }
bajpai2024capeen:
arxiv-666210
2410.04434
A Mathematical Explanation of UNet
<|reference_start|>A Mathematical Explanation of UNet: The UNet architecture has transformed image segmentation. UNet's versatility and accuracy have driven its widespread adoption, significantly advancing fields reliant on machine learning problems with images. In this work, we give a clear and concise mathematical explanation of UNet. We explain the meaning and function of each component of UNet. We show that UNet is solving a control problem. We decompose the control variables using multigrid methods. Then, operator-splitting techniques are used to solve the problem, and the resulting algorithm exactly recovers the UNet architecture. Our result shows that UNet is a one-step operator-splitting algorithm for the control problem.<|reference_end|>
arxiv
@article{tai2024a, title={A Mathematical Explanation of UNet}, author={Xue-Cheng Tai, Hao Liu, Raymond H. Chan, Lingfeng Li}, journal={arXiv preprint arXiv:2410.04434}, year={2024}, archivePrefix={arXiv}, eprint={2410.04434}, primaryClass={cs.CV} }
tai2024a
arxiv-666211
2410.04439
Empowering Backbone Models for Visual Text Generation with Input Granularity Control and Glyph-Aware Training
<|reference_start|>Empowering Backbone Models for Visual Text Generation with Input Granularity Control and Glyph-Aware Training: Diffusion-based text-to-image models have demonstrated impressive achievements in diversity and aesthetics but struggle to generate images with legible visual texts. Existing backbone models have limitations such as misspelling, failing to generate texts, and lack of support for Chinese text, but their development shows promising potential. In this paper, we propose a series of methods, aiming to empower backbone models to generate visual texts in English and Chinese. We first conduct a preliminary study revealing that Byte Pair Encoding (BPE) tokenization and the insufficient learning of cross-attention modules restrict the performance of the backbone models. Based on these observations, we make the following improvements: (1) We design a mixed granularity input strategy to provide more suitable text representations; (2) We propose to augment the conventional training objective with three glyph-aware training losses, which enhance the learning of cross-attention modules and encourage the model to focus on visual texts. Through experiments, we demonstrate that our methods can effectively empower backbone models to generate semantically relevant, aesthetically appealing, and accurate visual text images, while maintaining their fundamental image generation quality.<|reference_end|>
arxiv
@article{li2024empowering, title={Empowering Backbone Models for Visual Text Generation with Input Granularity Control and Glyph-Aware Training}, author={Wenbo Li, Guohao Li, Zhibin Lan, Xue Xu, Wanru Zhuang, Jiachen Liu, Xinyan Xiao, Jinsong Su}, journal={arXiv preprint arXiv:2410.04439}, year={2024}, archivePrefix={arXiv}, eprint={2410.04439}, primaryClass={cs.CV cs.AI} }
li2024empowering
arxiv-666212
2410.04440
Automated Detection of Defects on Metal Surfaces using Vision Transformers
<|reference_start|>Automated Detection of Defects on Metal Surfaces using Vision Transformers: Metal manufacturing often results in the production of defective products, leading to operational challenges. Since traditional manual inspection is time-consuming and resource-intensive, automatic solutions are needed. The study utilizes deep learning techniques to develop a model for detecting metal surface defects using Vision Transformers (ViTs). The proposed model focuses on the classification and localization of defects using a ViT for feature extraction. The architecture branches into two paths: classification and localization. The model must achieve high classification accuracy while keeping the Mean Square Error (MSE) and Mean Absolute Error (MAE) as low as possible in the localization process. Experimental results show that it can be utilized to automate defect detection, improve operational efficiency, and reduce errors in metal manufacturing.<|reference_end|>
arxiv
@article{alaa2024automated, title={Automated Detection of Defects on Metal Surfaces using Vision Transformers}, author={Toqa Alaa, Mostafa Kotb, Arwa Zakaria, Mariam Diab, and Walid Gomaa}, journal={arXiv preprint arXiv:2410.04440}, year={2024}, archivePrefix={arXiv}, eprint={2410.04440}, primaryClass={cs.CV} }
alaa2024automated
arxiv-666213
2410.04442
TimeBridge: Non-Stationarity Matters for Long-term Time Series Forecasting
<|reference_start|>TimeBridge: Non-Stationarity Matters for Long-term Time Series Forecasting: Non-stationarity poses significant challenges for multivariate time series forecasting due to the inherent short-term fluctuations and long-term trends that can lead to spurious regressions or obscure essential long-term relationships. Most existing methods either eliminate or retain non-stationarity without adequately addressing its distinct impacts on short-term and long-term modeling. Eliminating non-stationarity is essential for avoiding spurious regressions and capturing local dependencies in short-term modeling, while preserving it is crucial for revealing long-term cointegration across variates. In this paper, we propose TimeBridge, a novel framework designed to bridge the gap between non-stationarity and dependency modeling in long-term time series forecasting. By segmenting input series into smaller patches, TimeBridge applies Integrated Attention to mitigate short-term non-stationarity and capture stable dependencies within each variate, while Cointegrated Attention preserves non-stationarity to model long-term cointegration across variates. Extensive experiments show that TimeBridge consistently achieves state-of-the-art performance in both short-term and long-term forecasting. Additionally, TimeBridge demonstrates exceptional performance in financial forecasting on the CSI 500 and S&P 500 indices, further validating its robustness and effectiveness. Code is available at \url{https://github.com/Hank0626/TimeBridge}.<|reference_end|>
arxiv
@article{liu2024timebridge:, title={TimeBridge: Non-Stationarity Matters for Long-term Time Series Forecasting}, author={Peiyuan Liu, Beiliang Wu, Yifan Hu, Naiqi Li, Tao Dai, Jigang Bao, Shu-tao Xia}, journal={arXiv preprint arXiv:2410.04442}, year={2024}, archivePrefix={arXiv}, eprint={2410.04442}, primaryClass={cs.LG stat.ML} }
liu2024timebridge:
arxiv-666214
2410.04444
G\"odel Agent: A Self-Referential Agent Framework for Recursive Self-Improvement
<|reference_start|>G\"odel Agent: A Self-Referential Agent Framework for Recursive Self-Improvement: The rapid advancement of large language models (LLMs) has significantly enhanced the capabilities of AI-driven agents across various tasks. However, existing agentic systems, whether based on fixed pipeline algorithms or pre-defined meta-learning frameworks, cannot search the whole agent design space due to the restriction of human-designed components, and thus might miss the globally optimal agent design. In this paper, we introduce G\"odel Agent, a self-evolving framework inspired by the G\"odel machine, enabling agents to recursively improve themselves without relying on predefined routines or fixed optimization algorithms. G\"odel Agent leverages LLMs to dynamically modify its own logic and behavior, guided solely by high-level objectives through prompting. Experimental results on mathematical reasoning and complex agent tasks demonstrate that implementation of G\"odel Agent can achieve continuous self-improvement, surpassing manually crafted agents in performance, efficiency, and generalizability.<|reference_end|>
arxiv
@article{yin2024g\"odel, title={G\"odel Agent: A Self-Referential Agent Framework for Recursive Self-Improvement}, author={Xunjian Yin and Xinyi Wang and Liangming Pan and Xiaojun Wan and William Yang Wang}, journal={arXiv preprint arXiv:2410.04444}, year={2024}, archivePrefix={arXiv}, eprint={2410.04444}, primaryClass={cs.AI} }
yin2024g\"odel
arxiv-666215
2410.04445
Optimising for the Unknown: Domain Alignment for Cephalometric Landmark Detection
<|reference_start|>Optimising for the Unknown: Domain Alignment for Cephalometric Landmark Detection: Cephalometric Landmark Detection is the process of identifying key areas for cephalometry. Each landmark is a single ground-truth (GT) point labelled by a clinician. A machine learning model predicts the probability locus of a landmark represented by a heatmap. This work, for the 2024 CL-Detection MICCAI Challenge, proposes a domain alignment strategy with a regional facial extraction module and an X-ray artefact augmentation procedure. On the online validation leaderboard, the challenge ranks our method's results best in MRE, at 1.186 mm, and third in the 2 mm SDR, at 82.04%. The code is available at https://github.com/Julian-Wyatt/OptimisingfortheUnknown.<|reference_end|>
arxiv
@article{wyatt2024optimising, title={Optimising for the Unknown: Domain Alignment for Cephalometric Landmark Detection}, author={Julian Wyatt and Irina Voiculescu}, journal={arXiv preprint arXiv:2410.04445}, year={2024}, archivePrefix={arXiv}, eprint={2410.04445}, primaryClass={cs.CV} }
wyatt2024optimising
arxiv-666216
2410.04447
Attention Shift: Steering AI Away from Unsafe Content
<|reference_start|>Attention Shift: Steering AI Away from Unsafe Content: This study investigates the generation of unsafe or harmful content in state-of-the-art generative models, focusing on methods for restricting such generations. We introduce a novel training-free approach using attention reweighing to remove unsafe concepts without additional training during inference. We compare our method against existing ablation methods, evaluating the performance on both direct and adversarial jailbreak prompts, using qualitative and quantitative metrics. We hypothesize potential reasons for the observed results and discuss the limitations and broader implications of content restriction.<|reference_end|>
arxiv
@article{garg2024attention, title={Attention Shift: Steering AI Away from Unsafe Content}, author={Shivank Garg and Manyana Tiwari}, journal={arXiv preprint arXiv:2410.04447}, year={2024}, archivePrefix={arXiv}, eprint={2410.04447}, primaryClass={cs.CV cs.CR cs.LG} }
garg2024attention
arxiv-666217
2410.04449
Video Summarization Techniques: A Comprehensive Review
<|reference_start|>Video Summarization Techniques: A Comprehensive Review: The rapid expansion of video content across a variety of industries, including social media, education, entertainment, and surveillance, has made video summarization an essential field of study. The current work is a survey that explores the various approaches and methods created for video summarization, emphasizing both abstractive and extractive strategies. The process of extractive summarization involves the identification of key frames or segments from the source video, utilizing methods such as shot boundary recognition and clustering. On the other hand, abstractive summarization creates new content by distilling the essential content from the video, using machine learning models like deep neural networks and natural language processing, reinforcement learning, attention mechanisms, generative adversarial networks, and multi-modal learning. We also include approaches that incorporate the two methodologies, along with discussing the uses and difficulties encountered in real-world implementations. The paper also covers the datasets used to benchmark these techniques. This review aims to provide a thorough understanding of the current state and future directions of video summarization research.<|reference_end|>
arxiv
@article{alaa2024video, title={Video Summarization Techniques: A Comprehensive Review}, author={Toqa Alaa, Ahmad Mongy, Assem Bakr, Mariam Diab, and Walid Gomaa}, journal={arXiv preprint arXiv:2410.04449}, year={2024}, archivePrefix={arXiv}, eprint={2410.04449}, primaryClass={cs.CV} }
alaa2024video
arxiv-666218
2410.04450
Spanning disks in triangulations of surfaces
<|reference_start|>Spanning disks in triangulations of surfaces: Given a triangulation $G$ of a surface $\mathbb{S}$, a spanning disk is a disk $\mathbb{D} \subseteq \mathbb{S}$ containing all the vertices of $G$ such that the boundary of $\mathbb{D}$ is a cycle of $G$. In this paper, we consider the question of when a triangulation of a surface contains a spanning disk. We give a very short proof that every triangulation of the torus contains a spanning disk, which strengthens a theorem of Nevo and Tarabykin. For arbitrary surfaces, we prove that triangulations with sufficiently high facewidth always contain spanning disks. Finally, we exhibit triangulations which do not have spanning disks. This shows that a minimum facewidth condition is necessary. Our results are motivated by and have applications for rigidity questions in the plane.<|reference_end|>
arxiv
@article{clinch2024spanning, title={Spanning disks in triangulations of surfaces}, author={Katie Clinch and Sean Dewar and Niloufar Fuladi and Maximilian Gorsky and Tony Huynh and Eleftherios Kastis and Anthony Nixon and Brigitte Servatius}, journal={arXiv preprint arXiv:2410.04450}, year={2024}, archivePrefix={arXiv}, eprint={2410.04450}, primaryClass={math.CO cs.DM} }
clinch2024spanning
arxiv-666219
2410.04452
MindScope: Exploring cognitive biases in large language models through Multi-Agent Systems
<|reference_start|>MindScope: Exploring cognitive biases in large language models through Multi-Agent Systems: Detecting cognitive biases in large language models (LLMs) is a fascinating task that aims to probe the existing cognitive biases within these models. Current methods for detecting cognitive biases in language models generally suffer from incomplete detection capabilities and a restricted range of detectable bias types. To address this issue, we introduced the 'MindScope' dataset, which distinctively integrates static and dynamic elements. The static component comprises 5,170 open-ended questions spanning 72 cognitive bias categories. The dynamic component leverages a rule-based, multi-agent communication framework to facilitate the generation of multi-round dialogues. This framework is flexible and readily adaptable for various psychological experiments involving LLMs. In addition, we introduce a multi-agent detection method applicable to a wide range of detection tasks, which integrates Retrieval-Augmented Generation (RAG), competitive debate, and a reinforcement learning-based decision module. Demonstrating substantial effectiveness, this method has been shown to improve detection accuracy by as much as 35.10% compared to GPT-4. Codes and appendix are available at https://github.com/2279072142/MindScope.<|reference_end|>
arxiv
@article{xie2024mindscope:, title={MindScope: Exploring cognitive biases in large language models through Multi-Agent Systems}, author={Zhentao Xie, Jiabao Zhao, Yilei Wang, Jinxin Shi, Yanhong Bai, Xingjiao Wu and Liang He}, journal={arXiv preprint arXiv:2410.04452}, year={2024}, archivePrefix={arXiv}, eprint={2410.04452}, primaryClass={cs.CL cs.AI} }
xie2024mindscope:
arxiv-666220
2410.04453
CONFINE: Preserving Data Secrecy in Decentralized Process Mining
<|reference_start|>CONFINE: Preserving Data Secrecy in Decentralized Process Mining: In the contemporary business landscape, collaboration across multiple organizations offers a multitude of opportunities, including reduced operational costs, enhanced performance, and accelerated technological advancement. The application of process mining techniques in an inter-organizational setting, exploiting the recorded process event data, enables the coordination of joint effort and allows for a deeper understanding of the business. Nevertheless, considerable concerns pertaining to data confidentiality emerge, as organizations frequently demonstrate a reluctance to expose sensitive data demanded for process mining, due to concerns related to privacy and security risks. The presence of conflicting interests among the parties involved can impede the practice of open data sharing. To address these challenges, we propose our approach and toolset named CONFINE, which we developed with the intent of enabling process mining on process event data from multiple providers while preserving the confidentiality and integrity of the original records. To ensure that the presented interaction protocol steps are secure and that the processed information is hidden from both involved and external actors, our approach is based on a decentralized architecture and consists of trusted applications running in Trusted Execution Environments (TEE). In this demo paper, we provide an overview of the core components and functionalities as well as the specific details of its application.<|reference_end|>
arxiv
@article{goretti2024confine:, title={CONFINE: Preserving Data Secrecy in Decentralized Process Mining}, author={Valerio Goretti, Davide Basile, Luca Barbaro and Claudio Di Ciccio}, journal={arXiv preprint arXiv:2410.04453}, year={2024}, archivePrefix={arXiv}, eprint={2410.04453}, primaryClass={cs.DC} }
goretti2024confine:
arxiv-666221
2410.04454
CopyLens: Dynamically Flagging Copyrighted Sub-Dataset Contributions to LLM Outputs
<|reference_start|>CopyLens: Dynamically Flagging Copyrighted Sub-Dataset Contributions to LLM Outputs: Large Language Models (LLMs) have become pervasive due to their knowledge absorption and text-generation capabilities. Concurrently, the copyright issue for pretraining datasets has been a pressing concern, particularly when generation includes specific styles. Previous methods either focus on defending against identical copyrighted outputs or pursue interpretability at the level of individual tokens, incurring heavy computational burdens. However, a gap remains between them: direct assessments of how dataset contributions impact LLM outputs are missing. Once the model providers ensure copyright protection for data holders, a more mature LLM community can be established. To address these limitations, we introduce CopyLens, a new framework to analyze how copyrighted datasets may influence LLM responses. Specifically, a two-stage approach is employed: First, based on the uniqueness of pretraining data in the embedding space, token representations are initially fused for potential copyrighted texts, followed by a lightweight LSTM-based network to analyze dataset contributions. With such a prior, a contrastive-learning-based non-copyright OOD detector is designed. Our framework can dynamically face different situations and bridge the gap between current copyright detection methods. Experiments show that CopyLens improves efficiency and accuracy by 15.2% over our proposed baseline, 58.7% over prompt engineering methods, and 0.21 AUC over OOD detection baselines.<|reference_end|>
arxiv
@article{ma2024copylens:, title={CopyLens: Dynamically Flagging Copyrighted Sub-Dataset Contributions to LLM Outputs}, author={Qichao Ma, Rui-Jie Zhu, Peiye Liu, Renye Yan, Fahong Zhang, Ling Liang, Meng Li, Zhaofei Yu, Zongwei Wang, Yimao Cai, Tiejun Huang}, journal={arXiv preprint arXiv:2410.04454}, year={2024}, archivePrefix={arXiv}, eprint={2410.04454}, primaryClass={cs.CL} }
ma2024copylens:
arxiv-666222
2410.04456
SWEb: A Large Web Dataset for the Scandinavian Languages
<|reference_start|>SWEb: A Large Web Dataset for the Scandinavian Languages: This paper presents the hitherto largest pretraining dataset for the Scandinavian languages: the Scandinavian WEb (SWEb), comprising over one trillion tokens. The paper details the collection and processing pipeline, and introduces a novel model-based text extractor that significantly reduces complexity in comparison with rule-based approaches. We also introduce a new cloze-style benchmark for evaluating language models in Swedish, and use this test to compare models trained on the SWEb data to models trained on FineWeb, with competitive results. All data, models and code are shared openly.<|reference_end|>
arxiv
@article{norlund2024sweb:, title={SWEb: A Large Web Dataset for the Scandinavian Languages}, author={Tobias Norlund, Tim Isbister, Amaru Cuba Gyllensten, Paul Dos Santos, Danila Petrelli, Ariel Ekgren, Magnus Sahlgren}, journal={arXiv preprint arXiv:2410.04456}, year={2024}, archivePrefix={arXiv}, eprint={2410.04456}, primaryClass={cs.CL} }
norlund2024sweb:
arxiv-666223
2410.04457
An Attention-Based Algorithm for Gravity Adaptation Zone Calibration
<|reference_start|>An Attention-Based Algorithm for Gravity Adaptation Zone Calibration: Accurate calibration of gravity adaptation zones is of great significance in fields such as underwater navigation, geophysical exploration, and marine engineering. With the increasing application of gravity field data in these areas, traditional calibration methods based on single features are becoming inadequate for capturing the complex characteristics of gravity fields and addressing the intricate interrelationships among multidimensional data. This paper proposes an attention-enhanced algorithm for gravity adaptation zone calibration. By introducing an attention mechanism, the algorithm adaptively fuses multidimensional gravity field features and dynamically assigns feature weights, effectively solving the problems of multicollinearity and redundancy inherent in traditional feature selection methods, significantly improving calibration accuracy and robustness. In addition, a large-scale gravity field dataset with over 10,000 sampling points was constructed, and Kriging interpolation was used to enhance the spatial resolution of the data, providing a reliable data foundation for model training and evaluation. We conducted both qualitative and quantitative experiments on several classical machine learning models (such as SVM, GBDT, and RF), and the results demonstrate that the proposed algorithm significantly improves performance across these models, outperforming other traditional feature selection methods. The method proposed in this paper provides a new solution for gravity adaptation zone calibration, showing strong generalization ability and potential for application in complex environments. The code is available at https://github.com/hulnifox/RF-ATTN.<|reference_end|>
arxiv
@article{yu2024an, title={An Attention-Based Algorithm for Gravity Adaptation Zone Calibration}, author={Chen Yu}, journal={arXiv preprint arXiv:2410.04457}, year={2024}, archivePrefix={arXiv}, eprint={2410.04457}, primaryClass={cs.LG cs.AI physics.geo-ph} }
yu2024an
arxiv-666224
2410.04458
A Comprehensive Framework for Analyzing the Convergence of Adam: Bridging the Gap with SGD
<|reference_start|>A Comprehensive Framework for Analyzing the Convergence of Adam: Bridging the Gap with SGD: Adaptive Moment Estimation (Adam) is a cornerstone optimization algorithm in deep learning, widely recognized for its flexibility with adaptive learning rates and efficiency in handling large-scale data. However, despite its practical success, the theoretical understanding of Adam's convergence has been constrained by stringent assumptions, such as almost surely bounded stochastic gradients or uniformly bounded gradients, which are more restrictive than those typically required for analyzing stochastic gradient descent (SGD). In this paper, we introduce a novel and comprehensive framework for analyzing the convergence properties of Adam. This framework offers a versatile approach to establishing Adam's convergence. Specifically, we prove that Adam achieves asymptotic (last iterate sense) convergence in both the almost sure sense and the \(L_1\) sense under the relaxed assumptions typically used for SGD, namely \(L\)-smoothness and the ABC inequality. Meanwhile, under the same assumptions, we show that Adam attains non-asymptotic sample complexity bounds similar to those of SGD.<|reference_end|>
arxiv
@article{jin2024a, title={A Comprehensive Framework for Analyzing the Convergence of Adam: Bridging the Gap with SGD}, author={Ruinan Jin, Xiao Li, Yaoliang Yu, Baoxiang Wang}, journal={arXiv preprint arXiv:2410.04458}, year={2024}, archivePrefix={arXiv}, eprint={2410.04458}, primaryClass={cs.LG math.OC} }
jin2024a
arxiv-666225
2410.04460
U-net based prediction of cerebrospinal fluid distribution and ventricular reflux grading
<|reference_start|>U-net based prediction of cerebrospinal fluid distribution and ventricular reflux grading: Previous work shows evidence that cerebrospinal fluid (CSF) plays a crucial role in brain waste clearance processes, and that altered flow patterns are associated with various diseases of the central nervous system. In this study, we investigate the potential of deep learning to predict the distribution in the human brain of a gadolinium-based CSF contrast agent (tracer) administered intrathecally. For this, T1-weighted magnetic resonance imaging (MRI) scans taken at multiple time points before and after intrathecal injection were utilized. We propose a U-net-based supervised learning model to predict pixel-wise signal increases at their peak after 24 hours. Its performance is evaluated based on different tracer distribution stages provided during training, including predictions from baseline scans taken before injection. Our findings indicate that using imaging data from just the first two hours post-injection for training yields tracer flow predictions comparable to those trained with additional later-stage scans. The model was further validated by comparing ventricular reflux gradings provided by neuroradiologists, and inter-rater grading among medical experts and the model showed excellent agreement. Our results demonstrate the potential of deep learning-based methods for CSF flow prediction, suggesting that fewer MRI scans could be sufficient for clinical analysis, which might significantly improve clinical efficiency, patient well-being, and lower healthcare costs.<|reference_end|>
arxiv
@article{rieff2024u-net, title={U-net based prediction of cerebrospinal fluid distribution and ventricular reflux grading}, author={Melanie Rieff, Fabian Holzberger, Oksana Lapina, Geir Ringstad, Lars Magnus Valnes, Bogna Warsza, Kent-Andre Mardal, Per Kristian Eide, Barbara Wohlmuth}, journal={arXiv preprint arXiv:2410.04460}, year={2024}, archivePrefix={arXiv}, eprint={2410.04460}, primaryClass={eess.IV cs.CV cs.LG} }
rieff2024u-net
arxiv-666226
2410.04461
Improved Off-policy Reinforcement Learning in Biological Sequence Design
<|reference_start|>Improved Off-policy Reinforcement Learning in Biological Sequence Design: Designing biological sequences with desired properties is a significant challenge due to the combinatorially vast search space and the high cost of evaluating each candidate sequence. To address these challenges, reinforcement learning (RL) methods, such as GFlowNets, utilize proxy models for rapid reward evaluation and annotated data for policy training. Although these approaches have shown promise in generating diverse and novel sequences, the limited training data relative to the vast search space often leads to misspecification of the proxy for out-of-distribution inputs. We introduce $\delta$-Conservative Search, a novel off-policy search method for training GFlowNets designed to improve robustness against proxy misspecification. The key idea is to incorporate conservativeness, controlled by parameter $\delta$, to constrain the search to reliable regions. Specifically, we inject noise into high-score offline sequences by randomly masking tokens with a Bernoulli distribution of parameter $\delta$ and then denoise masked tokens using the GFlowNet policy. Additionally, $\delta$ is adaptively adjusted based on the uncertainty of the proxy model for each data point. This enables the reflection of proxy uncertainty to determine the level of conservativeness. Experimental results demonstrate that our method consistently outperforms existing machine learning methods in discovering high-score sequences across diverse tasks, including DNA, RNA, protein, and peptide design, especially in large-scale scenarios.<|reference_end|>
arxiv
@article{kim2024improved, title={Improved Off-policy Reinforcement Learning in Biological Sequence Design}, author={Hyeonah Kim, Minsu Kim, Taeyoung Yun, Sanghyeok Choi, Emmanuel Bengio, Alex Hernández-García, Jinkyoo Park}, journal={arXiv preprint arXiv:2410.04461}, year={2024}, archivePrefix={arXiv}, eprint={2410.04461}, primaryClass={cs.LG q-bio.BM} }
kim2024improved
arxiv-666227
2410.04462
Tensor-Train Point Cloud Compression and Efficient Approximate Nearest-Neighbor Search
<|reference_start|>Tensor-Train Point Cloud Compression and Efficient Approximate Nearest-Neighbor Search: Nearest-neighbor search in large vector databases is crucial for various machine learning applications. This paper introduces a novel method using tensor-train (TT) low-rank tensor decomposition to efficiently represent point clouds and enable fast approximate nearest-neighbor searches. We propose a probabilistic interpretation and utilize density estimation losses like Sliced Wasserstein to train TT decompositions, resulting in robust point cloud compression. We reveal an inherent hierarchical structure within TT point clouds, facilitating efficient approximate nearest-neighbor searches. In our paper, we provide detailed insights into the methodology and conduct comprehensive comparisons with existing methods. We demonstrate its effectiveness in various scenarios, including out-of-distribution (OOD) detection problems and approximate nearest-neighbor (ANN) search tasks.<|reference_end|>
arxiv
@article{novikov2024tensor-train, title={Tensor-Train Point Cloud Compression and Efficient Approximate Nearest-Neighbor Search}, author={Georgii Novikov, Alexander Gneushev, Alexey Kadeishvili, Ivan Oseledets}, journal={arXiv preprint arXiv:2410.04462}, year={2024}, archivePrefix={arXiv}, eprint={2410.04462}, primaryClass={cs.CV cs.LG} }
novikov2024tensor-train
arxiv-666228
2410.04463
Wrong-of-Thought: An Integrated Reasoning Framework with Multi-Perspective Verification and Wrong Information
<|reference_start|>Wrong-of-Thought: An Integrated Reasoning Framework with Multi-Perspective Verification and Wrong Information: Chain-of-Thought (CoT) has become a vital technique for enhancing the performance of Large Language Models (LLMs), attracting increasing attention from researchers. One stream of approaches focuses on the iterative enhancement of LLMs by continuously verifying and refining their reasoning outputs for desired quality. Despite its impressive results, this paradigm faces two critical issues: (1) Simple verification methods: The current paradigm relies solely on a single verification method. (2) Wrong Information Ignorance: Traditional paradigms directly ignore wrong information during reasoning and refine the logic paths from scratch each time. To address these challenges, we propose Wrong-of-Thought (WoT), which includes two core modules: (1) Multi-Perspective Verification: A multi-perspective verification method for accurately refining the reasoning process and result, and (2) Wrong Information Utilization: Utilizing wrong information to alert LLMs and reduce the probability of LLMs making the same mistakes. Experiments on 8 popular datasets and 5 LLMs demonstrate that WoT surpasses all previous baselines. In addition, WoT exhibits powerful capabilities in difficult computation tasks.<|reference_end|>
arxiv
@article{zhang2024wrong-of-thought:, title={Wrong-of-Thought: An Integrated Reasoning Framework with Multi-Perspective Verification and Wrong Information}, author={Yongheng Zhang, Qiguang Chen, Jingxuan Zhou, Peng Wang, Jiasheng Si, Jin Wang, Wenpeng Lu, Libo Qin}, journal={arXiv preprint arXiv:2410.04463}, year={2024}, archivePrefix={arXiv}, eprint={2410.04463}, primaryClass={cs.CL} }
zhang2024wrong-of-thought:
arxiv-666229
2410.04466
Large Language Model Inference Acceleration: A Comprehensive Hardware Perspective
<|reference_start|>Large Language Model Inference Acceleration: A Comprehensive Hardware Perspective: Large Language Models (LLMs) have demonstrated remarkable capabilities across various fields, from natural language understanding to text generation. Compared to non-generative LLMs like BERT and DeBERTa, generative LLMs like GPT series and Llama series are currently the main focus due to their superior algorithmic performance. The advancements in generative LLMs are closely intertwined with the development of hardware capabilities. Various hardware platforms exhibit distinct hardware characteristics, which can help improve LLM inference performance. Therefore, this paper comprehensively surveys efficient generative LLM inference on different hardware platforms. First, we provide an overview of the algorithm architecture of mainstream generative LLMs and delve into the inference process. Then, we summarize different optimization methods for different platforms such as CPU, GPU, FPGA, ASIC, and PIM/NDP, and provide inference results for generative LLMs. Furthermore, we perform a qualitative and quantitative comparison of inference performance with batch sizes 1 and 8 on different hardware platforms by considering hardware power consumption, absolute inference speed (tokens/s), and energy efficiency (tokens/J). We compare the performance of the same optimization methods across different hardware platforms, the performance across different hardware platforms, and the performance of different methods on the same hardware platform. This provides a systematic and comprehensive summary of existing inference acceleration work by integrating software optimization methods and hardware platforms, which can point to the future trends and potential developments of generative LLMs and hardware technology for edge-side scenarios.<|reference_end|>
arxiv
@article{li2024large, title={Large Language Model Inference Acceleration: A Comprehensive Hardware Perspective}, author={Jinhao Li, Jiaming Xu, Shan Huang, Yonghua Chen, Wen Li, Jun Liu, Yaoxiu Lian, Jiayi Pan, Li Ding, Hao Zhou, Yu Wang, Guohao Dai}, journal={arXiv preprint arXiv:2410.04466}, year={2024}, archivePrefix={arXiv}, eprint={2410.04466}, primaryClass={cs.AR cs.LG} }
li2024large
arxiv-666230
2410.04468
Revisiting In-context Learning Inference Circuit in Large Language Models
<|reference_start|>Revisiting In-context Learning Inference Circuit in Large Language Models: In-context Learning (ICL) is an emerging few-shot learning paradigm on Language Models (LMs) whose inner mechanisms remain unexplored. Existing works describe the inner processing of ICL, but they struggle to capture all the inference phenomena in large language models. Therefore, this paper proposes a comprehensive circuit to model the inference dynamics and tries to explain the observed phenomena of ICL. In detail, we divide ICL inference into 3 major operations: (1) Summarize: LMs encode every input text (demonstrations and queries) into linear representation in the hidden states with sufficient information to solve ICL tasks. (2) Semantics Merge: LMs merge the encoded representations of demonstrations with their corresponding label tokens to produce joint representations of labels and demonstrations. (3) Feature Retrieval and Copy: LMs search the joint representations similar to the query representation on a task subspace, and copy the searched representations into the query. Then, language model heads capture these copied label representations to a certain extent and decode them into predicted labels. The proposed inference circuit successfully captured many phenomena observed during the ICL process, making it a comprehensive and practical explanation of the ICL inference process. Moreover, ablation analysis by disabling the proposed steps seriously damages the ICL performance, suggesting the proposed inference circuit is a dominating mechanism. Additionally, we confirm and list some bypass mechanisms that solve ICL tasks in parallel with the proposed circuit.<|reference_end|>
arxiv
@article{cho2024revisiting, title={Revisiting In-context Learning Inference Circuit in Large Language Models}, author={Hakaze Cho, Mariko Kato, Yoshihiro Sakai, Naoya Inoue}, journal={arXiv preprint arXiv:2410.04468}, year={2024}, archivePrefix={arXiv}, eprint={2410.04468}, primaryClass={cs.CL cs.AI cs.LG} }
cho2024revisiting
arxiv-666231
2410.04471
Numerical Solution for Nonlinear 4D Variational Data Assimilation (4D-Var) via ADMM
<|reference_start|>Numerical Solution for Nonlinear 4D Variational Data Assimilation (4D-Var) via ADMM: The four-dimensional variational data assimilation (4D-Var) has emerged as an important methodology, widely used in numerical weather prediction, oceanographic modeling, and climate forecasting. Classical unconstrained gradient-based algorithms often struggle with local minima, making their numerical performance highly sensitive to the initial guess. In this study, we exploit the separable structure of the 4D-Var problem to propose a practical variant of the alternating direction method of multipliers (ADMM), referred to as the linearized multi-block ADMM with regularization. Unlike classical first-order optimization methods that primarily focus on initial conditions, our approach derives the Euler-Lagrange equation for the entire dynamical system, enabling more comprehensive and effective utilization of observational data. When the initial condition is poorly chosen, the arg min operation steers the iteration towards the observational data, thereby reducing sensitivity to the initial guess. The quadratic subproblems further simplify the solution process, while the parallel structure enhances computational efficiency, especially when utilizing modern hardware. To validate our approach, we demonstrate its superior performance using the Lorenz system, even in the presence of noisy observational data. Furthermore, we showcase the effectiveness of the linearized multi-block ADMM with regularization in solving the 4D-Var problems for the viscous Burgers' equation, across various numerical schemes, including finite difference, finite element, and spectral methods. Finally, we illustrate the recovery of dynamics under noisy observational data in a 2D turbulence scenario, particularly focusing on vorticity concentration, highlighting the robustness of our algorithm in handling complex physical phenomena.<|reference_end|>
arxiv
@article{li2024numerical, title={Numerical Solution for Nonlinear 4D Variational Data Assimilation (4D-Var) via ADMM}, author={Bowen Li, Bin Shi}, journal={arXiv preprint arXiv:2410.04471}, year={2024}, archivePrefix={arXiv}, eprint={2410.04471}, primaryClass={math.NA cs.NA math.OC physics.flu-dyn} }
li2024numerical
arxiv-666232
2410.04472
Collapsed Language Models Promote Fairness
<|reference_start|>Collapsed Language Models Promote Fairness: To mitigate societal biases implicitly encoded in recent successful pretrained language models, a diverse array of approaches have been proposed to encourage model fairness, focusing on prompting, data augmentation, regularized fine-tuning, and more. Despite the development, it is nontrivial to reach a principled understanding of fairness and an effective algorithm that can consistently debias language models. In this work, by rigorous evaluations of Neural Collapse -- a learning phenomenon that occurs in last-layer representations and classifiers in deep networks -- on fairness-related words, we find that debiased language models exhibit collapsed alignment between token representations and word embeddings. More importantly, this observation inspires us to design a principled fine-tuning method that can effectively improve fairness in a wide range of debiasing methods, while still preserving the performance of language models on standard natural language understanding tasks. We attach our code at https://github.com/Xujxyang/Fairness-NC-main.<|reference_end|>
arxiv
@article{xu2024collapsed, title={Collapsed Language Models Promote Fairness}, author={Jingxuan Xu, Wuyang Chen, Linyi Li, Yao Zhao, Yunchao Wei}, journal={arXiv preprint arXiv:2410.04472}, year={2024}, archivePrefix={arXiv}, eprint={2410.04472}, primaryClass={cs.CL cs.CY} }
xu2024collapsed
arxiv-666233
2410.04475
Partial reciprocity-based precoding matrix prediction in FDD massive MIMO with mobility
<|reference_start|>Partial reciprocity-based precoding matrix prediction in FDD massive MIMO with mobility: The timely precoding of frequency division duplex (FDD) massive multiple-input multiple-output (MIMO) systems is a substantial challenge in practice, especially in mobile environments. In order to improve the precoding performance and reduce the precoding complexity, we propose a partial reciprocity-based precoding matrix prediction scheme and further reduce its complexity by exploiting the channel gram matrix. We prove that the precoders can be predicted through a closed-form eigenvector interpolation based on periodic eigenvector samples. Numerical results validate the performance improvements of our schemes over the conventional schemes at moving speeds from 30 km/h to 500 km/h.<|reference_end|>
arxiv
@article{qin2024partial, title={Partial reciprocity-based precoding matrix prediction in FDD massive MIMO with mobility}, author={Ziao Qin, Haifan Yin}, journal={arXiv preprint arXiv:2410.04475}, year={2024}, archivePrefix={arXiv}, eprint={2410.04475}, primaryClass={cs.IT eess.SP math.IT} }
qin2024partial
arxiv-666234
2410.04477
Block Vecchia Approximation for Scalable and Efficient Gaussian Process Computations
<|reference_start|>Block Vecchia Approximation for Scalable and Efficient Gaussian Process Computations: Gaussian Processes (GPs) are vital for modeling and predicting irregularly-spaced, large geospatial datasets. However, their computations often pose significant challenges in large-scale applications. One popular method to approximate GPs is the Vecchia approximation, which approximates the full likelihood via a series of conditional probabilities. The classical Vecchia approximation uses univariate conditional distributions, which leads to redundant evaluations and memory burdens. To address this challenge, our study introduces block Vecchia, which evaluates each multivariate conditional distribution of a block of observations, with blocks formed using the K-means algorithm. The proposed GPU framework for the block Vecchia uses varying batched linear algebra operations to compute multivariate conditional distributions concurrently, notably diminishing the frequent likelihood evaluations. Examining the factors affecting the accuracy of the block Vecchia, we investigate the neighbor selection criterion and find that random ordering markedly enhances the approximation quality as the block count becomes large. To verify the scalability and efficiency of the algorithm, we conduct a series of numerical studies and simulations, demonstrating their practical utility and effectiveness compared to the exact GP. Moreover, we tackle large-scale real datasets using the block Vecchia method, i.e., high-resolution 3D profile wind speed with a million points.<|reference_end|>
arxiv
@article{pan2024block, title={Block Vecchia Approximation for Scalable and Efficient Gaussian Process Computations}, author={Qilong Pan, Sameh Abdulah, Marc G. Genton, Ying Sun}, journal={arXiv preprint arXiv:2410.04477}, year={2024}, archivePrefix={arXiv}, eprint={2410.04477}, primaryClass={stat.CO cs.CE} }
pan2024block
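The block Vecchia construction in the record above is described only in prose; the following is a minimal NumPy/scikit-learn sketch of the idea — K-means blocks, a random block ordering, and a sum of multivariate Gaussian conditional log-densities over nearest earlier blocks. The exponential covariance, the nearest-centroid neighbor rule, and all parameter values are illustrative assumptions, not the paper's GPU implementation.

```python
# Minimal block Vecchia log-likelihood sketch (assumed exponential kernel, K-means blocks,
# neighbors = nearest earlier block centroids). Not the paper's batched GPU code.
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import multivariate_normal

def exp_cov(X1, X2, variance=1.0, length_scale=0.2):
    """Exponential covariance k(x, x') = variance * exp(-||x - x'|| / length_scale)."""
    d = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)
    return variance * np.exp(-d / length_scale)

def block_vecchia_loglik(X, y, n_blocks=20, n_neighbors=3, nugget=1e-6, seed=0):
    """Approximate the GP log-likelihood as a sum of block-conditional Gaussian log-densities."""
    km = KMeans(n_clusters=n_blocks, n_init=10, random_state=seed).fit(X)
    labels, centroids = km.labels_, km.cluster_centers_
    order = np.random.default_rng(seed).permutation(n_blocks)   # random block ordering

    loglik = 0.0
    for i, b in enumerate(order):
        idx_b = np.where(labels == b)[0]
        prev = order[:i]                                         # previously ordered blocks
        if len(prev) > 0:                                        # conditioning set: nearest earlier blocks
            dists = np.linalg.norm(centroids[prev] - centroids[b], axis=1)
            near = prev[np.argsort(dists)[:n_neighbors]]
            idx_c = np.concatenate([np.where(labels == p)[0] for p in near])
        else:
            idx_c = np.array([], dtype=int)

        K_bb = exp_cov(X[idx_b], X[idx_b]) + nugget * np.eye(len(idx_b))
        if len(idx_c) == 0:
            mean, cov = np.zeros(len(idx_b)), K_bb
        else:
            K_cc = exp_cov(X[idx_c], X[idx_c]) + nugget * np.eye(len(idx_c))
            K_bc = exp_cov(X[idx_b], X[idx_c])
            sol = np.linalg.solve(K_cc, K_bc.T)                  # K_cc^{-1} K_cb
            mean = sol.T @ y[idx_c]
            cov = K_bb - K_bc @ sol
        loglik += multivariate_normal(mean=mean, cov=cov, allow_singular=True).logpdf(y[idx_b])
    return loglik

# toy usage: 500 observations of a zero-mean GP at random 2-D locations
rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 2))
y = rng.multivariate_normal(np.zeros(500), exp_cov(X, X) + 1e-6 * np.eye(500))
print("block Vecchia approximate log-likelihood:", block_vecchia_loglik(X, y))
```

The key saving relative to the classical (univariate) Vecchia is that each block contributes a single multivariate conditional instead of one conditional per observation.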
arxiv-666235
2410.04478
Configurable Multilingual ASR with Speech Summary Representations
<|reference_start|>Configurable Multilingual ASR with Speech Summary Representations: Approximately half of the world's population is multilingual, making multilingual ASR (MASR) essential. Deploying multiple monolingual models is challenging when the ground-truth language is unknown in advance. This motivates research efforts on configurable MASR models that can be prompted manually or adapted automatically to recognise specific languages. In this paper, we present the Configurable MASR model with Summary Vector (csvMASR), a novel architecture designed to enhance configurability. Our approach leverages adapters and introduces speech summary vector representations, inspired by conversational summary representations in speech diarization, to combine outputs from language-specific components at the utterance level. We also incorporate an auxiliary language classification loss to enhance configurability. Using data from 7 languages in the Multilingual Librispeech (MLS) dataset, csvMASR outperforms existing MASR models and reduces the word error rate (WER) from 10.33\% to 9.95\% when compared with the baseline. Additionally, csvMASR demonstrates superior performance in language classification and prompting tasks.<|reference_end|>
arxiv
@article{zhu2024configurable, title={Configurable Multilingual ASR with Speech Summary Representations}, author={Harrison Zhu, Ivan Fung, Yingke Zhu, Lahiru Samarakoon}, journal={arXiv preprint arXiv:2410.04478}, year={2024}, archivePrefix={arXiv}, eprint={2410.04478}, primaryClass={cs.SD cs.CL eess.AS} }
zhu2024configurable
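A rough PyTorch sketch of the utterance-level fusion idea as read from the abstract above: frame features are mean-pooled into a summary vector, which produces softmax weights over language-specific adapter outputs, alongside an auxiliary language-classification head. All module names, shapes, and the pooling choice are assumptions for illustration; the actual csvMASR architecture is not reproduced here.

```python
# Hedged sketch of summary-vector fusion over language-specific adapters
# (module names, shapes, and mean pooling are assumptions, not the csvMASR code).
import torch
import torch.nn as nn

class SummaryVectorFusion(nn.Module):
    def __init__(self, d_model=256, n_langs=7, bottleneck=64):
        super().__init__()
        # one lightweight adapter per language
        self.adapters = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, bottleneck), nn.ReLU(), nn.Linear(bottleneck, d_model))
            for _ in range(n_langs)
        )
        self.weight_head = nn.Linear(d_model, n_langs)   # summary vector -> mixture weights
        self.lang_head = nn.Linear(d_model, n_langs)     # auxiliary language classifier

    def forward(self, frames):                  # frames: (batch, time, d_model) encoder outputs
        summary = frames.mean(dim=1)            # utterance-level summary vector
        weights = torch.softmax(self.weight_head(summary), dim=-1)        # (batch, n_langs)
        adapted = torch.stack([a(frames) for a in self.adapters], dim=1)  # (batch, n_langs, time, d)
        fused = (weights[:, :, None, None] * adapted).sum(dim=1)          # weighted combination
        lang_logits = self.lang_head(summary)   # input to the auxiliary language loss
        return fused, lang_logits

# usage with random features standing in for encoder outputs
fused, lang_logits = SummaryVectorFusion()(torch.randn(4, 120, 256))
print(fused.shape, lang_logits.shape)   # torch.Size([4, 120, 256]) torch.Size([4, 7])
```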
arxiv-666236
2410.04479
SITCOM: Step-wise Triple-Consistent Diffusion Sampling for Inverse Problems
<|reference_start|>SITCOM: Step-wise Triple-Consistent Diffusion Sampling for Inverse Problems: Diffusion models (DMs) are a class of generative models that allow sampling from a distribution learned over a training set. When applied to solving inverse imaging problems (IPs), the reverse sampling steps of DMs are typically modified to approximately sample from a measurement-conditioned distribution in the image space. However, these modifications may be unsuitable for certain settings (such as in the presence of measurement noise) and non-linear tasks, as they often struggle to correct errors from earlier sampling steps and generally require a large number of optimization and/or sampling steps. To address these challenges, we state three conditions for achieving measurement-consistent diffusion trajectories. Building on these conditions, we propose a new optimization-based sampling method that not only enforces the standard data manifold measurement consistency and forward diffusion consistency, as seen in previous studies, but also incorporates backward diffusion consistency that maintains a diffusion trajectory by optimizing over the input of the pre-trained model at every sampling step. By enforcing these conditions, either implicitly or explicitly, our sampler requires significantly fewer reverse steps. Therefore, we refer to our accelerated method as Step-wise Triple-Consistent Sampling (SITCOM). Compared to existing state-of-the-art baseline methods, under different levels of measurement noise, our extensive experiments across five linear and three non-linear image restoration tasks demonstrate that SITCOM achieves competitive or superior results in terms of standard image similarity metrics while requiring a significantly reduced run-time across all considered tasks.<|reference_end|>
arxiv
@article{alkhouri2024sitcom:, title={SITCOM: Step-wise Triple-Consistent Diffusion Sampling for Inverse Problems}, author={Ismail Alkhouri, Shijun Liang, Cheng-Han Huang, Jimmy Dai, Qing Qu, Saiprasad Ravishankar, Rongrong Wang}, journal={arXiv preprint arXiv:2410.04479}, year={2024}, archivePrefix={arXiv}, eprint={2410.04479}, primaryClass={eess.IV cs.CV cs.LG} }
alkhouri2024sitcom:
arxiv-666237
2410.04480
Learning to Solve Abstract Reasoning Problems with Neurosymbolic Program Synthesis and Task Generation
<|reference_start|>Learning to Solve Abstract Reasoning Problems with Neurosymbolic Program Synthesis and Task Generation: The ability to think abstractly and reason by analogy is a prerequisite to rapidly adapt to new conditions, tackle newly encountered problems by decomposing them, and synthesize knowledge to solve problems comprehensively. We present TransCoder, a method for solving abstract problems based on neural program synthesis, and conduct a comprehensive analysis of decisions made by the generative module of the proposed architecture. At the core of TransCoder is a typed domain-specific language, designed to facilitate feature engineering and abstract reasoning. In training, we use the programs that failed to solve tasks to generate new tasks and gather them in a synthetic dataset. As each synthetic task created in this way has a known associated program (solution), the model is trained on them in supervised mode. Solutions are represented in a transparent programmatic form, which can be inspected and verified. We demonstrate TransCoder's performance using the Abstract Reasoning Corpus dataset, for which our framework generates tens of thousands of synthetic problems with corresponding solutions and facilitates systematic progress in learning.<|reference_end|>
arxiv
@article{bednarek2024learning, title={Learning to Solve Abstract Reasoning Problems with Neurosymbolic Program Synthesis and Task Generation}, author={Jakub Bednarek, Krzysztof Krawiec}, journal={arXiv preprint arXiv:2410.04480}, year={2024}, doi={10.1007/978-3-031-71167-1_21}, archivePrefix={arXiv}, eprint={2410.04480}, primaryClass={cs.AI cs.SC} }
bednarek2024learning
arxiv-666238
2410.04484
Fine-Grained Prediction of Reading Comprehension from Eye Movements
<|reference_start|>Fine-Grained Prediction of Reading Comprehension from Eye Movements: Can human reading comprehension be assessed from eye movements in reading? In this work, we address this longstanding question using large-scale eyetracking data over textual materials that are geared towards behavioral analyses of reading comprehension. We focus on a fine-grained and largely unaddressed task of predicting reading comprehension from eye movements at the level of a single question over a passage. We tackle this task using three new multimodal language models, as well as a battery of prior models from the literature. We evaluate the models' ability to generalize to new textual items, new participants, and the combination of both, in two different reading regimes, ordinary reading and information seeking. The evaluations suggest that although the task is highly challenging, eye movements contain useful signals for fine-grained prediction of reading comprehension. Code and data will be made publicly available.<|reference_end|>
arxiv
@article{shubi2024fine-grained, title={Fine-Grained Prediction of Reading Comprehension from Eye Movements}, author={Omer Shubi, Yoav Meiri, Cfir Avraham Hadar, Yevgeni Berzak}, journal={arXiv preprint arXiv:2410.04484}, year={2024}, archivePrefix={arXiv}, eprint={2410.04484}, primaryClass={cs.CL} }
shubi2024fine-grained
arxiv-666239
2410.04485
Exploring the Potential of Conversational Test Suite Based Program Repair on SWE-bench
<|reference_start|>Exploring the Potential of Conversational Test Suite Based Program Repair on SWE-bench: Automatic program repair at the project level may open as-yet-unseen opportunities in various fields of human activity. Since the SWE-Bench challenge was presented, we have seen numerous solutions. Patch generation is a part of program repair, and test suite-based conversational patch generation has proven its effectiveness. However, the potential of conversational patch generation has not yet been specifically estimated on SWE-Bench. This study reports experimental results aimed at evaluating the individual effectiveness of conversational patch generation on problems from SWE-Bench. The experiments show that a simple conversational pipeline based on LLaMA 3.1 70B can generate valid patches in 47\% of cases, which is comparable to the state-of-the-art in program repair on SWE-Bench.<|reference_end|>
arxiv
@article{cheshkov2024exploring, title={Exploring the Potential of Conversational Test Suite Based Program Repair on SWE-bench}, author={Anton Cheshkov, Pavel Zadorozhny, Rodion Levichev, Evgeny Maslov, Ronaldo Franco Jaldin}, journal={arXiv preprint arXiv:2410.04485}, year={2024}, archivePrefix={arXiv}, eprint={2410.04485}, primaryClass={cs.SE cs.AI cs.MA} }
cheshkov2024exploring
arxiv-666240
2410.04487
The Fourier Cosine Method for Discrete Probability Distributions
<|reference_start|>The Fourier Cosine Method for Discrete Probability Distributions: We provide a rigorous convergence proof demonstrating that the well-known semi-analytical Fourier cosine (COS) formula for the inverse Fourier transform of continuous probability distributions can be extended to discrete probability distributions, with the help of spectral filters. We establish general convergence rates for these filters and further show that several classical spectral filters achieve convergence rates one order faster than previously recognized in the literature on the Gibbs phenomenon. Our numerical experiments corroborate the theoretical convergence results. Additionally, we illustrate the computational speed and accuracy of the discrete COS method with applications in computational statistics and quantitative finance. The theoretical and numerical results highlight the method's potential for solving problems involving discrete distributions, particularly when the characteristic function is known, allowing the discrete Fourier transform (DFT) to be bypassed.<|reference_end|>
arxiv
@article{shen2024the, title={The Fourier Cosine Method for Discrete Probability Distributions}, author={Xiaoyu Shen, Fang Fang, Chengguang Liu}, journal={arXiv preprint arXiv:2410.04487}, year={2024}, archivePrefix={arXiv}, eprint={2410.04487}, primaryClass={math.NA cs.NA q-fin.CP} }
shen2024the
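As a concrete illustration of the COS recipe the record above builds on, here is a small NumPy example that recovers a density from its characteristic function and damps the Fourier-cosine coefficients with an exponential spectral filter. It uses a standard normal target (where the answer is known) rather than a discrete distribution, and the filter order and strength are arbitrary choices — a minimal sketch of the general mechanism, not the paper's discrete-distribution method.

```python
# COS inversion of a characteristic function with an exponential spectral filter
# (standard-normal example; truncation range and filter parameters are illustrative).
import numpy as np

def cos_density(cf, x, a, b, N=128, filter_order=None):
    """f(x) ~= sum_k' sigma_k * F_k * cos(k*pi*(x-a)/(b-a)), F_k from the characteristic function cf."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    F = (2.0 / (b - a)) * np.real(cf(u) * np.exp(-1j * u * a))  # COS coefficients
    F[0] *= 0.5                                                 # the sum-prime convention (half first term)
    sigma = np.ones(N)
    if filter_order is not None:
        eta = k / (N - 1)
        sigma = np.exp(-36.0 * eta ** filter_order)             # exponential spectral filter
    return np.cos(np.outer(x - a, u)) @ (sigma * F)

# standard normal: phi(u) = exp(-u^2 / 2); truncation range [a, b] = [-10, 10]
cf = lambda u: np.exp(-0.5 * u ** 2)
x = np.linspace(-4, 4, 9)
approx = cos_density(cf, x, a=-10.0, b=10.0, N=256, filter_order=6)
exact = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
print("max abs error:", np.max(np.abs(approx - exact)))  # small for this smooth target
```

For a discrete distribution the raw coefficients decay slowly (the Gibbs phenomenon the abstract refers to), which is where the filter choice becomes the decisive ingredient rather than a cosmetic one.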
arxiv-666241
2410.04488
A Pluggable Common Sense-Enhanced Framework for Knowledge Graph Completion
<|reference_start|>A Pluggable Common Sense-Enhanced Framework for Knowledge Graph Completion: Knowledge graph completion (KGC) tasks aim to infer missing facts in a knowledge graph (KG) for many knowledge-intensive applications. However, existing embedding-based KGC approaches primarily rely on factual triples, potentially leading to outcomes inconsistent with common sense. Besides, generating explicit common sense is often impractical or costly for a KG. To address these challenges, we propose a pluggable common sense-enhanced KGC framework that incorporates both fact and common sense for KGC. This framework is adaptable to different KGs based on their entity concept richness and has the capability to automatically generate explicit or implicit common sense from factual triples. Furthermore, we introduce common sense-guided negative sampling and a coarse-to-fine inference approach for KGs with rich entity concepts. For KGs without concepts, we propose a dual scoring scheme involving a relation-aware concept embedding mechanism. Importantly, our approach can be integrated as a pluggable module for many knowledge graph embedding (KGE) models, facilitating joint common sense and fact-driven training and inference. The experiments illustrate that our framework exhibits good scalability and outperforms existing models across various KGC tasks.<|reference_end|>
arxiv
@article{niu2024a, title={A Pluggable Common Sense-Enhanced Framework for Knowledge Graph Completion}, author={Guanglin Niu, Bo Li, Siling Feng}, journal={arXiv preprint arXiv:2410.04488}, year={2024}, archivePrefix={arXiv}, eprint={2410.04488}, primaryClass={cs.AI cs.CL} }
niu2024a
arxiv-666242
2410.04489
Grokking at the Edge of Linear Separability
<|reference_start|>Grokking at the Edge of Linear Separability: We study the generalization properties of binary logistic classification in a simplified setting, for which a "memorizing" and "generalizing" solution can always be strictly defined, and elucidate empirically and analytically the mechanism underlying Grokking in its dynamics. We analyze the asymptotic long-time dynamics of logistic classification on a random feature model with a constant label and show that it exhibits Grokking, in the sense of delayed generalization and non-monotonic test loss. We find that Grokking is amplified when classification is applied to training sets which are on the verge of linear separability. Even though a perfect generalizing solution always exists, we prove that the implicit bias of the logistic loss will cause the model to overfit if the training data is linearly separable from the origin. For training sets that are not separable from the origin, the model will always generalize perfectly asymptotically, but overfitting may occur at early stages of training. Importantly, in the vicinity of the transition, that is, for training sets that are almost separable from the origin, the model may overfit for arbitrarily long times before generalizing. We gain more insights by examining a tractable one-dimensional toy model that quantitatively captures the key features of the full model. Finally, we highlight intriguing common properties of our findings with recent literature, suggesting that grokking generally occurs in proximity to the interpolation threshold, reminiscent of critical phenomena often observed in physical systems.<|reference_end|>
arxiv
@article{beck2024grokking, title={Grokking at the Edge of Linear Separability}, author={Alon Beck, Noam Levi, Yohai Bar-Sinai}, journal={arXiv preprint arXiv:2410.04489}, year={2024}, archivePrefix={arXiv}, eprint={2410.04489}, primaryClass={stat.ML cond-mat.dis-nn cs.LG math-ph math.MP} }
beck2024grokking
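To make the setting tangible, below is a small NumPy experiment in the spirit of the abstract: gradient-descent logistic classification through the origin on Gaussian data that all carries the constant label +1, with train/test accuracy and alignment to the "generalizing" direction logged over long training. The dimension, sample size, signal strength, and step size are arbitrary, and whether delayed generalization actually appears depends on how close these constants put the data to the separability-from-origin threshold — this is a harness for watching the dynamics, not a reproduction of the paper's analysis.

```python
# Toy constant-label logistic setup near the separability-from-origin threshold.
# Constants (d, n, signal, lr) are arbitrary; a sketch, not the paper's random-feature model.
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)
d, n_train, n_test = 50, 60, 5000
mu = 0.2 * np.ones(d) / np.sqrt(d)               # weak "generalizing" direction
X_train = rng.normal(size=(n_train, d)) + mu      # every example carries the constant label +1
X_test = rng.normal(size=(n_test, d)) + mu

w = np.zeros(d)                                   # classifier through the origin (no bias)
lr = 0.5
for step in range(1, 100001):
    margins = X_train @ w
    grad = -(X_train.T @ expit(-margins)) / n_train   # gradient of mean -log sigmoid(X w)
    w -= lr * grad
    if step in (1, 100, 1000, 10000, 100000):
        train_acc = np.mean(X_train @ w > 0)
        test_acc = np.mean(X_test @ w > 0)
        cos = w @ mu / (np.linalg.norm(w) * np.linalg.norm(mu) + 1e-12)
        print(f"step {step:>6d}  train acc {train_acc:.3f}  test acc {test_acc:.3f}  cos(w, mu) {cos:.3f}")
```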
arxiv-666243
2410.04490
A Large-Scale Exploit Instrumentation Study of AI/ML Supply Chain Attacks in Hugging Face Models
<|reference_start|>A Large-Scale Exploit Instrumentation Study of AI/ML Supply Chain Attacks in Hugging Face Models: The development of machine learning (ML) techniques has led to ample opportunities for developers to develop and deploy their own models. Hugging Face serves as an open source platform where developers can share and download other models in an effort to make ML development more collaborative. In order for models to be shared, they first need to be serialized. Certain Python serialization methods are considered unsafe, as they are vulnerable to object injection. This paper investigates the pervasiveness of these unsafe serialization methods across Hugging Face, and demonstrates through an exploitation approach, that models using unsafe serialization methods can be exploited and shared, creating an unsafe environment for ML developers. We investigate to what extent Hugging Face is able to flag repositories and files using unsafe serialization methods, and develop a technique to detect malicious models. Our results show that Hugging Face is home to a wide range of potentially vulnerable models.<|reference_end|>
arxiv
@article{casey2024a, title={A Large-Scale Exploit Instrumentation Study of AI/ML Supply Chain Attacks in Hugging Face Models}, author={Beatrice Casey, Joanna C. S. Santos, Mehdi Mirakhorli}, journal={arXiv preprint arXiv:2410.04490}, year={2024}, archivePrefix={arXiv}, eprint={2410.04490}, primaryClass={cs.CR cs.LG cs.SE} }
casey2024a
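The abstract's concern centers on Python pickle-based serialization; the snippet below shows the kind of lightweight static check such a study relies on, flagging pickle streams whose opcodes import or invoke arbitrary callables. It uses only the standard library; the opcode blocklist and the toy payload are illustrative, and a real scanner (or Hugging Face's own checks) inspects far more than this.

```python
# Flag pickle payloads that import callables (GLOBAL/STACK_GLOBAL) or invoke them (REDUCE).
# Standard-library-only sketch; the opcode blocklist is an illustrative choice.
import io
import pickle
import pickletools

SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan_pickle_bytes(data: bytes):
    """Return the list of suspicious opcodes (with arguments) found in a pickle stream."""
    findings = []
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in SUSPICIOUS_OPS:
            findings.append((opcode.name, arg))
    return findings

# benign payload: a plain dict of floats
benign = pickle.dumps({"weights": [0.1, 0.2, 0.3]})

# object-injection-style payload: __reduce__ makes unpickling call a function
class Exploit:
    def __reduce__(self):
        return (print, ("arbitrary code would run here",))

malicious = pickle.dumps(Exploit())   # scanned statically below, never unpickled

for name, blob in [("benign", benign), ("malicious", malicious)]:
    hits = scan_pickle_bytes(blob)
    verdict = "suspicious" if hits else "no import/call opcodes"
    print(f"{name}: {verdict} {hits}")
```

The scan is purely static: `pickletools.genops` decodes the opcode stream without ever executing it, which is what makes this kind of audit safe to run over untrusted model files.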
arxiv-666244
2410.04491
Knowledge-Guided Dynamic Modality Attention Fusion Framework for Multimodal Sentiment Analysis
<|reference_start|>Knowledge-Guided Dynamic Modality Attention Fusion Framework for Multimodal Sentiment Analysis: Multimodal Sentiment Analysis (MSA) utilizes multimodal data to infer the users' sentiment. Previous methods focus on equally treating the contribution of each modality or statically using text as the dominant modality to conduct interaction, which neglects the situation where each modality may become dominant. In this paper, we propose a Knowledge-Guided Dynamic Modality Attention Fusion Framework (KuDA) for multimodal sentiment analysis. KuDA uses sentiment knowledge to guide the model dynamically selecting the dominant modality and adjusting the contributions of each modality. In addition, with the obtained multimodal representation, the model can further highlight the contribution of dominant modality through the correlation evaluation loss. Extensive experiments on four MSA benchmark datasets indicate that KuDA achieves state-of-the-art performance and is able to adapt to different scenarios of dominant modality.<|reference_end|>
arxiv
@article{feng2024knowledge-guided, title={Knowledge-Guided Dynamic Modality Attention Fusion Framework for Multimodal Sentiment Analysis}, author={Xinyu Feng, Yuming Lin, Lihua He, You Li, Liang Chang, Ya Zhou}, journal={arXiv preprint arXiv:2410.04491}, year={2024}, archivePrefix={arXiv}, eprint={2410.04491}, primaryClass={cs.CL cs.AI cs.MM} }
feng2024knowledge-guided
arxiv-666245
2410.04492
Interpret Your Decision: Logical Reasoning Regularization for Generalization in Visual Classification
<|reference_start|>Interpret Your Decision: Logical Reasoning Regularization for Generalization in Visual Classification: Vision models excel in image classification but struggle to generalize to unseen data, such as classifying images from unseen domains or discovering novel categories. In this paper, we explore the relationship between logical reasoning and deep learning generalization in visual classification. A logical regularization termed L-Reg is derived which bridges a logical analysis framework to image classification. Our work reveals that L-Reg reduces the complexity of the model in terms of the feature distribution and classifier weights. Specifically, we unveil the interpretability brought by L-Reg, as it enables the model to extract the salient features, such as faces to persons, for classification. Theoretical analysis and experiments demonstrate that L-Reg enhances generalization across various scenarios, including multi-domain generalization and generalized category discovery. In complex real-world scenarios where images span unknown classes and unseen domains, L-Reg consistently improves generalization, highlighting its practical efficacy.<|reference_end|>
arxiv
@article{tan2024interpret, title={Interpret Your Decision: Logical Reasoning Regularization for Generalization in Visual Classification}, author={Zhaorui Tan, Xi Yang, Qiufeng Wang, Anh Nguyen, Kaizhu Huang}, journal={arXiv preprint arXiv:2410.04492}, year={2024}, archivePrefix={arXiv}, eprint={2410.04492}, primaryClass={cs.CV cs.AI cs.LG} }
tan2024interpret
arxiv-666246
2410.04497
Generalizability analysis of deep learning predictions of human brain responses to augmented and semantically novel visual stimuli
<|reference_start|>Generalizability analysis of deep learning predictions of human brain responses to augmented and semantically novel visual stimuli: The purpose of this work is to investigate the soundness and utility of a neural network-based approach as a framework for exploring the impact of image enhancement techniques on visual cortex activation. In a preliminary study, we prepare a set of state-of-the-art brain encoding models, selected among the top 10 methods that participated in The Algonauts Project 2023 Challenge [16]. We analyze their ability to make valid predictions about the effects of various image enhancement techniques on neural responses. Given the impossibility of acquiring the actual data due to the high costs associated with brain imaging procedures, our investigation builds up on a series of experiments. Specifically, we analyze the ability of brain encoders to estimate the cerebral reaction to various augmentations by evaluating the response to augmentations targeting objects (i.e., faces and words) with known impact on specific areas. Moreover, we study the predicted activation in response to objects unseen during training, exploring the impact of semantically out-of-distribution stimuli. We provide relevant evidence for the generalization ability of the models forming the proposed framework, which appears to be promising for the identification of the optimal visual augmentation filter for a given task, model-driven design strategies as well as for AR and VR applications.<|reference_end|>
arxiv
@article{piskovskyi2024generalizability, title={Generalizability analysis of deep learning predictions of human brain responses to augmented and semantically novel visual stimuli}, author={Valentyn Piskovskyi, Riccardo Chimisso, Sabrina Patania, Tom Foulsham, Giuseppe Vizzari, Dimitri Ognibene}, journal={arXiv preprint arXiv:2410.04497}, year={2024}, archivePrefix={arXiv}, eprint={2410.04497}, primaryClass={cs.CV cs.AI cs.HC} }
piskovskyi2024generalizability
arxiv-666247
2410.04498
AdaMemento: Adaptive Memory-Assisted Policy Optimization for Reinforcement Learning
<|reference_start|>AdaMemento: Adaptive Memory-Assisted Policy Optimization for Reinforcement Learning: In sparse reward scenarios of reinforcement learning (RL), the memory mechanism provides promising shortcuts to policy optimization by reflecting on past experiences like humans. However, current memory-based RL methods simply store and reuse high-value policies, lacking a deeper refining and filtering of diverse past experiences and hence limiting the capability of memory. In this paper, we propose AdaMemento, an adaptive memory-enhanced RL framework. Instead of just memorizing positive past experiences, we design a memory-reflection module that exploits both positive and negative experiences by learning to predict known local optimal policies based on real-time states. To effectively gather informative trajectories for the memory, we further introduce a fine-grained intrinsic motivation paradigm, where nuances in similar states can be precisely distinguished to guide exploration. The exploitation of past experiences and exploration of new policies are then adaptively coordinated by ensemble learning to approach the global optimum. Furthermore, we theoretically prove the superiority of our new intrinsic motivation and ensemble mechanism. From 59 quantitative and visualization experiments, we confirm that AdaMemento can distinguish subtle states for better exploration and effectively exploit past experiences in memory, achieving significant improvements over previous methods.<|reference_end|>
arxiv
@article{yan2024adamemento:, title={AdaMemento: Adaptive Memory-Assisted Policy Optimization for Reinforcement Learning}, author={Renye Yan, Yaozhong Gan, You Wu, Junliang Xing, Ling Liangn, Yeshang Zhu, Yimao Cai}, journal={arXiv preprint arXiv:2410.04498}, year={2024}, archivePrefix={arXiv}, eprint={2410.04498}, primaryClass={cs.LG} }
yan2024adamemento:
arxiv-666248
2410.04499
Adjusting Pretrained Backbones for Performativity
<|reference_start|>Adjusting Pretrained Backbones for Performativity: With the widespread deployment of deep learning models, they influence their environment in various ways. The induced distribution shifts can lead to unexpected performance degradation in deployed models. Existing methods to anticipate performativity typically incorporate information about the deployed model into the feature vector when predicting future outcomes. While enjoying appealing theoretical properties, modifying the input dimension of the prediction task is often not practical. To address this, we propose a novel technique to adjust pretrained backbones for performativity in a modular way, achieving better sample efficiency and enabling the reuse of existing deep learning assets. Focusing on performative label shift, the key idea is to train a shallow adapter module to perform a Bayes-optimal label shift correction to the backbone's logits given a sufficient statistic of the model to be deployed. As such, our framework decouples the construction of input-specific feature embeddings from the mechanism governing performativity. Motivated by dynamic benchmarking as a use-case, we evaluate our approach under adversarial sampling, for vision and language tasks. We show how it leads to smaller loss along the retraining trajectory and enables us to effectively select among candidate models to anticipate performance degradations. More broadly, our work provides a first baseline for addressing performativity in deep learning.<|reference_end|>
arxiv
@article{demirel2024adjusting, title={Adjusting Pretrained Backbones for Performativity}, author={Berker Demirel, Lingjing Kong, Kun Zhang, Theofanis Karaletsos, Celestine Mendler-D"unner, Francesco Locatello}, journal={arXiv preprint arXiv:2410.04499}, year={2024}, archivePrefix={arXiv}, eprint={2410.04499}, primaryClass={cs.LG cs.AI} }
demirel2024adjusting
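For the label-shift part of the story above, the core correction can be written in a few lines: a frozen backbone's logits are adjusted by the log-ratio of the deployment-induced label prior to the training prior. The tiny adapter below stands in for the learned version that maps a sufficient statistic of the deployed model to per-class offsets; its architecture, the choice of statistic, and all sizes are assumptions made for illustration, not the paper's module.

```python
# Bayes-optimal label-shift correction of frozen-backbone logits, plus a tiny adapter
# that predicts per-class offsets from a statistic of the deployed model (all shapes assumed).
import torch
import torch.nn as nn

def label_shift_correct(logits, train_prior, deploy_prior, eps=1e-8):
    """p_new(y|x) is proportional to p_train(y|x) * deploy_prior(y) / train_prior(y), in logit space."""
    return logits + torch.log(deploy_prior + eps) - torch.log(train_prior + eps)

class PerformativityAdapter(nn.Module):
    """Maps a sufficient statistic of the deployed model to additive per-class logit offsets."""
    def __init__(self, stat_dim, n_classes, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(stat_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes))

    def forward(self, logits, model_stat):        # logits: (B, C); model_stat: (stat_dim,)
        return logits + self.net(model_stat)       # broadcast the offset over the batch

# usage: 3 classes, backbone logits for a batch of 4
logits = torch.randn(4, 3)
train_prior = torch.tensor([0.5, 0.3, 0.2])
deploy_prior = torch.tensor([0.2, 0.3, 0.5])       # label shift induced by deploying the model
analytic = label_shift_correct(logits, train_prior, deploy_prior)
adapter = PerformativityAdapter(stat_dim=3, n_classes=3)
learned = adapter(logits, deploy_prior)            # here the deploy prior itself plays the statistic
print(analytic.shape, learned.shape)
```

The appeal of this decoupling is visible in the sketch: the backbone never changes, so the same embeddings can be reused while only the shallow correction is retrained as deployment conditions drift.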
arxiv-666249
2410.04501
Leveraging Large Language Models for Suicide Detection on Social Media with Limited Labels
<|reference_start|>Leveraging Large Language Models for Suicide Detection on Social Media with Limited Labels: The increasing frequency of suicidal thoughts highlights the importance of early detection and intervention. Social media platforms, where users often share personal experiences and seek help, could be utilized to identify individuals at risk. However, the large volume of daily posts makes manual review impractical. This paper explores the use of Large Language Models (LLMs) to automatically detect suicidal content in text-based social media posts. We propose a novel method for generating pseudo-labels for unlabeled data by prompting LLMs, along with traditional classification fine-tuning techniques to enhance label accuracy. To create a strong suicide detection model, we develop an ensemble approach involving prompting with Qwen2-72B-Instruct, and using fine-tuned models such as Llama3-8B, Llama3.1-8B, and Gemma2-9B. We evaluate our approach on the dataset of the Suicide Ideation Detection on Social Media Challenge, a track of the IEEE Big Data 2024 Big Data Cup. Additionally, we conduct a comprehensive analysis to assess the impact of different models and fine-tuning strategies on detection performance. Experimental results show that the ensemble model significantly improves the detection accuracy by 5 percentage points compared with the individual models. It achieves a weighted F1 score of 0.770 on the public test set, and 0.731 on the private test set, providing a promising solution for identifying suicidal content in social media. Our analysis shows that the choice of LLMs affects the prompting performance, with larger models providing better accuracy. Our code and checkpoints are publicly available at https://github.com/khanhvynguyen/Suicide_Detection_LLMs.<|reference_end|>
arxiv
@article{nguyen2024leveraging, title={Leveraging Large Language Models for Suicide Detection on Social Media with Limited Labels}, author={Vy Nguyen, Chau Pham}, journal={arXiv preprint arXiv:2410.04501}, year={2024}, archivePrefix={arXiv}, eprint={2410.04501}, primaryClass={cs.CL cs.AI cs.LG} }
nguyen2024leveraging
arxiv-666250
2410.04503
LRHP: Learning Representations for Human Preferences via Preference Pairs
<|reference_start|>LRHP: Learning Representations for Human Preferences via Preference Pairs: To improve human-preference alignment training, current research has developed numerous preference datasets consisting of preference pairs labeled as "preferred" or "dispreferred". These preference pairs are typically used to encode human preferences into a single numerical value through reward modeling, which acts as a reward signal during reinforcement learning from human feedback (RLHF). However, representing these human preferences as a numerical value complicates the analysis of these preferences and restricts their broader applications other than RLHF. In contrast, in this work, we introduce a preference representation learning task that aims to construct a richer and more structured representation of human preferences. We further develop a more generalizable framework, Learning Representations for Human Preferences via preference pairs (namely LRHP), which extends beyond traditional reward modeling to tackle this task. We verify the utility of preference representations in two downstream tasks: preference data selection and preference margin prediction. Building upon the human preferences in representations, we achieve strong performance in both tasks, significantly outperforming baselines.<|reference_end|>
arxiv
@article{wang2024lrhp:, title={LRHP: Learning Representations for Human Preferences via Preference Pairs}, author={Chenglong Wang, Yang Gan, Yifu Huo, Yongyu Mu, Qiaozhi He, Murun Yang, Tong Xiao, Chunliang Zhang, Tongran Liu, Jingbo Zhu}, journal={arXiv preprint arXiv:2410.04503}, year={2024}, archivePrefix={arXiv}, eprint={2410.04503}, primaryClass={cs.CL cs.AI} }
wang2024lrhp:
arxiv-666251
2410.04507
MECFormer: Multi-task Whole Slide Image Classification with Expert Consultation Network
<|reference_start|>MECFormer: Multi-task Whole Slide Image Classification with Expert Consultation Network: Whole slide image (WSI) classification is a crucial problem for cancer diagnostics in clinics and hospitals. A WSI, acquired at gigapixel size, is commonly tiled into patches and processed by multiple-instance learning (MIL) models. Previous MIL-based models designed for this problem have only been evaluated on individual tasks for specific organs, and the ability to handle multiple tasks within a single model has not been investigated. In this study, we propose MECFormer, a generative Transformer-based model designed to handle multiple tasks within one model. To leverage the power of learning multiple tasks simultaneously and to enhance the model's effectiveness in focusing on each individual task, we introduce an Expert Consultation Network, a projection layer placed at the beginning of the Transformer-based model. Additionally, to enable flexible classification, autoregressive decoding is incorporated by a language decoder for WSI classification. Through extensive experiments on five datasets involving four different organs, one cancer classification task, and four cancer subtyping tasks, MECFormer demonstrates superior performance compared to individual state-of-the-art multiple-instance learning models.<|reference_end|>
arxiv
@article{bui2024mecformer:, title={MECFormer: Multi-task Whole Slide Image Classification with Expert Consultation Network}, author={Doanh C. Bui and Jin Tae Kwak}, journal={arXiv preprint arXiv:2410.04507}, year={2024}, archivePrefix={arXiv}, eprint={2410.04507}, primaryClass={cs.CV} }
bui2024mecformer:
arxiv-666252
2410.04509
ErrorRadar: Benchmarking Complex Mathematical Reasoning of Multimodal Large Language Models Via Error Detection
<|reference_start|>ErrorRadar: Benchmarking Complex Mathematical Reasoning of Multimodal Large Language Models Via Error Detection: As the field of Multimodal Large Language Models (MLLMs) continues to evolve, their potential to revolutionize artificial intelligence is particularly promising, especially in addressing mathematical reasoning tasks. Current mathematical benchmarks predominantly focus on evaluating MLLMs' problem-solving ability, yet there is a crucial gap in addressing more complex scenarios such as error detection, for enhancing reasoning capability in complicated settings. To fill this gap, we formally formulate the new task: multimodal error detection, and introduce ErrorRadar, the first benchmark designed to assess MLLMs' capabilities in such a task. ErrorRadar evaluates two sub-tasks: error step identification and error categorization, providing a comprehensive framework for evaluating MLLMs' complex mathematical reasoning ability. It consists of 2,500 high-quality multimodal K-12 mathematical problems, collected from real-world student interactions in an educational organization, with rigorous annotation and rich metadata such as problem type and error category. Through extensive experiments, we evaluated both open-source and closed-source representative MLLMs, benchmarking their performance against educational expert evaluators. Results indicate significant challenges still remain, as GPT-4o with best performance is still around 10% behind human evaluation. The dataset will be available upon acceptance.<|reference_end|>
arxiv
@article{yan2024errorradar:, title={ErrorRadar: Benchmarking Complex Mathematical Reasoning of Multimodal Large Language Models Via Error Detection}, author={Yibo Yan, Shen Wang, Jiahao Huo, Hang Li, Boyan Li, Jiamin Su, Xiong Gao, Yi-Fan Zhang, Tianlong Xu, Zhendong Chu, Aoxiao Zhong, Kun Wang, Hui Xiong, Philip S. Yu, Xuming Hu, Qingsong Wen}, journal={arXiv preprint arXiv:2410.04509}, year={2024}, archivePrefix={arXiv}, eprint={2410.04509}, primaryClass={cs.CL} }
yan2024errorradar:
arxiv-666253
2410.04511
Realizing Video Summarization from the Path of Language-based Semantic Understanding
<|reference_start|>Realizing Video Summarization from the Path of Language-based Semantic Understanding: The recent development of Video-based Large Language Models (VideoLLMs), has significantly advanced video summarization by aligning video features and, in some cases, audio features with Large Language Models (LLMs). Each of these VideoLLMs possesses unique strengths and weaknesses. Many recent methods have required extensive fine-tuning to overcome the limitations of these models, which can be resource-intensive. In this work, we observe that the strengths of one VideoLLM can complement the weaknesses of another. Leveraging this insight, we propose a novel video summarization framework inspired by the Mixture of Experts (MoE) paradigm, which operates as an inference-time algorithm without requiring any form of fine-tuning. Our approach integrates multiple VideoLLMs to generate comprehensive and coherent textual summaries. It effectively combines visual and audio content, provides detailed background descriptions, and excels at identifying keyframes, which enables more semantically meaningful retrieval compared to traditional computer vision approaches that rely solely on visual information, all without the need for additional fine-tuning. Moreover, the resulting summaries enhance performance in downstream tasks such as summary video generation, either through keyframe selection or in combination with text-to-image models. Our language-driven approach offers a semantically rich alternative to conventional methods and provides flexibility to incorporate newer VideoLLMs, enhancing adaptability and performance in video summarization tasks.<|reference_end|>
arxiv
@article{mu2024realizing, title={Realizing Video Summarization from the Path of Language-based Semantic Understanding}, author={Kuan-Chen Mu, Zhi-Yi Chin, Wei-Chen Chiu}, journal={arXiv preprint arXiv:2410.04511}, year={2024}, archivePrefix={arXiv}, eprint={2410.04511}, primaryClass={cs.CV cs.CL} }
mu2024realizing
arxiv-666254
2410.04512
Support Graph Preconditioners for Off-Lattice Cell-Based Models
<|reference_start|>Support Graph Preconditioners for Off-Lattice Cell-Based Models: Off-lattice agent-based models (or cell-based models) of multicellular systems are increasingly used to create in-silico models of in-vitro and in-vivo experimental setups of cells and tissues, such as cancer spheroids, neural crest cell migration, and liver lobules. These applications, which simulate thousands to millions of cells, require robust and efficient numerical methods. At their core, these models necessitate the solution of a large friction-dominated equation of motion, resulting in a sparse, symmetric, and positive definite matrix equation. The conjugate gradient method is employed to solve this problem, but this requires a good preconditioner for optimal performance. In this study, we develop a graph-based preconditioning strategy that can be easily implemented in such agent-based models. Our approach centers on extending support graph preconditioners to block-structured matrices. We prove asymptotic bounds on the condition number of these preconditioned friction matrices. We then benchmark the conjugate gradient method with our support graph preconditioners and compare its performance to other common preconditioning strategies.<|reference_end|>
arxiv
@article{steinman2024support, title={Support Graph Preconditioners for Off-Lattice Cell-Based Models}, author={Justin Steinman and Andreas Buttensch"on}, journal={arXiv preprint arXiv:2410.04512}, year={2024}, archivePrefix={arXiv}, eprint={2410.04512}, primaryClass={math.NA cs.NA q-bio.CB} }
steinman2024support
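To make the preconditioning idea in the record above concrete, here is a small SciPy example in the same spirit: a symmetric positive definite "friction-like" matrix is built from a weighted graph, a maximum-weight spanning tree of that graph serves as the support graph, and its regularized Laplacian is factorized once and supplied to conjugate gradients as a preconditioner. The scalar (non-block) setting, the random geometric graph, and the regularization constant are simplifications of the paper's block-structured construction, not its preconditioner.

```python
# Spanning-tree (support-graph-style) preconditioner for CG on a Laplacian-like SPD matrix.
# Scalar toy version of the block setting; the graph and constants are illustrative choices.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.sparse.linalg import cg, splu, LinearOperator

rng = np.random.default_rng(0)
n = 400
pts = rng.uniform(size=(n, 2))

# weighted adjacency of a random geometric graph (edges between nearby "cells")
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
W = np.where((d < 0.12) & (d > 0), np.exp(-d / 0.05), 0.0)
A_adj = sp.csr_matrix(W)

def laplacian_plus(adj, shift=1e-2):
    deg = np.asarray(adj.sum(axis=1)).ravel()
    return sp.diags(deg) - adj + shift * sp.identity(adj.shape[0])

A = laplacian_plus(A_adj)                       # SPD "friction" matrix
b = rng.normal(size=n)

# maximum-weight spanning tree: run MST on reciprocal weights, then keep the original weights
recip = A_adj.copy(); recip.data = 1.0 / recip.data
tree_mask = minimum_spanning_tree(recip)
tree_mask = (tree_mask + tree_mask.T) > 0
T_adj = A_adj.multiply(tree_mask)               # support graph with the original edge weights

P = splu(sp.csc_matrix(laplacian_plus(T_adj)))  # factorize the sparse tree preconditioner once
M = LinearOperator((n, n), matvec=P.solve)

iters = {"none": 0, "tree": 0}
def counter(key):
    def cb(xk): iters[key] += 1
    return cb

x0, _ = cg(A, b, callback=counter("none"))
x1, _ = cg(A, b, M=M, callback=counter("tree"))
print("CG iterations  unpreconditioned:", iters["none"], " tree-preconditioned:", iters["tree"])
print("residuals:", np.linalg.norm(A @ x0 - b), np.linalg.norm(A @ x1 - b))
```

Factorizing the tree once and reusing it for every CG solve mirrors why support-graph preconditioners suit time-stepping cell-based simulations, where the same friction structure is solved against many right-hand sides.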
arxiv-666255
2410.04514
DAMRO: Dive into the Attention Mechanism of LVLM to Reduce Object Hallucination
<|reference_start|>DAMRO: Dive into the Attention Mechanism of LVLM to Reduce Object Hallucination: Despite the great success of Large Vision-Language Models (LVLMs), they inevitably suffer from hallucination. As we know, both the visual encoder and the Large Language Model (LLM) decoder in LVLMs are Transformer-based, allowing the model to extract visual information and generate text outputs via attention mechanisms. We find that the attention distribution of the LLM decoder on image tokens is highly consistent with that of the visual encoder, and both distributions tend to focus on particular background tokens rather than the referred objects in the image. We attribute the unexpected attention distribution to an inherent flaw in the visual encoder itself, which misguides LLMs to overemphasize the redundant information and generate object hallucinations. To address the issue, we propose DAMRO, a novel training-free strategy that $D$ives into the $A$ttention $M$echanism of LVLM to $R$educe $O$bject Hallucination. Specifically, our approach employs the classification token (CLS) of ViT to filter out high-attention outlier tokens scattered in the background and then eliminate their influence during the decoding stage. We evaluate our method on LVLMs including LLaVA-1.5, LLaVA-NeXT and InstructBLIP, using various benchmarks such as POPE, CHAIR, MME and GPT-4V Aided Evaluation. The results demonstrate that our approach significantly reduces the impact of these outlier tokens, thus effectively alleviating the hallucination of LVLMs. The code of our method will be released soon.<|reference_end|>
arxiv
@article{gong2024damro:, title={DAMRO: Dive into the Attention Mechanism of LVLM to Reduce Object Hallucination}, author={Xuan Gong, Tianshi Ming, Xinpeng Wang, Zhihua Wei}, journal={arXiv preprint arXiv:2410.04514}, year={2024}, archivePrefix={arXiv}, eprint={2410.04514}, primaryClass={cs.CL cs.CV} }
gong2024damro:
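A compact PyTorch illustration of the filtering step described above: given the ViT CLS-to-patch attention, the few patches that absorb disproportionate attention are treated as outliers and removed from the visual tokens handed to the language decoder. The top-k rule, the tensor shapes, and the way tokens are dropped are assumptions; DAMRO's actual criterion and its integration into LLaVA-style decoding are more detailed.

```python
# Filter high-attention outlier visual tokens using the ViT CLS attention (hedged sketch).
# The top-k rule, shapes, and hard removal of tokens are illustrative assumptions.
import torch

def filter_outlier_tokens(visual_tokens, cls_attention, k=8):
    """
    visual_tokens: (B, N, D) patch-token features passed on to the language decoder.
    cls_attention: (B, N) attention of the ViT CLS token over the N patches
                   (e.g. head-averaged weights from the last encoder layer).
    Returns the tokens with the k highest-attention patches removed per image.
    """
    B, N, D = visual_tokens.shape
    outlier_idx = cls_attention.topk(k, dim=-1).indices                      # (B, k) outlier positions
    keep = torch.ones(B, N, dtype=torch.bool, device=visual_tokens.device)
    keep[torch.arange(B, device=visual_tokens.device).unsqueeze(1), outlier_idx] = False
    return visual_tokens[keep].view(B, N - k, D)                             # same count kept per image

# toy usage: 2 images, 16 patches, 32-dim features, spiky attention on the first patches
tokens = torch.randn(2, 16, 32)
attn = torch.softmax(torch.randn(2, 16), dim=-1)
attn[:, :3] += 0.5                                                           # fabricate "outlier" patches
print(filter_outlier_tokens(tokens, attn, k=3).shape)                        # torch.Size([2, 13, 32])
```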
arxiv-666256
2410.04518
A Reinforcement Learning Engine with Reduced Action and State Space for Scalable Cyber-Physical Optimal Response
<|reference_start|>A Reinforcement Learning Engine with Reduced Action and State Space for Scalable Cyber-Physical Optimal Response: Numerous research studies have been conducted to enhance the resilience of cyber-physical systems (CPSs) by detecting potential cyber or physical disturbances. However, the development of scalable and optimal response measures under power system contingency based on fusing cyber-physical data is still in an early stage. To address this research gap, this paper introduces a power system response engine based on reinforcement learning (RL) and role and interaction discovery (RID) techniques. RL-RID-GridResponder is designed to automatically detect the contingency and assist with the decision-making process to ensure optimal power system operation. The RL-RID-GridResponder learns via an RL-based structure and achieves enhanced scalability by integrating an RID module with reduced action and state spaces. The applicability of RL-RID-GridResponder in providing scalable and optimal responses for CPSs is demonstrated on power systems in the context of Denial of Service (DoS) attacks. Moreover, simulations are conducted on a Volt-Var regulation problem using the augmented WSCC 9-bus and augmented IEEE 24-bus systems based on fused cyber and physical data sets. The results show that the proposed RL-RID-GridResponder can provide fast and accurate responses to ensure optimal power system operation under DoS and can extend to other system contingencies such as line outages and loss of loads.<|reference_end|>
arxiv
@article{sun2024a, title={A Reinforcement Learning Engine with Reduced Action and State Space for Scalable Cyber-Physical Optimal Response}, author={Shining Sun, Khandaker Akramul Haque, Xiang Huo, Leen Al Homoud, Shamina Hossain-McKenzie, Ana Goulart, Katherine Davis}, journal={arXiv preprint arXiv:2410.04518}, year={2024}, archivePrefix={arXiv}, eprint={2410.04518}, primaryClass={eess.SY cs.SY} }
sun2024a
arxiv-666257
2410.04519
RevMUX: Data Multiplexing with Reversible Adapters for Efficient LLM Batch Inference
<|reference_start|>RevMUX: Data Multiplexing with Reversible Adapters for Efficient LLM Batch Inference: Large language models (LLMs) have brought a great breakthrough to the natural language processing (NLP) community, while raising the challenge of handling concurrent customer queries due to their high throughput demands. Data multiplexing addresses this by merging multiple inputs into a single composite input, allowing more efficient inference through a shared forward pass. However, as distinguishing individuals from a composite input is challenging, conventional methods typically require training the entire backbone, yet still suffer from performance degradation. In this paper, we introduce RevMUX, a parameter-efficient data multiplexing framework that incorporates a reversible design in the multiplexer, which can be reused by the demultiplexer to perform reverse operations and restore individual samples for classification. Extensive experiments on four datasets and three types of LLM backbones demonstrate the effectiveness of RevMUX for enhancing LLM inference efficiency while retaining satisfactory classification performance.<|reference_end|>
arxiv
@article{xu2024revmux:, title={RevMUX: Data Multiplexing with Reversible Adapters for Efficient LLM Batch Inference}, author={Yige Xu, Xu Guo, Zhiwei Zeng, Chunyan Miao}, journal={arXiv preprint arXiv:2410.04519}, year={2024}, archivePrefix={arXiv}, eprint={2410.04519}, primaryClass={cs.CL} }
xu2024revmux:
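The reversible-adapter idea lends itself to a short sketch: an additive coupling layer mixes two samples' representations so a shared pass can process them jointly, and the same parameters invert the mixing exactly so a demultiplexer can recover per-sample features for classification. Mixing exactly two inputs, the coupling form, and the layer sizes are assumptions for illustration; RevMUX's actual composite-input construction and adapters differ.

```python
# Reversible additive-coupling multiplexer/demultiplexer sketch (two samples per composite).
# The coupling form and sizes are illustrative; this is not RevMUX's actual design.
import torch
import torch.nn as nn

class ReversibleMux(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
        self.g = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def mux(self, x1, x2):
        """Entangle two samples' features into a composite pair (y1, y2)."""
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def demux(self, y1, y2):
        """Exactly invert mux() with the same parameters."""
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2

# round-trip check on random "backbone" features
mux = ReversibleMux(dim=128)
x1, x2 = torch.randn(4, 128), torch.randn(4, 128)
y1, y2 = mux.mux(x1, x2)
r1, r2 = mux.demux(y1, y2)
print(torch.allclose(r1, x1, atol=1e-5), torch.allclose(r2, x2, atol=1e-5))  # True True
```

The point of the reversibility is parameter efficiency: the demultiplexer reuses the multiplexer's functions rather than learning a separate reconstruction network.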
arxiv-666258
2410.04520
Dynamic Post-Hoc Neural Ensemblers
<|reference_start|>Dynamic Post-Hoc Neural Ensemblers: Ensemble methods are known for enhancing the accuracy and robustness of machine learning models by combining multiple base learners. However, standard approaches like greedy or random ensembles often fall short, as they assume a constant weight across samples for the ensemble members. This can limit expressiveness and hinder performance when aggregating the ensemble predictions. In this study, we explore employing neural networks as ensemble methods, emphasizing the significance of dynamic ensembling to leverage diverse model predictions adaptively. Motivated by the risk of learning low-diversity ensembles, we propose regularizing the model by randomly dropping base model predictions during the training. We demonstrate this approach lower bounds the diversity within the ensemble, reducing overfitting and improving generalization capabilities. Our experiments showcase that the dynamic neural ensemblers yield competitive results compared to strong baselines in computer vision, natural language processing, and tabular data.<|reference_end|>
arxiv
@article{arango2024dynamic, title={Dynamic Post-Hoc Neural Ensemblers}, author={Sebastian Pineda Arango, Maciej Janowski, Lennart Purucker, Arber Zela, Frank Hutter, Josif Grabocka}, journal={arXiv preprint arXiv:2410.04520}, year={2024}, archivePrefix={arXiv}, eprint={2410.04520}, primaryClass={cs.LG} }
arango2024dynamic
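A minimal PyTorch sketch of the two ingredients named in the abstract above: per-sample ensemble weights produced by a small network, and random dropping of base-model predictions during training so the learned combination does not collapse onto a few members. The weighting network, the dropout rate, and the input format (stacked class probabilities) are assumptions for illustration, not the paper's architecture.

```python
# Dynamic post-hoc ensembler sketch: per-sample weights over base-model predictions,
# with member predictions randomly dropped during training. Sizes and rates are illustrative.
import torch
import torch.nn as nn

class DynamicEnsembler(nn.Module):
    def __init__(self, n_models, n_classes, hidden=64, drop_prob=0.3):
        super().__init__()
        self.drop_prob = drop_prob
        self.weight_net = nn.Sequential(
            nn.Linear(n_models * n_classes, hidden), nn.ReLU(), nn.Linear(hidden, n_models)
        )

    def forward(self, base_probs):                         # base_probs: (B, M, C) class probabilities
        B, M, _ = base_probs.shape
        if self.training:                                  # randomly drop member predictions per sample
            keep = (torch.rand(B, M, device=base_probs.device) > self.drop_prob).float()
            all_dropped = keep.sum(dim=1, keepdim=True) == 0
            keep = torch.where(all_dropped, torch.ones_like(keep), keep)   # never drop every member
        else:
            keep = torch.ones(B, M, device=base_probs.device)
        masked = base_probs * keep.unsqueeze(-1)           # dropped members contribute zeros
        logits = self.weight_net(masked.flatten(1))        # per-sample member scores
        logits = logits.masked_fill(keep == 0, float("-inf"))
        weights = torch.softmax(logits, dim=-1)            # dynamic, per-sample ensemble weights
        return (weights.unsqueeze(-1) * base_probs).sum(dim=1)   # (B, C) ensembled prediction

# usage with random "base model" outputs for 3 members and 5 classes
ens = DynamicEnsembler(n_models=3, n_classes=5)
base_probs = torch.softmax(torch.randn(8, 3, 5), dim=-1)
print(ens(base_probs).shape)                               # torch.Size([8, 5])
```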
arxiv-666259
2410.04521
MC-CoT: A Modular Collaborative CoT Framework for Zero-shot Medical-VQA with LLM and MLLM Integration
<|reference_start|>MC-CoT: A Modular Collaborative CoT Framework for Zero-shot Medical-VQA with LLM and MLLM Integration: In recent advancements, multimodal large language models (MLLMs) have been fine-tuned on specific medical image datasets to address medical visual question answering (Med-VQA) tasks. However, this common approach of task-specific fine-tuning is costly and necessitates separate models for each downstream task, limiting the exploration of zero-shot capabilities. In this paper, we introduce MC-CoT, a modular cross-modal collaboration Chain-of-Thought (CoT) framework designed to enhance the zero-shot performance of MLLMs in Med-VQA by leveraging large language models (LLMs). MC-CoT improves reasoning and information extraction by integrating medical knowledge and task-specific guidance, where LLM provides various complex medical reasoning chains and MLLM provides various observations of medical images based on instructions of the LLM. Our experiments on datasets such as SLAKE, VQA-RAD, and PATH-VQA show that MC-CoT surpasses standalone MLLMs and various multimodality CoT frameworks in recall rate and accuracy. These findings highlight the importance of incorporating background information and detailed guidance in addressing complex zero-shot Med-VQA tasks.<|reference_end|>
arxiv
@article{wei2024mc-cot:, title={MC-CoT: A Modular Collaborative CoT Framework for Zero-shot Medical-VQA with LLM and MLLM Integration}, author={Lai Wei, Wenkai Wang, Xiaoyu Shen, Yu Xie, Zhihao Fan, Xiaojin Zhang, Zhongyu Wei, Wei Chen}, journal={arXiv preprint arXiv:2410.04521}, year={2024}, archivePrefix={arXiv}, eprint={2410.04521}, primaryClass={cs.CV} }
wei2024mc-cot:
arxiv-666260
2410.04523
Semi-Markovian Planning to Coordinate Aerial and Maritime Medical Evacuation Platforms
<|reference_start|>Semi-Markovian Planning to Coordinate Aerial and Maritime Medical Evacuation Platforms: The transfer of patients between two aircraft using an underway watercraft increases medical evacuation reach and flexibility in maritime environments. The selection of any one of multiple underway watercraft for patient exchange is complicated by participating aircraft utilization history and by a participating watercraft's position and velocity. The selection problem is modeled as a semi-Markov decision process with an action space including both fixed land and moving watercraft exchange points. Monte Carlo tree search with root parallelization is used to select optimal exchange points and determine aircraft dispatch times. Model parameters are varied in simulation to identify representative scenarios where watercraft exchange points reduce incident response times. We find that an optimal policy with watercraft exchange points outperforms an optimal policy without watercraft exchange points and a greedy policy by 35% and 40%, respectively. In partnership with the United States Army, we deploy for the first time the watercraft exchange point by executing a mock patient transfer with a manikin between two HH-60M medical evacuation helicopters and an underway Army Logistic Support Vessel south of the Hawaiian island of Oahu. Both helicopters were dispatched in accordance with our optimized decision strategy.<|reference_end|>
arxiv
@article{al-husseini2024semi-markovian, title={Semi-Markovian Planning to Coordinate Aerial and Maritime Medical Evacuation Platforms}, author={Mahdi Al-Husseini, Kyle H. Wray, and Mykel J. Kochenderfer}, journal={arXiv preprint arXiv:2410.04523}, year={2024}, archivePrefix={arXiv}, eprint={2410.04523}, primaryClass={cs.AI} }
al-husseini2024semi-markovian
arxiv-666261
2410.04524
Towards Secure Tuning: Mitigating Security Risks Arising from Benign Instruction Fine-Tuning
<|reference_start|>Towards Secure Tuning: Mitigating Security Risks Arising from Benign Instruction Fine-Tuning: Instruction Fine-Tuning (IFT) has become an essential method for adapting base Large Language Models (LLMs) into variants for professional and private use. However, researchers have raised concerns over a significant decrease in LLMs' security following IFT, even when the IFT process involves entirely benign instructions (termed Benign IFT). Our study represents a pioneering effort to mitigate the security risks arising from Benign IFT. Specifically, we conduct a Module Robustness Analysis, aiming to investigate how LLMs' internal modules contribute to their security. Based on our analysis, we propose a novel IFT strategy, called the Modular Layer-wise Learning Rate (ML-LR) strategy. In our analysis, we implement a simple security feature classifier that serves as a proxy to measure the robustness of modules (e.g. $Q$/$K$/$V$, etc.). Our findings reveal that the module robustness shows clear patterns, varying regularly with the module type and the layer depth. Leveraging these insights, we develop a proxy-guided search algorithm to identify a robust subset of modules, termed Mods$_{Robust}$. During IFT, the ML-LR strategy employs differentiated learning rates for Mods$_{Robust}$ and the remaining modules. Our experimental results show that in security assessments, the application of our ML-LR strategy significantly mitigates the rise in harmfulness of LLMs following Benign IFT. Notably, our ML-LR strategy has little impact on the usability or expertise of LLMs following Benign IFT. Furthermore, we have conducted comprehensive analyses to verify the soundness and flexibility of our ML-LR strategy.<|reference_end|>
arxiv
@article{du2024towards, title={Towards Secure Tuning: Mitigating Security Risks Arising from Benign Instruction Fine-Tuning}, author={Yanrui Du, Sendong Zhao, Jiawei Cao, Ming Ma, Danyang Zhao, Fenglei Fan, Ting Liu, Bing Qin}, journal={arXiv preprint arXiv:2410.04524}, year={2024}, archivePrefix={arXiv}, eprint={2410.04524}, primaryClass={cs.CL} }
du2024towards
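The ML-LR idea maps directly onto optimizer parameter groups; the sketch below splits a model's parameters by whether their module name falls in a chosen "robust" subset and assigns the two groups different learning rates during fine-tuning. The toy model, the name-matching rule, and the specific learning-rate values are placeholders — the paper selects Mods$_{Robust}$ with a proxy-guided search, which is not reproduced here.

```python
# Modular layer-wise learning rates: different LRs for a "robust" module subset vs. the rest.
# The toy model, name patterns, and LR values are placeholders for illustration only.
import torch
import torch.nn as nn

model = nn.ModuleDict({
    "q_proj": nn.Linear(64, 64),
    "k_proj": nn.Linear(64, 64),
    "v_proj": nn.Linear(64, 64),
    "mlp": nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64)),
})

# suppose a proxy analysis marked the attention projections as the robust subset
mods_robust = ("q_proj", "k_proj", "v_proj")

robust_params, other_params = [], []
for name, param in model.named_parameters():
    (robust_params if name.startswith(mods_robust) else other_params).append(param)

optimizer = torch.optim.AdamW([
    {"params": robust_params, "lr": 2e-6},   # illustrative choice of a smaller rate for this subset
    {"params": other_params, "lr": 2e-5},    # ordinary fine-tuning rate elsewhere
], weight_decay=0.01)

for group in optimizer.param_groups:
    print(len(group["params"]), "parameter tensors at lr", group["lr"])
```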
arxiv-666262
2410.04525
Look Around and Find Out: OOD Detection with Relative Angles
<|reference_start|>Look Around and Find Out: OOD Detection with Relative Angles: Deep learning systems deployed in real-world applications often encounter data that is different from their in-distribution (ID). A reliable system should ideally abstain from making decisions in this out-of-distribution (OOD) setting. Existing state-of-the-art methods primarily focus on feature distances, such as k-th nearest neighbors and distances to decision boundaries, either overlooking or ineffectively using in-distribution statistics. In this work, we propose a novel angle-based metric for OOD detection that is computed relative to the in-distribution structure. We demonstrate that the angles between feature representations and decision boundaries, viewed from the mean of in-distribution features, serve as an effective discriminative factor between ID and OOD data. Our method achieves state-of-the-art performance on CIFAR-10 and ImageNet benchmarks, reducing FPR95 by 0.88% and 7.74% respectively. Our score function is compatible with existing feature space regularization techniques, enhancing performance. Additionally, its scale-invariance property enables creating an ensemble of models for OOD detection via simple score summation.<|reference_end|>
arxiv
@article{demirel2024look, title={Look Around and Find Out: OOD Detection with Relative Angles}, author={Berker Demirel, Marco Fumero, Francesco Locatello}, journal={arXiv preprint arXiv:2410.04525}, year={2024}, archivePrefix={arXiv}, eprint={2410.04525}, primaryClass={cs.LG cs.CV} }
demirel2024look
arxiv-666263
2410.04526
FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question Answering
<|reference_start|>FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question Answering: In this paper, we introduce FAMMA, an open-source benchmark for financial multilingual multimodal question answering (QA). Our benchmark aims to evaluate the abilities of multimodal large language models (MLLMs) in answering questions that require advanced financial knowledge and sophisticated reasoning. It includes 1,758 meticulously collected question-answer pairs from university textbooks and exams, spanning 8 major subfields in finance including corporate finance, asset management, and financial engineering. Some of the QA pairs are written in Chinese or French, while a majority of them are in English. These questions are presented in a mixed format combining text and heterogeneous image types, such as charts, tables, and diagrams. We evaluate a range of state-of-the-art MLLMs on our benchmark, and our analysis shows that FAMMA poses a significant challenge for these models. Even advanced systems like GPT-4o and Claude-3.5-Sonnet achieve only 42\% accuracy. Additionally, the open-source Qwen2-VL lags notably behind its proprietary counterparts. Lastly, we explore GPT o1-style reasoning chains to enhance the models' reasoning capabilities, which significantly improve error correction. Our FAMMA benchmark will facilitate future research to develop expert systems in financial QA. The leaderboard is available at https://famma-bench.github.io/famma/ .<|reference_end|>
arxiv
@article{xue2024famma:, title={FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question Answering}, author={Siqiao Xue, Tingting Chen, Fan Zhou, Qingyang Dai, Zhixuan Chu, Hongyuan Mei}, journal={arXiv preprint arXiv:2410.04526}, year={2024}, archivePrefix={arXiv}, eprint={2410.04526}, primaryClass={cs.CL cs.AI} }
xue2024famma:
arxiv-666264
2410.04527
Casablanca: Data and Models for Multidialectal Arabic Speech Recognition
<|reference_start|>Casablanca: Data and Models for Multidialectal Arabic Speech Recognition: In spite of the recent progress in speech processing, the majority of world languages and dialects remain uncovered. This situation only furthers an already wide technological divide, thereby hindering technological and socioeconomic inclusion. This challenge is largely due to the absence of datasets that can empower diverse speech systems. In this paper, we seek to mitigate this obstacle for a number of Arabic dialects by presenting Casablanca, a large-scale community-driven effort to collect and transcribe a multi-dialectal Arabic dataset. The dataset covers eight dialects: Algerian, Egyptian, Emirati, Jordanian, Mauritanian, Moroccan, Palestinian, and Yemeni, and includes annotations for transcription, gender, dialect, and code-switching. We also develop a number of strong baselines exploiting Casablanca. The project page for Casablanca is accessible at: www.dlnlp.ai/speech/casablanca.<|reference_end|>
arxiv
@article{talafha2024casablanca:, title={Casablanca: Data and Models for Multidialectal Arabic Speech Recognition}, author={Bashar Talafha, Karima Kadaoui, Samar Mohamed Magdy, Mariem Habiboullah, Chafei Mohamed Chafei, Ahmed Oumar El-Shangiti, Hiba Zayed, Mohamedou cheikh tourad, Rahaf Alhamouri, Rwaa Assi, Aisha Alraeesi, Hour Mohamed, Fakhraddin Alwajih, Abdelrahman Mohamed, Abdellah El Mekki, El Moatez Billah Nagoudi, Benelhadj Djelloul Mama Saadia, Hamzah A. Alsayadi, Walid Al-Dhabyani, Sara Shatnawi, Yasir Ech-Chammakhy, Amal Makouar, Yousra Berrachedi, Mustafa Jarrar, Shady Shehata, Ismail Berrada, Muhammad Abdul-Mageed}, journal={arXiv preprint arXiv:2410.04527}, year={2024}, archivePrefix={arXiv}, eprint={2410.04527}, primaryClass={cs.CL} }
talafha2024casablanca:
arxiv-666265
2410.04528
Round Trip Time Estimation Utilizing Cyclic Shift of Uplink Reference Signal
<|reference_start|>Round Trip Time Estimation Utilizing Cyclic Shift of Uplink Reference Signal: In the context of fifth-generation new radio (5G NR) technology, it is not possible to directly obtain an absolute uplink (UL) channel impulse response (CIR) at the base station (gNB) from a user equipment (UE). The UL CIR obtained through the sounding reference signal (SRS) is always time-shifted by the timing advance (TA) applied at the UE. The TA is crucial for maintaining UL synchronization, and transmitting SRS without applying the TA will result in interference. In this work, we propose a new method to obtain absolute UL CIR from a UE and then use it to estimate the round trip time (RTT) at the gNB. This method requires enhancing the current 5G protocol stack with a new Zadoff-Chu (ZC) based wideband uplink reference signal (URS). Capitalizing on the cyclic shift property of the URS sequence, we can obtain the RTT with a significant reduction in overhead and latency compared to existing schemes. The proposed method is experimentally validated using a real-world testbed based on OpenAirInterface (OAI).<|reference_end|>
arxiv
@article{gangula2024round, title={Round Trip Time Estimation Utilizing Cyclic Shift of Uplink Reference Signal}, author={Rajeev Gangula, Tommaso Melodia, Rakesh Mundlamuri and Florian Kaltenberger}, journal={arXiv preprint arXiv:2410.04528}, year={2024}, archivePrefix={arXiv}, eprint={2410.04528}, primaryClass={cs.IT eess.SP math.IT} }
gangula2024round
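Editor's note: a minimal sketch of the cyclic-shift principle behind this entry's reference-signal-based ranging (illustration only, not the paper's 5G NR/OAI implementation; the sequence length, root index, noise level, and integer delay below are arbitrary choices): a Zadoff-Chu sequence is cyclically shifted to stand in for the round-trip delay, and the shift is recovered from the peak of the circular cross-correlation.

```python
import numpy as np

def zadoff_chu(root: int, length: int) -> np.ndarray:
    """Zadoff-Chu sequence of odd (ideally prime) length with the given root index."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

def estimate_cyclic_shift(reference: np.ndarray, received: np.ndarray) -> int:
    """Estimate an integer cyclic shift via FFT-based circular cross-correlation."""
    corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(reference)))
    return int(np.argmax(np.abs(corr)))

if __name__ == "__main__":
    N, root, true_delay = 139, 25, 17                  # illustrative parameters
    ref = zadoff_chu(root, N)
    rx = np.roll(ref, true_delay)                      # delay modeled as a cyclic shift
    rx = rx + 0.05 * (np.random.randn(N) + 1j * np.random.randn(N))  # additive noise
    print("estimated delay (samples):", estimate_cyclic_shift(ref, rx))
```

Because Zadoff-Chu sequences have a constant-magnitude DFT, the correlation peak is sharp, which is why the argmax reliably returns the applied shift even with moderate noise.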
arxiv-666266
2410.04529
In-Place Panoptic Radiance Field Segmentation with Perceptual Prior for 3D Scene Understanding
<|reference_start|>In-Place Panoptic Radiance Field Segmentation with Perceptual Prior for 3D Scene Understanding: Accurate 3D scene representation and panoptic understanding are essential for applications such as virtual reality, robotics, and autonomous driving. However, challenges persist with existing methods, including precise 2D-to-3D mapping, handling complex scene characteristics like boundary ambiguity and varying scales, and mitigating noise in panoptic pseudo-labels. This paper introduces a novel perceptual-prior-guided 3D scene representation and panoptic understanding method, which reformulates panoptic understanding within neural radiance fields as a linear assignment problem involving 2D semantics and instance recognition. Perceptual information from pre-trained 2D panoptic segmentation models is incorporated as prior guidance, thereby synchronizing the learning processes of appearance, geometry, and panoptic understanding within neural radiance fields. An implicit scene representation and understanding model is developed to enhance generalization across indoor and outdoor scenes by extending the scale-encoded cascaded grids within a reparameterized domain distillation framework. This model effectively manages complex scene attributes and generates 3D-consistent scene representations and panoptic understanding outcomes for various scenes. Experiments and ablation studies under challenging conditions, including synthetic and real-world scenes, demonstrate the proposed method's effectiveness in enhancing 3D scene representation and panoptic segmentation accuracy.<|reference_end|>
arxiv
@article{li2024in-place, title={In-Place Panoptic Radiance Field Segmentation with Perceptual Prior for 3D Scene Understanding}, author={Shenghao Li}, journal={arXiv preprint arXiv:2410.04529}, year={2024}, archivePrefix={arXiv}, eprint={2410.04529}, primaryClass={cs.CV} }
li2024in-place
arxiv-666267
2410.04534
UniMuMo: Unified Text, Music and Motion Generation
<|reference_start|>UniMuMo: Unified Text, Music and Motion Generation: We introduce UniMuMo, a unified multimodal model capable of taking arbitrary text, music, and motion data as input conditions to generate outputs across all three modalities. To address the lack of time-synchronized data, we align unpaired music and motion data based on rhythmic patterns to leverage existing large-scale music-only and motion-only datasets. By converting music, motion, and text into token-based representation, our model bridges these modalities through a unified encoder-decoder transformer architecture. To support multiple generation tasks within a single framework, we introduce several architectural improvements. We propose encoding motion with a music codebook, mapping motion into the same feature space as music. We introduce a music-motion parallel generation scheme that unifies all music and motion generation tasks into a single transformer decoder architecture with a single training task of music-motion joint generation. Moreover, the model is designed by fine-tuning existing pre-trained single-modality models, significantly reducing computational demands. Extensive experiments demonstrate that UniMuMo achieves competitive results on all unidirectional generation benchmarks across music, motion, and text modalities. Quantitative results are available in the \href{https://hanyangclarence.github.io/unimumo_demo/}{project page}.<|reference_end|>
arxiv
@article{yang2024unimumo:, title={UniMuMo: Unified Text, Music and Motion Generation}, author={Han Yang, Kun Su, Yutong Zhang, Jiaben Chen, Kaizhi Qian, Gaowen Liu, Chuang Gan}, journal={arXiv preprint arXiv:2410.04534}, year={2024}, archivePrefix={arXiv}, eprint={2410.04534}, primaryClass={cs.SD cs.CV cs.GR cs.LG cs.MM eess.AS} }
yang2024unimumo:
arxiv-666268
2410.04536
Multi-LED Classification as Pretext For Robot Heading Estimation
<|reference_start|>Multi-LED Classification as Pretext For Robot Heading Estimation: We propose a self-supervised approach for visual robot detection and heading estimation by learning to estimate the states (OFF or ON) of four independent robot-mounted LEDs. Experimental results show a median image-space position error of 14 px and relative heading MAE of 17 degrees, versus a supervised upper bound scoring 10 px and 8 degrees, respectively.<|reference_end|>
arxiv
@article{carlotti2024multi-led, title={Multi-LED Classification as Pretext For Robot Heading Estimation}, author={Nicholas Carlotti, Mirko Nava, Alessandro Giusti}, journal={arXiv preprint arXiv:2410.04536}, year={2024}, archivePrefix={arXiv}, eprint={2410.04536}, primaryClass={cs.RO} }
carlotti2024multi-led
arxiv-666269
2410.04539
YanTian: An Application Platform for AI Global Weather Forecasting Models
<|reference_start|>YanTian: An Application Platform for AI Global Weather Forecasting Models: To promote the practical application of AI Global Weather Forecasting Models (AIGWFM), we have developed an adaptable application platform named 'YanTian'. This platform enhances existing open-source AIGWFM with a suite of capability-enhancing modules and is built on a "loosely coupled" plug-in architecture. The goal of 'YanTian' is to address the limitations of current open-source AIGWFM in operational application, including improving local forecast accuracy, providing spatially high-resolution forecasts, increasing the density of forecast intervals, and generating diverse products with the provision of AIGC capabilities. 'YanTian' also provides a simple, visualized user interface, allowing meteorologists to easily access both the basic and extended capabilities of the platform by simply configuring the platform UI. Users do not need to possess complex artificial intelligence knowledge or coding techniques. Additionally, 'YanTian' can be deployed on a PC with GPUs. We hope 'YanTian' can facilitate the widespread operational adoption of AIGWFMs.<|reference_end|>
arxiv
@article{cheng2024yantian:, title={YanTian: An Application Platform for AI Global Weather Forecasting Models}, author={Wencong Cheng, Jiangjiang Xia, Chang Qu, Zhigang Wang, Xinyi Zeng, Fang Huang, Tianye Li}, journal={arXiv preprint arXiv:2410.04539}, year={2024}, archivePrefix={arXiv}, eprint={2410.04539}, primaryClass={physics.ao-ph cs.LG} }
cheng2024yantian:
arxiv-666270
2410.04540
Distribution Grids May Be a Barrier To Residential Electrification
<|reference_start|>Distribution Grids May Be a Barrier To Residential Electrification: Replacing fossil-fueled appliances and vehicles with electric alternatives can reduce greenhouse gas emissions and air pollution in many settings. However, residential electrification can raise electricity demand beyond the safe limits of electrical infrastructure, increasing the risk of blackouts or requiring grid reinforcement that can be slow and expensive. Here, we estimate the physical and economic impacts on distribution grids of electrifying all housing and personal vehicles in each county of the lower 48 United States. We find that space heating is the main driver of grid impacts, with the coldest regions seeing demand peaks up to three times higher than today's peaks. Accommodating electrification of all housing and personal vehicles could require up to 312 GW of distribution grid reinforcement nationally, at a cost of $183 to $415 billion, or $1,500 to $3,400 per household (95% confidence intervals). However, demand-side management can mitigate demand peaks, reducing grid reinforcement costs by up to 92%.<|reference_end|>
arxiv
@article{priyadarshan2024distribution, title={Distribution Grids May Be a Barrier To Residential Electrification}, author={Priyadarshan and Constance Crozier and Kyri Baker and Kevin Kircher}, journal={arXiv preprint arXiv:2410.04540}, year={2024}, archivePrefix={arXiv}, eprint={2410.04540}, primaryClass={eess.SY cs.SY} }
priyadarshan2024distribution
arxiv-666271
2410.04541
On Evaluating LLMs' Capabilities as Functional Approximators: A Bayesian Perspective
<|reference_start|>On Evaluating LLMs' Capabilities as Functional Approximators: A Bayesian Perspective: Recent works have successfully applied Large Language Models (LLMs) to function modeling tasks. However, the reasons behind this success remain unclear. In this work, we propose a new evaluation framework to comprehensively assess LLMs' function modeling abilities. By adopting a Bayesian perspective of function modeling, we discover that LLMs are relatively weak in understanding patterns in raw data, but excel at utilizing prior knowledge about the domain to develop a strong understanding of the underlying function. Our findings offer new insights about the strengths and limitations of LLMs in the context of function modeling.<|reference_end|>
arxiv
@article{siddiqui2024on, title={On Evaluating LLMs' Capabilities as Functional Approximators: A Bayesian Perspective}, author={Shoaib Ahmed Siddiqui, Yanzhi Chen, Juyeon Heo, Menglin Xia, Adrian Weller}, journal={arXiv preprint arXiv:2410.04541}, year={2024}, archivePrefix={arXiv}, eprint={2410.04541}, primaryClass={cs.LG cs.AI} }
siddiqui2024on
arxiv-666272
2410.04542
Generative Flows on Synthetic Pathway for Drug Design
<|reference_start|>Generative Flows on Synthetic Pathway for Drug Design: Generative models in drug discovery have recently gained attention as efficient alternatives to brute-force virtual screening. However, most existing models do not account for synthesizability, limiting their practical use in real-world scenarios. In this paper, we propose RxnFlow, which sequentially assembles molecules using predefined molecular building blocks and chemical reaction templates to constrain the synthetic chemical pathway. We then train on this sequential generating process with the objective of generative flow networks (GFlowNets) to generate both highly rewarded and diverse molecules. To mitigate the large action space of synthetic pathways in GFlowNets, we implement a novel action space subsampling method. This enables RxnFlow to learn generative flows over extensive action spaces comprising combinations of 1.2 million building blocks and 71 reaction templates without significant computational overhead. Additionally, RxnFlow can employ modified or expanded action spaces for generation without retraining, allowing for the introduction of additional objectives or the incorporation of newly discovered building blocks. We experimentally demonstrate that RxnFlow outperforms existing reaction-based and fragment-based models in pocket-specific optimization across various target pockets. Furthermore, RxnFlow achieves state-of-the-art performance on CrossDocked2020 for pocket-conditional generation, with an average Vina score of -8.85kcal/mol and 34.8% synthesizability.<|reference_end|>
arxiv
@article{seo2024generative, title={Generative Flows on Synthetic Pathway for Drug Design}, author={Seonghwan Seo, Minsu Kim, Tony Shen, Martin Ester, Jinkyoo Park, Sungsoo Ahn, Woo Youn Kim}, journal={arXiv preprint arXiv:2410.04542}, year={2024}, archivePrefix={arXiv}, eprint={2410.04542}, primaryClass={q-bio.BM cs.LG} }
seo2024generative
arxiv-666273
2410.04543
Pullback Flow Matching on Data Manifolds
<|reference_start|>Pullback Flow Matching on Data Manifolds: We propose Pullback Flow Matching (PFM), a novel framework for generative modeling on data manifolds. Unlike existing methods that assume or learn restrictive closed-form manifold mappings for training Riemannian Flow Matching (RFM) models, PFM leverages pullback geometry and isometric learning to preserve the underlying manifold's geometry while enabling efficient generation and precise interpolation in latent space. This approach not only facilitates closed-form mappings on the data manifold but also allows for designable latent spaces, using assumed metrics on both data and latent manifolds. By enhancing isometric learning through Neural ODEs and proposing a scalable training objective, we achieve a latent space more suitable for interpolation, leading to improved manifold learning and generative performance. We demonstrate PFM's effectiveness through applications in synthetic data, protein dynamics and protein sequence data, generating novel proteins with specific properties. This method shows strong potential for drug discovery and materials science, where generating novel samples with specific properties is of great interest.<|reference_end|>
arxiv
@article{de kruiff2024pullback, title={Pullback Flow Matching on Data Manifolds}, author={Friso de Kruiff, Erik Bekkers, Ozan Öktem, Carola-Bibiane Schönlieb and Willem Diepeveen}, journal={arXiv preprint arXiv:2410.04543}, year={2024}, archivePrefix={arXiv}, eprint={2410.04543}, primaryClass={cs.LG cs.AI math.DG q-bio.BM} }
de kruiff2024pullback
arxiv-666274
2410.04544
Fast Area-Weighted Peeling of Convex Hulls for Outlier Detection
<|reference_start|>Fast Area-Weighted Peeling of Convex Hulls for Outlier Detection: We present a novel 2D convex hull peeling algorithm for outlier detection, which repeatedly removes the point on the hull that decreases the hull's area the most. To find k outliers among n points, one simply peels k points. The algorithm is an efficient heuristic for exact methods, which find the k points whose removal together results in the smallest convex hull. Our algorithm runs in O(n log n) time using O(n) space for any choice of k. This is a significant speedup compared to the fastest exact algorithms, which run in O(n^2 log n + (n - k)^3) time using O(n log n + (n - k)^3) space by Eppstein et al., and O(n log n + C(4k, 2k) (3k)^k n) time by Atanassov et al. Existing heuristic peeling approaches are not area-based. Instead, an approach by Harsh et al. repeatedly removes the point furthest from the mean using various distance metrics and runs in O(n log n + kn) time. Other approaches greedily peel one convex layer at a time, which is efficient when using an O(n log n) time algorithm by Chazelle to compute the convex layers. However, in many cases this fails to recover outliers. For most values of n and k, our approach is the fastest and first practical choice for finding outliers based on minimizing the area of the convex hull. Our algorithm also generalizes to other objectives such as perimeter.<|reference_end|>
arxiv
@article{sridhar2024fast, title={Fast Area-Weighted Peeling of Convex Hulls for Outlier Detection}, author={Vinesh Sridhar and Rolf Svenning}, journal={In Proceedings of the 36th Canadian Conference on Computational Geometry, pages 233-240, 2024}, year={2024}, archivePrefix={arXiv}, eprint={2410.04544}, primaryClass={cs.CG} }
sridhar2024fast
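Editor's note: the area-weighted peeling objective of this entry can be illustrated with a brute-force sketch (a quadratic-time recomputation approach, not the authors' O(n log n) algorithm; all function names are illustrative): repeatedly delete the hull vertex whose removal shrinks the hull area the most.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def convex_hull(points: List[Point]) -> List[Point]:
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(points: List[Point]) -> float:
    """Shoelace area of the convex hull of the given points."""
    h = convex_hull(points)
    if len(h) < 3:
        return 0.0
    s = sum(h[i][0] * h[(i + 1) % len(h)][1] - h[(i + 1) % len(h)][0] * h[i][1]
            for i in range(len(h)))
    return abs(s) / 2.0

def peel_outliers(points: List[Point], k: int) -> List[Point]:
    """Greedily remove k points, each time the one whose removal shrinks hull area most."""
    remaining = list(points)
    outliers = []
    for _ in range(k):
        base = hull_area(remaining)
        # Only hull vertices can change the area when removed.
        candidates = convex_hull(remaining)
        best = max(candidates,
                   key=lambda p: base - hull_area([q for q in remaining if q != p]))
        outliers.append(best)
        remaining.remove(best)
    return outliers
```

Calling peel_outliers(points, k) returns the k peeled points; only hull vertices are tried as candidates, since removing an interior point cannot change the hull.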
arxiv-666275
2410.04545
How Does the Disclosure of AI Assistance Affect the Perceptions of Writing?
<|reference_start|>How Does the Disclosure of AI Assistance Affect the Perceptions of Writing?: Recent advances in generative AI technologies like large language models have boosted the incorporation of AI assistance in writing workflows, leading to the rise of a new paradigm of human-AI co-creation in writing. To understand how people perceive writings that are produced under this paradigm, in this paper, we conduct an experimental study to understand whether and how the disclosure of the level and type of AI assistance in the writing process would affect people's perceptions of the writing on various aspects, including their evaluation on the quality of the writing and their ranking of different writings. Our results suggest that disclosing the AI assistance in the writing process, especially if AI has provided assistance in generating new content, decreases the average quality ratings for both argumentative essays and creative stories. This decrease in the average quality ratings often comes with an increased level of variations in different individuals' quality evaluations of the same writing. Indeed, factors such as an individual's writing confidence and familiarity with AI writing assistants are shown to moderate the impact of AI assistance disclosure on their writing quality evaluations. We also find that disclosing the use of AI assistance may significantly reduce the proportion of writings produced with AI's content generation assistance among the top-ranked writings.<|reference_end|>
arxiv
@article{li2024how, title={How Does the Disclosure of AI Assistance Affect the Perceptions of Writing?}, author={Zhuoyan Li, Chen Liang, Jing Peng, Ming Yin}, journal={arXiv preprint arXiv:2410.04545}, year={2024}, archivePrefix={arXiv}, eprint={2410.04545}, primaryClass={cs.CL} }
li2024how
arxiv-666276
2410.04546
Learning De-Biased Representations for Remote-Sensing Imagery
<|reference_start|>Learning De-Biased Representations for Remote-Sensing Imagery: Remote sensing (RS) imagery, requiring specialized satellites to collect and being difficult to annotate, suffers from data scarcity and class imbalance in certain spectrums. Due to data scarcity, training any large-scale RS models from scratch is unrealistic, and the alternative is to transfer pre-trained models by fine-tuning or a more data-efficient method LoRA. Due to class imbalance, transferred models exhibit strong bias, where features of the major class dominate over those of the minor class. In this paper, we propose debLoRA, a generic training approach that works with any LoRA variants to yield debiased features. It is an unsupervised learning approach that can diversify minor class features based on the shared attributes with major classes, where the attributes are obtained by a simple step of clustering. To evaluate it, we conduct extensive experiments in two transfer learning scenarios in the RS domain: from natural to optical RS images, and from optical RS to multi-spectrum RS images. We perform object classification and oriented object detection tasks on the optical RS dataset DOTA and the SAR dataset FUSRS. Results show that our debLoRA consistently surpasses prior arts across these RS adaptation settings, yielding up to 3.3 and 4.7 percentage points gains on the tail classes for natural to optical RS and optical RS to multi-spectrum RS adaptations, respectively, while preserving the performance on head classes, substantiating its efficacy and adaptability.<|reference_end|>
arxiv
@article{tian2024learning, title={Learning De-Biased Representations for Remote-Sensing Imagery}, author={Zichen Tian, Zhaozheng Chen, Qianru Sun}, journal={arXiv preprint arXiv:2410.04546}, year={2024}, archivePrefix={arXiv}, eprint={2410.04546}, primaryClass={cs.CV} }
tian2024learning
arxiv-666277
2410.04547
Distributed Detection of Adversarial Attacks for Resilient Cooperation of Multi-Robot Systems with Intermittent Communication
<|reference_start|>Distributed Detection of Adversarial Attacks for Resilient Cooperation of Multi-Robot Systems with Intermittent Communication: This paper concerns the consensus and formation of a network of mobile autonomous agents in adversarial settings where a group of malicious (compromised) agents are subject to deception attacks. In addition, the communication network is arbitrarily time-varying and subject to intermittent connections, possibly imposed by denial-of-service (DoS) attacks. We provide explicit bounds for network connectivity in an integral sense, enabling the characterization of the system's resilience to specific classes of adversarial attacks. We also show that under the condition of connectivity in an integral sense uniformly in time, the system is finite-gain $\mathcal{L}_{p}$ stable and uniformly exponentially fast consensus and formation are achievable, provided malicious agents are detected and isolated from the network. We present a distributed and reconfigurable framework with theoretical guarantees for detecting malicious agents, allowing for the resilient cooperation of the remaining cooperative agents. Simulation studies are provided to illustrate the theoretical findings.<|reference_end|>
arxiv
@article{bahrami2024distributed, title={Distributed Detection of Adversarial Attacks for Resilient Cooperation of Multi-Robot Systems with Intermittent Communication}, author={Rayan Bahrami and Hamidreza Jafarnejadsani}, journal={arXiv preprint arXiv:2410.04547}, year={2024}, archivePrefix={arXiv}, eprint={2410.04547}, primaryClass={cs.MA cs.RO cs.SY eess.SY} }
bahrami2024distributed
arxiv-666278
2410.04551
Social Choice for Heterogeneous Fairness in Recommendation
<|reference_start|>Social Choice for Heterogeneous Fairness in Recommendation: Algorithmic fairness in recommender systems requires close attention to the needs of a diverse set of stakeholders that may have competing interests. Previous work in this area has often been limited by fixed, single-objective definitions of fairness, built into algorithms or optimization criteria that are applied to a single fairness dimension or, at most, applied identically across dimensions. These narrow conceptualizations limit the ability to adapt fairness-aware solutions to the wide range of stakeholder needs and fairness definitions that arise in practice. Our work approaches recommendation fairness from the standpoint of computational social choice, using a multi-agent framework. In this paper, we explore the properties of different social choice mechanisms and demonstrate the successful integration of multiple, heterogeneous fairness definitions across multiple data sets.<|reference_end|>
arxiv
@article{aird2024social, title={Social Choice for Heterogeneous Fairness in Recommendation}, author={Amanda Aird, Elena Štefancová, Cassidy All, Amy Voida, Martin Homola, Nicholas Mattei, Robin Burke}, journal={arXiv preprint arXiv:2410.04551}, year={2024}, archivePrefix={arXiv}, eprint={2410.04551}, primaryClass={cs.IR cs.CY cs.LG} }
aird2024social
arxiv-666279
2410.04552
Modeling Social Media Recommendation Impacts Using Academic Networks: A Graph Neural Network Approach
<|reference_start|>Modeling Social Media Recommendation Impacts Using Academic Networks: A Graph Neural Network Approach: The widespread use of social media has highlighted potential negative impacts on society and individuals, largely driven by recommendation algorithms that shape user behavior and social dynamics. Understanding these algorithms is essential but challenging due to the complex, distributed nature of social media networks as well as limited access to real-world data. This study proposes to use academic social networks as a proxy for investigating recommendation systems in social media. By employing Graph Neural Networks (GNNs), we develop a model that separates the prediction of academic infosphere from behavior prediction, allowing us to simulate recommender-generated infospheres and assess the model's performance in predicting future co-authorships. Our approach aims to improve our understanding of recommendation systems' roles and social networks modeling. To support the reproducibility of our work we publicly make available our implementations: https://github.com/DimNeuroLab/academic_network_project<|reference_end|>
arxiv
@article{guidotti2024modeling, title={Modeling Social Media Recommendation Impacts Using Academic Networks: A Graph Neural Network Approach}, author={Sabrina Guidotti, Gregor Donabauer, Simone Somazzi, Udo Kruschwitz, Davide Taibi and Dimitri Ognibene}, journal={arXiv preprint arXiv:2410.04552}, year={2024}, archivePrefix={arXiv}, eprint={2410.04552}, primaryClass={cs.SI cs.AI cs.IR cs.LG} }
guidotti2024modeling
arxiv-666280
2410.04553
Bisimulation metric for Model Predictive Control
<|reference_start|>Bisimulation metric for Model Predictive Control: Model-based reinforcement learning has shown promise for improving sample efficiency and decision-making in complex environments. However, existing methods face challenges in training stability, robustness to noise, and computational efficiency. In this paper, we propose Bisimulation Metric for Model Predictive Control (BS-MPC), a novel approach that incorporates bisimulation metric loss in its objective function to directly optimize the encoder. This time-step-wise direct optimization enables the learned encoder to extract intrinsic information from the original state space while discarding irrelevant details and preventing the gradients and errors from diverging. BS-MPC improves training stability, robustness against input noise, and computational efficiency by reducing training time. We evaluate BS-MPC on both continuous control and image-based tasks from the DeepMind Control Suite, demonstrating superior performance and robustness compared to state-of-the-art baseline methods.<|reference_end|>
arxiv
@article{shimizu2024bisimulation, title={Bisimulation metric for Model Predictive Control}, author={Yutaka Shimizu and Masayoshi Tomizuka}, journal={arXiv preprint arXiv:2410.04553}, year={2024}, archivePrefix={arXiv}, eprint={2410.04553}, primaryClass={cs.LG cs.SY eess.SY} }
shimizu2024bisimulation
arxiv-666281
2410.04555
$\texttt{dattri}$: A Library for Efficient Data Attribution
<|reference_start|>$\textttdattri$: A Library for Efficient Data Attribution: Data attribution methods aim to quantify the influence of individual training samples on the prediction of artificial intelligence (AI) models. As training data plays an increasingly crucial role in the modern development of large-scale AI models, data attribution has found broad applications in improving AI performance and safety. However, despite a surge of new data attribution methods being developed recently, there lacks a comprehensive library that facilitates the development, benchmarking, and deployment of different data attribution methods. In this work, we introduce $\texttt{dattri}$, an open-source data attribution library that addresses the above needs. Specifically, $\texttt{dattri}$ highlights three novel design features. Firstly, $\texttt{dattri}$ proposes a unified and easy-to-use API, allowing users to integrate different data attribution methods into their PyTorch-based machine learning pipeline with a few lines of code changed. Secondly, $\texttt{dattri}$ modularizes low-level utility functions that are commonly used in data attribution methods, such as Hessian-vector product, inverse-Hessian-vector product or random projection, making it easier for researchers to develop new data attribution methods. Thirdly, $\texttt{dattri}$ provides a comprehensive benchmark framework with pre-trained models and ground truth annotations for a variety of benchmark settings, including generative AI settings. We have implemented a variety of state-of-the-art efficient data attribution methods that can be applied to large-scale neural network models, and will continuously update the library in the future. Using the developed $\texttt{dattri}$ library, we are able to perform a comprehensive and fair benchmark analysis across a wide range of data attribution methods. The source code of $\texttt{dattri}$ is available at https://github.com/TRAIS-Lab/dattri.<|reference_end|>
arxiv
@article{deng2024$\texttt{dattri}$:, title={$\texttt{dattri}$: A Library for Efficient Data Attribution}, author={Junwei Deng, Ting-Wei Li, Shiyuan Zhang, Shixuan Liu, Yijun Pan, Hao Huang, Xinhe Wang, Pingbang Hu, Xingjian Zhang, Jiaqi W. Ma}, journal={arXiv preprint arXiv:2410.04555}, year={2024}, archivePrefix={arXiv}, eprint={2410.04555}, primaryClass={cs.LG cs.CY} }
deng2024$\texttt{dattri}$:
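Editor's note: one of the low-level utilities this entry mentions, the Hessian-vector product, can be illustrated independently of the library (a generic sketch under a quadratic-loss assumption; it does not use dattri's actual API): the HVP is formed without materializing the Hessian and checked against a central finite difference of gradients.

```python
import numpy as np

def grad(w: np.ndarray, X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Gradient of the squared loss 0.5 * ||Xw - y||^2."""
    return X.T @ (X @ w - y)

def hvp(w: np.ndarray, v: np.ndarray, X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Exact Hessian-vector product (X^T X) v; the Hessian is constant here, so w is unused."""
    return X.T @ (X @ v)

def hvp_finite_diff(w, v, X, y, eps: float = 1e-5) -> np.ndarray:
    """Central finite-difference approximation of the same Hessian-vector product."""
    return (grad(w + eps * v, X, y) - grad(w - eps * v, X, y)) / (2 * eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(50, 5)), rng.normal(size=50)
    w, v = rng.normal(size=5), rng.normal(size=5)
    print(np.allclose(hvp(w, v, X, y), hvp_finite_diff(w, v, X, y), atol=1e-4))
```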
arxiv-666282
2410.04560
GAMformer: In-Context Learning for Generalized Additive Models
<|reference_start|>GAMformer: In-Context Learning for Generalized Additive Models: Generalized Additive Models (GAMs) are widely recognized for their ability to create fully interpretable machine learning models for tabular data. Traditionally, training GAMs involves iterative learning algorithms, such as splines, boosted trees, or neural networks, which refine the additive components through repeated error reduction. In this paper, we introduce GAMformer, the first method to leverage in-context learning to estimate shape functions of a GAM in a single forward pass, representing a significant departure from the conventional iterative approaches to GAM fitting. Building on previous research applying in-context learning to tabular data, we exclusively use complex, synthetic data to train GAMformer, yet find it extrapolates well to real-world data. Our experiments show that GAMformer performs on par with other leading GAMs across various classification benchmarks while generating highly interpretable shape functions.<|reference_end|>
arxiv
@article{mueller2024gamformer:, title={GAMformer: In-Context Learning for Generalized Additive Models}, author={Andreas Mueller, Julien Siems, Harsha Nori, David Salinas, Arber Zela, Rich Caruana, Frank Hutter}, journal={arXiv preprint arXiv:2410.04560}, year={2024}, archivePrefix={arXiv}, eprint={2410.04560}, primaryClass={cs.LG stat.ML} }
mueller2024gamformer:
arxiv-666283
2410.04567
Power Minimization with Rate Constraints for Multi-User MIMO Systems with Large-Size RISs
<|reference_start|>Power Minimization with Rate Constraints for Multi-User MIMO Systems with Large-Size RISs: This study focuses on the optimization of a single-cell multi-user multiple-input multiple-output (MIMO) system with multiple large-size reconfigurable intelligent surfaces (RISs). The overall transmit power is minimized by optimizing the precoding coefficients and the RIS configuration, with constraints on users' signal-to-interference-plus-noise ratios (SINRs). The minimization problem is divided into two sub-problems and solved by means of an iterative alternating optimization (AO) approach. The first sub-problem focuses on finding the best precoder design. The second sub-problem optimizes the configuration of the RISs by partitioning them into smaller tiles. Each tile is then configured as a combination of pre-defined configurations. This allows the efficient optimization of RISs, especially in scenarios where the computational complexity would be prohibitive using traditional approaches. Simulation results show the good performance and limited complexity of the proposed method in comparison to benchmark schemes.<|reference_end|>
arxiv
@article{palmucci2024power, title={Power Minimization with Rate Constraints for Multi-User MIMO Systems with Large-Size RISs}, author={Silvia Palmucci, Giulio Bartoli, Andrea Abrardo, Marco Moretti, Marco Di Renzo}, journal={arXiv preprint arXiv:2410.04567}, year={2024}, archivePrefix={arXiv}, eprint={2410.04567}, primaryClass={cs.IT eess.SP math.IT} }
palmucci2024power
arxiv-666284
2410.04568
Ranking Policy Learning via Marketplace Expected Value Estimation From Observational Data
<|reference_start|>Ranking Policy Learning via Marketplace Expected Value Estimation From Observational Data: We develop a decision making framework to cast the problem of learning a ranking policy for search or recommendation engines in a two-sided e-commerce marketplace as an expected reward optimization problem using observational data. As a value allocation mechanism, the ranking policy allocates retrieved items to the designated slots so as to maximize the user utility from the slotted items, at any given stage of the shopping journey. The objective of this allocation can in turn be defined with respect to the underlying probabilistic user browsing model as the expected number of interaction events on presented items matching the user intent, given the ranking context. Through recognizing the effect of ranking as an intervention action to inform users' interactions with slotted items and the corresponding economic value of the interaction events for the marketplace, we formulate the expected reward of the marketplace as the collective value from all presented ranking actions. The key element in this formulation is a notion of context value distribution, which signifies not only the attribution of value to ranking interventions within a session but also the distribution of marketplace reward across user sessions. We build empirical estimates for the expected reward of the marketplace from observational data that account for the heterogeneity of economic value across session contexts as well as the distribution shifts in learning from observational user activity data. The ranking policy can then be trained by optimizing the empirical expected reward estimates via standard Bayesian inference techniques. We report empirical results for a product search ranking task in a major e-commerce platform demonstrating the fundamental trade-offs governed by ranking polices trained on empirical reward estimates with respect to extreme choices of the context value distribution.<|reference_end|>
arxiv
@article{ebrahimzadeh2024ranking, title={Ranking Policy Learning via Marketplace Expected Value Estimation From Observational Data}, author={Ehsan Ebrahimzadeh, Nikhil Monga, Hang Gao, Alex Cozzi, Abraham Bagherjeiran}, journal={arXiv preprint arXiv:2410.04568}, year={2024}, archivePrefix={arXiv}, eprint={2410.04568}, primaryClass={cs.IR cs.AI cs.LG stat.AP stat.ML} }
ebrahimzadeh2024ranking
arxiv-666285
2410.04570
Watermarking Decision Tree Ensembles
<|reference_start|>Watermarking Decision Tree Ensembles: Protecting the intellectual property of machine learning models is a hot topic and many watermarking schemes for deep neural networks have been proposed in the literature. Unfortunately, prior work largely neglected the investigation of watermarking techniques for other types of models, including decision tree ensembles, which are a state-of-the-art model for classification tasks on non-perceptual data. In this paper, we present the first watermarking scheme designed for decision tree ensembles, focusing in particular on random forest models. We discuss watermark creation and verification, presenting a thorough security analysis with respect to possible attacks. We finally perform an experimental evaluation of the proposed scheme, showing excellent results in terms of accuracy and security against the most relevant threats.<|reference_end|>
arxiv
@article{calzavara2024watermarking, title={Watermarking Decision Tree Ensembles}, author={Stefano Calzavara, Lorenzo Cazzaro, Donald Gera, Salvatore Orlando}, journal={arXiv preprint arXiv:2410.04570}, year={2024}, archivePrefix={arXiv}, eprint={2410.04570}, primaryClass={cs.LG cs.CR cs.MM} }
calzavara2024watermarking
arxiv-666286
2410.04571
EnsemW2S: Can an Ensemble of LLMs be Leveraged to Obtain a Stronger LLM?
<|reference_start|>EnsemW2S: Can an Ensemble of LLMs be Leveraged to Obtain a Stronger LLM?: How can we harness the collective capabilities of multiple Large Language Models (LLMs) to create an even more powerful model? This question forms the foundation of our research, where we propose an innovative approach to weak-to-strong (w2s) generalization-a critical problem in AI alignment. Our work introduces an easy-to-hard (e2h) framework for studying the feasibility of w2s generalization, where weak models trained on simpler tasks collaboratively supervise stronger models on more complex tasks. This setup mirrors real-world challenges, where direct human supervision is limited. To achieve this, we develop a novel AdaBoost-inspired ensemble method, demonstrating that an ensemble of weak supervisors can enhance the performance of stronger LLMs across classification and generative tasks on difficult QA datasets. In several cases, our ensemble approach matches the performance of models trained on ground-truth data, establishing a new benchmark for w2s generalization. We observe an improvement of up to 14% over existing baselines and average improvements of 5% and 4% for binary classification and generative tasks, respectively. This research points to a promising direction for enhancing AI through collective supervision, especially in scenarios where labeled data is sparse or insufficient.<|reference_end|>
arxiv
@article{agrawal2024ensemw2s:, title={EnsemW2S: Can an Ensemble of LLMs be Leveraged to Obtain a Stronger LLM?}, author={Aakriti Agrawal, Mucong Ding, Zora Che, Chenghao Deng, Anirudh Satheesh, John Langford, Furong Huang}, journal={arXiv preprint arXiv:2410.04571}, year={2024}, archivePrefix={arXiv}, eprint={2410.04571}, primaryClass={cs.LG} }
agrawal2024ensemw2s:
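Editor's note: a minimal sketch of the AdaBoost-style weighting of weak supervisors that this entry builds on (generic binary AdaBoost over a fixed pool of weak predictions with toy labels; not the paper's weak-to-strong training procedure, which retrains supervisors on reweighted data each round):

```python
import numpy as np

def adaboost_weights(weak_preds: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Assign a vote alpha_t to each weak predictor; predictions and labels are in {-1, +1}."""
    n_weak, n_samples = weak_preds.shape
    w = np.full(n_samples, 1.0 / n_samples)          # example weights
    alphas = np.zeros(n_weak)
    for t in range(n_weak):
        err = np.clip(w @ (weak_preds[t] != y), 1e-10, 1 - 1e-10)  # weighted error
        alphas[t] = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alphas[t] * y * weak_preds[t])  # up-weight mistakes
        w /= w.sum()
    return alphas

def ensemble_predict(weak_preds: np.ndarray, alphas: np.ndarray) -> np.ndarray:
    """Weighted majority vote of the weak predictors."""
    return np.sign(alphas @ weak_preds)

if __name__ == "__main__":
    y = np.array([1, 1, -1, -1, 1, -1])
    weak = np.array([[1, 1, -1, 1, 1, -1],           # each row: one weak supervisor
                     [1, -1, -1, -1, 1, -1],
                     [-1, 1, -1, -1, 1, 1]])
    a = adaboost_weights(weak, y)
    print("alphas:", a, "ensemble:", ensemble_predict(weak, a))
```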
arxiv-666287
2410.04573
Admissibility Over Winning: A New Approach to Reactive Synthesis in Robotics
<|reference_start|>Admissibility Over Winning: A New Approach to Reactive Synthesis in Robotics: Reactive synthesis is a framework for modeling and automatically synthesizing strategies in robotics, typically through computing a \emph{winning} strategy in a 2-player game between the robot and the environment. Winning strategies, however, do not always exist, even in some simple cases. In such situations, it is still desirable for the robot to attempt its task rather than "giving up". In this work, we explore the notion of admissibility to define strategies beyond winning, tailored specifically for robotic systems. We introduce an ordering of admissible strategies and define \emph{admissibly rational strategies}, which aim to be winning and cooperative when possible, and non-violating and hopeful when necessary. We present an efficient synthesis algorithm and demonstrate that admissibly rational strategies produce desirable behaviors through case studies.<|reference_end|>
arxiv
@article{muvvala2024admissibility, title={Admissibility Over Winning: A New Approach to Reactive Synthesis in Robotics}, author={Karan Muvvala and Morteza Lahijanian}, journal={arXiv preprint arXiv:2410.04573}, year={2024}, archivePrefix={arXiv}, eprint={2410.04573}, primaryClass={cs.RO cs.FL cs.GT} }
muvvala2024admissibility
arxiv-666288
2410.04574
Enhancing 3D Human Pose Estimation Amidst Severe Occlusion with Dual Transformer Fusion
<|reference_start|>Enhancing 3D Human Pose Estimation Amidst Severe Occlusion with Dual Transformer Fusion: In the field of 3D Human Pose Estimation from monocular videos, the presence of diverse occlusion types presents a formidable challenge. Prior research has made progress by harnessing spatial and temporal cues to infer 3D poses from 2D joint observations. This paper introduces a Dual Transformer Fusion (DTF) algorithm, a novel approach to obtain a holistic 3D pose estimation, even in the presence of severe occlusions. Confronting the issue of occlusion-induced missing joint data, we propose a temporal interpolation-based occlusion guidance mechanism. To enable precise 3D Human Pose Estimation, our approach leverages the innovative DTF architecture, which first generates a pair of intermediate views. Each intermediate-view undergoes spatial refinement through a self-refinement schema. Subsequently, these intermediate-views are fused to yield the final 3D human pose estimation. The entire system is end-to-end trainable. Through extensive experiments conducted on the Human3.6M and MPI-INF-3DHP datasets, our method's performance is rigorously evaluated. Notably, our approach outperforms existing state-of-the-art methods on both datasets, yielding substantial improvements. The code is available here: https://github.com/MehwishG/DTF.<|reference_end|>
arxiv
@article{ghafoor2024enhancing, title={Enhancing 3D Human Pose Estimation Amidst Severe Occlusion with Dual Transformer Fusion}, author={Mehwish Ghafoor, Arif Mahmood, Muhammad Bilal}, journal={arXiv preprint arXiv:2410.04574}, year={2024}, archivePrefix={arXiv}, eprint={2410.04574}, primaryClass={cs.CV cs.LG} }
ghafoor2024enhancing
arxiv-666289
2410.04577
Robustness Reprogramming for Representation Learning
<|reference_start|>Robustness Reprogramming for Representation Learning: This work tackles an intriguing and fundamental open challenge in representation learning: Given a well-trained deep learning model, can it be reprogrammed to enhance its robustness against adversarial or noisy input perturbations without altering its parameters? To explore this, we revisit the core feature transformation mechanism in representation learning and propose a novel non-linear robust pattern matching technique as a robust alternative. Furthermore, we introduce three model reprogramming paradigms to offer flexible control of robustness under different efficiency requirements. Comprehensive experiments and ablation studies across diverse learning models ranging from basic linear model and MLPs to shallow and modern deep ConvNets demonstrate the effectiveness of our approaches. This work not only opens a promising and orthogonal direction for improving adversarial defenses in deep learning beyond existing methods but also provides new insights into designing more resilient AI systems with robust statistics.<|reference_end|>
arxiv
@article{hou2024robustness, title={Robustness Reprogramming for Representation Learning}, author={Zhichao Hou, MohamadAli Torkamani, Hamid Krim, Xiaorui Liu}, journal={arXiv preprint arXiv:2410.04577}, year={2024}, archivePrefix={arXiv}, eprint={2410.04577}, primaryClass={cs.LG stat.ML} }
hou2024robustness
arxiv-666290
2410.04579
Upsample or Upweight? Balanced Training on Heavily Imbalanced Datasets
<|reference_start|>Upsample or Upweight? Balanced Training on Heavily Imbalanced Datasets: Data availability across domains often follows a long-tail distribution: a few domains have abundant data, while most face data scarcity. This imbalance poses challenges in training language models uniformly across all domains. In our study, we focus on multilingual settings, where data sizes vary significantly between high- and low-resource languages. Common strategies to address this include upsampling low-resource languages (Temperature Sampling) or upweighting their loss (Scalarization). Although often considered equivalent, this assumption has not been proven, which motivates our study. Through both theoretical and empirical analysis, we identify the conditions under which these approaches are equivalent and when they diverge. Specifically, we demonstrate that these two methods are equivalent under full gradient descent, but this equivalence breaks down with stochastic gradient descent. Empirically, we observe that Temperature Sampling converges more quickly but is prone to overfitting. We argue that this faster convergence is likely due to the lower variance in gradient estimations, as shown theoretically. Based on these insights, we propose Cooldown, a strategy that reduces sampling temperature during training, accelerating convergence without overfitting to low-resource languages. Our method is competitive with existing data re-weighting and offers computational efficiency.<|reference_end|>
arxiv
@article{li2024upsample, title={Upsample or Upweight? Balanced Training on Heavily Imbalanced Datasets}, author={Tianjian Li, Haoran Xu, Weiting Tan, Kenton Murray, Daniel Khashabi}, journal={arXiv preprint arXiv:2410.04579}, year={2024}, archivePrefix={arXiv}, eprint={2410.04579}, primaryClass={cs.CL cs.LG stat.ML} }
li2024upsample
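Editor's note: the correspondence between Temperature Sampling and Scalarization discussed in this entry can be made concrete with a small sketch (illustrative domain sizes and temperatures; this is the full-gradient-descent, in-expectation view only): sampling domain i with probability proportional to n_i^(1/tau) matches re-weighting that domain's loss by p_i divided by its natural share of the concatenated data.

```python
import numpy as np

def temperature_probs(domain_sizes: np.ndarray, tau: float) -> np.ndarray:
    """Sampling probabilities p_i proportional to n_i^(1/tau)."""
    p = domain_sizes ** (1.0 / tau)
    return p / p.sum()

def equivalent_loss_weights(domain_sizes: np.ndarray, tau: float) -> np.ndarray:
    """Per-domain loss weights matching temperature sampling in expectation (full-batch view)."""
    p = temperature_probs(domain_sizes, tau)
    natural = domain_sizes / domain_sizes.sum()   # proportions under plain concatenation
    return p / natural                            # up-weights low-resource domains

if __name__ == "__main__":
    sizes = np.array([1_000_000, 50_000, 2_000], dtype=float)  # high- to low-resource
    for tau in (1.0, 3.0, 100.0):
        print(tau, temperature_probs(sizes, tau).round(3),
              equivalent_loss_weights(sizes, tau).round(2))
```

At tau = 1 the weights are all 1 (proportional sampling), while large tau approaches uniform sampling over domains, which is exactly the regime where the variance difference under stochastic gradient descent becomes relevant.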
arxiv-666291
2410.04581
Efficient Linearizability Monitoring for Sets, Stacks, Queues and Priority Queues
<|reference_start|>Efficient Linearizability Monitoring for Sets, Stacks, Queues and Priority Queues: In this paper, we consider the problem of automatically monitoring linearizability. Here, one observes an execution of a concurrent program that interacts with a concurrent object and determines if the execution witnesses the violation of linearizability with respect to the sequential specification of the underlying data structure of the concurrent object. This problem has been extensively studied in the past for read-write registers, and both tight upper and lower bounds have been proposed in this case. While this problem has also been studied for the case of other prominent data structures such as stacks and queues, we find that these results are either not extensive or in some cases incorrect. In this paper, we study the problem under the restriction where values inserted in the data types are distinct (in the execution observed). We then show that under such a restriction, the linearizability problem is solvable in polynomial time for these data types. Beyond theoretical soundness and completeness, the algorithms proposed are empirically proven to outperform all state-of-the-art linearizability monitors.<|reference_end|>
arxiv
@article{han2024efficient, title={Efficient Linearizability Monitoring for Sets, Stacks, Queues and Priority Queues}, author={Lee Zheng Han, Umang Mathur}, journal={arXiv preprint arXiv:2410.04581}, year={2024}, archivePrefix={arXiv}, eprint={2410.04581}, primaryClass={cs.PL} }
han2024efficient
arxiv-666292
2410.04585
Reasoning-Enhanced Healthcare Predictions with Knowledge Graph Community Retrieval
<|reference_start|>Reasoning-Enhanced Healthcare Predictions with Knowledge Graph Community Retrieval: Large language models (LLMs) have demonstrated significant potential in clinical decision support. Yet LLMs still suffer from hallucinations and lack fine-grained contextual medical knowledge, limiting their high-stake healthcare applications such as clinical diagnosis. Traditional retrieval-augmented generation (RAG) methods attempt to address these limitations but frequently retrieve sparse or irrelevant information, undermining prediction accuracy. We introduce KARE, a novel framework that integrates knowledge graph (KG) community-level retrieval with LLM reasoning to enhance healthcare predictions. KARE constructs a comprehensive multi-source KG by integrating biomedical databases, clinical literature, and LLM-generated insights, and organizes it using hierarchical graph community detection and summarization for precise and contextually relevant information retrieval. Our key innovations include: (1) a dense medical knowledge structuring approach enabling accurate retrieval of relevant information; (2) a dynamic knowledge retrieval mechanism that enriches patient contexts with focused, multi-faceted medical insights; and (3) a reasoning-enhanced prediction framework that leverages these enriched contexts to produce both accurate and interpretable clinical predictions. Extensive experiments demonstrate that KARE outperforms leading models by up to 10.8-15.0% on MIMIC-III and 12.6-12.7% on MIMIC-IV for mortality and readmission predictions. In addition to its impressive prediction accuracy, our framework leverages the reasoning capabilities of LLMs, enhancing the trustworthiness of clinical predictions.<|reference_end|>
arxiv
@article{jiang2024reasoning-enhanced, title={Reasoning-Enhanced Healthcare Predictions with Knowledge Graph Community Retrieval}, author={Pengcheng Jiang, Cao Xiao, Minhao Jiang, Parminder Bhatia, Taha Kass-Hout, Jimeng Sun, Jiawei Han}, journal={arXiv preprint arXiv:2410.04585}, year={2024}, archivePrefix={arXiv}, eprint={2410.04585}, primaryClass={cs.CL} }
jiang2024reasoning-enhanced
arxiv-666293
2410.04587
Hammer: Robust Function-Calling for On-Device Language Models via Function Masking
<|reference_start|>Hammer: Robust Function-Calling for On-Device Language Models via Function Masking: Large language models have demonstrated impressive value in performing as autonomous agents when equipped with external tools and API calls. Nonetheless, effectively harnessing their potential for executing complex tasks crucially relies on enhancements in their function calling capabilities. This paper identifies a critical gap in existing function calling models, where performance varies significantly across benchmarks, often due to being misled by specific naming conventions. To address such an issue, we introduce Hammer, a novel family of foundation models specifically engineered for on-device function calling. Hammer employs an augmented dataset that enhances models' sensitivity to irrelevant functions and incorporates function masking techniques to minimize misleading. Our empirical evaluations reveal that Hammer not only outperforms larger models but also demonstrates robust generalization across diverse benchmarks, achieving sota results. Our open source contributions include a specialized dataset for irrelevance detection, a tuning framework for enhanced generalization, and the Hammer models, establishing a new standard for function calling performance.<|reference_end|>
arxiv
@article{lin2024hammer:, title={Hammer: Robust Function-Calling for On-Device Language Models via Function Masking}, author={Qiqiang Lin, Muning Wen, Qiuying Peng, Guanyu Nie, Junwei Liao, Jun Wang, Xiaoyun Mo, Jiamu Zhou, Cheng Cheng, Yin Zhao, Jun Wang, Weinan Zhang}, journal={arXiv preprint arXiv:2410.04587}, year={2024}, archivePrefix={arXiv}, eprint={2410.04587}, primaryClass={cs.LG cs.AI cs.SE} }
lin2024hammer:
arxiv-666294
2410.04589
Towards the first UD Treebank of Spoken Italian: the KIParla forest
<|reference_start|>Towards the first UD Treebank of Spoken Italian: the KIParla forest: The present project endeavors to enrich the linguistic resources available for Italian by constructing a Universal Dependencies treebank for the KIParla corpus (Mauri et al., 2019, Ballarè et al., 2020), an existing and well known resource for spoken Italian.<|reference_end|>
arxiv
@article{pannitto2024towards, title={Towards the first UD Treebank of Spoken Italian: the KIParla forest}, author={Ludovica Pannitto}, journal={arXiv preprint arXiv:2410.04589}, year={2024}, archivePrefix={arXiv}, eprint={2410.04589}, primaryClass={cs.CL} }
pannitto2024towards
arxiv-666295
2410.04592
CardioAI: A Multimodal AI-based System to Support Symptom Monitoring and Risk Detection of Cancer Treatment-Induced Cardiotoxicity
<|reference_start|>CardioAI: A Multimodal AI-based System to Support Symptom Monitoring and Risk Detection of Cancer Treatment-Induced Cardiotoxicity: Despite recent advances in cancer treatments that prolong patients' lives, treatment-induced cardiotoxicity remains one severe side effect. The clinical decision-making of cardiotoxicity is challenging, as non-clinical symptoms can be missed until life-threatening events occur at a later stage, and clinicians already have a high workload centered on the treatment, not the side effects. Our project starts with a participatory design study with 11 clinicians to understand their practices and needs; then we build a multimodal AI system, CardioAI, that integrates wearables and LLM-powered voice assistants to monitor multimodal non-clinical symptoms. Also, the system includes an explainable risk prediction module that can generate cardiotoxicity risk scores and summaries as explanations to support clinicians' decision-making. We conducted a heuristic evaluation with four clinical experts and found that they all believe CardioAI integrates well into their workflow, reduces their information overload, and enables them to make more informed decisions.<|reference_end|>
arxiv
@article{wu2024cardioai:, title={CardioAI: A Multimodal AI-based System to Support Symptom Monitoring and Risk Detection of Cancer Treatment-Induced Cardiotoxicity}, author={Siyi Wu, Weidan Cao, Shihan Fu, Bingsheng Yao, Ziqi Yang, Changchang Yin, Varun Mishra, Daniel Addison, Ping Zhang, Dakuo Wang}, journal={arXiv preprint arXiv:2410.04592}, year={2024}, archivePrefix={arXiv}, eprint={2410.04592}, primaryClass={cs.HC} }
wu2024cardioai:
arxiv-666296
2410.04596
Need Help? Designing Proactive AI Assistants for Programming
<|reference_start|>Need Help? Designing Proactive AI Assistants for Programming: While current chat-based AI assistants primarily operate reactively, responding only when prompted by users, there is significant potential for these systems to proactively assist in tasks without explicit invocation, enabling a mixed-initiative interaction. This work explores the design and implementation of proactive AI assistants powered by large language models. We first outline the key design considerations for building effective proactive assistants. As a case study, we propose a proactive chat-based programming assistant that automatically provides suggestions and facilitates their integration into the programmer's code. The programming context provides a shared workspace enabling the assistant to offer more relevant suggestions. We conducted a randomized experimental study examining the impact of various design elements of the proactive assistant on programmer productivity and user experience. Our findings reveal significant benefits of incorporating proactive chat assistants into coding environments and uncover important nuances that influence their usage and effectiveness.<|reference_end|>
arxiv
@article{chen2024need, title={Need Help? Designing Proactive AI Assistants for Programming}, author={Valerie Chen, Alan Zhu, Sebastian Zhao, Hussein Mozannar, David Sontag, Ameet Talwalkar}, journal={arXiv preprint arXiv:2410.04596}, year={2024}, archivePrefix={arXiv}, eprint={2410.04596}, primaryClass={cs.HC} }
chen2024need
arxiv-666297
2410.04601
ProtocoLLM: Automatic Evaluation Framework of LLMs on Domain-Specific Scientific Protocol Formulation Tasks
<|reference_start|>ProtocoLLM: Automatic Evaluation Framework of LLMs on Domain-Specific Scientific Protocol Formulation Tasks: Automated generation of scientific protocols executable by robots can significantly accelerate scientific research processes. Large Language Models (LLMs) excel at Scientific Protocol Formulation Tasks (SPFT), but the evaluation of their capabilities relies on human evaluation. Here, we propose a flexible, automatic framework to evaluate LLMs' capability on SPFT: ProtocoLLM. This framework prompts the target model and GPT-4 to extract pseudocode from biology protocols using only predefined lab actions and evaluates the output of the target model using LLAM-EVAL, with the pseudocode generated by GPT-4 serving as a baseline and Llama-3 acting as the evaluator. Our adaptable prompt-based evaluation method, LLAM-EVAL, offers significant flexibility in terms of evaluation model, material, and criteria, and is free of cost. We evaluate GPT variations, Llama, Mixtral, Gemma, Cohere, and Gemini. Overall, we find that GPT and Cohere are powerful scientific protocol formulators. We also introduce BIOPROT 2.0, a dataset with biology protocols and corresponding pseudocodes, which can aid LLMs in formulation and evaluation of SPFT. Our work is extensible to assess LLMs on SPFT across various domains and other fields that require protocol generation for specific goals.<|reference_end|>
arxiv
@article{yi2024protocollm:, title={ProtocoLLM: Automatic Evaluation Framework of LLMs on Domain-Specific Scientific Protocol Formulation Tasks}, author={Seungjun Yi, Jaeyoung Lim, Juyong Yoon}, journal={arXiv preprint arXiv:2410.04601}, year={2024}, archivePrefix={arXiv}, eprint={2410.04601}, primaryClass={cs.CL} }
yi2024protocollm:
arxiv-666298
2410.04602
Decoding MIE: A Novel Dataset Approach Using Topic Extraction and Affiliation Parsing
<|reference_start|>Decoding MIE: A Novel Dataset Approach Using Topic Extraction and Affiliation Parsing: The rapid expansion of medical informatics literature presents significant challenges in synthesizing and analyzing research trends. This study introduces a novel dataset derived from the Medical Informatics Europe (MIE) Conference proceedings, addressing the need for sophisticated analytical tools in the field. Utilizing the Triple-A software, we extracted and processed metadata and abstracts from 4,606 articles published in the "Studies in Health Technology and Informatics" journal series, focusing on MIE conferences from 1996 onwards. Our methodology incorporated advanced techniques such as affiliation parsing using the TextRank algorithm. The resulting dataset, available in JSON format, offers a comprehensive view of bibliometric details, extracted topics, and standardized affiliation information. Analysis of this data revealed interesting patterns in Digital Object Identifier usage, citation trends, and authorship attribution across the years. Notably, we observed inconsistencies in author data and a brief period of linguistic diversity in publications. This dataset represents a significant contribution to the medical informatics community, enabling longitudinal studies of research trends, collaboration network analyses, and in-depth bibliometric investigations. By providing this enriched, structured resource spanning nearly three decades of conference proceedings, we aim to facilitate novel insights and advancements in the rapidly evolving field of medical informatics.<|reference_end|>
arxiv
@article{bitaraf2024decoding, title={Decoding MIE: A Novel Dataset Approach Using Topic Extraction and Affiliation Parsing}, author={Ehsan Bitaraf and Maryam Jafarpour}, journal={arXiv preprint arXiv:2410.04602}, year={2024}, archivePrefix={arXiv}, eprint={2410.04602}, primaryClass={cs.IR} }
bitaraf2024decoding
arxiv-666299
2410.04604
Distributed ADMM Approach for the Power Distribution Network Reconfiguration
<|reference_start|>Distributed ADMM Approach for the Power Distribution Network Reconfiguration: The electrical network reconfiguration problem aims to minimize losses in a distribution system by adjusting switches while ensuring radial topology. The growing use of renewable energy and the complexity of managing modern power grids make solving the reconfiguration problem crucial. Distributed algorithms help optimize grid configurations, ensuring efficient adaptation to changing conditions and better utilization of renewable energy sources. This paper introduces a distributed algorithm designed to tackle the problem of power distribution network reconfiguration with a radiality constraint. This algorithm relies on ADMM (Alternating Direction Method of Multipliers), where each agent progressively updates its estimation based on the information exchanged with neighboring agents. We show that every agent is required to solve a linearly constrained convex quadratic programming problem and a Minimum Weight Rooted Arborescence Problem (MWRAP) with local weights during each iteration. Through numerical experiments, we demonstrate the performance of the proposed algorithm in various scenarios, including its application to a 33-bus test system and a real-world network.<|reference_end|>
arxiv
@article{mokhtari2024distributed, title={Distributed ADMM Approach for the Power Distribution Network Reconfiguration}, author={Yacine Mokhtari, Patrick Coirault, Emmanuel Moulay, Jérôme Le Ny, Didier Larraillet}, journal={arXiv preprint arXiv:2410.04604}, year={2024}, archivePrefix={arXiv}, eprint={2410.04604}, primaryClass={eess.SY cs.SY} }
mokhtari2024distributed
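Editor's note: the ADMM pattern this entry relies on, a local solve per agent followed by an exchange and a dual update, can be sketched on a toy consensus problem (generic scalar quadratics; not the paper's reconfiguration algorithm, which additionally solves a local MWRAP, and rho and the iteration count are arbitrary):

```python
import numpy as np

def consensus_admm(a: np.ndarray, b: np.ndarray, rho: float = 1.0, iters: int = 100) -> float:
    """Minimize sum_i (a_i/2) * (x - b_i)^2 by consensus ADMM.

    Each agent i keeps a local copy x_i and a scaled dual u_i; z is the shared
    consensus variable (a plain average here, standing in for neighbor exchange)."""
    n = len(a)
    x, u, z = np.zeros(n), np.zeros(n), 0.0
    for _ in range(iters):
        x = (a * b + rho * (z - u)) / (a + rho)   # local proximal / argmin step
        z = float(np.mean(x + u))                 # consensus (gather) step
        u = u + x - z                             # dual ascent step
    return z

if __name__ == "__main__":
    a = np.array([1.0, 2.0, 4.0])
    b = np.array([0.0, 1.0, 3.0])
    print(consensus_admm(a, b))                   # converges to the weighted average
    print(np.sum(a * b) / np.sum(a))              # closed-form optimum for comparison
```

Each iteration mirrors the structure described in the abstract: a local minimization per agent, a consensus step that stands in for communication with neighbors, and a dual update.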
arxiv-666300
2410.04606
Privacy's Peril: Unmasking the Unregulated Underground Market of Data Brokers and the Suggested Framework
<|reference_start|>Privacy's Peril: Unmasking the Unregulated Underground Market of Data Brokers and the Suggested Framework: The internet is a common place for businesses to collect and store as much client data as possible and computer storage capacity has increased exponentially due to this trend. Businesses utilize this data to enhance customer satisfaction, generate revenue, boost sales, and increase profile. However, the emerging sector of data brokers is plagued with legal challenges. In part I, we will look at what a data broker is, how it collects information, the data industry, and some of the difficulties it encounters. In Part II, we will look at potential options for regulating data brokers. All options are provided in light of the EU General Data Protection Regulation (GDPR). In Part III, we shall present our analysis and findings.<|reference_end|>
arxiv
@article{bajwa2024privacy's, title={Privacy's Peril: Unmasking the Unregulated Underground Market of Data Brokers and the Suggested Framework}, author={Rabia Bajwa, Farah Tasnur Meem}, journal={arXiv preprint arXiv:2410.04606}, year={2024}, archivePrefix={arXiv}, eprint={2410.04606}, primaryClass={cs.CR} }
bajwa2024privacy's