Columns: corpus_id (string, length 7–12), paper_id (string, length 9–16), title (string, length 1–261), abstract (string, length 70–4.02k), source (string, 1 class), bibtex (string, length 208–20.9k), citation_key (string, length 6–100)
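Each record below carries its metadata as a raw BibTeX string. As a minimal sketch (not part of the dataset itself), the flat entries in this dump can be pulled apart with the standard library alone; the regex assumes field values contain no nested braces, which holds for most entries here:

```python
import re

def parse_bibtex_fields(entry: str) -> dict:
    """Extract the citation key and top-level fields from a single flat
    @article{...} BibTeX string. Assumes field values are wrapped in {...}
    without nested braces, as in most entries in this dump."""
    # The citation key is everything between the opening brace and the first comma.
    key_match = re.match(r"@\w+\{([^,]+),", entry)
    # Each field has the shape name={value} with no nested braces.
    fields = dict(re.findall(r"(\w+)=\{([^{}]*)\}", entry))
    return {"citation_key": key_match.group(1) if key_match else None, **fields}

# One entry from the records below, reproduced verbatim.
entry = ("@article{koster2024neural, title={Neural Network Plasticity and Loss Sharpness}, "
         "author={Max Koster and Jude Kukla}, journal={arXiv preprint arXiv:2409.17300}, "
         "year={2024}, archivePrefix={arXiv}, eprint={2409.17300}, primaryClass={cs.LG cs.AI} }")
rec = parse_bibtex_fields(entry)
print(rec["citation_key"], rec["year"], rec["primaryClass"])
```

For entries whose values do contain braces (e.g. `Heu{\ss}en`), a real BibTeX parser such as `bibtexparser` would be the safer choice.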
arxiv-662001
2409.17286
Scalable quality control on processing of large diffusion-weighted and structural magnetic resonance imaging datasets
<|reference_start|>Scalable quality control on processing of large diffusion-weighted and structural magnetic resonance imaging datasets: Proper quality control (QC) is time-consuming when working with large-scale medical imaging datasets, yet necessary, as poor-quality data can lead to erroneous conclusions or poorly trained machine learning models. Most efforts to reduce data QC time rely on outlier detection, which cannot capture every instance of algorithm failure. Thus, there is a need to visually inspect every output of data processing pipelines in a scalable manner. We design a QC pipeline that allows for low time cost and effort across a team setting for a large database of diffusion weighted and structural magnetic resonance images. Our proposed method satisfies the following design criteria: 1.) a consistent way to perform and manage quality control across a team of researchers, 2.) quick visualization of preprocessed data that minimizes the effort and time spent on the QC process without compromising the condition or caliber of the QC, and 3.) a way to aggregate QC results across pipelines and datasets that can be easily shared. In addition to meeting these design criteria, we also provide information on what a successful output should be and common occurrences of algorithm failures for various processing pipelines. Our method reduces the time spent on QC by a factor of over 20 when compared to naively opening outputs in an image viewer, and we demonstrate how it can facilitate aggregation and sharing of QC results within a team. While researchers must spend time on robust visual QC of data, there are mechanisms by which the process can be streamlined and made efficient.<|reference_end|>
arxiv
@article{kim2024scalable, title={Scalable quality control on processing of large diffusion-weighted and structural magnetic resonance imaging datasets}, author={Michael E. Kim and Chenyu Gao and Karthik Ramadass and Praitayini Kanakaraj and Nancy R. Newlin and Gaurav Rudravaram and Kurt G. Schilling and Blake E. Dewey and David A. Bennett and Sid O'Bryant and Robert C. Barber and Derek Archer and Timothy J. Hohman and Shunxing Bao and Zhiyuan Li and Bennett A. Landman and Nazirah Mohd Khairi and {The Alzheimer's Disease Neuroimaging Initiative} and {The HABS-HD Study Team}}, journal={arXiv preprint arXiv:2409.17286}, year={2024}, archivePrefix={arXiv}, eprint={2409.17286}, primaryClass={cs.DC} }
kim2024scalable
arxiv-662002
2409.17287
Blockchain-Enabled Variational Information Bottleneck for Data Extraction Based on Mutual Information in Internet of Vehicles
<|reference_start|>Blockchain-Enabled Variational Information Bottleneck for Data Extraction Based on Mutual Information in Internet of Vehicles: The Internet of Vehicles (IoV) network can address the issue of limited computing resources and data processing capabilities of individual vehicles, but it also brings the risk of privacy leakage to vehicle users. Applying blockchain technology can establish secure data links within the IoV, solving the problems of insufficient computing resources for each vehicle and the security of data transmission over the network. However, with the development of the IoV, the amount of data interaction between multiple vehicles and between vehicles and base stations, roadside units, etc., is continuously increasing. There is a need to further reduce the interaction volume, and intelligent data compression is key to solving this problem. The VIB technique facilitates the training of encoding and decoding models, substantially diminishing the volume of data that needs to be transmitted. This paper introduces an innovative approach that integrates blockchain with VIB, referred to as BVIB, designed to lighten computational workloads and reinforce the security of the network. We first construct a new network framework by separating the encoding and decoding networks to address the computational burden issue, and then propose a new algorithm to enhance the security of IoV networks. We also discuss the impact of the data extraction rate on system latency to determine the most suitable data extraction rate. An experimental framework combining Python and C++ has been established to substantiate the efficacy of our BVIB approach. Comprehensive simulation studies indicate that the BVIB consistently excels in comparison to alternative foundational methodologies.<|reference_end|>
arxiv
@article{zhang2024blockchain-enabled, title={Blockchain-Enabled Variational Information Bottleneck for Data Extraction Based on Mutual Information in Internet of Vehicles}, author={Cui Zhang and Wenjun Zhang and Qiong Wu and Pingyi Fan and Nan Cheng and Wen Chen and Khaled B. Letaief}, journal={arXiv preprint arXiv:2409.17287}, year={2024}, archivePrefix={arXiv}, eprint={2409.17287}, primaryClass={cs.CR cs.LG} }
zhang2024blockchain-enabled
arxiv-662003
2409.17289
Steering LLM Summarization with Visual Workspaces for Sensemaking
<|reference_start|>Steering LLM Summarization with Visual Workspaces for Sensemaking: Large Language Models (LLMs) have been widely applied in summarization due to their speedy and high-quality text generation. Summarization for sensemaking involves information compression and insight extraction. Human guidance in sensemaking tasks can prioritize and cluster relevant information for LLMs. However, users must translate their cognitive thinking into natural language to communicate with LLMs. Can we use more readable and operable visual representations to guide the summarization process for sensemaking? Therefore, we propose introducing an intermediate step--a schematic visual workspace for human sensemaking--before the LLM generation to steer and refine the summarization process. We conduct a series of proof-of-concept experiments to investigate the potential for enhancing the summarization by GPT-4 through visual workspaces. Leveraging a textual sensemaking dataset with a ground truth summary, we evaluate the impact of a human-generated visual workspace on LLM-generated summarization of the dataset and assess the effectiveness of space-steered summarization. We categorize several types of extractable information from typical human workspaces that can be injected into engineered prompts to steer the LLM summarization. The results demonstrate how such workspaces can help align an LLM with the ground truth, leading to more accurate summarization results than without the workspaces.<|reference_end|>
arxiv
@article{tang2024steering, title={Steering LLM Summarization with Visual Workspaces for Sensemaking}, author={Xuxin Tang and Eric Krokos and Can Liu and Kylie Davidson and Kirsten Whitley and Naren Ramakrishnan and Chris North}, journal={arXiv preprint arXiv:2409.17289}, year={2024}, archivePrefix={arXiv}, eprint={2409.17289}, primaryClass={cs.HC} }
tang2024steering
arxiv-662004
2409.17293
A two-scale computational homogenization approach for elastoplastic truss-based lattice structures
<|reference_start|>A two-scale computational homogenization approach for elastoplastic truss-based lattice structures: The revolutionary advancements in metal additive manufacturing have enabled the production of alloy-based lattice structures with complex geometrical features and high resolutions. This has encouraged the development of nonlinear material models, including plasticity, damage, etc., for such materials. However, the prohibitive computational cost arising from the high number of degrees of freedom for engineering structures composed of lattice structures highlights the necessity of homogenization techniques, such as the two-scale computational homogenization method. In the present work, a two-scale homogenization approach with on-the-fly exchange of information is adopted to study the elastoplastic behavior of truss-based lattice structures. The macroscopic homogenized structure is represented by a two-dimensional continuum, while the underlying microscale lattices are modeled as a network of one-dimensional truss elements. This helps to significantly reduce the associated computational cost by reducing the microscopic degrees of freedom. The microscale trusses are assumed to exhibit an elastoplastic material behavior characterized by a combination of nonlinear exponential isotropic hardening and linear kinematic hardening. Through multiple numerical examples, the performance of the adopted homogenization approach is examined by comparing forces and displacements with direct numerical simulations of discrete structures for three types of stretching-dominated lattice topologies, including triangular, X-braced and X-Plus-braced unit cells. Furthermore, the principle of scale separation, which emphasizes the need for an adequate separation between the macroscopic and microscopic characteristic lengths, is investigated.<|reference_end|>
arxiv
@article{danesh2024a, title={A two-scale computational homogenization approach for elastoplastic truss-based lattice structures}, author={Hooman Danesh and Lisamarie Heu{\ss}en and Francisco J. Mont\'ans and Stefanie Reese and Tim Brepols}, journal={arXiv preprint arXiv:2409.17293}, year={2024}, archivePrefix={arXiv}, eprint={2409.17293}, primaryClass={cs.CE} }
danesh2024a
arxiv-662005
2409.17294
Schr\"odinger bridge based deep conditional generative learning
<|reference_start|>Schr\"odinger bridge based deep conditional generative learning: Conditional generative models represent a significant advancement in the field of machine learning, allowing for the controlled synthesis of data by incorporating additional information into the generation process. In this work we introduce a novel Schr\"odinger bridge based deep generative method for learning conditional distributions. We start from a unit-time diffusion process governed by a stochastic differential equation (SDE) that transforms a fixed point at time $0$ into a desired target conditional distribution at time $1$. For effective implementation, we discretize the SDE with Euler-Maruyama method where we estimate the drift term nonparametrically using a deep neural network. We apply our method to both low-dimensional and high-dimensional conditional generation problems. The numerical studies demonstrate that though our method does not directly provide the conditional density estimation, the samples generated by this method exhibit higher quality compared to those obtained by several existing methods. Moreover, the generated samples can be effectively utilized to estimate the conditional density and related statistical quantities, such as conditional mean and conditional standard deviation.<|reference_end|>
arxiv
@article{huang2024schrodinger, title={Schr\"odinger bridge based deep conditional generative learning}, author={Hanwen Huang}, journal={arXiv preprint arXiv:2409.17294}, year={2024}, archivePrefix={arXiv}, eprint={2409.17294}, primaryClass={stat.ML cs.LG} }
huang2024schrodinger
arxiv-662006
2409.17295
Electromagnetically Consistent Optimization Algorithms for the Global Design of RIS
<|reference_start|>Electromagnetically Consistent Optimization Algorithms for the Global Design of RIS: The reconfigurable intelligent surface is an emerging technology for wireless communications. We model it as an inhomogeneous boundary of surface impedance, and consider various optimization problems that offer different tradeoffs in terms of performance and implementation complexity. The considered non-convex optimization problems are reformulated as a sequence of approximating linear quadratically constrained or semidefinite programs, which are proved to have a polynomial complexity and to converge monotonically in the objective value.<|reference_end|>
arxiv
@article{shabir2024electromagnetically, title={Electromagnetically Consistent Optimization Algorithms for the Global Design of RIS}, author={M. W. Shabir and M. Di Renzo and A. Zappone and M. Debbah}, journal={arXiv preprint arXiv:2409.17295}, year={2024}, archivePrefix={arXiv}, eprint={2409.17295}, primaryClass={cs.IT math.IT} }
shabir2024electromagnetically
arxiv-662007
2409.17298
Sparsity, Regularization and Causality in Agricultural Yield: The Case of Paddy Rice in Peru
<|reference_start|>Sparsity, Regularization and Causality in Agricultural Yield: The Case of Paddy Rice in Peru: This study introduces a novel approach that integrates agricultural census data with remotely sensed time series to develop precise predictive models for paddy rice yield across various regions of Peru. By utilizing sparse regression and Elastic-Net regularization techniques, the study identifies causal relationships between key remotely sensed variables-such as NDVI, precipitation, and temperature-and agricultural yield. To further enhance prediction accuracy, the first- and second-order dynamic transformations (velocity and acceleration) of these variables are applied, capturing non-linear patterns and delayed effects on yield. The findings highlight the improved predictive performance when combining regularization techniques with climatic and geospatial variables, enabling more precise forecasts of yield variability. The results confirm the existence of causal relationships in the Granger sense, emphasizing the value of this methodology for strategic agricultural management. This contributes to more efficient and sustainable production in paddy rice cultivation.<|reference_end|>
arxiv
@article{guzman-lopez2024sparsity, title={Sparsity, Regularization and Causality in Agricultural Yield: The Case of Paddy Rice in Peru}, author={Rita Rocio Guzman-Lopez and Luis Huamanchumo and Kevin Fernandez and Oscar Cutipa-Luque and Yhon Tiahuallpa and Helder Rojas}, journal={arXiv preprint arXiv:2409.17298}, year={2024}, archivePrefix={arXiv}, eprint={2409.17298}, primaryClass={stat.ME cs.LG stat.AP stat.ML} }
guzman-lopez2024sparsity
arxiv-662008
2409.17299
High-Performance Implementation of the Optimized Event Generator for Strong-Field QED Plasma Simulations
<|reference_start|>High-Performance Implementation of the Optimized Event Generator for Strong-Field QED Plasma Simulations: Numerical simulation of strong-field quantum electrodynamics (SFQED) processes is an essential step towards current and future high-intensity laser experiments. The complexity of SFQED phenomena and their stochastic nature make them extremely computationally challenging, requiring the use of supercomputers for realistic simulations. Recently, we have presented a novel approach to numerical simulation of SFQED processes based on an accurate approximation of precomputed rates, which minimizes the number of rate calculations per QED event. The current paper is focused on the high-performance implementation of this method, including vectorization of resource-intensive kernels and improvement of parallel computing efficiency. Using two codes, PICADOR and hi-$\chi$ (the latter being free and publicly available), we demonstrate significant reduction in computation time due to these improvements. We hope that the proposed approach can be applied in other codes for the numerical simulation of SFQED processes.<|reference_end|>
arxiv
@article{panova2024high-performance, title={High-Performance Implementation of the Optimized Event Generator for Strong-Field QED Plasma Simulations}, author={Elena Panova and Valentin Volokitin and Aleksei Bashinov and Alexander Muraviev and Evgeny Efimenko and Iosif Meyerov}, journal={arXiv preprint arXiv:2409.17299}, year={2024}, archivePrefix={arXiv}, eprint={2409.17299}, primaryClass={physics.comp-ph cs.DC} }
panova2024high-performance
arxiv-662009
2409.17300
Neural Network Plasticity and Loss Sharpness
<|reference_start|>Neural Network Plasticity and Loss Sharpness: In recent years, continual learning, a prediction setting in which the problem environment may evolve over time, has become an increasingly popular research field due to the framework's gearing towards complex, non-stationary objectives. Learning such objectives requires plasticity, or the ability of a neural network to adapt its predictions to a different task. Recent findings indicate that plasticity loss on new tasks is highly related to loss landscape sharpness in non-stationary RL frameworks. We explore the usage of sharpness regularization techniques, which seek out smooth minima and have been touted for their generalization capabilities in vanilla prediction settings, in efforts to combat plasticity loss. Our findings indicate that such techniques have no significant effect on reducing plasticity loss.<|reference_end|>
arxiv
@article{koster2024neural, title={Neural Network Plasticity and Loss Sharpness}, author={Max Koster and Jude Kukla}, journal={arXiv preprint arXiv:2409.17300}, year={2024}, archivePrefix={arXiv}, eprint={2409.17300}, primaryClass={cs.LG cs.AI} }
koster2024neural
arxiv-662010
2409.17302
Riemannian conjugate Sobolev gradients and their application to compute ground states of BECs
<|reference_start|>Riemannian conjugate Sobolev gradients and their application to compute ground states of BECs: This work considers the numerical computation of ground states of rotating Bose-Einstein condensates (BECs) which can exhibit a multiscale lattice of quantized vortices. This problem involves the minimization of an energy functional on a Riemannian manifold. For this we apply the framework of nonlinear conjugate gradient methods in combination with the paradigm of Sobolev gradients to investigate different metrics. Here we build on previous work that proposed to enhance the convergence of regular Riemannian gradients methods by an adaptively changing metric that is based on the current energy. In this work, we extend this approach to the branch of Riemannian conjugate gradient (CG) methods and investigate the arising schemes numerically. Special attention is given to the selection of the momentum parameter in search direction and how this affects the performance of the resulting schemes. As known from similar applications, we find that the choice of the momentum parameter plays a critical role, with certain parameters reducing the number of iterations required to achieve a specified tolerance by a significant factor. Besides the influence of the momentum parameters, we also investigate how the methods with adaptive metric compare to the corresponding realizations with a standard $H^1_0$-metric. As one of our main findings, the results of the numerical experiments show that the Riemannian CG method with the proposed adaptive metric along with a Polak-Ribi\'ere or Hestenes-Stiefel-type momentum parameter show the best performance and highest robustness compared to the other CG methods that were part of our numerical study.<|reference_end|>
arxiv
@article{ai2024riemannian, title={Riemannian conjugate Sobolev gradients and their application to compute ground states of BECs}, author={Yueshan Ai and Patrick Henning and Mahima Yadav and Sitong Yuan}, journal={arXiv preprint arXiv:2409.17302}, year={2024}, archivePrefix={arXiv}, eprint={2409.17302}, primaryClass={math.NA cs.NA} }
ai2024riemannian
arxiv-662011
2409.17304
Democratizing Signal Processing and Machine Learning: Math Learning Equity for Elementary and Middle School Students
<|reference_start|>Democratizing Signal Processing and Machine Learning: Math Learning Equity for Elementary and Middle School Students: Signal Processing (SP) and Machine Learning (ML) rely on good math and coding knowledge, in particular, linear algebra, probability, and complex numbers. A good grasp of these relies on scalar algebra learned in middle school. The ability to understand and use scalar algebra well, in turn, relies on a good foundation in basic arithmetic. Because of various systemic barriers, many students are not able to build a strong foundation in arithmetic in elementary school. This leads them to struggle with algebra and everything after that. Since math learning is cumulative, the gap between those without a strong early foundation and everyone else keeps increasing over the school years and becomes difficult to fill in college. In this article we discuss how SP faculty and graduate students can play an important role in starting, and participating in, university-run (or other) out-of-school math support programs to supplement students' learning. Two example programs run by the authors (CyMath at ISU and Ab7G at Purdue) are briefly described. The second goal of this article is to use our perspective as SP, and engineering, educators who have seen the long-term impact of elementary school math teaching policies, to provide some simple almost zero cost suggestions that elementary schools could adopt to improve math learning: (i) more math practice in school, (ii) send small amounts of homework (individual work is critical in math), and (iii) parent awareness (math resources, need for early math foundation, clear in-school test information and sharing of feedback from the tests). In summary, good early math support (in school and through out-of-school programs) can help make SP and ML more accessible.<|reference_end|>
arxiv
@article{vaswani2024democratizing, title={Democratizing Signal Processing and Machine Learning: Math Learning Equity for Elementary and Middle School Students}, author={Namrata Vaswani and Mohamed Y. Selim and Renee Serrell Gibert}, journal={arXiv preprint arXiv:2409.17304}, year={2024}, archivePrefix={arXiv}, eprint={2409.17304}, primaryClass={math.HO cs.CY cs.LG} }
vaswani2024democratizing
arxiv-662012
2409.17306
Bounds on the Complete Forcing Number of Graphs
<|reference_start|>Bounds on the Complete Forcing Number of Graphs: A forcing set for a perfect matching of a graph is defined as a subset of the edges of that perfect matching such that there exists a unique perfect matching containing it. A complete forcing set for a graph is a subset of its edges, such that it intersects the edges of every perfect matching in a forcing set of that perfect matching. The size of a smallest complete forcing set of a graph is called the complete forcing number of the graph. In this paper, we derive new upper bounds for the complete forcing number of graphs in terms of other graph theoretical parameters such as the degeneracy or the spectral radius of the graph. We show that for graphs with the number of edges more than some constant times the number of vertices, our result outperforms the best known upper bound for the complete forcing number. For the set of edge-transitive graphs, we present a lower bound for the complete forcing number in terms of maximum forcing number. This result in particular is applied to the hypercube graphs and Cartesian powers of even cycles.<|reference_end|>
arxiv
@article{ebrahimi2024bounds, title={Bounds on the Complete Forcing Number of Graphs}, author={Javad B. Ebrahimi and Aref Nemayande and Elahe Tohidi}, journal={arXiv preprint arXiv:2409.17306}, year={2024}, archivePrefix={arXiv}, eprint={2409.17306}, primaryClass={math.CO cs.DM} }
ebrahimi2024bounds
arxiv-662013
2409.17308
Consistent estimation of generative model representations in the data kernel perspective space
<|reference_start|>Consistent estimation of generative model representations in the data kernel perspective space: Generative models, such as large language models and text-to-image diffusion models, produce relevant information when presented a query. Different models may produce different information when presented the same query. As the landscape of generative models evolves, it is important to develop techniques to study and analyze differences in model behaviour. In this paper we present novel theoretical results for embedding-based representations of generative models in the context of a set of queries. We establish sufficient conditions for the consistent estimation of the model embeddings in situations where the query set and the number of models grow.<|reference_end|>
arxiv
@article{acharyya2024consistent, title={Consistent estimation of generative model representations in the data kernel perspective space}, author={Aranyak Acharyya and Michael W. Trosset and Carey E. Priebe and Hayden S. Helm}, journal={arXiv preprint arXiv:2409.17308}, year={2024}, archivePrefix={arXiv}, eprint={2409.17308}, primaryClass={cs.LG math.ST stat.TH} }
acharyya2024consistent
arxiv-662014
2409.17311
A Hybrid Quantum-Classical AI-Based Detection Strategy for Generative Adversarial Network-Based Deepfake Attacks on an Autonomous Vehicle Traffic Sign Classification System
<|reference_start|>A Hybrid Quantum-Classical AI-Based Detection Strategy for Generative Adversarial Network-Based Deepfake Attacks on an Autonomous Vehicle Traffic Sign Classification System: The perception module in autonomous vehicles (AVs) relies heavily on deep learning-based models to detect and identify various objects in their surrounding environment. An AV traffic sign classification system is integral to this module, which helps AVs recognize roadway traffic signs. However, adversarial attacks, in which an attacker modifies or alters the image captured for traffic sign recognition, could lead an AV to misrecognize the traffic signs and cause hazardous consequences. Deepfake presents itself as a promising technology to be used for such adversarial attacks, in which a deepfake traffic sign would replace a real-world traffic sign image before the image is fed to the AV traffic sign classification system. In this study, the authors present how a generative adversarial network-based deepfake attack can be crafted to fool the AV traffic sign classification systems. The authors developed a deepfake traffic sign image detection strategy leveraging hybrid quantum-classical neural networks (NNs). This hybrid approach utilizes amplitude encoding to represent the features of an input traffic sign image using quantum states, which substantially reduces the memory requirement compared to its classical counterparts. The authors evaluated this hybrid deepfake detection approach along with several baseline classical convolutional NNs on real-world and deepfake traffic sign images. The results indicate that the hybrid quantum-classical NNs for deepfake detection could achieve similar or higher performance than the baseline classical convolutional NNs in most cases while requiring less than one-third of the memory required by the shallowest classical convolutional NN considered in this study.<|reference_end|>
arxiv
@article{salek2024a, title={A Hybrid Quantum-Classical AI-Based Detection Strategy for Generative Adversarial Network-Based Deepfake Attacks on an Autonomous Vehicle Traffic Sign Classification System}, author={M Sabbir Salek and Shaozhi Li and Mashrur Chowdhury}, journal={arXiv preprint arXiv:2409.17311}, year={2024}, archivePrefix={arXiv}, eprint={2409.17311}, primaryClass={cs.AI cs.ET} }
salek2024a
arxiv-662015
2409.17312
BabyLlama-2: Ensemble-Distilled Models Consistently Outperform Teachers With Limited Data
<|reference_start|>BabyLlama-2: Ensemble-Distilled Models Consistently Outperform Teachers With Limited Data: We present BabyLlama-2, a 345 million parameter model distillation-pretrained from two teachers on a 10 million word corpus for the BabyLM competition. On BLiMP and SuperGLUE benchmarks, BabyLlama-2 outperforms baselines trained on both 10 and 100 million word datasets with the same data mix, as well as its teacher models. Through an extensive hyperparameter sweep, we demonstrate that the advantages of distillation cannot be attributed to suboptimal hyperparameter selection of the teachers. Our findings underscore the need for further investigation into distillation techniques, particularly in data-limited settings.<|reference_end|>
arxiv
@article{tastet2024babyllama-2, title={BabyLlama-2: Ensemble-Distilled Models Consistently Outperform Teachers With Limited Data}, author={Jean-Loup Tastet and Inar Timiryasov}, journal={arXiv preprint arXiv:2409.17312}, year={2024}, archivePrefix={arXiv}, eprint={2409.17312}, primaryClass={cs.CL cs.LG} }
tastet2024babyllama-2
arxiv-662016
2409.17313
Navigating the Nuances: A Fine-grained Evaluation of Vision-Language Navigation
<|reference_start|>Navigating the Nuances: A Fine-grained Evaluation of Vision-Language Navigation: This study presents a novel evaluation framework for the Vision-Language Navigation (VLN) task. It aims to diagnose current models for various instruction categories at a finer-grained level. The framework is structured around the context-free grammar (CFG) of the task. The CFG serves as the basis for the problem decomposition and the core premise of the instruction categories design. We propose a semi-automatic method for CFG construction with the help of Large-Language Models (LLMs). Then, we induct and generate data spanning five principal instruction categories (i.e. direction change, landmark recognition, region recognition, vertical movement, and numerical comprehension). Our analysis of different models reveals notable performance discrepancies and recurrent issues. The stagnation of numerical comprehension, heavy selective biases over directional concepts, and other interesting findings contribute to the development of future language-guided navigation systems.<|reference_end|>
arxiv
@article{wang2024navigating, title={Navigating the Nuances: A Fine-grained Evaluation of Vision-Language Navigation}, author={Zehao Wang and Minye Wu and Yixin Cao and Yubo Ma and Meiqi Chen and Tinne Tuytelaars}, journal={arXiv preprint arXiv:2409.17313}, year={2024}, archivePrefix={arXiv}, eprint={2409.17313}, primaryClass={cs.CV cs.AI cs.CL} }
wang2024navigating
arxiv-662017
2409.17314
A Mixed finite element method for the velocity-pseudostress formulation of the Oseen eigenvalue problem
<|reference_start|>A Mixed finite element method for the velocity-pseudostress formulation of the Oseen eigenvalue problem: In this paper, we introduce and analyze a mixed formulation for the Oseen eigenvalue problem by introducing the pseudostress tensor as a new unknown, allowing us to eliminate the fluid pressure. The well-posedness of the solution operator is established using a fixed-point argument. For the numerical analysis, we use the tensorial versions of Raviart-Thomas and Brezzi-Douglas-Marini elements to approximate the pseudostress, and piecewise polynomials for the velocity. Convergence and a priori error estimates are derived based on compact operator theory. We present a series of numerical tests in two and three dimensions to confirm the theoretical findings.<|reference_end|>
arxiv
@article{lepe2024a, title={A Mixed finite element method for the velocity-pseudostress formulation of the Oseen eigenvalue problem}, author={Felipe Lepe and Gonzalo Rivera and Jesus Vellojin}, journal={arXiv preprint arXiv:2409.17314}, year={2024}, archivePrefix={arXiv}, eprint={2409.17314}, primaryClass={math.NA cs.NA} }
lepe2024a
arxiv-662018
2409.17315
KIPPS: Knowledge infusion in Privacy Preserving Synthetic Data Generation
<|reference_start|>KIPPS: Knowledge infusion in Privacy Preserving Synthetic Data Generation: The integration of privacy measures, including differential privacy techniques, ensures a provable privacy guarantee for the synthetic data. However, challenges arise for Generative Deep Learning models when tasked with generating realistic data, especially in critical domains such as Cybersecurity and Healthcare. Generative Models optimized for continuous data struggle to model discrete and non-Gaussian features that have domain constraints. Challenges increase when the training datasets are limited and not diverse. In such cases, generative models create synthetic data that repeats sensitive features, which is a privacy risk. Moreover, generative models face difficulties comprehending attribute constraints in specialized domains. This leads to the generation of unrealistic data that impacts downstream accuracy. To address these issues, this paper proposes a novel model, KIPPS, that infuses Domain and Regulatory Knowledge from Knowledge Graphs into Generative Deep Learning models for enhanced Privacy Preserving Synthetic data generation. The novel framework augments the training of generative models with supplementary context about attribute values and enforces domain constraints during training. This added guidance enhances the model's capacity to generate realistic and domain-compliant synthetic data. The proposed model is evaluated on real-world datasets, specifically in the domains of Cybersecurity and Healthcare, where domain constraints and rules add to the complexity of the data. Our experiments evaluate the privacy resilience and downstream accuracy of the model against benchmark methods, demonstrating its effectiveness in addressing the balance between privacy preservation and data accuracy in complex domains.<|reference_end|>
arxiv
@article{kotal2024kipps, title={KIPPS: Knowledge infusion in Privacy Preserving Synthetic Data Generation}, author={Anantaa Kotal and Anupam Joshi}, journal={arXiv preprint arXiv:2409.17315}, year={2024}, archivePrefix={arXiv}, eprint={2409.17315}, primaryClass={cs.LG cs.AI cs.CR} }
kotal2024kipps
arxiv-662019
2409.17316
Bi-TTA: Bidirectional Test-Time Adapter for Remote Physiological Measurement
<|reference_start|>Bi-TTA: Bidirectional Test-Time Adapter for Remote Physiological Measurement: Remote photoplethysmography (rPPG) is gaining prominence for its non-invasive approach to monitoring physiological signals using only cameras. Despite its promise, the adaptability of rPPG models to new, unseen domains is hindered due to the environmental sensitivity of physiological signals. To address this, we pioneer the Test-Time Adaptation (TTA) in rPPG, enabling the adaptation of pre-trained models to the target domain during inference, sidestepping the need for annotations or source data due to privacy considerations. Particularly, utilizing only the user's face video stream as the accessible target domain data, the rPPG model is adjusted by tuning on each single instance it encounters. However, 1) TTA algorithms are designed predominantly for classification tasks, ill-suited in regression tasks such as rPPG due to inadequate supervision. 2) Tuning pre-trained models in a single-instance manner introduces variability and instability, posing challenges to effectively filtering domain-relevant from domain-irrelevant features while simultaneously preserving the learned information. To overcome these challenges, we present Bi-TTA, a novel expert knowledge-based Bidirectional Test-Time Adapter framework. Specifically, leveraging two expert-knowledge priors for providing self-supervision, our Bi-TTA primarily comprises two modules: a prospective adaptation (PA) module using sharpness-aware minimization to eliminate domain-irrelevant noise, enhancing the stability and efficacy during the adaptation process, and a retrospective stabilization (RS) module to dynamically reinforce crucial learned model parameters, averting performance degradation caused by overfitting or catastrophic forgetting. To this end, we established a large-scale benchmark for rPPG tasks under TTA protocol. The experimental results demonstrate the significant superiority of our approach over the state-of-the-art.<|reference_end|>
arxiv
@article{li2024bi-tta, title={Bi-TTA: Bidirectional Test-Time Adapter for Remote Physiological Measurement}, author={Haodong Li and Hao Lu and Ying-Cong Chen}, journal={arXiv preprint arXiv:2409.17316}, year={2024}, archivePrefix={arXiv}, eprint={2409.17316}, primaryClass={cs.CV} }
li2024bi-tta
arxiv-662020
2409.17317
Towards a complete classification of holographic entropy inequalities
<|reference_start|>Towards a complete classification of holographic entropy inequalities: We propose a deterministic method to find all holographic entropy inequalities and prove the completeness of our method. We use a triality between holographic entropy inequalities, contraction maps and partial cubes. More specifically, the validity of a holographic entropy inequality is implied by the existence of a contraction map, which we prove to be equivalent to finding an isometric embedding of a contracted graph. Thus, by virtue of the completeness of the contraction map proof method, the problem of finding all holographic entropy inequalities is equivalent to the problem of finding all contraction maps, which we translate to a problem of finding all image graph partial cubes. We give an algorithmic solution to this problem and characterize the complexity of our method. We also demonstrate interesting by-products, most notably, a procedure to generate candidate quantum entropy inequalities.<|reference_end|>
arxiv
@article{bao2024towards, title={Towards a complete classification of holographic entropy inequalities}, author={Ning Bao and Keiichiro Furuya and Joydeep Naskar}, journal={arXiv preprint arXiv:2409.17317}, year={2024}, archivePrefix={arXiv}, eprint={2409.17317}, primaryClass={hep-th cs.DM quant-ph} }
bao2024towards
arxiv-662021
2409.17320
Accelerating Multi-Block Constrained Optimization Through Learning to Optimize
<|reference_start|>Accelerating Multi-Block Constrained Optimization Through Learning to Optimize: Learning to Optimize (L2O) approaches, including algorithm unrolling, plug-and-play methods, and hyperparameter learning, have garnered significant attention and have been successfully applied to the Alternating Direction Method of Multipliers (ADMM) and its variants. However, the natural extension of L2O to multi-block ADMM-type methods remains largely unexplored. Such an extension is critical, as multi-block methods leverage the separable structure of optimization problems, offering substantial reductions in per-iteration complexity. Given that classical multi-block ADMM does not guarantee convergence, the Majorized Proximal Augmented Lagrangian Method (MPALM), which shares a similar form with multi-block ADMM and ensures convergence, is more suitable in this setting. Despite its theoretical advantages, MPALM's performance is highly sensitive to the choice of penalty parameters. To address this limitation, we propose a novel L2O approach that adaptively selects this hyperparameter using supervised learning. We demonstrate the versatility and effectiveness of our method by applying it to the Lasso problem and the optimal transport problem. Our numerical results show that the proposed framework outperforms popular alternatives. Given its applicability to generic linearly constrained composite optimization problems, this work opens the door to a wide range of potential real-world applications.<|reference_end|>
arxiv
@article{liang2024accelerating, title={Accelerating Multi-Block Constrained Optimization Through Learning to Optimize}, author={Ling Liang and Cameron Austin and Haizhao Yang}, journal={arXiv preprint arXiv:2409.17320}, year={2024}, archivePrefix={arXiv}, eprint={2409.17320}, primaryClass={math.OC cs.LG} }
liang2024accelerating
arxiv-662022
2409.17322
The Evolution of Emojis for Sharing Emotions: A Systematic Review of the HCI Literature
<|reference_start|>The Evolution of Emojis for Sharing Emotions: A Systematic Review of the HCI Literature: With the prevalence of instant messaging and social media platforms, emojis have become important artifacts for expressing emotions and feelings in our daily lives. We ask how HCI researchers have examined the role and evolution of emojis in sharing emotions over the past 10 years. We conducted a systematic literature review of papers addressing emojis employed for emotion communication between users. After screening more than 1,000 articles, we identified 42 articles of studies analyzing ways and systems that enable users to share emotions with emojis. Two main themes described how these papers have (1) improved how users select the right emoji from an increasing emoji lexicon, and (2) employed emojis in new ways and digital materials to enhance communication. We also discovered an increasingly broad scope of functionality across appearance, medium, and affordance. We discuss and offer insights into potential opportunities and challenges emojis will bring for HCI research.<|reference_end|>
arxiv
@article{chiang2024the, title={The Evolution of Emojis for Sharing Emotions: A Systematic Review of the HCI Literature}, author={Charles Chiang and Diego Gomez-Zara}, journal={arXiv preprint arXiv:2409.17322}, year={2024}, archivePrefix={arXiv}, eprint={2409.17322}, primaryClass={cs.HC} }
chiang2024the
arxiv-662023
2409.17326
How Transliterations Improve Crosslingual Alignment
<|reference_start|>How Transliterations Improve Crosslingual Alignment: Recent studies have shown that post-aligning multilingual pretrained language models (mPLMs) using alignment objectives on both original and transliterated data can improve crosslingual alignment. This improvement further leads to better crosslingual transfer performance. However, it remains unclear how and why a better crosslingual alignment is achieved, as this technique only involves transliterations, and does not use any parallel data. This paper attempts to explicitly evaluate the crosslingual alignment and identify the key elements in transliteration-based approaches that contribute to better performance. For this, we train multiple models under varying setups for two pairs of related languages: (1) Polish and Ukrainian and (2) Hindi and Urdu. To assess alignment, we define four types of similarities based on sentence representations. Our experiments show that adding transliterations alone improves the overall similarities, even for random sentence pairs. With the help of auxiliary alignment objectives, especially the contrastive objective, the model learns to distinguish matched from random pairs, leading to better alignments. However, we also show that better alignment does not always yield better downstream performance, suggesting that further research is needed to clarify the connection between alignment and performance.<|reference_end|>
arxiv
@article{liu2024how, title={How Transliterations Improve Crosslingual Alignment}, author={Yihong Liu and Mingyang Wang and Amir Hossein Kargaran and Ayyoob Imani and Orgest Xhelili and Haotian Ye and Chunlan Ma and Fran\c{c}ois Yvon and Hinrich Sch\"utze}, journal={arXiv preprint arXiv:2409.17326}, year={2024}, archivePrefix={arXiv}, eprint={2409.17326}, primaryClass={cs.CL} }
liu2024how
arxiv-662024
2409.17328
The poison of dimensionality
<|reference_start|>The poison of dimensionality: This paper advances the understanding of how the size of a machine learning model affects its vulnerability to poisoning, despite state-of-the-art defenses. Given isotropic random honest feature vectors and the geometric median (or clipped mean) as the robust gradient aggregator rule, we essentially prove that, perhaps surprisingly, linear and logistic regressions with $D \geq 169 H^2/P^2$ parameters are subject to arbitrary model manipulation by poisoners, where $H$ and $P$ are the numbers of honestly labeled and poisoned data points used for training. Our experiments go on exposing a fundamental tradeoff between augmenting model expressivity and increasing the poisoners' attack surface, on both synthetic data, and on MNIST & FashionMNIST data for linear classifiers with random features. We also discuss potential implications for source-based learning and neural nets.<|reference_end|>
arxiv
@article{hoang2024the, title={The poison of dimensionality}, author={L\^e-Nguy\^en Hoang}, journal={arXiv preprint arXiv:2409.17328}, year={2024}, archivePrefix={arXiv}, eprint={2409.17328}, primaryClass={cs.LG cs.CR stat.ML} }
hoang2024the
arxiv-662025
2409.17329
Dynamic direct access of MSO query evaluation over strings
<|reference_start|>Dynamic direct access of MSO query evaluation over strings: We study the problem of evaluating a Monadic Second Order (MSO) query over strings under updates in the setting of direct access. We present an algorithm that, given an MSO query with first-order free variables represented by an unambiguous variable-set automaton $\mathcal{A}$ with state set $Q$ and variables $X$ and a string $s$, computes a data structure in time $\mathcal{O}(|Q|^\omega\cdot |X|^2 \cdot |s|)$ and, then, given an index $i$ retrieves, using the data structure, the $i$-th output of the evaluation of $\mathcal{A}$ over $s$ in time $\mathcal{O}(|Q|^\omega \cdot |X|^3 \cdot \log(|s|)^2)$ where $\omega$ is the exponent for matrix multiplication. Ours is the first efficient direct access algorithm for MSO query evaluation over strings; such algorithms so far had only been studied for first-order queries and conjunctive queries over relational data. Our algorithm gives the answers in lexicographic order where, in contrast to the setting of conjunctive queries, the order between variables can be freely chosen by the user without degrading the runtime. Moreover, our data structure can be updated efficiently after changes to the input string, allowing more powerful updates than in the enumeration literature, e.g.~efficient deletion of substrings, concatenation and splitting of strings, and cut-and-paste operations. Our approach combines a matrix representation of MSO queries and a novel data structure for dynamic word problems over semi-groups which yields an overall algorithm that is elegant and easy to formulate.<|reference_end|>
arxiv
@article{bourhis2024dynamic, title={Dynamic direct access of MSO query evaluation over strings}, author={Pierre Bourhis and Florent Capelli and Stefan Mengel and Cristian Riveros}, journal={arXiv preprint arXiv:2409.17329}, year={2024}, archivePrefix={arXiv}, eprint={2409.17329}, primaryClass={cs.DB cs.DS cs.FL} }
bourhis2024dynamic
arxiv-662026
2409.17330
VL4AD: Vision-Language Models Improve Pixel-wise Anomaly Detection
<|reference_start|>VL4AD: Vision-Language Models Improve Pixel-wise Anomaly Detection: Semantic segmentation networks have achieved significant success under the assumption of independent and identically distributed data. However, these networks often struggle to detect anomalies from unknown semantic classes due to the limited set of visual concepts they are typically trained on. To address this issue, anomaly segmentation often involves fine-tuning on outlier samples, necessitating additional efforts for data collection, labeling, and model retraining. Seeking to avoid this cumbersome work, we take a different approach and propose to incorporate Vision-Language (VL) encoders into existing anomaly detectors to leverage the semantically broad VL pre-training for improved outlier awareness. Additionally, we propose a new scoring function that enables data- and training-free outlier supervision via textual prompts. The resulting VL4AD model, which includes max-logit prompt ensembling and a class-merging strategy, achieves competitive performance on widely used benchmark datasets, thereby demonstrating the potential of vision-language models for pixel-wise anomaly detection.<|reference_end|>
arxiv
@article{zhong2024vl4ad, title={VL4AD: Vision-Language Models Improve Pixel-wise Anomaly Detection}, author={Liangyu Zhong and Joachim Sicking and Fabian H\"uger and Hanno Gottschalk}, journal={arXiv preprint arXiv:2409.17330}, year={2024}, archivePrefix={arXiv}, eprint={2409.17330}, primaryClass={cs.CV} }
zhong2024vl4ad
arxiv-662027
2409.17331
ChatCam: Empowering Camera Control through Conversational AI
<|reference_start|>ChatCam: Empowering Camera Control through Conversational AI: Cinematographers adeptly capture the essence of the world, crafting compelling visual narratives through intricate camera movements. Witnessing the strides made by large language models in perceiving and interacting with the 3D world, this study explores their capability to control cameras with human language guidance. We introduce ChatCam, a system that navigates camera movements through conversations with users, mimicking a professional cinematographer's workflow. To achieve this, we propose CineGPT, a GPT-based autoregressive model for text-conditioned camera trajectory generation. We also develop an Anchor Determinator to ensure precise camera trajectory placement. ChatCam understands user requests and employs our proposed tools to generate trajectories, which can be used to render high-quality video footage on radiance field representations. Our experiments, including comparisons to state-of-the-art approaches and user studies, demonstrate our approach's ability to interpret and execute complex instructions for camera operation, showing promising applications in real-world production settings.<|reference_end|>
arxiv
@article{liu2024chatcam, title={ChatCam: Empowering Camera Control through Conversational AI}, author={Xinhang Liu and Yu-Wing Tai and Chi-Keung Tang}, journal={arXiv preprint arXiv:2409.17331}, year={2024}, archivePrefix={arXiv}, eprint={2409.17331}, primaryClass={cs.CV} }
liu2024chatcam
arxiv-662028
2409.17332
Block Expanded DINORET: Adapting Natural Domain Foundation Models for Retinal Imaging Without Catastrophic Forgetting
<|reference_start|>Block Expanded DINORET: Adapting Natural Domain Foundation Models for Retinal Imaging Without Catastrophic Forgetting: Integrating deep learning into medical imaging is poised to greatly advance diagnostic methods but it faces challenges with generalizability. Foundation models, based on self-supervised learning, address these issues and improve data efficiency. Natural domain foundation models show promise for medical imaging, but systematic research evaluating domain adaptation, especially using self-supervised learning and parameter-efficient fine-tuning, remains underexplored. Additionally, little research addresses the issue of catastrophic forgetting during fine-tuning of foundation models. We adapted the DINOv2 vision transformer for retinal imaging classification tasks using self-supervised learning and generated two novel foundation models termed DINORET and BE DINORET. Publicly available color fundus photographs were employed for model development and subsequent fine-tuning for diabetic retinopathy staging and glaucoma detection. We introduced block expansion as a novel domain adaptation strategy and assessed the models for catastrophic forgetting. Models were benchmarked to RETFound, a state-of-the-art foundation model in ophthalmology. DINORET and BE DINORET demonstrated competitive performance on retinal imaging tasks, with the block expanded model achieving the highest scores on most datasets. Block expansion successfully mitigated catastrophic forgetting. Our few-shot learning studies indicated that DINORET and BE DINORET outperform RETFound in terms of data-efficiency. This study highlights the potential of adapting natural domain vision models to retinal imaging using self-supervised learning and block expansion. BE DINORET offers robust performance without sacrificing previously acquired capabilities. Our findings suggest that these methods could enable healthcare institutions to develop tailored vision models for their patient populations, enhancing global healthcare inclusivity.<|reference_end|>
arxiv
@article{zoellin2024block, title={Block Expanded DINORET: Adapting Natural Domain Foundation Models for Retinal Imaging Without Catastrophic Forgetting}, author={Jay Zoellin and Colin Merk and Mischa Buob and Amr Saad and Samuel Giesser and Tahm Spitznagel and Ferhat Turgut and Rui Santos and Yukun Zhou and Sigfried Wagner and Pearse A. Keane and Yih Chung Tham and Delia Cabrera DeBuc and Matthias D. Becker and Gabor M. Somfai}, journal={arXiv preprint arXiv:2409.17332}, year={2024}, archivePrefix={arXiv}, eprint={2409.17332}, primaryClass={cs.CV cs.AI} }
zoellin2024block
arxiv-662029
2409.17335
Non-asymptotic Convergence of Training Transformers for Next-token Prediction
<|reference_start|>Non-asymptotic Convergence of Training Transformers for Next-token Prediction: Transformers have achieved extraordinary success in modern machine learning due to their excellent ability to handle sequential data, especially in next-token prediction (NTP) tasks. However, the theoretical understanding of their performance in NTP is limited, with existing studies focusing mainly on asymptotic performance. This paper provides a fine-grained non-asymptotic analysis of the training dynamics of a one-layer transformer consisting of a self-attention module followed by a feed-forward layer. We first characterize the essential structural properties of training datasets for NTP using a mathematical framework based on partial orders. Then, we design a two-stage training algorithm, where the pre-processing stage for training the feed-forward layer and the main stage for training the attention layer exhibit fast convergence performance. Specifically, both layers converge sub-linearly to the direction of their corresponding max-margin solutions. We also show that the cross-entropy loss enjoys a linear convergence rate. Furthermore, we show that the trained transformer presents non-trivial prediction ability with dataset shift, which sheds light on the remarkable generalization performance of transformers. Our analysis technique involves the development of novel properties on the attention gradient and further in-depth analysis of how these properties contribute to the convergence of the training process. Our experiments further validate our theoretical findings.<|reference_end|>
arxiv
@article{huang2024non-asymptotic, title={Non-asymptotic Convergence of Training Transformers for Next-token Prediction}, author={Ruiquan Huang and Yingbin Liang and Jing Yang}, journal={arXiv preprint arXiv:2409.17335}, year={2024}, archivePrefix={arXiv}, eprint={2409.17335}, primaryClass={cs.LG stat.ML} }
huang2024non-asymptotic
arxiv-662030
2409.17336
The Technology of Outrage: Bias in Artificial Intelligence
<|reference_start|>The Technology of Outrage: Bias in Artificial Intelligence: Artificial intelligence and machine learning are increasingly used to offload decision making from people. In the past, one of the rationales for this replacement was that machines, unlike people, can be fair and unbiased. Evidence suggests otherwise. We begin by entertaining the ideas that algorithms can replace people and that algorithms cannot be biased. Taken as axioms, these statements quickly lead to absurdity. Spurred on by this result, we investigate the slogans more closely and identify equivocation surrounding the word 'bias.' We diagnose three forms of outrage-intellectual, moral, and political-that are at play when people react emotionally to algorithmic bias. Then we suggest three practical approaches to addressing bias that the AI community could take, which include clarifying the language around bias, developing new auditing methods for intelligent systems, and building certain capabilities into these systems. We conclude by offering a moral regarding the conversations about algorithmic bias that may transfer to other areas of artificial intelligence.<|reference_end|>
arxiv
@article{bridewell2024the, title={The Technology of Outrage: Bias in Artificial Intelligence}, author={Will Bridewell and Paul F. Bello and Selmer Bringsjord}, journal={arXiv preprint arXiv:2409.17336}, year={2024}, archivePrefix={arXiv}, eprint={2409.17336}, primaryClass={cs.CY cs.AI} }
bridewell2024the
arxiv-662031
2409.17340
Koopman-driven grip force prediction through EMG sensing
<|reference_start|>Koopman-driven grip force prediction through EMG sensing: Loss of hand function due to conditions like stroke or multiple sclerosis significantly impacts daily activities. Robotic rehabilitation provides tools to restore hand function, while novel methods based on surface electromyography (sEMG) enable the adaptation of the device's force output according to the user's condition, thereby improving rehabilitation outcomes. This study aims to achieve accurate force estimations during medium wrap grasps using a single sEMG sensor pair, thereby addressing the challenge of escalating sensor requirements for precise predictions. We conducted sEMG measurements on 13 subjects at two forearm positions, validating results with a hand dynamometer. We established flexible signal-processing steps, yielding high peak cross-correlations between the processed sEMG signal (representing meaningful muscle activity) and grip force. Influential parameters were subsequently identified through sensitivity analysis. Leveraging a novel data-driven Koopman operator theory-based approach and problem-specific data lifting techniques, we devised a methodology for the estimation and short-term prediction of grip force from processed sEMG signals. A weighted mean absolute percentage error (wMAPE) of approx. 5.5% was achieved for the estimated grip force, whereas predictions with a 0.5-second prediction horizon resulted in a wMAPE of approx. 17.9%. The methodology proved robust regarding precise electrode positioning, as the effect of sensing position on error metrics was non-significant. The algorithm executes exceptionally fast, processing, estimating, and predicting a 0.5-second sEMG signal batch in just approx. 30 ms, facilitating real-time implementation.<|reference_end|>
arxiv
@article{bazina2024koopman-driven, title={Koopman-driven grip force prediction through EMG sensing}, author={Tomislav Bazina and Ervin Kamenar and Maria Fonoberova and Igor Mezi\'c}, journal={arXiv preprint arXiv:2409.17340}, year={2024}, archivePrefix={arXiv}, eprint={2409.17340}, primaryClass={cs.RO cs.AI math.DS} }
bazina2024koopman-driven
arxiv-662032
2409.17341
Energy-Efficient & Real-Time Computer Vision with Intelligent Skipping via Reconfigurable CMOS Image Sensors
<|reference_start|>Energy-Efficient & Real-Time Computer Vision with Intelligent Skipping via Reconfigurable CMOS Image Sensors: Current video-based computer vision (CV) applications typically suffer from high energy consumption due to reading and processing all pixels in a frame, regardless of their significance. While previous works have attempted to reduce this energy by skipping input patches or pixels and using feedback from the end task to guide the skipping algorithm, the skipping is not performed during the sensor read phase. As a result, these methods can not optimize the front-end sensor energy. Moreover, they may not be suitable for real-time applications due to the long latency of modern CV networks that are deployed in the back-end. To address this challenge, this paper presents a custom-designed reconfigurable CMOS image sensor (CIS) system that improves energy efficiency by selectively skipping uneventful regions or rows within a frame during the sensor's readout phase, and the subsequent analog-to-digital conversion (ADC) phase. A novel masking algorithm intelligently directs the skipping process in real-time, optimizing both the front-end sensor and back-end neural networks for applications including autonomous driving and augmented/virtual reality (AR/VR). Our system can also operate in standard mode without skipping, depending on application needs. We evaluate our hardware-algorithm co-design framework on object detection based on BDD100K and ImageNetVID, and gaze estimation based on OpenEDS, achieving up to 53% reduction in front-end sensor energy while maintaining state-of-the-art (SOTA) accuracy.<|reference_end|>
arxiv
@article{kaiser2024energy-efficient, title={Energy-Efficient \& Real-Time Computer Vision with Intelligent Skipping via Reconfigurable CMOS Image Sensors}, author={Md Abdullah-Al Kaiser and Sreetama Sarkar and Peter A. Beerel and Akhilesh R. Jaiswal and Gourav Datta}, journal={arXiv preprint arXiv:2409.17341}, year={2024}, archivePrefix={arXiv}, eprint={2409.17341}, primaryClass={cs.CV} }
kaiser2024energy-efficient
arxiv-662033
2409.17345
SeaSplat: Representing Underwater Scenes with 3D Gaussian Splatting and a Physically Grounded Image Formation Model
<|reference_start|>SeaSplat: Representing Underwater Scenes with 3D Gaussian Splatting and a Physically Grounded Image Formation Model: We introduce SeaSplat, a method to enable real-time rendering of underwater scenes leveraging recent advances in 3D radiance fields. Underwater scenes are challenging visual environments, as rendering through a medium such as water introduces both range and color dependent effects on image capture. We constrain 3D Gaussian Splatting (3DGS), a recent advance in radiance fields enabling rapid training and real-time rendering of full 3D scenes, with a physically grounded underwater image formation model. Applying SeaSplat to the real-world scenes from SeaThru-NeRF dataset, a scene collected by an underwater vehicle in the US Virgin Islands, and simulation-degraded real-world scenes, not only do we see increased quantitative performance on rendering novel viewpoints from the scene with the medium present, but are also able to recover the underlying true color of the scene and restore renders to be without the presence of the intervening medium. We show that the underwater image formation helps learn scene structure, with better depth maps, as well as show that our improvements maintain the significant computational improvements afforded by leveraging a 3D Gaussian representation.<|reference_end|>
arxiv
@article{yang2024seasplat, title={SeaSplat: Representing Underwater Scenes with 3D Gaussian Splatting and a Physically Grounded Image Formation Model}, author={Daniel Yang and John J. Leonard and Yogesh Girdhar}, journal={arXiv preprint arXiv:2409.17345}, year={2024}, archivePrefix={arXiv}, eprint={2409.17345}, primaryClass={cs.CV cs.RO} }
yang2024seasplat
arxiv-662034
2409.17346
Multi-Tier Preservation of Discrete Morse Smale Complexes in Error-Bounded Lossy Compression
<|reference_start|>Multi-Tier Preservation of Discrete Morse Smale Complexes in Error-Bounded Lossy Compression: We propose a multi-tier paradigm to preserve various components of Morse-Smale complexes in lossy compressed scalar fields, including extrema, saddles, separatrices, and persistence diagrams. Existing error-bounded lossy compressors rarely consider preserving topological structures such as discrete Morse-Smale complexes, leading to significant inaccuracies in data interpretation and potentially resulting in incorrect scientific conclusions. This paper mainly focuses on preserving the Morse-Smale complexes in 2D or 3D discrete scalar fields by precisely preserving critical simplices and the separatrices that connect them. Our approach generates a series of edits during compression time, which are applied to the decompressed data to accurately reconstruct the complexes while maintaining the error within prescribed bounds. We design a workflow that iteratively fixes critical simplices and separatrices in alternating steps until convergence within finite iterations. Our approach addresses diverse application needs by offering users flexible options to balance compression efficiency and feature preservation. To enable effective integration with lossy compressors, we use GPU parallelism to enhance the performance of each workflow component. We conduct experiments on various datasets to demonstrate the effectiveness of our method in accurately preserving Morse-Smale complexes.<|reference_end|>
arxiv
@article{li2024multi-tier, title={Multi-Tier Preservation of Discrete Morse Smale Complexes in Error-Bounded Lossy Compression}, author={Yuxiao Li and Xin Liang and Bei Wang and Hanqi Guo}, journal={arXiv preprint arXiv:2409.17346}, year={2024}, archivePrefix={arXiv}, eprint={2409.17346}, primaryClass={cs.GR} }
li2024multi-tier
arxiv-662035
2409.17348
Language Grounded Multi-agent Communication for Ad-hoc Teamwork
<|reference_start|>Language Grounded Multi-agent Communication for Ad-hoc Teamwork: Multi-Agent Reinforcement Learning (MARL) methods have shown promise in enabling agents to learn a shared communication protocol from scratch and accomplish challenging team tasks. However, the learned language is usually not interpretable to humans or other agents not co-trained together, limiting its applicability in ad-hoc teamwork scenarios. In this work, we propose a novel computational pipeline that aligns the communication space between MARL agents with an embedding space of human natural language by grounding agent communications on synthetic data generated by embodied Large Language Models (LLMs) in interactive teamwork scenarios. Our results demonstrate that introducing language grounding not only maintains task performance but also accelerates the emergence of communication. Furthermore, the learned communication protocols exhibit zero-shot generalization capabilities in ad-hoc teamwork scenarios with unseen teammates and novel task states. This work presents a significant step toward enabling effective communication and collaboration between artificial agents and humans in real-world teamwork settings.<|reference_end|>
arxiv
@article{li2024language, title={Language Grounded Multi-agent Communication for Ad-hoc Teamwork}, author={Huao Li and Hossein Nourkhiz Mahjoub and Behdad Chalaki and Vaishnav Tadiparthi and Kwonjoon Lee and Ehsan Moradi-Pari and Charles Michael Lewis and Katia P Sycara}, journal={arXiv preprint arXiv:2409.17348}, year={2024}, archivePrefix={arXiv}, eprint={2409.17348}, primaryClass={cs.MA} }
li2024language
arxiv-662036
2409.17352
On the Interplay of Clustering and Evolution in the Emergence of Epidemic Outbreaks
<|reference_start|>On the Interplay of Clustering and Evolution in the Emergence of Epidemic Outbreaks: In an increasingly interconnected world, a key scientific challenge is to examine mechanisms that lead to the widespread propagation of contagions, such as misinformation and pathogens, and identify risk factors that can trigger large-scale outbreaks. Underlying both the spread of disease and misinformation epidemics is the evolution of the contagion as it propagates, leading to the emergence of different strains, e.g., through genetic mutations in pathogens and alterations in the information content. Recent studies have revealed that models that do not account for heterogeneity in transmission risks associated with different strains of the circulating contagion can lead to inaccurate predictions. However, existing results on multi-strain spreading assume that the network has a vanishingly small clustering coefficient, whereas clustering is widely known to be a fundamental property of real-world social networks. In this work, we investigate spreading processes that entail evolutionary adaptations on random graphs with tunable clustering and arbitrary degree distributions. We derive a mathematical framework to quantify the epidemic characteristics of a contagion that evolves as it spreads, with the structure of the underlying network as given via arbitrary {\em joint} degree distributions of single-edges and triangles. To the best of our knowledge, our work is the first to jointly analyze the impact of clustering and evolution on the emergence of epidemic outbreaks. We supplement our theoretical finding with numerical simulations and case studies, shedding light on the impact of clustering on contagion spread.<|reference_end|>
arxiv
@article{sood2024on, title={On the Interplay of Clustering and Evolution in the Emergence of Epidemic Outbreaks}, author={Mansi Sood and Hejin Gu and Rashad Eletreby and Swarun Kumar and Chai Wah Wu and Osman Yagan}, journal={arXiv preprint arXiv:2409.17352}, year={2024}, archivePrefix={arXiv}, eprint={2409.17352}, primaryClass={cs.SI cs.SY eess.SY} }
sood2024on
arxiv-662037
2409.17353
Internalizing ASR with Implicit Chain of Thought for Efficient Speech-to-Speech Conversational LLM
<|reference_start|>Internalizing ASR with Implicit Chain of Thought for Efficient Speech-to-Speech Conversational LLM: Current speech-based LLMs are predominantly trained on extensive ASR and TTS datasets, excelling in tasks related to these domains. However, their ability to handle direct speech-to-speech conversations remains notably constrained. These models often rely on an ASR-to-TTS chain-of-thought pipeline, converting speech into text for processing before generating audio responses, which introduces latency and loses audio features. We propose a method that implicitly internalizes ASR chain of thought into a speech LLM, enhancing its native speech understanding capabilities. Our approach reduces latency and improves the model's native understanding of speech, paving the way for more efficient and natural real-time audio interactions. We also release a large-scale synthetic conversational dataset to facilitate further research.<|reference_end|>
arxiv
@article{yuen2024internalizing, title={Internalizing ASR with Implicit Chain of Thought for Efficient Speech-to-Speech Conversational LLM}, author={Robin Shing-Hei Yuen and Timothy Tin-Long Tse and Jian Zhu}, journal={arXiv preprint arXiv:2409.17353}, year={2024}, archivePrefix={arXiv}, eprint={2409.17353}, primaryClass={cs.CL} }
yuen2024internalizing
arxiv-662038
2409.17354
Multi-scale decomposition of sea surface height snapshots using machine learning
<|reference_start|>Multi-scale decomposition of sea surface height snapshots using machine learning: Knowledge of ocean circulation is important for understanding and predicting weather and climate, and managing the blue economy. This circulation can be estimated through Sea Surface Height (SSH) observations, but requires decomposing the SSH into contributions from balanced and unbalanced motions (BMs and UBMs). This decomposition is particularly pertinent for the novel SWOT satellite, which measures SSH at an unprecedented spatial resolution. Specifically, the requirement, and the goal of this work, is to decompose instantaneous SSH into BMs and UBMs. While a few studies using deep learning (DL) approaches have shown promise in framing this decomposition as an image-to-image translation task, these models struggle to work well across a wide range of spatial scales and require extensive training data, which is scarce in this domain. These challenges are not unique to our task, and pervade many problems requiring multi-scale fidelity. We show that these challenges can be addressed by using zero-phase component analysis (ZCA) whitening and data augmentation; making this a viable option for SSH decomposition across scales.<|reference_end|>
arxiv
@article{lyu2024multi-scale, title={Multi-scale decomposition of sea surface height snapshots using machine learning}, author={Jingwen Lyu and Yue Wang and Christian Pedersen and Spencer Jones and Dhruv Balwada}, journal={arXiv preprint arXiv:2409.17354}, year={2024}, archivePrefix={arXiv}, eprint={2409.17354}, primaryClass={physics.ao-ph cs.CV} }
lyu2024multi-scale
arxiv-662039
2409.17355
Learning Utilities from Demonstrations in Markov Decision Processes
<|reference_start|>Learning Utilities from Demonstrations in Markov Decision Processes: Our goal is to extract useful knowledge from demonstrations of behavior in sequential decision-making problems. Although it is well-known that humans commonly engage in risk-sensitive behaviors in the presence of stochasticity, most Inverse Reinforcement Learning (IRL) models assume a risk-neutral agent. Beyond introducing model misspecification, these models do not directly capture the risk attitude of the observed agent, which can be crucial in many applications. In this paper, we propose a novel model of behavior in Markov Decision Processes (MDPs) that explicitly represents the agent's risk attitude through a utility function. We then define the Utility Learning (UL) problem as the task of inferring the observed agent's risk attitude, encoded via a utility function, from demonstrations in MDPs, and we analyze the partial identifiability of the agent's utility. Furthermore, we devise two provably efficient algorithms for UL in a finite-data regime, and we analyze their sample complexity. We conclude with proof-of-concept experiments that empirically validate both our model and our algorithms.<|reference_end|>
arxiv
@article{lazzati2024learning, title={Learning Utilities from Demonstrations in Markov Decision Processes}, author={Filippo Lazzati and Alberto Maria Metelli}, journal={arXiv preprint arXiv:2409.17355}, year={2024}, archivePrefix={arXiv}, eprint={2409.17355}, primaryClass={cs.LG} }
lazzati2024learning
arxiv-662040
2409.17356
A vision-based framework for human behavior understanding in industrial assembly lines
<|reference_start|>A vision-based framework for human behavior understanding in industrial assembly lines: This paper introduces a vision-based framework for capturing and understanding human behavior in industrial assembly lines, focusing on car door manufacturing. The framework leverages advanced computer vision techniques to estimate workers' locations and 3D poses and analyze work postures, actions, and task progress. A key contribution is the introduction of the CarDA dataset, which contains domain-relevant assembly actions captured in a realistic setting to support the analysis of the framework for human pose and action analysis. The dataset comprises time-synchronized multi-camera RGB-D videos, motion capture data recorded in a real car manufacturing environment, and annotations for EAWS-based ergonomic risk scores and assembly activities. Experimental results demonstrate the effectiveness of the proposed approach in classifying worker postures and robust performance in monitoring assembly task progress.<|reference_end|>
arxiv
@article{papoutsakis2024a, title={A vision-based framework for human behavior understanding in industrial assembly lines}, author={Konstantinos Papoutsakis and Nikolaos Bakalos and Konstantinos Fragkoulis and Athena Zacharia and Georgia Kapetadimitri and Maria Pateraki}, journal={arXiv preprint arXiv:2409.17356}, year={2024}, archivePrefix={arXiv}, eprint={2409.17356}, primaryClass={cs.CV} }
papoutsakis2024a
arxiv-662041
2409.17357
Revisiting inverse Hessian vector products for calculating influence functions
<|reference_start|>Revisiting inverse Hessian vector products for calculating influence functions: Influence functions are a popular tool for attributing a model's output to training data. The traditional approach relies on the calculation of inverse Hessian-vector products (iHVP), but the classical solver "Linear time Stochastic Second-order Algorithm" (LiSSA, Agarwal et al. (2017)) is often deemed impractical for large models due to expensive computation and hyperparameter tuning. We show that the three hyperparameters -- the scaling factor, the batch size, and the number of steps -- can be chosen depending on the spectral properties of the Hessian, particularly its trace and largest eigenvalue. By evaluating with random sketching (Swartworth and Woodruff, 2023), we find that the batch size has to be sufficiently large for LiSSA to converge; however, for all of the models we consider, the requirement is mild. We confirm our findings empirically by comparing to Proximal Bregman Retraining Functions (PBRF, Bae et al. (2022)). Finally, we discuss what role the inverse Hessian plays in calculating the influence.<|reference_end|>
arxiv
@article{klochkov2024revisiting, title={Revisiting inverse Hessian vector products for calculating influence functions}, author={Yegor Klochkov and Yang Liu}, journal={arXiv preprint arXiv:2409.17357}, year={2024}, archivePrefix={arXiv}, eprint={2409.17357}, primaryClass={cs.LG} }
klochkov2024revisiting
arxiv-662042
2409.17359
Data-driven Probabilistic Trajectory Learning with High Temporal Resolution in Terminal Airspace
<|reference_start|>Data-driven Probabilistic Trajectory Learning with High Temporal Resolution in Terminal Airspace: Predicting flight trajectories is a research area that holds significant merit. In this paper, we propose a data-driven learning framework that leverages the predictive and feature extraction capabilities of the mixture models and seq2seq-based neural networks while addressing prevalent challenges caused by error propagation and dimensionality reduction. After training with this framework, the learned model can improve long-step prediction accuracy significantly given the past trajectories and the context information. The accuracy and effectiveness of the approach are evaluated by comparing the predicted trajectories with the ground truth. The results indicate that the proposed method has outperformed the state-of-the-art prediction methods on a terminal airspace flight trajectory dataset. The trajectories generated by the proposed method have a higher temporal resolution (1 timestep per second vs 0.1 timestep per second) and are closer to the ground truth.<|reference_end|>
arxiv
@article{xiang2024data-driven, title={Data-driven Probabilistic Trajectory Learning with High Temporal Resolution in Terminal Airspace}, author={Jun Xiang and Jun Chen}, journal={arXiv preprint arXiv:2409.17359}, year={2024}, archivePrefix={arXiv}, eprint={2409.17359}, primaryClass={cs.RO cs.LG} }
xiang2024data-driven
arxiv-662043
2409.17363
Improving satellite imagery segmentation using multiple Sentinel-2 revisits
<|reference_start|>Improving satellite imagery segmentation using multiple Sentinel-2 revisits: In recent years, analysis of remote sensing data has benefited immensely from borrowing techniques from the broader field of computer vision, such as the use of shared models pre-trained on large and diverse datasets. However, satellite imagery has unique features that are not accounted for in traditional computer vision, such as the existence of multiple revisits of the same location. Here, we explore the best way to use revisits in the framework of fine-tuning pre-trained remote sensing models. We focus on an applied research question of relevance to climate change mitigation -- power substation segmentation -- that is representative of applied uses of pre-trained models more generally. Through extensive tests of different multi-temporal input schemes across diverse model architectures, we find that fusing representations from multiple revisits in the model latent space is superior to other methods of using revisits, including as a form of data augmentation. We also find that a SWIN Transformer-based architecture performs better than U-nets and ViT-based models. We verify the generality of our results on a separate building density estimation task.<|reference_end|>
arxiv
@article{jindgar2024improving, title={Improving satellite imagery segmentation using multiple Sentinel-2 revisits}, author={Kartik Jindgar and Grace W. Lindsay}, journal={arXiv preprint arXiv:2409.17363}, year={2024}, archivePrefix={arXiv}, eprint={2409.17363}, primaryClass={cs.CV} }
jindgar2024improving
arxiv-662044
2409.17364
Exploring synthetic data for cross-speaker style transfer in style representation based TTS
<|reference_start|>Exploring synthetic data for cross-speaker style transfer in style representation based TTS: Incorporating cross-speaker style transfer in text-to-speech (TTS) models is challenging due to the need to disentangle speaker and style information in audio. In low-resource expressive data scenarios, voice conversion (VC) can generate expressive speech for target speakers, which can then be used to train the TTS model. However, the quality and style transfer ability of the VC model are crucial for the overall TTS model quality. In this work, we explore the use of synthetic data generated by a VC model to assist the TTS model in cross-speaker style transfer tasks. Additionally, we employ pre-training of the style encoder using timbre perturbation and prototypical angular loss to mitigate speaker leakage. Our results show that using VC synthetic data can improve the naturalness and speaker similarity of TTS in cross-speaker scenarios. Furthermore, we extend this approach to a cross-language scenario, enhancing accent transfer.<|reference_end|>
arxiv
@article{ueda2024exploring, title={Exploring synthetic data for cross-speaker style transfer in style representation based TTS}, author={Lucas H. Ueda and Leonardo B. de M. M. Marques and Fl\'avio O. Sim\~oes and M\'ario U. Neto and Fernando Runstein and Bianca Dal B\'o and Paula D. P. Costa}, journal={arXiv preprint arXiv:2409.17364}, year={2024}, archivePrefix={arXiv}, eprint={2409.17364}, primaryClass={eess.AS cs.SD} }
ueda2024exploring
arxiv-662045
2409.17367
Implicit Neural Representations for Simultaneous Reduction and Continuous Reconstruction of Multi-Altitude Climate Data
<|reference_start|>Implicit Neural Representations for Simultaneous Reduction and Continuous Reconstruction of Multi-Altitude Climate Data: The world is moving towards clean and renewable energy sources, such as wind energy, in an attempt to reduce greenhouse gas emissions that contribute to global warming. To enhance the analysis and storage of wind data, we introduce a deep learning framework designed to simultaneously enable effective dimensionality reduction and continuous representation of multi-altitude wind data from discrete observations. The framework consists of three key components: dimensionality reduction, cross-modal prediction, and super-resolution. We aim to: (1) improve data resolution across diverse climatic conditions to recover high-resolution details; (2) reduce data dimensionality for more efficient storage of large climate datasets; and (3) enable cross-prediction between wind data measured at different heights. Comprehensive testing confirms that our approach surpasses existing methods in both super-resolution quality and compression efficiency.<|reference_end|>
arxiv
@article{qayyum2024implicit, title={Implicit Neural Representations for Simultaneous Reduction and Continuous Reconstruction of Multi-Altitude Climate Data}, author={Alif Bin Abdul Qayyum and Xihaier Luo and Nathan M. Urban and Xiaoning Qian and Byung-Jun Yoon}, journal={arXiv preprint arXiv:2409.17367}, year={2024}, doi={10.1109/MLSP58920.2024.10734742}, archivePrefix={arXiv}, eprint={2409.17367}, primaryClass={cs.LG cs.CV} }
qayyum2024implicit
arxiv-662046
2409.17368
EfiMon: A Process Analyser for Granular Power Consumption Prediction
<|reference_start|>EfiMon: A Process Analyser for Granular Power Consumption Prediction: High-performance computing (HPC) and supercomputing are critical in Artificial Intelligence (AI) research, development, and deployment. The extensive use of supercomputers for training complex AI models, which can take from days to months, raises significant concerns about energy consumption and carbon emissions. Traditional methods for estimating the energy consumption of HPC workloads rely on metering reports from computing nodes' power supply units, assuming exclusive use of the entire node. This assumption is increasingly untenable with the advent of next-generation supercomputers that share resources to accelerate workloads, as seen in initiatives like Acceleration as a Service (XaaS) and cloud computing. This paper introduces EfiMon, an agnostic and non-invasive tool designed to extract detailed information about process execution, including instructions executed within specific time windows and CPU and RAM usage. Additionally, it captures comprehensive system metrics, such as power consumption reported by CPU sockets and PSUs. This data enables the development of prediction models to estimate the energy consumption of individual processes without requiring isolation. Using a regression-based mathematical model, our tool is able to estimate single processes' power consumption in isolated and shared resource environments. In shared scenarios, the model demonstrates robust performance, deviating by a maximum of 2.2% on Intel-based machines and 4.4% on AMD systems compared to non-shared cases. This significant accuracy showcases EfiMon's potential for enhancing energy accounting in supercomputing, contributing to more efficient and energy-aware optimisation strategies in HPC.<|reference_end|>
arxiv
@article{león-vega2024efimon:, title={EfiMon: A Process Analyser for Granular Power Consumption Prediction}, author={Luis G. Le\'on-Vega and Niccol\`o Tosato and Stefano Cozzini}, journal={arXiv preprint arXiv:2409.17368}, year={2024}, archivePrefix={arXiv}, eprint={2409.17368}, primaryClass={cs.DC cs.PF} }
león-vega2024efimon:
arxiv-662047
2409.17369
Evaluation of Spectrum Sharing Algorithms for Networks with Heterogeneous Wireless Devices
<|reference_start|>Evaluation of Spectrum Sharing Algorithms for Networks with Heterogeneous Wireless Devices: As highlighted in the National Spectrum Strategy, Dynamic Spectrum Access (DSA) is key for enabling 6G networks to meet the increasing demand for spectrum from various, heterogeneous emerging applications. In this paper, we consider heterogeneous wireless networks with multiple 6G base stations (BS) and a limited number of frequency bands available for transmission. Each BS is associated with a geographical location, a coverage area, and a bandwidth requirement. We assume that clients/UEs are within the corresponding BS's coverage area. To avoid interference, we impose that BSs with overlapping coverage areas must use different frequency bands. We address the challenging problem of efficiently allocating contiguous frequency bands to BSs while avoiding interference. Specifically, we define performance metrics that capture the feasibility of the frequency allocation task, the number of BSs that can be allocated within the limited frequency bands, and the amount of resources utilized by the network. Then, we consider five different DSA algorithms that prioritize BSs based on different features - one of these algorithms is known in the graph theory literature as Welsh-Powell graph colouring algorithm - and compare their performance using extensive simulations. Our results show that DSA algorithms that attempt to maximize the chances of obtaining a feasible frequency allocation - which have been widely studied in the literature - tend to under-perform in all other metrics.<|reference_end|>
arxiv
@article{walishetti2024evaluation, title={Evaluation of Spectrum Sharing Algorithms for Networks with Heterogeneous Wireless Devices}, author={Ankit Walishetti and Igor Kadota and Aidan Kim and Colin Ward and Eduardo Gutierrez and Randall Berry}, journal={arXiv preprint arXiv:2409.17369}, year={2024}, archivePrefix={arXiv}, eprint={2409.17369}, primaryClass={cs.NI} }
walishetti2024evaluation
arxiv-662048
2409.17370
The Overfocusing Bias of Convolutional Neural Networks: A Saliency-Guided Regularization Approach
<|reference_start|>The Overfocusing Bias of Convolutional Neural Networks: A Saliency-Guided Regularization Approach: Despite transformers being considered as the new standard in computer vision, convolutional neural networks (CNNs) still outperform them in low-data regimes. Nonetheless, CNNs often make decisions based on narrow, specific regions of input images, especially when training data is limited. This behavior can severely compromise the model's generalization capabilities, making it disproportionately dependent on certain features that might not represent the broader context of images. While the conditions leading to this phenomenon remain elusive, the primary intent of this article is to shed light on this observed behavior of neural networks. Our research endeavors to prioritize comprehensive insight and to outline an initial response to this phenomenon. In line with this, we introduce Saliency Guided Dropout (SGDrop), a pioneering regularization approach tailored to address this specific issue. SGDrop utilizes attribution methods on the feature map to identify and then reduce the influence of the most salient features during training. This process encourages the network to diversify its attention and not focus solely on specific standout areas. Our experiments across several visual classification benchmarks validate SGDrop's role in enhancing generalization. Significantly, models incorporating SGDrop display more expansive attributions and neural activity, offering a more comprehensive view of input images in contrast to their traditionally trained counterparts.<|reference_end|>
arxiv
@article{bertoin2024the, title={The Overfocusing Bias of Convolutional Neural Networks: A Saliency-Guided Regularization Approach}, author={David Bertoin and Eduardo Hugo Sanchez and Mehdi Zouitine and Emmanuel Rachelson}, journal={arXiv preprint arXiv:2409.17370}, year={2024}, archivePrefix={arXiv}, eprint={2409.17370}, primaryClass={cs.CV cs.AI} }
bertoin2024the
arxiv-662049
2409.17372
Search for Efficient Large Language Models
<|reference_start|>Search for Efficient Large Language Models: Large Language Models (LLMs) have long held sway in the realms of artificial intelligence research. Numerous efficient techniques, including weight pruning, quantization, and distillation, have been embraced to compress LLMs, targeting memory reduction and inference acceleration, which underscore the redundancy in LLMs. However, most model compression techniques concentrate on weight optimization, overlooking the exploration of optimal architectures. Besides, traditional architecture search methods, limited by the elevated complexity with extensive parameters, struggle to demonstrate their effectiveness on LLMs. In this paper, we propose a training-free architecture search framework to identify optimal subnets that preserve the fundamental strengths of the original LLMs while achieving inference acceleration. Furthermore, after generating subnets that inherit specific weights from the original LLMs, we introduce a reformation algorithm that utilizes the omitted weights to rectify the inherited weights with a small amount of calibration data. Compared with SOTA training-free structured pruning works that can generate smaller networks, our method demonstrates superior performance across standard benchmarks. Furthermore, our generated subnets can directly reduce the usage of GPU memory and achieve inference acceleration.<|reference_end|>
arxiv
@article{shen2024search, title={Search for Efficient Large Language Models}, author={Xuan Shen and Pu Zhao and Yifan Gong and Zhenglun Kong and Zheng Zhan and Yushu Wu and Ming Lin and Chao Wu and Xue Lin and Yanzhi Wang}, journal={arXiv preprint arXiv:2409.17372}, year={2024}, archivePrefix={arXiv}, eprint={2409.17372}, primaryClass={cs.AI} }
shen2024search
arxiv-662050
2409.17373
data2lang2vec: Data Driven Typological Features Completion
<|reference_start|>data2lang2vec: Data Driven Typological Features Completion: Language typology databases enhance multi-lingual Natural Language Processing (NLP) by improving model adaptability to diverse linguistic structures. The widely-used lang2vec toolkit integrates several such databases, but its coverage remains limited at 28.9\%. Previous work on automatically increasing coverage predicts missing values based on features from other languages or focuses on single features; in contrast, we propose to use textual data for better-informed feature prediction. To this end, we introduce a multi-lingual Part-of-Speech (POS) tagger, achieving over 70\% accuracy across 1,749 languages, and experiment with external statistical features and a variety of machine learning algorithms. We also introduce a more realistic evaluation setup, focusing on typology features that are likely to be missing, and show that our approach outperforms previous work in both setups.<|reference_end|>
arxiv
@article{amirzadeh2024data2lang2vec:, title={data2lang2vec: Data Driven Typological Features Completion}, author={Hamidreza Amirzadeh and Sadegh Jafari and Anika Harju and Rob van der Goot}, journal={arXiv preprint arXiv:2409.17373}, year={2024}, archivePrefix={arXiv}, eprint={2409.17373}, primaryClass={cs.CL} }
amirzadeh2024data2lang2vec:
arxiv-662051
2409.17376
Optical Lens Attack on Deep Learning Based Monocular Depth Estimation
<|reference_start|>Optical Lens Attack on Deep Learning Based Monocular Depth Estimation: Monocular Depth Estimation (MDE) plays a crucial role in vision-based Autonomous Driving (AD) systems. It utilizes a single-camera image to determine the depth of objects, facilitating driving decisions such as braking a few meters in front of a detected obstacle or changing lanes to avoid collision. In this paper, we investigate the security risks associated with monocular vision-based depth estimation algorithms utilized by AD systems. By exploiting the vulnerabilities of MDE and the principles of optical lenses, we introduce LensAttack, a physical attack that involves strategically placing optical lenses on the camera of an autonomous vehicle to manipulate the perceived object depths. LensAttack encompasses two attack formats: concave lens attack and convex lens attack, each utilizing different optical lenses to induce false depth perception. We begin by constructing a mathematical model of our attack, incorporating various attack parameters. Subsequently, we simulate the attack and evaluate its real-world performance in driving scenarios to demonstrate its effect on state-of-the-art MDE models. The results highlight the significant impact of LensAttack on the accuracy of depth estimation in AD systems.<|reference_end|>
arxiv
@article{zhou2024optical, title={Optical Lens Attack on Deep Learning Based Monocular Depth Estimation}, author={Ce Zhou (1) and Qiben Yan (1) and Daniel Kent (1) and Guangjing Wang (1) and Ziqi Zhang (2) and Hayder Radha (1) ((1) Michigan State University, (2) Peking University)}, journal={arXiv preprint arXiv:2409.17376}, year={2024}, archivePrefix={arXiv}, eprint={2409.17376}, primaryClass={cs.CR cs.CV} }
zhou2024optical
arxiv-662052
2409.17379
Decentralized Nonlinear Model Predictive Control for Safe Collision Avoidance in Quadrotor Teams with Limited Detection Range
<|reference_start|>Decentralized Nonlinear Model Predictive Control for Safe Collision Avoidance in Quadrotor Teams with Limited Detection Range: Multi-quadrotor systems face significant challenges in decentralized control, particularly with safety and coordination under sensing and communication limitations. State-of-the-art methods leverage Control Barrier Functions (CBFs) to provide safety guarantees but often neglect actuation constraints and limited detection range. To address these gaps, we propose a novel decentralized Nonlinear Model Predictive Control (NMPC) that integrates Exponential CBFs (ECBFs) to enhance safety and optimality in multi-quadrotor systems. We provide both conservative and practical minimum bounds of the range that preserve the safety guarantees of the ECBFs. We validate our approach through extensive simulations with up to 10 quadrotors and 20 obstacles, as well as real-world experiments with 3 quadrotors. Results demonstrate the effectiveness of the proposed framework in realistic settings, highlighting its potential for reliable quadrotor teams operations.<|reference_end|>
arxiv
@article{goarin2024decentralized, title={Decentralized Nonlinear Model Predictive Control for Safe Collision Avoidance in Quadrotor Teams with Limited Detection Range}, author={Manohari Goarin and Guanrui Li and Alessandro Saviolo and Giuseppe Loianno}, journal={arXiv preprint arXiv:2409.17379}, year={2024}, archivePrefix={arXiv}, eprint={2409.17379}, primaryClass={cs.RO cs.MA} }
goarin2024decentralized
arxiv-662053
2409.17380
Tesla's Autopilot: Ethics and Tragedy
<|reference_start|>Tesla's Autopilot: Ethics and Tragedy: This case study delves into the ethical ramifications of an incident involving Tesla's Autopilot, emphasizing Tesla Motors' moral responsibility. Using a seven-step ethical decision-making process, it examines user behavior, system constraints, and regulatory implications. This incident prompts a broader evaluation of ethical challenges in the automotive industry's adoption of autonomous technologies, urging a reconsideration of industry norms and legal frameworks. The analysis offers a succinct exploration of ethical considerations in evolving technological landscapes.<|reference_end|>
arxiv
@article{jatavallabha2024tesla's, title={Tesla's Autopilot: Ethics and Tragedy}, author={Aravinda Jatavallabha}, journal={arXiv preprint arXiv:2409.17380}, year={2024}, archivePrefix={arXiv}, eprint={2409.17380}, primaryClass={cs.CY cs.AI} }
jatavallabha2024tesla's
arxiv-662054
2409.17383
VectorSearch: Enhancing Document Retrieval with Semantic Embeddings and Optimized Search
<|reference_start|>VectorSearch: Enhancing Document Retrieval with Semantic Embeddings and Optimized Search: Traditional retrieval methods have been essential for assessing document similarity but struggle with capturing semantic nuances. Despite advancements in latent semantic analysis (LSA) and deep learning, achieving comprehensive semantic understanding and accurate retrieval remains challenging due to high dimensionality and semantic gaps. The above challenges call for new techniques to effectively reduce the dimensions and close the semantic gaps. To this end, we propose VectorSearch, which leverages advanced algorithms, embeddings, and indexing techniques for refined retrieval. By utilizing innovative multi-vector search operations and encoding searches with advanced language models, our approach significantly improves retrieval accuracy. Experiments on real-world datasets show that VectorSearch outperforms baseline metrics, demonstrating its efficacy for large-scale retrieval tasks.<|reference_end|>
arxiv
@article{monir2024vectorsearch:, title={VectorSearch: Enhancing Document Retrieval with Semantic Embeddings and Optimized Search}, author={Solmaz Seyed Monir, Irene Lau, Shubing Yang, Dongfang Zhao}, journal={arXiv preprint arXiv:2409.17383}, year={2024}, archivePrefix={arXiv}, eprint={2409.17383}, primaryClass={cs.IR cs.AI cs.DB cs.LG cs.PF} }
monir2024vectorsearch:
arxiv-662055
2409.17385
Data-efficient Trajectory Prediction via Coreset Selection
<|reference_start|>Data-efficient Trajectory Prediction via Coreset Selection: Modern vehicles are equipped with multiple information-collection devices such as sensors and cameras, continuously generating a large volume of raw data. Accurately predicting the trajectories of neighboring vehicles is a vital component in understanding the complex driving environment. Yet, training trajectory prediction models is challenging in two ways. Processing the large-scale data is computation-intensive. Moreover, easy-medium driving scenarios often overwhelmingly dominate the dataset, leaving challenging driving scenarios such as dense traffic under-represented. For example, in the Argoverse motion prediction dataset, there are very few instances with $\ge 50$ agents, while scenarios with $10 \thicksim 20$ agents are far more common. In this paper, to mitigate data redundancy in the over-represented driving scenarios and to reduce the bias rooted in the data scarcity of complex ones, we propose a novel data-efficient training method based on coreset selection. This method strategically selects a small but representative subset of data while balancing the proportions of different scenario difficulties. To the best of our knowledge, we are the first to introduce a method capable of effectively condensing a large-scale trajectory dataset while achieving a state-of-the-art compression ratio. Notably, even when using only 50% of the Argoverse dataset, the model can be trained with little to no decline in performance. Moreover, the selected coreset maintains excellent generalization ability.<|reference_end|>
arxiv
@article{yang2024data-efficient, title={Data-efficient Trajectory Prediction via Coreset Selection}, author={Ruining Yang and Lili Su}, journal={arXiv preprint arXiv:2409.17385}, year={2024}, archivePrefix={arXiv}, eprint={2409.17385}, primaryClass={cs.LG cs.AI cs.CV} }
yang2024data-efficient
arxiv-662056
2409.17386
Beyond Redundancy: Information-aware Unsupervised Multiplex Graph Structure Learning
<|reference_start|>Beyond Redundancy: Information-aware Unsupervised Multiplex Graph Structure Learning: Unsupervised Multiplex Graph Learning (UMGL) aims to learn node representations on various edge types without manual labeling. However, existing research overlooks a key factor: the reliability of the graph structure. Real-world data often exhibit a complex nature and contain abundant task-irrelevant noise, severely compromising UMGL's performance. Moreover, existing methods primarily rely on contrastive learning to maximize mutual information across different graphs, limiting them to multiplex graph redundant scenarios and failing to capture view-unique task-relevant information. In this paper, we focus on a more realistic and challenging task: to unsupervisedly learn a fused graph from multiple graphs that preserve sufficient task-relevant information while removing task-irrelevant noise. Specifically, our proposed Information-aware Unsupervised Multiplex Graph Fusion framework (InfoMGF) uses graph structure refinement to eliminate irrelevant noise and simultaneously maximizes view-shared and view-unique task-relevant information, thereby tackling the frontier of non-redundant multiplex graph. Theoretical analyses further guarantee the effectiveness of InfoMGF. Comprehensive experiments against various baselines on different downstream tasks demonstrate its superior performance and robustness. Surprisingly, our unsupervised method even beats the sophisticated supervised approaches. The source code and datasets are available at https://github.com/zxlearningdeep/InfoMGF.<|reference_end|>
arxiv
@article{shen2024beyond, title={Beyond Redundancy: Information-aware Unsupervised Multiplex Graph Structure Learning}, author={Zhixiang Shen, Shuo Wang, Zhao Kang}, journal={arXiv preprint arXiv:2409.17386}, year={2024}, archivePrefix={arXiv}, eprint={2409.17386}, primaryClass={cs.LG cs.AI cs.SI} }
shen2024beyond
arxiv-662057
2409.17387
Enhancing Polyglot Voices by Leveraging Cross-Lingual Fine-Tuning in Any-to-One Voice Conversion
<|reference_start|>Enhancing Polyglot Voices by Leveraging Cross-Lingual Fine-Tuning in Any-to-One Voice Conversion: The creation of artificial polyglot voices remains a challenging task, despite considerable progress in recent years. This paper investigates self-supervised learning for voice conversion to create native-sounding polyglot voices. We introduce a novel cross-lingual any-to-one voice conversion system that is able to preserve the source accent without the need for multilingual data from the target speaker. In addition, we show a novel cross-lingual fine-tuning strategy that further improves the accent and reduces the training data requirements. Objective and subjective evaluations with English, Spanish, French and Mandarin Chinese confirm that our approach improves on state-of-the-art methods, enhancing the speech intelligibility and overall quality of the converted speech, especially in cross-lingual scenarios. Audio samples are available at https://giuseppe-ruggiero.github.io/a2o-vc-demo/<|reference_end|>
arxiv
@article{ruggiero2024enhancing, title={Enhancing Polyglot Voices by Leveraging Cross-Lingual Fine-Tuning in Any-to-One Voice Conversion}, author={Giuseppe Ruggiero, Matteo Testa, Jurgen Van de Walle, Luigi Di Caro}, journal={arXiv preprint arXiv:2409.17387}, year={2024}, archivePrefix={arXiv}, eprint={2409.17387}, primaryClass={cs.SD eess.AS} }
ruggiero2024enhancing
arxiv-662058
2409.17388
A Semi-Analytic Diagonalization FEM for the Spectral Fractional Laplacian
<|reference_start|>A Semi-Analytic Diagonalization FEM for the Spectral Fractional Laplacian: We present a technique for approximating solutions to the spectral fractional Laplacian, which is based on the Caffarelli-Silvestre extension and diagonalization. Our scheme uses the analytic solution to the associated eigenvalue problem in the extended dimension. We show its relation to a quadrature scheme. Numerical examples demonstrate the performance of the method.<|reference_end|>
arxiv
@article{salgado2024a, title={A Semi-Analytic Diagonalization FEM for the Spectral Fractional Laplacian}, author={Abner J. Salgado and Shane E. Sawyer}, journal={arXiv preprint arXiv:2409.17388}, year={2024}, archivePrefix={arXiv}, eprint={2409.17388}, primaryClass={math.NA cs.NA} }
salgado2024a
arxiv-662059
2409.17389
Safe Leaf Manipulation for Accurate Shape and Pose Estimation of Occluded Fruits
<|reference_start|>Safe Leaf Manipulation for Accurate Shape and Pose Estimation of Occluded Fruits: Fruit monitoring plays an important role in crop management, and rising global fruit consumption combined with labor shortages necessitates automated monitoring with robots. However, occlusions from plant foliage often hinder accurate shape and pose estimation. Therefore, we propose an active fruit shape and pose estimation method that physically manipulates occluding leaves to reveal hidden fruits. This paper introduces a framework that plans robot actions to maximize visibility and minimize leaf damage. We developed a novel scene-consistent shape completion technique to improve fruit estimation under heavy occlusion and utilize a perception-driven deformation graph model to predict leaf deformation during planning. Experiments on artificial and real sweet pepper plants demonstrate that our method enables robots to safely move leaves aside, exposing fruits for accurate shape and pose estimation, outperforming baseline methods. Project page: https://shaoxiongyao.github.io/lmap-ssc/.<|reference_end|>
arxiv
@article{yao2024safe, title={Safe Leaf Manipulation for Accurate Shape and Pose Estimation of Occluded Fruits}, author={Shaoxiong Yao, Sicong Pan, Maren Bennewitz, Kris Hauser}, journal={arXiv preprint arXiv:2409.17389}, year={2024}, archivePrefix={arXiv}, eprint={2409.17389}, primaryClass={cs.RO} }
yao2024safe
arxiv-662060
2409.17391
Scaling Behavior for Large Language Models regarding Numeral Systems: An Example using Pythia
<|reference_start|>Scaling Behavior for Large Language Models regarding Numeral Systems: An Example using Pythia: Though Large Language Models (LLMs) have shown remarkable abilities in mathematics reasoning, they are still struggling with performing numeric operations accurately, such as addition and multiplication. Numbers can be tokenized into tokens in various ways by different LLMs and affect the numeric operations performance. Currently, there are two representatives: 1) Tokenize into $1$-digit, and 2) Tokenize into $1\sim 3$ digit. The difference is roughly equivalent to using different numeral systems (namely base $10$ or base $10^{3}$). In light of this, we study the scaling behavior of different numeral systems in the context of transformer-based large language models. We empirically show that a base $10$ system is consistently more data-efficient than a base $10^{2}$ or $10^{3}$ system across training data scale, model sizes under from-scratch training settings, while different number systems have very similar fine-tuning performances. We attribute this to higher token frequencies of a base $10$ system. Additionally, we reveal extrapolation behavior patterns on addition and multiplication. We identify that base $100$ and base $1000$ systems struggle on token-level discernment and token-level operations. We also shed light on the mechanism learnt by the models.<|reference_end|>
arxiv
@article{zhou2024scaling, title={Scaling Behavior for Large Language Models regarding Numeral Systems: An Example using Pythia}, author={Zhejian Zhou, Jiayu Wang, Dahua Lin, Kai Chen}, journal={arXiv preprint arXiv:2409.17391}, year={2024}, archivePrefix={arXiv}, eprint={2409.17391}, primaryClass={cs.CL} }
zhou2024scaling
arxiv-662061
2409.17392
Trading through Earnings Seasons using Self-Supervised Contrastive Representation Learning
<|reference_start|>Trading through Earnings Seasons using Self-Supervised Contrastive Representation Learning: Earnings release is a key economic event in the financial markets and crucial for predicting stock movements. Earnings data gives a glimpse into how a company is doing financially and can hint at where its stock might go next. However, the irregularity of its release cycle makes it a challenge to incorporate this data in a medium-frequency algorithmic trading model and the usefulness of this data fades fast after it is released, making it tough for models to stay accurate over time. Addressing this challenge, we introduce the Contrastive Earnings Transformer (CET) model, a self-supervised learning approach rooted in Contrastive Predictive Coding (CPC), aiming to optimise the utilisation of earnings data. To ascertain its effectiveness, we conduct a comparative study of CET against benchmark models across diverse sectors. Our research delves deep into the intricacies of stock data, evaluating how various models, and notably CET, handle the rapidly changing relevance of earnings data over time and over different sectors. The research outcomes shed light on CET's distinct advantage in extrapolating the inherent value of earnings data over time. Its foundation on CPC allows for a nuanced understanding, facilitating consistent stock predictions even as the earnings data ages. This finding about CET presents a fresh approach to better use earnings data in algorithmic trading for predicting stock price trends.<|reference_end|>
arxiv
@article{ye2024trading, title={Trading through Earnings Seasons using Self-Supervised Contrastive Representation Learning}, author={Zhengxin Joseph Ye and Bjoern Schuller}, journal={arXiv preprint arXiv:2409.17392}, year={2024}, archivePrefix={arXiv}, eprint={2409.17392}, primaryClass={cs.LG q-fin.TR} }
ye2024trading
arxiv-662062
2409.17395
An Anatomy-Aware Shared Control Approach for Assisted Teleoperation of Lung Ultrasound Examinations
<|reference_start|>An Anatomy-Aware Shared Control Approach for Assisted Teleoperation of Lung Ultrasound Examinations: The introduction of artificial intelligence and robotics in telehealth is enabling personalised treatment and supporting teleoperated procedures such as lung ultrasound, which has gained attention during the COVID-19 pandemic. Although fully autonomous systems face challenges due to anatomical variability, teleoperated systems appear to be more practical in current healthcare settings. This paper presents an anatomy-aware control framework for teleoperated lung ultrasound. Using biomechanically accurate 3D models such as SMPL and SKEL, the system provides a real-time visual feedback and applies virtual constraints to assist in precise probe placement tasks. Evaluations on five subjects show the accuracy of the biomechanical models and the efficiency of the system in improving probe placement and reducing procedure time compared to traditional teleoperation. The results demonstrate that the proposed framework enhances the physician's capabilities in executing remote lung ultrasound examinations, towards more objective and repeatable acquisitions.<|reference_end|>
arxiv
@article{nardi2024an, title={An Anatomy-Aware Shared Control Approach for Assisted Teleoperation of Lung Ultrasound Examinations}, author={Davide Nardi, Edoardo Lamon, Luca Beber, Daniele Fontanelli, Matteo Saveriano, and Luigi Palopoli}, journal={arXiv preprint arXiv:2409.17395}, year={2024}, archivePrefix={arXiv}, eprint={2409.17395}, primaryClass={cs.RO} }
nardi2024an
arxiv-662063
2409.17397
Severity Prediction in Mental Health: LLM-based Creation, Analysis, Evaluation of a Novel Multilingual Dataset
<|reference_start|>Severity Prediction in Mental Health: LLM-based Creation, Analysis, Evaluation of a Novel Multilingual Dataset: Large Language Models (LLMs) are increasingly integrated into various medical fields, including mental health support systems. However, there is a gap in research regarding the effectiveness of LLMs in non-English mental health support applications. To address this problem, we present a novel multilingual adaptation of widely-used mental health datasets, translated from English into six languages (Greek, Turkish, French, Portuguese, German, and Finnish). This dataset enables a comprehensive evaluation of LLM performance in detecting mental health conditions and assessing their severity across multiple languages. By experimenting with GPT and Llama, we observe considerable variability in performance across languages, despite being evaluated on the same translated dataset. This inconsistency underscores the complexities inherent in multilingual mental health support, where language-specific nuances and mental health data coverage can affect the accuracy of the models. Through comprehensive error analysis, we emphasize the risks of relying exclusively on large language models (LLMs) in medical settings (e.g., their potential to contribute to misdiagnoses). Moreover, our proposed approach offers significant cost savings for multilingual tasks, presenting a major advantage for broad-scale implementation.<|reference_end|>
arxiv
@article{skianis2024severity, title={Severity Prediction in Mental Health: LLM-based Creation, Analysis, Evaluation of a Novel Multilingual Dataset}, author={Konstantinos Skianis, John Pavlopoulos, A. Seza Do\u{g}ru\"oz}, journal={arXiv preprint arXiv:2409.17397}, year={2024}, archivePrefix={arXiv}, eprint={2409.17397}, primaryClass={cs.CL cs.LG} }
skianis2024severity
arxiv-662064
2409.17400
AgRegNet: A Deep Regression Network for Flower and Fruit Density Estimation, Localization, and Counting in Orchards
<|reference_start|>AgRegNet: A Deep Regression Network for Flower and Fruit Density Estimation, Localization, and Counting in Orchards: One of the major challenges for the agricultural industry today is the uncertainty in manual labor availability and the associated cost. Automated flower and fruit density estimation, localization, and counting could help streamline harvesting, yield estimation, and crop-load management strategies such as flower and fruitlet thinning. This article proposes a deep regression-based network, AgRegNet, to estimate density, count, and location of flower and fruit in tree fruit canopies without explicit object detection or polygon annotation. Inspired by popular U-Net architecture, AgRegNet is a U-shaped network with an encoder-to-decoder skip connection and modified ConvNeXt-T as an encoder feature extractor. AgRegNet can be trained based on information from point annotation and leverages segmentation information and attention modules (spatial and channel) to highlight relevant flower and fruit features while suppressing non-relevant background features. Experimental evaluation in apple flower and fruit canopy images under an unstructured orchard environment showed that AgRegNet achieved promising accuracy as measured by Structural Similarity Index (SSIM), percentage Mean Absolute Error (pMAE) and mean Average Precision (mAP) to estimate flower and fruit density, count, and centroid location, respectively. Specifically, the SSIM, pMAE, and mAP values for flower images were 0.938, 13.7%, and 0.81, respectively. For fruit images, the corresponding values were 0.910, 5.6%, and 0.93. Since the proposed approach relies on information from point annotation, it is suitable for sparsely and densely located objects. This simplified technique will be highly applicable for growers to accurately estimate yields and decide on optimal chemical and mechanical flower thinning practices.<|reference_end|>
arxiv
@article{bhattarai2024agregnet:, title={AgRegNet: A Deep Regression Network for Flower and Fruit Density Estimation, Localization, and Counting in Orchards}, author={Uddhav Bhattarai, Santosh Bhusal, Qin Zhang, Manoj Karkee}, journal={arXiv preprint arXiv:2409.17400}, year={2024}, archivePrefix={arXiv}, eprint={2409.17400}, primaryClass={cs.CV cs.AI} }
bhattarai2024agregnet:
arxiv-662065
2409.17401
Zeroth-Order Policy Gradient for Reinforcement Learning from Human Feedback without Reward Inference
<|reference_start|>Zeroth-Order Policy Gradient for Reinforcement Learning from Human Feedback without Reward Inference: Reward inference (learning a reward model from human preferences) is a critical intermediate step in Reinforcement Learning from Human Feedback (RLHF) for fine-tuning Large Language Models (LLMs) such as ChatGPT. In practice, reward inference faces several fundamental challenges, including double problem misspecification, reward model evaluation without ground truth, distribution shift, and overfitting in joint reward model and policy training. An alternative approach that avoids these pitfalls is direct policy optimization without reward inference, such as Direct Preference Optimization (DPO), which provides a much simpler pipeline and has shown empirical success in LLMs. However, DPO utilizes the closed-form expression between the optimal policy and the reward function, which only works under the bandit setting or deterministic MDPs. This paper develops two RLHF algorithms without reward inference, which work for general RL problems beyond bandits and deterministic MDPs, and general preference models beyond the Bradley-Terry model. The key idea is to estimate the local value function difference from human preferences and then approximate the policy gradient with a zeroth-order gradient approximator. For both algorithms, we establish rates of convergence in terms of the number of policy gradient iterations, as well as the number of trajectory samples and human preference queries per iteration. Our results show there exist provably efficient methods to solve general RLHF problems without reward inference.<|reference_end|>
arxiv
@article{zhang2024zeroth-order, title={Zeroth-Order Policy Gradient for Reinforcement Learning from Human Feedback without Reward Inference}, author={Qining Zhang, Lei Ying}, journal={arXiv preprint arXiv:2409.17401}, year={2024}, archivePrefix={arXiv}, eprint={2409.17401}, primaryClass={cs.LG stat.ML} }
zhang2024zeroth-order
arxiv-662066
2409.17402
Enhancing Recommendation with Denoising Auxiliary Task
<|reference_start|>Enhancing Recommendation with Denoising Auxiliary Task: The historical interaction sequences of users play a crucial role in training recommender systems that can accurately predict user preferences. However, due to the arbitrariness of user behavior, the presence of noise in these sequences poses a challenge to predicting their next actions in recommender systems. To address this issue, our motivation is based on the observation that training noisy sequences and clean sequences (sequences without noise) with equal weights can impact the performance of the model. We propose a novel self-supervised Auxiliary Task Joint Training (ATJT) method aimed at more accurately reweighting noisy sequences in recommender systems. Specifically, we strategically select subsets from users' original sequences and perform random replacements to generate artificially replaced noisy sequences. Subsequently, we perform joint training on these artificially replaced noisy sequences and the original sequences. Through effective reweighting, we incorporate the training results of the noise recognition model into the recommender model. We evaluate our method on three datasets using a consistent base model. Experimental results demonstrate the effectiveness of introducing a self-supervised auxiliary task to enhance the base model's performance.<|reference_end|>
arxiv
@article{liu2024enhancing, title={Enhancing Recommendation with Denoising Auxiliary Task}, author={Pengsheng Liu, Linan Zheng, Jiale Chen, Guangfa Zhang, Yang Xu, Jinyun Fang}, journal={arXiv preprint arXiv:2409.17402}, year={2024}, doi={10.1007/s11390-024-4069-5}, archivePrefix={arXiv}, eprint={2409.17402}, primaryClass={cs.IR cs.AI cs.LG} }
liu2024enhancing
arxiv-662067
2409.17403
Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving
<|reference_start|>Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving: Object detection is a crucial task in autonomous driving. While existing research has proposed various attacks on object detection, such as those using adversarial patches or stickers, the exploration of projection attacks on 3D surfaces remains largely unexplored. Compared to adversarial patches or stickers, which have fixed adversarial patterns, projection attacks allow for transient modifications to these patterns, enabling a more flexible attack. In this paper, we introduce an adversarial 3D projection attack specifically targeting object detection in autonomous driving scenarios. We frame the attack formulation as an optimization problem, utilizing a combination of color mapping and geometric transformation models. Our results demonstrate the effectiveness of the proposed attack in deceiving YOLOv3 and Mask R-CNN in physical settings. Evaluations conducted in an indoor environment show an attack success rate of up to 100% under low ambient light conditions, highlighting the potential damage of our attack in real-world driving scenarios.<|reference_end|>
arxiv
@article{zhou2024transient, title={Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving}, author={Ce Zhou, Qiben Yan, Sijia Liu}, journal={arXiv preprint arXiv:2409.17403}, year={2024}, archivePrefix={arXiv}, eprint={2409.17403}, primaryClass={cs.CR cs.AI cs.CV} }
zhou2024transient
arxiv-662068
2409.17405
AI Enabled Neutron Flux Measurement and Virtual Calibration in Boiling Water Reactors
<|reference_start|>AI Enabled Neutron Flux Measurement and Virtual Calibration in Boiling Water Reactors: Accurately capturing the three dimensional power distribution within a reactor core is vital for ensuring the safe and economical operation of the reactor, compliance with Technical Specifications, and fuel cycle planning (safety, control, and performance evaluation). Offline (that is, during cycle planning and core design), a three dimensional neutronics simulator is used to estimate the reactor's power, moderator, void, and flow distributions, from which margin to thermal limits and fuel exposures can be approximated. Online, this is accomplished with a system of local power range monitors (LPRMs) designed to capture enough neutron flux information to infer the full nodal power distribution. Certain problems with this process, ranging from measurement and calibration to the power adaption process, pose challenges to operators and limit the ability to design reload cores economically (e.g., engineering in insufficient margin or more margin than required). Artificial intelligence (AI) and machine learning (ML) are being used to solve the problems to reduce maintenance costs, improve the accuracy of online local power measurements, and decrease the bias between offline and online power distributions, thereby leading to a greater ability to design safe and economical reload cores. We present ML models trained from two deep neural network (DNN) architectures, SurrogateNet and LPRMNet, that demonstrate a testing error of 1 percent and 3 percent, respectively. Applications of these models can include virtual sensing capability for bypassed or malfunctioning LPRMs, on demand virtual calibration of detectors between successive calibrations, highly accurate nuclear end of life determinations for LPRMs, and reduced bias between measured and predicted power distributions within the core.<|reference_end|>
arxiv
@article{tunga2024ai, title={AI Enabled Neutron Flux Measurement and Virtual Calibration in Boiling Water Reactors}, author={Anirudh Tunga, Jordan Heim, Michael Mueterthies, Thomas Gruenwald and Jonathan Nistor}, journal={13th Nuclear Plant Instrumentation, Control \& Human-Machine Interface Technologies (NPIC\&HMIT 2023)}, year={2024}, doi={10.13182/NPICHMIT23-41018}, archivePrefix={arXiv}, eprint={2409.17405}, primaryClass={cs.AI cs.LG} }
tunga2024ai
arxiv-662069
2409.17406
Spiders Based on Anxiety: How Reinforcement Learning Can Deliver Desired User Experience in Virtual Reality Personalized Arachnophobia Treatment
<|reference_start|>Spiders Based on Anxiety: How Reinforcement Learning Can Deliver Desired User Experience in Virtual Reality Personalized Arachnophobia Treatment: The need to generate a spider to provoke a desired anxiety response arises in the context of personalized virtual reality exposure therapy (VRET), a treatment approach for arachnophobia. This treatment involves patients observing virtual spiders in order to become desensitized and decrease their phobia, which requires that the spiders elicit specific anxiety responses. However, VRET approaches tend to require therapists to hand-select the appropriate spider for each patient, which is a time-consuming process and takes significant technical knowledge and patient insight. While automated methods exist, they tend to employ rules-based approaches with minimal ability to adapt to specific users. To address these challenges, we present a framework for VRET utilizing procedural content generation (PCG) and reinforcement learning (RL), which automatically adapts a spider to elicit a desired anxiety response. We demonstrate the superior performance of this system compared to a more common rules-based VRET method.<|reference_end|>
arxiv
@article{mahmoudi-nejad2024spiders, title={Spiders Based on Anxiety: How Reinforcement Learning Can Deliver Desired User Experience in Virtual Reality Personalized Arachnophobia Treatment}, author={Athar Mahmoudi-Nejad, Matthew Guzdial, Pierre Boulanger}, journal={arXiv preprint arXiv:2409.17406}, year={2024}, archivePrefix={arXiv}, eprint={2409.17406}, primaryClass={cs.LG cs.HC} }
mahmoudi-nejad2024spiders
arxiv-662070
2409.17407
Post-hoc Reward Calibration: A Case Study on Length Bias
<|reference_start|>Post-hoc Reward Calibration: A Case Study on Length Bias: Reinforcement Learning from Human Feedback aligns the outputs of Large Language Models with human values and preferences. Central to this process is the reward model (RM), which translates human feedback into training signals for optimising LLM behaviour. However, RMs can develop biases by exploiting spurious correlations in their training data, such as favouring outputs based on length or style rather than true quality. These biases can lead to incorrect output rankings, sub-optimal model evaluations, and the amplification of undesirable behaviours in LLM alignment. This paper addresses the challenge of correcting such biases without additional data and training, introducing the concept of Post-hoc Reward Calibration. We first propose an intuitive approach to estimate the bias term and, thus, remove it to approximate the underlying true reward. We then extend the approach to a more general and robust form with the Locally Weighted Regression. Focusing on the prevalent length bias, we validate our proposed approaches across three experimental settings, demonstrating consistent improvements: (1) a 3.11 average performance gain across 33 reward models on the RewardBench dataset; (2) enhanced alignment of RM rankings with GPT-4 evaluations and human preferences based on the AlpacaEval benchmark; and (3) improved Length-Controlled win rate of the RLHF process in multiple LLM--RM combinations. Our method is computationally efficient and generalisable to other types of bias and RMs, offering a scalable and robust solution for mitigating biases in LLM alignment. Our code and results are available at https://github.com/ZeroYuHuang/Reward-Calibration.<|reference_end|>
arxiv
@article{huang2024post-hoc, title={Post-hoc Reward Calibration: A Case Study on Length Bias}, author={Zeyu Huang, Zihan Qiu, Zili Wang, Edoardo M. Ponti, Ivan Titov}, journal={arXiv preprint arXiv:2409.17407}, year={2024}, archivePrefix={arXiv}, eprint={2409.17407}, primaryClass={cs.AI cs.CL} }
huang2024post-hoc
arxiv-662071
2409.17408
Sociotechnical Approach to Enterprise Generative Artificial Intelligence (E-GenAI)
<|reference_start|>Sociotechnical Approach to Enterprise Generative Artificial Intelligence (E-GenAI): In this theoretical article, a sociotechnical approach is proposed to characterize: first, the business ecosystem, focusing on the relationships among Providers, Enterprise, and Customers through SCM, ERP, and CRM platforms to align: (1) Business Intelligence (BI), Fuzzy Logic (FL), and TRIZ (Theory of Inventive Problem Solving), through the OID model, and (2) Knowledge Management (KM) and Imperfect Knowledge Management (IKM), through the OIDK model. Second, the article explores the E-GenAI business ecosystem, which integrates GenAI-based platforms for SCM, ERP, and CRM with GenAI-based platforms for BI, FL, TRIZ, KM, and IKM, to align Large Language Models (LLMs) through the E-GenAI (OID) model. Finally, to understand the dynamics of LLMs, we utilize finite automata to model the relationships between Followers and Followees. This facilitates the construction of LLMs that can identify specific characteristics of users on a social media platform.<|reference_end|>
arxiv
@article{jimenez2024sociotechnical, title={Sociotechnical Approach to Enterprise Generative Artificial Intelligence (E-GenAI)}, author={Leoncio Jimenez, Francisco Venegas}, journal={arXiv preprint arXiv:2409.17408}, year={2024}, archivePrefix={arXiv}, eprint={2409.17408}, primaryClass={cs.CY cs.AI cs.IT math.IT} }
jimenez2024sociotechnical
arxiv-662072
2409.17410
Copying style, Extracting value: Illustrators' Perception of AI Style Transfer and its Impact on Creative Labor
<|reference_start|>Copying style, Extracting value: Illustrators' Perception of AI Style Transfer and its Impact on Creative Labor: Generative text-to-image models are disrupting the lives of creative professionals. Specifically, illustrators are threatened by models that claim to extract and reproduce their style. Yet, research on style transfer has rarely focused on their perspectives. We provided four illustrators with a model fine-tuned to their style and conducted semi-structured interviews about the model's successes, limitations, and potential uses. Evaluating their output, artists reported that style transfer successfully copies aesthetic fragments but is limited by content-style disentanglement and lacks the crucial emergent quality of their style. They also deemed the others' copies more successful. Understanding the results of style transfer as "boundary objects," we analyze how they can simultaneously be considered unsuccessful by artists and poised to replace their work by others. We connect our findings to critical HCI frameworks, demonstrating that style transfer, rather than merely a Creativity Support Tool, should also be understood as a supply chain optimization one.<|reference_end|>
arxiv
@article{porquet2024copying, title={Copying style, Extracting value: Illustrators' Perception of AI Style Transfer and its Impact on Creative Labor}, author={Julien Porquet, Sitong Wang, Lydia B. Chilton}, journal={arXiv preprint arXiv:2409.17410}, year={2024}, archivePrefix={arXiv}, eprint={2409.17410}, primaryClass={cs.HC} }
porquet2024copying
arxiv-662073
2409.17411
Exploring Semantic Clustering in Deep Reinforcement Learning for Video Games
<|reference_start|>Exploring Semantic Clustering in Deep Reinforcement Learning for Video Games: In this paper, we investigate the semantic clustering properties of deep reinforcement learning (DRL) for video games, enriching our understanding of the internal dynamics of DRL and advancing its interpretability. In this context, semantic clustering refers to the inherent capacity of neural networks to internally group video inputs based on semantic similarity. To achieve this, we propose a novel DRL architecture that integrates a semantic clustering module featuring both feature dimensionality reduction and online clustering. This module seamlessly integrates into the DRL training pipeline, addressing instability issues observed in previous t-SNE-based analysis methods and eliminating the necessity for extensive manual annotation of semantic analysis. Through experiments, we validate the effectiveness of the proposed module and the semantic clustering properties in DRL for video games. Additionally, based on these properties, we introduce new analytical methods to help understand the hierarchical structure of policies and the semantic distribution within the feature space.<|reference_end|>
arxiv
@article{zhang2024exploring, title={Exploring Semantic Clustering in Deep Reinforcement Learning for Video Games}, author={Liang Zhang, Justin Lieffers, Adarsh Pyarelal}, journal={arXiv preprint arXiv:2409.17411}, year={2024}, archivePrefix={arXiv}, eprint={2409.17411}, primaryClass={cs.AI} }
zhang2024exploring
arxiv-662074
2409.17414
Uniformly $hp$-stable elements for the elasticity complex
<|reference_start|>Uniformly $hp$-stable elements for the elasticity complex: For the discretization of symmetric, divergence-conforming stress tensors in continuum mechanics, we prove inf-sup stability bounds which are uniform in polynomial degree and mesh size for the Hu--Zhang finite element in two dimensions. This is achieved via an explicit construction of a bounded right inverse of the divergence operator, with the crucial component being the construction of bounded Poincar\'e operators for the stress elasticity complex which are polynomial-preserving, in the Bernstein--Gelfand--Gelfand framework of the finite element exterior calculus. We also construct $hp$-bounded projection operators satisfying a commuting diagram property and $hp$-stable Hodge decompositions. Numerical examples are provided.<|reference_end|>
arxiv
@article{aznaran2024uniformly, title={Uniformly $hp$-stable elements for the elasticity complex}, author={Francis R. A. Aznaran and Kaibo Hu and Charles Parker}, journal={arXiv preprint arXiv:2409.17414}, year={2024}, archivePrefix={arXiv}, eprint={2409.17414}, primaryClass={math.NA cs.NA} }
aznaran2024uniformly
arxiv-662075
2409.17416
From Deception to Detection: The Dual Roles of Large Language Models in Fake News
<|reference_start|>From Deception to Detection: The Dual Roles of Large Language Models in Fake News: Fake news poses a significant threat to the integrity of information ecosystems and public trust. The advent of Large Language Models (LLMs) holds considerable promise for transforming the battle against fake news. Generally, LLMs represent a double-edged sword in this struggle. One major concern is that LLMs can be readily used to craft and disseminate misleading information on a large scale. This raises the pressing questions: Can LLMs easily generate biased fake news? Do all LLMs have this capability? Conversely, LLMs offer valuable prospects for countering fake news, thanks to their extensive knowledge of the world and robust reasoning capabilities. This leads to other critical inquiries: Can we use LLMs to detect fake news, and do they outperform typical detection models? In this paper, we aim to address these pivotal questions by exploring the performance of various LLMs. Our objective is to explore the capability of various LLMs in effectively combating fake news, marking this as the first investigation to analyze seven such models. Our results reveal that while some models adhere strictly to safety protocols, refusing to generate biased or misleading content, other models can readily produce fake news across a spectrum of biases. Additionally, our results show that larger models generally exhibit superior detection abilities and that LLM-generated fake news are less likely to be detected than human-written ones. Finally, our findings demonstrate that users can benefit from LLM-generated explanations in identifying fake news.<|reference_end|>
arxiv
@article{sallami2024from, title={From Deception to Detection: The Dual Roles of Large Language Models in Fake News}, author={Dorsaf Sallami, Yuan-Chen Chang, Esma A\"imeur}, journal={arXiv preprint arXiv:2409.17416}, year={2024}, archivePrefix={arXiv}, eprint={2409.17416}, primaryClass={cs.CL cs.AI} }
sallami2024from
arxiv-662076
2409.17417
Enhancing Investment Opinion Ranking through Argument-Based Sentiment Analysis
<|reference_start|>Enhancing Investment Opinion Ranking through Argument-Based Sentiment Analysis: In the era of rapid Internet and social media platform development, individuals readily share their viewpoints online. The overwhelming quantity of these posts renders comprehensive analysis impractical. This necessitates an efficient recommendation system to filter and present significant, relevant opinions. Our research introduces a dual-pronged argument mining technique to improve recommendation system effectiveness, considering both professional and amateur investor perspectives. Our first strategy involves using the discrepancy between target and closing prices as an opinion indicator. The second strategy applies argument mining principles to score investors' opinions, subsequently ranking them by these scores. Experimental results confirm the effectiveness of our approach, demonstrating its ability to identify opinions with higher profit potential. Beyond profitability, our research extends to risk analysis, examining the relationship between recommended opinions and investor behaviors. This offers a holistic view of potential outcomes following the adoption of these recommended opinions.<|reference_end|>
arxiv
@article{chen2024enhancing, title={Enhancing Investment Opinion Ranking through Argument-Based Sentiment Analysis}, author={Chung-Chi Chen, Hen-Hsen Huang, Hsin-Hsi Chen, Hiroya Takamura, Ichiro Kobayashi, Yusuke Miyao}, journal={arXiv preprint arXiv:2409.17417}, year={2024}, archivePrefix={arXiv}, eprint={2409.17417}, primaryClass={cs.CL} }
chen2024enhancing
arxiv-662077
2409.17419
Pre-Finetuning with Impact Duration Awareness for Stock Movement Prediction
<|reference_start|>Pre-Finetuning with Impact Duration Awareness for Stock Movement Prediction: Understanding the duration of news events' impact on the stock market is crucial for effective time-series forecasting, yet this facet is largely overlooked in current research. This paper addresses this research gap by introducing a novel dataset, the Impact Duration Estimation Dataset (IDED), specifically designed to estimate impact duration based on investor opinions. Our research establishes that pre-finetuning language models with IDED can enhance performance in text-based stock movement predictions. In addition, we juxtapose our proposed pre-finetuning task with sentiment analysis pre-finetuning, further affirming the significance of learning impact duration. Our findings highlight the promise of this novel research direction in stock movement prediction, offering a new avenue for financial forecasting. We also provide the IDED and pre-finetuned language models under the CC BY-NC-SA 4.0 license for academic use, fostering further exploration in this field.<|reference_end|>
arxiv
@article{chiu2024pre-finetuning, title={Pre-Finetuning with Impact Duration Awareness for Stock Movement Prediction}, author={Chr-Jr Chiu, Chung-Chi Chen, Hen-Hsen Huang, Hsin-Hsi Chen}, journal={arXiv preprint arXiv:2409.17419}, year={2024}, archivePrefix={arXiv}, eprint={2409.17419}, primaryClass={cs.CL} }
chiu2024pre-finetuning
arxiv-662078
2409.17420
VibraForge: A Scalable Prototyping Toolkit For Creating Spatialized Vibrotactile Feedback Systems
<|reference_start|>VibraForge: A Scalable Prototyping Toolkit For Creating Spatialized Vibrotactile Feedback Systems: Spatialized vibrotactile feedback systems deliver tactile information by placing multiple vibrotactile actuators on the body. As increasing numbers of actuators are required to adequately convey information in complicated applications, haptic designers find it difficult to create such systems due to limited scalability of existing toolkits. We propose VibraForge, an open-source vibrotactile toolkit that supports up to 128 vibrotactile actuators. Each actuator is encapsulated within a self-contained vibration unit and driven by its own microcontroller. By leveraging a chain-connection method, each unit receives independent vibration commands from a control unit, with fine-grained control over intensity and frequency. We also designed a GUI Editor to expedite the authoring of spatial vibrotactile patterns. Technical evaluations show that vibration units reliably reproduce audio waveforms with low-latency and high-bandwidth data communication. Case studies of phonemic tactile display, virtual reality fitness training, and drone teleoperation demonstrate the potential usage of VibraForge within different domains.<|reference_end|>
arxiv
@article{huang2024vibraforge:, title={VibraForge: A Scalable Prototyping Toolkit For Creating Spatialized Vibrotactile Feedback Systems}, author={Bingjian Huang, Siyi Ren, Yuewen Luo, Qilong Cheng, Hanfeng Cai, Yeqi Sang, Mauricio Sousa, Paul H. Dietz, Daniel Wigdor}, journal={arXiv preprint arXiv:2409.17420}, year={2024}, archivePrefix={arXiv}, eprint={2409.17420}, primaryClass={cs.HC} }
huang2024vibraforge:
arxiv-662079
2409.17421
Solar Active Regions Emergence Prediction Using Long Short-Term Memory Networks
<|reference_start|>Solar Active Regions Emergence Prediction Using Long Short-Term Memory Networks: We developed Long Short-Term Memory (LSTM) models to predict the formation of active regions (ARs) on the solar surface. Using the Doppler shift velocity, the continuum intensity, and the magnetic field observations from the Solar Dynamics Observatory (SDO) Helioseismic and Magnetic Imager (HMI), we have created time-series datasets of acoustic power and magnetic flux, which are used to train LSTM models on predicting continuum intensity, 12 hours in advance. These novel machine learning (ML) models are able to capture variations of the acoustic power density associated with upcoming magnetic flux emergence and continuum intensity decrease. Testing of the models' performance was done on data for 5 ARs, unseen from the models during training. Model 8, the best performing model trained, was able to make a successful prediction of emergence for all testing active regions in an experimental setting and three of them in an operational. The model predicted the emergence of AR11726, AR13165, and AR13179 respectively 10, 29, and 5 hours in advance, and variations of this model achieved average RMSE values of 0.11 for both active and quiet areas on the solar disc. This work sets the foundations for ML-aided prediction of solar ARs.<|reference_end|>
arxiv
@article{kasapis2024solar, title={Solar Active Regions Emergence Prediction Using Long Short-Term Memory Networks}, author={Spiridon Kasapis, Irina N. Kitiashvili, Alexander G. Kosovichev, John T. Stefan}, journal={arXiv preprint arXiv:2409.17421}, year={2024}, archivePrefix={arXiv}, eprint={2409.17421}, primaryClass={astro-ph.SR cs.AI cs.LG} }
kasapis2024solar
arxiv-662080
2409.17422
Discovering the Gems in Early Layers: Accelerating Long-Context LLMs with 1000x Input Token Reduction
<|reference_start|>Discovering the Gems in Early Layers: Accelerating Long-Context LLMs with 1000x Input Token Reduction: Large Language Models (LLMs) have demonstrated remarkable capabilities in handling long context inputs, but this comes at the cost of increased computational resources and latency. Our research introduces a novel approach for the long context bottleneck to accelerate LLM inference and reduce GPU memory consumption. Our research demonstrates that LLMs can identify relevant tokens in the early layers before generating answers to a query. Leveraging this insight, we propose an algorithm that uses early layers of an LLM as filters to select and compress input tokens, significantly reducing the context length for subsequent processing. Our method, GemFilter, demonstrates substantial improvements in both speed and memory efficiency compared to existing techniques, such as standard attention and SnapKV/H2O. Notably, it achieves a 2.4$\times$ speedup and 30\% reduction in GPU memory usage compared to SOTA methods. Evaluation on the Needle in a Haystack task shows that GemFilter significantly outperforms standard attention, SnapKV and demonstrates comparable performance on the LongBench challenge. GemFilter is simple, training-free, and broadly applicable across different LLMs. Crucially, it provides interpretability by allowing humans to inspect the selected input sequence. These findings not only offer practical benefits for LLM deployment, but also enhance our understanding of LLM internal mechanisms, paving the way for further optimizations in LLM design and inference. Our code is available at \url{https://github.com/SalesforceAIResearch/GemFilter}.<|reference_end|>
arxiv
@article{shi2024discovering, title={Discovering the Gems in Early Layers: Accelerating Long-Context LLMs with 1000x Input Token Reduction}, author={Zhenmei Shi, Yifei Ming, Xuan-Phi Nguyen, Yingyu Liang, Shafiq Joty}, journal={arXiv preprint arXiv:2409.17422}, year={2024}, archivePrefix={arXiv}, eprint={2409.17422}, primaryClass={cs.CL cs.AI cs.LG} }
shi2024discovering
arxiv-662081
2409.17424
Results of the Big ANN: NeurIPS'23 competition
<|reference_start|>Results of the Big ANN: NeurIPS'23 competition: The 2023 Big ANN Challenge, held at NeurIPS 2023, focused on advancing the state-of-the-art in indexing data structures and search algorithms for practical variants of Approximate Nearest Neighbor (ANN) search that reflect the growing complexity and diversity of workloads. Unlike prior challenges that emphasized scaling up classical ANN search ~\cite{DBLP:conf/nips/SimhadriWADBBCH21}, this competition addressed filtered search, out-of-distribution data, sparse and streaming variants of ANNS. Participants developed and submitted innovative solutions that were evaluated on new standard datasets with constrained computational resources. The results showcased significant improvements in search accuracy and efficiency over industry-standard baselines, with notable contributions from both academic and industrial teams. This paper summarizes the competition tracks, datasets, evaluation metrics, and the innovative approaches of the top-performing submissions, providing insights into the current advancements and future directions in the field of approximate nearest neighbor search.<|reference_end|>
arxiv
@article{simhadri2024results, title={Results of the Big ANN: NeurIPS'23 competition}, author={Harsha Vardhan Simhadri, Martin Aum\"uller, Amir Ingber, Matthijs Douze, George Williams, Magdalen Dobson Manohar, Dmitry Baranchuk, Edo Liberty, Frank Liu, Ben Landrum, Mazin Karjikar, Laxman Dhulipala, Meng Chen, Yue Chen, Rui Ma, Kai Zhang, Yuzheng Cai, Jiayang Shi, Yizhuo Chen, Weiguo Zheng, Zihao Wan, Jie Yin and Ben Huang}, journal={arXiv preprint arXiv:2409.17424}, year={2024}, archivePrefix={arXiv}, eprint={2409.17424}, primaryClass={cs.IR cs.DS cs.LG cs.PF} }
simhadri2024results
arxiv-662082
2409.17425
Website visits can predict angler presence using machine learning
<|reference_start|>Website visits can predict angler presence using machine learning: Understanding and predicting recreational fishing activity is important for sustainable fisheries management. However, traditional methods of measuring fishing pressure, such as surveys, can be costly and limited in both time and spatial extent. Predictive models that relate fishing activity to environmental or economic factors typically rely on historical data, which often restricts their spatial applicability due to data scarcity. In this study, high-resolution angler-generated data from an online platform and easily accessible auxiliary data were tested to predict daily boat presence and aerial counts of boats at almost 200 lakes over five years in Ontario, Canada. Lake-information website visits alone enabled predicting daily angler boat presence with 78% accuracy. While incorporating additional environmental, socio-ecological, weather and angler-generated features into machine learning models did not remarkably improve prediction performance of boat presence, they were substantial for the prediction of boat counts. Models achieved an R2 of up to 0.77 at known lakes included in the model training, but they performed poorly for unknown lakes (R2 = 0.21). The results demonstrate the value of integrating angler-generated data from online platforms into predictive models and highlight the potential of machine learning models to enhance fisheries management.<|reference_end|>
arxiv
@article{schmid2024website, title={Website visits can predict angler presence using machine learning}, author={Julia S. Schmid (1) and Sean Simmons (2) and Mark A. Lewis (1 and 3 and 4 and 5) and Mark S. Poesch (5) and Pouria Ramazi (6) ((1) Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta, Canada, (2) Anglers Atlas, Goldstream Publishing, Prince George, British Columbia, Canada, (3) Department of Mathematics and Statistics, University of Victoria, Victoria, British Columbia, Canada, (4) Department of Biology, University of Victoria, Victoria, British Columbia, Canada, (5) Department of Biological Sciences, University of Alberta, Edmonton, Alberta, Canada, (6) Department of Mathematics and Statistics, Brock University, St. Catharines, Ontario, Canada)}, journal={arXiv preprint arXiv:2409.17425}, year={2024}, archivePrefix={arXiv}, eprint={2409.17425}, primaryClass={physics.soc-ph cs.LG} }
schmid2024website
arxiv-662083
2409.17426
Exploring the Use of ChatGPT for a Systematic Literature Review: a Design-Based Research
<|reference_start|>Exploring the Use of ChatGPT for a Systematic Literature Review: a Design-Based Research: ChatGPT has been used in several educational contexts, including learning, teaching and research. It also has potential to conduct the systematic literature review (SLR). However, there are limited empirical studies on how to use ChatGPT in conducting a SLR. Based on a SLR published, this study used ChatGPT to conduct a SLR of the same 33 papers in a design-based approach, to see what the differences are by comparing the reviews' results, and to answer: To what extent can ChatGPT conduct SLR? What strategies can human researchers utilize to structure prompts for ChatGPT that enhance the reliability and validity of a SLR? This study found that ChatGPT could conduct a SLR. It needs detailed and accurate prompts to analyze the literature. It also has limitations. Guiding principles are summarized from this study for researchers to follow when they need to conduct SLRs using ChatGPT.<|reference_end|>
arxiv
@article{huang2024exploring, title={Exploring the Use of ChatGPT for a Systematic Literature Review: a Design-Based Research}, author={Qian Huang, Qiyun Wang}, journal={arXiv preprint arXiv:2409.17426}, year={2024}, archivePrefix={arXiv}, eprint={2409.17426}, primaryClass={cs.AI} }
huang2024exploring
arxiv-662084
2409.17427
Stress Detection from Photoplethysmography in a Virtual Reality Environment
<|reference_start|>Stress Detection from Photoplethysmography in a Virtual Reality Environment: Personalized virtual reality exposure therapy is a therapeutic practice that can adapt to an individual patient, leading to better health outcomes. Measuring a patient's mental state to adjust the therapy is a critical but difficult task. Most published studies use subjective methods to estimate a patient's mental state, which can be inaccurate. This article proposes a virtual reality exposure therapy (VRET) platform capable of assessing a patient's mental state using non-intrusive and widely available physiological signals such as photoplethysmography (PPG). In a case study, we evaluate how PPG signals can be used to detect two binary classifications: peaceful and stressful states. Sixteen healthy subjects were exposed to the two VR environments (relaxed and stressful). Using LOSO cross-validation, our best classification model could predict the two states with a 70.6% accuracy which outperforms many more complex approaches.<|reference_end|>
arxiv
@article{mahmoudi-nejad2024stress, title={Stress Detection from Photoplethysmography in a Virtual Reality Environment}, author={Athar Mahmoudi-Nejad, Pierre Boulanger, Matthew Guzdial}, journal={arXiv preprint arXiv:2409.17427}, year={2024}, archivePrefix={arXiv}, eprint={2409.17427}, primaryClass={cs.LG cs.HC} }
mahmoudi-nejad2024stress
arxiv-662085
2409.17429
Real-World Data Inspired Interactive Connected Traffic Scenario Generation
<|reference_start|>Real-World Data Inspired Interactive Connected Traffic Scenario Generation: Simulation is a crucial step in ensuring accurate, efficient, and realistic Connected and Autonomous Vehicles (CAVs) testing and validation. As the adoption of CAV accelerates, the integration of real-world data into simulation environments becomes increasingly critical. Among various technologies utilized by CAVs, Vehicle-to-Everything (V2X) communication plays a crucial role in ensuring a seamless transmission of information between CAVs, infrastructure, and other road users. However, most existing studies have focused on developing and testing communication protocols, resource allocation strategies, and data dissemination techniques in V2X. There is a gap where real-world V2X data is integrated into simulations to generate diverse and high-fidelity traffic scenarios. To fulfill this research gap, we leverage real-world Signal Phase and Timing (SPaT) data from Roadside Units (RSUs) to enhance the fidelity of CAV simulations. Moreover, we developed an algorithm that enables Autonomous Vehicles (AVs) to respond dynamically to real-time traffic signal data, simulating realistic V2X communication scenarios. Such high-fidelity simulation environments can generate multimodal data, including trajectory, semantic camera, depth camera, and bird's eye view data for various traffic scenarios. The generated scenarios and data provide invaluable insights into AVs' interactions with traffic infrastructure and other road users. This work aims to bridge the gap between theoretical research and practical deployment of CAVs, facilitating the development of smarter and safer transportation systems.<|reference_end|>
arxiv
@article{you2024real-world, title={Real-World Data Inspired Interactive Connected Traffic Scenario Generation}, author={Junwei You, Pei Li, Yang Cheng, Keshu Wu, Rui Gan, Steven T. Parker, Bin Ran}, journal={arXiv preprint arXiv:2409.17429}, year={2024}, archivePrefix={arXiv}, eprint={2409.17429}, primaryClass={cs.RO} }
you2024real-world
arxiv-662086
2409.17430
A Hierarchical Gradient Tracking Algorithm for Mitigating Subnet-Drift in Fog Learning Networks
<|reference_start|>A Hierarchical Gradient Tracking Algorithm for Mitigating Subnet-Drift in Fog Learning Networks: Federated learning (FL) encounters scalability challenges when implemented over fog networks that do not follow FL's conventional star topology architecture. Semi-decentralized FL (SD-FL) has proposed a solution for device-to-device (D2D) enabled networks that divides model cooperation into two stages: at the lower stage, D2D communications is employed for local model aggregations within subnetworks (subnets), while the upper stage handles device-server (DS) communications for global model aggregations. However, existing SD-FL schemes are based on gradient diversity assumptions that become performance bottlenecks as data distributions become more heterogeneous. In this work, we develop semi-decentralized gradient tracking (SD-GT), the first SD-FL methodology that removes the need for such assumptions by incorporating tracking terms into device updates for each communication layer. Our analytical characterization of SD-GT reveals upper bounds on convergence for non-convex, convex, and strongly-convex problems. We show how the bounds enable the development of an optimization algorithm that navigates the performance-efficiency trade-off by tuning subnet sampling rate and D2D rounds for each global training interval. Our subsequent numerical evaluations demonstrate that SD-GT obtains substantial improvements in trained model quality and communication cost relative to baselines in SD-FL and gradient tracking on several datasets.<|reference_end|>
arxiv
@article{chen2024a, title={A Hierarchical Gradient Tracking Algorithm for Mitigating Subnet-Drift in Fog Learning Networks}, author={Evan Chen, Shiqiang Wang, Christopher G. Brinton}, journal={arXiv preprint arXiv:2409.17430}, year={2024}, archivePrefix={arXiv}, eprint={2409.17430}, primaryClass={cs.NI} }
chen2024a
arxiv-662087
2409.17431
On Extending Direct Preference Optimization to Accommodate Ties
<|reference_start|>On Extending Direct Preference Optimization to Accommodate Ties: We derive and investigate two DPO variants that explicitly model the possibility of declaring a tie in pair-wise comparisons. We replace the Bradley-Terry model in DPO with two well-known modeling extensions, by Rao and Kupper and by Davidson, that assign probability to ties as alternatives to clear preferences. Our experiments in neural machine translation and summarization show that explicitly labeled ties can be added to the datasets for these DPO variants without the degradation in task performance that is observed when the same tied pairs are presented to DPO. We find empirically that the inclusion of ties leads to stronger regularization with respect to the reference policy as measured by KL divergence, and we see this even for DPO in its original form. These findings motivate and enable the inclusion of tied pairs in preference optimization as opposed to simply discarding them.<|reference_end|>
arxiv
@article{chen2024on, title={On Extending Direct Preference Optimization to Accommodate Ties}, author={Jinghong Chen, Guangyu Yang, Weizhe Lin, Jingbiao Mei, Bill Byrne}, journal={arXiv preprint arXiv:2409.17431}, year={2024}, archivePrefix={arXiv}, eprint={2409.17431}, primaryClass={cs.CL} }
chen2024on
arxiv-662088
2409.17432
HazeSpace2M: A Dataset for Haze Aware Single Image Dehazing
<|reference_start|>HazeSpace2M: A Dataset for Haze Aware Single Image Dehazing: Reducing the atmospheric haze and enhancing image clarity is crucial for computer vision applications. The lack of real-life hazy ground truth images necessitates synthetic datasets, which often lack diverse haze types, impeding effective haze type classification and dehazing algorithm selection. This research introduces the HazeSpace2M dataset, a collection of over 2 million images designed to enhance dehazing through haze type classification. HazeSpace2M includes diverse scenes with 10 haze intensity levels, featuring Fog, Cloud, and Environmental Haze (EH). Using the dataset, we introduce a technique of haze type classification followed by specialized dehazers to clear hazy images. Unlike conventional methods, our approach classifies haze types before applying type-specific dehazing, improving clarity in real-life hazy images. Benchmarking with state-of-the-art (SOTA) models, ResNet50 and AlexNet achieve 92.75\% and 92.50\% accuracy, respectively, against existing synthetic datasets. However, these models achieve only 80% and 70% accuracy, respectively, against our Real Hazy Testset (RHT), highlighting the challenging nature of our HazeSpace2M dataset. Additional experiments show that haze type classification followed by specialized dehazing improves results by 2.41% in PSNR, 17.14% in SSIM, and 10.2\% in MSE over general dehazers. Moreover, when testing with SOTA dehazing models, we found that applying our proposed framework significantly improves their performance. These results underscore the significance of HazeSpace2M and our proposed framework in addressing atmospheric haze in multimedia processing. Complete code and dataset is available on \href{https://github.com/tanvirnwu/HazeSpace2M} {\textcolor{blue}{\textbf{GitHub}}}.<|reference_end|>
arxiv
@article{islam2024hazespace2m:, title={HazeSpace2M: A Dataset for Haze Aware Single Image Dehazing}, author={Md Tanvir Islam, Nasir Rahim, Saeed Anwar, Muhammad Saqib, Sambit Bakshi, Khan Muhammad}, journal={ACM Multimedia 2024}, year={2024}, doi={10.1145/3664647.3681382}, archivePrefix={arXiv}, eprint={2409.17432}, primaryClass={cs.CV} }
islam2024hazespace2m:
arxiv-662089
2409.17433
HDFlow: Enhancing LLM Complex Problem-Solving with Hybrid Thinking and Dynamic Workflows
<|reference_start|>HDFlow: Enhancing LLM Complex Problem-Solving with Hybrid Thinking and Dynamic Workflows: Despite recent advancements in large language models (LLMs), their performance on complex reasoning problems requiring multi-step thinking and combining various skills is still limited. To address this, we propose a novel framework HDFlow for complex reasoning with LLMs that combines fast and slow thinking modes in an adaptive manner. Our approach consists of two key components: 1) a new approach for slow, deliberate reasoning called Dynamic Workflow, which automatically decomposes complex problems into more manageable sub-tasks and dynamically designs a workflow to assemble specialized LLM or symbolic reasoning tools to solve sub-tasks; 2) Hybrid Thinking, a general framework that dynamically combines fast and slow thinking based on problem complexity. Finally, we propose an easy-to-scale method for automatically synthesizing a large-scale dataset of 27K challenging reasoning problems for complex reasoning and a hybrid thinking tuning method that trains smaller LLMs on this dataset to internalize the fast/slow hybrid reasoning strategies. Experiments on four reasoning benchmark datasets demonstrate that our slow thinking with dynamic workflows significantly outperforms Chain-of-Thought, and hybrid thinking achieves the highest accuracy while providing an effective balance between computational efficiency and performance. Fine-tuning using our hybrid thinking approach also significantly boosts the complex reasoning capabilities of open-source language models. The results showcase the promise of slow thinking, dynamic workflows, and hybrid thinking in expanding the frontier of complex problem-solving with LLMs\footnote{Code and data will be released at \url{https://github.com/wenlinyao/HDFlow}.}.<|reference_end|>
arxiv
@article{yao2024hdflow, title={HDFlow: Enhancing LLM Complex Problem-Solving with Hybrid Thinking and Dynamic Workflows}, author={Wenlin Yao and Haitao Mi and Dong Yu}, journal={arXiv preprint arXiv:2409.17433}, year={2024}, archivePrefix={arXiv}, eprint={2409.17433}, primaryClass={cs.CL cs.AI} }
yao2024hdflow
arxiv-662090
2409.17434
Harnessing the Potential of Gen-AI Coding Assistants in Public Sector Software Development
<|reference_start|>Harnessing the Potential of Gen-AI Coding Assistants in Public Sector Software Development: The study on GitHub Copilot by GovTech Singapore's Engineering Productivity Programme (EPP) reveals significant potential for AI Code Assistant tools to boost developer productivity and improve application quality in the public sector. Highlighting the substantial benefits for the public sector, the study observed an increased productivity (coding / tasks speed increased by 21-28%), which translates into accelerated development, and quicker go-to-market, with a notable consensus (95%) that the tool increases developer satisfaction. Particularly, junior developers experienced considerable efficiency gains and reduced coding times, illustrating Copilot's capability to enhance job satisfaction by easing routine tasks. This advancement allows for a sharper focus on complex projects, faster learning, and improved code quality. Recognising the strategic importance of these tools, the study recommends the development of an AI Framework to maximise such benefits while cautioning against potential over-reliance without solid foundational programming skills. It also advises public sector developers to classify their code as "Open" to use Gen-AI Coding Assistant tools on the Cloud like GitHub Copilot and to consider self-hosted tools like Codeium or Code Llama for confidential code to leverage technology efficiently within the public sector framework. With up to 8,000 developers, comprising both public officers and vendors developing applications for the public sector and its customers, there is significant potential to enhance productivity.<|reference_end|>
arxiv
@article{ng2024harnessing, title={Harnessing the Potential of Gen-AI Coding Assistants in Public Sector Software Development}, author={Kevin KB Ng and Liyana Fauzi and Leon Leow and Jaren Ng}, journal={arXiv preprint arXiv:2409.17434}, year={2024}, archivePrefix={arXiv}, eprint={2409.17434}, primaryClass={cs.SE} }
ng2024harnessing
arxiv-662091
2409.17435
Active Vision Might Be All You Need: Exploring Active Vision in Bimanual Robotic Manipulation
<|reference_start|>Active Vision Might Be All You Need: Exploring Active Vision in Bimanual Robotic Manipulation: Imitation learning has demonstrated significant potential in performing high-precision manipulation tasks using visual feedback from cameras. However, it is common practice in imitation learning for cameras to be fixed in place, resulting in issues like occlusion and limited field of view. Furthermore, cameras are often placed in broad, general locations, without an effective viewpoint specific to the robot's task. In this work, we investigate the utility of active vision (AV) for imitation learning and manipulation, in which, in addition to the manipulation policy, the robot learns an AV policy from human demonstrations to dynamically change the robot's camera viewpoint to obtain better information about its environment and the given task. We introduce AV-ALOHA, a new bimanual teleoperation robot system with AV, an extension of the ALOHA 2 robot system, incorporating an additional 7-DoF robot arm that only carries a stereo camera and is solely tasked with finding the best viewpoint. This camera streams stereo video to an operator wearing a virtual reality (VR) headset, allowing the operator to control the camera pose using head and body movements. The system provides an immersive teleoperation experience, with bimanual first-person control, enabling the operator to dynamically explore and search the scene and simultaneously interact with the environment. We conduct imitation learning experiments of our system both in real-world and in simulation, across a variety of tasks that emphasize viewpoint planning. Our results demonstrate the effectiveness of human-guided AV for imitation learning, showing significant improvements over fixed cameras in tasks with limited visibility. Project website: https://soltanilara.github.io/av-aloha/<|reference_end|>
arxiv
@article{chuang2024active, title={Active Vision Might Be All You Need: Exploring Active Vision in Bimanual Robotic Manipulation}, author={Ian Chuang and Andrew Lee and Dechen Gao and Iman Soltani}, journal={arXiv preprint arXiv:2409.17435}, year={2024}, archivePrefix={arXiv}, eprint={2409.17435}, primaryClass={cs.RO} }
chuang2024active
arxiv-662092
2409.17436
Minimizing Live Experiments in Recommender Systems: User Simulation to Evaluate Preference Elicitation Policies
<|reference_start|>Minimizing Live Experiments in Recommender Systems: User Simulation to Evaluate Preference Elicitation Policies: Evaluation of policies in recommender systems typically involves A/B testing using live experiments on real users to assess a new policy's impact on relevant metrics. This ``gold standard'' comes at a high cost, however, in terms of cycle time, user cost, and potential user retention. In developing policies for ``onboarding'' new users, these costs can be especially problematic, since on-boarding occurs only once. In this work, we describe a simulation methodology used to augment (and reduce) the use of live experiments. We illustrate its deployment for the evaluation of ``preference elicitation'' algorithms used to onboard new users of the YouTube Music platform. By developing counterfactually robust user behavior models, and a simulation service that couples such models with production infrastructure, we are able to test new algorithms in a way that reliably predicts their performance on key metrics when deployed live. We describe our domain, our simulation models and platform, results of experiments and deployment, and suggest future steps needed to further realistic simulation as a powerful complement to live experiments.<|reference_end|>
arxiv
@article{hsu2024minimizing, title={Minimizing Live Experiments in Recommender Systems: User Simulation to Evaluate Preference Elicitation Policies}, author={Chih-Wei Hsu and Martin Mladenov and Ofer Meshi and James Pine and Hubert Pham and Shane Li and Xujian Liang and Anton Polishko and Li Yang and Ben Scheetz and Craig Boutilier}, journal={arXiv preprint arXiv:2409.17436}, year={2024}, doi={10.1145/3626772.3661358}, archivePrefix={arXiv}, eprint={2409.17436}, primaryClass={cs.IR cs.LG} }
hsu2024minimizing
arxiv-662093
2409.17439
Rejection Sampling IMLE: Designing Priors for Better Few-Shot Image Synthesis
<|reference_start|>Rejection Sampling IMLE: Designing Priors for Better Few-Shot Image Synthesis: An emerging area of research aims to learn deep generative models with limited training data. Prior generative models like GANs and diffusion models require a lot of data to perform well, and their performance degrades when they are trained on only a small amount of data. A recent technique called Implicit Maximum Likelihood Estimation (IMLE) has been adapted to the few-shot setting, achieving state-of-the-art performance. However, current IMLE-based approaches encounter challenges due to inadequate correspondence between the latent codes selected for training and those drawn during inference. This results in suboptimal test-time performance. We theoretically show a way to address this issue and propose RS-IMLE, a novel approach that changes the prior distribution used for training. This leads to substantially higher quality image generation compared to existing GAN and IMLE-based methods, as validated by comprehensive experiments conducted on nine few-shot image datasets.<|reference_end|>
arxiv
@article{vashist2024rejection, title={Rejection Sampling IMLE: Designing Priors for Better Few-Shot Image Synthesis}, author={Chirag Vashist and Shichong Peng and Ke Li}, journal={arXiv preprint arXiv:2409.17439}, year={2024}, archivePrefix={arXiv}, eprint={2409.17439}, primaryClass={cs.CV cs.LG} }
vashist2024rejection
arxiv-662094
2409.17440
A Time Series is Worth Five Experts: Heterogeneous Mixture of Experts for Traffic Flow Prediction
<|reference_start|>A Time Series is Worth Five Experts: Heterogeneous Mixture of Experts for Traffic Flow Prediction: Accurate traffic prediction faces significant challenges, necessitating a deep understanding of both temporal and spatial cues and their complex interactions across multiple variables. Recent advancements in traffic prediction systems are primarily due to the development of complex sequence-centric models. However, existing approaches often embed multiple variables and spatial relationships at each time step, which may hinder effective variable-centric learning, ultimately leading to performance degradation in traditional traffic prediction tasks. To overcome these limitations, we introduce variable-centric and prior knowledge-centric modeling techniques. Specifically, we propose a Heterogeneous Mixture of Experts (TITAN) model for traffic flow prediction. TITAN initially consists of three experts focused on sequence-centric modeling. Then, through a designed low-rank adaptive method, TITAN simultaneously enables variable-centric modeling. Furthermore, we supervise the gating process using a prior knowledge-centric modeling strategy to ensure accurate routing. Experiments on two public traffic network datasets, METR-LA and PEMS-BAY, demonstrate that TITAN effectively captures variable-centric dependencies while ensuring accurate routing. Consequently, it achieves improvements in all evaluation metrics, ranging from approximately 4.37\% to 11.53\%, compared to previous state-of-the-art (SOTA) models. The code is open at \href{https://github.com/sqlcow/TITAN}{https://github.com/sqlcow/TITAN}.<|reference_end|>
arxiv
@article{wang2024a, title={A Time Series is Worth Five Experts: Heterogeneous Mixture of Experts for Traffic Flow Prediction}, author={Guangyu Wang and Yujie Chen and Ming Gao and Zhiqiao Wu and Jiafu Tang and Jiabi Zhao}, journal={arXiv preprint arXiv:2409.17440}, year={2024}, archivePrefix={arXiv}, eprint={2409.17440}, primaryClass={cs.AI} }
wang2024a
arxiv-662095
2409.17443
Cat-and-Mouse Satellite Dynamics: Divergent Adversarial Reinforcement Learning for Contested Multi-Agent Space Operations
<|reference_start|>Cat-and-Mouse Satellite Dynamics: Divergent Adversarial Reinforcement Learning for Contested Multi-Agent Space Operations: As space becomes increasingly crowded and contested, robust autonomous capabilities for multi-agent environments are gaining critical importance. Current autonomous systems in space primarily rely on optimization-based path planning or long-range orbital maneuvers, which have not yet proven effective in adversarial scenarios where one satellite is actively pursuing another. We introduce Divergent Adversarial Reinforcement Learning (DARL), a two-stage Multi-Agent Reinforcement Learning (MARL) approach designed to train autonomous evasion strategies for satellites engaged with multiple adversarial spacecraft. Our method enhances exploration during training by promoting diverse adversarial strategies, leading to more robust and adaptable evader models. We validate DARL through a cat-and-mouse satellite scenario, modeled as a partially observable multi-agent capture the flag game where two adversarial `cat' spacecraft pursue a single `mouse' evader. DARL's performance is compared against several benchmarks, including an optimization-based satellite path planner, demonstrating its ability to produce highly robust models for adversarial multi-agent space environments.<|reference_end|>
arxiv
@article{mehlman2024cat-and-mouse, title={Cat-and-Mouse Satellite Dynamics: Divergent Adversarial Reinforcement Learning for Contested Multi-Agent Space Operations}, author={Cameron Mehlman and Joseph Abramov and Gregory Falco}, journal={arXiv preprint arXiv:2409.17443}, year={2024}, archivePrefix={arXiv}, eprint={2409.17443}, primaryClass={cs.RO} }
mehlman2024cat-and-mouse
arxiv-662096
2409.17445
The Interplay of Computing, Ethics, and Policy in Brain-Computer Interface Design
<|reference_start|>The Interplay of Computing, Ethics, and Policy in Brain-Computer Interface Design: Brain-computer interfaces (BCIs) connect biological neurons in the brain with external systems like prosthetics and computers. They are increasingly incorporating processing capabilities to analyze and stimulate neural activity, and consequently, pose unique design challenges related to ethics, law, and policy. For the first time, this paper articulates how ethical, legal, and policy considerations can shape BCI architecture design, and how the decisions that architects make constrain or expand the ethical, legal, and policy frameworks that can be applied to them.<|reference_end|>
arxiv
@article{ugur2024the, title={The Interplay of Computing, Ethics, and Policy in Brain-Computer Interface Design}, author={Muhammed Ugur and Raghavendra Pradyumna Pothukuchi and Abhishek Bhattacharjee}, journal={The 1st Workshop on Hot Topics in Ethical Computer Systems, April, 2024}, year={2024}, archivePrefix={arXiv}, eprint={2409.17445}, primaryClass={cs.AR cs.CY} }
ugur2024the
arxiv-662097
2409.17446
Efficient Federated Learning against Heterogeneous and Non-stationary Client Unavailability
<|reference_start|>Efficient Federated Learning against Heterogeneous and Non-stationary Client Unavailability: Addressing intermittent client availability is critical for the real-world deployment of federated learning algorithms. Most prior work either overlooks the potential non-stationarity in the dynamics of client unavailability or requires substantial memory/computation overhead. We study federated learning in the presence of heterogeneous and non-stationary client availability, which may occur when the deployment environments are uncertain or the clients are mobile. The impacts of the heterogeneity and non-stationarity in client unavailability can be significant, as we illustrate using FedAvg, the most widely adopted federated learning algorithm. We propose FedAPM, which includes novel algorithmic structures that (i) compensate for missed computations due to unavailability with only $O(1)$ additional memory and computation with respect to standard FedAvg, and (ii) evenly diffuse local updates within the federated learning system through implicit gossiping, despite being agnostic to non-stationary dynamics. We show that FedAPM converges to a stationary point of even non-convex objectives while achieving the desired linear speedup property. We corroborate our analysis with numerical experiments over diversified client unavailability dynamics on real-world data sets.<|reference_end|>
arxiv
@article{xiang2024efficient, title={Efficient Federated Learning against Heterogeneous and Non-stationary Client Unavailability}, author={Ming Xiang and Stratis Ioannidis and Edmund Yeh and Carlee Joe-Wong and Lili Su}, journal={arXiv preprint arXiv:2409.17446}, year={2024}, archivePrefix={arXiv}, eprint={2409.17446}, primaryClass={cs.DC cs.LG math.OC} }
xiang2024efficient
arxiv-662098
2409.17448
Enhancing Financial Sentiment Analysis with Expert-Designed Hint
<|reference_start|>Enhancing Financial Sentiment Analysis with Expert-Designed Hint: This paper investigates the role of expert-designed hint in enhancing sentiment analysis on financial social media posts. We explore the capability of large language models (LLMs) to empathize with writer perspectives and analyze sentiments. Our findings reveal that expert-designed hint, i.e., pointing out the importance of numbers, significantly improve performances across various LLMs, particularly in cases requiring perspective-taking skills. Further analysis on tweets containing different types of numerical data demonstrates that the inclusion of expert-designed hint leads to notable improvements in sentiment analysis performance, especially for tweets with monetary-related numbers. Our findings contribute to the ongoing discussion on the applicability of Theory of Mind in NLP and open new avenues for improving sentiment analysis in financial domains through the strategic use of expert knowledge.<|reference_end|>
arxiv
@article{chen2024enhancing, title={Enhancing Financial Sentiment Analysis with Expert-Designed Hint}, author={Chung-Chi Chen and Hiroya Takamura and Ichiro Kobayashi and Yusuke Miyao}, journal={arXiv preprint arXiv:2409.17448}, year={2024}, archivePrefix={arXiv}, eprint={2409.17448}, primaryClass={cs.CL} }
chen2024enhancing
arxiv-662099
2409.17451
Study of Subjective and Objective Quality in Super-Resolution Enhanced Broadcast Images on a Novel SR-IQA Dataset
<|reference_start|>Study of Subjective and Objective Quality in Super-Resolution Enhanced Broadcast Images on a Novel SR-IQA Dataset: To display low-quality broadcast content on high-resolution screens in full-screen format, the application of Super-Resolution (SR), a key consumer technology, is essential. Recently, SR methods have been developed that not only increase resolution while preserving the original image information but also enhance the perceived quality. However, evaluating the quality of SR images generated from low-quality sources, such as SR-enhanced broadcast content, is challenging due to the need to consider both distortions and improvements. Additionally, assessing SR image quality without original high-quality sources presents another significant challenge. Unfortunately, there has been a dearth of research specifically addressing the Image Quality Assessment (IQA) of SR images under these conditions. In this work, we introduce a new IQA dataset for SR broadcast images in both 2K and 4K resolutions. We conducted a subjective quality evaluation to obtain the Mean Opinion Score (MOS) for these SR images and performed a comprehensive human study to identify the key factors influencing the perceived quality. Finally, we evaluated the performance of existing IQA metrics on our dataset. This study reveals the limitations of current metrics, highlighting the need for a more robust IQA metric that better correlates with the perceived quality of SR images.<|reference_end|>
arxiv
@article{kim2024study, title={Study of Subjective and Objective Quality in Super-Resolution Enhanced Broadcast Images on a Novel SR-IQA Dataset}, author={Yongrok Kim and Junha Shin and Juhyun Lee and Hyunsuk Ko}, journal={arXiv preprint arXiv:2409.17451}, year={2024}, archivePrefix={arXiv}, eprint={2409.17451}, primaryClass={eess.IV cs.CV} }
kim2024study
arxiv-662100
2409.17452
Description-based Controllable Text-to-Speech with Cross-Lingual Voice Control
<|reference_start|>Description-based Controllable Text-to-Speech with Cross-Lingual Voice Control: We propose a novel description-based controllable text-to-speech (TTS) method with cross-lingual control capability. To address the lack of audio-description paired data in the target language, we combine a TTS model trained on the target language with a description control model trained on another language, which maps input text descriptions to the conditional features of the TTS model. These two models share disentangled timbre and style representations based on self-supervised learning (SSL), allowing for disentangled voice control, such as controlling speaking styles while retaining the original timbre. Furthermore, because the SSL-based timbre and style representations are language-agnostic, combining the TTS and description control models while sharing the same embedding space effectively enables cross-lingual control of voice characteristics. Experiments on English and Japanese TTS demonstrate that our method achieves high naturalness and controllability for both languages, even though no Japanese audio-description pairs are used.<|reference_end|>
arxiv
@article{yamamoto2024description-based, title={Description-based Controllable Text-to-Speech with Cross-Lingual Voice Control}, author={Ryuichi Yamamoto and Yuma Shirahata and Masaya Kawamura and Kentaro Tachibana}, journal={arXiv preprint arXiv:2409.17452}, year={2024}, archivePrefix={arXiv}, eprint={2409.17452}, primaryClass={eess.AS cs.CL cs.LG cs.SD} }
yamamoto2024description-based