corpus_id (string, 7–12 chars) | paper_id (string, 9–16 chars) | title (string, 1–261 chars) | abstract (string, 70–4.02k chars) | source (1 class: arxiv) | bibtex (string, 208–20.9k chars) | citation_key (string, 6–100 chars)
---|---|---|---|---|---|---
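Each record below provides these seven fields in order, one field per `|`-delimited block, with the abstract wrapped in `<|reference_start|>` / `<|reference_end|>` marker tokens. As a rough usage sketch (not part of the original dump), rows with this schema could be loaded via the Hugging Face `datasets` library and the abstract markers stripped as shown; the repository id `user/arxiv-corpus` is a hypothetical placeholder, and Python 3.9+ is assumed for `str.removeprefix`/`str.removesuffix`.

```python
# Sketch only: load rows with the schema above and strip the abstract markers.
# "user/arxiv-corpus" is a hypothetical placeholder, not the real repository id.
from datasets import load_dataset


def clean_abstract(text: str) -> str:
    """Remove the <|reference_start|> / <|reference_end|> wrappers around an abstract."""
    text = text.strip()
    text = text.removeprefix("<|reference_start|>")
    text = text.removesuffix("<|reference_end|>")
    return text.strip()


ds = load_dataset("user/arxiv-corpus", split="train")  # hypothetical repository id
for row in ds.select(range(3)):  # inspect the first three records
    print(row["corpus_id"], row["paper_id"], row["citation_key"])
    print(row["title"])
    print(clean_abstract(row["abstract"])[:200], "...")
```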
arxiv-661801
|
2409.16865
|
Linking in Style: Understanding learned features in deep learning models
|
<|reference_start|>Linking in Style: Understanding learned features in deep learning models: Convolutional neural networks (CNNs) learn abstract features to perform object classification, but understanding these features remains challenging due to difficult-to-interpret results or high computational costs. We propose an automatic method to visualize and systematically analyze learned features in CNNs. Specifically, we introduce a linking network that maps the penultimate layer of a pre-trained classifier to the latent space of a generative model (StyleGAN-XL), thereby enabling an interpretable, human-friendly visualization of the classifier's representations. Our findings indicate a congruent semantic order in both spaces, enabling a direct linear mapping between them. Training the linking network is computationally inexpensive and decoupled from training both the GAN and the classifier. We introduce an automatic pipeline that utilizes such GAN-based visualizations to quantify learned representations by analyzing activation changes in the classifier in the image domain. This quantification allows us to systematically study the learned representations in several thousand units simultaneously and to extract and visualize units selective for specific semantic concepts. Further, we illustrate how our method can be used to quantify and interpret the classifier's decision boundary using counterfactual examples. Overall, our method offers systematic and objective perspectives on learned abstract representations in CNNs. https://github.com/kaschube-lab/LinkingInStyle.git<|reference_end|>
|
arxiv
|
@article{wehrheim2024linking,
title={Linking in Style: Understanding learned features in deep learning models},
author={Maren H. Wehrheim, Pamela Osuna-Vargas, Matthias Kaschube},
journal={arXiv preprint arXiv:2409.16865},
year={2024},
archivePrefix={arXiv},
eprint={2409.16865},
primaryClass={cs.CV}
}
|
wehrheim2024linking
|
arxiv-661802
|
2409.16866
|
Risk-averse learning with delayed feedback
|
<|reference_start|>Risk-averse learning with delayed feedback: In real-world scenarios, the impacts of decisions may not manifest immediately. Taking these delays into account facilitates accurate assessment and management of risk in real-world environments, thereby ensuring the efficacy of strategies. In this paper, we investigate risk-averse learning using Conditional Value at Risk (CVaR) as the risk measure, while incorporating delayed feedback with unknown but bounded delays. We develop two risk-averse learning algorithms that rely on one-point and two-point zeroth-order optimization approaches, respectively. The regret achieved by the algorithms is analyzed in terms of the cumulative delay and the total number of samplings. The results suggest that the two-point risk-averse learning achieves a smaller regret bound than the one-point algorithm. Furthermore, the one-point risk-averse learning algorithm attains sublinear regret under certain delay conditions, and the two-point risk-averse learning algorithm achieves sublinear regret with minimal restrictions on the delay. We provide numerical experiments on a dynamic pricing problem to demonstrate the performance of the proposed algorithms.<|reference_end|>
|
arxiv
|
@article{wang2024risk-averse,
title={Risk-averse learning with delayed feedback},
author={Siyi Wang, Zifan Wang, Karl Henrik Johansson and Sandra Hirche},
journal={arXiv preprint arXiv:2409.16866},
year={2024},
archivePrefix={arXiv},
eprint={2409.16866},
primaryClass={cs.LG math.OC}
}
|
wang2024risk-averse
|
arxiv-661803
|
2409.16867
|
Multi-objective Evolution of Heuristic Using Large Language Model
|
<|reference_start|>Multi-objective Evolution of Heuristic Using Large Language Model: Heuristics are commonly used to tackle diverse search and optimization problems. Designing heuristics usually requires tedious manual crafting with domain knowledge. Recent works have incorporated large language models (LLMs) into automatic heuristic search, leveraging their powerful language and coding capacity. However, existing research focuses on the optimal performance on the target problem as the sole objective, neglecting other criteria such as efficiency and scalability, which are vital in practice. To tackle this challenge, we propose to model heuristic search as a multi-objective optimization problem and consider introducing other practical criteria beyond optimal performance. Due to the complexity of the search space, conventional multi-objective optimization methods struggle to effectively handle multi-objective heuristic search. We propose the first LLM-based multi-objective heuristic search framework, Multi-objective Evolution of Heuristic (MEoH), which integrates LLMs in a zero-shot manner to generate a non-dominated set of heuristics to meet multiple design criteria. We design a new dominance-dissimilarity mechanism for effective population management and selection, which incorporates both code dissimilarity in the search space and dominance in the objective space. MEoH is demonstrated on two well-known combinatorial optimization problems: the online Bin Packing Problem (BPP) and the Traveling Salesman Problem (TSP). Results indicate that a variety of elite heuristics are automatically generated in a single run, offering more trade-off options than existing methods. It successfully achieves competitive or superior performance while improving efficiency by up to 10 times. Moreover, we observe that the multi-objective search introduces novel insights into heuristic design and leads to the discovery of diverse heuristics.<|reference_end|>
|
arxiv
|
@article{yao2024multi-objective,
title={Multi-objective Evolution of Heuristic Using Large Language Model},
author={Shunyu Yao, Fei Liu, Xi Lin, Zhichao Lu, Zhenkun Wang, and Qingfu
Zhang},
journal={arXiv preprint arXiv:2409.16867},
year={2024},
archivePrefix={arXiv},
eprint={2409.16867},
primaryClass={cs.AI}
}
|
yao2024multi-objective
|
arxiv-661804
|
2409.16870
|
Quantifying Visual Properties of GAM Shape Plots: Impact on Perceived Cognitive Load and Interpretability
|
<|reference_start|>Quantifying Visual Properties of GAM Shape Plots: Impact on Perceived Cognitive Load and Interpretability: Generalized Additive Models (GAMs) offer a balance between performance and interpretability in machine learning. The interpretability aspect of GAMs is expressed through shape plots, representing the model's decision-making process. However, the visual properties of these plots, e.g., the number of kinks (the number of local maxima and minima), can impact their complexity and the cognitive load imposed on the viewer, compromising interpretability. Our study, including 57 participants, investigates the relationship between the visual properties of GAM shape plots and the cognitive load they induce. We quantify various visual properties of shape plots and evaluate their alignment with participants' perceived cognitive load, based on 144 plots. Our results indicate that the number of kinks is the most effective metric, explaining 86.4% of the variance in users' ratings. We develop a simple model based on the number of kinks that provides a practical tool for predicting cognitive load, enabling the assessment of one aspect of GAM interpretability without direct user involvement.<|reference_end|>
|
arxiv
|
@article{kruschel2024quantifying,
title={Quantifying Visual Properties of GAM Shape Plots: Impact on Perceived
Cognitive Load and Interpretability},
author={Sven Kruschel, Lasse Bohlen, Julian Rosenberger, Patrick Zschech and
Mathias Kraus},
journal={arXiv preprint arXiv:2409.16870},
year={2024},
archivePrefix={arXiv},
eprint={2409.16870},
primaryClass={cs.HC cs.LG}
}
|
kruschel2024quantifying
|
arxiv-661805
|
2409.16872
|
Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications
|
<|reference_start|>Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications: The popularisation of applying AI in businesses poses significant challenges relating to ethical principles, governance, and legal compliance. Although businesses have embedded AI into their day-to-day processes, they lack a unified approach for mitigating its potential risks. This paper introduces a framework for ensuring that AI is ethical, controllable, viable, and desirable. Balancing these factors ensures the design of a framework that addresses its trade-offs, such as balancing performance against explainability. A successful framework provides practical advice for businesses to meet regulatory requirements in sectors such as finance and healthcare, where it is critical to comply with standards like GDPR and the EU AI Act. Different case studies validate this framework by integrating AI in both academic and practical environments. For instance, large language models are cost-effective alternatives for generating synthetic opinions that emulate attitudes to environmental issues. These case studies demonstrate how having a structured framework could enhance transparency and maintain performance levels, as shown by the alignment between synthetic and expected distributions. This alignment is quantified using metrics like Chi-test scores, normalized mutual information, and Jaccard indexes. Future research should further explore the framework's empirical validation in diverse industrial settings, ensuring the model's scalability and adaptability.<|reference_end|>
|
arxiv
|
@article{lin2024ethical,
title={Ethical and Scalable Automation: A Governance and Compliance Framework
for Business Applications},
author={Haocheng Lin},
journal={arXiv preprint arXiv:2409.16872},
year={2024},
archivePrefix={arXiv},
eprint={2409.16872},
primaryClass={cs.AI cs.LG}
}
|
lin2024ethical
|
arxiv-661806
|
2409.16875
|
Feedforward Controllers from Learned Dynamic Local Model Networks with Application to Excavator Assistance Functions
|
<|reference_start|>Feedforward Controllers from Learned Dynamic Local Model Networks with Application to Excavator Assistance Functions: Complicated first principles modelling and controller synthesis can be prohibitively slow and expensive for high-mix, low-volume products such as hydraulic excavators. Instead, in a data-driven approach, recorded trajectories from the real system can be used to train local model networks (LMNs), for which feedforward controllers are derived via feedback linearization. However, previous works required LMNs without zero dynamics for feedback linearization, which restricts the model structure and thus modelling capacity of LMNs. In this paper, we overcome this restriction by providing a criterion for when feedback linearization of LMNs with zero dynamics yields a valid controller. As a criterion we propose the bounded-input bounded-output stability of the resulting controller. In two additional contributions, we extend this approach to consider measured disturbance signals and multiple inputs and outputs. We illustrate the effectiveness of our contributions in a hydraulic excavator control application with hardware experiments. To this end, we train LMNs from recorded, noisy data and derive feedforward controllers used as part of a leveling assistance system on the excavator. In our experiments, incorporating disturbance signals and multiple inputs and outputs enhances tracking performance of the learned controller. A video of our experiments is available at https://youtu.be/lrrWBx2ASaE.<|reference_end|>
|
arxiv
|
@article{greiser2024feedforward,
title={Feedforward Controllers from Learned Dynamic Local Model Networks with
Application to Excavator Assistance Functions},
author={Leon Greiser, Ozan Demir, Benjamin Hartmann, Henrik Hose, Sebastian
Trimpe},
journal={arXiv preprint arXiv:2409.16875},
year={2024},
archivePrefix={arXiv},
eprint={2409.16875},
primaryClass={eess.SY cs.LG cs.SY}
}
|
greiser2024feedforward
|
arxiv-661807
|
2409.16876
|
Automating Traffic Model Enhancement with AI Research Agent
|
<|reference_start|>Automating Traffic Model Enhancement with AI Research Agent: Developing efficient traffic models is essential for optimizing transportation systems, yet current approaches remain time-intensive and susceptible to human errors due to their reliance on manual processes. Traditional workflows involve exhaustive literature reviews, formula optimization, and iterative testing, leading to inefficiencies in research. In response, we introduce the Traffic Research Agent (TR-Agent), an AI-driven system designed to autonomously develop and refine traffic models through an iterative, closed-loop process. Specifically, we divide the research pipeline into four key stages: idea generation, theory formulation, theory evaluation, and iterative optimization; and construct TR-Agent with four corresponding modules: Idea Generator, Code Generator, Evaluator, and Analyzer. Working in synergy, these modules retrieve knowledge from external resources, generate novel ideas, implement and debug models, and finally assess them on the evaluation datasets. Furthermore, the system continuously refines these models based on iterative feedback, enhancing research efficiency and model performance. Experimental results demonstrate that TR-Agent achieves significant performance improvements across multiple traffic models, including the Intelligent Driver Model (IDM) for car following, the MOBIL lane-changing model, and the Lighthill-Whitham-Richards (LWR) traffic flow model. Additionally, TR-Agent provides detailed explanations for its optimizations, allowing researchers to verify and build upon its improvements easily. This flexibility makes the framework a powerful tool for researchers in transportation and beyond. To further support research and collaboration, we have open-sourced both the code and data used in our experiments, facilitating broader access and enabling continued advancements in the field.<|reference_end|>
|
arxiv
|
@article{guo2024automating,
title={Automating Traffic Model Enhancement with AI Research Agent},
author={Xusen Guo, Xinxi Yang, Mingxing Peng, Hongliang Lu, Meixin Zhu, and
Hai Yang},
journal={arXiv preprint arXiv:2409.16876},
year={2024},
archivePrefix={arXiv},
eprint={2409.16876},
primaryClass={cs.AI}
}
|
guo2024automating
|
arxiv-661808
|
2409.16879
|
GRACE: Generating Socially Appropriate Robot Actions Leveraging LLMs and Human Explanations
|
<|reference_start|>GRACE: Generating Socially Appropriate Robot Actions Leveraging LLMs and Human Explanations: When operating in human environments, robots need to handle complex tasks while both adhering to social norms and accommodating individual preferences. For instance, based on common sense knowledge, a household robot can predict that it should avoid vacuuming during a social gathering, but it may still be uncertain whether it should vacuum before or after having guests. In such cases, integrating common-sense knowledge with human preferences, often conveyed through human explanations, is fundamental yet remains a challenge for existing systems. In this paper, we introduce GRACE, a novel approach that addresses this challenge while generating socially appropriate robot actions. GRACE leverages common sense knowledge from Large Language Models (LLMs), and it integrates this knowledge with human explanations through a generative network architecture. The bidirectional structure of GRACE enables robots to refine and enhance LLM predictions by utilizing human explanations and makes robots capable of generating such explanations for human-specified actions. Our experimental evaluations show that integrating human explanations boosts GRACE's performance, where it outperforms several baselines and provides sensible explanations.<|reference_end|>
|
arxiv
|
@article{dogan2024grace:,
title={GRACE: Generating Socially Appropriate Robot Actions Leveraging LLMs and
Human Explanations},
author={Fethiye Irmak Dogan, Umut Ozyurt, Gizem Cinar and Hatice Gunes},
journal={arXiv preprint arXiv:2409.16879},
year={2024},
archivePrefix={arXiv},
eprint={2409.16879},
primaryClass={cs.RO}
}
|
dogan2024grace:
|
arxiv-661809
|
2409.16882
|
Revisiting Space Mission Planning: A Reinforcement Learning-Guided Approach for Multi-Debris Rendezvous
|
<|reference_start|>Revisiting Space Mission Planning: A Reinforcement Learning-Guided Approach for Multi-Debris Rendezvous: This research introduces a novel application of a masked Proximal Policy Optimization (PPO) algorithm from the field of deep reinforcement learning (RL), for determining the most efficient sequence of space debris visitation, utilizing the Lambert solver as per Izzo's adaptation for individual rendezvous. The aim is to optimize the sequence in which all the given debris should be visited to get the least total time for rendezvous for the entire mission. A neural network (NN) policy is developed, trained on simulated space missions with varying debris fields. After training, the neural network calculates approximately optimal paths using Izzo's adaptation of Lambert maneuvers. Performance is evaluated against standard heuristics in mission planning. The reinforcement learning approach demonstrates a significant improvement in planning efficiency by optimizing the sequence for debris rendezvous, reducing the total mission time by an average of approximately 10.96% and 13.66% compared to the Genetic and Greedy algorithms, respectively. The model on average identifies the most time-efficient sequence for debris visitation across various simulated scenarios with the fastest computational speed. This approach signifies a step forward in enhancing mission planning strategies for space debris clearance.<|reference_end|>
|
arxiv
|
@article{bandyopadhyay2024revisiting,
title={Revisiting Space Mission Planning: A Reinforcement Learning-Guided
Approach for Multi-Debris Rendezvous},
author={Agni Bandyopadhyay and Guenther Waxenegger-Wilfing},
journal={arXiv preprint arXiv:2409.16882},
year={2024},
archivePrefix={arXiv},
eprint={2409.16882},
primaryClass={cs.LG cs.AI cs.RO}
}
|
bandyopadhyay2024revisiting
|
arxiv-661810
|
2409.16884
|
Shifting from endangerment to rebirth in the Artificial Intelligence Age: An Ensemble Machine Learning Approach for Hawrami Text Classification
|
<|reference_start|>Shifting from endangerment to rebirth in the Artificial Intelligence Age: An Ensemble Machine Learning Approach for Hawrami Text Classification: Hawrami, a dialect of Kurdish, is classified as an endangered language as it suffers from the scarcity of data and the gradual loss of its speakers. Natural Language Processing projects can be used to partially compensate for the lack of data for endangered languages/dialects through a variety of approaches, such as machine translation, language model building, and corpora development. Similarly, NLP projects such as text classification also contribute to language documentation. Several text classification studies have been conducted for Kurdish, but they were mainly dedicated to two particular dialects: Sorani (Central Kurdish) and Kurmanji (Northern Kurdish). In this paper, we introduce various text classification models using a dataset of 6,854 articles in Hawrami labeled into 15 categories by two native speakers. We use K-nearest Neighbor (KNN), Linear Support Vector Machine (Linear SVM), Logistic Regression (LR), and Decision Tree (DT) to evaluate how well those methods perform the classification task. The results indicate that the Linear SVM achieves 96% accuracy and outperforms the other approaches.<|reference_end|>
|
arxiv
|
@article{khaksar2024shifting,
title={Shifting from endangerment to rebirth in the Artificial Intelligence
Age: An Ensemble Machine Learning Approach for Hawrami Text Classification},
author={Aram Khaksar and Hossein Hassani},
journal={arXiv preprint arXiv:2409.16884},
year={2024},
archivePrefix={arXiv},
eprint={2409.16884},
primaryClass={cs.CL}
}
|
khaksar2024shifting
|
arxiv-661811
|
2409.16894
|
Wrapped in Anansi's Web: Unweaving the Impacts of Generative-AI Personalization and VR Immersion in Oral Storytelling
|
<|reference_start|>Wrapped in Anansi's Web: Unweaving the Impacts of Generative-AI Personalization and VR Immersion in Oral Storytelling: Oral traditions, vital to cultural identity, are losing relevance among youth due to the dominance of modern media. This study addresses the revitalization of these traditions by reconnecting young people with folklore. We introduce Anansi the Spider VR, a novel virtual space that combines first-person virtual reality (VR) with generative artificial intelligence (Gen-AI)-driven narrative personalization. This space immerses users in the Anansi Spider story, empowering them to influence the narrative as they envision themselves as the `protagonists,' thereby enhancing personal reflection. In a 2 by 2 between-subjects study with 48 participants, we employed a mixed-method approach to measure user engagement and changes in interest, complemented by semi-structured interviews providing qualitative insights into personalization and immersion. Our results indicate that personalization in VR significantly boosts engagement and cultural learning interest. We recommend that future studies using VR and Gen-AI to revitalize oral storytelling prioritize respecting cultural integrity and honoring original storytellers and communities.<|reference_end|>
|
arxiv
|
@article{lau2024wrapped,
title={Wrapped in Anansi's Web: Unweaving the Impacts of Generative-AI
Personalization and VR Immersion in Oral Storytelling},
author={Ka Hei Carrie Lau, Bhada Yun, Samuel Saruba, Efe Bozkir, Enkelejda
Kasneci},
journal={arXiv preprint arXiv:2409.16894},
year={2024},
archivePrefix={arXiv},
eprint={2409.16894},
primaryClass={cs.HC}
}
|
lau2024wrapped
|
arxiv-661812
|
2409.16896
|
Sense of Agency in Closed-loop Muscle Stimulation
|
<|reference_start|>Sense of Agency in Closed-loop Muscle Stimulation: To maintain a user's sense of agency (SoA) when working with a physical motor augmentation device, the actuation must align with the user's intentions. In experiments, this is often achieved using stimulus-response paradigms where the motor augmentation can be optimally timed. However, in the everyday world users primarily act at their own volition. We designed a closed-loop system for motor augmentation using an EEG-based brain-computer interface (BCI) to cue users' volitional finger tapping. Relying on the readiness potentials, the system autonomously cued the finger movement at the time of the intent to interact via electrical muscle stimulation (EMS). The prototype discriminated pre-movement from idle EEG segments with an average F1 score of 0.7. However, we found only weak evidence for a maintained SoA. Still, participants reported a higher level of control when working with the system instead of being passively moved.<|reference_end|>
|
arxiv
|
@article{gehrke2024sense,
title={Sense of Agency in Closed-loop Muscle Stimulation},
author={Lukas Gehrke, Leonie Terfurth, and Klaus Gramann},
journal={arXiv preprint arXiv:2409.16896},
year={2024},
archivePrefix={arXiv},
eprint={2409.16896},
primaryClass={cs.HC}
}
|
gehrke2024sense
|
arxiv-661813
|
2409.16897
|
HVT: A Comprehensive Vision Framework for Learning in Non-Euclidean Space
|
<|reference_start|>HVT: A Comprehensive Vision Framework for Learning in Non-Euclidean Space: Data representation in non-Euclidean spaces has proven effective for capturing hierarchical and complex relationships in real-world datasets. Hyperbolic spaces, in particular, provide efficient embeddings for hierarchical structures. This paper introduces the Hyperbolic Vision Transformer (HVT), a novel extension of the Vision Transformer (ViT) that integrates hyperbolic geometry. While traditional ViTs operate in Euclidean space, our method enhances the self-attention mechanism by leveraging hyperbolic distance and Möbius transformations. This enables more effective modeling of hierarchical and relational dependencies in image data. We present rigorous mathematical formulations, showing how hyperbolic geometry can be incorporated into attention layers, feed-forward networks, and optimization. We demonstrate improved performance on image classification using the ImageNet dataset.<|reference_end|>
|
arxiv
|
@article{fein-ashley2024hvt:,
title={HVT: A Comprehensive Vision Framework for Learning in Non-Euclidean
Space},
author={Jacob Fein-Ashley and Ethan Feng and Minh Pham},
journal={arXiv preprint arXiv:2409.16897},
year={2024},
archivePrefix={arXiv},
eprint={2409.16897},
primaryClass={cs.CV}
}
|
fein-ashley2024hvt:
|
arxiv-661814
|
2409.16898
|
AI-driven View Guidance System in Intra-cardiac Echocardiography Imaging
|
<|reference_start|>AI-driven View Guidance System in Intra-cardiac Echocardiography Imaging: Intra-cardiac Echocardiography (ICE) is a crucial imaging modality used in electrophysiology (EP) and structural heart disease (SHD) interventions, providing real-time, high-resolution views from within the heart. Despite its advantages, effective manipulation of the ICE catheter requires significant expertise, which can lead to inconsistent outcomes, particularly among less experienced operators. To address this challenge, we propose an AI-driven closed-loop view guidance system with human-in-the-loop feedback, designed to assist users in navigating ICE imaging without requiring specialized knowledge. Our method models the relative position and orientation vectors between arbitrary views and clinically defined ICE views in a spatial coordinate system, guiding users on how to manipulate the ICE catheter to transition from the current view to the desired view over time. Operating in a closed-loop configuration, the system continuously predicts and updates the necessary catheter manipulations, ensuring seamless integration into existing clinical workflows. The effectiveness of the proposed system is demonstrated through a simulation-based evaluation, achieving an 89% success rate with the 6532 test dataset, highlighting its potential to improve the accuracy and efficiency of ICE imaging procedures.<|reference_end|>
|
arxiv
|
@article{huh2024ai-driven,
title={AI-driven View Guidance System in Intra-cardiac Echocardiography Imaging},
author={Jaeyoung Huh and Paul Klein and Gareth Funka-Lea and Puneet Sharma and
Ankur Kapoor and Young-Ho Kim},
journal={arXiv preprint arXiv:2409.16898},
year={2024},
archivePrefix={arXiv},
eprint={2409.16898},
primaryClass={cs.AI}
}
|
huh2024ai-driven
|
arxiv-661815
|
2409.16899
|
Robotic Backchanneling in Online Conversation Facilitation: A Cross-Generational Study
|
<|reference_start|>Robotic Backchanneling in Online Conversation Facilitation: A Cross-Generational Study: Japan faces many challenges related to its aging society, including increasing rates of cognitive decline in the population and a shortage of caregivers. Efforts have begun to explore solutions using artificial intelligence (AI), especially socially embodied intelligent agents and robots that can communicate with people. Yet, there has been little research on the compatibility of these agents with older adults in various everyday situations. To this end, we conducted a user study to evaluate a robot that functions as a facilitator for a group conversation protocol designed to prevent cognitive decline. We modified the robot to use backchannelling, a natural human way of speaking, to increase receptiveness of the robot and enjoyment of the group conversation experience. We conducted a cross-generational study with young adults and older adults. Qualitative analyses indicated that younger adults perceived the backchannelling version of the robot as kinder, more trustworthy, and more acceptable than the non-backchannelling robot. Finally, we found that the robot's backchannelling elicited nonverbal backchanneling in older participants.<|reference_end|>
|
arxiv
|
@article{kobuki2024robotic,
title={Robotic Backchanneling in Online Conversation Facilitation: A
Cross-Generational Study},
author={Sota Kobuki, Katie Seaborn, Seiki Tokunaga, Kosuke Fukumori, Shun
Hidaka, Kazuhiro Tamura, Koji Inoue, Tatsuya Kawahara, Mihoko Otake-Mastuura},
journal={arXiv preprint arXiv:2409.16899},
year={2024},
doi={10.1109/RO-MAN57019.2023.10309362},
archivePrefix={arXiv},
eprint={2409.16899},
primaryClass={cs.RO cs.CL cs.HC}
}
|
kobuki2024robotic
|
arxiv-661816
|
2409.16900
|
A Roadmap for Embodied and Social Grounding in LLMs
|
<|reference_start|>A Roadmap for Embodied and Social Grounding in LLMs: The fusion of Large Language Models (LLMs) and robotic systems has led to a transformative paradigm in the robotic field, offering unparalleled capabilities not only in the communication domain but also in skills like multimodal input handling, high-level reasoning, and plan generation. The grounding of LLMs' knowledge in the empirical world has been considered a crucial pathway to exploit the efficiency of LLMs in robotics. Nevertheless, connecting LLMs' representations to the external world with multimodal approaches or with robots' bodies is not enough to let them understand the meaning of the language they are manipulating. Taking inspiration from humans, this work draws attention to three necessary elements for an agent to grasp and experience the world. The roadmap for grounding LLMs is envisaged as an active bodily system serving as the reference point for experiencing the environment, a temporally structured experience for a coherent, self-related interaction with the external world, and social skills to acquire a common-grounded shared experience.<|reference_end|>
|
arxiv
|
@article{incao2024a,
title={A Roadmap for Embodied and Social Grounding in LLMs},
author={Sara Incao, Carlo Mazzola, Giulia Belgiovine, Alessandra Sciutti},
journal={arXiv preprint arXiv:2409.16900},
year={2024},
archivePrefix={arXiv},
eprint={2409.16900},
primaryClass={cs.RO cs.AI cs.CL cs.HC}
}
|
incao2024a
|
arxiv-661817
|
2409.16902
|
Towards Underwater Camouflaged Object Tracking: An Experimental Evaluation of SAM and SAM 2
|
<|reference_start|>Towards Underwater Camouflaged Object Tracking: An Experimental Evaluation of SAM and SAM 2: Over the past decade, significant progress has been made in visual object tracking, largely due to the availability of large-scale training datasets. However, existing tracking datasets are primarily focused on open-air scenarios, which greatly limits the development of object tracking in underwater environments. To address this issue, we take a step forward by proposing the first large-scale underwater camouflaged object tracking dataset, namely UW-COT. Based on the proposed dataset, this paper presents an experimental evaluation of several advanced visual object tracking methods and the latest advancements in image and video segmentation. Specifically, we compare the performance of the Segment Anything Model (SAM) and its updated version, SAM 2, in challenging underwater environments. Our findings highlight the improvements in SAM 2 over SAM, demonstrating its enhanced capability to handle the complexities of underwater camouflaged objects. Compared to current advanced visual object tracking methods, the latest video segmentation foundation model SAM 2 also exhibits significant advantages, providing valuable insights into the development of more effective tracking technologies for underwater scenarios. The dataset will be accessible at https://github.com/983632847/Awesome-Multimodal-Object-Tracking.<|reference_end|>
|
arxiv
|
@article{zhang2024towards,
title={Towards Underwater Camouflaged Object Tracking: An Experimental
Evaluation of SAM and SAM 2},
author={Chunhui Zhang, Li Liu, Guanjie Huang, Hao Wen, Xi Zhou, Yanfeng Wang},
journal={arXiv preprint arXiv:2409.16902},
year={2024},
archivePrefix={arXiv},
eprint={2409.16902},
primaryClass={cs.CV cs.AI}
}
|
zhang2024towards
|
arxiv-661818
|
2409.16904
|
Discriminative Anchor Learning for Efficient Multi-view Clustering
|
<|reference_start|>Discriminative Anchor Learning for Efficient Multi-view Clustering: Multi-view clustering aims to study the complementary information across views and discover the underlying structure. To reduce the relatively high computational cost of existing approaches, anchor-based methods have been presented recently. Even with acceptable clustering performance, these methods tend to map the original representation from multiple views into a fixed shared graph based on the original dataset. However, most studies ignore the discriminative property of the learned anchors, which ruins the representation capability of the built model. Moreover, the complementary information among anchors across views is not guaranteed when the shared anchor graph is simply learned without considering the quality of view-specific anchors. In this paper, we propose discriminative anchor learning for multi-view clustering (DALMC) to handle the above issues. We learn discriminative view-specific feature representations according to the original dataset and build anchors from different views based on these representations, which increases the quality of the shared anchor graph. The discriminative feature learning and consensus anchor graph construction are integrated into a unified framework to improve each other and realize the refinement. The optimal anchors from multiple views and the consensus anchor graph are learned with orthogonal constraints. We give an iterative algorithm to deal with the formulated problem. Extensive experiments on different datasets show the effectiveness and efficiency of our method compared with other methods.<|reference_end|>
|
arxiv
|
@article{qin2024discriminative,
title={Discriminative Anchor Learning for Efficient Multi-view Clustering},
author={Yalan Qin and Nan Pu and Hanzhou Wu and Nicu Sebe},
journal={arXiv preprint arXiv:2409.16904},
year={2024},
archivePrefix={arXiv},
eprint={2409.16904},
primaryClass={cs.LG cs.AI}
}
|
qin2024discriminative
|
arxiv-661819
|
2409.16907
|
An Adaptive Screen-Space Meshing Approach for Normal Integration
|
<|reference_start|>An Adaptive Screen-Space Meshing Approach for Normal Integration: Reconstructing surfaces from normals is a key component of photometric stereo. This work introduces an adaptive surface triangulation in the image domain and afterwards performs the normal integration on a triangle mesh. Our key insight is that surface curvature can be computed from normals. Based on the curvature, we identify flat areas and aggregate pixels into triangles. The approximation quality is controlled by a single user parameter facilitating a seamless generation of low- to high-resolution meshes. Compared to pixel grids, our triangle meshes adapt locally to surface details and allow for a sparser representation. Our new mesh-based formulation of the normal integration problem is strictly derived from discrete differential geometry and leads to well-conditioned linear systems. Results on real and synthetic data show that 10 to 100 times fewer vertices are required than pixels. Experiments suggest that this sparsity translates into a sublinear runtime in the number of pixels. For 64 MP normal maps, our meshing-first approach generates and integrates meshes in minutes while pixel-based approaches require hours just for the integration.<|reference_end|>
|
arxiv
|
@article{heep2024an,
title={An Adaptive Screen-Space Meshing Approach for Normal Integration},
author={Moritz Heep and Eduard Zell},
journal={arXiv preprint arXiv:2409.16907},
year={2024},
doi={10.1007/978-3-031-72920-1_25},
archivePrefix={arXiv},
eprint={2409.16907},
primaryClass={cs.CV}
}
|
heep2024an
|
arxiv-661820
|
2409.16909
|
Enhancing Temporal Sensitivity and Reasoning for Time-Sensitive Question Answering
|
<|reference_start|>Enhancing Temporal Sensitivity and Reasoning for Time-Sensitive Question Answering: Time-Sensitive Question Answering (TSQA) demands the effective utilization of specific temporal contexts, encompassing multiple time-evolving facts, to address time-sensitive questions. This necessitates not only the parsing of temporal information within questions but also the identification and understanding of time-evolving facts to generate accurate answers. However, current large language models still have limited sensitivity to temporal information and inadequate temporal reasoning capabilities. In this paper, we propose a novel framework that enhances temporal awareness and reasoning through Temporal Information-Aware Embedding and Granular Contrastive Reinforcement Learning. Experimental results on four TSQA datasets demonstrate that our framework significantly outperforms existing LLMs in TSQA tasks, marking a step forward in bridging the performance gap between machine and human temporal understanding and reasoning.<|reference_end|>
|
arxiv
|
@article{yang2024enhancing,
title={Enhancing Temporal Sensitivity and Reasoning for Time-Sensitive Question
Answering},
author={Wanqi Yang, Yanda Li, Meng Fang, Ling Chen},
journal={arXiv preprint arXiv:2409.16909},
year={2024},
archivePrefix={arXiv},
eprint={2409.16909},
primaryClass={cs.CL cs.AI}
}
|
yang2024enhancing
|
arxiv-661821
|
2409.16911
|
Pruning Multilingual Large Language Models for Multilingual Inference
|
<|reference_start|>Pruning Multilingual Large Language Models for Multilingual Inference: Multilingual large language models (MLLMs), trained on multilingual balanced data, demonstrate better zero-shot learning performance in non-English languages compared to large language models trained on English-dominant data. However, the disparity in performance between English and non-English languages remains a challenge yet to be fully addressed. A distinctive characteristic of MLLMs is their high-quality translation capabilities, indicating an acquired proficiency in aligning between languages. This study explores how to enhance the zero-shot performance of MLLMs in non-English languages by leveraging their alignment capability between English and non-English languages. To achieve this, we first analyze the behavior of MLLMs when performing translation and reveal that there are large magnitude features that play a critical role in the translation process. Inspired by these findings, we retain the weights associated with operations involving the large magnitude features and prune other weights to force MLLMs to rely on these features for tasks beyond translation. We empirically demonstrate that this pruning strategy can enhance the MLLMs' performance in non-English languages.<|reference_end|>
|
arxiv
|
@article{kim2024pruning,
title={Pruning Multilingual Large Language Models for Multilingual Inference},
author={Hwichan Kim, Jun Suzuki, Tosho Hirasawa, Mamoru Komachi},
journal={arXiv preprint arXiv:2409.16911},
year={2024},
archivePrefix={arXiv},
eprint={2409.16911},
primaryClass={cs.CL}
}
|
kim2024pruning
|
arxiv-661822
|
2409.16913
|
Tell Me What You Don't Know: Enhancing Refusal Capabilities of Role-Playing Agents via Representation Space Analysis and Editing
|
<|reference_start|>Tell Me What You Don't Know: Enhancing Refusal Capabilities of Role-Playing Agents via Representation Space Analysis and Editing: Role-Playing Agents (RPAs) have shown remarkable performance in various applications, yet they often struggle to recognize and appropriately respond to hard queries that conflict with their role-play knowledge. To investigate RPAs' performance when faced with different types of conflicting requests, we develop an evaluation benchmark that includes contextual knowledge conflicting requests, parametric knowledge conflicting requests, and non-conflicting requests to assess RPAs' ability to identify conflicts and refuse to answer appropriately without over-refusing. Through extensive evaluation, we find that most RPAs exhibit significant performance gaps across different types of conflicting requests. To elucidate the reasons, we conduct an in-depth representation-level analysis of RPAs under various conflict scenarios. Our findings reveal the existence of rejection regions and direct response regions within the model's forwarding representation, which influence the RPA's final response behavior. Therefore, we introduce a lightweight representation editing approach that conveniently shifts conflicting requests to the rejection region, thereby enhancing the model's refusal accuracy. The experimental results validate the effectiveness of our editing method, improving RPAs' ability to refuse conflicting requests while maintaining their general role-playing capabilities.<|reference_end|>
|
arxiv
|
@article{liu2024tell,
title={Tell Me What You Don't Know: Enhancing Refusal Capabilities of
Role-Playing Agents via Representation Space Analysis and Editing},
author={Wenhao Liu, Siyu An, Junru Lu, Muling Wu, Tianlong Li, Xiaohua Wang,
Xiaoqing Zheng, Di Yin, Xing Sun, Xuanjing Huang},
journal={arXiv preprint arXiv:2409.16913},
year={2024},
archivePrefix={arXiv},
eprint={2409.16913},
primaryClass={cs.AI}
}
|
liu2024tell
|
arxiv-661823
|
2409.16914
|
Zero-Shot Detection of LLM-Generated Text using Token Cohesiveness
|
<|reference_start|>Zero-Shot Detection of LLM-Generated Text using Token Cohesiveness: The increasing capability and widespread usage of large language models (LLMs) highlight the desirability of automatic detection of LLM-generated text. Zero-shot detectors, due to their training-free nature, have received considerable attention and notable success. In this paper, we identify a new feature, token cohesiveness, that is useful for zero-shot detection, and we demonstrate that LLM-generated text tends to exhibit higher token cohesiveness than human-written text. Based on this observation, we devise TOCSIN, a generic dual-channel detection paradigm that uses token cohesiveness as a plug-and-play module to improve existing zero-shot detectors. To calculate token cohesiveness, TOCSIN only requires a few rounds of random token deletion and semantic difference measurement, making it particularly suitable for a practical black-box setting where the source model used for generation is not accessible. Extensive experiments with four state-of-the-art base detectors on various datasets, source models, and evaluation settings demonstrate the effectiveness and generality of the proposed approach. Code available at: \url{https://github.com/Shixuan-Ma/TOCSIN}.<|reference_end|>
|
arxiv
|
@article{ma2024zero-shot,
title={Zero-Shot Detection of LLM-Generated Text using Token Cohesiveness},
author={Shixuan Ma and Quan Wang},
journal={arXiv preprint arXiv:2409.16914},
year={2024},
archivePrefix={arXiv},
eprint={2409.16914},
primaryClass={cs.CL}
}
|
ma2024zero-shot
|
arxiv-661824
|
2409.16915
|
Let's Make a Splan: Risk-Aware Trajectory Optimization in a Normalized Gaussian Splat
|
<|reference_start|>Let's Make a Splan: Risk-Aware Trajectory Optimization in a Normalized Gaussian Splat: Neural Radiance Fields and Gaussian Splatting have transformed the field of computer vision by enabling photo-realistic representation of complex scenes. Despite this success, they have seen only limited use in real-world robotics tasks such as trajectory optimization. Two key factors have contributed to this limited success. First, it is challenging to reason about collisions in radiance models. Second, it is difficult to perform inference of radiance models fast enough for real-time trajectory synthesis. This paper addresses these challenges by proposing SPLANNING, a risk-aware trajectory optimizer that operates in a Gaussian Splatting model. This paper first derives a method for rigorously upper-bounding the probability of collision between a robot and a radiance field. Second, this paper introduces a normalized reformulation of Gaussian Splatting that enables the efficient computation of the collision bound in a Gaussian Splat. Third, a method is presented to optimize trajectories while avoiding collisions with a scene represented by a Gaussian Splat. Experiments demonstrate that SPLANNING outperforms state-of-the-art methods in generating collision-free trajectories in highly cluttered environments. The proposed system is also tested on a real-world robot manipulator. A project page is available at https://roahmlab.github.io/splanning.<|reference_end|>
|
arxiv
|
@article{michaux2024let's,
title={Let's Make a Splan: Risk-Aware Trajectory Optimization in a Normalized
Gaussian Splat},
author={Jonathan Michaux, Seth Isaacson, Challen Enninful Adu, Adam Li, Rahul
Kashyap Swayampakula, Parker Ewen, Sean Rice, Katherine A. Skinner, and Ram
Vasudevan},
journal={arXiv preprint arXiv:2409.16915},
year={2024},
archivePrefix={arXiv},
eprint={2409.16915},
primaryClass={cs.RO}
}
|
michaux2024let's
|
arxiv-661825
|
2409.16919
|
Running Cloud-native Workloads on HPC with High-Performance Kubernetes
|
<|reference_start|>Running Cloud-native Workloads on HPC with High-Performance Kubernetes: The escalating complexity of applications and services encourages a shift towards higher-level data processing pipelines that integrate both Cloud-native and HPC steps into the same workflow. Cloud providers and HPC centers typically provide both execution platforms on separate resources. In this paper we explore a more practical design that enables running unmodified Cloud-native workloads directly on the main HPC cluster, avoiding resource partitioning and retaining the HPC center's existing job management and accounting policies.<|reference_end|>
|
arxiv
|
@article{chazapis2024running,
title={Running Cloud-native Workloads on HPC with High-Performance Kubernetes},
author={Antony Chazapis, Evangelos Maliaroudakis, Fotis Nikolaidis, Manolis
Marazakis, Angelos Bilas},
journal={arXiv preprint arXiv:2409.16919},
year={2024},
archivePrefix={arXiv},
eprint={2409.16919},
primaryClass={cs.DC}
}
|
chazapis2024running
|
arxiv-661826
|
2409.16920
|
Cross-lingual Speech Emotion Recognition: Humans vs Self-Supervised Models
|
<|reference_start|>Cross-lingual Speech Emotion Recognition: Humans vs Self-Supervised Models: Utilizing Self-Supervised Learning (SSL) models for Speech Emotion Recognition (SER) has proven effective, yet limited research has explored cross-lingual scenarios. This study presents a comparative analysis between human performance and SSL models, beginning with a layer-wise analysis and an exploration of parameter-efficient fine-tuning strategies in monolingual, cross-lingual, and transfer learning contexts. We further compare the SER ability of models and humans at both utterance- and segment-levels. Additionally, we investigate the impact of dialect on cross-lingual SER through human evaluation. Our findings reveal that models, with appropriate knowledge transfer, can adapt to the target language and achieve performance comparable to native speakers. We also demonstrate the significant effect of dialect on SER for individuals without prior linguistic and paralinguistic background. Moreover, both humans and models exhibit distinct behaviors across different emotions. These results offer new insights into the cross-lingual SER capabilities of SSL models, underscoring both their similarities to and differences from human emotion perception.<|reference_end|>
|
arxiv
|
@article{han2024cross-lingual,
title={Cross-lingual Speech Emotion Recognition: Humans vs. Self-Supervised
Models},
author={Zhichen Han, Tianqi Geng, Hui Feng, Jiahong Yuan, Korin Richmond,
Yuanchao Li},
journal={arXiv preprint arXiv:2409.16920},
year={2024},
archivePrefix={arXiv},
eprint={2409.16920},
primaryClass={eess.AS cs.AI cs.CL cs.HC cs.SD}
}
|
han2024cross-lingual
|
arxiv-661827
|
2409.16921
|
Moner: Motion Correction in Undersampled Radial MRI with Unsupervised Neural Representation
|
<|reference_start|>Moner: Motion Correction in Undersampled Radial MRI with Unsupervised Neural Representation: Motion correction (MoCo) in radial MRI is a challenging problem due to the unpredictability of the subject's motion. Current state-of-the-art (SOTA) MoCo algorithms often use extensive high-quality MR images to pre-train neural networks, obtaining excellent reconstructions. However, the need for large-scale datasets significantly increases costs and limits model generalization. In this work, we propose Moner, an unsupervised MoCo method that jointly solves for artifact-free MR images and accurate motion from undersampled, rigid motion-corrupted k-space data, without requiring training data. Our core idea is to leverage the continuous prior of implicit neural representation (INR) to constrain this ill-posed inverse problem, enabling ideal solutions. Specifically, we incorporate a quasi-static motion model into the INR, granting it the ability to correct the subject's motion. To stabilize model optimization, we reformulate radial MRI as a back-projection problem using the Fourier-slice theorem. Additionally, we propose a novel coarse-to-fine hash encoding strategy, significantly enhancing MoCo accuracy. Experiments on multiple MRI datasets show that our Moner achieves performance comparable to SOTA MoCo techniques on in-domain data, while demonstrating significant improvements on out-of-domain data.<|reference_end|>
|
arxiv
|
@article{wu2024moner:,
title={Moner: Motion Correction in Undersampled Radial MRI with Unsupervised
Neural Representation},
author={Qing Wu, Chenhe Du, XuanYu Tian, Jingyi Yu, Yuyao Zhang, Hongjiang Wei},
journal={arXiv preprint arXiv:2409.16921},
year={2024},
archivePrefix={arXiv},
eprint={2409.16921},
primaryClass={eess.IV cs.CV}
}
|
wu2024moner:
|
arxiv-661828
|
2409.16922
|
Decomposition of Equivariant Maps via Invariant Maps: Application to Universal Approximation under Symmetry
|
<|reference_start|>Decomposition of Equivariant Maps via Invariant Maps: Application to Universal Approximation under Symmetry: In this paper, we develop a theory about the relationship between invariant and equivariant maps with regard to a group $G$. We then leverage this theory in the context of deep neural networks with group symmetries in order to obtain novel insight into their mechanisms. More precisely, we establish a one-to-one relationship between equivariant maps and certain invariant maps. This allows us to reduce arguments for equivariant maps to those for invariant maps and vice versa. As an application, we propose a construction of universal equivariant architectures built from universal invariant networks. We, in turn, explain how the universal architectures arising from our construction differ from standard equivariant architectures known to be universal. Furthermore, we explore the complexity, in terms of the number of free parameters, of our models, and discuss the relation between invariant and equivariant networks' complexity. Finally, we also give an approximation rate for $G$-equivariant deep neural networks with ReLU activation functions for a finite group $G$.<|reference_end|>
|
arxiv
|
@article{sannai2024decomposition,
title={Decomposition of Equivariant Maps via Invariant Maps: Application to
Universal Approximation under Symmetry},
author={Akiyoshi Sannai, Yuuki Takai, Matthieu Cordonnier},
journal={Transactions on Machine Learning Research, 2024},
year={2024},
archivePrefix={arXiv},
eprint={2409.16922},
primaryClass={cs.LG}
}
|
sannai2024decomposition
|
arxiv-661829
|
2409.16923
|
AI-assisted Gaze Detection for Proctoring Online Exams
|
<|reference_start|>AI-assisted Gaze Detection for Proctoring Online Exams: For high-stakes online exams, it is important to detect potential rule violations to ensure the security of the test. In this study, we investigate the task of detecting whether test takers are looking away from the screen, as such behavior could be an indication that the test taker is consulting external resources. For asynchronous proctoring, the exam videos are recorded and reviewed by the proctors. However, when the length of the exam is long, it could be tedious for proctors to watch entire exam videos to determine the exact moments when test takers look away. We present an AI-assisted gaze detection system, which allows proctors to navigate between different video frames and discover video frames where the test taker is looking in similar directions. The system enables proctors to work more effectively to identify suspicious moments in videos. An evaluation framework is proposed to evaluate the system against human-only and ML-only proctoring, and a user study is conducted to gather feedback from proctors, aiming to demonstrate the effectiveness of the system.<|reference_end|>
|
arxiv
|
@article{shih2024ai-assisted,
title={AI-assisted Gaze Detection for Proctoring Online Exams},
author={Yong-Siang Shih, Zach Zhao, Chenhao Niu, Bruce Iberg, James Sharpnack,
Mirza Basim Baig},
journal={arXiv preprint arXiv:2409.16923},
year={2024},
archivePrefix={arXiv},
eprint={2409.16923},
primaryClass={cs.AI cs.HC}
}
|
shih2024ai-assisted
|
arxiv-661830
|
2409.16925
|
Game4Loc: A UAV Geo-Localization Benchmark from Game Data
|
<|reference_start|>Game4Loc: A UAV Geo-Localization Benchmark from Game Data: Vision-based geo-localization technology for UAVs, serving as a secondary source of GPS information in addition to the global navigation satellite systems (GNSS), can still operate independently in GPS-denied environments. Recent deep learning based methods formulate this as a task of image matching and retrieval. By retrieving drone-view images from a geo-tagged satellite image database, approximate localization information can be obtained. However, due to high costs and privacy concerns, it is usually difficult to obtain large quantities of drone-view images from a continuous area. Existing drone-view datasets are mostly composed of small-scale aerial photography with a strong assumption that there exists a perfect one-to-one aligned reference image for any query, leaving a significant gap from the practical localization scenario. In this work, we construct a large-range contiguous area UAV geo-localization dataset named GTA-UAV, featuring multiple flight altitudes, attitudes, scenes, and targets using modern computer games. Based on this dataset, we introduce a more practical UAV geo-localization task including partial matches of cross-view paired data, and expand the image-level retrieval to the actual localization in terms of distance (meters). For the construction of drone-view and satellite-view pairs, we adopt a weight-based contrastive learning approach, which allows for effective learning while avoiding additional post-processing matching steps. Experiments demonstrate the effectiveness of our data and training method for UAV geo-localization, as well as the generalization capabilities to real-world scenarios.<|reference_end|>
|
arxiv
|
@article{ji2024game4loc:,
title={Game4Loc: A UAV Geo-Localization Benchmark from Game Data},
author={Yuxiang Ji, Boyong He, Zhuoyue Tan, Liaoni Wu},
journal={arXiv preprint arXiv:2409.16925},
year={2024},
archivePrefix={arXiv},
eprint={2409.16925},
primaryClass={cs.CV}
}
|
ji2024game4loc:
|
arxiv-661831
|
2409.16928
|
Quantum-Classical Sentiment Analysis
|
<|reference_start|>Quantum-Classical Sentiment Analysis: In this study, we initially investigate the application of a hybrid classical-quantum classifier (HCQC) for sentiment analysis, comparing its performance against the classical CPLEX classifier and the Transformer architecture. Our findings indicate that while the HCQC underperforms relative to the Transformer in terms of classification accuracy, it requires significantly less time to converge to a reasonably good approximate solution. This experiment also reveals a critical bottleneck in the HCQC, whose architecture is partially undisclosed as it is proprietary to D-Wave. To address this limitation, we propose a novel algorithm based on the algebraic decomposition of QUBO models, which enhances the time the quantum processing unit can allocate to problem-solving tasks.<|reference_end|>
|
arxiv
|
@article{bifulco2024quantum-classical,
title={Quantum-Classical Sentiment Analysis},
author={Mario Bifulco and Luca Roversi},
journal={arXiv preprint arXiv:2409.16928},
year={2024},
archivePrefix={arXiv},
eprint={2409.16928},
primaryClass={cs.AI}
}
|
bifulco2024quantum-classical
|
arxiv-661832
|
2409.16934
|
Investigating OCR-Sensitive Neurons to Improve Entity Recognition in Historical Documents
|
<|reference_start|>Investigating OCR-Sensitive Neurons to Improve Entity Recognition in Historical Documents: This paper investigates the presence of OCR-sensitive neurons within the Transformer architecture and their influence on named entity recognition (NER) performance on historical documents. By analysing neuron activation patterns in response to clean and noisy text inputs, we identify and then neutralise OCR-sensitive neurons to improve model performance. Based on two open access large language models (Llama2 and Mistral), experiments demonstrate the existence of OCR-sensitive regions and show improvements in NER performance on historical newspapers and classical commentaries, highlighting the potential of targeted neuron modulation to improve models' performance on noisy text.<|reference_end|>
|
arxiv
|
@article{boros2024investigating,
title={Investigating OCR-Sensitive Neurons to Improve Entity Recognition in
Historical Documents},
author={Emanuela Boros and Maud Ehrmann},
journal={arXiv preprint arXiv:2409.16934},
year={2024},
archivePrefix={arXiv},
eprint={2409.16934},
primaryClass={cs.CL cs.AI}
}
|
boros2024investigating
|
arxiv-661833
|
2409.16936
|
Tactile Perception of Electroadhesion: Effect of DC versus AC Stimulation and Finger Moisture
|
<|reference_start|>Tactile Perception of Electroadhesion: Effect of DC versus AC Stimulation and Finger Moisture: Electroadhesion has emerged as a viable technique for displaying tactile feedback on touch surfaces, particularly capacitive touchscreens found in smartphones and tablets. This involves applying a voltage signal to the conductive layer of the touchscreen to generate tactile sensations on the fingerpads of users. In our investigation, we explore the tactile perception of electroadhesion under DC and AC stimulations. Our tactile perception experiments with 10 participants demonstrate a significantly lower voltage detection threshold for AC signals compared to their DC counterparts. This discrepancy is elucidated by the underlying electro-mechanical interactions between the finger and the voltage-induced touchscreen and considering the response of mechanoreceptors in the fingerpad to electrostatic forces generated by electroadhesion. Additionally, our study highlights the impact of moisture on electroadhesive tactile perception. Participants with moist fingers exhibited markedly higher threshold levels. Our electrical impedance measurements show a substantial reduction in impedance magnitude when sweat is present at the finger-touchscreen interface, indicating increased conductivity. These findings not only contribute to our understanding of tactile perception under electroadhesion but also shed light on the underlying physics. In this regard, the results of this study extend beyond mobile devices to encompass other applications of this technology, including robotics, automation, space missions, and textiles.<|reference_end|>
|
arxiv
|
@article{aliabbasi2024tactile,
title={Tactile Perception of Electroadhesion: Effect of DC versus AC
Stimulation and Finger Moisture},
author={Easa AliAbbasi, Muhammad Muzammil, Omer Sirin, Philippe Lefèvre,
Ørjan Grøttem Martinsen, and Cagatay Basdogan},
journal={arXiv preprint arXiv:2409.16936},
year={2024},
doi={10.1109/TOH.2024.3441670},
archivePrefix={arXiv},
eprint={2409.16936},
primaryClass={cs.HC}
}
|
aliabbasi2024tactile
|
arxiv-661834
|
2409.16937
|
Semi-Supervised Cognitive State Classification from Speech with Multi-View Pseudo-Labeling
|
<|reference_start|>Semi-Supervised Cognitive State Classification from Speech with Multi-View Pseudo-Labeling: The lack of labeled data is a common challenge in speech classification tasks, particularly those requiring extensive subjective assessment, such as cognitive state classification. In this work, we propose a Semi-Supervised Learning (SSL) framework, introducing a novel multi-view pseudo-labeling method that leverages both acoustic and linguistic characteristics to select the most confident data for training the classification model. Acoustically, unlabeled data are compared to labeled data using the Frechet audio distance, calculated from embeddings generated by multiple audio encoders. Linguistically, large language models are prompted to revise automatic speech recognition transcriptions and predict labels based on our proposed task-specific knowledge. High-confidence data are identified when pseudo-labels from both sources align, while mismatches are treated as low-confidence data. A bimodal classifier is then trained to iteratively label the low-confidence data until a predefined criterion is met. We evaluate our SSL framework on emotion recognition and dementia detection tasks. Experimental results demonstrate that our method achieves competitive performance compared to fully supervised learning using only 30% of the labeled data and significantly outperforms two selected baselines.<|reference_end|>
|
arxiv
|
@article{li2024semi-supervised,
title={Semi-Supervised Cognitive State Classification from Speech with
Multi-View Pseudo-Labeling},
author={Yuanchao Li, Zixing Zhang, Jing Han, Peter Bell, Catherine Lai},
journal={arXiv preprint arXiv:2409.16937},
year={2024},
archivePrefix={arXiv},
eprint={2409.16937},
primaryClass={eess.AS cs.AI cs.CL cs.MM cs.SD}
}
|
li2024semi-supervised
|
arxiv-661835
|
2409.16938
|
Generative Object Insertion in Gaussian Splatting with a Multi-View Diffusion Model
|
<|reference_start|>Generative Object Insertion in Gaussian Splatting with a Multi-View Diffusion Model: Generating and inserting new objects into 3D content is a compelling approach for achieving versatile scene recreation. Existing methods, which rely on SDS optimization or single-view inpainting, often struggle to produce high-quality results. To address this, we propose a novel method for object insertion in 3D content represented by Gaussian Splatting. Our approach introduces a multi-view diffusion model, dubbed MVInpainter, which is built upon a pre-trained stable video diffusion model to facilitate view-consistent object inpainting. Within MVInpainter, we incorporate a ControlNet-based conditional injection module to enable controlled and more predictable multi-view generation. After generating the multi-view inpainted results, we further propose a mask-aware 3D reconstruction technique to refine Gaussian Splatting reconstruction from these sparse inpainted views. By leveraging these techniques, our approach yields diverse results, ensures view-consistent and harmonious insertions, and produces better object quality. Extensive experiments demonstrate that our approach outperforms existing methods.<|reference_end|>
|
arxiv
|
@article{zhong2024generative,
title={Generative Object Insertion in Gaussian Splatting with a Multi-View
Diffusion Model},
author={Hongliang Zhong, Can Wang, Jingbo Zhang, Jing Liao},
journal={arXiv preprint arXiv:2409.16938},
year={2024},
archivePrefix={arXiv},
eprint={2409.16938},
primaryClass={cs.CV cs.AI cs.GR}
}
|
zhong2024generative
|
arxiv-661836
|
2409.16940
|
Going Beyond U-Net: Assessing Vision Transformers for Semantic Segmentation in Microscopy Image Analysis
|
<|reference_start|>Going Beyond U-Net: Assessing Vision Transformers for Semantic Segmentation in Microscopy Image Analysis: Segmentation is a crucial step in microscopy image analysis. Numerous approaches have been developed over the past years, ranging from classical segmentation algorithms to advanced deep learning models. While U-Net remains one of the most popular and well-established models for biomedical segmentation tasks, recently developed transformer-based models promise to enhance the segmentation process of microscopy images. In this work, we assess the efficacy of transformers, including UNETR, the Segment Anything Model, and Swin-UPerNet, and compare them with the well-established U-Net model across various image modalities such as electron microscopy, brightfield, histopathology, and phase-contrast. Our evaluation identifies several limitations in the original Swin Transformer model, which we address through architectural modifications to optimise its performance. The results demonstrate that these modifications improve segmentation performance compared to the classical U-Net model and the unmodified Swin-UPerNet. This comparative analysis highlights the promise of transformer models for advancing biomedical image segmentation. It demonstrates that their efficiency and applicability can be improved with careful modifications, facilitating their future use in microscopy image analysis tools.<|reference_end|>
|
arxiv
|
@article{tsiporenko2024going,
title={Going Beyond U-Net: Assessing Vision Transformers for Semantic
Segmentation in Microscopy Image Analysis},
author={Illia Tsiporenko, Pavel Chizhov, Dmytro Fishman},
journal={arXiv preprint arXiv:2409.16940},
year={2024},
archivePrefix={arXiv},
eprint={2409.16940},
primaryClass={eess.IV cs.CV}
}
|
tsiporenko2024going
|
arxiv-661837
|
2409.16942
|
Performance assessment of ADAS in a representative subset of critical traffic situations
|
<|reference_start|>Performance assessment of ADAS in a representative subset of critical traffic situations: As a variety of automated collision prevention systems gain presence within personal vehicles, rating and differentiating the automated safety performance of car models has become increasingly important for consumers, manufacturers, and insurers. In 2023, Swiss Re and partners initiated an eight-month long vehicle testing campaign conducted on a recognized UNECE type approval authority and Euro NCAP accredited proving ground in Germany. The campaign exposed twelve mass-produced vehicle models and one prototype vehicle fitted with collision prevention systems to a selection of safety-critical traffic scenarios representative of United States and European Union accident landscape. In this paper, we compare and evaluate the relative safety performance of these thirteen collision prevention systems (hardware and software stack) as demonstrated by this testing campaign. We first introduce a new scoring system which represents a test system's predicted impact on overall real-world collision frequency and reduction of collision impact energy, weighted based on the real-world relevance of the test scenario. Next, we introduce a novel metric that quantifies the realism of the protocol and confirm that our test protocol is a plausible representation of real-world driving. Finally, we find that the prototype system in its pre-release state outperforms the mass-produced (post-consumer-release) vehicles in the majority of the tested scenarios on the test track.<|reference_end|>
|
arxiv
|
@article{di lillo2024performance,
title={Performance assessment of ADAS in a representative subset of critical
traffic situations},
author={Luigi Di Lillo, Andrea Triscari, Xilin Zhou, Robert Dyro, Ruolin Li,
Marco Pavone},
journal={arXiv preprint arXiv:2409.16942},
year={2024},
archivePrefix={arXiv},
eprint={2409.16942},
primaryClass={cs.RO}
}
|
di lillo2024performance
|
arxiv-661838
|
2409.16944
|
Go-SLAM: Grounded Object Segmentation and Localization with Gaussian Splatting SLAM
|
<|reference_start|>Go-SLAM: Grounded Object Segmentation and Localization with Gaussian Splatting SLAM: We introduce Go-SLAM, a novel framework that utilizes 3D Gaussian Splatting SLAM to reconstruct dynamic environments while embedding object-level information within the scene representations. This framework employs advanced object segmentation techniques, assigning a unique identifier to each Gaussian splat that corresponds to the object it represents. Consequently, our system facilitates open-vocabulary querying, allowing users to locate objects using natural language descriptions. Furthermore, the framework features an optimal path generation module that calculates efficient navigation paths for robots toward queried objects, considering obstacles and environmental uncertainties. Comprehensive evaluations in various scene settings demonstrate the effectiveness of our approach in delivering high-fidelity scene reconstructions, precise object segmentation, flexible object querying, and efficient robot path planning. This work represents an additional step forward in bridging the gap between 3D scene reconstruction, semantic object understanding, and real-time environment interactions.<|reference_end|>
|
arxiv
|
@article{pham2024go-slam:,
title={Go-SLAM: Grounded Object Segmentation and Localization with Gaussian
Splatting SLAM},
author={Phu Pham, Dipam Patel, Damon Conover, Aniket Bera},
journal={arXiv preprint arXiv:2409.16944},
year={2024},
archivePrefix={arXiv},
eprint={2409.16944},
primaryClass={cs.RO cs.AI cs.CV cs.GR}
}
|
pham2024go-slam:
|
arxiv-661839
|
2409.16945
|
Face Forgery Detection with Elaborate Backbone
|
<|reference_start|>Face Forgery Detection with Elaborate Backbone: Face Forgery Detection (FFD), or Deepfake detection, aims to determine whether a digital face is real or fake. Due to different face synthesis algorithms with diverse forgery patterns, FFD models often overfit specific patterns in training datasets, resulting in poor generalization to other unseen forgeries. This severe challenge requires FFD models to possess strong capabilities in representing complex facial features and extracting subtle forgery cues. Although previous FFD models directly employ existing backbones to represent and extract facial forgery cues, the critical role of backbones is often overlooked, particularly as their knowledge and capabilities are insufficient to address FFD challenges, inevitably limiting generalization. Therefore, it is essential to integrate the backbone pre-training configurations and seek practical solutions by revisiting the complete FFD workflow, from backbone pre-training and fine-tuning to inference of discriminant results. Specifically, we analyze the crucial contributions of backbones with different configurations in FFD task and propose leveraging the ViT network with self-supervised learning on real-face datasets to pre-train a backbone, equipping it with superior facial representation capabilities. We then build a competitive backbone fine-tuning framework that strengthens the backbone's ability to extract diverse forgery cues within a competitive learning mechanism. Moreover, we devise a threshold optimization mechanism that utilizes prediction confidence to improve the inference reliability. Comprehensive experiments demonstrate that our FFD model with the elaborate backbone achieves excellent performance in FFD and extra face-related tasks, i.e., presentation attack detection. Code and models are available at https://github.com/zhenglab/FFDBackbone.<|reference_end|>
|
arxiv
|
@article{guo2024face,
title={Face Forgery Detection with Elaborate Backbone},
author={Zonghui Guo, Yingjie Liu, Jie Zhang, Haiyong Zheng, Shiguang Shan},
journal={arXiv preprint arXiv:2409.16945},
year={2024},
archivePrefix={arXiv},
eprint={2409.16945},
primaryClass={cs.CV}
}
|
guo2024face
|
arxiv-661840
|
2409.16946
|
Setting the AI Agenda -- Evidence from Sweden in the ChatGPT Era
|
<|reference_start|>Setting the AI Agenda -- Evidence from Sweden in the ChatGPT Era: This paper examines the development of the Artificial Intelligence (AI) meta-debate in Sweden before and after the release of ChatGPT. From the perspective of agenda-setting theory, we propose that it is an elite outside of party politics that is leading the debate -- i.e. that the politicians are relatively silent when it comes to this rapid development. We also suggest that the debate has become more substantive and risk-oriented in recent years. To investigate this claim, we draw on an original dataset of elite-level documents from the early 2010s to the present, using op-eds published in a number of leading Swedish newspapers. By conducting a qualitative content analysis of these materials, our preliminary findings lend support to the expectation that an academic, rather than a political elite is steering the debate.<|reference_end|>
|
arxiv
|
@article{bruinsma2024setting,
title={Setting the AI Agenda -- Evidence from Sweden in the ChatGPT Era},
author={Bastiaan Bruinsma and Annika Fredén and Kajsa Hansson and Moa
Johansson and Pasko Kisić-Merino and Denitsa Saynova},
journal={arXiv preprint arXiv:2409.16946},
year={2024},
archivePrefix={arXiv},
eprint={2409.16946},
primaryClass={cs.AI cs.CY}
}
|
bruinsma2024setting
|
arxiv-661841
|
2409.16947
|
NTIRE 2024 Challenge on Stereo Image Super-Resolution: Methods and Results
|
<|reference_start|>NTIRE 2024 Challenge on Stereo Image Super-Resolution: Methods and Results: This paper summarizes the 3rd NTIRE challenge on stereo image super-resolution (SR) with a focus on new solutions and results. The task of this challenge is to super-resolve a low-resolution stereo image pair to a high-resolution one with a magnification factor of x4 under a limited computational budget. Compared with single image SR, the major challenge of this task lies in how to exploit additional information in another viewpoint and how to maintain stereo consistency in the results. This challenge has 2 tracks, including one track on bicubic degradation and one track on real degradations. In total, 108 and 70 participants were successfully registered for each track, respectively. In the test phase, 14 and 13 teams successfully submitted valid results with PSNR (RGB) scores better than the baseline. This challenge establishes a new benchmark for stereo image SR.<|reference_end|>
|
arxiv
|
@article{wang2024ntire,
title={NTIRE 2024 Challenge on Stereo Image Super-Resolution: Methods and
Results},
author={Longguang Wang, Yulan Guo, Juncheng Li, Hongda Liu, Yang Zhao,
Yingqian Wang, Zhi Jin, Shuhang Gu, Radu Timofte},
journal={arXiv preprint arXiv:2409.16947},
year={2024},
archivePrefix={arXiv},
eprint={2409.16947},
primaryClass={cs.CV}
}
|
wang2024ntire
|
arxiv-661842
|
2409.16948
|
The Power-Oriented Graphs Modeling Technique: From the Fundamental Principles to the Systematic, Step-by-Step Modeling of Complex Physical Systems
|
<|reference_start|>The Power-Oriented Graphs Modeling Technique: From the Fundamental Principles to the Systematic, Step-by-Step Modeling of Complex Physical Systems: Modeling physical systems is an essential skill for a control engineer, since it enables to achieve a deep understanding of their dynamic behavior and, consequently, the development of effective control strategies. The first part of this article provides a tutorial description of the fundamental principles and properties of the Power-Oriented Graphs (POG) modeling technique. Various case studies in different energetic domains are then presented to consolidate the fundamental principles, each highlighting different features of the POG modeling technique. The latter is then compared with the other two main graphical modeling techniques available in the literature, namely Bond Graph (BG) and Energetic Macroscopic Representation (EMR). The second part of this article assumes once again a tutorial nature, in order to introduce the new Fast Modeling POG (FMPOG) procedure. The FMPOG, which operates in the POG framework, is a methodical step-by-step procedure that enables the readers to quickly derive the power-oriented graphical model of physical systems starting from their schematics. From the power-oriented graphical model, the state-space model can then be directly determined. To ensure the FMPOG procedure is easily usable by the entire community, we apply it to three examples in different energetic domains in this article, guiding the reader step-by-step through the derivation of the physical systems models. A freely available Matlab/Simulink program is provided in a repository, allowing the users to automatically apply the FMPOG procedure to various classes of physical systems. This program allows to convert the physical systems schematics into the corresponding POG block schemes and, ultimately, into the state-space mathematical models.<|reference_end|>
|
arxiv
|
@article{tebaldi2024the,
title={The Power-Oriented Graphs Modeling Technique: From the Fundamental
Principles to the Systematic, Step-by-Step Modeling of Complex Physical
Systems},
author={Davide Tebaldi and Roberto Zanasi},
journal={arXiv preprint arXiv:2409.16948},
year={2024},
archivePrefix={arXiv},
eprint={2409.16948},
primaryClass={eess.SY cs.SY}
}
|
tebaldi2024the
|
arxiv-661843
|
2409.16949
|
DALDA: Data Augmentation Leveraging Diffusion Model and LLM with Adaptive Guidance Scaling
|
<|reference_start|>DALDA: Data Augmentation Leveraging Diffusion Model and LLM with Adaptive Guidance Scaling: In this paper, we present an effective data augmentation framework leveraging the Large Language Model (LLM) and Diffusion Model (DM) to tackle the challenges inherent in data-scarce scenarios. Recently, DMs have opened up the possibility of generating synthetic images to complement a few training images. However, increasing the diversity of synthetic images also raises the risk of generating samples outside the target distribution. Our approach addresses this issue by embedding novel semantic information into text prompts via LLM and utilizing real images as visual prompts, thus generating semantically rich images. To ensure that the generated images remain within the target distribution, we dynamically adjust the guidance weight based on each image's CLIPScore to control the diversity. Experimental results show that our method produces synthetic images with enhanced diversity while maintaining adherence to the target distribution. Consequently, our approach proves to be more efficient in the few-shot setting on several benchmarks. Our code is available at https://github.com/kkyuhun94/dalda .<|reference_end|>
|
arxiv
|
@article{jung2024dalda:,
title={DALDA: Data Augmentation Leveraging Diffusion Model and LLM with
Adaptive Guidance Scaling},
author={Kyuheon Jung, Yongdeuk Seo, Seongwoo Cho, Jaeyoung Kim, Hyun-seok Min,
Sungchul Choi},
journal={arXiv preprint arXiv:2409.16949},
year={2024},
archivePrefix={arXiv},
eprint={2409.16949},
primaryClass={cs.CV}
}
|
jung2024dalda:
|
arxiv-661844
|
2409.16950
|
Dynamic Obstacle Avoidance through Uncertainty-Based Adaptive Planning with Diffusion
|
<|reference_start|>Dynamic Obstacle Avoidance through Uncertainty-Based Adaptive Planning with Diffusion: By framing reinforcement learning as a sequence modeling problem, recent work has enabled the use of generative models, such as diffusion models, for planning. While these models are effective in predicting long-horizon state trajectories in deterministic environments, they face challenges in dynamic settings with moving obstacles. Effective collision avoidance demands continuous monitoring and adaptive decision-making. While replanning at every timestep could ensure safety, it introduces substantial computational overhead due to the repetitive prediction of overlapping state sequences -- a process that is particularly costly with diffusion models, known for their intensive iterative sampling procedure. We propose an adaptive generative planning approach that dynamically adjusts replanning frequency based on the uncertainty of action predictions. Our method minimizes the need for frequent, computationally expensive, and redundant replanning while maintaining robust collision avoidance performance. In experiments, we obtain a 13.5% increase in the mean trajectory length and a 12.7% increase in mean reward over long-horizon planning, indicating a reduction in collision rates and an improved ability to navigate the environment safely.<|reference_end|>
|
arxiv
|
@article{punyamoorty2024dynamic,
title={Dynamic Obstacle Avoidance through Uncertainty-Based Adaptive Planning
with Diffusion},
author={Vineet Punyamoorty, Pascal Jutras-Dubé, Ruqi Zhang, Vaneet Aggarwal,
Damon Conover, Aniket Bera},
journal={arXiv preprint arXiv:2409.16950},
year={2024},
archivePrefix={arXiv},
eprint={2409.16950},
primaryClass={cs.RO cs.AI cs.LG}
}
|
punyamoorty2024dynamic
|
arxiv-661845
|
2409.16953
|
Path-adaptive Spatio-Temporal State Space Model for Event-based Recognition with Arbitrary Duration
|
<|reference_start|>Path-adaptive Spatio-Temporal State Space Model for Event-based Recognition with Arbitrary Duration: Event cameras are bio-inspired sensors that capture the intensity changes asynchronously and output event streams with distinct advantages, such as high temporal resolution. To exploit event cameras for object/action recognition, existing methods predominantly sample and aggregate events in a second-level duration at every fixed temporal interval (or frequency). However, they often face difficulties in capturing the spatiotemporal relationships for longer, e.g., minute-level, events and generalizing across varying temporal frequencies. To fill the gap, we present a novel framework, dubbed PAST-SSM, exhibiting superior capacity in recognizing events with arbitrary duration (e.g., 0.1s to 4.5s) and generalizing to varying inference frequencies. Our key insight is to learn the spatiotemporal relationships from the encoded event features via the state space model (SSM) -- whose linear complexity makes it ideal for modeling high temporal resolution events with longer sequences. To achieve this goal, we first propose a Path-Adaptive Event Aggregation and Scan (PEAS) module to encode events of varying duration into features with fixed dimensions by adaptively scanning and selecting aggregated event frames. On top of PEAS, we introduce a novel Multi-faceted Selection Guiding (MSG) loss to minimize the randomness and redundancy of the encoded features. This subtly enhances the model generalization across different inference frequencies. Lastly, the SSM is employed to better learn the spatiotemporal properties from the encoded features. Moreover, we build a minute-level event-based recognition dataset, named ArDVS100, with arbitrary duration for the benefit of the community. Extensive experiments prove that our method outperforms prior arts by +3.45%, +0.38% and +8.31% on the DVS Action, SeAct and HARDVS datasets, respectively.<|reference_end|>
|
arxiv
|
@article{zhou2024path-adaptive,
title={Path-adaptive Spatio-Temporal State Space Model for Event-based
Recognition with Arbitrary Duration},
author={Jiazhou Zhou, Kanghao Chen, Lei Zhang, Lin Wang},
journal={arXiv preprint arXiv:2409.16953},
year={2024},
archivePrefix={arXiv},
eprint={2409.16953},
primaryClass={cs.CV}
}
|
zhou2024path-adaptive
|
arxiv-661846
|
2409.16954
|
Weighted Cross-entropy for Low-Resource Languages in Multilingual Speech Recognition
|
<|reference_start|>Weighted Cross-entropy for Low-Resource Languages in Multilingual Speech Recognition: This paper addresses the challenge of integrating low-resource languages into multilingual automatic speech recognition (ASR) systems. We introduce a novel application of weighted cross-entropy, typically used for unbalanced datasets, to facilitate the integration of low-resource languages into pre-trained multilingual ASR models within the context of continual multilingual learning. We fine-tune the Whisper multilingual ASR model on five high-resource languages and one low-resource language, employing language-weighted dynamic cross-entropy and data augmentation. The results show a remarkable 6.69% word error rate (WER) reduction for the low-resource language compared to the fine-tuned model without applying our approach, and a 48.86% WER reduction compared to the original Whisper model. In addition, our approach yields an average WER reduction of 3.29% across the six languages, showing no degradation for the high-resource languages.<|reference_end|>
|
arxiv
|
@article{piñeiro-martín2024weighted,
title={Weighted Cross-entropy for Low-Resource Languages in Multilingual Speech
Recognition},
author={Andrés Piñeiro-Martín, Carmen García-Mateo, Laura
Docío-Fernández, María del Carmen López-Pérez, Georg Rehm},
journal={Proceedings of Interspeech 2024},
year={2024},
doi={10.21437/Interspeech.2024-734},
archivePrefix={arXiv},
eprint={2409.16954},
primaryClass={cs.CL cs.SD eess.AS}
}
|
piñeiro-martín2024weighted
|
arxiv-661847
|
2409.16955
|
Enumerating all geodesics
|
<|reference_start|>Enumerating all geodesics: By "geodesic" we mean any sequence of vertices $(v_1,v_2,...,v_k)$ of a graph $G$ that constitute a shortest path from $v_1$ to $v_k$. We propose a novel, output-polynomial algorithm to enumerate all geodesics of $G$. The graph can be directed or not, and weighted or not.<|reference_end|>
|
arxiv
|
@article{wild2024enumerating,
title={Enumerating all geodesics},
author={Marcel Wild},
journal={arXiv preprint arXiv:2409.16955},
year={2024},
archivePrefix={arXiv},
eprint={2409.16955},
primaryClass={math.CO cs.DM}
}
|
wild2024enumerating
|
arxiv-661848
|
2409.16956
|
Informed deep hierarchical classification: a non-standard analysis inspired approach
|
<|reference_start|>Informed deep hierarchical classification: a non-standard analysis inspired approach: This work proposes a novel approach to the deep hierarchical classification task, i.e., the problem of classifying data according to multiple labels organized in a rigid parent-child structure. It consists of a multi-output deep neural network equipped with specific projection operators placed before each output layer. The design of such an architecture, called lexicographic hybrid deep neural network (LH-DNN), has been possible by combining tools from different and quite distant research fields: lexicographic multi-objective optimization, non-standard analysis, and deep learning. To assess the efficacy of the approach, the resulting network is compared against the B-CNN, a convolutional neural network tailored for hierarchical classification tasks, on the CIFAR10, CIFAR100 (where it has been originally and recently proposed before being adopted and tuned for multiple real-world applications) and Fashion-MNIST benchmarks. Evidence states that an LH-DNN can achieve comparable if not superior performance, especially in the learning of the hierarchical relations, in the face of a drastic reduction of the learning parameters, training epochs, and computational time, without the need for ad-hoc loss functions weighting values.<|reference_end|>
|
arxiv
|
@article{fiaschi2024informed,
title={Informed deep hierarchical classification: a non-standard analysis
inspired approach},
author={Lorenzo Fiaschi and Marco Cococcioni},
journal={arXiv preprint arXiv:2409.16956},
year={2024},
archivePrefix={arXiv},
eprint={2409.16956},
primaryClass={cs.AI cs.LG math.LO}
}
|
fiaschi2024informed
|
arxiv-661849
|
2409.16957
|
DualLQR: Efficient Grasping of Oscillating Apples using Task Parameterized Learning from Demonstration
|
<|reference_start|>DualLQR: Efficient Grasping of Oscillating Apples using Task Parameterized Learning from Demonstration: Learning from Demonstration offers great potential for robots to learn to perform agricultural tasks, specifically selective harvesting. One of the challenges is that the target fruit can be oscillating while approaching. Grasping oscillating targets has two requirements: 1) close tracking of the target during the final approach for damage-free grasping, and 2) the complete path should be as short as possible for improved efficiency. We propose a new method called DualLQR. In this method, we use a finite horizon Linear Quadratic Regulator (LQR) on a moving target, without the need of refitting the LQR. To make this possible, we use a dual LQR setup, with an LQR running in two separate reference frames. Through extensive simulation testing, it was found that the state-of-the-art method barely meets the required final accuracy without oscillations and drops below the required accuracy with an oscillating target. DualLQR was found to be able to meet the required final accuracy even with high oscillations, with an accuracy increase of 60% for high orientation oscillations. Further testing on a real-world apple grasping task showed that DualLQR was able to successfully grasp oscillating apples, with a success rate of 99%.<|reference_end|>
|
arxiv
|
@article{van de ven2024duallqr:,
title={DualLQR: Efficient Grasping of Oscillating Apples using Task
Parameterized Learning from Demonstration},
author={Robert van de Ven, Ard Nieuwenhuizen, Eldert J. van Henten, and Gert
Kootstra},
journal={arXiv preprint arXiv:2409.16957},
year={2024},
archivePrefix={arXiv},
eprint={2409.16957},
primaryClass={cs.RO}
}
|
van de ven2024duallqr:
|
arxiv-661850
|
2409.16958
|
Metaheuristic Method for Solving Systems of Equations
|
<|reference_start|>Metaheuristic Method for Solving Systems of Equations: This study investigates the effectiveness of Genetic Algorithms (GAs) in solving both linear and nonlinear systems of equations, comparing their performance to traditional methods such as Gaussian Elimination, Newton's Method, and Levenberg-Marquardt. The GA consistently delivered accurate solutions across various test cases, demonstrating its robustness and flexibility. A key advantage of the GA is its ability to explore the solution space broadly, uncovering multiple sets of solutions -- a feat that traditional methods, which typically converge to a single solution, cannot achieve. This feature proved especially beneficial in complex nonlinear systems, where multiple valid solutions exist, highlighting the GA's superiority in navigating intricate solution landscapes.<|reference_end|>
|
arxiv
|
@article{odan2024metaheuristic,
title={Metaheuristic Method for Solving Systems of Equations},
author={Samson Odan},
journal={arXiv preprint arXiv:2409.16958},
year={2024},
archivePrefix={arXiv},
eprint={2409.16958},
primaryClass={cs.NE math.OC}
}
|
odan2024metaheuristic
|
arxiv-661851
|
2409.16959
|
RESAA: A Removal and Structural Analysis Attack Against Compound Logic Locking
|
<|reference_start|>RESAA: A Removal and Structural Analysis Attack Against Compound Logic Locking: The semiconductor industry's paradigm shift towards fabless integrated circuit (IC) manufacturing has introduced security threats, including piracy, counterfeiting, hardware Trojans, and overproduction. In response to these challenges, various countermeasures, including Logic locking (LL), have been proposed to protect designs and mitigate security risks. LL is likely the most researched form of intellectual property (IP) protection for ICs. A significant advance has been made with the introduction of compound logic locking (CLL), where two LL techniques are concurrently utilized for improved resiliency against attacks. However, the vulnerabilities of LL techniques, particularly CLL, need to be explored further. This paper presents a novel framework, RESAA, designed to classify CLL-locked designs, identify critical gates, and execute various attacks to uncover secret keys. RESAA is agnostic to specific LL techniques, offering comprehensive insights into CLL's security scenarios. Experimental results demonstrate RESAA's efficacy in identifying critical gates, distinguishing segments corresponding to different LL techniques, and determining associated keys based on different threat models. In particular, for the oracle-less threat model, RESAA can achieve up to 92.6% accuracy on a relatively complex ITC'99 benchmark circuit. The results reported in this paper emphasize the significance of evaluation and thoughtful selection of LL techniques, as all studied CLL variants demonstrated vulnerability to our framework. RESAA is also open-sourced for the community at large.<|reference_end|>
|
arxiv
|
@article{almeida2024resaa:,
title={RESAA: A Removal and Structural Analysis Attack Against Compound Logic
Locking},
author={Felipe Almeida, Levent Aksoy, Samuel Pagliarini},
journal={arXiv preprint arXiv:2409.16959},
year={2024},
archivePrefix={arXiv},
eprint={2409.16959},
primaryClass={cs.CR}
}
|
almeida2024resaa:
|
arxiv-661852
|
2409.16965
|
ABCFair: an Adaptable Benchmark approach for Comparing Fairness Methods
|
<|reference_start|>ABCFair: an Adaptable Benchmark approach for Comparing Fairness Methods: Numerous methods have been implemented that pursue fairness with respect to sensitive features by mitigating biases in machine learning. Yet, the problem settings that each method tackles vary significantly, including the stage of intervention, the composition of sensitive features, the fairness notion, and the distribution of the output. Even in binary classification, these subtle differences make it highly complicated to benchmark fairness methods, as their performance can strongly depend on exactly how the bias mitigation problem was originally framed. Hence, we introduce ABCFair, a benchmark approach which allows adapting to the desiderata of the real-world problem setting, enabling proper comparability between methods for any use case. We apply ABCFair to a range of pre-, in-, and postprocessing methods on both large-scale, traditional datasets and on a dual label (biased and unbiased) dataset to sidestep the fairness-accuracy trade-off.<|reference_end|>
|
arxiv
|
@article{defrance2024abcfair:,
title={ABCFair: an Adaptable Benchmark approach for Comparing Fairness Methods},
author={MaryBeth Defrance, Maarten Buyl, Tijl De Bie},
journal={arXiv preprint arXiv:2409.16965},
year={2024},
archivePrefix={arXiv},
eprint={2409.16965},
primaryClass={cs.LG cs.CY}
}
|
defrance2024abcfair:
|
arxiv-661853
|
2409.16967
|
Multi-Robot Informative Path Planning for Efficient Target Mapping using Deep Reinforcement Learning
|
<|reference_start|>Multi-Robot Informative Path Planning for Efficient Target Mapping using Deep Reinforcement Learning: Autonomous robots are being employed in several mapping and data collection tasks due to their efficiency and low labor costs. In these tasks, the robots are required to map targets-of-interest in an unknown environment while constrained to a given resource budget such as path length or mission time. This is a challenging problem as each robot has to not only detect and avoid collisions from static obstacles in the environment but also has to model other robots' trajectories to avoid inter-robot collisions. We propose a novel deep reinforcement learning approach for multi-robot informative path planning to map targets-of-interest in an unknown 3D environment. A key aspect of our approach is an augmented graph that models other robots' trajectories to enable planning for communication and inter-robot collision avoidance. We train our decentralized reinforcement learning policy via the centralized training and decentralized execution paradigm. Once trained, our policy is also scalable to varying number of robots and does not require re-training. Our approach outperforms other state-of-the-art multi-robot target mapping approaches by 33.75% in terms of the number of discovered targets-of-interest. We open-source our code and model at: https://github.com/AccGen99/marl_ipp<|reference_end|>
|
arxiv
|
@article{vashisth2024multi-robot,
title={Multi-Robot Informative Path Planning for Efficient Target Mapping using
Deep Reinforcement Learning},
author={Apoorva Vashisth, Dipam Patel, Damon Conover, Aniket Bera},
journal={arXiv preprint arXiv:2409.16967},
year={2024},
archivePrefix={arXiv},
eprint={2409.16967},
primaryClass={cs.RO cs.CV}
}
|
vashisth2024multi-robot
|
arxiv-661854
|
2409.16968
|
Bridge to Real Environment with Hardware-in-the-loop for Wireless Artificial Intelligence Paradigms
|
<|reference_start|>Bridge to Real Environment with Hardware-in-the-loop for Wireless Artificial Intelligence Paradigms: Nowadays, many machine learning (ML) solutions to improve the wireless standard IEEE802.11p for Vehicular Adhoc Network (VANET) are commonly evaluated in the simulated world. At the same time, this approach could be cost-effective compared to real-world testing due to the high cost of vehicles. There is a risk of unexpected outcomes when these solutions are implemented in the real world, potentially leading to wasted resources. To mitigate this challenge, the hardware-in-the-loop is the way to move forward as it enables the opportunity to test in the real world and simulated worlds together. Therefore, we have developed what we believe is the pioneering hardware-in-the-loop for testing artificial intelligence, multiple services, and HD map data (LiDAR), in both simulated and real-world settings.<|reference_end|>
|
arxiv
|
@article{redondo2024bridge,
title={Bridge to Real Environment with Hardware-in-the-loop for Wireless
Artificial Intelligence Paradigms},
author={Jeffrey Redondo, Nauman Aslam, Juan Zhang, and Zhenhui Yuan},
journal={arXiv preprint arXiv:2409.16968},
year={2024},
archivePrefix={arXiv},
eprint={2409.16968},
primaryClass={cs.LG cs.NI eess.SP}
}
|
redondo2024bridge
|
arxiv-661855
|
2409.16972
|
Efficient Submap-based Autonomous MAV Exploration using Visual-Inertial SLAM Configurable for LiDARs or Depth Cameras
|
<|reference_start|>Efficient Submap-based Autonomous MAV Exploration using Visual-Inertial SLAM Configurable for LiDARs or Depth Cameras: Autonomous exploration of unknown space is an essential component for the deployment of mobile robots in the real world. Safe navigation is crucial for all robotics applications and requires accurate and consistent maps of the robot's surroundings. To achieve full autonomy and allow deployment in a wide variety of environments, the robot must rely on on-board state estimation which is prone to drift over time. We propose a Micro Aerial Vehicle (MAV) exploration framework based on local submaps to allow retaining global consistency by applying loop-closure corrections to the relative submap poses. To enable large-scale exploration we efficiently compute global, environment-wide frontiers from the local submap frontiers and use a sampling-based next-best-view exploration planner. Our method seamlessly supports using either a LiDAR sensor or a depth camera, making it suitable for different kinds of MAV platforms. We perform comparative evaluations in simulation against a state-of-the-art submap-based exploration framework to showcase the efficiency and reconstruction quality of our approach. Finally, we demonstrate the applicability of our method to real-world MAVs, one equipped with a LiDAR and the other with a depth camera. Video available at https://youtu.be/Uf5fwmYcuq4 .<|reference_end|>
|
arxiv
|
@article{papatheodorou2024efficient,
title={Efficient Submap-based Autonomous MAV Exploration using Visual-Inertial
SLAM Configurable for LiDARs or Depth Cameras},
author={Sotiris Papatheodorou, Simon Boche, Sebastián Barbas Laina, Stefan
Leutenegger},
journal={arXiv preprint arXiv:2409.16972},
year={2024},
archivePrefix={arXiv},
eprint={2409.16972},
primaryClass={cs.RO}
}
|
papatheodorou2024efficient
|
arxiv-661856
|
2409.16973
|
Adaptive Self-Supervised Learning Strategies for Dynamic On-Device LLM Personalization
|
<|reference_start|>Adaptive Self-Supervised Learning Strategies for Dynamic On-Device LLM Personalization: Large language models (LLMs) have revolutionized how we interact with technology, but their personalization to individual user preferences remains a significant challenge, particularly in on-device applications. Traditional methods often depend heavily on labeled datasets and can be resource-intensive. To address these issues, we present Adaptive Self-Supervised Learning Strategies (ASLS), which utilizes self-supervised learning techniques to personalize LLMs dynamically. The framework comprises a user profiling layer for collecting interaction data and a neural adaptation layer for real-time model fine-tuning. This innovative approach enables continuous learning from user feedback, allowing the model to generate responses that align closely with user-specific contexts. The adaptive mechanisms of ASLS minimize computational demands and enhance personalization efficiency. Experimental results across various user scenarios illustrate the superior performance of ASLS in boosting user engagement and satisfaction, highlighting its potential to redefine LLMs as highly responsive and context-aware systems on-device.<|reference_end|>
|
arxiv
|
@article{mendoza2024adaptive,
title={Adaptive Self-Supervised Learning Strategies for Dynamic On-Device LLM
Personalization},
author={Rafael Mendoza, Isabella Cruz, Richard Liu, Aarav Deshmukh, David
Williams, Jesscia Peng, Rohan Iyer},
journal={arXiv preprint arXiv:2409.16973},
year={2024},
archivePrefix={arXiv},
eprint={2409.16973},
primaryClass={cs.CL cs.AI cs.LG}
}
|
mendoza2024adaptive
|
arxiv-661857
|
2409.16974
|
Decoding Large-Language Models: A Systematic Overview of Socio-Technical Impacts, Constraints, and Emerging Questions
|
<|reference_start|>Decoding Large-Language Models: A Systematic Overview of Socio-Technical Impacts, Constraints, and Emerging Questions: There have been rapid advancements in the capabilities of large language models (LLMs) in recent years, greatly revolutionizing the field of natural language processing (NLP) and artificial intelligence (AI) to understand and interact with human language. Therefore, in this work, we conduct a systematic investigation of the literature to identify the prominent themes and directions of LLM developments, impacts, and limitations. Our findings illustrate the aims, methodologies, limitations, and future directions of LLM research. It includes responsible development considerations, algorithmic improvements, ethical challenges, and societal implications of LLM development. Overall, this paper provides a rigorous and comprehensive overview of current research in LLM and identifies potential directions for future development. The article highlights the application areas that could have a positive impact on society along with the ethical considerations.<|reference_end|>
|
arxiv
|
@article{kaya2024decoding,
title={Decoding Large-Language Models: A Systematic Overview of Socio-Technical
Impacts, Constraints, and Emerging Questions},
author={Zeyneb N. Kaya and Souvick Ghosh},
journal={arXiv preprint arXiv:2409.16974},
year={2024},
archivePrefix={arXiv},
eprint={2409.16974},
primaryClass={cs.CL cs.AI}
}
|
kaya2024decoding
|
arxiv-661858
|
2409.16976
|
Hydraulic Volumetric Soft Everting Vine Robot Steering Mechanism for Underwater Exploration
|
<|reference_start|>Hydraulic Volumetric Soft Everting Vine Robot Steering Mechanism for Underwater Exploration: Despite a significant proportion of the Earth being covered in water, exploration of what lies below has been limited due to the challenges and difficulties inherent in the process. Current state of the art robots such as Remotely Operated Vehicles (ROVs) and Autonomous Underwater Vehicles (AUVs) are bulky, rigid and unable to conform to their environment. Soft robotics offers solutions to this issue. Fluid-actuated eversion or growing robots, in particular, are a good example. While current eversion robots have found many applications on land, their inherent properties make them particularly well suited to underwater environments. An important factor when considering underwater eversion robots is the establishment of a suitable steering mechanism that can enable the robot to change direction as required. This project proposes a design for an eversion robot that is capable of steering while underwater, through the use of bending pouches, a design commonly seen in the literature on land-based eversion robots. These bending pouches contract to enable directional change. Similar to their land-based counterparts, the underwater eversion robot uses the same fluid in the medium it operates in to achieve extension and bending but also to additionally aid in neutral buoyancy. The actuation method of bending pouches meant that robots needed to fully extend before steering was possible. Three robots, with the same design and dimensions were constructed from polyethylene tubes and tested. Our research shows that although the soft eversion robot design in this paper was not capable of consistently generating the same amounts of bending for the inflation volume, it still achieved suitable bending at a range of inflation volumes and was observed to bend to a maximum angle of 68 degrees at 2000 ml, which is in line with the bending angles reported for land-based eversion robots in the literature.<|reference_end|>
|
arxiv
|
@article{kaleel2024hydraulic,
title={Hydraulic Volumetric Soft Everting Vine Robot Steering Mechanism for
Underwater Exploration},
author={Danyaal Kaleel, Benoit Clement and Kaspar Althoefer},
journal={arXiv preprint arXiv:2409.16976},
year={2024},
archivePrefix={arXiv},
eprint={2409.16976},
primaryClass={cs.RO}
}
|
kaleel2024hydraulic
|
arxiv-661859
|
2409.16978
|
Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI
|
<|reference_start|>Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI: While Explainable AI (XAI) aims to make AI understandable and useful to humans, it has been criticised for relying too much on formalism and solutionism, focusing more on mathematical soundness than user needs. We propose an alternative to this bottom-up approach inspired by design thinking: the XAI research community should adopt a top-down, user-focused perspective to ensure user relevance. We illustrate this with a relatively young subfield of XAI, Training Data Attribution (TDA). With the surge in TDA research and growing competition, the field risks repeating the same patterns of solutionism. We conducted a needfinding study with a diverse group of AI practitioners to identify potential user needs related to TDA. Through interviews (N=10) and a systematic survey (N=31), we uncovered new TDA tasks that are currently largely overlooked. We invite the TDA and XAI communities to consider these novel tasks and improve the user relevance of their research outcomes.<|reference_end|>
|
arxiv
|
@article{nguyen2024towards,
title={Towards User-Focused Research in Training Data Attribution for
Human-Centered Explainable AI},
author={Elisa Nguyen, Johannes Bertram, Evgenii Kortukov, Jean Y. Song, Seong
Joon Oh},
journal={arXiv preprint arXiv:2409.16978},
year={2024},
archivePrefix={arXiv},
eprint={2409.16978},
primaryClass={cs.HC cs.AI cs.LG}
}
|
nguyen2024towards
|
arxiv-661860
|
2409.16984
|
AXCEL: Automated eXplainable Consistency Evaluation using LLMs
|
<|reference_start|>AXCEL: Automated eXplainable Consistency Evaluation using LLMs: Large Language Models (LLMs) are widely used in both industry and academia for various tasks, yet evaluating the consistency of generated text responses continues to be a challenge. Traditional metrics like ROUGE and BLEU show a weak correlation with human judgment. More sophisticated metrics using Natural Language Inference (NLI) have shown improved correlations but are complex to implement, require domain-specific training due to poor cross-domain generalization, and lack explainability. More recently, prompt-based metrics using LLMs as evaluators have emerged; while they are easier to implement, they still lack explainability and depend on task-specific prompts, which limits their generalizability. This work introduces Automated eXplainable Consistency Evaluation using LLMs (AXCEL), a prompt-based consistency metric which offers explanations for the consistency scores by providing detailed reasoning and pinpointing inconsistent text spans. AXCEL is also a generalizable metric which can be adopted to multiple tasks without changing the prompt. AXCEL outperforms both non-prompt and prompt-based state-of-the-art (SOTA) metrics in detecting inconsistencies across summarization by 8.7%, free text generation by 6.2%, and data-to-text conversion tasks by 29.4%. We also evaluate the influence of underlying LLMs on prompt based metric performance and recalibrate the SOTA prompt-based metrics with the latest LLMs for fair comparison. Further, we show that AXCEL demonstrates strong performance using open source LLMs.<|reference_end|>
|
arxiv
|
@article{sreekar2024axcel:,
title={AXCEL: Automated eXplainable Consistency Evaluation using LLMs},
author={P Aditya Sreekar, Sahil Verma, Suransh Chopra, Sarik Ghazarian,
Abhishek Persad and Narayanan Sadagopan},
journal={arXiv preprint arXiv:2409.16984},
year={2024},
archivePrefix={arXiv},
eprint={2409.16984},
primaryClass={cs.AI cs.CL}
}
|
sreekar2024axcel:
|
arxiv-661861
|
2409.16986
|
Harnessing Diversity for Important Data Selection in Pretraining Large Language Models
|
<|reference_start|>Harnessing Diversity for Important Data Selection in Pretraining Large Language Models: Data selection is of great significance in pre-training large language models, given the variation in quality within the large-scale available training corpora. To achieve this, researchers are currently investigating the use of data influence to measure the importance of data instances, $i.e.,$ a high influence score indicates that incorporating this instance to the training set is likely to enhance the model performance. Consequently, they select the top-$k$ instances with the highest scores. However, this approach has several limitations. (1) Computing the influence of all available data is time-consuming. (2) The selected data instances are not diverse enough, which may hinder the pre-trained model's ability to generalize effectively to various downstream tasks. In this paper, we introduce \texttt{Quad}, a data selection approach that considers both quality and diversity by using data influence to achieve state-of-the-art pre-training results. In particular, noting that attention layers capture extensive semantic details, we have adapted the accelerated $iHVP$ computation methods for attention layers, enhancing our ability to evaluate the influence of data, $i.e.,$ its quality. For the diversity, \texttt{Quad} clusters the dataset into similar data instances within each cluster and diverse instances across different clusters. For each cluster, if we opt to select data from it, we take some samples to evaluate the influence to prevent processing all instances. To determine which clusters to select, we utilize the classic Multi-Armed Bandit method, treating each cluster as an arm. This approach favors clusters with highly influential instances (ensuring high quality) or clusters that have been selected less frequently (ensuring diversity), thereby well balancing between quality and diversity.<|reference_end|>
|
arxiv
|
@article{zhang2024harnessing,
title={Harnessing Diversity for Important Data Selection in Pretraining Large
Language Models},
author={Chi Zhang, Huaping Zhong, Kuan Zhang, Chengliang Chai, Rui Wang,
Xinlin Zhuang, Tianyi Bai, Jiantao Qiu, Lei Cao, Ju Fan, Ye Yuan, Guoren Wang
and Conghui He},
journal={arXiv preprint arXiv:2409.16986},
year={2024},
archivePrefix={arXiv},
eprint={2409.16986},
primaryClass={cs.AI}
}
|
zhang2024harnessing
|
arxiv-661862
|
2409.16990
|
Single Image, Any Face: Generalisable 3D Face Generation
|
<|reference_start|>Single Image, Any Face: Generalisable 3D Face Generation: The creation of 3D human face avatars from a single unconstrained image is a fundamental task that underlies numerous real-world vision and graphics applications. Despite the significant progress made in generative models, existing methods are either less suited in design for human faces or fail to generalise from the restrictive training domain to unconstrained facial images. To address these limitations, we propose a novel model, Gen3D-Face, which generates 3D human faces with unconstrained single image input within a multi-view consistent diffusion framework. Given a specific input image, our model first produces multi-view images, followed by neural surface construction. To incorporate face geometry information in a generalisable manner, we utilise input-conditioned mesh estimation instead of ground-truth mesh along with synthetic multi-view training data. Importantly, we introduce a multi-view joint generation scheme to enhance appearance consistency among different views. To the best of our knowledge, this is the first attempt and benchmark for creating photorealistic 3D human face avatars from single images for generic human subjects across domains. Extensive experiments demonstrate the superiority of our method over previous alternatives for out-of-domain single image 3D face generation and top competition for in-domain setting.<|reference_end|>
|
arxiv
|
@article{wang2024single,
title={Single Image, Any Face: Generalisable 3D Face Generation},
author={Wenqing Wang, Haosen Yang, Josef Kittler, Xiatian Zhu},
journal={arXiv preprint arXiv:2409.16990},
year={2024},
archivePrefix={arXiv},
eprint={2409.16990},
primaryClass={cs.CV}
}
|
wang2024single
|
arxiv-661863
|
2409.16991
|
What is the relationship between Slow Feature Analysis and the Successor Representation?
|
<|reference_start|>What is the relationship between Slow Feature Analysis and the Successor Representation?: (This is a work in progress. Feedback is welcome) An analytical comparison is made between slow feature analysis (SFA) and the successor representation (SR). While SFA and the SR stem from distinct areas of machine learning, they share important properties, both in terms of their mathematics and the types of information they are sensitive to. This work studies their connection along these two axes. In particular, multiple variants of the SFA algorithm are explored analytically and then applied to the setting of an MDP, leading to a family of eigenvalue problems involving the SR and other related quantities. These resulting eigenvalue problems are then illustrated in the toy setting of a gridworld, where it is demonstrated that the place- and grid-like fields often associated to the SR can equally be generated using SFA.<|reference_end|>
|
arxiv
|
@article{seabrook2024what,
title={What is the relationship between Slow Feature Analysis and the Successor
Representation?},
author={Eddie Seabrook and Laurenz Wiskott},
journal={arXiv preprint arXiv:2409.16991},
year={2024},
archivePrefix={arXiv},
eprint={2409.16991},
primaryClass={cs.LG}
}
|
seabrook2024what
|
arxiv-661864
|
2409.16997
|
INT-FlashAttention: Enabling Flash Attention for INT8 Quantization
|
<|reference_start|>INT-FlashAttention: Enabling Flash Attention for INT8 Quantization: As the foundation of large language models (LLMs), the self-attention module faces the challenge of quadratic time and memory complexity with respect to sequence length. FlashAttention accelerates attention computation and reduces its memory usage by leveraging the GPU memory hierarchy. A promising research direction is to integrate FlashAttention with quantization methods. This paper introduces INT-FlashAttention, the first INT8 quantization architecture compatible with the forward workflow of FlashAttention, which significantly improves the inference speed of FlashAttention on Ampere GPUs. We implement our INT-FlashAttention prototype with fully INT8 activations and general matrix-multiplication (GEMM) kernels, making it the first attention operator with fully INT8 input. As a general token-level post-training quantization framework, INT-FlashAttention is also compatible with other data formats such as INT4. Experimental results show INT-FlashAttention achieves 72% faster inference speed and 82% smaller quantization error compared to standard FlashAttention with FP16 and FP8 data formats.<|reference_end|>
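A minimal numpy sketch of the core idea of fully INT8 attention inputs follows: per-token symmetric INT8 quantization of Q and K, an integer GEMM for the attention logits, then dequantization. It ignores FlashAttention's tiling, softmax, and kernel fusion; the per-token scheme and all names are assumptions for illustration, not the paper's kernel.

```python
import numpy as np

def quantize_int8_per_token(x):
    """Symmetric per-row (per-token) INT8 quantization: returns the INT8
    tensor and one scale per token."""
    scale = np.abs(x).max(axis=-1, keepdims=True) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_attention_scores(Q, K):
    """Attention logits Q @ K^T computed from INT8 inputs with INT32
    accumulation, then dequantized with the per-token scales."""
    q8, sq = quantize_int8_per_token(Q)
    k8, sk = quantize_int8_per_token(K)
    acc = q8.astype(np.int32) @ k8.astype(np.int32).T    # integer GEMM
    return acc * (sq * sk.T) / np.sqrt(Q.shape[-1])      # dequantize + scale

rng = np.random.default_rng(0)
Q, K = rng.normal(size=(4, 64)), rng.normal(size=(6, 64))
fp = Q @ K.T / np.sqrt(64)
print(np.abs(fp - int8_attention_scores(Q, K)).max())    # small quantization error
```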
|
arxiv
|
@article{chen2024int-flashattention:,
title={INT-FlashAttention: Enabling Flash Attention for INT8 Quantization},
author={Shimao Chen, Zirui Liu, Zhiying Wu, Ce Zheng, Peizhuang Cong, Zihan
Jiang, Yuhan Wu, Lei Su, Tong Yang},
journal={arXiv preprint arXiv:2409.16997},
year={2024},
archivePrefix={arXiv},
eprint={2409.16997},
primaryClass={cs.LG cs.AI}
}
|
chen2024int-flashattention:
|
arxiv-661865
|
2409.16998
|
PitRSDNet: Predicting Intra-operative Remaining Surgery Duration in Endoscopic Pituitary Surgery
|
<|reference_start|>PitRSDNet: Predicting Intra-operative Remaining Surgery Duration in Endoscopic Pituitary Surgery: Accurate intra-operative Remaining Surgery Duration (RSD) predictions allow for anaesthetists to more accurately decide when to administer anaesthetic agents and drugs, as well as to notify hospital staff to send in the next patient. Therefore RSD plays an important role in improving patient care and minimising surgical theatre costs via efficient scheduling. In endoscopic pituitary surgery, it is uniquely challenging due to variable workflow sequences with a selection of optional steps contributing to high variability in surgery duration. This paper presents PitRSDNet for predicting RSD during pituitary surgery, a spatio-temporal neural network model that learns from historical data focusing on workflow sequences. PitRSDNet integrates workflow knowledge into RSD prediction in two forms: 1) multi-task learning for concurrently predicting step and RSD; and 2) incorporating prior steps as context in temporal learning and inference. PitRSDNet is trained and evaluated on a new endoscopic pituitary surgery dataset with 88 videos to show competitive performance improvements over previous statistical and machine learning methods. The findings also highlight how PitRSDNet improve RSD precision on outlier cases utilising the knowledge of prior steps.<|reference_end|>
|
arxiv
|
@article{wijekoon2024pitrsdnet:,
title={PitRSDNet: Predicting Intra-operative Remaining Surgery Duration in
Endoscopic Pituitary Surgery},
author={Anjana Wijekoon, Adrito Das, Roxana R. Herrera, Danyal Z. Khan, John
Hanrahan, Eleanor Carter, Valpuri Luoma, Danail Stoyanov, Hani J. Marcus,
Sophia Bano},
journal={arXiv preprint arXiv:2409.16998},
year={2024},
archivePrefix={arXiv},
eprint={2409.16998},
primaryClass={eess.IV cs.CV cs.LG}
}
|
wijekoon2024pitrsdnet:
|
arxiv-661866
|
2409.16999
|
WasteGAN: Data Augmentation for Robotic Waste Sorting through Generative Adversarial Networks
|
<|reference_start|>WasteGAN: Data Augmentation for Robotic Waste Sorting through Generative Adversarial Networks: Robotic waste sorting poses significant challenges in both perception and manipulation, given the extreme variability of objects that should be recognized on a cluttered conveyor belt. While deep learning has proven effective in solving complex tasks, the necessity for extensive data collection and labeling limits its applicability in real-world scenarios like waste sorting. To tackle this issue, we introduce a data augmentation method based on a novel GAN architecture called wasteGAN. The proposed method increases the performance of semantic segmentation models, starting from a very limited set of labeled examples, as few as 100. The key innovations of wasteGAN include a novel loss function, a novel activation function, and a larger generator block. Overall, these innovations help the network to learn from a limited number of examples and synthesize data that better mirrors real-world distributions. We then leverage the higher-quality segmentation masks predicted from models trained on the wasteGAN synthetic data to compute semantic-aware grasp poses, enabling a robotic arm to effectively recognize contaminants and separate waste in a real-world scenario. Through comprehensive evaluation encompassing dataset-based assessments and real-world experiments, our methodology demonstrated promising potential for robotic waste sorting, yielding performance gains of up to 5.8\% in picking contaminants. The project page is available at https://github.com/bach05/wasteGAN.git<|reference_end|>
|
arxiv
|
@article{bacchin2024wastegan:,
title={WasteGAN: Data Augmentation for Robotic Waste Sorting through Generative
Adversarial Networks},
author={Alberto Bacchin, Leonardo Barcellona, Matteo Terreran, Stefano
Ghidoni, Emanuele Menegatti, Takuya Kiyokawa},
journal={arXiv preprint arXiv:2409.16999},
year={2024},
archivePrefix={arXiv},
eprint={2409.16999},
primaryClass={cs.RO cs.CV}
}
|
bacchin2024wastegan:
|
arxiv-661867
|
2409.17001
|
Adverse Weather Optical Flow: Cumulative Homogeneous-Heterogeneous Adaptation
|
<|reference_start|>Adverse Weather Optical Flow: Cumulative Homogeneous-Heterogeneous Adaptation: Optical flow has made great progress in clean scenes, but suffers degradation under adverse weather due to the violation of the brightness constancy and gradient continuity assumptions of optical flow. Typically, existing methods mainly adopt domain adaptation to transfer motion knowledge from the clean to the degraded domain through one-stage adaptation. However, this direct adaptation is ineffective, since there exists a large gap due to adverse weather and scene style between clean and real degraded domains. Moreover, even within the degraded domain itself, static weather (e.g., fog) and dynamic weather (e.g., rain) have different impacts on optical flow. To address the above issues, we explore the synthetic degraded domain as an intermediate bridge between clean and real degraded domains, and propose a cumulative homogeneous-heterogeneous adaptation framework for real adverse weather optical flow. Specifically, for clean-degraded transfer, our key insight is that static weather possesses the depth-association homogeneous feature which does not change the intrinsic motion of the scene, while dynamic weather additionally introduces the heterogeneous feature which results in a significant boundary discrepancy in warp errors between clean and degraded domains. For synthetic-real transfer, we find that cost volume correlation shares a similar statistical histogram between synthetic and real degraded domains, which benefits holistically aligning the homogeneous correlation distribution for synthetic-real knowledge distillation. Under this unified framework, the proposed method can progressively and explicitly transfer knowledge from clean scenes to real adverse weather. In addition, we further collect a real adverse weather dataset with manually annotated optical flow labels and perform extensive experiments to verify the superiority of the proposed method.<|reference_end|>
|
arxiv
|
@article{zhou2024adverse,
title={Adverse Weather Optical Flow: Cumulative Homogeneous-Heterogeneous
Adaptation},
author={Hanyu Zhou, Yi Chang, Zhiwei Shi, Wending Yan, Gang Chen, Yonghong
Tian, Luxin Yan},
journal={arXiv preprint arXiv:2409.17001},
year={2024},
archivePrefix={arXiv},
eprint={2409.17001},
primaryClass={cs.CV}
}
|
zhou2024adverse
|
arxiv-661868
|
2409.17004
|
Semantically-Driven Disambiguation for Human-Robot Interaction
|
<|reference_start|>Semantically-Driven Disambiguation for Human-Robot Interaction: Ambiguities are common in human-robot interaction, especially when a robot follows user instructions in a large collocated space. For instance, when the user asks the robot to find an object in a home environment, the object might be in several places depending on its varying semantic properties (e.g., a bowl can be in the kitchen cabinet or on the dining room table, depending on whether it is clean/dirty, full/empty and the other objects around it). Previous works on object semantics have predicted such relationships using one shot-inferences which are likely to fail for ambiguous or partially understood instructions. This paper focuses on this gap and suggests a semantically-driven disambiguation approach by utilizing follow-up clarifications to handle such uncertainties. To achieve this, we first obtain semantic knowledge embeddings, and then these embeddings are used to generate clarifying questions by following an iterative process. The evaluation of our method shows that our approach is model agnostic, i.e., applicable to different semantic embedding models, and follow-up clarifications improve the performance regardless of the embedding model. Additionally, our ablation studies show the significance of informative clarifications and iterative predictions to enhance system accuracies.<|reference_end|>
|
arxiv
|
@article{dogan2024semantically-driven,
title={Semantically-Driven Disambiguation for Human-Robot Interaction},
author={Fethiye Irmak Dogan, Weiyu Liu, Iolanda Leite, Sonia Chernova},
journal={arXiv preprint arXiv:2409.17004},
year={2024},
archivePrefix={arXiv},
eprint={2409.17004},
primaryClass={cs.RO}
}
|
dogan2024semantically-driven
|
arxiv-661869
|
2409.17005
|
Models Can and Should Embrace the Communicative Nature of Human-Generated Math
|
<|reference_start|>Models Can and Should Embrace the Communicative Nature of Human-Generated Math: Math is constructed by people for people: just as natural language corpora reflect not just propositions but the communicative goals of language users, the math data that models are trained on reflects not just idealized mathematical entities but rich communicative intentions. While there are important advantages to treating math in a purely symbolic manner, we here hypothesize that there are benefits to treating math as situated linguistic communication and that language models are well suited for this goal, in ways that are not fully appreciated. We illustrate these points with two case studies. First, we ran an experiment in which we found that language models interpret the equals sign in a humanlike way -- generating systematically different word problems for the same underlying equation arranged in different ways. Second, we found that language models prefer proofs to be ordered in naturalistic ways, even though other orders would be logically equivalent. We advocate for AI systems that learn from and represent the communicative intentions latent in human-generated math.<|reference_end|>
|
arxiv
|
@article{boguraev2024models,
title={Models Can and Should Embrace the Communicative Nature of
Human-Generated Math},
author={Sasha Boguraev, Ben Lipkin, Leonie Weissweiler, Kyle Mahowald},
journal={arXiv preprint arXiv:2409.17005},
year={2024},
archivePrefix={arXiv},
eprint={2409.17005},
primaryClass={cs.AI cs.CL}
}
|
boguraev2024models
|
arxiv-661870
|
2409.17010
|
MT2KD: Towards A General-Purpose Encoder for Speech, Speaker, and Audio Events
|
<|reference_start|>MT2KD: Towards A General-Purpose Encoder for Speech, Speaker, and Audio Events: With the advances in deep learning, the performance of end-to-end (E2E) single-task models for speech and audio processing has been constantly improving. However, it is still challenging to build a general-purpose model with high performance on multiple tasks, since different speech and audio processing tasks usually require different training data, input features, or model architectures to achieve optimal performance. In this work, MT2KD, a novel two-stage multi-task learning framework is proposed to build a general-purpose speech and audio encoder that jointly performs three fundamental tasks: automatic speech recognition (ASR), audio tagging (AT) and speaker verification (SV). In the first stage, multi-teacher knowledge distillation (KD) is applied to align the feature spaces of three single-task high-performance teacher encoders into a single student encoder using the same unlabelled data. In the second stage, multi-task supervised fine-tuning is carried out by initialising the model from the first stage and training on the separate labelled data of each single task. Experiments demonstrate that the proposed multi-task training pipeline significantly outperforms a baseline model trained with multi-task learning from scratch. The final system achieves good performance on ASR, AT and SV: with less than 4% relative word-error-rate increase on ASR, only 1.9 lower mean averaged precision on AT and 0.23% absolute higher equal error rate on SV compared to the best-performing single-task encoders, using only a 66M total model parameters.<|reference_end|>
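A minimal sketch of the first-stage idea (aligning one student encoder to several teacher feature spaces) is given below, under the assumption of per-teacher linear projection heads and an L2 distillation loss; the dimensions and loss choice are illustrative, not the paper's configuration.

```python
import numpy as np

def multi_teacher_kd_loss(student_feats, teacher_feats_list, proj_mats):
    """Multi-teacher feature distillation: project the shared student
    features into each teacher's space and penalise the L2 distance."""
    loss = 0.0
    for W, t in zip(proj_mats, teacher_feats_list):
        loss += np.mean((student_feats @ W - t) ** 2)
    return loss / len(teacher_feats_list)

rng = np.random.default_rng(0)
student = rng.normal(size=(10, 256))                    # (frames, student dim)
dims = (512, 768, 192)                                  # e.g. ASR / AT / SV teachers
teachers = [rng.normal(size=(10, d)) for d in dims]
projections = [rng.normal(size=(256, d)) * 0.05 for d in dims]
print(multi_teacher_kd_loss(student, teachers, projections))
```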
|
arxiv
|
@article{yang2024mt2kd:,
title={MT2KD: Towards A General-Purpose Encoder for Speech, Speaker, and Audio
Events},
author={Xiaoyu Yang, Qiujia Li, Chao Zhang, Phil Woodland},
journal={arXiv preprint arXiv:2409.17010},
year={2024},
archivePrefix={arXiv},
eprint={2409.17010},
primaryClass={eess.AS cs.SD}
}
|
yang2024mt2kd:
|
arxiv-661871
|
2409.17011
|
LLM-CARD: Towards a Description and Landscape of Large Language Models
|
<|reference_start|>LLM-CARD: Towards a Description and Landscape of Large Language Models: With the rapid growth of the Natural Language Processing (NLP) field, a vast variety of Large Language Models (LLMs) continue to emerge for diverse NLP tasks. As an increasing number of papers are presented, researchers and developers face the challenge of information overload. Thus, it is particularly important to develop a system that can automatically extract and organise key information about LLMs from academic papers (\textbf{LLM model card}). This work develops such a pioneering system by using Named Entity Recognition (\textbf{NER}) and Relation Extraction (\textbf{RE}) methods that automatically extract key information about large language models from the papers, helping researchers to efficiently access information about LLMs. These features include model \textit{licence}, model \textit{name}, and model \textit{application}. With these features, we can form a model card for each paper. \textbf{Data-contribution} wise, 106 academic papers were processed by defining three dictionaries - LLM name, licence, and application. 11,051 sentences were extracted through dictionary lookup, and the dataset was constructed through manual review of the final selection of 129 sentences that have a link between the name and the licence, and 106 sentences that have a link between the model name and the application. Data and code in \textsc{LLM-Card} are openly hosted at \url{https://github.com/shengwei-tian/dependency-parser-visualization}<|reference_end|>
|
arxiv
|
@article{tian2024llm-card:,
title={LLM-CARD: Towards a Description and Landscape of Large Language Models},
author={Shengwei Tian, Lifeng Han, Erick Mendez Guzman, Goran Nenadic},
journal={arXiv preprint arXiv:2409.17011},
year={2024},
archivePrefix={arXiv},
eprint={2409.17011},
primaryClass={cs.CL cs.DL}
}
|
tian2024llm-card:
|
arxiv-661872
|
2409.17012
|
AI-Driven Risk-Aware Scheduling for Active Debris Removal Missions
|
<|reference_start|>AI-Driven Risk-Aware Scheduling for Active Debris Removal Missions: The proliferation of debris in Low Earth Orbit (LEO) represents a significant threat to space sustainability and spacecraft safety. Active Debris Removal (ADR) has emerged as a promising approach to address this issue, utilising Orbital Transfer Vehicles (OTVs) to facilitate debris deorbiting, thereby reducing future collision risks. However, ADR missions are substantially complex, necessitating accurate planning to make the missions economically viable and technically effective. Moreover, these servicing missions require a high level of autonomous capability to plan under evolving orbital conditions and changing mission requirements. In this paper, an autonomous decision-planning model based on Deep Reinforcement Learning (DRL) is developed to train an OTV to plan optimal debris removal sequencing. It is shown that using the proposed framework, the agent can find optimal mission plans and learn to update the planning autonomously to include risk handling of debris with high collision risk.<|reference_end|>
|
arxiv
|
@article{poupon2024ai-driven,
title={AI-Driven Risk-Aware Scheduling for Active Debris Removal Missions},
author={Antoine Poupon, Hugo de Rohan Willner, Pierre Nikitits, Adam Abdin},
journal={arXiv preprint arXiv:2409.17012},
year={2024},
archivePrefix={arXiv},
eprint={2409.17012},
primaryClass={cs.AI}
}
|
poupon2024ai-driven
|
arxiv-661873
|
2409.17015
|
Investigations on Algorithm Selection for Interval-Based Coding Methods
|
<|reference_start|>Investigations on Algorithm Selection for Interval-Based Coding Methods: There is a class of entropy-coding methods which do not substitute symbols by code words (such as Huffman coding), but operate on intervals or ranges. This class includes three prominent members: conventional arithmetic coding, range coding, and coding based on asymmetric numeral systems. To determine the correct symbol in the decoder, each of these methods requires the comparison of a state variable with subinterval boundaries. In adaptive operation, considering varying symbol statistics, an array of interval boundaries must additionally be kept up to date. The larger the symbol alphabet, the more time-consuming both the search for the correct subinterval and the updating of interval borders become. Detailed pseudo-code is used to discuss different approaches to speed up the symbol search in the decoder and the adaptation of the array of interval borders, both depending on the chosen alphabet size. It is shown that reducing the $\mathcal{O}$-complexity does not lead to an acceleration in practical implementations if the alphabet size is too small. In adaptive compression mode, the binary indexing method proves to be superior when considering the overall processing time. Although the symbol search (in the decoder) takes longer than with other algorithms, the faster updating of the array of interval borders more than compensates for this disadvantage. A variant of the binary indexing method is proposed, which is more flexible and has a partially lower complexity than the original approach.<|reference_end|>
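The "binary indexing method" for maintaining the array of interval borders corresponds to a Fenwick (binary indexed) tree over symbol frequencies: adaptive frequency updates and the decoder's subinterval search both cost O(log N). The sketch below shows a generic Fenwick tree of this kind; it is an assumption about the flavour of structure meant, not the paper's exact variant.

```python
class Fenwick:
    """Binary-indexed tree over symbol frequencies: O(log N) frequency
    updates and O(log N) search for the symbol whose cumulative-frequency
    interval contains a given value (the decoder's state)."""
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def add(self, sym, delta):            # adapt the model: freq[sym] += delta
        i = sym + 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i

    def cumfreq(self, sym):               # sum of freq[0..sym-1]
        i, s = sym, 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i
        return s

    def find(self, value):                # symbol s with cumfreq(s) <= value
        pos, rem = 0, value               # and cumfreq(s+1) > value
        step = 1 << self.n.bit_length()
        while step:
            nxt = pos + step
            if nxt <= self.n and self.tree[nxt] <= rem:
                rem -= self.tree[nxt]
                pos = nxt
            step >>= 1
        return pos                        # zero-based symbol index

f = Fenwick(256)
for s in (65, 66, 65, 67):
    f.add(s, 1)
print(f.cumfreq(66), f.find(2))           # -> 2 (frequency below 'B'), symbol 66
```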
|
arxiv
|
@article{strutz2024investigations,
title={Investigations on Algorithm Selection for Interval-Based Coding Methods},
author={Tilo Strutz and Nico Schreiber},
journal={arXiv preprint arXiv:2409.17015},
year={2024},
archivePrefix={arXiv},
eprint={2409.17015},
primaryClass={cs.IT cs.DS math.IT}
}
|
strutz2024investigations
|
arxiv-661874
|
2409.17016
|
CNN Mixture-of-Depths
|
<|reference_start|>CNN Mixture-of-Depths: We introduce Mixture-of-Depths (MoD) for Convolutional Neural Networks (CNNs), a novel approach that enhances the computational efficiency of CNNs by selectively processing channels based on their relevance to the current prediction. This method optimizes computational resources by dynamically selecting key channels in feature maps for focused processing within the convolutional blocks (Conv-Blocks), while skipping less relevant channels. Unlike conditional computation methods that require dynamic computation graphs, CNN MoD uses a static computation graph with fixed tensor sizes, which improves hardware efficiency. It speeds up the training and inference processes without the need for customized CUDA kernels, unique loss functions, or finetuning. CNN MoD either matches the performance of traditional CNNs with reduced inference times, GMACs, and parameters, or exceeds their performance while maintaining similar inference times, GMACs, and parameters. For example, on ImageNet, ResNet86-MoD exceeds the performance of the standard ResNet50 by 0.45% with a 6% speedup on CPU and 5% on GPU. Moreover, ResNet75-MoD achieves the same performance as ResNet50 with a 25% speedup on CPU and 15% on GPU.<|reference_end|>
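A minimal numpy sketch of the channel-selection mechanism described above: a lightweight router scores channels, a fixed number k of them is gathered, processed, and scattered back, and the remaining channels skip the Conv-Block; the fixed k keeps tensor shapes static. The router and processing function here are placeholders, not the paper's design.

```python
import numpy as np

def mod_block(x, k, router_w, process):
    """Mixture-of-Depths for a Conv-Block: process only the k most relevant
    channels of x (N, C, H, W); all other channels pass through unchanged.
    `router_w` (C,) scores channels from their global-average activations."""
    n, c, h, w = x.shape
    scores = x.mean(axis=(2, 3)) * router_w          # (N, C) channel relevance
    topk = np.argsort(-scores, axis=1)[:, :k]        # fixed k -> static shapes
    out = x.copy()
    for i in range(n):                               # gather, process, scatter
        sel = topk[i]
        out[i, sel] = process(x[i, sel])
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 16, 8, 8)).astype(np.float32)
router = rng.normal(size=16).astype(np.float32)
y = mod_block(x, k=4, router_w=router, process=lambda t: np.maximum(t, 0))
print(y.shape)   # (2, 16, 8, 8): same tensor size, only 4 channels processed
```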
|
arxiv
|
@article{cakaj2024cnn,
title={CNN Mixture-of-Depths},
author={Rinor Cakaj, Jens Mehnert, Bin Yang},
journal={arXiv preprint arXiv:2409.17016},
year={2024},
archivePrefix={arXiv},
eprint={2409.17016},
primaryClass={cs.CV cs.LG}
}
|
cakaj2024cnn
|
arxiv-661875
|
2409.17020
|
PTQ4RIS: Post-Training Quantization for Referring Image Segmentation
|
<|reference_start|>PTQ4RIS: Post-Training Quantization for Referring Image Segmentation: Referring Image Segmentation (RIS) aims to segment the object referred to by a given sentence in an image by understanding both visual and linguistic information. However, existing RIS methods tend to explore top-performance models, disregarding considerations for practical applications on resource-limited edge devices. This oversight poses a significant challenge for on-device RIS inference. To this end, we propose an effective and efficient post-training quantization framework termed PTQ4RIS. Specifically, we first conduct an in-depth analysis of the root causes of performance degradation in RIS model quantization and propose dual-region quantization (DRQ) and reorder-based outlier-retained quantization (RORQ) to address the quantization difficulties in visual and text encoders. Extensive experiments on three benchmarks with different bit settings (from 8 to 4 bits) demonstrate its superior performance. Importantly, ours is the first PTQ method specifically designed for the RIS task, highlighting the feasibility of PTQ in RIS applications. Code will be available at {https://github.com/gugu511yy/PTQ4RIS}.<|reference_end|>
|
arxiv
|
@article{jiang2024ptq4ris:,
title={PTQ4RIS: Post-Training Quantization for Referring Image Segmentation},
author={Xiaoyan Jiang, Hang Yang, Kaiying Zhu, Xihe Qiu, Shibo Zhao, Sifan
Zhou},
journal={arXiv preprint arXiv:2409.17020},
year={2024},
archivePrefix={arXiv},
eprint={2409.17020},
primaryClass={cs.CV}
}
|
jiang2024ptq4ris:
|
arxiv-661876
|
2409.17021
|
CombU: A Combined Unit Activation for Fitting Mathematical Expressions with Neural Networks
|
<|reference_start|>CombU: A Combined Unit Activation for Fitting Mathematical Expressions with Neural Networks: The activation functions are fundamental to neural networks as they introduce non-linearity into data relationships, thereby enabling deep networks to approximate complex data relations. Existing efforts to enhance neural network performance have predominantly focused on developing new mathematical functions. However, we find that a well-designed combination of existing activation functions within a neural network can also achieve this objective. In this paper, we introduce the Combined Units activation (CombU), which employs different activation functions at various dimensions across different layers. This approach can be theoretically proven to fit most mathematical expressions accurately. The experiments conducted on four mathematical expression datasets, compared against six State-Of-The-Art (SOTA) activation function algorithms, demonstrate that CombU outperforms all SOTA algorithms in 10 out of 16 metrics and ranks in the top three for the remaining six metrics.<|reference_end|>
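A minimal sketch of a combined-unit activation in the spirit described above: different slices of the feature dimension pass through different existing activations. The particular split ratios and function choices are illustrative assumptions, not the paper's tuned configuration.

```python
import numpy as np

def comb_u(x, splits=(0.4, 0.3, 0.3)):
    """Apply different activations to different dimension slices of x:
    ReLU on the first block, ELU on the second, identity on the rest."""
    d = x.shape[-1]
    b1 = int(d * splits[0])
    b2 = b1 + int(d * splits[1])
    relu = np.maximum(x[..., :b1], 0)
    neg = x[..., b1:b2]
    elu = np.where(neg > 0, neg, np.expm1(neg))       # ELU with alpha = 1
    identity = x[..., b2:]
    return np.concatenate([relu, elu, identity], axis=-1)

h = np.linspace(-2, 2, 10).reshape(1, 10)
print(np.round(comb_u(h), 3))
```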
|
arxiv
|
@article{li2024combu:,
title={CombU: A Combined Unit Activation for Fitting Mathematical Expressions
with Neural Networks},
author={Jiayu Li, Zilong Zhao, Kevin Yee, Uzair Javaid, Biplab Sikdar},
journal={arXiv preprint arXiv:2409.17021},
year={2024},
archivePrefix={arXiv},
eprint={2409.17021},
primaryClass={cs.LG}
}
|
li2024combu:
|
arxiv-661877
|
2409.17023
|
Enhanced Wavelet Scattering Network for image inpainting detection
|
<|reference_start|>Enhanced Wavelet Scattering Network for image inpainting detection: The rapid advancement of image inpainting tools, especially those aimed at removing artifacts, has made digital image manipulation alarmingly accessible. This paper proposes several innovative ideas for detecting inpainting forgeries based on low-level noise analysis by combining Dual-Tree Complex Wavelet Transform (DT-CWT) for feature extraction with convolutional neural networks (CNN) for forged area detection and localization, and lastly by employing an innovative combination of texture segmentation with noise variance estimations. The DT-CWT offers significant advantages due to its shift-invariance, enhancing its robustness against subtle manipulations during the inpainting process. Furthermore, its directional selectivity allows for the detection of subtle artifacts introduced by inpainting within specific frequency bands and orientations. Various neural network architectures were evaluated and proposed. Lastly, we propose a fusion detection module that combines texture analysis with noise variance estimation to give the forged area. Our approach was benchmarked against state-of-the-art methods and demonstrated superior performance over all cited alternatives. The training code (with pretrained model weights) as well as the dataset will be available at https://github.com/jmaba/Deep-dual-tree-complex-neural-network-for-image-inpainting-detection<|reference_end|>
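As a small illustration of the noise-variance-estimation ingredient mentioned above (not the paper's DT-CWT pipeline), the sketch below estimates a block-wise noise level from the high-frequency residual with a robust MAD estimator; inpainted regions often exhibit a noise level inconsistent with their surroundings.

```python
import numpy as np

def blockwise_noise_std(img, block=32):
    """Estimate the local noise standard deviation per block from the
    high-frequency residual (image minus a 3x3 box-filtered copy),
    using the robust median-absolute-deviation estimator."""
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    smooth = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    resid = img - smooth
    out = np.zeros((h // block, w // block))
    for bi in range(h // block):
        for bj in range(w // block):
            r = resid[bi * block:(bi + 1) * block, bj * block:(bj + 1) * block]
            out[bi, bj] = 1.4826 * np.median(np.abs(r - np.median(r)))
    return out

img = np.random.default_rng(0).normal(0, 0.05, size=(128, 128))
img[:64, :64] += np.random.default_rng(1).normal(0, 0.15, size=(64, 64))  # noisier quadrant
print(np.round(blockwise_noise_std(img), 3))   # the inconsistent blocks stand out
```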
|
arxiv
|
@article{adrian-alin2024enhanced,
title={Enhanced Wavelet Scattering Network for image inpainting detection},
author={Barglazan Adrian-Alin and Brad Remus},
journal={arXiv preprint arXiv:2409.17023},
year={2024},
archivePrefix={arXiv},
eprint={2409.17023},
primaryClass={cs.CV}
}
|
adrian-alin2024enhanced
|
arxiv-661878
|
2409.17025
|
Automated Surgical Skill Assessment in Endoscopic Pituitary Surgery using Real-time Instrument Tracking on a High-fidelity Bench-top Phantom
|
<|reference_start|>Automated Surgical Skill Assessment in Endoscopic Pituitary Surgery using Real-time Instrument Tracking on a High-fidelity Bench-top Phantom: Improved surgical skill is generally associated with improved patient outcomes, although assessment is subjective; labour-intensive; and requires domain specific expertise. Automated data driven metrics can alleviate these difficulties, as demonstrated by existing machine learning instrument tracking models in minimally invasive surgery. However, these models have been tested on limited datasets of laparoscopic surgery, with a focus on isolated tasks and robotic surgery. In this paper, a new public dataset is introduced, focusing on simulated surgery, using the nasal phase of endoscopic pituitary surgery as an exemplar. Simulated surgery allows for a realistic yet repeatable environment, meaning the insights gained from automated assessment can be used by novice surgeons to hone their skills on the simulator before moving to real surgery. PRINTNet (Pituitary Real-time INstrument Tracking Network) has been created as a baseline model for this automated assessment. Consisting of DeepLabV3 for classification and segmentation; StrongSORT for tracking; and the NVIDIA Holoscan SDK for real-time performance, PRINTNet achieved 71.9% Multiple Object Tracking Precision running at 22 Frames Per Second. Using this tracking output, a Multilayer Perceptron achieved 87% accuracy in predicting surgical skill level (novice or expert), with the "ratio of total procedure time to instrument visible time" correlated with higher surgical skill. This therefore demonstrates the feasibility of automated surgical skill assessment in simulated endoscopic pituitary surgery. The new publicly available dataset can be found here: https://doi.org/10.5522/04/26511049.<|reference_end|>
|
arxiv
|
@article{das2024automated,
title={Automated Surgical Skill Assessment in Endoscopic Pituitary Surgery
using Real-time Instrument Tracking on a High-fidelity Bench-top Phantom},
author={Adrito Das, Bilal Sidiqi, Laurent Mennillo, Zhehua Mao, Mikael
Brudfors, Miguel Xochicale, Danyal Z. Khan, Nicola Newall, John G. Hanrahan,
Matthew J. Clarkson, Danail Stoyanov, Hani J. Marcus, Sophia Bano},
journal={arXiv preprint arXiv:2409.17025},
year={2024},
archivePrefix={arXiv},
eprint={2409.17025},
primaryClass={eess.IV cs.CV}
}
|
das2024automated
|
arxiv-661879
|
2409.17027
|
Counterfactual Token Generation in Large Language Models
|
<|reference_start|>Counterfactual Token Generation in Large Language Models: "Sure, I am happy to generate a story for you: Captain Lyra stood at the helm of her trusty ship, the Maelstrom's Fury, gazing out at the endless sea. [...] Lyra's eyes welled up with tears as she realized the bitter truth - she had sacrificed everything for fleeting riches, and lost the love of her crew, her family, and herself." Although this story, generated by a large language model, is captivating, one may wonder -- how would the story have unfolded if the model had chosen "Captain Maeve" as the protagonist instead? We cannot know. State-of-the-art large language models are stateless -- they maintain no internal memory or state. Given a prompt, they generate a sequence of tokens as an output using an autoregressive process. As a consequence, they cannot reason about counterfactual alternatives to tokens they have generated in the past. In this work, our goal is to enhance them with this functionality. To this end, we develop a causal model of token generation that builds upon the Gumbel-Max structural causal model. Our model allows any large language model to perform counterfactual token generation at almost no cost in comparison with vanilla token generation, it is embarrassingly simple to implement, and it does not require any fine-tuning nor prompt engineering. We implement our model on Llama 3 8B-instruct and conduct both qualitative and quantitative analyses of counterfactually generated text. We conclude with a demonstrative application of counterfactual token generation for bias detection, unveiling interesting insights about the model of the world constructed by large language models.<|reference_end|>
|
arxiv
|
@article{chatzi2024counterfactual,
title={Counterfactual Token Generation in Large Language Models},
author={Ivi Chatzi, Nina Corvelo Benz, Eleni Straitouri, Stratis Tsirtsis,
Manuel Gomez-Rodriguez},
journal={arXiv preprint arXiv:2409.17027},
year={2024},
archivePrefix={arXiv},
eprint={2409.17027},
primaryClass={cs.LG cs.AI cs.CL}
}
|
chatzi2024counterfactual
|
arxiv-661880
|
2409.17029
|
EventHDR: from Event to High-Speed HDR Videos and Beyond
|
<|reference_start|>EventHDR: from Event to High-Speed HDR Videos and Beyond: Event cameras are innovative neuromorphic sensors that asynchronously capture the scene dynamics. Due to the event-triggering mechanism, such cameras record event streams with much shorter response latency and higher intensity sensitivity compared to conventional cameras. On the basis of these features, previous works have attempted to reconstruct high dynamic range (HDR) videos from events, but have either suffered from unrealistic artifacts or failed to provide sufficiently high frame rates. In this paper, we present a recurrent convolutional neural network that reconstructs high-speed HDR videos from event sequences, with a key frame guidance to prevent potential error accumulation caused by the sparse event data. Additionally, to address the problem of severely limited real datasets, we develop a new optical system to collect a real-world dataset with paired high-speed HDR videos and event streams, facilitating future research in this field. Our dataset provides the first real paired dataset for event-to-HDR reconstruction, avoiding potential inaccuracies from simulation strategies. Experimental results demonstrate that our method can generate high-quality, high-speed HDR videos. We further explore the potential of our work in cross-camera reconstruction and downstream computer vision tasks, including object detection, panoramic segmentation, optical flow estimation, and monocular depth estimation under HDR scenarios.<|reference_end|>
|
arxiv
|
@article{zou2024eventhdr:,
title={EventHDR: from Event to High-Speed HDR Videos and Beyond},
author={Yunhao Zou, Ying Fu, Tsuyoshi Takatani, Yinqiang Zheng},
journal={arXiv preprint arXiv:2409.17029},
year={2024},
archivePrefix={arXiv},
eprint={2409.17029},
primaryClass={cs.CV}
}
|
zou2024eventhdr:
|
arxiv-661881
|
2409.17032
|
Space-Based Quantum Internet: Entanglement Distribution in Time-Varying LEO Constellations
|
<|reference_start|>Space-Based Quantum Internet: Entanglement Distribution in Time-Varying LEO Constellations: This paper addresses the complexities of entanglement distribution in LEO satellite networks, particularly those arising from their dynamic topology. Traditional static and dynamic entanglement distribution methods often result in high entanglement drop rates and reduced end-to-end throughput. We introduce a novel framework that leverages the dynamic nature of LEO satellite networks to enhance entanglement distribution efficiency. Employing a space-time graph model to represent the network's temporal evolution, we propose an entanglement distribution strategy based on path utility, incorporating pointing errors, non-ideal link transmittance for intersatellite links, and atmospheric effects for downlinks. Our approach demonstrates superior performance in reducing entanglement drop rates and improving throughput compared to conventional methods. This study advances the field of quantum communication in satellite networks, offering resilient and efficient entanglement distribution strategies that support practical applications such as distributed computing, quantum multipartite cryptography, and distributed quantum sensing. The findings underscore the potential of integrating dynamic satellite networks with quantum technologies to create a reliable and secure quantum internet.<|reference_end|>
|
arxiv
|
@article{koudia2024space-based,
title={Space-Based Quantum Internet: Entanglement Distribution in Time-Varying
LEO Constellations},
author={Seid Koudia, Junaid ur Rehman and Symeon Chatzinotas},
journal={arXiv preprint arXiv:2409.17032},
year={2024},
archivePrefix={arXiv},
eprint={2409.17032},
primaryClass={quant-ph cs.NI}
}
|
koudia2024space-based
|
arxiv-661882
|
2409.17044
|
How to Connect Speech Foundation Models and Large Language Models? What Matters and What Does Not
|
<|reference_start|>How to Connect Speech Foundation Models and Large Language Models? What Matters and What Does Not: The remarkable performance achieved by Large Language Models (LLM) has driven research efforts to leverage them for a wide range of tasks and input modalities. In speech-to-text (S2T) tasks, the emerging solution consists of projecting the output of the encoder of a Speech Foundational Model (SFM) into the LLM embedding space through an adapter module. However, no work has yet investigated how much the downstream-task performance depends on each component (SFM, adapter, LLM) nor whether the best design of the adapter depends on the chosen SFM and LLM. To fill this gap, we evaluate the combination of 5 adapter modules, 2 LLMs (Mistral and Llama), and 2 SFMs (Whisper and SeamlessM4T) on two widespread S2T tasks, namely Automatic Speech Recognition and Speech Translation. Our results demonstrate that the SFM plays a pivotal role in downstream performance, while the adapter choice has moderate impact and depends on the SFM and LLM.<|reference_end|>
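For concreteness, a minimal numpy sketch of the adapter wiring studied here: the SFM encoder output is length-downsampled and linearly projected into the LLM embedding space, then concatenated with the embedded text prompt. The stacking-based adapter and all dimensions are illustrative assumptions, not one of the paper's five adapters specifically.

```python
import numpy as np

def linear_adapter(speech_feats, W, stride=4):
    """Project SFM encoder outputs (T, d_sfm) into the LLM embedding space
    (T//stride, d_llm): stack `stride` consecutive frames, then apply one
    linear map -- one of the simplest possible adapter designs."""
    T, d = speech_feats.shape
    T = (T // stride) * stride
    stacked = speech_feats[:T].reshape(T // stride, stride * d)
    return stacked @ W                            # (T//stride, d_llm)

rng = np.random.default_rng(0)
sfm_out = rng.normal(size=(100, 1024))            # e.g. a Whisper-like encoder output
W = rng.normal(size=(4 * 1024, 4096)) * 0.01      # projection to the LLM embedding dim
prompt_embeds = rng.normal(size=(12, 4096))       # embedded text instruction
llm_input = np.concatenate([linear_adapter(sfm_out, W), prompt_embeds], axis=0)
print(llm_input.shape)                            # (37, 4096): speech then text tokens
```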
|
arxiv
|
@article{verdini2024how,
title={How to Connect Speech Foundation Models and Large Language Models? What
Matters and What Does Not},
author={Francesco Verdini, Pierfrancesco Melucci, Stefano Perna, Francesco
Cariaggi, Marco Gaido, Sara Papi, Szymon Mazurek, Marek Kasztelnik, Luisa
Bentivogli, Sébastien Bratières, Paolo Merialdo, Simone Scardapane},
journal={arXiv preprint arXiv:2409.17044},
year={2024},
archivePrefix={arXiv},
eprint={2409.17044},
primaryClass={cs.CL cs.AI cs.LG}
}
|
verdini2024how
|
arxiv-661883
|
2409.17045
|
GeoBiked: A Dataset with Geometric Features and Automated Labeling Techniques to Enable Deep Generative Models in Engineering Design
|
<|reference_start|>GeoBiked: A Dataset with Geometric Features and Automated Labeling Techniques to Enable Deep Generative Models in Engineering Design: We provide a dataset for enabling Deep Generative Models (DGMs) in engineering design and propose methods to automate data labeling by utilizing large-scale foundation models. GeoBiked is curated to contain 4 355 bicycle images, annotated with structural and technical features and is used to investigate two automated labeling techniques: The utilization of consolidated latent features (Hyperfeatures) from image-generation models to detect geometric correspondences (e.g. the position of the wheel center) in structural images and the generation of diverse text descriptions for structural images. GPT-4o, a vision-language-model (VLM), is instructed to analyze images and produce diverse descriptions aligned with the system-prompt. By representing technical images as Diffusion-Hyperfeatures, drawing geometric correspondences between them is possible. The detection accuracy of geometric points in unseen samples is improved by presenting multiple annotated source images. GPT-4o has sufficient capabilities to generate accurate descriptions of technical images. Grounding the generation only on images leads to diverse descriptions but causes hallucinations, while grounding it on categorical labels restricts the diversity. Using both as input balances creativity and accuracy. Successfully using Hyperfeatures for geometric correspondence suggests that this approach can be used for general point-detection and annotation tasks in technical images. Labeling such images with text descriptions using VLMs is possible, but dependent on the models detection capabilities, careful prompt-engineering and the selection of input information. Applying foundation models in engineering design is largely unexplored. We aim to bridge this gap with a dataset to explore training, finetuning and conditioning DGMs in this field and suggesting approaches to bootstrap foundation models to process technical images.<|reference_end|>
|
arxiv
|
@article{mueller2024geobiked:,
title={GeoBiked: A Dataset with Geometric Features and Automated Labeling
Techniques to Enable Deep Generative Models in Engineering Design},
author={Phillip Mueller, Sebastian Mueller, Lars Mikelsons},
journal={arXiv preprint arXiv:2409.17045},
year={2024},
archivePrefix={arXiv},
eprint={2409.17045},
primaryClass={cs.CV cs.AI}
}
|
mueller2024geobiked:
|
arxiv-661884
|
2409.17046
|
Detecting Temporal Ambiguity in Questions
|
<|reference_start|>Detecting Temporal Ambiguity in Questions: Detecting and answering ambiguous questions has been a challenging task in open-domain question answering. Ambiguous questions have different answers depending on their interpretation and can take diverse forms. Temporally ambiguous questions are one of the most common types of such questions. In this paper, we introduce TEMPAMBIQA, a manually annotated temporally ambiguous QA dataset consisting of 8,162 open-domain questions derived from existing datasets. Our annotations focus on capturing temporal ambiguity to study the task of detecting temporally ambiguous questions. We propose a novel approach by using diverse search strategies based on disambiguated versions of the questions. We also introduce and test non-search, competitive baselines for detecting temporal ambiguity using zero-shot and few-shot approaches.<|reference_end|>
|
arxiv
|
@article{piryani2024detecting,
title={Detecting Temporal Ambiguity in Questions},
author={Bhawna Piryani, Abdelrahman Abdallah, Jamshid Mozafari, Adam Jatowt},
journal={arXiv preprint arXiv:2409.17046},
year={2024},
archivePrefix={arXiv},
eprint={2409.17046},
primaryClass={cs.CL}
}
|
piryani2024detecting
|
arxiv-661885
|
2409.17048
|
Predictive Covert Communication Against Multi-UAV Surveillance Using Graph Koopman Autoencoder
|
<|reference_start|>Predictive Covert Communication Against Multi-UAV Surveillance Using Graph Koopman Autoencoder: Low Probability of Detection (LPD) communication aims to obscure the presence of radio frequency (RF) signals to evade surveillance. In the context of mobile surveillance utilizing unmanned aerial vehicles (UAVs), achieving LPD communication presents significant challenges due to the UAVs' rapid and continuous movements, which are characterized by unknown nonlinear dynamics. Therefore, accurately predicting future locations of UAVs is essential for enabling real-time LPD communication. In this paper, we introduce a novel framework termed predictive covert communication, aimed at minimizing detectability in terrestrial ad-hoc networks under multi-UAV surveillance. Our data-driven method synergistically integrates graph neural networks (GNN) with Koopman theory to model the complex interactions within a multi-UAV network and facilitating long-term predictions by linearizing the dynamics, even with limited historical data. Extensive simulation results substantiate that the predicted trajectories using our method result in at least 63%-75% lower probability of detection when compared to well-known state-of-the-art baseline approaches, showing promise in enabling low-latency covert operations in practical scenarios.<|reference_end|>
|
arxiv
|
@article{krishnan2024predictive,
title={Predictive Covert Communication Against Multi-UAV Surveillance Using
Graph Koopman Autoencoder},
author={Sivaram Krishnan, Jihong Park, Gregory Sherman, Benjamin Campbell and
Jinho Choi},
journal={arXiv preprint arXiv:2409.17048},
year={2024},
archivePrefix={arXiv},
eprint={2409.17048},
primaryClass={cs.LG cs.NI eess.SP}
}
|
krishnan2024predictive
|
arxiv-661886
|
2409.17049
|
ControlCity: A Multimodal Diffusion Model Based Approach for Accurate Geospatial Data Generation and Urban Morphology Analysis
|
<|reference_start|>ControlCity: A Multimodal Diffusion Model Based Approach for Accurate Geospatial Data Generation and Urban Morphology Analysis: Volunteer Geographic Information (VGI), with its rich variety, large volume, rapid updates, and diverse sources, has become a critical source of geospatial data. However, VGI data from platforms like OSM exhibit significant quality heterogeneity across different data types, particularly with urban building data. To address this, we propose a multi-source geographic data transformation solution, utilizing accessible and complete VGI data to assist in generating urban building footprint data. We also employ a multimodal data generation framework to improve accuracy. First, we introduce a pipeline for constructing an 'image-text-metadata-building footprint' dataset, primarily based on road network data and supplemented by other multimodal data. We then present ControlCity, a geographic data transformation method based on a multimodal diffusion model. This method first uses a pre-trained text-to-image model to align text, metadata, and building footprint data. An improved ControlNet further integrates road network and land-use imagery, producing refined building footprint data. Experiments across 22 global cities demonstrate that ControlCity successfully simulates real urban building patterns, achieving state-of-the-art performance. Specifically, our method achieves an average FID score of 50.94, reducing error by 71.01% compared to leading methods, and a MIoU score of 0.36, an improvement of 38.46%. Additionally, our model excels in tasks like urban morphology transfer, zero-shot city generation, and spatial data completeness assessment. In the zero-shot city task, our method accurately predicts and generates similar urban structures, demonstrating strong generalization. This study confirms the effectiveness of our approach in generating urban building footprint data and capturing complex city characteristics.<|reference_end|>
|
arxiv
|
@article{zhou2024controlcity:,
title={ControlCity: A Multimodal Diffusion Model Based Approach for Accurate
Geospatial Data Generation and Urban Morphology Analysis},
author={Fangshuo Zhou, Huaxia Li, Rui Hu, Sensen Wu, Hailin Feng, Zhenhong Du
and Liuchang Xu},
journal={arXiv preprint arXiv:2409.17049},
year={2024},
archivePrefix={arXiv},
eprint={2409.17049},
primaryClass={cs.CV cs.AI}
}
|
zhou2024controlcity:
|
arxiv-661887
|
2409.17054
|
Using LLM for Real-Time Transcription and Summarization of Doctor-Patient Interactions into ePuskesmas in Indonesia
|
<|reference_start|>Using LLM for Real-Time Transcription and Summarization of Doctor-Patient Interactions into ePuskesmas in Indonesia: One of the key issues contributing to inefficiency in Puskesmas is the time-consuming nature of doctor-patient interactions. Doctors need to conduct thorough consultations, which include diagnosing the patient's condition, providing treatment advice, and transcribing detailed notes into medical records. In regions with diverse linguistic backgrounds, doctors often have to ask clarifying questions, further prolonging the process. While diagnosing is essential, transcription and summarization can often be automated using AI to improve time efficiency and help doctors enhance care quality and enable early diagnosis and intervention. This paper proposes a solution using a localized large language model (LLM) to transcribe, translate, and summarize doctor-patient conversations. We utilize the Whisper model for transcription and GPT-3 to summarize the transcripts into the ePuskesmas medical records format. This system is implemented as an add-on to an existing web browser extension, allowing doctors to fill out patient forms while talking. By leveraging this solution for real-time transcription, translation, and summarization, doctors can improve the turnaround time for patient care while enhancing the quality of records, which become more detailed and insightful for future visits. This innovation addresses challenges like overcrowded facilities and the administrative burden on healthcare providers in Indonesia. We believe this solution will help doctors save time, provide better care, and produce more accurate medical records, representing a significant step toward modernizing healthcare and ensuring patients receive timely, high-quality care, even in resource-constrained settings.<|reference_end|>
|
arxiv
|
@article{irfan2024using,
title={Using LLM for Real-Time Transcription and Summarization of
Doctor-Patient Interactions into ePuskesmas in Indonesia},
author={Azmul Asmar Irfan, Nur Ahmad Khatim, and Mansur M. Arief},
journal={arXiv preprint arXiv:2409.17054},
year={2024},
archivePrefix={arXiv},
eprint={2409.17054},
primaryClass={cs.AI cs.CL cs.SD eess.AS}
}
|
irfan2024using
|
arxiv-661888
|
2409.17055
|
DRIM: Learning Disentangled Representations from Incomplete Multimodal Healthcare Data
|
<|reference_start|>DRIM: Learning Disentangled Representations from Incomplete Multimodal Healthcare Data: Real-life medical data is often multimodal and incomplete, fueling the growing need for advanced deep learning models capable of integrating them efficiently. The use of diverse modalities, including histopathology slides, MRI, and genetic data, offers unprecedented opportunities to improve prognosis prediction and to unveil new treatment pathways. Contrastive learning, widely used for deriving representations from paired data in multimodal tasks, assumes that different views contain the same task-relevant information and leverages only shared information. This assumption becomes restrictive when handling medical data since each modality also harbors specific knowledge relevant to downstream tasks. We introduce DRIM, a new multimodal method for capturing these shared and unique representations, despite data sparsity. More specifically, given a set of modalities, we aim to encode a representation for each one that can be divided into two components: one encapsulating patient-related information common across modalities and the other, encapsulating modality-specific details. This is achieved by increasing the shared information among different patient modalities while minimizing the overlap between shared and unique components within each modality. Our method outperforms state-of-the-art algorithms on glioma patients survival prediction tasks, while being robust to missing modalities. To promote reproducibility, the code is made publicly available at https://github.com/Lucas-rbnt/DRIM<|reference_end|>
|
arxiv
|
@article{robinet2024drim:,
title={DRIM: Learning Disentangled Representations from Incomplete Multimodal
Healthcare Data},
author={Lucas Robinet, Ahmad Berjaoui, Ziad Kheil, Elizabeth Cohen-Jonathan
Moyal},
journal={arXiv preprint arXiv:2409.17055},
year={2024},
archivePrefix={arXiv},
eprint={2409.17055},
primaryClass={cs.AI cs.LG}
}
|
robinet2024drim:
|
arxiv-661889
|
2409.17056
|
A Novel MOSFET based Single Event Latchup Detection, Current Limiting & Self Power Cycling circuit for Spacecraft systems
|
<|reference_start|>A Novel MOSFET based Single Event Latchup Detection, Current Limiting & Self Power Cycling circuit for Spacecraft systems: Single Event Latch-up (SEL) is one of the prime concerns for CMOS ICs used in space systems. Galactic Cosmic Rays or Solar Energetic Particles (SEP) may trigger the parasitic latch up circuit in CMOS ICs and cause increase in current beyond the safe limits thereby presenting a threat of permanent failure of the IC. Mitigation of the SEL is always a challenging task. The conventional mitigation approaches inherently introduce some response time which presents an uncertainty because during this response time the current may exceed the safe current limits. This paper presents a novel circuit based on MOSFETs which provides end-to-end complete solution of detecting SEL, limiting the current below the set threshold and executing power cycling to restore the normal functioning of the CMOS IC. The proposed circuit has been simulated in MULTISIM and the simulation results match very well with the expected behavior of (i)current limiting and (ii) the total time duration taken in power cycling to bring the SEL sensitive device back to its normal operational state. This circuit can be harnessed by spacecraft system designers to overcome the catastrophic threat of SEL posed by space radiation environment.<|reference_end|>
|
arxiv
|
@article{pandey2024a,
title={A Novel MOSFET based Single Event Latchup Detection, Current Limiting &
Self Power Cycling circuit for Spacecraft systems},
author={Ishan Pandey, Kinshuk Gupta, Vinod Kumar, A.R. Khan, Sandhya V. Kamat},
journal={arXiv preprint arXiv:2409.17056},
year={2024},
archivePrefix={arXiv},
eprint={2409.17056},
primaryClass={eess.SY cs.SY}
}
|
pandey2024a
|
arxiv-661890
|
2409.17058
|
Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors
|
<|reference_start|>Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors: Diffusion-based image super-resolution (SR) methods have achieved remarkable success by leveraging large pre-trained text-to-image diffusion models as priors. However, these methods still face two challenges: the requirement for dozens of sampling steps to achieve satisfactory results, which limits efficiency in real scenarios, and the neglect of degradation models, which are critical auxiliary information in solving the SR problem. In this work, we introduce a novel one-step SR model, which significantly addresses the efficiency issue of diffusion-based SR methods. Unlike existing fine-tuning strategies, we design a degradation-guided Low-Rank Adaptation (LoRA) module specifically for SR, which corrects the model parameters based on the pre-estimated degradation information from low-resolution images. This module not only facilitates a powerful data-dependent or degradation-dependent SR model but also preserves the generative prior of the pre-trained diffusion model as much as possible. Furthermore, we tailor a novel training pipeline by introducing an online negative sample generation strategy. Combined with the classifier-free guidance strategy during inference, it largely improves the perceptual quality of the super-resolution results. Extensive experiments have demonstrated the superior efficiency and effectiveness of the proposed model compared to recent state-of-the-art methods.<|reference_end|>
|
arxiv
|
@article{zhang2024degradation-guided,
title={Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors},
author={Aiping Zhang, Zongsheng Yue, Renjing Pei, Wenqi Ren, Xiaochun Cao},
journal={arXiv preprint arXiv:2409.17058},
year={2024},
archivePrefix={arXiv},
eprint={2409.17058},
primaryClass={cs.CV}
}
|
zhang2024degradation-guided
|
arxiv-661891
|
2409.17063
|
Benchmarking Domain Generalization Algorithms in Computational Pathology
|
<|reference_start|>Benchmarking Domain Generalization Algorithms in Computational Pathology: Deep learning models have shown immense promise in computational pathology (CPath) tasks, but their performance often suffers when applied to unseen data due to domain shifts. Addressing this requires domain generalization (DG) algorithms. However, a systematic evaluation of DG algorithms in the CPath context is lacking. This study aims to benchmark the effectiveness of 30 DG algorithms on 3 CPath tasks of varying difficulty through 7,560 cross-validation runs. We evaluate these algorithms using a unified and robust platform, incorporating modality-specific techniques and recent advances like pretrained foundation models. Our extensive cross-validation experiments provide insights into the relative performance of various DG strategies. We observe that self-supervised learning and stain augmentation consistently outperform other methods, highlighting the potential of pretrained models and data augmentation. Furthermore, we introduce a new pan-cancer tumor detection dataset (HISTOPANTUM) as a benchmark for future research. This study offers valuable guidance to researchers in selecting appropriate DG approaches for CPath tasks.<|reference_end|>
|
arxiv
|
@article{zamanitajeddin2024benchmarking,
title={Benchmarking Domain Generalization Algorithms in Computational Pathology},
author={Neda Zamanitajeddin, Mostafa Jahanifar, Kesi Xu, Fouzia Siraj, Nasir
Rajpoot},
journal={arXiv preprint arXiv:2409.17063},
year={2024},
archivePrefix={arXiv},
eprint={2409.17063},
primaryClass={cs.CV cs.AI cs.LG}
}
|
zamanitajeddin2024benchmarking
|
arxiv-661892
|
2409.17066
|
VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models
|
<|reference_start|>VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models: Scaling model size significantly challenges the deployment and inference of Large Language Models (LLMs). Due to the redundancy in LLM weights, recent research has focused on pushing weight-only quantization to extremely low-bit (even down to 2 bits). It reduces memory requirements, optimizes storage costs, and decreases memory bandwidth needs during inference. However, due to numerical representation limitations, traditional scalar-based weight quantization struggles to achieve such extreme low-bit. Recent research on Vector Quantization (VQ) for LLMs has demonstrated the potential for extremely low-bit model quantization by compressing vectors into indices using lookup tables. In this paper, we introduce Vector Post-Training Quantization (VPTQ) for extremely low-bit quantization of LLMs. We use Second-Order Optimization to formulate the LLM VQ problem and guide our quantization algorithm design by solving the optimization. We further refine the weights using Channel-Independent Second-Order Optimization for a granular VQ. In addition, by decomposing the optimization problem, we propose a brief and effective codebook initialization algorithm. We also extend VPTQ to support residual and outlier quantization, which enhances model accuracy and further compresses the model. Our experimental results show that VPTQ reduces model quantization perplexity by $0.01$-$0.34$ on LLaMA-2, $0.38$-$0.68$ on Mistral-7B, $4.41$-$7.34$ on LLaMA-3 over SOTA at 2-bit, with an average accuracy improvement of $0.79$-$1.5\%$ on LLaMA-2, $1\%$ on Mistral-7B, $11$-$22\%$ on LLaMA-3 on QA tasks on average. We only utilize $10.4$-$18.6\%$ of the quantization algorithm execution time, resulting in a $1.6$-$1.8\times$ increase in inference throughput compared to SOTA.<|reference_end|>
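A minimal sketch of the underlying vector-quantization representation (indices into a learned lookup table) follows, using plain k-means in place of VPTQ's second-order-guided optimization, channel-independent refinement, and residual/outlier handling; all sizes and names are illustrative.

```python
import numpy as np

def vector_quantize(W, vec_len=8, n_centroids=256, iters=10, rng=None):
    """Compress weight matrix W by grouping entries into length-`vec_len`
    vectors and mapping each to the nearest entry of a learned codebook.
    Plain k-means stands in for VPTQ's second-order-guided optimization."""
    rng = rng or np.random.default_rng(0)
    vecs = W.reshape(-1, vec_len)
    codebook = vecs[rng.choice(len(vecs), n_centroids, replace=False)].copy()
    for _ in range(iters):                                   # k-means refinement
        d = ((vecs[:, None, :] - codebook[None]) ** 2).sum(-1)
        idx = d.argmin(1)
        for c in range(n_centroids):
            members = vecs[idx == c]
            if len(members):
                codebook[c] = members.mean(0)
    return idx.astype(np.uint8), codebook                    # indices + lookup table

def dequantize(idx, codebook, shape):
    return codebook[idx].reshape(shape)

W = np.random.default_rng(0).normal(size=(256, 64)).astype(np.float32)
idx, cb = vector_quantize(W)
print(np.abs(W - dequantize(idx, cb, W.shape)).mean())       # mean reconstruction error
```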
|
arxiv
|
@article{liu2024vptq:,
title={VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large
Language Models},
author={Yifei Liu, Jicheng Wen, Yang Wang, Shengyu Ye, Li Lyna Zhang, Ting
Cao, Cheng Li, Mao Yang},
journal={arXiv preprint arXiv:2409.17066},
year={2024},
archivePrefix={arXiv},
eprint={2409.17066},
primaryClass={cs.AI}
}
|
liu2024vptq:
|
arxiv-661893
|
2409.17069
|
The Effect of Perceptual Metrics on Music Representation Learning for Genre Classification
|
<|reference_start|>The Effect of Perceptual Metrics on Music Representation Learning for Genre Classification: The subjective quality of natural signals can be approximated with objective perceptual metrics. Designed to approximate the perceptual behaviour of human observers, perceptual metrics often reflect structures found in natural signals and neurological pathways. Models trained with perceptual metrics as loss functions can capture perceptually meaningful features from the structures held within these metrics. We demonstrate that using features extracted from autoencoders trained with perceptual losses can improve performance on music understanding tasks, i.e. genre classification, over using these metrics directly as distances when learning a classifier. This result suggests improved generalisation to novel signals when using perceptual metrics as loss functions for representation learning.<|reference_end|>
|
arxiv
|
@article{namgyal2024the,
title={The Effect of Perceptual Metrics on Music Representation Learning for
Genre Classification},
author={Tashi Namgyal, Alexander Hepburn, Raul Santos-Rodriguez, Valero
Laparra, Jesus Malo},
journal={arXiv preprint arXiv:2409.17069},
year={2024},
archivePrefix={arXiv},
eprint={2409.17069},
primaryClass={cs.SD cs.AI cs.CV cs.LG eess.AS}
}
|
namgyal2024the
|
arxiv-661894
|
2409.17070
|
Syndeo: Portable Ray Clusters with Secure Containerization
|
<|reference_start|>Syndeo: Portable Ray Clusters with Secure Containerization: We present Syndeo: a software framework for container orchestration of Ray on Slurm. In general, the idea behind Syndeo is to write code once and deploy anywhere. Specifically, Syndeo is designed to address the issues of portability, scalability, and security for parallel computing. The design is portable because the containerized Ray code can be re-deployed on Amazon Web Services, Microsoft Azure, Google Cloud, or Alibaba Cloud. The process is scalable because we optimize for multi-node, high-throughput computing. The process is secure because users are forced to operate with unprivileged profiles, meaning administrators control the access permissions. We demonstrate Syndeo's portable, scalable, and secure design by deploying containerized parallel workflows on Slurm, which Ray does not officially support.<|reference_end|>
|
arxiv
|
@article{li2024syndeo:,
title={Syndeo: Portable Ray Clusters with Secure Containerization},
author={William Li, Rodney S. Lafuente Mercado, Jaime D. Pena, Ross E. Allen},
journal={arXiv preprint arXiv:2409.17070},
year={2024},
archivePrefix={arXiv},
eprint={2409.17070},
primaryClass={cs.DC}
}
|
li2024syndeo:
|
arxiv-661895
|
2409.17073
|
Enhancing Post-Hoc Attributions in Long Document Comprehension via Coarse Grained Answer Decomposition
|
<|reference_start|>Enhancing Post-Hoc Attributions in Long Document Comprehension via Coarse Grained Answer Decomposition: Accurately attributing answer text to its source document is crucial for developing a reliable question-answering system. However, attribution for long documents remains largely unexplored. Post-hoc attribution systems are designed to map answer text back to the source document, yet the granularity of this mapping has not been addressed. Furthermore, a critical question arises: What exactly should be attributed? This involves identifying the specific information units within an answer that require grounding. In this paper, we propose and investigate a novel approach to the factual decomposition of generated answers for attribution, employing template-based in-context learning. To accomplish this, we utilize the question and integrate negative sampling during few-shot in-context learning for decomposition. This approach enhances the semantic understanding of both abstractive and extractive answers. We examine the impact of answer decomposition by providing a thorough examination of various attribution approaches, ranging from retrieval-based techniques to LLM-based attributors.<|reference_end|>
|
arxiv
|
@article{ramu2024enhancing,
title={Enhancing Post-Hoc Attributions in Long Document Comprehension via
Coarse Grained Answer Decomposition},
author={Pritika Ramu, Koustava Goswami, Apoorv Saxena, Balaji Vasan Srinivasan},
journal={arXiv preprint arXiv:2409.17073},
year={2024},
archivePrefix={arXiv},
eprint={2409.17073},
primaryClass={cs.CL}
}
|
ramu2024enhancing
|
arxiv-661896
|
2409.17076
|
Positive spoof Lehmer factorizations
|
<|reference_start|>Positive spoof Lehmer factorizations: We investigate the integer solutions of Diophantine equations related to Lehmer's totient conjecture. We give an algorithm that computes all nontrivial positive spoof Lehmer factorizations with a fixed number of bases $r$, and enumerate all nontrivial positive spoof Lehmer factorizations with 6 or fewer factors.<|reference_end|>
|
arxiv
|
@article{molnar2024positive,
title={Positive spoof Lehmer factorizations},
author={Grant Molnar and Guntas Singh},
journal={arXiv preprint arXiv:2409.17076},
year={2024},
archivePrefix={arXiv},
eprint={2409.17076},
primaryClass={math.NT cs.DM}
}
|
molnar2024positive
|
arxiv-661897
|
2409.17077
|
Efficient Feature Interactions with Transformers: Improving User Spending Propensity Predictions in Gaming
|
<|reference_start|>Efficient Feature Interactions with Transformers: Improving User Spending Propensity Predictions in Gaming: Dream11 is a fantasy sports platform that allows users to create their own virtual teams for real-life sports events. We host multiple sports and matches for our 200M+ user base. In this RMG (real money gaming) setting, users pay an entry amount to participate in various contest products that we provide to users. In our current work, we discuss the problem of predicting the user's propensity to spend in a gaming round, so it can be utilized for various downstream applications, e.g., upselling users by incentivizing them marginally as per their spending propensity, or personalizing the product listing based on the user's propensity to spend. We aim to model the spending propensity of each user based on past transaction data. In this paper, we benchmark tree-based and deep-learning models that show good results on structured data, and we propose a new architecture change that is specifically designed to capture the rich interactions among the input features. We show that our proposed architecture outperforms the existing models on the task of predicting the user's propensity to spend in a gaming round. Our new transformer model surpasses the state-of-the-art FT-Transformer, improving MAE by 2.5\% and MSE by 21.8\%.<|reference_end|>
|
arxiv
|
@article{prakash2024efficient,
title={Efficient Feature Interactions with Transformers: Improving User
Spending Propensity Predictions in Gaming},
author={Ved Prakash, Kartavya Kothari},
journal={arXiv preprint arXiv:2409.17077},
year={2024},
archivePrefix={arXiv},
eprint={2409.17077},
primaryClass={cs.LG}
}
|
prakash2024efficient
|
arxiv-661898
|
2409.17079
|
Collision-free time-optimal path parameterization for multi-robot teams
|
<|reference_start|>Collision-free time-optimal path parameterization for multi-robot teams: Coordinating the motion of multiple robots in cluttered environments remains a computationally challenging task. We study the problem of minimizing the execution time of a set of geometric paths by a team of robots with state-dependent actuation constraints. We propose a Time-Optimal Path Parameterization (TOPP) algorithm for multiple car-like agents, where the modulation of the timing of every robot along its assigned path is employed to ensure collision avoidance and dynamic feasibility. This is achieved through the use of a priority queue to determine the order of trajectory execution for each robot while taking into account all possible collisions with higher priority robots in a spatiotemporal graph. We show a 10-20% reduction in makespan against existing state-of-the-art methods and validate our approach through simulations and hardware experiments.<|reference_end|>
|
arxiv
|
@article{mao2024collision-free,
title={Collision-free time-optimal path parameterization for multi-robot teams},
author={Katherine Mao, Igor Spasojevic, Malakhi Hopkins, M. Ani Hsieh, and
Vijay Kumar},
journal={arXiv preprint arXiv:2409.17079},
year={2024},
archivePrefix={arXiv},
eprint={2409.17079},
primaryClass={cs.RO}
}
|
mao2024collision-free
|
arxiv-661899
|
2409.17080
|
Can Vision Language Models Learn from Visual Demonstrations of Ambiguous Spatial Reasoning?
|
<|reference_start|>Can Vision Language Models Learn from Visual Demonstrations of Ambiguous Spatial Reasoning?: Large vision-language models (VLMs) have become state-of-the-art for many computer vision tasks, with in-context learning (ICL) as a popular adaptation strategy for new ones. But can VLMs learn novel concepts purely from visual demonstrations, or are they limited to adapting to the output format of ICL examples? We propose a new benchmark we call Spatial Visual Ambiguity Tasks (SVAT) that challenges state-of-the-art VLMs to learn new visuospatial tasks in-context. We find that VLMs fail to do this zero-shot, and sometimes continue to fail after finetuning. However, adding simpler data to the training by curriculum learning leads to improved ICL performance.<|reference_end|>
|
arxiv
|
@article{zhao2024can,
title={Can Vision Language Models Learn from Visual Demonstrations of Ambiguous
Spatial Reasoning?},
author={Bowen Zhao, Leo Parker Dirac, Paulina Varshavskaya},
journal={arXiv preprint arXiv:2409.17080},
year={2024},
archivePrefix={arXiv},
eprint={2409.17080},
primaryClass={cs.CV cs.CL}
}
|
zhao2024can
|
arxiv-661900
|
2409.17082
|
Energy efficiency analysis as a function of the working voltages in supercapacitors
|
<|reference_start|>Energy efficiency analysis as a function of the working voltages in supercapacitors: Supercapacitors are increasingly used as energy storage elements. Unlike batteries, their state of charge has a considerable influence on their voltage in normal operation, allowing them to work from zero to their maximum voltage. In this work, a theoretical and practical analysis is proposed of the energy efficiency of these devices according to their working voltages. To this end, several supercapacitors were subjected to charge and discharge cycles until the measurements of current and voltage stabilized. At this point their energy efficiency was calculated. These charge-discharge cycles were carried out: i) without rest between charging and discharging; and ii) with a rest of several minutes between the two stages. Using the information obtained from the tests, the energy efficiency is shown plotted against the minimum and maximum working voltages. By consulting the data and the graphs, the ideal working voltages to optimize the energy efficiency of these devices can be obtained.<|reference_end|>
|
arxiv
|
@article{quintana2024energy,
title={Energy efficiency analysis as a function of the working voltages in
supercapacitors},
author={Jose Quintana, Alejandro Ramos, Moises Diaz, Ignacio Nuez},
journal={Energy, 2021, vol. 230, p. 120689},
year={2024},
doi={10.1016/j.energy.2021.120689},
archivePrefix={arXiv},
eprint={2409.17082},
primaryClass={eess.SY cs.SY}
}
|
quintana2024energy
|